https://math.stackexchange.com/questions/2163875/technique-to-calculate-distribution-of-product-of-two-random-variables
# Technique to calculate distribution of product of two random variables Given two continuous random variables $X$ and $Y$, suppose we know their probability distributions $p_X, p_Y$ and $\operatorname{cov}(X,Y)$. (Note that we do not assume independence.) Can we then calculate $p_{Z}(z)$, where $Z=XY$? If not, what more do we need? Can we calculate $p_Z$ if we have the joint distribution $p_{X,Y}(x,y)$ of $X$ and $Y$? • Are the random variables discrete or continuous? – callculus Feb 27 '17 at 15:54 • Oh, they are continuous. – julypraise Feb 27 '17 at 15:57 • Covariance: not enough. Joint distribution: suffices; the standard approach works (for example, using a change of variables to compute the joint distribution of $(Z,Y)$, then computing the first marginal). – Did Feb 27 '17 at 16:02 • @Did Thanks. Though I'm quite new to the stuff and don't have the right reference. Where might I look for an explicit example? – julypraise Feb 27 '17 at 17:38 • In your textbook, perhaps? – Did Feb 27 '17 at 17:44 You can calculate $E[XY]$ from just $\operatorname{cov}(X,Y)$ and the individual $E[X]$ and $E[Y]$: $$E[XY] = \operatorname{cov}(X,Y)+E[X]E[Y]$$ Since you can easily get $E[X]$ from $p_X$ (and similarly for $Y$), the information you propose is enough to determine $E[XY]$. But it is insufficient, in general, to determine $p_Z(z)$. Rather surprisingly, if you restrict the form of $p_{X,Y}(x,y)$ to a second-degree expression on the unit square, and zero outside, then the marginal distributions and the covariance together determine a unique joint probability function of that form. But if you relax that restriction, you can find cases that agree in marginal distributions and in covariances but are not identical joint distributions.
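The change-of-variables recipe mentioned in the comments can be sketched numerically. Assuming the joint density $p_{X,Y}$ is known, the density of $Z = XY$ is $p_Z(z) = \int p_{X,Y}(x, z/x)\,|x|^{-1}\,dx$; the uniform example below is only a toy check against the known closed form $p_Z(z) = -\ln z$ (names and the grid are illustrative choices, not from the question).

```python
import numpy as np

def product_density(joint_pdf, z, x_grid):
    """p_Z(z) = integral of f_{X,Y}(x, z/x) / |x| dx, by trapezoidal quadrature."""
    vals = joint_pdf(x_grid, z / x_grid) / np.abs(x_grid)
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(x_grid)) / 2.0)

# Toy case: X, Y independent Uniform(0,1), so f_{X,Y} = 1 on the unit square
# and the exact answer is p_Z(z) = -ln(z) for 0 < z < 1.
def joint_uniform(x, y):
    return ((0 <= x) & (x <= 1) & (0 <= y) & (y <= 1)).astype(float)

x_grid = np.linspace(1e-6, 1.0, 200001)  # avoid x = 0 in the 1/|x| factor
print(product_density(joint_uniform, 0.5, x_grid))  # ≈ ln 2 ≈ 0.6931
```

For a dependent pair, only `joint_uniform` changes; the quadrature is the same, which is exactly why the joint distribution suffices while the covariance alone does not.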
2019-05-20 03:24:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8520792722702026, "perplexity": 386.39992522532714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255536.6/warc/CC-MAIN-20190520021654-20190520043654-00091.warc.gz"}
https://socratic.org/questions/how-do-you-balance-lial-oh-4-h-2o-lioh-al-oh-3-h-2o
# How do you balance LiAl(OH)_4 + H_2O -> LiOH + Al(OH)_3 + H_2O? Jan 27, 2016 It's already balanced. Here, let's see what happens when we check what's on each side. "LiAl"("OH")_4 + cancel("H"_2"O") -> "LiOH" + "Al"("OH")_3 + cancel("H"_2"O") color(green)("Li")color(highlight)("Al")color(blue)(("OH")_4) -> color(green)("Li")color(blue)("OH") + color(highlight)("Al")color(blue)(("OH")_3) Yep, it's fine the way it is. One lithium on each side, one aluminum on each side, and four $\text{OH}$ groups on each side. As for the charges, "Al"("OH")_4^(-) is balanced by ${\text{Li}}^{+}$, ${\text{OH}}^{-}$ is balanced by ${\text{Li}}^{+}$, and $3 \times {\text{OH}}^{-}$ is balanced by ${\text{Al}}^{3 +}$. Jan 27, 2016 Do you mean the reaction of lithium tetrahydroaluminate ($L i A l {H}_{4}$) with water? $L i A l {H}_{4} \left(s\right) + 4 {H}_{2} O \left(l\right) \rightarrow L i A l {\left(O H\right)}_{4} \left(a q\right) + 4 {H}_{2} \left(g\right) \uparrow$ Lithium tetrahydroaluminate is an important hydride-transfer reagent and a very common reductant in organic chemistry. The reduction is performed in THF (${C}_{4} {H}_{8} O$) or ether (both solvents must be dry). $L i A l {H}_{4}$ can transfer up to 4 hydrides (with commercial grades it usually transfers about 3). When your organic reduction is finished, the mixture is worked up with water. This can get pretty violent (especially if the lithal has been added 1:1, as is quite commonly done).
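The atom-by-atom check in the first answer can be automated: tally each element on both sides and compare. This is a generic sketch (species dictionaries written out by hand from the equation in the question):

```python
from collections import Counter

def count_atoms(species_list):
    """Sum element counts over (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, atoms in species_list:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

# LiAl(OH)4 + H2O -> LiOH + Al(OH)3 + H2O, all coefficients 1
lhs = [(1, {"Li": 1, "Al": 1, "O": 4, "H": 4}), (1, {"H": 2, "O": 1})]
rhs = [(1, {"Li": 1, "O": 1, "H": 1}),
       (1, {"Al": 1, "O": 3, "H": 3}),
       (1, {"H": 2, "O": 1})]
print(count_atoms(lhs) == count_atoms(rhs))  # True: already balanced
```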
2019-09-20 22:50:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.575196385383606, "perplexity": 4808.180690385429}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574084.88/warc/CC-MAIN-20190920221241-20190921003241-00499.warc.gz"}
https://chemistry.stackexchange.com/tags/diffusion/new
Let's discuss the problem in a qualitative way. In such a cell, the metal will get dissolved in the right-hand side of the cell, where the concentration is low, so that $[M^{z+}]$ increases in this compartment, which becomes the anode. In the left-hand side, the $M^{z+}$ ions are discharged and deposited as a metal layer on the electrode, which is the ... Osmotic pressure for non-electrolytic solutes is given by $$\pi = CRT$$ where $C$ is the effective concentration of all the solutes. In our case, with multiple solutes, we simply add all their concentrations to obtain the effective concentration. This gives us \begin{align} \pi_\mathrm{cell} &= 0.05RT\\ \pi_\mathrm{environment} &= 0.03RT \end{align}
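The two osmotic pressures from the excerpt can be evaluated numerically. The temperature is not stated in the fragment, so 298 K is an assumption here:

```python
# Osmotic pressure pi = C * R * T for the two compartments in the excerpt.
R = 0.08206   # L·atm·mol⁻¹·K⁻¹
T = 298.0     # K, assumed (not given in the fragment)

pi_cell = 0.05 * R * T         # atm, effective concentration 0.05 M
pi_environment = 0.03 * R * T  # atm, effective concentration 0.03 M
print(pi_cell - pi_environment)  # net osmotic pressure difference, atm
```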
2020-01-27 10:34:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972739219665527, "perplexity": 934.9900662441804}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00411.warc.gz"}
https://superuser.com/questions/710117/how-do-i-associate-a-file-type-with-a-vbscript
# How do I associate a file type with a VBScript This question is a natural follow-up to How to run a batch file without launching a "command window"? One can associate, for example, .txt files with Wordpad by opening the Properties dialog of a txt file and pressing the Change button next to "Opens with..." and choosing Wordpad. If I do the same with a VBScript file (rather than Wordpad), that is, if I associate .txt files with the VBScript and try to open the associated file (the txt file), Windows shows a pop-up saying This app can't run on your PC ... The full quote and the screenshot are exactly the same as in this thread I suspect this error message may be due to some kind of security feature of Windows 8 to prevent users from being tricked into running bad scripts, but the VBS script was created by me on the same machine, and I wonder if there is a way to say "I wrote this. You can trust this script." to Windows. The script runs fine if I run it directly (by double-clicking or tapping on it), or if I drag and drop a txt file onto the script. • It's entirely possible to run VBScript files on Windows 8 out of the box. Your file extension associations seem to be incorrect. sevenforums.com/tutorials/… I am unable to recreate this problem, so it's your file association that is the problem. Feb 1 '14 at 20:02 • I agree with Ramhound, unless you are trying to do something special like open a VBScript file with a custom launcher or something. If you simply want to execute just an ordinary .vbs file with the built-in/default WScript interpreter, then your file associations for vbs are hosed and you need to fix them. Feb 1 '14 at 20:04 • @Ramhound Downloading and applying the VBS reg doesn't seem to fix it. If I associate .foo files with my VBS script, and then try to open a .foo file, Windows 8 still says the same error message. Feb 1 '14 at 20:21 • I was able to use "open with" on a .vbs file with wordpad and notepad. I was able to double click the file and it ran. 
It's not clear what you mean by opening a .foo file WITH your VBS file. .vbs IS a file type. Why are you trying to associate .foo to be a VBS file? Feb 1 '14 at 20:25 • What I want to achieve in the end is to associate .txt and .foo files with GNU Emacs in some special way. This requires associating the files with emacsclientw.exe in a way that some command line options are always passed to emacsclientw.exe. One way to achieve this is to create a batch file containing a line like path\to\emacsclientw.exe --some-options %* and then associate foo files with the batch script. But now a little problem is that the batch file will launch a cmd window (which disappears within a second, so not a big problem). .. Feb 1 '14 at 22:00 I also have a problem that requires a file type to be opened through Explorer with a script (VBScript), and I have found a solution: you just need to edit the registry. 1. Go to the following registry key for your file type: HKCR\YourfileType\Shell\Open\Command 2. Edit the (Default) key and enter a string like this: C:\Windows\System32\cscript.exe "C:\PathToyourScript\Script.vbs" "%1" The %1 passes the filename to the script as a parameter. This is working well for my needs, but you may need to test it. Best of luck! Instead of using your VBScript file, try using NirSoft FileTypesMan to modify the command line arguments the .txt and .foo files use when launching emacs.exe This question is similar to Adding default command line options when opening a particular filetype • This does solve my X problem. On the other hand, I would not mark this as a duplicate, since a solution to the Y problem would be useful to others: someone might want to pass command line arguments that change at run time, or want to use a VBScript that involves conditional logic like "if something is true, open the text file with WordPad, otherwise open it with Notepad, and snap the window to the left if blah blah.". 
Solution to Y would also make it easy to create multiple "open with emacs with option 1", "open with emacs with option 2", ... Feb 2 '14 at 11:10
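The registry edit described in the answer above can be captured in an importable .reg file; a minimal sketch, keeping the answer's placeholder names (`YourfileType` and the script path are placeholders, not real values):

```reg
Windows Registry Editor Version 5.00

; Open files of this type with cscript.exe running the given VBScript;
; "%1" passes the clicked file's path to the script as its argument.
[HKEY_CLASSES_ROOT\YourfileType\Shell\Open\Command]
@="C:\\Windows\\System32\\cscript.exe \"C:\\PathToyourScript\\Script.vbs\" \"%1\""
```

Note that in .reg syntax, backslashes and embedded quotes in string values must be escaped, as shown.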
2021-10-27 17:05:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4956686496734619, "perplexity": 1881.342039379944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588216.48/warc/CC-MAIN-20211027150823-20211027180823-00650.warc.gz"}
https://wisc.pb.unizin.org/minimisgenchem/chapter/gas-mixtures-and-partial-pressure-m5q4/
# 23 Gas Mixtures and Partial Pressure (M5Q4) ## Introduction Gases are rarely found in isolation in nature, but rather in mixtures. This section will explore the independent nature of gases in mixtures through Dalton's Law of partial pressures. This section includes worked examples, sample problems, and a glossary. Learning Objectives for Gas Mixtures and Partial Pressures ## The Pressure of a Mixture of Gases: Dalton's Law When two or more non-reactive gases are added to a container, they will ultimately occupy all of the available container volume as a uniform mixture. The process by which molecules disperse in space in response to differences in concentration is called diffusion, and will be described in more detail in M5Q5. Unless they chemically react with each other, the individual gases in a mixture of gases do not affect each other's pressure. Each individual gas in a mixture exerts the same pressure that it would exert if it were present alone in the container (Figure 1). The pressure exerted by each individual gas in a mixture is called its partial pressure. This observation is summarized by Dalton's Law of partial pressures: The total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the component gases: PTotal  =  PA + PB + PC + …  =  ∑i Pi In the equation, PTotal is the total pressure of a mixture of gases, PA is the partial pressure of gas A; PB is the partial pressure of gas B; PC is the partial pressure of gas C; and so on. The partial pressure of gas A is related to the total pressure of the gas mixture via its mole fraction (X), a unit of concentration defined as the number of moles of a component of a mixture divided by the total number of moles of all components: PA  =  XA × PTotal  where  XA  =  $\frac{n_{\text{A}}}{n_{\text{Total}}}$ where PA, XA, and nA are the partial pressure, mole fraction, and number of moles of gas A, respectively, and nTotal is the number of moles of all components in the mixture. 
### Example 1 The Pressure of a Mixture of Gases A 10.0 L vessel contains 2.50 × 10−3 mol of H2, 1.00 × 10−3 mol of He, and 3.00 × 10−4 mol of Ne at 35 °C. (a) What are the partial pressures of each of the gases? (b) What is the total pressure in atmospheres? Solution The gases behave independently, so the partial pressure of each gas can be determined from the ideal gas equation, using P  =  $\frac{nRT}{V}$: $P_{\text{H}_2}$  =  $\frac{(2.50 \times 10^{-3} \;\rule[0.5ex]{1.2em}{0.1ex}\hspace{-1.2em}\text{mol})(0.08206 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{L} \;\text{atm} \;\rule[0.5ex]{3.5em}{0.1ex}\hspace{-3.5em}\text{mol}^{-1} \text{K}^{-1})(308 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{K})}{10.0 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{L}}$  =  6.32 × 10-3 atm PHe  =  $\frac{(1.00 \times 10^{-3} \;\rule[0.5ex]{1.2em}{0.1ex}\hspace{-1.2em}\text{mol})(0.08206 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{L} \;\text{atm} \;\rule[0.5ex]{3.5em}{0.1ex}\hspace{-3.5em}\text{mol}^{-1} \text{K}^{-1})(308 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{K})}{10.0 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{L}}$  =  2.53 × 10-3 atm PNe  =  $\frac{(3.00 \times 10^{-4} \;\rule[0.5ex]{1.2em}{0.1ex}\hspace{-1.2em}\text{mol})(0.08206 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{L} \;\text{atm} \;\rule[0.5ex]{3.5em}{0.1ex}\hspace{-3.5em}\text{mol}^{-1} \text{K}^{-1})(308 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{K})}{10.0 \;\rule[0.5ex]{0.5em}{0.1ex}\hspace{-0.5em}\text{L}}$  =  7.58 × 10-4 atm The total pressure is given by the sum of the partial pressures: PT  =  $P_{\text{H}_2}$ + PHe + PNe  = (0.00632 + 0.00253 + 0.00076) atm  =  9.61 × 10-3 atm A 5.73-L flask at 25 °C contains 0.0388 mol of N2, 0.147 mol of CO, and 0.0803 mol of H2. What is the total pressure in the flask in atmospheres? 1.14 atm Here is another example of this concept, but dealing with mole fraction calculations. 
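The arithmetic in Example 1 can be reproduced with a short script applying P = nRT/V gas by gas and summing:

```python
R = 0.08206  # L·atm·mol⁻¹·K⁻¹
T = 308.0    # K (35 °C, as in the worked example)
V = 10.0     # L

moles = {"H2": 2.50e-3, "He": 1.00e-3, "Ne": 3.00e-4}
partials = {gas: n * R * T / V for gas, n in moles.items()}  # P = nRT/V, atm
total = sum(partials.values())  # Dalton's law: sum of partial pressures
print(partials["H2"], total)    # ≈ 6.32e-3 atm and ≈ 9.60e-3 atm
```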
### Example 2 The Pressure of a Mixture of Gases A gas mixture used for anesthesia contains 2.83 mol oxygen, O2, and 8.41 mol nitrous oxide, N2O. The total pressure of the mixture is 192 kPa. (a) What are the mole fractions of O2 and N2O? (b) What are the partial pressures of O2 and N2O in kPa? Solution The mole fraction is given by XA  =  $\frac{n_{\text{A}}}{n_{\text{Total}}}$ and the partial pressure is PA  =  XA  ×  PTotal For O2, $X_{\text{O}_{2}}$  =  $\frac{n_{\text{O}_{2}}}{n_{\text{Total}}}$  =  $\frac{2.83\;\text{mol}}{(2.83\; +\; 8.41)\;\text{mol}}$  =  0.252 and $P_{\text{O}_{2}}$  =  $X_{\text{O}_{2}}$  ×  PTotal  =  0.252 × 192 kPa  =  48.4 kPa For N2O, $X_{\text{N}_2\text{O}}$  =  $\frac{n_{\text{N}_2\text{O}}}{n_{\text{Total}}}$  =  $\frac{8.41\;\text{mol}}{(2.83\; +\; 8.41)\;\text{mol}}$  =  0.748 and $P_{\text{N}_2\text{O}}$  =  $X_{\text{N}_2\text{O}}$ × PTotal  =  0.748 × 192 kPa  =  143.6 kPa What is the pressure of a mixture of 0.200 g of H2, 1.00 g of N2, and 0.820 g of Ar in a container with a volume of 2.00 L at 20 °C? 1.87 atm ### Example 3 The pressure of Gases in a Multi-Chamber System Imagine gases are stored in separate chambers with closed valves. As shown in Figure 2, the two chambers are connected by a tube and two valves. The oxygen gas is contained in the right-hand chamber with an initial volume (Vinitial) of 2.50 L. When both valves are opened, the oxygen gas can occupy both chambers with a total final volume (Vfinal) of 3.75 L.  The initial pressure is given (Pinitial) as 3.0 atm.  When the valves are opened, then the gases mix and come to a new equilibrium pressure.  What is the final pressure? Figure 3. One chamber is filled with oxygen gas, and it’s connected to a second empty chamber by valves. The oxygen gas is contained in the right-hand chamber with an initial volume of 2.50 L. When the valves are opened, the oxygen gas can occupy both chambers with a total final volume of 3.75 L. 
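Example 2's mole-fraction route can be sketched as a small helper plus the relation PA = XA × PTotal:

```python
def mole_fractions(moles):
    """X_A = n_A / n_total for each component of the mixture."""
    total = sum(moles.values())
    return {gas: n / total for gas, n in moles.items()}

P_total = 192.0  # kPa, from Example 2
X = mole_fractions({"O2": 2.83, "N2O": 8.41})
partials = {gas: x * P_total for gas, x in X.items()}  # P_A = X_A * P_total
print(X["O2"], partials["O2"])  # ≈ 0.252 and ≈ 48.3 kPa
```

The worked example rounds the mole fraction to 0.252 before multiplying, so its 48.4 kPa differs from the unrounded 48.3 kPa in the last digit.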
Thinking back to Boyle’s Law (PV  =  k), one would predict that as the volume increases by opening the valves, the pressure should decrease.  We can calculate the pressure, too.  As the system is closed to the outside, the number of moles of gas (n) remains unchanged.  Even if other gases are present in the new chamber, ideal gases do not interact. ninitial  =  nfinal We assume the temperature (T) does not change. Tinitial  =  Tfinal The gas constant, R (0.08206 L atm mol–1 K–1), does not change.  Thus, we can re-write the ideal gas law (PV=nRT) as: nRT  =  PinitialVinitial and nRT  =  PfinalVfinal Thus, PinitialVinitial  =  PfinalVfinal Solution: The initial volume is the volume of a single chamber, but the final volume is the volume of the whole system.  Plugging in the values from Figure 3: (3.0 atm)(2.50 L)  =  (x)(3.75 L) Solving for the final pressure, x  =  2.0 atm. This lower pressure makes sense since the volume increased. Now imagine that the left bulb contains nitrogen gas.  Determine the final pressure of oxygen, nitrogen, and the total gas pressure once the valves are opened. Figure 4. One chamber is filled with oxygen gas, and it’s connected to a second chamber filled with nitrogen gas. 2.0 atm; the presence of nitrogen does not affect the partial pressure of oxygen. The partial pressure of nitrogen gas is 0.33 atm, and the total pressure of the system is 2.3 atm. ## Key Concepts and Summary Dalton’s Law of partial pressure says that the total pressure of a mixture of gases is equal to the sum of the partial pressures of the individual gases. This is because, unless they are chemically reacting with one another, gases do not affect one another. Determining the mole fraction requires knowing the moles of a component gas and the total moles of gas in the mixture. Once the mole fraction is known, it can be used to determine the partial pressure of a component gas. Mixtures of gases are formed quickly via rapid diffusion of the gaseous molecules. 
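The two-chamber calculation above reduces to one application of Boyle's law per gas:

```python
def final_pressure(p_initial, v_initial, v_final):
    # Boyle's law at fixed n and T: P_i * V_i = P_f * V_f
    return p_initial * v_initial / v_final

p_O2 = final_pressure(3.0, 2.50, 3.75)  # oxygen expands into both chambers
print(p_O2)  # 2.0 atm, matching the worked solution
```

Because ideal gases do not interact, the same function applies independently to the nitrogen in the follow-up question, and the total pressure is just the sum of the two results.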
Diffusion of gases will lead to an equilibrium and equal concentrations of gas throughout the mixture. ## Key Equations • PTotal  =  PA + PB + PC + …  =  ∑iPi • PA  =  XAPTotal • XA  =  $\frac{n_{\text{A}}}{n_{\text{Total}}}$ • PinitialVinitial  =  PfinalVfinal ## Glossary Dalton’s law of partial pressures total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the component gases. diffusion movement of an atom or molecule from a region of relatively high concentration to one of relatively low concentration (discussed in this chapter with regard to gaseous species, but applicable to species in any phase) mole fraction (X) concentration unit defined as the ratio of the molar amount of a mixture component to the total number of moles of all mixture components partial pressure pressure exerted by an individual gas in a mixture ### Chemistry End of Section Exercises 1. A cylinder of a gas mixture used for calibration of blood gas analyzers in medical laboratories contains 5.0% CO2, 12.0% O2, and the remainder N2 at a total pressure of 146 atm. What is the partial pressure of each component of this gas? (The percentages given indicate the percent of the total pressure that is due to each component.) 2. A sample of gas isolated from unrefined petroleum contains 90.0% CH4, 8.9% C2H6, and 1.1% C3H8 at a total pressure of 307.2 kPa. What is the partial pressure of each component of this gas? (The percentages given indicate the percent of the total pressure that is due to each component.) 3. A commercial mercury vapor analyzer can detect, in air, concentrations of gaseous Hg atoms (which are poisonous) as low as 2 × 10−6 mg/L of air. At this concentration, what is the partial pressure of gaseous mercury if the atmospheric pressure is 733 torr at 26 °C? 4.  300 mmHg of NO(g) and 300 mmHg of O2(g) are placed in a 2.0 L glass flask. After complete reaction to form NO2(g), what are the partial pressures of the three gases in the flask? 
What is the total pressure of gas in the flask? What volume of the flask does the O2(g) occupy? 5. A sample of a compound of xenon and fluorine was confined in a bulb with a pressure of 18 torr. Hydrogen was added to the bulb until the pressure was 72 torr. Passage of an electric spark through the mixture produced Xe and HF. After the HF was removed by reaction with solid KOH, the final pressure of xenon and unreacted hydrogen in the bulb was 36 torr. What is the empirical formula of the xenon fluoride in the original sample? (Note: Xenon fluorides contain only one xenon atom per molecule.) ### Answers to Chemistry End of Section Exercises 1. CO2 = 7.3 atm; O2 = 17.5 atm; N2 = 121.2 atm 2. CH4: 276 kPa; C2H6: 27 kPa; C3H8: 3.4 kPa 3. 1.86 × 10-7 torr 4. PNO  =  0 mmHg (limiting reagent), PO2  =  150 mmHg, PNO2  =  300 mmHg; PTotal  =  450 mmHg; 2.0 L 5. XeF4
2022-06-26 17:31:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6908697485923767, "perplexity": 2060.5275419799805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271763.15/warc/CC-MAIN-20220626161834-20220626191834-00234.warc.gz"}
https://math.stackexchange.com/questions/1764566/if-ax-o-has-only-one-solutions-the-columns-of-a-span-r/1764577
# If $Ax = O$ has only one solution, do the columns of $A$ span $\mathbb{R}^n$? I've been doing some exercises about inner products and I found something interesting, but I don't know if my approach is correct at all. Suppose that $\{v_{1}, v_{2}, ..., v_{n}\}$ is a basis for a vector space $V$ over $\mathbb{R}$, with a real inner product $<. , .>$. Then for any set $\{r_{1}, r_{2}, ..., r_{n}\} \subset \mathbb{R}$ there exists a unique $w∈V$ such that: $<v_{i}, w> = r_{i}$ I started by proving the uniqueness: suppose that there exist $w,u∈V$ such that $<v_{i}, u> = <v_{i}, w> = r_{i}$ for every $i=1,2,...,n$ and given $\{r_{1}, r_{2}, ..., r_{n}\} \subset \mathbb{R}$. Since $\{v_{1}, v_{2}, ..., v_{n}\}$ is a basis for $V$, we can write: $u = a_{1}v_{1} + a_{2}v_{2} + ... + a_{n}v_{n}$ $w = b_{1}v_{1} + b_{2}v_{2} + ... + b_{n}v_{n}$ for unique $b_{1},a_{1},b_{2},a_{2},...,b_{n},a_{n}∈\mathbb{R}$. It is easy to see that $<v_{i}, u - w> = 0$ for $i =1,2,...,n$, and hence $<z, u - w> = 0$ for every $z∈V$; taking $z = u - w$ gives $u-w = 0$. Now, we have to prove the existence. We can write the hypothesis this way: let $A =( a_{i,j} )$ be an $n \times n$ matrix such that $a_{i,j} = <v_{i},v_{j}>$ and let \begin{align} r &= \begin{bmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{n} \end{bmatrix} \end{align} Consider the system $Ax = O$: it has the solution $x = O$, and by the uniqueness proof $x = O$ is the only solution, so the columns of $A$ are linearly independent; since $A$ has $n$ columns, these columns span $\mathbb{R}^n$. So we can write $r$ as a linear combination of the columns of $A$, that is, $Ax = r$ has a solution; then we write out the equations explicitly and use the bilinearity of $<.,.>$ to find that the components of $x$ are the coefficients of the $w$ we wanted to find. My question is: if we suppose that $Ax = b$ has a unique solution for any $b$, and $A$ is an $n \times n$ matrix, then $Ax = b$ has a solution for any $b∈\mathbb{R}^n$. Is that correct? That's correct. 
If $Ax=b$ has a unique solution for any $b$, then the columns of $A$ are linearly independent. If they were linearly dependent then, on the one hand, the columns of $A$ wouldn't span all of $\mathbb{R}^n$ and there would be a solution for every $b$; on the other hand, for those unlikely $b$ for which there is a solution, the solution would not be unique since the columns are linearly dependent and therefore there exists a nontrivial linear combination of those columns that equals zero. So the columns of $A$ are linearly independent and now we can state the converse: when the columns of $A$ are linearly independent, they span the entire $\mathbb{R}^n$ and each element of $\mathbb{R}^n$ is represented uniquely. • I think you mean "...and there wouldn't be a solution for every $b$". – TonyK Jun 24 '16 at 21:25
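The construction in the question can be checked numerically. A small NumPy sketch (the basis of $\mathbb{R}^3$ and the standard dot product are illustrative choices, not from the question) builds the Gram matrix $A$ with $a_{i,j} = \langle v_i, v_j \rangle$, solves $Ax = r$, and verifies that $w = \sum_j x_j v_j$ satisfies $\langle v_i, w \rangle = r_i$:

```python
import numpy as np

# Rows of V are the basis vectors v_1, v_2, v_3 of R^3 (any invertible choice works)
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

G = V @ V.T                    # Gram matrix A, with A_ij = <v_i, v_j>
r = np.array([1.0, 2.0, 3.0])  # target inner products r_i
x = np.linalg.solve(G, r)      # unique solution: columns of G are independent
w = V.T @ x                    # w = sum_j x_j v_j

print(V @ w)                   # each entry is <v_i, w>; recovers r
```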
2019-05-24 19:31:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9915367960929871, "perplexity": 64.90745518384558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257731.70/warc/CC-MAIN-20190524184553-20190524210553-00554.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=1500658
MathSciNet bibliographic data MR1500658 20G40 (20D15) Dickson, Leonard Eugene The subgroups of order a power of $2$ of the simple quinary orthogonal group in the Galois field of order $p^n = 8l \pm 3$. Trans. Amer. Math. Soc. 5 (1904), no. 1, 1–38. Article For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
2015-01-30 01:15:25
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951878786087036, "perplexity": 4725.465853769811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122192267.50/warc/CC-MAIN-20150124175632-00074-ip-10-180-212-252.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/256966/how-to-define-value-in-assumptions-but-not-let-simplify-replace-it
# How to 'define' value in Assumptions but not let Simplify replace it? [closed] I want to let Simplify know the exact value of a symbol, to resolve logical statements, but don't want it to actually replace it. For instance, I would like something like assumptions = x==4; Simplify[Sqrt[x^2], assumptions] to output x and not 4 (or Abs[x]). How can this, or something analogous, be done? Edit: It wasn't clear enough that I understand that in this case an assumption like x>0 would output what I want -- however, this is not what I'm looking for. This is what I have been doing so far, but it is messy and needs focused attention. That is, I need to be sure that the eps that I set to define a range as assumptions = (x > xValue - eps) && (x < xValue + eps) is small enough for every independent simplification to be equivalent to that of assumptions = x == xValue Even if I could generally choose an exaggeratedly small value such that this would be the case for all my problems, I'd still like to find a better alternative -- if there is one. • If instead of the exact value, you provide a range of values (for instance 3<x<5), the simplification will be done, but the values will not be substituted. Oct 15 at 20:00 • @yarchik That is what I have been doing in this situation, but have always wondered if there isn't a more elegant and general solution. Oct 15 at 20:02 • At the risk of pointing out the obvious, these three examples are from the doc page on Simplify. – Syed Oct 15 at 20:51 • @Syed I understand that! It's just that it would be preferable to define the exact value because in very complicated scenarios, with a lot of variables, there is no direct way of defining a safe range for the variable in such a way that the expression is simplified to the max. Oct 15 at 21:03 • Would x <=4 && x>=4 work for you? Oct 15 at 22:45 One possibility is to mimic the behavior of symbolic constants like Pi, E, etc: N[x, _] ^= 4; NumericQ[x] ^= True; Then: Sqrt[x^2] x without even using Simplify.
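For comparison, the range/sign-assumption approach discussed in the comments has a direct analogue in SymPy (a different CAS than the question's Mathematica, used here only because it is easy to run): declaring a property of the symbol, rather than its exact value, lets the simplification go through while the symbol stays symbolic.

```python
import sympy as sp

# Declaring the sign (not the value) lets sqrt(x**2) simplify to x,
# and x is never replaced by a number.
x = sp.Symbol('x', positive=True)
expr = sp.sqrt(x**2)
print(expr)  # x
```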
2021-11-30 14:28:48
http://educo.vln.school.nz/mod/forum/discuss.php?d=3595
## Discussion forum

### How do eDeans / Network Supervisors Control Internet Usage?

I would be grateful to hear about what kinds of controls are imposed upon Internet usage by eStudents in other schools. We have a system here where students are given five dollars' worth of Internet usage - per term, I think it is. If they want more they need to pay for it. Our eStudents are allowed more usage without charge, but they or I have to request that usage from the network supervisor.

Re: How do eDeans / Network Supervisors Control Internet Usage?

We do not charge any of our students an internet usage fee. However when I talked to the IT man his comment was - 'yet'. At this point it is not something we are considering.

Re: How do eDeans / Network Supervisors Control Internet Usage?

All of our Y7-10 students have $5 loaded onto their accounts for printing. Seniors have $10. They can purchase more themselves if they need. We feel this is sufficient for all printing needs - VC students included. We have seen a drop in careless printing since this was introduced as students are far more conscious of cost. Internet use is an important learning tool and we do not charge students for this per se.

Re: How do eDeans / Network Supervisors Control Internet Usage?

None of our students are charged for internet usage and they all have $5 loaded onto accounts for photocopying - they can pay for more if they want it. I tend to do big runs of photocopying for the VC students and fund that from the VC budget. But I have also set up a shared resource area in the library for VC students with photocopy sets, text books etc.

Cheers Adrian!
2019-12-11 08:31:22
https://www.groundai.com/project/driven-anisotropic-diffusion-at-boundaries-noise-rectification-and-particle-sorting/
# Driven anisotropic diffusion at boundaries: noise rectification and particle sorting

Stefano Bo and Ralf Eichhorn

Nordita, Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden

July 15, 2019

###### Abstract

We study the diffusive dynamics of a Brownian particle in proximity of a flat surface under non-equilibrium conditions, which are created by an anisotropic thermal environment with different temperatures being active along distinct spatial directions. By presenting the exact time-dependent solution of the Fokker-Planck equation for this problem, we demonstrate that the interplay between anisotropic diffusion and hard-core interaction with the plain wall rectifies the thermal fluctuations and induces directed particle transport parallel to the surface, without any deterministic forces being applied in that direction. Based on current micromanipulation technologies, we suggest a concrete experimental set-up to observe this novel noise-induced transport mechanism. We furthermore show that it is sensitive to particle characteristics, such that this set-up can be used for sorting particles of different size.

## Introduction.

The ability to manipulate, monitor and fabricate microscopic systems has witnessed a dramatic increase in recent years, opening the way towards accurate analysis of physical processes on scales where thermal fluctuations are lead actors. A key finding of these developments is that such fluctuations are not necessarily a detrimental nuisance, but rather may provide novel, noise-induced mechanisms for controlling motion and performing specific tasks at the microscale.
Well-known examples are ratchets rectifying fluctuations by means of (spatial or time-reversal) asymmetries buttiker87 (); landauer88 (); reimann02 (); reimann02a (); hanggi09 (), Brownian engines extracting work from fluctuations blickle12 (); martinez16_carnot (); martinez16 (), and microscopic Maxwell demons exploiting fluctuations to convert information into energy parrondo15 (); lutz15 (). In the present Letter we suggest a new strategy for rectifying thermal noise into directed movement of a microscopic particle, which requires only three simple ingredients: an anisotropic thermal environment, a plain, unstructured surface (or “hard wall”), and a static force oriented towards the surface so that particle motion is constrained to its close vicinity. The anisotropic thermal environment induces anisotropic diffusion, even for a spherical particle. As we will show below, the interplay between this diffusive behavior and the constraining boundary couples the drift components of the particle motion perpendicular and parallel to the surface. As a consequence, the particle on average moves along the surface, even though there is no deterministic force component applied in that direction. The drift velocity of that directed movement is essentially controlled by the anisotropy of the bath. Unlike common noise-rectifying ratchets, this mechanism does therefore not require any state-dependent diffusion buttiker87 (); landauer88 (), spatially asymmetric periodic force field or ratchet-like topographical structure, or any time-asymmetric driving forces reimann02 (); reimann02a (); hanggi09 (). From an experimental viewpoint, an anisotropic thermal environment may be realized, for instance, by two different temperature “baths” acting along different directions, or by a superposition of the usual fluid bath of the Brownian particle with a second, “hotter” source of (almost) white-noise fluctuations applied along a specific direction filliger07 (). 
In a number of recent experiments, the latter alternative has been implemented by means of noisy electrostatic fields gomez10 (); martinez13 (); mestres14 (); berut14 (); dieterich15 (); martinez15 (); martinez16_carnot (); berut16 (); martinez16 (); dinis16 (); soni17 (). This technique has been shown to provide effective heatings to extremely high temperatures, and it has been applied to implement a microscale heat engine martinez16_carnot (); martinez16 (); dinis16 (). It thus appears to be an ideal candidate for creating the anisotropic thermal environment needed for establishing non-equilibrium conditions in our system. We analyze such an experimental set-up and demonstrate how the noise rectification can be exploited to transport differently-sized colloidal beads in opposite directions along the surface. This setup thus provides an elegant method for efficient particle sorting without feedback control, which can be implemented at low cost with state-of-the-art microfluidics technology martinez16_carnot (). Stochastic particle motion on the microscale is commonly modeled in terms of an overdamped Langevin equation (neglecting inertia effects) mazo02 (); snook07 (). The presence of a reflecting wall requires additional care to correctly account for the hard-core interactions with these boundaries, as they cannot directly be introduced into the equation of motion as well-defined interaction forces behringer11 (); behringer12 (). However, in the equivalent representation of driven diffusion in terms of the Fokker-Planck equation risken84 (), governing the time evolution of the probability density of particle positions, hard walls are easily implemented as reflecting boundary conditions. In one dimension, such Fokker-Planck equation with a reflecting boundary (i.e. 
a “hard wall” constraining particle diffusion to the one-dimensional half-space) has been solved analytically by Smoluchowski smoluchowski17 (); chandrasekhar43 (), even in the presence of a constant external force. He obtained an expression for the time-dependent probability density in terms of exponential and error functions (see Eq. (5) below). For isotropic thermal baths, Smoluchowski’s result is immediately generalized to particle diffusion close to a reflecting surface in three (and also higher) dimensions, because the different spatial directions are uncorrelated, so that the three-dimensional Fokker-Planck equation decouples into a set of three one-dimensional equations. However, for anisotropic diffusion with principal directions not being aligned with the surface, correlations between the spatial components perpendicular and parallel to the surface prohibit such a simple decomposition. To the best of our knowledge, the analytical solution for this situation—anisotropic particle diffusion driven by a constant force in the proximity of a hard surface—is not known in the literature. We here present the exact time-dependent analytical solution of the corresponding Fokker-Planck equation, and reveal that it is intimately connected to the original Smoluchowski solution for the one-dimensional case. The above-mentioned theoretical predictions of noise-induced systematic particle motion along the surface are derived from the properties of this solution.

## Model.

We model the particle’s diffusive motion in three dimensions using the Fokker-Planck equation risken84 () for the probability density $p(\mathbf{x},t)$ of finding the particle at position $\mathbf{x}=(x_1,x_2,x_3)$ at time $t$,

$$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x_i}\left(v_i - \frac{\partial}{\partial x_j}\,D_{ij}\right)p, \qquad (1)$$

where summation over repeated indices is understood. The constant drift velocity $\mathbf{v}=(v_1,v_2,v_3)$ is imposed by an externally applied constant force, and the constant symmetric diffusion tensor $D$ with components $D_{ij}$ ($i,j=1,2,3$) characterizes the anisotropic thermal bath and, possibly, anisotropic particle properties.
We consider the case of an impenetrable surface (reflecting boundary) being located at $x_3=0$, such that $x_3$ measures the height above the surface note:a (). The presence of such a hard wall implies a no-flux boundary condition for the 3-component of the probability flux at $x_3=0$,

$$\left[v_3\, p - \frac{\partial}{\partial x_j}\, D_{3j}\, p\right]_{x_3=0} = 0. \qquad (2)$$

## General results.

We are looking for a solution of (1), (2) for a delta-distributed initial density $p(\mathbf{x},0)=\delta(\mathbf{x}-\mathbf{x}^{(0)})$, where we require that $x_3^{(0)} > 0$. Without the no-flux boundary condition (2), the “free” (i.e. unconstrained) solution note:t () is a trivariate Gaussian $p_{\rm free}(x_1,x_2,x_3)$ with mean $\mu_i = x_i^{(0)} + v_i t$ and covariance $2 D_{ij} t$. We can split off the $x_3$ component by rewriting it as $p_{\rm free}(x_1,x_2,x_3) = p_{\rm free}(x_1,x_2|x_3)\, p_{\rm free}(x_3)$. The conditional density $p_{\rm free}(x_1,x_2|x_3)$ is a two-dimensional Gaussian with mean

$$\tilde\mu_i = \mu_i + \frac{D_{3i}}{D_{33}}\,(x_3-\mu_3) \qquad (3)$$

for the components $i=1,2$, and covariance matrix proportional to the Schur complement of $D_{33}$ in $D$,

$$2\tilde{D}\,t = \frac{2t}{D_{33}} \begin{pmatrix} D_{11}D_{33}-D_{13}^2 & D_{12}D_{33}-D_{23}D_{13} \\ D_{12}D_{33}-D_{23}D_{13} & D_{22}D_{33}-D_{23}^2 \end{pmatrix}.$$

The part $p_{\rm free}(x_3)$ represents free diffusion (no boundaries) in one dimension with drift $v_3$ and diffusion coefficient $D_{33}$ (and, accordingly, is normalized over the whole real line). In the SM SM () we show that the exact time-dependent solution of (1) on the half-space with the reflecting boundary condition (2) retains the (conditional) free diffusion in the $x_1$ and $x_2$ components, while the unconstrained $x_3$ component is replaced by the solution $p(x_3)$ for one-dimensional diffusion on the half-line $x_3 \geq 0$ with a reflecting boundary at $x_3=0$ smoluchowski17 (); chandrasekhar43 (),

$$p(x_1,x_2,x_3) = p_{\rm free}(x_1,x_2|x_3)\, p(x_3). \qquad (4)$$

The explicit form of $p(x_3)$ has been derived by Smoluchowski smoluchowski17 (). It reads

$$p(x_3) = \frac{1}{\sqrt{4\pi D_{33} t}} \left[ e^{-\frac{(x_3-x_3^{(0)}-v_3 t)^2}{4 D_{33} t}} + e^{-\frac{v_3 x_3^{(0)}}{D_{33}}}\, e^{-\frac{(x_3+x_3^{(0)}-v_3 t)^2}{4 D_{33} t}} \right] - \frac{v_3}{2 D_{33}}\, e^{\frac{v_3 x_3}{D_{33}}}\, \mathrm{erfc}\!\left[\frac{x_3+x_3^{(0)}+v_3 t}{\sqrt{4 D_{33} t}}\right], \qquad (5)$$

where the first term is the same Gaussian expression for free diffusion as before, but now applied only to the half-line $x_3 \geq 0$, and where the additional terms account for the “collisions” of the particle with the wall behringer11 (); behringer12 () ($\mathrm{erfc}$ denotes the complementary error function).

## First and second moments.
Exploiting the factorized form of the solution in (4), with $p_{\rm free}(x_1,x_2|x_3)$ being a Gaussian, it is possible to directly compute all the moments of the particle displacement as a function of time. Performing the Gaussian integrals over $x_1$ and $x_2$ and using (3) we obtain for the first moments

$$\langle x_i \rangle = \mu_i - \frac{D_{3i}}{D_{33}}\,\mu_3 + \frac{D_{3i}}{D_{33}}\,\langle x_3 \rangle, \qquad (6a)$$

and for the second moments

$$\langle x_i x_j \rangle - \langle x_i \rangle \langle x_j \rangle = 2t\,\tilde{D}_{ij} + \frac{D_{3i} D_{3j}}{D_{33}^2}\left(\langle x_3^2 \rangle - \langle x_3 \rangle^2\right), \qquad (6b)$$

with $i,j$ being 1 or 2. The remaining integrals over $x_3$ involve combinations of error functions, Gaussians and polynomials (see (5)), but still can be performed analytically. The resulting, explicit time-dependent expressions for the moments $\langle x_3 \rangle$ and $\langle x_3^2 \rangle$ are rather lengthy and are given in the SM SM (). Yet, when the drift is pointing towards the wall, $v_3 < 0$, the one-dimensional motion in the $x_3$ direction reaches a stationary state for large times given by

$$p_{\rm stat}(x_3) = -\frac{v_3}{D_{33}}\, e^{\frac{v_3 x_3}{D_{33}}}, \qquad (7)$$

with stationary mean $-D_{33}/v_3$ and variance $D_{33}^2/v_3^2$. Recalling that $\mu_i = x_i^{(0)} + v_i t$ (for $i=1,2,3$), the long-time limits of the moments (6a) and (6b) then assume the compact form (with $i,j$ being equal to 1 or 2)

$$\langle \dot{x}_i \rangle := \lim_{t\to\infty} \frac{1}{t}\,\langle x_i \rangle = v_i - \frac{D_{3i}}{D_{33}}\, v_3, \qquad (8a)$$

$$\lim_{t\to\infty} \frac{1}{2t}\left(\langle x_i x_j \rangle - \langle x_i \rangle \langle x_j \rangle\right) = \tilde{D}_{ij} = \frac{D_{33} D_{ij} - D_{3i} D_{3j}}{D_{33}}, \qquad (8b)$$

and define effective long-term velocities $\langle \dot{x}_i \rangle$ and effective diffusion coefficients $\tilde{D}_{ij}$ along the surface. The net average particle velocity (8a) is the most striking consequence of our solution: the long-term average displacements parallel to the surface in the directions $x_1$, $x_2$ are not only driven by the drift velocities $v_1$, $v_2$, but also by the perpendicular component $v_3$ in combination with the elements $D_{31}$, $D_{32}$, respectively, of the diffusion tensor. The origin of this motion can be understood as follows. The drift $v_3$ pushes the particle towards the surface and confines its motion to the close proximity of the plain wall in an exponential height distribution (see (7)). Being forced towards the surface, the diffusing particle experiences frequent “collisions” with this hard-wall boundary along a preferential direction which is determined by the anisotropic (but unbiased) thermal environment.
In effect, the anisotropic thermal fluctuations become rectified, and induce directed particle motion. In absence of any drift components parallel to the surface, $v_1 = v_2 = 0$, the particle will therefore move over the surface at a speed and direction that are determined by its diffusion tensor. This effect can even induce particle migration against drift forces applied parallel to the surface. Noise-rectifying particle motion without a systematic force being applied in the direction of motion is characteristic for ratchet systems reimann02 (); reimann02a (); hanggi09 (), and is usually generated by broken spatial or time-reversal symmetries in the applied (potential) forces. Here, no such asymmetric forces are present, but the overall spatial symmetry is broken by the principal axes of the anisotropic thermal bath not being aligned with the orientation of the surface. Indeed, as can be seen from (8a), the effect disappears if the principal axes of the bath are aligned with the surface ($D$ diagonal) or if the bath is isotropic ($D$ proportional to the identity). On the other hand, it might seem from (8a) that we can expect noise-rectification to occur even for isotropic thermal baths if the particle itself is anisotropic or if there are anisotropies in the viscous properties of the environment (like, e.g., in the intracellular medium), because then $D$ is generally non-diagonal. In fact, both cases, a non-spherical, anisotropic particle as well as anisotropic viscosity, are characterized by a friction tensor $\gamma$, resulting in a (generally non-diagonal) diffusion tensor $D = k_B T\,\gamma^{-1}$. Anisotropic friction furthermore couples the various components of the external constant force $\mathbf{f}$, resulting in the drift velocity $\mathbf{v} = \gamma^{-1}\mathbf{f}$. The appearance of $\gamma^{-1}$ in both, $\mathbf{v}$ and $D$, makes the terms proportional to $f_3$ in (8a) drop out (see SM ()), such that systematic long-term drift along the surface can only be induced by the force components $f_1$, $f_2$ parallel to the surface.
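The half-line density of Eq. (5) and its long-time limit can be sanity-checked numerically with the standard library alone. The parameter values below ($D_{33}=1$, $v_3=-1$, $x_3^{(0)}=0.5$) are arbitrary illustrative choices, not values from the paper; the check verifies that the density stays normalized on the half-line and, for drift towards the wall, relaxes to the exponential stationary profile of Eq. (7):

```python
import math

def p_half_line(x, t, x0, v, D):
    """Drifted diffusion on x >= 0 with a reflecting wall at x = 0:
    free Gaussian + image term + erfc term accounting for the wall
    collisions (the Smoluchowski solution, Eq. (5))."""
    g = 1.0 / math.sqrt(4.0 * math.pi * D * t)
    direct = math.exp(-(x - x0 - v * t) ** 2 / (4.0 * D * t))
    image = math.exp(-v * x0 / D) * math.exp(-(x + x0 - v * t) ** 2 / (4.0 * D * t))
    wall = -(v / (2.0 * D)) * math.exp(v * x / D) * \
        math.erfc((x + x0 + v * t) / math.sqrt(4.0 * D * t))
    return g * (direct + image) + wall

x0, v, D = 0.5, -1.0, 1.0      # drift towards the wall: v < 0

# probability is conserved on the half-line (trapezoidal rule on [0, 20])
h = 0.001
ps = [p_half_line(i * h, 1.0, x0, v, D) for i in range(20001)]
norm = h * (sum(ps) - 0.5 * (ps[0] + ps[-1]))

# at long times the density relaxes to the exponential profile of Eq. (7)
p_late = p_half_line(1.0, 40.0, x0, v, D)
p_stat = -(v / D) * math.exp(v * 1.0 / D)
```

The normalization check also confirms the reconstructed image term: with any other prefactor on the second Gaussian, probability would not be conserved at intermediate times.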
In other words, noise-rectification does not occur if the thermal environment is isotropic and anisotropic diffusion is only due to anisotropic particle properties or anisotropic viscosity; a finding reminiscent of the no-go theorem for ratchet systems with non-constant friction (see Sec. 6.4.1 in reimann02 ()), and consistent with the fact that an isotropic thermal environment corresponds to an equilibrium heat bath. ## Experimental proposal. In order to illustrate our main result (8a), we consider the experimentally realistic situation martinez13 (); mestres14 (); berut14 (); berut16 () of a colloidal particle in an aqueous solution at room temperature , which is “heated” anisotropically by randomly fluctuating forces applied along the direction . For the sake of generality, we formally keep the tensor properties of the particle friction when setting up the model in the following, even though we will exclusively consider spherical particles in the explicit examples below. The particle is pushed towards the plane surface at by a constant external force . Using the model for the anisotropic heat bath, put forward and verified in martinez13 (), the equation of (unconstrained) motion for the Brownian particle can be written as γ˙x=f+σeσζ(t)+√2kBTγ1/2ξ(t), (9) where collects three unbiased, mutually independent Gaussian white noise sources (with ) which represent the thermal fluctuations, and where is defined via , exploiting that is a positive definite tensor. Finally, denotes the amplitude of the anisotropic fluctuations . Note that in (9) dissipation effects connected to these fluctuations are assumed to be negligibly small. It has been demonstrated martinez13 () that can in very good approximation be represented as an unbiased delta-correlated white noise, . 
In that case, the isotropic thermal noise and the “synthetic” directional noise can be combined into an effective anisotropic thermal environment with different effective temperatures acting along different directions martinez13 (); martinez16 (); dinis16 () (for details, see the SM SM ()). Doing so, the equation of motion (9) turns into the equivalent form

$$\dot{\mathbf{x}} = \mathbf{v} + \sqrt{2}\,D^{1/2}\,\boldsymbol{\xi}_{\rm eff}(t), \qquad (10)$$

with $\boldsymbol{\xi}_{\rm eff}(t)$ again being unbiased Gaussian white noise sources, and with

$$v_i = (\gamma^{-1})_{ij}\, f_j, \qquad (11a)$$

$$D_{ij} = \frac{\sigma^2}{2}\,(\gamma^{-1}\mathbf{e}_\sigma)_i\,(\gamma^{-1}\mathbf{e}_\sigma)_j + k_B T\,(\gamma^{-1})_{ij}, \qquad (11b)$$

and $D^{1/2}$ defined via $D^{1/2} D^{1/2} = D$. For describing the motion of the Brownian particle close to the plain surface at $x_3=0$, the Langevin equation (10) (or (9)) does actually not provide a complete model, because the hard-core interactions with the reflecting boundary are not specified, and can in fact not be included as well-defined interaction forces behringer11 (); behringer12 (). We can, however, make use of our exact solution of the associated Fokker-Planck equation presented above. In particular, if the deterministic drift is oriented towards the boundary, $v_3 < 0$, the particle’s long-term behavior is determined by (8) with the velocity field and the diffusion tensor given in (11). Specifically, we consider a spherical particle with a friction tensor proportional to the identity, $\gamma_{ij} = \tilde{\gamma}\,\delta_{ij}$. Moreover, we tilt the direction of the “synthetic” fluctuations by an angle $\theta$ with respect to the surface, i.e. we have $\mathbf{e}_\sigma = (0, \cos\theta, \sin\theta)$, by convenient orientation of the $x_1$ and $x_2$ axes such that $\theta$ is the angle between the $x_2$ axis and $\mathbf{e}_\sigma$ (see Fig. 1a). For this set-up, the explicit expressions for the long-term drift velocities (8a) read

$$\langle \dot{x}_1 \rangle = f_1/\tilde{\gamma}, \qquad (12a)$$

$$\langle \dot{x}_2 \rangle = \frac{1}{\tilde{\gamma}}\left[f_2 - f_3\,\frac{\sin\theta\,\cos\theta}{\frac{T}{T_{\rm kin}-T}\,\frac{\tilde{\gamma}}{\gamma_0} + \sin^2\theta}\right], \qquad (12b)$$

where we have introduced the standard definition $T_{\rm kin} = T + \sigma^2/(2 k_B \gamma_0)$ martinez13 (); martinez16 (); dinis16 () for the “hot” kinetic (or effective) temperature of a spherical particle with radius $a$ and Stokes friction coefficient $\gamma_0 = 6\pi\eta a$ in an unbounded fluid. The “hot” temperature $T_{\rm kin}$ together with the direction $\mathbf{e}_\sigma$ characterize the anisotropy of the environment.
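For a spherical particle the closed form (12b) can be cross-checked against the general long-term drift (8a), with the drift and diffusion tensor built from (11a), (11b). A minimal sketch; all numerical values are arbitrary test parameters, not experimental ones, and the Python indices are 0-based (index 1 is the $x_2$ component, index 2 the $x_3$ component):

```python
import math

# arbitrary illustrative parameters: thermal energy, synthetic-noise
# amplitude, friction coefficient, tilt angle, and applied force
kBT, sigma, gamma_t, theta = 1.0, 3.0, 2.0, 0.6
f = (0.0, 0.2, -1.5)                 # f3 < 0 pushes the particle to the wall

e_sigma = (0.0, math.cos(theta), math.sin(theta))

# drift (11a) and diffusion tensor (11b) for gamma = gamma_t * identity
v = tuple(fi / gamma_t for fi in f)
def D(i, j):
    aniso = (sigma ** 2 / 2.0) * e_sigma[i] * e_sigma[j] / gamma_t ** 2
    return aniso + (kBT / gamma_t if i == j else 0.0)

# general long-term drift along x2 from (8a): v2 - (D23 / D33) * v3
v2_general = v[1] - D(1, 2) / D(2, 2) * v[2]

# closed form (12b); with T_kin - T = sigma^2 / (2 kB gamma_0), the factor
# T/(T_kin - T) * gamma_t/gamma_0 reduces to 2 kB T gamma_t / sigma^2
denom = 2.0 * kBT * gamma_t / sigma ** 2 + math.sin(theta) ** 2
v2_closed = (f[1] - f[2] * math.sin(theta) * math.cos(theta) / denom) / gamma_t
```

The two expressions agree to machine precision, and the rectification term adds to the bare parallel drift $v_2$ for these parameter signs.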
From the results (12) we can make a number of interesting observations (see also the SM SM (), where we provide plots of (12b)): (i) The average drift velocity in direction $x_1$ has the trivial form $\langle \dot{x}_1 \rangle = f_1/\tilde{\gamma}$, because the “synthetic” noise does not have an $x_1$ component and thus diffusion in $x_1$-$x_3$ planes is isotropic and can not be rectified. (ii) In an isotropic (equilibrium) thermal bath, when $\sigma = 0$ (implying $T_{\rm kin} = T$), net drift along the surface is only present if there are non-vanishing deterministic force components parallel to the surface, i.e. $f_1 \neq 0$ or $f_2 \neq 0$, as already inferred above on more general grounds. (iii) Likewise, if the “synthetic” noise is applied in a direction parallel or perpendicular to the surface (i.e. $\theta = 0$ or $\theta = \pi/2$), an average drift velocity over the surface can be induced only by $f_1$ or $f_2$. (iv) The noise rectification effect, which is quantified in (12b) by the $f_3$-term, depends on the friction coefficient $\tilde{\gamma}$. It is thus sensitive to particle shape and size, being stronger for particles with smaller $\tilde{\gamma}$. For appropriate choices of the “synthetic” noise parameters, such that the two terms $f_2$ and $f_3 \sin\theta \cos\theta$ in (12b) have the same sign, the rectification effect acts even opposite to the deterministic force component $f_2$. Hence, the long-term velocity $\langle \dot{x}_2 \rangle$ can change direction when varying particle size (and thus the friction coefficient $\tilde{\gamma}$), with the smaller particles moving against the force $f_2$ (see Fig. 1c). In other words, we can always find a combination of the noise parameters $\sigma$ and $\theta$, which makes two different particle species with different $\tilde{\gamma}$ move into opposite directions on the surface, such that they become separated with high efficiency (see Fig. 1b). Note that although the “synthetic” temperature $T_{\rm kin}$ of the experimental setup used in martinez13 (); martinez16 (); dinis16 () can be made extremely high, such large temperatures do not necessarily increase the sorting efficiency. Indeed, for too large $T_{\rm kin}$, the term $\frac{T}{T_{\rm kin}-T}\frac{\tilde{\gamma}}{\gamma_0}$ in (12b) becomes negligible such that the sensitivity of $\langle \dot{x}_2 \rangle$ for particle properties gets lost.
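Observation (iv) can be made concrete by evaluating (12b) for two particles that differ only in their friction coefficient. With the purely illustrative noise and force parameters chosen below (none of them taken from the experiments), the small particle moves against the parallel force component $f_2$ while the large one follows it, so the two species drift apart:

```python
import math

def v2(gamma_t, f2=-0.1, f3=-2.0, kBT=1.0, sigma2=20.0, theta=math.pi / 4):
    """Long-term drift velocity along x2 (the form of Eq. (12b)) for a
    spherical particle with friction coefficient gamma_t; all default
    parameter values are illustrative only."""
    s, c = math.sin(theta), math.cos(theta)
    denom = 2.0 * kBT * gamma_t / sigma2 + s * s
    return (f2 - f3 * s * c / denom) / gamma_t

v_small = v2(1.0)     # small particle: rectification wins, moves against f2
v_large = v2(100.0)   # large particle: essentially follows the force f2
```

Since both $f_2$ and $f_3\sin\theta\cos\theta$ are negative here, the bracket in (12b) changes sign as the friction coefficient grows, which is exactly the sorting mechanism described in the text.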
We furthermore remark that our theory does not take into account particle-particle interactions and thus describes the dilute limit of the sorting problem. Moreover, in practice, care has to be taken by appropriate choices of materials and coatings that the particles do not stick to the surface. ## Concluding remarks. The main results of this Letter are the exact, time-dependent solution of the Fokker-Planck equation (1) with the no-flux boundary condition (2) note:gen (), and its implications for particle diffusion close to a plain surface. Most notably, we find (see (8a)) that an anisotropic thermal environment induces directed particle motion along the boundary even if no systematic forces are applied in this direction. To illustrate these results we analyze the average motion of a Brownian colloid close to a plain surface. The anisotropic thermal environment is created by superimposing externally applied, (almost white) random fluctuations to the thermal fluid bath, using a technique that has been established experimentally in recent years gomez10 (); martinez13 (); mestres14 (); berut14 (); dieterich15 (); martinez15 (); martinez16_carnot (); berut16 (); martinez16 (); dinis16 (). In modeling this system with the Fokker-Planck equation (1) and when applying our analytic solution, we tacitly assume that the friction coefficient of the Brownian particle is constant, i.e. independent of particle position. This idealization does not take into account the changes of viscous friction with the distance from the surface due to hydrodynamic interactions happel83 (). Close to the surface hydrodynamic friction becomes very large and will slow down the movements of the particle. Since the particle sorting mechanism we suggest here occurs in the vicinity of the surface, we therefore expect the sorting efficiency to decrease when properly taking into account hydrodynamic effects. 
Numerical simulations (see SM SM ()) confirm these expectations, but also show that our main qualitative finding—systematic noise-induced transport along the surface in a direction which depends on particle properties—seems to be robust. A detailed analysis of hydrodynamic effects will be subject of future work. ## Acknowledgments. We thank Hans Behringer for many stimulating discussions, RE acknowledges financial support from the Swedish Science Council (Vetenskapsrådet) under the grants 621-2012-2982, 621-2013-3956 and 638-2013-9243. ## References • (1) M. Büttiker, Z. Phys. B–Condensed Matter 68, 161 (1987). • (2) R. Landauer, J. Stat. Phys. 53, 233 (1988). • (3) P. Reimann, Phys. Rep. 361, 57 (2002). • (4) P. Reimann and P. Hänggi, Appl. Phys. A 75, 169 (2002). • (5) P. Hänggi and F. Marchesoni, Rev. Mod. Phys. 81, 387 (2009). • (6) V. Blickle and C. Bechinger, Nat. Phys. 8, 143 (2012). • (7) I. A. Martínez, É. Roldán, L. Dinis, J. M. R. Parrondo, D. Petrov, and R. A. Rica, Nat. Phys. 12, 67-70 (2016). • (8) I. A. Martínez, É. Roldán, L. Dinis, and R. A. Rica, Soft Matter (2016), doi:10.1039/C6SM00923A. • (9) J. M. R. Parrondo, J. M. Horowitz and T. Sagawa, Nat. Phys. 11, 131 (2015). • (10) E. Lutz and S. Ciliberto, Physics Today 68(9), 30 (2015). • (11) J. R. Gomez-Solano, L. Bellon, A. Petrosyan, and S. Ciliberto, EPL 89, 60003 (2010). • (12) I. A. Martínez, É. Roldán, J. M. R. Parrondo, and D. Petrov, Phys. Rev. E 87, 032159 (2013). • (13) P. Mestres, I. A. Martínez, A. Ortiz-Ambriz, R. A. Rica, and É. Roldán, Phys. Rev. E 90, 032116 (2014). • (14) A. Bérut, A. Petrosyan, and S. Ciliberto, EPL 107, 60004 (2014). • (15) I. A. Martínez, É. Roldán, L. Dinis, D. Petrov, and R. A. Rica, Phys. Rev. Lett. 114, 120601 (2015). • (16) E. Dieterich, J. Camunas-Soler, M. Ribezzi-Crivellari, U. Seifert, and F. Ritort, Nat. Phys. 11, 971-977 (2015). • (17) A. Bérut, A. Imparato, A. Petrosyan, and S. Ciliberto, Phys. Rev. Lett. 116, 068301 (2016). • (18) L. Dinis, I. A. 
Martínez, É. Roldán, J. M. R. Parrondo, and R. A. Rica, J. Stat. Mech.: Theo. Exp. (2016), 054003. • (19) J. Soni, A. Argun, L. Dabelow, S. Bo, R. Eichhorn, G. Pesce and G. Volpe, in preparation (2017). • (20) R. M. Mazo, Brownian Motion: Fluctuations, Dynamics and Applications (Oxford University Press, Oxford, 2002). • (21) I. Snook, The Langevin and Generalised Langevin Approach to the Dynamics of Atomic, Polymeric and Colloidal Systems (Elsevier, Amsterdam, 2007). • (22) H. Behringer and R. Eichhorn, Phys. Rev. E 83, 065701(R) (2011). • (23) H. Behringer and R. Eichhorn, J. Chem. Phys. 137, 164108 (2012). • (24) H. Risken, The Fokker-Planck Equation (Springer, Berlin, 1984). • (25) If (1) models the motion of a particle of finite size (radius ), the coordinate represents the gap between the surface and the particle, i.e. the center of the particle is located at . • (26) Supplementary Material can be found as an ancillary file. It contains additional details on the solution of (1), (2), and on the experimental proposal. In addition, we discuss hydrodynamic particle-surface interactions, using known results from the literature happel83 (); sholl00 (); bevan00 (); ryter81 (); sancho82 (); jayannavar95 (); hottovy12 (); yang13 (); volpe10 (); brettschneider11 (); hanggi82 (); klimontovich94 (); bo13 (); gardiner85 (). • (27) M. V. Smoluchowski, Phys. Z. 17, 557 (1916). • (28) S. Chandrasekhar, Rev. Mod. Phys. 15, 1 (1943). • (29) R. Filliger and P. Reimann, Phys. Rev. Lett. 99, 230602 (2007). • (30) Note that we consider the full time-dependent solution throughout this Letter, but we omit the time-argument in the following for notational convenience. • (31) H. Brenner, J. Colloid. Interface Sci. 23, 407 (1967). • (32) The simple structure of the solution allows for a straightforward generalization to higher dimensions. • (33) J. Happel and H. Brenner, Low Reynolds number hydrodynamics (Martinus Nijhoff Publishers, The Hague, 1983). • (34) D. S. Sholl, M. K. 
Fenwick, E. Atman and D. C. Prieve, J. Chem. Phys. 113, 9268 (2000). • (35) M. A. Bevan and D. C. Prieve, J. Chem. Phys. 113, 1228 (2000). • (36) D. Ryter, Z. Phys. B 41, 39 (1981). • (37) J. M. Sancho, M. San Miguel and D. Dürr, J. Stat. Phys. 28, 291 (1982). • (38) A. M. Jayannavar and M. C. Mahato, Pramana J. Phys. 45, 369 (1995). • (39) S. Hottovy, G. Volpe and J. Wehr, J. Stat. Phys. 146, 762 (2012). • (40) M. Yang and M. Ripoll, Phys. Rev. E 87, 062110 (2013). • (41) G. Volpe, L. Helden, T. Brettschneider, J. Wehr, and C. Bechinger, Phys. Rev. Lett. 104, 170602 (2010). • (42) T. Brettschneider, G. Volpe, L. Helden, J. Wehr, and C. Bechinger, Phys. Rev. E 83, 041113 (2011). • (43) P. Hänggi and H. Thomas, Phys. Rep. 88, 207 (1982). • (44) Yu. L. Klimontovich, Physics-Uspekhi 38, 37 (1994). • (45) S. Bo and A. Celani, Phys. Rev. E 88, 062150 (2013). • (46) C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences (Springer, Berlin, 1985).
2019-10-22 12:45:33
http://mathhelpforum.com/calculus/78293-lim-inf-sup-innequality-question.html
# Thread: lim inf/sup inequality question...

1. ## lim inf/sup inequality question...

x_n and y_n are bounded

$\limsup x_n + \limsup y_n \geq \lim x_{r_n} + \lim y_{r_n}$

this is true because the presented subsequences converge to the upper bound of x_n and y_n. the sum of limits is the limit of sums, so we get one limit, and the sum of two convergent sequences is one convergent sequence

$\lim (x_{r_n} + y_{r_n}) = \limsup (x_n+y_n)$

this is true because the sequence is constructed from subsequences that converge to the sup, so they equal the lim sup of $x_n+y_n$

now i need to prove that

$\limsup (x_n+y_n) \geq \liminf x_n + \limsup y_n$

i tried:

$\limsup (x_n+y_n) = \limsup x_n + \limsup y_n \geq \liminf x_n + \limsup y_n$

lim sup is always bigger than lim inf, so it's true. did i solve it correctly?

2. Originally Posted by transgalactic
[quote of the question above]

I'm not sure what you want to prove. Is $r_n$ a particular subsequence? And why do we know that there is a limit to $x_{r_n}$?

3. x_r_n is a convergent subsequence of x_n, y_r_n is a convergent subsequence of y_n. r_n is just a sign to denote that it's a subsequence. by the Bolzano-Weierstrass theorem, every bounded sequence has a convergent subsequence. is my proof ok?

4.
Originally Posted by transgalactic x_n and y_n are bounded $\limsup x_n + \limsup y_n \geq \lim x_{r_n} + \lim y_{r_n}$ this is true because there presented sub sequences converge to the upper bound of x_n and y_n . the sum of limits is the limit of sums so we get one limit and the of two convergent sequence is one convergent sequence $\lim (x_{r_n} + y_{r_n}) = \limsup (x_n+y_n)$ this is true because the sequence is constructed from a convergent to the sup sub sequences so they equal the lim sup of $x_n+y_n$ now i need to prove that $\limsup (x_n+y_n) \Rightarrow \liminf x_n + \limsup y_n$ i tried: $ \limsup (x_n+y_n) = \limsup x_n + \limsup y_n \Rightarrow \liminf x_n + \limsup y_n $ lim sup i always bigger then lim inf so its true did i solved it correctly? I'm still having trouble figuring out what you want to prove. Also I don't think that $\limsup (x_n+y_n) = \limsup x_n + \limsup y_n$. What if $x_n=0,1,0,1,0,1,...$ and $y_n=1,0,1,0,1,0,...$, then $x_n+y_n=1,1,1,1,1,1,...$, which has a limit and limit soupy of 1. But the $\limsup x_n + \limsup y_n=1+1=2$. If you just want to prove $\limsup x_n + \limsup y_n \geq \lim x_{r_n} + \lim y_{r_n}$ well $\limsup x_n \geq \lim x_{r_n}$ and $\limsup y_n \geq \lim y_{r_n}$. Now just add, but I'm still lost as to what you want. 5. $ \lim (x_{r_n} + y_{r_n}) = \limsup (x_n+y_n) $ why its correct what do i need to say so it will be valid ?? 6. Originally Posted by transgalactic $ \lim (x_{r_n} + y_{r_n}) = \limsup (x_n+y_n) $ why its correct what do i need to say so it will be valid ?? Let $w_n=x_n+y_n$. Now I still need to understand this sequence $r_n$. Is $r_n$ the sequence so that $\limsup w_n=\lim w_{r_n}$? 7. r_n is not a sequence its just a way for me so sign a subsequence . x_n is a bounded sequence with index n and i want a subsequence to x_n so i call it $x_{r_n}$ where n is the index again its just a way to sign a subsequence is it ok now? 8. can you show an example like this there is such things on the internet
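The counterexample in post #4 is easy to check numerically. A small Python sketch (not part of the thread) that approximates lim sup by the supremum of a long tail of the sequence:

```python
# Approximate lim sup of a bounded sequence by the sup of a long tail.
def limsup(seq, tail_start=1000):
    return max(seq[tail_start:])

N = 100_000
x = [n % 2 for n in range(N)]        # 0, 1, 0, 1, ...
y = [(n + 1) % 2 for n in range(N)]  # 1, 0, 1, 0, ...
s = [a + b for a, b in zip(x, y)]    # 1, 1, 1, 1, ...

print(limsup(x) + limsup(y))  # 2
print(limsup(s))              # 1, strictly smaller
```

So in general one only has $\limsup (x_n+y_n) \leq \limsup x_n + \limsup y_n$; equality can fail, which is the respondent's point.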
2017-05-26 19:18:58
https://physics.stackexchange.com/questions/693551/is-planck-temperature-really-the-highest-temperature
Is Planck temperature really the highest temperature?

Actually I was learning about Wien's displacement law. It states that $$\lambda T=2.898×10^{-3}\ \mathrm{m\,K}$$ This is actually a part of Planck's law, where the Planck constant originated. Now the Planck temperature is given as $$T_{p}=\sqrt{\frac{hc^5}{2\pi G k^2_b}}=1.416×10^{32}\ \mathrm{K}$$ and the Planck length is $$1.616×10^{-35}\ \mathrm{m}$$ Now, since the smallest possible wavelength is the Planck length, we can say the wavelength of the electromagnetic radiation is the Planck length (assume the energy doesn't create a black hole). Then, according to Wien's displacement law, $$l_p T=2.898×10^{-3}\ \mathrm{m\,K}$$ Solving this we get $$1.79×10^{32}\ \mathrm{K}$$, which is higher than the actual Planck temperature. Since this displacement law is completely derived from Planck's law, this frustrated me a bit and I'm confused. Is it a limit of the displacement law, or a flaw in my reasoning? (Sorry if I made any mistake. I'm new to this. Please explain my mistake; I'm glad to hear it.)

• Two things - (1) Wien's displacement law is an approximation of sorts, so you need to be careful there. (2) The Planck length/energy/time/temperature are not rigorous cut-offs. They are more like ball-park values. Factors of $2$, $\pi$, etc. are not carefully kept track of when evaluating their values. Additionally, they are not even real cutoffs. The real interpretation is that above the Planck temperature there must be new physics (and by new, I mean drastically new. It's not enough to simply add new particles). What that physics is, we simply do not know (though we have some guesses). Feb 9 at 12:58
• I guess there exists a temperature at which the thermal velocity of molecules becomes larger than the speed of light. So that would be a temperature limit. Feb 9 at 13:23
• @Robotex maybe... but if particles start to get hyperenergetic, their relativistic mass increases. I'd have to do some careful reading/research to figure out whether the total kinetic energy is ever limited.
Feb 9 at 15:21
• @CarlWitthoft In that case, increasing the temperature will increase the mass of the matter. I'm curious: is it possible to detect this mass change in the lab? Feb 10 at 14:30
• @CarlWitthoft At some temperature the molecules and atoms will be completely destroyed and only quarks will be left. Feb 10 at 14:32

When you derive a formula for some quantity, the result you get is often a product of powers of the parameters of the problem and fundamental constants of the theory and some real number that tends to be close to $$1$$. For example, the Newtonian escape velocity is $$\sqrt{2Gm/r}$$; the real factor there is $$\sqrt2\approx 1$$. There's a general expectation that quantities of interest in quantum gravity are likely to be products of powers of the fundamental constants $$\hbar, c, G$$ and some real factor close to $$1$$. The actual factor, and the actual meaning of the quantities, depends on the theory. You got a value close to the Planck temperature from the Planck length and Wien's constant because they're all equal to products of fundamental constants times a unitless factor close to $$1$$, but the value you got is a bit different because the unitless factors aren't quite the same. The Planck length isn't the smallest possible wavelength (probably).
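The arithmetic in the question is easy to reproduce; a short Python sketch (values copied from the question, not an authoritative calculation):

```python
# Wien's displacement law: lambda_peak * T = b, with b = 2.898e-3 m*K.
b   = 2.898e-3    # Wien's displacement constant, m*K
l_p = 1.616e-35   # Planck length, m
T_p = 1.416e32    # Planck temperature, K

# Temperature whose peak wavelength equals the Planck length.
T_wien = b / l_p
print(T_wien)        # ~1.79e32 K, as in the question
print(T_wien / T_p)  # ~1.27, a unitless factor of order 1, as the answer says
```

The ratio of order one is exactly the point of the accepted answer: both quantities are combinations of the same fundamental constants, differing only in dimensionless prefactors.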
2022-06-26 09:12:06
https://www.physics-world.com/general-wave-properties/
# General wave properties

Wave motion

The behaviour of waves affects us every second of our lives. Waves are reaching us constantly: sound waves, light waves, infrared heat, television, mobile-phone and radio waves; the list goes on. The study of waves is, perhaps, truly the central subject of physics. There are two types of waves: longitudinal and transverse.

Longitudinal waves: This type of wave can be shown by pushing and pulling a spring. The vibrations of the spring as the wave goes past are backwards and forwards in the direction that the wave is travelling (hence the name 'longitudinal'). The wave consists of stretched and squashed regions travelling along. The stretching produces regions of rarefaction, while the squashing produces regions of compression. Sound is an example of a longitudinal wave.

Transverse waves: In a transverse wave the vibrations are at right angles to the direction of motion. Light, radio and other electromagnetic waves are transverse waves.

In the above examples, the waves are very narrow, and are confined to the spring or the string that they are travelling down. Most waves are not confined in this way. Clearly a single wave on the sea, for example, can be hundreds of metres wide as it moves along.

Wavefront

Water waves are often used to demonstrate the properties of waves because the wavefront of a water wave is easy to see. A wavefront is the moving line that joins all the points on the crest of a wave: the set of points in space reached by a wave or vibration at the same instant as the wave travels through a medium. Wavefronts generally form a continuous line or surface. The lines formed by crests of ripples on a pond, for example, correspond to curved wavefronts.

What features do all waves have?

The speed a wave travels at depends on the substance or medium it is passing through. Waves have a repeating shape or pattern. Waves carry energy without moving material along.
Waves have a wavelength, frequency, amplitude and time period.

The wavelength is the distance between two adjacent peaks or, if you prefer, the distance between two adjacent troughs of the wave. In the case of longitudinal waves, it is the distance between two points of maximum compression, or the distance between two points of minimum compression.

The frequency is the number of peaks (or the number of troughs) that go past each second.

The amplitude is the maximum particle displacement of the medium from the central position. In transverse waves, this is half the crest-to-trough height.

The speed of the wave is the speed at which a crest (or any other point on the wave pattern) travels through the medium, for example the speed of an ocean wave as it approaches a ship. The largest ocean wave ever measured accurately had a wavelength of 340 m, a frequency of 0.067 Hz (that is to say, one peak every 15 s), and a speed of 23 m/s. The amplitude of the wave was 17 m, so the ship was going 17 m above the level of a smooth sea and then 17 m below. (The waves were 34 m from crest to trough.)

The period (T) is the time taken for each complete cycle of the wave motion. It is closely linked to the frequency (f) by this relationship:

$f = \frac{1}{T}$

where f = frequency in hertz (Hz), T = period in seconds (s).

The speed of a wave in a given medium is constant. If you change the wavelength, the frequency must change as well. If you imagine that some waves are going past you on a spring or on a rope, then they will be going at a constant speed. If the waves get closer together, then more waves must go past you each second, and that means that the frequency has gone up. The speed, frequency and wavelength of a wave are related by the equation:

$v = f\lambda$

where v = wave speed, usually measured in meters/second (m/s), f = frequency, measured in cycles per second or hertz (Hz), $\lambda$ = wavelength, usually measured in meters (m).
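The numbers quoted for the record ocean wave can be checked with these two relationships; a quick Python sketch (not from the original article):

```python
# Check the ocean-wave numbers with f = 1 / T and v = f * wavelength.
wavelength = 340.0  # m
period     = 15.0   # s, "one peak every 15 s"

frequency = 1.0 / period           # ~0.067 Hz, as quoted
speed     = frequency * wavelength # ~23 m/s, as quoted

print(round(frequency, 3))  # 0.067
print(round(speed, 1))      # 22.7
```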
2019-07-22 23:28:19
https://www.gamedev.net/forums/topic/458165-python-the-variables-value-changes-back-to-default/
# Python: The variable's value changes back to default.

## Recommended Posts

Hi! Short question: I have a variable defined at module level. In a function at the same level I have an if-statement that reads from that variable. When done, I want the variable's value to change, so I set the new value, and it works inside that function. But the next time the loop runs my function, the value of the variable is the default again. My code looks somewhat like this:

[source lang="python"]
var = "1"

def lookIntoVar():
    if var == "1":
        print var
        var = "2"
    else:
        print var
        var = "1"
[/source]

Why is Python behaving this way, and how do I solve my problem?

##### Share on other sites

Python doesn't let you assign to global variables in functions by default. Your assignment to var actually creates a function-local variable called var. You have to declare it as global in the function, as follows:

foo = 42

def bar():
    global foo  # now you can use the global foo

Of course, global variables are usually not a good idea anyway.

EDIT: Your example should (and does) throw an exception, because you try to read from a local variable before assigning it a value.

##### Share on other sites

Aha, I see. Thank you :)
2017-10-23 22:58:51
https://www.r-bloggers.com/2021/02/bivariate-dasymetric-map/
# Initial considerations

A disadvantage of choropleth maps is that they tend to distort the relationship between the true underlying geography and the represented variable. This is because administrative divisions do not usually coincide with the geographical reality where people live. Besides, large areas appear to carry a weight they do not really have because of sparsely populated regions. To better reflect reality, more realistic population distributions are used, such as land use. With Geographic Information Systems techniques, it is possible to redistribute the variable of interest as a function of a variable with a smaller spatial unit. With point data, the redistribution process is simply clipping the points with population based on land use, usually classified as urban. We could also crop and mask with land-use polygons when we have a vector polygon layer, but an interesting alternative is the same data in raster format. We will see how we can make a dasymetric map using raster data with a resolution of 100 m. This post will use census-section data on median income and the Gini index for Spain. We will make a dasymetric and bivariate map, representing both variables with two ranges of colours on the same map.

# Packages

In this post we will use the following packages:

| Package | Description |
|---------|-------------|
| tidyverse | Collection of packages (visualization, manipulation): ggplot2, dplyr, purrr, etc. |
| patchwork | Simple grammar to combine separate ggplots into the same graphic |
| raster | Import, export and manipulate rasters |
| sf | Simple Features: import, export and manipulate vector data |
| biscale | Tools and palettes for bivariate thematic mapping |
| showtext | Use fonts more easily in R graphs |

# install the packages if necessary
if(!require("tidyverse")) install.packages("tidyverse")
if(!require("patchwork")) install.packages("patchwork")
if(!require("sf")) install.packages("sf")
if(!require("raster")) install.packages("raster")
if(!require("biscale")) install.packages("biscale")
if(!require("sysfonts")) install.packages("sysfonts")
if(!require("showtext")) install.packages("showtext")

# packages
library(tidyverse)
library(sf)
library(biscale)
library(patchwork)
library(raster)
library(sysfonts)
library(showtext)

# Preparation

## Data

First we download all the necessary data. With the exception of the CORINE Land Cover (~ 200 MB), the data stored on this blog can be obtained directly via the indicated links.

• CORINE Land Cover 2018 (geotiff): COPERNICUS

## Import

The first thing we do is to import the land use raster, the income and Gini index data, and the census boundaries.

# raster of CORINE LAND COVER 2018
urb <- raster("U2018_CLC2018_V2020_20u1.tif")
## Warning in showSRID(uprojargs, format = "PROJ", multiline = "NO", prefer_proj
## = prefer_proj): Discarded datum Unknown based on GRS80 ellipsoid in Proj4
## definition

# income data and Gini index

# census boundaries
limits <- read_sf("SECC_CE_20200101.shp")

## Land uses

In this first step we filter the census sections to obtain those of the Autonomous Community of Madrid, and we create the municipal limits. To dissolve the polygons of census tracts we apply the function group_by() in combination with summarise().
# filter the Autonomous Community of Madrid

# obtain the municipal limits
mun_limit <- group_by(limits, CUMUN) %>%
  summarise()

In the next step we cut the land use raster with the limits of Madrid. I recommend always using the crop() function first and then mask(): the first function crops to the required extent and the second masks the values. Subsequently, we keep only the cells that correspond to 1 or 2 (continuous and discontinuous urban fabric). Finally, we project the raster.

# project the limits
limits_prj <- st_transform(limits, projection(urb))

# remove non-urban pixels

# plot the raster
plot(urb_mad)

# project
urb_mad <- projectRaster(urb_mad, crs = CRS("+proj=longlat +datum=WGS84 +no_defs"))

In this step, we convert the raster data into a point sf object.

# transform the raster to xyz and a sf object
st_as_sf(coords = c("x", "y"), crs = 4326)

# add the columns of the coordinates
urb_mad <- urb_mad %>%
  rename(urb = 1) %>%
  cbind(st_coordinates(urb_mad))

## Income data and Gini index

The format of the Excel files does not coincide with the original from the INE, since I cleaned the format beforehand to make this post easier. What remains is to create a column with the codes of the census sections and exclude data that correspond to another administrative level.

## income and Gini index data
renta_sec <- mutate(renta,
                    NATCODE = str_extract(CUSEC, "[0-9]{5,10}"),
                    nc_len = str_length(NATCODE),
                    mun_name = str_remove(CUSEC, NATCODE) %>% str_trim()) %>%
  filter(nc_len > 5)

gini_sec <- mutate(gini,
                   NATCODE = str_extract(CUSEC, "[0-9]{5,10}"),
                   nc_len = str_length(NATCODE),
                   mun_name = str_remove(CUSEC, NATCODE) %>% str_trim()) %>%
  filter(nc_len > 5)

In the next step we join both tables with the census tracts using left_join() and convert the columns of interest to numeric mode.
# join both the income and Gini tables with the census limits
mad <- left_join(limits, renta_sec, by = c("CUSEC" = "NATCODE")) %>%
  left_join(gini_sec, by = c("CUSEC" = "NATCODE"))

# convert selected columns to numeric
mad <- mutate_at(mad, c(23:27, 30:31), as.numeric)

## Bivariate variable

To create a bivariate map we must construct a single variable that combines different classes of two variables. Usually we make three classes of each variable, which leads to nine combinations; in our case, of the average income and the Gini index. The biscale package includes helper functions to carry out this process. With the bi_class() function we create the classification variable using quantiles as the algorithm. Since we find missing values in both variables, we correct those combinations between both variables where an NA appears.

# create bivariate classification
mapbivar <- bi_class(mad, GINI_2017, RNMP_2017, style = "quantile", dim = 3) %>%
  mutate(bi_class = ifelse(str_detect(bi_class, "NA"), NA, bi_class))

# results
## Simple feature collection with 6 features and 3 fields
## geometry type: MULTIPOLYGON
## dimension: XY
## bbox: xmin: 415538.9 ymin: 4451487 xmax: 469341.7 ymax: 4552422
## projected CRS: ETRS89 / UTM zone 30N
## # A tibble: 6 x 4
##   GINI_2017 RNMP_2017 bi_class geometry
##       <dbl>     <dbl> <chr>    <MULTIPOLYGON [m]>
## 1      NA        NA   <NA>     (((446007.9 4552348, 446133.7 4552288, 446207.8 ~
## 2      31     13581   2-2      (((460243.8 4487756, 460322.4 4487739, 460279 44~
## 3      30     12407   2-2      (((457392.5 4486262, 457391.6 4486269, 457391.1 ~
## 4      34.3   13779   3-2      (((468720.8 4481374, 468695.5 4481361, 468664.6 ~
## 5      33.5    9176   3-1      (((417140.2 4451736, 416867.5 4451737, 416436.8 ~
## 6      26.2   10879   1-1      (((469251.9 4480826, 469268.1 4480797, 469292.6 ~

We finish by redistributing the inequality variable over the pixels of urban land use. The st_join() function joins the data with the land use points.
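The bi_class() step amounts to binning each variable into terciles and pasting the two bin indices together. A rough Python sketch of the same idea (not part of the post, which uses R; the cut points here come from only the five data rows printed in the tibble, while the post classifies over all census sections, so the labels need not match the tibble's bi_class column):

```python
import statistics

def tercile(values, v):
    # Bin v into tercile 1, 2 or 3 using cut points computed from `values`.
    q1, q2 = statistics.quantiles(values, n=3)  # two cut points
    if v <= q1:
        return 1
    if v <= q2:
        return 2
    return 3

# The five non-NA rows printed in the tibble above.
gini   = [31, 30, 34.3, 33.5, 26.2]
income = [13581, 12407, 13779, 9176, 10879]

bi_class = [f"{tercile(gini, g)}-{tercile(income, r)}"
            for g, r in zip(gini, income)]
print(bi_class)
```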
# redistribute urban pixels to inequality
mapdasi <- st_join(urb_mad, st_transform(mapbivar, 4326))

# Map building

## Legend and font

Before constructing both maps we must create the legend using the bi_legend() function. In the function we define the titles for each variable, the number of dimensions and the color scale. Finally, we add the Montserrat font for the final titles in the graphic.

# bivariate legend
legend2 <- bi_legend(pal = "DkViolet",
                     dim = 3,
                     xlab = "Higher inequality",
                     ylab = "Higher income",
                     size = 9)

showtext_auto()

## Dasymetric map

We build this map using geom_tile() for the pixels and geom_sf() for the municipal boundaries. In addition, it will be the map on the right, where we also place the legend. To add the legend we use the annotation_custom() function, indicating the position in the geographical coordinates of the map. The biscale package also helps us with the color definition via the bi_scale_fill() function.

p2 <- ggplot(mapdasi) +
  geom_tile(aes(X, Y, fill = bi_class), show.legend = FALSE) +
  geom_sf(data = mun_limit, color = "grey80", fill = NA, size = 0.2) +
  annotation_custom(ggplotGrob(legend2),
                    xmin = -3.25, xmax = -2.65,
                    ymin = 40.55, ymax = 40.95) +
  bi_scale_fill(pal = "DkViolet", dim = 3, na.value = "grey90") +
  labs(title = "dasymetric", x = "", y = "") +
  bi_theme() +
  theme(plot.title = element_text(family = "Montserrat", size = 30, face = "bold")) +
  coord_sf(crs = 4326)

## Choropleth map

The choropleth map is built in a similar way to the previous map, with the difference that we use geom_sf().
p1 <- ggplot(mapbivar) +
  geom_sf(aes(fill = bi_class), colour = NA, size = .1, show.legend = FALSE) +
  geom_sf(data = mun_limit, color = "white", fill = NA, size = 0.2) +
  bi_scale_fill(pal = "DkViolet", dim = 3, na.value = "grey90") +
  labs(title = "choropleth", x = "", y = "") +
  bi_theme() +
  theme(plot.title = element_text(family = "Montserrat", size = 30, face = "bold")) +
  coord_sf(crs = 4326)

## Merge both maps

With the help of the patchwork package, we combine both maps in a single row: first the choropleth map, and on its right the dasymetric map. More details of the grammar used for the combination of graphics here.

# combine
p <- p1 | p2

# final map
p
2021-06-21 16:21:40
http://mathhelpforum.com/calculus/223754-derivative-sin-x.html
# Math Help - The derivative of sin(x)

1. ## The derivative of sin(x)

The difference quotient is: $\frac{d}{dx}\sin(x) = \lim_{\Delta x\rightarrow 0}\frac{\sin(x+\Delta x)-\sin(x)}{\Delta x}$ which converts to: $\lim_{\Delta x\rightarrow 0}\frac{\sin(x)\cos(\Delta x)+\cos(x)\sin(\Delta x)-\sin(x)}{\Delta x}$ The above I understand. The step below is supposed to be an algebraic rearrangement of the above: $\frac{d}{dx}\sin(x)=\lim_{\Delta x\rightarrow 0}\left[\cos(x)\left(\frac{\sin(\Delta x)}{\Delta x}\right)-\sin(x)\left(\frac{1-\cos(\Delta x)}{\Delta x}\right)\right]$ I am wondering how things were changed to get the step above. I am not seeing it right now.

$\dfrac{ab+cd}{e} = a\dfrac{b}{e} + c\dfrac{d}{e} = \dfrac{ab}{e} + \dfrac{cd}{e}$ (this is order of operations and associativity of multiplication). Apply that to what you had.

Consider the terms of the numerator. The first and last term have $\sin(x)$. Factor it from those terms, and you get $\sin(x)(\cos(\Delta x)-1)$. Put a negative sign in front, and you get $-\sin(x)(1-\cos(\Delta x))$. The middle term has a $\cos(x)$.
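The rearrangement can also be sanity-checked numerically; a small Python sketch (not part of the thread) comparing the two forms of the difference quotient for a small Δx:

```python
import math

x, dx = 0.7, 1e-6

# difference quotient, as in the first line of the derivation
lhs = (math.sin(x + dx) - math.sin(x)) / dx

# the rearranged form from the last line
rhs = math.cos(x) * (math.sin(dx) / dx) - math.sin(x) * ((1 - math.cos(dx)) / dx)

print(abs(lhs - rhs))          # tiny: the two forms are algebraically equal
print(abs(lhs - math.cos(x)))  # tiny: both tend to cos(x) as dx -> 0
```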
2015-03-27 13:39:41
http://en.wikipedia.org/wiki/Poisson_binomial_distribution
# Poisson binomial distribution

Parameters: $\mathbf{p}\in [0,1]^n$ — success probabilities for each of the n trials
Support: k ∈ { 0, …, n }
pmf: $\sum\limits_{A\in F_k} \prod\limits_{i\in A} p_i \prod\limits_{j\in A^c} (1-p_j)$
CDF: $\sum\limits_{l=0}^k \sum\limits_{A\in F_l} \prod\limits_{i\in A} p_i \prod\limits_{j\in A^c} (1-p_j)$
Mean: $\sum\limits_{i=1}^n p_i$
Variance: $\sigma^2 =\sum\limits_{i=1}^n (1 - p_i)p_i$
Skewness: $\frac{1}{\sigma^3}\sum\limits_{i=1}^n ( 1-2p_i ) ( 1-p_i ) p_i$
Excess kurtosis: $\frac{1}{\sigma^4}\sum\limits_{i=1}^n ( 1 - 6(1 - p_i)p_i )( 1 - p_i )p_i$
MGF: $\prod\limits_{j=1}^n (1-p_j+p_j e^t)$
CF: $\prod\limits_{j=1}^n (1-p_j+p_j e^{it})$

In probability theory and statistics, the Poisson binomial distribution is the discrete probability distribution of a sum of independent Bernoulli trials that are not necessarily identically distributed. The concept is named after Siméon Denis Poisson. In other words, it is the probability distribution of the number of successes in a sequence of n independent yes/no experiments with success probabilities $p_1, p_2, \dots , p_n$. The ordinary binomial distribution is a special case of the Poisson binomial distribution, when all success probabilities are the same, that is $p_1 = p_2 = \cdots = p_n$.

## Mean and variance

Since a Poisson binomial distributed variable is a sum of n independent Bernoulli distributed variables, its mean and variance will simply be sums of the means and variances of the n Bernoulli distributions:

$\mu = \sum\limits_{i=1}^n p_i$

$\sigma^2 =\sum\limits_{i=1}^n (1-p_i) p_i$

For fixed values of the mean ($\mu$) and size (n), the variance is maximal when all success probabilities are equal and we have a binomial distribution. When the mean is fixed, the variance is bounded from above by the variance of the Poisson distribution with the same mean, which is attained asymptotically as n tends to infinity.
## Probability mass function

The probability of having k successful trials out of a total of n can be written as the sum [1]

$\Pr(K=k) = \sum\limits_{A\in F_k} \prod\limits_{i\in A} p_i \prod\limits_{j\in A^c} (1-p_j)$

where $F_k$ is the set of all subsets of k integers that can be selected from {1,2,3,...,n}. For example, if n = 3, then $F_2=\left\{ \{1,2\},\{1,3\},\{2,3\} \right\}$. $A^c$ is the complement of $A$, i.e. $A^c =\{1,2,3,\dots,n\}\setminus A$. $F_k$ will contain $n!/((n-k)!k!)$ elements, the sum over which is infeasible to compute in practice unless the number of trials n is small (e.g. if n = 30, $F_{15}$ contains over $10^{20}$ elements). Fortunately, there are more efficient ways to calculate $\Pr(K=k)$.

As long as none of the success probabilities are equal to one, one can calculate the probability of k successes using the recursive formula [2][3]

$\Pr (K=k)= \begin{cases} \prod\limits_{i=1}^n (1-p_i) & k=0 \\ \frac{1}{k} \sum\limits_{i=1}^k (-1)^{i-1}\Pr (K=k-i)T(i) & k>0 \\ \end{cases}$

where

$T(i)=\sum\limits_{j=1}^n \left( \frac{p_j}{1-p_j} \right)^i.$

The recursive formula is not numerically stable, and should be avoided if $n$ is greater than approximately 20. Another possibility is using the discrete Fourier transform [4]

$\Pr (K=k)=\frac{1}{n+1} \sum\limits_{l=0}^n C^{-lk} \prod\limits_{m=1}^n \left( 1+(C^l-1) p_m \right)$

where $C=\exp \left( \frac{2i\pi }{n+1} \right)$ and $i=\sqrt{-1}$. Still other methods are described in [5].

## Entropy

There is no simple formula for the entropy of a Poisson binomial distribution, but the entropy can be upper bounded by the entropy of a binomial distribution with the same number parameter and the same mean. Therefore the entropy can also be upper bounded by the entropy of a Poisson distribution with the same mean. [6]

## References

1. ^ Wang, Y. H. (1993). "On the number of successes in independent trials". Statistica Sinica 3 (2): 295–312.
2. ^ Shah, B. K. (1994). "On the distribution of the sum of independent integer valued random variables". American Statistician 27 (3): 123–124. JSTOR 2683639.
3. ^ Chen, X. H.; A. P. Dempster; J. S. Liu (1994). "Weighted finite population sampling to maximize entropy". Biometrika 81 (3): 457. doi:10.1093/biomet/81.3.457.
4. ^ Fernandez, M.; S. Williams (2010). "Closed-Form Expression for the Poisson-Binomial Probability Density Function". IEEE Transactions on Aerospace Electronic Systems 46: 803–817. doi:10.1109/TAES.2010.5461658.
5. ^ Chen, S. X.; J. S. Liu (1997). "Statistical Applications of the Poisson-Binomial and conditional Bernoulli distributions". Statistica Sinica 7: 875–892.
6. ^ Harremoës, P. (2001). "Binomial and Poisson distributions as maximum entropy distributions". IEEE Transactions on Information Theory 47 (5): 2039–2041. doi:10.1109/18.930936.
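As a check on the formulas in the probability mass function section, here is a small Python sketch (not part of the article) that implements the discrete Fourier transform expression and compares it with direct enumeration over subsets:

```python
import cmath
from itertools import combinations

def pmf_dft(p):
    # Pr(K = k) via the DFT formula, with C = exp(2*pi*i / (n+1)).
    n = len(p)
    C = cmath.exp(2j * cmath.pi / (n + 1))
    probs = []
    for k in range(n + 1):
        total = 0
        for l in range(n + 1):
            prod = 1
            for pm in p:
                prod *= 1 + (C**l - 1) * pm
            total += C**(-l * k) * prod
        probs.append((total / (n + 1)).real)
    return probs

def pmf_bruteforce(p):
    # Pr(K = k) by summing over all subsets A of size k (the defining sum).
    n = len(p)
    out = []
    for k in range(n + 1):
        s = 0.0
        for A in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= p[i] if i in A else 1 - p[i]
            s += term
        out.append(s)
    return out

p = [0.1, 0.4, 0.75]
print(pmf_dft(p))         # agrees with the defining subset sum below
print(pmf_bruteforce(p))
```

The brute-force sum is only usable for small n, which is exactly the point the article makes; the DFT version costs O(n²) products per evaluation instead.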
https://www.r-bloggers.com/2021/02/nothing-but-neural-net/
We start a new series on neural networks and deep learning. Neural networks and their use in finance are not new. But they are still only a fraction of the research output: a recent Google Scholar search found only 6% of the articles on stock price forecasting discussed neural networks.1

Artificial neural networks, as they were first called, have been around since the 1940s. But development was slow until at least the 1990s, when computing power rapidly increased. Through this period the architecture and algorithms to build and train networks proceeded steadily. Nonetheless, it wasn't until 2015 or so, with the release of Keras (a high-level deep learning API), that the floodgates opened. Multiple implementations by various providers were released quickly, and there are now a number of other deep learning libraries, including TensorFlow by Google and PyTorch by Facebook. How these all work is beyond the scope of this post, though we hope to touch on all of those libraries within this series. The main point is that with open-source software it is relatively straightforward (though not necessarily easy!) for an individual to build, train, and deploy a deep neural network for all sorts of machine learning problems: natural language processing, computer vision, musical composition, etc.

In fact, it would seem deep learning and artificial intelligence (AI) are everywhere. A cursory read of the applications in use today vs. what was possible even a few years ago is truly astounding, leading some to foresee the singularity and others to warn of a dystopian future not unlike a 1980s action classic, starring Hollywood's favorite Austrian bodybuilder.

Futurists we are not. As this blog is about data science and investing, we'd prefer to ask a few simple questions:

• Can the use of neural networks improve the investing process?
• If so, how?
• If not, why not?
It should be relatively apparent that the first question is deceptively complex. Unpacking it implies a bunch of corollary questions or, at least, the need to define what we mean. That is,

• How are we using neural networks? As forecasting tools? As risk mitigation tools?
• Which neural network (architecture) should we use? Simple, multi-layer perceptron, deep, convolutional, recurrent, LSTM…?
• Which library?
• And of course, the biggest question, how should we define "improve"? Better forecasts, better risk-adjusted returns, lower transaction costs, better implementation?2 And better relative to what? Buy-and-hold, other algorithms, etc.

We obviously won't answer most of these questions now. For this post, we'll introduce the neural network concept and start to show some of the results it can produce.

Warning! The structure, implementation, and output of neural networks are complicated and complex. Truly understanding them requires a lot of effort. Explaining them thoroughly does too. We're more concerned with understanding the results. Most people know how to drive without knowing how an internal combustion engine works. As such, we'll likely tread a fine line between irritating folks that really understand neural networks (by forgetting something) and frustrating those that don't (by failing to explain something else). Apologies in advance. Let's move on.

What is a neural network? Even though the concept is patterned on neurons and synapses, neural networks look very little like a true biological neuron. We see them more or less like matrices that get manipulated, updated, and transformed. At the most basic level, there's a "perceptron". It takes inputs, applies a weighting scheme to those inputs, and then uses an activation function that transforms the aggregated weights and inputs into an output that is supposed to approximate whatever it is you're trying to forecast.
Graphically, it's often shown by the following image.3 The lines represent the connections from the inputs to the output and the weights. The activation function is implied or not used. In math terms it looks something like this:

$$y = \phi(\Sigma w_{i}x_{i} + b)$$

where: $$\phi$$ is the activation function, $$x$$ is the input, $$w$$ are the weights, and $$b$$ is the bias term. The $$\Sigma wx$$ is the summed product of the weights and inputs. The bias term is there so that the model doesn't bounce around randomly. For the mathematically inclined, the part inside the $$\phi$$ function will look very much like a linear equation.

We won't go into detail about what $$\phi$$ actually is, since it varies depending on the task, the structure, or architecture, of the neural network, and the type of performance one wants.4 But what it does is this: it transforms the weights and inputs to match the type of output we're trying to predict. For example, if our input was a bunch of real numbers but our labels, or desired output, were binary, we'd need some sort of function to transform the real number output into a binary one without losing too much information. However, if you need a real number, you may not even use an activation function at the output step.

Unfortunately, a simple, single-layer perceptron is too simple. It often can't find solutions to simple problems we can solve with pencil and paper.5 But researchers found that if you stack those perceptrons on each other (i.e., multi-layer perceptrons or MLP) and allow them to interact, things really get cooking! For an MLP, the output of one layer of perceptrons becomes the input of another layer, often called a hidden layer. And if one layer is good, why not fifty? If a few neurons are good, why not hundreds? Graphically, it looks like this with the $$\phi$$ function in parentheses.
The actual activation functions aren't relevant for this post, but we include them to show that they aren't necessarily the same for each layer. Mathematically, that ends up looking something like the following:6

\begin{align} h^{[1]} &= \phi^{[1]}(W^{[1]}x^{[1]} + b^{[1]})\\ h^{[2]} &= \phi^{[2]}(W^{[2]}h^{[1]} + b^{[2]})\\ h^{[3]} &= \phi^{[3]}(W^{[3]}h^{[2]} + b^{[3]})\\ h^{[4]} &= \phi^{[4]}(W^{[4]}h^{[3]} + b^{[4]})\\ y &= \phi^{[5]}(W^{[5]}h^{[4]} + b^{[5]}) \end{align}

where $$W^{[i]}$$ is a matrix of weights, as opposed to the vector $$w_{i}$$ from above, since each input will go into more than one neuron. Hopefully, this isn't complete gobbledygook. If you start from the first function—$$\phi^{[1]}(W^{[1]}x^{[1]} + b^{[1]})$$—you can see that its output ($$h^{[1]}$$) becomes the input of the next one ($$\phi^{[2]}(W^{[2]}h^{[1]} + b^{[2]})$$).

If you're beginning to feel your eyes glaze over and head spin, wait! There's more. It's not enough that we've gone through all this to reach some final output. We need to check that output against what we're actually trying to predict. Surprise, surprise, our output is likely to be off (often astonishingly so). That means we'll need to go back and tweak the weights at each layer and neuron and start from the beginning all over again. This step is called backpropagation and involves both calculus and iteration.7 Once we've run the whole thing again, we check it against our desired output and continue re-running backpropagation, tweaking the weights, and then feeding the new weights forward until we're satisfied or need a stiff adult beverage. Each pass backward and forward is called an epoch and is like a microcosm of the machine learning process: train, validate, revise, repeat.
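The stacked equations above can be sketched in a few lines of numpy. The shapes, seed, and choice of ReLU here are ours, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(z):
    # one common choice of activation phi
    return np.maximum(0.0, z)

# Illustrative shapes: 2 inputs -> 10 hidden neurons -> 1 real-valued output
W1, b1 = rng.normal(size=(10, 2)), np.zeros(10)
W2, b2 = rng.normal(size=(1, 10)), np.zeros(1)

def forward(x):
    h1 = relu(W1 @ x + b1)   # h[1] = phi[1](W[1] x + b[1])
    return W2 @ h1 + b2      # no activation at the output for a real number

y_hat = forward(np.array([0.5, -1.0]))
print(y_hat.shape)  # (1,)
```

Backpropagation then adjusts `W1`, `b1`, `W2`, `b2` by the gradient of a loss on `y_hat` — which is exactly what the Keras code later in the post automates.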
As this happens automatically (thanks to the complex combination of algorithms and architecture often producing remarkably accurate results), it's easy to see why some call this (perhaps hyperbolically) artificial intelligence. There's no ghost there, however. A human's directing the machine, deciding on the architecture, how fast the neural network learns, how many times it must repeat its calculations, and a bunch of other tuning knobs called hyperparameters.

If you're still with us, you're probably asking yourself a couple of questions:

• Where do the weights and bias terms come from?
• Why does a neural network (NN) work?
• What does this have to do with investing?

The weights and biases are initially chosen at random, using some underlying distribution assumption.8 This is pretty remarkable when you think about it. You might have no idea what the appropriate weights should be, but through computational power, calculus, and the right architecture—that is, the structure of the hidden layers—the computer figures out how to tell a long-haired Persian from a Pekingese.

Why a NN works is probably better reformulated as: why is a NN able to achieve better accuracy than other algorithms? For many years, it didn't. Many diverse elements—particularly backpropagation—needed to be discovered, combined, and then applied efficiently. Nonetheless, NNs work broadly because trial-and-error works. If we only had one attempt at riding a bicycle to prepare for a race, the Tour de France would probably look more like slapstick than elite athleticism. By instantiating hundreds or thousands of connections and interactions, the neural network approximates how neuroscientists believe the brain works. When we learn arithmetic or how the longbow won the Battle of Hastings, different neurons in the brain are activated and activate one another to the point that connections are created and strengthened.
Activate often enough, and the patterns and resiliency of those patterns perpetuate sufficiently to allow us to remember automatically that Hastings was in 1056, er 1066. Computers are also super good at producing massive amounts of minute trial-and-error attempts in a short fraction of time. So that helps a lot too.

What the heck does this have to do with investing? If algorithms like linear regression are used to inform investment decisions, often successfully, why not apply a more sophisticated algorithm? That's a sufficient enough reason. But we believe—though have yet to prove—that a neural network could yield a better model of the market. The fundamental law of finance might be that a risky asset's price must equal the discounted value of its future cash flows—if it generates cash—and the market-clearing intersection of supply and demand. But there have to be actors who have (differing) views of value, supply, and demand for all this to obtain. A neural network might be able to approximate a simplified version of market participants expressing views (the weights) and biases (the bias, hehe) about an asset.

But we're way ahead of ourselves. Let's step back and look at a simple example, the Hello World! of trend-following and tactical allocation—the long-term simple 10-month moving average on the S&P 500. Investors use the moving average to give a rough gauge of trend or signal. If the stock is above the moving average, we're in an uptrend and vice versa. When the stock crosses above the trend, BUY!; when it crosses below, SELL! Even fundamental investors will look at a chart that often includes some moving average. Even if this simple metric is not a basis for one's investing decisions, it still plays a role. The neurons are firing as you look at the chart and process the information.

Let's build a relatively simple neural network with one hidden layer populated with ten neurons. We'll run the forward and back passes 20 times, or epochs.
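As an aside, the moving-average crossover rule just described is a one-liner in pandas. The prices below are synthetic, purely for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# synthetic monthly closing prices (geometric random walk)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, 120))))

sma10 = price.rolling(10).mean()
signal = (price > sma10).astype(int)   # 1 = price above trend, 0 = below (or SMA not yet defined)
crosses = signal.diff()                # +1 marks a BUY cross, -1 a SELL cross
```

The neural network we build next is, in effect, asked to learn whatever predictive content sits in these same two series — the price and its 10-month average.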
The architecture will look something like the following. The two blue inputs (features) are the current month-end price and the 10-month simple moving average price. The gray neurons are the hidden layer; the single blue output (label) is the ending price in the following month. We'll normalize all these prices according to the last ten months' average and standard deviation. We'll split the data into a training set (1970 to 1991), a validation set (1991 to 2000), and a test set (2001 to 2020). Clearly, this is a simple example, wouldn't even count as a worthy research application, and is unlikely to produce much of a good forecast. It's a toy example to build the intuition.

As we train the model, we store the results of the loss function at the end of each epoch. We can then graph the loss function—root mean-squared error (RMSE)—to see how the model progresses. As one can see, the curve declines nicely over the twenty epochs, as it should. Recall, after each epoch, the algorithm moves back through the network to tweak the weights. Notice, however, that the RMSE is quite large (ranging from above 1.3 to below 0.8), especially in relation to the data, which has been normalized to have a mean of zero and a standard deviation of one for each successive 10-month period.

Since nothing seems to have gone awry, we now want to see how well the model generalizes on new data, the validation set. This is essentially a test set we get to see, allowing us to tune the model better without snooping the actual test data, which we should only look at once we've got a model we think might work. We run the algorithm again, but record the loss function on the validation data as well.

What are we looking for in the validation data? First off, we want to examine the loss function produced by applying the model's weights to the validation data. The graph can then tell us how well the model generalizes.
Does the validation loss trend down with the training loss (good), or does it flatten out much earlier (bad)? Does the validation loss hug the training loss (good), or give it the cold shoulder (bad)? Note: even though we get to "see" the validation data, it doesn't affect the calculations that produce the weights and biases associated with the model, so it's not snooping per se. Of course, anyone can pull up a long-term chart, so we're all snooping to some degree. The only true test set is tomorrow!

Wow! Look at how much lower the error is for the validation set. Those neural networks sure are something else! Sadly, this result is due to luck. The validation set should have a higher error rate than the training set because, even though we want a model to generalize well on data it hasn't been trained on, it would be odd for it to produce a lower error. That would be like expecting bad inputs to produce better outputs.9

This presents us with a problem. If the validation period is such that it will make the training model look incredible, we need to figure out a way to alter the data to remove the regime effects or test a different hypothesis. Solving that puzzle will be left for another post. But we can address it roundaboutly [sic!], by asking whether a different algorithm—linear regression, for example—produces similar results. This will also afford us the opportunity to show how a NN can approximate the output of a linear regression.

As we noted above, the weights and biases formula looks very similar to a linear model. One can get very close to the output of a linear model using a simple NN (a single-layer perceptron, in fact!). In the chart below, we plot the loss of a simple NN along with a horizontal line for the RMSE produced by a linear regression. We see that after six epochs the simple NN achieves about the same loss as the linear regression. After eight epochs, the NN's parameters are pretty close to the regression coefficients.
Model            Intercept/Bias   Close price   SMA
Regression       0.092            0.751         0.009
Neural network   0.089            0.712         0.030

True, the coefficient on the moving average (SMA) is a bit off, but it's still small. Let's see how this looks when we use a dense neural network (DNN); in this case, a NN with two hidden layers, one with 300 and the other with 100 neurons. Dense, but not incredibly so by most standards. We run 100 epochs with early stopping that quits two epochs after the loss function stops improving.

Interestingly, the DNN takes longer to reach the same RMSE as the regression than the simple NN. This is likely due to the order-of-magnitude increase in calculations. However, the DNN converges much faster on the validation data. We're not exactly sure why this might be the case, but suspect that it's because of the validation data rather than the model. Importantly, notice that the linear regression's error is also lower on the validation vs. the training set, confirming that the issue is not so much with the model, but with the time series.

The point of this example is to show three things. First, NNs can approximate the results of a linear regression, with a sufficient number of iterations. Second, since both algorithms end up with lower errors on the validation data, neither algorithm produces a better model. Third, adding complexity, as we did with the DNN, didn't necessarily improve results.

Given these observations, one can understand why a number of practitioners don't see the point of applying neural networks to investing. If they don't produce better results than a linear regression, why use them in the first place? You're adding complexity with no additional benefit and perhaps some additional cost, either in time to learn the algorithms or the money to hire someone to do it for you. A toy example shouldn't cloud an open mind, however.
The manifest flexibility of a neural network—the ability to model linear, non-linear, and higher-dimensional relationships—suggests there's a lot more to consider. That's for future posts! In this post, we've introduced neural networks and formulated some of the questions we want to answer over this series. Our plan is to apply the major NN architectures not only to typical technical analysis, but also to fundamental and factor investing. We might even introduce sentiment analysis and natural language processing if we don't go crazy first. If you have any thoughts or opinions on this post or our plan, email us at content at optionstocksmachines dot com.

And now, the code! Built using R 4.0.3 and Python 3.8.3.

```r
# [R]
suppressPackageStartupMessages({
  library(tidyverse)
  library(tidyquant)
  library(reticulate)
})
```

```python
# [Python]
import warnings
warnings.filterwarnings('ignore')

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib
import matplotlib.pyplot as plt
import os

os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = 'C:/Users/user_name/Anaconda3/Library/plugins/platforms'

plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (12, 6)

# Directory to save images
# Most of the graphs are now imported as png. We've explained why in some cases
# that was necessary due to the way reticulate plays with Python. But we've also
# found that if we don't use pngs, the images don't get imported into the emails
# that go to subscribers.
DIR = "your/image/directory"

def save_fig_blog(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(DIR, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)  # 'dip' -> 'dpi'
```

```python
## Create single layer perceptron
from nnv import NNV

layers = [
    {"title": "Input", "units": 3, "color": "blue"},
    {"title": "Output", "units": 1, "color": "grey"}
]

NNV(layers).render()  # the render call appears to have been lost in extraction
plt.show()

## Create multi-layer perceptron
layersList = [
    {"title": "Input\n", "units": 5, "color": "blue"},
    {"title": "Hidden 1\n(ReLU)", "units": 12},
    {"title": "Hidden 2\n(ReLU)", "units": 10},
    {"title": "Hidden 3\n(ReLU)", "units": 8},
    {"title": "Hidden 4\n(ReLU)", "units": 6, "color": "lightGray"},
    {"title": "Output\n(Softmax)", "units": 3, "color": "blue"},
]

NNV(layersList).render()
plt.show()
```

```python
## Pull S&P data and process
import pandas_datareader.data as dr  # 'dr' was used but never imported

start = '1970-01-01'
end = '2020-12-31'
sp = dr.DataReader('^GSPC', 'yahoo', start, end)

# The step creating sp_mon was lost in extraction; a monthly close plus a
# 10-month moving average, as described in the text, would look like:
sp_mon = sp[['Close']].resample('M').last()
sp_mon['10ma'] = sp_mon['Close'].rolling(10).mean()

sp_mon.columns = ['close', '10ma']
sp_mon = sp_mon.rename(index={'Date': 'date'})
sp_mon['ret'] = sp_mon['close'].pct_change()

# Graph S&P and 10-month moving average
ax = sp_mon[['close', '10ma']].plot(color=['blue', 'black'], style=['-', '-.'])
plt.legend(['Index', '10-month SMA'])
plt.xlabel("")
plt.ylabel("Index")
ax.set_yticklabels(['{:,}'.format(int(x)) for x in ax.get_yticks().tolist()])
plt.title('S&P 500 with moving average')
save_fig_blog('sp_tf1')
plt.show()

sp_mon['1_mon'] = sp_mon['close'].shift(-1)
```

```python
## Create train, valid, test split and normalize
norms = sp_mon.apply(lambda x: (x - x.rolling(10).mean()) / x.rolling(10).std()).dropna()

X_train = norms.loc[:'1991', ['close', '10ma']]
y_train = norms.loc[:'1991', '1_mon']

X_valid = norms.loc['1991':'2000', ['close', '10ma']]
y_valid = norms.loc['1991':'2000', '1_mon']

X_test = norms.loc['2001':, ['close', '10ma']]
y_test = norms.loc['2001':, '1_mon']

## Show NN architecture
layer_list1 = [
    {"title": "Input\n", "units": 2, "color": "blue"},
    {"title": "Hidden 1\n(ReLU)", "units": 10},
    {"title": "Output\n(None)", "units": 1, "color": "blue"},
]

NNV(layer_list1).render()
# save_to_file=DIR+"/mlp_tf1.png"
plt.show()
```

```python
## Build neural network and train
import tensorflow as tf
from tensorflow import keras

keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=X_train.shape[1:]),
    keras.layers.Dense(1)
])
model.compile(loss='mean_squared_error', optimizer='sgd')
history = model.fit(X_train, y_train, epochs=20)

# Graph loss function
hist = pd.DataFrame(history.history).apply(lambda x: np.sqrt(x))
hist.index = np.arange(1, len(hist) + 1)
hist['loss'].plot(color='blue')
plt.legend('')
plt.xlabel('Epoch')
plt.ylabel('RMSE')
plt.xticks(np.arange(0, 21, 2))
plt.title("Neural network training error by epoch")
save_fig_blog('nn_train_1_tf1')
plt.show()
```

```python
## Include validation set
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=X_train.shape[1:]),
    keras.layers.Dense(1)
])
model.compile(loss='mean_squared_error', optimizer='sgd')
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))

# Graph
hist = pd.DataFrame(history.history).apply(lambda x: np.sqrt(x))
hist.index = np.arange(1, len(hist) + 1)
ax = hist.plot(color=['blue', 'black'], style=['-', '-.'])
plt.legend(['Training', 'Validation'])
plt.xlabel('Epoch')
plt.ylabel('RMSE')
plt.xticks(np.arange(0, 21, 2))
plt.title("Neural network training and validation error by epoch")
save_fig_blog('nn_train_2_tf1')
plt.show()
```

```python
## Add early stopping with 100 epochs
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=X_train.shape[1:]),
    keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer='sgd', metrics=['mape'])

check_pt = keras.callbacks.ModelCheckpoint("tf1_model.h5", save_best_only=True)
early_stop = keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
                    validation_data=(X_valid, y_valid),
                    callbacks=[check_pt, early_stop])
model = keras.models.load_model("tf1_model.h5")  # roll back to best model
```

```python
## Comparing NN to linear regression
X = sm.add_constant(X_train)  # 'X' was undefined in the extracted code; an
                              # intercept term matches the reported coefficients
y = y_train
lin_reg = sm.OLS(y, X).fit()

keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)

lin_nn = keras.models.Sequential([
    keras.layers.Dense(1, input_shape=X_train.shape[1:])
])
lin_nn.compile(loss='mse', optimizer='sgd')
lin_hist = lin_nn.fit(X_train, y_train, epochs=8)

# Graph loss function
lin_hist_df = pd.DataFrame(lin_hist.history).apply(lambda x: np.sqrt(x))
lin_hist_df.index = np.arange(1, len(lin_hist_df) + 1)
lin_hist_df.plot(color='blue')
plt.axhline(np.sqrt(lin_reg.mse_resid), color='red', ls=':')
plt.xlabel('Epoch')
plt.ylabel('RMSE')
plt.title("Neural network training error by epoch")
plt.legend(['Training', 'Linear regression'])
save_fig_blog('nn_vs_lin_reg_tf1')
plt.show()

# Print table of weights, biases, and coefficients
nn_params = np.concatenate((lin_nn.layers[0].get_weights()[1].astype(float),
                            lin_nn.layers[0].get_weights()[0].flatten().astype(float)),
                           axis=0)
lin_reg_params = lin_reg.params.values
pd.DataFrame(np.array([lin_reg_params, nn_params]),
             columns=['Intercept', 'Close', 'SMA'],
             index=['Linear', 'Neural Network']).apply(lambda x: round(x, 3))
```

```python
## Build dense NN
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.Dense(300, activation='relu', input_shape=X_train.shape[1:]),
    keras.layers.Dense(100, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer='sgd')

check_pt = keras.callbacks.ModelCheckpoint("tf1_dense_model.h5", save_best_only=True)
early_stop = keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
                    validation_data=(X_valid, y_valid),
                    callbacks=[check_pt, early_stop])

# Graph loss functions
pred = lin_reg.predict(sm.add_constant(X_valid))  # 'pred' was undefined; the
                                                  # regression's validation forecasts
rmse_train = np.sqrt(lin_reg.mse_resid)
rmse_valid = np.sqrt(np.mean((pred - y_valid)**2))

hist2 = pd.DataFrame(history.history).apply(lambda x: np.sqrt(x))
hist2.index = np.arange(1, len(hist2) + 1)

# 'colors', 'styles', and 'labs' were not defined in the extracted code;
# plausible values consistent with the earlier charts:
colors, styles, labs = ['blue', 'black'], ['-', '-.'], ['Training', 'Validation']

hist2.plot(color=colors, style=styles)
plt.title("Neural network training and validation error by epoch")
plt.ylabel('RMSE')
plt.axhline(rmse_train, color='red', ls='-.')
plt.axhline(rmse_valid, color='purple', ls='-.')
plt.legend(labs + ['Regression train', 'Regression valid'])
save_fig_blog('dnn_vs_lin_reg_tf1')
plt.show()
```

1. The search term "forecasting stock prices with neural networks" produced 44,000 results. The search term "forecasting stock prices" produced 752,000 results.
2. By this we mean actually implementing the strategy you're planning to implement. "Trading the plan."
3. We created this image using the nnv package in Python. Source: R. Cordeiro, "NNV: Neural Network Visualizer", 2019. [Online]. Available: here [Accessed: 22-February-2021].
4. The three main functions are: Sigmoid: $$1/(1 + e^{-x})$$, Hyperbolic tangent: $$2\sigma(2x) - 1$$, and Rectified linear unit: $$max(0,x)$$. But there are lots of others too, like ELU, SELU, leaky this, and squeaky that.
5. For example, the exclusive-or (XOR) problem: (not A and B) or (A and not B).
6. As usual, our notation may not be perfectly formal, but we hope we get the point across.
7. That is, at each neuron we take the partial derivative of the loss function with respect to the weight or bias and then tweak the weight by some small number known as the learning rate.
8. The initialization step is its own area of fertile research, with an abundance of eponymous initializers.
9. As Charles Babbage once said, "On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
https://math.stackexchange.com/questions/2031662/distance-between-two-points-on-a-circle
Distance between two points on a circle

Two circles with radius 1 are tangent to one another. One line passes through the centre of the first circle and is tangent to the second circle at the point $P$. A second line passes through the centre of the first circle and is tangent to the second circle at the point $Q$. Find the distance between $P$ and $Q$.

This question appeared in a first-year calculus exam, and I can't see how I would even use my knowledge of differential calculus to try and solve this. It seems more of a geometry problem, and when I try to draw a diagram I am left at a loss because there's hardly any information given. If someone could give me a hint as to how to begin, that'd be great. Thank you. I also wasn't too sure how to tag it, so my apologies.

• Have you noticed the figure forms right triangles? – N.S.JOHN Nov 26 '16 at 16:10
• That's a terribly uninformative title. – user137731 Nov 26 '16 at 16:32
• I'm sorry, I didn't want to make it seem like I was just begging for the right answer. – cgug123 Nov 26 '16 at 16:36
• @cgug123: You could still do that by having a flatly descriptive title such as "Distance between two tangent points on a circle" or something like that. – Brian Tung Nov 26 '16 at 17:12
• The problem has one trivial solution unless we can assume that the points $P$ and $Q$ are distinct. – Sid Nov 26 '16 at 17:34

Let the first circle be centred at $A$ and the second at $B$. Check that $\angle PAB=\angle QAB=30^\circ$ (see what the lengths of $PA$ and $PB$ are!). Also check that $PAQ$ forms an equilateral triangle.

Edit: $PA=\sqrt 3$, $PB=1$, $AB=2$. Let $PQ$ cut $AB$ at $D$. Triangles $PAB$, $BPD$, $QAB$, $QBD$ are similar. $\angle PBD=\angle QBD=60^\circ$, hence $\angle BPD=\angle BQD=30^\circ$, so $\angle APD=90^\circ-\angle BPD=90^\circ-30^\circ=60^\circ$ and $\angle AQD=90^\circ-\angle BQD=90^\circ-30^\circ=60^\circ$. So in triangle $APQ$ all angles are $60^\circ$ and $PA=\sqrt 3$.

• I can see that it is, and that the angle is 60 degrees, but I don't know how to get the side lengths from there. Sorry, I'm very inexperienced in geometrical applications – cgug123 Nov 26 '16 at 16:18
• @cgug123 Got the equilateral triangle? – Qwerty Nov 26 '16 at 16:18
• Yes. I'm not sure how to prove it is equilateral like the first answerer, however. I can just see that it is. – cgug123 Nov 26 '16 at 16:21
• @cgug123 What's $\angle BPQ$? Can you prove it is the same as $\angle PAB$? – Qwerty Nov 26 '16 at 16:21
• 60 degrees, yes? – cgug123 Nov 26 '16 at 16:22

Hint. Make a drawing, and by considering its angles, show that the triangle $\triangle O_1PQ$ is equilateral, where $O_1$ is the centre of the first circle. Now note that $\triangle O_1O_2P$ is a right triangle, where $O_2$ is the centre of the second circle. Then, by the Pythagorean theorem, we can find $O_1P$ ($=PQ$) from $O_1O_2$ and $PO_2$.

• Okay this I can do! Thank you. – cgug123 Nov 26 '16 at 16:35

If this is on a calculus exam, then it's likely that you're meant to compute the slope of the tangents to the circle via differentiation. Center one of the circles on the origin and place the center of the other circle at $C=(2,0)$. Implicitly differentiate $x^2+y^2=1$ to obtain $dy/dx=-x/y$. The slope of the line through $P=(x,y)$ and $C$ is $(y-0)/(x-2)$. Setting these equal to each other yields the equation $$2x-x^2=y^2.$$ Combine this with the equation of the circle and solve the resulting system.
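The calculus route above can be checked symbolically; a quick sympy sketch (variable names are ours):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# tangency condition 2x - x^2 = y^2 combined with the unit circle x^2 + y^2 = 1
sols = sp.solve([x**2 + y**2 - 1, 2*x - x**2 - y**2], [x, y])
(Px, Py), (Qx, Qy) = sols   # the two tangent points P and Q

dist = sp.sqrt((Px - Qx)**2 + (Py - Qy)**2)
print(sp.simplify(dist))    # sqrt(3)
```

Both solutions have $x = 1/2$ and $y = \pm\sqrt{3}/2$, so the distance $PQ = \sqrt{3}$, agreeing with the geometric argument.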
http://www.gradesaver.com/textbooks/math/trigonometry/trigonometry-10th-edition/chapter-8-complex-numbers-polar-equations-and-parametric-equations-section-8-1-complex-numbers-8-1-exercises-page-357/7
Trigonometry (10th Edition)

Since $7$ is a natural number and natural numbers are a subset of the set of real numbers, $7$ is a real number. Also, for a complex number $a + bi$, if $b = 0$, then $a + bi = a$, which is a real number. Thus, the set of real numbers is a subset of the set of complex numbers. Therefore, $7$ can be identified as a complex number as well.
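As an aside, Python's built-in complex type mirrors this subset relation: a complex number with zero imaginary part compares equal to the corresponding real number.

```python
z = complex(7, 0)       # the complex number 7 + 0i
print(z == 7)           # True: with zero imaginary part, 7 + 0i equals the real number 7
print(z.real, z.imag)   # 7.0 0.0
```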
https://cryptoblog.wordpress.com/tag/cryptography/
## Attacks on Cryptographic Systems (Part I)

• Soft Attacks

No matter how sophisticated the attack techniques become, one must not forget that when the ultimate goal is to obtain the secret message, coercion or social engineering are often the most effective attack techniques. These attacks are based on physical or psychological threats, robbery, bribery, embezzlement, etc. The attacks are mostly directed at the human links of the data security chain. Social networks have become a launching pad for these kinds of attacks. In a typical soft attack such as the so-called spear-phishing, e-mail addresses and information about the victim's social circle are harvested from social networks and then used to send targeted e-mail with malware that causes victims to reveal secret information for access to secured systems.

• Brute Force Attacks

Assuming, as Kerckhoffs's principle recommends, that the algorithm used for encryption and the general context of the message are known to the cryptanalyst, the brute-force attack involves the determination of the specific key being used to encrypt a particular text. When successful, the attacker will also be able to decipher all future messages until the keys are changed. One way to determine the key entails exhaustive search of the key-space (defined as the set of all possible valid keys for the particular crypto-system). Brute force is a passive, off-line attack in which the attacker Eve passively eavesdrops on the communication channel and records ciphertext exchanges for further analysis, without interacting with either Alice or Bob. To estimate the time that a successful brute-force attack will take, we need to know the size of the key-space and the speed at which each key can be tested. If $N_k$ is the number of valid keys and we can test $N_s$ keys per second, it will take, on average, $\frac{1}{2}(\frac{N_k}{N_s})$ seconds to find the proper key by brute force.
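The average-time formula can be computed directly. A minimal sketch (the keys-per-second rate below is an assumed figure for illustration, not a benchmark):

```python
def average_bruteforce_seconds(key_bits, keys_per_second):
    """Average exhaustive-search time: (1/2) * N_k / N_s, per the formula above."""
    n_k = 2 ** key_bits                 # size of the key space
    return 0.5 * n_k / keys_per_second

# A DES-style 56-bit key, assuming a (hypothetical) rate of 10**9 key tests per second:
seconds = average_bruteforce_seconds(56, 10**9)
print(seconds / 86400, "days on average")    # roughly 417 days
```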
The threat that a brute-force attack poses should not be underestimated in the real world. Most financial institutions use cipher-systems based on DES. Keys of length 56 bits, such as the one used by the standard implementation of DES, can be obtained by brute force using computer hardware and software available since the late 1990s. Indeed, to counter this possibility, most contemporary implementations of DES use a derivative known as Triple-DES (or 3-DES), which uses three different 56-bit keys instead of one. The effective key length for the combined 3-DES key is a more secure 168 bits. Brute-force analysis has been used in combination with other attacks, as was the case for the deciphering of the Enigma. The famous bombes were an example of the brute-force approach working in combination with a mathematical method that provided an important reduction of the key-space.

To be continued…

## From Backdoor to Backdoor

While the FBI was accused of planting a backdoor in OpenBSD, the NSA clears the record on DES. There are many stories about sneaking sophisticated chunks of code that make a perfectly good encryption system leak information. Something like this is extremely difficult to do without anybody noticing it, and I think it must be considered a lot of unnecessary trouble for the guys who would rather nicely ask for the keys to your front door.

## The book gets an excellent review at Amazon.com

A very nice surprise from the comment pages at Amazon.com: a 5-star rating for the Cryptography book authored by A. Bruen and myself. The reviewer considers the book an insightful interdisciplinary orientation on the subject, and gave this book the highest rating among similar books. Thanks.

Bonus: we are in good company too!

## More reviews for the AMS

I have a few new reviews of papers on cryptography on my updated page. For those interested in the security of NMAC and HMAC or affiliation-hiding key exchanges, I recommend reading the reviews.
They include links to relevant papers.

## Alan Turing

He deserved much better. (National Post, 14 Sep 2009) In the very distant future, the name of Alan Turing (1912-1954) will be among the very few for which the 20th century is remembered, long after most of the politicians, artists and celebrities have receded into confusion and oblivion. His stature is… read more…

## A 200 year old cipher recently broken

This excellent article in the WSJ described the recently broken Patterson cipher. Dr. Smithline from the Center for Communications Research in Princeton, N.J., got the cipher from a neighbour working on a school project about Thomas Jefferson. Make sure to check the interactive tab on the article for a very well done graphical description of the cipher. h/t Paul

## ENIGMA encryption cracker Heroes

ENIGMA crackers reunite at Bletchley Park. I had the honour to meet one of them, now an emeritus math professor. Check this article for pictures of the Turing Bombe, the electro-mechanical code-breaking machine used by the British to crack 3,000 Enigma messages a day during the Second World War. CrypTool ver. 1.4 has a very well done simulator of the ENIGMA machine encryption.

I recently discussed the problems associated with weak passwords here. Since then, there have been a few cases of hackers publishing passwords stolen from popular sites such as phpBB, or the passwords that the Conficker worm uses to spread across shares. Some researchers report that people often use the same password on many websites, making themselves vulnerable to serious attack if the password for a low-value website is the same as the one used in a high-value target. Password selection tips abound, and as long as your password has enough entropy, users' data is somewhat out of reach of most hackers.
Despite the advice of security gurus, the manifest limitations of the average human brain for generating and remembering more than a few passwords are a physical barrier to the widespread adoption of secure practices. Password managers may help to keep your passwords organized. They have functions to generate strong passwords and can connect directly with browsers or e-mail programs. Another way around this is the OpenID network, which allows users to have one identity for multiple on-line services. The OpenID protocol is inclusive enough that it can work as an authenticator using biometrics or smart tokens. OpenID is still in the adoption phase; not all online services accept it.

## Collisions, a secure hash function killer (MD5, SHA1, SHA2)

The trouble with the use of MD5 in digital signatures recently uncovered by Sotirov et al. is common to other hash functions. NIST has been discouraging the use of MD5 and even SHA-1 for many years. A good account of this was posted by Dustin Trammell here. Because the output of a hash function is of a fixed length, usually smaller than the input, there will necessarily be collisions. The collision-free property for a hash function is thus defined by:

A function $H$ that maps an arbitrary-length message $M$ to a fixed-length message digest $MD$ is a collision-free hash function if:

1. It is a one-way hash function.
2. It is hard to find two distinct messages $(M', M)$ that hash to the same result $H(M')=H(M)$.

Cryptographers therefore talk about "relatively collision-free" hash functions. A good hash function should be designed with the Avalanche Criterion in mind. The Avalanche Criterion (AC) is used in the analysis of S-boxes, or substitution boxes. S-boxes take a string as input and produce an encoded string as output. The avalanche criterion requires that if any one bit of the input to an S-box is changed, about half of the bits output by the S-box should change their values.
Therefore, even if collisions are unavoidable, for a well-designed hash function there should be no way to generate two strings with the same hash value other than brute force.

## The end of the road for MD5-signed SSL Certificates

X.509 certificates signed by Certificate Authorities that use the MD5 function are certainly going to disappear from the Internet, as flaws in MD5 were successfully exploited to generate a rogue certificate that would be considered valid by all browsers. The proof of concept was recently published by A. Sotirov et al., although the basis for the hack has been known for a few years now. The researchers exploited collisions (two different strings that hash to the same value) in MD5 and the fact that CAs use sequential numbering of certificates upon issuance. News that SSL is broken is exaggerated, as many CAs are already using SHA-1 (a stronger hash function), and the ones that were using MD5 are switching quickly after publication of the flaw.
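The avalanche behaviour described above can be observed directly in a modern hash function. A minimal sketch using Python's standard hashlib (the test message is arbitrary):

```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"the quick brown fox"                 # arbitrary test message
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]   # the same message with one input bit flipped

d1 = hashlib.sha256(msg).digest()
d2 = hashlib.sha256(flipped).digest()

# Avalanche effect: of the 256 output bits, roughly half should differ.
print(bit_difference(d1, d2), "of 256 bits changed")
```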
https://www.cuemath.com/ncert-solutions/q-6-exercise-11-3-mensuration-class-8-maths/
Ex.11.3 Q6 Mensuration Solution - NCERT Maths Class 8

Question

Describe how the two figures at the right are alike and how they are different. Which box has the larger lateral surface area? (One figure is a cylinder of diameter 7 cm and height 7 cm; the other is a cube of edge 7 cm. The original figures are not reproduced here.)

What is known?

The shapes and their respective dimensions.

What is unknown?

The lateral surface areas.

Reasoning:

If the curved surface of the cylinder is unrolled, it forms a rectangular strip whose length equals the circumference of the circular base. Visually, all the faces of a cube are equal squares, so the length, height and width of a cube are equal and the area of each face is the same.

Steps:

Both figures are alike in that they have the same height. The difference between the two figures is that one is a cylinder and the other is a cube.

Length of one side of cube $$(l) = 7\, \rm{cm}$$
Height of one side of cube $$(h) = 7\, \rm{cm}$$
Width of one side of cube $$(b) = 7\, \rm{cm}$$

Lateral surface area of the cube
\begin{align}&= (h \times l + h \times b + h \times l + h \times b)\\ &= 4{l^2} \qquad \because \{ l = h = b\} \\ &= 4 \times {(7)^2}\\&= 196\,{\rm{cm}^2} \end{align}

Height of the cylinder $$h = 7\,\rm{cm}$$
Radius of the cylinder
\begin{align}r = \frac{7}{2}\,\rm{cm} = 3.5\,\rm{cm} \end{align}

Lateral surface area of the cylinder
\begin{align}&= 2\pi rh\\&= 2 \times \frac{22}{7} \times 3.5 \times 7\\&= 154\, \rm{cm^2} \end{align}

Hence, the cube has the larger lateral surface area.
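The arithmetic above can be double-checked with a short script (using math.pi rather than the textbook's 22/7 approximation):

```python
import math

side = 7.0                        # cube edge, in cm
h = 7.0                           # cylinder height, in cm
r = 7.0 / 2                       # cylinder radius, in cm

cube_lsa = 4 * side**2            # four lateral square faces
cyl_lsa = 2 * math.pi * r * h     # curved surface of the cylinder

print(cube_lsa)                   # 196.0 cm^2
print(round(cyl_lsa, 1))          # 153.9 cm^2 (154 with pi approximated by 22/7)
```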
https://math.mit.edu/probability/
# MIT Probability Seminar

## Current Organizers

• Alexei Borodin
• Promit Ghosal
• Jimmy He
• Elchanan Mossel
• Philippe Rigollet
• Scott Sheffield
• Yair Shenfeld
• Nike Sun
• Dan Mikulincer

# Fall 2022

Monday 4.15 - 5.15 pm, Room 2-147

Scheduled virtual talks will be held on Zoom, Monday 4:15-5:15 pm. A link to a Zoom classroom will appear here!

## Schedule

• September 12
Room: 2-147
Guilherme Silva, Universidade de São Paulo (ICMC - USP)
Universality for a class of statistics of Hermitian random matrices and the integro-differential Painlevé II equation

Abstract: It has been known since the 1990s that fluctuations of eigenvalues of random matrices, when appropriately scaled and in the sense of one-point distribution, converge to the Airy2 point process in the large matrix limit. In turn, the latter can be described by the celebrated Tracy-Widom distribution. In this talk we discuss recent findings of Ghosal and myself, showing that certain statistics of eigenvalues also converge universally to appropriate statistics of the Airy2 point process, interpolating between a hard and soft edge of eigenvalues. Such statistics connect also to the integro-differential Painlevé II equation, in analogy with the celebrated Tracy-Widom connection between Painlevé II and the Airy2 process.

• September 19 *** Special Seminar Starting at 3pm! ***
Room: 2-132
Matteo Mucciconi, University of Warwick
A bijective approach to solvable KPZ models

Abstract: Explicit solutions of random growth models in the KPZ universality class have attracted, in the last two decades, significant attention in Mathematical Physics. A common approach to the problem, explored in the last 15 years, leverages remarkable relations between the KPZ equation and quantum integrable systems. Here, I will introduce a new approach to the solutions of KPZ models, based on a bijection discovered by Imamura, Sasamoto and myself last year.
This is a generalization of the celebrated Robinson-Schensted-Knuth correspondence relating at once 1) solvable growth models, 2) determinantal point processes of free fermionic origin and 3) models of Last Passage Percolation on a cylinder. I will enumerate some of the early applications of this new approach and I will give an overview of the technical tools needed, which include Kashiwara's crystals and the inverse scattering method for solitonic systems.

• September 26
Room: 2-147
Abstract:

• October 3
Room: 2-147
Learning low-degree functions on the discrete hypercube

Abstract: Let f be an unknown function on the n-dimensional discrete hypercube. How many values of f do we need in order to approximately reconstruct the function? In this talk we shall discuss the random query model for this fundamental problem from computational learning theory. We will explain a newly discovered connection with a family of polynomial inequalities going back to Littlewood (1930) which will in turn allow us to derive sharper estimates for the query complexity of this model, exponentially improving those which follow from the classical Low-Degree Algorithm of Linial, Mansour and Nisan (1989). Time permitting, we will also show a matching information-theoretic lower bound. Based on joint works with Paata Ivanisvili (UC Irvine) and Lauritz Streck (Cambridge).

• October 10
Room: 2-147
Abstract:

• October 17
Room: 2-147
Abstract:

• October 24
Room: 2-147
Hao Shen
Abstract:

• October 31
Room: 2-147
Robert Hough, Stony Brook University
Covering systems of congruences

Abstract: A distinct covering system of congruences is a list of congruences

$a_i \bmod m_i, \qquad i = 1, 2, ..., k$

whose union is the integers.
Erdős asked if the least modulus $m_1$ of a distinct covering system of congruences can be arbitrarily large (the minimum modulus problem for covering systems, a $1000 prize problem) and if there exist distinct covering systems of congruences all of whose moduli are odd (the odd problem for covering systems, a $25 prize problem). I'll discuss my proof of a negative answer to the minimum modulus problem, and a quantitative refinement with Pace Nielsen that proves that any distinct covering system of congruences has a modulus divisible by either 2 or 3. The proofs use the probabilistic method and in particular use a sequence of pseudorandom probability measures adapted to the covering process. Time permitting, I may briefly discuss a reformulation of our method due to Balister, Bollobás, Morris, Sahasrabudhe and Tiba which solves a conjecture of Schinzel (any distinct covering system of congruences has one modulus that divides another) and gives a negative answer to the square-free version of the odd problem.

• November 7
Room: 2-147
Abstract:

• November 14
Room: 2-147
Sven Wang, MIT
Abstract:

• November 21
Room: 2-147
Emma Bailey, City University of New York (CUNY)
Abstract:

• November 28
Room: 2-147
Sayan Das, Columbia University
Abstract:

• December 5
Room: 2-147
Changji Xu, Harvard University
Abstract:

• December 12
Room: 2-147
Hoi Nguyen, The Ohio State University
Abstract:
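A covering system as defined in the October 31 abstract can be verified by brute force over one full period of the moduli. The system below is a classical textbook example, not one taken from the talk:

```python
from math import lcm

def is_covering_system(congruences):
    """Check that every integer satisfies at least one congruence a_i (mod m_i).
    By periodicity it suffices to test one full period, the lcm of the moduli."""
    period = lcm(*(m for _, m in congruences))
    return all(any(n % m == a for a, m in congruences) for n in range(period))

# A classical distinct covering system; its least modulus is 2:
system = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]
print(is_covering_system(system))   # True
```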
https://math.stackexchange.com/questions/2941391/what-does-partial-a-mean-when-a-is-a-matrix
# What does $\partial A$ mean when $A$ is a matrix?

I'm trying to understand the formulas in "The Matrix Cookbook", but I'm having a hard time understanding their intended meanings. For example, what does $\partial A$ mean exactly? How is $\partial$ defined as an operator? Does it mean that $A$ is thought of as a matrix with its entries changing with respect to some variable $t$, or is there another interpretation?

And here's an almost philosophical question: what good does differentiating matrices do exactly? I would be very happy to know a few examples of the kind of results about matrices that can be proven by differentiating.

Meanwhile, is this in any way related to Fréchet's definition of a derivative? Here's the definition I'm referring to: let's say $(X,\|\cdot\|_X)$ and $(Y,\|\cdot\|_Y)$ are two Banach spaces and $f: X\to Y$ is a function. We define $D_pf$ to be a function in $\mathcal{L}(X,Y)$ such that $$\lim_{h\to 0}\frac{\|f(p+h)-f(p)-D_pf(h)\|_Y}{\|h\|_X}=0$$ Are these notions related? If not, how do we know when to use which?

From what I can tell, the following conventions are used:

• $\partial \mathbf{A}$ refers to differentiating every element of the matrix with respect to some unspecified scalar variable.
• $\frac{\partial a}{\partial \mathbf{X}}$, where $a$ is a scalar and $\mathbf{X}$ is a matrix, refers to the matrix $\left[\frac{\partial a}{\partial X_{ij}}\right]_{ij}$.
• When differentiating with respect to a matrix $\mathbf{X}$, matrices $\mathbf{A}$, $\mathbf{B}$, etc. are considered to be constant with respect to each $X_{ij}$.

The cookbook never puts matrices on both the top and bottom, preferring to use index notation when that happens.

As for why we would want to, well, why wouldn't we want to? Matrices can describe vectors and (rank 2) tensors, and vector and tensor calculus certainly have no end of uses. And no, the Fréchet derivative is not relevant here.

• I see. I think now I understand better. Can you please tell me the general rule for differentiating an $(m,n)$ tensor $T$ with respect to a $(p,q)$ tensor $S$? – stressed out Oct 4 '18 at 0:11
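The second convention can be sanity-checked numerically. The sketch below uses finite differences to verify the cookbook identity $\partial\,\mathrm{tr}(\mathbf{AX})/\partial\mathbf{X}=\mathbf{A}^T$; the matrix entries are arbitrary test values:

```python
def trace_AX(A, X):
    """a = tr(A X) for square matrices given as lists of lists."""
    n = len(A)
    return sum(A[i][k] * X[k][i] for i in range(n) for k in range(n))

def grad_wrt_X(f, A, X, eps=1e-6):
    """Central finite-difference estimate of [df/dX_ij]_ij, the cookbook layout."""
    n = len(X)
    g = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            X[i][j] += eps
            up = f(A, X)
            X[i][j] -= 2 * eps
            down = f(A, X)
            X[i][j] += eps              # restore the perturbed entry
            g[i][j] = (up - down) / (2 * eps)
    return g

A = [[1.0, 2.0], [3.0, 4.0]]
X = [[0.5, -1.0], [2.0, 0.25]]
g = grad_wrt_X(trace_AX, A, X)
# The identity d tr(AX)/dX = A^T, checked numerically:
print(g)   # approximately [[1.0, 3.0], [2.0, 4.0]], i.e. the transpose of A
```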
http://mathhelpforum.com/trigonometry/95322-right-triangle-word-problem.html
# Thread: A Right Triangle Word Problem

1. ## A Right Triangle Word Problem

First of all, I'm new here in this forum, so I really, really hope that people here will help me overcome my weakness in trigonometry. Here's my question:

A tower & a monument stand on a level plane. The angles of depression of the top & bottom of the monument viewed from the top of the tower are 13 degrees & 31 degrees respectively; the height of the tower is 145 ft. Find the height of the monument.

Please make an illustration of this problem so that I can understand better how to solve it.

2. Could you please elaborate? I mean, explain a little bit about what you mean by "top and bottom" and "viewed". A picture's worth a thousand words. Do you have a drawing or plotting program on your PC?

3. This is the problem in my book and I do not have any illustrations of it. This problem is so confusing.

4. My guess is that the illustration would be like the attached pic. The left is the tower and the right is the monument. Find the distance between the base of the tower and the base of the monument:

$\cot 31^{\circ} = \frac{d}{145}$

... and solve for d. Then, draw another dotted line from the top of the monument to the line of sight and call it h (I forgot to include that in my diagram, sorry). Find this distance:

$\tan 13^{\circ} = \frac{h}{d}$

Plug in the d from earlier, and solve for h. The height of the monument would then be 145 - h.

5. Yes, this is the illustration that I really need. Thanks, dude. Case closed!

By the way, the answer for this problem is 89.29 ft.
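The two-step computation from post 4 can be carried out numerically; a quick sketch:

```python
import math

tower = 145.0                      # tower height, in ft
dep_top = math.radians(13)         # angle of depression to the top of the monument
dep_bottom = math.radians(31)      # angle of depression to the bottom

d = tower / math.tan(dep_bottom)   # horizontal distance: cot(31 deg) = d / 145
h = d * math.tan(dep_top)          # drop from the tower top to the monument top
monument = tower - h
print(round(monument, 2))          # 89.29
```

This matches the 89.29 ft answer quoted in the thread.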
https://www.wackbag.com/threads/networking-type-question.78752/
# Networking-Type Question

#### Arc Lite

##### As big as your Imagination...

This is for work. I need to be able to take the network cable that comes from the main network to my PC and split that into a second signal for use with a laptop, so I can use both the desktop and the laptop. Would it be something as simple as a network switch like this?

5 port network switches
http://tinyurl.com/35e4zt
http://tinyurl.com/32t89m

#### Hate & Discontent

##### Yo, homie. Is that my briefcase?

Yep, all you need is a standard network switch. Assuming that your network admins aren't using manually assigned IP addresses, you should be good to go.

#### Arc Lite

##### As big as your Imagination...

Cool. I think I'm good to go. Thanks.
https://www.meritnation.com/cbse-class-11-science/physics/physics-part-ii-ncert-solutions/oscillations/ncert-solutions/41_4_1336_192_358_4691
NCERT Solutions for Class 11 Science Physics Chapter 6 Oscillations are provided here with simple step-by-step explanations.

#### Question 14.1:

Which of the following examples represent periodic motion?

(a) A swimmer completing one (return) trip from one bank of a river to the other and back.
(b) A freely suspended bar magnet displaced from its N-S direction and released.
(c) A hydrogen molecule rotating about its centre of mass.
(d) An arrow released from a bow.

#### Answer:

Answer: (b) and (c)

(a) The swimmer's motion is not periodic. The motion of the swimmer between the banks of a river is back and forth. However, it does not have a definite period. This is because the time taken by the swimmer during his back and forth journey may not be the same.

(b) The motion of a freely-suspended magnet, if displaced from its N-S direction and released, is periodic. This is because the magnet oscillates about its position with a definite period of time.

(c) When a hydrogen molecule rotates about its centre of mass, it comes to the same position again and again after an equal interval of time. Such motion is periodic.

(d) An arrow released from a bow moves only in the forward direction. It does not come backward. Hence, this motion is not periodic.

#### Question 14.2:

Which of the following examples represent (nearly) simple harmonic motion and which represent periodic but not simple harmonic motion?

(a) the rotation of the earth about its axis.
(b) motion of an oscillating mercury column in a U-tube.
(c) motion of a ball bearing inside a smooth curved bowl, when released from a point slightly above the lowermost point.
(d) general vibrations of a polyatomic molecule about its equilibrium position.

#### Answer:

Answer: (b) and (c) are SHMs; (a) and (d) are periodic, but not SHMs.

(a) During its rotation about its axis, the earth comes to the same position again and again in equal intervals of time. Hence, it is a periodic motion. However, this motion is not simple harmonic. This is because the earth does not have a to and fro motion about its axis.

(b) An oscillating mercury column in a U-tube is simple harmonic. This is because the mercury moves to and fro on the same path, about the fixed position, with a certain period of time.

(c) The ball moves to and fro about the lowermost point of the bowl when released. Also, the ball comes back to its initial position in the same period of time, again and again. Hence, its motion is periodic as well as simple harmonic.

(d) A polyatomic molecule has many natural frequencies of oscillation. Its vibration is the superposition of individual simple harmonic motions of a number of different molecules. Hence, it is not simple harmonic, but periodic.

#### Question 14.3:

Figure 14.27 depicts four x-t plots for linear motion of a particle. Which of the plots represent periodic motion? What is the period of motion (in case of periodic motion)?

(The four plots (a)-(d) of Figure 14.27 are not reproduced here.)

#### Answer:

Answer: (b) and (d) are periodic.

(a) It is not a periodic motion. This represents a unidirectional, linear uniform motion. There is no repetition of motion in this case.

(b) In this case, the motion of the particle repeats itself after 2 s. Hence, it is a periodic motion, having a period of 2 s.

(c) It is not a periodic motion. This is because the particle repeats the motion in one position only. For a periodic motion, the entire motion of the particle must be repeated in equal intervals of time.
(d) In this case, the motion of the particle repeats itself after 2 s. Hence, it is a periodic motion, having a period of 2 s.

#### Question 14.4:
Which of the following functions of time represent (a) simple harmonic, (b) periodic but not simple harmonic, and (c) non-periodic motion? Give period for each case of periodic motion (ω is any positive constant):
(a) sin ωt – cos ωt
(b) sin³ ωt
(c) 3 cos (π/4 – 2ωt)
(d) cos ωt + cos 3ωt + cos 5ωt
(e) exp (–ω²t²)
(f) 1 + ωt + ω²t²

#### Answer:
(a) SHM
The given function is:
sin ωt – cos ωt = √2 sin (ωt – π/4)
This function represents SHM, as it can be written in the form a sin (ωt + φ). Its period is 2π/ω.
(b) Periodic, but not SHM
The given function is:
${\mathrm{sin}}^{3}\omega t=\frac{1}{4}\left[3\mathrm{sin}\omega t-\mathrm{sin}3\omega t\right]$
The terms sin ωt and sin 3ωt individually represent simple harmonic motion (SHM). However, the superposition of the two SHMs is periodic, but not simple harmonic. Its period is 2π/ω.
(c) SHM
The given function is:
3 cos (π/4 – 2ωt) = 3 cos (2ωt – π/4)
This function represents simple harmonic motion because it can be written in the form a cos (ωt + φ). Its period is 2π/2ω = π/ω.
(d) Periodic, but not SHM
The given function is cos ωt + cos 3ωt + cos 5ωt. Each individual cosine function represents SHM. However, the superposition of three simple harmonic motions is periodic, but not simple harmonic. Its period is 2π/ω.
(e) Non-periodic motion
The given function exp (–ω²t²) is an exponential function. Exponential functions do not repeat themselves. Therefore, it is a non-periodic motion.
(f) The given function 1 + ωt + ω²t² is non-periodic.

#### Question 14.5:
A particle is in linear simple harmonic motion between two points, A and B, 10 cm apart. Take the direction from A to B as the positive direction and give the signs of velocity, acceleration and force on the particle when it is (a) at the end A, (b) at the end B, (c) at the mid-point of AB going towards A, (d) at 2 cm away from B going towards A, (e) at 3 cm away from A going towards B, and (f) at 4 cm away from B going towards A.
#### Answer:
(a) Zero, Positive, Positive
(b) Zero, Negative, Negative
(c) Negative, Zero, Zero
(d) Negative, Negative, Negative
(e) Positive, Positive, Positive
(f) Negative, Negative, Negative
Explanation: The given situation is shown in the following figure. Points A and B are the two end points, with AB = 10 cm. O is the midpoint of the path. A particle is in linear simple harmonic motion between the end points.
(a) At the extreme point A, the particle is at rest momentarily. Hence, its velocity is zero at this point. Its acceleration is positive, as it is directed along AO. The force is also positive in this case, as it is directed rightward, along AO.
(b) At the extreme point B, the particle is at rest momentarily. Hence, its velocity is zero at this point. Its acceleration is negative, as it is directed along BO. The force is also negative in this case, as it is directed leftward, along BO.
(c) The particle is executing simple harmonic motion, with O as its mean position. Its speed at the mean position O is maximum; the velocity is negative because the particle is moving leftward, toward A. The acceleration and force on a particle executing SHM are zero at the mean position.
(d) The particle is moving toward point O from the end B. This direction of motion is opposite to the conventional positive direction, which is from A to B. Hence, the particle's velocity and acceleration, and the force on it, are all negative.
(e) The particle is moving toward point O from the end A. This direction of motion is from A to B, which is the conventional positive direction. Hence, the values of velocity, acceleration, and force are all positive.
(f) This case is similar to the one given in (d).

#### Question 14.6:
Which of the following relationships between the acceleration a and the displacement x of a particle involve simple harmonic motion?
(a) a = 0.7x (b) a = –200x2 (c) a = –10x (d) a = 100x3 #### Answer: (c) A motion represents simple harmonic motion if it is governed by the force law: F = –kx ma = –k Where, F is the force m is the mass (a constant for a body) x is the displacement a is the acceleration k is a constant Among the given equations, only equation a = –10 x is written in the above form with Hence, this relation represents SHM. #### Question 14.7: The motion of a particle executing simple harmonic motion is described by the displacement function, x (t) = A cos (ωt + φ). If the initial (t = 0) position of the particle is 1 cm and its initial velocity is ω cm/s, what are its amplitude and initial phase angle? The angular frequency of the particle is π s–1. If instead of the cosine function, we choose the sine function to describe the SHM: x = B sin (ωt + α), what are the amplitude and initial phase of the particle with the above initial conditions. #### Answer: Initially, at t = 0: Displacement, x = 1 cm Initial velocity, v = ω cm/sec. Angular frequency, ω = π rad/s–1 It is given that: Squaring and adding equations (i) and (ii), we get: Dividing equation (ii) by equation (i), we get: SHM is given as: Putting the given values in this equation, we get: Velocity, Substituting the given values, we get: Squaring and adding equations (iii) and (iv), we get: Dividing equation (iii) by equation (iv), we get: #### Question 14.8: A spring balance has a scale that reads from 0 to 50 kg. The length of the scale is 20 cm. A body suspended from this balance, when displaced and released, oscillates with a period of 0.6 s. What is the weight of the body? #### Answer: Maximum mass that the scale can read, M = 50 kg Maximum displacement of the spring = Length of the scale, l = 20 cm = 0.2 m Time period, T = 0.6 s Maximum force exerted on the spring, F = Mg Where, g = acceleration due to gravity = 9.8 m/s2 F = 50 × 9.8 = 490 ∴Spring constant, Mass m, is suspended from the balance. 
Time period, ∴Weight of the body = mg = 22.36 × 9.8 = 219.167 N Hence, the weight of the body is about 219 N. #### Question 14.9: A spring having with a spring constant 1200 N m–1 is mounted on a horizontal table as shown in Fig. A mass of 3 kg is attached to the free end of the spring. The mass is then pulled sideways to a distance of 2.0 cm and released. Determine (i) the frequency of oscillations, (ii) maximum acceleration of the mass, and (iii) the maximum speed of the mass. #### Answer: Spring constant, k = 1200 N m–1 Mass, m = 3 kg Displacement, A = 2.0 cm = 0.02 cm (i) Frequency of oscillation v, is given by the relation: Where, T is the time period Hence, the frequency of oscillations is 3.18 cycles per second. (ii) Maximum acceleration (a) is given by the relation: a = ω2 A Where, ω = Angular frequency = A = Maximum displacement Hence, the maximum acceleration of the mass is 8.0 m/s2. (iii) Maximum velocity, vmax = Aω Hence, the maximum velocity of the mass is 0.4 m/s. #### Question 14.10: In Exercise 14.9, let us take the position of mass when the spring is unstreched as x = 0, and the direction from left to right as the positive direction of x-axis. Give x as a function of time t for the oscillating mass if at the moment we start the stopwatch (t = 0), the mass is (a) at the mean position, (b) at the maximum stretched position, and (c) at the maximum compressed position. In what way do these functions for SHM differ from each other, in frequency, in amplitude or the initial phase? #### Answer: (a) x = 2sin 20t (b) x = 2cos 20t (c) x = –2cos 20t The functions have the same frequency and amplitude, but different initial phases. Distance travelled by the mass sideways, A = 2.0 cm Force constant of the spring, k = 1200 N m–1 Mass, m = 3 kg Angular frequency of oscillation: = 20 rad s–1 (a) When the mass is at the mean position, initial phase is 0. 
Displacement, x = Asin ωt = 2sin 20t (b) At the maximum stretched position, the mass is toward the extreme right. Hence, the initial phase is. Displacement, = 2cos 20t (c) At the maximum compressed position, the mass is toward the extreme left. Hence, the initial phase is. Displacement, = –2cos 20t The functions have the same frequency and amplitude (2 cm), but different initial phases. #### Question 14.11: Figures 14.29 correspond to two circular motions. The radius of the circle, the period of revolution, the initial position, and the sense of revolution (i.e. clockwise or anti-clockwise) are indicated on each figure. Obtain the corresponding simple harmonic motions of the x-projection of the radius vector of the revolving particle P, in each case. #### Answer: (a) Time period, T = 2 s Amplitude, A = 3 cm At time, t = 0, the radius vector OP makes an angle with the positive x-axis, i.e., phase angle Therefore, the equation of simple harmonic motion for the x-projection of OP, at time t, is given by the displacement equation: (b) Time period, T = 4 s Amplitude, a = 2 m At time t = 0, OP makes an angle π with the x-axis, in the anticlockwise direction. Hence, phase angle, Φ = + π Therefore, the equation of simple harmonic motion for the x-projection of OP, at time t, is given as: #### Question 14.12: Plot the corresponding reference circle for each of the following simple harmonic motions. Indicate the initial (t = 0) position of the particle, the radius of the circle, and the angular speed of the rotating particle. For simplicity, the sense of rotation may be fixed to be anticlockwise in every case: (x is in cm and t is in s). (a) x = –2 sin (3t + π/3) (b) x = cos (π/6 – t) (c) x = 3 sin (2πt + π/4) (d) x = 2 cos πt #### Answer: (a) If this equation is compared with the standard SHM equation, then we get: The motion of the particle can be plotted as shown in the following figure. 
(b) If this equation is compared with the standard SHM equation, then we get: The motion of the particle can be plotted as shown in the following figure. (c) If this equation is compared with the standard SHM equation, then we get: Amplitude, A = 3 cm Phase angle, = 135° Angular velocity, The motion of the particle can be plotted as shown in the following figure. (d) x = 2 cos πt If this equation is compared with the standard SHM equation, then we get: Amplitude, A = 2 cm Phase angle, Φ = 0 Angular velocity, ω = π rad/s The motion of the particle can be plotted as shown in the following figure. #### Question 14.13: Figure 14.30 (a) shows a spring of force constant k clamped rigidly at one end and a mass m attached to its free end. A force F applied at the free end stretches the spring. Figure 14.30 (b) shows the same spring with both ends free and attached to a mass m at either end. Each end of the spring in Fig. 14.30(b) is stretched by the same force F. (a) What is the maximum extension of the spring in the two cases? (b) If the mass in Fig. (a) and the two masses in Fig. (b) are released, what is the period of oscillation in each case? #### Answer: (a) For the one block system: When a force F, is applied to the free end of the spring, an extension l, is produced. For the maximum extension, it can be written as: F = kl Where, k is the spring constant Hence, the maximum extension produced in the spring, For the two block system: The displacement (x) produced in this case is: Net force, F = +2 kx (b) For the one block system: For mass (m) of the block, force is written as: Where, x is the displacement of the block in time t It is negative because the direction of elastic force is opposite to the direction of displacement. Where, ω is angular frequency of the oscillation ∴Time period of the oscillation, For the two block system: It is negative because the direction of elastic force is opposite to the direction of displacement. 
Where, angular frequency, ω = √(2k/m)
∴ Time period, T = 2π√(m/2k)

#### Question 14.14:
The piston in the cylinder head of a locomotive has a stroke (twice the amplitude) of 1.0 m. If the piston moves with simple harmonic motion with an angular frequency of 200 rad/min, what is its maximum speed?

#### Answer:
Angular frequency of the piston, ω = 200 rad/min
Stroke = 1.0 m
Amplitude, A = 1.0/2 = 0.5 m
The maximum speed (vmax) of the piston is given by the relation:
vmax = Aω = 0.5 × 200 = 100 m/min

#### Question 14.15:
The acceleration due to gravity on the surface of moon is 1.7 ms–2. What is the time period of a simple pendulum on the surface of moon if its time period on the surface of earth is 3.5 s? (g on the surface of earth is 9.8 ms–2)

#### Answer:
Acceleration due to gravity on the surface of moon, g′ = 1.7 m s–2
Acceleration due to gravity on the surface of earth, g = 9.8 m s–2
Time period of a simple pendulum on earth, T = 3.5 s
T = 2π√(l/g), where l is the length of the pendulum. The length of the pendulum remains constant.
On moon's surface, time period, T′ = 2π√(l/g′) = T√(g/g′) = 3.5 × √(9.8/1.7) ≈ 8.4 s
Hence, the time period of the simple pendulum on the surface of moon is 8.4 s.

#### Question 14.16:
Answer the following questions:
(a) Time period of a particle in SHM depends on the force constant k and mass m of the particle: T = 2π√(m/k). A simple pendulum executes SHM approximately. Why then is the time period of a pendulum independent of the mass of the pendulum?
(b) The motion of a simple pendulum is approximately simple harmonic for small angle oscillations. For larger angles of oscillation, a more involved analysis shows that T is greater than 2π√(l/g). Think of a qualitative argument to appreciate this result.
(c) A man with a wristwatch on his hand falls from the top of a tower. Does the watch give correct time during the free fall?
(d) What is the frequency of oscillation of a simple pendulum mounted in a cabin that is freely falling under gravity?
#### Answer:
(a) The time period of a simple pendulum is T = 2π√(m/k). For a simple pendulum, the force constant k is itself proportional to the mass m (k = mg/l), so m/k = l/g is a constant. Hence, the time period T of a simple pendulum is independent of the mass of the bob.
(b) In the case of a simple pendulum, the restoring force acting on the bob of the pendulum is given as:
F = –mg sin θ
Where,
F = restoring force
m = mass of the bob
g = acceleration due to gravity
θ = angle of displacement
For small θ, sin θ ≈ θ. For large θ, sin θ is less than θ. This decreases the effective value of g. Hence, the time period increases beyond 2π√(l/g), where l is the length of the simple pendulum.
(c) The time shown by the wristwatch of a man falling from the top of a tower is not affected by the fall. A wristwatch does not work on the principle of a simple pendulum, so it is not affected by the acceleration due to gravity during free fall; its working depends on spring action.
(d) When a simple pendulum mounted in a cabin falls freely under gravity, the effective acceleration due to gravity in the cabin is zero. Hence, the frequency of oscillation of this simple pendulum is zero.

#### Question 14.17:
A simple pendulum of length l and having a bob of mass M is suspended in a car. The car is moving on a circular track of radius R with a uniform speed v. If the pendulum makes small oscillations in a radial direction about its equilibrium position, what will be its time period?

#### Answer:
The bob of the simple pendulum will experience the acceleration due to gravity and the centripetal acceleration provided by the circular motion of the car.
Acceleration due to gravity = g
Centripetal acceleration = v²/R
Where, v is the uniform speed of the car and R is the radius of the track.
The two accelerations are perpendicular, so the effective acceleration is:
aeff = √(g² + v⁴/R²)
∴ Time period, T = 2π√(l/aeff), where l is the length of the pendulum.

#### Question 14.18:
A cylindrical piece of cork of density ρ, base area A and height h floats in a liquid of density ρ₁. The cork is depressed slightly and then released.
Show that the cork oscillates up and down simple harmonically with a period T = 2π√(hρ/ρ₁g), where ρ is the density of the cork and ρ₁ the density of the liquid. (Ignore damping due to viscosity of the liquid.)

#### Answer:
Base area of the cork = A
Height of the cork = h
Density of the liquid = ρ₁
Density of the cork = ρ
In equilibrium: Weight of the cork = Weight of the liquid displaced by the floating cork.
Let the cork be depressed slightly by x. As a result, some extra liquid of a certain volume is displaced. Hence, an extra up-thrust acts upward and provides the restoring force to the cork.
Up-thrust = Restoring force, F = Weight of the extra liquid displaced
F = –(Volume × Density × g)
Volume = Area × Distance through which the cork is depressed = Ax
F = –Aρ₁gx … (i)
According to the force law:
F = –kx, where k is a constant; comparing with (i), k = Aρ₁g.
The time period of the oscillations of the cork:
T = 2π√(m/k)
Where, m = mass of the cork = volume of the cork × density = base area × height × density of the cork = Ahρ
Hence, the expression for the time period becomes:
T = 2π√(Ahρ/Aρ₁g) = 2π√(hρ/ρ₁g)

#### Question 14.19:
One end of a U-tube containing mercury is connected to a suction pump and the other end to atmosphere. A small pressure difference is maintained between the two columns. Show that, when the suction pump is removed, the column of mercury in the U-tube executes simple harmonic motion.

#### Answer:
Area of cross-section of the U-tube = A
Density of the mercury column = ρ
Acceleration due to gravity = g
Restoring force, F = Weight of the mercury column of a certain height
F = –(Volume × Density × g)
F = –(A × 2h × ρ × g) = –2Aρgh = –k × displacement in one of the arms (h)
Where, 2h is the height difference of the mercury column between the two arms, and k is a constant, given by k = 2Aρg.
Time period, T = 2π√(m/k) = 2π√(m/2Aρg)
Where, m is the mass of the mercury column. Let l be the total length of the mercury in the U-tube. Then mass of mercury, m = volume of mercury × density of mercury = Alρ.
T = 2π√(Alρ/2Aρg) = 2π√(l/2g)
Hence, the mercury column executes simple harmonic motion with time period 2π√(l/2g).
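The restoring constant works out to k = 2Aρg and the mass to m = Alρ, so the period reduces to T = 2π√(m/k) = 2π√(l/(2g)): the cross-section and density cancel. A quick numerical check in Python (the column length of 0.2 m is an assumed example value, not from the exercise):

```python
import math

def mercury_column_period(l, g=9.8):
    """Period of the oscillating mercury column in a U-tube.

    With k = 2*A*rho*g and m = A*l*rho,
    T = 2*pi*sqrt(m/k) = 2*pi*sqrt(l/(2*g));
    the cross-section A and density rho cancel out.
    """
    return 2 * math.pi * math.sqrt(l / (2 * g))

# Example (assumed value): total mercury length of 0.2 m
print(round(mercury_column_period(0.2), 3))  # 0.635 (seconds)
```

Note that the period depends only on the total length of mercury and g, not on the tube's bore or on the density of the liquid.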
#### Question 14.20: An air chamber of volume V has a neck area of cross section a into which a ball of mass m just fits and can move up and down without any friction (Fig.14.33). Show that when the ball is pressed down a little and released, it executes SHM. Obtain an expression for the time period of oscillations assuming pressure-volume variations of air to be isothermal [see Fig. 14.33]. #### Answer: Volume of the air chamber = V Area of cross-section of the neck = a Mass of the ball = m The pressure inside the chamber is equal to the atmospheric pressure. Let the ball be depressed by x units. As a result of this depression, there would be a decrease in the volume and an increase in the pressure inside the chamber. Decrease in the volume of the air chamber, ΔV = ax Volumetric strain Bulk Modulus of air, In this case, stress is the increase in pressure. The negative sign indicates that pressure increases with a decrease in volume. The restoring force acting on the ball, F = p × a In simple harmonic motion, the equation for restoring force is: F = –kx … (ii) Where, k is the spring constant Comparing equations (i) and (ii), we get: Time period, #### Question 14.21: You are riding in an automobile of mass 3000 kg. Assuming that you are examining the oscillation characteristics of its suspension system. The suspension sags 15 cm when the entire automobile is placed on it. Also, the amplitude of oscillation decreases by 50% during one complete oscillation. Estimate the values of (a) the spring constant k and (b) the damping constant b for the spring and shock absorber system of one wheel, assuming that each wheel supports 750 kg. #### Answer: (a) Mass of the automobile, m = 3000 kg Displacement in the suspension system, x = 15 cm = 0.15 m There are 4 springs in parallel to the support of the mass of the automobile. 
The equation for the restoring force for the system: F = –4kx = mg Where, k is the spring constant of the suspension system Time period, And = 5000 = 5 × 104 N/m Spring constant, k = 5 × 104 N/m (b) Each wheel supports a mass, M = = 750 kg For damping factor b, the equation for displacement is written as: The amplitude of oscillation decreases by 50%. Where, Time period, = 0.7691 s = 1351.58 kg/s Therefore, the damping constant of the spring is 1351.58 kg/s. #### Question 14.22: Show that for a particle in linear SHM the average kinetic energy over a period of oscillation equals the average potential energy over the same period. #### Answer: The equation of displacement of a particle executing SHM at an instant t is given as: Where, A = Amplitude of oscillation ω = Angular frequency The velocity of the particle is: The kinetic energy of the particle is: The potential energy of the particle is: For time period T, the average kinetic energy over a single cycle is given as: And, average potential energy over one cycle is given as: It can be inferred from equations (i) and (ii) that the average kinetic energy for a given time period is equal to the average potential energy for the same time period. #### Question 14.23: A circular disc of mass 10 kg is suspended by a wire attached to its centre. The wire is twisted by rotating the disc and released. The period of torsional oscillations is found to be 1.5 s. The radius of the disc is 15 cm. Determine the torsional spring constant of the wire. (Torsional spring constant α is defined by the relation J = –α θ, where J is the restoring couple and θ the angle of twist). #### Answer: Mass of the circular disc, m = 10 kg Radius of the disc, r = 15 cm = 0.15 m The torsional oscillations of the disc has a time period, T = 1.5 s The moment of inertia of the disc is: I = × (10) × (0.15)2 = 0.1125 kg m2 Time period, α is the torsional constant. = 1.972 Nm/rad Hence, the torsional spring constant of the wire is 1.972 Nm rad–1. 
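The torsional-pendulum calculation of Question 14.23 (I = ½mr² for a disc, and α = 4π²I/T² from T = 2π√(I/α)) can be reproduced in a few lines of Python; the function name is illustrative:

```python
import math

def torsional_constant(m, r, T):
    """Torsional spring constant alpha from T = 2*pi*sqrt(I/alpha).

    I = (1/2)*m*r**2 is the moment of inertia of a uniform disc
    about its central axis, so alpha = 4*pi**2 * I / T**2.
    """
    I = 0.5 * m * r**2          # moment of inertia of the disc
    return 4 * math.pi**2 * I / T**2

# Values from Question 14.23: m = 10 kg, r = 0.15 m, T = 1.5 s
print(round(torsional_constant(10, 0.15, 1.5), 3))  # 1.974
```

This agrees with the 1.972 N m/rad quoted above to within rounding of π.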
#### Question 14.24:
A body describes simple harmonic motion with amplitude of 5 cm and a period of 0.2 s. Find the acceleration and velocity of the body when the displacement is (a) 5 cm, (b) 3 cm, (c) 0 cm.

#### Answer:
Amplitude, A = 5 cm = 0.05 m
Time period, T = 0.2 s
Angular frequency, ω = 2π/T = 10π rad/s
(a) For displacement, x = 5 cm = 0.05 m:
Acceleration, a = –ω²x = –(10π)² × 0.05 = –5π² m/s²
Velocity, v = ω√(A² – x²) = 0
When the displacement of the body is 5 cm, its acceleration is –5π² m/s² and its velocity is 0.
(b) For displacement, x = 3 cm = 0.03 m:
Acceleration, a = –ω²x = –(10π)² × 0.03 = –3π² m/s²
Velocity, v = ω√(A² – x²) = 10π × √(0.05² – 0.03²) = 0.4π m/s
When the displacement of the body is 3 cm, its acceleration is –3π² m/s² and its velocity is 0.4π m/s.
(c) For displacement, x = 0:
Acceleration, a = 0
Velocity, v = ωA = 10π × 0.05 = 0.5π m/s
When the displacement of the body is 0, its acceleration is 0 and its velocity is 0.5π m/s.

#### Question 14.25:
A mass attached to a spring is free to oscillate, with angular velocity ω, in a horizontal plane without friction or damping. It is pulled to a distance x0 and pushed towards the centre with a velocity v0 at time t = 0. Determine the amplitude of the resulting oscillations in terms of the parameters ω, x0 and v0. [Hint: Start with the equation x = a cos (ωt) and note that the initial velocity is negative.]

#### Answer:
The displacement equation for an oscillating mass is given by:
x = A cos (ωt + θ)
Where, A is the amplitude, x is the displacement and θ is the phase constant.
Velocity, v = –Aω sin (ωt + θ)
At t = 0, x = x0:
x0 = A cos θ … (i)
And the initial velocity is v = –v0, so A sin θ = v0/ω … (ii)
Squaring and adding equations (i) and (ii), we get:
A² = x0² + (v0/ω)²
Hence, the amplitude of the resulting oscillation is A = √(x0² + v0²/ω²).
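The result of the last exercise, A = √(x0² + v0²/ω²), can be checked numerically. The sketch below uses assumed example values (x0 = 3, v0 = 4, ω = 1) chosen only to make the arithmetic obvious:

```python
import math

def shm_amplitude(x0, v0, omega):
    """Amplitude of SHM started at displacement x0 with speed v0
    (Question 14.25): A = sqrt(x0**2 + (v0/omega)**2)."""
    return math.sqrt(x0**2 + (v0 / omega)**2)

# Assumed example: x0 = 3, v0 = 4, omega = 1 gives A = sqrt(9 + 16) = 5
print(shm_amplitude(3.0, 4.0, 1.0))  # 5.0

# Limiting cases: released from rest (v0 = 0) gives A = x0,
# pushed from the centre (x0 = 0) gives A = v0/omega.
print(shm_amplitude(0.05, 0.0, 10.0))  # 0.05
```

The two limiting cases in the code match the expected physics: a mass released from rest oscillates with amplitude x0, and a mass kicked from equilibrium with speed v0 reaches v0/ω.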
https://en.wikipedia.org/wiki/User:Tomruen/Geodestic_sphere
# User:Tomruen/Geodestic sphere

Translation from [1].

Geode triangulation
Geode honeycomb

In mathematics, a geodesic sphere is a convex, non-regular polyhedron that approximates a sphere. This model is used in buildings whose architecture follows that shape: geodesic domes.

## Quick Overview

### Geode triangulation

Most geodes are built on the principle of starting from an icosahedron. Each vertex of the icosahedron is common to five triangular facets, adjacent in pairs, and five edges (sides of the facets) radiate from each of these vertices. Every facet of the icosahedron is an equilateral triangle, which we subdivide into smaller triangles that are then deformed (by radial projection) to bring them onto the sphere circumscribed about the icosahedron. Here are three examples of geodes, each corresponding to a different subdivision: In the first example, the edges of the faces of the icosahedron were divided into two segments. In the second, the edges were divided into three. Finally, in the last, they were divided into ten segments. This last model is also the one on which the Geode of the Museum of Science and Industry de la Villette was built. To locate the vertices of the original icosahedron, just find the points where five small triangles (instead of six) share the same vertex!

### Geode honeycomb

It is also possible to conceive honeycomb geodes by taking the dual polyhedron of a geode obtained by triangulation.

In the figure above (which is the dual polyhedron of the largest geode of the previous examples, the one based on a division into 10 segments), the sphere seems to be paved with hexagons. But careful observation reveals that among these hexagons actually hide twelve pentagons, corresponding to the vertices of the original icosahedron. It is impossible to cover a sphere using only hexagons, as shown by Euler's relation between the numbers of faces, edges and vertices of any polyhedron.
In the figure, three of the 12 pentagons are fully visible; a fourth, barely visible, is located near the edge of the figure, in the "eleven o'clock" direction (as on a clock face), and a fifth lies on the edge of the figure, at "half past three."

## Principles of geometric construction of a geode

Geodesic domes are structures based on a division (partition) of the faces of a regular polyhedron whose faces are equilateral triangles. There are only three types of regular polyhedra with such equilateral faces: the regular tetrahedron (N = 3), the regular octahedron (N = 4) and the regular icosahedron (N = 5); the notation N used here is the number of faces (and also the number of edges) that share the same vertex. The division of the faces is defined by two integer parameters a and b, positive or zero. The first parameter a must be strictly positive. The second parameter b can be zero but must not be greater than a.

Once the values of N, then a and b, are selected, the construction of the corresponding dome, which we will denote "geode M-a-b" (where the notation M must be replaced by III, IV or V, the Roman numeral corresponding to the value of N), takes place in six steps, which we will explain in detail with an emphasis on the case N = 5 (which covers the vast majority of geodes), illustrating the following three cases: ${\displaystyle a=7,b=0\,}$ and ${\displaystyle a=4,b=4\,}$ and finally ${\displaystyle a=5,b=3\,}$

Note: The figures above correspond, in the notation just explained, to the following geodes:

• In the introduction: the geode V-3-1 and its dual (in rotation)
• In section 1.1: the geode V-1-0 (icosahedron) and the geodes V-2-0, V-3-0 and V-10-0
• In section 1.2: the dual of the geode V-10-0

### From a regular icosahedron

#### Step 1

We construct a regular polyhedron (R) corresponding to the value of N.
#### Step 2

In the three drawings below, the vertex C is at the top, A at the bottom left and B at the bottom right.

• We select one of the faces of the polyhedron (R) (which is always an equilateral triangle) and one of the edges of that face. Let AB be the chosen edge and C the vertex of the face opposite that edge.
• We then divide the segment AB into (a + b) segments of equal length and number all the points as follows: point A gets number 0, the next point number 1, the following number 2, etc., and the last, that is to say the point B, number a + b. Let ${\displaystyle P_{0},P_{1},P_{2},...P_{a-1},P_{a},P_{a+1},...P_{a+b}\,}$ be the points obtained.
• We then trace the segment ${\displaystyle CP_{a}\,}$.
• Finally, we trace all the segments parallel to ${\displaystyle CP_{a}\,}$ passing through each of the points ${\displaystyle P_{0},P_{1},P_{2},...P_{a-1},P_{a},P_{a+1},...P_{a+b}\,}$, without going beyond the limits of the face ABC.

#### Step 3

• The same operation is repeated, replacing the edge AB and the vertex C first by BC and A, then by CA and B, thus obtaining a triple array of parallel and equidistant segments forming angles of 60° with one another and delimiting small equilateral triangles, some of which (unless the parameter b = 0) are incomplete. It is the vertices of these small triangles that will be used to construct the geodesic dome, including those sitting astride one of the edges of the face ABC.
• Of course, the operations described in steps 2 and 3 are repeated for all the faces of the polyhedron (R). Recall that the tetrahedron has 4 faces, the octahedron 8 and the icosahedron 20.

#### Step 4

In the drawing above, the projection of these vertices is represented by a kind of little black pinhead.

• Let O be the centre of the sphere (S) circumscribed about the polyhedron (R).
By radial projection from the centre O, all the networks obtained, or more precisely the vertices of the small equilateral triangles obtained in steps 2 and 3 on each face of (R), are projected onto the sphere (S).

#### Step 5

• To form the edges of the geodesic dome V-a-b, we must connect the various vertices obtained in the previous step; however, we must only interconnect the points that are the projections of vertices belonging to the same small equilateral triangle (see step 3).

#### Step 6

In the three drawings above, the colouring of the faces only serves to "look nice."

The edges obtained in the previous step form spherical triangles, which are the radial projections of the small equilateral triangles resulting from the division of the faces of the initial polyhedron (R).

• To complete the layout of the geode V-a-b, just erase the traces of all the operations of steps 1 to 4: the vertices of the remaining spherical triangles are the vertices of the geode; these vertices, connected in pairs, draw the edges and faces of the geode V-a-b.
• If, instead of the normal geode, one wants to build the corresponding dual geode, one must determine on the sphere (S) the centre of each of these spherical triangles (these centres lie "above" the centres of the faces of the normal geode V-a-b) and, whenever the centre points thus obtained correspond to adjacent faces of the normal geode, join these points in pairs to form the edges of the dual geode. All these edges draw polygons which are the faces of the dual geode; these faces are hexagons, except that twelve of them are regular pentagons whose centres are located "above" the twelve vertices of the generating polyhedron (R). Of course, the outline of the normal geode is erased once the edges of the dual geode have been traced.

### From a regular octahedron

Regular Octahedron

If we choose the regular octahedron (corresponding to N = 4) as the starting polyhedron (R).
construction described above leads to step # 6 the following results: ### From a regular tetrahedron Regular Tetrahedron Finally, if we choose as starting polyhedron (R) regular tetrahedron (corresponding to N set to 3). the same construction leads to step # 6 the following results: ### Some remarks geometric When the parameter b is zero or equal to the parameter a, the geode (normal or dual) has all the symmetry properties of the polyhedron generator, for example, for the icosahedron: 15 planes of symmetry (via two opposite edges), 10 rotations of order 3 (120 ° rotation about an axis passing through the center of one of the sides 20) and rotation order 5 6 (rotation of 72 ° about an axis passing through two opposite corners) geodes enantiomeric However, when the parameters a and b are different and both positive, the geode loses its symmetry planes and there are two forms of type Nab geodes, which are enantiomers (ie ie symmetrical to each other in a mirror without being superimposed) to be convinced, simply swap the letters A and B in the explanations given above in steps 2 and 3 and carefully examine the corresponding figures (in the case V-5-3) or the figure below: • Spherical triangles obtained in step # 5 seem to be equilateral (at least when the generator polyhedron is an icosahedron) but they are not (their angles are not all equal) and their lengths are only a few few special cases; • Similarly, the hexagons obtained in the construction of dual geodes seem to be regular but generally are not (although they are when a = b = 1, regardless of N!) • According to a conjecture issued by Joseph D. Clinton but that remains to be proven, it would be possible to slightly move the vertices of the triangulated network described in steps 2 and 3 so that the edges of the dual Nab domes are all of equal length. J. D. 
Clinton based his belief on the fact that such "regularized" domes have been discovered for all of the following combinations of a and b:

a + b < 4;
a = 4 and b = 0;
a = 2 and b = 2;
a = 5 and b = 0;
and finally a = 3 and b = 3, with N arbitrary (equal to 3, 4 or 5).

If you choose ${\displaystyle a=1\,}$ and ${\displaystyle b=0\,}$, the normal geode V-1-0 is identical to the initial generator polyhedron (R). As for the dual geode V-1-0, it is the regular polyhedron dual to that same polyhedron; depending on the value of N, this is a regular dodecahedron (if N ${\displaystyle =5\,}$), a cube (if ${\displaystyle N=4\,}$) or a regular tetrahedron (if ${\displaystyle N=3\,}$).

The quantity ${\displaystyle a^{2}+ab+b^{2}\,}$, which could be called the "density" of a geode, is interesting because it is the ratio of the area of the triangular faces of the polyhedron (R) to the area of the small triangles obtained by dividing those faces. It appears in the formulas which give, as functions of N, a and b, the numbers of faces F, edges A and vertices S of normal and dual geodesic domes.

The edges of a normal geodesic dome (G) form a Delaunay triangulation of its set of vertices; moreover, the dual geodesic dome of the same dome (G) is a partition of the sphere (S), but it does not strictly correspond to the Voronoi diagram of the vertices of the dome (G), especially for low values of the density ${\displaystyle a^{2}+ab+b^{2}\,}$.

It is not mathematically illogical to also consider another type of geodesic dome: those that could be obtained from a division (partition) of the faces of another regular polyhedron, the cube, this division consisting of cutting each of the six square faces of the cube into small squares.
To build such "quadrangulated" domes, it suffices, in step 2 described above, to divide one of the two diagonals, say BD, of a (square) face ABCD of the cube into ${\displaystyle a+b\,}$ segments of equal length, then to connect the vertex C to ${\displaystyle P_{a}\,}$, and finally to draw all the segments parallel to ${\displaystyle CP_{a}}$ passing through the points ${\displaystyle P_{0},P_{1},P_{2},...P_{a-1},P_{a},P_{a+1},...P_{a+b}\,}$ of the diagonal, without exceeding the limits of the square face ABCD; then, in step 3, to make a similar construction with the diagonal AC and the vertex B; and finally to reproduce the grid obtained on each of the five other faces of the cube. For these domes, the "density", i.e. the ratio of the area of the faces of the cube to the area of the small squares obtained by dividing those faces, would be ${\displaystyle a^{2}+b^{2}\,}$.

Note: the formulas below do not cover the special case of "quadrangulated" geodes.

### Formulas

To calculate the numbers F, A and S representing the numbers of faces, edges and vertices of a geodesic dome of parameters N, a and b, one must first calculate the numbers f and D (which respectively represent the number of faces of the regular generator polyhedron (R) and the "density" of the division of the faces of that polyhedron) using the following two preliminary formulas:

${\displaystyle f={\frac {4N}{6-N}}\,}$ and ${\displaystyle D=a^{2}+ab+b^{2}\,}$

One can then calculate:

• in the case of "normal" geodesic domes:
${\displaystyle F=fD\,}$, ${\displaystyle A={\frac {3fD}{2}}\,}$ and ${\displaystyle S={\frac {fD}{2}}+2\,}$
• in the case of "dual" geodesic domes:
${\displaystyle F={\frac {fD}{2}}+2\,}$, ${\displaystyle A={\frac {3fD}{2}}\,}$ and ${\displaystyle S=fD\,}$

Further details:

• The faces of normal domes are all of order 3 (they are triangles), while their vertices are of two types: those of order 6 (from which 6 edges issue) and those of order N.
Their numbers are given by:
${\displaystyle S_{6}={\frac {f(D-1)}{2}}\,}$ and ${\displaystyle S_{N}={\frac {f}{2}}+2={\frac {12}{6-N}}\,}$
• The vertices of dual domes are all of order 3 (3 edges issue from each), while their faces are of two types: those of order 6 (hexagons) and those of order N (N-sided polygons). Their numbers are given by:
${\displaystyle F_{6}={\frac {f(D-1)}{2}}\,}$ and ${\displaystyle F_{N}={\frac {f}{2}}+2={\frac {12}{6-N}}\,}$
• The line carrying the segment ${\displaystyle CP_{a}\,}$ makes, with the altitude to the side AB of the face ABC of the regular polyhedron (R), an angle ${\displaystyle \theta \,}$ whose sine and tangent are respectively:
${\displaystyle \sin \theta ={\frac {1}{2}}\ {\frac {a-b}{\sqrt {(a^{2}+ab+b^{2})}}}\,}$ and ${\displaystyle \operatorname {tg} \ \theta ={\frac {\sqrt {3}}{2}}\ {\frac {a-b}{a+b}}\,}$

## Other examples of geodesic structures

### Balls and balloons

The footballs (soccer balls) used in official competitions have exactly the structure of a dual V-1-1 geode: on these balls one finds 12 pentagons dyed black and 20 hexagons dyed white.

Golf balls are dug with small cells (dimples) whose number, shape and position can improve the performance of players; among professional golf balls, one frequently encounters balls on which the circular arrangement of the cells reproduces the faces (hexagons and pentagons) of a dual V-6-0 geode.

### Molecules and viruses

Some remarkable organic compounds, such as C60, whose structure is similar to that of V-1-1 geodes, have been baptized fullerenes in honor of R. B. Fuller; they are also sometimes called "footballenes".

Most viruses are "icosahedral viruses", or more exactly "icosahedral nucleocapsid viruses": their particularity is their structure, very close to that of a normal or dual geodesic dome (but without the radial projection onto the sphere (S)), which gives them great stability. They always correspond to N = 5, and most often to low values of a and b.
Among the many such viruses are those of hepatitis A, B, C and E, that of polio, that of AIDS (HIV-1), that of yellow fever, that of smallpox, that of foot-and-mouth disease, the usual virus of bronchiolitis (RSV), those of common warts and plantar warts (HPV-3 and HPV-1), that of rubella, and that of the "common cold". Also worth mentioning is the group of eight viruses called Herpesviridae, which all have the structure V-5-0 and which can induce various human diseases: chickenpox, shingles, infectious mononucleosis, cold sores, neonatal herpes, and sexually transmitted diseases such as genital herpes (herpes simplex) and cytomegalovirus infection. Some of these viruses are "twisted" and thus correspond to Fuller's class III: e.g. polyomavirus and HPV, which are of type V-2-1. We even know a virus of type V-10-7 (the one that plagues the alga Phaeocystis pouchetii).
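To close, the counting formulas and the construction steps described above can both be sketched numerically. The block below is my own illustration (the function names are hypothetical, and the subdivision demo assumes the icosahedral generator and the class-I case b = 0): it computes f, D and the counts F, A, S for normal and dual geodes, then reproduces steps 2 to 4 on one icosahedron face and checks that the projected small triangles are not equilateral, as remarked earlier.

```python
import math
from fractions import Fraction

def geode_counts(N, a, b, dual=False):
    """Numbers of faces F, edges A and vertices S of a geode-N-a-b."""
    f = Fraction(4 * N, 6 - N)       # faces of the generator polyhedron
    D = a * a + a * b + b * b        # "density" of the subdivision
    F, A, S = f * D, Fraction(3, 2) * f * D, Fraction(1, 2) * f * D + 2
    if dual:                         # the dual geode swaps faces and vertices
        F, S = S, F
    return int(F), int(A), int(S)

def normalize(p):
    """Radial projection of center O onto the unit sphere (step 4)."""
    r = math.sqrt(sum(c * c for c in p))
    return tuple(c / r for c in p)

def subdivide_face(A, B, C, n):
    """Steps 2-3 for a geode V-n-0: vertices of the n^2 small triangles on face ABC."""
    return {(i, j): tuple((i * x + j * y + (n - i - j) * z) / n
                          for x, y, z in zip(A, B, C))
            for i in range(n + 1) for j in range(n + 1 - i)}

PHI = (1 + math.sqrt(5)) / 2
# One face of a regular icosahedron (edge length 2).
face = ((0, 1, PHI), (0, -1, PHI), (PHI, 0, 1))
n = 3
grid = {ij: normalize(p) for ij, p in subdivide_face(*face, n).items()}

def dist(p, q):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

# Projected edge lengths in one lattice direction: several distinct values,
# confirming that the spherical triangles are not equilateral.
lengths = sorted({round(dist(grid[(i, j)], grid[(i + 1, j)]), 6)
                  for i in range(n) for j in range(n - i)})
print(geode_counts(5, 3, 0), lengths)
```

For instance, the normal geode V-5-3-0 comes out with 180 faces, 270 edges and 92 vertices, and the dual geode V-1-0 for N = 4 comes out with 6 faces, 12 edges and 8 vertices, i.e. a cube, as stated above.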
Autoguess: A Tool for Finding Guess-and-Determine Attacks and Key Bridges

Abstract

The guess-and-determine technique is one of the most widely used techniques in cryptanalysis to recover unknown variables in a given system of relations. In such attacks, a subset of the unknown variables is guessed such that the remaining unknowns can be deduced using the information from the guessed variables and the given relations. This idea can be applied in various areas of cryptanalysis such as finding the internal state of stream ciphers when a sufficient amount of output data is available, or recovering the internal state and the secret key of a block cipher from very few known plaintexts. Another important application is the key-bridging technique in key-recovery attacks on block ciphers, where the attacker aims to find the minimum number of required sub-key guesses to deduce all involved sub-keys via the key schedule. Since the complexity of the guess-and-determine technique directly depends on the number of guessed variables, it is essential to find the smallest possible guess basis, i.e., the subset of guessed variables from which the remaining variables can be deduced. In this paper, we present Autoguess, an easy-to-use general tool to search for a minimal guess basis. We propose several new modeling techniques to harness SAT/SMT, MILP, and Gröbner basis solvers. We demonstrate their usefulness in guess-and-determine attacks on stream ciphers and block ciphers, as well as finding key-bridges in key recovery attacks on block ciphers. Moreover, integrating our CP models for the key-bridging technique into the previous CP-based frameworks to search for distinguishers, we propose a unified and general CP model to search for key recovery friendly distinguishers which supports both linear and nonlinear key schedules.

Category: Secret-key cryptography
Publication info: Preprint. Minor revision.
Keywords: Lightweight block cipher, Guess-and-Determine, Key-Bridging, CP, MILP, SMT, SAT, Gröbner basis
History: 2021-12-04: revised
Short URL: https://ia.cr/2021/1529
License: CC BY

BibTeX

@misc{cryptoeprint:2021/1529,
  author = {Hosein Hadipour and Maria Eichlseder},
  title = {Autoguess: A Tool for Finding Guess-and-Determine Attacks and Key Bridges},
  howpublished = {Cryptology ePrint Archive, Paper 2021/1529},
  year = {2021},
  note = {\url{https://eprint.iacr.org/2021/1529}},
  url = {https://eprint.iacr.org/2021/1529}
}
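As a toy illustration of the guess basis notion (this is a brute-force sketch with invented relation data, not the tool's actual algorithm, which encodes the search for SAT/SMT, MILP and Gröbner basis solvers), a minimal guess basis can be found by exhausting subsets of variables and propagating relations of the form "knowing all variables of a relation but one determines the last one":

```python
from itertools import combinations

def deduce(known, relations):
    """Symbolic propagation: a relation over a set of variables determines
    any single missing variable once all the others are known."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for rel in relations:
            missing = set(rel) - known
            if len(missing) == 1:
                known |= missing
                changed = True
    return known

def minimal_guess_basis(variables, relations):
    """Smallest subset of variables from which all the others can be deduced."""
    for size in range(len(variables) + 1):
        for guess in combinations(sorted(variables), size):
            if deduce(guess, relations) == set(variables):
                return set(guess)

# Invented toy system: each relation ties three variables together.
variables = {"x", "y", "z", "w", "v"}
relations = [("x", "y", "z"), ("y", "z", "w"), ("w", "x", "v")]
basis = minimal_guess_basis(variables, relations)
print(basis)   # a smallest guess basis; two guesses suffice here
```

Real systems are far too large for this exhaustive search, which is precisely why the paper reduces the problem to constraint solvers.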
# Equation of time

The equation of time: above the axis a sundial will appear fast relative to a clock showing local mean time, and below the axis a sundial will appear slow.

The equation of time is the difference between apparent solar time and mean solar time. At any given instant, this difference will be the same for every observer. The equation of time can be found in tables (for example, The Astronomical Almanac) or estimated with formulas given below. Apparent (or true) solar time can be obtained for example by measurement of the current position (hour angle) of the Sun, or indicated (with limited accuracy) by a sundial. Mean solar time, for the same place, would be the time indicated by a steady clock set so that over the year its differences from apparent solar time average to zero (with zero net gain or loss over the year).[1]

The word "equation" is here used in a somewhat archaic sense, meaning "correction". Prior to the mid-17th century, when pendulum-controlled mechanical clocks were invented, sundials were the only reliable timepieces, and were generally considered to tell the right time. The right time was essentially defined as that which was shown by a sundial. When good clocks were introduced, they usually did not agree with sundials, so the equation of time was used to "correct" their readings to obtain sundial time. Some clocks, called equation clocks, included an internal mechanism to perform this correction. Later, as clocks became the dominant good timepieces, uncorrected clock time was accepted as being accurate. The readings of sundials, when they were used, were then, and often still are, corrected with the equation of time, used in the reverse direction from previously, to obtain clock time. Many sundials therefore have tables or graphs of the equation of time engraved on them to allow the user to make this correction.
Of course, the equation of time can still be used, when required, to obtain solar time from clock time. Devices such as solar trackers, which move to keep pace with the Sun's movements in the sky, are often driven by clocks, with a mechanism that incorporates the equation of time to make them move accurately.

During a year the equation of time varies as shown on the graph; its change from one year to the next is slight. Apparent time, and the sundial, can be ahead (fast) by as much as 16 min 33 s (around 3 November), or behind (slow) by as much as 14 min 6 s (around 12 February). The equation of time has zeros near 15 April, 13 June, 1 September and 25 December.[2][3] The graph of the equation of time is closely approximated by the sum of two sine curves, one with a period of a year and one with a period of half a year. The curves reflect two astronomical effects, each causing a different non-uniformity in the apparent daily motion of the Sun relative to the stars:

• the obliquity of the ecliptic (the plane of the Earth's annual orbital motion around the Sun), which is inclined by about 23.44 degrees relative to the plane of the Earth's equator; and
• the eccentricity of the Earth's orbit, because of which the Earth's orbital speed, and hence the Sun's apparent speed along the ecliptic, varies through the year.

The equation of time is also the east or west component of the analemma, a curve representing the angular offset of the Sun from its mean position on the celestial sphere as viewed from Earth.

The equation of time was used historically to set clocks. Between the invention of accurate clocks in 1656 and the advent of commercial time distribution services around 1900, one of two common land-based ways to set clocks was by observing the passage of the sun across the local meridian at noon. The moment the sun passed overhead, the clock was set to noon, offset by the number of minutes given by the equation of time for that date.
(The second method did not use the equation of time; instead, it used stellar observations to give sidereal time, in combination with the relation between sidereal time and solar time.)[4] The equation of time values for each day of the year, compiled by astronomical observatories, were widely listed in almanacs and ephemerides.[5][6] Naturally, other planets will have an equation of time too. On Mars the difference between sundial time and clock time can be as much as 50 minutes, due to the considerably greater eccentricity of its orbit. The planet Uranus, which has an extremely large axial tilt, has an equation of time that can be several hours. ## History ### Ancient history — Babylon and Egypt The irregular daily movement of the Sun was known by the Babylonians, and Book III of Ptolemy's Almagest is primarily concerned with the Sun's anomaly. Ptolemy discusses the correction needed to convert the meridian crossing of the Sun to mean solar time and takes into consideration the nonuniform motion of the Sun along the ecliptic and the meridian correction for the Sun's ecliptic longitude. He states the maximum correction is 8 1/3 time-degrees or 5/9 of an hour (Book III, chapter 9).[7] However he did not consider the effect relevant for most calculations since it was negligible for the slow-moving luminaries and only applied it for the fastest-moving luminary, the Moon. ### Medieval and Renaissance astronomy Toomer uses the Medieval term equation, from the Latin term aequatio (equalization [adjustment]), for Ptolemy's difference between the mean solar time and the true solar time. 
Kepler's definition of the equation is "the difference between the number of degrees and minutes of the mean anomaly and the degrees and minutes of the corrected anomaly."[8] ### Apparent time versus mean time Until the invention of the pendulum and the development of reliable clocks during the 17th century, the equation of time as defined by Ptolemy remained a curiosity, of importance only to astronomers. However, when mechanical clocks started to take over timekeeping from sundials, which had served humanity for centuries, the difference between clock time and solar time became an issue for everyday life. Apparent solar time (or true or real solar time) is the time indicated by the Sun on a sundial (or measured by its transit over the local meridian), while mean solar time is the average as indicated by well-regulated clocks. The first tables for the equation of time which accounted for its annual variations in an essentially correct way were published in 1665 by Christiaan Huygens.[citation needed] Huygens set his values for the equation of time so as to make all values positive throughout the year.[9] This meant that a clock set by Huygens's tables would be consistently about 15 minutes slow on mean time. Another set of tables was published in 1672–73 by John Flamsteed, who later became the first royal astronomer of the new Greenwich Observatory. These appear to have been the first essentially correct tables which also led to mean time without an offset. Flamsteed adopted the convention of tabulating and naming the correction in the sense that it was to be applied to the apparent time to give mean time.[10] The equation of time, correctly based on the two major components of the Sun's irregularity of apparent motion, i.e. 
the effect of the obliquity of the ecliptic and the effect of the Earth's orbital eccentricity, was not generally adopted until after Flamsteed's tables of 1672–73, published with the posthumous edition of the works of Jeremiah Horrocks.[11] Robert Hooke (1635–1703), who mathematically analyzed the universal joint, was the first to note that the geometry and mathematical description of the (non-secular) equation of time and the universal joint were identical, and proposed the use of a universal joint in the construction of a "mechanical sundial".[12] ### Eighteenth and early nineteenth centuries The corrections in Flamsteed's tables of 1672/3 and 1680 led to mean time computed essentially correctly and without an offset, i.e. in principle as we now know it. But the numerical values in tables of the equation of time have somewhat changed since then, owing to three kinds of factors: • general improvements in accuracy that came from refinements in astronomical measurement techniques, • slow intrinsic changes in the equation of time, occurring as a result of very slow long-term changes in the Earth's obliquity and eccentricity and the position of its perihelion (or, equivalently, of the Sun's perigee), and • the inclusion of small sources of additional variation in the apparent motion of the Sun, unknown in the 17th century, but discovered from the eighteenth century onwards, including the effects of the Moon, Venus and Jupiter.[13] A sundial made in 1812, by Whitehurst & Son with a circular scale showing the equation of time correction. This is now on display in the Derby Museum. Until 1833, the equation of time was tabulated in the sense 'mean minus apparent solar time' in the British Nautical Almanac and Astronomical Ephemeris published for the years 1767 onwards. Before the issue for 1834, all times in the almanac were in apparent solar time, because time aboard ship was most often determined by observing the Sun. 
In the unusual case that the mean solar time of an observation was needed, the extra step of adding the equation of time to apparent solar time was needed. In the Nautical Almanac issues for 1834 onwards, all times have been in mean solar time, because by then the time aboard ship was increasingly often determined by marine chronometers. In the unusual case that the apparent solar time of an observation was needed, the extra step of applying the equation of time to mean solar time was needed, requiring all differences in the equation of time to have the opposite sign than before. As the apparent daily movement of the Sun is one revolution per day, that is 360° every 24 hours, and the Sun itself appears as a disc of about 0.5° in the sky, simple sundials can be read to a maximum accuracy of about one minute. Since the equation of time has a range of about 30 minutes, the difference between sundial time and clock time cannot be ignored. In addition to the equation of time, one also has to apply corrections due to one's distance from the local time zone meridian and summer time, if any. The tiny increase of the mean solar day itself due to the slowing down of the Earth's rotation, by about 2 ms per day per century, which currently accumulates up to about 1 second every year, is not taken into account in traditional definitions of the equation of time, as it is imperceptible at the accuracy level of sundials. ## Explanations for the major components of the equation of time ### Eccentricity of the Earth's orbit Graph showing the equation of time (red solid line) along with its two main components plotted separately, the part due to the obliquity of the ecliptic (mauve broken line) and the part due to the Sun's varying apparent speed along the ecliptic due to the eccentricity & ellipticity of the Earth's orbit (dark dash-dotted line) The Earth revolves around the Sun. 
As seen from Earth, the Sun appears to revolve once around the Earth through the background stars in one year. If the Earth orbited the Sun with a constant speed, in a circular orbit in a plane perpendicular to the Earth's axis, then the Sun would culminate every day at exactly the same time, and be a perfect time keeper (except for the very small effect of the slowing rotation of the Earth). But the orbit of the Earth is an ellipse not centered on the Sun, and its speed varies between 30.287 and 29.291 km/s, according to Kepler's laws of planetary motion; its angular speed also varies, and thus the Sun appears to move faster (relative to the background stars) at perihelion (currently around January 3) and slower at aphelion a half year later. At these extreme points, this effect increases (respectively, decreases) the real solar day by 7.9 seconds from its mean. This daily difference accumulates over a period. As a result, the eccentricity of the Earth's orbit contributes a sine wave variation with an amplitude of 7.66 minutes and a period of one year to the equation of time. The zero points are reached at perihelion (at the beginning of January) and aphelion (beginning of July), while the maximum values are in early April (negative) and early October (positive).

### Obliquity of the ecliptic

Sun and planets at solar midday (Ecliptic in red, Sun and Mercury in yellow, Venus in white, Mars in red, Jupiter in yellow with red spot, Saturn in white with rings).

However, even if the Earth's orbit were circular, the motion of the Sun along the celestial equator would still not be uniform. This is a consequence of the tilt of the Earth's rotation axis with respect to its orbit, or equivalently, the tilt of the ecliptic (the path of the sun against the celestial sphere) with respect to the celestial equator.
The projection of this motion onto the celestial equator, along which "clock time" is measured, is a maximum at the solstices, when the yearly movement of the Sun is parallel to the equator and appears as a change in right ascension, and is a minimum at the equinoxes, when the Sun moves in a sloping direction and appears mainly as a change in declination, leaving less for the component in right ascension, which is the only component that affects the duration of the solar day. As a consequence, the daily shift of the shadow cast by the Sun in a sundial, due to obliquity, is smaller close to the equinoxes and greater close to the solstices. At the equinoxes, the Sun is seen slowing down by up to 20.3 seconds every day, and at the solstices speeding up by the same amount.

In the figure on the right, we can see the monthly variation of the apparent slope of the plane of the ecliptic at solar midday as seen from Earth. This variation is due to the apparent precession of the rotating Earth through the year, as seen from the Sun at solar midday.

In terms of the equation of time, the inclination of the ecliptic results in the contribution of another sine wave variation, with an amplitude of 9.87 minutes and a period of half a year, to the equation of time. The zero points of this sine wave are reached at the equinoxes and solstices, while the extrema are at the beginning of February and August (negative) and the beginning of May and November (positive).

## Secular effects

The two above-mentioned factors have different wavelengths, amplitudes and phases, so their combined contribution is an irregular wave. At epoch 2000 these are the values (in minutes and seconds, with UT dates):

• minimum −14:15 (11 February)
• zero 00:00 (15 April)
• maximum +03:41 (14 May)
• zero 00:00 (13 June)
• minimum −06:30 (26 July)
• zero 00:00 (1 September)
• maximum +16:25 (3 November)
• zero 00:00 (25 December)

E.T. = apparent − mean.
Positive means: the Sun runs fast and culminates earlier, or the sundial is ahead of mean time. A slight yearly variation occurs due to the presence of leap years, resetting itself every 4 years. The exact shape of the equation of time curve and the associated analemma slowly change[14] over the centuries due to secular variations in both eccentricity and obliquity. At this moment both are slowly decreasing, but they increase and decrease over a timescale of hundreds of thousands of years. If/when the Earth's orbital eccentricity (now about 0.0167 and slowly decreasing) reaches 0.047, the eccentricity effect may in some circumstances overshadow the obliquity effect, leaving the equation of time curve with only one maximum and minimum per year, as is the case on Mars.[15]

On shorter timescales (thousands of years) the shifts in the dates of equinox and perihelion will be more important. The former is caused by precession, and shifts the equinox backwards compared to the stars. But it can be ignored in the current discussion, as our Gregorian calendar is constructed in such a way as to keep the vernal equinox date at 21 March (at least to sufficient accuracy for our aim here). The shift of the perihelion is forwards, about 1.7 days every century. In 1246 the perihelion occurred on 22 December, the day of the solstice, so the two contributing waves had common zero points and the equation of time curve was symmetrical: in Astronomical Algorithms Meeus gives February and November extrema of 15 min 39 s and May and July ones of 4 min 58 s. Before that time the February minimum was larger than the November maximum, and the May maximum larger than the July minimum. The secular change is evident when one compares a current graph of the equation of time (see below) with one from 2000 years ago, e.g., one constructed from the data of Ptolemy.

## Practical use

Animation showing Equation of Time and Analemma path over one year.
If the gnomon (the shadow-casting object) is not an edge but a point (e.g., a hole in a plate), the shadow (or spot of light) will trace out a curve during the course of a day. If the shadow is cast on a plane surface, this curve will (usually) be a conic section, namely a hyperbola, since the circle of the Sun's motion together with the gnomon point define a cone. At the spring and fall equinoxes, the cone degenerates into a plane and the hyperbola into a line. With a different hyperbola for each day, hour marks that include any necessary corrections can be put on each hyperbola. Unfortunately, each hyperbola corresponds to two different days, one in each half of the year, and these two days will require different corrections. A convenient compromise is to draw the line for the "mean time" and add a curve showing the exact position of the shadow points at noon during the course of the year. This curve will take the form of a figure eight and is known as an "analemma". By comparing the analemma to the mean noon line, the amount of correction to be applied generally on that day can be determined.

The equation of time is used not only in connection with sundials and similar devices, but also for many applications of solar energy. Machines such as solar trackers and heliostats have to move in ways that are influenced by the equation of time.

## Calculations of the equation of time

For many purposes, the equation of time is usually obtained by looking it up in a published table of values or on a graph. Of course, calculations are required in creating the tables and graphs. Also, in devices such as computer-controlled heliostats, the computer is often programmed to calculate the equation of time whenever it is needed, instead of looking it up. Algorithms by which it can be calculated are therefore important.
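Before turning to the calculation itself, the practical correction chain described earlier (sundial reading, equation of time, distance from the time zone meridian, summer time) can be sketched as follows. This is my own illustration with hypothetical names; the equation-of-time value itself would come from a table or from the formulas that follow.

```python
def sundial_to_clock(sundial_hours, eot_minutes, longitude_east_deg,
                     zone_meridian_east_deg, summer_time=False):
    """Convert an apparent-solar-time sundial reading to civil clock time (hours).

    E.T. = apparent - mean, so mean solar time = apparent time - E.T.;
    each degree of longitude east of the zone meridian puts local mean
    time 4 minutes ahead of zone time.
    """
    mean_local = sundial_hours - eot_minutes / 60.0
    clock = mean_local - 4.0 * (longitude_east_deg - zone_meridian_east_deg) / 60.0
    return clock + (1.0 if summer_time else 0.0)

# Around 3 November (E.T. about +16.4 min) a sundial on the zone meridian
# reads noon while a clock shows roughly 11:43.6 mean time.
print(sundial_to_clock(12.0, 16.4, 0.0, 0.0))
```

The sign conventions here follow the text's definition E.T. = apparent − mean; tables using the opposite sense (as the Nautical Almanac did before 1834) would flip the first correction.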
### Elaborate calculation

In terms of the right ascension of the Sun, α, and that of a mean Sun moving uniformly along the celestial equator, αM, the equation of time is defined as the difference,[16]

Δt = αM − α.

In this expression Δt is the time difference between apparent solar time (time measured by a sundial) and mean solar time (time measured by a mechanical clock). The left side of this equation is a time difference while the right side terms are angles; however, astronomers regard time and angle as quantities that are related by conversion factors such as: 2π radians = 360° = 1 day = 24 hours. The difference, Δt, is measurable because α can be measured and αM, by definition, is a linear function of mean solar time.

The equation of time can be calculated based on Newton's theory of celestial motion, in which the earth and sun describe elliptical orbits about their common mass center. In doing this it is usual to write

αM = 2πt/tY = Λ

where t is mean solar time and tY is the length of a tropical year. Substituting αM into the equation of time, it becomes[17]

Δt = Λ − α = M + λp − α

The new angles appearing here are:

• M, the mean anomaly: the angle from the periapsis to the dynamical mean Sun; and
• λp = Λ − M = 4.9412 rad = 283.11°, the ecliptic longitude of the periapsis, written with its value on 1 Jan 2010 at 12 noon.

However, the displayed equation is approximate; it is not accurate over very long times because it ignores the distinction between dynamical time and mean solar time.[18] In addition, an elliptical orbit formulation ignores small perturbations due to the moon and other planets. Another complication is that the orbital parameter values change significantly over long times; for example, λp increases by about 1.7 degrees per century.
Consequently, calculating Δt using the displayed equation with constant orbital parameters produces accurate results only for sufficiently short times (decades); when compared to more accurate calculations using the Multiyear Computer Interactive Almanac for each day in 2008, it disagrees by as much as 35.2 s.[19] It is possible to write an expression for the equation of time that is valid for centuries, but it is necessarily much more complex.[20]

In order to calculate α, and hence Δt, as a function of M, three additional angles are required:

• E, the eccentric anomaly;
• ν, the true anomaly (the angle from the periapsis to the Sun); and
• λ, the ecliptic longitude of the Sun.

The celestial sphere and the Sun's elliptical orbit as seen by a geocentric observer looking normal to the ecliptic, showing the six angles (M, λp, α, ν, λ, E) needed for the calculation of the equation of time. For the sake of clarity the drawings are not to scale.

All these angles are shown in the figure on the right, which shows the celestial sphere and the Sun's elliptical orbit seen from the Earth (the same as the Earth's orbit seen from the Sun). In this figure ε = 0.40907 rad = 23.438° is the obliquity, while the eccentricity of the ellipse is e = [1 − (b/a)²]^(1/2) = 0.016705. Now, given a value of 0 ≤ M ≤ 2π, one can calculate α(M) by means of the following procedure:[21]

First, knowing M, calculate E from Kepler's equation[22]

$M=E-e\sin E$

A numerical value can be obtained from an infinite series, or by graphical or numerical methods. Alternatively, note that for e = 0, E = M, and for small e, by iteration,[23] E ≈ M + e sin M. This can be improved by iterating again, but for the small value of e that characterizes the orbit this approximation is sufficient.

Next, knowing E, calculate the true anomaly ν from an elliptical orbit relation[24]

$\nu=2\tan^{-1}\left[\sqrt{\frac{1+e}{1-e}}\tan\frac{E}{2} \right]$

The correct branch of the multiple-valued function tan⁻¹x to use is the one that makes ν a continuous function of E(M) starting from ν(E=0) = 0. Thus for 0 ≤ E < π use tan⁻¹x = Tan⁻¹x, and for π < E ≤ 2π use tan⁻¹x = Tan⁻¹x + π.
At the specific value E = π, for which the argument of tan is infinite, use ν = E. Here Tan⁻¹ x is the principal branch, |Tan⁻¹ x| < π/2; it is the function returned by calculators and computer applications. Alternatively, note that for e = 0, ν = E, and for small e, from a one-term Taylor expansion, ν ≈ E + e sin E ≈ M + 2e sin M.

Next, knowing ν, calculate λ from its definition above

$\lambda=\nu+\lambda_p$

The value of λ varies non-linearly with M because the orbit is elliptical; from the approximation for ν, λ ≈ M + λp + 2e sin M.

Next, knowing λ, calculate α from a relation for the right triangle on the celestial sphere shown above[25]

$\alpha=\tan^{-1}[\cos\varepsilon\,\tan\lambda]$

Like ν previously, here the correct branch of tan⁻¹ x to use is the one that makes α a continuous function of λ(M) starting from α(λ = 0) = 0. Thus for (2k − 1)π/2 < λ < (2k + 1)π/2, use tan⁻¹ x = Tan⁻¹ x + kπ, while for the values λ = (2k + 1)π/2, at which the argument of tan is infinite, use α = λ. Since λp ≤ λ ≤ λp + 2π when M varies from 0 to 2π, the values of k that are needed, with λp = 4.9412, are 2, 3, and 4. Although an approximate value for α can be obtained from a one-term Taylor expansion like that for ν,[26] it is more efficacious to use the equation[27] sin(α − λ) = −tan²(ε/2) sin(α + λ). Note that for ε = 0, α = λ, and for small ε, by iteration, α ≈ λ − tan²(ε/2) sin 2λ ≈ M + λp + 2e sin M − tan²(ε/2) sin[2(M + λp)].

Finally, Δt can be calculated using the starting value of M and the calculated α(M). The result is usually given as either a set of tabular values, or a graph of Δt as a function of the number of days past periapsis, n, where 0 ≤ n ≤ 365.242 (365.242 is the number of days in a tropical year), so that

$M=\frac{2\pi\,n}{365.242}$

#### Approximation based on above calculation

Using the approximation for α(M), Δt can be written as a simple explicit expression, which is designated Δta because it is only an approximation.
$\Delta t_a=-2e\sin M+\tan^2\frac{\varepsilon}{2}\,\sin(2M+2\lambda_p) = [-7.657\sin M+9.862\sin(2M+3.599)]\mbox{ min}$

This equation was first derived by Milne,[28] who wrote it in terms of Λ = M + λp. The numerical values written here result from using the orbital parameter values for e, ε, and λp given above. When evaluating the numerical expression for Δta as given above, a calculator must be in radian mode to obtain correct values. Note also that the date and time of periapsis (perihelion of the Earth's orbit) varies from year to year; a table giving the connection can be found in perihelion.

A comparative plot of the two calculations is shown in the figure below. The simpler calculation is seen to be close to the elaborate one: the absolute error, Δt − Δta, is less than 45 seconds throughout the year; its largest value is 44.8 s and occurs on day 273. More accurate approximations can be obtained by retaining higher-order terms,[29] but they are necessarily more time-consuming to evaluate. At some point it is simpler to just evaluate Δt, but Δta as written above is easy to evaluate, even with a calculator, and has a nice physical explanation as the sum of two terms, one due to obliquity and the other to eccentricity. This is not true either for Δt considered as a function of M or for higher-order approximations of Δta.

[Figure: The equation of time as calculated by the more elaborate procedure for Δt described in the text and the approximate expression for Δta given there. Note that "n" is in days past the Earth's perihelion, which occurs on or about January 3.]

### Alternative calculation

Another calculation of the equation of time can be done as follows.[30] Angles are in degrees; the conventional order of operations applies.

$W=360/365.24$

W is the Earth's mean angular orbital velocity in degrees per day.

$A=W\times (D+10)$

D is the date, in days starting at zero on January 1 (i.e. the days part of the ordinal date −1).
10 is the approximate number of days from the December solstice to January 1, so A is the angle the Earth would move on its orbit at its average speed from the December solstice to date D.

$B=A+(360/\pi)\times 0.0167\times \sin(W\times (D-2))$

B is the angle the Earth moves from the solstice to date D, including a first-order correction for the Earth's orbital eccentricity, 0.0167. The number 2 is the number of days from January 1 to the date of the Earth's perihelion. This expression for B can be simplified by combining constants to:

$B=A+1.914\times \sin(W\times (D-2))$

$C=(A-\arctan(\tan(B)/\cos(23.44)))/180$

C is the difference between the angle moved at mean speed and the angle moved at the corrected speed projected onto the equatorial plane, divided by 180 to express it in "half turns". The number 23.44 is the obliquity (tilt) of the Earth's axis in degrees. The subtraction gives the conventional sign to the equation of time.

For any given value of x, arctan(x) (sometimes written as tan⁻¹ x) has multiple values, differing from each other by integer numbers of half turns. The value generated by a calculator or computer may not be the appropriate one for this calculation, which may cause C to be wrong by an integer number of half turns. The excess half turns are removed in the next step of the calculation:

$\text{EoT}=720\times (C-\text{nint}(C))$

EoT is the equation of time in minutes. The expression nint(C) means the nearest integer to C. On a computer, it can be programmed, for example, as INT(C+0.5). Its value is 0, 1, or 2 at different times of the year. Subtracting it leaves a small positive or negative fractional number of half turns, which is multiplied by 720, the number of minutes (12 hours) that the Earth takes to rotate one half turn relative to the Sun, to get the equation of time.

Compared with published values,[31][32] this calculation has a root-mean-square error of only 3.7 seconds of time. The greatest error is 6.0 seconds.
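Transcribed into code, the recipe above is only a few lines. The Python sketch below is my own transcription, not from the source; `round` plays the role of nint, and the trigonometric calls convert to and from degrees as the formulas require.

```python
import math

def equation_of_time(D):
    """Equation of time in minutes for day-of-year D (D = 0 on January 1)."""
    W = 360.0 / 365.24                       # mean orbital speed, degrees/day
    A = W * (D + 10)                         # mean angle from December solstice
    B = A + 1.914 * math.sin(math.radians(W * (D - 2)))  # eccentricity correction
    # Project onto the equatorial plane; express the difference in half turns.
    C = (A - math.degrees(math.atan(math.tan(math.radians(B))
                                    / math.cos(math.radians(23.44))))) / 180.0
    return 720.0 * (C - round(C))            # round() = nint: drop excess half turns
```

For example, `equation_of_time(31)` (1 February) gives about −13.6 minutes and `equation_of_time(306)` (early November) about +16.4, matching the familiar extremes of the analemma.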
This is much more accurate than the approximation described above, but not as accurate as the elaborate calculation.

The value of B in the above calculation is an accurate value for the Sun's ecliptic longitude (shifted by 90 degrees), so the solar declination becomes readily available:

$\text{Declination} = - \arcsin(\sin(23.44)\times \cos(B))$

which is accurate to within a fraction of a degree.

## Footnotes

1. ^ A description of apparent and mean time was given by Nevil Maskelyne in the Nautical Almanac for 1767: "Apparent Time is that deduced immediately from the Sun, whether from the Observation of his passing the Meridian, or from his observed Rising or Setting. This Time is different from that shewn by Clocks and Watches well regulated at Land, which is called equated or mean Time." (He went on to say that, at sea, the apparent time found from observation of the sun must be corrected by the equation of time, if the observer requires the mean time.)
2. ^ As an example of the inexactness of the dates, according to the U.S. Naval Observatory's Multiyear Interactive Computer Almanac the equation of time will be 0 at 2:00 UT1 on 16 April 2011.
3. ^ Heilbron 1999, p. 277.
4. ^ Olmstead 1866, pp. 57–58
5. ^ Milham 1945, pp. 11–15
6. ^ See for example, British Commission on Longitude 1794, p. 14.
7. ^ Toomer 1998, p. 171
8. ^ Kepler 1995, p. 155
9. ^ Huygens 1665
10. ^ Flamsteed 1672
11. ^ Vince 1814, p. 49
12. ^ Mills 2007, p. 219
13. ^ Maskelyne 1764, pp. 163–169
14. ^
15. ^ Telling Time on Mars
16. ^ Heilbron, p. 275; Roy, p. 45
17. ^ Duffett-Smith, p. 98; Meeus, p. 341
18. ^ Hughes, p. 1530
19. ^ US Naval Observatory, April 2010
20. ^ Hughes, p. 1535
21. ^ Duffet-Smith, p. 86
22. ^ Moulton, p. 159
23. ^ Hinch, p. 2
24. ^ Moulton, p. 165
25. ^ Burington, p. 22
26. ^ Whitman, p. 32
27. ^ Milne, p. 374
28. ^ Milne, p. 375
29. ^ Muller
30. ^ Williams
31. ^ Waugh, p. 205
32.
^ Helyar

## References
https://apboardsolutions.in/ap-board-9th-class-maths-solutions-chapter-11-intext-questions/
# AP Board 9th Class Maths Solutions Chapter 11 Areas InText Questions

AP State Syllabus AP Board 9th Class Maths Solutions Chapter 11 Areas InText Questions and Answers.

## AP State Syllabus 9th Class Maths Solutions 11th Lesson Areas InText Questions

Activity

Question
Observe figures I and II. Find the area of both. Are the areas equal? Trace these figures on a sheet of paper and cut them out. Cover fig. I with fig. II. Do they cover each other completely? Are they congruent?
Observe figures III and IV. Find the areas of both. What do you notice? Are they congruent?
Now trace these figures on a sheet of paper and cut them out. Let us cover fig. III with fig. IV by coinciding their bases (sides of the same length), as shown in figure V. Are they covered completely?
We conclude that figures I and II are congruent and equal in area, but figures III and IV are equal in area and yet not congruent.

Think, Discuss and Write

Question 1.
If 1 cm represents 5 m, what area would 6 cm² represent? [Page No. 247]
Solution:
1 cm = 5 m
1 cm² = 1 cm × 1 cm, which represents 5 m × 5 m = 25 m²
∴ 6 cm² represents 6 × 25 m² = 150 m²

Question 2.
Rajni says 1 sq. m = 100² sq. cm. Do you agree? Explain.
Solution:
Yes. 1 m = 100 cm, so 1 sq. m = 100 cm × 100 cm = 10,000 sq. cm = 100² sq. cm.

Think, Discuss and Write

Question
Which of the following figures lie on the same base and between the same parallels? In such cases, write the common base and the two parallels. [Page No. 249]
Solution:
a) In figure (a), ΔPCD and □ABCD lie on the same base CD and between the same parallels AB // CD.
b) No.
c) ΔTRQ and □PQRS lie on the same base QR and between the same parallels PS // QR.
d) ΔAPD and □ABCD lie on the same base AD and between the same parallels AD // BC.
e) No.

Activity

Question
Take a graph sheet and draw two parallelograms ABCD and PQCD on it as shown in the figure. [Page No. 250]
The parallelograms are on the same base DC and between the same parallels PB and DC. Clearly the part DCQA is common between the two parallelograms.
So if we can show that ΔDAP and ΔCBQ have the same area, then we can say ar(PQCD) = ar(ABCD).

Activity

Draw pairs of triangles on the same base (or equal bases) and between the same parallels on the graph sheet, as shown in the figure. Let ΔABC and ΔDBC be the two triangles lying on the same base BC and between the parallels BC and FE. Draw CE // AB and BF // CD.
Parallelograms AECB and FDCB are on the same base BC and are between the same parallels BC and EF. Thus ar(AECB) = ar(FDCB).
We can see ar(ΔABC) = $$\frac { 1 }{ 2 }$$ ar(parallelogram AECB) …………….(i)
and ar(ΔDBC) = $$\frac { 1 }{ 2 }$$ ar(parallelogram FDCB) ……………..(ii)
From (i) and (ii), we get ar(ΔABC) = ar(ΔDBC).
You can also find the areas of ΔABC and ΔDBC by the method of counting the squares on the graph sheet, as we have done in the earlier activity, and check whether the areas are the same. [Page No. 254]

Think, Discuss and Write

Draw two triangles ABC and DBC on the same base and between the same parallels as shown in the figure, with P as the point of intersection of AC and BD. Draw CE // BA and BF // CD such that E and F lie on line AD. Can you show ar(ΔPAB) = ar(ΔPDC)? [Page No. 254]
[Hint: These triangles are not congruent but have equal areas.]
Solution:
□ABCE = 2 × ΔABC [∵ ΔABC and □ABCE lie on the same base BC and between the same parallels BC // AE]
ΔABC = $$\frac { 1 }{ 2 }$$ × □ABCE ……………(1)
Also □BCDF = 2 × ΔBCD [∵ ΔBCD and □BCDF lie on the same base BC and between the same parallels BC // FD]
ΔBCD = $$\frac { 1 }{ 2 }$$ × □BCDF ……………… (2)
But □ABCE = □BCDF [∵ □ABCE and □BCDF lie on the same base BC and between the same parallels BC // FE]
From (1) & (2): ΔABC = ΔBCD
ΔPAB + ΔPBC = ΔPBC + ΔPDC
⇒ ΔPAB = ΔPDC
Hence proved.
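The result ar(ΔABC) = ar(ΔDBC) is easy to spot-check numerically. The following sketch uses illustrative coordinates of my own choosing (not from the textbook) and the shoelace formula for triangle area:

```python
def area(p, q, r):
    """Area of triangle pqr via the shoelace (cross-product) formula."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

# Common base BC on the x-axis; apexes A and D anywhere on the parallel y = 3.
B, C = (0, 0), (5, 0)
A, D = (1, 3), (4, 3)
print(area(A, B, C), area(D, B, C))  # both 7.5 = (1/2) x base 5 x height 3
```

Moving the apex anywhere along the line y = 3 leaves the area unchanged, which is exactly the same-base, same-parallels theorem.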
https://solvedlib.com/how-much-do-wild-mountain-lions-weigh-adult-wild,344520
# How much do wild mountain lions weigh? Adult wild mountain lions (18 months or older) captured...

###### Question:

How much do wild mountain lions weigh? Adult wild mountain lions (18 months or older) captured and released for the first time in the San Andres Mountains gave the following weights (pounds): 71 107 132 123 6054

Assume that the population of x values has an approximately normal distribution.

(a) Use a calculator with mean and sample standard deviation keys to find the sample mean weight and sample standard deviation. (Round your answers to one decimal place.)

(b) Find a 75% confidence interval for the population average weight of all adult mountain lions in the specified region. (Round your answers to one decimal place.)

lower limit
upper limit
https://ora.ox.ac.uk/objects/uuid:4682c8d5-c8dc-4ac5-a278-c7d840cec9a7
Thesis

### The internal structure of irreducible continua

Subtitle: With a focus on local connectedness and monotone maps

Abstract: This thesis is an examination of the structure of irreducible continua, with a particular emphasis on local connectedness and monotone maps. A continuum is irreducible if there exists a pair of points such that no proper subcontinuum contains both, with the arc being the most basic example. Being irreducible has a number of interesting implications for a continuum, both locally and globally, and it is these consequences we shall focus on. As mentioned above, the arc is the most stra...

Files:
• (pdf, 1.8mb)

### Authors

David Harper

#### Contributors

Role: Examiner
Role: Supervisor

Type of award: DPhil
Level of award: Doctoral
Awarding institution: University of Oxford
Language: English
http://crypto.stackexchange.com/questions?pagesize=15&sort=newest
# All Questions 20 views ### How do I communicate the value of the initialization vector to the end user? Should it be part of the encrypted message? I am taking a cryptography class and our first homework is to implement a 32-bit block cipher. I implemented a simple block-cipher that uses CBC. Currently, my implementation reads both the 32-bit IV ... 18 views ### RSA: How effective is this keypair-trash attack A question that could very well be part of xkcd's "what if?": Let's say Monica made a piece of software that sends all RSA keypairs to a central database after they're not used anymore. Something ... 17 views ### Is pairing based cryptography computationaly expensive? What is the time complexity of pairing based cryptography compared to other public key encryption schemes? for encoding a single charector what is the space complexity? 29 views ### Applications of GF(p) polynomials A Galois field of the type $GF(p)$, where $p$ is prime, is normally expressed as the ring of integers modulo $p$. If my understanding is correct, it is also possible to represent its elements as a ... 20 views Given is a system, which does provide only implementations of fast hash algorithms (MD5, SHA1, SHA-256, SHA-512). There is no implementation of PBKDF2, bcrypt or scrypt available. The system does ... 28 views ### x509 Certificate Signature I've got a question about the signature of a CA. As I understand, the CA takes the public key of the client and signs it with his own private key by using "md5WithRSAEncryption" (like explained here: ... 44 views ### Is this a valid “fix” for deterministic encryption in encrypted databases? It appears (after doing some light research) that for encrypted databases to be practical enough to be usable, deterministic encryption is required, specifically with regard to the type of encryption ... 
31 views ### Showing that a function satisfies the properties to be a distance function [on hold] Let $p$ be a prime and $V_p$ be the associated valulation. Then for all $x,y,z \in\mathbb Z$, $V_p$ satisfies: $v_p(xy) = v_p(x)+v_p(y)$ $v_p(x+y) ≥ min(v_p(x),v_p(y))$ Define a function ... 37 views ### Negative exponents in Shoup's threshold RSA? I'm trying to implement threshold RSA operations, starting with decryption based on Peeters, R., Nikova, S., & Preneel, B. (2008). Practical RSA Threshold Decryption for Things That Think. ... 21 views Let the message space and cipher space be given by M= C ={00; 01; 10; 11} and the key space K = {k0; k1; k2; k3}. We denote M= {m0;m1;m2;m3} and the probability distribution of message variable M ... 14 views ### Can anyone explain about Correlation and Auto correlation attacks with easy examples? [on hold] I find examples like 1) Find the correlation and auto correlation of a and b. and b has some data in it. 27 views ### With wrong IV at receiver side, the CFB in better than OFB? Assume the receiver have a wrong IV (initialization vector), in the CFB mode only the first block of plaintext is wrong but in OFB mode the second and all blocks will be affected. Is that correct? ... 75 views ### Why isn't CTR mode (counter mode) used more often? For the CTR mode, the design is good for parallelization, yes, it seems the benchmark of the program downloaded from crypto++ proves that on an Intel I7 CPU. My question is that as most of CPU on ... 17 views ### Problem in understanding Blakley's Secret Sharing Scheme I need to implement Blakley's Secret Sharing Scheme. I have read below mentioned two research papers but still unable to understand how to implement it. Safeguarding cryptographic keys Two Matrices ... 68 views ### How are the AES inverse S-Boxes calculated? I would like to know, how to calculate the inverse S-box. I followed this link (with affine transformation first, then multiplicative inverse), but the result is wrong. 
For example, if I use the value ... 29 views ### Hardness assumptions on composite order bilinear groups I am not at all knowledgeable in elliptic curve cryptography. So, here lies a couple of questions that I failed to find answers for to my satisfaction. Is there any known Type-III bilinear pairing ... 45 views ### Is there a strong cryptographic reason for GCM's 2^39 - 256 bit limit? In reading through the original GCM specification (McGrew & Viega '05), the composition of the 128 bit Initialization Vector as a concatenation of a 96b nonce and a 32b unsigned wrapping counter ... 34 views ### Is signing a message just encrypting it with private key? [duplicate] I have a simple question, When I sign a message (in RSA) does the program encrypt the text with my private key? Because I can decrypt it with my public key. Yes or no. Is signing a message just ... 75 views ### Why can't you just clone encrypted data and use it? Let's take a contactless card for example (one for public transport lets say (i.e. MiCard)), the data is encrypted on the card and encrypted on the reader as far as I know. So how come somebody who ... 19 views ### How to verify pgp digital signature + generic cipher question When receiving emails, I sometimes see the following: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I found an integer overflow in PHP, in the conversation of dates to "Julian Day Count" ... 45 views ### In ECC, how do I prove that point addition is commutative? I am studying elliptic curve cryptography and this question is related to the commutative property of point addition operation. Point addition $P_3(x_3,y_3)$ of two points $P_1(x_1, y_1)$ and ...
59 views ### Books on cryptography mathematics [on hold] I am currently reading the Book Understanding Cryptography from Cristof Paar. I am enjoying the book but i don't like to scratch the surface when it comes to cryptography. I would like do dig a little ... 31 views ### Crypto library that supports ElGamal PET and Mixnets I am looking for a crypto library that supports VOTING primitives. Specifically: - Zero-knowledge Proofs - Mixnets - ElGamal Plaintext Equality Test (PET) - Commitments UniCrypt is one of the ... 68 views ### Proving that an encryption scheme is susceptible to certain attacks I'm currently trying to prove the following: Where p is a prime number of cryptographic size, prove that: e(m) = am + b (mod p) where a and b are private is open to a known plaintext attack e(m) = ... 64 views ### How does BLAKE2 ensure that hash(A) != hash(B) when B = A||0 and both A & B have the same number of blocks? Suppose I have two arrays : A = a single byte, being zero. B = two bytes, both being zero. B = A||0 (i.e. B starts with A, and differs only by appending a zero byte) BLAKE2(A) != BLAKE2(B) Yet, ... 37 views ### Need some help to break a cypher text using unknown cypher [on hold] I am a programmer and studied quite a bit of math, but I am a cryptography newbie. I browsed through this website, and I am awed. I need help decrypting a letter that was given to me as a ... 70 views ### RSA: how does it work and how is it more secure than symmetric systems Before starting the question, I know that RSA is naturally a key exchange cryptosystem rather than an antilogarithm fully implemented for encrypting and decrypting data since the length of the secret ... 652 views ### Decrypt files with original file CTB-Locker I have problem called CTB-Locker. It encrypted all of my files on computer and since I have lot of documents that are very important I am in problems! As I read online CTB-Locker uses "elliptical ... 
65 views ### Will current encryption always remain secure? Is there mathematical proof? Current encryption algorithms, such as AES, RSA, elliptic curve, etc. work based on known mathematical problems. I am specifically interested in the RSA. Will such security always remain secure? One ... 43 views ### Is it a variant of Strong RSA assumption? Informally, the hardness of RSA/Strong RSA assumption lies in the hardness of factoring a large composite number $N$ having two large primes as its factors. If RSA modulus $N$ is a prime number, then ... 26 views ### Decrypt Encrypted File on the fly and view to user I have encrypted files on my Web server. Each file is encrypted with a unique symmetric KEY. All symmetric encryption keys are stored on db with public KEY encryption. On user side; Users are logged ... 25 views ### Cipher for human interpretation [duplicate] Please suggest algorithms to encrypt and decrypt text messages that can be easily performed by human in a considerably short time. It doesn't have to be really secure, but sufficient to fool ... 30 views ### Is it possible: Derived key based on variable number of private keys? I have a MySQL database that I want to encrypt with AES_ENCRYPT() and have to provide access to a variable and possibly changing number of users. Is it possible to derive an encryption key based on a ... 53 views ### Achieving 32-bit verification code with 16-bit CRC? I am programming an embedded chip that has a hardware 16-bit CRC module. I have to protect some data bytes $d_0,d_1,...,d_{n-1}$ against corruption caused by sudden loss of power; a 32-bit CRC would ... 40 views ### What is contained in a RSA key file? [duplicate] Consider a RSA key file like this: ... 29 views ### Which of these cipher suites is more secure? [on hold] In an application that will transfer sensitive data over TLS to an online server, which of these cipher suites will provide the best long-term protection (assuming the transmission will be sniffed by ... 
39 views ### Spritz cipher sponge function capacity Rivest and Shuldt proposed a new sponge like cipher algorithm called Spritz: http://people.csail.mit.edu/rivest/pubs/RS14.pdf In this paper they say that the strength of the cipher is related to the ... 48 views ### Converting a number to a member of a multiplicative cyclic group I am currently trying to make an implementation of the ElGamal encryption for educational purposes. As I understand it, when using the encryption with multiplicative cyclic groups, one generates a ... 88 views ### Why is factoring $p-1$ easy when $p$ is a safe prime? A paper states: [...] $(p,g,y)$ is a correct ElGamal public key if $g^x=y\pmod p$. To verify this the order of $g$, and thus the factorization of $p-1$, is needed. This is easy for safe primes ... 35 views ### Why do Certificate Authorites cross-sign each other? [on hold] In cryptography, a Certificate Authority (CA) issues digital certificates to certify the ownership of a public key by the named subject of the certificate. What is the rationale for CAs cross-signing ... 29 views ### Combined message separation I have two ciphertexts, I suppose that its RC4 with reused key. I have XORed both ciphertexts and obtained message containing combined cleartexts. I suppose that the underlaying messages are written ... 9 views ### Data anonymization and search with wild-cards Broadcast encryption technique proposed by Water et al. http://www.slideserve.com/nairi/revocation-systems-with-very-small-private-keys takes a leap from earlier schemes as it provides smaller user ... 24 views ### ElGamal and Schnorr groups As I gather, a normal practice for choosing a cyclic group for ElGamal key generation is to find a safe prime $p$ and use a multiplicative cyclic group with modulus $p$ and order $q = (p-1)/2$. ... 31 views ### PRGs from OWFs: Implementations? 
This is a fairly common topic but I have a specific question given recent developments in theoretical PRGs from one-way function (OWFs). In theory, we have PRGs iff OWFs exist. The first construction ... 89 views ### Finding strong primes Wikipedia lists the following conditions for a prime to be strong: $p-1$ has large prime factors. That is, $p = a_1 q_1 + 1$ for some integer $a_1$ and large prime $q_1$. $q_1-1$ has large prime ... 35 views ### Is it possible to perform one-way Diffie-Hellman MITM? Here's something that is bugging me recently: suppose that me and my friend establish an OTR session and - as a result of that - DH key exchange is performed. My friend verifies my key, but I cannot ... 54 views ### TLS 1.2 Handshake: How is the ECDHE public key signed by server? I am dealing with a situation where a cipher option, such as ECDHE-ECDSA-AES128-SHA, is chosen for establishing a TLS connection. In this case, a server, when sending the ServerKeyExchange message to ...
2015-01-29 00:16:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.75970059633255, "perplexity": 2598.576441826506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122233086.24/warc/CC-MAIN-20150124175713-00244-ip-10-180-212-252.ec2.internal.warc.gz"}
http://pacianca.it/density-of-co2-at-stp.html
# Density of CO2 at STP

The density of a gas is its mass per unit volume, usually reported in grams per liter (g/L). For carbon dioxide the molar mass is 12.01 + 2(16.00) = 44.01 g/mol. Assuming that carbon dioxide behaves ideally, its density follows from the ideal gas law, PV = nRT. Replace n with m/M, where m is the mass and M is the molar mass:

PV = (m/M)RT

Rearranging gives P = (m/V)(RT/M), and since m/V = d (the density),

d = PM/(RT)

At standard temperature and pressure (STP) this works out to about 1.96 g/L for CO2, roughly 1.5 times the density of air. That is why CO2 tends to sink and is comparatively easy to contain and work with. The solid state of CO2, commonly called "dry ice," is far denser, about 1.56 g/mL.
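The algebra above is easy to check numerically. Here is a minimal Python sketch (the function and variable names are mine, not from the original page), assuming ideal-gas behavior:

```python
# Density of an ideal gas from d = P*M / (R*T)
R = 0.082057  # ideal gas constant, L·atm/(mol·K)

def gas_density(molar_mass_g_mol, pressure_atm=1.0, temp_k=273.15):
    """Return the density in g/L of an ideal gas at the given conditions."""
    return pressure_atm * molar_mass_g_mol / (R * temp_k)

co2 = gas_density(44.01)      # CO2 at STP (0 °C, 1 atm)
air = gas_density(28.97)      # dry air at STP, using the average molar mass of air
print(round(co2, 2))          # ~1.96 g/L
print(round(co2 / air, 2))    # CO2 is ~1.5x denser than air
```

The same function handles any ideal gas and any conditions; only the molar mass changes from gas to gas.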
## STP and molar volume

At STP, 1 mole of any ideal gas occupies 22.4 L (22,400 mL); this volume is called the molar volume. STP, standard temperature and pressure, is defined as 0 °C (273.15 K, 32 °F) and 1 atm (101.325 kPa, 760 torr, 14.7 psia). Two related reference states appear in gas tables: NTP (normal temperature and pressure), 20 °C and 1 atm, and RTP (room temperature and pressure), 25 °C and 1 atm. If you discuss gas density at any other set of conditions, you drop the word "standard" and specify the pressure and temperature.

Because the molar volume at STP is the same for every ideal gas, the density at STP is simply

density = molar mass / molar volume

For carbon dioxide: 44.01 g/mol ÷ 22.4 L/mol ≈ 1.96 g/L. The same method works for any gas: helium gives 4.00/22.4 ≈ 0.179 g/L, nitrogen 28.02/22.4 ≈ 1.25 g/L, and sulfur dioxide 64.07/22.4 ≈ 2.86 g/L. For comparison, the density of dry air at 0 °C and 760 mm is 1.2929 g/L (Horowitz, Irving L.), so CO2 is about 1.5 times as dense as air, while hydrogen and helium are far lighter.
## Densities of common gases at STP

Handbook values for gas densities at 0 °C and 1 atm, converted here to g/L (some tables quote the same numbers in g/mL, e.g. 0.001977 g/mL for carbon dioxide):

| Gas | Density (g/L) |
|---|---|
| Hydrogen (H2) | 0.0899 |
| Helium (He) | 0.178 |
| Nitrogen (N2) | 1.25 |
| Carbon monoxide (CO) | 1.25 |
| Air, dry | 1.293 |
| Oxygen (O2) | 1.43 |
| Carbon dioxide (CO2) | 1.977 |

A gas with a small molar mass has a low density: at fixed temperature and pressure, gas density is directly proportional to molecular weight. Away from STP, density follows d = PM/(RT), so it rises with pressure and falls with temperature. This is also why atmospheric density decreases as altitude increases: air pressure falls with height.
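Measurements taken away from STP can be corrected back to standard conditions with the combined gas law, P1V1/T1 = P2V2/T2. A short Python sketch (the example numbers come from one of the page's worksheet problems, a gas occupying 450 mL at 120 °C; the function name is my own):

```python
# Combined gas law: P1*V1/T1 = P2*V2/T2, solved for the volume at STP.
STP_T = 273.15  # K
STP_P = 760.0   # torr

def volume_at_stp(v_ml, p_torr, t_celsius):
    """Correct a measured gas volume to STP (0 °C, 760 torr)."""
    t_k = t_celsius + 273.15
    return v_ml * (p_torr / STP_P) * (STP_T / t_k)

# 450 mL collected at 120 °C and 760 torr shrinks on cooling to 0 °C:
print(round(volume_at_stp(450, 760, 120), 1))  # ~312.6 mL
```

Once the volume is at STP, dividing by 22.4 L/mol gives the number of moles directly.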
## Molar mass from gas density

The ideal gas law can be rearranged to calculate the molar mass of an unknown gas from a density measurement. Knowing that PV = nRT and n = mass/M:

PV = (mass/M) RT, so M = (mass × R × T)/(P × V) = dRT/P

where d is the density (mass per volume). At STP this reduces to M = d × 22.4 L/mol. Example: if the STP density of an unidentified gas is 1.85 g/L, its molar mass is 1.85 g/L × 22.4 L/mol ≈ 41.4 g/mol. If the sample is not at STP, either apply M = dRT/P directly at the measured conditions, or use the combined gas law to convert the volume to STP first and then apply the 22.4 L/mol molar volume.

Gases are also compared by specific gravity, S = M/M_air, where S is the gas specific gravity, M is the gas molecular weight, and M_air = 28.97 g/mol. For CO2, S = 44.01/28.97 ≈ 1.52, which is why it pools in low-lying spaces.
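As a sketch of the M = dRT/P route (function and variable names are mine), using the unidentified gas with an STP density of 1.85 g/L that appears in the page's worked problems:

```python
# Molar mass of an unknown gas from its measured density: M = d*R*T/P
R = 0.082057  # ideal gas constant, L·atm/(mol·K)

def molar_mass(density_g_per_l, pressure_atm=1.0, temp_k=273.15):
    """Return the molar mass in g/mol implied by a gas density measurement."""
    return density_g_per_l * R * temp_k / pressure_atm

# An unidentified gas with an STP density of 1.85 g/L:
print(round(molar_mass(1.85)))  # ~41 g/mol
```

At non-STP conditions you simply pass the measured pressure and temperature instead of relying on the defaults.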
## CO2 from combustion

Hydrocarbon fuels burn in oxygen to form carbon dioxide and water. Balanced equations for methane, propane, and butane:

- CH4 + 2 O2 → CO2 + 2 H2O
- C3H8 + 5 O2 → 3 CO2 + 4 H2O
- 2 C4H10 + 13 O2 → 8 CO2 + 10 H2O

Stoichiometry then relates moles of fuel to the mass or STP volume of CO2 produced. Burning 1.00 mol of propane:

1.00 mol C3H8 × (3 mol CO2 / 1 mol C3H8) × (44.0 g CO2 / 1 mol CO2) = 132 g CO2 (1.32 × 10² g)

and, since each mole of gas occupies 22.4 L at STP, that CO2 fills 3 × 22.4 = 67.2 L. By Avogadro's law, equal volumes of gas at the same temperature and pressure contain equal numbers of moles, so gas volumes react in the same ratios as moles: 11.2 L of methane (0.500 mol at STP) yields 11.2 L of CO2 at STP. For octane, 2 C8H18 + 25 O2 → 16 CO2 + 18 H2O, which works out to 3.08 g of CO2 per gram of octane burned.
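The propane calculation above can be scripted the same way. A minimal Python sketch (names and structure are mine; the stoichiometric factor of 3 comes from the balanced equation C3H8 + 5 O2 → 3 CO2 + 4 H2O):

```python
# Mass and STP volume of CO2 produced by burning propane.
M_CO2 = 44.01   # molar mass of CO2, g/mol
VM_STP = 22.4   # molar volume of an ideal gas at STP, L/mol

def co2_from_propane(mol_propane):
    """Return (grams of CO2, liters of CO2 at STP) for a given amount of C3H8."""
    mol_co2 = 3 * mol_propane           # 3 mol CO2 per mol C3H8
    return mol_co2 * M_CO2, mol_co2 * VM_STP

grams, liters = co2_from_propane(1.00)
print(round(grams), round(liters, 1))   # 132 g, 67.2 L
```

Swapping in the mole ratio from another balanced equation (1 for methane, 8/2 for butane) adapts the same function to other fuels.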
Carbon dioxide (%) Hydrogen sulfide (%) Carbon monoxide (%) Natural gas density 0 6 12 18 24 30 36 42 48 54 60 0 100 200 300 400 Pressure (MPa) Density (kg/m³). I also calculate the molar mass of a gas from density at STP. 882kg/m 3 at 0°C & 1ATM (0 psig), which is STP (Standard Temperature & Pressure), the difference being a lower temperature. John Dalton: (On the right) Focused on _____ of gas. Materials have density. Take it from there. 210 g of a mixture of CH4 and C2H6 gives 3. What was the mass (in grams) of gas produced? What is the volume of 77. So if you divide 17. moles to liters. The solubilities increased almost linearly with pressure. The molar mass for KClO3 is 122. Reference: G. Unless otherwise noted, the values refer to a pressure of 100 kPa (1 bar) or to the saturation vapor pressure if that is less than 100 kPa. "Density of Dry air at 0 °C & 760 mm = 1. 2668 g and has a STP molar volume of 175. Solution: 1) Let assume the presence of 1. Calculate the density of N2 at STP. 157 g of the compound occupies l25 mL with a pressure of 99. 7 psia, 0 psig, 30 in Hg, 760 torr) 1 lb m /ft 3 = 16. What volume of oxygen at STP is necessary to react with 1. Specific Heat at Const. 75 x 1024 molecules of ammonia gas, NH3, at STP? A. density= mass/volume volume at stp is 22. (Assume that all volumes are measured at STP). 42 atm at 355 K. 29g/L) Briefly Comment On The Probable Validity Of The Assumption That The Air In The Flask Is Displaced By The CO2 Gas. Exercise 1-1 Calculate the number densities of air and CO2 at sea level for P = 1013 hPa, T = 0oC. Answer by chemteach (100) Dividing the molar mass of SO2 (64. 60 L at 789 torr? View the step-by-step solution to: Question. It is a colorless, dense, odorless noble gas found in Earth's atmosphere in trace amounts. 75 L CO2 = 7. 1 mole of CO2 has a mass of 12 + 32 = 44 g. When the temperature of the gas is 300 K, the pressure in the container is 110 kPa. 
3 SCF/lb-mole)÷(MW of gas in lbs/lb-mole) The ideal gas law conversion factor used above is based on the relationship of 1 lb-mole of an ideal gas occupies approx. 49 kJ (endothermic). Example: 1. CH 4, methane gas. 001977: Carbon monoxide: 0. 9 Barrer (1 Barrer = 1×10⁻¹⁰cm³ (STP)cm /cm²scmHg) with a CO2/N2 selectivity of 22. Density is not measured directly but is calculated from measurements of temperature, pressure and humidity using the equation of state for air (a form of the ideal gas law). 09 g / litre He 4g/22. T = 100 + 273 = 373 K. Chemistry 11 – Mole Concept Study Guide 9 Ideal Gas: 1. Why would the temperature and pressure be important considerations in the volume of a gas?. Therefore the density of CO2 at Standard Pressure and Temperature is 1. This chart gives the thermal conductivity of gases as a function of temperature. 1) The gas ethane (C2H2) combusts with 10. 3 kPa, or 1 atmosphere (atm). 64: Gas Density @ 70°F 1 atm (lb/ft3) 0. 4 liters of volume. The sample pressure was determined to 0. Nm 3 (normal cubic meter) gas measured at 1 atmosphere and 0°C. 650 Hydrochloric Acid 1. Consistent with the fact that there was no water present in the reactant stream, no gas-phase C 1 , C 2 , or C 3 hydrocarbons were detected by the gas. 1 atm and 273 K b. 064 g SO2 1 mol. Density can be obtained by dividing the mass (m) of an object by the. 15 K, 32 o F) and 1 atm (101. Calculate the empirical formula of cyclopropane. The densities of most of the pure elements. GAS DENSITY IS DIRECTLY RELATED TO MOLECULAR WEIGHT. what is the density of co2(g) at stp? 44 g (1 mole) occupies 22. An example of varying density for a useful purpose is the hot air balloon, which consists of a bag (called the envelope) that is capable of containing heated air. Carbon dioxide has a density greater that air, so it will not rise like these other gases would. 315*273) = X grams. What is the molar mass of gas that has a density of 1. The volume of CO2 evolved at STP - 2732957. 
The molar volume is a constant value for gases kept at STP. 325 kPa, 14. Multiply the volume, in liters, by 1. 0224 m3/mol = -35818. 15 K, 32 o F) and 1 atm (101. The density of dry air can be. Standard Molar Volume is the volume occupied by one mole of any gas at STP. "The density of air at sea level is approximately 1/800th the density of water. Since there are no lone pairs on the atom, it is a linear structure which makes the charges cancel it. 57 grams of HgO? b. Finding Gas Density and Molecular Weight. Say what? Carbon is pronounced as KAR-ben. 1) molar mass of He = 4g volume occupies by 1 mole of He at STP = 22. Explanation: Density is mass divided by volume (D = M/V). Calculation of thermodynamic state variables of carbon dioxide at saturation state, boiling curve. 0899: ammonia: 0. What volume of carbon dioxide at STP is generated as a result of the combustion of 1. 486 Kerosene 0. 0 grams of nitrogen dioxide gas at STP?. It is the same for all gases. 4 Liters as it is for all gases, and we know that CO2 = 44Grams/mol (weight) so 44 / 22. 528 Gasoline 0. Molar mass = density at STP x molar volume at STP Example 8: The density of a gaseous compound containing carbon and oxygen is found to be 1. 562 g/mL This accounts for a factor. Multiply the volume, in liters, by 1. 2kg/m³ at STP. Density of CO2 at STP = 44 / 22. 4 L the answer is 0. 25 GJ of gas and 366 kWhr of electricity, per ton of CO2 captured. The density of carbon dioxide at STP is 0. of one mole of any gas at STP is 22. ) ' and find homework help for other Science. STP is at 1 atm and 273 K. Which of the following has maximum root mean square velocity at the same. Molar mass CO2 = 44. 05 L at RTP? 5. Avogadros Law and Molar Volume at STP ( 1 mole of any gas = 22. 60 L at 789 torr? View the step-by-step solution to: Question. 0821 L-atm/K-mol V=volume n=mole. 78 L) Ans: 7,500 L. 325 kN/m 2, 101. I also calculate the molar mass of a gas from density at STP. 
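The two calculations that recur above (gas density from molar mass at STP, and molar mass from a measured density via the ideal gas law) can be sketched as follows; this is an illustrative snippet using the standard ideal-gas approximations, not part of the original material:

```python
# Ideal-gas shortcuts: at STP one mole of any gas fills 22.4 L,
# so density = molar mass / 22.4; away from STP use PV = nRT, i.e. M = dRT/P.

R = 0.0821        # gas constant in L*atm/(K*mol)
V_M_STP = 22.4    # standard molar volume in L/mol

def density_at_stp(molar_mass_g_per_mol):
    """Gas density in g/L at 0 degC and 1 atm."""
    return molar_mass_g_per_mol / V_M_STP

def molar_mass_from_density(density_g_per_l, pressure_atm, temp_k):
    """Molar mass in g/mol from a measured density: M = dRT/P."""
    return density_g_per_l * R * temp_k / pressure_atm

print(round(density_at_stp(44.0), 2))  # CO2 at STP: 1.96 g/L
print(round(density_at_stp(28.0), 2))  # N2 at STP: 1.25 g/L
```

The same `molar_mass_from_density` rearrangement handles the non-STP exercises (e.g. a vapor density measured at 714 torr and 398 K) once the pressure is converted to atmospheres.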
https://zbmath.org/?q=an:1054.53094
# zbMATH — the first resource for mathematics

Divergence operators and odd Poisson brackets. (English) Zbl 1054.53094

Summary: We define the divergence operators on a graded algebra, and we show that, given an odd Poisson bracket on the algebra, the operator that maps an element to the divergence of the hamiltonian derivation that it defines is a generator of the bracket. This is the “odd Laplacian”, $\Delta$, of Batalin-Vilkovisky quantization. We then study the generators of odd Poisson brackets on supermanifolds, where divergences of graded vector fields can be defined either in terms of Berezinian volumes or of graded connections. Examples include generators of the Schouten bracket of multivectors on a manifold (the supermanifold being the cotangent bundle where the coordinates in the fibres are odd) and generators of the Koszul-Schouten bracket of forms on a Poisson manifold (the supermanifold being the tangent bundle, with odd coordinates on the fibres).

##### MSC:

53D17 Poisson manifolds; Poisson groupoids and algebroids
17B70 Graded Lie (super)algebras
17B63 Poisson algebras
58A50 Supermanifolds and graded manifolds
70G45 Differential geometric methods (tensors, connections, symplectic, Poisson, contact, Riemannian, nonholonomic, etc.) for problems in mechanics
53C05 Connections, general theory
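For orientation, the generating property mentioned in the summary can be made explicit. In one common sign convention (sign conventions differ between authors, so this is indicative rather than a quotation from the paper), an odd operator $\Delta$ generates an odd Poisson bracket $[\cdot,\cdot]$ when the bracket measures the failure of $\Delta$ to be a derivation of the product:

```latex
[a,b] = (-1)^{|a|}\left( \Delta(ab) - \Delta(a)\,b - (-1)^{|a|}\, a\,\Delta(b) \right)
\quad \text{for homogeneous } a, b.
```

In Batalin-Vilkovisky quantization one additionally requires $\Delta^2 = 0$, which makes the graded algebra together with this bracket and generator a BV algebra.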
https://math.stackexchange.com/questions/1113613/probability-of-choosing-a-graph-with-hamiltonian-cycle
# Probability of choosing a graph with Hamiltonian cycle

Given $N$ labeled points in a plane one can construct $2^{N(N-1)/2}$ graphs (unweighted, undirected) with them. Is there any theorem that gives the probability that a graph chosen at random from these has a Hamiltonian cycle? Does a similar result exist for Eulerian cycles?

• This seems reasonable for Eulerian cycles, since it comes down to parity of vertex degrees. Without a characterization of when graphs support a Hamilton cycle, such a result would surprise me, although there might be bounds corresponding to necessary or sufficient conditions for Hamiltonian cycles. – Brian Hopkins Jan 21 '15 at 15:14
• I'm hoping that at least there are some bounds on the probabilities, even if exact results are not known – biryani Jan 21 '15 at 15:16
• To compute some initial values of the probability, oeis.org/A003216 gives counts of Hamiltonian graphs by number of vertices up to 11. However, that counts nonisomorphic graphs, which would each appear several times in the construction behind the 2^(n(n-1)/2) count, so there would be more work than just dividing the OEIS numbers by the powers of 2. – Brian Hopkins Jan 21 '15 at 15:37

Yes, a lot of work has been done on these kinds of questions. Choosing a graph on $n$ vertices at random is the same as including each edge in the graph with probability $\frac{1}{2}$, independently of the other edges. You get a more general model of random graphs if you choose each edge with probability $p$. This model is known as $G_{n,p}$. It turns out that for any constant $p>0$, the probability that $G$ contains a Hamiltonian cycle tends to 1 when $n$ tends to infinity. In fact, this is true whenever $p>\frac{c \log(n)}{n}$ for some constant $c$. In particular this is true for $p=\frac12$, which is the setting that you describe.
Regarding Eulerian cycles: an Eulerian cycle exists iff the graph is connected (apart from isolated vertices) and the degrees are all even. Each degree is even with probability about $\frac12$; in fact, in this model the probability that all $n$ degrees are even is exactly $2^{-(n-1)}$, and graphs with all degrees even are connected with high probability, so the probability that there is an Eulerian cycle is about $2^{-(n-1)}$, i.e. exponentially small.
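As an illustration (not part of the original thread), for small $N$ the exact counts can be checked by brute force. The sketch below enumerates all $2^{\binom{4}{2}} = 64$ labeled graphs on 4 vertices and counts those containing a Hamiltonian cycle and those containing an Eulerian cycle (taken here to mean: at least one edge, all degrees even, and the non-isolated vertices connected):

```python
from itertools import combinations, permutations

# Exhaustive count over all labeled graphs on n = 4 vertices.
n = 4
possible_edges = list(combinations(range(n), 2))

def hamiltonian(edges):
    # try every cyclic ordering of the vertices, anchored at vertex 0
    for tail in permutations(range(1, n)):
        order = (0,) + tail
        if all(frozenset({order[i], order[(i + 1) % n]}) in edges
               for i in range(n)):
            return True
    return False

def eulerian(edges):
    if not edges:
        return False
    deg = [sum(v in e for e in edges) for v in range(n)]
    if any(d % 2 for d in deg):
        return False
    # the non-isolated vertices must form one connected component
    active = {v for v in range(n) if deg[v] > 0}
    seen, stack = set(), [min(active)]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack += [u for e in edges if v in e for u in e if u != v]
    return seen == active

ham = eul = 0
for mask in range(2 ** len(possible_edges)):
    edges = {frozenset(e) for i, e in enumerate(possible_edges) if mask >> i & 1}
    ham += hamiltonian(edges)
    eul += eulerian(edges)

print(ham, eul)  # 10 and 7 of the 64 labeled graphs on 4 vertices
```

Dividing by 64 gives the exact probabilities 10/64 and 7/64 for $N = 4$; the all-degrees-even count alone is $2^{6-3} = 8$ (the seven Eulerian graphs plus the empty graph).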
https://cstheory.stackexchange.com/tags/upper-bounds/hot
# Tag Info 16 If what you are studying worked out, it definitely would not be trivial. It would imply that 3SAT has (non-uniform) circuits of size $n^{O(\log n)}$. Then, every language in $NP$ (and the polynomial time hierarchy) would have quasi-polynomial (i.e., $n^{O(\log^c n)}$) size circuits. Even if it took $2^{2^n}$ preprocessing time to produce a data structure ... 14 It is worth noting that the problem becomes NP-hard when the restriction is relaxed slightly. With a fixed number of clauses that are also of bounded size, the average number of literals in a clause is as close to 2 as one wants, by considering an instance with enough variables. As you point out, there is then a simple upper bound which is polynomial if ... 13 According to Theorem 3.1 in Alexis Maciel and Denis Therien Threshold Circuits of Small Majority-Depth there is indeed a depth-3 circuit for computing the addition of two numbers. The precise bound is $\Delta_2 \cdot \mathsf{NC}^0_1$ where $\Delta_2 = \Sigma_2 \cap \Pi_2$ are problems which have depth-2 $\mathsf{AC}^0$ circuits with both $\vee,\wedge$ ... 11 Timon Hertli, "3-SAT Faster and Simpler - Unique-SAT Bounds for PPSZ Hold in General", FOCS 2011. deterministic $O(1.308^n)$ for 3SAT. 11 I guess that the number of random variables $t$ and the threshold $t$ are different parameters, as otherwise $\Pr[|Y| \geq t] = 0$. Let $a_1, \dots, a_k, b_1, \dots, b_k\in_U \{\pm 1\}$ be iid random variables sampled uniformly at random from $\{\pm 1\}$ and $n=2^k$. Consider random variables $W_1,\dots, W_n$ of the form $c_1 \cdot c_2\cdot \dots \cdot c_k$ ... 11 One such algorithm for $\#3\operatorname{SAT}$ is due to Kutzkov. 11 First, you mean "sup" rather than "max", because it is easy to construct examples of regular languages, such as 00(011)*00 where there is no max. (The sup may not be attained.) Second, by "FSM" I assume you mean finite automaton. 
Then I claim that either the maximum bit density is achieved by a word of length < n, the number of states, or it is ... 9 If you’re looking for natural problems, you can compute many counting problems on planar graphs in time $\exp(\sqrt n)$ because of the planar separator theorem. For example, everything that can be expressed as a valuation of the Tutte polynomial [1]. Most of these problems remain #P-hard restricted to planar graphs, see Tutte Polynomial @ Wikipedia. [1] K. ... 9 Depth 2 circuits require exponential size to compute addition since a depth 2 circuit must be either DNF or CNF and it is easy to verify that there are exponentially many minterms and maxterms. Warning: the part below is buggy. See the comments under the answer. The way I count it, addition can be done in depth 3. Assume $a_i$ and $b_i$ are the $i$th bits ... 8 Ok, I got it. The answer is no. This can be solved in poly-time. For each 3-or-more-term clause, select a literal and set it to be true. Then solve the remaining 2-sat problem. If any one provides a solution, then that is a solution to the overall problem. Since the number of 3-or-more-term clauses is fixed (say c), then if all such clauses have size &... 7 You can use the usual switching lemma argument. You haven't explained how you represent your input in binary, but under any reasonable encoding, the following function is AC$^0$-equivalent to your function: $$f(x_1,\ldots,x_n) = \begin{cases} 0 & \text{if }x_1 - x_2 + x_3 - x_4 + \cdots - x_n = 0, \\ 1 & \text{if }x_1 - x_2 + x_3 - x_4 + \cdots - x_n \neq 0. \end{cases}$$ 7 I do not think this is in AC$^0$ and I can show a lower bound for the related promise problem of distinguishing between $\sum x_i = 0$ and $\sum x_i = 2$, when $x \in \{-1, 1\}^n$. Similar Fourier techniques should apply to your problem, but I have not verified that. Or maybe there is a simple reduction. Suppose there is a size $s$ depth $d$ circuit that ...
7 This is a partial (affirmative) answer in the case when we have an upper bound on the number of zeros in every row or in every column. A rectangle is a boolean matrix consisting of one all-1 submatrix and having zeros elsewhere. An OR-rank rk(A) of a boolean matrix is the smallest number r of rectangles such that A can be written as a (componentwise) ... 6 It's of the order 2^{0.30897m}, see http://logic.pdmi.ras.ru/~hirsch/abstracts/sodafull.html (I am not aware of improvements for the number of clauses.) 5 Here is some information on random instances of subset sum. This should give you a starting point at least. The main factor influencing the computational difficulty of solving (random instances of) subset sum is the relationship between the number of available terms, n, and the terms' size, M. (This is different than the 'possible combinations' idea you ... 5 The trivial upper bound of 2^n (on a graph with n vertices) is as tight as you can get, since a graph that has no edges does indeed have 2^n independent sets. 5 (Note: This answer works for most any consistent theory, not just ZFC.) We will define a machine p based on the universal algorithm. p does a search, looking for a string that represents a proof of a statement of the form "not (p halts and outputs n)" (note that this requires quining, since it is self-referential), for some numeral n, such that ... 4 Mihai Pătraşcu explained on his blog how to strengthen the variance bound of Chebyshev by looking at higher moments. He references "Chernoff-Hoeffding Bounds for Applications with Limited Independence" by Schmidt et al. You also might be interested in "Concentration of Measure for the Analysis of Randomized Algorithms" by Dubhashi and Panconesi. 4 Check out Lemma 4.4 in HesseAllenderBarrington - it may not be terribly useful for sequential complexity but says essentially that CRR (Chinese Remainder Representation) basis extension can be done in very uniform $\mathsf{TC}^0$.
The exact bound is \mathsf{FOM + POW} = \mathsf{FOM} (see also Corollary 6.2 of the same paper). 4 This is not a complete answer by any means, but just a quick estimate on \mathbb{E}[\sum_{i=1}^k X_{[i]}] that is slightly better than the trivial bound of O(k\sqrt{\log n}). If this is your goal, I would think it is easier to go directly for it than consider any given X_{[k]}. Let X_S=\sum_{i\in S} X_i for a subset S\subseteq [n] and Y_k=\sum_{i=... 3 Consider p=2q, q\ge 1. Asymptotically, the quantity you are after is 2^{4q-2}. First, let's prove a lemma of general interest. Lemma (2^{2q}/\sqrt{\pi q})/1.136 < \binom{2q}{q} < 2^{2q}/\sqrt{\pi q}. Proof: Recall the Robbins bounds$$ n! = \sqrt{2\pi}n^{n+1/2}e^{-n}e^{r_n}, $$where 1/(12n+1) < r_n < 1/(12n). This gives$$ \binom{... 3 I don't know whether your result -- if valid -- would be a non-trivial advance, but here is one sort of problem you could test it on: Problem. Fix a function $f:\{0,1\}^n \to \{0,1\}^n$. Given $y \in \{0,1\}^n$, find $x \in \{0,1\}^n$ such that $f(x)=y$. If $f$ can be computed efficiently (say, by a small circuit), your result implies some sort of ... 3 This is not the best bound even for $q=2$; in fact, this is not the best bound derived from the Delsarte linear program; see the paper "On the optimum of Delsarte's linear program" by Samorodnitsky (1998). Thus, a better analysis of the linear program is likely to improve the bounds over larger $q$. Even for $q=2$, this is a complicated analysis, so I don'... 3 The best deterministic algorithm for 3-SAT now has upper bound 1.32793^n, see https://arxiv.org/abs/1804.07901 by Sixue Liu. Basically the upper bounds for all k-SAT have been improved in this paper. 3 Consider a BFS exploration process, which proceeds in $k$ stages. Put $V_0 = \{u\}$. 
Given $V_0,\ldots,V_i$, explore all edges from $V_i$ to $V \setminus \bigcup_{j=0}^i V_j$ (where $V$ is the set of all vertices), and set $V_{i+1}$ to consist of all vertices reached in this fashion; their number has a binomial distribution which can easily be calculated. ... 3 The bound is $2^{\min(n, m)}$. It is an upper bound because no two "formal concepts" (i.e., closed itemsets with their respective transaction sets) can have the same subset of items or the same subset of transactions. Considering $D$ as an $n$ by $m$ matrix of $0$ or $1$ such that each cell indicates whether item $i$ is part of the $j$-th transaction of $D$, ... 3 The supremum bit density will either be achieved by a finite word $v$ in the language, or by the limiting bit density of some sequence $u v w, u v^2 w, u v^3 w, \ldots$ of words in the language, which equals the bit density of $v$. In both cases, we have that $|v| \leq n$ without loss of generality, where $n$ is the number of states in the finite automaton. ... 3 Let $\ell$ be the length of the longest common substring. The number of longest common substrings $m$ is at most $$m \leq \min(k^\ell,n-\ell+1).$$ Let $x = \log_k n$. If $\ell \leq x-1$ then $m \leq n/k$. Otherwise, $m \leq n-\log_k n+2$. One checks that the latter bound is always worse, and so $m \leq n-\log_k n+2$. 3 If $I(n,m)$ denotes the maximal number of independent sets in a graph with $n$ vertices and $m$ edges. $I(n,n-1) = 2^{n-1}+1$ is achieved by a star (should be easy to prove, start by proving that any graph with a matching of size $3$ has at most $3^3\times 2^{n-6}$ independent sets, then show that we can not have two node disjoint paths of length $3$ and no ...
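As a quick numerical sanity check (not part of the original page), the central-binomial lemma quoted in one of the answers above, $(2^{2q}/\sqrt{\pi q})/1.136 < \binom{2q}{q} < 2^{2q}/\sqrt{\pi q}$, can be verified directly:

```python
from math import comb, pi, sqrt

# Check the lemma for a range of q; the bounds come from Robbins'
# refinement of Stirling's formula applied to (2q)! / (q!)^2.
for q in range(1, 500):
    central = comb(2 * q, q)
    upper = 4 ** q / sqrt(pi * q)
    assert upper / 1.136 < central < upper
print("lemma holds for 1 <= q < 500")
```

The constant 1.136 is essentially tight at $q = 1$, where the lower bound evaluates to about 1.99 against $\binom{2}{1} = 2$.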
https://emeneye.wordpress.com/2012/04/18/adding-drivers-to-a-windows-7-image-offline/
# Adding Drivers to a Windows 7 Image Offline

In my previous post I provided instructions on how to capture a Windows 7 image, whereas here I will cover how to add drivers to an offline .wim image which you may have captured previously. There are two ways to add drivers to an offline Windows 7 image; both require the use of DISM to mount the image and then add the drivers. The first is using an answer file with DriverPath entries pointing to device drivers and applying the answer file to the .wim image offline. The second is using DISM command line options to directly point to .inf files without the use of an answer file. But is there a third method? I cover this at the end of this post under Method 3.

# Method 1: Using Answer Files to Point to Device Drivers

I am familiar with using unattended answer files to install drivers as I covered this in Experiments with Sysprep, but that was as part of the Sysprep process. Adding drivers to an offline image is slightly different, however: you use DISM to apply the answer file to the .wim image instead of using Sysprep. I have previously provided step by step instructions on how to build unattended answer files using the Windows SIM tool so I will not cover this again. Instead the instructions here will be more in note form.

Using Windows SIM, add the amd64_Microsoft-Windows-PnpCustomizationsNonWinPE_neutral component to the offlineServicing configuration pass. In the Answer File pane expand the Microsoft-Windows-PnpCustomizationsNonWinPE component. Right-click DevicePaths, and select Insert New PathAndCredentials. In the Properties pane add the path to the device drivers in the Path field and type 1 in the Key field. If the path is in a network share then expand PathAndCredential in the Answer File pane, select Credentials and type your credentials in the Properties pane. You can add multiple driver path entries but for each entry make sure to increment the Key value each time.
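For reference, the answer-file fragment that these Windows SIM steps produce looks roughly like the sketch below. This is illustrative only: Windows SIM generates the exact component attributes for you, and the share path and credentials here are placeholders.

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend"
          xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
  <settings pass="offlineServicing">
    <component name="Microsoft-Windows-PnpCustomizationsNonWinPE"
               processorArchitecture="amd64" language="neutral"
               publicKeyToken="31bf3856ad364e35" versionScope="nonSxS">
      <DriverPaths>
        <!-- Key value 1; add further PathAndCredentials entries with
             incremented keys for additional driver folders -->
        <PathAndCredentials wcm:action="add" wcm:keyValue="1">
          <Path>\\server\share\drivers</Path>
          <Credentials>
            <Domain>EXAMPLE</Domain>
            <Username>driveruser</Username>
            <Password>password</Password>
          </Credentials>
        </PathAndCredentials>
      </DriverPaths>
    </component>
  </settings>
</unattend>
```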
The first driver path entry has a Key value of 1, the second driver path entry has a Key value of 2 and so on.

Mount the Windows image

Open the Deployment Tools Command Prompt and type something like this to mount the image offline

Dism /Mount-Wim /WimFile:C:\My-Wims\ref-win7-image.wim /Index:1 /MountDir:C:\wim-mount-dir

Apply the answer file to the mounted image using DISM

DISM /Image:C:\wim-mount-dir /Apply-Unattend:C:\unattend-answer-files\oflinedrivers.xml

Unmount the Windows Image

Dism /Unmount-Wim /MountDir:C:\wim-mount-dir /commit

Don’t forget the /commit switch otherwise changes to the image will not apply.

# Method 2: Using DISM to add individual INF files

This is a much easier method as it doesn’t involve the hassle of building answer files. Instead you issue DISM commands at the Deployment Tools Command Prompt to add .inf files to an image.

Mount the Windows Image

At the Deployment Tools Command Prompt, type something like this to mount the image

Dism /Mount-Wim /WimFile:C:\My-Wims\ref-win7-image.wim /Index:1 /MountDir:C:\wim-mount-dir

Add an .INF Driver to an Image

Dism /Image:C:\wim-mount-dir /Add-Driver /Driver:C:\my-drivers\audio-driver.inf

Multiple drivers can be added by simply pointing to a folder, which will install all .inf drivers found in that directory. To add drivers from subdirectories too, use the /Recurse switch

Dism /Image:C:\wim-mount-dir /Add-Driver /Driver:C:\my-drivers /Recurse

64-bit computers require drivers to have a digital signature (i.e. signed drivers). To get past this requirement use the /ForceUnsigned switch to install unsigned drivers

Dism /Image:C:\wim-mount-dir /Add-Driver /Driver:C:\my-drivers /ForceUnsigned

Unmount the Windows Image

Dism /Unmount-Wim /MountDir:C:\wim-mount-dir /commit

Again, without the /commit switch none of your changes will be saved to the image.
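The three DISM steps of Method 2 are easy to script if you have many images to service. The following is an illustrative sketch (not from the original post): a small Python wrapper that composes the same mount / add-driver / unmount commands shown above. It assumes dism.exe is available in an elevated prompt on Windows; with dry_run=True it only returns the command lines, and the paths passed in below are placeholders.

```python
import subprocess

def add_drivers_to_wim(wim_file, index, mount_dir, driver_dir, dry_run=True):
    """Mount a .wim image, inject all .inf drivers found under
    driver_dir (recursively), then unmount with /Commit."""
    steps = [
        ["dism", "/Mount-Wim", f"/WimFile:{wim_file}",
         f"/Index:{index}", f"/MountDir:{mount_dir}"],
        ["dism", f"/Image:{mount_dir}", "/Add-Driver",
         f"/Driver:{driver_dir}", "/Recurse"],
        ["dism", "/Unmount-Wim", f"/MountDir:{mount_dir}", "/Commit"],
    ]
    if not dry_run:
        for cmd in steps:
            subprocess.run(cmd, check=True)  # abort on the first failing step
    return steps

for cmd in add_drivers_to_wim(r"C:\My-Wims\ref-win7-image.wim", 1,
                              r"C:\wim-mount-dir", r"C:\my-drivers"):
    print(" ".join(cmd))
```

Running with dry_run=False executes the steps in order and stops immediately if any DISM invocation fails, which avoids leaving an image mounted with a half-applied driver set committed.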
# Method 3: Copy device drivers to C:\Windows\INF\

During the Windows 7 installation process Setup searches the C:\Windows\INF\ directory, including all subdirectories, for device drivers matching devices on the computer and installs them as part of the same process, ready to be used upon first log on. I initially based my theory on the fact that all in-box and out of box drivers are stored in this directory, so manually copying drivers here should allow Setup to find and install these drivers. I’ve found this to work absolutely fine in all my tests, including offline. (I’ve already mentioned how this works in Experiments with Sysprep and Preparing and Sysprep’ing the Reference Computer.) Here’s how this works in an offline scenario:

Mount the Windows Image

At the Deployment Tools Command Prompt, type something like this to mount the image

Dism /Mount-Wim /WimFile:C:\My-Wims\ref-win7-image.wim /Index:1 /MountDir:C:\wim-mount-dir

Copy device drivers to C:\wim-mount-dir\Windows\INF\

Device drivers must be .inf files and not .exe applications. You might want to organise them in subfolders, for example

C:\Windows\INF\MyDrivers\audio64
C:\Windows\INF\MyDrivers\ethernet64
C:\Windows\INF\MyDrivers\chipset64

etc.

Unmount the Windows Image

Dism /Unmount-Wim /MountDir:C:\wim-mount-dir /commit

When the .wim image is applied to a computer and turned on for the first time, the device drivers will install as part of the Windows installation process, no problem. Coming up next is Applying a Windows 7 image using ImageX.
https://pypi.org/project/goeffel/
Measures the resource utilization of a specific process over time

# Overview

Measures the resource utilization of a specific process over time. Also measures the utilization/saturation of system-wide resources: this helps putting the process-specific metrics into context.

Built for Linux. Windows and Mac OS support might come. For a list of the currently supported metrics, see below.

The name, Göffel, is German for spork: Convenient, right?

## Highlights

- High sampling rate: the default sampling interval of 0.5 s makes narrow spikes visible.
- Can monitor a program subject to process ID changes (for longevity experiments where the monitored process occasionally restarts, for instance as of fail-over scenarios).
- Can run indefinitely. Has predictable disk space requirements (output file rotation and retention policy).
- Keeps your data organized: the time series data is written into a structured HDF5 file annotated with relevant metadata (also including program invocation time, system hostname, a custom label, the Goeffel software version, and others).
- Interoperability: output files can be read with any HDF5 reader such as PyTables, and especially with pandas.read_hdf(). See tips and tricks.
- Values measurement correctness very highly (see technical notes).
- Comes with a data plotting tool separate from the data acquisition program.

The latest Goeffel release can be downloaded and installed from PyPI, via pip:

$ pip install goeffel

pip can also install the latest development version of Goeffel:

$ pip install git+https://github.com/jgehrcke/goeffel

# CLI tutorial

## goeffel: data acquisition

Invoke Goeffel with the --pid <pid> argument if the process ID of the target process is known. In this mode, goeffel stops the measurement and terminates itself once the process with the given ID goes away. Example:

$ goeffel --pid 29019
[... snip ...]
190809-15:46:57.914 INFO: Updated HDF5 file: wrote 20 sample(s) in 0.01805 s
[... snip ...]
190809-15:56:13.842 INFO: Cannot inspect process: process no longer exists (pid=29019)
190809-15:56:13.843 INFO: Wait for producer buffer to become empty
190809-15:56:13.843 INFO: Wait for consumer process to terminate
190809-15:56:13.854 INFO: Updated HDF5 file: wrote 13 sample(s) in 0.01077 s
190809-15:56:13.856 INFO: Sample consumer process terminated

For measuring beyond the process lifetime, use --pid-command <command>. In the following example, I use the pgrep utility for discovering the newest stress process:

$ goeffel --pid-command 'pgrep stress --newest'
[... snip ...]
190809-15:47:47.337 INFO: New process ID from PID command: 25890
[... snip ...]
190809-15:47:57.863 INFO: Updated HDF5 file: wrote 20 sample(s) in 0.01805 s
190809-15:48:06.850 INFO: Cannot inspect process: process no longer exists (pid=25890)
190809-15:48:06.859 INFO: PID command returned non-zero
[... snip ...]
190809-15:48:09.916 INFO: PID command returned non-zero
190809-15:48:10.926 INFO: New process ID from PID command: 28086
190809-15:48:12.438 INFO: Updated HDF5 file: wrote 20 sample(s) in 0.01013 s
190809-15:48:22.446 INFO: Updated HDF5 file: wrote 20 sample(s) in 0.01062 s
[... snip ...]

In this mode, goeffel runs forever until manually terminated via SIGINT or SIGTERM. Process ID changes are detected by periodically running the discovery command until it returns a valid process ID on stdout. This is useful for longevity experiments where the monitored process occasionally restarts, for instance as of fail-over scenarios.

## goeffel-analysis: data inspection and visualization

Note: goeffel-analysis provides an opinionated and limited approach to visualizing data. For advanced and thorough data analysis, I recommend building a custom (maybe even ad-hoc) data analysis pipeline using pandas and matplotlib, or using the tooling of your choice.

Also note: the command line interface provided by goeffel-analysis, especially for the plot commands, might change in the future.
Suggestions for improvement are welcome, of course.

### goeffel-analysis inspect:

Use goeffel-analysis inspect <path-to-HDF5-file> for inspecting the contents of a Goeffel output file. Example:

$ goeffel-analysis inspect mwst18-master1-journal_20190801_111952.hdf5
Measurement metadata:
  System hostname:         int-master1-mwt18.foo.bar
  Invocation time (local): 20190801_111952
  PID command:             pgrep systemd-journal
  PID:                     None
  Sampling interval:       1.0 s
Table properties:
  Number of rows:          24981
  Number of columns:       38
  Number of data points (rows*columns): 9.49E+05
  First row's (local) time: 2019-08-01T11:19:53.613377
  Last row's (local) time:  2019-08-01T18:52:49.954582
  Time span:               7h 32m 56s
Column names:
  unixtime
  ... snip ...
  system_mem_inactive

### goeffel-analysis plot: quickly plot data from a single time series file

The goeffel-analysis plot <path-to-hdf5-file> command plots a pre-selected set of metrics in an opinionated way. More metrics can be added to the plot with the --metric <metric-name> option. Example command:

goeffel-analysis plot \
    mwst18-master2-mesosmaster_20190801_112136.hdf5 \
    --metric proc_num_ip_sockets_open

Example output figure:

### goeffel-analysis flexplot: generic plot command

This command can be used, for example, for comparing multiple time series. Say you have monitored the same program across multiple replicas in a distributed system and would like to compare the time evolution of a certain metric across these replicas.
Then the goeffel-analysis flexplot command is here to help, invoked with multiple --series arguments:

$ goeffel-analysis flexplot \
    --series mwst18-master1-journal_20190801_111952.hdf5 master1 \
    --series mwst18-master2-journal_20190801_112136.hdf5 master2 \
    --series mwst18-master3-journal_20190801_112141.hdf5 master3 \
    --series mwst18-master4-journal_20190801_112151.hdf5 master4 \
    --series mwst18-master5-journal_20190801_112157.hdf5 master5 \
    --column proc_cpu_util_percent_total \
        'CPU util (total) / %' \
        'systemd journal CPU utilization ' 15 \
    --subtitle 'MWST18, measured with Goeffel' \
    --legend-loc 'upper center'

Example output figure:

# Background and details

## Prior art

This was born out of a need for solid tooling. We started with pidstat from sysstat, launched as pidstat -hud -p $PID 1 1. We found that it does not properly account for multiple threads running in the same process, and that various issues in that regard exist in this program across various versions (see here, here, and here).

The program cpustat open-sourced by Uber has a delightful README about the general measurement methodology and overall seems to be a great tool. However, it seems to be optimized for interactive usage (whereas we were looking for a robust measurement program which can be pointed at a process and then be left unattended for a significant while), and there does not seem to be a well-documented approach towards persisting the collected time series data on disk for later inspection.

The program psrecord (which effectively wraps psutil) has a similar fundamental approach as Goeffel; it however only measures few metrics, and it does not have a clear separation of concerns between persisting the data to disk, performing the measurement itself, and analyzing/plotting the data.

## Technical notes

- The core sampling loop does little work besides the measurement itself: it writes each sample to a queue.
A separate process consumes this queue and persists the time series data to disk, for later inspection. This keeps the sampling rate predictable upon disk write latency spikes, or generally upon backpressure. This matters especially in cloud environments, where we sometimes see fsync latencies of multiple seconds.
- The sampling loop is (supposed to be, feedback welcome) built so that timing-related systematic measurement errors are minimized.
- Goeffel tries to not asymmetrically hide measurement uncertainty. For example, you might see it measure a CPU utilization of a single-threaded process slightly larger than 100 %. That's simply the measurement error. In related tooling such as sysstat, it seems to be common practice to asymmetrically hide measurement uncertainty by capping values when they are known to in theory not exceed a certain threshold (example).
- goeffel must be run with root privileges.
- The value -1 has a special meaning for some metrics (NaN, which cannot be represented properly in HDF5). Example: a disk write latency of -1 ms means that no write happened in the corresponding time interval.
- The highest meaningful sampling rate is limited by the kernel's timer and bookkeeping system.

# Measurands

Measurand is a word! This section attempts to describe the individual data columns ("metrics"), their units, and their meaning. There are four main categories:

### Timestamps

#### unixtime, isotime_local, monotime

The timestamp corresponding to the right boundary of the sampled time interval.

- unixtime encodes the wall time. It is a canonical Unix timestamp (seconds since epoch, double-precision floating point number), with sub-second precision and no timezone information. This is compatible with a wide range of tooling and is therefore the general-purpose timestamp column for time series analysis (also see How to convert the unixtime column into a pandas.DatetimeIndex). Note: this is subject to system clock drift.
In extreme cases, this might go backward, have discontinuities, and be a useless metric. In that case, the monotime metric helps (see below).
- isotime_local is a human-readable version of the same timestamp as stored in unixtime. It is a 26-character text representation of the local time using an ISO 8601 notation (and therefore also machine-readable). Like unixtime, this metric is subject to system clock drift and might become pretty useless in extreme cases.
- monotime is based on a so-called monotonic clock source that is not subject to (accidental or well-intended) system clock drift. This column most accurately encodes the relative time difference between any two samples in the time series. The timestamps encoded in this column only make sense relative to each other; the difference between any two values in this column is a wall time difference in seconds, with sub-second precision.

### Process-specific metrics

#### proc_pid

The process ID of the monitored process. It can change if Goeffel was invoked with the --pid-command option. Momentary state at sampling time.

#### proc_cpu_util_percent_total

The CPU utilization of the process in percent. Mean over the past sampling interval.

If the inspected process is known to contain just a single thread, then this can still sometimes be larger than 100 % as of measurement errors. If the process runs more than one thread, then this can go far beyond 100 %.

This is based on the sum of the time spent in user space and in kernel space. For a more fine-grained picture, the following two metrics are also available: proc_cpu_util_percent_user and proc_cpu_util_percent_system.

#### proc_cpu_id

The ID of the CPU that this process is currently running on. Momentary state at sampling time.

#### proc_ctx_switch_rate_hz

The rate of (voluntary and involuntary) context switches in Hz. Mean over the past sampling interval.

#### proc_num_threads

The number of threads in the process. Momentary state at sampling time.
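The "mean over the past sampling interval" utilization metrics are derived from cumulative counters read at the interval boundaries. A minimal sketch of that derivation (a hypothetical helper for illustration, not Goeffel's actual code):

```python
def cpu_util_percent(prev_cpu_seconds, cur_cpu_seconds, wall_dt_seconds):
    """Mean CPU utilization over one sampling interval, in percent.

    The first two arguments are cumulative CPU seconds (user + system)
    consumed by the process, sampled at the interval boundaries; the third
    is the wall-clock length of the interval.
    """
    return (cur_cpu_seconds - prev_cpu_seconds) / wall_dt_seconds * 100.0


# Single-threaded process busy half the time over a 1 s interval:
print(cpu_util_percent(10.0, 10.5, 1.0))   # 50.0

# A multi-threaded process can exceed 100 %:
print(cpu_util_percent(3.0, 4.0, 0.5))     # 200.0
```

This also illustrates why values slightly above 100 % can appear for a single-threaded process: both the counter deltas and the wall-time delta carry small measurement errors.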
#### proc_num_ip_sockets_open

The number of sockets currently open. This includes IPv4 and IPv6, does not distinguish between TCP and UDP, and the connection state also does not matter. Momentary state at sampling time.

#### proc_num_fds

The number of file descriptors currently opened by this process. Momentary state at sampling time.

#### proc_disk_read_throughput_mibps and proc_disk_write_throughput_mibps

The disk I/O throughput of the inspected process, in MiB/s. Based on Linux' /proc/<pid>/io rchar and wchar. Relevant Linux kernel documentation (emphasis mine):

> rchar: The number of bytes which this task has caused to be read from storage. This is simply the sum of bytes which this process passed to read() and pread(). It includes things like tty IO and it is unaffected by whether or not actual physical disk IO was required (the read might have been satisfied from pagecache).

> wchar: The number of bytes which this task has caused, or shall cause to be written to disk. Similar caveats apply here as with rchar.

Mean over the past sampling interval.

#### proc_disk_read_rate_hz and proc_disk_write_rate_hz

The rate of read/write system calls issued by the process, as inferred from the Linux /proc file system. The relevant syscr/syscw counters are as of now only documented with "read I/O operations, i.e. syscalls like read() and pread()" and "write I/O operations, i.e. syscalls like write() and pwrite()". Reference: Documentation/filesystems/proc.txt

Mean over the past sampling interval.

#### proc_mem_rss_percent

Fraction of the process' resident set size (RSS) relative to the machine's physical memory size, in percent. This is equivalent to what top shows in the %MEM column. Momentary state at sampling time.

#### proc_mem_rss, proc_mem_vms, proc_mem_dirty

Various memory usage metrics of the monitored process. See the psutil docs for a quick summary of what the values mean.
However, note that the values need careful interpretation, as shown by discussions like this and this. Momentary snapshot at sampling time.

### Disk metrics

Only collected if Goeffel is invoked with the --diskstats <DEV> argument. The resulting data column names contain the device name <DEV> (note, however, that dashes in <DEV> get removed when building the column names).

Note that the conclusiveness of some of these disk metrics is limited. I believe that this blog post nicely covers a few basic Linux disk I/O concepts that should be known prior to reading a meaning into these numbers.

#### disk_<DEV>_util_percent

This implements iostat's disk %util metric. I like to think of it as the ratio between the device's "busy time" in the sampled time interval and the actual (wall) time elapsed in that same interval, expressed in percent. The iostat documentation describes this metric in the following words:

> Percentage of elapsed time during which I/O requests were issued to the device (bandwidth utilization for the device).

This is the mean over the sampling interval.

Note: in the case of modern storage systems, 100 % utilization usually does not mean that the device is saturated. I would like to quote Marc Brooker:

> As a measure of general IO busyness %util is fairly handy, but as an indication of how much the system is doing compared to what it can do, it's terrible.

#### disk_<DEV>_write_latency_ms and disk_<DEV>_read_latency_ms

This implements iostat's w_await, which is documented with:

> The average time (in milliseconds) for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

On Linux, this is built using /proc/diskstats, documented here. Specifically, this uses field 8 ("number of milliseconds spent writing") and field 5 ("number of writes completed").
Notably, the latter is not the merged write count but the user space write count (which seems to be what iostat uses for calculating w_await). This can be a useful metric, but please be aware of its meaning and limitations. To put this into perspective: in an experiment, I have seen that the following can happen within a second of real time (observed via iostat -x 1 | grep xvdh and via direct monitoring of /proc/diskstats): 3093 userspace write requests served, merged into 22 device write requests, yielding a total of 120914 milliseconds "spent writing", resulting in a mean write latency of 25 ms. But what do these 25 ms really mean here? On average, humans have less than two legs, for sure. The current implementation method reproduces iostat output, which was the initial goal. Suggestions for improvement are very welcome.

This is the mean over the sampling interval.

The same considerations hold true for r_await, correspondingly.

#### disk_<DEV>_merged_read_rate_hz and disk_<DEV>_merged_write_rate_hz

The merged read and write request rate. The Linux kernel attempts to merge individual user space requests before passing them to the storage hardware. For non-random I/O patterns this greatly reduces the rate of individual reads and writes issued to disk. Built using fields 2 and 6 in /proc/diskstats, documented here.

This is the mean over the sampling interval.

#### disk_<DEV>_userspace_read_rate_hz and disk_<DEV>_userspace_write_rate_hz

The read and write request rate issued from the user space point of view (before merges). Built using fields 1 and 5 in /proc/diskstats, documented here.

This is the mean over the sampling interval.
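Putting the two /proc/diskstats fields from the latency section together, the w_await-style computation over one interval can be sketched like this (a simplified illustration, not Goeffel's actual implementation; -1 is the no-write placeholder described in the technical notes):

```python
def write_latency_ms(prev_ms_writing, cur_ms_writing,
                     prev_writes_completed, cur_writes_completed):
    """iostat-style w_await for one sampling interval.

    Arguments are cumulative /proc/diskstats counters sampled at the
    interval boundaries: field 8 ("milliseconds spent writing") and
    field 5 ("writes completed"). Returns -1 when no write completed in
    the interval (the placeholder for NaN, which HDF5 cannot store here).
    """
    writes = cur_writes_completed - prev_writes_completed
    if writes == 0:
        return -1
    return (cur_ms_writing - prev_ms_writing) / writes


print(write_latency_ms(0, 100, 0, 4))   # 25.0 ms mean write latency
print(write_latency_ms(50, 50, 7, 7))   # -1: no writes in this interval
```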
### System-wide metrics

system_loadavg1
system_loadavg5
system_loadavg15
system_mem_available
system_mem_total
system_mem_used
system_mem_free
system_mem_shared
system_mem_buffers
system_mem_cached
system_mem_active
system_mem_inactive

# Tips and tricks

## How to convert a Goeffel HDF5 file into a CSV file

I recommend de-serializing and re-serializing using pandas. Example one-liner:

python -c 'import sys; import pandas as pd; df = pd.read_hdf(sys.argv[1], key="goeffel_timeseries"); df.to_csv(sys.argv[2], index=False)' goeffel_20190718_213115.hdf5.0001 /tmp/hdf5-as-csv.csv

Note that this significantly inflates the file size (e.g., from 50 MiB to 300 MiB).

## How to visualize and browse the contents of an HDF5 file

At some point, you might feel inclined to poke around in an HDF5 file created by Goeffel, or to do custom data inspection/processing. In that case, I recommend using one of the various available open-source HDF5 tools for managing and viewing HDF5 files. One GUI tool I have frequently used is ViTables. Install it with pip install vitables and then do e.g.

vitables goeffel_20190718_213115.hdf5

This opens a GUI which allows for browsing the tabular time series data, viewing the metadata in the file, exporting data as CSV, querying the data, and various other things.

## How to do quick data analysis using IPython and pandas

I recommend starting an IPython REPL:

pip install ipython  # if you have not done so yet
ipython

Load the HDF5 file into a pandas data frame:

In [1]: import pandas as pd
In [2]: df = pd.read_hdf('goeffel_timeseries__20190806_213704.hdf5', key='goeffel_timeseries')

From here you can do anything.
For example, let's have a look at the mean value of the actual sampling interval used in this specific Goeffel time series:

In [3]: df['unixtime'].diff().mean()
Out[3]: 0.5003192476604296

Or, let's see how many threads the monitored process used at most during the entire observation period:

In [4]: df['proc_num_threads'].max()
Out[4]: 1

## How to convert the unixtime column into a pandas.DatetimeIndex

The HDF5 file contains a unixtime column, which contains canonical Unix timestamp data ready to be consumed by a plethora of tools. If you are like me and like to use pandas, then it is good to know how to convert this into a native pandas.DatetimeIndex:

In [1]: import pandas as pd

In [2]: df = pd.read_hdf('goeffel_timeseries__20190807_174333.hdf5', key='goeffel_timeseries')

# Now the data frame has an integer index.
In [3]: type(df.index)
Out[3]: pandas.core.indexes.numeric.Int64Index

# Parse unixtime column.
In [4]: timestamps = pd.to_datetime(df['unixtime'], unit='s')

# Replace the index of the data frame.
In [5]: df.index = timestamps

# Now the data frame has a DatetimeIndex.
In [6]: type(df.index)
Out[6]: pandas.core.indexes.datetimes.DatetimeIndex

# Let's look at some values.
In [7]: df.index[:5]
Out[7]:
DatetimeIndex(['2019-08-07 15:43:33.798929930', '2019-08-07 15:43:34.300590992',
               '2019-08-07 15:43:34.801260948', '2019-08-07 15:43:35.301798105',
               '2019-08-07 15:43:35.802226067'],
              dtype='datetime64[ns]', name='unixtime', freq=None)

# Valuable references

External references on the subject matter that I found useful during development.
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-4-an-introduction-to-functions-4-7-arithmetic-sequences-got-it-page-278/5
## Algebra 1: Common Core (15th Edition)

a) $A(n)=21+(n-1)2$
b) $A(n)=2+(n-1)7$

a) We have $A(1)=21$ and $A(n)=A(n-1)+2$, so the common difference is $d=2$. Substituting into the explicit formula $A(n)=A(1)+(n-1)d$ gives $A(n)=21+(n-1)2$.

b) We have $A(1)=2$ and $A(n)=A(n-1)+7$, so the common difference is $d=7$. Substituting into $A(n)=A(1)+(n-1)d$ gives $A(n)=2+(n-1)7$.
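As a quick sanity check (not part of the textbook solution), the explicit formula can be verified against the recursive rule:

```python
def A(n, a1, d):
    """Explicit form of an arithmetic sequence: A(n) = A(1) + (n - 1) * d."""
    return a1 + (n - 1) * d

# Part a): A(1) = 21, common difference d = 2
print(A(1, 21, 2))  # 21
print(A(4, 21, 2))  # 21 + 3*2 = 27

# Part b): the explicit form agrees with the recursion A(n) = A(n-1) + 7
print(A(5, 2, 7) == A(4, 2, 7) + 7)  # True
```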
https://tex.stackexchange.com/questions/291698/minipage-wrapping-tabularx-overfull-box
# minipage wrapping tabularx overfull box

1. The line "Zur Erläuterung der..." should introduce the following list. This produces an underfull hbox and I just can't get rid of it.
2. To keep the content together on one page, I found the solution in some post to wrap everything into a minipage. Is this the right procedure? Because this causes overfull boxes of about 104pt.
3. The same for the tabular inside. It actually looks like expected, but the warnings drive me crazy...

I noticed the problem with the minipages a few times in my doc already. And I try to work with \linewidth always... One thing to add before: I globally set no indent in my preamble already.

Here's my code:

\documentclass[
  paper=a4,
  parskip=half* %vertikaler Abstand nach Absätzen
]{scrreprt}
\tolerance=2000
\emergencystretch=1em
\hfuzz=2pt
\usepackage[ngerman]{babel}
\usepackage[utf8]{inputenx}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage{tabularx}
\begin{document}
\noindent\begin{minipage}{\linewidth}
Zur Erläuterung der Vorgehensweise definiert Bechmann die folgenden Variablen:
\begin{tabularx}{\linewidth}{p{.35\linewidth}X}
$K_1,K_2,\dots,K_n$ & die n Kriterien, bezüglich der bewertet werden soll.\\
$A_1,A_2,\dots,A_m$ & die m verschiedenen Alternativen, die bewertet werden sollen.\\
$g_1,g_2,\dots,g_n$ & Gewichte der Kriterien\\
$k_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielertrag des i-ten Kriteriums bezüglich der j-ten Alternative\\
$e_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielerfüllungsgrad des i-ten Kriteriums\\
$N_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Teilnutzwert des i-ten Kriteriums bezüglich der j-ten Alternative\\
$N_j$ $j=1,\dots,m$ & Nutzwert der j-ten Alternative\\
\end{tabularx}
\vskip .5em
es gilt dabei $N_{ij}=g_i*e_{ij}$ \\
und $N_j=N_{1j}+N_{2j}+\dots+N_{nj}=\displaystyle\sum_{i=1}^{n}N_{ij}$
\end{minipage}
\end{document}

BTW, in case of any relevance: I use TeXstudio 2.10.6 and MiKTeX 2.9.

• why do you use \\ before tabularx ?
– touhami Feb 7 '16 at 21:29
• @Eric: You should provide the document around this table, i.e. at least the compilable version that has this table. – user31729 Feb 7 '16 at 21:41
• If you add a \noindent command before \begin{minipage}, the overfull message will disappear. – Zarko Feb 7 '16 at 21:42
• using \displaystyle in the middle of an equation is very weird, it means the lhs is set as inline math and the whole of the right hand side is set as display, – David Carlisle Feb 7 '16 at 22:59
• note that your first sentence has the correct solution. this is a list and would be better set as such and not use tabular (and definitely not tabularx) at all here, – David Carlisle Feb 7 '16 at 23:01

As I said in my comment, you have a problem with \parindent. Your minipage, although as wide as \textwidth, doesn't start at the left text border but after the paragraph indent, and consequently protrudes beyond the right text border by that amount, causing the warning Overfull \hbox (15.0pt too wide) in paragraph at lines 9--31.

If you add a \noindent command before \begin{minipage}, or set \parindent to zero, this warning disappears:

\documentclass{article}
\usepackage{tabularx}
%\setlength{\parindent}{0pt}
\usepackage[showframe]{geometry}
\begin{document}
\noindent
\begin{minipage}{\textwidth}
Zur Erläuterung der Vorgehensweise definiert Bechmann die folgenden Variablen:
\vspace{\baselineskip}

\begin{tabularx}{\linewidth}{lX}
$K_1,K_2,\dots,K_n$ & die $n$ Kriterien, bezüglich der bewertet werden soll.\\
$A_1,A_2,\dots,A_m$ & die $m$ verschiedenen Alternativen, die bewertet werden sollen.\\
$g_1,g_2,\dots,g_n$ & Gewichte der Kriterien\\
$k_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielertrag des $i$-ten Kriteriums bezüglich der $j$-ten Alternative\\
$e_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielerfüllungsgrad des $i$-ten Kriteriums\\
$N_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Teilnutzwert des $i$-ten Kriteriums bezüglich der $j$-ten Alternative\\
$N_j$ $j=1,\dots,m$ & Nutzwert der $j$-ten Alternative\\
\end{tabularx}
\vskip .5\baselineskip
es
gilt dabei $N_{ij}=g_i*e_{ij}$ und $N_j=N_{1j}+N_{2j}+\dots+N_{nj}=\displaystyle\sum_{i=1}^{n}N_{ij}$
\end{minipage}
\end{document}

Edit: I corrected some typing errors in the given MWE. Also, let it be noted: if you would like the table content of the first column to start at the left border of the minipage, then you should do the following:

\begin{tabularx}{\linewidth}{@{}lX}

The proposed solution also works with your document class (added to complete your MWE):

\documentclass[paper=a4,parskip=half*]{scrreprt}

Edit (2): Here is an image of your minipage generated with your MWE. I also added the missing part of your example (I'm sorry for this).

• also this code still produces overfull boxes... – Eric Feb 7 '16 at 22:12
• No. I didn't find anything in the .log file. For a test, erase the old .aux and .log files and then compile my code again. You will not find any warning. – Zarko Feb 7 '16 at 22:32
• I updated my code into a MWE and noticed that it is attributed to the parskip=half* setting. But is there a solution to keep the setting and fix the prob? – Eric Feb 7 '16 at 22:35
• I compiled your MWE. It doesn't give any warning about overfull boxes. However, I get a warning about babel (on my system, since I haven't installed babel for "ngerman"), but this is not the question, is it? – Zarko Feb 7 '16 at 22:42
• this is weird... I deleted my log and aux files and compiled it again with an overfull box. Watch the picture I inserted above.
– Eric Feb 7 '16 at 22:52

1) \noindent\begin{minipage}

2) no need for \\ before tabularx; if necessary:

\makebox[\linewidth]{Zur Erläuterung der Vorgehensweise definiert Bechmann die folgenden Variablen:}

3) p{.3\linewidth}

\documentclass{article}
\usepackage{tabularx}
\begin{document}
\noindent\begin{minipage}{\linewidth}
\makebox[\linewidth]{Zur Erläuterung der Vorgehensweise definiert Bechmann die folgenden Variablen:}

\begin{tabularx}{\linewidth}{p{.3\linewidth}X}
$K_1,K_2,\dots,K_n$ & die n Kriterien, bezüglich der bewertet werden soll.\\
$A_1,A_2,\dots,A_m$ & die m verschiedenen Alternativen, die bewertet werden sollen.\\
$g_1,g_2,\dots,g_n$ & Gewichte der Kriterien\\
$k_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielertrag des i-ten Kriteriums bezüglich der j-ten Alternative\\
$e_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielerfüllungsgrad des i-ten Kriteriums\\
$N_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Teilnutzwert des i-ten Kriteriums bezüglich der j-ten Alternative\\
$N_j$ $j=1,\dots,m$ & Nutzwert der j-ten Alternative\\
\end{tabularx}
\vskip .5em
es gilt dabei $N_{ij}=g_i*e_{ij}$ \\
und $N_j=N_{1j}+N_{2j}+\dots+N_{nj}=\displaystyle\sum_{i=1}^{n}N_{ij}$
\end{minipage}
\end{document}

• thx, but this code still produces overfull boxes for me – Eric Feb 7 '16 at 22:11
• I updated my code above and still having trouble with the overfull boxes... – Eric Feb 7 '16 at 22:22

Please include full minimal working examples, including the relevant packages you use. Your code produces both an underfull and an overfull box. I don't think creating a minipage is really helping here. I can recommend wrapping the tabularx environment into a table environment, which in your example doesn't produce any overfull boxes. Also, the geometry package helps if you want to change the width of your page.
\documentclass{article}
\usepackage[top=4cm, bottom=3cm, left=3cm, right=4cm]{geometry}
\usepackage{tabularx}
\begin{document}
\begin{table}
Zur Erläuterung der Vorgehensweise definiert Bechmann die folgenden Variablen: \\[0.5cm]
\begin{tabularx}{\linewidth}{p{.35\linewidth}X}
$K_1,K_2,\dots,K_n$ & die n Kriterien, bezüglich der bewertet werden soll.\\
$A_1,A_2,\dots,A_m$ & die m verschiedenen Alternativen, die bewertet werden sollen.\\
$g_1,g_2,\dots,g_n$ & Gewichte der Kriterien\\
$k_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielertrag des i-ten Kriteriums bezüglich der j-ten Alternative\\
$e_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Zielerfüllungsgrad des i-ten Kriteriums\\
$N_{ij}$ $i=1,\dots,n$ $j=1,\dots,m$ & Teilnutzwert des i-ten Kriteriums bezüglich der j-ten Alternative\\
$N_j$ $j=1,\dots,m$ & Nutzwert der j-ten Alternative\\
\end{tabularx}
\vskip .5em
es gilt dabei $N_{ij}=g_i*e_{ij}$ und $N_j=N_{1j}+N_{2j}+\dots+N_{nj}=\displaystyle\sum_{i=1}^{n}N_{ij}$
\end{table}
\end{document}

Edit: Turns out the table wrap doesn't really fix the overfull hbox. Rather, the geometry package fixed it by broadening the page width, which might not be a suitable solution. However, overfull hboxes can usually be fixed by forced hyphenation using \- in the first sentence of the given example.

• when I add a table environment for some reason the overfull-box warning disappears... why doesn't this really fix the problem? – Eric Feb 7 '16 at 22:09
• It does fix it by using the package mentioned in my answer. The answers from the other guys are more suitable, check them out. – Octopus Feb 7 '16 at 22:13

Often the described symptoms are caused by the parskip indent. In this obvious case it can easily be solved either locally, by \noindent before the respective paragraph, or globally, by \setlength{\parindent}{0pt}. In my specific case the issue was produced by the document class option parskip=half* of KOMA-Script.
I noticed this when I commented the parskip=half* option out, and found the solution on page 71 of the KOMA-Script manual:

half*: half a line of vertical space between paragraphs; there must be at least a quarter of a line of free space at the end of a paragraph

The problem was the second condition: "at least a quarter of a line free space at the end of a paragraph".

half-: half a line of vertical space between paragraphs (without the free-space condition at the end of a paragraph)

Therefore I switched to the half- option to keep the vertical space and get rid of the overfull boxes. I hope this helps others with the same problem.
http://physics.stackexchange.com/tags/spin/hot
Tag Info

7
First, the electron is not a point particle. The abstraction you are thinking of is what we would call a naked electron. In an experiment, you do not see the naked particle ever. It is always surrounded by virtual pairs. Hence, what you measure as the electron is really a many-body system. Second, you might want to read this. The take-home message is "the ...

7
The answer is Yes. See A physical understanding of fractionalization http://arxiv.org/abs/hep-th/0302201 Quantum order from string-net condensations and origin of light and massless fermions, Xiao-Gang Wen; Spin-1/2 and Fermi statistics from qubits http://arxiv.org/abs/hep-th/0507118 Quantum ether: photons and electrons from a rotor model, ...

4
The general question is quite hard to tackle I think, because a rigorous motivation of Hilbert space would end up in the theory of operator algebras (see e.g. this answer) and the OP is probably not interested in these aspects at the moment. As for the example of spin, the Hilbert space in this case is still an $L^2$ space, but the functions are no longer ...

4
Yes. This commutation relation is that of the Lie algebra $\mathfrak{so}(3)$ corresponding to the rotation group in three dimensions. Thus the commutation relation states that the Pauli matrices generate rotations. To understand why this is the commutation relation of $\mathfrak{so}(3)$, one can draw a diagram showing that the commutator of two ...

4
The spin of a vector boson in any dimension is spin 1. What changes with the number of dimensions is the number of degrees of freedom associated with a given spin. A massless vector in four dimensions has two independent degrees of freedom, which can be seen from the rank of what's called the "little group" in the literature. It is the subgroup of the ...

3
The spin operator $\vec S = \left(\begin{matrix} S_x \\ S_y \\S_z \end{matrix}\right)$ is just like the (orbital) angular momentum operator.
$\langle \psi \rvert S_i \lvert \psi \rangle$ gives you the expectation value for the component of the spin angular momentum. $\langle \psi \rvert \vec S \lvert \psi \rangle$ is the expectation for the full spin vector. ...

3
You have fallen prey to a popular simplification of spinors. The statement "you have to turn an electron by 720 degrees in order to get the same spin state" does not refer to an actual rotation of an actual electron. In quantum mechanics, we describe the states of objects as elements of a Hilbert space $\mathcal{H}$. The crucial thing is that not all elements ...

2
The maximum $s_z$ eigenvalue an electron can have is $\hbar/2$. Therefore, the only way that a quantum state can have $\langle s_z \rangle = \hbar/2$ is if the state lies completely within the $s_z = \hbar/2$ eigenspace. Thus your answer may only include pure states with $s_z = \hbar /2$.

2
Just to clarify Robin Ekman's answer: superpositions of the Pauli matrices exponentiate to $SU(2)$, not $SO(3)$, but both these Lie groups have $\mathfrak{so}(3)$ as their Lie algebra - but I am sure you already know this. Also, there is another way to look at the problem that you might find helpful, even though it is a mathematical insight rather than a ...

1
The wave function only contains all the information about the system in so far as you consider it, meaning each qualitatively different physical system needs a modified Hilbert space to fit what can happen with the system. In case you have something like spin on its own in $H_{Spin}$ and you want to look at a freely moving particle in $H_{free}$ that ...

1
The classic example here is the Thirring model, which describes fermions in 1+1 dimensions. While this is a very special two-dimensional model (so the conclusions cannot be generalized), Sidney Coleman found that it is equivalent to the Sine-Gordon model, a theory of bosons. The demonstration is rather technical (Coleman essentially proved that the ...
1
The time evolution of the two spins can be separated if they are independent, i.e. if they don't interact. Under this assumption the time evolution operator splits in the tensor product $$U_1\otimes U_2=(U_1\otimes I)(I\otimes U_2)$$ and therefore it is clear how to define the time evolution for the single spin: for the $j$th particle one simply needs to take the ...

1
You are correct that spin states live in complex space; however, the expectation value $\left\langle\psi|S_z|\psi\right\rangle$ lives in real space. It is simply a real number, which represents the expectation value of spin if a series of measurements is made in the $z$ direction. You can see that the expression must be a scalar because ...

1
The point is that the spin operator is defined to be (1/2) times the SU(2) generator, while the orbital angular momentum is defined to be only the SU(2) (or SO(3), which is the same) generator. The proof is the same, and is "representation independent", in the sense that the structure of identity multiplied by something plus a linear combination of sigma matrices ...

1
The identity you used, $$\exp(i\theta \, \hat s)=\cos(\theta)+i\sin(\theta)\,\hat s, \tag{\ast}$$ is crucially dependent on the operator $\hat s$ squaring to the identity, i.e. on the fact that $\hat{s}^2=\mathbb1$. This is generally not the case for angular momenta other than spin-1/2. In general, the total angular momentum is a scalar within the ...

1
"By the uncertainty principle" is the answer. In more detail, let's say we're talking about x and y axes. The first measurement puts the electron into an eigenstate of the spin X observable (the question of how it does this is the quantum measurement problem). Whichever of the two X eigenstates this "collapse" ends up in, it is not an eigenstate of the spin Y ...

1
Nothing prevents the electron's spin from being measured along a particular axis, and then subsequently measured along an axis perpendicular to the first.
In this situation, however, the spins along the perpendicular axes would not be known simultaneously, so the uncertainty principle would not be violated. As an example, say that we perform our own ...

1
Unless I'm missing something, your expression is just an expectation value of $\hat{S}_z$ when in the state $| \psi \rangle$. This is an actual measurement you could make on an ensemble of atoms, by running them through a Stern-Gerlach apparatus and counting $+\hbar/2$ for every one that hit the "top" detector and $-\hbar/2$ for every one that hit the ...
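Several of the answers above lean on two algebraic facts about the Pauli matrices: the commutation relation $[\sigma_x,\sigma_y]=2i\sigma_z$ and the exponential identity, which holds precisely because $\sigma_z^2=\mathbb{1}$. A quick numerical check (my own sketch, not taken from any of the quoted answers):

```python
import numpy as np

# Pauli matrices in the standard representation (spin operators are S_i = (hbar/2) sigma_i)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_series(A, terms=40):
    """Matrix exponential via its power series -- fine for these tiny matrices."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Commutation relation [sx, sy] = 2i sz (so S_i = sigma_i/2 close the so(3) algebra)
assert np.allclose(sx @ sy - sy @ sx, 2j * sz)

# exp(i theta sz) = cos(theta) I + i sin(theta) sz, valid because sz @ sz = I
theta = 0.7
lhs = expm_series(1j * theta * sz)
rhs = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sz
assert np.allclose(lhs, rhs)
```

The same series expansion makes clear why the identity fails for higher spins: there the matrix no longer squares to the identity, so the even and odd powers do not collapse into cosine and sine terms.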
https://juliapackages.com/p/quasinormalmodes
## QuasinormalModes.jl

A Julia package for computing discrete eigenvalues of second order ODEs

Author: lucass-carneiro
Popularity: 1 star
Updated: last 1 year ago
Started in: November 2020

# QuasinormalModes.jl

This is a Julia package whose primary objective is to compute the discrete eigenvalues of second order ordinary differential equations. It was written with the intent to be used for computing quasinormal modes (QNMs) of black holes in General Relativity efficiently and accurately.

QNMs are the discrete spectrum of characteristic oscillations produced by black holes when perturbed. These oscillations decay exponentially in time, and thus it's said that QNMs contain a real oscillation frequency $\omega_R$ and an imaginary frequency $\omega_I$ that represents the mode's decay rate. These frequencies are often described by a discrete eigenvalue in a second order ODE. For a comprehensive review see [1].

To compute eigenvalues (and thus quasinormal frequencies) this package uses the Asymptotic Iteration Method (AIM) [2], more specifically the "improved" version of the AIM as described in [3]. The AIM can be used to find the eigenvectors and eigenvalues of any second order differential equation (the class of problems to which quasinormal modes belong), and thus this package can be used not only in the context of General Relativity but also to find the discrete eigenvalues of other systems, such as the eigenenergies of a quantum system described by the time independent Schrödinger equation.

# Author

Lucas T. Sanches, Centro de Ciências Naturais e Humanas, Universidade Federal do ABC (UFABC). QuasinormalModes is licensed under the MIT license.

# Installation

This package can be installed using the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run

pkg> add QuasinormalModes

and then type backspace to exit back to the REPL.
# Contributing

There are many ways to contribute to this package:

• Report an issue if you encounter some odd behavior, or if you have suggestions to improve the package.
• Contribute with code that addresses open issues, adds new functionality, or improves performance.
• When contributing with code, add docstrings and comments, so others may understand the methods implemented.
• Contribute by updating and improving the documentation.
https://www.physicsforums.com/threads/how-to-show-that-commutative-matrices-form-a-group.957351/
# How to show that commutative matrices form a group?

• I

Let's say we have a given matrix ##G##. I want to find a set of matrices ##M## so that ##MG = GM## and prove that this is a group. How can I approach this problem?

phyzguy
Write down the definition of a group and see if you can show that your matrices adhere to the definition.

FactChecker

fresh_42 Mentor
By more conditions, e.g. ##M=0## is possible in your setup. Do you look for an additive group?

Actually I'm not really sure what this group would look like. The set would be the ##M## matrices, I suppose, and the operation? Multiplication by ##G##? How can I make ##G## a special element in the group?

fresh_42 Mentor
##G## already is in the group. However, you should at least have a group operation. It looks like a multiplicative group you're heading for. I'm excited to see how you will manage the inverses without further assumptions.

suremarc
Can the operation be something like the commutator? It cannot be exactly the commutator, because then G wouldn't be an identity element but an aggressive element instead.

fresh_42 Mentor
What is an aggressive element? Where do your matrices live? Even the word commutator isn't defined by the above. ##\{\,M\,|\,MG=GM\,\}## is already a group, say ##H##. Then ##G\in H## and ##H## is Abelian. But I have the strong feeling that you will not be satisfied.

Aggressive element is an element that makes every other element equal to itself, like multiplying real numbers by zero. Not sure if aggressive element is the right term in English; I learnt it in Hungarian. The matrices are in ##\mathbb{R}^{n \times n}##. And as ##G## is given, I would like to express what the ##M## matrices must look like.

##\{\,M\,|\,MG=GM\,\}## is already a group, say ##H##. Then ##G\in H## and ##H## is Abelian.
But I have the strong feeling that you will not be satisfied.

I'm not sure we're on the same page. I've been learning linear algebra for one month only, so I might not see your point immediately. My teacher mentioned this problem and I found it interesting. The picture I have in mind is that a group is an algebraic structure, which means that there is a set and a binary operation with the following properties:

- closure under that operation
- associativity
- there is an inverse element for every element in the set
- there is an identity element in the set

So far, my problem is that I don't really see how the statement "matrices commuting with a given matrix form a group" fits into this picture.
What is the set, what is the operation, etc. This is a rather boring solution, but if we must not invert the matrices, the more interesting multiplicative case isn't possible. The entire question including "aggressive" reminds me of algebras which are used in biology. Last edited: Tell me. My dictionary gave me aggresziv or eröszakos. It's aggresszív in Hungarian. I didn't find anything for agressive element, how do you call this in English? Easy: Let ##H:=\{\,M\in \mathbb{R}^{n \times n}\,|\,MG=GM\,\}##. ##M,N \in H \Longrightarrow M+N \in H## ##M+(N+P)=(M+N)+P## ##M \in H \Longrightarrow -M \in H## ##0\in H## But how is this a proof that these rules also apply to commuting matrices too? Maybe it should be trivial but I don't see how the sum of two matrices that commute with ##G## also commutes with ##G##. Same for the other conditions. Stephen Tashi Let's say we have a given matrix ##G##. I want to find a set of ##M## matrices so that ##MG = GM## and prove that this is a group. How can I approach this problem? If you interpret "is a group" to mean "is a group under the operation of multiplication" then the problem asks for a proof of a false statement. Begin by fixing the problem. Let ##G## be an nxn matrix. Let ##S## be the set of nxn matrices such that ##m \in S## iff ##mG = Gm##. If ##G## is the zero nxn matrix then ##S## is the set of all nxn matrices, which is not a group under the operation of multiplication. Suppse ##G## is not an invertible matrix. The identity matrix ##I## is an element of ##S##. If ##S## were a group then it would contain ##GI= IG= G##. So ##S## would contain an matrix ##G## with no multiplicative inverse. Hence ##S## is not a multiplicative group. ---- If you interpret "is a group" to mean "is a group under the operation of addition", then, besides the trivial matters, you must show that if ##A \in S## and ##B \in S## then ##A+B \in S##. 
Maybe it should be trivial, but I don't see how the sum of two matrices that commute with ##G## also commutes with ##G##.

##G(A+B) = ? = ? = (A+B)G##

Let's say we have a given matrix ##G##. I want to find a set of matrices ##M## so that ##MG = GM## and prove that this is a group. How can I approach this problem?

Let ##G## be a ##n \times n## matrix.

Define ##S## to be the set of ##n \times n## matrices with nonzero determinant: ##S = \{ n \times n \text{ matrix } A \mid \det(A) \neq 0 \}##

Define ##T## to be the set of ##n \times n## matrices with nonzero determinant that commute with ##G##: ##T = \{M \in S \mid MG=GM \}##

It is easy to show that ##T## is a group.

Questions: Is it necessary that ##G \in S##? That is, is it necessary that ##\det(G) \neq 0##? Does ##G## belong to ##T##, i.e., ##G \in T##?

WWGD Gold Member
If your matrix G is the identity, it will commute with non-invertible matrices, and these will not be invertible. I saw a related name, commutant?

WWGD Gold Member
Doesn't every element in a group have an inverse?
If Det(G)=0, then G is not invertible.

fresh_42 Mentor
If ##\operatorname{det}G =0## then ##-G## is still the inverse.

WWGD Gold Member
But I thought this was a multiplicative group. An additive group has every element invertible, doesn't it, by -G itself, right?

fresh_42 Mentor
Commutator is the construction, ##[,]=0## resp. ##[,]=1## the centralizer. I've never heard of commutant for centralizer, but anyway, it's a slight difference, so it depends on what is meant.

Obviously, ##T## is a multiplicative group, consisting of ##n \times n## matrices with nonzero determinant.

fresh_42 Mentor
That was one of the difficulties with the OP. It hasn't been specified, and to automatically assume invertible matrices if he talks about ##\mathbb{R}^{n\times n}## is a bit of a stretch.

fresh_42 Mentor
Obviously, this is an unsupported assumption.

WWGD
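The closure properties debated in the thread are quick to verify numerically, and the computation mirrors the one-line algebraic proof ##G(M+N) = GM + GN = MG + NG = (M+N)G##. A sketch with arbitrarily chosen example matrices (not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))  # an arbitrary example matrix

# Polynomials in G always commute with G, so they give elements of H = {M : MG = GM}.
M = 2 * np.eye(3) + 3 * G + G @ G
N = np.eye(3) - G

def commutes(A, B):
    return np.allclose(A @ B, B @ A)

assert commutes(M, G) and commutes(N, G)

# Additive closure: G(M+N) = GM + GN = MG + NG = (M+N)G
assert commutes(M + N, G)
# Additive inverse and identity element also stay inside H
assert commutes(-M, G) and commutes(np.zeros((3, 3)), G)
# The multiplicative analogue holds too: G(MN) = (GM)N = (MG)N = M(GN) = (MN)G
assert commutes(M @ N, G)
```

This shows H is closed under addition and multiplication; whether it is a *multiplicative* group still hinges on invertibility, exactly the sticking point of the discussion above.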
https://itwissen.info/en/magnetic-flux-120717.html
# magnetic flux

Magnetic flux is a measure of the strength of the magnetic field in an electric coil through which current flows. It results from the product of the number of turns (N) of the coil and the current (I), i.e. the total of the currents involved in building up the magnetic circuit (the ampere-turn number). The relationship between magnetic field strength (H) and electric current strength (I) is referred to as the flow-through theorem (Ampère's circuital law): the flux through the area bounded by a field line is equal to the circulating magnetic voltage. Magnetic potentials are distributed around a straight conductor; they can also be given as a function of the angle.

Updated at: 20.12.2021
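A quick numeric illustration of the ampere-turn product described above (the values are invented for the example, not taken from the glossary):

```python
# Ampere-turn product: theta = N * I
turns = 400        # number of turns N (example value)
current_a = 0.25   # coil current I in amperes (example value)
theta = turns * current_a
print(theta)       # 100.0 ampere-turns
```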
http://blog.vmchale.com/article/transparent-programming
J and APL support (and encourage) a certain form of programming without error handling or library code reuse. This alternative wisdom goes against typical programming practice, but the pieces work together. Consider taking successive differences: it is hardly obvious that succ_diff is preferable to

2 -~/\ ]

Now, 2 -~/\ (0$0) will fail silently, but one can discern what inputs are acceptable by inspection. This situation is common in practice; consider the example monotonically_increasing:

def monotonically_increasing(a):
    max_value = 0
    for i in range(len(a)):
        if a[i] > max_value:
            max_value = a[i]
        a[i] = max_value

This is in fact worse than >. /\ or |\; all fail on an empty list, but only the APL derivatives make this evident.

# Explorative Programming

Avoiding rigorous error handling in procedures is most acceptable for exploratory programming. It is preferable to use a one-off idiom that suits your data rather than a carefully written procedure; the procedure might be thrown away as you work. I claim this functional, terse style is in fact necessary for exploratory programming. Concise, self-explanatory programs balance what is lost in loose error handling. Those used to building systems may find this objectionable, but the style hangs together.
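For comparison, a Python rendering of the successive-differences idiom above (my own sketch, not from the article); like the J phrase, it quietly yields an empty result for inputs shorter than two elements rather than raising an error:

```python
def succ_diff(a):
    """Successive differences: [a[1]-a[0], a[2]-a[1], ...]."""
    return [b - x for x, b in zip(a, a[1:])]

assert succ_diff([1, 4, 9, 16]) == [3, 5, 7]
assert succ_diff([]) == []    # empty input: empty output, no error
assert succ_diff([42]) == []  # same for a single element
```

Whether this silence is a feature or a bug is exactly the article's point: the behavior on degenerate inputs is discernible by inspection, not guarded by explicit error handling.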
http://www.intmath.com/blog/videos/friday-math-movie-the-miniature-earth-2382
# Friday math movie - The Miniature Earth

By Murray Bourne, 17 Apr 2009

Earth Day is coming up next week (on Wed 22nd Apr 2009). This week's math movie contemplates an Earth with a population of 100 people only. What could we say about those 100 people? This miniature Earth would have 61 Asians and 8 North Americans. Only one would come from Oceania (Australia, New Zealand, and surrounding islands). Six people would own 59% of the wealth. The amount spent on the military is nothing short of disgusting. Here's the movie. Enjoy — and consider...

### 2 Comments on “Friday math movie - The Miniature Earth”

1. Gerard says: where's the Africans like me. lol.

2. Pat Earnest says: This is by far one of the most powerful presentations of the great disparities in health and wealth of all the world's human populations. Zooming in on the lives of just 100 people gives a very lucid, devastatingly haunting, and yet morally awakening picture of the true lives of the world's people. And I believe more people would have a more compassionate and humane understanding of the world today if they were to see it from this view.
https://brilliant.org/problems/ka-boom/
# Ka-Boom!

Algebra Level 4

If $$x,y$$ and $$z$$ are real numbers satisfying $$3\tan(x) + 4\tan(y) + 5\tan(z) = 20$$, find the least possible value of $\tan^2(x) + \tan^2(y) + \tan^2(z) .$
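A standard route to the bound (a solution sketch added here, not part of the original problem page) is the Cauchy–Schwarz inequality applied to the vectors $(3,4,5)$ and $(\tan x,\tan y,\tan z)$:

```latex
(3\tan x + 4\tan y + 5\tan z)^2 \le (3^2+4^2+5^2)\,(\tan^2 x + \tan^2 y + \tan^2 z)
\quad\Longrightarrow\quad
\tan^2 x + \tan^2 y + \tan^2 z \ge \frac{20^2}{50} = 8,
```

with equality when $(\tan x,\tan y,\tan z)$ is proportional to $(3,4,5)$, i.e. $\tan x = 6/5$, $\tan y = 8/5$, $\tan z = 2$.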
https://math.stackexchange.com/questions/864695/irreducible-matrix-equivalent-connectedness-of-matrix-graph
# Irreducible matrix equivalent connectedness of matrix graph? If a matrix is irreducible, based on the following definition A matrix is reducible if there are two disjoint sets of indexes $I,J$ with $|I|=\mu$, $|J|=\nu$, $\mu+\nu=n$ such that for every $(i,j)\in I\times J$ we have $a_{ij}=0$ then is this equivalent to saying that the matrix graph is connected? I found this result but it seems it is only one direction. • ...that the graph is strongly connected (since we talk about directed graphs). – Artem Jul 11 '14 at 23:05 • @Artem Thank you. If the matrix is symmetric does this simplify things? Does symmetry cause this to reduce to connectivity? – jonem Jul 11 '14 at 23:08 • Yes, if the matrix is symmetric we can talk simply about connectivity. – Artem Jul 11 '14 at 23:16 For an undirected graph $G$: for each partition $\{I,J\}$ of the vertices of $G$ we check whether there are no edges between vertices in $I$ and vertices in $J$. If $G$ is disconnected, then such a partition $\{I,J\}$ can be found (choose $I$ as the vertices in one connected component). Conversely, if $\{I,J\}$ can be found, then there are no edges between $I$ and $J$ and $G$ is disconnected. For the adjacency matrix $A=(a_{ij})$, if $\{I,J\}$ can be found, then the submatrix formed by the rows indexed by $I$ and the columns indexed by $J$ is all-$0$; since $G$ is undirected, $A$ is symmetric, so the submatrix with rows indexed by $J$ and columns indexed by $I$ is all-$0$ as well. This means that $A$ has the block structure $$\begin{array}{|c|c|} \hline A_I & 0 \\ \hline 0 & A_J \\ \hline \end{array}$$ where the rows and columns are indexed by the indices in $I$ then $J$. In the directed case, we instead require that there are no edges directed from a vertex in $I$ to a vertex in $J$. So we can find $\{I,J\}$ whenever $G$ is not strongly connected (take $I$ to be the vertices of a strongly connected component with no outgoing edges to the rest of the graph, i.e. a sink component of the condensation).
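As a concrete check of the equivalence discussed above, here is a small sketch (the function names and the BFS helper are my own, not from the thread): a matrix is irreducible iff the directed graph with an edge $i \to j$ whenever $a_{ij} \neq 0$ is strongly connected, which can be tested with a forward and a backward reachability search from any one vertex.

```python
from collections import deque

def is_irreducible(A):
    """A square matrix is irreducible iff the directed graph with an
    edge i -> j whenever A[i][j] != 0 is strongly connected."""
    n = len(A)

    def all_reachable(edge):
        # BFS from vertex 0 following edges for which edge(i, j) is True
        seen = {0}
        queue = deque([0])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if j not in seen and edge(i, j):
                    seen.add(j)
                    queue.append(j)
        return len(seen) == n

    # strongly connected <=> every vertex is reachable from vertex 0
    # in G and in G with all edges reversed
    return all_reachable(lambda i, j: A[i][j] != 0) and \
           all_reachable(lambda i, j: A[j][i] != 0)

block = [[1, 1, 0],
         [1, 0, 0],
         [0, 0, 1]]  # reducible: indices {0,1} and {2} are decoupled
cycle = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]  # irreducible: a directed 3-cycle
print(is_irreducible(block), is_irreducible(cycle))  # False True
```

For a symmetric matrix the two searches explore the same edges, which is the undirected-connectivity simplification mentioned in the comments.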
2019-09-16 08:29:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9138917326927185, "perplexity": 91.73461137977439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572516.46/warc/CC-MAIN-20190916080044-20190916102044-00341.warc.gz"}
https://www.nature.com/articles/s41586-018-0599-8?error=cookies_not_supported&code=820ee665-b50f-4546-9e6c-9b0ef041c599
# Improved limit on the electric dipole moment of the electron ## Abstract The standard model of particle physics accurately describes all particle physics measurements made so far in the laboratory. However, it is unable to answer many questions that arise from cosmological observations, such as the nature of dark matter and why matter dominates over antimatter throughout the Universe. Theories that contain particles and interactions beyond the standard model, such as models that incorporate supersymmetry, may explain these phenomena. Such particles appear in the vacuum and interact with common particles to modify their properties. For example, the existence of very massive particles whose interactions violate time-reversal symmetry, which could explain the cosmological matter–antimatter asymmetry, can give rise to an electric dipole moment along the spin axis of the electron. No electric dipole moments of fundamental particles have been observed. However, dipole moments only slightly smaller than the current experimental bounds have been predicted to arise from particles more massive than any known to exist. Here we present an improved experimental limit on the electric dipole moment of the electron, obtained by measuring the electron spin precession in a superposition of quantum states of electrons subjected to a huge intramolecular electric field. The sensitivity of our measurement is more than one order of magnitude better than any previous measurement.
This result implies that a broad class of conjectured particles, if they exist and time-reversal symmetry is maximally violated, have masses that greatly exceed what can be measured directly at the Large Hadron Collider. ## Data availability The data that support the conclusions of this article are available from the corresponding authors on reasonable request. ## Acknowledgements This work was supported by the NSF. J.H. was supported by the Department of Defense. D.G.A. was partially supported by the Amherst College Kellogg University Fellowship. We thank M. Reece and M. Schwartz for discussions and S. Cotreau, J. MacArthur and S. Sansone for technical support. ### Reviewer information Nature thanks E. Hinds and Y. Shagam for their contribution to the peer review of this work. ## Author information ### Contributions All authors contributed to one or more of the following areas: proposing, leading and running the experiment; design, construction, optimization and testing of the experimental apparatus and data acquisition system; setup and maintenance during the data runs; data analysis and extraction of physics results from measured traces; modelling and simulation of systematic errors; and the writing of this article. The corresponding authors are D.D., J.M.D. and G.G. (acme@physics.harvard.edu). ### Corresponding authors Correspondence to D. DeMille, J. M. Doyle or G. Gabrielse. ## Ethics declarations ### Competing interests The authors declare no competing interests. ## Extended data figures and tables ### Extended Data Fig. 1 Switching timescales. a, Fluorescence signal amplitude versus time in an $$\hat{X},\hat{Y}$$ polarization cycle. The red line corresponds to the signal from the $$\hat{X}$$-polarization laser and the black line to the signal from the $$\hat{Y}$$-polarization laser. b, Measured molecular trace (25 averaged pulses) versus time.
Signals averaged over the entire $$\hat{X},\hat{Y}$$ polarization cycles shown in a are plotted in red and black for the $$\hat{X}$$ and $$\hat{Y}$$ laser polarizations, respectively. c, Switches performed within a block. The $$\tilde{\mathcal{N}}$$ and $$\tilde{\mathcal{B}}$$ switches randomly alternate between a (−+) and a (+−) pattern, and the $$\tilde{\mathcal{E}}$$ and $$\tilde{\theta}$$ switches randomly alternate between (−++−) and (+−−+) between blocks. d, Switches performed within a superblock. The $$\tilde{\mathcal{P}}$$-state order is selected randomly, while $$\tilde{\mathcal{L}}$$ and $$\tilde{\mathcal{R}}$$ are deterministic. e, Run-data structure. We alternate between ‘normal’ EDM data, taken at three values of $$|\mathcal{B}_{z}|$$, and monitoring of known systematic effects by performing intentional parameter variations (IPVs). For several days data were taken with $$|\mathcal{B}_{z}|=2.6\,\mathrm{mG}$$ instead of $$|\mathcal{B}_{z}|=0.7\,\mathrm{mG}$$, which is shown in the figure. Each IPV corresponds to one superblock, where a control parameter (A–E) is deliberately offset from its ideal value. Here, A = Pref (the refinement beam is completely blocked, to determine the intrinsic $$\omega_{\mathrm{ST}}^{\mathcal{N}\mathcal{E}}$$), $$B=\mathcal{E}^{\mathrm{nr}}$$, $$C=P^{\mathcal{N}\mathcal{E}}$$, $$D=\phi_{\mathrm{ST}}^{\mathcal{N}\mathcal{E}}$$ and $$E=\partial\mathcal{B}_{z}/\partial z$$. The magnetic-field magnitude for the IPV of parameter E was varied between three experimental values within a run. f, The EDM dataset. The electric-field magnitude was varied from day to day. The magnetic-field magnitude for the IPVs for parameters A, B, C and D was varied between three experimental values. ### Extended Data Fig. 2 The $$\partial\mathcal{B}_{z}/\partial z\times\delta\times\partial\mathcal{E}^{\mathrm{nr}}/\partial z$$ systematic error. a, A $$\partial\mathcal{E}^{\mathrm{nr}}/\partial z$$ gradient (blue arrows) causes a z-dependent two-photon detuning correlated with $$\mathcal{N}\mathcal{E}$$ ($$\delta_{z}^{\mathcal{N}\mathcal{E}}$$), due to the Stark shift $$D\mathcal{E}$$. When δ ≠ 0, the combination of a non-zero $$\delta_{z}^{\mathcal{N}\mathcal{E}}$$ and a dependence of the STIRAP efficiency on the two-photon detuning, ∂η/∂δ (shown as black lines), acts to translate the detected molecular cloud (purple gradient ellipse) position by $$\mathrm{d}z_{\mathrm{cm}}^{\mathcal{N}\mathcal{E}}$$ (purple arrow). A non-zero $$\partial\mathcal{B}_{z}/\partial z$$ (teal-colour gradient) causes molecules to accumulate more (less) precession phase if their position has a smaller (larger) z coordinate. The effects combine to create the dependence of $$\omega^{\mathcal{N}\mathcal{E}}$$ on $$\partial\mathcal{B}_{z}/\partial z$$. The scales are exaggerated for clarity. b, The effect of changing the STIRAP two-photon detuning, δ, on the $$\omega^{\mathcal{N}\mathcal{E}}$$ versus $$\partial\mathcal{B}_{z}/\partial z$$ slope. We note that the slope $$\partial\omega^{\mathcal{N}\mathcal{E}}/\partial(\partial\mathcal{B}_{z}/\partial z)$$ is consistent with zero when δ is set to zero. c, Dependence of $$\omega^{\mathcal{N}\mathcal{E}}$$ on δ and $$\partial\mathcal{B}_{z}/\partial z$$. Fits (dashed curves) to a simple lineshape model (see Methods) show good agreement with the data. δ = 0 is defined as the point where all curves cross.
The error bars in b and c represent 1σ statistical uncertainties. ## Supplementary information ### Supplementary Information The supplementary methods section contains text describing in detail the mechanisms leading to the systematic effects referenced in the main text. ACME Collaboration. Improved limit on the electric dipole moment of the electron. Nature 562, 355–360 (2018). https://doi.org/10.1038/s41586-018-0599-8 ### Keywords • Electric Dipole Moment (EDM) • Spin Precession • Stimulated Raman Adiabatic Passage (STIRAP) • Systematic Error Budget
2021-12-09 09:33:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7086403965950012, "perplexity": 3383.142638316191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00427.warc.gz"}
https://www.physicsforums.com/threads/fourier-analysis-sawtooth.713633/
# Fourier Analysis - sawtooth 1. Sep 30, 2013 ### freezer 1. The problem statement, all variables and given/known data Sawtooth signal with $T_{0} = 1$: $x(t) = t$ on one period, so $x = 0$ at $t = 0$ and $x \to 1$ as $t \to 1$. verify: $a_{k} = \left\{\begin{matrix} \frac{1}{2}, for k=0; & \\\frac{j}{2\pi k}, for k \neq 0; & \end{matrix}\right.$ 2. Relevant equations $\frac{1}{T_{0}} \int_{0}^{T_{0}} te^{-j(2\pi/T_{0})kt}dt$ 3. The attempt at a solution for k = 0 $a_{0} = \int_{0}^{1} t dt$ $a_{0} = \frac{1}{2} t^{2}$ from 0 to 1 = 1/2 for k != 0 $\int_{0}^{1} te^{-j2\pi kt}dt$ u = t du = dt dv = $e^{-j2\pi kt}\,dt$ $v = \frac{-1}{j2\pi k}e^{-j2\pi kt}$ $t * \frac{-1}{j2\pi k}e^{-j2\pi kt} - \int \frac{-1}{j2\pi k}e^{-j2\pi kt} dt$ $t * \frac{-1}{j2\pi k}e^{-j2\pi kt} - \frac{e^{-j2\pi kt}}{4\pi^2k^2}$ -1/j = j $t * \frac{j}{2\pi k}e^{-j2\pi kt} - \frac{e^{-j2\pi kt}}{4\pi^2k^2}$ $e^{-j2\pi kt} (t \frac{j}{2\pi k} - \frac{1}{4\pi^2 k^2})$ getting close but not seeing where to go from here. Last edited: Sep 30, 2013 2. Oct 1, 2013 ### Päällikkö Check the integration by parts rules: You seem to have forgotten to evaluate the first part at the boundaries (in particular, if you integrate over t from 0 to 1, there is no way t should remain in the final expression) $\int_a^b u(x)v'(x)\,dx = \left[u(x)v(x)\right]_a^b - \int_a^b u'(x)v(x)\,dx$, first term on the right hand side. 3. Oct 1, 2013 ### freezer $\frac{je^{-j2\pi k} }{2\pi k} - \frac{e^{-j2\pi k} }{4\pi^2 k^2} - \frac{1}{4\pi^2 k^2}$ 4. Oct 1, 2013 ### Päällikkö You seem to have a sign error. Also, remember that k is an integer (a periodic function is mapped into a series in Fourier space), and you should be able to arrive at the result. 5. Oct 1, 2013 ### freezer Okay, I see the sign error, but I'm still not seeing how that gets the other terms to fall out, leaving just j/(2πk). 6. Oct 1, 2013 ### Päällikkö k is an integer. What is $\exp(-j2\pi k)$ for k integer? 7. Oct 1, 2013 ### freezer Thank you Paallikko, I did not have that one in my notes.
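The closed forms being verified in this thread can also be checked numerically. A sketch (my own, not from the thread) that approximates $a_k = \int_0^1 t\,e^{-j2\pi kt}\,dt$ with a midpoint Riemann sum:

```python
import cmath

def fourier_coeff(k, n=20000):
    """Midpoint-rule approximation of a_k = integral_0^1 t*exp(-j*2*pi*k*t) dt."""
    dt = 1.0 / n
    total = 0j
    for i in range(n):
        t = (i + 0.5) * dt
        total += t * cmath.exp(-2j * cmath.pi * k * t) * dt
    return total

# expected: a_0 = 1/2 and a_k = j/(2*pi*k) for k != 0
print(abs(fourier_coeff(0) - 0.5))                       # ~0
print(abs(fourier_coeff(3) - 1j / (2 * cmath.pi * 3)))   # ~0
```

The analytic evaluation hinges on the identity pointed out in the last posts: $e^{-j2\pi k} = 1$ for integer $k$, which makes the two $1/(4\pi^2 k^2)$ terms cancel at the boundaries.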
2017-08-19 17:04:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6024484038352966, "perplexity": 1426.2138483257154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105700.94/warc/CC-MAIN-20170819162833-20170819182833-00599.warc.gz"}
http://www.physicsforums.com/printthread.php?t=286389
Physics Forums (http://www.physicsforums.com/index.php) -   General Physics (http://www.physicsforums.com/forumdisplay.php?f=111) -   -   How to use flow rates (GPM) and pressure drop to determine if a valve is useful (http://www.physicsforums.com/showthread.php?t=286389) MoonKnight Jan21-09 09:39 AM how to use flow rates (GPM) and pressure drop to determine if a valve is useful I need to determine if a valve can handle a flow rate of 50 GPM, this valve being: 2/2 way, normally closed, with 2-pilot control and coupled solenoid. I also need to determine the pressure drop... it is a Burkert model - 457361D the valve specs indicate: Cv = 35.1 pressure measurement = 0-145 psi if anyone can help, or push me in the right direction, thanks... FredGarvin Jan21-09 11:08 AM Re: how to use flow rates (GPM) and pressure drop to determine if a valve is useful The Cv is your main source of information. The Cv is, by definition, the flow (in GPM of water) through the valve with a delta P across it of 1 psi. Most valve suppliers will give you a curve of Cv vs. % open. You can calculate the flow through the valve with the following equation: $$Cv = Q \sqrt{\frac{SG}{\Delta P}}$$ proinwv Jan30-09 07:35 PM Re: how to use flow rates (GPM) and pressure drop to determine if a valve is useful The equation given is generally correct as long as there is no choking, and the flow is laminar.
The ISA standard S75.01.01 gives extensive information on the subject. Also, the valve manufacturer should provide sizing information that will take that into account. www.ostand.com Quote: Quote by proinwv (Post 2056305) ooops I meant to say the flow must be TURBULENT. I've never heard of a restriction on the flow being turbulent for that equation to be valid. Where did you reference that from? CS proinwv Jan30-09 09:29 PM Re: how to use flow rates (GPM) and pressure drop to determine if a valve is useful It is part of ISA S75.01.01. This is "the" standard of the valve industry. Most references ignore this at their peril. Choking occurs when the pressure in the vena contracta within the valve drops to the vapor pressure of the liquid and vaporization occurs, preventing further flow increases unless the inlet pressure is increased. Turbulent flow, rather than laminar or transitional, will pass the amount indicated by the equation that was quoted earlier. Otherwise the equation must be modified by a valve Reynolds number factor, which is ≤ 1. Turbulent flow occurs when the valve Reynolds number is at least 10,000. This is calculated by the equations in the ISA standard. If these factors are not checked, significant errors can occur in the calculation of flow or delta P. proinwv Jan30-09 09:49 PM Re: how to use flow rates (GPM) and pressure drop to determine if a valve is useful 1 Attachment(s) This might be of some use in understanding what I was trying to say, as it relates to turbulent flow. MoonKnight Feb2-09 11:33 AM Re: how to use flow rates (GPM) and pressure drop to determine if a valve is useful thanks for your help guys... I found what I needed a while ago, but your responses were appreciated
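Rearranging Fred's equation for the numbers in the original post gives a quick check. A sketch (the 50 GPM target and Cv = 35.1 come from the thread; water with SG = 1 and fully turbulent, non-choked flow are my assumptions):

```python
import math

def pressure_drop_psi(q_gpm, cv, sg=1.0):
    """Delta P across the valve, from Cv = Q * sqrt(SG / dP)."""
    return sg * (q_gpm / cv) ** 2

def flow_gpm(cv, dp_psi, sg=1.0):
    """Flow through the valve for a given pressure drop."""
    return cv * math.sqrt(dp_psi / sg)

# sanity check: by definition, Cv gallons of water flow at a 1 psi drop
print(flow_gpm(35.1, 1.0))                     # 35.1

# Burkert valve, Cv = 35.1, 50 GPM of water
print(round(pressure_drop_psi(50, 35.1), 2))   # 2.03 psi
```

So pushing 50 GPM through this valve costs only about 2 psi of drop, well inside its 0-145 psi rating, which matches Fred's point that the Cv is the main sizing input.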
2014-07-24 02:36:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4833049476146698, "perplexity": 2175.715447222471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997884827.82/warc/CC-MAIN-20140722025804-00178-ip-10-33-131-23.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=133&t=14458&p=35856
## Winter 2013 Question 1B Volume: $\Delta S = nR\ln \frac{V_{2}}{V_{1}}$ Temperature: $\Delta S = nC\ln \frac{T_{2}}{T_{1}}$ Kimberly Govea 2L Posts: 20 Joined: Mon Jan 26, 2015 2:17 pm ### Winter 2013 Question 1B Why is the specific heat capacity in the second entropy calculation 3/2 R? How do we know which ideal-gas heat capacity to use? I try to go by "constant pressure" or "constant volume", but it's not clear to me. The volume inside the balloon did change, right? So is this second entropy referring to the volume of the surroundings? Chem_Mod Posts: 19536 Joined: Thu Aug 04, 2011 1:53 pm Has upvoted: 882 times ### Re: Winter 2013 Question 1B Basically, the way they solved the problem was that they calculated the entropy change in terms of the change in volume (step 1), then they calculated the entropy change in terms of the change in temperature (step 2). So when they calculated the entropy while the volume was changing, they assumed everything else was constant (i.e. no change in temperature). And when they calculated the entropy for the change in temperature, they assumed that volume was held constant; that's why they used Cv. Even though volume was technically not held constant in this problem, the fact that you have information about volume and temperature, but no information about anything else, implies that you'll probably need to use Cv and not Cp.
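The two-step path described in the answer, an isothermal volume change followed by a constant-volume temperature change, can be sketched numerically (the helper is my own; $C_V = \tfrac{3}{2}R$ is the monatomic ideal-gas value the question asks about, since entropy is a state function the two steps can be added):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def delta_S(n, V1, V2, T1, T2, Cv=1.5 * R):
    """Entropy change of an ideal gas along a two-step path:
    step 1: isothermal volume change V1 -> V2   (dS = n*R*ln(V2/V1))
    step 2: constant-volume heating  T1 -> T2   (dS = n*Cv*ln(T2/T1))"""
    return n * R * math.log(V2 / V1) + n * Cv * math.log(T2 / T1)

# e.g. 1 mol doubling in volume at constant T: dS = R*ln(2) ~ 5.76 J/K
print(round(delta_S(1.0, 1.0, 2.0, 298.0, 298.0), 2))  # 5.76
```

Note that step 2 uses Cv precisely because volume is frozen along that leg of the path, which is the point Chem_Mod is making.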
2021-02-27 01:41:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.79462069272995, "perplexity": 764.188158116179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358033.38/warc/CC-MAIN-20210226234926-20210227024926-00362.warc.gz"}
https://www.biostars.org/p/110955/
Problem with SHRiMP2 on AB SOLiD data 1 0 Entering edit mode 7.2 years ago laihui126cn ▴ 40 I want to map AB SOLiD data. The reads are paired-end. The format of the .fastq files is shown below:

pair1:

@SRR586064.1ugc_357_358_MatePair_2x50bp_solid0032_20100528_MP_ugc_357_854_13_97/1
T30..01121.12.1032100213131122200031222022101302313
+
!AB!!?:>@<!@B!;AB?AA@@<2<@?@@<?AB>A?9?:;@@>;-=>>7=@

pair2:

@SRR586064.1ugc_357_358_MatePair_2x50bp_solid0032_20100528_MP_ugc_357_854_13_97/2
G10330000122222033220201000000220002000000000000000
+
!@BBABBBB@>?@@(.))35.%-.3((%1+%((82-'.*-*/3*14*'696

I tried the tool SHRiMP2, but I am having trouble running it on my datasets. I ran the command:

gmapper-cs -1 pair1.fastq -2 pair2.fastq -L ./test/Sscro --pair-mode opp-in -Q --trim-first --qv-offset 33 >pair12.sam 2>pair12.log

And I got this at the end of the .log file:

note: detected fastq format in input file [pair1.fastq] - Processing read files [pair1.fastq , pair2.fastq] note: quality value format not set explicitly; using PHRED+33 done r/hr r/core-hr There has been a problem reading in the read "SRR586064.1", the quality length exceeds the sequence length! Are you using the right executable? gmapper-cs for color space? and gmapper-ls for letter space?

I have tried some SHRiMP2 parameters, but failed. And I can't find any similar example on the SHRiMP2 website. Has anybody had the same problem and solved it? Or maybe I should use another tool; any suggestions? alignment SHRiMP2 cs fastq SOLID • 2.7k views 0 Entering edit mode grep -A4 SRR586064.1 (I think it's the very first read) from your fastq file and paste it here. The error is pretty obvious and is due to an inconsistency in length between the read sequence and its quality-score string. Did you trim your reads beforehand with some trimming tool? Or maybe you are not providing the arguments in the right manner. For example, you used --trim-first but didn't mention any integer value.
Please read the manual carefully and try again. 0 Entering edit mode Thank you, Pandey. I have solved this problem. Have you ever used the tool SHRiMP2? I have trouble understanding its arguments, and there is no example in the SHRiMP2 README. Could you help me? 0 Entering edit mode Ya, I have used it a lot when I used to work with colorspace reads. If you have no idea about the arguments, I would advise you to go with the defaults. They have been well tested and should produce optimal results unless you are trying to do something really different. 0 Entering edit mode So happy to hear from you. I have tried the default arguments and got a terrible result: reads matched (17.4191%). I used the command:

gmapper-cs SRR_test.fastq \
    --local \
    -Q -N 15 \
    --all-contigs \
    --single-best-mapping >SRR_test.sam 2>SRR_test.log

Another, better result: reads matched (54.3101%). I used the command:

gmapper-cs SRR_test.fastq \
    --local \
    -Q -N 15 \
    -r 45% \
    -v 20% \
    -h 20% \
    --all-contigs \
    --single-best-mapping >SRR_test.sam 2>SRR_test.log

Can you give me some suggestions on my command? Thanks. 0 Entering edit mode Hi, I'm using gmapper-cs in SHRiMP 2.2.3 and facing the same problem: the quality length exceeds the sequence length. I only use the default parameters -N 28 -E, but it still raises this error. How did you solve this problem? Thanks~ 0 Entering edit mode Maybe my script could help you. I trimmed the first QV, but the first base is unchanged. You can write a script, or I can give you mine tomorrow morning. Sorry for replying by phone. 0 Entering edit mode

# Trim the leading quality value (the primer base carries no QV)
# from every quality line of a colorspace FASTQ.
my ($file1, $file2) = @ARGV;
open IN,  "<$file1" or die "can't open the input file";
open OUT, ">$file2" or die "please enter the output filename";
while (<IN>) {
    print OUT $_;              # @header line
    $_ = <IN>; print OUT $_;   # sequence: primer base + color calls
    $_ = <IN>; print OUT $_;   # '+' separator
    $_ = <IN>;                 # quality line
    print OUT substr($_, 1);   # drop the leading '!' placeholder
}
close IN;
close OUT;

0 Entering edit mode Thank you very much, and sorry for the late reply; I understand now.
But why did you choose to trim the first QV and not the last? Will the mapping process make use of the QVs?

0 Entering edit mode

I can tell you that my cs-fastq file looks like this:

@SRR_2x50bp_solid0032_20100528_MP_ugc_357_854_13_526/1
T11..02300.23.2232000120031323333021202133212323310
+
!%(!!;%&@1!'/!%7%&A?;&(792)'27@'%.@<%%4+<)'(),%%&*-
@SRR_MatePair_2x50bp_solid0032_20100528_MP_ugc_357_854_13_174/1
T12..11220.00.0210230201212023232220021300333211031
+
!AB!!@BBAB!>;!AA@?@B>AABA7>BBB:@AB?>>@?A>>ABB=AB6@;
@SRR_MatePair_2x50bp_solid0032_20100528_MP_ugc_357_854_13_207/1
T30..01011.01.0103220033000022030032232200322022112
+
!B9!!;>5+5!48!4:,??%%+=;6<4%>;2@&%<?)81=+5%)@92?&1=
@SRR_MatePair_2x50bp_solid0032_20100528_MP_ugc_357_854_13_282/1
T21..12213.31.1321000232210220312000223220220031003
+
!%%!!&*(&%!,&!&''&(('&&)'%*)&&()((&(*()%'&('&('(*&&

Significantly, the first QV of every read is "!", corresponding to the first color base. The character '!' represents the lowest quality, so it may just be a placeholder that marks that position as different from the others.

0 Entering edit mode

OK, I understand. In my files, by contrast, the sequences are longer than the QV strings, so I also trim them from the tail. The SOLiD format is not intuitive.

0 Entering edit mode 7.0 years ago laihui126cn ▴ 40

Could you paste some of your reads? That would make it easy to see whether our SOLiD reads are similar.

0 Entering edit mode

I am having similar problems. I downloaded the fastq files from dbGaP with the SRA tools. This is what my reads look like:

@SRR957099.1 1_14_37 length=50
T201..133...01....1....2...........................
+SRR957099.1 1_14_37 length=50
!@@@!!@@@!!!@@!!!!@!!!!@!!!!!!!!!!!!!!!!!!!!!!!!!!!
@SRR957099.2 1_14_50 length=50
T311..320...00....0....2...........................
+SRR957099.2 1_14_50 length=50
!@@@!!@@@!!!@@!!!!@!!!!@!!!!!!!!!!!!!!!!!!!!!!!!!!!

Do I need to take out the ! at the beginning of each QV line?
0 Entering edit mode

Maybe you need to filter your csfasta and qual files using the SOLiD preprocessing pipeline (see the QC pipeline at https://github.com/fls-bioinformatics-core/genomics); the same pipeline converts your files to fastq.
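For later readers: the quality-trimming workaround described earlier in the thread (dropping the leading "!" quality value of each SOLiD cs-fastq record, as the Perl script does) can be sketched in Python. This is my own illustrative version, not part of SHRiMP; it assumes plain 4-line FASTQ records with no line wrapping, and the function name is mine.

```python
def trim_first_qv(fastq_text):
    """Drop the first quality character of every 4-line FASTQ record.

    In SOLiD color-space fastq, the quality string often starts with a
    '!' placeholder for the primer base, which (as found in this thread)
    makes it one character longer than gmapper-cs expects.
    """
    lines = fastq_text.splitlines()
    out = []
    for i in range(0, len(lines), 4):
        header, seq, plus, qual = lines[i:i + 4]
        out.extend([header, seq, plus, qual[1:]])  # trim the leading QV
    return "\n".join(out) + "\n"

# Example: the leading '!' is removed from the quality line.
print(trim_first_qv("@r/1\nT0123\n+\n!ABCD\n"))
```

Read a whole file's contents, pass it through this function, and write the result back out; it performs the same transformation as the Perl script in the thread.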
http://mathhelpforum.com/discrete-math/274724-disjoint-independent-events.html
1. ## Disjoint/Independent Events

Let E be an experiment with sample space S. Let A and B be events in S, where A and B both have positive probability.

i) Show that if A and B are disjoint, then they are NOT independent.
ii) Show that if A and B are independent, then they are NOT disjoint.

I am getting more familiar with basic probability, but where would I start for this? Also, what makes two events disjoint/independent?

2. ## Re: Disjoint/Independent Events

Apparently, you don't know the definitions of disjoint and independent. Events A and B are disjoint iff $A\cap B=\emptyset$, and A and B are independent iff $P(A\cap B)=P(A)P(B)$. Now suppose both $P(A)>0$ and $P(B)>0$.

1. Suppose A and B are disjoint. Can A and B be independent? Remember $P(\emptyset)=0$.
2. Suppose A and B are independent. Can A and B be disjoint? Remember the product of two positive reals is positive.

3. ## Re: Disjoint/Independent Events

Thank you for explaining that. This now makes sense; I am just curious how to actually word this in a concise format.

4. ## Re: Disjoint/Independent Events

Originally Posted by azollner95
I am just curious how to actually word this in a concise format.

Assuming that you know logic, let $D$ be the statement that "two events with positive probability are disjoint"; let $I$ be the statement that "two events with positive probability are independent".

Part i) says: if D then not I. In symbols $D \Rightarrow \;\neg I$, which is equivalent to $\neg D\vee\neg I$.
Part ii) says: if I then not D. In symbols $I \Rightarrow \;\neg D$, which is equivalent to $\neg I\vee\neg D$.

Write out the truth-table for these two statements (the table image is not shown here). Can we say "two events with positive probability are not independent OR not disjoint"?
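For a concise write-up along the lines of reply 2's hints (my own wording):

i) Suppose A and B are disjoint. Then $P(A\cap B) = P(\emptyset) = 0$, while $P(A)P(B) > 0$ because each factor is positive. Hence $P(A\cap B) \neq P(A)P(B)$, so A and B are not independent.

ii) Suppose A and B are independent. Then $P(A\cap B) = P(A)P(B) > 0$, so $P(A\cap B) \neq 0$, hence $A\cap B \neq \emptyset$, i.e. A and B are not disjoint.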
https://testbook.com/question-answer/a-high-value-of-thermal-diffusivity-represents--5e8428c5f60d5d0664811263
# A high value of thermal diffusivity represents

This question was previously asked in UPPSC AE Mechanical 2013 Official Paper II

1. high storage, less conduction of heat
2. less storage, more conduction of heat
3. There is always an equal amount of conduction and storage, since it is a property
4. It has no relevance

Option 2 : less storage, more conduction of heat

## Detailed Solution

Concept:

The thermal diffusivity of a material is given as $$\alpha = \frac{k}{\rho c}$$. It is a property of the material. The larger the value of α, the faster heat diffuses through the material; this means less heat is stored and more conduction occurs. A high value of α can result either from a high thermal conductivity k or from a low volumetric heat capacity ρc. Thermal diffusivity α has units of square meters per second (m²/s).
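As a quick illustration (the property values below are typical room-temperature figures quoted from memory, so treat them as approximate): for copper, k ≈ 400 W/(m·K), ρ ≈ 8960 kg/m³, c ≈ 385 J/(kg·K), giving

$$\alpha = \frac{k}{\rho c} \approx \frac{400}{8960 \times 385} \approx 1.2 \times 10^{-4}\ \text{m}^2/\text{s}$$

This is several orders of magnitude above wood (on the order of 10⁻⁷ m²/s): heat diffuses through copper quickly rather than being stored, exactly as option 2 says.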
http://mathhelpforum.com/new-users/209651-can-t-prove-equation.html
# Math Help - Can't prove this equation

1. ## Can't prove this equation

I need help on how to prove that 9^(n+3) + 4^n is divisible by 5. Please help, I have no idea how to solve this.

2. ## Re: Can't prove this equation

I would use induction, observing that:

$(9^{n+4}+4^{n+1})-(9^{n+3}+4^n)=5\cdot9^{n+3}+3(9^{n+3}+4^n)$

3. ## Re: Can't prove this equation

$9^{n+3} + 4^n = 9^n \cdot 9^3 + 4^n = (5+4)^n \cdot 729 + 4^n$

Use the binomial theorem to expand $(5+4)^n$. Every term except the last, $4^n$, is divisible by 5. That leaves $4^n \cdot 729 + 4^n = 730 \cdot 4^n$, which is divisible by 5.

4. ## Re: Can't prove this equation

Or you can observe that

$9^{n+3} + 4^n = 729(9^n) + 4^n$
$\equiv 729(4^n) + 4^n$ (mod 5)
$\equiv 730(4^n)$ (mod 5)
$\equiv 0$ (mod 5)

5. ## Re: Can't prove this equation

Originally Posted by richard1234
Or you can observe that
$9^{n+3} + 4^n = 729(9^n) + 4^n$
$\equiv 729(4^n) + 4^n$ (mod 5)
$\equiv 730(4^n)$ (mod 5)
$\equiv 0$ (mod 5)

Even simpler: $9 \equiv 4$ (mod 5), whence $9^3 \equiv 4^3 \equiv 4$ (mod 5) (since $4^2 = 16 \equiv 1$ (mod 5)). Thus $9^{n+3} + 4^n \equiv 4(4^n) + 4^n = 5(4^n) \equiv 0$ (mod 5).

Why do this? Because why should I have to calculate the cube of 9 when I can calculate the cube of 4 instead (729 is a number I don't use every day)?

6. ## Re: Can't prove this equation

Originally Posted by Deveno
Why do this? Because why should I have to calculate the cube of 9 when I can calculate the cube of 4 instead (729 is a number I don't use every day)?

Yeah, that solution's slightly simpler than mine. I just happen to have 9^3 memorized.
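For completeness, here is the induction suggested in reply 2 written out in full:

Base case (n = 0): $9^{3} + 4^{0} = 729 + 1 = 730 = 5 \cdot 146$.

Inductive step: suppose $5 \mid 9^{n+3} + 4^n$. Then

$9^{n+4} + 4^{n+1} = 9 \cdot 9^{n+3} + 4 \cdot 4^n = 4(9^{n+3} + 4^n) + 5 \cdot 9^{n+3}$

Both terms on the right are divisible by 5 (the first by the inductive hypothesis), so $5 \mid 9^{n+4} + 4^{n+1}$, completing the induction.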
http://haskellformaths.blogspot.com/2011_02_01_archive.html
## Monday, 21 February 2011

### Tensor products of vector spaces, part 1

A little while back on this blog, we defined the free k-vector space over a type b:

`newtype Vect k b = V [(b,k)] deriving (Eq,Ord)`

Elements of Vect k b are k-linear combinations of elements of b. Whenever we have a mathematical structure like this, we want to know about building blocks and new-from-old constructions. We already looked at one new-from-old construction: given free k-vector spaces A = Vect k a and B = Vect k b, we can construct their direct sum A⊕B = Vect k (Either a b). We saw that the direct sum is both the product and the coproduct in the category of free vector spaces - which means that it is the object which satisfies the universal properties of the product and coproduct diagrams (diagrams omitted here). So we have injections i1, i2 : Vect k a, Vect k b -> Vect k (Either a b), to put elements of A and B into the direct sum A⊕B, and projections p1, p2 : Vect k (Either a b) -> Vect k a, Vect k b to take them back out again.

However, there is another obvious new-from-old construction: Vect k (a,b). What does this represent?

In order to answer that question, we need to look at bilinear functions. The basic idea of a bilinear function is that it is a function of two arguments, which is linear in each argument. So we might start by looking at functions f :: Vect k a -> Vect k b -> Vect k t. However, functions of two arguments don't really sit very well in category theory, where arrows are meant to have a single source. (We can handle functions of two arguments in multicategories, but I don't want to go there just yet.) In order to stay within category theory, we need to combine the two arguments into a single argument, using the direct sum construction. So instead of looking at functions f :: Vect k a -> Vect k b -> Vect k t, we will look at functions f :: Vect k (Either a b) -> Vect k t.
To see that they are equivalent, recall from last time that Vect k (Either a b) is isomorphic to (Vect k a, Vect k b), via the isomorphisms:

```to :: (Vect k a, Vect k b) -> Vect k (Either a b)
to = \(u,v) -> i1 u <+> i2 v

from :: Vect k (Either a b) -> (Vect k a, Vect k b)
from = \uv -> (p1 uv, p2 uv)```

So in going from f :: Vect k a -> Vect k b -> Vect k t to f :: Vect k (Either a b) -> Vect k t, we're really just uncurrying.

Ok, so suppose we are given f :: Vect k (Either a b) -> Vect k t. It helps to still think of this as a function of two arguments, even though we've wrapped them up together in either side of a direct sum. Then we say that f is bilinear, if it is linear in each side of the direct sum. That is:
- for any fixed a0 in A, the function f_a0 :: Vect k b -> Vect k t, f_a0 = \b -> f (i1 a0 <+> i2 b) is linear
- for any fixed b0 in B, the function f_b0 :: Vect k a -> Vect k t, f_b0 = \a -> f (i1 a <+> i2 b0) is linear

Here's a QuickCheck property to test whether a function is bilinear:

```prop_Bilinear :: (Num k, Ord a, Ord b, Ord t) =>
    (Vect k (Either a b) -> Vect k t) -> (k, Vect k a, Vect k a, Vect k b, Vect k b) -> Bool
prop_Bilinear f (k,a1,a2,b1,b2) =
    prop_Linear (\b -> f (i1 a1 <+> i2 b)) (k,b1,b2) &&
    prop_Linear (\a -> f (i1 a <+> i2 b1)) (k,a1,a2)

prop_BilinearQn f (a,u1,u2,v1,v2) = prop_Bilinear f (a,u1,u2,v1,v2)
    where types = (a,u1,u2,v1,v2) :: (Q, Vect Q EBasis, Vect Q EBasis, Vect Q EBasis, Vect Q EBasis)```
This has the type dot0 :: Vect k (Either a b) -> k, whereas we need something of type Vect k (Either a b) -> Vect k t. Now, it is of course true that k is a k-vector space. However, as it stands, it's not a free k-vector space over some basis type t. Luckily, this is only a technicality, which is easily fixed. When we want to consider k as itself a (free) vector space, we will take t = (), the unit type, and equate k with Vect k (). Since the type () has only a single inhabitant, the value (), then Vect k () consists of scalar multiples of () - so it is basically just a single copy of k itself. The isomorphism between k and Vect k () is \k -> k *> return ().

Okay, so now that we know how to represent k as a free k-vector space, we can define dot product again:

```dot1 uv = nf $ V [( (), if a == b then x*y else 0) | (a,x) <- u, (b,y) <- v]
    where V u = p1 uv
          V v = p2 uv```

This now has the type dot1 :: Vect k (Either a b) -> Vect k (). Here's how you use it:

```> dot1 ( i1 (e1 <+> 2 *> e2) <+> i2 (3 *> e1 <+> e2) )
5()```

(So thinking of our function as a function of two arguments, what we do is use i1 to inject the first argument into the left hand side of the direct sum, and i2 to inject the second argument into the right hand side.) So we can now use the QuickCheck property:

```> quickCheck (prop_BilinearQn dot1)
+++ OK, passed 100 tests.```

Another example of a bilinear function is polynomial multiplication. Polynomials of course form a vector space, with basis {x^i | i <- [0..] }. So we could define a type to represent the monomials x^i, and then form the polynomials as the free vector space in the monomials. In a few weeks we will do that, but for the moment, to save time, let's just use our existing EBasis type, and take E i to represent x^i.
Then polynomial multiplication is the following function:

```polymult1 uv = nf $ V [(E (i+j) , x*y) | (E i,x) <- u, (E j,y) <- v]
    where V u = p1 uv
          V v = p2 uv```

Let's just convince ourselves that this is polynomial multiplication:

```> polymult1 (i1 (e 0 <+> e 1) <+> i2 (e 0 <+> e 1))
e0+2e1+e2```

So this is just our way of saying that (1+x)*(1+x) = 1+2x+x^2. Again, let's verify that this is bilinear:

```> quickCheck (prop_BilinearQn polymult1)
+++ OK, passed 100 tests.```

So what's all this got to do with Vect k (a,b)? Well, here's another bilinear function:

```tensor :: (Num k, Ord a, Ord b) => Vect k (Either a b) -> Vect k (a, b)
tensor uv = nf $ V [( (a,b), x*y) | (a,x) <- u, (b,y) <- v]
    where V u = p1 uv; V v = p2 uv

> quickCheck (prop_BilinearQn tensor)
+++ OK, passed 100 tests.```

So this "tensor" function takes each pair of basis elements a, b in the input to a basis element (a,b) in the output. The thing that is interesting about this bilinear function is that it is in some sense "the mother of all bilinear functions". Specifically, you can specify a bilinear function completely by specifying what happens to each pair (a,b) of basis elements. It follows that any bilinear function f :: Vect k (Either a b) -> Vect k t can be factored as f = f' . tensor, where f' :: Vect k (a,b) -> Vect k t is the linear function having the required action on the basis elements (a,b) of Vect k (a,b). For example:

```bilinear :: (Num k, Ord a, Ord b, Ord c) =>
    ((a, b) -> Vect k c) -> Vect k (Either a b) -> Vect k c
bilinear f = linear f . tensor

dot = bilinear (\(a,b) -> if a == b then return () else zero)

polymult = bilinear (\(E i, E j) -> return (E (i+j)))```

We can check that these are indeed the same functions as we were looking at before:

```> quickCheck (\x -> dot1 x == dot x)
+++ OK, passed 100 tests.
> quickCheck (\x -> polymult1 x == polymult x)
+++ OK, passed 100 tests.```

So Vect k (a,b) has a special role in the theory of bilinear functions. If A = Vect k a, B = Vect k b, then we write A⊗B = Vect k (a,b) (pronounced "A tensor B"). [By the way, the factorization diagram for this property (not shown here) might upset category theorists - because its arrows are not all arrows in the category of vector spaces. Specifically, note that bilinear maps are not, in general, linear. We'll come back to this in a moment.]

So a bilinear map can be specified by its action on the tensor basis (a,b). This corresponds to writing out matrices. To specify any bilinear map Vect k (Either a b) -> Vect k t, you write out a matrix with rows indexed by a, columns indexed by b, and entries in Vect k t.

```
      b1  b2 ...
a1 (t11 t12 ...)
a2 (t21 t22 ...)
... (...        )
```

So this says that (ai,bj) is taken to tij. Then given an element of A⊕B = Vect k (Either a b), which we can think of as a vector (x1 a1 + x2 a2 + ...) in A = Vect k a together with a vector (y1 b1 + y2 b2 + ...) in B = Vect k b, then we can calculate its image under the bilinear map by doing matrix multiplication as follows:

```
  a1 a2 ...         b1  b2 ...
(x1 x2 ...)   a1 (t11 t12 ...)   b1 (y1)
              a2 (t21 t22 ...)   b2 (y2)
             ... (...        )  ... (...)
```

(Sorry, this diagram might be a bit confusing. The ai, bj are labeling the rows and columns. The xi are the entries in a row vector in A, the yj are the entries in a column vector in B, and the tij are the entries in the matrix.)

So xi ai <+> yj bj goes to xi yj tij. For example, dot product corresponds to the matrix:

```
(1 0 0)
(0 1 0)
(0 0 1)
```

Polynomial multiplication corresponds to the matrix:

```
    e0 e1 e2 ...
e0 (e0 e1 e2 ...)
e1 (e1 e2 e3 ...)
e2 (e2 e3 e4 ...)
...
```

A matrix with entries in T = Vect k t is just a convenient way of specifying a linear map from A⊗B = Vect k (a,b) to T.
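As a small worked instance of the matrix recipe just described (my own example, using the 2-dimensional dot-product matrix):

```
           b1 b2
(x1 x2) a1 (1  0)  b1 (y1)   =   x1*y1 + x2*y2
        a2 (0  1)  b2 (y2)
```

So the bilinear map sends x1 a1 + x2 a2 and y1 b1 + y2 b2 to x1*y1 + x2*y2, agreeing with the dot product computed by dot1 earlier in the post.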
Indeed, any matrix, provided that all the entries are in the same T, defines a bilinear function. So bilinear functions are ten-a-penny. Now, I stated above that bilinear functions are not in general linear. For example: ```> quickCheck (prop_Linear polymult) *** Failed! Falsifiable (after 2 tests and 2 shrinks): (0,Right e1,Left e1)``` What went wrong? Well: ```> polymult (Right e1) 0 > polymult (Left e1) 0 > polymult (Left e1 <+> Right e1) e2``` So we fail to have f (a <+> b) = f a <+> f b, which is one of the requirements of a linear function. Conversely, it's also important to realise that linear functions (on Vect k (Either a b)) are not in general bilinear. For example: ```> quickCheck (prop_BilinearQn id) *** Failed! Falsifiable (after 2 tests): (1,0,0,e1,0)``` The problem here is: ```> id \$ i1 (zero <+> zero) <+> i2 e1 Right e1 > id \$ (i1 zero <+> i2 e1) <+> (i1 zero <+> i2 e1) 2Right e1``` So we fail to have linearity in the left hand side (or the right for that matter). Indeed we can kind of see that linearity and bilinearity are in conflict. - Linearity requires that f (a1 <+> a2 <+> b) = f a1 <+> f a2 <+> f b - Bilinearity requires that f (a1 <+> a2 <+> b) = f (a1 <+> b) <+> f (a2 <+> b) Exercise: Find a function which is both linear and bilinear. ## Tuesday, 1 February 2011 ### Products of lists and vector spaces Last time, we looked at coproducts - of sets/types, of lists, and of free vector spaces. I realised afterwards that there were a couple more things I should have said, but forgot. Recall that the coproduct of A and B is an object A+B, together with injections i1: A -> A+B, i2: B-> A+B, with the property that whenever we have arrows f: A -> T, g: B -> T, they can be factored through A+B to give an arrow f+g, satisfying f+g . i1 = f, f+g . i2 = g. Firstly then, I forgot to say why we called the coproduct A+B with a plus sign. Well, it's because, via the injections i1 and i2, it contains (a copy of) A and (a copy of) B. 
So it's a bit like a sum of A and B. Second, I forgot to say that in the case of vector spaces, the coproduct is called the direct sum, and has its own special symbol A⊕B.

Okay, so this time I want to look at products. Suppose we have objects A and B in some category. Then their product (if it exists) is an object A×B, together with projections p1: A×B -> A, p2: A×B -> B, with the following universal property: whenever we have arrows f: S -> A and g: S -> B, then they can be factored through A×B to give an arrow f×g: S -> A×B, such that f = p1 . f×g, g = p2 . f×g. (The definitions of product and coproduct are dual to one another - the diagrams are the same but with the directions of the arrows reversed.)

In the category Set, the product of sets A and B is their Cartesian product A×B. In the category Hask, of course, the product of types a and b is written (a,b), p1 is called fst, and p2 is called snd. We can then define the required product map as:

`(f .*. g) x = (f x, g x)`

Then it should be clear that fst . (f .*. g) = f, and snd . (f .*. g) = g, as required.

Okay, so what do products look like in the category of lists (free monoids)? (Recall that in this category, the arrows are required to be monoid homomorphisms, meaning that f [] = [] and f (xs++ys) = f xs ++ f ys. It follows that we can express f = concatMap f', for some f'.) Well, the obvious thing to try as the product is the Cartesian product ([a],[b]). Is the Cartesian product of two monoids a monoid? Well yes it is actually. We could give it a monoid structure as follows:

```(as1, bs1) ++ (as2, bs2) = (as1++as2, bs1++bs2)
[] = ([],[])```

This isn't valid Haskell code of course.
It's just my shorthand way of expressing the following code from Data.Monoid:

```instance Monoid [a] where
        mempty  = []
        mappend = (++)

instance (Monoid a, Monoid b) => Monoid (a,b) where
        mempty = (mempty, mempty)
        (a1,b1) `mappend` (a2,b2) =
                (a1 `mappend` a2, b1 `mappend` b2)```

From these two instances, it follows that ([a],[b]) is a monoid, with monoid operations equivalent to those I gave above. (In particular, it's clear that the construction satisfies the monoid laws: associativity of ++, identity of [].)

But it feels like there's something unsatisfactory about this. Wouldn't it be better for the product of list types [a] and [b] to be another list type [x], for some type x? Our first thought might be to try [(a,b)]. The product map would then need to be something like \ss -> zip (f ss) (g ss). However, we quickly see that this won't work: what if f ss and g ss are not the same length?

What else might work? Well, if you think of ([a],[b]) as some as on the left and some bs on the right, then the answer should spring to mind. Let's try [Either a b]. We can then define:

```p1 xs = [x | Left x <- xs] -- this is doing a filter and a map at the same time
p2 xs = [x | Right x <- xs]```

with the product map

`f×g = \ss -> map Left (f ss) ++ map Right (g ss)`

Then it is clear that p1 . f×g = f and p2 . f×g = g, as required.

What is the relationship between ([a],[b]) and [Either a b]? Well, ([a],[b]) looks a bit like a subset of [Either a b], via the injection i (as,bs) = map Left as ++ map Right bs. However, this injection is not a monoid homomorphism, since

`i (([],[b1]) ++ ([a1],[])) /= i ([],[b1]) ++ i ([a1],[])`

So ([a],[b]) is not a submonoid of [Either a b].
Which of ([a],[b]) and [Either a b] is really the product of [a] and [b]? Well, it depends. It depends which category we think we're working in. If we're working in the category of monoids, then it is ([a],[b]). However, if we're working in the category of free monoids (lists), then it is [Either a b]. You see, ([a],[b]) is not a free monoid. What does this mean? Well, it basically means it's not a list. But how do we know that ([a],[b]) isn't equivalent to some list? And anyway, what does "free" mean in free monoid? "Free" is a concept that can be applied to many algebraic theories, not just monoids. There is more than one way to define it. An algebraic theory defines various constants and operations. In the case of monoids, there is one constant - which we may variously call [] or mempty or 0 or 1 - and one operation - ++ or mappend or + or *. Now, a given monoid may turn out to be generated by some subset of its elements - meaning that every element of the monoid can be equated with some expression in the generators, constants, and operations. For example, the monoid of natural numbers is generated by the prime numbers: every natural number is equal to some expression in 1, *, and the prime numbers. The monoid [x] is generated by the singleton lists: every element of [x] is equal to some expression in [], ++, and the singleton lists. By a slight abuse of notation, we can say that [x] is generated by x - by identifying the singleton lists with the image of x under \x -> [x]. Then we say that a monoid is free on its generators if there are no relations among its elements other than those implied by the monoid laws. That is, no two expressions in the generators, constants, and operators are equal to one another, unless it is as a consequence of the monoid laws. For example, suppose it happens that `(x ++ y) ++ z = x ++ (y ++ z)` That's okay, because it follows from the monoid laws. 
On the other hand, suppose that `x ++ y = y ++ x` This does not follow from the monoid laws (unless x = [] or y = []), so is a non-trivial relation. (Thus the natural numbers under multiplication are not a free monoid - because they're commutative.) What about our type ([a],[b]) then? Well consider the following relations: ```(as,[]) ++ ([],bs) = ([],bs) ++ (as,[]) (as1++as2,bs1) ++ ([],bs2) = (as1,bs1) ++ (as2,bs2) = (as1,[]) ++ (as2,bs1++bs2)``` We have commutativity relations between the [a] and [b] parts of the product. Crucially, these relations are not implied by the monoid structure alone. So intuitively, we can see that ([a],[b]) is not free. The "no relations" definition of free is the algebraic way to think about it. However, there is also a category theory way to define it. The basic idea is that if a monoid is free on its generators, then given any other monoid with the same generators, we can construct it as a homomorphic image of our free monoid, by "adding" the appropriate relations. In order to express this properly, we're going to need to use some category theory, and specifically the concept of the forgetful functor. Recall that given any algebraic category, such as Mon (monoids), there is a forgetful functor U: Mon -> Set, which consists in simply forgetting the algebraic structure. U takes objects to their underlying sets, and arrows to the underlying functions. In Haskell, U: Mon -> Hask consists in forgetting that our objects (types) are monoids, and forgetting that our arrows (functions) are monoid homomorphisms. (As a consequence, U is syntactically invisible in Haskell. However, to properly understand the definition of free, we have to remember that it's there.) Then, given an object x (the generators), a free monoid on x is a monoid y, together with a function i: x -> U y, such that whenever we have an object z in Mon and a function f': x -> U z, then we can lift it to a unique arrow f: y -> z, such that f' = Uf . i. 
When we say that lists are free monoids, we mean specifically that (the type) [x] is free on (the type) x, via the function i = \x -> [x] (on values). This is free, because given any other monoid z, and function f' :: x -> z, then we can lift to a monoid homomorphism f :: [x] -> z, with f' = f . i. How? Well, the basic idea is to use concatMap. The type of concatMap is: `concatMap :: (a -> [b]) -> [a] -> [b]` So it's doing the lifting we want. However this isn't quite right, because this assumes that the target monoid z is a list. So we need this slight variant: ```mconcatmap :: (Monoid z) => (x -> z) -> [x] -> z mconcatmap f xs = mconcat (map f xs)``` If we set f = mconcatmap f', then we will have ```(f . i) x = f (i x) = f [x] = mconcatmap f' [x] = mconcat (map f' [x]) = mconcat [f' x] = foldr mappend mempty [f' x]  -- definition of mconcat = mappend mempty (f' x)  -- definition of foldr = f' x  -- identity of mempty``` Now, what would it mean for ([a],[b]) to be free? Well, first, what is it going to be free on? To be free on a and b is the same as being free on Either a b (the disjoint union of a and b). Then our function i is going to be ```i (Left a) = ([a],[]) i (Right b) = ([],[b])``` Then for ([a],[b]) to be free would mean that whenever we have a function f' :: Either a b -> z, with z a monoid, then we can lift it to a monoid homomorphism f : ([a],[b]) -> z, such that f' = f . i. So can we? Well, what if our target monoid z doesn't satisfy the a-b commutativity relations that we saw. That is, what if: `f' a1 `mappend` f' b1 /= f' b1 `mappend` f' a1 -- (A)` That would be a problem. We are required to find an f such that f' = f . i. We know that i a1 = ([a1],[]), i b1 = ([],[b1]). So we know that i a1 `mappend` i b1 = i b1 `mappend` i a1. 
f is required to be a monoid homomorphism, so by definition: ```f (i a1 `mappend` i b1) = f (i a1) `mappend` f (i b1) f (i b1 `mappend` i a1) = f (i b1) `mappend` f (i a1)``` But then since the two left hand sides are equal, then so are the two right hand sides, giving: `f (i a1) `mappend` f (i b1) = f (i b1) `mappend` f (i a1) -- (B)` But now we have a contradiction between (A) and (B), since f' = f . i. So for a concrete counterexample, showing that ([a],[b]) is not free, all we need is a monoid z in which the a-b commutativity relations don't hold. Well that's easy: [Either a b]. Just take f' :: Either a b -> [Either a b], f' = \x -> [x]. Now try to find an f :: ([a],[b]) -> [Either a b], with f' = f . i. The obvious f is f (as,bs) = map Left as ++ map Right bs. But the problem is that this f isn't a monoid homomorphism: `f ( ([],[b1]) `mappend` ([a1],[]) ) /= f ([],[b1]) `mappend` f ([a1],[])` Notice the connection between the two definitions of free. It was because ([a],[b]) had non-trivial relations that we couldn't lift a function to a monoid homomorphism in some cases. The cases where we couldn't were where the target monoid z didn't satisfy the relations. Okay, so sorry, that got a bit technical. To summarise, the product of [a], [b] in the category of lists / free monoids is [Either a b]. What about vector spaces? What is the product of Vect k a and Vect k b? Well, similarly to lists, we can make (Vect k a, Vect k b) into a vector space, by defining 0 = (0,0) (a1,b1) + (a2,b2) = (a1+a2,b1+b2) k(a,b) = (ka,kb) Exercise: Show that with these definitions, fst, snd and f .*. g are vector space morphisms (linear maps). Alternatively, Vect k (Either a b) is of course a vector space. 
We can define:

```
p1 = linear p1' where
    p1' (Left a)  = return a
    p1' (Right b) = zero

p2 = linear p2' where
    p2' (Left a)  = zero
    p2' (Right b) = return b

prodf f g = linear fg' where
    fg' b = fmap Left (f (return b)) <+> fmap Right (g (return b))
```

In this case p1, p2 and f×g are vector space morphisms by definition, since they were constructed using "linear". How do we know that they satisfy the product property? Well, this looks like a job for QuickCheck. The following code builds on the code we developed last time:

```
prop_Product (f',g',x) =
    f x == (p1 . fg) x &&
    g x == (p2 . fg) x
    where f  = linfun f'
          g  = linfun g'
          fg = prodf f g

newtype SBasis = S Int deriving (Eq,Ord,Show,Arbitrary)

prop_ProductQn (f,g,x) = prop_Product (f,g,x)
    where types = (f,g,x) :: (LinFun Q SBasis ABasis, LinFun Q SBasis BBasis, Vect Q SBasis)
```

```
> quickCheck prop_ProductQn
+++ OK, passed 100 tests.
```

As we did with lists, we can ask again: which is the correct definition of product, (Vect k a, Vect k b) or Vect k (Either a b)? Well, in this case it turns out that they are equivalent to one another, via the mutually inverse isomorphisms:

```
\(va,vb) -> fmap Left va <+> fmap Right vb
\v -> (p1 v, p2 v)
```

Unlike in the list case, these are both vector space morphisms (linear functions). Why the difference? Why does it work out for vector spaces when it didn't for lists? Well, I think it's basically because vector spaces are commutative. (It is also the case that vector spaces are always free on a basis. So since we have an obvious bijection between the bases of (Vect k a, Vect k b) and Vect k (Either a b), we must have an isomorphism between the vector spaces.)

Now, we're left with a little puzzle. We have found that both the product and the coproduct of two vector spaces is Vect k (Either a b). So we still haven't figured out what Vect k (a,b) represents.
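For readers without the blog's library to hand, the mutual-inverse claim can be checked with a toy stand-in for Vect k b: a sorted association list from basis elements to coefficients. This representation is an assumption for illustration only, not the library's actual implementation:

```haskell
import Data.List (sort)

-- Toy stand-in for Vect k b: sorted (basis element, coefficient) pairs
type V b = [(b, Rational)]

toEither :: (Ord a, Ord b) => (V a, V b) -> V (Either a b)
toEither (va, vb) =
  sort ([(Left x, k) | (x,k) <- va] ++ [(Right y, k) | (y,k) <- vb])

fromEither :: (Ord a, Ord b) => V (Either a b) -> (V a, V b)
fromEither v = ( [ (x,k) | (Left x,  k) <- v ]
               , [ (y,k) | (Right y, k) <- v ] )

main :: IO ()
main = do
  let va = [(1,2),(2,3)] :: V Int
      vb = [('x',5)]     :: V Char
  print (fromEither (toEither (va,vb)) == (va,vb))  -- True: mutually inverse
```

Round-tripping recovers the original pair exactly, mirroring the isomorphism between (Vect k a, Vect k b) and Vect k (Either a b).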
https://www.kinginvestigation.net/uo3l0ss/relational-algebra-is-equivalent-to-sql-80356a
Hence, for the given relational algebra projection on R × S, the equivalent SQL queries are both (a) and (c). The queries in options (b) and (d), for example (d) `SELECT A, R.B, C, D FROM R, S WHERE R.B = S.B;`, are operations involving a join condition.

SQL queries are translated into equivalent relational algebra expressions before optimization: a query is first decomposed into smaller query blocks, and these blocks are translated to equivalent relational algebra expressions.

PROJECT operator properties: $\pi_L(R)$ is defined only when $L \subseteq attr(R)$. Projections cascade, $\pi_{L_1}(\pi_{L_2}(R)) = \pi_{L_1}(R)$ whenever $L_1 \subseteq L_2$, and a projection commutes with a selection, $\pi_L(\sigma_C(R)) = \sigma_C(\pi_L(R))$, as long as all attributes used by $C$ are in $L$. The degree of the result is the number of attributes in the projected attribute list.

Note: To prove that SQL is relationally complete, you need to show that for every expression of the relational algebra, there exists a semantically equivalent expression in SQL. We say that $Q_1 \equiv Q_2$ if and only if the two queries produce the same bag of tuples on every valid input.

Example: `SELECT R.A, T.E FROM R, S, T WHERE R.B = S.B AND S.C < 5 AND S.D = T.D`

General query optimizers build on two mathematical query languages that form the basis for "real" languages (e.g. SQL) and for implementation: Relational Algebra (more operational, very useful for representing execution plans) and Relational Calculus (lets users describe what they want, rather than how to compute it). In practice, SQL is the query language that is used in most commercial RDBMSs; note the need for the SQL keyword DISTINCT when duplicates must be removed.

The optimizer's input is a dumb translation of SQL to RA. Such a naive translation can be wrong: indeed, faculty members who teach no class will not occur in the output of $E_4$, while they will occur in the output of the original SQL query.

Let's say that you are using relational algebra with a LIKE binary operation defined for string operands.
This is an introduction and only covers the algebra needed to represent SQL queries:

- select, project, rename
- Cartesian product
- joins (natural, condition, outer)
- set operations (union, intersection, difference)

Relational algebra treats relations as sets: duplicates are removed. The operators accept relations as their input and yield relations as their output. (That is, the answer is some operation between two relations, not some sort of filter.) Relational algebra is performed recursively on a relation, and intermediate results are also considered relations. These two queries are equivalent to a SELECTION operation in relational algebra with a JOIN condition, or a PROJECTION operation with a JOIN condition.

RELATIONAL ALGEBRA is a widely used procedural query language. Two relational-algebra expressions are equivalent if both the expressions produce the same set of tuples on each legal database instance. For example:

$$\pi_{A}(R \bowtie_c S) \equiv (\pi_{A_R}(R)) \bowtie_c (\pi_{A_S}(S))$$

$$\pi_A(\sigma_c(R)) \equiv \pi_A(\sigma_c(\pi_{(A \cup cols(c))}(R)))$$

(valid as long as the inner projection keeps every column that $c$ references).

Translating SQL queries into relational algebra: from "Relational algebra and query execution", CSE 444, summer 2010, section 7 worksheet, August 5, 2010. Relational algebra warm-up: given this database schema

    Product (pid, name, price)
    Purchase (pid, cid, store)
    Customer (cid, name, city)

draw the logical query plan for each of the following SQL queries.
Some standard equivalences (cascades, associativity, and pushdowns):

- $\sigma_{c_1 \wedge c_2}(R) \equiv \sigma_{c_1}(\sigma_{c_2}(R))$
- $\pi_{A}(R) \equiv \pi_{A}(\pi_{A \cup B}(R))$
- $R \times (S \times T) \equiv (R \times S) \times T$
- $R \cup (S \cup T) \equiv (R \cup S) \cup T$
- $R \cap (S \cap T) \equiv (R \cap S) \cap T$
- $\pi_{A}(\sigma_{c}(R)) \equiv \sigma_{c}(\pi_{A}(R))$, provided $A$ includes all columns referenced by $c$
- $\sigma_c(R \times S) \equiv (\sigma_{c}(R)) \times S$, provided $c$ references only columns of $R$
- $\pi_A(R \times S) \equiv (\pi_{A_R}(R)) \times (\pi_{A_S}(S))$, where $A_R = A \cap cols(R)$ and $A_S = A \cap cols(S)$
- $\sigma_c(R \cup S) \equiv (\sigma_c(R)) \cup (\sigma_c(S))$
- $\sigma_c(R \cap S) \equiv (\sigma_c(R)) \cap (\sigma_c(S))$
- $\pi_A(R \cup S) \equiv (\pi_A(R)) \cup (\pi_A(S))$
- $R \times (S \cup T) \equiv (R \times S) \cup (R \times T)$

Beware that projection does not distribute over intersection in general: $\pi_A(R \cap S) \not\equiv (\pi_A(R)) \cap (\pi_A(S))$.

A rewrite-based optimizer then proceeds as follows:

- apply blind heuristics (e.g., push down selections);
- choose the join/union evaluation order (using commutativity, associativity, distributivity);
- choose algorithms for joins, aggregates, sort, distinct, and others;
- pick the execution plan with the lowest cost.

Relational algebra, an offshoot of first-order logic (and of the algebra of sets), deals with a set of finitary relations (see also relation (database)) which is closed under certain operators.
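Equivalences like these can be spot-checked by modelling relations as Haskell lists of tuples. A throwaway sketch (the relation contents and predicates are invented for illustration):

```haskell
import Data.List (sort, union)

type Rel = [(Int, Int)]

-- Selection is just filtering by a predicate
sigma :: ((Int,Int) -> Bool) -> Rel -> Rel
sigma = filter

r, s :: Rel
r = [(1,10),(2,20),(3,30)]
s = [(2,20),(4,40)]

c1, c2 :: (Int,Int) -> Bool
c1 (a,_) = a > 1
c2 (_,b) = b < 40

main :: IO ()
main = do
  -- sigma_{c1 AND c2}(R) == sigma_{c1}(sigma_{c2}(R))
  print (sigma (\t -> c1 t && c2 t) r == sigma c1 (sigma c2 r))
  -- sigma_c(R union S) == sigma_c(R) union sigma_c(S)
  print (sort (sigma c1 (r `union` s)) == sort (sigma c1 r `union` sigma c1 s))
```

Both checks print True; sorting makes the comparison insensitive to tuple order, matching set semantics.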
The query `SELECT * FROM R, S WHERE R.B = S.B;` is equivalent to $\sigma_{R.B = S.B}(R \times S)$, and the query `SELECT A, R.B, C, D FROM R, S WHERE R.B = S.B;` is equivalent to $\pi_{A, R.B, C, D}(\sigma_{R.B = S.B}(R \times S))$.

The relational calculus allows you to say the same thing in a declarative way: "All items such that the stock is not zero." Queries over relational databases often likewise return tabular data represented as relations.

Output of the rewrite phase: a better, but equivalent, query. Which rewrite rules should we apply? Some rewrites are situational: we need more information to decide when to apply them.

Operations in relational algebra have counterparts in SQL. Basically, there is no such thing in relational algebra. Relational algebra eases the task of reasoning about queries.

An equivalent expression: $\sigma_{DEPT\_ID = 10}(\pi_{EMP\_ID,\, DEPT\_NAME,\, DEPT\_ID}(EMP \bowtie DEPT))$. The relational algebra expression above, and its query tree, show how the DBMS depicts the query internally.

(In the projection-over-product rule, $A_R$ and $A_S$ are the columns of $A$ from $R$ and $S$ respectively.)

– shibormot Mar 7 '13 at 12:46

An operator can be either unary or binary.

SQL: `SELECT DISTINCT <attribute list> FROM R`. Note the need for DISTINCT in SQL.

$$\sigma_{R.B = S.B \wedge R.A > 3}(R \times S) \equiv (\sigma_{R.A > 3}(R)) \bowtie_{B} S$$
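The selection-pushdown equivalence above can be sanity-checked on toy data, with relations modelled as Haskell lists (the column values are invented):

```haskell
import Data.List (sort)

type R = (Int, Int)  -- (A, B)
type S = (Int, Int)  -- (B, C)

r :: [R]
r = [(1,10),(4,10),(5,20)]

s :: [S]
s = [(10,7),(20,8),(30,9)]

-- sigma_{R.B = S.B and R.A > 3}(R x S)
lhs :: [(R,S)]
lhs = [ (x,y) | x <- r, y <- s, snd x == fst y, fst x > 3 ]

-- (sigma_{R.A > 3}(R)) joined with S on B
rhs :: [(R,S)]
rhs = [ (x,y) | x <- filter ((> 3) . fst) r, y <- s, snd x == fst y ]

main :: IO ()
main = print (sort lhs == sort rhs)  -- True: the selection pushes below the join
```

Pushing the selection on R.A below the join shrinks the intermediate result before the (potentially expensive) join, which is why optimizers apply this rewrite eagerly.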
Relational algebra equivalent of SQL "NOT IN": in relational algebra, you can do this using a Cartesian product (combined with renaming and set difference).

The fundamental operations of relational algebra are as follows: select, project, union, set difference, Cartesian product, and rename. SQL is actually based both on the relational algebra and the relational calculus, an alternative way to specify queries.

In the projection-over-product rule, $A_R = A \cap cols(R)$ and $A_S = A \cap cols(S)$.

The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Query plans are not written in SQL, but using relational algebra, as a graph or tree. Relational algebra collects instances of relations as input and gives occurrences of relations as output.

Translation from SQL into the relational algebra, solution (continued): the translation is not equivalent to the original SQL query!

The relational algebra is very important for several reasons: first, it provides a formal foundation for relational model operations.

Question: On two relations R(A, B) and S(B, C), write out an equivalent, minimal SQL query that accomplishes the same thing as the relational algebra expression below.

I am somewhat aware of the correspondence between (tuple and domain) relational calculus, relational algebra, and SQL. Is there a relational algebra equivalent of the SQL expression `R WHERE ... [NOT] IN S`? To see why, let's first tidy up the SQL solution given.

Relational algebra is a part of computer science.
To translate a query with subqueries into the relational algebra, it seems a logical strategy to work by recursion: first translate the subqueries and then combine the translated results into a translation for the entire SQL statement. In terms of relational algebra, we use a selection ($\sigma$) to filter rows with the appropriate predicate, and a projection ($\pi$) to get the desired columns.

We say that $Q_1 \equiv Q_2$ if and only if we can guarantee that the bag of tuples produced by $Q_1(R, S, T, \ldots)$ is the same as the bag of tuples produced by $Q_2(R, S, T, \ldots)$ for any combination of valid inputs $R, S, T, \ldots$ (Copyright © exploredatabase.com 2020.)

An SQL query is first translated into an equivalent extended relational algebra expression, represented as a query tree data structure, that is then optimized. Translating SQL to an RA expression is the second step in the query processing pipeline. 1. Input: logical query plan, an expression in extended relational algebra. 2. Output: optimized logical query plan, also in relational algebra.

To extend shibormot's comment: these operators operate on one or more relations to yield a relation.

In relational algebra, there is a division operator, which has no direct equivalent in SQL. SQL itself is not particularly difficult to grasp, yet compared to relational algebra, the division operation is much more complex to express.
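Division itself is easy to state operationally. A list-based Haskell sketch (the relation names and contents are invented for illustration):

```haskell
import Data.List (nub)

-- r divided by s: the x-values that are paired in r with every y-value in s
divide :: (Eq a, Eq b) => [(a,b)] -> [b] -> [a]
divide r s = [ x | x <- nub (map fst r)
                 , all (\y -> (x,y) `elem` r) s ]

main :: IO ()
main = do
  let taken  = [("ann","db"),("ann","pl"),("bob","db")]
      wanted = ["db","pl"]
  print (divide taken wanted)  -- ["ann"]: only ann took every wanted course
```

In SQL this "for all" shape usually has to be rewritten with double negation (NOT EXISTS of a NOT EXISTS), which is why division is considered the awkward one.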
Show that

$$R \bowtie_{c} S \equiv S \bowtie_{c} R$$

Relational algebra is a procedural query language, which takes instances of relations as input and yields instances of relations as output. (T. M. Murali, August 30, 2010, CS4604: SQL and Relational Algebra.)

To the best of my understanding, one should be able to automatically convert a formula in relational calculus to an SQL query whose run on a database produces rows that make the original formula satisfiable.

Show that

$$\sigma_{c_1}(\sigma_{c_2}(R)) \equiv \sigma_{c_2}(\sigma_{c_1}(R))$$

On two relations R(A, B) and S(B, C), write out an equivalent, minimal SQL query that accomplishes the same thing as the relational algebra expression below.
The relational algebra calculator helps you learn relational algebra (RelAlg) by executing it.

As shown, the "NOT IN" query is looking for attribute A1 not in a relation with a single attribute A2. Something like: R - ρa1,a2(πa11,a21(σA11 = A22(ρa11,a21(R) x ρa12, …

Show that

$$R \times (S \times T) \equiv T \times (S \times R)$$

Relational databases store tabular data represented as relations. Tidying up the SQL solution gives: `SELECT DISTINCT Student FROM Taken WHERE Course = 'Databases' OR Course = 'Programming Languages';` If we want to be slightly more general, we can use a sub-query.

To process a query, a DBMS translates SQL into a notation similar to relational algebra. Relational algebra is not a full-blown SQL language, but rather a way to gain theoretical understanding of relational processing. Equivalent expressions always produce the same result, but the cost of evaluating them may vary.

The answer is yes: it is the (natural) JOIN, aka the bowtie operator ⋈.

Show that

$$\sigma_{R.B = S.B \wedge R.A > 3}(R \times S) \equiv \sigma_{R.A > 3}(R \bowtie_{B} S)$$

(In the rule $\pi_{A}(\sigma_{c}(R)) \equiv \sigma_{c}(\pi_{A}(R))$, $A$ and $c$ must be compatible: $A$ must include all columns referenced by $c$, $cols(c)$.)
http://math.stackexchange.com/tags/reference-request/new
# Tag Info

0  Stroud would be an excellent choice, especially since it has answers to problems in the back of the book (remember: you only really learn math by doing it!). I also see that it has a section on statistics, so that may be all you need. If not, I think a good book that is tailored to your needs is something like An Introduction to Medical Statistics. Do a ...

5  According to Ian Stewart, the symbol "!" was introduced because of printability. Before 1808, $\underline{n\big|} = n \cdot (n-1) \cdots 3 \cdot 2$ was [widely?] used to denote the factorial. Because it was hard to print [in non-computer ages], the French mathematician Christian Kramp chose "!". Source: Professor Stewart's Hoard of Mathematical Treasures.

2  The right way to build such a category is a philosophical question. There are different approaches in the mathematical literature. One thing is clear, though: the objects should be propositions, not just theorems. The problem is to define equality of proofs in a sensible way. For example, let $\Pi$ be Pythagoras' theorem. Should each of the over 100 proofs ...

3  See for example Lambek and Scott, Introduction to Higher Order Categorical Logic, ch. 0.1 (unfortunately not on the net, but surely in your university library). There, a graph is defined first, then a deductive system as a graph with: for each object $A$ an identity arrow $1_A:A\rightarrow A$; for each pair of arrows $f:A\rightarrow B$ and $g:B\rightarrow C$ ...

1  Yes, currently the problem is open whether or not the Partition Principle and the Axiom of Choice are equivalent. There are two major factors for this (in my opinion): many people have become less interested in choiceless results, so while they might be very happy to hear about them, they prefer to put their research efforts towards other directions. The ...

0  These subjects are not all contained in what most people would think of as abstract algebra... I don't think it's really possible to fulfil your "one book only" request.
Groups and rings are definitely abstract algebra. Any introductory abstract algebra book will cover them. A Book of Abstract Algebra is quite good for a first pass, but not as comprehensive ...

0  I can't confirm that every single topic above is included, but Serge Lang's Algebra is an extremely comprehensive tome for all algebra topics up to and including first-year graduate-level algebra. It's not exactly readable, but it has a ton of exercises and gives you all the logical steps you need to explore these topics.

0  p-regular languages are commonly known as (regular) group languages in the literature, since their syntactic monoid is a finite group. If a language is accepted by a permutation automaton, then its minimal DFA is also a permutation automaton, and its group is transitive (since every state is accessible from the initial state). Thus your subclass is actually ...

2  Stewart's Calculus will prepare you for the calculus questions in the GRE, while Rudin's Principles of Mathematical Analysis will prepare you for the introductory real analysis questions in it.

1  Another definition of the Laplacian matrix is $M$ (or $L$) $= QQ^t$, where $Q$ is the incidence matrix. By the Cauchy-Binet theorem, one can calculate the determinant of a rectangular matrix by considering $Q$ and $Q^t$. So, if we take a cofactor, then it corresponds to choosing the twigs of a tree, and since the determinant of a matrix containing (#nodes - 1) twigs of a tree is +/- 1, multiplying ...

1  You might be interested in the books Algebra and Trigonometry by Gelfand. Also, it's not dry or old, but the Precalculus textbook from artofproblemsolving.com won't spoonfeed, at least. From the book description: it includes nearly 1000 problems, ranging from routine exercises to extremely challenging problems drawn from major mathematics ...

1  Yes, I hear your point. Most books released these days look to spoon-feed, but that does not apply to all new books. I mean Spivak's books and Chapman Pugh's text on analysis are examples.
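The cofactor claim in the Laplacian answer can be checked on the smallest interesting graph, the triangle K3, which has exactly 3 spanning trees. A hand-rolled sketch (the graph choice and the 2x2 determinant shortcut are mine):

```haskell
-- Laplacian of the triangle graph K3:
-- degree 2 on the diagonal, -1 for each edge
lap :: [[Int]]
lap = [ [ 2,-1,-1]
      , [-1, 2,-1]
      , [-1,-1, 2] ]

-- Cofactor: delete row 0 and column 0, then take the 2x2 determinant
cofactor :: [[Int]] -> Int
cofactor m = a*d - b*c
  where [[a,b],[c,d]] = [ tail row | row <- tail m ]

main :: IO ()
main = print (cofactor lap)  -- 3, the number of spanning trees of K3
```

This is the matrix-tree theorem in miniature: any cofactor of the Laplacian counts the spanning trees.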
Now these are the books I perused during A Levels. I only got my hands on them because the government sells them for dirt cheap prices (I mean for less than 20 cents US). ... 0 See this section from Computational Number Theory by Abhijit Das, 1.7.2. "We do not know any efficient algorithm for computing ${\rm ord}_m a$ unless the complete prime factorization of $\phi(m)$ is provided." He gives a pretty easy algorithm which I used as the basis for my C code. Note that you can use the Carmichael lambda instead of the totient, which ... 0 It appears that they are one and the same. It is known that $$\eta_1 = -\frac{\pi^2}{3\omega_1} \frac{\theta_1{'''}(0)}{\theta_1{'}(0)} = \frac{\pi^2}{3\omega_1}\left(1 - 24\sum_{n = 1}^\infty \frac{q^{2n}}{(1 - q^{2n})^2}\right),$$ where $\eta_1 = Z(\omega_1)$ is a quasi-period of $Z(z)$, the Weierstrass zeta function, and $\omega_1$ is a period of the ... 0 'Mathematical Foundations of Elasticity', written by Marsden, for any person who hopes to become an expert in strength of materials, explicitly states on its back cover that it will teach the functional analysis from the start. First published in 1983, then reprinted by Dover Publications at a low price. A 1.5-inch-thick, yet not esoteric, textbook. 0 One cannot be realistic, honest to himself, in approaching Folland's manual on functional analysis with merely three or four light-weight calculus courses, two college-level linear algebra courses and one ordinary differential equations course for biologists. Which profession are you aiming at, and at which university level? Folland's book is too deep (and too ... 0 I'm not an expert, but I think there's a "baby-step giant-step" method, when $n$ is too large for brute force to be effective. If you know the factorization of $\phi(n)$ already, though, then you can find the order by calculating $a^k$ modulo $n$ for increasingly small divisors $k$ of $\phi(n)$. For example, take $a=342$ and $n=803=73\cdot11$, so that ...
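The divisor-shrinking idea in the last answer above can be sketched as follows. This is a rough Python illustration of the standard algorithm, assuming the prime factorization of $\phi(n)$ is given; the function name and structure are mine, not from the cited book:

```python
from math import gcd

def multiplicative_order(a, n, phi_factors):
    """Order of a mod n, given the prime factorization of phi(n) as {p: e}."""
    if gcd(a, n) != 1:
        raise ValueError("a must be coprime to n")
    # start from k = phi(n) ...
    k = 1
    for p, e in phi_factors.items():
        k *= p ** e
    # ... and strip each prime p while a^(k/p) is still 1 (mod n),
    # which leaves the smallest k with a^k = 1 (mod n)
    for p in phi_factors:
        while k % p == 0 and pow(a, k // p, n) == 1:
            k //= p
    return k

# e.g. n = 803 = 73 * 11, so phi(803) = 72 * 10 = 720 = 2^4 * 3^2 * 5
order = multiplicative_order(342, 803, {2: 4, 3: 2, 5: 1})
```

Starting from $k=\phi(n)$ and repeatedly removing prime factors while $a^{k/p}\equiv 1 \pmod n$ still holds guarantees that the smallest valid exponent is reached.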
1 Constructive proof: Consider some nonzero substochastic matrix $P$ with equal-row-sums. Call $P_{is(i)}$ one of the minimal nonzero entries of row $i$, and consider the deterministic matrix $D$ such that $D_{is(i)}=1$ for every $i$ and $D_{ij}=0$ otherwise. Finally, define $m(P)=\min\{P_{ij}\mid P_{ij}\ne0\}$. Then $P-m(P)D$ is a substochastic matrix with ... 1 I found this a valuable resource: A Survey of Arithmetical Definability by Alexis Bès 4 Euclid's "The Elements". Greek. Old. It doesn't get much more classic than that. As well as his other writings. http://en.wikipedia.org/wiki/Euclid#Other_works Also, the Principia Mathematica by Newton. As well as his other writings. http://en.wikipedia.org/wiki/Isaac_Newton#Mathematics Archimedes probably deserves a mention as well. You know, pi ... 3 The founder of the Institute for Advanced Study, A. Flexner, once published a paper called "The usefulness of useless knowledge" http://library.ias.edu/files/UsefulnessHarpers.pdf. This paper justifies all theoretical sciences. I get a kick out of reading the article. I think about another one, "The Spirit and the Uses of the Mathematical Sciences", ... 2 If you want to do applied math without theory, then respectfully, you shouldn't go into applied math. Even applied mathematicians care about where things come from and how to justify them, so you won't be able to avoid proofs and theorems. With that said, a few of my favorite resources are as follows: Finite Difference Methods for Ordinary and Partial ... 2 Not exactly what you were looking for (because these are "secondary sources"), but maybe interesting nevertheless: Mathematics and Its History by Stillwell walks you through the history of mathematics showing original problems in modern notation with many good exercises at an undergraduate level and with lots of pointers to the original sources. Euler - ... 1 Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter. 
Although not strictly speaking a purely mathematical book, surely I would put it among the classics. 1 If you want to learn algebraic geometry, a classical paper is Jean-Pierre Serre's FAC. See here for the French original, and here for the English translation. See here for some advertisement by Georges Elencwajg. 7 Three books: Euler's Introduction to the Analysis of the Infinite and Foundations of the Differential Calculus, both translated by JD Blanton and published by Springer, and also the very informative Analysis by Its History by Hairer and Wanner. There are always the original papers by the biggies, which are more often than not very interesting, illuminating and ... 4 There are several Source Books that have made nice selections for you to pick from, e.g., Smith, Struik, Fauvel and Gray, Stedall. But for an extended read, you can't go wrong by immersing yourself in Gauss' Disquisitiones. 5 For introductory Number Theory, you could go with Gauss, Disquisitiones Arithmeticae. Don't worry, you don't have to read Latin, it is available in English and other living languages. 2 Today was the first day of class in my complex analysis course. I sometimes attempt new derivations in real time to keep it fresh. Today, we got to the point of asking what was the reciprocal of $z=x+iy$. We said, let $w=a+ib$ and seek solutions of $wz=1$. This gives: $$wz = (a+ib)(x+iy) = ax-by+i(bx+ay) = 1+i(0).$$ Equating real and imaginary parts ... 5 This paper, called Physics, Topology, Logic and Computation: A Rosetta Stone, does just that in section 3.2. If you have time and interest, I would suggest reading the entire paper (since the whole thing is pretty cool). 1 The first component of $\mathcal{A}$ corresponds to the trivial representation, the second component, to the sign representation, and the third, to the standard representation (see here).
By choosing a specific matrix basis, we obtain: identity $\to$ $(1) \oplus (1) \oplus \begin{pmatrix} 1& 0 \newline 0& 1 \end{pmatrix}$, $(123)\to$ $(1) ... 0 I have found the following published reference: Elementary proof of Zsigmondy's theorem by M. Teleuca, which also cleans up the proof of Birkhoff & Vandiver. 1 It is an application of the Courant-Fischer theorem (or min-max theorem), and here there is a problem on this subject with its answer. 1 I have heard the name "Cauchy interlacing theorem" (as well as "Cauchy's interlace theorem" or "Interlacing eigenvalues theorem for bordered matrices") for that theorem. That could be a hint that Cauchy did it first or was among the first. Sorry, I have no idea where to find an original paper for that, but I hope my post helps anyway. 1 Perhaps [1] could be of use. However, see [2] if you want a very abstract approach (an approach that is related to topics in the links I gave in my comment here). [1] Handbook of Mathematical Induction by David S. Gunderson (over 900 pages) [2] Elementary Induction on Abstract Structures by Yiannis N. Moschovakis 3 The first few chapters of G. H. Hardy's "A Course of Pure Mathematics" may be worth a read. 0 Recently, I came across the example in an Encyclopedia of Mathematics (Finite-difference calculus). This partially answers my question in a simple case, but it has some nuances that I want cleared up, so I edited the head post to include more questions. The following is the part from the encyclopedia relevant to my question. $$\varPhi(x, f(x), f(x+1), ... 2 As dry, old and rigorous as it gets: "Advanced Mathematics: Precalculus with Discrete Mathematics and Data Analysis." It's what I had in high school, although I had a modern textbook as a supplement. There might be newer versions out, but I assume you want the older ones. 1 Let $G$ be a bipartite graph with bipartition $(A,B)$.
The idea is to apply Menger's Theorem to a new graph $G'$ obtained from $G$ by adding two vertices $u$ and $v$, and joining $u$ to all vertices in $A$, and $v$ to all vertices in $B$. Now you just need to check that a matching in $G$ corresponds to a set of internally-disjoint $(u,v)$-paths in ... 3 I think Milnor's "Topology from the Differentiable Viewpoint" (accessible online) has a proof of this in the appendix. The proof is pretty simple, but uses one slightly technical lemma. I would say the theorem is easy to see since the idea is straightforward, but it requires slightly more work to prove. Here it is. See page 55. 4 We break up this answer into a few different categories which partially overlap. Non-textbook resources. Supplemental resources, which might try to give a bird's-eye view, but don't give a complete course. Introductory resources, or what might usually be used in a first year of calculus in a university, usually where proofs do not form a large ... 3 Well, if you're serious about applied mathematics (and serious in that you don't just want "recipe" books, but rather applications that build on the meaty theory background you have), then you should avoid such texts and try to locate books that don't avoid theory, but merely downplay it. Those are the "real" applied mathematics textbooks. You definitely need to ... 2 I recommend Gilbert Strang's books: 1) Introduction to Applied Math; 2) Computational Science and Engineering. 0 Q1: These are right singular vectors; see singular value decomposition. Q2: The geometric meaning of the product of the two largest singular values is the maximal amount of area increase under the map. That is, if you have a 2-dimensional plane (or surface) transformed by $T$, the area will increase by at most $\sigma_{n-1}\sigma_n$. Q3: I can't really say, ... 1 According to my calculations in Sage, there are 11 graphs (out of 156) on six vertices such that $\det(I-M)=1$, and exactly two of these eleven are trees.
This shows that it is going to be very difficult to determine information about cycles from the value of $\det(I-M)$. (There is nothing special about the value '1' here; for example, there are 35 graphs ... 0 Writing a chess engine that plays only moderately well is a considerable programming challenge. IMO, very few people have the discipline, ability and patience to actually organize and execute something like that from scratch, programming-wise. There are some open sources which you can see and read, like that of GNU Chess, Borland's Turbo Chess and several ... 0 Start here. 0 The Knight's tour is a famous chess/math problem. 0 My own version is as follows: if $k$ is the Fourier transform of a certain function from $L^1$, then $K$ belongs to the trace class. Proof. Suppose that $k=\mathcal F u$ with $u\in L^1$. Then $$k(x-y)=\int_{-\infty}^{+\infty} u(t)\exp(it(x-y))\,dt =\int_{-\infty}^{+\infty} u(t)\exp(itx)\exp(-ity)\,dt.$$ For $t\in\mathbb R$, denote by $A_t$ the ... 0 You might enjoy Piergiorgio Odifreddi, The Mathematical Century: The 30 Greatest Problems of the Last 100 Years, published by Princeton University Press in 2006. Also, Ben Yandell, The Honors Class: Hilbert's Problems and Their Solvers, published by A K Peters in 2001. Here's a link to a review. Top 50 recent answers are included
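One of the answers above mentions the Knight's tour. A compact backtracking search, ordered by Warnsdorff's heuristic (try squares with the fewest onward moves first), finds an open tour on small boards; this sketch is my own illustration, not taken from any of the cited sources:

```python
def knights_tour(n, start=(0, 0)):
    """Open knight's tour on an n x n board: backtracking search that tries
    squares with the fewest onward moves first (Warnsdorff's heuristic)."""
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    board = {start: 0}  # square -> step number

    def free_moves(sq):
        x, y = sq
        return [(x + dx, y + dy) for dx, dy in jumps
                if 0 <= x + dx < n and 0 <= y + dy < n
                and (x + dx, y + dy) not in board]

    def extend(sq, step):
        if step == n * n:
            return True
        for nxt in sorted(free_moves(sq), key=lambda s: len(free_moves(s))):
            board[nxt] = step
            if extend(nxt, step + 1):
                return True
            del board[nxt]  # backtrack
        return False

    return board if extend(start, 1) else None

tour = knights_tour(5)
```

The returned dictionary maps each square to its step number; an open tour from the corner exists on the 5x5 board, so the search succeeds there quickly.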
2014-08-23 03:46:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6749356389045715, "perplexity": 557.7065016642354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500825010.41/warc/CC-MAIN-20140820021345-00210-ip-10-180-136-8.ec2.internal.warc.gz"}
http://basilisk.fr/src/examples/sphere.c
Vortex shedding behind a sphere at Reynolds = 300 Animation of the ${\lambda }_{2}$ vortices coloured with the vorticity component aligned with the flow. We solve the Navier–Stokes equations on an adaptive octree and use embedded boundaries to define the sphere. ``````#include "grid/octree.h" #include "embed.h" #include "navier-stokes/centered.h" #include "navier-stokes/perfs.h" #include "view.h" `````` We will use the ${\lambda }_{2}$ criterion of Jeong and Hussain, 1995 for vortex detection. ``````#include "lambda2.h" `````` This is the maximum level of refinement, i.e. an equivalent maximum resolution of ${256}^{3}$. ``int maxlevel = 8;`` We need a new field to define the viscosity. ``face vector muv[];`` The domain size is ${16}^{3}$. We move the origin so that the center of the unit sphere is not too close to boundaries. ``````int main() { init_grid (64); size (16.); origin (-3, -L0/2., -L0/2.); mu = muv; run(); }`````` The viscosity is just $1/Re$, because we chose a sphere of diameter unity and a unit inflow velocity. ``````event properties (i++) { foreach_face() muv.x[] = fm.x[]/300.; }`````` The boundary conditions are inflow with unit velocity on the left-hand side and outflow on the right-hand side. ``````u.n[left] = dirichlet(1.); p[left] = neumann(0.); pf[left] = neumann(0.); u.n[right] = neumann(0.); p[right] = dirichlet(0.); pf[right] = dirichlet(0.);`````` The boundary condition is no slip on the embedded boundary. ``````u.n[embed] = dirichlet(0.); u.t[embed] = dirichlet(0.); u.r[embed] = dirichlet(0.); event init (t = 0) {`````` We initially refine only in a sphere, slightly larger than the solid sphere. `` refine (x*x + y*y + z*z < sq(0.6) && level < maxlevel);`` We define the unit sphere. `````` vertex scalar phi[]; foreach_vertex() phi[] = x*x + y*y + z*z - sq(0.5); boundary ({phi}); fractions (phi, cs, fs);`````` We set the initially horizontal velocity to unity everywhere (outside the sphere). `````` foreach() u.x[] = cs[] ? 1.
: 0.; }`````` We log the number of iterations of the multigrid solver for pressure and viscosity. ``````event logfile (i++) fprintf (stderr, "%d %g %d %d\n", i, t, mgp.i, mgu.i);`````` We use Basilisk View to create the animated isosurface of ${\lambda }_{2}$ for $30 \le t \le 60$. ``````event movies (t = 30; t += 0.25; t <= 60) {`````` Here we compute two new fields, ${\lambda }_{2}$ and the vorticity component in the $y$-$z$ plane. `````` scalar l2[], vyz[]; foreach() vyz[] = ((u.y[0,0,1] - u.y[0,0,-1]) - (u.z[0,1] - u.z[0,-1]))/(2.*Delta); boundary ({vyz}); lambda2 (u, l2); view (fov = 11.44, quat = {0.072072,0.245086,0.303106,0.918076}, tx = -0.307321, ty = 0.22653, bg = {1,1,1}, width = 802, height = 634); draw_vof ("cs", "fs"); isosurface ("l2", -0.01, color = "vyz", min = -1, max = 1, linear = true, map = cool_warm); save ("movie.mp4"); }`````` We set an adaptation criterion with an error threshold of 0.02 on all velocity components and ${10}^{-2}$ on the geometry. ``````event adapt (i++) { astats s = adapt_wavelet ({cs,u}, (double[]){1e-2,0.02,0.02,0.02}, maxlevel, 4); fprintf (ferr, "# refined %d cells, coarsened %d cells\n", s.nf, s.nc); }``````
2019-03-25 11:31:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4313245415687561, "perplexity": 10827.844051234664}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203947.59/warc/CC-MAIN-20190325112917-20190325134917-00413.warc.gz"}
http://blog.mdda.net/ai/2016/03/19/workshop-at-fossasia-2016
I recently led a two-hour workshop at FOSSASIA 2016 in Singapore. This workshop was hands-on: after a brief background on deep learning, participants started quickly, interacting with ConvNet.js models. But this on-line portion was partly to allow everyone enough time to get the ~1Gb VirtualBox "appliance" created for the event installed on their laptops. Fortunately, over 90% of the people who came already had VirtualBox installed, which was a huge relief. Once everyone was up to speed tools-wise, the workshop then progressed through a series of Jupyter (fka iPython) notebooks, ranging from Theano basics, through MNIST, to ImageNet networks (pretrained models of both GoogLeNet and Inception-v3 were included in the VM). Then, for the last half-hour, we went over two interesting applications: one with a 'commercial' angle (transfer learning), the other 'art' (using style transfer). Naturally, this being a FOSS event, all the source is available on GitHub - if you have questions on the software, please leave an 'issue' there. PS: And if you liked the workshop, please 'star' the Deep Learning Workshop repo. If there are any questions about the workshop, please ask below, or contact me using the details given on the slides themselves.
2019-04-22 16:04:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24391327798366547, "perplexity": 5086.525385981542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578558125.45/warc/CC-MAIN-20190422155337-20190422181337-00360.warc.gz"}
https://www.gamedev.net/forums/topic/331126-c-enumeration/
# C++ enumeration ## Recommended Posts Hey, I'm currently trying to learn C++ with the help of 2 books, 1 being 'Beginning C++ Game Programming', and it sets you 1 or 2 tasks at the end of each chapter. I'm at the end of chapter 2 and it's set a task of rewriting a menu chooser program contained in chapter 1 using enumeration to represent difficulty levels. The problem is both books I have only touch on enumeration, and use it like so:

#include <iostream>
using namespace std;

int main()
{
    const int ALIEN_POINTS = 150;
    int aliensKilled = 10;
    int score = aliensKilled * ALIEN_POINTS;
    cout << "Score: " << score << endl;

    enum difficulty {NOVICE, EASY, NORMAL, HARD, UNBEATABLE};
    difficulty myDifficulty = EASY;

    enum ship {FIGHTER = 25, BOMBER, CRUISER = 50, DESTROYER = 100};
    ship myShip = BOMBER;

    cout << "\nTo upgrade my ship to a cruiser will cost: " << (CRUISER - myShip) << " Resource Points.\n";

    cin.ignore(cin.rdbuf()->in_avail() + 1);
    return 0;
}

So how do I turn the enumeration into a selectable menu and not one that's already predefined? I don't really want the code as an answer, just pointing in the right direction please. Thanks for any help. ##### Share on other sites Well, I don't know about the menu not being predefined; that's kind of the point of enumerations, right? But for the menu, you would display something like:

Please select a difficulty level:
1) Easy
2) Medium
3) Hard

So then the user inputs a number corresponding with the difficulty they want. So say the user input 1. You would then compare that number with your different enumerations and find that Easy is == 1 (assuming you added that to your enum). If this seems like what you are looking for but you don't quite understand, let me know and I'll put up some code.
Hope that helps, Matt ##### Share on other sites Since you have to rewrite the program from the first chapter, those who don't have the book probably won't be much help unless you can give the source for the program you have to rewrite. ##### Share on other sites Yeah, it is that sort of menu, and sorry for confusing you about the menu being predefined; it was just that in the above code it was set at one level and not selectable. Anyway, what I want to know is how you take the user's input and compare it to the enumerations I have set for the menu. Thanks. Below is the code of the menu I have to rewrite.

#include <iostream>
using namespace std;

int main()
{
    cout << "Difficulty Levels>\n\n";
    cout << "1 - Easy\n";
    cout << "2 - Normal\n";
    cout << "3 - Hard\n";

    int choice;
    cout << "Choice: ";
    cin >> choice;

    switch (choice)
    {
    case 1:
        cout << "You picked easy.\n";
        break;
    case 2:
        cout << "You picked normal.\n";
        break;
    case 3:
        cout << "You picked hard.\n";
        break;
    default:
        cout << "You made an illegal choice!\n";
    }

    return 0;
}

[Edited by - dek001 on July 9, 2005 5:04:33 PM] ##### Share on other sites Here's a basic menu setup. You should be able to figure things out from here:

// Needed for cin and cout
#include <iostream>
using namespace std;

// Our enumerations
enum { DX = 1, OGL, ANY, NUM_OPTIONS };

// Entry point for the program
int main(int argc, char* argv[])
{
    // Display our menu
    cout << "Please select an option:\n"
         << "1) Direct3D\n"
         << "2) OpenGL\n"
         << "3) Doesn't matter. They're both good and anyone who argues different is a moron\n\n"
         << "Selection: ";

    // This will store our selection
    int selection = -1;

    // Get the user's choice
    cin >> selection;

    // Now check what they selected
    switch (selection)
    {
    case DX:
    {
        // They selected Direct3D
        cout << "\nDirect3D it is!\n\n";
        break;
    }
    case OGL:
    {
        // They selected OpenGL
        cout << "\nOpenGL it is!\n\n";
        break;
    }
    case ANY:
    {
        // They decided each is cool in its own way
        cout << "\nIndifference it is!\n\n";
        break;
    }
    }

    return 0;
}

Update: So yeah, it looks from your last post like all you're missing is that the enums basically become the numbers you enumerate them to. For example, in my example DX is enumerated to the number 1. Therefore I can use DX as if it were a regular constant (i.e. 1, -4, 3986, 69.01). Matt Hughson ##### Share on other sites Yep, that helped a lot, got it sorted now. I wasn't sure on enumerations, as the book only uses them once or twice and not for the menu-type system it was asking for. Thanks very much for your help. ##### Share on other sites Note: an enumeration does not "know" the enumeration names at run-time; they are for the programmer's convenience. To simplify things (so that you have all your data in one place, and to avoid repeating the names throughout the code) you should consider making a table of the names that match the enumerations as well:

enum difficulty {NOVICE, EASY, NORMAL, HARD, UNBEATABLE};

// I can use string literals here because these are constants
const char* difficultyNames[] = {"Novice", "Easy", "Normal", "Hard", "Unbeatable"};

// This works because we have an array of pointers, not a pointer to the pointers.
// It will keep us from having to keep the count in sync ourselves if we change
// the number of difficulty levels.
const int difficultyCount = sizeof(difficultyNames) / sizeof(const char*);

// Outputting the menu, for example:
for (int i = 0; i < difficultyCount; i++) { // for each item
    // We want to output (the item number + 1) first, as a label (i.e. start counting at 1)
    // and then a close paren, and then the name.
    std::cout << (i+1) << ") " << difficultyNames[i] << std::endl;
}
2018-09-23 00:31:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18955762684345245, "perplexity": 4188.381629333574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158766.65/warc/CC-MAIN-20180923000827-20180923021227-00258.warc.gz"}
https://bkms.kms.or.kr/journal/view.html?doi=10.4134/BKMS.2009.46.2.373
Boolean regular matrices and their strongly preservers Bull. Korean Math. Soc. 2009 Vol. 46, No. 2, 373-385 https://doi.org/10.4134/BKMS.2009.46.2.373 Printed March 1, 2009 Seok-Zun Song, Kyung-Tae Kang, and Mun-Hwan Kang Jeju National University Abstract : An $m\times n$ Boolean matrix $A$ is called regular if there exists an $n\times m$ Boolean matrix $X$ such that $AXA=A$. We have characterizations of Boolean regular matrices. We also determine the linear operators that strongly preserve Boolean regular matrices. Keywords : Boolean algebra, generalized inverse of a matrix, regular matrix, $(U,V)$-operator MSC numbers : 15A04, 15A09 Downloads: Full-text PDF
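For small matrices, the defining condition $AXA=A$ from the abstract can be checked directly by exhaustive search over all Boolean candidates $X$. The following sketch is my own illustration, not from the paper; it uses Boolean matrix multiplication, i.e. OR of ANDs:

```python
from itertools import product

def bool_mult(A, B):
    """Boolean matrix product: entry (i, j) is OR over k of (A[i][k] AND B[k][j])."""
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_regular(A):
    """True if some n x m Boolean X satisfies A X A = A (exhaustive search)."""
    m, n = len(A), len(A[0])
    for bits in product([0, 1], repeat=n * m):
        X = [list(bits[i * m:(i + 1) * m]) for i in range(n)]
        if bool_mult(bool_mult(A, X), A) == A:
            return True
    return False
```

The search is exponential in $mn$, so this is only a sanity check for tiny examples; note for instance that any idempotent Boolean matrix is regular, since $X=A$ works.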
2021-04-13 01:40:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46608009934425354, "perplexity": 4783.973798516735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038071212.27/warc/CC-MAIN-20210413000853-20210413030853-00348.warc.gz"}
https://lists.gnu.org/archive/html/lilypond-devel/2005-06/msg00259.html
lilypond-devel
problems with lilypond 2.5.31 under WinME From: Brynne and Russ Jorgensen Subject: problems with lilypond 2.5.31 under WinME Date: Sun, 26 Jun 2005 14:23:07 -0600 User-agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.6) Gecko/20040113 When I run lilypond from the desktop, it does not start up lilypad with the Welcome_to_LilyPond.ly file. However, if I start a command shell, cd to the LilyPond usr\bin directory and then run lilypond, it DOES start up lilypad. I thought it was a problem quoting spaces in the install directory, so I uninstalled and re-installed in a directory without spaces, and it's still not working. I dug a little deeper, and I think now that the problem is that lilypond.exe is not affecting the environment at all. I put the following batch file in c:\windows to see what lilypond.exe does to the environment: ``` @echo off set > d:\tmp\debug.txt ``` and after I run lilypond from the desktop, the d:\tmp\debug.txt contents are as follows: ```
COMSPEC=C:\WINDOWS\COMMAND.COM
PATH=C:\WINDOWS;C:\WINDOWS\COMMAND;D:\CYGWIN\HOME\RBJ\VIM\VIM61;D:\CYGWIN\BIN;d:\PGP;d:\infozip\unz550;d:\infozip\zip23;D:\PROGRA~1\VPN603
PROMPT=$p$g
TEMP=D:\TMP
TMP=D:\TMP
VIM=D:\cygwin\home\rbj\vim
VIMINIT=source D:\cygwin\home\rbj\.vimrc
TDIR=D:\TMP
winbootdir=C:\WINDOWS
windir=C:\WINDOWS
BLASTER=A220 I2 D3 T4
``` As you can see, PATH does not contain lilypond\usr\bin, and the various GS_FONTPATH, GS_LIB, GUILE_LOAD_PATH and PANGO_RC_FILE environment variables aren't set at all! It looks like setup_paths() is supposed to be doing this, but it doesn't appear to be working. Therefore, whenever lilypond tries to execute a program that it should find in its path (lilypad.exe and more importantly also gs.exe), the program isn't found. I can't get lilypond.exe to compile, so I can't debug any deeper. Let me know if you have some things you'd like me to try... -Russ
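For what it's worth, the check the batch file performs can also be scripted portably. This is a hypothetical sketch (the variable names are taken from the post above; the function itself is mine, not part of LilyPond):

```python
import os

def check_env(env, bin_dir):
    """Return (missing expected variables, whether bin_dir is on a Windows-style PATH)."""
    # Variables the poster expected lilypond.exe's setup_paths() to define
    expected = ["GS_FONTPATH", "GS_LIB", "GUILE_LOAD_PATH", "PANGO_RC_FILE"]
    missing = [v for v in expected if v not in env]
    entries = env.get("PATH", "").split(";")  # Windows PATH separator
    on_path = any(p.rstrip("\\/").lower() == bin_dir.rstrip("\\/").lower()
                  for p in entries)
    return missing, on_path

# e.g. check_env(dict(os.environ), r"D:\LilyPond\usr\bin")
```

Running this against the environment dump above would report all four variables missing and the usr\bin directory absent from PATH, matching the poster's diagnosis.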
2019-05-25 15:50:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8418031334877014, "perplexity": 5380.784485567042}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258120.87/warc/CC-MAIN-20190525144906-20190525170906-00375.warc.gz"}
http://math.stackexchange.com/questions/335966/finding-the-derivative-of-the-following-function
Finding the derivative of the following function I came across the following problem: Given $\displaystyle f(r,\theta)=(r \cos \theta,r \sin \theta)$ for $(r,\theta) \in \mathbb R^2$ with $r \neq 0$, how can I find the value of $Df$? ($Df$ denotes the derivative of $f$.) Also, how can I check whether $\displaystyle f$ is $1-1$ on $\{(r,\theta) \in \mathbb R^2: r \neq 0\}$ or not? EDIT: I want to rephrase the first question. I have to check whether the following statement is true or false: The linear transformation $Df(r,\theta)$ is not zero for any $(r,\theta) \in \mathbb R^2$ with $r \neq 0$. Denote $$f_1(r,\; \theta)=r \cos \theta,\\ f_2(r,\; \theta)=r \sin \theta,$$ so $$f(r,\;\theta)=(r \cos \theta,\;r \sin \theta)=(f_1(r,\; \theta),\;f_2(r,\; \theta)).$$ Then $$Df(r,\;\theta)=\begin{pmatrix} \dfrac{\partial{f_1(r,\; \theta)}}{\partial{r}} & \dfrac{\partial{f_1(r,\; \theta)}}{\partial{\theta}} \\ \dfrac{\partial{f_2(r,\; \theta)}}{\partial{r}} & \dfrac{\partial{f_2(r,\; \theta)}}{\partial{\theta}} \end{pmatrix} =\begin{pmatrix} \dfrac{\partial{(r \cos \theta)}}{\partial{r}} & \dfrac{\partial{(r \cos \theta)}}{\partial{\theta}} \\ \dfrac{\partial{(r \sin \theta)}}{\partial{r}} & \dfrac{\partial{(r \sin \theta)}}{\partial{\theta}} \end{pmatrix}= \begin{pmatrix} \cos{\theta} & -r\sin{\theta} \\ \sin{\theta} & r\cos{\theta} \end{pmatrix}$$ The value of the derivative on the vector $\pmatrix{h_1\\h_2}$ equals $$Df(r,\;\theta)\pmatrix{h_1\\h_2}=\begin{pmatrix} \cos{\theta} & -r\sin{\theta} \\ \sin{\theta} & r\cos{\theta} \end{pmatrix}\pmatrix{h_1\\h_2}=\pmatrix{h_1 \cos{\theta}-h_2 r\sin{\theta} \\ h_1\sin{\theta} + h_2r\cos{\theta}}$$
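The Jacobian derived in the answer can be sanity-checked numerically with plain Python (a sketch of my own, using central differences). Its determinant works out to $r$, so $Df(r,\theta)$ is nonzero, indeed invertible, whenever $r \neq 0$; at the same time $f(r,\theta)=f(-r,\theta+\pi)$ shows $f$ is not $1-1$ on that set:

```python
import math

# f(r, theta) = (r cos(theta), r sin(theta)), as in the question
def f(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

# Jacobian matrix from the answer above
def Df(r, theta):
    return [[math.cos(theta), -r * math.sin(theta)],
            [math.sin(theta),  r * math.cos(theta)]]

def numeric_jacobian(fn, r, theta, h=1e-6):
    """Central-difference approximation, to sanity-check Df."""
    return [[(fn(r + h, theta)[i] - fn(r - h, theta)[i]) / (2 * h),
             (fn(r, theta + h)[i] - fn(r, theta - h)[i]) / (2 * h)]
            for i in range(2)]

def det_Df(r, theta):
    J = Df(r, theta)
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]  # equals r analytically

# (r, theta) and (-r, theta + pi) map to the same point, so f is not 1-1
p, q = f(2.0, 0.3), f(-2.0, 0.3 + math.pi)
```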
2015-05-24 18:02:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626059532165527, "perplexity": 69.8919979587203}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928030.83/warc/CC-MAIN-20150521113208-00187-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www3.nd.edu/~dchiang/teaching/nlp/2019/hw4.html
# CSE 40657/60657 Homework 4

Due Fri 2019/11/22 5pm
Points 30

In this assignment you will build and improve a named entity recognizer using a hidden Markov Model and a structured perceptron. You may write code in any language you choose. It should build and run on student*.cse.nd.edu, but if this is not possible, please discuss it with the instructor before submitting. You may reuse code from your previous assignments, especially HW2.

Whenever the instructions below say to "report" something, it should be reported in the PDF file that you submit.

## 1. Hidden Markov Model

1. Download the HW4 package. It contains the following files:

   train-1k      Training data (1k sentences)
   train-10k     Training data (10k sentences)
   train-all     Training data (all sentences)
   dev           Development data
   test          Test data: don't peek!
   conlleval.pl  Compute precision/recall

   The data is a subset of the OntoNotes corpus. The training stories selected come from Sinorama, a Taiwanese magazine, while the dev and test stories come from blogs. All files have the following format:

   With O
   a O
   throw O
   of O
   57.28 B-QUANTITY
   meters I-QUANTITY
   , O
   Chiang B-PERSON
   shattered O
   the O
   world O
   record O
   for O
   the O
   javelin O
   in O
   the O
   F12 B-CARDINAL
   visually O
   - O
   impaired O
   class O
   . O

   Chiang B-PERSON
   ...

   Blank lines mark sentence boundaries. Each nonblank line contains a word and its corresponding tag: B-type for the beginning of a named entity of type type, I-type for a word inside a named entity of type type, and O for a word outside a named entity.

   Write code to read in train-1k. Report how many tokens, word types, and tag types there are.1

2. Train a Hidden Markov Model on train-1k.5

   • If you want to, you can start with your FST from HW2. As before, the model can be thought of as the composition of two FSTs: a bigram tag model exactly as in HW2, and a tag-to-word model, which is like the modern-to-Shakespeare model in HW2, but with all $\varepsilon$-transitions deleted. 
For each sentence, compose these two FSTs with an FST that accepts just that sentence.

   • Or, you can start from scratch without using FSTs. It's important to smooth $p(w \mid t)$; a simple recommendation is to use add-0.1 smoothing, and in the dev/test data, convert all unknown words to a zero-count symbol <unk>.

   Report2 your model's tag bigram probabilities for:

   p(B-DATE | B-DATE) =
   p(I-DATE | B-DATE) =
   p(O | B-DATE) =
   p(B-DATE | I-DATE) =
   p(I-DATE | I-DATE) =
   p(O | I-DATE) =
   p(B-DATE | O) =
   p(I-DATE | O) =
   p(O | O) =

   and your model's (smoothed) tag-word probabilities for:

   p(one | B-DATE) =
   p(one | I-DATE) =
   p(one | O) =
   p(February | B-DATE) =
   p(February | I-DATE) =
   p(February | O) =

## 2. Decoding

1. Implement the Viterbi algorithm (or adapt your implementation from HW2) to work on the present model.5

   • If you started with your FST from HW2, you probably need to make minimal or no modifications to your HW2 code. If you weren't using log-probabilities before, it would be good to start.

   • If you started from scratch, you'll need to reimplement the Viterbi algorithm. Since the HMM has a simple structure, the algorithm is also not too complicated:

   # Assume w[1], ..., w[n-1] is the input string and w[n] = </s>
   # Assume that we have a model with
   #   - tag bigram probabilities P(t'|t)
   #   - tag-word probabilities P(w|t), with P(</s>|</s>) = 1

   # Initialize table of Viterbi probabilities
   # viterbi[i,t] is the probability of the best way of tagging words 1...i-1
   # with some tags and tagging word i as t
   viterbi[i,t] = 0 for i = 0...n+1 and all tags t
   viterbi[0,<s>] = 1

   for i = 1, ..., n
       for each tag t'
           for each tag t
               if viterbi[i-1,t] * P(t'|t) * P(w[i]|t') > viterbi[i,t']:
                   viterbi[i,t'] = viterbi[i-1,t] * P(t'|t) * P(w[i]|t')
                   pointer[i,t'] = (i-1,t)
   return viterbi[n,</s>]

2. 
Tag the development data.1 The output format should be:

   On O O
   the O B-DATE
   16th B-ORDINAL B-DATE
   Chen B-PERSON B-PERSON
   attended O O
   the O O
   inauguration O O
   of O O
   President O B-PERSON
   Hipolito B-PERSON I-PERSON
   Mejia I-PERSON I-PERSON
   . O O

   The O O
   ...

   where the first column contains the words, the second column contains the correct tags, and the third column contains your system's tags. The conlleval script is a very finicky Perl script; please make sure that the columns are separated by exactly one space character, and that there is no leading or trailing whitespace, and that blank lines do not have any whitespace.

   Report what your output looks like for the first five sentences of the development set.1

3. Run the scorer like this:

   perl conlleval.pl < dev.out

   where dev.out is replaced with the name of your output file. You should see something like this:

   processed 21608 tokens with 1635 phrases; found: 1652 phrases; correct: 472.
   accuracy: 78.86%; precision: 28.57%; recall: 28.87%; FB1: 28.72
   CARDINAL: precision: 48.48%; recall: 13.91%; FB1: 21.62 33
   DATE: precision: 20.57%; recall: 28.86%; FB1: 24.02 209
   EVENT: precision: 0.00%; recall: 0.00%; FB1: 0.00 369
   FAC: precision: 0.00%; recall: 0.00%; FB1: 0.00 1
   GPE: precision: 87.23%; recall: 39.27%; FB1: 54.16 235
   LANGUAGE: precision: 0.00%; recall: 0.00%; FB1: 0.00 0
   LAW: precision: 0.00%; recall: 0.00%; FB1: 0.00 0
   LOC: precision: 0.00%; recall: 0.00%; FB1: 0.00 18
   MONEY: precision: 28.57%; recall: 50.00%; FB1: 36.36 14
   NORP: precision: 93.90%; recall: 45.83%; FB1: 61.60 82
   ORDINAL: precision: 66.67%; recall: 14.63%; FB1: 24.00 9
   ORG: precision: 9.39%; recall: 12.50%; FB1: 10.73 181
   PERCENT: precision: 50.00%; recall: 25.00%; FB1: 33.33 4
   PERSON: precision: 25.20%; recall: 35.16%; FB1: 29.36 381
   QUANTITY: precision: 60.00%; recall: 25.00%; FB1: 35.29 10
   TIME: precision: 0.00%; recall: 0.00%; FB1: 0.00 4
   WORK_OF_ART: precision: 0.00%; recall: 0.00%; FB1: 0.00 102

   The important score is FB1 (balanced F-measure) on
all types, which in the example above is 28.72. Report your system's score on the development data.1 It should be at least 28%.2

## 3. Structured Perceptron

In this part, you'll implement a model related to a conditional random field. The model is not a probability model, just a scoring function: $$s(\mathbf{w}, \mathbf{t}) = \sum_{i=1}^n \left(\lambda_1(t_{i-1}, t_i) + \lambda_2(t_i, w_i)\right),$$ where $t_0 = \texttt{<s>}$ and $t_n = w_n = \texttt{</s>}$.

1. Update your implementation of the Viterbi algorithm to find the $\mathbf{t}$ that maximizes the above score.1

   If you're already using log-probabilities, then most likely, no changes at all are needed.

2. Implement the training algorithm,5 which works like this:

   • initialize all the $\lambda_1(t, t')$ and $\lambda_2(t, w)$ to zero
   • for each training sentence $\mathbf{w}, \mathbf{t}$:
     • find the tag sequence $\mathbf{p} = p_1 \cdots p_n$ that maximizes $s(\mathbf{w}, \mathbf{p})$
     • for $i \leftarrow 1, \ldots, n$:
       • $\lambda_1(t_{i-1}, t_i) \mathrel{\mathord+\mathord=} 1$
       • $\lambda_2(t_i, w_i) \mathrel{\mathord+\mathord=} 1$
       • $\lambda_1(p_{i-1}, p_i) \mathrel{\mathord-\mathord=} 1$
       • $\lambda_2(p_i, w_i) \mathrel{\mathord-\mathord=} 1$
   • [There used to be two more updates here for the end of sentence, but they are redundant if we are treating $t_n$ as $\texttt{</s>}$]

   The intuition is: If we got the sentence right, then the updates cancel each other out. If we got the sentence wrong, then the updates increase the weights we would have used to get it right and decrease the weights we did use to get it wrong.

3. Train the perceptron on the training data. After each epoch (pass through the training data), report your accuracy (number of correct tags divided by total number of tags) on both the training data and the development data.1 When done training, the FB1 (not accuracy) score on the development data should reach at least 40%.2

   Report any other details, like:1

   • How did you decide when to stop?
   • Did you use any tricks like randomly shuffling the training data, or averaging the weights?

4. Report2 your model's tag bigram weights for:

   λ₁(B-DATE, B-DATE) =
   λ₁(B-DATE, I-DATE) =
   λ₁(B-DATE, O) =
   λ₁(I-DATE, B-DATE) =
   λ₁(I-DATE, I-DATE) =
   λ₁(I-DATE, O) =
   λ₁(O, B-DATE) =
   λ₁(O, I-DATE) =
   λ₁(O, O) =

   and your model's tag-word weights for:

   λ₂(B-DATE, one) =
   λ₂(I-DATE, one) =
   λ₂(O, one) =
   λ₂(B-DATE, February) =
   λ₂(I-DATE, February) =
   λ₂(O, February) =

## Submission

Please submit all of the following in a gzipped tar archive (.tar.gz or .tgz; not .zip or .rar) via Sakai:

• A PDF file (not .doc or .docx) with your responses to the instructions/questions above.
• All of the code that you wrote.
• A README file with instructions on how to build and run your code on student*.cse.nd.edu. If this is not possible, please discuss with the instructor before submitting.
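As a footnote to Part 3, the perceptron update loop can be sketched in a few lines of Python. This is only an illustration with hypothetical names, not the required implementation: `decode` stands in for the Part 2 Viterbi decoder, and `lam1`/`lam2` are the λ₁/λ₂ weight tables.

```python
from collections import defaultdict

def perceptron_epoch(sentences, decode, lam1, lam2):
    """One training pass of the structured perceptron.

    sentences: list of (words, gold_tags) pairs, both ending with '</s>'.
    decode(words, lam1, lam2): returns the highest-scoring tag sequence
    (your Viterbi decoder).  lam1[(t, t2)] and lam2[(t, w)] are weight
    tables, updated in place.
    """
    for words, gold in sentences:
        pred = decode(words, lam1, lam2)
        if pred == gold:
            continue  # the + and - updates would cancel exactly
        prev_g = prev_p = '<s>'
        for w, g, p in zip(words, gold, pred):
            lam1[(prev_g, g)] += 1   # reward the gold transition
            lam2[(g, w)] += 1        # reward the gold emission
            lam1[(prev_p, p)] -= 1   # penalize the predicted transition
            lam2[(p, w)] -= 1        # penalize the predicted emission
            prev_g, prev_p = g, p
```

Note that when the predicted and gold tags agree at a position (as at the final `</s>`), the paired updates cancel, which is the intuition stated above.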
2020-04-09 14:49:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30322808027267456, "perplexity": 14096.815472772343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371858664.82/warc/CC-MAIN-20200409122719-20200409153219-00226.warc.gz"}
https://www.physicsforums.com/threads/whats-the-difference.462400/
# What's the difference?

## Homework Statement

In a box containing 100 bulbs, 10 are defective. The probability that out of a sample of 5 bulbs, none is defective is?

## The Attempt at a Solution

I see two approaches to this problem.

1) Out of 90 non-defective bulbs, we can choose 5 in 90C5 ways. There are 100C5 ways in total. So required probability = 90C5/100C5 = 0.5838

2) Out of the sample of 5 bulbs, probability of a non-defective bulb = 90/100 = 9/10. For 5 bulbs, required probability = (9/10)^5 = 0.59049

The two answers don't differ by much. Which one is correct and why?

LeonhardEuler Gold Member

2) Out of the sample of 5 bulbs, probability of a non-defective bulb = 90/100 = 9/10. For 5 bulbs, required probability = (9/10)^5 = 0.59049

You can multiply the probabilities of independent events to calculate the probability of all of them occurring together. Is pulling a second non-defective light bulb independent of pulling a first one? In other words, suppose you are picking the second lightbulb from the box. Does the probability depend on the condition of the first lightbulb?

In the selected sample, you don't take away the bulbs after picking them. In other words, the bulbs are replaced in the sample. So they are independent events.

LeonhardEuler Gold Member

So when they say "a sample of 5 bulbs", you can have the same bulb multiple times in the sample?

HallsofIvy Homework Helper

## Homework Statement

In a box containing 100 bulbs, 10 are defective. The probability that out of a sample of 5 bulbs, none is defective is?

## The Attempt at a Solution

I see two approaches to this problem.

1) Out of 90 non-defective bulbs, we can choose 5 in 90C5 ways. There are 100C5 ways in total. So required probability = 90C5/100C5 = 0.5838

2) Out of the sample of 5 bulbs, probability of a non-defective bulb = 90/100 = 9/10. For 5 bulbs, required probability = (9/10)^5 = 0.59049

The two answers don't differ by much. Which one is correct and why?

This is selection without replacement. 
In (2) you are calculating the probability with replacement. That is, as if you take a bulb, test it, put it back in the box and choose again, with a (slight) chance of getting the same bulb again.

Instead you could argue that, at first, there are 90 non-defective bulbs out of 100, so the chance that the first bulb selected is non-defective is 90/100 = 0.9. But then there are 89 non-defective bulbs left among 99 bulbs. The chance of selecting a non-defective bulb the second time is 89/99, not 90/100 again. The probability of selecting 5 non-defective bulbs is (90/100)(89/99)(88/98)(87/97)(86/96) = 0.5838 as in (1).

This is selection without replacement. In (2) you are calculating the probability with replacement. That is, as if you take a bulb, test it, put it back in the box and choose again, with a (slight) chance of getting the same bulb again. Instead you could argue that, at first, there are 90 non-defective bulbs out of 100, so the chance that the first bulb selected is non-defective is 90/100 = 0.9. But then there are 89 non-defective bulbs left among 99 bulbs. The chance of selecting a non-defective bulb the second time is 89/99, not 90/100 again. The probability of selecting 5 non-defective bulbs is (90/100)(89/99)(88/98)(87/97)(86/96) = 0.5838 as in (1).

But the thing is that, in my text the answer given is (0.9)^5. Not only this, I searched this question in google books and found that everywhere the answer is the same as above.

ex: 18. The probability of getting a defective bulb from the box is 1/10. Hence using the binomial distribution, the required probability is (0.9)^5. Hence the option is C.

Source - http://creatorstouchglobal.com/gm/index.php?option=com_content&view=article&id=63&Itemid=121

LeonhardEuler Gold Member

It really seems wrong on that website. 
If the question were "What is the probability of pulling a non-defective bulb 5 times if the bulbs are replaced after each drawing", then the answer would be (0.9)^5, but "a sample of 5 bulbs" strongly implies no replacement.

hmmm....it's possible that the answer is wrong.
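Both readings are easy to check numerically (a quick sketch, not from the thread itself):

```python
from math import comb

# Sampling without replacement: every 5-bulb subset is equally likely,
# which is the same as the product (90/100)(89/99)(88/98)(87/97)(86/96).
p_without = comb(90, 5) / comb(100, 5)

# Five independent draws, i.e. sampling with replacement.
p_with = 0.9 ** 5
```

The two values, roughly 0.5838 and 0.59049, match approaches (1) and (2) above.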
2021-04-23 03:26:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8859610557556152, "perplexity": 992.1487920384508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00580.warc.gz"}
https://crazyproject.wordpress.com/2012/01/20/on-the-degree-of-an-extension-of-fx/
## On the degree of an extension of F(x)

Let $F$ be a field, and consider the field $F(x)$ of rational functions over $F$ (that is, the field of fractions of the domain $F[x]$). Let $t(x) = p(x)/q(x)$, with $p,q \in F[x]$ and $q \neq 0$ such that the degree of $q$ is strictly larger than the degree of $p$. In this exercise, we will compute the degree of $F(x)$ over $F(t(x))$. (Note that if $p$ has degree larger than or equal to the degree of $q$, then we can use the division algorithm to find $p(x) = q(x)b(x) + r(x)$, and then $F(t(x)) = F(r(x)/q(x))$.)

1. Show that $p(y) - t(x)q(y) \in F(t(x))[y]$ is irreducible over $F(t(x))$ and has $x$ as a root (in the extension $F(x)$).
2. Show that the degree of $p(y) - t(x)q(y)$ as a polynomial in $y$ is the maximum of the degrees of $p$ and $q$.
3. Conclude that $[F(x):F(t(x))] = \mathsf{max}(\mathsf{deg}\ p(x), \mathsf{deg}\ q(x))$.

(Fields of rational functions were introduced in Example 4 on page 264 of D&F.)

We claim that $t(x)$ is indeterminate over $F$; that is, $t(x)$ does not satisfy any polynomial over $F$. Indeed, if $\sum_{i=0}^n c_i t(x)^i = \sum c_i \dfrac{p(x)^i}{q(x)^i} = 0$, then we have $\sum c_i p(x)^i q(x)^{n-i} = 0$. Let $d_p$ and $d_q$ denote the degrees of $p$ and $q$, respectively. Since $F$ has no zero divisors, the degree of the $i$th summand is $d_p i + d_q(n-i)$. Suppose two summands have the same degree; then $d_p i + d_q(n-i) = d_p j + d_q(n-j)$ for some $i$ and $j$, which reduces to $(d_p-d_q)i = (d_p-d_q)j$. Since (as we assume) $d_p \neq d_q$, we have $i = j$. In this case, we can pick out the summand with the highest degree, and be guaranteed that no other summands contribute to its term of highest degree. This gives us that $c_i = 0$ for the highest degree summand; by induction we have $c_i = 0$ for all the coefficients $c_i$, a contradiction. (In short, no two summands have the same highest degree term. 
Starting from the highest of the highest degree terms and working down, we show that each $c_i$ is 0.) So $t(x)$ is indeterminate over $F$, and in fact $F[t(x)]$ is essentially a polynomial ring whose field of fractions is $F(t(x))$. By Gauss' Lemma, the polynomial $p(y) - t(x)q(y)$ is irreducible over $F(t(x))$ if and only if it is irreducible over $F[t(x)]$. Now $F[t(x)][y] = F[y][t(x)]$, and our polynomial is irreducible in this ring since it is linear in $t(x)$. So in fact $p(y) - t(x)q(y)$ is irreducible over $F(t(x))$, and moreover has $x$ as a root.

The degree of $p(y) - t(x)q(y)$ in $y$ is the maximum of the degrees of $p$ and $q$ because the coefficient of each term (in $y$) is a linear polynomial in $t(x)$, which is nonzero precisely when one of the corresponding terms in $p$ or $q$ is nonzero. That is, we cannot have nonzero terms in $p(y)$ and $-t(x)q(y)$ adding to give a zero term.

To summarize, $p(y) - t(x)q(y)$ is an irreducible polynomial over $F(t(x))$ with $x$ as a root, and so must be (essentially) the minimal polynomial of $x$ over $F(t(x))$. By the preceding paragraph, the degree of this extension is the larger of the degrees of $p(x)$ and $q(x)$.
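As a concrete illustration (my own example, not from the original post): take $t(x) = 1/x^2$, so $p(y) = 1$ and $q(y) = y^2$. Then

```latex
p(y) - t(x)\,q(y) \;=\; 1 - \frac{y^2}{x^2},
```

which has degree $2 = \max(0,2)$ in $y$ and vanishes at $y = x$. And indeed $F(t(x)) = F(x^2)$, with $[F(x):F(x^2)] = 2$, since $\{1, x\}$ is a basis of $F(x)$ over $F(x^2)$.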
2016-10-27 01:00:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 76, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748385548591614, "perplexity": 44.674776226903916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721027.15/warc/CC-MAIN-20161020183841-00183-ip-10-171-6-4.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/9803/bad-ideas-adding-vectors-from-different-vector-spaces
Let $V$ be a vector space with non-trivial subspaces $U, W$, $U \neq W$. Consider $u_0 \in V$ with $u_0 \neq 0$ and $w_0 \in V$ with $w_0 \neq 0$; then we have $u_0 + U \in V/U$ and $w_0 + W \in V/W$. Is there any sensible way to define: $(u_0 + U) + (w_0 + W)$?

I'm thinking "no": $(2,4) + (2,6,7)$ makes no sense. So, why should this? I mean, we would need to find some new vector space that had all of the elements in $V/U$ and in $V/W$ and their sums and scalar products. Maybe $V/U + V/W$ -- but then we need both $V/U$ and $V/W$ to be subspaces of some larger vector space. Is there a way to make sense of this at all?

- Read about direct sums of vector spaces. – Mariano Suárez-Alvarez Nov 11 '10 at 2:48
- This book only defined direct sums for subspaces. But, I am reading otherwise now. – a little don Nov 11 '10 at 2:56

Well, let's go back to the definitions. I am sorry if I go too far into the details, but it is difficult to judge what you exactly understand based only on your question. So maybe I explain too much, but don't take this as an offense.

What exactly is $V/U$? I will explain this a little, since it might be the source of the problem. $V/U$ is the set of equivalence classes of vectors from $V$. The equivalence relation is given by $x \sim y :\iff x-y \in U$. This set can naturally be considered as a vector space if you define: $$[x] + [y] := [x+y] \text{ and } \lambda[x] := [\lambda x]$$

Let's take an example: $V = \mathbb{R}^4$, $U = \mathbb{R}^2$, $W = \mathbb{R}$. First, what does it mean to consider $\mathbb{R}^2$ and $\mathbb{R}$ as subspaces of $\mathbb{R}^4$? Well, we need to actually consider $\mathbb{R}^2 \times \{(0,0)\}$ and $\mathbb{R} \times \{(0,0,0)\}$. Thus, actually $U = \{ x = (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 | x_3 = x_4 = 0\}$ and $W = \{ x = (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 | x_2 = x_3 = x_4 = 0\}$. 
So a first equivalence relation $\sim_A$ can be defined for $V/U$: $$x \sim_A y :\iff x_3 = y_3 \text{ and } x_4 = y_4$$ Same for $V/W$: $$x \sim_B y :\iff x_2 = y_2, x_3 = y_3, x_4 = y_4$$

Thus let's write $V/U = \{[x]_A \}$ and $V/W = \{[x]_B \}$, where obviously $[x]_A = \{y \in \mathbb{R}^4 | y_3 = x_3, y_4 = x_4\}$ and $[x]_B = \{y \in \mathbb{R}^4 | x_2 = y_2, x_3 = y_3, y_4 = x_4\}$. Note that $V/U$ has dimension 2 (a base is given by $\{[(0,0,1,0)]_A , [(0,0,0,1)]_A\}$) and that $V/W$ has dimension 3 (a base is given by $\{[(0,1,0,0)]_B , [(0,0,1,0)]_B, [(0,0,0,1)]_B\}$).

Now, how can we define an addition $[u]_A + [v]_B$? This would be trying to add sets. You can try to define $[u]_A + [v]_B$ in a natural way but you won't manage it: simply saying $[u]_A + [v]_B := (u_1+v_1, u_2 + v_2 , u_3 + v_3, u_4 + v_4)$ won't work: the addition will not be well-defined (because if you change the representative of $[u]_A$, $u_1$ and $u_2$ can be anything, and if you change the representative of $[v]_B$, $v_1$ can be anything).

Furthermore, your intuition should tell you that since the dimensions of $V/U$ and $V/W$ are 2 and 3, such an addition can only be defined properly in a vector space of at least dimension 5 (otherwise you lose information). Too bad $\mathbb{R}^4$ has only dimension 4. So you should be convinced that it is impossible to define a natural addition properly in $V$ for this example. (Caution: I don't say it is impossible to define an addition inside $V$. Actually, it is possible to define several additions, but it won't be "natural")

Hence, the only way to do it properly is to consider $V/U \oplus V/W$. Of course, in special cases, where $\dim(V/U) + \dim(V/W) \leq \dim(V)$, then it is possible to rearrange things so that you consider $V/U \oplus V/W$ as a subspace of $V$.

Sorry for the long answer, that depending on your level of understanding might not be very clear.

- This was very helpful. I admit the question was murky. 
I am grading these exams and students write strange things like $(w_0+W) + (u_0 + U) = (w_0+u_0) + (W \cup U)$ and I need to write little notes that explain why this is wrong. (In this case it is wrong, at least because we have no reason to believe that $W \cup U$ is a subspace. But, then I think "What if there is some higher maths structure that I don't know about? Better ask..." –  a little don Nov 11 '10 at 15:16 @a little don : in that case, instead of trying to grasp complex concepts, it's easier to go back to the definitions. That helps a lot, especially if you are trying to find counter-example. –  Djaian Nov 11 '10 at 15:28 Certainly you can add them as elements of $V$. They should all have the same number of components as the dimension of $V$. But if $u_0 \in U$ why isn't $u_0+U$ just $U$? $U$ is a vector space, so is closed under addition. - I see what I messed up. $u_0, w_0 \in V$ and I was thinking... non-zero. Doh! Sorry. –  a little don Nov 11 '10 at 15:25 I was looking at $u_0+U$ as the set of vectors u_0+u for u any vector in U. Then we can add the two sets to make the set of vectors which are the sum of one vector from the first set and one from the second. My reading of $u_0+U$ is the same as this with the first set having only one element. As all this is within V, it goes through. As Djaian says, this doesn't work as a representative of an equivalence class, but it does work for sets. –  Ross Millikan Nov 11 '10 at 15:27
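Djaian's ill-definedness argument can be made concrete in a few lines of Python (my own sketch, using the coordinates of that answer: classes in $V/U$ are determined by coordinates 3 and 4, and classes in $V/W$ by coordinates 2, 3 and 4):

```python
# Two representatives of the SAME class in V/U (coordinates 3 and 4 agree):
u1 = (1.0, 2.0, 3.0, 4.0)
u2 = (9.0, -5.0, 3.0, 4.0)
# One representative of a class in V/W:
v = (0.0, 1.0, 1.0, 1.0)

s1 = tuple(a + b for a, b in zip(u1, v))
s2 = tuple(a + b for a, b in zip(u2, v))
# s1 and s2 agree in coordinates 3,4 (so they name the same class in V/U),
# but differ in coordinate 2 -- which matters for V/W.  The "sum" depends
# on which representative was chosen, so the naive addition is ill-defined.
```

This is exactly the failure described in the answer: the free coordinates of one representative leak into the coordinates that the other quotient cares about.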
2014-10-25 12:15:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9325931668281555, "perplexity": 113.56571148390707}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648148.32/warc/CC-MAIN-20141024030048-00075-ip-10-16-133-185.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/169532/what-is-an-lut-in-fpga/169535
What is an LUT in FPGA?

I have gone through various sources... But I am not quite sure what it is. I want an AND gate, and the logical equivalent is two inputs feeding one gate; for Y=AB' the logical equivalent is feeding one NOT gate and one AND gate. But it is the same LUT for both AND and Y=AB'. I think we store the values as desired in the LUT. Can someone elaborate on this?

• Any reason you haven't accepted an answer? – Ellen Spertus Aug 16 '16 at 20:10

A LUT, which stands for LookUp Table, in general terms is basically a table that determines what the output is for any given input(s). In the context of combinational logic, it is the truth table. This truth table effectively defines how your combinatorial logic behaves. In other words, whatever behavior you get by interconnecting any number of gates (like AND, NOR, etc.), without feedback paths (to ensure it is state-less), can be implemented by a LUT.

The way FPGAs typically implement combinatorial logic is with LUTs, and when the FPGA gets configured, it just fills in the table output values, which are called the "LUT-Mask" and are physically stored in SRAM bits. So the same physical LUT can implement Y=AB and Y=AB', but the LUT-Mask is different, since the truth table is different.

You can also create your own lookup tables. For example, you could build a table for a complex mathematical function, which would work much faster than actually calculating the value by following an algorithm. This table would be stored in RAM or ROM. This brings us to viewing the LUTs simply as memory, where the inputs are the address, and the corresponding outputs are the data stored in the given address.

Here's a snapshot from FPGA Architecture by Altera:
Although we think of RAM normally being organized into 8, 16, 32 or 64-bit words, SRAM in FPGA's is 1 bit in depth. So for example a 3 input LUT uses an 8x1 SRAM (2³=8) Because RAM is volatile, the contents have to be initialized when the chip is powered up. This is done by transferring the contents of the configuration memory into the SRAM. The output of a LUT is whatever you want it to be. For a two-input AND gate, Address In ([1:0]) Output 0 0 0 0 1 0 1 0 0 1 1 1 For your second example, only the truth table changes: Address In ([1:0]) Output 0 0 0 0 1 1 1 0 0 1 1 0 and finally, A xor B: Address In ([1:0]) Output 0 0 0 0 1 1 1 0 1 1 1 0 So it is not the same LUT in each case, since the LUT defines the output. Obviously, the number of inputs to an LUT can be far more than two. The LUT is actually implemented using a combination of the SRAM bits and a MUX: Here the bits across the top 0 1 0 0 0 1 1 1 represents the output of the truth table for this LUT. The three inputs to the MUX on the left a, b, and c select the appropriate output value. • Thanks for that... But I am getting very basic questions here. SRAM bits? I know SRAM.. but SRAM bits? Configuration memory? and could you explain how Y=AB,Y=AB' and Y= A xor B will be implemented with a 2-input LUT? It would be helpful if you explain with a MUX level diagram? I am very thankful for your help on the structure on LUT. What is that R in the diagram> – Muthu Subramanian May 8 '15 at 9:13 • @MuthuSubramanian I have updated my answer which should address all of your questions. – tcrosley May 8 '15 at 9:59
2021-01-19 17:35:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5323571562767029, "perplexity": 1022.8138848034032}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519600.31/warc/CC-MAIN-20210119170058-20210119200058-00121.warc.gz"}
https://plainmath.net/other/102639-a-package-weighs-96-ounces-wh
i1yev1ki 2023-02-19

A package weighs 96 ounces. What is the weight of the package in pounds?

### Answer & Explanation

kolosalnoigrr

Given solution: As $16$ ounces make $1$ pound, $1$ ounce will make $\frac{1}{16}$ pound, and $96$ ounces will make $\frac{1}{16}×96 = \frac{96}{16}$ pounds, i.e. $6$ pounds.
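The same conversion as a one-liner in Python (a trivial sketch, included only for completeness):

```python
OUNCES_PER_POUND = 16  # avoirdupois: 16 oz = 1 lb

def ounces_to_pounds(ounces):
    return ounces / OUNCES_PER_POUND
```

Here ounces_to_pounds(96) gives 6.0.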
2023-03-24 04:14:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18778494000434875, "perplexity": 4497.62531571411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00289.warc.gz"}
https://lists.nongnu.org/archive/html/lilypond-user/2019-05/msg00154.html
lilypond-user

## Re: Font questions about absolute

From: Aaron Hill
Subject: Re: Font questions about absolute
Date: Sat, 11 May 2019 12:16:01 -0700
User-agent: Roundcube Webmail/1.3.8

On 2019-05-11 10:52 am, Reggie wrote:
> Is there any easy way to quickly see or convert absolutely font to #3 or #5 and so on? In frescobaldi like a function() or something. Thank you for any help.

Within \markup, you can use \abs-fontsize to get a specific size that will not scale based on the global staff size. That does not require any conversion.

If you are trying to set the relative font size of a grob that produces text and you want it to match a specific absolute font size, then you are going to need to break out your calculator and do some arithmetic. (LilyPond has some Scheme functions to assist here.)

One thing to understand is the logarithmic scale that LilyPond uses for relative font sizes. A value of 6 results in doubling the size of a font, whereas a value of -6 will halve the size of the font. Each increment of 6 in either direction is another doubling or halving. So in this system, adding an amount on the logarithmic scale results in multiplying the value on the linear scale.

The magstep and magnification->font-size procedures help with converting between linear and logarithmic. So (magstep 6) will produce 2.0 as output, and (magnification->font-size 1/2) will produce -6.0 as output. But if you need to do the math by hand, here are those functions:

    magstep(x) = 2 ^ (x / 6)
    magnification->font-size(x) = 6 * log_2(x)

(Recall that a logarithm of any one base can be used to compute the logarithm of any other base. So to compute the base-two log above using the natural log is simply: ln(x) / ln(2).)

Another thing to be aware of is how the global staff size plays into font size. The default is a 20pt staff (5pt staff spaces) and a resulting font size of 11pt. The ratio of 11/20 is important, since if you were to shrink the staff size to 16, for example, the resulting font will be 16 * 11/20 = 8.8.

Putting this together, we can calculate precisely what the resulting font size would be given our knowledge of the global staff size and the relative font size of a text element. Likewise, we can determine a relative font size that will result in an absolute font size.

For the first case--going from relative to absolute--let us assume we have a staff size of 18 and our grob's font-size is 2. The global font size is 18 x 11/20 = 9.9 and the magnification factor is 2^(2/6) ~= 1.26, so our result is approximately 12.5pt. And in Scheme, we could say:

    (* 18 11/20 (magstep 2)) --> 12.4732183939592

For the second case--going from absolute to relative--let us assume we have a staff size of 24 and need to set the grob's font-size so that the result is exactly 18pt. The global font size is 24 * 11/20 = 13.2. To get from 13.2 to 18 requires a magnification of 18 / 13.2 ~= 1.36. This, in the logarithm scale, is 6 * log_2(1.36) ~= 2.68. In Scheme, we do:

    (magnification->font-size (/ 18 (* 24 11/20))) --> 2.68475386182733

Now all that said, is this "easy"? Depends on your comfortability with maths, I suppose. I usually stick to relative font sizes and do not concern myself with the absolutes, since it is my eye that determines whether something is big enough or small enough, not a ruler.

Of course, I recently did have to use the above computations when I was typesetting hymns for projection. I needed to have the flexibility of changing the staff size independent of the lyric font size as I was experimenting with what would look good. So rather than have to compute things by hand, I used something similar to the absolute-to-relative Scheme code above.

-- Aaron Hill
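The two worked examples in the email can be checked mechanically. A small Python sketch of the same arithmetic (the formulas are the ones Aaron gives for magstep and its inverse; the 11/20 constant is LilyPond's default 20pt-staff-to-11pt-font ratio described above, and the function names are my own):

```python
import math

FONT_RATIO = 11 / 20  # default 20pt staff -> 11pt font

def magstep(x):
    """Relative font size (log scale) -> linear magnification: 2^(x/6)."""
    return 2 ** (x / 6)

def magnification_to_font_size(m):
    """Linear magnification -> relative font size (log scale): 6*log2(m)."""
    return 6 * math.log2(m)

def absolute_pt(staff_size, font_size):
    """Absolute font size resulting from a staff size and a grob font-size."""
    return staff_size * FONT_RATIO * magstep(font_size)

def relative_for(staff_size, target_pt):
    """Relative font-size needed to hit an absolute size in points."""
    return magnification_to_font_size(target_pt / (staff_size * FONT_RATIO))

print(absolute_pt(18, 2))    # ~12.4732, matching the first Scheme example
print(relative_for(24, 18))  # ~2.68475, matching the second Scheme example
```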
2019-06-17 02:56:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495693206787109, "perplexity": 1092.3710195411231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998369.29/warc/CC-MAIN-20190617022938-20190617044938-00473.warc.gz"}
https://paudetseis.github.io/Telewavesim/
# Documentation¶ The structure of the Earth’s crust and upper mantle gives useful information on the internal composition and dynamics of our planet. Some of the most widely used techniques to infer these properties are based on examining the effect of teleseismic body wave (i.e., P and S waves that originate from distant earthquakes and arrive as plane waves) propagation (e.g., transmission and scattering) through stratified media. Modeling the seismic response from stacks of subsurface layers is therefore an essential tool in characterizing their effect on observed seismograms. This package contains python and fortran modules to synthesize teleseismic body-wave propagation through stacks of generally anisotropic and strictly horizontal layers using the matrix propagator approach of Kennett (1983), as implemented in Thomson (1997). The software also properly models reverberations from an overlying column of water using the R/T matrix expressions of Bostock and Trehu (2012), effectively simulating ocean-bottom seismic (OBS) station recordings. The software will be useful in a variety of teleseismic receiver-based studies, such as P or S receiver functions, long-period P-wave polarization, shear-wave splitting from core-refracted shear waves (i.e., SKS, SKKS), etc. It may also be the starting point for stochastic inverse methods (e.g., Monte Carlo sampling). The main part of the code is written in fortran with python wrappers. Common computational workflows are covered in the Jupyter notebooks bundled with this package.
2020-04-06 09:37:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35827353596687317, "perplexity": 2945.0380963081734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371620338.63/warc/CC-MAIN-20200406070848-20200406101348-00332.warc.gz"}
https://1lab.dev/Cat.Displayed.Instances.Elements.html
open import Cat.Displayed.Base open import Cat.Prelude module Cat.Displayed.Instances.Elements {o ℓ s} (B : Precategory o ℓ) (P : Functor (B ^op) (Sets s)) where # The Displayed Category of Elements🔗 It is useful to view the category of elements of a presheaf P as a displayed category. Instead of considering pairs of objects $X$ and sections $s$, we instead think of the set of sections as displayed over $X$. The story is similar for morphisms; instead of taking pairs of morphisms $f$ and fragments of data that $P(f)(x) = y$, we place those fragments over the morphism $f$. In a sense, this is the more natural presentation of the category of elements, as we obtain the more traditional definition by taking the total category of . ∫ : Displayed B s s Displayed.Ob[ ∫ ] X = ∣ P.₀ X ∣ Displayed.Hom[ ∫ ] f P[X] P[Y] = P.₁ f P[Y] ≡ P[X] Displayed.Hom[ ∫ ]-set _ _ _ = hlevel! ∫ .Displayed.id′ = happly P.F-id _ ∫ .Displayed._∘′_ {x = x} {y = y} {z = z} {f = f} {g = g} p q = pf where abstract pf : P.₁ (f ∘ g) z ≡ x pf = P.₁ (f ∘ g) z ≡⟨ happly (P.F-∘ g f) z ⟩≡ P.₁ g (P.₁ f z) ≡⟨ ap (P.₁ g) p ⟩≡ P.₁ g y ≡⟨ q ⟩≡ x ∎ ∫ .Displayed.idr′ _ = to-pathp (P.₀ _ .is-tr _ _ _ _) ∫ .Displayed.idl′ _ = to-pathp (P.₀ _ .is-tr _ _ _ _) ∫ .Displayed.assoc′ _ _ _ = to-pathp (P.₀ _ .is-tr _ _ _ _)
2023-02-03 12:45:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509403467178345, "perplexity": 7789.607401383977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00577.warc.gz"}
https://math.stackexchange.com/questions/4276017/show-that-two-vector-are-perpendicular-to-other
# Show that two vectors are perpendicular

Let $$S : \mathbb{R}^n \rightarrow \mathbb{R}^n$$ be the transformation induced by the $$3 \times 3$$ matrix $$\left(I-u\,u^T\right)$$. Show that for $$x \in \mathbb{R}^3$$, $$S(x)$$ is perpendicular to $$u$$, where $$u = [b , c, d]^T.$$

• Show that $u^\top S(x) = 0$ by plugging in the definition of $S(x)$. I think you also need the condition that $u^\top u = 1$, which isn't mentioned in the problem. Oct 13 at 22:54

$$S(x) = (I-u\,u^T)\,x = x - u\,u^T\,x$$ $$\langle u,S(x)\rangle = u^T\,S(x) = u^T\,x - u^T\,u\,u^T\,x = (u^T\,x)\,(1-u^T\,u)$$

If we include the condition $$b^2 + c^2 + d^2 = 1 \Leftrightarrow \|u\| = 1$$, then $$1-u^T\,u = 0$$, which means that $$\langle u,S(x)\rangle = 0$$, i.e., $$S(x)$$ is perpendicular to $$u$$.

If $$u$$ is a unit vector, then $$(u\,u^T\,x) = (u^T\,x)\,u$$ is the projection of $$x$$ in the $$u$$-direction. Thus, $$S(x)$$ subtracts from $$x$$ its projection in the $$u$$-direction, and only the perpendicular part remains.
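The projection argument is easy to verify numerically. A pure-Python sketch (the particular vectors u and x below are arbitrary example values, chosen so that u is exactly a unit vector):

```python
# S(x) = x - (u . x) u  is the action of (I - u u^T) on x, for unit u.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def S(u, x):
    """Apply (I - u u^T) to x; assumes u is a unit vector."""
    c = dot(u, x)
    return [xi - c * ui for xi, ui in zip(x, u)]

n = 3.0                    # |[1, 2, 2]| = 3, so u below is a unit vector
u = [1 / n, 2 / n, 2 / n]
x = [3.0, -1.0, 4.0]

print(dot(u, S(u, x)))     # ~0.0: S(x) is perpendicular to u
```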
2021-10-19 00:57:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8829641342163086, "perplexity": 198.08534683831084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585215.14/warc/CC-MAIN-20211018221501-20211019011501-00334.warc.gz"}
https://encyclopediaofmath.org/index.php?title=Morphism&diff=prev&oldid=40176
# Morphism

*of a category*

A term used to denote the elements of an arbitrary category which play the role of mappings of one set into another, homomorphisms of groups, rings, algebras, continuous mappings of topological spaces, etc. A morphism of a category is an undefined concept. Each category consists of elements of two classes, called the class of objects and the class of morphisms, respectively. The class of morphisms of a category $\mathfrak{K}$ is usually denoted by $\operatorname{Mor} \mathfrak{K}$. Any morphism $\alpha$ of a category $\mathfrak{K}$ has a uniquely defined domain (source) $A$ and a uniquely defined codomain (target) $B$. All morphisms with common domain $A$ and codomain $B$ form a subset $H_{\mathfrak{K}} \! \left({A, B}\right)$ of $\operatorname{Mor} \mathfrak{K}$. The fact that $\alpha$ has domain $A$ and codomain $B$ can be written in the usual way: $\alpha \in H_{\mathfrak{K}} \! \left({A, B}\right)$ or, using arrows, $\alpha : A \to B$, $A \xrightarrow{\alpha} B$, etc. The division of the elements of a category into morphisms and objects is meaningful only within the context of a fixed category, since the morphisms of one category may be the objects of another and conversely. The morphisms of any category form a system that is closed under a partial binary operation — multiplication. Depending on the properties of morphisms relative to this operation, special classes of morphisms can be distinguished, for example, the classes of monomorphisms, epimorphisms, bimorphisms, isomorphisms, null (zero) morphisms, normal monomorphisms, normal epimorphisms, etc. (cf. Monomorphism; Epimorphism; Bimorphism; Isomorphism; Normal monomorphism; Normal epimorphism).
2022-08-09 22:47:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678916931152344, "perplexity": 378.91892606342105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00229.warc.gz"}
http://blog.polettix.it/parachuting-whatever/
### Flavio Poletti

Irreducible Perler.

# Parachuting Whatever

Many times I craft things that have to be installed in some place, which means that an installer is a nice thing to have. Here's one, Perl-based.

The basic idea that probably anyone has for a poor man's deployment system is to pack stuff in a tarball, together with a deployment script inside that has to be executed in the target machine. Without too much fantasy, I figured that I could walk the extra mile and make a package that behaves like a tarball with a twist - i.e. it is capable of executing things after unpacking.

## TL;DR

Download the self-contained bundled version and save it as deployable in some directory in your PATH. Make sure it's executable too.

Put your stuff in a directory. The current directory is fine. Assume it is exactly as you want it to appear when you unpack in the destination. Include a deployment script, i.e. the one that you usually include for starting the real deployment after unpacking –we'll call it deploy.sh– and make sure it's executable. You should have something like this in the directory:

    deploy.sh
    file1.foo
    file2.bar
    somedir/
    ...

To generate package.pl ready for deployment, run:

    deployable -o package.pl -d deploy.sh \
        file1.foo file2.bar somedir ...

Ship package.pl, execute in place and you're done.

## Enter deployable

deployable is a handful of tools to help you with remote management of multiple servers. It was born when there was no Puppet or Chef in town - not that I know of, at least - and worked pretty well for me. In this post we'll concentrate on the main script –named after the bunch of tools– i.e. the one that allows you to generate smart packages.

Before continuing, if you find it interesting, please note that you will need to carry also the remote script with you, together with installing dependencies. If you like compact packages - and you probably do if you're interested in packing things smartly - you can download the bundled version.
Ensure to put it in some place in PATH and to set its execution bits; this is what we will assume in the rest of this post.

So what was your workflow before deployable? Let's assume it was something like this:

1. place all relevant files in a directory (possibly in a subdirectory)
2. add a deployment script to the directory
3. create a tarball of that directory
4. write instructions to unpack the tarball and execute the deployment script inside the directory that is created
5. ship the tarball and the instructions

Something along the following line:

    mkdir foobar
    cp /lots/of/stuff/* foobar
    vi foobar/deploy.sh   # and put what's needed
    chmod +x foobar/deploy.sh
    tar cvzf package.tar.gz foobar

You actually don't have to change your workflow that much. If you want to stick to it, you can still put all your stuff in a directory, like the first bullet above, and create a package with the whole contents of that directory via deployable instead of the last step in the example above:

    # preparation goes exactly like before, but packaging is:
    deployable -o /path/to/package.pl -H foobar -d deploy.sh

You end up with /path/to/package.pl (you can omit the path to create it in the current directory of course). At this point, you hardly have to write any instructions: just tell your recipients to put the script in the destination server with the execution bits turned on, and execute it.

So what does that command do? Easy:

• option -o sets the output. If not set, the resulting script will be printed on standard output, but if you provide a filename deployable will make it executable
• option -H (alias --heredir) tells deployable where your stuff is (in terms of a directory). The contents of the directory will be included in the package, but the initial path set with -H will be stripped away. In the example above, file foobar/deploy.sh will be included simply as deploy.sh (actually, as ./deploy.sh).
This is useful if you want to store all files/directories to be shipped in one single place, but you don't care about the containing directory
• option -d tells deployable that the specified file (i.e. deploy.sh in our example) has to be executed. You can specify whatever file you include, even multiple ones; only remember that the path to the files that you include will be referred to their position in the package, so in our example you have to specify it as deploy.sh instead of foobar/deploy.sh because foobar is stripped away.

## Shortcuts?

Here are some shortcuts that deployable provides.

### Stuff in current directory

If you just want to ship some files in the current directory, you're not obliged to use -H at all, just tell deployable which files you want to include. Remember that they will be recorded with the path you provide.

    deployable -o p.pl file1 file2 ...

### Execute multiple deployment programs

If you want to execute multiple programs, make sure they are all executable and pass them with multiple -d options:

    deployable -o p.pl -d exec1 -d exec2 exec1 exec2 file1 ...

If you want to execute all executable files inside the default current directory, you can just pass the -X command line parameter. Beware that it will execute whatever it finds, so make sure that this is what you actually want:

    deployable -o p.pl -X exec1 exec2 file1 ...

### Install stuff in place

At the time that I needed it, chances were that I had to update some system files in multiple machines at once. This meant that I wanted the tarball to optionally extract things based on the root directory (i.e. /) so that the files go in place. While this is not hard to do with what was explained above –it's a matter of crafting the deploy.sh script for this– it was too handy to leave outside.
You have two ways:

• create a directory target-root and put all the stuff you want to install like target-root were the root directory / of the target system, or
• include files/directories (presumably inside the current directory) to be directly extracted in the root directory / of the target system.

If you go the first way, this is how you call deployable:

    deployable -o p.pl -r target-root

You can of course add scripts to call within the same command line. If you're more into the second, this is how you do it:

    deployable -o p.pl -R etc

## The surface is scratched now…

deployable has plenty of documentation. After installing it, you can run either of these commands, in increasing level of verbosity:

    deployable --usage
    deployable --help
    deployable --man
2019-03-21 20:51:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49001210927963257, "perplexity": 2636.432522219548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202572.29/warc/CC-MAIN-20190321193403-20190321215403-00166.warc.gz"}
https://socratic.org/questions/how-do-you-graph-the-function-y-cos-2x-2pi-3-1-2
How do you graph the function y=cos[2x-2pi/3]+1/2? Feb 10, 2015 Here is a procedure one can use to graph $y = \cos \left(2 x - 2 \frac{\pi}{3}\right) + \frac{1}{2}$. 1. Make a small transformation of the original function to $y = \cos \left[2 \left(x - \frac{\pi}{3}\right)\right] + \frac{1}{2}$. 2. Graph of this function can be obtained by horizontally right-shifting by $\frac{\pi}{3}$ a graph of function $y = \cos \left(2 x\right) + \frac{1}{2}$. 3. Graph of $y = \cos \left(2 x\right) + \frac{1}{2}$ can be obtained by vertically up-shifting by $\frac{1}{2}$ a graph of function $y = \cos \left(2 x\right)$. 4. Graph of $y = \cos \left(2 x\right)$ can be obtained by horizontally squeezing towards 0 by a factor $2$ a graph of function $y = \cos \left(x\right)$. "Squeezing" means that every point $\left(x , y\right)$ of the graph is transformed into $\left(\frac{x}{2} , y\right)$. So, the steps to graph the original function are: (a) start from a graph of $y = \cos \left(x\right)$; (b) squeeze this graph horizontally towards 0 by a factor of $2$. (c) shift up by $\frac{1}{2}$ (d) shift right by $\frac{\pi}{3}$.
2021-09-24 06:01:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6443887948989868, "perplexity": 503.1369018948072}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00615.warc.gz"}
https://tex.stackexchange.com/questions/159428/two-column-algorithmicx-with-variable-width-columns?noredirect=1
# Two column Algorithmicx with variable-width columns In another post, @Jubobs suggested to use the multicol package around an algorithmic environment, to typeset the algorithm in two columns. Recently, I faced a rather aesthetic issue: It might be the case that one column has a long line, while all lines in the other column are short. Example: Here's the code for the above picture: \documentclass[twocolumn]{article} \usepackage[width=11cm]{geometry} % page width is reduced to show the effect \usepackage{multicol} \usepackage{algorithm} \usepackage{algpseudocode} \begin{document} \begin{algorithm*}[t] \caption{An algorithm with a long line.} \label{alg1} \begin{multicols}{2} \begin{algorithmic}[1] \If{$(x = y^2+1$ and $z=x^3+4y -12)$ } \State $a \gets b + c$ \EndIf \columnbreak \State $x \gets 0$ \end{algorithmic} \end{multicols} \end{algorithm*} \end{document} Is it possible to typeset an Algorithmicx environment in two columns, but with unequal widths? I tried the vwcol package as suggested in this post, but I wasn't able to make it work for my case. Would this be what you seek? The proposed solution uses varwidth package. Since algorithmic environment is wrapped inside the varwidth, algostore and algorestore is used to continue the numbering. Note the xx\linewidth can be used to change the width of varwidth Code: \documentclass[twocolumn]{article} \usepackage[width=11cm]{geometry} % page width is reduced to show the effect \usepackage{varwidth} \usepackage{algorithm} \usepackage{algpseudocode} \begin{document} % \begin{algorithm*}[t] \caption{An algorithm with a long line.} \label{alg1} \begin{varwidth}[t]{0.45\textwidth} % change 0.45 to suit your need \begin{algorithmic}[1] \If{$(x = y^2+1$ and $z=x^3+4y -12)$ } \State $a \gets b + c$ \EndIf \algstore{myalg} \end{algorithmic} \State $x \gets 0$ • You can set simply \textwidth in both varwidth. The width is determined by its contents. I think this is what the OP wants. – karlkoeller Feb 9 '14 at 8:33
2020-01-24 05:08:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354881048202515, "perplexity": 1848.1231149675411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250615407.46/warc/CC-MAIN-20200124040939-20200124065939-00108.warc.gz"}
https://gmatclub.com/forum/if-z-x-y-2xy-and-0-y-z-1-which-of-the-following-262714.html
# If z=(x+y)/2xy and 0<y<z<1 which of the following

Intern
Joined: 05 Dec 2017
Posts: 21
GPA: 3.35

If z=(x+y)/2xy and 0<y<z<1 which of the following — Updated on: 04 Apr 2018, 21:51

Difficulty: 45% (medium). Question Stats: 60% (01:52) correct, 40% (02:00) wrong, based on 54 sessions.

If $$z = \frac{(x+y)}{2xy}$$ and $$0<y<x<1,$$ which of the following must be true about range of z?

A. $$z<0.5$$
B. $$z>0.5$$
C. $$z\geq{1}$$
D. $$z<1$$
E. $$z>1$$

Originally posted by Jamil1992Mehedi on 04 Apr 2018, 20:50. Last edited by chetan2u on 04 Apr 2018, 21:51, edited 1 time in total. updated the question

Manager
Joined: 24 Mar 2018
Posts: 88

Re: If z=(x+y)/2xy and 0<y<z<1 which of the following — 04 Apr 2018, 21:42

It is already given z < 1, how can option E be correct?

Manager
Joined: 05 Feb 2016
Posts: 144
Location: India
Concentration: General Management, Marketing
WE: Information Technology (Computer Software)

Re: If z=(x+y)/2xy and 0<y<z<1 which of the following — 04 Apr 2018, 21:49

teaserbae wrote: It is already given z < 1 how can option E be correct ?

Question seems to be wrong, since nothing is given for x. What if x=0? Then it will be undefined.
Might be x<1 instead of z.

Math Expert
Joined: 02 Aug 2009
Posts: 6979

If z=(x+y)/2xy and 0<y<z<1 which of the following — 04 Apr 2018, 21:50

Jamil1992Mehedi wrote:

If $$z = \frac{(x+y)}{2xy}$$ and $$0<y<z<1,$$ which of the following must be true?

A. $$z<0.5$$
B. $$z>0.5$$
C. $$z\geq{1}$$
D. $$z<1$$
E. $$z>1$$

The choices give away the answer: $$z<0.5$$ is a part of $$z<1$$, so it cannot be the answer, and $$z>1$$ or $$z\geq{1}$$ is a part of $$z>0.5$$, so it cannot be the answer. Actually the answer should be B or D. But here it seems the question wants to know for what range of z it holds true; also it should be $$0<y<x<1$$ and not $$0<y<z<1$$. So let's find the answer.

$$z = \frac{(x+y)}{2xy}$$. Now since x and y are both <1, their SUM will always be MORE than twice their PRODUCT (indeed, $$x+y-2xy = x(1-y)+y(1-x) > 0$$ when $$0<x,y<1$$), so $$x+y>2xy \Rightarrow \frac{x+y}{2xy}>1$$, i.e. z>1.

E
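The key step in chetan2u's solution (for 0 < x, y < 1 the sum always exceeds twice the product, since x+y-2xy = x(1-y)+y(1-x) > 0) is easy to spot-check numerically. A quick Python sketch (the random sampling is just an illustration, not part of the original post):

```python
import random

def z(x, y):
    return (x + y) / (2 * x * y)

random.seed(0)
for _ in range(10_000):
    # draw a pair and order it so that 0 < y <= x < 1
    a, b = random.random(), random.random()
    x, y = max(a, b), min(a, b)
    if y == 0.0:
        continue                      # avoid division by zero
    assert x + y > 2 * x * y          # sum beats twice the product
    assert z(x, y) > 1                # hence z > 1 (answer E)
print("all samples satisfy z > 1")
```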
2018-10-23 09:40:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6527894735336304, "perplexity": 8399.535052048474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516123.97/warc/CC-MAIN-20181023090235-20181023111735-00144.warc.gz"}
https://mathematica.stackexchange.com/questions/99970/make-own-rule-for-multiplication/99986#99986
# Make own rule for multiplication

I'm new to Mathematica, so I still cannot use it properly. I want to do symbolic programming. My question is: is there any way to define our own multiplication? Suppose $a,b$ are arbitrary variables. I want to define $ba=qab$ as the rule, where $q$ is some constant. Then if I compute $baab$, I want the result to become $baab=qabab=q^{2}aabb=q^{2}a^{2}b^{2}$ (I swap the position of the first $b$ from the left past $a$ two times).

• Welcome to Mathematica.SE! 1) As you receive help, try to give it too, by answering questions in your area of expertise. 2) Take the tour and check the faqs! 3) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign! – user9660 Nov 20 '15 at 13:50
• You might consider using NonCommutativeMultiply[] in your implementation. Nov 20 '15 at 13:56
• I'll try to look at that command and how it works then. Nov 20 '15 at 13:59
• Yes, I want the general rule to be $ba=qab$, so the multiplication is not commutative. Nov 20 '15 at 18:18
• Can all the q's move all the way to the left in any expression? That is, do they commute with a and b? Nov 20 '15 at 18:49

As suggested in comment by J.M., NonCommutativeMultiply might be useful here. Using //. and two replacement rules you can get the desired results.

$ncmRules = {
   (* Change b ** a to q a ** b. *)
   x___ ** b^n_. ** a^m_. ** y___ :> q^(n m) x ** a^m ** b^n ** y,
   (* Replace adjacent powers of same multiplicand by a single power. *)
   x___ ** y_^n_. ** y_^m_. ** z___ :> x ** y^(n + m) ** z
   };

a ** b //. $ncmRules
(* a ** b *)

b ** a //. $ncmRules
(* q a ** b *)

b ** a ** a ** b //. $ncmRules
(* q^2 a^2 ** b^2 *)

b ** a ** b ** a ** a //. $ncmRules
(* q^5 a^3 ** b^2 *)

b ** a ** b ** b ** a //. $ncmRules
(* q^4 a^2 ** b^3 *)

• I haven't checked this page for a week; many thanks @jkuczm, it works. Nov 30 '15 at 17:42

Define your multiplication by two rules:

CircleTimes[x_, y_] := q Times[x, y]

for 2 arguments, and

CircleTimes[a___] :=
 Module[{b, c},
  If[Length[{a}] > 2,
   b = CircleTimes[{a}[[1]], {a}[[2]]];
   c = Join[{b}, {a}[[3 ;; All]]];
   Apply[CircleTimes, c]]]

It can be written shorter; here I separated it into steps for clarity.

Test:

a⊗b⊗b⊗a
(* a^2 b^2 q^3 *)

a⊗b a⊗b
(* a^2 b^2 q^2 *)

An approach similar to that of yarchik but non-commutative is

CircleTimes[a, b] := Times[a, b]
CircleTimes[b, a] := q Times[a, b]
CircleTimes[z_, z_] := Times[z, z]
CircleTimes[z__] :=
 Module[{zz = {z}, tem},
  tem = CircleTimes @@ zz[[-2 ;; -1]];
  (CircleTimes @@ Join[zz[[1 ;; -3]], {First@tem}]) Rest@tem]

Then,

a⊗b
(* a b *)

b⊗a
(* a b q *)

a⊗b⊗b⊗a
(* a^2 b^2 q^2 *)

as specified in the Question. Note that this definition of CircleTimes works only for a and b, because those are the only symbols defined in the Question. It could be generalized to other symbols if the OP wished to provide rules for them.
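For readers without Mathematica, the same normal-ordering can be sketched in plain Python (the helper `normalize` is my own illustration, not part of any answer above). Each adjacent swap of `b a` into `a b` contributes one factor of q, so the exponent of q is exactly the number of (b, a) inversions in the word:

```python
def normalize(word):
    """Rewrite a word in the letters a, b using the rule b a -> q a b.

    Returns (q_power, ordered_word): the accumulated exponent of q and the
    normal-ordered word with every a moved before every b.
    """
    letters = list(word)
    q_power = 0
    swapped = True
    while swapped:                      # bubble sort; one q per adjacent swap
        swapped = False
        for i in range(len(letters) - 1):
            if letters[i] == "b" and letters[i + 1] == "a":
                letters[i], letters[i + 1] = "a", "b"
                q_power += 1
                swapped = True
    return q_power, "".join(letters)

print(normalize("baab"))    # (2, 'aabb')  ~ q^2 a^2 b^2
print(normalize("babaa"))   # (5, 'aaabb') ~ q^5 a^3 b^2
print(normalize("babba"))   # (4, 'aabbb') ~ q^4 a^2 b^3
```

The three sample outputs agree with the `$ncmRules` results shown in the first answer.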
2021-10-24 10:05:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5903806090354919, "perplexity": 2205.9233176730495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00651.warc.gz"}
https://hackage.haskell.org/package/assert4hs-core-0.1.0/docs/Test-Fluent-Assertions-Maybe.html
assert4hs-core-0.1.0: A set of assertions for writing more readable test cases

Test.Fluent.Assertions.Maybe

Description

This library aims to provide a set of combinators to assert on the Maybe type.

Synopsis

# Documentation

Assert that the subject under test is empty:

assertThat (Just 10) isNothing

Assert that the subject under test is not empty:

assertThat (Just 10) isJust

Assert that the subject under test is not empty, and extract the contained value:

assertThat (Just 10) extracting
2022-01-19 06:10:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17125040292739868, "perplexity": 12475.373067018321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301263.50/warc/CC-MAIN-20220119033421-20220119063421-00009.warc.gz"}
https://brilliant.org/problems/5-13-21/
# 5, 13, 21

Algebra Level 2

$5,\, 13,\, 21,\, 29,\, 37,\, \ldots$

Determine the sum of the first 50 numbers in the list (that follows an arithmetic progression).
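With first term a = 5, common difference d = 8, and n = 50 terms, the arithmetic-progression sum formula $S_n = \frac{n}{2}\left(2a+(n-1)d\right)$ gives $25(10+392)=10050$. A one-line check (my worked solution, not part of the original page):

```python
a, d, n = 5, 8, 50                           # first term, common difference, count
closed_form = n * (2 * a + (n - 1) * d) // 2
assert closed_form == sum(a + d * k for k in range(n))
print(closed_form)  # 10050
```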
2017-09-22 13:41:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6923105120658875, "perplexity": 1079.8618473229203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688966.39/warc/CC-MAIN-20170922130934-20170922150934-00326.warc.gz"}
https://mathhelpforum.com/tags/multiple/
# multiple 1. ### The probability that a number is the maximum of multiple binomial draws I'm trying to figure out how to calculate the probability that a given number is the maximum value of multiple binomial draws. Intuitively I know that the probability is dependent on the number of trials (N), the binomial probability of success (p) and the number of binomial draws (J). Does... 2. ### Smallest multiple of 999 which is the smallest multiple of 999 which does not contain the number 9 ? I need to know how to solve this one, not brute force. 3. ### (Multiple regression) Can I develop an F-test for zero-intercept? (Multiple regression) Can we develop an F-test for testing this hypothesis? RM : H0 : Y = e FM : H1 : Y=B0 + B1X1 + .... + BpXp + e (e : error term, r.v) As I learned, when we want to find the fitted regression equation, we should not neglect intercept part. It should not be considered zero... 4. ### Best statistical method for choosing an element who has Multiple properties I am looking for the best method that allows choosing the best element from a list of elements where each one had multiple properties, My exact goal is choosing the best program (Prog01 or Prog02 ..) from a set of six programs based on their multiple properties, As shown in this Picture: To... 5. ### Conditional proof for multiple quantifier Hi, I don't know how to prove ((Ǝx) F(x) →(Ǝx) (G(x)) with conditional proof from: ((Ǝx) F(x) → (∀z) H(z)) H(a) →G(b) Thanks 6. ### Prove that large number is a multiple of 7 using modular arithmetic How can you show that 3^54321 - 6 is a multiple of 7? I know you would use modular arithmetic (and maybe the Euclidean algorithm?), but I don't know how to go about doing that. Any help would be greatly appreciated! 7. ### Area of multiple circles Hi so my question is how do i find the area of the smaller circles if i don't have the radius of the smaller circles but only the large one? 
here is the question context "The cross section of a large circular conduit with radius 12.00 cm has seven smaller equal circular conduits within it." a)... 8. ### Question about Multiple Integrals I know a lot of you hate it when people post pictures, but for this purpose I need to. I have some questions about integrals. Confusion 1: My book gives steps on how to find limits of multiple integrals, and I am trying to grasp understanding of what they are doing. They say you need to trace... 9. ### Multiple of 186 M = 5 + 5^2 + 5^3 + .... + 5^2004. Prove that M is multiple of 186. I have absolutely no idea how to start this one... 10. ### Multiple of 13 A = 3^2008 + 5^2008 + 7^2008. You need to prove that A is multiple of 13. Thank you 11. ### derivative of multiple integral I have a problem in derivative of multiple integrals. for example i don't know how can I do it for the below question. would you please help me to solve it? thanks A= \int \int q(y,z)(Ln|z|+tr(z^{-1}(x'x-2 x' y+y' y)))dy dz where x,y,z are vectors we want derivative of A wrt q(y,z). please look... 12. ### Calculate the chance-level of an 8-alternative multiple choice questionnaire Hello everyone. In short: I would like to know how to compute the chance-level of an 8-alternative multiple choice questionnaire with 6 questions. Here is some more information: In my experiment, participants viewed 6 stimuli, each stimulus conveying one of the six basic emotions (anger... 19. ### A problem related to least common multiple LCM is an abbreviation used for Least Common Multiple in Mathematics. We say LCM (a, b, c) = L if and only if L is the least integer which is divisible by a, b and c. If I'm given a, b and L I have to find c such that LCM (a, b, c) = L. If there are several solutions, I have to choose the one... 20. ### Multiple choice questions Identify the letter of the choice that best completes the statement or answers the question. Write this letter down in the exam book. 1. 
The central limit theorem is applicable a. only to non-symmetric populations b. only to normally distributed populations c...
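Thread 6 in the list above asks to show that $3^{54321}-6$ is a multiple of 7. By Fermat's little theorem $3^6 \equiv 1 \pmod 7$, and $54321 = 6\cdot 9053 + 3$, so $3^{54321} \equiv 3^3 = 27 \equiv 6 \pmod 7$. This can be spot-checked in a couple of lines of Python (a verification sketch added here, not a post from the thread):

```python
# Fermat: 3^6 = 1 (mod 7), and 54321 = 6*9053 + 3,
# so 3^54321 = 3^3 = 27 = 6 (mod 7), i.e. 3^54321 - 6 = 0 (mod 7).
remainder = pow(3, 54321, 7)   # built-in fast modular exponentiation
assert 54321 % 6 == 3
assert remainder == 6          # hence 7 divides 3**54321 - 6
print(remainder)  # 6
```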
2019-11-19 22:30:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.539066731929779, "perplexity": 558.3177968967732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670268.0/warc/CC-MAIN-20191119222644-20191120010644-00173.warc.gz"}
https://www.authorea.com/users/23/articles/54806-the-central-artery-of-a-star-forming-region/_show_article
The Central Artery of a Star-Forming Region Abstract Over the past several years, it has become apparent that what we once thought of as blob-like star-forming molecular "clouds" are in fact laced with prominent filamentary structure (André 2014). A paradigm has emerged in which the so-called "dense cores" (Myers 1983) that provide the immediate reservoirs for the formation of stars are thought to form within much higher-aspect-ratio "filamentary" structures. Recent work has shown, however, that even though "cores" are still typically thought of as thermal, quiescent and velocity-coherent (Goodman 1998) and only slightly prolate (Myers 1991), at high resolution they appear to contain very significant filamentary sub-structures (Pineda 2011). One dense core that has been particularly well-studied of late is called "Barnard 5" or "B5," and it is located at the extreme Eastern edge of the Perseus complex of molecular clouds (Figure 1, showing B5 in context of the COMPLETE Survey...maybe in rotating 3D??). Most recently, Pineda et al. (2015) found that the highest-resolution (VLA) mapping of dense gas (NH$$_3$$) (Pineda 2011) inside what was previously thought to be a relatively unstructured "velocity-coherent" dense core (Pineda 2010) reveals a small, extremely young star cluster in the process of forming. The timescale of the cluster formation is very short (perhaps tens of thousands of years, Pineda et al. (2015)) in comparison with what is typically thought to be the lifetime of a dense core (of order hundreds of thousands of years, xxref). There is great debate about the lifetime of larger-scale molecular clouds like the Perseus complex that hosts B5, but typical estimates range from millions (xxref) to tens of millions (xxref) of years.
This Letter shows that the tiny cluster-forming filaments ($$\sim 10^{17}$$ cm) within the B5 core ($$\sim 10^{18}$$ cm) appear to lie on top of, and aligned with, a much longer filament that snakes its way from North to South through the entire B5 region ($$>10^{19}$$ cm) of Perseus ($$\sim 10^{20}$$ cm). The first evidence for this apparent structural identification came from Herschel Space Telescope imaging of long-wavelength dust continuum emission, but stronger support comes from the new velocity-resolved gas observations presented here (xxI hope!) References 1. P. André, J. Di Francesco, D. Ward-Thompson, S.-I. Inutsuka, R. E. Pudritz, J. E. Pineda. From Filamentary Networks to Dense Cores in Molecular Clouds: Toward a New Paradigm for Star Formation. Protostars and Planets VI 27-51 (2014). Link 2. A. A. Goodman, J. A. Barranco, D. J. Wilner, M. H. Heyer. Coherence in Dense Cores. II. The Transition to Coherence. 504, 223-246 (1998). Link 3. P. C. Myers, G. A. Fuller, A. A. Goodman, P. J. Benson. Dense cores in dark clouds. VI - Shapes. 376, 561-572 (1991). Link 4. J. E. Pineda, S. S. R. Offner, R. J. Parker, H. G. Arce, A. A. Goodman, P. Caselli, G. A. Fuller, T. L. Bourke, S. A. Corder. The formation of a quadruple star system with wide separation. 518, 213-215 (2015). Link 5. J. E. Pineda, A. A. Goodman, H. G. Arce, P. Caselli, J. B. Foster, P. C. Myers, E. W. Rosolowsky. Direct Observation of a Sharp Transition to Coherence in Dense Cores. 712, L116-L121 (2010). Link 6. P. C. Myers, P. J. Benson. Dense cores in dark clouds. II - NH3 observations and star formation. ApJL 266, 309 IOP Publishing, 1983. Link 7. Jaime E. Pineda, Alyssa A. Goodman, Héctor G. Arce, Paola Caselli, Steven Longmore, Stuartt Corder. EXPANDED VERY LARGE ARRAY OBSERVATIONS OF THE BARNARD 5 STAR-FORMING CORE: EMBEDDED FILAMENTS REVEALED. ApJ 739, L2 IOP Publishing, 2011. Link
2019-03-22 22:43:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5388675928115845, "perplexity": 8704.517956732827}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202698.22/warc/CC-MAIN-20190322220357-20190323002357-00185.warc.gz"}
http://www.igsor.net/projects/tagit/screenshots.html
# Screenshots¶ The images of a directory can be scanned and added to the tagit database. It’s optional but saves time when scanning through the images. Currently, this action is to be run from the terminal. The GUI shows a search bar (top), some extra information (right hand side) and the images. The image grid size (3x3 here) is configurable. Search filters can be added to restrict the displayed images. Images can be selected and tags can be manipulated.
2020-10-24 16:58:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8208699822425842, "perplexity": 2076.8483468429386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884322.44/warc/CC-MAIN-20201024164841-20201024194841-00491.warc.gz"}
https://math.stackexchange.com/questions/2076766/how-to-know-whether-lagrange-multipliers-gives-maximum-or-minimum
# How to know whether Lagrange multipliers gives maximum or minimum? My book tells me that of the solutions to the Lagrange system, the smallest is the minimum of the function given the constraint and the largest is the maximum given that one actually exists. But what if we only have one point as a solution? How to know whether Lagrange multipliers gives maximum or minimum? • If your constraints describe a closed and bounded domain (that is, a bounded domain with a boundary), then we must attain both a maximum and a minimum. So, it's impossible to get only one critical point. – Omnomnomnom Dec 30 '16 at 1:31 • @Omnomnomnom However, if the region is not compact, it is possible to get only one critical point. Examples can be found here, here and here. – Pedro Dec 30 '16 at 4:12 • I have updated my answer as it was incorrect. Please see the new one and see the comments on it for details. – tilper Dec 30 '16 at 4:52 • Note that in Lagrange multipliers theorem, you assume that the maximum/minimum exists on the set of constraints and the method only gives candidates for it. Consider for example $f(x,y)=x+y^3$ along the $y$-axis (this is, $g(x,y)=x=0$. The gradient of the functions are parallel at the origin but $f$ has no maximum and no minimum on the $y$-axis. – Taladris Dec 30 '16 at 9:28 As Om(nom)$^3$ said in the comments, if you're working on a closed and bounded region then it's not possible to get only one critical point. If you're not on a closed and bounded region then it's no longer guaranteed that you'll have more than one critical point. If you only have one critical point then you can use the Bordered Hessian technique. (Thanks to ziggurism for clearing that up.) • Are you sure the normal multivariable second derivative test applies? 
I was taught a different technique for constrained problems, the bordered Hessian en.m.wikipedia.org/wiki/Hessian_matrix#Bordered_Hessian – ziggurism Dec 30 '16 at 1:55 • @ziggurism is that for all constrained problems or just constrained problems on a bounded region? The wiki was unclear unless I glossed over it. – tilper Dec 30 '16 at 2:09 • being a local extremum is a local property. I can't see how it can be affected by the boundedness of the region. Should I try to concoct an example where normal second derivative says max, but bordered hessian says min? – ziggurism Dec 30 '16 at 2:24 • @ziggurism I'd be curious to see one. I'm mainly just wondering what the criteria are for the bordered hessian to apply since the first sentence in that section says it's for "certain" constrained problems, implying it doesn't work in all constrained problems. Unboundedness of the region is just a guess on my part since it's one of the ways a region will be no longer compact. – tilper Dec 30 '16 at 2:30 • I'm not sure what they meant with "certain constrained systems", but my guess would be holonomic constraints, i.e. exactly those constrained systems to which the method of Lagrange multipliers applies. – ziggurism Dec 30 '16 at 2:32 On a closed bounded region a continuous function achieves a maximum and minimum. If you use Lagrange multipliers on a sufficiently smooth function and find only one critical point, then your function is constant because the theory of Lagrange multipliers tells you that the largest value at a critical point is the max of your function, and the smallest value at a critical point is the min of your function. Thus max = min, i.e. the function is constant. Also note that "critical point" should probably be called something else, like "point of interest" because usually critical points are defined as points where the gradient is zero. 
• Consider $f(x,y)=x^2+y^2$ with $x+y=1$. Lagrange gives $\langle 2x,2y\rangle=\lambda\langle 1,1\rangle$, so $x=\frac{1}{2}=y$ and $\lambda=1$. My function is not constant, so what do you mean by "on a sufficiently smooth function"? – Ahmed S. Attaalla Dec 30 '16 at 5:20
• @AhmedS.Attaalla your function is not restricted to a closed bounded region and thus my statement does not apply. – nullUser Dec 30 '16 at 5:44

In the Lagrange multipliers method the points obtained will be critical points (solutions of an equation of the form $\nabla f(x)=\lambda\nabla\varphi(x)$) of an objective function $f$ (of class $C^1$) restricted to a region $M$ of the form $M=\varphi^{-1}(c)$, where $\varphi$ is a function (of class $C^1$) that comes from the constraint (which has the form $\varphi(x)=c$). Usually, the existence of the maximum and the minimum comes from the continuity of $f$ and the compactness of $\overline{M}$. In this case, $f$ has at least two critical points on $\overline{M}$. However, there are cases in which the equation $\nabla f(x)=\lambda\nabla\varphi(x)$ gives us only one solution $p\in M$ (this is because the other critical point is in $\overline{M}\setminus M$). Here is a possible approach that sometimes works for these cases:

• Show that the maximum (or minimum) is not in $\overline{M}\setminus M$.
• Conclude that $p$ is the maximum (or minimum) of $f$ on $M$.

You can see examples of this case here, here and here.

In fact the normal second derivative test doesn't apply to constrained extremum problems. You should instead use the Bordered Hessian method. In brief, instead of computing the positive-definiteness of the Hessian matrix of second partial derivatives of $f$, you instead compute the Hessian of $f-\lambda g$, including derivatives with respect to $\lambda$.

You could also just pick another point satisfying the constraints, evaluate the function for that point, and see if that value is higher or lower than what you found with Lagrange multipliers.
• That only works if the extreme value is absolute. It won't necessarily work if it's a relative extreme value. – tilper Dec 30 '16 at 2:08 • But if there's only one extreme point, wouldn't that work? – Nathan H. Dec 30 '16 at 3:15 • you'd also need to make sure it's actually an extreme point and not a saddle point. – tilper Dec 30 '16 at 3:37 • Yeah, you're right. Just read on Wikipedia that the points you get are possible maxima/minima, but they aren't necessarily maxima/minima. I forgot about that. – Nathan H. Dec 30 '16 at 3:49
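As a concrete sketch of the bordered-Hessian test on the example $f(x,y)=x^2+y^2$ with constraint $x+y=1$ from the comments (a worked check I have added, not a post from this thread): at the critical point $(1/2,1/2)$ with $\lambda=1$, the Lagrangian $L=f-\lambda g$ has $L_{xx}=L_{yy}=2$, $L_{xy}=0$, and the standard sufficiency rule says that with $n=2$ variables and $m=1$ constraint, a bordered-Hessian determinant of sign $(-1)^m$ (negative) signals a constrained minimum:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists (cofactor expansion)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# L = x^2 + y^2 - lam*(x + y - 1) at (1/2, 1/2), lam = 1.
# Bordered Hessian: [[0, g_x, g_y], [g_x, L_xx, L_xy], [g_y, L_xy, L_yy]]
H = [[0, 1, 1],
     [1, 2, 0],
     [1, 0, 2]]

d = det3(H)
print(d)      # -4
assert d < 0  # sign (-1)^1: the critical point is a constrained minimum
```

This agrees with the geometry: $(1/2, 1/2)$ is the point of the line $x+y=1$ closest to the origin.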
2019-11-21 04:10:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8613224029541016, "perplexity": 212.79004106390272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00214.warc.gz"}
https://tex.stackexchange.com/questions/284797/font-definition-file-loading-during-verbatim-with-fancyvrb
I am trying to define a verbatim-like command for regular expressions. I say verbatim-like because I'd like the regex operators to be typeset specially. The idea is that you can type a regular expression like (a|b)+ but it would be formatted neatly, such as with the + superscripted. For simplicity, I started from fancyvrb and redefined the catcodes of the operators.

\documentclass{article}
\usepackage{fancyvrb}
\usepackage{bera}
%\usepackage{cmbright}

\def\lparen{\textrm{(}}
\def\rparen{\textrm{)}}
\def\plus{\ensuremath{^+}}
\def\pipe{\textrm{\thinspace\textbar\thinspace}}

\CustomVerbatimCommand{\regex}{Verb}{showspaces,codes={%
  \catcode`+=\active\lccode`~=`+\lowercase{\let~\plus}%
  \catcode`(=\active\lccode`~=`(\lowercase{\let~\lparen}%
  \catcode`)=\active\lccode`~=`)\lowercase{\let~\rparen}%
  \catcode`|=\active\lccode`~=`|\lowercase{\let~\pipe}}}

\begin{document}
Whitespace is defined by \regex!( |\t|\n|\f|\v|\r)+!.
\end{document}

The problem I'm having is that it works fine with some font packages (bera, txfonts, pxfonts, lmodern) but not others (cmbright). The error is

(/opt/local/share/texmf-texlive/tex/latex/cmbright/omlcmbrm.fd
! Argument of @providesfile has an extra }.
<inserted text>
                par
l.24 [2005/04/13 v8.1 (WaS)]

It appears as though during the verbatim a .fd file is trying to load, and since I have redefined the meanings of ( and ) it isn't working. I know it's dangerous to make common symbols like ( and ) active, but as long as it was limited to the verbatim I thought it would be fine. In the case of cmbright, just using math mode before the regex causes the .fd file to load first and everything is fine.

Any way to get around this? Or is this just not going to work? I could just use the commandchars feature of fancyvrb to escape to LaTeX when I want to typeset an operator specially, but I was hoping for this type of approach.
Force the reading of the math font at begin document:

\documentclass{article}
\usepackage{fancyvrb}
\usepackage{bera}
\usepackage{cmbright}

\newcommand\lparen{\textrm{(}}
\newcommand\rparen{\textrm{)}}
\newcommand\plus{\ensuremath{^+}}
\newcommand\pipe{\textrm{\thinspace\textbar\thinspace}}

\CustomVerbatimCommand{\regex}{Verb}{showspaces,codes={%
  \catcode`+=\active\lccode`~=`+\lowercase{\let~\plus}%
  \catcode`(=\active\lccode`~=`(\lowercase{\let~\lparen}%
  \catcode`)=\active\lccode`~=`)\lowercase{\let~\rparen}%
  \catcode`|=\active\lccode`~=`|\lowercase{\let~\pipe}}%
}

\makeatletter
\AtBeginDocument{\check@mathfonts}
\makeatother

\begin{document}
Whitespace is defined by \regex!( |\t|\n|\f|\v|\r)+!.
\end{document}

• Knew there must be something like that. Thanks. However, that fixes the problem for cmbright, but not for iwona (and probably others, is my guess). – D. Atkinson Dec 27 '15 at 0:05
• @D.Atkinson Are you doing \usepackage[math]{iwona}? If I call it like this, the workaround is good. – egreg Dec 27 '15 at 0:16
• I wasn't, but I am now, and it works fine. Thanks again! My approach still seems somewhat dangerous to me if a font definition file could be loaded while the catcodes are changed, but as of now it works for all the fonts I would use. – D. Atkinson Dec 27 '15 at 0:22
• @D.Atkinson Changing category codes is always a bit dangerous; in this case it's the activation of ( that confuses LaTeX (the optional argument to \ProvidesFile in omsiwona.fd contains (). – egreg Dec 27 '15 at 0:29
2019-08-21 10:30:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9199534058570862, "perplexity": 1514.2488331713346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00449.warc.gz"}
http://khvmathematics.blogspot.com/2007/11/jokes-on-maths.html
Math Formula? ## Friday, November 2, 2007 ### JOKES ON MATHS IN DIFFERENT VIEWS Several scientists were all posed the following question: "What is 2 * 2 ?" The engineer whips out his slide rule (so it's old) and shuffles it back and forth, and finally announces "3.99". The physicist consults his technical references, sets up the problem on his computer, and announces "it lies between 3.98 and 4.02". The mathematician cogitates for a while, then announces: "I don't know what the answer is, but I can tell you, an answer exists!". Philosopher smiles: "But what do you mean by 2 * 2 ?" Logician replies: "Please define 2 * 2 more precisely."
https://mathhelpforum.com/tags/equicontinuity/
# equicontinuity

1. ### Sets of Functions, Functionals, and Equicontinuity
Real Analysis: Sets of Functions, Functionals, and Equicontinuity. I originally asked my question on another forum and did not get the answer I was looking for. Here is the link to the question: real analysis - Sets of Functions, Functionals, and Equicontinuity - Mathematics. Any help would be...

2. ### Equicontinuity implies Uniform Equicontinuity
Let A be an equicontinuous subset of C(X,F). Show that A is uniformly equicontinuous, i.e.: $$\forall \varepsilon > 0,\ \exists \delta > 0 : \left\| f(x) - f(x_0) \right\|_F < \varepsilon,\ \forall f \in A,\ \forall x, x_0 \in X,\ d(x, x_0) < \delta$$...

3. ### Equicontinuity
Let X be a compact metric space and Y a Banach space. Let f and g $\in$ C(X,F). Prove that: 1) A+B is equicontinuous, and 2) A ∪ B is equicontinuous. I really don't know where to start...

4. ### Equicontinuity Question
Let f be a continuous function on \mathbb{R}. Let f_n = f(nt) for n \in \mathbb{N} be equicontinuous on [0,1]. I.e., \forall \epsilon >0, \exists \delta >0 such that if |x-y| < \delta, then |f_n(x) - f_n(y)| < \epsilon for all n. What can we conclude about f? All I am able to get is that...

5. ### Show uniform convergence implies equicontinuity and uniform boundedness
Hi guys! I was wondering if you could look at my proof and tell me if you think it's correct/rigorous enough. I'm having a little difficulty with this. The question is: Let (f_n) be a sequence of functions in C([0,1]), with f_n uniformly converging to f in [0,1]. Show without using the...

6. ### Question regarding Equicontinuity and the open ball of functions
Hey - my hunch tells me that given an arbitrary open ball of functions (such as an arbitrary open ball centered at some f*) in the space of continuous functions defined on the compact interval [a,b] (i.e., C0[a,b] - sorry, I don't know LaTeX very well) with the standard sup norm, this set is NOT...

7. ### Equicontinuity
Hi, I've been working on: Construct a bounded sequence of continuous functions f_n:[0,1] \rightarrow \mathbb{R} such that \left| \left| f_n - f_m \right| \right| = \sup \left| f_n(x) - f_m(x) \right| = 1, n \neq m, x \in [0,1]. Can such a sequence be equicontinuous? So far I have the...

8. ### Uniform convergence and equicontinuity
This problem is giving me a headache. Suppose \{f_n\} is a sequence of functions defined on [a,b] which converges uniformly to f. Prove that \{f_n\} is equicontinuous. That is, show that whenever \epsilon >0, there is a \delta >0 such that if n is a positive integer and x,y \in [a,b] with...

9. ### need help with two (basic?) proofs involving equicontinuity
I'm stuck trying to wrap my head around two proofs involving equicontinuity.
1) If an equicontinuous sequence of functions (fn) converges pointwise to f on a set S, then f is uniformly continuous on S.
2) If a sequence of continuous functions (fn) converges uniformly on a compact set S, then...
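For question 9's second claim (and for question 8, once the $f_n$ are assumed continuous), the standard $\varepsilon/3$ argument runs as follows — a sketch, not a full write-up:

```latex
% Sketch: if each $f_n$ is continuous and $f_n \to f$ uniformly on a
% compact set $S$, then $\{f_n\}$ is equicontinuous.
% Given $\varepsilon > 0$, choose $N$ with $\sup_S |f_n - f| < \varepsilon/3$
% for all $n \ge N$. Then for such $n$,
\[
|f_n(x) - f_n(y)|
  \le |f_n(x) - f(x)| + |f(x) - f(y)| + |f(y) - f_n(y)|
  < \tfrac{2\varepsilon}{3} + |f(x) - f(y)|.
\]
% $f$ is continuous on the compact set $S$, hence uniformly continuous, so a
% single $\delta_0$ works for every $n \ge N$ at once. Each of the finitely
% many remaining functions $f_1, \dots, f_{N-1}$ is uniformly continuous on
% $S$, giving $\delta_1, \dots, \delta_{N-1}$; take $\delta = \min_i \delta_i$.
```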
https://cs.stackexchange.com/questions/35353/proving-that-the-set-of-non-universal-cfgs-is-not-in-np
# Proving that the set of non-universal CFGs is not in NP

How do I prove that $\overline{\mathrm{ALL_{CFG}}}$ is not in NP, where

$\qquad\mathrm{ALL_{CFG}} = \{\langle G \rangle \mid G \text{ is a CFG}, L(G) = \Sigma^* \}$?

Hint: Use the fact that universality of context-free grammars (that is, deciding whether $L(G) = \Sigma^*$) is undecidable. • I thought it had something to do with proving $ALL_{CFG}$ is undecidable, but I don't know how to get from there to proving that it is not in NP. – Moshe Hoori Dec 16 '14 at 6:06 • I'm not sure, but this is what I've come up with so far: $ALL_{CFG}$ is undecidable, so $\overline{ALL_{CFG}}$ is undecidable too. Undecidable languages can't be in NP. Is that right? – Moshe Hoori Dec 16 '14 at 11:42
https://www.isr-publications.com/jmcs/articles-1334-partially-equi-integral-phi-0-stability-of-nonlinear-differential-systems
# Partially equi-integral $\phi_0$-stability of nonlinear differential systems

Volume 16, Issue 4, pp 472-480. Publication Date: December 15, 2016.

### Authors

Junyan Bao - College of Mathematics and Information Science, Hebei University, Baoding, 071002, China.
Xiaojing Liu - College of Mathematics and Information Science, Hebei University, Baoding, 071002, China.
Peiguang Wang - College of Electronic and Information Engineering, Hebei University, Baoding, 071002, China.

### Abstract

This paper introduces the notions of partially equi-integral stability and partially equi-integral $\phi_0$-stability for two differential systems, and establishes some criteria on stability relative to the $x$-component by using the cone-valued Lyapunov functions and the comparison technique. An example is also given to illustrate our main results.

### Keywords

- Differential systems
- partially integral stability
- partially integral $\phi_0$-stability
- cone-valued Lyapunov functions
- comparison technique
- MSC: 34D20
http://nrich.maths.org/1928/clue
### Gold Again

Without using a calculator, computer or tables, find the exact values of cos 36° cos 72° and also cos 36° − cos 72°.

### Pythagorean Golden Means

Show that the arithmetic mean, geometric mean and harmonic mean of a and b can be the lengths of the sides of a right-angled triangle if and only if a = bx^3, where x is the Golden Ratio.

### Golden Triangle

Three triangles ABC, CBD and ABD (where D is a point on AC) are all isosceles. Find all the angles. Prove that the ratio of AB to BC is equal to the golden ratio.

# Golden Ratio

##### Stage: 5 Challenge Level:

Take $b = 1$ and write the equation in terms of $a$. Then solve this equation to find the golden ratio.
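The page does not restate the defining equation, but assuming the usual definition of the golden ratio, $a/b = (a+b)/a$, the hint plays out as follows:

```latex
% With b = 1, the defining relation a/b = (a+b)/a becomes an equation in a alone:
\[
  a = \frac{a+1}{a}
  \quad\Longrightarrow\quad a^2 - a - 1 = 0
  \quad\Longrightarrow\quad a = \frac{1+\sqrt{5}}{2} \approx 1.618,
\]
% keeping only the positive root, since a is a ratio of lengths.
```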
https://www.sanfoundry.com/soil-mechanics-questions-answers-density-index-relative-compaction/
# Soil Mechanics Questions and Answers – Density Index and Relative Compaction

This set of Soil Mechanics Multiple Choice Questions & Answers (MCQs) focuses on "Density Index and Relative Compaction".

1. The density index ID does not express the relative compactness of a natural cohesion-less soil.
a) True
b) False

Answer: b
Explanation: The density index is used to express the relative compactness or degree of compaction of a natural cohesion-less soil deposit. It is also known as relative density or degree of density.

2. The term density index ID is used for cohesion-less soils only.
a) True
b) False

Answer: a
Explanation: The term density index ID is not applicable to cohesive soils because of uncertainties in the laboratory determination of the voids ratio in the loosest state of the soil (emax).

3. The formula for the density index ID is ______
a) $$(e_{max} - e)$$
b) $$\frac{(e_{max} - e)}{(e_{min}-e)}$$
c) $$\frac{(e_{max} - e_{min})}{(e_{max}-e)}$$
d) $$\frac{(e_{max} - e)}{(e_{max}-e_{min})}$$

Answer: d
Explanation: The density index is defined as the ratio of the difference between the voids ratio of the soil in its loosest state emax and its natural voids ratio e to the difference between the voids ratios in the loosest and densest states: ID = $$\frac{(e_{max} - e)}{(e_{max}-e_{min})}$$.

4. When a cohesion-less soil in its natural state is in its loosest form, its density index ID is equal to ____
a) 0
b) 0.5
c) 1
d) 1.5

Answer: a
Explanation: A cohesion-less soil in its loosest form has voids ratio e = emax. Since ID = $$\frac{(e_{max} - e)}{(e_{max}-e_{min})}$$, ID = $$\frac{(e_{max} - e_{max})}{(e_{max}-e_{min})}$$ = 0.

5. The density index of a natural deposit in its densest state is ______
a) 0
b) 0.5
c) 1
d) 1.5

Answer: c
Explanation: A cohesion-less soil in its densest form has voids ratio e = emin. Since ID = $$\frac{(e_{max} - e)}{(e_{max}-e_{min})}$$, ID = $$\frac{(e_{max} - e_{min})}{(e_{max}-e_{min})}$$ = 1.

6. A soil has a porosity of 30%. Its voids ratios in the loosest and densest states are 0.92 and 0.35 respectively. What will be its density index?
a) 0.865
b) 0.872
c) 0.861
d) 0.881

Answer: c
Explanation: Given porosity n = 30% = 0.3, voids ratio in the loosest state emax = 0.92 and voids ratio in the densest state emin = 0.35:
e = $$\frac{n}{(1-n)}$$ = $$\frac{0.3}{(1-0.3)}$$ = 0.429
ID = $$\frac{(e_{max} - e)}{(e_{max}-e_{min})}$$ = $$\frac{(0.92-0.429)}{(0.92-0.35)}$$ = 0.861.

7. The density index ID in terms of dry densities is given by ______
a) $$\frac{(γ_{d,max}-γ_{d,min}) γ_{d,max}}{(γ_{d,max}-γ_{d}) γ_d}$$
b) $$\frac{(γ_d-γ_{d,min}) γ_{d,max}}{(γ_{d,max}-γ_{d,min}) γ_d}$$
c) $$\frac{(γ_d-γ_{d,min}) γ_{d,max}}{(γ_d-γ_{d,max}) γ_d}$$
d) $$\frac{(γ_d-γ_{d,min}) γ_{d,max}}{(γ_{d,max}-γ_{d}) γ_d}$$

Answer: b
Explanation: The density index is ID = $$\frac{(e_{max} - e)}{(e_{max}-e_{min})}$$, with
$$e=(\frac{Gγ_w}{γ_d})-1, \quad e_{max}=(\frac{Gγ_w}{γ_{d,min}})-1, \quad e_{min}=(\frac{Gγ_w}{γ_{d,max}})-1.$$
Substituting the values of the voids ratios,
ID = $$\frac{((\frac{Gγ_w}{γ_{d,min}}) - (\frac{Gγ_w}{γ_d}))}{((\frac{Gγ_w}{γ_{d,min}})- (\frac{Gγ_w}{γ_{d,max}}))}$$ = $$\frac{(γ_d-γ_{d,min}) γ_{d,max}}{(γ_{d,max}-γ_{d,min}) γ_d}.$$

8. The relative density of a loose granular soil lies in the range ______ (in percentage).
a) 0-15
b) 15-35
c) 35-65
d) 85-100

Answer: b
Explanation: The following table gives the density description of a granular soil on the basis of relative density:

| Relative Density (%) | Density Description |
| --- | --- |
| 0-15 | Very loose |
| 15-35 | Loose |
| 35-65 | Medium |
| 65-85 | Dense |
| 85-100 | Very Dense |

9. A soil has a dry density of 17.5 kN/m3. Its densities corresponding to the most compact and loosest states are 18.5 kN/m3 and 13 kN/m3 respectively. The relative density of the soil is ______
a) 0.871
b) 0.865
c) 0.869
d) 0.860

Answer: b
Explanation: Given dry density γd = 17.5 kN/m3, maximum dry density γd,max = 18.5 kN/m3 and minimum dry density γd,min = 13 kN/m3:
ID = $$\frac{(γ_d-γ_{d,min})γ_{d,max}}{((γ_{d,max}-γ_{d,min}) γ_d )}$$ = $$\frac{((17.5-13)*18.5)}{((18.5-13)*17.5)}$$ = 0.865.

10. The relative compaction Rc is given by _______
a) γd,max/γd
b) γd/γd,min
c) γd/γd,max
d) γd,min/γd

Answer: c
Explanation: The relative compaction Rc is defined as the ratio of the dry density γd of a soil to its dry density in the most compact state γd,max: Rc = γd/γd,max.

11. When the soil is in its loosest form, the density index is zero and its relative compaction Rc is ______
a) 40%
b) 60%
c) 80%
d) 100%

Answer: c
Explanation: The relationship between the relative compaction Rc and the density index ID (both in percent) is Rc = 80 + 0.2 ID. When ID = 0, Rc = 80 + 0.2 × 0 = 80%.

12. The relative compaction Rc is related to the void ratio of the soil by _______
a) $$\frac{(1+e)}{(1+e_{max})}$$
b) $$\frac{(1+e)}{(1+e_{min})}$$
c) $$\frac{(1+e_{min})}{(1+e)}$$
d) $$\frac{(1+e_{max})}{(1+e_{min})}$$

Answer: c
Explanation: The relative compaction is $$R_c = \frac{γ_d}{γ_{d,max}}$$. Since γd = γs/(1+e) and γd,max = γs/(1+emin),
$$R_c = \frac{γ_d}{γ_{d,max}} = \frac{γ_s/(1+e)}{γ_s/(1+e_{min})} = \frac{(1+e_{min})}{(1+e)}.$$

Sanfoundry Global Education & Learning Series – Soil Mechanics.
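The formulas in questions 3, 6 and 9 are easy to sanity-check numerically. The helper names below are made up for illustration; only the formulas themselves come from the quiz:

```python
def void_ratio_from_porosity(n):
    """e = n / (1 - n), with porosity n as a fraction."""
    return n / (1.0 - n)

def density_index(e, e_min, e_max):
    """I_D = (e_max - e) / (e_max - e_min)."""
    return (e_max - e) / (e_max - e_min)

def density_index_from_densities(gd, gd_min, gd_max):
    """I_D = (gd - gd_min) * gd_max / ((gd_max - gd_min) * gd)."""
    return (gd - gd_min) * gd_max / ((gd_max - gd_min) * gd)

# Question 6: porosity 30%, e_min = 0.35, e_max = 0.92
e = void_ratio_from_porosity(0.30)                   # about 0.429
q6 = density_index(e, e_min=0.35, e_max=0.92)        # about 0.861

# Question 9: dry densities in kN/m^3
q9 = density_index_from_densities(17.5, 13.0, 18.5)  # about 0.865
```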
http://tex.stackexchange.com/tags/margins/hot
# Tag Info 5 There are some options to improve this: Use a p columntype with some width specification for the simplest solution. This uses a \parbox and wraps the content at the box width. Alternatively use a \newcolumntype from array package (Please note that p{0.8\linewidth} is still too wide in this case) Package tabularx provides the automatic adaption of cell ... 4 If I run this sample document \documentclass{book} \usepackage[a4paper,pass,verbose]{geometry} \usepackage{lipsum} \begin{document} \lipsum \end{document} I get, in the log file and on the console, the relevant lengths: * \textheight=550.0pt * \topmargin=22.0pt * \headheight=12.0pt * \headsep=18.06749pt Rounding \headsep is irrelevant, so I'll use ... 3 \areaset has an optional argument to handle BCOR. \documentclass[10pt,twoside, paper=18cm:19cm,pagesize ]{scrreprt} \areaset[5cm]{12cm}{16cm} \usepackage{lipsum} \begin{document} \lipsum[1-10] \end{document} If BCOR is already set you can use the symbolic value current: \documentclass[10pt,twoside, BCOR=5cm,paper=18cm:19cm,pagesize ... 3 Set margins by adjusting the bounding box via \pgfresetboundingbox and scale only axis Pgfplots manual describes adjusting the bounding box in Chapter 4.20.1 Bounding Box Restrictions (v1.12) % Measures: % --------- % total width = \columnwidth = 252pt = 3.49in % total height = 4in = 288pt % lmargin = 0.4in % rmargin = 0.1in % bmargin = ... 2 My interpretation of the question: \documentclass[fontsize=10pt]{scrartcl} \usepackage{geometry,lipsum,calc} \geometry{% a4paper, hcentering, textwidth=14cm, lines=52, includemp=true, showframe} \usepackage[automark]{scrlayer-scrpage} \KOMAoptions{% headsepline, footsepline, plainheadsepline, plainfootsepline, headwidth=textwithmarginpar, ... 2 To move everything to the right do not use adjustwidth just set \hoffset to the required length. 
2 Use tcolorbox instead of mdframed: \documentclass{article} \usepackage{tcolorbox} \begin{document} \begin{tcolorbox}[arc=0mm,boxrule=1pt,colback=white] Test frame\footnote{Test footnote} \begin{tcolorbox}[arc=0mm,boxrule=1pt,colback=white] Test nested frame \end{tcolorbox} \end{tcolorbox} \end{document} 1 It's easy with geometry; the showframe option is just for showing the various parts of the page. \documentclass[oneside]{scrbook} \usepackage[utf8]{inputenc} \usepackage{geometry} \geometry{includemp,showframe} \usepackage{lipsum} \usepackage{marginnote} \begin{document} \section{Introduction} \marginnote{Hello! This is a long margin note to see what ... 1 Menu Document ⇢ Configuration ⇢ Document class ⇢ Option class ⇢ Write the twoside option here, so after fixing the margins, the LaTeX source shows something like this: \documentclass[twocolumn,english,twoside]{article} ... \usepackage{geometry} \geometry{verbose,lmargin=4cm,rmargin=2cm} Then add some content and voilà. If you ... 1 Probably, this is one of those rare cases in which it is preferable to add another answer, rather than editing the one I already gave; indeed, I dislike my previous answer so much that I'll probably remove it. The following, obvious solution is much neater: \documentclass[a4paper,twoside,titlepage]{article} \usepackage[T1]{fontenc} \usepackage[ a4paper, ... 1 I improved upon my own question by taking some bits from this SO question. My code for Python syntax coloring and layout is now like this: \usepackage{color} \DeclareFixedFont{\ttb}{T1}{txtt}{bx}{n}{12} % for bold \DeclareFixedFont{\ttm}{T1}{txtt}{m}{n}{12} % for normal \definecolor{deepblue}{rgb}{0,0,0.5} \definecolor{deepred}{rgb}{0.6,0,0} ... 1 Here's a way, based on my answer at What are the ways to position things absolutely on the page?. The horizontal/vertical locations of the marks can be adjusted to suit, using the first two arguments of \atxy.
\documentclass{article} \usepackage{everypage} \usepackage{xcolor} \usepackage{lipsum} % THESE ARE LaTeX DEFAULTS; CAN CHANGE IF NEEDED. ... 1 Here is another suggestion using the package scrlayer-scrpage for the header and footer. Then it is possible to define additional layers and add them to all (or to selected) page styles. \documentclass[a4paper,twoside]{article} \usepackage{blindtext}% dummy text \usepackage{scrlayer-scrpage} \newcommand\foldmarklength{2mm} ...
http://adam.chlipala.net/cpdt/repo/rev/7ac7f922e78e?revcount=120
### changeset 8:7ac7f922e78e First cut at Intro done author Adam Chlipala Mon, 01 Sep 2008 10:25:40 -0400 6cc7a8fd4a8c c2e8e9c20643 book/Makefile book/src/Intro.v 2 files changed, 23 insertions(+), 1 deletions(-) [+] line wrap: on line diff --- a/book/Makefile Mon Sep 01 10:00:09 2008 -0400 +++ b/book/Makefile Mon Sep 01 10:25:40 2008 -0400 @@ -21,7 +21,7 @@ rm -f Makefile.coq .depend $(GLOBALS) \ latex/*.sty latex/cpdt.* -doc: latex/cpdt.dvi html +doc: latex/cpdt.dvi latex/cpdt.pdf html latex/cpdt.tex:$(VS) cd src ; coqdoc --latex $(VS_DOC) \ @@ -31,6 +31,9 @@ latex/cpdt.dvi: latex/cpdt.tex cd latex ; latex cpdt ; latex cpdt +latex/cpdt.pdf: latex/cpdt.dvi + cd latex ; pdflatex cpdt + html:$(VS) cd src ; coqdoc $(VS_DOC) -toc \ --glob-from ../$(GLOBALS) \ --- a/book/src/Intro.v Mon Sep 01 10:00:09 2008 -0400 +++ b/book/src/Intro.v Mon Sep 01 10:25:40 2008 -0400 @@ -138,3 +138,22 @@ A good portion of the book is about how to formalize programming languages, compilers, and proofs about them. I depart significantly from today's most popular methodology for pencil-and-paper formalism among programming languages researchers. There is no need to be familiar with operational semantics, preservation and progress theorems, or any of the material found in courses on programming language semantics but not in basic discrete math and logic courses. I will use operational semantics very sparingly, and there will be no preservation or progress proofs. Instead, I will use a style that seems to work much better in Coq, which can be given the fancy-sounding name %\textit{%#<i>#foundational type-theoretic semantics#</i>#%}% or the more populist name %\textit{%#<i>#semantics by definitional compilers#</i>#%}%. *) + + +(** * Using This Book *) + +(** +This book is generated automatically from Coq source files using the wonderful coqdoc program. 
The latest PDF version is available at: +%\begin{center}\url{http://adam.chlipala.net/cpdt/cpdt.pdf}\end{center}%#<blockquote><tt><a href="http://adam.chlipala.net/cpdt/cpdt.pdf">http://adam.chlipala.net/cpdt/cpdt.pdf</a></tt></blockquote># +There is also an online HTML version available, with a hyperlink from each use of an identifier to that identifier's definition: +%\begin{center}\url{http://adam.chlipala.net/cpdt/html/}\end{center}%#<blockquote><tt><a href="http://adam.chlipala.net/cpdt/html/">http://adam.chlipala.net/cpdt/html/</a></tt></blockquote># + +The chapters of this book are named like "Module Foo," rather than having proper names, because literally the entire document is generated by coqdoc, which by default bases chapter structure on the module structure of the development being documented. This chapter is headed "Module Intro" because it comes from a module named [Intro], which comes from a fascinating source file %\texttt{%#<tt>#Intro.v#</tt>#%}% containing nothing but specially-formatted coqdoc comments. + +The source code to the book is also freely available at: +%\begin{center}\url{http://adam.chlipala.net/cpdt/cpdt.tgz}\end{center}%#<blockquote><tt><a href="http://adam.chlipala.net/cpdt/cpdt.tgz">http://adam.chlipala.net/cpdt/cpdt.tgz</a></tt></blockquote># + +There, you can find all of the code appearing in this book, with prose interspersed in comments, in exactly the order that you find in this document. You can step through the code interactively with your chosen graphical Coq interface. The code also has special comments indicating which parts of the chapters make suitable starting points for interactive class sessions, where the class works together to construct the programs and proofs. The included Makefile has a target %\texttt{%#<tt>#templates#</tt>#%}% for building a fresh set of class template files automatically from the book source. + +I believe that a good graphical interface to Coq is crucial for using it productively. 
I use the %Proof General\footnote{\url{http://proofgeneral.inf.ed.ac.uk/}}%#<a href="http://proofgeneral.inf.ed.ac.uk/">Proof General</a># mode for Emacs, which supports a number of other proof assistants besides Coq. There is also the standalone CoqIDE program developed by the Coq team. I like being able to combine certified programming and proving with other kinds of work inside the same full-featured editor, and CoqIDE has had a good number of crashes and other annoying bugs in recent history, though I hear that it is improving. In the initial part of this book, I will reference Proof General procedures explicitly, in introducing how to use Coq, but most of the book will be interface-agnostic, so feel free to use CoqIDE if you prefer it. +*)
http://jderobot.github.io/RoboticsAcademy/exercises/ComputerVision/3d_reconstruction
# 3D Reconstruction

In this practice, the intention is to program the necessary logic to allow the Kobuki robot to generate a 3D reconstruction of the scene that it receives through its left and right cameras.

## Installation

Install the General Infrastructure of the JdeRobot Robotics Academy.

## How to run your solution?

Navigate to the 3d_reconstruction directory

    cd exercises/3d_reconstruction

Launch Gazebo with the kobuki_3d_reconstruction world through the command

    roslaunch ./launch/3d_reconstruction_ros.launch

Then you have to execute the academic application, which will incorporate your code:

    python2 ./3d_reconstruction.py 3d_reconstruction_conf.yml

## How to perform the exercise?

To carry out the exercise, you have to edit the file MyAlgorithm.py and insert in it your code, which reconstructs 3D points from the two stereo views.

### Where to insert the code?

In the MyAlgorithm.py file,

    def algorithm(self):
        #GETTING THE IMAGES
        # imageR = self.getImage('right')
        # imageL = self.getImage('left')

        #EXAMPLE OF HOW TO PLOT A RED COLORED POINT
        # position = [1.0, 0.0, 0.0]  # X, Y, Z
        # color = [1.0, 0.0, 0.0]     # R, G, B
        # self.point.plotPoint(position, color)

### Application Programming Interface

• self.getImage('left') - to get the left image
• self.getImage('right') - to get the right image
• self.point.plotPoint(position, color) - to plot the point in the 3d tool

Mouse based: Hold and drag to move around the environment. Scroll to zoom in or out.

Keyboard based: Arrow keys to move around the environment. W and S keys to zoom in or out.

## Theory

In computer vision and computer graphics, 3D reconstruction is the process of determining an object's 3D profile, as well as knowing the 3D coordinate of any point on the profile. Reconstruction can be achieved as follows:

• Hardware Based: The hardware based approach requires us to utilize hardware specific to the reconstruction task.
  Use of structured light, laser range finders, depth gauges and radiometric methods are some examples.
* **Software based**: the software based approach relies on the computational abilities of the computer to determine the 3D properties of the object. Shape from shading, texture, stereo vision and homography are some good methods.

In this exercise our main aim is to carry out 3D reconstruction using the software based approach, particularly stereo-vision 3D reconstruction.

### Epipolar Geometry

When two cameras view a 3D scene from two different positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points. The study of these properties and constraints is called epipolar geometry. The image and explanation below may generalize the idea:

Suppose a point X in 3D space is imaged in two views, at x in the first and x' in the second. Backprojecting the rays from the camera centers through the image points, we obtain a plane, denoted by π. Suppose now that we know only x; we may ask how the corresponding point x' is constrained. The plane π is determined by the baseline (the line connecting the camera centers) and the ray defined by x. From the above, we know that the ray corresponding to the unknown point x' lies in π, hence the point x' lies on the line of intersection l' of π with the second image plane. This line is called the epipolar line corresponding to x. This relation helps us to reduce the search space for the point in the right image from a plane to a line. Some important definitions to note are:

* The **epipole** is the point of intersection of the line joining the camera centers (the baseline) with the image plane.
* An **epipolar plane** is a plane containing the baseline.
* An **epipolar line** is the intersection of an epipolar plane with the image plane.
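These constraints can be written compactly with the fundamental matrix F — standard epipolar-geometry notation, included here only to make the text above concrete (it is not required for the rectified setup used in this exercise). With x and x' the corresponding points in homogeneous image coordinates:

```latex
% Epipolar constraint: corresponding points x, x' satisfy
x'^{\top} F \, x = 0
% and the epipolar line in the second image, on which x' must lie, is
l' = F x
```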
### Stereo Reconstruction

Stereo reconstruction is a special case of the above 3D reconstruction, where the two image planes are parallel to each other and equally distant from the 3D point we want to plot. In this case the epipolar lines in both image planes coincide and run parallel to the width of the planes, which simplifies our constraint further.

### 3D Reconstruction Algorithm

The reconstruction algorithm consists of 3 steps:

1. Detect the feature points in one image plane.
2. Detect the feature point corresponding to the one found above.
3. Triangulate the point in 3D space.

Let's look at them one by one.

### Feature Point Detection

Feature point detection is a vast area of study in which several algorithms are already well established. Harris corner detection and the Shi-Tomasi algorithm use eigenvalues to select good feature points. But we need a large number of points for 3D reconstruction, and these algorithms will not be able to provide them. Therefore, we use edge points as our feature points. There may be ambiguity in matching them in the next stage of the algorithm, but taking edges as feature points gives us the density we need. One really good edge detector is the Canny edge detection algorithm. The algorithm is quite simple and reliable in terms of generating edges.

### Feature Point Extraction

The use of the epipolar constraint really simplifies the time complexity of our algorithm. For general 3D reconstruction problems, we would have to generate an epipolar line for every point in one image frame, and then search along that line for the corresponding point in the other image frame. In our case the generation of the epipolar line is also very easy: it is just the parallel-line interpolation from the left image plane to the right. Checking the correspondence between images can be done with many algorithms, such as sum of squared differences or energy minimization.
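As a sketch of this matching step on a rectified pair — where the epipolar line of a left-image point is simply the same row of the right image — a minimal sum-of-squared-differences search in plain NumPy might look like the following. The function name and the window size are illustrative choices, not part of the exercise API, and grayscale images are assumed as 2D arrays:

```python
import numpy as np

def match_along_row(left, right, x, y, half_win=4):
    """Find the column in `right` whose window best matches the window
    around (x, y) in `left`, searching only along row y — the epipolar
    line of a rectified stereo pair."""
    h, w = left.shape
    template = left[y - half_win:y + half_win + 1,
                    x - half_win:x + half_win + 1].astype(np.float64)
    best_x, best_ssd = None, np.inf
    for cx in range(half_win, w - half_win):
        window = right[y - half_win:y + half_win + 1,
                       cx - half_win:cx + half_win + 1].astype(np.float64)
        ssd = np.sum((template - window) ** 2)  # sum of squared differences
        if ssd < best_ssd:
            best_ssd, best_x = ssd, cx
    return best_x
```

In practice one would run this only for the Canny edge pixels found in the previous step, and narrow the search range as suggested in the hints.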
Without going too deep, a simple correlation filter also suffices for our use case.

### Triangulation

Triangulation, in simple terms, is calculating where the 3D point is going to lie using the two corresponding points determined in the image planes.

In reality, the positions of the image points cannot be measured exactly. For general cameras, there may be geometric or physical distortions. Therefore, a lot of mathematics goes into minimizing that error and calculating the most accurate 3D point projection. Refer to this link for a simple model.

## Hints

Simple hints are provided to help you solve the 3d_reconstruction exercise. The OpenCV library is used extensively for this exercise.

### Setup

Using the exercise API, we can easily retrieve the images. After getting the images, it is a good idea to perform bilateral filtering on them, because there are some extra details that need not be included during the 3D reconstruction. Check out the illustrations for the effects of performing the bilateral filtering.

### Calculating Correspondences

OpenCV already has built-in correlation filters which can be called through matchTemplate(). Take care of the extreme cases like edges and corners. One good observation is that points on the left will have their correspondence in the left part of the other image, and points on the right in the right part. Using this observation, we can easily speed up the calculation of correspondences.

### Plotting the Points

Either a manual implementation or the OpenCV function triangulatePoints() works well for triangulation. Just take care of all the matrix shapes and sizes while carrying out the algebraic implementation. Keep in mind the difference between simple 3D coordinates and homogeneous 4D coordinates. Check out this video for details. Simply dividing the complete 4D vector by its 4th coordinate gives us the 3D coordinates.
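As a sketch of the triangulation step, the manual linear (DLT) version below shows the homogeneous division explicitly; cv2.triangulatePoints does the same job. The 3×4 projection matrices P1 and P2 are assumed known — the function name and argument layout are illustrative, not part of the exercise API:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 projection matrices; pt1, pt2: (x, y) image points.
    Returns the 3D point after dividing the homogeneous 4-vector
    by its 4th coordinate."""
    # Each image point contributes two linear constraints on X
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # homogeneous 4D solution (null vector of A)
    return X[:3] / X[3]      # Euclidean 3D coordinates
```

Projecting a known 3D point through two cameras and feeding the two image points back in should recover the original point, which is a handy sanity check for shapes and signs.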
Because user implementations vary, you may have to adjust the scale and offset of the triangulated points in order to make them visible and representable in the GUI interface. Downscaling the 3D coordinate vector by a factor between 500 and 1000 works well, as does an offset between 0 and 8.
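Concretely, the final conversion from a homogeneous triangulation result to viewer coordinates could be sketched as below — the default scale and offset are just values from the ranges suggested above, not fixed constants of the tool, and the function name is illustrative:

```python
import numpy as np

def to_gui_point(X_hom, scale=750.0, offset=0.0):
    """Convert a homogeneous 4-vector from triangulation into the
    downscaled, offset 3D coordinates passed to plotPoint()."""
    p = np.asarray(X_hom, dtype=np.float64)
    p3 = p[:3] / p[3]           # homogeneous -> Euclidean
    return p3 / scale + offset  # rescale and shift for the 3D viewer
```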