http://acscihotseat.org/index.php?qa=474&qa_1=proof-for-long-forward-price-at-time-t&show=478
# Proof for long-forward price at time t

55 views

asked Apr 28, 2017

I tried to rigorously prove the value of a long forward at time $$t$$. The outline of the proof is given on slide 22 of the "Introduction to Derivatives" slides. I am quite close to the correct answer, except that my answer has the incorrect sign. I have attached my workings as a PDF: Proof.pdf (0,3 MB). Any idea where I went wrong?

answered May 2, 2017 by (3,390 points), selected May 9, 2017 by Rowan

Hi Conor

In your proof, the amount of money you wish to borrow at time $$t$$ should be the present value of the difference between what you are going to receive from the long forward and what you are going to pay for the short forward, i.e. the present value of $$F_{0,T} - F_{t,T}$$. That should then give you the correct answer.

commented May 13, 2017 by (1,120 points)

If we borrow the PV of $$F_{0,T} - F_{t,T}$$, then surely we would have to pay back $$F_{0,T} - F_{t,T}$$, i.e. have a cashflow of $$-(F_{0,T} - F_{t,T}) = F_{t,T} - F_{0,T}$$? That would then make our overall cashflow at $$T$$ equal to $$2(F_{t,T} - F_{0,T})$$, which is non-zero and makes the arbitrage argument not work.

commented May 14, 2017 by (3,390 points)

Hi Dean

Thanks for pointing out my error. I have relooked at the proof. The amount which is borrowed is indeed supposed to be $$F_{t,T} - F_{0,T}$$. From what I can see, the reason why Conor was getting the wrong sign in the final step is that he was adding the cashflows at time $$t$$ to get zero instead of equating them to each other. If the cashflows at all other times are equal, then the values of each component at time $$t$$ should be equal to each other; they should not add up to zero.
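As a sanity check on the sign (a sketch, not from the thread, assuming a constant continuously compounded risk-free rate $$r$$): at time $$t$$, hold the original long forward struck at $$F_{0,T}$$ and enter a new short forward at $$F_{t,T}$$. The combined payoff at $$T$$ is

$$(S_T - F_{0,T}) + (F_{t,T} - S_T) = F_{t,T} - F_{0,T},$$

which is riskless, so the time-$$t$$ value of the original long forward must be

$$V_t = (F_{t,T} - F_{0,T})\,e^{-r(T-t)}.$$

Equivalently, borrowing $$(F_{t,T} - F_{0,T})e^{-r(T-t)}$$ at time $$t$$ and repaying $$F_{t,T} - F_{0,T}$$ at $$T$$ makes every net cashflow zero, which is exactly the equality of time-$$t$$ values that the final step of the proof needs.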
2018-05-21 08:50:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6414092779159546, "perplexity": 531.8587846877901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863972.16/warc/CC-MAIN-20180521082806-20180521102806-00020.warc.gz"}
https://math.stackexchange.com/questions/1911089/in-french-mathematics-what-does-hypoth%C3%A8se-mean
# In French mathematics, what does “hypothèse” mean?

Of course, it seems the answer is obviously "hypothesis". However, that translation does not seem right in the following context, taken from a paper by Bernard Host:

Désormais, $p > 1$ est un entier et $\mu$ une mesure de probabilité sur $\mathbb{T}$; on ne fait pour l'instant aucune hypothèse d'invariance.

Using Google Translate, my best translation is the following: "Henceforth, let $p>1$ be an integer and let $\mu$ be a probability measure on $\mathbb{T}$; as of yet we do not have an invariance hypothesis."

But isn't a hypothesis the same as a conjecture (e.g. the Riemann hypothesis)? "Conjecture" doesn't seem to fit here. My gut tells me that the last part should read "As of yet we do not make any assumption about invariance." So, in this context, does "hypothèse" mean "assumption"?

• hypothèse is the same as assumption – Gabriel Romon Sep 1 '16 at 16:07

• @John: English hypothesis can mean ‘conjecture’, but it can also mean ‘assumption’: we speak of the hypotheses of a given theorem, meaning the assumptions. However, I would translate the second clause as "for now we do not assume invariance". – Brian M. Scott Sep 1 '16 at 16:18

• The normal meaning, in English as well as in French, is one of the meanings of the Greek word ὑπόθεσις: supposition. – Bernard Sep 1 '16 at 18:25

Suppose I say "Let $P$ be a probability measure on $\mathbb R^2$ that is invariant under rotations about the origin. Then we can conclude that..." Then rotation-invariance is a hypothesis. That $P$ is a probability measure is a hypothesis.
2020-02-19 01:33:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8754816651344299, "perplexity": 252.60819699908868}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143963.79/warc/CC-MAIN-20200219000604-20200219030604-00409.warc.gz"}
https://www.snapxam.com/solver?p=%5Cint%20xe%5E%7B2x%7Ddx
Step-by-step Solution

Find the integral $\int xe^{2x}dx$

Answer: $\frac{1}{2}e^{2x}x-\frac{1}{4}e^{2x}+C_0$

Problem to solve: $\int xe^{2x}dx$

1. We can solve the integral $\int xe^{2x}dx$ by applying the integration by parts method, which computes the integral of a product of two functions using the formula

$\displaystyle\int u\cdot dv=u\cdot v-\int v \cdot du$

2. First, identify $u$ and calculate $du$: take $u=x$, so that $du=dx$.

3. Now, identify $dv$ and calculate $v$: take $dv=e^{2x}dx$, so that $v=\frac{1}{2}e^{2x}$.

4. Solve the integral:

$\int xe^{2x}dx=\frac{1}{2}xe^{2x}-\frac{1}{2}\int e^{2x}dx=\frac{1}{2}e^{2x}x-\frac{1}{4}e^{2x}+C_0$
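For readers who want to verify the result programmatically, here is a quick check using Python's sympy (not part of the original page):

```python
import sympy as sp

x = sp.symbols('x')

# Ask sympy for the antiderivative; integration by parts with
# u = x, dv = e^(2x) dx gives (1/2)x e^(2x) - (1/4)e^(2x) + C.
antiderivative = sp.integrate(x * sp.exp(2 * x), x)

# sympy may return an algebraically equivalent form such as
# (2*x - 1)*exp(2*x)/4; the difference from the hand computation
# should simplify to zero.
hand_result = sp.Rational(1, 2) * x * sp.exp(2 * x) - sp.Rational(1, 4) * sp.exp(2 * x)
print(sp.simplify(antiderivative - hand_result))  # expected: 0
```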
2021-09-19 14:18:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9887441396713257, "perplexity": 849.240220930972}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056890.28/warc/CC-MAIN-20210919125659-20210919155659-00689.warc.gz"}
https://tex.stackexchange.com/questions/422845/why-does-tikz-produce-output-in-plain-tex-when-it-does-not-in-latex
# Why does tikz produce output in plain tex when it does not in latex?

This simple document testlatex.tex

    \documentclass{article}
    \usepackage{tikz}
    \begin{document}
    \end{document}

when running latex testlatex produces "No pages of output". But this document testtex.tex

    \input tikz
    \end

when running tex testtex produces "Output written on testtex.dvi (1 page, 3312 bytes)". I would expect the second one to not produce any output either, so why are they different?

• Interesting. First I thought it may be because different engines are involved (in currently popular TeX distributions, latex runs pdfTeX, as we can see with latex --version, while tex runs something closer to Knuth's TeX). But this is not the reason, because the question is reproducible with pdflatex and pdftex (the latter produces a blank page containing only the page number 1). (Similarly, what dvitype testtex.dvi shows is that the DVI file contains some PGF specials, some movements, then the page number.) My current guess is that it's because LaTeX uses a different output routine. – ShreevatsaR Mar 23 '18 at 20:12

In short:

• \input tikz loads the file tikz.tex, while \usepackage{tikz} loads the file tikz.sty. Those files are different, and in turn load different files.

• In particular, pgfrcs.tex has \input pgfutil-plain.def while pgfrcs.sty has \input pgfutil-latex.def.

• The code in pgfutil-plain.def has an \openout, which results in the creation of a whatsit node (and hence “output”, and the DVI file), while the code in pgfutil-latex.def piggybacks on LaTeX's aux file, which does \immediate\openout instead.

## How to isolate the issue

(This section just describes how the above was arrived at.) This is what I tried. By looking in the log file (or using kpsewhich), we can find the specific file that is being input, in the two cases. We can copy that file over to the current directory, and start editing it into something minimal: remove as much from it as possible, while the files still compile and exhibit the difference. They will probably input other files; copy them as well and repeat. By doing this recursively, I was able to reduce the two cases to the following:

    % tex testtex.tex
    This is TeX, Version 3.14159265 (TeX Live 2017) (preloaded format=tex)
    (./testtex.tex (./tikz.tex (./pgf.tex (./pgfcore.tex (./pgfsys.tex
    (./pgfrcs.tex (./pgfutil-common.tex) (./pgfutil-plain.def))
    (./pgfsys.code.tex))))) [1] )
    Output written on testtex.dvi (1 page, 196 bytes).
    Transcript written on testtex.log.

where most of the files simply \input the others as indicated in the log line above. The nontrivial files (after minimizing) are pgfrcs.tex:

    \input pgfutil-common.tex
    \input pgfutil-plain.def

pgfutil-common.tex:

    \catcode`\@=11\relax
    \newif\ifpgfutil@format@is@latex
    \newif\ifpgfutil@format@is@plain
    \newtoks\pgfutil@everybye

pgfutil-plain.def:

    \pgfutil@format@is@plaintrue
    % The aux files, needed for reading back coordinates
    \csname newwrite\endcsname\pgfutil@auxout
    \csname openout\endcsname\pgfutil@auxout\jobname.pgf
    }

and pgfsys.code.tex:

    % Read aux file in plain and context mode:

## Summary

The file pgfrcs.tex has \input pgfutil-plain.def while the file pgfrcs.sty has \input pgfutil-latex.def, and those two .def files contain substantial differences.
In particular, in the plain TeX case, at minimum we're running the following:

    \catcode`\@=11\relax
    %%%%%%%%%% From pgfutil-plain.def < pgfrcs.tex < pgfsys.tex < pgfcore.tex < pgf.tex < tikz.tex
    % The aux files, needed for reading back coordinates
    \csname newwrite\endcsname\pgfutil@auxout
    \csname openout\endcsname\pgfutil@auxout\jobname.pgf
    }
    %%%%%%%%%% From pgfsys.code.tex < pgfsys.tex < pgfcore.tex < pgf.tex < tikz.tex
    \end

and that's already enough to result in the creation of the .dvi file. (The answer of @egreg explains the further contents of the .dvi file—the PGF-related specials that are needed to go into the PDF dictionary on every page—but not why a page is shipped out in the first place. To check this, we can simply remove the line \csname openout\endcsname\pgfutil@auxout\jobname.pgf from pgfutil-plain.def, and make absolutely no other changes, and see that your original testtex.tex does not result in the creation of a DVI file, even though the stuff that @egreg showed is still present in pgfutil-plain.def.)

What the above shows is that we can find the difference already between

    \input pgfsys
    \end

(in plain TeX) and (in LaTeX):

    \documentclass{article}
    \usepackage{pgfsys}
    \begin{document}
    \end{document}

Also, here's an even more minimal plain-TeX file that results in the creation of a DVI file:

    \newwrite\outfile
    \openout\outfile\jobname.pgf
    \end

In the LaTeX case (in pgfutil-latex.def), there is a similar \AtBeginShipout and everything, but there's no \openout. Instead, it piggybacks on LaTeX's \@auxout. That one is defined, in latex.ltx (the definition of \document), using \immediate\openout\@mainaux\jobname.aux. And we can see that simply adding \immediate to our earlier minimal file does not result in the creation of a DVI file:

    \newwrite\outfile
    \immediate\openout\outfile\jobname.pgf
    \end

• But I feel I'm still left asking why. Is it a bug that it's done this way in plain? Is it a temporary fix put in because it would be too much work to do it properly in plain? Is it possible to fix it so that it works the same in plain as in latex? What other differences are there between running tikz with plain vs latex? – nadder Mar 24 '18 at 15:14

• @nadder (Answering backwards) Looking at the many differences I found while trying to answer this, I would imagine there are dozens or hundreds of minor differences between TikZ in plain vs LaTeX. Often in LaTeX when something is needed the code can just require some corresponding package, while in plain either an equivalent plain-compatible package is used or some code is manually written for that part. (As here, for the aux file, which is built into LaTeX but was done in an ad-hoc way for plain.) I imagine that when some significant user-visible difference is noticed, they'd fix the code to be closer. – ShreevatsaR Mar 24 '18 at 16:23

• @nadder (To emphasize, when running TikZ with plain a .pgf file is created, while with LaTeX an .aux file is created (in case of nonempty output). This is a user-visible difference but wouldn't be considered a bug.) I don't off-hand foresee a problem with changing the \openout in pgfutil-plain.def to \immediate\openout… maybe someone just forgot the \immediate or didn't see why it's necessary. (It probably doesn't make a difference except for a document with no output, as here.) We could add \immediate in our local copy of pgfutil-plain.def, and try lots of TikZ examples.
– ShreevatsaR Mar 24 '18 at 16:27

• A new question arises here: why does

    \documentclass{article}
    \newwrite\outfile
    \openout\outfile\jobname.pgf
    \begin{document}
    \end{document}

not ship out any page? – touhami Mar 28 '18 at 8:19

• @touhami Good question! Maybe you can ask it as a new question… my guess is that it has something to do with LaTeX's output routine: the whatsit is indeed created and passed to the output routine, but in LaTeX's case it is then thrown away and no shipout happens. But I haven't looked into it, so I'm not sure. – ShreevatsaR Mar 28 '18 at 13:52

At some point, \pgfutil@abe is executed, which issues \unhbox. Here's the justification:

    272 \AtBeginShipout{%
    273   \setbox\AtBeginShipoutBox=\vbox{%
    274     \setbox0=\hbox{%
    275       \begingroup
    276         % the boxes \pgfutil@abe ("every page") and \pgfutil@abb ("current page")
    277         % are used to generate pdf objects / dictionaries which are
    278         % required for the graphics which are somewhere in the "real"
    279         % page content.
    280         % BUT: these pdf objects MUST NOT be affected by text layout
    281         % shifts! Consequently, we have to undo \hoffset and \voffset
    282         % (which are h/v shifts to the page layout).
    283         %
    284         % Note that this of importance for shadings. To be more
    285         % specific: try out shadings with standalone (which uses
    286         % \hoffset) and with xdvipdfmx (which appears to be more
    287         % fragile than pdflatex) - they break unless we undo \hoffset
    288         % and \voffset.
    289         \ifdim\hoffset=0pt \else \hskip-\hoffset\fi
    290         \pgfutil@abe
    291         \unhbox\pgfutil@abb
    292         \pgfutil@abc
    293         \global\let\pgfutil@abc\pgfutil@empty
    294         \ifdim\hoffset=0pt \else \hskip+\hoffset\fi
    295       \endgroup
    296   }%

• For completeness: why does this happen in plain TeX but not in LaTeX? It's because \AtBeginShipout works differently, I guess? – ShreevatsaR Mar 23 '18 at 21:10

• @ShreevatsaR That's quite possible – egreg Mar 23 '18 at 21:25

• This (explains the contents of the DVI file but) turns out not to be the reason actually: the reason is \openout (in pgfutil-plain.def) versus \immediate\openout (in LaTeX). – ShreevatsaR Mar 24 '18 at 4:45

• @ShreevatsaR In LaTeX no .pgf file is created, because the standard .aux file is used. – egreg Mar 24 '18 at 10:25

• Yes right, that's what I found (mentioned as “piggyback” in my answer), and that's what I meant by “in LaTeX” (meant latex.ltx, specifically from ltfiles.dtx). The standard .aux file in LaTeX is created with \immediate\openout, while the .pgf file is created with \openout. Not sure why that is, though. – ShreevatsaR Mar 24 '18 at 16:18
2021-02-25 16:42:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436157941818237, "perplexity": 5175.9059955936755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00093.warc.gz"}
https://www.experts-exchange.com/questions/28000859/How-do-I-refernce-objects-from-tlb-library-in-my-code-using-VB.html
Solved

# How do I reference objects from a tlb library in my code using VB?

Posted on 2013-01-18, 259 Views

I have a tlb library which I've added to my project as an 'existing item'. I can object-browse to some of the components, including the one I would like to use. If I object-browse I can find it:

Class CustomerWebAssistant
Member of EncryptionSupport

Then if I try to instantiate the class in my code:

Dim cwAssistant As New EncryptionSupport.CustomerWebAssistant

I get the error that it is undefined.

Question by: UnderSeven

Expert Comment (ID: 38794537): Did you add a reference to the file?

Author Comment (ID: 38794655): I can't do an Imports on it; it says it includes no public namespace or cannot be found. Also, I am already inheriting another class. If I try to add it as a reference, it states it is not a valid assembly or COM component.

Author Comment (ID: 38794679): This tlb is actually used in legacy code using the following statements:

cwAssistant = Server.CreateObject("Cogsdale.Encryption.CustomerWebAssistant")
result = cwAssistant.getDecryptedValue(result, Session("KeysFileLocation"))

but these do not work; if I try using those, it errors on the first one, stating it cannot create the object.

Accepted Solution (käµfm³d 👽 earned 500 total points, ID: 38794701): Run the following in a command prompt:

regsvr32 C:\path\to\file.tlb

If you're running 64-bit Windows, then you may need to reference the full path to the 32-bit version of regsvr32. Once you register the type library, your late-bound examples above should work.

Author Closing Comment (ID: 38794811): Thanks, that did it.
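A note on the 64-bit detail in the accepted solution: on 64-bit Windows the 32-bit regsvr32 lives under SysWOW64, so the registration command would look something like the following (the .tlb path is a placeholder, as in the answer):

    C:\Windows\SysWOW64\regsvr32.exe C:\path\to\file.tlb

This matters when the component is 32-bit: running the SysWOW64 copy registers it in the 32-bit view of the registry, which is the view a 32-bit VB application reads.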
2017-01-17 09:32:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18344226479530334, "perplexity": 4947.809488897513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00493-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.groundai.com/project/dark-matter-in-the-standard-model/
Contents CERN-TH-2018-065 IFUP-TH/2018 Dark Matter in the Standard Model? [1cm] Christian Gross, Antonello Polosa, Alessandro Strumia, Alfredo Urbano, Wei Xue [7mm] Dipartimento di Fisica dell’Università di Pisa and INFN, Sezione di Pisa, Italy [1mm] Dipartimento di Fisica e INFN, Sapienza Università di Roma, I-00185, Roma, Italy [1mm] Theoretical Physics Department, CERN, Geneva, Switzerland [1mm] INFN, Sezione di Trieste, SISSA, via Bonomea 265, 34136 Trieste, Italy Abstract We critically reexamine two possible Dark Matter candidates within the Standard Model. First, we consider the hexa-quark. Its QCD binding energy could be large enough to make it (quasi) stable. We show that the cosmological Dark Matter abundance is reproduced thermally if its mass is . However, we also find that such mass is excluded by stability of Oxygen nuclei. Second, we consider the possibility that the instability in the Higgs potential leads to the formation of primordial black holes while avoiding vacuum decay during inflation. We show that the non-minimal Higgs coupling to gravity must be as small as allowed by quantum corrections, . Even so, one must assume that the Universe survived in independent regions to fluctuations that lead to vacuum decay with probability 1/2 each. ## 1 Introduction In this work we critically re-examine two different intriguing possibilities that challenge the belief that the existence of Dark Matter (DM) implies new physics beyond the Standard Model (SM). #### DM as the uuddss hexa-quark The binding energy of the hexa-quark di-baryon is expected to be large, given that the presence of the strange quark allows it to be a scalar, isospin singlet [1], called or , and sometimes named exa-quark. A large binding energy might make light enough that it is stable or long lived. All possible decay modes of a free are kinematically forbidden if is lighter than about . Then could be a Dark Matter candidate within the Standard Model [2, 3, 4]. In section 2.1 we use the recent theoretical and experimental progress about tetra- and penta-quarks to infer the mass of the hexa-quark. In section 2.2 we present the first cosmological computation of the relic abundance, finding that the desired value is reproduced for . In section 2.3 we revisit the bound from nuclear stability ( production within nuclei) at the light of recent numerical computations of one key ingredient: the nuclear wave-function [5], finding that seems excluded. #### DM as primordial black holes Primordial Black Holes (PBH) are hypothetical relics which can originate from gravitational collapse of sufficiently large density fluctuations. The formation of PBHs is not predicted by standard inflationary cosmology: the primordial inhomogeneities observed on large cosmological scales are too small. PBH can arise in models with large inhomogeneities on small scales, . PBH as DM candidates are subject to various constraints. BH lighter than are excluded because of Hawking radiation. BH heavier than are safely excluded. In the intermediate region, a variety of bounds make the possibility that problematic but maybe not excluded — the issue is presently subject to an intense debate. According to [6] DM as PHB with mass are not excluded, as previously believed. And the HSC/Subaru microlensing constraint on PBH [7] is partially in the wave optics region. This can invalidate its bound below . Many ad hoc models that can produce PBH as DM have been proposed. 
Recently [8] claimed that a mechanism of this type is present within the Standard Model given that, for present best-fit values of the measured SM parameters, the SM Higgs potential is unstable at  [9]. We here critically re-examine the viability of the proposed mechanism, which assumes that the Higgs, at some point during inflation, has a homogeneous vev mildly above the top of the barrier and starts rolling down. When inflation ends, reheating adds a large thermal mass to the effective Higgs potential, which, under certain conditions, brings the Higgs back to the origin,  [10]. If falling stops very close to the disaster, this process generates inhomogeneities which lead to the formation of primordial black holes. In section 3 we extend the computations of [8] adding a non-vanishing non-minimal coupling of the Higgs to gravity, which is unavoidably generated by quantum effects [11]. We find that must be as small as allowed by quantum effects. Under the assumptions made [8] we reproduce their results; however in section 3.6 we also find that such assumptions imply an extreme fine-tuning. The first mechanism is affected by the observed baryon asymmetry, but does not depend on the unknown physics that generates the baryon asymmetry. The second possibility depends on inflation, but the mechanism only depends on the (unknown) value of the Hubble constant during inflation. In both cases the DM candidate is part of the SM. Conclusions are given in section 4. ## 2 DM as the uuddss hexa-quark The hexa-quark is stable if all its possible decay modes are kinematically closed: S→⎧⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪⎩de¯νeMS A stable is a possible DM candidate. A too light can make nuclei unstable. Scanning over all stable nuclei, we find that none of them gets destabilised by single emission if , with Li giving the potentially highest sensitivity to . ### 2.1 Mass of the hexa-quark from a di-quark model We estimate the mass of the hexa-quark viewing it as a neutral scalar di-baryon constituted by three spin zero di-quarks S=ϵαβγ[ud]α,s=0[us]β,s=0[ds]γ,s=0 (2) where are color indices. This is possible thanks to the strange quark, while spin zero di-quarks of the kind are forbidden by Fermi statistics because of antisymmetry in color and spin. We assume the effective Hamiltonian for the hexa-quark [12] H=∑i≠j={u,d,s}(mij+2κijSi⋅Sj) (3) where the are effective couplings determined by the strong interactions at low energies, color factors, quark masses and wave-functions at the origin. The are the masses of the di-quarks in made of and constituent quarks [13]. is the spin of -th quark. Another important assumption, which is well motivated by studies on tetra-quarks [12], is that spin-spin interactions are essentially within di-quarks and zero outside, as if they were sufficiently separated in space. Considering di-quark masses to be additive in the constituent quark masses, and taking and constituent quark masses from the baryons one finds m[qq]≃0.72 GeV,m[qs]∼0.90 GeVq={u,d}. (4) The chromomagnetic couplings could as well be derived in the constituent quark model using data on baryons κqq≃0.10 GeV,κqs≃0.06 GeV. (5) However it is known that to reproduce the masses of light scalar mesons, interpreted as tetraquarks,  [14], we need κqq≃0.33 GeV,κqs≃0.27 GeV. (6) Spin-spin couplings in tetra-quarks are found to be about a factor of four larger compared to the spin-spin couplings among the same pairs of quarks in the baryons, which make also di-quarks. It is difficult to assess if this would change within an hexa-quark. 
At any rate we can attempt a simple mass formula for MS=(m[qq]−3/2κqq)+2(m[qs]−3/2κqs) (7) which in terms of light tetra-quark masses means . Using the determination of chromo-magnetic couplings from baryons we would obtain MS≈2.17 GeV (8) whereas keeping the chromo-magnetic couplings needed to fit light tetra-quarks gives MS≈1.2 GeV (9) if the same values for the chromo-magnetic couplings to fit light tetra-quark masses are taken (or 1.4 GeV using (6)). There is quite a lot of experimental information on tetra-quarks [12], whereas hexa-quarks, for the moment, are purely hypothetical objects. On purely qualitative grounds we might expect that the mass of could be closer to the heavier value being a di-baryon and not a di-meson (tetra-quark) like light scalar mesons. In the absence of any other experimental information it is impossible to provide an estimate of the theoretical uncertainty on . Lattice computations performed at unphysical values of quark masses find small values for the binding energy, about 13, 75, 20 MeV [15, 16, 17]. Extrapolations to physical quark masses suggest that does not have a sizeable binding energy, see e.g. [18]. Furthermore, the binding energy of the deuteron is small, indirectly disfavouring a very large binding energy for the (somehow similar) , which might too be a molecule-like state.111We thank M. Karliner, A. Francis, J. Green for discussions. Despite of this, we over-optimistically treat as a free parameter in the following. We also notice that the particle could be much larger than what envisaged in [2, 3, 4] and that its coupling to photons, in the case of  fm size (see the considerations on diquark-diquark repulsion at small distances in [19]), could be relevant for momentum transfers as small as  MeV, compared to  GeV considered by Farrar. ### 2.2 Cosmological relic density of the hexa-quark We here compute the relic density of Dark Matter, studying if it can match the measured value , i.e.  [20]. A key ingredient of the computation is the baryon asymmetry. Its value measured in CMB and BBN is . The DM abundance is reproduced (using  GeV for definiteness) for YSYB=ΩDMΩBMpMS≈4.2. (10) Thereby the baryon asymmetry before decoupling must be YBS=YB+2YS≈9.3YB. (11) One needs to evolve a network of Boltzmann equations for the main hadrons: , , , , , , , and . Strange baryons undergo weak decays with lifetimes , a few orders of magnitude faster than the Hubble time. This means that such baryons stay in thermal equilibrium. We thereby first compute the thermal equilibrium values taking into account the baryon asymmetry. Thermal equilibrium implies that the chemical potentials satisfy μb=μS/2,b={p,n,Λ,…}. (12) Their overall values are determined imposing that the total baryon asymmetry equals ∑bneqb−neq¯bs+2neqS−neq¯Ss=YBS. (13) The equilibrium values can be analytically computed in Boltzmann approximation (which becomes exact in the non-relativistic limit) neqi=giM2iT2π2K2(MiT)e±μi/T (14) where the () holds for (anti)particles. We then obtain the abundances in thermal equilibrium plotted in fig. 1, assuming . We see that the desired abundance is reproduced if the interactions that form/destroy decouple at . This temperature is so low that baryon anti-particles have negligible abundances, and computations can more simply be done neglecting anti-particles.222Let us consider, for example, the process where denotes other SM particles that do not carry the baryon asymmetry, such as pions. 
Thermal equilibrium of the above process implies (15) Inserting with gives (16) Namely, at large ; at low . A DM abundance comparable to the baryon abundance is only obtained if reactions that form decouple at the in eq. (17). Then, the desired decoupling temperature is simply estimated imposing , and decreases if is heavier: Tdec|desired≈2Mp−MS|lnYBS|≈89MeV−0.048MS. (17) To compute the decoupling temperature, we consider the three different kind of processes that can lead to formation of : 1. Strong interactions of two heavier QCD hadrons that contain the needed two quarks. One example is , where denotes pions. These are doubly Boltzmann suppressed by at temperatures . 2. Strong interactions of one heavier strange hadron and weak interactions that form the other (as ) from lighter hadrons. One example is . These are singly Boltzmann suppressed by and by . 3. Double-weak interactions that form two quarks starting from lighter hadrons. One example is . These are doubly suppressed by . At the abundance of strange hadrons is still large enough that QCD processes dominate over EW processes: interactions that form and destroy proceed dominantly through QCD collisions of strange hadrons: ΛΛ,nΞ0,pΞ−,Σ+Σ−↔SX (18) where can be a or a , as preferred by approximate isospin conservation. The can be substituted by the . Defining and , the Boltzmann equation for the abundance is sHzdYSdz=γeqbS[Y2BYeqb2B−YSYeqbS] (19) where the superscript ‘eqb’ denotes thermal equilibrium at fixed baryon asymmetry and is summed over all baryons, but the dibaryon . A second equation for is not needed, given that baryon number is conserved: . Furthermore, is negligible, and is negligible around decoupling. The production rate is obtained after summing over all processes of eq. (18). In the non-relativistic limit the interaction rate gets approximated as 2γeqbS\lx@stackrelT≪MS≃∑b,b′neqbbneqbb′⟨σbb′vrel⟩eqb. (20) The opposite process is more conveniently written in terms of the breaking width defined by and given by ΓeqbS=∑b,b′neqbbneqbb′2neqS⟨σbb′vrel⟩eqb (21) This gets Boltzmann suppressed at , when hyperons disappear from the thermal plasma. Assuming , the Boltzmann equation is approximatively solved by evaluated at the decoupling epoch where , which corresponds to . This leads to the estimated final abundance YSYB∼YB(MPlTdecσS)2Mp−MS2MΛ−MS. (22) The fact that is in thermal equilibrium down to a few tens of MeV means that whatever happens at higher temperatures gets washed out. Notice the unusual dependence on the cross section for formation: increasing it delays the decoupling, increasing the abundance. Fig. 2 shows the numerical result for the relic abundance, computed inserting in the Boltzmann equation a -wave , varied around . The cosmological DM abundance is reproduced for , while a large gives a smaller relic abundance. Bound-state effects at BBN negligibly affect the result, and in particular do not allow to reproduce the DM abundance with a heavier . We conclude this section with some sparse comments. Possible troubles with bounds from direct detection have been pointed out in [4, 21, 22]: a DM velocity somehow smaller than the expected one can avoid such bounds reducing the kinetic energy available for direct detection. Using a target made of anti-matter (possibly in the upper atmosphere) would give a sharp annihilation signal, although with small rates. The magnetic dipole interaction of does not allow to explain the recent 21 cm anomaly along the lines of [23] (an electric dipole would be needed). 
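As a numeric cross-check of the estimate in eq. (17) (our arithmetic, not in the original, taking the standard baryon asymmetry $Y_B \approx 0.9\times 10^{-10}$ and $M_S \approx 1.2\,$GeV):

$$T_{\rm dec}\big|_{\rm desired}\;\approx\;\frac{2M_p-M_S}{|\ln Y_{BS}|}\;\approx\;\frac{1.88\,{\rm GeV}-1.2\,{\rm GeV}}{\bigl|\ln\bigl(9.3\times 0.9\times 10^{-10}\bigr)\bigr|}\;\approx\;\frac{0.68\,{\rm GeV}}{20.9}\;\approx\;32\ {\rm MeV},$$

in agreement with the linearized form $89\,{\rm MeV}-0.048\,M_S\approx 31\,{\rm MeV}$ and with the statement that the reactions that form $S$ must stay in equilibrium down to a few tens of MeV.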
The interactions of DM with the baryon/photon fluid may alter the evolution of cosmological perturbations leaving an imprint on the matter power spectrum and the CMB. However, they are not strong enough to produce significant effects. The particle is electrically neutral and has spin zero, such that its coupling to photons is therefore suppressed by powers of the QCD scale [3]. So elastic scattering of with photons is not cosmologically relevant. A light would affect neutron stars, as they are expected to contain particles, made stable by the large Fermi surface energy of neutrons. Then, would give a loss of pressure, possibly incompatible with the observed existence of neutron stars with mass  [1]. However, we cannot exclude on this basis, because production of hyperons poses a similar puzzle. as DM could interact with cosmic ray giving and photon and other signals [24] and would be geometrically captured in the sun, possibly affecting helioseismology.333We thank M. Pospelov for suggesting these ideas. In the next section we discuss the main problem which seems to exclude as DM. ### 2.3 Super-Kamiokande bound on nuclear stability Two nucleons inside a nucleus can make a double weak decay into , emitting , or  [2]. This is best probed by Super-Kamiokande (SK), which contains Oxygen nuclei. No dedicated search for (where can be one or two and can be , depending on the charge of ) has been performed,444SK searched for di-nucleon decays into pions [25] and leptons [26] and obtained bounds on the lifetime around years. However these bounds are not directly applicable to where the invisible takes away most of the energy reducing the energy of the visible pions and charged leptons, in contrast to what is assumed in [25, 26]. but a very conservative limit τ(16O8→N′SX)≳1026yr (23) is obtained by requiring the rate of such transitions to be smaller than the rate of triggered background events in SK, which is about  Hz [27]. A more careful analysis would likely improve this bound by three orders of magnitude [2]. The amplitude for the formation of is reasonably dominated by the sample diagram in fig. 3: doubly-weak production of two virtual strange baryons (e.g. through and ; at quark level and ), followed by the strong process : MNN→SX≈MNN→Λ∗Λ∗X×MΛ∗Λ∗→S. (24) The predicted life-time is then obtained as [2]555A numerical factor of 1440 due to spin and flavor effects has already been factored out from here and in the following. Note also that the threshold GeV neglects the small difference in binding energy between and . τ(N→N′SX)≃yr|M|2Λ∗Λ∗→S×{3if MS≲1.74GeV105if 1.74GeV≲MS≲1.85GeV (25) where the smaller value holds if is so light that the decay can proceed through real or emission, while the longer life-time if obtained if instead only lighter or can be emitted. The key factor is the dimension-less matrix element for the transition inside a nucleus, that we now discuss. Following [2], we assume that the initial state wave function can be factorized into wave functions of the two baryons and a relative wave function for the separation between the center of mass of the ’s. The matrix element is given by the wave-function overlap Here, are center-of-mass coordinates which parametrise the relative positions of the quarks within each . 
Using the Isgur-Karl (IK) model [28] the wave functions for the quarks inside the and inside the are approximated by ψΛ∗(→ρ,→λ) =(1rN√π)3exp⎡⎣−→ρ2+→λ22r2N⎤⎦, (27) ψS(→ρa,→λa,→ρb,→λb,→a) (28) where and are the radii of the nucleons respectively of .666One should be aware that the IK model has serious shortcomings. One issue is that it is a non-relativistic model — an assumption which is problematic in particular for small . Another problem is that the value of that gives a good fit to the lowest lying and baryons —  fm — is smaller than the charge radius of the proton:  fm. Therefore we consider both  fm and  fm, as done in [2]. Performing all integrals except the final integral over gives |M|Λ∗Λ∗→S=12(32)3/4(2rNrSr2N+r2S)6(1rS√π)3/2∫da 4πa2e−3a2/4r2Sψnuc(a). (29) As shown in fig. 5 below (and as discussed in [2]), if is not much smaller than , the overlap integral is not very much suppressed and is tens of orders of magnitude below the experimental limit, and is clearly excluded. This conclusion is independent of the form of . However if were a few times smaller than — a possibility which seems unlikely due to diquark repulsions (see e.g. [19]) but cannot firmly be excluded — then is extremely sensitive to the probability of the overlap of two nucleons inside the oxygen core at very small distances (less than, say, 0.5 fm). The wave function of nucleon pairs at such small distances has not been probed experimentally. In fact, at such small distances nucleons are not the appropriate degrees of freedom.777Data indicate that about 20% of the nucleons form pairs so close (about 1 fm) that the local density reaches the nucleon density (about 2.5 times larger than the nuclear density) and thus that the quark structure of nucleons starts becoming relevant already at 1 fm [29]. Thus, for a very small one can only make an educated guess of , since the form of is uncertain. Nevertheless, we will show in the following that for a reasonable form of a stable is excluded even if it were very small. Numerical computations of the ground-state wave-functions of nuclei, including have been performed e.g. in [5]. The quantity that determines is the two-nucleon point density , defined in eq. (58) of [5]. We obtain by interpolating the data given in [5] and adding the constraint , which is a conservative assumption for our purposes since would lead to a larger matrix element. There are 28 neutron-neutron pairs and 64 proton-neutron pairs in so one has and . We therefore define the wavefunctions ψnnnuc(a)=√ρnn(a)/28,ψpnnuc(a)=√ρpn(a)/64. (30) These wave functions are plotted in fig. 4, together with the Miller-Spencer (MS) and the Brueckner-Bethe-Goldstone (BBG) wave function used in [2]. The BBG wave functions assume a hard repulsive core between nucleons such that vanishes at . We take for illustration. This is not realistic but allows to see what kind of nuclear wave function would sufficiently suppress the rate of -formation in nuclei, if is small enough. The resulting is plotted in fig. 5, again compared to that obtained using the Miller-Spencer and BBG wave functions. The resulting matrix elements from the MS wave function qualitatively agree to what is obtained using the wave functions extracted from [5]. By contrast, the matrix element using the BBG wave function with hard core radius is very much suppressed, especially if is small. The reason is that, according to the assumption of a hard core repulsive potential, the nucleons can’t get close enough to form the small state . 
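Before drawing the conclusion, it may help to see the structure of eq. (29) numerically. The following Python sketch evaluates the overlap integral with a toy Gaussian nuclear wave function; the true wave functions of [5] are not reproduced here, and the radii are merely representative of the values discussed above, so the output is purely illustrative:

```python
import numpy as np
from scipy.integrate import quad

# Representative radii in fm (of the order discussed in the text).
r_N = 0.8   # nucleon radius in the Isgur-Karl model
r_S = 0.4   # assumed (small) radius of the S

# Toy stand-in for the relative nucleon-pair wave function psi_nuc(a):
# a Gaussian of width b_nuc (fm), normalized so that
# the integral of 4*pi*a^2*|psi|^2 da over [0, inf) equals 1.
b_nuc = 2.0
def psi_nuc(a):
    return (np.pi * b_nuc**2) ** (-0.75) * np.exp(-a**2 / (2 * b_nuc**2))

# Dimensionless |M| assembled as in eq. (29).
prefactor = (0.5 * (3.0 / 2.0) ** 0.75
             * (2 * r_N * r_S / (r_N**2 + r_S**2)) ** 6
             * (1 / (r_S * np.sqrt(np.pi))) ** 1.5)
integral, _ = quad(
    lambda a: 4 * np.pi * a**2 * np.exp(-3 * a**2 / (4 * r_S**2)) * psi_nuc(a),
    0, np.inf)

print(f"|M| ~ {prefactor * integral:.2e}  (toy wave function)")
```

Rerunning with different $r_S$ makes the strong sensitivity explicit: the factor $(2r_N r_S/(r_N^2+r_S^2))^6$ and the short-range weight $e^{-3a^2/4r_S^2}$ are what make a small $r_S$ probe the nucleon-pair wave function at small separations.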
Since we do not consider a which vanishes for  fm realistic, we conclude that a stable is excluded. Weaker bounds on production are obtained considering baryons containing ’s. ## 3 DM as black holes triggered by Higgs fluctuations We here present the technical computations relative to the mechanism anticipated in the Introduction. The SM potential is summarized in section 3.1. In section 3.2 we outline the mechanism that generates black holes. Section 3.3 studies the generation of Higgs inhomogeneities. Post-inflationary dynamics is studied in section 3.4. Formation of black holes is considered in section 3.5. The viability of a critical assumption is discussed in section 3.6. ### 3.1 The Higgs effective potential The effective potential of the canonically normalised Higgs field during inflation with Hubble constant is Veff(h)≈λeff(h)4h4−6ξHH20h2+V0 , (31) at . Here is the effective quartic coupling computed including quantum corrections. The second mass term in can be generated by various different sources [8]. We consider the minimal source: a Higgs coupling to gravity, , with Ricci scalar during inflation. Finally, during inflation the effective potential in eq. (31) is augmented by the vacuum energy associated to the inflaton sector, , where GeV is the reduced Planck mass. We implement the RG-improvement of the effective potential at NNLO precision: running the SM parameters at 3-loops and including -loop quantum corrections to the effective potential. We consider fixed values of and GeV, and we vary the main uncertain parameter, the top mass, in the interval  [32]. In fig. 6a we show the resulting as function of . The non-minimal coupling to gravity receives SM quantum corrections encoded in its RGE, which induce even starting from at some energy scale. The RGE running of small values of is shown in fig. 6b. As mentioned before, a non-zero can be considered as a proxy for an effective mass term during inflation. The latter, for instance, can be generated by a quartic interaction between the Higgs and the inflaton field or by the inflaton decay into SM particles during inflation. For this reasons, it makes sense to include as a free parameter in the analysis of the Higgs dynamics during inflation, at most with the theoretical bias that its size could be loop-suppressed. #### Analytic approximation We will show precise numerical results for the SM case. However the discussion is clarified by introducing a simple approximation that encodes the main features of the SM effective potential in eq. (31): Veff(h)≈−blog(h2h2cr√e)h44−6ξHH20h2 , (32) where is the position of the maximum of the potential with no extra mass term, . The parameters and depend on the low-energy SM parameters such as the top mass: they can be computed by matching the numerical value of the Higgs effective potential at the gauge-invariant position of the maximum, . The result is shown in the right panel of fig. 7. Results will be better understood when presented in terms of the dimensionless parameters , , and , where is the temperature, as they directly control the dynamics that we are going to study. The parameter controls the flatness of the potential beyond the potential barrier at , with smaller corresponding to a flatter potential. The non-minimal coupling controls the effective Higgs mass during inflation. Finally will set the reheating temperature in eq. (35) and thus the position and size of the thermal barrier. 
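As a consistency check on the analytic approximation (our algebra, using only eq. (32)): differentiating gives

$$V_{\rm eff}'(h)\;=\;-\,b\,h^3\ln\frac{h^2}{h_{\rm cr}^2}\;-\;12\,\xi_H H_0^2\,h\,,$$

so for $\xi_H=0$ the barrier top sits exactly at $h_{\rm max}=h_{\rm cr}$, as stated above, while for $\xi_H\neq 0$ the condition $V_{\rm eff}'(h_{\rm max})=0$ rearranges into the product-log form quoted below in eq. (33).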
The position of the potential barrier — defined by the field value where the effective potential has its maximum — strongly depends on the value of the top mass, on the non-minimal coupling to gravity, and, after inflation, on the temperature of the thermal bath which provides and extra mass term. For , the maximum of the Higgs potential gets shifted from to hmax=H0[−b12ξHW(−12ξHH20bh2cr)]−1/2 , (33) where is the product-log function defined by . The condition −12ξHH20>−bh2cre , (34) must be satisfied otherwise the effective mass is too negative and it erases the potential barrier, thus leading to a classical instability. #### The thermal potential After the end of inflation, the Higgs effective potential receives large thermal corrections from the SM bath at generic temperature . The initial temperature of the thermal bath is fixed by the dynamics of reheating after inflation. We assume instantaneous reheating, as this is most efficient for rescuing the falling Higgs field. The reheating temperature is then given by TRH=(454π3g∗)1/4M1/2PlH1/20 , (35) where is the number of SM degrees of freedom. After reheating the Universe becomes radiation-dominated, the Ricci scalar vanishes, and so the contribution to the effective potential from the non-minimal Higgs coupling to gravity. The effective Higgs potential at finite temperature is obtained adding an extra thermal contribution which can be approximated as an effective thermal mass for the Higgs field, (see e.g. [10]) VTeff(h)≈−blog(h2h2cr√e)h44+VT(h) ,VT(h)≈12M2Th2e−h/2πT . (36) At we can neglect the exponential suppression in the thermal mass, and the maximum of the effective potential in eq. (36) is given by hTmax=MT[bW(M2Tbh2cr)]−1/2. (37) ### 3.2 Outline of the mechanism During inflation, the Higgs field is subject to quantum fluctuations. Depending on the value of , these quantum fluctuations could lead the Higgs beyond the barrier, and make it roll towards Planckian values. If is high enough and is not too far, thermal corrections can “rescue” the Higgs, bringing it back to the origin [10]. The mechanism relies on a tuning such that the following situation occurs [8]: 1. At -folds before the end of inflation, the Higgs background value is brought by quantum fluctuation to some . This configuration must be spatially homogeneous on an inflating local patch large enough to encompass our observable Universe today. We consider the de Sitter metric in flat slicing coordinates, . We will discuss later how precisely this assumption must be satisfied, and its plausibility. 2. When the classical evolution prevails over the quantum corrections, the Higgs field, starting from the initial position , begins to slow roll down the negative potential. This condition reads ∣∣ ∣∣V′eff(hin)3H20∣∣ ∣∣classical>cH02πquantum . (38) where is a constant of order 1, fixed to in [8]. We will explore what happens choosing or . From this starting point on, the classical evolution of the background Higgs value is described by ¨hcl+3H0˙hcl+V′eff(hcl)=0 (39) where the subscript indicates that this is a classical motion. Dots indicate derivatives with respect to time . 3. At the end of inflation, , the Higgs is rescued by thermal effects. This happens if the value of the Higgs field at the end of inflation is smaller than the position of the thermal potential barrier at reheating, . A significant amount of PBH arises only if this condition is barely satisfied in all Universe. This is why the homogeneity assumption in is needed. 
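As an aside, the classical background equation (39) is easy to integrate numerically. Below is a minimal Python sketch in e-fold time $N$ (so that eq. (39) becomes $h'' + 3h' + V_{\rm eff}'(h)/H_0^2 = 0$), using the approximate potential of eq. (32) with illustrative parameter values rather than the fitted SM ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in units of H0 = 1 (not the fitted SM values).
b = 0.1       # effective quartic parameter of eq. (32)
xi_H = 0.0    # non-minimal coupling, switched off here
h_cr = 4.0    # top of the barrier for xi_H = 0, in units of H0

def dV(h):
    # V'_eff(h) = -b h^3 ln(h^2/h_cr^2) - 12 xi_H H0^2 h, from eq. (32)
    return -b * h**3 * np.log(h**2 / h_cr**2) - 12.0 * xi_H * h

def rhs(N, y):
    h, hp = y
    # e-fold form of eq. (39): h'' = -3 h' - V'_eff(h)/H0^2
    return [hp, -3.0 * hp - dV(h)]

# Start at rest slightly beyond the barrier; stop once the fall is clearly runaway.
runaway = lambda N, y: y[0] - 10.0 * h_cr
runaway.terminal = True

sol = solve_ivp(rhs, (0.0, 20.0), [1.05 * h_cr, 0.0], events=runaway, rtol=1e-8)
if sol.t_events[0].size:
    print(f"h passes 10*h_cr after N = {sol.t_events[0][0]:.2f} e-folds")
else:
    print(f"h after 20 e-folds: {sol.y[0, -1]:.2f}")
```

Tuning the starting point toward the classicality boundary of eq. (38) stretches the fall over more e-folds, which is precisely the tuning the mechanism exploits.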
To compute condition we fix the initial value of the classical motion such that eq. (38) is satisfied with ; next, we maximise the obtained solving eq. (39) by tuning the amount of inflation where the fall happens, as parameterized by . The left panel of fig. 7 shows the initial value obtained following this procedure as a function of in units of . Smaller values of (i.e. smaller values of ) imply a flattening of the potential, and the classical dynamics during inflation is slower. The right side of the curves is limited by the classicality condition in eq. (38). A shifts the position of the potential barrier towards the limiting value in eq. (37) — which does not depend on — above which the rescue mechanism due to thermal effects is no-longer effective: its net effect is to reduce the number of -folds during which classical motion can happen (for fixed ). We anticipate here the feature of PBH formation which implies the restriction on the parameter space mentioned at point : Higgs fall must start at least -folds before the end of inflation. The collapse of the mass inside the horizon -folds before inflation end forms a PBH with mass (see also section 3.5) MPBH≈¯M2PlH0e2N . (40) PBH must be heavy enough to avoid Hawking evaporation. The lifetime of a PBH with mass due to Hawking radiation at Bekenstein-Hawking temperature is Γ−1PBH≈4×1011[F(MPBH)15.35]−1(MPBH1013g)3s , (41) where at g. BH heavier than g are cosmologically stable, and BH heavier than g are allowed by bounds on Hawking radiation as a (significant fraction of) DM. Since , imposing g implies a conservative lower limit on : Nin>12ln[7.2×1021H0¯MPl]=18.3 for H0=10−6¯MPl. (42) ### 3.3 Higgs fluctuations during inflation We now consider the evolution of Higgs perturbations during inflation. Expanding in Fourier space with comoving wavenumber ,888The comoving wavenumber is time independent, and it is related to the physical momentum via , which decreases as the space expands. the equation for the mode takes the form ¨δhk+3H0˙δhk+k2a2δhk+V′′eff(hcl)δh=0 , (43) where we neglected metric fluctuations. In terms of the number of -folds and of the Mukhanov-Sasaki variable it becomes d2ukdN2+dukdN+(k2a2H20−2)uk+V′′eff(hcl)H20uk=0 . (44) It is convenient to consider the evolution of the perturbation making reference to a specific moment before the end of inflation: at the initial value defined in section 3.2. We recall that in our convention at the end of inflation. Eq. (44) becomes d2ukdN2−dukdN+[(kainH0)2e2(N−Nin)−2]uk+V′′eff(hcl)H20uk=0 . (45) In this form, the Mukhanov-Sasaki equation is particularly illustrative. Consider the evolution of the perturbation for a mode of interest that we fix compared to the reference value at . In particular, we consider the case of a mode that is sub-horizon at the beginning of the classical evolution, that is . From eq. (45), we see that in the subsequent evolution with the exponential suppression will turn the mode from sub-horizon to super-horizon. We are now in the position to solve eq. (45). To this end, we need boundary conditions for and its time derivative. We use the Bunch-Davies conditions at for modes that are sub-horizon at the beginning of the classical evolution, , and we treat the real and imaginary part of separately since they behave like two independent harmonic oscillators for each comoving wavenumber . At generic -fold time , the perturbation is related to the Mukhanov-Sasaki variable by k3/2δhkH0∣∣∣N=(kainH0)eN−Nin(√kuk)∣∣∣N . (46) We show in the left panel of fig. 
8 our results for the time evolution of the classical background and the perturbation (both real and imaginary part) during the last -folds of inflation. As a benchmark value, we consider an initial sub-horizon mode with . After few -folds of inflation such mode exits the horizon: oscillations stop, and from this point on, further evolution is driven by the time derivative of the classical background. This is a trivial consequence of the equations of motion on super-horizon scales. Differentiating eq. (39) with respect to the cosmic time shows that and satisfy the same equation on super-horizon scales, and, therefore, they must be proportional, for  [8]. The proportionality function can be obtained by a matching procedure. Deep inside the horizon, in the limit , the Mukhanov-Sasaki variable reproduces the preferred vacuum of an harmonic oscillator in flat Minkowski space, and we have, after introducing the conformal time as , . Roughly matching the absolute value of the solutions at horizon crossing we determine the absolute value of as |δhk|=H0√2k3˙hcl(tk)˙hcl(t) , (47) where we indicate with the time of horizon exit for the mode — the time at which (equivalently, ). The number of -fold at horizon exit is given by kainH0=eNin−Nk . (48) #### Primordial curvature perturbations The primordial curvature perturbation
2020-07-09 18:23:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8344966769218445, "perplexity": 949.7657352986869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900614.47/warc/CC-MAIN-20200709162634-20200709192634-00587.warc.gz"}
http://springer.iq-technikum.de/referenceworkentry/10.1007/978-3-319-02370-0_124-1
# Encyclopedia of Geodesy

Living Edition | Editors: Erik Grafarend

# Disturbing Potential from Gravity Anomalies: From Globally Reflected Stokes Boundary Value Problem to Locally Oriented Multiscale Modeling

• Matthias Augustin • Christian Blick • Sarah Eberle • Willi Freeden

Living reference work entry. DOI: https://doi.org/10.1007/978-3-319-02370-0_124-1

## Keywords

Gravity anomaly; local support; cubature formula; gravity disturbance; spherical approximation

## Definition

Stokes problem, Fourier series expansion in terms of outer harmonics, classical global solution by convolving gravity anomalies against the Stokes kernel, regularization of the Stokes kernel, local multiscale approximation.

## Introduction

The traditional approach of physical geodesy (cf., e.g., Heiskanen and Moritz, 1967; Moritz, 2015) starts from the assumption that scalar gravity intensity is available over the whole Earth's surface. The gravitational part of the gravity potential can then be regarded as a harmonic function outside the Earth's surface. A classical approach to gravity field modeling was conceived by G.G. Stokes (1849). He proposed reducing the given gravity accelerations from the Earth's surface to the geoid (see, e.g., Listing, 1878), where the geoid is a level surface, i.e., a surface on which the potential value is constant. The disturbing potential, i.e., the difference between the actual and the reference potential, can then be obtained from a (third) boundary value problem of potential theory. M.S. Molodensky (Molodensky et al., 1960) proposed to improve Stokes' solution by "reducing" the gravity anomalies, given on the Earth's surface, to a "normal level surface" (telluroid). In both cases, the calculation via the associated integral formulas is usually performed in spherical approximation, although concepts of ellipsoidal realization are available (see Grafarend et al., 2015 and the references therein). In fact, H. Moritz (Hofmann-Wellenhof and Moritz, 2006) mentioned that the reference surface is never a sphere in any geometrical sense but always an ellipsoid. As the flattening of the Earth is very small, the ellipsoidal formulas can be expanded into power series in terms of the flattening so that terms containing higher orders can be neglected. "In this way one obtains formulas that are rigorously valid for the sphere, but approximately valid for the actual reference ellipsoid as well" (see Hofmann-Wellenhof and Moritz, 2006).

For practical evaluation, the Stokes convolution integral between the Stokes kernel and the gravity anomalies must be replaced by approximate cubature formulas using certain integration weights and knots. The approximate integration formulas are the essential problem in the framework of globally determining the disturbing potential and, subsequently, the geoidal height following Bruns' concept (see Bruns, 1878). In fact, we are confronted with the following dilemma: On the one hand, Weyl's law of equidistribution (cf. Weyl, 1916; Cui and Freeden, 1997) tells us that numerical integration and equidistribution of the nodal points are mathematically equivalent. This law holds true for any reference surface, i.e., telluroid, ellipsoid, as well as sphere.
In order to get better and better accuracy in approximate integration procedures, we thus need dense data sets that are equidistributed globally over the whole reference surface. On the other hand, even nowadays, observations of sufficient density and quality are only available for certain parts of the Earth's surface, and there are large areas, particularly at sea, where no suitable data are given at all. In fact, terrestrial gravity data coverage now and in the foreseeable future is far from satisfactory and totally inadequate for the purpose of high-precision geoid determination. As a consequence, Stokes-type integral formulas and their improvements based on Molodensky's idea cannot be applied on a global basis, neither in an ellipsoidal nor in a spherical framework. We have to take the specific heterogeneous data situation into account. A mathematical way out is an adequate multiscale method providing a "zooming-in" approximation in adaptation to the data distribution and density. In this contribution, our particular goal is a local high-resolution gravitational model reflecting the available data as far as possible. Since the flattening in a local approach is negligibly small, a calculation in spherical approximation is canonical. For simplicity, we restrict ourselves to error-free data. A multiscale signal-to-noise ratio method handling noisy data is proposed, e.g., in (Freeden and Maier, 2002). Our considerations are based on the work (Freeden et al., 1998; Freeden and Schreiner, 2006; Freeden and Wolf, 2009; Freeden and Gerhards, 2012; Freeden, 2015). The illustrations are essentially taken from (Freeden and Wolf, 2009) and the PhD thesis (Wolf, 2009).

## Stokes Wavelets

We begin with a recapitulation of the global Stokes approach in spherical approximation. Let $\Omega_R$ be the sphere with radius $R$ around the origin, and let the gravity anomaly $\Delta g \in \mathrm{C}^{(0)}(\Omega_R)$ with

$${\displaystyle \underset{\Omega_R}{\int}\Delta g\left(\boldsymbol{x}\right)\, ds\left(\boldsymbol{x}\right)}=0$$ (1)

and

$${\displaystyle \underset{\Omega_R}{\int}\Delta g\left(\boldsymbol{x}\right)\left({\boldsymbol{\upvarepsilon}}^k\cdot \boldsymbol{x}\right)\, ds\left(\boldsymbol{x}\right)}=0,\qquad k=1,2,3,$$ (2)

be given. Here, $ds$ is the surface element, and $\boldsymbol{\upvarepsilon}^1, \boldsymbol{\upvarepsilon}^2, \boldsymbol{\upvarepsilon}^3$ are the canonical Cartesian unit vectors in $\mathbb{R}^3$. Then, the disturbing potential $T:\overline{\Omega_R^{\mathrm{ext}}}\to \mathbb{R}$ is the unique solution of the exterior Stokes boundary-value problem (see also Freeden, 1978; Freeden and Wolf, 2009; Wolf, 2009):

(i) $T\in {\mathrm{C}}^{(1)}\left(\overline{\Omega_R^{\mathrm{ext}}}\right)\cap {\mathrm{C}}^{(2)}\left({\Omega}_R^{\mathrm{ext}}\right)$, i.e., $T$ is continuously differentiable in $\overline{\Omega_R^{\mathrm{ext}}}$ and twice continuously differentiable in ${\Omega}_R^{\mathrm{ext}}$,

(ii) $T$ is harmonic in ${\Omega}_R^{\mathrm{ext}}$, i.e., $\Delta_{\boldsymbol{x}} T = 0$ in ${\Omega}_R^{\mathrm{ext}}$,

(iii) $T$ is regular at infinity,

(iv) ${\displaystyle {\int}_{\Omega_R}T\left(\boldsymbol{y}\right){H}_{-n-1,k}^R\left(\boldsymbol{y}\right)\, ds\left(\boldsymbol{y}\right)}=0$, $n=0,1$, $k=1,\dots, 2n+1$,
(v) $-\dfrac{\boldsymbol{x}}{\left|\boldsymbol{x}\right|}\cdot {\nabla}_{\boldsymbol{x}}T\left(\boldsymbol{x}\right)-\dfrac{2}{\left|\boldsymbol{x}\right|}\,T\left(\boldsymbol{x}\right)=\Delta g\left(\boldsymbol{x}\right)$, $\boldsymbol{x}\in {\Omega}_R$.

${\Omega}_R^{\mathrm{ext}}$ denotes the exterior of the sphere ${\Omega}_R$. $T$ is determined by the Stokes integral formula

$$T\left(R\boldsymbol{\xi} \right)=\frac{1}{4\pi R}{\displaystyle \underset{\Omega_R}{\int } St\left(R\boldsymbol{\xi}, R\boldsymbol{\eta} \right)\Delta g\left(R\boldsymbol{\eta} \right)\,ds\left(R\boldsymbol{\eta} \right)}=\frac{R}{4\pi }{\displaystyle \underset{\Omega}{\int } St\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)\Delta g\left(R\boldsymbol{\eta} \right)\;ds\left(\boldsymbol{\eta} \right)},$$ (3)

with the Stokes kernel

$$St\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)=1-5\,\boldsymbol{\xi} \cdot \boldsymbol{\eta} -6{\left(S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^{-1}+S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)-3\,\boldsymbol{\xi} \cdot \boldsymbol{\eta}\, \ln \left(\frac{1}{S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)}+\frac{1}{{\left(S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^2}\right),$$ (4)

$\boldsymbol{\xi}, \boldsymbol{\eta} \in \Omega = \Omega_1$, where we have used the abbreviation

$$S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)=\frac{\sqrt{2}}{\sqrt{1-\boldsymbol{\xi} \cdot \boldsymbol{\eta}}},\qquad 1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \ne 0.$$ (5)

To regularize the improper integral Eq. 3, we replace the zonal kernel $S$ by the space-regularized zonal kernel (see, e.g., Freeden and Schreiner, 2006; Freeden and Wolf, 2009; Wolf, 2009)

$${S}^{\rho }(t)=\left\{\begin{array}{ll}\dfrac{R}{\rho}\left(3-\dfrac{2{R}^2}{\rho^2}\left(1-t\right)\right), & 0<1-t\le \dfrac{\rho^2}{2{R}^2},\\[2mm] \dfrac{\sqrt{2}}{\sqrt{1-t}}, & \dfrac{\rho^2}{2{R}^2}<1-t\le 2.\end{array}\right.$$ (6)

Clearly, the function $S^{\rho}$ (depicted in Figure 1) is continuously differentiable on the interval $[-1, 1]$, and we have (see Freeden and Wolf, 2009; Wolf, 2009)

$$\left({S}^{\rho}\right)^{\prime }(t)=\left\{\begin{array}{ll}\dfrac{2{R}^3}{\rho^3}, & 0\le 1-t\le \dfrac{\rho^2}{2{R}^2},\\[2mm] \dfrac{1}{\sqrt{2}\,{\left(1-t\right)}^{3/2}}, & \dfrac{\rho^2}{2{R}^2}<1-t\le 2.\end{array}\right.$$ (7)

Furthermore, the functions $S$ and $S^{\rho}$ are monotonically increasing on the interval $[-1, 1)$, such that $S(t) \ge S^{\rho}(t) \ge S(-1) = S^{\rho}(-1) = 1$ holds true on the interval $[-1, 1)$. Considering the difference between the kernel $S$ and its linearly regularized version $S^{\rho}$, we find

$$S(t)-{S}^{\rho }(t)=\left\{\begin{array}{ll}\dfrac{\sqrt{2}}{\sqrt{1-t}}-\dfrac{R}{\rho}\left(3-\dfrac{2{R}^2}{\rho^2}\left(1-t\right)\right), & 0<1-t\le \dfrac{\rho^2}{2{R}^2},\\[2mm] 0, & \dfrac{\rho^2}{2{R}^2}<1-t\le 2.\end{array}\right.$$ (8)

It can be shown (Freeden and Schreiner, 2006) that the following lemma holds:

### Lemma 1

For $F \in \mathrm{C}^{(0)}(\Omega)$ and $S^{\rho}$ defined by Eq.
6, we have

$$\underset{\rho \to 0+}{ \lim}\,\underset{\boldsymbol{\xi} \in \Omega }{ \sup}\left|{\displaystyle \underset{\Omega }{\int }S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)F\left(\boldsymbol{\eta} \right)\, ds\left(\boldsymbol{\eta} \right)}-{\displaystyle \underset{\Omega }{\int }{S}^{\rho}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)F\left(\boldsymbol{\eta} \right)\, ds\left(\boldsymbol{\eta} \right)}\right|=0.$$ (9)

To obtain another useful convergence result, we observe that for all $t \in [-1, 1)$ with $1-t\le \frac{\rho^2}{2{R}^2}$

$$\ln \left(\frac{1}{S(t)}+\frac{1}{{\left(S(t)\right)}^2}\right)- \ln \left(\frac{1}{S^{\rho }(t)}+\frac{1}{{\left({S}^{\rho }(t)\right)}^2}\right)= \ln \left(1+S(t)\right) - \ln \left(1+{S}^{\rho }(t)\right) - 2\left( \ln S(t) - \ln {S}^{\rho }(t)\right)$$ (10)

and, thus,

$$\left| \ln \left(\frac{1}{S(t)}+\frac{1}{{\left(S(t)\right)}^2}\right)- \ln \left(\frac{1}{S^{\rho }(t)}+\frac{1}{{\left({S}^{\rho }(t)\right)}^2}\right)\right|=O\left(\left|S(t)-{S}^{\rho }(t)\right|\right).$$ (11)

This leads to the following result:

### Lemma 2

Let $S$ be the singular kernel given by $S(t)=\frac{\sqrt{2}}{\sqrt{1-t}}$ and let $S^{\rho}$, $\rho \in (0, 2R]$, be the corresponding (Taylor-)linearized regularized kernel defined by Eq. 6. Then

$$\underset{\rho \to 0+}{ \lim }{\displaystyle \underset{-1}{\overset{1}{\int }}\left| \ln \left(1+S(t)\right)- \ln \left(1+{S}^{\rho }(t)\right)\right|dt=0,}$$ (12)

$$\underset{\rho \to 0+}{ \lim }{\displaystyle \underset{-1}{\overset{1}{\int }}\left| \ln \left(\frac{1}{S(t)}+\frac{1}{{\left(S(t)\right)}^2}\right)- \ln \left(\frac{1}{S^{\rho }(t)}+\frac{1}{{\left({S}^{\rho }(t)\right)}^2}\right)\right|dt=0,}$$ (13)

$$\underset{\rho \to 0+}{ \lim }{\displaystyle \underset{-1}{\overset{1}{\int }}\left({\left(S(t)\right)}^2-{\left({S}^{\rho }(t)\right)}^2\right)\sqrt{1-{t}^2}\,dt=0.}$$ (14)

The regularization given in Eq.
6 leads us to the following regularized global representation of the disturbing potential corresponding to gravity anomalies as boundary data (see Freeden and Wolf, 2009): $${T}^{\rho}\left(R\boldsymbol{\xi} \right)=\frac{R}{4\pi }{\displaystyle \underset{\Omega}{\int }S{t}^{\rho}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)\Delta g\left(R\boldsymbol{\eta} \right) ds\left(\boldsymbol{\eta} \right)}$$ (15) $$\begin{array}{l}S{t}^{\rho}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)= 1-5\boldsymbol{\xi} \cdot \boldsymbol{\eta} -6{\left(S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^{-1}\hfill \\ {}+ {S}^{\rho}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)-3\boldsymbol{\xi} \cdot \boldsymbol{\eta} \ln \left(\frac{1}{{\left({S}^{\rho}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^2}+\frac{1}{S^{\rho}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)}\right)\hfill \\ {}=1-5\boldsymbol{\xi} \cdot \boldsymbol{\eta} -6\frac{\sqrt{1-\boldsymbol{\xi} \cdot \boldsymbol{\eta}}}{\sqrt{2}}\hfill \\ {}\begin{array}{l}\hfill \\ {}+\left\{\begin{array}{ll}\frac{R}{\rho}\left(3-\frac{2{R}^2}{\rho^2}\left(1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)\hfill & \hfill \\ {}-3\boldsymbol{\xi} \cdot \boldsymbol{\eta} \ln \left(1+\frac{R}{\rho}\left(3-\frac{2{R}^2}{\rho^2}\left(1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)\right)\hfill & \hfill \\ {}+6\boldsymbol{\xi} \cdot \boldsymbol{\eta} \ln \left(\frac{R}{\rho}\left(3-\frac{2{R}^2}{\rho^2}\left(1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)\right),\hfill & 0\le 1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \le \frac{\rho^2}{2{R}^2},\hfill \\ {}\frac{\sqrt{2}}{\sqrt{1-\boldsymbol{\xi} \cdot \boldsymbol{\eta}}}-3\boldsymbol{\xi} \cdot \boldsymbol{\eta} \ln \left(\frac{1-\boldsymbol{\xi} \cdot \boldsymbol{\eta}}{2}+\frac{\sqrt{1-\boldsymbol{\xi} \cdot \boldsymbol{\eta}}}{\sqrt{2}}\right),\hfill & \frac{\rho^2}{2{R}^2}<1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \le 2,\hfill \end{array}\right.\hfill \end{array}\end{array}$$ (16) for ξ, η ∈ Ω and ρ ∈ (0, 2R]. Here, we have made use of Eq. 10. With Lemma 1 and Lemma 2, we obtain ### Theorem 3 Suppose that T is the solution of the Stokes boundary-value problem. Let T ρ , ρ ∈ (0, 2R], represent its regularization as in Eq. 15 . Then $$\underset{\rho \to 0+}{ \lim}\underset{\boldsymbol{\xi} \in \Omega}{ \sup}\left|T\left(R\boldsymbol{\xi} \right)-{T}^{\rho}\left(R\boldsymbol{\xi} \right)\right|=0.$$ (17) The linear space regularization technique enables us to formulate multiscale solutions for the disturbing potential from gravity anomalies. For numerical application, we have to go over to scale-discretized approximations of the solution to the boundary-value problem. For that purpose, we choose a monotonically decreasing sequence $${\left\{{\rho}_j\right\}}_{j\in {\mathrm{\mathbb{N}}}_0}$$, such that $$\begin{array}{ll}\underset{j\to \infty }{ \lim }{\rho}_j=0,\hfill & {\rho}_0=2R.\hfill \end{array}$$ (18) A particularly important example, which we use in our numerical implementations below, is the dyadic sequence with $$\begin{array}{lll}{\rho}_j={2}^{1-j}R,\hfill & j\in \mathrm{\mathbb{N}},\hfill & {\rho}_0=2R.\hfill \end{array}$$ (19) It is easily seen that $$2{\rho}_{j+1}={\rho}_j$$,$$j\in {\mathrm{\mathbb{N}}}_0$$, is the relation between two consecutive elements of the sequence. 
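As a brief aside for implementation-minded readers, the following Python sketch (not part of the original presentation; the radius normalization, grid resolution, and number of scales are arbitrary choices made here for illustration) evaluates the singular kernel $S$ of Eq. 5 and its regularization $S^{\rho}$ of Eq. 6 on the dyadic scales of Eq. 19, and checks numerically the $L^1$ convergence behind Lemma 1:

```python
# Illustrative sketch of Eqs. 5, 6, and 19; R, the grid, and the scale
# count are assumptions for this demo, not values from the paper.
import numpy as np

R = 1.0  # sphere radius, normalized here

def S(t):
    """Singular zonal kernel S(t) = sqrt(2)/sqrt(1 - t), Eq. 5."""
    return np.sqrt(2.0) / np.sqrt(1.0 - t)

def S_rho(t, rho):
    """Taylor-linearized regularization of S, Eq. 6."""
    near = (1.0 - t) <= rho**2 / (2.0 * R**2)  # regularization zone near t = 1
    lin = (R / rho) * (3.0 - (2.0 * R**2 / rho**2) * (1.0 - t))
    return np.where(near, lin, np.sqrt(2.0) / np.sqrt(1.0 - t))

# dyadic scale sequence rho_j = 2^(1-j) R, Eq. 19
rhos = [2.0**(1 - j) * R for j in range(8)]

# The L1 distance between S and S^rho on [-1, 1) shrinks as rho -> 0,
# the mechanism behind Lemma 1 (trapezoidal rule on a grid that stops
# just short of the singularity at t = 1).
t = np.linspace(-1.0, 1.0 - 1e-9, 2_000_001)
for j, rho in enumerate(rhos):
    err = np.trapz(np.abs(S(t) - S_rho(t, rho)), t)
    print(f"j = {j}, rho = {rho:.4f}, L1 distance = {err:.3e}")
```

The printed distances decrease monotonically with the scale index $j$, mirroring the statement of Lemma 1 for a constant test function.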
In correspondence to the sequence ${\left\{{\rho}_j\right\}}_{j\in {\mathbb{N}}_0}$, a sequence ${\left\{S{t}^{\rho_j}\right\}}_{j\in {\mathbb{N}}_0}$ of discrete versions of the regularized Stokes kernel Eq. 16, so-called Stokes scaling functions, is available. Figure 2 shows a graphical illustration of the regularized Stokes kernels for different scales $j$. The regularized Stokes wavelets, forming the sequence ${\left\{WS{t}^{\rho_j}\right\}}_{j\in {\mathbb{N}}_0}$, are understood to be the differences of two consecutive regularized Stokes scaling functions,

$$WS{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)=S{t}^{\rho_{j+1}}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)-S{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right),\qquad j\in {\mathbb{N}}_0.$$ (20)

These wavelets possess the numerically convenient property of local support. More specifically, the function $\boldsymbol{\eta} \mapsto WS{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)$, $\boldsymbol{\eta} \in \Omega$, vanishes everywhere outside the spherical cap ${\Gamma}_{\rho_j^2/2{R}^2}\left(\boldsymbol{\xi} \right)$. Explicitly written out, we have

$$WS{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)=\left\{\begin{array}{ll}{S}^{\rho_{j+1}}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)-3\,\boldsymbol{\xi} \cdot \boldsymbol{\eta}\, \ln \left(\dfrac{1}{S^{\rho_{j+1}}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)}+\dfrac{1}{{\left({S}^{\rho_{j+1}}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^2}\right) & \\ \quad -\,{S}^{\rho_j}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)+3\,\boldsymbol{\xi} \cdot \boldsymbol{\eta}\, \ln \left(\dfrac{1}{S^{\rho_j}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)}+\dfrac{1}{{\left({S}^{\rho_j}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^2}\right), & 0\le 1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \le \dfrac{\rho_{j+1}^2}{2{R}^2},\\[3mm] S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)-3\,\boldsymbol{\xi} \cdot \boldsymbol{\eta}\, \ln \left(\dfrac{1}{S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)}+\dfrac{1}{{\left(S\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^2}\right) & \\ \quad -\,{S}^{\rho_j}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)+3\,\boldsymbol{\xi} \cdot \boldsymbol{\eta}\, \ln \left(\dfrac{1}{S^{\rho_j}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)}+\dfrac{1}{{\left({S}^{\rho_j}\left(\boldsymbol{\xi} \cdot \boldsymbol{\eta} \right)\right)}^2}\right), & \dfrac{\rho_{j+1}^2}{2{R}^2}<1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \le \dfrac{\rho_j^2}{2{R}^2},\\[3mm] 0, & \dfrac{\rho_j^2}{2{R}^2}<1-\boldsymbol{\xi} \cdot \boldsymbol{\eta} \le 2.\end{array}\right.$$ (21)

Let $J \in {\mathbb{N}}_0$ be an arbitrary scale. Suppose that $S{t}^{\rho_J}$ is the regularized Stokes scaling function at scale $J$. Furthermore, let $WS{t}^{\rho_j}$, $j = J_0,\dots,J-1$, be the regularized Stokes wavelets as given by Eq. 21. Then an easy manipulation shows that

$$S{t}^{\rho_J}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)=S{t}^{\rho_{J_0}}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)+{\displaystyle \sum_{j={J}_0}^{J-1}WS{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)}.$$ (22)

The local support of the Stokes wavelets within the framework of Eq. 22 deserves a closer look: Following the sequence given by Eq.
19, we start with a globally supported scaling kernel $S{t}^{\rho_0}=S{t}^{2R}$. Then we add more and more wavelet kernels $WS{t}^{\rho_j}$, $j = 0,\dots,J-1$, to arrive at the required scaling kernel $S{t}^{\rho_J}$. It is of particular importance that the functions $\boldsymbol{\eta} \mapsto WS{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)$, $\boldsymbol{\xi} \in \Omega$ fixed, are $\boldsymbol{\xi}$-zonal functions and possess spherical caps as local supports. Clearly, the support of the wavelets $WS{t}^{\rho_j}$ becomes more and more localized for increasing scales $j$. In conclusion, the calculation of an integral representation for the disturbing potential $T$ starts with a global trend approximation using the scaling kernel at scale $j = 0$ (of course, this requires data on the whole sphere, but the data can be rather sparsely distributed since they only serve as a trend approximation). Step by step, we are able to refine this approximation by use of wavelets. The increasing spatial localization of the wavelets successively allows a better spatial resolution of the disturbing potential $T$. Additionally, the local supports of the wavelets bring a computational advantage, since the integration is reduced from the entire sphere to smaller and smaller spherical caps. Consequently, the presented numerical technique is capable of handling heterogeneously distributed data sets in adaptation to their mutual spacing. All in all, keeping the space-localizing properties of the regularized Stokes scaling and wavelet functions in mind, we are able to establish an approximation of the disturbing potential $T$ from gravity anomalies $\Delta g$ in the form of a zooming-in multiscale method. A low-pass filtered version of the disturbing potential $T$ at scale $j$ in an integral representation over the unit sphere $\Omega$ is given by

$${T}^{\rho_j}\left(R\boldsymbol{\xi} \right)=\frac{R}{4\pi }{\displaystyle \underset{\Omega}{\int}\Delta g\left(R\boldsymbol{\eta} \right) S{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)\, ds\left(\boldsymbol{\eta} \right)},\qquad \boldsymbol{\xi} \in \Omega,$$ (23)

while the $j$-scale band-pass filtered version of $T$ leads to the integral representation

$$W{T}^{\rho_j}\left(R\boldsymbol{\xi} \right)=\frac{R}{4\pi }{\displaystyle \underset{\Gamma_{\rho_j^2/2{R}^2}\left(\boldsymbol{\xi} \right)}{\int}\Delta g\left(R\boldsymbol{\eta} \right) WS{t}^{\rho_j}\left(\boldsymbol{\xi}, \boldsymbol{\eta} \right)\, ds\left(\boldsymbol{\eta} \right)},\qquad \boldsymbol{\xi} \in \Omega .$$ (24)

### Theorem 4

Let ${T}^{\rho_{J_0}}$ be the regularized version of the disturbing potential at some arbitrary initial scale $J_0$ as given in Eq. 23, and let $W{T}^{\rho_j}$, $j = J_0, J_0 + 1,\dots$, be given by Eq. 24.
Then, the following reconstruction formula holds true:

$$\underset{J\to \infty }{ \lim}\,\underset{\boldsymbol{\xi} \in \Omega}{ \sup}\left|T\left(R\boldsymbol{\xi} \right)-\left({T}^{\rho_{J_0}}\left(R\boldsymbol{\xi} \right)+{\displaystyle \sum_{j={J}_0}^{J-1}W{T}^{\rho_j}\left(R\boldsymbol{\xi} \right)}\right)\right|=0.$$

The multiscale procedure (wavelet reconstruction) developed here can be illustrated by the following scheme:

$$\begin{array}{ccccc} & W{T}^{\rho_{J_0}} & & W{T}^{\rho_{J_0+1}} & \\ & \searrow & & \searrow & \\ {T}^{\rho_{J_0}} & \to + \to & {T}^{\rho_{J_0+1}} & \to + \to & {T}^{\rho_{J_0+2}}\ \dots \end{array}$$ (25)

Consequently, a tree algorithm based on regularization in the space domain has been realized for determining the disturbing potential $T$ from locally available data sets of gravity anomalies $\Delta g$. An example is shown in Figure 3. The fully discretized multiscale approximations have the following representations:

$${T}^{\rho_J}\left(R\boldsymbol{\xi} \right)\simeq \frac{R}{4\pi }{\displaystyle \sum_{k=1}^{N_J}{w}_k^{N_J}\Delta g\left(R{\boldsymbol{\eta}}_k^{N_J}\right)} S{t}^{\rho_J}\left(\boldsymbol{\xi}, {\boldsymbol{\eta}}_k^{N_J}\right),\qquad \boldsymbol{\xi} \in \Omega,$$ (26)

$$W{T}^{\rho_j}\left(R\boldsymbol{\xi} \right)\simeq \frac{R}{4\pi }{\displaystyle \sum_{k=1}^{N_j}{w}_k^{N_j}\Delta g\left(R{\boldsymbol{\eta}}_k^{N_j}\right)} WS{t}^{\rho_j}\left(\boldsymbol{\xi}, {\boldsymbol{\eta}}_k^{N_j}\right),\qquad \boldsymbol{\xi} \in \Omega,$$ (27)

where ${\boldsymbol{\eta}}_k^{N_j}$ are the integration knots and ${w}_k^{N_j}$ the integration weights. Whereas the sum in Eq. 26 has to be extended over the whole sphere $\Omega$, the summation in Eq. 27 has to be computed only over the local supports of the wavelets (the symbol $\simeq$ means that the error between the right- and the left-hand side can be neglected). Figures 4, 5, and 6 show that the method presented here resolves a dilemma in geodesy: Common global solution methods need ever denser, globally equidistributed data sets over the whole sphere $\Omega_R$ to obtain a better approximation quality (according to Weyl's law of equidistribution). The reality, however, is quite different. On the one hand, we have large gaps in the data sets, particularly at sea. On the other hand, there are some regions where the accuracy and density of the available data sets are quite remarkable. The solution offered by our wavelet method is to start with a coarse, global approximation of the disturbing potential using a scaling function of scale $J_0$ and to add local refinement in the form of band-pass filtered versions using Stokes wavelets. This can be realized because these wavelets have local support. The procedure allows the incorporation of heterogeneous data sets in a way that locally improves the approximation of the disturbing potential despite nonequidistributed data sets.

## Conclusion

So far, as pointed out in Hofmann-Wellenhof and Moritz (2006), many more gravity anomalies than gravity disturbances are available and are being processed. In the future, we may expect a change in the practice of physical geodesy because of GPS.
In this respect, the approach presented here for the Stokes problem can be formulated for the Neumann problem as well (see, e.g., Wolf, 2009; Freeden and Gerhards, 2012), i.e., we are able to make the transition from the globally reflected Neumann problem to locally oriented multiscale modeling in an analogous way.

## References and Reading

1. Bruns, E. H., 1878. Die Figur der Erde. Publikationen des Königlichen Preussischen Geodätischen Instituts. Berlin: P. Stankiewicz Buchdruckerei.
2. Cui, J., and Freeden, W., 1997. Equidistribution on the sphere. SIAM Journal on Scientific Computing, 18, 595–609.
3. Freeden, W., 1978. An application of a summation formula to numerical computation of integrals over the sphere. Bulletin Géodésique, 52, 165–175.
4. Freeden, W., 2015. Geomathematics: its role, its aim, and its potential. In Freeden, W., Nashed, Z., and Sonar, T. (eds.), Handbook of Geomathematics, 2nd edn. Heidelberg: Springer, pp. 3–78.
5. Freeden, W., and Gerhards, C., 2012. Geomathematically Oriented Potential Theory. Boca Raton: Chapman & Hall/CRC.
6. Freeden, W., and Maier, T., 2002. On multiscale denoising of spherical functions: basic theory and numerical aspects. Electronic Transactions on Numerical Analysis (ETNA), 14, 56–78.
7. Freeden, W., and Schreiner, M., 2006. Local multiscale modelling of geoid undulations from deflections of the vertical. Journal of Geodesy, 79, 641–651.
8. Freeden, W., and Wolf, K., 2009. Klassische Erdschwerefeldbestimmung aus der Sicht moderner Geomathematik. Mathematische Semesterberichte, 56, 53–77.
9. Freeden, W., Gervens, T., and Schreiner, M., 1998. Constructive Approximation on the Sphere (With Applications to Geomathematics). Oxford: Oxford Science Publications/Clarendon Press.
10. Grafarend, E. W., Klapp, M., and Martinec, Z., 2015. Spacetime modelling of the Earth's gravity field by ellipsoidal harmonics. In Freeden, W., Nashed, Z., and Sonar, T. (eds.), Handbook of Geomathematics, 2nd edn. Heidelberg: Springer, pp. 381–496.
11. Heiskanen, W. A., and Moritz, H., 1967. Physical Geodesy. San Francisco: W.H. Freeman.
12. Hofmann-Wellenhof, B., and Moritz, H., 2006. Physical Geodesy, 2nd edn. Wien/New York: Springer.
13. Listing, J. B., 1878. Neue geometrische und dynamische Constanten des Erdkörpers. Nachrichten von der Königlichen Gesellschaft der Wissenschaften und der Georg-Augusts-Universität zu Göttingen, pp. 749–815.
14. Molodensky, M. S., Eremeev, V. F., and Yurkina, M. I., 1960. Methods for Study of the External Gravitational Field and Figure of the Earth. Moscow: Trudy TsNIIGAiK, Geodezizdat, p. 131 (English translation: Israel Program for Scientific Translation, Jerusalem, 1962).
15. Moritz, H., 2015. Classical physical geodesy. In Freeden, W., Nashed, Z., and Sonar, T. (eds.), Handbook of Geomathematics, 2nd edn. Heidelberg: Springer, pp. 253–290.
16. Stokes, G. G., 1849. On the variation of gravity on the surface of the Earth. Transactions of the Cambridge Philosophical Society, 8, 672–695.
17. Weyl, H., 1916. Über die Gleichverteilung von Zahlen mod Eins. Mathematische Annalen, 77, 313–352.
18. Wolf, K., 2009. Multiscale Modeling of Classical Boundary Value Problems in Physical Geodesy by Locally Supported Wavelets. PhD thesis, University of Kaiserslautern, Geomathematics Group.
2020-07-15 07:54:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8806321024894714, "perplexity": 2012.7313744960638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00027.warc.gz"}
https://www.khattam.info/category/windows
# Category Archives: Windows

Note: If you are just looking for the final solution, skip to the last paragraph.

These symptoms (corrupted downloads over plain HTTP) could have various causes. I suspected a malware infection, but the latest updated version of Bitdefender Free did not detect any. Neither did Malwarebytes. Also, the bug was reproduced inside an isolated virtual machine with a different OS, using NAT. So, a malware infection was ruled out. Another possible cause is faulty networking hardware or a buggy driver. BitTorrent has integrity checks at the application layer, so it automatically repairs corruption due to faulty hardware and drivers. However, it was strange that downloads over HTTPS and SSH VPN connections were not buggy. The network adapter I was having this problem with is the Intel® Centrino® Wireless-N 1030, driver version 14.3.0.6. So, I decided to try to update the driver. I ran the Intel® Driver Update Utility and found that a newer version of the driver (application version 16.7.0, driver version 15.9.2.1) was available for download. So, I downloaded it using the SSH proxy-forwarded tunnel (you could download it on a different computer, or set up a tunnel and download using it) and installed it. However, updating the driver didn't help either.

So I suspected that, since secure connections were fine, something intercepting non-secure connections could be the problem. The first suspect was Bitdefender Antivirus. I removed it and rebooted. The problem no longer existed. All downloads following the Bitdefender uninstall were fine. I searched for this and found that other users were having the same problem too, and that it was not a Bitdefender bug but a Windows bug which also affects other security suites that monitor HTTP traffic. The solution is to install Windows hotfix KB2735855. Download it here for 32-bit Windows 7 and here for 64-bit Windows 7. According to Microsoft's article regarding this bug, it only applies to Windows 7, Windows Server 2008 R2, and Windows Web Server 2008 R2.

# [SOLVED] Failed installing Samsung Mobile MTP Device Error code 10: This device cannot start

While trying to connect a Samsung Galaxy S3 i9300 with Android 4.0 Ice Cream Sandwich to a Windows 7 64-bit PC with Kies installed, the device driver installation failed and Device Manager showed "The device cannot start". While searching for a fix, I found an XDA Developers forum post offering a solution for this issue; that fix didn't work, but thanks to the thread, I finally managed to get it working. I'm sorry, but I don't know how or why it works, and the procedure involves editing the registry, so proceed with caution. Having said that, here is what you can do to get it to work.

Prerequisite: Before continuing with the procedure, make sure you have the latest drivers for the Samsung device. To do that, connect your device, wait for it to fail, and then open Windows Update and click Check for Updates. If you find any Samsung driver update, install it and restart if you have to.

Here is the process:

1. Open Registry Editor (regedit.exe).
2. Navigate to the following key and perform a backup by right clicking on it, selecting Export, and saving anywhere: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class. This step is a safety measure; you can restore the settings by double clicking the reg file if something goes wrong.
3. Now, under Class, find the following two keys and look for UpperFilters in the right pane:

{36FC9E60-C465-11CF-8056-444553540000}
{EEC5AD98-8080-425f-922A-DABF3DE3F69A}

If you find UpperFilters, just right click on it and select Delete to delete it. Remember to do it for both keys.

4. Now, disconnect your phone if you have it connected and try reconnecting.

Hope it works for you too. If it doesn't, or if you mess something up, you may want to restore your registry by double clicking the reg file backed up in step 2 and look for some other solution. Best of luck.

# [HOWTO] Install M2Crypto for Python 2.7 on Windows

I am using Windows 7 Home Premium 64-bit and had to set up M2Crypto, required for a Python program that I am writing in Python 2.7 (32-bit). After hours of trying and searching, I finally managed to get it installed and working. Here are some of the things that I tried; please move on to "How I managed to get it to work" if you don't want to go through the boring "What I tried" stuff.

What I tried: First of all, I tried to install M2Crypto via pip. I got an error saying that swig.exe was not found, so I downloaded a copy of swigwin and extracted it, then added the directory to the system PATH. I installed easy_install (setuptools), opened the terminal, changed dir to Python27\Scripts, and installed pip (easy_install pip). Then, I tried to install M2Crypto using pip:

pip install M2Crypto

Unfortunately, I got the following errors:

SWIG\_m2crypto.i(31) : Error: Unable to find 'openssl\opensslv.h'
SWIG\_m2crypto.i(45) : Error: Unable to find 'openssl\safestack.h'
SWIG\_evp.i(12) : Error: Unable to find 'openssl\opensslconf.h'
SWIG\_ec.i(7) : Error: Unable to find 'openssl\opensslconf.h'
error: command 'swig.exe' failed with exit status 1

So, I downloaded the OpenSSL source files and copied the include directory to swig_dir/lib, but then I got further errors from swig. I also tried giving the build_dist parameter to setup.py, but in vain. I was thinking of compiling OpenSSL myself, but I figured that I would require Visual Studio (which I do not have). I thought of using MinGW, but it turns out you need to compile Python with MinGW for that to work. I almost gave up on this, but then I found that some developers had contributed builds of OpenSSL and M2Crypto, so that I could just install them. Move on to the next section to do it yourself.

How I managed to get it to work

I downloaded pre-built binaries of M2Crypto built against OpenSSL 1.0 from the M2Crypto Wiki. The one that I downloaded is M2Crypto-0.21.1.win32-py2.7.msi. Then, I set it up. It detected my Python installation and installed the package. However, when I ran the Python script, I got an import error at the line:

import __m2crypto

I don't know if it was because I did not restart my computer after installation or because OpenSSL DLLs were missing; in either case, you may want to install Win32 OpenSSL V1 Light, and then it should work. Hope this helps.

# [HOWTO] Install easy_install and pip in Python 3 (Windows)

I am just starting with Python 3 on Windows and I wanted to install easy_install and/or pip for installing other available packages easily. However, I found that a setuptools installer for Python 3.3.2 (the version I am using) is not available. I discovered distribute, a fork of setuptools, which provides easy_install. I downloaded the source from the Python Package Index page for distribute and extracted it.
In the elevated command prompt (cmd → Run as Administrator), I changed to the extracted directory and then ran distribute_setup.py. Then, easy_install was successfully installed in Python_Directory\Scripts. Then, I could install pip by changing directory to Scripts and running the following:

easy_install pip

Hope this helps.

# [HOWTO] Make Street Bike Fury run in Windows 7

I had played a great indie game a few years back. It was called Street Bike Fury, from S64 Games. The game is no longer being developed because of the death of the developer. I had tried to make this game run earlier but failed. However, I have found a way now and I would like to share it. After some research, I found that the game was developed using Game Maker 6. Programs made in this only run in Windows XP, not in Windows 7 or Windows Vista. However, on the YoYo Games Wiki, there is a tool that can fix this. First, download and set up Street Bike Fury from the official download page. Then, download the Game Maker Conversion Program from here. Set up the game but don't launch it. Use GM_Convert_Game to patch the exe. Then you will be able to run it. Hope this helps.

# [SOLVED] Error: 0x800F0A12 while installing Windows 7 Service Pack 1

When trying to update Windows 7 Ultimate to Service Pack 1 using Windows Update, I got error 0x800F0A12. I have two hard disks, of which one has Fedora 15 and the other has Windows 7; the one with Fedora 15 had Grub installed. I disconnected the Fedora hard disk (the one with Grub) and tried again, and the issue was resolved. If you have a single hard disk with some other OS installed, you may face a similar error which is a little more difficult to solve. Before performing this, make sure to back up your important data and have recovery tools handy. Please proceed at your own risk. To solve the issue, you have to set the partition with Windows 7 as active using Disk Management (Win+R: diskmgmt.msc) by right clicking the partition and selecting "Mark Partition as Active". After the update has been installed, make sure to set your other partition, the one which has the boot loader, as active again.

# [HOWTO] Firebug on Firefox 6

Firebug is not officially compatible with Firefox 6 since, as of this writing, Firefox 6 is in early alphas. However, forcing compatibility works. For that, just install the Nightly Tester Tools addon, check "Force Addon Compatibility" in Tools > Nightly Tester Tools, and restart Firefox. Now, you will be able to install and use Firebug.

# [HOWTO] Setup step debugging PHP in Netbeans on Windows with XAMPP

I am using Netbeans 6.9.1 on Windows 7 with XAMPP 1.7.4 installed. I wanted to enable step debugging for PHP like I do on my PC with Fedora (see here for Netbeans PHP step debugging on Fedora). To do that, I had to follow these steps:

Edit the php.ini file (xampp\php\php.ini) in a text editor to uncomment (remove the leading semicolon from) the following lines:

zend_extension = "D:\xampp\php\ext\php_xdebug.dll"
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "localhost"
xdebug.remote_port = 9000

Also, search for the line containing "xdebug.remote_enable" and change it to:

xdebug.remote_enable = On

Then restart the Apache service. Now, open the file Program Files\NetBeans 6.9.1\etc\netbeans.conf and find the line containing "netbeans_default_options".
Add the text "-J-Dorg.netbeans.modules.php.dbgp.level=400" at the end of the line so that it looks like the following:

netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dorg.netbeans.modules.php.dbgp.level=400"

Now, restart Netbeans and select Debug > Debug Project. However, I have found it to be very slow on Windows compared to the installation on Fedora. Hope this helps.

# [SOLVED] Warning: imagettftext() [function.imagettftext]: Invalid font filename in path\to\php\file.php on line NN

I am running XAMPP 1.7.4 with PHP 5.3.5 on Windows 7. When using any text-related GD library function such as imagettftext(), I get the following error:

Warning: imagettftext() [function.imagettftext]: Invalid font filename in path/to/php/file.php on line NN

Normally, this happens when the font is missing from GDFONTPATH and can usually be resolved by using the correct font folder and the correct font file name in the PHP code. However, in this particular case, I've figured that this is a problem with the GD library or PHP, because I am still getting the error even though I have done everything right. I tried WAMP, but still in vain. When I tried the same on my Linux machine, everything was fine. Here is what I did as a workaround: I removed the putenv line and referred to fonts by a relative path. For example, the following sample code is the one that does not work:

putenv('GDFONTPATH=' . realpath('fonts'));
$font = "ariali";
imagettftext($image, $size, $angle, $xcordinate, $ycordinate, $text_color, $font, $text);

I have assumed that the fonts folder contains a file ariali.ttf, and in that case the code should have worked. However, it does not, so the workaround is the following:

// putenv('GDFONTPATH=' . realpath('fonts'));  // remove this line
$font = "fonts/ariali.ttf";  // use a relative path here instead
imagettftext($image, $size, $angle, $xcordinate, $ycordinate, $text_color, $font, $text);

The above code works, and I guess this is how I will have to use fonts in PHP from now on.

# [SOLVED] "[ERROR] Fatal error: Can't open and lock privilege tables: Incorrect key file for table 'user'; try to repair it"

I am working on a computer with XAMPP installed on Windows 7. When I upgraded XAMPP to fix the earlier problem with WinMySQLAdmin, a new problem with MySQL was introduced. The error log (mysql_error.log) showed the following entry at the end:

[ERROR] Fatal error: Can't open and lock privilege tables: Incorrect key file for table 'user'; try to repair it

To fix it, I downgraded to the earlier version of XAMPP, exported the databases, removed the mysql data directory, and reinstalled the latest version of XAMPP. Now, MySQL could start without problems. Then, I imported the data back and all was well.
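Returning to the imagettftext() workaround above, here is a minimal, self-contained PHP sketch of the relative-path approach. The canvas size, colors, text, and the fonts/ariali.ttf location are illustrative assumptions, not part of the original post:

```php
<?php
// Minimal GD text-rendering demo using a relative font path
// (sketch; assumes fonts/ariali.ttf exists next to this script).
$image = imagecreatetruecolor(300, 80);             // blank 300x80 canvas
$white = imagecolorallocate($image, 255, 255, 255);
$black = imagecolorallocate($image, 0, 0, 0);
imagefill($image, 0, 0, $white);

$font = "fonts/ariali.ttf";                         // relative path, no GDFONTPATH needed
imagettftext($image, 20, 0, 10, 50, $black, $font, "Hello, GD!");

header('Content-Type: image/png');                  // emit the result as a PNG
imagepng($image);
imagedestroy($image);
```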
2017-12-17 11:56:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2949473559856415, "perplexity": 3931.0912383816517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948595858.79/warc/CC-MAIN-20171217113308-20171217135308-00270.warc.gz"}
https://tug.org/pipermail/luatex/2009-March/000301.html
# [luatex] macros to invoke lua in LaTeX Robin Fairbairns Robin.Fairbairns at cl.cam.ac.uk Fri Mar 6 09:41:54 CET 2009 Heiko Oberdiek <oberdiek at uni-freiburg.de> wrote: > On Thu, Mar 05, 2009 at 11:07:51PM +0100, Reinhard Kotucha wrote: > > > Regarding namespaces: It's a good idea at first glance. But I don't > > think there is any need to be concerned about macro packages people > > write in the future. Macro writers have to read the specifications > > anyway. They have to read the TeXbook if they want to support Knuth's > > tex, they have to read the pdfTeX manual if they want to support > > pdftex, and they have to read the LuaTeX manual if they want to support > > Luatex. Same for e-TeX, XeTeX, Omega, and derivates. > > Another reason for prefixes. As package author I wouldn't want > to check all engines for name clashes. Also it's quite > difficult to check future name clashes, especially for user > land macro names. i don't _want_ to check all engines, but _i_ (and heiko) could, in principle. > > Hans already explained why new primitives don't break old macro > > packages. So, where is the problem? > > Mixing old with new packages. and mixing well-established packages with new engines. *we* aren't going to be surprised when package foo "breaks" the new primitive \bar, but a steady proportion of texhax/comp.text.tex traffic covers just such name clashes (mostly between packages, but occasionally between package and engine). as has already been said, this is complicated by the state of authorship; the context community (seems, to me, to) keep in touch with each other, but latex package authors regularly disappear. of the large list of latex packages for which i don't have an author address, i've only *two* instructions to hand responsibility on to anyone who volunteers to take it. what do i do about packages whose author dies? (for mike downes, i mail barbara beeton; but i don't know who to ask in re. brian hamilton kelly's stuff -- we had to break the copyright to get his crossword stuff working with 2e.) robin
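As a two-line LaTeX illustration of the clash under discussion (the primitive and package names here are hypothetical, not taken from any real engine or package):

```latex
% Suppose a future engine ships a hypothetical primitive \newshiny.
% An old package that happens to define the same name silently
% overwrites the primitive; plain \def raises no error:
\def\newshiny{...}          % clash: the primitive is gone, no warning
% A prefixed, package-private name cannot collide by construction
% (inside a package, where @ is treated as a letter):
\def\mypkg@newshiny{...}    % safe regardless of future engines
```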
2021-10-24 00:32:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7268039584159851, "perplexity": 13324.681675246044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00316.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/physics-10th-edition/chapter-7-impulse-and-momentum-check-your-understanding-page-189/19
## Physics (10th Edition)

a) $v_{cm}=0$
b) The raft moves in the opposite direction from the sunbather.

a) At the start, because both the sunbather and the raft are stationary, $v_{cm}=0$. Because we consider the sunbather and the raft as an isolated system, the principle of linear momentum conservation applies. Therefore, while she is walking, $v_{cm}$ stays constant and equal to $0$.

b) The sunbather has mass $m$ and velocity $v$, while the raft has mass $M$ and velocity $V$. $$\frac{MV+mv}{M+m}=v_{cm}=0$$ $$MV+mv=0$$ $$V=-\frac{m}{M}v$$ Because $p_{sunbather}=mv\ne0$, we have $V\ne0$. In other words, the raft itself moves while she is moving. The negative sign shows the raft moves in the direction opposite to the sunbather.
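As a numerical illustration with made-up values (not from the textbook): if the sunbather has $m = 60$ kg, the raft has $M = 180$ kg, and she walks at $v = +1.0$ m/s, then $$V = -\frac{m}{M}\,v = -\frac{60\ \text{kg}}{180\ \text{kg}}\times(+1.0\ \text{m/s}) \approx -0.33\ \text{m/s},$$ i.e., the raft drifts backward at one third of her walking speed.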
2021-05-15 07:26:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7191867232322693, "perplexity": 462.0012830853279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00045.warc.gz"}
https://gmatclub.com/forum/if-the-formula-above-gives-the-area-a-of-a-circular-region-in-terms-of-254158.html
# If the formula above gives the area A of a circular region in terms of

Math Expert (Joined: 02 Sep 2009)

$$A = \frac{\pi d^2}{x}$$

If the formula above gives the area A of a circular region in terms of its diameter d, then x =

(A) 1/4
(B) 1/2
(C) 1
(D) 2
(E) 4

Intern (Joined: 16 Oct 2017, Location: Ireland)

$$A =\frac{\pi d^2}{x}, \qquad d = 2r$$

$$\frac{\pi d^2}{x} = \pi r^2 \;\Rightarrow\; \frac{4\pi r^2}{x} = \pi r^2 \;\Rightarrow\; 4\pi r^2 = x\,\pi r^2 \;\Rightarrow\; x = 4$$
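Equivalently, x can be read off in one line by writing the standard area formula in terms of the diameter: $$A = \pi r^2 = \pi\left(\frac{d}{2}\right)^{2} = \frac{\pi d^2}{4} \quad\Longrightarrow\quad x = 4.$$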
2018-08-22 01:57:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5296329855918884, "perplexity": 5808.336873254092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219242.93/warc/CC-MAIN-20180822010128-20180822030128-00414.warc.gz"}
https://mathematica.stackexchange.com/questions/223108/fast-multiplication-of-high-dimensional-matrix
# Fast multiplication of high dimensional matrix

I am very new to Mathematica. I am dealing with tensor multiplications like the following:

c=8;d=64;
a=RandomReal[{1,2},{c,c,c}];
b=RandomReal[{1,2},{c,c,c,d,d,d}];
s=0.0;
Do[s=s+a[[i1,i2,i3]]*b[[i1,i2,i3,;;,;;,;;]],{i1,1,c},{i2,1,c},{i3,1,c}] //AbsoluteTiming

{4.26894, Null}

For me, the computation time is too high because there are many such multiplications in the program. Any suggestions really appreciated.

• Better try s = Total[a b, 3] or s = Flatten[a].ArrayReshape[b, {c^3, d, d, d}];. – Henrik Schumacher Jun 2 at 5:37

The code can be faster if we compile:

cf = Compile[{{a, _Real, 3}, {b, _Real, 6}}, Flatten[a].Flatten[b, 2]];
test = cf[a, b]; // AbsoluteTiming
(* {0.492055, Null} *)

and even faster if we compile to C and extract the LibraryFunction[…]:

cfc = Compile[{{a, _Real, 3}, {b, _Real, 6}}, Flatten[a].Flatten[b, 2], CompilationTarget -> "C"][[-1]];
testc = cfc[a, b]; // AbsoluteTiming
(* {0.234145, Null} *)

Tested on v9.0.1, with the TDM-GCC-5.1.0-2 64-bit compiler and "SystemCompileOptions"->"-Ofast".

• Huh? Why does extracting the LibraryFunction speed this up? – Henrik Schumacher Jun 2 at 6:42
• @Henrik Copying large lists is slow. I learned this here. – xzczd Jun 2 at 6:48
• @xzczd Thanks, very useful answer. – Shi Jun 11 at 9:07

Better try

s = Total[a b, 3];

or

s = Flatten[a].ArrayReshape[b, {c^3, d, d, d}];

On my machine, the latter is faster. In general, rephrasing summations in terms of Dot (.) should lead to more efficient code, as Dot is highly optimized.

• In v9.0.1 Flatten[a].Flatten[b, 2] seems to be faster and more memory-efficient. – xzczd Jun 2 at 5:46
• Good point. Both lead to the same AbsoluteTiming and MaxMemoryUsed on my machine under version 12.0 for macOS. I recall that ArrayReshape used to have some performance degradation when it was introduced. That seems to have been resolved. – Henrik Schumacher Jun 2 at 6:00
• @HenrikSchumacher Thanks a lot. – Shi Jun 11 at 9:10
• You're welcome. – Henrik Schumacher Jun 11 at 9:10
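For anyone adapting these answers, a quick consistency check (assuming a, b, c, d, and the loop result s from the question are still in scope) confirms that the three reformulations compute the same tensor up to rounding noise:

```mathematica
(* compare the closed-form versions against the original Do-loop result s *)
s1 = Total[a b, 3];
s2 = Flatten[a].ArrayReshape[b, {c^3, d, d, d}];
s3 = Flatten[a].Flatten[b, 2];
Max[Abs[s - s1]]   (* expected: ~10^-13, i.e. floating-point reordering noise *)
Max[Abs[s1 - s2]]  (* expected: ~0 *)
Max[Abs[s2 - s3]]  (* expected: ~0 *)
```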
2020-07-09 05:55:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5216006636619568, "perplexity": 11318.907493578916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655898347.42/warc/CC-MAIN-20200709034306-20200709064306-00164.warc.gz"}
http://mathhelpforum.com/algebra/78017-simplify-logarithm.html
# Math Help - simplify logarithm

1. ## simplify logarithm

I was wondering if there's a way to simplify this logarithmic expression

Code: (3/5)^(log5n) - 1

i.e. (3/5) to the power log base 5 of n.

2. Originally Posted by NidhiS: I was wondering if there's a way to simplify this logarithmic expression... um, what is that -1 that i see? you never mentioned it when you translated your problem into words

3. The -1 is just a separate entity. So you basically do the first part and subtract one from it.

4. Originally Posted by NidhiS: ... not really: $(\frac{3}{5})^{\log_5{n}} - 1$ If we let this equal $a$, we can rearrange to find $n$: $a+1 = 0.6^{\log_5{n}}$, $\ln(a+1) = \log_5(n)\,\ln(0.6)$, $\log_5(n) = \frac{\ln(a+1)}{\ln(0.6)}$, $n = 5^{\frac{\ln(a+1)}{\ln(0.6)}}$. The original expression would be easiest to use.

5. Hello, NidhiS! I'll take a guess at where that -1 belongs. Whatever was meant, it doesn't simplify very much. Using every log-trick I know, I can only rewrite it . . . $\left(\frac{3}{5}\right)^{\log_5\!n-1}$ We have: $\left(\frac{3}{5}\right)^{\log_5\!n - \log_5 5} \;=\;\left(\frac{3}{5}\right)^{\log_5(\frac{n}{5})}$ $= \;\frac{3^{\log_5(\frac{n}{5})}}{5^{\log_5(\frac{n}{5})}} \;=\; \frac{3^{\log_5(\frac{n}{5})}}{\frac{n}{5}}$ I used the base-change formula on the exponent: $\frac{3^{\frac{\log_3(\frac{n}{5})}{\log_3 5}}}{\frac{n}{5}} \;=\;\frac{\left[3^{\log_3(\frac{n}{5})}\right]^{\frac{1}{\log_3 5}}}{\frac{n}{5}}$ $= \;\frac{\left(\frac{n}{5}\right)^{\frac{1}{\log_3 5}}}{\frac{n}{5}} \;=\;\frac{\left(\frac{n}{5}\right)^{\log_5 3}}{\frac{n}{5}} \;=\;\left(\frac{n}{5}\right)^{\log_5 3-1}$ See? . . . It's no simpler than the original expression.

6. Originally Posted by NidhiS: ... what was the original problem? why do you want to simplify this? perhaps there is something else you want to get at so that we do not have to do this, because it is messy. using the rule $a^{\log_a X} = X$, we can get this down to $n^{\frac 1{\log_3 5} - 1} - 1$ which is not really simpler, just different. state your intentions and we can take it from there

7. $(\frac{3}{5})^{\log_5{n}} - 1$ This is what I want to simplify. I want to simplify it for my algorithms class. My professor said that I shouldn't leave it like this and perhaps should simplify it. e^(pi*i), I like your way, but I know there's another way to do it, which has something to do with the property of the common base in the logarithms.

8. There's this logarithm property that I know of, which is $a^{\log_b{y}} = y^{\log_b{a}}$, which I was thinking of using like this: $(\frac{3}{5})^{\log_5{n}} - 1$ changes to $(3\cdot 5^{-1})^{\log_5{n}} - 1$. Then using the above property I get $(3n)^{\log_5{5^{-1}}} - 1$, which is equal to $(3n)^{-1}$. Do you guys think this is fine? Or am I going wrong somewhere?
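For reference, the base-swap property quoted in the last post follows in one line from the identity $a = b^{\log_b a}$ (valid for $a, y > 0$ and $b > 0$, $b \neq 1$): $$a^{\log_b y} = \left(b^{\log_b a}\right)^{\log_b y} = b^{(\log_b a)(\log_b y)} = \left(b^{\log_b y}\right)^{\log_b a} = y^{\log_b a}.$$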
2015-05-26 02:53:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8901955485343933, "perplexity": 749.9030162738381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928754.15/warc/CC-MAIN-20150521113208-00182-ip-10-180-206-219.ec2.internal.warc.gz"}
http://cpr-nuclth.blogspot.com/2013/08/13080031-j-hooker-et-al.html
## Efficacy of crustal superfluid neutrons in pulsar glitch models

J. Hooker, W. G. Newton, Bao-An Li

Within the framework of recent hydrodynamic models of pulsar glitches, we explore systematically the dependence on the stiffness of the nuclear symmetry energy at saturation density, $L$, of the fractional moment of inertia of the pinned neutron superfluid in the crust, $G$, and of the initial post-glitch relative acceleration of the crust, $K$, both of which are confronted with observational constraints from the Vela pulsar. We allow for a variable fraction of core superfluid neutrons coupled to the crust on glitch rise timescales, $Y_{\rm g}$. We assess whether the crustal superfluid neutrons are still a tenable angular momentum source to explain the Vela glitches when crustal entrainment is included. The observed values of $G$ and $K$ are found to provide nearly orthogonal constraints on the slope of the symmetry energy, and thus taken together offer potentially tight constraints on the equation of state. However, when entrainment is included at the level suggested by recent microscopic calculations, the model is unable to reproduce the observational constraints on $G$ and $K$ simultaneously, and is limited to $L>100$ MeV and $Y_{\rm g} \approx 0$ when $G$ is considered alone. One solution is to allow the pinned superfluid vortices to penetrate the outer core, which leads to a constraint of $L\lesssim 45$ MeV and $Y_{\rm g} \lesssim 0.04$ when $G$ and $K$ are required to match observations simultaneously. When one allows the pinned vortices to penetrate into the core by densities of up to 0.082 fm$^{-3}$ above the crust-core transition density (a total density of 0.176 fm$^{-3}$) for $L=30$ MeV, and 0.048 fm$^{-3}$ above the crust-core transition density (a total density of 0.126 fm$^{-3}$) for $L=60$ MeV, the constraint on $G$ is satisfied for *any* value of $Y_{\rm g}$. We discuss the implications of these results for crust-initiated glitch models.

View original: http://arxiv.org/abs/1308.0031
2017-10-19 01:46:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.671958327293396, "perplexity": 1510.4788957639034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823214.37/warc/CC-MAIN-20171019012514-20171019032514-00096.warc.gz"}
https://forum.allaboutcircuits.com/threads/ac-adapter-to-battery-switch-transient.124817/#post-1007306
# AC adapter to battery switch transient

#### electrophile
Here is a schematic that switches to a battery source when the AC adapter dies out. It works, but there is a drop transient which is about 50% of the output voltage. Any suggestions on how this can be reduced or, better yet, eliminated? Also attached is the LTSpice file.

#### AlbertHall
The adaptor voltage has to get very low before the MOSFET will turn on (up to 3V below the battery voltage). If the adaptor voltage is greater than the battery voltage then do away with the MOSFET and connect the anode of D2 to the battery.

#### electrophile
Unfortunately the battery and AC adapter are more or less the same voltage, since when the power goes out, the battery takes over. If I were to eliminate the P-MOSFET, both the AC adapter and the battery would share the load current, and this is not feasible since when the AC adapter is present the battery is being charged.

#### AlbertHall
You might have a look at the LTC4412 or similar. Or, if you have access to the internals of the adaptor and it is regulated, then you could use the fall of voltage before the regulator and use that to trigger the switchover. This way you can switch to battery operation before the regulator output has begun to fall.

#### johnmariow
I agree that the MOSFET may be creating the problem. To learn more about this, replace the adaptor with a square wave generator in LTSPICE. Then observe the square wave on the gate of the MOSFET and the square wave on the anode of D2. You will probably observe a time delay between the falling edge of the square wave from the square wave generator and the rising edge of the square wave at the anode of the diode. I suspect the width of this time delay will be equal to the width of the transient you are observing.

#### electrophile

#### johnmariow
I was talking about doing it in the LTSPICE circuit simulator, not on the actual circuit. The transistor would react faster than the MOSFET, but you would still have a race condition. The transient will be less, but I think it will still exist.

#### electrophile
> I was talking about doing it in the LTSPICE circuit simulator, not on the actual circuit.

Yeah, I was replying to Albert and your reply came right then, so it looked like I replied to your message. I'll try the square wave generator simulation. Will keep you posted.

#### AlbertHall
> I was thinking more on this and instead of grounding the P-MOSFET gate directly, I added a PNP transistor there and switched that using the AC adapter voltage. This seems to get rid of the transient when the switching happens. I also see a voltage drop of about 70mV on the output.

Is this simulated or real world? The real-world adaptor will have an output capacitor, which will mean its output voltage will fall slowly when mains power is removed, and that is going to be difficult to get around unless you add a very big capacitor on the circuit output to maintain the load voltage until the adaptor voltage has fallen far enough to turn the MOSFET on. The circuit with the transistor pulls the gate to ground but it doesn't pull it up, so because of the gate capacitance the MOSFET may never be turned off. That might be why the transient disappeared.
There should be a resistor from gate to source.

#### grahamed
Hi. You might like to consider the use of a "perfect diode" in place of the current switch and Schottky arrangement. The Schottky may only drop 1/2 V or so, but that is 10% of your battery wasted.

#### electrophile
This will eventually be implemented in the real world. I see what you mean. I placed a 1k resistor there and now I see the 2.6V transient again. @johnmariow I simulated it with the square wave instead of the adapter and you are right, the width of the time delay is the transient width as well. So how do I build this without the LTC chip?

#### electrophile
> Hi. You might like to consider the use of a "perfect diode" in place of the current switch and Schottky arrangement. The Schottky may only drop 1/2 V or so, but that is 10% of your battery wasted.

I'm not sure I understand this schematic. Could you please elaborate? Where would my two inputs go?

#### AlbertHall
> So how do I build this without the LTC chip?

If you cannot be sure that the adaptor voltage will always be greater than the battery voltage, then I don't see how. Another wild thought: use a higher voltage adaptor with a separate regulator in your circuit to get the correct voltage. That way you can implement the method I described earlier.

#### hp1729
> Here is a schematic that switches to a battery source when the AC adapter dies out. It works, but there is a drop transient which is about 50% of the output voltage. Any suggestions on how this can be reduced or, better yet, eliminated? Also attached is the LTSpice file.

A capacitor on the output? Maybe 100 uF or so depending on the current being drawn.

#### grahamed
> I'm not sure I understand this schematic. Could you please elaborate? Where would my two inputs go?

The FET connects exactly where it is now; the diode is removed. The FET is controlled by the BJTs, which switch it on as required to act as a diode. Due to the common base voltage they are controlled by the emitter voltages. If the input-side BJT has the higher emitter voltage it turns on, the output-side BJT turns off, which then allows the FET to turn on. The simulation should demonstrate all this.

#### grahamed
If the .asc shows a diode across the FET (I can't see it at the moment), please ignore it. It doesn't do anything. I included it because the FETs I had to hand had an intrinsic diode but the model did not include one.

#### AlbertHall
> If the .asc shows a diode across the FET (I can't see it at the moment), please ignore it. It doesn't do anything. I included it because the FETs I had to hand had an intrinsic diode but the model did not include one.

But does this switch depending on whether adaptor or battery has the highest voltage? And given that that cannot be guaranteed in this case, then, with a fully charged battery and the mains voltage on the low side, it will operate from the battery despite mains power being available?

#### grahamed
All diode switches operate on the basis of the higher voltage. This is no different, just a better diode. If switching is required then a further BJT CE// with the input BJT will do it.

#### Tonyr1084
What about using an Op-Amp to compare the two voltages? The moment the adaptor drops below the battery voltage the Op-Amp can trigger a switch.
Even if the battery voltage isn't at its peak charge, the Op-Amp won't switch out until that threshold is crossed. Wire the amp as a comparator. That way its output can control a MOSFET or two, shutting one supply off and switching the other on. I'm sure a few milliseconds of crossover wouldn't hurt anything if both supplies are giving a push for that short duration. The switch that cuts out the PS can be held high for a few ms via a capacitor. Meanwhile the other MOSFET can switch the battery into the circuit before the first cuts out. Am I wrong?
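A toy numerical model can make the race condition discussed throughout this thread concrete: the adapter's output capacitor decays slowly into the load while the P-MOSFET waits for its gate-source threshold to be exceeded. This is an editorial sketch, not the posted LTSpice circuit, and every component value in it (capacitance, load current, battery voltage, threshold) is an illustrative assumption.

```python
# Toy model of the adapter-to-battery handover: the adapter rail decays as its
# output capacitor discharges into the load, and the P-MOSFET only connects the
# battery once V_battery - V_adapter exceeds the gate threshold.  All values
# here are illustrative assumptions, not taken from the attached LTSpice file.
C = 1000e-6       # adapter output capacitance [F]
I_load = 0.1      # load current [A]
V_adapter0 = 3.6  # adapter voltage when mains power is lost [V]
V_batt = 3.6      # battery voltage [V]
V_th = 2.0        # P-MOSFET gate-source turn-on threshold [V]

dt = 1e-4
t, v_adapter = 0.0, V_adapter0
while V_batt - v_adapter < V_th:   # MOSFET stays off until Vgs exceeds threshold
    v_adapter -= I_load / C * dt   # linear decay: dV/dt = -I/C
    t += dt

print(f"handover after {t*1e3:.1f} ms, rail sagged to {v_adapter:.2f} V")
# With these numbers the rail droops to ~1.6 V before the battery takes over,
# which is the rough 50% transient the opening post describes.
```

The model also shows why the LTC4412-style "ideal diode" suggestions help: an active comparator switches at a millivolt-scale differential instead of waiting for a full gate-threshold drop.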
2021-09-28 08:22:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4186168909072876, "perplexity": 1490.9067322750975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060538.11/warc/CC-MAIN-20210928062408-20210928092408-00010.warc.gz"}
https://www.springerprofessional.de/symplectic-geometry-groupoids-and-integrable-systems/13790488
## Table of Contents

### Groupoïdes de Lie et Groupoïdes Symplectiques
Abstract: The aim of this talk is to give a geometric approach to the theory of Lie groupoids, an approach that proves particularly useful in the study of symplectic groupoids. We briefly state the results of [1].
Claude Albert, Pierre Dazord

### Géometrie Globale des Systèmes Hamiltoniens Complètement Intégrables et Variables Action-Angle avec Singularités
Abstract: All structures considered in this work are of class $C^\infty$. Let us first make precise the notion of complete integrability used here. Recall that a Hamiltonian system $(M^{2n}, \omega, H)$ is said to be completely integrable in the sense of Arnold-Liouville if there exists an $n$-tuple $F = (f_1,\ldots,f_n)$ of first integrals in involution whose differentials are generically independent. The Arnold-Liouville theorem then asserts that the regular, compact and connected fibres of $F$ are Lagrangian tori, and that in a neighbourhood of each of them there exists a system of canonical coordinates $(q_1,\ldots,q_n,\theta_1,\ldots,\theta_n)$, called action-angle coordinates, where the action coordinates $(q_1,\ldots,q_n)$ take values in an open subset of $\mathbb{R}^n$ and the angle coordinates $(\theta_1,\ldots,\theta_n)$ take values in the torus $T^n$, in such a way that $f_1,\ldots,f_n$ are functions of the action variables only. It follows in particular that the flow of the Hamiltonian vector field $X_H$ is quasi-periodic on these Lagrangian tori.
Mohamed Boucetta

### Sur Quelques Questions de Géométrie Symplectique
Abstract: This paper summarizes a talk that I gave at the Mathematical Science Research Institute (Berkeley) in June 1989. I consider G-homogeneous symplectic manifolds (M, ω) where G is a solvable Lie group. When the symplectic action $G \times M \to M$ is "regular" and "closed" I sketch the proof of two main results: (1) the manifold M has an affinely flat structure (M, D) which preserves a bilagrangian structure on (M, ω) and satisfies the condition that Dω = 0; (2) the symplectic manifold (M, ω) is a graded symplectic manifold.
Nguiffo B. Boyom

### Intégration Symplectique des Variétés de Poisson Totalement Asphériques
Abstract: Poisson structures are contravariant structures. Nevertheless there is a good description of regular Poisson manifolds by means of foliated symplectic forms. This point of view makes it easy to lift these structures to the holonomy or homotopy groupoïd of their symplectic (regular) foliation, defining the Poisson realization of the Poisson structure. The second aim of the paper is to construct the universal symplectic integration of totally aspherical Poisson structures, that is, regular Poisson structures such that: i) the second homotopy group of any symplectic leaf is trivial; ii) any vanishing cycle is trivial. The universal symplectic integration is a symplectic groupoïd with connected and simply connected fibres which realizes the given Poisson structures. This construction generalizes the construction of the simply connected Lie group of a given finite-dimensional Lie algebra.
Pierre Dazord, Gilbert Hector

### La Première Classe de Chern Comme Obstruction à la Quantification Asymptotique
Abstract: Our work has its origin in a paper by Karašev and Maslov on the quantization of a general symplectic manifold [16]. That paper raises numerous problems and contains several obscure points, which we clarify; this allows us to answer certain conjectures in the affirmative.
P. Dazord, G. Patissier

### Groupes de Poisson Affines
Abstract: The aim of this article is to present a natural extension of the notion of a Poisson group due to Drinfel'd [5]: the notion of an affine Poisson group. This extension contains all the Poisson structures usually introduced on groups, in particular Poisson group structures, left- or right-invariant Poisson structures, and the affine structures of J.M. Souriau [17] on duals of Lie algebras.
Pierre Dazord, D. Sondaz

### Singular Lagrangian Foliation Associated to an Integrable Hamiltonian Vector Field
Abstract: In this paper we show what the geometry of an integrable Hamiltonian system is under rather "generic" assumptions. These hypotheses are closely related to those of Fomenko [10] and [11] on Bott integrals, but are distinct and allow us to study higher codimension singularities. In a "companion" paper Jair Koiller shows this gives a good setting in which to study a perturbed system by Melnikov's method. The author thanks the referee for his corrections, both mathematical and linguistic.
Nicole Desolneux-Moulis

### Hyperbolic Actions of $\mathbb{R}^p$ on Poisson Manifolds
Abstract: Unless otherwise explicitly stated, all manifolds and mappings are $C^\infty$. Recall that a Poisson manifold ([W]) is a manifold $V$ with a Lie algebra structure $(f,g) \mapsto \{f,g\}$ on $C^\infty(V)$ (the set of $C^\infty$ mappings $f: V \to \mathbb{R}$) such that
$$\{ f,gh\} = \{ f,g\} h + g\{ f,h\}$$
Jean-Paul Dufour

### Compactification d'Actions de $\mathbb{R}^n$ et Variables Action-Angle avec Singularités
Abstract: We consider an infinitesimal action of $\mathbb{R}^n$ on a manifold $V$, equipped with a vector space of first integrals; we give a sufficient condition so that, in a neighbourhood of a compact orbit, there exists an action of the torus $T^n$ having the same orbits and commuting with the given infinitesimal action. As a corollary, we recover H. Eliasson's theorem on the existence of action-angle variables with singularities for a Hamiltonian system.
J. P. Dufour, P. Molino

### On the Diameter of the Symplectomorphism Group of the Ball
Abstract: It is shown that the diameter of the symplectomorphism group of the ball in $\mathbb{R}^{2n}$ is infinite.
Yakov Eliashberg, Tudor Ratiu

### A Symplectic Analogue of the Mostow-Palais Theorem
Abstract: We show that given a Hamiltonian action of a compact and connected Lie group G on a symplectic manifold (M, ω) of finite type, there exists a linear symplectic action of G on some $\mathbb{R}^{2n}$ equipped with its standard symplectic structure such that (M, ω, G) can be realized as a reduction of this $\mathbb{R}^{2n}$ with the induced action of G.
M. J. Gotay, G. M. Tuynman

### Melnikov Formulas For Nearly Integrable Hamiltonian Systems
Abstract: An "intrinsic" Melnikov vector-valued function is given, which can be used to detect homoclinic orbits in Hamiltonian perturbations of completely integrable systems. We use the description given by Prof. Nicole Desolneux-Moulis [1] of the dynamics along a singular leaf of the unperturbed system. As an example, it is shown that perturbations of the spherical pendulum on a rotating frame (or in a magnetic field) produce Silnikov's spiralling chaos.
Jair Koiller

Abstract: Using Gromov's theory of pseudo-holomorphic curves, we derive a pseudo-holomorphic version of the classical result of Hadamard: a holomorphic function with bounded real part is constant. It is a pleasure to thank Gilbert Hector for providing a much simpler proof of Proposition 1, and Michel N'Guiffo Boyom and the referee for valuable remarks.
Jacques Lafontaine

### Equivariant Prequantization
Abstract: If (S, ω) is a symplectic manifold, a prequantization of S is a principal circle bundle over S together with a connection form whose curvature is −ω. Such a circle bundle exists iff the period group of ω is contained in ℤ; i.e., the class [ω] ∈ $H^2(S, \mathbb{R})$ comes from an integral class. If S is simply connected, it follows from the universal coefficient theorem that the integral class is unique. Also note that for simply connected S, the period group of ω is in ℤ iff the spherical period group is in ℤ; i.e., $\pi_2(S) \cong H_2(S)$. If S is not simply connected it may have inequivalent prequantizations. Inequivalent prequantizations of S also induce inequivalent prequantizations of $S \times \bar S$, $\bar S$ denoting (S, −ω). But one can show such prequantizations become equivalent when pulled back to the fundamental groupoid ($\pi(S) = \tilde S \times \tilde S / \pi_1(S)$, with $\lambda: \tilde S \to S$ the universal cover and the form induced from $S \times \bar S$ by $\lambda \times \lambda$). Further, if we only assume ω is integral on spherical classes, no prequantization may exist. In his preprint [10], Alan Weinstein gives a method for prequantizing the fundamental groupoid of a symplectic manifold (S, ω) when ω is integral on spherical classes, using connection theory. His result is equivalent to the statement (Corollary 1.3): For any symplectic manifold (S, ω), the period group of the fundamental groupoid π(S) is contained in ℤ iff the spherical period group of S is contained in ℤ. Since this is a statement about cohomology, Weinstein raises the question of giving a purely algebraic topology proof of this result.
R. Lashof

### Momentum Mappings And Reduction of Poisson Actions
Abstract: An action $\sigma: G \times P \to P$ of a Poisson Lie group G on a Poisson manifold P is called a Poisson action if σ is a Poisson map. It is believed that Poisson actions should be used to understand the "hidden symmetries" of certain integrable systems [STS2]. If the Poisson Lie group G has the zero Poisson structure, then σ being a Poisson action is equivalent to each transformation $\sigma_g: P \to P$ for $g \in G$ preserving the Poisson structure on P. In this case, if the orbit space G \ P is a smooth manifold, it has a reduced Poisson structure such that the projection map $P \to G \backslash P$ is a Poisson map. If P is symplectic and if the action σ is generated by an equivariant momentum mapping $J: P \to \mathfrak{g}^*$, the reduction procedure of Meyer [Me] and Marsden and Weinstein [Ms-We] gives a way of describing the symplectic leaves of G \ P as the quotients $P_\mu := G_\mu \backslash J^{-1}(\mu)$, where $\mu \in \mathfrak{g}^*$ and $G_\mu \subset G$ is the coadjoint isotropy subgroup of μ.
Jiang-Hua Lu

### On Jacobi Manifolds and Jacobi Bundles
Abstract: We introduce the notion of a Jacobi bundle, which generalizes that of a Jacobi manifold. The construction of a Jacobi bundle over a conformal Jacobi manifold has, as particular cases, the constructions made by A. Weinstein [21] of a Le Brun-Poisson manifold over a contact manifold, and that of a Heisenberg-Poisson manifold over a symplectic (or Poisson) manifold. We show that the total space of a Jacobi bundle has a natural homogeneous Poisson structure, and that with each section of that bundle is associated a Hamiltonian vector field, defined on the total space of the bundle, tangent to the zero section, which projects onto the base manifold.
Charles-Michel Marle

### Groupes de Lie à Structure Symplectique Invariante
Abstract: A Lie group G admits an invariant symplectic structure if there exists on G a closed left-invariant differential 2-form whose rank equals the dimension of G. Such a group will, by abuse of language, be called symplectic, and its Lie algebra will be said to be symplectic. The main result of this work is a classification of nilpotent symplectic groups by their Lie algebras. The central idea in this classification is the notion of a double extension (section 2) of a symplectic algebra: roughly speaking, by adding a symplectic plane to a symplectic algebra one obtains a symplectic algebra. This notion is the symplectic analogue of the notion of double extension of orthogonal Lie algebras, which we introduced in [Me-Re 1]. We show that every nilpotent symplectic algebra is obtained by a sequence of double extensions starting from the zero algebra (Theorem 2.5). Saying that the symplectic Lie group (G, ω) is a double extension of the group (H, Ω) means that the latter is a Marsden-Weinstein reduced manifold of (G, ω). Since every symplectic nilmanifold is the quotient of a nilpotent symplectic group by a discrete co-compact subgroup [Be-Go], the double extension makes it possible to obtain all these manifolds.
Alberto Medina, Philippe Revoy

### Holonomy Groupoids of Generalized Foliations. II. Transverse Measures and Modular Classes
Abstract: A generalized foliation is a foliation with singular leaves in the sense of P. Stefan [St] and P. Dazord [D]. In the preceding paper [Su], we defined notions of holonomy maps and holonomy groupoids for a generalized foliation whose singular leaves are all tractable. We continue to investigate their properties.
Haruo Suzuki

### Symplectic Groupoids, Geometric Quantization, and Irrational Rotation Algebras
Abstract: The rotation algebra $A_\theta$ for a real parameter θ is defined as the crossed product of the additive group ℤ with the space of functions on the circle T = ℝ/2πℤ via the action n · φ = φ + nθ. More concretely, $A_\theta$ is the completion with respect to a certain norm of its subalgebra of smooth elements. The elements of this subalgebra are the smooth functions f(n, φ) on ℤ × T which, with all their φ-derivatives, are rapidly decreasing in n. The multiplication, which we denote by *, is defined by
$$(f * g)(n,\varphi) = \sum_{\substack{m,k \in \mathbb{Z} \\ m + k = n}} f(m,\varphi)\, g(k,\varphi + m\theta)$$
Alan Weinstein

### Morita Equivalent Symplectic Groupoids
Abstract: Morita equivalence of C*-algebras, first introduced by M. Rieffel, has been widely accepted as one of the most important equivalence relations in C*-algebras [Rie1] [Rie2] [Rie3] [Rie4]. Roughly speaking, two C*-algebras are said to be Morita equivalent if there is an equivalence bimodule between them. Morita equivalent C*-algebras have many similar features. For instance, they have equivalent categories of left modules, isomorphic K-groups, and so on. Also, Morita equivalence plays a very important role in understanding the structure of some C*-algebras such as transformation C*-algebras and foliation C*-algebras. A natural question arises as to what the classical analogue of this equivalence relation is. It is generally accepted that the classical analogue of a C*-algebra (or non-commutative algebra) is a Poisson manifold.
So, more precisely, we expect to find an equivalence relation for Poisson manifolds that plays the same role as Morita equivalence does for C*-algebras. A solution to this problem was made possible by the recent introduction of symplectic groupoids in the study of Poisson manifolds due to Karasev and Weinstein [CDW] [Ka] [W2]. The original purpose for introducing symplectic groupoids was to study nonlinear commutation relations and quantization theory. In fact, it turns out that symplectic groupoids provide a bridge between Poisson manifolds, C*-algebras, as well as quantizations. Therefore, introducing and studying Morita equivalence of symplectic groupoids should be the first step in understanding Morita equivalence of Poisson manifolds.
Ping Xu
2020-05-28 15:49:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8996211290359497, "perplexity": 2887.5214741581394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00543.warc.gz"}
https://www.physicsforums.com/threads/pakistan-earthquake.95862/
# News Pakistan earthquake

1. Oct 20, 2005 ### pattylou
This can use some more visibility. Our press seems more interested in Wilma, which is weakening, than in the earthquake, whose death toll continues to mount. :grumpy: The toll has jumped from 54,000 to 79,000 this week alone, and may double if we don't get more aid to the region. http://www.dailytimes.com.pk/default.asp?page=2005%5C10%5C21%5Cstory_21-10-2005_pg1_2
You can donate to Red Cross, although we've been using Oxfam this year: http://www.oxfamamerica.org/
Feel free to move this thread if it belongs in GD. I figured it fits under "world affairs."

2. Oct 20, 2005 ### pattylou
More snippets from another story - this isn't getting better; it's getting worse. http://www.guardian.co.uk/naturaldisasters/story/0,7369,1597433,00.html
Oxfam makes it easy to donate specifically to this disaster. Click on the link above.

3. Oct 20, 2005 ### Pengwuino
Man, this year has been relentless. I wonder how many Red Cross volunteers are still working the tsunami relief effort...

4. Oct 21, 2005 ### Anttech
Yep. Just so this thread will stay in the Political forum, I wanted to add that the earthquake is Bush's fault... OK, I said it. Now donate! These people need our help!!!

5. Oct 23, 2005 ### Staff: Mentor
They just had another strong aftershock in the same area. http://www.everything-science.com/index.php?option=com_smf&Itemid=82&topic=5244.msg58005#msg58005
Pakistan/Kashmir may now have more than 80,000 dead, and more than 3 million without homes! India, Pakistan Propose Opening Borders for Earthquake Relief http://www.bloomberg.com/apps/news?pid=10000080&sid=avRUZVIiQQVM
http://news.bbc.co.uk/1/hi/in_depth/south_asia/2005/south_asia_quake/default.stm

6. Oct 23, 2005 ### SOS2008
My same thoughts when watching the death toll rise higher and higher--the loss of life is no comparison even to Katrina. Here is what a little aid has already done: http://www.msnbc.msn.com/id/9779472/
How much do we spend each day trying to democratize and nation-build in Iraq alone? We could generate so much more goodwill, but where will the resources come from when we are overextended in military conflict and increasingly in debt to other nations such as China? A measly $50 million has gone much farther in this way.

7. Oct 23, 2005 ### Gokul43201 Staff Emeritus
http://news.bbc.co.uk/2/hi/south_asia/4366528.stm

8. Oct 23, 2005 ### oldunion
Does anyone else find it strange there has been the Asian earthquake (tsunami), Pakistan earthquake, Katrina, Wilma, Alpha, and the other one that hit Texas? Weather control, anyone?

9. Oct 23, 2005 ### Pengwuino
Yes, I'm sure governments are just foaming at the mouth to kill millions of people in all parts of the world for no reason.

10. Oct 23, 2005 ### pattylou
I don't find it strange. Every year people say "things have never been this bad." Even if this year is "worse," so what? It's nature. We're subject to it. We're gonna have some hellish natural disasters from time to time, and sometimes they will crop up together. That's random chance for you. But we could try to respond appropriately - which means getting aid to Asia at the moment. Thanks to all who contributed on this thread. :)

11.
Oct 23, 2005 ### Art
A couple of reasons I saw cited for the increased death tolls from natural disasters were, first, better communication these days, meaning that in the past natural disasters in remote areas simply went unreported; and secondly, the increase in population has resulted in people living in areas previous generations would have avoided, as they are known to be prone to catastrophic conditions from time to time.

12. Oct 23, 2005 ### cefarix
My father, being a doctor, went to and came back from Muzaffarabad and other areas to the north. He was telling me the situation is just utterly beyond belief there. :sad:

13. Oct 26, 2005 ### pattylou
Cefarix, I'm sorry to hear about the situation. This is awful. I've been organising a fundraiser at the kids' school. Our girl scouts ("girl guides" in other countries) will be collecting monetary donations next week, to donate to Oxfam for the victims of this earthquake. I don't know that we'll collect much, but I hope we do. More generally: http://www.alertnet.org/thenews/fromthefield/219053/113026770775.htm
The other moms I have spoken to, however, are all glad that there will be a fundraiser, and plan to donate. Some school children will be happy to donate - the problem as I see it is lack of coverage, perhaps due to all the scandal currently plaguing domestic events.

14. Oct 26, 2005 ### champ2823
It is known to anyone that has ever researched weather modification that our government has been conducting research and experimentation to use in warfare against others without them knowing it. Many public documents talk about this, and I don't remember exactly which documents, but Air Force 2025 has some sections about it. The last known time I know of was when they would seed clouds in Vietnam to make roads impassable. Currently, I have seen no credible documents proving or even alluding to any of these catastrophes being anything other than "natural disasters", therefore I personally find it unfair to say that they weren't "natural". Although the technology exists, I personally am not about to say that any government had any hand in what has been occurring. Only the presence of credible documents would be able to prove it, and since I have seen none, I classify them as "natural". What does irk me about these disasters is why high-ranking government officials feel the need to go to these areas for photo ops to gain political favor. Other than being a politician, what useful things can they do? Medics, engineers, food suppliers, etc. are what is in dire need. Government officials are none of these. All they do is get in the way, divert attention away from relief efforts, and take away time and resources from others that would be best suited to tackle the disaster at hand. The best thing a politician can do is stay away from the disaster area and do the best they can to get private companies and relief aid to the region and let them take full control, as this is what they specialize in. To go to a region hit by a disaster for nothing more than a photo op is just ridiculous in my book.

15. Oct 26, 2005 ### The Smoking Man
Often, it is written into law that to release emergency funds to these areas the highest-ranking officials must make an assessment. That's not saying it doesn't make a great photo-op, but there are actually 'legal' reasons it must happen.

16. Oct 30, 2005 ### pattylou

17. Oct 30, 2005 ### The Smoking Man
I have a bit of a problem with this. India offered to send help using helicopters.
Pakistan refused, stating they would not allow Indian helicopters across the border using Indian pilots. I'm getting a bit tired of these penny-ante nations pleading for help and then slapping conditions on what they receive or, worse, as in the case of the tsunami, actually taxing relief as it comes across the border. Tons of supplies were held hostage in ports while the relief organizations went back to their own countries and tried to drum up cash contributions to the tune of $5 million to get the authorities to allow the aid into the country. Then they say who can and can't help!? Let them try to solve the problems themselves until they learn the old adage, 'don't look a gift horse in the mouth'. When these 'DICKtators' learn that people helping is the RIGHT kind of international co-operation and not meant to undermine their power, I will contribute again. At the moment, they seem to think that seeing the Indians as human and wishing to aid them in their hour of need may take the 'fight' out of the people. Governments are scum.

18. Oct 30, 2005 ### The Smoking Man
I heard about this on Fox two days ago. Is it true? http://www.livejournal.com/users/mparent7777/3701465.html

19. Oct 30, 2005 ### Curious3141
The attitude that Pakistan has taken greatly saddens me. I really thought India and Pakistan had been heading for a real breakthrough in cordial relations. This natural disaster, tragic as it is, actually presents a golden opportunity for people in both countries to come together in a crisis. India has made overtures. Sadly, the Pak government sees fit to reject aid in Indian helicopters, which seems more than a tad ungrateful and petty. Then there're the bomb blasts in the heart of India. So many people dead, just before a happy day (Diwali). I know the Pak government is not directly to blame, but some blame must decidedly go their way because they have been happily sponsoring and supporting these bloody terrorist groups that operate in Kashmir and the heart of India itself. Does it really surprise the Pak government that the beast they've nurtured for so long now refuses to listen when they plead for a cessation of terrorist violence? And will Pak completely and categorically stop aiding these murderers after the immediate crisis is over? I think not. Pakistan has had a long history of sponsoring terrorism. State-sponsored terrorism is de facto an act of war. I was really hoping that this natural disaster would pave the way for Pakistan and its terrorist hounds to change their ways, but no such luck.

20. Oct 30, 2005 ### pattylou
The most recent reports that I have heard regarding the Kashmir border are that it is being opened. I certainly don't think the India-Pakistan mess should affect whether I, as a non-Pakistani or non-Indian, send aid. The people in the villages need help. Bush is being a dick for not matching the funds he gave to the tsunami victims. Maybe the heads of state in India and Pakistan are being dicks, too, but that doesn't have to stop any of us individual, caring human beings from trying to help out. http://www.oxfamamerica.org/
2017-08-21 09:25:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18719008564949036, "perplexity": 5689.130498914769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107744.5/warc/CC-MAIN-20170821080132-20170821100132-00360.warc.gz"}
https://pub.uni-bielefeld.de/record/2935585
## On the seasonality in the implied volatility of electricity options

Fanelli V, Schmeck MD (2019) Quantitative Finance 19(8): 1321-1337.
Journal article | e-published ahead of print | English

Download: no files uploaded; publication record only.

Authors: Fanelli, Viviana; Schmeck, Maren Diane (UniBi)

Abstract / Note: Seasonality is an important topic in electricity markets, as both supply and demand are dependent on the time of the year. Clearly, the level of prices shows a seasonal behaviour, but not only this: the price fluctuations are typically seasonal as well. In this paper, we study empirically the implied volatility of options on electricity futures, investigate whether seasonality is present, and aim at quantifying its structure. Although futures prices can typically be well described through multi-factor models including exponentially decreasing components, we do not find evidence of exponential behaviour in our data set. Generally, a simple linear shape reflects the squared volatilities very well as a curve depending on the time to maturity. Moreover, we find that the level of volatility exhibits clear seasonal patterns that depend on the delivery month of the futures. Furthermore, in an out-of-sample analysis we compare the performance of several implementations of seasonality in the one-factor framework.

Keywords: Implied volatility; Electricity options; Seasonality; Factor models; Settlement prices; Season cycle
Year of publication: 2019
Journal title: Quantitative Finance
Volume: 19
Issue: 8
Page(s): 1321-1337
ISSN: 1469-7688
eISSN: 1469-7696
Page URI: https://pub.uni-bielefeld.de/record/2935585

## Cite

Fanelli, V., & Schmeck, M. D. (2019). On the seasonality in the implied volatility of electricity options. Quantitative Finance, 19(8), 1321-1337. doi:10.1080/14697688.2019.1582792
2022-12-06 07:58:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097174763679504, "perplexity": 6284.369594953891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00507.warc.gz"}
http://mirrors.dotsrc.org/cran/web/packages/tboot/vignettes/tboot_bmr.html
Bayesian Marginal Reconstruction

Suppose we are able to summarize the current state of scientific knowledge for the mean/proportion of each of several endpoints for a particular treatment using a distribution. The distribution for each endpoint may come from the elicitation of prior information from experts, some initial dataset, or some combination of different sources of information. In many cases it will be difficult to arrive at a joint distribution; only the marginal distribution of each endpoint will be calculated easily. This is particularly true when using externally published data where only marginal effects are known and no patient-level data is available. In general, assuming independence for the distribution of the mean for each endpoint is not appropriate, as we would expect correlations in the distribution given that many endpoints are correlated. The method described here may be used to simulate from the approximate joint distribution given the marginal distributions and an individual-level data set. The correlation structure within the individual-level data is used to impute the joint distribution. The method also provides a way to simulate virtual trial data based on the marginals.

A simulated example dataset

As an example, we simulate the following simple dataset with a continuous and two binary variables.

library(tboot)
set.seed(2020)
quant1 <- rnorm(200) + 1
bin1 <- ifelse( (.5*quant1 + .5*rnorm(200)) > .5, 1, 0)
bin2 <- ifelse( (.5*quant1 + .5*rnorm(200)) > .5, 1, 0)
simData <- data.frame(quant1, bin1, bin2)
head(simData)

##        quant1 bin1 bin2
## 1  1.37697212    0    0
## 2  1.30154837    1    1
## 3 -0.09802317    0    0
## 4 -0.13040590    0    0
## 5 -1.79653432    0    0
## 6  1.72057350    0    1

Example

First, we create a list with simulations from the marginal distribution of each variable for two different treatments (active treatment and placebo).

marginal_active <- list(quant1=rnorm(5000, mean=.5, sd=.2),
                        bin1=rbeta(5000, shape1 = 50, shape2=50),
                        bin2=rbeta(5000, shape1 = 60, shape2=40))
marginal_pbo <- list(quant1=rnorm(5000, mean=.2, sd=.2),
                     bin1=rbeta(5000, shape1 = 20, shape2=80),
                     bin2=rbeta(5000, shape1 = 30, shape2=70))

We next need to use 'tweights_bmr' to calculate the correlation matrix from the data and get set for marginal reconstruction. The calculation uses a call to the 'tweights' function.

bmr_active <- tweights_bmr(dataset = simData, marginal = marginal_active)

## ----------------------------------------------------------------
## Optimization was successful. The weights have a sampling
## distribution with means close to the attempted target:
##                  quant1      bin1     bin2
## Achieved Mean 0.4968368 0.4992157 0.600457
## Target Mean   0.4968368 0.4992157 0.600457
## Maximum weight was: 0.04623098
## Data augmented with 1 sample(s) with independent variables.
## The final weight of the indpendent sample(s) was: 0.007398153
## ----------------------------------------------------------------

bmr_pbo <- tweights_bmr(dataset = simData, marginal = marginal_pbo)

## ----------------------------------------------------------------
## Optimization was successful. The weights have a sampling
## distribution with means close to the attempted target:
##                  quant1      bin1      bin2
## Achieved Mean 0.1969663 0.2003352 0.2997904
## Target Mean   0.1969663 0.2003352 0.2997904
## Maximum weight was: 0.02418729
## Data augmented with 1 sample(s) with independent variables.
## The final weight of the indpendent sample(s) was: 0.006562899
## ----------------------------------------------------------------

To simulate from the posterior we use 'post_bmr':

samples <- rbind(data.frame(trt="active", post_bmr(nsims=1e3, bmr_active)),
                 data.frame(trt="pbo", post_bmr(nsims=1e3, bmr_pbo)))
head(samples)

##      trt    quant1      bin1      bin2
## 1 active 0.4909751 0.4588719 0.5872279
## 2 active 0.1522832 0.3929919 0.4903635
## 3 active 0.3521943 0.4217331 0.5940661
## 4 active 0.1911470 0.4232380 0.5223984
## 5 active 0.4904320 0.5739046 0.5425915
## 6 active 0.6137868 0.5463136 0.6449981

The posterior samples show a correlation structure.

pairs(samples[,-1], col=ifelse(samples$trt=="active","red","blue"), pch='.', cex=.5)

Marginally, the posterior samples are equivalent to the simulations used as input (i.e., in the 'marginal' parameter).

library(ggplot2)
pltdta=do.call(rbind,
               lapply(c("quant1","bin1", "bin2"), function(nm) {
                 rbind(data.frame(type="BMR", var=nm, trt=samples$trt, val=samples[[nm]]),
                       data.frame(type="marginal", var=nm, trt="active", val=marginal_active[[nm]]),
                       data.frame(type="marginal", var=nm, trt="pbo", val=marginal_pbo[[nm]]))
               }))
ggplot(pltdta, aes(fill=type, x=val)) +
  geom_density(alpha=.3) +
  facet_grid(var~trt, scales = "free")

To simulate a random trial dataset using the parameters from a single draw of 'post_bmr' we use the tboot_bmr function. For example, to simulate 100 patients on active treatment:

active_sample=tboot_bmr(nrow=100, weights_bmr=bmr_active)
head(active_sample)

##       quant1 bin1 bin2
## 1  0.3944395    1    1
## 2 -0.4321196    0    0
## 3 -2.0566837    0    0
## 4 -1.2648538    0    0
## 5 -0.1767540    0    0
## 6 -2.0566837    0    0

The underlying parameter mean for the simulation is an attribute:

attr(active_sample, "post_bmr")

##    quant1      bin1      bin2
## 0.5927736 0.4617252 0.5994379

A more interesting example would be to simulate and analyze trial data. For example:

#Manage any errors by assuming the pvalue failed to reach statistical
#significance (i.e. pvalue is 1) but keep track of such errors.
errorTrackGlobal=list()
manageError=function(expr) {
  tryCatch(eval(quote(expr)),
           error=function(e){
             errorTrackGlobal[[length(errorTrackGlobal)+1]] <<- e$message
             return(1)
           })
}

#create function to simulate and analyze one virtual trial
sim_and_analyze=function() {
  active_sample=tboot_bmr(100, bmr_active)
  pbo_sample=tboot_bmr(100, bmr_pbo)
  data.frame(
    p_quant1=manageError(t.test(active_sample$quant1, pbo_sample$quant1)$p.value),
    p_bin1=manageError(fisher.test(active_sample$bin1, pbo_sample$bin1)$p.value),
    p_bin2=manageError(fisher.test(active_sample$bin2, pbo_sample$bin2)$p.value)
  )
}

#Simulate Pvalues
p_sim=do.call(rbind, replicate(100, sim_and_analyze(), simplify = FALSE))
head(errorTrackGlobal)

## list()

head(p_sim)

##       p_quant1     p_bin1     p_bin2
## 1 8.361678e-03 1.00000000 1.00000000
## 2 8.493083e-07 1.00000000 1.00000000
## 3 2.170874e-02 0.55474538 1.00000000
## 4 9.924969e-03 0.59836673 0.40612432
## 5 2.154656e-03 0.09505923 0.03519902
## 6 4.807536e-01 0.61658432 0.82775616

The p-value matrix above may be analyzed, for example, using the gMCP package if multiple testing adjustments are needed.

Methods

The algorithm

To describe the algorithm, we use the following notation:

• $$k \in [1,2,...K]$$ is the endpoint index for $$K$$ endpoints.
• $$y_{k}$$ is a vector of length $$J_k$$ of simulations of the marginal distribution of the mean of the $$k^{th}$$ endpoint.
• $$X$$ is a matrix of input data with columns for each endpoint. $$X_{.k}$$ is the vector of data for the $$k^{th}$$ endpoint.
• $$\hat{y}_k$$ is the mean of $$y_{k}$$.
• $$Q(y_{k}, p)$$ is the $$p^{th}$$ quantile of vector $$y_{k}$$.

The algorithm for 'tweights_bmr()' takes $$y_{k}$$ and $$X$$ as input and proceeds as follows:

1. Calculate $$\hat{y}_k$$.
2. Use tboot to calculate the weights ($$w$$) which would tilt $$X$$ such that the mean of $$w\cdot X_{.k} = \hat{y}_k$$ for all endpoints $$k$$.
3. Calculate the implied weighted correlation ($$\hat{C}$$) from using weights $$w$$ for $$X$$.

The algorithm for 'post_bmr()' takes the output from 'tweights_bmr()' and proceeds as follows (a language-agnostic sketch of this coupling step is given at the end of this section):

1. Simulate $$Z \sim \mathrm{MultivariateNormal}(mean=0, variance=\hat{C})$$
2. Simulate the posterior mean as $$y_k^* = Q(y_{k}, \Phi(Z_k))$$
3. Repeat steps 1 and 2 to generate more samples.

The algorithm for 'tboot_bmr()' takes the output from 'tweights_bmr()' and proceeds as follows:

1. Simulate the posterior mean $$\mu$$ as in the algorithm above.
2. Use tweights and tboot to simulate data with a mean of $$\mu$$.

The option 'Nindependent' is always non-zero to help avoid errors. See the vignette on 'tweights' for more information on this option.

Justifying the algorithm

The algorithm described above may be justified in several ways. First, it is heuristically plausible. One would expect at first thought that when two variables are correlated, a drug which influences one of the variables will most likely influence the other. Second, in some specific cases, the algorithm may be justified via Bayesian asymptotics using the Bernstein-von Mises theorem. This document will not attempt to fully work out this more theoretical approach.

Considering the limits of 'tboot_bmr'

The following considerations should be relevant when considering the use of 'tboot_bmr':

1. Is the relationship between variables found in the available individual-level data generalizable to the treatment of interest? That is, if the individual data is tilted to reflect the expected mean of the treatment of interest, will the correlation be realistically similar to the correlation of variables in the treatment of interest? In general, it is expected that the assumptions of 'tboot' will be more believable than the assumption of independence.
2. Is the individual-level data sample size large enough to make inference about correlation?
3. Did the information about each variable come from different trials? In such cases it may be argued that for large sample sizes the distribution should be independent.
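The coupling in steps 1-2 of 'post_bmr()' is essentially a Gaussian copula: correlated standard normals are pushed through $$\Phi$$ and then through the empirical quantile function of each marginal sample. The sketch below is an editorial illustration in Python rather than the package's R; the correlation matrix and marginal draws are made-up stand-ins, not tboot output.

```python
# Sketch of post_bmr's coupling step: draw Z ~ N(0, C_hat), map each component
# through the normal CDF, then through the empirical quantiles of the supplied
# marginal simulations.  C_hat and the marginals below are illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
C_hat = np.array([[1.0, 0.6],
                  [0.6, 1.0]])                    # stand-in for the tweights correlation
marginals = [rng.normal(0.5, 0.2, 5000),          # stand-in posterior draws, endpoint 1
             rng.beta(50, 50, 5000)]              # stand-in posterior draws, endpoint 2

def post_bmr_sketch(nsims):
    Z = rng.multivariate_normal(np.zeros(2), C_hat, size=nsims)  # step 1
    U = norm.cdf(Z)                                              # Phi(Z_k) in (0, 1)
    # step 2: y*_k = Q(y_k, Phi(Z_k)), an empirical quantile lookup per endpoint
    return np.column_stack([np.quantile(m, U[:, k])
                            for k, m in enumerate(marginals)])

draws = post_bmr_sketch(1000)
print(np.corrcoef(draws.T))  # induced correlation tracks C_hat
```

Because each component is a monotone transform of the corresponding $$Z_k$$, the marginal distributions come out exactly as supplied while the dependence is inherited from $$\hat{C}$$.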
2022-05-16 18:43:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7573591470718384, "perplexity": 2660.6180719248337}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662512229.26/warc/CC-MAIN-20220516172745-20220516202745-00785.warc.gz"}
https://2022.congresso.sif.it/talk/254
Invited talk

# Nuclear astrophysics: A review of the most interesting recent results.

##### Palmerini S.

Thursday 15/09, 09:00 - 13:00, Aula B - Maria Goeppert-Mayer, I - Nuclear and subnuclear physics

Measuring neutron capture cross-sections on unstable nuclei and their half-lives in stellar plasma is the ultimate frontier of experimental nuclear astrophysics. However, many other pivotal results have been achieved so far. Among the most recent ones we review four cases. The $^{12}C+\,^{12}C$ reaction, whose measurement by an indirect technique has been extensively debated and turned out to deeply affect the explodability of SN progenitors. The $^{7}Be+n$ reaction, involved in the cosmological lithium problem, whose rate has been measured by two different approaches, demonstrating the feasibility of investigating neutron capture cross-sections on unstable nuclei at astrophysical energies. The $^{13}C(\alpha,n)^{16}O$ and $^{22}Ne(\alpha,n)^{25}Mg$ reactions, which are the neutron sources for the $s$-process and have been measured with high precision in order to provide constraints on both the $s$- and the $r$-process nucleosynthesis. The $^{17}O(p,\alpha)^{14}N$ and $^{17}O(p,\gamma)^{18}F$ reactions, whose roles in radiative H-burning are well known, but whose nucleosynthesis yields vary significantly according to the reaction rate recommended by different authors, defining different scenarios for the nucleosynthesis of $^{17}O$ and $^{18}O$.
2022-10-04 00:49:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5993736982345581, "perplexity": 2192.857991696151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00285.warc.gz"}
https://socratic.org/questions/5a49e05d7c014932b479ad66
# Question 9ad66

Jan 1, 2018

$K_c = 5.0 \cdot 10^{-4}$

#### Explanation:

For the sake of simplicity, I will use the following notation:

• $\text{CCl}_3\text{CH(OH)}_2 \to$ "reactant"
• $\text{CCl}_3\text{CHO} \to$ "product"

The balanced chemical equation that describes this equilibrium will be--keep in mind that you have a $1:1$ mole ratio between chloral hydrate and chloral!

$\text{reactant}_{(sol)} \rightleftharpoons \text{product}_{(sol)} + \text{H}_2\text{O}_{(sol)}$

I used $(sol)$ to denote the fact that the compounds are dissolved in a solution that does not have water as the solvent. Now, you know that for every $1$ mole of chloral hydrate that dissociates, you get $1$ mole of chloral and $1$ mole of water. The initial concentration of chloral hydrate is

$[\text{reactant}]_0 = \frac{0.010\ \text{moles}}{1\ \text{L}} = 0.010\ \text{M}$

The problem tells you that the equilibrium concentration of water is equal to $0.0020\ \text{M}$. This means that in order for the reaction to produce $0.0020\ \text{M}$ of water, it must also produce $0.0020\ \text{M}$ of chloral and consume $0.0020\ \text{M}$ of chloral hydrate. So the equilibrium concentration of chloral will be

$[\text{product}] = [\text{H}_2\text{O}] = 0.0020\ \text{M}$

and the equilibrium concentration of chloral hydrate will be

$[\text{reactant}] = [\text{reactant}]_0 - 0.0020\ \text{M} = 0.0080\ \text{M}$

By definition, the equilibrium constant will be equal to

$K_c = \frac{[\text{product}] \cdot [\text{H}_2\text{O}]}{[\text{reactant}]}$

Keep in mind that water is a solute here, so its concentration must be included in the expression of the equilibrium constant.

$K_c = \frac{0.0020 \cdot 0.0020}{0.0080} = 5.0 \cdot 10^{-4}$
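A quick numeric check of the bookkeeping above (an editorial sketch, not part of the original answer):

```python
# ICE check: x M of chloral hydrate dissociates into x M chloral and x M water.
c0 = 0.010               # initial chloral hydrate concentration [M]
x = 0.0020               # equilibrium water concentration [M]
Kc = (x * x) / (c0 - x)  # [product][H2O] / [reactant]
print(Kc)                # 0.0005, i.e. 5.0e-4
```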
2019-12-13 04:58:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8217195272445679, "perplexity": 1735.613182416488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540548544.83/warc/CC-MAIN-20191213043650-20191213071650-00150.warc.gz"}
http://www.ni.com/documentation/en/labview-comms/2.0/m-ref/sort/
# sort

Sorts the input elements in ascending or descending order.

## Syntax

c = sort(a)
c = sort(a, b)
c = sort(a, order)
c = sort(a, b, order)
[c, d] = sort(a)
[c, d] = sort(a, b)
[c, d] = sort(a, order)
[c, d] = sort(a, b, order)

## a

Real or complex scalar or array of any dimension.

## b

Dimension of a across which to sort if a is an array. b can be in a range of 1 to the maximum supported array dimension (32). If you do not specify b, the function sorts the first dimension whose size is not equal to 1.

## order

Direction by which to sort elements. order is a string that accepts the following values.

'ascend' (default) - Sorts the elements in ascending order.
'descend' - Sorts the elements in descending order.

## c

Elements of a in ascending or descending order, depending on the value of order. MathScript sorts complex vectors by magnitude and angle, in that order. If a is an array, c returns a sorted by the dimension specified in b. c is an array of the same size as a.

## d

Indexes in a of the elements in c. d is a double array of the same size as a.

## Example

A = [-1+2i, 3, -1-2i, -4]
[C, D] = sort(A)

Where This Node Can Run:
Desktop OS: Windows
FPGA: Not supported
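As a rough illustration of the two-output form described above, here is a hypothetical Python analogue; this is not NI's implementation, and it uses 0-based indexes where MathScript uses 1-based:

```python
import cmath

# Hypothetical Python analogue of [c, d] = sort(a) for a complex vector.
# Complex values are ordered by magnitude, then angle, as described above.
def sort_with_indexes(a, order='ascend'):
    key = lambda z: (abs(complex(z)), cmath.phase(complex(z)))
    d = sorted(range(len(a)), key=lambda i: key(a[i]),
               reverse=(order == 'descend'))
    return [a[i] for i in d], d

c, d = sort_with_indexes([-1 + 2j, 3, -1 - 2j, -4])
print(c)   # values sorted by |z|, then angle
print(d)   # original positions of the sorted values
```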
2018-03-23 10:33:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2309042066335678, "perplexity": 3346.7086644011947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648207.96/warc/CC-MAIN-20180323102828-20180323122828-00511.warc.gz"}
https://math.stackexchange.com/questions/1018009/why-int-infty-infty-left-vert-e-ax-right-vert2-dx-int-0
# why $\int_{-\infty}^{\infty} \left\vert e^{-ax} \right\vert^{2} dx = \int_{0}^{\infty} e^{-2ax}dx$

The calculus textbook says $\int_{-\infty}^{\infty} \left\vert e^{-ax} \right\vert^{2} dx = \int_{0}^{\infty} e^{-2ax}dx$ but does not explain how this happened, and I am not able to figure it out. Could someone please show step by step how this transformation happened? Screenshot from the book (Intro to Applied Math, by Strang, page 314).

P.S. To answer the comment below claiming that the book has a typo in the lower limit and that it should be 0 and not negative infinity: the book's definition uses negative infinity, not zero, so the lower limit is not a typo. Screenshot:

The above is just before the example. So the book is using this example to illustrate Plancherel's formula.

The part inside the integral follows from $e^{a}e^{b} = e^{a+b}$. I'm not sure about the limits of integration changing; that looks like a typo. (If the book defines $f(x) = e^{-ax}$ only for $x \ge 0$ and zero otherwise, as is common for this example, both sides agree, since the integrand vanishes for negative $x$.)

• Thanks. But I do not understand where the typo is. Are you saying the final answer $\frac{1}{2a}$ is wrong also? If not, how would you integrate this then? – Steve H Nov 12 '14 at 4:57
• The lower limit of integration as $-\infty$ is a typo, it should be 0. – Suzu Hirose Nov 12 '14 at 4:58
• "The lower limit of integration as −∞ is a typo, it should be 0." But this is not how the book defines it. It actually has the definition from negative infinity to infinity. I can post a screenshot of the definition. I am sure the definition is correct. – Steve H Nov 12 '14 at 5:02
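For what it's worth, the right-hand side is easy to verify symbolically; a minimal sympy check (the positivity assumption on $a$ is mine), which also reproduces the $\frac{1}{2a}$ mentioned in the comments:

```python
from sympy import symbols, exp, integrate, oo

# Sanity check of the right-hand side, assuming a > 0. Note that the
# left-hand side as printed would diverge: e^(-2ax) blows up as x -> -oo,
# which is why the lower limit matters here.
a, x = symbols('a x', positive=True)
print(integrate(exp(-2*a*x), (x, 0, oo)))   # 1/(2*a)
```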
2019-11-12 03:08:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020695686340332, "perplexity": 218.80340036899653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00120.warc.gz"}
http://mathhelpforum.com/calculus/53682-use-implicit-differentiation-prove-power-rule.html
# Use implicit differentiation to prove the power rule

1. ## Use implicit differentiation to prove the power rule

Let y = x^(a/b) for integers "a" and "b". Raise both sides to the "bth" power and use implicit differentiation to prove the power rule y' = (a/b)x^(a/b-1).

2. y^b = x^a
b*y^(b-1)*y' = a*x^(a-1)
y' = {a*x^(a-1)}/{b*y^(b-1)}
y' = (a/b)*x^(a-1)/x^(a*(b-1)/b)
y' = (a/b)x^(a/b-1)

3. ## Confused

What exactly happened to the b*y from the 3rd step to the 4th step? Why did the y variable disappear?

4. Hello, erimat89! I'll do it in LaTeX . . .

Let $y \:=\: x^{\frac{a}{b}}$ for integers $a\text{ and }b$. Raise both sides to the power $b$ and use implicit differentiation to prove the power rule: . $y' \:=\:\frac{a}{b}\!\cdot\!x^{\frac{a}{b}-1}$

We have: . $y \;=\;x^{\frac{a}{b}}$ .[1]

Raise to the $b^{th}$ power: . $y^b \:=\:x^a$ .[2]

Differentiate implicitly: . $by^{b-1}y' \;=\;ax^{a-1} \quad\Rightarrow\quad y' \;=\;\frac{a}{b}\cdot\frac{x^{a-1}}{y^{b-1}}$ .[3]

Divide [2] by $y\!:\;\;y^{b-1} \:=\:\frac{x^a}{y}$

Substitute [1]: . $y^{b-1} \:=\: \frac{x^a}{x^{\frac{a}{b}}} \:=\:x^{a-\frac{a}{b}}$

Substitute into [3]: . $y' \;=\;\frac{a}{b}\cdot\frac{x^{a-1}}{x^{a-\frac{a}{b}}} \quad\Rightarrow\quad\boxed{ y' \;=\;\frac{a}{b}\!\cdot\!x^{\frac{a}{b}-1}}$
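The boxed result can also be checked symbolically; a small sympy sketch (the symbol assumptions below are mine, made so sympy can safely combine powers of x, and are not part of the thread):

```python
from sympy import symbols, diff

# Symbolic check of the boxed power-rule result for y = x**(a/b).
a, b = symbols('a b', positive=True, integer=True)
x = symbols('x', positive=True)
y = x**(a/b)
print(diff(y, x).equals((a/b) * x**(a/b - 1)))   # True
```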
2014-08-22 00:24:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9048545360565186, "perplexity": 1943.1751729202624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822053.47/warc/CC-MAIN-20140820021342-00375-ip-10-180-136-8.ec2.internal.warc.gz"}
https://demo.formulasearchengine.com/wiki/Landing_gear
# Landing gear

Landing gear is the undercarriage of an aircraft or spacecraft and is often referred to as such. For aircraft, the landing gear supports the craft when it is not flying, allowing it to take off, land and usually to taxi without damage. Wheels are typically used but skids, skis, floats or a combination of these and other elements can be deployed depending both on the surface and on whether the craft only operates vertically (VTOL) or is able to taxi along the surface. Faster aircraft usually have retractable undercarriage, which folds away during flight to reduce air resistance or drag. For launch vehicles and spacecraft landers, the landing gear is typically designed to support the vehicle only post-flight, and is not used for takeoff or surface movement.

## Aircraft landing gear

Aircraft landing gear usually includes wheels equipped with shock absorbers for solid ground, but some aircraft are equipped with skis for snow or floats for water, and/or skids or pontoons (helicopters). The undercarriage is a relatively heavy part of the vehicle; it can be as much as 7% of the takeoff weight, but more typically is 4–5%.[1]

### Gear arrangements

A SAN Jodel D.140 Mousquetaire with conventional "taildragger" undercarriage. A Mooney M20J with tricycle undercarriage.

Wheeled undercarriages normally come in two types: conventional or "taildragger" undercarriage, where there are two main wheels towards the front of the aircraft and a single, much smaller, wheel or skid at the rear; or tricycle undercarriage, where there are two main wheels (or wheel assemblies) under the wings and a third smaller wheel in the nose. The taildragger arrangement was common during the early propeller era, as it allows more room for propeller clearance. Most modern aircraft have tricycle undercarriages. Taildraggers are considered harder to land and take off (because the arrangement is usually unstable, that is, a small deviation from straight-line travel will tend to increase rather than correct itself), and usually require special pilot training. Sometimes a small tail wheel or skid is added to aircraft with tricycle undercarriage, in case of tail strikes during take-off. The Concorde, for instance, had a retractable tail "bumper" wheel, as delta-winged aircraft need a high angle when taking off. The Boeing 727 also has a retractable tail bumper. Some aircraft with retractable conventional landing gear have a fixed tailwheel, which generates minimal drag (since most of the airflow past the tailwheel has been blanketed by the fuselage) and even improves yaw stability in some cases.[citation needed]

Another arrangement sometimes used is central main and nose gear with outriggers on the wings. This may be done where there is no convenient location on either side to attach the main undercarriage or to store it when retracted. Examples include the Lockheed U-2 spy plane and the Harrier Jump Jet.
### Retractable gear

The landing gear of a Boeing 767 retracting into the fuselage. Schematic showing hydraulically operated landing gear, with the wheel stowed in the wing root of the aircraft.

To decrease drag in flight some undercarriages retract into the wings and/or fuselage with wheels flush against the surface or concealed behind doors; this is called retractable gear. If the wheels rest protruding and partially exposed to the airstream after being retracted, the system is called semi-retractable. Most retraction systems are hydraulically operated, though some are electrically operated or even manually operated. This adds weight and complexity to the design. In retractable gear systems, the compartments where the wheels are stowed are called wheel wells, which may also diminish valuable cargo or fuel space.[citation needed]

A Boeing 737-700 with main undercarriage retracted in the wheel wells without landing gear doors. A Ju 87D with a wheel spat on the right wheel, absent on the left.

Pilots confirming that their landing gear is down and locked refer to "three green" or "three in the green", a reference to the electrical indicator lights from the nosewheel and the two main gears. Red lights indicate the gear is in the up-locked position; amber lights indicate that the landing gear is in transit (neither down and locked nor fully retracted).[2]

### Nautical

Some aircraft have landing gear adapted to take off from and land on water. A floatplane has landing gear comprising two or more streamlined floats. A flying boat has a hull, the bottom of which is shaped like a boat and gives buoyancy. Additional landing gear is often present, typically comprising wing-mounted floats. Helicopters able to land on water may have floats or a hull. An amphibious aircraft has landing gear for both land and water-based operation.

### Other types of landing gear

An Me 163B Komet with its two-wheel takeoff "dolly" in place. Bell Model 207 Sioux Scout with tubular landing skids. Hawker Siddeley Harrier GR7 (ZG472): the two mainwheels are in line astern under the fuselage, with a smaller wheel on each wing. A captured Mitsubishi A6M shows the Zero's nearly perpendicular main gear strut angle to its wing when extended.

#### Detachable landing gear

Some aircraft use wheels for takeoff and then jettison them soon afterwards for improved aerodynamic streamlining without the complexity, weight and space requirements of a retraction mechanism. In these cases, the wheels to be jettisoned are sometimes mounted onto axles that are part of a separate "dolly" (for main wheels only) or "trolley" (for a three-wheel set with a nosewheel) chassis. Landing is then accomplished on skids or similar other simple devices.

#### Wheel-skis

### Tires and wheels

Two mechanics replacing a main landing gear wheel on a Lockheed P-3 Orion.

The number of tires required for a given aircraft design gross weight is largely determined by the flotation characteristics. Specified selection criteria, e.g., minimum size, weight, or pressure, are used to select suitable tires and wheels from the manufacturer's catalog and industry standards found in the Aircraft Yearbook published by the Tire and Rim Association, Inc. The choice of the main wheel tires is made on the basis of the static loading case.
The total main gear load $F_{\text{m}}$ is calculated assuming that the aircraft is taxiing at low speed without braking:[6]

$F_{\text{m}} = \frac{l_{\text{n}}}{l_{\text{m}} + l_{\text{n}}} W$

where $W$ is the weight of the aircraft and $l_{\text{m}}$ and $l_{\text{n}}$ are the distances measured from the aircraft's center of gravity (cg) to the main and nose gear, respectively.

The choice of the nose wheel tires is based on the nose wheel load $F_{\text{n}}$ during braking at maximum effort:[6]

$F_{\text{n}} = \frac{l_{\text{m}}}{l_{\text{m}} + l_{\text{n}}}(W - L) + \frac{h_{\text{cg}}}{l_{\text{m}} + l_{\text{n}}}\left(\frac{a_{\text{x}}}{g} W - D + T\right)$

where $L$ is the lift, $D$ is the drag, $T$ is the thrust, and $h_{\text{cg}}$ is the height of the aircraft cg above the static groundline. Typical values for $\frac{a_{\text{x}}}{g}$ on dry concrete vary from 0.35 for a simple brake system to 0.45 for an automatic brake pressure control system. As both $L$ and $D$ are positive, the maximum nose gear load occurs at low speed. Reverse thrust decreases the nose gear load, and hence the condition $T = 0$ results in the maximum value:[6]

$F_{\text{n}} = \frac{l_{\text{m}} + h_{\text{cg}}\left(\frac{a_{\text{x}}}{g}\right)}{l_{\text{m}} + l_{\text{n}}} W$

To ensure that the rated loads will not be exceeded in the static and braking conditions, a seven percent safety factor is used in the calculation of the applied loads.

#### Inflation pressure

Provided that the wheel load and configuration of the landing gear remain unchanged, the weight and volume of the tire will decrease with an increase in inflation pressure.[6] From the flotation standpoint, a decrease in the tire contact area will induce a higher bearing stress on the pavement, thus eliminating certain airports from the aircraft's operational bases. Braking will also become less effective due to a reduction in the frictional force between the tires and the ground. In addition, the decrease in the size of the tire, and hence the size of the wheel, could pose a problem if internal brakes are to be fitted inside the wheel rims. The arguments against higher pressure are of such a nature that commercial operators generally prefer lower pressures in order to maximize tire life and minimize runway stress. However, too low a pressure can lead to an accident, as in Nigeria Airways Flight 2120. A rough general rule for required tire pressure is given by the manufacturer in their catalog. Goodyear, for example, advises the pressure to be 4% higher than required for a given weight, or as a fraction of the rated static load and inflation.[7] Tires of many commercial aircraft are required to be filled with nitrogen or low-oxygen air to prevent internal combustion of the tire, which may result from overheating brakes producing volatile hydrocarbons from the tire lining.[8]

### Landing gear and accidents

JetBlue Airways Flight 292, an Airbus A320, making an emergency landing on runway 25L at LAX in 2005 after the front landing gear malfunctioned.

Malfunctions or human errors (or a combination of these) related to retractable landing gear have been the cause of numerous accidents and incidents throughout aviation history.
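A minimal sketch of the two load formulas above as Python functions; the function and symbol names are my own, the example numbers are invented for illustration, and consistent units (e.g. newtons and metres) are assumed:

```python
# Illustrative sketch of the static and braking load formulas above.
def main_gear_load(W, l_m, l_n):
    # static main gear load while taxiing slowly without braking
    return l_n / (l_m + l_n) * W

def max_nose_gear_load(W, l_m, l_n, h_cg, ax_over_g):
    # maximum-effort braking with L = D = 0 and T = 0 (low speed, no reverse thrust)
    return (l_m + h_cg * ax_over_g) / (l_m + l_n) * W

SAFETY = 1.07   # the seven percent factor mentioned in the text
W, l_m, l_n, h_cg = 700_000.0, 2.0, 18.0, 3.0   # made-up example values
print(SAFETY * main_gear_load(W, l_m, l_n))
print(SAFETY * max_nose_gear_load(W, l_m, l_n, h_cg, 0.45))
```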
Distraction and preoccupation during the landing sequence played a prominent role in the approximately 100 gear-up landing incidents that occurred each year in the United States between 1998 and 2003.[9] A gear-up landing incident, also known as a belly landing, is an accident that may result from the pilot simply forgetting, or failing, to lower the landing gear before landing, or from a mechanical malfunction that does not allow the landing gear to be lowered. Although rarely fatal, a gear-up landing is very expensive, as it causes massive airframe damage. If the landing results in a prop strike, a complete engine rebuild may also be required.

Many aircraft between the wars – at the time when retractable gear was becoming commonplace – were deliberately designed to allow the bottom of the wheels to protrude below the fuselage even when retracted, to reduce the damage caused if the pilot forgot to extend the landing gear or in case the plane was shot down and forced to crash-land. Examples include the Avro Anson, Boeing B-17 Flying Fortress and the Douglas DC-3. The modern-day Fairchild-Republic A-10 Thunderbolt II carries on this legacy: it is similarly designed in an effort to avoid (further) damage during a gear-up landing, a possible consequence of battle damage.[citation needed]

Some aircraft have a stiffened fuselage bottom or added firm structures, designed to minimize structural damage in a wheels-up landing. When the Cessna Skymaster was converted for a military spotting role (the O-2 Skymaster), fiberglass railings were added along the length of the fuselage; they were adequate to support the aircraft without damage if it was landed on a grassy surface.[citation needed]

The Bombardier Dash 8 is notorious for its landing gear problems. There were three incidents, all involving Scandinavian Airlines: flights SK1209, SK2478, and SK2867. This led to Scandinavian retiring all of its Dash 8s. The cause of these incidents was a locking mechanism that failed to work properly. This also caused concern about the aircraft for many other airlines that found similar problems. Bombardier Aerospace ordered all Dash 8s with 10,000 or more flight hours to be grounded; it was soon found that 19 Horizon Airlines Dash 8s had locking mechanism problems, as did 8 Austrian Airlines planes, and several hundred flights were canceled.[citation needed]

On September 21, 2005, JetBlue Airways Flight 292 successfully landed with its nose gear turned 90 degrees sideways, resulting in a shower of sparks and flame after touchdown. This type of incident is very uncommon, as the nose oleo struts are designed with centering cams to hold the nosewheels straight until they are compressed by the weight of the aircraft.[citation needed]

On November 1, 2011, LOT Polish Airlines Flight LO16 successfully belly-landed at Warsaw Chopin Airport due to technical failures; all 231 people on board escaped without injury.[10]

#### Emergency extension systems

In the event of a failure of the aircraft's landing gear extension mechanism, a backup is provided.
This may be an alternate hydraulic system, a hand-crank, compressed air (nitrogen), a pyrotechnic system, or a free-fall system.[11] A free-fall or gravity-drop system uses gravity to deploy the landing gear into the down-and-locked position. To accomplish this the pilot activates a switch or mechanical handle in the cockpit, which releases the up-lock. Gravity then pulls the landing gear down and deploys it. Once in position the landing gear is mechanically locked and safe to use for landing.[12]

### Stowaways in aircraft landing gear

Unauthorized passengers have been known to stow away on larger aircraft by climbing a landing gear strut and riding in the compartment. There are extreme dangers to this practice, and numerous deaths have been reported, due to the lack of heating and oxygen in the landing gear compartments as well as the lack of room left by the retracting gear.[citation needed]

## Spacecraft

### Launch vehicles

Landing gear has traditionally not been used on the vast majority of space launch vehicles, which take off vertically and are destroyed on falling back to earth. With some exceptions for suborbital vertical-landing vehicles (e.g., Masten Xoie or the Armadillo Aerospace Lunar Lander Challenge vehicle), or for spaceplanes that use the vertical takeoff, horizontal landing (VTHL) approach (e.g., the Space Shuttle, or the USAF X-37), landing gear has been largely absent from orbital vehicles during the early decades since the advent of spaceflight technology, when orbital space transport was the exclusive preserve of national-monopoly governmental space programs.[13] Each spaceflight system to date has relied on expendable boosters to begin each ascent to orbital velocity. This is beginning to change.

Recent advances in private space transport, where new competition to governmental space initiatives has emerged, have included the explicit design of landing gear into orbital booster rockets. SpaceX has initiated and funded a multi-million-dollar program to pursue this objective, known as the reusable launch system development program. As part of this program, SpaceX built, and flew eight times in 2012–2013, a first-generation orbital booster test vehicle with a large fixed landing gear in order to test low-altitude vehicle dynamics and control for vertical landings of a near-empty orbital first stage.[14][15] A second-generation, larger booster test vehicle has been built with extensible landing gear. The first prototype was flown five times in 2014 for low-altitude tests, and the second was expected to begin high-altitude test flights in New Mexico in late 2014.[16][17]

The orbital-flight version of the SpaceX design includes a lightweight, deployable landing gear for the booster stage: a nested, telescoping piston on an A-frame, with four carbon fiber/aluminum extensible landing legs;[18][19] the deployment system uses high-pressure helium as the working fluid.[20] The first test of the extensible landing gear was successfully accomplished in April 2014 on a Falcon 9 rocket and was the first successful controlled ocean soft touchdown of a liquid-rocket-engine orbital booster.[21][22]

### Landers

Comet lander Philae, showing landing gear.
Each pad at the end of the lander's legs has an ice screw, necessary for attachment to a celestial body with a very low gravitational field.

Spacecraft designed to land safely on extraterrestrial bodies such as the Moon or Mars usually have landing gear. Such landers include the Apollo Lunar Module as well as a number of robotic space probe landers. Examples include the Viking 1 lander, the first lander to successfully land on Mars (November 1976),[23] and Philae, which was carried to comet 67P/Churyumov–Gerasimenko by the Rosetta orbiter over a 10-year transit and landed on the comet on 12 November 2014.[24][25][26][27]

## References

(Most entries in the original reference list were reduced to bare citation-template markers by extraction; the recoverable entries are listed below.)

6. Sonny T. Chai and William H. Mason, "Landing gear integration in aircraft conceptual design", September 1996. Retrieved 26 January 2012.
7. Goodyear Tire & Rubber Co. Retrieved 26 January 2012.
8. FAA Ruling: "Use of Nitrogen or Other Inert Gas for Tire Inflation in Lieu of Air", Docket No. 26147, Amendment No. 25-78, RIN 2120-AD87.
20. The Guardian, 12 November 2014: http://www.theguardian.com/science/2014/nov/12/rosetta-mission-philae-historic-landing-comet
2021-06-12 11:19:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29427370429039, "perplexity": 5747.449136266581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487582767.0/warc/CC-MAIN-20210612103920-20210612133920-00431.warc.gz"}
https://physics.stackexchange.com/questions/293032/what-do-the-commutators-of-the-hamiltonian-with-the-spin-operators-mean-precessi
# What do the commutators of the Hamiltonian with the spin operators mean precession-wise?

I proved that $[H, S_z] = 0$, while $H$ and $S_x, S_y$ do not commute. I showed this using matrix representations; now I am to comment on my results with respect to spin precession and I need help with that - how exactly do commutators represent something about precession and rotation along the 3 axes?

The commutator of an observable like $S_x$ or $P_y$ (the spin in the x direction and the y component of the momentum, respectively) with the Hamiltonian will tell you about the time evolution of that observable. It tells you how that observable changes with time. This is given through the Heisenberg equation:

$\frac{dA}{dt} = \frac{i}{\hbar}[H, A]$

where $A$ is an operator (like $S_x$ or $P_y$) and I have assumed that $A$ does not depend explicitly on time, i.e. $A = A(x(t),p(t))$ and not $A = A(x(t), p(t), t)$.

Now from the equation above, we can see that if $A$ commutes with the Hamiltonian, $[H,A] = 0$, then $\frac{dA}{dt} = 0$ and thus $A$ is constant in time; it is conserved. So when you showed that $S_z$ commutes with $H$ while $S_x$ and $S_y$ do not, you showed that the z component of a particle's spin is constant in time, while the x and y components are not constant: they precess.

• so this is true provided that H is time independent? – dumpy Nov 16 '16 at 15:04
• H can be time dependent or independent in this formalism. – CStarAlgebra Nov 16 '16 at 15:11
• so that means the same can be shown in another way using expectation values of Sx, Sy, Sz, yes? also thanks a lot! – dumpy Nov 16 '16 at 15:27
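A quick numerical illustration of the answer, assuming a spin-1/2 particle with a Hamiltonian proportional to $S_z$ (my assumption; the original question does not state its H):

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
omega = 1.0
H = omega * Sz   # e.g. a spin in a magnetic field along z (assumption)

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(H, Sz), 0))                       # True: Sz is conserved
print(np.allclose(comm(H, Sx), 1j * hbar * omega * Sy))  # True: Sx evolves into Sy
print(np.allclose(comm(H, Sy), -1j * hbar * omega * Sx)) # True: Sy evolves into -Sx
# Via the Heisenberg equation, dSx/dt = -omega*Sy and dSy/dt = omega*Sx: precession.
```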
2020-05-27 00:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294785857200623, "perplexity": 156.90204622924634}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00155.warc.gz"}
https://gateoverflow.in/302825/gate2019-23
Consider three concurrent processes $P1$, $P2$ and $P3$ as shown below, which access a shared variable $D$ that has been initialized to $100$

$\begin{array}{|c|c|c|} \hline P1 & P2 & P3 \\ \hline : & : & : \\ : & : & : \\ D=D+20 & D=D-50 & D=D+10 \\ : & : & : \\ : & : & : \\ \hline \end{array}$

The processes are executed on a uniprocessor system running a time-shared operating system. If the minimum and maximum possible values of $D$ after the three processes have completed execution are $X$ and $Y$ respectively, then the value of $Y-X$ is ____

• Comment: after trying all 6 possible orderings of complete (uninterrupted) executions, D always ends at 80, so don't take 80 as both max and min. The question asks for the maximum and minimum "possible" values, which are 130 and 50.
• Comment: this is the concept of overwriting (lost updates).

$D=100$. Arithmetic operations are not atomic. Each is a three-step process:

1. Read
2. Calculate
3. Update

Maximum value:

Run P2 for read and calculate. $D = 100$.
Run P1 for read and calculate. $D = 100$.
Run P2's update. $D = 50$.
Run P1's update. $D = 120$ (it uses the value $100$ it read earlier).
Run P3 for read, calculate and update. $D = 130$.

Minimum value:

Run P1, P2 and P3 for read and calculate. $D = 100$.
Run P1's update. $D = 120$.
Run P3's update. $D = 110$ (it uses the value $100$ it read earlier).
Run P2's update. $D = 50$.

Difference between maximum and minimum $= 130 - 50 = 80$.

• It's saying uniprocessor but also time-shared, so the answer is 80.
• @Digvijay Pandey: so for the minimum, P1 writes $D = 120$, P3 then reads $120$ and writes $D = 130$, and finally P2 (which read $D = 100$ earlier) calculates $100 - 50$ and writes the final value $D = 50$. Is this correct?
• How will the answer change for a multiprocessor environment?
• In a multiprocessor environment, assume we have 3 processors, one for each process. Each process then executes as a whole on its own processor, so the output depends on the order in which reads and writes happen to occur. For the minimum value, P2 should write back its result after P1 and P3, but read $D$ before it is modified by P1 and P3:

P1: Read(D) = 100; D = D + 20 = 120; Write(D): 120
P2: Read(D) = 100 (before any write)
P3: Read(D) = 120; D = D + 10 = 130; Write(D): 130
P2: D = D - 50 = 50; Write(D): 50, so X = 50

Similarly for the maximum value:

P1: Read(D) = 100; D = D + 20 = 120; Write(D): 120
P2: Read(D) = 120; D = D - 50 = 70; Write(D): 70
P3: Read(D) = 120 (before P2's write); D = D + 10 = 130; Write(D): 130, so Y = 130

Hence $Y - X = 130 - 50 = 80$.
Another answer traces the context switches directly.

(a) For the max value: start from either $P_1$ or $P_3$; say $P_1$. $P_1$ reads $D = 100$ and is then context-switched out. Charge is given to $P_2$ (chosen because it can decrease D's value), so $D$ becomes $50$. Control returns to $P_1$, but $P_1$'s read value is already set, so a lost update arises and $D = 100 + 20 = 120$. Now $P_3$ starts execution, and eventually $D = 120 + 10 = 130$, so $Y = 130$.

(b) For the min value: start from $P_2$ because it contains the negative operation, and run a similar schedule: $P_2$ reads $D = 100$ and is switched out, $P_1$ and $P_3$ run to completion, and $P_2$ then finishes with its stale read, so $D = 100 - 50 = 50$ and $X = 50$.

Then the value of $Y - X = 130 - 50 = 80$.

Another answer models each process as three instructions:

P1: (1) Load R_p1, M[D]; (2) Add 20; (3) Store M[D], R_p1
P2: (1) Load R_p2, M[D]; (2) Sub 50; (3) Store M[D], R_p2
P3: (1) Load R_p3, M[D]; (2) Add 10; (3) Store M[D], R_p3

Minimum value: P2 → 1, 2, preempt; P1 → 1, 2, 3 (D = 120); P3 → 1, 2, 3 (D = 130); P2 → 3 (D = 50).
Maximum value: P1 → 1, 2, 3 (D = 120); P3 → 1, 2, preempt; P2 → 1, 2, 3 (D = 70); P3 → 3 (D = 130).
ANS: 130 - 50 = 80.

Another way to see it: for the maximum, P1 executes first, making D = 120; P3 then reads D and is preempted; P2 reads D and executes its statement, making D = 120 - 50 = 70; P3 then resumes with its previously read value (120) and executes its statement, making D = 130. Since P3's write comes last, this is the final value: maximum = 130. For the minimum, let P2 read D = 100 and be preempted, let P1 and P3 run to completion, and then let P2 write 100 - 50 = 50: minimum = 50. So Y - X = 130 - 50 = 80. The key point is that preemption must be taken into consideration, as the processes are not atomic.

The minimum value (X) of D is possible when:
1. P2 reads D = 100.
2. P1 executes D = D + 20, so D = 120.
3. P3 executes D = D + 10, so D = 130.
4. Now P2, still holding D = 100, executes D = D - 50 = 100 - 50 = 50 and writes D = 50 as the final value.
So the minimum value (X) of D is 50.

The maximum value (Y) of D is possible when:
1. P1 reads D = 100.
2. P2 reads D = 100 and executes D = D - 50 = 100 - 50 = 50.
3. Now P1 executes D = D + 20 = 100 + 20 = 120.
4. P3 reads D = 120 and executes D = D + 10, writing D = 130 as the final value.
So the maximum value (Y) of D is 130.

Therefore, Y - X = 130 - 50 = 80.
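The accepted values can also be confirmed by brute force: model each process as an atomic read followed by an atomic write of (read value + delta), and enumerate every interleaving. This sketch is mine, not from the thread:

```python
from itertools import permutations

# Each process is two atomic steps: a read of D, then a write of
# (value it read) + (its delta). All interleavings preserve per-process order.
def min_max_final_D():
    deltas = {'P1': 20, 'P2': -50, 'P3': 10}
    finals = set()
    for order in set(permutations(['P1', 'P1', 'P2', 'P2', 'P3', 'P3'])):
        D, local = 100, {}
        for p in order:
            if p not in local:
                local[p] = D                  # read step
            else:
                D = local[p] + deltas[p]      # write step
        finals.add(D)
    return min(finals), max(finals)

X, Y = min_max_final_D()
print(X, Y, Y - X)   # expected: 50 130 80
```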
2020-02-19 16:57:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36000874638557434, "perplexity": 4864.687915374688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/warc/CC-MAIN-20200219153707-20200219183707-00317.warc.gz"}
http://openstudy.com/updates/50518686e4b02b4447c13a72
## Deathfish

In the equation $xy = x + y$, is it possible to find $x$ without some sort of implicit function?

1. .Sam.

$xy = x + y$
$xy - x = y$
$x(y-1) = y$
$x = \frac{y}{y-1}$
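The same rearrangement can be reproduced with sympy; a one-line sketch (noting the result is only valid when $y \neq 1$):

```python
from sympy import symbols, Eq, solve

x, y = symbols('x y')
print(solve(Eq(x*y, x + y), x))   # [y/(y - 1)], valid whenever y != 1
```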
2015-08-31 15:28:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2720837891101837, "perplexity": 7613.067072468781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066266.26/warc/CC-MAIN-20150827025426-00089-ip-10-171-96-226.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/85996/3d-translations-relative-to-3d-rotations
# 3D Translations relative to 3D Rotations

I'm trying to program camera movement to be relative to camera rotation. (Forward is always forward, regardless of pitch, yaw, and roll.) I want to be able to move forward, backward, left, right, up, and down. I do not want to use a matrix. I want to use sin and cos from the standard math library. The camera rotates on all 3 axes. The rotation order is z (roll), y (pitch), x (yaw). When all rotations are 0, positive z is forward, positive y is down, and positive x is left. So far I've gotten forward and backward movement to work with:

velZ = speed * cos(rotX) * cos(rotY);
velY = speed * sin(rotY);
velX = speed * sin(-rotX) * cos(rotY);

If somebody knows how to do this or knows where I can find information on this, it would be greatly appreciated.

• A matrix is just a concise definition of multiplying some values and getting multiple values back out. If you can define those multiplications with 20 lines of procedural code, you can do it as well or better with a matrix. Why the matrix-less requirement? It's equally true, of course, that if you can find the matrix definition, you can create 20 lines of code to mimic it. Oct 16, 2014 at 21:12
• This is an XY problem - a request for instruction on an inappropriate solution. If there were insight to be obtained by performing the calculations directly from the trigonometry this might be forgiven, but the opposite is true - the solution will be so cumbersome, and so difficult to make performant, that all insight will be buried under the code. Oct 18, 2014 at 3:08

In the neutral position you have defined forward to be the positive z vector (0, 0, 1). There are two vectors perpendicular to that vector (if we ignore sign): up (0, 1, 0) and left (1, 0, 0). The easiest thing would be to create all three vectors and to apply a matrix transformation to find the left, up, and forward vectors in 'camera space'.

// Build the rotation once, then rotate the three basis vectors into camera space.
Matrix transform = Matrix.CreateFromYawPitchRoll(y, p, r);
Vector3 forward = Vector3.Transform(new Vector3(0, 0, 1), transform);
Vector3 left = Vector3.Transform(new Vector3(1, 0, 0), transform);
Vector3 up = Vector3.Transform(new Vector3(0, 1, 0), transform);

You do not necessarily need a matrix. You can also find the left and up vectors by using the cross product. (See here Unity's documentation on it, but the mathematical principles apply in general.) Once you have all three vectors you can strafe and move up by simply adding the correct vector (multiplied by movement speed) to the camera position (and look-at) vector(s). For example:

public void MoveLeft(float speed)
{
    // shift both position (and, if present, the look-at target) along "left"
    camera.Position = camera.Position + (left * speed);
}

• I'll look into it. But I'd like to just use sin and cos functions if I can. I've never worked with matrix rotations in this way before. Oct 16, 2014 at 17:46
• Those sin/cos functions are exactly how the matrix is created. See here. There's no sense in rewriting what someone else did and thousands have tested, especially if it's part of a library you're already using. Oct 16, 2014 at 18:37
• I do not want to use a matrix. I want to use sin and cos. Oct 16, 2014 at 20:41
• What framework are you using; are matrix calculations not built in? That would complicate things. However, not using matrices will also complicate things. I do not have time at the moment to deduce all the calculations done by the matrix for all cases. (There are some corner cases when using trig, where you need to account for the correct quadrant and such.) Maybe someone else will do it for you, or you can deduce them yourself. Oct 17, 2014 at 8:28
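A matrix-free sketch in Python of the asker's forward vector plus a cross-product strafe direction, along the lines the answer suggests; the axis conventions follow the question, but the cross-product order and sign are my assumptions and depend on the handedness of the coordinate system:

```python
import math

# Conventions from the question: +z forward, +y down, +x left;
# rotX = yaw, rotY = pitch. Roll does not change where "forward" points.
def forward(rot_x, rot_y):
    return (math.sin(-rot_x) * math.cos(rot_y),   # x component
            math.sin(rot_y),                      # y component
            math.cos(rot_x) * math.cos(rot_y))    # z component

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

f = forward(0.3, 0.1)
# A strafe direction: cross "down" (0, 1, 0) with forward and normalize.
# The order/sign here is an assumption and may need flipping; with roll
# involved you would additionally rotate this result about f by rotZ.
left = cross((0.0, 1.0, 0.0), f)
print(f, left)
```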
2022-05-24 08:26:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.370683491230011, "perplexity": 739.3947693440242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00189.warc.gz"}
https://iq.opengenus.org/least-frequent-element-in-array/
# Least frequent element in an array

Given an array of N elements, our task is to find the least frequent element present in it. There are many ways to do this. In this article, we are going to talk about 3 of those methods along with their implementation.

1. Different ways to find the least frequent element in an array
2. Naive Algorithm
3. Optimized algorithm with array sorting
4. Optimized algorithm with mapping

# Different ways to find the least frequent element

We are going to be exploring three methods through which we will be able to find the least frequent element in an array and then print it. If there are multiple elements that appear the least number of times in the array, then we can print any one of them. The three methods are:

1. Naive algorithm
2. Optimized algorithm with array sorting
3. Optimized algorithm with mapping

# (1) Naive Algorithm

This is a very simple and straightforward solution to this problem that doesn't really take the efficiency and complexity of the algorithm into account. It is a brute force algorithm. The steps of the Naive algorithm to find the least frequent element are as follows:

NOTE: Here arr is our main input array

1. Initialize the values of leastCtr and leastElement as the length of the input array and -99 respectively.
2. for i from 0 to length of arr do
   2.1 Initialize the value of currentCtr as 0
   2.2 for j from 0 to length of arr do
       2.2.1 if (arr[i] == arr[j]) do
           2.2.1.1 Increment the value of currentCtr by 1
   2.3 if currentCtr < leastCtr do
       2.3.1 Replace the values of leastCtr and leastElement with currentCtr and arr[i] respectively
3. By the time we reach this point, we will already have the least frequent element of the array stored in the leastElement variable and its count will be stored in the leastCtr variable.

## Question

The time complexity of this algorithm should be: O(n²), O(log n), O(n), or O(n log n)?

Correct! The time complexity of this algorithm is O(n²) as we have used nested loops here.

### (a) Explanation

Let us consider the following array as our input: [5, 6, 16, 6, 5].

• For each element of the outer loop i, the inner loop j checks if the element present at the ith index is equal to the element present at the jth index. If they are equal, then the currentCtr variable is incremented by 1.
• When the control comes out of the inner loop, the algorithm then checks if the value of currentCtr is lesser than the value of leastCtr. If it is, then the value of leastCtr is replaced with the value of currentCtr.
• The size of the given array is 5.
• When i = 0 and the inner loop j has finished completely, the values of currentCtr, leastCtr, leastElement are updated to 2, 2 and 5 respectively because the value of currentCtr (2) was lesser than the value of leastCtr (5).
• When i = 1 and the inner loop j has finished completely, the value of currentCtr is updated to 2. However, the value of leastCtr (2) and leastElement (5) remain the same as the value of currentCtr (2) is not less than leastCtr (2).
• When i = 2 and the inner loop j has finished completely, the values of currentCtr, leastCtr, leastElement are updated to 1, 1 and 16 respectively because the value of currentCtr (1) was lesser than the value of leastCtr (2).
• When i = 3 and the inner loop j has finished completely, the value of currentCtr is updated to 2. However, the value of leastCtr (1) and leastElement (16) remain the same as the value of currentCtr (2) is not less than leastCtr (1).
• When i = 4 and the inner loop j has finished completely, the value of currentCtr is updated to 2. However, the value of leastCtr (1) and leastElement (16) remain the same as the value of currentCtr (2) is not less than leastCtr (1).
• Finally, after the completion of the loops, the least frequent element in the array (leastElement) is found to be 16 and its count was found to be 1.

### (b) Implementation in Python

Following is the implementation of our Naive approach in Python:

    def findLeastFreqElementNaive(arr):
        leastCtr, leastElement = len(arr), -99
        for i in range(len(arr)):
            currentCtr = 0
            for j in range(len(arr)):
                if (arr[i] == arr[j]):
                    currentCtr += 1
            if (currentCtr < leastCtr):
                leastCtr, leastElement = currentCtr, arr[i]
        return leastElement, leastCtr

    if __name__ == "__main__":
        arr = [5, 6, 16, 6, 5]
        leastFreqElementNaive, ctr1 = findLeastFreqElementNaive(arr)
        print("====NAIVE ALGORITHM====")
        print("Given array:", arr)
        print("The least frequent element in the array is:", leastFreqElementNaive)
        print("Count of", leastFreqElementNaive, ":", ctr1)

### (c) Output

    ====NAIVE ALGORITHM====
    Given array: [5, 6, 16, 6, 5]
    The least frequent element in the array is: 16
    Count of 16 : 1

### (d) Complexity

Time complexity: O(N²)
Space complexity: O(1)

# (2) Optimized algorithm with array sorting

This is an optimized algorithm which is much more efficient than the naive algorithm in terms of computational complexity. Firstly, we sort the array and then traverse through each element linearly while keeping track of the frequency of the elements. The steps of our optimized algorithm using sorting to find the least frequent element are as follows:

NOTE: Here arr is our main input array

1. Copy the contents of the input array to a new variable temp_arr. The motivation behind this step is that the original order of the input array is maintained and we can perform the required operations on the temporary array.
2. Initialize the values of leastCtr, leastElement and currentCtr as the length of the input array, -99 and 1 respectively.
3. Perform the sorting operation on temp_arr. For example, it will be temp_arr.sort() in Python.
4. for i from 0 to (length of temp_arr) - 1 do
   4.1 if temp_arr[i] == temp_arr[i + 1] do
       4.1.1 Increment the value of currentCtr by 1
   4.2 else do
       4.2.1 if currentCtr < leastCtr do
           4.2.1.1 Replace the values of leastCtr and leastElement with currentCtr and temp_arr[i] respectively
       4.2.2 Reset the value of currentCtr to 1
5. if currentCtr < leastCtr do
   5.1 Replace the values of leastCtr and leastElement with currentCtr and temp_arr[len(temp_arr) - 1] respectively. We are performing this extra check because this check would not be performed for the last element in the array in our main loop. This is why we have to explicitly check if the last element in the input array is the least frequent element in the array or not.
6. By the time we reach this point, we will already have the least frequent element of the array stored in the leastElement variable and its count will be stored in the leastCtr variable.

## Question

The time complexity of this algorithm should be: O(n), O(n + n log n), O(n²), or O(n log n)?

Correct! The time complexity of this algorithm is O(n log n).

### (a) Explanation

Let us consider the following array as our input: [3, 21, 21, 15, 3].

• Firstly, the array is sorted.
• After sorting, for each i from 0 to the size of the array minus 1, i.e., 4, we check if the value of temp_arr[i] is equal to the value of temp_arr[i + 1]. If it is, then we increment the value of currentCtr and go back to the loop.
• However, if the value of temp_arr[i] is not equal to the value of temp_arr[i + 1], then we check if the value of currentCtr is lesser than the value of leastCtr. If it is, then we update the values of leastCtr and leastElement to currentCtr and temp_arr[i] respectively.
• When i = 0, the values of leastCtr and leastElement remain 5 and -99 respectively. Since the value of temp_arr[i] (3) is equal to the value of temp_arr[i + 1] (3), the value of currentCtr is incremented by 1 (becomes 2) and then the control goes back to the start of the loop.
• When i = 1, the value of temp_arr[i] (3) is not equal to the value of temp_arr[i + 1] (15), and the value of currentCtr is lesser than the value of leastCtr, therefore the value of leastCtr is set to the value of currentCtr (2) and the value of leastElement becomes temp_arr[i] (3). After that, the value of currentCtr is reset to 1 and the control goes back to the start of the loop.
• When i = 2, since the value of temp_arr[i] (15) is not equal to the value of temp_arr[i + 1] (21), the values of leastCtr (2) and leastElement (3) are set to 1 and 15 respectively. The control then goes back to the main loop.
• When i = 3, the value of temp_arr[i] (21) is equal to the value of temp_arr[i + 1]. The value of currentCtr (1) is incremented and becomes 2, and then the loop finally stops.
• One final check is performed to see if the last element is the least frequent element in the array. However, that is not the case in our input array, so we simply return the values of leastElement (15) and leastCtr (1).

### (b) Implementation in Python

Following is the implementation of our Optimized algorithm with array sorting in Python:

    def findLeastFreqElementSorting(arr):
        temp_arr, leastCtr, leastElement, currentCtr = arr.copy(), len(arr), -99, 1
        temp_arr.sort()
        for i in range(len(temp_arr) - 1):
            if (temp_arr[i] == temp_arr[i + 1]):
                currentCtr += 1
            else:
                if (currentCtr < leastCtr):
                    leastCtr, leastElement = currentCtr, temp_arr[i]
                currentCtr = 1
        if (currentCtr < leastCtr):
            leastCtr, leastElement = currentCtr, temp_arr[len(temp_arr) - 1]
        return leastElement, leastCtr

    if __name__ == "__main__":
        arr = [3, 21, 21, 15, 3]
        leastFreqElementSorting, ctr2 = findLeastFreqElementSorting(arr)
        print("====OPTIMIZED ALGORITHM WITH SORTING====")
        print("Given array:", arr)
        print("The least frequent element in the array is:", leastFreqElementSorting)
        print("Count of", leastFreqElementSorting, ":", ctr2)

### (c) Output

    ====OPTIMIZED ALGORITHM WITH SORTING====
    Given array: [3, 21, 21, 15, 3]
    The least frequent element in the array is: 15
    Count of 15 : 1

### (d) Complexity

Time complexity: O(N log N)
Space complexity: O(N) (because of the copy of the input array; Python's sort() may also use up to O(N) auxiliary space)

# (3) Optimized algorithm with hash mapping

This is another optimized algorithm which is much more efficient than the naive algorithm in terms of computational complexity. Since we are using the Python language, we will make use of the dictionary data structure to implement this algorithm with mapping. The algorithm is illustrated below:

NOTE: Here arr is our main input array

1. Declare dictMap as an empty dictionary. We will map the elements and their frequencies here.
2. for i from 0 to length of arr do
   2.1 if arr[i] is present in dictMap as a key do
       2.1.1 Increment the value corresponding to the key arr[i] by 1 in dictMap
   2.2 else do
       2.2.1 In dictMap, initialize the key arr[i] with its value as 1
3. Put the least value present in dictMap in the leastElementCtr variable.
For example, in Python, it can be simply done by calling the min method as min(dictMap.values())
4. for every key i in dictMap do
   4.1 if dictMap[i] == leastElementCtr do
       4.1.1 Initialize the value of the variable leastElement as i
       4.1.2 break the loop
5. By the time we reach this point, we will already have the least frequent element of the array stored in the leastElement variable and its count will be stored in the leastElementCtr variable.

## Question

The time complexity of this algorithm should be: O(n), O(n + k), O(n²), or O(n log n)?

Correct! The time complexity of this algorithm is O(n).

### (a) Explanation

Let us consider the following array as our input: [17, 10, 11, 11, 10].

• For each index i (0 to 4) in the array, if arr[i] is not present in our dictionary dictMap, we make an addition to dictMap with the key being arr[i] and the value being 1. If the value arr[i] is present in our dictMap, then we simply increment the value by 1 that corresponds to the key dictMap[arr[i]].
• After all the values (frequencies) have been mapped to their respective keys (elements), we then run a final loop to find the key (element) with the minimum value (frequency). After this loop, we can simply return the least frequent element and its frequency as well.
• In the first loop, when i = 0, we insert the key-value pair of 17: 1 to dictMap. dictMap = {17: 1}
• When i = 1, we insert the key-value pair of 10: 1 to dictMap. dictMap = {17: 1, 10: 1}
• When i = 2, we insert the key-value pair of 11: 1 to dictMap. dictMap = {17: 1, 10: 1, 11: 1}
• When i = 3, since arr[i] (11) is present in dictMap, we increment the value by 1 in dictMap where the key is arr[i]. dictMap = {17: 1, 10: 1, 11: 2}
• When i = 4, since arr[i] (10) is present in dictMap, we increment the value by 1 in dictMap where the key is arr[i]. dictMap = {17: 1, 10: 2, 11: 2}
• The key (element) with the minimum frequency is found to be 17 with a frequency of 1. We can return these values now as the final result has been computed.

### (b) Implementation

Implementation of our Optimized algorithm with hash mapping in Python:

    def findLeastFreqElementMapping(arr):
        dictMap = {}
        for i in range(len(arr)):
            if (arr[i] in dictMap.keys()):
                dictMap[arr[i]] += 1
            else:
                dictMap[arr[i]] = 1
        leastElementCtr = min(dictMap.values())
        for i in dictMap:
            if dictMap[i] == leastElementCtr:
                leastElement = i
                break
        return leastElement, leastElementCtr

    if __name__ == "__main__":
        arr = [17, 10, 11, 11, 10]
        leastFreqElementMapping, ctr3 = findLeastFreqElementMapping(arr)
        print("====OPTIMIZED ALGORITHM WITH MAPPING====")
        print("Given array:", arr)
        print("The least frequent element in the array is:", leastFreqElementMapping)
        print("Count of", leastFreqElementMapping, ":", ctr3)

### (c) Output

    ====OPTIMIZED ALGORITHM WITH MAPPING====
    Given array: [17, 10, 11, 11, 10]
    The least frequent element in the array is: 17
    Count of 17 : 1

### (d) Complexity

Time complexity: O(N)
Space complexity: O(N) (the dictionary can hold up to N distinct keys)

## Summary

As a summary, our three methods for finding the least frequent element are:

1. Naive algorithm: O(N²) time, O(1) space.
2. Optimized algorithm with array sorting: O(N log N) time, O(N) space.
3. Optimized algorithm with mapping: O(N) time, O(N) space.

With this article at OpenGenus, you must have the complete idea of different efficient approaches to find the least frequent element in an array.
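One more option the article does not cover: Python's standard library already provides a frequency map via collections.Counter, which gives the same O(N)-time, O(N)-space behavior in a few lines (ties among equally rare elements are broken arbitrarily, as allowed by the problem statement):

```python
from collections import Counter

def least_frequent(arr):
    # Counter.most_common() orders entries by descending frequency,
    # so the last entry is (one of) the least frequent elements.
    element, count = Counter(arr).most_common()[-1]
    return element, count

print(least_frequent([17, 10, 11, 11, 10]))  # (17, 1)
```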
2021-08-03 00:26:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3463038206100464, "perplexity": 1853.4192261804383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154408.7/warc/CC-MAIN-20210802234539-20210803024539-00221.warc.gz"}
https://socratic.org/questions/1-if-the-ph-of-a-solution-is-the-solution-is-basic-a-2-b-5-c-7-d-10-can-someone-
# 1. If the pH of a solution is ….. the solution is basic. a. 2 b. 5 c. 7 d. 10 Can someone help me?

Aug 28, 2017

Water self-ionizes: $H_2O \rightleftharpoons H^+ + OH^-$, and $pH = -\log\left[H^+\right]$. The neutral point, where $\left[H^+\right] = \left[OH^-\right]$, is pH 7. Anything lower means that there are more $H^+$ than $OH^-$ ions, so the solution is acidic. Anything higher means that there are more $OH^-$ than $H^+$ ions, so the solution is basic. Of the options given, only pH 10 is above 7, so the answer is (d).
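To make the choice concrete, here is a short worked check (added for illustration; it uses the water ion product $[H^+][OH^-] = 10^{-14}$ at 25 °C, a standard value not stated in the original answer):

$$pH = 10 \;\Rightarrow\; [H^+] = 10^{-10}\ \mathrm{M}, \qquad [OH^-] = \frac{10^{-14}}{10^{-10}} = 10^{-4}\ \mathrm{M},$$

so $[OH^-] \gg [H^+]$ and the solution is indeed basic.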
2019-11-19 10:01:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8669497966766357, "perplexity": 1005.6732002729226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670135.29/warc/CC-MAIN-20191119093744-20191119121744-00352.warc.gz"}
https://www.pharmaland.com.pl/anno-sunken-pmxxje/9eb3e9-is-potassium-permanganate-soluble-in-kerosene
Potassium permanganate is an inorganic chemical compound with the formula KMnO4. It is a salt consisting of K+ and MnO4− ions, with manganese in the +7 oxidation state. It is a purplish-black crystalline solid -- dark purple crystals with a blue metallic sheen, a sweetish, astringent taste, and no odour -- that dissolves in water to give intensely pink or purple solutions (purple in concentrated solution, pink in diluted solution), the evaporation of which leaves prismatic purplish-black glistening crystals. The salt was formerly known as "permanganate of potash" or "Condy's crystals". (The fertilizer industry, by contrast, uses "potash" for any potassium salt.)

Solubility. Being an ionic salt, potassium permanganate dissolves only in polar solvents: it is soluble in water, acetone, pyridine, methanol and acetic acid, slightly soluble in alcohols, and insoluble in hydrocarbons and ethers. That answers the question in the title: kerosene is a nonpolar hydrocarbon mixture, so potassium permanganate is not soluble in kerosene. (For anyone attempting to oxidize a substance with potassium permanganate in an organic solvent, a phase-transfer catalyst such as tetra-n-butylammonium bromide can be employed to carry the permanganate into the organic phase.) Do not confuse the permanganate with potassium metal, the silvery-white element with symbol K (from Neo-Latin kalium) and atomic number 19, first isolated from potash, the ashes of plants; the metal is soft enough to be cut with a knife with little force and reacts rapidly with atmospheric oxygen, which is why potassium metal is usually stored in kerosene.

Hazards. Potassium permanganate is a very powerful oxidizing agent, particularly in acidic surroundings, and a dangerous fire and explosion risk in contact with organic materials such as rubber, leather and wool; when involved in a fire it may cause an explosion. It may form explosive mixtures with combustible material, powdered metals or ammonium compounds, and such mixtures are sensitive to friction and are liable to ignite. It reacts fiercely with cyanides when heated or by friction, and causes incandescence with aluminum carbide [Mellor 5:872, 1946-47]. Addition of potassium permanganate to dimethylformamide to give a 20% solution led to an explosion after 5 min, and a potassium permanganate + ammonium nitrate explosive caused an explosion after 7 hrs. Solutions can irritate the skin, concentrated solutions are caustic and can burn the skin, and repeated use may cause burns; inhalation causes respiratory tract irritation, and aspiration may lead to pulmonary edema. Manganese is a central nervous system poison as well as a hepatotoxin, and in high doses it may increase anemia by interfering with iron absorption.

Production. Potassium permanganate is produced industrially from manganese dioxide, which also occurs as the mineral pyrolusite. The dioxide is fused with potassium hydroxide and heated in air; this process gives potassium manganate, which upon electrolytic oxidation in alkaline media, or by boiling the manganate solution in the presence of carbon dioxide until all the green colour is discharged, gives potassium permanganate. In 2000, worldwide production was estimated at 30,000 tonnes.

Uses. Oxidizer, disinfectant, deodorizer, bleach, dye, tanning, radioactive decontamination of skin, reagent in analytical chemistry, medicine (antiseptic), manufacture of organic chemicals, and air and water purification. It is used to treat several kinds of skin conditions, including bacterial and fungal infections; when preparing solutions, make sure that the crystals or tablets are fully dissolved in water before using. It is also used to stain woods a pleasant brown: it reacts with the wood and leaves a brown residue, and if the colour is too dark it can be lightened by washing the wood with a strong solution of hypo. Using potassium permanganate in "neutralizing" ingested nicotine, physostigmine, quinine, and strychnine is potentially dangerous.

Why it dissolves in water: if a solid sample of potassium permanganate is placed in water, the water molecules interact with the potassium cations and the permanganate anions held in the crystal to break the ionic bonds that hold the crystal together, and the resulting ions give the solution its corresponding colour. Solubility experiments of this kind, comparing solvents such as water, ethanol and chloroform, are a standard way to investigate the relation between solubility and intermolecular forces.
2021-10-20 19:43:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3490825891494751, "perplexity": 11264.919878918117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00312.warc.gz"}
http://mathoverflow.net/revisions/8685/list
3 Acknowledged error.

Hey Joel, long time etc. It looks to me like blowing down your knotted $S^2$ will only produce a homology 4-sphere. And one could presumably produce examples by taking some known 2-knot in $S^4$ and connect-summing it with the line in $\mathbb{CP}^2$, distinguishing the resulting 2-knots in $\mathbb{CP}^2$ from the line via $\pi_1$ of their complements. [EDIT: I fell into Joel's heffalump trap. Still, at least there's company down here...] You could rephrase the question (with a bit of help from Gromov) as asking whether a 2-knot in $\mathbb{CP}^2$ with self-intersection $1$ and simply connected complement is isotopic to a symplectic sphere. You could invoke Taubes too, and see that, to produce a diffeo with the line, it's enough to extend a symplectic form on the image of $S^2$ to one on $\mathbb{CP}^2$. Well, the complement of a neighbourhood of $S^2$ is then a homotopy 4-ball, bounding $S^3$ with its usual contact structure, and the goal is to build a symplectic form which is a convex filling of the contact boundary... Yep, that's probably an open problem.

(Revisions 1 and 2 repeated the same text; revision 2 corrected "concave" to "convex" and revision 3 added the bracketed EDIT.)
2013-05-19 17:39:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870279788970947, "perplexity": 399.54963190854505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697917013/warc/CC-MAIN-20130516095157-00038-ip-10-60-113-184.ec2.internal.warc.gz"}
https://elliptigon.com/gauss-law-notes/
## Electric fields and flux

### Faraday's idea with field lines

• The number of electric field lines leaving a charge $q$ is $\frac{q}{\epsilon_0}$
• The quantity $\frac{q}{\epsilon_0}$ is called the electric flux (the number of lines exiting the surface)
• (Some examples were shown in lecture.)
• The number of lines getting out of a closed surface (the flux) is denoted by $\Phi$
• $\Phi_{closed\ surface} = \frac{q_{enclosed}}{\epsilon_0}$
• In general, electric field lines like to stay as far away from each other as possible

### Electric fields

• For a uniform field, $\vec{E} = \frac{\Phi}{A}$, so the electric field can be thought of as field lines per unit area
• The area vector $\vec{a}$ is a vector that is perpendicular to the surface and whose magnitude is equal to the surface area
• Then the electric flux $\Phi$ can be written as $\Phi = \vec{E} \cdot \vec{a}$
• In the infinitesimal case, $$\Phi_S = \int_{S}{\vec{E} \cdot d\vec{a}}$$
• The electric field is proportional to the density of field lines

### Derivations from Gauss' law

• For a spherically symmetric charge distribution: $$\text{field lines} = \frac{q}{\epsilon_0}, \qquad \text{area} = 4\pi r^2, \qquad \text{field} = \text{lines}/\text{area} \implies |\vec{E}| = \frac{q}{4\pi\epsilon_0 r^2}$$
• The same logic can be applied to thin, infinitely long lines of charge
• In general, the electric field for any symmetric distribution can be calculated using the charge densities ($\lambda$ for the linear charge density, and $\sigma$ for the surface charge density)

### Solid angle

The solid angle $\Omega$ is given by: $$\Omega = \int \frac{dA}{r^2}$$ (with $dA$ the area element on the surface)

### Usage of Gauss' law

• Gauss' law is only useful when there is some form of symmetry (spherical, cylindrical, etc.)

#### Spherical symmetry

• For a uniformly charged sphere (charge density $\rho$), at a point inside the charge: $$|\vec{E}| \cdot 4\pi r^2 = \frac{Q_{in}}{\epsilon_0} \implies \vec{E} = \frac{\rho \vec{r}}{3\epsilon_0}$$
• For the above equation, the difficulty lies in calculating $Q_{in}$
• It is applicable even when the charge distribution is non-uniform, in which case an integral over the radius is usually involved (with the volumetric charge density given as a function of the distance/radius)

#### Cylindrical symmetry

• For a uniformly charged cylinder, taking a coaxial Gaussian cylinder of radius $r$ and height $h$: $$|\vec{E}| \cdot 2\pi r h = \frac{Q_{in}}{\epsilon_0} \implies \vec{E} = \frac{\rho \vec{r}}{2\epsilon_0}$$
• The denominator is 2 instead of 3 (as in the spherical case) because the enclosed charge grows like $r^2$ (volume $\pi r^2 h$) while the Gaussian surface area grows like $r$, whereas in the spherical case the charge grows like $r^3$ and the area like $r^2$
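To close the gap between the two boxed results above, here is the worked substitution (added for completeness; it assumes a uniform charge density $\rho$ and a field point inside the charge distribution):

$$Q_{in}^{sphere} = \rho \cdot \tfrac{4}{3}\pi r^3 \implies |\vec{E}| = \frac{\rho \cdot \tfrac{4}{3}\pi r^3}{4\pi r^2 \epsilon_0} = \frac{\rho r}{3\epsilon_0}, \qquad Q_{in}^{cyl} = \rho \cdot \pi r^2 h \implies |\vec{E}| = \frac{\rho \cdot \pi r^2 h}{2\pi r h\, \epsilon_0} = \frac{\rho r}{2\epsilon_0}.$$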
2019-05-20 05:25:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9644558429718018, "perplexity": 647.1187939409059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255562.23/warc/CC-MAIN-20190520041753-20190520063753-00018.warc.gz"}
http://blog.gmane.org/gmane.user-groups.linux.london.gllug/month=20050601
1 Jun 2005 01:04

### Re: Graduates paying for IT training before employment

Howzit? I phoned them, but they said "You had to have been in the UK for two years or more" when I explained that I was from South Africa and had been working in IT for a few years. Meskien was hulle bang? (Maybe they were scared?) <g>

Cheers
Liam

> > I realise this thread is the best part of two years old - but does anyone
> > remember the outcome? I am meeting with ICS tomorrow.
>
> For a job interview? Leave your chequebook, debit card, credit card and
> so forth at home.
>
> I was going to add a smiley, but anyone who expects prospective staff to
> pay for training on the company's internal training programme is well
> over the line, IMNSHO.
>
> Ask them for the name of a reputable university that offers comparable
> training as part of a degree course, then go there instead.
>
> cheers, rich.

1 Jun 2005 01:09

### Re: Web hosting

Mike Leigh wrote:
> Martyn Drake wrote:
> > 1&1 are THE most outrageously inefficient web hosting company
> > (apart from Fasthosts) that I've ever come across. Bytemark,
> > PI, or anybody else for that matter (even Hosteurope!), would
> > make for a better hosting provider than 1&1.
>
> I honestly can't complain about the support or reliability I have received
> from fasthosts. I have a dedicated server from them and so far the
> performance has been really good (no one uses it though) and I have not
> suffered any downtime that I am unaware of. My server runs FC3 and has most
> […] can pretty much do what you want with it. The matrix control panel is a
> little strange at first and their FAQs / tutorials are non-existent. The
> only time I have dealt with their support team so far is to do with some DNS
> records that were not configured correctly, which was not their fault. They
> still helped me resolve my issue with my domain provider. Now from what I
> can tell that seems to be pretty good support.

\begin{rant}
When I had a re-seller account with fasthosts they were the biggest bunch of monkeys ever; the support people would often say things like "you can't get on our ftp server because you aren't using internet explorer"…

1 Jun 2005 08:45

### spamassassin v bogofilter

i've been using/testing/playing with spamassassin for a while and the only thing that bothers me about it is that it is damned slow and nails processing time. to my mind it's a great solution apart from this drain on my limited resources. so, i googled for alternatives and have turned up bogofilter. i have been unable to find a decent comparison between the two. has anyone used bogofilter? is the performance of bogofilter a trade off for its effectiveness, for example? is it as easy to maintain/train as spamassassin?

thanks,
craig

1 Jun 2005 08:54

### Re: Graduates paying for IT training before employment

On Tue, May 31, 2005 at 09:00:30PM -0000, Andrew McGregor wrote:
> Hi,
>
> I realise this thread is the best part of two years old - but does anyone
> remember the outcome? I am meeting with ICS tomorrow.
> Thanks,
>
> Andy
>
> http://lists.gllug.org.uk/pipermail/gllug/2003-August/038041.html

Something else that I remember a few years ago were companies that put employees through training courses as part of their employment, but then demanded that the employee repay the cost of any training if they left within 3 years. This meant that to leave the company someone had to pay several thousands for courses - some of which they never wanted to go on in the first place.

Does this sort of thing still happen?

--
Alain Williams
Parliament Hill Computers Ltd.
Linux Consultant - Mail systems, Web sites, Networking, Programmer, IT Lecturer.
+44 (0) 787 668 0256
#include <std_disclaimer.h>

1 Jun 2005 09:09

### Re: phone memory stick

On Tue, May 31, 2005 at 08:23:37PM +0100, Christopher Hunter wrote:
> On Tuesday 31 May 2005 09:44, Alain Williams wrote:
>
> > I left the shop after telling the manager that I was not going to buy there
> > since his salesmen lied. Phones-4-u I think it was.
>
> You shouldn't be so hard on clueless sales-droids! They will tell you
> ANYTHING to make a sale, as their weekly pay is directly proportional to
> their personal turnover.

That is exactly WHY we should be hard on these people. Lying to get a sale is theft. Why should we tolerate it? It isn't OK just because it happens a lot - think car, insurance, pension, ... salesmen. Another way of putting it is that it is a con trick.

Maybe the moron did not know the answer - but he knew that he did not know and should have said so. There is no excuse. Sorry: dishonesty/lies is something that I really hate.

> It gives rise to the stupidities you see in PC World - "You can't return that
> faulty CD drive as you probably virussed it" was one I heard recently.
>
> Chris

1 Jun 2005 10:22

### RE: Web hosting

Ian Norton wrote:
> When I had a re-seller account with fasthosts they were the
> biggest bunch of monkeys ever; the support people would often
> say things like "you can't get on our ftp server because you
> aren't using internet explorer"...

This I have heard from other people. I have not yet met any of those support people during my conversations/emails with them.

> While experimenting with ASP (spit) I noticed their ASP […]
> where you could use their example scripts to […] A
> friendly email pointing out that the server was vulnerable
> resulted in a rather nasty email threatening termination of
> our account*. They even went as far as saying that it was not […]

Hmm, they could have responded differently on that. Like "thanks for that info, we will correct it" or something more appropriate :)

Well I have not had a reseller account, and before I chose fasthosts I did my research, but the reviews were over 2 years old and there was not very much recently. So I decided to take the plunge and pay for a year's dedicated hosting. So far their support/uptime has been/is better than my existing host. I am still moving domains/dns entries to fasthosts and from what I have seen first hand I can honestly praise fasthosts. This is from a dedicated server point of view as I have not dealt with them for reseller accounts or shared hosting.

Mike

1 Jun 2005 10:22

### Re: spamassassin v bogofilter

On Wed, 2005-06-01 at 00:52 +0100, Craig Millar wrote:
> i've been using/testing/playing with spamassassin for a while and the only
> thing that bothers me about it is that it is damned slow and nails processing
> time. to my mind it's a great solution apart from this drain on my limited
> resources. so, i googled for alternatives and have turned up bogofilter.
> i have been unable to find a decent comparison between the two. is the
> performance of bogofilter a trade off for its effectiveness, for example?

I use both. Each catches things that the other misses.

Are you dealing with a very large volume of e-mail? On my system e-mail takes a while to process, but I find the delay in reception perfectly acceptable. If my system weren't filtering e-mails it would be staring into space and twiddling its thumbs. Is the load produced by SA preventing your system from doing other constructive work?

Are you using spamd? That can help speed things up.

John

1 Jun 2005 10:56

### Re: spamassassin v bogofilter

On Wed, Jun 01, 2005 at 07:45:12AM +0100, Craig Millar wrote:
> i've been using/testing/playing with spamassassin for a while and the only
> thing that bothers me about it is that it is damned slow and nails processing
> time. to my mind it's a great solution apart from this drain on my limited
> resources. so, i googled for alternatives and have turned up bogofilter.
> i have been unable to find a decent comparison between the two. has anyone
> used bogofilter? is the performance of bogofilter a trade off for its
> effectiveness, for example? is it as easy to maintain/train as spamassassin?

I don't have a quantitative comparison between the two but I have used both. I found spamassassin resource intensive so tried out bogofilter, which seemed less aggressive and gets the job done well. I haven't had any false positives and get a couple of items marked "unsure" per day. Overall I'm happy with bogofilter (run via procmail) and haven't had to think about it since I installed it ages back.

ben.

--
Registered Linux user number 339435

1 Jun 2005 14:33

### Re: Graduates paying for IT training before employment

On 6/1/05, Alain Williams <addw <at> phcomp.co.uk> wrote:
> Something else that I remember a few years ago were companies that put employees
> through training courses as part of their employment, but then demanded that
> the employee repay the cost of any training if they left within 3 years.
> This meant that to leave the company someone had to pay several thousands for
> courses - some of which they never wanted to go on in the first place.
>
> Does this sort of thing still happen?

Dunno. We do a lot of training. Until recently this was all paid for up-front by the company. The accountants have spotted this, and now require employees to pay for their training and then expense it. The idea, presumably, being that people will either not bother at all, or forget to claim.

I think it's also designed to ensure a higher 'pass' rate - if the employee has to pay for the exam themselves, they'd better be sure they'll pass first time. I'm not especially happy about it.... some things (eg Oracle exams)
2013-05-20 16:13:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3492262363433838, "perplexity": 5430.574551712529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699068791/warc/CC-MAIN-20130516101108-00087-ip-10-60-113-184.ec2.internal.warc.gz"}
https://itectec.com/superuser/windows-unable-to-sync-ipod-touch-with-the-pc/
# Windows – Unable to sync iPod Touch with the PC

Tags: 64-bit, ipod, itunes, sync, windows 7

I'm trying to sync a first gen iPod Touch to my PC running Windows 7 64 bit. The problem is that whenever I connect the iPod, iTunes completely freezes (if I start iTunes after connecting the iPod it will simply hang until it's physically disconnected from the PC). I reinstalled iTunes thinking that it had been corrupted, but without any luck. I've had this problem with all the latest versions of iTunes.

I've also tried using MediaMonkey and DoubleTwist. None of these apps see the iPod as being connected; DoubleTwist also freezes, just like iTunes.

The really strange thing is that I was able to sync the iPod with this PC a while back, but I now seem to have lost that ability. I don't know what changed. Windows detects the device every time it's plugged in (I can see it in Device Manager and I can browse all photos on the iPod as if it were a camera). Also, I can sync it to iTunes on Mac OS X without any major problems.

Edit: after getting similar behavior in Windows XP (I got a 0xE8000065 error message), it seemed less and less like a driver issue. So I did the following:

• a second restore from a Mac (without restoring the backup that was made in the process!),
• changed the USB port (again), <– this is mostly out of paranoia, but it doesn't hurt to try it (it might be a problem, you can never know)
• reinstalled iTunes (again), making sure that the drivers were correctly installed. Look in Device Manager; there's an entry in USB devices called Apple Mobile Device USB Driver (the physical file is called either USBAAPL.sys or USBAAPL64.sys, depending on your OS). This entry might not appear if the iPod/iPhone is unplugged.
• ???
• Profit!

It worked! I have no idea why it worked this second time (I had done all these steps before, but without any luck; it probably has to do with the fact that I did the restore, but I also had iTunes copy the backup onto the iPod after finishing), but it did. iTunes recognized the iPod as an iPhone (?!), but it correctly configured it as an iPod Touch. Now everything seems to be working.

I've marked Matrix Mole's answer as accepted since the restore probably solved my problem.
2021-10-17 07:26:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4615243375301361, "perplexity": 2416.030524863684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00580.warc.gz"}
https://www.physicsforums.com/threads/analysis-of-spatial-discretization-of-a-pde.514025/
# Analysis of spatial discretization of a PDE

1. Jul 14, 2011

### zhidayat

Hi everybody, I hope I am asking in the right forum. Let me describe the problem as follows: I have a 1D heat equation. To solve it, I use the finite-difference method to discretize the PDE and obtain a set of N ODEs. A larger N gives a better solution, i.e., a solution closer to that of the original PDE. I can also further discretize in time so that I have a set of difference equations and find the temperature distribution. My question: are there tools that can be used to analyze the influence of the (spatial) discretization on how close the result is to the analytic solution?

2. Jul 14, 2011

### hunt_mat

Not too sure what you're asking, but there are tests to tell you the stability of your finite difference scheme. Suppose that dx and dt are the increments in x and t; then there is a rule involving dx and dt which gives you a restriction on what values of dx and dt you are able to take. These are in general called stability criteria, I believe.

3. Jul 14, 2011

### pmsrw3

I think I understand what zhidayat is asking. He's solved his problem numerically by a finite difference method, and he also has an analytic solution. He wants to analyze how good a job the FDM does. Of course, if you have an analytic solution, the FDM solution is kind of pointless, but what he proposes nevertheless makes sense as a way of seeing how well the FDM works. Unfortunately, I don't know of any tools, in the sense of software packages or special techniques. I expect they exist, since FDM solution of DEs is an important and difficult topic, but I just don't know about them. But the particular problem he's solving is a pretty easy one. I would just plot the solutions against each other and plot the difference between them, then redo the numerical solution with the interval decreased by a factor of two. That's not too sophisticated but will give you a fair idea of how well it works and what pathologies it might develop.

4. Jul 14, 2011

### hunt_mat

How about he chooses a particular time and examines the $\ell^{2}$ norm of the differences?

5. Jul 14, 2011

### pmsrw3

Yes, of course. But that's just one number. I think you want more fine-grained information.

6. Jul 14, 2011

### hunt_mat

It's one number for a given time; do this for all the time steps and you will end up with a function that gives you a general feeling for the convergence.

7. Jul 14, 2011

### pmsrw3

Well, you did say "How about he chooses a particular time..." This is why I suggested plotting the solutions.

8. Jul 14, 2011

### zhidayat

Thanks for the ideas. I am thinking of something similar to the Shannon-Nyquist theorem, which gives a condition on the minimum sampling period with respect to information, not stability. But perhaps such a method does not exist; I do not know.

9. Jul 15, 2011

### pmsrw3

The Nyquist criterion applies -- if you want spatial frequencies up to B (for bandwidth), you need to sample at 2B -- i.e. your grid points should be $\leq 1/(2B)$ apart. But this is simple for the heat equation. Diffusion makes high frequencies go away very rapidly. The decay rate is proportional to the square of the frequency. In fact, if you're concerned about the frequency spectrum, you should solve the problem in frequency space directly. Diffusion is equivalent to Gaussian smoothing. This is actually the most efficient numerical method to solve the heat equation, too.

10. Jul 15, 2011

### zhidayat

Interesting..., I get your point.
Do you know references that have presented/discussed what you have told me above? Could you tell me please?

11. Jul 15, 2011

### pmsrw3

Here are my notes from some lectures I gave on this topic. Not sure how much sense they'll make without the talk, but you can take a look at them. If you have the fortitude to wade through them, I'll try to answer any questions.

#### Attached Files:

• … (1.8 MB, 122 views)
• notes_v4.pdf (1.9 MB, 119 views)

12. Jul 15, 2011

### zhidayat

Thank you pmsrw3. Browsing the notes, they have something to do with Fourier series/transforms, I guess. I will read them more carefully later.

13. Jul 15, 2011

### pmsrw3

They are about using the heat equation (which, for reasons of context, I refer to as the diffusion equation -- but it's the same) as a way to introduce Fourier transforms. Anyway, they demonstrate that the heat equation is equivalent to Gaussian smoothing.
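To make the thread's suggestions concrete, here is a small illustrative sketch (mine, not from the posters; the diffusivity, grid sizes and final time are arbitrary choices). It solves the 1D heat equation with the explicit finite-difference scheme, keeps dt inside the stability limit hunt_mat mentions, and reports the discrete $\ell^2$ norm of the difference from the analytic solution at a fixed time:

import numpy as np

def heat_fdm_error(N, alpha=1.0, T=0.1):
    """Explicit FDM for u_t = alpha * u_xx on [0, 1] with u(0, t) = u(1, t) = 0
    and u(x, 0) = sin(pi x); analytic solution: exp(-alpha*pi^2*t) * sin(pi x)."""
    dx = 1.0 / N
    dt = 0.4 * dx**2 / alpha      # inside the stability limit dt <= dx^2 / (2*alpha)
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.sin(np.pi * x)
    steps = int(round(T / dt))
    for _ in range(steps):
        # central second difference in space, forward Euler in time
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    exact = np.exp(-alpha * np.pi**2 * steps * dt) * np.sin(np.pi * x)
    return np.sqrt(dx * np.sum((u - exact)**2))   # discrete l2 norm of the error

for N in (10, 20, 40, 80):
    print(N, heat_fdm_error(N))

Halving dx (doubling N, with dt tied to dx^2) should cut the error by about a factor of four, which is exactly the kind of refinement check pmsrw3 describes in post 3.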
2017-11-21 01:19:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7386654615402222, "perplexity": 638.0717499469729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806309.83/warc/CC-MAIN-20171121002016-20171121022016-00428.warc.gz"}
https://www.physicsoverflow.org/8365/public-block-log
# Public Block Log

+ 3 like - 0 dislike
2599 views

Blocking on PhysicsOverflow is done with discretion -- a block should be one you would not consider lifting in the future. The following reasons are appropriate for blocking users on PhysicsOverflow:

• Spammers (Block and delete on first sight)
• Users who request a block (Block on first sight of request, if the request is less than 3 days old)
• Users who request deletion (Block and delete on first sight of a < 3 days old request, only with DISCRETION! We are not obliged to delete all users who request deletion.)
• Trolls without any content contribution (Block after two warnings)
• Gibberish posters (Block and delete after two warnings)
• Off-topic* posters (Block after four warnings)

The first three sorts of blocks should simply be appended to the generic moderation log, while the other three require user-specific moderation records (see example).

*off-topic: Posts off-topic to both the site and to the thread are the ones considered "off-topic" for this block category. Posts that are only off-topic to the thread, but not to the site, will be moved to chat or deleted if worthless.

Furthermore, please do not block spam IPs. Only users. Most client IP addresses are dynamic, which means you're likely to end up blocking a whole bunch of people in the future by blocking an IP address. If an IP spam attack becomes too difficult to manage, temporarily block the IP and inform polarkernel.

asked Mar 21, 2014
edited Jan 13, 2018

+ 2 like - 0 dislike

(Generic moderation log)

USER BLOCK REQUESTS

None so far

USER DELETION REQUESTS

• Kyle Kanos (deleted by: Dilaton) -- documentation: 1 (archive)
• Slivvz (deleted by dimension10) -- documentation: 1 (archive), 2 (archive), 3 (archive)
• danshawen (deleted by dimension10) -- documentation: 1 (docs deleted per user request)
• Asaf Karagila (deleted by dimension10) -- documentation: 1 (archive)
• Andy Putman (deleted by dimension10) -- documentation: 1 (archive)
• David Roberts (deleted by dimension10) -- documentation: 1 (archive)
• Yemen Choi (deleted by dimension10) -- documentation: 1 (archive)
• Vladimir Kalitvianski (deleted by dimension10) -- documentation: 1 2 (reversed)
• Kevin Tah (deleted by Dilaton) -- documentation: mail to admin@PO

SPAMMERS

• policeman -- description: copy-pasted garbage from wikipedia, threatened to "kick out" users
• jordansshoescheap -- description: need I explain?
• futtymage -- description: multiple spam
• softomaniac -- description: completely off-topic spam
• head9ant -- description: confirmed his email and posted in Closed Questions
• phjklf -- description: posted off-topic spam in Swedish (or something)
• ClayAnderson -- description: off-topic essay writing spam
• e449247 -- description: twice; post rate of 1 post/minute
• d499247 -- description: same IP as above, 199.119.140.197
• benshen -- description: same IP as above
• arun
• sasi
• skokila
• mukesh
• weijing3333
• fedortyutin
• robertarichet
• paulstastny
• dress33
• pppb

IPS THAT WERE BLOCKED AT ONE POINT

Do not block IP addresses any more -- if they get too difficult to handle, temporarily block them and tell polarkernel.
• 93.182.156.13 -- description: manual spammer who confirmed email, bypassed captchas
• 93.182.154.36 -- description: same guy from new IP
• 37.29.65.87 -- description: Johnd468
• 111.93.250.130 -- description: spam using different usernames
• 14.141.126.179 -- description: policeman
• 175.101.16.82 -- description: seemingly friendly but off-topic spam
• 41.220.28.51 -- description: same as above
• 147.52.9.2 -- description: same as above
• 85.171.55.20 -- description: same as above
• 59.115.9.240
• 202.101.147.133
• 111.17.27.136
• 123.193.128.23
• 114.39.37.41
• 1.54.108.71
• 101.22.138.21
• 222.126.146.10
• 71.54.108.71
• 116.226.3.136
• 164.100.173.255
• 223.26.98.152
• 42.118.228.12
• 60.29.59.58
• 180.166.7.134
• 221.2.101.210
• 202.83.62.100
• 186.248.67.34
• 5.39.219.26
• 36.72.184.4
• 192.69.133.40
• 64.120.34.161
• 23.110.169.234
• 66.117.2.34
• 199.168.96.38
• 111.11.228.10
• 141.20.100.47
• 37.187.244.67
• 23.226.227.97
• 103.16.28.86

answered Apr 6, 2014 by (1,985 points)
edited Jan 13, 2018

+ 1 like - 0 dislike

MODERATION RECORD: 216.170.71.90

• Warning 1: Comment 38925
• Warning 2: Comment 38992
• Warning 3: Comment 38995
• Warning 4: Comment 38997
• Block: Comment 39100 -- reason: posting off-topic non-questions after four warnings

answered May 20, 2017 by (15,757 points)
edited Jan 13, 2018

+ 1 like - 0 dislike

MODERATION RECORD: mpc755

• Warning 1: Comment 40707
• Warning 2: Comment 40726
• Warning 3: Comment 40744
• Warning 4: Comment 40752
• Block: Comment 40769 -- reason: posting off-topic content after four warnings

answered Jan 13, 2018 by (1,985 points)
2022-12-03 19:14:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2369013875722885, "perplexity": 13555.560421085029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710936.10/warc/CC-MAIN-20221203175958-20221203205958-00158.warc.gz"}
https://undergroundmathematics.org/calculus-meets-functions/keep-your-distance/solution
### Calculus meets Functions

Problem requiring decisions

## Solution

The Official Highway Code describes typical stopping distances for cars (section 126 of the 2015 edition). They are given as a table showing distances for different speeds of travel. Each stopping distance is made up of two parts – a thinking distance and a braking distance. They are summarised below.

| Speed, $v$ | Thinking distance | Braking distance | Stopping distance, $d_s$ |
|---|---|---|---|
| $\quantity{20}{mph}$ | $\quantity{6}{m}$ | $\quantity{6}{m}$ | $\quantity{12}{m}$ |
| $\quantity{30}{mph}$ | $\quantity{9}{m}$ | $\quantity{14}{m}$ | $\quantity{23}{m}$ |
| $\quantity{40}{mph}$ | $\quantity{12}{m}$ | $\quantity{24}{m}$ | $\quantity{36}{m}$ |
| $\quantity{50}{mph}$ | $\quantity{15}{m}$ | $\quantity{38}{m}$ | $\quantity{53}{m}$ |
| $\quantity{60}{mph}$ | $\quantity{18}{m}$ | $\quantity{55}{m}$ | $\quantity{73}{m}$ |
| $\quantity{70}{mph}$ | $\quantity{21}{m}$ | $\quantity{75}{m}$ | $\quantity{96}{m}$ |

One thing to notice and bear in mind is that we have some peculiarly mixed up units in the table. Speeds are quoted in $\mathrm{mph}$ or miles per hour, whereas distances are quoted in metres. Where it became necessary, we chose to use a conversion rate of $\quantity{1}{mile}\approx\quantity{1600}{m}$.

Look at the data in the table. What relationships do you see between the distances and how do they vary with speed? Write an equation expressing the stopping distance, $d_s$, in terms of speed, $v$.

Firstly, note that the stopping distance is equal to the sum of the thinking and braking distances, as suggested in the introductory text. The thinking distance goes up by $\quantity{3}{m}$ every time the speed goes up by $\quantity{10}{mph}$, so it's reasonable to suggest this is a linear relationship such as $d_{think}=\frac{3}{10}v$, where the speed, $v$, is in $\mathrm{mph}$ and the distance is in $\mathrm{m}$. A linear relationship implies that the time taken to think is a constant, independent of speed, which seems reasonable. What is the actual thinking time used in this formula?

When the speed doubles, the braking distance increases by a factor of $4$, although the numbers are not exact. We presume this is due to rounding as the data was put into the table. So the braking distance appears to be proportional to the square of the speed. This is based on a model of the work done by the brakes.

• The work done in slowing the vehicle is "$\mathrm{force}\times\mathrm{distance}$".
• If the brakes apply a constant force, the work done is proportional to the distance moved.
• The work required to stop the car is equal to its initial kinetic energy which depends on $v^2$.
• So the braking distance is proportional to the square of the speed.

Can we write an equation to express the relationship? We want to match the data with an equation of the form $d_{brake}=k v^2$. We could substitute in pairs of values from the table and work out the value of $k$ for each one – we find they vary a bit because the numbers have been rounded. Or we could work out $v^2$ for each row in the table and find $k$ as the gradient of a line of best fit. Graphing software such as Desmos will enable us to get a pretty good estimate. We found quite a good fit for $d_{brake}=\frac{3}{200}v^2$, which would mean the stopping distance,
$$d_s = \frac{3}{10}v + \frac{3}{200}v^2. \label{eq:stopping}$$

Plot a graph of $d_s$ against $v$, for speeds between zero and $\quantity{70}{mph}$. You could do this either from the equation or straight from the table of data.
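Before the graphs, a quick answer to the embedded question about the thinking time (a small worked conversion added here for completeness, using the same $\quantity{1}{mile}\approx\quantity{1600}{m}$ rate as above):

$$t_{think} = \frac{d_{think}}{v} = \frac{\tfrac{3}{10}v \ \mathrm{m}}{\tfrac{1600}{3600}v\ \mathrm{m\,s^{-1}}} = \frac{3}{10}\times\frac{9}{4}\ \mathrm{s} = \quantity{0.675}{s},$$

independent of $v$, as a constant reaction time should be.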
This graph was drawn using the equation and it fits the data in the table reasonably well.

How many average car lengths is $\quantity{30}{m}$ or $\quantity{96}{m}$?

The Highway Code goes on to say that when driving you should leave a gap of at least the stopping distance between you and the vehicle in front. It also says that in faster-moving traffic, you should leave a "two-second gap". In other words, the front of your car should not reach a fixed point on the road until at least two seconds after the rear of the previous vehicle passed the same point.

Write down an equation for this two-second distance, $d_t$, in terms of $v$ and add it to your graph of the stopping distance.

The length of the two-second gap will be directly proportional to the speed (if I drive twice as fast, I cover twice the distance in the same time). After sorting out the mixed-up units, we found

$$d_t = \frac{8}{9}v. \label{eq:twosec}$$

Up to this point the units could be ignored, but because we are given the time in seconds we can't avoid doing some conversion here. We did it by writing the speed as $\quantity{v\times1600\div3600}{m\,s^{-1}}$.

A graph of $d_s$ and $d_t$ looks like this.

At $\quantity{60}{mph}$, which of the two distances is bigger? Why might the Highway Code make the two-second suggestion? At what speeds is $d_t=d_s$?

Perhaps surprisingly, at higher speeds like $\quantity{60}{mph}$ the suggested separation of vehicles is the smaller of the two distances. This may be in recognition of the fact that in faster traffic the car in front is unlikely to come to a complete stop without itself taking time to slow down.

The two distances are the same when $v=0$ and when $v\approx\quantity{39}{mph}$. We can read this value off the graph, or we could set the two expressions $\eqref{eq:stopping}$ and $\eqref{eq:twosec}$ equal and solve the resulting quadratic. Perhaps $\quantity{40}{mph}$ is the speed above which the Highway Code considers traffic to be "faster-moving".

In a model of traffic flow on a single-lane road, it is assumed that each vehicle is $\quantity{4}{m}$ long, travelling at constant speed and separated from the one in front by the typical stopping distance for that speed. Find an expression for the rate of traffic flow, $R_s$, in vehicles per hour, as a function of the speed, $v$. Plot a graph of this function for speeds up to $\quantity{70}{mph}$. Use your graph or the algebra to find the minimum or maximum value of this function and the speed(s) at which it occurs.

In this model, the distance between the front of one car and the front of the next is $\quantity{d_s+4}{m}$. The flow rate will be the speed (in metres per hour) divided by this distance (in metres), which works out to be

$$R_s = \frac{1600v}{\frac{3}{10}v+\frac{3}{200}v^2+4} = 1600\times200\times\frac{v}{3v^2+60v+800}.$$

A graph of this function looks like this.

We can get a good estimate of the maximum flow rate from the graph. Alternatively, it is possible to calculate it exactly by differentiating using the quotient rule. We found there is a stationary point at $v=20\sqrt{\frac{2}{3}}$. The maximum flow rate is roughly $2025$ cars per hour if they all drive at about $\quantity{16.3}{mph}$.

Can you explain why the flow rate decreases as speeds increase beyond this?

If instead of the typical stopping distance, the vehicles are separated by the two-second rule, what is the flow rate, $R_t$? What would the maximum or minimum flow rate be and when does it occur?
This time, the flow rate is

$$R_t = 1600\times9\times\frac{v}{8v+36},$$

whose graph is a translated and stretched hyperbola. The maximum flow rate is $1800$ cars per hour, but to achieve it they'd have to be driving infinitely fast!

How much does the flow rate increase as the speed increases from $\quantity{60}{mph}$ to $\quantity{70}{mph}$?

Under either model, what happens when the speed is very small? How must you drive? What happens when the speed is zero? If we allowed the speed to be negative, how could we stick to the two-second rule?

Why might these models be unrealistic in practice?

There are lots of unrealistic assumptions in the models we have used here.

• Every vehicle is assumed to be driving at an exactly constant speed.
• In practice it is hard to accurately judge distances.
• Even if you can, some drivers aren't good at sticking to safe distances.
• The models take no account of vehicles joining or leaving the road.

How many more can you think of?
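The headline numbers in this solution can be verified with a few lines of Python, using the formulas derived above (a sketch, not part of the original problem):

```python
import numpy as np

def d_s(v):                      # stopping distance in m, v in mph
    return 3/10 * v + 3/200 * v**2

def d_t(v):                      # two-second gap in m, v in mph
    return 8/9 * v

# Speeds where d_t = d_s: roots of (3/200)v^2 + (3/10 - 8/9)v = 0
print(np.roots([3/200, 3/10 - 8/9, 0]))          # ≈ [39.26, 0]

# Maximum of R_s = 1600 v / (d_s(v) + 4), from the quotient rule
v_star = 20 * np.sqrt(2/3)                       # ≈ 16.33 mph
print(round(1600 * v_star / (d_s(v_star) + 4)))  # ≈ 2025 cars per hour

# R_t = 1600 v / (d_t(v) + 4) tends to 1600 * 9/8 = 1800 as v grows
for v in (60.0, 70.0, 1e6):
    print(round(1600 * v / (d_t(v) + 4)))        # 1674, 1691, ... -> 1800
```

In particular, speeding up from $\quantity{60}{mph}$ to $\quantity{70}{mph}$ raises the two-second-rule flow rate by only about $17$ cars per hour.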
https://gamedev.stackexchange.com/questions/66150/how-can-i-tell-the-player-that-a-key-or-other-item-can-be-discarded/66159
# How can I tell the player that a key or other item can be discarded?

We're working on a survival-horror, classical Resident Evil (1, 2, 3, 0, REmake) type of game - link. I'm using Unity3D with C#. There's something that's kinda puzzling me - more a game design question than a technical one.

Item usage - in Resident Evil you could use a key to open more than one door, and when it was no longer needed you'd get an "XXX key is not needed anymore, discard?" YES|NO prompt. I was curious how one would implement this in an elegant manner...

1. First, we should discuss who knows about the other: is it the key that knows what doors it opens, or is it the door that knows which key fits in its keyhole? Well, it's kinda both if you think about it. For example, a diamond-shaped key opens doors with diamond-shaped keyholes - looking at the key you know which doors it could open, and the same is true of the door - you look at it and know what type of key you need...

2. If we went with the first approach, in which a key knows what doors it unlocks, we would have to assign those doors to the key - hand them to it in some way. What if you had a key that remains with you for a very long period - say it opens so many doors that some of those doors are in the middle of the game? Why would a key you pick up at the beginning of the game carry information about something that would occur in the middle of the game? Seems redundant to me... so I wouldn't go with this approach.

3. OK, so you tell doors which key fits in them - enter a scene with 3 doors, and information about which keys to use for those 3 doors is loaded, and nothing else! So I guess we got that out of the way...

4. Now, how would you tell that a key is not needed anymore? I thought of two approaches - both have downsides:

   1. Have an nUsages counter that you preset (via the inspector, for example). This way you have to know exactly how many times the key will be used in the game - add another door and forget to increment the counter, and you get screwed. Each time you open a door/use the key, you decrement the counter by one; reach zero -> "You don't need this key anymore, discard?" YES|NO.

   2. Detect the doors a key gets used in dynamically (when you pick up the key?). So maybe you'd have a database of DoorTriggers - pick up a key -> look up the key's entries and increment the counter that way. This is slow relative to the previous method. It's also kinda foggy - you know, a key could open doors in scenes other than the one it's picked up in - how would I go about searching for its doors in that case? (Getting a bit Unity3D-specific here...) I guess, to answer this, we must answer: how do doors register themselves in our db? Upon scene load? If that's the case, then a key picked up in scene A can't detect a door it opens in scene B where B > A...

Note that a key is just an example - there are usable items for which I don't need to do this, like health items. The items I must perform this operation on are "other" items, like keys, a crank, a wrench, a pipe, a hole-opener, etc.

Again, this is just off the top of my head. What do you think of my assessment? How would you implement this feature? (Maybe in a better way? General ideas.) And if you'd go with my 2nd method of adding the stuff dynamically upon item pickup, how would you do it in Unity3D - more accurately, how would you look up the key's doors that reside in other scenes, and where would you let the doors register themselves in the db?
(General approaches again.) Thanks for any help.

• To be honest, if you think about it, why would you ever throw a key away? In terms of realism, if you find yourself in this kind of environment there is no way to know whether there are more doors that use this key or not. Personally I would just create a separate keyring area in your inventory and have the keys stay there forever. If the key is single-use (for example, if you need to put a stone symbol into a door or something) then you can just keep a boolean flag so you know to remove it after you use it. – Benjamin Danger Johnson Nov 21 '13 at 18:10

• Related question - why tell the player the item can safely be discarded? Either automatically throw it away, or force them to guess. – Bobson Nov 21 '13 at 21:15

• @BenjaminDangerJohnson - Thanks a lot for your comment. I didn't want to mention that we're already using the keyring idea - I wanted to talk about item usage in general, using keys as an example to illustrate the point because it's simple. – vexe Nov 22 '13 at 4:46

• You are right, in real life you never really know if it's 'safe' to discard something. But the reason I would allow the player to discard something is to free some memory, since the item is not needed anymore - why keep it? Take a wrench, for example, that you only use twice or so: it makes sense realistically to keep it with you forever, but it's not something very friendly memory-wise. – vexe Nov 22 '13 at 4:49

• Be careful not to over-optimize. I honestly doubt an extra 20 objects will really slow you down too much, but if you are that tight on space you can always just tell the player that the key "broke" after being used. The big thing is you want to give the players a situation they can understand. If the key just warps to another dimension they might interpret it as a bug; if it breaks they know 1) they can no longer use it and 2) it might be possible to fix it later if the pieces stay in their inventory. – Benjamin Danger Johnson Nov 22 '13 at 17:09

I don't see an easy way to meet all requirements in a general case.

# To Make a Key Auto-Discard

Two ideas:

1. If your game is linear, and one scene follows another with no backtracking, then you can store with the key a number and a string. The string is the latest scene that the key can be used in; the number is the number of matching doors in that scene. Discard when the number is zero, or when leaving that scene (the latter catches the case where you don't open all the doors for some reason).

2. If your game is more complex, you'll have to store a list of scenes reachable from the current scene and - for each key - the number of unopened doors that match the key in each scene. After each key use on a new door, decrement the count for that scene. If it goes to zero, check the counts for all reachable scenes. If they are all zero, discard.

Compiling the 'reachable' scene list automatically is basically impossible in Unity, since it would require some kind of knowledge of the behavior of your scripts. So you'd need to hard-code that. You can automatically iterate through a list of scenes, open them, try the keys on each door, and store the counts.

# Who Decides Which Key Fits

You've covered key-has-a-list-of-doors and doors-have-a-list-of-keys, both of which can easily be exposed by a method on a door that takes a key and returns a boolean.

Another approach is to make the key and door more like a real key and lock. Have each key be an integer representing a bit pattern. Have each door be the same.
Then in your CanOpen function, AND them together; the door opens if the result is zero. So a key with 'prongs' 00000000 is a skeleton key - it opens anything. A key with 11011001 would fit a lock of 00100110, or 00100010. With this you can build patterns of opening that mean keys and doors don't need complete lists of one another beforehand (a minimal sketch of this check appears after the comments below).

Figuring out the patterns is more difficult. A real lock effectively has two masks and does key AND door1 == door2, which might be worth doing if you get stuck making the bitmasks.

• Sorry for the late response, I was involved in doing other parts of my game. But now I'm back to usable items. I thought of a really easy way to do it, but it ran short. What I had in mind: store all my items as prefabs (keys, health items, etc.), then go for the key-has-a-list-of-doors approach. Since my keys are prefabs, I thought I could travel between scenes and assign the doors I want to the key, but to my horror I can't assign something from the scene to a prefab! – vexe Dec 25 '13 at 15:27

• I have never thought of using bitmasks here - it is pretty neat actually! I think I'll explore it and let you know the results. One thing I didn't quite get though: "Figuring out the patterns is more difficult. A real lock effectively has two masks, and does key AND door1 == door2, which might be worth doing if you get stuck making the bitmasks." What do you mean by figuring out the patterns? How does a real lock have two masks? Why key AND door1 == door2? If I'm going to do this, I think I want to make the creation of the mask visible in the inspector somehow, maybe with a custom inspector. – vexe Dec 25 '13 at 15:28

• Thinking more about it, a key could open many doors. So, to make key 'k' fit in doors 'd1' and 'd2', I would have to tell 'k' to generate a pattern that works for d1 and d2, or the opposite (go to the doors and tell them to use a pattern that works for 'k'). In either case, the key and the doors know about each other in a way, so... we're kinda back to the old idea of having the key know which doors it opens, or vice versa. Unless you have other ideas in mind for how to tell 'k' to generate a pattern that works for 'd1' and 'd2'. – vexe Dec 25 '13 at 17:09

• @vexe - you have to distinguish between the code and the game design. Using bitmasks, the code is not coupled. You don't need some list belonging to a key of doors it can open (or vice versa). So you don't need to keep editing scenes to get those lists right. But you do need to know what keys fit what doors. You still effectively need that 'list' somewhere, in your game design. But the bitmasks stop you having to encode it explicitly, which in turn makes it more flexible. It isn't a panacea; it just solves a problem of tightly coupled code. – Ian Dec 28 '13 at 10:55

• @vexe - so one approach would be to think of 'sets' of doors, as you would in a real building. An office block might be designed with different suites having their own keys (so each company can have a master key), but the janitor has a full-access key. Then each office in a company might have its own key, which also opens the front door, so employees can get to their desk, but nobody else's. – Ian Dec 28 '13 at 10:58
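A minimal sketch of the bitmask check described in the answer above - illustrative Python rather than Unity C#, with made-up key and door values:

```python
# A door opens when the key's "prong" bits and the door's bits share no set
# bits, i.e. key AND door == 0. All-zero prongs therefore act as a skeleton key.

def can_open(key_bits: int, door_bits: int) -> bool:
    return (key_bits & door_bits) == 0

SKELETON_KEY = 0b00000000

key    = 0b11011001
door_a = 0b00100110   # fits: no overlapping bits with the key
door_b = 0b00100010   # fits as well
door_c = 0b01000000   # clashes with one of the key's set bits

assert can_open(SKELETON_KEY, door_c)
assert can_open(key, door_a) and can_open(key, door_b)
assert not can_open(key, door_c)

# The "real lock" variant with two masks mentioned at the end of the answer:
def can_open_two_masks(key_bits: int, door_mask: int, door_pattern: int) -> bool:
    return (key_bits & door_mask) == door_pattern
```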
As long as items are used only on the same level, the matter is fairly simple. Upon loading, count the number of doors requiring a certain key. This determines the number of times a key is usable on a level, and every time it is used on a door the key's usage counter is decreased. If the counter is 0, the key is no longer needed. If there can be multiple keys of the same type, the counter needs to be global.

On a global game-world level the same principle applies, but this time you have to know the number of uses across all "levels". If the level format is simple or split into different files, the number of doors requiring a specific key can be determined when the game launches. Otherwise a preprocessing task could determine that number when the game is compiled or the levels exported. You certainly don't want designers to have to give the key a specific number of uses, because that is prone to human error.

To make lookup easier, there needs to be some form of database. You'll also need this to link doors with a key. Let's say all items in the game world have a unique (database) ID (an integer number) or a unique name like "RedSkeletonKey"; then the red skeleton doors would declare that they require "RedSkeletonKey" to be unlocked. You will want to edit this in the door template, so that every instantiated door placed in the world automatically has this requirement. Again, this is to avoid human error.

At runtime, when the player approaches a locked door and tries to unlock it, the "RedSkeletonKey" reference is looked up in the item database. When found, the item of the same type is looked up in the player inventory. If the player has that key, the door is unlocked and the runtime database usage counter for that key is decreased (or a separate counter increased and compared for equality, whichever you prefer). A rough sketch of this bookkeeping follows the comments below.

• +1 Thanks for your answer - appreciate your effort. But to be honest, the first half of your answer kinda repeats what I said, so it's not adding something new to the table. You also didn't address the problem of how keys know about doors in scenes ahead of them, if doors register dynamically in the db. – vexe Nov 22 '13 at 5:02

• "when the player approaches a locked door and tries to unlock it, the "RedSkeletonKey" reference is looked up in the item database" - I don't actually need to look up keys. When I create a door, it will have the name of the key that opens it. So: player tries to open door with key X -> door.getKey() == X ? door.Unlock : nothing; - The only db lookup happens when the player picks up a key, to know the number of doors it's used in. – vexe Nov 22 '13 at 5:04
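A rough sketch of the counter-plus-database approach from this answer, in illustrative Python; the names (count_doors_requiring, KeyItem) are ours, not a Unity API:

```python
from collections import Counter

def count_doors_requiring(levels: dict[str, list[str]]) -> Counter:
    """levels maps level name -> list of key names its doors require.
    In practice this tally would be built at export/preprocess time."""
    tally: Counter = Counter()
    for required_keys in levels.values():
        tally.update(required_keys)
    return tally

class KeyItem:
    def __init__(self, name: str, uses_left: int):
        self.name, self.uses_left = name, uses_left

    def use_on_door(self) -> bool:
        """Decrement on each use; returns True when the key can be discarded."""
        self.uses_left -= 1
        return self.uses_left <= 0

# Hypothetical level data: three doors in total need the red skeleton key
levels = {"mansion": ["RedSkeletonKey", "RedSkeletonKey"],
          "lab": ["RedSkeletonKey"]}
tally = count_doors_requiring(levels)
red_key = KeyItem("RedSkeletonKey", tally["RedSkeletonKey"])  # 3 uses left
```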
https://www.coursehero.com/sg/general-chemistry/what-is-organic-chemistry/
What Is Organic Chemistry?

Organic chemistry is the study of carbon-based molecules that contain at least one carbon–hydrogen (${\rm{C}{-}{H}}$) bond.

Organic chemistry is the branch of chemistry concerned with organic compounds. An organic compound contains one or more carbon–hydrogen (${\rm{C}{-}{H}}$) bonds. Because of the nature of the carbon atom and its ability to form four covalent bonds, there is a nearly infinite number of different compounds, in a huge variety of molecular configurations that include both long chains and ring structures. Organic compounds contribute to food, clothes, plastics, medicines, and soaps, as well as the molecules that build living organisms.

[Figure: Carbon tetrahedron]

These compounds are called organic compounds because living organisms are composed primarily of carbon-based compounds. In the early to mid-nineteenth century, scientists thought that compounds derived from living, or organic, things were inherently different from those derived from nonliving, or inorganic, things. It is now known that the apparent differences between organic and inorganic compounds arise only from the arrangement of the atoms in the molecules, but the name remains.

While organic compounds also contain atoms such as oxygen, nitrogen, phosphorus, sulfur, and halogens, most organic compounds contain more carbon–hydrogen bonds than any other type of ${\rm{C}{-}{X}}$ bond. An organic compound that contains only carbon–carbon and carbon–hydrogen bonds is called a hydrocarbon. The four main classes of hydrocarbons are alkanes, alkenes, alkynes, and aromatics. An aliphatic compound is a hydrocarbon that contains only straight or branched carbon–carbon chains; alkanes, alkenes, and alkynes are examples of aliphatic compounds. An aromatic compound is a planar, cyclic hydrocarbon with alternating ${\rm{C}{-}{C}}$ and ${\rm{C}{=}{C}}$ bonds and, in the simplest cases, CnHn stoichiometry. Benzene (C6H6) is the smallest neutral aromatic hydrocarbon.

An important facet of organic chemistry is the ability and tendency of carbon to bond to atoms other than hydrogen. An atom or group of atoms (a functional group) that replaces a ${\rm{C}{-}{H}}$ bond in an organic compound is called a substituent. Substituents can be several different kinds of atoms or groups of atoms, called functional groups. A functional group is a group of atoms with specific physical, chemical, and reactivity properties. When representing a functional group, an R is often used to indicate the remainder of the molecule that is not part of the functional group.

The skeletal structure of an organic compound is drawn using zigzag lines that represent carbon–carbon bonds; to save time, no ${\rm{C}{-}{C}}$ or ${\rm{C}{-}{H}}$ bonds are drawn explicitly. In a skeletal structure, the carbon and hydrogen atoms are all implied: a carbon atom is understood to sit at every vertex of the zigzag, every carbon atom is bound to four other atoms, and any bonds that are not drawn explicitly are assumed to be to hydrogen atoms. Any atom or functional group that is not a carbon atom or a hydrogen atom has to be written out.
Classes of Organic Functional Groups, Examples, and Their Systematic Naming

| Family Name | Example Compound | Example Name |
| --- | --- | --- |
| Alkane | ${\rm{CH_3CH_2CH_2CH_2{-}H}}$ | Butane |
| Alkene | ${\rm{CH_3CH_2CH{=}CH_2}}$ | 1-Butene |
| Alkyne | ${\rm{CH_3CH_2C{\equiv}CH}}$ | 1-Butyne |
| Alcohol | ${\rm{CH_3CH_2CH_2CH_2{-}OH}}$ | 1-Butanol |
| Halide | ${\rm{CH_3CH_2CH_2CH_2{-}Cl}}$ | 1-Chlorobutane |
| Ether | ${\rm{CH_3CH_2{-}O{-}CH_2CH_3}}$ | Diethyl ether |
| Thiol | ${\rm{CH_3CH_2{-}SH}}$ | Ethanethiol |
| Aldehyde | ${\rm{CH_3CH_2CH_2{-}C({=}O)H}}$ | Butanal (butyraldehyde) |
| Ketone | ${\rm{CH_3{-}C({=}O){-}CH_3}}$ | Propanone (acetone) |
| Carboxylic acid | ${\rm{CH_3{-}C({=}O){-}OH}}$ | Ethanoic acid (acetic acid) |
| Ester | ${\rm{CH_3CH_2CH_2CH_2{-}C({=}O)O{-}CH_3}}$ | Methyl pentanoate |
| Amide | ${\rm{CH_3CH_2{-}C({=}O){-}NH_2}}$ | Propanamide |
| Amine | ${\rm{CH_3CH_2CH_2{-}NH_2}}$ | Propylamine |
https://www.nature.com/articles/s41564-023-01322-0
## Main

Over the past decade, trace gases have emerged as major energy sources supporting the growth and survival of aerobic bacteria in terrestrial ecosystems. Two trace gases, molecular hydrogen (H2) and carbon monoxide (CO), are particularly dependable substrates given their ubiquity, diffusibility and energy yields1. Bacteria oxidize these gases, including below atmospheric concentrations, using group 1 and 2 [NiFe]-hydrogenases and form I carbon monoxide dehydrogenases linked to aerobic respiratory chains2,3,4,5,6. Trace gas oxidation enables diverse organoheterotrophic bacteria to survive long-term starvation of their preferred organic growth substrates7,8. In addition, various microorganisms can grow mixotrophically by co-oxidizing trace gases with other organic or inorganic energy sources7,9,10. Thus far, bacteria from eight different phyla have been experimentally shown to consume H2 and CO at ambient levels1, with numerous other bacteria encoding the determinants of this process6,11. At the ecosystem scale, most bacteria in soil ecosystems harbour genes for trace gas oxidation, and cell-specific rates of trace gas oxidation are theoretically sufficient to sustain their survival12,13. However, since most of these studies have focused on soil environments or isolates, the wider significance of trace gas oxidation remains largely unexplored.

Trace gases may be important energy sources for oceanic bacteria since, in contrast to most soils, they are generally available at elevated concentrations relative to the atmosphere1. Surface layers of the world's oceans are generally supersaturated with H2 and CO, typically by 2- to 5-fold (up to 15-fold) and 20- to 200-fold (up to 2,000-fold) relative to the atmosphere, respectively14,15,16,17. As a result, oceans contribute to net atmospheric emissions of these gases18,19. CO is mainly produced through photochemical oxidation of dissolved organic matter20, whereas H2 is primarily produced by cyanobacterial nitrogen fixation21. High concentrations of H2 are also produced during fermentation in hypoxic sediments, and these can diffuse into the overlying water column, especially in coastal waters22. For unresolved reasons, the distributions of these gases vary with latitude and exhibit opposite trends: while dissolved CO is highly supersaturated in polar waters, H2 is often undersaturated23,24,25,26,27,28. These variations probably reflect differences in the relative rates of trace gas production and consumption in different climates.

Oceanic microbial communities have long been known to consume CO, although their capacity to use H2 has not been systematically evaluated29. Approximately a quarter of bacterial cells in oceanic surface waters encode CO dehydrogenases, and these span a wide range of taxa, including the globally abundant family Rhodobacteraceae (previously known as the marine Roseobacter clade)6,30,31,32,33. Building on observations made for soil communities, CO oxidation potentially enhances the long-term survival of marine bacteria during periods of organic carbon starvation6; consistently, culture-based studies indicate that CO does not influence growth of marine isolates, but production of the enzymes responsible is strongly upregulated during starvation34,35,36,37. While aerobic and anaerobic oxidation of H2 has been extensively described in benthic and hydrothermal vent communities38,39,40,41,42, so far no studies have shown whether pelagic bacterial communities can use this gas.
Several surveys have detected potential H2-oxidizing hydrogenases in seawater samples and isolates6,11,40,43. Although Cyanobacteria are well reported to oxidize H2, including marine isolates such as Trichodesmium, this process is thought to be limited to the endogenous recycling of H2 produced by the nitrogenase reaction44,45.

In this study, we addressed these knowledge gaps by investigating the processes, distribution, mediators and potential roles of H2 and CO oxidation by marine bacteria. To do so, we performed side-by-side metagenomic and biogeochemical profiling of 14 samples collected from a temperate oceanic transect, a temperate coastal transect and a tropical island, in addition to analysing the global Tara Oceans metagenomes and metatranscriptomes46. We also tested the capacity of three axenic marine bacterial isolates to aerobically consume atmospheric H2. Altogether, we provide definitive ecosystem-scale and culture-based evidence that H2 is an overlooked key energy source supporting growth of marine bacteria.

## Results

### Marine microbes consume H2 slowly and CO rapidly

We measured in situ concentrations and ex situ oxidation rates of H2 and CO in 14 surface seawater samples. The samples were collected from three locations (Supplementary Fig. 1): an oceanic transect spanning neritic, subtropical and subantarctic front waters (Munida transect off the New Zealand coast; n = 8; Supplementary Fig. 2); a temperate urban bay (Port Phillip Bay, Australia; n = 4); and a tropical coral cay (Heron Island, Australia; n = 2). In line with global trends at these latitudes, both gases were supersaturated relative to the atmosphere in all samples. H2 was supersaturated by 5.4-, 4.8- and 12.4-fold respectively in the oceanic transect (2.0 ± 1.2 nM), the temperate bay (1.8 ± 0.26 nM) and the tropical island (4.6 ± 0.3 nM). CO was moderately supersaturated in the oceanic transect (5.2-fold; 0.36 ± 0.07 nM), but highly oversaturated in both the temperate bay (123-fold; 8.5 ± 1.7 nM) and the tropical island (118-fold; 8.2 ± 0.93 nM).

Microbial oxidation of trace gases was detected in all but one of the collected samples during ex situ incubations (Fig. 1). For the temperate bay, H2 and CO were consumed in water samples collected from the shore, intermediary zone and bay centre (Fig. 1a). Based on in situ gas concentrations, bulk oxidation rates of CO were 18-fold faster than those of H2 (P < 0.0001) (Supplementary Table 1). Bulk oxidation rates did not significantly differ between the surface microlayer (that is, the 1 mm interface between the atmosphere and ocean) and underlying waters. H2 and CO oxidation was also evident in surface microlayer and underlying seawater samples collected from the tropical island (Supplementary Fig. 3). We similarly observed rapid CO and slower H2 consumption across the multi-front Munida oceanic transect, although unexpectedly, these activities were mutually exclusive. Net CO oxidation occurred throughout the coastal and subtropical waters but was negligible in subantarctic waters. Conversely, net H2 oxidation only occurred in the subantarctic waters (Fig. 1b). These divergent oxidation rates in water masses with contrasting physicochemical conditions may help explain the contrasting concentrations of H2 and CO in global seawater23,24,25,26,27,28, although wider sampling and in situ assays would be required to confirm this.
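As a quick sanity check of the supersaturation factors quoted above, the saturation ratio is simply the measured dissolved concentration divided by the air-equilibrated concentration. A minimal Python sketch using the transect numbers; the equilibrium value of ~0.37 nM is an approximation implied by the reported 5.4-fold figure, not a measured quantity:

```python
# Saturation factor = measured dissolved concentration / concentration at
# equilibrium with the atmosphere. Values are illustrative, from the text.

c_measured_h2 = 2.0e-9      # mol/l, oceanic transect mean
c_equilibrium_h2 = 0.37e-9  # mol/l, approximate air-equilibrated value at ~20 C

print(f"H2 supersaturation ≈ {c_measured_h2 / c_equilibrium_h2:.1f}-fold")  # ≈ 5.4
```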
It should be noted that these measurements probably underestimate rates and overestimate thresholds of H2 oxidation, since there will still be underlying endogenous production of H2, primarily through nitrogen fixation, during the incubations. Nevertheless, they provide the first empirical report of H2 oxidation in marine water columns.

### Marine microbes express enzymes for CO and H2 oxidation

To better understand the basis of these activities, we sequenced metagenomes of the 14 samples (Supplementary Tables 2 and 3) and used homology-based searches to determine the abundance of 50 metabolic marker genes in the metagenomic reads (Supplementary Table 3) and assemblies (Supplementary Table 4). In common with other surface seawater communities47, analysis of community composition (Supplementary Fig. 4) and metabolic genes (Fig. 2) suggests that most bacteria present are capable of aerobic respiration, organoheterotrophy and phototrophy via energy-converting rhodopsins. Capacity for aerobic CO oxidation was moderate: approximately 12% of bacterial and archaeal cells encoded the coxL gene (encoding the catalytic subunit of the form I CO dehydrogenase), although relative abundance decreased from an average of 25% in the temperate bay, where CO oxidation was highly active (Fig. 1), to 5.1% in subantarctic waters (Fig. 2), where CO oxidation was negligible. Diverse hydrogenases were also encoded by the community, including subgroups known to support hydrogenotrophic respiration, hydrogenotrophic carbon fixation, hydrogenogenic fermentation and H2 sensing (Supplementary Table 3). Group 1d, 1l and 2a [NiFe]-hydrogenases (herein aerobic H2-uptake hydrogenases), which enable cells to input electrons from H2 into the aerobic respiratory chain4,9,48,49, were by far the most abundant among the H2-oxidizing enzymes (Fig. 2). Encoded by 1.0% of marine bacteria on average, the abundance of these hydrogenase subgroups was highest in the tropical island samples (average 3.5%) and declined to 0.11% in the neritic and subtropical samples from the oceanic transect (Fig. 2), in line with the contrasting H2 oxidation rates between these samples (Fig. 1 and Supplementary Fig. 3). The dominant hydrogenase subgroups varied between the samples, namely group 1d in the tropical island samples, group 2a in the temperate shore and microlayer samples and group 1l in the subantarctic samples (Fig. 2). The relative abundance of H2- and CO-oxidizing bacteria strongly predicted oxidation rates of each gas (R² of 0.55 and 0.88; P values of 0.0059 and <0.0001, respectively) (Supplementary Fig. 5), although it is likely that repression of gene expression contributes to the negligible activities of some samples.

To test whether these observations were globally representative, we determined the distribution and expression of the genes for H2 and CO oxidation in the Tara Oceans dataset47,50. Similarly to our metagenomes, aerobic H2-uptake hydrogenases were encoded by an average of 0.8% of bacteria and archaea across the 213 Tara Oceans metagenomes, whereas form I CO dehydrogenases were encoded by 10.4%. These genes were observed in samples spanning all four oceans, as well as the Red Sea and Mediterranean Sea (Fig. 2). Despite their relatively low abundance based on the metagenomes, hydrogenase transcripts were highly numerous in the metatranscriptomes, at levels comparable to nitrogenase (nifH) transcripts (Fig. 2 and Supplementary Table 3).
Expression ratios (average RNA:DNA ratios) of the aerobic H2-uptake hydrogenases were high, that is, 2.2, 1.1 and 12.9 for the group 1d, 1l and 2a [NiFe]-hydrogenases, respectively (Supplementary Table 3); of the marker genes surveyed, only the determinants of phototrophy (psaA, psbA, energy-converting rhodopsins), nitrification (amoA, nxrA) and CO2 fixation (rbcL) were expressed at higher ratios than the group 2a [NiFe]-hydrogenases. In contrast, expression levels were relatively low for the CO dehydrogenase (0.9), as well as for the hydrogenases responsible for hydrogenotrophic carbon fixation, hydrogenogenic fermentation and H2 sensing (average RNA:DNA <1 in all cases) (Supplementary Table 3). Together with the biogeochemical measurements (Fig. 1), these findings suggest that H2-oxidizing bacteria can be highly active in seawater despite their relatively low abundance.

### Eleven marine bacterial phyla encode H2-oxidizing enzymes

We subsequently determined the distribution of the metabolic marker genes in 110 metagenome-assembled genomes (MAGs) constructed from the local dataset (Supplementary Fig. 6) and 1,888 previously reported MAGs from the Tara Oceans dataset (Fig. 3a). The three lineages of aerobic H2-uptake hydrogenases were phylogenetically widespread, encoded by 75 (4.0%) of the bacterial MAGs, spanning 9 phyla and 26 orders, whereas CO dehydrogenases had a somewhat narrower distribution, that is, 70 (3.5%) MAGs spanning 6 phyla and 14 orders (Supplementary Table 5). Aerobic H2-uptake hydrogenases and CO dehydrogenases were both encoded by MAGs within the Proteobacteria, Bacteroidota, Actinobacteriota, Chloroflexota, Myxococcota and candidate phylum SAR324, and hydrogenases were also present in MAGs from the Cyanobacteria, Planctomycetota and Eremiobacterota (Fig. 3a). Phylogenetic trees depict the evolutionary history and taxonomic distributions of the catalytic subunits of the H2-oxidizing group 1 and 2 [NiFe]-hydrogenases (Fig. 3b and Supplementary Fig. 7), the bidirectional group 3 and 4 [NiFe]-hydrogenases (Supplementary Fig. 8) and the CO dehydrogenase (Supplementary Fig. 9).

Integrating genomic information with the wider literature, it is likely that H2 and CO oxidation support a myriad of lifestyles in marine ecosystems. The group 1d [NiFe]-hydrogenase was typically co-encoded with both ribulose 1,5-bisphosphate carboxylase/oxygenase (RuBisCO) and the sensory group 2b [NiFe]-hydrogenase in the MAGs of multiple Rhodobacteraceae, Alteromonadaceae and other Proteobacteria (Fig. 3b and Supplementary Table 5); this suggests that this enzyme supports hydrogenotrophic growth in H2-enriched waters, in line with the previously described roles of these hydrogenases in culture-based studies11,38,51. The group 1l [NiFe]-hydrogenase, recently shown to support persistence of a Bacteroidota isolate from Antarctic saline soils4, was encoded by predicted organoheterotrophs from the Bacteroidota, SAR324 and, on the basis of cultured isolates, Proteobacteria (Fig. 3b and Supplementary Table 5). Group 2a [NiFe]-hydrogenases, known to support mixotrophic growth of diverse bacteria9, were more phylogenetically diverse and taxonomically widespread; they were distributed in the MAGs of both predicted chemoorganoheterotrophs (Bacteroidota, Myxococcota, Proteobacteria) and photolithoautotrophs (Cyanobacteria) (Fig. 3b and Supplementary Table 5).
CO dehydrogenases were mostly affiliated with Rhodobacteraceae and were also encoded by multiple MAGs from the classes Nanopelagicales S36-B12, Puniceispirillaceae, SAR324 NAC60-12 and Ilumatobacteraceae (Supplementary Fig. 9). Of coxL-encoding MAGs, 63% also encoded the genes for energy-converting rhodopsins or photosystem II, indicating that they can harvest energy concurrently or alternately from both CO and light, in support of previous culture-based findings37. While most of these MAGs are predicted to be obligate heterotrophs, 7% also encoded RuBisCO and hence are theoretically capable of carboxydotrophic growth (Supplementary Table 5). These findings support previous inferences that habitat generalists in marine waters benefit from metabolic flexibility, including consuming dissolved CO as a supplemental energy source31,52.

### H2 could support growth and survival of marine bacteria

We used two thermodynamic modelling calculations to estimate the extent to which the measured rates of H2 and CO oxidation could sustain cellular growth or survival. First, assuming a median maintenance energy of 1.9 × 10⁻¹⁵ watts (W) per cell based on measurements of mostly copiotrophic isolates53, the measured oxidation rates would theoretically sustain an average of 2.0 × 10⁷ H2-oxidizing cells (range 1.4 × 10⁶ to 8.3 × 10⁷) and 6.1 × 10⁷ CO-oxidizing cells (range 2.1 × 10⁶ to 1.5 × 10⁸) per litre at in situ dissolved gas concentrations (Supplementary Table 1). Second, we calculated the amount of power (that is, W per cell) generated on the basis of the observed rates of trace gas oxidation (Fig. 1 and Supplementary Table 1) and the predicted number of trace gas oxidizers (Fig. 2 and Supplementary Table 1) in the sampled waters, with this analysis limited to the samples where oxidation was observed and reliable cell counts are available. On average, oxidation of the measured in situ concentrations of CO and H2 yields 7.2 × 10⁻¹⁶ W and 5.8 × 10⁻¹⁴ W per cell, respectively (Fig. 4). Together, these analyses suggest that the rates of CO oxidation are sufficient to sustain the survival, but not growth, of the numerous bacteria predicted to be capable of using this gas; this supports previous inferences that CO dehydrogenase primarily supports persistence in organoheterotrophic bacteria6. In contrast, marine H2 oxidizers gain much power by oxidizing a relatively exclusive substrate at rapid cell-specific rates probably sufficient to support growth. The cell-specific power generated for the sample with the most active H2 oxidizers (5.4 × 10⁻¹³ W; from the first subantarctic station) is within the range reported for cellular metabolic rates of bacterial isolates during growth (median: 2.6 × 10⁻¹⁴ W; range: 2.8 × 10⁻¹⁷ to 2.1 × 10⁻¹¹ W), and is higher than that of the copiotrophic marine isolates Vibrio sp. DW1 (3.2 × 10⁻¹⁴ W) and V. anguillarum (1.8 × 10⁻¹³ W)53. While estimation of cell-specific power from community data is less precise than estimates derived from axenic culture, these power-per-cell calculations are probably underestimates, given that they do not account for any internal cycling of trace gases, assume that all cells are equally active and do not consider relic DNA. It should also be noted that the power gained per cell will substantially increase when H2 and CO become transiently highly elevated over space and time, as depicted in Fig. 4. In combination with the genomic inferences that multiple MAGs encode hydrogenases known to support lithoautotrophic and lithoheterotrophic growth (Fig. 3),
such thermodynamic modelling strongly suggests that a small proportion of bacteria in oceans can grow using H2 as an electron donor for aerobic respiration and, in some cases, CO2 fixation. By predominantly relying on energy derived from H2 oxidation, marine bacteria could potentially allocate most organic carbon to biosynthesis rather than respiration, that is, adopting a predominantly lithoheterotrophic lifestyle.

### A marine isolate uses atmospheric H2 mixotrophically

To better understand the mediators and roles of marine H2 oxidation, we investigated H2 uptake by three heterotrophic marine isolates encoding uptake hydrogenases closely related to those in the MAGs (Fig. 3). Two strains, Robiginitalea biformata DSM-15991 (Flavobacteriaceae)54 and Marinovum algicola FF3 (Rhodobacteraceae)55, did not substantially consume H2 over a 3 week period across a range of conditions, despite encoding group 1l [NiFe]-hydrogenases. It is unclear whether hydrogenases have become non-functional in these fast-growing laboratory-adapted isolates or whether they are instead only active under very specific conditions. Sphingopyxis alaskensis RB2256 (Sphingomonadaceae)56,57, which encodes a plasmid-borne group 2a [NiFe]-hydrogenase, aerobically consumed H2 in a first-order kinetic process to sub-atmospheric levels (Fig. 5). Abundant in oligotrophic polar waters, S. alaskensis requires minimal resources to replicate since it forms extremely small cells (<0.1 µm³) and has a streamlined genome57,58,59,60. Previously thought to be an obligate organoheterotroph61, the discovery that this oligotrophic, exceptionally small bacterium (ultramicrobacterium)62 uses an abundant reduced gas as an energy source further rationalizes its ecological success. To our knowledge, this is the first report of atmospheric H2 oxidation by a marine bacterium.

We then determined whether S. alaskensis uses H2 oxidation primarily to support mixotrophic growth or survival. Expression of its hydrogenase large subunit gene (hucL) was quantified by reverse transcription quantitative PCR (RT–qPCR). Under ambient conditions, this gene was expressed at significantly higher levels during aerobic growth on organic carbon sources (mid-exponential phase; av. 2.9 × 10⁷ copies per gdw) than during survival (4 d in stationary phase; av. 1.5 × 10⁶ copies per gdw; P = 0.006) (Fig. 5a,b). This expression pattern is similar to that of other organisms possessing a group 2a [NiFe]-hydrogenase9 and is antithetical to that of the group 1h and 1l [NiFe]-hydrogenases, which are typically induced by starvation1. The activity of the hydrogenase was monitored under the same two conditions by following the depletion of headspace H2 mixing ratios over time by gas chromatography. H2 was rapidly oxidized by exponentially growing cultures to sub-atmospheric concentrations within 30 h, whereas negligible consumption occurred in stationary-phase cultures (Fig. 5c). Together, these findings suggest that S. alaskensis can grow mixotrophically in marine waters by simultaneously consuming dissolved H2 and available organic substrates. These findings align closely with those observed for other organisms harbouring group 2a [NiFe]-hydrogenases9,10 and support the inferences from thermodynamic modelling (Fig. 4) that H2 probably supports growth of some marine bacteria.
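To make the power-per-cell reasoning concrete, here is a sketch of equation (4) from the Methods with illustrative placeholder numbers: the rate, cell density and dissolved-gas activities below are hypothetical, and the standard Gibbs energy of H2 oxidation is an approximate textbook value.

```python
import math

# P = v * dG_r / B (equation 4 of the Methods), sketched with placeholders.
R = 8.314          # J mol^-1 K^-1
T = 293.15         # K (20 degrees C)
dG0 = -237e3       # J mol^-1, approx. standard Gibbs energy of H2 + 0.5 O2 -> H2O

# In situ correction dG = dG0 + RT ln(Q), with Q = 1 / (a_H2 * a_O2^0.5);
# the activities below are illustrative only.
a_h2, a_o2 = 2.0e-9, 2.5e-4
dG = dG0 + R * T * math.log(1 / (a_h2 * math.sqrt(a_o2)))

v = 1.0e-12        # mol H2 oxidized per litre per second (placeholder)
B = 1.0e7          # H2-oxidizing cells per litre (placeholder)

power_per_cell = v * abs(dG) / B
print(f"{power_per_cell:.1e} W per cell")   # ~1.8e-14 W with these numbers
```

With these placeholder values the result lands near the 10⁻¹⁴ W scale discussed above, comfortably above the median maintenance power of 1.9 × 10⁻¹⁵ W per cell.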
### H2 and CO oxidation capacity changes with water depth

Finally, we investigated the environmental correlates of the abundance and expression of trace gas oxidation genes in the Tara Oceans datasets (Fig. 6 and Supplementary Table 6). Linear correlation analysis confirmed that genes encoding the aerobic H2-uptake hydrogenases (R² = 0.22, P < 0.0001) and the CO dehydrogenase (R² = 0.72, P < 0.0001) both significantly increased with depth (Fig. 6), as illustrated by their increased abundance in the metagenomes from mesopelagic waters (Fig. 2). This contrasts with the sharp decreases with depth in the genes responsible for phototrophy, such as energy-converting rhodopsins (R² = 0.59, P < 0.0001) (Figs. 2 and 6). This pattern was consistent across sites in the Atlantic, Indian, Pacific and Southern Oceans. These findings suggest that as light, and hence energy availability, decreases, there is a greater selective advantage for bacteria that use trace gases (lithoheterotrophy) rather than photosynthesis (photoheterotrophy).

These inferences were nuanced, after accounting for co-correlated variables (Supplementary Fig. 10), by random forest modelling (Fig. 6 and Supplementary Figs. 11 and 12). Depth was among the top three strongest predictors of the abundance of group 1l and 2a [NiFe]-hydrogenases, CO dehydrogenase and energy-converting rhodopsins (Fig. 6 and Supplementary Fig. 11). Latitude proved to be a strong predictor of the expression of the group 1l [NiFe]-hydrogenases and CO dehydrogenases, the latter peaking in the tropics (Fig. 6 and Supplementary Figs. 11–13). One explanation for the latter is that in tropical waters, increased photochemical and thermochemical CO production enhances substrate availability for CO oxidizers. These observations are consistent with the inverse CO and H2 oxidation rates observed across the Munida transect (Fig. 1), as well as previously reported latitudinal variations in seawater concentrations of these gases23,24,25,26,27,28. In contrast, group 1d [NiFe]-hydrogenase gene abundance and expression levels were highest in hypoxic waters (Fig. 6 and Supplementary Fig. 14); this suggests that, in contrast to its high-affinity oxygen-insensitive counterparts, this hydrogenase will be most transcribed when H2 levels are elevated due to hypoxic fermentation (resulting in activation of the sensory hydrogenase) and most active when O2 levels are low enough to minimize active-site inhibition38,51. Collectively, our analyses suggest that there are complex environmental controls on the abundance and activities of marine trace gas oxidizers, and that the three H2-uptake hydrogenases are ecophysiologically distinct.

## Discussion

Through an integrative approach, we provide what is, to our knowledge, the first demonstration that H2 is an important energy source for seawater communities. The biogeochemical, metagenomic and thermodynamic modelling analyses together suggest that H2 is oxidized by a diverse but small proportion of community members, but at sufficiently fast cell-specific rates to enable lithotrophic growth. These findings are supported by experimental observations that the ultramicrobacterium S. alaskensis consumes H2 during heterotrophic growth. Marine bacteria with the capacity to oxidize H2 probably gain a major competitive advantage from being able to consume this abundant, diffusible, high-energy gas.
H2-oxidizing marine microorganisms are globally distributed, although activity measurements and hydrogenase distribution profiles suggest complex controls on their activity and indicate that they may be particularly active in low-chlorophyll waters. In contrast, our findings support the view that CO oxidation is a widespread trait that enhances the metabolic flexibility and, most likely, primarily the survival of habitat generalists30,31, especially in high-chlorophyll waters. At the biogeochemical scale, our findings indicate that marine bacteria mitigate atmospheric H2 emissions19 and potentially account for the undersaturation of H2 in Antarctic waters28.

Yet a major enigma remains. H2 and CO are among the most dependable energy sources in the sea given their relatively high concentrations and energy yields. So why do relatively few bacteria harness them? By comparison, soils are net sinks for these trace gases given that the numerous bacteria present rapidly consume them12. We propose the straightforward explanation that the resource investment required to make the metalloenzymes that harness these trace gases may not always be justified by the energy gained. In the acutely iron-limited ocean, hydrogenases (containing 12–13 Fe atoms per protomer11) and, to a lesser extent, CO dehydrogenases (containing 4 Fe atoms per protomer63) are a major investment. This trade-off is likely to be most pronounced in the surface ocean, where solar energy can be harvested using minimal resources through energy-converting rhodopsins. However, the iron investment required to consume H2 and CO is likely to be justified in energy-limited waters at depths and in regions or seasons where primary production is low. This is consistent with the observed enrichment of hydrogenases and CO dehydrogenases in metagenomes from mesopelagic waters, as well as the increased H2 oxidation observed in subantarctic waters. Moreover, iron availability is typically higher in deeper circulating waters and around continental shelves (due to both deep-water upwelling and terrestrial inputs), where high hydrogenase expression and activity were observed64. Thus, oceans continue to be a net source of H2 and CO despite the importance of these energy sources for diverse marine bacteria.

## Methods

### Sample collection and characteristics

To determine the ability of marine microbial communities to oxidize trace gases, a total of 14 marine surface water samples were collected from three different locations (Supplementary Fig. 1). Eight samples were collected from across the Munida Microbial Observatory Time-Series transect (Otago, New Zealand)65 on 23 July 2019 in calm weather on the RV Polaris II. This marine transect begins off the coast of Otago, New Zealand and extends through neritic, subtropical and subantarctic waters65. Eight equidistant stations were sampled travelling east, ranging from approximately 15 km to 70 km from Taiaroa Head. At each station, water was collected at 1 m depth using Niskin bottles and stored in two 1 l autoclaved bottles. One bottle was reserved for DNA filtration and extraction, whereas the other was used for microcosm incubation experiments. The vessel measured changes in salinity and temperature to determine the boundaries of each water mass (Supplementary Fig. 2). Four samples were also collected from the temperate Port Phillip Bay at Carrum Beach (Victoria, Australia) on 20 March 2019, and two were collected from the tropical Heron Island (Queensland, Australia) on 9 July 2019.
At both sites, near-shore surface microlayer and surface water samples were collected in the subtidal zone (water depth ca. 1 m). At Port Phillip Bay, two samples were also collected at 7.5 km and 15 km east of the mouth of the Patterson River, labelled 'Intermediate' and 'Centre' respectively. In all cases, surface water samples of 3 l were collected with a sterile Schott bottle from approximately 20 cm depth and aliquoted for microcosm incubation and DNA extraction. Surface microlayer samples were collected using a manual glass-plate sampler of 1,800 cm² surface area66. A total of 520–580 ml was collected in 150–155 dips, resulting in an average sampling thickness of 20 µm. For the surface microlayer samples, 180 ml was reserved for microcosm incubations, with the remaining volume used for DNA extraction. From all transects, each sample reserved for DNA extraction was vacuum-filtered using 0.22 µm polycarbonate filters and then stored at −80 °C until extraction.

### Measurement of dissolved H2 and CO

Dissolved gases were also sampled in situ at each transect to measure dissolved concentrations of CO and H2. Serum vials (160 ml) were filled with seawater using a gas-tight tube, allowing approximately 300 ml to overflow. The vial was then sealed with a treated lab-grade butyl rubber stopper, avoiding the introduction of gas to the vial. An ultra-pure N2 headspace (20 ml) was introduced to the vial by concurrently removing 20 ml of liquid using two gas-tight syringes. The vials were then shaken vigorously for 2 min before being equilibrated for 5 min to allow dissolved gases to enter the headspace. Of the headspace, 17 ml was then collected into a syringe flushed with N2 by returning the removed liquid to the vial, and 2 ml was purged to flush the stopcock and needle before injecting the remaining 15 ml into a N2-flushed and evacuated silicone-closed Exetainer67 for storage. Exetainers were sealed with a stainless-steel bolt and O-ring and stored until measurement. H2 and CO concentrations in the Exetainers were analysed by gas chromatography using a pulse discharge helium ionization detector (model TGA-6792-W-4U-2, Valco Instruments), as previously described68, calibrated against standard CO and H2 gas mixtures of known concentrations.

### Ex situ activity assays

To determine the ability of these marine microbial communities to oxidize CO and H2, the seawater samples were incubated with these gases under laboratory conditions and their concentrations over time were measured using gas chromatography. For each sample, triplicate microcosms were set up in which seawater was transferred into foil-wrapped serum vials (60 ml seawater in 120 ml vials for the Munida transect and Port Phillip Bay; 80 ml seawater in 160 ml vials for Heron Island) and sealed with treated lab-grade butyl rubber stoppers67. For each sampling location, one set of triplicates was also autoclaved and used as a control. The ambient-air headspace of each vial was spiked with H2 and CO so that they reached initial headspace mixing ratios of either 2 ppmv (Munida transect and Port Phillip Bay) or 10 ppmv (Heron Island). Microcosms were continuously agitated at 20 °C on a shaker table at 100 r.p.m. For Munida and Port Phillip Bay samples, 1 ml samples were extracted daily from the headspace and their content was measured by gas chromatography as described above.
For Heron Island samples, at each timepoint, 6 ml of gas was extracted and stored in 12 ml UHP-He-flushed conventional Exetainers (2018) or pre-evacuated 3 ml silicone-sealed Exetainers67.

### Calculation of dissolved gas concentrations

The concentrations of dissolved gases in seawater at equilibrium and at 1 atmosphere pressure were calculated according to the Sechenov relation for mixed electrolyte solutions, as described in ref. 69:

$$\log \left( \frac{k_{G,0}}{k_G} \right) = \sum_i \left( h_i + h_G \right) c_i \qquad (1)$$

where $k_{G,0}$ and $k_G$ denote the gas solubility (or, equivalently, Henry's law constant) in water and in the mixed electrolyte solution, respectively, $h_i$ is a constant specific to the dissolved ion i (m³ kmol⁻¹), $h_G$ is a gas-specific parameter (m³ kmol⁻¹) and $c_i$ represents the concentration of the dissolved ion i in solution (kmol m⁻³). The gas-specific constant $h_G$ at temperature T (in K) follows the equation:

$$h_G = h_{G,0} + h_T\left( T - 298.15 \right) \qquad (2)$$

where $h_{G,0}$ represents the value of $h_G$ at 298.15 K and $h_T$ is a gas-specific parameter for the temperature effect (m³ kmol⁻¹ K⁻¹). The gas solubility parameter $k_{G,0}$ at temperature T follows the combined Henry's law and van 't Hoff equation:

$$k_{G,0} = k_{G,0}' \times e^{\frac{-\Delta_{\mathrm{soln}}H}{R}\left( \frac{1}{T} - \frac{1}{298.15} \right)} \qquad (3)$$

where $k_{G,0}'$ denotes Henry's law constant of the gas at 298.15 K, $\Delta_{\mathrm{soln}}H$ is the enthalpy of solution and R is the ideal gas constant. The concentrations of dissolved gases at equilibrium with the headspace gas phase at 1 atmosphere pressure and an incubation temperature of 20 °C were calculated on the basis of a mean seawater composition as reported in ref. 70. The salinity-correcting constants $h_i$, $h_{G,0}$ and $h_T$ were adopted from ref. 69, while the temperature-correcting constants $k_{G,0}'$ and $-\Delta_{\mathrm{soln}}H/R$ were obtained from ref. 71.

### Kinetic analysis and thermodynamic modelling

For kinetic analysis, measurement timepoints of up to 30 d of incubation time were used. The gas consumption pattern was fitted with both an exponential model and a linear model; the former showed the lowest overall Akaike information criterion value for both H2 and CO consumption (Supplementary Table 1). As such, first-order reaction rate constants were calculated and used for the kinetic modelling. In addition, only samples having at least two replicates with a positive rate constant were deemed to show confident gas consumption. Bulk atmospheric gas oxidation rates for each sample were calculated with respect to the mean atmospheric mixing ratio of the corresponding trace gases (H2: 0.53 ppmv; CO: 0.09 ppmv; CH4: 1.9 ppmv). To estimate the cell-specific gas oxidation rate, the average direct cell count values reported for surface seawaters at the Port Phillip Bay centre72 and the eight stations along the Munida transect were used65,73. Assuming that all cells are viable and active, cell-specific gas oxidation rates were then inferred by multiplying the estimated relative abundance of trace gas oxidizers derived from the metagenomic short reads (the average gene copy number, assuming one copy per organism; see 'Metabolic annotation' below) by the cell counts to obtain the number of trace gas oxidizers.
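As an illustration of the first-order fit described above, here is a minimal sketch; the timepoints and mixing ratios are invented, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical headspace H2 mixing ratios from one microcosm (ppmv vs hours)
t = np.array([0.0, 24.0, 48.0, 72.0, 96.0])
h2 = np.array([2.0, 1.30, 0.84, 0.55, 0.36])

def first_order(t, c0, k):
    # c(t) = c0 * exp(-k t): first-order consumption
    return c0 * np.exp(-k * t)

(c0_fit, k_fit), _ = curve_fit(first_order, t, h2, p0=(2.0, 0.01))
print(f"k ≈ {k_fit:.4f} per hour")

# A bulk oxidation rate at a reference mixing ratio (e.g. atmospheric H2 at
# 0.53 ppmv) then follows as k times the corresponding dissolved concentration.
```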
To estimate the energetic contributions of H2 and CO oxidation to the corresponding marine trace gas oxidizers, we performed thermodynamic modelling to calculate their respective theoretical energy yields according to the first-order kinetics of each sample estimated above. Power (Gibbs energy per unit time per cell) $P$ follows the equation:

$$P = \frac{v \times \Delta G_{\mathrm{r}}}{B} \quad (4)$$

where $v$ denotes the rate of substrate consumption per litre of seawater (mol l$^{-1}$ s$^{-1}$) and $B$ is the number of microbial cells (cells l$^{-1}$) performing the reactions H2 + 0.5 O2 → H2O (dihydrogen oxidation) and CO + 0.5 O2 → CO2 (carbon monoxide oxidation). $\Delta G_{\mathrm{r}}$ represents the Gibbs free energy of the reaction at the experimental conditions (J mol$^{-1}$) and follows the equation:

$$\Delta G_{\mathrm{r}} = \Delta G_{\mathrm{r}}^0 + RT\,\ln Q_{\mathrm{r}} \quad (5)$$

where $\Delta G_{\mathrm{r}}^0$ denotes the standard Gibbs free energy of the reaction, $Q_{\mathrm{r}}$ denotes the reaction quotient, $R$ represents the ideal gas constant and $T$ represents temperature in Kelvin. Values of $\Delta G_{\mathrm{r}}^0$ of the hydrogen oxidation and carbon monoxide oxidation were obtained from ref. 74. Values of $Q_{\mathrm{r}}$ for each reaction were calculated using:

$$Q_{\mathrm{r}} = \prod_i a_i^{n_i} \quad (6)$$

where $a_i$ and $n_i$ denote the dissolved concentration of the $i$th species in seawater and the stoichiometric coefficient of the $i$th species in the reaction of interest, respectively. Gibbs free energies were calculated for oxidation of hydrogen and carbon monoxide at atmospheric pressure and 20 °C incubation temperature. To contextualize cellular power yield from H2 and CO oxidation in relation to reported cellular energy requirements, a comprehensive list of maintenance (endogenous rate) and growth (active rate) power requirements of 121 organoheterotrophic bacteria at 20 °C reported in ref. 53 was used as the primary reference. A median maintenance energy of 1.9 × 10$^{-15}$ W per cell was derived from the bacterial endogenous rates obtained in the supporting information sd01 of the above reference.

### Metagenomic sequencing and assembly

DNA was extracted from the sample filters using the DNeasy PowerSoil kit (QIAGEN) following the manufacturer’s instructions. Sample libraries, including an extraction blank control, were prepared with the Nextera XT DNA Sample Preparation kit (Illumina) and sequenced on an Illumina NextSeq500 platform (2 × 151 bp) at the Australian Centre for Ecogenomics (University of Queensland). An average of 20,122,526 read pairs were generated per sample, with 827,868 read pairs sequenced in the negative control (Supplementary Table 2). Raw metagenomic data were quality controlled with the BBTools suite v38.90 (https://sourceforge.net/projects/bbmap/), using BBDuk to remove the 151st base, trim adapters, filter PhiX reads, trim the 3' end at a quality threshold of 15 and discard reads below 50 bp in length. Reads detected in the extraction blank were additionally removed with BBMap v38.90, leaving a total of 97.7% of raw sample reads for further analysis. Taxonomy was profiled from high-quality short reads by assembling and classifying 16S rRNA and 18S rRNA genes with PhyloFlash v3.4 (ref. 75). Short reads were assembled individually with metaSPAdes v3.14.1 (ref. 76) and collectively (all samples together, and by location) with MEGAHIT v1.2.9 (ref. 77). Coverage profiles for each contig were generated by mapping the short reads to the assemblies with BBMap v38.90 (ref. 78).
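Returning to the thermodynamic model above (equations (4)–(6)), the sketch below shows how a per-cell power estimate can be assembled in Python. The dissolved concentrations, rate and cell numbers are hypothetical placeholders, and activity coefficients are ignored for simplicity.

    import numpy as np

    R, T = 8.314, 293.15          # J mol^-1 K^-1; 20 degC in K
    dG0 = -237e3                  # hypothetical standard Gibbs energy, J mol^-1

    # Hypothetical dissolved activities (mol l^-1) for H2 + 0.5 O2 -> H2O
    a = {"H2": 0.4e-9, "O2": 230e-6, "H2O": 1.0}
    # Reaction quotient: products over reactants, each raised to its coefficient
    Q = a["H2O"] / (a["H2"] * a["O2"] ** 0.5)

    dGr = dG0 + R * T * np.log(Q)  # equation (5)

    v = 1e-13                      # hypothetical consumption rate, mol l^-1 s^-1
    B = 3e8                        # hypothetical number of oxidizers per litre
    P = v * dGr / B                # equation (4); negative means exergonic
    print(f"dGr = {dGr/1e3:.1f} kJ mol^-1, |P| = {abs(P):.2e} W per cell")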
Genome binning was performed with MetaBAT2 v2.15.5 (ref. 79), MaxBin 2 v2.2.7 (ref. 80) and CONCOCT v1.1.0 (ref. 81) after setting each tool to retain only contigs ≥2,000 bp in length. For each assembly, resulting bins were dereplicated across binning tools with DAS_Tool v1.1.3 (ref. 82). All bins were refined with RefineM v0.1.2 (ref. 83) and consolidated into a final set of non-redundant metagenome-assembled genomes (MAGs) at the default 99% average nucleotide identity using dRep v3.2.2 (ref. 84). The completeness, contamination and strain heterogeneity of each MAG were calculated with CheckM v1.1.3 (ref. 85), resulting in a total of 21 high-quality (>90% completeness, <5% contamination86) and 89 medium-quality (>50% completeness, <10% contamination86) MAGs. Taxonomy was assigned to each MAG with GTDB-Tk v1.6.0 (ref. 87) (using GTDB release 202)88 and open reading frames were predicted from each MAG and additionally across all contigs (binned and unbinned) with Prodigal v2.6.3 (ref. 89). CoverM v0.6.1 (https://github.com/wwood/CoverM) ‘genome’ was used to calculate the relative abundance of each MAG in each sample (--min-read-aligned-percent 0.75, --min-read-percent-identity 0.95, --min-covered-fraction 0) and the mean read coverage per MAG across the dataset (-m mean, --min-covered-fraction 0). For global comparisons, raw metagenome (PRJEB1787) and metatranscriptome (PRJEB6608) data from the Tara Oceans global dataset were downloaded from the European Nucleotide Archive47,50. In addition, 1,888 bacterial and archaeal MAGs generated in ref. 90 were downloaded (via https://www.genoscope.cns.fr/tara/).

### Metabolic annotation

For both the metagenomes generated in this study and those from the Tara Oceans dataset, high-quality short reads and predicted proteins from assemblies and MAGs underwent metabolic annotation using DIAMOND v2.0.9 (--max-target-seqs 1, --max-hsps 1)91 for alignment against a custom set of 50 metabolic marker protein databases. The marker proteins (https://doi.org/10.26180/c.5230745) cover the major pathways for aerobic and anaerobic respiration, energy conservation from organic and inorganic compounds, carbon fixation, nitrogen fixation and phototrophy4. Gene hits were filtered as follows: alignments were filtered to retain only those either at least 40 amino acids in length (150 bp metagenomes from the current study), 32 amino acids in length (100 bp Tara metagenomes and metatranscriptomes) or with at least 80% query or 80% subject coverage (predicted proteins from assemblies and MAGs). Alignments were further filtered by a minimum percentage identity score by protein: for short reads, this was 80% (PsaA), 75% (HbsT), 70% (PsbA, IsoA, AtpA, YgfK and ARO), 60% (CoxL, MmoA, AmoA, NxrA, RbcL, NuoF, FeFe hydrogenases and NiFe Group 4 hydrogenases) or 50% (all other genes). For predicted proteins, the same thresholds were used except for AtpA (60%), PsbA (60%), RdhA (45%), Cyc2 (35%) and RHO (30%). For short reads, gene abundance in the community was estimated as ‘average gene copies per organism’ by dividing the abundance of the gene (in reads per kilobase million, RPKM) by the mean abundance of 14 universal single-copy ribosomal marker genes (in RPKM, obtained from the SingleM v0.13.2 package, https://github.com/wwood/singlem). For single-copy metabolic genes, this corresponds to the proportion of community members that encode the gene.
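As a concrete illustration of the read-based normalization just described, the following pandas sketch converts gene RPKM values into ‘average gene copies per organism’ by dividing by the mean RPKM of universal single-copy marker genes. The gene names and numbers are hypothetical.

    import pandas as pd

    # Hypothetical RPKM values for one sample
    genes = pd.Series({"coxL": 12.0, "hucL": 4.5})       # metabolic marker genes
    markers = pd.Series({"rplB": 60.0, "rpsC": 55.0})    # two of the 14 single-copy markers

    copies_per_organism = genes / markers.mean()
    # For a single-copy gene, ~0.21 means roughly 21% of cells encode coxL
    print(copies_per_organism)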
A linear correlation analysis, performed in GraphPad Prism 9, was used to determine how metagenomic gene abundance correlated with ex situ H2 and CO oxidation rates. For the Tara Oceans dataset, the RNA:DNA ratio was calculated by dividing gene abundance in the metatranscriptome (in RPKM) by the gene abundance in the corresponding metagenome (RPKM) to examine gene expression relative to abundance. Where replicate metagenomes or metatranscriptomes were present, RPKM values were averaged by sample.

### Phylogenetic analysis

Phylogenetic trees were constructed to understand the distribution and diversity of marine microorganisms capable of H2 and CO oxidation. Trees were constructed for the catalytic subunits of the groups 1 and 2 [NiFe]-hydrogenases, groups 3 and 4 [NiFe]-hydrogenases, and the form I CO dehydrogenase (CoxL). In all cases, protein sequences retrieved from the MAGs by homology-based searches were aligned against a subset of reference sequences from custom protein databases6,49 using ClustalW in MEGA11 (ref. 92). In brief, evolutionary relationships were visualized by constructing a maximum-likelihood phylogenetic tree; specifically, initial trees for the heuristic search were obtained automatically by applying Neighbour-Join and BioNJ algorithms to a matrix of pairwise distances estimated using a Jones-Taylor-Thornton (JTT) model, and then selecting the topology with the superior log likelihood value within MEGA11. All residues were used and trees were bootstrapped with 50 replicates. Phylogenetic tree annotation and visualization were performed using iTOL (v6.6).

### Environmental driver analysis

Random forest models, Pearson correlations and Spearman correlations were generated for the Tara Oceans dataset to identify significant correlations between sample environmental metadata and the normalized abundance of carbon monoxide dehydrogenase, rhodopsin and [NiFe] groups 1d, 1e, 1l, 2a, 3b and 3d hydrogenase genes (shown as copies per organism for metagenomes, log10(RPKM + 1) for metatranscriptomes). To account for collinearity, where environmental variables were highly correlated (|Pearson coefficient| > 0.7, Supplementary Fig. 10), one was excluded from the random forest models to avoid the division of variable importance across those features. These excluded variables were selected at random, unless they were highly correlated with depth (which was kept). Then, using imputed values where data were missing (function rfImpute()), a random forest model was generated for each gene above using the environmental variables marked in Supplementary Table 6 as predictors (importance = TRUE, ntree = 3,000), using the R package randomForest93. All combinations of the above genes and environmental variables were additionally correlated with Pearson’s and Spearman’s rank correlations, omitting missing values and adjusting all P values with the false discovery rate correction.

### Culture-based growth and gas consumption analysis

Axenic cultures of three bacterial strains were analysed in this study: Sphingopyxis alaskensis (RB2256)56,57 obtained from UNSW Sydney, Robiginitalea biformata DSM-15991 (ref. 54) imported from DSMZ and Marinovum algicola FF3 (Rhodobacteraceae)55 imported from DSMZ. Cultures were maintained in 120 ml glass serum vials containing a headspace of ambient air (H2 mixing ratio ~0.5 ppmv) sealed with treated lab-grade butyl rubber stoppers67.
Broth cultures of all three species were grown in 30 ml of Difco 2216 Marine Broth media and incubated at 30 °C at an agitation speed of 150 r.p.m. in a Ratek orbital mixer incubator with access to natural day/night cycles. Growth was monitored by determining the optical density (OD$_{600}$) of periodically sampled 1 ml extracts using an Eppendorf BioSpectrophotometer. The ability of the three cultures to oxidize H2 was measured by gas chromatography. Cultures in biological triplicate were opened, equilibrated with ambient air (1 h) and resealed. These re-aerated vials were then amended with H2 (via 1% v/v H2 in N2 gas cylinder, 99.999% pure) to achieve final headspace concentrations of ~10 ppmv. Headspace mixing ratios were measured immediately after closure and at regular intervals thereafter until the limit of quantification of the gas chromatograph was reached (42 ppbv H2). This analysis was performed for both exponential (OD$_{600}$ 0.67 for S. alaskensis) and stationary phase cultures (~72 h post OD$_{max}$ for S. alaskensis).

### RT–qPCR analysis

Quantitative reverse transcription PCR (RT–qPCR) was used to determine the expression levels of the group 2a [NiFe]-hydrogenase large subunit gene (hucL; locus Sala_3198) in S. alaskensis during growth and survival. For RNA extraction, triplicate 30 ml cultures of S. alaskensis were grown synchronously in 120 ml sealed serum vials. Cultures were grown to either exponential phase (OD$_{600}$ 0.67) or stationary phase (48 h post OD$_{max}$ ~3.2). Cells were then quenched using a glycerol-saline solution (−20 °C, 3:2 v/v), collected by centrifugation (20,000 × g, 30 min, −9 °C), resuspended in 1 ml cold 1:1 glycerol:saline solution (−20 °C) and further centrifuged (20,000 × g, 30 min, −9 °C). Briefly, resultant cell pellets were resuspended in 1 ml TRIzol reagent (Thermo Fisher), mixed with 0.1 mm zircon beads (0.3 g) and subjected to bead beating (three 30 s on/30 s off cycles, 5,000 r.p.m.) in a Precellys 24 homogenizer (Bertin Technologies) before centrifugation (12,000 × g, 10 min, 4 °C). Total RNA was extracted using the phenol-chloroform method following the manufacturer’s instructions (TRIzol reagent user guide, Thermo Fisher) and resuspended in diethylpyrocarbonate-treated water. RNA was treated using the TURBO DNA-free kit (Thermo Fisher) following the manufacturer’s instructions. RNA concentration and purity were confirmed using a NanoDrop ND-1000 spectrophotometer. Complementary DNA was synthesized using a SuperScript III First-Strand Synthesis System kit for RT–qPCR (Thermo Fisher) with random hexamer primers, following the manufacturer’s instructions. RT–qPCR was performed in a QuantStudio 7 Flex Real-Time PCR System (Applied Biosystems) using a LightCycler 480 SYBR Green I Master Mix (Roche) in 96-well plates according to the manufacturer’s instructions. Primers were designed using Primer3 (ref. 94) to target the hucL gene (HucL_fw: AGCTACACAAACCCTCGACA; HucL_rvs: AGTCGATCATGAACAGGCCA) and the 16S rRNA gene as a housekeeping gene (16S_fwd: AACCCTCATCCCTAGTTGCC; 16S_rvs: GGTTAGAGCATTGCCTTCGG). Copy numbers for each gene were interpolated from standard curves of each gene created from threshold cycle (CT) values of amplicons that were serially diluted from 10$^8$ to 10 copies (R$^2$ > 0.98). Hydrogenase expression data were then normalized to the housekeeping gene in exponential phase. All biological triplicate samples, standards and negative controls were run in technical duplicate.
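As an illustration of the standard-curve interpolation and housekeeping normalization described above, here is a small Python sketch. The slope, intercept and CT values are hypothetical placeholders.

    def copies_from_ct(ct, slope=-3.32, intercept=38.0):
        # Interpolate copy number from a hypothetical standard curve:
        # CT = slope * log10(copies) + intercept
        return 10 ** ((ct - intercept) / slope)

    # Hypothetical mean CT values (biological triplicates already averaged)
    hucL = {"exponential": 24.1, "stationary": 20.8}
    rrs = {"exponential": 12.3, "stationary": 12.9}   # 16S housekeeping gene

    norm = {p: copies_from_ct(hucL[p]) / copies_from_ct(rrs[p]) for p in hucL}
    # Express relative to exponential phase, as in the text
    fold_change = norm["stationary"] / norm["exponential"]
    print(f"hucL changes {fold_change:.1f}-fold in stationary phase (hypothetical numbers)")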
A Student’s t-test in GraphPad Prism 9 was used to compare hucL expression levels between exponential and stationary phases.

### Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
https://tex.stackexchange.com/questions/509813/alignment-of-individual-rows-in-left-justified-system-of-equations
# Alignment of individual rows in left-justified system of equations

I typeset a general system of equations, which involved some "vertical" dots. But I can't figure out how to make those dots appear in the center. Moreover, I actually want to make two streams of vertical dots. Would appreciate some help.

    $$\begin{cases}
    u_{1,t}=\nabla\cdot D(u_{1})\nabla u_{1,t}+f_1(c_1, \dots, c_m, u_1, \dots, u_n)\\
    \vdots \\
    u_{n,t}=\nabla\cdot D(u_{n})\nabla u_{n,t}+f_2(c_1, \dots, c_m, u_1, \dots, u_n)\\
    c_{1,t}=\Delta c_1-g_1(c_1, \dots, c_m, u_1, \dots, u_n)\\
    \vdots \\
    c_{m,t}=\Delta c_m-g_m(c_1, \dots, c_m, u_1, \dots, u_n)\\
    \end{cases}\tag{II}$$

I would not use cases for this, which is better suited for use on the right-hand side of an equation. Rather I would use an aligned block, aligning on the equality signs, and enclosed in a \left\{ ... \right. pair to produce the left brace. MadyYuvi's suggestion to use \vdotswithin is also a good aid.

    \documentclass{article}
    \usepackage{mathtools}
    \begin{document}
    \[
    \left\{
    \begin{aligned}
    u_{1,t}&=\nabla\cdot D(u_{1})\nabla u_{1,t}+f_1(c_1, \dots, c_m, u_1, \dots, u_n)\\
    &\vdotswithin{=} \\
    u_{n,t}&=\nabla\cdot D(u_{n})\nabla u_{n,t}+f_2(c_1, \dots, c_m, u_1, \dots, u_n)\\
    c_{1,t}&=\Delta c_1-g_1(c_1, \dots, c_m, u_1, \dots, u_n)\\
    &\vdotswithin{=} \\
    c_{m,t}&=\Delta c_m-g_m(c_1, \dots, c_m, u_1, \dots, u_n)\\
    \end{aligned}
    \right. \tag{II}
    \]
    \end{document}

Use the \vdotswithin tag, which comes along with the package mathtools; the code is as follows:

    \documentclass{book}
    \usepackage{mathtools}
    \begin{document}
    $$\begin{cases}
    u_{1,t}=\nabla\cdot D(u_{1})\nabla u_{1,t}+f_1(c_1, \dots, c_m, u_1, \dots, u_n)\\
    \vdotswithin{u_{n,t}=\nabla\cdot D(u_{n})\nabla u_{n,t}+f_2(c_1, \dots, c_m, u_1, \dots, u_n)} \\
    u_{n,t}=\nabla\cdot D(u_{n})\nabla u_{n,t}+f_2(c_1, \dots, c_m, u_1, \dots, u_n)\\
    c_{1,t}=\Delta c_1-g_1(c_1, \dots, c_m, u_1, \dots, u_n)\\
    \vdotswithin{u_{n,t}=\nabla\cdot D(u_{n})\nabla u_{n,t}+f_2(c_1, \dots, c_m, u_1, \dots, u_n)} \\
    c_{m,t}=\Delta c_m-g_m(c_1, \dots, c_m, u_1, \dots, u_n)\\
    \end{cases}\tag{II}$$
    \end{document}

Output:

Please read the documentation of the mathtools package, page 22, for further details.

• I'm using Kile (XeLaTeX). For some reason it complains when I try to compile a file with it: l.263 \vdotswithin\\ ! Misplaced \cr. \reserved@b ->\ifnum 0=`{\fi }\${}\cr – sequence Oct 9 '19 at 15:34
• @sequence The mentioned error may be due to a version issue, please update to the current setup and try... – MadyYuvi Oct 10 '19 at 4:40
• Thanks, it looks like it is working now. – sequence Oct 10 '19 at 4:49
http://thawom.com/sec-taylor.html
## 15.5 Taylor Series

Now we come to a technique that’s very important in physics and engineering, and can be used to calculate logarithms, exponentials, and trigonometric functions to any desired precision. Before we start I’d like to refresh your memory on some notations that will be used in this section. The factorial notation, for example $5! = 5\times4\times3\times2\times1 = 120$ is covered in Section 13.3: Factorials. Sigma notation, for example $\sum_{n=3}^6 n^2 = 3^2 + 4^2 + 5^2 + 6^2$ is covered in Section 7.7: Sequences. I’ll also be using Lagrange’s notation for higher derivatives, for example $f^{(4)}(x)$ means the fourth derivative of $f(x)$, i.e. the derivative of $f'''(x)$ (see Section 15.3.5: Higher Derivatives). For the sake of simplicity, I’ll be using $f^{(0)}(x)$ to mean $f(x)$, which could be considered as the ‘zeroth’ derivative.

Suppose we want an approximation for $e^x$ for small values of $x$. Imagine that we want to find $e^{0.1}$ for example, and calculators and log tables haven’t been invented yet. First of all, we know it’s approximately $1$, because $e^0 = 1$. Already that’s a pretty good approximation (the real value is about $1.10517$) but how can we make it better?

##### Question 15.5.1

What if we decided to approximate $e^x$ with a linear function $a+bx$, and we want to choose $a$ and $b$ so that the linear function is tangent to $e^x$ at $x=0$. In other words, we want the values of the functions to match when $x=0$ (so that they pass through the same point) and we want their derivatives to match there too (so that they have the same slope). What would $a$ and $b$ be? What is our approximation for $e^{0.1}$ now?

##### Question 15.5.2

Now maybe we decide that we want an even better approximation, so not only do we want the values and the first derivatives to match when $x=0$, but also the second derivatives, so we choose a quadratic function, $a+bx + cx^2$. Find $a$, $b$, and $c$. What is our new approximation for $e^{0.1}$?

##### Question 15.5.3

What if we now decide we also want the third, fourth, and fifth derivatives to match $f(x)=e^x$ when $x=0$, so we use a quintic function $g(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5.$ In other words, we want $f(0)=g(0)$, $f'(0)=g'(0)$, and $f''(0) = g''(0)$ as before, but also $f'''(0)=g'''(0)$, $f^{(4)}(0) = g^{(4)}(0)$, and $f^{(5)}(0) = g^{(5)}(0)$. Find the coefficients $a_0$ to $a_5$.

##### Question 15.5.4

We could continue this process indefinitely. Write $e^x$ as a polynomial of infinite degree (using sigma notation).

What we just found is called the Taylor series for $e^x$ (or Maclaurin series since it’s around $x=0$). Many mathematicians were involved in inventing Taylor series, including Brook Taylor, James Gregory, Colin Maclaurin, and Madhava of Sangamagrama, but Taylor was the first one to write down the general method. A Taylor series can be found for any function we can keep differentiating, but it doesn’t always behave nicely (the series may be divergent for some values of $x$ – see Question 15.5.8 below). In the case of $e^x$ it always converges though, and we can even use this Taylor series as an alternative definition of $e^x$ and say that they are the same thing. This is the basic idea that calculators use to find $e^x$. They use as many terms of the Taylor series as they want to get the desired number of significant figures.
(To find something like $e^{2.3}$ they might calculate $e\times e\times e^{0.3}$, using the Taylor series to find $e^{0.3}$, rather than apply the Taylor series directly with $x=2.3$, which would take longer to converge.)

##### Question 15.5.5

Let’s apply the same idea to $\sin x$. Write down a polynomial of infinite degree with the same value and (higher) derivatives as $\sin x$ when $x=0$.

Infinite series for sine and other trigonometric functions were found much earlier than for $e^x$, by Madhava of Sangamagrama around the fourteenth century. We can see on the following graphs how the approximations to sine become better and better as we add more terms to the polynomial: The cubic, $x - x^3/3!$, is a pretty good approximation up to about $x=1$, then goes too low. The quintic works well up to about $x=2$ then goes too high. And the degree-$7$ polynomial works up to about $x=3$. Of course for sine we don’t really need anything beyond $\pi/2$ because we can use the sine of the principal angle (see Section 11.4.3: The Unit Circle) to work out the sine of any angles beyond this range. Many calculators use the CORDIC algorithm instead to work out sine (see Question 11.4.29).

Let’s work out a general formula for finding the Taylor series of any function.

##### Question 15.5.6

We have a function $f(x)$, and we can work out $f(0)$, $f'(0)$, $f''(0)$, and so on. We want to find a polynomial $g(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots$ that matches $f(x)$ in value and (higher) derivatives when $x=0$. In other words, we want $g^{(n)}(0) = f^{(n)}(0)$ for $n=0,1,2,\ldots$. Find the coefficients of the polynomial (write them in terms of $f(0)$, $f'(0)$, and so on).

What if we want to find the Taylor series of $\ln x$, which is undefined at $x=0$? So far we’ve been making the Taylor series match $f(0)$, $f'(0)$, and so on, in other words, we’ve been focusing on the point $x=0$. We don’t have to do that; the Taylor series can be found around any arbitrary point. We might instead want the Taylor series to have a value of $f(a)$ when $x=a$ for some number $a$, and a derivative of $f'(a)$, a second derivative $f''(a)$, and so on.

##### Question 15.5.7

How can we do this? If we use $g(x) = f(a) + f'(a)x + \frac{f''(a)}{2!}x^2 + \frac{f'''(a)}{3!}x^3 + \cdots$ then $g(0) = f(a)$, $g'(0) = f'(a)$, and so on, but what we really want is $g(a) = f(a)$, $g'(a) = f'(a)$, and so on. How can we make it work?

##### Question 15.5.8

Let’s use this result to explore the Taylor series of $f(x) = \ln x$.

1. Find the first $4$ terms of the Taylor series around $x=1$. (Technically it will be only $3$ terms, because the constant term is $0$.)
2. Find an expression for $f^{(n)}(1)$.
3. Hence state the Taylor series of $\ln x$ about $x=1$, using sigma notation.
4. Hence write $\ln 2$ as an infinite series.
5. Write $\ln 3$ as an infinite series. Do you think this series will converge?
6. For what values of $x$ do you think the series will converge?
7. How might a calculator find $\ln 3$ using only addition, subtraction, multiplication, and division?

Although $e^x$ is equal to its Taylor series for all values of $x$, this is not true for every function. The Taylor series can diverge for some values of $x$.
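For readers who want to check the convergence claims in Question 15.5.8 numerically, here is a short Python sketch (an added aside, not part of the original text) that evaluates partial sums of the series for $\ln x$ about $x=1$:

    import math

    def ln_series(x, terms):
        # Partial sum of ln(x) = sum_{n>=1} (-1)^(n+1) (x-1)^n / n
        return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, terms + 1))

    for terms in (10, 100, 1000):
        print(terms, ln_series(2, terms))   # slowly approaches ln 2 ~ 0.6931
    print(ln_series(3, 50))                 # huge: the series diverges at x = 3
    print(math.log(2))                      # reference value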
Remember: The Taylor series of $f$ at $a$:

$f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots$

or written using sigma notation:

$\sum_{n=0}^\infty \frac{f^{(n)}(a)(x-a)^n}{n!}$

When $a=0$, it is also known as a Maclaurin series:

$\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots$

Taylor series are very useful in physics and engineering, because they allow complicated functions to be approximated by simple polynomials. If we’re interested in values of $f(x)$ when $x$ is very close to $a$ then we usually only need two or three terms of the Taylor series to get a good approximation.

##### Question 15.5.9

Classical mechanics says that kinetic energy is $\frac{1}{2}mv^2$ where $m$ is mass and $v$ is speed, but special relativity says that kinetic energy is $\gamma(v) mc^2 - mc^2$ where $c$ is the speed of light, and $\gamma(v) = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.$ Show that the relativistic formula is approximately the same as the classical one when the speed is slow ($v\approx 0$).
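One possible starting point for Question 15.5.9 (an added hint, using the binomial series with $u = v^2/c^2$):

$\gamma(v) = \left(1 - \frac{v^2}{c^2}\right)^{-1/2} \approx 1 + \frac{1}{2}\frac{v^2}{c^2} \quad (v \ll c), \qquad \text{so} \quad \gamma(v)mc^2 - mc^2 \approx \frac{1}{2}mv^2.$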
https://support.10xgenomics.com/single-cell-atac/software/pipelines/latest/output/singlecell
# Per barcode QC, ATAC signal, and cell calling

The cellranger-atac pipeline performs cell calling where it determines whether each barcode is a cell of any species included in the reference. Based on mapping information, the pipelines also provide QC information associated with the fragments per barcode. Additionally, the pipeline computes the ATAC signal per barcode, captured by various targeting metrics such as number of fragments overlapping transcription start sites (TSS) annotated in the reference package. All of this per barcode information is collated and produced in a single output table: singlecell.csv.

## Structure

The structure and contents of singlecell.csv from a single species analysis are shown below:

    $ cd /home/jdoe/runs/sample345/outs
    $ head -5 singlecell.csv
    barcode,total,duplicate,chimeric,unmapped,lowmapq,mitochondrial,passed_filters,cell_id,is__cell_barcode,TSS_fragments,DNase_sensitive_region_fragments,enhancer_region_fragments,promoter_region_fragments,on_target_fragments,blacklist_region_fragments,peak_region_fragments,peak_region_cutsites
    NO_BARCODE,14876,7802,379,704,1149,0,4842,None,0,0,0,0,0,0,0,0,0
    AAACGAAAGAACAGGA-1,1,0,0,0,0,0,1,None,0,0,1,0,0,1,0,0,0
    AAACGAAAGAACCATA-1,27,10,0,0,1,0,16,None,0,2,12,1,0,12,0,1,1
    AAACGAAAGACCATAA-1,8,4,0,0,2,0,2,None,0,2,2,0,2,2,0,0,0

The table contains many columns, including the primary barcode column. All the barcodes in the dataset are listed in this column. The NO_BARCODE row contains a summary of fragments that are not associated with any whitelisted barcodes. It usually forms a small fraction of all reads.

## Column Definitions

| Column | Type | Description | Pipeline specific changes | Reference specific changes |
|---|---|---|---|---|
| barcode | key | barcodes present in input data | | |
| total | sequencing | total read-pairs | absent in aggr, reanalyze | |
| duplicate | mapping | number of duplicate read-pairs | | |
| chimeric | mapping | number of chimerically mapped read-pairs | absent in aggr, reanalyze | |
| unmapped | mapping | number of read-pairs with at least one end not mapped | absent in aggr, reanalyze | |
| lowmapq | mapping | number of read-pairs with <30 mapq on at least one end | absent in aggr, reanalyze | |
| mitochondrial | mapping | number of read-pairs mapping to mitochondria and non-nuclear contigs | absent in aggr, reanalyze | |
| passed_filters | mapping | number of non-duplicate, usable read-pairs i.e. "fragments" | absent in aggr, reanalyze | for multi species, for example hg19 and mm10, expect additional columns: passed_filters_hg19 and passed_filters_mm10 |
| cell_id | cell calling | index of the barcode in cell barcodes. Appears as {species}_cell_{num}, otherwise None. | | for multi species, for example hg19 and mm10, doublets will appear as hg19_cell_{num1}_mm10_cell_{num2} |
| is__cell_barcode | cell calling | binary indicator of whether barcode is associated with a cell | | for multi species, for example hg19 and mm10, expect columns is_hg19_cell_barcode and is_mm10_cell_barcode instead |
| TSS_fragments | targeting | number of fragments overlapping with TSS regions | | |
| DNase_sensitive_region_fragments | targeting | number of fragments overlapping with DNase sensitive regions | | For custom references or references missing the dnase.bed file, this count is 0 |
| enhancer_region_fragments | targeting | number of fragments overlapping enhancer regions | | For custom references or references missing the enhancer.bed file, this count is 0 |
| promoter_region_fragments | targeting | number of fragments overlapping promoter regions | | For custom references or references missing the promoter.bed file, this count is 0 |
| on_target_fragments | targeting | number of fragments overlapping any of TSS, enhancer, promoter and DNase hypersensitivity sites (counted with multiplicity) | | For custom references or references having only the tss.bed file, this count is simply equal to the TSS_fragments |
| blacklist_region_fragments | targeting | number of fragments overlapping blacklisted regions | | |
| peak_region_fragments | denovo targeting | number of fragments overlapping peaks | | for multi species, for example hg19 and mm10, expect additional columns: peak_region_fragments_hg19 and peak_region_fragments_mm10 |
| peak_region_cutsites | denovo targeting | number of ends of fragments in peak regions | | |

Note that the number of columns and the column names themselves change and depend on what pipeline and what reference was used to generate the output file. Briefly, as described in the last two columns in the table,

• Cell Ranger ATAC aggr and reanalyze pipelines only take fragments as input and not fastqs. Consequently, only the barcodes present in the input fragments file, i.e. barcodes with at least one fragment detected, will be listed in this output file.
• As the Cell Ranger ATAC aggr and reanalyze pipelines don't require the BAM at input, the columns associated with mapping information are not produced for the output file. However, the duplicates and passed filter information can be deduced from the fragments file.
• When present, it is guaranteed that the sum of all the mapping type columns (whatever subset is present) will be equal to the total.
• For custom references, if files such as enhancer.bed are missing, then the counts for the corresponding columns will be zero.
• For barnyard references, there will be additional species specific columns such as is_hg19_cell_barcode, passed_filters_hg19 and peak_region_fragments_hg19.

singlecell.csv can be loaded easily in Python as a pandas dataframe:

    import pandas as pd

    singlecell_file = "/home/jdoe/runs/sample345/outs/singlecell.csv"
    # Load with the barcode column as the index
    scdf = pd.read_csv(singlecell_file, sep=",", index_col="barcode")

You can use this file in many ways. Below are some examples:

### Regenerate the targeting plot in web summary

Assume you are analyzing data from a single species library, such as hg19. To reproduce the targeting plot on the right side in the Targeting section of the websummary, you can do the following:

    import matplotlib.pyplot as plt

    # Separate non-cell barcodes (excluding the NO_BARCODE summary row) from cells
    noncell_mask = (scdf['is__cell_barcode'] != 1) & (scdf.index != 'NO_BARCODE')
    # The original plotting calls were truncated; the plotted columns are illustrative
    plt.scatter(scdf.loc[noncell_mask, 'passed_filters'],
                scdf.loc[noncell_mask, 'TSS_fragments'], c='b')
    plt.scatter(scdf.loc[~noncell_mask, 'passed_filters'],
                scdf.loc[~noncell_mask, 'TSS_fragments'], c='r')

### Edit cell calling for use in aggr and reanalyze

The singlecell.csv file captures the cell calling information in the is_{species}_cell_barcode field. The Cell Ranger ATAC aggr pipeline requires you to specify the singlecell.csv as part of the aggr_csv argument. On the other hand, the Cell Ranger ATAC reanalyze pipeline accepts an optional input for cell barcodes in the form of the singlecell.csv file.
You can control what barcodes get analyzed as cells from each library by editing the cell calling columns in the singlecell.csv file. In particular, you only need to edit the is_{species}_cell_barcode columns. For example, if you have a list of barcodes you want to keep (for instance, after editing the barcodes.tsv file produced as part of the matrices mex format), you can do the following:

    barcodes_file = "/home/jdoe/runs/sample345/outs/filtered_peak_bc_matrix_mex/barcodes.tsv"
    with open(barcodes_file, 'r') as infile:
        keep_barcodes = [bc.strip("\n") for bc in infile]

    # keep_barcodes must contain barcodes present in the singlecell.csv file
    scdf.loc[keep_barcodes, 'is__cell_barcode'] = 1
    # Write back out, keeping the barcode index as the first column
    out_file = "/home/jdoe/runs/sample345/outs/singlecell_edited.csv"  # example path
    scdf.to_csv(out_file, sep=",")

Care must be taken while editing the singlecell.csv in multi-species samples, in which case you want to edit the per species cell calling columns separately.
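For the multi-species case just mentioned, a minimal pandas sketch could look like the following; the barcode lists and file names are hypothetical, not from the official documentation:

    import pandas as pd

    scdf = pd.read_csv("singlecell.csv", sep=",", index_col="barcode")

    # Hypothetical per-species keep lists; every barcode must exist in the file
    hg19_keep = ["AAACGAAAGAACAGGA-1"]
    mm10_keep = ["AAACGAAAGAACCATA-1"]

    # Reset both cell calls, then re-flag the kept barcodes per species
    scdf["is_hg19_cell_barcode"] = 0
    scdf["is_mm10_cell_barcode"] = 0
    scdf.loc[hg19_keep, "is_hg19_cell_barcode"] = 1
    scdf.loc[mm10_keep, "is_mm10_cell_barcode"] = 1

    scdf.to_csv("singlecell_edited.csv", sep=",")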
http://stats.stackexchange.com/questions/37476/pca-on-independent-but-non-identically-distributed-multinormal-data
# PCA on independent but non-identically distributed multinormal data

In the empirical phase diagram approach the data are given by a set ${\boldsymbol X}$ of input (controlled) variables (such as pH and temperature) and a set ${\boldsymbol Y}$ of output variables. The empirical phase diagram aims to visualize (at least roughly) the relation between the input and the output. The strategy runs as follows in the case when there are only two input variables:

• firstly we ignore the input ${\boldsymbol X}$ and we run a PCA on ${\boldsymbol Y}$
• keep the first three principal components
• transform the three coordinates on the principal components into a color using the red-green-blue coding
• plot the color as a function of the two input variables, and visualize

I am rather new to PCA (so please do not hesitate to tell me if my questions are stupid). I know PCA has a better interpretation when it is applied to a sample of i.i.d. random multivariate normal variables. But I don't know what the possible pitfalls are when this assumption does not hold. Assume for instance a multivariate regression model for which the distribution of a single multivariate response ${\boldsymbol Y}$ is assumed to be: $${\boldsymbol Y}_i = f({\boldsymbol X}_i) + \epsilon_i \quad \textrm{with } \epsilon_i \sim {\cal N}({\boldsymbol 0}, \Sigma).$$ I think that we ideally should run the PCA on the centered responses ${\boldsymbol Y}_i - \hat{f}({\boldsymbol X}_i)$ in such a situation. So what are the possible pitfalls if we run the above strategy in such a situation?

- +1 I think this is a good question, not a stupid one. – Michael Chernick Sep 18 '12 at 11:01

If you run PCA on ${\bf Y}_i - \hat f({\bf X}_i)$, you are visualizing residuals $\epsilon_i$, not the original data ${\bf Y}_i$. In other words, you will analyze the conditional variance of ${\bf Y}_i$, and that may be of interest per se. If it is the variability of ${\bf Y}_i$ that you want to visualize, and you know these outcomes are affected by the ${\bf X}_i$'s, then what you are describing appears to be a moderately sensible visualization, with the caveat that 5D graphs are difficult to read and interpret. You could also look into principal curves if the dependence on the ${\bf X}_i$'s is heavily non-linear.

- There's no conditional distribution in the regression model, so I don't see why you are talking about some conditional variance. Moreover, I don't want to use any model; my question is about what happens if there is such a model but we don't use it. Thanks for mentioning the principal curves, I didn't know this notion. – Stéphane Laurent Sep 19 '12 at 5:02
- Oops sorry, I'm still sleeping ;) You mean that the regression model is a modeling of the conditional distribution of Y given the covariates, right. – Stéphane Laurent Sep 19 '12 at 5:05
- IF the data are iid normals, then you get a sphere (the covariance matrix is $\sigma^2I$). With normals, you always get ellipsoids (in a smaller dimension perhaps, because of non-zero correlations.) – user765195 Sep 20 '12 at 3:24
- The data are always multivariate. With i.i.d. multinormal ${\cal N}({\boldsymbol \mu}, \Sigma)$ one gets the ellipsoid associated to $\Sigma$, which is a sphere when $\Sigma=\sigma^2 I$. This point is clear. – Stéphane Laurent Sep 20 '12 at 5:11
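For concreteness, here is a small Python sketch of the RGB-coded empirical phase diagram strategy described in the question (toy synthetic data, not from any real study):

    import numpy as np
    from sklearn.decomposition import PCA
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    # Toy data: two inputs, five outputs depending smoothly on the inputs
    X = rng.uniform(0, 1, size=(500, 2))
    Y = np.column_stack([np.sin(3 * X[:, 0]), X[:, 0] * X[:, 1],
                         np.cos(2 * X[:, 1]), X[:, 1] ** 2, X[:, 0]])
    Y += 0.05 * rng.normal(size=Y.shape)

    # PCA on Y only, keep three components, rescale each to [0, 1] for RGB
    pcs = PCA(n_components=3).fit_transform(Y)
    rgb = (pcs - pcs.min(axis=0)) / (pcs.max(axis=0) - pcs.min(axis=0))

    # Plot the color as a function of the two input variables
    plt.scatter(X[:, 0], X[:, 1], c=rgb)
    plt.xlabel("input 1"); plt.ylabel("input 2")
    plt.show()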
https://byjus.com/question-answer/assertion-f-x-sin-x-x-is-discontinuous-at-x-0-because-reason-if-g/
# Question

Assertion: $$f(x)=\sin x+[x]$$ is discontinuous at $$x=0$$.

Reason: If $$g(x)$$ is continuous and $$h(x)$$ is discontinuous at $$x=a$$, then $$g(x)+h(x)$$ will necessarily be discontinuous at $$x=a$$.

A. Both Assertion and Reason are correct and Reason is the correct explanation for Assertion

B. Both Assertion and Reason are correct but Reason is not the correct explanation for Assertion

C. Assertion is correct but Reason is incorrect

D. Assertion is incorrect but Reason is correct

## Solution

The correct option is A: Both Assertion and Reason are correct and Reason is the correct explanation for Assertion. At $$x=0$$, $$\sin x$$ is continuous but $$[x]$$ is not continuous. The Reason holds because if $$g(x)+h(x)$$ were continuous at $$x=a$$ while $$g(x)$$ is continuous there, then $$h(x)=(g(x)+h(x))-g(x)$$ would also be continuous at $$x=a$$, a contradiction. Hence $$f(x) = \sin x+[x]$$ is discontinuous at $$x=0$$, and both statements are correct with the Reason explaining the Assertion.
https://math.stackexchange.com/questions/723252/distribution-of-a-sample-generated-from-an-ar2-model
# Distribution of a sample generated from an AR(2) model

Consider the autoregressive model of order 2 $$X_{t}=\varphi_1X_{t-1}+\varphi_2X_{t-2}+\varepsilon_t,$$ where $\varepsilon_t$ are zero-mean normally distributed random variables with variance $\sigma^2$, such that these random variables are uncorrelated. Suppose that we have a sample for the above model with sample size, let's say, somewhere between 100 and 1000. As a part of a simulation I would like to know something about the probability distribution of the data. Some examples I made using MATLAB suggest that some normal distribution produces an excellent fit, which - at least in my opinion - makes sense because the white noise process in the model is normally distributed. However, I am not skilled in the topic. Is there any result in the literature which can give me a theoretical basis for this suggestion (that a sample of the above AR(2) model is normally distributed if considered as values of some random variable rather than as a time series)? If there is not any, then how can I support the good fit of some normal distribution?

At stationarity, the AR(2) model above is centered normal and its covariance structure $c_n=E(X_tX_{t-n})$ is given by $$c_0=\frac{1-\varphi_2}{\Delta}\sigma^2,\qquad c_1=\frac{\varphi_1}{\Delta}\sigma^2,$$ where $$\Delta=(1-\varphi_2)(1-\varphi_1^2-\varphi_2^2)-2\varphi_1^2\varphi_2,$$ and, for every $n\geqslant2$, $$c_n=\varphi_1c_{n-1}+\varphi_2c_{n-2}.$$

• Centered normal is $N(0,\sigma^2)$. Normal is $N(\mu,\sigma^2)$. – Did Mar 23 '14 at 17:18
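As a numerical sanity check of the stationary-variance formula in the answer (a sketch with hypothetical, stationary parameter values, not from the original thread):

    import numpy as np

    phi1, phi2, sigma = 0.5, 0.3, 1.0     # hypothetical stationary parameters
    n, burn = 1000, 500
    rng = np.random.default_rng(0)

    # Simulate the AR(2) recursion, discarding a burn-in to reach stationarity
    x = np.zeros(n + burn)
    eps = rng.normal(0, sigma, n + burn)
    for t in range(2, n + burn):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + eps[t]
    x = x[burn:]

    delta = (1 - phi2) * (1 - phi1**2 - phi2**2) - 2 * phi1**2 * phi2
    c0 = (1 - phi2) / delta * sigma**2
    print(np.var(x), c0)   # sample variance vs. theoretical stationary variance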
https://www.effortlessmath.com/math-topics/circles/
# Circles

Learn how to find the area and circumference of circles when you have the radius or the diameter of the circle.

## Step by step guide to solve Circles

• In a circle, variable $$r$$ is usually used for the radius and $$d$$ for diameter, and $$π$$ is about $$3.14$$.
• Area of a circle $$=πr^2$$
• Circumference of a circle $$=2πr$$

### Example 1:

Find the area of the circle. Solution: Use the area formula: Area $$=πr^2$$, $$r=6$$ $$in$$, then: Area $$=π(6)^2=36π$$, $$π=3.14$$, then: Area $$=36×3.14=113.04$$ $$in^2$$

### Example 2:

Find the circumference of the circle. Solution: Use the circumference formula: Circumference $$=2πr$$, $$r=9$$ $$cm$$, then: Circumference $$=2π(9)=18π$$, $$π=3.14$$, then: Circumference $$=18×3.14=56.52$$ $$cm$$

### Example 3:

Find the area of the circle. Solution: Use the area formula: Area $$=πr^2$$, $$r=4$$ $$in$$, then: Area $$=π(4)^2=16π$$, $$π=3.14$$, then: Area $$=16×3.14=50.24$$ $$in^2$$

### Example 4:

Find the circumference of the circle. Solution: Use the circumference formula: Circumference $$=2πr$$, $$r=6$$ $$cm$$, then: Circumference $$=2π(6)=12π$$, $$π=3.14$$, then: Circumference $$=12×3.14=37.68$$ $$cm$$

## Exercises

### Find the area and circumference of each circle. $$(\pi=3.14)$$

1. $$\color{blue}{Area: \ 50.24 \ in^2 , \ Circumference: \ 25.12 \ in}$$
2. $$\color{blue}{Area: \ 1,017.36 \ cm^2, \ Circumference: \ 113.04 \ cm}$$
3. $$\color{blue}{Area: \ 78.5 \ m^2, \ Circumference: \ 31.4 \ m}$$
4. $$\color{blue}{Area: \ 379.94 \ cm^2 , \ Circumference: \ 69.08 \ cm}$$
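For anyone who wants to verify the exercise answers programmatically, here is a tiny Python sketch; the radii 4, 18, 5 and 11 are inferred from the printed answers, since the original figures are not shown.

    # pi approximated as 3.14, matching the lesson
    for r in [4, 18, 5, 11]:
        print(r, 3.14 * r ** 2, 2 * 3.14 * r)   # radius, area, circumference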
https://stats.stackexchange.com/questions/207500/statistical-significance-test-with-1x2-data-and-faked-2x2-contingency-table
# Statistical significance test with 1x2 data and faked 2x2 contingency table

Medical scientists love p-values and in a publication, I found the following patient statistics table. I stumbled over the Number of patients line. They divided their subject group into two subgroups TAV and BAV and I asked how they calculated the p-value, because you only have one characteristic BAV/TAV. If you look for instance at the Male row, you see that you can use a 2x2 contingency table consisting of 2 characteristics BAV/TAV and Male/Female and make e.g. a Fisher test. What was done in the Number of patients row was to fake a 2x2 contingency table repeating and reversing the numbers: $$\begin{array}{c|c} 347&32\\\hline 32&347 \end{array}$$

From the viewpoint that they want to express how similar the number of patients are in the BAV and TAV group, this kind of makes sense because it gives a p-value of 1 (two tailed Fisher) when BAV and TAV have the same number of patients. However, I have never seen this and I cannot find any reference to such an approach.

Question: Can someone tell me whether it is correct what they did?

• Do you mean 2x2 table??? – SmallChess Apr 15 '16 at 11:32
• In the title of the question, I really meant 1x2 because basically we have only 1x2 data. But I see your point that it might be confusing. Maybe we find a different title for the question that describes the problem better. Any suggestions? – halirutan Apr 15 '16 at 11:33
• No need. If you mean 1x2, let it be. – SmallChess Apr 15 '16 at 11:34

The only "right" way I can think of to analyze that type of patient row is to test whether the proportions are equal. This can be done with a one way chi-square test, which is, indeed, a 1x2 table. I've never seen the sort of procedure you mention - did they say in the article that that is what they did, or are you guessing/figuring out that that is what they did? The next question is whether the two methods give the same results. The table you show doesn't give much, so let's test:

    type <- c(TAV = 347, BAV = 32)
    chisq.test(type)

and

    x <- matrix(c(347,32,32,347), ncol = 2)
    chisq.test(x)

give identical p values.

• Thanks for pointing this out. I really know what they did because I asked. +1 – halirutan Apr 15 '16 at 12:09
• Note, these are giving different p-values, it's just hard to tell since the p-value is so small. If you change 347 to 50, you'll see different p-values. I think this should be a one proportion z-test, please see my response – Peter Calhoun Apr 21 '16 at 7:02

The p-value calculations are a little confusing. The p-values are trying to test whether the percentages are similar for the two subgroups. For the first row "Number of patients" the % in TAV has to be 1 − % in BAV. Therefore, one should test whether p=0.5 for one proportion:

    prop.test(221, 244, p=0.5, alternative="two.sided")

For the "Male Gender" row, the % of males in TAV does not have to be 1 − % of males in BAV, so we have a different test. You can actually fill out the table:

           Male  Female  Total
    TAV     221     126    347
    BAV      23       9     32
    Total   244     135

This can be tested using Fisher's exact test (although Barnard's exact test is superior), giving the p-value reported.

    > data <- matrix(c(221,126,23,9), ncol=2, byrow=TRUE)
    > fisher.test(data, alternative="two.sided")

            Fisher's Exact Test for Count Data

    data:  data
    p-value = 0.4418
    alternative hypothesis: true odds ratio is not equal to 1
    95 percent confidence interval:
     0.2710563 1.5995457
    sample estimates:
    odds ratio
      0.686984
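For readers working in Python rather than R, here is a rough equivalent of the two tests in the last answer (an added sketch, not from the original thread; scipy's binomtest is an exact binomial test, so its p-value will differ slightly from prop.test's normal approximation):

    from scipy.stats import binomtest, fisher_exact

    # Exact one-sample proportion test (analogue of the prop.test call above)
    print(binomtest(221, 244, p=0.5).pvalue)

    # Fisher's exact test on the Male/Female by TAV/BAV table
    oddsratio, pvalue = fisher_exact([[221, 126], [23, 9]], alternative="two-sided")
    print(oddsratio, pvalue)   # p ~ 0.44, matching the R output above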
https://arxiver.moonhats.com/2017/12/08/the-apostle-simulations-rotation-curves-derived-from-synthetic-21-cm-observations-ga/
The APOSTLE simulations: Rotation curves derived from synthetic 21-cm observations [GA]

The APOSTLE cosmological hydrodynamical simulation suite is a collection of twelve regions $\sim 5$ Mpc in diameter, selected to resemble the Local Group of galaxies in terms of kinematics and environment, and re-simulated at high resolution (minimum gas particle mass of $10^4\,{\rm M}_\odot$) using the galaxy formation model and calibration developed for the EAGLE project. I select a sample of dwarf galaxies ($60 < V_{\rm max}/{\rm km}\,{\rm s}^{-1} < 120$) from these simulations and construct synthetic spatially- and spectrally-resolved observations of their 21-cm emission. Using the $^{3{\rm D}}$BAROLO tilted-ring modelling tool, I extract rotation curves from the synthetic data cubes. In many cases, non-circular motions present in the gas disc hinder the recovery of a rotation curve which accurately traces the underlying mass distribution; a large central deficit of dark matter, relative to the predictions of cold dark matter N-body simulations, may then be erroneously inferred.

K. Oman
Fri, 8 Dec 17

Comments: To appear in the proceedings of IAUS 334: Rediscovering our Galaxy, July 10-14 2017, Telegrafenberg, Potsdam, Germany, Eds. C. Chiappini, I. Minchev, E. Starkenburg & M. Valentini
https://physics.stackexchange.com/questions/406919/grand-canonical-partition-function-of-hypothetical-particles
# Grand canonical partition function of hypothetical particles

I have to calculate the grand canonical partition function of a system of hypothetical particles, wherein each single-particle quantum state can be occupied by up to 3 particles. Obviously, this is a sort of joke, referring to fermions (at most one particle per state) and bosons (unlimited particles per state). It is assumed that these hypothetical particles do not interact with each other. So I tried viewing each single-particle quantum state as a separate grand canonical ensemble, following the approach on https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics

At chemical potential $$\mu$$ and temperature $$T$$, where the energy of the state is $$\epsilon$$, I get:

$$\mathcal{Z} = \sum_{n=0}^{3}\exp\left(\frac{n(\mu-\epsilon)}{k_B T}\right) = \frac{1-\exp\left(4\,\frac{\mu-\epsilon}{k_B T}\right)}{1-\exp\left(\frac{\mu-\epsilon}{k_B T}\right)}$$

where I used the finite geometric progression. Now I also have to determine the average occupation number $$\langle n_i \rangle$$ for a state with energy $$\epsilon_i$$ at temperature $$T=0$$. In general, we have

$$\langle n_i \rangle = k_B T \frac{\partial \ln{\mathcal{Z}}}{\partial \mu}$$

which yields me $$\langle n_i \rangle =2-\frac{1}{1+\exp(x)}+\tanh(x)$$ where I defined $$x=\frac{\mu-\epsilon_i}{k_B T}$$. (I used Wolfram Mathematica for simplifying the algebra.) Clearly at $$T=0$$ this expression is ill-defined, but by taking the limit $$T\rightarrow 0$$ we see that $$\langle n_i\rangle=0$$ if $$\epsilon_i>\mu$$, $$\langle n_i\rangle=3/2$$ if $$\epsilon_i=\mu$$ and $$\langle n_i\rangle=3$$ if $$\epsilon_i<\mu$$, correct?

• That final result sounds fine to me; you wouldn't have any particles if the 'cost' to having one is infinite. May 19, 2018 at 21:49
• @Sylorinnis, if you write $\epsilon_i$, you assume at least two states for the system, so you have to differentiate not $\mathcal{Z}_i$, but the total partition function, which is the product of the $\mathcal{Z}_i$: $\mathcal{Z}=\prod \mathcal{Z}_i$ May 20, 2018 at 18:54
• @AlekseyDruggist I want to calculate the average number of particles in the state with energy $\epsilon_i$; this state is a grand canonical ensemble on its own, so I can simply differentiate its own partition function, right? This is also the approach followed on en.wikipedia.org/wiki/… May 20, 2018 at 21:28
• @Sylorinnis, you are right, I meant the total mean number of particles in the system $<N>$ May 20, 2018 at 21:48
• If the particles are non-interacting, this seems a bit of overkill - e.g., why use a "finite geometric progression" to sum four terms? And using Wolfram for simple differentiations (instead of doing them by hand) is a good way to get simple results expressed by not-so-transparent expressions (clearly the case here). Jan 7 at 16:12

Your formulas seem correct to me. But you really cannot justify the $\mu <\epsilon$ condition in this case. In my opinion $\mu$ can take any value from $-\infty$ to $+\infty$ in this problem. At fixed temperature $T>0$ it follows from your formula that $\left<n\right> = 0$ at $\mu = -\infty$ and $\left<n\right> = 3$ at $\mu = +\infty$. These are correct limiting cases. At $T = 0$ we also have $\left<n\right> = 3$ if $\mu > \epsilon$ and $\left<n\right> = \frac{3}{2}$ if $\mu = \epsilon$. The $\mu < \epsilon$ condition is a must only for an ideal Bose gas.
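As a quick numerical check of these limits (an added sketch, not from the original thread), one can evaluate the mean occupation directly from the four-term sum:

    import numpy as np

    def n_mean(x, nmax=3):
        # x = (mu - epsilon) / (k_B T); Boltzmann weights exp(n x) for n = 0..nmax
        n = np.arange(nmax + 1)
        w = np.exp(n * x)
        return (n * w).sum() / w.sum()

    for x in (-20.0, 0.0, 20.0):   # stand-ins for the T -> 0 limiting cases
        print(x, n_mean(x))        # expect approximately 0, 1.5, 3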
2022-05-20 22:38:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8282460570335388, "perplexity": 311.82178607602793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534693.28/warc/CC-MAIN-20220520223029-20220521013029-00436.warc.gz"}
https://electronics.stackexchange.com/questions/305821/how-do-i-solve-this-circuit-with-thevenins/305826
# How do I solve this circuit with Thevenin's?

I am not sure how to place my voltage source after I combine the two parallel resistors R1 and R2. Can someone please help me with this?

Method 1 (specific method for a voltage divider): After you simplify the parallel resistors $R_{23} = R_2 || R_3$, you get a voltage source with a voltage divider formed by $R_{23}$ and $R_1$. The Thevenin equivalent of a voltage source $V_{src}$ with a voltage divider is a Thevenin source with $V_{th}=V_{src}\frac{R_{23}}{R_1+R_{23}}$ and $R_{th}= R_1 || R_{23}$

Method 2 (general method that always works): If you don't know the shortcut above, do what you should have learned in the lessons: • find the open-circuit voltage $V_{oc}$ between a and b • find the short-circuit current $I_{sc}$ between a and b • use both results to derive $V_{th}=V_{oc}$ and $R_{th}=\frac{V_{oc}}{I_{sc}}$.

First, combine R2 // R3 to find the equivalent resistance Req1. Then you see that R1 and Req1 are in series, so you find the equivalent resistance of R1 and Req1, and you have your equivalent schema.

• The OP asked about using Thevenin. – Chu May 17 '17 at 14:18
• I should have said it was the beginning of the method to find the equivalent schema. After that you just have to proceed as @curd said. Sorry I wasn't explicit. – Dipo May 17 '17 at 14:39
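For readers who want to check the divider shortcut numerically, here is a small sketch (mine, not from the thread). The component values are invented, since the original schematic is not reproduced here:

```python
def thevenin_of_divider(v_src, r_series, r_shunt):
    """Thevenin equivalent seen at the tap of a voltage divider:
    v_src feeds r_series, and r_shunt goes from the tap to ground."""
    v_th = v_src * r_shunt / (r_series + r_shunt)
    r_th = r_series * r_shunt / (r_series + r_shunt)  # parallel combination
    return v_th, r_th

# Hypothetical values: 10 V source, R1 = 1 kOhm, R2 = R3 = 2 kOhm.
r23 = (2000.0 * 2000.0) / (2000.0 + 2000.0)   # R2 || R3 = 1 kOhm
print(thevenin_of_divider(10.0, 1000.0, r23))  # (5.0, 500.0)
```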
2019-12-16 08:14:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49432462453842163, "perplexity": 814.6566278036485}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541318556.99/warc/CC-MAIN-20191216065654-20191216093654-00024.warc.gz"}
https://community.ptc.com/t5/Windchill-Systems-Software/Integrity-create-field-that-is-a-computed-sum/td-p/260382
## Integrity - create field that is a computed sum

Hello - In Integrity 10.9, I'm trying to create a new field for my documents that is a computed sum of all "Verified By Trace Count" entries in a document. The field "Verified By Trace Count" related to each requirement is calculated by: isEmpty(RelCount("Verified By"),0); So, at the document level, I was hoping the sum of all these values would simply be: sum("Verified By Trace Count"); but that always gave me the error: "An error occurred parsing the computation expression "sum("Verfied By Trace Count");": MKS124539: sum: Function is an aggregate function, but a non-aggregate computation is being evaluated." I tried using the aggregate function, but couldn't quite get the syntax right. Any help would be appreciated. Is there a document that gives examples on how to use all these computational functions and operators? Thanks! John
2020-09-21 03:57:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8660068511962891, "perplexity": 4290.634380903315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198887.3/warc/CC-MAIN-20200921014923-20200921044923-00724.warc.gz"}
https://brilliant.org/practice/cryptographic-sigs/
Cryptocurrency

Cryptonia prospered thanks to the new gold mine, the security provided by the dragon, and the convenience of the DragonBucks system. This brought an influx of non-native Cryptonians to the area, who provided fresh energy and industry to the burgeoning city. It also drove the dishonest outsiders out of the town in search of easier prey, since DragonBucks made theft impossible for them. However, it also introduced a problem: while newcomers could receive DragonBucks, they couldn't sign their own notes, as they weren't born with the ability to cast a unique spell. The members of the newly formed Cryptonian Academy of Scholars gathered to come up with a solution. They figured that their best shot to integrate the non-magical denizens of Cryptonia into the new currency would be to use one of the closest things the rest of us have to magic: mathematics.

Cryptographic Signatures

The Cryptonian scholars need to find mathematical replacements for the functions of DragonBucks. In particular, they need something to replace the dragon so that people can verify that a transaction is valid, and they need replacements for spells as personal identities. To make it easier to talk about these replacements, we can give names to each part of the system.$^*$ Since a Cryptonian's spellcasting ability is personal to them, it's called a $\tt secretKey.$ The visual spell effect is connected to this spellcasting ability but can be publicly shared, so it's a $\tt publicKey.$ A transaction note is a $\tt message,$ and the enchanted wax seal is a $\tt signature$ since it's used to prove who sent a message. The crucial step is being able to verify a $\tt signature$ while hiding the $\tt secretKey$ that generated it. In this quiz, you'll learn about a mathematical function that hides information and can help us towards this goal.

$^*$We're borrowing these names from public-key cryptography, but you don't need to be familiar with public-key cryptography to understand this quiz.

Cryptographic Signatures

The Cryptonian scholars start by considering a very simple mathematical system that uses numbers in place of spells and transaction notes and also uses multiplication to move between each step. Here's how user identities and sending messages would work in this system:

1. Everyone picks a number $s$ for their $\tt secretKey$ which they don't reveal, but they share $5\times s$ as their $\tt publicKey$.
2. Each $\tt message$ is converted into a number $m$.*
3. The person sending the $\tt message$ produces a $\tt signature$ to prove they're the one who sent it by calculating $m \times s$.

For example, Alice has a $\tt publicKey$ of $35$ and wants to send the $\tt message$ $101$ (a real message would be much longer, but we've truncated it so that you don't need to find your calculator): The last piece of the system is that we need to be able to verify that Alice was the one to produce her $\tt signature$. What equation must be true if Alice's $\tt signature$ of $707$ was calculated from the product of her $\tt secretKey$ and $\tt message$?

*An example of how to perform this conversion is examined later in the chapter. You'd need to be careful about the details of how to convert a message to a number when building a real system, but for now we can just trust that it's possible.
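A minimal Python sketch of this naive scheme (my own code, not part of the original lesson; the secret value s = 7 is inferred from Alice's publicKey of 35 = 5 × 7, which the lesson never states explicitly):

```python
G = 5  # the public multiplier every publicKey uses in this scheme

def keygen(s):
    return G * s     # publicKey = 5 * s

def sign(m, s):
    return m * s     # signature = m * s

s = 7                # Alice's assumed secretKey (35 / 5)
print(keygen(s))     # 35  -> her publicKey
print(sign(101, s))  # 707 -> the signature for message 101
```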
Cryptographic Signatures

In the proposed system, we can verify a $\tt signature$ by checking that the following equation is true: $\texttt{signature}\times 5 = \texttt{publicKey}\times m,$ because if we substitute in the $\texttt{secretKey}\ s$ and $\texttt{message}\ m$ used to generate the $\tt signature$ and $\tt publicKey$, it produces $\overbrace{\left(m\times s\right)}^\texttt{signature}\times 5 = \overbrace{\left(5 \times s\right)}^\texttt{publicKey} \times m.$ The order of multiplication doesn't matter and both these expressions have the exact same factors $(5, m,$ and $s)$, so they'll be equal for a valid $\tt signature$. When Alice $(\texttt{publicKey} = 35)$ sends the $\tt message$ $101$ with the $\tt signature$ $707,$ this will check out since $707 \times 5 = 35 \times 101.$ But this points to the real problem with the system: it isn't secure. What is Alice's $\tt secretKey$?

Cryptographic Signatures

Your $\tt signature$ needs to be something that only you can produce. The simple multiplication scheme doesn't work because anyone who understands the rules of the system can steal your $\tt secretKey$ just by dividing! To replace DragonBucks, we need to be able to verify a $\tt signature$ without compromising the security of the $\tt secretKey$ associated with it. Which of the following ways of calculating a $\tt publicKey$ would prevent you from immediately determining the exact value of the $\tt secretKey$ used to generate it?

Cryptographic Signatures

Most ordinary functions don't hide their inputs very well: you can reverse addition with subtraction, division with multiplication, squaring with taking the square root, and so on. Fortunately, there are some functions that can't be easily reversed. For example, if we share the remainder after dividing a $\tt secretKey$ by $17,$ it doesn't reveal the $\tt secretKey$: there are infinitely many possibilities. If the remainder of $s$ divided by $17$ is $5$, $s$ could be $5$ or $22$ or $39$ or $56$ or $\ldots$ The modulo operation $(\bmod{}$ for short$)$ divides by a number (the "modulus") and returns the remainder, so we can write the situation above as $(\text{remainder}) = s\bmod{17}.$ Consider $63 \bmod{17}:$ We can view $63$ as its remainder plus some multiple of $17:$ $63 = 12 + 3 \times 17 = (\text{remainder}) + (\text{some number}) \times 17.$ $\bmod{\>17}$ keeps the remainder, but the multiples of $17$ are lost, so that part of the original number remains hidden. If we can use $\bmod{}$ to verify a $\tt signature$ while hiding the $\tt secretKey$ that generated it, that will help us mathematize the DragonBucks system.

Cryptographic Signatures

Taking the remainder of a lone number hides information about that number, so perhaps taking the remainder after multiplication will hide information about the factors that went into that multiplication. This could make $\bmod{}$ especially helpful for hiding the $\tt secretKey$ used to generate a $\tt signature$. If we multiplied the $\tt message$ by the $\tt secretKey$ and then took the remainder after dividing by $n:$ $(m\times s) \bmod{n},$ using this value as the $\tt signature$ instead of $m\times s$ might hide our $\tt secretKey$ better. Alice is going to send the $\tt message$ $101$ and generates a $\tt signature$ by calculating $(m\times s) \bmod{17}$. If her $\tt signature$ is $12$, could we calculate Alice's $\tt secretKey$?
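The question above can be explored by brute force. In this sketch (mine, using the text's numbers m = 101, n = 17, signature = 12), every candidate in the same residue class produces the same signature, so the remainder alone cannot single out Alice's secretKey:

```python
m, n, sig = 101, 17, 12

# All small candidates t whose signature matches Alice's:
candidates = [t for t in range(1, 100) if (m * t) % n == sig]
print(candidates)  # [5, 22, 39, 56, 73, 90] -- and infinitely many more as t grows
```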
Cryptographic Signatures

Using $(m \times s)\bmod{n}$ to generate a $\tt signature$ does a much better job of hiding the $\tt secretKey$ used to produce it — simple division will no longer reveal $s$. Graphically, $m \times s$ is the area of a rectangle with side lengths $m$ and $s.$ If we apply $\bmod{\>n}$ to that area, any multiples of $n$ in it are lost, and it becomes much harder to find $m$ or $s$ from that remainder: This is because as long as $m \times s$ is bigger than $n,$ at least one multiple of $n$ will get thrown out when $\bmod{\>n}$ is applied. Losing any part of $m \times s$ means that dividing by $m$ will not recover $s.$

Cryptographic Signatures

Instead of just dividing the $\tt signature$ to find $s$, a would-be impostor now has a lot more work ahead of them. The simplest approach they could use would be to try different values of $t$ until they found one such that \begin{aligned} (m\times t)\bmod{n} &= \texttt{signature} \\ &= (m\times s)\bmod{n}. \end{aligned} The value of $t$ satisfying this equation would be a candidate for $s$. But if we make the numbers big enough ($n$, in particular), then we can make this search take a long time and therefore keep $\tt secretKey$ safe. This approach lets someone publish a $\tt signature$ without revealing their $\tt secretKey,$ one of the key properties that will allow us to mathematize DragonBucks!

Cryptographic Signatures

With $\bmod{}$ in our toolbox, we're ready to take another crack at the mathematization of DragonBucks. Supercharged with modular arithmetic, our naive multiplication scheme might not be so bad after all. Since $\bmod{}$ allows us to hide inputs, we can integrate modular arithmetic into an updated version of the system. Here's how user identities and sending messages could work after the update:

1. Everyone agrees on a number $n$ to use as the modulus of the system.
2. Everyone picks a $\tt secretKey$ $s$ which they don't reveal, but they share $s\bmod{n}$ as their $\tt publicKey$.
3. Each $\tt message$ is converted into a number $m$.
4. The $\tt signature$ for a $\tt message$ $m$ is $(m \times s)\bmod{n}$.

The last step of the system is that we need to be able to verify each $\tt signature$, confirming that it used the correct $\tt secretKey$ when it was created. What needs to be true of $(m \times s)\bmod{n}$ in order for it to be possible to verify a $\tt signature$ in this new scheme using only public information?

Cryptographic Signatures

For our system to work, we also need to be able to verify each $\tt signature$. Helping us achieve this is the fact that even though the modular product hides the factors going into it, associativity and commutativity still apply. In simpler terms, this means that the order in which you multiply the numbers and apply $\bmod{\>n}$ doesn't matter. You'll always get the same result after applying a final $\bmod{\>n}$ at the end. Consider an example where $m = 569, s = 1187,$ and $n=447$.
Whether we apply $\bmod{}$ at every step, or only apply $\bmod{}$ after multiplying the message and secret key, we'll get the same result:

| Apply $\bmod{}$ at every step | Apply $\bmod{}$ only at the end |
| --- | --- |
| $569\bmod{447} = 122$ | $569 \times 1187 = 675403$ |
| $1187\bmod{447} = 293$ | $675403\bmod{447} = 433$ |
| $122 \times 293 = 35746$ | |
| $35746\bmod{447} = 433$ | |

It's not crucial for you to understand why this is the case, but if you're curious, it's because whether you apply $\bmod{\>n}$ before or after multiplying, it still has the effect of removing multiples of $n$ from the product. Any multiples of $n$ that make it through to the end will be removed by the final $\bmod{\>n}$, leading to the same result:

Cryptographic Signatures

Since the order of modular products doesn't change the final outcome, we can add the final step of verification to our system. Anyone can verify a $\tt signature$ by confirming that it equals $(\texttt{publicKey} \times m)\bmod{n}$. This works because \begin{aligned} \texttt{signature} &= (\texttt{publicKey} \times m)\bmod{n} \\ (m\times s)\bmod{n} &= \big((s\bmod{n}) \times m\big)\bmod{n}. \end{aligned} And whether you take the modular product of $m$ and $s$ or the modular product of $(s\bmod{n})$ and $m,$ you'll get the same result. Here's an overview of all the steps: With this implementation of $\bmod$ in our system, are all the steps secure?

Cryptographic Signatures

For the purposes of multiplication in $\bmod{\>n}$, knowing the remainder of the $\tt secretKey$ is equivalent to knowing the $\tt secretKey$ itself, so a $\tt publicKey$ of $s \bmod{n}$ isn't secure. Fortunately, the fix isn't far off. We just saw that even if we know one number of a product, it's still hard to find the other one: ${(m \times s) \bmod{n}}$ hides $s$ even if we know $m$. With this in mind, we can change our scheme just a little bit: Suppose everybody agrees on a common number $g$. They still pick a $\tt secretKey$ $s$ as before, but now they share $(g \times s) \bmod{n}$ as their $\tt publicKey$. The $\tt signature$ for a $\tt message$ $m$ is still $(m \times s)\bmod{n}$, but now the verification happens by confirming that $(\texttt{publicKey} \times m)\bmod{n} = (\texttt{signature} \times g)\bmod{n}.$ These will be equal for a valid $\tt signature$ because both sides of the equation contain only $g, s,$ and $m$ as factors. Suppose we use this system with $n = 179$ and $g = 59.$ If you receive the $\tt message$ $101$ from Alice, whose $\tt publicKey$ is $24$, and the included $\tt signature$ is $123,$ was the $\tt message$ really sent by Alice? Assume Alice is the only person with access to her $\tt secretKey$. The calculator below (recovered from Cryptonia) is programmed with the modular arithmetic of the DragonBucks system. You can use it to help answer the question:

Cryptographic Signatures

With this new system, the Cryptonians have successfully divorced their DragonBucks scheme from spellcasting, and can open it up to everyone regardless of their magical abilities. Their modular products scheme has three key features:

1. Everyone has an identity that no one else can fake.
2. Everyone can sign transactions.
3. Everyone can verify that transactions are valid.

A serviceable mathematician herself, the dragon is satisfied with the security of mathematically signed DragonBucks and is happy to process them, allowing the magically challenged newcomers to fully participate in the Cryptonian economy.
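Here is a compact sketch of the final scheme (my own code, not Brilliant's), using the public parameters from the exercise to check Alice's signature:

```python
n, g = 179, 59             # public parameters from the exercise

def make_public(s):
    return (g * s) % n     # publicKey = (g * s) mod n

def sign(m, s):
    return (m * s) % n     # signature = (m * s) mod n

def verify(m, sig, public):
    return (public * m) % n == (sig * g) % n

# Alice's data from the question: publicKey 24, message 101, signature 123.
print((24 * 101) % n, (123 * 59) % n)  # 97 97 -> both sides agree
print(verify(101, 123, 24))            # True: the message checks out
```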
To facilitate the spread of the system, the Cryptonian scholars made calculators that could quickly calculate large modular products and distributed these calculators among the townsfolk. If you're a number theory wizard yourself, you've probably noticed a problem with the security of DragonBucks. Don't worry, this will be addressed later in the course.
2020-09-27 04:54:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 234, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8195621371269226, "perplexity": 477.43422582116295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400250241.72/warc/CC-MAIN-20200927023329-20200927053329-00037.warc.gz"}
http://orzhtml.com/benefits-of-ovopdj/43dc57-log-of-exponential-distribution
# log of exponential distribution
2021-10-26 22:07:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917448878288269, "perplexity": 398.100238560316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00205.warc.gz"}
http://radiodmg.com/?p=549
# Radio DMG Episode 041: "Uniquely Random"

Click on through. You know what to do. Make it happen.

In This Episode: We have Raj Ramayya, Tia Ballard, and David Vincent. The middle interview with Tia Ballard is weirdly chaotic and bizarre. There is audio I had to censor and then ridiculous klaxons. My apologies and maybe you could enjoy it. Actually, you will enjoy it. Just be warned. Okay?

MP3(68MB): Thank you for your co-operation. Time Stamps are just something we feel we need because of White Guilt.
2019-08-24 05:01:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8290758728981018, "perplexity": 4837.940673200972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319724.97/warc/CC-MAIN-20190824041053-20190824063053-00182.warc.gz"}
https://www.transtutors.com/questions/32-net-present-value-graph-and-indifference-cost-of-capital-specialized-consulting-s-1384532.htm
# 32. Net present value graph and indifference cost of capital.

32. Net present value graph and indifference cost of capital. Specialized Consulting Service Company's after-tax net cash flows associated with two mutually exclusive projects, Alpha and Beta, are as follows (cash flow in dollars, at the end of each year):

| Project | Year 0 | Year 1 | Year 2 |
| --- | --- | --- | --- |
| Alpha | (100) | 125 | --- |
| Beta | (100) | 50 | 84 |

a. Calculate the net present value for each project using discount rates of 0, 0.04, 0.08, 0.12, 0.15, 0.20, and 0.25.

b. Prepare a graph as follows. Label the vertical axis ''Net Present Value in Dollars'' and the horizontal axis ''Discount Rate in Percent per Year.'' Plot the net present value amounts calculated in part a. for project Alpha and project Beta.

c. State the decision rule for choosing between projects Alpha and Beta as a function of the firm's cost of capital.

d. What generalizations can you draw from this exercise?
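Not part of the textbook problem, but part a. is easy to check with a few lines of Python (the dollar amounts are taken from the table above):

```python
def npv(rate, cash_flows):
    """Net present value of cash flows at the ends of years 0, 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

alpha = [-100, 125, 0]
beta  = [-100, 50, 84]
for r in [0, 0.04, 0.08, 0.12, 0.15, 0.20, 0.25]:
    print(f"{r:4.2f}  Alpha: {npv(r, alpha):7.2f}  Beta: {npv(r, beta):7.2f}")
```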
2018-11-18 23:08:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35093778371810913, "perplexity": 5676.537389851214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744750.80/warc/CC-MAIN-20181118221818-20181119003818-00257.warc.gz"}
http://math.stackexchange.com/questions/68148/big-o-of-polynomial-functions
# Big O of polynomial functions

I am required to determine whether $\log{(f(x))} \in O(\log{n})$ holds true for all polynomial functions. If I try with $f(x) = x^2$, then I am able to prove it to be correct. But, with $f(x) = -2(x+2)(x-7)$, I am unable to prove it. Am I missing something? Please advise. Thanks

- I assume that $f$ is supposed to be positive for sufficiently large $x$? – JavaMan Sep 28 '11 at 6:01
- Note that $x^m\le x^n$ for $m\le n,x\ge1$ implies that $P(x)=a_dx^d+\cdots+a_0\le(a_d+\cdots+a_0)x^d$ for $x\ge1$ which is obviously $O(x^d)$. – anon Sep 28 '11 at 6:19

## 1 Answer

Assume $\lim_{x\to\infty} p(x) = \infty$, as otherwise the question doesn't make sense. First, if $p(x)=a_nx^n+\cdots+a_0$ then there exists $C$, such that for all sufficiently large $x$, $p(x) \leq C x^n$ (a suitable $C$ would be $a_n+1$). Second, $\forall D,E$ there exists $F$, such that $D\log x+E \leq F\log x$ for all sufficiently large $x$ (you can take $F$ to be $D+1$ for example). Now, for sufficiently large $x$: $$\log p(x) \leq \log (C x^n) = \log C + n\log x \leq F \log x ,$$ where the last step applies the second fact with $D = n$ and $E = \log C$. If $\lim_{x\to\infty} p(x) = -\infty$ a sensible question to ask would be if $\log |p(x)|\in O(\log x)$.

- Thanks for that mate. – Prasanna K Rao Sep 28 '11 at 20:58
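A quick numerical illustration (my own, not from the thread): for the poster's second polynomial, $\log|p(x)| / \log x$ approaches the degree 2, which is exactly the $O(\log x)$ behaviour the answer proves:

```python
import math

def p(x):
    return -2 * (x + 2) * (x - 7)  # the poster's second example

# log|p(x)| / log(x) should tend to the degree, 2:
for x in [10, 100, 10_000, 1_000_000]:
    print(x, math.log(abs(p(x))) / math.log(x))
```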
2015-05-30 02:34:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9255517721176147, "perplexity": 148.5313010097186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.66/warc/CC-MAIN-20150521113210-00128-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-a-molecular-approach-3rd-edition/chapter-1-sections-1-1-1-8-exercises-problems-by-topic-page-40/89a
## Chemistry: A Molecular Approach (3rd Edition)

Published by Prentice Hall

# Chapter 1 - Sections 1.1-1.8 - Exercises - Problems by Topic - Page 40: 89a

#### Answer

$$27.8 L = 2.78\times10^4 cm^{3}$$

#### Work Step by Step

To convert from L to cm$^{3}$, we can first convert liters to milliliters. 1 L = 1000 mL 27.8 L = 27800 mL Since 1000 mL = 1000 cm$^{3}$, 27.8 L = 27800 cm$^{3}$ = $2.78\times10^4 cm^{3}$
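The same conversion as a tiny script (illustrative only; the conversion factors are standard):

```python
def liters_to_cm3(liters):
    # 1 L = 1000 mL and 1 mL = 1 cm^3, so 1 L = 1000 cm^3
    return liters * 1000

print(liters_to_cm3(27.8))  # 27800.0, i.e. 2.78e4 cm^3
```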
2019-01-16 15:57:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6171107292175293, "perplexity": 5359.107559112384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657555.87/warc/CC-MAIN-20190116154927-20190116180927-00383.warc.gz"}
https://blogs.mathworks.com/steve/2011/11/26/exploring-shortest-paths-part-2/
# Exploring shortest paths – part 2 Earlier this month I started exploring the question of computing the shortest path from one object in a binary image to another. I described the following basic algorithm: 1. Compute the distance transform for just the upper left block of foreground pixels. 2. Compute the distance transform for just the lower right block of foreground pixels. 3. Add the two distance transforms together. 4. The pixels in the regional minimum of the result lie along one or more of the shortest paths from one block to the other. This algorithm only works for path-based approximations to the Euclidean distance transform; it doesn't work for the Euclidean distance transform itself. The Image Processing Toolbox function supports three path-based approximations to the Euclidean distance transform: 'cityblock', 'chessboard', and 'quasi-euclidean'. Today I want to compare these three and discuss ambiguities associated with the idea of "shortest path" on an image. Let's work with another small test image: bw = logical([ ... 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 ]); One way to get two binary images, each containing one object from the original, is to use a label matrix: L = bwlabel(bw); bw1 = (L == 1) bw1 = 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 bw2 = (L == 2) bw2 = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 Step 1 of the algorithm is to compute the distance transform for the first object. I'll use the 'cityblock' distance transform approximation. In this approximation, only horizontal and vertical path segments are allowed. Diagonal segments are not allowed. The distance between a pixel and any of its N, E, S, or W neighbors is 1. D1 = bwdist(bw1, 'cityblock') D1 = 2 1 2 3 4 5 6 1 0 1 2 3 4 5 2 1 2 3 4 5 6 3 2 3 4 5 6 7 4 3 4 5 6 7 8 5 4 5 6 7 8 9 6 5 6 7 8 9 10 7 6 7 8 9 10 11 8 7 8 9 10 11 12 9 8 9 10 11 12 13 10 9 10 11 12 13 14 Step 2 is to compute the distance transform for the second object. D2 = bwdist(bw2, 'cityblock') D2 = 14 13 12 11 10 9 10 13 12 11 10 9 8 9 12 11 10 9 8 7 8 11 10 9 8 7 6 7 10 9 8 7 6 5 6 9 8 7 6 5 4 5 8 7 6 5 4 3 4 7 6 5 4 3 2 3 6 5 4 3 2 1 2 5 4 3 2 1 0 1 6 5 4 3 2 1 2 Step 3 is to add the distance transforms together. D = D1 + D2 D = 16 14 14 14 14 14 16 14 12 12 12 12 12 14 14 12 12 12 12 12 14 14 12 12 12 12 12 14 14 12 12 12 12 12 14 14 12 12 12 12 12 14 14 12 12 12 12 12 14 14 12 12 12 12 12 14 14 12 12 12 12 12 14 14 12 12 12 12 12 14 16 14 14 14 14 14 16 The smallest value of D, 12, is the minimum path length (for paths consisting only of horizontal and vertical segments) from one object to the other. Step 4 is to find the set of pixels in the regional minimum of D. This set represents the shortest path. paths = imregionalmin(D); To help visualize the path I'll use imoverlay from the MATLAB Central File Exchange. The code below will overlay the pixels on the shortest path in gray. P = false(size(bw)); P = imoverlay(P, paths, [.5 .5 .5]); P = imoverlay(P, bw, [1 1 1]); imshow(P, 'InitialMagnification', 'fit') Uh oh, why do we have a big rectangular block of gray instead of a single path? The answer is that there isn't a unique shortest path. 
There are many paths you could travel to get from one object to the other that all have the same path length, 12. Here are just a view of them: subplot(2,2,1) imshow(P, 'InitialMagnification', 'fit') x = [2 6 6]; y = [2 2 10]; hold on plot(x, y, 'g', 'LineWidth', 2) hold off subplot(2,2,2) imshow(P, 'InitialMagnification', 'fit') x = [2 2 6]; y = [2 10 10]; hold on plot(x, y, 'g', 'LineWidth', 2); hold off subplot(2,2,3) imshow(P, 'InitialMagnification', 'fit') x = [2 3 3 4 4 5 5 6 6 6 6 6 6]; y = [2 2 3 3 4 4 5 5 6 7 8 9 10]; hold on plot(x, y, 'g', 'LineWidth', 2) hold off subplot(2,2,4) imshow(P, 'InitialMagnification', 'fit') x = [2 2 2 2 2 2 3 3 4 4 5 5 6]; y = [2 3 4 5 6 7 7 8 8 9 9 10 10]; hold on plot(x, y, 'g', 'LineWidth', 2) hold off Next, let's look at the 'chessboard' distance. In this distance transform approximation, a path can consist of horizontal, vertical, and diagonal segments, and the path length between a pixel and any of its neighbors (N, NE, E, SE, S, SW, W, and NW) is 1.0. D1 = bwdist(bw1, 'chessboard') D2 = bwdist(bw2, 'chessboard') D = D1 + D2 D1 = 1 1 1 2 3 4 5 1 0 1 2 3 4 5 1 1 1 2 3 4 5 2 2 2 2 3 4 5 3 3 3 3 3 4 5 4 4 4 4 4 4 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 7 7 7 7 7 7 7 8 8 8 8 8 8 8 9 9 9 9 9 9 9 D2 = 9 9 9 9 9 9 9 8 8 8 8 8 8 8 7 7 7 7 7 7 7 6 6 6 6 6 6 6 5 5 5 5 5 5 5 5 4 4 4 4 4 4 5 4 3 3 3 3 3 5 4 3 2 2 2 2 5 4 3 2 1 1 1 5 4 3 2 1 0 1 5 4 3 2 1 1 1 D = 10 10 10 11 12 13 14 9 8 9 10 11 12 13 8 8 8 9 10 11 12 8 8 8 8 9 10 11 8 8 8 8 8 9 10 9 8 8 8 8 8 9 10 9 8 8 8 8 8 11 10 9 8 8 8 8 12 11 10 9 8 8 8 13 12 11 10 9 8 9 14 13 12 11 10 10 10 You can see that the shortest path length for the 'chessboard' distance is 8 instead of 12. Let's look at the pixels on the various shortest paths. paths = imregionalmin(D) P = false(size(bw)); P = imoverlay(P, paths, [.5 .5 .5]); P = imoverlay(P, bw, [1 1 1]); clf imshow(P, 'InitialMagnification', 'fit') paths = 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1 1 0 0 0 0 1 1 1 1 0 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 0 0 0 1 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 Now the set of gray pixels is not rectangular, but it still encompasses multiple possible shortest paths. And, surprisingly, it seems to include some pixels on which the path actually moves temporarily away from its destination. Here are several possible paths, all of length 8 (according to the 'chessboard' distance transform). subplot(2,2,1) imshow(P, 'InitialMagnification', 'fit') x = [2 3 4 5 6 6 6 6 6]; y = [2 3 4 5 6 7 8 9 10]; hold on plot(x, y, 'g', 'LineWidth', 2) hold off subplot(2,2,2) imshow(P, 'InitialMagnification', 'fit') x = [2 2 2 2 2 3 4 5 6]; y = [2 3 4 5 6 7 8 9 10]; hold on plot(x, y, 'g', 'LineWidth', 2); hold off subplot(2,2,3) imshow(P, 'InitialMagnification', 'fit') x = [2 1 2 1 2 3 4 5 6]; y = [2 3 4 5 6 7 8 9 10]; hold on plot(x, y, 'g', 'LineWidth', 2) hold off subplot(2,2,4) imshow(P, 'InitialMagnification', 'fit') x = [2 3 4 3 2 3 4 5 6]; y = [2 3 4 5 6 7 8 9 10]; hold on plot(x, y, 'g', 'LineWidth', 2) hold off Finally, let's look at the 'quasi-euclidean' distance transform. In this variation, paths from one pixel to another can consist of horizontal, vertical, and diagonal segments. The distance from a pixel to one of its horizontal or vertical neighbors is 1, while the distance to one of its diagonal neighbors is sqrt(2). 
Here's the computation again, this time specifying 'quasi-euclidean': D1 = bwdist(bw1, 'quasi-euclidean'); D2 = bwdist(bw2, 'quasi-euclidean'); D = D1 + D2 D = 12.4853 11.6569 11.6569 12.2426 12.8284 13.4142 14.8284 11.0711 9.6569 10.2426 10.8284 11.4142 12.0000 13.4142 10.4853 9.6569 9.6569 10.2426 10.8284 11.4142 12.8284 10.4853 9.6569 9.6569 9.6569 10.2426 10.8284 12.2426 10.4853 9.6569 9.6569 9.6569 9.6569 10.2426 11.6569 11.0711 9.6569 9.6569 9.6569 9.6569 9.6569 11.0711 11.6569 10.2426 9.6569 9.6569 9.6569 9.6569 10.4853 12.2426 10.8284 10.2426 9.6569 9.6569 9.6569 10.4853 12.8284 11.4142 10.8284 10.2426 9.6569 9.6569 10.4853 13.4142 12.0000 11.4142 10.8284 10.2426 9.6569 11.0711 14.8284 13.4142 12.8284 12.2426 11.6569 11.6569 12.4853 You can see that the shortest path length is approximately 9.6569. min(D(:)) ans = 9.6569 As before, let's find the set of pixels that belong to a shortest path. paths = imregionalmin(D) paths = 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 That looks like trouble! The shortest-path pixels don't actually connect the two objects, as you can see below: P = false(size(bw)); P = imoverlay(P, paths, [.5 .5 .5]); P = imoverlay(P, bw, [1 1 1]); clf imshow(P, 'InitialMagnification', 'fit') Where has our algorithm gone astray? That's the question I'll tackle next time. #### All the posts in this series • the basic idea of finding shortest paths by adding two distance transforms together (part 1) • the nonuniqueness of the shortest paths (part 2) • handling floating-point round-off effects (part 3) • using thinning to pick out a single path (part 4) • using bwdistgeodesic to find shortest paths subject to constraint (part 5) Published with MATLAB® 7.13 |
2020-07-06 09:42:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4613829255104065, "perplexity": 116.00560375385398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890157.10/warc/CC-MAIN-20200706073443-20200706103443-00116.warc.gz"}
https://www.physicsforums.com/threads/new-way-to-derive-sectors-of-a-circle-easy.744571/
# New way to derive sectors of a circle (easy)

So for starters the area of an entire circle has 360º, right? So we can say that: ##1∏r^2## is ##\equiv## to ##360º## So by that logic ##0.5∏r^2## is ##\equiv## to ##180º## And finally ##0.25∏r^2## is ##\equiv## to ##90º## Divide both sides by 9, and you get: ##0.25∏r^2/9## is ##\equiv## to ##10º## From that it's much simpler to multiply both sides by some variable. Simple right?

2. ### pwsnafu

How is that any different to the formula on Wikipedia?

### Staff: Mentor

For starters, the area of a circle is not 360°. That's the measure of the angle of a sector.

4. ### Mentallic

Try using \pi in your latex code to produce ##\pi## instead of using the product symbol. If you want to find the area of a sector of a circle that has angle ##\theta## then multiply the area of a circle by ##\theta/2\pi## so $$A=\pi r^2\frac{\theta}{2\pi}=\frac{r^2\theta}{2}$$ However, this assumes that the angle is in radians, but if you want to use degrees instead then just use the conversion $$\text{angle in radians}=\text{angle in degrees}\times \frac{\pi}{180^o}$$ So the formula is then $$A=\pi r^2\cdot\frac{\phi}{360}$$ Where ##\phi## is in degrees. So if ##\phi=360## which would be the entire circle, then as expected, you get ##A=\pi r^2##
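Mentallic's two formulas are easy to package into one helper. This sketch is mine, not from the thread, and accepts the angle in either radians or degrees:

```python
import math

def sector_area(radius, angle, degrees=False):
    """Area of a circular sector: r^2 * theta / 2, with theta in radians."""
    theta = math.radians(angle) if degrees else angle
    return radius ** 2 * theta / 2

print(sector_area(1, 2 * math.pi))        # pi -> the full unit circle
print(sector_area(1, 360, degrees=True))  # same result via degrees
```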
2015-04-21 13:15:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9113873839378357, "perplexity": 1340.8132548979754}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641468.77/warc/CC-MAIN-20150417045721-00226-ip-10-235-10-82.ec2.internal.warc.gz"}
https://blender.stackexchange.com/questions/94355/how-to-unwrap-multiple-planes-and-have-them-exactly-overlap/94358#94358
# How to unwrap multiple planes and have them exactly overlap?

I'm currently modeling a stadium for a game and am trying to find a quick way to get all the rows of seating to share a UV image. Each plane is the same height but differs in length, but I can just have the UV image repeat itself for longer planes. Here you can see how I'd like each plane to land on the UV map, obviously differing in length. I've just done a smart unwrap, but it lays them all out without overlapping. Is there a way to have them all overlap in the UV without having to painstakingly move each one by hand into the correct area? Even if they were all along the same X plane, not overlapping, that would work fine. Just now, they're stacked with spacing in between them which throws off most of the rows of seats, making them cut in half or floating and other issues. Any help is greatly appreciated!

I found a workaround. If I unwrapped with Smart UV Project, it placed all the unwrapped planes tightly together, with no space between them. So, if I aligned one of the planes to the corresponding UV image, the rest of them repeated correctly.

You can do this by using the Magic UV addon. After enabling the addon, go to Edit Mode, select one plane, unwrap it how you need, then press U -> Copy/Paste UV -> Copy UV. Next, select all planes of the same size and press U -> Copy/Paste UV -> Paste UV. In this case you need to repeat this operation as many times as you have variations in length.
2021-09-26 03:53:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2050204575061798, "perplexity": 1104.0372567955199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00240.warc.gz"}
https://mathematica.stackexchange.com/questions/198120/is-there-a-faster-way-to-calculate-absz2-numerically
# Is there a faster way to calculate Abs[z]^2 numerically?

Here I'm not interested in accuracy (see 13614) but rather in raw speed. You'd think that for a complex machine-precision number z, calculating Abs[z]^2 should be faster than calculating Abs[z] because the latter requires a square root whereas the former does not. Yet it's not so: s = RandomVariate[NormalDistribution[], {10^7, 2}].{1, I}; Developer`PackedArrayQ[s] (* True *) Abs[s]^2; // AbsoluteTiming // First (* 0.083337 *) Abs[s]; // AbsoluteTiming // First (* 0.033179 *) This indicates that Abs[z]^2 is really calculated by summing the squares of real and imaginary parts, taking a square root (for Abs[z]), and then re-squaring (for Abs[z]^2). Is there a faster way to compute Abs[z]^2? Is there a hidden equivalent to the GSL's gsl_complex_abs2 function? The source code of this GSL function is simply to return Re[z]^2+Im[z]^2; no fancy tricks.

• Here's an even slower way: (Re[#]^2 + Im[#]^2) & /@ s. And even slower still: Total[ReIm[#]^2] & /@ s – bill s May 10 '19 at 14:24

There's Internal`AbsSquare: s = RandomVariate[NormalDistribution[], {10^7, 2}].{1, I}; foo = Internal`AbsSquare[s]; // AbsoluteTiming // First murf = Abs[s]^2; // AbsoluteTiming // First (* 0.022909 0.063441 *) foo == murf (* True *)

• Ah yes precisely what I was looking for, many thanks Michael! Is there a repository of such tricks? – Roman May 10 '19 at 14:25
• @Roman I was just looking. I thought there was a post about useful Internal` functions, but I couldn't find it just now. The Internal` context contains some useful numerical functions like Log1p and Expm1. Statistics`Library` also contains some nice, well-programmed functions. – Michael E2 May 10 '19 at 14:31
• – Chris K May 10 '19 at 14:31
• @ChrisK That must be it! Thanks. – Michael E2 May 10 '19 at 14:32
• @CATrevillian I would have thought it was in the MKL (Intel Math Kernel Library), but I didn't find it there. I guess I don't know. – Michael E2 May 11 '19 at 3:10

for v5.2, s Conjugate[s] is fast too, ref the pic:

• On my computer, Re[s*Conjugate[s]] is about five to ten times slower than Internal`AbsSquare[s]. What is your $Version and what CPU do you have? – Roman Jul 8 '20 at 12:27
• Hi, people here generally like users to post code as Mathematica code instead of just images or TeX, so they can copy-paste it. It makes it convenient for them and more likely they will engage with your posts. You may find this meta Q&A helpful. -- BTW, have you seen RandomVariate[NormalDistribution[], {10^7, 2}]? It's much faster on my machine. Ditto for RandomComplex[]. – Michael E2 Jul 8 '20 at 12:48
• @Roman Re[] is unnecessary, though it's very fast. My version is very old, it's v5.2. So there's no Internal`AbsSquare[]. – infoage Jul 8 '20 at 18:14
• @MichaelE2 Thanks, man. My version is v5.2. This code is so simple that I had no motivation to paste text version at that moment. Sorry. – infoage Jul 8 '20 at 18:18
• Maybe it's worth adding the version info to your answer. It turns out that I don't have the Statistics`NormalDistribution package (in V12.1.1), I suppose because it's been replaced by top-level statistics functions some versions ago. – Michael E2 Jul 8 '20 at 19:08
2021-04-16 18:21:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42450159788131714, "perplexity": 4203.559915957922}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088245.37/warc/CC-MAIN-20210416161217-20210416191217-00502.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-3-section-3-2-the-graph-of-a-function-3-2-assess-your-understanding-page-223/54
## College Algebra (10th Edition)

$y=\frac{2}{3}x+8$ We find a line with a slope of $2/3$ that passes through $(-6,4)$ using the point-slope form: $y-y_{1}=m(x-x_{1})$ $y-4=\displaystyle \frac{2}{3}(x-(-6))$ $y-4=\displaystyle \frac{2}{3}(x+6)$ $y-4=\frac{2}{3}x+4$ $y=\frac{2}{3}x+8$
2018-04-20 13:02:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667599558830261, "perplexity": 229.74758275690567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937780.9/warc/CC-MAIN-20180420120351-20180420140351-00539.warc.gz"}
http://kopaleo.deviantart.com/
# KopaLeo

Feels like I just cried.

## Groups

Admin of 1 Group Member of 14 Groups

## deviantID

Cosmia Nebula Artist | Hobbyist | Digital Art People's Republic of China

So hello, and welcome to my dA page. I'm just a little pony that studies inapplicable math, befit only for an infirm ivory tower dweller. I'm also interested in anatomy, biology and death. Expect to find a lot of depressing stuffs in my gallery. My ponysona is Cosmia Nebula, and she is a freelancing astronomer and mathematician living somewhere near Ponyville Library. Pinkamena is my favorite pony. "Equestria cannot exist, for our world would bring corruption to such a miracle if it ever does. The human chaos from our world would tear their world apart. Their nonexistence is what is protecting their perfection."

It is a fact that $$\sum_{i=m}^{n} i = \frac{(m + n)(n - m + 1)}{2}$$ Or, in other words... "The summation from m to n is half of the product of the sum of m and n, and n minus m plus 1." (You see? That's why mathematicians use formulas rather than words. Kudos to those Medieval Arabic scholars who wrote entire books on algebra entirely in words!)

But enough of this gabbing. 55 questions.

Ten things you wish you could say to 10 different people (don't list names):
1. "Fuck, fuck you all --- I meant, fuck you both."
2. ditto
3. "Why did you abandon mathematics? Financial mathematics is NOT mathematics -- it's betrayal."
4. "I told you I suck at playing computer games."
5. "Suicide is always a good option --- just rarely the optimal one."
6. "I told you I don't need nice penmanship. Fuck you." [some teacher used to require us to write pretty Hanzi... I despised it because I could not write well.]
7. "How much money is enough? (why would you want that much money? You could use that time for some other things, you know.)"
8. "I'm sorry. I should have helped you."
9. "It does not make sense. How can you be so sure with so little supporting evidence?"
10. "I'm not fat now."

1. I'm a mathematician. I like mathematics more than most people.
2. It is hard for me to like what I draw, so I procrastinate about drawing way too much.
3. I have not cut my hair for more than a year.
4. I was raised in China.
5. I like to wear a cat collar with a bell on it.
6. I am extremely good at taking exams.
7. Talking to non-friends makes me want to scream, and when I get to be alone again, I would.
8. Pinkamena is my girlfriend.
9. I love eating muffins, influenced by the super adorable Derpy Hooves.

Eight ways to win your heart friendship:
1. Understand and accept the melancholic/pessimistic way of thinking about things.
3. Not having lots of friends. Certainly mustn't be a socialite.
4. Honest.
5. Tolerant of "queer" people.
6. Do art, or mathematics, or BOTH!!
7. Not conspicuously masculine in appearance, speech, or thoughts. Males scare me.
8. Give me hugs (virtual ones are fine).

Seven things that cross your mind a lot:
1. I want to go to bed.
2. Why did I get out of bed at all?
3. Will they ever shut up and let me go?
4. Tell me what you want me to say and I'll say it!
5. Should I clop now?
6. I want my organs to be fully donated.
7. Would they find my body before it decomposes?

Six things you wish you never had to do:
1. Eat. Eating is such a chore. Shopping and preparing food, doubly so.
2. Live with parents. It's difficult.
3. Speak Chinese. It's an ugly language and I feel like I have to brush my teeth after using it.
4. Say hello to non-friends. They are not my friends.
Is it not enough if I'm not an obstruction in their lives?
5. Take classes I don't like. What is a mathematician supposed to do with knowledge of ancient Chinese texts or Chemistry experimental procedures?
6. Be born.

I don't care much about sex, so I'm just going to talk about what physical traits appear pretty to me, rather than my "turn-on/turn-off" per se.

Five turn offs:
1. Masculine traits, including, but not limited to: facial hair, sharp muscle lines, dramatic musculature, buzz cut. Think Bulk Biceps.
2. Hyperfeminine traits. Think Marilyn Monroe or Rarity, and you'd get the picture.
3. Obesity. Don't laugh.
4. Piercings.
5. Smoke, alcohol, or drugs.

Four turn ons:
1. Choker and hoodie, preferably in gray/black, like Lonely Inky.
2. Hairstyle like Pinkamena Diane Pie, Inkie Pie, Maud Pie, Blinkie Pie, or Twilight Sparkle.
3. Gentle and melancholic expression.
4. Big hooves.

1. Not lose my friends due to misunderstandings.
2. Not die an ugly death.
3. Not go blind.

Two things you want to do before you die:
1. Meet friends. They are so far away.
2. Look prettier.

One possession you could not live without:
1. My exceptional intelligence. I'm an intellectual.

• Mood: Pain
• Listening to: Who Will Save Your Soul - Jewel
• Eating: Zoloft
• Drinking: Water

## Friends

- Jul 27, 2014: Thanks for the watch <:3.
- Jul 23, 2014: Gosh, thank you so much for the +watch! I'm really glad ;u;
- Jun 10, 2014: Thank ya for the Watch ^^
- May 5, 2014: Hey there! It's finished! --> mane-shaker.deviantart.com/art… Hope you like It!
- Apr 23, 2014: Happy birthday, Kopa. Have one of these:
- Apr 22, 2014: Wishing you the very best for your birthday!
- LizziePotatoPad, Apr 22, 2014: Happy birthday.
- Apr 22, 2014: Happy Birthday.
2014-12-21 01:05:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2561751902103424, "perplexity": 10933.341445423539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770554.119/warc/CC-MAIN-20141217075250-00158-ip-10-231-17-201.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:0823.46018
Property $$(M)$$, $$M$$-ideals, and almost isometric structure of Banach spaces. (English) Zbl 0823.46018

In a paper from 1993 the first-named author settled the problem of how to characterize the spaces on which the compact operators form an M-ideal [Ill. J. Math. 37, No. 1, 147-169 (1993)], a problem which was open for many years. The following definitions played an essential part in this description: A Banach space $$X$$ is said to satisfy property (M) (resp. (M$$^*$$)) if $$\limsup\| u+ x_n\| = \limsup\| v+ x_n\|$$ (resp. $$\limsup\| u^*+ x^*_n\| = \limsup\| v^*+ x^*_n\|$$) whenever $$\| u\| = \| v\|$$ and $$(x_n)$$ tends to zero weakly (resp. $$\| u^*\| = \| v^*\|$$ and $$(x^*_n)$$ tends to zero with respect to the weak$$^*$$-topology).

Here these properties, some variants, and a number of important consequences are thoroughly discussed. Let us mention a few of these results:
– (M) and (M$$^*$$) are equivalent if $$X$$ does not contain $$\ell^1$$.
– Kalton's result from 1993 can be refined: the compact operators on a separable space $$X$$ are an M-ideal in all bounded operators iff $$X$$ has (M), $$X$$ contains no copy of $$\ell^1$$, and $$X$$ has the metric approximation property.
– For a closed infinite-dimensional subspace $$X$$ of $$L_p$$ (where $$1\leq p\leq \infty$$, $$p\neq 2$$) the unit ball is compact with respect to the $$L^1$$-norm iff $$X$$ is isomorphic with arbitrarily small constants to subspaces of $$\ell^p$$.
– Similar results hold for subspaces of the Schatten classes.
Finally, an operator version of (M) is also given, and the paper closes with some open problems.

MSC:
46B20 Geometry and structure of normed linear spaces
46B04 Isometric theory of Banach spaces
2021-10-25 12:56:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9457764625549316, "perplexity": 386.68257190035285}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587711.69/warc/CC-MAIN-20211025123123-20211025153123-00091.warc.gz"}
https://phys.libretexts.org/Courses/College_of_the_Canyons/Physci_101_Lab%3A_Physical_Science_Laboratory_Investigations_(Ciardi)/29%3A_Investigation_28__The_Electromagnetic_Connection/29.6%3A_General_Questions
# 29.6: General Questions

1. Describe any correlation between number of coils and magnetic strength.
2. Describe any differences between the nail electromagnets and the bolt electromagnets. Explain why any differences may occur.

## Contributors and Attributions

29.6: General Questions is shared under a CC BY license and was authored, remixed, and/or curated by LibreTexts.
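Background for question 1 (an editor's note, not part of the original lab sheet): for an ideal solenoid the field strength grows linearly with the number of turns per unit length,

$$B = \mu_0 n I = \mu_0 \frac{N}{L} I,$$

so, roughly, winding more turns onto the same nail or bolt should give a stronger electromagnet; this is the correlation question 1 is probing. A ferromagnetic core multiplies this further by the core's relative permeability, which is one place differences between nail and bolt cores can enter.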
2023-02-03 04:21:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9816486835479736, "perplexity": 1813.3291730013136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00691.warc.gz"}
https://platonicrealms.com/encyclopedia/cone
cone A cone is an infinite surface of revolution generated as shown: Figure 1: Generating a cone. The term also refers to the solid bounded by one of the nappes and a flat elliptical base. If in this case the base is circular (at right angles to the axis), the cone is called a right circular cone. Figure 2: The cone as a solid. The surface area S (excluding the base) and volume V of a right circular cone are given by $\begin{eqnarray*} S & = & \pi r \sqrt{r^2+h^2} \\ & & \\ V & = & \frac{\pi r^2h}{3} \end{eqnarray*}$
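A quick numerical check of these formulas (a minimal Python sketch; the function names are mine, not part of this encyclopedia entry):

```python
import math

def cone_surface_area(r: float, h: float) -> float:
    """Lateral surface area of a right circular cone, base excluded."""
    return math.pi * r * math.sqrt(r**2 + h**2)

def cone_volume(r: float, h: float) -> float:
    """Volume of a right circular cone."""
    return math.pi * r**2 * h / 3

# With r = 3 and h = 4 the slant height is 5, so S = 15*pi and V = 12*pi.
print(cone_surface_area(3, 4), 15 * math.pi)  # both ~47.12
print(cone_volume(3, 4), 12 * math.pi)        # both ~37.70
```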
2021-01-17 16:21:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8228099346160889, "perplexity": 611.0887331832707}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513062.16/warc/CC-MAIN-20210117143625-20210117173625-00178.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-5-section-5-2-properties-of-rational-functions-5-2-assess-your-understanding-page-351/18
## College Algebra (10th Edition)

All real numbers; $x\ne -3, x\ne 4$

The domain of the function is all real numbers except the values where the denominator equals 0. Thus, we set the denominator equal to 0: $$(x+3)(4-x)=0 \\ x=-3, 4$$ So we know that x cannot equal -3 or 4.
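A quick way to double-check the excluded values (a sympy sketch, not part of the textbook's solution):

```python
import sympy as sp

x = sp.symbols('x')
denominator = (x + 3) * (4 - x)

# The domain excludes exactly the roots of the denominator.
print(sp.solve(sp.Eq(denominator, 0), x))  # [-3, 4]
```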
2018-04-20 03:32:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9461999535560608, "perplexity": 326.5833084313023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937113.3/warc/CC-MAIN-20180420022906-20180420042906-00160.warc.gz"}
http://math.stackexchange.com/questions/215951/comparable-graduate-ode-text-suggestions
# Comparable Graduate ODE Text Suggestions

First off, I'm very sorry if this sort of question is not allowed here. I've seen a couple of similar questions on the OverFlow site, but I think discussions of basic material should be kept to this site. Anyway, in my ODE course we are using the introductory ODE text written by Jack Hale: http://www.amazon.com/Ordinary-Differential-Equations-Dover-Mathematics/dp/0486472116/ref=sr_1_1?ie=UTF8&qid=1350512056&sr=8-1&keywords=jack+Hale

Actually, it's only introductory in the sense that it is self-contained, but it is rather advanced (think of those Dover reprints in size 8 font, half of which is written in Greek letters). The table of contents and first pages give a hint of the level and material covered (see below). The proofs encountered in ODE seem to me VERY VERY unintuitive, in the sense that one (without experience) would be very unlikely to ever reproduce such proofs on their own. Often they begin by establishing (without any motivation) estimates which are then used to prove things like "now we see such and such is nondecreasing," and only once three fourths through the proof does one see why it was even necessary to establish any of the prerequisite facts. I'm hoping this is just the style of the author, but perhaps this is just the specific taste of ODE theory. In any case, I would like some recommendations for texts that cover similar material on ODE theory that people here have found useful in the past.

There are many ODE texts, and they cover different parts of ODE theory. This text (and our course) is aimed specifically at establishing the general theoretical framework for ODE theory (e.g. preliminaries on fixed point theorems/Banach spaces, Peano existence, Picard uniqueness, continuation of solutions, continuous dependence on parameters, differential estimates, further theory on linear systems, etc.), and then going straight to stability analysis (e.g. analysis of linear systems, perturbations of non-linear systems, Poincare-Bendixson theory, and Liapunov methods) and finally perturbation methods (e.g. asymptotic expansions, averaging, multiple scales, etc.). In other words, this is not a course on elementary solution methods encountered in undergraduate courses, nor is it a course on advanced analytic topics such as Sturm-Liouville theory and eigenfunction expansions. It is very much an "applied" course.

Thank you in advance, and again I apologize if this is not strictly a "Math Stackexchange" question.

EDIT (1): I know that several people have used Strogatz' non-linear dynamics text, which covers the latter two topics discussed (actually, it covers very little on perturbation methods). However, this text is extremely non-rigorous, and almost nothing is proved (it has the flavor of a catalog of various methods and corresponding examples). So it is not the companion text I am looking for.

- I think a recent book by Gerald Teschl, available free on his site, is quite good. mat.univie.ac.at/~gerald/ftp/book-ode – Artem Oct 18 '12 at 0:05
2014-07-24 07:12:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8789668679237366, "perplexity": 508.1182538245063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888210.96/warc/CC-MAIN-20140722025808-00167-ip-10-33-131-23.ec2.internal.warc.gz"}
https://learn.careers360.com/ncert/ncert-solutions-class-8-maths-chapter-10-visualizing-solid-shapes/
# NCERT Solutions for Class 8 Maths Chapter 10 - Visualizing Solid Shapes

NCERT Solutions for Class 8 Maths Visualizing Solid Shapes - We live in a 3-dimensional world. Every object we can see or touch has three dimensions, which can be measured by length, width, and height. For example, a room can be described by three dimensions: length, width, and height. The NCERT Solutions for Class 8 Maths Visualizing Solid Shapes are prepared and explained by maths experts to help students clear their doubts. In Chapter 10, Visualizing Solid Shapes, we study solid objects such as cubes, cuboids, cones, spheres, and hemispheres. We have already learnt basic geometry, in which plane shapes like a circle, rectangle, square, or rhombus are measured by length and width. In NCERT Class 8 Maths Chapter 10 Visualizing Solid Shapes we also learn how a 3-dimensional object looks different from different positions, so that it can be drawn from different angles. For example, look at the different views of a brick.

Let's try to visualize a few more shapes yourself. For example: take a cylinder. What is its side view? Is it circular? Now take bangles of the same size and hold them together. Does it look like a cylinder? From the above activity, we observe that when we hold together a few bangles, which are circular in shape and of small thickness, we obtain a hollow cylinder. (A small rendering sketch after the chapter list below shows one cuboid from three viewpoints.) In these NCERT solutions for Class 8 Chapter 10 - Visualizing Solid Shapes, there are 3 exercises with 16 questions in them.

## Important topics of NCERT Class 8 Maths Chapter 10 Visualizing Solid Shapes-

• 10.1 Introduction
• 10.2 Views of 3D-Shapes
• 10.3 Mapping Space Around Us
• 10.4 Faces, Edges, and Vertices

## NCERT Solutions For Class 8 Maths: Chapter-wise

Chapter -1 Rational Numbers
Chapter -2 Linear Equations in One Variable
Chapter-3 Understanding Quadrilaterals
Chapter-4 Practical Geometry
Chapter-5 Data Handling
Chapter-6 Squares and Square Roots
Chapter-7 Cubes and Cube Roots
Chapter-8 Comparing Quantities
Chapter-9 Algebraic Expressions and Identities
Chapter-11 Mensuration
Chapter-12 Exponents and Powers
Chapter-13 Direct and Inverse Proportions
Chapter-14 Factorization
Chapter-15 Introduction to Graphs
Chapter-16 Playing with Numbers
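To experiment with the idea of viewing the same solid from different positions, here is the small sketch referred to above (not part of the NCERT material; it assumes matplotlib is installed) that draws one cuboid from a front, top, and side viewpoint:

```python
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(9, 3))
# The same 2 x 1 x 3 cuboid, rendered from three different viewpoints.
views = [("front view", 0, -90), ("top view", 90, -90), ("side view", 0, 0)]
for i, (title, elev, azim) in enumerate(views, start=1):
    ax = fig.add_subplot(1, 3, i, projection="3d")
    ax.bar3d(0, 0, 0, 2, 1, 3, color="tan", edgecolor="black")
    ax.view_init(elev=elev, azim=azim)
    ax.set_title(title)
plt.show()
```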
2019-11-20 19:46:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37016430497169495, "perplexity": 2675.2166859340523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670601.75/warc/CC-MAIN-20191120185646-20191120213646-00130.warc.gz"}
http://web.math.rochester.edu/news-events/events/single/1861
# Algebra/Number Theory Seminar

## An upper bound for image set sizes of iterated quadratic maps

George Grell, U Rochester

Wednesday, April 3rd, 2019
1:00 PM - 2:00 PM
Hylan 1106A

Let $f(x)$ be a quadratic rational map defined over the field $\mathbb{F}_q$. Then work of Pink (2013) and Juul, Kurlberg, Madhu, and Tucker (2015) classifies the possible Galois groups that arise from considering $f^n(x)-t$ over the function field $\mathbb{F}_q(t)$. For one class of Galois groups we describe the proportion of elements of the Galois group with fixed points, and use a lesser-known generalization of Burnside's Lemma to show this is an upper bound across all classes. The Chebotarev Density Theorem translates this result to a bound on image set sizes.

Event contact: dinesh dot thakur at rochester dot edu
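As a toy illustration of the "proportion of elements with fixed points" mentioned in the abstract (a Python sketch using the symmetric group $S_4$ as a stand-in; the talk concerns Galois groups of iterates, not $S_4$):

```python
from itertools import permutations

n = 4
group = list(permutations(range(n)))  # S_4 acting on {0, 1, 2, 3}
with_fixed_point = [g for g in group if any(g[i] == i for i in range(n))]

# 15 of the 24 permutations fix at least one point; as n grows this
# proportion tends to 1 - 1/e, by inclusion-exclusion (derangements).
print(len(with_fixed_point), "/", len(group))  # 15 / 24
```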
2019-08-21 00:21:38
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745911717414856, "perplexity": 1573.757273051088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00112.warc.gz"}
https://www.physicsforums.com/threads/laplace-operator-help.296254/
# Homework Help: Laplace operator help

1. Mar 1, 2009

### Ed Aboud

1. The problem statement, all variables and given/known data

Let x = (x,y,z). Recall that the vector x is determined by its direction and length r = |x| = $\sqrt{x^2 + y^2 + z^2}$ and assume we are given a function f which depends only on the length of x, f = f(r). Show that $$\Delta f = f'' + \frac{2}{r} f'$$ where $$f' = \frac{\partial f}{\partial r}$$

2. Relevant equations

3. The attempt at a solution

$$u = x^2 + y^2 + z^2$$ $$r = \sqrt{u}$$ $$\frac{\partial r}{\partial x} = \frac{\partial \sqrt{u}}{\partial u} \frac{\partial u}{\partial x} = \frac{1}{2}\left(\frac{1}{\sqrt{u}}\right)(2x) = \frac{x}{\sqrt{u}}$$ $$\frac{\partial ^2 r}{\partial x^2} = \frac{\partial (x)}{\partial x } \frac{1}{\sqrt{u}} + x \frac{\partial \frac{1}{\sqrt{u}}}{\partial u} \frac{\partial u }{\partial x}$$ $$= \frac{1}{\sqrt{u}} - x^2 \frac{1}{\sqrt{u^3}}$$

Since f and u are symmetric in x,y,z,

$$\frac{\partial ^2 r}{\partial y^2} = \frac{1}{\sqrt{u}} - y^2 \frac{1}{\sqrt{u^3}}$$ $$\frac{\partial ^2 r}{\partial z^2} = \frac{1}{\sqrt{u}} - z^2 \frac{1}{\sqrt{u^3}}$$ $$x^2 + y^2 + z^2 = u$$ $$\Delta r = \left(\frac{1}{\sqrt{u}}\right) - x^2 \left(\frac{1}{\sqrt{u^3}}\right) + \left(\frac{1}{\sqrt{u}}\right) - y^2 \left(\frac{1}{\sqrt{u^3}}\right) + \left(\frac{1}{\sqrt{u}}\right) - z^2 \left(\frac{1}{\sqrt{u^3}}\right)$$ $$= \frac{3}{\sqrt{u}} - (x^2 + y^2 + z^2) \frac{1}{\sqrt{u^3}}$$ $$= \frac{2}{\sqrt{u}}$$ $$= \frac{2}{r}$$

I see that this is a part of the solution but I have no idea what to do to get the rest. Any help would be greatly appreciated because I'm lost and it has to be in tomorrow morning.

2. Mar 1, 2009

### yyat

Compute $$\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)f(r(x,y,z))$$ using the chain rule.

3. Mar 2, 2009

### Ed Aboud

I'm not really sure how to apply the chain rule in this case. Is there any general formula that I can use?

4. Mar 2, 2009

### Ed Aboud

Actually it's cool, I got it. Thanks for the help!
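For readers who do not immediately see the last step: applying the chain rule as yyat suggests, and using the derivatives already computed above (an editor's sketch; note that $$\frac{x}{\sqrt{u}} = \frac{x}{r}$$), gives

$$\frac{\partial f}{\partial x} = f' \frac{\partial r}{\partial x} = f' \frac{x}{r}, \qquad \frac{\partial^2 f}{\partial x^2} = f'' \frac{x^2}{r^2} + f' \left( \frac{1}{r} - \frac{x^2}{r^3} \right),$$

and summing the analogous terms over x, y, z with $$x^2 + y^2 + z^2 = r^2$$:

$$\Delta f = f'' \frac{r^2}{r^2} + f' \left( \frac{3}{r} - \frac{r^2}{r^3} \right) = f'' + \frac{2}{r} f'.$$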
2018-07-22 03:31:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.528030514717102, "perplexity": 753.6486540512819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593004.92/warc/CC-MAIN-20180722022235-20180722042235-00254.warc.gz"}
http://curiouscheetah.com/BlogMath/pizza-time/
# Pizza Time! The Internet is in a tizzy yet again about the evils of mathematics education. At least Common Core isn’t being demonized quite as front-and-center as in the recent past, but still. This time it’s about pizza. Which means every mathematics educator reading this will know it’s about fractions, because that’s why we ever mention pizza in math class. (Random comment: “tizzy” and “pizza” have the same cryptographic profile. Good to know.) At any rate, Marty and Luis were eating some pizza. Even though Marty ate 4/6 of his pizza and Luis ate 5/6 of his, Marty had more. How is that possible? The student gave the most obvious answer: Marty’s pizza was bigger. (Alternative answers include that Marty also ate other people’s pizzas and that “pizza” is a mass noun, so Marty had several pizzas.) The teacher marked it wrong, saying that the problem itself was describing an impossible scenario. The correct answer to the worksheet’s “How is that possible?” should be “That is not possible.” … and cue the Internet table-pounding. “Moron!” declares the Internet of the teacher. Of the situation, A Plus declares it “so senseless, so confoundingly stupid, and so frustratingly obvious”, elaborating: “Here’s why the question and the teacher’s answer suck.” Now, the teacher was incorrect. I do not disagree with that. Whether it was a momentary lapse of reason by an otherwise competent third grade teacher or the act of a “f***ing idiot” who “should not be teaching math at all” (as the fine pedagogical experts of the A Plus comments section feel), I can’t say. I don’t know the teacher. I’ve made similar gaffes. Heck, Terence Tao called 27 a prime number on national TV (at 3:00 in the clip), and I dare anyone to say he’s a “f***ing idiot” who “should not be teaching math at all.” G’won, I dare you. So I have no comment about the general mathematical abilities of this particular teacher, but this was a mistake. The student is correct. What is not clear is the assumption that the textbook was likewise incorrect or poorly worded. I can’t find the answer key for that specific worksheet, but I did find keys for similar sheets* from the same publisher. Here’s the relevant question-and-answer: As you can see, Pearson (the publisher) is fully aware that the same type-of-thing (salads and, presumably, pizzas) can come in different sizes. This is a better question than the Marty/Luis question for a few reasons: • The word “Reasoning” is a little more accessible to the average third-grader than “Reasonableness” (although it’s still Tier II). • It’s easier to see salad as coming in a variety of sizes, as opposed to pizza (especially in math class, where pizza is usually just one size). • Students are overtly asked to assess who is correct, rather than being given a situation that may well contradict their expectations and being forced to overcome that confusion. But regardless, there’s nothing so poorly phrased about Pearson’s question about pizza so as to render its answer even remotely controversial: The student is clearly correct, the teacher is clearly incorrect. Indeed, with the salad question, there would at least be some justification for the teacher’s claim, but the pizza question doesn’t ask for an assessment of the validity of the situation. It plainly says that the situation exists and asks why. I’m not letting Pearson off by any means. My defense of Pearson stops at saying that the question was clear and that the student answered it correctly. 
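To put numbers on the student's answer (sizes mine, not the worksheet's): suppose Marty's pizza is a 16-inch pie, with area $$\pi \cdot 8^2 \approx 201$$ square inches, and Luis's is a 12-inch pie, with area $$\pi \cdot 6^2 \approx 113$$ square inches. Then Marty ate $$\frac{4}{6} \cdot 201 \approx 134$$ square inches of pizza while Luis ate $$\frac{5}{6} \cdot 113 \approx 94$$. A bigger fraction of a smaller whole can easily be less pizza.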
The question was not poorly phrased, but it was very poorly presented. Have I mentioned this is a worksheet for third graders? I did find it online*, just not with an answer key. On the sheet, questions 1 to 6 are plain procedural questions: Write >, <, or =. The fractions all have the same denominator, so all students have to do is identify whether 3 is less than, greater than, or the same as 2. Message drummed. No units. Question 7 asks students, “Why is $$\frac{6}{8}$$ greater than $$\frac{5}{8}$$ but less than $$\frac{7}{8}$$?” At this point, students have been primed. They have been given multiple problems with the same denominator, without units. They are implicitly told to not worry about units. Then comes question 8: “Reasonableness Marty ate $$\frac{4}{6}$$ of his pizza and Luis ate $$\frac{5}{6}$$ of his pizza. Marty ate more pizza than Luis. How is that possible?” The units appear to be the same (“his pizza”). So what gives? Of course, students are supposed to recognize that “Marty’s pizza” is one unit and “Luis’s pizza” is a different unit, and we have no way of knowing from the information given whether those units are the same or not. But how does a student pick up on this? The one major clue that this question is different is a five-syllable Tier II word that the teacher may not have prepared them for. Given that the teacher didn’t have this particular answer top-of-mind themselves, I’m guessing not. There are two more questions on the sheet. Question 9 is another thing that looks like a story problem (i.e., it’s a bunch of words): “Two fractions have the same denominator. Which is the greater fraction: the fraction with the greater numerator or the lesser numerator?” The answer to that question appears to directly contradict the answer to question 8. Then there’s another procedural question, this time in multiple choice form so the third-graders are properly trained for the ACT/SAT they’re going to be taking for keepsies in eight years. So, the entire sheet is beating the drum that units don’t matter, that the denominator of a fraction is effectively a unit (something the Common Core itself gets close to doing), and so on. In the middle of this is a question the answer to which relies on realizing that units do matter. Students should realize that units matter. They should be reminded that story problems carry assumptions. But this question is a complete gotcha in this context: All the other questions on the page prime students to disregard units, and pizzas are one of the go-to objects for fractions in math class (so students might no longer think of “math class pizza” and “real world pizza” as being the same sort of thing). The teacher can’t be entirely faulted for being in the “math class” zone, for that matter. Point being: It’s a useful question in an unfair context. Pearson should reframe it (and perhaps they have, which is why I found a different version of the exercise). * These links might fail if Pearson finds them and C&Ds them into oblivion. As they should, but since the links work now, there you go. If they’re gone by the time you click on them, sorry.
2020-02-19 02:42:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.622032880783081, "perplexity": 1614.431748669031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143963.79/warc/CC-MAIN-20200219000604-20200219030604-00433.warc.gz"}
http://blomquist.xyz/3dcollisions/content/Chapter1/closest_point_on_plane.html
# Closest Point on Plane

Sometimes knowing if a point is on a plane or not isn't enough. Imagine your character walking in a room. You want the feet of your character to be on the floor. This is where knowing the closest point on a plane helps. You can get your character's position, and the closest point on the ground plane is where you need to place him. Technically, we find the closest point on a plane using an orthographic projection, but there is an easier way to understand this.

Here we have a plane with a normal N. We want to find the closest point on the plane to P0; on the image that would be Px. Take note, the point Px is some distance away from P0, but that distance is in the direction of the normal! The closest point on a plane will always be in the direction of the plane normal from the test point. All we have to figure out is how far the test point is from the plane. In the above picture, that distance is d.

### The algorithm

Thanks to the above image, we can deduce that given a plane and a point, the closest point on the plane will be the point minus the distance of the point from the plane in the direction of the plane's normal. The question now becomes, how do we find the distance between a point and a plane? The last section, Point on Plane, mentioned this as the formula: Dot(SomePoint, Normal) == Distance

This becomes pretty easy to implement in a function like so:

// THIS BLOCK IS JUST SAMPLE CODE, DON'T COPY IT!
Point ClosestPointOnPlane(Plane plane, Point point) {
    // This works assuming plane.Normal is normalized, which it should be
    float distance = DOT(plane.Normal, point) - plane.Distance;
    // If the plane normal wasn't normalized, we'd need this:
    // distance = distance / DOT(plane.Normal, plane.Normal);
    return point - distance * plane.Normal;
}

Add the following function to the Collisions class:

public static Point ClosestPoint(Plane plane, Point point);

And provide an implementation for it!

### Unit Test

You can download the samples for this chapter to see if your result looks like the unit test. The following code is visual only; if you make any mistakes, no error is printed! The image is straightforward: there is a plane, the test point is red, the closest point is green. There is a blue line going from the test point to the closest point.
The magenta line is the normal of the plane (rendered on top of the blue line).

using OpenTK.Graphics.OpenGL;
using Math_Implementation;
using CollisionDetectionSelector.Primitives;

namespace CollisionDetectionSelector.Samples {
    class ClosestPointPlaneSample : Application {
        protected Vector3 cameraAngle = new Vector3(120.0f, -10f, 20.0f);
        protected float rads = (float)(System.Math.PI / 180.0f);

        Plane plane = new Plane(new Point(5, 6, 7), new Point(6, 5, 4), new Point(1, 2, 3));
        Point point = new Point(2f, 5f, -3f);

        public override void Intialize(int width, int height) {
            GL.PointSize(2f);
        }

        public override void Render() {
            Vector3 eyePos = new Vector3();
            eyePos.Y = cameraAngle.Z * -(float)System.Math.Sin(cameraAngle.Y * rads);
            Matrix4 lookAt = Matrix4.LookAt(eyePos, new Vector3(0.0f, 0.0f, 0.0f), new Vector3(0.0f, 1.0f, 0.0f));

            DrawOrigin();

            GL.Color3(1f, 1f, 1f);
            plane.Render(4f);

            Point closest = Collisions.ClosestPoint(plane, point);
            float distance = Collisions.DistanceFromPlane(point, plane);
            Vector3 vec = point.ToVector() - plane.Normal * distance;

            // Blue line from the test point to its projection on the plane.
            GL.Color3(0f, 0f, 1f);
            GL.Begin(PrimitiveType.Lines);
            GL.Vertex3(point.X, point.Y, point.Z);
            GL.Vertex3(vec.X, vec.Y, vec.Z);
            GL.End();

            // Magenta line: the plane's normal drawn from the closest point.
            GL.Color3(1f, 0f, 1f);
            GL.Begin(PrimitiveType.Lines);
            GL.Vertex3(closest.X, closest.Y, closest.Z);
            GL.Vertex3(closest.X + plane.Normal.X, closest.Y + plane.Normal.Y, closest.Z + plane.Normal.Z);
            GL.End();

            GL.Color3(1f, 0f, 0f);
            point.Render();
            GL.Color3(0f, 1f, 0f);
            closest.Render();
        }

        public override void Update(float deltaTime) {
            cameraAngle.X += 45.0f * deltaTime;
        }

        protected void DrawOrigin() {
            GL.Begin(PrimitiveType.Lines);
            GL.Color3(1f, 0f, 0f);
            GL.Vertex3(0f, 0f, 0f);
            GL.Vertex3(1f, 0f, 0f);
            GL.Color3(0f, 1f, 0f);
            GL.Vertex3(0f, 0f, 0f);
            GL.Vertex3(0f, 1f, 0f);
            GL.Color3(0f, 0f, 1f);
            GL.Vertex3(0f, 0f, 0f);
            GL.Vertex3(0f, 0f, 1f);
            GL.End();
        }

        public override void Resize(int width, int height) {
            GL.Viewport(0, 0, width, height);
            GL.MatrixMode(MatrixMode.Projection);
            float aspect = (float)width / (float)height;
            Matrix4 perspective = Matrix4.Perspective(60, aspect, 0.01f, 1000.0f);
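The C# listing above depends on the tutorial's framework; for a self-contained sanity check of the same projection formula, here is a small sketch (editor's Python, with names of my choosing, not part of the tutorial's Collisions class):

```python
import numpy as np

def closest_point_on_plane(normal, distance, point):
    """Project `point` onto the plane dot(normal, x) = distance.

    Assumes `normal` is already normalized, as the tutorial does.
    """
    normal = np.asarray(normal, dtype=float)
    point = np.asarray(point, dtype=float)
    signed_distance = np.dot(normal, point) - distance
    return point - signed_distance * normal

# Plane z = 2 has normal (0, 0, 1) and distance 2; the closest point
# to (3, 4, 7) should sit directly below it on the plane, at (3, 4, 2).
print(closest_point_on_plane((0, 0, 1), 2.0, (3, 4, 7)))  # [3. 4. 2.]
```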
2022-08-15 01:24:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33795928955078125, "perplexity": 4119.00725119147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00793.warc.gz"}
http://formula.s21g.com/formulae/help/latex
# LaTeX Format

This service uses a subset of the LaTeX format. It includes most of LaTeX's mathematical functions, but other features are not supported.

## Examples

The LaTeX source below is converted into the image which follows it.

f(x)=\int_0^{x}g(t)\,dt

## Chemical Structural Formulae

You can also get images of chemical structural formulae by using the XyMTeX format.

\purinev{4==NH$\mathrm{_2}$;6==H;2==H;1==H}

The source above is converted into the image below.
2019-03-24 14:17:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9878514409065247, "perplexity": 3372.534718270498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203448.17/warc/CC-MAIN-20190324124545-20190324150545-00006.warc.gz"}
https://www.groundai.com/project/chordal-graphs-in-triangular-decomposition-in-top-down-style/
# Chordal Graphs in Triangular Decomposition in Top-Down Style

This work was partially supported by the National Natural Science Foundation of China (NSFC 11401018 and 11771034).

Chenqi Mou, Yang Bai, and Jiahua Lai
LMIB – School of Mathematics and Systems Science / Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing 100191, China
{chenqi.mou, yangbai, jiahualai}@buaa.edu.cn

###### Abstract

In this paper, we first prove that when the associated graph of a polynomial set is chordal, a particular triangular set computed by a general algorithm in top-down style for computing the triangular decomposition of this polynomial set has an associated graph as a subgraph of this chordal graph. Then for Wang's method and a subresultant-based algorithm for triangular decomposition in top-down style and for a subresultant-based algorithm for regular decomposition in top-down style, we prove that all the polynomial sets appearing in the process of triangular decomposition with any of these algorithms have associated graphs as subgraphs of this chordal graph. These theoretical results can be viewed as non-trivial polynomial generalization of existing ones for sparse Gaussian elimination, inspired by which we further propose an algorithm for sparse triangular decomposition in top-down style by making use of the chordal structure of the polynomial set. The effectiveness of the proposed algorithm for triangular decomposition, when the polynomial set is chordal and sparse with respect to the variables, is demonstrated by preliminary experimental results.

Key words: Triangular decomposition, chordal graph, top-down style, regular decomposition, sparsity

## 1 Introduction

In this paper we establish some underlying connections between graph theory and symbolic computation by studying the changes of associated graphs of polynomial sets in the process of decomposing an arbitrary polynomial set with a chordal associated graph into triangular sets with algorithms in top-down style. The study in this paper is directly inspired by the pioneering work of Cifuentes and Parrilo. In [13] they showed for the first time the connections between chordal graphs and triangular sets when they introduced the concept of chordal networks of polynomial sets and proposed an algorithm for constructing chordal networks based on computation of triangular decomposition. In particular, they found experimentally that for polynomial sets with chordal associated graphs, the algorithms for triangular decomposition due to Wang (e.g., his algorithm for regular decomposition in [38]) become more efficient. In this paper, with clarification of the changes of associated graphs of polynomial sets in triangular decomposition in top-down style, we are able to provide a theoretical explanation for their experimental observation (see Remark 33).
It is worth mentioning that Cifuentes and Parrilo also studied the connections between chordal graphs and Gröbner bases in [12], but they found that the chordal structures of polynomial sets are destroyed in the process of computing Gröbner bases.

Chordal graphs have been applied to many scientific and engineering problems like existence of perfect phylogeny in reconstruction of evolutionary trees [9]. Two of these applications are of particular interest to us and are closely related to the study in this paper: sparse Gaussian elimination and sparse sums-of-squares decomposition. For the former problem, it is shown that the Cholesky factorization of a symmetric positive definite matrix does not introduce new fill-ins if the associated graph of the matrix is chordal, and on the basis of this observation algorithms for sparse Gaussian elimination have been proposed by using the property that the sparsity of the matrix can be kept if the associated graph of the matrix is chordal [31, 32, 23]. For the latter, structured sparsity arising from polynomial optimization problems is studied and utilized by using the chordal structures, resulting in sparse algorithms for sums-of-squares decomposition of multivariate polynomials [34, 35, 43, 40].

The underlying ideas of the study in this paper are similar to those in the two successful applications of chordal graphs above: we show that the chordality of associated graphs of polynomial sets is preserved in a few algorithms for triangular decomposition in top-down style, as it is in the Cholesky factorization of symmetric matrices, and we propose a sparse algorithm for triangular decomposition in top-down style based on the chordal structure in a similar way to what has been done for sparse Gaussian elimination and sparse sums-of-squares decomposition.

Like the Gröbner basis, which has been greatly developed in its theory, methods, implementations, and applications [8, 16, 17, 18, 14], the triangular set is another powerful algebraic tool in the study on and computation of polynomials symbolically, especially for elimination theory and polynomial system solving [41, 20, 26, 36, 2, 39, 11], with diverse applications [42, 10]. The process of decomposing a polynomial set into finitely many triangular sets or systems (possibly with additional properties like being regular or normal, etc.) with associated zero and ideal relationships is called triangular decomposition of the polynomial set. Triangular decomposition of polynomial sets can be regarded as polynomial generalization of Gaussian elimination for solving linear equations.

The top-down strategy in triangular decomposition means that the variables appearing in the input polynomial set are handled in a strictly decreasing order, and it is a common strategy in the design and implementations of algorithms for triangular decomposition. In particular, most algorithms for triangular decomposition due to Wang are in top-down style [36, 37, 38]. Algorithms for triangular decomposition in top-down style with refinement in the Boolean settings and over finite fields have also been proposed and applied to cryptanalysis [10, 21, 24]. The fact that elimination in it is performed in a strictly decreasing order makes triangular decomposition in top-down style the closest among all kinds of triangular decomposition to Gaussian elimination, in which the elimination of entries in different columns of the matrix is also performed in a strict order.
In this paper the chordal structures of polynomial sets appearing in the algorithms for triangular decomposition in top-down style are studied. The main contributions of this paper include: 1) Under the conditions that the input polynomial set is chordal and a perfect elimination ordering is used as the variable ordering, we study the influence of general reduction in triangular decomposition in top-down style on the associated graphs of polynomial sets and prove that one particular triangular set computed by algorithms for triangular decomposition in top-down style has an associated graph as a subgraph of the input chordal graph (in Section 3). 2) Under the same conditions, we show (in Section 4) that in the process of triangular decomposition with Wang's algorithm, any polynomial set (and thus any of the computed triangular sets) has an associated graph as a subgraph of the input chordal graph. 3) The same results are proved for subresultant-based algorithms for triangular decomposition and regular decomposition in top-down style (in Sections 5 and 6 respectively). 4) The variable sparsity of polynomial sets is defined with their associated graphs, and an effective refinement by using the variable sparsity and chordality of input polynomial sets is proposed to speed up triangular decomposition in top-down style (in Section 7).

This paper is an extension of [29], and the contributions 3) and 4) listed above are new. With triangular decomposition in top-down style viewed as polynomial generalization of Gaussian elimination, the contributions listed above are indeed polynomial generalizations of the roles chordal structures play in Gaussian elimination and of algorithms for sparse Gaussian elimination. As one may expect, these polynomial generalizations are highly non-trivial because of the complicated process of triangular decomposition due to various splitting strategies involved in specific algorithms. Furthermore, these contributions reveal theoretical properties of triangular decomposition in top-down style from the viewpoint of graph theory, and we hope this paper can stimulate more study on triangular decomposition by using concepts and methods from graph theory.

## 2 Preliminaries

Let $\mathbb{K}$ be a field, and $\mathbb{K}[x_1, \ldots, x_n]$ be the multivariate polynomial ring over $\mathbb{K}$ in the variables $x_1, \ldots, x_n$. For the sake of simplicity, we write $\mathbb{K}[x_1, \ldots, x_n]$ as $\mathbb{K}[\boldsymbol{x}]$, $x_1, \ldots, x_k$ as $\boldsymbol{x}_k$ for some integer $k \leq n$, and $\mathbb{K}[x_1, \ldots, x_k]$ as $\mathbb{K}[\boldsymbol{x}_k]$.

### 2.1 Associated graph and chordal graph

For a polynomial $F \in \mathbb{K}[\boldsymbol{x}]$, define the (variable) support of $F$, denoted by $\operatorname{supp}(F)$, to be the set of variables in $\boldsymbol{x}$ which effectively appear in $F$. For a polynomial set $\mathcal{P} \subseteq \mathbb{K}[\boldsymbol{x}]$, its support $\operatorname{supp}(\mathcal{P}) = \bigcup_{F \in \mathcal{P}} \operatorname{supp}(F)$.

###### Definition 1.

Let $\mathcal{P}$ be a polynomial set in $\mathbb{K}[\boldsymbol{x}]$. Then the associated graph of $\mathcal{P}$, denoted by $G(\mathcal{P})$, is an undirected graph with the vertex set $\operatorname{supp}(\mathcal{P})$ and the edge set $\{(x_i, x_j) : x_i \neq x_j \text{ and } x_i, x_j \in \operatorname{supp}(F) \text{ for some } F \in \mathcal{P}\}$.

###### Example 2.

The associated graphs of $$\mathcal{P} = \{x_2 + x_1,\; x_3 + x_1,\; x_4^2 + x_2,\; x_4^3 + x_3,\; x_5 + x_2,\; x_5 + x_3 + x_2\}, \qquad \mathcal{Q} = \{x_2 + x_1,\; x_3 + x_1,\; x_3,\; x_4^2 + x_2,\; x_4^3 + x_3,\; x_5 + x_2\}$$ are shown in Figure 1.

###### Definition 3.

Let $G$ be a graph with the vertex set $\{x_1, \ldots, x_n\}$. Then an ordering $x_1 < x_2 < \cdots < x_n$ of the vertices is called a perfect elimination ordering of $G$ if for each $j = 1, \ldots, n$, the restriction of $G$ on the following set $$X_j = \{x_j\} \cup \{x_k : x_k \text{ is adjacent to } x_j \text{ and } k < j\}$$ is a clique. A graph is said to be chordal if there exists a perfect elimination ordering of it.

An equivalent condition for a graph $G$ to be chordal is the following: for any cycle contained in $G$ of four or more vertices, there is an edge of $G$ connecting two vertices of the cycle which are not adjacent in the cycle. The edge in this case is called a chord of the cycle. A chordal graph is also called a triangulated one. For an arbitrary graph $G$, another graph $\widetilde{G}$ is called a chordal completion of $G$ if $\widetilde{G}$ is chordal and $G$ is its subgraph.

From the algorithmic point of view, there exist effective algorithms for testing whether an arbitrary graph is chordal (in the case of a chordal graph, a perfect elimination ordering will also be returned) [33] and for finding a chordal completion of an arbitrary graph [7], though the problem of finding the minimal chordal completion is NP-hard [1].
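To make Definitions 1 and 3 concrete, here is a small computational sketch (an illustration by the editor, not the authors' code; it uses the networkx library, and the supports are those of $\mathcal{P}$ from Example 2):

```python
import networkx as nx

# Variable supports of the six polynomials in P from Example 2.
supports = [
    {1, 2},     # x2 + x1
    {1, 3},     # x3 + x1
    {2, 4},     # x4^2 + x2
    {3, 4},     # x4^3 + x3
    {2, 5},     # x5 + x2
    {2, 3, 5},  # x5 + x3 + x2
]

# Build the associated graph G(P): one vertex per variable and one edge
# for every pair of variables occurring together in some polynomial.
G = nx.Graph()
for supp in supports:
    G.add_edges_from((i, j) for i in supp for j in supp if i < j)

print(nx.is_chordal(G))  # True: P is a chordal polynomial set
```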
From the algorithmic point of view, there exist effective algorithms for testing whether an arbitrary graph is chordal (in case of a chordal graph, a perfect elimination ordering will also be returned) [33] and for finding a chordal completion of an arbitrary graph [7], though the problem of finding the minimal chordal completion is NP-hard [1]. ###### Definition 4. A polynomial set is said to be chordal if its associated graph is chordal. ###### Example 5. In Example 2 and Figure 1, the associated graph is chordal by definition and thus is chordal, while is not. ### 2.2 Triangular set and triangular decomposition Throughout this subsection the variables are ordered as . For an arbitrary polynomial , the greatest variable appearing in is called its leading variable, denoted by . Let . Write with , , and . Then the polynomials and are called the initial and tail of and denoted by and respectively, and is called the leading degree of and denoted by . For two polynomial sets , the set of common zeros of in is denoted by , and , where is the algebraic closure of . ###### Definition 6. An ordered set of non-constant polynomials is called a triangular set if . A pair with is called a triangular system if is a triangular set, and for each and any , we have . Given a triangular set , the saturated ideal of is . In particular, for an integer , forms a (truncated) triangular set in , and we denote . For an arbitrary polynomial set , we denote for an integer and denote . ###### Definition 7. A triangular set is said to be regular or called a regular set if for each , the canonical image of in is neither zero nor a zero-divisor. A triangular system is called a regular system if for each , the following conditions hold: (a) either or ; (b) for any and , we have . The definitions above of regular set and regular system are algebraic (in the language of ideals) and geometric (in the language of zeros) respectively. The connections between regular sets and regular systems have been clarified in [38, 39]. ###### Definition 8. Let be a polynomial set. Then a finite number of triangular sets (triangular systems respectively) are called a triangular decomposition of if the zero relationship holds, where ( holds respectively). In particular, a triangular decomposition is called a regular decomposition if each of its triangular sets or systems is regular. When no ambiguity occurs, the process for computing the triangular decomposition of a polynomial set is also called triangular decomposition of . As one may find from Definitions 6 and 8, triangular systems are generalization of triangular sets. For a triangular system , is a triangular set which represents the equations , while is a polynomial set which represents the inequations . There exist many algorithms for decomposing polynomial sets into triangular sets or systems with different properties. One of the main strategies for designing such algorithms for triangular decomposition is to carry out reduction on polynomials containing the greatest (unprocessed) variable until there is only one such polynomial left, at the same time producing new polynomials whose leading variables are strictly smaller than the currently processed variable. For an arbitrary polynomial set , the smallest integer such that or for each is called the level of and denoted by . Obviously a polynomial set containing no constant forms a triangular set if . Let be a polynomial set in and be a set of pairs of polynomial sets, initialized with . 
Then an algorithm for computing triangular decomposition of is said to be in top-down style if for each polynomial set with , this algorithm handles the polynomials in and to produce finitely many polynomials sets and such that the following conditions hold: 1. ; 2. for each , and for ; 3. there exists some integer such that or , and the other are put into for later computation. In this paper we are interested mainly in algorithms for triangular decomposition in top-down style. Note that the above definition, compared with the corresponding one in [29], imposes additional conditions on the polynomial sets representing inequations, for the authors find that it is difficult to study the polynomial sets alone when the interactions between and occur in certain algorithms (see Section 6 for more details). ### 2.3 Pseudo division and subresultant regular subchain Two commonly used algebraic operations on multivariate polynomials to perform reduction in algorithms for triangular decomposition are pseudo division and computation of the resultant of two polynomials. The algorithms for triangular decomposition in top-down style studied in this paper rely heavily on these two algebraic operations. For any two polynomials , there exist polynomials and an integer such that and . Furthermore, if is fixed, then and are unique. The process above of computing and from and is called the pseudo division of with respect to , and the polynomials and here are called the pseudo quotient and pseudo remainder of with respect to and denoted by and respectively. Suppose further that . Write and with . Denote by the sylvester matrix of and with respect to . Then the determinant is called the Sylvester resultant of and with respect to . For two integers , define to be the submatrix of obtained by deleting the last rows of ’s coefficients, the last rows of ’s coefficients, and the last columns except the -th one. Then the polynomial is called the th subresultant of and with respect to . In particular, the th subresultant is said to be regular if . ###### Definition 9. Let be two polynomials such that , and be the th resultant of and with respect to for , where when and otherwise. Then the sequence is called the subresultant chain of and with respect to . Furthermore, let be the regular subresultants in with . Then the sequence is called the subresultant regular subchain of and with respect to . There exist strong connections between the subresultant chain and the greatest common divisor of two polynomials. The reader is referred to [28, Chap. 7] for more details on this. ## 3 General triangular decomposition in top-down style In this section, the graph structures of polynomial sets in general algorithms for triangular decomposition in top-down style are studied when the input polynomial set is chordal. We start this section with the connections between the associated graphs of a triangular set reduced from a chordal polynomial set and the chordal associated graph. ###### Proposition 10. Let be a chordal polynomial set with as one perfect elimination ordering of . For , let be a polynomial such that and ( is set null if ). Then is a triangular set, and . In particular, if for , then . ###### Proof. It is straightforward that is a triangular set because if for . For any edge , there exists an integer such that . Then , and thus and . Since is chordal with as a perfect elimination ordering and , , we know that by Definition 3. This proves the inclusion . 
In the case when for , next we show the inclusion , which implies the equality . For any , there exists an integer and a polynomial such that with . Since , we know that and thus . ∎ ###### Example 11. Proposition 10 does not necessarily hold in general if the polynomial set is not chordal. Consider the same as in Example 2 whose associated graph is not chordal. Let T=[x2+x1,x3+x1,−x2x4+x3,x5+x2]. Then one can check that for , , but the associated graph , as shown in Figure 2, is not a subgraph of . The following theorem relates the associated graph of a chordal polynomial set and that of the polynomial set after reduction with respect to one variable. ###### Theorem 12. Let be a chordal polynomial set such that and is one perfect elimination ordering of . Let be a polynomial such that and , and be a polynomial set such that . Then for the polynomial set , where for , we have . In particular, if , then . ###### Proof. To prove the inclusion , it suffices to show that for each edge , we have . For an arbitrary edge , there exists a polynomial and an integer such that and . If , then , and by we have . This implies that and by the chordality of we have . Else if , then by there are two cases for accordingly: when , clearly ; when , we have , and thus , and the chordality implies . In particular, if , then by for and we have . This proves the equality . ∎ ###### Example 13. Let be the chordal polynomial set as in Example 2. Then . If we take , and , then equals in Example 2, and is a (strict) subgraph of ; If we take , and , then and thus . Next we introduce some notations to formulate the reduction process in Theorem 12. Denote the power set of a set by . For an integer , let be a mapping (2) such that and , where is understood as . For a polynomial set and a fixed integer , suppose that for some as stated above. Now define the result of reduction with respect to as the polynomial set by defining all its subsets for as follows. (3) Furthermore, denote ¯¯¯¯¯¯¯¯redi(P):=redi(redi+1(⋯(redn(P))⋯)) (4) for simplicity, and the polynomial set is the result of successive reduction with respect to . Following the above terminologies, the conclusions of Theorem 12 can be reformulated as: , and the equality holds if . Indeed, the reduction process above is commonly used in algorithms for triangular decomposition in top-down style, and the mapping in (2) is abstraction of specific reductions used in different kinds of algorithms for triangular decomposition [25]. For example, one specific kind of such reduction is performed by using pseudo divisions, and in this case in (2) consists of pseudo remainders which do not contain . ###### Proposition 14. Let be a chordal polynomial set with as one perfect elimination ordering of . For each , suppose that for some as in (2) and , where is understood as . Then . ###### Proof. Repeated use of Theorem 12 implies G(P)=G(redn(P))=G(¯¯¯¯¯¯¯¯redn−1(P))=⋯=G(¯¯¯¯¯¯¯¯red1(P)), and the conclusion follows. ∎ Proposition 14 holds because after every reduction remains the same as the chordal graph , and thus the hypotheses of Theorem 12 remain satisfied. If we weaken the condition in Proposition 14 to , then in general we will not have G(¯¯¯¯¯¯¯¯red1(P))⊂⋯⊂G(¯¯¯¯¯¯¯¯redn−1(P))⊂G(redn(P))⊂G(P), as shown by the following example (though the last inclusion always holds because is chordal). ###### Example 15. Let us continue with Example 13 with and , where . 
Take T4=prem(x34+x3,x24+x2)=−x2x4+x3,R4={prem(x24+x2,−x2x4+x3)}={x23−x32}, then Q′:=¯¯¯¯¯¯¯¯red4(P)={x2+x1,x3+x1,x23−x32,x3,−x2x4+x3,x5+x2}. The associated graph is shown below. Note that but . Despite of this example where successive inclusions of the associated graphs in the reduction chain does not hold, it can be proved that for each , is a subgraph of the original chordal graph . ###### Lemma 16. Let be a chordal polynomial set with as one perfect elimination ordering of and be as defined in (4) for . Then for each and any two variables and , if there exists an integer such that , then . ###### Proof. We induce on the integer . In the case , from the proof of Theorem 12 one can easily find that the conclusion holds . Now suppose that the conclusion holds for , and next we prove that it also holds for , namely for any and , if there exists such that , then . Since , by (3) we consider the following three cases of . (1) If , then , and thus . By the inductive assumption we have . (2) If , then , and thus by the inductive assumption we have . (3) If , then there exists a polynomial set such that and . (3.1) If , then , and by the inductive assumption we know that . (3.2) If , then . Next we consider the following three cases. (3.2.1) : with the same argument as in (a) we know that . (3.2.2) : by the induction assumption we know that . (3.2.3) and : Since , by the induction assumption we have ; since , by the induction assumption we have . Then by the chordality of , and imply that . This ends the proof. ∎ ###### Theorem 17. Let be a chordal polynomial set with as one perfect elimination ordering of and be as defined in (4) for . Then for each , . ###### Proof. By the construction of , we know that all the vertices of are also vertices of . For each edge , there exists an integer and a polynomial such that and . Then by Lemma 16, we know that , and thus . ∎ ###### Corollary 18. Let be a chordal polynomial set with as one perfect elimination ordering of and be as defined in (4) for . If does not contain any nonzero constant, then forms a triangular set such that . Corollary 18 tells us that under the conditions that the input polynomial set is chordal and the variable ordering is one perfect elimination ordering, the associated graph of one specific triangular set computed in any algorithm for triangular decomposition in top-down style with reduction satisfying the conditions (2) and (3) is a subgraph of the associated graph of the input polynomial set. In fact, this triangular set is usually the “main branch” in the triangular decomposition in the sense that other branches are obtained by adding additional constrains in the splitting in the process of triangular decomposition. Note that in the case when the input polynomial set is not chordal, a process of chordal completion can be carried out on to generate a chordal graph (in the worst case this chordal completion results in a complete graph which is trivially chordal). After this chordal completion the conditions of Corollary 18 will be satisfied. The chordality of any triangular set other than the specific one above in a triangular decomposition computed by an algorithm in top-down style is dependent on the splitting strategy in the algorithm. In the following sections, we study several specific algorithms for triangular decomposition in top-down style and prove that the associated graphs of all the polynomial sets in the decomposition process of these algorithms are subgraphs of the associated graph of a chordal input polynomial set. 
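Before moving on to those specific algorithms: the chordal-completion step mentioned after Corollary 18 can be carried out with off-the-shelf tools. Continuing the networkx illustration from Section 2 (again the editor's sketch, not the authors' implementation), with the associated graph of $\mathcal{Q}$ from Example 2, whose 4-cycle has no chord:

```python
import networkx as nx

# G(Q) from Example 2: the cycle 1-2-4-3-1 has no chord.
G = nx.Graph([(1, 2), (1, 3), (2, 4), (3, 4), (2, 5)])
print(nx.is_chordal(G))  # False

# complete_to_chordal_graph returns a chordal supergraph H together
# with a perfect elimination ordering of its vertices.
H, ordering = nx.complete_to_chordal_graph(G)
print(nx.is_chordal(H))  # True
print(H.number_of_edges() - G.number_of_edges(), "fill-in edge(s) added")
```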
## 4 Wang’s method for triangular decomposition in top-down style

A simply structured algorithm for triangular decomposition in top-down style was proposed by Wang in 1993 [36]; it is referred to as Wang’s method in the literature (see, e.g., [3]). Next, the chordality of polynomial sets in the decomposition process of Wang’s method is studied.

### 4.1 Wang’s method revisited

To keep this paper self-contained, Wang’s method for triangular decomposition is outlined in Algorithm 1 below. In this algorithm and those to follow, the data structure is used to represent two polynomial sets and such that or for . For a set consisting of tuples in the form , denote . The subroutine returns an element from a set and then removes it from that set. The decomposition process of Wang’s method (Algorithm 1) applied to the input polynomial set can be viewed as a binary tree with the input pair as its root. The nodes of this binary tree are all the tuples picked from , and each node has the two child nodes $(P', Q')$ and $(P'', Q'')$, where

$$P' := P \setminus P^{(k)} \cup \{T\} \cup \{\mathrm{prem}(P, T) : P \in P^{(k)} \setminus \{T\}\}, \qquad Q' := Q \cup \{\mathrm{ini}(T)\},$$
$$P'' := P \setminus \{T\} \cup \{\mathrm{ini}(T),\ \mathrm{tail}(T)\}, \qquad Q'' := Q.$$
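A minimal sketch of this splitting step (illustrative only, with hypothetical helper names; `ini` and `tail` are approximated with sympy's `LC` and `degree`, and the completion $Q'' := Q$ follows the standard statement of Wang's method):

```python
from sympy import LC, degree, prem, symbols

def wang_split(P, Q, T, x):
    """Split node (P, Q) on T with leading variable x into the two
    children (P', Q') [branch ini(T) != 0] and (P'', Q'') [ini(T) = 0]."""
    ini_T = LC(T, x)                                   # initial of T w.r.t. x
    tail_T = (T - ini_T * x**degree(T, x)).expand()    # T minus its leading term
    Pk = [p for p in P if degree(p, x) > 0]            # stand-in for P^(k)
    P1 = [p for p in P if p not in Pk] + [T] + [prem(p, T, x) for p in Pk if p != T]
    P2 = [p for p in P if p != T] + [ini_T, tail_T]
    return (P1, Q + [ini_T]), (P2, list(Q))

x1, x2, x3, x4, x5 = symbols("x1:6")
children = wang_split([x2 * x4 + x2, x4**2 + x3], [], x2 * x4 + x2, x4)
```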
2020-08-08 12:21:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.916485071182251, "perplexity": 489.60032632277233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737645.2/warc/CC-MAIN-20200808110257-20200808140257-00393.warc.gz"}
https://physics.stackexchange.com/questions/382287/what-does-it-mean-that-two-independent-scalars-of-the-second-degree-can-be-form
# What does it mean that “two independent scalars of the second degree can be formed from the components of the strain tensor?” The following is from Theory of Elasticity by Landau and Lifshitz. Why can only two independent scalars of second order be formed from the symmetric strain tensor $u_{i k}$, which for infinitesimal strain is defined as $$u_{i k} = \frac12 \left( \frac{\partial u_i}{\partial x_k} + \frac{\partial u_k}{\partial x_i} \right)$$ I would assume that from a tensor that (in three dimensions) contains six independent elements, six independent elements of second order could be formed, namely just the squares of these elements. Why is this not correct? Note that summation convention is being used, i.e. $$u_{ii}^2 = (u_{11} + u_{22} + u_{33})^2$$ and $$u_{i k}^2 = \sum_{i k} u_{ik}u_{ik}.$$ • Those aren't scalars. – Javier Jan 25 '18 at 21:23 • I edited my post to reflect that summation convention is being used. – Kappie001 Jan 25 '18 at 21:26 Recall that a scalar isn't just any function; it needs to be a function that transforms as a scalar, i.e. one that doesn't transform at all. The general way to do this is to construct an object with all the indices contracted. We start with the general second order term $$u_{ij} u_{k\ell}$$ and need to contract indices together. The only tensors available are the Kronecker delta $\delta^i_j$ and the volume tensor $\epsilon_{ijk}$. The volume tensor doesn't give us anything: if we just use one we get an odd number of indices, and if we use two they contract together to reduce to Kronecker deltas, i.e. $$\epsilon_{ijk} \epsilon^{imn} = \delta_j^m \delta_k^n - \delta_j^n \delta_k^m.$$ Using only the Kronecker delta, we can contract $i$ with $j$, so $k$ must be contracted with $\ell$, giving the first term $u_{ii} u_{kk} = u_{ii}^2$. (Here I'm being sloppy with index placement because it doesn't matter.) Otherwise, $i$ can be contracted with $k$ or $\ell$, and it doesn't matter which by symmetry. If $i$ is contracted with $k$ then $j$ is contracted with $\ell$, giving $u_{ij} u_{ij} = u_{ij}^2$, the second term. More generally, the fact that there are two scalars can be understood by representation theory. Your situation has $SO(3)$ symmetry, and a general symmetric rank two tensor is a six-dimensional representation that decomposes into a scalar (its trace) and a traceless part, which we write as $$6 = 1 + 5.$$ The quadratic terms are formed from a tensor product of this representation with itself, $$6 \times 6 = (1 + 5) \times (1 + 5) = 1 + 5 + 5 + 5 \times 5.$$ The first scalar is simply the trace squared, as we've seen. Now, to decompose $5 \times 5$, we use the same method you might already know from quantum mechanics. (Indeed, to translate this to spin, just subtract one from every number and divide by two.) Then $$5 \times 5 = 1 + 3 + 5 + 7 + 9.$$ In total there are two factors of $1$ and hence two scalars. Now we come back to the first point: why are $\delta^i_j$ and $\epsilon_{ijk}$ the only tensors available? These come from the two parts of the definition of $SO(3)$. The "orthogonal" part means that the Euclidean inner product is preserved, giving the metric $\delta^i_j$. The "special" part means that the volume is preserved, giving the volume $\epsilon_{ijk}$. Nothing else is preserved, so you can't contract with any other tensors -- those would change as well under rotation. • @Kappie001 I edited to address both concerns, tell me if this works for you!
The Levi-Civita symbol $\epsilon_{ijk}$ is defined and related to volumes here. – knzhou Jan 25 '18 at 21:40
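A quick numerical check of the two invariants (an illustrative addition, not part of the original thread; the random tensor and rotation below are arbitrary):

```python
# Both quadratic invariants are unchanged when a symmetric tensor is rotated.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
u = (A + A.T) / 2                                 # random symmetric "strain tensor"
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # random orthogonal matrix
u_rot = R @ u @ R.T                               # rank-2 tensor transformation

assert np.isclose(np.trace(u) ** 2, np.trace(u_rot) ** 2)   # u_ii^2
assert np.isclose(np.sum(u * u), np.sum(u_rot * u_rot))     # u_ik u_ik
print("both invariants preserved")
```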
2019-09-23 01:06:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8559420108795166, "perplexity": 252.0548699838096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575844.94/warc/CC-MAIN-20190923002147-20190923024147-00485.warc.gz"}
http://mathoverflow.net/questions/91601/fx-1-x-2-x-3-ldots-x-n-maximum-how-many-different-results-can-have-with-all
# At most how many different values can $f(x_1,x_2,x_3,\ldots,x_n)$ take over all permutations of its inputs? $\alpha _n=e^{2 \pi i/n}$ $$f(x_1,x_2,x_3,\ldots,x_n)=(x_1+\alpha _n x_2+ \alpha _n ^2 x_3+\cdots+\alpha _n ^{n-1} x_n)^n$$ At most how many different values can $f$ take over all permutations of its inputs? I read this in Jim Brown's paper, on page 5. http://www.math.caltech.edu/~jimlb/abel.pdf Lagrange showed that: If n=3, then $f(x_1,x_2,x_3)$ takes at most 2 different values over all permutations of $(x_1,x_2,x_3)$. If n=4, then $f(x_1,x_2,x_3,x_4)$ takes at most 3 different values over all permutations of $(x_1,x_2,x_3,x_4)$. If n=5, then $f(x_1,x_2,x_3,x_4,x_5)$ takes at most 6 different values over all permutations of $(x_1,x_2,x_3,x_4,x_5)$. Is there a general formula for n, and which method is used to find it?
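Not an answer, but the n=3 count is easy to reproduce by brute force; a sympy sketch (illustrative only, written with an explicit radical form of the cube root of unity so expanded expressions compare reliably):

```python
from itertools import permutations
from sympy import I, Rational, expand, sqrt, symbols

x1, x2, x3 = symbols("x1 x2 x3")
w = Rational(-1, 2) + sqrt(3) * I / 2        # primitive cube root of unity

# Expand f for all 6 permutations and count the distinct results.
vals = {expand((p[0] + w * p[1] + w**2 * p[2]) ** 3)
        for p in permutations((x1, x2, x3))}
print(len(vals))  # 2, matching the quoted count for n = 3
```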
2015-09-04 12:29:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8836606740951538, "perplexity": 188.99825586730114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645348533.67/warc/CC-MAIN-20150827031548-00347-ip-10-171-96-226.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Solomonoff_induction
# Solomonoff's theory of inductive inference (Redirected from Solomonoff induction) Ray Solomonoff's theory of universal inductive inference is a theory of prediction based on logical observations, such as predicting the next symbol based upon a given series of symbols. The only assumption that the theory makes is that the environment follows some unknown but computable probability distribution. It is a mathematical formalization of Occam's razor[1][2][3][4][5] and the Principle of Multiple Explanations.[6] Prediction is done using a completely Bayesian framework. The universal prior is calculated for all computable sequences—this is the universal a priori probability distribution; no computable hypothesis will have a zero probability. This means that Bayes rule of causation can be used in predicting the continuation of any particular computable sequence. ## Origin ### Philosophical The theory is based in philosophical foundations, and was founded by Ray Solomonoff around 1960.[7] It is a mathematically formalized combination of Occam's razor[1][2][3][4][5] and the Principle of Multiple Explanations.[6] All computable theories which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action. ### Mathematical The proof of the "razor" is based on the known mathematical properties of a probability distribution over a countable set. These properties are relevant because the infinite set of all programs is a denumerable set. The sum S of the probabilities of all programs must be exactly equal to one (as per the definition of probability) thus the probabilities must roughly decrease as we enumerate the infinite set of all programs, otherwise S will be strictly greater than one. To be more precise, for every ${\displaystyle \epsilon }$ > 0, there is some length l such that the probability of all programs longer than l is at most ${\displaystyle \epsilon }$. This does not, however, preclude very long programs from having very high probability. Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity. The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion. ## Modern applications ### Artificial intelligence Though Solomonoff's inductive inference is not computable, several AIXI-derived algorithms approximate it in order to make it run on a modern computer. The more computing power they are given, the closer their predictions are to the predictions of inductive inference (their mathematical limit is Solomonoff's inductive inference).[8][9][10] Another direction of inductive inference is based on E. 
Mark Gold's model of learning in the limit from 1967 and has developed since then more and more models of learning.[11] The general scenario is the following: Given a class S of computable functions, is there a learner (that is, recursive functional) which for any input of the form (f(0),f(1),...,f(n)) outputs a hypothesis (an index e with respect to a previously agreed on acceptable numbering of all computable functions; the indexed function may be required consistent with the given values of f). A learner M learns a function f if almost all its hypotheses are the same index e, which generates the function f; M learns S if M learns every f in S. Basic results are that all recursively enumerable classes of functions are learnable while the class REC of all computable functions is not learnable.[citation needed] Many related models have been considered and also the learning of classes of recursively enumerable sets from positive data is a topic studied from Gold's pioneering paper in 1967 onwards. A far reaching extension of the Gold’s approach is developed by Schmidhuber's theory of generalized Kolmogorov complexities,[12] which are kinds of super-recursive algorithms. ### Turing machines The third mathematically based direction of inductive inference makes use of the theory of automata and computation. In this context, the process of inductive inference is performed by an abstract automaton called an inductive Turing machine (Burgin, 2005). Inductive Turing machines represent the next step in the development of computer science providing better models for contemporary computers and computer networks (Burgin, 2001) and forming an important class of super-recursive algorithms as they satisfy all conditions in the definition of algorithm. Namely, each inductive Turing machines is a type of effective method in which a definite list of well-defined instructions for completing a task, when given an initial state, will proceed through a well-defined series of successive states, eventually terminating in an end-state. The difference between an inductive Turing machine and a Turing machine is that to produce the result a Turing machine has to stop, while in some cases an inductive Turing machine can do this without stopping. Stephen Kleene called procedures that could run forever without stopping by the name calculation procedure or algorithm (Kleene 1952:137). Kleene also demanded that such an algorithm must eventually exhibit "some object" (Kleene 1952:137). This condition is satisfied by inductive Turing machines, as their results are exhibited after a finite number of steps, but inductive Turing machines do not always tell at which step the result has been obtained. Simple inductive Turing machines are equivalent to other models of computation. More advanced inductive Turing machines are much more powerful. It is proved (Burgin, 2005) that limiting partial recursive functions, trial and error predicates, general Turing machines, and simple inductive Turing machines are equivalent models of computation. However, simple inductive Turing machines and general Turing machines give direct constructions of computing automata, which are thoroughly grounded in physical machines. In contrast, trial and error predicates, limiting recursive functions and limiting partial recursive functions present syntactic systems of symbols with formal rules for their manipulation. 
Simple inductive Turing machines and general Turing machines are related to limiting partial recursive functions and trial and error predicates as Turing machines are related to partial recursive functions and lambda-calculus. Note that only simple inductive Turing machines have the same structure (but different functioning semantics of the output mode) as Turing machines. Other types of inductive Turing machines have an essentially more advanced structure due to the structured memory and more powerful instructions. Their utilization for inference and learning allows achieving higher efficiency and better reflects learning of people (Burgin and Klinger, 2004). Some researchers confuse computations of inductive Turing machines with non-stopping computations or with infinite time computations. First, some of computations of inductive Turing machines halt. As in the case of conventional Turing machines, some halting computations give the result, while others do not give. Second, some non-stopping computations of inductive Turing machines give results, while others do not give. Rules of inductive Turing machines determine when a computation (stopping or non-stopping) gives a result. Namely, an inductive Turing machine produces output from time to time and once this output stops changing, it is considered the result of the computation. It is necessary to know that descriptions of this rule in some papers are incorrect. For instance, Davis (2006: 128) formulates the rule when result is obtained without stopping as "… once the correct output has been produced any subsequent output will simply repeat this correct result." Third, in contrast to the widespread misconception, inductive Turing machines give results (when it happens) always after a finite number of steps (in finite time) in contrast to infinite and infinite-time computations. There are two main distinctions between conventional Turing machines and simple inductive Turing machines. The first distinction is that even simple inductive Turing machines can do much more than conventional Turing machines. The second distinction is that a conventional Turing machine always informs (by halting or by coming to a final state) when the result is obtained, while a simple inductive Turing machine in some cases does inform about reaching the result, while in other cases (where the conventional Turing machine is helpless), it does not inform. People have an illusion that a computer always itself informs (by halting or by other means) when the result is obtained. In contrast to this, users themselves have to decide in many cases whether the computed result is what they need or it is necessary to continue computations. Indeed, everyday desktop computer applications like word processors and spreadsheets spend most of their time waiting in event loops, and do not terminate until directed to do so by users. #### Evolutionary inductive Turing machines Evolutionary approach to inductive inference is accomplished by another class of automata called evolutionary inductive Turing machines (Burgin and Eberbach, 2009; 2012). An ‘’’evolutionary inductive Turing machine’’’ is a (possibly infinite) sequence E = {A[t]; t = 1, 2, 3, ... } of inductive Turing machines A[t] each working on generations X[t] which are coded as words in the alphabet of the machines A[t]. The goal is to build a “population” Z satisfying the inference condition. 
The automaton A[t], called a component, or a level automaton, of E, represents (encodes) a one-level evolutionary algorithm that works with input generations X[t] of the population by applying the variation operators v and selection operator s. The first generation X[0] is given as input to E and is processed by the automaton A[1], which generates/produces the first generation X[1] as its transfer output, which goes to the automaton A[2]. For all t = 1, 2, 3, ..., the automaton A[t] receives the generation X[t − 1] as its input from A[t − 1] and then applies the variation operator v and selection operator s, producing the generation X[t] and sending it to A[t + 1] to continue evolution.

## Notes

1. ^ a b JJ McCall. Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov – Metroeconomica, 2004 – Wiley Online Library.
2. ^ a b D Stork. Foundations of Occam's razor and parsimony in learning from ricoh.com – NIPS 2001 Workshop, 2001
3. ^ a b A.N. Soklakov. Occam's razor as a formal basis for a physical theory from arxiv.org – Foundations of Physics Letters, 2002 – Springer
4. ^ a b Jose Hernandez-Orallo (1999). "Beyond the Turing Test" (PDF). Journal of Logic, Language and Information. 9.
5. ^ a b M Hutter. On the existence and convergence of computable universal priors arxiv.org – Algorithmic Learning Theory, 2003 – Springer
6. ^ a b Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, N.Y., 2008, p. 339 ff.
7. ^ Samuel Rathmanner and Marcus Hutter. A philosophical treatise of universal induction. Entropy, 13(6):1076–1136, 2011
8. ^ J. Veness, K.S. Ng, M. Hutter, W. Uther, D. Silver. "A Monte Carlo AIXI Approximation" – Arxiv preprint, 2009 arxiv.org
9. ^ J. Veness, K.S. Ng, M. Hutter, D. Silver. "Reinforcement Learning via AIXI Approximation" Arxiv preprint, 2010 – aaai.org
10. ^ S. Pankov. A computational approximation to the AIXI model from agiri.org – Artificial general intelligence, 2008: proceedings of …, 2008 – books.google.com
11. ^ Gold, E. Mark (1967). "Language identification in the limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
12. ^ J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit" (PDF). International Journal of Foundations of Computer Science. 13 (4): 587–612. doi:10.1142/S0129054102001291.

## References

• Angluin, Dana; Smith, Carl H. (Sep 1983). "Inductive Inference: Theory and Methods" (PDF). Computing Surveys. 15 (3): 237–269. doi:10.1145/356914.356918.
• Burgin, M. (2005), Super-recursive Algorithms, Monographs in computer science, Springer. ISBN 0-387-95569-0
• Burgin, M., "How We Know What Technology Can Do", Communications of the ACM, v. 44, No. 11, 2001, pp. 82–88.
• Burgin, M.; Eberbach, E., "Universality for Turing Machines, Inductive Turing Machines and Evolutionary Algorithms", Fundamenta Informaticae, v. 91, No. 1, 2009, 53–77.
• Burgin, M.; Eberbach, E., "On Foundations of Evolutionary Computation: An Evolutionary Automata Approach", in Handbook of Research on Artificial Immune Systems and Natural Computing: Applying Complex Adaptive Technologies (Hongwei Mo, Ed.), IGI Global, Hershey, Pennsylvania, 2009, 342–360.
• Burgin, M.; Eberbach, E., "Evolutionary Automata: Expressiveness and Convergence of Evolutionary Computation", Computer Journal, v. 55, No. 9, 2012, pp. 1023–1029.
• Burgin, M.; Klinger, A.
Experience, Generations, and Limits in Machine Learning, Theoretical Computer Science, v. 317, No. 1/3, 2004, pp. 71–91
• Davis, Martin (2006) "The Church–Turing Thesis: Consensus and opposition". Proceedings, Computability in Europe 2006. Lecture Notes in Computer Science, 3988, pp. 125–132.
• Gasarch, W.; Smith, C. H. (1997) "A survey of inductive inference with an emphasis on queries". Complexity, logic, and recursion theory, Lecture Notes in Pure and Appl. Math., 187, Dekker, New York, pp. 225–260.
• Hay, Nick. "Universal Semimeasures: An Introduction," CDMTCS Research Report Series, University of Auckland, Feb. 2007.
• Jain, Sanjay; Osherson, Daniel; Royer, James; Sharma, Arun, Systems that Learn: An Introduction to Learning Theory (second edition), MIT Press, 1999.
• Kleene, Stephen C. (1952), Introduction to Metamathematics (First ed.), Amsterdam: North-Holland.
• Li, Ming; Vitanyi, Paul, An Introduction to Kolmogorov Complexity and Its Applications, 2nd Edition, Springer Verlag, 1997.
• Osherson, Daniel; Stob, Michael; Weinstein, Scott, Systems That Learn, An Introduction to Learning Theory for Cognitive and Computer Scientists, MIT Press, 1986.
• Solomonoff, Ray J. (1999). "Two Kinds of Probabilistic Induction" (PDF). The Computer Journal. 42 (4): 256. doi:10.1093/comjnl/42.4.256.
• Solomonoff, Ray (March 1964). "A Formal Theory of Inductive Inference Part I" (PDF). Information and Control. 7 (1): 1–22. doi:10.1016/S0019-9958(64)90223-2.
• Solomonoff, Ray (June 1964). "A Formal Theory of Inductive Inference Part II" (PDF). Information and Control. 7 (2): 224–254. doi:10.1016/S0019-9958(64)90131-7.
2018-11-13 21:26:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7204737067222595, "perplexity": 1819.7011173515382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741491.47/warc/CC-MAIN-20181113194622-20181113220622-00108.warc.gz"}
https://rupress.org/jgp/article/151/2/247/120442/Epilepsy-associated-mutations-in-the-voltage
One of the major factors known to cause neuronal hyperexcitability is malfunction of the potassium channels formed by KCNQ2 and KCNQ3. These channel subunits underlie the M current, which regulates neuronal excitability. Here, I investigate the molecular mechanisms by which epilepsy-associated mutations in the voltage sensor (S4) of KCNQ3 cause channel malfunction. Voltage clamp fluorometry reveals that the R230C mutation in KCNQ3 allows S4 movement but shifts the open/closed transition of the gate to very negative potentials. This results in the mutated channel remaining open throughout the physiological voltage range. Substitution of R230 with natural and unnatural amino acids indicates that the functional effect of the arginine residue at position 230 depends on both its positive charge and the size of its side chain. I find that KCNQ3-R230C is hard to close, but it is capable of being closed at strong negative voltages. I suggest that compounds that shift the voltage dependence of S4 activation to more positive potentials would promote gate closure and thus have therapeutic potential. Introduction Voltage-gated K+ channels (Kv) regulate and modulate the resting potential and set the threshold and duration of the action potential in excitable cells. One of the major potassium currents in neurons is the muscarine-regulated M current (IKM), a noninactivating current with slow activation and deactivation kinetics and a negative voltage for half-activation (V1/2; approximately −60 mV; Brown and Adams, 1980; Halliwell and Adams, 1982). The IKM current is primarily conducted by heterotetramers of KCNQ2 and KCNQ3 α-subunits (Wang et al., 1998), which are expressed in the central and peripheral nervous system (Brown and Adams, 1980; Halliwell and Adams, 1982). These biophysical properties, combined with the specific subcellular localization, allow IKM to regulate the membrane resting potential and depress repetitive neuronal firing (Brown and Adams, 1980). Mutations in neuronal KCNQ channels are associated with hyperexcitability-related disorders, including neuropathic pain (Jentsch, 2000; Maljevic and Lerche, 2014), benign familial neonatal seizures (Biervert et al., 1998; Charlier et al., 1998; Singh et al., 1998), and neonatal epileptic encephalopathy (Rauch et al., 2012; Saitsu et al., 2012; Weckhuysen et al., 2012, 2013; Kato et al., 2013; Orhan et al., 2014). However, how variants in the neuronal KCNQ channels contribute to the severity of disease, and the molecular mechanisms underlying mutated-channel defects, remain largely unknown. KCNQ channels belong to the superfamily of Kv channels, which are tetrameric proteins with six transmembrane segments (S1–S6) per subunit (Fig. 1 A). In Kv channels, S5–S6 of the four subunits together form a centrally located pore that is flanked by the four voltage-sensing domains, each composed of S1–S4 (Long et al., 2005). The C-terminal ends of the S6 segments form the gate (del Camino and Yellen, 2001; Sun and MacKinnon, 2017), and the fourth TM segment (S4) functions as the voltage sensor (Aggarwal and MacKinnon, 1996; Larsson et al., 1996; Mannuzzu et al., 1996; Seoh et al., 1996; Yang et al., 1996; Osteen et al., 2010). At rest, S4 is assumed to be in its inward state and, in response to depolarization, moves outward, thereby opening the channel gate to allow K+ flow (Bezanilla and Perozo, 2003).
Disease-causing mutations in neuronal KCNQ2/3 channels localize to the C-terminal domain, the pore domain, and the voltage-sensing domain (Maljevic and Lerche, 2014). Among the most severe disease-causing mutations in neuronal KCNQ channels are those affecting the S4 segment (Maljevic and Lerche, 2014; Millichap et al., 2016). In particular, mutations that neutralize the second positively charged arginine residue (R2) in S4 from KCNQ channels cause drastic functional channel defects (Panaghie and Abbott, 2007; Miceli et al., 2008, 2012, 2015; Bartos et al., 2011). Thus, it has been hypothesized that mutations that neutralize R2 in KCNQ2 channels stabilize the activated state configuration of the voltage-sensing domain (Miceli et al., 2012, 2015) and, thereby, cause time- and voltage-independent currents. Other possibilities include that neutralization of R2 could impair the coupling between the S4 and the gate, thereby keeping the gate always open independently of S4 movement, or could shift the open/closed transition of the gate to negative potentials, making the channel constitutively conducting in the physiological voltage range. Here, I tested these alternatives by simultaneously tracking changes in S4 movement and gate opening, using voltage clamp fluorometry (VCF) to understand the mechanism by which the epileptic encephalopathy–causing mutation KCNQ3-R230C impairs channel function. To better understand the impact that alterations in size and charge of R230C have on the movement of S4 and, ultimately, function, I also combined (a) two-electrode voltage clamp with cysteine modification using methanethiosulfonate ethylammonium (MTSEA) and (b) introduction of a variety of natural amino acids and the arginine analogue citrulline (unnatural) into the S4 of the KCNQ3 channel. In summary, the KCNQ3-R230C channel is hard to close, but it is capable of being closed at extreme negative voltages, which suggests that compounds that shift the voltage dependence of S4 activation to more positive voltages would promote gate closing and have therapeutic potential. Materials and methods Molecular biology Mutations were introduced into human KCNQ3 (a gift from Dr. Harley T. Kurata, University of Alberta, Alberta, Canada) using the Quikchange site-directed mutagenesis kit (Qiagen) and fully sequenced to ensure incorporation of intended mutations and the absence of unwanted mutations (sequencing by Genewiz). Complementary RNA (cRNA) was transcribed in vitro using the mMessage mMachine T7 RNA Transcription Kit (Ambion). Electrophysiology Two-electrode voltage clamp (TEVC) and voltage clamp fluorometry (VCF) experiments were performed as previously reported (Osteen et al., 2012; Barro-Soria et al., 2014). In brief, aliquots of 50 ng RNA coding for KCNQ3 or the KCNQ3 variant RNA were injected into Xenopus laevis oocytes. For VCF experiments, 2–5 d after injection, oocytes were labeled for 30 min with 100 µM Alexa Fluor 488 5-maleimide (Molecular Probes) in high [K+]-ND96 solution (98 mM KCl, 1.8 mM CaCl2, 1 mM MgCl2, 5 mM HEPES, pH 7.5, with NaOH) at 4°C. Labeled oocytes were kept on ice to prevent internalization of labeled channels. For TEVC and VCF recordings, oocytes were placed into a recording chamber animal pole “up” in nominally Ca2+-free solution (96 mM NaCl, 2 mM KCl, 2.8 mM MgCl2, 5 mM HEPES, pH 7.5, with NaOH). 100 µM LaCl3 was added to the bath solution to block endogenous hyperpolarization-activated currents. At this concentration, La3+ did not affect G(V) or F(V) curves from KCNQ3.
I assayed cysteine modification using the membrane-permeant thiol reagents MTSEA and MTSET (1 mM and 10 mM, respectively; Toronto Research Chemicals) with bath perfusion of oocytes under TEVC. Electrical measurements were performed in the TEVC configuration using a Dagan CA-1B amplifier, low-pass filtered at 1 kHz, and sampled at 5 kHz (or for VCF an OC-725C oocyte clamp; Warner Instruments). Microelectrodes were pulled to resistances from 0.3 to 0.5 MΩ when filled with 3 M KCl. Voltage clamp data were digitized at 5 kHz (Axon Digidata 1440A; Molecular Devices) and collected using pClamp 10 (Axon Instruments). Fluorescence recordings were performed using an Olympus BX51WI upright microscope. Light was focused on the top of the oocyte through a ×20 water-immersion objective (numerical aperture: 1.0, working distance: 2 mm) after being passed through an Oregon green filter cube (41026; Chroma). Fluorescence signals were focused on a photodiode and amplified with an Axopatch 200B patch clamp amplifier (Axon Instruments). Fluorescence signals were low-pass Bessel-filtered (Frequency Devices) at 100–200 Hz, digitized at 1 kHz, and recorded using pClamp 10. Data analysis To determine the ionic conductance established by a given test voltage, a test voltage pulse was followed by a step to the fixed voltage of −40 mV, and current was recorded following the step. To estimate the conductance g(V) activated at the end of the test pulse to voltage V, the current flowing after the hook was exponentially extrapolated to the time of the step and divided by the offset between −40 mV and the reversal potential. The conductance g(V) associated with different test voltages V in a given experiment was fitted by the relation $$g(V)=A_1+\frac{A_2-A_1}{1+\exp\left[-ze\left(V-V_{1/2}\right)/\left(k_BT\right)\right]},$$ (1) where A1 and A2 are conductances that would be approached at extreme negative or positive voltages, respectively, V1/2 is the voltage at which the conductance is (A1 + A2)/2, and z is an apparent valence describing the voltage sensitivity of activation (e is the electron charge, kB the Boltzmann constant, and T the absolute temperature). In my experiments, A2 is the maximal conductance activated under extreme depolarization, and A1 is the constitutive conductance already present at extreme hyperpolarization. A1/A2 is the fraction of conductance that is constitutively activated (which is exceedingly small in WT KCNQ3). Because of the generally different numbers of expressed channels in different oocytes, I compared the normalized conductance G(V), computed as follows: $$G(V)=g(V)/A_2.$$ (2) Fluorescence signals were corrected for bleaching and time-averaged over 10–40-ms intervals for analysis. The voltage dependence of fluorescence f(V) was analyzed and normalized (F(V)) using relations analogous to those for conductance (Eqs. 1 and 2). To estimate the effect of R230 mutations (mut) relative to the wild type (WT) on Gibbs energy, the following relation was used: $$\Delta\Delta G_0=\Delta\left(zFV_{1/2}\right)=-F\left(z^{\mathrm{WT}}V_{1/2}^{\mathrm{WT}}-z^{\mathrm{mut}}V_{1/2}^{\mathrm{mut}}\right),$$ (3) where z is the gating charge of each channel deduced from the slope (κ) of the Boltzmann fits according to z = 25/κ, ΔV1/2 is the mutation (mut)-induced shift in the V1/2 values—relative to WT—from the Boltzmann fits, and F is Faraday’s constant (Monks et al., 1999; Li-Smerin and Swartz, 2001; DeCaen et al., 2008). This analysis assumes a two-state model; hence it underestimates the z value (Chowdhury and Chanda, 2012). The calculated ΔΔG0 should therefore be seen as an approximation.
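As an illustration of how Eqs. 1–3 are used, a minimal scipy/numpy sketch (not the author's analysis code; the synthetic data and all numbers below are arbitrary placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

KT_OVER_E = 25.0   # k_B*T/e in mV near room temperature
F_KCAL = 23.06     # Faraday's constant in kcal/(mol*V)

def boltzmann(V, A1, A2, V_half, z):
    # Eq. 1: g(V) = A1 + (A2 - A1)/(1 + exp(-ze(V - V1/2)/(kB*T)))
    return A1 + (A2 - A1) / (1 + np.exp(-z * (V - V_half) / KT_OVER_E))

def ddG0(z_wt, V_wt, z_mut, V_mut):
    # Eq. 3 (V1/2 in mV): DDG0 = -F(z_WT*V1/2_WT - z_mut*V1/2_mut), kcal/mol
    return -F_KCAL * (z_wt * V_wt - z_mut * V_mut) / 1000

# Fit Eq. 1 to a synthetic, noisy G(V) curve.
V = np.linspace(-200, 20, 23)
G = boltzmann(V, 0.02, 1.0, -46.5, 2.0)
G += np.random.default_rng(1).normal(0, 0.01, V.size)
(A1, A2, V_half, z), _ = curve_fit(boltzmann, V, G, p0=[0, 1, -50, 1])
print(f"V1/2 = {V_half:.1f} mV, z = {z:.2f}, A1/A2 = {A1 / A2:.3f}")

# Hypothetical mutant left-shifted by ~50 mV at the same z:
print(f"ddG0 = {ddG0(2.0, -46.5, 2.0, -95.0):.2f} kcal/mol")  # about -2.24
```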
Encoding of citrulline into KCNQ3-R230 in X. laevis oocytes Synthesis of citrulline-pdCpA and its ligation onto synthetic pyrrolysine tRNA (PylT) to generate PylT-Cit have recently been described in detail (Infield et al., 2018a). PylT was used for delivery because it is highly orthogonal in Xenopus oocytes (Infield et al., 2018b). Oocytes were injected with a mixture of 80 ng UAG-bearing cRNA and 80 ng Pyl-Cit tRNA. In parallel, the cRNA was injected with full-length (pdCpA ligated) PylT (PylT-CA) as a negative control. Lack of significant currents arising from the PylT-CA–injected condition serves as evidence that the currents in PylT-Cit–injected oocytes result from successful encoding of citrulline at the UAG site introduced into the KCNQ3 channels. Statistics All experiments were repeated four or more times from at least seven batches of oocytes. Pairwise comparisons were made using Student’s t test, and multiple comparisons were performed using one-way ANOVA with Tukey’s test. Data are represented as means ± SEM, and n represents the number of experiments. Results KCNQ3-R230C shifts S4 movement and activating gating to very negative voltages The K+ channels underlying the M current of neurons are formed as heterotetramers of the KCNQ2 and KCNQ3 polypeptides. The epilepsy-causing mutation of interest, R230C, is located in the voltage sensor (S4) segment of KCNQ3 (Fig. 1 A). A configuration in which the effects of this mutant could be studied without ambiguity due to subunit composition would be a homomeric channel formed by KCNQ3. However, the KCNQ3 homomeric channel is nonfunctional in cellular expression systems because of a failure of membrane insertion (Etxeberria et al., 2004; Zaika et al., 2008; Gómez-Posada et al., 2010). The mutant A315T (Fig. 1 A), however, produced detectable K+ currents (Fig. 1 B; Zaika et al., 2008) that were half-activated at the voltage V1/2 = −46.5 ± 3 mV (n = 13; Fig. 1 D), similar to KCNQ2/3 heterotetramers (Fig. 1 D, dashed line, and Fig. S1, A and C). Here, the homotetramer formed by KCNQ3-A315T was used as the reference background for all constructs when studying mutations at position 230. To track S4 movement, I introduced a second mutation, Q218C, to attach the fluorophore Alexa Fluor 488 5-maleimide to a position near the extracellular end of the S4 segment (Kim et al., 2017). The labeled KCNQ3-A315T-Q218C channels showed functional properties similar to those of KCNQ3-A315T channels. For example, KCNQ3-A315T-Q218C activated with a voltage dependence that was only slightly shallower and mildly right-shifted compared with KCNQ3-A315T (Fig. S2, A and B, compare black solid and dashed lines). The mutation R230C of KCNQ3 has been linked to epileptic encephalopathy (Rauch et al., 2012; Allen et al., 2013). KCNQ3-A315T-R230C channels expressed in Xenopus oocytes appeared to be constitutively activated at voltages between −80 and +20 mV (Fig. 1, C and D). To further study the effect of mutation R230C on activation gating, I extended the range of voltage (−200 to +20 mV) and simultaneously monitored gate opening (by ionic current) and S4 movement (by fluorescence) using VCF (Barro-Soria et al., 2014). The mutant to be tested, KCNQ3-A315T-Q218C-R230C, presented two potential binding sites for the fluorescence probe Alexa Fluor 488 5-maleimide. I therefore assessed potential fluorescence changes due to Alexa Fluor 488 5-maleimide bound to KCNQ3-A315T-R230C channels (and thus to the S4 segment) in response to voltages.
No changes were observed with the probe Alexa Fluor 488 5-maleimide (Fig. S2 C), and the ionic currents were like those with unmodified KCNQ3-A315T-R230C channels. I conclude that this probe does not attach to R230C, so that fluorescence changes reported by a probe at Q218C are not obscured by a second attached probe in R230C mutant channels. Using voltage clamp fluorometry, I observed that over the physiological voltage range (from –100 to +20 mV) the KCNQ3-A315T-Q218C-R230C channels were constitutively open and exhibited very small changes in fluorescence, as if their S4 segments and open gates were locked in place (Fig. 2 A). In contrast, when test voltages between −100 and −200 mV were applied, the fluorescence signal decreased with hyperpolarization (Fig. 2 B, green), as if S4 segments had moved inward (see cartoon in Fig. 2 E). The fluorescence signal increased after the test pulse (Fig. 2 B, green), indicating that the voltage sensors moved from a resting position at approximately −200 mV to an activated position at approximately −100 mV. Simultaneously, I assessed the ionic conductance by applying hyperpolarizing voltage test pulses followed by a step to the fixed voltage of −40 mV (Fig. 2 B, black). This step produced relaxations of K+ current, indicating that at the end of the hyperpolarizing test pulse, the K+ conductance was smaller than the conductance eventually reached at −40 mV. When the relaxations of current and fluorescence after the test pulse were scaled and aligned with respect to their end points, they closely superimposed (Fig. 2 D). Displacements of S4 segments and ionic conduction were strongly correlated in time. Thus, the activation of KCNQ3-A315T-Q218C-R230C channels was voltage dependent but shifted to more negative potentials compared with WT KCNQ3-A315T-Q218C (Fig. 2 C). The steady-state fluorescence/voltage curve, F(V) (reflecting voltage sensor movement), and the steady-state conductance/voltage curve, G(V) (reflecting channel opening), in KCNQ3-A315T-Q218C-R230C channels were similarly shifted to negative voltages compared with those of KCNQ3-A315T-Q218C channels (F(V) shifted by −107.1 ± 4.5 mV and G(V) shifted by −112 ± 3.3 mV, n = 9; Fig. 2 C, arrow). Together, these correlations in time and voltage dependencies indicate that in both WT and R230C channels, S4 motion and gate operation appeared to be directly coupled: the R230C mutation did not break this coupling. Moreover, S4 motion and gate opening were possible in both WT and R230C channels: the mutation did not interfere with either part of the gating mechanism. Instead, the R230C mutant appeared to shift the voltage-dependent transitions of these channel components to a hyperpolarized, nonphysiological range. Therefore, R230C channels were constitutively open at physiological voltages (−100 to +20 mV; Fig. 2 E). The loss of the positive charge of R230 accounts for most of the leftward shift in the G(V) The observed leftward shift in both the F(V) and G(V) relations of R230C channels relative to WT might be due to (a) loss of the positive charge of the R2 arginine, (b) reduction in the size of the side chain (e.g., from Arg to Cys), or (c) a combination of both. R230 is the second charged residue of the S4 segment (counting from the extracellular end) and thus is likely to contribute directly to the sensing of membrane voltage in the KCNQ3 channel (Fig. 1 A).
I tested the mutants R230K (which replaces the guanidinium moiety of the arginine with the ammonium group of the lysine while reducing the size of the side chain) and R230H, and chemical modification of R230C by MTSEA, which extends the cysteine side chain and terminates it with an aminoethyl group (Fig. 3, A and B). In the charge-conservative R230K mutation, the G(V) curve was left-shifted by more than 45 mV compared with WT (R230) channels (KCNQ3-R230K: GV1/2 = −95 ± 5, n = 9; Fig. 3 C, solid arrow). However, the G(V) curve of R230K was closer to that of the WT R230 than to that of the mutant R230C (Fig. 3 C, dashed arrow), whereas MTSEA-modified R230C and R230H were less effective substitutes of R230 (Fig. 3 C, maroon and green lines, respectively). One possible reason for the lower rescuing effect of MTSEA and R230H is that these residues may not be completely protonated and therefore not completely charged. All three “charged substitutes” of arginine, most prominently lysine, tended to suppress residual activation at the negative end of the tested voltage range (Fig. 3 C, gray arrow). VCF showed that in the charge-conservative R230K mutation (KCNQ3-A315T-Q218C-R230K), the F(V) curve closely superimposed on the G(V) curve, and both relations more closely resembled those of the WT KCNQ3-A315T-Q218C in that the S4 movement and channel closing were shifted toward more positive voltages compared with the mutant R230C (KCNQ3-A315T-Q218C-R230K: GV1/2 = −88 ± 5, n = 8; FV1/2 = −92 ± 6, n = 8; Fig. 3, D and E). The time courses of the fluorescence and ionic current in KCNQ3-A315T-Q218C-R230K were also superimposed (Fig. 3 F, blue dashed and solid lines), and these time courses were faster than those of WT KCNQ3-A315T-Q218C (Fig. 3 F, compare blue and red traces). Similar to WT and mutated R230C channels, these correlations in time and voltage dependencies of R230K (and also of 230H, Fig. 3 F, green) further supported that S4 and gate motions were directly coupled. These data also suggest that a positive charge at position R2 directly contributes to sensing membrane voltage in the KCNQ3 channel, but smaller side chains (R-to-K, or guanidinium-to-ammonium) destabilize the resting conformation of the S4, hence promoting channel opening. An attempt to modify KCNQ3-A315T-R230C using methanethiosulfonate ethyltrimethylammonium (MTSET; one positive formal charge) instead of MTSEA produced no detectable change of R230C channel gating, as if extracellular MTSET was unable to react with R230C (Fig. S3, A and B), similar to what Wu et al. (2010) found when extracellular MTSET failed to modify homologous KCNQ1-R2 cysteine mutant channels. In light of the MTSET result, it appears possible that the observed modification of KCNQ3-R230C by MTSEA reflects a difference in access to position 230 for these two reagents (MTSEA bears no formal charge, whereas MTSET does), and that the functional consequences of modification by MTSEA may involve only partial or no protonation of the MTSEA aminoethyl group. Moreover, MTSEA specifically modified R230C of KCNQ3, since both the kinetics of activation and G(V) curves in control KCNQ3 and the mutant KCNQ3-R230A channels treated with MTSEA were similar (Fig. S3, C–E). Together, these results indicate that, although preserving the positive charge at R2 in the S4 of KCNQ3 channels largely contributes to restoring effective channel gating, it is not sufficient to fully recapitulate the WT channel gating properties.
The reduction of the side chain size at position 230 contributes to the leftward shift in the G(V) Because there is a clear difference in the activation characteristics of R230 and R230K, charge itself is not the sole determinant of function at position 230. To examine the effect of modifying the side chain of R230, I used a nonsense suppression technique to encode the noncanonical amino acid citrulline, an uncharged arginine analogue that retains most of the architecture (volume) of the guanidinium group of arginine (Fig. 4 A). Citrulline is not naturally encoded, although posttranslational deimination of arginine residues does occur (György et al., 2006). I introduced citrulline into KCNQ3-R230 channels using in vitro acylated tRNAs (Fig. 4 B), as previously reported (Infield et al., 2018a). Coinjection of citrulline-acylated pyrrolysine tRNA with cRNA of KCNQ3 channels bearing the amber UAG stop codon at 230 rendered voltage-activated potassium currents, demonstrating successful encoding of citrulline (R230Cit; Fig. 4, B and C, inset trace). As a control, I coinjected (in the same batch of oocytes, on the same day, and using the same experimental conditions as above) KCNQ3-R230UAG or Shaker-R362UAG (R1UAG) cRNA with nonacylated, full-length (pdCpA-ligated) pyrrolysine tRNA (pyl-tRNA) and Shaker-R1UAG cRNA with Pyl-citrulline (Fig. S4). In the absence of the tethered amino acid, the current was negligible (Fig. S4, A and B), reflective of the orthogonality of this tRNA species in the Xenopus oocyte. However, co-injection of Shaker-R1UAG cRNA with Pyl-citrulline rendered robust voltage-activated potassium currents (Fig. S4, C and D), as previously reported (Infield et al., 2018a), thus validating the approach used here. The G(V) curve of mutant R230Cit was shifted to negative voltages compared with WT KCNQ3 channels (Fig. 4 C, red circles and red horizontal arrow). Also, the ionic current at −200 mV was significantly reduced compared with KCNQ3-R230C (Fig. 4 C, vertical red arrow). The Arg-to-Cit substitution (which retains most of the Arg side chain’s steric properties) therefore quantifies the importance of a positive charge in an otherwise minimally modified side chain at position 230. To examine the effect of side chain size of R230 on channel activation, I introduced three different uncharged amino acids at position 230 (A, Q, and W) and measured the ionic currents (Fig. 5 A). The substitutions R230A, R230Q, and R230W (Fig. 5, A and B) produced graded variations of activation characteristics about those already described for (unmodified) R230C. In general, all KCNQ3-230x substitutions (A, Q, and W) left-shifted the steady-state conductance/voltage relation G(V) compared with WT KCNQ3 channels and reduced the activation slope, resulting in varying degrees of activation at the most negative tested voltage, −200 mV (Fig. 5 B). Consequently, the effectiveness of hyperpolarization in deactivating these mutant channels was related to side chain size (Fig. 5 C). Uncharged bulkier side chains appeared more effective in deactivating the channel (Fig. 5 C). The R230W substitution rescued deactivation to an extent similar to that of the charged substitution created by reaction of R230C with MTSEA (compare Fig. 3 C and Fig. 5 B). This tendency might also be reflected in the greater effectiveness of WT 230R compared with R230K: both charged residues differed in that the guanidinium moiety of the arginine was slightly larger than the ammonium group of the lysine (Fig. 3 C).
Therefore, the difference between R-to-K substitution, which retains the positive charge, quantifies the importance of the guanidinium steric properties at position 230 for normal channel gating (Fig. 3, C and E). I also aimed to correct for the difference in slope to better compare the functional effect of G(V) shifts on these mutants by calculating changes in Gibbs energy for channel opening (ΔΔG0; Monks et al., 1999; Li-Smerin and Swartz, 2001; DeCaen et al., 2008). However, because the G(V) curves in the R230x substitutions (A, C, Q, and W) do not saturate at negative voltages and/or show various constitutive currents at strong negative voltages, both the V1/2 and the activation slopes might not be accurately estimated. I therefore calculated the Gibbs energy only for charged residues R230K and R230H and set a cutoff value of 1 kcal/mol for significant perturbation (Labro et al., 2011). Thus, compared with WT R230R, the R230K and R230H substitutions significantly reduced the energy required to open the channel by 3.8 and 3.9 kcal/mol, respectively (Table 1), as if reducing the size of the side chain would promote stabilization of the channel in an activated open state (or destabilization of the channel closed state). Together, these results suggest that the reduction of the side chain at position 230 alters normal KCNQ3 channel gating, and that bulkier side chains at this position help to restore WT channel function (Fig. 5, B and C). Discussion I investigated the role of residue 230 (R2) in the voltage sensor of homomeric KCNQ3 channels. The mutant R230C renders the channel open throughout the physiological membrane voltage range and has been implicated in epileptic diseases (Miceli et al., 2012, 2015). Mechanisms that could link this functional mutation in neuronal channels to epilepsy have been discussed (Miceli et al., 2015; Niday and Tzingounis, 2018). For example, KCNQ-associated gain-of-function mutations in inhibitory neurons may decrease inhibitory activity, thereby increasing action potential activity in the excitatory pyramidal neurons with which the inhibitory neurons synapse (Miceli et al., 2015). Here, I further investigated how substitution of arginine 230 by cysteine alters channel activation and used a variety of other substitutions to determine side-chain properties that are important for normal or altered activation. Using VCF, I found that the epilepsy-causing mutation R230C shifted the voltage dependence of S4 movement and channel closing toward strongly negative voltages. The S4 and gate remained in their activated states at physiological voltages but returned to resting states at hyperpolarizing voltages in the range −80 to −200 mV. In R230K and R230C mutants, the time courses and voltage dependences of fluorescence and ionic current strongly correlated, suggesting that S4 movement and channel opening are directly coupled, as also reported for WT KCNQ3 channels (Kim et al., 2017). This coupling is different from that of KCNQ4 channels, in which the S4 movement and ionic current seem to be “poorly coupled” (Miceli et al., 2012). Measurements of gating current of KCNQ4 showed that the S4 segment moves much faster than the rate of activation revealed by ionic currents, as if S4-charge movement is not directly coupled to opening or closing of KCNQ4 channels (Miceli et al., 2012). The mutation R230C eliminates the second positive charge (R2) of the S4 transmembrane segment (counted from the extracellular side). 
Because residue 233 (R3) of KCNQ3 is also uncharged (R3 of all KCNQ1–5 is a Q), a gap of two uncharged positions interrupts the regular chain of charged residues that is characteristic of the S4 segments of many other voltage-dependent channels, such as Shaker. The R2 + R3 gap implies that conserved negative charges on the S2 and S3 transmembrane segments cannot interact with positive S4 charges when the S4 segment is in its resting position (inward), which is associated with the closure of the channel gate. I show here that R230K—and with weaker effect, R230H—support, but do not fully rescue, channel closure. Indeed, it has been shown that arginine and lysine form ionizable salt bridges by hydrogen bonding and/or electrostatic charge–charge ionic interactions (Riordan et al., 1977; Borders et al., 1994). Compared with the smaller amino group of the lysine side chain that can form only two hydrogen bonds, arginine can coordinate up to five hydrogen bonds with acidic residues owing to the presence of the larger guanidinium group (Borders et al., 1994). Therefore, it seems that in the R230K mutation, for instance, the loss of at least three hydrogen bonds caused by the loss of the guanidinium group, or the reduced side-chain volume, contributes to the observed left-shifted channel closure. By comparing the functional properties of substitutions of R230 with neutral amino acids, I found that the increase in the shape/volume at this position helped stabilize the channel in a deactivated closed state (or helped destabilize the channel-activated conformation). Citrulline, the uncharged close structural analogue of arginine, quantified the effect of a missing charge at position 230, which was smaller than the effect of R230C. Indeed, substitution with uncharged residues at position 230 showed that channel closure moved in a positive direction along the voltage axis in proportion to the number of major atoms in the 230 side chain. The smaller hydrophilic glutamine residue promotes less channel closure than the bulkier aromatic tryptophan residue. It could be that in KCNQ3, bulkier side chains at position 230 (e.g., W over Q for neutral residues or R over K for charged residues) cause a stronger packing of the S4 toward the S5–S6 pore, thereby resulting in a more effective channel closure. Supporting this idea, both 230H and 230K substitutions, compared with the bigger side chain of WT 230R, significantly reduced the Gibbs energy required to open the channel (Table 1), as if smaller residues at position 230 (K and H over R) help stabilize the channel in an activated open state (or destabilize the channel closed state). Moreover, reducing the size of the side chain at position 230 for charged residues (K over R) sped up the time course of both fluorescence (S4 movement) and ionic current (channel opening; Fig. 3 F; and note that the time course of current of 230H was also faster compared with WT, green) with minor changes in channel deactivation kinetics (Fig. 3 G), as if smaller side chain residues destabilize the resting conformation of the S4, hence promoting channel opening. Electrophysiological studies have shown that among the most severe disease-causing mutations in neuronal KCNQ channels, those affecting the gating charge arginine residues (R1–R6) in the S4 profoundly alter the channel’s voltage dependence (GV) and the macroscopic ionic current kinetics (Miceli et al., 2008, 2013, 2015).
A previous glutamine-scanning mutagenesis study and voltage clamp fluorometry experiments showed that R228Q (R1) KCNQ1 channels exhibit strongly hyperpolarized voltage dependencies of both the G(V) and F(V) curves, compared with the WT KCNQ1 channel (Wu et al., 2010; Osteen et al., 2012). Likewise, heterologous expression of the infantile spasm- and encephalopathy-causing mutation R198Q (R1) in KCNQ2 channels showed a negative shift of the G(V) curve (30 mV) and a large fraction of channels open at −80 mV, compared with WT KCNQ2 channels (Millichap et al., 2017). Moreover, Miceli et al. (2015) found that, compared with WT KCNQ2 channels, substituting R201 (R2) by histidine or cysteine causes a strong hyperpolarized G(V) shift and time- and voltage-independent current, respectively. In disulfide cross-linking experiments, they showed that the R201 residue and the negatively charged residue D172 in the S3 segment electrostatically interact to stabilize the closed state of KCNQ2 channels (Miceli et al., 2015). This observation led them to conclude that in the mutant R201C, which removes this electrostatic interaction, the S4 segment was likely stabilized in its activated position. Studies from the closely related KCNQ1 and KCNQ4 channels also showed that neutralization mutations at the homologous R2 residue lead to constitutively conducting channels (Panaghie and Abbott, 2007; Itoh et al., 2009; Wu et al., 2010; Bartos et al., 2011; Miceli et al., 2012). Interestingly, unlike other R2 neutralization mutations in KCNQ channels, R231C (R2) in KCNQ1 has been reported to show pleiotropic expression. Thus, coexpression of WT KCNQ1 and KCNQ1-R231C subunits showed that the assembled heteromeric channel had a mixed functional phenotype with both loss-of-function and gain-of-function properties (Bartos et al., 2011). Similar to heteromeric KCNQ2/KCNQ3 channels bearing the R230C mutation studied here, two independent studies showed that heteromeric KCNQ1/KCNQ1-R231C exhibited a large fraction of constitutive ionic conductance even at strong negative voltages and a marked negative shift of the G(V) curve compared with homomeric WT KCNQ1 (Bartos et al., 2011), as if channels containing two activated voltage sensors have a ∼30% probability of opening (Osteen et al., 2012). By contrast, using a ventricular action potential waveform protocol, Bartos et al. (2011) also showed that cells expressing heteromeric KCNQ1/KCNQ1-R231C channels were able to reduce the fraction of current that remained after repolarization to diastolic potentials, a phenotype consistent with a loss of channel function. This unique sensitivity of R2 to mutations in KCNQ channels seems different from, for example, neutralization mutations affecting charged residues located in the C-terminal portion of S4 within KCNQ channels, which have been shown to decrease the stability of the open state (or increase the stability of the closed state) and the active voltage-sensing domain configuration, as shown by the depolarized shift of the G(V) curve (Wu et al., 2010; Miceli et al., 2012). For example, two scanning mutagenesis studies showed that neutralization mutations of R4 and R6 from the KCNQ1 channel shifted the G(V) curve to positive voltages (Panaghie and Abbott, 2007; Wu et al., 2010)—the deactivation time constant of R237A (R4) being remarkably slower compared with WT KCNQ1 channels (Panaghie and Abbott, 2007).
Similarly, functional studies in KCNQ2 channels revealed that R3Q, R6Q, and R6W substitutions shift the G(V) curve to positive voltages and slow the kinetics of activation, suggesting that these neutralization mutations destabilize the open state (Miceli et al., 2008, 2013). In general, it seems that in KCNQ channels, charged residues located in the N-terminal portion of S4 (e.g., R1 and R2) contribute to stabilizing the S4 in its resting (inward) conformation, whereas charged residues in the C-terminal portion of S4 (e.g., R6) stabilize the activated (outward) conformation of the S4, as previously suggested for KCNQ1 channels (Wu et al., 2010). Notably, none of these studies directly measured S4 motion in mutated KCNQ2 or KCNQ3 channels, possibly because of a combination of low expression, few S4 charges, and/or slow S4 movements compared with other Kv channels. The VCF and systematic amino acid substitution data shown here offer a mechanistic explanation for how neutralization mutations at R2 of other KCNQ channels may cause constitutively conducting channels. This study demonstrates that R230C shifts the voltage dependence of S4 movement and channel opening by more than −100 mV, thereby converting KCNQ3 channels bearing the R230C mutation into voltage-independent channels in the physiological voltage range. This dramatic change in the gating of mutated KCNQ3-R230C channels is likely due to a combination of the reduced side-chain volume and the loss of the positive charge at position 230. Impaired KCNQ3 channel function leads to a defective IKM channel that, in turn, accounts for the neuronal hyperexcitability-related phenotype observed with KCNQ3-R230 mutant channels. Indeed, I found that in heteromeric channels assembled from WT KCNQ2, WT KCNQ3, and KCNQ3-R230C (in a 2:1:1 ratio), the G(V) curve was shifted to negative voltages by 15 mV compared with heteromeric WT KCNQ2/3 channels (Fig. S1, A–C, arrow). The current also revealed a 1–5% constitutive conductance even at strongly hyperpolarized potentials (Fig. S1, B′ and C, dotted gray rectangle), as if only one R230C subunit were sufficient to impair full channel closure. The finding that homomeric KCNQ3-230C channels allow S4 movement but shift the open/closed transition of the gate to strongly negative potentials provides a mechanistic platform to explain how, at resting potentials, heteromeric channels bearing even one 230C subunit, as in epileptic phenotypes, conduct more ionic current than WT channels. Thus, neurons from patients bearing the R230C mutation are more hyperpolarized than those of normal individuals owing to greater KCNQ-associated potassium conductance (Fig. S1 C). However, because inhibitory neurons have a larger input resistance than principal neurons (Zemankovics et al., 2010), a leftward shift in the G(V) curve, such as that of heteromeric channels bearing the R230C mutation, would have a larger “silencing” effect in inhibitory neurons than in principal neurons, thereby enhancing excitability in the principal neurons with which inhibitory neurons synapse. This suggests that compounds that can right-shift the voltage dependence of S4 activation would promote gate closing and have therapeutic potential. This study not only provides mechanistic insights toward a better understanding of the molecular basis by which mutations in the IKM channel are linked to epilepsy, but also lays the groundwork for a potential drug-development platform. Acknowledgments I thank Drs. Derek M. Dykxhoorn, H.
Peter Larsson, and Wolfgang Nonner for helpful comments on the manuscript. I thank Marta E. Perez for technical assistance in molecular biology and Dr. Harley T. Kurata (University of Alberta, Canada) for the generous gift of the KCNQ3 construct. I also thank Dr. Christopher A. Ahern for the pyrrolysine tRNA-citrulline, tRNA-pdCpA, and Shaker-R1TAG constructs through National Institutes of Health grant R24NS104617 to Dr. Ahern. This work was supported by a Taking Flight award from Citizens United for Research in Epilepsy (414889) and a National Institutes of Health grant (K01NS096778) to R. Barro-Soria. The author declares no competing financial interests. Author contributions: R. Barro-Soria performed voltage clamp, voltage clamp fluorometry, MTS-modification recordings, and molecular biology, analyzed the data, and wrote the manuscript. Richard W. Aldrich served as editor. References Aggarwal, S.K., and R. MacKinnon. 1996. Contribution of the S4 segment to gating charge in the Shaker K+ channel. Neuron. 16:1169–1177. Allen, A.S., S.F. Berkovic, P. Cossette, N. Delanty, D. Dlugos, E.E. Eichler, M.P. Epstein, T. Glauser, D.B. Goldstein, Y. Han, et al; Epilepsy Phenome/Genome Project. 2013. De novo mutations in epileptic encephalopathies. Nature. 501:217–221. Barro-Soria, R., S. Rebolledo, S.I. Liin, M.E. Perez, K.J. Sampson, R.S. Kass, and H.P. Larsson. 2014. KCNE1 divides the voltage sensor movement in KCNQ1/KCNE1 channels into two steps. Nat. Commun. 5:3750. Bartos, D.C., S. Duchatelet, D.E. Burgess, D. Klug, I. Denjoy, R. Peat, J.M. Lupoglazoff, V. Fressart, M. Berthet, M.J. Ackerman, et al. 2011. R231C mutation in KCNQ1 causes long QT syndrome type 1 and familial atrial fibrillation. Heart Rhythm. 8:48–55. Bezanilla, F., and E. Perozo. 2003. The voltage sensor and the gate in ion channels. Adv. Protein Chem. 63:211–241. Biervert, C., B.C. Schroeder, C. Kubisch, S.F. Berkovic, P. Propping, T.J. Jentsch, and O.K. Steinlein. 1998. A potassium channel mutation in neonatal human epilepsy. Science. 279:403–406. Borders, C.L., Jr., J.A. Broadwater, P.A. Bekeny, J.E. Salmon, A.S. Lee, A.M. Eldridge, and V.B. Pett. 1994. A structural role for arginine in proteins: multiple hydrogen bonds to backbone carbonyl oxygens. Protein Sci. 3:541–548. Brown, D.A., and P.R. Adams. 1980. Muscarinic suppression of a novel voltage-sensitive K+ current in a vertebrate neurone. Nature. 283:673–676. Charlier, C., N.A. Singh, S.G. Ryan, T.B. Lewis, B.E. Reus, R.J. Leach, and M. Leppert. 1998. A pore mutation in a novel KQT-like potassium channel gene in an idiopathic epilepsy family. Nat. Genet. 18:53–55. Chowdhury, S., and B. Chanda. 2012. Estimating the voltage-dependent free energy change of ion channels using the median voltage for activation. J. Gen. Physiol. 139:3–17. DeCaen, P.G., V. Yarov-Yarovoy, Y. Zhao, T. Scheuer, and W.A. Catterall. 2008. Disulfide locking a sodium channel voltage sensor reveals ion pair formation during activation. Proc. Natl. Acad. Sci. USA. 105:15142–15147. del Camino, D., and G. Yellen. 2001. Tight steric closure at the intracellular activation gate of a voltage-gated K(+) channel. Neuron. 32:649–656. Etxeberria, A., I. Santana-Castro, M.P. Regalado, P. Aivar, and A. Villarroel. 2004. Three mechanisms underlie KCNQ2/3 heteromeric potassium M-channel potentiation. J. Neurosci. 24:9146–9152. Gómez-Posada, J.C., A. Etxeberría, M. Roura-Ferrer, P. Areso, M. Masin, R.D. Murrell-Lagnado, and A. Villarroel. 2010.
A pore residue of the KCNQ3 potassium M-channel subunit controls surface expression. J. Neurosci. 30:9316–9323. György, B., E. Tóth, E. Tarcsa, A. Falus, and E.I. Buzás. 2006. Citrullination: a posttranslational modification in health and disease. Int. J. Biochem. Cell Biol. 38:1662–1677. Halliwell, J.V., and P.R. Adams. 1982. Voltage-clamp analysis of muscarinic excitation in hippocampal neurons. Brain Res. 250:71–92. Infield, D.T., E.E.L. Lee, J.D. Galpin, G.D. Galles, F. Bezanilla, and C.A. Ahern. 2018a. Replacing voltage sensor arginines with citrulline provides mechanistic insight into charge versus shape. J. Gen. Physiol. 150:1017–1024. Infield, D.T., J.D. Lueck, J.D. Galpin, G.D. Galles, and C.A. Ahern. 2018b. Orthogonality of pyrrolysine tRNA in the Xenopus oocyte. Sci. Rep. 8:5166. Itoh, H., T. Sakaguchi, W.G. Ding, E. Watanabe, I. Watanabe, Y. Nishio, T. Makiyama, S. Ohno, M. Akao, Y. Higashi, et al. 2009. Latent genetic backgrounds and molecular pathogenesis in drug-induced long-QT syndrome. Circ. Arrhythm. Electrophysiol. 2:511–523. Jentsch, T.J. 2000. Neuronal KCNQ potassium channels: physiology and role in disease. Nat. Rev. Neurosci. 1:21–30. Kato, M., T. Yamagata, M. Kubota, H. Arai, S. Yamashita, T. Nakagawa, T. Fujii, K. Sugai, K. Imai, T. Uster, et al. 2013. Clinical spectrum of early onset epileptic encephalopathies caused by KCNQ2 mutation. Epilepsia. 54:1282–1287. Kim, R.Y., S.A. Pless, and H.T. Kurata. 2017. PIP2 mediates functional coupling and pharmacology of neuronal KCNQ channels. Proc. Natl. Acad. Sci. USA. 114:E9702–E9711. Labro, A.J., I.R. Boulet, F.S. Choveau, E. Mayeur, T. Bruyns, G. Loussouarn, A.L. Raes, and D.J. Snyders. 2011. The S4-S5 linker of KCNQ1 channels forms a structural scaffold with the S6 segment controlling gate closure. J. Biol. Chem. 286:717–725. Larsson, H.P., O.S. Baker, D.S. Dhillon, and E.Y. Isacoff. 1996. Transmembrane movement of the Shaker K+ channel S4. Neuron. 16:387–397. Li-Smerin, Y., and K.J. Swartz. 2001. Helical structure of the COOH terminus of S3 and its contribution to the gating modifier toxin receptor in voltage-gated ion channels. J. Gen. Physiol. 117:205–218. Long, S.B., E.B. Campbell, and R. MacKinnon. 2005. Crystal structure of a mammalian voltage-dependent Shaker family K+ channel. Science. 309:897–903. Maljevic, S., and H. Lerche. 2014. Potassium channel genes and benign familial neonatal epilepsy. Prog. Brain Res. 213:17–53. Mannuzzu, L.M., M.M. Moronne, and E.Y. Isacoff. 1996. Direct physical measure of conformational rearrangement underlying potassium channel gating. Science. 271:213–216. Miceli, F., M.V. Soldovieri, C.C. Hernandez, M.S. Shapiro, L. Annunziato, and M. Taglialatela. 2008. Gating consequences of charge neutralization of arginine residues in the S4 segment of K(v)7.2, an epilepsy-linked K+ channel subunit. Biophys. J. 95:2254–2264. Miceli, F., E. Vargas, F. Bezanilla, and M. Taglialatela. 2012. Gating currents from Kv7 channels carrying neuronal hyperexcitability mutations in the voltage-sensing domain. Biophys. J. 102:1372–1382. Miceli, F., M.V. Soldovieri, P. Ambrosino, V. Barrese, M. Migliore, M.R. Cilio, and M. Taglialatela. 2013. Genotype-phenotype correlations in neonatal epilepsies caused by mutations in the voltage sensor of K(v)7.2 potassium channel subunits. Proc. Natl. Acad. Sci. USA. 110:4386–4391. Miceli, F., M.V. Soldovieri, P. Ambrosino, M. De Maria, M. Migliore, R.
Migliore, and M. Taglialatela. 2015. Early-onset epileptic encephalopathy caused by gain-of-function mutations in the voltage sensor of Kv7.2 and Kv7.3 potassium channel subunits. J. Neurosci. 35:3782–3793. Millichap, J.J., K.L. Park, T. Tsuchida, B. Ben-Zeev, L. Carmant, R. Flamini, N. Joshi, P.M. Levisohn, E. Marsh, S. Nangia, et al. 2016. KCNQ2 encephalopathy: features, mutational hot spots, and ezogabine treatment of 11 patients. Neurol. Genet. 2:e96. Millichap, J.J., F. Miceli, M. De Maria, C. Keator, N. Joshi, B. Tran, M.V. Soldovieri, P. Ambrosino, V. Shashi, M.A. Mikati, et al. 2017. Infantile spasms and encephalopathy without preceding neonatal seizures caused by KCNQ2 R198Q, a gain-of-function variant. Epilepsia. 58:e10–e15. Monks, S.A., D.J. Needleman, and C. Miller. 1999. Helical structure and packing orientation of the S2 segment in the Shaker K+ channel. J. Gen. Physiol. 113:415–423. Niday, Z., and A.V. Tzingounis. 2018. Potassium channel gain of function in epilepsy: an unresolved paradox. Neuroscientist. 24:368–380. Orhan, G., M. Bock, D. Schepers, E.I. Ilina, S.N. Reichel, H. Löffler, N. Jezutkovic, S. Weckhuysen, S. Mandelstam, A. Suls, et al. 2014. Dominant-negative effects of KCNQ2 mutations are associated with epileptic encephalopathy. Ann. Neurol. 75:382–394. Osteen, J.D., C. Gonzalez, K.J. Sampson, V. Iyer, S. Rebolledo, H.P. Larsson, and R.S. Kass. 2010. KCNE1 alters the voltage sensor movements necessary to open the KCNQ1 channel gate. Proc. Natl. Acad. Sci. USA. 107:22710–22715. Osteen, J.D., R. Barro-Soria, S. Robey, K.J. Sampson, R.S. Kass, and H.P. Larsson. 2012. Allosteric gating mechanism underlies the flexible gating of KCNQ1 potassium channels. Proc. Natl. Acad. Sci. USA. 109:7103–7108. Panaghie, G., and G.W. Abbott. 2007. The role of S4 charges in voltage-dependent and voltage-independent KCNQ1 potassium channel complexes. J. Gen. Physiol. 129:121–133. Rauch, A., D. Wieczorek, E. Graf, T. Wieland, S. Endele, T. Schwarzmayr, B. Albrecht, D. Bartholdi, J. Beygo, N. Di Donato, et al. 2012. Range of genetic mutations associated with severe non-syndromic sporadic intellectual disability: an exome sequencing study. Lancet. 380:1674–1682. Riordan, J.F., K.D. McElvany, and C.L. Borders Jr. 1977. Arginyl residues: anion recognition sites in enzymes. Science. 195:884–886. Saitsu, H., M. Kato, A. Koide, T. Goto, T. Fujita, K. Nishiyama, Y. Tsurusaki, H. Doi, N. Miyake, K. Hayasaka, and N. Matsumoto. 2012. Whole exome sequencing identifies KCNQ2 mutations in Ohtahara syndrome. Ann. Neurol. 72:298–300. Seoh, S.A., D. Sigg, D.M. Papazian, and F. Bezanilla. 1996. Voltage-sensing residues in the S2 and S4 segments of the Shaker K+ channel. Neuron. 16:1159–1167. Singh, N.A., C. Charlier, D. Stauffer, B.R. DuPont, R.J. Leach, R. Melis, G.M. Ronen, I. Bjerre, T. Quattlebaum, J.V. Murphy, et al. 1998. A novel potassium channel gene, KCNQ2, is mutated in an inherited epilepsy of newborns. Nat. Genet. 18:25–29. Sun, J., and R. MacKinnon. 2017. Cryo-EM structure of a KCNQ1/CaM complex reveals insights into congenital long QT syndrome. Cell. 169:1042–1050.e9. Wang, H.S., Z. Pan, W. Shi, B.S. Brown, R.S. Wymore, I.S. Cohen, J.E. Dixon, and D. McKinnon. 1998. KCNQ2 and KCNQ3 potassium channel subunits: molecular correlates of the M-channel. Science. 282:1890–1893. Weckhuysen, S., S. Mandelstam, A. Suls, D. Audenaert, T. Deconinck, L.R. Claes, L. Deprez, K.
Smets, D. Hristova, I. Yordanova, et al. 2012. KCNQ2 encephalopathy: emerging phenotype of a neonatal epileptic encephalopathy. Ann. Neurol. 71:15–25. Weckhuysen, S., V. Ivanovic, R. Hendrickx, R. Van Coster, H. Hjalgrim, R.S. Møller, S. Grønborg, A.S. Schoonjans, B. Ceulemans, S.B. Heavin, et al; KCNQ2 Study Group. 2013. Extending the KCNQ2 encephalopathy spectrum: clinical and neuroimaging findings in 17 patients. Neurology. 81:1697–1703. Wu, D., H. Pan, K. Delaloye, and J. Cui. 2010. KCNE1 remodels the voltage sensor of Kv7.1 to modulate channel function. Biophys. J. 99:3599–3608. Yang, N., A.L. George Jr., and R. Horn. 1996. Molecular basis of charge movement in voltage-gated sodium channels. Neuron. 16:113–122. Zaika, O., C.C. Hernandez, M. Bal, G.P. Tolstykh, and M.S. Shapiro. 2008. Determinants within the turret and pore-loop domains of KCNQ3 K+ channels governing functional activity. Biophys. J. 95:5121–5137. Zamyatnin, A.A. 1972. Protein volume in solution. Prog. Biophys. Mol. Biol. 24:107–123. Zemankovics, R., S. Káli, O. Paulsen, T.F. Freund, and N. Hájos. 2010. Differences in subthreshold resonance of hippocampal pyramidal cells and interneurons: the role of h-current and passive membrane characteristics. J. Physiol. 588:2109–2132.
2020-06-04 18:53:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45400527119636536, "perplexity": 9746.828370894782}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347445880.79/warc/CC-MAIN-20200604161214-20200604191214-00587.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2014.19.1335
# American Institute of Mathematical Sciences July 2014, 19(5): 1335-1354. doi: 10.3934/dcdsb.2014.19.1335 ## Latent self-exciting point process model for spatial-temporal networks 1 USC Information Sciences Institute, Marina del Rey, CA 90292, United States 2 USC Information Sciences Institute, United States 3 University of California, Los Angeles, United States 4 University of California, Irvine, United States Received December 2012 Revised April 2013 Published April 2014 We propose a latent self-exciting point process model that describes geographically distributed interactions between pairs of entities. In contrast to most existing approaches that assume fully observable interactions, here we consider a scenario where certain interaction events lack information about participants. Instead, this information needs to be inferred from the available observations. We develop an efficient approximate algorithm based on variational expectation-maximization to infer unknown participants in an event given the location and the time of the event. We validate the model on synthetic as well as real-world data, and obtain very promising results on the identity-inference task. We also use our model to predict the timing and participants of future events, and demonstrate that it compares favorably with baseline approaches. Citation: Yoon-Sik Cho, Aram Galstyan, P. Jeffrey Brantingham, George Tita. Latent self-exciting point process model for spatial-temporal networks. Discrete & Continuous Dynamical Systems - B, 2014, 19 (5) : 1335-1354. doi: 10.3934/dcdsb.2014.19.1335
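To make the "self-exciting" mechanism in the abstract concrete, here is a minimal sketch of the conditional intensity of a univariate Hawkes-type process with exponential decay. This is an illustration only, not the authors' model or code; the function and parameter names (`hawkes_intensity`, `mu`, `alpha`, `beta`) are assumptions chosen for readability.

```c
#include <stdio.h>
#include <math.h>

/* Conditional intensity of a univariate Hawkes (self-exciting) process:
   lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)).
   mu is the background rate; each past event adds an exponentially decaying
   excitation, so events cluster in time. */
double hawkes_intensity(double t, const double *events, int n_events,
                        double mu, double alpha, double beta)
{
    double lambda = mu;
    for (int i = 0; i < n_events; i++) {
        if (events[i] < t)
            lambda += alpha * exp(-beta * (t - events[i]));
    }
    return lambda;
}

int main(void)
{
    double events[] = {0.5, 1.2, 2.0};   /* observed event times */
    double t = 2.5;                      /* query time */
    printf("lambda(%.1f) = %.4f\n", t,
           hawkes_intensity(t, events, 3, 0.2, 0.8, 1.0));
    return 0;
}
```

In the paper's setting, the intensity is additionally modulated by space and by latent participant identities, which is what the variational EM procedure infers.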
2019-10-14 01:19:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3102697730064392, "perplexity": 14597.782895906585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00150.warc.gz"}
https://www.dorogoekuzbass.ru/solving-distance-word-problems-9696.html
# Solving Distance Word Problems

Therefore, in 5 days they ploughed $5 \cdot 4 \cdot x = 20 \cdot x$ hectares, which equals the area of the whole field, 2880 hectares. Hence, each of the four tractors would plough 144 hectares a day. Problem 7: The distance between two towns is 380 km. If he sold 360 kilograms of pears that day, how many kilograms did he sell in the morning and how many in the afternoon? The total must be equal to 360: $x + 2x = 360$, so $x = \frac{360}{3}$, giving $x = 120$. Therefore, the salesman sold 120 kg in the morning and $2 \cdot 120 = 240$ kg in the afternoon. Together the three of them picked 26 kg of chestnuts. So $x + 2x + (x + 2) = 26$, hence $4x = 24$ and $x = 6$. Therefore, Peter, Mary, and Lucy picked 6, 12, and 8 kg, respectively. Then she has $x - \frac{1}{3}x = \frac{2}{3}x$ pages left. $\frac{2}{3}x - \frac{1}{3}x = 90$, so $\frac{1}{3}x = 90$ and $x = 270$. So the book is 270 pages long. The trains are heading toward one another on a track that's 1,300 miles long. Therefore, they must collide when, together, both trains have traveled a total of 1,300 miles. Problem 4: A farming field can be ploughed by 6 tractors in 4 days. When 6 tractors work together, each of them ploughs 120 hectares a day. If these trains are inadvertently placed on the same track and start exactly 1,300 miles apart, how long until they collide? If that problem sounds familiar, it's probably because you watch a lot of television (like me). Solution: Let x be the amount of milk the first cow produced during the first year. Then the second cow produced $(8100 - x)$ litres of milk that year.
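As a worked instance of the distance–rate–time relation for the train problem (the two speeds below are illustrative assumptions; the excerpt does not state them):

$$t = \frac{d}{r_1 + r_2} = \frac{1300\ \text{miles}}{70\ \text{mph} + 60\ \text{mph}} = \frac{1300}{130}\ \text{h} = 10\ \text{hours}.$$

Whatever the actual speeds, the trains close the 1,300-mile gap at the sum of their speeds, so the collision time is simply the track length divided by that combined rate.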
2021-08-04 11:47:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8504385948181152, "perplexity": 5316.9072305553245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00001.warc.gz"}
http://gravitationstudy.blogspot.com/2011/07/
# Gravitation Gravitation keeps the planets in orbits around the Sun. (Not to scale) Gravitation, or gravity, is a natural phenomenon by which physical bodies attract with a force proportional to their masses. In everyday life, gravitation is most familiar as the agent that gives weight to objects with mass and causes them to fall to the ground when dropped. Gravitation causes dispersed matter to coalesce, and coalesced matter to remain intact, thus accounting for the existence of the Earth, the Sun, and most of the macroscopic objects in the universe. Gravitation is responsible for keeping the Earth and the other planets in their orbits around the Sun; for keeping the Moon in its orbit around the Earth; for the formation of tides; for natural convection, by which fluid flow occurs under the influence of a density gradient and gravity; for heating the interiors of forming stars and planets to very high temperatures; and for various other phenomena observed on Earth. Gravitation is one of the four fundamental interactions of nature, along with electromagnetism and the strong and weak nuclear forces. Modern physics describes gravitation using Einstein's general theory of relativity, in which it is a consequence of the curvature of spacetime governing the motion of inertial objects. Newton's simpler law of universal gravitation provides an accurate approximation for most physical situations. ## History of gravitational theory ### Scientific revolution Modern work on gravitational theory began with Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal[1]) experiment dropping balls from the Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitation accelerates all objects at the same rate. This was a major departure from Aristotle's belief that heavier objects accelerate faster.[2] Galileo correctly postulated air resistance as the reason that lighter objects may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. ### Newton's theory of gravitation In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, “I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly.”[3] Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit.
Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much simpler theory to work with than general relativity, and it gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies. ### Equivalence principle The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see if they hit the ground at the same time. These experiments demonstrate that all objects fall at the same rate when friction (including air resistance) is negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments are planned for more accurate tests in space.[4] Formulations of the equivalence principle include: • The weak equivalence principle: The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition.[5] • The Einsteinian equivalence principle: The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime.[6] • The strong equivalence principle, requiring both of the above. The equivalence principle can be used to make physical deductions about the gravitational constant, the geometrical nature of gravity, the possibility of a fifth force, and the validity of concepts such as general relativity and Brans-Dicke theory. ### General relativity In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion, and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground.[7][8] In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being operated on by a force. Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along geodesics in spacetime is considered inertial. Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor.
Notable solutions of the Einstein field equations include: • The Schwarzschild solution, which describes spacetime surrounding a spherically symmetric non-rotating uncharged massive object. For compact enough objects, this solution generates a black hole with a central singularity. For radial distances from the center which are much greater than the Schwarzschild radius, the accelerations predicted by the Schwarzschild solution are practically identical to those predicted by Newton's theory of gravity. • The Reissner-Nordström solution, in which the central object has an electrical charge. For charges with a geometrized length less than the geometrized length of the mass of the object, this solution produces black holes with two event horizons. • The Kerr solution for rotating massive objects. This solution also produces black holes with multiple event horizons. • The Kerr-Newman solution for charged, rotating massive objects. This solution also produces black holes with multiple event horizons. • The cosmological Friedmann–Lemaître–Robertson–Walker solution, which predicts the expansion of the universe. The tests of general relativity included the following:[9] • General relativity accounts for the anomalous perihelion precession of Mercury. • The prediction that time runs slower at lower potentials has been confirmed by the Pound–Rebka experiment, the Hafele–Keating experiment, and the GPS. • The prediction of the deflection of light was first confirmed by Arthur Stanley Eddington from his observations during the Solar eclipse of May 29, 1919.[10][11] Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. However, his interpretation of the results was later disputed.[12] More recent tests using radio interferometric measurements of quasars passing behind the Sun have more accurately and consistently confirmed the deflection of light to the degree predicted by general relativity.[13] See also gravitational lens. • The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals. • Gravitational radiation has been indirectly confirmed through studies of binary pulsars. • Alexander Friedmann in 1922 found that the Einstein equations have non-stationary solutions (even in the presence of the cosmological constant). In 1927 Georges Lemaître showed that static solutions of the Einstein equations, which are possible in the presence of the cosmological constant, are unstable, and therefore the static universe envisioned by Einstein could not exist. Later, in 1931, Einstein himself agreed with the results of Friedmann and Lemaître. Thus general relativity predicted that the Universe had to be non-static—it had to either expand or contract. The expansion of the universe discovered by Edwin Hubble in 1929 confirmed this prediction.[14] • The theory's prediction of frame dragging was consistent with the recent Gravity Probe B results.[15] ### Earth's gravity Every planetary body (including the Earth) is surrounded by its own gravitational field, which exerts an attractive force on all objects. Assuming a spherically symmetrical planet (a reasonable approximation), the strength of this field at any given point is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
The strength of the gravitational field is numerically equal to the acceleration of objects under its influence, and its value at the Earth's surface, denoted g, is approximately expressed below as the standard average. g = 9.81 m/s² = 32.2 ft/s² This means that, ignoring air resistance, an object falling freely near the Earth's surface increases its velocity by 9.81 m/s (32.2 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.81 m/s (32.2 ft/s) after one second, 19.6 m/s (64.4 ft/s) after two seconds, and so on, adding 9.81 m/s (32.2 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time. If an object with mass comparable to that of the Earth were to fall towards it, then the corresponding acceleration of the Earth really would be observable. According to Newton's 3rd Law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object doesn't bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration. ## Anomalies and discrepancies There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways. Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). The discrepancy between the curves is attributed to dark matter. • Extra fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact gravitationally but not electromagnetically, would account for the discrepancy. Various modifications to Newtonian dynamics have also been proposed. • Pioneer anomaly: The two Pioneer spacecraft seem to be slowing down in a way which has yet to be explained.[20] • Flyby anomaly: Various spacecraft have experienced greater accelerations during slingshot maneuvers than expected. • Accelerating expansion: The metric expansion of space seems to be speeding up. Dark energy has been proposed to explain this. A recent alternative explanation is that the geometry of space is not homogeneous (due to clusters of galaxies) and that when the data are reinterpreted to take this into account, the expansion is not speeding up after all;[21] however, this conclusion is disputed.[22] • Anomalous increase of the astronomical unit: Recent measurements indicate that planetary orbits are widening faster than if this were solely through the Sun losing mass by radiating energy. • Extra energetic photons: Photons travelling through galaxy clusters should gain energy and then lose it again on the way out. The accelerating expansion of the universe should stop the photons returning all the energy, but even taking this into account photons from the cosmic microwave background radiation gain twice as much energy as expected.
This may indicate that gravity falls off faster than inverse-squared at certain distance scales.[23] • Dark flow: Surveys of galaxy motions have detected a mystery dark flow towards an unseen mass. Such a large mass is too large to have accumulated since the Big Bang using current models and may indicate that gravity falls off slower than inverse-squared at certain distance scales.[23] • Extra massive hydrogen clouds: The spectral lines of the Lyman-alpha forest suggest that hydrogen clouds are more clumped together at certain scales than expected and, like dark flow, may indicate that gravity falls off slower than inverse-squared at certain distance scales.[23] ### Equations for a falling body near the surface of the Earth Ball falling freely under gravity. See text for description. Under an assumption of constant gravity, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s². The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first 1/20 of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2/20 it has dropped a total of 4 units; by 3/20, 9 units; and so on. Under the same constant-gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression $h = \tfrac{v^2}{2g}$ for the maximum height reached by a vertically projected body with velocity v is useful for small heights and small initial velocities only. ### Gravity and astronomy The discovery and application of Newton's law of gravity accounts for the detailed information we have about the planets in our solar system, the mass of the Sun, the distance to stars, quasars and even the theory of dark matter. Although we have not traveled to all the planets nor to the Sun, we know their masses. These masses are obtained by applying the laws of gravity to the measured characteristics of the orbit. In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit galactic centers, galaxies orbit a center of mass in clusters, and clusters orbit in superclusters. The force of gravity is proportional to the masses of the objects and inversely proportional to the square of the distance between them.
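To make the falling-body relations above concrete, here is a minimal sketch in C (the values are illustrative; it simply evaluates v = g·t and d = ½·g·t², the same relations stated in the text):

```c
#include <stdio.h>

int main(void)
{
    const double g = 9.81;               /* standard gravity, m/s^2 */

    /* Under constant gravity, velocity grows linearly (v = g*t) and the
       distance fallen grows with the square of time (d = g*t*t/2). */
    for (int t = 1; t <= 3; t++) {
        double v = g * t;                /* velocity after t seconds, m/s */
        double d = 0.5 * g * t * t;      /* distance fallen after t seconds, m */
        printf("t = %d s: v = %5.2f m/s, d = %5.2f m\n", t, v, d);
    }
    return 0;
}
```

The distances printed for t = 1, 2, 3 s stand in the ratio 1:4:9, which is exactly the unit pattern described for the stroboscope frames above.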
2017-09-25 00:55:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6988698840141296, "perplexity": 470.6863495726435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690268.19/warc/CC-MAIN-20170925002843-20170925022843-00145.warc.gz"}
https://www.electro-tech-online.com/threads/warning-373-implicit-signed-to-unsigned-conversion.151684/
# warning: (373) implicit signed to unsigned conversion ??? Status Not open for further replies. #### Pommie ##### Well-Known Member Although they are only warnings, I like to go through my code and get rid of them wherever possible. I have the above warning in my code and simply don't understand it. Here is the code concerned, Code: void PutNibble(unsigned char nib){ nib+=0x30; if(nib>'9') nib+=7; PutTXFifo(nib); } void PutHex(unsigned char hex){ PutNibble(hex>>4); //Warning here PutNibble(hex&0x0f); //and here } The warning occurs on the calls to PutNibble. Everything is declared unsigned. So where is the conversion taking place? Mike. #### be80be ##### Well-Known Member I get them by the bucket load, and what gets me is I'm just trying to learn MPLAB X with XC8 and it's their own code samples. I did one two days ago; it said the chip can't do the math because there is not enough room in one bank, and it had 50 "warning: (373) implicit signed to unsigned conversion" messages with the real problem buried in the middle. I asked about this on the Microchip forum and, digging around, someone said it's bull, they just want you to pay for XC8. All of these are in the XC8 files; I'm just sending 0 to 5 out the USART Code: /opt/microchip/xc8/v1.42/sources/common/doprnt.c:538: warning: (373) implicit signed to unsigned conversion /opt/microchip/xc8/v1.42/sources/common/doprnt.c:541: warning: (373) implicit signed to unsigned conversion /opt/microchip/xc8/v1.42/sources/common/doprnt.c:1259: warning: (373) implicit signed to unsigned conversion /opt/microchip/xc8/v1.42/sources/common/doprnt.c:1305: warning: (373) implicit signed to unsigned conversion /opt/microchip/xc8/v1.42/sources/common/doprnt.c:1306: warning: (373) implicit signed to unsigned conversion /opt/microchip/xc8/v1.42/sources/common/doprnt.c:1489: warning: (373) implicit signed to unsigned conversion /opt/microchip/xc8/v1.42/sources/common/doprnt.c:1524: warning: (373) implicit signed to unsigned conversion Last edited: #### be80be ##### Well-Known Member This is the first one Code: #ifdef ANYFORMAT if(c != '%') #endif //ANYFORMAT The only one I think my code is using is the above, but they're both unsigned. I was told I could pay by the month for XC8, or buy it, and all that gunk goes away. I'm not too good at C, but I'm getting better. Then I got into some boolean code, oh my God, it messed up my whole ball of wax. It took me forever to figure that one out, and it was because I had to set a path to use some files that XC8 couldn't find, and guess what, that changed everything when I started a new project. The code I did it for worked fine, but I couldn't even get a blink to build on a new chip; xc.h would no longer show up no matter how I added it. I found the path reset button and fixed that. Maybe now that MPLAB X can use AVR, Microchip will dump MPLAB and just use Atmel Studio; it works great. Oh, this is the code that I was using that made the warnings Code: void main(void) { init_uart(); int i; for (i = 0; i < 5; i++) { printf("Loop iteration %d\n", i); } } Last edited: #### Ian Rogers ##### User Extraordinaire Forum Supporter Mike! I think the 4 is casting 'hex' to a signed value as constants are signed ( I thought there was a way to treat them as unsigned somewhere ) Just for clarity try casting back.. :- PutNibble((unsigned char)(hex>>4)); #### DerStrom8 ##### Super Moderator Mike!
I think the 4 is casting 'hex' to a signed value as constants are signed ( I thought there was a way to treat them as unsigned somewhere ) Just for clarity try casting back.. :- PutNibble((unsigned char)(hex>>4)); I haven't used XC8 in ages, but this was my first instinct as well. #### Pommie ##### Well-Known Member I thought that would be it, but no, I've tried both of these and the warning remains, Code: PutNibble((unsigned char)hex>>4); PutNibble(hex>>(unsigned char)4); I'm at a loss. Edit: I've even tried casting both with Code: PutNibble((uint8)hex>>(uint8)4); I finally fixed it with Code: PutNibble((uint8)(hex>>4)); Why the first didn't work I have no idea!! (uint8 = unsigned char) Mike. #### Pommie ##### Well-Known Member Carrying on the theme of undesirable warnings, I have the following code, Code: uint8 OWreadbyte(){ uint8 i,dat; for(i=0;i<8;i++){ dat>>=1; dat+=128; } return(dat); } and I get the warning, warning: (1257) local variable "_dat" is used but never given a value I know I can just add =0 to the declaration, but is there a way to get rid of this warning without generating additional code? I know I'm being picky, but someone might know the answer. Mike. #### Mike - K8LH ##### Well-Known Member I just tried those two functions in an XC8 v1.35 program in MPLAB 8.92 and it compiled without any warning messages. ??? #### Pommie ##### Well-Known Member Hi Mike, What did you try? I'm using MPLAB X and XC8 v1.42. Mike. #### Ian Rogers ##### User Extraordinaire Forum Supporter and I get the warning, warning: (1257) local variable "_dat" is used but never given a value I know I can just add =0 to the declaration, but is there a way to get rid of this warning without generating additional code? I know I'm being picky, but someone might know the answer. I don't know about picky, but using uninitialized data is normally a no-no.. I know on this occasion the byte is completely written.. The compiler doesn't know that, though.. #### Pommie ##### Well-Known Member I agree, using uninitialized data is not good. However, in this case I managed to find a way around it. Code: uint8 OWreadbyte(){ uint8 i,dat; for(i=0;i<8;i++){ dat=dat>>1; dat+=128; } return(dat); } Even though the code is actually the same, the compiler is happy because dat is given a value - even though it is itself. Mike. #### Mike - K8LH ##### Well-Known Member Hi Mike, What did you try? I'm using MPLAB X and XC8 v1.42. Mike. I tried the PutNibble() and PutHex() functions (MPLAB 8.92, XC8 1.35)... No warning messages... #### be80be ##### Well-Known Member Gee, you all are good, but I told you: buy 1.42 and see what happens, the errors go bye-bye, and half of them aren't even your code, just XC8 gunk it's not using. And Mike has a point too. I went back to 1.31, and code from their website compiles with no problem. I installed the new version (yes, it's 1.43 with MPLAB 4.00) and the same code went all to hell, and the only real answer I got was "is it free or did you buy it?" I started to give them my debit card, but I didn't. #### Pommie ##### Well-Known Member Just switched back to 1.41 and all the silly warnings have gone. Plus, I now have only 1 warning, warning: (343) implicit return at end of non-void function, which is an actual error in my code but was lost among all the nonsense warnings. And, weirdly, my code went from 5227 bytes to 5225 - a saving of 2 bytes - wonder what changed between the versions. Mike. #### Peteey ##### New Member "(XC8E-109) The warning level for warning 373 (implicit signed to unsigned conversion) has been raised from -4 to -3."
Check your project properties to see what warning level you have set for the project. I assume the default is -3, with -9 being the least important level. I don't expect such a warning. It seems frivolous. There's no reason to interpret constant positive numbers as signed or unsigned. gcc does not give a warning for the same code with the -Wall and -Wextra flags set. The XC32 compiler is based off gcc. Status Not open for further replies.
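A side note on why the full-expression cast worked for Pommie where the operand casts did not: this is standard C behavior, not XC8-specific. The operands of >> undergo integer promotion to (signed) int, so hex >> 4 has type int even when hex is unsigned char, and a cast applied only to hex, as in (uint8)hex >> 4, is immediately undone by that promotion. Only casting the whole shift expression, (uint8)(hex >> 4), yields an unsigned char value at the call. A minimal standalone sketch (the show() helper below is a made-up stand-in for PutNibble(), not part of XC8): Code: 

#include <stdio.h>

typedef unsigned char uint8;

/* Made-up stand-in for PutNibble(); just prints the value it receives. */
static void show(uint8 nib) { printf("%u\n", (unsigned)nib); }

int main(void)
{
    uint8 hex = 0xAB;

    /* (uint8)hex >> 4  : the cast is undone by integer promotion, so the
       shift result is still a (signed) int and the conversion warning stands.
       (uint8)(hex >> 4): converts the final int result to unsigned char,
       so no implicit signed-to-unsigned conversion happens at the call. */
    show((uint8)(hex >> 4));    /* prints 10 (0xA) */
    show((uint8)(hex & 0x0f));  /* prints 11 (0xB); the mask result is also an int */
    return 0;
}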
2021-05-15 11:51:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22589176893234253, "perplexity": 4270.812840955214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00080.warc.gz"}
https://igniteacademia.com/archive/3ae6c5-highest-oxidation-state
When oxygen is part of a peroxide, its oxidation number is -1. The highest known oxidation state is +9, in the tetroxoiridium(IX) cation. A substance which donates electrons is said to be oxidized. The carbon atom in carbon dioxide has the highest oxidation state allowed for carbon. (i) Mn shows the highest oxidation state of +7 with oxygen because it can form pπ–dπ multiple bonds using the 2p orbitals of oxygen and the 3d orbitals of Mn. The reason why manganese has the highest oxidation state is that it has the greatest number of unpaired electrons in its outermost shells, that is, 3d⁵4s². On the other hand, Mn shows the highest oxidation state of only +4 with fluorine, because fluorine can form only a single bond. (ii) Transition metals show variable oxidation states due to the participation of both ns and (n-1)d electrons in bonding. The algebraic sum of the oxidation states in an ion is equal to the charge on the ion. 4) The pair of compounds having metals in their highest oxidation state is: (IIT JEE 2004) a) MnO₂, FeCl₃ b) [MnO₄]⁻, CrO₂Cl₂ c) [Fe(CN)₆]³⁻, … Sometimes the oxidation state is a fraction. Oxidation state and oxidation number are terms frequently used interchangeably. Oxygen is a strong oxidising agent due to its high electronegativity and small size. The oxidation state of an atom (sometimes referred to as the oxidation number) in a chemical compound provides insight into the number of electrons lost by it and, therefore, describes the extent of oxidation of the atom. The oxidation state works as an indicator, and it can be negative, positive, or zero. This is due to the fact that for bonding, in addition to ns electrons, these elements can use inner (n-1)d electrons as well, because of the very small difference in their energies.
Each atom in an element, either in its free or uncombined state, holds an oxidation number of zero (example: Na). The oxidation state of an uncombined element is zero. Rules for oxidation numbers: 1. Highest oxidation state is +6: chromium has an oxidation state of +6 in the compound K₂CrO₄ (potassium chromate), for example. The stability of this highest oxidation state decreases from titanium in the +4 state … When chlorine, bromine, and iodine combine with the small and highly electronegative atoms of fluorine and oxygen, their higher oxidation states are realized. Here, plutonium varies in color with oxidation state. The electronic configuration of manganese is [Ar] 3d⁵ 4s². The value of the oxidation state usually constitutes integers. Assigning oxidation numbers to organic compounds: the oxidation state of any chemically bonded carbon may be assigned by adding -1 for each more electropositive atom (H, Na, Ca, B), +1 for each more electronegative atom (O, Cl, N, P), and 0 for each carbon atom bonded directly to the carbon of interest (a small counting sketch is given below). The most common oxidation states of gold are +1 (gold(I)) and +3 (gold(III)); the less common oxidation states of gold include −1, +2, and +5. In almost all cases, oxygen atoms have oxidation numbers of -2, but there are two exceptions here: in peroxides every oxygen atom is allocated an oxidation number of -1, and in superoxides every oxygen atom is allocated an oxidation number of -(1/2). For example, the oxidation state of carbon in CO₂ would be +4, since the hypothetical charge held by the carbon atom if both of the carbon-oxygen double bonds were completely ionic would be equal to +4 (each oxygen atom would hold a charge of -2, since oxygen is more electronegative than carbon). The oxidation number or state is defined as the charge present on an atom or ion. The oxidation number of an atom is not regarded as the real charge of the atom. Consider (CO₃): assign an oxidation number of -2 to each oxygen (with exceptions); the net oxidation for this part of the molecule or the compound is then negative 2, which nets out with the positive 2 from magnesium. Therefore, the elucidation diversified to include other reactions as well, where electrons are lost irrespective of the involvement of an oxygen atom. Oxidation is nothing but an atom losing electrons in a chemical compound. In most of the compounds, the oxidation number of oxygen is –2. The sum of the oxidation states of all the atoms or ions in a neutral compound is zero. If oxygen has a negative 2 oxidation state, hydrogen has a positive 1 oxidation state. Manganese is the 3d-series transition element that shows the highest oxidation state. The highest known oxidation state is reported to be +9, in the tetroxoiridium(IX) cation (IrO₄⁺).
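As a small illustration of the counting rule above (my own sketch, not part of the original page; the atom lists mirror the examples named in the rule, and the function name is made up):

```python
# Assign an oxidation state to a carbon atom from its bonded neighbours,
# following the counting rule above: -1 per bond to a more electropositive
# atom, +1 per bond to a more electronegative atom, 0 per carbon-carbon bond.
MORE_ELECTROPOSITIVE = {"H", "Na", "Ca", "B"}
MORE_ELECTRONEGATIVE = {"O", "Cl", "N", "P"}

def carbon_oxidation_state(bonded_atoms):
    """bonded_atoms lists one entry per bond (a double bond appears twice)."""
    state = 0
    for atom in bonded_atoms:
        if atom in MORE_ELECTROPOSITIVE:
            state -= 1
        elif atom in MORE_ELECTRONEGATIVE:
            state += 1
        # bonds to another carbon contribute 0
    return state

print(carbon_oxidation_state(["H", "H", "H", "H"]))   # CH4 -> -4
print(carbon_oxidation_state(["O", "O", "O", "O"]))   # CO2 (two double bonds) -> +4
```

Running it reproduces the two limits quoted in this page: -4 for carbon in methane and +4 for carbon in carbon dioxide.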
The lowest known oxidation state is −4, for carbon in CH₄ (methane). Oxidation state works as an indicator, and it can be negative, positive, or zero. In a chemical reaction, if there is an increase in oxidation state it is known as oxidation, whereas if there is a decrease in oxidation state it is known as reduction; if the oxidation state increases the substance is oxidised, and if the oxidation state decreases the substance is reduced. 1. What is the highest oxidation state for each of the elements from Sc to Zn? Highest oxidation state for a transition metal = number of unpaired d-electrons + two s-orbital electrons; the number of d-electrons ranges from 1 (in Sc) to 10 (in Cu and Zn). In the case of transition metals there are five orbitals in the d subshell, and when the number of unpaired valence electrons increases, the highest oxidation state increases. This is due to the fact that for bonding, in addition to the ns electrons, these elements can use the inner (n-1)d electrons as well, because of the very small difference in their energies. Variable oxidation states of d-block elements: a characteristic property of d-block elements is their ability to exhibit a variety of oxidation states in their compounds. If we consider all the transition metals, the highest oxidation state is eight, and the elements which show the +8 oxidation state are ruthenium (Ru) and osmium (Os); the highest oxidation state that is known to occur in a metallic ion is +8. The highest known oxidation state is +8 in the tetroxides of ruthenium, xenon, osmium, iridium, hassium, and some complexes involving plutonium; the lowest known oxidation state is −4 for some elements in the carbon group. To the best of my knowledge, there are no pieces of … So far, the highest oxidation state has been found for iridium ($\mathrm{+IX}$). The oxidation state of an atom can be defined as the hypothetical charge that would be held by that atom if all of its bonds to other atoms were completely ionic in nature. Most elements have more than one possible oxidation state; for example, carbon has nine possible integer oxidation states, from −4 to +4. The oxidation number of an atom is zero in a neutral substance that contains atoms of only one element: when oxygen is in its elemental state (O₂), its oxidation number is 0, as is the case for all elemental atoms, and clearly each atom in H₂ has an oxidation number of zero as well. Peroxides: every oxygen atom is allocated an oxidation number of –1. Superoxides: every oxygen atom is allocated an oxidation number of –(1/2); example, KO₂. The oxidation number of an ion which comprises only one atom is equal to the actual charge on the ion. In the case of a polyatomic ion, when the oxidation numbers of the atoms of the ion are added together, the algebraic sum must be equal to the charge on the ion; when the oxidation numbers of the atoms of a neutral compound are added together, the algebraic sum must be equal to zero, for example in CaH₂, where hydrogen is −1 (a worked check of these sum rules follows below). Each atom of the molecule will have a distinct oxidation state for that molecule, where the sum of all the oxidation states will equal the overall electrical charge of the molecule or ion; each hydroxide part of such a molecule is going to have a net oxidation state of negative 1. For organic carbon, oxidation state is equal to the number of valence electrons that carbon is supposed to have, minus the number of valence electrons around carbon in our drawings, so let's count them up after we've accounted for electronegativity. Rules to determine oxidation states: there are six rules. Out of the 37 isotopes of iodine, it has only one stable isotope, which is ¹²⁷I; iodine exhibits the oxidation states 7, 6, 5, 4, 3, 1, −1, out of which 7, 5, 1, −1 are the stable ones. Elements such as chlorine, bromine, and iodine also show the +1, +3, +5, and +7 states. Group 1 elements show the +1 oxidation state and group 2 elements show the +2 oxidation state; do s-block elements have variable oxidation states? When an oxidation number is assigned to an element, it does not imply that the element in the compound acquires this as a charge, but rather that it is a number to use for balancing chemical reactions; oxidation numbers are quantities which describe the number of electrons lost by an atom. Every time you oxidise the vanadium by removing another electron from it, its oxidation state increases by 1; the oxidation state of the vanadium is now +5. Fairly obviously, if you start adding electrons again the oxidation state will fall, and you could eventually get back to the element vanadium, which would have an oxidation state of zero. Oxidation increases oxidation state and reduction decreases oxidation state. Conclusion: correct option is 'a'. Because there are 4 … for example CrO₄²⁻ … Antoine Lavoisier was the first to use the term oxidation to denote the reaction between a substance and oxygen; later, it was noticed that when the substance is oxidized it loses electrons.
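As a quick worked check of the sum rules just stated (my own example, added here, using the options of the IIT JEE 2004 question quoted earlier): in [MnO₄]⁻ the oxidation states must add up to the ion charge, so x + 4(−2) = −1 gives x = +7 for Mn; in the neutral molecule CrO₂Cl₂, x + 2(−2) + 2(−1) = 0 gives x = +6 for Cr. Both metals are therefore in their highest oxidation states, consistent with option b) of that question.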
The current IUPAC Gold Book definition of oxidation state is: "Oxidation state of an atom is the charge of this atom after …" Among transition metals, the highest oxidation state is exhibited in oxoanions of a metal. Platinum($\mathrm{X}$) has been predicted: [1] it is predicted that even a +10 oxidation state may be achievable by platinum in the tetroxoplatinum(X) cation (PtO₄²⁺). All elements of the halogen family exhibit the -1 oxidation state. Among the s-block elements, only hydrogen shows variable oxidation numbers: hydrogen's oxidation number is +1, except when it is bonded to a metal in a two-element compound (a metal hydride), where it is -1. Explanation: the oxidation number for oxygen is assigned a charge of -2 when it reacts with a metal. This work implies that the highest physical OS of the solid Pu ion is Pu(V), in PuO₂F and PuOF₄, which can be achieved via tuning the ligand, thus improving our knowledge of oxidation states and chemical bonding in high-OS solid-state compounds. Oxidation states show how oxidised or reduced an element is within a compound or ion. In the periodic table, the blue-boxed area of the original figure is the d block, also known as the transition metals.
2021-04-12 22:07:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5349301695823669, "perplexity": 2182.3509957035867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069267.22/warc/CC-MAIN-20210412210312-20210413000312-00282.warc.gz"}
https://www.studyadda.com/notes/kvpy/physics/mathematical-tools-units-dimensions/fundamental-and-derived-quantities/7131
# 11th Class Physics: Physical World - Fundamental and Derived Quantities

Category : 11th Class

(1) Fundamental quantities : Out of the large number of physical quantities which exist in nature, there are only a few quantities which are independent of all other quantities and do not require the help of any other physical quantity for their definition; therefore these are called absolute quantities. These quantities are also called fundamental or basic quantities, as all other quantities are based upon and can be expressed in terms of these quantities.

(2) Derived quantities : All other physical quantities can be derived by suitable multiplication or division of different powers of fundamental quantities. These are therefore called derived quantities. If length is defined as a fundamental quantity, then area and volume are derived from length and are expressed in terms of length with powers 2 and 3.

Note : In mechanics, Length, Mass and Time are arbitrarily chosen as fundamental quantities. However, this set of fundamental quantities is not a unique choice. In fact, any three quantities in mechanics can be termed as fundamental, as all other quantities in mechanics can be expressed in terms of these. E.g. if speed and time are taken as fundamental quantities, length will become a derived quantity, because then length will be expressed as Speed × Time; and if force and acceleration are taken as fundamental quantities, then mass will be defined as Force / Acceleration and will be termed as a derived quantity.
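A tiny sketch of this idea (my own illustration, not part of the original notes; the dictionary representation is just one convenient encoding): a quantity's dimensions can be stored as exponents over the chosen fundamental set (L, M, T), and derived quantities are then products of powers.

```python
# Represent a quantity's dimensions as exponents over the base (L, M, T).
# Derived quantities are products of powers of the fundamental ones.
def combine(a, b, power=1):
    """Multiply dimensions a by b**power, e.g. speed = L * T**-1."""
    return {k: a.get(k, 0) + power * b.get(k, 0) for k in set(a) | set(b)}

L = {"L": 1}; M = {"M": 1}; T = {"T": 1}

speed = combine(L, T, -1)              # L T^-1
acceleration = combine(speed, T, -1)   # L T^-2
force = combine(M, acceleration)       # M L T^-2

print(force)                           # {'M': 1, 'L': 1, 'T': -2}
# Taking force and acceleration as fundamental instead:
# mass = force / acceleration, recovering the dimensions of M.
print(combine(force, acceleration, -1))
```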
2022-01-26 23:08:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183956384658813, "perplexity": 483.1111814321592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00392.warc.gz"}
https://forum.wilmott.com/viewtopic.php?f=4&t=69647&p=426587
BLOBY Topic Author Posts: 113 Joined: May 17th, 2004, 5:07 am

### Timer options

Thanks for the explanations. EndOfTheWorld, do you have an idea about how conditional varswaps can be hedged / replicated?

tw Posts: 1176 Joined: May 10th, 2002, 3:30 pm

### Timer options

I was just thinking by comparison with a fund that was taking a long position using the Kelly / constant-proportion-of-wealth-invested-in-the-stock method. That pays out something related to the mean return divided by the variance (i.e. short vol). If, instead of doing that, they buy one of these timer calls, then they have a similar situation: they hope for strong mean returns on low vol - too much vol on the upside and the strategy is over before it's begun. If they buy the put as insurance, and it gaps down, then it's pretty useless.

Quote: Originally posted by daveangel: I dont know if its short vol or you just have a fixed amount of variance to play with.

EndOfTheWorld Posts: 110 Joined: September 30th, 2008, 8:35 am

### Timer options

Theoretical hedging of the conditional varswap:
- the "corridor" varswap (when you reduce the range) is hedged like a variance swap - a strip of weighted calls/puts, but with strikes in the range
- for the "proba to be in the range", you replicate with digital options struck at the barrier for every day
In practice this is more tricky: people already hedge their varswaps with 2 or 3 options, so you can imagine what's going to happen for the conditional...

plaser Posts: 39 Joined: August 18th, 2008, 3:43 pm

### Timer options

The biggest risk of this product is daily jump risk, because var is observed close to close: if a stock price halves intraday or gets taken out at double the price, that will lead to a massive loss for the timer seller. The question is how you hedge and price this risk.

BLOBY Topic Author Posts: 113 Joined: May 17th, 2004, 5:07 am

### Timer options

EndOfTheWorld, regarding cond varswaps, why do you trade digital options and not vanilla options struck at the barrier?

tw Posts: 1176 Joined: May 10th, 2002, 3:30 pm

### Timer options

Why would that risk be greater than with a vanilla option?

Quote: Originally posted by plaser: Biggest risk of this product is daily jump risk because var is observed close to close; if a stock price halves intraday or gets taken out at double the price, that will lead to a massive loss for the timer seller. Question is how do you hedge and price this risk.

EndOfTheWorld Posts: 110 Joined: September 30th, 2008, 8:35 am

### Timer options

You can break down the payoff expectation of the conditional variance swap into:
1- the expectation of the corridor realised variance (which is the strike of the corridor variance swap)
2- the expectation of the strike times the proportion of prices in the range: E{ Strike^2 * ( Sum [ Indicator Function ( S(t) in the range ) ] / Number of returns ) } (note: don't know how to write fancy formulas…)
You basically want to count the number of prices in the range every day - your payoff is 0 or 1 => digital options. However, you can replicate your digital options with a vanilla call spread (epsilon -> 0). Finally, you obtain the expected proportion of prices in the range (which is between 0 and 1): when you divide the corridor var strike (in var space) by this quantity, that gives you the strike of the cond. var, which is then always greater than the corridor var.
rmeenaks Posts: 186 Joined: May 1st, 2006, 2:31 pm

### Timer options

EndOfTheWorld, do you have the "Structured Flow Handbook: A Guide to Volatility Investing" that is mentioned in the Timer Options PDF? Thanks, Ram

EndOfTheWorld Posts: 110 Joined: September 30th, 2008, 8:35 am

### Timer options

No, sorry. But I'd like to have it

plaser Posts: 39 Joined: August 18th, 2008, 3:43 pm

### Timer options

Not necessarily bigger than plain vanilla; all I'm saying is that gamma risk is the biggest risk of this product and what makes most of the sell side reluctant to sell this thing at a reasonable price. If no jumps, the price is more or less BlackScholes(var target, S). Also the holder of this option is short a put option on var, since this thing has a finite maturity and if realized vol is 1, you don't realize your var target before the fixed maturity. Lastly there is interest rate and dividend risk due to the uncertain maturity.

mixmasterdeik Posts: 49 Joined: December 17th, 2009, 7:26 pm

### Timer options

Maybe try the Carr & Lee paper "Hedging Variance Options on Continuous Semimartingales", page 6. Cheers

probably Posts: 175 Joined: May 18th, 2004, 10:46 pm

### Timer options

To answer the original question of how to price & hedge: to clarify, the product pays $(S(\tau) - K)^+$ at the stopping time $\tau$, defined as $\tau := \inf\{ t : \mathrm{RVar}(t) \ge \sigma^2 \}$, where RVar(t) is classic realized daily variance and $\sigma^2$ is a target variance. Assumption: dS(t)/S(t) = q(t) dW(t) (ie, no jumps). Let X(t) := \int_0^t q(s) dW(s), define A(t) := \int_0^t q(s)^2 ds, and let T(s) := \inf\{ u : A(u) = s \} be the inverse time-change. Then B(s) = X(T(s)) is a Brownian motion with the property that X(t) = B(A(t)) (all this is known as "Dambis-Dubins-Schwarz", cf. Revuz-Yor). We then find that S(\tau) = S(0) exp{ B(\sigma^2) - 1/2 \sigma^2 } by definition. In other words, the price of this option under the approximation RVar == quadratic variation and no interest rates is given as the BS price (*) BS( S(t), \sigma^2 - RVar(t), K ), where BS(s, var, k) denotes the price of a call in BS with variance var. More interestingly, if you use (*) and write down the PnL, then you will see that around the break-even vol of \sigma, the option actually has no gamma. In other words, it's very easy to hedge with equity. Last edited by probably on January 8th, 2010, 11:00 pm, edited 1 time in total.

mixmasterdeik Posts: 49 Joined: December 17th, 2009, 7:26 pm

### Timer options

Hi probably, totally agree with that!! For that specific type of option payoff it also has no vega and theta. So it has only delta. Should be the least exotic of the exotic trades. However, sometimes it's traded with a time cap T. So the payoff is payoff(tau) = max(S(tau) - K, 0) if RVar(tau) > sigma_set^2 for some tau < T; otherwise it pays payoff(T) = max(S(T) - K, 0) at the maturity date T. Has someone taken a look at this other payoff possibility? I have only looked at MC simulation to price it... Cheers,

probably Posts: 175 Joined: May 18th, 2004, 10:46 pm

### Timer options

It's very pretty, yes. The "zero gamma" relationship may break down though if realized vol is far off implied, if I remember correctly. Not sure about the cap, though. We actually did not trade one. I assume you'll get a standard option towards the end.

mixmasterdeik Posts: 49 Joined: December 17th, 2009, 7:26 pm

### Timer options

Hi probably, another question I have is the following: in the Carr/Lee paper the underlying seems to be a martingale.
In case there's a drift, how can we set up the BS equation to match the MC price? With no drift, BS(S(t), K, Q - RVar) = MC price, where Q is the budget variance. When we have a drift, should it be: BS( F(t,tau), K, Q - RVar ) as an approximation for the MC price, where F(t,tau) is the initial forward value at t for tau, which is the expected stopping time? Cheers,
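A small numerical sketch of the BS-with-variance-budget identity discussed above (my own code, not from the thread; zero rates, no jumps, and the mean-reverting vol process with all its parameter values are made-up assumptions):

```python
import numpy as np
from scipy.stats import norm

# Timer call: pays max(S - K, 0) on the first day realized variance
# reaches the budget Q. With zero rates and no jumps, the claim above is
# price ~= BS(S0, K, total variance Q), independent of the vol dynamics.
rng = np.random.default_rng(0)
S0, K, Q, dt = 100.0, 100.0, 0.04, 1.0 / 252   # Q = (20% vol)^2 * 1y

def one_path():
    s, rv, v = S0, 0.0, 0.04
    while rv < Q:
        v = max(1e-6, v + 2.0 * (0.04 - v) * dt
                + 0.3 * np.sqrt(v * dt) * rng.standard_normal())
        r = np.sqrt(v * dt) * rng.standard_normal() - 0.5 * v * dt
        s *= np.exp(r)
        rv += r * r                    # accumulate squared daily log-return
    return max(s - K, 0.0)

mc = np.mean([one_path() for _ in range(20_000)])

d1 = (np.log(S0 / K) + 0.5 * Q) / np.sqrt(Q)
bs = S0 * norm.cdf(d1) - K * norm.cdf(d1 - np.sqrt(Q))
print(f"MC {mc:.3f} vs BS with variance Q: {bs:.3f}")  # should be close
```

The small residual gap comes from discrete daily sampling: realized variance overshoots the budget Q on the stopping day, which continuous-time arguments ignore.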
2020-09-22 18:18:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8134028315544128, "perplexity": 5228.821059441817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206329.28/warc/CC-MAIN-20200922161302-20200922191302-00627.warc.gz"}
http://math.stackexchange.com/questions/237273/isomorphism-of-the-annihilator-of-a-subgroup-in-the-context-of-group-characters
Isomorphism of the annihilator of a subgroup in the context of group characters. I am trying to learn about characters of finite abelian groups. A character is a homomorphism from a finite abelian group $G$ into the multiplicative group of complex numbers of absolute value 1. In my textbook (Finite Fields by Lidl and Niederreiter) there is the following question to which I am stuck: Let $H$ be a subgroup of the finite abelian group $G$. Prove that the annihilator $A$ of H in $\widehat{G}$ (where $\widehat{G}$ is the group of characters of $G$) is isomorphic to $G/H$ and that $\widehat{G}/A$ is isomorphic to $H$. This looks like something where the 1st isomorphism theorem for groups could be used, but I don't see how. Any ideas would be greatly appreciated. - First question Let ${\text{Ann}_{G}}(H)$ denote the annihilator of $H$ in $G$, i.e., $${\text{Ann}_{G}}(H) = \left\{ \phi \in \widehat{G} ~ \middle| ~ \forall h \in H: ~ \phi(h) = 1_{\mathbb{C}} \right\}.$$ Then ${\text{Ann}_{G}}(H) \cong \widehat{G / H}$. In order to prove this, let $q: G \to G / H$ denote the obvious quotient group homomorphism, and define a group homomorphism $\Phi: \widehat{G / H} \to {\text{Ann}_{G}}(H)$ by $$\forall \phi \in \widehat{G / H}: \quad \Phi(\phi) = \phi \circ q.$$ $\Phi$ is injective: Suppose that $\phi \in \widehat{G / H}$ and $\phi \circ q = \mathbf{1}_{G}$. Then $\phi(g + H) = 1_{\mathbb{C}}$ for all $g \in G$, so $\phi = \mathbf{1}_{G / H}$. $\Phi$ is surjective: Let $\phi \in {\text{Ann}_{G}}(H)$. We can define a map $\dot{\phi}: G / H \to \mathbb{C}$ by $$\forall g \in G: \quad \dot{\phi}(g + H) \stackrel{\text{df}}{=} \phi(g).$$ This is clearly a well-defined map and is a character on $G / H$. As $\phi = \dot{\phi} \circ q$, we are done. Note: Up to this point, all of our arguments are valid for an arbitrary locally compact Hausdorff abelian group $G$ with $H$ a closed subgroup and all maps involved continuous. We now turn to the special case when $G$ is finite and abelian with the discrete topology. Question. How should we use the assumption that $G$ is finite and abelian? It turns out that any finite and abelian group is isomorphic to its own dual. By assumption, $G$ is finite and abelian, so $G / H$ is also finite and abelian. Hence, $G / H \cong \widehat{G / H}$, which yields $${\text{Ann}_{G}}(H) \cong G / H$$ as desired. Note, however, that the isomorphism between $G / H$ and $\widehat{G / H}$ is not natural. Second question Let us first show that $\widehat{G} / {\text{Ann}_{G}}(H) \cong \widehat{H}$. By the answer to the first question, we have $$(\spadesuit) \qquad \left( \widehat{G} / {\text{Ann}_{G}}(H) \right)^{\land} \cong {\text{Ann}_{\widehat{G}}}({\text{Ann}_{G}}(H)).$$ Claim: ${\text{Ann}_{\widehat{G}}}({\text{Ann}_{G}}(H)) \cong H$. Proof of Claim Observe that \begin{align} {\text{Ann}_{\widehat{G}}}({\text{Ann}_{G}}(H)) & = \left\{ \Psi \in \widehat{\widehat{G}} ~ \middle| ~ \forall \phi \in {\text{Ann}_{G}}(H): ~ \Psi(\phi) = 1_{\mathbb{C}} \right\} \\ & \cong \{ g \in G \mid \forall \phi \in {\text{Ann}_{G}}(H): ~ \phi(g) = 1_{\mathbb{C}} \} \quad (\text{By Pontryagin Duality.}) \\ & \supseteq H. \quad (\text{By the definition of ${\text{Ann}_{G}}(H)$.}) \end{align} It then remains to prove that we actually have equality in the last line. As ${\text{Ann}_{G}}(H) \cong \widehat{G / H}$, we get $\left( {\text{Ann}_{G}}(H) \right)^{\land} \cong G / H$ by Pontryagin Duality. 
The isomorphism is explicitly implemented by the map $\Theta: G / H \to \left( {\text{Ann}_{G}}(H) \right)^{\land}$ defined by $$\forall g \in G, ~ \forall \phi \in {\text{Ann}_{G}}(H): \quad [\Theta(g + H)](\phi) \stackrel{\text{df}}{=} \phi(g).$$ If $g \notin H$, then $g + H \neq e_{G / H}$, so there exists a $\phi \in {\text{Ann}_{G}}(H)$ such that $$\phi(g) = [\Theta(g + H)](\phi) \neq 1_{\mathbb{C}}.$$ This readily implies that $$\{ g \in G \mid \forall \phi \in {\text{Ann}_{G}}(H): ~ \phi(g) = 1_{\mathbb{C}} \} = H$$ as desired. $\quad \blacksquare$ It now follows from $(\spadesuit)$ that $\left( \widehat{G} / {\text{Ann}_{G}}(H) \right)^{\land} \cong H$. By Pontryagin Duality yet again, $$\widehat{G} / {\text{Ann}_{G}}(H) \cong \widehat{H}.$$ Finally, as $H$ is finite and abelian, we obtain $\widehat{H} \cong H$, which concludes the argument. -
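As a concrete sanity check (my own example, not part of the original argument): take $G = \mathbb{Z}/4\mathbb{Z}$ and $H = \{0, 2\}$. The characters of $G$ are $\chi_k(g) = i^{kg}$ for $k = 0, 1, 2, 3$, and $\chi_k$ annihilates $H$ if and only if $\chi_k(2) = (-1)^k = 1$, i.e., $k$ is even. Hence ${\text{Ann}_{G}}(H) = \{\chi_0, \chi_2\}$, a group of order $2$, which is indeed isomorphic to $G/H \cong \mathbb{Z}/2\mathbb{Z}$; and $\widehat{G}/{\text{Ann}_{G}}(H)$ has order $4/2 = 2$, matching $H$, exactly as the two statements above predict.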
2015-04-27 16:26:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994329214096069, "perplexity": 82.97929229854108}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658904.34/warc/CC-MAIN-20150417045738-00258-ip-10-235-10-82.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/335079/how-many-wave-functions-can-be-represented-as-a-superposition-in-a-simple-harmon
# How many wave functions can be represented as a superposition in a simple harmonic oscillator? I'm teaching myself about QM, but there is something really puzzling me about the simple harmonic oscillator: $$H=\frac{p^2}{2m}+\frac{m\omega^2x^2}{2}.$$ I've learned how to use ladder operators to obtain the eigenvalues of this oscillator. Also, I'm able to write out the eigenstates in the form of position space wave functions: $$\langle x'|n\rangle=\left(\frac{1}{\pi^{1/4}\sqrt{2^n n!}}\right)\left(\frac{1}{x^{n+1/2}_0}\right)\left(x'-x^2_0\frac{d}{dx'}\right)^n\exp\left[-\frac{1}{2}\left(\frac{x'}{x_0}\right)^2\right],$$ where $$x_0\equiv \sqrt{\frac{\hbar}{m\omega}}.$$ So the initial wave function must be a superposition of these eigenfunctions. However, given an arbitrary normalized wave function $\langle x'|\alpha\rangle$ which is not necessarily a proper superposition, I can use $|\alpha\rangle$ as an initial state and make it evolve according to the Schrödinger equation: $$\langle x'|i\hbar\frac{\partial}{\partial t}|\alpha;t\rangle = \langle x'|H|\alpha;t\rangle,$$ which seems to make sense. So my questions are: 1. Can any normalized wave function be represented as a superposition of the eigenfunctions? 2. If not, what would happen if I set the initial state to a wave function that is not a superposition of the eigenfunctions? Also, there is another question which might be related: 3. The numbers of eigenfunctions for $x$ and $p$ are obviously uncountably infinite. But how could it be that this number is countably infinite for $H$? 1. The states $\{| n \rangle \}$ form a complete basis, so at any time you can expand any state $|\alpha(t)\rangle$ as a linear combination of $| n \rangle$, $$| \alpha(t) \rangle = \sum_n c_n(t)| n \rangle \tag{1}$$ Now it is a matter of finding the coefficients $c_n$. To do that note that \begin{eqnarray} i\hbar \frac{{\rm d}}{{\rm d}t}| \alpha(t) \rangle &=& i\hbar\sum_n \dot{c}_n(t) | n \rangle \\ H| \alpha(t) \rangle &=& \sum_n c_n(t) H | n \rangle = \sum_n c_n(t) E_n | n \rangle \\ \Rightarrow i\hbar\sum_n \dot{c}_n(t) | n \rangle &=& \sum_n c_n(t) E_n | n \rangle \end{eqnarray} with $E_n = \hbar \omega(n + 1/2)$. Taking the inner product of both sides with a state $\langle m |$, and recalling that $\langle m | n \rangle = \delta_{nm}$, $$i\hbar \dot{c}_m(t) = c_m(t)E_m$$ whose solution is $$c_n(t) = c_n(0)e^{-iE_n t/\hbar} \tag{2}$$ The coefficients $c_n(0)$ are easily obtained from Eq. (1): $$\langle m | \alpha(0)\rangle = \sum_n c_n(0)\langle m | n\rangle = c_m(0)$$ That is $$c_n(0) = \langle n | \alpha(0)\rangle \tag{3}$$ so, the evolution of any state $|\alpha(t)\rangle$ can be written as Eq. (1), where the coefficients $c_n(t)$ evolve according to Eq. (2) with initial conditions given by Eq. (3) 1. Since the set $\{ | n \rangle \}$ is complete, you can always write any state as a linear combination of eigenstates of $H$ 2. Please follow this link, but intuitively speaking, the potential $V(x) = m\omega^2 x^2/2$ has infinitely many bound states; this means that no matter how large the energy of a particle is, you can always contain it with $V$. In this case the states are also countable, indeed, you can label them with a single integer $n$ • Thanks. According to the link you gave, it seems that the space of any normalized wave function is $L^2(\mathbb R)$. So intuitively the eigenfunctions $\{ |n\rangle\}$ form a complete basis. But could you give more details about the completeness of this basis, e.g., a proof?
– OwUy May 25 '17 at 1:11 • @OwUy Basically the reason is that $\{|n\rangle\}$ are the eigenfunctions of a compact Hermitian operator with discrete eigenvalues. You could check this link for more details. Also, I think your last question goes beyond the scope of this post; please consider creating a new one to address the issue of completeness – caverac May 25 '17 at 6:14 I'll address 3. The word "basis" is often defined to only allow linear combinations of finitely many basis elements. This is the definition in the famous theorem stating all bases have the same cardinality. Hilbert spaces are unusual in that any finite-norm linear combination of orthogonal elements will define a unique element of the space; we say they are metrically complete, which doesn't apply to arbitrary inner product spaces. (This is somewhat analogous to the fact that, although $\mathbb{R}$ contains the limits of all its Cauchy sequences, general Cauchy sequences in $\mathbb{Q}$ converge to arbitrary real numbers.) A "basis" of a Hilbert space is defined more generally. As a result, a Hilbert space's bases can vary in cardinality.
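A small numerical illustration of Eqs. (1)–(3) above (my own sketch, not part of the answers; it uses dimensionless units $\hbar=m=\omega=1$, a displaced Gaussian as the initial state, and truncates the expansion at 30 terms):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

# Dimensionless oscillator (hbar = m = omega = 1): expand a displaced
# Gaussian in the eigenbasis, then evolve each coefficient by its phase.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
N = 30

def eigenstate(n):
    c = np.zeros(n + 1); c[n] = 1.0            # physicists' Hermite H_n
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2)

alpha0 = np.pi**-0.25 * np.exp(-(x - 1.0)**2 / 2)               # |alpha(0)>
c0 = np.array([np.sum(eigenstate(n) * alpha0) * dx for n in range(N)])  # Eq. (3)

t = 1.5
E = np.arange(N) + 0.5
ct = c0 * np.exp(-1j * E * t)                                   # Eq. (2)
alpha_t = sum(ct[n] * eigenstate(n) for n in range(N))          # Eq. (1)

print("norm:", np.sum(np.abs(alpha_t)**2) * dx)                 # ~ 1.0
```

The norm stays at 1 and the truncated expansion reconstructs the evolved packet, which is exactly what the completeness of $\{|n\rangle\}$ in $L^2(\mathbb{R})$ guarantees.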
2019-06-19 04:45:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604696035385132, "perplexity": 113.07891347684982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00164.warc.gz"}
https://www.physicsforums.com/threads/differential-equations-2.381724/
# Homework Help: Differential Equations (2)

1. Feb 25, 2010

### der.physika

I'm having trouble setting up this solution, can anyone give me a hint, or set it up, so I can see if what I'm doing is right? $$xy\prime=y=e^{xy}$$ using the substitution $$u\equiv(xy)$$

Last edited: Feb 25, 2010

2. Feb 26, 2010

### kosovtsov

$$xy\prime=y=e^{xy}$$ What do you mean with two = in "equation"?

3. Feb 26, 2010

### Redbelly98 Staff Emeritus

Moderator's note: Thread moved to "Calculus and Beyond" in the Homework & Coursework Questions area (https://www.physicsforums.com/forumdisplay.php?f=152). Homework assignments or any textbook style exercises for which one is seeking assistance are to be posted in the appropriate forum in our Homework & Coursework Questions area. This should be done whether the problem is part of one's assigned coursework or just independent study.

Last edited by a moderator: Apr 24, 2017

4. Feb 26, 2010

### der.physika

Sorry about that, I wrote that wrong, the actual problem is $$xy\prime+y=e^{xy}$$ using the substitution $$u\equiv(xy)$$

Last edited: Feb 26, 2010
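For what it's worth, here is a quick sketch of where the suggested substitution leads (my own working, not posted in the original thread): with $$u\equiv xy,\qquad \frac{du}{dx}=y+xy',$$ the corrected equation $xy'+y=e^{xy}$ becomes $$\frac{du}{dx}=e^{u},$$ which separates as $e^{-u}\,du=dx$, so $-e^{-u}=x+C$ and $$xy=-\ln\bigl(-(x+C)\bigr)$$ wherever $-(x+C)>0$.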
2018-12-19 16:33:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5082540512084961, "perplexity": 3549.569582119065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832559.95/warc/CC-MAIN-20181219151124-20181219173124-00414.warc.gz"}
http://math.stackexchange.com/questions/216177/in-a-venn-diagram-where-are-other-number-sets-located?answertab=oldest
# In a Venn diagram, where are other number sets located? I remember this image, which I learned at school: I've heard about other numbers (which I'm not really sure belong to a new set) such as quaternions, p-adic numbers. Then I got three questions: • Are these numbers in a new set? • If yes, where are these sets located in the Venn diagram? • Is there a master Venn diagram where I can visualize all sets known until today? Note: I wasn't sure on how to tag it. - You wrote: I wasn't sure on how to tag it. I've added the number-systems tag, which seems reasonable to me. –  Martin Sleziak Oct 18 '12 at 6:50 @MartinSleziak Yep. Thank you. –  Voyska Oct 18 '12 at 7:13 The diagram suggests that there are other real numbers besides rational and irrational :) Anyway, while $\mathbb{Q}_p$'s would intersect your diagram (and each other) only in $\mathbb{Q}$ (and $\mathbb{Z}_p$ in rationals without $p$ in the denominator), it is important to consider all (non-canonical!) field embeddings (you can embed $\mathbb{Q}_p$ into $\mathbb{C}$ if you wish). You should also consider algebraic numbers (in $\mathbb{C}$ AND in $\mathbb{Q}_p$'s and their extensions). –  user8268 Oct 18 '12 at 7:39 It also suggests that "whole number" means something definite, which is distinct from both the naturals and the integers, when in fact "whole number" is a horrible phrase that means either the naturals (with or without $0$) or the integers. –  Chris Eagle Oct 18 '12 at 7:44 @ChrisEagle: That's the term in Hebrew, "whole numbers". –  Asaf Karagila Oct 18 '12 at 7:45 This Venn diagram is quite misleading actually. For example, the irrationals and the rationals are disjoint and their union is the entire real numbers. The diagram makes it plausible that there are real numbers which are neither rational nor irrational. One could also talk about algebraic numbers, which form a subfield of $\mathbb C$ and which meet the irrationals as well. As for other number systems, let us overview a couple of the common ones: 1. Ordinals extend the natural numbers, but they completely avoid $\mathbb{Z,Q,R,C}$ otherwise. 2. $p$-adic numbers extend the rationals; in some sense we can think of them as a subset of the complex numbers, but that is a deep understanding in field theory. Even if we let them be on their own accord, there are some irrational numbers (real numbers) which have a $p$-adic representation, but that depends on your $p$. 3. You can extend the complex numbers to the Quaternions (and you can even extend those a little bit). 4. You could talk about hyperreal numbers, but that construction does not have a canonical model, so one cannot really point out where it "sits" because it has many faces and forms. 5. And ultimately, there are the surreal numbers. Those numbers extend the ordinals, but they also include $\mathbb R$. Now, note that this diagram is not very... formal. It is clear it did not appear in any respectable mathematical journal. It is a reasonable diagram for high-school students, who have learned about rationals and irrationals, and complex numbers. I would never burden [generic] high-school kids with talks about those number systems above. - Extend the quaternions a little bit? You mean octonions? –  Voyska Oct 18 '12 at 18:41 @Gustavo: Yes, and Sedenions. Both, however, are non-commutative and non-associative. So everything behaves much worse than expected in "normal number systems".
–  Asaf Karagila Oct 18 '12 at 18:59 Strictly speaking $\mathbb{R}$ is not a subset of $\mathbb{C}$; rather, it is isomorphic to a subfield of $\mathbb{C}$. Same for $\mathbb{Q}\subseteq\mathbb{R}$. Now $\mathbb{Z}$ is also isomorphic to a subring of $\mathbb{Q}$, not literally a subset. Whenever you have two algebraic structures $A$ and $B$ with respect to the same binary operations, it may be possible to 'identify' $A$ with some subset of $B$, that means to show an isomorphism between $A$ and a subset of $B$ with respect to the defined operations; in this case you can write $A\subseteq B$, in some loose sense. - I appreciate this answer, but it is needlessly pedantic for most purposes. –  Austin Mohr Oct 18 '12 at 8:06 If you really want to, those are actual subsets and not just embeddings. –  Asaf Karagila Oct 18 '12 at 8:13 I would be interested to see your proof that $\mathbb{R} \not\subset \mathbb{C}$. I suspect it might use definitions that are not universally agreed upon. (Of course, so would a proof that $\mathbb{R} \subset \mathbb{C}$.) –  Trevor Wilson Oct 18 '12 at 16:57 @Trevor: Many people consider $\mathbb C$ to be defined as a quotient of $\mathbb R[x]$, so its elements are equivalence classes. We can easily identify those corresponding to real numbers, but it's not the same. Similarly for $\mathbb Q$ in $\mathbb R$, and similarly for $\mathbb Z$ and so on. However from a set theoretical point of view, you can always just fix $\mathbb C$ and declare that the other objects are those subsets, which is what I meant in my comment. Either way, I agree with Austin that this answer is not useful here. –  Asaf Karagila Oct 18 '12 at 19:02 @AsafKaragila Good point. I was thinking that we could define the others as subsets of $\mathbb{C}$, but forgetting that in defining $\mathbb{C}$ we probably defined $\mathbb{R}$ along the way in a manner that is incompatible with this new definition. –  Trevor Wilson Oct 18 '12 at 19:06
2015-07-06 11:46:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9049335718154907, "perplexity": 421.0382472003247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098196.31/warc/CC-MAIN-20150627031818-00153-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/related-rates.72928/
# Homework Help: Related Rates

1. Apr 24, 2005

### scorpa

Hi Again, I am doing a question on related rates that I have become stuck on. The height (h) of an equilateral triangle is increasing at a rate of 3 cm/min. How fast is the area changing when h is 5 cm? I know that the area of a triangle is bh/2, but after that I am stuck. I tried differentiating it using the chain rule so that I could substitute h and the rate of h, but I don't think that I was doing it the right way. If anyone could direct me here I would really appreciate the help.

2. Apr 24, 2005

### Jameson

Here are some things to consider: the height "h" of an equilateral triangle is $$\frac{1}{2}\sqrt{3}s$$ where "s" is the length of one side. The area of this triangle is equal to $$\frac{1}{2}sh$$ See any substitutions?

3. Apr 24, 2005

### futb0l

$$\frac{dA}{dt} = \frac{dh}{dt} \cdot \frac{dA}{dh}$$
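Carrying those two hints through (my own completion, not posted in the original thread): since $$h=\frac{\sqrt{3}}{2}s \quad\Rightarrow\quad s=\frac{2h}{\sqrt{3}},$$ the area becomes $$A=\frac{1}{2}sh=\frac{h^2}{\sqrt{3}},\qquad \frac{dA}{dh}=\frac{2h}{\sqrt{3}},$$ so at $h=5$ cm with $dh/dt=3$ cm/min, $$\frac{dA}{dt}=\frac{dA}{dh}\cdot\frac{dh}{dt}=\frac{2(5)}{\sqrt{3}}\cdot 3=10\sqrt{3}\approx 17.3\ \text{cm}^2/\text{min}.$$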
2018-04-23 19:40:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7913756370544434, "perplexity": 481.90920041299887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946165.56/warc/CC-MAIN-20180423184427-20180423204427-00152.warc.gz"}
http://mathhelpforum.com/calculus/205930-help-me-derivation-limit.html
Thread: Help me in derivation of limit

1. Help me in derivation of limit

For limits of the $1^\infty$ type of indeterminate form, the direct solution is $e^{g(x)[f(x)-1]}$

2. Re: Help me in derivation of limit

Originally Posted by satyam: for limits of the $1^\infty$ type of indeterminate form, the direct solution is $e^{g(x)[f(x)-1]}$

I am not sure what you mean by "direct solution", but you can convert the form so L'Hôpital's rule can be used. If $\lim_{x \to \infty} f(x)=1$ and $\lim_{x \to \infty} g(x)=\infty$ then $[f(x)]^{g(x)} = \exp [g(x) \ln(f(x))]= \exp \left[ \dfrac{\ln(f(x))}{\frac{1}{g(x)}}\right]$ The exponent is now in the form $\dfrac{0}{0}$, so you can apply L'Hôpital. Note the fraction could also be arranged to get infinity over infinity
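To connect the two posts (my own addition, not in the original thread): the shortcut in the question follows from the same starting point, because $\ln(f(x)) = \ln\bigl(1+(f(x)-1)\bigr)\sim f(x)-1$ as $f(x)\to 1$, so $$\lim [f(x)]^{g(x)} = \exp\Bigl[\lim g(x)\ln(f(x))\Bigr] = \exp\Bigl[\lim g(x)\bigl(f(x)-1\bigr)\Bigr],$$ which is exactly the $e^{g(x)[f(x)-1]}$ rule.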
2016-09-30 03:38:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9590480923652649, "perplexity": 1150.8178423600132}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662018.69/warc/CC-MAIN-20160924173742-00237-ip-10-143-35-109.ec2.internal.warc.gz"}
https://motls.blogspot.com/2009/07/globes-warmest-days-in-decades.html
## Tuesday, July 21, 2009 ... //

### Globe's warmest days in a decade

The record cold temperatures are often covered on this blog. But I have no bias so you can learn about the record hot temperatures, too. According to the satellite methodology UAH AMSU-A temperatures (Java graph generator), July 12th, 2009 was already the globe's near-surface record-breaking warmest day at least since 1998. This particular algorithm ended up with a "very hot" result, -13.73 °C, which was warmer than the previous record, -13.78 °C, on July 2nd, 2007, followed by July 21st-24th, 2005 when the temperature was stuck at -13.79 °C. There were many periods during the last decade when the temperature was close to -13.9 °C.

We're interested in "absolute temperatures" and not anomalies, so we should realize the following: only the days in July, perhaps the last week(s) of June, and the first week(s) of August are a priori eligible to produce the absolute hottest days, because the Northern Hemisphere landmass, the largest contributor to the temperature variations, is hottest in July. The oceanic temperatures don't vary as much, and there is less land area in the Southern Hemisphere than in the Northern Hemisphere.

You shouldn't be quite certain that these were the hottest days on their record because the temperature may occasionally jump for a few days, even during Julies of cold years. (But yes, additional graphs indicate that the days were warmest in 20 and probably 30 years.) However, it is somewhat unlikely that a warmer day occurred in years that were 0.5 °C cooler than 2009 or even more. So chances are high that the day would be the warmest one, according to the same definition of the global mean temperature, since 1945 if not 1400 if not 6000 BC if not 125,000 BC, right after an interglacial when a warmer day had occurred almost certainly. ;-) You can be relatively certain that the days we are just enjoying will remain the hottest ones after some small corrections are made at the end of the month. Why?

### A sequence of records

Because the new record from July 12th, -13.73 °C, was improved on July 13th, then again on July 14th-15th (the same temperature), then again on July 17th, and then again on July 18th-19th when the temperature was -13.58 °C, which is already 0.2 °C warmer than the previous record from 2007. That's a pretty large improvement that is unlikely to be "fixed away". The continuity of the temperature and long-term persistence make it reasonably likely that a record may be rewritten 5 times a week. There is no law that will prevent the temperature from increasing for a few days (July 20th was cooler, at -13.61 °C). The chances exceed 50% that the July 18th record will be rewritten again in a few days, simply because the short-term behavior of the graph resembles the Brownian motion. Also, you shouldn't imagine that these records have a long-lasting effect. As recently as June 6th-June 19th, the temperature was actually lower than during the same days in 2008. I am sure ;-) that you had to notice such an unprecedented shocking global heat wave. Did you survive the catastrophe and how? Today, every kid knows that the UAH-AMSU-A global temperature of -13.58 °C is a horrible fever that urges the kids to shoot their fathers in SUVs. Tell us about your experiences from the judgment days. In Pilsen, the hottest days July 18th-19th were rainy, with temperature stuck around 10 °C: Ewa Farna's concert on the Pond of Bolevec was somewhat decimated by the bad weather.
But the skies have become more pleasant afterwards. :-)

I am also not asking Al Gore about his frying experience from the record hot days because Nashville, Tennessee broke the 1877 record cold temperature today. ;-)

### Causes, statistics of records

The rapidly strengthening El Nino conditions and random noise are the two primary drivers to get credit for the globally warm days. The probability that a particular Northern summer brings a hotter day than the 10 previous years is equal to 0.1 or greater. It's because with white noise, each year in the period has the same 10% probability to include the record-breaker. On the other hand, long-term persistence "reddens" the noise and makes the temperature more likely to increase or decrease quasi-uniformly, which raises the odds that the extremum (or extrema) are found at the end(s) of the 10-year period. If very low-frequency signals were completely dominating the evolution of temperatures, the probability that the latest summer has the hottest day would gradually approach 50%: the same chances of quasi-uniform cooling and warming. If the annual temperature step were completely dominated by an underlying warming trend, the odds would approach 100%, of course.
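A quick simulation of the record-probability argument above (my own sketch, not from the post; the trend sizes are arbitrary stand-ins):

```python
import numpy as np

# Probability that the LAST year of a 10-year window contains the record
# maximum, for white noise vs. noise plus a dominating warming trend.
rng = np.random.default_rng(1)
years, trials = 10, 100_000

def p_last_is_record(trend_per_year):
    t = np.arange(years)
    data = rng.standard_normal((trials, years)) + trend_per_year * t
    return np.mean(data.argmax(axis=1) == years - 1)

print(p_last_is_record(0.0))   # ~0.10: each year equally likely (white noise)
print(p_last_is_record(1.0))   # -> approaches 1 as the trend dominates
```

With zero trend the empirical probability sits near the 10% quoted above; as the trend term grows it climbs toward 100%, bracketing the reddened-noise cases in between.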
https://www.intechopen.com/books/evolutionary-computation/optimization-of-structures-under-load-uncertainties-based-on-hybrid-genetic-algorithm/
Computer and Information Science » Numerical Analysis and Scientific Computing » "Evolutionary Computation", book edited by Wellington Pinheiro dos Santos, ISBN 978-953-307-008-7, Published: October 1, 2009 under CC BY-NC-SA 3.0 license. © The Author(s).

# Optimization of Structures under Load Uncertainties Based on Hybrid Genetic Algorithm

By Nianfeng Wang and Yaowen Yang

DOI: 10.5772/9608

## 1. Introduction

Today, with the development of industry and research, reliability is more and more emphasized. However, uncertainty inevitably exists in the analysis of many engineering problems. For example, wind loading on structures often varies over a certain range instead of taking one determined value. The design of such systems should take into account the intervals of deviation of the variables from their nominal values. In some cases, the variability in the system parameters is neglected for convenience of analysis, and in certain situations the response obtained this way is still valid and reasonable. Sometimes, however, the uncertainty analysis of the system cannot be neglected, because the uncertainty would significantly affect the system performance.

The anti-optimization technique, on one hand, represents an alternative and complement to traditional methods, and on the other hand, it is a generalization of the mathematical theory of interval analysis (Qiu & Elishakoff 2001). When the available uncertainty data are limited, a probability distribution may not be estimated accurately, but bounds for the uncertain variables may at least be estimated. The designer will generally seek the least favorable solution for the structure within the domain defined by the bounds on the uncertain variables. This search for the worst condition for a given problem was named anti-optimization (Elishakoff 1995). The term anti-optimization is also used in a more general sense, to describe the task of finding the worst scenario for a given problem. A two-species genetic algorithm (GA) was presented that effectively reduces the two-level problem to a single level (Venter & Haftka 1996).
The maximum strength of laminated composites was optimized under bounded uncertainty of material properties by a GA (Maenghyo & Seung Yun 2004). In recent years, hybrid genetic algorithms (GAs) have become more widespread, and some great successes have been achieved in the optimization of a variety of classical hard optimization problems. Hybrid algorithms combining GAs with local search algorithms were proposed to improve the search ability of GAs, and their high performance has been reported (Ishibuchi & Murata 1998; Deb & Goel 2001; Jaszkiewicz 2003; Wang & Tai 2007; Wang & Tai 2008). In these studies, local search algorithms were employed in order to reach results closer to the optimum more quickly. The present work integrates a simple genetic local search algorithm, used as the anti-optimization technique, with the constrained multi-objective GA proposed in (Wang & Tai 2007). A constrained tournament selection is used as a single objective function in the local search strategy.

Section 2 outlines the proposed hybrid GA, and Section 3 presents a morphological geometry representation scheme coupled with the GA. The formulation and numerical results of a target matching test problem in the context of structural topology optimization are presented in Section 4. The formulation and numerical results of the structural design problem are presented in Section 5. Finally, concluding remarks are given in Section 6.

## 2. Proposed algorithm – a hybrid GA

A general constrained multi-objective optimization problem (in the minimization sense) is formulated as:

$$\begin{aligned} \text{minimize} \quad & \mathbf{f}(\mathbf{x}) = [f_1(\mathbf{x})\;\; f_2(\mathbf{x}) \;\cdots\; f_m(\mathbf{x})] \\ \text{subject to} \quad & g_j(\mathbf{x}) \le 0, \quad j = 1, 2, \cdots, q \\ & h_k(\mathbf{x}) = 0, \quad k = 1, 2, \cdots, r \\ & \mathbf{x}_L \le \mathbf{x} \le \mathbf{x}_U \end{aligned} \tag{1}$$

where $\mathbf{f}$ is a vector of $m$ objectives and $\mathbf{x} = [x_1\; x_2 \cdots x_n]$ is the vector of $n$ design variables. $g_j$ and $h_k$ are the inequality and equality constraints, and $\mathbf{x}_L$ and $\mathbf{x}_U$ define the lower and upper bounds of $\mathbf{x}$, respectively.

For anti-optimization, robustness of the design is achieved by minimizing the maximum (worst-case) value of the objective functions. This tends to guard against the least favorable values of the uncertain parameters and results in higher reliability with respect to uncertainty. Therefore an anti-optimization procedure, implemented in this work by local search, is employed to search for the worst values of the objective functions. Consider a problem subject to $u$ uncertain variables $\mathbf{x}_u$ and $n_u$ normal design variables $\mathbf{x}_n$. The formulation can then be written as:

$$\begin{aligned} \underset{\mathbf{x}}{\text{minimize}} \quad & \mathbf{f}(\mathbf{x}) \\ \underset{\mathbf{x}_u}{\text{maximize}} \quad & \mathbf{f}(\mathbf{x}_n, \mathbf{x}_u), \quad \mathbf{x}_n \subset C_N \\ \text{subject to} \quad & g_j(\mathbf{x}) \le 0, \quad j = 1, 2, \cdots, q \\ & h_k(\mathbf{x}) = 0, \quad k = 1, 2, \cdots, r \\ & \mathbf{x}_L \le \mathbf{x} \le \mathbf{x}_U \end{aligned} \tag{2}$$

where $C_N$ is a particular set of solutions related to the generation number $N$. This formulation leads to a nested optimization problem, which is solved by means of a GA for the optimization and a local search for the anti-optimization. Generally speaking, this is a sort of min–max search, where the "max" part is dealt with by a local search algorithm, and the "min" part is realized by a GA.

### 2.1. Tournament selection for local search

Since a local search strategy requires a tournament selection between an initial solution and its neighboring solution, a comparison strategy is needed. For multi-objective optimization without constraints, a single objective function converted from the multiple objectives can be used. For constrained optimization, constraint handling mechanisms must be specified first.
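Before turning to constraint handling, the nested min–max structure of Eq. (2) can be made concrete with a toy sketch. The following Python fragment is purely illustrative (the objective, bounds and search routines are made up): a grid search stands in for the anti-optimization local search, and a random search stands in for the GA.

```python
import random

def f(x, p):
    # Toy objective: x is the design variable, p the uncertain parameter.
    return (x - 1.0) ** 2 + p * x

def worst_case(x, p_lo=-0.5, p_hi=0.5, steps=50):
    """Inner 'max' part: anti-optimization over the uncertain parameter p,
    here by a simple grid search over its bound interval."""
    return max(f(x, p_lo + (p_hi - p_lo) * i / (steps - 1)) for i in range(steps))

def outer_min(trials=2000):
    """Outer 'min' part: a crude random search over the design variable x."""
    best_x, best_val = None, float("inf")
    for _ in range(trials):
        x = random.uniform(-2.0, 3.0)
        val = worst_case(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

print(outer_min())  # design that is best under its own worst-case p
```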
In most applications, penalty functions using static (Srinivas & Deb 1994), dynamic or adaptive concepts (Michalewicz & Schoenauer 1996) have been widely used. The major problem is the need to specify the right values for the penalty parameters in advance. The method of (Ray, Tai et al. 2001) incorporates a Pareto ranking of the constraint violations, so it does not involve any aggregation of objectives or constraints, and thus the problem of scaling does not arise. Note, however, that Pareto ranking is not well suited for hybridization with local search. For an anti-optimization problem, when Pareto ranking is used, the current solution x is replaced with its neighboring solution y (i.e., the local search move from x to y is accepted) only when x dominates y (i.e., x is better than y). That is, the local search move is rejected when x and y are non-dominated with respect to each other. However, a change in the rank of a given solution may require significant changes in the objective/constraint values; thus, many local moves will not alter the rank.

A constraint handling method (Deb 2000) was proposed which is also based on the penalty function approach but does not require the prescription of any penalty parameter. The main idea of this method is to use a tournament selection operator and to apply a set of criteria in the selection process. For an anti-optimization problem, the criteria can easily be changed to:

1. Any infeasible solution is preferred to any feasible solution.
2. Between two feasible solutions, the one having the worse objective function value is preferred.
3. Between two infeasible solutions, the one having the bigger constraint violation is preferred.

According to these criteria, the constrained optimization can be constructed as

$$\tilde{f}(\mathbf{x}) = \begin{cases} f(\mathbf{x}) & \text{if } \mathbf{x} \in F \\ f_{max} + vio(\mathbf{x}) & \text{otherwise} \end{cases} \tag{3}$$

where $\tilde{f}(\mathbf{x})$ is the artificial unconstrained objective function, $F$ is the feasible region of the design domain, $f_{max}$ is the objective function value of the worst feasible solution in the population, and $vio(\mathbf{x})$ is the summation of all the violated constraint function values.

However, this approach is only suitable for single-objective constrained optimization problems if no further handling mechanism for multiple objectives is given. Moreover, $vio(\mathbf{x})$, as a plain summation of constraint values, cannot reflect the real relative importance of the constraints because of the different orders of magnitude among them; in this sense, it is still based on penalty functions, with all the penalty parameters set to 1.

Extending the basic idea of Deb's method, a technique combining Pareto ranking and a weighted sum is suggested in this work for the local search selection process. There are only three combinations for the two solutions: both feasible, both infeasible, and one feasible and the other infeasible. The main idea of the technique is to use a tournament selection operator and to apply a set of criteria in the selection process. For an anti-optimization procedure, any infeasible solution is preferred to any feasible solution. When both solutions are feasible, a Pareto ranking based on the objectives is calculated, and the one with the bigger rank value is preferred. If the situation still ties, a more sophisticated acceptance rule is used to handle the situation.
The fitness function of the solution x is calculated by the following weighted sum of the $m$ objectives:

$$f(\mathbf{x}) = w_1 f_1(\mathbf{x}) + w_2 f_2(\mathbf{x}) + \cdots + w_m f_m(\mathbf{x}) \tag{4}$$

where $f(\mathbf{x})$ is a combined objective and $w_1, w_2, \cdots, w_m$ are nonnegative weights for the objectives, set according to the different orders of magnitude among them. Constant weight values are used in this work to fix the search direction based on the user's preference. The solution with the bigger $f(\mathbf{x})$ will survive. When both solutions are infeasible, a Pareto ranking based on the constraints is calculated, and the one with the bigger rank value is preferred. If the ranks are the same, the one with the worse fitness value survives. The tournament selection criterion deciding whether a current solution x should be replaced by a neighboring solution y can be described as follows:

1. If x is feasible and y is infeasible, replace the current solution x with y (i.e., let x = y).
2. If both x and y are feasible, then

$$\text{if } Rank^{Obj}_x < Rank^{Obj}_y \text{, then } x = y; \quad \text{else if } Rank^{Obj}_x = Rank^{Obj}_y \text{ and } f(x) \le f(y) \text{, then } x = y \tag{5}$$

3. If both x and y are infeasible, then

$$\text{if } Rank^{Con}_x < Rank^{Con}_y \text{, then } x = y; \quad \text{else if } Rank^{Con}_x = Rank^{Con}_y \text{ and } f(x) \le f(y) \text{, then } x = y \tag{6}$$

### 2.2. Selection of initial solutions

Applying local search to all solutions in the current population is inefficient, as shown in (Ishibuchi, Yoshida et al. 2002). In the proposed algorithm, the computation time spent on local search is reduced by applying local search only to selected solutions in selected generations. If $n$ is the number of decision variables, the best $n$ solutions from the current population (based on Pareto ranking) are selected. These $n$ mutated solutions, together with the elites from the $N$-th generation after local search, are then put into the next population. The generation update mechanism of the proposed algorithm is shown in Fig. 1. The implementation of the anti-optimization part is modularized.

### Figure 1.

Generation update mechanism.

### 2.3. Local search procedure

As explained above, the local search procedure is applied to elite individuals and to new solutions generated by mutation in selected generations. In general, the local search procedure can be written as follows:

• Step 1. Specify an initial solution and its corresponding design variables under uncertainty.
• Step 2. Apply the Hooke and Jeeves method to determine the search path, using the tournament selection criteria stated above to compare function values.
• Step 3. If the prescribed stopping condition is satisfied, terminate the local search.

### 2.4. Main algorithm

The overall algorithm uses a framework which combines the method stated in (Wang & Tai 2007) and the local search proposed above. The algorithm is given below:

• Step 1. Generate a random initial population P of size M.
• Step 2. Evaluate the objective as well as constraint functions for each individual in P.
• Step 3. Compute the Pareto ranking.
• Step 4. Select elite individuals. Elite individuals carried over from the previous generation preserve the values of their objective and constraint functions.
• Step 5. Select the n best individuals from P, mutate them and apply the local search procedure in the specified generations, then put them into the new population P'.
• Step 6. Crossover.
• Step 7. If the prescribed stopping condition is satisfied, end the algorithm. Otherwise, return to Step 2.
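As an illustration of the local search of Section 2.3, the following minimal Python sketch (not the authors' C++ implementation) combines a pairwise version of the tournament rule of Section 2.1 — with plain dominance standing in for the population-based Pareto rank — and Hooke and Jeeves style exploratory moves that climb toward the worst case. All demo functions and parameter values are hypothetical.

```python
def dominates(a, b):
    """a dominates b in the minimization sense: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def is_feasible(g):                       # constraints satisfied when g_i <= 0
    return all(v <= 0.0 for v in g)

def violation(g):                         # summed violation, cf. vio(x) in Eq. (3)
    return sum(max(v, 0.0) for v in g)

def weighted(f, weights):                 # weighted sum, cf. Eq. (4)
    return sum(w * fi for w, fi in zip(weights, f))

def prefer(fy, gy, fx, gx, weights):
    """True if neighbor y should replace current x in the worst-case search."""
    feas_y, feas_x = is_feasible(gy), is_feasible(gx)
    if feas_y != feas_x:
        return not feas_y                 # criterion 1: infeasible preferred
    if feas_y:                            # both feasible: the worse one survives
        if dominates(fy, fx):
            return False                  # y is better -> keep x
        if dominates(fx, fy):
            return True                   # y is worse -> move to y
        return weighted(fy, weights) > weighted(fx, weights)
    return violation(gy) > violation(gx)  # both infeasible: bigger violation wins

def hooke_jeeves_worst(evaluate, p0, step=0.25, shrink=0.5, tol=1e-3, weights=(1.0, 1.0)):
    """Exploratory pattern search (Hooke and Jeeves style) climbing toward
    the worst case over the uncertain variables p."""
    p = list(p0)
    f, g = evaluate(p)
    while step > tol:
        moved = False
        for i in range(len(p)):
            for d in (step, -step):
                q = list(p)
                q[i] += d
                fq, gq = evaluate(q)
                if prefer(fq, gq, f, g, weights):
                    p, f, g = q, fq, gq
                    moved = True
                    break
        if not moved:
            step *= shrink
    return p, f

# Hypothetical demo: one uncertain load p[0] in [0, 2], two objectives to worsen.
def evaluate(p):
    load = min(max(p[0], 0.0), 2.0)                 # clamp to the bound interval
    return (load ** 2, 3.0 - load), (load - 2.0,)   # (objectives), (constraints g <= 0)

print(hooke_jeeves_worst(evaluate, [0.5]))          # climbs to the worst load
```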
## 3. Enhanced geometric representation scheme for structural topology optimization

The enhanced morphological representation was first introduced in (Tai & Wang 2007), which in turn is an extension of the morphological representation scheme previously developed in (Tai & Akhtar 2005; Tai & Prasad 2007). As in any structural topology optimization procedure, the geometry of the structure has to be represented and defined by some form of design variables. The enhanced morphological representation efficiently casts the structure topology as a chromosome code, which makes it effective for solution via a GA. In the proposed scheme, the connectivities and the number of curves used are made variable and are optimized in the evolutionary procedure. The scheme is illustrated as follows.

A square design space shown in Fig. 2(a) is discretized into a 50 by 50 mesh of identical square elements. While it is initially unknown how the design space will be occupied by the structure, there must exist some segments of the structure, such as the support and the loading, that have functional interactions with the surroundings. The support point is a segment of the structure that is restrained (fixed, with zero displacement), while the loading point is where a specified load (input force) is applied to deform the structure. Collectively, the support and loading points represent the input points of the structure. There is also usually an output point, which is a segment of the structure where the desired output behavior is attained. As shown in Fig. 2(a), the problem is defined with four I/O locations, each made up of one element marked in black.

Six connecting curves are used in the illustration of Fig. 2(b), three of which are active and three inactive, such that there is one connecting curve between any two points (i.e. every I/O point is directly connected to the other three). Before continuing, it is important to make a clear distinction between the active and inactive curves. The active curves are the curves which are in the 'on' state; the structure is generated based only on the active curves. Although the inactive curves, which are in the 'off' state, temporarily contribute nothing to the current structure, they are still very important in subsequent generations, because they may be turned 'on' later through the crossover or mutation operations. In Fig. 2(b), the active curves are marked with thick lines and the inactive ones with thin dotted lines. The connectivity of the I/O points is based on all connecting active curves joining one point to another.

Each curve is a Bezier curve defined by position vectors which can be derived from the element numbers of its control points. The set of elements through which each active curve passes forms the 'skeleton' (Fig. 2(c)). Some of the elements surrounding the skeleton are then included to fill up the structure to its final form (Fig. 2(d)) based on the skeleton's thickness values. Each curve is defined by three control points, and hence each curve has four thickness values. The union of all skeleton elements, surrounding elements and I/O elements constitutes the structure, while all other elements remain as the surrounding empty space. A minimal sketch of this curve-to-skeleton decoding step is given below.

In order to use a GA for the optimization, the topological/shape representation variables have to be configured as a chromosome code. Hence the structural geometry in Fig. 2(d) can be encoded as a chromosome in the form of a graph, as shown in Fig. 3.
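The sketch below (Python) is a simplified stand-in for the decoding of Fig. 2(b)–(c): it uses a quadratic Bezier with a single interior control point and omits the thickness dilation, rather than the chapter's three-control-point curves.

```python
def bezier_point(p0, p1, p2, t):
    """Quadratic Bezier: start point p0, one control point p1, end point p2."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

def skeleton_elements(p0, p1, p2, n_cells=50, samples=400):
    """Return the set of grid cells (row, col) that a curve passes through --
    the 'skeleton' of Fig. 2(c). Coordinates are normalized to [0, 1] x [0, 1]."""
    cells = set()
    for k in range(samples + 1):
        x, y = bezier_point(p0, p1, p2, k / samples)
        col = min(int(x * n_cells), n_cells - 1)
        row = min(int(y * n_cells), n_cells - 1)
        cells.add((row, col))
    return cells

# Hypothetical I/O elements and control point on the 50 x 50 mesh of Fig. 2(a):
skel = skeleton_elements((0.0, 0.5), (0.5, 0.9), (1.0, 0.5))
print(len(skel), "skeleton elements")
```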
Each curve is represented by a series of nodes connected by arcs, in the sequence of start element number, thickness values alternating with control element numbers, and end point. For identification purposes, the active curves are shown by solid lines and the inactive curves by dotted lines. Altering the curve states can vary the connectivity of the I/O regions, and therefore the representation scheme can automatically decide the connectivity. The resulting scheme increases the variability of the connectivity of the curves and hence the variability of the structure topology.

Two of the important operations in a GA are crossover and mutation. In this implementation, the crossover operator works by randomly sectioning a single connected subgraph from a parent chromosome and swapping it with a corresponding subgraph from another parent (as shown in Fig. 4). As a result, two offspring are produced which have a mix of the topological/shape characteristics of the two parents, and the advantages of the representation (such as no checkerboard patterns and no single-node hinge connections) are maintained in the offspring. The 'on' and 'off' states of the different curves which are crossed by the loop are also swapped. If the 'on' variables dominate a curve, i.e. when the number of 'on' variables is greater than the number of 'off' variables, the curve in the child chromosome will be active. Otherwise, the child curve will be inactive. As for mutation, the mutation operator works by randomly selecting any vertex of the chromosomal graph and altering its value to another randomly generated value within its allowable range. Mutation of the on-off state is simple: the state of the selected curve is flipped. When the selected curve is active, it becomes inactive after mutation, and vice versa.

#### Figure 2.

Definition of structural geometry by enhanced morphological scheme (a) FE discretization of design space (I/O element marked in black) (b) Connecting I/O elements with Bezier curves (c) Skeleton made up of elements along curves (d) Surrounding elements added to skeleton to form final structure.

#### Figure 3.

Chromosome of final structure.

#### Figure 4.

Illustration of crossover operation.

In summary, this morphological representation scheme uses arrangements of skeleton and surrounding material to define the structural geometry in a way that will not produce any undesirable design features such as disconnected segments, checkerboard patterns or single-node hinge connections, because element edge connectivity of the skeleton is guaranteed even after any crossover or mutation operation. Any chromosome-encoded design generated by the evolutionary procedure can be mapped into a finite element model of the structure accordingly.

## 4. Target matching problem

Before a GA is relied upon for solving a structural design problem with unknown solutions, it is important that the performance of the GA be tested and tuned by using it to solve a problem with known solutions. Various kinds of test problems (Michalewicz, Deb et al. 2000; Schmidt & Michalewicz 2000; Martin, Hassan et al. 2004) have been established for testing multi-objective GAs. They were created with different characteristics, including the dimensionality of the problem, the number of local optima, the number of active constraints at the optimum, the topology of the feasible search space, etc.
However, all of these test problems have well-defined objectives/constraints expressed as mathematical functions of the decision variables, and therefore may not be ideal for evaluating the performance of a GA intended to solve problems where the objectives/constraints cannot be expressed explicitly in terms of the decision variables. In essence, a GA is typically customized to tackle a certain type of problem, and therefore 'general-purpose' test problems may not correctly evaluate the performance of the customized GA. The test problem should, therefore, ideally suit (or be customized to) the GA being used.

In numerous real-life problems, objectives/constraints cannot be expressed mathematically in terms of the decision variables. One such real-life problem is structural topology optimization, where a procedure (the structure geometry representation scheme) first transforms the decision variables into the true geometry of the designed structure, and then finite element analysis of the designed structure is carried out for evaluating the objectives/constraints. A GA solving such problems may have a special chromosome encoding to suit the structure geometry representation used, and there may also be specially devised reproduction operators to suit the chromosome encoding. As such, the structure geometry representation scheme, the chromosome encoding and the reproduction operators introduce additional characteristics to the search space, and therefore they are very critical to the performance of the GA. The test problem for such GAs must therefore use the same structure geometry representation scheme, chromosome encoding and reproduction operators. The conventional test problems found in the literature cannot make use of the GA's integral procedures, such as the structure geometry representation scheme, and therefore they are not suitable for testing such GAs. Ideally, the test problem should emulate the main problem to be solved.

The test problem should also be computationally inexpensive, so that it can be run many times while the GA parameters are changed or experimented with, and the effects thereof can be studied for the purpose of fine-tuning the GA. However, the main problem in the present work, being a structural topology optimization problem under uncertainty, requires structural analysis, which consumes a great deal of time. Taking the running time into consideration, the test problem needs to be designed without any need for structural analysis. A test problem emulating structural topology optimization does not necessarily need structural analysis, as the main aim of topology optimization is to arrive at an optimal structural geometry. Without using structural analysis, if a GA is successfully tested to be capable of converging the solutions to an arbitrary but predefined and valid 'target' structural geometry, then it may be inferred that the GA would be able to converge design solutions to the optimal structural topology when solving an actual topology optimization problem. Based on this inference, a test problem can be designed such that simple geometry-based (rather than structural-analysis-based) objectives/constraints help design solutions converge towards the predefined target geometry. This type of test problem may be termed a "Target Matching Problem", which is capable of using exactly the same GA (including structure geometry representation scheme, chromosome encoding and reproduction operators) as that intended for solving the actual topology optimization problem.
The present problem is similar to the Target Matching Problem solved in (Wang & Tai 2007; Tai et al. 2008; Wang & Yang 2009). The target matching problems are defined here as multi-objective optimization problems under uncertainty, which are more difficult (e.g. more nonlinear) and computationally intensive.

### 4.1. Formulation

The test problem makes use of the design space shown in Fig. 5, which has one support point, two loading points and one output point. The problem does not represent a structural analysis problem, but the original terms 'support', 'loading' and 'output' are still used for ease of reference. Loading point 1 is positioned anywhere along the left boundary and loading point 2 anywhere along the right boundary. The position of the output point is fixed as shown in Fig. 5. The support point is positioned in a specified area marked as 'under uncertainty', and its position is random within that area. The target geometry is shown in Fig. 6. The aim is therefore to evolve structures that match this target geometry as closely as possible. The problem presented here is more difficult than the original problem described in (Wang & Tai 2007), since the support point is under uncertainty, which makes the geometry more complex and not easy to converge to the target.

### Figure 5.

Design space of Target Matching Problem.

### Figure 6.

Target geometry.

### Figure 7.

Formulation of Target Matching Problem 1.

The problem is formulated with the following two objectives and two constraints: a distance objective, a material objective, a forbidden area constraint and a prescribed area constraint. The problem is defined with the help of Fig. 7. The distance objective is given by

$$f_{distance} = d_l \tag{5}$$

where $d_l$ is the centroid-to-centroid Euclidean distance between the actual loading point 1 and the actual support point. The material objective is given by

$$f_{material} = \sum_{i=1}^{n} x_i \tag{6}$$

where $x_i$ is the material density of the $i$-th element in the design space, with a value of either 0 or 1 to represent that the element is either void or material (solid), respectively, and $n$ is the total number of elements in the discretized design space. In other words, this objective function is the summation of the material densities of all elements in the current geometry. The forbidden area constraint can be written as

$$g_{forbidden} = \sum_{i=1}^{n_f} y_i \le 0 \tag{7}$$

where $y_i$ is the material density of the $i$-th element in the forbidden area, and $n_f$ is the total number of elements in the forbidden area. In other words, the summation of the material densities of the elements in the forbidden area is required to be less than or equal to zero. The prescribed area constraint can be written as

$$g_{prescribed} = n_p - \sum_{i=1}^{n_p} z_i \le 0 \tag{8}$$

where $z_i$ is the material density of the $i$-th element in the prescribed area, and $n_p$ is the total number of elements in the prescribed area. In other words, the summation of the material densities of the elements in the prescribed area is required to be greater than or equal to the total number of elements in that area.

### 4.2. Main results

The target matching problem to be solved is defined by the design space shown in Fig. 5. The optimization was run for 1001 generations with a population size of 100 per generation. The local search procedure is triggered once every ten generations. By the end of the evolutionary process, 132,113 objective function evaluations had been performed. One of the solutions at the end of 1001 generations is shown in Fig. 8. It is the same as the target solution shown in Fig. 6.
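For concreteness, the four quantities of Eqs. (5)–(8) can be evaluated directly on a binary material grid. The Python sketch below is illustrative only — the grid, I/O positions and areas are made up, and the actual evaluation is embedded in the authors' own code:

```python
def evaluate_design(grid, load1, support, forbidden, prescribed):
    """Evaluate the two objectives and two constraints of the formulation
    on a binary material grid (1 = solid, 0 = void); cell indices stand in
    for element centroids."""
    f_distance = ((load1[0] - support[0]) ** 2 + (load1[1] - support[1]) ** 2) ** 0.5
    f_material = sum(sum(row) for row in grid)
    g_forbidden = sum(grid[r][c] for (r, c) in forbidden)                      # <= 0
    g_prescribed = len(prescribed) - sum(grid[r][c] for (r, c) in prescribed)  # <= 0
    return f_distance, f_material, g_forbidden, g_prescribed

# Toy 4 x 4 design with made-up I/O positions and areas:
grid = [[1, 1, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 1],
        [0, 0, 0, 1]]
print(evaluate_design(grid, load1=(0, 0), support=(3, 3),
                      forbidden=[(0, 3)], prescribed=[(0, 0), (3, 3)]))
```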
As can be seen from the result, the support point is at the extreme point where the element number is 37. Fig. 9 illustrates how the solution shown in Fig. 8 is obtained by applying the local search: the Hooke and Jeeves method is applied to determine the search path, using the tournament selection criteria stated in Section 2. Each data point is labeled with its index, where some indices coincide. At the start point, $f_{distance}$ is 37.6 and $f_{material}$ is 118. After the local search, the worst case, labeled as 9, is obtained with a distance objective value of 49.5 and a material objective value of 142. This figure demonstrates the Hooke and Jeeves direct search method for function maximization.

Fig. 10 shows a plot of the best distance objective $f_{distance}$ and the corresponding solution's $f_{material}$ versus generation number. The $f_{distance}$ and $f_{material}$ values on the plot corresponding to any particular generation number belong to that generation's non-dominated feasible solution having the best distance objective. The plot starts at generation number 6, as until this generation there is no feasible solution in the population. Fig. 11 shows the corresponding plots of the best material objective $f_{material}$ and the corresponding solution's distance objective $f_{distance}$ versus generation number.

### Figure 8.

The optimal solution at the 1001st generation.

### Figure 9.

Path of the local search.

### Figure 10.

History of the best distance objective ($f_{distance}$).

### Figure 11.

History of the best material objective ($f_{material}$).

Fig. 12 shows a plot in the objective space, where solid shape markers are used to denote the feasible non-dominated solutions at particular sample generations, viz. the 51st, 101st, 301st, 501st and 1001st generations. Although Fig. 12 shows all the non-dominated solutions at any particular generation, only one or two distinct points (in the objective space) can be seen for each generation. However, a few distinct solutions in the design variable space may have the same objective function values, and such solutions coincide in the objective space. The number shown in parentheses next to each point marker indicates the total number of such coincident solutions.

### Figure 12.

Non-dominated solutions in objective space at sample generations.

### 4.3. Discussion

For the test problem, the results are summarized in Section 4.2. The hybrid GA proves its efficiency by converging the two objective function values ($f_{distance}$ and $f_{material}$) to the optimal values. The recurrent fluctuations in Fig. 10 and Fig. 11 show the effect of the local search on the hybrid algorithm, and Fig. 9 shows how the local search works.

## 5. Optimization of structures under load uncertainties

#### Figure 13.

Design domain.

The optimization problem can be formulated as follows:

$$\begin{aligned} \underset{(\mathbf{x}_n, \mathbf{p})}{\text{minimize}} \quad & \{\, w(\mathbf{x}_n, \mathbf{p}),\; d(\mathbf{x}_n, \mathbf{p}) \,\} \\ \underset{\mathbf{p}}{\text{maximize}} \quad & \{\, w(\mathbf{x}_{n0}, \mathbf{p}),\; d(\mathbf{x}_{n0}, \mathbf{p}) \,\}, \quad \mathbf{x}_{n0} \subset C_N \\ \text{subject to} \quad & g_d = d - 0.000635 \le 0 \\ & g_{stress} \le 0 \end{aligned} \tag{9}$$

where $w$ is the weight of the structure and $d$ is the displacement of loading point 2. $\mathbf{p}$ is the vector of loads under uncertainty, that is, $P_1$ in this problem. The local search, which is triggered every 10 generations in this work, is applied only to the selected $C_N$ solutions. The constraint on the vertical displacement, $g_d$, is used to prevent the large deformations which are otherwise likely to occur. A constraint on the maximum stress in the structure (to prevent fatigue or failure) is also important.
A dimensionless expression for the stress constraint may be written as

$$g_{stress} = \frac{\sigma_{peak}^{von\,Mises} - \sigma_y}{\sigma_y} \le 0 \tag{10}$$

where $\sigma_{peak}^{von\,Mises}$ is the peak von Mises stress and $\sigma_y$ is the tensile yield strength of the material.

The optimization procedure and finite element analysis have been implemented in a C++ program running in the Windows environment of a PC. The values of the objective and constraint functions for every design are derived from the results of a FE analysis of the designed structure. The optimization was run for 501 generations (with a population size of 100 per generation), by the end of which 46,415 objective function evaluations had been performed. The values of $w_1$ and $w_2$ in Equation (4) are 1 and 100, respectively.

Three of the non-dominated solutions at the end of 501 generations are shown in Fig. 14. Fig. 14(a) shows the solution with the best weight objective under the worst load case, where $P_1$ is 55.5 N. Fig. 14(c) shows the solution with the best displacement objective under the worst load case, where $P_1$ is 55.4 N. One solution with median weight and displacement objectives is given in Fig. 14(b). Fig. 15 shows a plot of the best weight objective ($w$) and the corresponding solution's displacement objective ($d$) versus generation number. The $w$ and $d$ values on the plot corresponding to any particular generation number belong to that generation's non-dominated feasible solution which has the best weight objective.

#### Figure 14.

Three non-dominated solutions at the 501st generation.

#### Figure 15.

History of the best weight objective ($w$).

#### Figure 16.

History of the best displacement objective ($d$).

Fig. 16 shows the corresponding plots of the best displacement objective ($d$) and the corresponding solution's weight objective ($w$) versus generation number. As can be seen from Fig. 15 and Fig. 16, there are some fluctuations because of the anti-optimization. Fig. 17 shows a plot in the objective space, where the solid shape markers denote the feasible non-dominated solutions at particular sample generations.

#### Figure 17.

Plot of non-dominated solutions and elites at some sample generations.

## 6. Conclusion

The versatility and effectiveness of the topology optimization methodology developed in this work rest on three key components: an efficient morphological geometry representation that defines practical and valid structural geometries, a compatible graph-theoretic chromosome encoding and reproduction system that embodies topological and shape characteristics, and a multi-objective hybrid GA with a local search strategy as the worst-case-scenario technique of anti-optimization. The use of the local search strategy helps to direct and focus the genetic search in the uncertain design variable space. A multi-objective target matching problem with known solutions has been formulated and solved to demonstrate the validity of the presented algorithm. Simulation results of the structural optimization under load uncertainty are encouraging, indicating that the hybrid algorithm integrating local search as anti-optimization is applicable. The proposed constrained tournament selection method works well, and the computational cost is reasonable.

## References

1. Agoston, M. K. (2005). Computer Graphics and Geometric Modeling, Springer-Verlag New York.
2. Au, S. K. (2005). "Reliability-based design sensitivity by efficient simulation." Computers & Structures 83(14): 1048-1061.
3. Ayyub, B. M. (1997). Uncertainty Modeling and Analysis in Civil Engineering, John Wiley.
4. Ben-Haim, Y. & Elishakoff, I. (1990). Convex Models of Uncertainty in Applied Mechanics, Elsevier Science Publishers, Dordrecht.
5. Deb, K. (2000). "An efficient constraint handling method for genetic algorithms." Computer Methods in Applied Mechanics and Engineering 186(2-4): 311-338.
6. Deb, K. & Goel, T. (2001). A hybrid multi-objective evolutionary approach to engineering shape design, Springer-Verlag, Berlin.
7. Elishakoff, I. (1995). "Essay on uncertainties in elastic and viscoelastic structures: from A. M. Freudenthal's criticisms to modern convex modeling." Computers and Structures 56(6): 871.
8. Elishakoff, I. (1995). "An idea on the uncertainty triangle." Editors Rattle Space, The Shock and Vibration Digest 22(10): 1.
9. Han, J., Manousiouthakis, V., et al. (1997). "Global optimization of chemical processes using the interval analysis." Korean Journal of Chemical Engineering 14(4): 270-276.
10. Ishibuchi, H. & Murata, T. (1998). "A multi-objective genetic local search algorithm and its application to flowshop scheduling." IEEE Transactions on Systems, Man and Cybernetics, Part C 28(3): 392-403.
11. Ishibuchi, H., Yoshida, T., et al. (2002). Balance between genetic search and local search in hybrid evolutionary multi-criterion optimization algorithms. Proceedings of the Genetic and Evolutionary Computation Conference, New York, July 9-13, 2002.
12. Jaszkiewicz, A. (2003). "Do multiple-objective metaheuristics deliver on their promises? A computational experiment on the set-covering problem." IEEE Transactions on Evolutionary Computation 7(2): 133-143.
13. Ju, H., Vasilios, H., et al. (1997). "Global optimization of chemical processes using the interval analysis." Korean Journal of Chemical Engineering 14(4): 270-276.
14. Maenghyo, C. & Seung Yun, R. (2004). "Optimization of laminates with free edges under uncertainty subject to extension, bending and twisting." International Journal of Solids and Structures 41(1): 227-245.
15. Martin, E. T., Hassan, R. A., et al. (2004). "Comparing the N-branch genetic algorithm and the multi-objective genetic algorithm." AIAA Journal 42(7): 1495-1500.
16. Michalewicz, Z., Deb, K., et al. (2000). "Test-case generator for nonlinear continuous parameter optimization techniques." IEEE Transactions on Evolutionary Computation 4(3): 197-215.
17. Michalewicz, Z. & Schoenauer, M. (1996). "Evolutionary algorithms for constrained parameter optimization problems." Evolutionary Computation 4(1): 1.
18. Moore, R. E. (1966). Interval Analysis, Prentice-Hall, Englewood Cliffs, NJ.
19. Qiu, Y. & Rao, S. S. (2005). "A fuzzy approach for the analysis of unbalanced nonlinear rotor systems." Journal of Sound and Vibration 284(1-2): 299-323.
20. Qiu, Z. & Elishakoff, I. (2001). "Anti-optimization technique - a generalization of interval analysis for nonprobabilistic treatment of uncertainty." Chaos, Solitons and Fractals 12(9): 1747-1759.
21. Ray, T., Tai, K., et al. (2001). "Multiobjective design optimization by an evolutionary algorithm." Engineering Optimization 33(4): 399-424.
22. Schmidt, M. & Michalewicz, Z. (2000). Test-case generator TCG-2 for nonlinear parameter optimisation. Proceedings of the 2000 Congress on Evolutionary Computation, Vols 1 and 2: 728-735.
23. Srinivas, N. & Deb, K. (1994). "Multi-objective function optimization using nondominated sorting genetic algorithms." Evolutionary Computation 2(3): 221-248.
24. Tai, K. & Akhtar, S. (2005). "Structural topology optimization using a genetic algorithm with a morphological geometric representation scheme." Structural and Multidisciplinary Optimization 30(2): 113-127.
25. Tai, K. & Prasad, J. (2007). "Target-matching test problem for multiobjective topology optimization using genetic algorithms." Structural and Multidisciplinary Optimization 34(4): 333-345.
26. Tai, K. & Wang, N. (2007). An enhanced chromosome encoding and morphological representation of geometry for structural topology optimization using GA. 2007 IEEE Congress on Evolutionary Computation, Singapore, 25-28 September 2007.
27. Tai, K., Wang, N. F., et al. (2008). Target geometry matching problem with conflicting objectives for multiobjective topology design optimization using GA. 2008 IEEE Congress on Evolutionary Computation, Hong Kong, China, 1-6 June 2008.
28. Venter, G. & Haftka, R. T. (1996). Two species genetic algorithm for designing composite laminates subjected to uncertainty. Proceedings of the 37th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Salt Lake City, Utah, April 1996.
29. Wang, N. & Tai, K. (2007). Handling objectives as adaptive constraints for multiobjective structural optimization. 2007 IEEE Congress on Evolutionary Computation, Singapore, 25-28 September 2007.
30. Wang, N. & Tai, K. (2007). A hybrid genetic algorithm for multiobjective structural optimization. 2007 IEEE Congress on Evolutionary Computation, Singapore, 25-28 September 2007.
31. Wang, N. F. & Tai, K. (2008). "Design of grip-and-move manipulators using symmetric path generating compliant mechanisms." Journal of Mechanical Design 130(11): 112305.
32. Wang, N. F. & Yang, Y. W. (2009). Target geometry matching problem for hybrid genetic algorithm used to design structures subjected to uncertainty. 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18-21 May 2009.
33. Wang, N. F., Yang, Y. W., et al. (2008). Optimization of structures under load uncertainties based on hybrid genetic algorithm. 2008 IEEE Congress on Evolutionary Computation, Hong Kong, China, 1-6 June 2008.
34. Zadeh, L. A. (1978). "Fuzzy sets as a basis for a theory of possibility." Fuzzy Sets and Systems 1(1): 3-28.
https://tex.stackexchange.com/questions/256642/new-chapter-after-appendix-in-lyx/301576
# New chapter after Appendix in LyX

Is it possible in LyX to start a new chapter after the Appendix? No matter whether I start a new chapter or paragraph, it continues writing in the Appendix.

• Welcome to TeX.SX! It is easier to help you if you add a minimal working example that takes the form \documentclass{...}\usepackage{....}\begin{document}...\end{document}. If possible, it should compile and have the minimum amount of code needed to illustrate your problem. The idea is to make it easier for people to troubleshoot your problem - and doing this makes it much more likely that some one will! – user30471 Jul 23 '15 at 11:41
• You could use the appendix package. Then use ERT to insert \begin{subappendices} and \end{subappendices}. – scottkosty Jul 23 '15 at 23:24

## 2 Answers

This probably should be a duplicate but I couldn't find it. I also don't have LyX installed, but I imagine that the solution should be the same (so this is a LaTeX solution). Apologies if LyX is honestly different.

The problem is that \appendix redefines how chapters are printed. It also resets the chapter counter. You have not said how the next chapter should be numbered, but it seems reasonable to keep the same numbering, which means that we need to save the current chapter number when \appendix is called (I do this using \preto from the etoolbox package) and then restore the value of the chapter counter when we switch back to chapters. Below I define a \resumechapters command that does these things.

Here is the code (the MWE output, with page breaks suppressed, is not shown here):

```latex
\documentclass[a4paper,12pt]{book}
\makeatletter
\usepackage{etoolbox}
\newcounter{savedchapter}% for remembering the last chapter number
\preto\appendix{\setcounter{savedchapter}{\arabic{chapter}}}% remembering!
\newcommand\resumechapters{% the \appendix command with some tweaks
  \setcounter{chapter}{\arabic{savedchapter}}% restore chapter number
  \setcounter{section}{0}% reset section counter
  \gdef\@chapapp{\chaptername}% reset chapter name
  \gdef\thechapter{\@arabic\c@chapter}% make chapter numbers arabic
}
\makeatother
\let\cleardoublepage\relax% compressed output of MWE
\begin{document}
\chapter{A chapter}
\appendix
\chapter{An appendix}
\resumechapters
\chapter{Another chapter}
\end{document}
```

There is an environment called subappendices in the package appendix specifically for this. E.g.

```latex
\documentclass{book}
\usepackage{appendix}
\begin{document}
\chapter{thesis article one}
\section{introduction}
\begin{subappendices}
\section{extra info}
\end{subappendices}
\chapter{thesis article two}
\end{document}
```

This gives the expected output (figure not shown). Note that: within the subappendices environment, an appendix is introduced by a \section command in chaptered documents; otherwise it is introduced by a \subsection command. Effectively, this provides for appendices at the end of a main document division, as an integral part of the division. The subappendices environment supports only the title and titletoc options. See: http://mirror.hmc.edu/ctan/macros/latex/contrib/appendix/appendix.pdf
https://physicscatalyst.com/graduation/coordinate-transformation/
# Coordinate transformation: translation, inclined and rotation

This article is about coordinate transformation. In this article we will look at coordinate transformation in the cases of translation, inclination and rotation of the S' frame of reference with respect to the S frame of reference. This article is mainly for B.Sc. first year and comes under the subject Mechanics.

## Coordinate transformation

If we define the position of a particle in two different frames of reference, then in both cases the projections of the particle come out to be different, and the relations between the projections of this particle in the two different frames of reference are known as their transformation equations. If there is no relative velocity between the frames of reference and they are rotated through a certain angle, then the transformation does not depend on time. In contrast, when there is a certain relative velocity between them, the transformation equations depend on time.

## Transformation equations for frames of reference involving translation

Consider two frames of reference S and S' whose origins are O and O' and whose observers are at the points O and O', as shown below in Figure 1. If the position vector of any point P in S is $\vec{r}$, then its position vector in S' would be $\vec{r'}$.

Figure 1

According to the figure,

$$\vec{r'} = \vec{r} - \vec{r_0} \tag{1}$$

Differentiating it w.r.t. time,

$$\frac{d\vec{r'}}{dt} = \frac{d\vec{r}}{dt} - \frac{d\vec{r_0}}{dt}$$

If $\vec{r_0}$ is constant, then $\frac{d\vec{r_0}}{dt} = 0$. Therefore,

$$\frac{d\vec{r'}}{dt} = \frac{d\vec{r}}{dt}, \quad \text{i.e.} \quad \vec{v'} = \vec{v} \tag{2}$$

Again differentiating it w.r.t. time,

$$\frac{d\vec{v'}}{dt} = \frac{d\vec{v}}{dt}, \quad \text{or} \quad \vec{a'} = \vec{a} \tag{3}$$

So in the translated frame of reference S' the position of the particle would be different, but the velocity and acceleration are the same as measured in the S frame of reference. Equations (1), (2) and (3) are time-independent equations.

## Coordinate transformations in reference frames having uniform relative translational motion

In Figure 1, if the frame of reference S' has translational motion with constant velocity $\vec{v}$ w.r.t. the frame of reference S, then at any time their axes remain parallel, but the position of the origin O' depends on time. Suppose that at $t = 0$ the origins O and O' of both frames of reference coincide. Then at time $t$ the position vector of O' w.r.t. O would be $\vec{r_0} = \vec{v}t$. Hence

$$\vec{r'} = \vec{r} - \vec{r_0} = \vec{r} - \vec{v}t \tag{4}$$

Differentiating it w.r.t. time, we get the velocity of the particle:

$$\vec{v'} = \frac{d\vec{r}}{dt} - \frac{d\vec{r_0}}{dt} = \vec{u} - \vec{v} \tag{5}$$

where $\vec{u}$ is the velocity of the particle in S. The acceleration of the particle is

$$\vec{a'} = \frac{d\vec{u}}{dt} - \frac{d\vec{v}}{dt} = \vec{a} \tag{6}$$

as $\vec{v}$ is constant. Hence the position and velocity of the particle change according to equations (4) and (5) when the frame is moving with uniform relative motion, but the acceleration remains unchanged. Equations (4), (5) and (6) are time-dependent transformation equations.

## Transformation in an inclined frame of reference

Let x, y, z be the coordinates of a particle in a frame of reference, say S, as shown in Figure 2.

Figure 2

So we have

$$x = PB = OA, \qquad y = PA = OB \tag{7}$$

Another frame of reference S' is inclined w.r.t. the frame of reference S such that the origins of both frames of reference and their z-axes coincide, as shown in Figure 2.
Now the coordinates of the point P in the S' frame of reference would be

$$x' = PD = OC, \qquad y' = PC = OD \tag{8}$$

Since z and z' coincide, we have

$$z' = z \tag{9}$$

Now from the right-angled triangle $\bigtriangleup OAE$ we have

$$OE = OA\cos\theta = x\cos\theta, \qquad AE = OA\sin\theta = x\sin\theta$$

Again from the right-angled triangle $\bigtriangleup FAP$ we have

$$PF = PA\sin\theta = y\sin\theta, \qquad AF = PA\cos\theta = y\cos\theta$$

Now,

$$x' = PD = PF + FD = PF + OE$$

This implies that

$$x' = x\cos\theta + y\sin\theta \tag{10}$$

Similarly,

$$y' = PC = AF - AE$$

$$y' = y\cos\theta - x\sin\theta \tag{11}$$

Equations (10) and (11) can also be written as

$$x' = x\cos(X'OX) + y\cos(X'OY) \tag{12}$$

$$y' = x\cos(Y'OX) + y\cos(Y'OY) \tag{13}$$

Equations (10), (11), (12) and (13) are the transformation equations for an inclined frame of reference.

## Three dimensional coordinate transformation

Figure 3

If all the axes of the frame of reference $S'$ are inclined to those of the frame of reference $S$, then the coordinate transformation between the frames of reference follows the rule

\begin{align} x' &= x\cos(X'OX) + y\cos(X'OY) + z\cos(X'OZ)\\ y' &= x\cos(Y'OX) + y\cos(Y'OY) + z\cos(Y'OZ)\\ z' &= x\cos(Z'OX) + y\cos(Z'OY) + z\cos(Z'OZ) \end{align}

If the unit vectors along $x, y, z$ are $\hat i, \hat j, \hat k$ and those along $x', y', z'$ are $\hat i', \hat j', \hat k'$, then

\begin{align*} \cos(X'OX) &= \hat i' \cdot \hat i = a_{11}, & \cos(X'OY) &= \hat i' \cdot \hat j = a_{12}, & \cos(X'OZ) &= \hat i' \cdot \hat k = a_{13},\\ \cos(Y'OX) &= \hat j' \cdot \hat i = a_{21}, & \cos(Y'OY) &= \hat j' \cdot \hat j = a_{22}, & \cos(Y'OZ) &= \hat j' \cdot \hat k = a_{23},\\ \cos(Z'OX) &= \hat k' \cdot \hat i = a_{31}, & \cos(Z'OY) &= \hat k' \cdot \hat j = a_{32}, & \cos(Z'OZ) &= \hat k' \cdot \hat k = a_{33} \end{align*}

Hence,

\begin{align} x' &= a_{11}x + a_{12}y + a_{13}z \tag{14a}\\ y' &= a_{21}x + a_{22}y + a_{23}z \tag{14b}\\ z' &= a_{31}x + a_{32}y + a_{33}z \tag{14c} \end{align}

In matrix form,

$$\begin{bmatrix} x'\\ y'\\ z' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x\\ y\\ z \end{bmatrix}$$

The $3 \times 3$ matrix of direction cosines

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

is also known as the rotation matrix.

## Transformation of velocity

By differentiating the transformation equations with respect to time, we get the velocity components in the $S'$ frame of reference:

\begin{align} v'_x = \frac{dx'}{dt} &= a_{11}\frac{dx}{dt} + a_{12}\frac{dy}{dt} + a_{13}\frac{dz}{dt} = a_{11}v_x + a_{12}v_y + a_{13}v_z\\ v'_y = \frac{dy'}{dt} &= a_{21}\frac{dx}{dt} + a_{22}\frac{dy}{dt} + a_{23}\frac{dz}{dt} = a_{21}v_x + a_{22}v_y + a_{23}v_z\\ v'_z = \frac{dz'}{dt} &= a_{31}\frac{dx}{dt} + a_{32}\frac{dy}{dt} + a_{33}\frac{dz}{dt} = a_{31}v_x + a_{32}v_y + a_{33}v_z \end{align}

## Transformation of acceleration
Differentiating the equations for velocity with respect to $t$, we get

\begin{align} a'_x = \frac{d^2x'}{dt^2} &= a_{11}\frac{d^2x}{dt^2} + a_{12}\frac{d^2y}{dt^2} + a_{13}\frac{d^2z}{dt^2} = a_{11}a_x + a_{12}a_y + a_{13}a_z \tag{15a}\\ a'_y = \frac{d^2y'}{dt^2} &= a_{21}\frac{d^2x}{dt^2} + a_{22}\frac{d^2y}{dt^2} + a_{23}\frac{d^2z}{dt^2} = a_{21}a_x + a_{22}a_y + a_{23}a_z \tag{15b}\\ a'_z = \frac{d^2z'}{dt^2} &= a_{31}\frac{d^2x}{dt^2} + a_{32}\frac{d^2y}{dt^2} + a_{33}\frac{d^2z}{dt^2} = a_{31}a_x + a_{32}a_y + a_{33}a_z \tag{15c} \end{align}

If $S$ is an inertial frame of reference and no force is acting on the point $P$, then

$$\frac{d^2x}{dt^2} = \frac{d^2y}{dt^2} = \frac{d^2z}{dt^2} = 0$$

Putting this into equations (15a), (15b) and (15c),

$$\frac{d^2x'}{dt^2} = \frac{d^2y'}{dt^2} = \frac{d^2z'}{dt^2} = 0$$

This implies that the frame of reference $S'$ is also an inertial frame of reference, and that equations (14a), (14b) and (14c) are time-independent transformation equations for the coordinates of a rotated frame of reference.
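As a quick numerical check of the above, consider a rotation about the z-axis by an angle $\theta$ (a sketch under this assumption, so that $a_{11} = \cos\theta$, $a_{12} = \sin\theta$, $a_{21} = -\sin\theta$, $a_{22} = \cos\theta$, $a_{33} = 1$). The same direction-cosine matrix transforms position, velocity and acceleration, and a zero acceleration remains zero, so $S'$ stays inertial:

```python
import math

def rotate(theta, v):
    """Apply the z-axis rotation block of the direction-cosine matrix:
    x' = x cos(theta) + y sin(theta), y' = -x sin(theta) + y cos(theta), z' = z."""
    x, y, z = v
    return (x * math.cos(theta) + y * math.sin(theta),
            -x * math.sin(theta) + y * math.cos(theta),
            z)

theta = math.radians(30)
r = (1.0, 2.0, 0.5)    # position components in S
v = (0.2, -0.1, 0.0)   # velocity components in S
a = (0.0, 0.0, 0.0)    # zero acceleration (free particle in S)

print(rotate(theta, r))  # transformed coordinates
print(rotate(theta, v))  # velocity transforms by the same matrix
print(rotate(theta, a))  # stays (0, 0, 0): S' is also inertial
```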
https://za.limehousetownhall.org.uk/21132-removing-frameplot-borders-from-plotim-output.html
# Removing frame/plot borders from plot.im output

I am plotting point density using the plot.im command in spatstat, and my output always has a frame around the plotted image that becomes thicker when I increase the resolution for export. I tried frame=F, axes=F, plot.frame=FALSE, bty='n', but none of them seem to fix the issue. Anyone know a solution to this?

Use box=FALSE (see ?plot.im: argument box, a logical value specifying whether a box should be drawn):

```r
Z <- setcov(owin())
tc <- colourmap(rainbow(128), breaks=seq(-1, 2, length=129))
plot(Z, col=tc)
plot(Z, col=tc, box=FALSE)
```