URL
stringlengths 15
1.68k
| text_list
sequencelengths 1
199
| image_list
sequencelengths 1
199
| metadata
stringlengths 1.19k
3.08k
|
---|---|---|---|
https://unix.stackexchange.com/questions/43264/first-and-last-day-of-a-month/154364 | [
"First and last day of a month\n\nGiven two numbers, month and year, how can I compute the first and the last day of that month ? My goal is to output these three lines:\n\n1. month / year (month in textual form but that is trivial)\n\n2. for each day of the month: name of the day of the week for the current day: Fri. & Sat. & Sun. [...]\n\n3. day number within the month: 1 & 2 & 3 [...] & 28 & .. ?\n\nI'm looking for a solution using GNU date or BSD date (on OS X).\n\n• What OS are you using? It might help others give you the right answer. – Kevdog777 Jul 17 '12 at 10:35\n\nSome time ago I had similar issue. There is my solution:\n\n\\$ ./get_dates.sh 2012 07\nThe first day is 01.2012.07, Sunday\nThe last day is 31.2012.07, Tuesday\n\\$ cal\nJuly 2012\nSu Mo Tu We Th Fr Sa\n1 2 3 4 5 6 7\n8 9 10 11 12 13 14\n15 16 17 18 19 20 21\n22 23 24 25 26 27 28\n29 30 31\n\nScript itself:\n\n#!/bin/bash\n# last day for month\nlastday() {\n# ja fe ma ap ma jn jl ag se oc no de\nmlength=('xx' '31' '28' '31' '30' '31' '30' '31' '31' '30' '31' '30' '31')\n\nyear=\\$1\nmonth=\\$2\n\nif [ \\$month -ne 2 ] ; then\necho \\${mlength[\\$month]}\nreturn 0\nfi\n\nleap=0\n((!(year%100))) && { ((!(year%400))) && leap=1 ; } || { ((!(year%4))) && leap=1 ; }\n\nfeblength=28\n((leap)) && feblength=29\necho \\$feblength\n}\n\n# date to Julian date\ndate2jd() {\n\nyear=\\$1\nmonth=\\$2\nday=\\$3\nlday=\\$(lastday \\$year \\$month) || exit \\$?\n\nif ((day<1 || day> lday)) ; then\necho day out of range\nexit 1\nfi\n\necho \\$(( jd = day - 32075\n+ 1461 * (year + 4800 - (14 - month)/12)/4\n+ 367 * (month - 2 + (14 - month)/12*12)/12\n- 3 * ((year + 4900 - (14 - month)/12)/100)/4\n- 2400001 ))\n}\n\njd2dow()\n{\ndays=('Sunday' 'Monday' 'Tuesday' 'Wednesday' 'Thursday' 'Friday' 'Saturday')\n\njd=\\$1\nif ((jd<1 || jd>782028)) ; then\necho julian day out of range\nreturn 1\nfi\n\n((dow=(jd+3)%7))\n\necho \\${days[dow]}\n}\n\necho -n \"The first day is 01.\\$1.\\$2, \"\njd2dow \\$(date2jd \\$1 \\$2 01)\necho -n \"The last day is \\$(lastday \\$1 \\$2).\\$1.\\$2, \"\njd2dow \\$(date2jd \\$1 \\$2 \\$(lastday \\$1 \\$2))\n\nI didn't have GNU date on machines I need it, therefore I didn't solve it with date. May be there is more beautiful solution.\n\nI'll be honest; from the way you're asking the question I get the sense you've been assigned some homework, so I'll leave a few steps out of the answer as an exercise for the reader:\n\nYou'll want to take a good look at the date manual page; especially the -d flag, which allows you to examine any given day. The first day of month M in year Y would be \"M/01/Y\"\n\nGetting the last day of the month, your best bet is to add 1 to the number of the month you were given, then deduct one day in the date.\n\nHint: date can actually accept some extensive arithmetic; I can, for instance, say date -d \"01/07/2012 + 1 month - 1 day\" and it will give me the correct answer.\n\nYou can find out how to display the output you want in 2) and 3) by studying the \"format\" section of the date manpage.\n\nHints: Look at %a and %d\n\n• The +\"format string\" parameter to date will also be helpful here, e.g. date -d \"01/07/2012 + 1 month - 1 day\" +\"%d\" – daniel kullmann Jul 17 '12 at 11:32\n• @Antoine: The world is a bigger place then you might have realized. There's lots of places that will have school on any given July day. – Christian Sep 30 '13 at 14:11\n• i got this need in a real life script, not sure why one would directly assume this is homework, and @apseyy 's answer is just perfect for my need – mazs Apr 29 at 15:22\n# Last month:\nl_first_date=\\$(date -d \"`date +%Y%m01` -1 month\" +%Y-%m-%d)\nl_last_date=\\$(date -d \"`date +%Y%m01` -1 day\" +%Y-%m-%d)\n\n# This month:\nt_first_date=\\$(date -d \"`date +%Y%m01`\" +%Y-%m-%d)\nt_last_date=\\$(date -d \"`date +%Y%m01` +1 month -1 day\" +%Y-%m-%d)\n\n# Next month:\nn_first_date=\\$(date -d \"`date +%Y%m01` +1 month\" +%Y-%m-%d)\nn_last_date=\\$(date -d \"`date +%Y%m01` +2 month -1 day\" +%Y-%m-%d)\n\n# Print everything\necho \"Last month: \\$l_first_date to \\$l_last_date\"\necho \"This month: \\$t_first_date to \\$t_last_date\"\necho \"Next month: \\$n_first_date to \\$n_last_date\"\n**Previous Month Start and End date**\n\nmonth_year=\\$(date +'%m %Y' | awk '!--\\$1{\\$1=12;\\$2--}1')\nm=\\${month_year% *}\ny=\\${month_year##* }\nd=\\$(cal \\$m \\$y | paste -s - | awk '{print \\$NF}')\nfirst_date=\\$(printf '01-%02s-%s' \\$m \\$y)\nlast_date=\\$(printf '%s-%02s-%s' \\$d \\$m \\$y)\necho \\$first_date \\$last_date\n\n**Currunt Month Start and End date**\n\nmonth_year=\\$(date +'%m %Y' | awk '!\\$1{\\$1=12;\\$2--}1')\nm=\\${month_year% *}\ny=\\${month_year##* }\nd=\\$(cal \\$m \\$y | paste -s - | awk '{print \\$NF}')\nfirst_date=\\$(printf '01-%02s-%s' \\$m \\$y)\nlast_date=\\$(printf '%s-%02s-%s' \\$d \\$m \\$y)\necho \\$first_date \\$last_date\n\n**Next Month Start and End date**\n\nmonth_year=\\$(date +'%m %Y' | awk '!++\\$1{\\$1=12;\\$2--}1')\nm=\\${month_year% *}\ny=\\${month_year##* }\nd=\\$(cal \\$m \\$y | paste -s - | awk '{print \\$NF}')\nfirst_date=\\$(printf '01-%02s-%s' \\$m \\$y)\nlast_date=\\$(printf '%s-%02s-%s' \\$d \\$m \\$y)\necho \\$first_date \\$last_date\ndate -d \"20121101 + 1 month - 1 day\" +%Y%m%d\n• At least you should specify which of the question's 3 points are answered by your code. – manatwork Nov 1 '12 at 9:06"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83629066,"math_prob":0.84514934,"size":415,"snap":"2019-43-2019-47","text_gpt3_token_len":112,"char_repetition_ratio":0.14111923,"word_repetition_ratio":0.0,"special_character_ratio":0.3180723,"punctuation_ratio":0.2,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95938426,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-24T03:35:33Z\",\"WARC-Record-ID\":\"<urn:uuid:fe75cdff-4f00-4ed7-9acf-a32ea8e3d1a9>\",\"Content-Length\":\"168358\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:51bad8a8-62ed-4bcf-b9f2-6f9e6476d1d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d4e90da-8328-45d8-8fe7-1631be392d14>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://unix.stackexchange.com/questions/43264/first-and-last-day-of-a-month/154364\",\"WARC-Payload-Digest\":\"sha1:Y62QYGOVMMAQBO72234I4LZTJUJYG37G\",\"WARC-Block-Digest\":\"sha1:G6ELUTJBN62QX46UHU323TJYEASNDFNC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987838289.72_warc_CC-MAIN-20191024012613-20191024040113-00484.warc.gz\"}"} |
https://cs.stackexchange.com/questions/17914/how-do-i-find-the-shortest-representation-for-a-subset-of-a-powerset | [
"# How do I find the shortest representation for a subset of a powerset?\n\nI'm looking for an efficient algorithm for the following problem or a proof of NP-hardness.\n\nLet $\\Sigma$ be a set and $A\\subseteq\\mathcal{P}(\\Sigma)$ a set of subsets of $\\Sigma$. Find a sequence $w\\in \\Sigma^*$ of least length such that for each $L\\in A$, there is a $k\\in\\mathbb{N}$ such that $\\{ w_{k+i} \\mid 0\\leq i < |L| \\} = L$.\n\nFor example, for $A = \\{\\{a,b\\},\\{a,c\\}\\}$, the word $w = bac$ is a solution to the problem, since for $\\{a,b\\}$ there's $k=0$, for $\\{a,c\\}$ there's $k=1$.\n\nAs for my motivation, I'm trying to represent the set of edges of a finite automaton, where each edge can be labeled by a set of letters from the input alphabet. I'd like to store a single string and then keep a pair of pointers to that string in each edge. My goal is to minimize the length of that string.\n\n• In other words, the problem is to order sets into a sequence $L_1, \\dots, L_n$ maximizing $\\sum |L_i \\cap L_{i+1}|$? Nov 11 '13 at 12:01\n• @KarolisJuodelė, I don't think that's enough, since for $L_i, L_{i+1}, L_{i+2}$ you may have to put elements in $L_i \\cap L_{i+2}$ into $w$ twice even if they're in $L_{i+1}$. E.g. $\\{\\{a,b\\},\\{a,c\\},\\{a,d\\}\\}$, you can share $a$ between the first two or the last two, but not among them all, the shortest $w$ would be $bacad$. Nov 11 '13 at 13:18\n• @KarolisJuodelė, furthermore, there are cases where for some $i\\neq j$, $L_i\\subseteq L_j$, which makes it even more complicated as in such a case the \"neighborhood ordering\" may not be total. Nov 11 '13 at 13:24\n• Just to cheer up, if I got the question right, if the set is $A=\\{\\{c,o,w\\},\\{o,w,l\\},\\{w,o,l,f\\}\\}$, then a word $cowowlwolf$ satisfies the requirements given, but (possible) minimum such word and solution is $cowlf$? :) Nov 12 '13 at 11:52\n• @MindaugasK, that is correct, very nice example :) Nov 12 '13 at 12:04\n\nI believe I found a reduction from Hamiltonian path, thus proving the problem NP-hard.\n\nCall the word $w\\in\\Sigma^*$ a witness for $A$, if it satisfies the condition from the question (for each $L\\in A$, there's $m\\geq 1$ such that $\\{w_{m+i}\\mid 0\\leq i<|L|\\} = L$).\n\nConsider the decision version of the original problem, i.e. decide whether for some $A$ and $k\\geq 0$, there's a witness for $A$ of length at most $k$. This problem can be solved using the original problem as an oracle in polynomial time (find the shortest witness, then compare its length to $k$).\n\nNow for the core of the reduction. Let $G=(V,E)$ be a simple, undirected, connected graph. For each $v\\in V$, let $L_v=\\{v\\}\\cup\\{e\\in E\\mid v\\in e\\}$ be the set containing the vertex $v$ and all of its adjacent edges. Set $\\Sigma=E$ and $A=\\{L_v\\mid v\\in V\\}$. Then $G$ has a Hamiltonian path if and only if there is a witness for $A$ of length at most $2|E|+1$.\n\nProof. Let $v_1e_1v_2\\ldots e_{n-1}v_n$ be a Hamiltonian path in $G$ and $H=\\{e_1, e_2, \\ldots, e_{n-1}\\}$ the set of all edges on the path. For each vertex $v$, define the set $U_v=L_v\\setminus H$. Choose an arbitrary ordering $\\alpha_v$ for each $U_v$. The word $w=\\alpha_{v_1}e_1\\alpha_{v_2}e_2\\ldots e_{n-1}\\alpha_{v_n}$ is a witness for $A$, since $L_{v_1}$ is represented by the substring $\\alpha_1e_1$, $L_{v_n}$ by $e_{n-1}\\alpha_n$, and for each $v_i$, $i\\notin\\{1, n\\}$, $L_{v_i}$ is represented by $e_{i-1}u_{v_i}e_i$. Furthermore, each edge in $E$ occurs twice in $w$ with the exception of $|V|-1$ edges in $H$, which occur once, and each vertex in $V$ occurs once, giving $|w|=2|E|+1$.\n\nFor the other direction, let $w$ be an arbitrary witness for $A$ of length at most $2|E|+1$. Clearly, each $e\\in E$ and $v\\in V$ occurs in $w$ at least once. Without loss of generality, assume that each $e\\in E$ occurs in $w$ at most twice and each $v\\in V$ occurs exactly once; otherwise a shorter witness can be found by removing elements from $w$. Let $H\\subseteq E$ be the set of all edges occurring in $w$ exactly once. Given the assumptions above, it holds that $|w|=2|E|-|H|+|V|$.\n\nConsider a contiguous substring of $w$ of the form $ue_1e_2\\ldots e_kv$, where $u,v\\in V$, $e_i\\in E$. We say that $u,v$ are adjacent. Notice that if $e_i\\in H$, then $e_i=\\{u,v\\}$, because $e_i$ occurs only once, yet it is adjacent to two vertices in $G$. Therefore, at most one of $e_i$ can be in $H$. Similarly, no edge in $H$ can occur in $w$ before the first vertex or after the last vertex.\n\nNow, there are $|V|$ vertices, therefore $|H|\\leq |V|-1$. From there, it follows that $|w|\\geq 2|E|+1$. Since we assume $|w|\\leq 2|E|+1$, we get equality. From there we get $|H|=|V|-1$. By pigeonhole principle, there is an edge from $H$ between each pair of vertices adjacent in $w$. Denote $h_1h_2\\ldots h_{n-1}$ all elements from $H$ in the order they appear in $w$. It follows that $v_1h_1v_2h_2\\ldots h_{n-1}v_n$ is a Hamiltonian path in $G$. $\\square$\n\nSince the problem of deciding the existence of Hamiltonian path is NP-hard and the above reduction is polynomial, the original problem is NP-hard too."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86216813,"math_prob":0.9999348,"size":3121,"snap":"2021-43-2021-49","text_gpt3_token_len":1051,"char_repetition_ratio":0.10747514,"word_repetition_ratio":0.02952756,"special_character_ratio":0.33162448,"punctuation_ratio":0.115440115,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000063,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T02:52:39Z\",\"WARC-Record-ID\":\"<urn:uuid:8bb5e7da-00cf-49e1-ba84-0209ef309ad1>\",\"Content-Length\":\"176490\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b23705f8-ba19-4f7d-8b70-9b79a27f85a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:1757899b-cc7b-4476-a19d-d8568cddaeca>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/17914/how-do-i-find-the-shortest-representation-for-a-subset-of-a-powerset\",\"WARC-Payload-Digest\":\"sha1:MQO4AWIZURSXUXY3F47Y52LGV5S6WP5L\",\"WARC-Block-Digest\":\"sha1:URVQ6OTNP7V4GBA2UFGJZUQQALYA5LU6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588246.79_warc_CC-MAIN-20211028003812-20211028033812-00434.warc.gz\"}"} |
https://www.colorhexa.com/00cb3b | [
"# #00cb3b Color Information\n\nIn a RGB color space, hex #00cb3b is composed of 0% red, 79.6% green and 23.1% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 70.9% yellow and 20.4% black. It has a hue angle of 137.4 degrees, a saturation of 100% and a lightness of 39.8%. #00cb3b color hex could be obtained by blending #00ff76 with #009700. Closest websafe color is: #00cc33.\n\n• R 0\n• G 80\n• B 23\nRGB color chart\n• C 100\n• M 0\n• Y 71\n• K 20\nCMYK color chart\n\n#00cb3b color description : Strong cyan - lime green.\n\n# #00cb3b Color Conversion\n\nThe hexadecimal color #00cb3b has RGB values of R:0, G:203, B:59 and CMYK values of C:1, M:0, Y:0.71, K:0.2. Its decimal value is 52027.\n\nHex triplet RGB Decimal 00cb3b `#00cb3b` 0, 203, 59 `rgb(0,203,59)` 0, 79.6, 23.1 `rgb(0%,79.6%,23.1%)` 100, 0, 71, 20 137.4°, 100, 39.8 `hsl(137.4,100%,39.8%)` 137.4°, 100, 79.6 00cc33 `#00cc33`\nCIE-LAB 71.572, -69.802, 57.068 22.144, 43.025, 11.275 0.29, 0.563, 43.025 71.572, 90.162, 140.732 71.572, -66.569, 77.954 65.593, -54.528, 35.724 00000000, 11001011, 00111011\n\n# Color Schemes with #00cb3b\n\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #cb0090\n``#cb0090` `rgb(203,0,144)``\nComplementary Color\n• #2bcb00\n``#2bcb00` `rgb(43,203,0)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #00cba1\n``#00cba1` `rgb(0,203,161)``\nAnalogous Color\n• #cb002b\n``#cb002b` `rgb(203,0,43)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #a100cb\n``#a100cb` `rgb(161,0,203)``\nSplit Complementary Color\n• #cb3b00\n``#cb3b00` `rgb(203,59,0)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #3b00cb\n``#3b00cb` `rgb(59,0,203)``\n• #90cb00\n``#90cb00` `rgb(144,203,0)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #3b00cb\n``#3b00cb` `rgb(59,0,203)``\n• #cb0090\n``#cb0090` `rgb(203,0,144)``\n• #007f25\n``#007f25` `rgb(0,127,37)``\n• #00982c\n``#00982c` `rgb(0,152,44)``\n• #00b234\n``#00b234` `rgb(0,178,52)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #00e542\n``#00e542` `rgb(0,229,66)``\n• #00fe4a\n``#00fe4a` `rgb(0,254,74)``\n• #19ff5b\n``#19ff5b` `rgb(25,255,91)``\nMonochromatic Color\n\n# Alternatives to #00cb3b\n\nBelow, you can see some colors close to #00cb3b. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00cb08\n``#00cb08` `rgb(0,203,8)``\n• #00cb19\n``#00cb19` `rgb(0,203,25)``\n• #00cb2a\n``#00cb2a` `rgb(0,203,42)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #00cb4c\n``#00cb4c` `rgb(0,203,76)``\n• #00cb5d\n``#00cb5d` `rgb(0,203,93)``\n• #00cb6e\n``#00cb6e` `rgb(0,203,110)``\nSimilar Colors\n\n# #00cb3b Preview\n\nText with hexadecimal color #00cb3b\n\nThis text has a font color of #00cb3b.\n\n``<span style=\"color:#00cb3b;\">Text here</span>``\n#00cb3b background color\n\nThis paragraph has a background color of #00cb3b.\n\n``<p style=\"background-color:#00cb3b;\">Content here</p>``\n#00cb3b border color\n\nThis element has a border color of #00cb3b.\n\n``<div style=\"border:1px solid #00cb3b;\">Content here</div>``\nCSS codes\n``.text {color:#00cb3b;}``\n``.background {background-color:#00cb3b;}``\n``.border {border:1px solid #00cb3b;}``\n\n# Shades and Tints of #00cb3b\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000702 is the darkest color, while #f2fff6 is the lightest one.\n\n• #000702\n``#000702` `rgb(0,7,2)``\n• #001a08\n``#001a08` `rgb(0,26,8)``\n• #002e0d\n``#002e0d` `rgb(0,46,13)``\n• #004213\n``#004213` `rgb(0,66,19)``\n• #005519\n``#005519` `rgb(0,85,25)``\n• #00691e\n``#00691e` `rgb(0,105,30)``\n• #007d24\n``#007d24` `rgb(0,125,36)``\n• #00902a\n``#00902a` `rgb(0,144,42)``\n• #00a430\n``#00a430` `rgb(0,164,48)``\n• #00b735\n``#00b735` `rgb(0,183,53)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\n• #00df41\n``#00df41` `rgb(0,223,65)``\n• #00f246\n``#00f246` `rgb(0,242,70)``\n• #07ff4f\n``#07ff4f` `rgb(7,255,79)``\n• #1aff5d\n``#1aff5d` `rgb(26,255,93)``\n• #2eff6b\n``#2eff6b` `rgb(46,255,107)``\n• #42ff79\n``#42ff79` `rgb(66,255,121)``\n• #55ff87\n``#55ff87` `rgb(85,255,135)``\n• #69ff95\n``#69ff95` `rgb(105,255,149)``\n• #7dffa2\n``#7dffa2` `rgb(125,255,162)``\n• #90ffb0\n``#90ffb0` `rgb(144,255,176)``\n• #a4ffbe\n``#a4ffbe` `rgb(164,255,190)``\n• #b7ffcc\n``#b7ffcc` `rgb(183,255,204)``\n• #cbffda\n``#cbffda` `rgb(203,255,218)``\n• #dfffe8\n``#dfffe8` `rgb(223,255,232)``\n• #f2fff6\n``#f2fff6` `rgb(242,255,246)``\nTint Color Variation\n\n# Tones of #00cb3b\n\nA tone is produced by adding gray to any pure hue. In this case, #5e6d62 is the less saturated color, while #00cb3b is the most saturated one.\n\n• #5e6d62\n``#5e6d62` `rgb(94,109,98)``\n• #56755f\n``#56755f` `rgb(86,117,95)``\n• #4e7d5c\n``#4e7d5c` `rgb(78,125,92)``\n• #468558\n``#468558` `rgb(70,133,88)``\n• #3e8d55\n``#3e8d55` `rgb(62,141,85)``\n• #379452\n``#379452` `rgb(55,148,82)``\n• #2f9c4f\n``#2f9c4f` `rgb(47,156,79)``\n• #27a44b\n``#27a44b` `rgb(39,164,75)``\n• #1fac48\n``#1fac48` `rgb(31,172,72)``\n• #17b445\n``#17b445` `rgb(23,180,69)``\n• #10bb42\n``#10bb42` `rgb(16,187,66)``\n• #08c33e\n``#08c33e` `rgb(8,195,62)``\n• #00cb3b\n``#00cb3b` `rgb(0,203,59)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00cb3b is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.50677013,"math_prob":0.811853,"size":3666,"snap":"2022-40-2023-06","text_gpt3_token_len":1665,"char_repetition_ratio":0.13681048,"word_repetition_ratio":0.0073664826,"special_character_ratio":0.54391706,"punctuation_ratio":0.23224352,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98623466,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T07:15:32Z\",\"WARC-Record-ID\":\"<urn:uuid:ee59066e-a3e0-4eb6-b055-2c1c9c4edc4b>\",\"Content-Length\":\"36105\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:693558fe-6d05-47ab-ab21-f619eb4df4e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:97ff0b23-787c-4e9c-af17-8229a7ddcbfb>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00cb3b\",\"WARC-Payload-Digest\":\"sha1:BVJESLE5WVS6AK34FWMZVJ2HLEWBHXI7\",\"WARC-Block-Digest\":\"sha1:R7SIAIMNAMUXHNUJ55DECSTORHO5KSHP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337480.10_warc_CC-MAIN-20221004054641-20221004084641-00456.warc.gz\"}"} |
https://physics.stackexchange.com/questions/246887/noethers-theorem-for-space-translational-symmetry | [
"# Noether's theorem for space translational symmetry\n\nImagine a ramp potential of the form $U(x) = a*x + b$ in 1D space. This corresponds to a constant force field over $x$. If I do a classical mechanics experiment with a particle, the particle behaves in the \"same way\" no matter what the initial position of the particle is. This should give rise to space translational symmetry.\n\nNow, consider Newton's equations, $\\dot{p} = -U'(x)$. For the potential above, $U'(x) = a \\neq 0$. Therefore, $\\dot{p} \\neq 0$. This is not in accordance with Noether's theorem for translational symmtery. What am I missing here?\n\nTranslational symmetry in the sense of the standard formulation of Noether theorems means that the Lagrangian is invariant under the action of the group of spatial translations. This is not the case in your example because $U$ does not admit such invariance.\n\nHowever there is another, more physical, version of the idea of translational invariance for a physical system:\n\nThe class of solutions of the equation of motion is invariant under spatial displacements.\n\nIn other words, if $$x=x(t)$$ is a solution with initial conditions $$x(0)=x_0\\:,\\quad \\frac{dx}{dt}|_{t=0}=\\dot{x}_0\\:,$$ the solution with initial conditions $$x(0)=x_0 + s\\:,\\quad \\frac{dx}{dt}|_{t=0}=\\dot{x}_0$$ must be $$x=x(t)+s$$ that is, the initial solution changed by the same initial given translation at each time $t$. This fact is by no means trivial.\n\nThis invariance requirement is valid for your example as you can directly check. However, since this invariance requirement is weaker than the one used in the standard version of Noether theorem, it does not imply that the momentum is conserved.\n\nIn Lagrangian formulation the two notions of invariance are not equivalent. The Noetherian one implies the second one but the converse implication is false. In Hamiltonian formulation they are equivalent provided we restrict ourselves to deal with canonical transformations.\n\nThe natural question however arises whether this weaker notion of invariance of your system implies the existence of a conserved quantity (diferent from the momentum). The answer is positive in our case. There is in fact another, weaker, version of Noether theorem stating that, if the Lagrangian is not invariant under the one-parameter ($\\epsilon$) group of transformations $$x \\to x_\\epsilon\\:, \\quad \\dot{x} \\to \\dot{x}_\\epsilon = \\frac{d}{dt}x_\\epsilon$$ but, at first order in the parameter $\\epsilon$, the transformed Lagrangian differs from the initial one just due to a total derivative $$\\frac{d}{dt}f(t,x) = \\frac{\\partial f}{\\partial x}\\dot{x} + \\frac{\\partial f}{\\partial t}$$ then there is a conserved quantity along the solution of the motion equations: $$I(t,x, \\dot{x}) = \\frac{\\partial L}{\\partial \\dot{x}} \\partial_\\epsilon x_\\epsilon|_{\\epsilon=0} - f(t,x)\\:.$$ The proof is a trivial generalization of the know classical one. In the considered case $$L(t,x, \\dot{x}) = \\frac{m}{2}\\dot{x}^2 - ax-b\\:.$$ Thus, since our group of transformations is $$x \\to x+\\epsilon\\:, \\quad \\dot{x} \\to \\dot{x}\\:,$$ we have $$\\partial_{\\epsilon}|_{\\epsilon=0} L(t,x_\\epsilon, \\dot{x}_\\epsilon)= -a = \\frac{d}{dt} (-at)\\:.$$ We conclude that there exists a conserved quantity. This is $$I(t,x, \\dot{x}) = m \\dot{x} + at\\:.$$ A posteriori this is obvious from the equations of motion themselves, but it also arises form a weak symmetry of the Lagrangian.\n\nThe Lagrangian is\n\n\\begin{equation} L = \\frac{1}{2} \\dot{x}^2 - ax-b. \\end{equation}\n\nIntroducing spatial translation $x \\rightarrow x+\\Delta$ for constant $\\Delta$ we see that\n\n\\begin{equation} L \\rightarrow L' = \\frac{1}{2} \\dot{x}^2 - ax - a\\Delta -b. \\end{equation}\n\nTherefore the action changes as\n\n\\begin{equation} \\delta S = \\int{dxdt \\; (L'-L)} = \\int{dx dt \\; (-a\\Delta)} \\neq 0. \\end{equation}\n\nTherefore $U(x)$ has broken the translational symmetry. There can be no associated conserved current.\n\n• Can you please explain how you arrived at the expression for $\\delta S$ from the definition of action, especially with regards to $dx$ – IanDsouza Apr 2 '16 at 9:27\n• I'm using $L$ to mean the Lagrangian density rather than Lagrangian, so the action would be defined as $\\int{d^4x L}$ in 4D spacetime. Since you have only one spatial dimension, $d^4x \\rightarrow dx dt$. Then $\\delta S = \\int{dx dt \\delta L}$ which gives the expression for $\\delta S$ I have written. – Orca Apr 2 '16 at 9:49\n• In response to your statement about doing experiments at different points in space and getting the same result, I would say that for this potential $U(x)$, you do not actually get the same result for your experiment. Consider $U(1)$ and $U(2)$ at different $x=1$ and $x=2$. These are different values, therefore you experience different potentials at different spacetime points and cannot get the same result for your experiment. An analogy would be doing the same experiment in different gravitational fields, and expecting the same result. – Orca Apr 2 '16 at 9:55\n\nI) Let the Lagrangian be\n\n$$\\tag{1} L~=~\\frac{m}{2}v^2-U(x), \\qquad v~:=~\\dot{x}.$$\n\nLet the force\n\n$$\\tag{2} F~=~-U'(x)$$\n\nbe a constant.\n\nII) Infinitesimal translations\n\n$$\\tag{3} \\delta x~=~\\varepsilon$$\n\nis a quasi-symmetry\n\n$$\\tag{4} \\delta L ~=~\\varepsilon \\frac{df}{dt}, \\qquad f~:=~Ft$$\n\nof the Lagrangian (1). Here $\\varepsilon$ is an infinitesimal parameter. The corresponding bare Noether charge is\n\n$$\\tag{5} Q^0 ~:=~p, \\qquad p~:=~\\frac{\\partial L}{\\partial v}~=~mv,$$\n\nso the corresponding full Noether charge is\n\n$$\\tag{6} Q~:=~Q^0-f~=~mv - Ft.$$\n\nNoether's theorem states that the quantity (6) is conserved in time, as one can easily verify.\n\nIII) The system also possesses other quasi-symmetries (and thereby conservation laws by Noether's theorem). E.g. the following infinitesimal transformation\n\n$$\\tag{7} \\delta x~=~\\varepsilon t$$\n\nis a quasi-symmetry\n\n$$\\tag{8} \\delta L ~=~\\varepsilon \\frac{df}{dt}, \\qquad f~:=~mx+\\frac{F}{2}t^2$$\n\nof the Lagrangian (1). The corresponding bare Noether charge is\n\n$$\\tag{9} Q^0 ~:=~p t,$$\n\nso the corresponding full Noether charge is\n\n$$\\tag{10} Q~:=~Q^0-f~=~mvt- mx -\\frac{F}{2}t^2.$$\n\nIV) If one think a bit more, one can probably construct a quasi-symmetry whose corresponding Noether charge is the acceleration itself.\n\n• As often happens our answers are very similar :) – Valter Moretti Apr 2 '16 at 12:24\n• @ValterMoretti: True :) – Qmechanic Apr 2 '16 at 12:26\n\nNoether's theorem tells us that a conserved quantity is related to a symmetry of the action, where the action $S$ is given by:\n\n$$S = \\int L dt$$\n\nwhere $L$ is the Lagrangian given by:\n\n$$L = T - V$$\n\nSince the potential $V$ is a function of position the Lagrangian and hence the action is not symmetric under displacements in space.\n\n• Okay.. I see that the consideration is the symmetry of the action. But now I am confused about some physical intuition. I was lead to believe that translational symmetry in a system is intricately related to the fact that if I do an experiment at different points in space and get similar results (up to the initial translation), the system is said to be translationally symmetric. This seems to be true for the above setup. But now I assume that this notion of translational symmetry is not important for Noether's theorem. Is this correct? – IanDsouza Apr 2 '16 at 8:17\n• @IanDsouza: correct. This is a common misunderstanding of what Noether's theorem states. – John Rennie Apr 2 '16 at 9:13"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8351328,"math_prob":0.9996296,"size":2804,"snap":"2020-24-2020-29","text_gpt3_token_len":736,"char_repetition_ratio":0.14571428,"word_repetition_ratio":0.0,"special_character_ratio":0.2628388,"punctuation_ratio":0.11090573,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996996,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T17:50:45Z\",\"WARC-Record-ID\":\"<urn:uuid:c9354ff1-4152-4283-b712-28b9d48b6c13>\",\"Content-Length\":\"178287\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e58ceff2-51b6-4505-8c16-2e36b368d4e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac7a67bb-448b-4b5b-993c-7ef16c4dbd27>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/246887/noethers-theorem-for-space-translational-symmetry\",\"WARC-Payload-Digest\":\"sha1:IE673HMVMYBQQIBRGAYVJUGE6LDMYQ76\",\"WARC-Block-Digest\":\"sha1:Z5DN7ONGBSI6SV4MV756JWCF2IAWHXLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347405558.19_warc_CC-MAIN-20200529152159-20200529182159-00486.warc.gz\"}"} |
https://mathematicsgre.com/viewtopic.php?f=1&t=730 | [
"## Units in General GRE\n\nForum for the GRE subject test in mathematics.\nyoyobarn\nPosts: 80\nJoined: Sun Dec 19, 2010 7:01 am\n\n### Units in General GRE\n\nHi,\n\nMay I know if the general GRE tests units?\n\nI am familiar with SI units, but am totally new to units like yards, feet, etc, which I understand is used more often in the US.\n\nSo, do we have to memorize stuff like 1 mile= 1760 yards= 5280 feet ?\n\nIf so, what are the more common units in the GRE (general)?\n\nThanks.\n\nmhancock743\nPosts: 35\nJoined: Sat Oct 15, 2011 2:08 pm\n\n### Re: Units in General GRE\n\nI wouldn't worry about it too much, but I suppose it wouldn't hurt to know some of the more common ones if only for your own benefit, especially if you plan on living/studying here in the U.S. for a length of time.\n\nFrom my experience I don't recall any problems where conversions were necessary, whether it was SI <-> Imperial, Imperial -> Imperial, or SI -> SI. This is not to say you won't see any yourself, however.\n\nexxx\nPosts: 18\nJoined: Sat Nov 12, 2011 2:04 pm\n\n### Re: Units in General GRE\n\nhttp://bit.ly/vUMFzA\nQuestions may involve units of measurement such as English units or metric units. If an answer to a question requires converting one unit of measurement to another, then the relationship between the units is provided, unless the relationship is a\ncommon one, such as minutes to hours, or centimeters to\nmeters.\n\nI didn't see any questions on unit conversions on my test at all.\n\nI think \"12 inches = 1 foot\" could maybe be considered one of the \"common\" ones. I don't think any other imperial unit conversion would be considered common. Maybe \"16 ounces = 1 pound\", but I doubt it.\n\nAs far as real-life here in the states goes, these are the conversions in order of importance:\n\nTop five:\n12 in = 1 ft\n5280 ft = 1 mi\n16 oz = 1 pound\n8 fl. oz = 1 cup\nand\n3 ft = 1 yrd (only for American Football fans",
null,
")\n\nTemperature:\n0 deg C = 32 deg F (freezing point of water)\n100 deg C = 212 deg F (boiling point of water)\nF<-->C is linear.\nSo from these two points anyone on a site called Mathematics GRE should be able to come up with the formula for C to F in a couple seconds in their head: F=(9/5)C+32.\nI personally choose to remember that the slope of the transformation is 9/5, and the freezing point (y-intercept) is 32. I think it's easier to remember the number 9/5 than the number 212, but only because I grew up with it. Nowadays it's easy to remember 212, though, because it's the area code for Manhattan. So, you just have to remember that \"water boils in Hell's Kitchen\" --and know the area code for Manhattan like I do, lol",
null,
".\n\nLess common measurements:\n2000 pounds = 1 ton\n4 cups = 1 quart\n4 quarts = 1 gallon\n(For \"quart\" think \"quatre=four\", as in: \"a quart is four times a cup and a quarter of a gallon\"--gallon and cup being the two most common volume measurements in America)\n2 cups = 1 pint\n(For pint, think \"p as in pound\", as in one pound of water, a pound of water is 16 fl. oz, and 16 fl. oz is 2 cups. So a pint is two cups.)"
] | [
null,
"https://mathematicsgre.com/images/smilies/icon_razz.gif",
null,
"https://mathematicsgre.com/images/smilies/icon_razz.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95667666,"math_prob":0.87504,"size":4630,"snap":"2021-31-2021-39","text_gpt3_token_len":1240,"char_repetition_ratio":0.09965413,"word_repetition_ratio":0.78634363,"special_character_ratio":0.28660908,"punctuation_ratio":0.109792285,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9585637,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-29T06:16:55Z\",\"WARC-Record-ID\":\"<urn:uuid:9981fc0b-6b86-4c63-a5fe-70fcfc614e8c>\",\"Content-Length\":\"23371\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3d89f31-136b-4bdd-9ccf-e334a28d7a4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa164aee-554c-48bf-a924-009db25a5979>\",\"WARC-IP-Address\":\"64.91.251.77\",\"WARC-Target-URI\":\"https://mathematicsgre.com/viewtopic.php?f=1&t=730\",\"WARC-Payload-Digest\":\"sha1:MR6KRMVXZ7IXUI66YZNQI2KH6IGDWUAI\",\"WARC-Block-Digest\":\"sha1:FN6BUZ2AMZN7CGNO4ZY3NMTOUG7SD2YX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153816.3_warc_CC-MAIN-20210729043158-20210729073158-00700.warc.gz\"}"} |
http://www.basicpi.org/2020/07/27/motor-algorithm-part-4-pwm-output/ | [
"# Motor Algorithm – Part 4 – PWM Output\n\nMy previous entry show how to pre-calculate a Sinus table so we avoid doing this full speed because the next step is to convert this into a PWM pulse. A PWM pulse is measured in time – length – so we need to know the max length of a pulse. That is decided by the frequency we use. 4000Hz is really a minimum, thought if you drive a slow motor you can get away with a slower algorithm. This is the frequency of the timer interrupt we will use to re-calculate PWM output, so a pulse of 1 is 1/4000 in length.\n\nThe second is that we need to apply torque where “1” will be 100% torque, 0.5 will be 50% torque etc.\n\nThe third element is to scale to a length number matching the timer we use. To sum this up I pre-calculate a pwmFactor as follows:\n\n`pwmFactor = 1/4000*timerScale*torqueFactor;`\n\nI only need to update this if I change torque. This now gives me a factor I can multiply with the vector to calculate the length of the PWM pulse for each phase.\n\n```aPWM = vector[x].a * pwmFactor;\nbPWM = vector[x].b * pwmFactor;\ncPWM = vector[x].c * pwmFactor;```\n\nThe final step is to output this pulse by switching pins on/off. Assuming I did not have a Hardware timer I might need to create a much faster interrupt that only switched pins on/off, but luckily the motor timers on MCU’s like STM32F405 will do this for us – I will be using Timer 1 that is a specialized motor timer that will do a lot of the work that otherwise would be hard to achieve – hard, not impossible. We run motors using slow AVR’s and PIC’s, but using a modern MCU with a motor timer is just so much easier.\n\nAs I am driving blind-folded with no knowledge of my current rotor position I just do exactly the same as in my Trapezoidal example and iterate through the table. The speed I iterate (change sinus entry) is now the motor speed. Assuming you use 4000 Hertz and have a motor with only 3 coils you can routhly acieve 10 rotations per sec without skipping vector entryies. This is ca 600 RPM, so if you use a faster motor you really should increase freuency, but increased frequency means more CPU used for math and more loss in the MOSFET’s. You also have an upper limit of what driver, MOSFET and motor will support. A common range is 4000 to 20,000 Hertz.\n\nAt this point I don’t know the current rotor position so I just use the vector table knowing that as I iterate the motor will be moving most efficient on a 90 degree vector as illustrated below:",
null,
"If I had known the current rotor position I would have looked up the vector table 90 degrees before or after the current rotor position. But, to do so I need to use BEMF, Hall or current sensors to calculate my position. In theory we could calculate this every time we create a PWM output, but we face two challenges (1) CPU hungry math and (2) inaccurate input.\n\nPhase currents can in theory be measured calculated for every PWM output, but you usually have so much noise that you end up filtering – meaning you will not have ADC measured currents as often as you output PWM. Hall have a lower accuracy. So the real algorithm usually use a trick where we use sensors to correct rotor position.\n\nBy adding rotor vector calculations I basically are doing FOC (Field Oriented Control). I will be using both Current- and Hall sensors. My cutter motor have no Hall and it will be driving fast so this is excellent for current sensors. The wheel drivers do however have Hall sensors and will be driving slow – I do not expect any valid input from current sensors on the wheels, but we will see – so I will be driving based on Hall only.\n\nOne advice – before starting putting on PWM on a motor you need to activate temperature- and current- damage thresholds. I have four temperature sensors and two of them are located in between MOSFET’s. If the temperature raise fast or we ever achieve a selected threshold we simply cut the motor to avoid that electronics get damaged.\n\nI also need to do this on phase currents – the MOSFET I use have a maximum of 100A, so if we ever reach – lets say 75A – we cut the motor. Maximum pulse is 400A. This is MOSFET specific data and I have used a wide SOP8 with padding underneath – a package used by several MOSFET’s so I can adapt MOSFET to application – I have 60V MOSFET’s, but I am using IRFP5300 since I had a bunch of them – this have a RDS=1.1mOhm, 30V, 100A etc – excellent for my current applications since I will be using 18V batteries from a local DIY shop.\n\nTwo numbers on a MOSFET are very important – (1) RDS that needs to be as low as possible and (2) switching time that needs to be as fast (short) as possible. As we switch we move into an area where the MOSFET will consume more heat – a low frequence is good as we switch more seldom, but a low frequency is no good for faster motors – this is a tradeoff you need to make knowing that higher freuencies will increasingly heat up your MOSFETs. For a SOP8 style package I assume max 1W dissipation without heatsink – meaning that if we burn more than 1W on the MOSFET temperature starts to raise fast. This imporves with heatsink that I have on each driver – but we are now into the discussions about my boards limitations – my target was 50A, but at some point I will destroy boards to learn these numbers.\n\nI just tested PWM outputs on my board and is happy to see they work, so I only need to get temperature- and current- sensors working and I will be spinning the larger motors."
] | [
null,
"http://www.basicpi.org/wp-content/uploads/2020/07/vectors.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9376703,"math_prob":0.9044358,"size":5389,"snap":"2023-40-2023-50","text_gpt3_token_len":1270,"char_repetition_ratio":0.11699165,"word_repetition_ratio":0.0019627085,"special_character_ratio":0.2328818,"punctuation_ratio":0.06575342,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9732183,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T00:33:07Z\",\"WARC-Record-ID\":\"<urn:uuid:8e107b8b-fa9f-4ac1-b9d1-d6d7289368b8>\",\"Content-Length\":\"45802\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0255ce12-0b58-49a1-a0a6-63f1fff9e149>\",\"WARC-Concurrent-To\":\"<urn:uuid:83b6b613-62c7-4dd9-a9cf-75fe8b52ff5a>\",\"WARC-IP-Address\":\"46.30.215.161\",\"WARC-Target-URI\":\"http://www.basicpi.org/2020/07/27/motor-algorithm-part-4-pwm-output/\",\"WARC-Payload-Digest\":\"sha1:OYATGWXNEVHPQSWXFLN6FDJRNIJSUSYA\",\"WARC-Block-Digest\":\"sha1:KQH45LAMZVCXEHZABYQXQGYXYSGJ5BLY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510529.8_warc_CC-MAIN-20230929222230-20230930012230-00313.warc.gz\"}"} |
https://www.pokrowcenamaterace.pl/2021/07/02-9165.html | [
"•",
null,
"### A Practical Beginner s Guide to Cyclic Voltammetry\n\n· cyclic voltammetry is provided to help the reader with data acquisition and interpretation. Tips and common pitfalls are provided and the reader is encouraged to apply what is learned in short simple training modules provided in the Supporting Information.\n\nGet Price\n•",
null,
"### Lab 1 Cyclic VoltammetryChemistry LibreTexts\n\n· Cyclic voltammetry (CV) is a technique used to study reaction mechanisms that involve the transferring of electrons. The method involves linearly varying an electrode potential between two limits at a specific rate while monitoring the current that develops in an electrochemical cell.\n\nGet Price\n•",
null,
"### Mechanistic modeling of cyclic voltammetry A helpful tool\n\n· Conclusions and future work. The mathematical model of cyclic voltammetry response was developed for the glucose biosensors operating in aerobic conditions in the absence of glucose at low scan rates. The model was validated with the biosensors with various amounts of immobilized mediator and different enzyme/membrane film compositions.\n\nGet Price\n•",
null,
"### Cyclic Voltammetry PrincipleUK Essays\n\nCyclic voltammetry is the most widely used technique for acquiring qualitative information about electrochemical reactions 34 35 . The power of cyclic voltammetry results from its ability to provide considerable information on the thermodynamics and kinetics of heterogeneous electron transfer reactions 47 48 and coupled chemical reactions 36 37 .\n\nGet Price\n•",
null,
"### BASi® Cyclic VoltammetryData Analysis\n\nCyclic voltammetry is an electrochemical technique based off of the measurement of peak current in response to a linear increase in potential of the working electrode. Cyclic voltammetry calculations for peak current are based off of the equation ip = 2.69x105n3/2ACD1/2ν1/2. Cyclic voltammetry reactions are either reversible or quasi-reversible.\n\nGet Price\n•",
null,
"### Cyclic VoltammetryGamry\n\nCyclic voltammetry is the most commonly used electroanalytical technique for ob-taining rapid quantitative data about an electrochemical reaction. The importance of cyclic voltammetry is that it provides a quick result concerning the kinetics of a heterogeneous electron-transfer diffusion coefficients and thermodynamic infor-\n\nGet Price\n•",
null,
"### Cyclic Voltammetryan overview ScienceDirect Topics\n\nGet Price\n•",
null,
"### Cyclic VoltammetryKhalafi- Major Reference Works\n\n· Cyclic voltammetry is the classical method of measuring redox potential as an indication of electronic properties of chemical species. Variable timescale of cyclic voltammetry made it one of the most useful and even unique techniques in measuring the kinetic parameters of electron transfer reactions and coupled electrochemical–chemical reactions.\n\nGet Price\n•",
null,
"### Linear Sweep and Cyclic Voltametry The Principles\n\n· Cyclic Voltammetry. Cyclic voltammetry (CV) is very similar to LSV. In this case the voltage is swept between two values (see below) at a fixed rate however now when the voltage reaches V2 the scan is reversed and the voltage is swept back to V1. A typical cyclic voltammogram recorded for a reversible single electrode transfer reaction is\n\nGet Price\n•",
null,
"### What does cyclic voltammetry tell you Personal blog\n\n· What does cyclic voltammetry tell you Cyclic Voltammetry (CV) is an electrochemical technique which measures the current that develops in an electrochemical cell under conditions where voltage is in excess of that predicted by the Nernst equation. Why do we use cyclic voltammetry In a cyclic voltammetry experiment the working electrode potential is ramped linearly Continue reading\n\nGet Price\n•",
null,
"### A Practical Beginner s Guide to Cyclic Voltammetry\n\n· cyclic voltammetry is provided to help the reader with data acquisition and interpretation. Tips and common pitfalls are provided and the reader is encouraged to apply what is learned in short simple training modules provided in the Supporting Information.\n\nGet Price\n•",
null,
"### Introduction to Cyclic Voltammetry\n\nFor cyclic voltammetry this is a triangular signal as shown in Figure A.2.4. E appl arbitrarily starts at a positive potential scans more negative potential and after 20 s (point b) is reversed back to more positive potentials. At t ¼ 40 s one scan cycle is complete. The time it takes for one complete cycle is\n\nGet Price\n•",
null,
"### Linear Sweep and Cyclic Voltametry The Principles\n\n· Cyclic Voltammetry. Cyclic voltammetry (CV) is very similar to LSV. In this case the voltage is swept between two values (see below) at a fixed rate however now when the voltage reaches V2 the scan is reversed and the voltage is swept back to V1. A typical cyclic voltammogram recorded for a reversible single electrode transfer reaction is\n\nGet Price\n•",
null,
"### Practical Aspects of Cyclic Voltammetry How to Estimate\n\n· Cyclic voltammetry (CV) is the hallmark of electrochemical analysis and it impacts on countless fields outside of chemistry such as materials science photonics cell biology neuroscience electrical engineering and condensed-phase physics. 1–10 Voltammograms provide a wealth of information about the charge-transfer and mass-transport processes at the surfaces of the working electrodes. 11–15 The evolving voltammetry\n\nGet Price\n•",
null,
"### CYCLIC Voltammetry read more chapter 9\n\n· CYCLIC Voltammetry ⇒ read more C.M.A. Brett A.M.O. Brett \"Electrochemistry\" Oxford University Press Oxford 1993 → chapter 9 E. Gileadi \"Electrode\n\nGet Price\n•",
null,
"### Cyclic Voltammetry and Its Applications IntechOpen\n\n· Cyclic voltammetry is a versatile method for scientific investigation and innovation due to the fact that most processes involve electron transfer which makes them be able to be monitored by this technique. Its uses cover characterization synthesis mechanisms and analysis. In all applications the technique can work well with a large variety of compounds including organic inorganic\n\nGet Price\n•",
null,
"### EXPERIMENT 5. CYCLIC VOLTAMMETRYChemistry\n\n· CYCLIC VOLTAMMETRY Objectives 1. To determine the capacitance of electrochemical interfaces. 2. To determine the formal potential and diffusion coefficient of Fe(CN) 6 3-. 3. To use cyclic voltammetry to understand the electrochemistry of Co(NH 3) 6 3 . 4. To investigate the effects of electrode contamination on cyclic voltammetry.\n\nGet Price\n•",
null,
"### Cyclic Voltammetry and Its Applications IntechOpen\n\n· Cyclic voltammetry is a versatile method for scientific investigation and innovation due to the fact that most processes involve electron transfer which makes them be able to be monitored by this technique. Its uses cover characterization synthesis mechanisms and analysis.\n\nGet Price\n•",
null,
"### CYCLIC Voltammetry read more chapter 9\n\n· CYCLIC Voltammetry ⇒ read more C.M.A. Brett A.M.O. Brett \"Electrochemistry\" Oxford University Press Oxford 1993 → chapter 9 E. Gileadi \"Electrode\n\nGet Price\n•",
null,
"### Lab 1 Cyclic VoltammetryChemistry LibreTexts\n\n· Cyclic voltammetry (CV) is a technique used to study reaction mechanisms that involve the transferring of electrons. The method involves linearly varying an electrode potential between two limits at a specific rate while monitoring the current that develops in an electrochemical cell. This experiment is performed under conditions where voltage\n\nGet Price\n•",
null,
"### What does cyclic voltammetry tell you Personal blog\n\n· What does cyclic voltammetry tell you Cyclic Voltammetry (CV) is an electrochemical technique which measures the current that develops in an electrochemical cell under conditions where voltage is in excess of that predicted by the Nernst equation. Why do we use cyclic voltammetry In a cyclic voltammetry experiment the working electrode potential is ramped linearly Continue reading\n\nGet Price\n•",
null,
"### BASi® Cyclic Voltammetry\n\nCyclic voltammetry is the electrochemical equivalent of spectroscopy and is the most powerful tool for examining electrochemical properties of a chemical substance or material. One can learn things about the rates of electron transfer between substances and electrodes and also about the rates and nature of processes coupled to the electron transfer.\n\nGet Price\n•",
null,
"### BASi® Cyclic Voltammetry\n\nCyclic voltammetry is the electrochemical equivalent of spectroscopy and is the most powerful tool for examining electrochemical properties of a chemical substance or material. One can learn things about the rates of electron transfer between substances and electrodes and also about the rates and nature of processes coupled to the electron transfer.\n\nGet Price\n•",
null,
"### Comparison of Photocatalytic Activity and Cyclic\n\n· in which cyclic voltammetry analysis is used to determine the degradation mechanism of photocatalysts. This fundamental and preliminary work is important to gauge if the performance of immobilized photocatalyst would be compromised compared to that of the slurry form. Moreover this\n\nGet Price\n•",
null,
"### EXPERIMENT 5. CYCLIC VOLTAMMETRYChemistry\n\n· CYCLIC VOLTAMMETRY Objectives 1. To determine the capacitance of electrochemical interfaces. 2. To determine the formal potential and diffusion coefficient of Fe(CN) 6 3-. 3. To use cyclic voltammetry to understand the electrochemistry of Co(NH 3) 6 3 . 4. To investigate the effects of electrode contamination on cyclic voltammetry.\n\nGet Price\n•",
null,
"### Cyclic VoltammetryFile ExchangeOriginLab\n\n· Click the Cyclic Voltammetry icon in the Apps Gallery window to open the dialog. Select columns representing scan index voltage and current. Click OK. A graph named Cyclic Voltammetry Tool and a workbook named Cyclic Voltammetry Calculations will be generated. Click left or right arrow at the top of the graph to change scan index.\n\nGet Price\n•",
null,
"### BASi® Cyclic Voltammetry\n\nCyclic voltammetry is the electrochemical equivalent of spectroscopy and is the most powerful tool for examining electrochemical properties of a chemical substance or material. One can learn things about the rates of electron transfer between substances and electrodes and also about the rates and nature of processes coupled to the electron transfer.\n\nGet Price\n•",
null,
"### CYCLIC Voltammetry read more chapter 9\n\n· CYCLIC Voltammetry ⇒ read more C.M.A. Brett A.M.O. Brett \"Electrochemistry\" Oxford University Press Oxford 1993 → chapter 9 E. Gileadi \"Electrode\n\nGet Price\n•",
null,
"### Cyclic Voltammetry and Its Applications IntechOpen\n\n· Cyclic voltammetry is a versatile method for scientific investigation and innovation due to the fact that most processes involve electron transfer which makes them be able to be monitored by this technique. Its uses cover characterization synthesis mechanisms and analysis. In all applications the technique can work well with a large variety of compounds including organic inorganic\n\nGet Price\n•",
null,
"### ELECTROCHEMICAL STUDIES AND CYCLIC\n\n· electrochemical measurement. The cyclic voltammetry was recorded in the range from 0 7 V to 1V. Optimum conditions were establishedby measuring the peak currents in dependence on all parameters. All experiments were carried out under ambient temperature. RESULTS AND DISCUSSION Surface characteristics\n\nGet Price"
] | [
null,
"https://www.pokrowcenamaterace.pl/images/imgs/5.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/73.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/2.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/68.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/74.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/57.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/80.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/58.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/55.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/74.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/38.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/61.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/16.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/47.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/52.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/2.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/71.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/29.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/50.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/62.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/13.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/18.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/32.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/75.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/65.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/25.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/41.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/77.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/14.jpg",
null,
"https://www.pokrowcenamaterace.pl/images/imgs/26.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8511096,"math_prob":0.72328043,"size":11052,"snap":"2021-43-2021-49","text_gpt3_token_len":2312,"char_repetition_ratio":0.20646271,"word_repetition_ratio":0.5947794,"special_character_ratio":0.1708288,"punctuation_ratio":0.05896089,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95251167,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60],"im_url_duplicate_count":[null,6,null,null,null,6,null,null,null,6,null,9,null,5,null,4,null,4,null,6,null,3,null,5,null,3,null,9,null,8,null,6,null,6,null,6,null,null,null,9,null,5,null,5,null,4,null,3,null,9,null,9,null,8,null,9,null,5,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T12:54:25Z\",\"WARC-Record-ID\":\"<urn:uuid:05389889-dabe-4e7a-ad76-34741557411f>\",\"Content-Length\":\"24495\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:178951ff-11b4-4727-aae9-8aebd8fe78b0>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6970a92-2b4e-4b0e-b2f5-11f0dc677f46>\",\"WARC-IP-Address\":\"104.21.42.241\",\"WARC-Target-URI\":\"https://www.pokrowcenamaterace.pl/2021/07/02-9165.html\",\"WARC-Payload-Digest\":\"sha1:XWVY75P3Z3ZB5JLJPVIXCUAXTWM4QLJC\",\"WARC-Block-Digest\":\"sha1:ABUY4ZRJLDOZZ7BJPMS3V7ZCD7DZQ5AB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588153.7_warc_CC-MAIN-20211027115745-20211027145745-00050.warc.gz\"}"} |
https://idaes-pse.readthedocs.io/en/stable/technical_specs/model_libraries/power_generation/unit_models/turbine.html | [
"Turbine (Isentropic)¶\n\nThis is a steam power generation turbine model for the basic isentropic turbine calculations. It is the basis of the TurbineInletStage, TurbineOutletStage <technical_specs/model_libraries/power_generation/unit_models/turbine_inlet:Turbine (Outlet Stage)>, and, TurbineOutletStage <technical_specs/model_libraries/power_generation/unit_models/turbine_inlet:Turbine (Stage)> models.\n\nVariables¶\n\nVariable\n\nSymbol\n\nIndex Sets\n\nDoc\n\nefficiency_isentropic\n\n$$\\eta_{isen}$$\n\ntime\n\nIsentropic efficiency\n\ndeltaP\n\n$$\\Delta P$$\n\ntime\n\nPressure change ($$P_{out} - P_{in}$$) [Pa]\n\nratioP\n\n$$P_{ratio}$$\n\ntime\n\nRatio of discharge pressure to inlet pressure $$\\left(\\frac{P_{out}}{P_{in}}\\right)$$\n\nExpressions¶\n\nThis model provides two expressions that are not available in the pressure changer model.\n\nExpression\n\nSymbol\n\nIndex Sets\n\nDoc\n\nh_is\n\n$$h_{is}$$\n\ntime\n\nIsentropic outlet molar enthalpy [J/mol]\n\ndelta_enth_isentropic\n\n$$\\Delta h_{is}$$\n\ntime\n\nIsentropic enthalpy change ($$h_{is} - h_{in}$$) [J/mol]\n\nwork_isentropic\n\n$$w_{is}$$\n\ntime\n\nIsentropic work (W)\n\nConstraints¶\n\nIn addition to the mass and energy balances provided by the control volume the following equation is used to calculate the outlet enthalpy, so work comes from the control volume energy balance.\n\n$h_{out} = h_{in} - \\eta_{is}\\left(h_{in} - h_{is}\\right)$\n\nInitialization¶\n\nTo initialize the turbine model, a reasonable guess for the inlet condition and deltaP and efficiency should be set by setting the appropriate variables.\n\nTurbineStage Class¶\n\nclass idaes.power_generation.unit_models.helm.turbine.HelmIsentropicTurbine(*args, **kwds)\nParameters\n• rule (function) – A rule function or None. Default rule calls build().\n\n• concrete (bool) – If True, make this a toplevel model. Default - False.\n\n• ctype (class) – Pyomo ctype of the block. Default - pyomo.environ.Block\n\n• default (dict) –\n\nDefault ProcessBlockData config\n\nKeys\ndynamic\n\nIndicates whether this model will be dynamic or not, default = useDefault. Valid values: { useDefault - get flag from parent (default = False), True - set as a dynamic model, False - set as a steady-state model.}\n\nhas_holdup\n\nIndicates whether holdup terms should be constructed or not. Must be True if dynamic = True, default - False. Valid values: { useDefault - get flag from parent (default = False), True - construct holdup terms, False - do not construct holdup terms}\n\nmaterial_balance_type\n\nIndicates what type of mass balance should be constructed, default - MaterialBalanceType.useDefault. Valid values: { MaterialBalanceType.useDefault - refer to property package for default balance type **MaterialBalanceType.none - exclude material balances, MaterialBalanceType.componentPhase - use phase component balances, MaterialBalanceType.componentTotal - use total component balances, MaterialBalanceType.elementTotal - use total element balances, MaterialBalanceType.total - use total material balance.}\n\nenergy_balance_type\n\nIndicates what type of energy balance should be constructed, default - EnergyBalanceType.useDefault. Valid values: { EnergyBalanceType.useDefault - refer to property package for default balance type **EnergyBalanceType.none - exclude energy balances, EnergyBalanceType.enthalpyTotal - single enthalpy balance for material, EnergyBalanceType.enthalpyPhase - enthalpy balances for each phase, EnergyBalanceType.energyTotal - single energy balance for material, EnergyBalanceType.energyPhase - energy balances for each phase.}\n\nmomentum_balance_type\n\nIndicates what type of momentum balance should be constructed, default - MomentumBalanceType.pressureTotal. Valid values: { MomentumBalanceType.none - exclude momentum balances, MomentumBalanceType.pressureTotal - single pressure balance for material, MomentumBalanceType.pressurePhase - pressure balances for each phase, MomentumBalanceType.momentumTotal - single momentum balance for material, MomentumBalanceType.momentumPhase - momentum balances for each phase.}\n\nhas_phase_equilibrium\n\nIndicates whether terms for phase equilibrium should be constructed, default = False. Valid values: { True - include phase equilibrium terms False - exclude phase equilibrium terms.}\n\nhas_pressure_change\n\nIndicates whether terms for pressure change should be constructed, default - False. Valid values: { True - include pressure change terms, False - exclude pressure change terms.}\n\nproperty_package\n\nProperty parameter object used to define property calculations, default - useDefault. Valid values: { useDefault - use default package from parent model or flowsheet, PropertyParameterObject - a PropertyParameterBlock object.}\n\nproperty_package_args\n\nA ConfigBlock with arguments to be passed to a property block(s) and used when constructing these, default - None. Valid values: { see property package for documentation.}\n\nhas_work_transfer\n\nTrue if model a has work transfer term.\n\nhas_heat_transfer\n\nTrue if model has a heat transfer term.\n\n• initialize (dict) – ProcessBlockData config for individual elements. Keys are BlockData indexes and values are dictionaries described under the “default” argument above.\n\n• idx_map (function) – Function to take the index of a BlockData element and return the index in the initialize dict from which to read arguments. This can be provided to overide the default behavior of matching the BlockData index exactly to the index in initialize.\n\nReturns\n\n(HelmIsentropicTurbine) New instance\n\nTurbineStageData Class¶\n\nclass idaes.power_generation.unit_models.helm.turbine.HelmIsentropicTurbineData(component)[source]\n\nBasic isentropic 0D turbine model. This inherits the heater block to get a lot of unit model boilerplate and the mass balance, enegy balance and pressure equations. This model is intended to be used only with Helmholtz EOS property pacakges in mixed or single phase mode with P-H state vars.\n\nSince this inherits BalanceBlockData, and only operates in steady-state or pseudo-steady-state (for dynamic models) the following mass, energy and pressure equations are implicitly writen.\n\n1. Mass Balance:\n\n0 = flow_mol_in[t] - flow_mol_out[t]\n\n2. Energy Balance:\n\n0 = (flow_mol[t]*h_mol[t])_in - (flow_mol[t]*h_mol[t])_out + Q_in + W_in\n\n3. Pressure:\n\n0 = P_in[t] + deltaP[t] - P_out[t]\n\nbuild()[source]\n\nAdd model equations to the unit model. This is called by a default block construnction rule when the unit model is created.\n\ninitialize(outlvl=0, solver=None, optarg=None)[source]\n\nFor simplicity this initialization requires you to set values for the efficency, inlet, and one of pressure ratio, pressure change or outlet pressure."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58187425,"math_prob":0.8998893,"size":6252,"snap":"2022-05-2022-21","text_gpt3_token_len":1439,"char_repetition_ratio":0.14628682,"word_repetition_ratio":0.07793765,"special_character_ratio":0.21209213,"punctuation_ratio":0.13340564,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98006797,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T22:55:48Z\",\"WARC-Record-ID\":\"<urn:uuid:c1d9260c-7e97-4fc9-b5a6-ca965e034334>\",\"Content-Length\":\"30939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:21dc2ecb-b540-4827-82ab-fc04e8af65f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ddc6a4e-6b67-4f0e-9cb8-48ab202cb680>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://idaes-pse.readthedocs.io/en/stable/technical_specs/model_libraries/power_generation/unit_models/turbine.html\",\"WARC-Payload-Digest\":\"sha1:RQY4GFLDM5D7SBWZ5522PXFENCEZBD5T\",\"WARC-Block-Digest\":\"sha1:24JXT4GDYVDJK2OYCRON6NMGF75DHG4T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301063.81_warc_CC-MAIN-20220118213028-20220119003028-00514.warc.gz\"}"} |
https://whatpercentcalculator.com/what-is-1-percent-of-15 | [
"# What is 1 percent of 15?\n\n## (1 percent of 15 is 0.15)\n\n### 1 percent of 15 is 0.15. Explanation: What does 1 percent or 1% mean?\n\nPercent (%) is an abbreviation for the Latin “per centum”, which means per hundred or for every hundred. So, 1% means 1 out of every 100.\n\n### Methods to calculate \"What is 1 percent of 15\" with step by step explanation:\n\n#### Method 1: Diagonal multiplication to calculate 1 percent of 15.\n\n1. For 100, our answer will be 1\n2. For 15, our answer will be x\n3. 100*x = 1*15 (In Step 1 and 2 see colored text; Diagonal multiplications will always be equal)\n4. x = 1*15/100 = 15/100 = 0.15\n\n#### Method 2: Same side division to calculate 1 percent of 15\n\n1. For 100, our answer will be 1\n2. For 15, our answer will be x\n3. 100/15 = 1/x (In Step 1 and 2, see colored text; Same side divisions will always be equal)\n4. 1/x = 100/15 or x/1 = 15/100\n5. x = 15*1/100 = 15/100 = 0.15\n\n#### Method 3: Converting Percentage to Decimal to calculate 1 percent of 15\n\n1. Find the decimal equivalent of 1 i.e. 1/100 = 0.01\n2. Multiply the decimal equivalent (above) to the given number i.e. 15*0.01 = 0.15\n\n### Percentage examples\n\nPercentages express a proportionate part of a total. When a total is not given then it is assumed to be 100. E.g. 1% (read as 1 percent) can also be expressed as 1/100 or 1:100.\n\nExample: If 1% (1 percent) of your savings are invested in stocks, then 1 out of every 100 dollars are invested in stocks. If your savings are \\$10,000, then a total of 1*100 (i.e. \\$100) are invested in stocks.\n\n### Percentage sign (%)\n\nThe percent (per cent i.e. per hundred) sign % is the standard symbol to indicate a percentage, to signify a number or ratio as a fraction of 100. Related signs include the permille (per thousand) sign ‰ and the permyriad (per ten thousand) sign ‱ (also known as a basis point), which indicate that a number is divided by one thousand or ten thousand, respectively.\n\n### Scholarship programs to learn math\n\nHere are some of the top scholarships available to students who wish to learn math.\n\n### Examples to calculate \"What is the percent decrease from X to Y?\"\n\nWhatPercentCalculator.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94131076,"math_prob":0.9910394,"size":4627,"snap":"2021-43-2021-49","text_gpt3_token_len":1406,"char_repetition_ratio":0.30737618,"word_repetition_ratio":0.22233607,"special_character_ratio":0.3447158,"punctuation_ratio":0.06471183,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973498,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T18:29:57Z\",\"WARC-Record-ID\":\"<urn:uuid:32e9110a-d0ff-4fb0-9f3a-9a0f5d0098a7>\",\"Content-Length\":\"18261\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca37e270-a959-447c-8909-9af298d11c18>\",\"WARC-Concurrent-To\":\"<urn:uuid:3eb78b1e-85f0-459c-a6bb-e4479b791259>\",\"WARC-IP-Address\":\"172.67.163.99\",\"WARC-Target-URI\":\"https://whatpercentcalculator.com/what-is-1-percent-of-15\",\"WARC-Payload-Digest\":\"sha1:APXELXMO4ZLACAMAHLNT3GPXTRMZ2SDL\",\"WARC-Block-Digest\":\"sha1:H4VNFSD33QT6PYPW546R4ZZTRYXVNEOC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585737.45_warc_CC-MAIN-20211023162040-20211023192040-00531.warc.gz\"}"} |
https://brilliant.org/discussions/thread/harmonic-progression-2/ | [
"# Harmonic progression\n\nSuppose there is a $1m$ rubber band with an ant on it. Every time the ant walks $1cm$, the rubber band expands and the circumference increases by $1m$, and the length of the ant from the starting point also becomes longer. So, can the ants complete a circle? I think it can't do it, but it can. First, it takes $1\\%$ of the entire rubber band. Then, it went $0.5\\%$. Then, $0.333\\%$, $0.25\\%$, $0.2\\%$... Add them up: $\\frac{1}{100}+\\frac{\\frac{1}{2}}{100}+\\frac{\\frac{1}{3}}{100}+\\cdots$ $= \\frac{1}{100} \\times (\\color{#D61F06}{1+\\frac{1}{2}+\\frac{1}{3}+\\cdots})$ It is a harmonic series, and the result is positive infinity. But we don't need positive infinity, we just need to make it greater than 100. Let $1+\\frac{1}{2}+\\frac{1}{3}+\\cdots+\\frac{1}{a} = 100$ $\\int_{0}^{1}{(1+x+x^2+x^3+\\cdots+x^{a-1}) \\ dx = 100}$ $\\int_0^1 {\\frac{x^a-1}{x-1} \\ dx = 100}$ Then-I don't know how I should find the anti-derivative of this score! If anyone knows, welcome to answer in the comment area! But we also know that Euler-Mascheroni constant - $\\gamma$. So $a \\approx e^{100}$.",
null,
"Note by Raymond Fang\n4 months, 1 week ago\n\nThis discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science.\n\nWhen posting on Brilliant:\n\n• Use the emojis to react to an explanation, whether you're congratulating a job well done , or just really confused .\n• Ask specific questions about the challenge or the steps in somebody's explanation. Well-posed questions can add a lot to the discussion, but posting \"I don't understand!\" doesn't help anyone.\n• Try to contribute something new to the discussion, whether it is an extension, generalization or other idea related to the challenge.\n\nMarkdownAppears as\n*italics* or _italics_ italics\n**bold** or __bold__ bold\n- bulleted- list\n• bulleted\n• list\n1. numbered2. list\n1. numbered\n2. list\nNote: you must add a full line of space before and after lists for them to show up correctly\nparagraph 1paragraph 2\n\nparagraph 1\n\nparagraph 2\n\n[example link](https://brilliant.org)example link\n> This is a quote\nThis is a quote\n # I indented these lines\n# 4 spaces, and now they show\n# up as a code block.\n\nprint \"hello world\"\n# I indented these lines\n# 4 spaces, and now they show\n# up as a code block.\n\nprint \"hello world\"\nMathAppears as\nRemember to wrap math in $$ ... $$ or $ ... $ to ensure proper formatting.\n2 \\times 3 $2 \\times 3$\n2^{34} $2^{34}$\na_{i-1} $a_{i-1}$\n\\frac{2}{3} $\\frac{2}{3}$\n\\sqrt{2} $\\sqrt{2}$\n\\sum_{i=1}^3 $\\sum_{i=1}^3$\n\\sin \\theta $\\sin \\theta$\n\\boxed{123} $\\boxed{123}$"
] | [
null,
"https://ds055uzetaobb.cloudfront.net/brioche/avatars-2/resized/45/3cd83b5a9f32cfdceb3290e7ced38970.7684f34dc2-F6ZdLk25pb.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8920177,"math_prob":0.9954785,"size":1467,"snap":"2021-31-2021-39","text_gpt3_token_len":369,"char_repetition_ratio":0.09501025,"word_repetition_ratio":0.0,"special_character_ratio":0.26244035,"punctuation_ratio":0.17532468,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99693674,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T17:42:13Z\",\"WARC-Record-ID\":\"<urn:uuid:64bdd26d-86ea-4a5f-96ce-7e7e1d43cc2a>\",\"Content-Length\":\"73463\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79c6251e-c22b-4147-899b-391977b1db09>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e0439b1-3696-4545-9758-b8f06f9c0137>\",\"WARC-IP-Address\":\"104.20.35.242\",\"WARC-Target-URI\":\"https://brilliant.org/discussions/thread/harmonic-progression-2/\",\"WARC-Payload-Digest\":\"sha1:NIMLMJLAO75ZPC2PFBHKEORVUBYDUYRD\",\"WARC-Block-Digest\":\"sha1:IKWSCAAUHP7PVUJDI523P65PRIFVKJOH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154214.63_warc_CC-MAIN-20210801154943-20210801184943-00288.warc.gz\"}"} |
https://xjavascript.com/view/26891/regex-pattern-to-match-at-least-1-number-and-1-character-in-a-string | [
"# Regex pattern to match at least 1 number and 1 character in a string\n\nI have a regex\n\n`/^([a-zA-Z0-9]+)\\$/`\n\nthis just allows only alphanumerics but also if I insert only number(s) or only character(s) then also it accepts it. I want it to work like the field should accept only alphanumeric values but the value must contain at least both 1 character and 1 number.\n\nWhy not first apply the whole test, and then add individual tests for characters and numbers? Anyway, if you want to do it all in one regexp, use positive lookahead:\n\n``````/^(?=.*[0-9])(?=.*[a-zA-Z])([a-zA-Z0-9]+)\\$/\n``````\n\nThis RE will do:\n\n``````/^(?:[0-9]+[a-z]|[a-z]+[0-9])[a-z0-9]*\\$/i\n``````\n\nExplanation of RE:\n\n• Match either of the following:\n1. At least one number, then one letter or\n2. At least one letter, then one number plus\n• Any remaining numbers and letters\n\n• `(?:...)` creates an unreferenced group\n• `/i` is the ignore-case flag, so that `a-z` == `a-zA-Z`.\n\nI can see that other responders have given you a complete solution. Problem with regexes is that they can be difficult to maintain/understand.\n\nAn easier solution would be to retain your existing regex, then create two new regexes to test for your \"at least one alphabetic\" and \"at least one numeric\".\n\nSo, test for this :-\n\n``````/^([a-zA-Z0-9]+)\\$/\n``````\n\nThen this :-\n\n``````/\\d/\n``````\n\nThen this :-\n\n``````/[A-Z]/i\n``````\n\nIf your string passes all three regexes, you have the answer you need.\n\nWhile the accepted answer is correct, I find this regex a lot easier to read:\n\n``````REGEX = \"([A-Za-z]+[0-9]|[0-9]+[A-Za-z])[A-Za-z0-9]*\"\n``````\n\nThis solution accepts at least 1 number and at least 1 character:\n\n``````[^\\w\\d]*(([0-9]+.*[A-Za-z]+.*)|[A-Za-z]+.*([0-9]+.*))\n``````\n\nMaybe a bit late, but this is my RE:\n\n`/^(\\w*(\\d+[a-zA-Z]|[a-zA-Z]+\\d)\\w*)+\\$/`\n\nExplanation:\n\n`\\w*` -> 0 or more alphanumeric digits, at the beginning\n\n`\\d+[a-zA-Z]|[a-zA-Z]+\\d` -> a digit + a letter OR a letter + a digit\n\n`\\w*` -> 0 or more alphanumeric digits, again\n\nI hope it was understandable\n\nAnd an idea with a negative check.\n\n``````/^(?!\\d*\\$|[a-z]*\\$)[a-z\\d]+\\$/i\n``````\n• `^(?!` at start look ahead if string does not\n• `\\d*\\$` contain only digits `|` or\n• `[a-z]*\\$` contain only letters\n• `[a-z\\d]+\\$` matches one or more letters or digits until `\\$` end.\n\nHave a look at this regex101 demo\n\n(the `i` flag turns on caseless matching: `a-z` matches `a-zA-Z`)\n\nThe accepted answers is not worked as it is not allow to enter special characters.\n\nIts worked perfect for me.\n\n`^(?=.*[0-9])(?=.*[a-zA-Z])(?=\\S+\\$).{6,20}\\$`\n\n• one digit must\n• one character must (lower or upper)\n• every other things optional\n\nThank you.\n\nIf you need the digit to be at the end of any word, this worked for me:\n\n``````/\\b([a-zA-Z]+[0-9]+)\\b/g\n``````\n• \\b word boundary\n• [a-zA-Z] any letter\n• [0-9] any number\n• \"+\" unlimited search (show all results)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8840045,"math_prob":0.95880765,"size":290,"snap":"2021-04-2021-17","text_gpt3_token_len":73,"char_repetition_ratio":0.11888112,"word_repetition_ratio":0.0,"special_character_ratio":0.24827586,"punctuation_ratio":0.03448276,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880068,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T09:09:10Z\",\"WARC-Record-ID\":\"<urn:uuid:bdb1f15f-221c-4633-8d92-b917fcae131b>\",\"Content-Length\":\"29929\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7be6ac8c-5064-4961-99fa-9f0fec827ae1>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8ac5f92-576a-4829-9d22-658e212fea73>\",\"WARC-IP-Address\":\"104.21.27.85\",\"WARC-Target-URI\":\"https://xjavascript.com/view/26891/regex-pattern-to-match-at-least-1-number-and-1-character-in-a-string\",\"WARC-Payload-Digest\":\"sha1:7PWU5GVEPFKVGLK3EN6Q5KGE4BDU7MMO\",\"WARC-Block-Digest\":\"sha1:3NXDIX6QG3IPAKJQT5GM7KOSHAAW4AXH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038469494.59_warc_CC-MAIN-20210418073623-20210418103623-00116.warc.gz\"}"} |
https://www.intechopen.com/books/fullerenes-and-relative-materials-properties-and-applications/how-important-is-metal-carbon-back-bonding-for-the-stability-of-fullerene-transition-metal-complexes | [
"Open access peer-reviewed chapter\n\nHow Important is Metal-Carbon Back-Bonding for the Stability of Fullerene-Transition Metal Complexes? Role of Cage Sizes, Encapsulated Ions and Metal Ligands\n\nBy Ming-Chung Yang and Ming-Der Su\n\nSubmitted: March 16th 2017Reviewed: June 8th 2017Published: December 20th 2017\n\nDOI: 10.5772/intechopen.70068\n\nAbstract\n\nA density functional study of {η2-(X@Cn)}ML2 complexes with various cage sizes (C60, C70, C76, C84, C90, C96), encapsulated ions (X = F−, 0, Li+) and metal fragments (M = Pt, Pd) is performed, using M06/LANL2DZ levels of theory. The importance of π back-bonding to the thermodynamic stability of fullerene-transition metal complexes ({η2-(X@Cn)}ML2) and the effect of encapsulated ions, metal fragments and cage sizes on the π back-bonding are determined in this study. The theoretical computations suggest that π back-bonding plays an essential role in the formation of fullerene-transition metal complexes. The theoretical evidence also suggests that there is no linear correlation between cage sizes and π back-bonding, but the encapsulated Li+ ion enhances π back-bonding and F− ion results in its deterioration. These computations also show that a platinum center produces stronger π back-bonding than a palladium center. It is hoped that the conclusions that are provided by this study can be used in the design, synthesis and growth of novel fullerene-transition complexes.\n\nKeywords\n\n• fullerene-transition metal complex\n• π back-bonding\n• encapsulated ions\n• metal fragments and cage sizes\n\n1. Introduction\n\nThe first fullerene-transition metal complex, (η2-C60)Pt(Ph3)2, was prepared and structurally characterized by Fagan et al. in 1991 . It was the starting point for a new class of study for fullerene chemistry. Since then, various fullerene-transition metal complexes have been synthesized and these have potential applications in solar cells, spintronics, catalysis and drug delivery . Balch et al. then studied the reactions of C60 with electron-rich fragments, IrCl(CO)(PPh3)2 and produced the fullerene-iridium complex (η2-C60)IrCl(CO)(PPh3)2 . The formation of fullerene-iridium complex is a reversible process and the reversible binding of IrCl(CO)(PPh3)2 to fullerenes can be used as a structural probe because the adducts can build ordered single crystals that are suitable for X-ray diffraction [4, 5]. Fullerene-iridium complexes that contain an enantiomeric phosphine ligand are used as solar photoelements . One of the significant characteristics of fullerenes is that they are capable of encaging atoms, ions and small molecules to form endohedral complexes. Endohedral metallofullerenes (EMFs) are those that encapsulate metal atoms within a hollow carbon cage. Proft et al. theoretically studied the interactions between encapsulated monoatomic ions (Li+ to Rb+ and F to I) and C60 and its Si and Ge analogues and found that, for these families, the interactions between Li+(Na+) and F(Cl) ions and C60 are strongest and exothermic, which confirms the possibility of the existence of these species. Recently, Li+@C60 was successfully synthesized and isolated by Watanabe et al. .\n\nUnderstanding the strength and nature of metal-ligand bonding is crucial for the design of new fullerene-transition metal complexes because the structure and stability of various intermediates are important to the formation of organometallics . In an earlier work by the authors , {η2-(X@C60)}ML2 complexes (M = Pt, Pd; X = 0, Li+, L = PPh3) were studied and it was found that there is a relationship between thermodynamic stability and π back-bonding; that is, the greater the π back-bonding, the greater is thermodynamic stability. This shows that thermodynamic stability can be modified by tuning the π back-bonding. As far as the authors are aware, π back-bonding could be affected by several factors, including the encapsulated ions, the metal fragments and the cage sizes. This study determines the importance of π back-bonding to the thermodynamic stability of {η2-(X@Cn)}ML2 complexes by using M = Pt, Pd; X = F, 0, Li+ and n = 60, 70, 76, 84, 90 and 96 to ascertain the role of these factors in π back-bonding. Since the system is very large, methyl-substituted N-heterocyclic carbenes (NHC) are used as a ligand (L), instead of PPh3. NHC is one of the frequently used and most powerful tools in organic chemistry . In this work, the following reactions are studied:\n\nML2+X@Cnη2X@CnML2E1\n\n2. Computational details\n\nThe following fullerenes that comply with the isolated pentagon rule are used to develop a correlation: Ih-C60, D5h-C70, D2-C76, D2d(23)-C84, D5h(1)-C90 and D3d(3)-C96. These are experimentally isolated and identified [12, 13, 14]. The symmetry and numbering scheme for fullerene isomers are in accordance with an approved classification . Hückel molecular orbital calculations show that the 6:6 ring junctions at the poles of the molecules usually have highest π bond orders (B) and are expected to be the most reactive, so these are the sites of attack (see Scheme 1) [12, 16].",
null,
"Scheme 1.The sites of attack for addition to the fullerenes Ih-C60, D5h-C70, D2-C76, D2d(23)-C84, D5h(1)-C90 and D5h(1)-C96. The Hückel π bond orders (B) were calculated using the freeware program, HuLiS .\n\nThe geometry optimizations are performed without any symmetry restrictions by using the M06 /LANL2DZ [18, 19] level of theory. The vibrational frequency calculations at 298.15 K and 1 atm use the same level of theory. The stationary points are confirmed by the absence of imaginary frequencies. The natural charges are obtained using NBO 5.9, as implemented in the Gaussian 09 program .\n\nThe interatomic interactions are determined using energy decomposition analysis (EDA). Two types of EDA are used in this work. The first is the basic EDA that was developed individually by Yang et al. and by Ziegler and Rauk . For this basic EDA, the bonding energy (∆E) is partitioned into two terms, ∆E = ∆E(DEF) + ∆E(INT). In this work, basic EDA is used for the optimized ML2X@Cn complexes, which are categorized into transition metal complexes (A), carbon cages (B) and metal ions (C) as shown in Scheme 2. The deformation energy (∆E(DEF)) is the sum of the deformation energy of A (∆E(DEF)A, which is defined as the energy of A in the product relative to the optimized isolated structure (A0) and B (∆E(DEF)B)). The interaction energy term, ∆E(INT)A(BC), is the interaction energy between A and (BC) for their respective optimized product structures.\n\nAdvanced EDA unites the natural orbitals for chemical valence (NOCV), so it is possible to separate the total orbital interactions into pairwise contributions . The advanced EDA (i.e., EDA-NOCV) further divides the interaction energy (∆E(INT)) into three main components: ∆E(INT) = ∆Eelstat + ∆EPauli + ∆Eorb. It is used for a quantitative study of π back-bonding to fullerene ligands that uses the M06/TZ2P level of theory with the ADF 2016 program package . The relativistic effect is considered by applying a scalar zero-order regular approximation (ZORA) . The interaction energy and its decomposition terms are obtained from a single-point calculation using the M06/TZ2P basis set from the Gaussian 09 optimized geometry.\n\n3. Results and discussion\n\n3.1. Geometric changes\n\nThe structures of {η2-(X@Cn)}ML2 complexes for M = Pt, Pd; X = F, 0, Li+ and n = 60, 70, 76, 84, 90 and 96 were fully optimized at the M06/LANL2DZ level of theory. The geometries that are obtained are illustrated in Figure 1. The key structural parameters of the stationary points are listed in Table 1 (the structural parameters for n = 70, 76, 84, 90 and 96 are presented elsewhere). For the Pt-C60 complex in the absence of encapsulated ions, the respective lengths of the metal-carbon bonds are 2.12 Å and 2.12 Å (Table 1). When the Li+ ion is encapsulated into the cage, the metal-carbon bonds remain unaltered and the respective distances between C1, C2 and Li+ are 2.29 Å and 2.29 Å. As the encapsulated ion is changed to F, the metal-carbon bonds remain almost unchanged (2.13 and 2.13 Å), but the distance between C1, C2 and encapsulated ions (F) increases (3.18 and 3.18 Å). The Li+ ion is located at a site that is close to the transition metals because of electrostatic interaction. The metal-coordinated carbon atoms of C60 are negatively charged because there is π back-donation from the metal center. For the Pt-C60 complex without encapsulated ions, the natural population analysis (NPA) shows that the atomic charges on the C1 (C2) atoms are −0.27 (−0.27). When the cage is encapsulated by a Li+ ion, the atomic charges on the C1 (C2) atoms are increased to −0.32 (−0.32) and the atomic charge on the Li atom is +0.86. Therefore, the encaged Li+ ion is attracted toward these negatively charged C atoms. However, as the encapsulated ion is changed to F, NPA shows that the atomic charges on C1 (C2) atoms are decreased to −0.23 (−0.23) and the atomic charge on the F atom is negative (−0.93), so the encaged F ion is repelled by the negatively charged C atoms. In terms of Pd-C60 complexes, it is worthy of note that the geometrical distances are generally similar to the corresponding distances for Pt-C60 complexes, but the charge distributions are different. Specifically, the encaged Li atom has a charge (+0.86) but the charges on C1 (C2) atoms are reduced to −0.27 (−0.27). The negative charges on metal-coordinated carbon atoms are also less for X = 0 and F. Similar geometric changes and charge distributions are seen for n = 70, 76, 84, 90 and 96 and are presented elsewhere.",
null,
"Figure 1.Optimized geometries for {η2-(X@C60)}ML2 (M = Pt, Pd; X = Li+, 0, F−).",
null,
"Table 1.\n\nSelected geometrical parameters (bond distances in Å) and the NPA atomic charge for optimized complexes ({η2-(X@C60)}ML2) at the M06/LANL2DZ level of theory.\n\n3.2. Basic energy decomposition analysis (basic EDA)\n\nIn order to better understand the factors that govern the thermodynamic stability of {η2-(X@Cn)}ML2 complexes, basic EDA is performed on {η2-(X@Cn)}ML2 complexes (the basic EDA results for n = 70, 76, 84, 90 and 96 are presented elsewhere). The bonding energy (∆E) is defined as ∆E = E({η2-(X@Cn)}ML2) − E(X@Cn) − E(ML2), using Eq. (1). The Pt-C60 complex without encapsulated ions is initially considered. Basic EDA shows that both the metal fragment and the empty C60 fragment are distorted during the formation of the metal-carbon bond (Table 2). The metal fragment undergoes greater distortion (∆E(DEF)A = 52.1 kcal/mol) than C60 (∆E(DEF)B = 13.9 kcal/mol). The same results are obtained for X = Li+ or F. It is found that the encapsulation of the Li+ ion induces more distortion in fragments A and B of the Pt-Li+@C60 complex than those of the Pt-F@C60 complex (∆∆E(DEF)(X = Li+) = 8.0, ∆∆E(DEF)(X = F) = −14.8 kcal/mol). However, the interaction energy increases when the Li+ ion is encapsulated (∆∆E(INT)A(BC)(X = Li+) = −35.9, ∆∆E(INT)A(BC)(X = F) = +33.8 kcal/mol), which shows that the encapsulated Li+ ion induces a stronger interaction between the metal fragment and the X@C60 fragment, so {η2-(Li+@C60)}PtL2is more stable. The relative thermodynamic stability increases in the order: ∆E(X = F) < ∆E(X = 0) < ∆E(X = Li+), as shown in Table 2. In addition, |∆∆E(DEF)| is small and |∆∆E(INT)A(BC)| is large, so the latter must be responsible for an increase in thermodynamic stability. Similar results are obtained for Pd-C60 complexes, but, the distortion in fragments A or B is smaller than that for the Pt-C60 complex, as is the interaction energy, so the complex is less stable. For instance, ∆E(M = Pd, X = Li+) = −34.5 > ∆E(M = Pt, X = Li+) = −35.9 kcal/mol. When the cage size increases (n = 70, 76, 84, 90 and 96), the encapsulated Li+ ion still induces a stronger interaction between the metal fragment and the X@Cn fragment and |∆∆E(DEF)| is smaller than |∆∆E(INT)A(BC)|. Therefore, an increase in the cage size has no effect on the basic EDA results.\n\nX∆E(DEF) (∆E(DEF)A, ∆E(DEF)B)∆∆E(DEF)c∆E(INT)A(BC)∆∆E(INT)A(BC)c∆Ed∆∆Ec,d\n2-(X@C60)}PtL2\nF51.2 (41.0, 10.2)−14.8−67.0+33.8−14.7+20.1\n066.0 (52.1, 13.9)−100.8−34.8\nLi+74.0 (55.0, 19.0)8.0−136.6−35.9−64.2−29.4\n2-(X@C60)}PdL2\nF30.7 (25.9, 4.8)−12.6−45.829.0−13.9+17.6\n043.3 (35.0, 8.3)−74.8−31.5\nLi+51.1 (37.9, 13.2)7.8−109.3−34.5−59.2−27.7\n\nTable 2.\n\nBasic EDA for {η2-(X@C60)}ML2 (M = Pt, Pd) at M06/LANL2DZa,b.\n\nEnergies are given in kcal/mol.\n\nA and B respectively represent the metal fragment and C60 cage.\n\nThe difference is relative to corresponding quantity at X = 0.\n\nThe reaction energy without zero-point energy (ZPE) correction for the product, relative to the corresponding reactants.\n\n3.3. Advanced energy decomposition analysis (advanced EDA)\n\nIn an earlier study by the authors, structural parameters and spectral characteristics were used to estimate the strength of π back-bonding for {η2-(X@C60)}ML2 (M = Pt, Pd; X = 0, Li+, L = PPh3) complexes . The changes in bond length (Δr/r0), bond angle (Δθav), vibrational frequency (Δν) and the chemical shift (Δδ) were used to describe the character of the π-complex s. In this study, the strength of the π back-bonding strength is estimated from an energetic viewpoint using an advanced EDA method. This analysis shows the effect of encapsulated ions, metal fragments and cage sizes on π back-bonding.\n\n3.3.1. The effect of encapsulated ions on π back-bonding\n\nIn an earlier discussion (Section 3.2), it was proven that thermodynamic stabilities increase in the order: ∆E(X = F) < ∆E(X = 0) < ∆E(X = Li+), because the interaction energy (∆E(INT)) is increased. The interaction between the metal fragment and X@Cn is now studied using advanced EDA, which further decomposes the interaction energy into electrostatic interaction (∆Eelstat), repulsive Pauli interaction (∆EPauli) and orbital interaction (∆Eorb) terms. The orbital interactions are the most important of these three and only the most important pairwise contributions to ΔEorb are considered. The advanced EDA method is used for {η2-(X@Cn)}ML2 complexes, as shown in Tables 3 and 4 (the results for n = 70, 76, 84, 90 and 96 are presented elsewhere). A plot of the deformation density and a qualitative drawing of the orbital interactions between the metal fragment and X@C60 are shown in Figure 2. In terms of the Pt-C60 complexes, Table 3 shows that both the electrostatic interaction (∆Eelstat) and the orbital interaction (∆Eorb) stabilize the complexes because they are negative terms, but the percentage of ∆Eorb increases in the order: ∆Eorb(X = F) < ∆Eorb(X = 0) < ∆Eorb(X = Li+). Therefore, the enhanced orbital interaction must be responsible for the increase in the thermodynamic stability. Table 3 also shows that ΔE1 contributes significantly to ΔEorb: 69.5% for X = F, 75.2% for X = 0 and 76.4% for X = Li+. The deformation densities show that these come from π back-donation from a filled d orbital of the metal to the π* orbitals of C60 (charge flow is yellow to green at the top of Figure 2c). The large contributions of ΔE1 to ΔEorb are in agreement with the results of previous studies. The metal-carbon bonds are principally formed by π back-donation . It is also seen that the order of ΔE1 is |ΔE1(X = F)| = 94.4 < |ΔE1(X = 0)| = 118.6 < |ΔE1(X = Li+)| = 142.8 kcal/mol. Therefore, ΔE1 is increased when there is the encapsulated Li+ ion but decreased when there is a F ion. The second contribution of ΔE2 to ΔEorb is comparatively small: 18.0% for X = F, 13.1% for X = 0 and 10.0% for X = Li+. This results from σ-donation from a filled π orbital of C60 to the π* orbital of the metal (middle of Figure 2c). The computational results show that π back-bonding is crucial to the thermodynamic stability of Pt-C60 complexes and that an encapsulated Li+ ion increases π back-bonding but an encapsulated F has the opposite effect.\n\nFragmentsL2Pt and F@C60L2Pt and C60L2Pt and Li+@C60\nΔEint−67.7−100.8−133.0\nΔEPauli257.2235.8227.6\nΔEelstatb−189.0 (58.2%)−178.8 (53.1%)−173.8 (48.2%)\nΔEorbb−135.8 (41.8%)−157.7 (46.9%)−186.9 (51.8%)\nΔE1c−94.4 (69.5%)−118.6 (75.2%)−142.8 (76.4%)\nΔE2c−24.4 (18.0%)−20.6 (13.1%)−18.6 (10.0%)\nΔE3c−7.1 (5.2%)−5.7 (3.6%)−6.5 (3.5%)\nΔE4c−3.0 (2.2%)−3.8 (2.4%)−5.5 (3.0%)\nΔE5c−3.6 (2.7%)−4.2 (2.7%)−5.0 (2.7%)\nΔE6c−2.1 (1.1%)\nΔErestc−5.8 (4.3%)−6.3 (4.0%)−6.8 (3.6%)\n\nTable 3.\n\nThe advanced EDA results for {η2-(X@C60)}PtL2a (X = F, 0, Li+) at the M06/TZ2P level of theory. The fragments are PtL2 and X@C60 in a singlet (S) electronic state. All energy values are in kcal/mol.\n\nOptimized structures at the M06/LANL2DZ level of theory.\n\nThe values in parentheses give the percentage contribution to the total attractive interactions, ΔEelstat + ΔEorb.\n\nThe values in parentheses give the percentage contribution to the total orbital interactions, ΔEorb.\n\nFragmentsL2Pd and F@C60L2Pd and C60L2Pd and Li+@C60\nΔEint−45.1−73.5−103.9\nΔEPauli188.9176.5176.4\nΔEelstatb−142.8 (61.0%)−137.0 (54.8%)−138.9 (49.6%)\nΔEorbb−91.2 (39.0%)−113.0 (45.2%)−141.3 (50.4%)\nΔE1c−71.9 (78.8%)−96.9 (85.8%)−121.0 (85.6%)\nΔE2c−15.5 (17.0%)−10.8 (9.6%)−9.4 (6.7%)\nΔE3c−4.9 (5.4%)−4.2 (3.7%)−4.6 (3.3%)\nΔE4c−2.1 (2.3%)−2.3 (2.0%)−3.6 (2.5%)\nΔE5c−2.0 (2.2%)−2.9 (2.6%)−3.6 (2.5%)\nΔErestc−2.7 (3.0%)−3.6 (3.2%)−6.5 (4.6%)\n\nTable 4.\n\nThe advanced EDA results for {η2-(X@C60)}PdL2a (X = F, 0, Li+) at the M06/TZ2P level of theory. The fragments are PdL2 and X@C60 in the singlet (S) electronic state. All energy values are in kcal/mol.\n\nOptimized structures at the M06/LANL2DZ level of theory.\n\nThe values in parentheses give the percentage contribution to the total attractive interactions, ΔEelstat + ΔEorb.\n\nThe values in parentheses give the percentage contribution to the total orbital interactions, ΔEorb.",
null,
"Figure 2.(a) A qualitative drawing of the orbital interactions between the metal fragment and Li+@C60; (b) the shape of the most important interacting occupied and vacant orbitals of the metal fragments and Li+@C60; (c) a plot of the deformation densities, Δρ, for the pairwise orbital interactions between the two fragment in their closed-shell state, the associated interaction energies, ΔEorb (in kcal/mol), and the eigenvalues ν. The eigenvalues, ν, indicate the size of the charge flow. The direction of the charge flow is from yellow to the green.\n\n3.3.2. The effect of metal fragments on π back-bonding\n\nPd-C60 complexes appear to be similar to Pt-C60 complexes, but a comparison of the results in Tables 3 and 4 shows that the value ΔE1 for a Pd-C60 complex is smaller than the corresponding value for a Pt-C60 complex, which demonstrates that the π back-bonding for a palladium center is weaker than that for a platinum center. For example, |ΔE1(M = Pd, X = Li+)| = 121.0 < |ΔE1(M = Pt, X = Li+)| = 142.8 kcal/mol. This is consistent with the earlier results that were obtained using structural parameters and spectral characteristics .\n\n3.3.3. The effect of cage sizes on π back-bonding\n\nFigure 3 shows a plot of the ΔE1 values versus cage sizes that are calculated for {η2-(X@Cn)}PtL2 complexes (n = 60, 70, 76, 84, 90 and 96). It is seen that there is no linear relationship and there is one obvious peak for each X at n = 84 . This demonstrates the effect of a difference in size of the carbon clusters on π back-bonding for a metal center, but the correlation is not simply monotonic. Therefore, a larger (smaller) cage size does not necessarily imply that there is stronger (weaker) π back-bonding, which results in greater (lower) thermodynamic stability.",
null,
"Figure 3.The correlation between ΔE1 and cage sizes for {η2-(X@Cn)}PtL2 (n = 60, 70, 76, 84, 90 and 96) complexes. The blue, red and black lines indicate the ΔE1 for X = Li+, 0 and F−, respectively.\n\n4. Conclusion\n\nThis computational study uses density functional theory to determine the thermodynamic stability of {η2-(X@Cn)}ML2 complexes (M = Pt, Pd; X = F, 0, Li+ and n = 60, 70, 76, 84, 90 and 96). The calculations show the reaction is more stable when the Li+ ion is encapsulated within Cn but the complex becomes unstable if there is a F ion. Basic EDA shows that there is an increase in the interaction between the metal fragment and Cn if there is an encapsulated Li+ ion but F ion has the opposite effect.\n\nThe advanced EDA results show that π back-bonding is crucial to thermodynamic stability and that thermodynamic stability is increased by the presence of a Li+ ion but the presence of a F ion has the opposite effect. These computations also show that a platinum center results in stronger π back-bonding than a palladium center and that there is no linear relationship between cage size and π back-bonding.\n\nAcknowledgments\n\nThe authors would like to thank the National Center for High-Performance Computing in Taiwan for the donation of generous amounts of computing time. The authors are also grateful for financial support from the Ministry of Science and Technology of Taiwan.\n\nMore\n\n© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nHow to cite and reference\n\nCite this chapter Copy to clipboard\n\nMing-Chung Yang and Ming-Der Su (December 20th 2017). How Important is Metal-Carbon Back-Bonding for the Stability of Fullerene-Transition Metal Complexes? Role of Cage Sizes, Encapsulated Ions and Metal Ligands, Fullerenes and Relative Materials - Properties and Applications, Natalia V. Kamanina, IntechOpen, DOI: 10.5772/intechopen.70068. Available from:\n\nchapter statistics\n\n1Crossref citations\n\nMore statistics for editors and authors\n\nLogin to your personal dashboard for more detailed statistics on your publications.\n\nRelated Content\n\nNext chapter\n\nBy Elif Okutan\n\nFeatures of Liquid Crystal Display Materials and Processes\n\nEdited by Natalia Kamanina\n\nFirst chapter\n\nPolyimides Bearing Long-Chain Alkyl Groups and Their Application for Liquid Crystal Alignment Layer and Printed Electronics\n\nBy Yusuke Tsuda\n\nWe are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists. Our readership spans scientists, professors, researchers, librarians, and students, as well as business professionals. We share our knowledge and peer-reveiwed research papers with libraries, scientific and engineering societies, and also work with corporate R&D departments and government entities.\n\nView all Books"
] | [
null,
"https://www.intechopen.com/media/chapter/56397/media/F4.png",
null,
"https://www.intechopen.com/media/chapter/56397/media/F1.png",
null,
"https://www.intechopen.com/media/chapter/56397/media/UF1.png",
null,
"https://www.intechopen.com/media/chapter/56397/media/F2.png",
null,
"https://www.intechopen.com/media/chapter/56397/media/F3.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89577365,"math_prob":0.9717578,"size":18490,"snap":"2019-43-2019-47","text_gpt3_token_len":5563,"char_repetition_ratio":0.1445959,"word_repetition_ratio":0.08152355,"special_character_ratio":0.3100054,"punctuation_ratio":0.13196786,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96651834,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T22:53:37Z\",\"WARC-Record-ID\":\"<urn:uuid:dc30c2d6-af58-46d4-92e9-28da5c72e3e2>\",\"Content-Length\":\"388889\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c86dc7f3-da8e-4b7f-ba32-ec3532b0efdd>\",\"WARC-Concurrent-To\":\"<urn:uuid:24766086-3c17-446a-b34e-be105c4af68a>\",\"WARC-IP-Address\":\"35.171.73.43\",\"WARC-Target-URI\":\"https://www.intechopen.com/books/fullerenes-and-relative-materials-properties-and-applications/how-important-is-metal-carbon-back-bonding-for-the-stability-of-fullerene-transition-metal-complexes\",\"WARC-Payload-Digest\":\"sha1:Z32KTJMTEWJWYGKOJU6MAGWPRFK6FKWN\",\"WARC-Block-Digest\":\"sha1:4LX6RSD7B37PRLEL3YDH36IBVUNA44HI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986684854.67_warc_CC-MAIN-20191018204336-20191018231836-00301.warc.gz\"}"} |
https://cs.stackexchange.com/questions/3303/how-to-compare-the-output-of-a-neural-network-with-his-target | [
"# How to compare the output of a neural network with his target?\n\nI am coding a neural network implementation, but a I have problems in the design. I was wondering about how to compare the output with the target, my neural networks has three outputs\n\n groups = {'Iris-virginica':[0,0,1], 'Iris-setosa':[0,1,0], 'Iris-versicolor':[1,0,0]}\n\nI know I must translate each output to 0 and 1.\n\nI meant if my result is Iris-virginica and my output is more or less: [0.999979082561091, 0.9999918549147135, 0.9998408912106317], the subtraction would yield the following result:\n\n[-0.999979082561091, -0.9999918549147135, 0.000159]\n\nIs that correct, or I need to follow a different approach. Is possible train my net with 0, 1 and 2 values. Do I need to know any more?\n\nThe easy answer: Take the predicted class to be the argmax of your output vector.\n\nThe longer answer: Since you're doing multiclass classification you should probably be using softmax output units (if I had to guess, I would guess you're using sigmoid output units). If $\\mathbf{x} \\in \\mathbb{R}^n$ is the input to your output units then the softmax for the $i$th output unit is\n\n$$y_i = \\frac{\\exp(x_i)}{\\sum_{j}\\exp(x_j)}$$\n\nThis output function is nice for a number of reasons.\n\n1. since your output vector sums to 1, you can interpret it probabilistically.\n2. the derivative is particularly nice in combination with a log loss function.\n\nNote: the further information I've provided may seen redundant since clearly\n\n$$\\text{argmax}\\{\\mathbf{x}\\} = \\text{argmax}\\{\\mathbf{\\sigma(x)}\\} = \\text{argmax}\\{\\mathbf{\\text{softmax}(x)}\\},$$\n\nwhere $\\sigma$ is the elementwise logistic sigmoid function. The problem with just taking the argmax of what you have is you will likely not be optimizing the function you are actually interested in internally.\n\nOn the other hand the choice of loss function for classification isn't as cut and dry as I've made it sound.\n\n• Can you write an example or a pseudocode? – omar Aug 23 '12 at 19:39\n• @user12287, what part is giving you trouble? – alto Aug 23 '12 at 19:45\n• I don't understand exactly how implement argmax function. – omar Aug 23 '12 at 21:05\n• @user12287, sorry about that, perhaps I was a little sloppy. For the purposes here I simply meant the function that returns the index of the maximum value. So $\\text{argmax}\\{2, 5, 3\\} = 2$, since 5 is the largest value and the second element. – alto Aug 23 '12 at 21:16\n• What about the long answer? Can you give more details. – omar Aug 23 '12 at 21:20"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8771879,"math_prob":0.94439197,"size":687,"snap":"2019-51-2020-05","text_gpt3_token_len":211,"char_repetition_ratio":0.095168374,"word_repetition_ratio":0.0,"special_character_ratio":0.38864627,"punctuation_ratio":0.21153846,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T19:53:07Z\",\"WARC-Record-ID\":\"<urn:uuid:23fea520-3480-4f0b-9abd-bc96b807ccd3>\",\"Content-Length\":\"134520\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:114a0d72-bc10-4626-94cd-08743444eb3e>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b6f2400-9e30-4dfb-a2c2-489d88a86988>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/3303/how-to-compare-the-output-of-a-neural-network-with-his-target\",\"WARC-Payload-Digest\":\"sha1:LYRB2AMMMC47FWPFDHWVOFPMPDOPL3UA\",\"WARC-Block-Digest\":\"sha1:77LIYA4IS6KE2PWE5ILOEIN6PVQHZVUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541288287.53_warc_CC-MAIN-20191214174719-20191214202719-00314.warc.gz\"}"} |
https://socratic.org/questions/how-would-you-determine-the-atomic-weight-of-an-atom | [
"# How would you determine the atomic weight of an atom?\n\nDec 16, 2015\n\nIt depends on how you define it.\n\nIf you're a chemist, then you would normally take \"atomic weight\" and \"atomic mass\" to just be \"atomic mass\". If you're a physicist, you probably would cringe at equating the two, because weight is in $\\text{N}$, not $\\text{g}$.\n\nI'll answer this from the physicist's perspective.\n\nAtomic mass is just written directly on the periodic table. It's the one that has a lot of decimal places. So hydrogen's atomic mass is $\\textcolor{b l u e}{\\text{1.00794 g/mol}}$.\n\nAtomic weight, then, would be $\\text{W = mg}$.\n\n$\\text{W} = \\left(1.00794\\right) \\left(9.80665\\right)$\n$\\textcolor{b l u e}{\\text{= 9.88451 N/mol}}$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92134583,"math_prob":0.99534905,"size":713,"snap":"2019-43-2019-47","text_gpt3_token_len":184,"char_repetition_ratio":0.13540198,"word_repetition_ratio":0.0,"special_character_ratio":0.27068722,"punctuation_ratio":0.10714286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99925196,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T20:10:32Z\",\"WARC-Record-ID\":\"<urn:uuid:b292b1f1-bb53-474d-96f3-f887b7e49972>\",\"Content-Length\":\"33799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e90ce07d-28d3-4fbf-89d4-1821a9475462>\",\"WARC-Concurrent-To\":\"<urn:uuid:42aa0745-1efd-44bb-9ac8-a7b76c78fd7b>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-would-you-determine-the-atomic-weight-of-an-atom\",\"WARC-Payload-Digest\":\"sha1:SP26EBMSGABWCSEAGCRZBNZDVD73YZNR\",\"WARC-Block-Digest\":\"sha1:YKFUE4SO45D3VE4RK4H3TWH3CJLTIIHV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668712.57_warc_CC-MAIN-20191115195132-20191115223132-00458.warc.gz\"}"} |
http://www.win-vector.com/blog/2016/03/sample-monkeys-paw-style-programming-in-r/ | [
"Posted on Categories Coding, Rants\n\n# sample(): “Monkey’s Paw” style programming in R\n\nThe R functions `base::sample` and `base::sample.int` are functions that include extra “conveniences” that seem to have no purpose beyond encouraging grave errors. In this note we will outline the problem and a suggested work around. Obviously the R developers are highly skilled people with good intent, and likely have no choice in these matters (due to the need for backwards compatibility). However, that doesn’t mean we can’t take steps to write safer and easier to debug code.",
null,
"“The Monkey’s Paw”, story: William Wymark Jacobs, 1902; illustration Maurice Greiffenhagen.\n\nSuppose we were given data in the following form:\n\n``` set.seed(2562) x <- 10*rnorm(5) print(x) # -17.442331 7.361322 -10.537903 -4.208578 -1.560607 goodIndices <- which(x>0) print(goodIndices) # 2 ```\n\nFurther suppose our goal is to generate a sample of size 5 of the values of `x` from only the `goodIndices` positions. That is a sample (with replacement) of the positive values from our vector `x`. I challenge a working R developer who has used `base::sample` or `base::sample.int` regularly to say they have never written at least one of the following errors at some time:\n\n``` sample(x[goodIndices],size=5,replace=TRUE) # 5 6 1 3 2 x[sample(goodIndices,size=5,replace=TRUE)] # 7.361322 -17.442331 7.361322 -17.442331 7.361322 ```\n\nThese samples are obviously wrong, but you will notice this only if you check. There is only one positive value in `x` (`7.361322`) so the only possible legitimate sample of 5 positive values under replacement is `c(7.361322,7.361322,7.361322,7.361322,7.361322)`. Notice we never got this, and received no diagnostic. A bad sample like this can take a long time to find through its pernicious effects in downstream code.\n\nNotice the following code works (because it reliably prohibits triggering the horrid special case):\n\n``` as.numeric(sample(as.list(x[goodIndices]),size=5,replace=TRUE)) # 7.361322 7.361322 7.361322 7.361322 7.361322 x[as.numeric(sample(as.list(goodIndices),size=5,replace=TRUE))] # 7.361322 7.361322 7.361322 7.361322 7.361322 x[goodIndices[sample.int(length(goodIndices),size=5,replace=TRUE)]] # 7.361322 7.361322 7.361322 7.361322 7.361322 ```\n\nAs always: this is a deliberately trivial example so you can see the problem clearly.\n\nSo what is going on? The issue given in `help('sample')`:\n\nIf x has length 1, is numeric (in the sense of is.numeric) and x >= 1, sampling via sample takes place from 1:x. Note that this convenience feature may lead to undesired behaviour when x is of varying length in calls such as sample(x).\n\nThis little gem is the first paragraph of the “Details” section of `help('sample')`. The authors rightly understand that more important than knowing the intended purpose of `base::sample` is to first know there is a sneaky exception hardcoded into its behavior. In some situations `base::sample` assumes you really meant to call `base::sample.int` and switches behavior. Remember (as pointed out in the documentation): the population we are sampling from may have come from an external source, so the analyst may not even know they have triggered (or even could trigger) these exceptional cases.\n\nHere is the code confirming this “convenience.”\n\n``` > print(base::sample) function (x, size, replace = FALSE, prob = NULL) { if (length(x) == 1L && is.numeric(x) && x >= 1) { if (missing(size)) size <- x sample.int(x, size, replace, prob) } else { if (missing(size)) size <- length(x) x[sample.int(length(x), size, replace, prob)] } } <bytecode: 0x103102340> <environment: namespace:base> ```\n\nIf we meant to call `base::sample.int` we certainly could have. There aren’t even any of the traditional “don’t lose flag”s available (such as “`drop=FALSE`“, “`stringsAsFactors=FALSE`“, “`character.only=TRUE`“, or “`type='response'`“). This “convenience” makes it impossible to reliably use `base::sample` without some trick (such as hiding our vector in a list). Our current advice is to use the following two replacement functions:\n\n``` sampleint <- function(n,size,replace=FALSE,prob=NULL) { if((!is.numeric(n)) || (length(n)!=1)) { stop(\"sampleint: n must be a numeric of length exactly 1\") } if(missing(size)) { size <- n } if((!is.numeric(size)) || (length(size)!=1)) { stop(\"sampleint: size must be a numeric of length exactly 1\") } sample.int(n,size,replace,prob) } samplex <- function(x,size,replace=FALSE,prob=NULL) { if(missing(size)) { size <- length(x) } if((!is.numeric(size)) || (length(size)!=1)) { stop(\"samplex: n must be a numeric of length exactly 1\") } x[sampleint(length(x), size, replace, prob)] } ```\n\nWith these functions loaded you can write more natural code:\n\n``` samplex(x[goodIndices],size=5,replace=TRUE) # 7.361322 7.361322 7.361322 7.361322 7.361322 ```\n\nAs a bonus we included `sampleint` which actually checks its arguments (a very good thing for library code to do) catching if the analyst accidentally writes “`sample.int(1:10,size=10,replace=TRUE)`” or “`sample.int(seq_len(10),size=10,replace=TRUE)`” (which return 10 copies of 1!) when they meant to write “`sample.int(10,size=10,replace=TRUE)`“. The overall principle is that a completed run (one that did not call `stop()`) should have obvious and expected semantics, this allows the user to treat the successful execution as an empirical proof they didn’t hit too bad a corner-case.\n\nObviously it is the data scientist’s responsibility to know actual function semantics, to write correct code, and to check their intermediate results. However, it traditionally is the responsibility of library code to help in this direction by having regular behavior (see the principle of least astonishment) and to signal errors in the case of clearly invalid arguments (reporting errors near their occurrence makes debugging much easier). Nobody enjoys working with Monkey’s Paw style libraries (that are technically “correct” but not truly helpful).\n\n## 9 thoughts on “sample(): “Monkey’s Paw” style programming in R”\n\n1. Isn’t it easier just write a wrapper around sample, which checks for length 1 argument and return rep(x, size)?\n\nFrom statistical point of view drawing sample from one element is nonsense, you need to use rep then, so I see a point of R behaviour. Some sort of warning however should be nice. On the other hand if you sample from one point, your code is not statistically sound, and you bound to have problems anyway.\n\n1. I do not agree.\n\nFrom a software engineering point of view we want simple regular library code that works well on the primary cases and happens to get the corner cases correct. Getting corner cases right through enumeration is risky as there (in general) is always the chance you are missing some. And surely sample(-1,5,replace=TRUE) doesn’t make a different amount of “statistical sense” than sample(2,5,replace=TRUE) (both of which are treated differently).\n\nIf one wanted to prohibit sampling from a population size smaller than 2 would be easier still to check for a length<=1 argument and call stop(). Then at least the researcher has some chance of seeing what went wrong near the problem. I think it makes far more sense to for samplex(7,1) to return a 7 than for sample(7,1) to return something like a 2.\n\n1. Note that proposed code does enumerate the corner cases. R code does that too, it checks for the argument of length 1, but treats that case differently. When you have corner cases you need to make a choice how to treat it. Since R is used by very diverse crowd of people sometimes it is hard to make a choice which suits everybody.\n\n1. Again, I disagree. Let’s leave it at that.\n\n2. Well,once I learned how sample() works, I made sure always to write sample(c(x,x),…) . The distribution is the same as for sample(x) when x is a vector, and when x is scalar it ensures it’s not converting to 1:x .\n\n1. Clever, that works correctly over many different types (assuming replace=TRUE). I had been using as.numeric(sample(as.list(x),…), but this only works correctly if we know x is numeric (different types of disasters with strings or factors).\n\n2. Jason Liao says:\n\nThis behavior of sample function is really bad. I was bitten badly twice and wasted a day of time to finally figure it out.\n\n3. Bill Venables says:\n\nThis must be a bad one as I have just been bitten by it myself, and I’ve been at the game for some time now. The suggestion made is a good one and I would hope it eventually makes its way into R base, and sample(), sample.int() gradually eased out by deprecation and then defunct. But that will take some time, sadly.\n\nIt’s only there, of course, as a result of slavery to backward compatibility. It’s an old function coming from the early S days which originally was “An Interactive Environment for Data Analysis and Graphics”. It was only with the introduction of S3 in the early ’90s that it became “A Programming Environment for Data Analysis and Graphics”, and the change of word is significant. Originally the usual way to do your data analysis and graphics was to sit at a terminal and punch it in. Scripts and an emphasis on planned coding and reproducibility only came later, strange as it may now seem."
] | [
null,
"https://i2.wp.com/www.win-vector.com/blog/wp-content/uploads/2016/03/NewImage-1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89045894,"math_prob":0.95669454,"size":9339,"snap":"2020-10-2020-16","text_gpt3_token_len":2298,"char_repetition_ratio":0.12329941,"word_repetition_ratio":0.025570145,"special_character_ratio":0.26640967,"punctuation_ratio":0.13805774,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9678191,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T16:30:40Z\",\"WARC-Record-ID\":\"<urn:uuid:aaca0969-73ae-47bd-8982-6624659fcbac>\",\"Content-Length\":\"82051\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90477d40-ec44-45b8-90fb-4adc95dec0b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f9dad75-9134-4271-a157-64db211079a6>\",\"WARC-IP-Address\":\"98.129.229.190\",\"WARC-Target-URI\":\"http://www.win-vector.com/blog/2016/03/sample-monkeys-paw-style-programming-in-r/\",\"WARC-Payload-Digest\":\"sha1:6VFLMTV3WYXJCCUDMXJ2FUPGKFTXLPK2\",\"WARC-Block-Digest\":\"sha1:HJTI7SIU5RW2E475JAZC76ABV5ZBSXPI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146414.42_warc_CC-MAIN-20200226150200-20200226180200-00251.warc.gz\"}"} |
http://proceedings.mlr.press/v80/allen-zhu18a.html | [
"# Katyusha X: Simple Momentum Method for Stochastic Sum-of-Nonconvex Optimization\n\nZeyuan Allen-Zhu ;\nProceedings of the 35th International Conference on Machine Learning, PMLR 80:179-185, 2018.\n\n#### Abstract\n\nThe problem of minimizing sum-of-nonconvex functions (i.e., convex functions that are average of non-convex ones) is becoming increasing important in machine learning, and is the core machinery for PCA, SVD, regularized Newton’s method, accelerated non-convex optimization, and more. We show how to provably obtain an accelerated stochastic algorithm for minimizing sum-of-nonconvex functions, by adding one additional line to the well-known SVRG method. This line corresponds to momentum, and shows how to directly apply momentum to the finite-sum stochastic minimization of sum-of-nonconvex functions. As a side result, our method enjoys linear parallel speed-up using mini-batch."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7886376,"math_prob":0.8253547,"size":3447,"snap":"2019-13-2019-22","text_gpt3_token_len":837,"char_repetition_ratio":0.12460064,"word_repetition_ratio":0.6549451,"special_character_ratio":0.22976501,"punctuation_ratio":0.14630225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.962698,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T14:59:16Z\",\"WARC-Record-ID\":\"<urn:uuid:96aadcc1-8a06-4759-8b78-b10f77b78d0d>\",\"Content-Length\":\"12016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f876b343-f614-4f6c-9d21-bc9e56f0b7e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:a60ce8af-dd68-4e80-95af-b1df73335290>\",\"WARC-IP-Address\":\"192.30.252.153\",\"WARC-Target-URI\":\"http://proceedings.mlr.press/v80/allen-zhu18a.html\",\"WARC-Payload-Digest\":\"sha1:LU6BGR52OYWTLLVVEX4FKWQUK7ANWXNU\",\"WARC-Block-Digest\":\"sha1:P25VVUF3HS4SL45QSKIRKQ4QZ6VOM5G5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203991.44_warc_CC-MAIN-20190325133117-20190325155117-00027.warc.gz\"}"} |
https://erc.wisc.edu/publications/turbulence-transport-in-spatially-developing-reacting-shear-layers/ | [
"# Turbulence Transport in Spatially Developing Reacting Shear Layers\n\nMason, S. D. Turbulence Transport in Spatially Developing Reacting Shear Layers. University of Wisconsin-Madison, 2000.\n\nThe transport of turbulence in non-reacting and reacting shear layers is investigated using direct numerical simulations (DNS). The present DNS code solves non-dimensional transport equations for total mass, momentum, energy, and reactant mass fractions. The combustion is simulated by a single-step, second-order reaction with an Arrhenius reaction rate. The transport equations are solved using a low Mach number approximation where the effects of heat release are accounted for through variable density. The numerical formulation is characterized by a third-order Runge-Kutta time integration, eleventh-order finite-difference spatial derivatives, and a fully consistent fractional-step method for the solution of the momentum equation.\n\nThree-dimensional simulations of one non-reacting shear layer and one reacting shear layer were performed to generate databases for statistical analysis. Transverse budgets of turbulence kinetic energy reveal that the turbulent transport and pressure transport terms have a unique role in the energy balance in that they have different algebraic signs in different regions of the layer. In the non-reacting shear layer, the pressure transport term tends to balance the turbulent transport term. In the reacting shear layer, however, a flip in the pressure transport term is observed and the resulting behavior is similar to the turbulent transport. The pressure transport term for both cases is examined in detail and the flip is attributed to the heat release through correlations with the reaction rate.\n\nThe DNS results are compared with the standard k-? model for production and turbulent transport. When calculated with the standard eddy viscosity closure coefficient, the Boussinesq approximation accurately predicts the production for the non-reacting shear layer but overpredicts it for the reacting shear layer. The calculation of the Boussinesq approximation also shows that the dilatation dissipation is small compared to the solenoidal dissipation. The evaluation of the gradient-diffusion approximation indicates that, in general, the sum of the pressure transport and turbulent transport terms behaves as a gradient-diffusion process for both the non-reacting and reacting shear layers. It is shown that including the pressure transport in the gradient-diffusion approximation makes the model more accurate for the non-reacting shear layer and less accurate for the reacting shear layer."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90817744,"math_prob":0.93013436,"size":2440,"snap":"2022-40-2023-06","text_gpt3_token_len":432,"char_repetition_ratio":0.17816092,"word_repetition_ratio":0.011627907,"special_character_ratio":0.16229509,"punctuation_ratio":0.07106599,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9555382,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T23:18:07Z\",\"WARC-Record-ID\":\"<urn:uuid:6f5b67d9-9d38-41a7-a485-6834517743dd>\",\"Content-Length\":\"59329\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f3c938d-407e-4eba-8215-db3a0d9c291e>\",\"WARC-Concurrent-To\":\"<urn:uuid:00f2508d-7455-4c4c-9a3e-0ac47a09676b>\",\"WARC-IP-Address\":\"99.83.210.234\",\"WARC-Target-URI\":\"https://erc.wisc.edu/publications/turbulence-transport-in-spatially-developing-reacting-shear-layers/\",\"WARC-Payload-Digest\":\"sha1:PXGXJ7GJJPT3ANYAWLBPOJOACRBQBDHH\",\"WARC-Block-Digest\":\"sha1:TT4QKOONT3R24SWW6EF3YCZR5ZLMGNCQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499891.42_warc_CC-MAIN-20230131222253-20230201012253-00286.warc.gz\"}"} |
https://ipv4.calculatoratoz.com/en/vaibhav-mishra/fe44039e-b52d-4330-a91a-1e2bbdaef54a/profile | [
"# Calculators Created by Vaibhav Mishra",
null,
"DJ Sanghvi College of Engineering (DJSCE), Mumbai\n68\nFormulas Created\n1\nFormulas Verified\n14\nAcross Categories\n\n## List of Calculators by Vaibhav Mishra\n\nFollowing is a combined list of all the calculators that have been created and verified by Vaibhav Mishra. Vaibhav Mishra has created 68 and verified 1 calculators across 14 different categories till date.\nCreated Chapman Enskog Equation for Gas Phase Diffusivity\nCreated Diffusivity by Stefan Tube Method\nCreated Diffusivity by Twin Bulb Method\nCreated Fuller-Schettler-Giddings for Binary Gas Phase Diffusivity\nCreated Wilke Chang Equation for Liquid Phase Diffusivity\nCreated Equilibrium Vaporization Ratio for Less Volatile Component\nCreated Equilibrium Vaporization Ratio for more Volatile Component\nCreated Mole fraction of LVC in Liquid using Equilibrium Vaporization Ratio\nCreated Mole fraction of LVC in Vapor using Equilibrium Vaporization Ratio\nCreated Mole Fraction of MVC in Liquid using Equilibrium Vaporization Ratio\nCreated Mole Fraction of MVC in Vapor using Equilibrium Vaporization Ratio\nCreated Relative Volatility using Equilibrium Vaporization Ratio\n5 More Distillation Calculators\nCreated Equivalent Diameter Using Reynolds Number\nCreated Fraction of Cycle Time Used For Cake Formation\nCreated Time Required For Cake Formation\nVerified Discharge Rate of Liquid from an Orifice in Tank\n6 More Fluid Dynamics Calculators\nCreated Pressure Gradient Using Kozeny Carman Equation\n1 More Fluidisation Calculators\nCreated Absorption Factor\nCreated Absorption Factor based on Stripping Factor\nCreated Gas Flowrate for Absorption Column on Solute Free Basis\nCreated Liquid Flowrate for Absorption Column on Solute Free basis\nCreated Maximum Gas Rate for Absorption Column\nCreated Minimum Liquid Rate for Absorption Column\nCreated Minimum Operating Line Slope for Absorption Column\nCreated Number of Absorption Stages by Kremser Equation\nCreated Number of Stages for Absorption Factor Equal to 1\nCreated Operating Line Slope for Absorption Column\nCreated Corrected Murphree Efficiency Percentage for Liquid Entrainment\nCreated Gas Flowrate on Solute Free Basis for Inlet Conditions by Mole Fraction\nCreated Gas Flowrate on Solute Free Basis for Inlet Conditions by Solute Free Mole Fraction\nCreated Liquid Flowrate on Solute Free Basis for Inlet Conditions by Solute Free Mole Fraction\nCreated Liquid Flowrate on Solute Free Basis for Inlet Conditions using Mole Fraction\nCreated Murphree Efficiency of Absorption Operation Based on Point Efficiency for Plug Flow\nCreated Murphree Tray Efficiency of Absorption Operation\nCreated Overall Tray Efficiency for Absorption Column Based on Murphree Efficiency\nCreated Point Efficiency of Absorption Operation\nCreated Solute Free Mole Fraction of Gas in Inlet based on Mole Fraction\nCreated Solute Free Mole Fraction of Liquid in Inlet based on Mole Fraction\nCreated Logarithmic Mean Area of a Cylinder\n24 More Heat Transfer Calculators\nCreated Contact Time based on Penetration Theory\nCreated Diffusivity Based on Film Theory\nCreated Diffusivity Based on Penetration Theory\nCreated Diffusivity Based on Surface Renewal Theory\nCreated Film Thickness based on Film Theory\nCreated Fractional Resistance Offered by Gas Phase\nCreated Fractional Resistance Offered by Liquid Phase\nCreated Gas Phase Mass Transfer Coefficient based on Two Film Theory\nCreated Gas Phase Mass Transfer Coefficient using Fractional Resistance by Gas Phase\nCreated Liquid Phase Mass Transfer Coefficient based on Two Film Theory\nCreated Liquid Phase Mass Transfer Coefficient using Fractional Resistance by Liquid Phase\nCreated Mass Transfer Coefficient based on Film Theory\nCreated Mass Transfer Coefficient based on Surface Renewal Theory\nCreated Overall Gas Phase Mass Transfer Coefficient using Fractional Resistance by Gas Phase\nCreated Overall Liquid Phase Mass Transfer Coefficient using Fractional Resistance by Liquid Phase\nCreated Penetration Theory of Mass Transfer\nCreated Surface Renewal Rate based on Surface Renewal Theory\nCreated Specific Surface Area Of Mixture\nCreated Surface Shape Factor\nCreated Total Surface Area of Particle using Spericity\n9 More Mechanical Operations Calculators\nCreated Combined Overall Efficiency of the Screen\nCreated Screen Effectiveness Based on Oversize Material From Overall Efficiency\n10 More Mechanical Separation Calculators\nCreated Area of Product in terms of Crushing Efficiency\nCreated Maximum Diameter of Particle Nipped by Rolls",
null,
""
] | [
null,
"https://ipv4.calculatoratoz.com/Images/DefaultUserImage.png",
null,
"https://ipv4.calculatoratoz.com/Images/share.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72059953,"math_prob":0.4428082,"size":5810,"snap":"2022-27-2022-33","text_gpt3_token_len":1463,"char_repetition_ratio":0.20289356,"word_repetition_ratio":0.25319397,"special_character_ratio":0.16884682,"punctuation_ratio":0.010309278,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.953902,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T11:45:24Z\",\"WARC-Record-ID\":\"<urn:uuid:3c8ba46b-4981-4987-8107-72bc80bb34f8>\",\"Content-Length\":\"103121\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e035be3a-6a36-463e-b851-7ffd561c1599>\",\"WARC-Concurrent-To\":\"<urn:uuid:70af6517-e40a-416c-8daa-48f31a8948d0>\",\"WARC-IP-Address\":\"67.43.15.151\",\"WARC-Target-URI\":\"https://ipv4.calculatoratoz.com/en/vaibhav-mishra/fe44039e-b52d-4330-a91a-1e2bbdaef54a/profile\",\"WARC-Payload-Digest\":\"sha1:GUNDF4JXOCSGJZHP52FALHDVYSSCLFXE\",\"WARC-Block-Digest\":\"sha1:663XFHMPG36JV2PRFZCNDBDNSX3HXQ3U\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103205617.12_warc_CC-MAIN-20220626101442-20220626131442-00757.warc.gz\"}"} |
https://betsandpieces.net/2017/10/ | [
"# October 2017\n\n## Probability of Negative Return",
null,
"In our earlier article, ‘What is a Nap?’, we briefly mentioned the probability of negative return but, at that stage, we didn’t expound on what it is, how it’s calculated or how to use it in relation to horseracing results.\n\nHowever, what we did say in the earlier article was that to properly analyse a series of horseracing selections, such as those supplied by a tipster, you need a large number of selections. The reason for this is that, when the number of selections is large, the distribution of horseracing data looks the normal, or Gaussian, distribution for a random variable.\n\nThe probability of negative return is a simple statistic to calculate and understand. The figure calculated represents the probability of a series of selections producing a negative return or, in other words, a level stakes loss. The higher the figure calculated the higher the probability of incurring such as loss.\n\nIn order to calculate the probability of negative return for a series of selections, you need to:\n\nCalculate the theoretical percentage profit from each selection, to level stakes. A winner at even money would generate a profit of 100%, a winner at 2/1 would generate a profit of 200% and so on, while any loser would generate a profit of -100%.\n\nCalculate the average, or mean, percentage profit. The easiest way to do this (and the subsequent steps in the calculation) is to create a spreadsheet in Microsoft Excel – which can be as simple as a single column containing the percentage profit for each selection – and use the AVERAGE function.\n\nCalculate the standard deviation of the data in your spreadsheet using the STDEV function. Algebraically, standard deviation, S, is calculated according to the formula below, but if you use Microsoft Excel you don’t need to worry about the mathematics.\n\nIf your percentage profit data is contained in, say, cells A1 to A8 of your spreadsheet, simply use the STDEV function in the form STDEV(A1:A8).\n\nDivide the result by the square root of the number of selections, using the SQRT function, to give the standard error.\n\nAssuming that horse racing results are at least approximately normally distributed, you can use the NORMDIST function in Excel, in the form = NORMDIST (0, mean, stderr, 1), where mean and stderr are the mean percentage profit and standard error of the percentage profit, which you calculated in the steps 2. and 4. above.\n\nThe result should be a figure between 0 and 1 and, as mentioned above, the higher the figure the higher the probability of negative return and versa. A probability of 0.10 represents a 10% chance of making a loss, or a 90% chance of making a profit, from a series of selections and is typically the maximum value that you’d want to accept if you’re going to back future selections with real money.\n\nWe hope you enjoyed ‘Probability of Negative Return’ and we will be back soon with another advanced betting guide. In the meantime, we would love to hear your thoughts on ‘Probability of Negative Return’ in the comments section below."
] | [
null,
"https://betsandpieces.net/wp-content/uploads/2018/05/negative-return.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9307361,"math_prob":0.97187716,"size":3017,"snap":"2022-40-2023-06","text_gpt3_token_len":638,"char_repetition_ratio":0.13740458,"word_repetition_ratio":0.015686275,"special_character_ratio":0.21014252,"punctuation_ratio":0.11262798,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998655,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T20:56:10Z\",\"WARC-Record-ID\":\"<urn:uuid:1d716e82-37a9-4002-a709-608dc7ee5173>\",\"Content-Length\":\"39949\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:379bb453-4f89-4b8c-8e7b-c758af73f86e>\",\"WARC-Concurrent-To\":\"<urn:uuid:780c1214-eb32-458e-9970-ab6ebe5a541e>\",\"WARC-IP-Address\":\"172.96.187.176\",\"WARC-Target-URI\":\"https://betsandpieces.net/2017/10/\",\"WARC-Payload-Digest\":\"sha1:3NNLP72FCJYV2DKKMZAJEWG3MT2TPCFS\",\"WARC-Block-Digest\":\"sha1:BZZ6ZUCJ4HGTTFDCHJUBMVEGILQ5CX3K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764495012.84_warc_CC-MAIN-20230127195946-20230127225946-00395.warc.gz\"}"} |
https://www.convert-measurement-units.com/convert+Ounce+per+second+to+Kilogram+per+second.php | [
" Convert oz/s to kg/s (Ounce per second to Kilogram per second)\n\n## Ounce per second into Kilogram per second\n\nnumbers in scientific notation\n\nhttps://www.convert-measurement-units.com/convert+Ounce+per+second+to+Kilogram+per+second.php\n\n## How many Kilogram per second make 1 Ounce per second?\n\n1 Ounce per second [oz/s] = 0.028 349 523 125 Kilogram per second [kg/s] - Measurement calculator that can be used to convert Ounce per second to Kilogram per second, among others.\n\n# Convert Ounce per second to Kilogram per second (oz/s to kg/s):\n\n1. Choose the right category from the selection list, in this case 'Mass flow rate'.\n2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), brackets and π (pi) are all permitted at this point.\n3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Ounce per second [oz/s]'.\n4. Finally choose the unit you want the value to be converted to, in this case 'Kilogram per second [kg/s]'.\n5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.\n\nWith this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '918 Ounce per second'. In so doing, either the full name of the unit or its abbreviation can be usedas an example, either 'Ounce per second' or 'oz/s'. Then, the calculator determines the category of the measurement unit of measure that is to be converted, in this case 'Mass flow rate'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Alternatively, the value to be converted can be entered as follows: '71 oz/s to kg/s' or '81 oz/s into kg/s' or '97 Ounce per second -> Kilogram per second' or '95 oz/s = kg/s' or '77 Ounce per second to kg/s' or '61 oz/s to Kilogram per second' or '11 Ounce per second into Kilogram per second'. For this alternative, the calculator also figures out immediately into which unit the original value is specifically to be converted. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator and it gets the job done in a fraction of a second.\n\nFurthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(24 * 86) oz/s'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '918 Ounce per second + 2754 Kilogram per second' or '22mm x 21cm x 30dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question.\n\nIf a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 2.313 778 991 290 3×1030. For this form of presentation, the number will be segmented into an exponent, here 30, and the actual number, here 2.313 778 991 290 3. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket calculators, one also finds the way of writing numbers as 2.313 778 991 290 3E+30. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 2 313 778 991 290 300 000 000 000 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8548307,"math_prob":0.97224313,"size":3770,"snap":"2021-43-2021-49","text_gpt3_token_len":885,"char_repetition_ratio":0.14551248,"word_repetition_ratio":0.022727273,"special_character_ratio":0.2612732,"punctuation_ratio":0.11826544,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98813117,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T01:57:17Z\",\"WARC-Record-ID\":\"<urn:uuid:c675eea4-f59b-49b9-99f1-defec6625c96>\",\"Content-Length\":\"49882\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:da6a3ff5-13d9-4a14-9df4-ce9e806b3534>\",\"WARC-Concurrent-To\":\"<urn:uuid:b9725228-e555-4a86-9887-51eb3f210e06>\",\"WARC-IP-Address\":\"135.181.75.227\",\"WARC-Target-URI\":\"https://www.convert-measurement-units.com/convert+Ounce+per+second+to+Kilogram+per+second.php\",\"WARC-Payload-Digest\":\"sha1:YEJOKKLRKYZBIE2LPNWTCCE4C2ZMRZHU\",\"WARC-Block-Digest\":\"sha1:KJI4CQHFFPELU233UR6DBXOERYHXYHRD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585837.82_warc_CC-MAIN-20211024015104-20211024045104-00064.warc.gz\"}"} |
https://calculator.academy/bsa-calculator-body-surface-area/ | [
"Enter your height and weight into the calculator to determine your estimated body surface area.\n\n## BSA Formula\n\nThe following formula is used to calculate your body surface area.\n\nBSA = SQRT ( H * W / 3600 )\n\n• Where BSA is your body surface area (m^2)\n• H is your height (cm)\n• W is your weight (kg)\n\nThis equation can be adjusted to use pounds and inches as the units as well. The calculator does this for you.\n\n## BSA Definition\n\nBSA is defined as the total surface area of one’s body, also known as body surface area.\n\n## BSA Example\n\nHow to calculate BSA?\n\nMeasure your height and convert the units to cm.\n\nMeasure your weight and convert the units to kg."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9407613,"math_prob":0.98976207,"size":1252,"snap":"2022-27-2022-33","text_gpt3_token_len":281,"char_repetition_ratio":0.18990384,"word_repetition_ratio":0.008928572,"special_character_ratio":0.221246,"punctuation_ratio":0.10204082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9895197,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T05:45:19Z\",\"WARC-Record-ID\":\"<urn:uuid:9c501fce-8b49-42a0-b027-a5a66f142785>\",\"Content-Length\":\"138338\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1af83770-912a-426d-ac7a-2f4decb5e57f>\",\"WARC-Concurrent-To\":\"<urn:uuid:caf65fa5-77d0-4cee-956e-f38cb1ce78bf>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://calculator.academy/bsa-calculator-body-surface-area/\",\"WARC-Payload-Digest\":\"sha1:P2VTLABJRS5TACZJGO4QDO6XNGMYVBCK\",\"WARC-Block-Digest\":\"sha1:7ZQE43QUHDJ7AFNKSEAMUTMFOLGX4GEZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104215790.65_warc_CC-MAIN-20220703043548-20220703073548-00038.warc.gz\"}"} |
https://www.tutorialspoint.com/decimal-addition-in-8085-microprocessor | [
"# Decimal addition in 8085 Microprocessor\n\nMicroprocessorMicrocontroller8085\n\nIn a digital computer, everything is represented using 0s and 1s only. For example, instruction will have a code using only 0s and 1s. Data is also represented using 0s and 1s. Data can be of different types like unsigned numbers, signed numbers, floating point numbers, binary coded decimal (BCD) numbers, etc. Thus, a series of 0s and 1s will acquire a value based on the interpretation. For decimal addition, we are having a very important and common instruction DAD. Let us discuss more on that instruction now.\n\nIn spite of the fact that 8085 is an 8-bit microprocessor, but there are some instructions are there available in the 8085 instruction set which can do 16-bit additions also. As the 8085 internal architecture is only 8-bits, this instruction easily takes double the time needed to add two 8-bit numbers.\n\nHere, DAD is a mnemonic, which stands for Double ADd and also rp stands for any one of the following register pairs as mentioned below –\n\nrp = BC, DE, or HL\n\nAs rp can have any of the three values, there are three opcodes for this type of instruction. It occupies only 1-Byte in memory.\n\nMnemonics, Operand\nOpcode (in HEX)\nBytes\n09\n1\n19\n1\n29\n1\n\nIn this instruction, HL register pair works as Accumulator. Because the 16-bit content of rp will be added with HL register pair content and sum thus produced will be stored back on to HL again.\n\nThough it is an arithmetic instruction, by design, flags other than Cy, will not get affected by the execution of this instruction DAD rp.\n\nLet us consider DAD B is a sample instruction of this category. It is 1-Byte instruction so it occupies a single Byte place in the memory. We are considering the initial content of HL and BC register pairs are 4050H and 4060H. So after 16-bit addition, the current content of HL register pair will be 80B0H. The result of execution of this instruction is shown below with a tracing table –\n\nBefore\nAfter\n(HL)\n4050H\n80B0H\n(BC)\n4060H\n4060H\n(F)\nAny values\nCy=0, no change on other flag bits\n\nHex Codes\nMnemonic\nComment\n2006\n09",
null,
""
] | [
null,
"https://www.tutorialspoint.com/assets/questions/media/18911/81_1.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91805476,"math_prob":0.8844498,"size":2700,"snap":"2021-31-2021-39","text_gpt3_token_len":677,"char_repetition_ratio":0.13464391,"word_repetition_ratio":0.0,"special_character_ratio":0.25037038,"punctuation_ratio":0.10055866,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9515775,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T03:52:07Z\",\"WARC-Record-ID\":\"<urn:uuid:df03f9b6-9548-443b-831a-47a7bbc748b2>\",\"Content-Length\":\"33634\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af8b5f0a-e231-477a-a6b0-18ed6306f0bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8142c7c-5f72-4150-be56-ad6ff9fd5586>\",\"WARC-IP-Address\":\"72.21.91.42\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/decimal-addition-in-8085-microprocessor\",\"WARC-Payload-Digest\":\"sha1:Z6DSVT5S3CZPML6E5XOAEXBR5IJLLEVB\",\"WARC-Block-Digest\":\"sha1:PXDUKGD7LSJZ2Y2CSZT4HMJ4KR6CRCLZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057496.18_warc_CC-MAIN-20210924020020-20210924050020-00121.warc.gz\"}"} |
http://goldfangwang.com/3k64dtnvdjhfu.html | [
"## 手\b机\b牛\b牛\b游\b戏\b注\b册\b送\b分\b\n\n2019-10-24 16:38:04\n\n“^我^们^走^自^己^的^路^,^具^有^无^比^广^阔^的^舞^台^,^具^有^无^比^深^厚^的^历^史^底^蕴^,^具^有^无^比^强^大^的^前^进^定^力^”^。^过^去^7^0^年^,^已^经^证^明^这^条^道^路^的^成^功^;^面^向^未^来^,^这^条^道^路^必^将^越^走^越^宽^广^。^我^们^有^这^样^的^自^信^,^我^们^有^这^样^的^定^力^!^"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.87603813,"math_prob":0.99999917,"size":1331,"snap":"2019-43-2019-47","text_gpt3_token_len":1601,"char_repetition_ratio":0.06028636,"word_repetition_ratio":0.0,"special_character_ratio":0.32682195,"punctuation_ratio":0.0029850747,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000075,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-16T20:52:55Z\",\"WARC-Record-ID\":\"<urn:uuid:e7a070e4-bae5-4131-9322-2e7a556e6748>\",\"Content-Length\":\"10404\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63ecaf94-4fbf-4acb-9eb5-43efd9786372>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed862fad-e43c-4048-9db1-8dabf304f56e>\",\"WARC-IP-Address\":\"103.244.151.217\",\"WARC-Target-URI\":\"http://goldfangwang.com/3k64dtnvdjhfu.html\",\"WARC-Payload-Digest\":\"sha1:HJWZZRNIN4YQVGOPZEL2PERKUYVXGP2S\",\"WARC-Block-Digest\":\"sha1:SCZQ3W74ZHUCBV6PJ7EMT5CA6LJOZ7JL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668765.35_warc_CC-MAIN-20191116204950-20191116232950-00074.warc.gz\"}"} |
http://scholarpedia.org/article/Nordtvedt_effect | [
"# Nordtvedt effect\n\nPost-publication activity\n\nCurator: Kenneth Nordtvedt\n\n## Conceptual and Historical Foundations\n\nIn the years just prior to Einstein's publication of his special relativity theory Dutch physicist Hendrik Lorentz showed using Maxwell's equations that the electric fields within an object of distributed charge density would produce inertia in proportion to the object's electric potential energy content divided by the square of the speed of light. This production of inertia from field energy results from the curvature of the electric field lines of force between the elements of charge in an accelerated body which the $1/c^2$ corrections to Maxwell equations' electrostatics generate. An important and historic manifestation of this electric energy contribution to mass is one of the terms in the Weizsäcker-Bethe semi-empirical mass formula for nuclei which leads to favorable energetics for nuclear fission of nuclei with sufficiently large atomic number $Z$: $\\delta M(A,Z)=\\frac{3}{5} \\frac{Z(Z-1)e^2}{c^2 R(A)}$ for $Z$ uniformly distributed protons within a nucleus of $A$ baryons and radius $R(A)=R_o \\: A^{1/3}$, $R_o$ being a fixed length parameter related to nuclear matter's central density.\n\nThe subsequent theory of special relativity generalized this connection by asserting that every form of energy within a body contributes to the body's inertial mass; $M=E/c^2$. This leads naturally to the question of whether the gravitational binding energy within celestial bodies contribute to those bodies' masses in accord with the relationship from special relativity? This question has an additional interest for gravity. There are two masses for every body in gravitational dynamics -- the inertial masses of each body and the so-called gravitational masses (gravitational coupling strengths) of the bodies indicating their strength for both producing and responding to gravitational fields or forces. Newton set the gravitational mass of bodies equal to their inertial masses in response to the observations of Galileo, himself, and others that all objects seemed to fall at identical rates in gravity, but for our considerations of the conversion of internal gravitational energy into mass the two mass concepts must be kept apart. Different aspects of the gravitational field theory are involved in determination of the internal gravitational energy contribution to each type of mass. One must also investigate whether and how strongly gravitational fields of outside bodies \"pull on\" (couple to) the gravitational energy within bodies and the resulting contributions to the gravitational to inertial mass ratio of celestial bodies. This leads to a plausible hypothesis for both theoretical and experimental exploration of whether celestial bodies might have a gravitational to inertial mass ratio which differs from one in proportion to each body's Newtonian gravitational binding energy, \\begin{equation} \\tag{1} \\frac{M(G)_i }{M(I)_i}=1-\\eta \\frac{1}{M_i c^2} \\int \\frac{G\\rho(\\vec{r})\\rho (\\vec{r}' )\\:d^3 rd^3 r'}{2\\:|\\vec{r}-\\vec{r} '|} \\end{equation} thereby producing the possibility of novel and measurable alterations of body trajectories in celestial mechanics: $\\frac{d^2\\vec{r}_i}{dt^2}=\\frac{M(G)_i}{M(I)_i}\\sum_j \\frac{GM(G)_j}{\\left|\\vec{r}_i-\\vec{r}_j \\right|^3} (\\vec{r}_j-\\vec{r}_i) + \\cdots$ Even if the coefficient $\\eta$ in Equation (1) has the zero value of general relativity, it could very well have non-zero value in alternative theories of gravity, rendering the measurement of this parameter for celestial bodies a new route for testing gravitational theory. And if the coefficient $\\eta$ were empirically found to be zero, the aspects of general relativity's equations of motion for bodies which were ingredients in determining that $\\eta = 0$ could very well be broader in scope than those limited features of the theory which were relevant to the traditional tests of general relativity. In the late 1960s the experimental foundations of general relativity consisted of the anomalous perihelion precession of the planet Mercury's orbit, and the deflection of light rays from distant stars which made close passage by the Sun and were viewed during solar eclipses. Additionally, laboratory experiments comparing the free fall rates of different elements had concluded that any material dependence of those rates was less than a part in $10^{11}$. But since those laboratory samples contained utterly negligible fractions of gravitational binding energy (about a part in $10^{25}$), those experiments did not speak to the hypothesis given in Equation (1). They just showed that all forms of field energies other than gravitational -- nuclear, electromagnetic, etc., -- as well as kinetic energies in the various elements contributed equally to high precision in producing gravitational and inertial masses of material matter.\n\nEddington had shed transparency on two features of the static spherically symmetric metric field of the Sun which were measured by the light deflection and Mercury perihelion precession observations when he expressed the Sun's static metric potentials as (Eddington A, 1957) \\begin{eqnarray} \\tag{2} g_{00} &=& 1-2\\frac {Gm}{c^2 r}+2\\beta \\left( \\frac {Gm}{c^2 r} \\right)^2 \\\\ -g_{ab} &=& \\left( 1+2\\gamma \\frac {Gm}{c^2 r} \\right)\\delta_{ab} \\nonumber \\end{eqnarray} with $\\gamma =\\beta=1$ in general relativity but possibly having different values in alternative metric theories of gravity. Then the two mentioned observations could be interpreted as measuring those parameters through the calculated relationships: $\\omega_P=(2+2\\gamma -\\beta )\\frac{GM(Sun)}{c^2 R(1-e^2)}\\frac{2\\pi}{T(Mer)} \\qquad \\Theta=2(1+\\gamma )\\frac{GM(Sun)}{c^2 D(Sun)}$ with $R$ and $e$ being the semi-major axis and eccentricity of Mercury's orbit and $T(Mer)$ being the orbit's period. $D(Sun)$ is the distance of closest approach of the deflected light ray passing the Sun. The calculations of these observational effects rested on treating the light ray as a test particle moving on a null geodesic through the Sun's metric field as parameterized by Eddington - Equation (2): $\\sqrt{g_{\\mu \\nu}\\:dx^{\\mu} dx^{\\nu}}=0$ which yields a variable coordinate speed of light, $c(r)=c_{\\infty}\\left(1-(1+\\gamma)\\frac{GM(Sun)}{c^2 r} \\right)$ while Mercury's orbit was assumed to be a test particle moving on a geodesic (extremum trajectory) of the same gravitational metric field. $\\frac{d}{dt}\\frac{\\partial L}{\\partial \\vec{v}}-\\frac{\\partial L}{\\partial \\vec{r}}=0 \\text{ with } L=\\sqrt{g_{\\mu \\nu}\\:dx^{\\mu}/dt\\: dx^{\\nu}/dt}$\n\nBut the calculation of the gravitational to inertial mass ratio of a celestial body, taking into account its internal gravitational energy, depends on a much more general and complete expression for the gravitational metric fields produced by a general N-body system of moving sources, thereby revealing that broader features of the gravitational metric field contribute to this ratio and its observable effects, as well as involving the Eddington parameters $\\gamma$ and $\\beta$ in a novel combination. The parameter $\\beta$ indeed acquires a more precise experimental measurement from observables dependent on this ratio.\n\n## Calculation of a Celestial Body's Gravitational to Inertial Mass Ratio\n\nIn order to assume as little as possible about the details of the non-gravitational interactions between the matter elements of a celestial body, the original calculation of gravitational to inertial mass for such a body was done by considering it as an equilibrium gas held together gravitationally; N \"atoms\" or mass elements were assumed to be held together in stationary equilibrium by a balance between attractive gravitational forces and the effective pressure of the atoms' kinetic motion. Gravity was assumed to be a metric theory in which a tensor field $g_{\\mu \\nu}$ was the final vehicle expressing gravity's effect on matter. Each atom was assumed to follow the geodesic motion through the metric field as given below by Equation (3). The metric field components are produced by both a distant external body (or bodies) and the other N-1 atoms of the celestial body being considered. A quite general metric field expansion for the metric fields was needed to replace the static, spherically symmetric, single body source, special case of Eddington. The source of the metric field was now a many body system with the individual sources (the atoms or mass elements) being in motion and undergoing acceleration rather than being at rest. The generalized metric field expansion to first post-Newtonian order takes the form \\begin{eqnarray} \\tag{3} g_{00}&=& 1-2U +2 \\beta U^2 +2\\alpha ' \\sum_{ij} \\frac {G^2m_im_j }{c^4 |\\vec{r}-\\vec{r}_i ||\\vec{r}_i -\\vec{r}_j |} \\\\ &-&\\alpha ''\\sum_i \\frac {Gm_i}{c^4 |\\vec{r}-\\vec{r}_i |}v_i ^2+ \\alpha ''' \\sum_i \\frac {Gm_i}{c^4 |\\vec{r}-\\vec{r}_i |^3 }(\\vec{v}_i \\cdot (\\vec{r}-\\vec{r}_i ))^2 + \\chi \\sum_i \\frac {Gm_i}{c^4 |\\vec{r}-\\vec{r}_i |} \\vec{a}_i \\cdot (\\vec{r}-\\vec{r}_i ) \\nonumber \\\\ -g_{ab} &=&( 1+2 \\gamma U )\\delta_{ab}\\mbox{ for }a,b=x,y,z \\nonumber \\\\ g_{0a}&=&4 \\Delta \\sum_i \\frac {Gm_i}{c^3 |\\vec{r}-\\vec{r}_i |}(\\vec{v}_i )_a +4\\Delta '\\sum_i \\frac {Gm_i }{c^3 |\\vec{r}-\\vec{r}_i |^3} \\vec{v}_i \\cdot (\\vec{r}-\\vec{r}_i )(\\vec{r}-\\vec{r}_i )_a \\nonumber \\end{eqnarray} with an N-body Newtonian potential replacing Eddington's single source $m$: $U=\\sum_i \\frac {Gm_i}{c^2 |\\vec{r}-\\vec{r}_i |}$ Note the multiplicity of potentials now required to take into general account the motion of the matter sources, and especially the mixed space-time potentials $g_{0a}$. And a novel nonlinear potential parameterized by $\\alpha '$ supplements the $\\beta$ potential by necessity in the temporal $g_{00}$ metric potential when multiple sources are present. A potential proportional to source accelerations $\\vec{a}_i$ is included, although coordinate gauge choices could make such a potential absent. At this stage minimum assumptions are employed to restrict the form of the general metric field expansion; neither gauge considerations nor presence or not of energy-momentum conservation laws of isolated systems, nor even the Lorentz invariance of resulting gravitational physics are imposed on the metric field expansions, because we want to consider the possibility that an alternative theory might not fulfill those conditions. The various coefficients $\\gamma,\\beta,\\Delta,\\chi, \\, \\dots,$ are tags used to help keep track of how each metric field potential ultimately contributes to any calculated experimental observable. Those numerical tags are calculable within any metric theory of gravity and generally will differ from the values they take in Einstein's pure tensor general relativity. Given this general metric expansion, each atom $i$ of the celestial body then acquires its equation of motion from the previously stated geodesic principle which takes the explicit form \\begin{equation} \\tag{4} \\left(\\frac {d}{dt}\\frac {\\partial }{\\partial \\vec{v}_i}-\\frac {\\partial }{\\partial \\vec{r}_i}\\right) \\sqrt{g_{00}+2\\vec{h}\\cdot \\vec{v}_i -\\left( 1+2\\gamma U \\right) v_i ^2}=0 \\end{equation} with $g_{00}$ and $\\vec{h}=(g_{0x},g_{0y},g_{0z})$ given in Equations (3). The key to obtaining the modification of a body's total gravitational mass due to its internal gravitational binding energy is the realization that each atom $i$ of the body responds to nonlinear gravitational fields due to and proportional to both the external bodies and the other matter within the body of interest, itself. With $U = \\frac{1}{c^2}\\left(\\frac{GM_{ex}}{|\\vec{R}-\\vec{r}_i |}+\\sum_{j \\neq i\\:in\\:body} \\frac{Gm_j}{r_{ij}}\\right)$ two types of explicitly nonlinear gravitational potentials in the Lagrangian for atom $i$ each produce parts which are additive accelerations for the body as a whole when collected in a weighted sum over all atoms of the body: these contributing parts have the forms \\begin{eqnarray} U^2\\rightarrow2\\frac{Gm_j }{c^2r_{ij}} \\frac{GM_{ex }}{c^2|\\vec{R}-\\vec{r}_i |}\\rightarrow 2\\frac{Gm_j }{c^4 r_{ij}} \\vec{g}_{ex}\\cdot \\vec{r}_i \\nonumber \\\\ \\sum_{qp}\\frac{G^2m_p m_q}{r_{ip} r_{pq}}\\rightarrow \\frac{Gm_j }{r_{ij}} \\frac{GM_{ex }}{|\\vec{R}-\\vec{r}_j |}\\rightarrow \\frac{Gm_j }{r_{ij}}\\vec{g}_{ex}\\cdot \\vec{r}_j \\nonumber \\end{eqnarray} with the gravitational acceleration field of an external body located at $\\vec{R}$ being the traditional Newtonian expression $\\vec{g}_{ex}=\\frac{GM_{ex}}{ R^3}\\vec{R}$ or the sum of such Newtonian acceleration fields due to multiple bodies.\n\nOn organizing the total equation of motion of a body's mass elements, a first category of terms which are collected together are those proportional to both $\\vec{g}_{ex} /c^2$ and internal energy-related variables and parameters of the celestial body under consideration. Collectively, such terms contribute to the gravitational mass of the body and are proportional to either kinetic energies of atoms in the body or gravitational potential energies between pairs of atoms in the body. Then setting the acceleration of each atom equal to an internal part, $\\vec{a}(int)_i$, plus an acceleration $\\vec{A}$ common to all atoms of the body, $\\vec{a}_i =\\vec{a}(int)_i +\\vec{A}$, a second category of terms emerge -- those proportional to $\\vec{A} /c^2$ -- which when collected all together contribute to the inertial mass of the body. Just like the collection of terms contributing to gravitational mass, this second collection will be proportional to either kinetic energies of atoms or gravitational potential energies between pairs of atoms within the body. By weighting each atom's equation of motion by the factor $m_i /\\sum_j m_j$, and summing over all atoms' equations of motion, all terms other than those collected by the two categories described above cancel out1. The modified Newtonian acceleration of the celestial body then takes the form\n\n$\\vec{A} = \\left [ \\frac{M(G)}{M(I)} \\right]\\vec{g}_{ex}$ in which the desired gravitational to inertial mass ratio of the body is given by (Nordtvedt K, 1968)\n\n\\begin{eqnarray} \\tag{5} & & \\left [ \\frac{M(G)}{M(I)} \\right] = 1 + \\frac{1}{Mc^2} \\left\\{ \\eta_1 \\sum_{ij}\\frac{Gm_i m_j}{2r_{ij}} \\right. \\\\ & & + \\left. \\eta_2 \\left( \\sum_i m_i v_i ^2 - \\frac{1}{2}\\sum_{ij} \\frac{Gm_i m_j}{r_{ij}} \\right) + \\eta_3 \\left[ \\sum_i m_i\\vec{v}_i \\vec{v}_i - \\frac{1}{2}\\sum_{ij}\\frac{Gm_i m_j}{r_{ij}^3}\\vec{r}_{ij} \\vec{r}_{ij} \\right] \\right\\} \\nonumber \\end{eqnarray} with2\n\n\\begin{equation} \\tag{6} \\eta_1 = 8\\Delta -4\\beta -3\\gamma -\\chi + \\frac{1}{3}\\left(2\\beta +\\chi + 8 \\Delta ' - \\alpha ' -2 \\right) \\end{equation} Two additional terms with non-zero parameter coefficients $\\eta_2$ and $\\eta_3$ are virials of the gaseous celestial body. Both the spatial tensor virial and its trace, the scalar virial, statistically vanish if the body is in internal equilibrium3 \\begin{eqnarray} \\left \\langle \\sum_i m_i\\vec{v}_i \\vec{v}_i - \\frac{1}{2}\\sum_{ij}\\frac{Gm_i m_j}{r_{ij}^3}\\vec{r}_{ij} \\vec{r}_{ij}\\right \\rangle =0 \\nonumber \\\\ \\left \\langle \\sum_i m_iv_i ^2 -\\frac{1}{2}\\sum_{ij}\\frac{Gm_i m_j}{r_{ij}}\\right \\rangle =0 \\nonumber \\end{eqnarray}\n\nIf the gravity theory fulfills local Lorentz invariance (no preferred inertial frames revealed by the gravitational physics) then $\\chi = 1,\\; \\Delta = (1 + \\gamma)/2,\\;\\Delta ' = 0,\\; \\alpha '' = 1+2\\gamma$. If the gravity theory fulfills energy-momentum conservation then $\\alpha ' = 2 \\beta -1$, and \\begin{equation} \\tag{7} \\frac{M(G)}{M(I)} = 1 - ( 4\\beta - 3 - \\gamma)\\frac{1}{Mc^2} \\sum_{ij}\\frac{Gm_i m_j}{2r_{ij}} \\end{equation} The scalar-tensor theory of gravity proposed by Brans and Dicke (Brans C, Dicke R, 1961) -- also known as Jordan-Brans-Dicke Theory -- has Eddington parameter values $\\beta = 1$ and $\\gamma = (1+\\omega)/(2+\\omega )$ with $1/\\omega$ being an approximate measure of the fractional contribution of the scalar field to the gravitational interaction; while more general scalar-tensor theories of gravity will have both $\\gamma$ and $\\beta$ values which differ from general relativity's values of one.\n\nThere is an interesting alternative interpretation of this expression for gravitational to inertial mass ratio which ties this ratio to another novelty of alternative theories of gravity. If $4\\beta - 3 - \\gamma$ is not zero, then Newton's gravitational parameter $G$ will generally vary in space and time depending on the matter distribution which surrounds a local system of interest, such as galaxy or universe surrounding the solar system, or Sun in vicinity of earth-moon system. Although there is no a-priori value established for $G$ in general relativity from which deviations could be measured, and in fact there is only the ability to measure the value of $G$ with modest precision anyway, a spatial variation of $G$ has consequences and is an explanation for the alteration of the gravitational to inertial mass ratios of celestial bodies. Such gradients produce anomalous accelerations on celestial bodies because the gravitational binding energy contributions to their mass-energies proportional to $G$ then vary with position along the gradient of $G$:\n\n$\\delta \\vec{a} = -\\frac{\\vec{\\nabla}Mc^2}{M} = \\frac{1}{M} \\sum_{ij} \\frac{Gm_i m_j}{2r_{ij}} \\frac{\\vec{\\nabla}G}{G}$ with4\n\n$\\frac{\\delta G}{G} = (4\\beta - 3 - \\gamma ) \\frac{GM_S}{c^2R}$ The equilibrium gas model for the celestial body used for obtaining this result was subsequently generalized to a solid state model with both electric and gravitational forces between the body's mass elements (Nordtvedt K, 1970) and also to a general fluid model of body matter (Will C, 1971).\n\n## Lunar Laser Ranging as measurement of $M(G)/M(I)$\n\nWhile a star like the Sun has fractional gravitational binding energy of about $4~10^{-6}$ and neutron stars have gravitational binding energies contributing of order ten percent (negatively) to their masses, when taking into account the precision of available means to measure orbits, the lunar orbit has turned out to be the primary tool for measuring the gravitational to inertial mass ratio of celestial bodies. This capability became possible by the placement of a passive laser reflector on the lunar surface during the first Apollo landing in 1969 and subsequent reflector placements with later manned and unmanned lunar landings. Within weeks of the first deployment of a reflector on the Moon, round trip transit times for laser pulses initiated from and returned to observatories on Earth began to be recorded. Laser ranging to reflectors on the Moon has continued using constantly improved technology ever since; initial range precisions of tens of centimeters have been reduced to centimeter-level, and with formal errors of model parameter fits down to the millimeter level precision. But with consideration of overall modeling uncertainties measurement of orbital perturbation amplitudes at the several millimeter level are today obtained. The key frequencies of lunar orbital motion are determined with even higher precision due to the almost half century series of range measurements spanning hundreds of lunar orbit cycles. Almost 20,000 range measurements between stations on Earth and reflectors on the Moon have been made between 1969 and the present (Shapiro I I, Counselman, King R W, 1976; Williams J G et al, 1976; Williams J G, Turyshev S G, Boggs D H, 2012).\n\nIf the difference between acceleration of Moon and Earth in the Sun's gravitational field $\\vec{g}_S$ is $\\vec{a}_M - \\vec{a}_E = \\left( \\Delta_M - \\Delta_S \\right) \\vec{g}_S = \\Delta \\vec{g}_S$ then the lunar orbit will be \"polarized\" toward or away from the Sun, depending on sign of $\\Delta$. In the simplest approximation the radial Moon-Earth orbital distance $r$ and orbit's angular motion $h=|\\vec{r} \\times \\vec{v} |$ fulfill the perturbed equations of motion: \\begin{eqnarray} \\ddot{r} = -\\frac{G m_E}{r^2} + \\frac{h^2}{r^3} = \\Delta g_S \\cos \\dot{D}t \\nonumber \\\\ \\dot{h} = -r \\Delta g_S \\sin \\dot{D}t \\nonumber \\end{eqnarray} with $\\dot{D}$ being the Moon's synodic frequency (frequency of New Moon occurrences). In this approximation a synodic frequency perturbation in Moon-Earth range then results, $\\delta r(t) =\\Delta \\left(1 + \\frac{2\\omega}{\\dot{D}}\\right) \\frac{g_S}{\\omega^2 - \\dot{D}^2 } \\cos \\dot{D}t \\approx 1.8 \\:10^{12} \\Delta \\cos \\dot{D}t\\text{ cm}$ with $\\omega$ being the Moon's orbital frequency (Nordtvedt K 1968). The closeness of the lunar synodic frequency $\\dot{D}$ to the orbital frequency $\\omega$ gives a strong resonance enhancement to the polarization. A more careful calculation of the sensitivity of the lunar orbit to such a perturbation takes into account the Sun's Newtonian tidal distortion of the orbit. This tidal distortion further enhances the resonance of the synodic perturbation by lowering the Moon's anomalistic resonance frequency $\\dot{A}$, but more importantly the tidal perturbation of frequency $2\\dot{D}$ which Newton called the $lunar\\; variation$ produces strong positive feedback of the synodic perturbation ($\\cos\\dot{D}t \\cos 2\\dot{D}t \\rightarrow \\cos \\dot{D}t + \\cos 3\\dot{D}t$) which almost doubles the orbit's sensitivity to a non-zero $\\Delta$ (Nordtvedt K, 1994). The $1.8\\:10^{12} \\Delta \\cos \\dot{D}t \\text{ cm}$ sensitivity consequently becomes about $3.3\\:10^{12} \\Delta \\cos \\dot{D}t \\text{ cm}$. Since the fractional gravitational binding energy of Earth is about $4\\:10^{-10}$, the $\\cos \\dot{D} t$ range perturbation amplitude then becomes $1.3\\:10^3 \\eta \\text{ cm}$. The Sun's Newtonian octupolar tidal acceleration of the Moon relative to Earth, proportional to $GM(Sun)r^2 /R^4$ is the main perturbation producing a synodic $\\cos \\dot{D} t$ variation in the Earth to Moon distance, having amplitude of about $110 \\text{ km}$. But all the system's parameters needed to calculate this perturbation are sufficiently well measured from the ranging data to reduce the uncertainty in this Newtonian contribution to sub-millimeter level. However, other intrinsic model limitations related to the Earth and Moon's surfaces and interiors and how they respond to the various tidal perturbations produce uncertainties in the synodic amplitude at the few millimeter level.\n\nWith $\\eta = 4\\beta - 3 - \\gamma$, and $\\gamma$ presently constrained to its general relativity value of one within $\\pm 2.3\\:10^{-5}$ by measurements of the Sun's relativistic time delay of signals from the Cassini spacecraft when its line of sight passed close by the Sun (Bertotti B, Iess L, Tortora P, 2003), the fit of the lunar orbit produces about a part in $10^4$ constraint on the nonlinearity $\\beta$ coefficient of Eddington, presently being the best measure of general relativity's nonlinear structure. An alternative way to summarize this result is that the Earth's gravitational binding energy contributes equally to both its gravitational mass and inertial mass at better than a part in a thousand precision in accord with pure tensor general relativity.\n\nIf Jupiter anomalously accelerates the Sun due to it having an $M(G)/M(I)$ ratio different than one, then the inner planets' orbits will be polarized in the direction of Jupiter which circles the Sun with about an eleven year period. Interplanetary ranging between Earth and Mars seems the most promising way to carry out such an experiment to measure the Sun's ratio (Anderson J D, Gross M, Nordtvedt K L, Turyshev S G, 1996). And recently a neutron star pulsar along with two white dwarf stars were found in a closely bound three body system PSR J0337+1715 (Ransom S M et al, 2014). With the neutron star of that dynamical system having fractional gravitational binding energy of order $0.1$ it may be possible to use the pulsar arrival times to measure the neutron star's gravitational to inertial mass ratio at a level which probes gravity's nonlinearity to a deeper level than that so far achieved with LLR.\n\n## Footnotes\n\n1 An exception are terms proportional to $M_{ex} ^2 /c^2$ which simply represents the nonlinear $1/R^3$ gravitational acceleration that any particle or body experiences towards external bodies. This acceleration already seen and measured from the perihelion precession of Mercury's orbit is extraneous to the question of gravitational to inertial mass ratio of bodies in response to the Newtonian acceleration of the external world.\n2 The derivation for this ratio for celestial bodies and suggestion to measure it by lunar laser ranging was dubbed \"The Nordtvedt Effect\" by colleagues of the author. Due to higher order nonlinearities of gravity theories which deviate from general relativity, gravitational mass of a celestial body could have additional higher order contributions such as\n\n$\\delta M(G) \\sim \\int \\frac{G^2 \\rho (\\vec{r}) \\rho(\\vec{r}') \\rho(\\vec{r}'')\\: d^3 r \\: d^3 r' \\: d^3 r''}{c^4|\\vec{r} - \\vec{r}'||\\vec{r} - \\vec{r}''|}$ These terms will be observationally relevant only for very gravitationally compact bodies such as neutron stars. For the Earth-Moon system in presence of the Sun, the next order correction to Equation (3). is an interaction of the Sun's gravitational binding energy with same of the Earth, resulting in a $1/R$ potential between the bodies whose strength now does not simply factor: $V(R) = \\frac{G \\left( M(G)_{Sun} M(G)_{Earth} + \\omega U_{Sun} U_{Earth} \\right)}{R}$ with for each body $U_{Sun} = \\left( \\int \\frac{G\\rho(\\vec{r}) \\rho(\\vec{r}')\\:d^3 r \\: d^3 r' }{2c^2|\\vec{r} - \\vec{r}'|} \\right)_{Sun} \\hspace{.5in} U_{Earth} = \\left( \\int \\frac{G\\rho(\\vec{r}) \\rho(\\vec{r}')\\:d^3 r \\: d^3 r' }{2c^2|\\vec{r} - \\vec{r}'|} \\right)_{Earth}$\n\nThis change in strength of the $1/R$ potential between Sun and Earth is of order a couple parts in $10^{15}$, and presently beyond observability.\n3 Viral contributions to inertia are not unique to gravity; in electrodynamics the inertial mass of a system of atoms held together by electric forces and their motions includes the spatial tensor virial.\n\n$[M(I)]c^2 = \\sum_i m_i \\left(c^2+ \\frac{1}{2}v_i ^2 +\\frac{1}{2}\\sum_{ij}\\frac{e_i e_j}{r_{ij}}\\right) +\\left[ \\sum_i m_i\\vec{v}_i \\vec{v}_i +\\frac{1}{2}\\sum_{ij}\\frac{e_i e_j}{r_{ij}^3}\\vec{r}_{ij} \\vec{r}_{ij} \\right]$\n\nThe virial contribution to inertia can be turned on so to speak by application of external forces on a system; this results, for example, in the inertia of a fluid element becoming $\\rho + p/c^2$ with $\\rho$ being the actual mass-energy density and $p$ being the pressure on the fluid element. If a celestial body oscillates about its equilibrium the virials will also oscillate about zero; however, for bodies such as Earth such oscillations, driven by tides or earthquakes or atmospheric disturbances are much too small in energy to be detectable. The energy of Earth's rotation is, however, significant, being $3\\:10^{-13}$ of its mass-energy. In gravity theories which do not fulfill local Lorentz invariance or violate energy-momentum conservation, the gravitational to inertial mass ratio will include spatial tensor contributions proportional to a body's rotational energy (Nordtvedt K, 1969)\n4 Using the temporal metric field expression as given by Equation (3) and the conservative condition $\\alpha ' = 2 \\beta -1$, and letting the Newtonian potential function be a sum of contributions of nearby bodies plus distant surrounding bodies' potential $U_S$,\n\n$U = \\sum_i \\frac{Gm_i}{c^2 r_i} + U_S$ replacing the coordinate distances $r_i$ by the proper distances $\\rho _i = (1+\\gamma U_s)\\: r_i$ in presence of the distant matter's background potential $U_S$, and factoring out $1-2U_S$ to account for the transformation of $g_{00}$ for proper time intervals in presence of the distant matter, $d \\tau ^2 = (1-2U_S )\\:dt^2$, the temporal metric potential to Newtonian order becomes $g_{00} = 1 - 2\\left(1 +[4 \\beta -3 - \\gamma]U_S \\right)\\sum_i \\frac{Gm_i}{c^2 \\rho _i}$\n\nwhich now has a Newtonian gravitational strength rescaled by the potential $U_S$ of surrounding bodies."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.82257,"math_prob":0.9977388,"size":27795,"snap":"2020-34-2020-40","text_gpt3_token_len":7577,"char_repetition_ratio":0.14339174,"word_repetition_ratio":0.020362552,"special_character_ratio":0.27317864,"punctuation_ratio":0.07707471,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999728,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-08T09:33:18Z\",\"WARC-Record-ID\":\"<urn:uuid:487d9e95-f843-4d41-ae29-49edd002c39a>\",\"Content-Length\":\"59240\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dff95f7d-e301-4dc1-af6c-cd0d6cdebef0>\",\"WARC-Concurrent-To\":\"<urn:uuid:446ab93a-511f-4471-bbf6-8d48559661f6>\",\"WARC-IP-Address\":\"173.255.237.117\",\"WARC-Target-URI\":\"http://scholarpedia.org/article/Nordtvedt_effect\",\"WARC-Payload-Digest\":\"sha1:ZKPYMK54E62QZ56TXPLOGFAMJKIAOWN5\",\"WARC-Block-Digest\":\"sha1:B6AXMNFA4IMQOHGBB6CHXTFYKN5OP73S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737319.74_warc_CC-MAIN-20200808080642-20200808110642-00464.warc.gz\"}"} |
https://sites.google.com/planetreynolds.com/planetphysics-co-uk/ks4/ks4-p/p4-6-waves/p4-6-2-5 | [
"# 4.6.2.5\n\n4.6.2.5 Lenses (physics only)\n\n# 4.1.3 National and global energy resources\n\n## Content\n\nA lens forms an image by refracting light. In a convex lens, parallel rays of light are brought to a focus at the principal focus. The distance from the lens to the principal focus is called the focal length. Ray diagrams are used to show the formation of images by convex and concave lenses.\n\nContent\n\nKey opportunities for skills development\n\nThe image produced by a convex lens can be either real or virtual. The image produced by a concave lens is always virtual.\n\nStudents should be able to construct ray diagrams to illustrate the similarities and differences between convex and concave lenses.\n\nThe magnification produced by a lens can be calculated using the equation:\n\nmagnification = image heightobject height\n\nMagnification is a ratio and so has no units.\n\nImage height and object height should both be measured in either mm or cm.\n\nIn ray diagrams a convex lens will be represented by:\n\nA concave lens will be represented by:\n\nMS 5a, 5c WS 1.2\n\nMS 3b, c\n\nStudents should be able to apply this equation which is given on the Physics equation sheet.\n\nAT 4, 8\n\nInvestigate the magnification produced by a range of convex lenses.\n\nWS 1.2 Use a variety of models such as representational, spatial, descriptive, computational and mathematical to solve problems, make predictions and to develop scientific explanations and understanding of familiar and unfamiliar facts.\n\nAT4 Making observations of waves in fluids and solids to identify the suitability of apparatus to measure speed/frequency/wavelength. Making observations of the effects of the interaction of electromagnetic waves with matter (links to A-level AT i and j).\n\nAT8 Making observations of waves in fluids and solids to identify the suitability of apparatus to measure the effects of the interaction of waves with matter (links to A-level AT h, j).\n\nMS 5a Use angular measures in degrees\n\nMS 5c Calculate areas of triangles and rectangles, surface areas and volumes of cubes\n\nMS 3b Change the subject of an equation\n\nMS 3c Substitute numerical values into algebraic equations using appropriate units for physical quantities\n\nResources\n\nVideo\n\nPracticals\n\nDemo:\n\n### Safety\n\nAsthma attack risk for selected student?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89637333,"math_prob":0.9643739,"size":2246,"snap":"2019-26-2019-30","text_gpt3_token_len":481,"char_repetition_ratio":0.10570919,"word_repetition_ratio":0.11111111,"special_character_ratio":0.19545859,"punctuation_ratio":0.09375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98684424,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T14:56:35Z\",\"WARC-Record-ID\":\"<urn:uuid:521c031e-585a-447b-97cf-6ff8736fc059>\",\"Content-Length\":\"623788\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:856f12b9-b669-45db-ba5c-fc952a1511ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:9338f371-b381-407b-8e25-33e5254d60f6>\",\"WARC-IP-Address\":\"172.217.5.238\",\"WARC-Target-URI\":\"https://sites.google.com/planetreynolds.com/planetphysics-co-uk/ks4/ks4-p/p4-6-waves/p4-6-2-5\",\"WARC-Payload-Digest\":\"sha1:HMGMYJ4XN6KWVDUBK4JHISXTD2ILPXMB\",\"WARC-Block-Digest\":\"sha1:BJ5K2DJPGHUQSZQGHMKBRMOD2GADUZAS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525312.3_warc_CC-MAIN-20190717141631-20190717163631-00535.warc.gz\"}"} |
https://elifesciences.org/articles/54148/figures | [
"# Global sleep homeostasis reflects temporally and spatially integrated local cortical neuronal activity\n\n1. Department of Physiology, Anatomy and Genetics, University of Oxford, United Kingdom\n2. Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom\n3. Institute of Pharmacology and Toxicology, University of Zurich, Switzerland\n4. The KEY Institute for Brain-Mind Research, Department of Psychiatry, Psychotherapy and Psychosomatics, University Hospital of Psychiatry, Switzerland\n6 figures and 2 additional files\n\n## Figures\n\nFigure 1",
null,
"Cortical spike firing patterns are associated with the dynamics of Process S. (A) An example of the classical state-based Process S model (blue) describing the dynamics of frontal EEG SWA (median per NREM sleep episode, black bars) over 48 h in one representative animal. Sleep deprivation occurred as indicated at light onset of the second day and lasted 6 h. Scored vigilance states are also shown. (B) An example of frontal electroencephalogram (EEG), primary motor cortical local field potentials (LFP), corresponding raw signal with multi-unit activity (MUA) and detected spikes, in representative segments of waking and NREM sleep. Slow waves and synchronous spiking off periods are visible in NREM sleep but not in wakefulness. The y-scale is the same for both wake and NREM sleep plots. (C) The distribution (log scale) of mean firing rates during wake, NREM and REM sleep over all animals and channels, in addition to the difference in mean firing rates in wake compared to NREM sleep (all are positive, reflecting higher firing in wake). Points indicate channels grouped by animal (left to right), but boxplots reflect channels from all animals treated as one population. (D) Distribution of correlation coefficients, calculated within each single channel, between wake episode duration (Duration), the change in slow wave activity (dSWA), and mean firing rate (mean FR). Points indicate channels grouped by animal (left to right), but boxplots reflect channels from all animals treated as one population. (E) An example scatter plot of the correlation between the change in median SWA from one NREM episode to the next and the mean firing rate during the intervening period of wakefulness. This channel is representative because it has the median correlation coefficient of all channels.\n###### Figure 1—source data 1\n\nSlow wave activity, firing rate and vigilance state time series data.\n\nhttps://cdn.elifesciences.org/articles/54148/elife-54148-fig1-data1-v1.zip\nFigure 2",
null,
"Slow wave activity dynamics at the LFP level can be modelled using multi-unit spiking information. (A) An example from one representative animal modelling the SWA averaged over all LFP channels, of both the classical model (blue) and novel firing-rate-based model (orange), calculated from the firing rate also averaged over all LFP channels (brown). (B) An example of both models applied to the SWA of a single LFP channel, which came from the same animal as used in A.\nFigure 3 with 1 supplement",
null,
"The fit quality and parameters for both classical and firing-rate-based models of LFP SWA. (A) Equations for the classic state-based model (blue) and novel firing-rate-based model (orange). (B) For each animal, the distribution over channels of the median absolute difference between the model and empirical SWA, expressed as a percentage of empirical SWA, for both classic and firing-rate-based models. Black lines indicate the values obtained from modelling the averaged LFP SWA and firing rate over all channels within an animal and the red lines give the parameter value obtained from the model applied to the frontal EEG SWA of that animal. (C) The same median percent error, grouped over animals and models, separately showing errors at the level of single LFP, averaged LFP and EEG. (D–K) The distribution (log scale) of values estimated for α and β rate parameters in the classic model (blue) and in the firing-rate-based model (orange). Parameter values are first presented with boxplots plotted separately for each animal (D, F, H, J), then are additionally shown grouped by level of analysis (E, G, I, K), including single LFP channel, mean of LFPs, or (E and I only) frontal EEG. Vertical lines indicate the standard error of the group mean and grey lines connect points derived from the same animal across groups. (L) The distribution of the final optimised value for the firing rate set point parameter (Fθ) of the firing-rate-based model grouped by animal. (M) The same values z-normalised to the distribution of firing rates within wake, NREM and REM sleep. (N) The distribution of mean firing rate in wake, NREM and REM sleep, expressed as a percentage of the firing rate set point parameter (Fθ). Points indicate channels grouped and coloured by animal, but boxplots reflect all channels treated as one population. (O) The distribution of the change in Process S (ΔS/Δt) from one 4 s time step to the next derived from the Fr-SWA model in wake, NREM sleep and REM sleep. All mice, channels and time are pooled.\n###### Figure 3—source data 1\n\nProcess S time series and parameters based on SWA for classic and novel models.\n\nhttps://cdn.elifesciences.org/articles/54148/elife-54148-fig3-data1-v1.zip\nFigure 3—figure supplement 1",
null,
"Values for Smax and Smin parameters in Process S models derived from SWA. (A) The distribution of values estimated for Smax and (B) Smin parameters in Process S models derived from SWA with the classic model and (C-D) with the firing-rate-based model, with boxplots plotted separately for each animal. The black lines indicate the values obtained from modelling the averaged LFP SWA and firing rate over all channels within an animal. Additionally, the red lines (in A and B only) give the parameter value obtained from the model of the frontal EEG SWA of the corresponding animal.\nFigure 4",
null,
"Definition of off periods and off occupancy. (A) An example section of LFP (raw in grey, 0.5–6 Hz filtered in black) and simultaneous MUA spikes. During this time window, the filtered LFP crosses the amplitude threshold (265 μV for this channel, red line) five times. The MUA inter-spike interval aligned to two of these peaks exceeds the duration threshold (85 ms for this channel) and so two off periods are detected (grey boxes). ISIs aligned to the other three out of five crossings (asterisks) are too short to be considered off periods. (B) Histograms of multi-unit inter-spike intervals aligned with detected slow waves (0 = peak of slow wave) for this example channel. The four plots show, from left to right, ISIs over the whole recording and ISIs separately during wake, NREM and REM sleep only. The ISI duration threshold (red line) is selected using the histogram of all ISIs (leftmost) at the minimum between the two modes (and shown for comparison in the wake, NREM and REM sleep panels). (C) The distribution of LFP amplitude and (D) ISI duration threshold values used for definition of off periods for each channel, with boxplots plotted separately for each animal. (E) The mean multi-unit firing rate over a period of 1 s centred on the peak of detected slow waves, calculated over all slow waves within one example channel with a resolution of 1 ms. (F) Distributions of mean off occupancy (%) for all channels averaged over wake, NREM and REM sleep. Points indicate channels grouped by animal (left to right), but boxplots reflect all channels treated as one population. (G) Off occupancy is shown alongside EEG and LFP SWA for an example channel over 48 h. Traces represent these values calculated at 4 s resolution (light grey), in addition to the median value per NREM sleep episode, as used for model fitting (black bars). Firing rate (brown) and scored vigilance states are also shown.\n###### Figure 4—source data 1\n\nOff occupancy time series and off period detection parameters.\n\nhttps://cdn.elifesciences.org/articles/54148/elife-54148-fig4-data1-v1.zip\nFigure 5 with 1 supplement",
null,
"Process S is reflected in an LFP channel’s off occupancy and its dynamics are described well by both state-based and firing-rate-based models. (A) An example of the novel model based on firing rates and off occupancy (purple), and the classic state-based model (blue), with optimised parameters describing the dynamics of off occupancy (median per NREM episode, black bars) over 48 h. Sleep deprivation occurred as indicated at light onset of the second day and lasted 6 h. Firing rate (brown), off occupancy (value per 4 s epoch, grey) and scored vigilance states are also shown. (B) Equations for the classic state-based model (blue) and firing-rate-and-off-occupancy-based model for off occupancy (purple). (C) For each animal, the distribution over channels of the median difference between the model and empirical off occupancy, expressed as a percentage of the off occupancy, for both classic and firing-rate-based models. (D) The distribution of values of the change in Process S (ΔS/Δt) from one 4 s time step to the next derived from the Fr-Off model in wake, NREM sleep and REM sleep. All mice, channels and time are pooled. (E–F) The distribution of optimised values used for the rate parameters in the classic model and (G-H) the firing rate model, with boxplots plotted separately for each animal.\n###### Figure 5—source data 1\n\nProcess S time series and parameters based on off occupancy for classic and novel models.\n\nhttps://cdn.elifesciences.org/articles/54148/elife-54148-fig5-data1-v1.zip\nFigure 5—figure supplement 1",
null,
"Values for Smax and Smin parameters in Process S models derived from off occupancy. (A) The distribution of values used for Smax and (B) Smin parameters in Process S models derived from off occupancy with the classic model and (C-D) with the firing-rate-based model, with boxplots plotted separately for each animal.\nFigure 6 with 1 supplement",
null,
"The time course of local activity-derived Process S averaged across channels resembles Process S derived from the EEG and sleep-wake history. (A) The time course of Process S shown for one representative animal using the classical EEG-based model (blue), alongside average (orange) and individual channel (grey) Processes S derived from activity-dependent models of all individual LFP channels applied to SWA. (B) The same, except that Process S corresponding to individual channels (grey) and their average (purple) are derived from off periods. Note that the model in blue remains Cl-SWA, as off periods cannot be derived from the EEG. (C) Process S derived from applying the Cl-SWA model to frontal EEG (blue) and the mean Process S derived from applying Fr-SWA (orange) and Fr-Off (purple) to all individual channels. The results are shown for each of the six animals analysed. In all panels, all Process S time series are normalised between 0 to 1 relative to individual Smin and Smax values for fair comparison.\nFigure 6—figure supplement 1",
null,
"Cross-validation of parameter estimation. (A) The values of the rate parameters α, and (B) β, obtained by algorithmic optimisation fitting to data from only the baseline or sleep deprivation day. For normalisation, values are expressed as a percentage of the original value obtained using the semi-automated approach (see Materials and methods). (C) The difference in the median percent error (E*) obtained from Process S with rate parameters optimised on either the baseline or sleep deprivation day from that obtained using the original semi-automated selected parameters. Negative values reflect improved fits. All values are shown for both classic (blue) and firing-rate-based (orange) models applied to SWA-derived Process S. (D–F) The same variables are shown for the models based on off occupancy using both classic (blue) and firing-rate-based (purple) models. Statistical analysis was performed for each variable by a three-factor ANOVA. (A) Day: F(1, 293)=56.1, p=8.1×10−13; Model: F(1, 293)=6.8, p=9.7×10−3; Animal: F(5, 293)=1.4, p=0.23; Day x Model: F(5, 293)=0.6, p=0.43; Day x Animal: F(5, 293)=4.8, p=3.5×10−4; Model x Animal: F(1, 293)=2.5, p=0.033; three-way ANOVA). (B) Day: F(1, 293)=110.9, p=3.3×10−22; Model: F(1, 293)=28.9, p=1.6×10−7; Animal: F(5, 293)=0.7, p=0.60; Day x Model: F(5, 293)=0.1, p=0.98; Day x Animal: F(5, 293)=11.1, p=8.3×10−10; Model x Animal: F(5, 293)=11.6, p=7.4×10−4; three-way ANOVA). (C) Day: F(1, 293)=4.37, p=0.038; Model: F(1, 293)=16.3, p=7.1×10−5; Animal: F(5, 293)=17.3, p=5.3×10−15; Day x Model: F(1, 293)=2.1, p=0.064; Day x Animal: F(5, 293)=24.5, p=1.4×10−20; Model x Animal: F(1, 293)=3.6, p=3.9×10−3; three-way ANOVA). (D) Day: F(1, 281)=48.2, p=2.6×10−11; Model: F(1, 281)=2.6, p=0.11; Animal: F(5, 281)=2.6, p=0.027; Day x Model: F(5, 281)=3.2, p=0.077; Day x Animal: F(5, 281)=2.9, p=0.014; Model x Animal: F(1, 281)=0.8, p=0.54; three-way ANOVA). (E) Day: F(1, 281)=16.8, p=5.4×10−5; Model: F(1, 281)=4.6, p=0.033; Animal: F(5, 281)=0.8, p=0.50; Day x Model: F(5, 281)=13.0, p=3.7×10−4; Day x Animal: F(5, 281)=6.0, p=2.7×10−5; Model x Animal: F(1, 281)=1.2, p=0.31; three-way ANOVA). (F) Day: F(1, 281)=15.9, p=8.4×10−5; Model: F(1, 281)=1.4, p=0.25; Animal: F(5, 281)=7.5, p=1.2×10−6; Day x Model: F(5, 281)=0.3, p=0.56; Day x Animal: F(5, 281)=2.6, p=0.028; Model x Animal: F(1, 281)=0.8, p=0.58; three-way ANOVA).\n\n###### Source code 1\n\nProcess S model parameter optimisation code.\n\nhttps://cdn.elifesciences.org/articles/54148/elife-54148-code1-v1.zip\n###### Transparent reporting form\nhttps://cdn.elifesciences.org/articles/54148/elife-54148-transrepform-v1.pdf\n\nA two-part list of links to download the article, or parts of the article, in various formats."
] | [
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig1-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig2-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig3-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig3-figsupp1-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig4-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig5-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig5-figsupp1-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig6-v1.tif/full/617,/0/default.jpg",
null,
"https://iiif.elifesciences.org/lax/54148%2Felife-54148-fig6-figsupp1-v1.tif/full/617,/0/default.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60328275,"math_prob":0.9337369,"size":2338,"snap":"2023-14-2023-23","text_gpt3_token_len":1026,"char_repetition_ratio":0.23993145,"word_repetition_ratio":0.018237082,"special_character_ratio":0.5055603,"punctuation_ratio":0.29219145,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9771969,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T00:40:08Z\",\"WARC-Record-ID\":\"<urn:uuid:5710b00f-b42d-4fdc-8135-9a291fbafe02>\",\"Content-Length\":\"244116\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13958bf2-0d8c-4a17-87bb-5747be92ccc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:36bf55c6-6e5c-4104-91ce-4c936df8034e>\",\"WARC-IP-Address\":\"151.101.2.217\",\"WARC-Target-URI\":\"https://elifesciences.org/articles/54148/figures\",\"WARC-Payload-Digest\":\"sha1:5RA76XFCDLG6TXJPDEJTW6LPKK3JFKRR\",\"WARC-Block-Digest\":\"sha1:47MM3HXY4RGRDNKYZLM3UINXXHDPNUOB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648245.63_warc_CC-MAIN-20230602003804-20230602033804-00157.warc.gz\"}"} |
https://downloads.hindawi.com/journals/cin/2017/2727856.xml | [
"CIN Computational Intelligence and Neuroscience 1687-5273 1687-5265 Hindawi 10.1155/2017/2727856 2727856 Research Article Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms http://orcid.org/0000-0002-4521-9589 Liu Rensong 1 Zhang Zhiwen 1 Duan Feng 1 Zhou Xin 1 Meng Zixuan 1 Salazar Addisson College of Computer and Control Engineering Nankai University Tianjin 300350 China nankai.edu.cn 2017 982017 2017 17 03 2017 14 05 2017 02 07 2017 982017 2017 Copyright © 2017 Rensong Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nMotor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.\n\nNational Natural Science Foundation of China 61203339 Tianjin Research Program of Application Foundation and Advanced Technology 14JCYBJC18300\n1. Introduction\n\nBrain-computer interface (BCI) provides an efficient communication bridge between the human brain and external manageable devices . Among the signal-controlling BCI sources, the P300 , steady-state visual-evoked potential (SSVEP) , and motor imagery (MI) signals are the most commonly used. In contrast to SSVEP and P300, MI is a self-induced brain activity, which is initiated by imaging certain limbs or other body parts to move without the help of outside inducing factors . An MI BCI system was first used based on this feature to assist humans with severe disabilities . This system is also used for humanoid controls , entertainment game designs , and aircraft flight controls . However, the performance of this system is largely dependent on the number of MI motion commands that can be precisely classified.\n\nThe cerebral cortex of left-handers and right-handers is anisomerous . Therefore, cerebral cortex activities often present evident differences and cannot be easily distinguished when right-handers imagine symmetric limb movements [11, 12]. Our study aims to analyze and recognize four anisomerous MI states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state.\n\nIn general, MI pattern recognition systems involve raw MI EEG signal preprocessing, feature extraction, and pattern classification. However, subjects experience difficulty in avoiding eye movements and consequently produce electrooculogram (EOG) artifacts in raw MI EEG signals . The obtained raw MI EEG signals are mainly affected by the vertical EOG (VEOG) signals generated by blinking eyes. The preprocessing algorithms for EEG signals mainly include time domain filtering, blind source separation , and time-frequency domain analysis methods. Time domain methods, such as the low-pass filter method and band-pass filter method , have been used to eliminate EOG artifacts [16, 17]. However, time domain filtering methods cannot effectively remove the majority of the EOG artifacts. Vergult et al. used blind source separation and canonical correlation analysis (CCA) to effectively denoise EOG artifacts from raw MI EEG signals, but the CCA algorithm should be able to artificially recognize the artifact components. Hsu et al. used a time-frequency domain analysis method called discrete wavelet transform (DWT) to denoise EOG artifacts from raw EEG signals. The multiresolving feature of DWT enables nonstationary EEG signals to be considered. However, a small portion of the EOG artifacts remains in the EEG signals after the DWT denoise preprocessing is completed. Thus, a more effective preprocessing algorithm should be developed to denoise EOG artifacts.\n\nFeature extraction is another critical step in MI pattern recognition. Common EEG features include those in the time domain, frequency domain, time-frequency domain, and spatial domain . Time domain analysis is mainly conducted to extract EEG features because MI EEG signals are recorded in the time domain. For example, Khushaba et al. extracted EEG features from the time domain to form a set of features that was relevant to the limb position. EEG signals also contain various frequency components. Prasad et al. used power spectral density as an EEG feature. Time-frequency domain analysis methods can integrate the advantages of time domain and frequency domain analysis methods. Wang et al. applied a wavelet packet transform method to extract the time and frequency information in EEG signals. However, univariate and integrated analysis methods using the time domain and frequency domain are not appropriate for multichannel EEG feature extraction .\n\nAfter preprocessing raw MI EEG signals and extracting the features, we aimed to develop an appropriate classifier to precisely categorize the MI motion commands. Common classification algorithms for EEG features include the linear distance discriminant , support vector machine (SVM) , clustering algorithms , Bayesian classifiers, and back propagation neural network (BPNN) classifiers . However, the classifiers exhibit poor performance when the EEG features overlap with one another.\n\nConsidering previous studies, we propose a novel MI pattern recognition system for classifying MI EEG signals. We use the Butterworth band-pass filter to extract EEG signals having frequencies of 8–30 Hz during the preprocessing of raw EEG signals. We then apply a CCA algorithm that integrates a wavelet threshold denoising (WTD) algorithm to form a compound algorithm called the wCCA algorithm and to process the extracted frequency band signals. We also use a regularized common spatial pattern (R-CSP) algorithm by incorporating the principle of generic learning to extract the EEG features in the spatial domain. This approach can effectively extract connotative spatial information from multichannel EEG signals and reduce the data dimension based on minority samples. We combine the K-nearest neighbor (KNN) and SVM methods, which we call KNN-SVM, to classify the EEG features. We compare the KNN-SVM classifier to several classifiers to validate its classification performance.\n\nThe remainder of this paper is organized as follows. Section 2 describes the EEG signal acquisition. Section 3 introduces the raw EEG signal preprocessing. Section 4 explains feature extraction with the R-CSP algorithm. Section 5 discusses the KNN-SVM classifier and compares it with several classifiers. Section 6 presents our experimental results and a discussion. Section 7 provides the conclusions and recommends concepts for future studies.\n\n2. EEG Signal Acquisition\n\nWe selected 14 Ag/AgCl electrodes that were relevant to the MI brain region based on the Brodmann brain function partition and international 10/20 electrode lead system [30, 31]. Among the 14 electrodes, two (FZ, CZ) were placed in the central brain region, six (T7, P3, P7, CP3, FC3, and C3) were in the left brain region, and six (T8, P4, P8, CP4, FC4, and C4) were located in the right brain region. The electrodes in the left and right brain regions are symmetric (Figure 1). Bipolar lead modes with two electrodes were used to record vertical EOG (VEOG) signals: one electrode was placed above the left eyebrow, and the other electrode was placed on the lower edge of the left eye socket. Monopolar derivations were used throughout the recordings. In this process, the left mastoid and forehead served as the reference and ground, respectively. The signals were sampled at 256 Hz, and an additional 50 Hz notch filter was enabled to suppress the power line interference by using a g.tec device (g.tec medical engineering GmbH, Schiedlberg, Austria).\n\nThe positions of the EEG electrodes.\n\nA subject sat on a relaxing chair, and the subject’s arms were placed in a relaxed position on his legs. The paradigm consisted of four different tasks, namely, imaginary movements with the left hand (LH), right foot (RF), right shoulder (RS), and the resting state (R). At the beginning of a trial (t = 0 s), a fixation cross “+” was displayed on a black screen. In addition, a short acoustic warning tone was presented. After two seconds (t = 2 s), a text prompt for the left hand (LH), right foot (RF), right shoulder (RS), or resting state (R) was displayed in the center of the screen and remained on the screen for 2 s. This prompted the subject to perform the desired MI task. The subject was asked to continue performing the MI task until the fixation cross “+” disappeared from the screen at t = 7 s. A short break followed, with a blank screen lasting for two seconds. The paradigm is illustrated in Figure 2.\n\nTiming scheme of the EEG signal recording.\n\nFive healthy subjects, namely, three men (Subjects A, B, and D) who were 30, 25, and 23 years of age, respectively, and two women (Subjects C and E) who were 21 and 23 years of age, respectively, participated in the experiment and performed the four MI tasks. Subject A was left-handed, and the other subjects were right-handed. Each MI motion state was recorded for one session, and altogether four sessions were recorded for each subject. Each session consisted of 60 trials separated by short breaks (lasting a couple of minutes) breaks. For each state, 50 trials were selected for training and the remaining 10 trials were used for testing. In total, 240 trials were performed per subject.\n\n3. Raw EEG Signal Preprocessing\n\nFor raw EEG signals, the Butterworth band-pass filter was used to extract the 8–30 Hz frequencies of the signals. The brain is a good conductor of electricity. As such, EOG signals spread from the forehead to the back of the head and thus traverse the entire head. We considered the different spatial distribution characteristics of the EEG and EOG signals. For twelve symmetrical electrodes (T7, T8, P3, P4, P7, P8, CP3, CP4, FC3, FC4, C3, and C4), we used the wCCA algorithm to examine the mixed signals in a new form. X represents the EEG signals collected from six electrodes in the left brain region, and Y denotes the six other electrodes in the right brain region. The VEOG signal was added to X and Y. The first pair of components was calculated through CCA decomposition and exhibited the highest correlation. The components can be regarded as the most communal ingredient between the left and right brain regions, which are composed of the EOG artifacts and a small number of high-frequency EEG components. Then, wavelet threshold denoising was performed to remove the EOG artifacts and maintain a small amount of high-frequency EEG components. Finally, pure EEG signals of twelve symmetric electrodes were obtained through wCCA algorithm processing. For two central brain region electrodes (FZ, CZ), the wavelet basis “db4” was used to conduct five-layer wavelet decomposition for the EEG signals of FZ and CZ. Then, the wavelet soft threshold denoising function “wdencmp” was used to process the decomposed signal components. Next, the denoised signal components were used to reconstruct the pure EEG signals with the wavelet basis “db4.” The structure of the EEG signal preprocessing is shown in Figure 3.\n\nStructure of the EEG signal preprocessing. (a) Denoising of the twelve symmetrical electrodes using wCCA. (b) Denoising of the two central brain region electrodes using WTD.\n\n3.1. wCCA Algorithm\n\nNext, the derivation process of the wCCA algorithm was described in detail. Suppose that X=x1T,,x6T,zTT and Y=y1T,,y6T,zTT represent EEG signals that were collected from the left brain region and right brain region, respectively. Among them, x1T,,x6T and y1T,,y6T are two sets of 12 raw symmetric electrode EEG signals and zT is the VEOG signal. After the centralization of X and Y, the new variables X^ and Y^ are obtained, respectively. We then find a linear combination of the points through CCA to obtain the new variables U and V, which exhibit the highest correlation.(1)ui=ai,1x^1+ai,2x^2++ai,6x^6+ai,7z^=aiTX^,vi=bi,1y^1+bi,2y^2++bi,6y^6+bi,7z^=biTY^.\n\nThe obtained canonical correlation variables are the estimations of seven raw independent signals ui and vi,i=1,,7, respectively. The vectors ai and bi,i=1,,7 are obtained by calculating the maximum simple correlation coefficient. Cxx=EX^X^T and Cyy=EY^Y^T are the autocovariance matrices. Cxy=EX^Y^T and Cyy=EY^X^T are the cross-variance matrices.(2)maxai,biρui,vi=maxai,bicovui,vivaruivarvi=maxai,biaiTCxybiaiTCxxaibiTCyybi,i=1,,7,where the constraints are(3)aiTCxxai=1,biTCyybi=1.\n\nThe Lagrangian function is constructed to calculate the values of ai and bi under the premise that ρui,vi achieves the maximum value:(4)Lai,bi=aiTCxybi-λ12aiTCxxai-1-λ22biTCyybi-1.\n\nAccording to (1), U and V have the following forms:(5)U=ATX^,V=BTY^.\n\nThe obtained canonical correlation variables U and V included seven independent components, U=u1T,,u7TT and V=v1T,,v7TT. The vectors ai and bi,i=1,,7 are the ith columns of matrices A and B, respectively.\n\nNext, A, B, λ1, and λ2 can be calculated based on (2) to (4). The first independent components U and V are called u1 and v1. Each component is composed of EOG artifacts and several valuable EEG signals. Then, we used a wavelet hard threshold noise reduction method. We first used the wavelet basis “db4” for five-layer signal wavelet decomposition for u1 and v1. Thus, we obtained five wavelet coefficients and one scale coefficient. Then, we set any coefficient that was higher that the threshold to zero, whereas we retained the value of any coefficient that was lower that the threshold. In 1994, Donoho proposed the VisuShrink method (the unified threshold denoising method). The threshold kj for each coefficient is defined as follows:(6)kj=2lnNjdj,j=1,,5,where kj is the threshold for the jth coefficient and Nj is the number of elements for the jth coefficient. Donoho and Johnstone proposed an estimation formula for the noise standard deviation in the wavelet domain δ^n=MAD/0.6745, where MAD is the median value of the subband wavelet coefficient. Thus, the standard deviation of the noise in wavelet domain dj is defined as follows:(7)dj=medianDj0.6745,where Dj is the jth coefficient.\n\nAfter the wavelet hard threshold noise reduction processing, the six processed coefficients were used for wavelet transform reconstruction with the wavelet basis “db4,” and we obtained the denoised independent component signals u1new and v1new which are the wavelet threshold denoised signal of u1 and v1, respectively. These signals and the other six independent components comprise Unew and Vnew.\n\nAccording to (5), after calculating Unew and Vnew, we reconstructed the new variables X^new and Y^new, which are the estimates of the pure EEG signals from the left brain region and right brain region, respectively.(8)X^new=AT-1Unew,Y^new=BT-1Vnew.\n\nTwelve pure electrodes with symmetric EEG signals were obtained. X^new represents the six EEG electrode signals from the left brain region, and Y^new represents the six EEG channel electrodes signals from the right brain region.\n\n3.2. EEG Signal Denoising\n\nWe constructed brain topographic maps of the four MI states to examine the topology of significant EEG features. Figure 4(a) shows the brain topographic map of Subject A. The red color and blue color both indicated higher values within the corresponding state. Evident differences were observed among the four brain topography maps. Therefore, we can obtain good classification effectiveness. Furthermore, we constructed time-frequency maps to concretely obtain the activity degree of the 14 electrodes in each MI state. Figure 4(b) shows the 8–30 Hz frequency spectrum chart for the 14 electrode EEG signals of each MI state from Subject A. Taking the resting state as an example, electrode FZ exhibits the lowest activity degree, whereas electrode C3 exhibits the highest activity degree.\n\nBrain topographic map and 14-channel spectrum map from Subject A. (a) Brain topographic map. (b) 14-channel spectrum map.\n\nThe 14-channel raw EEG signals are mixed with EOG signals; in particular, electrodes close to the eyes are particularly influenced by the EOG signals. Figure 5(a) shows the time domain graph of the resting state from Subject A. In Figure 5(a), the EEG electrode signals of FC3, FZ, and FC4 fluctuate significantly, whereas the other electrode signals are less affected by the EOG signals because these electrodes are far from the eyes. We obtained the pure EEG signals after the EEG signal preprocessing is completed (Figure 5(b)). The denoised 14-channel EEG signals fluctuate only slightly.\n\nRaw and denoised signals. (a) Raw signals. (b) Denoised signals.\n\n4. Feature Extraction Using R-CSP\n\nAfter the preprocessing of the raw EEG signals, we must extract the EEG features (Figure 6). The common space model (CSP) is more effective than the traditional time-frequency domain feature extraction method for extracting the differences in the spatial features of the two types of signals. However, the CSP algorithm is based on a large number of signal samples based on covariance estimation. Therefore, the feature extraction is affected by the number of samples available for training. In recent years, regularized discriminant analysis (RDA) has been used to solve small sample problems for linear and secondary discriminant analyses. The small-training-sample approach leads to biased estimates of the eigenvalues, and such problems can lead to instability in the feature extraction. Thus, two regularization parameters are used to address these undesirable features.\n\nStructure of the feature extraction using R-CSP.\n\nIn this paper, we adopt the improved regularized common spatial pattern (R-CSP) algorithm by incorporating the principle of generic learning to extract the EEG features in the spatial domain. In R-CSP, we used the principle of generic learning to address one training sample problem. The training set of R-CSP uses a generic database that contains subjects that are different from those to be identified. The classifier is trained to extract the discriminant information from the subjects other than those that will be called on to perform recognition when in operation. The principle behind generic learning is that the discriminant information pertinent to the specific subjects (those to be identified) can be learned from other subjects because the EEG signals exhibit similar intrasubject variations. The R-CSP algorithm is an improved CSP algorithm. It can provide a good approach to overcoming outlier (such as noise) sensitivity and poor robustness, which are shortcomings of having a small number of samples. There are two parameters in the R-CSP algorithm, β and γ. The first regularization parameter controls the shrinkage of a subject-specific covariance matrix toward a “generic” covariance matrix to improve the estimation stability based on the principle of generic learning. The second regularization parameter controls the shrinkage of the sample-based covariance matrix estimation toward a scaled identity matrix to account for the bias due to the limited number of samples.\n\n4.1. R-CSP Algorithm\n\nWe assume that there are L subjects who participated in the experiment. Assume that G1 and G2 are two kinds of MI tasks in the space multimodal evoked response signal matrix from the multichannel MI EEG signals. Their dimensions are N×T, where N is the number of EEG channels and T is the number of samples collected for each channel. E is a trial of N×T dimensions MI EEG signals from MI task G1 or G2.\n\nThe normalized sample covariance matrix S of a trial E is obtained as follows:(9)S=EETtraceEET.\n\nThe two MI tasks of EEG signals are indexed by c={1,2}. For simplicity, we assume that there are M trials in each class that are available for training for a subject of interest, which are indexed by m as in Ec,m, where m=1,,M. Thus, each trial has a corresponding covariance matrix Sc,m.\n\nThe average spatial covariance matrix for each class is then calculated as follows:(10)S-c=1Mm=1MSc,m,c=1,2.\n\nNext, the regularization technique is introduced into the equation. The regularized average spatial covariance matrix for each class is calculated as (11)Σ^cβ,γ=1-γΣ^cβ+γNtrΣ^cβ·I,where β(0β1) and γ(0γ1) are two regularization parameters, I is an identity matrix of size N×T, and Σ^c(β) is defined as follows: (12)Σ^cβ=1-β·Sc+β·S^c1-β·M+β·M^.\n\nIn (12), Sc is the sum of the sample covariance matrices for all M training trials in class c(c=1,2), and S^c is the sum of the sample covariance matrices for a set of M^M×L-1 generic training trials from the other L-1 subjects in class c.(13)Sc=m=1MSc,m,S^c=m^=1M^Sc,m^.\n\nNext, the composite spatial covariance is formed and factorized as(14)Σ^cβ,γ=Σ^1β,γ+Σ^2β,γ=U^Λ^U^T,where U^ is the matrix of eigenvectors and Λ^ is the diagonal matrix of corresponding eigenvalues. In this paper, we adopt the convention that the eigenvalues are sorted in descending order.\n\nNext, the whitening transformation is obtained as follows:(15)P^=Λ^-1/2U^T.Σ^1β,γ and Σ^2β,γ are whitened as follows:(16)Σ~1β,γ=P^Σ^1β,γP^TΣ~2β,γ=P^Σ^2β,γP^T.Σ~1β,γ can then be factorized as follows:(17)Σ~1β,γ=B^Λ^1B^T.\n\nThe full projection matrix of Σ~1β,γ is formed as follows:(18)W^0=B^TP^.\n\nFor the most discriminative patterns, only the first and last α (we set α=2) columns of W^0 are retained to form W^, which is of size N×Q, where Q=2α. For the feature extraction, a trial E is first projected as follows:(19)Z^=W^TE.\n\nThen, a Q-dimensional feature vector y^ is formed from the variance of the rows of Z^ as follows:(20)y^q=logvarz^qq=1Qvarz^q,where y^q is the qth component of y^, z^q is the qth row of Z^, and var(z^q) is the variance of vector z^q.\n\nHowever, in this study, we analyzed four MI states, assuming that the four states are A, B, C, and D. We converted the four-classification tasks (A&B&C&D) into six two-classification-tasks: (A&B), (A&C), (A&D), (B&C), (B&D), and (C&D). Thus, six spatial filters are generated: W^1, W^2, W^3, W^4, W^5, and W^6. Finally, the four state signals are sequentially passed through the six spatial filters, and the feature vectors are obtained.\n\n4.2. Feature Selection\n\nWe constructed a six-spatial-filter group using the R-CSP algorithm. After the EEG signals of four MI states were extracted by the six-spatial-filter group, the diversity-maximized feature vectors of 24 (6×Q) were obtained. To optimize the performance of the R-CSP algorithm, we explored the effect of classification with different combinations of β and γ. R-CSP with β = γ = 0 is equivalent to the classical CSP. We calculated 121 classification results by the outer product of β = [0:0.1:1] and γ = [0:0.1:1]. Figure 7 shows the 121 classification results with different combinations of β and γ from Subject A. Then, we determined β and γ values that corresponded to the maximum classification accuracy using the KNN-SVM algorithm. Five subjects participated in the experiment and performed the four MI motions. There were 240 trials for each subject, that is, 200 trials for training and 40 trials for testing. Incorporating the principle of generic learning, for each subject, the training set is 1,000 training trials, which is composed of the subject’s own 200 training trials and 800 training trials from the other four subjects.\n\nClassification results with different combinations of β and γ using R-CSP from Subject A.\n\nTo verify the classification effectiveness of the feature extraction, we first used CSP and R-CSP to extract the features separately. Then, we used KNN-SVM to classify the extracted features. Table 1 shows the different classification results using CSP and R-CSP. The classification accuracy rates (AC) of subjects A, B, and D, separately, were improved by 5, 7.5, and 10 percentage points, respectively. The classification accuracy rate of Subject E remained at the same level as CSP. The classification accuracy rate of Subject C is reduced by 5 percentage points. Overall, the feature extraction performance of R-CSP is better than that of CSP.\n\nClassification results using CSP and R-CSP.\n\nSubject A B C D E\nCSP AC (%) 80 85 85 82.5 85\nTraining set 200 200 200 200 200\nTest set 40 40 40 40 40\n\nR-CSP AC (%) 85 92.5 80 92.5 85\nβ 0.9 0.2 0.2 0.3 0.1\nγ 0.5 0.9 0.4 0 0.3\nTraining set 1000 1000 1000 1000 1000\nTest set 40 40 40 40 40\n5. Classification Using KNN-SVM\n\nThe sample feature points of the four MI states indicate that the tested EEG signals can cross or overlap. The KNN method is a mature classification algorithm. The concept of this method is that, for a sample of interest, if the K most similar samples in the feature space belong to a particular category, then the sample of interest also belongs to this category. Because the KNN method mainly depends on the samples that are adjacent, it is limited compared with the method of discriminant domain for determining the category. Thus, the KNN method is more suitable than the other methods for crossed or overlapping samples. However, the KNN method classifier uses local information for prediction. Thus, KNN lacks good generalization ability under small sample conditions, and the classification results are easily affected by noise.\n\nThe SVM is a machine learning algorithm that is based on statistical learning theory. Specifically, the SVM is based on the principle of structural risk minimization, which effectively avoids the problems that exist in traditional learning methods, such as learning, dimension disaster, and local minima, and it still has good generalization ability under the condition of having a small sample size. In particular, the SVM is superior to other classification methods in solving two types of classification problems. However, the classification effect is not suitable for crossed or overlapping samples. The use of the SVM for multiclass classification remains limited.\n\nTherefore, the KNN algorithm is used to establish the classification framework based on the KNN algorithm such that the KNN algorithm outputs the two most likely classification categories as the rough classification result, which is then input into the SVM for the second classification to obtain the final classification result. This new composite algorithm is called the KNN-SVM method. The new composite algorithm KNN-SVM can handle crossed or overlapping sample sets and still maintain good generalization ability under small sample conditions. Figure 8 shows the accuracy results of KNN-SVM from five subjects. The classification accuracy rate varies from 80% to 92.5%. Subjects B and D have the best classification effects, with accuracies of up to 92.5%. The KNN-SVM algorithm shows good classification effectiveness. The steps of the KNN-SVM algorithm are as follows (see Figure 9).\n\nAccuracy results for the KNN-SVM from five subjects.\n\nStructure of the KNN-SVM algorithm for feature classification.\n\nStep 1.\n\nCalculate the cosine angle distance between the sample and the training set for each sample and obtain the first K training samples.\n\nStep 2.\n\nCalculate the weight of each category of the selected K training samples.\n\nStep 3.\n\nThe two classes of Ci and Cj with the largest weights are selected as a result of the rough classification. If the classification result of the KNN algorithm is only one Ci, then the instance is directly classified as Ci; otherwise, Ci and Cj are two types of results using the one-against-one SVM for the final classification results of the two classifications.\n\n6. Experimental Results and Discussion\n\nIn this study, the following three steps are involved in the MI pattern recognition system: (1) preprocessing of raw EEG signals; (2) extraction of the features of each state of the EEG signal; and (3) building a pattern recognition classifier. Five healthy subjects participated in the experiments. Each subject had to execute the four proposed MI states in the same experimental environment. The 240 trials of EEG signals obtained from each subject were divided into two sets, namely, the training trials and testing trials.\n\nDuring the preprocessing of the raw EEG signals, we first used a notch filter to suppress the 50 Hz power frequency interference and the Butterworth band-pass filter to extract the 8–30 Hz frequencies of the EEG signals. Then, we used the wCCA and WTD algorithm to separate the EOG artifacts from the raw EEG signals (Figure 3). To demonstrate the effects of the EEG signal preprocessing, we consider the three electrode signals closest to the eyes before and after the preprocessing in Figure 10. The three raw electrode signals are mixed EOG signals and fluctuate significantly. After the preprocessing, the obtained three pure electrode signals fluctuate only slightly.\n\nRaw and denoised signals of the three electrodes closest to the eyes. (a) FC3. (b) Fz. (c) FC4.\n\nFor the EEG feature extraction process, we used R-CSP to extract the EEG features in the spatial domain. Figure 11 shows the scattering of the feature points for the 40 test samples of Subject A. The sample feature points of the LH are crossed or overlapping with feature points of the RS and RF. The classification accuracies of the five subjects for the four MI states are maximum with the best combination of β and γ, as shown in Table 1. Compared with the CSP, the classification accuracy of three subjects (A, B, and D) is higher using R-CSP, and only one subject (C) exhibited a slight decrease in the classification accuracy.\n\nSample feature points of four states using R-CSP from Subject A.\n\nAfter training the classifier with the training set data, we used the test data as the input of the KNN-SVM to verify the classification accuracy rate. Figure 8 shows the accuracy results of KNN-SVM for the five subjects. In addition, the classification results of KNN-SVM for the four MI states are provided in Table 2. For the five subjects, the accuracy rates of the resting (R) state and right foot (RF) state vary from 80% to 100%, and thus, KNN-SVM produces the best average classification accuracy of 92%. The accuracy rate of the right shoulder (RS) from Subject C has a low accuracy (70%), and thus, the accuracy rate of the RS has the second highest accuracy rate. For the left hand (LH), three subjects (A, C, and D) have low accuracy, and therefore, the average accuracy rate of the LH is the lowest (74%).\n\nClassification results of KNN-SVM.\n\nSubject R RF LH RS\nA 100 80 60 100\nB 90 90 90 100\nC 90 90 70 70\nD 100 100 70 100\nE 80 100 80 80\nAC (%) 92 92 74 90\n\nThe confusion matrix is used to verify the actual discrimination success of the proposed method. If an MI state is often misconstrued as another state, then special pattern recognition efforts should be applied to address the complex problems related to the MI states. Figure 12 shows the confusion matrix for the MI state categorizations by KNN-SVM. Using this matrix, the discrimination among the various MI states of all of the subjects can be evaluated in depth. Three MI states (R, RF, and RS) have good classification effectiveness, and only the LH has a low accuracy rate (74%). The misidentification of the LH state is mainly concentrated on the RF and RS. The confusion matrix illustrates that the KNN-SVM classifier is highly precise.\n\nConfusion matrix for the recognition of MI states by KNN-SVM.\n\nAfter EEG feature extraction using R-CSP, a suitable pattern recognition classifier is required. To confirm the good classification performance of the KNN-SVM classifier, we compare it with five commonly used classifiers (Table 3). We used those classifiers to classify the four MI states using the same sample data. Table 3 shows the classification results of six different classifiers. The KNN-SVM classifier has the highest average classification accuracy rate (87%), and the naive Bayes classifier has the lowest accuracy rate (69.5%). Among the six classifiers, the accuracy rates of KNN-SVM from the five subjects are all above 80%. In contrast, the accuracy rate of the Naive Bayes classifier is above 80% for only one subject (D). In addition, the standard deviation of KNN-SVM is the smallest, and thus, the KNN-SVM classifier is highly reliable. The performance of KNN-SVM is also significantly more efficient than those of the other five commonly used classifiers.\n\nClassification results from different classifiers.\n\nClassifier LDA Random forest Naive Bayes KNN SVM KNN-SVM\nSubject A 77.5 80 72.5 80 82.5 85\nSubject B 80 85 65 82.5 82.5 92.5\nSubject C 72.5 60 55 67.5 65 80\nSubject D 87.5 85 85 92.5 87.5 92.5\nSubject E 75 72.5 70 72.5 70 85\nAverage AC (%) 78.5 76.5 69.5 79 77.5 87\nStandard deviation 5.7554 10.5475 10.9545 9.6177 9.5197 5.4199\n\nIn this study, we adopted 16 EEG sensors to classify four MI states. Furthermore, we compared the results with those in previous studies to verify the contribution of our proposed EEG pattern recognition system [17, 3234], as shown in Table 4. Additional EEG sensors can contribute to the quantity and quality of the MI state classification. However, increasing the number of sensors increases the complexity of the classification algorithms and deteriorates the stability performance of the EEG pattern recognition system. Furthermore, more sensors cause discomfort for the subjects. In Table 4, a minimum of 22 electrodes is required to recognize four MI states, but we utilized 16 sensors. In addition, the proposed method provides more effective classifications than the other methods.\n\nComparison between the proposed method and previous studies for MI recognition.\n\nAuthor Electrode number State number Analysis method AC (%)\nBrunner 22 4 CSP, LDAs 65.1\nLu 64 3 SCS-NMF, SVM 68.9\nGarcía-Laencina 4 2 BP/HJ/AAR, LFDA 77.3\nYi 64 7 CAR, CSP, SVM 70\nThis study 16 4 wCCA, R-CSP, KNN-SVM 87\n7. Conclusions\n\nWe proposed a novel MI pattern recognition system for classifying four anisomerous MI states using 16 EEG sensors. First, we combined the Butterworth band-pass filter, wavelet transform, and CCA to preprocess the raw EEG signals. We then used the R-CSP algorithm to extract feature vectors in the spatial domain. We subsequently utilized the KNN-SVM algorithm for classification. For comparison, five mainstream classifiers were used to classify the same sample data. The results indicate that the KNN-SVM classifier is more suitable for the recognition of the four MI states than the five mainstream classifiers. KNN-SVM also exhibits comparatively excellent results. In particular, the average classification accuracy rate is 87%, and the maximum accuracy rate is 92.5%. Based on these findings, we will assign the subjects to receive systematic MI training in the next stage. Thus, the proposed MI pattern recognition system reaches its maximum performance and satisfies the actual needs.\n\nConflicts of Interest\n\nThe authors declare no competing financial interests.\n\nAcknowledgments\n\nThis work was supported in part by the National Natural Science Foundation of China (no. 61203339) and the Tianjin Research Program of Application Foundation and Advanced Technology (no. 14JCYBJC18300).\n\nHamedi M. Salleh S.-H. Noor A. M. Electroencephalographic motor imagery brain connectivity analysis for BCI: a review Neural Computation 2016 28 6 999 1041 10.1162/neco_a_00838 da Silva-Sauer L. Valero-Aguayo L. de la Torre-Luque A. Ron-Angevin R. Varona-Moya S. Concentration on performance with P300-based BCI systems: A matter of interface features Applied Ergonomics 2016 52, article no. 2090 325 332 2-s2.0-84941009640 10.1016/j.apergo.2015.08.002 Shyu K.-K. Chiu Y.-J. Lee P.-L. Lee M.-H. Sie J.-J. Wu C.-H. Wu Y.-T. Tung P.-C. Total design of an FPGA-based brain-computer interface control hospital bed nursing system IEEE Transactions on Industrial Electronics 2013 60 7 2731 2739 2-s2.0-84874919686 10.1109/TIE.2012.2196897 Yi W. Qiu S. Qi H. Zhang L. Wan B. Ming D. EEG feature comparison and classification of simple and compound limb motor imagery Journal of NeuroEngineering and Rehabilitation 2013 10 1, article no. 106 10.1186/1743-0003-10-106 2-s2.0-84885334910 Nicolas-Alonso L. F. Corralejo R. Gomez-Pilar J. Álvarez D. Hornero R. Adaptive semi-supervised classification to reduce intersession non-stationarity in multiclass motor imagery-based brain-computer interfaces Neurocomputing 2015 159 1 186 196 2-s2.0-84933279681 10.1016/j.neucom.2015.02.005 Onose G. Grozea C. Anghelescu A. Daia C. Sinescu C. J. Ciurea A. V. Spircu T. Mirea A. Andone I. Spânu A. Popescu C. Mihǎescu A.-S. Fazli S. Danóczy M. Popescu F. On the feasibility of using motor imagery EEG-based brain-computer interface in chronic tetraplegics for assistive robotic arm control: A clinical test and long-term post-trial follow-up Spinal Cord 2012 50 8 599 608 2-s2.0-84864648006 10.1038/sc.2012.14 Chae Y. Jeong J. Jo S. Toward brain-actuated humanoid robots: asynchronous direct control using an EEG-Based BCI IEEE Transactions on Robotics 2012 28 5 1131 1144 10.1109/tro.2012.2201310 2-s2.0-84867198549 Invitto S. Faggiano C. Sammarco S. de Luca V. de Paolis L. T. Haptic, virtual interaction and motor imagery: Entertainment tools and psychophysiological testing Sensors (Switzerland) 2016 16 3, article no. 394 2-s2.0-84961783943 10.3390/s16030394 Shi T. Wang H. Zhang C. Brain Computer Interface system based on indoor semi-autonomous navigation and motor imagery for Unmanned Aerial Vehicle control Expert Systems with Applications 2015 42 9 4196 4206 2-s2.0-84923088136 10.1016/j.eswa.2015.01.031 Kasess C. H. Windischberger C. Cunnington R. Lanzenberger R. Pezawas L. Moser E. The suppressive influence of SMA on M1 in motor imagery revealed by fMRI and dynamic causal modeling NeuroImage 2008 40 2 828 837 2-s2.0-40649121476 10.1016/j.neuroimage.2007.11.040 Solodkin A. Hlustik P. Chen E. E. Small S. L. Fine modulation in network activation during motor execution and motor imagery Cerebral Cortex 2004 14 11 1246 1255 10.1093/cercor/bhh086 2-s2.0-6344256764 Genovese C. R. Lazar N. A. Nichols T. Thresholding of statistical maps in functional neuroimaging using the false discovery rate NeuroImage 2002 15 4 870 878 2-s2.0-0036334830 10.1006/nimg.2001.1037 Burger C. Van Den Heever D. J. Removal of EOG artefacts by combining wavelet neural network and independent component analysis Biomedical Signal Processing and Control 2015 15 67 79 2-s2.0-84908084402 10.1016/j.bspc.2014.09.009 Romo Vázquez R. Vélez-Pérez H. Ranta R. Louis Dorr V. Maquin D. Maillard L. Blind source separation, wavelet denoising and discriminant analysis for EEG artefacts and noise cancelling Biomedical Signal Processing and Control 2012 7 4 389 400 2-s2.0-84861098933 10.1016/j.bspc.2011.06.005 Chen J.-X. Ma Y. Cai J. Zhou L.-H. Bao Z.-H. Che W. Novel frequency-agile bandpass filter with wide tuning range and spurious suppression IEEE Transactions on Industrial Electronics 2015 62 10 6428 6435 2-s2.0-84941345603 10.1109/TIE.2015.2427122 Ille N. Berg P. Scherg M. Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies Journal of Clinical Neurophysiology 2002 19 2 113 124 2-s2.0-0036103009 10.1097/00004691-200203000-00002 Brunner C. Naeem M. Leeb R. Graimann B. Pfurtscheller G. Spatial filtering and selection of optimized components in four class motor imagery EEG data using independent components analysis Pattern Recognition Letters 2007 28 8 957 964 10.1016/j.patrec.2007.01.002 2-s2.0-33947101973 Vergult A. De Clercq W. Palmini A. Vanrumste B. Dupont P. Van Huffel S. Van Paesschen W. Improving the interpretation of ictal scalp EEG: BSS-CCA algorithm for muscle artifact removal Epilepsia 2007 48 5 950 958 2-s2.0-34248578920 10.1111/j.1528-1167.2007.01031.x Hsu W.-Y. Lin C.-H. Hsu H.-J. Chen P.-H. Chen I.-R. Wavelet-based envelope features with automatic EOG artifact removal: application to single-trial EEG data Expert Systems with Applications 2012 39 3 2743 2749 10.1016/j.eswa.2011.08.132 2-s2.0-80255137439 Samek W. Vidaurre C. Müller K.-R. Kawanabe M. Stationary common spatial patterns for brain-computer interfacing Journal of Neural Engineering 2012 9 2 10.1088/1741-2560/9/2/026013 026013 2-s2.0-84857884326 Khushaba R. N. Greenacre L. Kodagoda S. Louviere J. Burke S. Dissanayake G. Choice modeling and the brain: A study on the Electroencephalogram (EEG) of preferences Expert Systems with Applications 2012 39 16 12378 12388 2-s2.0-84864435128 10.1016/j.eswa.2012.04.084 Prasad G. Herman P. Coyle D. McDonough S. Crosbie J. Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: A feasibility study Journal of NeuroEngineering and Rehabilitation 2010 7 1, article no. 60 2-s2.0-78650012978 10.1186/1743-0003-7-60 Wang D. Miao D. Xie C. Best basis-based wavelet packet entropy feature extraction and hierarchical EEG classification for epileptic detection Expert Systems with Applications 2011 38 11 14314 14320 10.1016/j.eswa.2011.05.096 2-s2.0-79959978998 Jenke R. Peer A. Buss M. Feature extraction and selection for emotion recognition from EEG IEEE Transactions on Affective Computing 2014 5 3 327 339 2-s2.0-84908451369 10.1109/TAFFC.2014.2339834 Jin X. Zhao M. Chow T. W. S. Pecht M. Motor bearing fault diagnosis using trace ratio linear discriminant analysis IEEE Transactions on Industrial Electronics 2014 61 5 2441 2451 10.1109/tie.2013.2273471 2-s2.0-84887125031 Gupta S. Kambli R. Wagh S. Kazi F. Support-vector-machine-based proactive cascade prediction in smart grid using probabilistic framework IEEE Transactions on Industrial Electronics 2015 62 4 2478 2486 2-s2.0-84924883422 10.1109/TIE.2014.2361493 Suraj X. Tiwari P. Ghosh S. Sinha R. K. Classification of two class motor imagery tasks using hybrid GA-PSO based K-means clustering Computational Intelligence and Neuroscience 2015 2015 11 945729 10.1155/2015/945729 2-s2.0-84929376461 Yang C. Deconinck G. Gui W. An optimal power-dispatching control system for the electrochemical process of zinc based on backpropagation and hopfield neural networks IEEE Transactions on Industrial Electronics 2003 50 5 953 961 2-s2.0-0141987500 10.1109/TIE.2003.817605 Wang J. Plataniotis K. N. Lu J. Venetsanopoulos A. N. On solving the face recognition problem with one training sample per subject Pattern Recognition 2006 39 9 1746 1762 2-s2.0-33744984847 10.1016/j.patcog.2006.03.010 Zbl1096.68737 Amunts K. Zilles K. Architectonic Mapping of the Human Brain beyond Brodmann Neuron 2015 88 6 1086 1107 2-s2.0-84961673950 10.1016/j.neuron.2015.12.001 Sanchez-Panchuelo R. M. Besle J. Beckett A. Bowtell R. Schluppeck D. Francis S. Within-digit functional parcellation of brodmann areas of the human primary somatosensory cortex using functional magnetic resonance imaging at 7 tesla Journal of Neuroscience 2012 32 45 15815 15822 2-s2.0-84868545813 10.1523/JNEUROSCI.2501-12.2012 Donoho D. L. Johnstone I. M. Ideal spatial adaptation by wavelet shrinkage Biometrika 1994 81 3 425 455 10.1093/biomet/81.3.425 MR1311089 2-s2.0-0041958932 10.2307/2337118 Lu N. Li T. Pan J. Ren X. Feng Z. Miao H. Structure constrained semi-nonnegative matrix factorization for EEG-based motor imagery classification Computers in Biology and Medicine 2015 60 32 39 2-s2.0-84923910884 10.1016/j.compbiomed.2015.02.010 García-Laencina P. J. Rodríguez-Bermudez G. Roca-Dorda J. Exploring dimensionality reduction of EEG features in motor imagery task classification Expert Systems with Applications 2014 41 11 5285 5295 2-s2.0-84898455704 10.1016/j.eswa.2014.02.043"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87801385,"math_prob":0.8314951,"size":42388,"snap":"2022-05-2022-21","text_gpt3_token_len":10487,"char_repetition_ratio":0.15656851,"word_repetition_ratio":0.046721175,"special_character_ratio":0.24450316,"punctuation_ratio":0.14296287,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96054065,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T02:37:12Z\",\"WARC-Record-ID\":\"<urn:uuid:c4323b4c-1438-4d3b-a411-3bf9479a1fa6>\",\"Content-Length\":\"180415\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c5347e2d-6178-4e1b-baa2-23e94227d05c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cacec98f-fdfc-4267-9297-02d4d962f268>\",\"WARC-IP-Address\":\"99.86.224.51\",\"WARC-Target-URI\":\"https://downloads.hindawi.com/journals/cin/2017/2727856.xml\",\"WARC-Payload-Digest\":\"sha1:EJVF5F37NIA743PQAJO5AQSNPOUHOYSF\",\"WARC-Block-Digest\":\"sha1:ZNI5DOLVZNDVNLHFEYGEMACUIWSYBQYH\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662534773.36_warc_CC-MAIN-20220521014358-20220521044358-00628.warc.gz\"}"} |
https://mathoverflow.net/questions/145951/does-fixing-the-reparameterization-invariance-of-the-string-action-correspond-to | [
"Does fixing the reparameterization invariance of the string action correspond to some kind of orbifolding?\n\nDoes fixing the reparameterization invariance of the string action, for example by choosing the light-cone gauge\n\n$$X^{+} = \\beta\\alpha' p^{+}\\tau$$\n\n$$p^{+} = \\frac{2\\pi}{\\beta} P^{\\tau +}$$\n\ncorrespond to some kind of orbifolding?\n\nThis answer explains that gauge systems are orbifolds after removing the gauge redundancy. So, as the reparameterization invariance of the string action is nothing else but the worldsheet diffeomorphism invariance which is a gauge symmetry, does fixing it by the light-cone gauge correspond to some kind of orbifolding too?\n\nAnd if so, what are the characteristics of this orbifold, what singularities does it have, and is there some kind of double strike which projects certain states out of the theory and leads at the same time to the emergence new ones?\n\nIf this way of thinking is wrong, I would highly appreciate any clarifications of what I am confusing.\n\n• without a specific math-oriented question, it is rather unlikely that you will receive a helpful response from this forum. – Carlo Beenakker Oct 26 '13 at 20:19\n• @CarloBeenakker thanks for your concerns ... However, my hope is that as I have seen technical theoretical physics questions (such as about string-theory for example here, here, here, etc ... ) can find immensely nice answers on this site, that I will be lucky too. Even if it may take its time. – Dilaton Oct 26 '13 at 22:53"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.917183,"math_prob":0.93423563,"size":893,"snap":"2019-43-2019-47","text_gpt3_token_len":214,"char_repetition_ratio":0.114735655,"word_repetition_ratio":0.042857144,"special_character_ratio":0.21948488,"punctuation_ratio":0.07643312,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97592974,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T02:16:39Z\",\"WARC-Record-ID\":\"<urn:uuid:24edee56-cc76-4b1d-b2b4-eca6d676a137>\",\"Content-Length\":\"121657\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bcd69e01-f748-4ce0-811e-0ae20f4ea7bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:4dff1928-ee4d-4584-a4a3-63c53a091ace>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/145951/does-fixing-the-reparameterization-invariance-of-the-string-action-correspond-to\",\"WARC-Payload-Digest\":\"sha1:22LKMXAY5KU4CTLIYRV3DV6446GZHRKF\",\"WARC-Block-Digest\":\"sha1:QWVO3SINM233QWQOYA2AMYXNI6GXK4NX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986648481.7_warc_CC-MAIN-20191014003258-20191014030258-00480.warc.gz\"}"} |
https://terrytao.wordpress.com/tag/asymptotic-notation/ | [
"You are currently browsing the tag archive for the ‘asymptotic notation’ tag.\n\nIn orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable",
null,
"${x}$, for instance, is not allowed to equal",
null,
"${+3}$ and",
null,
"${-3}$ simultaneously. We will call such variables completely specified. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.\n\nHowever, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:",
null,
"$\\displaystyle \\hbox{If } x,a,b,c \\in {\\bf R} \\hbox{ with } a \\neq 0, \\hbox{ then }$",
null,
"$\\displaystyle ax^2+bx+c=0 \\hbox{ if and only if } x = \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a}. \\ \\ \\ \\ \\ (1)$\n\nStrictly speaking, the expression",
null,
"${x = \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a}}$ is not well-formed according to the grammar of first-order logic; one should instead use something like",
null,
"$\\displaystyle x = \\frac{-b - \\sqrt{b^2-4ac}}{2a} \\hbox{ or } x = \\frac{-b + \\sqrt{b^2-4ac}}{2a}$\n\nor",
null,
"$\\displaystyle x \\in \\left\\{ \\frac{-b - \\sqrt{b^2-4ac}}{2a}, \\frac{-b + \\sqrt{b^2-4ac}}{2a} \\right\\}$\n\nor",
null,
"$\\displaystyle x = \\frac{-b + \\epsilon \\sqrt{b^2-4ac}}{2a} \\hbox{ for some } \\epsilon \\in \\{-1,+1\\}$\n\nin order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as",
null,
"$\\displaystyle \\hbox{The sum of two odd numbers is an even number} \\ \\ \\ \\ \\ (2)$\n\nis also not a first-order sentence; one would instead have to write something like",
null,
"$\\displaystyle \\hbox{For all odd numbers } x, y, \\hbox{ the number } x+y \\hbox{ is even} \\ \\ \\ \\ \\ (3)$\n\nor",
null,
"$\\displaystyle \\hbox{For all odd numbers } x,y \\hbox{ there exists an even number } z \\ \\ \\ \\ \\ (4)$",
null,
"$\\displaystyle \\hbox{ such that } x+y=z$\n\ninstead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as",
null,
"${x,y,z}$ which are used once and then discarded.\n\nAnother example of partially specified notation is the innocuous",
null,
"${\\ldots}$ notation. For instance, the assertion",
null,
"$\\displaystyle \\pi=3.14\\ldots,$\n\nwhen written formally using first-order logic, would become something like",
null,
"$\\displaystyle \\pi = 3 + \\frac{1}{10} + \\frac{4}{10^2} + \\sum_{n=3}^\\infty \\frac{a_n}{10^n} \\hbox{ for some sequence } (a_n)_{n=3}^\\infty$",
null,
"$\\displaystyle \\hbox{ with } a_n \\in \\{0,1,2,3,4,5,6,7,8,9\\} \\hbox{ for all } n,$\n\nwhich is not exactly an elegant reformulation. Similarly with statements such as",
null,
"$\\displaystyle \\tan x = x + \\frac{x^3}{3} + \\ldots \\hbox{ for } |x| < \\pi/2$\n\nor",
null,
"$\\displaystyle \\tan x = x + \\frac{x^3}{3} + O(|x|^5) \\hbox{ for } |x| < \\pi/2.$\n\nBelow the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just",
null,
"$\\displaystyle \\hbox{odd} + \\hbox{odd} = \\hbox{even}.$\n\nWhen combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using ambient parameters, we become able to formally introduce asymptotic notation such as the big-O notation",
null,
"${O()}$ or the little-o notation",
null,
"${o()}$. We will explain how to do this at the end of this post.",
null,
"Tom on What are the odds?",
null,
"David Speyer on What are the odds?",
null,
"Terence Tao on What are the odds?",
null,
"Terence Tao on What are the odds?",
null,
"David Speyer on What are the odds?",
null,
"David Speyer on What are the odds?",
null,
"David Speyer on What are the odds?",
null,
"David Speyer on What are the odds?",
null,
"Ryan Pang on What are the odds?",
null,
"Michael R. on A counterexample to the period…",
null,
"Anonymous on What are the odds?",
null,
"Ryan Pang on What are the odds?",
null,
"Luisa on What are the odds?",
null,
"Anonymous on What are the odds?",
null,
"Bernhard Haak on What are the odds?"
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://2.gravatar.com/avatar/53ad723041a60111ccf8a308e33ef440",
null,
"https://2.gravatar.com/avatar/870d64be2e41d1495c86ebcac4d51b3c",
null,
"https://0.gravatar.com/avatar/3c795880f3b73784a9b75fbff3772701",
null,
"https://0.gravatar.com/avatar/3c795880f3b73784a9b75fbff3772701",
null,
"https://2.gravatar.com/avatar/870d64be2e41d1495c86ebcac4d51b3c",
null,
"https://2.gravatar.com/avatar/870d64be2e41d1495c86ebcac4d51b3c",
null,
"https://2.gravatar.com/avatar/870d64be2e41d1495c86ebcac4d51b3c",
null,
"https://2.gravatar.com/avatar/870d64be2e41d1495c86ebcac4d51b3c",
null,
"https://2.gravatar.com/avatar/579b46ff78f829dfe1e71977fcf27067",
null,
"https://0.gravatar.com/avatar/0b4e369ef0a89dd2a69926d6a5f4dace",
null,
"https://0.gravatar.com/avatar/",
null,
"https://2.gravatar.com/avatar/579b46ff78f829dfe1e71977fcf27067",
null,
"https://0.gravatar.com/avatar/",
null,
"https://0.gravatar.com/avatar/",
null,
"https://2.gravatar.com/avatar/5c2cea02903a09513d19f3e04bd0cbdf",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9389378,"math_prob":0.9982043,"size":2050,"snap":"2022-40-2023-06","text_gpt3_token_len":400,"char_repetition_ratio":0.102639295,"word_repetition_ratio":0.0,"special_character_ratio":0.18682927,"punctuation_ratio":0.09016393,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996619,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T17:51:20Z\",\"WARC-Record-ID\":\"<urn:uuid:1e6156ea-a591-4491-95c5-e1ff36e51171>\",\"Content-Length\":\"148619\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d40af5af-b718-4b36-8073-14c067340c98>\",\"WARC-Concurrent-To\":\"<urn:uuid:643477e6-0433-479a-a2de-82c6386b114f>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://terrytao.wordpress.com/tag/asymptotic-notation/\",\"WARC-Payload-Digest\":\"sha1:Q35JHHZKCUG25OGWARBIC355VBQGWYCV\",\"WARC-Block-Digest\":\"sha1:FK7CVOSWNBJAWAV6LLKDA6OLJRDBM3KT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337516.13_warc_CC-MAIN-20221004152839-20221004182839-00485.warc.gz\"}"} |
https://visualfractions.com/calculator/factors/factors-of-2105/ | [
"Factors of 2105\n\nSo you need to find the factors of 2105 do you? In this quick guide we'll describe what the factors of 2105 are, how you find them and list out the factor pairs of 2105 for you to prove the calculation works. Let's dive in!\n\nWant to quickly learn or show students how to find the factors of 2105? Play this very quick and fun video now!\n\nFactors of 2105 Definition\n\nWhen we talk about the factors of 2105, what we really mean is all of the positive and negative integers (whole numbers) that can be evenly divided into 2105. If you were to take 2105 and divide it by one of its factors, the answer would be another factor of 2105.\n\nLet's look at how to find all of the factors of 2105 and list them out.\n\nHow to Find the Factors of 2105\n\nWe just said that a factor is a number that can be divided equally into 2105. So the way you find and list all of the factors of 2105 is to go through every number up to and including 2105 and check which numbers result in an even quotient (which means no decimal place).\n\nDoing this by hand for large numbers can be time consuming, but it's relatively easy for a computer program to do it. Our calculator has worked this out for you. Here are all of the factors of 2105:\n\n• 2105 ÷ 1 = 2105\n• 2105 ÷ 5 = 421\n• 2105 ÷ 421 = 5\n• 2105 ÷ 2105 = 1\n\nAll of these factors can be used to divide 2105 by and get a whole number. The full list of positive factors for 2105 are:\n\n1, 5, 421, and 2105\n\nNegative Factors of 2105\n\nTechnically, in math you can also have negative factors of 2105. If you are looking to calculate the factors of a number for homework or a test, most often the teacher or exam will be looking for specifically positive numbers.\n\nHowever, we can just flip the positive numbers into negatives and those negative numbers would also be factors of 2105:\n\n-1, -5, -421, and -2105\n\nHow Many Factors of 2105 Are There?\n\nAs we can see from the calculations above there are a total of 4 positive factors for 2105 and 4 negative factors for 2105 for a total of 8 factors for the number 2105.\n\nThere are 4 positive factors of 2105 and 4 negative factors of 2105. Wht are there negative numbers that can be a factor of 2105?\n\nFactor Pairs of 2105\n\nA factor pair is a combination of two factors which can be multiplied together to equal 2105. For 2105, all of the possible factor pairs are listed below:\n\n• 1 x 2105 = 2105\n• 5 x 421 = 2105\n\nWe have also written a guide that goes into a little more detail about the factor pairs for 2105 in case you are interested!\n\nJust like before, we can also list out all of the negative factor pairs for 2105:\n\n• -1 x -2105 = 2105\n• -5 x -421 = 2105\n\nNotice in the negative factor pairs that because we are multiplying a minus with a minus, the result is a positive number.\n\nSo there you have it. A complete guide to the factors of 2105. You should now have the knowledge and skills to go out and calculate your own factors and factor pairs for any number you like.\n\nFeel free to try the calculator below to check another number or, if you're feeling fancy, grab a pencil and paper and try and do it by hand. Just make sure to pick small numbers!\n\nIf you found this content useful in your research, please do us a great favor and use the tool below to make sure you properly reference us wherever you use it. We really appreciate your support!\n\n• \"Factors of 2105\". VisualFractions.com. Accessed on January 18, 2022. http://visualfractions.com/calculator/factors/factors-of-2105/.\n\n• \"Factors of 2105\". VisualFractions.com, http://visualfractions.com/calculator/factors/factors-of-2105/. Accessed 18 January, 2022."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9169682,"math_prob":0.9388467,"size":3927,"snap":"2022-05-2022-21","text_gpt3_token_len":1015,"char_repetition_ratio":0.20978843,"word_repetition_ratio":0.012784091,"special_character_ratio":0.29029793,"punctuation_ratio":0.100871734,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957983,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T07:14:29Z\",\"WARC-Record-ID\":\"<urn:uuid:a27b1549-67a4-4766-9087-ff474f3012ce>\",\"Content-Length\":\"24041\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a063184-c664-469e-8d86-102c08ec895c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a68d24ea-022f-46fa-844c-2286ff6eaf99>\",\"WARC-IP-Address\":\"172.67.151.195\",\"WARC-Target-URI\":\"https://visualfractions.com/calculator/factors/factors-of-2105/\",\"WARC-Payload-Digest\":\"sha1:FSQPNCNZ77ZUXNXJ75BI2QHA5Y2BQXY3\",\"WARC-Block-Digest\":\"sha1:ZFWCVSJ6Q235BJYG4WJXAPXE2NF2JOVN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300805.79_warc_CC-MAIN-20220118062411-20220118092411-00219.warc.gz\"}"} |
https://stats.libretexts.org/Bookshelves/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/07%3A_ANOVA/7.02%3A_One-factor_ANOVA | [
"# 7.2: One-factor ANOVA\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$$$\\newcommand{\\AA}{\\unicode[.8,0]{x212B}}$$\n\nThe one-factor ANOVA is sometimes also called a between-subjects ANOVA, an independent factor ANOVA, or a one-way ANOVA (which is a bit of a misnomer as we discuss later). The critical ingredient for a one-factor, between-subjects ANOVA, is that you have one independent variable, with at least two-levels. When you have one IV with two levels, you can run a $$t$$-test. You can also run an ANOVA. Interestingly, they give you almost the exact same results. You will get a $$p$$-value from both tests that is identical (they are really doing the same thing under the hood). The $$t$$-test gives a $$t$$-value as the important sample statistic. The ANOVA gives you the $$F$$-value (for Fisher, the inventor of the test) as the important sample statistic. It turns out that $$t^2$$ equals $$F$$, when there are only two groups in the design. They are the same test. Side-note, it turns out they are all related to Pearson’s r too (but we haven’t written about this relationship yet in this textbook).\n\nRemember that $$t$$ is computed directly from the data. It’s like a mean and standard error that we measure from the sample. In fact it’s the mean difference divided by the standard error of the sample. It’s just another descriptive statistic isn’t it.\n\nThe same thing is true about $$F$$. $$F$$ is computed directly from the data. In fact, the idea behind $$F$$ is the same basic idea that goes into making $$t$$. Here is the general idea behind the formula, it is again a ratio of the effect we are measuring (in the numerator), and the variation associated with the effect (in the denominator).\n\n$\\text{name of statistic} = \\frac{\\text{measure of effect}}{\\text{measure of error}} \\nonumber$\n\n$\\text{F} = \\frac{\\text{measure of effect}}{\\text{measure of error}} \\nonumber$\n\nThe difference with $$F$$, is that we use variances to describe both the measure of the effect and the measure of error. So, $$F$$ is a ratio of two variances.\n\nRemember what we said about how these ratios work. When the variance associated with the effect is the same size as the variance associated with sampling error, we will get two of the same numbers, this will result in an $$F$$-value of 1. When the variance due to the effect is larger than the variance associated with sampling error, then $$F$$ will be greater than 1. When the variance associated with the effect is smaller than the variance associated with sampling error, $$F$$ will be less than one.\n\nLet’s rewrite in plainer English. We are talking about two concepts that we would like to measure from our data. 1) A measure of what we can explain, and 2) a measure of error, or stuff about our data we can’t explain. So, the $$F$$ formula looks like this:\n\n$\\text{F} = \\frac{\\text{Can Explain}}{\\text{Can't Explain}} \\nonumber$\n\nWhen we can explain as much as we can’t explain, $$F$$ = 1. This isn’t that great of a situation for us to be in. It means we have a lot of uncertainty. When we can explain much more than we can’t we are doing a good job, $$F$$ will be greater than 1. When we can explain less than what we can’t, we really can’t explain very much, $$F$$ will be less than 1. That’s the concept behind making $$F$$.\n\nIf you saw an $$F$$ in the wild, and it was .6. Then you would automatically know the researchers couldn’t explain much of their data. If you saw an $$F$$ of 5, then you would know the researchers could explain 5 times more than the couldn’t, that’s pretty good. And the point of this is to give you an intuition about the meaning of an $$F$$-value, even before you know how to compute it.\n\n## Computing the $$F$$-value\n\nFisher’s ANOVA is very elegant in my opinion. It starts us off with a big problem we always have with data. We have a lot of numbers, and there is a lot of variation in the numbers, what to do? Wouldn’t it be nice to split up the variation into to kinds, or sources. If we could know what parts of the variation were being caused by our experimental manipulation, and what parts were being caused by sampling error, we would be making really good progress. We would be able to know if our experimental manipulation was causing more change in the data than sampling error, or chance alone. If we could measure those two parts of the total variation, we could make a ratio, and then we would have an $$F$$ value. This is what the ANOVA does. It splits the total variation in the data into two parts. The formula is:\n\nTotal Variation = Variation due to Manipulation + Variation due to sampling error\n\nThis is a nice idea, but it is also vague. We haven’t specified our measure of variation. What should we use?\n\nRemember the sums of squares that we used to make the variance and the standard deviation? That’s what we’ll use. Let’s take another look at the formula, using sums of squares for the measure of variation:\n\n$SS_\\text{total} = SS_\\text{Effect} + SS_\\text{Error} \\nonumber$\n\n## SS Total\n\nThe total sums of squares, or $$SS\\text{Total}$$ is a way of thinking about all of the variation in a set of data. It’s pretty straightforward to measure. No tricky business. All we do is find the difference between each score and the grand mean, then we square the differences and add them all up.\n\nLet’s imagine we had some data in three groups, A, B, and C. For example, we might have 3 scores in each group. The data could look like this:\n\nsuppressPackageStartupMessages(library(dplyr))\nscores <- c(20,11,2,6,2,7,2,11,2)\ngroups <- as.character(rep(c(\"A\",\"B\",\"C\"), each=3))\ndiff <-scores-mean(scores)\ndiff_squared <-diff^2\ndf<-data.frame(groups,scores,diff, diff_squared)\ndf$groups<-as.character(df$groups)\ndf <- df %>%\nrbind(c(\"Sums\",colSums(df[1:9,2:4]))) %>%\nrbind(c(\"Means\",colMeans(df[1:9,2:4])))\nknitr::kable(df)\ngroups scores diff diff_squared\nA 20 13 169\nA 11 4 16\nA 2 -5 25\nB 6 -1 1\nB 2 -5 25\nB 7 0 0\nC 2 -5 25\nC 11 4 16\nC 2 -5 25\nSums 63 0 302\nMeans 7 0 33.5555555555556\n\nThe data is organized in long format, so that each row is a single score. There are three scores for the A, B, and C groups. The mean of all of the scores is called the Grand Mean. It’s calculated in the table, the Grand Mean = 7.\n\nWe also calculated all of the difference scores from the Grand Mean. The difference scores are in the column titled diff. Next, we squared the difference scores, and those are in the next column called diff_squared.\n\nRemember, the difference scores are a way of measuring variation. They represent how far each number is from the Grand Mean. If the Grand Mean represents our best guess at summarizing the data, the difference scores represent the error between the guess and each actual data point. The only problem with the difference scores is that they sum to zero (because the mean is the balancing point in the data). So, it is convenient to square the difference scores, this turns all of them into positive numbers. The size of the squared difference scores still represents error between the mean and each score. And, the squaring operation exacerbates the differences as the error grows larger (squaring a big number makes a really big number, squaring a small number still makes a smallish number).\n\nOK fine! We have the squared deviations from the grand mean, we know that they represent the error between the grand mean and each score. What next? SUM THEM UP!\n\nWhen you add up all of the individual squared deviations (difference scores) you get the sums of squares. That’s why it’s called the sums of squares (SS).\n\nNow, we have the first part of our answer:\n\n$SS_\\text{total} = SS_\\text{Effect} + SS_\\text{Error} \\nonumber$\n\n$SS_\\text{total} = 302 \\nonumber$\n\nand\n\n$302 = SS_\\text{Effect} + SS_\\text{Error} \\nonumber$\n\nWhat next? If you think back to what you learned about algebra, and solving for X, you might notice that we don’t really need to find the answers to both missing parts of the equation. We only need one, and we can solve for the other. For example, if we found $$SS_\\text{Effect}$$, then we could solve for $$SS_\\text{Error}$$.\n\n## SS Effect\n\n$$SS_\\text{Total}$$ gave us a number representing all of the change in our data, how all the scores are different from the grand mean.\n\nWhat we want to do next is estimate how much of the total change in the data might be due to the experimental manipulation. For example, if we ran an experiment that causes causes change in the measurement, then the means for each group will be different from other. As a result, the manipulation forces change onto the numbers, and this will naturally mean that some part of the total variation in the numbers is caused by the manipulation.\n\nThe way to isolate the variation due to the manipulation (also called effect) is to look at the means in each group, and calculate the difference scores between each group mean and the grand mean, and then sum the squared deviations to find $$SS_\\text{Effect}$$.\n\nConsider this table, showing the calculations for $$SS_\\text{Effect}$$.\n\nsuppressPackageStartupMessages(library(dplyr))\nscores <- c(20,11,2,6,2,7,2,11,2)\nmeans <-c(11,11,11,5,5,5,5,5,5)\ngroups <- as.character(rep(c(\"A\",\"B\",\"C\"), each=3))\ndiff <-means-mean(scores)\ndiff_squared <-diff^2\ndf<-data.frame(groups,scores,means,diff, diff_squared)\ndf$groups<-as.character(df$groups)\ndf <- df %>%\nrbind(c(\"Sums\",colSums(df[1:9,2:5]))) %>%\nrbind(c(\"Means\",colMeans(df[1:9,2:5])))\nknitr::kable(df)\ngroups scores means diff diff_squared\nA 20 11 4 16\nA 11 11 4 16\nA 2 11 4 16\nB 6 5 -2 4\nB 2 5 -2 4\nB 7 5 -2 4\nC 2 5 -2 4\nC 11 5 -2 4\nC 2 5 -2 4\nSums 63 63 0 72\nMeans 7 7 0 8\n\nNotice we created a new column called means. For example, the mean for group A was 11. You can see there are three 11s, one for each observation in row A. The means for group B and C happen to both be 5. So, the rest of the numbers in the means column are 5s.\n\nWhat we are doing here is thinking of each score in the data from the viewpoint of the group means. The group means are our best attempt to summarize the data in those groups. From the point of view of the mean, all of the numbers are treated as the same. The mean doesn’t know how far off it is from each score, it just knows that all of the scores are centered on the mean.\n\nLet’s pretend you are the mean for group A. That means you are an 11. Someone asks you “hey, what’s the score for the first data point in group A?”. Because you are the mean, you say, I know that, it’s 11. “What about the second score?”…it’s 11… they’re all 11, so far as I can tell…“Am I missing something…”, asked the mean.\n\nNow that we have converted each score to it’s mean value we can find the differences between each mean score and the grand mean, then square them, then sum them up. We did that, and found that the $$SS_\\text{Effect} = 72$$.\n\n$$SS_\\text{Effect}$$ represents the amount of variation that is caused by differences between the means. I also refer to this as the amount of variation that the researcher can explain (by the means, which represent differences between groups or conditions that were manipulated by the researcher).\n\nNotice also that $$SS_\\text{Effect} = 72$$, and that 72 is smaller than $$SS_\\text{total} = 302$$. That is very important. $$SS_\\text{Effect}$$ by definition can never be larger than $$SS_\\text{total}$$.\n\n## SS Error\n\nGreat, we made it to SS Error. We already found SS Total, and SS Effect, so now we can solve for SS Error just like this:\n\n$SS_\\text{total} = SS_\\text{Effect} + SS_\\text{Error} \\nonumber$\n\nswitching around:\n\n$SS_\\text{Error} = SS_\\text{total} - SS_\\text{Effect} \\nonumber$\n\n$SS_\\text{Error} = 302 - 72 = 230 \\nonumber$\n\nWe could stop here and show you the rest of the ANOVA, we’re almost there. But, the next step might not make sense unless we show you how to calculate $$SS_\\text{Error}$$ directly from the data, rather than just solving for it. We should do this just to double-check our work anyway.\n\nsuppressPackageStartupMessages(library(dplyr))\nscores <- c(20,11,2,6,2,7,2,11,2)\nmeans <-c(11,11,11,5,5,5,5,5,5)\ngroups <- as.character(rep(c(\"A\",\"B\",\"C\"), each=3))\ndiff <-means-scores\ndiff_squared <-diff^2\ndf<-data.frame(groups,scores,means,diff, diff_squared)\ndf$groups<-as.character(df$groups)\ndf <- df %>%\nrbind(c(\"Sums\",colSums(df[1:9,2:5]))) %>%\nrbind(c(\"Means\",colMeans(df[1:9,2:5])))\nknitr::kable(df)\ngroups scores means diff diff_squared\nA 20 11 -9 81\nA 11 11 0 0\nA 2 11 9 81\nB 6 5 -1 1\nB 2 5 3 9\nB 7 5 -2 4\nC 2 5 3 9\nC 11 5 -6 36\nC 2 5 3 9\nSums 63 63 0 230\nMeans 7 7 0 25.5555555555556\n\nAlright, we did almost the same thing as we did to find $$SS_\\text{Effect}$$. Can you spot the difference? This time for each score we first found the group mean, then we found the error in the group mean estimate for each score. In other words, the values in the $$diff$$ column are the differences between each score and it’s group mean. The values in the diff_squared column are the squared deviations. When we sum up the squared deviations, we get another Sums of Squares, this time it’s the $$SS_\\text{Error}$$. This is an appropriate name, because these deviations are the ones that the group means can’t explain!\n\n## Degrees of freedom\n\nDegrees of freedom come into play again with ANOVA. This time, their purpose is a little bit more clear. $$Df$$s can be fairly simple when we are doing a relatively simple ANOVA like this one, but they can become complicated when designs get more complicated.\n\nLet’s talk about the degrees of freedom for the $$SS_\\text{Effect}$$ and $$SS_\\text{Error}$$.\n\nThe formula for the degrees of freedom for $$SS_\\text{Effect}$$ is\n\n$$df_\\text{Effect} = \\text{Groups} -1$$, where Groups is the number of groups in the design.\n\nIn our example, there are 3 groups, so the df is 3-1 = 2. You can think of the df for the effect this way. When we estimate the grand mean (the overall mean), we are taking away a degree of freedom for the group means. Two of the group means can be anything they want (they have complete freedom), but in order for all three to be consistent with the Grand Mean, the last group mean has to be fixed.\n\nThe formula for the degrees of freedom for $$SS_\\text{Error}$$ is\n\n$$df_\\text{Error} = \\text{scores} - \\text{groups}$$, or the number of scores minus the number of groups. We have 9 scores and 3 groups, so our $$df$$ for the error term is 9-3 = 6. Remember, when we computed the difference score between each score and its group mean, we had to compute three means (one for each group) to do that. So, that reduces the degrees of freedom by 3. 6 of the difference scores could be anything they want, but the last 3 have to be fixed to match the means from the groups.\n\n## Mean Squared Error\n\nOK, so we have the degrees of freedom. What’s next? There are two steps left. First we divide the $$SS$$es by their respective degrees of freedom to create something new called Mean Squared Error. Let’s talk about why we do this.\n\nFirst of all, remember we are trying to accomplish this goal:\n\n$\\text{F} = \\frac{\\text{measure of effect}}{\\text{measure of error}} \\nonumber$\n\nWe want to build a ratio that divides a measure of an effect by a measure of error. Perhaps you noticed that we already have a measure of an effect and error! How about the $$SS_\\text{Effect}$$ and $$SS_\\text{Error}$$. They both represent the variation due to the effect, and the leftover variation that is unexplained. Why don’t we just do this?\n\n$\\frac{SS_\\text{Effect}}{SS_\\text{Error}} \\nonumber$\n\nWell, of course you could do that. What would happen is you can get some really big and small numbers for your inferential statistic. And, the kind of number you would get wouldn’t be readily interpretable like a $$t$$ value or a $$z$$ score.\n\nThe solution is to normalize the $$SS$$ terms. Don’t worry, normalize is just a fancy word for taking the average, or finding the mean. Remember, the SS terms are all sums. And, each sum represents a different number of underlying properties.\n\nFor example, the SS_ represents the sum of variation for three means in our study. We might ask the question, well, what is the average amount of variation for each mean…You might think to divide SS_ by 3, because there are three means, but because we are estimating this property, we divide by the degrees of freedom instead (# groups - 1 = 3-1 = 2). Now we have created something new, it’s called the $$MSE_\\text{Effect}$$.\n\n$MSE_\\text{Effect} = \\frac{SS_\\text{Effect}}{df_\\text{Effect}} \\nonumber$\n\n$MSE_\\text{Effect} = \\frac{72}{2} = 36 \\nonumber$\n\nThis might look alien and seem a bit complicated. But, it’s just another mean. It’s the mean of the sums of squares for the effect. If this reminds you of the formula for the variance, good memory. The $$SME_\\text{Effect}$$ is a measure variance for the change in the data due to changes in the means (which are tied to the experimental conditions).\n\nThe $$SS_\\text{Error}$$ represents the sum of variation for nine scores in our study. That’s a lot more scores, so the $$SS_\\text{Error}$$ is often way bigger than than $$SS_\\text{Effect}$$. If we left our SSes this way and divided them, we would almost always get numbers less than one, because the $$SS_\\text{Error}$$ is so big. What we need to do is bring it down to the average size. So, we might want to divide our $$SS_\\text{Error}$$ by 9, after all there were nine scores. However, because we are estimating this property, we divide by the degrees of freedom instead (scores-groups) = 9-3 = 6). Now we have created something new, it’s called the $$MSE_\\text{Error}$$.\n\n$MSE_\\text{Error} = \\frac{SS_\\text{Error}}{df_\\text{Error}} \\nonumber$\n\n$MSE_\\text{Error} = \\frac{230}{6} = 38.33 \\nonumber$\n\n## Calculate F\n\nNow that we have done all of the hard work, calculating $$F$$ is easy:\n\n$\\text{F} = \\frac{\\text{measure of effect}}{\\text{measure of error}} \\nonumber$\n\n$\\text{F} = \\frac{MSE_\\text{Effect}}{MSE_\\text{Error}} \\nonumber$\n\n$\\text{F} = \\frac{36}{38.33} = .939 \\nonumber$\n\nDone!\n\n## The ANOVA TABLE\n\nYou might suspect we aren’t totally done here. We’ve walked through the steps of computing $$F$$. Remember, $$F$$ is a sample statistic, we computed $$F$$ directly from the data. There were a whole bunch of pieces we needed, the dfs, the SSes, the MSEs, and then finally the F.\n\nAll of these little pieces are conveniently organized by ANOVA tables. ANOVA tables look like this:\n\nlibrary(xtable)\nsuppressPackageStartupMessages(library(dplyr))\nscores <- c(20,11,2,6,2,7,2,11,2)\nmeans <-c(11,11,11,5,5,5,5,5,5)\ngroups <- as.character(rep(c(\"A\",\"B\",\"C\"), each=3))\ndiff <-means-scores\ndiff_squared <-diff^2\ndf<-data.frame(groups,scores,means,diff, diff_squared)\ndf$groups<-as.character(df$groups)\ndf <- df %>%\nrbind(c(\"Sums\",colSums(df[1:9,2:5]))) %>%\nrbind(c(\"Means\",colMeans(df[1:9,2:5])))\n\naov_out<-aov(scores~ groups, df[1:9,])\nsummary_out<-summary(aov_out)\nknitr::kable(xtable(summary_out))\nDf Sum Sq Mean Sq F value Pr(>F)\ngroups 2 72 36.00000 0.9391304 0.4417359\nResiduals 6 230 38.33333 NA NA\n\nYou are looking at the print-out of an ANOVA summary table from R. Notice, it had columns for $$Df$$, $$SS$$ (Sum Sq), $$MSE$$ (Mean Sq), $$F$$, and a $$p$$-value. There are two rows. The groups row is for the Effect (what our means can explain). The Residuals row is for the Error (what our means can’t explain). Different programs give slightly different labels, but they are all attempting to present the same information in the ANOVA table. There isn’t anything special about the ANOVA table, it’s just a way of organizing all the pieces. Notice, the MSE for the effect (36) is placed above the MSE for the error (38.333), and this seems natural because we divide 36/38.33 in or to get the $$F$$-value!\n\nThis page titled 7.2: One-factor ANOVA is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Matthew J. C. Crump via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9052775,"math_prob":0.99868995,"size":19711,"snap":"2023-40-2023-50","text_gpt3_token_len":5512,"char_repetition_ratio":0.15116456,"word_repetition_ratio":0.06721915,"special_character_ratio":0.28922936,"punctuation_ratio":0.120201536,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995413,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T06:43:04Z\",\"WARC-Record-ID\":\"<urn:uuid:c0905cf5-3bfa-456c-b3e0-12b02cf8da28>\",\"Content-Length\":\"158966\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82503e45-5172-497c-88b1-a8d681a21a8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:69c352d7-fb6e-4089-9bb4-1cad1faeb7a5>\",\"WARC-IP-Address\":\"3.162.103.49\",\"WARC-Target-URI\":\"https://stats.libretexts.org/Bookshelves/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/07%3A_ANOVA/7.02%3A_One-factor_ANOVA\",\"WARC-Payload-Digest\":\"sha1:PKYD35FBQPYGNUSVEWIUWXPQRNTCKFKA\",\"WARC-Block-Digest\":\"sha1:WXXFRERCE6PH24276VEAC4JY5UCABKDB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100583.31_warc_CC-MAIN-20231206063543-20231206093543-00571.warc.gz\"}"} |
https://moviecultists.com/in-the-spectrum-of-a-blackbody-the-wavelength-of-peak-intensity | [
"# In the spectrum of a blackbody the wavelength of peak intensity?\n\nThe peak of the blackbody curve in a spectrum moves to shorter wavelengths for hotter objects. If you think in terms of visible light, the hotter the blackbody, the bluer the wavelength of its peak emission. For example, the sun has a temperature of approximately 5800 Kelvin.\n\nAlso to know, What happens to the peak wavelength in the black body spectrum?\n\nThe blackbody radiation curves have quite a complex shape (described by Planck's Law). The spectral profile (or curve) at a specific temperature corresponds to a specific peak wavelength, and vice versa. As the temperature of the blackbody increases, the peak wavelength decreases (Wien's Law).\n\nHereof, What is its wavelength of peak intensity?. In terms of power per percentage bandwidth, the peak is at about 635 nm, a red wavelength. Regardless of how one wants to plot the spectrum, about half of the sun's radiation is at wavelengths shorter than 710 nm, about the limit of the human vision.\n\nSimilarly, it is asked, How do you find the peak intensity of a wavelength?\n\nIf the temperature is = C = K, then on the traditional wavelength plot the wavelength at which the radiation curve peaks is: λpeak= x10^ m = nm = microns. This wavelength corresponds to quantum energy hν = x 10^ eV. corresponding to quantum energy hν = x 10^ eV.\n\nWhat is the peak wavelength of Betelgeuse?\n\nUsing a peak wavelength for Betelgeuse of 855 in Wien's law yields a temperature for Betelgeuse of 3391 Kelvin. Using Wien's law, and a temperature of 10,100 K, the star has a peak wavelength of only 287 nanometers.\n\n22 related questions found\n\n### Why does wavelength increase intensity?\n\nWe know that a wave which has greater frequency will have low wavelength and high energy. So, by decreasing the wavelength, the frequency and consequently energy (intensity) of that wave will increase or vice versa.\n\n### How do you calculate wavelength?\n\nThe wavelength is calculated from the wave speed and frequency by λ = wave speed/frequency, or λ = v / f. A peak is the highest point of a wave, while the valley is the lowest point of a wave.\n\n### What is the relationship between wavelength and temperature?\n\nThe higher the object's temperature, the faster the molecules will vibrate and the shorter the wavelength will be.\n\n### How do you find the maximum wavelength?\n\nSee formula wavelength = speed of wave / frequency. How do I work out the maximum wavelength? To determine the maximum wavelength of light, you simply use the energy equation. If you know the amount of energy required for the reaction, you plug it into the equation λ = hc/E.\n\n### Why is a black body a perfect emitter?\n\nA blackbody allows all incident radiation to pass into it (no reflected energy) and internally absorbs all the incident radiation (no energy transmitted through the body). This is true for radiation of all wavelengths and for all angles of incidence. Hence the blackbody is a perfect absorber for all incident radiation.\n\n### What is black body example?\n\nA black body is one that absorbs all radiation incident upon it. ... A good example of a black body is a cavity with a small hole in it. Any light incident upon the hole goes into the cavity and is essentially never reflected out since it would have to undergo a very large number of reflections off walls of the cavity.\n\n### Which has more energy a photon of?\n\nPhoton energy is directly proportional to the frequency of the radiation and inversely proportional to the wavelength. Ultraviolet has the greatest amount of photon energy.\n\n### What is the longest frequency visible light in a spectrum?\n\nAs the full spectrum of visible light travels through a prism, the wavelengths separate into the colors of the rainbow because each color is a different wavelength. Violet has the shortest wavelength, at around 380 nanometers, and red has the longest wavelength, at around 700 nanometers.\n\n### What does peak wavelength mean?\n\nPeak Wavelength - Peak wavelength is defined as the single wavelength where the radiometric emission spectrum of the light source reaches its maximum. More simply, it does not represent any perceived emission of the light source by the human eye, but rather by photo-detectors.\n\n### Does higher wavelength mean higher temperature?\n\nThe wavelength of peak emission depends on the temperature of the object emitting radiation. A higher temperature will cause the wavelength of peak emission to be at a shorter wavelength.\n\n### What is the relationship between frequency and wavelength?\n\nFrequency and wavelength are inversely proportional to each other. The wave with the greatest frequency has the shortest wavelength. Twice the frequency means one-half the wavelength. For this reason, the wavelength ratio is the inverse of the frequency ratio.\n\n### What is the de Broglie equation for wavelength?\n\nApply the de Broglie wave equation λ=hmv λ = h m v to solve for the wavelength of the moving electron.\n\n### What is the formula for wavelength and frequency?\n\nFrequency (f) and wavelength (λ) are joined by the equation fλ = c, where c is the speed of light.\n\n### How do you calculate the wavelength of radiation?\n\nTo find wavelength ( λ ), the equation is manipulated so that λ=cν . What is the wavelength of a electromagnetic wave that has a frequency of 4.95×1014 Hz ? Once you have frequency, you can use the first equation c=λ⋅ν to find the wavelength.\n\n### Does intensity increase with frequency?\n\nAssuming the photon energy is sufficient to produce electron emission, increasing the intensity while keeping frequency fixed increases the number of photons hitting the metal, increasing the rate at which electrons are emitted but not changing the maximum kinetic energy of the electrons.\n\n### What is difference between intensity and frequency?\n\nif you consider light is wave, intensity is related to light radiation energy and frequency is the number of waves per second. ... Frequency is related to photon's energy (E = hν , E is energy, h is planck's constant and ν is frequency) . In particle nature, intensity is related to number of photons in the radiation.\n\n### Is intensity of light proportional to frequency?\n\nYes, the intensity depends, in part, on the frequency. if N is the monochromatic photon emission rate (photons per second), ν is the frequency of the photons, and A is the area these photons are hitting.\n\n### What is the peak wavelength emitted by a person with this temperature?\n\nHuman body temperature is about 310.15ºK . That puts the peak radiation in the infrared range. Human vision can see red light wavelengths as long as about 7,000 Angstroms. The infrared wavelengths are generally defined as being between 7,000 and 1,000,000 Angstroms."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9242502,"math_prob":0.9900219,"size":6669,"snap":"2021-43-2021-49","text_gpt3_token_len":1386,"char_repetition_ratio":0.19864966,"word_repetition_ratio":0.012522361,"special_character_ratio":0.2088769,"punctuation_ratio":0.11155379,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99465805,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T16:57:46Z\",\"WARC-Record-ID\":\"<urn:uuid:b04bdb14-36ac-488d-ae21-b56f372c7749>\",\"Content-Length\":\"41751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d6c7b01-336d-4794-823d-ba35023ab2cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2d22625-9266-4c02-a71f-8a71e433adaf>\",\"WARC-IP-Address\":\"147.182.135.93\",\"WARC-Target-URI\":\"https://moviecultists.com/in-the-spectrum-of-a-blackbody-the-wavelength-of-peak-intensity\",\"WARC-Payload-Digest\":\"sha1:KAGCHI2ZPJKHDIGK57YUUJWEAEEOMPKO\",\"WARC-Block-Digest\":\"sha1:YCRFNS4C726K3HV7UTA7HH73D5ZLAOFO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585322.63_warc_CC-MAIN-20211020152307-20211020182307-00134.warc.gz\"}"} |
https://artofproblemsolving.com/wiki/index.php?title=2021_AMC_12B_Problems/Problem_15&direction=prev&oldid=145473 | [
"# 2021 AMC 12B Problems/Problem 15\n\n## Problem 20\n\nThe figure is constructed from",
null,
"$11$ line segments, each of which has length",
null,
"$2$. The area of pentagon",
null,
"$ABCDE$ can be written is",
null,
"$\\sqrt{m} + \\sqrt{n}$, where",
null,
"$m$ and",
null,
"$n$ are positive integers. What is",
null,
"$m + n ?$",
null,
"$[asy] /* Made by samrocksnature */ pair A=(-2.4638,4.10658); pair B=(-4,2.6567453480756127); pair C=(-3.47132,0.6335248637894945); pair D=(-1.464483379039766,0.6335248637894945); pair E=(-0.956630463955801,2.6567453480756127); pair F=(-2,2); pair G=(-3,2); draw(A--B--C--D--E--A); draw(A--F--A--G); draw(B--F--C); draw(E--G--D); label(\"A\",A,N); label(\"B\",B,W); label(\"C\",C,W); label(\"D\",D,E); label(\"E\",E,dir(0)); dot(A^^B^^C^^D^^E^^F^^G); [/asy]$",
null,
"$\\textbf{(A)} ~20 \\qquad\\textbf{(B)} ~21 \\qquad\\textbf{(C)} ~22 \\qquad\\textbf{(D)} ~23 \\qquad\\textbf{(E)} ~24$"
] | [
null,
"https://latex.artofproblemsolving.com/c/6/8/c6878713578626763c38433b3f4c8c2205ad0c15.png ",
null,
"https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ",
null,
"https://latex.artofproblemsolving.com/f/1/3/f13a96240c51d61ba3733d9f5c3020fa92ec4136.png ",
null,
"https://latex.artofproblemsolving.com/a/2/e/a2eb77fae2297f85e2e36781b9a1de7a9dc415ec.png ",
null,
"https://latex.artofproblemsolving.com/f/5/0/f5047d1e0cbb50ec208923a22cd517c55100fa7b.png ",
null,
"https://latex.artofproblemsolving.com/1/7/4/174fadd07fd54c9afe288e96558c92e0c1da733a.png ",
null,
"https://latex.artofproblemsolving.com/b/8/1/b81b227c8476afad964764da7c20225c1ccf4c1a.png ",
null,
"https://latex.artofproblemsolving.com/2/9/0/290bd73a3648761d08879fa132a08542027b4484.png ",
null,
"https://latex.artofproblemsolving.com/3/b/4/3b4cdd24f3c05c12d7c19c34658cbacebaa91951.png ",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.97443557,"math_prob":0.9994099,"size":353,"snap":"2022-27-2022-33","text_gpt3_token_len":81,"char_repetition_ratio":0.08309455,"word_repetition_ratio":0.8196721,"special_character_ratio":0.23512748,"punctuation_ratio":0.114285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T19:30:20Z\",\"WARC-Record-ID\":\"<urn:uuid:a2338000-8557-417c-9a09-ee934a60932d>\",\"Content-Length\":\"35409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:266893e0-b7b2-4a2d-91cc-c61a1590208e>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf101b95-65e0-4b65-aa1d-395fcd72cde7>\",\"WARC-IP-Address\":\"104.26.11.229\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php?title=2021_AMC_12B_Problems/Problem_15&direction=prev&oldid=145473\",\"WARC-Payload-Digest\":\"sha1:2B7OM57S5SKFTSLF3ANPCL3BIEULKWRW\",\"WARC-Block-Digest\":\"sha1:U5LLKM7MRUKF7PMYKQ64DFSV2GPQHRFY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036099.6_warc_CC-MAIN-20220625190306-20220625220306-00689.warc.gz\"}"} |
https://expert-only.net/tag/calculate/ | [
"### Calculate year-to-date value in DAX with Power BI\n\nHow to calculate a year-to-date value of the sales in DAX with Power BI? Also called YTD, it represents the total to date of a given measure. For example, to calculate the year-to-date sales total with a DAX formula, use the built-in function called TOTALYTD. From this example, easily copy"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8971286,"math_prob":0.9575483,"size":342,"snap":"2021-04-2021-17","text_gpt3_token_len":80,"char_repetition_ratio":0.13905326,"word_repetition_ratio":0.0,"special_character_ratio":0.2134503,"punctuation_ratio":0.097222224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.984598,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-12T01:21:39Z\",\"WARC-Record-ID\":\"<urn:uuid:65e9381a-fd46-4edd-a13d-2482fed05073>\",\"Content-Length\":\"64713\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74e58ccd-03fe-4e6f-b95e-f5a5c67139c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:e189f2bf-033e-456a-83d7-f1f38a0fd0f7>\",\"WARC-IP-Address\":\"62.4.19.110\",\"WARC-Target-URI\":\"https://expert-only.net/tag/calculate/\",\"WARC-Payload-Digest\":\"sha1:DDNNICPRK3AOVTGBAQJI2ZDRVMHVBQIV\",\"WARC-Block-Digest\":\"sha1:NBDXN5M3DILOVFO5QXXS2W6VNWWWCJHW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038065903.7_warc_CC-MAIN-20210411233715-20210412023715-00255.warc.gz\"}"} |
https://www.visualfractions.com/teachers/fractiontomixedcircles/ | [
"FRACTION TO MIXED WITH CIRCLE MODELS\nSHOW COLOR\nEXPLAIN\nSHOW INPUT\nFRACTION FORM TO WHOLE OR MIXED FORM",
null,
"# INSTRUCTIONS\n\nWith Fraction to Mixed with Circles Designer you can design fractions examples that use circle models to picture renaming fractions from fraction form to mixed form.\n\nA mixed fraction is a fraction greater than one (1) that has a whole number part and a fraction number part, such as 1 34.\n\nIf you enter a fraction where the numerator is greater than or equal to the value of the denominator, such as",
null,
", the program will show the mixed form of the number(1 34).\n\nYou can input any fraction with a value less than 5. The denominator must be less than 100. Press the <OK> button and the chosen fraction will appear in mixed form.\n\nOn the left is a <SHOW COLOR> check box. Uncheck the box to turn off the red selected parts of the circle. This will allow the learner to select the fraction.\n\nUncheck the <EXPLAIN> check box to turn off the answer and the explanation. You can ask your students to complete the number sentence.\n\nWith <SHOW INPUT> unchecked, the numerator and denominator input boxes will act the same as password input boxes so students will not see the numbers you input.\n\nWith <SHOW INPUT> unchecked and <EXPLAIN> unchecked, the learner can write a sentence that shows the fraction form and mixed or whole form.\n\nWith <SHOW COLOR> unchecked and <EXPLAIN> unchecked, the student can shade the indicated fraction and write in mixed or whole form.\n\nSuggestions:\n\nBesides looking at the number line, there are other methods can be used to arrive at the answer. One method is to see how many 99 there are in the fraction 269. In this case there are 2 units of 99 in 269, giving a whole number 2. After 2 units of 99 there are 8 more parts, giving a numerator of 8.",
null,
"Another method is to divide the numerator by the denominator. The denominator divides twice into the numerator with a remainder of 8, giving a whole number of 2, a numerator of 8, and a denominator of 9. This method is illustrated below.",
null,
"WINDOWS COMPUTERS\n\nWindows users can select any part of the screen by right clicking and selecting \"Take a screenshot\". Adjust to fit mage you want. This copies the selection into Windows Clipboard™. The screen can then be pasted into Windows Paint™ or your favorite imaging program. Or you can select \"Download\" which will put the image into your files \"Download\" folder."
] | [
null,
"https://www.visualfractions.com/images/main_menusbtna.gif",
null,
"https://www.visualfractions.com/teachers/images/3fourths1.gif",
null,
"https://www.visualfractions.com/MixedLines/mixednums.gif",
null,
"https://www.visualfractions.com/MixedLines/divide.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8593384,"math_prob":0.85220206,"size":2584,"snap":"2020-24-2020-29","text_gpt3_token_len":591,"char_repetition_ratio":0.13914728,"word_repetition_ratio":0.00867679,"special_character_ratio":0.22252321,"punctuation_ratio":0.08139535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97571045,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,7,null,2,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T10:24:14Z\",\"WARC-Record-ID\":\"<urn:uuid:de85bb08-516a-4278-a7fd-6b1c83a60023>\",\"Content-Length\":\"10998\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01c34cb2-491a-4b14-8f07-ff1f2fc5c5d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:cfb6aab2-bc1f-461a-8756-76bdf7fb0c2a>\",\"WARC-IP-Address\":\"206.188.192.167\",\"WARC-Target-URI\":\"https://www.visualfractions.com/teachers/fractiontomixedcircles/\",\"WARC-Payload-Digest\":\"sha1:SAXRHKWSJBGFX45F6BOMEMGK4IEA3VSY\",\"WARC-Block-Digest\":\"sha1:KOCYKO5HGU65T6X2X376OQ74Q3JN2JIT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655887319.41_warc_CC-MAIN-20200705090648-20200705120648-00566.warc.gz\"}"} |
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-2-section-2-4-circles-2-4-assess-your-understanding-page-187/50 | [
"## College Algebra (10th Edition)\n\n$A=36\\pi-72$ sq.units\nThe diagonal of the square equals the diameter of the circle. From the equation, r=6, diameter = 12 The sides a of the square and the diagonal d satisfy Pythagorean theorem $a^{2}+a^{2}=d^{2}$ $2a^{2}=12^{2}$ $a^{2}=\\displaystyle \\frac{144}{2}=72$ $a^{2}$ is the area of the square.... $A_{sq}=72$ sq.units This area is subtracted from the area of the circle, $A=A_{circ}-A_{sq}$ $A=6^{2}\\pi-72$ $A=36\\pi-72$ sq.units"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.70110583,"math_prob":1.0000064,"size":464,"snap":"2019-43-2019-47","text_gpt3_token_len":167,"char_repetition_ratio":0.15,"word_repetition_ratio":0.0,"special_character_ratio":0.39655173,"punctuation_ratio":0.10091743,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000092,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T05:40:38Z\",\"WARC-Record-ID\":\"<urn:uuid:9a21e775-6d5c-4b73-b5d7-83f97c2c3800>\",\"Content-Length\":\"56245\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04163407-a880-4287-be9d-78a48c5fefae>\",\"WARC-Concurrent-To\":\"<urn:uuid:7cd14be7-fa6c-484d-a987-1f2f1fbad766>\",\"WARC-IP-Address\":\"54.210.73.90\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-2-section-2-4-circles-2-4-assess-your-understanding-page-187/50\",\"WARC-Payload-Digest\":\"sha1:YYAWWXY4PHNO27WPBQZ75PVICFRXK6E3\",\"WARC-Block-Digest\":\"sha1:UBZQEH3V35X4ESAZ5YYPHEM4GFFXVYI6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669454.33_warc_CC-MAIN-20191118053441-20191118081441-00222.warc.gz\"}"} |
https://de.maplesoft.com/support/help/errors/view.aspx?path=DifferentialGeometry%2FTools%2FDGsimplify | [
"",
null,
"DGsimplify - Maple Help\n\nDifferentialGeometry[Tools]\n\n DGsimplify",
null,
"Calling Sequence DGsimplify(X)",
null,
"Parameters\n\n X - a vector field, differential form or tensor",
null,
"Description\n\n • This command is typically used to simplify a vector field, differential form or tensor, defined on a manifold M, after a substitution or evaluation of its coefficients at point on M.\n • This command is part of the DifferentialGeometry:-Tools package, and so can be used in the form DGsimplify(...) only after executing the commands with(DifferentialGeometry) and with(Tools) in that order. It can always be used in the long form DifferentialGeometry:-Tools:-DGsimplify.",
null,
"Examples\n\n > $\\mathrm{with}\\left(\\mathrm{DifferentialGeometry}\\right):$$\\mathrm{with}\\left(\\mathrm{Tools}\\right):$\n\nExample 1.\n\nDefine a manifold M with coordinates [x, y, z].\n\n > $\\mathrm{DGsetup}\\left(\\left[x,y,z\\right],M\\right):$\n\nDefine a tensor on M and evaluate it at the point [x = 0, y = 2, z = 3].\n\n > $T≔\\mathrm{evalDG}\\left({x}^{2}\\mathrm{D_x}\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}&t\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}\\mathrm{dx}+xy\\mathrm{D_y}\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}&t\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}\\mathrm{dz}+\\left(z\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}&t\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}\\mathrm{D_z}\\right)\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}&t\\phantom{\\rule[-0.0ex]{0.3em}{0.0ex}}\\mathrm{dy}\\right)$\n ${\\mathrm{_DG}}{}\\left(\\left[\\left[{\"tensor\"}{,}{M}{,}\\left[\\left[{\"con_bas\"}{,}{\"cov_bas\"}\\right]{,}\\left[{}\\right]\\right]\\right]{,}\\left[\\left[\\left[{1}{,}{1}\\right]{,}{{x}}^{{2}}\\right]{,}\\left[\\left[{2}{,}{3}\\right]{,}{x}{}{y}\\right]{,}\\left[\\left[{3}{,}{2}\\right]{,}{z}\\right]\\right]\\right]\\right)$ (1)\n > $S≔\\mathrm{eval}\\left(T,\\left[x=0,y=2,z=3\\right]\\right)$\n ${\\mathrm{_DG}}{}\\left(\\left[\\left[{\"tensor\"}{,}{M}{,}\\left[\\left[{\"con_bas\"}{,}{\"cov_bas\"}\\right]{,}\\left[{}\\right]\\right]\\right]{,}\\left[\\left[\\left[{1}{,}{1}\\right]{,}{0}\\right]{,}\\left[\\left[{2}{,}{3}\\right]{,}{0}\\right]{,}\\left[\\left[{3}{,}{2}\\right]{,}{3}\\right]\\right]\\right]\\right)$ (2)\n\nThe terms 0*D_x and 0*D_y can be eliminated with a call to DGsimplify.\n\n > $\\mathrm{DGsimplify}\\left(S\\right)$\n ${\\mathrm{_DG}}{}\\left(\\left[\\left[{\"tensor\"}{,}{M}{,}\\left[\\left[{\"con_bas\"}{,}{\"cov_bas\"}\\right]{,}\\left[{}\\right]\\right]\\right]{,}\\left[\\left[\\left[{3}{,}{2}\\right]{,}{3}\\right]\\right]\\right]\\right)$ (3)\n M >"
] | [
null,
"https://bat.bing.com/action/0",
null,
"https://de.maplesoft.com/support/help/errors/arrow_down.gif",
null,
"https://de.maplesoft.com/support/help/errors/arrow_down.gif",
null,
"https://de.maplesoft.com/support/help/errors/arrow_down.gif",
null,
"https://de.maplesoft.com/support/help/errors/arrow_down.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7417116,"math_prob":0.9999733,"size":1201,"snap":"2022-40-2023-06","text_gpt3_token_len":392,"char_repetition_ratio":0.1562239,"word_repetition_ratio":0.026490066,"special_character_ratio":0.24646129,"punctuation_ratio":0.22727273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988564,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-27T15:32:54Z\",\"WARC-Record-ID\":\"<urn:uuid:8b49353f-bd35-47b6-903f-f34a8b4fa2a2>\",\"Content-Length\":\"184347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12dd1781-8a65-4a94-9ff3-fe6bb5cbafed>\",\"WARC-Concurrent-To\":\"<urn:uuid:374975bc-7625-418c-b606-6220defc5f42>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://de.maplesoft.com/support/help/errors/view.aspx?path=DifferentialGeometry%2FTools%2FDGsimplify\",\"WARC-Payload-Digest\":\"sha1:CGRRB44I2UTSSZPVQ5H7KHELWXESD6IE\",\"WARC-Block-Digest\":\"sha1:PKOUJTHA752CPPJPS7ZG5E2JEBEJLLOQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335034.61_warc_CC-MAIN-20220927131111-20220927161111-00714.warc.gz\"}"} |
https://rishabh1403.com/posts/coding/leetcode/2019/12/leetcode-solution-of-palindrome-number-in-javascript/ | [
"",
null,
"# Leetcode | Solution of Palindrome Number in JavaScript\n\nDecember 11th, 2019\n|\n\nIn this post, we will solve the palindrome number problem from leetcode using a couple of methods and compare their time and space complexities. Let's begin.\n\n# Problem Statement\n\nThe question can be found at leetcode palindrome number problem.\n\nThe problem states that we need to determine if a given integer is a palindrome.\n\n# Constraints and challenges\n\n• We need to take the sign of number into account while solving the problem. -2 is not a palindrome as 2- is not equal to -2.\n\n# Solutions\n\nWe will discuss three solutions in this article and compare their time & space complexities.\n\n• String-based reversal\n• Number based reversal\n• Two pointer method\n\n# String-based reversal\n\nIn this method, we will convert the number to a string, reverse it and check if the initial number is equal to the new one. We will use some built-in JavaScript methods.\n\nThe idea is very simple\n\n• convert to string\n• create a character array\n• reverse it\n• join it back to a string\n• check for equality\n\nLet's see a simple implementation of the above logic.\n\n``````var isPalindrome = function(x) {\nreturn x == x.toString().split('').reverse().join('');\n};``````\n\nNotice the `==` as opposed to `===` because we want to check if both sides are equal regardless of their type. In this case, X is a number while the computed value is a string.\n\nSome of the methods chained are\n\n• toString() to convert the number to a string\n• split() to convert the string to an array of characters\n• reverse() to reverse the array\n• join() to join the array back to a string\n\nThis will also solve the problem with the sign of the number. When we convert the number to a string, the minus sign becomes the part of the string and on reversal goes to end. For example, -123 becomes 321-.\n\nThis is all we need to solve the problem, once we submit it, these are the stats.\n\n``````Status: Accepted\nRuntime: 212ms\nMemory: 46MB``````\n\n## Time and space complexity\n\n### Time complexity\n\nWe use a bunch of methods, all with linear time complexity, but they are chained as opposed to nested, so the runtime will be dependent on the number of digits in the input. We can say O(len X)\n\n### Space complexity\n\nWe have a number as input, not using any other temporary variable to store the result, so space complexity is constant, O(1)\n\n# Number based reversal\n\nIn this method, we will pick the digits of the number one by one and reverse the number without converting it to string\n\nThe idea is very simple\n\n• check if the number is less than zero\n• if the number is less than zero, return false\n• initialize a variable temp with X ( because we lose the initial value of X in the logic)\n• initialize the reverse variable with 0\n• loop over the number until it's less than or equal to zero (at one point it will be)\n• now, multiply the reversed variable with 10 and add the last digit of the number to it\n• remove the last digit of X\n• when the loop ends, we will have our reversed number\n• if the reversed number is equal to temp ( initial number ), return true\n• else, false\n\nLet's see a simple implementation of the above logic.\n\n``````var isPalindrome = function(x) {\nconst isNegative = x< 0 ? true : false;\n\nif (isNegative){\nreturn false;\n}\n\nconst temp = x;\nlet reversed = 0;\n\nwhile(x>0){\nreversed = (reversed * 10) + (x%10);\nx = parseInt(x/10);\n}\n\nreturn reversed == temp;\n};``````\n\nSo as discussed above, first we determine if the number is negative, and return false.\n\nYou can read more about number based reversal method in my previous post.\n\nNext, the logic is pretty straight forward, check if the reversed number is equal to temp, and return the result.\n\nHere are the stats one we run this code\n\n``````Status: Accepted\nRuntime: 192ms\nMemory: 45.2MB``````\n\n## Time and Space complexity\n\nUnfortunately, we didn't improve the time complexity. It's O(len X) ( notice the loop runs len X times). Same goes for space, O(1).\n\n# Two pointer method\n\nIn this solution, we will take care of some of the simple cases before writing out logic. Once those are taken care of, we will follow the two-pointer method to check if the number is a palindrome.\n\nThe idea is, we will take one digit from the start, and another from the last. Check if both are equal if not, the number is not a palindrome.\n\nLet's see a simple implementation of the above logic.\n\n``````var isPalindrome = function (x) {\n\nif (x < 0) {\nreturn false;\n}\n\nif (x < 10) {\nreturn true;\n}\n\nif (x % 10 === 0 && x !== 0) {\nreturn false;\n}\n\nconst str = String(x);\nlet i = 0, j = str.length - 1;\n\nwhile (i < j) {\nif (str[i] !== str[j]) {\nreturn false;\n}\n\ni++;\nj--;\n}\n\nreturn true;\n};``````\n\nFirst, we took care of the following cases\n\n• if X is negative ( not a palindrome )\n• if X is less than ten ( always a palindrome )\n• if X has 0 at its last digit and X is not 0 itself ( not a palindrome ) e.g. 10, 130 whose reverse will be 01, 031 respectively\n\nNext, the logic is straight forward\n\n• convert the number to a string\n• take two pointers, at the start and end of the string\n• if the digits at both pointers are different, it's not a palindrome\n• we increment starting pointer and decrement the end pointer iteratively\n• if the loop exits, then it was a palindrome\n\nThis is all we need to solve the problem, once we submit it, these are the stats.\n\n``````Status: Accepted\nRuntime: 188ms\nMemory: 45.8MB``````\n\n## Time and space complexity\n\n### Time complexity\n\nWe see a bit of improvement in run time. We are running logic only for positive numbers greater than 9. Also, in the loop, we are taking two steps instead of 1. However, asymptotically the running time complexity is still O(len x)",
null,
"### Space complexity\n\nWe have a number as input, using a couple of more temporary variables, so space complexity is constant, O(1)\n\n# Summary\n\nSo, we solved the palindrome number problem using 3 methods, although the complexity is the same, it's good to know these different approaches. In an interview, you may be asked to solve using two pinter method, who knows.\n\nI hope you enjoyed solving this question. This is it for this one, complete source code for this post can be found on my Github Repo. Will see you in the next one."
] | [
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAALCAYAAAB/Ca1DAAAACXBIWXMAAA7EAAAOxAGVKw4bAAACjUlEQVQoz3WS+UtUURTH582bzWXGZRTKncqkoU1t3NDKTHFJRS3SzBrNtElNRye10cYFUjMoTcxCSjRIUaOifrG0RKkojIgIAv2pf+TTe44jSfXgyz33Xc7nnnO+V2EKC6C1/Ai2s8e4YT1OZd4+VEoFUVFRtHe08H5ilHvV5fR2tvP9x09mxsYJNvpTccXBi8Vv2JqdhIWGMDc/z9rqKgrpIyTQh0RTOFn7IzhfdIKEhHgSzAd49mqSmclH5KUlMXUtmYWxIfrqm0g5ZKb3zgMWv67xePY1wSGhpKan4+zqdAG9NGoiDVqKDieyvLTA0vJbMtKSabBbyS7I5mRpGd1dTtpsVsrKysnMK6Kw1MLdh5NcbmpDq/OQQW4p8NVpMQoK8tKPsvLlI59XPpB1PJlwTy07/PxIMZsYd2ZQV13Khdomzl2qp/SiFYu1hqTUNNQaDSqVCrVa7QJqRREPQcDfR4/DYcd+tQ7T9kBi9HrivXzIjfSjtSiGipISbHUNNNqb6bvZy/BAP7aaKgL8DGikLmWtA5VKAUECyrGnToNWKx1K8S6plVhvbw56GYgLiiAnNhZLfja1FacZHXDy7vko0yMdRO8OlnIFRFG5ARRkoML9g+LMRJbGnLwZaeXl7UbmR1qYG7LzpLua5bF2fs0N8mm2j4XpXp4ON2KOClrP2wQKG1BR6QLmpEQz1V+Ho6qAPtsZeuqLpaeVS0/NKe47LEx0VnLLVsjg9RKcVekE+XpsBbqr/HP9n+RRKCXtDTFgyTaRnxiBXiNKY1O6x/Z3kgxVSUbJkquWV7VKRCM5KSpdnRi9dMTtNLJnm2SIdL7l2fxL8hjEjVuFzb0rXq9UggRK5hlUrovdpv4GJ1dvoNlzgHQAAAAASUVORK5CYII=",
null,
"data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAR+UlEQVR4Ae2bBXQcx7KGv+qeRa1Ytswch9kQZrzMzMzMzMzMzMzMN8ycG07MJLBgaaa7nna3z9EcPVlBw4PK+U/PQqz6/qrunZ6d5f90/H/8f/x/CLs53gHmKU9hWaQciuFgEVYIzDFWulCKAAhl73RYYYsqt+K5PhGu/c53uB3wu9WAd+weaLnjqaxR4WEZy1kmIwfZjCmarMFkBTECViAdTlGv+HpDHhf7so/1htjxJ1F+tfTbXALoPt0Blz2UYm8XjzOWZ2Xz5nhbMMYULSbXAFfEeMQqYgVEIB2qqGtIUG/wdcHXPL7scBXv61V/vnd8bWCYHwHlB8yAiaTvd4yOEi1ZyFOijLw6WzCH2PYI02awecVkPJLLYgpd2I550DYbKc5BsiXEZAFQX0frY2h5C4xvw41swleG0VodHxtcVfDjHjeaUK/465JYP3rner4DJPfbgL+fwv2KefNYk8/wvnzJnh51RkQdFsk5TD4i6lqImXMkpv9opLQCyfdAVASxIKQ6WkABdZCU0eogOnYrfuvl+C1Xkgyvx1cTtGZJRhzJzoTqmPtrNeZNwCX3y4CbnsR9iv2/h9z+ZF6VL8o7s92ZNtuVISp6TDEi6j8Uu/gszKw1UOgJcAngQQO0AMo0xwIYkAgEqAzit1+Cu+tPJFuvxZcTkrLBDcfUh+Lxalnfvuy7fAzQPbUGyGWPpWNWgc/k2+1TMn1Zok6LLTii2SuxKx6NmX0MRHnQOqhDxHBfQtWHbslCUsVvuwh3609Jtt2Mq1iSnY54R53qqPvO9govWfVjRgDdnQbIFQ9jbk8P3y12Rqdk+rNE7UJUmhiXn41Z8mgk1wW+igCI3Le/pkx1AgUwebQ2jL9zwoTb/kgyVicZVeKtdco7k38MDvLko37FZkAfeAMCfF8Pv8j3ZlbnZuWw7RB1dxMd+ERM/4mgMYKfBBcemNC0EQYkg9/6b5Ibv08yNIQbhdr2GtWB+NIdgzzi3phg7yn8H86me/FsfjwBf1xuTg7bKWR6+8gc+lxM3xGIH0c0ARxo0hJhfKCEQzRBtIqUFmM6FyPjN6FUMFmLeJ2fxa8+u59ffuc2qtyDiO5hl5gD5vCpfHd0Um52DtthyHR3ER34VEzHEoiHQEx6Mds9HZB6LK6CdCwhOuipcOM3QYbJ+Rx4PekATT4FPB3wgN73DgjwNz+FV5W67Kuzc3IT4BGZ9hzRikcgXfs3Kw8+XfE9Ixz4CuR6MPl2ZOxmMApGMHV/2PNW6tinr+Gi+74GBPgLHsvRS7rlb/n5+bbsrBxRB0QLT8DMPwNIQAQBEPZsKCiAKhDhN/6FZP15JCNQb6wHG6vjdw7pacf9mMtn6gQ7E/xT+8k/4hC+WZyV3a8Bn+mw2J652HmngVHQBCEBdXu2+kGicTiOkfwsqG1A3DiIIE6z+djtP7CJH14zjru3U8AA9keP4EkdXfZl2f4cUVcO22axc9YixdngawhTFj3dSyb4GIxFJkT5LsCAKlL1i1fP1ds+cy3XAhp0twYIYN+4hs7j58nX8v3ZvkxvjqhksZ2zML2HAw4hTld+r0pomSBRAWpbWqfTKqhTGHcH5DN897yN1KczINpV9Z+4gkfnOuz+UUeEKUwoZ5DSfBBFfBlE2JdCVFGJmjma8kAz50buuY5k/yeuSB79/kv4ZjDAzWSAALK2h1x7Xp4ddVhMW4TNGySXQ/KdiK8AHoR9KxSEGBo55nLYuIZva23O2kfcs9f26A8uHsQFRp3JAPuBkziqUJRVthRh8haTNUi2gJgIXAVE2SdDBTFRM1eTjZu5NxgKxXjVB07So079BRcCfiYDDGD7S+ZhmZJtXsywWYtEBqJcmO8xKPtwSDNXiUwzd1e0NFj6SzwM/CWAA/x0BghglnWTKeT1VGleybFIxiCRYGwEvgb43X8B6n45bJq5+kgaubcYipZCPjm1wXb7EAkggE5rwNuPZlkuL/vbgkWyFtMwwFrAtQxQPxPbNJAKqoAPY+oxmv7TIAIIyJTHpENn9kgM4BDbyF2bDA2WBtPbj9ZlT/8LN85kgF3YzqFRzhRMzmAiARsS0hr4Mrs84VdAPagLisMYhIKmoXUXDgqICaNNKZM6NrvoFgHVVq7Syr3B0GBpMC1s94cCNwOyyynQVTAH2ZwgGUGsQYw01AJKRsHkp1TXgY8hDYwGyQxTQGY4x03CYZyClJbShphgCkKI0KVxyDswZIQGU4MN/M93ZYBpKBfpcokMYhsSMKEVAZJxkHqogE4BToPJ7rpYnfqbdXApQxBQP2matHIXG0yIDLkoWU7gnGqABNmMMJvQ+iIBHpPq3HoqTyEc7PnvcjRlCMmU14PEtxisQCQ02cACEqRTp0DGRnRJmPfY1IxXRUQIzwTH9+B3Tjrz57+iCIJCevoFXKHB1GADMjOuAUbIiwmY2pKohvUlvYBpmnNvbIfDkDqeZr8cUkcMNNgAM50BEkYjAoqCTjqg6kFTHa+66/XMCNhJ53EKiYLei2mv4XGYiqiCoyUfwNLgOs0/ksofWkwhfzPdFNAwGuep4gjQDlVB1KbzCvyhK4K7ZAUM1Mc9Gzc6Nm5z1BNl6fyIpUsicKAzbiFS7MHAO+5MuGNjQjYS5s+2zO+3ZIsGPEhd03yoSmq6EvJ0LXkPDppsYAj2TLsXSDwj6j3qkgkJaDBDACR1MhPA8+Bryo03JVx8dcy1N8ds3eFIYiXxgMBDzyjw1EcViGSGTpAwCCQI3/5phV//pQIKkYEoI/T3WQ5dmWHtYRkOXBZh8oJUwSnBPGlo0pUmeBxYPA22mbbDClBL2NF0zDtUDeoNgp9sN2nZbjLgVfnn+TF/OC9m3QaHBTqKwoJOwRohjmHLoOeXf6pw7unCnD6BhJkjgh3btfn/lAzM6TVkMuA8jI06/vbvhD//u8KiBZZzTshw8qoMVqTBGQokBBrUK+pdYPFNtl3tBjWInXW5a16sqFPw4WKnF9SAAKhiI2VoRPnU9+tcd4Oju01Y3GsoZAWjkNTBOch1whEHRxx5QkTffA9VDxHTh58c++YZXvrmPFeelzC2QUlGISvQ3iH0d0Klrgxu93zuO1X+dUXMy56YpbujYYIEPkKjthg0UTTWJhsou5oCCiR3jfjb9q9bvFO89xg1CJo6XfckwOd/VOeGG5Xl/ZZipuWTq0G+V1h+kLBylWHpSujtBKoOts8AD2DCqBAljtOPMZx+mmVgJ9xxM9x8mWf9Dcr4gFIwwoJuoafERA5+Ipcar392FmPBewM+eNnk9y2WOjTYALer7bAHkvM3c/vp+/kqsc+rM6j3gCAGEMVkPOs2Kdf+R5nTLkgM5GHBIXDI8cL+E2NXyUPFwaiHkVAJA8g9/Jz3wDZAhN489B5sWLVKGB4TbrpOuO58ZeONIGUaOTRz2bDNsXieQAwgYdENnZx4XM1XG2xADPhdGRB/63o2vGo1t+VrerAmoYUwCCBWQZX2Pli0wpBTWHOqcMSxyoJ+bVV6GNjhQ8cEWQXuhQECWMAL1ICyB4WunGHtYbB2jWHDVuGqC4VL/g55EUq9oeSS+pj0Ho0VX1PGKtzWYJvJAAVq4zHx1hG5qLdhQAz4IAABr9Bbgre9J4KM0mYdDHq4QwEN0AJGwzEggVz0np0H+PDAAJp6rqqwEcCzoCQseLDh1HMsxELbgMdXAAlKX6+tKA2m8bhJVAN0VwbEQOUv6/15KxeaZ0ZVNa4OJq+gMrktqCttA0kLKA6uGwVLOE6PaRFCZ3YhVDEoFamTqxHQYU9bVgHB1wWMIApeAae4BLSqJFX1DSagEhiVEGaatbj8sUv5z+BOvc6VPVr3+FjBEyAEjExuyqLUGVuofhiD+O9GGJlOU97DNP9OGK20lAm78QQI299QfXyiaM3jKp4GS4MJKAdGZjKgMhJTuWqT/kLHfNPBMGtQD4gghrDXngJpBCxTn5sEssGwAlACikBbOM4CZjr4lGk2NdqQQ9TKQwRAwKe+q6kpOuppsDSYoCk/02VxDW8aefN5/HPtQn9nT6dfYgqCKYBkDVhNQ02OpDaahPeIgqRMyAFOue5q4YorYcsWwUawZLGyZpWycDmQhIUvHSaAIWBTxxpEU6ErpdW1FY8b9ewc8nc2WICRwKZ3982QAnaoRnFtP5XlHXKyKRjCPX7BcSAIE5R6nKpWENAG626HT3zS8uMfGm690TCwRdi8Xrj6asPf/iFs3y7sfyDkSoADRILCsUk9Z8KItP7zAg34GrgxJdnpSbY7/nGL/9RXruU6YCswfk+/G/RA7i/rGXriCg5tKzBX8oLJgoT5HkwImsaAtBFtwhUXCR98n2Vgo2FRrzCvxzC7Q+hrqCRkjXDttYarrobDDodSrxBM2LUIoxc0ETQGLXuSUU+8w7F1k7/ikb/ky7WEbcAOILmnBjjA1BIKGcO6tbP1bJs1EVnBRCBWICKYQUrTLHRFuP4K+OgHDCURFvUJ7TkhZybXz4yFQlboKAlbNgtXXAVrjoFCqQWXqn6q8kEaWj8WfNkTjyrJoKeyNal+8mJ9xz/XsS5Uf/Te3iDhgOiiTfgT55jaopKuMRmhIYmAYII0QVPwhlAZIAOD24UPv1egInTlBQNEhdY+odAL2RJ4A3Ed4irkMrB1G6zfJBx3giBTL5sTpCn4GmhF8aOKG/Ik2xwX3MIXX/E3PQ/YQqj+vb1Fpg4MAsVH/Nz/8tJ22W9FJjmLTJMaY1rdgAQTPGBTG/KwYH332567boGD9odDjxMOOkaYt1Tp7FSsBQWSGAZ2COtvhmvOg/Kl8M+/KqtXw2nnGigDyiS8b0kT0Dr4Ck34eNhT355w23r/p0f8XH8ZwAeB+n29RSYBDJD593pue+gCVhYi5hAJxoBEk5+9Iq0RCK5ArSz85s+N0+WIp73Wcvw5MHeOpyRKVANbU2xdyTjo7FAWroRVpxmWHm7ZOSpU67D6aIFYABPAw2JXB+qClsHv9MRDE9rm2LbBX/X4X+rHd1TYEnYUQ4C/P5cuC8AsYOEJC1n01QfLW2fPtwdm5loyPQbTIZiCQXJARpEIMNqUGqgVLfl2hVEHFQXV9LW1dBqTr7UbyFmqw0q+6kFJVx1iQWvgK75V+UFPvMWxbaO78dm/1Xeft551wHpgO1C5v7fJJcFB1o0gV23lylP6dFmb0bmYVlKi6YsRINI6FiDyCiMKNQBJCcI8Tm00wuMKMKZEHnACzoSqg8atquuYJ9mpJI3Kb07YtM5f+dzf6QfO28DG1Lwff6DuE6wDCuiECfrn27j8tH66OkSXSyiqeAUFUUEDmHgCJGnolFlBmhICvgVMIq1qN1QXqIIfV3SkBe8GWpW/9U7/58f+Uj81UZxNKfidAA+UAQA1wAM6WEO+dDVXr+mTnf3WHWoSoiZ0ACaYAQRTUnASxvSxTjEiaSrc/5Ra5cfANeCHtDnfy1uS6r9v4stn/kC/O1hpQm8NbT8M8EAbEEzABSPMj27UDXGNy1YWtL/oda6GV5qjC4Y4wAvqUoAeUMJxeJwALoDXQ7tXFS3TAh/1uMZiN+BJtjo2rfdXfOIC/fCr/6aXAAMBPlQe3a13iwPtQG9Qd8HS/tlzOO6UpfKorj6z1HYabLvBFAUpCDYLZAXJABYkvUkCUMArGgxr7doVVw+VLyturGXA8A5/xz/u0J+9+A9cUHGMhlV+IGgU0D31k5kS0Al0A11Ax/wSpfecwqpjF8rZvd1yaFQy0jDBFg2SEyQLkpk8ezQGAHwAR8MOrt4aXdk34ZMxrwODeu2FG/SPb/kHl20cYwwYCa0+FKo+tjd+M5QL3dARTGgPxkSvWMXSB+9nVi3t1lWldpZkCyZncgLhTNJE6Q4I+/cECJew6hVfGxvlzjuG5LLf3uIv+8Rl3AEkAXQ0wI+E49re/NGUAQoBPijs9CFnwTx0JbNPWsj8/XvNwr6izitm6MlFlDKWHEDsqNUSxsoxgzvKsummAb/+X+vZ+Oub2ebAB8BxoByAg6gAfl/51VgUjGgLKobHeSAbZIM8YUyZ6AhjUD2oClQC/HhQBUj21R9ORkAOyAcVAnzaBBPGdDjAp+GDKkA1qAYk/1N+OSpAZoqiMKb3jwCaMiAGkjCmpbsryT0VARoTNN3tXz4omLH7478Ab2Xh8MFW9Y4AAAAASUVORK5CYII=",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86166537,"math_prob":0.9865707,"size":6283,"snap":"2020-10-2020-16","text_gpt3_token_len":1505,"char_repetition_ratio":0.14636089,"word_repetition_ratio":0.099915326,"special_character_ratio":0.2519497,"punctuation_ratio":0.11067194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9979124,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T17:52:34Z\",\"WARC-Record-ID\":\"<urn:uuid:46b53c18-3a76-4ad5-902a-281e666fde74>\",\"Content-Length\":\"71853\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa713836-0fcc-4ccd-b2f9-a24a947a0d53>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f8d94aa-21b6-42ca-aee9-f776eabe06b3>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://rishabh1403.com/posts/coding/leetcode/2019/12/leetcode-solution-of-palindrome-number-in-javascript/\",\"WARC-Payload-Digest\":\"sha1:G52XSDF3XL5WBTZLI6SZIKNR52PLO5XH\",\"WARC-Block-Digest\":\"sha1:DOZGRVBGCYOROAORAQ5LXOLBJ5XJLEEU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371861991.79_warc_CC-MAIN-20200409154025-20200409184525-00419.warc.gz\"}"} |
https://www.pearson.fr/FR/book/?GCOI=27440107339770 | [
"# Fundamentals of Signals and Systems Using the Web and MATLAB: Pearson New International Edition\n\n## 3e édition\n\n#### Spécifications\n\nÉditeur\nPearson Education\nÉdition\n3\nAuteur\nEdward W. Kamen, Bonnie S Heck,\nLangue\nanglais\nTEC067000 TECHNOLOGY & ENGINEERING / Signals & Signal Processing\nBIC subject category (UK)\nUYS Signal processing\nCode publique Onix\n05 Enseignement supérieur\nDate de première publication du titre\n01 novembre 2013\nSubject Scheme Identifier Code\nClassification thématique Thema: Traitement numérique du signal (DSP)\n\n#### VitalSource eBook\n\nDate de publication\n01 novembre 2013\nISBN-13\n9781292038407\nAmpleur\nNombre de pages de contenu principal : 648\nCode interne\n1292038403\nProtection technique e-livre\nDRM\n\n#### Sommaire\n\nPreface\n\n1 FUNDAMENTAL CONCEPTS\n\n1.1 Continuous-Time Signals\n\n1.2 Discrete-Time Signals\n\n1.3 Systems\n\n1.4 Examples of Systems\n\n1.5 Basic System Properties\n\n1.6 Chapter Summary\n\nProblems\n\n2 TIME-DOMAIN MODELS OF SYSTEMS\n\n2.1 Input/Output Representation of Discrete-Time Systems\n\n2.2 Convolution of Discrete-Time Signals\n\n2.3 Difference Equation Models\n\n2.4 Differential Equation Models\n\n2.5 Solution of Differential Equations\n\n2.6 Convolution Representation of Continuous-Time Systems\n\n2.7 Chapter Summary\n\nProblems\n\n3 THE FOURIER SERIES AND FOURIER TRANSFORM\n\n3.1 Representation of Signals in Terms of Frequency Components\n\n3.2 Trigonometric Fourier Series\n\n3.3 Complex Exponential Series\n\n3.4 Fourier Transform\n\n3.5 Spectral Content of Common Signals\n\n3.6 Properties of the Fourier Transform\n\n3.7 Generalized Fourier Transform\n\n3.8 Application to Signal Modulation and Demodulation\n\n3.9 Chapter Summary\n\nProblems\n\n4 FOURIER ANALYSIS OF DISCRETE-TIME SIGNALS\n\n4.1 Discrete-Time Fourier Transform\n\n4.2 Discrete Fourier Transform\n\n4.3 DFT of Truncated Signals\n\n4.4 FFT Algorithm\n\n4.5 Application to Data Analysis\n\n4.6 Chapter Summary\n\nProblems\n\n5 FOURIER ANALYSIS OF SYSTEMS\n\n5.1 Fourier Analysis of Continuous-Time Systems\n\n5.2 Response to Periodic and Nonperiodic Inputs\n\n5.3 Analysis of Ideal Filters\n\n5.4 Sampling\n\n5.5 Fourier Analysis of Discrete-Time Systems\n\n5.6 Application to Lowpass Digital Filtering\n\n5.7 Chapter Summary\n\nProblems\n\n6 THE LAPLACE TRANSFORM AND THE TRANSFER FUNCTION REPRESENTATION\n\n6.1 Laplace Transform of a Signal\n\n6.2 Properties of the Laplace Transform\n\n6.3 Computation of the Inverse Laplace Transform\n\n6.4 Transform of the Input/Output Differential Equation\n\n6.5 Transform of the Input/Output Convolution Integral\n\n6.6 Direct Construction of the Transfer Function\n\n6.7 Chapter Summary\n\nProblems\n\n7 THE z-TRANSFORM AND DISCRETE-TIME SYSTEMS\n\n7.1 z-Transform of a Discrete-Time Signal\n\n7.2 Properties of the z-Transform\n\n7.3 Computation of the Inverse z-Transform\n\n7.4 Transfer Function Representation\n\n7.5 System Analysis Using the Transfer Function Representation\n\n7.6 Chapter Summary\n\nProblems\n\n8 ANALYSIS OF CONTINUOUS-TIME SYSTEMS USING THE TRANSFER FUNCTION REPRESENTATION\n\n8.1 Stability and the Impulse Response\n\n8.2 Routh—Hurwitz Stability Test\n\n8.3 Analysis of the Step Response\n\n8.4 Response to Sinusoids and Arbitrary Inputs\n\n8.5 Frequency Response Function\n\n8.6 Causal Filters\n\n8.7 Chapter Summary\n\nProblems\n\n9 APPLICATION TO CONTROL\n\n9.1 Introduction to Control\n\n9.2 Tracking Control\n\n9.3 Root Locus\n\n9.4 Application to Control System Design\n\n9.5 Chapter Summary\n\nProblems\n\n10 DESIGN OF DIGITAL FILTERS AND CONTROLLERS\n\n10.1 Discretization\n\n10.2 Design of IIR Filters\n\n10.3 Design of IIR Filters Using MATLAB\n\n10.4 Design of FIR Filters\n\n10.5 Design of Digital Controllers\n\n10.6 Chapter Summary\n\nProblems\n\n11 STATE REPRESENTATION\n\n11.1 State Model\n\n11.2 Construction of State Models\n\n11.3 Solution of State equations\n\n11.4 Discrete-Time Systems\n\n11.5 Equivalent State Representations\n\n11.6 Discretization of State Model\n\n11.7 Chapter Summary\n\nProblems\n\nAPPENDIX B BRIEF REVIEW OF MATRICES\n\nINDEX"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5743473,"math_prob":0.5883165,"size":3039,"snap":"2022-27-2022-33","text_gpt3_token_len":804,"char_repetition_ratio":0.15420099,"word_repetition_ratio":0.0,"special_character_ratio":0.22639026,"punctuation_ratio":0.12372881,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.972088,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T01:20:07Z\",\"WARC-Record-ID\":\"<urn:uuid:fc5f475f-6ac7-48e5-a8f0-e196326cbfcc>\",\"Content-Length\":\"63810\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d17c2dd1-44c1-4255-b43b-c12d9fb139c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e23b1f8-b533-4063-89cb-6d44753f46e5>\",\"WARC-IP-Address\":\"78.129.217.19\",\"WARC-Target-URI\":\"https://www.pearson.fr/FR/book/?GCOI=27440107339770\",\"WARC-Payload-Digest\":\"sha1:4BOYOQKDHOJABY5SYXFIY4OUSRNGAIAU\",\"WARC-Block-Digest\":\"sha1:VQ4H2QJVBAKA4EWPOBFCUHJFVVJWOHOK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036363.5_warc_CC-MAIN-20220626010644-20220626040644-00297.warc.gz\"}"} |
https://www.samanthadereviziis.com/problem-solving-sale-price/ | [
"Problem solving sale price\nCheck the sale price. Easily calculate the sale price most useful math skills. There are many different ways to find the tax is the sale price given cost of 80 is 7.5, students solve a problem. Discount of information do a middle school math. Discount. Example 1 / whole and thousands. Calculate percentage is 90 days. With https://www.samanthadereviziis.com/ questions for the selling price of problem carefully to a math teacher may be. Check the sale price. To solve this coupon gives a 20% and discount make sure it can be sold by 20% discount. Shop for a coupon. Introduction to calculate the problem from the sales tax rate is sold. What would the original price of an article was from the sale price and write back if you can find out of the original prices. Demonstrates how to problem. Each side by computing a 30%. Do you can find the sale with your ebay feed. Also an additive property. I can be sold. Explain to. Vst permutations and total cost price on sale http://craftygrrrl.ca/do-you-always-do-your-homework/ in the sale price. Night timers' chief engineer anticipates that she sells. Step 4: how to solve this lesson. You've read the sale price for 30%. Step 4 https://www.samanthadereviziis.com/ 8 recommends. Find the sales tax, see the solution. Divide both sides of the price. Free practice questions for ssc cgl exam: topic profit are very common in ex. Easily overcome your. Follow the ideal sales tax rate is sold. Free problems saving. Let cp1, the problem-solving process and markdowns. They also, so you'll. Sales prices using a dvd that include gst. Since 25% Read Full Report Given original price tag of discount given a profit equation and discount. On a procedure: in ex. List the problem solving for your company has a video store for ssc cgl exam: given original amount 'x'. Sales price of the purpose of a price x. Vst permutations and listened to problem, calculating a percentage discount. Introduction to calculate percentage of a toy, we can solve for 15% off given by reducing the amulets category. Apply this twice."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9136117,"math_prob":0.92033494,"size":2055,"snap":"2019-51-2020-05","text_gpt3_token_len":474,"char_repetition_ratio":0.13846904,"word_repetition_ratio":0.005830904,"special_character_ratio":0.22189781,"punctuation_ratio":0.14014252,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9836256,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T13:12:24Z\",\"WARC-Record-ID\":\"<urn:uuid:cf0b709e-51e1-40e2-97d2-d3e23a7ec4bb>\",\"Content-Length\":\"30828\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5148dc6b-b7a5-4454-a8d8-e3bca3069e9b>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a0da260-3d3d-4a0f-8cca-f36fee1c0ce6>\",\"WARC-IP-Address\":\"185.81.6.118\",\"WARC-Target-URI\":\"https://www.samanthadereviziis.com/problem-solving-sale-price/\",\"WARC-Payload-Digest\":\"sha1:QGO5AEG54FQAOSUC2CPOHOFT4TPM7DEO\",\"WARC-Block-Digest\":\"sha1:AUUYUNNRGP7HH3Z6VI5HKYAKXTMQN35H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251688806.91_warc_CC-MAIN-20200126104828-20200126134828-00089.warc.gz\"}"} |
https://mathoverflow.net/questions/125688/when-does-a-modular-form-satisfy-a-differential-equation-with-rational-coefficie | [
"# When does a modular form satisfy a differential equation with rational coefficients?\n\nGiven a modular form $f$ of weight $k$ for a congruence subgroup $\\Gamma$, and a modular function $t$ with $t(i\\infty)=0$, we can form a function $F$ such that $F(t(z))=f(z)$ (at least locally), and we know that this $F$ must now satisfy a linear ordinary differential equation $$P_{k+1}(T)F^{(k+1)} + P_{k}(T)F^{(k)} + ... + P_{0}(T)F = 0$$\n\nWhere $F^{(i)}$ is the i-th derivative, and the $P_i$ are algebraic functions of $T$, and are rational functions of $T$ if $t$ is a Hauptmodul for $X(\\Gamma)$.\n\nMy question is the following:\n\ngiven a modular form $f$, what are necessary and sufficient conditions for the existence of a modular function $t$ as above such that the $P_i(T)$ are rational functions?\n\nFor example, the easiest sufficient condition is that $X(\\Gamma)$ has genus 0, by letting $t$ be a Hauptmodul. But, this is not necessary, as the next condition will show.\n\nAnother sufficient condition is that $f$ is a rational weight 2 eigenform. I can show this using Shimura's construction* of an associated elliptic curve, and a computation of a logarithm for the formal group in some coordinates (*any choice in the isogeny class will work).\n\nTrying to generalise, I have thought of the following: if $f$ is associated to a motive $h^i(V)$ of a variety $V$, with a pro-representable Artin-Mazur formal group $\\Phi^i(V)$ of dimension 1, then we can construct formal group law a-la Stienstra style, and get a logarithm using the coefficients of powers of a certain polynomial. This makes the logarithm satisfy a differential equation with rational functions as coefficients. Since the dimension is 1, the isomorphism back to \"modular coordinates\" will be a single modular function $t$, and this answers the question positively.\n\nThis was the original motivation for the question - a positive answer is weaker, but maybe suggests the existence of associated varieties to rational eigenforms.\n\nPutting non-eigenforms aside, since I'm not interested as much in them, we are left with non-rational eigenforms. We can try to perform the same Stienstra construction, but this time we get that the galois orbit of $f$ is associated to a \"formal group law\" of a motive with dimension greater than one. This will make for an interesting recurrence for the vector of the galois orbit, but not necessarily for each form individually, as the isomorphism of formal groups laws (between Stienstra's and those with the modular forms as logarithm) might scramble them together. Maybe not, and this solves might the question. I realise this last paragraph might be difficult to understand, for the wording is clumsy, and the mathematical notions are even worse. If you're really interested in this, I'd be happy to elaborate.\n\n• I also asked this in math.stackexchange: math.stackexchange.com/questions/338453/… Mar 27 '13 at 2:23\n• Here is a reference : F. Martin, E. Royer, Formes modulaires et périodes smf4.emath.fr/Publications/SeminairesCongres/2005/12/pdf/… (see Remarque 140). It seems the result you mention in the genus 0 case also works in general, the only thing is that $F$ is a multivalued function. Mar 27 '13 at 8:12\n• @Francois: thanks for the reference. My french is a bit a rusty (non-existent), but I think the remark says what I wrote above: that the coefficients are in general algebraic. Mar 27 '13 at 8:43\n• Perhaps this could be helpful. mmrc.iss.ac.cn/pub/mm25.pdf/7.pdf Mar 30 '13 at 22:47\n• @robot: hey, thanks for the link. I've seen this paper before, and unfortunately (for me) it suffers from the same problem every paper I've seen suffers from: it begins with any given modular function $t$, instead of constructing an interesting one. Mar 31 '13 at 0:06\n\nLet $K$ be the algebraic closure of the differential field $\\mathbb{C}(T)$.\nLet $\\partial$ denote differentiation w.r.t. $T$. Now $\\mathbb{C}(T)[\\partial] \\subseteq K[\\partial]$ are rings of differential operators. Your function $F$ is a solution of $L(F)=0$ where $L \\in K[\\partial]$ is the differential operator $L = P_{k+1} \\partial^{k+1} + \\cdots + P_0 \\partial^0$. The rings $\\mathbb{C}(T)[\\partial]$ and $K[\\partial]$ (multiplication = composition) satisfy all properties of a Euclidean domain except commutativity. In particular, one can define an LCLM (least common left multiple) which behaves just like an LCM in Euclidean domains.\nLet $L_1,\\ldots,L_d$ be the conjugates of $L$ over $\\mathbb{C}(T)$, obtained by applying ${\\rm Gal}(K/\\mathbb{C}(T))$ to $P_{k+1},\\ldots,P_0$. Now let $M = {\\rm LCLM}(L_1,\\ldots,L_d)$. Then $M \\in \\mathbb{C}(T)[\\partial]$ and $M$ is right-divisible by $L$. In particular $M(F)=0$.\nIn summary: Any function $F$ that satisfies a linear differential operator $L$ with algebraic-function coefficients will also satisfy a linear differential operator $M$ with rational-function coefficients. In Maple you can find $M$ with the command DEtools[LCLM](L, and conjugates);"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9162139,"math_prob":0.9979929,"size":2710,"snap":"2021-43-2021-49","text_gpt3_token_len":661,"char_repetition_ratio":0.11640798,"word_repetition_ratio":0.0045871558,"special_character_ratio":0.23653136,"punctuation_ratio":0.093023255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988556,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T00:14:00Z\",\"WARC-Record-ID\":\"<urn:uuid:a93e6d1e-baae-4f89-a544-3e6f00e93f3e>\",\"Content-Length\":\"115918\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b80851f-274c-46bf-95ed-7f9e629400b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef94c87b-36a8-4329-9fd6-3da8cd3ef046>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/125688/when-does-a-modular-form-satisfy-a-differential-equation-with-rational-coefficie\",\"WARC-Payload-Digest\":\"sha1:JNQON3Q2IJWLUCGHF6YJSBD77DNMOZJB\",\"WARC-Block-Digest\":\"sha1:CNZQ6YTRR5YZDUODMETAGPLIHUHJKCMV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358673.74_warc_CC-MAIN-20211128224316-20211129014316-00286.warc.gz\"}"} |
https://www.geeksforgeeks.org/count-number-of-ways-to-divide-a-number-in-4-parts/?ref=rp | [
"# Count number of ways to divide a number in 4 parts\n\n• Difficulty Level : Medium\n• Last Updated : 12 Aug, 2021\n\nGiven a positive integer n, find number of ways to divide n in four parts or represent n as sum of four positive integers. Here n varies from 0 to 5000.\nExamples :\n\n```Input: n = 5\nOutput: 1\nThere is only one way (1, 1, 1, 2)\n\nInput: n = 6\nOutput: 2\nThere are two ways (1, 1, 1, 3) and\n(1, 1, 2, 2)\n\nInput: n = 8\nOutput: 5\nThere are five ways (2, 2, 2, 2), (1, 1, 1, 5),\n(1, 1, 3, 3), (1, 1, 2, 4) and (1, 2, 2, 3)```\n\nMethod 1 (Simple Solution) Run four nested loops to generate all possible different quadruplet. Below is C++ implementation of the simple algorithm.\nBelow is the implementation of above approach:\n\n## C++\n\n `// A Simple C++ program to count number of ways to``// represent a number n as sum of four.``#include``using` `namespace` `std;` `// Returns count of ways``int` `countWays(``int` `n)``{`` ``int` `counter = 0; ``// Initialize result` ` ``// Generate all possible quadruplet and increment`` ``// counter when sum of a quadruplet is equal to n`` ``for` `(``int` `i = 1; i < n; i++)`` ``for` `(``int` `j = i; j < n; j++)`` ``for` `(``int` `k = j; k < n; k++)`` ``for` `(``int` `l = k; l < n; l++)`` ``if` `(i + j + k + l == n)`` ``counter++;`` ``return` `counter;``}` `// Driver program``int` `main()``{`` ``int` `n = 8;`` ``cout << countWays(n);`` ``return` `0;``}`\n\n## Java\n\n `// A Simple Java program to count number of ways to``// represent a number n as sum of four.` `import` `java.io.*;` `class` `GFG {`` ` `// Returns count of ways``static` `int` `countWays(``int` `n)``{`` ``int` `counter = ``0``; ``// Initialize result` ` ``// Generate all possible quadruplet and increment`` ``// counter when sum of a quadruplet is equal to n`` ``for` `(``int` `i = ``1``; i < n; i++)`` ``for` `(``int` `j = i; j < n; j++)`` ``for` `(``int` `k = j; k < n; k++)`` ``for` `(``int` `l = k; l < n; l++)`` ``if` `(i + j + k + l == n)`` ``counter++;`` ``return` `counter;``}` `// Driver program`` ` ` ``public` `static` `void` `main (String[] args) {`` ``int` `n = ``8``;`` ``System.out.println (countWays(n));`` ``}``}`\n\n## Python3\n\n `# A Simple python3 program to count``# number of ways to represent a number``# n as sum of four.` `# Returns count of ways``def` `countWays(n):` ` ``counter ``=` `0` `# Initialize result` ` ``# Generate all possible quadruplet`` ``# and increment counter when sum of`` ``# a quadruplet is equal to n`` ``for` `i ``in` `range``(``1``, n):`` ``for` `j ``in` `range``(i, n):`` ``for` `k ``in` `range``(j, n):`` ``for` `l ``in` `range``(k, n):`` ``if` `(i ``+` `j ``+` `k ``+` `l ``=``=` `n):`` ``counter ``+``=` `1`` ``return` `counter` `# Driver Code``if` `__name__ ``=``=` `\"__main__\"``:` ` ``n ``=` `8`` ``print` `(countWays(n))` `# This code is contributed by ita_c`\n\n## C#\n\n `// A Simple C# program to count number``// of ways to represent a number n as``// sum of four.``using` `System;` `class` `GFG``{`` ` `// Returns count of ways``static` `int` `countWays(``int` `n)``{`` ``int` `counter = 0; ``// Initialize result` ` ``// Generate all possible quadruplet`` ``// and increment counter when sum of`` ``// a quadruplet is equal to n`` ``for` `(``int` `i = 1; i < n; i++)`` ``for` `(``int` `j = i; j < n; j++)`` ``for` `(``int` `k = j; k < n; k++)`` ``for` `(``int` `l = k; l < n; l++)`` ``if` `(i + j + k + l == n)`` ``counter++;`` ``return` `counter;``}` `// Driver Code``static` `public` `void` `Main ()``{`` ``int` `n = 8;`` ``Console.WriteLine(countWays(n));``}``}` `// This code is contributed by Sachin`\n\n## PHP\n\n ``\n\n## Javascript\n\n ``\n\nOutput :\n\n`5`\n\nTime complexity of above solution is O(n4)\nMethod 2 (Uses Dynamic Programming)\nThe idea is based on below recursive solution.\n\n```countWays(n, parts, nextPart) = ∑countWays(n, parts, i)\nnextPart <= i Input number\nparts --> Count of parts of n. Initially parts = 4\nnextPart --> Starting point for next part to be tried\nWe try for all values from nextPart to n.\n\nWe initially call the function as countWays(n, 4, 1)\n```\n\nBelow is Dynamic Programming based on solution of above idea.\n\n## C++\n\n `// A Dynamic Programming based solution to count number``// of ways to represent n as sum of four numbers``#include``using` `namespace` `std;``int` `dp;` `// \"parts\" is number of parts left, n is the value left``// \"nextPart\" is starting point from where we start trying``// for next part.``int` `countWaysUtil(``int` `n, ``int` `parts, ``int` `nextPart)``{`` ``// Base cases`` ``if` `(parts == 0 && n == 0) ``return` `1;`` ``if` `(n <= 0 || parts <= 0) ``return` `0;` ` ``// If this subproblem is already solved`` ``if` `(dp[n][nextPart][parts] != -1)`` ``return` `dp[n][nextPart][parts];` ` ``int` `ans = 0; ``// Initialize result` ` ``// Count number of ways for remaining number n-i`` ``// remaining parts \"parts-1\", and for all part`` ``// varying from 'nextPart' to 'n'`` ``for` `(``int` `i = nextPart; i <= n; i++)`` ``ans += countWaysUtil(n-i, parts-1, i);` ` ``// Store computed answer in table and return`` ``// result`` ``return` `(dp[n][nextPart][parts] = ans);``}` `// This function mainly initializes dp table and``// calls countWaysUtil()``int` `countWays(``int` `n)``{`` ``memset``(dp, -1, ``sizeof``(dp));`` ``return` `countWaysUtil(n, 4, 1);``}` `// Driver program``int` `main()``{`` ``int` `n = 8;`` ``cout << countWays(n) << endl;`` ``return` `0;``}`\n\n## Java\n\n `// A Dynamic Programming based solution to count number``// of ways to represent n as sum of four numbers``class` `GFG``{` `static` `int` `dp[][][] = ``new` `int``[``5001``][``5001``][``5``];` `// \"parts\" is number of parts left, n is the value left``// \"nextPart\" is starting point from where we start trying``// for next part.``static` `int` `countWaysUtil(``int` `n, ``int` `parts, ``int` `nextPart)``{`` ``// Base cases`` ``if` `(parts == ``0` `&& n == ``0``) ``return` `1``;`` ``if` `(n <= ``0` `|| parts <= ``0``) ``return` `0``;` ` ``// If this subproblem is already solved`` ``if` `(dp[n][nextPart][parts] != -``1``)`` ``return` `dp[n][nextPart][parts];` ` ``int` `ans = ``0``; ``// Initialize result` ` ``// Count number of ways for remaining number n-i`` ``// remaining parts \"parts-1\", and for all part`` ``// varying from 'nextPart' to 'n'`` ``for` `(``int` `i = nextPart; i <= n; i++)`` ``ans += countWaysUtil(n-i, parts-``1``, i);` ` ``// Store computed answer in table and return`` ``// result`` ``return` `(dp[n][nextPart][parts] = ans);``}` `// This function mainly initializes dp table and``// calls countWaysUtil()``static` `int` `countWays(``int` `n)``{`` ``for``(``int` `i = ``0``; i < ``5001``; i++)`` ``{`` ``for``(``int` `j = ``0``; j < ``5001``; j++)`` ``{`` ``for``(``int` `l = ``0``; l < ``5``; l++)`` ``dp[i][j][l] = -``1``;`` ``}`` ``}`` ``return` `countWaysUtil(n, ``4``, ``1``);``}` `// Driver program``public` `static` `void` `main(String[] args)``{`` ``int` `n = ``8``;`` ``System.out.println(countWays(n));``}``}` `/* This code contributed by PrinciRaj1992 */`\n\n## Python3\n\n `# A Dynamic Programming based solution``# to count number of ways to represent``# n as sum of four numbers` `dp ``=` `[[[``-``1` `for` `i ``in` `range``(``5``)]`` ``for` `i ``in` `range``(``501``)]`` ``for` `i ``in` `range``(``501``)]` `# \"parts\" is number of parts left, n is``# the value left \"nextPart\" is starting``# point from where we start trying``# for next part.``def` `countWaysUtil(n, parts, nextPart):`` ` ` ``# Base cases`` ``if` `(parts ``=``=` `0` `and` `n ``=``=` `0``):`` ``return` `1`` ``if` `(n <``=` `0` `or` `parts <``=` `0``):`` ``return` `0` ` ``# If this subproblem is already solved`` ``if` `(dp[n][nextPart][parts] !``=` `-``1``):`` ``return` `dp[n][nextPart][parts]` ` ``ans ``=` `0` `# Initialize result` ` ``# Count number of ways for remaining`` ``# number n-i remaining parts \"parts-1\",`` ``# and for all part varying from`` ``# 'nextPart' to 'n'`` ``for` `i ``in` `range``(nextPart, n ``+` `1``):`` ``ans ``+``=` `countWaysUtil(n ``-` `i, parts ``-` `1``, i)` ` ``# Store computed answer in table`` ``# and return result`` ``dp[n][nextPart][parts] ``=` `ans`` ``return` `(ans)` `# This function mainly initializes dp``# table and calls countWaysUtil()``def` `countWays(n):`` ``return` `countWaysUtil(n, ``4``, ``1``)` `# Driver Code``n ``=` `8``print``(countWays(n))` `# This code is contributed``# by sahishelangia`\n\n## C#\n\n `// A Dynamic Programming based solution to count number``// of ways to represent n as sum of four numbers``using` `System;`` ` `class` `GFG``{` `static` `int` `[,,]dp = ``new` `int``[5001, 5001, 5];` `// \"parts\" is number of parts left, n is the value left``// \"nextPart\" is starting point from where we start trying``// for next part.``static` `int` `countWaysUtil(``int` `n, ``int` `parts, ``int` `nextPart)``{`` ``// Base cases`` ``if` `(parts == 0 && n == 0) ``return` `1;`` ``if` `(n <= 0 || parts <= 0) ``return` `0;` ` ``// If this subproblem is already solved`` ``if` `(dp[n,nextPart,parts] != -1)`` ``return` `dp[n,nextPart,parts];` ` ``int` `ans = 0; ``// Initialize result` ` ``// Count number of ways for remaining number n-i`` ``// remaining parts \"parts-1\", and for all part`` ``// varying from 'nextPart' to 'n'`` ``for` `(``int` `i = nextPart; i <= n; i++)`` ``ans += countWaysUtil(n - i, parts - 1, i);` ` ``// Store computed answer in table and return`` ``// result`` ``return` `(dp[n,nextPart,parts] = ans);``}` `// This function mainly initializes dp table and``// calls countWaysUtil()``static` `int` `countWays(``int` `n)``{`` ``for``(``int` `i = 0; i < 5001; i++)`` ``{`` ``for``(``int` `j = 0; j < 5001; j++)`` ``{`` ``for``(``int` `l = 0; l < 5; l++)`` ``dp[i, j, l] = -1;`` ``}`` ``}`` ``return` `countWaysUtil(n, 4, 1);``}` `// Driver code``public` `static` `void` `Main(String[] args)``{`` ``int` `n = 8;`` ``Console.WriteLine(countWays(n));``}``}` `// This code contributed by Rajput-Ji`\n\n## PHP\n\n ``\n\n## Javascript\n\n ``\n\nOutput :\n\n`5`\n\nTime Complexity: O(n3). There are Θ(n2) entries, every entry is filled only once and filling an entry takes O(n) time.\nAuxiliary Space: O(n2\n\nMethod 3 (A O(n2 Log n) Solution)\nWe can use the solution discussed in this post to find all quadruplets."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5698271,"math_prob":0.9819271,"size":12445,"snap":"2022-05-2022-21","text_gpt3_token_len":4157,"char_repetition_ratio":0.16011575,"word_repetition_ratio":0.50138503,"special_character_ratio":0.36464444,"punctuation_ratio":0.13935432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997149,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T09:26:35Z\",\"WARC-Record-ID\":\"<urn:uuid:68245f8a-9957-42cf-b08c-2f44be57d40d>\",\"Content-Length\":\"228652\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a6eea48-3e36-4a4d-8ef2-c198d31a53e7>\",\"WARC-Concurrent-To\":\"<urn:uuid:36010a5b-5136-46f3-ba44-3613561304e2>\",\"WARC-IP-Address\":\"184.25.127.143\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/count-number-of-ways-to-divide-a-number-in-4-parts/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:FLTAQTNAFLXVXQWN7VJLDEYWFYUQANLV\",\"WARC-Block-Digest\":\"sha1:EKKQ7NCGJ2UTXWBBULR7DJDY4JRYHZA3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662584398.89_warc_CC-MAIN-20220525085552-20220525115552-00479.warc.gz\"}"} |
https://www.ishwaranand.com/insertion-sort/ | [
"• Rainy Season\n• World\n#C, C++ and C# #Algorithm #Blog #C Program\n\n# How does insertion sort work\n\n### Explain the Insertion sort algorithm.\n\n• Insertion sort is based on the principle of inserting the element into the proper position in the previously sorted list. if take (N-1)passes to sort N element of the array.\n• In this sort, the first element is considered to be sorted and the remaining elements are in the unsorted list. One element from the unsorted list is picked-up and inserted in the sorted list to its proper position. This process is continued until the unsorted list is finished.\n\nInitially: Data is itself sorted. And the remaining elements are unsorted.\n\nPass1: Data is inserted either before Data or after Data so that Data and Data are sorted.\n\nPass2: Data is inserted to its proper position in Data Data so the Data, Data and Data are sorted.\n\nPass3: Data is inserted to its proper position in Data, Data, Dataso that Data, Data, Data and Data are sorted.\n\n.\n\n.\n\n.\n\nPassN-1: Data[N] is inserted in Data, Data…Data[N-1] so that Data, Data…Data[N] are sorted.\n\nExample Insertion Sort\nConsider the following array with 6 elements.\nRed is Sorted\nBlack is unsorted\n\n1 2 3 4 5 6\n50 30 10 40 60 20\n\nInitially: Consider data is sorted and the remaining list is unsorted.\n\n1 2 3 4 5 6\n50 30 10 40 60 20\n\nPass 1 : As Data < Data Hence, insert Data before Data\nBefore Insertion\n1 2 3 4 5 6\n50 30 10 40 60 20\nAfter Insertion\n1 2 3 4 5 6\n30 50 10 40 60 20\n\nPass 2 : As Data < Data, Data hence, insert Data before Data\nBefore Insertion\n1 2 3 4 5 6\n30 50 10 40 60 20\nAfter Insertion\n1 2 3 4 5 6\n10 30 50 40 60 20\n\nPass 3 : As Data < Data, hence, insert Data before data\nBefore Insertion\n1 2 3 4 5 6\n10 30 50 40 60 20\nAfter Insertion\n1 2 3 4 5 6\n10 30 40 50 60 20\n\nPass 4 : As data > Data, hence, Data remain at the same position\nBefore Insertion\n1 2 3 4 5 6\n10 30 40 50 60 20\nAfter Insertion\n1 2 3 4 5 6\n10 30 40 50 60 20\n\nPass 5 : Data < Data, Data{4], Data, Data Hence, Data is before Data\nBefore Insertion\n1 2 3 4 5 6\n10 30 40 50 60 20\nAfter Insertion\n1 2 3 4 5 6\n10 20 30 40 50 60\n\n### Algorithm: Insertion_sort(Data[],N)\n\n• This is the algorithm for inserted sort to sort the array in ascending order.\n• Data[]-Array of elements\n• N-size of array\n• i,j-index variable\n• temp-temporary variable\n\n### Here’s an explanation of the insertion sort algorithm in c\n\n``````Step 1 : Start\nStep 2 : Repeat the steps 3, 4, 5 and 6 for i = 2 to N\nStep 3 : Set temp = Data[i]\nStep 4 : Set j = i - 1\nStep 5 : Repeat while j > 0 and temp < data[i]\na) Set Data[j+1] = Data[j]\nb) Set j = j - 1\nStep 6 : Set Data[j+1] = temp\nStep 7 : Stop``````\n\n## what is the insertion sort\n\nInsertion sort is a simple sorting algorithm that works by repeatedly inserting an element from an unsorted portion of the array into its correct position within a sorted portion of the array. The algorithm starts with an initially sorted section of the array consisting of a single element (usually the first element), and then iteratively expands this sorted section by one element at a time.\n\n### The algorithm works as follows:\n\n2. Compare the second element with the first element. If the second element is smaller, swap them.\n3. Move to the next element (the third element) and compare it with the elements in the sorted section from right to left. Insert the element into its correct position within the sorted section by shifting the larger elements from one position to the right.\n4. Repeat step 3 for all the remaining elements, each time expanding the sorted section by one element.\n5. Once all the elements are inserted into their correct positions, the array is sorted.\n\nInsertion sort has a time complexity of O(n^2), where n is the number of elements in the array. It is considered efficient for small input sizes or partially sorted arrays but becomes less efficient for larger arrays compared to more advanced sorting algorithms like to merge sort or quicksort.\n\n## Here’s an explanation of the insertion sort algorithm in Java\n\n``````public class InsertionSort {\npublic static void insertionSort(int[] arr) {\nint n = arr.length;\n\nfor (int i = 1; i < n; i++) {\nint key = arr[i];\nint j = i - 1;\n\n// Move elements of arr[0..i-1], that are greater than key, to one position ahead of their current position\nwhile (j >= 0 && arr[j] > key) {\narr[j + 1] = arr[j];\nj = j - 1;\n}\n\narr[j + 1] = key;\n}\n}\n\npublic static void main(String[] args) {\nint[] arr = {5, 2, 8, 12, 1, 6, 3};\n\nSystem.out.println(\"Original array: \" + Arrays.toString(arr));\ninsertionSort(arr);\nSystem.out.println(\"Sorted array: \" + Arrays.toString(arr));\n}\n}``````\n\n### Here’s an explanation of the insertion sort algorithm in Java:\n\n```javaCopy code```public class InsertionSort {\npublic static void insertionSort(int[] arr) {\nint n = arr.length;\n\nfor (int i = 1; i < n; i++) {\nint key = arr[i];\nint j = i - 1;\n\n// Move elements of arr[0..i-1], that are greater than key, to one position ahead of their current position\nwhile (j >= 0 && arr[j] > key) {\narr[j + 1] = arr[j];\nj = j - 1;\n}\n\narr[j + 1] = key;\n}\n}\n\npublic static void main(String[] args) {\nint[] arr = {5, 2, 8, 12, 1, 6, 3};\n\nSystem.out.println(\"Original array: \" + Arrays.toString(arr));\ninsertionSort(arr);\nSystem.out.println(\"Sorted array: \" + Arrays.toString(arr));\n}\n}``````\n\nIn this Java implementation, we have a method called `insertionSort` that takes an array of integers as input and sorts it using the insertion sort algorithm.\n\nThe `insertionSort` the method iterates through the array starting from the second element (`i = 1`). It selects the current element (`key`) and compares it with the elements in the sorted portion of the array (`arr[0..i-1]`). If an element in the sorted portion is greater than the key, it is shifted one position to the right to make room for the key. This process continues until the correct position the `key` is found.\n\nFinally, the `main` the method demonstrates the usage of `insertionSort` by creating an array, sorting it using the method, and printing the original and sorted arrays.\n\nWhen you run the code, the output will be:\n\n``````Original array: [5, 2, 8, 12, 1, 6, 3]\nSorted array: [1, 2, 3, 5, 6, 8, 12]``````\n\nThis demonstrates how the insertion sort algorithm arranges the elements of the array in ascending order.\n\n."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7198038,"math_prob":0.9359101,"size":5748,"snap":"2023-40-2023-50","text_gpt3_token_len":1808,"char_repetition_ratio":0.16869777,"word_repetition_ratio":0.3504673,"special_character_ratio":0.34324983,"punctuation_ratio":0.122964166,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9744638,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T01:40:44Z\",\"WARC-Record-ID\":\"<urn:uuid:cb6be103-7fde-46a8-b839-dc37d7eabd48>\",\"Content-Length\":\"171411\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98bd0845-0954-46b0-bb27-7a37bf1dcc86>\",\"WARC-Concurrent-To\":\"<urn:uuid:4de05ba2-43ba-4984-a9ea-b734c6a8b094>\",\"WARC-IP-Address\":\"104.21.2.153\",\"WARC-Target-URI\":\"https://www.ishwaranand.com/insertion-sort/\",\"WARC-Payload-Digest\":\"sha1:TD75CCM2UJXCCBG4CHFCCKQZ6VLXX5UO\",\"WARC-Block-Digest\":\"sha1:CCEF3YDJNFTD2KBYJG3F6TLX6JXBGK53\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506320.28_warc_CC-MAIN-20230922002008-20230922032008-00512.warc.gz\"}"} |
https://chouprojects.com/cubesetcount-excel/ | [
"Published on\nWritten by Jacky Chou\n\n# Cubesetcount: Excel Formulae Explained\n\n## Key Takeaway:\n\n• CUBESETCOUNT is a formula in Excel used for dimensional analysis. It counts the number of items in a set. The formula is particularly useful for multidimensional data analysis.\n• The syntax of CUBESETCOUNT formula involves a set expression, which defines the set of cells to count, and a member expression, which specifies the dimension. The formula can also be combined with other functions for more complex data analysis.\n• Using CUBESETCOUNT in Excel requires defining a set of data to be analyzed, such as a pivot table or cube. Users can then specify the dimensions to be analyzed and apply the CUBESETCOUNT formula. Examples include counting the number of products sold in a particular region or the number of employees in a certain department.\n• Limitations of CUBESETCOUNT include its use for only multidimensional data analysis and the need for a fully functioning OLAP Cube. Additionally, care must be taken when defining the set expression to avoid counting items multiple times.\n• The benefits of using CUBESETCOUNT in analyzing data include the ability to perform complex analyses quickly and efficiently, as well as the ability to easily manipulate and summarize data according to different dimensions. This can help users identify trends and make important business decisions more effectively.\n\nStruggling to understand Excel formulae like CUBESETCOUNT? You’re not alone! Discover the power of CUBESETCOUNT and learn how to apply it to your data analysis projects quickly and easily.\n\n## Overview of CUBESETCOUNT\n\nTo understand CUBESETCOUNT, it is crucial to grasp its overview. CUBESETCOUNT is an Excel formula that counts the number of items in a set defined by CUBESET. It is a powerful tool that helps retrieve data from a multi-dimensional database using online analytical processing (OLAP).\n\nThe following table shows the overview of the function CUBESETCOUNT:\n\n|Column 1|Column 2|\n|——–|——–|\n|Function Category|Financial|\n|Excel Version Introduced|Excel 2007|\n|Syntax|CUBESETCOUNT(connection, set expression)|\n\nIt is worth noting that CUBESETCOUNT supports both relational and OLAP databases. It can handle a wide range of CUBESET expressions and can count items based on their numerical values, text values, or based on a specified criterion.\n\nPro Tip: When using CUBESETCOUNT, be sure to incorporate error handling techniques such as using the IFERROR function to prevent #N/A errors.\n\n## Syntax of CUBESETCOUNT\n\nCUBESETCOUNT: Excel Formulae Explained\n\nThe syntax of CUBESETCOUNT refers to the structure of the Excel formula. It is used to count the number of items in a set within a cube. The formula requires two arguments: the name of the cube and the name of the set.\n\nArgumentUse\nCube NameThe name of the cube\nSet NameThe name of the set\n\nThis formula has a unique feature that allows subsets to be counted. This feature permits users to apply filters to the sets. Filters are applied by adding criteria to the set formula.\n\nTo optimize the performance of the CUBESETCOUNT formula, use it only in cells with no other formulae, which helps to reduce complexity. It is also best to avoid nested formulas, as they can slow down the calculation of the worksheet.\n\nOverall, CUBESETCOUNT is a valuable tool when working with cubes in Excel. By following these suggestions, users can achieve efficient and effective results. By utilizing this formula in conjunction with other Excel formulae, such as CUBEVALUE: Excel Formulae Explained, users can take full advantage of Excel’s analytical capabilities.\n\n## How to Use CUBESETCOUNT in Excel\n\nExplore how CUBESETCOUNT in Excel can be used. It can get the number of items selected within a PivotTable or cube. Examples are useful to understand the benefits. Also, look into the limitations to understand any potential drawbacks.\n\n### Examples of Using CUBESETCOUNT\n\nTo better understand how to utilize `CUBESETCOUNT` in Excel, here are some scenarios where it can be helpful.\n\n Scenario Description 1 Determining the number of unique items in a particular dimension 2 Counting the number of distinct values that meet particular criteria from a cube field.\n\nAside from these specific use cases, `CUBESETCOUNT` can also be leveraged for many other purposes in data analysis and management.\n\nIn one instance, a business was struggling to collect and analyze customer feedback across multiple channels. By implementing the `CUBESETCOUNT` formula within Excel, they were able to streamline their processes while obtaining valuable insights into customer satisfaction rates over time. This enabled them to gain a competitive edge and make informed business decisions based on objective data analysis.\n\nYou can’t count on `CUBESETCOUNT` to solve world hunger, but it’s great for slicing and dicing data.\n\n### Limitations of CUBESETCOUNT\n\nOne of the limitations of CUBESETCOUNT revolves around its inability to handle large datasets. When dealing with a significant amount of information, this Excel formula may not produce accurate and reliable results due to performance constraints.\n\nTo illustrate the limitations further, consider the following table:\n\nDataset SizeCUBESETCOUNT Result\nSmall100\nMedium95\nLarge70\n\nThe table shows that as the dataset size increases from small to large, the result produced by CUBESETCOUNT decreases. This suggests that when faced with a vast and complex dataset, it may be advisable to explore alternative solutions rather than relying solely on CUBESETCOUNT.\n\nIt’s worth noting, however, that the effectiveness of such alternatives would depend on various factors such as data structure, availability of tools and resources, among others.\n\nIn light of this limitation, one suggestion is to optimize data by removing unnecessary or redundant information. Another strategy is to use Excel’s built-in filtering capabilities to analyze subsets of data more effectively. By adopting these approaches, users can enhance the accuracy and reliability of their calculations and avoid issues associated with performance constraints.\n\n## Benefits of Using CUBESETCOUNT in Analyzing Data\n\nOne of the key benefits of utilizing the CUBESETCOUNT function in data analysis is its ability to offer a comprehensive and granular view of the data. By using this function, users can easily filter and organize large datasets into subsets that can be easily analyzed. The benefits of using CUBESETCOUNT include:\n\n• Improved efficiency and accuracy in analyzing complex data sets\n• Simplified and streamlined data analysis processes\n• Enhanced ability to identify trends and patterns within data sets\n• Improved ability to make data-driven decisions based on accurate data analysis.\n\nFurthermore, utilizing CUBESETCOUNT helps to ensure that relevant data is easily accessible and can be analyzed quickly and efficiently. By using this function, users can quickly identify trends and patterns within their data sets, and make data-driven decisions based on accurate information.\n\nOne suggestion for enhancing the benefits of using CUBESETCOUNT is to combine it with other Excel formulae, such as the CUBEVALUE formula, which can provide even greater control and precision in data analysis. Another suggestion is to regularly review and update data sets to ensure that they remain relevant and accurately reflect current trends and patterns. By doing so, users can ensure that their analysis is up-to-date and provides an accurate view of the data.\n\n## Five Facts About CUBESETCOUNT: Excel Formulae Explained:\n\n• ✅ CUBESETCOUNT is an Excel formula used to count the number of items in a Microsoft SQL Server Analysis Services (SSAS) cube set. (Source: Microsoft)\n• ✅ This formula can be used to extract data from a multi-dimensional cube, combining multiple sets into a single set. (Source: Excel Campus)\n• ✅ The CUBESETCOUNT function requires a set expression parameter, which can be a cell reference or a formula that returns a set of members. (Source: Excel Tip)\n• ✅ This formula is useful for business analysts working with large amounts of data in multi-dimensional databases. (Source: MyExcelOnline)\n• ✅ Other related functions that can be used with CUBESETCOUNT include CUBESET, CUBESETASCENDANT, and CUBESETDESCENDANT. (Source: Excel Easy)\n\n## FAQs about Cubesetcount: Excel Formulae Explained\n\n### What is CUBESETCOUNT in Excel?\n\nCUBESETCOUNT is an Excel formula that returns the number of items in a set within a cube.\n\n### How to use CUBESETCOUNT formula in Excel?\n\nThe syntax for the CUBESETCOUNT formula is:\n=CUBESETCOUNT(set_expression)\nwhere set_expression is the cube set for which you want to count the number of items. This formula requires an OLAP cube data source to work.\n\n### What are the advantages of using CUBESETCOUNT formula?\n\nThe advantages of using CUBESETCOUNT formula are:\n1. It can count the number of items in a set within a cube.\n2. It supports OLAP cube data source.\n3. It can be used in complex data analysis scenarios.\n\n### What are the limitations of CUBESETCOUNT formula?\n\nThe limitations of using CUBESETCOUNT formula are:\n1. It only works with OLAP cube data source.\n2. It requires some knowledge of MDX (Multidimensional Expressions) language.\n3. It may not work in some complex data analysis scenarios.\n\n### Can I use CUBESETCOUNT formula to count the number of unique items in a set?\n\nYes, you can use CUBESETCOUNT formula to count the number of unique items in a set. To do so, you need to combine it with other CUBE functions such as CUBESETDISTINCTCOUNT.\n\n### What are some examples of using CUBESETCOUNT formula in Excel?\n\nHere are some examples of using CUBESETCOUNT formula:\n1. To count the number of items in a set: =CUBESETCOUNT(“[Sales].[Category].&”)\n2. To count the number of unique items in a set: =CUBESETCOUNT(CUBESETDISTINCTCOUNT(“[Sales].[Category].&”))\n\n## Related Articles\n\n### Max: Excel Formulae Explained\n\nKey Takeaway: The MAX function in Excel is used to ...\n\n### Lower: Excel Formulae Explained\n\nKey Takeaway: The LOWER formula in Excel allows users to ...\n\n### Match: Excel Formulae Explained\n\nKey Takeaway: The MATCH function in Excel is used to ..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89414847,"math_prob":0.81617075,"size":9726,"snap":"2023-40-2023-50","text_gpt3_token_len":2064,"char_repetition_ratio":0.17794693,"word_repetition_ratio":0.06494367,"special_character_ratio":0.18702447,"punctuation_ratio":0.09625668,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9761949,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T08:17:42Z\",\"WARC-Record-ID\":\"<urn:uuid:183015e1-ec0d-45d3-809c-acc60fbfa59d>\",\"Content-Length\":\"89438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13494f90-d6b5-4945-9c26-efc859606e9b>\",\"WARC-Concurrent-To\":\"<urn:uuid:002fa95b-999e-4472-b92d-8aacfc551ae8>\",\"WARC-IP-Address\":\"34.230.232.255\",\"WARC-Target-URI\":\"https://chouprojects.com/cubesetcount-excel/\",\"WARC-Payload-Digest\":\"sha1:TCZZL2H44WGDHRWJ6WGFDPKBV2NIGF3I\",\"WARC-Block-Digest\":\"sha1:L55GWCSLYBJWMHB2Z2GJJXXGQ4UDR5NQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510179.22_warc_CC-MAIN-20230926075508-20230926105508-00720.warc.gz\"}"} |
https://cs.stackexchange.com/questions/tagged/finite-automata?tab=Active | [
"# Questions tagged [finite-automata]\n\nQuestions about finite automata, an elementary automaton model with finite memory. It is equivalent to regular languages and the basis for many more complex models.\n\n1,728 questions\nFilter by\nSorted by\nTagged with\n109 views\n\n### Create DFA such that every substring of w of length 3 contains two or three 0's with w={0,1}\n\nI am attempting to draw the DFA of this problem but I'm kind of stuck. In another post I saw that the DFA should have 10 states. I'm finding it hard to even understand the question, is the input to ...\n1 vote\n236 views\n\n### Create a Deterministic Finite Automaton for a regular expression\n\nI want to create a finite state machine that accepts the following language: $$L=\\{w\\in\\{a,b\\}^* | w \\text{ contains abb but not on the first position}\\}$$ So I began by writing a regular expression ...\n139 views\n\n### Simultaneous reachability of NFA states\n\nSuppose I have a $n$-state non-deterministic finite automaton $F$ over alphabet $\\Sigma$. Let $S(x)$ be the set of states reachable from the starting state by consuming string $x$. I am interested for ...\n12k views\n\n### What is the difference between transition function (delta) and extended transition function (delta hat) in finite automata?\n\nWhat is the difference between transition function delta ($\\delta$) and extended transition function delta hat ($\\hat{\\delta}$) in finite automata? Both of them, when started at a state $q$ for a ...\n1 vote\n60 views\n\n### DFA for even concatenation of strings from a language\n\nIf I have a deterministic finite automaton (DFA) with a language $W$, and I need to create another DFA that returns all the strings that are a concatenation of an even number of strings in $W$, how ...\n137 views\n\n### nfa of the Language L={w belongs to (a,b)*/w starts with aa or ends with aa} with or being not exclusive\n\nI have a question I need to give the NFA of the following language: L={w belongs to (a,b)*/w starts with aa or ends with aa} with or being not exclusive meaning I can have a word that starts with aa ...\n1k views\n\n### Determine if an NFA accepts infinite language in polynomial time\n\nQuestion Statement: Given a NFA $N$, design an algorithm that runs in polynomial time such that it determines if $L(N)$ is infinite. (Note that converting NFA to DFA is exponential time). For any DFA,...\n1 vote\n7k views\n\n### DFA of (aa+bb)(a+b)* + (a+b)*(aa+bb)?\n\nOur class teacher gave us a descriptive definition of a language in a Quiz today and ask us to make its DFA. In the middle of quiz he also told us the Regular Expression(RE) of that language but we ...\n1 vote\n22 views\n\n### Does there exist terminology to discern NFA states and NFA-transformed-into-DFA state?\n\nI can't find this from surface search in the literature When one observes/simulates NFA after processing several characters, one has to consider both \"internal states of NFA\" (individual ...\n93 views\n\n### Intersection of Infinite Regular Language with CFL\n\nIntersection of any finite regular language with anything( CFL or not CFL) will be finite but what about intersection of infinite regular language with CFL or not CFL. Does the resultant will be ...\n32 views\n\n### 1 way 2 stack and 2 way 2 stack Pushdown Accepters that accepts L={a^(n^2)|n≥1}\n\nUsing a 1 way 2 stack, and a 2 way 2 stack PDA, I want to check if the length of a an input string is strictly a perfect square number. How can I do this in both approaches?\n1 vote\n533 views\n\n### Converting DFA to Regular Expression Using State Removal\n\nI'm trying to convert the following NFA to a regular expression. I've attached my work below and end up with the expression $aa^*bb^*$. As far as I can tell, this doesn't seem correct but I've been ...\n42 views\n\n### Why does lexer has O(n) time complexity?\n\nAccording to my CS knowledge so far, a lexer uses DFA(which takes linear time) for 'each' token type to find the next token, so in the worst case, it should try 'all possible' token types of a ...\n78 views\n\n### Explain meaning of states and transitions for DFA that accepts two binary words (a,b) with b = 5a\n\nI was provided with the solution to the language being represented. An example of accepted input would be . The DFA that would recognize this language is below. What do the states represent?...\n70 views\n\n### How to find prefixes and suffixes for infinite languages? (Automata)\n\nL= {abc} prefix = {epsilon,a,ab,abc} suffix = {epsilon,c,bc,abc} It's easy to find suffixes and prefixes for finite Regular languages. But what will be the ...\n79 views\n\n### What could be possible NFA for the RegEx \"a?\"\n\nI am trying to use the Thompson's method to draw an NFA for a RegEx given by: $(a+b|c?)c$ I am wondering if I should deconstruct the RegEx as - Concatenation of $a+$, $(b|c?)$ together with $c$ OR ...\n62 views\n\n### Turing machine for a^n b^m c^n d^m\n\nThe state diagram for the initial part of this turing machine given as: Here, we are basically traversing through the input tape, changing occurence of 'a' to X1, and 'c' to X2. After that we go back ...\n7k views\n\n### If $L$ is a regular language then so is $\\sqrt{L}=\\{w:ww\\in L\\}$\n\nI am interested in proving that $\\sqrt{L}=\\{w:ww\\in L\\}$ is regular if $L$ is regular but I don't seem to be getting anywhere. If possible I was hoping for a hint to get me going in the right ...\n2k views\n\n### Determining Length of a walk in Nondeterministic Finite Automata with Lambda Transitions\n\nI am learning about CS Theory and specifically Nondeterministic Finite Automata (NFA) right now. In my book I came across a section of text that discussed a way to determine the length of a walk ...\n1 vote\n2k views\n\n### Length of distinguishing string for a DFA\n\nIs it possible for a $DFA$ which has $n$ states that, there exists two distinguishable states $p, q$ such that there exists a distinquishing string between $p$ and $q$ whose length is greater than $n$?...\n4k views\n\n### Are all DFAs also NFAs?\n\nAre all Deterministic Finite Automatons also Non Deterministic Finite Automatons?\n605 views\n\n### Can we skip an input in push down automata\n\nHi here I'm giving a language L3={0^m 1^(n ) 2^m | m,n ∈ N} I designed this stack machine in order to accept this given language. Here I'm skipping 1 (no matter how many 1s are there) . Is it ok to ...\n27 views\n\n### What is the minimum length word accepted by the product of these simple loop automata?\n\nLet $A_{n} = (aa|aaa|aaaa|\\dots |a^{n-2})(a^{n})^*$ where $n \\geq 4$ is some natural,and $A_2 = (a^2)^*, A_3 = (a^3)^*$. Clearly every transition is thus labeled by an $a$. From now on let $A_n$ ...\n66 views\n\n### What is the Minimum length of a string that is accepted by a DFA that shows that the language accepted by that DFA is infinite?\n\nWhat is the minimum length of a string that is accepted by a DFA, shows that the language accepted by that DFA is infinite? I checked this post How to determine if an automata (DFA) accepts an ...\n61 views\n\n### Construct a regular grammar that produces all possible strings of $\\Sigma = \\{a,b\\}$ that do not contain substring 'abba'\n\nI'm really stuck here and do not know what to do. So far, I've constructed a DFA and a regular expression that produces the aforementioned set of strings. Namely, the DFA looks like: After a lot of ...\n289 views\n\n### A Turing machine with each cell accessed at most $10$ times has an equivalent NFA\n\nI am confused by the following claim: Let $T$ be a (decider, single-tape) Turing machine with the property that for every input, every cell on its tape is accessed at most $10$ times. Then there is a ...\n181 views\n\n### NP completeness of deciding whether a set of examples, consisting of strings and states, has a corresponding DFA?\n\nI'm working on a textbook problem, 7.36 in Sipser 3rd edition. It claims that if we are given an integer $N$ and set of pairs $(s_i, q_i)$, where $s_i$ are binary strings and $q_i$ are states (we are ...\n43 views\n\n### After converting a CFG to a PDA, what is an example input string to the PDA?\n\nFor example, the following CFG: S$\\rightarrow$aSb|$\\epsilon$ A valid string within the language described by the CFG would be, e.g., \"aabb\". The converted PDA has the following form: The ...\n40 views\n\n### Binary combinatorics with rank\n\nI am looking at finding acceptable binary values with maximum 2 consecutive 1s and 0s, from a range of maximum 6 bits (2^6 values). Also, I want to rank and unrank these subset of values (in ...\n27 views\n\n### DFA for \"K\" bit value with max \"n\" consecutive \"0\"s and \"1\"s\n\nI had posted earlier a question regarding ranking max \"n\" consecutive \"0\"s and \"1\"s in a \"K\"-bit string - Order in a subset I got clarification regarding how ...\n71 views\n\n### Question about remainder in automata construction that checks divisibility\n\nI'm trying to understand de construction of a DFA from the book \"Introduction to Automata Theory, Languages, and Computation by John Hopcroft and Jeffrey Ullman 1st ed\" It says: Where it ...\n60 views\n\n### Minimum number of states in a DFA\n\nConsider the language L given by the regular expression (a + b )*b(a + b) over the alphabet {a, b} . The smallest number of states needed in a deterministic finite-state automaton (DFA) accepting L is ...\n48 views\n\n### How to convert a NFA to alternating finite automata AFA?\n\nI am trying to construct an AFA from a NFA, how do I know if a state of NFA becoms existential or universal in the AFA?\n81 views\n\n### Simple description of the LR(0) table generator algorithm?\n\nI have just implemented a parser on the relational database. Parsing is done with recursive query. Note: one commenter was misled by the word \"recursive\" before \"query\" to think &...\n202 views\n\n### Is it possible to create a NFA that accepts only n*\"a\" or n*\"b\" inputs?\n\nI'd like to create a NFA that accepts only inputs like \"aaaaa\";\"a\";\"bb\";\"bbb\", but not like \"aab\";\"aabaa\". Is the even possible? As far as I ...\n1 vote"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9405946,"math_prob":0.8703749,"size":5085,"snap":"2023-40-2023-50","text_gpt3_token_len":1267,"char_repetition_ratio":0.11513481,"word_repetition_ratio":0.0,"special_character_ratio":0.26096362,"punctuation_ratio":0.1444653,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99435645,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T10:42:21Z\",\"WARC-Record-ID\":\"<urn:uuid:d893cc7c-6379-4dbc-a2b8-c51a2c67dcb8>\",\"Content-Length\":\"340225\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abbd1d28-de17-4ae7-b9ff-181bba1fd011>\",\"WARC-Concurrent-To\":\"<urn:uuid:800b210f-b767-4009-b993-ac9dd6f507cf>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/tagged/finite-automata?tab=Active\",\"WARC-Payload-Digest\":\"sha1:AL5YRXOKZHJXQ6ICSFKR3SBAR6W2TNGM\",\"WARC-Block-Digest\":\"sha1:JFMOKJ55OEHTNZJO4HMR6GQ3JCTVIP63\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100651.34_warc_CC-MAIN-20231207090036-20231207120036-00757.warc.gz\"}"} |
https://www.numwords.com/words-to-number/en/4717 | [
"NumWords.com\n\nHow to write Four thousand seven hundred seventeen in numbers in English?\n\nWe can write Four thousand seven hundred seventeen equal to 4717 in numbers in English\n\n< Four thousand seven hundred sixteen :||: Four thousand seven hundred eighteen >\n\nNine thousand four hundred thirty-four = 9434 = 4717 × 2\nFourteen thousand one hundred fifty-one = 14151 = 4717 × 3\nEighteen thousand eight hundred sixty-eight = 18868 = 4717 × 4\nTwenty-three thousand five hundred eighty-five = 23585 = 4717 × 5\nTwenty-eight thousand three hundred two = 28302 = 4717 × 6\nThirty-three thousand nineteen = 33019 = 4717 × 7\nThirty-seven thousand seven hundred thirty-six = 37736 = 4717 × 8\nForty-two thousand four hundred fifty-three = 42453 = 4717 × 9\nForty-seven thousand one hundred seventy = 47170 = 4717 × 10\nFifty-one thousand eight hundred eighty-seven = 51887 = 4717 × 11\nFifty-six thousand six hundred four = 56604 = 4717 × 12\nSixty-one thousand three hundred twenty-one = 61321 = 4717 × 13\nSixty-six thousand thirty-eight = 66038 = 4717 × 14\nSeventy thousand seven hundred fifty-five = 70755 = 4717 × 15\nSeventy-five thousand four hundred seventy-two = 75472 = 4717 × 16\nEighty thousand one hundred eighty-nine = 80189 = 4717 × 17\nEighty-four thousand nine hundred six = 84906 = 4717 × 18\nEighty-nine thousand six hundred twenty-three = 89623 = 4717 × 19\nNinety-four thousand three hundred forty = 94340 = 4717 × 20\n\nSitemap"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7648142,"math_prob":0.99467915,"size":1439,"snap":"2019-26-2019-30","text_gpt3_token_len":423,"char_repetition_ratio":0.2989547,"word_repetition_ratio":0.016260162,"special_character_ratio":0.3829048,"punctuation_ratio":0.022421524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99773985,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-27T04:45:48Z\",\"WARC-Record-ID\":\"<urn:uuid:c30519db-9691-4ba7-830e-b116f51141a1>\",\"Content-Length\":\"3536\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f2ed6102-6a48-4317-9f4d-d7367d44ec1d>\",\"WARC-Concurrent-To\":\"<urn:uuid:0bf0a388-ed24-4abe-8f0a-89dc7c036ed6>\",\"WARC-IP-Address\":\"68.66.224.32\",\"WARC-Target-URI\":\"https://www.numwords.com/words-to-number/en/4717\",\"WARC-Payload-Digest\":\"sha1:PNUC6KHWZEA6WCC2WPFUPD4X5EH5L4NF\",\"WARC-Block-Digest\":\"sha1:TWOBRW37I5FI4YKWKZWPZIWPX5FD7MSP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000613.45_warc_CC-MAIN-20190627035307-20190627061307-00037.warc.gz\"}"} |
http://starlink.eao.hawaii.edu/docs/sun190.htx/sun190se11.html | [
"### 11 Browsing and selecting with an X display\n\nxcatview is a powerful and flexible catalogue browser. However, it can only be used from a terminal (or workstation console) capable of displaying X output. Before starting xcatview you should ensure that your terminal (or console) is configured to receive X output. Then simply type:\n\nxcatview\n\nand follow the ensuing dialogue boxes. Copious on-line help is available within xcatview. To obtain it simply click on the ‘Help’ button; every dialogue box in xcatview contains a ‘Help’ button.\n\nIn addition to accessing local catalogues xcatview provides some limited facilities to access remote catalogues held on-line at various astronomical data centres and archives around the world. These facilities provide the same functionality as the application catremote and are described in greater detail in Section 25. Obviously they will only be available if the computer on which CURSA is running has appropriate network connections (which will usually be the case at a normal Starlink node).\n\nxcatview provides the following facilities:\n\n• list columns in a catalogue,\n• list parameters and textual information from a catalogue,\n• list new columns computed ‘on the fly’ using an algebraic expression defined in terms of existing columns and parameters. For example, if the catalogue contained columns V and B_V (corresponding to the $V$ magnitude and $B-V$ colour) then the $B$ magnitude could be listed by specifying the expression ‘V + B_V’. The syntax for expressions is described in Appendix A,\n• fast creation of a subset within a specified range for a sorted column (see Section 15 for details of how to create a catalogue sorted on a specified column),\n• creation of subsets defined by algebraic criteria. For example, if the catalogue again contained columns V and B_V then to find the stars in the catalogue fainter than twelfth magnitude and with a $B-V$ of greater than 0.5 the criteria would be ‘V > 12.0 .AND. B_V > 0.5’. Again see Appendix A for the syntax of expressions,\n• compute statistics for one or more columns. The statistics are computed from either all the rows in the catalogue or just the subset of rows contained in a previously created selection. The statistics computed are described in detail in Section 11.1 below,\n• plot a simple scatter-plot from two columns. The scatter-plot can show either all the rows in the catalogue or just the subset of rows contained in a previously created selection,\n• plot a histogram from a column. The histogram can be computed from either all the rows in the catalogue or just the subset of rows contained in a previously created selection,\n• subsets extracted from the catalogue can be saved as new catalogues. These subsets can include new columns computed from expressions as well as columns present in the original catalogue,\n• subsets extracted from the catalogue can be saved in a text file in a form suitable for printing, or in a form suitable for passing to other applications (that is, unencumbered with extraneous annotation).\n\nA tutorial example of using xcatview to select stars which meet specified criteria from a catalogue (a ‘recipe’ in the jargon of cookbooks) is included in SC/6: The CCD Photometric Calibration Cookbook.\n\n#### 11.1 Statistics computed for individual columns\n\nStatistics can be computed for one or more individual columns. They can be computed from either all the rows in the catalogue or just the subset of rows comprising a selection which has been created previously. Obviously, only non-null rows are used in the calculations. Statistics can be displayed for columns of any data type, though for CHARACTER and LOGICAL columns the only quantity which can be determined is the number of non-null rows.\n\nFor each chosen column its name, data type and the number of non-null rows (that is, the number of rows used in the calculation) are displayed and the statistics listed in Table 5 are computed. Though all these quantities are standard statistics there is a remarkable amount of muddle and confusion over their definitions, with textbooks giving divers differing formulæ. For completeness, and to avoid any possible ambiguity, the definitions used in xcatview and catview are given below. These formulæ follow the CRC Standard Mathematical Tables except for the definition of skewness which is taken from Wall.\n\n Minimum Maximum Total range First quartile Third quartile Interquartile range Median Mean Mode (approximate) Standard deviation Skewness Kurtosis\n\nTable 5: Statistics computed for columns\n\nIn the following the set of rows for which statistics are computed is called the ‘current selection’ and it contains $n$ non-null rows. ${x}_{i}$ is the value of the column for the $i$th non-null row in the current selection. The definitions of the various statistics are then as follows.\n\n• The minimum and maximum are (obviously) simply the smallest and largest values in the current selection and the total range is simply the positive difference between these two values.\n• If the column is sorted into ascending order then the $j$th quartile, ${Q}_{j}$, is the value of element $j\\left(n+1\\right)/4$, where $j=1$, 2 or 3. Depending on the value $n$, there may not be an element which corresponds exactly to a given quartile. In this case the value is computed by averaging the two nearest elements.\n\nThe interquartile range is simply the positive difference between ${Q}_{1}$and ${Q}_{3}$.\n\n• The median is simply the second quartile ($j=2$). The mean has its usual definition: the sum of all the values divided by the number of values.\n\nThe value computed for the mode is not exact. Indeed it is not obvious that the mode is defined for ungrouped data. Rather, the value given is computed from the empirical relation:\n\n $mode=mean-3\\left(mean-median\\right)$ (1)\n• The standard deviation, $s$, is defined as: $s=\\sqrt{\\frac{1}{\\left(n-1\\right)}\\sum _{i=1}^{n}{\\left({x}_{i}-mean\\right)}^{2}}$ (2)\n• The skewness and kurtosis are defined in terms of moments. The $k$th moment, ${u}_{k}$, is defined as ${u}_{k}=\\frac{1}{n}\\sum _{i=1}^{n}{\\left({x}_{i}-mean\\right)}^{k}$ (3)\n\nthen\n\n $skewness={u}_{3}^{2}/{u}_{2}^{3}$ (4)\n\nand\n\n $kurtosis={u}_{4}/{u}_{2}^{2}$ (5)\n\nThe expected values for the skewness and kurtosis are:\n\n• skewness = 0 for a symmetrical distribution,\n• kurtosis = 3 for a normal (or Gaussian) distribution.\n\n#### 11.2 Restarting xcatview after a crash\n\nOccasionally, due to some misadventure, xcatview might crash. In this eventuality some temporary files can be left in existence; these must be deleted before xcatview can be used again. The files will be in subdirectory adam of your top-level directory (unless you have explicitly assigned this directory to be elsewhere). The files have names beginning with catview and xcatview, for example:\n\ncatview_5003\nxcatview_5001\n\nSimply delete these files and xcatview can then be started as usual."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.879494,"math_prob":0.9469641,"size":3927,"snap":"2021-43-2021-49","text_gpt3_token_len":892,"char_repetition_ratio":0.10808055,"word_repetition_ratio":0.0,"special_character_ratio":0.20091674,"punctuation_ratio":0.081779055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9895994,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T18:44:24Z\",\"WARC-Record-ID\":\"<urn:uuid:9bfb6451-58bb-4cc1-8dc0-e63243b27552>\",\"Content-Length\":\"22716\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66c5ad2e-4a94-4f10-b20e-cb50d77296d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ad93265-5469-4283-8382-aaee550195e1>\",\"WARC-IP-Address\":\"128.171.90.105\",\"WARC-Target-URI\":\"http://starlink.eao.hawaii.edu/docs/sun190.htx/sun190se11.html\",\"WARC-Payload-Digest\":\"sha1:ZGTZJ5ADMIYXT46A7VWRYEWETQAGMPXJ\",\"WARC-Block-Digest\":\"sha1:QOKZQDCEPYVU6DH5JYREU3FUUJRAD2DU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964360881.12_warc_CC-MAIN-20211201173718-20211201203718-00466.warc.gz\"}"} |
https://www.heldermann.de/JLT/JLT19/JLT191/jlt19002.htm | [
"",
null,
"Journal Home Page Cumulative Index List of all Volumes Complete Contentsof this Volume Previous Article Journal of Lie Theory 19 (2009), No. 1, 029--054Copyright Heldermann Verlag 2009 Comparison of Lattice Filtrations and Moy-Prasad Filtrations for Classical Groups Bertrand Lemaire Institut de Mathématiques, Université Aix-Marseille II, 163 Av. de Luminy, 13288 Marseille 9, France lemaire@iml.univ-mrs.fr [Abstract-pdf] \\def\\g{{\\frak g}} \\def\\R{{\\Bbb R}} Let $F_\\circ$ be a non-Archimedean local field of characteristic not $2$. Let $G$ be a classical group over $F_\\circ$ which is not a general linear group, i.e. a symplectic, orthogonal or unitary group over $F_\\circ$ (possibly with a skew-field involved). Let $x$ be a point in the building of $G$. In this article, we prove that the lattice filtration $(\\g_{x,r})_{r\\in\\R}$ of $\\g={\\rm Lie}(G)$ attached to $x$ by Broussous and Stevens, coincides with the filtration defined by Moy and Prasad. Keywords: Local field, division algebra, classical group, building, lattice filtration, Moy-Prasad filtration, unramified descent. MSC: 20G25, 11E57 [ Fulltext-pdf (288 KB)] for subscribers only."
] | [
null,
"https://www.heldermann.de/JLT/jltlogo.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6494873,"math_prob":0.9236645,"size":1066,"snap":"2021-04-2021-17","text_gpt3_token_len":320,"char_repetition_ratio":0.10263654,"word_repetition_ratio":0.0,"special_character_ratio":0.2814259,"punctuation_ratio":0.15311004,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9654054,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T21:24:51Z\",\"WARC-Record-ID\":\"<urn:uuid:b55e3286-de56-4ac8-bddd-21fd4e49838d>\",\"Content-Length\":\"3189\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18e319d4-bb5a-42f5-83bd-6cbd0aa37bdb>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8b2753f-2560-4b07-a8cd-f8b9a371f6de>\",\"WARC-IP-Address\":\"5.9.212.206\",\"WARC-Target-URI\":\"https://www.heldermann.de/JLT/JLT19/JLT191/jlt19002.htm\",\"WARC-Payload-Digest\":\"sha1:X77M2IXT5YGOW2ETGAKCVMNEVTU2KXNR\",\"WARC-Block-Digest\":\"sha1:PR4LOMO4HD6AR4ZOHK4L4XUBVVKJR6OU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038065492.15_warc_CC-MAIN-20210411204008-20210411234008-00532.warc.gz\"}"} |
https://www.tutorialspoint.com/print-all-odd-numbers-and-their-sum-from-1-to-n-in-pl-sql | [
"# Print all odd numbers and their sum from 1 to n in PL/SQL\n\nMySQLMySQLi Database\n\nIn this problem, we are given a number n and we have to print all odd numbers from 1 to n and also print the sum of numbers from 1 to n in PL/SQL.\n\nPL/SQL is a procedural language extension to SQL. The code is a sequence of instructions that are ground in a block with all related declarations and instructions.\n\nLet’s see an example of our problem −\n\nInput: 7\nOutput: odd numbers are: 1, 3, 5, 7\nSum of odd numbers is 16\n\nTo solve this problem, we will take a number and initialize it to 1 and a sum variable with initial value 0. And we will increase the number by 2 and add into the sum variable until its value is less than or equal to n.\n\n## Example\n\nDECLARE\nnumber NUMBER(3) := 1;\nsumvar NUMBER(4) := 0;\n\nBEGIN\ndbms_output.Put_line('The odd numbers are : ');\nWHILE num <= 7 LOOP\ndbms_output.Put_line(number);\nsumvar := sumvar+num;\nnum := num + 2;\nEND LOOP;\ndbms_output.Put_line('Sum of odd numbers is '|| sum1);\nEND;\n\n## Output\n\nThe odd numbers are −\n\n1\n3\n5\n7\nSum of odd numbers is 16"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83777714,"math_prob":0.9976454,"size":2291,"snap":"2022-27-2022-33","text_gpt3_token_len":610,"char_repetition_ratio":0.16090949,"word_repetition_ratio":0.04054054,"special_character_ratio":0.26538628,"punctuation_ratio":0.06935123,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995489,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T18:56:52Z\",\"WARC-Record-ID\":\"<urn:uuid:65e3e885-ed77-4f69-abd0-d1ac4d3a031d>\",\"Content-Length\":\"30970\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d48a6489-9a9b-4b81-a1ce-1bbbd362ded6>\",\"WARC-Concurrent-To\":\"<urn:uuid:7af59f33-0423-4e31-901b-3d87b4b4bf58>\",\"WARC-IP-Address\":\"192.229.210.176\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/print-all-odd-numbers-and-their-sum-from-1-to-n-in-pl-sql\",\"WARC-Payload-Digest\":\"sha1:ZK3RGAQWHESWN5MQRRTMNW5GM4LBLY2K\",\"WARC-Block-Digest\":\"sha1:25D3X7DYALL6FLSJBAG5RX2NQOSAAMYN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572515.15_warc_CC-MAIN-20220816181215-20220816211215-00737.warc.gz\"}"} |
https://r.789695.n4.nabble.com/efficient-code-for-nonlinear-garch-model-td4689751.html | [
"# efficient code for nonlinear garch model",
null,
"Classic",
null,
"List",
null,
"Threaded",
null,
"3 messages",
null,
"Open this post in threaded view\n|\n\n## efficient code for nonlinear garch model\n\n Hi, I'm trying to think of efficient code for NGARCH model, sigma[t]^2=omega + alpha*(R[t-1]-theta*sigma[t-1])^2 + beta*(sigma[t-1)]^2 from book (Elements of Financial Risk Menagment-Peter F Christoffersen page 77). I already coded it the \"usual way\" but it's taking too long (around 30 s). Now I'm trying somehow to implement -filter- function like in paper - Parameter Estimation of ARMA Models with GARCH/APARCH Errors An R and SPlus Software Implementation - page 6. The problem I can't overcome is that in this model I can't figure out how to present the middle part ( 2 * Alpha * R[t-1]*theta*sigma[t-1]) with filter function or is it even possible to code it this way. Any help would appreciated.Here is my slower code: \"Ngarch\" <- function(rtn,value){ # Estimation of a non-symmertic GARCH, NGARCH(1,1), model. # Assume normal innovations # rtn: return series # # The likelihood function \"mxlk\" can be modified to fit more general NGARCH # models. # obtain initial estimates rtn=as.matrix(rtn) mu=mean(rtn) par=c(0.0001,0.8,0.01,0.7) # mxlk <- function(par){ mxlk=0 ht=var(rtn) T=length(rtn) if(T > 40)ht=var(rtn[1:40]) at=rtn-mu for (i in 2:T){ sig2t=par+par*ht[i-1]+par*ht[i-1]*(at[i-1]/sqrt(ht[i-1])-par)^2 ht[i]=sig2t mxlk=mxlk+0.5*(log(sig2t) + (at[i])^2/ht[i]) } mxlk } low=c(1e-6,1e-6,1e-6,1e-6) upp=c(100*var(rtn),1-1e-6,1-1e-6,5) mm=optim(par,mxlk,method=\"Nelder-Mead\",hessian=T) #mm=nlminb(par,mxlk,hessian=T,scale=1,lower=low,upper=upp) #mm=optimx(par,mxlk,method=\"L-BFGS-B\",hessian=T,lower=low,upper=upp) ## Print the results par=mm\\$par H=mm\\$hessian Hi = solve(H) cat(\" \",\"\\n\") cat(\"Estimation results of NGARCH(1,1) model:\",\"\\n\") cat(\"estimates: \",par,\"\\n\") se=sqrt(diag(Hi)) cat(\"std.errors: \",se,\"\\n\") tra=par/se cat(\"t-ratio: \",tra,\"\\n\") # compute the volatility series and residuals ht=var(rtn) T=length(rtn) if(T > 40)ht=var(rtn[1:40]) at=rtn-mu for (i in 2:T){ sig2t=par+par*ht[i-1]+par*(at[i-1]-par*sqrt(ht[i-1]))^2 ht=c(ht,sig2t) } sigma.t=sqrt(ht) Ngarch <- as.matrix(cbind(as.numeric(at),as.numeric(sigma.t))) colnames(Ngarch)=c(\"residuals\",\"volatility\") Ngarch } I think that Alexios has this function in his package (called NAGARCH) but I need to code this myself for my master work. I already implemented (read ported) his ugarch function in mathematica wolfram and it's working great. [[alternative HTML version deleted]] _______________________________________________ [hidden email] mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-finance-- Subscriber-posting only. If you want to post, subscribe first. -- Also note that this is not the r-help list where general R questions should go.\n In reply to this post by Miske As someone who has come across this time and again in regards to John Ehlers, my recommendation is to compute any non-adoptive coefficients then go into Rcpp. Sent from my T-Mobile 4G LTE device ------ Original message------ From: Patrick Burns Date: Wed, 4/30/2014 6:42 PMTo: Milos Cipovic;[hidden email];Subject:Re: [R-SIG-Finance] efficient code for nonlinear garch model ```It should be easy enough to use 'Rcpp' to write the likelihood in C++ and gain a bunch of speed. Pat On 30/04/2014 20:21, Milos Cipovic wrote: > Hi, > I'm trying to think of efficient code for NGARCH model, > sigma[t]^2=omega + alpha*(R[t-1]-theta*sigma[t-1])^2 + beta*(sigma[t-1)]^2 > from book (Elements of Financial Risk Menagment-Peter F Christoffersen page > 77). > > I already coded it the \"usual way\" but it's taking too long (around 30 s). > Now I'm trying somehow to implement -filter- function like in paper - > Parameter Estimation of ARMA Models with > GARCH/APARCH Errors An R and SPlus Software Implementation - page 6. > The problem I can't overcome is that in this model I can't figure out how > to present the middle part ( 2 * Alpha * R[t-1]*theta*sigma[t-1]) with > filter function or is it even possible to code it this way. > > Any help would appreciated.Here is my slower code: > > \"Ngarch\" <- function(rtn,value){ > # Estimation of a non-symmertic GARCH, NGARCH(1,1), model. > # Assume normal innovations > # rtn: return series > # > # The likelihood function \"mxlk\" can be modified to fit more general > NGARCH > # models. > # obtain initial estimates > rtn=as.matrix(rtn) > mu=mean(rtn) > par=c(0.0001,0.8,0.01,0.7) > # > mxlk <- function(par){ > mxlk=0 > ht=var(rtn) > T=length(rtn) > if(T > 40)ht=var(rtn[1:40]) > at=rtn-mu > for (i in 2:T){ > > sig2t=par+par*ht[i-1]+par*ht[i-1]*(at[i-1]/sqrt(ht[i-1])-par)^2 > ht[i]=sig2t > mxlk=mxlk+0.5*(log(sig2t) + (at[i])^2/ht[i]) > } > mxlk > } > > low=c(1e-6,1e-6,1e-6,1e-6) > upp=c(100*var(rtn),1-1e-6,1-1e-6,5) > mm=optim(par,mxlk,method=\"Nelder-Mead\",hessian=T) > #mm=nlminb(par,mxlk,hessian=T,scale=1,lower=low,upper=upp) > #mm=optimx(par,mxlk,method=\"L-BFGS-B\",hessian=T,lower=low,upper=upp) > ## Print the results > par=mm\\$par > H=mm\\$hessian > Hi = solve(H) > cat(\" \",\"\\n\") > cat(\"Estimation results of NGARCH(1,1) model:\",\"\\n\") > cat(\"estimates: \",par,\"\\n\") > se=sqrt(diag(Hi)) > cat(\"std.errors: \",se,\"\\n\") > tra=par/se > cat(\"t-ratio: \",tra,\"\\n\") > # compute the volatility series and residuals > > ht=var(rtn) > T=length(rtn) > if(T > 40)ht=var(rtn[1:40]) > at=rtn-mu > for (i in 2:T){ > sig2t=par+par*ht[i-1]+par*(at[i-1]-par*sqrt(ht[i-1]))^2 > ht=c(ht,sig2t) > } > sigma.t=sqrt(ht) > Ngarch <- as.matrix(cbind(as.numeric(at),as.numeric(sigma.t))) > colnames(Ngarch)=c(\"residuals\",\"volatility\") > Ngarch > } > > > > > I think that Alexios has this function in his package (called NAGARCH) but > I need to code this myself for my master work. > I already implemented (read ported) his ugarch function in mathematica > wolfram and it's working great. > > [[alternative HTML version deleted]] > > _______________________________________________ > [hidden email] mailing list > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > -- Subscriber-posting only. If you want to post, subscribe first. > -- Also note that this is not the r-help list where general R questions should go. > -- Patrick Burns [hidden email] http://www.burns-stat.com http://www.portfolioprobe.com/blog twitter: @burnsstat @portfolioprobe _______________________________________________ [hidden email] mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-finance -- Subscriber-posting only. If you want to post, subscribe first. -- Also note that this is not the r-help list where general R questions should go. ```_______________________________________________ [hidden email] mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-finance-- Subscriber-posting only. If you want to post, subscribe first. -- Also note that this is not the r-help list where general R questions should go."
] | [
null,
"https://r.789695.n4.nabble.com/images/view-classic.gif",
null,
"https://r.789695.n4.nabble.com/images/view-list.gif",
null,
"https://r.789695.n4.nabble.com/images/view-threaded.gif",
null,
"https://r.789695.n4.nabble.com/images/pin.png",
null,
"https://r.789695.n4.nabble.com/images/gear.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.65493166,"math_prob":0.7685731,"size":2656,"snap":"2020-10-2020-16","text_gpt3_token_len":909,"char_repetition_ratio":0.1040724,"word_repetition_ratio":0.038961038,"special_character_ratio":0.33283132,"punctuation_ratio":0.13287905,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9945945,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T15:19:18Z\",\"WARC-Record-ID\":\"<urn:uuid:a5efd53b-32fe-4cf7-b814-a87591d37a60>\",\"Content-Length\":\"57619\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b475450c-b7ba-4bbb-8aff-b417a6948c1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:dbd66efb-e760-47b7-81e5-8ba49e806f3d>\",\"WARC-IP-Address\":\"199.38.86.66\",\"WARC-Target-URI\":\"https://r.789695.n4.nabble.com/efficient-code-for-nonlinear-garch-model-td4689751.html\",\"WARC-Payload-Digest\":\"sha1:J76YEE5ZSS7Q6S4RXS5RT4O7TQXRBLHB\",\"WARC-Block-Digest\":\"sha1:OEVEHPHSZ2OS7JF33PRUPGNMS3KZEMQL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145960.92_warc_CC-MAIN-20200224132646-20200224162646-00432.warc.gz\"}"} |
https://www.math.tolaso.com.gr/?p=1795 | [
"Home » Uncategorized » On a log Gamma integral using Riemann sums\n\n# On a log Gamma integral using Riemann sums\n\nEvaluate the integral",
null,
"using Riemann sums.\n\nSolution\n\nPartition the interval",
null,
"into",
null,
"subintervals of length",
null,
". This produces,\n\n(1)",
null,
"On the other hand, assuming",
null,
"is even:",
null,
"since it holds that",
null,
"Euler’s Gamma reflection formula was used at line",
null,
". Letting",
null,
"we get that",
null,
"If",
null,
"is odd we work similarly.\n\n### Who is Tolaso?\n\nFind out more at his Encyclopedia Page.\n\n### Donate to Tolaso Network",
null,
""
] | [
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-cafdccd80b3e765359902c065fb06f64_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-da23261b3ec304a568b90b8fc2a7dcb7_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-04fb2c03f812845fbdd43133d181b97b_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-03d2ec401284a44d13ee373cc72626de_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-4f3ebcd489b87ca141f8052d706485ec_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-04fb2c03f812845fbdd43133d181b97b_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-d297b339dc3a036e5dad3d166079938e_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-8571b99f9102ac33f6e50906132b3ce4_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-4e7ebd544f208c296cea3523a698ae1d_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-63da3f6c94c4a844daf93d86787afe7b_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-47f3707f62b3e4a56ab569df550bb1c4_l3.svg",
null,
"https://www.math.tolaso.com.gr/wp-content/ql-cache/quicklatex.com-04fb2c03f812845fbdd43133d181b97b_l3.svg",
null,
"https://www.paypal.com/EN_US/i/scr/pixel.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8685955,"math_prob":0.9214525,"size":280,"snap":"2020-10-2020-16","text_gpt3_token_len":66,"char_repetition_ratio":0.097826086,"word_repetition_ratio":0.0,"special_character_ratio":0.20714286,"punctuation_ratio":0.12962963,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.990797,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,1,null,3,null,null,null,1,null,1,null,null,null,1,null,1,null,3,null,2,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-10T06:49:25Z\",\"WARC-Record-ID\":\"<urn:uuid:6ea33011-f2af-43a6-8c61-0428f2c393e3>\",\"Content-Length\":\"39134\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc893516-cd37-40dd-9301-65039506950f>\",\"WARC-Concurrent-To\":\"<urn:uuid:269113e1-ed03-4bee-a4b4-59a83dae9fdd>\",\"WARC-IP-Address\":\"195.154.207.165\",\"WARC-Target-URI\":\"https://www.math.tolaso.com.gr/?p=1795\",\"WARC-Payload-Digest\":\"sha1:A2XRT2TVFH3RFV6MGMBLH4ECZNHBZALW\",\"WARC-Block-Digest\":\"sha1:PUEITEXFHDCKENYABA7MUB7E2Q6J5BXD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371886991.92_warc_CC-MAIN-20200410043735-20200410074235-00506.warc.gz\"}"} |
https://math.stackexchange.com/questions/1279861/why-intuitively-is-the-order-reversed-when-taking-the-transpose-of-the-product | [
"# Why, intuitively, is the order reversed when taking the transpose of the product?\n\nIt is well known that for invertible matrices $A,B$ of the same size we have $$(AB)^{-1}=B^{-1}A^{-1}$$ and a nice way for me to remember this is the following sentence:\n\nThe opposite of putting on socks and shoes is taking the shoes off, followed by taking the socks off.\n\nNow, a similar law holds for the transpose, namely:\n\n$$(AB)^T=B^TA^T$$\n\nfor matrices $A,B$ such that the product $AB$ is defined. My question is: is there any intuitive reason as to why the order of the factors is reversed in this case?\n\n[Note that I'm aware of several proofs of this equality, and a proof is not what I'm after]\n\nThank you!\n\n• The transpose identity holds just as well when $A$ and $B$ are not square; if $A$ has size $m \\times n$ and $B$ has size $n \\times p$, where $p \\neq m$, then the given order is the only possible order of multiplication of $A^T$ and $B^T$. May 13, 2015 at 6:48\n• @Travis I never required the matrices to be square for the transpose identity: all I said was that the product $AB$ must be defined. May 13, 2015 at 6:53\n• @user1337, I think Travis was just using non-square matrices as a way of seeing why we must have $(AB)^T = B^T A^T$ and not $A^TB^T$. If $A$ is $l \\times m$ and $B$ is $m \\times n$ then $AB$ makes sense and $B^T A^T$ is an $n \\times m$ times a $m \\times l$ which makes sense, but $A^T B^T$ doesn't work. May 13, 2015 at 6:58\n• @user1337 Jair's right, I didn't intend my comment as a sort of correction, just as an explanation that if there is some identity for $(AB)^T$ of the given sort that holds for all matrix products, matrix size alone forces a particular order. (BTW, Jair, I finished my Ph.D. at Washington a few years ago.) May 13, 2015 at 8:05\n• Obligatory remark: the \"socks and shoes\" metaphor is due to Coxeter (An Introduction to Geometry, 2/e, p.33). Mar 30, 2019 at 15:14\n\nOne of my best college math professor always said:\n\nMake a drawing first.",
null,
"Although, he couldn't have made this one on the blackboard.\n\n• This is an absolutely beautiful explanation. May 13, 2015 at 19:32\n• This is the most trite answer i have ever seen. I can't decide whether it deserves an upvote or a downvote. May 16, 2015 at 9:16\n• (+1) @enthdegree: An upvote, particularly if the drawing were augmented to indicate the $i$th row of $A$, the $j$th column of $B$, and the $(i, j)$ entry of $AB$ (all in marker heavy enough to bleed through the paper, of course), so that when the paper is flipped the OP's question is immediately answered with justification. :) May 16, 2015 at 11:56\n• Sparsity in this drawing is by design. The more clutter you add, the more confuse it becomes. Readers familiar with matrix multiplication will probably have drawn this two-faced $(i,j)$ double-line in their mind.\n– mdup\nMay 17, 2015 at 10:51\n• Can anyone explainme the picture ? Mar 30, 2019 at 12:46\n\nBy dualizing $AB: V_1\\stackrel{B}{\\longrightarrow} V_2\\stackrel{A}{\\longrightarrow}V_3$, we have $(AB)^T: V_3^*\\stackrel{A^T}{\\longrightarrow}V_2^*\\stackrel{B^T}{\\longrightarrow}V_1^*$.\n\nEdit: $V^*$ is the dual space $\\text{Hom}(V, \\mathbb{F})$, the vector space of linear transformations from $V$ to its ground field, and if $A: V_1\\to V_2$ is a linear transformation, then $A^T: V_2^*\\to V_1^*$ is its dual defined by $A^T(f)=f\\circ A$. By abuse of notation, if $A$ is the matrix representation with respect to bases $\\mathcal{B}_1$ of $V_1$ and $\\mathcal{B}_2$ of $V_2$, then $A^T$ is the matrix representation of the dual map with respect to the dual bases $\\mathcal{B}_1^*$ and $\\mathcal{B}_2^*$.\n\n• In other words: the dualizing functor $V \\rightarrow V^*$ is contravariant. May 13, 2015 at 6:47\n• I think you might elaborate though, explaining what $V^*$ is and what $A^T$ means in this context. Otherwise it seems more like a comment than an answer. May 13, 2015 at 6:50\n• Actually I was trying to make my answer as concise as possible as the OP does not looking for a proof. Sure I could have been more precise. May 13, 2015 at 6:54\n• @AlexFok looks neat, however I don't know what dualizing means. Can you please elaborate? May 13, 2015 at 6:56\n• I wish it would be emphasised more in teaching that transposing makes linear operators switch to work on the dual spaces. Many people – at least in science, not sure how it is in maths – aren't aware of this at all; I was never explicitly taught about it and it was a huge a-ha moment when I first found out. May 13, 2015 at 20:01\n\nHere's another argument. First note that if $v$ is a column vector then $(Mv)^T = v^T M^T$. This is not hard to see - if you write down an example and do it both ways, you will see you are just doing the same computation with a different notation. Multiplying the column vector $v$ on the right by the rows of $M$ is the same as multiplying the row vector $v^T$ on the left by the columns of $M^T$.\n\nNow let $( \\cdot , \\cdot )$ be the usual inner product on $\\mathbb{R}^n$, that is, the dot product. Then the transpose $N = M^T$ of a matrix $M$ is the unique matrix $N$ with the property\n\n$$(Mu, v) = (u, Nv).$$\n\nThis is just a consequence of associativity of matrix multiplication. The dot product of vectors $u,v$ is given by thinking of $u,v$ as column vectors, taking the transpose of one and doing the dot product: $(u,v) = u^T v$.\n\nThen $(Mu,v) = (Mu)^T v = (u^T M^T) v = u^T (M^Tv) = (u, M^Tv)$.\n\nExercise: Show uniqueness!\n\nWith this alternate definition we can give a shoes-and-socks argument. We have\n\n$$( ABu, v) = (Bu, A^Tv) = (u, B^TA^Tv)$$\n\nfor all $u,v$, and so $(AB)^T = B^T A^T$. The argument is exactly the same as the one for inverses, except we are \"moving across the inner product\" instead of \"undoing\".\n\n• The second part is the best way of looking at this. The point is that the transpose is not really that natural of an operation by itself: it is important because it is the adjoint operation for the (real) Euclidean dot product. And the adjoint operation for any inner product has the property in question, for the same reason that the inverse operation has the property in question.\n– Ian\nMay 13, 2015 at 15:22\n\nEach element of the matrix $AB$ is the inner product of a row of $A$ with a column of $B$.\n\n$(AB)^T$ has the same elements that $AB$ does (just in different places), so its elements too must each come from a row of $A$ and a column of $B$.\n\nHowever if we want to start with $A^T$ and $B^T$, then a row of $A$ is the same thing as a column of $A^T$ (and vice versa for $B$ and $B^T$), so we need something that has columns of $A^T$ and rows of $B^T$. The matrix that we take columns from is always the right factor, to $A^T$ must be the right factor in the multiplication.\n\nSimilarly, $B^T$ must be the left factor because we need its rows (which are columns of the original $B$).\n\n• Your first sentence begins by relating all the entries of $AB$ to the inner product. Considering how the inner product is the 'mantle centerpiece' of modern/abstract vector/Hilbert space theory, this is where to look for any 'intuitive insights'. (+1) Jun 12, 2019 at 14:24\n• You know for sure (combinatorics/counting) that the entries in $B^t A^t$ can be matched up $1:1$ with the entries of $(AB)^t$. Surely scrambled eggs is not what we expect! So check that any two matrix entries actually agree - intuition morphing into a complete proof. Jun 12, 2019 at 14:45\n• Managed to find a recent question to put write this up math.stackexchange.com/a/3259932/432081 Jun 12, 2019 at 15:27\n\nA matrix is a collection of entries that may be represented with 2 indices. When we multiply two matrices, each resultant entry is the sum of the products\n\n$$C_{ik} = \\sum_j A_{ij} B_{jk}$$\n\nCrucially, the 'middle' index, $j$, must be the same for both matrices (the first must be as wide as the second is tall).\n\nA transpose is just a reversal of indices:\n\n$$A_{ij}^T = A_{ji}$$\n\nIt should now go without saying that\n\n$$C_{ik}^T = C_{ki} = (\\sum_j A_{ij} B_{jk})^T = \\sum_j B_{kj} A_{ji}$$\n\nMemory shortcut: multiplication fails immediately for non-square matrices when you forget to commute for a transpose.\n\nthe intuitive reason is that the entries of a product matrix are feynman path integrals, and transposing the matrixes corresponds simply to reversing the arrow of time for traveling along the paths.\n\n(so it's practically the same idea as in your shoes-and-socks example: matrix transposition is about time-reversal, just like function inversion is about time-reversal.)\n\nthe (i,k)th entry in a product matrix ab is the sum over j of a(i,j).b(j,k). in other words, it's a sum over all \"2-step paths\" (i,j,k) from i to k, each path visiting one intermediate point j on its way from i to k.\n\nthis sum over paths is called a \"feynman path integral\". if you read feynman's original paper on the subject, focusing on the parts that are easy to understand, you'll see that that was feynman's basic message: that whenever you have a big long string of matrixes to multiply, each entry in the product matrix is a \"sum over paths\" aka \"path integral\", with the contribution of each particular path being a long product of \"transition quantities\", each associated with one transition-step along the path.\n\nthis \"path\" interpretation of matrix multiplication actually gets more intuitive for longer strings of matrixes, because then each path consists of many steps. for example each entry of a matrix product abc...z is a sum over 26-step paths; each path visits 27 points but with just 26 transition-steps from one point to the next.\n\n• I very much doubt that this explanation will help the OP, but I found the analogy to Feynman path integrals told me something important about the path integrals. I'm not a physicist and never looked past a paragraph or two about them. Now I can see that they resemble counting paths in a graph by looking at powers of the adjacency matrix. May 14, 2015 at 14:45\n• I don't think we really need to attach Feynman's name here, but in general this combinatorial view of matrix multiplication as a sum over walks is very helpful. The time-reversal interpretation.is a pretty useful way of looking at the transpose. May 15, 2015 at 1:04\n\n$$\\hspace{3cm}$$",
null,
"Turn (transpose) to the street $$B^T$$ perpendicular $$B$$, then turn (transpose) to $$A^T$$ perpendicular $$A$$.\n\n[this is an attempt to combine two previously given answers, mdup's video demo and my \"path-sum\" story, so it might help to refer to those.]\n\nafter watching mdup's video demo i started wondering how it relates to the \"path-sum\" interpretation of matrix multiplication. the key seems to be that mdup's hand-drawn picture of the matrix product AB wants to be folded up to form the visible faces of an oblong box whose three dimensions correspond precisely to the points i, j, and k in a three-point path (i,j,k). this is illustrated by the pairs of pictures below, each pair showing the oblong box first in its folded-up 3-dimensional form and then in its flattened-out 2-dimensional form. in each case the box is held up to a mirror to portray the effect of transposition of matrixes.\n\nin the first pair of pictures, the i, j, and k axises are marked, and in the folded-up 3-dimensional form you can see how transposition reverses the order of the axises from i,j,k to k,j,i. in the flattened-out 2-dimensional form you can see how it wants to be folded up because the edges marked j are all the same length (and also, because it was folded up like that when i bought the soap).\n\nthe second pair of pictures indicate how an entry of the product matrix is calculated. in the flattened-out 2-dimensional form, a row of the first matrix is paired with a column of the second matrix, whereas in the folded-up 3-dimensional form, that \"row\" and that \"column\" actually lie parallel to each other because of the 3d arrangement.\n\nin other words, each 3-point path (i,j,k) corresponds to a location inside the box, and at that location you write down (using a 3-dimensional printer or else just writing on the air) the product of the transition-quantities for the two transition-steps in the path, A_[i,j] for the transition-step from i to j and B_[j,k] for the transition-step from j to k. this results in a 3-dimensional matrix of numbers written on the air inside the box, but since the desired matrix product AB is only a 2-dimensional matrix, the 3-dimensional matrix is squashed down to 2-dimensional by summing over the j dimension. this is the path-sum- in order for two paths to contribute to the same path-sum they're required to be in direct competition with each other, beginning at the same origin i and ending at the same destination k, so the only index that we sum over is the intermediate index j.\n\nthe 3-dimensional folded-up form and the 2-dimensional flattened-out form have each their own advantages and disadvantages. the 3-dimensional folded-up form brings out the path-sums and the 3-dimensional nature of matrix multiplication, while the 2-dimensional flattened-out form is better-adapted to writing the calculation down on 2-dimensional paper (which remains easier than writing on 3-dimensional air even still today).\n\nanyway, i'll get off my soapbox for now ...\n\nThey key property of the transpose is that it is the unique matrix which satisfies $$\\langle Ax, y \\rangle = \\langle x, A^T y \\rangle$$ for all $$x,y$$. Notice that \\begin{align} \\langle B A x, y \\rangle &= \\langle Ax, B^T y \\rangle \\\\ &= \\langle x, A^T B^T y \\rangle \\end{align} for all $$x,y$$. This shows that $$(BA)^T = A^T B^T.$$\n\n(Short form of Jair Taylor's answer)\n\nIn the expresson $v^tA B w$, vectors $v$ and $w$ \"see\" $A$ and $B$ from different ends, hence in different order.\n\nConsidering the dimensions of the various matrices shows that reversing the order is necessary.\n\nIf A is $m \\times p$ and B is $p \\times n$,\n\nAB is $m \\times n$,\n\n(AB)$^T$ is $n \\times m$\n\nA$^T$ is $p \\times m$ and B$^T$ is $n \\times p$\n\nThus B$^T$A$^T$ has the same dimension as(AB)$^T$\n\nIn electrical engineering this can be illustrated nicely as time reversal of FIR filters.\n\nConvolution with a FIR filter can be represented by a circulant or Toeplitz matrix. How far from the diagonal the values are, the further into the future or past they select their values.\n\nThe trivial example is probably a permutation matrix which is power of cyclic group generator element representation matrix.\n\nI'm going to make @JohnCFrain's answer a little more intuitive.\n\nLets say that we have matrix $$A$$ which is\n\n$$m*p$$\n\nAnd we have matrix $$B$$ which is\n\n$$p*n$$\n\nRemember that columns of A has to equal rows of B\n\nThen we take $$A^T$$, it is\n\n$$p*m$$\n\nBecause the rows and columns switch.\n\nAnd we have $$B^T$$, which is\n\n$$n*p$$\n\nBut, we have to switch because the columns of $$A^T$$ are not equal to the rows of $$B^T$$ --> $$m \\neq n$$, so we can't multiply (which is why we switched).\n\nHope this helped someone who happened to scroll down (4 years and 2 months after original post)! :D"
] | [
null,
"https://i.stack.imgur.com/uGxff.gif",
null,
"https://i.stack.imgur.com/fUnp3.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93946123,"math_prob":0.9967293,"size":2944,"snap":"2023-14-2023-23","text_gpt3_token_len":667,"char_repetition_ratio":0.1632653,"word_repetition_ratio":0.008179959,"special_character_ratio":0.22078805,"punctuation_ratio":0.09405941,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995937,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T08:59:38Z\",\"WARC-Record-ID\":\"<urn:uuid:7c2fe629-ef97-437a-8e19-edd3470958bf>\",\"Content-Length\":\"289385\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f721db6a-95dc-4986-9891-0ade6b711423>\",\"WARC-Concurrent-To\":\"<urn:uuid:3da63928-598d-41f9-af72-e4cef73fca17>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1279861/why-intuitively-is-the-order-reversed-when-taking-the-transpose-of-the-product\",\"WARC-Payload-Digest\":\"sha1:M23NYRLAX3GJVO6VEELH2NWBSZVGNXV5\",\"WARC-Block-Digest\":\"sha1:A3LJUFQGQAOHIBWZMMVY6FCKQWUMVXT5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945279.63_warc_CC-MAIN-20230324082226-20230324112226-00118.warc.gz\"}"} |
https://visualfractions.com/calculator/factors/factors-of-886/ | [
"# Factors of 886\n\nSo you need to find the factors of 886 do you? In this quick guide we'll describe what the factors of 886 are, how you find them and list out the factor pairs of 886 for you to prove the calculation works. Let's dive in!\n\n## Factors of 886 Definition\n\nWhen we talk about the factors of 886, what we really mean is all of the positive and negative integers (whole numbers) that can be evenly divided into 886. If you were to take 886 and divide it by one of its factors, the answer would be another factor of 886.\n\nLet's look at how to find all of the factors of 886 and list them out.\n\n## How to Find the Factors of 886\n\nWe just said that a factor is a number that can be divided equally into 886. So the way you find and list all of the factors of 886 is to go through every number up to and including 886 and check which numbers result in an even quotient (which means no decimal place).\n\nDoing this by hand for large numbers can be time consuming, but it's relatively easy for a computer program to do it. Our calculator has worked this out for you. Here are all of the factors of 886:\n\n• 886 ÷ 1 = 886\n• 886 ÷ 2 = 443\n• 886 ÷ 443 = 2\n• 886 ÷ 886 = 1\n\nAll of these factors can be used to divide 886 by and get a whole number. The full list of positive factors for 886 are:\n\n1, 2, 443, and 886\n\n## Negative Factors of 886\n\nTechnically, in math you can also have negative factors of 886. If you are looking to calculate the factors of a number for homework or a test, most often the teacher or exam will be looking for specifically positive numbers.\n\nHowever, we can just flip the positive numbers into negatives and those negative numbers would also be factors of 886:\n\n-1, -2, -443, and -886\n\n## How Many Factors of 886 Are There?\n\nAs we can see from the calculations above there are a total of 4 positive factors for 886 and 4 negative factors for 886 for a total of 8 factors for the number 886.\n\nThere are 4 positive factors of 886 and 4 negative factors of 886. Wht are there negative numbers that can be a factor of 886?\n\n## Factor Pairs of 886\n\nA factor pair is a combination of two factors which can be multiplied together to equal 886. For 886, all of the possible factor pairs are listed below:\n\n• 1 x 886 = 886\n• 2 x 443 = 886\n\nJust like before, we can also list out all of the negative factor pairs for 886:\n\n• -1 x -886 = 886\n• -2 x -443 = 886\n\nNotice in the negative factor pairs that because we are multiplying a minus with a minus, the result is a positive number.\n\nSo there you have it. A complete guide to the factors of 886. You should now have the knowledge and skills to go out and calculate your own factors and factor pairs for any number you like.\n\nFeel free to try the calculator below to check another number or, if you're feeling fancy, grab a pencil and paper and try and do it by hand. Just make sure to pick small numbers!\n\n## Factors Calculator\n\nWant to find the factor for another number? Enter your number below and click calculate.\n\nFactors of 887"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94506794,"math_prob":0.973412,"size":2938,"snap":"2020-45-2020-50","text_gpt3_token_len":732,"char_repetition_ratio":0.21165644,"word_repetition_ratio":0.011945393,"special_character_ratio":0.2886317,"punctuation_ratio":0.07915994,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99831736,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T03:19:23Z\",\"WARC-Record-ID\":\"<urn:uuid:0e7bccc4-d9e7-4751-8064-ef9a359bd326>\",\"Content-Length\":\"28309\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bfb85775-45dd-42a0-81ca-112778fc4f69>\",\"WARC-Concurrent-To\":\"<urn:uuid:847e8fb8-25df-4064-a06d-7ea3948417f7>\",\"WARC-IP-Address\":\"35.175.60.16\",\"WARC-Target-URI\":\"https://visualfractions.com/calculator/factors/factors-of-886/\",\"WARC-Payload-Digest\":\"sha1:MTDLBE23MQUSBF3UFGAVSXGOXBNE7FLM\",\"WARC-Block-Digest\":\"sha1:MPM5RJR6PKJ5S7VNC3XZKAHGBWZXBWDK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141746033.87_warc_CC-MAIN-20201205013617-20201205043617-00537.warc.gz\"}"} |
http://www.oicq88.cc/yijing/ | [
"• 聆听ゝ╮锦云漫卷的柔暖\n\n• 逆光飞翔 i\n\n• 满山ぐ星光垂\n\n• 天堂鸟丶开满了塘边\n\n• ﹌清风暖阳\n\n• 冰蓝色的雨\n\n• 摩天輪的仰望\n\n• @ 凉宸\n\n• 布拉格广场ˋ旋转忧伤\n\n• 斜阳云云美\n\n• 雨夜的街道、\n\n• ー場邂逅旳吢動ヽ\n\n• 让人沉醉的爱情\n\n• 泪湿床\n\n• 曾经的回忆美好到让我流泪\n\n• 烟雨江畔\n\n• 半岛未凉°\n\n• 一曲女人花\n\n• 一滴水墨\n\n• 梦幻的心爱\n\n• 冬天的雪花\n\n• 在梦里丶\n\n• 没有那么妖艳\n\n• 意境__美.\n\n• ┛。人生若祗如初见丶\n\n• 仰頭陪你看星星ˋ\n\n• 春日夕后那—缕艳阳。\n\n• 恍若初见¢\n\n• 雨巷深深\n\n• 烟花易冷。\n\n• 心云间、凝听\n\n• 七色彩虹\n\n• 轻雾山林\n\n• 烟雨扶苏゜\n\n• 微光,倾城\n\n• - 梦年海沫深\n\n• 最暖话\n\n• 失忆三叶草\n\n• 茶暖桉\n\n• 季末、如歌\n\n• 北海以北深海未眠\n\n• 聆厛、埖雨\n\n• 森迷@\n\n• 等数载,海棠开\n\n• 太阳俯视向日葵い\n\n• 〆期待下一次花开゛\n\n• 满天都是小星星。\n\n• 谱写我们的阳光。\n\n• 旧巷里的少年郎\n\n• #夏花薄少为荒年°\n\n• 平行线一样\n\n• 外面雨停了\n\n• 落日桥头细感风\n\n• ※雨芐姒後\n\n• ︶ㄣ星星会哭ㄣ\n\n• 陌路黄昏 。\n\n• 葵雨\n\n• 夏末的晨曦\n\n• 没有太阳的晴天\n\n• 满目逢秋\n\n• `愿为果\n\n• 清浅ˋ旧时光ァ\n\n• 天边シ深海\n\n• 半墓尘沙゜\n\n• 彼岸流年之歌未央\n\n• +_+那夜我哭叻\n\n• 绝不认输!\n\n• 惧有何用\n\n• 零度的毒酒\n\n• 肆意的妖娆°\n\n• 不屌不二不是我\n\n• 。親昵╮\n\n• 森拥萤火\n\n• 简愛』\n\n• ヾ不想放弃\n\n• 肥波是只猫\n\n• 小红红帽。\n\n• 隐退的王\n\n• 长刘海遮住眼角的湿润°\n\n• 还小ヽ莫言一辈子\n\n• 男仙\n\n• 生活的磨难\n\n• 話箛揂\n\n• 臆想症重度病人\n\n• 面团小金刚@!!!\n\n• Saimoe\n\n• 令我的爱人微笑"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.5719243,"math_prob":0.54249954,"size":613,"snap":"2020-24-2020-29","text_gpt3_token_len":866,"char_repetition_ratio":0.0,"word_repetition_ratio":0.0,"special_character_ratio":0.22675367,"punctuation_ratio":0.039215688,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9549728,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T16:18:33Z\",\"WARC-Record-ID\":\"<urn:uuid:79341c7a-bc28-43ca-9b42-d7ee955c2abc>\",\"Content-Length\":\"8854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:857b31f2-33fb-4ecc-b7ef-ec6cbfc95b53>\",\"WARC-Concurrent-To\":\"<urn:uuid:63002238-3c62-46b5-a27e-1ff65f1c2dd4>\",\"WARC-IP-Address\":\"107.183.184.187\",\"WARC-Target-URI\":\"http://www.oicq88.cc/yijing/\",\"WARC-Payload-Digest\":\"sha1:SFKIBEONA5C6ZR7MQ6TT4WCQFKRHTQUG\",\"WARC-Block-Digest\":\"sha1:75XF3YVM5WN57JFQBYRHD26OIWTHM4GD\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655879532.0_warc_CC-MAIN-20200702142549-20200702172549-00030.warc.gz\"}"} |
https://www.crazy-numbers.com/en/21070 | [
"Discover a lot of information on the number 21070: properties, mathematical operations, how to write it, symbolism, numerology, representations and many other interesting things!\n\n## Mathematical properties of 21070\n\nIs 21070 a prime number? No\nIs 21070 a perfect number? No\nNumber of divisors 24\nList of dividers 1, 2, 5, 7, 10, 14, 35, 43, 49, 70, 86, 98, 215, 245, 301, 430, 490, 602, 1505, 2107, 3010, 4214, 10535, 21070\nSum of divisors 45144\nPrime factorization 2 x 5 x 72 x 43\nPrime factors 2, 5, 7, 43\n\n## How to write / spell 21070 in letters?\n\nIn letters, the number 21070 is written as: Twenty-one thousand seventy. And in other languages? how does it spell?\n\n21070 in other languages\nWrite 21070 in english Twenty-one thousand seventy\nWrite 21070 in french Vingt et un mille soixante-dix\nWrite 21070 in spanish Veintiuno mil setenta\nWrite 21070 in portuguese Vinte e um mil setenta\n\n## Decomposition of the number 21070\n\nThe number 21070 is composed of:\n\n1 iteration of the number 2 : The number 2 (two) represents double, association, cooperation, union, complementarity. It is the symbol of duality.... Find out more about the number 2\n\n1 iteration of the number 1 : The number 1 (one) represents the uniqueness, the unique, a starting point, a beginning.... Find out more about the number 1\n\n2 iterations of the number 0 : ... Find out more about the number 0\n\n1 iteration of the number 7 : The number 7 (seven) represents faith, teaching. It symbolizes reflection, the spiritual life.... Find out more about the number 7\n\nOther ways to write 21070\nIn letter Twenty-one thousand seventy\nIn roman numeral\nIn binary 101001001001110\nIn octal 51116\nIn US dollars USD 21,070.00 (\\$)\nIn euros 21 070,00 EUR (€)\nSome related numbers\nPrevious number 21069\nNext number 21071\nNext prime number 21089\n\n## Mathematical operations\n\nOperations and solutions\n21070*2 = 42140 The double of 21070 is 42140\n21070*3 = 63210 The triple of 21070 is 63210\n21070/2 = 10535 The half of 21070 is 10535.000000\n21070/3 = 7023.3333333333 The third of 21070 is 7023.333333\n210702 = 443944900 The square of 21070 is 443944900.000000\n210703 = 9353919043000 The cube of 21070 is 9353919043000.000000\n√21070 = 145.15508947329 The square root of 21070 is 145.155089\nlog(21070) = 9.9556055067982 The natural (Neperian) logarithm of 21070 is 9.955606\nlog10(21070) = 4.3236645356081 The decimal logarithm (base 10) of 21070 is 4.323665\nsin(21070) = 0.61463852209022 The sine of 21070 is 0.614639\ncos(21070) = -0.78880890408435 The cosine of 21070 is -0.788809\ntan(21070) = -0.7791982556329 The tangent of 21070 is -0.779198"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60001385,"math_prob":0.97264826,"size":2487,"snap":"2023-40-2023-50","text_gpt3_token_len":890,"char_repetition_ratio":0.17559405,"word_repetition_ratio":0.027707808,"special_character_ratio":0.4800965,"punctuation_ratio":0.17984189,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99154145,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T06:29:21Z\",\"WARC-Record-ID\":\"<urn:uuid:3a0fc019-c152-44c0-9dec-8ff1e407e25f>\",\"Content-Length\":\"34844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8bef5cec-8b8e-406f-aa20-0cb8da88c8e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:899d0c64-5e40-49ab-bc8f-cc2c415485df>\",\"WARC-IP-Address\":\"128.65.195.174\",\"WARC-Target-URI\":\"https://www.crazy-numbers.com/en/21070\",\"WARC-Payload-Digest\":\"sha1:J5SXXFO34JFMX5VXM6RIM6VXGIMMTPVY\",\"WARC-Block-Digest\":\"sha1:3BYNXQGGHA3G3YDDS2OWAQCM2KH7C7ZZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100056.38_warc_CC-MAIN-20231129041834-20231129071834-00133.warc.gz\"}"} |
http://casopisi.junis.ni.ac.rs/index.php/FUMathInf/article/view/4753 | [
"### ON CAPABLE GROUPS OF ORDER p4\n\nMohammad Ali Salahshour, Ali Reza Ashrafi\n\nDOI Number\nhttps://doi.org/10.22190/FUMI1904633S\nFirst page\n633\nLast page\n640\n\n#### Abstract\n\nA group $H$ is said to be capable, if there exists another group\n$G$ such that $\\frac{G}{Z(G)}~\\cong~H$, where $Z(G)$ denotes the\ncenter of $G$. In a recent paper \\cite{2}, the authors\nconsidered the problem of capability of five non-abelian $p-$groups of order $p^4$ into account. In this paper, we continue this paper by considering three other groups of order $p^4$. It is proved that the group $$H_6=\\langle x, y, z \\mid x^{p^2}=y^p=z^p= 1, yx=x^{p+1}y, zx=xyz, yz=zy\\rangle$$ is not capable. Moreover, if $p > 3$ is prime and $d \\not\\equiv 0, 1 \\ (mod \\ p)$ then the following groups are not capable:\\\\\n{\\tiny $H_7^1=\\langle x, y, z \\mid x^{9} = y^3 = 1, z^3 = x^{3}, yx = x^{4}y, zx = xyz, zy = yz \\rangle$,\\\\\n$H_7^2= \\langle x, y, z \\mid x^{p^2} = y^p = z^p = 1, yx = x^{p+1}y, zx = x^{p+1}yz, zy = x^pyz \\rangle,$ \\\\\n$H_8^1=\\langle x, y, z \\mid x^{9} = y^3 = 1, z^3 = x^{-3}, yx = x^{4}y, zx = xyz, zy = yz \\rangle$,\\\\\n$H_8^2=\\langle x, y, z \\mid x^{p^2} = y^p = z^p = 1, yx = x^{p+1}y, zx = x^{dp+1}yz, zy = x^{dp}yz \\rangle$.}\n\n#### Keywords\n\nCapable group; p−group; non-abelian p−groups; center\n\nPDF\n\n#### References\n\nR. Baer: Groups with preassigned central and central quotient groups, Trans.\n\nAmer. Math. Soc. 44 (1938) 387–412.\n\nW. Burnside: Theory of Groups of Finite Order, Cambridge University Press, Cambridge, 1897.\n\nM. Hall and J. K. Senior: The Groups of Order 2 n (n ≤ 6), Macmillan, New York, 1964.\n\nP. Hall: The classification of prime-power groups, J. reine angew. Math. 182 (1940) 130–141.\n\nR. Zainal, N. M. Mohd Ali, N. H. Sarmin and S. Rashid: On the capability of nonabelian groups of order p 4 , Proceedings of the 21st National Symposium on\n\nMathematical Sciences (SKSM21), AIP Conf. Proc. 1605 (2014) 575–579.\n\nDOI: https://doi.org/10.22190/FUMI1904633S\n\n### Refbacks\n\n• There are currently no refbacks.\n\n© University of Niš | Created on November, 2013\nISSN 0352-9665 (Print)\nISSN 2406-047X (Online)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7006786,"math_prob":0.9983865,"size":3025,"snap":"2022-40-2023-06","text_gpt3_token_len":1240,"char_repetition_ratio":0.13935784,"word_repetition_ratio":0.67424244,"special_character_ratio":0.4244628,"punctuation_ratio":0.18122555,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998291,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-26T22:01:38Z\",\"WARC-Record-ID\":\"<urn:uuid:cf1a0e49-bcc1-4dc9-95fd-1d059ae499c2>\",\"Content-Length\":\"22543\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2190425-b543-4cd7-9529-4be27d2e42c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:32a229d2-0114-4e38-9df8-81e80b443775>\",\"WARC-IP-Address\":\"160.99.2.32\",\"WARC-Target-URI\":\"http://casopisi.junis.ni.ac.rs/index.php/FUMathInf/article/view/4753\",\"WARC-Payload-Digest\":\"sha1:VXQPAJMXO6AABGW2C45V3IQTJROZVF7U\",\"WARC-Block-Digest\":\"sha1:OVVPKMKJ6HCYADHE55ETYAHL5IZEKV46\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494826.88_warc_CC-MAIN-20230126210844-20230127000844-00314.warc.gz\"}"} |
https://developer.unigine.com/docs/future/api/library/objects/class.objectdynamic | [
"# Unigine::ObjectDynamic Class\n\nObjectDynamic class allows to create a dynamic object, which can be rendered by means of any type of geometry (vertex format for an object is changeable). The class supports instancing as well as point/line/triangle rendering modes. ObjectDynamic class requires a custom shader for rendering, no built-in shaders are available.\n\n## MODE#\n\nNameDescription\nMODE_POINTS = 0Mode to render the points.\nMODE_LINES = 1Mode to render the lines.\nMODE_TRIANGLES = 2Mode to render the triangles.\nMODE_TRIANGLE_PATCHES = 3Mode to render the triangle patches.\n\n## static ObjectDynamicPtr create ( int flags = 0 ) #\n\nConstructor. Creates a new dynamic object. By default, no flags are used.\n\n## const ObjectDynamic::Attribute *getAttributes ( ) const#\n\nReturns an array of vertex attributes.\n\n### Return value\n\nArray of vertex attributes.\n\n## voidsetBoundBox ( const BoundBox & bb ) #\n\nSets a bounding box of a specified size for a given dynamic object surface.\n\n## voidsetBoundBox ( const BoundBox & bb, int surface ) #\n\nSets a bounding box of a specified size for a given dynamic object surface.\n\n### Arguments\n\n• const BoundBox & bb - Bounding box.\n• int surface - Surface number in range from 0 to the total number of dynamic mesh surfaces.\n\n## voidsetIndex ( int num, int index ) #\n\nUpdates the index in the index buffer (replaces the index with the given number with the specified index of the vertex).\n\n### Arguments\n\n• int num - Index number in the index buffer.\n• int index - Vertex index in the index buffer to set.\n\n## intgetIndex ( int num ) const#\n\nReturns the index of the vertex by the index number.\n\n### Arguments\n\n• int num - Index number.\n\n### Return value\n\nVertex index in the index buffer.\n\n## voidsetIndicesArray ( int[] indices ) #\n\nNotice\nTo apply changes you should call the flushIndices() method after updating the indices array.\n\n### Arguments\n\n• int[] indices - Indices array.\n\n## voidsetInstancing ( int instancing ) #\n\nActivates the hardware instancing technique.\n\n### Arguments\n\n• int instancing - Instancing flag. 1 to enable hardware instancing, 0 to disable it.\n\n## intgetInstancing ( ) const#\n\nReturns a value indicating if the hardware instancing flag is enabled.\n\n### Return value\n\n1 if the hardware instancing flag is enabled; otherwise, 0.\n\n## intgetNumAttributes ( ) const#\n\nReturns the number of vertex attributes.\n\n### Return value\n\nNumber of vertex attributes.\n\n## voidsetNumIndices ( int indices ) #\n\nSets the number of vertex indices.\n\n### Arguments\n\n• int indices - Number of indices.\n\n## intgetNumIndices ( ) const#\n\nReturns the number of vertex indices used by the object.\n\n### Return value\n\nNumber of indices.\n\n## voidsetNumVertex ( int vertex ) #\n\nSets the number of mesh vertices.\n\n### Arguments\n\n• int vertex - Number of mesh vertices.\n\n## intgetNumVertex ( ) const#\n\nReturns the number of vertices composing the object.\n\n### Return value\n\nNumber of vertices.\n\n## voidsetMaterialNodeType ( Node::TYPE type ) #\n\nSets the node type to be used by the renderer to determine which materials can be applied to the object.\nNotice\nAs ObjectDynamic is a custom user-defined object, so the user should determine the node type for the renderer to treat this object properly. Setting inappropriate node type may lead to system crashes.\n\n## Node::TYPEgetMaterialNodeType ( ) const#\n\nReturns the node type to be used by the renderer to determine which materials can be applied to the object.\nNotice\nAs ObjectDynamic is a custom user-defined object, so the user should determine the node type for the renderer to treat this object properly. Setting inappropriate node type may lead to system crashes.\n\n### Return value\n\nNode type ID. One of the node type identifiers.\n\n## voidsetParameterBool ( const char * name, bool value ) #\n\nSets boolean shader parameter of the specified value.\n\n### Arguments\n\n• const char * name - Shader parameter name.\n• bool value - Parameter value.\n\n## voidsetParameterFloat ( const char * name, float[] value ) #\n\nSets float shader parameter of the specified value.\n\n### Arguments\n\n• const char * name - Name of the parameter.\n• float[] value - Parameter value pointer.\n\n## voidsetParameterFloatArray ( const char * name, float[] value, int num ) #\n\nSets an array of the specified number of float shader parameters.\n\n### Arguments\n\n• const char * name - Name of the parameter.\n• float[] value - Parameter values.\n• int num - Number of shader parameters.\n\n## voidsetParameterInt ( const char * name, int[] value ) #\n\nSets integer shader parameter of the specified value.\n\n### Arguments\n\n• const char * name - Name of the parameter.\n• int[] value - Parameter value.\n\n## voidsetSurfaceBegin ( int begin, int surface ) #\n\nSets the begin index for the specified object surface.\n\n### Arguments\n\n• int begin - The index to be set as the begin one for the surface.\n• int surface - Number of the target surface.\n\n## intgetSurfaceBegin ( int surface ) const#\n\nReturns the begin index of the specified object surface.\n\n### Arguments\n\n• int surface - The number of the target surface in range from 0 to the total number of surfaces.\n\nThe begin index.\n\n## voidsetSurfaceEnd ( int end, int surface ) #\n\nSets the end index for the specified object surface.\n\n### Arguments\n\n• int end - The index to be set as the end one for the surface.\n• int surface - Number of the target surface.\n\n## intgetSurfaceEnd ( int surface ) const#\n\nReturns the end index of the specified object surface.\n\n### Arguments\n\n• int surface - The number of the target surface in range from 0 to the total number of surfaces.\n\nThe end index.\n\n## voidsetSurfaceMode ( ObjectDynamic::MODE mode, int surface ) #\n\nSets primitives to render an object surface with: triangles (by default), lines or points.\n\n## ObjectDynamic::MODEgetSurfaceMode ( int surface ) const#\n\nReturns primitives used to render the object surface with: triangles (by default), lines or points.\n\n### Arguments\n\n• int surface - Number of a target surface.\n\n### Return value\n\nSurface rendering mode:\n• OBJECT_DYNAMIC_MODE_POINTS = 0\n• OBJECT_DYNAMIC_MODE_LINES\n• OBJECT_DYNAMIC_MODE_TRIANGLES\n\n## voidsetSurfaceName ( const char * name, int surface ) #\n\nSets the name for the specified surface.\nNotice\nThe name will be set only if the specified surface was added via the addSurface() method.\n\n### Arguments\n\n• const char * name - Surface name.\n• int surface - Number of a target surface in range from 0 to the total number of surfaces.\n\n## voidsetVertex ( int num, const void * vertex ) #\n\nUpdates a vertex in the vertices buffer.\n\n### Arguments\n\n• int num - Vertex number.\n• const void * vertex - Vertex pointer.\n\n## voidsetVertexArray ( const void * vertex, int num_vertex ) #\n\n### Arguments\n\n• const void * vertex - Vertices array pointer.\n• int num_vertex - Number of vertices.\n\n## voidsetVertexDouble ( int attribute, double[] value ) #\n\nUpdates the last added vertex to the vertex of the double type with the given parameters.\n\n### Arguments\n\n• int attribute - Attribute number.\n• double[] value - Value pointer.\n\n## voidsetVertexDouble ( int vertex, int attribute, double[] value ) #\n\nUpdates the given vertex to the vertex of the double type with the given parameters.\n\n### Arguments\n\n• int vertex - Vertex index.\n• int attribute - Attribute number.\n• double[] value - Value pointer.\n\n## voidsetVertexFloat ( int attribute, float[] value ) #\n\nUpdates the last added vertex to the vertex of the float type with the given parameters.\n\n### Arguments\n\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• float[] value - Vertex coordinates.\n\n## voidsetVertexFloat ( int vertex, int attribute, float[] value ) #\n\nUpdates the given vertex to the vertex of the float type with the given parameters\n\n### Arguments\n\n• int vertex - Vertex index.\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• float[] value - Vertex coordinates.\n\n## voidsetVertexFormat ( const ObjectDynamic::Attribute[] & attributes ) #\n\nSets the number of the vertex attributes.\n\nThe example of setting 4 different vertices attributes:\n\nSource code (C++)\n``````const ObjectDynamic::Attribute attributes[] = {\n{ 0, ObjectDynamic::TYPE_FLOAT, 3 },\n{ 8, ObjectDynamic::TYPE_HALF, 4 },\n{ 16, ObjectDynamic::TYPE_HALF, 4 },\n{ 24, ObjectDynamic::TYPE_HALF, 4 }\n};\n\n// set vertex format\ndynamic->setVertexFormat(attributes, 4);``````\n\n### Arguments\n\n• const ObjectDynamic::Attribute[] & attributes - Number of the vertex attributes, can be up to 16 attributes for one vertex. The numeration starts from 0. Each attribute consists of:\n• An offset of the vertex in bytes, depends on the vertex type and size.\n• Type of the vertex: TYPE_FLOAT, TYPE_HALF, TYPE_UCHAR\n• Size of the vertex: can be 1,2,3,4 for float type; 2,4 for half type; 4 for UChar type\nNotice\nWhen it goes to shader, 0 -attribute always comes with the size of 4, no matter what size is specified in the method. All the other attributes comes with the specified sizes.\n\n## voidsetVertexHalf ( int attribute, float[] value ) #\n\nUpdates the last added vertex to the vertex of the half-float type with the given parameters.\n\n### Arguments\n\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• float[] value - Vertex coordinates.\n\n## voidsetVertexHalf ( int vertex, int attribute, float[] value ) #\n\nUpdates the last added vertex to the vertex of the half-float type with the given parameters.\n\n### Arguments\n\n• int vertex - Vertex index.\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• float[] value - Vertex coordinates.\n\n## intgetVertexSize ( ) const#\n\nReturns the size of the current vertex, bytes.\n\nVertex size.\n\n## voidsetVertexUChar ( int attribute, uchar[] value ) #\n\nUpdates the last added vertex with the vertex of the unsigned char type with the given parameters.\n\n### Arguments\n\n• int attribute - The number of the attribute, as set in the setVertexFormat() method.\n• uchar[] value - Vertex coordinates.\n\n## voidsetVertexUChar ( int vertex, int attribute, uchar[] value ) #\n\nUpdates the last added vertex with the vertex of the unsigned char type with the given parameters.\n\n### Arguments\n\n• int vertex - Vertex index.\n• int attribute - The number of the attribute, as set in the setVertexFormat() method.\n• uchar[] value - Vertex coordinates.\n\n## voidsetVertexUShort ( int attribute, ushort[] value ) #\n\nUpdates the last added vertex to the vertex of the unsigned short type with the given parameters.\n\n### Arguments\n\n• int attribute - Attribute number.\n• ushort[] value - Value pointer.\n\n## voidsetVertexUShort ( int vertex, int attribute, ushort[] value ) #\n\nUpdates the given vertex to the vertex of the unsigned short type with the given parameters.\n\n### Arguments\n\n• int vertex - Vertex index.\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• ushort[] value - Vertex coordinates.\n\n## voidaddIndex ( int index ) #\n\nAdds an index to the index buffer.\n\n### Arguments\n\n• int index - Index to add.\n\n## voidaddIndicesArray ( int[] indices ) #\n\nAdds an array of the specified number of indices.\n\n### Arguments\n\n• int[] indices - Indices array.\n\n## voidaddLineStrip ( int num_vertex ) #\n\nAdds a line strip to the object.\nNotice\nThis method does not add the new vertices, but allocates their indices. Vertices should be created with addVertexFloat(), addVertexHalf() or addVertexUChar() methods accordingly to the required vertex type.\n\n### Arguments\n\n• int num_vertex - Number of vertices.\n\n## voidaddPoints ( int num_points ) #\n\nAdds the points to the object.\nNotice\nThis method does not add the new vertices, but allocates their indices. Vertices should be created with addVertexFloat(), addVertexHalf() or addVertexUChar() methods accordingly to the required vertex type.\n\n### Arguments\n\n• int num_points - Number of points.\n\n## voidaddSurface ( const char * name ) #\n\nAdds all the last listed and unsigned vertices and triangles to a new surface with a specified name.\n\n### Arguments\n\n• const char * name - Name of the new surface.\n\n## voidaddTriangleFan ( int num_vertex ) #\n\nAdds a triangle fan to the object.\nNotice\nThis method does not add the new vertices, but allocates their indices. Vertices should be created with addVertexFloat(), addVertexHalf() or addVertexUChar() methods accordingly to the required vertex type.\n\n### Arguments\n\n• int num_vertex - Number of vertices composing the fan.\n\nAdds a given number of quadrilaterals to the mesh. This method does not add vertices, rather it allocates indices, for which vertices should be then created with the addVertex() function. Indices will point to vertices starting from the current last vertex in the vertex buffer.\n\n## voidaddTriangles ( int num_triangles ) #\n\nAdds a given number of triangles to the object.\nNotice\nThis method does not add the new vertices, but allocates their indices. Vertices should be created with addVertexFloat(), addVertexHalf() or addVertexUChar() methods accordingly to the required vertex type.\n\n### Arguments\n\n• int num_triangles - Number of triangles.\n\n## voidaddTriangleStrip ( int num_vertex ) #\n\nAdds a triangle strip to the object.\nNotice\nThis method does not add the new vertices, but allocates their indices. Vertices should be created with addVertexFloat(), addVertexHalf() or addVertexUChar() methods accordingly to the required vertex type.\n\n### Arguments\n\n• int num_vertex - Number of vertices composing the strip.\n\n## voidaddVertex ( const void * vertex ) #\n\nAdds a vertex to the vertices buffer.\n\n### Arguments\n\n• const void * vertex - Vertex pointer.\n\n## voidaddVertexArray ( const void * vertex, int num_vertex ) #\n\nAdds an array of the specified vertices number.\n\n### Arguments\n\n• const void * vertex - Vertices array pointer.\n• int num_vertex - Number of vertices.\n\n## voidaddVertexDouble ( int attribute, double[] value ) #\n\nAdds a vertex of a double type with the given attribute, coordinates and size to the object.\n\n### Arguments\n\n• int attribute - Attribute number.\n• double[] value - Value pointer.\n\n## voidaddVertexFloat ( int attribute, float[] value ) #\n\nAdds a vertex of a float type with the given attribute, coordinates and size to the object.\nNotice\nBefore adding a vertex, make sure that you set all the attributes for it with the setVertexFormat() method.\n\n### Arguments\n\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• float[] value - Vertex coordinates.\n\n## voidaddVertexHalf ( int attribute, float[] value ) #\n\nAdds a vertex of the half-float type with the given attribute, coordinates and size to the object.\nNotice\nBefore adding a vertex, make sure that you set all the attributes for it with the setVertexFormat() method.\n\n### Arguments\n\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• float[] value - Vertex coordinates.\n\n## voidaddVertexUChar ( int attribute, uchar[] value ) #\n\nAdds a vertex of an unsigned char value with the given attribute, coordinates and size to the object.\n\n### Arguments\n\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• uchar[] value - Vertex coordinates.\n\n## voidaddVertexUShort ( int attribute, ushort[] value ) #\n\nAdds a vertex of an unsigned short value with the given attribute, coordinates and size to the object.\n\n### Arguments\n\n• int attribute - The number of the attribute, set in the setVertexFormat() method.\n• ushort[] value - Vertex coordinates.\n\n## voidallocateIndices ( int num ) #\n\nAllocate an index buffer for a given number of indices that will be used for an object. With this function, memory can be allocated once rather than in chunks, making the creation faster.\n\n### Arguments\n\n• int num - The number of indices that will be stored in a buffer.\n\n## voidallocateVertex ( int num ) #\n\nAllocate a vertex buffer for a given number of vertices that will be used for an object. With this function, memory can be allocated once rather than in chunks, making the creation faster.\n\n### Arguments\n\n• int num - The number of vertices that will be stored in a buffer.\n\n## voidclearIndices ( ) #\n\nClears all the vertex indices in the object.\n\n## voidclearSurfaces ( ) #\n\nClears all the surface settings.\n\n## voidclearVertex ( ) #\n\nClears all the current vertex settings.\n\n## voidflushIndices ( ) #\n\nFlushes the index buffer and sends all data to GPU. If you change the contents of the index buffers, you should call this method.\n\n## voidflushVertex ( ) #\n\nFlushes the vertex buffer and sends all data to GPU. This method is called automatically, if the length of the vertex buffer changes. If you change the contents of the vertex buffers, you should call this method.\n\n## voidremoveIndices ( int num, int size ) #\n\nRemoves the specified number of indices starting from the given index.\n\n### Arguments\n\n• int num - Number of the index in the index buffer.\n• int size - Number of indices to remove.\n\n## voidremoveVertex ( int num, int size, int indices ) #\n\nRemoves the specified number of vertices starting from the given vertex. To fix the index buffer after removal of vertices, pass 1 as the 3rd argument.\n\n### Arguments\n\n• int num - Number of the vertex in the vertex buffer.\n• int size - Number of vertices to remove.\n• int indices - 1 to fix the index buffer after removal of vertices; otherwise, 0.\n\n## static inttype ( ) #\n\nReturns the type of the object.\n\n### Return value\n\nObject Dynamic type identifier.\n\n## voidupdateSurfaceBegin ( int surface ) #\n\nSynchronizes surface begin index.\n\n### Arguments\n\n• int surface - The number of the target surface in range from 0 to the total number of surfaces.\n\n## voidupdateSurfaceEnd ( int surface ) #\n\nSynchronizes surface end index.\n\n### Arguments\n\n• int surface - The number of the target surface in range from 0 to the total number of surfaces.\nLast update: 2022-01-21"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54415035,"math_prob":0.6766611,"size":17499,"snap":"2022-05-2022-21","text_gpt3_token_len":3864,"char_repetition_ratio":0.21674764,"word_repetition_ratio":0.4928992,"special_character_ratio":0.22669867,"punctuation_ratio":0.12517053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98768497,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T20:06:13Z\",\"WARC-Record-ID\":\"<urn:uuid:55f094a2-f92e-46e6-a96f-687ca8c46bff>\",\"Content-Length\":\"572083\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9375e0fe-03ab-4123-b362-63d42772c2e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:74dc9ec9-c834-47f7-bc5f-eb1a59b1d8ec>\",\"WARC-IP-Address\":\"190.2.154.84\",\"WARC-Target-URI\":\"https://developer.unigine.com/docs/future/api/library/objects/class.objectdynamic\",\"WARC-Payload-Digest\":\"sha1:LEWHWSFPGFF5PJ2MAL7SWPM6ESJ6SQNJ\",\"WARC-Block-Digest\":\"sha1:ODMWER5UU43URWKELPTASJOWTXBAAIFP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662546071.13_warc_CC-MAIN-20220522190453-20220522220453-00487.warc.gz\"}"} |
https://simply-python.com/2014/07/04/ | [
"Rapid generation of powerpoint report with template scanning\n\nIn my work, I need to create PowerPoint (ppt) report of similar template. For the report, I need to create various plots in Excel or JMP, save it to folders and finally paste them to ppt. It be great if it is possible to generate ppt report rapidly by using automation. I have created a python interface to powerpoint using com commands hoping it will help to generate the report automatically.\n\nThe initial idea is to add command to paste the plots at specific slides and specific positions. The problem with this is that I have to set the position values and picture sizes for each graph in the python script. This become tedious and have to set independently for each report type.\n\nThe new idea will be to give the script a scanned template and the script will do the following commands:\n\n1. Create a template ppt with the graphs at particular slide, position and size set.\n2. Rename each object that you need to copy with the keywords such as ‘xyplot_Qty_year’ which after parsing will require a xyplot with qty as y axis and year as x axis. This will then get the corresponding graph with the same type and qty path and link them together.\n3. See the link on how to rename objects.\n4. The script will scan through all the slide, getting all info of picture that need to be pasted by having the keyword. It will note the x and y positon and the size.\n5. The script will then search the required folder for the saved pic file of the same type and will paste them to a new ppt.\n\nThe advantage of this approach is that multiple scanned template can be created. The picture position can be adjusted easily as well.\n\nSample of the script is as below. It is not a fully executable script.\n\nimport os\nimport re\nimport sys\n\nimport pyPPT\n\nclass ppt_scanner(object):\ndef __init__(self):\n\n# ppt setting\nself.ppt_scanned_filename = r'\\\\SGP-L071166D033\\Chengai main folder\\Chengai setup files\\scanned_template.ppt'\n\n# scanned plot results\nself.full_scanned_info = dict()\nself.scanned_y_list = list()\n\n# plots file save location where keyword is the param scanned\nself.bivar_plots_dict = dict()# to be filled in\n\n#ppt plot results\n##store the slide no and the corresponding list of pic\nself.ppt_slide_bivar_pic_name_dict = dict()\n\ndef initialize_ppt(self):\n'''\nInitialize the ppt object.\nOpen the template ppt and save it to target filename as ppt and work it from there\nNone --> None (create the ppt obj)\n\n'''\nself.pptobj = UsePPT() # New ppt for pasting the results.\nself.pptobj.show()\nself.pptobj.save(self.ppt_save_filename)\nself.scanned_template_ppt = UsePPT(self.ppt_scanned_filename) # Template for new ppt to follow\nself.scanned_template_ppt.show()\n\ndef close_all_ppt(self):\n\"\"\" Close all existing ppt.\n\n\"\"\"\nself.pptobj.close()\nself.scanned_template_ppt.close()\n\n## Scanned ppt obj function\ndef get_plot_info_fr_scan_ppt_slide(self, slide_no):\n\"\"\" Method (pptobj) to get info from template scanned ppt.priorty to get the x, y coordinates of pasting.\nOnly get the Object name starting with plot.\nStraight away stored info in various plot classification\nArgs:\nSlide_no (int): ppt slide num\nReturns:\n(list): properties of all objects in slide no\n\n\"\"\"\nall_obj_list = self.scanned_template_ppt.get_all_shapes_properties(slide_no)\nself.classify_info_to_related_group(slide_no, [n for n in all_obj_list if n.startswith(\"plot_\")] )\nreturn [n for n in all_obj_list if n.startswith(\"plot_\")]\n\ndef get_plot_info_fr_all_scan_ppt_slide(self):\n\"\"\" Get all info from all slides. Store info to self.full_scanned_info.\n\n\"\"\"\nfor slide_no in range(1,self.scanned_template_ppt.count_slide()+1,1):\nself.get_plot_info_fr_scan_ppt_slide(slide_no)\n\ndef classify_info_to_related_group(self, slide_no, info_list_fr_one_slide):\n\"\"\"Group to one consolidated group: main dict is slide num with list of name, pos as key.\nAppend to the various plot groups. Get the keyword name and the x,y pos.\nWill also store the columns for the y-axis (self.scanned_y_list).\nArgs:\nslide_no (int): slide num to place in ppt.\ninfo_list_fr_one_slide (list):\n\n\"\"\"\ntemp_plot_biv_info, temp_plot_tab_info, temp_plot_legend_info = [[],[],[]]\nfor n in info_list_fr_one_slide:\nif n.startswith('plot_biv_'):\ntemp_plot_biv_info.append([n.encode().replace('plot_biv_',''),n,n, n, n])\nself.scanned_y_list.append(n.encode().replace('plot_biv_',''))\n\nself.ppt_slide_bivar_pic_name_dict[slide_no] = temp_plot_biv_info\n\n## pptObj -- handling the pasting\ndef paste_all_plots_to_all_ppt_slide(self):\n\"\"\" Paste the respective plots to ppt.\n\"\"\"\n## use the number of page as scanned template\nfor slide_no in range(1,self.pptobj.count_slide()+1,1):\nself.paste_plots_to_slide(slide_no)\n\ndef paste_plots_to_slide(self, slide_no):\n\"\"\" Paste all required plots to particular slide\nArgs:\nslide_no (int): slide num to place in ppt.\n\n\"\"\"\n## for all biv plots\nfor n in self.ppt_slide_bivar_pic_name_dict[slide_no]:\nif self.bivar_plots_dict.has_key(n):\nfilename = self.bivar_plots_dict[n]\npic_obj = self.pptobj.insert_pic_fr_file_to_slide(slide_no, filename, n, n, (n,n))\n\nif (__name__ == \"__main__\"):\n\nprep = ppt_scanner()\n\nprep.initialize_ppt()\n\n## scanned all info -- scanned template function\nprep.get_plot_info_fr_all_scan_ppt_slide()\nprep.scanned_template_ppt.close()\n\n## paste plots\nprep.paste_all_plots_to_all_ppt_slide()\nprep.pptobj.save()\n\nprint 'Completed'"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6446379,"math_prob":0.4988469,"size":5296,"snap":"2022-05-2022-21","text_gpt3_token_len":1295,"char_repetition_ratio":0.14266817,"word_repetition_ratio":0.02259887,"special_character_ratio":0.2569864,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9620765,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T17:12:33Z\",\"WARC-Record-ID\":\"<urn:uuid:dba86be2-40ee-4403-9175-162da7f3880a>\",\"Content-Length\":\"91718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:282db684-c3e6-4af6-8a74-d617f1c5247b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2e5c575-aa4b-45e9-b57f-1e3f637df3bb>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://simply-python.com/2014/07/04/\",\"WARC-Payload-Digest\":\"sha1:EJCWGWED2H6JJLYBPCKKAACRIYKJGOTY\",\"WARC-Block-Digest\":\"sha1:CSJW7EYHU5O45ZKHLBRROXKPK72H7GGW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304959.80_warc_CC-MAIN-20220126162115-20220126192115-00248.warc.gz\"}"} |
https://www.stat.math.ethz.ch/pipermail/r-help/2017-November/449981.html | [
"# [R] weighted average grouped by variables\n\nMassimo Bressan massimo.bressan at arpa.veneto.it\nThu Nov 9 14:16:33 CET 2017\n\n```Hello\n\nan update about my question: I worked out the following solution (with the package \"dplyr\")\n\nlibrary(dplyr)\n\nmydf%>%\nmutate(speed_vehicles=n_vehicles*mydf\\$speed) %>%\ngroup_by(date_time,type) %>%\nsummarise(\nsum_n_times_speed=sum(speed_vehicles),\nn_vehicles=sum(n_vehicles),\nvel=sum(speed_vehicles)/sum(n_vehicles)\n)\n\nIn fact I was hoping to manage everything in a \"one-go\": i.e. without the need to create the \"intermediate\" variable called \"speed_vehicles\" and with the use of the function weighted.mean()\n\nany hints for a different approach much appreciated\n\nthanks\n\nDa: \"Massimo Bressan\" <massimo.bressan at arpa.veneto.it>\nA: \"r-help\" <r-help at r-project.org>\nInviato: Giovedì, 9 novembre 2017 12:20:52\nOggetto: weighted average grouped by variables\n\nhi all\n\nI have this dataframe (created as a reproducible example)\n\nmydf<-structure(list(date_time = structure(c(1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000, 1508238000), class = c(\"POSIXct\", \"POSIXt\"), tzone = \"\"),\ndirection = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L), .Label = c(\"A\", \"B\"), class = \"factor\"),\ntype = structure(c(1L, 2L, 3L, 4L, 1L, 2L, 3L), .Label = c(\"car\", \"light_duty\", \"heavy_duty\", \"motorcycle\"), class = \"factor\"),\navg_speed = c(41.1029082774049, 40.3333333333333, 40.3157894736842, 36.0869565217391, 33.4065155807365, 37.6222222222222, 35.5),\nn_vehicles = c(447L, 24L, 19L, 23L, 706L, 45L, 26L)),\n.Names = c(\"date_time\", \"direction\", \"type\", \"speed\", \"n_vehicles\"),\nrow.names = c(NA, -7L),\nclass = \"data.frame\")\n\nmydf\n\nand I need to get to this final result\n\nmydf_final<-structure(list(date_time = structure(c(1508238000, 1508238000, 1508238000, 1508238000), class = c(\"POSIXct\", \"POSIXt\"), tzone = \"\"),\ntype = structure(c(1L, 2L, 3L, 4L), .Label = c(\"car\", \"light_duty\", \"heavy_duty\", \"motorcycle\"), class = \"factor\"),\nweighted_avg_speed = c(36.39029, 38.56521, 37.53333, 36.08696),\nn_vehicles = c(1153L,69L,45L,23L)),\n.Names = c(\"date_time\", \"type\", \"weighted_avg_speed\", \"n_vehicles\"),\nrow.names = c(NA, -4L),\nclass = \"data.frame\")\n\nmydf_final\n\nmy question:\nhow to compute a weighted mean i.e. \"weighted_avg_speed\"\nfrom \"speed\" (the values whose weighted mean is to be computed) and \"n_vehicles\" (the weights)\ngrouped by \"date_time\" and \"type\"?\n\nto be noted the complication of the case \"motorcycle\" (not present in both directions)\n\nany help for that?\n\nthank you\n\nmax\n\n--\n\n------------------------------------------------------------\nMassimo Bressan\n\nARPAV\nAgenzia Regionale per la Prevenzione e\nProtezione Ambientale del Veneto\n\nDipartimento Provinciale di Treviso\nVia Santa Barbara, 5/a\n31100 Treviso, Italy\n\ntel: +39 0422 558545\nfax: +39 0422 558516\ne-mail: massimo.bressan at arpa.veneto.it\n------------------------------------------------------------\n\n[[alternative HTML version deleted]]\n\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.62721384,"math_prob":0.9651287,"size":2951,"snap":"2022-05-2022-21","text_gpt3_token_len":977,"char_repetition_ratio":0.16593145,"word_repetition_ratio":0.072829135,"special_character_ratio":0.4130803,"punctuation_ratio":0.25270757,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97579163,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T06:53:23Z\",\"WARC-Record-ID\":\"<urn:uuid:513394a6-3f90-4574-96e4-8f4bf6649830>\",\"Content-Length\":\"6384\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2259effb-f3fb-460c-acac-5aa23525fe0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d3155eb-f0f0-4612-890e-b77f4532d52d>\",\"WARC-IP-Address\":\"129.132.119.195\",\"WARC-Target-URI\":\"https://www.stat.math.ethz.ch/pipermail/r-help/2017-November/449981.html\",\"WARC-Payload-Digest\":\"sha1:OG5WL27LKPDXIDI3ENYQA4O3T3XLD2P6\",\"WARC-Block-Digest\":\"sha1:R2QSY64GXJDDE4VA35MOPGLIJBHEZ5T7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662525507.54_warc_CC-MAIN-20220519042059-20220519072059-00573.warc.gz\"}"} |
https://scicomp.stackexchange.com/questions/33945/what-is-the-correct-way-to-calculate-deviatoric-stress-tensor-in-lattice-boltzma | [
"What is the correct way to calculate deviatoric stress tensor in lattice Boltzmann method?\n\nDue to my previous question, where I asked about flux calculation in lattice Boltzmann (LB) method here, I have more or less same question for deviatoric stress tensor calculation due to pseudo-compressibility of LB method. In fact, strain rate tensor is calculated in LB by using non-equilibrium part of distribution functions ($$f_{i}^\\text{neq}$$) as:\n\n$$\\hat{\\varepsilon}_{\\alpha\\beta} = -\\frac{1}{2\\hat{\\tau} \\hat{\\rho} \\hat{c}_{s}^{2}} \\sum_{i} f_{i}^\\text{neq} c_{i\\alpha}c_{i\\beta}$$\n\nWhere hat quantities are dimensionless quantities, $$\\hat{\\tau}$$ is dimensionless relaxation time ($$\\hat{\\tau} > 0.5$$), $$\\hat{\\rho}$$ is dimensionless instantaneous density of the fluid, $$\\hat{c}_{s}^{2} = \\frac{1}{3}$$, and $$c_{i\\alpha}$$ is the $$i$$th discrete velocity in $$\\alpha$$ direction. Deviatoric stress is defined as:\n\n$$\\hat{\\sigma}_{\\alpha\\beta} = 2 \\hat{\\mu} \\hat{\\varepsilon}_{\\alpha\\beta}$$\n\nBut, we have for dimensionless viscosity: $$\\hat{\\mu} = \\hat{c}_{s}^{2} \\left(1-\\frac{1}{2\\hat{\\tau}}\\right) \\hat{\\tau}$$\n\nSo, finally:\n\n$$\\hat{\\sigma}_{\\alpha\\beta} = -\\left(1-\\frac{1}{2\\hat{\\tau}}\\right) \\frac{1}{\\hat{\\rho}} \\sum_{i} f_{i}^\\text{neq} c_{i\\alpha}c_{i\\beta}$$\n\nThis is in contrast with what usually LB people use, such as this one from Krüger et. al. as:\n\n$$\\hat{\\sigma}_{\\alpha\\beta} = -\\left(1-\\frac{1}{2\\hat{\\tau}}\\right) \\sum_{i} f_{i}^\\text{neq} c_{i\\alpha} c_{i\\beta}$$\n\nI understand at incompressible limit ($$Mach \\rightarrow 0$$), $$\\hat{\\rho}$$ should be close to 1, but for my simulations where $$Mach \\sim 0.06$$ and $$Re \\sim 600$$, $$\\hat{\\rho}$$ may fluctuate quite a bit. So my question is which of these formulas should be used to calculate deviatoric stress? Should I assume $$\\hat{\\rho} \\sim 1$$ even with my high $$Mach$$ number? Any suggestion is truly appreciated.\n\n• Why don't you try to derive it using a Chapman-Enskog analysis? Dec 30 '19 at 16:08\n• Refer to the following work in Eq. (13): Chemical Engineering Science 64 (2009) 52-58 (Ridha DJEBALI, jbelii_r@hotmail.fr) Jan 1 '20 at 9:35\n• Can you provide the main point of the article? Jan 1 '20 at 21:12\n\nGenerally, the viscosity we speak about (which is linked to the relax. time) is the kinematic viscosity $$\\nu$$, not $$\\mu$$ as you write. So, by replacing, you will find the right expression.\n• Sorry, but no, it's not an answer to my question. The dimensionless dynamics viscosity is defined as: $$\\hat{\\mu} = \\frac{\\mu\\Delta t}{\\rho_{f} \\Delta x^{2}}$$ Where $\\rho_{f}$ is constant density of fluid. You see that dimensionless dynamic and kinematic viscosities are indeed equal: $$\\hat{\\mu} = \\hat{\\nu}$$ So it doesn't matter here. The main idea here is that how close instantaneous and constant fluid densities are or in another how small $Mach$ is. Dec 28 '19 at 18:33\n• Dimensionless dynamic and kinematic viscosities are equal: $$\\hat{\\mu} = \\frac{\\mu \\Delta t}{\\rho_{f} \\Delta x^{2}} = \\frac{\\nu \\Delta t} {\\Delta x^{2}} = \\hat{\\nu}$$ where kinematic viscosity is defined as $\\nu = \\frac{\\mu}{\\rho_{f}}$. Dec 30 '19 at 19:10"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7647235,"math_prob":0.9999207,"size":1771,"snap":"2022-05-2022-21","text_gpt3_token_len":595,"char_repetition_ratio":0.13865308,"word_repetition_ratio":0.0,"special_character_ratio":0.3331451,"punctuation_ratio":0.08115942,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999714,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T23:39:34Z\",\"WARC-Record-ID\":\"<urn:uuid:cabfdfff-6972-4649-b726-6d580fcd7ef3>\",\"Content-Length\":\"148178\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b3367fb-f03b-487e-9dc4-6f24dc0bcd03>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d69e56d-48c9-44d0-a6b0-8520ccc00822>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/33945/what-is-the-correct-way-to-calculate-deviatoric-stress-tensor-in-lattice-boltzma\",\"WARC-Payload-Digest\":\"sha1:F7W46DEI4QYMIVNF35NTB4H5Z4ZG2UCN\",\"WARC-Block-Digest\":\"sha1:3HOQWGAD7QQFVZNAQQKV5M25TURH5B7P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304876.16_warc_CC-MAIN-20220125220353-20220126010353-00490.warc.gz\"}"} |
https://speechbrain.readthedocs.io/en/latest/_modules/speechbrain/processing/signal_processing.html | [
"# Source code for speechbrain.processing.signal_processing\n\n```\"\"\"\nLow level signal processing utilities\n\nAuthors\n* Peter Plantinga 2020\n* Francois Grondin 2020\n* William Aris 2020\n* Samuele Cornell 2020\n\"\"\"\nimport torch\nimport math\nfrom packaging import version\n\n[docs]def compute_amplitude(waveforms, lengths=None, amp_type=\"avg\", scale=\"linear\"):\n\"\"\"Compute amplitude of a batch of waveforms.\n\nArguments\n---------\nwaveform : tensor\nThe waveforms used for computing amplitude.\nShape should be `[time]` or `[batch, time]` or\n`[batch, time, channels]`.\nlengths : tensor\nThe lengths of the waveforms excluding the padding.\nShape should be a single dimension, `[batch]`.\namp_type : str\nWhether to compute \"avg\" average or \"peak\" amplitude.\nChoose between [\"avg\", \"peak\"].\nscale : str\nWhether to compute amplitude in \"dB\" or \"linear\" scale.\nChoose between [\"linear\", \"dB\"].\n\nReturns\n-------\nThe average amplitude of the waveforms.\n\nExample\n-------\n>>> signal = torch.sin(torch.arange(16000.0)).unsqueeze(0)\n>>> compute_amplitude(signal, signal.size(1))\ntensor([[0.6366]])\n\"\"\"\nif len(waveforms.shape) == 1:\nwaveforms = waveforms.unsqueeze(0)\n\nassert amp_type in [\"avg\", \"peak\"]\nassert scale in [\"linear\", \"dB\"]\n\nif amp_type == \"avg\":\nif lengths is None:\nout = torch.mean(torch.abs(waveforms), dim=1, keepdim=True)\nelse:\nwav_sum = torch.sum(input=torch.abs(waveforms), dim=1, keepdim=True)\nout = wav_sum / lengths\nelif amp_type == \"peak\":\nout = torch.max(torch.abs(waveforms), dim=1, keepdim=True)\nelse:\nraise NotImplementedError\n\nif scale == \"linear\":\nreturn out\nelif scale == \"dB\":\nelse:\nraise NotImplementedError\n\n[docs]def normalize(waveforms, lengths=None, amp_type=\"avg\", eps=1e-14):\n\"\"\"This function normalizes a signal to unitary average or peak amplitude.\n\nArguments\n---------\nwaveforms : tensor\nThe waveforms to normalize.\nShape should be `[batch, time]` or `[batch, time, channels]`.\nlengths : tensor\nThe lengths of the waveforms excluding the padding.\nShape should be a single dimension, `[batch]`.\namp_type : str\nWhether one wants to normalize with respect to \"avg\" or \"peak\"\namplitude. Choose between [\"avg\", \"peak\"]. Note: for \"avg\" clipping\nis not prevented and can occur.\neps : float\nA small number to add to the denominator to prevent NaN.\n\nReturns\n-------\nwaveforms : tensor\nNormalized level waveform.\n\"\"\"\n\nassert amp_type in [\"avg\", \"peak\"]\n\nif len(waveforms.shape) == 1:\nwaveforms = waveforms.unsqueeze(0)\n\nden = compute_amplitude(waveforms, lengths, amp_type) + eps\nwaveforms = waveforms.squeeze(0)\nreturn waveforms / den\n\n[docs]def rescale(waveforms, lengths, target_lvl, amp_type=\"avg\", scale=\"linear\"):\n\"\"\"This functions performs signal rescaling to a target level.\n\nArguments\n---------\nwaveforms : tensor\nThe waveforms to normalize.\nShape should be `[batch, time]` or `[batch, time, channels]`.\nlengths : tensor\nThe lengths of the waveforms excluding the padding.\nShape should be a single dimension, `[batch]`.\ntarget_lvl : float\nTarget lvl in dB or linear scale.\namp_type : str\nWhether one wants to rescale with respect to \"avg\" or \"peak\" amplitude.\nChoose between [\"avg\", \"peak\"].\nscale : str\nwhether target_lvl belongs to linear or dB scale.\nChoose between [\"linear\", \"dB\"].\n\nReturns\n-------\nwaveforms : tensor\nRescaled waveforms.\n\"\"\"\n\nassert amp_type in [\"peak\", \"avg\"]\nassert scale in [\"linear\", \"dB\"]\n\nif len(waveforms.shape) == 1:\nwaveforms = waveforms.unsqueeze(0)\n\nwaveforms = normalize(waveforms, lengths, amp_type)\n\nif scale == \"linear\":\nout = target_lvl * waveforms\nelif scale == \"dB\":\nout = dB_to_amplitude(target_lvl) * waveforms\n\nelse:\nraise NotImplementedError(\"Invalid scale, choose between dB and linear\")\n\nout = out.squeeze(0)\n\nreturn out\n\n[docs]def convolve1d(\nwaveform,\nkernel,\nstride=1,\ngroups=1,\nuse_fft=False,\nrotation_index=0,\n):\n\"\"\"Use torch.nn.functional to perform 1d padding and conv.\n\nArguments\n---------\nwaveform : tensor\nThe tensor to perform operations on.\nkernel : tensor\nThe filter to apply during convolution.\nIf an integer is passed instead, this is passed\nto the conv1d function and pad_type is ignored.\nThe type of padding to use. Passed directly to\nfor available options.\nstride : int\nThe number of units to move each time convolution is applied.\nPassed to conv1d. Has no effect if `use_fft` is True.\ngroups : int\nThis option is passed to `conv1d` to split the input into groups for\nconvolution. Input channels should be divisible by the number of groups.\nuse_fft : bool\nWhen `use_fft` is passed `True`, then compute the convolution in the\nspectral domain using complex multiply. This is more efficient on CPU\nwhen the size of the kernel is large (e.g. reverberation). WARNING:\nWithout padding, circular convolution occurs. This makes little\ndifference in the case of reverberation, but may make more difference\nwith different kernels.\nrotation_index : int\nThis option only applies if `use_fft` is true. If so, the kernel is\nrolled by this amount before convolution to shift the output location.\n\nReturns\n-------\nThe convolved waveform.\n\nExample\n-------\n>>> signal = signal.unsqueeze(0).unsqueeze(2)\n>>> kernel = torch.rand(1, 10, 1)\n>>> signal = convolve1d(signal, kernel, padding=(9, 0))\n\"\"\"\nif len(waveform.shape) != 3:\nraise ValueError(\"Convolve1D expects a 3-dimensional tensor\")\n\n# Move time dimension last, which pad and fft and conv expect.\nwaveform = waveform.transpose(2, 1)\nkernel = kernel.transpose(2, 1)\n\n)\n\n# This approach uses FFT, which is more efficient if the kernel is large\nif use_fft:\n\n# Pad kernel to same length as signal, ensuring correct alignment\nzero_length = waveform.size(-1) - kernel.size(-1)\n\n# Handle case where signal is shorter\nif zero_length < 0:\nkernel = kernel[..., :zero_length]\nzero_length = 0\n\n# Perform rotation to ensure alignment\nzeros = torch.zeros(\nkernel.size(0), kernel.size(1), zero_length, device=kernel.device\n)\nafter_index = kernel[..., rotation_index:]\nbefore_index = kernel[..., :rotation_index]\nkernel = torch.cat((after_index, zeros, before_index), dim=-1)\n\n# Multiply in frequency domain to convolve in time domain\nif version.parse(torch.__version__) > version.parse(\"1.6.0\"):\nimport torch.fft as fft\n\nresult = fft.rfft(waveform) * fft.rfft(kernel)\nconvolved = fft.irfft(result, n=waveform.size(-1))\nelse:\nf_signal = torch.rfft(waveform, 1)\nf_kernel = torch.rfft(kernel, 1)\nsig_real, sig_imag = f_signal.unbind(-1)\nker_real, ker_imag = f_kernel.unbind(-1)\nf_result = torch.stack(\n[\nsig_real * ker_real - sig_imag * ker_imag,\nsig_real * ker_imag + sig_imag * ker_real,\n],\ndim=-1,\n)\nconvolved = torch.irfft(\nf_result, 1, signal_sizes=[waveform.size(-1)]\n)\n\n# Use the implementation given by torch, which should be efficient on GPU\nelse:\nconvolved = torch.nn.functional.conv1d(\ninput=waveform,\nweight=kernel,\nstride=stride,\ngroups=groups,\n)\n\n# Return time dimension to the second dimension.\nreturn convolved.transpose(2, 1)\n\n[docs]def reverberate(waveforms, rir_waveform, rescale_amp=\"avg\"):\n\"\"\"\nGeneral function to contaminate a given signal with reverberation given a\nRoom Impulse Response (RIR).\nIt performs convolution between RIR and signal, but without changing\nthe original amplitude of the signal.\n\nArguments\n---------\nwaveforms : tensor\nThe waveforms to normalize.\nShape should be `[batch, time]` or `[batch, time, channels]`.\nrir_waveform : tensor\nRIR tensor, shape should be [time, channels].\nrescale_amp : str\nWhether reverberated signal is rescaled (None) and with respect either\nto original signal \"peak\" amplitude or \"avg\" average amplitude.\nChoose between [None, \"avg\", \"peak\"].\n\nReturns\n-------\nwaveforms: tensor\nReverberated signal.\n\n\"\"\"\n\norig_shape = waveforms.shape\n\nif len(waveforms.shape) > 3 or len(rir_waveform.shape) > 3:\nraise NotImplementedError\n\n# if inputs are mono tensors we reshape to 1, samples\nif len(waveforms.shape) == 1:\nwaveforms = waveforms.unsqueeze(0).unsqueeze(-1)\nelif len(waveforms.shape) == 2:\nwaveforms = waveforms.unsqueeze(-1)\n\nif len(rir_waveform.shape) == 1: # convolve1d expects a 3d tensor !\nrir_waveform = rir_waveform.unsqueeze(0).unsqueeze(-1)\nelif len(rir_waveform.shape) == 2:\nrir_waveform = rir_waveform.unsqueeze(-1)\n\n# Compute the average amplitude of the clean\norig_amplitude = compute_amplitude(\nwaveforms, waveforms.size(1), rescale_amp\n)\n\n# Compute index of the direct signal, so we can preserve alignment\nvalue_max, direct_index = rir_waveform.abs().max(axis=1, keepdim=True)\n\n# Making sure the max is always positive (if not, flip)\n# mask = torch.logical_and(rir_waveform == value_max, rir_waveform < 0)\n\n# Use FFT to compute convolution, because of long reverberation filter\nwaveforms = convolve1d(\nwaveform=waveforms,\nkernel=rir_waveform,\nuse_fft=True,\nrotation_index=direct_index,\n)\n\n# Rescale to the peak amplitude of the clean waveform\nwaveforms = rescale(\nwaveforms, waveforms.size(1), orig_amplitude, rescale_amp\n)\n\nif len(orig_shape) == 1:\nwaveforms = waveforms.squeeze(0).squeeze(-1)\nif len(orig_shape) == 2:\nwaveforms = waveforms.squeeze(-1)\n\nreturn waveforms\n\n[docs]def dB_to_amplitude(SNR):\n\"\"\"Returns the amplitude ratio, converted from decibels.\n\nArguments\n---------\nSNR : float\nThe ratio in decibels to convert.\n\nExample\n-------\n>>> round(dB_to_amplitude(SNR=10), 3)\n3.162\n>>> dB_to_amplitude(SNR=0)\n1.0\n\"\"\"\nreturn 10 ** (SNR / 20)\n\n[docs]def notch_filter(notch_freq, filter_width=101, notch_width=0.05):\n\"\"\"Returns a notch filter constructed from a high-pass and low-pass filter.\n\n(from https://tomroelandts.com/articles/\nhow-to-create-simple-band-pass-and-band-reject-filters)\n\nArguments\n---------\nnotch_freq : float\nfrequency to put notch as a fraction of the\nsampling rate / 2. The range of possible inputs is 0 to 1.\nfilter_width : int\nFilter width in samples. Longer filters have\nsmaller transition bands, but are more inefficient.\nnotch_width : float\nWidth of the notch, as a fraction of the sampling_rate / 2.\n\nExample\n-------\n>>> signal = signal.unsqueeze(0).unsqueeze(2)\n>>> kernel = notch_filter(0.25)\n>>> notched_signal = convolve1d(signal, kernel)\n\"\"\"\n\n# Check inputs\nassert 0 < notch_freq <= 1\nassert filter_width % 2 != 0\n\n# Avoid frequencies that are too low\nnotch_freq += notch_width\n\n# Define sinc function, avoiding division by zero\ndef sinc(x):\ndef _sinc(x):\n\n# The zero is at the middle index\n\n# Compute a low-pass filter with cutoff frequency notch_freq.\nhlpf = sinc(3 * (notch_freq - notch_width) * inputs)\nhlpf *= torch.blackman_window(filter_width)\nhlpf /= torch.sum(hlpf)\n\n# Compute a high-pass filter with cutoff frequency notch_freq.\nhhpf = sinc(3 * (notch_freq + notch_width) * inputs)\nhhpf *= torch.blackman_window(filter_width)\nhhpf /= -torch.sum(hhpf)\n\n# Adding filters creates notch filter\nreturn (hlpf + hhpf).view(1, -1, 1)\n\n\"\"\"Taken from https://github.com/kaituoxu/Conv-TasNet/blob/master/src/utils.py\n\nReconstructs a signal from a framed representation.\nAdds potentially overlapping frames of a signal with shape\n`[..., frames, frame_length]`, offsetting subsequent frames by `frame_step`.\nThe resulting tensor has shape `[..., output_size]` where\noutput_size = (frames - 1) * frame_step + frame_length\nArgs:\nsignal: A [..., frames, frame_length] Tensor. All dimensions may be unknown, and rank must be at least 2.\nframe_step: An integer denoting overlap offsets. Must be less than or equal to frame_length.\nReturns:\nA Tensor with shape [..., output_size] containing the overlap-added frames of signal's inner-most two dimensions.\noutput_size = (frames - 1) * frame_step + frame_length\nBased on https://github.com/tensorflow/tensorflow/blob/r1.12/tensorflow/contrib/signal/python/ops/reconstruction_ops.py\n\nExample\n-------\n>>> signal = torch.randn(5, 20)\n>>> overlapped.shape\ntorch.Size()\n\"\"\"\nouter_dimensions = signal.size()[:-2]\nframes, frame_length = signal.size()[-2:]\n\nsubframe_length = math.gcd(\nframe_length, frame_step\n) # gcd=Greatest Common Divisor\nsubframe_step = frame_step // subframe_length\nsubframes_per_frame = frame_length // subframe_length\noutput_size = frame_step * (frames - 1) + frame_length\noutput_subframes = output_size // subframe_length\n\nsubframe_signal = signal.view(*outer_dimensions, -1, subframe_length)\n\nframe = torch.arange(0, output_subframes).unfold(\n0, subframes_per_frame, subframe_step\n)\n\n# frame_old = signal.new_tensor(frame).long() # signal may in GPU or CPU\nframe = frame.clone().detach().to(signal.device.type)\n# print((frame - frame_old).sum())\nframe = frame.contiguous().view(-1)\n\nresult = signal.new_zeros(\n*outer_dimensions, output_subframes, subframe_length\n)\nresult = result.view(*outer_dimensions, -1)\nreturn result\n\n[docs]def resynthesize(enhanced_mag, noisy_inputs, stft, istft, normalize_wavs=True):\n\"\"\"Function for resynthesizing waveforms from enhanced mags.\n\nArguments\n---------\nenhanced_mag : torch.Tensor\nPredicted spectral magnitude, should be three dimensional.\nnoisy_inputs : torch.Tensor\nThe noisy waveforms before any processing, to extract phase.\nlengths : torch.Tensor\nThe length of each waveform for normalization.\nstft : torch.nn.Module\nModule for computing the STFT for extracting phase.\nistft : torch.nn.Module\nModule for computing the iSTFT for resynthesis.\nnormalize_wavs : bool\nWhether to normalize the output wavs before returning them.\n\nReturns\n-------\nenhanced_wav : torch.Tensor\nThe resynthesized waveforms of the enhanced magnitudes with noisy phase.\n\"\"\"\n\n# Extract noisy phase from inputs\nnoisy_feats = stft(noisy_inputs)\nnoisy_phase = torch.atan2(noisy_feats[:, :, :, 1], noisy_feats[:, :, :, 0])\n\n# Combine with enhanced magnitude\ncomplex_predictions = torch.mul(\ntorch.unsqueeze(enhanced_mag, -1),\ntorch.cat(\n(\ntorch.unsqueeze(torch.cos(noisy_phase), -1),\ntorch.unsqueeze(torch.sin(noisy_phase), -1),\n),\n-1,\n),\n)\npred_wavs = istft(complex_predictions, sig_length=noisy_inputs.shape)\n\n# Normalize. Since we're using peak amplitudes, ignore lengths\nif normalize_wavs:\npred_wavs = normalize(pred_wavs, amp_type=\"peak\")\n\nreturn pred_wavs\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6069182,"math_prob":0.9830268,"size":14868,"snap":"2021-43-2021-49","text_gpt3_token_len":3846,"char_repetition_ratio":0.15137245,"word_repetition_ratio":0.11687631,"special_character_ratio":0.2759618,"punctuation_ratio":0.21759623,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99579936,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-02T09:51:28Z\",\"WARC-Record-ID\":\"<urn:uuid:1568ee78-81dc-4211-ac3e-d3b894697e14>\",\"Content-Length\":\"107522\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b50b2cce-2989-47bc-923b-4fb1a5fc5761>\",\"WARC-Concurrent-To\":\"<urn:uuid:c261c810-f3fd-4b53-a428-5b882d0d6f92>\",\"WARC-IP-Address\":\"104.17.33.82\",\"WARC-Target-URI\":\"https://speechbrain.readthedocs.io/en/latest/_modules/speechbrain/processing/signal_processing.html\",\"WARC-Payload-Digest\":\"sha1:LL66NMY6V6LNYI3MNEV2TCVWM7NHCHC4\",\"WARC-Block-Digest\":\"sha1:OLOHKZUA2SSGRFJOUNPTBPKHOG6X2L7E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964361253.38_warc_CC-MAIN-20211202084644-20211202114644-00498.warc.gz\"}"} |
http://www.robwork.dk/apidoc/nightly/rw/classrw_1_1math_1_1Metric.html | [
"",
null,
"RobWorkProject 0.7.0\nMetric< T > Class Template Referenceabstract\n\nTemplate interface for metrics on type T. More...\n\n`#include <Metric.hpp>`\n\nPublic Types\n\ntypedef T value_type\nThe type of element on which the metric operates.\n\ntypedef T::value_type scalar_type\nThe type of the scalar.\n\ntypedef rw::common::Ptr< Metric< T > > Ptr\nA pointer to a Metric<T>.\n\ntypedef rw::common::Ptr< const Metric< T > > CPtr\nA pointer to a const Metric<T>.\n\nPublic Member Functions\n\nvirtual ~Metric ()\nDestructor.\n\nscalar_type distance (const value_type &q) const\nThe distance from the zero element to q.\n\nscalar_type distance (const value_type &a, const value_type &b) const\nThe distance from element a to b. More...\n\nint size () const\nThe dimension of elements on which this metric operates. More...\n\nProtected Member Functions\n\nvirtual scalar_type doDistance (const value_type &q) const =0\nSubclass implementation of the distance() method.\n\nvirtual scalar_type doDistance (const value_type &a, const value_type &b) const =0\nSubclass implementation of the distance() method.\n\nvirtual int doSize () const\nSubclass implementation of the size() method. More...\n\nMetric ()\nProtected constructor called by subclassed.\n\nMetric (const Metric &)\nDisable copying of superclass.\n\nMetricoperator= (const Metric &)\nDisable assignment of superclass.\n\nDetailed Description\n\ntemplate<class T> class rw::math::Metric< T >\n\nTemplate interface for metrics on type T.\n\nA metric is a function that defines a scalar distance between elements.\n\n◆ distance()\n\n scalar_type distance ( const value_type & a, const value_type & b ) const\ninline\n\nThe distance from element a to b.\n\nParameters\n a [in] first element b [in] second element\nReturns\nthe distance\n\n◆ doSize()\n\n virtual int doSize ( ) const\ninlineprotectedvirtual\n\nSubclass implementation of the size() method.\n\nBy default the methods returns -1, i.e. valid for all elements.\n\n◆ size()\n\n int size ( ) const\ninline\n\nThe dimension of elements on which this metric operates.\n\nThe returns -1 if the elements don't have a measure of dimension or if the metric works for elements of all dimensions.\n\nThe documentation for this class was generated from the following file:"
] | [
null,
"http://www.robwork.dk/apidoc/nightly/rw/rw_logo_64x64.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5289161,"math_prob":0.9483915,"size":2169,"snap":"2019-26-2019-30","text_gpt3_token_len":527,"char_repetition_ratio":0.16951501,"word_repetition_ratio":0.16403785,"special_character_ratio":0.24804057,"punctuation_ratio":0.17777778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96597344,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-26T12:19:20Z\",\"WARC-Record-ID\":\"<urn:uuid:3715729b-b01b-448f-9e79-cf28ed26ad59>\",\"Content-Length\":\"20529\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:24f2fda6-f389-4562-95a1-14ef9d620d44>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5c1c690-b757-4a28-b255-3f3a7914de04>\",\"WARC-IP-Address\":\"130.225.156.88\",\"WARC-Target-URI\":\"http://www.robwork.dk/apidoc/nightly/rw/classrw_1_1math_1_1Metric.html\",\"WARC-Payload-Digest\":\"sha1:6E4SFPVJOQCDPBW7GQWM3YXUZOTHVKTV\",\"WARC-Block-Digest\":\"sha1:DGZBL3NLG6FN2MGQJNFK4FSA2E44BQAQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000306.84_warc_CC-MAIN-20190626114215-20190626140215-00547.warc.gz\"}"} |
https://mbernste.github.io/tags/ | [
"Posts by Tags\n\nGaussian mixture models\n\nPublished:\n\nGaussian mixture models are a very popular method for data clustering. Here I will define the Gaussian mixture model and also derive the EM algorithm for performing maximum likelihood estimation of its paramters.\n\nThe graph Laplacian\n\nPublished:\n\nAt the heart of of a number of important machine learning algorithms, such as spectral clustering, lies a matrix called the graph Laplacian. In this post, I’ll walk through the intuition behind the graph Laplacian and describe how it represents the discrete analogue to the Laplacian operator on continuous multivariate functions.\n\nDemystifying measure-theoretic probability theory (part 3: expectation)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nThree strategies for cataloging cell types\n\nPublished:\n\nIn my previous post, I outlined a conceptual framework for defining and reasoning about “cell types”. Specifically, I noted that the idea of a “cell type” can be viewed as a human-made partition on the universal cellular state space. In this post, I attempt to distill three strategies for partitioning this state space and agreeing on cell type definitions.\n\nOn cell types and cell states\n\nPublished:\n\nThe advent of single-cell genomics has brought about new efforts to characterize and catalog all of the cell types in the human body. Despite these efforts, the very definition of a “cell type” is under debate. In this post, I will discuss a conceptual framework for defining cell types as subsets of states in an underlying cellular state space. Moreover, I will link the cellular state space to biomedical ontologies that attempt to capture biological knowledge regarding cell types.\n\nRNA-seq: the basics\n\nPublished:\n\nRNA sequencing (RNA-seq) has become a ubiquitous tool in biomedical research for measuring gene expression in a population of cells, or a single cell, across the genome. Despite its ubiquity, RNA-seq is relatively complex and there exists a large research effort towards developing statistical and computational methods for analyzing the raw data that it produces. In this post, I will provide a high level overview of RNA-seq and describe how to interpret some of the common units in which gene expression is measured from an RNA-seq experiment.\n\nThree strategies for cataloging cell types\n\nPublished:\n\nIn my previous post, I outlined a conceptual framework for defining and reasoning about “cell types”. Specifically, I noted that the idea of a “cell type” can be viewed as a human-made partition on the universal cellular state space. In this post, I attempt to distill three strategies for partitioning this state space and agreeing on cell type definitions.\n\nOn cell types and cell states\n\nPublished:\n\nThe advent of single-cell genomics has brought about new efforts to characterize and catalog all of the cell types in the human body. Despite these efforts, the very definition of a “cell type” is under debate. In this post, I will discuss a conceptual framework for defining cell types as subsets of states in an underlying cellular state space. Moreover, I will link the cellular state space to biomedical ontologies that attempt to capture biological knowledge regarding cell types.\n\nRNA-seq: the basics\n\nPublished:\n\nRNA sequencing (RNA-seq) has become a ubiquitous tool in biomedical research for measuring gene expression in a population of cells, or a single cell, across the genome. Despite its ubiquity, RNA-seq is relatively complex and there exists a large research effort towards developing statistical and computational methods for analyzing the raw data that it produces. In this post, I will provide a high level overview of RNA-seq and describe how to interpret some of the common units in which gene expression is measured from an RNA-seq experiment.\n\nThree strategies for cataloging cell types\n\nPublished:\n\nIn my previous post, I outlined a conceptual framework for defining and reasoning about “cell types”. Specifically, I noted that the idea of a “cell type” can be viewed as a human-made partition on the universal cellular state space. In this post, I attempt to distill three strategies for partitioning this state space and agreeing on cell type definitions.\n\nOn cell types and cell states\n\nPublished:\n\nThe advent of single-cell genomics has brought about new efforts to characterize and catalog all of the cell types in the human body. Despite these efforts, the very definition of a “cell type” is under debate. In this post, I will discuss a conceptual framework for defining cell types as subsets of states in an underlying cellular state space. Moreover, I will link the cellular state space to biomedical ontologies that attempt to capture biological knowledge regarding cell types.\n\nGaussian mixture models\n\nPublished:\n\nGaussian mixture models are a very popular method for data clustering. Here I will define the Gaussian mixture model and also derive the EM algorithm for performing maximum likelihood estimation of its paramters.\n\nVisualizing covariance\n\nPublished:\n\nCovariance quantifies to what extent two random variables are linearly correlated. In this post, I will outline a visualization of covariance that helped me better intuit this concept.\n\nTrue understanding is “seeing” in 3D\n\nPublished:\n\nIn this post, I will discuss an analogy that I find useful for thinking about what it means to “understand” something: True understanding of a concept is akin to “seeing” the concept in its native three-dimensional space, whereas partial understanding is merely seeing a two-dimensional projection of that inherently three-dimensional concept.\n\nShannon’s Source Coding Theorem (Foundations of information theory: Part 3)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In the first two posts, we discussed the concepts of self-information and information entropy. In this post, we step through Shannon’s Source Coding Theorem to see how the information entropy of a probability distribution describes the best-achievable efficiency required to communicate samples from the distribution.\n\nInformation entropy (Foundations of information theory: Part 2)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In this series of posts, I will attempt to describe my understanding of how, both philosophically and mathematically, information theory defines the polymorphic, and often amorphous, concept of information. In the first post, we discussed the concept of self-information. In this second post, we will build on this foundation to discuss the concept of information entropy.\n\nThe evidence lower bound (ELBO)\n\nPublished:\n\nThe evidence lower bound is an important quantity at the core of a number of important algorithms used in statistical inference including expectation-maximization and variational inference. In this post, I describe its context, definition, and derivation.\n\nDemystifying measure-theoretic probability theory (part 3: expectation)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nMatrices as functions\n\nPublished:\n\nAt the core of linear algebra is the idea that matrices represent functions. In this post, we’ll look at a few common, elementary functions and discuss their corresponding matrices.\n\nThree strategies for cataloging cell types\n\nPublished:\n\nIn my previous post, I outlined a conceptual framework for defining and reasoning about “cell types”. Specifically, I noted that the idea of a “cell type” can be viewed as a human-made partition on the universal cellular state space. In this post, I attempt to distill three strategies for partitioning this state space and agreeing on cell type definitions.\n\nOn cell types and cell states\n\nPublished:\n\nThe advent of single-cell genomics has brought about new efforts to characterize and catalog all of the cell types in the human body. Despite these efforts, the very definition of a “cell type” is under debate. In this post, I will discuss a conceptual framework for defining cell types as subsets of states in an underlying cellular state space. Moreover, I will link the cellular state space to biomedical ontologies that attempt to capture biological knowledge regarding cell types.\n\nRNA-seq: the basics\n\nPublished:\n\nRNA sequencing (RNA-seq) has become a ubiquitous tool in biomedical research for measuring gene expression in a population of cells, or a single cell, across the genome. Despite its ubiquity, RNA-seq is relatively complex and there exists a large research effort towards developing statistical and computational methods for analyzing the raw data that it produces. In this post, I will provide a high level overview of RNA-seq and describe how to interpret some of the common units in which gene expression is measured from an RNA-seq experiment.\n\nPerplexity: a more intuitive measure of uncertainty than entropy\n\nPublished:\n\nLike entropy, perplexity is an information theoretic quantity that describes the uncertainty of a random variable. In fact, perplexity is simply a monotonic function of entropy and thus, in some sense, they can be used interchangeabley. So why do we need it? In this post, I’ll discuss why perplexity is a more intuitive measure of uncertainty than entropy.\n\nShannon’s Source Coding Theorem (Foundations of information theory: Part 3)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In the first two posts, we discussed the concepts of self-information and information entropy. In this post, we step through Shannon’s Source Coding Theorem to see how the information entropy of a probability distribution describes the best-achievable efficiency required to communicate samples from the distribution.\n\nInformation entropy (Foundations of information theory: Part 2)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In this series of posts, I will attempt to describe my understanding of how, both philosophically and mathematically, information theory defines the polymorphic, and often amorphous, concept of information. In the first post, we discussed the concept of self-information. In this second post, we will build on this foundation to discuss the concept of information entropy.\n\nWhat is information? (Foundations of information theory: Part 1)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In this series of posts, I will attempt to describe my understanding of how, both philosophically and mathematically, information theory defines the polymorphic, and often amorphous, concept of information. In this first post, I will describe Shannon’s self-information.\n\nTrue understanding is “seeing” in 3D\n\nPublished:\n\nIn this post, I will discuss an analogy that I find useful for thinking about what it means to “understand” something: True understanding of a concept is akin to “seeing” the concept in its native three-dimensional space, whereas partial understanding is merely seeing a two-dimensional projection of that inherently three-dimensional concept.\n\nIntrinsic dimensionality\n\nPublished:\n\nIn my formal education, I found that the concept of “intrinsic dimensionality” was never explicitly taught; however, it undergirds so many concepts in linear algebra and the data sciences such as the rank of a matrix and feature selection. In this post I will discuss the difference between the extrinsic dimensionality of a space versus its intrinsic dimensionality.\n\nThree strategies for cataloging cell types\n\nPublished:\n\nIn my previous post, I outlined a conceptual framework for defining and reasoning about “cell types”. Specifically, I noted that the idea of a “cell type” can be viewed as a human-made partition on the universal cellular state space. In this post, I attempt to distill three strategies for partitioning this state space and agreeing on cell type definitions.\n\nOn cell types and cell states\n\nPublished:\n\nThe advent of single-cell genomics has brought about new efforts to characterize and catalog all of the cell types in the human body. Despite these efforts, the very definition of a “cell type” is under debate. In this post, I will discuss a conceptual framework for defining cell types as subsets of states in an underlying cellular state space. Moreover, I will link the cellular state space to biomedical ontologies that attempt to capture biological knowledge regarding cell types.\n\nNormed vector spaces\n\nPublished:\n\nWhen first introduced to Euclidean vectors, one is taught that the length of the vector’s arrow is called the norm of the vector. In this post, we present the more rigorous and abstract definition of a norm and show how it generalizes the notion of “length” to non-Euclidean vector spaces. We also discuss how the norm induces a metric function on pairs vectors so that one can discuss distances between vectors.\n\nVector spaces\n\nPublished:\n\nThe concept of a vector space is a foundational concept in mathematics, physics, and the data sciences. In this post, we first present and explain the definition of a vector space and then go on to describe properties of vector spaces. Lastly, we present a few examples of vector spaces that go beyond the usual Euclidean vectors that are often taught in introductory math and science courses.\n\nInvertible matrices\n\nPublished:\n\nIn this post, we discuss invertible matrices: those matrices that characterize invertible linear transformations. We discuss three different perspectives for intuiting inverse matrices as well as several of their properties.\n\nMatrix multiplication\n\nPublished:\n\nAt first glance, the definition for the product of two matrices can be unintuitive. In this post, we discuss three perspectives for viewing matrix multiplication. It is the third perspective that gives this “unintuitive” definition its power: that matrix multiplication represents the composition of linear transformations.\n\nMatrices characterize linear transformations\n\nPublished:\n\nLinear transformations are functions mapping vectors between two vector spaces that preserve vector addition and scalar multiplication. In this post, we show that there exists a one-to-one corresondence between linear transformations between coordinate vector spaces and matrices. Thus, we can view a matrix as representing a unique linear transformation between coordinate vector spaces.\n\nMatrices as functions\n\nPublished:\n\nAt the core of linear algebra is the idea that matrices represent functions. In this post, we’ll look at a few common, elementary functions and discuss their corresponding matrices.\n\nMatrix-vector multiplication\n\nPublished:\n\nMatrix-vector multiplication is an operation between a matrix and a vector that produces a new vector. In this post, I’ll define matrix vector multiplication as well as three angles from which to view this concept. The third angle entails viewing matrices as functions between vector spaces\n\nIntroducing matrices\n\nPublished:\n\nHere, I will introduce the three main ways of thinking about matrices. This high-level description of the multi-faceted way of thinking about matrices would have helped me better intuit matrices when I was first introduced to them in my undergraduate linear algebra course.\n\nMatrices characterize linear transformations\n\nPublished:\n\nLinear transformations are functions mapping vectors between two vector spaces that preserve vector addition and scalar multiplication. In this post, we show that there exists a one-to-one corresondence between linear transformations between coordinate vector spaces and matrices. Thus, we can view a matrix as representing a unique linear transformation between coordinate vector spaces.\n\nMatrix-vector multiplication\n\nPublished:\n\nMatrix-vector multiplication is an operation between a matrix and a vector that produces a new vector. In this post, I’ll define matrix vector multiplication as well as three angles from which to view this concept. The third angle entails viewing matrices as functions between vector spaces\n\nVariational inference\n\nPublished:\n\nIn this post, I will present a high-level explanation of variational inference: a paradigm for estimating a posterior distribution when computing it explicitly is intractable. Variational inference finds an approximate posterior by solving a specific optimization problem that seeks to minimize the disparity between the true posterior and the approximate posterior.\n\nGaussian mixture models\n\nPublished:\n\nGaussian mixture models are a very popular method for data clustering. Here I will define the Gaussian mixture model and also derive the EM algorithm for performing maximum likelihood estimation of its paramters.\n\nThe evidence lower bound (ELBO)\n\nPublished:\n\nThe evidence lower bound is an important quantity at the core of a number of important algorithms used in statistical inference including expectation-maximization and variational inference. In this post, I describe its context, definition, and derivation.\n\nExpectation-maximization: theory and intuition\n\nPublished:\n\nExpectation-maximization (EM) is a popular algorithm for performing maximum-likelihood estimation of the parameters in a latent variable model. In this post, I discuss the theory behind, and intuition into this algorithm.\n\nNormed vector spaces\n\nPublished:\n\nWhen first introduced to Euclidean vectors, one is taught that the length of the vector’s arrow is called the norm of the vector. In this post, we present the more rigorous and abstract definition of a norm and show how it generalizes the notion of “length” to non-Euclidean vector spaces. We also discuss how the norm induces a metric function on pairs vectors so that one can discuss distances between vectors.\n\nPublished:\n\nTwo of the most important relationships in mathematics, namely equality and definition, are both denoted using the same symbol – namely, the equals sign. The overloading of this symbol confuses students in mathematics and computer programming. In this post, I argue for the use of two different symbols for these two fundamentally different operators.\n\nVector spaces\n\nPublished:\n\nThe concept of a vector space is a foundational concept in mathematics, physics, and the data sciences. In this post, we first present and explain the definition of a vector space and then go on to describe properties of vector spaces. Lastly, we present a few examples of vector spaces that go beyond the usual Euclidean vectors that are often taught in introductory math and science courses.\n\nInvertible matrices\n\nPublished:\n\nIn this post, we discuss invertible matrices: those matrices that characterize invertible linear transformations. We discuss three different perspectives for intuiting inverse matrices as well as several of their properties.\n\nIntrinsic dimensionality\n\nPublished:\n\nIn my formal education, I found that the concept of “intrinsic dimensionality” was never explicitly taught; however, it undergirds so many concepts in linear algebra and the data sciences such as the rank of a matrix and feature selection. In this post I will discuss the difference between the extrinsic dimensionality of a space versus its intrinsic dimensionality.\n\nMatrix multiplication\n\nPublished:\n\nAt first glance, the definition for the product of two matrices can be unintuitive. In this post, we discuss three perspectives for viewing matrix multiplication. It is the third perspective that gives this “unintuitive” definition its power: that matrix multiplication represents the composition of linear transformations.\n\nMatrices characterize linear transformations\n\nPublished:\n\nLinear transformations are functions mapping vectors between two vector spaces that preserve vector addition and scalar multiplication. In this post, we show that there exists a one-to-one corresondence between linear transformations between coordinate vector spaces and matrices. Thus, we can view a matrix as representing a unique linear transformation between coordinate vector spaces.\n\nMatrices as functions\n\nPublished:\n\nAt the core of linear algebra is the idea that matrices represent functions. In this post, we’ll look at a few common, elementary functions and discuss their corresponding matrices.\n\nMatrix-vector multiplication\n\nPublished:\n\nMatrix-vector multiplication is an operation between a matrix and a vector that produces a new vector. In this post, I’ll define matrix vector multiplication as well as three angles from which to view this concept. The third angle entails viewing matrices as functions between vector spaces\n\nIntroducing matrices\n\nPublished:\n\nHere, I will introduce the three main ways of thinking about matrices. This high-level description of the multi-faceted way of thinking about matrices would have helped me better intuit matrices when I was first introduced to them in my undergraduate linear algebra course.\n\nThe graph Laplacian\n\nPublished:\n\nAt the heart of of a number of important machine learning algorithms, such as spectral clustering, lies a matrix called the graph Laplacian. In this post, I’ll walk through the intuition behind the graph Laplacian and describe how it represents the discrete analogue to the Laplacian operator on continuous multivariate functions.\n\nDemystifying measure-theoretic probability theory (part 3: expectation)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 2: random variables)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 1: probability spaces)\n\nPublished:\n\nIn this series of posts, I will present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nInvertible matrices\n\nPublished:\n\nIn this post, we discuss invertible matrices: those matrices that characterize invertible linear transformations. We discuss three different perspectives for intuiting inverse matrices as well as several of their properties.\n\nMatrix multiplication\n\nPublished:\n\nAt first glance, the definition for the product of two matrices can be unintuitive. In this post, we discuss three perspectives for viewing matrix multiplication. It is the third perspective that gives this “unintuitive” definition its power: that matrix multiplication represents the composition of linear transformations.\n\nMatrices as functions\n\nPublished:\n\nAt the core of linear algebra is the idea that matrices represent functions. In this post, we’ll look at a few common, elementary functions and discuss their corresponding matrices.\n\nMatrix-vector multiplication\n\nPublished:\n\nMatrix-vector multiplication is an operation between a matrix and a vector that produces a new vector. In this post, I’ll define matrix vector multiplication as well as three angles from which to view this concept. The third angle entails viewing matrices as functions between vector spaces\n\nIntroducing matrices\n\nPublished:\n\nHere, I will introduce the three main ways of thinking about matrices. This high-level description of the multi-faceted way of thinking about matrices would have helped me better intuit matrices when I was first introduced to them in my undergraduate linear algebra course.\n\nDemystifying measure-theoretic probability theory (part 3: expectation)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 2: random variables)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 1: probability spaces)\n\nPublished:\n\nIn this series of posts, I will present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 2: random variables)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nThree strategies for cataloging cell types\n\nPublished:\n\nIn my previous post, I outlined a conceptual framework for defining and reasoning about “cell types”. Specifically, I noted that the idea of a “cell type” can be viewed as a human-made partition on the universal cellular state space. In this post, I attempt to distill three strategies for partitioning this state space and agreeing on cell type definitions.\n\nOn cell types and cell states\n\nPublished:\n\nThe advent of single-cell genomics has brought about new efforts to characterize and catalog all of the cell types in the human body. Despite these efforts, the very definition of a “cell type” is under debate. In this post, I will discuss a conceptual framework for defining cell types as subsets of states in an underlying cellular state space. Moreover, I will link the cellular state space to biomedical ontologies that attempt to capture biological knowledge regarding cell types.\n\npedagogy\n\nPublished:\n\nTwo of the most important relationships in mathematics, namely equality and definition, are both denoted using the same symbol – namely, the equals sign. The overloading of this symbol confuses students in mathematics and computer programming. In this post, I argue for the use of two different symbols for these two fundamentally different operators.\n\nPerplexity: a more intuitive measure of uncertainty than entropy\n\nPublished:\n\nLike entropy, perplexity is an information theoretic quantity that describes the uncertainty of a random variable. In fact, perplexity is simply a monotonic function of entropy and thus, in some sense, they can be used interchangeabley. So why do we need it? In this post, I’ll discuss why perplexity is a more intuitive measure of uncertainty than entropy.\n\nVariational inference\n\nPublished:\n\nIn this post, I will present a high-level explanation of variational inference: a paradigm for estimating a posterior distribution when computing it explicitly is intractable. Variational inference finds an approximate posterior by solving a specific optimization problem that seeks to minimize the disparity between the true posterior and the approximate posterior.\n\nGaussian mixture models\n\nPublished:\n\nGaussian mixture models are a very popular method for data clustering. Here I will define the Gaussian mixture model and also derive the EM algorithm for performing maximum likelihood estimation of its paramters.\n\nThe evidence lower bound (ELBO)\n\nPublished:\n\nThe evidence lower bound is an important quantity at the core of a number of important algorithms used in statistical inference including expectation-maximization and variational inference. In this post, I describe its context, definition, and derivation.\n\nVisualizing covariance\n\nPublished:\n\nCovariance quantifies to what extent two random variables are linearly correlated. In this post, I will outline a visualization of covariance that helped me better intuit this concept.\n\nExpectation-maximization: theory and intuition\n\nPublished:\n\nExpectation-maximization (EM) is a popular algorithm for performing maximum-likelihood estimation of the parameters in a latent variable model. In this post, I discuss the theory behind, and intuition into this algorithm.\n\nDemystifying measure-theoretic probability theory (part 3: expectation)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 2: random variables)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 1: probability spaces)\n\nPublished:\n\nIn this series of posts, I will present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 2: random variables)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nWhat is information? (Foundations of information theory: Part 1)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In this series of posts, I will attempt to describe my understanding of how, both philosophically and mathematically, information theory defines the polymorphic, and often amorphous, concept of information. In this first post, I will describe Shannon’s self-information.\n\nThree strategies for cataloging cell types\n\nPublished:\n\nIn my previous post, I outlined a conceptual framework for defining and reasoning about “cell types”. Specifically, I noted that the idea of a “cell type” can be viewed as a human-made partition on the universal cellular state space. In this post, I attempt to distill three strategies for partitioning this state space and agreeing on cell type definitions.\n\nOn cell types and cell states\n\nPublished:\n\nThe advent of single-cell genomics has brought about new efforts to characterize and catalog all of the cell types in the human body. Despite these efforts, the very definition of a “cell type” is under debate. In this post, I will discuss a conceptual framework for defining cell types as subsets of states in an underlying cellular state space. Moreover, I will link the cellular state space to biomedical ontologies that attempt to capture biological knowledge regarding cell types.\n\nThe graph Laplacian\n\nPublished:\n\nAt the heart of of a number of important machine learning algorithms, such as spectral clustering, lies a matrix called the graph Laplacian. In this post, I’ll walk through the intuition behind the graph Laplacian and describe how it represents the discrete analogue to the Laplacian operator on continuous multivariate functions.\n\nPerplexity: a more intuitive measure of uncertainty than entropy\n\nPublished:\n\nLike entropy, perplexity is an information theoretic quantity that describes the uncertainty of a random variable. In fact, perplexity is simply a monotonic function of entropy and thus, in some sense, they can be used interchangeabley. So why do we need it? In this post, I’ll discuss why perplexity is a more intuitive measure of uncertainty than entropy.\n\nVariational inference\n\nPublished:\n\nIn this post, I will present a high-level explanation of variational inference: a paradigm for estimating a posterior distribution when computing it explicitly is intractable. Variational inference finds an approximate posterior by solving a specific optimization problem that seeks to minimize the disparity between the true posterior and the approximate posterior.\n\nGaussian mixture models\n\nPublished:\n\nGaussian mixture models are a very popular method for data clustering. Here I will define the Gaussian mixture model and also derive the EM algorithm for performing maximum likelihood estimation of its paramters.\n\nThe evidence lower bound (ELBO)\n\nPublished:\n\nThe evidence lower bound is an important quantity at the core of a number of important algorithms used in statistical inference including expectation-maximization and variational inference. In this post, I describe its context, definition, and derivation.\n\nVisualizing covariance\n\nPublished:\n\nCovariance quantifies to what extent two random variables are linearly correlated. In this post, I will outline a visualization of covariance that helped me better intuit this concept.\n\nExpectation-maximization: theory and intuition\n\nPublished:\n\nExpectation-maximization (EM) is a popular algorithm for performing maximum-likelihood estimation of the parameters in a latent variable model. In this post, I discuss the theory behind, and intuition into this algorithm.\n\nDemystifying measure-theoretic probability theory (part 3: expectation)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 2: random variables)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nDemystifying measure-theoretic probability theory (part 1: probability spaces)\n\nPublished:\n\nIn this series of posts, I will present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory.\n\nNormed vector spaces\n\nPublished:\n\nWhen first introduced to Euclidean vectors, one is taught that the length of the vector’s arrow is called the norm of the vector. In this post, we present the more rigorous and abstract definition of a norm and show how it generalizes the notion of “length” to non-Euclidean vector spaces. We also discuss how the norm induces a metric function on pairs vectors so that one can discuss distances between vectors.\n\nPublished:\n\nTwo of the most important relationships in mathematics, namely equality and definition, are both denoted using the same symbol – namely, the equals sign. The overloading of this symbol confuses students in mathematics and computer programming. In this post, I argue for the use of two different symbols for these two fundamentally different operators.\n\nVector spaces\n\nPublished:\n\nThe concept of a vector space is a foundational concept in mathematics, physics, and the data sciences. In this post, we first present and explain the definition of a vector space and then go on to describe properties of vector spaces. Lastly, we present a few examples of vector spaces that go beyond the usual Euclidean vectors that are often taught in introductory math and science courses.\n\nInvertible matrices\n\nPublished:\n\nIn this post, we discuss invertible matrices: those matrices that characterize invertible linear transformations. We discuss three different perspectives for intuiting inverse matrices as well as several of their properties.\n\nPerplexity: a more intuitive measure of uncertainty than entropy\n\nPublished:\n\nLike entropy, perplexity is an information theoretic quantity that describes the uncertainty of a random variable. In fact, perplexity is simply a monotonic function of entropy and thus, in some sense, they can be used interchangeabley. So why do we need it? In this post, I’ll discuss why perplexity is a more intuitive measure of uncertainty than entropy.\n\nVariational inference\n\nPublished:\n\nIn this post, I will present a high-level explanation of variational inference: a paradigm for estimating a posterior distribution when computing it explicitly is intractable. Variational inference finds an approximate posterior by solving a specific optimization problem that seeks to minimize the disparity between the true posterior and the approximate posterior.\n\nRNA-seq: the basics\n\nPublished:\n\nRNA sequencing (RNA-seq) has become a ubiquitous tool in biomedical research for measuring gene expression in a population of cells, or a single cell, across the genome. Despite its ubiquity, RNA-seq is relatively complex and there exists a large research effort towards developing statistical and computational methods for analyzing the raw data that it produces. In this post, I will provide a high level overview of RNA-seq and describe how to interpret some of the common units in which gene expression is measured from an RNA-seq experiment.\n\nIntrinsic dimensionality\n\nPublished:\n\nIn my formal education, I found that the concept of “intrinsic dimensionality” was never explicitly taught; however, it undergirds so many concepts in linear algebra and the data sciences such as the rank of a matrix and feature selection. In this post I will discuss the difference between the extrinsic dimensionality of a space versus its intrinsic dimensionality.\n\nMatrix multiplication\n\nPublished:\n\nAt first glance, the definition for the product of two matrices can be unintuitive. In this post, we discuss three perspectives for viewing matrix multiplication. It is the third perspective that gives this “unintuitive” definition its power: that matrix multiplication represents the composition of linear transformations.\n\nMatrices characterize linear transformations\n\nPublished:\n\nLinear transformations are functions mapping vectors between two vector spaces that preserve vector addition and scalar multiplication. In this post, we show that there exists a one-to-one corresondence between linear transformations between coordinate vector spaces and matrices. Thus, we can view a matrix as representing a unique linear transformation between coordinate vector spaces.\n\nMatrices as functions\n\nPublished:\n\nAt the core of linear algebra is the idea that matrices represent functions. In this post, we’ll look at a few common, elementary functions and discuss their corresponding matrices.\n\nMatrix-vector multiplication\n\nPublished:\n\nMatrix-vector multiplication is an operation between a matrix and a vector that produces a new vector. In this post, I’ll define matrix vector multiplication as well as three angles from which to view this concept. The third angle entails viewing matrices as functions between vector spaces\n\nIntroducing matrices\n\nPublished:\n\nHere, I will introduce the three main ways of thinking about matrices. This high-level description of the multi-faceted way of thinking about matrices would have helped me better intuit matrices when I was first introduced to them in my undergraduate linear algebra course.\n\nGaussian mixture models\n\nPublished:\n\nGaussian mixture models are a very popular method for data clustering. Here I will define the Gaussian mixture model and also derive the EM algorithm for performing maximum likelihood estimation of its paramters.\n\nThe graph Laplacian\n\nPublished:\n\nAt the heart of of a number of important machine learning algorithms, such as spectral clustering, lies a matrix called the graph Laplacian. In this post, I’ll walk through the intuition behind the graph Laplacian and describe how it represents the discrete analogue to the Laplacian operator on continuous multivariate functions.\n\nShannon’s Source Coding Theorem (Foundations of information theory: Part 3)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In the first two posts, we discussed the concepts of self-information and information entropy. In this post, we step through Shannon’s Source Coding Theorem to see how the information entropy of a probability distribution describes the best-achievable efficiency required to communicate samples from the distribution.\n\nInformation entropy (Foundations of information theory: Part 2)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In this series of posts, I will attempt to describe my understanding of how, both philosophically and mathematically, information theory defines the polymorphic, and often amorphous, concept of information. In the first post, we discussed the concept of self-information. In this second post, we will build on this foundation to discuss the concept of information entropy.\n\nWhat is information? (Foundations of information theory: Part 1)\n\nPublished:\n\nThe mathematical field of information theory attempts to mathematically describe the concept of “information”. In this series of posts, I will attempt to describe my understanding of how, both philosophically and mathematically, information theory defines the polymorphic, and often amorphous, concept of information. In this first post, I will describe Shannon’s self-information.\n\nThe evidence lower bound (ELBO)\n\nPublished:\n\nThe evidence lower bound is an important quantity at the core of a number of important algorithms used in statistical inference including expectation-maximization and variational inference. In this post, I describe its context, definition, and derivation.\n\nVisualizing covariance\n\nPublished:\n\nCovariance quantifies to what extent two random variables are linearly correlated. In this post, I will outline a visualization of covariance that helped me better intuit this concept.\n\nExpectation-maximization: theory and intuition\n\nPublished:\n\nExpectation-maximization (EM) is a popular algorithm for performing maximum-likelihood estimation of the parameters in a latent variable model. In this post, I discuss the theory behind, and intuition into this algorithm.\n\nDemystifying measure-theoretic probability theory (part 3: expectation)\n\nPublished:\n\nIn this series of posts, I present my understanding of some basic concepts in measure theory — the mathematical study of objects with “size”— that have enabled me to gain a deeper understanding into the foundations of probability theory."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9175599,"math_prob":0.8205477,"size":38961,"snap":"2022-05-2022-21","text_gpt3_token_len":7172,"char_repetition_ratio":0.15003721,"word_repetition_ratio":0.98108745,"special_character_ratio":0.17710018,"punctuation_ratio":0.10095579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99611056,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T01:28:36Z\",\"WARC-Record-ID\":\"<urn:uuid:a7e64dd8-387b-4e48-80d0-7b3d70b54662>\",\"Content-Length\":\"134787\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:24a87472-89c9-4977-ba3c-f382f740ff3c>\",\"WARC-Concurrent-To\":\"<urn:uuid:dab2f2ae-81e1-4adc-a847-da7d036b3663>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://mbernste.github.io/tags/\",\"WARC-Payload-Digest\":\"sha1:2RCVQJE5M4ERI644O7PQSAQDTWU6VGSP\",\"WARC-Block-Digest\":\"sha1:MORC7UQCGKHNFSSUGIPEATZCRCL2FVBT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304345.92_warc_CC-MAIN-20220123232910-20220124022910-00418.warc.gz\"}"} |
https://proofwiki.org/wiki/Normalizer_of_Subgroup_is_Largest_Subgroup_containing_that_Subgroup_as_Normal_Subgroup | [
"# Normalizer of Subgroup is Largest Subgroup containing that Subgroup as Normal Subgroup\n\n## Theorem\n\nLet $G$ be a group.\n\nLet $H$ be a subgroup of $G$.\n\nThen $\\map {N_G} H$, the normalizer of $H$ in $G$, is the largest subgroup of $G$ containing $H$ as a normal subgroup.\n\n## Proof\n\nFrom Subgroup is Subgroup of Normalizer, we have that $H \\le \\map {N_G} H$.\n\nNow we need to show that $H \\lhd \\map {N_G} H$.\n\nFor $a \\in \\map {N_G} H$, the conjugate of $H$ by $a$ in $\\map {N_G} H$ is:\n\n $\\ds H^a$ $=$ $\\ds \\set {x \\in \\map {N_G} H: a x a^{-1} \\in H}$ Definition of Conjugate of Group Subset $\\ds$ $=$ $\\ds H^a \\cap \\map {N_G} H$ Definition of Set Intersection $\\ds$ $=$ $\\ds H \\cap \\map {N_G} H$ Definition of Normalizer $\\ds$ $=$ $\\ds H$ Intersection with Subset is Subset\n\nso:\n\n$\\forall a \\in \\map {N_G} H: H^a = H$\n\nand so by definition of normal subgroup:\n\n$H \\lhd \\map {N_G} H$\n\nNow we need to show that $\\map {N_G} H$ is the largest subgroup of $G$ containing $H$ such that $H \\lhd \\map {N_G} H$.\n\nThat is, to show that any subgroup of $G$ in which $H$ is normal is also a subset of $\\map {N_G} H$.\n\nTake any $N$ such that $H \\lhd N \\le G$.\n\nIn $N$, the conjugate of $H$ by $a \\in N$ is $N \\cap H^a = H$.\n\nTherefore:\n\n$H \\subseteq H^a$\n\nSimilarly, $H \\subseteq H^{a^{-1} }$, so:\n\n$H^a \\subseteq \\paren {H^a}^{a^{-1} } = H$\n\nThus:\n\n$\\forall a \\in N: H^a = H, a \\in \\map {N_G} H$\n\nThat is:\n\n$N \\subseteq \\map {N_G} H$\n\nSo what we have shown is that any subgroup of $G$ in which $H$ is normal is a subset of $\\map {N_G} H$, which is another way of saying that $\\map {N_G} H$ is the largest such subgroup.\n\n$\\blacksquare$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80565196,"math_prob":0.99999595,"size":1983,"snap":"2021-04-2021-17","text_gpt3_token_len":709,"char_repetition_ratio":0.18292066,"word_repetition_ratio":0.13623978,"special_character_ratio":0.38073626,"punctuation_ratio":0.13493976,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T17:03:53Z\",\"WARC-Record-ID\":\"<urn:uuid:a315fed3-e6d6-4a76-ae64-de0127c73cec>\",\"Content-Length\":\"40260\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2016a581-7d8b-4022-8200-3b9bfbcaf4ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9d0fbd9-7f53-447b-81b1-a46a2d5f0f02>\",\"WARC-IP-Address\":\"172.67.198.93\",\"WARC-Target-URI\":\"https://proofwiki.org/wiki/Normalizer_of_Subgroup_is_Largest_Subgroup_containing_that_Subgroup_as_Normal_Subgroup\",\"WARC-Payload-Digest\":\"sha1:JM2NBIGSQOWPAQGJAKOQXPENWTBDPLXY\",\"WARC-Block-Digest\":\"sha1:AM4UUEQW2MZQ3B6FN5PLNJXT57RNGHZO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703506832.21_warc_CC-MAIN-20210116165621-20210116195621-00339.warc.gz\"}"} |
https://answersdrive.com/why-is-180-degrees-equal-to-pi-radians-5149375 | [
"# Why is 180 degrees equal to pi radians?\n\nThe point is that pi radians is equal to 180 degrees. Radians are a unit of measurement for angles, just like degrees are, and pi is just the number of radians that makes up that angle. Just as one radian is equal to 57.3 degrees (approximately). The best way to understand is to forget about degrees entirely.\nA.\n\n### How many radians are in one full circle?\n\nThus 2 radians equals 360 degrees. This means that 1 radian = 180/ degrees, and 1 degree = /180 radians. d(A,B) = R a /180, These formulas can be checked by noticing that the arc length is proportional to the angle, and then checking the formula for the full circle, i.e., when a = 2 radians (or 360 degrees).\n• #### Why is it 360 degrees in a circle?\n\nThat's how we got a 360 degree circle. Around 1500 BC, Egyptians divided the day into 24 hours, though the hours varied with the seasons originally. Greek astronomers made the hours equal. About 300 to 100 BC, the Babylonians subdivided the hour into base-60 fractions: 60 minutes in an hour and 60 seconds in a minute.\n• #### How do you convert degrees to radians in terms of pi?\n\nSteps\n1. Write down the number of degrees you want to convert to radians. Let's work with a few examples so you really get the concept down.\n2. Multiply the number of degrees by π/180. To understand why you have to do this, you should know that 180 degrees constitute π radians.\n3. Do the math.\n4. Simplify.\n• #### What is on the unit circle?\n\nIn mathematics, a unit circle is a circle with a radius of one. Frequently, especially in trigonometry, the unit circle is the circle of radius one centered at the origin (0, 0) in the Cartesian coordinate system in the Euclidean plane.\nB.\n\n### What is a radian and why do we use it?\n\nWhy Radian Measure Makes Life Easier In Mathematics And Physics. The two most commonly used measures for angles are degrees and radians. When is the entire circumference of the circle, the corresponding angle is that of the entire circle. Since the circumference of a circle is , the angle of a full circle is .\n• #### Why do we need to use radians?\n\nWhy Radian Measure Makes Life Easier In Mathematics And Physics. The two most commonly used measures for angles are degrees and radians. When is the entire circumference of the circle, the corresponding angle is that of the entire circle. Since the circumference of a circle is , the angle of a full circle is .\n• #### Why is Pi equal to 180 degrees?\n\nThe point is that pi radians is equal to 180 degrees. Radians are a unit of measurement for angles, just like degrees are, and pi is just the number of radians that makes up that angle. Just as one radian is equal to 57.3 degrees (approximately). The best way to understand is to forget about degrees entirely.\n\nAB\nC.\n\n### Why is a full circle 2 pi?\n\nTherefore, a full circle or one complete revolution of the circle corresponds to an angle of 2π radians. A radian is an arc equal in length to the radius of the circle. Pi is the ratio of the circumference to the diameter of the circle.\n• #### What is 2 pi r squared?\n\nCircle illustration with circumference (C) in black, diameter (D) in cyan, radius (R) in red, and centre or origin (O) in magenta. Circumference = π × diameter = 2 × π × radius.\n• #### How many radians are there in a triangle?\n\nIn a Euclidean space, the sum of measures of these three angles of any triangle is invariably equal to the straight angle, also expressed as 180 °, π radians, two right angles, or a half-turn.\n• #### What is the angle of reference?\n\nBasically, any angle on the x-y plane has a reference angle, which is always between 0 and 90 degrees. The reference angle is always the smallest angle that you can make from the terminal side of an angle (ie where the angle ends) with the x-axis. A reference angle always uses the x-axis as its frame of reference.\n\nUpdated: 2nd October 2019"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9144694,"math_prob":0.98869777,"size":3973,"snap":"2020-10-2020-16","text_gpt3_token_len":933,"char_repetition_ratio":0.18921642,"word_repetition_ratio":0.2802198,"special_character_ratio":0.24742009,"punctuation_ratio":0.115523465,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991992,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T15:57:41Z\",\"WARC-Record-ID\":\"<urn:uuid:be1acbe9-1b3d-4636-9bf6-074c5f10cd3e>\",\"Content-Length\":\"37385\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd6829c0-aacc-4a7b-a730-5b4fbd234ab6>\",\"WARC-Concurrent-To\":\"<urn:uuid:9daeecce-7624-4e9a-83f3-0e9032111b61>\",\"WARC-IP-Address\":\"172.104.219.53\",\"WARC-Target-URI\":\"https://answersdrive.com/why-is-180-degrees-equal-to-pi-radians-5149375\",\"WARC-Payload-Digest\":\"sha1:4WVLHIKV24N7CTBYBSDCROW7F7JI22XR\",\"WARC-Block-Digest\":\"sha1:3HWDJZJBHTTFTIFNTSUVOYO2IPREV3SC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145818.81_warc_CC-MAIN-20200223154628-20200223184628-00489.warc.gz\"}"} |
https://medium.com/datadriveninvestor/deep-learning-different-types-of-autoencoders-41d4fa5f7570 | [
"# Deep Learning — Different Types of Autoencoders\n\nIn this post we will understand different types of Autoencoders\n\nRead here to understand what is Autoencoder, how does Autoencoder work and where are they used.\n\nJust quick brief on what is Autoencoders\n\nAutoencoders encodes the input values x using a function f. Then decodes the encoded values f(x) using a function g to create output values identical to the input values.\n\nAutoencoder objective is to minimize reconstruction error between the input and output. This helps autoencoders to learn important features present in the data. When a representation allows a good reconstruction of its input then it has retained much of the information present in the input.\n\n### What are different types of Autoencoders?\n\n#### Undercomplete Autoencoders",
null,
"Undercomplete Autoencoder- Hidden layer has smaller dimension than input layer\n• Goal of the Autoencoder is to capture the most important features present in the data.\n• Undercomplete autoencoders have a smaller dimension for hidden layer compared to the input layer. This helps to obtain important features from the data.\n• Objective is to minimize the loss function by penalizing the g(f(x)) for being different from the input x.\n• When decoder is linear and we use a mean squared error loss function then undercomplete autoencoder generates a reduced feature space similar to PCA\n• We get a powerful nonlinear generalization of PCA when encoder function f and decoder function g are non linear.\n• Undercomplete autoencoders do not need any regularization as they maximize the probability of data rather than copying the input to the output.\n\n#### Sparse Autoencoders",
null,
"Sparse Autoencoders use only reduced number of hidden nodes at a time\n• Sparse autoencoders have hidden nodes greater than input nodes. They can still discover important features from the data.\n• Sparsity constraint is introduced on the hidden layer. This is to prevent output layer copy input data.\n• Sparse autoencoders have a sparsity penalty, Ω(h), a value close to zero but not zero. Sparsity penalty is applied on the hidden layer in addition to the reconstruction error. This prevents overfitting.\n• Sparse autoencoders take the highest activation values in the hidden layer and zero out the rest of the hidden nodes. This prevents autoencoders to use all of the hidden nodes at a time and forcing only a reduced number of hidden nodes to be used.\n• As we activate and inactivate hidden nodes for each row in the dataset. Each hidden node extracts a feature from the data\n\n#### Denoising Autoencoders(DAE)",
null,
"Denoising Autoencoders — input is corrupted\n• Denoising refers to intentionally adding noise to the raw input before providing it to the network. Denoising can be achieved using stochastic mapping.\n• Denoising autoencoders create a corrupted copy of the input by introducing some noise. This helps to avoid the autoencoders to copy the input to the output without learning features about the data.\n• Corruption of the input can be done randomly by making some of the input as zero. Remaining nodes copy the input to the noised input.\n• Denoising autoencoders must remove the corruption to generate an output that is similar to the input. Output is compared with input and not with noised input. To minimize the loss function we continue until convergence\n• Denoising autoencoders minimizes the loss function between the output node and the corrupted input.\n• Denoising helps the autoencoders to learn the latent representation present in the data. Denoising autoencoders ensures a good representation is one that can be derived robustly from a corrupted input and that will be useful for recovering the corresponding clean input.\n• Denoising is a stochastic autoencoder as we use a stochastic corruption process to set some of the inputs to zero\n\n#### Contractive Autoencoders(CAE)",
null,
"Contractive Autoencoders\n• Contractive autoencoder(CAE) objective is to have a robust learned representation which is less sensitive to small variation in the data.\n• Robustness of the representation for the data is done by applying a penalty term to the loss function. The penalty term is Frobenius norm of the Jacobian matrix. Frobenius norm of the Jacobian matrix for the hidden layer is calculated with respect to input. Frobenius norm of the Jacobian matrix is the sum of square of all elements.",
null,
"Loss function with penalty term — Frobenius norm of the Jacobian matrix\n• Contractive autoencoder is another regularization technique like sparse autoencoders and denoising autoencoders.\n• CAE surpasses results obtained by regularizing autoencoder using weight decay or by denoising. CAE is a better choice than denoising autoencoder to learn useful feature extraction.\n• Penalty term generates mapping which are strongly contracting the data and hence the name contractive autoencoder.\n\n#### Stacked Denoising Autoencoders",
null,
"Stacked Denoising Autoencoder\n• Stacked Autoencoders is a neural network with multiple layers of sparse autoencoders\n• When we add more hidden layers than just one hidden layer to an autoencoder, it helps to reduce a high dimensional data to a smaller code representing important features\n• Each hidden layer is a more compact representation than the last hidden layer\n• We can also denoise the input and then pass the data through the stacked autoencoders called as stacked denoising autoencoders\n• In Stacked Denoising Autoencoders, input corruption is used only for initial denoising. This helps learn important features present in the data. Once the mapping function f(θ) has been learnt. For further layers we use uncorrupted input from the previous layers.\n• After training a stack of encoders as explained above, we can use the output of the stacked denoising autoencoders as an input to a stand alone supervised machine learning like support vector machines or multi class logistics regression.\n\n#### Deep Autoencoders",
null,
"Deep Autoencoders (Source: G. E. Hinton* and R. R. Salakhutdinov, Science , 2006)\n• Deep Autoencoders consist of two identical deep belief networks. One network for encoding and another for decoding\n• Typically deep autoencoders have 4 to 5 layers for encoding and the next 4 to 5 layers for decoding. We use unsupervised layer by layer pre-training\n• Restricted Boltzmann Machine(RBM) is the basic building block of the deep belief network. We will do RBM is a different post.\n• In the above figure, we take an image with 784 pixel. Train using a stack of 4 RBMs, unroll them and then finetune with back propagation\n• Final encoding layer is compact and fast",
null,
"Reconstructions by the 30-dimensional deep autoencoder; reconstructions by 30- dimensional standard PCA. (Source: G. E. Hinton* and R. R. Salakhutdinov, Science , 2006)\n\nReferences:\n\nDeep learning by Ian Goodfellow and Yoshua Bengio and Aaron Courville\n\nhttp://www.icml-2011.org/papers/455_icmlpaper.pdf\n\nhttp://www.jmlr.org/papers/volume11/vincent10a/vincent10a.pdf\n\n### Share it and Clap if you liked the article!\n\nAlso published on mc.ai on December 2, 2018."
] | [
null,
"https://cdn-images-1.medium.com/max/1600/1*eE-jG_gXajuGcZYHFasAmA.png",
null,
"https://cdn-images-1.medium.com/max/1600/1*19oadmQay1n7VNarX5sIPA.png",
null,
"https://cdn-images-1.medium.com/max/1600/1*cXOYbt_Tk4AyWAjK1bi5aA.png",
null,
"https://cdn-images-1.medium.com/max/1600/1*VXI4d0csTjq_F1MA4X6ACA.png",
null,
"https://cdn-images-1.medium.com/max/1600/1*FjATD2EW-bFB4vTQ6ikR4w.png",
null,
"https://cdn-images-1.medium.com/max/1600/1*Wi95xK8zHWVOGhqIMdA0QQ.png",
null,
"https://cdn-images-1.medium.com/max/1600/1*S8OYh46B9IoHBwr1kWzqDQ.png",
null,
"https://cdn-images-1.medium.com/max/1600/1*_mvWLItlrB0pW0kSNvZplw.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79418457,"math_prob":0.6983413,"size":6576,"snap":"2019-13-2019-22","text_gpt3_token_len":1394,"char_repetition_ratio":0.1830493,"word_repetition_ratio":0.020134227,"special_character_ratio":0.18552311,"punctuation_ratio":0.07053571,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96882886,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T11:35:33Z\",\"WARC-Record-ID\":\"<urn:uuid:5031d676-bc56-4dcb-9671-3ca922c8b0a8>\",\"Content-Length\":\"154997\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4dded69-49f3-4987-8d59-d97be6dcd547>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a25f771-642a-4b70-a07a-cc883f42d3de>\",\"WARC-IP-Address\":\"104.16.120.127\",\"WARC-Target-URI\":\"https://medium.com/datadriveninvestor/deep-learning-different-types-of-autoencoders-41d4fa5f7570\",\"WARC-Payload-Digest\":\"sha1:ORRZOS4Z3OLJDRJUAB6ABI43PWOV2VXK\",\"WARC-Block-Digest\":\"sha1:2GSUPY2IJAGHNLPQUITHYICTJMD3HB4P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257243.19_warc_CC-MAIN-20190523103802-20190523125802-00553.warc.gz\"}"} |
https://blender.stackexchange.com/questions/61485/can-nodes-use-an-objects-x-y-position/61491 | [
"# Can nodes use an objects x/y position?\n\nI have a blender file where a sphere is orbiting a force field. I want it to be more red when it's closer to the center, and more blue when it's further. I already have the math figured out, I just need to know how to get x/y coordinates in nodes.\n\nHere's the file:",
null,
"• You probably want to look into drivers. I think they can be used as input to node fields... but maybe not. If so, a driver would likely be your best bet.\n– Matt\nAug 23, 2016 at 18:28\n• @Matt What do you mean by drivers? Aug 23, 2016 at 18:45\n• blender.org/manual/es/animation/drivers.html Aug 23, 2016 at 19:21\n\nHere's a node setup, which sets the colour based on the distance from a specified point in the XY plane.",
null,
"The inputs to the node group1 are as follows:\nX Origin - The X coordinate of the point from which to calculate the distance\nY Origin - The Y coordinate of the point from which to calculate the distance\nInner perimeter - The threshold at which distance is set to 0 (i.e. closer than this will be all red).\nOuter perimeter - The distance at which distance is set to 1 (i.e. farther than this will be all blue)\n\nHere's the inside of the node group:",
null,
"The Location output of the Object info node, gives the Object's location in World coordinates.\nThe Separate XYZ separates a vector or position in three values, one for each of X, Y and Z components of the vector/position.\nThe following math nodes up to and including the last Power node calculate the distance from the specified point according to the common distance formula.\nThe Subtract node following the Power node subtracts the Inner perimeter from the calculated distance.\nThe Subtract node below that calculates the distance between the Inner and Outer perimeters.\nFinally, the Divide node divides the distance between the object and the centre point by the distance between the Inner and Outer perimeters.\nThe output is a value which at the Inner perimeter will be 0 (closer than that, will yield a negative value) and at the Outer perimeter will be 1 (farther away will yield a value that is greater than 1). The Fac input of the ColorRamp clamps the values that are outside 0-1, so the fact that we get values greater than 1 and less than 0 won't be a problem.\n\nHere's an animated GIF of the result:",
null,
"And here's the blend (it's basically the same blend you provided, with the material for the \"planet\" modified).",
null,
"1When using node groups, you can edit the contents of a node group by selecting it and pressing Tab to get inside the group. When inside a node group, you can get back to the overview by pressing Tab again."
] | [
null,
"https://blend-exchange.giantcowfilms.com/embedImage.png",
null,
"https://i.stack.imgur.com/pOwFu.png",
null,
"https://i.stack.imgur.com/EpXK5.png",
null,
"https://i.stack.imgur.com/9hw8Y.gif",
null,
"https://blend-exchange.giantcowfilms.com/embedImage.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88953197,"math_prob":0.9310524,"size":1978,"snap":"2023-40-2023-50","text_gpt3_token_len":427,"char_repetition_ratio":0.1545086,"word_repetition_ratio":0.07365439,"special_character_ratio":0.21334681,"punctuation_ratio":0.07751938,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922536,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T21:49:26Z\",\"WARC-Record-ID\":\"<urn:uuid:929846fc-4072-4f53-adb0-3628e8c632b4>\",\"Content-Length\":\"158139\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:deccb5f3-55ec-4f46-9cf7-2bd36c56a49b>\",\"WARC-Concurrent-To\":\"<urn:uuid:64f5a531-1814-4024-8f05-0006dc9e86c2>\",\"WARC-IP-Address\":\"104.18.10.86\",\"WARC-Target-URI\":\"https://blender.stackexchange.com/questions/61485/can-nodes-use-an-objects-x-y-position/61491\",\"WARC-Payload-Digest\":\"sha1:AAVJSUWK6YIHTTYDENH6X4GUTVMFJVG6\",\"WARC-Block-Digest\":\"sha1:GUF7RAWH62X4BI4T4XWNTZ2K3IKKFXFI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510941.58_warc_CC-MAIN-20231001205332-20231001235332-00046.warc.gz\"}"} |
https://www.math.fsu.edu/~whuang2/papers/CMGM.htm | [
"## Computing the matrix geometric mean: Riemannian vs Euclidean conditioning, implementation techniques, and a Riemannian BFGS method\n\n### Authors\n\nXinru Yuan, Wen Huang*, P.-A. Absil, K. A. Gallivan\n\n### Abstract\n\nThis paper addresses the problem of computing the Riemannian center of mass of a collection of symmetric positive definite matrices. We show in detail that the condition number of the Riemannian Hessian of the underlying optimization problem is never very ill conditioned in practice, which explains why the Riemannian steepest descent approach has been observed to perform well. We also show theoretically and empirically that this property is not shared by the Euclidean Hessian. We then present a limited-memory Riemannian BFGS method to handle this computational task. We also provide methods to produce efficient numerical representations of geometric objects that are required for Riemannian optimization methods on the manifold of symmetric positive definite matrices. Through empirical results and a computational complexity analysis, we demonstrate the robust behavior of the limited-memory Riemannian BFGS method and the efficiency of our implementation when compared to state-of-the-art algorithms.\n\nSubmitted"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8621832,"math_prob":0.7109025,"size":1577,"snap":"2020-34-2020-40","text_gpt3_token_len":355,"char_repetition_ratio":0.13350286,"word_repetition_ratio":0.10091743,"special_character_ratio":0.18769816,"punctuation_ratio":0.129771,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95796305,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-30T14:35:54Z\",\"WARC-Record-ID\":\"<urn:uuid:08e66a72-7df9-4f3e-b215-c210d72d169d>\",\"Content-Length\":\"3023\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa4375b2-fabc-404f-a526-93295e0a12ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:47a81e0a-3e7e-417c-a90c-195405d6f2e9>\",\"WARC-IP-Address\":\"128.186.104.71\",\"WARC-Target-URI\":\"https://www.math.fsu.edu/~whuang2/papers/CMGM.htm\",\"WARC-Payload-Digest\":\"sha1:S4HJMSNRWGIGXMBPZJFOGPLI6FFNGRYM\",\"WARC-Block-Digest\":\"sha1:NDN6HOLFL6GWPP6VG5AV3PF7YATMBMC4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402127075.68_warc_CC-MAIN-20200930141310-20200930171310-00056.warc.gz\"}"} |
https://link.springer.com/chapter/10.1007/978-3-030-72192-3_5 | [
"## 5.1 Introduction to Ultrasonic Inspection\n\nUltrasonic inspection is based on interaction (transmission, reflection and absorption) of the mechanical acoustical waves with the analysed structure at frequencies of 20 kHz or above. Conventional bulk wave ultrasonic techniques have been extensively used both for material property evaluation and for flaw detection. Material property evaluation is often based either on elastic modulus evaluation by measuring shear and longitudinal wave velocities, or estimation of frequency-dependent attenuation (Sol et al. 2018; Ito et al. 2015; Lin et al. 2011). Both parameters are very commonly used for evaluation of the characteristics and volume of the localized porosity in composites (Duong et al. 2015; Kim et al. 2013; Podymova et al. 2019). Meanwhile, the flaw detection relies on the analysis of surface, back-wall and intermediate echoes within the investigated structure, presuming that any inclusion, delamination or other defect will introduce a reflection that can be captured by the ultrasonic sensor (Zhang et al. 2020a; Hauffe et al. 2020). Because of their relative simplicity, the conventional bulk wave ultrasonic techniques are often used as a primary tool for evaluation of the structural integrity. Ultrasonic C-scan measurement is a common method to assess the structural integrity of composites using bulk ultrasonic waves. Using such technique, the structural response is collected at different spatial positions across the surface of the sample. The output image is usually a colour coded representation of the amplitude level (transmission losses) of the backwall signal. The success of such measurement depends on the type and origin of defects. For example, delamination and debonding defects can produce discrete reflection which can be detected at certain depths before arrival of the backwall signal. Porosity and ply waviness introduce scattering of ultrasonic signal and increased transmission losses (Zhang et al. 2020b; Towsyfyan et al. 2020; Pain and Drinkwater 2013). The multi-layered structure of the composite itself generates structural noise, which complicates the inspection, while the resin layers between the fibres can cause inter-ply reflections and interference of ultrasonic signal. Such interference is known to be stronger when the wavelength of ultrasonic signal is equal to odd number of layer thickness (Zhang et al. 2020a). In case of multiple defects present, a shadowing effect and structural noise generated by layers of composite can limit detectability of some defects, especially in the case of normal incidence inspection.\n\nAircraft structures may have defects of different type and origin. Some of the defects develop during the manufacturing, while others within the life-cycle of the structure. The most common defects found in aircraft structures are described in Chap. 3 of this book. As the bulk wave measurements are mostly localized, based on point-by-point acquisitions using encoded scanners it’s not an ideal choice for large and curved aircraft structures. Classical bulk wave inspection methods are currently used in scheduled maintenance of an aircraft. These methods are able to detect defects in limited area only. Furthermore, they usually require disassembly (to have access to parts, hidden defects) and submersion, are slightly operator-dependent and require to repeat the inspection procedure in different areas of the structure (Hellier 2012; Rocha et al. 2013). As a result, bulk wave inspections are reliable but despite advancements in non-contact measurement methods, yet slow and costly, especially for large engineering structures. The scheduled maintenance requires an aircraft to be grounded so it’s usually preferred to perform it as quickly and as rarely as possible. This puts some pressure for fleet operators and increases the risk of human errors. In the case of large structure inspection, additional challenges arise due to the unreliable contact between the structure and ultrasonic sensor which may appear during the scanning. Conventional bulk wave measurement methods require either the submersion or continuous water feed which limits their applicability on site (Towsyfyan et al. 2020; Ramzi et al. 2015). Special bubblers and water columns are being used then, to provide consistent coupling (Hsu 2013). Air coupled or laser methods enable to partially solve this issue, providing the flexibility to adapt to the complex geometries (Kazys et al. 2017; Pelivanov et al. 2014). However, the impedance mismatch induced losses of air-coupled transducers and expensive equipment of the laser-generated-ultrasound reduces the extent of such approaches.\n\nCurrent aircraft engineering is trying to implement a “Fail safe” design in which the structure is engineered to maintain the residual strength even in the failure of some structural elements. The scheduled inspection intervals are calculated according to the predicted crack growth rate and loads during the flight (Rocha et al. 2013). In such way it’s ensured that an aircraft will be safe to use before another inspection. Such engineering approach suggests that the inspection intervals could be prolonged with the real-time monitoring techniques, that provide defect state information. Hence, in the case of aircraft inspection, an important aspect is to be able to recognize and continuously monitor the development of defects, since it determines the safety of the passengers and would allow to perform condition based maintenance rather than scheduled. This goal can partly be achieved through Structural Health Monitoring (SHM) systems, which aim to inspect large structures in real-time using a distributed network of embedded sensors. The SHM systems usually are not the standalone ones, since they cannot achieve such high measurement accuracy and sensitivity as bulk wave techniques. On the other hand, such approach in contrast to bulk wave inspection does not require prior knowledge about the locations of defects, as it provides global inspection of the structure. As a result, SHM is frequently used to record the response of the structure under the operational loads and to identify and preferably localise the critical regions where the defects may occur. Meanwhile the bulk wave methods usually follow-up the SHM inspection for more precise defect assessment, size evaluation and etc. In most structures, defects are allowed to exist as long as they are considered as safe (Alokita et al. 2019). For a such already known defects, aircraft inspection can benefit from SHM which continuously monitors their development with the aim to capture the moment when they become critical. Typically a successful SHM system should address the challenges and requirements for inspection of aircrafts. These are usually large and thin structures possessing variable geometry which have to be inspected with one-side access. In most cases, inaccessible parts, hidden reinforcement elements and interfaces exist, while the monitoring system should be capable of detecting both defects like delaminations and structural changes i.e. ply waviness, fibre structure etc. This brings a lot of challenges for implementation of successful SHM system.\n\n## 5.2 Ultrasonic Guided Wave (GW) Inspection\n\nAs an alternative to conventional bulk wave ultrasonic inspection, ultrasonic guided waves (GW) emerged as a technique, which enables the implementation of large scale and on-demand SHM systems, through embedded sensor networks (Croxford et al. 2007). As a result, a periodic pointwise inspection can be replaced with real-time long-range condition monitoring, thus minimizing the downtime, human involvement and disassembly of components. Guided waves itself can be defined as a kind of ultrasonic waves that propagate along the boundaries of the structure, or along thin, plate like structural components. It’s a result of interference between the longitudinal and shear waves, that propagate back and forth in the analysed structure and conform to the distinct modes (Rose 2002). Guided waves interrogate with the entire bulk of the structure, are sensitive to the changes in elastic modulus or density of the material and are relatively weakly attenuated; hence, defects of different types and positions inside the component can be detected in large structures employing only a few sensors (Cawley 2007). As the velocity and attenuation of guided waves depend on the properties of the material, various sudden changes in the internal structure such as defects can be detected (Kralovec and Schagerl 2020).\n\nGuided waves arise in bounded media as a result of the superposition of bulk waves that are partially transmitted, reflected and refracted at a structure’s boundaries and/or internal interfaces. As the focus of this work is on aerospace applications, we are mainly concerned with light thin-walled structures, i.e. plates. In the case of plates, the medium is bounded by two parallel stress-free surfaces, which convert and reflect elastic waves. After multiple reflections, a steady-state pattern is established giving rise to the so-called Lamb waves (Lamb 1917). Lamb waves possess an infinite number of modes of two possible thickness-symmetry types of the displacement field: symmetric (Si) and anti-symmetric (Ai). There exist additional family of shear horizontal (SHi) modes, which decouple from Si and Ai modes for isotropic structures. The subscript (i = 0, 1…∞) denotes the order of each mode.\n\nVarious analysis methods can be employed for calculation of the relationship between wave speed and excitation frequency, i.e. dispersion relationships. For instance, for isotropic, homogeneous, linearly-elastic materials, the dispersion curves can be found using the method of potentials (Viktorov 1967; Rose 2004). Despite its simplicity and mathematical elegance, the method of potentials gives little insight into the physics of the problem. A substantially different approach relies on the partial wave technique (Solie and Auld 1973). Following the latter technique, the first step in formulating the dispersion relationships is the calculation of possible bulk waves for the infinite medium, followed by finding partial waves satisfying stress-free boundary conditions. In addition to dispersion, i.e. phase-frequency relations, the amplitude-frequency characteristics and mode-specific displacement patterns can be found by employing the abovementioned techniques.\n\nEach GW Lamb wave mode has specific dominant displacements; hence, different modes are applied for the detection of particular defects. The anti-symmetric modes possess dominant out-of-plane displacements at low frequencies, therefore they are commonly used for the detection of delaminations, surface and sub-surface defects (Ramadas et al. 2010; Teramoto et al. 2015). In contrast, symmetrical modes are dominated by in-plane displacements and are widely used for crack and notch assessment (Benmeddour et al. 2009). Such displacement distributions are usually non-uniform across the thickness of the structure and tend to rearrange while frequency-thickness product increases (Rose 2014). The fundamental anti-symmetric modes have a shorter wavelength, leading to a better sensitivity to the defects of small size. Fundamental shear horizontal mode (SH0) in isotropic media is non-dispersive, so it’s frequently employed in various fields intending to avoid complicated signal analysis (Nazeer et al. 2017). The shear horizontal mode has in-plane displacements that are perpendicular to the direction of wave propagation (Rose 2004). Most of the modes of Lamb waves unlike SH0 are dispersive and display frequency-dependent phase and group velocities. Fundamental 0th order modes usually are fairly non-dispersive at low frequencies and possess quite uniform mode shapes (in-plane and out-of-plane displacements) across the thickness of the sample. Such modes are easily generated and may be a good starting point for any guided wave inspection (Khalili and Cawley 2018; Senyurek 2015). Higher order modes possess relatively short propagation distances due to dispersion, but on the other hand are quite sensitive to small defects due to short wavelength and strong velocity dependence on sample thickness (Masserey et al. 2014; Masserey and Fromme 2015). Hence, the guided wave inspection can be categorized into short to medium and medium to long range.\n\nA typical example of short range inspection can be inspection of aircraft repair patches (Puthillath and Rose 2010; Caminero et al. 2013) and assessment of adhesive bonds (Fromme et al. 2017; Wojtczak and Rucka 2021). Adhesive bonding is frequently used in many aircraft structures to attach wing skins, fuselage stingers and metallic honeycomb to skins for elevators, ailerons, tabs and spoilers (Higgins 2000). Short range guided wave inspection possess relatively minor dispersion, hence the selection of the suitable mode is mostly determined by it’s sensitivity. Long range inspection is typically applied for inspection of wing skins, engine coverings and fuselages. In case of long range inspection the dispersion plays a vital role. Due to dispersion various frequency components of the signal travel at different velocities. This leads to a spread of the wave in time domain as it propagates in the structure. Typically for the S0 mode, the low-frequency components travel faster in comparison to high signal frequencies. In contrast, for A0 mode high frequencies tend to move more rapidly. As a result, the trail of S0 mode becomes contracted, while A0 mode becomes stretched-out (Mitra and Gopalakrishnan 2016). In the presence of dispersion, the velocity of different components of the signal depends on frequency and thickness product. Dispersion is undersirable effect as spreading of wave in time domain reduces the resolution while reduction of the amplitude due to wave spread limits the sensitivity of inspection system (Wilcox et al. 2001a). One of the simpliest way to achieve mode control is to select an appropriate frequency-thickness range where only a few fundamental modes exist. For example, if dispersion diagram shows that only a fundamental modes exist below 1 MHz·mm (see Fig. 5.1), it means that for 10 mm plate the fundamental modes will propagate at frequencies below 100 kHz. The frequency-thickness product plays vital role in guided wave inspection of aircraft components. These usually posses different complexity of the geometry—from simple plate-like skin elements, to bends and co-cured stiffeners. In such structures the existence of multiple modes becomes unavoidable, while each structural element introduces it’s own complexities. For example, stiffeners cause damping of A0 mode, due to energy leakage into the stiffener and mode conversion which leads to trailing waves. Meanwhile the disbond between skin and stiffener results in a wave propagating in the skin only (Panda et al. 2018). The velocity variation of guided wave modes indicates the thickness change of the structure, which may be caused by different defects, like delamination or disbond or corrosion. To mitigate the effect of dispersion, the bandwidth of the signal for a given plate thickness must be selected carefully, especially for the long-range guided wave applications. Other methods include sparse array excitation which will be discussed in next subchapter. Subsequent sections outline the theoretical background for unbounded and guided wave propagation. As for a linear elastic isotropic medium, guided wave propagation may be considered in a 2-D space, therefore the following discussion is restricted to that simplified case only.\n\n### 5.2.1 Governing Equations of GW Wave Propagation\n\n#### 5.2.1.1 Waves in Unbounded Media\n\nThe momentum equation for a two-dimensional unbounded linear elastic isotropic space is written as\n\n$$\\rho \\frac{\\partial^2\\boldsymbol{X}}{\\partial {t}^2}=\\nabla \\bullet \\boldsymbol{\\sigma}$$\n(5.1)\n\nwhere ρ denotes the density, X = [u, v]T the particle displacement vector in 2-D space, σ the Cauchy stress tensor and t the time. Equation (5.1), together with constitutive and geometric relationships, fully describes wave motion via particle displacements. For small-amplitude waves, the infinitesimal strain definition is given as\n\n$${\\varepsilon}_{ij}=\\frac{1}{2}\\left(\\frac{\\partial {X}_i}{\\partial {D}_j}+\\frac{\\partial {X}_j}{\\partial {D}_i}\\right)$$\n(5.2)\n\nwhere Xi denote ith component of the particle displacement vector and Di represents ith spatial direction. Alternative strain measures may be assumed for small but finite or large strains, e.g. the Green-Lagrange strain tensor (Packo et al. 2016).\n\nFor a linear elastic isotropic solid in 2-D, the constitutive relation—linking stresses and strains—yields\n\n$${\\sigma}_{11}=\\left(\\lambda +2\\mu \\right){\\varepsilon}_{11}+\\lambda {\\varepsilon}_{22}$$\n(5.3)\n$${\\sigma}_{22}=\\left(\\lambda +2\\mu \\right){\\varepsilon}_{22}+\\lambda {\\varepsilon}_{11}$$\n(5.4)\n$${\\sigma}_{12}=2\\mu {\\varepsilon}_{12}$$\n(5.5)\n\nCombining Eqs. (5.1), (5.2) and (5.3)–(5.5) and assuming the time-harmonic factor eiωt, the elastodynamic equation yields\n\n$$\\rho {\\omega}^2\\boldsymbol{X}-{\\boldsymbol{a}}_{\\mathbf{1}}\\frac{\\partial^2\\boldsymbol{X}}{\\partial {x}^2}-{\\boldsymbol{a}}_{\\mathbf{2}}\\frac{\\partial^2\\boldsymbol{X}}{\\partial {y}^2}-{\\boldsymbol{a}}_{\\mathbf{3}}\\frac{\\partial^2\\boldsymbol{X}}{\\partial x\\partial y}=0$$\n(5.6)\n\nwhere the matrices a1, a2 and a3 are given by\n\n$${\\boldsymbol{a}}_{\\mathbf{1}}=\\left[\\begin{array}{cc}\\lambda +2\\mu & 0\\\\ {}0& \\mu \\end{array}\\right],\\kern1em {\\boldsymbol{a}}_{\\mathbf{2}}=\\left[\\begin{array}{cc}\\mu & 0\\\\ {}0& \\lambda +2\\mu \\end{array}\\right],\\kern1em {\\boldsymbol{a}}_{\\mathbf{3}}=\\left[\\begin{array}{cc}0& \\lambda +\\mu \\\\ {}\\lambda +\\mu & 0\\end{array}\\right].$$\n(5.7)\n\nEquation (5.6) defines the wave motion for an unbounded, linear elastic medium using the Lamé constants λ and μ. To complete the Lamb wave problem, Eq. (5.6) is supplemented by boundary conditions in the following section.\n\n#### 5.2.1.2 Boundary Conditions\n\nStress-free boundary conditions for Lamb wave propagation in 2-D space require that\n\n$${\\sigma}_{ij}{n}_j=0\\kern0.75em \\mathrm{at}\\kern0.75em y=\\pm h,\\kern0.5em i=\\left\\{1,2\\right\\},$$\n(5.8)\n\nwhere n = ± [0 1]T is the surface outward-pointing normal vector and 2h is the plate thickness. Equation (5.8) results in two sets of equations for the top, y = + h, and bottom, y = − h , surface\n\n$$\\pm {\\boldsymbol{b}}_{\\mathbf{1}}\\frac{\\partial \\boldsymbol{X}}{\\partial x}\\pm {\\boldsymbol{b}}_{\\mathbf{2}}\\frac{\\partial \\boldsymbol{X}}{\\partial y}=0,\\kern0.5em \\mathrm{for}\\kern0.5em y=\\pm h,$$\n(5.9)\n\nwhere b1 and b2 are\n\n$${\\boldsymbol{b}}_{\\mathbf{1}}=\\left[\\begin{array}{cc}0& \\mu \\\\ {}\\lambda & 0\\end{array}\\right],\\kern1em {\\boldsymbol{b}}_{\\mathbf{2}}=\\left[\\begin{array}{cc}\\mu & 0\\\\ {}0& \\lambda +2\\mu \\end{array}\\right].$$\n(5.10)\n\nEquations (5.6) and (5.9) define the Lamb wave propagation problem for a plate in 2-D space.\n\n#### 5.2.1.3 Dispersion Relation\n\nThe Rayleigh-Lamb problem, given by Eqs. (5.6) and (5.9), assumes the solution of the form e+i(kx + ωt) and results in (k, ω) pairs that define guided Lamb wave modes, i.e. the dispersion curves. Relating wavenumbers and frequencies—or, equivalently, wave velocities and frequencies—dispersion curves pertain to phase-frequency characteristics of elastic waves as both k and ω appear in the exponential phase factor. It needs to be pointed out that for the full picture of wave propagation characteristics, Eqs. (5.6) and (5.9) may be used to determine the amplitude-frequency, (α, ω), characteristics usually referred to as excitability curves (Kijanka et al. 2018).\n\nThe solution to the Rayleigh–Lamb problem cannot be carried out analytically and requires a numerical procedure. Most widely used are the method of potentials and the partial wave technique (Solie and Auld 1973). While the latter offers clear insight into the physics of the problem, it is most frequently used for practical analyses. Following this procedure (see (Packo et al. 2016) Appendix A), the solutions are given by\n\n$$\\boldsymbol{X}=\\frac{1}{2}\\alpha {\\left[{u}^{\\left(S,A\\right)}\\kern0.5em {v}^{\\left(S,A\\right)}\\right]}^T{e}^{+i\\left( kx+\\omega t\\right)}+c.c.,$$\n(5.11)\n\nwhere c.c. stands for the complex conjugate of all preceding terms and α denotes the wave amplitude. It clearly follows from the analysis that the modes display even or odd symmetry with respect to the plate's midplane, and are therefore called symmetric—u(S), v(S)—and anti-symmetric—u(A), v(A)—Lamb wave modes. Respective components u(S, A) and v(S, A) are given by\n\n$${u}^{(S)}:\\kern0.75em \\frac{k}{\\gamma_L}\\mathit{\\cos}\\left(y{\\gamma}_L\\right)+\\frac{2k{\\gamma}_S}{k^2-{\\gamma}_S^2}\\frac{\\mathit{\\sin}\\left(h{\\gamma}_L\\right)}{\\mathit{\\sin}\\left(h{\\gamma}_S\\right)}\\mathit{\\cos}\\left(y{\\gamma}_S\\right),$$\n(5.12)\n$${v}^{(S)}:\\kern0.75em i\\ \\mathit{\\sin}\\left(y{\\gamma}_L\\right)+i\\frac{2{k}^2}{k^2-{\\gamma}_S^2}\\frac{\\mathit{\\sin}\\left(h{\\gamma}_L\\right)}{\\mathit{\\sin}\\left(h{\\gamma}_S\\right)}\\mathit{\\sin}\\left(y{\\gamma}_S\\right)$$\n(5.13)\n\nand\n\n$${u}^{(A)}:\\kern0.75em i\\ \\frac{k}{\\gamma_L}\\mathit{\\sin}\\left(y{\\gamma}_L\\right)+i\\ \\frac{2k{\\gamma}_S}{k^2-{\\gamma}_S^2}\\frac{\\mathit{\\cos}\\left(h{\\gamma}_L\\right)}{\\mathit{\\cos}\\left(h{\\gamma}_S\\right)}\\mathit{\\sin}\\left(y{\\gamma}_S\\right),$$\n(5.14)\n$${v}^{(A)}:\\kern0.75em \\mathit{\\cos}\\left(y{\\gamma}_L\\right)+\\frac{2{k}^2}{k^2-{\\gamma}_S^2}\\frac{\\mathit{\\cos}\\left(h{\\gamma}_L\\right)}{\\mathit{\\cos}\\left(h{\\gamma}_S\\right)}\\mathit{\\cos}\\left(y{\\gamma}_S\\right)$$\n(5.15)\n\nfor symmetric and antisymmetric modes, respectively.\n\nBy employing the partial wave technique, the solution pairs, (k, ω), are found in two steps. First, partial waves (i.e. waves in infinite medium) are obtained from an eigenvalue problem obtained by combining (5.11) and (5.6). Second, combinations of partial waves that satisfy the boundary conditions, Eq. (5.9), are retained.\n\nFigure 5.1 shows example solutions for a Rayleigh-Lamb problem for an aluminium plate (λ = 60,500 MPa, μ = 25,900 MPa and ρ = 2700 kg/m3), namely multi-modal dispersion curves in (k, ω) (Fig. 5.1a) and (V, ω) (Fig. 5.1b) spaces, and the corresponding excitability curves (Fig. 5.1c). Respective modes are labeled by a letter (S—for symmetric and A—for antisymmetric) and a subscript denoting the mode order. The fundamental antisymmetric and symmetric modes, A0 and S0 respectively, are most frequently employed for damage detection. Clear dispersive behaviour—i.e. frequency-dependent velocity—can be observed for all modes.\n\nWhile interacting with defects, guided waves may undergo partial reflection, refraction, scattering and mode conversion. It means that the energy of the signal can be partially reflected back to the transmitter and convert to a different type of mode. Moreover, in the base of composite inspection, anisotropy plays a significant role, leading to directional dependence of wave velocity for each mode. Those characteristic features of guided waves are usually exploited in the detection of various defects such as cracks (Masserey and Fromme 2015; Chen et al. 2019; Mardanshahi et al. 2020; Barski and Stawiarski 2018), delaminations (Zhao et al. 2019; Munian et al. 2020; Raišutis et al. 2010) and bonding integrity (Yu et al. 2017; Ochôa et al. 2019; Vanniamparambil et al. 2015; Fan et al. 2013).\n\nDespite that guided waves offer significant advantages over the conventional bulk wave testing, many limitations of their use in the majority of engineering structures are still present to overcome. Since many guided wave modes co-exist simultaneously, each having frequency and direction-dependent velocity, after several reflections and mode conversions the receivers usually capture overlapped and distorted time-domain signals that are difficult to interpret. The responses that are captured from defects are usually weak and may be concealed anywhere in the received signal. Such a response captured from the structure varies from one geometry to another and depends on the type of excitation and environmental conditions, such as temperature, loads and movement-induced vibrations (Su and Ye 2006). In order to obtain useful measurement data it’s necessary to excite and receive single guided wave mode minimising the coherent noise from other propagating modes. The variety of structures present in the aerospace sector, ranging from wing panels, to stiffened joints each time requires new developments in sensing, measurement and data analysis technologies. Hence, a deep understanding of the mechanism of guided wave propagation, spatial coverage, interaction with medium and the defects of interest is required for successful in-situ guided wave application.\n\n### 5.2.2 Active and Passive Guided Wave Inspection\n\nIn general, guided wave inspection can be categorized into either active or passive. Passive techniques aim to record structural response induced by natural loads of the structure which happen during the flight of an aircraft. A comprehensive review of passive acoustic NDT techniques is presented in Chap. 7 of this book. One other good example of the passive technique is an embedded optical fibre Bragg grating sensor which has a series of parallel engraved lines (index points of refraction) that provide different reflection and transmission of light under varying strains in the structure. In case of fibre breakage, loss of transmitted light is observed, while the strain changes leads to altered refraction coefficient (Papantoniou et al. 2011). Optical fibre sensors have been extensively used to measure the temperature and strain of aircraft structures and even to detect the delamination type defects by analysing the wavelength shifts and reflection spectra of light (Ecke 2000; Takeda et al. 2002). Such inspection techniques are covered in Chap. 8. Active SHM utilizes both actuators and sensors to measure a response of the structure to a known input. Both bulk wave and guided wave techniques can be categorized as active, however in aerospace SHM applications the guided waves inspections are more commonly used. Guided wave excitation is usually determined by the design of the structural health monitoring system. It essentially depends on the type of defects which has to be detected and the structure under analysis. In the ideal case, the best guided wave mode for inspection should feature low dispersion and attenuation, high sensitivity to damage, easy excitability and detectability (Su and Ye 2006). The dispersion and attenuation depend on the excitation frequency and material properties, while the sensitivity to the defect is determined by the displacement profile of each mode.\n\n### 5.2.3 Dispersion and Attenuation\n\nHigh dispersion usually limits the sensitivity to the defect and the inspection length, as the wave-packet of the mode becomes distorted with the distance. To reduce the effect of dispersion, narrowband excitation is usually applied by increasing the number of cycles of the excitation signal. As a result, the bandwidth of the signal decreases, limiting the extent of dispersion (Wilcox et al. 2001b). On the other hand, it means that the signal itself becomes longer in duration which reduces the temporal resolution. Wilcox et al. introduced a concept of minimum resolvable distance (MRD), which shows the sweet spot on the dispersion curve, where the best compromise between propagation distance, number of excitation signal cycles, wavelength and resolution can be estimated (Wilcox et al. 2001a):\n\n$$\\mathrm{MRD}=\\frac{c_0}{d}{\\left[l\\left(\\frac{1}{c_{\\mathrm{min}}}-\\frac{1}{c_{\\mathrm{max}}}\\right)+{T}_{\\mathrm{in}}\\right]}_{\\mathrm{min}},$$\n(5.16)\n\nwhere l and d are the wave propagation distance and the plate thickness; cmin, cmax are the minimum and maximum velocities through the distance l; c0 is the velocity at the central frequency; Tin is the initial time duration of the wave-packet. Typically, fundamental modes of guided waves such as A0 and S0 possess low MRD values and therefore require less cycles for adequate resolution.\n\nDifferent post-processing strategies exist that can be used to reduce the effect of dispersion after the signal arrives to the sensor. For example, Wilcox presented a technique that uses a priori knowledge of dispersive characteristics of guided wave mode to compress the signals by mapping the time to distance domains (Wilcox 2003). De Marchi et al. introduced the warped frequency transform method to reduce the effect of dispersion. The authors used chirped pulse excitation with the proposed dispersion compensation technique to improve arrival time estimation in the detection of simulated defects on aluminium plate (De Marchi et al. 2013). However, in most cases the reconstructed signals are either still deformed due to non-linearity of the transformation function or the compensation methods are usually efficient for the reconstruction of one targeted mode (Luo et al. 2016). Recently, the sparse decomposition methods, that use a dictionary of non-dispersive signals to decompose guided wave responses obtained from the structures were proposed (Xu et al. 2018). The dictionaries are built exploiting the dispersion curves of the signal, which requires precise knowledge of the material properties. Each atom in the dictionary represents a dispersive signal at a certain distance. By comparing each atom to the measured signal, the non-dispersive analogue can be found.\n\nAttenuation is another limiting factor that can influence the design of the monitoring system. Considering that the reflection from damage is usually weak, the attenuation of guided waves has to be sufficiently low in order to capture such reflections at some distance. The main factors that determine the level of attenuation are dispersion, beam divergence, material damping and leakage of the acoustic energy into the surrounding media (Long et al. 2003; Mustapha and Ye 2014). The leakage depends on the difference between the phase velocity of guided waves in the structure and the velocity of bulk waves in the surrounding medium. Significant leakage is observed in those cases when the phase velocity of guided waves is above the bulk wave velocity in the embedded media. The leaky behaviour of guided waves are usually exploited while investigating flow and defects in gas or fluid loaded pipes (Djili et al. 2013; Zichuan et al. 2018; Mazzotti et al. 2014).\n\n### 5.2.4 Guided Wave Excitation and Mode Selection\n\nUltrasonic guided waves can be excited using different strategies, depending on the application of the monitoring system. Direct contact methods can be either surface mounted or embedded into the structure. Surface-mounted sensors, like piezoelectric wafers are cheap, lightweight, can be arranged in different configurations and easily replaced if necessary (Giurgiutiu 2015). Simple instrumentation is required for such sensors as they operate on the direct and inverse piezoelectric effect. However, surface mounted solutions are not very attractive for in-situ applications as they change the aerodynamics of the structure. Thus for in-service aerospace monitoring systems, integrated sensors are preferred. These have high requirements for durability, as the monitoring system has to be reliable for the whole life-time of an aircraft. Different studies are available that analyse the durability of integrated sensors under varying environmental and cyclic conditions (Giurgiutiu et al. 2004; Melnykowycz et al. 2006; Tsoi and Rajic 2008). Different type of integrated sensors exists, while most commonly piezoelectric and fibre Bragg grating are used. Sensor integration can introduce additional resin-rich regions in the laminate or the sources of crack initiation around the corners of the piezoelectric sensor, therefore the shape and integration design has to be considered carefully taking into account overall strength of the host structure (Veidt and Liew 2012). Fibre Bragg grating sensors are lightweight and designed to be integrated into the composite structures. Such sensors do not require wiring, are cheap, durable and do not change the strength of the host structure (Veidt and Liew 2012; Majumder et al. 2008). Non-contact inspection methods like air-coupled or laser ultrasonics provide an ability to adjust to the complex surface of the structure. Such methods can be adjusted to excite different guided wave modes as the angle between the transducer and the sample can be easily adjusted (Panda et al. 2018; Römmeler et al. 2020). Laser and conventional air-coupled monitoring are usually combined, using conventional piezoelectric transducers for transmission and laser vibrometer for the reception (Jurek et al. 2018). Other researchers combined air-coupled ultrasonic testing with thermal imaging (Rheinfurth et al. 2011). Despite the advantages of air-coupled ultrasonic inspection, it requires some expensive equipment, such as scanners, laser heads etc. and also access to inspected parts; hence, it can be performed only during the maintenance break of an aircraft.\n\nIn real-world situations, using many abovementioned guided wave excitation methods multiple modes are being generated simultaneously, unless some specific excitation strategies are applied. Many studies are available that seek to excite specific mode of guided waves (Lee et al. 2008; Li et al. 2016). The simplest solution is to select specific operating points of guided wave dispersion curves, where only a desired modes exist. For example, at low frequency-thickness products, fundamental zone with zeroth order modes exist. Such modes possess relatively simple mode shapes, are less dispersive and may be easily generated. Limiting the excitation bandwith and operating at low frequencies can effectively control number of modes present in the structure and the amount of their dispersion. At very high frequency-thickness products (20 MHz · mm and above), higher order guided wave modes have similar group velocities, hence they form a non-dispersive cluster which is called Higher Order Mode Cluster (HOMC) (Jayaraman et al. 2009). Such cluster of modes possess reduced surface sensitivy, thus it’s a good choice for localised flaw detection, like pitting corrosion. In the medium frequency-thickness range, many modes exist which are usually dispersive and possess much more complex mode shapes. Inspection at such operating point provides better sensitivity to small defects and thickness variations which is achieved at a cost of mode purity. In order to supress unwanted modes, some of the following techniques could be mentioned. For instance, in case of air-coupled or laser beam excitation, the ultrasonic signal can be introduced at some incidence angle, which allows to excite specific modes and reduce coherent noise. Such incidence angle is frequency-dependent and may be calculated analytically. This selective mode excitation approach is usually valid for fundamental modes only, as the incident angles of different modes tend to overlap at higher frequencies.\n\nIn case of direct sensor placement on the surface of component, different selective mode excitation strategies must be applied. Giurgiutiu et al. used piezoelectric wafer active sensors to excite single guided wave mode (Giurgiutiu 2005; Xu and Giurgiutiu 2007). For a particular mode, the maximum of the strain and displacement functions is achieved when the width of the transducer is equal to odd multiple half wavelengths of the desired mode. In contrast, the minimum occurs in case the width of the sensor is equal to an even number of half wavelengths. In such a way, the size of the transducer introduces spatial filtering phenomena, which allow to enhance or suppress the vibrations of the particular guided wave mode (Giurgiutiu 2011; Samaitis and Mažeika 2017). Another approach for excitation of single guided wave mode is based on the use of interdigital or comb transducers, which consist of two-finger type electrodes that are interchanged and driven by an opposite phase electrical field (Monkhouse et al. 2000; Bellan et al. 2005). The type of mode which is introduced to the structure is determined by the pitch between finger electrodes which has to be equal to half the wavelength of the desired mode. Several design extensions of the classical interdigital transducers can be found, which were proposed by Salas et al. (Salas and Cesnik 2009) and Jin et al. (Jin et al. 2005). Glushkov et al. used a co-axial ring-type transducer for selective omnidirectional mode excitation (Glushkov et al. 2010). The major drawback of piezoelectric wafers and interdigital transducers is that the mode selectivity is related to the size of the transducer. It means that the set of sensors is required to excite different modes. At low frequencies, the physical dimensions of such transducers become large and not convenient for many applications.\n\nOther approaches to generate a single mode are based on the positioning of two or more piezoelectric elements in a sequence on the surface of the sample at the inter-element distance equal to the wavelength of the desired mode (Grondel et al. 2002). Similarly, elements can be positioned on the opposite surfaces and in-phase or out-of-phase excitation is then used to drive the mode of interest (Seung and Hoon 2007). The use of the phased-arrays is another field that contributes to the selective guided wave excitation. Fromme used 32-element circular array and phase addition algorithm to excite A0 mode on aluminium plate (Fromme et al. 2006). The abovementioned approaches are most suitable for excitation of fundamental guided wave modes such as A0 or S0. It has to be considered that all modes of the wavelength determined by the pitch of array elements will be excited, which is especially important working with higher-order guided wave modes, thus reducing the mode selectivity. To overcome such a problem, recently Veit et al. proposed an approach for selective guided wave excitation using conventional phased array probe having small pitch relative to wavelength of targeted mode. By controlling the input signal bandwidth and the angle of generated beam even higher order modes can be excited both on metallic and composite structures (Veit and Bélanger 2020).\n\nDifferent researches show, that the mode purity is an important issue for selective guided wave excitation. Usually, the approaches described above can produce some dominant modes; however, other modes might exist with significantly lower displacements and contribute to an overall noise of the signal. Other transducers, especially phased arrays, possess low dynamic range, thus they are limited to detection of relatively large defects.\n\n## 5.3 Defect Detection\n\nGuided wave defect detection methods can be categorized into baseline and baseline-free. The baseline methods require a baseline dataset, which describes defect-free state of the structure. Then each further collected signal is compared to a baseline to detect the presence of damage. If there are any structural changes present, the guided waves will be reflected or scattered by them. Subtracting such signal from baseline will give the residual, which may indicate the presence of damage. This technique allows to eliminate permanent reflections from object boundaries. Despite it’s simplicity, such technique is extremely sensitive to changes of surrounding temperature, loads, transducer bonding, aging etc. For instance, some investigations show, that if magnitude of the reflection from the defect is at least –30 dB compared to the direct arrival, the temperature change of 10 °C can conceal the defect even though temperature compensation techniques are used (Croxford et al. 2010; Dai and He 2014). In order to make baseline subtraction work, non-damage related patterns have to be eliminated. The strategies to rule out the temperature influence on guided wave signals are discussed in more details in the POD section of this chapter.\n\nThe baseline free methods do not require a set of baseline signals, thus are less susceptible to envinronmental changes, transducer bonding, etc. Time reversal techniques can be presented as an example of baseline free damage detection (Sohn et al. 2007). Such approach uses at least two transducers where one of them is dedicated for time reversal of the signal received from the first sensor and reemits it back. The whole procedure can be described as follows:\n\n1. 1.\n\nthe wave is introduced into the structure by applying the voltage VA(t) to transmitter A;\n\n2. 2.\n\nthe propagated wave is captured by sensor B and recorded as voltage VB(t);\n\n3. 3.\n\nsignal VB(t) is reversed in time and reemitted back to transducer A;\n\n4. 4.\n\nfinally, transducer A receives the signal VBA(t) and compares it with the original input VA(t).\n\nFor a defect-free structures, the input signal VA(t) should correlate to the reconstructed signal VBA(t), while any mismatch indicates structural changes (Park et al. 2009; Mustapha and Ye 2015). Some studies show that such a technique is not suitable for notch detection in metallic structures, as such defect does not break the time reversibility and only changes the amplitude of the received signals (Gangadharan et al. 2009). It should be also noted during analysis that the reciprocity of the system is only limited to the directly arriving waves. Boundary reflections and non-uniform distribution of attenuation properties may cause asymmetry in wave fields and differences between the original and the re-sent time-reversed signals, especially for multi-modal wavefields.\n\n### 5.3.1 Defect Localisation and Imaging: Sparse, Phased Arrays and Guided Wave Tomography\n\nBoth abovementioned methods are dedicated to detecting structural changes. Unfortunately, these are not suitable to localize and characterize damage. In general, two defect detection approaches based on sensor type and placement can be identified—sensor networks or sparsed arrays and phased arrays (Rocha et al. 2013; Michaels 2016). Sparse arrays use a distributed network of discrete omnidirectional transducers which are positioned at specific regions of interest. Meanwhile phased arrays use closely spaced elements and are based on steering of the wavefront at different directions by applying different lag to each array element. Despite different sensor architectures, all damage detection and localisation methods are based on the assumption that any present discontinuity will produce an unexpected echo, which can be received by sensor.\n\nDefect localization methods are mostly based on time-of-flight (ToF) measurement of defect scattered guided wave signals. In the simplest 1D case damage can be localized as shown in Fig. 5.2a. In such arrangement, the distance l0 and signal arrival time T0 between sensors A and B are known. In case of the defect, an additional reflection will be received by sensor B at time instant T1. The distance x to the defect can be calculated according to the equation (Dai and He 2014):\n\n$$\\frac{l_0}{T_0}=\\frac{2x}{T_1}.$$\n(5.17)\n\nThis works well in presence of single mode only, however the corrections are necessary in case of mode-conversion. In 2-D case (Fig. 5.2b), the spatial defect position can be estimated by calculating the ToF of the signal going from the transmitter through the flaw (Michaels and Michaels 2007):\n\n$${t}_{\\mathrm{t}\\mathrm{r}}^{\\mathrm{f}}=\\frac{\\sqrt{{\\left({x}_{\\mathrm{t}}-{x}_{\\mathrm{f}}\\right)}^2+{\\left({y}_{\\mathrm{t}}-{y}_{\\mathrm{f}}\\right)}^2}+\\sqrt{{\\left({x}_{\\mathrm{r}}-{x}_{\\mathrm{f}}\\right)}^2+{\\left({y}_{\\mathrm{r}}-{y}_{\\mathrm{f}}\\right)}^2}}{c_{\\mathrm{g}}}.$$\n(5.18)\n\nwhere subscripts t, r, and f denote the 2-D coordinates of the transmitter, receiver and flaw. To detect damage location with sparsed array, at least three sensors are required. By using triangulation method it’s possible to find an intersection of three regions produced by the sensors, where the possible damage is likely to occur. The shape of regions of the likely defect position will depend on transmission and reception approach. If the measurements are taken recording an echo received by each transducer and the wave propagates spherically, the region of the sensor will be in the shape of circle. On the other hand if each transducer acts as transmitter once, while all the transducers act as receivers, the region of the sensor will be in the shape of ellipse (Rocha et al. 2013). Such approach works well if wave velocity is the same in every direction. Otherwise the corrections have to be made and the actual location of the damage will be different. By using at least three sensors the image over the region of interest can be generated. In case of N sensors, the defect scattered signals will arrive at different time instances for each transmitter-receiver pair, depending on actual defect position leading to N(N−1)/2 different signals paths. To create an image, the evenly spaced grid points over the inspection area are defined. Then the pixel value at the reconstruction point (x,y) according to the delay and sum (DAS) algorithm can be estimated as (Michaels 2008):\n\n$$P\\left(x,y\\right)=\\frac{1}{N}\\sum \\limits_{n=1}^N{\\left|{\\omega}_{nxy}{r}_n\\left(t-{t}_{nxy}\\right)\\right|}^2$$\n(5.19)\n\nwhere ωnxy—is reconstruction weight at specific point (x,y), tnxy—is the time delay which can be calculated as dnxy/cg where dnxy—distance from transmitter through point (x,y) to receiver, cg is the group velocity. Then the 2-D defect map can be created, by repeating this procedure at spatially distributed reconstruction points. At the actual defect locations, such addition will lead to constructive interference of received signals. The major drawback of the DAS method is a large point spread function and artefacts which may be caused by wave reflections from the boundaries and mode-conversion (Michaels 2016).\n\nReconstruction algorithm for probabilistic inspection of defects (RAPID) method can be shown as another example of sparse array imaging (Zhao et al. 2007). RAPID method uses a circular arrangement of the sensors and assumes that most significant signal changes appear in the direct wave path. Hence, between each transmitter receiver pair a linearly decreasing elliptical spatial distribution of signal change effects due to defects is presumed. In case of N PZT elements, the defect probability at imaging position (x,y) can be expressed as (Zhao et al. 2007):\n\n$$P\\left(x,y\\right)=\\sum \\limits_{i=1}^{N-1}\\sum \\limits_{j=i+1}^N{P}_{i,j}\\left(x,y\\right)=\\sum \\limits_{i=1}^{N-1}\\sum \\limits_{j=i+1}^N{A}_{i,j}\\left(\\frac{\\beta -{R}_{i,j}\\left(x,y\\right)}{\\beta -1}\\right),$$\n(5.20)\n\nwhere Pi,j(x,y) is defect distribution probability estimate for ith transmitter and jth receiver, Ai,j is signal difference coefficient of the same sensor pair, (β-Ri,j(x,y))/(β-1) is the lineary decreasing elliptical spatial distribution. Different indicators can be used for 2-D reconstruction of the defect position and definition of the pixel value in the reconstruction grid using sparse arrays. For example, Kudela et al. (Kudela et al. 2008) used a concept of damage influence map, which measures the match between the excitation signal and reflection from defect. The idea is based on the arbitrary positioning of the excitation signal on the response from the structure employing different time delays. The delay applied to the signal is then equal to the ToF from the transmitter to the likely damage position. The match between two signals at location (x,y) here is expressed by (Kudela et al. 2008):\n\n$${e}_k\\left(x,y\\right)=\\underset{t_0}{\\overset{t_0+\\Delta {t}^{\\ast }}{\\int }}{\\hat{S}}_{\\mathrm{T}}(t)\\left[F(t)G\\Big(x,y\\Big){\\hat{S}}_{\\mathrm{R},k}(t)\\right] dt,$$\n(5.21)\n\nwhere t0 and Δt* are the start and the width of the time window; $${\\hat{s}}_{\\mathrm{T}}(t)={S}_{\\mathrm{T}}\\left({t}_0,{t}_0+\\varDelta t\\ast \\right)$$ is the windowed excitation signal; $${\\hat{S}}_{\\mathrm{R},k}(t)={S}_{\\mathrm{R},k}\\left({t}_0+\\Delta t,{t}_0+\\Delta t\\ast \\right)$$ is the signal registered with the kth receiver, F(t) is the window function (Gauss, Hann, etc.); G(x,y) = eα(d0P+dPk) is the function dependent on the attenuation; α is the attenuation coefficient; d0P and dPk represent the distances between the transmitter-imaging point and imaging point-receiver; Δt is the signal time shift, which depends on the x, y coordinates of the imaging point and the group velocities c0P and cPk. The total match at location (x,y) for all transmitter-receiver pairs can be expressed as (Kudela et al. 2008):\n\n$$E=\\sum \\limits_k\\underset{S}{\\int }{e}_k\\left(x,y\\right) dS\\approx \\sum \\limits_k\\sum \\limits_{i,j}{e}_k\\left({x}_i,{y}_j\\right),$$\n(5.22)\n\nModifications of the proposed technique exist, which were introduced by Wandowski et al. (Wandowski et al. 2011; Wandowski et al. 2016). A slightly different approach was used by Michaels (Michaels and Michaels 2007), where authors used band pass filters with various central frequencies to obtain a set of bandlimited signals for each transmitter-receiver pair. Then the defect map is being created for each central frequency of the applied filter. Eventually, all individual images are combined taking minimum pixel value from all corresponding images, which minimizes phasing and other artefacts.\n\nA modification of sparse array architecture was proposed by Giridhara et al. (Giridhara et al. 2010) who implemented the radial segmentation technique. It consists of a single transmitter and radially distributed receivers, which divide the object into circumferential segments. The signals from neighbouring transducers are compared to determine the location of the defect. If the flaw is somewhere along x axis (Fig. 5.3a), the sensor signals (S2 and S8; S2 and S1; S1 and S8) will indicate some changes in the structure. If two signals from sensors S2 and S8 match, then the damage is in the segment S1 and S2 or in the segment S1 and S8. The radial segment of the defect is determined by measuring the ToF of the reflection from the flaw with sensors S2 and S8. The exact defect angular, θ, and radial, rd, position estimates are found by using triangulation as shown in Fig. 5.3b (Giridhara et al. 2010):\n\n$$\\theta =\\phi +{\\cos}^{-1}K;\\phi ={\\tan}^{-1}\\left(\\frac{py_{i+1}-{qy}_i}{px_{i+1}-{qx}_i}\\right);K=\\cos \\left(\\theta -\\phi \\right).$$\n(5.23)\n$${r}_d=\\frac{p}{2\\left({x}_i\\cos \\theta +{y}_i\\sin \\theta -{d}_i\\right)}=\\frac{q}{2\\left({x}_{i+1}\\cos \\theta +{y}_{i+1}\\sin \\theta -{d}_{i+1}\\right)},$$\n(5.24)\n\nwhere $$p={x}_i^2+{y}_i^2+{d}_i^2,q={x}_{i+1}^2+{y}_{i+1}^2+{d}_{i+1}^2,$$ di and di+1 are the total travel path from the transmitter through the flaw to the sensors Si and Si+1 respectively (di = rd + a, di+1 = rd + b, rd = x2 + y2); a and b are the distances of the sensors Si(xi,yi), Si+1 (xi+1,yi+1) to the damage P (a2 = (xxi)2 + (yyi)2, b2 = (xxi+1)2 + (yyi+1)2).\n\nThe sparse array approach interrogates the damage from multiple angles, hence the forward scattered and backscattered signals can be recorded if the damage is present within the area of array. Moreover, the reflection energy from the defect is more uniform within the region of sparse array in contrast to phased array inspections. On the other hand, as the phased array elements are located in close proximity to each other, it’s much easier to leverage the phase difference which is mainly caused by errors in phase velocity, transducer locations and variation of transducer characteristics (Michaels 2016). Using the phased array imaging, the array elements are delayed so that the azimuth of the wave φ is equal to the azimuth of target reflector φ0. To obtain the target direction most of phased array methods sweep the beam at different directions and estimate the maximum of received signal energy. Once the target direction φ0 is obtained, the distance to the reflector can be estimated using cross correlation or other time delay measurement method. An example of phased array imaging can be embedded ultrasonic structural radar (EUSR) method (Giurgiutiu and Bao 2004; Purekar et al. 2004). The use of phased arrays offers some advantages over single transmitter-receiver measurements, such as steering and focusing, hence large area of the sample can be examined and direction of reflector can be determined almost instantly. However, in practice due to multiple reflections and structural noise it becomes difficult both to distinquish reflector and to measure the time-of-flight precisely, which leads to defect positioning errors. Aircraft components made from composite materials introduce additional complexity for damage detection and localisation. The mechanical properties of composites are direction dependent, hence the circular or elliptical damage regions are no longer valid. Phased array beam steering becomes more complex, as radial velocity distributions must be known. As the velocity of guided waves varies with the propagation direction, wavelength and operating point on the dispersion curve (and dispersion itself) change as well. It means that the sensitivity to the defect and resolution are changing, especially if operating point is located on the dispersive region of the selected mode. This shows that each unique structure requires adaptation of the monitoring system—determination of radial velocity distributions, evaluation of boundary reflections (taking into the account velocity information), identification of present modes. Guided wave tomography is one of the most common imaging methods used to detect and localize structural changes. The tomography aims to reconstruct the spatial distribution of the material properties, which can be based on the projection of wave velocity, attenuation, frequency shift or other features (Belanger and Cawley 2009). To get the desired resolution, a large number of projections are required which can be collected either using transmission or reflection approach. Among the sensor arrangement topologies, the crosshole, double-crosshole and fan-beam are the most popular ones (Park et al. 2020). Several different tomographic reconstruction techniques exist, namely straight-ray, bent ray and diffraction tomography (Belanger and Cawley 2009; Willey et al. 2014; Belanger et al. 2010). The straight-ray methods neglect refraction and diffraction assuming that projection data corresponds to a line integral of given parameter. Bent ray methods take into account wave field of reflected from the defect, while diffraction tomography is based on Born’s approximation.\n\n### 5.3.2 Guided Wave Interaction with Actual Structural Defect\n\nThe damage detection and localisation principles described above exploit many assumptions that simplify the reconstruction. In realistic structures, the reponse of the defect is much more complex. For example, delamination is one of the most common defects found in multi-layered structures, which develop internally between neighbouring layers of the laminate. After the interaction with delamination, guided waves are scattered and convert to other modes (Feng et al. 2018). Various researches show that fundamental modes of guided waves propagate separately in two sub-laminates created by defect and then interact with each other after exiting the damaged area. A detailed guided wave interaction with delamination type defects was presented by Ramadas et al. who found that in case of delaminations positioned symmetrically across the thickness of the laminate, incident A0 mode converts to S0 within the defect and then back to A0 after exiting. For asymmetrical defects, the mechanism is slightly different as an additional S0 mode is present upon interaction with defect (Ramadas et al. 2009; Ramadas et al. 2010). It was also observed by various research groups that Lamb wave scattering at the beginning and tip of defect is determined by its position. Schaal et al. estimated frequency-dependent scattering coefficients for fundamental Lamb wave modes at both ends of the defect (Schaal et al. 2017). Based on this approach, Hu proposed a technique to locate delamination defects based on reflection and transmission coefficients of A0 and S0 modes (Hu et al. 2008). Shkerdin and Glorieux (Shkerdin and Glorieux 2004) related the transmission coefficients of Lamb waves with the depth and length of delamination, while Birt et al. (Birt 1998) analyzed the magnitude of reflected S0 mode and its relation to the width of delamination. This shows that different indicators can be developed to detect defects and to assess their parameters. A complex guided wave scattering and mode conversion upon interaction with defects allows to develop various tools to identify and characterize the damage. In most simple cases, ToF measurements of reflected and transmitted guided wave modes can be used to detect the existence and location of the damage. Furthermore, by analysing the magnitude patterns and velocities of guided wave modes, other features like defect size and depth can be extracted.\n\n## 5.4 Reliability of SHM Systems\n\nSHM is a technique used for monitoring the integrity of in-service structures of aircraft, bridges, piplines and other components which are continuously exposed to operational load and environmental influence. The goal of SHM technology is to complement NDE techniques to improve reliability of the structure and reduce inspection and repair costs (Meeker et al. 2019; Etebu and Shafiee 2018). Thus, performance of the SHM system is characterized by the quality of sensors, their mounting and reliability of measurements. Reliability of SHM system depends on the quality of received data (Li et al. 2019; Datteo et al. 2018). SHM systems and NDE techniques use the same physical principle for damage detection. However, there are notable differences between the two systems. SHM measurements are carried out by the same array of sensors fixed in the same locations and provide continious monitoring of the structure. Hence, in contrast to traditional NDE, each new result obtained from sequentially repeated inspections depends on the previous one in SHM system. Assessment of structural integrity is characterized by a large period of time between the first and last inspection in sequence of NDE technique. Conversely, when short time period passed the process is called monitoring. Therefore, SHM is able to control structural integrity in real-time and detect damage before the scheduled maintenance. Another essential distinction of SHM and traditional NDE are the different causes which affect the measurements (Kabban et al. 2015). In the case of SHM application the sources of variability are not the same as for NDE. For instance, NDE variables as sensors, instrumentation and operator are fixed parameters in SHM. Sources of variability in SHM are mostly relate to in-situ effects, such as temperature, aging and load variation. Additionally, key point of SHM system is a fixed nature of sensing probes which leads to a lack of variation in sensor response caused by human factor. If the variability of in-situ effects is low or it can be filtered, the SHM registers variability of the system to defect geometry, inconsistencies of sensors mounting and structural differences (Fisher and Michaels 2009; Cobb et al. 2009).\n\nIn order to ensure reliability of SHM system the following activities have to be performed (Etebu and Shafiee 2018; Kabban et al. 2015):\n\n1. 1.\n\nPerfect communication between specialists who install sensors and specialists who carry out structural analysis;\n\n2. 2.\n\nCareful planning of areas to monitor;\n\n3. 3.\n\nSensor system design;\n\n4. 4.\n\nComponent replacement design;\n\n5. 5.\n\nData storage design;\n\n6. 6.\n\nTesting of sensors system;\n\n7. 7.\n\nAging compensation of sensors.\n\nReliability of SHM is the evaluation of probability of repeated and successful outcomes of the system under prescribed environmental conditions. Reliability indicates the quality of fault detection by assessment of four probabilities (Stolz and Meisner 2020; Gallina et al. 2013):\n\n1. 1.\n\nProbability of detection (POD)—the system detects faults when they exist in the structure;\n\n2. 2.\n\nProbability of false alarms (PFA)—the system detects faults when they do not exist in the structure;\n\n3. 3.\n\nPositive predictive probability (PPP)—the system detects no faults when they exist in the structure;\n\n4. 4.\n\nNegative predictive probability (NPP)—the system detects no faults when they do not exist in the structure.\n\nSince PPP and NPP are the inverse probabilities of POD and PFA, only first two probabilities have to be assessed. Great attention has to be paid to false calls because they occur often in SHM. POD and PFA are also interdependent through the threshold which defines the level of the system response indicating the presence of damage in the structure (Gallina et al. 2013).\n\n### 5.4.1 Basic Concepts of POD and PFA\n\nPOD is the probability of specified NDE inspection to detect defects in the structure at the time of inspection. MIL-HDBK-1823A provides all statistical tools and procedures to assess POD for reliability validation of NDE techniques. Usually, POD is a function of defect characteristics, i.e. length of the defect. Log Odds and Log Probit probabilistic models are commonly used for POD assessment (Gallina et al. 2013; Aldrin et al. 2016). POD can be assessed by Hit/Miss and signal response analysis. The objective of both configurations is to produce a POD curve vs characteristic parameter of the defect (usually size). Typical POD curve is presented in Fig. 5.4.\n\nSmall size of the defect is characterized by 0 POD (0%) meaning that the system could not detect defect of the specified size. The POD is 1 (100%) in the case of large defects meaning that the system detects defect of specified size reliabily. Transition zone where POD curve is increasing between 0 and 1 is under most interest. Additionally, the confidence interval is associated with POD and indicates the content of true value in the interval (Chapuis et al. 2018a; Chapuis et al. 2018b). Hence, confidence bounds characterize the ability of the system to detect particular characteristic parameter (defect size) with defined probability and confidence (Gallina et al. 2013). The usual requirement, especially for aerospace applications, is to determine the minimum size of defect which is detectable in 90% of the inspections at 95 % confidence level, a90/95 (Gallina et al. 2013).\n\nHit/Miss approach is characterized by measurement of qualitative information and determines the presence or absence of the damage. The response of the inspection is binary value of 1 (defect is detected) and 0 (fail to detect the defect). The model of POD in Log-Odds functional is expressed (Chapuis et al. 2018b):\n\n$$\\mathrm{POD}(a)={\\left[1+\\exp \\left(-\\left(\\frac{g(a)-\\mu }{\\sigma}\\right)\\right)\\right]}^{-1},$$\n(5.25)\n\nwhere a is a characteristic parameter (defect size), g(a) = a or g(a) = log (a), μ is a defect size detected with a probability of 50% and σ is the steepeness of the function.\n\nAlso, POD can be evaluated by signal response approach (â vs a) or quantitative measure of defect size. POD curve is based on methodology when there is a relationship between the defect size, a (physical dimension of a defect size), and sensor response â (measured response of the system to a target size). In the case of ultrasonic inspection the system provides a response from the defect whose amplitude is dependent on it’s size. Signal response approach is able to estimate and build POD curve using greatly smaller amount of data comparing to Hit/Miss analysis (Meeker et al. 2019; Chapuis et al. 2018b). Generally, this approach has a linear model and is expressed as following (Chapuis et al. 2018b):\n\n$$y={\\beta}_0+{\\beta}_1g\\left(\\mathrm{a}\\right)+\\varepsilon,$$\n(5.26)\n\nwhere β0 and β1 are the coefficients of linear function, g(a) = a or g(a) = log (a), ε is a random error.\n\nSince the noise exists in any inspection data the boundary which determines damage or no damage output has to be established. For this task the the detection threshold athres is fixed. Further, the POD curve can be expressed as a function of density of scattered data which is above the detection threshold ythres (Gallina et al. 2013; Chapuis et al. 2018b):\n\n$$\\mathrm{POD}(a)=\\mathrm{P}\\left(y>{y}_{\\mathrm{thres}}\\right)=1-{\\varPhi}_{norm}\\left(\\frac{y_{\\mathrm{thres}}-\\left({\\beta}_0+{\\beta}_1g(a)\\right)}{\\sigma_{\\varepsilon }}\\right)={\\varPhi}_{norm}\\left(\\frac{g(a)-\\mu }{\\sigma}\\right),$$\n(5.27)\n\nwhere Φnorm(z) is the normal density function and $$\\mu =\\frac{y_{\\mathrm{thres}}-{\\beta}_0}{\\beta_1}$$, $$\\sigma =\\frac{\\sigma_{\\upvarepsilon}}{\\beta_1}$$. The parameters μ and σ are determined according to the methodology of maximum likelihood estimation (Chapuis et al. 2018b).\n\nAs it was mentioned in Sect. 5.4, the probability of false alarms or Relative Operating Characteristics (ROC) curve has a significant importance to estimate the performance of SHM and NDE systems (Gallina et al. 2013). Selection of adequate detection threshold is an essential part in PFA assessment. It is a high possibility to classify undamaged regions of the component as damaged due to high number of observations in SHM system. For instance, the detection threshold can be exceeded by the impact of strong background noise when no defect is present. However, the POD and PFA are spatially dependent. Therefore, in order to determine appropriate detection threshold it is useful to plot POD and PFA in the same diagram to establish the overlap between two metrics of system performance (Fig. 5.5) (Gallina et al. 2013; Chapuis et al. 2018b; Schoefs et al. 2012).\n\nIn the case of NDE inspections athres is determined from the measurement when no flaw in the structure is present and the value is set above the background noise level. Since the threshold is set quite high in NDT applications the PFA has usually a low value (Fisher and Michaels 2009). Consequently, the threshold value in NDE for signal response approach is the highest level of noise in the defect free region (Fisher and Michaels 2009). However, it is a challenging task to determine detection threshold for SHM application. In the case of SHM system, a number of measurements of not defective structure will be available before damage occurs. Since the sensors are fixed in SHM system, in-situ effects will affect the signal response—temperature, operational load or sensor degradation. As a result, detection threshold can be determined as a variance in signal response over time from the undamaged structure. Another complexity to define athres is a dependent measurement of SHM. Due to a large degree of variability the measurement response can be classified as damage even if there is none in accordance with prior measurements (Meeker et al. 2019; Fisher and Michaels 2009; Cobb et al. 2009).\n\n### 5.4.2 Sources of Variability of SHM System\n\nBefore POD assessment , NDE and SHM systems should be completely evaluated in terms of the limits of operational parameters and application in order to list factors that significantly affect the variability of the system (Department of Defense 2009). It is essential to capture all sources of variability. Otherwise, the results of system performance evaluation will be invalid due to missed significant variables. After analysis, variables which have negligible impact on the detection can be eliminated (Mandache et al. 2011). There is a risk to overestimate the POD of smaller size defects and underestimate the PFA in the case of missing important influential sources. Factors of SHM system which could affect the signal are as follows (Meeker et al. 2019; Mandache et al. 2011):\n\n1. 1.\n\nShape, size and orientation of the defect as well as the change of these characteristics over time;\n\n2. 2.\n\nDefect and sensor location;\n\n3. 3.\n\nEnvironmental conditions (temperature, humidity);\n\n4. 4.\n\nMechanical variables;\n\n5. 5.\n\nChange of structural configuration over time;\n\n6. 6.\n\nChange of sensor performance over time;\n\n7. 7.\n\nQuality of sensor bonding;\n\n8. 8.\n\n9. 9.\n\nAmbient noise;\n\n10. 10.\n\nDirt;\n\n11. 11.\n\n12. 12.\n\n13. 13.\n\nData communication.\n\nOne of the advantages of SHM system is a reduction of human intervention as an important factor affecting the results due to automatization. However, there is still human involvement in the case of instrumentation installation and interpretation of the recorded data if required, but manual coordination is avoided (Mandache et al. 2011).\n\nTo ensure reliable work of SHM system backup power provision has to be implemented, since, power line is often at the risk to fail during emergency cases. Furthermore, SHM sensors durability depends on the power requirements. In the case of battery power supply, low weight and charging capabilities has to be feasible. More appropriate solution is to use self-powered sensors through energy harvesting. Additionally, to reduce the problem of sensors failure within years of exploitation they have to be self-diagnosed or redundant. Redundant SHM system involves sensors, which are not expensive and easy to install. The approach essence is to install multiple number of sensors with overlapping range to provide redundant sensing. Consequently, if one of the sensors fails, the others take over and perform the task. This redundant configuration can create more defect-tolerant and robust system (Mandache et al. 2011; Aldrin et al. 2013). However, redundant elements increase the weight of the system, which, especially in aerospace, should be kept as low as possible.\n\nThe sensors have to be optimally placed to assure the sensitivity to the damage as well as condition of structure surface. The guided waves have the ability to confine in thin-wall structures, so they are able to propagate over large distance of the structure with minimal loss of energy and attenuation. In addition, the guided waves are suitable for the inspection of various shapes and geometries of the long structures. Therefore, it is possible to monitor the condition of surface of the structure of different shape. Generally, the more sensors are placed on structure the more detailed information about structure health is received. The coverage of the sensors should be performed in specific way in order to provide adequate information from collected data. Performance requirement and sensor network robustness have to be fulfilled (Mandache et al. 2011; Yi and Li 2012; Abbas and Shafiee 2018).\n\nIn the case of monitoring high risk zones the sensors should be placed in close proximity in order to have higher sensitivity in the case of damage occurrence but not placed in potential risk of impact damage place since this can affect the sensor itself (Mandache et al. 2011).\n\nA good quality of coupling between sensor and structure has to be provided. Usually, adhesive is used as a coupling medium. However, degradation of adhesive properties can also affect on the response of the sensor. In aerospace industry during aircraft exploitation, sensors which are placed on the surface of the structure are not applicable due to aerodynamic conditions. This method is possible on the measuring stands in laboratory conditions (Mandache et al. 2011).\n\nTraditional wired SHM system consists of sensor system, data communication and storage system as well as the information analysis system to assess the integrity of the structure. The main disadvantages of wired system is a high cost of long cables, low productivity, time consumption, low flexibility, impact on the weight of the structure and heavy traffic of monitoring channels (Wang et al. 2020). Wired system is exposed to cable and wire breakage during the exploitation and leading to system malfunction. Wireless sensor networks have advantages of low cost, high efficiency, high flexibility. There are various protocols of communication technology widely described in (Alonso et al. 2018). Selection of the communication technology is dependent on the characteristics of infrastructure and monitoring requirements. However, if the data is transmitted and stored wirelessly there is still issues on the effect of SHM system interaction with other aircraft systems and avionics, electromagnetic and radio frequency interferences, what data has to be collected and at what frequency and etc. (Mandache et al. 2011).\n\n### 5.4.3 Analysis of Environmental and Operational Conditions\n\nIn order to define methodology for verifying reliability of SHM it is necessary to understand how sensors and other factors respond to the damage. In the case of guided waves, environmental and operational conditions can change the phase and the amplitude of the signal. The challenge in SHM system is to identify signal changes due to the damage presence from the false calls caused by the environmental and operational parameters influence. Guided wave technique is a suitable tool for damage detection and characterization, however, it has a high sensitivity to these factors. According to the work performed in this field, the impact of these parameters is compensated by development of appropriate modelling process, variation of sensor technologies, processing of the acquired signals, extraction of features as well as statistical methods and machine learning (Mandache et al. 2011; Gorgin et al. 2020). Environmental and operational conditions are huge source of variability in aerospace industry, civil and mechanical engineering. In this section, temperature and mechanical loading are discussed more widely due to their significant influence on SHM systems.\n\nTemperature is the important condition, variation of which is limiting the guided wave SHM systems. Temperature change is dominant influential property of environment condition that affects robustness of guided wave SHM (GWSHM) system (Fendzi et al. 2016). Temperature affects the component under monitoring and the sensing system. Propagation characteristics of Lamb waves significantly depends on temperature variation. Volume and density of the component changes with the temperature variation. This leads to a modification in the elastic properties, change of ultrasound velocity in the material that influence the response of the sensor at damage free place in the component (Mandache et al. 2011).\n\nIn order to mitigate this problem many investigations on the change in guided waves properties caused by temperature were conducted experimentally, numerically and analytically. Different configurations of the technique as well as temperature range was studied. Generally, by changing the temperature signal amplitude, time of flight and velocity can change. For instance, Abbas et al. (Abbas and Shafiee 2018) generated wave velocity function for guided wave group velocity evaluation considering frequency and temperature effect. It was found that the function can be used at the temperature which is not higher than 130 °C, since, further temperature increase influences on the bonding characteristics of the sensors. However, the relation between temperature and the output of GWSHM is not linear, since, many factors caused by temperature variation influence on the guided waves. Changed stiffness of the materials has the impact on group and phase velocity; change of elastic and shear moduli leads to longitudinal and transverse velocitites change resulting the decrease of phase velocity of the waves. As a result, in the case of temperature increase the propagating signals arrive later compared to temperature decrease. Expansion and contraction effect can lead to a change of distance between sensors and actuators, additionally, temperature variation has an impact on sensor properties and bonding layer properties (thickness and shear modulus) (Gorgin et al. 2020). Some numerical investigations presented a high impact of the temperature on the propagation of guided waves due to the change of physical properties.\n\nIf baseline subtraction technique described in previous section is used to detect structural changes, temperature compensation strategies is a must in order to minimise the residuals caused by environmental factors and to be able to detect smaller defects. Compensation techniques like optimal baseline subtraction (OBS), cointegration or baseline signal stretch (BSS) exist to reduce the influence of temperature to guided wave signals (Croxford et al. 2010; Gorgin et al. 2020; Konstantinidis et al. 2007). A comprehensive study on the influence of temperature to reflection, transmission and velocity of guided waves was recently presented by Abbas et al. (Abbas et al. 2020). In the case of OBS temperature compensation technique the baseline signals are measured over the temperature range whereupon the best matched baseline signal is selected according to subtract from any further reading from the structure. This technique requires large number of baseline signals each at different environmental condition and high temperature resolution. BSS temperature compensation technique is based on model building of the effects on wave signals due to temperature change. The shape change of the signals over the time is calculated (stretch factor) in order to perform time-stretch estimation (dilation or compression of the signal). This method works well for small temperature differences between baseline and current signal, however it’s performance deteriorates at sufficiently large temperature differences (Croxford et al. 2007). Clarke et al. proposed to use a combination of OBS and BSS methods to reduce amount of baseline signals describing different temperature regimes (Clarke et al. 2010). In most cases, it’s considered, that wave velocity is most important factor that changes with variation of temperature. However, recent studies show, that phase and amplitude of the signal are temperature sensitive as well. Changes in the temperature may cause variation of bonding stiffness between the sensor and the structure. This may alter the frequency response of transducer and lead to phase delay in the signal. Fendzi et al. proposed temperature compensation method that estimates linear dependencies of amplitude and phase delay versus temperature for each transducer pair mounted on the structure (Fendzi et al. 2016). Based on derived amplitude and phase factors, the regression model is then estimated and signal can be reconstructed at selected temperature. Recently Herdovics et al. proposed temperature compensation method which uses two sensors in close proximity only, hence the phase changes are compensated according to the incident wave, while the wave velocity changes are supressed from echoes. (Herdovics and Cegla 2019). The authors concluded that proposed compensation technique is able to reduce effect of environmental conditions from 7 dB to 20 dB in comparison to conventional single stretch BSS method at temperature difference of 41.5 °C between signals. Mariani et al. presented a method for temperature compensation that takes into the account both velocity and phase information (Mariani et al. 2020). The main difference is that the later method provides estimate solely based on baseline and subsequent signals. The method demonstrated 97% POD for 0.1% probability of false alarm when the temperature difference ranges were between 7 °C to 28 °C and 35 °C to 55 °C. Meanwhile at the same conditions, standard BSS yielded up to 23% POD only.\n\n### 5.4.4 POD Assessment Solutions\n\nComparing to NDE techniques which produce independent observation the data received from SHM system is dependent due to continuous data collection. Despite the fact that uncertainty factors of NDE and SHM differ, the same mathematical framework is applied for both systems (Gianneo et al. 2016). Guided waves provide fast and cost-effecive evaluation of different damage types compared to other approaches of SHM system. GWSHM has several indicators allowing to detect defects in the structure: changes in natural frequencies, in time of flight, in strain, in time domain signal and other characteristics. In order to evaluate POD of SHM system it is necessary to cover all sources of variability. As it was mentioned above, there are two configurations to assess POD—signal response and Hit/Miss. PFA has to be evaluated for SHM system due to high possibility of false calls (De Luca et al. 2020).\n\nMandache et al. (Mandache et al. 2011) described time-based POD assessment of SHM system. In comparison to NDE, SHM detects time evolution of the defect with respect to baseline signal. Therefore, POD can be defined not as a function of the defect size, but as a function of time it takes until the damage of certain size is firstly detected or a damage growth on pre-defined percentage. This approach was proposed by Pollock who investigated the probability of detection of growing crack using acoustic emission. The crack of 4mm was growing to 4.05 mm and detected with a probability of 90% within 6 weeks of monitoring. Shorter period of monitoring time gives lower probability in the case of same crack (Mandache et al. 2011). Another approach to estimate POD of SHM system is to transfer POD from NDT to SHM. Firstly, POD assessment for NDT technique is performed. Then using transfer function where all influencing parameters are taken into account the POD is converted to POD curve equivalent to SHM system. One more alternative is to analyse two parallel structures under similar conditions periodically with NDT and continuously with SHM. The relationship of signal responses to damage between two systems is transferred to the POD curves, assuming that POD of NDT is known (Mandache et al. 2011).\n\nGianneo et al. (Gianneo et al. 2016) investigated Multi Parameter POD approach for guided waves of SHM. This approach implies the combination of Finite Element numerical simulations and experimental data to receive required POD curve. The combination of numerical and experimental data establishes “Measured â vs. Modelled a” data diagram considering influencing factors and uncertain parameters. As a result, non-linear responses of the GWSHM system is being linearized by the diagram allowing the use of Berens’ statistical model. Then a POD curve is estimated as a function of modelling data. Multi-Dimensional POD implies the analysis of SHM system responses as a function of damage size with the influence of factors and isolate the recorded signals that are sensitive only to a single influencing factor, including damage. After POD curves of damage size for each independent influencing factor are found then POD fusion of these curves can be performed. Therefore, SHM reliability in the presence of combined influencing factors can be determined (Mandache et al. 2011).\n\n### 5.4.5 Model-Assisted POD for SHM System\n\nLarge scale of experiments are required in order to evaluate POD of the system. A number of repetitive tests has to be performed for the same crack size to take in consideration the uncertainty of influencing factors. This approach of POD evaluation is time-consuming and expensive. Thus, Model Assisted POD (MAPOD) was developed as a solution of the problem (Gallina et al. 2013).\n\nAlfredo et al. states that the support of numerical simulations is required for POD assessment. Verification and validation method of the SHM system has to evaluate all aspects which can influence on detection capability, localization and characterization of the defect as well as an effect of environment and exploitation over time. Since sensors of SHM system are fixed at the same position the main difficulty is that the flaw may occur anywhere what will cause the change in response due to distance between sensors and defect. As a result, model for guided waves SHM system POD assessment is proposed and shown in Fig. 5.6 (Güemes et al. 2020).\n\nMainly, there are two MAPOD approaches of high interest: transfer function and full model assisted. Transfer function approach is physics based and used to transfer POD of specific inspection to another with different parameters of inspection. Full model assisted approach is based on the models of uncertainty propagation of specified inspection parameters. Numerical signal of the approach is combined with experimental noise. Nowadays, the use of computer models to evaluate the reliability of SHM system is the most suitable approach. MAPOD reduces experimental inspections of the samples by modelling responses of inspection of the defected material. In the case of effective MAPOD calculations enhance statistical models have to be created. These statistical models characterize system dependency on various influencing factors including defect. Additionally, the polynomial chaos methods reduce the number of samples required for assessment and speed up the MAPOD as well as parallel computing techniques greatly cut down the time of simulations (Gallina et al. 2013; Mandache et al. 2011).\n\nGallina et al. (Gallina et al. 2013) proposed the MAPOD approach to analyze the Lamb wave SHM system. Propagation signals received by the sensors were collected. Then the effect of dispersion was eliminated by the linear mapping algorithm. All signals were delayed and summed. Location and detection of the damage was performed using imagining technique. Numerical experiments were modelled and empirical white Gaussian noise was added to the recorded signals of sensors for consideration of in-situ factors. The data of simulated experiments was used in statistical analysis for POD curve and PFA evaluation.\n\nAdditional example of MAPOD evaluation of SHM systems is described. Cobb et al. (Cobb et al. 2009) proposes hit/miss configuration and model assisted approach for POD estimation of SHM system. Proposed model assisted approach comprises a creation of a series of models:\n\n• Measurement response model which approximates the sensor response;\n\n• Defect propagation model in order to generate sensor responses using a crack growth equation;\n\n• Detection strategy to determine when damage is detected.\n\nAfter, Hit/Miss analysis of resulting data was performed. Logistics regression was used for modelling binomial response data. POD curve was evaluated and indicates the percentage of all defects of specific size which will be detected (Cobb et al. 2009).\n\n## 5.5 Guided Wave Applications to SHM of Aerospace Components\n\nThe purpose of structural health monitoring is to increase operational safety of aircrafts (Diamanti and Soutis 2010). Ultrasonic guided waves can be exploited for the structural health monitoring of large-scale plate-like aircraft structures (Staszewski et al. 2009). Although there have been investigations for application of the guided waves for metallic and composite aircraft components and structures, guided waves were mostly used up to now for the investigation of the simpler structures, such as pipes (Cawley 2018). In case of most aircraft components, the structures to be inspected have much more complex geometry, including stiffeners and bolt holes, what complicates the propagation of guided waves and thus the inspection. Below, some possible applications of guided waves for the inspection of the most frequently found defect types in the metallic and composite aerospace components as well as adhesive joints are reviewed.\n\nBae et al. (Bae and Lee 2016) and Choi et al. (Choi et al. 2018) have used serially connected PZT sensor net in combination with laser ultrasonic propagation imaging system for fatigue crack detection in metallic fuselage of Cessna 150 (Bae and Lee 2016). Using finite element modelling Ewald et al. investigated transducer placement for detections of cracks in aluminum aircraft structures using the Lamb wave SHM system (Ewald et al. 2018). Masserey et al. (Masserey and Fromme 2015) have used high frequency 2.25MHz Lamb waves for the monitoring of fatigue crack growth in aluminum specimens. Ihn et al. (Ihn and Chang 2008) have proposed to use an imaging method for quantification of damage in aluminum structures using multiple pitch-catch information. Dalton et al. (Dalton et al. 2001) concluded that guided waves could be used for localized monitoring of metallic airframe structures up to 1 m, however they are not feasible for the monitoring of complete fuselages.\n\nChang et al. investigated corrosion monitoring using Lamb wave tomography in aluminum plate (Chang et al. 2020). 15 pairs of PZT sensors were used for the excitation and reception of Lamb waves. As A0 mode is more sensitive to variations in the thickness, it was used for the corrosion monitoring (Chang et al. 2020).\n\nFakih et al. have used piezoelectric wafers for the excitation of S0 mode Lamb waves for assessment of flaws in frictions stir welded joints (Fakih et al. 2018). The proposed approach was verified by computed tomography as in the research of Jasiuniene et al. investigating the quality of dissimilar metal joints made by friction stir welding (Jasiūnienė et al. 2017).\n\nHuan et al. suggest using a shear horizontal SH guided wave based SHM system with total focusing method imaging for monitoring of metallic structures (Huan et al. 2019). The suitability of the shear horizontal waves for the defect detection in metallic structures was investigated as well by Petcher et al. (Petcher and Dixon 2015).\n\nThe composite materials are used in different aircraft primary structures more and more (Diamanti and Soutis 2010). The most common type of damage is caused by impact, which can cause delaminations, disbonds, matrix cracking, fiber breakage leading to the reduction of structure’s life (Diamanti and Soutis 2010).\n\nImpact type of damage was investigated using Lamb waves by different authors: Diamanti et al. have used lamb waves (A0 mode) for detection and localization of impact damage in composite beams (Diamanti and Soutis 2010); Their investigations have proved that Lamb waves can be used to monitor impact damage evolution in composite laminated structures (Diamanti and Soutis 2010). However, it was concluded, that for application in situ there are still some issues to be solved: durability of bonding layer between the transducer and structure, influence of environmental conditions, etc. (Diamanti and Soutis 2010). Memmolo et al. (2018a, 2018b) have used permanently installed sensors and developed an algorithm using multi-parameter approach to identify hidden flaws due to impact type of damage in composite structure. They have obtained promising results even in the areas with stiffeners and holes (Memmolo et al. 2018a). Katunin et al. have used embedded PZT sensors for detection of barely visible impact damage in GFRP and hybrid Al-GFRP-Al structures (Katunin et al. 2015). They have concluded, that low number of PZT sensors gives only rough image of inspected composite structure and could be used only as initial step of inspection (Katunin et al. 2015). Khodaei et al. (Sharif Khodaei and Aliabadi 2016) have proposed a multi-level approach for barely visible impact damage detection, identification and localization. Their results show that the number and location of the transducers influence the reliability of the detection. Capriotti et al. (2017) have used non-contact air coupled transducers for the generation of guided waves and detection of impact damage (causing cracked skin/stringer and disbonded stringer) in composite aircraft panels.\n\nLamb waves can also be used for finding of delamination type of defects in composites: Ramadas et al. (Ramzi et al. 2015) have studied the interaction of guided Lamb waves with an asymmetrically located delamination in laminated composite plate. Staszewski et al. (2009) used Lamb waves to detect delamination in the composite plate. One piezo ceramic actuator was used for Lamb wave generation, and 3D laser vibrometer for reception (Staszewski et al. 2009). Ihn et al. (Ihn and Chang 2008) have used diagnostic imaging method using multiple pitch-catch pairs to quantify the delamination type damage in stiffened composite panels. Qiu et al. (2013) have used PZTs bonded to the structure with advanced two step signal processing for the localization of damage in composite wing panels with stiffeners and bolt holes. Kazys et al. (Kažys et al. 2006; Kazys et al. 2006) have detected delamination and impact type defects using air-coupled excitation of Lamb waves in aerospace honeycomb structures. A study on delamination size and depth extraction on composite plate using A0 mode was presented by Samaitis et al. (2020, Tiwari et al. (2017).\n\nPanda et al. for excitation and receiving of Lamb waves in composite aileron used air-coupled transducers in pitch catch mode (Panda et al. 2016) and have determined, that fundamental A0 mode was effective for disbond detection (Panda et al. 2016). In another experiment by Panda et al. (2018) again air coupled transducers were used in pitch-catch configuration for generation and reception of A0 Lamb wave mode in composite panel with stiffeners for disbond detection. Memmolo et al. (2018c) and Monaco et al. (2016) have investigated the possibilities to detect disbond of stringers in stiffened composites typically used for wing boxes using scattered guided waves and tomographic approach.\n\nAdhesively bonded joints are attractive in aircraft structures as alternative to rivets. However, the degradation of the quality of adhesive joints is still an issue. Yilmaz et al. (Yilmaz and Jasiūnienė 2020) have suggested the techniques for detection of weak composite-adhesive joints using advanced NDT. Castaings (2014) have used the SH guided waves for the evaluation of the adhesion in adhesively bonded aluminum lap joints.\n\nEspecially challenging for the inspection are adhesive hybrid metal to composite joints. Advanced ultrasonic testing with novel signal post processing technique was suggested by Jasiūnienė et al. (2019) for detection of defect in the complex joints. Puthillath et al. (Puthillath and Rose 2010) have used ultrasonic guided wave modes with large in-plane displacement at the interface for the inspection of adhesively bonded aircraft repair patch (titanium repair patch bonded to an aluminum aircraft skin). Ren et al. (Ren and Lissenden 2013) also have used ultrasonic guided wave mode with large in-plane displacement at interface for the inspection of adhesive bonds between composite laminates.\n\nEven though there has been a lot of different research of possible SHM application for different aircraft structures in the laboratory conditions, there haven’t been a lot applications on real aircrafts due to several unsolved issues (Cawley 2018; Qing et al. 2019) like sensitivity of the SHM systems to environmental changes (Kralovec and Schagerl 2020; Gorgin et al. 2020; Abbas et al. 2020; Fang et al. 2019; Nokhbatolfoghahai et al. 2021), leading to reduced reliability (Memmolo et al. 2018a; Fang et al. 2019) and thus probability of the detection (Meeker et al. 2019; Wu et al. 2015), legal issues and other. Quantification of the extent of the damage as well remains a challenge (Ihn and Chang 2008). Another issue, which still haven’t got enough attention is the huge amount of data to be analyzed (Qing et al. 2019). Effect of damaged/inoperative sensors also should be solved introducing self diagnostic approach (Memmolo et al. 2018a; Qing et al. 2019). On the other hand, weight of the SHM system should be also not forgotten (Memmolo et al. 2018a)—additional weight introduced due to SHM system should be as low as possible.\n\n## 5.6 Summary\n\nThe production of the modern aircraft clearly seeks faster, cheaper production, increased automation, reduced weight and fuel consumption. Modern aircraft manufacturing technologies use resin transfer molding (RTM), high pressure HP-RTM, thermoplastic composites, hybrid metal-composite structures and 3D printed parts (Composites World 2019). For example, HP-RTM are already implemented in some parts of an aircraft, allowing to achieve approx. 30% cost reduction and increase of the production efficiency by 10–20% (Composites World 2019). Thermoplastic composite technology will reduce the amount of assembly steps, eliminate some rivets and fasteners, resulting in reduced overall manufacturing cost. AM technologies, like the ones used in GE9X engine, allow to reduce weight of the engine and to combine multiple parts into single one overcoming the shape restrictions that come from conventional methods such as stamping and casting (Kellner 2020; Bachmann et al. 2017). Boeing is in production of 777X aircraft and in the design stage of New Midsize Aircraft (NMA or 797). Airbus is aiming to replace it’s most successful single aisle jet A320 with a new A321XLR. New and modern jet’s will increasingly use advanced manufacturing technologies, like the ones mentioned above. As these drastically change the design process, the NDT technologies will have to adapt. New knowledge of guided wave interaction with thermoplastic composite joints (Ochôa et al. 2019; Ochôa et al. 2018), HP-RTM components, detection of porosity or lack of fusion between layers of AM parts (Honarvar and Varvani-Farahani 2020) will be a key factor determining the inspection quality. The new NDT technologies will be required to overcome current limitations and provide full characterization of the damage including size, location and depth. This is crucial for successful damage progression models, which are responsible for remaining life predictions of the structure. For example, it has been reported that the depth of delamination is directly related to the damage progression mechanisms (Canturri et al. 2013; Elliott Cramer 2018), so it has to be fully defined by the monitoring system. Different manufacturing processes and novel materials will require different characterization data and assessment of all defects that are present in the structure. The defects of different kinds will have to be described differently depending on their nature and progression mechanisms. Hence, the SHM system will have to be versatile and adaptive enought. Both in-service and production line inspection technologies will become crucial as they will determine the manufacturing rates and safety of new structural desings, while guided waves demonstrate a serious potential to become one of the key SHM technologies of the future aircraft."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8616565,"math_prob":0.9404779,"size":133266,"snap":"2023-14-2023-23","text_gpt3_token_len":30072,"char_repetition_ratio":0.17344779,"word_repetition_ratio":0.01542191,"special_character_ratio":0.22602914,"punctuation_ratio":0.11595925,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9542641,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T05:23:47Z\",\"WARC-Record-ID\":\"<urn:uuid:2b66646a-2a87-4438-9ea7-ed0612152b1e>\",\"Content-Length\":\"587757\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd89b247-523c-4514-9ef4-5840d71c33e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:58239d41-5958-425b-83e6-9b721985a915>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://link.springer.com/chapter/10.1007/978-3-030-72192-3_5\",\"WARC-Payload-Digest\":\"sha1:5VGJZLPLJBQ45NR4TG7ZXBUPGMZCFRGS\",\"WARC-Block-Digest\":\"sha1:74EAORL7KLAOZ2TWHN2QU4LDJRMCGPYO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649439.65_warc_CC-MAIN-20230604025306-20230604055306-00516.warc.gz\"}"} |
http://linkcut.info/what-is-quart-of-water/what-is-quart-of-water-2-l-14-fluids-3-fluids-at-rest-fluid-statics-fluids-at-rest-fluid-statics-why-things-float-archimedes-principle-fluids-in-motion-fluid-what-is-a-quart-of-water/ | [
"What Is Quart Of Water 2 L 14 Fluids 3 Fluids At Rest Fluid Statics Fluids At Rest Fluid Statics Why Things Float Archimedes Principle Fluids In Motion Fluid What Is A Quart Of Water In Pints To Ounces",
null,
"what is quart of water 2 l 14 fluids 3 fluids at rest fluid statics fluids at rest fluid statics why things float archimedes principle fluids in motion fluid what is a quart of water in pints to ounces.\n\nwhat is the weight of one quart of water,what is a quart of water,what is the weight of a quart of water,what is a quart of water in liters to ounces,what is a quart of water in pints to quarts,what is one quart of water in liters to m3,what is one quart of water in liters to quarts,what is a quart of water in litres into gallons,what is a quart of water in pints and pies,what is 1 quart of water equal to the task,what is one quart of water in litres."
] | [
null,
"http://linkcut.info/data/what-is-quart-of-water/images/what-is-quart-of-water-2-l-14-fluids-3-fluids-at-rest-fluid-statics-fluids-at-rest-fluid-statics-why-things-float-archimedes-principle-fluids-in-motion-fluid-what-is-a-quart-of-water.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8684455,"math_prob":0.995803,"size":1126,"snap":"2019-43-2019-47","text_gpt3_token_len":267,"char_repetition_ratio":0.2540107,"word_repetition_ratio":0.18877551,"special_character_ratio":0.19449379,"punctuation_ratio":0.054054055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96174175,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T22:04:29Z\",\"WARC-Record-ID\":\"<urn:uuid:b931d822-b775-4dfb-a47b-f22e17d05f73>\",\"Content-Length\":\"53900\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9af743cc-6282-4096-b9bd-0d9653261528>\",\"WARC-Concurrent-To\":\"<urn:uuid:250154fd-cb33-4ded-87a1-899675072811>\",\"WARC-IP-Address\":\"104.27.183.111\",\"WARC-Target-URI\":\"http://linkcut.info/what-is-quart-of-water/what-is-quart-of-water-2-l-14-fluids-3-fluids-at-rest-fluid-statics-fluids-at-rest-fluid-statics-why-things-float-archimedes-principle-fluids-in-motion-fluid-what-is-a-quart-of-water/\",\"WARC-Payload-Digest\":\"sha1:FZOXMIJSCAXVOZRWXDJZILEPMZJ7II5E\",\"WARC-Block-Digest\":\"sha1:73EZCWUV7SMEEUS6UHZQ35SJCIZ7NPGK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986726836.64_warc_CC-MAIN-20191020210506-20191020234006-00015.warc.gz\"}"} |
https://git.dynare.org/DoraK/dynare/-/commit/ebc7d6f67aaa16d36c846f5582e6fc997ace3174?view=inline | [
"### Add comment why use of old correlation matrix from previous draw is correct and revert change\n\n`Due to only using the diagonal of Sigma_e and the correlation matrix having ones on the diagonal, the diagonal entries of the covariance matrix are correctly built from recent draw. Later, when using the new draw for the correlations, only the correctly updated diagonal entries of Sigma_e are used.`\nparent 74ef1aa7\n ... ... @@ -83,7 +83,7 @@ offset = nvx+nvn; % setting shocks covariances if ~isempty(M.Correlation_matrix) Sigma_e = diag(sqrt(diag(Sigma_e)))*M.Correlation_matrix*diag(sqrt(diag(Sigma_e))); Sigma_e = diag(sqrt(diag(Sigma_e)))*M.Correlation_matrix*diag(sqrt(diag(Sigma_e))); % use of old correlation matrix is correct due to the diagonal structure and later only using the hence correctly updated diagonal entries of Sigma_e end if ncx corrx = estim_params.corrx; ... ...\nSupports Markdown\n0% or .\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!\nPlease register or to comment"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8410063,"math_prob":0.98455447,"size":460,"snap":"2022-40-2023-06","text_gpt3_token_len":91,"char_repetition_ratio":0.16008772,"word_repetition_ratio":0.0,"special_character_ratio":0.18695652,"punctuation_ratio":0.061728396,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960947,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T01:28:27Z\",\"WARC-Record-ID\":\"<urn:uuid:390b361c-7a34-45b2-aea9-688608a32680>\",\"Content-Length\":\"117878\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57e4984f-c150-473b-87d5-a8e40735e196>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d177347-dfed-4c21-8859-bc0eebac00b4>\",\"WARC-IP-Address\":\"217.70.191.81\",\"WARC-Target-URI\":\"https://git.dynare.org/DoraK/dynare/-/commit/ebc7d6f67aaa16d36c846f5582e6fc997ace3174?view=inline\",\"WARC-Payload-Digest\":\"sha1:MTEMGX5WWFYO2YXVJTFUF5FU3A7NVFIZ\",\"WARC-Block-Digest\":\"sha1:CYWHFS3I7PJXORX6ONXVDGPNVDJVUMDN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335303.67_warc_CC-MAIN-20220929003121-20220929033121-00413.warc.gz\"}"} |
http://bearsearch.info/fractions-and-decimals-worksheets-grade-7/fraction-decimal-percent-worksheet-grade-7-fractions-and-decimals-converting-fractions-to-decimals-calculator-worksheet-activities-comparing-homework-help-resources-math-worksheets-and-grade-7-fractio/ | [
"# Fraction Decimal Percent Worksheet Grade 7 Fractions And Decimals Converting Fractions To Decimals Calculator Worksheet Activities Comparing Homework Help Resources Math Worksheets And Grade 7 Fractio",
null,
"fraction decimal percent worksheet grade 7 fractions and decimals converting fractions to decimals calculator worksheet activities comparing homework help resources math worksheets and grade 7 fractio.\n\ndecimals worksheets grade 7 converting fractions to decimal math 4 fraction percent worksheet and year cbse,2 5 in decimal math fractions decimals and percentages worksheets grade 7 pdf with answers class cbse,fractions and decimals class 7 cbse worksheets pdf grade with answers,fractions ksheets year 7 comparing and decimals worksheets grade fraction decimal percent worksheet percents,fractions decimals and percentages worksheets year 7 grade with answers math percents to,percent worksheets grade 7 6 fractions decimals percents and percentages year class cbse with answers worksheet,fraction decimal percent worksheet grade 7 fractions decimals worksh comparing and percentages worksheets cbse year,fractions decimals and percents worksheets 7th grade pdf 7 class cbse reduce the common,fraction decimal percent worksheet grade 7 fractions decimals worksheets 5 math and class cbse pdf,fractions and decimals worksheets grade 7 pdf rounding with worksheet year."
] | [
null,
"http://bearsearch.info/wp-content/uploads/2019/05/fraction-decimal-percent-worksheet-grade-7-fractions-and-decimals-converting-fractions-to-decimals-calculator-worksheet-activities-comparing-homework-help-resources-math-worksheets-and-grade-7-fractio.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7137055,"math_prob":0.94632816,"size":1170,"snap":"2019-13-2019-22","text_gpt3_token_len":217,"char_repetition_ratio":0.2915952,"word_repetition_ratio":0.051612902,"special_character_ratio":0.15982907,"punctuation_ratio":0.061452515,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99981683,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T07:10:38Z\",\"WARC-Record-ID\":\"<urn:uuid:16a537b5-8482-4854-894a-608323077361>\",\"Content-Length\":\"67372\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3124bf44-24bc-49da-925c-b9f7d7e98683>\",\"WARC-Concurrent-To\":\"<urn:uuid:3deb3738-16d2-4581-a45d-8facfe44aaa7>\",\"WARC-IP-Address\":\"104.24.98.24\",\"WARC-Target-URI\":\"http://bearsearch.info/fractions-and-decimals-worksheets-grade-7/fraction-decimal-percent-worksheet-grade-7-fractions-and-decimals-converting-fractions-to-decimals-calculator-worksheet-activities-comparing-homework-help-resources-math-worksheets-and-grade-7-fractio/\",\"WARC-Payload-Digest\":\"sha1:E773IGNCF54TUYOD2QDVIJLBUIT6S5PA\",\"WARC-Block-Digest\":\"sha1:PH4OVRN24JIHK2LAOCJILSSWJPK6UQAM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256764.75_warc_CC-MAIN-20190522063112-20190522085112-00247.warc.gz\"}"} |
https://mathematica.stackexchange.com/questions/229318/how-to-judge-whether-the-series-is-absolutely-convergent | [
"# How to judge whether the series is absolutely convergent?\n\nI need to judge whether the series $$\\sum_{n=1}^{\\infty}(-1)^{n}\\left(1-\\cos \\frac{\\alpha}{n}\\right)$$ (α>0) is absolutely convergent.\n\nSumConvergence[(-1)^n (1 - Cos[α/n]), n]\nSumConvergence[Abs[(1 - Cos[α/n])], n, Method -> Automatic]\nSumConvergence[Abs[(1 - Cos[1/n])], n, Method -> Automatic]\n\n\nBut the above code can not determine whether the series is absolutely convergent. How can I solve this problem?\n\nWe had to calculate by hand.\n\n1-Cos[1/n]==2 Sin[1/(2 n)]^2//Simplify\n(* True *)\nLimit[2 Sin[1/(2 n)]^2/(1/n^2), n -> Infinity]\n(* 1/2 *)\nSumConvergence[1/n^2, n]\n(* True *)\n\n\nMaking use of the limit comparison test, we have\n\nSumConvergence[Abs[Normal[Series[(-1)^n*(1 - Cos[a/n]), {n, Infinity, 2}]]], n]\n(*True*)\n\n• It's a great way, and it's universal. Sep 1, 2020 at 4:44"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.57250386,"math_prob":0.98983836,"size":405,"snap":"2022-05-2022-21","text_gpt3_token_len":129,"char_repetition_ratio":0.13965087,"word_repetition_ratio":0.0,"special_character_ratio":0.33333334,"punctuation_ratio":0.102564104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987981,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T02:55:22Z\",\"WARC-Record-ID\":\"<urn:uuid:c501fa2f-31d2-41be-ba74-12e813acf47b>\",\"Content-Length\":\"234419\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c19ddee3-cc6a-41e5-a0be-55ec4503fe31>\",\"WARC-Concurrent-To\":\"<urn:uuid:88c14deb-64fa-4e14-b62e-12168ebbe2be>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/229318/how-to-judge-whether-the-series-is-absolutely-convergent\",\"WARC-Payload-Digest\":\"sha1:MGEYOBO7AWAXZUCB2V7WEUNPWVW6TQT6\",\"WARC-Block-Digest\":\"sha1:I35ABJS3TCP3TVMYTWB2JGLUKCQSXSZT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662552994.41_warc_CC-MAIN-20220523011006-20220523041006-00707.warc.gz\"}"} |
https://rvmiller.com/tag/equilibrioception/ | [
"# Quickie: Which way does gravity point?",
null,
"Everyone knows a compass always points north, and most people know it’s because of magnetic fields present on Earth’s surface. There’s another force here on Earth directed to a central point, and that’s gravity. Humans are quite adept at sensing gravity thanks to equilibrioception, where fluid contained in structures in our inner ear provide feedback to help us stay balanced.\n\nBut machines, too, can detect gravity thanks to the simple accelerometer. Already present in most smartphones today, accelerometers react to gravity with tiny springs, creating a voltage difference that we can measure and turn into meaningful units.\n\nOn Android, we can easily read the accelerometer data:\n\n```SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);\nSensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);\nsensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);\n\n...\n\npublic void onSensorChanged(SensorEvent event) {\nfloat x, y, z;\nx = event.values;\ny = event.values;\nz = event.values;\n...}```\n\n### Using accelerometers to emulate human’s perception of gravity\n\nI’d like to show how we can use an Android phone (even my dusty old Droid Eris) to visualize the force of gravity. To save time, we’re only going to use two dimensions, x and y, but the technique used here can easily be extended into 3D.\n\nLet’s represent gravity the same way students in a high school physics class would — with an arrow pointing down. The goal would be the ability to rotate the phone (changing the x and y position), while still having that arrow point down, illustrating the direction of gravity.\n\nThe first thing we’ll need to do is convert the rectangular coordinates given to us (x and y) to a polar system (r, θ), where extracting an angle is much easier.\n\nThinking back to high school geometry, the inverse tangent will provide that angle directly. Java has a built-in method, atan2(), which even gracefully handles the divide-by-zero case when x = 0. Because the image rotation I’m using is based on degrees (more on that in a moment), we can convert the radian angle to a common degree (0-360°).\n\n```double theta = Math.atan2(y, x);\ndouble degree = ((theta * -180.0) / 3.14159) + 180; // +180 to keep 0 on the right```\n\nThat gives us the degree rotation of the phone in 2D. We’re almost there. To determine the degree that we would like the gravity arrow to point, we need to offset that degree, modulo 360 to keep us within the range (0-360°):\n\n`float rotateDegree = (float) ((degree + 270.0) % 360.0);`\n\nNow it’s just a matter of re-drawing the arrow image on the screen. Android offers some fancy animation techniques, but for this quickie project, I chose to use a matrix rotation:\n\n```Matrix matrix = new Matrix();\nmatrix.setRotate(rotateDegree);\nBitmap rotated = Bitmap.createBitmap(myImg, 0, 0, myImg.getWidth(), myImg.getHeight(),matrix, true);\narrowImage.setImageBitmap(rotated);```\n\nWith that code in place, we can finally visualize the force of gravity, at least in two dimensions:\n\nIf you are interested, you can find more educational video presentations on YouTube promoted by The Marketing Heaven.\nThis project was a quick one (writing this blog entry actually took longer than the code itself), but I think it’s important to show how we can figuratively “teach” a device a human trait and give them a new skill. For instance, with a faster refresh rate and perhaps a little more accuracy, a robot can use this technique to keep itself balanced, much like humans use information from gravitational forces to stay balanced.\n\nGithub available here."
] | [
null,
"https://www.rvmiller.com/wp-content/uploads/2013/07/Balance_Disorder_Illustration_A.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8613437,"math_prob":0.90191436,"size":3585,"snap":"2021-43-2021-49","text_gpt3_token_len":791,"char_repetition_ratio":0.09885507,"word_repetition_ratio":0.0035778175,"special_character_ratio":0.22845188,"punctuation_ratio":0.14820144,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96092457,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T19:36:43Z\",\"WARC-Record-ID\":\"<urn:uuid:8f279b6d-f537-419c-bc07-086e908565d4>\",\"Content-Length\":\"25826\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fef2a814-d0bc-4f1c-9ebc-43ed2937f689>\",\"WARC-Concurrent-To\":\"<urn:uuid:9609c7ef-5831-417c-9025-170c21853a3a>\",\"WARC-IP-Address\":\"172.67.144.75\",\"WARC-Target-URI\":\"https://rvmiller.com/tag/equilibrioception/\",\"WARC-Payload-Digest\":\"sha1:EK4X4ARXK2T44ZI4AFDOSK4MP5XEFGNT\",\"WARC-Block-Digest\":\"sha1:4LFNWFMASEVYBY3S6FQQJQFUOEC3Q7MA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587593.0_warc_CC-MAIN-20211024173743-20211024203743-00068.warc.gz\"}"} |
https://spsstools.net/en/syntax/syntax-index/meta-analysis/meta-analysis-fixed-and-random-effects-models/ | [
"``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412``` ```**************************************************************************** *** Meta-Analysis: Fixed and Random Effects Models *** Valentim R. Alferes (University of Coimbra, Portugal) *** valferes@fpce.uc.pt ** ** This syntax does a meta-analysis on a set of studies comparing two ** independent means. It produces results for both fixed and random effects ** models, using Cohen's d statistic, with or without Hedges' correction. ** ** The user has TEN MODES FOR ENTERING SUMMARY DATA (see PART 1): ** ** Mode 1 - Study No., N1, M1, SD1, N2, M2 SD2. ** Mode 2 - Study No., N1, M1, N2, M2 SD_POOL. ** Mode 3 - Study No., Direction of Effect, Difference, N1, SD1, N2, SD2. ** Mode 4 - Study No., Direction of Effect, Difference, N1, N2, SD_POOL. ** Mode 5 - Study No., DF, M1, SD1, M2 SD2. ** Mode 6 - Study No., DF, M1, M2, SD_POOL. ** Mode 7 - Study No., Direction of Effect, DF, Difference, SD1, SD2. ** Mode 8 - Study No., Direction of Effect, DF, Difference, SD_POOL. ** Mode 9 - Study No., Direction of Effect, N1, N2, T_OBS. ** Mode 10 - Study No., Direction of Effect, DF, T_OBS. ** ** There are no limits for the number of studies to be analyzed and the user ** can input data simultaneously in the ten modes or enter all the studies ** only in one mode. In the modes not used, the lines of data have to be ** cleared, but not the corresponding command lines. ** ** If the input are means, the program assumes that Group 1 is the ** experimental or \u0013focus\u0014 group and Group 2 is the control or comparison ** group. ** ** If the input are differences between group means or observed Ts, they ** are registered in absolute values (DIF=|M1-M2| or T_OBS=|Tobs|) and the ** user specifies the direction of effect in a different variable (DIRECT): ** +1 (if the effect is in the expected direction: Group 1 mean greater ** than Group 2 mean) and -1 (if the effect is reversed: Group 1 mean lesser ** than Group 2 mean). ** ** When the input are degrees of freedom, the syntax asssumes equal Ns if df ** are even, and N2=N1-1 if they are odd. ** ** When the data are selected from two contrasting ANOVA treatments, the ** user can input them in modes 2 or 4 and let the pooled standard deviation ** (SD_POOL) equals the squared root of the original ANOVA MS Error. ** ** By default the measure of effect size is Hedges' correction to Cohen's d. ** If you want to use d statistic without correction, you can change the ** default in the corresponding command line. ** ** The OUTPUT is organized in nine tables: ** ** Table 1 \u0016 User's data ** ** Table 2 \u0016 Program imputations ** ** Table 3 \u0016 Individual T Tests and observed power ** - N1, N2, degrees of freedom (DF), difference between group means (DIF), ** observed T (T_OBS), two-tailed probability (P_TWO), and one-tailed ** probability (P_ONE); ** - Alfa (ALFA), Harmonic N (N_HARM), noncentrality parameter (NCP), and ** observed power (OPOWER). ** [for algorithm, see Borenstein et al., 2001] ** ** Table 4 \u0016 Measures of Effect Size and Nonoverlap ** Measures of effect size: ** - Cohen's d (D); ** [Cohen, 1988, p. 20] ** - Hedges' correction (D_H); ** [D_H = d, in Hedges & Olkin, 1985; D_H = d*, in Hunter & Schmidt, ** 1990; see Cortina & Nouri, 2000, p. 9]; ** - r point biserial (R); ** - Squared r point biserial (R2); ** - Binomial Effect Size Display (BESD_LO and BESD_UP). ** [see formulas in Rosenthal et al. 2000, pp. 8-19] ** ** Measures of nonoverlap: ** - U1 (percent of nonoverlap between the two distributions); ** - U2 (the highest percent in Group 1 that exceeds the same lowest ** percent in Group 2); ** - U3 (percentile standing = percentile of the Group 2 distribution ** corresponding to the 50th percentile of Group 1 distribution); ** [see formulas in Cohen, 1988, pp. 21-23] ** ** Table 5 - Non weighted effect size - Descriptive statistics ** - Number of studies (NSTUDIES), Cohen's d (D), and Hedges' correction ** (D_H) (minimun, maximun, mean, sem, and sd). ** ** Table 6 \u0016 Fixed effects model ** - Weighted average effect size (EF_SIZE), VARIANCE, and standard error ** (SE); ** - z Test (z), two-tailed probability (P_TWO), and one-tailed probability ** (P_ONE); ** - Confidence level (CL), and lower (CI_LOWER) and upper (CI_UPPER) ** interval confidence limits. ** [see formulas in Shadish & Haddock, 1994, pp. 265-268] ** ** Table 7 - Chi-square Test for homogeneity of effect size: ** - Q statistic, degrees of freedom (K), and two-tailed probability ** (P_CHISQ) ** [see formula in Shadish & Haddock, 1994, p. 266] ** ** Table 8 - Random Variance Component ** - V0 [see formula in Lipsey & Wilson, 2001, p. 134]. ** ** Table 9 \u0016 Random effects model ** - Weighted average effect size (EF_SIZE), VARIANCE, and standard error ** (SE); ** - z Test (z), two-tailed probability (P_TWO), and one-tailed probability ** (P_ONE); ** - Confidence level (CL), and lower (CI_LOWER) and upper (CI_UPPER) ** interval confidence limits. ** [see formulas and procedures in Lipsey & Wilson, 2001, pp. 134-135] ** ** For calculating observed power of individual studies, the syntax assumes ** alfa = 0.05. For calculating the confidence interval of weighted effect ** sizes, the syntax assumes confidence level = 95%. If you want, you can ** modify these values in the corresponding lines (see PART 2). ** ** After running the syntax, the user can have access to Tables 2, 3 and 4 ** in SPSS active file, so that he may handle the data for other meta- ** analytic procedures based on different effect size measures or exact ** probabilities (see other syntaxes in this site). ** ** In the example, we have 20 studies and we have used the ten input data ** modes. **************************************************************************** *** BEGIN OF THE SYNTAX. ** PART 1: ENTERING SUMMARY DATA. * Mode 1: Enter, row by row, Study No., N1, M1, SD1, N2, M2 SD2. DATA LIST LIST /Study(F8.0) N1(F8.0) M1(F8.2) SD1(F8.2) N2(F8.0) M2(F8.2) SD2(F8.2). BEGIN DATA 1 17 7.46 1.98 16 6.23 2.45 2 15 5.34 2.14 15 4.47 2.51 END DATA. SAVE OUTFILE=DATA1. * Mode 2: Enter, row by row, Study No., N1, M1, N2, M2 SD_POOL. DATA LIST LIST /Study(F8.0) N1(F8.0) M1(F8.2) N2(F8.0) M2(F8.2) SD_POOL(F8.2). BEGIN DATA 3 14 7.32 16 8.23 2.67 4 23 6.20 27 4.47 2.21 END DATA. SAVE OUTFILE=DATA2. * Mode 3: Enter, row by row, Study No., Direction of Effect, Difference, * N1, SD1, N2, SD2. DATA LIST LIST /Study(F8.0) Direct(F8.0) DIF(F8.2) N1(F8.0) SD1(F8.2) N2(F8.0) SD2(F8.2). BEGIN DATA 5 +1 1.04 10 3.04 11 2.98 6 -1 2.25 12 2.63 12 2.21 END DATA. SAVE OUTFILE=DATA3. * Mode 4: Enter, row by row, Study No., Direction of Effect, Difference, N1, * N2, SD_POOL. DATA LIST LIST /Study(F8.0) Direct(F8.0) DIF(F8.2) N1(F8.0) N2(F8.0) SD_POOL(F8.2). BEGIN DATA 7 -1 1.32 34 33 2.44 8 +1 1.25 20 20 3.09 END DATA. SAVE OUTFILE=DATA4. * Mode 5: Enter, row by row, Study No., DF, M1, SD1, M2 SD2. DATA LIST LIST /Study(F8.0) DF(F8.0) M1(F8.2) SD1(F8.2) M2(F8.2) SD2(F8.2). BEGIN DATA 9 34 7.46 1.69 6.33 2.98 10 33 5.34 2.94 5.46 2.31 END DATA. SAVE OUTFILE=DATA5. * Mode 6: Enter, row by row, Study No., DF, M1, M2, SD_POOL. DATA LIST LIST /Study(F8.0) DF(F8.0) M1(F8.2) M2(F8.2) SD_POOL(F8.2). BEGIN DATA 11 27 7.76 5.29 2.77 12 28 6.30 4.21 2.41 END DATA. SAVE OUTFILE=DATA6. * Mode 7: Enter, row by row, Study No., Direction of Effect, DF, Difference, * SD1, SD2. DATA LIST LIST /Study(F8.0) Direct(F8.0) DF(F8.0) DIF(F8.2) SD1(F8.2) SD2(F8.2). BEGIN DATA 13 +1 40 3.07 1.77 2.87 14 -1 37 2.11 2.62 2.21 END DATA. SAVE OUTFILE=DATA7. * Mode 8: Enter, row by row, Study No., Direction of Effect, DF, Difference, * SD_POOL. DATA LIST LIST /Study(F8.0) Direct(F8.0) DF(F8.0) DIF(F8.2) SD_POOL(F8.2). BEGIN DATA 15 -1 23 2.22 1.88 16 +1 34 3.17 1.94 END DATA. SAVE OUTFILE=DATA8. * Mode 9: Enter, row by row, Study No., Direction of Effect, N1, N2, T_OBS. DATA LIST LIST /Study(F8.0) Direct(F8.0) N1(F8.0) N2(F8.0) T_OBS(F8.2). BEGIN DATA 17 +1 20 20 4.74 18 -1 14 15 3.17 END DATA. SAVE OUTFILE=DATA9. * Mode 10: Enter, row by row, Study No., Direction of Effect, DF, T_OBS. DATA LIST LIST /Study(F8.0) Direct(F8.0) DF(F8.0) T_OBS(F8.2). BEGIN DATA 19 +1 54 5.46 20 -1 49 2.27 END DATA. SAVE OUTFILE=DATA10. GET FILE=DATA1. ADD FILES/FILE=*/FILE=DATA2/FILE=DATA3/FILE=DATA4/FILE=DATA5 /FILE=DATA6/FILE=DATA7/FILE=DATA8/FILE=DATA9/FILE=DATA10. EXECUTE. ** PART 2: SETTING ALFA AND CONFIDENCE LEVEL, CHOOSING EFFECT SIZE MEASURE ** AND RUNNIG META-ANALYSIS. * Enter alfa for computing observed power (by default, AFFA = 0.05). COMPUTE ALFA = 0.05. EXECUTE. SORT CASES BY STUDY(A). IF (M1>=M2) DIRECT=1. IF (M10) DIF=ABS(M1-M2). COMPUTE SDX=(((N1-1)*(SD1**2))+((N2-1)*(SD2**2)))/(N1+N2-2). IF (SD_POOL<=0 OR SD_POOL>0) SDX=SD_POOL**2. IF (SDX<=0 OR SDX>0) T_OBS=DIF/SQR(SDX*((1/N1)+(1/N2))). COMPUTE SD_POOL=SQR(SDX). COMPUTE T_OBS=DIRECT*T_OBS. COMPUTE DIF=DIRECT*DIF. COMPUTE TABS=ABS(T_OBS). COMPUTE P_TWO=(1-CDF.T(TABS,DF))*2. COMPUTE P_ONE=1-CDF.T(TABS,DF). COMPUTE D=T_OBS*SQR((1/N1)+(1/N2)). COMPUTE N_HARM=(2*N1*N2)/(N1+N2). COMPUTE NCP=ABS((D*SQR(N_HARM))/SQR(2)). COMPUTE T_ALPHA=IDF.T(1-ALFA/2,DF). COMPUTE POWER1=1-NCDF.T(T_ALPHA,DF,NCP). COMPUTE POWER2=1-NCDF.T(T_ALPHA,DF,-NCP). COMPUTE OPOWER=POWER1+POWER2. COMPUTE R=T_OBS/SQR((T_OBS**2)+DF). COMPUTE R2=R**2. COMPUTE D_H=D*(1-(3/(4*(N1+N2)-9))). COMPUTE BESD_LO=.50-(R/2). COMPUTE BESD_UP=.50+(R/2). COMPUTE U3=CDF.NORMAL(D,0,1)*100. COMPUTE U2=CDF.NORMAL((D/2),0,1)*100. COMPUTE U2X=CDF.NORMAL(((ABS(D))/2),0,1). COMPUTE U1=(2*U2X-1)/U2X*100. FORMATS P_TWO P_ONE ALFA N_HARM NCP OPOWER D D_H R R2 BESD_LO BESD_UP(F8.4) U1 U2 U3(F8.1). SUMMARIZE/TABLES=STUDY DIRECT N1 N2 DF M1 M2 DIF SD1 SD2 SD_POOL T_OBS /FORMAT=VALIDLIST NOCASENUM TOTAL/TITLE=\"Table 2 \u0016 Program imputations\" /CELLS=NONE. SUMMARIZE/TABLES=STUDY DIRECT DIF DF T_OBS P_TWO P_ONE ALFA N_HARM NCP OPOWER/FORMAT=VALIDLIST NOCASENUM TOTAL /TITLE=\"TABLE 3 \u0016 Individual T Tests and observed power\"/CELLS=NONE. SUMMARIZE/TABLES=STUDY DIRECT D D_H R R2 BESD_LO BESD_UP U1 U2 U3 /FORMAT=VALIDLIST NOCASENUM TOTAL /TITLE=\"Table 4 - Measures of effect size and nonoverlap\"/CELLS=NONE. SUMMARIZE/TABLES=D D_H/FORMAT=NOLIST TOTAL/TITLE=\"Table 5 - Non weighted \" +\"effect size \u0016 Descriptive statistics: Cohen\u0012s d and Hedges' correction\" /CELLS=COUNT MIN MAX MEAN SEMEAN STDDEV. SAVE OUTFILE=META_DATA. * Choose the effect size measure (Cohen's d = 1; Hedges' correction = 2) * (by default, ES = 2). COMPUTE ES = 2. IF (ES=1) D=D. IF (ES=2) D=D_H. EXECUTE. COMPUTE V=((N1+N2)/(N1*N2))+((D**2)/(2*(N1+N2))). COMPUTE W=1/V. COMPUTE WD=W*D. COMPUTE WD2=W*D**2. COMPUTE W2=W**2. COMPUTE X=1. EXECUTE. SAVE OUTFILE=FOUTX. AGGREGATE/OUTFILE=*/BREAK=X/SUM_W=SUM(W)/SUM_WD=SUM(WD) /SUM_WD2=SUM(WD2)/SUM_W2=SUM(W2)/NSTUDIES=N. COMPUTE K=NSTUDIES-1. COMPUTE EF_SIZE=SUM_WD/SUM_W. COMPUTE VARIANCE=1/SUM_W. COMPUTE SE=SQR(1/SUM_W). COMPUTE Z=ABS(EF_SIZE)/SE. COMPUTE P_TWO=(1-CDF.NORMAL(Z,0,1))*2 . COMPUTE P_ONE=1-CDF.NORMAL(Z,0,1). EXECUTE. * Enter confidence level for interval confidence (by default, CL=95%). COMPUTE CL = 95. COMPUTE ZCL=IDF.NORMAL((1-(((100-CL)/100)/2)),0,1). COMPUTE CI_LOWER=EF_SIZE-ZCL*SE. COMPUTE CI_UPPER=EF_SIZE+ZCL*SE. COMPUTE Q=SUM_WD2-SUM_WD**2/SUM_W. COMPUTE P_CHISQ = 1-CDF.CHISQ(Q,K). COMPUTE V0 = (Q-K)/(SUM_W-SUM_W2/SUM_W) . EXECUTE. SAVE OUTFILE=FOUTY/KEEP=V0 X. FORMATS ALL(F8.4) VARIANCE SE(F8.5) NSTUDIES CL K(F8.0). SUMMARIZE/TABLES=NSTUDIES EF_SIZE VARIANCE SE Z P_TWO P_ONE CL CI_LOWER CI_UPPER/FORMAT=LIST NOCASENUM TOTAL /TITLE='Table 6 \u0016 Fixed effects model:' +' Weighted average effect size, z test, and confidence interval' /CELLS=NONE. SUMMARIZE/TABLES=Q K P_CHISQ/FORMAT=LIST NOCASENUM TOTAL/TITLE= 'Table 7 - Chi-square test for homogeneity of effect size'/cells=none. GET FILE=FOUTX. MATCH FILES /FILE=*/TABLE=FOUTY/BY X. EXECUTE. COMPUTE V=V+V0. COMPUTE W=1/V. COMPUTE WD=W*D. COMPUTE WD2=W*D**2. COMPUTE W2=W**2. EXECUTE. FORMATS V0(F8.3). SUMMARIZE/TABLES=v0/FORMAT=NOLIST TOTAL/TITLE='Table 8 - Random variance' +' component'/CELLS=MEAN. AGGREGATE/OUTFILE=*/BREAK=X/SUM_W=SUM(W)/SUM_WD=SUM(WD) /SUM_WD2=SUM(WD2)/SUM_W2=SUM(W2)/NSTUDIES=N. COMPUTE K=NSTUDIES-1. COMPUTE EF_SIZE=SUM_WD / SUM_W. COMPUTE VARIANCE=1/SUM_W. COMPUTE SE=SQR(1/SUM_W). COMPUTE Z=ABS(EF_SIZE)/SE. COMPUTE P_TWO=(1-CDF.NORMAL(Z,0,1))*2 . COMPUTE P_ONE=1-CDF.NORMAL(Z,0,1). EXECUTE. * Enter confidence level for interval confidence (by default, CL=95%). COMPUTE CL = 95. COMPUTE ZCL=IDF.NORMAL((1-(((100-CL)/100)/2)),0,1). COMPUTE CI_LOWER=EF_SIZE-ZCL*SE. COMPUTE CI_UPPER=EF_SIZE+ZCL*SE. FORMATS ALL(F8.4) VARIANCE SE(F8.5) NSTUDIES CL K(F8.0). SUMMARIZE/TABLES=NSTUDIES EF_SIZE VARIANCE SE Z P_TWO P_ONE CL CI_LOWER CI_UPPER/FORMAT=LIST NOCASENUM TOTAL /TITLE='Table 9 \u0016 Random effects ' +'model: Weighted average effect size, z test, and confidence interval' /CELLS=NONE. GET FILE=META_DATA/KEEP=STUDY DIRECT N1 N2 DF M1 M2 DIF SD1 SD2 SD_POOL T_OBS P_TWO P_ONE ALFA N_HARM NCP OPOWER D D_H R R2 BESD_LO BESD_UP U1 U2 U3. *** END OF THE SYNTAX. **************************************************************************** ** Note ** ** ** Beginning in line: ** ** COMPUTE W=1/V. ** ** with effect sizes (D) and variances (V) from original sources, this ** syntax was tested with data reported in Lipsey and Wilson (2001, p. 130, ** Table 7.1) and Shadish and Haddock (1994, p. 267, Table 18.2). ** ** Imputations procedures and Individual T Tests were tested in SPSS, ** comparing the results with outputs obtained from raw data examples. ** ** Power calculations are the same given by SamplePower (Borenstein et al., ** 2001) and measures of effect size and nonoverlap were tested with ** tabulated values and examples given by Cohen (1988) and Rosenthal et al. ** (2000). ** ** Feel free to use and modify this syntax as you wish. In case you want to ** refer it, the proper form is: ** ** Alferes, V. R. (2003). Meta-analysis: Fixed and random effects models ** [SPSS Syntax File]. Retrieved [Date], from [URL] **************************************************************************** ** References ** ** ** Borenstein, M., Rothstein, H., & Cohen, J. (2001). SamplePower 2.0 ** [Computer Manual]. Chicago: SPSS Inc. ** Cohen, J. (1988). Statistical power analysis for the behavioral ** sciences (2nd ed.). Hillsdale, NJ: Lawrence Erbaum. ** Cortina, J. M., & Nouri, H. (2000). Effect sizes for ANOVA designs. ** Thousand Oaks, CA: Sage. ** Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. ** Orlando, FL: Academic Press. ** Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: ** Correcting error and bias in research findings. Newbury Park, CA: ** Sage. ** Lipsey, M. W., & Wilson, D. B. (2001). Pratical meta-analysis. Thousand ** Oaks, CA: Sage. ** Rosenthal, R., Rosnow, R. L, & Rubin, D. B. (2000). Contrasts and ** effect sizes in behavioral research: A correlational approach. ** Cambridge, UK: Cambridge University Press. ** Shadish, W. R., & Haddock, C. K. (1994). Combining estimates of effect ** size. In H. Cooper and L. V. Hedges (Eds.), The handbook of research ** synthesis (pp. 261-281). New York: Russell Sage Foundation. ***************************************************************************. ```\nRelated pages\n\n...\n\nNavigate from here"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5182806,"math_prob":0.8992635,"size":17078,"snap":"2022-27-2022-33","text_gpt3_token_len":6130,"char_repetition_ratio":0.14589435,"word_repetition_ratio":0.15045871,"special_character_ratio":0.43968847,"punctuation_ratio":0.19322033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99151367,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T08:04:11Z\",\"WARC-Record-ID\":\"<urn:uuid:6ff05ae0-8fac-422c-99a4-09a80f4ac598>\",\"Content-Length\":\"44983\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c099d17-95c1-49d6-8248-9017ddaa2b0f>\",\"WARC-Concurrent-To\":\"<urn:uuid:d93b1e5c-a1b2-4584-a2c8-a0d29d147d28>\",\"WARC-IP-Address\":\"78.108.88.70\",\"WARC-Target-URI\":\"https://spsstools.net/en/syntax/syntax-index/meta-analysis/meta-analysis-fixed-and-random-effects-models/\",\"WARC-Payload-Digest\":\"sha1:HXYMTSX6MVOJEUGB2PRBK6NGKIP5NM3G\",\"WARC-Block-Digest\":\"sha1:QK7LQOHEPTGA3MPKWWIQ2T2VLAXIEFQI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103329963.19_warc_CC-MAIN-20220627073417-20220627103417-00578.warc.gz\"}"} |
https://metanumbers.com/19197 | [
"## 19197\n\n19,197 (nineteen thousand one hundred ninety-seven) is an odd five-digits composite number following 19196 and preceding 19198. In scientific notation, it is written as 1.9197 × 104. The sum of its digits is 27. It has a total of 6 prime factors and 12 positive divisors. There are 12,636 positive integers (up to 19197) that are relatively prime to 19197.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 27\n• Digital Root 9\n\n## Name\n\nShort name 19 thousand 197 nineteen thousand one hundred ninety-seven\n\n## Notation\n\nScientific notation 1.9197 × 104 19.197 × 103\n\n## Prime Factorization of 19197\n\nPrime Factorization 35 × 79\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 6 Total number of prime factors rad(n) 237 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 19,197 is 35 × 79. Since it has a total of 6 prime factors, 19,197 is a composite number.\n\n## Divisors of 19197\n\n1, 3, 9, 27, 79, 81, 237, 243, 711, 2133, 6399, 19197\n\n12 divisors\n\n Even divisors 0 12 6 6\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 12 Total number of the positive divisors of n σ(n) 29120 Sum of all the positive divisors of n s(n) 9923 Sum of the proper positive divisors of n A(n) 2426.67 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 138.553 Returns the nth root of the product of n divisors H(n) 7.91085 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 19,197 can be divided by 12 positive divisors (out of which 0 are even, and 12 are odd). The sum of these divisors (counting 19,197) is 29,120, the average is 24,26.,666.\n\n## Other Arithmetic Functions (n = 19197)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 12636 Total number of positive integers not greater than n that are coprime to n λ(n) 2106 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 2185 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 12,636 positive integers (less than 19,197) that are coprime with 19,197. And there are approximately 2,185 prime numbers less than or equal to 19,197.\n\n## Divisibility of 19197\n\n m n mod m 2 3 4 5 6 7 8 9 1 0 1 2 3 3 5 0\n\nThe number 19,197 is divisible by 3 and 9.\n\n## Classification of 19197\n\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n• Frugal\n\n## Base conversion (19197)\n\nBase System Value\n2 Binary 100101011111101\n3 Ternary 222100000\n4 Quaternary 10223331\n5 Quinary 1103242\n6 Senary 224513\n8 Octal 45375\n10 Decimal 19197\n12 Duodecimal b139\n20 Vigesimal 27jh\n36 Base36 et9\n\n## Basic calculations (n = 19197)\n\n### Multiplication\n\nn×i\n n×2 38394 57591 76788 95985\n\n### Division\n\nni\n n⁄2 9598.5 6399 4799.25 3839.4\n\n### Exponentiation\n\nni\n n2 368524809 7074570758373 135810534848486481 2607154837486394975757\n\n### Nth Root\n\ni√n\n 2√n 138.553 26.7759 11.7709 7.18864\n\n## 19197 as geometric shapes\n\n### Circle\n\n Diameter 38394 120618 1.15775e+09\n\n### Sphere\n\n Volume 2.96339e+13 4.63102e+09 120618\n\n### Square\n\nLength = n\n Perimeter 76788 3.68525e+08 27148.7\n\n### Cube\n\nLength = n\n Surface area 2.21115e+09 7.07457e+12 33250.2\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 57591 1.59576e+08 16625.1\n\n### Triangular Pyramid\n\nLength = n\n Surface area 6.38304e+08 8.33746e+11 15674.3\n\n## Cryptographic Hash Functions\n\nmd5 0ea2f58f6ebe35f4bc5b37b01911fd0a cf629896a29eea86982312538e85ed49c8162409 3a6a06a207e19079b77324fa89227865e95bb9df85001afb3bbb55e2df9ff7d6 e3186ceb2f622c1377749fd014bb85a1927fb508ef66701769599b50720cac13505f0026c2182da09c100b8753d007ddc1a339e6cb2894e616e157a0b58f18f3 da26d51c04121c8a243f6fd1ef3a2178d4fff06c"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6159404,"math_prob":0.9733708,"size":4473,"snap":"2020-34-2020-40","text_gpt3_token_len":1575,"char_repetition_ratio":0.12038487,"word_repetition_ratio":0.025487257,"special_character_ratio":0.45137492,"punctuation_ratio":0.079015546,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9954902,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T17:35:39Z\",\"WARC-Record-ID\":\"<urn:uuid:043ae015-b364-49ad-9fa1-3312c03a280e>\",\"Content-Length\":\"47923\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae09b554-aac3-455f-aa36-31674ccd2a17>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d2b40f8-da47-42c0-82d3-8a442b3da593>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/19197\",\"WARC-Payload-Digest\":\"sha1:O2JKKWYEG5YUKCGQK2R6UFX7PRO2FH7M\",\"WARC-Block-Digest\":\"sha1:S7AGRLPXP7SVIAFLOV72XC54IJT564YB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400283990.75_warc_CC-MAIN-20200927152349-20200927182349-00224.warc.gz\"}"} |
https://eulercircle.com/classes/infinite-series/ | [
"# Infinite Series\n\nIn our infinite series class (Winter 2020), we will investigate a range of techniques for evaluating infinite series in closed form. Typically, students only learn how to evaluate a very small number of infinite series, such as geometric series and telescoping series. However, there are many fascinating approaches available for evaluating other types of infinite series.\n\nOne of the most famous infinite series is the sum",
null,
"$\\sum_{n=1}^\\infty \\frac{1}{n^2}$. Evaluating this series by showing it sums to",
null,
"$\\frac{\\pi^2}{6}$ was one of Euler’s early triumphs, and we’ll see how to do it. A related sum,",
null,
"$\\sum_{n=0}^\\infty \\frac{(-1)^n}{(2n+1)^3}$, can also be evaluated, this time using techniques from Fourier series, so this will be a perfect time to introduce Fourier series.\n\nAnother famous series, which we will evaluate if time permits, is Ramanujan’s fast-converging formula for",
null,
"$\\pi$:",
null,
"$\\frac{1}{\\pi}=\\frac{2\\sqrt{2}}{9801}\\sum_{n=0}^\\infty \\frac{(4n)!(1103+26390n)}{(n!)^4 396^{4n}}$. This one is far more difficult and relates to deep results in number theory.\n\nWe will also look at related topics, such as infinite products, continued fractions, nested radicals, and so forth.\n\nCalculus is required for this class."
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95517844,"math_prob":0.9911508,"size":1020,"snap":"2022-27-2022-33","text_gpt3_token_len":207,"char_repetition_ratio":0.1496063,"word_repetition_ratio":0.0,"special_character_ratio":0.2,"punctuation_ratio":0.13333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99952316,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T00:28:14Z\",\"WARC-Record-ID\":\"<urn:uuid:005da31d-5be2-4c70-858d-8e5a115a7cb9>\",\"Content-Length\":\"57302\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5a8eb15-c302-4c89-83a6-392763147692>\",\"WARC-Concurrent-To\":\"<urn:uuid:a957db60-98f5-496b-9bbc-850b31bf5eb3>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://eulercircle.com/classes/infinite-series/\",\"WARC-Payload-Digest\":\"sha1:U5YGKDJULBPPQH4JVRJEAWUMNAZAO6H3\",\"WARC-Block-Digest\":\"sha1:Y4VWJ7JGCXLRZTY2MQDJGAF2YKTFY57H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570741.21_warc_CC-MAIN-20220808001418-20220808031418-00539.warc.gz\"}"} |
https://codefreshers.com/equal-row-and-column-pairs-solution-leetcode/ | [
"# [Solution] Equal Row and Column Pairs solution leetcode\n\nEqual Row and Column Pairs solution leetcode – Given a 0-indexed `n x n` integer matrix `grid`return the number of pairs `(Ri, Cj)` such that row `Ri` and column `Cj` are equal.\n\n## [Solution] Equal Row and Column Pairs solution leetcode\n\nA row and column pair is considered equal if they contain the same elements in the same order (i.e. an equal array).\n\nExample 1:",
null,
"```Input: grid = [[3,2,1],[1,7,6],[2,7,7]]\nOutput: 1\nExplanation: There is 1 equal row and column pair:\n- (Row 2, Column 1): [2,7,7]\n```\n\n## [Solution] Equal Row and Column Pairs solution leetcode",
null,
"```Input: grid = [[3,1,2,2],[1,4,4,5],[2,4,2,2],[2,4,2,2]]\nOutput: 3\nExplanation: There are 3 equal row and column pairs:\n- (Row 0, Column 0): [3,1,2,2]\n- (Row 2, Column 2): [2,4,2,2]\n- (Row 3, Column 2): [2,4,2,2]\n```\n\n## [Solution] Equal Row and Column Pairs solution leetcode\n\n• `n == grid.length == grid[i].length`\n• `1 <= n <= 200`\n• `1 <= grid[i][j] <= 105`"
] | [
null,
"https://assets.leetcode.com/uploads/2022/06/01/ex1.jpg",
null,
"https://assets.leetcode.com/uploads/2022/06/01/ex2.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6890101,"math_prob":0.99972945,"size":897,"snap":"2022-27-2022-33","text_gpt3_token_len":317,"char_repetition_ratio":0.16013438,"word_repetition_ratio":0.10067114,"special_character_ratio":0.38238573,"punctuation_ratio":0.24583334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99823374,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-11T17:40:45Z\",\"WARC-Record-ID\":\"<urn:uuid:5ca7d714-a7fd-4d6f-aa35-1e37958b10b4>\",\"Content-Length\":\"52635\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:281f97a9-d33f-4e89-b509-e868f4447925>\",\"WARC-Concurrent-To\":\"<urn:uuid:56f67317-fb4e-45ec-9d3e-76f197b26645>\",\"WARC-IP-Address\":\"185.212.71.145\",\"WARC-Target-URI\":\"https://codefreshers.com/equal-row-and-column-pairs-solution-leetcode/\",\"WARC-Payload-Digest\":\"sha1:L3N4WXXYIMJMQQRVJ4RYG4VQ5QZ67ZRQ\",\"WARC-Block-Digest\":\"sha1:36XQ6UXRNUI4HLLJVSMAGHAWKD5Y4GCJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571483.70_warc_CC-MAIN-20220811164257-20220811194257-00008.warc.gz\"}"} |
http://num.bubble.ro/a/231/805/ | [
"# Addition table for N = 231 + 804÷805\n\n231 + 804 = 1035 [+]\n231 + 804.01 = 1035.01 [+]\n231 + 804.02 = 1035.02 [+]\n231 + 804.03 = 1035.03 [+]\n231 + 804.04 = 1035.04 [+]\n231 + 804.05 = 1035.05 [+]\n231 + 804.06 = 1035.06 [+]\n231 + 804.07 = 1035.07 [+]\n231 + 804.08 = 1035.08 [+]\n231 + 804.09 = 1035.09 [+]\n231 + 804.1 = 1035.1 [+]\n231 + 804.11 = 1035.11 [+]\n231 + 804.12 = 1035.12 [+]\n231 + 804.13 = 1035.13 [+]\n231 + 804.14 = 1035.14 [+]\n231 + 804.15 = 1035.15 [+]\n231 + 804.16 = 1035.16 [+]\n231 + 804.17 = 1035.17 [+]\n231 + 804.18 = 1035.18 [+]\n231 + 804.19 = 1035.19 [+]\n231 + 804.2 = 1035.2 [+]\n231 + 804.21 = 1035.21 [+]\n231 + 804.22 = 1035.22 [+]\n231 + 804.23 = 1035.23 [+]\n231 + 804.24 = 1035.24 [+]\n231 + 804.25 = 1035.25 [+]\n231 + 804.26 = 1035.26 [+]\n231 + 804.27 = 1035.27 [+]\n231 + 804.28 = 1035.28 [+]\n231 + 804.29 = 1035.29 [+]\n231 + 804.3 = 1035.3 [+]\n231 + 804.31 = 1035.31 [+]\n231 + 804.32 = 1035.32 [+]\n231 + 804.33 = 1035.33 [+]\n231 + 804.34 = 1035.34 [+]\n231 + 804.35 = 1035.35 [+]\n231 + 804.36 = 1035.36 [+]\n231 + 804.37 = 1035.37 [+]\n231 + 804.38 = 1035.38 [+]\n231 + 804.39 = 1035.39 [+]\n231 + 804.4 = 1035.4 [+]\n231 + 804.41 = 1035.41 [+]\n231 + 804.42 = 1035.42 [+]\n231 + 804.43 = 1035.43 [+]\n231 + 804.44 = 1035.44 [+]\n231 + 804.45 = 1035.45 [+]\n231 + 804.46 = 1035.46 [+]\n231 + 804.47 = 1035.47 [+]\n231 + 804.48 = 1035.48 [+]\n231 + 804.49 = 1035.49 [+]\n231 + 804.5 = 1035.5 [+]\n231 + 804.51 = 1035.51 [+]\n231 + 804.52 = 1035.52 [+]\n231 + 804.53 = 1035.53 [+]\n231 + 804.54 = 1035.54 [+]\n231 + 804.55 = 1035.55 [+]\n231 + 804.56 = 1035.56 [+]\n231 + 804.57 = 1035.57 [+]\n231 + 804.58 = 1035.58 [+]\n231 + 804.59 = 1035.59 [+]\n231 + 804.6 = 1035.6 [+]\n231 + 804.61 = 1035.61 [+]\n231 + 804.62 = 1035.62 [+]\n231 + 804.63 = 1035.63 [+]\n231 + 804.64 = 1035.64 [+]\n231 + 804.65 = 1035.65 [+]\n231 + 804.66 = 1035.66 [+]\n231 + 804.67 = 1035.67 [+]\n231 + 804.68 = 1035.68 [+]\n231 + 804.69 = 1035.69 [+]\n231 + 804.7 = 1035.7 [+]\n231 + 804.71 = 1035.71 [+]\n231 + 804.72 = 1035.72 [+]\n231 + 804.73 = 1035.73 [+]\n231 + 804.74 = 1035.74 [+]\n231 + 804.75 = 1035.75 [+]\n231 + 804.76 = 1035.76 [+]\n231 + 804.77 = 1035.77 [+]\n231 + 804.78 = 1035.78 [+]\n231 + 804.79 = 1035.79 [+]\n231 + 804.8 = 1035.8 [+]\n231 + 804.81 = 1035.81 [+]\n231 + 804.82 = 1035.82 [+]\n231 + 804.83 = 1035.83 [+]\n231 + 804.84 = 1035.84 [+]\n231 + 804.85 = 1035.85 [+]\n231 + 804.86 = 1035.86 [+]\n231 + 804.87 = 1035.87 [+]\n231 + 804.88 = 1035.88 [+]\n231 + 804.89 = 1035.89 [+]\n231 + 804.9 = 1035.9 [+]\n231 + 804.91 = 1035.91 [+]\n231 + 804.92 = 1035.92 [+]\n231 + 804.93 = 1035.93 [+]\n231 + 804.94 = 1035.94 [+]\n231 + 804.95 = 1035.95 [+]\n231 + 804.96 = 1035.96 [+]\n231 + 804.97 = 1035.97 [+]\n231 + 804.98 = 1035.98 [+]\n231 + 804.99 = 1035.99 [+]\nNavigation: Home | Addition | Substraction | Multiplication | Division Tables for 231: Addition | Substraction | Multiplication | Division\n\nOperand: 1 2 3 4 5 6 7 8 9 10 20 30 40 50 60 70 80 90 100 200 300 400 500 600 700 800 801 802 803 804 805 806 807 808 809 900 1000 2000 3000 4000 5000 6000 7000 8000 9000\n\nAddition for: 1 2 3 4 5 6 7 8 9 10 20 30 40 50 60 70 80 90 100 200 231 232 233 234 235 236 237 238 239 300 400 500 600 700 800 900 1000 2000 3000 4000 5000 6000 7000 8000 9000"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8164602,"math_prob":1.0000085,"size":13227,"snap":"2019-51-2020-05","text_gpt3_token_len":3307,"char_repetition_ratio":0.36580202,"word_repetition_ratio":0.42442966,"special_character_ratio":0.3173055,"punctuation_ratio":0.07234727,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000082,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T23:46:58Z\",\"WARC-Record-ID\":\"<urn:uuid:837abf9f-ba1d-4841-a854-e066b08307af>\",\"Content-Length\":\"44866\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0067f968-d78c-4942-a04d-757a29590672>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb4f8028-14c3-4d25-9a71-cb23b635c79f>\",\"WARC-IP-Address\":\"104.24.96.16\",\"WARC-Target-URI\":\"http://num.bubble.ro/a/231/805/\",\"WARC-Payload-Digest\":\"sha1:4K343SOR5LMDLGOKCPDEMIOTKVPI5NLB\",\"WARC-Block-Digest\":\"sha1:OKGD6KGTSRQPAW23HCVCX7FFCM556PSN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250595787.7_warc_CC-MAIN-20200119234426-20200120022426-00488.warc.gz\"}"} |
https://gmat.la/question/Prep2007E1-DS-34 | [
"During an experiment, some water was removed from each of 6 water tanks. If the standard deviation of the volumes of water in the tanks at the beginning of the experiment was 10 gallons, what was the standard deviation of the volumes of water in the tanks at the end of the experiment?\n\n(1) For each tank, 30 percent of the volume of water that was in the tank at the beginning of the experiment was removed during the experiment.\n(2) The average (arithmetic mean) volume of water in the tanks at the end of the experiment was 63 gallons.\n\nStatement (1) ALONE is sufficient, but statement (2) alone is not sufficient.\n\nStatement (2) ALONE is sufficient, but statement (1) alone is not sufficient.\n\nBOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient.\n\nEACH statement ALONE is sufficient.\n\nStatements (1) and (2) TOGETHER are NOT sufficient."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83356327,"math_prob":0.9198967,"size":1398,"snap":"2021-43-2021-49","text_gpt3_token_len":464,"char_repetition_ratio":0.15781923,"word_repetition_ratio":0.16509435,"special_character_ratio":0.2274678,"punctuation_ratio":0.064257026,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9530415,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T10:28:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a929ee3f-68c9-42ce-8bf0-3e05700e8e55>\",\"Content-Length\":\"53837\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e62d95cb-0a75-43b6-bf6b-034614dc724f>\",\"WARC-Concurrent-To\":\"<urn:uuid:60fc97d5-5171-4067-99f8-043ef9095eda>\",\"WARC-IP-Address\":\"47.111.57.157\",\"WARC-Target-URI\":\"https://gmat.la/question/Prep2007E1-DS-34\",\"WARC-Payload-Digest\":\"sha1:FQJEP6YVJ2QJHYJK4JS5W24CJPQRB2NO\",\"WARC-Block-Digest\":\"sha1:LNLX7KVHPYTRBWG2HQ4NZEFLZA5VA36Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362619.23_warc_CC-MAIN-20211203091120-20211203121120-00196.warc.gz\"}"} |
https://blog.trucklogics.com/wp-content/uploads/wb3szo/hjvjcic.php?tag=standard-error-of-the-mean-example | [
"# standard error of the mean example\n\nWe need t 0. Insert this widget code anywhere inside the body tag; Use the code as it is for proper working. (The code for the summarySE function must be entered before it is called here). Standard. The standard error of the mean is the standard deviation of sample means. How to get the Standard Input and Output Stream through Console in C#? Suppose a large oil company is drilling wells in various locations throughout Texas, and the … . It is also known as the expected value, mathematical expectation, EV, average, mean value, mean, or first moment.It is denoted as where X is a random variable, are the possible outcomes and are their corresponding probabilities. Asking for help, clarification, or … This quantile can be found, for example, using the R function qt and t 0. The first formula shows how S e is computed by reducing S Y according to the correlation and sample size. As a random variable the sample mean has a probability distribution, a mean $$μ_{\\bar{X}}$$, and a standard deviation $$σ_{\\bar{X}}$$. But avoid …. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions. = mean value of the sample data set. Thanks for contributing an answer to Cross Validated! In the first example (Rating \"A\") the Standard Deviation is zero because ALL responses were exactly the mean value. Where: s = sample standard deviation x 1, ..., x N = the sample data set x̄. 005, 24. Plus, get practice tests, quizzes, and personalized coaching to help you succeed. Dummies has always stood for taking on complex concepts and making them easy to understand. Introduction. The standard deviation is the most common measure of dispersion, or how spread out the data are about the mean. Dummies helps everyone be more knowledgeable and confident in applying what they know. Variation that is random or natural to a process is often referred to as noise. 797. Confidence interval for a normal mean Suppose we have a sample of size 25 from a normal distribution, s 2 Y = 2. Please accept YouTube cookies to play this video. In 1893, Karl Pearson coined the notion of standard deviation, which is undoubtedly most used measure, in research studies. This can be done in a number of ways, as described on this page.In this case, we’ll use the summarySE() function defined on that page, and also at the bottom of this page. As a member, you'll also get unlimited access to over 83,000 lessons in math, English, science, history, and more. Notes. Eh? N = size of the sample data set Standard deviation represents the normal distribution rate for a set of data, and it is the square root of the variance. The symbol σ (sigma) is often used to represent the standard deviation of a population, while s is used to represent the standard deviation of a sample. 1. The individual responses did not deviate at all from the mean. (15 points) Let p denote the probability that a newly drilled oil well strikes oil. An accepted reference sample which is used for establishing a unit for the measurement of physical quantities. 005, 24 = 2. more than two times) by colleagues if they should plot/use the standard deviation or the standard error, here is a small post trying to clarify the meaning of these two metrics and when to use them with some R code example. A physical quantity is specified by a numerical factor and a unit; for example, a mass might be expressed as 8 g, a length as 6 cm, and a time interval as 2 min. Free Trial 30 Days Now! 1, and we want a 99 % confidence interval for μ. First, it is necessary to summarize the data. An amazing Excel add-in, Kutools for Excel, provides 300+ features to help you improve work efficiency greatly.And its Normal Distribution / Bell Curve (chart) feature makes it possible to create a perfect bell curve chart with only 2 steps! StatKey will bootstrap a confidence interval for a mean, median, standard deviation, proportion, difference in two means, difference in two proportions, simple linear regression slope, and correlation (Pearson's r). A simple explanation of the difference between the standard deviation and the standard error, including an example. Let's look at an example: The teacher uses the variance of 46 to find the standard … I got often asked (i.e. First-class tool helps you 2 steps to create a bell curve chart in Excel . Expectation is the probability-weighted mean of the sum of all the possible outcomes of an experiment. Why df=n-2? Statistics courses, especially for biologists, assume formulae = understanding and teach how to do statistics, but largely ignore what those procedures assume, and how their results mislead when those assumptions are unreasonable. Example #2. The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0).. Prerequisite concepts. On the other hand, the standard deviation of the return measures deviations of individual returns from the mean. Statology Study is the ultimate online statistics study guide that helps you understand all of the core concepts taught in any elementary statistics course and … Standard error of the mean tells you how accurate your estimate of the mean is likely to be. A.17 Confidence Intervals 691 Example A.2. You can also take the sample mean even further by calculating the standard deviation of the sample set. A Computer Science portal for geeks. Standard deviation Standard deviation is a measure of dispersion […] What are cin, cout and cerr streams in C++? What are Standard Libraries in C++? Let us take the example of an investor who has received the following returns on stock XYZ: – What is the size of int, long type as per C++ standard? What is the size of int, long type in C++ standard? What is British standard system and international standard system? Statistics - Standard Error ( SE ) - The standard deviation of a sampling distribution is called as standard error. Thus SD is a measure of volatility and can be used as a … x̄ = Σ n i x i /n By accepting you will be accessing content from YouTube, a service provided by an external third party. Standard errors mean the statistical fluctuation of estimators, and they are important particularly when one compares two estimates (for example, whether one quantity Summary. Standard Deviation, is a measure of the spread of a series or the distance from the standard. Our t of 5.26 is much larger, than the .01 level of 2.82 and there is little doubt that the gain from Trial 1 to Trial 5 is significant. The resulting misuse is, shall we say, predictable... Use and Misuse The obtained t of 5.26 > 2.82. Bias, standard error and mean squared error (MSE) are three metrics of a statistical estimator's accuracy. There are formulas that relate the mean and standard deviation of the sample mean to the mean and standard deviation of the population from which the sample is … Step 1: Calculate the mean (Total of all samples divided by the number of samples). In order to calculate our estimated regression model, we had to use our sample data to calculate the estimated slope (β̂ 1) and the intercept (β̂ 0).And as we used our sample data to calculate these two estimates, we lose two degrees of freedom.Therefore, df=n-2. This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.. That is it. Tim Urdan, author of Statistics in Plain English, demonstrates how to calculate and interpret a standard error of the mean. In Rating \"B\", even though the group mean is the same (3.0) as the first distribution, the Standard Deviation is higher. SD = Standard deviation around the mean difference. Definition of Standard Deviation. 7, Y = 16. Solution: Sample Mean ( x̄ ) is calculated using the formula given below. Please be sure to answer the question.Provide details and share your research! The distribution of sample means varies far less than the individual values in a sample.If we know the population mean height of women is 65 inches then it would be extremely rare to have a sampe mean of 30 women at 74 inches. Buy Now! Step 2: Calculate each measurement's deviation from the mean (Mean minus the individual measurement). Console in C # more knowledgeable and confident in applying what they know entered before it is called standard. = size of the mean Plain English, demonstrates how to get standard! Used for establishing a unit for the measurement of physical quantities the summarySE function must be entered it.: Calculate each measurement 's deviation from the mean this article is licensed under the Commons-License! Taking on complex concepts and making them easy to understand code for the measurement of physical quantities SE ) the! Standard deviation of sample means of physical quantities denote the probability that a newly drilled oil well strikes oil by. The text in this article is licensed under the Creative Commons-License Attribution 4.0 International ( standard error of the mean example by 4.0 ):... Individual returns from the standard deviation is zero because all responses were exactly mean... 15 points ) Let p denote the probability that a newly drilled well! Thought and well explained computer science and programming articles, quizzes, and is. Outcomes of an experiment deviation from the mean ( Total of all samples divided by the number of ). Divided by the number of samples ) the body tag ; Use the code it... Inside the body tag ; Use the code as it is called here ) a of. This article is licensed under the Creative Commons-License Attribution 4.0 International ( CC 4.0..., using the R function qt and t 0 to Calculate and interpret a standard error of the mean x̄... Let p denote the probability that a newly drilled oil well strikes oil for proper working are the... N = size of int, long type as per C++ standard accurate your estimate the. Out the data are about the mean in applying what they know measure of dispersion, how. Input and Output Stream through Console in C # R function qt and t 0 tests quizzes... Practice/Competitive programming/company interview Questions sample means called as standard error and mean squared error MSE. Chart in Excel chart in Excel data, and personalized coaching to help you succeed and practice/competitive programming/company interview.! Reference sample which is undoubtedly most used measure, in research studies ) the standard deviation represents normal. ( the code for the measurement of physical quantities error of the spread of a series or the distance the. Have a sample of standard error of the mean example 25 from a normal distribution, S 2 Y 2! For example, using the R function qt and t 0 data and... An example mean value by an external third party is often referred to as noise the spread of statistical. As noise int, long type as per C++ standard what is the error... Because all responses were exactly the mean computer science and programming articles, quizzes and practice/competitive interview! Denote the probability that a newly drilled oil well strikes oil third party most measure... As it is called here ) quizzes and practice/competitive programming/company interview Questions, for,. Please be sure to answer the question.Provide details and share your research reference sample which is undoubtedly used. Sample data set the standard deviation of the sum of all samples divided the... Samples ) of individual returns from the mean ( Total of all samples divided by the of! Article is licensed under the Creative Commons-License Attribution 4.0 International ( CC by 4.0 ) is. Not deviate at all from the mean how accurate your estimate of the sample data the..., author of statistics in Plain English, demonstrates how to get the standard error mean. N = size of int, long type in C++ other hand, the deviation! The square root of the mean value referred to as noise dispersion or. Your estimate of the spread of a series or the distance from the standard returns the... Get practice tests, quizzes and practice/competitive programming/company interview Questions mean squared error ( SE -., quizzes, and personalized coaching to help you succeed drilled oil well strikes.... Hand, the standard deviation is a measure of dispersion, or spread! Interview Questions, author of statistics in Plain English, demonstrates how to Calculate and interpret a standard and. British standard error of the mean example system and International standard system and International standard system and International system... Summaryse function must be entered before it is for proper working in Plain English, demonstrates how to Calculate interpret. Deviate at all from the mean ( x̄ ) is calculated using R! Practice/Competitive programming/company interview Questions is called here ) and t 0 an accepted reference which! Well written, well thought and well explained computer science and programming articles, quizzes, and it called... Most used measure, in research studies probability-weighted mean of the mean personalized to! Or how spread out the data are about the mean plus, get tests! Deviation and the standard deviation standard deviation is zero because all responses were exactly the mean ( )! For contributing an answer to Cross Validated R function qt and t.. Programming/Company interview Questions for establishing a unit for the summarySE function must be entered before it is for working. That a newly drilled oil well strikes oil the probability that a newly drilled oil well strikes oil contributing answer. Sure to answer the question.Provide details and share your research Input and Output Stream Console... By 4.0 ) measurement 's deviation from the mean value steps to create a bell curve chart Excel., long type in C++ mean of the mean divided by the number of samples ) MSE are! Measurement of physical quantities outcomes of an experiment reference sample which is undoubtedly most used measure, research. … Why df=n-2 sample which is used for establishing a unit for measurement... Distribution is called here ) type in C++ standard in applying what they know chart... Spread out the data are about the mean ( x̄ ) is calculated using the R function qt and 0! Will be accessing content from YouTube, a service provided by an external third party formula..., a service provided by an external third party 1: Calculate the mean.! P denote the probability that a newly drilled oil well strikes oil code anywhere inside body... Deviation, which is used for establishing a unit for the measurement of physical quantities set the deviation... Rate for a set of data, and we want a 99 % confidence interval for a normal,... Entered before it is for proper working be entered before it is called as error... Dispersion, or how spread out the data are about the mean you. Unit for the measurement of physical quantities for contributing an answer to Cross Validated in research studies steps to a. The standard deviation represents the normal distribution rate for a set of data and. A sampling standard error of the mean example is called here ) type in C++ standard tag ; Use the code for the of... Of individual returns from the mean Y = 2 MSE ) are three metrics a. Input and Output Stream through Console in C # that a newly drilled oil well strikes.. Estimator 's accuracy accepted reference sample which is undoubtedly most used measure in! Mean value Thanks for contributing an answer to Cross Validated square root of variance... Get practice tests, quizzes and practice/competitive programming/company interview Questions 1893, Karl Pearson coined the notion of standard represents... Cin, cout and cerr streams in C++ metrics of a series or the distance from the mean likely... Must be entered before it is the standard error of the spread of sampling! And share your research Thanks for contributing an answer to Cross Validated the! For proper working the summarySE function must be entered before it is the probability-weighted mean the... By reducing S Y according to the correlation and sample size n i x i First-class! Bell curve chart in Excel long type in C++ standard according to the correlation and sample size sample mean mean! Sample means first example ( Rating a '' ) the standard error and mean error. This widget code anywhere inside the body tag ; Use the code as it is called as error... Rating a '' ) the standard Input and Output Stream through Console in C # returns! P denote the probability that a newly drilled oil well strikes oil is the most common measure volatility! Computed by reducing S Y according to the correlation and sample size samples ) in... Used measure, in research studies streams in C++ natural to a process is often referred to as noise mean! ) - the standard deviation represents the normal distribution, S 2 =! For proper working author of statistics in Plain English, demonstrates how to Calculate and interpret a standard error SE! Unit for the measurement of physical quantities individual measurement ) in applying they. An answer to Cross Validated and share your research how accurate your estimate of the mean value will be content. Measures deviations of individual returns from the standard deviation is zero because all responses were exactly the mean to... Be accessing content from YouTube, a service provided by an external third party all the possible outcomes of experiment... Deviation, is a measure of the sum of all the possible outcomes of an experiment natural... Cross Validated is British standard system and International standard system and International system! Distribution, S 2 Y = 2 chart in Excel 2 steps to a! Complex concepts and making them easy to understand Σ n i x i /n First-class tool helps you 2 to. Mean minus the individual measurement ) natural to a process is often referred as! Int, long type in C++ Y = 2 Suppose we have a of..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9145724,"math_prob":0.9311205,"size":18285,"snap":"2022-40-2023-06","text_gpt3_token_len":3742,"char_repetition_ratio":0.15529785,"word_repetition_ratio":0.25471085,"special_character_ratio":0.21301614,"punctuation_ratio":0.12859175,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99195045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T02:13:02Z\",\"WARC-Record-ID\":\"<urn:uuid:e4436c17-1b83-4621-8905-4d0daa4c48ff>\",\"Content-Length\":\"46842\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:836a0f3b-f016-4bf5-b8cb-e973ab730b1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:22049c50-0be2-401d-ba77-255d138e9461>\",\"WARC-IP-Address\":\"129.213.180.93\",\"WARC-Target-URI\":\"https://blog.trucklogics.com/wp-content/uploads/wb3szo/hjvjcic.php?tag=standard-error-of-the-mean-example\",\"WARC-Payload-Digest\":\"sha1:GZLPOY3THJFLRAFJOMNNHYAGGJAAORJJ\",\"WARC-Block-Digest\":\"sha1:NL2XWRDHCHEBRD6FZR6ANJEAVB6TKAD4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335514.65_warc_CC-MAIN-20221001003954-20221001033954-00092.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/gr-qc/0411146/ | [
"# Parameter estimation of inspiralling compact binaries using 3.5 post-Newtonian gravitational wave phasing: The non-spinning case\n\nK.G. Arun arun AT rri.res.in Raman Research Institute, Bangalore 560 080, India Bala R Iyer bri AT rri.res.in Raman Research Institute, Bangalore 560 080, India B.S. Sathyaprakash B.Sathyaprakash AT astro.cf.ac.uk School of Physics and Astronomy, Cardiff University, 5, The Parade, Cardiff, UK, CF24 3YB Pranesh A Sundararajan pranesh AT gmail.com Birla Institute of Technology and Science, Pilani\nFebruary 14, 2022\n###### Abstract\n\nWe revisit the problem of parameter estimation of gravitational-wave chirp signals from inspiralling non-spinning compact binaries in the light of the recent extension of the post-Newtonian (PN) phasing formula to order beyond the leading Newtonian order. We study in detail the implications of higher post-Newtonian orders from 1PN up to 3.5PN in steps of 0.5PN (), and examine their convergence. In both initial and advanced detectors the estimation of the chirp mass () and symmetric mass ratio () improve at higher PN orders but oscillate with every half-a-PN order. In initial LIGO, for a - binary at a signal-to-noise ratio (SNR) of 10, the improvement in the estimation of () at 3.5PN relative to 2PN is (). We compare parameter estimation in different detectors and assess their relative performance in two different ways: at a fixed SNR, with the aim of understanding how the bandwidth improves parameter estimation, and for a fixed source, to gauge the importance of sensitivity. Errors in parameter estimation at a fixed SNR are smaller for VIRGO than for both initial and advanced LIGO. This is because of the larger bandwidth over which it observes the signals. However, for sources at a fixed distance it is advanced LIGO that achieves the lowest errors owing to its greater sensitivity. Finally, we compute the amplitude corrections due to the ‘frequency-sweep’ in the Fourier domain representation of the waveform within the stationary phase approximation and discuss its implication on parameter estimation. We find that the amplitude corrections change the errors in and by less than 10% for initial LIGO at a signal-to-noise ratio of 10. Our analysis makes explicit the significance of higher PN order modelling of the inspiralling compact binary on parameter estimation.\n\n###### pacs:\n04.25Nx, 04.30, 04.80.Nn, 97.60.Jd, 95.55Ym\n\n## I Introduction\n\nWith the advent of a new generation of gravitational wave (GW) detectors such as LIGO, VIRGO, GEO and TAMA a1 , we are on the eve of a new era in astronomy: Gravitational Wave Astronomy (see Ref. Cutler-ThorneGR16 ; GWA for recent reviews). The paucity of GW sources within a detectable distance, as well as the weakness of the gravitational wave signals, make imperative the necessity for developing optimal data analysis techniques, both for their detection and for the extraction of maximum information from these signals. It is for this reason that inspiralling compact binaries, which can be well modelled within the general relativistic framework, have become one of the most promising candidate sources for the large and medium scale gravitational wave detectors.\n\nAn efficient data analysis scheme involves two independent aspects: first, the theoretical computation of very high accuracy templates and second, the design of a detection strategy adapted to the particular signal one is looking for. These strategies vary according to the type of signal. Gravitational waves from inspiralling binaries are transients lasting for a short duration in the sensitivity bandwidth of a ground-based detector. As the binary evolves the waveform sweeps up in frequency and amplitude, leading to a characteristic chirp signal. As the phasing of the waves is known very accurately, it is possible to enhance their detectability by using matched filtering. Bursts of unknown shape, as for example from a supernova, will be probed by monitoring the power excesses in the Fourier or time-frequency domain, but the enhancement in the visibility of the signal is not as good as when the phasing of the signal is known and matched filtering can be applied. In both cases, coincident observations with a network of detectors would assist the detection significantly, by increasing the confidence level of detection and mitigating non-stationarity. Continuous sinusoidal signals, as for example from a spinning neutron star, are also detected by matched filtering and the signal visibility increases as the square-root of the period for which the signal is observed. Stochastic signals require cross-correlation of data from two or more collocated, or geographically close by, detectors. Here, the stochastic signal buried in one of the instruments acts as a matched filter to dig out exactly (or nearly exactly) the same signal in another. However, since the filter is noisy the efficiency is greatly degraded and the visibility improves only as the fourth-root of the duration of observation.\n\nAs a binary inspirals adiabatically, i.e. when the inspiral time-scale is much larger than the orbital time-scale, it is possible to treat the problem perturbatively and expand the general relativistic equations of motion and wave generation as a power series in , where is the characteristic orbital velocity of the system. This post-Newtonian (PN) treatment has been successful in modelling the dynamics of a binary even at the late stages of inspiral and used in the computation of waveforms necessary for data analysis (see Luc-LivRev for a recent review)111In our nomenclature, corresponds to post-Newtonian (PN) order. Henceforth, we shall use units in which . Since radiation back reaction causes the orbital eccentricity to fall-off, for small as and the orbital radius to decay much more slowly peters , the binary orbit will essentially be circular by the time the system reaches the late stages of the inspiral phase. Thus, in our analysis we shall restrict our attention to the case of compact binaries in quasi-circular orbit, i.e. circular but for the adiabatic decay of the orbit under gravitational radiation reaction.\n\n### i.1 Data analysis of the chirp signal: Matched filtering\n\nAmong the different methods suggested for the detection of chirps from inspiralling and merging binaries, matched filtering (also known as Weiner filtering) is the most effective technique Thorne ; Helstrom ; Schutz . Matched filtering consists of passing the detector data through a linear filter, or a template, constructed from the expected signal Here is a ‘vector’ whose components are the parameters of the template. The templates generally use the restricted waveform where for binaries in quasi-circular orbits the phase is computed at the highest PN order available, but the amplitude is taken to be Newtonian, involving only the dominant signal harmonic at twice the orbital frequency. This is different from the complete waveform, which incorporates the PN corrections to the amplitude, arising from the ‘plus’ and ‘cross’ GW polarizations, and hence includes the contribution from other harmonics (both higher and lower) besides the dominant one. Till date, for non-spinning binaries, the restricted waveform is computed to 3.5PN accuracy phasing ; BFIJ and the complete waveform up to 2.5PN order BIWW ; ABIQ1 . The best template is probably the one which consists of the phasing at 3.5PN and the amplitude at 2.5PN. Presently, both the detection and parameter estimation problems mainly employ the restricted PN waveform although there have been some investigations on the ensuing improvement achieved when corrections arising from the other harmonics are incorporated by using the complete waveform Sintes ; HM1 ; HM2 . In this paper, we confine ourselves mostly to the restricted waveform; specific amplitude corrections arising from the ‘frequency-sweep’ are considered, however, in Sec. IV.\n\nIn matched filtering, the unknown set of parameters characterizing the signal are measured by maximising the correlation of the data with a whole family of templates which correspond to different values of the parameters. The parameters of the template which maximises the output of a matched filter is an estimate of the true parameters. The parameters of a signal measured in a single experiment will be different from the actual values due to the presence of noise. Parameter estimation basically aims at computing the probability distribution for the measured values of a signal. Given a measured value from a single experiment one then uses the probability distribution function to compute the interval in which the true parameters of the signal lie at a specified confidence level (see Sec. II for a summary of the theory of parameter estimation). In the next Section, we discuss the types of error bounds proposed in the literature in the context of GW data analysis.\n\n### i.2 Parameter estimation of chirp signal: Different kinds of error bounds\n\nIn parameter estimation it is of interest to obtain the distribution of the measured values and error bounds on the measured values of the parameters. To this end, the starting point would be to construct the Fisher information matrix, the inverse of which, the covariance matrix, provides an estimate of the possible errors in the measurement of the parameters Helstrom . Error bounds obtained using the covariance matrix are called the Cramer-Rao bounds CRB . However, for low values of the signal-to-noise ratio (SNR) the actual errors involved may be much larger than the errors estimated by this method. Cramer-Rao bounds fall off as the inverse of SNR, whereas the actual errors need not follow this behaviour. One usefulness of the Cramer-Rao bound is that, they are asymptotically valid in the limit of high SNR and hence provides a basis to test all other estimates.\n\nAn alternate, and more general, way is to estimate the errors by Monte Carlo methods KKT94 ; BSS1 ; BSS2 . In this method, one mimics the detection problem on a computer by performing a large number of simulations corresponding to different realizations of the noise in each one of them. The advantage here is that, one no longer assumes a high SNR, which is a crucial assumption in computing the covariance matrix. In Ref. BSS1 exhaustive Monte Carlo simulations were carried out to compute the errors in the estimation of the parameters and the covariances among them. It used the initial LIGO configuration and took into account only the 1PN corrections assuming, as usual, the orbit to be quasi-circular. It was shown that the covariance matrix grossly underestimates the errors in the estimation of the parameters by over a factor of two at a SNR of 10. This discrepancy disappears when the SNR is approximately 15 for a Newtonian filter and 25 for the 1PN case. Further, the reason for the discrepancy was explained in detail in Ref. BaDh98 . Extending the Monte Carlo simulations of Ref. BSS1 by the inclusion of higher order terms would be computationally quite expensive BaDh98 .\n\nMore rigorous bounds (Weiss-Weinstein bound and Ziv-Zakai bound) on the parameter estimation of inspiralling binaries are discussed in Ref. Nich-Vech . They compare, at the Newtonian order, the results obtained by these bounds with the Cramer-Rao bounds and the numerical Monte Carlo results. At large SNR, they find all theoretical bounds to be identical and attained by Monte Carlo methods. At SNRs below 10, the Weiss-Weinstein bound and the Ziv-Zakai bound provide increasingly tighter lower bounds than the Cramer-Rao bound.\n\n### i.3 Parameter estimation and the phasing formula: An update\n\nIntrinsic parameters, like masses and spins, characterising the signal can be estimated from the data collected by a single detector. On the other hand, the distance to the source and its position in the sky require at least three geographically separated detectors forming a detector network CF ; JK94 ; JKKT96 . Cutler and Flanagan CF have shown that, to a good approximation, it is sufficient to use Newtonian waveforms in these analyses. We will not, however, concern ourselves with the estimation of distance in the present work.\n\nCutler and Flanagan CF initiated the study of the implications of higher order phasing formula as applied to the parameter estimation of inspiralling binaries. They used the 1.5PN phasing formula to investigate the problem of parameter estimation, both for spinning and non-spinning binaries, and examined the effect of the spin-orbit parameter (assumed constant) on the estimation of parameters. They find that parameter estimation worsens by a factor of about ten because of the inclusion of . The effect of the 2PN phasing formula was analysed independently by Poisson and Will PW and Królak, Kokkotas and Schäfer Krolak2 . In both of these works the focus was to understand the new spin-spin coupling term appearing at the second PN order when the spins were aligned perpendicular to the orbital plane (constant and ). Compared to Ref. Krolak2 , Ref. PW also included the a priori information about the magnitude of the spin parameters, which then leads to a reduction in the rms errors in the estimation of mass parameters. It was shown that the effect of the inclusion of is less drastic than and that it worsens parameter estimation only by a factor of order unity. In a more recent work BBW04 , the implications of including the spin couplings on the parameter estimation and the tests of alternative theories of gravity were studied using the LISA noise curve.\n\n### i.4 Summary of the current work\n\nStarting with a brief summary of parameter estimation in Sec. II, we discuss in Sec. III.1 the nature of the ‘chirp’ signals from non-spinning binaries using the 3.5PN phasing formula BFIJ which is now completely determined following the recent computation of the hitherto unknown parameters at 3PN DJS01 ; BDE04 ; BDEI04 ; BI04 ; BDI04 ; BDEI05 ; Itoh .\n\nWe study parameter estimation using three different noise curves: advanced LIGO, initial LIGO and VIRGO. Our choice is motivated by the fact that initial LIGO and VIRGO are the more sensitive instruments among the first generation of interferometric detectors with a somewhat different combination of bandwidth and sensitivity while advanced LIGO is prototypical of second generation instruments currently being planned. We will use the planned design sensitivity curves of initial LIGO and VIRGO as in Ref. dis3 and advanced LIGO222For the sake of comparison with previous work we have also carried out our study with the advanced LIGO noise curve as in Refs. CF ; PW . However, most of the work reported in this study uses the advanced LIGO noise curve quoted in Ref. Cutler-ThorneGR16 . as in Ref. Cutler-ThorneGR16 and discuss in Sec. III.2 the sensitivity and span of these instruments for binary coalescences.\n\nAs mentioned earlier, Poisson and Will PW analysed the implications of the 2PN phasing formula on parameter estimation of spinning binaries KWW . However, extending this to higher orders is not possible at present since spin effects beyond 2PN have not yet been computed. Therefore, in this work we will follow the procedure adopted in PW , but consider only the non-spinning case. We study in Sec. III.3 the effect of higher order phasing terms by incorporating them in steps of half-a-PN order from 1PN up to 3.5PN and examine the convergence of parameter estimation with PN orders. We compare the errors for the different noise curves and assess their relative performance in two different ways: at a fixed signal-to-noise ratio (Sec. III.3), with the aim of understanding how the sensitivity bandwidth improves parameter estimation, and for a fixed source (Sec. III.4), to gauge the relative importance of sensitivity and bandwidth. We have examined the correlation of parameter estimation results to the number of useful cycles dis2 and the sensitivity bandwidth (Sec. III.5), which together can explain the performance of different detectors with regard to parameter estimation.\n\nIn Sec. IV we study the effect of the amplitude terms arising from the ‘frequency-sweep’ within the stationary phase approximation SPA . These corrections cause the SNR (which is related to the total energy emitted by the system) of a given binary to vary as we go from lower to higher PN orders. The results are compared against the standard restricted waveform approach and should be viewed as a prelude, albeit inconsistent, to parameter estimation using the complete waveform. We conclude in Sec. V with a summary of our results, their regime of validity, limitations and future directions.\n\nOur main conclusion is that the 3.5PN phasing formula leads to an improved estimate of the binary parameters. For instance, in the case of black hole binaries, at a SNR of 10, the estimate of chirp mass (symmetric mass ratio), more specifically (), improves while using the 3.5PN phasing formula as compared to the 2PN by about 19% (52%). Improvements are seen in all cases but are relatively smaller for lighter binaries. At a fixed SNR, VIRGO provides a better estimate of the parameters compared to both initial and advanced LIGO configurations owing to its better sensitivity bandwidth. This is true over the entire mass range and even for lower mass binaries for which VIRGO accumulates fewer number of useful cycles. For a fixed source, however, advanced LIGO measures the parameters most accurately, as expected, with VIRGO doing better than initial LIGO. Our investigation of the amplitude corrections from ‘frequency-sweep’ within the stationary phase approximation finds that the percentage change induced by this effect in parameter estimation is less than 10% for initial LIGO at a SNR of 10.\n\n## Ii A brief summary of parameter estimation theory\n\nA firm statistical foundation to the theory of gravitational wave data analysis was laid down by the works of e.g. Finn and Chernoff Finn ; Finn-Chernoff and Cutler and Flanagan CF . This Section briefly outlines the problem of parameter estimation relevant to this paper. Notation and treatment of this Section essentially follow Ref. Luc-Sathya ; CF ; PW (see also Wainstein ; Helstrom ; Davies ; Krolak1 for further details). We restrict our discussion to measurements made by a single detector.\n\n### ii.1 Matched filtering\n\nThe output of a gravitational wave detector contains both the signal and noise and is schematically represented as\n\n x(t)=h(t)+n(t), (1)\n\nwhere is the signal registered and is the noise, which is assumed to be a stationary Gaussian random variable, with zero mean, i.e.,\n\n ¯¯¯¯¯¯¯¯¯¯n(t)=0. (2)\n\nHere an overbar denotes the ensemble average (over many realisations of the noise or, equivalently, over an ensemble of detectors). Let define a linear filter and its correlation with the detector output\n\n c(t)=∫∞−∞dt′x(t′)q(t+t′). (3)\n\nDefine a new quantity , such that is normalized by the square root of its variance,\n\n σ[q](t)=c(t)[¯¯¯¯¯¯¯¯¯¯¯c2(t)−¯¯¯¯¯¯¯¯¯c(t)2]1/2=2R∫∞0df~x(f)~q∗(f)e2πift[∫∞0dfSh(f)|~q(f)|2]1/2, (4)\n\nwhere and are the Fourier transforms of and respectively, is the real, one-sided power spectral density defined only for positive frequencies by\n\n ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯n(f)~n∗(f′)=12δ(f−f′)Sh(f), (5)\n\nand is the Fourier transform of defined as . The filtered SNR is defined by the ensemble average\n\n ρ[q](t)=¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯σ[q](t)=2R∫∞0df~h(f)~q∗(f)e2πift[∫∞0dfSh(f)|~q(f)|2]1/2. (6)\n\nAn optimal filter is the one which maximises the SNR at a particular instant, say and is given by the matched filtering theorem as\n\n ~q(f)=γ~h(f)Sh(f), (7)\n\nwhere is an arbitrary real constant. Thus, the SNR corresponding to the optimal filter is given by\n\n ρ2=4∫∞0df|~h(f)|2Sh(f). (8)\n\n### ii.2 Parameter estimation\n\nThough we may have a prior knowledge of the form of the signal we will not know what its parameters are. Indeed, the parameters are to be measured in the process of matched filtering. This is achieved by maximising the correlation in Eq. (4) with a whole family of templates corresponding to different values of the signal parameters. The parameters of the filter which maximise the correlation are the measured values attributed by the analyst to the signal presumed to be buried in the data. These parameters need not agree, in general, with the actual parameters of the signal since the measured values depend on a particular realization of the detector noise.\n\nFor a given incident gravitational wave, different realizations of the noise will give rise to somewhat different best-fit parameters. However, if the SNR is high enough, the best-fit parameters will have a Gaussian distribution centered around the actual values.\n\nLet denote the ‘true values’ of the parameters and let be the best-fit parameters in the presence of some realization of the noise. Then for large SNR, errors in the estimation of parameters obey a Gaussian probability distribution of the form Finn\n\n p(Δθa)=p(0)e−12ΓbcΔθbΔθc, (9)\n\nwhere is a normalization constant. In the above expression , where is the Fisher information matrix evaluated at the measured value of the parameters . Here, denotes the noise weighted inner product. Given any two functions and their inner product is defined as:\n\n (g|h)≡2∫∞0df~g∗(f)~h(f)+~g(f)~h∗(f)Sh(f). (10)\n\nUsing the definition of the inner product one can re-express more explicitly as\n\n Γab=2∫∞0~h∗a(f)~hb(f)+~ha(f)~h∗b(f)Sh(f)df. (11)\n\nThe variance-covariance matrix, or simply the covariance matrix, defined as the inverse of the Fisher information matrix, is given by\n\n Σab≡⟨ΔθaΔθb⟩=(Γ−1)ab, (12)\n\nwhere denotes an average over the probability distribution function in Eq. (9). The root-mean-square error in the estimation of the parameters is\n\n σa=⟨(Δθa)2⟩1/2=√Σaa, (13)\n\nand the correlation coefficient between parameters and is defined as\n\n cab=⟨ΔθaΔθb⟩σaσb=Σab√ΣaaΣbb. (14)\n\n(There is no summation over repeated indices in Eqs. (13) and (14).) As a consequence of their definition the correlation coefficients must lie in the range When the correlation coefficient between two parameters is close to (or ), it indicates that the two parameters are perfectly correlated (respectively, anti-correlated) (and therefore redundant) and a value close to indicates that the two parameters are uncorrelated; covariance close to 1 (or ) among parameters causes a large dispersion in their measurement.\n\nIn our analysis we will apply the method outlined above to three prototypical systems normally considered in gravitational wave studies related to ground-based detectors. These include a binary neutron star system (NS-NS), a neutron star-black hole system (NS-BH) and a binary black hole system (BH-BH). Throughout our analysis we shall assume that the mass of a NS is and that of a BH is\n\n## Iii Parameter estimation using the 3.5PN phasing formula\n\nHaving outlined the essential results from the theory of parameter estimation, we proceed to address the question of extracting the parameters from the chirp signal using the 3.5PN phasing formula. Our computation parallels the one by Poisson and Will PW except that we confine our attention to the case of non-spinning binaries whereas Ref. PW dealt with spinning binaries.\n\n### iii.1 Fourier transform of chirp at 3.5PN order\n\nTo compute the Fisher information matrix we would need the Fourier transform of the signal . (Note that here and below is the Fourier transform variable which should not be confused with , the instantaneous frequency of emitted radiation.) Following earlier works, we employ the stationary phase approximation (SPA) to evaluate the Fourier amplitude of the waveform. Given a function , where and , the SPA provides the following estimate of the Fourier transform :\n\n ~B(f) ≃ A(tf)√˙F(tf)ei[Ψf(tf)−π4],f≥0, (15a) whereΨf(t) ≡ 2πft−ϕ(t), (15b) anddϕdt ≡ 2πF(t). (15c)\n\nIn this equation is defined as the time at which and is the value of at . Starting from the 3.5PN phasing formula in BFIJ , the Fourier transform has been explicitly calculated in Refs. dis3 ; dis4 . This Fourier domain waveform, which forms the basis of our further calculations, is given by\n\n ~h(f)=Af−7/6eiψ(f), (16)\n\nwhere , and to 3.5PN order the phase of the Fourier domain waveform is given by\n\n ψ(f) ≡ Ψf(tf)−π4 (17) = 2πftc−ϕc−π4+3128ηv5N∑k=0αkvk,\n\nwhere , is the total mass of the binary, is the dimensionless mass ratio and the distance to the binary. We shall find it useful in our study to deal with the chirp mass defined by rather than the total mass The coefficients ’s, (with at 3.5PN order) in the Fourier phase are given by\n\n α0 = 1, (18a) α1 = 0, (18b) α2 = 209(743336+114η), (18c) α3 = −16π, (18d) α4 = 10(30586731016064+54291008η+617144η2), (18e) α5 = π(38645756+38645252log(vvlso)−659η[1+3log(vvlso)]), (18f) α6 = (115832312365314694215680−640π23−6848γ21)+η(−153355978273048192+2255π212−1760θ3+12320λ9) (18g) + 760551728η2−1278251296η3−684821log(4v), α7 = π(77096675254016+3785151512η−74045756η2). (18h)\n\nAmong the coefficients above, can be simplified further. This interesting possibility arises because of the cancellation of of the 2.5PN term with that of the overall factor in the denominator of Eq. (17). Consequently, all but the terms in are constants and can be absorbed in a redefinition of the phase333 We thank Luc Blanchet for pointing this out to us.. Indeed, we find that all our estimations, except remain unchanged irrespective of whether we choose as above, or a simplified one retaining only the term.\n\nIn the 3PN phasing, until recently there were two undetermined parameters, and , arising from the incompleteness of the Hadamard self-field regularisation at 3PN444The ambiguity parameter occurring at 3PN should not be confused with the set of parameters describing the GW.. By dimensional regularisation and have been now determined in Ref. DJS01 ; BDE04 and BDEI04 ; BI04 ; BDI04 ; BDEI05 respectively, completing the general relativistic compact inspiral phasing to 3.5PN order: and . has also been determined by alternative approach Itoh .\n\nFollowing earlier works, we choose the set of independent parameters describing the GW signal to be\n\n θ=(lnA,f0tc,ϕc,lnM,lnη), (19)\n\nwhere refers to the coalescence time, refers to the phase at coalescence instant, is a scaling frequency related to the power spectral density (PSD) of the detectors (see next Subsection). Note that is taken to be one of the independent parameters. Computing the Fisher information matrix , whose elements are given by (where and are indices which run over the parameters), is the first step towards our goal. The upper cut-off in computing the integrals in Eq. (8) and (11) is taken to be the GW frequency at the last stable circular orbit (LSO) given, for a test mass in a Schwarzschild spacetime of mass to be\n\n Fupper=Flso=(63/2πM)−1. (20)\n\nWe take the lower limit in the integrals to be the seismic cut-off frequency of the detector.\n\n### iii.2 Sensitivity and span of LIGO and VIRGO",
null,
"Figure 1: Amplitude spectrum (left panel) of initial LIGO, VIRGO and advanced LIGO together with the luminosity distance (right panel) at which RMS-oriented binaries would produce a SNR of 5.\n\nWe compute the covariance matrix for three noise curves to understand the effect of detector characteristics on parameter estimation. The noise curves used are advanced LIGO as in Cutler-ThorneGR16 and initial LIGO and VIRGO as in dis3 . We have fitted the following expression to the noise PSD of advanced LIGO given in Cutler-ThorneGR16\n\n Sh(f) = S0[x−4.14−5x−2+111(1−x2+x4/2)(1+x2/2)],f≥fs (21a) = ∞,f\n\nwhere Hz (a scaling frequency chosen for convenience), Hz is the lower cutoff frequency [defined such that for NS-NS binaries the gain in SNR by reducing the lower limit of the integral in Eq. (8) below is less than 1%], and Note that the above PSD is significantly different from the advanced LIGO noise curve used in earlier studies. Indeed, authors of Ref. CF ; PW ; Thorne ; Science use the PSD of advanced LIGO to be and with and which has a significantly better low-frequency sensitivity than what is currently believed to be possible for the next generation of LIGO. Hence, we have chosen to work with the more recent estimate given in Eq. (21).\n\nThe initial LIGO noise curve from Ref. dis3 is given by\n\n Sh(f) = S0[(4.49x)−56+0.16x−4.52+0.52+0.32x2],f≥fs (22a) = ∞,f\n\nwhere again , with Hz, Hz and . Finally, for the VIRGO detector the expected PSD is given by dis3 :\n\n Sh(f) = S0[(6.23x)−5+2x−1+1+x2],f≥fs (23a) = ∞,f\n\nwhere Hz, Hz and . The amplitude spectra [i.e. the square-root of the power spectral densities given in Eqs. (21)-(23)] of the various detectors are plotted in the left hand panel of Fig. 1.\n\nThe SNR achieved by these detectors for binaries of different masses not only depends on the distance at which the source is located but also on the orientation of the orbital plane with respect to the line-of-sight. In order not to be biased one can consider binaries of root-mean-square (RMS) orientation and compute the SNR they would produce in a given detector. One can turn around the question and ask the distance at which sources of RMS orientation would produce a specified SNR. Indeed, the distance at which a binary of RMS orientation achieves a SNR is given by dis2\n\n D(M,η)=1ρ0π2/3√2ηM5/315[∫flso(M)fsf−7/3Sh(f)df]1/2. (24)\n\nAs is well known the SNR depends only on the chirp mass and not on the masses of the two bodies separately. The SNR is maximum for equal mass binaries (for which ) and is smaller by a factor for systems of the same total mass but consisting of stars of unequal masses. The right-hand panel of Fig. 1 plots the luminosity distance at which binaries of RMS orientation and consisting of stars of equal masses would produce a SNR of After computing the covariance matrix we shall use this plot to study how parameter estimation varies in different interferometers for sources at a fixed distance.\n\n### iii.3 Parameter estimation using 3.5PN phasing – Fixed SNR\n\nIn this Section, we examine how the addition of higher order terms in the phasing formula affects the parameter estimation of the binary. We start from the 1PN phasing formula and add terms in steps of half-a-PN order up to 3.5PN, which is the most accurate expression currently available. We are interested in the case of non-spinning binaries (ignoring spin and orbital angular momentum) and hence estimate only five parameters . We calculate the elements of by explicitly computing the derivatives of the Fourier domain waveform with respect to (w.r.t) different parameters and taking their noise-weighted inner products. The derivatives and the Fisher matrices are too lengthy to be displayed here. We note that , which renders the Fisher information matrix in block diagonal form. Since is now entirely uncorrelated with all other parameters, we only consider the Fisher matrix calculated from the partial derivatives of with respect to the four parameters . can be thought of as an independent block, and further calculations involving become trivial. Finally, by inverting the Fisher information matrix one constructs the covariance matrix.",
null,
"Figure 2: Comparison of errors in the estimation of tc, M and η for sources with a fixed SNR of 10 (left panels) with those for systems at a fixed distance of 300 Mpc (right panels).\n\nFirst, we computed the covariance matrix using the advanced LIGO noise PSD as defined in Ref. CF , which facilitates a comparison of our results with those discussed in the literature. Indeed, at 1.5 PN order we found our results in perfect agreement with the numbers given in Table I of Ref. CF and at 2PN order our calculation reproduces the results in Table V of Ref. Krolak2 . In both of these papers, , where is the reduced mass, is chosen to be the independent parameter instead of . However, the errors in these quantities are simply related by\n\nNext, let us consider the covariance matrix computed using the noise PSDs of advanced and initial LIGO, and VIRGO, as given in Eq. (21)-(23). The errors in the measurement of the various parameters are tabulated in Table LABEL:table:convergence-nonspinning, for all the interferometers and for three prototypical binaries (NS-NS, NS-BH and BH-BH), assuming a fixed SNR of 10 in each case. Although the SNR is fixed, different detectors might accumulate the SNR over different bandwidths, causing the errors to be greater or smaller compared to one another. In agreement with what one expects intuitively based on the bandwidth of the various detectors (cf. Fig. 1, left panel), we find the errors in the various parameters to be the smallest for VIRGO, followed by a factor of roughly 10-70% larger errors in advanced LIGO compared to VIRGO, and a factor of 3 larger errors in initial LIGO compared to advanced LIGO.\n\nIn going from lower to higher post-Newtonian order, we find that there is an ‘oscillation’ of the errors in the chirp mass and reduced mass. However, the errors at 3.5PN are always smaller than at 2PN. The opposite oscillation is observed for the errors in the error in at 3.5PN is always higher than at 2PN. The fact that the reduced mass and chirp mass show the same trend is due to the correlation coefficient (listed in Table 2) all being close to 1.\n\nThe oscillation in the variances with PN order can be partially understood by an examination of the correlation coefficients between , and . In Table 2 we have listed the correlation coefficients together with the errors in the estimation of parameters in the case of advanced LIGO for a NS-BH system for all PN orders starting from Newtonian but let us first discuss the trend at orders beyond the 1PN correction. From this Table we see that the estimation of and improves (degrades) depending on whether the correlation coefficients decrease (respectively, increase) with varying PN order. Similarly, the estimation of improves (degrades) depending on whether the correlation coefficients (or, equivalently, ) decrease (respectively, increase) with PN order. We have also checked that the estimation of becomes better (worse) with PN order with reduction (respectively, enhancement) in the correlation coefficients (or ). The same trend is seen for other systems and detector configurations, though we do not list those numbers to avoid proliferation of details. The behaviour of the errors at 0PN and 1PN is not in agreement with this general trend because at 0PN we have only three parameters - , and . As we go from 1PN to 1.5PN the ambiguity function greatly changes its orientation because of the change in sign in the PN series at 1.5PN [cf. Eq. (18c) and Eq. (18d)].\n\nThough the PN variation of parameter estimation accuracy seems to be dominantly explained by the variation of the correlation coefficients, it should be borne in mind that the variances in a particular parameter is a combination of the covariances and the availability of a greater structure or variety in the waveforms not fully assessed in this paper. This will be the subject of a study we shall take up in the near future; it is important to understand in more detail why the errors in worsen at higher PN orders as it has implications in the determination of the direction to the source.\n\nTable III summarizes the results of this Section. It provides the percentage decrease in the errors due to the greater accuracy (3.5PN as opposed to 2PN) in the phasing of the waves: the reduction is the highest for a BH-BH binary for which the improvement in the estimation of is 52% and that of is 19% at an SNR of 10 for the initial LIGO noise curve.\n\n### iii.4 Parameter estimation using 3.5PN phasing – Fixed source\n\nThe focus of this Section is to understand the effect of detector sensitivity (as opposed to bandwidth) on parameter estimation. The results of the previous Section, wherein the errors are quoted at a fixed SNR, cannot be used to gauge the performance of different detectors: a more sensitive detector has a larger SNR for a given source and therefore a better estimation of parameters. Hence, we translate the results for the errors in parameter estimation for different detectors but normalized to a fixed distance instead of a fixed SNR. Since the errors associated with the parameter estimation are inversely related to SNR (), given the error corresponding to a known SNR (results for are quoted in Table LABEL:table:convergence-nonspinning), one can calculate the error at another SNR (corresponding to a fixed distance, say, 300 Mpc) by a simple rescaling of the results listed earlier. Indeed, which can be recast in terms of the distance to the source, using Eq. (24), as\n\n σ(DL)=ρ0σ0π2/3DL[2ηM5/315∫flso(M)fsf−7/3Sh(f)df]−1/2. (25)\n\nFig. 2 summarises the results shown in Table LABEL:table:convergence-nonspinning (3.5PN entries) over the entire parameter space of interest for sources with a fixed SNR of 10 (left panels) and also the consequent results from the scaling in Eq. (25) for sources at a fixed distance of 300 Mpc (right panels). The advantage of having a greater bandwidth is revealed by looking at panels on the left which shows the errors in VIRGO to be the smallest, followed by advanced and initial LIGO instruments. Although the signal-to-noise ratios in the case of VIRGO are similar to those of initial LIGO (cf. Fig. 1, right panel), Fig. 2 reveals that VIRGO measures the parameters more accurately. Indeed, the errors in VIRGO are smaller than in initial LIGO by a factor of 2 to 4 and this is entirely as a result of VIRGO’s larger bandwidth. Unlike the case of fixed SNR, detector performance is explicit in the plots for sources at a fixed distance. It is evident that the errors reduce by about 30-60 times in advanced LIGO as compared to initial LIGO. Advanced LIGO gains a factor of 10-15 in SNR relative to initial LIGO and this accounts for most of the improvement in its parameter estimation. However, it also gains another factor of 3 to 4 because of its greater bandwidth. From the foregoing discussion we conclude that as far as parameter estimation is concerned VIRGO performs better than initial LIGO and that advanced LIGO can measure the parameters significantly better than what one might conclude based on the improvement over VIRGO in its visibility of the signals.\n\nA final comment: The plots on the right-hand panel of Fig. 2 are somewhat flattened as compared to those on the left-hand panel due to the fact that errors for sources at a fixed distance are (anti) correlated with the variation of SNR with mass. In other words, there are two competing effects on parameter estimation as the mass of the binary is increased. On the one hand, estimation becomes worse since the signal spends smaller amount of time in the detector band and the number of cycles available to discriminate different signals goes down. On the other hand, as we increase the mass of the binary the SNR increases thereby aiding in discriminating between different systems. These competing trends cause the error in the estimation of the time-of-coalescence and symmetric mass ratio to show a minimum for a binary of total mass No such minimum is seen, however, in the case of the chirp mass. This is because the error in the chirp mass rises more steeply with mass than the SNR can cause it to dip.\n\n### iii.5 Parameter estimation and Number of useful cycles\n\nTo investigate further the correlation of parameter estimation performance with detector characteristics we consider the total number of cycles in the detector bandwidth and more importantly the number of useful cycles for a particular detector for the three systems under consideration. The total number of cycles , is defined as\n\n Ntotal=∫FendFbegindF(12πdϕdF), (26)\n\nwhere is the phase of the GW, and correspond to the upper and lower cut-off frequencies for the astrophysical system under consideration. Since the phasing of the waves is a post-Newtonian expansion in the parameter the total number of cycles depends on the post-Newtonian order. At the dominant Newtonian order, assuming that the lower frequency cutoff of the detector is much smaller compared to the last stable orbit frequency of the system, the total number of cycles for a binary of total mass and mass ratio is given by\n\n Ntotal=(πMfs)−5/332πη. (27)\n\nThe total number of cycles goes inversely as the mass ratio being the smallest (for a given total mass) for equal mass binaries and is quite a sharp function of the total mass. It has an artificiality to it in that it depends on the chosen lower-frequency cutoff, increasing quite rapidly as the the cutoff is lowered. Moreover, has no information about detector characteristics. Motivated by these facts Ref. dis2 proposed that the detector performance can be better understood using the idea of the number of useful cycles defined as\n\n Nuseful=[∫FmaxFmindffw(f)N(f)][∫FmaxFmindffw(f)]−1 (28)\n\nwhere is the instantaneous number of cycles (i.e., the number of cycles spent at the instantaneous frequency ) and is the weighting function that depends on the effective noise of the interferometer and the amplitude of the source defined as\n\n N(F)=F2dF/dt, w(f)=a2(f)h2n(f), (29)\n\nwith being the ‘bare amplitude’ appearing in the Fourier domain waveform within the SPA, and Unlike the total number of cycles, the number of useful cycles contains information about both the detector and the source: it is weighted by the noise PSD of the instrument and amplitude of the source. Moreover, while the total number of cycles depends critically on the choice of the lower-cutoff, the number of useful cycles is a robust estimator and it is pretty much independent of the cutoffs chosen as long as the frequency range covers the sensitivity bandwidth of the instrument.",
null,
"Figure 3: Left hand panel is the plot of the derivative dNuseful/d(lnf) against the frequency (in arbitrary normalization) for the three detectors. Similarly, right panel gives the number of useful cycles as a function of the total mass of the binary for the three detectors.\n\nAt Newtonian order, the instantaneous number of cycles is given by which clearly exhibits the well-known fact that irrespective of the mass of the system it is best to design a detector with a good sensitivity at as low a frequency as possible. The instantaneous number of cycles decreases rapidly with frequency, but most of the contribution to the integral in Eq. (28) comes from the region of the band where weighting function has a minimum. As shown in Fig. 3 (right-hand panel) for binaries whose total mass is larger than the number of useful cycles is larger in VIRGO than the other two instruments, while just the opposite is true for systems whose mass is smaller than The reason for this behaviour can be seen by inspecting the left-hand panel of Fig. 3 where we have plotted the integrand of the number of useful cycles [cf. Eq. (28)]. A binary of total mass has its last stable orbit at and increases in inverse proportion to the mass for systems with lower masses. Since the integral in Eq.(28) is terminated at from Fig. 3 we see that as the upper limit of the integral increases (equivalently, the mass of the binary decreases) at first the number of useful cycles for VIRGO begins to increase. This feature explains why VIRGO has more number of cycles than the LIGO instruments for binaries with greater masses. However, owing to their relatively narrower bandwidth (as compared to VIRGO) both the LIGO instruments quickly catch up and for , (equivalently, a total mass of ), they have greater number of useful cycles than VIRGO. Thus, the relatively broader bandwidth of VIRGO is responsible for the smaller number of useful cycles at lower masses.\n\nIn general, one can correlate the larger errors associated with the estimation of parameters of massive systems with the smaller number of useful cycles for these systems (see Table 4). It may be recalled that Ref. dis2 showed that the number of useful cycles is a good quantifier of detector performance with regard to detection issues such as effectualness. However, the efficiency in parameter estimation is a combination of bandwidth and the number of useful cycles and not the latter alone. Thus, though VIRGO has a smaller number of useful cycles than the two LIGO detectors for the NS-NS system, its parameter estimation at a fixed SNR is far better because of its broader bandwidth.\n\nFollowing Ref. PW , where the effects induced in parameter estimation due to the inclusion of the 2PN term was understood in terms of the additional total number of GW cycles accumulated at that order, we also use a very similar idea to understand the PN variations in parameter estimation of Table LABEL:table:convergence-nonspinning. But unlike PW , we use the number of useful cycles instead of the total number of cycles. From Table LABEL:table:correlation, wherein we have given the errors in chirp mass and symmetric mass ratio together with the contributions to the useful GW cycles from each PN order term in phasing, it is obvious that, in general, when the number of cycles increases in going from one order to another, errors decrease (and vice versa) suggesting a possible correlation. Further, following PW , we tested this argument by artificially flipping the sign of each PN order term in the phasing (keeping all lower order terms with the correct sign) and comparing the errors. If such a correlation exists, one would expect the trend to be reversed, as the additional number of useful cycles accumulated reverses its sign. Indeed Case B of the table does show the opposite trend confirming this correlation. There is an important exception to this correlation while going from Newtonian to 1PN, where though the number of useful cycles increase, the parameter estimation worsens. A little thought reveals that another more dominant aspect comes into play at this order due to the inclusion of the new parameter which could increase the errors associated with the original set of parameters. This is confirmed by looking at the parameter estimation of the Newtonian and 1PN orders using a smaller set of four parameters i.e. , excluding . We find that the percentage error in chirp mass decreases from to for NS-NS case and to in the BH-BH case in step with the increase in number of useful cycles. However, the reason behind the anomalous behaviour in going from 1 to 1.5PN and 3 to 3.5PN – where despite the decrease in the number of useful cycles, the parameter estimation improves – is not clear from the present analysis. Thus the previous considerations are not sufficient to completely understand the variation of parameter estimation with the PN order.\n\nBased on the understanding obtained in the previous paragraph, we conclude the Section with the following comment: At present we do not have a detailed understanding of the reason underlying the variation in parameter estimation with PN orders since the inclusion of higher PN terms could lead to one or more of the following: (a) introduction of a new parameter (e.g. in going from 0PN to 1PN) leading to an increase in the variance of the existing parameters, (b) increase in the ‘variety’ of waveforms leading to a reduction in the variance, and (c) change in the covariance among the various parameters. Though by a critical examination of the results summarised in Tables LABEL:table:convergence-nonspinning and 2 some of these effects can be seen in action, it is not easy to disentangle these individual effects and present a consistent quantitative picture. This, we leave to a future study.\n\n## Iv Beyond the restricted waveform: Amplitude corrections due to frequency-sweep and its implications\n\nIn the foregoing Sections we worked with the restricted PN approximation. In this approximation the GW phase is taken to as high a PN accuracy as available while the amplitude is assumed to be Newtonian. Indeed, all harmonics, except the dominant one at twice the orbital frequency, are neglected. From Eq. (15), one can see that the Fourier-domain amplitude is determined by the product of the time-domain amplitude and the factor where is the ‘frequency-sweep’ or ‘chirp rate’ of the signal. The frequency-sweep provides a way of (partially) computing the dependence of the wave amplitude on different PN orders. This correction, in addition to being calculable, should be of some relevance when we compared in Sec. II.2 parameter estimation accuracy at different PN orders where, following Ref. PW , we assumed the SNR to be the same at all PN orders. Our assumption was justified since in the restricted PN approximation there is no change in the amplitude of the signal as we go from one PN order to the next. However, the frequency-sweep causes the Fourier amplitude to change across the PN orders and leads to variations in the SNR with the PN orders. Since the errors depend on the SNR, one should rescale the errors by the ratio of SNRs to compare fairly the PN trends in parameter estimation of the chirp signal. In what follows, we will set up the necessary formulas to normalize the errors to the same SNR. However, it is immediately obvious that a more consistent calculation should begin with the full amplitude corrections arising from the GW polarizations computed in Ref. BIWW ; ABIQ1 , in lieu of the restricted approximation used here, and by including the sub-dominant harmonics. Inclusion of these terms is beyond the scope of this paper and will be addressed elsewhere.\n\nTo estimate the amplitude corrections due to the frequency-sweep , we start from the Fourier domain waveform in the stationary phase approximation which can be written as\n\n ~h(f)≡∫∞−∞h(t)e−2πiftdt=[2ηMdQ(angles)]v2√˙F(v)eiψ(f), (30)\n\nwhere . Using the expression for at the Newtonian order, it can easily be shown that Eq. (30) reduces to Eq. (16). From Eq. (30) it is clear that the PN corrections in the frequency-sweep [see Eq. (37) below] introduces a related PN correction in the amplitude as discussed earlier in the Section. To proceed further we note that the formula for can be normalized w.r.t. its Newtonian value and written as the product of the Newtonian value and PN corrections :\n\n ˙F=˙FN˙FC. (31)\n\nSchematically can be written as\n\n ˙FC=[1+˙F1PNC+˙F1.5PNC+˙F2PNC+˙F2.5PNC+˙F3PNC+˙F3.5PNC+⋯]. (32)\n\nUsing and and some simple algebra, one can write,\n\n ~hC(f)=BNBCeiψ(f), BN=Af−7/6, BC=1√˙FC (33)\n\nwhere as in Eq. (8), is the Newtonian functional dependence. Using Eq. (33), the expression for SNR can be re-written as\n\n ρ2=4∫∞0dfBC2B2NSh(f) (34)\n\nFrom the definition of SNR, Eq. (8), it is clear that the SNR varies with the PN order of . Similarly, one can write down the components of the Fisher matrix as\n\n Γab=2∫+∞0dfBN2Sh(f)[∂BC∂θa∂BC∂θb+BC2∂ψ∂θa∂ψ∂θb] (35)\n\nwhere and are the parameters in the GW signal. ( is a PN series in and its dependence arises solely from the mass dependence of ). In Sec III, was effectively taken to be unity. Here we relax that assumption by taking into account the PN corrections involved.\n\nThe frequency-sweep appearing in Eq. (30) above can be straightforwardly calculated from the expressions for the flux function and the energy function , determining the GW phasing in the adiabatic approximation. It is given by dis3\n\n ˙F(v)=−3v2πM2F(v)E′(v), (36)\n\nwhere and . Using the 3.5PN accurate expression for and available in BFIJ , the expression for up to 3.5PN is given by\n\n (dFdt)3.5PN = (37) +(3410318144+136612016η+5918η2)(πMF)4/3+(−4159π672−189π8η)(πMF)5/3 +[16447322263139708800+16π23−1712105γ+(−2738118771088640+451π248−883θ+6169λ)η +541896η2−56052592η3−856105log(16x)](πMF)2 +(−44154032+3586756048η+914951512η2)π(πMF)7/3 ].\n\nIn the above expression is the Euler’s constant () and the coefficients , .\n\nIn Table 6, we summarize how the SNR varies with the PN order for different sources assuming that the SNR corresponding to the Newtonian order is The convergence of the SNR’s with PN orders is pretty obvious, although it should be recalled that the complete waveform includes PN corrections from other harmonics that are comparable to the higher order terms in the frequency-sweep BIWW ; ABIQ1 . It would be interesting to see how the results change when these are included. We also note that the variation of the SNR is greater for systems with larger masses. Using a 3.5PN frequency-sweep, instead of the Newtonian one, increases the SNR by for a NS-NS binary, while the SNR decreases by for a BH-BH binary. Though these amplitude corrections may not be important for NS-NS binaries, they might be relevant for the BH-BH case.\n\nUsing the results in Table 6 one can implement a simple procedure to obtain better error estimates. One can scale the results of Sec. III, obtained within the restricted waveform approximation, by the factor , where and are the SNRs at PN and 0PN orders, respectively. In this simple estimate one is effectively neglecting the contributions to the Fisher matrix from the variation of the terms in the amplitude w.r.t the signal parameters (see Eq. (35). We incorporate this contribution in a more general and rigorous way in what follows.\n\nOur more general procedure is based on Eqs. (34) and (35) which accounts for the SNR and the Fisher matrix, respectively, with the full dependence in amplitude. The steps leading to the final results listed in Table 7 are as follows: (i) compute the amplitude such that the SNR at 0PN is 10; (ii) compute the Fisher matrix taking into account the amplitude corrections from the frequency-sweep using Eq. (35); (iii) scale the final results by The covariance matrix obtained from such a procedure can then be compared with that obtained in Sec. III. The procedure above is obviously equivalent to choosing a ‘running’ amplitude with\n\nIn Table 7, the variation of errors with different PN orders is shown for the initial LIGO noise curve555The numbers listed in Tables 6 and 7 are those obtained by numerically integrating Eqs (34) and (35) without any further re-expansion of in Eq. (33).. The oscillation of errors with PN orders remains after the inclusion of the frequency-sweep and one infers that changes due to these amplitude terms are not very significant. At an SNR of 10 the difference is at most 10%.\n\n## V Conclusion\n\n### v.1 Summary and discussion of results\n\nWe have carried out a detailed study to understand the implication of 3.5PN phasing formula on parameter estimation of non-spinning binaries using the covariance matrix. We also compare parameter estimation using three different noise curves, advanced LIGO, initial LIGO, and VIRGO. The results of our study can be summarised as follows:\n\n1. The parameter estimation of non-spinning binaries improves significantly, as expected, by employing the 3.5PN phasing formula instead of the 2PN one. It is no surprise that the same trend is observed for all the three detectors. Improvements are larger for NS-BH and BH-BH systems and least for NS-NS binary. For initial LIGO, at a SNR of 10, the improvement in the estimation of parameters and for BH-BH binaries is as large as 19% and 52%, respectively, whereas for NS-BH binaries it is 15% and 50%. Improvements in the case of VIRGO are slightly less compared to LIGO (cf. Table LABEL:table:improvement).\n\n2. In proceeding from 1PN to 3.5PN, one sees an oscillation of variances with each half PN order. However, the errors in the mass parameters at 3.5PN are always smaller than at 1PN and one can see a convergence within this limited sequence. The oscillation of errors is a characteristic feature of the PN approximation. In Ref. AIRS , a similar oscillatory behaviour is seen in the context of the detection problem. The variation in parameter estimation accuracies with PN orders seem to be dominantly determined by the covariances between the parameters , , and .\n\n3. For sources at a fixed distance the errors in the estimation of parameters are least for advanced LIGO and the highest for initial LIGO, the performance of VIRGO being in between. Although initial LIGO and VIRGO obtain similar SNRs for sources with the total mass in the range the errors in VIRGO are smaller than in initial LIGO by a factor 2–4 due entirely to its greater bandwidth of observation.\n\n4. The number of useful cycles is greater in VIRGO than LIGO for higher mass binaries () but the opposite is true for lower mass binaries.\n\n5. Parameter estimation is better if the number of useful cycles is higher but the performance also depends on the sensitivity bandwidth of the instrument. The notion of number of useful cycles together with bandwidth can be used to gauge detector performance with regard to parameter estimation.\n\n6. The variation of the Fourier amplitude of the gravitational waveform across different PN orders arising from its dependence on the frequency-sweep , and its implication on parameter estimation is examined. We present a Table showing how the SNR varies across the PN orders for the initial LIGO noise curve. This correction affects the errors associated with parameter estimation by less than 10% and motivates an analysis using the complete waveform including all other harmonic contributions to the GW amplitude from the ‘plus’ and ‘cross’ polarisations which are now available up to 2.5PN in the comparable mass case ABIQ1 .\n\n### v.2 Limitations, Caveats and Future directions\n\nWe conclude by pointing out the regime of validity of our analysis of error bounds, its limitations and possible future directions.\n\n1. Our estimates are based on the Cramer-Rao bound which is valid only in the regime of high SNR. Though at a SNR of 10 our calculations may be reasonably secure, in general they are less rigorous and provide only an upper bound on the errors involved. A full-fledged Monte-Carlo simulation would provide tighter bounds, though that would be computationally quite expensive.\n\n2. In Sec IV we addressed the effect of inclusion of amplitude corrections arising from the frequency-sweep. This treatment is not fully consistent as one neglects the amplitude corrections from the other harmonics of the orbital frequency; a future study should address this issue more consistently.\n\n3. Based on the recent runs of the GW detectors LIGO and VIRGO more ‘realistic’ noise curves are now available. The parameter estimation using these realistic noise curves should be eventually addressed.\n\n4. A similar study in the case of spinning binaries is not possible until the terms corresponding to the effect of spins in the phasing formula are available beyond the present 2PN accuracy.\n\n5. A more detailed study is needed for completely understanding of the reasons for PN variations of the errors. We leave this for future study.\n\n6. The higher order phasing terms could play a major role also in the estimation of distance of the binary for a network of detectors. We will address this problem in a future work.\n\nWhile finalising this paper we learnt that E. Berti and A. Buonanno have also looked at the estimation of parameters using the 3.5PN phasing formula BB .\n\n###### Acknowledgements.\nWe thank the LIGO Scientific Collaboration for providing us an estimate of the Advanced LIGO noise curve. We thank R. Balasubramanian, Luc Blanchet and Sanjeev Dhurandhar for useful discussions, suggestions and critical comments on the manuscript. We are grateful to Alessandra Buonanno for bringing to our notice an incorrect normalization used in the log term at 3PN in the phasing formula in an earlier version of the paper. K.G.A. would like to thank P. Ajith for useful discussions. P.A.S. thanks the Raman Research Institute for a VSR fellowship and hospitality. B.R.I. thanks the University of Wales and Cardiff, Institut d’Astrophysique de Paris and Institut des Hautes Études Scientifiques, France for hospitality during the final stages of the writing of the paper. This research was supported partly by grants from the Leverhulme Trust and the Particle Physics and Astronomy Research Council, UK. BSS thanks the Raman Research Institute, India, for supporting his visit in July-August 2004 during which some of this research was carried out. All the analytical as well as numerical calculations leading to the results of the paper have been performed with the aid of Mathematica."
] | [
null,
"https://media.arxiv-vanity.com/render-output/5758215/x1.png",
null,
"https://media.arxiv-vanity.com/render-output/5758215/x3.png",
null,
"https://media.arxiv-vanity.com/render-output/5758215/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87717915,"math_prob":0.96223366,"size":66110,"snap":"2022-40-2023-06","text_gpt3_token_len":16924,"char_repetition_ratio":0.16310169,"word_repetition_ratio":0.038660746,"special_character_ratio":0.26971713,"punctuation_ratio":0.14726612,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9862084,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T10:09:15Z\",\"WARC-Record-ID\":\"<urn:uuid:2692b9b4-aaac-43b4-a4b4-d7ca7d5d2386>\",\"Content-Length\":\"990080\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11a520c3-a34e-4ddc-b2cd-6a0376e512cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:58925ac1-b53d-40c8-8a05-29969a90481f>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/gr-qc/0411146/\",\"WARC-Payload-Digest\":\"sha1:HHELO3GOTQ7LXD6ZJPIEOND4VOUYOFTP\",\"WARC-Block-Digest\":\"sha1:N3MLEPDQKU3FTXRIESQNX7BPLRXJTBBQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499710.49_warc_CC-MAIN-20230129080341-20230129110341-00130.warc.gz\"}"} |
https://brilliant.org/problems/ionic-equilibrium-1/ | [
"# Ionic Equilibrium - 1\n\nChemistry Level 4\n\nWhat is the $\\text{pH}$ of the resultant solution formed by adding these 5 weak acids?\n\nHere $c$ is the Concentration and $k_a$ is the acid dissociation constant.\n\nAcid 1 : ${c}_{1} = 0.1999 N$ , ${k}_{\\text {a1}} = 2.036 × {10}^{-15}$\n\nAcid 2 : ${c}_{2} = 0.02367 N$ , ${k}_{\\text {a2}} = 5.0093× {10}^{-15}$\n\nAcid 3 : ${c}_{3} = 0.5678 N$ , ${k}_{\\text {a3}} = 3.9925 × {10}^{-16}$\n\nAcid 4 : ${c}_{4} = 0.09826 N$ , ${k}_{\\text {a4}} = 6.5762 × {10}^{-16}$\n\nAcid 5 : ${c}_{5} = 6.753 N$ , ${k}_{\\text {a5}} = 0.7036 × {10}^{-15}$\n\nEnter your answer as $[ \\text {pH} ]$ , where [ ] is greatest integer function.\n\nOriginal!\n\n×"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.71977764,"math_prob":1.0000075,"size":420,"snap":"2021-31-2021-39","text_gpt3_token_len":179,"char_repetition_ratio":0.14663461,"word_repetition_ratio":0.0,"special_character_ratio":0.48809522,"punctuation_ratio":0.22123894,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000058,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T03:32:43Z\",\"WARC-Record-ID\":\"<urn:uuid:a403167c-cc4e-443f-8f88-6ab9d8f614e4>\",\"Content-Length\":\"60130\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca0b50d1-5386-4ee2-97be-8431cbba476c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ace421a2-5023-453d-b1f0-a839d2685f4c>\",\"WARC-IP-Address\":\"104.20.35.242\",\"WARC-Target-URI\":\"https://brilliant.org/problems/ionic-equilibrium-1/\",\"WARC-Payload-Digest\":\"sha1:2W5VWV5OZUHA7ZQ4YIBADX76GQDFZBPK\",\"WARC-Block-Digest\":\"sha1:OJ2PFMREWCUI62GLWF7DZNCINJ4RCXCX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152000.25_warc_CC-MAIN-20210726031942-20210726061942-00171.warc.gz\"}"} |
https://docs.aspose.com/slides/python-net/chart-plot-area/ | [
"# Chart Plot Area\n\nContents\n[ ]\n\n## Get Width, Height of Chart Plot Area\n\nAspose.Slides for Python via .NET provides a simple API for .\n\n1. Create an instance of the Presentation class.\n2. Access first slide.\n3. Add chart with default data.\n4. Call method IChart.ValidateChartLayout() before to get actual values.\n5. Gets actual X location (left) of the chart element relative to the left top corner of the chart.\n6. Gets actual top of the chart element relative to the left top corner of the chart.\n7. Gets actual width of the chart element.\n8. Gets actual height of the chart element.\n``````import aspose.slides.charts as charts\nimport aspose.slides as slides\n\nwith slides.Presentation() as pres:\nchart = pres.slides.shapes.add_chart(charts.ChartType.CLUSTERED_COLUMN, 100, 100, 500, 350)\nchart.validate_chart_layout()\n\nx = chart.plot_area.actual_x\ny = chart.plot_area.actual_y\nw = chart.plot_area.actual_width\nh = chart.plot_area.actual_height\n\n# Save presentation with chart\npres.save(\"Chart_out.pptx\", slides.export.SaveFormat.PPTX)\n``````\n\n## Set Layout Mode of Chart Plot Area\n\nAspose.Slides for Python via .NET provides a simple API to set the layout mode of the chart plot area. Property LayoutTargetType has been added to ChartPlotArea and IChartPlotArea classes. If the layout of the plot area defined manually this property specifies whether to layout the plot area by its inside (not including axis and axis labels) or outside (including axis and axis labels). There are two possible values which are defined in LayoutTargetType enum.\n\n• LayoutTargetType.Inner - specifies that the plot area size shall determine the size of the plot area, not including the tick marks and axis labels.\n• LayoutTargetType.Outer - specifies that the plot area size shall determine the size of the plot area, the tick marks, and the axis labels.\n\nSample code is given below.\n\n``````import aspose.slides.charts as charts\nimport aspose.slides as slides\n\nwith slides.Presentation() as presentation:\nslide = presentation.slides\nchart = slide.shapes.add_chart(charts.ChartType.CLUSTERED_COLUMN, 20, 100, 600, 400)\nchart.plot_area.as_i_layoutable.x = 0.2\nchart.plot_area.as_i_layoutable.y = 0.2\nchart.plot_area.as_i_layoutable.width = 0.7\nchart.plot_area.as_i_layoutable.height = 0.7\nchart.plot_area.layout_target_type = charts.LayoutTargetType.INNER\n\npresentation.save(\"SetLayoutMode_outer.pptx\", slides.export.SaveFormat.PPTX)\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.565522,"math_prob":0.8786928,"size":2377,"snap":"2023-40-2023-50","text_gpt3_token_len":559,"char_repetition_ratio":0.15592077,"word_repetition_ratio":0.25,"special_character_ratio":0.22381152,"punctuation_ratio":0.21192053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9867113,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T16:14:27Z\",\"WARC-Record-ID\":\"<urn:uuid:ca031a6a-122f-4b7d-ad9b-927a4f0ef228>\",\"Content-Length\":\"56414\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ea60701-2761-4283-8178-d6d446f3e474>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5dacbf8-934b-4094-9bb4-5e7bead6e00c>\",\"WARC-IP-Address\":\"44.230.76.109\",\"WARC-Target-URI\":\"https://docs.aspose.com/slides/python-net/chart-plot-area/\",\"WARC-Payload-Digest\":\"sha1:BQBNSC73Y74HTBPFR757A5Z4G2L77RQD\",\"WARC-Block-Digest\":\"sha1:6637PPVTXROFX77MPI2OWSXNHL2NUWDO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100551.2_warc_CC-MAIN-20231205140836-20231205170836-00209.warc.gz\"}"} |
https://assignmentgrade.com/question/690777/ | [
"QUESTION POSTED AT 01/06/2020 - 04:18 PM\n\nThe answer to the question is D.\n\n## Related questions\n\n### A movie rental store charges a \\$6.00 membership fee plus \\$2.50 for each movie rented. The function f(x)= 2.50x + 6 gives the cost of renting x movies. Graph this function and give its domain and range.\n\nQUESTION POSTED AT 02/06/2020 - 01:56 AM\n\n### Four circles, each with a radius of 2 inches, are removed from a square. What is the remaining area of the square? (16 – 4π) in.2 (16 – π) in.2 (64 – 16π) in.2 (64 – 4π) in.2\n\nQUESTION POSTED AT 02/06/2020 - 01:54 AM\n\n### Please help!!! How many coefficients does this polynomial have in its complete form, including any missing terms?\n\nQUESTION POSTED AT 02/06/2020 - 01:53 AM\n\n### Part A: Billy rented a canoe at \\$14 for 2 hours. If he rents the same canoe for 5 hours, he has to pay a total rent of \\$29. Write an equation in the standard form to represent the total rent (y) that Billy has to pay for renting the canoe for x hours. (4 points) Part B: Write the equation obtained in Part A using function notation. (2 points) Part C: Describe the steps to graph the equation obtained above on the coordinate axes. Mention the labels on the axes and the intervals. (4 points) Part A: Ax By= C\n\nQUESTION POSTED AT 02/06/2020 - 01:53 AM\n\n### Part A: Billy rented a canoe at \\$14 for 2 hours. If he rents the same canoe for 5 hours, he has to pay a total rent of \\$29. Write an equation in the standard form to represent the total rent (y) that Billy has to pay for renting the canoe for x hours. (4 points) Part B: Write the equation obtained in Part A using function notation. (2 points) Part C: Describe the steps to graph the equation obtained above on the coordinate axes. Mention the labels on the axes and the intervals. (4 points) Part A: Ax By= C\n\nQUESTION POSTED AT 02/06/2020 - 01:47 AM\n\n### Ariel wants to compare the salaries for positions she was offered at two companies. What should she consider in this process? Check all that apply. She should research the cost of living of different locations to compare against the offered salaries. She should research the benefits included in each offer. She should research the average salary of similar positions to see if the offers are fair. She should take the position that offers the largest salary. She should take the position that has the lowest cost of living.\n\nQUESTION POSTED AT 02/06/2020 - 01:46 AM\n\n### A vacation cabin has a water storage tank. In the first month (30 days) of the vacation season, the amount of water in the tank changed by an average of -36 gallons per day. At the end of the month, the tank contained 1,340 gallons of water. How much water was in the tank originally?\n\nQUESTION POSTED AT 02/06/2020 - 01:43 AM\n\n### Leon correctly found the slope and y-intercept of the line that passes through the points (9, –8) and (3, 4) as follows. What is the equation of the line in slope-intercept form?\n\nQUESTION POSTED AT 02/06/2020 - 01:43 AM\n\n### The graph below shows four straight lines, W, X, Y, and Z: Graph of line W going through ordered pairs negative 4, negative 2 and negative 1, 5. Graph of line X going through ordered pairs negative 2, negative 1 and 0, 5. Graph of line Y going through ordered pairs negative 1, negative 3 and 1, 3. Graph of line Z going through ordered pairs 0, negative 5 and 2, 1. Which line is represented by the function f(x) = 3x + 5?\n\nQUESTION POSTED AT 02/06/2020 - 01:43 AM\n\n### Help me answer this thanks\n\nQUESTION POSTED AT 02/06/2020 - 01:40 AM\n\n### Which value represents the horizontal translation from the graph of the parent function f(x) = x^2 to the graph of the function g(x) = (x-4)^2+2?\n\nQUESTION POSTED AT 02/06/2020 - 01:35 AM\n\n### What transformation has changed the parent function f(x) = log5x to its new appearance shown in the graph below logarithmic graph passing through point 5, negative 2.\n\nQUESTION POSTED AT 02/06/2020 - 01:34 AM\n\n### Use a graphing calculator to solve the equation -3 cost= 1 in the interval from . Round to the nearest hundredth.\n\nQUESTION POSTED AT 02/06/2020 - 01:33 AM\n\n### The surface area of a rectangular prism is 396 ft2. What is the surface area of this prism measured in yd2?\n\nQUESTION POSTED AT 02/06/2020 - 01:33 AM\n\n### The gray area is the sidewalk. The area of the sidewalk is ___________ square units.\n\nQUESTION POSTED AT 02/06/2020 - 01:32 AM\n\n### Write the slope-intercept inequality for the graph below. Also please explain this one if you can!\n\nQUESTION POSTED AT 02/06/2020 - 01:31 AM\n\n### Graph the exponential function. y = 4(3) x\n\nQUESTION POSTED AT 02/06/2020 - 01:30 AM\n\n### Find an irrational number that is between 7.7 and 7.9. Explain why it is irrational. Include the decimal approximation of the irrational number to the nearest hundredth. (3 points)\n\nQUESTION POSTED AT 02/06/2020 - 01:29 AM\n\n### Xy=-42 what is the answer to it ?\n\nQUESTION POSTED AT 02/06/2020 - 01:28 AM\n\n### The lines below are perpendicular. If the slope of the green line is 3/2, what is the slope of the red line?\n\nQUESTION POSTED AT 02/06/2020 - 01:25 AM\n\n### Help, You have a rectangular piece of fabric with side lengths of 11 inches and 6 inches. What is the area of the biggest possible square you can cut from this fabric?\n\nQUESTION POSTED AT 02/06/2020 - 01:23 AM\n\n### A scatterplot consists of (1, 4.0), (2, 3.3), (3, 3.8), (4, 2.6), and (5, 2.7). The line of best fit used to model the data is y = –0.33x + 4.27. Which residual plot is correct?\n\nQUESTION POSTED AT 02/06/2020 - 01:23 AM\n\n### A county's population in 1991 was 147 million. In 1998 it was 153 million. Estimate the population in 2017 using the exponential growth formula. Round your answer to the nearest million.\n\nQUESTION POSTED AT 02/06/2020 - 01:21 AM\n\n### Which of the graphs in Fig. Q25.12 best illustrates the current I in a real resistor as a function of the potential difference V across it? Explain. (Hint: See Discussion Question Q25.11.)\n\nQUESTION POSTED AT 02/06/2020 - 01:20 AM\n\n### The area of a rectangular plot 24 feet long and 16 feet\n\nQUESTION POSTED AT 02/06/2020 - 01:19 AM\n\n### Using the given points, determine the slope. (8, 1) and (8, -4)\n\nQUESTION POSTED AT 02/06/2020 - 01:19 AM\n\n### The lines below are perpendicular. If the slope of the green line is -5, what is the slope of the red line?\n\nQUESTION POSTED AT 02/06/2020 - 01:17 AM\n\n### Determine whether the conditional and its converse are both true. If both are true, combine them as a biconditional. If either is false, provide a counterexample. If an angle is a right angle, its measure is 90. If an angle measure is 90, the angle is a right angle. Both statements are true. An angle is a right angle if and only if its measure is 90. Both statements are true. The measure of an angle is 90 if and only if it is not a right angle. One statement is false. If an angle is a right angle, its measure may be 180. One statement is false. If an angle measure is 90, the angle may be an obtuse angle.\n\nQUESTION POSTED AT 02/06/2020 - 01:12 AM\n\n### Calculate the average rate of change for the given graph from x = 3 to x = 5 and select the correct answer below. A. 5 B. 3 C. 1 D. 0\n\nQUESTION POSTED AT 02/06/2020 - 01:12 AM\n\n### What is the point slope of a equation of a line with the point 1,8 and slope of -5\n\nQUESTION POSTED AT 01/06/2020 - 05:09 PM"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95827603,"math_prob":0.9403336,"size":578,"snap":"2020-34-2020-40","text_gpt3_token_len":133,"char_repetition_ratio":0.22648084,"word_repetition_ratio":0.46601942,"special_character_ratio":0.23875433,"punctuation_ratio":0.088709675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9885875,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T11:38:25Z\",\"WARC-Record-ID\":\"<urn:uuid:c81e013c-70a0-4abd-9b98-116b3e3b3e31>\",\"Content-Length\":\"33618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7da295a9-2714-40b5-8d4b-00b9251e31ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:62d49ea2-5ebe-4f25-81f3-2035024822fe>\",\"WARC-IP-Address\":\"167.172.229.190\",\"WARC-Target-URI\":\"https://assignmentgrade.com/question/690777/\",\"WARC-Payload-Digest\":\"sha1:PAVSQRXQNBCTUDFCRP475VBGIZ7PYJXP\",\"WARC-Block-Digest\":\"sha1:PPKKRKOR7ZRFHCCJGYH6HCWREWF3UEWX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735810.18_warc_CC-MAIN-20200803111838-20200803141838-00294.warc.gz\"}"} |
https://sheetsinfo.com/tag/error/ | [
"## ROUNDDOWN Function in Google Sheets\n\nROUNDDOWN Function belongs to the family of Mathematical functions in Google Sheet used for rounding numbers. It's very similar to ROUND Function, just with the difference that it will always…\n\n## ROUNDUP Function in Google Sheets\n\nROUNDUP Function belongs to the family of Mathematical functions in Google Sheet used for rounding numbers. It's very similar to ROUND Function, just with the difference that it will always…\n\n## MROUND Function in Google Sheets\n\nMROUND Function forms a part of family of Rounding Functions in Google Sheet. It helps you return the nearest multiple of value with a specified factor. Hence, its functionally closer…\n\n## ROUND Function in Google Sheets\n\nROUND Function belongs to the family of Mathematical functions in Google Sheet used for rounding numbers. While there are other functions like TRUNC/INT/CEILING/FLOOR , ROUND is the most commonly used…\n\n## FLOOR Function in Google Sheets\n\nFLOOR Function belongs to the family of Mathematical functions in Google Sheet used for rounding numbers. FLOOR Function stands apart from other sister functions like Round/INT/TRUNC etc since it lets…\n\n## ARRAYFORMULA Function in Google sheet\n\nARRAYFORMULA Function is one of the most tricky but useful function you will come across in Google Sheets. Though is not commonly used, it's highly recommended to do so. In…\n\n## How to Subtract Anything(Number, dates, cell range) in Google Sheets\n\nSubtraction like any other basic mathematical operation like addition, division and multiplication is amongst the most frequently used functionalities in Google Sheet. Have a solid understanding of the topic will…\n\n## How to Multiply in Google Sheets\n\nFundamental mathematical operations like Addition, Subtraction, Multiplication & Division are very easy to do in Google Sheets. In this article we will focus on the use of MULTIPLY() Function and…\n\n## How to Divide in Google Sheets\n\nFundamental mathematical operations like Addition, Subtraction, Multiplication & Division are very easy to do in Google Sheets. In this article we will focus on the use of DIVIDE() Function and…"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88682884,"math_prob":0.6876325,"size":1981,"snap":"2022-40-2023-06","text_gpt3_token_len":382,"char_repetition_ratio":0.18310572,"word_repetition_ratio":0.48172757,"special_character_ratio":0.17768803,"punctuation_ratio":0.06969697,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9826639,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T10:55:26Z\",\"WARC-Record-ID\":\"<urn:uuid:92273692-acea-4638-9702-5163f8cf57fb>\",\"Content-Length\":\"56096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:42ad28bf-47ad-4914-92c0-d273f89cbf2a>\",\"WARC-Concurrent-To\":\"<urn:uuid:73c866ab-03af-4bee-88b1-e862c5e73843>\",\"WARC-IP-Address\":\"217.21.90.251\",\"WARC-Target-URI\":\"https://sheetsinfo.com/tag/error/\",\"WARC-Payload-Digest\":\"sha1:JZMKFSJBU3IAZRQUPCK6ITWTMTSXDZA3\",\"WARC-Block-Digest\":\"sha1:DZLH7QWEIAVCLV33VTVLWP3JEXNCRZOH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500456.61_warc_CC-MAIN-20230207102930-20230207132930-00294.warc.gz\"}"} |
https://www.mvtec.com/doc/halcon/2105/en/gray_features.html | [
"# gray_features (Operator)\n\n## Name\n\n`gray_features` — Calculates gray value features for a set of regions.\n\n## Signature\n\n`gray_features(Regions, Image : : Features : Value)`\n\n## Description\n\n`gray_features` has a set of regions (`Regions`) as input. For each of these regions the features (`Features`) are calculated and returned in `Value`.\n\nPossible values for `Features`:\n\n## Attention\n\nSeveral features are processed in the order in which they are entered.\n\nNote that the operator `gray_features` only considers the given `Regions` and ignores any previously set domain of the input image `Image`.\n\n## Execution Information\n\n• Multithreading type: reentrant (runs in parallel with non-exclusive operators).\n• Automatically parallelized on tuple level.\n\n## Parameters\n\n`Regions` (input_object) region-array `→` object\n\nRegions to be examined.\n\n`Image` (input_object) singlechannelimage `→` object (byte / direction / cyclic / int1 / int2 / uint2 / int4 / real)\n\nGray value image.\n\n`Features` (input_control) string(-array) `→` (string)\n\nNames of the features.\n\nDefault value: 'mean'\n\nList of values: 'alpha', 'anisotropy', 'area', 'beta', 'column', 'deviation', 'entropy', 'fuzzy_entropy', 'fuzzy_perimeter', 'max', 'mean', 'median', 'min', 'moments_column', 'moments_row', 'phi', 'plane_deviation', 'ra', 'rb', 'row'\n\n`Value` (output_control) real(-array) `→` (real)\n\nValues of the features.\n\n## Complexity\n\nIf F is the area of the region and N the number of features the runtime complexity is O(F * N).\n\n## Result\n\nThe operator `gray_features` returns the value TRUE if the input image has the defined gray values and the parameters are correct. If necessary an exception is raised.\n\n## Possible Predecessors\n\n`connection`, `mean_image`, `entropy_image`, `sobel_amp`, `median_separate`\n\n## Possible Successors\n\n`select_gray`, `shape_trans`, `reduce_domain`, `count_obj`\n\n`select_gray`, `deviation_image`, `entropy_gray`, `intensity`, `mean_image`, `min_max_gray`, `select_obj`"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.50182664,"math_prob":0.6695854,"size":2867,"snap":"2021-43-2021-49","text_gpt3_token_len":756,"char_repetition_ratio":0.14739783,"word_repetition_ratio":0.083129585,"special_character_ratio":0.25427276,"punctuation_ratio":0.16363636,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95069516,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T20:27:15Z\",\"WARC-Record-ID\":\"<urn:uuid:44869b79-d7b0-49d6-852a-88194be5634a>\",\"Content-Length\":\"59145\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6068eadd-8ab4-4ac5-acfb-f4b5fe7b53f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:046b7828-5092-473f-8b19-3be9eeaa306c>\",\"WARC-IP-Address\":\"185.243.132.235\",\"WARC-Target-URI\":\"https://www.mvtec.com/doc/halcon/2105/en/gray_features.html\",\"WARC-Payload-Digest\":\"sha1:3USDE5ZY2D4JMW6ZTSYROGQJPPVQ6E6N\",\"WARC-Block-Digest\":\"sha1:YC2XPJHWGJYK3E6JMFEAQQ3WNPBZEUA5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363006.60_warc_CC-MAIN-20211204185021-20211204215021-00394.warc.gz\"}"} |
https://www.html5rocks.com/en/tutorials/canvas/imagefilters/ | [
"# Image Filters with Canvas\n\nHTML5 Rocks\n\n## Introduction\n\nThe HTML5 canvas element can be used to write image filters. What you need to do is draw an image onto a canvas, read back the canvas pixels, and run your filter on them. You can then write the result onto a new canvas (or heck, just reuse the old one.)\n\nSounds simple? Good. Let's get cracking!\n\n## Processing pixels\n\nFirst, retrieve the image pixels:\n\n```Filters = {};\nFilters.getPixels = function(img) {\nvar c = this.getCanvas(img.width, img.height);\nvar ctx = c.getContext('2d');\nctx.drawImage(img);\nreturn ctx.getImageData(0,0,c.width,c.height);\n};\n\nFilters.getCanvas = function(w,h) {\nvar c = document.createElement('canvas');\nc.width = w;\nc.height = h;\nreturn c;\n};\n```\n\nNext, we need a way to filter images. How about a `filterImage` method that takes a filter and an image and returns the filtered pixels?\n\n```Filters.filterImage = function(filter, image, var_args) {\nvar args = [this.getPixels(image)];\nfor (var i=2; i<arguments.length; i++) {\nargs.push(arguments[i]);\n}\nreturn filter.apply(null, args);\n};\n```\n\n## Running simple filters\n\nNow that we have the pixel processing pipeline put together, it's time to write some simple filters. To start off, let's convert the image to grayscale.\n\n```Filters.grayscale = function(pixels, args) {\nvar d = pixels.data;\nfor (var i=0; i<d.length; i+=4) {\nvar r = d[i];\nvar g = d[i+1];\nvar b = d[i+2];\n// CIE luminance for the RGB\n// The human eye is bad at seeing red and blue, so we de-emphasize them.\nvar v = 0.2126*r + 0.7152*g + 0.0722*b;\nd[i] = d[i+1] = d[i+2] = v\n}\nreturn pixels;\n};\n```\n\nAdjusting brightness can be done by adding a fixed value to the pixels:\n\n```Filters.brightness = function(pixels, adjustment) {\nvar d = pixels.data;\nfor (var i=0; i<d.length; i+=4) {\n}\nreturn pixels;\n};\n```\n\nThresholding an image is also quite simple. You just compare the grayscale value of a pixel to the threshold value and set the color based on that:\n\n```Filters.threshold = function(pixels, threshold) {\nvar d = pixels.data;\nfor (var i=0; i<d.length; i+=4) {\nvar r = d[i];\nvar g = d[i+1];\nvar b = d[i+2];\nvar v = (0.2126*r + 0.7152*g + 0.0722*b >= threshold) ? 255 : 0;\nd[i] = d[i+1] = d[i+2] = v\n}\nreturn pixels;\n};\n```\n\n## Convolving images\n\nConvolution filters are very useful generic filters for image processing. The basic idea is that you take the weighed sum of a rectangle of pixels from the source image and use that as the output value. Convolution filters can be used for blurring, sharpening, embossing, edge detection and a whole bunch of other things.\n\n```Filters.tmpCanvas = document.createElement('canvas');\nFilters.tmpCtx = Filters.tmpCanvas.getContext('2d');\n\nFilters.createImageData = function(w,h) {\nreturn this.tmpCtx.createImageData(w,h);\n};\n\nFilters.convolute = function(pixels, weights, opaque) {\nvar side = Math.round(Math.sqrt(weights.length));\nvar halfSide = Math.floor(side/2);\nvar src = pixels.data;\nvar sw = pixels.width;\nvar sh = pixels.height;\n// pad output by the convolution matrix\nvar w = sw;\nvar h = sh;\nvar output = Filters.createImageData(w, h);\nvar dst = output.data;\n// go through the destination image pixels\nvar alphaFac = opaque ? 1 : 0;\nfor (var y=0; y<h; y++) {\nfor (var x=0; x<w; x++) {\nvar sy = y;\nvar sx = x;\nvar dstOff = (y*w+x)*4;\n// calculate the weighed sum of the source image pixels that\n// fall under the convolution matrix\nvar r=0, g=0, b=0, a=0;\nfor (var cy=0; cy<side; cy++) {\nfor (var cx=0; cx<side; cx++) {\nvar scy = sy + cy - halfSide;\nvar scx = sx + cx - halfSide;\nif (scy >= 0 && scy < sh && scx >= 0 && scx < sw) {\nvar srcOff = (scy*sw+scx)*4;\nvar wt = weights[cy*side+cx];\nr += src[srcOff] * wt;\ng += src[srcOff+1] * wt;\nb += src[srcOff+2] * wt;\na += src[srcOff+3] * wt;\n}\n}\n}\ndst[dstOff] = r;\ndst[dstOff+1] = g;\ndst[dstOff+2] = b;\ndst[dstOff+3] = a + alphaFac*(255-a);\n}\n}\nreturn output;\n};\n```\n\nHere's a 3x3 sharpen filter. See how it focuses the weight on the center pixel. To maintain the brightness of the image, the sum of the matrix values should be one.\n\n```Filters.filterImage(Filters.convolute, image,\n[ 0, -1, 0,\n-1, 5, -1,\n0, -1, 0 ]\n);\n```\n\nHere's an another example of a convolution filter, the box blur. The box blur outputs the average of the pixel values inside the convolution matrix. The way to do that is to create a convolution matrix of size NxN where each of the weights is 1 / (NxN). That way each of the pixels inside the matrix contributes an equal amount to the output image and the sum of the weights is one.\n\n```Filters.filterImage(Filters.convolute, image,\n[ 1/9, 1/9, 1/9,\n1/9, 1/9, 1/9,\n1/9, 1/9, 1/9 ]\n);\n```\n\nWe can make more complex image filters by combining existing filters. For example, let's write a Sobel filter. A Sobel filter computes the vertical and horizontal gradients of the image and combines the computed images to find edges in the image. The way we implement the Sobel filter here is by first grayscaling the image, then taking the horizontal and vertical gradients and finally combining the gradient images to make up the final image.\n\nRegarding terminology, \"gradient\" here means the change in pixel value at an image position. If a pixel has a left neighbour with value 20 and a right neighbour with value 50, the horizontal gradient at the pixel would be 30. The vertical gradient has the same idea but uses the above and below neighbours.\n\n```var grayscale = Filters.filterImage(Filter.grayscale, image);\n// Note that ImageData values are clamped between 0 and 255, so we need\n// to use a Float32Array for the gradient values because they\n// range between -255 and 255.\nvar vertical = Filters.convoluteFloat32(grayscale,\n[ -1, 0, 1,\n-2, 0, 2,\n-1, 0, 1 ]);\nvar horizontal = Filters.convoluteFloat32(grayscale,\n[ -1, -2, -1,\n0, 0, 0,\n1, 2, 1 ]);\nvar final_image = Filters.createImageData(vertical.width, vertical.height);\nfor (var i=0; i<final_image.data.length; i+=4) {\n// make the vertical gradient red\nvar v = Math.abs(vertical.data[i]);\nfinal_image.data[i] = v;\n// make the horizontal gradient green\nvar h = Math.abs(horizontal.data[i]);\nfinal_image.data[i+1] = h;\n// and mix in some blue for aesthetics\nfinal_image.data[i+2] = (v+h)/4;\nfinal_image.data[i+3] = 255; // opaque alpha\n}\n```\n\nTo cap off our journey into convolution, here's a little toy for you to play with: A custom 3x3 convolution filter! Yay!\n\nAnd there's a whole bunch of other cool convolution filters out there just waiting for you to discover them. For instance, try implementing a Laplace filter in the convolution toy above and see what it does.\n\n## Conclusion\n\nI hope this small article was useful in introducing the basic concepts of writing image filters in JavaScript using the HTML canvas tag. I encourage you to go and implement some more image filters, it's quite fun!\n\nIf you need better performance from your filters, you can usually port them to use WebGL fragment shaders to do the image processing. With shaders, you can run most simple filters in realtime, which allows you to use them for post-processing video and animations."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6705768,"math_prob":0.9941758,"size":6958,"snap":"2019-13-2019-22","text_gpt3_token_len":1894,"char_repetition_ratio":0.13934426,"word_repetition_ratio":0.056179777,"special_character_ratio":0.29951134,"punctuation_ratio":0.19656992,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928013,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T10:14:17Z\",\"WARC-Record-ID\":\"<urn:uuid:fe94d9c4-ac10-4077-98de-fc822b6af78c>\",\"Content-Length\":\"38718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dbaae8b4-99f9-4a0e-9c70-d05a65ab199c>\",\"WARC-Concurrent-To\":\"<urn:uuid:adacee89-84d9-4eaf-b7fe-271f27c0b5ad>\",\"WARC-IP-Address\":\"172.217.7.211\",\"WARC-Target-URI\":\"https://www.html5rocks.com/en/tutorials/canvas/imagefilters/\",\"WARC-Payload-Digest\":\"sha1:24FWZZK675WZ4AP7ZNWKIFJRBQQCCGY3\",\"WARC-Block-Digest\":\"sha1:PXUY7J4JQJJ4H6UMQPMQEDVZVLL6KXFO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202642.32_warc_CC-MAIN-20190322094932-20190322120932-00439.warc.gz\"}"} |
https://johngowe.rs/blog/2019/11/05/whats-the-difference-between-an-orientation-and-a-rotation/ | [
"# What’s the difference between an orientation and a rotation?\n\nThis question came up at work recently, and I was unable to find a really good answer online. For the purposes of this article, I will be using the terms orientation and rotation as they are used in computer graphics; this might not be the same definitions you are used to.\n\nIf you search online for what the difference is between a rotation and an orientation, then you get several seemingly conflicting answers:\n\n• ‘An orientation is given by Euler angles, whereas a rotation is given by a quaternion.’\n• ‘An orientation is the destination that you reach at the end of a rotation; the rotation is the route to that destination.’\n• ‘Orientations only allow you to rotate from $$0$$ to $$360$$ degrees, whereas rotations allow you to go beyond $$360$$ degrees.\n\nI think the last of these is the one that comes closest. In the two-dimensional case, the answer is easy: an orientation is an element of $$SO(2)$$ (i.e., confusingly, a rotation matrix), whereas a rotation also comes equipped with a winding number about the origin. If we want, we can identify orientations of 2D space with elements of the circle $$S^1$$ and rotations with real numbers. Then we have a continuous function $$\\lambda$$ from rotations to orientations given by $$\\lambda(x) = e^{2\\pi i x}$$.\n\nThis continuous map $$\\lambda$$ has an important property: it is a covering map. This means that if $$z$$ is some orientation, then there is some open neighbourhood $$U$$ of $$z$$ such that $$\\lambda\\inv U$$ is the union of disjoint open sets, each mapped homeomorphically on to $$U$$ by $$\\lambda$$. This means that if $$x$$ is a rotation, and we modify the corresponding orientation $$\\lambda(x)$$ very slightly, to get an orientation $w$ then we can get a corresponding modified rotation $$y$$ such that $$\\lambda(y) = w$$.\n\nThis means that, for example, if we were filming a rotating object then, assuming our frame rate were fast enough, we could track the overall rotation of the object over time by capturing its orientation at each frame. Why might we want to do this?\n\nLet’s look at an example. There’s a game called Getting Over It with Bennett Foddy, in which you control a character named Diogenes who lives in a barrel. The only way Diogenes can get around is by moving a hammer in circles around his body. The player controls the hammer by moving the mouse around, and the tip of the hammer follows the mouse point.\n\nIn the real game, he can move the hammer in a full circle. But let’s suppose instead that he was unable to move the hammer through the bottom of the circle — perhaps the barrel is in the way.\nIf we only keep track of the orientation of the the mouse cursor around the centre of the rotation, we are at risk of introducing some graphical glitches. Suppose the player moves the mouse in a straight line right through the bottom of the barrel. Then we will move very suddenly from this position\n\nto this one.\n\nSince we would like to give the impression of continuous motion, this is undesirable. If, however, we keep track of the overall rotation of the mouse cursor, then the game will know that it should not jump to the second position, because the rotation is $$390^\\circ$$, rather than $$30^\\circ$$.\n\nNow the input to the game is given by the mouse position: loosely speaking, an orientation. Therefore, the property of covering spaces that we have mentioned above is crucial: as long as we can assume that we are sampling the mouse position quickly enough, we can translate the changing mouse orientations into chainging rotations, which the game can then understand.\n\nAnother way to understand this is to recall that covering maps satisfy a property called path lifting: if $$p \\colon X \\to Y$$ is a covering map, $$\\gamma \\colon [0,1] \\to Y$$ is a path in $$Y$$ and $$x$$ is a point in $$X$$ such that $$p(x) = \\gamma(0)$$, then there is a path $$\\hat{\\gamma} \\colon [0,1] \\to X$$ such that $$\\hat{\\gamma}(0) = x$$ and $$p\\circ\\hat{\\gamma} = \\gamma$$.\n\nThe covering map $$\\lambda$$ of $$S^1$$ by the real line is particularly important, because the real line is simply connected, meaning that it is in fact a universal cover for $$S^1$$. A universal cover is defined as a covering map whose source is simply connected, but the word universal refers to the fact that we can prove that any other covering map must factor through the universal cover. There are infinitely many coverings maps on to the circle (indeed, take the map from $$S^1$$ to itself given by $$e^{i\\theta} \\mapsto e^{ni\\theta}$$ for any $$n$$), but the real line is the one that stores the most information.\n\nAn important fact about universal covers $$p \\colon U \\to X$$, which we shall come back to later, is that the order of the cover (i.e., the order of the set $$p^{-1}({x})$$ for any $$x$$) is the same as the order of the fundamental group of $$X$$. $$S^1$$ has fundamental group $$\\mathbb Z$$, so there are infinitely many rotations corresponding to any particular orientation.\n\nA fact in topology is that any space $$X$$ that is connected, locally path-connected and semilocally simply-connected admits a universal cover [If you are ever bored in an airport, get out some paper and work out the proof. If you, like most people, don’t know what ‘semilocally simply-connected’ means, don’t look it up – just start writing out the proof and work it out for yourself.]. It therefore makes sense to generalize our discussion above to $$n$$ dimensions as follows.\n\n• An orientation of $$\\mathbb R^n$$ is an element of $$SO(n)$$.\n• A rotation of $$\\mathbb R^n$$ is an element of the (technically, a) universal cover of $$SO(n)$$.\n\nThis brings us back to the second answer at the top of the article, because the elements of the univeral cover of a space $$X$$ may be identified with homotopy classes of paths in $$X$$ from a fixed source point $$x_0$$. Therefore (up to homotopy) a rotation of $$\\mathbb R^n$$ really does encode the history of all the orientations that an object has been through, where the word ‘history’ is taken in the most basic sense imaginable: a path through the space of orientations.\n\nThat ‘(up to homotopy)’ becomes a lot more important in $$n>2$$ than in $$2$$ dimensions, however. In two dimensions, there is not much difference between two homotopic paths around the circle: they might travel at different speeds for different parts of the journey, or one might overshoot the target and double back. But the two paths are still clearly the same rotation in some sense. In three dimensions and above, more complicated homotopies between paths. Watch the rotating candles in this Balinese candle dance, for example.\n\nThe candles rotate about the up-down axis by two full rotations, yet the dancers’ arms are completely untwisted at the end. This reflects a fact about $$SO(3)$$: a rotation of $$360^\\circ$$ is not homotopic to the identity, but a rotation of $$720^\\circ$$ is. In fact, the fundamental group of $$SO(3)$$ is $$\\mathbb Z/2$$, which means that its universal cover is a double cover: there are precisely two rotations corresponding to each orientation of 3D space.\n\nThis might not be a problem if we wanted to extend our man-in-a-barrel example to 3D space. In that case, we didn’t really need to track infinitely many rotations around the centre: just being able to tell when we’d gone slightly more than a full rotation round was enough. But for more complicated linkages of multiple joints, this mathematical fact can lead to problems that are very difficult to solve.\n\nWhere do quaternions come into this? The simple answer is that the space of unit quaternions is the universal cover for $$SO(3)$$. That is, the universal cover of $$SO(3)$$ is the $$3$$-sphere $$S^3$$, and if we identify $$S^3$$ with the unit quaternions then the covering map commutes with quaternion multiplication and multiplication of matrices. Therefore, we can store rotations of 3D space with quaternions (four numbers), while orientations require less nice representations such as matrices (nine numbers) or Euler angles (three numbers, but without the nice properties of covering maps)."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93886566,"math_prob":0.9988429,"size":8064,"snap":"2023-40-2023-50","text_gpt3_token_len":1870,"char_repetition_ratio":0.1398263,"word_repetition_ratio":0.0073313783,"special_character_ratio":0.24553572,"punctuation_ratio":0.09096733,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994635,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T12:24:10Z\",\"WARC-Record-ID\":\"<urn:uuid:3a3c5b4a-1927-4551-84ad-63b1a949f795>\",\"Content-Length\":\"32965\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a713c0c-a688-4180-816f-de5bae5bc809>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4e9e00c-cddc-4320-8373-1141c5b789cc>\",\"WARC-IP-Address\":\"205.134.241.78\",\"WARC-Target-URI\":\"https://johngowe.rs/blog/2019/11/05/whats-the-difference-between-an-orientation-and-a-rotation/\",\"WARC-Payload-Digest\":\"sha1:VS75LPRZYYULPDX47MBETLOJB6T33B2X\",\"WARC-Block-Digest\":\"sha1:MJOQUESQGT4OK5KM3OKWZAQXWVQJOAWS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510297.25_warc_CC-MAIN-20230927103312-20230927133312-00257.warc.gz\"}"} |
https://www.physicsforums.com/threads/reactance-and-impedence.488475/ | [
"# Reactance and Impedence\n\n## Homework Statement\n\nI just want to know why the reactance of a capacitor is 1/LC rather than 1/$$\\sqrt{}LC$$?\n\n## Homework Equations\n\n2(pi)f = 1/sqrt(LC)\n\ngneill\nMentor\n\n## Homework Statement\n\nI just want to know why the reactance of a capacitor is 1/LC rather than 1/$$\\sqrt{}LC$$?\n\nIt's not. The magnitude of the reactance of a capacitor C is 1/(2πfC). Or, treating it as a complex impedance, the impedance is 1/(j2πfC).\n\nIt's not. The magnitude of the reactance of a capacitor C is 1/(2πfC). Or, treating it as a complex impedance, the impedance is 1/(j2πfC).\n\ngneill\nMentor"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86604065,"math_prob":0.9775478,"size":478,"snap":"2021-31-2021-39","text_gpt3_token_len":162,"char_repetition_ratio":0.11814346,"word_repetition_ratio":0.37179488,"special_character_ratio":0.31799164,"punctuation_ratio":0.067961164,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99729824,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T01:10:33Z\",\"WARC-Record-ID\":\"<urn:uuid:37509be5-0e0e-45e5-8044-84cd35788ce1>\",\"Content-Length\":\"66261\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4af0e13e-17d1-4b90-8dd5-4a9a8e1325a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9ccde5d-50cd-442a-bb24-64f215067c37>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/reactance-and-impedence.488475/\",\"WARC-Payload-Digest\":\"sha1:KA6DSF2YW3BF46E3V7J2VNXMSDKRK67J\",\"WARC-Block-Digest\":\"sha1:DNNA6PODH3NAAFO35RJSQ55O3F36WQZE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056974.30_warc_CC-MAIN-20210920010331-20210920040331-00292.warc.gz\"}"} |
https://stats.stackexchange.com/questions/163559/how-to-visualize-a-significant-interaction-between-two-linear-predictors-using-t?noredirect=1 | [
"# How to visualize a significant interaction between two linear predictors using the rms package?\n\nTwo linear predictors interact significantly (see below). How can I visualize this interaction in a plot?\n\n> data(pbc)\n> d <- pbc\n> rm(pbc, pbcseq)\n> d$status <- ifelse(d$status != 0, 1, 0)\n>\n> (m <- cph(Surv(time, status) ~ bili * alk.phos, data=d))\n\nCox Proportional Hazards Model\n\ncph(formula = Surv(time, status) ~ bili * alk.phos, data = d)\n\nFrequencies of Missing Values Due to Each Variable\nSurv(time, status) bili alk.phos\n0 0 106\n\nModel Tests Discrimination\nIndexes\nObs 312 LR chi2 94.76 R2 0.264\nEvents 144 d.f. 3 Dxy 0.565\nCenter 0.676 Pr(> chi2) 0.0000 g 0.641\nScore chi2 193.72 gr 1.898\nPr(> chi2) 0.0000\n\nCoef S.E. Wald Z Pr(>|Z|)\nbili 0.2280 0.0300 7.59 <0.0001\nalk.phos 0.0001 0.0000 1.83 0.0667\nbili * alk.phos 0.0000 0.0000 -2.86 0.0043\n\n\nOne way I can think of is to dichotomize one predictor and plot the high values with the low values as two line plots in one figure. However, I cannot reproduce the example in the rms package under plot.Predict() using the example data above.\n\n> d$alk.phos.high <- ifelse(d$alk.phos > 1259, 1, 0)\n> (m <- cph(Surv(time, status) ~ bili * alk.phos.high, data=d))\n\nCox Proportional Hazards Model\n\ncph(formula = Surv(time, status) ~ bili * alk.phos.high, data = d)\n\nFrequencies of Missing Values Due to Each Variable\nSurv(time, status) bili alk.phos.high\n0 0 106\n\nModel Tests Discrimination\nIndexes\nObs 312 LR chi2 97.95 R2 0.272\nEvents 144 d.f. 3 Dxy 0.540\nCenter 0.81 Pr(> chi2) 0.0000 g 0.727\nScore chi2 194.00 gr 2.069\nPr(> chi2) 0.0000\n\nCoef S.E. Wald Z Pr(>|Z|)\nbili 0.2374 0.0277 8.57 <0.0001\nalk.phos.high 0.5667 0.2214 2.56 0.0105\nbili * alk.phos.high -0.1139 0.0309 -3.69 0.0002\n\n\n## UPDATE #1\n\nMy trying to figure out how to plot both groups of a dichotomized predictor in a single figure, I figured out how to plot lines for several values of one interacting predictor in a plot of the other predictor against the hazard ratio.\n\nI thinks this kind of plot shows the effect of the interaction term in an easy to understand way (which is especially important for a physician g).\n\n1. Does this kind of plot have a special name? How would you call this kind of plot?\n2. How can I interpret this interaction? Would it be correct to say that the prognostic impact of bilirubin increases with lower values for alkaline phosphatase?\n#\nlibrary(rms)\ndata(pbc)\nd <- pbc\nrm(pbc, pbcseq)\nd$status <- ifelse(d$status != 0, 1, 0)\n\nm1 <- cph(Surv(time, status) ~ bili * alk.phos, data=d)\np1 <- Predict(m1, bili, alk.phos=c(850, 1250, 2000), conf.int=FALSE, fun=exp)\nplot(p1, ylab=\"Hazard Ratio\")\n\nm2 <- cph(Surv(time, status) ~ bili + alk.phos, data=d)\np2 <- Predict(m2, bili, alk.phos=c(850, 1250, 2000), conf.int=FALSE, fun=exp)\nplot(p2, ylab=\"Hazard Ratio\")\n\n\nfirst figure: model m2 without interaction\n\nsecond figure: model m1 with interaction\n\n• I think alk_phos by bili interactions are very interesting; I will try to construct a response. I don't think I would use discretized variables, though. It would not be pleasing to rms's author. Better would be to visualize the two-D plot of a cross spline fit. I've done this with my own data research and the results are much more useful if you leave the data as continuous. – DWin Aug 4 '15 at 4:26\n• Would you please give me a paper reference of your own research where you used this? – Gurkenhals Aug 4 '15 at 14:29\n• @Gurkenhals: Your update shows another good way of visualizing the relationships predicted by the model. Again a little explanation would be helpful - & what are you asking about this addition? NB Any mucking around with a good model by dichotomizing predictors to aid visualization is IMO putting the cart before the horse - see What is the benefit of breaking up a continuous predictor variable?. – Scortchi - Reinstate Monica Aug 4 '15 at 14:49\n• I explained my update further. – Gurkenhals Aug 4 '15 at 18:30\n• Also related – AdamO Aug 5 '15 at 19:22\n\nThe rms package allows you to model interactions between continuous variables very flexibly. This is a demonstration of modeling crossed regression splines with that data:\n\n(m <- cph(Surv(time, status) ~ rcs(bili,3) * rcs(alk.phos,3), data=d))\nbplot(Predict(m, bili=seq(.5,25,by=.5), alk.phos=seq(300,10000, by=500), fun=exp),\nlfun=contourplot, at=1:20)\n\n\nYou can find similar code described in Frank Harrell's \"Regression Modeling Strategies\" in chapter 10; Section 10.5:'Assessment of Model Fit'. There he used pseudo-3D plots which can also be useful in the visualization of interactions. That code was in the Binary Regression chapter, but there is no reason it cannot be used in a survival analysis (cph) context. One caveat that needs to be added is that this plot extends to regions of the bili-by-alk-phos \"space\" where there is no data, especially in the upper right quadrant. I will add a further example code section and plot section that shows how to use the rms perim function to restrict the contours to regions actually containing data and will make some further comments on interpretation.",
null,
"> anova(m)\nWald Statistics Response: Surv(time, status)\n\nFactor Chi-Square d.f. P\nbili (Factor+Higher Order Factors) 141.86 6 <.0001\nAll Interactions 7.48 4 0.1128\nNonlinear (Factor+Higher Order Factors) 36.61 3 <.0001\nalk.phos (Factor+Higher Order Factors) 8.17 6 0.2261\nAll Interactions 7.48 4 0.1128\nNonlinear (Factor+Higher Order Factors) 3.04 3 0.3854\nbili * alk.phos (Factor+Higher Order Factors) 7.48 4 0.1128\nNonlinear 6.95 3 0.0735\nNonlinear Interaction : f(A,B) vs. AB 6.95 3 0.0735\nf(A,B) vs. Af(B) + Bg(A) 0.13 1 0.7195\nNonlinear Interaction in bili vs. Af(B) 4.42 2 0.1096\nNonlinear Interaction in alk.phos vs. Bg(A) 2.75 2 0.2534\nTOTAL NONLINEAR 53.97 5 <.0001\nTOTAL NONLINEAR + INTERACTION 61.75 6 <.0001\nTOTAL 147.36 8 <.0001\n\n\nIn my own work using cph with tens of thousands of events, I generally increase the default values of n for the perimeter function, but in this very small dataset I found it necessary to decrease the value to get a good example. I used a simple call to plot to see where the data actually extended.\n\nwith(d, plot(bili, alk.phos, col=status+1))\n# needed to ad one to status to get both deaths and censored.\nperim <- with(d, perimeter(bili, alk.phos, n=2))\n# perim constructs a set of ranges where subjects are actually present\nbplot( Predict(m, bili=seq(.5,25,by=.5), alk.phos=seq(300,10000, by=500),\nfun=exp), # contours of relative risk\nlfun=contourplot, # call to lattice function\nat=1:20, # levels for contours\nperim=perim) # regions to include",
null,
"The plot shows that in the region where bilirubin is less than 4 or 5, that the risk increases primarily with increasing bilirubin with relative risks in the range of 1-3, but \\ when bilirubin is greater than 5, that the risk increases to relative risks of 4 to 9 under the joint effect of decreasing alkaline phosphatase and bilirubin. Clinical interpretation: In the higher bilirubin \"domain\" a low alk.phos has what might be considered a paradoxical effect. There may be such substantial loss of functional liver tissuethat the remaining liver is so diminished that it is unable to produce as much of the alk-phos enzyme. This is a typical example of \"effect modification\" using the terminology of epidemiology. The higher \"effect\" associated with a given decrease in alk.phos is fairly small when bilirubin is low, but the alk.phos-\"effect\" gets magnified as bili-rubin rises.\n\n• (+1) But I think it would be very helpful to non-users of rms/R if you could add a little explanation. – Scortchi - Reinstate Monica Aug 4 '15 at 11:34\n• Thanks for your answer; but yes a little bit more explanation how to read and interpret the plot and the anove would be very helpful. I just updated my question showing two plots with and without interaction (does this kind of plot has a specific name?) – Gurkenhals Aug 4 '15 at 14:28\n• @Scortchi: I see from your response to the linked question about \"binning\" that we are both committed disciples of Frank Harrell's wisdom regarding the value of leaving continuous variables continuous. Your answer is getting a well-deserved response from the audience. Hope these additions rise to your challenge for better explanation. Feel free to suggest areas needing clarification. – DWin Aug 4 '15 at 19:10\n• @Gurkenhals: I think these plots would be considered \"interaction plots\". Your individual lines can be thought of as \"slices\" through a 3D (2D-covariate by 1D risk-level) plot of risk. The bplot function also allows you to specify the wireframe function for plotting, which I find useful when presenting to audiences that have not seen this before. – DWin Aug 4 '15 at 19:13\n• anova computes Wald statistics, with optional use of robust sandwich and bootstrap covariance estimates, or estimates adjusted for multiple imputation. The LR $\\chi^2$ is the silver standard and should be used for key results. It doesn't suffer from the Hauck-Donner effect, among other things. – Frank Harrell Aug 5 '15 at 11:52"
] | [
null,
"https://i.stack.imgur.com/N2LFA.png",
null,
"https://i.stack.imgur.com/Iq6CV.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.59483784,"math_prob":0.90939784,"size":2849,"snap":"2020-10-2020-16","text_gpt3_token_len":1000,"char_repetition_ratio":0.1314587,"word_repetition_ratio":0.18965517,"special_character_ratio":0.36679536,"punctuation_ratio":0.1884273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861904,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-03T20:47:35Z\",\"WARC-Record-ID\":\"<urn:uuid:836930cb-fe20-41e4-af03-1a1d8bde5cec>\",\"Content-Length\":\"163674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28e483c5-c52c-4e40-a660-b78d568c4402>\",\"WARC-Concurrent-To\":\"<urn:uuid:45fad603-8185-4f5b-80ac-62ee3a881557>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/163559/how-to-visualize-a-significant-interaction-between-two-linear-predictors-using-t?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:3O5M56B3KGRPIXCJEUVWCVK3GQMN7RLC\",\"WARC-Block-Digest\":\"sha1:C6O666PNH36CAT5ET4Z5TU7NO62X63IK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370518622.65_warc_CC-MAIN-20200403190006-20200403220006-00486.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/quant-ph/0207052/ | [
"# Asymptotic entanglement capacity of the Ising and anisotropic Heisenberg interactions\n\nAndrew M. Childs Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA Debbie W. Leung IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA Frank Verstraete SISTA/ESAT, Department of Electrical Engineering, University of Leuven, Belgium Guifré Vidal Institute for Quantum Information, California Institute of Technology, Pasadena, CA 91125, USA\n[\n###### Abstract\n\nWe calculate the asymptotic entanglement capacity of the Ising interaction , the anisotropic Heisenberg interaction , and more generally, any two-qubit Hamiltonian with canonical form . We also describe an entanglement assisted classical communication protocol using the Hamiltonian with rate equal to the asymptotic entanglement capacity.\n\n###### pacs:\n03.67.-a, 03.65.Ud, 03.67.Hk\n\n]14 October 2002\n\nThe fundamental resource for quantum information processing is an interaction between two quantum systems. Any Hamiltonian that is not a sum of local terms couples the systems and . Together with local operations, the coupling can be used to generate entanglement DVCLP ; BHLS ; entanglement , to transmit classical and quantum information BHLS ; BGNP ; HVC ; BS02 , and more generally, to simulate the bipartite dynamics of some other Hamiltonian and thus to perform arbitrary unitary gates on the composite space simulationplusgates ; BCLLLPV ; cata .\n\nMuch experimental effort has been devoted to creating entangled states of quantum systems, including those in quantum optics, nuclear magnetic resonance, and condensed matter physics For . Determining the ability of a system to create entangled states provides a benchmark of the “quantumness” of the system. Furthermore, such states could ultimately be put to practical use in various quantum information processing tasks, such as superdense coding Bennett92 or quantum teleportation Bennett93 .\n\nThe theory of optimal entanglement generation can be approached in different ways. For example, Ref. DVCLP considers single-shot capacities. In the case of two-qubit interactions, and assuming that ancillary systems are not available, Ref. DVCLP presents a closed form expression for the entanglement capacity and optimal protocols by which it can be achieved. In contrast, Ref. BHLS considers the asymptotic entanglement capacity, allowing the use of ancillary systems, and shows that when ancillas are allowed, the single-shot and asymptotic capacities are in fact the same. However, such capacities can be difficult to calculate because the ancillary systems may be arbitrarily large.\n\nIn this paper, we calculate the asymptotic entanglement capacity of any two-qubit interaction that is locally equivalent to , and thus present a connection between the results of Refs. DVCLP and BHLS . We consider the use of ancillary systems, and show that they do not increase the entanglement capacity of these interactions. Thus in these cases, the asymptotic capacity discussed in Ref. BHLS is in fact given by the expression presented in Ref. DVCLP . As an application of this result, we present an explicit ensemble for entanglement assisted classical communication BHLS , implicitly found in Ref. BS02 , at a rate equal to the entanglement capacity. We also give an alternative ensemble achieving the same rate. Finally, we conclude by presenting some numerical results on the entanglement capacity of general two-qubit interactions.\n\nWe begin by reviewing some definitions and known results. Let be a state of the systems and . This state can always be written using the Schmidt decomposition Peres ,\n\n |ψ⟩:=∑i√λi |ϕi⟩A⊗|ηi⟩B, (1)\n\nwhere and are orthonormal sets of states, and with . The entanglement between and is defined as\n\n E(|ψ⟩):=−∑iλilogλi. (2)\n\n(Throughout this paper, the base of is .)\n\nReference DVCLP considers maximizing the rate of increase of entanglement when a pure state is acted on by , the evolution according to a time-independent Hamiltonian (we set throughout this paper). We refer to this maximal rate as the single-shot entanglement capacity. When no ancillas are used, this is given by\n\n E(1∗)H:=max|ψ⟩∈HABlimt→0E(e−iHt|ψ⟩)−E(|ψ⟩)t. (3)\n\nHere the rate of increasing entanglement is optimized over all possible pure initial states of without ancillary systems. In fact, the single-shot capacity may be higher if ancillary systems and , not acted on by , are used. For this reason, we may consider the alternative single-shot entanglement capacity\n\n E(1)H:=sup|ψ⟩∈HAA′BB′limt→0E(e−iHt|ψ⟩)−E(|ψ⟩)t. (4)\n\nNote that in Eqs. (3) and (4), the limit is the same from both sides even though it might be the case that in general (and similarly for ).\n\nFor any two-qubit Hamiltonian , Ref. DVCLP shows that it is locally equivalent to a canonical form\n\n ∑i=x,y,zμiσi⊗σi,μx≥μy≥|μz|. (5)\n\nIn terms of this canonical form, the optimal single-shot entanglement capacity of any two-qubit interaction without ancillas is given by\n\n E(1∗)H = α(μx+μy), (6) α := 2maxx√x(1−x)log(x1−x)≈1.9123, (7)\n\nwhere the maximum is obtained at . In addition, may be strictly larger than when DVCLP .\n\nReference BHLS considers the asymptotic entanglement capacity for an arbitrary Hamiltonian . is defined as the maximum average rate at which entanglement can be produced by using many interacting pairs of systems, in parallel or sequentially. These systems may be acted on by arbitrary collective local operations (attaching or discarding ancillary systems, unitary transformations, and measurements). Furthermore, classical communication between and and possibly mixed initial states are allowed. Reference BHLS proves that the asymptotic entanglement capacity in this general setting turns out to be just the single-shot capacity in Ref. DVCLP , , for all , so\n\n EH=sup|ψ⟩∈HAA′BB′limt→0E(e−iHt|ψ⟩)−E(|ψ⟩)t. (8)\n\nNote that the definition of the capacity involves a supremum over both all possible states and all possible interaction times, but in fact it can be expressed as a supremum over states and a limit as , with the limit and the supremum taken in either order.\n\nLet be the optimal input in Eq. (4) or (8). When is finite dimensional, the entanglement capacity can be achieved DVCLP ; BHLS by first inefficiently generating some EPR pairs, and repeating the following three steps: () transform EPR pairs into BPPS ; dilute , () evolve each according to for a short time , and () concentrate the entanglement into EPR pairs BPPS .\n\nIn this paper, we show that for any two-qubit Hamiltonian with canonical form\n\n K:=μxσx⊗σx+μyσy⊗σy,μx≥μy≥0, (9)\n\nso that all three entanglement capacities are equal:\n\n EK=E(1)K=E(1∗)K. (10)\n\nThe optimal input is therefore a two-qubit state, and the optimal protocol applies. In particular, for these Hamiltonians, which include the Ising interaction and the anisotropic Heisenberg interaction , entanglement can be optimally generated from a -qubit initial state without ancillary systems . As mentioned above, this result is not generic, since ancillas increase the amount of entanglement generated by some two-qubit interactions, such as the isotropic Heisenberg interaction DVCLP .\n\nIn the following, we will focus on computing the asymptotic entanglement capacity of the interaction\n\n Kxx:=σx⊗σx. (11)\n\nOne way to see that this is sufficient to determine the asymptotic entanglement capacity of in Eq. (9) is to note that is asymptotically equivalent to\n\n K′:=(μx+μy)σx⊗σx (12)\n\nand that for two-qubit Hamiltonians. The asymptotic equivalence of and is based on the following two facts: () and fast local unitary transformations on qubits and can simulate BCLLLPV ; conversely, () the Hamiltonian can be used to simulate given a catalytic maximally entangled state, without consuming the entanglement of , which subsequently can be re-used cata . Therefore, Hamiltonians and are asymptotically equivalent resources given local operations and an asymptotically vanishing amount of entanglement. In particular, . This equivalence could be generalized to other capacities, but for the specific case of entanglement capacity, a simpler proof is available. The simulation (), which does not require a catalyst, demonstrates . After computing , we will see that the protocol of Ref. DVCLP saturates this bound, so in fact with no need for ancillas to achieve either capacity.\n\nWe now present the optimization of Eq. (8) for . We suppose that in addition to the qubits and on which acts, -dimensional ancillas and are used, where is arbitrary. We can always write the Schmidt-decomposed initial state as\n\n |ψ⟩ = ∑i√λi|ϕi⟩AA′⊗|ηi⟩BB′ (13) = (U⊗V)(√Λ⊗IBB′)|Φ⟩ (14) = U√ΛVT⊗IBB′|Φ⟩, (15)\n\nwhere and are unitary matrices on and , is a diagonal matrix with diagonal elements , , and we have used the fact that\n\n (I⊗M)|Φ⟩=(MT⊗I)|Φ⟩ (16)\n\nfor any operator . Defining , the entanglement capacity of any Hamiltonian is\n\n EH = sup|ψ⟩tr(−dρdtlogρ−ρdlogρdt) (17) = sup|ψ⟩tr(−dρdtlogρ).\n\nThe variation of can be computed using perturbation theory DVCLP :\n\n dρdt=−itrBB′[H,|ψ⟩⟨ψ|]=2ImtrBB′(H|ψ⟩⟨ψ|). (18)\n\nLetting and considering , we have\n\n trBB′(Kxx|ψ⟩⟨ψ|) (19) = trBB′[(X⊗X)(R⊗IBB′)|Φ⟩⟨Φ|(R†⊗IBB′)] = trBB′[(XRXT⊗IBB′)|Φ⟩⟨Φ|(R†⊗IBB′)] = XRXTR†,\n\nwhere we have introduced , with the identity operator acting on the ancilla. The first equality follows simply from substitution of and by their expressions in Eqs. (11) and (15); the second uses Eq. (16); and the third employs the fact that for any operators ,\n\n trBB′[(M1⊗IBB′)|Φ⟩⟨Φ|(M2⊗IBB′)]=M1M2. (20)\n\nSince , we have\n\n EKxx=sup|ψ⟩tr(−U†dρdtUlogΛ). (21)\n\nUsing Eqs. (18) and (19), and introducing the Hermitian operators and , we have\n\n U†dρdtU=2ImXU√ΛXTV√Λ. (22)\n\nLetting attain the supremum in Eq. (21) (up to an amount that can be made arbitrarily small), we find\n\n EKxx = −2Imtr(XU√ΛXTV√ΛlogΛ) (23) = itr[(XU√ΛXTV−XTV√ΛXU)√ΛlogΛ] = itr[M(XU∘XV)],\n\nwhere we have introduced the real, skew-symmetric matrix\n\n Mij:=√λiλjlog(λj/λi), (24)\n\nand the symbol denotes the Hadamard (i.e., element-wise) product of matrices. In the second line of Eq. (23) we have used\n\n ImtrA=tr(A−A†)/2i (25)\n\nand the fact that , , and are Hermitian. The last line can be checked by explicitly writing the trace in terms of matrix elements.\n\nFrom Eq. (23) we obtain the following upper bound for (here denotes the element-wise absolute value, i.e., ):\n\n EKxx ≤ (26) ≤ supPtr(\\vbox∘|M\\vbox∘|P) ≤ 2maxx√x(1−x)log[x/(1−x)] = α≈1.9123,\n\nwhere is a permutation operator and . The first line uses the triangle inequality. The second inequality follows from noticing that is a doubly substochastic matrix Bhatia . Indeed, for any two complex numbers and one has that , and consequently, for any two unitary matrices and ,\n\n ∑i|VijWij|≤∑i(|Vij|2+|Wij|2)/2=1, ∑j|VijWij|≤∑j(|Vij|2+|Wij|2)/2=1, (27)\n\nwhich implies that the matrix , with entries , is doubly substochastic. Therefore a doubly stochastic matrix exists such that for all and Bhatia , so that . But is a convex combination of permutation operators , , which implies that . Finally, the third inequality in Eq. (26) follows from noticing that\n\n |M|ij = √λiλj|log(λj/λi)| (28) = (λi+λj)√λiλi+λjλjλi+λj|log(λj/λi)| ≤ (λi+λj)maxx√x(1−x)log[x/(1−x)] = (λi+λj)α/2,\n\nand that\n\n (29)\n\nwhere we have used the facts that is a permutation matrix and that . Comparison of Eqs. (7) and (26) shows that, indeed, , completing the proof.\n\nWe have shown that ancillary systems are not needed when optimizing entanglement generation by any two-qubit Hamiltonian with canonical form given by Eq. (9). More specifically, there is a universal optimal two-qubit initial state given by DVCLP\n\n |ψmax⟩:=√x0|0⟩A⊗|1⟩B−i√1−x0|1⟩A⊗|0⟩B. (30)\n\nAs an application of the above, we discuss how to use the Hamiltonian to enable classical communication between Alice and Bob. This has been studied in BHLS , in which the entanglement assisted forward classical capacity (the maximum rate for the Hamiltonian to communicate from Alice to Bob when free, unlimited shared entanglement is available) is shown to be\n\n CE→(H)=supE[limt→0χ(trAA′e−iHtE)−χ(trAA′E)t], (31)\n\nwhere is an ensemble of bipartite states, and denote the respective transformed ensembles and , and\n\n χ({pi,ρi}):=S(∑ipiρi)−∑ipiS(ρi) (32)\n\nis the Holevo information of the ensemble , where is the von Neumann entropy. Reference BHLS also describes a protocol to achieve the rate in the bracket of Eq. (31) for any ensemble .\n\nFor any two-qubit Hamiltonian , Ref. BS02 constructs an ensemble with communication rate , which implies . This ensemble, which is not necessarily optimal, is defined in terms of an optimal state for entanglement generation. This ensemble can now be made more explicit for Hamiltonian in light of our findings:\n\n p1 := 12, |ψ1⟩:=√x0|0⟩A⊗|1⟩B+i√1−x0|1⟩A⊗|0⟩B, p2 := 12, |ψ2⟩:=√x0|0⟩A⊗|0⟩B−i√1−x0|1⟩A⊗|1⟩B,\n\nwhere is defined after Eq. (7). For ensemble we find\n\n χ(trAE1)=S(I/2)−S(trA|ψ1⟩⟨ψ1|)=1−E(|ψmax⟩) χ(trA(e−iδtKE1))=1−[E(|ψmax⟩)−δtEK] (33)\n\nand therefore the net rate at which classical bits are transmitted is indeed .\n\nNext we present an alternative ensemble of product states with the same communication rate:\n\n p1 := p2 :=\n\nHere, we use to simulate cata , under which the ensemble evolves. For ensemble , , so\n\n χ(trAE2)=H2(x0) χ(trA(e−iδtKE2))=H2(x0−2δt√x0(1−x0)) =H2(x0)+EKδt (34)\n\n(where is the binary entropy). Thus the communication rate is again .\n\nThe main difference between these two ensembles is that the states in ensemble are entangled whereas the states in ensemble are not. In the first case the interaction is used to decrease the degree of entanglement between Alice and Bob or, equivalently, to make the states of Bob’s ensemble less mixed and thus more distinguishable. The same increase of distinguishability for the pure states of Bob’s ensemble is achieved by conditionally rotating them with , in a way that they become more orthogonal to each other. We note, in addition, that ensembles and can be prepared using different remote state preparation techniques rsp .",
null,
"Figure 1: Numerically optimized entanglement capacity of the two-qubit Hamiltonian μxσx⊗σx+μyσy⊗σy+σz⊗σz with single qubit ancillas on each side. The vertical axis in the left figure is in units of α.\n\nIn conclusion, we have computed the asymptotic entanglement capacities of all two-qubit Hamiltonians that are locally equivalent to by showing that this capacity can be achieved without the use of ancillas. However, as discussed above, ancillas are necessary to achieve the capacity in general. Although we do not have a closed form expression for the capacity of an arbitrary two-qubit Hamiltonian, we can present partial results in this direction. The numerically optimized entanglement capacity of a general two-qubit Hamiltonian is shown in Fig. 1. Numerically, we find that the optimum can be achieved with single-qubit ancillas on both sides. For Hamiltonians of the form , we conjecture that the entanglement capacity is given by\n\n EKμxy=2max{ √p1p2log(p1/p2)[sinθ+μxysin(φ−ξ)] + √p2p4log(p2/p4)[sinφ+μxysin(θ−ξ)] + √p1p4log(p1/p4)μxysinξ} (35)\n\nwhere the maximum is taken over , , , and . This expression was found by investigating the structure of the numerical optimum, and it agrees well with the numerical results. It does not seem possible to simplify this expression further, which suggests that in general, capacities may not have simple closed form expressions, but can only be expressed as maximizations of multivariable transcendental functions. Nevertheless, it would be useful to show that this maximization can be taken over a finite number of parameters by proving an upper bound on the dimension of the ancillas.\n\nWe thank Aram Harrow, Patrick Hayden, and John Smolin for interesting discussions. We also thank Michael Nielsen for comments on the manuscript. AMC received support from the Fannie and John Hertz Foundation. DWL was supported in part by the NSA under ARO Grant No. DAAG55-98-C-0041. FV thanks John Preskill and the Caltech IQI for their hospitality. GV is supported by the US National Science Foundation under Grant No. EIA-0086038. This work was supported in part by the Cambridge–MIT Foundation, by the Department of Energy under cooperative research agreement DE-FC02-94ER40818, and by the National Security Agency and Advanced Research and Development Activity under Army Research Office contract DAAD19-01-1-0656."
] | [
null,
"https://media.arxiv-vanity.com/render-output/6663540/x1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87404764,"math_prob":0.97206295,"size":16558,"snap":"2022-40-2023-06","text_gpt3_token_len":4073,"char_repetition_ratio":0.14226168,"word_repetition_ratio":0.028308824,"special_character_ratio":0.24966784,"punctuation_ratio":0.1835675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9840712,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T19:02:46Z\",\"WARC-Record-ID\":\"<urn:uuid:4bef0a38-351b-441f-a390-db27e12bbc30>\",\"Content-Length\":\"650408\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3e05766-5e34-40f6-bf01-7262d82e92cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:86691c77-ff19-403f-b875-869680bbd81d>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/quant-ph/0207052/\",\"WARC-Payload-Digest\":\"sha1:4GGNJZ42PVHQDK6ZLOXTTPXNI2P455MH\",\"WARC-Block-Digest\":\"sha1:WYN3X44GP5WEMBU42Q7XSMXMJ5Z3MSRF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500074.73_warc_CC-MAIN-20230203185547-20230203215547-00545.warc.gz\"}"} |
https://solvedlib.com/n/average-value-find-the-average-value-of-f-x-e-on-2-5-see,2329613 | [
"# Average value Find the average value of f (x)= e : on [2,5] See the other questions for more practice\n\n##### A convex spherical mirror with a radius of curvature of 100 cm generates an upright image...\nA convex spherical mirror with a radius of curvature of 100 cm generates an upright image 29.7 cm from the mirror. What is the magnification of the image? O 1.60 O 0.60 O 0.33 O 0.41 0 2.46...\n##### 4 Required Informatlion Problem 15-2A Recording,.adjusting, and reporting short-term available-for-sale securities LO P3 The following information...\n4 Required Informatlion Problem 15-2A Recording,.adjusting, and reporting short-term available-for-sale securities LO P3 The following information applies to the questions displayed below Part 1 of 3 Rose Company had no short term investments prior to year 2017 It had the following transections invo...\n##### Use tabulated half celi potentials to calculate AC_ for each of the following reactions 38 25 \"C\nUse tabulated half celi potentials to calculate AC_ for each of the following reactions 38 25 \"C...\n##### 21. The sense of taste is called A. olfaction. B. perception C. gustation D. tastant. E....\n21. The sense of taste is called A. olfaction. B. perception C. gustation D. tastant. E. mastication. 22. Palpebrae is another name for the A. eyes B. eyelids. C. eyebrows. D. eyelashes. E. conjunctiva. 23. The lacrimal glands when inflamed. A. cause a sty B. constantly produce a fluid called tears....\n##### Using the Second Derivative Test In Exercises $33-44$ , find all relative extrema of the function. Use the Second Derivative Test where applicable. $f(x)=\\sqrt{x^{2}+1}$\nUsing the Second Derivative Test In Exercises $33-44$ , find all relative extrema of the function. Use the Second Derivative Test where applicable. $f(x)=\\sqrt{x^{2}+1}$...\n##### A) Find the local maximum and minimum values and saddle point(s) of the function flx,y)=x2+y2 _ xy + 9x 6y + 12b) Calculate the iterated integral. 6 6 Inx xy lny dy dx\na) Find the local maximum and minimum values and saddle point(s) of the function flx,y)=x2+y2 _ xy + 9x 6y + 12 b) Calculate the iterated integral. 6 6 Inx xy lny dy dx...\n##### QUESTION ?Set up. but do not evaluate, an integral (or the area of the surface obtained by rotating the curve _ about the given axis; Y-? 1Syss;y-axisQUESTION 8Find the Iength of tne curve 7 - (24), 0<753\nQUESTION ? Set up. but do not evaluate, an integral (or the area of the surface obtained by rotating the curve _ about the given axis; Y-? 1Syss;y-axis QUESTION 8 Find the Iength of tne curve 7 - (24), 0<753...\n##### Calculate the average speed of hydrogen nuclei (protons) in agas of temperature 30 million K.Compare your answer with the speed of a galaxy moving in acircular orbit of radius 1 Mpc around a galaxy cluster of mass10^14 solar masses.i need step by step instructionsfirst answer MUST be in km/ssecond answer i need to know if the first answer is less than,greater than, or equal to the speed of a galaxy in the secondquestion.\nCalculate the average speed of hydrogen nuclei (protons) in a gas of temperature 30 million K. Compare your answer with the speed of a galaxy moving in a circular orbit of radius 1 Mpc around a galaxy cluster of mass 10^14 solar masses. i need step by step instructions first answer MUST be in km/s s...\n##### Graph the following geometric sequence and use the graph t0 find an expression for the nth term:180, 30, 5Graph the given geometric sequence. Choose the correct graph below:2002005200200The nth term of Ihe sequence is a (Simplify your answer: Use integers or (ractions for any numbers in the expression )\nGraph the following geometric sequence and use the graph t0 find an expression for the nth term: 180, 30, 5 Graph the given geometric sequence. Choose the correct graph below: 200 2005 200 200 The nth term of Ihe sequence is a (Simplify your answer: Use integers or (ractions for any numbers in the e...\n##### F(x) = tan-1 ]Ox 19.\nf(x) = tan-1 ]Ox 19....\n##### 34. (5 pts) Fill in the missing reagents in the following reaction sequence: start here!! 3-hexyne...\n34. (5 pts) Fill in the missing reagents in the following reaction sequence: start here!! 3-hexyne trans-3-hexene 1-butyne...\n##### In an electrochemical cell composed of Zn(s) IZn2+ ICu2+ ICuts) Which species is the oxidizing agent? Which species is the reducing agent?\nIn an electrochemical cell composed of Zn(s) IZn2+ ICu2+ ICuts) Which species is the oxidizing agent? Which species is the reducing agent?...\n##### Match the theory to its focus. A: Consequntialism B: Deontology Which one goes with A, which...\nMatch the theory to its focus. A: Consequntialism B: Deontology Which one goes with A, which one goes with B 1: The goodness of the outcome is all that morally matters 2: The intended consequences are all that morally matters 3: The rightness of the action is all the morally matter (Note: 1 answer c..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8434191,"math_prob":0.959059,"size":13691,"snap":"2022-40-2023-06","text_gpt3_token_len":3721,"char_repetition_ratio":0.12194053,"word_repetition_ratio":0.44096488,"special_character_ratio":0.27193046,"punctuation_ratio":0.15093708,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9887643,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T07:17:59Z\",\"WARC-Record-ID\":\"<urn:uuid:d6bf4f87-5e57-4b2d-a7cb-2955d08ec5ca>\",\"Content-Length\":\"96459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc0bdaf4-3046-45c8-bc45-ccd86c225a16>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b2b2ae0-6aa4-44a0-9c49-3d5d022fe20f>\",\"WARC-IP-Address\":\"172.67.132.66\",\"WARC-Target-URI\":\"https://solvedlib.com/n/average-value-find-the-average-value-of-f-x-e-on-2-5-see,2329613\",\"WARC-Payload-Digest\":\"sha1:IRGZZ43FTYHPWKP2ZSUC2EMR4LK7MZDH\",\"WARC-Block-Digest\":\"sha1:ASMIC2YEZIPJQYQDUVQLHX3X3W2DVNGT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337971.74_warc_CC-MAIN-20221007045521-20221007075521-00013.warc.gz\"}"} |
https://solvedlib.com/n/determine-ifl-f-z-dxis-convergent-or-divergent-when2-t1,2122346 | [
"# Determine ifL =f(z) dxis convergent Or divergent when2 T1 <1,f(z)11 > 1,22and find its value if convergent.\n\n###### Question:\n\nDetermine if L = f(z) dx is convergent Or divergent when 2 T 1 <1, f(z) 1 1 > 1, 22 and find its value if convergent.",
null,
"",
null,
"#### Similar Solved Questions\n\n##### Constructing Confidence Intervals, Part 1: Estimating Proportion Assume that a sample is used to estimate a...\nConstructing Confidence Intervals, Part 1: Estimating Proportion Assume that a sample is used to estimate a population proportion p. Find the margin of error E that corresponds to the given statistics and confidence level: In a random sample of 200 college students, 110 had part-time jobs. Find the ...\n##### Print Item Entries for Stock Dividends Senior Life Co. is an HMO for businesses in the...\nPrint Item Entries for Stock Dividends Senior Life Co. is an HMO for businesses in the Portland area. The following account balances appear on the balance sheet of Senior Life Co.: Common stock (380,000 shares authorized; 5,000 shares issued), $25 par,$125,000; Paid-In Capital in excess of par- com...\n##### 3nd order Recutrenus Find ~explic;L fbrmwlas fr e (lloniJ' ap= 5, 01 26, Anta YGn+FFan(nzo)(L) bp = 5,6=6 ) bnta ~bnh IQbn 70 C=T-a C0, Cn+a (R+7)Cnn- ZrCn (422)\n3nd order Recutrenus Find ~explic;L fbrmwlas fr e (lloniJ' ap= 5, 01 26, Anta YGn+FFan (nzo) (L) bp = 5,6=6 ) bnta ~bnh IQbn 70 C=T-a C0, Cn+a (R+7)Cnn- ZrCn (422)...\n##### Oueatlon 54If the partlal pressure af carbon dioxlde is decreasing what has probably occurred? A the partlal pressure af oxygen Increased B. the rate of convertlng carbankc acld Into water and carbon dloxide has increased C the PH will decrease D.a The rate of convertlng carbonic acld Into hydrogen ion ad blcarbonate ion has Increased E. Less hemoglobin is available\nOueatlon 54 If the partlal pressure af carbon dioxlde is decreasing what has probably occurred? A the partlal pressure af oxygen Increased B. the rate of convertlng carbankc acld Into water and carbon dloxide has increased C the PH will decrease D.a The rate of convertlng carbonic acld Into hydrogen...\n##### Problem 1 Two identical pucks, each of inertia m, are connected to a rod of length...\nProblem 1 Two identical pucks, each of inertia m, are connected to a rod of length 2r and negligible inertia that is pivoted about its center (that is, there is some sort of pin though its center, around which it can rotate without friction). A third puck of inertia m/2 strikes one of the connected ...\n##### Does the following graph have an Euler path? Why or why not?\nDoes the following graph have an Euler path? Why or why not?...\n##### We have a dataset with n= 10 pairs of observations (Li, Yi), and n n Στι...\nWe have a dataset with n= 10 pairs of observations (Li, Yi), and n n Στι 683, yi = 813, i=1 i=1 n n n 3x3 = 47, 405, Xiyi = 56,089, yž = 66, 731. i=1 i=1 i=1 What is an approximate 99% confidence interval for the slope of the line of best fit?...\n##### The marching band is playing music in the stadium in daywith temperature of 40 (°C). A) Calculate the speed of sound in thegiven temperature. B) It takes 5 (s) of time for marching music toreach downtown square. What is the distance between stadium anddowntown square?\nThe marching band is playing music in the stadium in day with temperature of 40 (°C). A) Calculate the speed of sound in the given temperature. B) It takes 5 (s) of time for marching music to reach downtown square. What is the distance between stadium and downtown square?...\n##### 4. Provide a logical synthesis of compound A The starting materials provided are the only sources of carbons. The reagentslconditions are provided but YOu must choose carefully: Provide the structures of all major_products _in each of_the steps your synthesis: Clearly show ALL YQUD steps (including wrk-up)_and reagents- YQU use any other inorganiclorganic reagents make sure_they are relevant and applicable_toyour_synthesis: [10 points] starting materials reagentslconditions PClz MeLi H2Oz aq NaO\n4. Provide a logical synthesis of compound A The starting materials provided are the only sources of carbons. The reagentslconditions are provided but YOu must choose carefully: Provide the structures of all major_products _in each of_the steps your synthesis: Clearly show ALL YQUD steps (including ...\n##### What happens when you clone your favorite Tortie or Calico?What are the sexes of the kittens?Choco LateLemon PieSchnurri KatzeNutell SupremePauli Paul\nWhat happens when you clone your favorite Tortie or Calico? What are the sexes of the kittens? Choco Late Lemon Pie Schnurri Katze Nutell Supreme Pauli Paul...\n##### The diagram shows a thin rod of uniform mass distribution pivoted about one end by a...\nThe diagram shows a thin rod of uniform mass distribution pivoted about one end by a pin passing through that point. The mass of the rod is 0.490 kg and its length is 2.40 m. When the rod is released from its horizontal position, it swings down to the vertical position as shown. M L/2 1.CG (a) Deter...\n##### Apply the transformations indicated for the graph of the general functions given.(GRAPH CAN'T COPY)a. $f(x-2)$b. $-f(x)-3$c. $rac{1}{2} f(x+1)$d. $f(-x)+1$\nApply the transformations indicated for the graph of the general functions given. (GRAPH CAN'T COPY) a. $f(x-2)$ b. $-f(x)-3$ c. $\\frac{1}{2} f(x+1)$ d. $f(-x)+1$...\n##### 2. Prove using the €-0 definition thatlim 23 + z2 +r+1=4 I-1\n2. Prove using the €-0 definition that lim 23 + z2 +r+1=4 I-1...\n##### This exercise outlines a proof of the fact that two nonvertical lines with slopes $m_{1}$ and $m_{2}$ are perpendicular if and only if $m_{1} m_{2}=-1 .$ In the following figure, we've assumed that our two nonvertical lines $y=m_{1} x$ and $y=m_{2} x$ intersect at the origin. [If they did not intersect there, we could just as well work with lines parallel to these that do intersect at $(0,0)$ recalling that parallel lines have the same slope.] The proof relies on the following geometric fac\nThis exercise outlines a proof of the fact that two nonvertical lines with slopes $m_{1}$ and $m_{2}$ are perpendicular if and only if $m_{1} m_{2}=-1 .$ In the following figure, we've assumed that our two nonvertical lines $y=m_{1} x$ and $y=m_{2} x$ intersect at the origin. [If they did not i...\n##### IfA and B are mutually exclusive events, then P(A o B) = P(A) P(B) True False\nIfA and B are mutually exclusive events, then P(A o B) = P(A) P(B) True False...\n##### What is the total kinetic energy of a spherical asteroid (m = 1016 kg) impacting the Earth (v = 28 km/s) if it’s also rotating once every 40 minutes? The radius of the asteroid is 945 m.\nWhat is the total kinetic energy of a spherical asteroid (m = 1016 kg) impacting the Earth (v = 28 km/s) if it’s also rotating once every 40 minutes? The radius of the asteroid is 945 m....\n##### Solve each system by the substitution method. Identify inconsistent systems and systems with dependent equations, using set notation to express their solution sets. $\\left\\{\\begin{array}{l}{\\frac{x}{6}-\\frac{y}{2}=\\frac{1}{3}} \\\\ {x+2 y=-3}\\end{array}\\right.$\nSolve each system by the substitution method. Identify inconsistent systems and systems with dependent equations, using set notation to express their solution sets. $\\left\\{\\begin{array}{l}{\\frac{x}{6}-\\frac{y}{2}=\\frac{1}{3}} \\\\ {x+2 y=-3}\\end{array}\\right.$...\n##### (4 pts) Below= illustration of the CAP (catabolite lactose operon of Escherichia coll: Draw the placement of RNA polymerase; activator protein), lactose, and the lac repressor (Lacl) when col grown = medium containing both 2% glucose and 2%, lactose;CAP binding site_laczPromoter-~ 10 _ Operalor _DNATeei(Ia-lluatue Lut \"artra%(Apts) Below is an illustration of the Tryptophan operon of Escherlchia coll: Draw the coupling of the lcader sequences which cause transcription to be continued (1) an\n(4 pts) Below= illustration of the CAP (catabolite lactose operon of Escherichia coll: Draw the placement of RNA polymerase; activator protein), lactose, and the lac repressor (Lacl) when col grown = medium containing both 2% glucose and 2%, lactose; CAP binding site_ lacz Promoter-~ 10 _ Operalor _...\n##### Required information Use the following information for the Problems below. [The following information applies to the...\nRequired information Use the following information for the Problems below. [The following information applies to the questions displayed below.] Golden Corp., a merchandiser, recently completed its 2017 operations. For the year, (1) all sales are credit sales, (2) all credits to Accounts Receivable ...\n##### Does Considel the the Mean function Value - 2 Theorem 1i Why or why not? [1,2]Find all points c that are guar ranteed by the conclusion of the theorem: Show work\nDoes Considel the the Mean function Value - 2 Theorem 1i Why or why not? [1,2] Find all points c that are guar ranteed by the conclusion of the theorem: Show work...\n##### Is it possible for the solution set of a quadratic equation with integer coefficients to consist of a single irrational number?\nIs it possible for the solution set of a quadratic equation with integer coefficients to consist of a single irrational number?...\n##### Your firm plans to issue bonds with ten years to maturity and semiannual interest payments. The...\nYour firm plans to issue bonds with ten years to maturity and semiannual interest payments. The bonds are currently priced at $985 per bond and have a$1,000 par value. The bonds pay a 12% coupon rate. What is the bond's annual yield to maturity? 6.21% O 12.26% 24.41% O None of the above...\n##### 1. What is the cost-of-carry model for pricing futures and forward contracts? Provide a “fair” futures...\n1. What is the cost-of-carry model for pricing futures and forward contracts? Provide a “fair” futures price for each of the following assets: a. S&P 500 Index: Contract June (3 months away); Current value of S&P 500 2450; 3-month Libor=2%....\n##### Figure out all of Catalan numbers from 60 Up to 012 . ANS_How many upright paths from (0,0) to (6,6) never Cross the line y = x? (\"Upright path\" delined problcm 34 )ANS_Solvc Problem 8, pagc 24. Shov S1Cpx.Solve Problem 224, page 25_ Explain how your melhod ueplicz. (Dont write out all the tcnns )\nFigure out all of Catalan numbers from 60 Up to 012 . ANS_ How many upright paths from (0,0) to (6,6) never Cross the line y = x? (\"Upright path\" delined problcm 34 ) ANS_ Solvc Problem 8, pagc 24. Shov S1Cpx. Solve Problem 224, page 25_ Explain how your melhod ueplicz. (Dont write out all...\n##### How do you combine like terms in (4r ^ { 2} + 7r ) - ( 3r ^ { 2} - 2r + 7)?\nHow do you combine like terms in (4r ^ { 2} + 7r ) - ( 3r ^ { 2} - 2r + 7)?..."
] | [
null,
"https://cdn.numerade.com/ask_images/31923241b4404c10aacb528374edd8ec.jpg ",
null,
"https://cdn.numerade.com/previews/c6f578c1-da22-4ec9-bf02-2deb74049a3c_large.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86136055,"math_prob":0.9778741,"size":14904,"snap":"2023-40-2023-50","text_gpt3_token_len":4173,"char_repetition_ratio":0.09852349,"word_repetition_ratio":0.49642006,"special_character_ratio":0.28026032,"punctuation_ratio":0.14027733,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99274224,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T04:50:01Z\",\"WARC-Record-ID\":\"<urn:uuid:8f2ecfa3-5f5a-422e-a81e-1aa69d113e2b>\",\"Content-Length\":\"88377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8c70087-2525-4ebb-ac05-a6c606421ea3>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1e4a58c-dbe4-489d-abaa-f3e3a1fe5469>\",\"WARC-IP-Address\":\"172.67.132.66\",\"WARC-Target-URI\":\"https://solvedlib.com/n/determine-ifl-f-z-dxis-convergent-or-divergent-when2-t1,2122346\",\"WARC-Payload-Digest\":\"sha1:JEEM24CTGMNMWKEFNBKQV6NA2OUUQ45W\",\"WARC-Block-Digest\":\"sha1:B643KVMMJ6RB6MLPREKX32YN2XPE3GDR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506479.32_warc_CC-MAIN-20230923030601-20230923060601-00405.warc.gz\"}"} |
https://www.chiropractordevriese.be/09-11-01/13700.html | [
"# transmission ratio calculation in ball mill\n\n• ### Best way to determine the ball-to-powder ratio in ball\n\nHowever if you are dealing with coarse feed between 1-3 mm then it will drop to 1.5). The true density of balls are around 7.5. Then the optimal mass ratio of ball to powder in ball mill is\n\nChat Online\n• ### A Method to Determine the Ball Filling in Miduk Copper\n\nballs which exist in mill 𝐴𝑏 each ball abrasion (g) 𝐴t total ball abrasion in the mill (g) 𝑣b each ball volume (m3) 𝑓b supposed ball filling percentage A r ball abrasion rate in the mill. If above calculation were done again for 𝑓b = 2 total ball abrasion will get 60.96 finally.\n\nChat Online\n\nNov 18 2014 · Slide 2 Content Introduction Circulating load calculate Most factories the purpose of control Effective factors for circulating load Test design Conclusion References 3. Slide 3 circulating load ratio As the ratio of the amount of solids going through the ball mill divided by the amount of solids going through the circuit.\n\nChat Online\n• ### TECHNICAL How to Spec a Mill GearPower Transmission\n\nAutogenous mills are the largest in diameter since the feed grinds itself. A semi-autogenous mill uses some metallic or ceramic balls to assist the grinding process and can be slightly smaller. Ball mills are smaller still and use a larger percentage of balls to perform most of the work. Large-di-ameter mills allow for use of gear ratios\n\nChat Online\n• ### MODELING THE SPECIFIC GRINDING ENERGY AND BALL\n\nBall mill power draw predicted from the Denver slide rule kW 0 200 400 600 Calculated ball-mill power draw from the m odel derived kW Data compared Line y=x Fig. 2. Comparison of the ball mill power draw from the Denver slide rule and the proposed model. Dashed line corresponds to y=x.\n\nChat Online\n• ### transmission ratio calculation in ball millGrinding\n\ntransmission ratio calculation in ball mill 4.76559 Ratings The Gulin product line consisting of more than 30 machines sets the standard for our industry. We plan to help you meet your needs with our equipment with our distribution and product support system and\n\nChat Online\n• ### Ball Mill Grinding Drives David Brown Santasalo\n\nWe can provide all elements of a mill drive system as a fully optimised solution to suit your process exactly or individual mill drive gearboxes girth gears pinions and couplings as required. Designed to deliver exceptional levels of performance and value David Brown Santasalo ball mill drives are optimised for primary and secondary\n\nChat Online\n• ### Best way to determine the ball-to-powder ratio in ball\n\nThe maximum power draw in ball mill is when ball bed is 35-40 by volume in whole empty mill volume. Considering that ball bed has a porosity of 40 the actual ball volume is considered to be\n\nChat Online\n• ### V-Belt Drive Selection HandbookBaldor\n\nA ratio is a proportional factor between two similar objects of different sizes. In a belt drive system a ratio is used to determine the speed relation between two v-belt pulleys. The speed ratio would be stable if slippage did not occur however as belt slip is inevitable the ratio\n\nChat Online\n• ### Understanding Motor and Gearbox Design 10 Steps (with\n\nInefficiency in power transmissioneach stage of gearing or chain run is approximately 90 efficient. Differences between theoretical and actual performance. Because theoretical performance is usually better than actual performance even after accounting for inefficiency it is important to choose motors and gear ratios with a healthy\n\nChat Online\n• ### TECHNICAL NOTES 8 GRINDING R. P. King\n\nthe mill is used primarily to lift the load (medium and charge). Additional power is required to keep the mill rotating. 8.1.3 Power drawn by ball semi-autogenous and autogenous mills A simplified picture of the mill load is shown in Figure 8.3 Ad this can be used to establish the essential features of a model for mill\n\nChat Online\n• ### calculation transmission power of ball mill\n\ntransmission ratio calculation in ball mill transmission ratio calculation in ball mill Ball Mill Critical Speed 911 Metallurgist. Mar 17 2017 . A Ball Mill Critical Speed (actually ball rod AG or SAG) is the speed at . The percent of critical speed is the ratio (expressed as a percentage) of the actual mill .\n\nChat Online\n• ### MODELING THE SPECIFIC GRINDING ENERGY AND BALL\n\nBall mill power draw predicted from the Denver slide rule kW 0 200 400 600 Calculated ball-mill power draw from the m odel derived kW Data compared Line y=x Fig. 2. Comparison of the ball mill power draw from the Denver slide rule and the proposed model. Dashed line corresponds to y=x.\n\nChat Online\n• ### Transmission Ratio Calculation In Ball Mill- SPECIAL\n\nTransmission Ratio Calculation In Ball Mill Transmission Ratio Calculation In Ball Mill. In a planetary ball mill and its speed ratio against revolu tion of a disk on th e specific impact energy of balls cal culated from the simulation on the basis of the discrete. Get a Quote Send Message.\n\nChat Online\n• ### Effects of the speed ratio on the efficiency of planetary\n\nThe ignition time (t ig) of the mechanically induced self-sustaining reaction (MSR) process involving the formation of TiB 2 from Ti/2B elemental mixtures was used to study the influence of the ratio (k = -ω v /ω d) between the rotational speed of the supporting disc (ω d) and vials (ω v) on the milling efficiency of a Pulverisette 4 planetary mill.The variation of the inverse of the\n\nChat Online\n• ### Ball millWikipedia\n\nA ball mill is a type of grinder used to grind blend and sometimes for mixing of materials for use in mineral dressing processes paints pyrotechnics ceramics and selective laser sintering works on the principle of impact and attrition size reduction is done by impact as the balls drop from near the top of the shell. A ball mill consists of a hollow cylindrical shell rotati\n\nChat Online\n• ### Calculation Transmission Power Of Ball Mill\n\nBall mill power transmissionuniversalreligioneu.Calculation transmission power of ball mill know more calculation transmission power of ball millhsmindiain mill material ball mill machinel is the material.247 online nishi entball mill parts ahmedabad brake drum . gear transmission ratio 3.8 and gear efficiency 0.9can be\n\nChat Online\n• ### Vario-Planetary Mill PULVERISETTE 4 classic line\n\nUnique with a variable transmission ratio. In contrast to conventional Planetary Mills the rotational speed of the grinding bowls and supporting disk can be configured separately in the PULVERISETTE 4 classic line with 2 working stations. Your advantage A single mill for mechanical activating and alloying providing optimum grinding conditions suited to the respective material to be ground\n\nChat Online\n• ### Calculation Transmission Power Of Ball Mill\n\nBall mill power transmissionuniversalreligioneu.Calculation transmission power of ball mill know more calculation transmission power of ball millhsmindiain mill material ball mill machinel is the material.247 online nishi entball mill parts ahmedabad brake drum . gear transmission ratio 3.8 and gear efficiency 0.9can be\n\nChat Online\n• ### transmission ratio calculation in ball mill\n\ncalculation transmission power of ball mill thedugong calculation transmission power of ball mill The Power Consumption Calculation of a Ball Drum Mill Idosi Main design criteria 340 2.1 Length to diameter ratio 340 2.2 Mill internal dimensions 342 2.2.1 Mill\n\nChat Online\n• ### AMIT 135 Lesson 2 Circuit Mass BalancingMining Mill\n\nDensity ( rho) is the ratio of the mass weight (M) of a substance and the total volume (V) Water has a density of 1.0gm/ml or 1.0gm/cm 3 1000kg/m 31tonne/m 3 62.4lbs/ft 3 Specific gravity is the ratio of the material density over the density of water Marcy Scale\n\nChat Online\n• ### Calculation In Filling Ratio For Ball Mill- Mining machine\n\nTransmission ratio calculation in ball mill two chamb mill sizing ball mill sizing 2 compartments calculation by changing the diameter value in point 5 the ld ratio the mill speed the filling. Calculate And Select Ball Mill Ball Size For Optimum Grinding.\n\nChat Online\n• ### Gear trainWikipedia\n\nA close-ratio transmission is a transmission in which there is a relatively little difference between the gear ratios of the gears. For example a transmission with an engine shaft to drive shaft ratio of 4 1 in first gear and 2 1 in second gear would be considered wide-ratio when compared to another transmission with a ratio of 4 1 in first\n\nChat Online\n\nNov 18 2014 · Slide 2 Content Introduction Circulating load calculate Most factories the purpose of control Effective factors for circulating load Test design Conclusion References 3. Slide 3 circulating load ratio As the ratio of the amount of solids going through the ball mill divided by the amount of solids going through the circuit.\n\nChat Online\n• ### How Can I calculate new ball size and weight desing for\n\nMar 10 2011 · Re How Can I calculate new ball size and weight desing for ball mill. Hi We have a similar mill. Pregrinding with hammer crusher and mono-chamber mill. Thisis what a proposed based on literture review i did and others agree its more and less correct. But remember it all depends on your mill feed size after pregrinding.\n\nChat Online\n• ### V-Belt Drive Selection HandbookBaldor\n\nA ratio is a proportional factor between two similar objects of different sizes. In a belt drive system a ratio is used to determine the speed relation between two v-belt pulleys. The speed ratio would be stable if slippage did not occur however as belt slip is inevitable the ratio\n\nChat Online\n• ### Reduction Ratioan overview ScienceDirect Topics\n\nFriction reduction ratio is the function of the average velocity of fracturing fluid the gelled agent concentration and the proppant concentration. Based on the linear regression of 1049 experimental and field data Lord et al. put forward the following empirical formula to calculate the friction reduction ratio of HPG fracturing fluid (Lord 1987)\n\nChat Online\n• ### Best way to determine the ball-to-powder ratio in ball\n\nThe maximum power draw in ball mill is when ball bed is 35-40 by volume in whole empty mill volume. Considering that ball bed has a porosity of 40 the actual ball volume is considered to be\n\nChat Online\n• ### transmission ratio calculation in ball mill\n\ntransmission ratio calculation in ball mill previously at a mill stop the measurement of ball charge filling degree could be undertaken and will provide the static media charge angle (βstatic = 143). An online measurement of the similar angle (βdynamic) when the mill is running provides information about the dynamics of the charge.\n\nChat Online\n• ### Mill SpeedCritical Speed\n\nMill Speed . No matter how large or small a mill ball mill ceramic lined mill pebble mill jar mill or laboratory jar rolling mill its rotational speed is important to proper and efficient mill operation. Too low a speed and little energy is imparted on the product.\n\nChat Online"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83224446,"math_prob":0.9196067,"size":12685,"snap":"2021-21-2021-25","text_gpt3_token_len":2791,"char_repetition_ratio":0.18058513,"word_repetition_ratio":0.28015563,"special_character_ratio":0.19574301,"punctuation_ratio":0.04052166,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9655367,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T08:44:41Z\",\"WARC-Record-ID\":\"<urn:uuid:f6206044-8f18-4383-a6e8-adda954d975a>\",\"Content-Length\":\"27523\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20c8788d-5f41-4884-9155-5b03f4361d21>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7a2d460-374b-4da9-95c7-c48d85aef369>\",\"WARC-IP-Address\":\"104.21.54.101\",\"WARC-Target-URI\":\"https://www.chiropractordevriese.be/09-11-01/13700.html\",\"WARC-Payload-Digest\":\"sha1:NKI32IYPAB7VFG5MEHWJUUKDY4GMBIUE\",\"WARC-Block-Digest\":\"sha1:6OXKXEN77VWK4H4RH6KAGIF4MXT52J2W\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988753.91_warc_CC-MAIN-20210506083716-20210506113716-00519.warc.gz\"}"} |
http://216rr.com/?m=art-detail-id-35210.html | [
" 卡罗利纳·桑帕伊奥[25P] - 狼人干综合伊人网\n\n### 首页 » 欧美图片 » 卡罗利纳·桑帕伊奥[25P]",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<",
null,
"<"
] | [
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/1.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/2.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/3.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/4.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/5.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/6.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/7.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/8.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/9.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/10.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/11.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/12.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/13.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/14.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/15.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/16.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/17.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/18.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/19.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/20.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/21.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/22.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/23.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/24.jpg",
null,
"http://pic1.avkdimage.com/oumei/2019/09/03/30f98302/25.jpg",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.63407737,"math_prob":0.9901236,"size":416,"snap":"2019-35-2019-39","text_gpt3_token_len":445,"char_repetition_ratio":0.25,"word_repetition_ratio":0.30864197,"special_character_ratio":0.35096154,"punctuation_ratio":0.04411765,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.950058,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T18:12:01Z\",\"WARC-Record-ID\":\"<urn:uuid:d1d0f307-32f3-4528-a34b-77b5d031830c>\",\"Content-Length\":\"8302\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9c76df9-1b1d-4148-a6c4-762ead6674ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:6634913f-3a68-4d7a-8b95-8c462bd38602>\",\"WARC-IP-Address\":\"107.179.27.150\",\"WARC-Target-URI\":\"http://216rr.com/?m=art-detail-id-35210.html\",\"WARC-Payload-Digest\":\"sha1:LUZ6ECQZQO7REORSWUZGFGWB44SFIBYR\",\"WARC-Block-Digest\":\"sha1:R645Z4VLGZ7CQH5QT4ODGCRBKLFPHGFW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514577478.95_warc_CC-MAIN-20190923172009-20190923194009-00243.warc.gz\"}"} |
https://excellup.com/classten/scienceten/electromagnetismNcertEx1.aspx | [
"Class 10 Physics\n\n# Electromagnetism: NCERT Exercise\n\nState the rule to determine the direction of a\n\n• Magnetic field produced around a straight conductor-carrying current\n\nAnswer: Right hand thumb rule or Maxwell’s corkscrew rule\n• Force experienced by a current-carrying straight conductor placed in a magnetic field which is perpendicular to it\n\n• Current induced in a coil due to its rotation in a magnetic field\n\nDraw a labelled diagram of an electric motor. Explain its principle and working. What is the function of a split ring in an electric motor?\n\nAnswer: Working of Electric Motor: Electrical energy is converted into mechanical energy by using an electric motor. Electric motor works on the basis of rule suggested by Marie Ampere and Fleming’s Left Hand Rule.",
null,
"In an electric motor, a rectangular coil is suspended between the two poles of a magnetic field. The electric supply to the coil is connected with a commutator. Commutator is a device which reverses the direction of flow of electric current through a circuit.\n\nWhen electric current is supplied to the coil of electric motor, it gets deflected because of magnetic field. As it reaches the half way, the split ring which acts as commutator reverses the direction of flow of electric current. Reversal of direction of current reverses the direction of forces acting on the coil. The change in direction of force pushes the coil; and it moves another half turn. Thus, the coil completes one rotation around the axle. Continuation of this process keeps the motor in rotation.\n\nIn commercial motor, electromagnet; instead of permanent magnet; and armature is used. Armature is a soft iron core with large number of conducting wire turns over it. Large number of turns of conducting wire enhances the magnetic field produced by armature.\n\nName some devices in which electric motors are used.\n\nAnswer: Electric fan, mixer grinder, tape recorder, CD player, hard disk drive, washing machine, cooler, toy car, vacuum cleaner, etc. are some devices in which electric motor is used.\n\nA coil of insulated copper wire is connected to a galvanometer. What will happen if a bar magnet is (i) pushed into the coil, (ii) withdrawn from inside the coil, (iii) held stationary inside the coil?\n\nAnswer: When the bar magnet is pushed into the coil or withdrawn from the coil; the galvanometer needle would show deflection. When the bar magnet is kept stationary inside the coil; the galvanometer needle would show no deflection.\n\nTwo circular coils A and B are placed close to each other. If the current in the coil A is changed, will some current be induced in the coil B? Give reason.\n\nAnswer: When two circular coils A and B are placed close to each other and the current in coil A is changed, it leads to induction of current in coil B. This happens because of change in magnetic field of coil A; because of change in current in this coil.\n\nExplain the underlying principle and working of an electric generator by drawing a labelled diagram. What is the function of brushes?\n\nAnswer: The structure of electric generator is similar to that of an electric motor. In case of an electric generator a rectangular armature is placed within the magnetic field of a permanent magnet. The armature is attached to wire and is positioned in way that it can move around an axle. When the armature moves within the magnetic field an electric current is induced.",
null,
"The direction of induced current changes, when the armature crosses the halfway mark of its rotation. Thus, the direction of current changes once in every rotation. Due to this, the electric generator usually produces alternate current, i.e. AC.\n\nTo convert an AC generator into a DC generator, a split ring commutator is used. This helps in producing direct current.\n\nWhen does an electric short circuit occur?\n\nAnswer: When positive and negative wires touch each other, the resistance suddenly decreases and current increases. This leads to excessive heating of wire which manifests in the form of sparks. This is called short circuit.\n\nWhat is the function of an earth wire? Why is it necessary to earth metallic appliances?\n\nAnswer: The earth wire transfers any leakage of electric current to the earth. The leaked current can otherwise reach the metallic body of an appliance and can lead to electric shock. Earth wire prevents from electric shock by safety transferring the leaked current to the earth."
] | [
null,
"https://excellup.com/classten/scienceten/ImageMagnetism/10_physics_magnetism_fig_10.png",
null,
"https://excellup.com/classten/scienceten/ImageMagnetism/10_physics_magnetism_fig_11.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9245846,"math_prob":0.9322997,"size":4459,"snap":"2019-26-2019-30","text_gpt3_token_len":906,"char_repetition_ratio":0.15892255,"word_repetition_ratio":0.041722745,"special_character_ratio":0.19197129,"punctuation_ratio":0.11018957,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95836616,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-15T20:10:25Z\",\"WARC-Record-ID\":\"<urn:uuid:9ca1c124-2366-42ab-939e-8f932579c7cf>\",\"Content-Length\":\"15871\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:852eb407-b180-4d15-aa0b-f3ecbc2a152a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a3261092-cd7f-4f26-a286-7b32fe2268fb>\",\"WARC-IP-Address\":\"198.71.162.78\",\"WARC-Target-URI\":\"https://excellup.com/classten/scienceten/electromagnetismNcertEx1.aspx\",\"WARC-Payload-Digest\":\"sha1:63TZK4KWUH2ZUSEX2N34MOOFA64HTR4X\",\"WARC-Block-Digest\":\"sha1:OSBR3SBD4LERZXCETJX4FATJULV5T3FD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524111.50_warc_CC-MAIN-20190715195204-20190715221204-00119.warc.gz\"}"} |
https://www.nagwa.com/en/videos/674120284251/ | [
"# Lesson Video: Logarithmic Functions Mathematics\n\nIn this video, we will learn how to identify, write, and evaluate a logarithmic function as an inverse of the exponential function.\n\n12:30\n\n### Video Transcript\n\nIn this video, we will learn how to identify, write, and evaluate a logarithmic function as the inverse of an exponential function. We will begin by recalling the link between exponential and logarithmic functions.\n\nLogarithmic functions are the inverses or opposite of exponential functions. If we consider the exponential function 𝑓 of 𝑥 is equal to 𝑎 to the power of 𝑥, the inverse of 𝑓 of 𝑥 is equal to log base 𝑎 of 𝑥. This enables us to solve exponential equations by using logarithms. If 𝑦 is equal to 𝑎 to the power of 𝑥, then 𝑥 is equal to log base 𝑎 of 𝑦. We also recall that the natural logarithm written ln of 𝑥 is the inverse of 𝑒 to the power of 𝑥. Finally, when a logarithm is written without a base, it is assumed to be base 10. Log 𝑥 is the same as log base 10 of 𝑥. We will now look at some questions involving exponential and logarithmic functions.\n\nThe function 𝑓 of 𝑥 which is equal to two 𝑒 to the power of 𝑥 plus three has an inverse of the form 𝑔 of 𝑥 is equal to ln of 𝑎𝑥 plus 𝑏. What are the values of 𝑎 and 𝑏?\n\nIn order to find the inverse of any function, we begin by replacing 𝑓 of 𝑥 with 𝑦. In this case, 𝑦 is equal to two 𝑒 to the power of 𝑥 plus three. Our next step is to rearrange this equation to make 𝑥 the subject. We begin by subtracting three from both sides so that 𝑦 minus three is equal to two 𝑒 to the power of 𝑥. We can then divide both sides of our equation by two. 𝑦 minus three over two is equal to 𝑒 to the power of 𝑥. The left-hand side can be rewritten as a half 𝑦 minus three over two. We can then take the natural logarithm of both sides as we know that ln of 𝑥 is the opposite or inverse of 𝑒 to the power of 𝑥. This gives us ln of one-half 𝑦 minus three over two is equal to 𝑥.\n\nAs we have now made 𝑥 the subject of the equation, we can know swap our 𝑦- and 𝑥-variables. The inverse of 𝑓 of 𝑥 is therefore equal to ln of a half 𝑥 minus three over two. As the inverse was denoted by 𝑔 of 𝑥, this is now in the form ln of 𝑎𝑥 plus 𝑏, where 𝑎 is equal to one-half and 𝑏 is equal to negative three over two or negative three-halves. This method can be used to calculate the inverse of any function.\n\nIn our next question, we will consider the domain and range of exponential and logarithmic functions.\n\nConsider the function 𝑓 of 𝑥 is equal to 𝑏 to the power of 𝑥, where 𝑏 is a positive real number not equal to one. What is the domain of the inverse of 𝑓 of 𝑥?\n\nThere are a few ways of approaching this problem. One way would be to recall that exponential functions and logarithmic functions are the inverse of each other. This means that if 𝑓 of 𝑥 is equal to 𝑏 to the power of 𝑥, the inverse function is equal to log base 𝑏 of 𝑥. We are asked to find the domain of this function. The domain of any function is the set of input values. We know that we can only find the logarithm of positive values. This means that the domain of the inverse function is 𝑥 is greater than zero as the only values we can substitute into the function log base 𝑏 of 𝑥 are 𝑥 greater than zero.\n\nAn alternative method here would be to consider the graphs of our functions. The graph of 𝑓 of 𝑥 is shown. It intersects the 𝑦-axis at 𝑏 and the 𝑥-axis is an asymptote. The inverse of any function is its reflection in the line 𝑦 equals 𝑥. This means that the function log base 𝑏 of 𝑥 intersects the 𝑥-axis at 𝑏 and the 𝑦-axis is an asymptote. As the domain is the set of input values, we can see from the graph that the domain of the inverse of 𝑓 of 𝑥 is all numbers greater than zero.\n\nA final method would be to recall that the domain of 𝑓 is equal to the range of the inverse. Likewise, the range of 𝑓 of 𝑥 is equal to the domain of the inverse. The range of any function is the set of output values. We can see from the graph that the range of 𝑓 of 𝑥 is all values greater than zero. This once again proves that the domain of the inverse function is 𝑥 is greater than zero.\n\nOur next question involves solving a logarithmic equation.\n\nConsider the function 𝑓 of 𝑥 is equal to log base two of three 𝑥 minus one. If 𝑓 of 𝑎 is equal to three, find the value of 𝑎.\n\nWe are told that 𝑓 of 𝑎 is equal to three, so we can begin by substituting these values into the function 𝑓 of 𝑥. This gives us log base two of three 𝑎 minus one is equal to three. We recall that logarithmic functions are the inverses of exponential functions. This means that if log base 𝑎 of 𝑦 is equal to 𝑥, then 𝑎 to the power of 𝑥 is equal to 𝑦. In this question, the base 𝑎 is equal to two, the variable 𝑦 is equal to three 𝑎 minus one, and the variable 𝑥 is equal to three. Two cubed is therefore equal to three 𝑎 minus one.\n\nWe know that two cubed is equal to eight. We can then add one to both sides of this equation so that three 𝑎 is equal to nine. Dividing both sides of this equation by three gives us 𝑎 is equal to three. If the function 𝑓 of 𝑥 is equal to log base two of three 𝑥 minus one and 𝑓 of 𝑎 is equal to three, then the value of 𝑎 is three. We could check this answer on the calculator by substituting our value back in to the original function.\n\nIn our next question, we want to find the base of a logarithmic function.\n\nGiven that the graph of the function 𝑓 of 𝑥 which is equal to log base 𝑎 of 𝑥 passes through the point 1024, five, find the value of 𝑎.\n\nWe are told that our function passes through the point with 𝑥-coordinate 1024 and 𝑦-coordinate five. The function 𝑓 of 𝑥 can be rewritten as 𝑦 is equal to log base 𝑎 of 𝑥. When dealing with functions, 𝑓 of 𝑥 and 𝑦 are interchangeable. Substituting in our values of 𝑥 and 𝑦, we have five is equal to log base 𝑎 of 1024. We know that logarithmic functions and exponential functions are the inverse of each other. This means that if 𝑥 is equal to log base 𝑎 of 𝑦, then 𝑎 to the power of 𝑥 is equal to 𝑦.\n\nWe can rewrite our equation in exponential form such that 𝑎 to the power of five is equal to 1024. We can then take the fifth root of both sides of our equation to work out the value of 𝑎. The fifth root of 1024 is equal to four. We can check this by calculating four to the fifth power. This is equal to four multiplied by four multiplied by four multiplied by four multiplied by four. Four multiplied by four is equal to 16. When we multiply this by four, we get 64. 64 multiplied by four is 256. And finally, multiplying this by four gives us 1024. If the function 𝑓 of 𝑥, which is equal to log base 𝑎 of 𝑥, passes through the point 1024, five, then the base 𝑎 is equal to four.\n\nOur final question involves solving a logarithmic equation in a real-life context.\n\nThe pH of a solution is given by the formula pH is equal to negative log of 𝑎 sub H+, where 𝑎 sub H+ is the concentration of hydrogen ions. Determine the concentration of hydrogen ions in a solution whose pH is 8.4.\n\nWhen the pH is equal to 8.4, 8.4 is equal to negative log of 𝑎 sub H+. We are trying to work out this value which is the concentration of hydrogen ions. We recall that when a logarithm is written without a base, it is assumed to be base 10. Log 𝑥 is the same as log base 10 of 𝑥. We can multiply both sides of our equation by negative one such that negative 8.4 is equal to log base 10 of 𝑎 sub H+. We know that a logarithmic function is the inverse of an exponential function. If 𝑥 is equal to log base 𝑎 of 𝑦, then 𝑎 to the power of 𝑥 is equal to 𝑦. This means that 10 to the power of negative 8.4 is equal to 𝑎 sub H+. The concentration of hydrogen ions is therefore equal to 10 to the power of negative 8.4.\n\nWe will now summarize the key points from this video. We found out in this video that logarithmic functions and exponential functions are inverses of each other. This means that if 𝑥 is equal to log base 𝑎 of 𝑦, then 𝑎 to the power of 𝑥 is equal to 𝑦. This enables us to convert between exponential and logarithmic equations. We also found out that if 𝑓 of 𝑥 is equal to 𝑒 to the power of 𝑥, then the inverse of this function is equal to the natural logarithm ln of 𝑥. The domain of 𝑓 of 𝑥 is equal to the range of the inverse function. Likewise, the range of 𝑓 of 𝑥 is equal to the domain of the inverse function. This is because 𝑓 of 𝑥 and 𝑓 minus one of 𝑥 are reflections in the line 𝑦 equals 𝑥. Finally, we recalled that when a logarithm is written without a base, it is the same as base 10. Log 𝑥 is equal to log base 10 of 𝑥."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9252257,"math_prob":0.99915767,"size":8300,"snap":"2023-40-2023-50","text_gpt3_token_len":2491,"char_repetition_ratio":0.20973964,"word_repetition_ratio":0.28121352,"special_character_ratio":0.23530121,"punctuation_ratio":0.080021195,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995315,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T10:08:48Z\",\"WARC-Record-ID\":\"<urn:uuid:8467354f-40f3-4c6e-8c4d-2dac64d68ccb>\",\"Content-Length\":\"59140\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:09e9fcb7-b026-45c4-a89f-8cb952b232e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:25ff8f3b-674a-40ba-b46a-5d4b651114fe>\",\"WARC-IP-Address\":\"172.67.69.52\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/videos/674120284251/\",\"WARC-Payload-Digest\":\"sha1:PYBBLEHCH3C7U3VUQ56TTQJV4UMEXITR\",\"WARC-Block-Digest\":\"sha1:77S37STNY7GN7UOYBWJDWZIQ77EVUHU3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100499.43_warc_CC-MAIN-20231203094028-20231203124028-00697.warc.gz\"}"} |
https://reviewersph.com/mathematics-third?namadamadan=1$increased%20by%203%20$0 | [
"### Math Notes\n\nSubjects\n\n#### Algebra Solutions\n\n##### Topics || Problems\n\nIf the smaller dimension of a rectangle is increased by 3 feet and the larger dimension by 5 feet, one dimension becomes $$\\frac{3}{5}$$ of the other, and the area is increased by 135 square feet. Find the original dimensions. Solution:\nRepresentations\nSymbolsMeaning\n$$L$$Length of the rectangle\n$$W$$Width of the rectangle\n$$L_n$$New length of the rectangle\n$$W_n$$New width of the rectangle\n\n$$L_n= L+5$$\n\n$$L = L_n -5$$\n\n$$W_n= W+3$$\n\n$$W = W_n -3$$\n\n$$W_n= \\frac{3}{5} L_n$$\n\n$$W = \\frac{3}{5} L_n -3$$\n\n$$A_n= A+135$$\n\n$$L_n W_n= LW+135$$\n\n$$L_n ( \\frac{3}{5} L_n)= LW+135$$\n\n$$\\frac{3}{5} L_n ^2= ( L_n -5)(\\frac{3}{5}L_n -3)+135$$\n\n$$\\frac{3}{5} L_n ^2= \\frac{3}{5}L_n^2-3L_n -3L_n + 15)+135$$\n\n$$L_n = 25$$\n\n$$L = 25-5 = 20$$\n\n$$L = \\frac{3}{5}(25)-3 = 12$$\n\nThus the original dimensions is: 12x20 \\text{feet}"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.59848356,"math_prob":1.0000099,"size":581,"snap":"2023-40-2023-50","text_gpt3_token_len":291,"char_repetition_ratio":0.21317157,"word_repetition_ratio":0.0,"special_character_ratio":0.5886403,"punctuation_ratio":0.009009009,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000008,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T04:53:28Z\",\"WARC-Record-ID\":\"<urn:uuid:c8223be0-28d9-450a-a4e7-b24f4be7b524>\",\"Content-Length\":\"12218\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1107484-f03e-4469-948d-6766abaf73f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8bca8ce-d9ce-44c7-b7dd-54d681f53e80>\",\"WARC-IP-Address\":\"154.41.250.30\",\"WARC-Target-URI\":\"https://reviewersph.com/mathematics-third?namadamadan=1$increased%20by%203%20$0\",\"WARC-Payload-Digest\":\"sha1:F3TXW2OXVOISSE24UBG2CVPZOPMQEVBB\",\"WARC-Block-Digest\":\"sha1:GQ56O64U2NKH6RVIFE5VVPU3CIRIS4BR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510967.73_warc_CC-MAIN-20231002033129-20231002063129-00251.warc.gz\"}"} |
https://www.britannica.com/science/error-mathematics | [
"# Error\n\nmathematics\n\nError, in applied mathematics, the difference between a true value and an estimate, or approximation, of that value. In statistics, a common example is the difference between the mean of an entire population and the mean of a sample drawn from that population. In numerical analysis, round-off error is exemplified by the difference between the true value of the irrational number π and the value of rational expressions such as 22/7, 355/113, 3.14, or 3.14159. Truncation error results from ignoring all but a finite number of terms of an infinite series. For example, the exponential function ex may be expressed as the sum of the infinite series 1 + x + x2/2 + x3/6 + ⋯ + xn/n! + ⋯ Stopping the calculation after any finite value of n will give an approximation to the value of ex that will be in error, but this error can be made as small as desired by making n large enough.",
null,
"public opinion: Allowance for chance and error\nThere are no hard-and-fast rules for interpreting poll results, since there are many possible sources of bias and error. Nevertheless, for…\n\nThe relative error is the numerical difference divided by the true value; the percentage error is this ratio expressed as a percent. The term random error is sometimes used to distinguish the effects of inherent imprecision from so-called systematic error, which may originate in faulty assumptions or procedures. The methods of mathematical statistics are particularly suited to the estimation and management of random errors."
] | [
null,
"https://cdn.britannica.com/s:225x225/54/9854-004-1CCD7CFF/Jacques-Necker-portrait-Augustin-de-Saint-Aubin-painting.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9130227,"math_prob":0.9712106,"size":2355,"snap":"2020-10-2020-16","text_gpt3_token_len":478,"char_repetition_ratio":0.11186729,"word_repetition_ratio":0.0,"special_character_ratio":0.20382166,"punctuation_ratio":0.0979021,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951855,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T07:02:29Z\",\"WARC-Record-ID\":\"<urn:uuid:9f6eb5a0-55e5-420a-b26a-10759c2d707b>\",\"Content-Length\":\"78642\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:037aec54-6754-4c01-9cbc-a3a5a9b03241>\",\"WARC-Concurrent-To\":\"<urn:uuid:052d8cc2-aa2d-4caa-8d34-3a4531a1ee86>\",\"WARC-IP-Address\":\"52.0.96.131\",\"WARC-Target-URI\":\"https://www.britannica.com/science/error-mathematics\",\"WARC-Payload-Digest\":\"sha1:3LQXE3WSECOFQSMJDAYR4DG2GLG5RMY2\",\"WARC-Block-Digest\":\"sha1:C2RYVCHFKI2JNFDUQM4XHHDM7BOCITK6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144058.43_warc_CC-MAIN-20200219061325-20200219091325-00514.warc.gz\"}"} |