## Efficient Portfolio That Maximizes Sharpe Ratio

The Sharpe ratio is defined as the ratio

$$\frac{\mu(x) - r_0}{\sqrt{\Sigma(x)}}$$

where $x \in R^n$ and $r_0$ is the risk-free rate ($\mu$ and $\Sigma$ are proxies for portfolio return and risk). For more information, see Portfolio Optimization Theory.

Portfolios that maximize the Sharpe ratio are portfolios on the efficient frontier that satisfy a number of theoretical conditions in finance. For example, such portfolios are called tangency portfolios, since the tangent line from the risk-free rate to the efficient frontier touches the efficient frontier at the portfolios that maximize the Sharpe ratio. The `estimateMaxSharpeRatio` function accepts a Portfolio object and returns the efficient portfolio that maximizes the Sharpe ratio.

Suppose that you have a universe with four risky assets and a riskless asset, and you want to obtain a portfolio that maximizes the Sharpe ratio, where, in this example, `r0` is the return for the riskless asset.

```
r0 = 0.03;
m = [ 0.05; 0.1; 0.12; 0.18 ];
C = [ 0.0064 0.00408 0.00192 0;
0.00408 0.0289 0.0204 0.0119;
0.00192 0.0204 0.0576 0.0336;
0 0.0119 0.0336 0.1225 ];
p = Portfolio('RiskFreeRate', r0);
p = setAssetMoments(p, m, C);
p = setDefaultConstraints(p);
pwgt = estimateMaxSharpeRatio(p);
display(pwgt);
```

```
pwgt =

    0.4251
    0.2917
    0.0856
    0.1977
```

If you start with an initial portfolio, `estimateMaxSharpeRatio` also returns the purchases and sales needed to get from your initial portfolio to the portfolio that maximizes the Sharpe ratio. For example, given an initial portfolio in `pwgt0`, you can obtain the purchases and sales from the previous example:

```
pwgt0 = [ 0.3; 0.3; 0.2; 0.1 ];
p = setInitPort(p, pwgt0);
[pwgt, pbuy, psell] = estimateMaxSharpeRatio(p);
display(pwgt);
display(pbuy);
display(psell);
```

```
pwgt =

    0.4251
    0.2917
    0.0856
    0.1977

pbuy =

    0.1251
         0
         0
    0.0977

psell =

         0
    0.0083
    0.1144
         0
```

If you do not specify an initial portfolio, the purchase and sale weights assume that your initial portfolio is `0`.
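As a cross-check (my own sketch, not part of the original example), the unconstrained tangency portfolio has the closed form $w \propto \Sigma^{-1}(\mu - r_0 \mathbf{1})$, rescaled so the weights sum to 1; when the default long-only constraints are not binding, this should agree with the `estimateMaxSharpeRatio` output above:

```python
import numpy as np

r0 = 0.03
m = np.array([0.05, 0.10, 0.12, 0.18])
C = np.array([[0.0064, 0.00408, 0.00192, 0.0],
              [0.00408, 0.0289, 0.0204, 0.0119],
              [0.00192, 0.0204, 0.0576, 0.0336],
              [0.0, 0.0119, 0.0336, 0.1225]])

w = np.linalg.solve(C, m - r0)  # unconstrained tangency direction
w /= w.sum()                    # rescale so the weights sum to 1
print(np.round(w, 4))           # should be close to pwgt above if no bound binds
```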
{}
# Probability density

Suppose $X$ has density $f(x) = c/x^4$ for $x > 1$, and $f(x) = 0$ otherwise, where $c$ is a constant. Find a) $c$; b) $E(X)$; c) $Var(X)$.

I'm so confused... I don't even know where to start...

$\int_{1}^{\infty} \frac{c}{x^4}\, dx = 1$, so you need to integrate, then solve for $c$.

$E(X) = \int_{1}^{\infty} x \cdot \frac{c}{x^4}\, dx$

$Var(X) = E(X^2) - (E(X))^2$, so you can use a similar process to find $E(X^2) = \int_{1}^{\infty} x^2 \cdot \frac{c}{x^4}\, dx$.
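A quick symbolic check of the three integrals (my own addition, using sympy; the thread itself stops at setting them up):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
c_val = sp.solve(sp.integrate(c / x**4, (x, 1, sp.oo)) - 1, c)[0]  # c = 3
EX   = sp.integrate(x * c_val / x**4, (x, 1, sp.oo))               # E(X) = 3/2
EX2  = sp.integrate(x**2 * c_val / x**4, (x, 1, sp.oo))            # E(X^2) = 3
print(c_val, EX, EX2, EX2 - EX**2)                                 # Var(X) = 3/4
```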
{}
# HLT-308V Topic 3 DQ 1

The Patient Self-Determination Act (PSDA) was implemented to allow patients to state "Do Not Resuscitate" (DNR), or to assign a surrogate decision maker in the event the individual is unable to make the decision. What relationship does an ethics committee have in enforcing the advance directives of the patients in their care? Support your analysis with a minimum of one peer-reviewed article.
{}
# Fixed points of conjugate functions

1. Apr 18, 2012

### razmtaz

1. The problem statement, all variables and given/known data

Suppose f and g are conjugate. Show that if p is an attractive fixed point of f(x), then h(p) is an attractive fixed point of g(x).

2. Relevant equations

f and g being conjugate means there exist continuous bijections h and h^-1 so that h(f(x)) = g(h(x)).

A point p is an attractive fixed point if there exists an interval I = (p-a, p+a) such that for all x in I the iterates of f(x) tend to p as the number of iterations tends to infinity.

3. The attempt at a solution

So far I can show that if p is a fixed point of f then h(p) is a fixed point of g: h(f(p)) = g(h(p)), and we know f(p) = p, so simplify to get h(p) = g(h(p)), and this part is now done. Also, I know that if x is in I, then h(x) is in h(I). What I want to show is that for all x in h(I), g^n(x) -> h(p) (that is, the iterates of x under g converge to h(p)), and that's as far as I've gotten. How can I proceed?

2. Apr 18, 2012

### sunjin09

h being continuous means for any convergent sequence {x_n}→x, h(x_n)→h(x). Now you have a sequence of f(x_n) that converges to f(x); what about their image sequence under h?

3. Apr 18, 2012

### razmtaz

Sorry Sunjin, I might have missed what you were getting at, but here's an attempt: we have a sequence {x, f(x), f(f(x)), ...}, with x some point in the basin of attraction of a fixed point p (but not equal to p), that converges to f(p) = p. Similarly, if we apply h to every element in this sequence, we get {h(x), h(f(x)), ...}, which converges to h(f(p)) = h(p). I think that on a test I could make a somewhat convincing argument that goes like this, but is this what you were thinking of? It seems right to me, because we know it works for ANY x in the basin of attraction of the fixed point, and since h is bijective and h(f(x)) = g(h(x)), every point in the sequence has an image under h, and the image sequence eventually converges to h(p), the image under h of the limit of the sequence. Is this strong enough? Have I missed the point? Thanks a lot : )

4. Apr 18, 2012

### razmtaz

You mentioned that h is continuous. Is this the reason that we can find a neighbourhood N around our fixed point such that the sequence of iterates of any point in the neighbourhood converges to the fixed point? And hence there is an analogous neighbourhood h(N) where the same is true for iterates of g(h(x)), except that they converge to the fixed point g(h(p)) = h(p).

5. Apr 19, 2012

### sunjin09

That a continuous function preserves convergent sequences is the key. (This applies to functions defined on a first countable space. R is certainly first countable, but this theorem in real analysis is proved using the ordinary definition of limit and convergence.)
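A tiny numerical illustration of the claim (my own toy example, not from the thread): take f(x) = x/2, which has attractive fixed point p = 0, and h(x) = x + 1; the conjugate g = h ∘ f ∘ h⁻¹ should then have attractive fixed point h(0) = 1.

```python
f = lambda x: x / 2           # attractive fixed point at p = 0
h = lambda x: x + 1           # continuous bijection, with inverse below
h_inv = lambda x: x - 1
g = lambda x: h(f(h_inv(x)))  # the conjugate map g = h . f . h^-1

x = 5.0                       # any starting point in the basin of attraction
for _ in range(60):
    x = g(x)
print(x)                      # iterates of g converge to h(p) = 1.0
```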
{}
## anonymous 5 years ago

274/3: how do I convert it to a mixed number?

1. anonymous

You ask yourself, "How many times does 3 go into 274 completely?" Answer: 91. Then, how much is left over? Well, 91 x 3 = 273, so you have a remainder of 1 when dividing by 3. Your answer is $91\frac{1}{3}$

2. anonymous

thanks

3. anonymous

welcome
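The quotient-and-remainder step is exactly what integer division computes; a one-line check (my own addition, not part of the original exchange):

```python
q, r = divmod(274, 3)
print(f"{q} {r}/3")  # 91 1/3
```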
{}
# Letter-replacement challenge

The idea is simple. You have to create a "visualised" letter-replacement by providing 3 strings (input can be comma separated, separate inputs, or as an array). The first segment is the word you want to correct, the second segment is the letters you want to replace, and the third segment is the replacement for the letters in segment 2. For example:

|    | Input                       | Starting Word | Output      |
|----|-----------------------------|---------------|-------------|
| #1 | Hello world -wo -ld +Ea +th | Hello world   | Hello Earth |
| #2 | Hello World -wo -ld +Ea +th | Hello World   | Hello Worth |
| #3 | Hello -llo +y               | Hello         | Hey         |
| #4 | Red -R -d +Gr +en           | Red           | Green       |
| #5 | mississippi -is -i +lz +p   | mississippi   | mlzslzspppp |
| #6 | Football -o -a +a +i        | Football      | Fiitbill    |
| #7 | mississippi -is -i +iz +p   | mississippi   | mpzspzspppp |

### Explanation

The replacements are to be done step-by-step with their respective pair. Here's an illustration with an input of mississippi -is -i +iz +p to give the output mpzspzspppp (see example #7 above):

| Step | Input                     | Output      |
|------|---------------------------|-------------|
| #1   | mississippi -is -i +iz +p |             |
| #2   | mississippi -is +iz       | mizsizsippi |
| #3   | mizsizsippi -i +p         | mpzspzspppp |

### Rules

• Inputs are always in this order: <starting_string> <list_of_letters_to_replace> <replacement_letters>.
• Letters to replace and replacement groups will never be mixed (i.e. there will never be -a +i -e +o).
• Letters to replace are always prefixed with - and replacement letters are always prefixed with +. (The prefix is mandatory.)
• There may be more than one set of letters to replace, so you'd need to look at the prefix.
• Assume the number of letter groups to replace and the number of replacement letter groups are always equal (i.e. there will never be -a -e +i).
• Replacements are case-sensitive (see examples #1 and #2).
• Replacements are done in the order they were given in the input.
• Letter replacements can be replaced with other replacements. See example #6.
• The first segment (starting word) will never include - or + characters.
• This is code-golf, so shortest bytes win.

Here is a Stack Snippet to generate both a regular leaderboard and an overview of winners by language. To make sure that your answer shows up, please start your answer with a headline, using the following Markdown template:

# Language Name, N bytes

where N is the size of your submission. If you improve your score, you can keep old scores in the headline, by striking them through. For instance:

# Ruby, <s>104</s> <s>101</s> 96 bytes

If you want to include multiple numbers in your header (e.g. because your score is the sum of two files or you want to list interpreter flag penalties separately), make sure that the actual score is the last number in the header:

# Perl, 43 + 2 (-p flag) = 45 bytes

You can also make the language name a link which will then show up in the leaderboard snippet:

# [><>](http://esolangs.org/wiki/Fish), 121 bytes
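For reference, here is a plain, ungolfed model of the task in Python (my own sketch, not one of the competing answers); the token classification leans on the rule that the starting word never contains - or +:

```python
def replace_steps(line: str) -> str:
    tokens = line.split()
    # the starting word may contain spaces, so collect every token
    # that is not a -group or a +group
    word = ' '.join(t for t in tokens if t[0] not in '-+')
    old = [t[1:] for t in tokens if t.startswith('-')]
    new = [t[1:] for t in tokens if t.startswith('+')]
    for o, n in zip(old, new):     # replacements run in the given order
        word = word.replace(o, n)  # str.replace is case-sensitive
    return word

print(replace_steps("mississippi -is -i +iz +p"))    # mpzspzspppp
print(replace_steps("Hello world -wo -ld +Ea +th"))  # Hello Earth
```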
class="language-list"> <thead> <tr><td>Language</td><td>User</td><td>Score</td></tr></thead> <tbody id="languages"> </tbody> </table> </div><table style="display: none"> <tbody id="answer-template"> <tr><td>{{PLACE}}</td><td>{{NAME}}</td><td>{{LANGUAGE}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr></tbody> </table> <table style="display: none"> <tbody id="language-template"> <tr><td>{{LANGUAGE}}</td><td>{{NAME}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr></tbody> </table> • Given rules 2 and 5, you really don't need to look at the prefix. With n inputs, input 0 is the base string, inputs 1 to int(n/2) are letter to replace (with prefix -) and input int(n/2)+1 to n-1 are replacement (with prefix +) – edc65 Oct 17 '16 at 8:51 • @edc65 100% true, although the challenge was designed to have the prefix (and I could make up some weird explanation that I'm an alien who cannot process letter replacements without their prefix) but in reality, it's just another barrier to stop this being too trivial - though looking at the current answers (all are great by the way) it wasn't a complex barrier. Also fun fact, the idea behind this challenge was spawned from my friend in a Skype chat. He'd mis-spell a word (gello), and then send me the letter replacements (-g +h) because he wanted to be annoying instead of sending hello*. – ʰᵈˑ Oct 17 '16 at 8:57 • Inputs are always in this order Why so restrictive? – Luis Mendo Oct 17 '16 at 9:16 • @LuisMendo I guess it doesn't really matter - but it's the way my friend and I formatted it, but since answers have been posted to this requirement, I cannot really make a rule change. It wasn't questioned on the sandbox, so I didn't think of it as a negative. – ʰᵈˑ Oct 17 '16 at 9:23 • @udioica is perfecly right and it in fact support the "Replacements are case-sensitive" rule. Run the snippet in the JavaScript answer to see it implemented. (#1 world vs #2 World) – edc65 Oct 17 '16 at 12:31 # 05AB1E, 15 17 bytes IIð¡€áIð¡€á‚øvy: Try it online! Explanation I # read starting string I # read letters to be replaced ð¡ # split on space €á # and remove "-" I # read replacement letters ð¡ # split on space €á # and remove "+" ‚ø # zip to produce pairs of [letters to replace, replacement letters] vy: # for each pair, replace in starting string Or with a less strict input format vy: Try it online ## JavaScript (ES6), 85 83 bytes f=(s,n=1,l=s.split(/ \W/))=>(r=l[n+l.length/2|0])?f(s.split(l[n]).join(r),n+1):l[0] ### Test cases f=(s,n=1,l=s.split(/ \W/))=>(r=l[n+l.length/2|0])?f(s.split(l[n]).join(r),n+1):l[0] console.log(f("Hello world -wo -ld +Ea +th")); console.log(f("Hello World -wo -ld +Ea +th")); console.log(f("Hello -llo +y")); console.log(f("Red -R -d +Gr +en")); console.log(f("mississippi -is -i +lz +p")); console.log(f("Football -o -a +a +i")); console.log(f("mississippi -is -i +iz +p")); ## Pyke, 13 11 bytes z[zdcmt)[.: Try it here! z - input() [zdcmt) - def func(): zdc - input().split(" ") mt - map(>[1:], ^) - func() [ - func() .: - translate() Or 2 bytes if in a different input format: .: Try it here! • At work catbus.co.uk is blocked. Are you able to link an alternative test suite please? – ʰᵈˑ Oct 17 '16 at 9:24 • @ʰᵈˑ I don't believe conforming to your (arbitrary) work firewall settings is reasonable. – orlp Oct 17 '16 at 10:12 • @orlp - I agree, it's shit. But I don't set the firewall settings. 
I just wanted to test it out – ʰᵈˑ Oct 17 '16 at 10:25 • @hd you can download Pyke at github.com/muddyfish/pyke – Blue Oct 17 '16 at 11:18 ## Perl, 58 bytes 57 bytes code + 1 for -p. Requires first item on one line, then the replacements on the next. Big thanks to @Dada who came up with a different approach to help reduce by 4 bytes! $a=<>;1while$a=~s%-(\S*)(.*?)\+(\S*)%"s/$1/$3/g;q{$2}"%ee ### Usage perl -pe '$a=<>;1while$a=~s%-(\S*)(.*?)\+(\S*)%"s/$1/$3/g;q{$2}"%ee' <<< 'Football -o -a +a +i' Fiitbill perl -pe '$a=<>;1while$a=~s%-(\S*)(.*?)\+(\S*)%"s/$1/$3/g;q{$2}"%ee' <<< 'mississippi -is -i +iz +p' mpzspzspppp perl -pe '$a=<>;1while$a=~s%-(\S*)(.*?)\+(\S*)%"s/$1/$3/g;q{$2}"%ee' <<< 'mississippi -ippi -i -mess +ee +e +tenn' tennessee • 4 bytes longer, there is perl -pE 's/(.*?) -(\S*)(.*?)\+(\S*)/"(\$1=~s%$2%$4%gr).\"$3\""/ee&&redo'. I can't manage to get it any shorter, but maybe you can :) – Dada Oct 17 '16 at 15:51 • Gotcha! 58 bytes : perl -pE '$a=<>;1while$a=~s%-(\S*)(.*?)\+(\S*)%"s/$1/$3/g;q{$2}"%ee'. (takes the string on one line, and the "flags" on the next line) – Dada Oct 17 '16 at 17:55 • Awesome! I'm not at a computer but I'll update that tomorrow! Thanks! – Dom Hastings Oct 17 '16 at 20:47 • Are you sure about removing the q{} surrounding$2 ? Wouldn't this fail when there are 3 - and 3 + switches? (I can't test it now, so maybe you were right so remove it ;) ) – Dada Oct 18 '16 at 11:13 • @Dada ahhh, I did wonder why you'd added it, I tested all the cases in the test suite, but didn't think about a 3 for 3 replacement... – Dom Hastings Oct 18 '16 at 11:40 # GNU sed 86 Bytes Includes +1 for -r :;s,^([^-]*)(\w+)([^-]*-)\2( [^+]*\+)(\w*),\1\5\3\2\4\5, t;s,-[^-+]*,,;s,\+[^-+]*,,;t Try it online! Example: # Java 7, 153 133 bytes String c(String[]a){String r=a[0],z[]=a[1].split(" ?-");for(int i=0;i<z.length;r=r.replace(z[i],a[2].split(" ?[+]")[i++]));return r;} Ungolfed & test code: Try it here. class M{ static String c(String[] a){ String r = a[0], z[] = a[1].split(" ?-"); for(int i = 0; i < z.length; r = r.replace(z[i], a[2].split(" ?[+]")[i++])); return r; } public static void main(String[] a){ System.out.println(c(new String[]{ "Hello world", "-wo -ld", "+Ea +th" })); System.out.println(c(new String[]{ "Hello World", "-wo -ld", "+Ea +th" })); System.out.println(c(new String[]{ "Hello", "-llo", "+y" })); System.out.println(c(new String[]{ "Red", "-R -d", "+Gr +en" })); System.out.println(c(new String[]{ "mississippi", "-is -i", "+lz +p" })); System.out.println(c(new String[]{ "Football", "-o -a", "+a +i" })); System.out.println(c(new String[]{ "mississippi", "-is -i", "+iz +p" })); } } Output: Hello Earth Hello Worth Hey Green mlzslzspppp Fiitbill mpzspzspppp • Does this work for the input new String[]{'Rom Ro. Rom", "-Ro." , "+No."}? Just writing something that (hopefully) matches a wrong regex. – Roman Gräf Oct 18 '16 at 5:09 • @RomanGräf Yes, works and outputs Rom No. Rom. Btw, you can try it yourself by clicking the Try it here. link in the post, and then fork it. :) – Kevin Cruijssen Oct 18 '16 at 6:45 • I know but I'm currently on my mobile. :( – Roman Gräf Oct 18 '16 at 7:29 # PHP, 164 Bytes preg_match_all("#^[^-+]+|-[\S]+|[+][\S]+#",$argv[1],$t);for($s=($a=$t[0])[0];++$i<$c=count($a)/2;)$s=str_replace(trim($a[+$i],"-"),trim($a[$i+$c^0],"+"),$s);echo$s; # Vim, 25 bytes qq+dE+r-PdiW:1s<C-R>"-g<CR>@qq@q Assumes input in this format: mississippi -is -i +lz +p • +dE+r-PdiW: Combines - and + into single register, with the + turned into a -. 
• :1s<C-R>"-g: Uses the register as a code snippet, inserted directly into the :s command, with - as the separator. # Convex, 19 bytes ¶:äz{S*"-+"-ä~@\ò}/ Try it online! # R, 98 94 bytes Edit: saved 4 bytes thanks to @rturnbull i=scan(,"");s=i[1];i=gsub("\\+|-","",i[-1]);l=length(i)/2;for(j in 1:l)s=gsub(i[j],i[l+j],s);s Ungolfed and test cases Because scan (reads input from stdin) doesn't work properly in R-fiddle I showcase the program by wrapping it in a function instead. Note that the function takes a vector as an input and can be run by e.g.: f(c("Hello world", "-wo", "-ld", "+Ea", "+th")). The gofled program above would prompt the user to input using stdin whereby typing "Hello world" -wo -ld -Ea +th in the console would yield the same result. Run the code on R-fiddle f=function(i){ s=i[1] # Separate first element i=gsub("\\+|-","",i[-1]) # Remove + and - from all elements except first, store as vector i l=length(i)/2 # calculate the length of the vector i (should always be even) for(j in 1:l)s=gsub(i[j],i[l+j],s) # iteratively match element j in i and substitute with element l+j in i s # print to stdout } • Can you provide a test suite link also, please? – ʰᵈˑ Oct 17 '16 at 9:23 • @ʰᵈˑ added an R-fiddle test suite. Note that the test suite uses a function instead of reading input from stdin as explained in the edited answer. – Billywob Oct 17 '16 at 9:59 • Is this answer valid, since you have to use " around the input string? – rturnbull Oct 17 '16 at 21:06 • @rturnbull I don't see why not. Wrapping every entry with quotes and pressing enter would yield the equivalent result (e.g.: "Hello world" => enter => "-wo" => enter => "-ld" => enter => "+Ea" => enter =>"+th") which is usually how strings are read anyways. – Billywob Oct 17 '16 at 21:16 • Yeah it's really up to the OP! I personally like your answer as-is, but I was worried that it might be invalid. Looking at answers for other languages it seems like quotes are pretty accepted. While I have your attention, I think you can golf off 4 bytes by changing l=length(i) to l=length(i)/2 and updating the later references to l. – rturnbull Oct 17 '16 at 21:57 ## Haskell, 85 78 bytes import Data.Lists g=map tail.words a#b=foldl(flip$uncurry replace)a.zip(g b).g Usage example: ("mississippi" # "-is -i") "+lz +p" -> "mlzslzspppp". How it works: g=map tail.words -- helper function that splits a string into a -- list of words (at spaces) and drops the first -- char of each word zip(g b).g -- make pairs of strings to be replaced and its -- replacement foldl(flip$uncurry replace)a -- execute each replacement, starting with the -- original string -- -> "flip" flips the arguments of "uncurry replace" -- i.e. string before pair of replacements -- "uncurry" turns a function that expects two -- lists into one that expects a list of pairs Edit: @BlackCap found 6 bytes to save and I myself another one. • 6 bytes: import Data.Lists;a#b=foldl(uncurry replaceflip)a.zip(g b).g;g=map tail.words – BlackCap Oct 31 '16 at 10:52 • @BlackCap: Nice, thanks! No need to make flip infix. Standard prefix is one byte shorter. – nimi Oct 31 '16 at 17:58 # Python 3, 93 byte def f(s): s,m,p=s for n,o in zip(m.split(),p.split()):s=s.replace(n[1:],o[1:]) return s Try it online! Input is a list with strings, replacement strings are space separated. Example input: ['mississippi','-is -i','+iz +p'] • Are you able to add a test suite link, please? – ʰᵈˑ Oct 17 '16 at 10:08 • Link provided and also reduced size a little. 
– Gábor Fekete Oct 17 '16 at 10:14 ## PowerShell v2+, 90 bytes param($a,$b,$c)-split$b|%{$a=$a-creplace($_-replace'-'),((-split$c)[$i++]-replace'\+')};$a Takes input as three arguments, with the - and + strings space-separated. Performs a -split on $b (the -split when acting in a unary fashion splits on whitespace), then loops |%{...} through each of those. Every iteration we're removing the -, finding the next [$i++] replacement string and removing the + from it, and using the -creplace (case-sensitive replacement) to slice and dice $a and store it back into $a. Then, $a is left on the pipeline and output is implicit. PS C:\Tools\Scripts\golfing> .\letter-replacement-challenge.ps1 'mississippi' '-is -i' '+iz +p' mpzspzspppp PS C:\Tools\Scripts\golfing> .\letter-replacement-challenge.ps1 'Hello world' '-wo -ld' '+Ea +th' Hello Earth PS C:\Tools\Scripts\golfing> .\letter-replacement-challenge.ps1 'Hello World' '-wo -ld' '+Ea +th' Hello Worth # PHP, 106 bytes for($s=($v=$argv)[$i=1];$i++<$n=$argc/2;)$s=str_replace(substr($v[$i],1),substr($v[$n+$i-1],1),$s);echo$s; straight forward approach. Run with php -r '<code> <arguments>.
{}
# Riemann–von Mangoldt formula

In mathematics, the Riemann–von Mangoldt formula, named for Bernhard Riemann and Hans Carl Friedrich von Mangoldt, describes the distribution of the zeros of the Riemann zeta function. The formula states that the number N(T) of zeros of the zeta function with imaginary part greater than 0 and less than or equal to T satisfies

$$N(T) = \frac{T}{2\pi}\log\frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T).$$

The formula was stated by Riemann in his notable paper "On the Number of Primes Less Than a Given Magnitude" (1859) and was finally proved by von Mangoldt in 1905.

Backlund gives an explicit form of the error for all T greater than 2:

$$\left| N(T) - \left( \frac{T}{2\pi}\log\frac{T}{2\pi} - \frac{T}{2\pi} - \frac{7}{8} \right) \right| < 0.137\log T + 0.443\log\log T + 4.350\,.$$
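A rough numerical illustration of the formula (my own sketch; it relies on mpmath, whose zetazero(k) returns the k-th nontrivial zero on the critical line):

```python
from mpmath import zetazero, log, pi

T = 100
count, k = 0, 1
while True:                      # count zeros with imaginary part in (0, T]
    if zetazero(k).imag > T:
        break
    count, k = count + 1, k + 1

main_term = T/(2*pi)*log(T/(2*pi)) - T/(2*pi)
print(count, float(main_term))   # 29 vs. ~28.1; the gap sits inside the O(log T) term
```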
{}
# mom_intrinsic_functions module reference

A module with intrinsic functions that are used by MOM but are not supported by some compilers.

## Functions/Subroutines

invcosh(): Evaluate the inverse cosh, either using a math library or an equivalent expression.

## Function/Subroutine Documentation

function mom_intrinsic_functions/invcosh(x) [real]

Evaluate the inverse cosh, either using a math library or an equivalent expression.

Parameters:

x :: [in] The argument of the inverse of cosh. NaNs will occur if x < 1, but there is no error checking.

Called from: mom_bkgnd_mixing::calculate_bkgnd_mixing
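The "equivalent expression" is presumably the standard identity acosh(x) = log(x + sqrt(x^2 - 1)) for x >= 1 (an assumption on my part; the Fortran source is not shown here). A minimal sketch:

```python
import math

def invcosh(x: float) -> float:
    # Mirrors the documented contract: no error checking. In Fortran the
    # expression below yields NaN for x < 1; Python's math.sqrt raises instead.
    return math.log(x + math.sqrt(x * x - 1.0))

print(invcosh(2.0), math.acosh(2.0))  # the two should agree
```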
{}
# Key features

Previously, key features of linear equations have been explored, including the gradient, the $x$-intercept, and the $y$-intercept. The graph of a quadratic equation is a parabola and is a curved or concave shape (either concave up or down, depending on the equation). The following sections describe the key features of a parabola.

## Intercepts

Remember:

The $x$-intercepts are where the parabola crosses the $x$-axis. This occurs when $y=0$.

The $y$-intercept is where the parabola crosses the $y$-axis. This occurs when $x=0$.

These intercepts are shown in the picture below. Recall that there may be one, two, or even no $x$-intercepts.

## Maximum and minimum values

Maximum or minimum values are also known as the turning points, and they are found at the vertex of the parabola.

Parabolas that are concave up have a minimum value. This means the $y$-value will never go under a certain value. In the example above, the range is $y \ge -15$.

Parabolas that are concave down have a maximum value. This means the $y$-value will never go over a certain value. In the example above, the range is $y \le 15$.

## Axis of symmetry

Maximum and minimum values occur on a parabola's axis of symmetry. This is the vertical line that evenly divides a parabola into two sides down the middle. Using the general form of the quadratic $y = ax^2 + bx + c$, substitute the relevant values of $a$ and $b$ into the equation of the axis of symmetry:

$$x = \frac{-b}{2a}$$

Does it look familiar? This is part of the quadratic formula. It is the half-way point between the two solutions.

Positive and negative gradients were previously explored for straight lines. If we draw the tangent line to a curve at a particular point, we can estimate the gradient of the curve. A parabola has a positive gradient in some places and a negative gradient in others. The parabola's maximum or minimum value, at which the gradient is $0$, is the division between the positive and negative gradient regions; hence why it is also called the turning point. Look at the picture below. One side of the parabola has a positive gradient, there is a turning point with a zero gradient, and the other side of the parabola has a negative gradient.

In this particular graph, the gradient is positive when $x<1$ and the parabola is increasing for these values of $x$. Similarly, the gradient is negative when $x>1$, and for these values of $x$ the parabola is decreasing.

## Practice questions

Question 1

Examine the given graph and answer the following questions.

1. What are the $x$ values of the $x$-intercepts of the graph? Write both answers on the same line separated by a comma.
2. What is the $y$ value of the $y$-intercept of the graph?
3. What is the minimum value of the graph?

Question 2

Examine the attached graph and answer the following questions.

1. What is the $x$-value of the $x$-intercept of the graph?
2. What is the $y$ value of the $y$-intercept of the graph?
3. What is the absolute maximum of the graph?
4. Determine the interval of $x$ in which the graph is increasing.

As for linear equations, there are a number of forms for graphing quadratic functions:

- General form: $y = ax^2 + bx + c$
- Factored or $x$-intercept form: $y = a(x-\alpha)(x-\beta)$
- Turning point form: $y = a(x-h)^2 + k$

Each form has an advantage for different key features that can be identified quickly. However, for each form, the role of $a$ remains the same. That is:

- If $a<0$ then the quadratic is concave down
- If $a>0$ then the quadratic is concave up
- The larger the magnitude of $a$, the narrower the curve

The following are examples of how to find all the key features for each form.

## Graphing from turning point form

$y = a(x-h)^2 + k$

- This form is given this name as the turning point can be read directly from the equation: $(h,k)$
- Obtain this graph from the graph of $y=x^2$ by dilating (stretching) the graph by a factor of $a$ from the $x$-axis, then translating the graph $h$ units horizontally and $k$ units vertically
- The axis of symmetry will be at $x=h$
- The $y$-intercept can be found by substituting $x=0$ into the equation
- The $x$-intercepts can be found by substituting $y=0$ into the equation and rearranging

## Practice questions

Question 3

Consider the equation $y = -(x+2)^2 + 4$.

1. Find the $x$-intercepts. Write all solutions on the same line, separated by a comma.
2. Find the $y$-intercept.
3. Determine the coordinates of the vertex. Vertex $= ( \ , \ )$
4. Graph the equation.

Question 4

Consider the equation $y = (x-3)^2 + 4$.

1. Does the graph have any $x$-intercepts? (A) No (B) Yes
2. Find the $y$-intercept.
3. Determine the coordinates of the vertex. Vertex $= ( \ , \ )$
4. Graph the equation.

## Graphing from factored form

$y = a(x-\alpha)(x-\beta)$

- From this form, read the $x$-intercepts directly: $(\alpha, 0)$ and $(\beta, 0)$. If $y=0$, solve the equation using the null factor law.
- The axis of symmetry will be at the $x$-coordinate midway between the two $x$-intercepts. Alternatively, expand this equation to general form and then use the formula for the axis of symmetry: $x = \frac{-b}{2a}$.
- The axis of symmetry is also the $x$-coordinate of the turning point. Substitute this value into the equation, then find the $y$-coordinate of the turning point.
- The $y$-intercept can be found by substituting $x=0$ into the equation.

## Practice question

Question 5

Consider the parabola $y = (x+1)(x-3)$.

1. Find the $y$ value of the $y$-intercept.
2. Find the $x$ values of the $x$-intercepts. Write all solutions on the same line separated by a comma.
3. State the equation of the axis of symmetry.
4. Find the coordinates of the vertex. Vertex $= ( \ , \ )$
5. Graph the parabola.

## Graphing from general form

$y = ax^2 + bx + c$

- From this form, read the $y$-intercept directly: $(0, c)$
- Find the axis of symmetry using the formula $x = \frac{-b}{2a}$
- The axis of symmetry is also the $x$-coordinate of the turning point. Substitute this value into the equation, then find the $y$-coordinate of the turning point.
- The $x$-intercepts can be found by substituting $y=0$ into the equation and then using one of the methods for solving quadratics, such as factorising or the quadratic formula, to find the zeros ($x$-intercepts) of the equation. Alternatively, use the method of completing the square to rewrite the quadratic in turning point form.

Sometimes, the solutions for a quadratic may be asked for. This refers to the $x$-intercepts, which satisfy the equation $y=0$. The solutions are sometimes called the roots of the equation.
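The bullet-point recipes above translate directly into a short computation; here is a sketch (my own, not part of the lesson) that reports the key features of $y = ax^2 + bx + c$:

```python
import math

def key_features(a, b, c):
    axis = -b / (2 * a)                       # axis of symmetry: x = -b/(2a)
    vertex = (axis, a * axis**2 + b * axis + c)
    disc = b * b - 4 * a * c                  # discriminant decides the x-intercepts
    roots = []
    if disc >= 0:
        r = math.sqrt(disc)
        roots = sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})
    return {"y-intercept": c, "axis": axis, "vertex": vertex,
            "x-intercepts": roots, "concave": "up" if a > 0 else "down"}

print(key_features(1, 2, -8))  # matches Question 6 below: x-intercepts -4 and 2, vertex (-1, -9)
```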
## Practice questions

Question 6

Consider the quadratic function $y = x^2 + 2x - 8$.

1. Determine the $x$-value(s) of the $x$-intercept(s) of this parabola. Write all answers on the same line separated by commas.
2. Determine the $y$-value of the $y$-intercept for this parabola.
3. Determine the equation of the vertical axis of symmetry for this parabola.
4. Find the $y$-coordinate of the vertex of the parabola.
5. Draw a graph of the parabola $y = x^2 + 2x - 8$.

Question 7

A parabola has the equation $y = x^2 + 4x - 1$.

1. Express the equation of the parabola in the form $y = (x-h)^2 + k$ by completing the square.
2. Find the $y$-intercept of the curve.
3. Find the vertex of the parabola. Vertex $= ( \ , \ )$
4. Is the parabola concave up or down? (A) Concave up (B) Concave down
5. Hence plot the curve $y = x^2 + 4x - 1$.

Question 8

Consider the graph of the function $f(x) = -x^2 - x + 6$.

1. Using the graph, write down the solutions to the equation $-x^2 - x + 6 = 0$. If there is more than one solution, write the solutions separated by commas.

Technology can be used to graph and find key features of a quadratic function. We may need to use our knowledge of the graph, or calculate the location of some key features, to find an appropriate view window.

## Practice question

Question 9

Use your calculator or other handheld technology to graph $y = 4x^2 - 64x + 263$.

1. What is the vertex of the graph? The vertex is $( \ , \ )$
2. What is the $y$-intercept? The $y$-intercept is $( \ , \ )$

Let's explore how a quadratic function changes using the applet below. What happens when we change the sliders for dilation, reflection, vertical translation and horizontal translation? Can you describe what changes these values make to the quadratic function?

These movements are called transformations. Transform means change, and these transformations change the simple quadratic $y=x^2$ into other quadratics by moving (translating), flipping (reflecting) and making the graph appear more or less steep (dilating).

By using the above applet, step through these instructions:

- Start with the simple quadratic $y=x^2$
- Dilate the quadratic by a factor of $2$
- Reflect in the $x$-axis
- Translate vertically by $2$ and horizontally by $-3$ units

How has the graph changed? Can you visualise the changes without using the applet? What is the resulting equation?

Transforming a quadratic will change its equation. Here are some of the most common types of quadratic equations, and what they mean with regards to the transformations that have occurred.

### $y = ax^2$ (reflection)

If $a<0$ (that is, $a$ is negative) then we have a reflection parallel to the $x$-axis. It's like the quadratic has been flipped upside down.

Shows the reflection of $y=x^2$ to $y=-x^2$, and the reflection of $y=(x-1)(x+2)$ to $y=-(x-1)(x+2)$.

### $y = ax^2$ (dilation)

This is a quadratic that has been dilated vertically by a factor of $a$. If $|a|>1$ then the graph is steeper than $y=x^2$. If $|a|<1$ then the graph is flatter than $y=x^2$.

Dilation of $y=x^2$ to $y=3x^2$ and $y=\frac{1}{2}x^2$.

### $y = ax^2 + k$

In the graph $y=ax^2+k$, the quadratic has been vertically translated by $k$ units. If $k>0$ then the translation is up. If $k<0$ then the translation is down.

Vertical translation: one curve, $y=2x^2+5$, has a vertical translation of up $5$ units, and another, $y=2x^2-3$, has a vertical translation of down $3$ units.

### $y = (x-h)^2$

The $h$ indicates the horizontal translation. If $h>0$, that is, the factor in the brackets is $(x-h)$, then we have a horizontal translation of $h$ units right. If $h<0$, that is, the factor in the brackets is $(x-(-h)) = (x+h)$, then we have a horizontal translation of $h$ units left.

The graph $y=x^2$ being horizontally translated $2$ units left to $y=(x+2)^2$ and $1$ unit right to $y=(x-1)^2$.

### Identifying transformations

The form $y = a(x-h)^2 + k$ shows us the vertical dilation $a$: if $|a|>1$ then the quadratic is steeper than $x^2$; if $|a|<1$ then the quadratic is flatter than $x^2$.
{}
### A summary

The most general limit statement is $$\lim_{x \to \tiny\hbox{something}} f(x) = \hbox{something else}.$$

Here is what $x$ can do:

${x \to a}$ describes what happens when $x$ is close to, but not equal to, $a$. So $\displaystyle\lim_{x \to 3} f(x)$ involves looking at $x=3.1, 3.01, 3.001, 2.9, 2.99, 2.999$, and generally considering all values of $x$ that are either slightly above or slightly below 3.

${x \to a^+}$ describes what happens when $x$ is slightly greater than $a$. That is, $\displaystyle\lim_{x \to 3^+}f(x)$ involves looking at $x=3.1, 3.01, 3.001$, etc., but not $2.9, 2.99, 2.999$, etc.

${x \to a^-}$ describes what happens when $x$ is slightly less than $a$.  That is, $\displaystyle\lim_{x \to 3^-}f(x)$ involves looking at $x= 2.9, 2.99, 2.999$, etc., and ignoring what happens when $x=3.1, 3.01, 3.001$, etc.

Note that if something happens as $x \to a^+$ and the same thing happens as $x \to a^-$, then the same also happens as $x \to a$. Conversely, if something happens as $x \to a$, then it also happens as $x \to a^+$ and as $x \to a^-$.

Here is what the limit can be (if it exists):

$\displaystyle\lim_{x\to a} f(x) = L$ means that $f(x)$ is close to the number $L$ when $x$ is near $a$. This is the most common type of limit.

$\displaystyle\lim_{x\to a} f(x) = \infty$ means that $f(x)$ grows without bound as $x$ approaches $a$, eventually becoming bigger than any number you can name. Remember that $\infty$ is not a number! Rather, $\infty$ is a process of growth that never ends.

$\displaystyle\lim_{x\to a} f(x) = -\infty$ means that as $x$ approaches $a$, $f(x)$ goes extremely negative and never comes back, eventually becoming less than any number (say, minus a trillion) that you care to name.

With these ingredients we can make sense of any limit statement. Examples:

• $\displaystyle\lim_{x \to 4^-} \tfrac{1}{4-x} = \infty$ means that whenever $x$ is slightly less than $4$, $\frac{1}{4-x}$ is gigantic and positive. The graph $y=\frac{1}{4-x}$ will shoot upwards on the left side of the vertical asymptote $x=4$.   DO:  Do the work to show $\displaystyle\lim_{x \to 4^+} \tfrac{1}{4-x} = -\infty$.  What does the graph of $y=\tfrac{1}{4-x}$ look like at $x=4$?  
What is $\displaystyle\lim_{x \to 4} \tfrac{1}{4-x}$? • $\displaystyle\lim_{x \to 0} 13 (x+1) = 13$ means that $f(x)=13(x+1)$ is close to 13 whenever $x$ is close to 0. So $f(0.01)$ will be close to 13, and $f(0.000001)$ will be really close to 13. • DO:  Show that $\displaystyle\lim_{x \to 0} \tfrac1x$ does not exist.
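A quick numerical probe of the one-sided examples above (my own illustration, not part of the original page):

```python
# Probe x -> 4^- and x -> 4^+ for f(x) = 1/(4 - x).
f = lambda x: 1 / (4 - x)

for x in (3.9, 3.99, 3.999):
    print(x, f(x))   # grows without bound: the limit from the left is +infinity
for x in (4.1, 4.01, 4.001):
    print(x, f(x))   # goes extremely negative: the limit from the right is -infinity
# The one-sided behaviors disagree, so the two-sided limit at x = 4 does not exist.
```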
{}
Find the best alignment between a PDB structure and an existing alignment. Then, given a set of column indices of the original alignment, return atom selections of the equivalent C-alpha atoms in the PDB structure.

```
pdb2aln.ind(aln, pdb, inds = NULL, ...)
```

aln: an alignment list object with id and ali components, similar to that generated by read.fasta, read.fasta.pdb, pdbaln, and seqaln.

pdb: the PDB object to be aligned to aln.

inds: a numeric vector containing a subset of column indices of aln. If NULL, non-gap positions of aln$ali are used.

...: additional arguments passed to pdb2aln.

## Details

Call pdb2aln to align the sequence of pdb to aln. Then, find the atomic indices of C-alpha atoms in pdb that are equivalent to inds, the subset of column indices of aln$ali. The function is a routine utility in a combined analysis of molecular dynamics (MD) simulation trajectories and crystallographic structures. For example, a typical post-analysis of MD simulation is to compare the principal components (PCs) derived from simulation trajectories with those derived from crystallographic structures. The C-alpha atoms used to fit trajectories and do PCA must be the same as (or equivalent to) those used in the analysis of crystallographic structures, e.g. the 'non-gap' alignment positions. Call pdb2aln.ind with the relevant alignment positions, and one can easily get equivalent atom selections ('select' class objects) for the simulation topology (PDB) file and then do proper trajectory analysis.

## Value

Returns a list containing two "select" objects:

a: atom and xyz indices for the alignment.

b: atom and xyz indices for the PDB.

Note that if any element of inds has no corresponding CA atom in the PDB, the output a$atom and b$atom will be shorter than inds, i.e. only indices having equivalent CA atoms are returned.

## References

Grant, B.J. et al. (2006) Bioinformatics 22, 2695--2696.

## Author

Xin-Qiu Yao, Lars Skjaerven & Barry Grant

## See Also

seq2aln, seqaln.pair, pdb2aln

## Examples

```
if (FALSE) {
##--- Read aligned PDB coordinates (CA only)
##--- Read the topology file of MD simulations
##--- For illustration, here we read another pdb file (all atoms)

#--- Map the non-gap positions to PDB C-alpha atoms
#pc.inds <- gap.inspect(pdbs$ali)
#npc.inds <- pdb2aln.ind(aln=pdbs, pdb=pdb, inds=pc.inds$f.inds)
#npc.inds$a
#npc.inds$b

#--- Or, map the non-gap positions with a known close sequence in the alignment
#npc.inds <- pdb2aln.ind(aln=pdbs, pdb=pdb, aln.id="1bg2", inds=pc.inds$f.inds)

#--- Map core positions
core <- core.find(pdbs)
core.inds <- pdb2aln.ind(aln=pdbs, pdb=pdb, inds = core$c1A.atom)
core.inds$a
core.inds$b

##--- Fit simulation trajectories to one of the X-ray structures based on
##--- core positions
#xyz <- fit.xyz(pdbs$xyz[1,], pdb$xyz, core.inds$a$xyz, core.inds$b$xyz)

##--- Do PCA of trajectories based on non-gap positions
#pc.traj <- pca(xyz[, npc.inds$b$xyz])
}
```
{}
# Vector Math

Level 2

A grid in three dimensions consists of two vectors: A and B

A = 6i - 2j + 2k
B = -i + 2j + 5k

What is the angle between the two lines at the point where they intersect?
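Working the standard dot-product formula for the angle, $\cos\theta = \mathbf{A}\cdot\mathbf{B} / (|\mathbf{A}||\mathbf{B}|)$ (my own check of the problem):

```python
import math

A = (6, -2, 2)
B = (-1, 2, 5)
dot = sum(a * b for a, b in zip(A, B))            # 6*(-1) + (-2)*2 + 2*5 = 0
norm = lambda v: math.sqrt(sum(t * t for t in v))
theta = math.degrees(math.acos(dot / (norm(A) * norm(B))))
print(theta)  # 90.0: the dot product vanishes, so the lines are perpendicular
```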
{}
# Order of Group. Direct Product of Cyclic Group

• November 19th 2011, 06:24 AM
H12504106
Order of Group. Direct Product of Cyclic Group

Let $G = (\mathbb{Z}/7\mathbb{Z})^* \times (\mathbb{Z}/11\mathbb{Z})^*$. How many elements are there in the group G? Also show that G is not cyclic.

I know that the number of elements in $(\mathbb{Z}/7\mathbb{Z})^*$ is 6 and the number of elements in $(\mathbb{Z}/11\mathbb{Z})^*$ is 10. So can I conclude that the total number of elements is then 60? As for the cyclic part, I don't have any idea where to start. I know that a group is cyclic if for every element $g \in G$, $g = r^i$, where $r$ is the generator of the cyclic group. But how do I show that such a generator does not exist? Thanks.

• November 19th 2011, 07:22 AM
Deveno
Re: Order of Group. Direct Product of Cyclic Group

Yes, there are 60 elements, as the underlying set is the Cartesian product. Naturally, to prove it is not cyclic, you must show that no element is of order 60. Use the fact that $(a,b)^d = (a^d,b^d)$.

• November 19th 2011, 07:31 AM
interneti
Re: Order of Group. Direct Product of Cyclic Group

• November 19th 2011, 07:38 AM
Deveno
Re: Order of Group. Direct Product of Cyclic Group

Quote: Originally Posted by interneti

interneti, the group in question is the direct product of the groups of units of Z7 and Z11, U(7)xU(11). For example, U(7) = {1,2,3,4,5,6} and U(11) = {1,2,3,4,5,6,7,8,9,10}. In Z7xZ11, (1,1) has order 77, but in Z7*xZ11*, (1,1) has order 1: it is the identity element. In Z7*, 3 is a generator: <3> = {3,2,6,4,5,1}. This gives an isomorphism of (Z7*,*) with (Z6,+): 3^k <---> k. You can show Z11* is likewise cyclic, and isomorphic to (Z10,+). Can you see why Z6 x Z10 is not cyclic?

• November 19th 2011, 07:45 AM
H12504106
Re: Order of Group. Direct Product of Cyclic Group

Quote: Originally Posted by Deveno
Yes, there are 60 elements, as the underlying set is the Cartesian product. Naturally, to prove it is not cyclic, you must show that no element is of order 60. Use the fact that $(a,b)^d = (a^d,b^d)$.

So I take $a \in (\mathbb{Z}/7\mathbb{Z})^*$ and $b \in (\mathbb{Z}/11\mathbb{Z})^*$ and d = 60. So $(a,b)^{60} = (a^{60}, b^{60})$. I am not very sure about the exact working, but a rough idea is that $(a^{60}, b^{60})$ will never be equal to $(1,1)$. Is the reason due to the fact that 7 and 11 are relatively prime?

• November 19th 2011, 07:50 AM
Deveno
Re: Order of Group. Direct Product of Cyclic Group

No, $(a,b)^{60} = (a^{60}, b^{60})$ will always be (1,1), as a direct consequence of Lagrange. It is still the case that the order of every group element will be a divisor of 60; the challenge is to show that some number LESS than 60 will work for any group element.

• November 19th 2011, 08:08 AM
H12504106
Re: Order of Group. Direct Product of Cyclic Group

Quote: Originally Posted by Deveno
No, $(a,b)^{60} = (a^{60}, b^{60})$ will always be (1,1), as a direct consequence of Lagrange. It is still the case that the order of every group element will be a divisor of 60; the challenge is to show that some number LESS than 60 will work for any group element.

If I take d = lcm(6,10) = 30, then $(a,b)^{30} = (a^{30}, b^{30})$ will always be (1,1). Hence, there is no element of order 60 and thus the group is not cyclic. Is that correct?

• November 19th 2011, 08:17 AM
interneti
Re: Order of Group. Direct Product of Cyclic Group

THANK YOU DEVENO. You are right. The direct product of groups G = (Z/7Z)*(Z/11Z): the cardinality of G, |G|, is infinite. For example, take ab from G (a from Z/7Z and b from Z/11Z, both different from 1): ab*ab*ab*...*ab = abab...ab is different from 1, because a and b are in different groups. So the cardinality of G must be infinite. G is not cyclic because if we take ab, we have that for every z from Z, a^z is never b, so b is not in <a>. From this, G is not cyclic. THANK YOU

• November 19th 2011, 09:05 AM
Deveno
Re: Order of Group. Direct Product of Cyclic Group

Quote: Originally Posted by H12504106
If I take d = lcm(6,10) = 30, then $(a,b)^{30} = (a^{30}, b^{30})$ will always be (1,1). Hence, there is no element of order 60 and thus the group is not cyclic. Is that correct?

That's the idea; now prove it.

• November 19th 2011, 02:06 PM
Drexel28
Re: Order of Group. Direct Product of Cyclic Group

Just a remark, which should be apparent from Deveno's and your discussion: you can prove in general that for finite groups $A,B$, $A\times B$ is cyclic if and only if $A,B$ are cyclic and $(|A|,|B|)=1$.
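A brute-force confirmation of the conclusion (my own sketch): the order of $(a,b)$ is $\mathrm{lcm}(\mathrm{ord}(a), \mathrm{ord}(b))$, which divides $\mathrm{lcm}(6,10) = 30$, so no element of U(7) x U(11) has order 60.

```python
from math import gcd

def order(a, n):
    k, x = 1, a % n
    while x != 1:          # multiply until we return to the identity
        x = (x * a) % n
        k += 1
    return k

U7, U11 = range(1, 7), range(1, 11)     # unit groups mod the primes 7 and 11
orders = {order(a, 7) * order(b, 11) // gcd(order(a, 7), order(b, 11))
          for a in U7 for b in U11}     # order of (a, b) is the lcm
print(len(U7) * len(U11), max(orders))  # 60 elements, maximum order 30
```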
{}
International Tables for Crystallography Volume B: Reciprocal space. Edited by U. Shmueli. International Tables for Crystallography (2010). Vol. B, ch. 2.1, pp. 202-203.

## Section 2.1.6. Distributions of sums, averages and ratios

U. Shmueli (School of Chemistry, Tel Aviv University, Tel Aviv 69978, Israel) and A. J. C. Wilson (St John's College, Cambridge, England). Correspondence e-mail: ushmueli@post.tau.ac.il

### 2.1.6. Distributions of sums, averages and ratios

#### 2.1.6.1. Distributions of sums and averages

In Section 2.1.2.1, it was shown that the average intensity of a 'sufficient' number of reflections is given by equation (2.1.2.4). When the number of reflections is not 'sufficient', their mean value will show statistical fluctuations about that average; such statistical fluctuations are in addition to any systematic variation resulting from non-independence of atomic positions, as discussed in Sections 2.1.2.1–2.1.2.3. We thus need to consider the probability density functions of such sums and averages, where each term is the intensity of the ith reflection. The probability density distributions are easily obtained from a property of gamma distributions: if $x_1, \ldots, x_n$ are independent gamma-distributed variables with parameters $p_1, \ldots, p_n$, their sum is a gamma-distributed variable with parameter $p$ equal to the sum of the parameters. The sum of n intensities drawn from an acentric distribution thus has a gamma distribution with parameter $p = n$: the parameters of the variables added are all equal to unity, so that their sum is $n$. Similarly, the sum of n intensities drawn from a centric distribution has a gamma distribution with parameter $p = n/2$: each parameter has the value of one-half. The corresponding distributions of the averages $Y$ of n intensities then follow, for the acentric and for the centric case. In both cases the expected value of $Y$ is the mean intensity, and the variance decreases in proportion to $1/n$ (with the centric variance twice the acentric), just as would be expected.

#### 2.1.6.2. Distribution of ratios

Ratios like (2.1.6.7), in which the numerator is given by equation (2.1.6.1) and the denominator is formed from the intensities of a set of reflections (which may or may not overlap with those included in the numerator), are used in correlating intensities measured under different conditions. They arise in correlating reflections on different layer lines from the same or different specimens, in correlating the same reflections from different crystals, in normalizing intensities to the local average or to Σ, and in certain systematic trial-and-error methods of structure determination (see Rabinovich & Shakked, 1984, and references therein). There are three main cases: (i) the two refer to the same reflection; for example, they might be the observed and calculated quantities for the reflection measured under different conditions or for different crystals of the same substance; or (ii) the two are unrelated; for example, the observed and calculated values for the reflection for a completely wrong trial structure, or values for entirely different reflections, as in reducing photographic measurements on different layer lines to the same scale; or (iii) one set of intensities is a subset of the other. Aside from the scale factor, in case (i) the two will differ chiefly through relatively small statistical fluctuations and uncorrected systematic errors, whereas in case (ii) the differences will be relatively large because of the inherent differences in the intensities. Here we are concerned only with cases (ii) and (iii); the practical problems of case (i) are postponed to IT C (2004), Chapter 7.5.
There is little in the crystallographic literature concerning the probability distribution of sums like (2.1.6.1) or ratios like (2.1.6.7); certain results are reviewed by Srinivasan & Parthasarathy (1976, ch. 5), but with a bias toward partially related structures that makes it difficult to apply them to the immediate problem.

In case (ii) (numerator and denominator independent), acentric distribution, Table 2.1.5.1 gives the distribution of the ratio of the two means as a beta distribution of the second kind, where Y is given by equation (2.1.6.2) and Z by the analogous average over the second set [equation (2.1.6.8)], n being the number of intensities included in the numerator and m the number in the denominator. The expected value of the ratio is then m/(m − 1) times the true scaling factor [equation (2.1.6.12)], with a variance that depends on both n and m. One sees that the ratio is a biased estimate of the scaling factor between two sets of intensities, and the bias, of the order of m⁻¹, depends only on the number of intensities averaged in the denominator. This may seem odd at first sight, but it becomes plausible when one remembers that the mean of a quantity is an unbiased estimator of itself, but the reciprocal of a mean is not an unbiased estimator of the mean of a reciprocal. The mean exists only if m > 1 and the variance only for m > 2.

In the centric case, the distribution of the ratio of the two means Y and Z is again a beta distribution of the second kind, with the expected value of the ratio equal to m/(m − 2) times the true scaling factor [equation (2.1.6.15)] and with a correspondingly larger variance. For the same number of reflections, the bias in the ratio and the variance for the centric distribution are considerably larger than for the acentric. For both distributions the variance of the scaling factor approaches zero when n and m become large. The variances are large for m small, in fact 'infinite' if the number of terms averaged in the denominator is sufficiently small. These biases are readily removed by multiplying the ratio by (m − 1)/m or (m − 2)/m. Many methods of estimating scaling factors – perhaps most – also introduce bias (Wilson, 1975; Lomer & Wilson, 1975; Wilson, 1976, 1978c) that is not so easily removed. Wilson (1986a) has given reasons for supposing that the bias of the ratio (2.1.6.7) approximates to 1 + σ²(I)/(m⟨I⟩²) whatever the intensity distribution. Equations (2.1.6.12) and (2.1.6.15) are consistent with this.

#### 2.1.6.3. Intensities scaled to the local average

When the intensities in the numerator are a subset of those in the denominator, the beta distributions of the second kind are replaced by beta distributions of the first kind, with means and variances readily found from Table 2.1.5.1. The distribution of such a ratio is chiefly of interest when Y relates to a single reflection and Z relates to a group of m intensities including Y. This corresponds to normalizing intensities to the local average. In the acentric case the normalized intensity has an expected value of unity; there is no bias, as is obvious a priori. Its variance is less than the variance of the intensities normalized to an 'infinite' population by a fraction of the order of m⁻¹. Unlike the variance of the scaling factor, the variance of the normalized intensity approaches unity as n becomes large. For intensities having a centric distribution, the distribution normalized to the local average likewise has an expected value of unity, with a variance less than that for an 'infinite' population by a somewhat larger fraction. Similar considerations apply to intensities normalized to Σ in the usual way, since they are equal to those normalized to the local average multiplied by the ratio of the local average to Σ.
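The bias factors quoted in Section 2.1.6.2 can be checked by simulation as well; a minimal sketch (assuming NumPy; not part of the original text) for the acentric case (ii), with independent numerator and denominator of equal true mean, so that the true scaling factor is 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, trials = 10, 10, 500_000

Y = rng.exponential(1.0, size=(trials, n)).mean(axis=1)  # numerator averages
Z = rng.exponential(1.0, size=(trials, m)).mean(axis=1)  # independent denominators

rho = Y / Z
print(rho.mean(), m / (m - 1))     # ~1.111: biased, though the true factor is 1
print((rho * (m - 1) / m).mean())  # ~1.000 after removing the bias
```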
#### 2.1.6.4. The use of normal approximations

Since the quantities defined by equations (2.1.6.1) and (2.1.6.8) are sums of identically distributed variables conforming to the conditions of the central-limit theorem, it is tempting to approximate their distributions by normal distributions with the correct mean and variance. This would be reasonably satisfactory for the distributions of the sums and averages themselves for quite small values of n and m, but unsatisfactory for the distribution of their ratio for any values of n and m, even large. The ratio of two variables with normal distributions is notorious for its rather indeterminate mean and 'infinite' variance, resulting from the 'tail' of the denominator distribution extending through zero to negative values. The leading terms of the ratio distribution are given by Kendall & Stuart (1977, p. 288).

### References

International Tables for Crystallography (2004). Vol. C, Mathematical, Physical and Chemical Tables, edited by E. Prince. Dordrecht: Kluwer Academic Publishers.
Kendall, M. & Stuart, A. (1977). The Advanced Theory of Statistics, Vol. 1, 4th ed. London: Griffin.
Lomer, T. R. & Wilson, A. J. C. (1975). Scaling of intensities. Acta Cryst. B31, 646–647.
Rabinovich, D. & Shakked, Z. (1984). A new approach to structure determination of large molecules by multi-dimensional search methods. Acta Cryst. A40, 195–200.
Srinivasan, R. & Parthasarathy, S. (1976). Some Statistical Applications in X-ray Crystallography. Oxford: Pergamon Press.
Wilson, A. J. C. (1975). Effect of neglect of dispersion on apparent scale and temperature factors. In Anomalous Scattering, edited by S. Ramaseshan & S. C. Abrahams, pp. 325–332. Copenhagen: Munksgaard.
Wilson, A. J. C. (1976). Statistical bias in least-squares refinement. Acta Cryst. A32, 994–996.
Wilson, A. J. C. (1978c). Statistical bias in scaling factors: Erratum. Acta Cryst. B34, 1749.
Wilson, A. J. C. (1986a). Distributions of sums and ratios of sums of intensities. Acta Cryst. A42, 334–339.
### Recent Submissions

• #### A zero liquid discharge system integrating multi-effect distillation and evaporative crystallization for desalination brine treatment (Desalination, Elsevier BV, 2021-01-13) [Article]

• #### Generation of iPSC lines (KAUSTi011-A, KAUSTi011-B) from a Saudi patient with epileptic encephalopathy carrying homozygous mutation in the GLP1R gene. (Stem cell research, Elsevier BV, 2021-01-09) [Article]

Glucagon-like peptide-1 receptor (GLP1R) is a seven-transmembrane-helix membrane protein expressed in multiple human tissues including pancreatic islets, lung, brain, heart and central nervous system (CNS). GLP1R agonists are commonly used as antidiabetic drugs, but a neuroprotective function in neurodegenerative disorders is emerging. Here, we established two iPSC lines from a patient harboring a rare homozygous splice site variant in GLP1R (NM_002062.3; c.402 + 3delG). This patient displays severe developmental delay and epileptic encephalopathy. Therefore, the derivation of these iPSC lines constitutes a primary model to study the molecular pathology of GLP1R dysfunction and develop novel therapeutic targets.

• #### Chromatin phosphoproteomics unravels a function for AT-hook motif nuclear localized protein AHL13 in PAMP-triggered immunity (Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, 2021-01-08) [Article]

In many eukaryotic systems during immune responses, mitogen-activated protein kinases (MAPKs) link cytoplasmic signaling to chromatin events by targeting transcription factors, chromatin remodeling complexes, and the RNA polymerase machinery. So far, knowledge on these events is scarce in plants and no attempts have been made to focus on phosphorylation events of chromatin-associated proteins. Here we carried out chromatin phosphoproteomics upon elicitor-induced activation of Arabidopsis. The events in WT were compared with those in mpk3, mpk4, and mpk6 mutant plants to decipher specific MAPK targets. Our study highlights distinct signaling networks involving MPK3, MPK4, and MPK6 in chromatin organization and modification, as well as in RNA transcription and processing. Among the chromatin targets, we characterized the AT-hook motif containing nuclear localized (AHL) DNA-binding protein AHL13 as a substrate of immune MAPKs. AHL13 knockout mutant plants are compromised in pathogen-associated molecular pattern (PAMP)-induced reactive oxygen species production, expression of defense genes, and PAMP-triggered immunity. Transcriptome analysis revealed that AHL13 regulates key factors of jasmonic acid biosynthesis and signaling and affects immunity toward Pseudomonas syringae and Botrytis cinerea pathogens. Mutational analysis of the phosphorylation sites of AHL13 demonstrated that phosphorylation regulates AHL13 protein stability and thereby its immune functions.

• #### Elucidating the Role of Virulence Traits in the Survival of Pathogenic E. coli PI-7 Following Disinfection (Frontiers in bioengineering and biotechnology, Frontiers Media SA, 2021-01-08) [Article]

Reuse and discharge of treated wastewater can result in dissemination of microorganisms into the environment. Deployment of disinfection strategies is typically proposed as a last-stage remediation effort to further inactivate viable microorganisms.
In this study, we hypothesize that virulence traits, including biofilm formation, motility, and siderophore and curli production, along with the capability to internalize into mammalian cells, play a role in survival against disinfectants. The pathogenic E. coli PI-7 strain was used as a model bacterium that was exposed to diverse disinfection strategies such as chlorination, UV and solar irradiation. To this end, we used a random transposon mutagenesis library screening approach to generate 14 mutants that exhibited varying levels of virulence traits. In these 14 isolated mutants, we observed that an increase in virulence traits such as biofilm formation, motility, curli production, and internalization capability increased the inactivation half-lives of mutants compared to wild-type E. coli PI-7. In addition, oxidative stress response and EPS production contributed to lengthening the lag phase duration (defined as the time required for exposure to disinfectant prior to decay). However, traits related to siderophore production did not help with survival against the tested disinfection strategies. Taken together, the findings suggested that selected virulence traits facilitate survival of pathogenic E. coli PI-7, which in turn could account for the selective enrichment of pathogens over the nonpathogenic ones after wastewater treatment. Further, the study also reflected on the effectiveness of UV as a more viable disinfection strategy for inactivation of pathogens.

• #### Molecular basis for the adaptive evolution of environment sensing by H-NS proteins (eLife, eLife Sciences Publications, Ltd, 2021-01-07) [Article]

The DNA-binding protein H-NS is a pleiotropic gene regulator in gram-negative bacteria. Through its capacity to sense temperature and other environmental factors, H-NS allows pathogens like Salmonella to adapt their gene expression to their presence inside or outside warm-blooded hosts. To investigate how this sensing mechanism may have evolved to fit different bacterial lifestyles, we compared H-NS orthologs from bacteria that infect humans, plants, and insects, and from bacteria that live on a deep-sea hydrothermal vent. The combination of biophysical characterization, high-resolution proton-less NMR spectroscopy and molecular simulations revealed, at an atomistic level, how the same general mechanism was adapted to specific habitats and lifestyles. In particular, we demonstrate how environment-sensing characteristics arise from specifically positioned intra- or intermolecular electrostatic interactions. Our integrative approach clarified the exact modus operandi for H-NS–mediated environmental sensing and suggests that this sensing mechanism resulted from the exaptation of an ancestral protein feature.

• #### Hole-Type Spacers for More Stable Shale Gas-Produced Water Treatment by Forward Osmosis (Membranes, MDPI AG, 2021-01-03) [Article]

An appropriate spacer design helps in minimizing membrane fouling, which remains the major obstacle in forward osmosis (FO) systems. In the present study, the performance of a hole-type spacer (having holes at the filament intersections) was evaluated in a FO system and compared to a standard spacer design (without holes). The hole-type spacer exhibited slightly higher water flux and reverse solute flux (RSF) when Milli-Q water was used as feed solution and varied sodium chloride concentrations as draw solution.
During shale gas produced water treatment, a severe flux decline was observed for both spacer designs due to the formation of barium sulfate scaling. SEM imaging revealed that the high shear force induced by the creation of holes led to the formation of scales on the entire membrane surface, causing a slightly higher flux decline than the standard spacer. Simultaneously, the presence of holes helped mitigate the accumulation of foulants on the spacer surface, resulting in no increase in pressure drop. Furthermore, full cleaning efficiency was achieved by the hole-type spacer, attributed to the micro-jet effect induced by the holes, which helped destroy the foulants and then sweep them away from the membrane surface.

• #### Assembly of Two CCDD Rice Genomes, Oryza grandiglumis and Oryza latifolia, and the Study of Their Evolutionary Changes (2021-01) [Thesis] Committee members: Gojobori, Takashi; Zuccolo, Andrea

Every day more than half of the world consumes rice as a primary dietary resource. Thus, rice is one of the most important food crops in the world. Rice and its wild relatives are part of the genus Oryza. Studying the genome structure, function, and evolution of Oryza species in a comparative genomics framework is a useful approach to provide a wealth of knowledge that can significantly improve valuable agronomic traits. The Oryza genus includes 27 species, with 11 different genome types as identified by genetic and cytogenetic analyses. Six genome types, including that of domesticated rice - O. sativa and O. glaberrima, are diploid, and the remaining 5 are tetraploids. Three of the tetraploid species contain the CCDD genome types (O. grandiglumis, O. latifolia, and O. alta), which arose less than 2 million years ago. Polyploidization is one of the major contributors to evolutionary divergence and can thereby lead to adaptation to new environmental niches. An important first step in the characterization of the polyploid Oryza species is the generation of a high-quality reference genome sequence. Unfortunately, up until recently, the generation of such an important and fundamental resource from polyploid species has been challenging, primarily due to their genome complexity and repetitive sequence content. In this project, I assembled two high-quality genome assemblies for O. grandiglumis and O. latifolia using PacBio long-read sequencing technology and an assembly pipeline that employed 3 genome assemblers (i.e., Canu/2.0, Mecat2, and Flye/2.5) and multiple rounds of sequence polishing with both Arrow and Pilon/1.23. After the primary assembly, sequence contigs were arranged into pseudomolecules, and homoeologous chromosomes were assigned to their respective genome types (i.e., CC or DD). Finally, the assemblies were extensively edited manually to close as many gaps as possible. Both assemblies were then analyzed for transposable element and structural variant content between species and homoeologous chromosomes. This enabled us to study the evolutionary divergence of those two genomes, and to explore the possibility of neo-domesticating either species in future research for my PhD dissertation.

• #### Noble metal nanowire arrays as an ethanol oxidation electrocatalyst (Nanoscale Advances, Royal Society of Chemistry (RSC), 2021) [Article]

Vertically aligned noble metal nanowire arrays were grown on conductive electrodes based on a solution growth method.
They show a significant improvement in electrocatalytic activity for ethanol oxidation over a re-deposited sample of the same detached nanowires. The unusual morphology provides open diffusion channels and direct charge transport pathways, in addition to the high electrochemically active surface from the ultrathin nanowires. Our best nanowire arrays exhibited much enhanced electrocatalytic activity, achieving a 38.0-fold increase in specific activity over that of commercial catalysts for ethanol electrooxidation. The structural design provides a new direction to enhance the electrocatalytic activity and reduce the size of electrodes for miniaturization of portable electrochemical devices.

• #### Engineered Microgels—Their Manufacturing and Biomedical Applications (Micromachines, MDPI AG, 2021-01-01) [Article]

Microgels are hydrogel particles with diameters in the micrometer scale that can be fabricated in different shapes and sizes. Microgels are increasingly used for biomedical applications and for biofabrication due to their interesting features, such as injectability, modularity, porosity and tunability in respect to size, shape and mechanical properties. Fabrication methods of microgels are divided into two categories, following a top-down or bottom-up approach. Each approach has its own advantages and disadvantages and requires certain sets of materials and equipment. In this review, we discuss fabrication methods of both top-down and bottom-up approaches and point to their advantages as well as their limitations, with more focus on the bottom-up approaches. In addition, the use of microgels for a variety of biomedical applications will be discussed, including microgels for the delivery of therapeutic agents and microgels as cell carriers for the fabrication of 3D bioprinted cell-laden constructs. Microgels made from well-defined synthetic materials with a focus on rationally designed ultrashort peptides are also discussed, because they have been demonstrated to serve as an attractive alternative to much less defined naturally derived materials. Here, we will emphasize the potential and properties of ultrashort self-assembling peptides related to microgels.

• #### Imaging of organic signals in individual fossil diatom frustules with nanoSIMS and Raman spectroscopy (Marine Chemistry, Elsevier BV, 2021-01) [Article]

The organic matter occluded in the silica of fossil diatom frustules is thought to be protected from diagenesis and used for paleoceanographic reconstructions. However, the location of the organic matter within the frustule has hitherto not been identified. Here, we combined high spatial resolution imaging by nanoSIMS and Raman micro-spectroscopy to identify where the organic material is retained in cleaned fossil diatom frustules. NanoSIMS imaging revealed that organic signals were present throughout the frustule but in higher concentrations at the pore walls. Raman measurements confirmed the heterogeneous presence of organics but could not, because of lower spatial resolution, resolve the spatial patterns observed by nanoSIMS.
• #### Combining Nadir, Oblique, and Façade Imagery Enhances Reconstruction of Rock Formations Using Unmanned Aerial Vehicles (IEEE Transactions on Geoscience and Remote Sensing, IEEE, 2021) [Article]

• #### Arabidopsis Plant Natriuretic Peptide Is a Novel Interactor of Rubisco Activase (Life, MDPI AG, 2020-12-31) [Article]

Plant natriuretic peptides (PNPs) are a group of systemically acting peptidic hormones affecting solute and solvent homeostasis and responses to biotrophic pathogens. Although an increasing body of evidence suggests PNPs modulate plant responses to biotic and abiotic stress, which could lead to their potential biotechnological application by conferring increased stress tolerance to plants, the exact mode of PNP action is still elusive. In order to gain insight into PNP-dependent signalling, we set out to identify interactors of the PNP present in the model plant Arabidopsis thaliana, termed AtPNP-A. Here, we report identification of rubisco activase (RCA), a central regulator of photosynthesis converting Rubisco catalytic sites from a closed to an open conformation, as an interactor of AtPNP-A through affinity isolation followed by mass spectrometric identification. Surface plasmon resonance (SPR) analyses reveal that the full-length recombinant AtPNP-A and the biologically active fragment of AtPNP-A bind specifically to RCA, whereas a biologically inactive scrambled peptide fails to bind. These results are considered in the light of known functions of PNPs, PNP-like proteins, and RCA in biotic and abiotic stress responses.

• #### Carotenoid Biofortification of Crops in the CRISPR Era (Trends in Biotechnology, Elsevier BV, 2020-12-29) [Article]

Carotenoids are micronutrients important for human health. The continuous improvements in clustered regularly interspaced short palindromic repeats (CRISPR)-based genome-editing techniques make rapid, DNA/transgene-free and targeted multiplex genetic modification a reality, thus promising to accelerate the breeding and generation of ‘golden’ staple crops. We discuss here the progress and future prospects of CRISPR/Cas9 applications for carotenoid biofortification.

• #### Single-cell Individual Complete mtDNA Sequencing Uncovers Hidden Mitochondrial Heterogeneity in Human and Mouse Oocytes (Cold Spring Harbor Laboratory, 2020-12-29) [Preprint]

The ontogeny and dynamics of mtDNA heteroplasmy remain unclear due to limitations of current mtDNA sequencing methods. We developed individual Mitochondrial Genome sequencing (iMiGseq) of full-length mtDNA for ultra-sensitive variant detection, complete haplotyping, and unbiased evaluation of heteroplasmy levels, all at the individual mtDNA molecule level. iMiGseq uncovers unappreciated levels of heteroplasmic variants in single healthy human oocytes well below the current 1% detection limit, of which numerous variants are detrimental and could contribute to late-onset mitochondrial disease and cancer. Extreme mtDNA heterogeneity among oocytes of the same mouse female, and a strong selection against deleterious mutations in human oocytes, are observed. iMiGseq could comprehensively characterize and haplotype single-nucleotide and structural variants of mtDNA and their genetic linkage in NARP/Leigh syndrome patient-derived cells. Therefore, iMiGseq could not only elucidate the mitochondrial etiology of diseases, but also help diagnose and prevent mitochondrial diseases with unprecedented precision.
• #### The gap-free rice genomes provide insights for centromere structure and function exploration and graph-based pan-genome construction (Cold Spring Harbor Laboratory, 2020-12-25) [Preprint]

Asian rice (Oryza sativa) is divided into two subgroups, indica/xian and japonica/geng, the former having greater intraspecific diversity than the latter. Here, for the first time, we report the assemblies and analyses of two gap-free xian rice varieties, Zhenshan 97 (ZS97) and Minghui 63 (MH63). The genomic sequences of these elite hybrid parents exhibit extensive differences, providing a foundation for studying heterosis. Furthermore, the gap-free rice genomes provide global insights into the structure and function of centromeres on different chromosomes. All the rice centromeric regions share conserved centromere-specific satellite motifs but with different copy numbers and structures. Importantly, we show that there are >1,500 genes in centromere regions and ~16% of them are actively expressed. Based on the MH63 gap-free reference genome, a graph-based rice pan-genome (Os-GPG) was constructed containing presence/absence variations of 79 rice varieties. Compared with the other rice varieties, MH63 contained the largest number of resistance genes. The acquisition of the ZS97 and MH63 gap-free genomes and a graph-based pan-genome of rice lays a solid foundation for the study of genome structure and function in plants.

• #### Performance of Commercially Available Rapid Serological Assays for the Detection of SARS-CoV-2 Antibodies. (Pathogens (Basel, Switzerland), MDPI AG, 2020-12-23) [Article]

The coronavirus disease 2019 (COVID-19) pandemic, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), continues to spread globally. Although several rapid commercial serological assays have been developed, little is known about their performance and accuracy in detecting SARS-CoV-2-specific antibodies in COVID-19 patient samples. Here, we have evaluated the performance of seven commercially available rapid lateral flow immunoassays (LFIA) obtained from different manufacturers, and compared them to in-house developed and validated ELISA assays for the detection of SARS-CoV-2-specific IgM and IgG antibodies in RT-PCR-confirmed COVID-19 patients. While all evaluated LFIA assays showed high specificity, our data showed significant variation in the sensitivity of these assays, which ranged from 0% to 54% for samples collected early during infection (3-7 days post symptoms onset) and from 54% to 88% for samples collected at later time points during infection (8-27 days post symptoms onset). Therefore, we recommend prior evaluation and validation of these assays before they are routinely used to detect IgM and IgG in COVID-19 patients. Moreover, our findings suggest the use of LFIA assays in combination with other standard methods, and not as an alternative.

• #### Imprint of Climate Change on Pan-Arctic Marine Vegetation (Frontiers in Marine Science, Frontiers Media SA, 2020-12-23) [Article]

The Arctic climate is changing rapidly. The warming and resultant longer open water periods suggest a potential for expansion of marine vegetation along the vast Arctic coastline. We compiled and reviewed the scattered time series on Arctic marine vegetation and explored trends for macroalgae and eelgrass (Zostera marina). We identified a total of 38 sites, distributed between Arctic coastal regions in Alaska, Canada, Greenland, Iceland, Norway/Svalbard, and Russia, having time series extending into the 21st Century.
The majority of these exhibited increases in abundance, productivity or species richness, and/or expansion of geographical distribution limits, while several time series showed no significant trend. Only four time series displayed a negative trend, largely due to urchin grazing or increased turbidity. Overall, the observations support with medium confidence (i.e., 5–8 in 10 chance of being correct, adopting the IPCC confidence scale) the prediction that macrophytes are expanding in the Arctic. Species distribution modeling was challenged by limited observations and lack of information on substrate, but suggested a current (2000–2017) potential pan-Arctic macroalgal distribution area of 820,000 km2 (145,000 km2 intertidal, 675,000 km2 subtidal), representing an increase of about 30% for subtidal and 6% for intertidal macroalgae since 1940–1950, and associated polar migration rates averaging 18–23 km per decade. Adjusting the potential macroalgal distribution area by the fraction of shores represented by cliffs halves the estimate (412,634 km2). Warming and reduced sea ice cover along the Arctic coastlines are expected to stimulate further expansion of marine vegetation from boreal latitudes. The changes likely affect the functioning of coastal Arctic ecosystems because of the vegetation’s roles as habitat, and for carbon and nutrient cycling and storage. We encourage a pan-Arctic science and management agenda to incorporate marine vegetation into a coherent understanding of Arctic changes by quantifying distribution and status beyond the scattered studies now available, in order to develop sustainable management strategies for these important ecosystems.
# Concentration inequality

In probability theory, concentration inequalities provide bounds on how a random variable deviates from some value (typically, its expected value). The law of large numbers of classical probability theory states that sums of independent random variables are, under very mild conditions, close to their expectation with a large probability. Such sums are the most basic examples of random variables concentrated around their mean. Recent results show that such behavior is shared by other functions of independent random variables. Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them.

## Markov's inequality

Let ${\displaystyle X}$ be a random variable that is non-negative (almost surely). Then, for every constant ${\displaystyle a>0}$, ${\displaystyle \Pr(X\geq a)\leq {\frac {\operatorname {E} (X)}{a}}.}$ Note the following extension to Markov's inequality: if ${\displaystyle \Phi }$ is a strictly increasing and non-negative function, then ${\displaystyle \Pr(X\geq a)=\Pr(\Phi (X)\geq \Phi (a))\leq {\frac {\operatorname {E} (\Phi (X))}{\Phi (a)}}.}$

## Chebyshev's inequality

Chebyshev's inequality requires the following information on a random variable ${\displaystyle X}$:

• The expected value ${\displaystyle \operatorname {E} [X]}$ is finite.
• The variance ${\displaystyle \operatorname {Var} [X]=\operatorname {E} [(X-\operatorname {E} [X])^{2}]}$ is finite.

Then, for every constant ${\displaystyle a>0}$, ${\displaystyle \Pr(|X-\operatorname {E} [X]|\geq a)\leq {\frac {\operatorname {Var} [X]}{a^{2}}},}$ or equivalently, ${\displaystyle \Pr(|X-\operatorname {E} [X]|\geq a\cdot \operatorname {Std} [X])\leq {\frac {1}{a^{2}}},}$ where ${\displaystyle \operatorname {Std} [X]}$ is the standard deviation of ${\displaystyle X}$. Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable ${\displaystyle |X-\operatorname {E} [X]|}$ with ${\displaystyle \Phi (x)=x^{2}}$.

## Vysochanskij–Petunin inequality

Let X be a random variable with unimodal distribution, mean μ and finite, non-zero variance σ2. Then, for any ${\textstyle \lambda >{\sqrt {\frac {8}{3}}}=1.63299...,}$ ${\displaystyle P(\left|X-\mu \right|\geq \lambda \sigma )\leq {\frac {4}{9\lambda ^{2}}}.}$ (For a relatively elementary proof see e.g. [1]).

## One-sided Vysochanskij–Petunin inequality

For a unimodal random variable ${\displaystyle X}$ and ${\displaystyle r\geq 0}$, the one-sided Vysochanskij-Petunin inequality[2] holds as follows: ${\displaystyle \mathbb {P} (X-E[X]\geq r)\leq {\begin{cases}{\dfrac {4}{9}}{\dfrac {Var(X)}{r^{2}+Var(X)}}&{\mbox{for }}r^{2}\geq {\dfrac {5}{3}}Var(X),\\{\dfrac {4}{3}}{\dfrac {Var(X)}{r^{2}+Var(X)}}-{\dfrac {1}{3}}&{\mbox{otherwise.}}\end{cases}}}$

## Chernoff bounds

The generic Chernoff bound[3]: 63–65  requires only the moment generating function of ${\displaystyle X}$, defined as: ${\displaystyle M_{X}(t):=\operatorname {E} \!\left[e^{tX}\right]}$, provided it exists. Based on Markov's inequality, for every ${\displaystyle t>0}$: ${\displaystyle \Pr(X\geq a)\leq {\frac {\operatorname {E} [e^{t\cdot X}]}{e^{t\cdot a}}},}$ and for every ${\displaystyle t<0}$: ${\displaystyle \Pr(X\leq a)\leq {\frac {\operatorname {E} [e^{t\cdot X}]}{e^{t\cdot a}}}.}$ There are various Chernoff bounds for different distributions and different values of the parameter ${\displaystyle t}$.
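Markov's and Chebyshev's inequalities are simple enough to check by simulation. A minimal sketch (assuming NumPy) with exponential samples, for which E[X] = Var[X] = 1, so both bounds apply directly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(1.0, size=1_000_000)  # non-negative, E[X] = 1, Var[X] = 1
a = 3.0

print((X >= a).mean(), 1.0 / a)                   # tail ~0.050 vs Markov bound 0.333
print((np.abs(X - 1.0) >= a).mean(), 1.0 / a**2)  # tail ~0.018 vs Chebyshev bound 0.111
```

Both empirical tails sit well below the bounds, as they must; the bounds are loose precisely because they use so little information about the distribution.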
See [4]: 5–7  for a compilation of more concentration inequalities.

## Bounds on sums of independent variables

Let ${\displaystyle X_{1},X_{2},\dots ,X_{n}}$ be independent random variables such that, for all i:

${\displaystyle a_{i}\leq X_{i}\leq b_{i}}$ almost surely.
${\displaystyle c_{i}:=b_{i}-a_{i}}$
${\displaystyle \forall i:c_{i}\leq C}$

Let ${\displaystyle S_{n}}$ be their sum, ${\displaystyle E_{n}}$ its expected value and ${\displaystyle V_{n}}$ its variance:

${\displaystyle S_{n}:=\sum _{i=1}^{n}X_{i}}$
${\displaystyle E_{n}:=\operatorname {E} [S_{n}]=\sum _{i=1}^{n}\operatorname {E} [X_{i}]}$
${\displaystyle V_{n}:=\operatorname {Var} [S_{n}]=\sum _{i=1}^{n}\operatorname {Var} [X_{i}]}$

It is often interesting to bound the difference between the sum and its expected value. Several inequalities can be used.

1. Hoeffding's inequality says that: ${\displaystyle \Pr \left[|S_{n}-E_{n}|>t\right]<2\exp \left(-{\frac {2t^{2}}{\sum _{i=1}^{n}c_{i}^{2}}}\right)<2\exp \left(-{\frac {2t^{2}}{nC^{2}}}\right)}$ (a numerical check of this bound appears after the reference list below).
2. The random variable ${\displaystyle S_{n}-E_{n}}$ is a special case of a martingale, and ${\displaystyle S_{0}-E_{0}=0}$. Hence, the general form of Azuma's inequality can also be used and it yields a similar bound: ${\displaystyle \Pr \left[|S_{n}-E_{n}|>t\right]<2\exp \left(-{\frac {2t^{2}}{\sum _{i=1}^{n}c_{i}^{2}}}\right)<2\exp \left(-{\frac {2t^{2}}{nC^{2}}}\right)}$ This is a generalization of Hoeffding's since it can handle other types of martingales, as well as supermartingales and submartingales. Note that if the simpler form of Azuma's inequality is used, the exponent in the bound is worse by a factor of 4.
3. The sum function, ${\displaystyle S_{n}=f(X_{1},\dots ,X_{n})}$, is a special case of a function of n variables. This function changes in a bounded way: if variable i is changed, the value of f changes by at most ${\displaystyle b_{i}-a_{i}}$. Hence, McDiarmid's inequality can also be used and it yields a similar bound: ${\displaystyle \Pr \left[|S_{n}-E_{n}|>t\right]<2\exp \left(-{\frac {2t^{2}}{\sum _{i=1}^{n}c_{i}^{2}}}\right)<2\exp \left(-{\frac {2t^{2}}{nC^{2}}}\right)}$ This is a different generalization of Hoeffding's since it can handle other functions besides the sum function, as long as they change in a bounded way.
4. Bennett's inequality offers some improvement over Hoeffding's when the variances of the summands are small compared to their almost-sure bounds C. It says that: ${\displaystyle \Pr \left[|S_{n}-E_{n}|>t\right]\leq 2\exp \left[-{\frac {V_{n}}{C^{2}}}h\left({\frac {Ct}{V_{n}}}\right)\right],}$ where ${\displaystyle h(u)=(1+u)\log(1+u)-u}$
5. The first of Bernstein's inequalities says that: ${\displaystyle \Pr \left[|S_{n}-E_{n}|>t\right]<2\exp \left(-{\frac {t^{2}/2}{V_{n}+C\cdot t/3}}\right)}$ This is a generalization of Hoeffding's since it can handle random variables with not only an almost-sure bound but both an almost-sure bound and a variance bound.
6. Chernoff bounds have a particularly simple form in the case of sums of independent variables, since ${\displaystyle \operatorname {E} [e^{t\cdot S_{n}}]=\prod _{i=1}^{n}{\operatorname {E} [e^{t\cdot X_{i}}]}}$. For example,[5] suppose the variables ${\displaystyle X_{i}}$ satisfy ${\displaystyle X_{i}\geq E(X_{i})-a_{i}-M}$, for ${\displaystyle 1\leq i\leq n}$.
Then we have the lower tail inequality: ${\displaystyle \Pr[S_{n}-E_{n}<-\lambda ]\leq \exp \left(-{\frac {\lambda ^{2}}{2(V_{n}+\sum _{i=1}^{n}a_{i}^{2}+M\lambda /3)}}\right)}$ If ${\displaystyle X_{i}}$ satisfies ${\displaystyle X_{i}\leq E(X_{i})+a_{i}+M}$, we have the upper tail inequality: ${\displaystyle \Pr[S_{n}-E_{n}>\lambda ]\leq \exp \left(-{\frac {\lambda ^{2}}{2(V_{n}+\sum _{i=1}^{n}a_{i}^{2}+M\lambda /3)}}\right)}$ If ${\displaystyle X_{i}}$ are i.i.d., ${\displaystyle |X_{i}|\leq 1}$ and ${\displaystyle \sigma ^{2}}$ is the variance of ${\displaystyle X_{i}}$, a typical version of the Chernoff inequality is: ${\displaystyle \Pr[|S_{n}|\geq k\sigma ]\leq 2e^{-k^{2}/4n}{\text{ for }}0\leq k\leq 2\sigma .}$
7. Similar bounds can be found in: Rademacher distribution#Bounds on sums

## Efron–Stein inequality

The Efron–Stein inequality (or influence inequality, or MG bound on variance) bounds the variance of a general function. Suppose that ${\displaystyle X_{1}\dots X_{n}}$, ${\displaystyle X_{1}'\dots X_{n}'}$ are independent with ${\displaystyle X_{i}'}$ and ${\displaystyle X_{i}}$ having the same distribution for all ${\displaystyle i}$. Let ${\displaystyle X=(X_{1},\dots ,X_{n}),X^{(i)}=(X_{1},\dots ,X_{i-1},X_{i}',X_{i+1},\dots ,X_{n}).}$ Then ${\displaystyle \mathrm {Var} (f(X))\leq {\frac {1}{2}}\sum _{i=1}^{n}E[(f(X)-f(X^{(i)}))^{2}].}$

## Bretagnolle–Huber–Carol inequality

The Bretagnolle–Huber–Carol inequality bounds the difference between a vector of multinomially distributed random variables and a vector of expected values. [6][7] A simple proof appears in [8] (Appendix Section). If a random vector ${\displaystyle (Z_{1},Z_{2},Z_{3},\ldots ,Z_{n})}$ is multinomially distributed with parameters ${\displaystyle (p_{1},p_{2},\ldots ,p_{n})}$ and satisfies ${\displaystyle Z_{1}+Z_{2}+\dots +Z_{n}=M,}$ then ${\displaystyle \Pr \left(\sum _{i=1}^{n}|Z_{i}-Mp_{i}|\geq 2M\varepsilon \right)\leq 2^{n}e^{-2M\varepsilon ^{2}}.}$ This inequality is used to bound the total variation distance.

## Mason and van Zwet inequality

The Mason and van Zwet inequality[9] for multinomial random vectors concerns a slight modification of the classical chi-square statistic. Let the random vector ${\displaystyle (N_{1},\ldots ,N_{k})}$ be multinomially distributed with parameters ${\displaystyle n}$ and ${\displaystyle (p_{1},\ldots ,p_{k})}$ such that ${\displaystyle p_{i}>0}$ for ${\displaystyle 1\leq i\leq k-1}$. Then for every ${\displaystyle C>0}$ and ${\displaystyle \delta >0}$ there exist constants ${\displaystyle a,b,c>0,}$ such that for all ${\displaystyle n\geq 1}$ and ${\displaystyle \lambda ,p_{1},\ldots ,p_{k-1}}$ satisfying ${\displaystyle \lambda >Cn\min\{p_{i}|1\leq i\leq k-1\}}$ and ${\displaystyle \sum _{i=1}^{k-1}p_{i}\leq 1-\delta ,}$ we have ${\displaystyle \Pr \left(\sum _{i=1}^{k-1}{\frac {(N_{i}-np_{i})^{2}}{np_{i}}}>\lambda \right)\leq ae^{bk-c\lambda }.}$

## Dvoretzky–Kiefer–Wolfowitz inequality

The Dvoretzky–Kiefer–Wolfowitz inequality bounds the difference between the real and the empirical cumulative distribution function. Given a natural number ${\displaystyle n}$, let ${\displaystyle X_{1},X_{2},\dots ,X_{n}}$ be real-valued independent and identically distributed random variables with cumulative distribution function F(·).
Let ${\displaystyle F_{n}}$ denote the associated empirical distribution function defined by ${\displaystyle F_{n}(x)={\frac {1}{n}}\sum _{i=1}^{n}\mathbf {1} _{\{X_{i}\leq x\}},\qquad x\in \mathbb {R} .}$ So ${\displaystyle F(x)}$ is the probability that a single random variable ${\displaystyle X}$ is smaller than ${\displaystyle x}$, and ${\displaystyle F_{n}(x)}$ is the average number of random variables that are smaller than ${\displaystyle x}$. Then ${\displaystyle \Pr \left(\sup _{x\in \mathbb {R} }{\bigl (}F_{n}(x)-F(x){\bigr )}>\varepsilon \right)\leq e^{-2n\varepsilon ^{2}}{\text{ for every }}\varepsilon \geq {\sqrt {{\tfrac {1}{2n}}\ln 2}}.}$ ## Anti-concentration inequalities Anti-concentration inequalities, on the other hand, provide an upper bound on how much a random variable can concentrate around a quantity. For example, Rao and Yehudayoff[10] show that there exists some ${\displaystyle C>0}$ such that, for most directions of the hypercube ${\displaystyle x\in \{\pm 1\}^{n}}$, the following is true: ${\displaystyle \Pr \left(\langle x,Y\rangle =k\right)\leq {\frac {C}{\sqrt {n}}},}$ where ${\displaystyle Y}$ is drawn uniformly from a subset ${\displaystyle B\subseteq \{\pm 1\}^{n}}$ of large enough size. Such inequalities are of importance in several fields, including communication complexity (e.g., in proofs of the gap Hamming problem[11]) and graph theory.[12] An interesting anti-concentration inequality for weighted sums of independent Rademacher random variables can be obtained using the Paley–Zygmund and the Khintchine inequalities.[13] ## References 1. ^ Pukelsheim, F., 1994. The Three Sigma Rule. The American Statistician, 48(2), pp.88-91 2. ^ Mercadier, Mathieu; Strobel, Frank (2021-11-16). "A one-sided Vysochanskii-Petunin inequality with financial applications". European Journal of Operational Research. 295 (1): 374–377. doi:10.1016/j.ejor.2021.02.041. ISSN 0377-2217. 3. ^ Mitzenmacher, Michael; Upfal, Eli (2005). Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press. ISBN 0-521-83540-2. 4. ^ Slagle, N.P. (2012). "One Hundred Statistics and Probability Inequalities". 5. ^ Chung, Fan; Lu, Linyuan (2010). "Old and new concentration inequalities" (PDF). Complex Graphs and Networks. American Mathematical Society. Retrieved August 14, 2018. 6. ^ Bretagnolle, Jean; Huber, Catherine (1978). "Lois empiriques et distance de Prokhorov". Lecture Notes in Mathematics. 649: 332--341. 7. ^ van der Vaart, A.W.; Wellner, J.A. (1996). Weak convergence and empirical processes: With applications to statistics. Springer Science & Business Media. 8. ^ Yuto Ushioda; Masato Tanaka; Tomomi Matsui (2022). "Monte Carlo Methods for the Shapley–Shubik Power Index". Games. 13 (3): 44. doi:10.3390/g13030044. 9. ^ Mason, David M.; Willem R. Van Zwet (1987). "A Refinement of the KMT Inequality for the Uniform Empirical Process". The Annals of Probability. 15 (3): 871–884. 10. ^ Rao, Anup; Yehudayoff, Amir (2018). "Anti-concentration in most directions". Electronic Colloquium on Computational Complexity. 11. ^ Sherstov, Alexander A. (2012). "The Communication Complexity of Gap Hamming Distance". Theory of Computing. 12. ^ Matthew Kwan; Benny Sudakov; Tuan Tran (2018). "Anticoncentration for subgraph statistics". Journal of the London Mathematical Society. 99 (3): 757–777. arXiv:1807.05202. Bibcode:2018arXiv180705202K. doi:10.1112/jlms.12192. S2CID 54065186. 13. ^ Veraar, Mark (2009). "On Khintchine inequalities with a weight". 
arXiv:0909.2586v1 [math.PR].
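As promised in the list of bounds on sums above, here is a numerical check of Hoeffding's inequality; a minimal sketch (assuming NumPy) with uniform [0, 1] summands, so that every c_i = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, t = 100, 200_000, 10.0

# X_i uniform on [0, 1]: a_i = 0, b_i = 1, c_i = 1, and E_n = n / 2
S = rng.random((trials, n)).sum(axis=1)

empirical = (np.abs(S - n / 2) > t).mean()
bound = 2 * np.exp(-2 * t**2 / n)  # 2 exp(-2 t^2 / sum c_i^2)
print(empirical, bound)            # empirical tail (~5e-4) is far below the bound (~0.27)
```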
# Secrets of the Mathematical Ninja: Trigonometry With Small Numbers

Radians - as I've ranted before - are the most natural way to express angles and do trigonometry. No ifs, no buts, degrees are an inherently inferior measure and the sooner they're abolished, the better. (In other news, the campaign to replace the mishmash of units called 'time' by UNIX timestamps starts here.) Now, one of the reasons radians rock is, it's very easy to estimate the sine of a small angle in radians: it's almost exactly the same as the angle. That makes trigonometry super-easy. For instance, $\sin(0.05)$ is very close to 0.05. In fact, it's 0.049979 - which isn't too shabby at all. Even up as far as $\sin(\frac{\pi}{4}) = \frac{\sqrt{2}}{2}$, the error is only about 11%. That's not quite so good, but by making clever use of some other approximations (which I'll leave for another article), you can get much closer.

### Small-angle trigonometry: estimating sine

Anyway. When you have a small angle - let's say up to 30º - you can get a pretty good estimate for its sine by simply converting to radians. You remember from before that a degree is about 7/400 of a radian? If you have 8º, you can say that that's 0.14 radians, so $\sin(8^\circ)$ is roughly 0.14. (It's 0.1392). What's that? Oh, do keep up. To work out $8 \times \frac{7}{400}$, you can cancel a 4 to get $\frac{2 \times 7}{100}$, which is $\frac{14}{100}$ or 0.14. Going the other way - even supposedly sensible exam boards are wont to ask for answers to trigonometry questions in degrees once in a while - isn't much harder: you divide by $\frac{7}{400}$ (or multiply by $\frac{400}{7}$). To get $\sin^{-1}(0.28)$, you'd say that's $\frac{28}{100} \times \frac{400}{7}$, and cross-cancel to get 16º. The answer is 16.26º.

### Trigonometry close to a right-angle

You can also use this to figure out the cosine of angles close to a right angle using the identity $\cos(90-x)^\circ \equiv \sin(x)^\circ$. If you want $\cos(85^\circ)$, you can say "ah! that's the same as $\sin(5^\circ)$, so it's $\frac{35}{400}$, about 0.088." (It's 0.0872, which is annoyingly close). Oh - and a couple more things. A special bonus: the tangent function behaves just like sine for small angles, because $\tan(x) \equiv \frac{\sin(x)}{\cos(x)}$ and $\cos(x)$ is close to 1 for small x - which means you can estimate $\tan$ and $\tan^{-1}$ in much the same way. Finally, the identity from before means that for numbers near 90º, you can say $\tan(90-x)^\circ \equiv \frac{\sin(90-x)^\circ}{\cos(90-x)^\circ} = \frac{\cos(x)^\circ}{\sin(x)^\circ} = \frac{1}{\tan(x)^\circ}$. If you're interested in $\tan(89^\circ)$, you can say that's $\frac{1}{\tan(1^\circ)}$; $\tan(1^\circ)$ is about $\frac{7}{400}$, so $\tan(89^\circ)$ is roughly $\frac{400}{7} = 57.1$. It's actually 57.3. Recognise that number? It's one radian converted to degrees. Why would that be?

## Colin

Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
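If you'd rather let a machine check the ninja's numbers, here's a minimal Python sketch (standard library only; the 7/400 factor is the article's quick approximation to $\frac{\pi}{180}$, not an exact constant):

```python
import math

DEG = 7 / 400  # the article's quick degrees-to-radians factor (~0.0175)

for deg in (1, 5, 8, 16, 30):
    estimate = deg * DEG                    # ninja estimate of sin(deg degrees)
    actual = math.sin(math.radians(deg))
    print(f"{deg:2d} deg: estimate {estimate:.4f}, actual {actual:.4f}")

print(400 / 7, math.tan(math.radians(89)))  # the tan(89 deg) trick: ~57.14 vs 57.29
```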
# 9.5: Single-electron Wavefunctions and Basis Functions

Finding the most useful single-electron wavefunctions to serve as building blocks for a multi-electron wavefunction is one of the main challenges in finding approximate solutions to the multi-electron Schrödinger Equation. The functions must be different for different atoms because the nuclear charge and number of electrons are different. The attraction of an electron for the nucleus depends on the nuclear charge, and the electron-electron interaction depends upon the number of electrons. As we saw in our initial approximation methods, the most straightforward place to start in finding reasonable single-electron wavefunctions for multi-electron atoms is with the atomic orbitals produced in the quantum treatment of hydrogen, the so-called "hydrogenic" spin-orbitals. These traditional atomic orbitals, with a few modifications, give quite reasonable calculated results and are still in wide use for conceptually understanding multi-electron atoms. In this section and in Chapter 10 we will explore some of the many other single-electron functions that also can be used as atomic orbitals.

Hydrogenic spin-orbitals used as components of multi-electron systems are identified in the same way as they are for the hydrogen atom. Each spin-orbital consists of a spatial wavefunction, specified by the quantum numbers (n, $$l$$, $$m_l$$) and denoted 1s, 2s, 2p, 3s, 3p, 3d, etc., multiplied by a spin function, specified by the quantum number $$m_s$$ and denoted $$\alpha$$ or $$\beta$$. In our initial approximation methods, we ignored the spin components of the hydrogenic orbitals, but they must be considered in order to develop a complete description of multi-electron systems. The subscript on the argument of the spatial function reveals which electron is being described ($$r_1$$ is a vector that refers to the coordinates of electron 1, for example.) No argument is given for the spin function. An example of a spin-orbital for electron 2 in a $$3p_z$$ orbital: $| \varphi _{3p_z} \alpha (r_2) \rangle = \varphi _{3,1,0}(r_2) \alpha \label {9.5.1}$

In the alternative shorthand notation for this spin-orbital shown below, the coordinates for electron 2 in the spatial function are abbreviated simply by the number "2," and the spatial function is represented by "$$3p_z$$" rather than "$$\varphi _{3,1,0}$$". The argument "2" given for the spin function refers to the unknown spin variable for electron 2. Many slight variations on these shorthand forms are in use in this and other texts, so flexibility and careful reading are important. $| \varphi _{3p_z}\alpha (2) \rangle = 3p_z (2) \alpha (2) \label {9.5.2}$

In this chapter we will continue the trend of moving away from writing specific mathematical functions and toward a more symbolic, condensed representation. Your understanding of the material in this and future chapters requires that you keep in mind the form and properties of the specific functions denoted by the symbols used in each equation.

Exercise $$\PageIndex{1}$$

Write the full mathematical form of $$\varphi _{3p_z\alpha}$$ using as much explicit functional detail as possible.
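For Exercise $$\PageIndex{1}$$, the explicit spatial part can be generated symbolically; a minimal sketch assuming SymPy's hydrogen module (the spin function $$\alpha$$ is just an abstract multiplicative label and is left out of the symbolic product):

```python
from sympy import Ynm, simplify, symbols
from sympy.physics.hydrogen import R_nl

r, theta, phi = symbols('r theta phi', positive=True)

# Spatial part of the 3p_z spin-orbital: phi_{3,1,0} = R_31(r) * Y_1^0(theta, phi)
spatial_3pz = R_nl(3, 1, r, Z=1) * Ynm(1, 0, theta, phi).expand(func=True)
print(simplify(spatial_3pz))
```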
The basic mathematical functions and thus the general shapes and angular momenta for hydrogenic orbitals are the same as those for hydrogen orbitals. The differences between atomic orbitals for the hydrogen atom and those used as components in the wavefunctions for multi-electron systems lie in the radial parts of the wavefunctions and in the energies. Specifically, the differences arise from the replacement of the nuclear charge Z in the radial parts of the wavefunctions by an adjustable parameter $$\zeta$$ that is allowed to vary in approximation calculations in order to model the interactions between the electrons. We discussed such a procedure for helium in The Variational Method previously. The result is that electrons in orbitals with different values for the angular momentum quantum number, $$l$$, have different energies. Figure $$\PageIndex{1}$$ shows the results of a quantum mechanical calculation on argon in which the degeneracy of the 2s and 2p orbitals is found to be removed, as is the degeneracy of the 3s, 3p, and 3d orbitals.

Figure $$\PageIndex{1}$$: Ordering of energy levels for Ar. Energy level differences are not to scale.

The energy of each electron now depends not only on its principal quantum number, $$n$$, but also on its angular momentum quantum number, $$l$$. The presence of $$\zeta$$ in the radial portions of the wavefunctions also means that the electron probability distributions associated with hydrogenic atomic orbitals in multi-electron systems are different from the exact atomic orbitals for hydrogen. Figure $$\PageIndex{2}$$ compares the radial distribution functions for an electron in a 1s orbital of hydrogen (the ground state), a 2s orbital in hydrogen (an excited configuration of hydrogen) and a 1s orbital in helium that is described by the best variational value of $$\zeta$$. Our use of hydrogen-like orbitals in quantum mechanical calculations for multi-electron atoms helps us to interpret our results for multi-electron atoms in terms of the properties of a system we can solve exactly.

Figure $$\PageIndex{2}$$: Radial distribution functions for 1s of hydrogen (red, $$\zeta$$ = 1), 2s of hydrogen (blue, $$\zeta$$ = 1) and 1s of helium (black, $$\zeta$$ = 1.6875).

Exercise $$\PageIndex{2}$$

Analyze Figure $$\PageIndex{2}$$ and write a paragraph about what you can discern about the relative sizes of ground state hydrogen, excited state hydrogen and ground state helium atoms.

While they provide useful stepping-off points for understanding computational results, nothing requires us to use the hydrogenic functions as the building blocks for multi-electron wavefunctions. In practice, the radial part of the hydrogenic atomic orbital presents a computational difficulty because the radial function has nodes, positive and negative lobes, and steep variations that make accurate evaluation of integrals by a computer slow. Consequently other types of functions are generally used in building multi-electron functions. These usually are related to the hydrogenic orbitals to aid in the analysis of molecular electronic structure. For example, Slater-type atomic orbitals (STO's), designated below as $$S_{nlm} (r, \theta , \varphi )$$, avoid the difficulties imposed by the hydrogenic functions. The STO's, named after their creator, John Slater, were the first alternative functions that were used extensively in computations. STO's do not have any radial nodes, but still contain a variational parameter $$\zeta$$ (zeta), that corresponds to the effective nuclear charge in the hydrogenic orbitals. In Equation $$\ref{9-36}$$ and elsewhere in this chapter, the distance, $$r$$, is measured in units of the Bohr radius, $$a_0$$.

$S_{nlm} (r, \theta , \varphi ) = \dfrac {(2 \zeta )^{n+1/2}}{[(2n)!]^{1/2}} r^{n-1} e^{-\zeta r } Y^m_l (\theta , \varphi ) \label {9-36}$
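The normalization constant in Equation $$\ref{9-36}$$ guarantees that $$\int_0^\infty R(r)^2 r^2 \, dr = 1$$ for the radial factor. A minimal numerical check (assuming SciPy, and using the hydrogen 1s zeta from Table $$\PageIndex{1}$$ below):

```python
import math
from scipy.integrate import quad

def sto_radial(r, n, zeta):
    """Radial factor of the STO in Equation 9-36 (r in units of the Bohr radius)."""
    norm = (2 * zeta) ** (n + 0.5) / math.sqrt(math.factorial(2 * n))
    return norm * r ** (n - 1) * math.exp(-zeta * r)

value, _ = quad(lambda r: sto_radial(r, 1, 1.24) ** 2 * r ** 2, 0, 50)
print(value)  # ~1.0: the 1s STO radial function is normalized
```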
Exercise $$\PageIndex{3}$$

1. Write the radial parts of the 1s, 2s, and 2p atomic orbitals for hydrogen.
2. Write the radial parts of the n = 1 and n = 2 Slater-type orbitals (STO).
3. Check that the above five functions are normalized.
4. Graph these five functions, measuring r in units of the Bohr radius.
5. Graph the radial probability densities for these orbitals. Put the hydrogen orbital and the corresponding STO on the same graph so they can be compared easily.
6. Adjust the zeta parameter $$\zeta$$ in each case to give the best match of the radial probability density for the STO with that of the corresponding hydrogen orbital.
7. Comment on the similarities and differences between the hydrogen orbitals and the STOs and the corresponding radial probability densities.

### Linear Variational Method

An alternative approach to the general problem of introducing variational parameters into wavefunctions is the construction of a single-electron wavefunction as a linear combination of other functions. For hydrogen, the radial function decays, or decreases in amplitude, exponentially as the distance from the nucleus increases. For helium and other multi-electron atoms, the radial dependence of the total probability density does not fall off as a simple exponential with increasing distance from the nucleus as it does for hydrogen. More complex single-electron functions therefore are needed in order to model the effects of electron-electron interactions on the total radial distribution function. One way to obtain more appropriate single-electron functions is to use a sum of exponential functions in place of the hydrogenic spin-orbitals. An example of such a wavefunction created from a sum or linear combination of exponential functions is written as $\varphi _{1s} (r_1) = \sum _j c_j e^{-\zeta _j r_j /a_o} \label{9-37}$

The linear combination permits weighting of the different exponentials through the adjustable coefficients ($$c_j$$) for each term in the sum. Each exponential term has a different rate of decay through the zeta-parameter $$\zeta _j$$. The exponential functions in Equation $$\ref{9-37}$$ are called basis functions. Basis functions are the functions used in linear combinations to produce the single-electron orbitals that in turn combine to create the product multi-electron wavefunctions. Originally the most popular basis functions used were the STO's, but today STO's are not used in most quantum chemistry calculations. However, they are often the functions to which more computationally efficient basis functions are fitted. Physically, the $$\zeta _j$$ parameters account for the effective nuclear charge (often denoted $$Z_{eff}$$). The use of several zeta values in the linear combination essentially allows the effective nuclear charge to vary with the distance of an electron from the nucleus. This variation makes sense physically. When an electron is close to the nucleus, the effective nuclear charge should be close to the actual nuclear charge. When the electron is far from the nucleus, the effective nuclear charge should be much smaller. See Slater's rules for a rule-of-thumb approach to evaluate $$Z_{eff}$$ values. A term in Equation $$\ref{9-37}$$ with a small $$\zeta$$ will decay slowly with distance from the nucleus. A term with a large $$\zeta$$ will decay rapidly with distance and not contribute at large distances.
The need for such a linear combination of exponentials is a consequence of the electron-electron repulsion and its effect of screening the nucleus for each electron due to the presence of the other electrons.

Exercise $$\PageIndex{4}$$

Make plots of $$\varphi$$ in Equation $$\ref{9-37}$$ using three equally weighted terms with $$\zeta$$ = 1.0, 2.0, and 5.0. Also plot each term separately. (A short numerical sketch appears at the end of this section.)

Computational procedures in which an exponential parameter like $$\zeta$$ is varied are more precisely called the Nonlinear Variational Method because the variational parameter is part of the wavefunction and the change in the function and energy caused by a change in the parameter is not linear. The optimum values for the zeta parameters in any particular calculation are determined by doing a variational calculation for each orbital to minimize the ground-state energy. When this calculation involves a nonlinear variational calculation for the zetas, it requires a large amount of computer time. The use of the variational method to find values for the coefficients, $$\{c_j\}$$, in the linear combination given by Equation $$\ref{9-37}$$ above is called the Linear Variational Method because the single-electron function whose energy is to be minimized (in this case $$\varphi _{1s}$$) depends linearly on the coefficients. Although the idea is the same, it usually is much easier to implement the linear variational method in practice. Nonlinear variational calculations are extremely costly in terms of computer time because each time a zeta parameter is changed, all of the integrals need to be recalculated. In the linear variation, where only the coefficients in a linear combination are varied, the basis functions and the integrals do not change. Consequently, an optimum set of zeta parameters was chosen from variational calculations on many small multi-electron systems, and these values, which are given in Table $$\PageIndex{1}$$, generally can be used in the STOs for other and larger systems.

Table $$\PageIndex{1}$$: Orbital Exponents for Slater Orbitals

| Atom | $$\zeta _{1s}$$ | $$\zeta _{2s,2p}$$ |
|------|------|------|
| H | 1.24 | – |
| He | 1.69 | – |
| Li | 2.69 | 0.80 |
| Be | 3.68 | 1.15 |
| B | 4.68 | 1.50 |
| C | 5.67 | 1.72 |
| N | 6.67 | 1.95 |
| O | 7.66 | 2.25 |
| F | 8.56 | 2.55 |
| Ne | 9.64 | 2.88 |

Exercise $$\PageIndex{5}$$

Compare the value $$\zeta _{1s}$$ = 1.24 in Table $$\PageIndex{1}$$ for hydrogen with the value you obtained in Exercise $$\PageIndex{3}$$, and comment on possible reasons for any difference. Why are the zeta values larger for 1s than for 2s and 2p orbitals? Why do the $$\zeta _{1s}$$ values increase by essentially one unit for each element from He to Ne while the increase for the $$\zeta _{2s, 2p}$$ values is much smaller?

The discussion above gives us some new ideas about how to write flexible, useful single-electron wavefunctions that can be used to construct multi-electron wavefunctions for variational calculations. Single-electron functions built from the basis function approach are flexible because they have several adjustable parameters, and useful because the adjustable parameters still have clear physical interpretations. Such functions will be needed in the Hartree-Fock method discussed elsewhere.
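As the sketch promised in Exercise $$\PageIndex{4}$$, here is a minimal numerical version of the linear combination in Equation $$\ref{9-37}$$ (assuming NumPy, with three equally weighted exponentials):

```python
import numpy as np

zetas = (1.0, 2.0, 5.0)   # decay rates from Exercise 4
coeffs = (1.0, 1.0, 1.0)  # equal weights c_j

r = np.linspace(0.0, 6.0, 13)  # r in units of the Bohr radius
phi_1s = sum(c * np.exp(-z * r) for c, z in zip(coeffs, zetas))
print(np.round(phi_1s, 4))  # the slowly decaying zeta = 1 term dominates at large r
```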
Preprint Article, Version 1 (this version is not peer-reviewed)

# Discrete Maximum Principle and Energy Stability of Compact Difference Scheme for the Allen-Cahn Equation

Version 1: Received: 24 December 2018 / Approved: 25 December 2018 / Online: 25 December 2018 (04:31:54 CET)

How to cite: Tian, D.; Jin, Y.; Lv, G. Discrete Maximum Principle and Energy Stability of Compact Difference Scheme for the Allen-Cahn Equation. Preprints 2018, 2018120294 (doi: 10.20944/preprints201812.0294.v1).

## Abstract

In this paper, a fully discrete compact difference scheme with $O(\tau^{2}+h^{4})$ accuracy is established for the numerical approximation of the one-dimensional Allen-Cahn equation. It is proved that the numerical solutions satisfy a discrete maximum principle under a reasonable step-ratio and time-step constraint, and the energy stability of the fully discrete scheme is investigated. An example is finally presented to show the effectiveness of the scheme.

## Keywords

Allen-Cahn equation; compact difference scheme; maximum principle; energy stability

## Subject

MATHEMATICS & COMPUTER SCIENCE, Numerical Analysis & Optimization
Resource | Volume 27, Issue 1, P161-174.e3, January 02, 2019

# Acceleration of cryo-EM Flexible Fitting for Large Biomolecular Systems by Efficient Space Partitioning

Open Archive. Published: October 18, 2018

## Highlights

• Efficient parallelization schemes for cryo-EM flexible fitting were proposed
• The methods are applicable for not only small proteins but also large complexes
• Flexible fitting with the all-atom model allows us to explore protein functions

## Summary

Flexible fitting is a powerful technique to build the 3D structures of biomolecules from cryoelectron microscopy (cryo-EM) density maps. One popular method is a cross-correlation coefficient-based approach, where the molecular dynamics (MD) simulation is carried out with a biasing potential that includes the cross-correlation coefficient between the experimental and simulated density maps. Here, we propose efficient parallelization schemes for the calculation of the cross-correlation coefficient to accelerate flexible fitting. Our schemes are tested for small, medium, and large biomolecules using CPU and hybrid CPU + GPU architectures. The scheme for the atomic decomposition MD is suitable for small proteins such as Ca2+-ATPase with the all-atom Go model, while that for the domain decomposition MD is better for larger systems such as ribosome with the all-atom Go or the all-atom explicit solvent models. Our methods allow flexible fitting for various biomolecules with reasonable computational cost. This approach also connects high-resolution structure refinement with investigations of the protein structure-function relationship.

## Introduction

Cryoelectron microscopy (cryo-EM) is a powerful tool to determine the 3D structures of biomolecules with near-atomic resolution ( • Bai X.C. • McMullan G. • Scheres S.H.W. How cryo-EM is revolutionizing structural biology. ). In single-particle cryo-EM, a 3D image of a target biomolecule is reconstituted from a large number of 2D images of the molecule, each of which is randomly oriented in a frozen solution. The applications span a wide spectrum of biomolecules, including not only large complexes (e.g., virus, ribosome, or polymerase), but also small proteins (e.g., dehydrogenase and hemoglobin) ( • Ehara H. • Yokoyama T. • Shigematsu H. • Yokoyama S. • Shirouzu M. • Sekine S. Structure of the complete elongation complex of RNA polymerase II with basal factors. , • Khoshouei M. • Baumeister W. • Danev R. Cryo-EM structure of haemoglobin at 3.2 Å determined with the Volta phase plate. , • Merk A. • Bartesaghi A. • Banerjee S. • Falconieri V. • Rao P. • Davis M.I. • Pragani R. • Boxer M.B. • Earl L.A. • Milne J.L.S. • et al. Breaking cryo-EM resolution barriers to facilitate drug discovery. ). A typical resolution of recent cryo-EM density maps is 4–10 Å, and high-resolution analyses were realized through various technological developments, such as improved sample preparation ( • Dubochet J. • Chang J.-J. • Homo J.-C. • Lepault J. • McDowall A.W. • Schultz P. Cryo-electron microscopy of vitrified specimens. ), direct electron detectors ( • Li X. • Mooney P. • Zheng S. • Booth C.R. • Braunfeld M.B. • Gubbens S. • Agard D.A. • Cheng Y. Electron counting and beam-induced motion correction enable near-atomic-resolution single-particle cryo-EM. ), and new software/algorithms for image processing ( • Frank J. • Penczek P. • Zhu J. • Li Y.H. • Leith A. SPIDER and WEB: processing and visualization of images in 3D electron microscopy and related fields. , • Scheres S.H.W.
RELION: implementation of a Bayesian approach to cryo-EM structure determination. , • Tang G. • Peng L. • Baldwin P.R. • Mann D.S. • Jiang W. • Rees I. • Ludtke S.J. EMAN2: an extensible image processing suite for electron microscopy. ). The atomic structure can be modeled by fitting a high-resolution structure to the cryo-EM density map, if component or entire structures are determined with other methods such as X-ray crystallography, nuclear magnetic resonance spectroscopy, and high-resolution cryo-EM structures, or predicted with homology modeling ( • Kim D.N. • Sanbonmatsu K. Tools for the Cryo-EM gold rush: going from the cryo-EM map to the atomistic model. ). The flexible fitting can provide essential structural information to facilitate our understanding of biological functions for large protein complexes ( • Gumbart J.C. • Trabuco L.G. • Schreiner E. • Villa E. • Schulten K. Regulation of the protein-conducting channel by a bound ribosome. , • Muhs M. • Hilal T. • Mielke T. • Skabkin M.A. • Sanbonmatsu K.Y. • Pestova T.V. • Spahn C.M.T. Cryo-EM of ribosomal 80S complexes with termination factors reveals the translocated cricket paralysis virus IRES. ). To date, several fitting algorithms using simulation techniques have been developed, including rigid-body fitting ( • Roseman A.M. Docking structures of domains into maps from cryo-electron microscopy using local correlation. , • Wriggers W. • Birmanns S. Using Situs for flexible and rigid-body fitting of multiresolution single-molecule data. , • Wriggers W. • Chacón P. Modeling tricks and fitting techniques for multiresolution structures. ) and flexible fitting ( • Jolley C.C. • Wells S.A. • Frornme P. • Thorpe M.F. Fitting low-resolution cryo-EM maps of proteins using constrained geometric simulations. , • Orzechowski M. • Tama F. Flexible fitting of high-resolution X-ray structures into cryoelectron microscopy maps using biased molecular dynamics simulations. , • Schröder G.F. • Brunger A.T. • Levitt M. Combining efficient conformational sampling with a deformable elastic network model facilitates structure refinement at low resolution. , • Tama F. • Miyashita O. • Brooks III, C.L. Flexible multi-scale fitting of atomic structures into low-resolution electron density maps with elastic network normal mode analysis. , • Topf M. • Webb B. • Wolfson H. • Chiu W. • Sali A. Protein structure fitting and refinement guided by cryo-EM density. , • Trabuco L.G. • Villa E. • Mitra K. • Frank J. • Schulten K. Flexible fitting of atomic structures into electron microscopy maps using molecular dynamics. , • Whitford P.C. • Ahmed A. • Yu Y. • Hennelly S.P. • Tama F. • Spahn C.M. • Onuchic J.N. • Sanbonmatsu K.Y. Excited states of ribosome translocation revealed through integrative molecular modeling. ). In general, rigid-body fitting requires component structures such as domains and segments in the target biomolecules. The entire structure is treated as a collection of the components, and the optimal positions and orientations of each component are searched exhaustively with rigid-body translational and rotational displacements to fit the density map. The obtained model may be further refined to remove steric clashes ( • Joseph A.P. • Malhotra S. • Burnley T. • Wood C. • Clare D.K. • Winn M. • Topf M. Refinement of atomic models in high resolution EM reconstructions using Flex-EM and local assessment. , • Villa E. Finding the right fit: chiseling structures out of cryo-electron microscopy maps. ). 
On the other hand, flexible fitting uses a complete model of the target biomolecule, which is typically determined in different conditions. In this method, the structure is deformed by employing simulation techniques, such as normal mode analysis (NMA) and molecular dynamics (MD) algorithms. In the NMA-based fitting, low-frequency normal modes are used to facilitate a large-scale structural change of the protein ( • Matsumoto A. • Ishida H. Global conformational changes of ribosome observed by normal mode fitting for 3D Cryo-EM structures. , • Tama F. • Miyashita O. • Brooks III, C.L. Flexible multi-scale fitting of atomic structures into low-resolution electron density maps with elastic network normal mode analysis. ). In the MD-based fitting, a biasing potential that guides the protein structure toward the target density is added to molecular force fields. The MDFF method introduces a biasing potential that is proportional to the Coulomb potential derived from the experimental density map and also the secondary structure restraint potential ( • Trabuco L.G. • Villa E. • Mitra K. • Frank J. • Schulten K. Flexible fitting of atomic structures into electron microscopy maps using molecular dynamics. , • Trabuco L.G. • Villa E. • Schreiner E. • Harrison C.B. • Schulten K. Molecular dynamics flexible fitting: a practical guide to combine cryo-electron microscopy and X-ray crystallography. ). The MDfit method employs the structure-based model (Cα or all-atom Go [AAGO] model), enabling an efficient fitting with a coarse-grained (CG) potential but with atomic resolution ( • Grubisic I. • Shokhirev M.N. • Orzechowski M. • Miyashita O. • Tama F. Biased coarse-grained molecular dynamics simulation approach for flexible fitting of X-ray structure into cryo electron microscopy maps. , • Whitford P.C. • Ahmed A. • Yu Y. • Hennelly S.P. • Tama F. • Spahn C.M. • Onuchic J.N. • Sanbonmatsu K.Y. Excited states of ribosome translocation revealed through integrative molecular modeling. ). The cross-correlation coefficient (c.c.) is calculated from the experimental and simulated density maps. It describes the similarity of the two maps, which measures the quality of fitting. c.c. can be included in a biasing potential of the MD-based flexible fitting (c.c.-based flexible fitting) as in MDfit ( • Grubisic I. • Shokhirev M.N. • Orzechowski M. • Miyashita O. • Tama F. Biased coarse-grained molecular dynamics simulation approach for flexible fitting of X-ray structure into cryo electron microscopy maps. , • Orzechowski M. • Tama F. Flexible fitting of high-resolution X-ray structures into cryoelectron microscopy maps using biased molecular dynamics simulations. , • Whitford P.C. • Ahmed A. • Yu Y. • Hennelly S.P. • Tama F. • Spahn C.M. • Onuchic J.N. • Sanbonmatsu K.Y. Excited states of ribosome translocation revealed through integrative molecular modeling. ) and Flex-EM ( • Topf M. • Webb B. • Wolfson H. • Chiu W. • Sali A. Protein structure fitting and refinement guided by cryo-EM density. ). One of the advantages of this method is that collision of atoms in a high-density region is avoided through c.c., which allows us to omit the secondary structure restraints for the fitting atoms ( • Orzechowski M. • Tama F. Flexible fitting of high-resolution X-ray structures into cryoelectron microscopy maps using biased molecular dynamics simulations. ). However, the calculation of c.c. 
and its derivative with respect to the atomic position requires large computational cost, because the density map consists of many voxels, and efficient calculation is essential for the c.c.-based flexible fitting. In this study, the calculation of c.c. is efficiently parallelized to reduce the computational time. In general, the Message Passing Interface (MPI) is widely used for the distributed-memory parallelization, and OpenMP is often utilized for the shared-memory systems. The hybrid MPI + OpenMP techniques are suitable for computers with multiple-core processors ( • Chorley M.J. • Walker D.W. Performance analysis of a hybrid MPI/OpenMP application on multi-core clusters. ). As for the parallelization of the MD algorithm, various schemes have been developed to date, such as the atomic, force, and domain decomposition schemes ( • Plimpton S. Fast parallel algorithms for short-range molecular-dynamics. ). Among them, the domain decomposition is usually applied to large-scale simulations, since it can reduce computational cost in the non-bonded interaction calculations by dividing the system into domains. Recent powerful MD programs are mostly parallelized with the domain decomposition scheme, optimized with the hybrid MPI + OpenMP protocol, and accelerated with GPU ( • Abraham M.J. • Murtola T. • Schulz R. • Páll S. • Smith J.C. • Hess B. • Lindahl E. GROMACS: high performance molecular simulations through multi-level parallelism from laptops to supercomputers. , • Jung J. • Naurse A. • Kobayashi C. • Sugita Y. Graphics processing unit acceleration and parallelization of GENESIS for large-scale molecular dynamics simulations. , • Salomon-Ferrer R. • Case D.A. • Walker R.C. An overview of the Amber biomolecular simulation package. , • Stone J.E. • Phillips J.C. • Freddolino P.L. • Hardy D.J. • Trabuco L.G. • Schulten K. Accelerating molecular modeling applications with graphics processors. ). However, the efficiency depends on the system size (small or large), solvent model (explicit or implicit), and protein model (all-atom or CG). In fact, the classical atomic decomposition scheme is still useful for the MD simulations with the Go-model ( • Kenzaki H. • Koga N. • Hori N. • Li W. • Okazaki K. • Yao X.-Q. CafeMol: a coarse-grained biomolecular simulator for simulating proteins at work. ). The most efficient parallelization scheme for c.c. and its derivatives can depend on target biomolecular systems. In this study, we propose new parallelization schemes for the c.c.-based flexible fitting with hybrid MPI + OpenMP to deal with a wide spectrum of biomolecules. We examine three different methods, two of which are combined with the domain decomposition scheme, while the other is the atomic decomposition scheme. We implement our methods into GENESIS ( • Jung J. • Mori T. • Kobayashi C. • Matsunaga Y. • Yoda T. • Feig M. • Sugita Y. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations. , • Kobayashi C. • Jung J. • Matsunaga Y. • Mori T. • Ando T. • Tamura K. • Kamiya M. • Sugita Y. GENESIS 1.1: a hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms. ), in which both the Go-model ( • Clementi C. • Nymeyer H. • Onuchic J.N. Topological and energetic factors: what determines the structural details of the transition state ensemble and “en-route” intermediates for protein folding? An investigation for small globular proteins. , • Taketomi H. 
• Ueda Y. • Gō N. Studies on protein folding, unfolding and fluctuations by computer simulation. , • Whitford P.C. • Noel J.K. • Gosavi S. • Schug A. • Sanbonmatsu K.Y. • Onuchic J.N. An all-atom structure-based potential for proteins: bridging minimal models with all-atom empirical forcefields. ) and the all-atom model are available. The methods enable us to perform a multi-scale flexible fitting in one MD program package. We carry out the benchmark tests of GENESIS for small, medium, and large biomolecules using CPU and hybrid CPU + GPU architectures. Finally, we show demonstrations on how GENESIS can work for the flexible fitting of D-glucose/D-galactose binding protein (GGBP), Ca2+-ATPase, P2X4 receptor, and ribosome using the AAGO model and all-atom explicit solvent model (AAEX).

## Results

### Theoretical Basis of Flexible Fitting

In cryo-EM flexible fitting, the total potential energy is defined as the summation of a force field VFF ( • Brooks B.R. • Brooks III, C.L. • Mackerell Jr., A.D. • Nilsson L. • Petrella R.J. • Roux B. • Won Y. • Archontis G. • Bartels C. • Boresch S. • et al. CHARMM: the biomolecular simulation program. , • Case D.A. • Cheatham III, T.E. • Darden T. • Gohlke H. • Luo R. • Merz K.M. • Onufriev A. • Simmerling C. • Wang B. • Woods R.J. The Amber biomolecular simulation programs. ) and a biasing potential VEM, which guides the protein structure toward the target density ( • Orzechowski M. • Tama F. Flexible fitting of high-resolution X-ray structures into cryoelectron microscopy maps using biased molecular dynamics simulations. ):

$V_{\text{total}} = V_{\text{FF}} + V_{\text{EM}}.$ (Equation 1)

In the c.c.-based approach, one of the commonly used formulas for VEM is

$V_{\text{EM}} = k\,(1 - \text{c.c.}),$ (Equation 2)

where k is the force constant and c.c. is the cross-correlation coefficient between the experimental and simulated EM density maps ( • Tama F. • Miyashita O. • Brooks III, C.L. Flexible multi-scale fitting of atomic structures into low-resolution electron density maps with elastic network normal mode analysis. ), calculated as

$\text{c.c.} = \frac{\sum_{ijk} \rho^{\text{exp}}(i,j,k)\, \rho^{\text{sim}}(i,j,k)}{\sqrt{\sum_{ijk} \rho^{\text{exp}}(i,j,k)^2 \, \sum_{ijk} \rho^{\text{sim}}(i,j,k)^2}},$ (Equation 3)

where (i,j,k) is the index of a voxel in the density map, and ρexp and ρsim are the experimental and simulated EM densities, respectively. The simulated densities are usually computed using a Gaussian mixture model, where a 3D Gaussian function is put on the Cartesian coordinates of each target atom (i.e., protein atom), and all contributions are integrated in each voxel of the map. Here, several schemes have been proposed, in which the Gaussian function is weighted with an atomic number ( • Topf M. • Webb B. • Wolfson H. • Chiu W. • Sali A. Protein structure fitting and refinement guided by cryo-EM density. ) or mass ( • Ishida H. • Matsumoto A. Free-energy landscape of reverse tRNA translocation through the ribosome analyzed by electron microscopy density maps and molecular dynamics simulations. ), or it is simply applied to non-hydrogen atoms ( • Tama F. • Miyashita O. • Brooks III, C.L. Flexible multi-scale fitting of atomic structures into low-resolution electron density maps with elastic network normal mode analysis. ). In this study, we use the last scheme. The simulated density of each voxel is defined as:

$\rho^{\text{sim}}(i,j,k) = \sum_{n=1}^{N} \iiint_{V_{ijk}} g_n(x,y,z)\, dx\, dy\, dz,$ (Equation 4)

where Vijk is the volume of the voxel, N is the total number of non-hydrogen atoms in the system, and n is the index of the atom.
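Equation 3 maps directly onto array operations. The following minimal NumPy sketch is illustrative only (it is not the paper's Fortran implementation); `rho_exp` and `rho_sim` are assumed to be equally shaped 3D arrays holding the voxel densities:

```python
import numpy as np

def cross_correlation(rho_exp: np.ndarray, rho_sim: np.ndarray) -> float:
    """Cross-correlation coefficient (Equation 3) between two density maps.

    Both arrays must have the same shape, with one value per voxel (i, j, k).
    """
    numerator = np.sum(rho_exp * rho_sim)
    denominator = np.sqrt(np.sum(rho_exp**2) * np.sum(rho_sim**2))
    return float(numerator / denominator)
```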
The Gaussian function gn(x,y,z) is given by

$g_n(x,y,z) = \exp\left[ -\frac{3}{2\sigma^2} \left\{ (x - x_n)^2 + (y - y_n)^2 + (z - z_n)^2 \right\} \right],$ (Equation 5)

where (xn,yn,zn) are the coordinates of the n-th atom. σ determines the width of the Gaussian function, and the generated EM density has the resolution of 2σ in the map ( • Wriggers W. • Birmanns S. Using Situs for flexible and rigid-body fitting of multiresolution single-molecule data. ). Now, we define the size of the voxel (i,j,k) as $\Delta x_i = x_i^u - x_i^l$, $\Delta y_j = y_j^u - y_j^l$, and $\Delta z_k = z_k^u - z_k^l$ in the x, y, and z axes, respectively, where the superscripts u and l denote the upper and lower coordinates of the voxel, respectively. Equation 4 is rewritten as

$\rho^{\text{sim}}(i,j,k) = V_{ijk}^{-1} \left( \frac{\pi\sigma^2}{6} \right)^{3/2} \sum_{n=1}^{N} \left[ \operatorname{erf}\left( \sqrt{\frac{3}{2\sigma^2}}\, x \right) \right]_{x_i^l - x_n}^{x_i^u - x_n} \left[ \operatorname{erf}\left( \sqrt{\frac{3}{2\sigma^2}}\, y \right) \right]_{y_j^l - y_n}^{y_j^u - y_n} \left[ \operatorname{erf}\left( \sqrt{\frac{3}{2\sigma^2}}\, z \right) \right]_{z_k^l - z_n}^{z_k^u - z_n},$ (Equation 6)

where erf(x) is the error function, given by $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt.$ To carry out MD-based flexible fitting, we should also calculate the derivatives of c.c. with respect to the atomic position q:

$F_{\text{EM}} = k \frac{\partial\, \text{c.c.}}{\partial q} = k \frac{\sum_{ijk} \rho^{\text{exp}}_{ijk} \frac{\partial}{\partial q} \rho^{\text{sim}}_{ijk}}{\sqrt{\sum_{ijk} \left(\rho^{\text{exp}}_{ijk}\right)^2 \cdot \sum_{ijk} \left(\rho^{\text{sim}}_{ijk}\right)^2}} - k \frac{\sum_{ijk} \rho^{\text{sim}}_{ijk} \frac{\partial}{\partial q} \rho^{\text{sim}}_{ijk} \cdot \sum_{ijk} \rho^{\text{exp}}_{ijk} \rho^{\text{sim}}_{ijk}}{\sqrt{\sum_{ijk} \left(\rho^{\text{exp}}_{ijk}\right)^2}\, \left\{ \sum_{ijk} \left(\rho^{\text{sim}}_{ijk}\right)^2 \right\}^{3/2}}.$ (Equation 7)

In the c.c.-based approach, calculations of ρsim and ∂ρsim/∂q are the most time consuming, because they are computed for all voxels. Furthermore, summation of the integrals of the Gaussian function over all atoms is necessary at each voxel. To reduce the computational cost, truncation of the Gaussian function to zero at a certain distance from the center of each atom ( • Tama F. • Miyashita O. • Brooks III, C.L. Flexible multi-scale fitting of atomic structures into low-resolution electron density maps with elastic network normal mode analysis. ), and/or less frequent update of the biasing force has been applied ( • Orzechowski M. • Tama F. Flexible fitting of high-resolution X-ray structures into cryoelectron microscopy maps using biased molecular dynamics simulations. ). In this study, we examine parallelization of Equations 3, 6, and 7 to speed up the flexible fitting, even when the biasing force is updated at every MD step.

### Basic Strategy for Parallelization

We introduce a hybrid MPI + OpenMP parallelization scheme to compute ρsim and ∂ρsim/∂q. Here, if the summation in Equation 6 (i.e., the DO loop over all atoms) is simply parallelized with MPI, a large amount of communication is required to compute Equation 3, because ρsim and ρsimρexp of all voxels would have to be communicated among all MPI processors through MPI_ALLREDUCE, resulting in loss of benefits from the parallelization. Figure 1 illustrates our basic idea to avoid this problem, where the simulated density map is divided into local regions. Each MPI processor is assigned to one region, and calculates its local densities. OpenMP is employed for the shared-memory parallelization inside the local region. To keep the accuracy of the calculation near the boundaries between different regions, we add enough buffer space to contain the atoms in the neighboring regions. The buffer size should be greater than or equal to the cutoff distance for the Gaussian function in Equation 5. Here, we assume that the Gaussian function is truncated to zero when the density is less than 1% of the maximum value. If σ = 2.5 Å and Δxi = Δyj = Δzk = 1.2 Å, the optimal buffer size is estimated to be 6 Å, corresponding to 5 voxels. Finally, to compute Equations 3 and 7, (ρsim)2 and ρsimρexp summed up over the voxels in the local region [Σ(ρsim)2 and Σρsimρexp] are communicated between all MPI processors.
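The following Python/mpi4py sketch makes this communication pattern concrete. It is a hedged illustration only (GENESIS itself is not written this way, and the array names and sizes are hypothetical): each rank collapses its local voxels to two partial sums before anything crosses the network.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

# Hypothetical placeholders: the voxel densities of the local region assigned
# to this rank, computed from the atoms in the region plus its buffer zone.
rho_sim_local = np.zeros(1000)
rho_exp_local = np.zeros(1000)

# Each rank reduces its own voxels to two partial sums...
local_sim_sq = float(np.sum(rho_sim_local**2))
local_exp_sim = float(np.sum(rho_exp_local * rho_sim_local))

# ...and only these two scalars are globally reduced; the maps never travel.
total_sim_sq = comm.allreduce(local_sim_sq, op=MPI.SUM)
total_exp_sim = comm.allreduce(local_exp_sim, op=MPI.SUM)

# The sum over (rho_exp)^2 is a constant of the run and can be precomputed
# once, after which c.c. follows from Equation 3:
# cc = total_exp_sim / np.sqrt(total_exp_sq * total_sim_sq)
```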
Namely, only two real numbers are transferred, reducing the communication cost. In this scheme, every MPI processor keeps the whole experimental density map data, and each computes the simulated density map in the assigned local region based on the coordinates of atoms that are stored in its memory. As described below, the scheme to store the coordinates data is different between the atomic and domain decomposition MD schemes. In the former, all MPI processors have the same coordinates data of all atoms in the system, while in the latter each MPI processor has the coordinates of atoms in the assigned local space (domain), indicating that the parallelization schemes of the density map calculation are different to each other. In the next sub-sections, we propose three different schemes for the parallelization of flexible fitting combined with atomic and domain decomposition MD, which allows us to select the best method according to the target system size, solvent model, and protein model. ### Parallelization in Domain Decomposition MD In the domain decomposition MD, which is also called a distributed-data MD algorithm, the whole system is decomposed into domains according to the number of MPI processors, and each MPI processor is assigned to each domain ( • Plimpton S. Fast parallel algorithms for short-range molecular-dynamics. ). As shown in the top panel of Figure 2, all domains are typically partitioned with the same size and shape. Each MPI processor handles the coordinates data of the atoms in the assigned domain and in the buffer regions near the domain boundary, and carries out the calculation of the bonded and non-bonded interactions in the assigned domain. We examine two different parallelization schemes for flexible fitting. In the first method, which we name single sub-domain decomposition (SD), one MPI processor is simply assigned to a local density map whose region is identical to the individual domain. The local density map is calculated from the atoms in the domain and from those in the buffer regions in the neighboring domains. In this method, however, significant load imbalance can happen among MPI processors, especially when a CG model or implicit solvent model is used, because particle densities are usually not uniform in the system. Here, we propose another parallelization scheme, where two MPI processors share the density map calculation in one domain to control the overall load balance dynamically (double sub-domains decomposition method [DD]). Let us consider that we have n = 8 domains as shown in Figure 2. In the DD method, we first count the number of atoms that are used for the density map calculation in each domain (step 1 in Figure 2). We then sort the MPI rank number (or domain index) by the number of fitting atoms, and make “pairs” between the MPI processors with the i-th and (ni)-th rank in the sorted order (step 2 in Figure 2). Finally, we put a partition in each domain to create two sub-domains, and the atomic coordinates in one of the sub-domains are communicated between the MPI processor pairs (step 3 in Figure 2). Each MPI processor is responsible for computing the density map in not only one of the sub-domains but also another sub-domain in the “partner domain”. Accordingly, MPI processors that have smaller numbers of fitting atoms can “help” the other busy MPI processors. 
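The rank-pairing logic of steps 1 and 2 can be stated in a few lines. The following Python fragment is a hypothetical illustration of the idea, not GENESIS source code; the atom counts are invented for the example.

```python
def make_domain_pairs(fitting_atom_counts):
    """Pair busy domains with idle ones for the DD method (steps 1-2 in Figure 2).

    fitting_atom_counts[r] is the number of fitting atoms in the domain of
    MPI rank r. Returns a list of (busy_rank, partner_rank) pairs.
    """
    n = len(fitting_atom_counts)  # assumed even
    # Sort ranks by their fitting-atom count (descending), then pair the
    # i-th busiest rank with the i-th least busy one.
    order = sorted(range(n), key=lambda r: fitting_atom_counts[r], reverse=True)
    return [(order[i], order[n - 1 - i]) for i in range(n // 2)]

# Example with n = 8 domains:
print(make_domain_pairs([120, 40, 95, 300, 10, 220, 60, 180]))
# -> [(3, 4), (5, 1), (7, 6), (0, 2)]
```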
Since the atoms can be moved inside the domain or across the domain boundary during the simulation, the position of the partition as well as the MPI processor pairs should be updated at a certain interval to control the overall load balance. Note that bonded and non-bonded interactions in the domain are still computed by one MPI processor. In the DD method, communication of the forces as well as atomic coordinates is necessary. For example, in Figure 2, the MPI rank 3 sends the coordinates of atoms in sub-domain 2 to the MPI rank 5, and receives the EM fitting forces from the MPI rank 5 after the density map calculation. ### Parallelization in Atomic Decomposition MD In the atomic decomposition MD, which is also called a replicated-data MD algorithm, all MPI processors have the same coordinates data of all atoms in the system ( • Plimpton S. Fast parallel algorithms for short-range molecular-dynamics. ). MPI parallelization is mainly applied to the DO loops of the bonded and non-bonded interaction pair lists for the energy and force calculations, and MPI_ALLREDUCE is used to accumulate all the atomic forces for every step. In this case, the communication of atomic coordinates among MPI processors is not necessary for the density map calculation. Let us consider that we have 2n MPI processors. Ideally, the system should be divided into 2n regions so that each region contains a roughly equal number of fitting atoms. Here, we use the kd-tree algorithm to make partitions of the system ( • de Berg M. • Cheong O. • van Kreveld M. • Overmars M. Computational Geometry: Algorithms and Applications. ). Figure 3 shows a scheme to decompose the system into 23 regions as an example. The system is first decomposed into two regions by putting a partition near the averaged coordinates of the target protein. The partition is perpendicular to the longest dimension in the molecule. The region that contains the largest number of atoms is further divided into two regions near the averaged coordinates of the atoms inside the region. The direction of the partition is determined in the same way. This is repeated until the total number of local regions is equal to the number of MPI processors, resulting in a system that is partitioned almost uniformly. Each MPI processor is assigned to each region, and calculates its local densities. ### Implementation In this study, we used the GENESIS program package as the development platform ( • Jung J. • Mori T. • Kobayashi C. • Matsunaga Y. • Yoda T. • Feig M. • Sugita Y. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations. , • Kobayashi C. • Jung J. • Matsunaga Y. • Mori T. • Ando T. • Tamura K. • Kamiya M. • Sugita Y. GENESIS 1.1: a hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms. ). GENESIS contains two MD engines: ATDYN and SPDYN, which are parallelized with hybrid MPI + OpenMP in the atomic and domain decomposition schemes, respectively. The flexible fitting module was previously implemented into ATDYN for the development of REUSfit ( • Miyashita O. • Kobayashi C. • Mori T. • Sugita Y. • Tama F. Flexible fitting to cryo-EM density map using ensemble molecular dynamics simulations. ). We introduced the above parallelization schemes into ATDYN and SPDYN. Recently, the hybrid CPU + GPU algorithm was also introduced into SPDYN for the all-atom MD simulations ( • Jung J. • Naurse A. • Kobayashi C. • Sugita Y. 
Graphics processing unit acceleration and parallelization of GENESIS for large-scale molecular dynamics simulations. ), where CPU calculates bonded interactions, restraint potentials, and reciprocal space non-bonded interactions in the particle mesh Ewald (PME) method ( • Essmann U. • Perera L. • Berkowitz M.L. • Darden T. • Lee H. • Pedersen L.G. A smooth particle mesh Ewald method. ), while GPU calculates real-space non-bonded interactions. The EM biasing potential is calculated on CPU. ### Benchmark Performance for the AAGO Model Systems We carried out benchmark tests of the SD and DD methods in SPDYN, and the kd-tree-based method in ATDYN using the AAGO model. We selected Ca2+-ATPase, AMPA receptor, and ribosome for small, medium, and large systems, respectively. In the model, only heavy atoms are treated, and the total numbers of atoms in the system were 7,671 for Ca2+-ATPase, 24,100 for AMPA, and 149,234 for ribosome. The resolution and voxel size of the target density maps were 10 Å and 2 × 2 × 2 Å3, respectively. Figure 4A shows the results of the benchmark on the CPU architecture. The number of CPU cores corresponds to the product of the numbers of MPI processors and OpenMP threads, where we used four threads, except for the case of the single CPU core. In SPDYN we plot the best performance, which was obtained by changing the number of domains in the x, y, and z dimensions while keeping the total number of domains constant. We found that, in the case of Ca2+-ATPase, ATDYN was faster than SPDYN, and for AMPA they were comparable. For ribosome, SPDYN was faster than ATDYN, because in ATDYN there is a large computational cost for the non-bonded interaction calculation and pair list update. In all cases, SPDYN-DD showed better performance than SPDYN-SD. These results suggest that SPDYN is useful for large systems, while ATDYN is still efficient for small systems. Figure 4B shows the detailed CPU time profiles in the selected MD runs. The upper panels compare the SD and DD methods for ribosome, where the system was divided into 8 × 2 × 2 domains. In the SD method, a significant load imbalance occurred because of less fitting atoms in and around domains 0, 8, 16, 23, and 31. These MPI processors experienced a long idling state. On the other hand, the DD method successfully dissolved such load imbalance, reducing the total simulation time from 133 to 104 s. The lower panels compare SPDYN-DD and ATDYN for Ca2+-ATPase. In this case, load balance was well controlled in SPDYN-DD, while it was perfect in ATDYN, and thus, ATDYN was faster than SPDYN-DD. Similar CPU time profiles were obtained in other systems with ATDYN, suggesting that the kd-tree-based space partitioning can easily realize a good load balance, independent of the system size and the number of MPI processors. In the DD method, distribution of the number of fitting atoms in each domain can affect the load balance. We compare the CPU time profiles in AMPA, where the system was decomposed into 2 × 2 × 16 or 4 × 4 × 4 domains using 64 MPI (see Figure 4C). In the case of 2 × 2 × 16 domains, the number of fitting atoms in the domain increases almost linearly as the z coordinate (MPI rank) increases, indicating that the computational cost for the non-bonded energy calculation is large in the higher MPI rank. 
By sharing the EM density map calculation in the DD method, all MPI processor pairs have nearly the same computational cost in the non-bonded energy and EM density map calculations, resulting in a good load balance over all MPI processors (upper panel in Figure 4D). In the case of 4 × 4 × 4 domains, it is difficult to control the load balance due to non-ideal distribution of the fitting atoms (lower panel in Figure 4D). Therefore, to obtain the best performance in SPDYN-DD, optimal domain decomposition patterns should be searched manually by changing the number of domains in each dimension before production runs. Computational time depends on the resolution or voxel size of the map (see Table S1). If the voxel size is fixed, better performance is obtained from higher-resolution maps. It is because we can specify small σ for the calculation of simulated densities. Specifically, computational time for 4-Å resolution maps (σ = 2.0 Å) was ∼2 times faster than that for 8-Å resolution maps. If the resolution is fixed, better performance is obtained from the maps composed of larger voxels. For example, if the resolution was fixed to 8 Å, the computational time using the voxel size 2 Å was ∼2 times faster than that in the case of 1-Å voxel size. The quality of fitting is also affected by the voxel size as well as resolution ( • Jolley C.C. • Wells S.A. • Frornme P. • Thorpe M.F. Fitting low-resolution cryo-EM maps of proteins using constrained geometric simulations. , • Orzechowski M. • Tama F. Flexible fitting of high-resolution X-ray structures into cryoelectron microscopy maps using biased molecular dynamics simulations. ). As for the performance of other software, GROMACS-MDfit ( • Whitford P.C. • Ahmed A. • Yu Y. • Hennelly S.P. • Tama F. • Spahn C.M. • Onuchic J.N. • Sanbonmatsu K.Y. Excited states of ribosome translocation revealed through integrative molecular modeling. ) showed 3.6×105, 1.1×105, and 1.7×104 steps/day with 64 CPU cores for Ca2+-ATPase, AMPA, and ribosome, respectively, demonstrating that GENESIS is 40–60 times faster. If the EM biasing force update frequency was set to ∼100 in GROMACS-MDfit, the performance was comparable with GENESIS. Note that MDfit utilizes GROMACS v.4.5.5 as the MD engine, which is parallelized with a domain decomposition scheme using flat MPI ( • Pronk S. • Páll S. • Schulz R. • Bjelkmar P. • Apostolov R. • Shirts M.R. • Smith J.C. • Kasson P.M. • van der Spoel D. • et al. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit. ), while the density map calculation is not parallelized. Therefore, parallelization or skipping of the biasing force calculation is essential for the fast flexible fitting simulations. Basically, parallelization does not significantly affect the accuracy of MD trajectories. In both ATDYN and SPDYN, energy deviations of a 2,000-step run using multiple CPU cores for Ca2+-ATPase were less than 1.0 × 10−8 kcal/mol from the run using single CPU core (see Figure S1). ### Benchmark Performance for the AAEX Model Systems We examined the performance of SPDYN using the AAEX model on CPU and hybrid CPU + GPU architectures. We used the same target molecules as in the AAGO models, and the total numbers of atoms in the system were 237,600 for Ca2+-ATPase, 684,800 for AMPA, and 1,976,700 for ribosome. The resolution and voxel size of the target density maps were 6 Å and 1 × 1 × 1 Å3, respectively. Figure 5A shows the benchmark performance of the SD and DD methods, where we used four OpenMP threads. 
We also compared the double-precision (dp) and mixed-precision (mp) floating-point calculations on CPU + GPU. In the mp scheme, force calculations were carried out with single precision, while integration of the equations of motion as well as accumulation of the force and energy were done with dp. We found that all methods showed good scalability, independent of the system size. The DD method was slightly faster than SD, and the hybrid CPU + GPU calculation was 1.8–2.0 times faster than the CPU calculation with the same number of CPU cores. As expected, the mp calculation on CPU + GPU was the most efficient among the tested methods. Figure 5B illustrates the detailed timer profiles for ribosome, where the system was decomposed into 8 × 2 × 2 domains. The upper panels compare the SD and DD methods on the CPU architecture. Contrary to the previous AAGO model systems, we see a good load balance in the non-bonded interaction calculations (green and light green bars) and pair list update (blue), because each domain is fully filled with solute and solvent atoms. As for the EM density map calculation (purple), there was an imbalance in the SD method, while there was a good balance in the DD method, resulting in a reduction of the total simulation time from 703 to 678 s. Although this speed-up ratio is smaller than that observed in the AAGO model, the DD method is still effective in the AAEX model simulations. The lower panels in Figure 5B compare the dp and mp calculations in the DD method on the hybrid CPU + GPU architecture. In both cases, the real-space non-bonded interaction calculations are well accelerated on GPU (∼9.3 s in dp and 7.0 s in mp) (green bars, but too short to see), so that the total simulation time now depends on the PME reciprocal-space (light green) and EM biasing force calculations (purple) on CPU. Similar timer profiles were obtained in all other systems. These results indicate that there is still room for improvement in performance on the hybrid CPU + GPU architecture by introducing the EM biasing force calculations on GPU, as in the c.c. analysis tool in VMD ( • Stone J.E. • McGreevy R. • Isralewitz B. • Schulten K. GPU-accelerated analysis and visualization of large structures solved by molecular dynamics flexible fitting. ).

### Test Case 1: GGBP Using the AAGO and AAEX Models

To validate our program, we first simulated the GGBP, which shows a domain motion upon binding of the ligands ( • Borrok M.J. • Kiessling L.L. • Forest K.T. Conformational changes of glucose/galactose-binding protein illuminated by open, unliganded, and ultra-high-resolution ligand-bound structures. ). We carried out the flexible fitting from the closed (ligand-bound) to open (ligand-free) states. The target EM density map was generated at 4 Å resolution from the X-ray crystallography structure of the open state (Figure 6A). We compared SPDYN-DDdp (SPDYN-DD with dp calculation), SPDYN-DDmp (mp calculation), and ATDYN (dp calculation) using the AAEX and AAGO models, where the biasing force constants were selected so that the two models could reproduce an almost identical c.c. Figure 6B shows the time courses of c.c. and root-mean-square deviation (RMSDt) with respect to the Cα atoms of the target state in the AAEX model using SPDYN-DDmp (red and blue lines), and in the AAGO model using ATDYN (green and purple lines) (for the other methods, see Figure S2). In both models, c.c. increased from 0.80 to ∼0.97, and the RMSDt decreased from 4.4 to ∼0.5 Å.
The AAGO model showed quicker fitting than the AAEX model, which is mainly due to the absence of explicit solvent molecules in the system. We found that higher c.c. structures tend to have lower RMSDt (Figure 6C). In both the AAEX and AAGO models, the highest c.c. structure showed good agreement with the target X-ray crystallography structure (Figure 6D), and no chirality errors. In Table 1, we summarize the RMSDt for the Cα atoms, backbone heavy atoms, and all heavy atoms, and the protein geometry scores analyzed with MolProbity ( • Chen V.B. • Arendall W.B. • Keedy D.A. • Immormino R.M. • Kapral G.J. • Murray L.W. • Richardson J.S. • Richardson D.C. MolProbity: all-atom structure validation for macromolecular crystallography. ) in the highest c.c. structures. SPDYN-DDdp and SPDYN-DDmp were consistent with each other, demonstrating that the mp calculation does not significantly affect the accuracy of the fitting. ATDYN yielded the same results as SPDYN. Interestingly, the RMSDs in the AAGO model were comparable with those in the AAEX model. This is simply because the two domains in GGBP are rigid, and the AAGO model still kept conformational accuracy to some extent. However, remarkable differences were observed in the protein geometry scores, where the AAEX model showed fewer atomic clashes and more favorable distributions in the backbone and side-chain dihedral angles compared with the AAGO model. These results indicate that the atomic positions in the AAEX model are more accurate than those in the AAGO model, even if the two models yielded identical c.c. values.

Table 1: Comparison of the Highest c.c. Structures in the Flexible Fitting of GGBP with the AAEX and AAGO Models

| Model | Method | c.c. | RMSDt Cα (Å) | RMSDt Backbone (Å) | RMSDt Heavy^a (Å) | MolProbity Score | Clashscore | RamaFav (%) | FavRotam (%) |
|-------|--------|------|------|------|------|------|------|------|------|
| AAEX | SPDYN-DDdp | 0.9736 | 0.50 | 0.52 | 0.86 | 1.52 | 2.65 | 94.56 | 92.38 |
| AAEX | SPDYN-DDmp | 0.9732 | 0.48 | 0.51 | 0.88 | 1.38 | 1.99 | 96.60 | 94.17 |
| AAEX | ATDYN | 0.9729 | 0.48 | 0.52 | 0.89 | 1.51 | 3.09 | 96.60 | 92.38 |
| AAGO | SPDYN-DDdp | 0.9732 | 0.47 | 0.51 | 0.86 | 2.77 | 17.23 | 93.88 | 83.86 |
| AAGO | ATDYN | 0.9723 | 0.46 | 0.50 | 0.87 | 2.79 | 20.98 | 92.18 | 88.79 |

RamaFav, Ramachandran favored; FavRotam, favored rotamers.
^a All heavy atoms. Chemically equivalent atoms in the side chains were flipped to minimize the RMSDt.

### Test Case 2: Ca2+-ATPase Using the AAGO Model

We further attempted the flexible fitting for Ca2+-ATPase using the AAGO model. This protein transports calcium ions across the membrane during ATP hydrolysis, where the cytoplasmic (N, A, and P) and transmembrane (M) domains undergo large structural changes ( • Toyoshima C. • Nakasako M. • Nomura H. • Ogawa H. Crystal structure of the calcium pump of sarcoplasmic reticulum at 2.6 Å resolution. ). To date, three-dimensional structures of most functional states of Ca2+-ATPase have been determined by X-ray crystallography at high resolution. In this study, we focused on four states (E1⋅2Ca2+, E1∼P⋅ADP, E2P, and E2), and carried out flexible fitting between two adjacent states along the reaction cycle using 10-Å resolution maps. Figure 7A is the initial structure (E1⋅2Ca2+ in the flexible fitting toward E1∼P⋅ADP), and Figure 7B shows the obtained model with the highest c.c. among the 20 individual trials (c.c. = 0.9871). During the fitting, the N and A domains rotated 83.6° and 38.3°, respectively, and the RMSDt for the Cα atoms with respect to E1∼P⋅ADP decreased from 13.6 to 2.9 Å. Figure 7C illustrates the time courses of c.c. and RMSDt. Some MD runs reached c.c.
= 0.9850 or higher (red lines), while some runs failed (orange lines). Comparison between the high and low c.c. structures indicated that the main difference was the position of the β sheet (Ser423-Gly438; magenta) in the N domain. In the high c.c. models, it was in agreement with the X-ray crystallography structure of the target state (upper panel in Figure 7D). This β sheet is initially exposed to water in E1⋅2Ca2+, and it moves to contact the A domain in E1∼P⋅ADP, implying that the β sheet experiences a repulsive force against the A domain (non-native contact interaction in the Go-potential), which caused unfolding of the β sheet during the fitting (lower panel in Figure 7D). This feature can be one of the limitations in flexible fitting using the AAGO model. The results of GENESIS were consistent with GROMACS-MDfit (see Table S2). We emphasize that further improvement of these fittings seems to be possible, although it is beyond the scope of this article. In fact, a simulated annealing protocol (T = 100–20 K) could induce and stabilize a kinked conformation in the M1 helix during the fitting from E1⋅2Ca2+ to E1∼P⋅ADP, resulting in an increase of c.c. to 0.9907 and reduction in RMSDt to 2.2 Å. ### Test Case 3: P2X4 Using the AAEX Model The P2X4 receptor is a trimeric ion channel that transports Na+, K+, and Ca2+ across the membrane to regulate synaptic transmission. Recently, the X-ray crystallography structures of P2X4 in closed and open states were determined at 2.8 and 2.9 Å resolution, respectively ( • Hattori M. • Gouaux E. Molecular mechanism of ATP binding and ion channel activation in P2X receptors. ). In the open state, which is activated by ATP binding, the six transmembrane helices are tilted and shifted outward by ∼3 Å, showing an iris-like movement, to create a large pore in the membrane domain. We attempted flexible fitting from the closed to open states in explicit solvent/membrane environments using a synthetic density map (5 Å resolution). Figure 8A is the initial structure in the closed state, and Figure 8B is the highest c.c. structure among the five individual trials in the dp calculation. RMSDt for the Cα atoms with respect to the target crystal structure (open state) was 1.1 Å. The six transmembrane helices tilted outward in an iris-like motion, and the channel completely opened (Figures 8C and 8D). The bottleneck radius of the channel gate increased from 0.3 to 3.2 Å, which is close to that in the crystal structure. As expected, we observed water channel formation in the fitted model (Figures 8E and 8F), indicating that the highest c.c. structure is indeed an “open” state in the functional cycle. These results demonstrate that all-atom model flexible fitting has a potential to investigate protein function via MD. Figure 8G shows the time course of c.c. and RMSDt with respect to the open state, where the red and orange lines represent c.c. in the dp and mp calculations, respectively, and the blue and light blue lines are the corresponding RMSDt. We can see that c.c. quickly increased in 1 ns, and converged after 30 ns. All trials yielded c.c. > 0.9550 with RMSDt < 1.2 Å, and the mp calculation showed similar results to the dp calculation (highest c.c. = 0.9599 and RMSDt = 1.1 Å). The MolProbity score of the fitted model obtained from the dp and mp calculations was 1.74 and 1.57, respectively, and no chirality errors were found in either model. 
The use of the single precision in the EM biasing force calculation does not significantly affect the quality of the fitted model. ### Test Case 4: Ribosome Using the AAGO and AAEX Models Ribosome synthesizes proteins according to instructions from mRNA. It is a complex of small and large subunits (30S and 50S in prokaryotes), each of which consists of at least one RNA and several proteins. In the functional cycle, the ribosome undergoes large conformational changes during which a tRNA provides an amino acid, and travels through the ribosomal A, P, and E states in order to elongate the polypeptide. In this study, we attempted flexible fitting from a classical E/E ( • Tourigny D.S. • Fernández I.S. • Kelley A.C. • Ramakrishnan V. Elongation factor G bound to the ribosome in an intermediate state of translocation. ) to hybrid P/E state ( • Korostelev A. • Asahara H. • Lancaster L. • Laurberg M. • Hirschi A. • Zhu J. • Trakhanov S. • Scott W.G. • Noller H.F. Crystal structure of a translation termination complex formed with release factor RF2. ) with the AAGO and AAEX models using a 6-Å resolution map. For comparison, the biasing force constants were chosen so that both models reproduce a similar c.c. in a similar MD step. Figure 9A is the initial structure in the classical E/E state, and Figure 9B is the highest c.c. structure among the five individual trials in the AAEX model. The structure was fitted well to the target densities, where the stalk domain was bent by ∼15°, and the 30S subunit rotated by ∼9°. Similar results were obtained in the AAGO model. Figure 9C shows the time courses of c.c. and RMSDt with respect to the target state. In the AAEX model, c.c. was quickly changed in 0.5 × 105 steps, and then gradually increased. On the other hand, in the AAGO model, c.c. was gradually increased in 2.0 × 105 steps, and converged earlier compared with the AAEX model. The RMSDt change also showed similar tendencies. Consequently, we obtained the highest c.c. = 0.9631 in the AAEX model, and 0.9670 in the AAGO model, and the corresponding RMSDt was 2.01 and 2.02 Å, respectively. In the early stage of the fitting, we found a noticeable difference in transient conformations of the stalk domain between the two models. In the AAEX model, the hinge region in the stalk was quickly fitted to the target densities, causing a delay in the head domain (Figure 9D). Thus, the RMSDs of the stalk with respect to the initial (RMSDi) and target structures (RMSDt) were significantly increased at the early stage (Figure 9F, blue lines). On the other hand, in the AAGO model, both head and hinge regions were moved rigidly (Figure 9E), and eventually fitted to the target densities almost simultaneously, where the early RMSD changes were rather moderate (Figure 9F, purple lines). This is mainly because the stalk domain was rigidified through the native contact interactions in the Go-potential. We note that transient conformations can be also affected by temperature or biasing force constants, since higher temperature makes the structure more flexible, and higher force constant can tightly guide the structure toward the target densities. Unexpected distortion in the transient conformation is often problematic, since the structure can get trapped in a wrong conformation (indicated by asterisks in Figure 9F). If there is a large domain motion or less structural change in the domain, the AAGO model seems to be rather feasible, and further refinement might be possible by switching the model to the AAEX model. 
## Discussion

In this study, we demonstrated that our methods are useful for fast flexible fitting simulations with the AAEX model as well as with CG models such as the AAGO model. Other c.c.-based flexible fitting algorithms that use NMA-based approaches (López-Blanco and Chacón, 2013; Tama et al., 2004) or different definitions of the simulated densities (Ishida and Matsumoto, 2014; Topf et al., 2008) can be sped up in a similar way. Our methods will also work well with implicit solvent models (Mori et al., 2016). In these models, the solvation free energy of a solute is incorporated into the molecular mechanics potential energy function as an effective energy term. Since such a model reduces the computational cost of the non-bonded energy calculations and also realizes fast relaxation of structural changes in proteins, it has often been utilized in the flexible fitting of large systems (Tanner et al., 2011). However, significant load imbalance can arise between MPI processes in domain-decomposition MD with an implicit solvent model, because the system is composed of dense and sparse domains, as in the AAGO model systems (Figure 2). The DD method solves such load imbalance by sharing the density map calculations among the partner MPI processes.

Another approach to accelerating flexible fitting with a CG or implicit solvent model would be to use a kd-tree-like algorithm for the domain decomposition and to compute the EM density map with the SD method. In this scheme, the system is divided into domains of different sizes so that each domain contains roughly equal numbers of atoms (Niethammer et al., 2014; Srinivasan et al., 1997), as shown in Figure 3, and the load balance is dynamically controlled through the positions of the domain partitions. In this study, we did not introduce this scheme into SPDYN, because it would require a large modification of the source code in the MD core. In addition, kd-tree-based domain decomposition does not seem to be widely used in other MD software for biomolecular systems.

Recently, flexible fitting has been combined with enhanced sampling algorithms such as temperature-accelerated MD (TAMD) (Vashisth et al., 2012), self-guided Langevin dynamics (Wu et al., 2013), and replica-exchange methods (Miyashita et al., 2017; Singharoy et al., 2016). The TAMDFF method, which combines MDFF and TAMD (Maragliano and Vanden-Eijnden, 2006), realized quick convergence in obtaining the fitted model (Vashisth et al., 2012). REUSfit exchanges the biasing force constants between pairs of replicas based on the replica-exchange umbrella-sampling (REUS) scheme (Sugita et al., 2000), which allows us to automatically determine the optimal force constant and thereby lower artifactual overfitting (Miyashita et al., 2017). In GENESIS, various replica-exchange methods, such as temperature REMD (Sugita and Okamoto, 1999), surface-tension REMD (Mori et al., 2013), and REUS, are available with flexible fitting, and they can run not only on CPUs but also on hybrid CPU + GPU architectures. In addition, GENESIS can deal with CG models as well as AAEX models. Our parallelization schemes implemented in GENESIS should contribute to efficient structure modeling based on cryo-EM density maps with various MD algorithms.

## STAR★Methods

### Key Resources Table

| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
| --- | --- | --- |
| **Deposited Data** | | |
| Ca2+-ATPase structures | Obara et al., 2005; Toyoshima et al., 2000; Toyoshima and Nomura, 2002; Toyoshima et al., 2007 | PDB: 1IWO, 1SU4, 2ZBD, 2ZBF, 2AGV |
| AMPA structure | Yelshanskaya et al., 2016 | PDB: 5L1B |
| Ribosome structures | Korostelev et al., 2008; Tourigny et al., 2013 | PDB: 4V9H, 4V67 |
| GGBP structures | Borrok et al., 2007 | PDB: 2FW0, 2FVY |
| P2X4 structures | Hattori and Gouaux, 2012 | PDB: 4DW0, 4DW1 |
| **Software and Algorithms** | | |
| GENESIS | Jung et al., 2015; Kobayashi et al., 2017 | https://www.r-ccs.riken.jp/labs/cbrt/ |
| SMOG 1.2.3 | Noel et al., 2010 | http://smog-server.org |
| SMOG2 2.0.3 | Noel et al., 2016 | http://smog-server.org/smog2/index.html |
| GROMACS(4.5.5)-MDfit | Whitford et al., 2011 | http://smog-server.org/extension/MDfit.html |
| SITUS 2.8 | Wriggers et al., 1999 | http://situs.biomachina.org |
| MolProbity 4.4 | Chen et al., 2010 | http://molprobity.biochem.duke.edu |
| DynDom 2.0 | Hayward and Berendsen, 1998 | http://fizz.cmp.uea.ac.uk/dyndom/ |
| HOLE 2.2.005 | Smart et al., 1996 | http://www.holeprogram.org |
| VMD 1.9.3 | Humphrey et al., 1996 | http://www.ks.uiuc.edu/Research/vmd/ |
| PyMOL 1.8.6 | Schrödinger, 2015 | https://www.pymol.org/ |
| E-R method | Mohan et al., 2014 | http://rna.ucsc.edu/rnacenter/erodaxis.py |

### Contact for Reagent and Resource Sharing

Further information and requests for data should be directed to and will be fulfilled by the Lead Contact, Yuji Sugita ([email protected]).

### Method Details

#### Benchmark Tests

We selected Ca2+-ATPase (PDB: 1IWO), the AMPA receptor (PDB: 5L1B), and the ribosome (PDB: 4V9H) for the benchmark tests. To make the test sets, we first deleted the heteroatoms, water molecules, and residues whose main-chain or side-chain atoms are missing in the PDB structure. Since the benchmark performance of flexible fitting with domain-decomposition MD can vary depending on the position and orientation of the target biomolecule, we determined them using the following scheme. The orientation of AMPA was set based on the OPM database (Lomize et al., 2006), and the ribosome was reoriented so that the principal axes (PC1–3) of the atomic coordinates were aligned to the X-, Y-, and Z-axes. For Ca2+-ATPase, the original orientation in the X-ray crystal structure (chain A in 1IWO) was used. The center of mass of the entire structure of each system was shifted to the origin, and finally the synthetic density map was generated from those coordinates using Equation 6. The voxel size and resolution of the density maps used for the AAGO model simulations were 2×2×2 Å3 and 10 Å, respectively; for the AAEX model, they were 1×1×1 Å3 and 6 Å, respectively. Note that in the benchmark tests the initial structures were already fitted to the target density maps.

We used the SMOG2 program to prepare the input files for the AAGO model (Noel et al., 2016). The total numbers of atoms in Ca2+-ATPase, AMPA, and the ribosome were 7,671, 24,100, and 149,234, respectively. We carried out the simulations at a temperature of 20 K with the Berendsen thermostat under non-periodic boundary conditions (Berendsen et al., 1984). The equations of motion were integrated with the leapfrog Verlet method without bond constraints. For the non-bonded energy calculation, we used a cutoff distance of 15 Å and a pair-list distance of 25 Å. For the EM biasing force calculation, k = 1000 kcal/mol and σ = 5 Å (10-Å resolution in the simulated map) were used, and the Gaussian function was truncated to zero where it was less than 1% of its maximum value (Tama et al., 2004). The non-bonded pair list was updated every 10 steps, while the EM fitting force was updated every step. In GENESIS, dp or mp calculations were examined. In GROMACS-MDfit, the mp calculation was performed (see Table S3), and we used the same simulation conditions as in GENESIS as much as possible.

For the flexible fitting with the AAEX model, we employed the same three systems. The membrane proteins (Ca2+-ATPase and AMPA) were embedded into POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) lipid bilayers and solvated in KCl solution, and the ribosome was solvated in MgCl2 solution using TIP3P water. VMD was used for solvation and charge neutralization (Humphrey et al., 1996). The total numbers of atoms for Ca2+-ATPase, AMPA, and the ribosome were about 237,600, 684,800, and 1,976,700, respectively. We used the CHARMM C36 force fields for proteins (Best et al., 2012), lipids (Klauda et al., 2010), and RNAs (Denning et al., 2011), and ran the simulations in the NPT ensemble at T = 303.15 K and P = 1 atm with the Langevin thermostat and barostat (Feller et al., 1995). The equations of motion were integrated with the leapfrog Verlet method using the SHAKE and SETTLE algorithms (1 step = 2 fs) (Miyamoto and Kollman, 1992; Ryckaert et al., 1977). For the computation of the non-bonded interactions, we used PME with a grid size of ∼1.2 Å (Essmann et al., 1995) and the linear 1/R2 lookup-table method with a cutoff distance of 12.0 Å and a pair-list distance of 13.5 Å (Jung et al., 2013). For the EM biasing force calculation, σ = 3 Å (6-Å resolution in the map) was used, and the other parameters were the same as in the AAGO model simulations. Here, we used only SPDYN, since ATDYN is not capable of dealing with such large systems.

We carried out the benchmark tests on our in-house Linux cluster, which consists of 16 nodes connected via InfiniBand FDR. Each node has 2 CPUs (Intel Xeon E5-2670, 2.60 GHz) with 8 cores per CPU, 64 GB of memory, and 2 GPU cards (NVIDIA Tesla K20X). For the compilation of the programs, we used the Intel compiler ver. 17.0.3, OpenMPI ver. 2.1.1, and CUDA ver. 8.0. We measured the timing (steps/day or ns/day) over a 1,000-step MD run, excluding the initial setup and finalization. Sample control files of GENESIS for the flexible fitting with the AAEX and AAGO models are shown in the Supplemental Information (Figure S3).

#### Flexible Fitting for GGBP

We carried out flexible fitting for GGBP (Asp2–Phe306) from E. coli. The target EM density map was generated from the open form (PDB: 2FW0), with a voxel size of 1.0 Å and a resolution of 4 Å. To prepare the initial fitted model, we superimposed the closed form (PDB: 2FVY) onto the map by rigid docking with the SITUS program (Wriggers et al., 1999). For the AAGO model, we used the SMOG server (Noel et al., 2010) to create the input files. We conducted 10 individual runs (250,000 steps each) at 20 K with different initial velocities, where the temperature was controlled with the Langevin thermostat. We used a cutoff distance of 12 Å, an EM biasing force constant of 125 kcal/mol, and σ = 2 Å. In the AAEX model, we used the CHARMM C36 force fields. The structure was solvated with 150 mM KCl solution, and the total number of atoms in the system was 37,079 (one GGBP, 10,793 TIP3P waters, 38 K+, and 30 Cl−).
We equilibrated the system for 100 ps with positional restraints on the heavy atoms in GGBP, and performed the flexible fitting in the NPT ensemble, where the same simulation conditions and parameters were used as in the benchmark tests. For the EM biasing force calculation, a force constant of 2,000 kcal/mol was used, and the biasing forces were updated every step. We carried out 10 individual 500-ps runs (250,000 steps each). We used 8 MPI × 3 OpenMP for all simulations except in the case of SPDYN-DD for the AAGO model (1 MPI × 3 OpenMP), due to the limited system size. To analyze the MolProbity score in the AAGO model, we added hydrogens to the highest-c.c. model, and carried out an energy minimization with the CHARMM C36 force field until the root-mean-square gradient was less than 1.0×10^−5 kcal/mol/Å while fixing the positions of the heavy atoms. For the AAEX model, the energy minimization was performed using the same scheme after removing the solvent and ions from the highest-c.c. snapshot.

#### Flexible Fitting for Ca2+-ATPase

We used PDB entries 1SU4, 2ZBD, 2ZBF, and 2AGV for the initial and target structures of the E1⋅2Ca2+, E1∼P⋅ADP, E2P, and E2 states, respectively. To make the initial models, we first superimposed the E1∼P⋅ADP, E2P, and E2 structures onto E1⋅2Ca2+ using the Cα atoms, and then generated synthetic EM density maps from the PDB coordinates with a voxel size of 2 Å and a resolution of 10 Å. Note that ligands, ions, and lipids were excluded from the system. The membrane was also excluded, since such parameters are not yet established for the AAGO model. The initial RMSDs for the Cα atoms between two adjacent states are 5.6–14.0 Å (see Table S3). We carried out the flexible fitting at 20 K with a cutoff distance of 15 Å, EM biasing force constants of k = 1000 and 2000 kcal/mol, and σ = 5 Å. We carried out 500,000-step MD runs, and examined 20 individual simulations with different initial velocities. We checked our results against GROMACS-MDfit, in which the force constant was set to the number of heavy atoms (k = natoms = 7671) or to 2×natoms.

#### Flexible Fitting for the P2X4 Receptor

To prepare the initial model, we first superimposed the structure in the closed state (PDB: 4DW0) on that in the open state (PDB: 4DW1) using the Cα atoms (RMSD = 3.2 Å). Here, the four N-terminal and two C-terminal residues of the closed state were deleted to match the length of the amino acid sequence between the two structures. We then generated the synthetic density map with a voxel size of 1.0 Å and a resolution of 5 Å. Note that ATP was not included in the system. The closed structure of P2X4 was embedded into a DMPC lipid bilayer based on the OPM database, and was solvated with 150 mM KCl solution. The system was composed of 219,500 atoms (one P2X4, 218 DMPC in the upper leaflet, 224 DMPC in the lower leaflet, 50,595 TIP3P waters, 143 K+, and 152 Cl−). We equilibrated the system for 1 ns with positional restraints on the heavy atoms of P2X4, and performed the flexible fitting for 50 ns in the NPT ensemble, where the same simulation conditions and parameters were used as in the benchmark tests. For the EM biasing force calculation, a force constant of 7,500 kcal/mol and σ = 2.5 Å were used, and the force was updated every step. We carried out 5 individual runs with different initial velocities, and also examined dp and mp calculations (10 runs in total).
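Since all of the fitting runs above are driven by the correlation coefficient (c.c.) between the simulated and target maps, a minimal sketch of that computation may be helpful. This is not the GENESIS implementation; the grid handling, array names, and the kernel normalization are assumptions following the description above (a Gaussian of width σ truncated where it falls below 1% of its maximum, and c.c. defined as the normalized overlap of the two densities).

```python
import numpy as np

def simulate_density(coords, origin, voxel, shape, sigma=5.0):
    # Spread a Gaussian of width sigma (A) around each atom onto a regular
    # grid. coords: (N, 3) array; origin: (3,) array; shape: (3,) int array.
    # The kernel normalization is a guess; the paper's Equation 6 may differ.
    rho = np.zeros(tuple(shape))
    r_cut = sigma * np.sqrt(2.0 * np.log(100.0))  # radius where g drops to 1% of peak
    n = int(np.ceil(r_cut / voxel))
    for atom in coords:
        c = np.round((atom - origin) / voxel).astype(int)   # nearest voxel
        lo = np.maximum(c - n, 0)
        hi = np.minimum(c + n + 1, shape)
        ix, iy, iz = [np.arange(lo[d], hi[d]) for d in range(3)]
        X, Y, Z = np.meshgrid(ix, iy, iz, indexing="ij")
        r2 = (((np.stack([X, Y, Z], -1) * voxel + origin) - atom) ** 2).sum(-1)
        g = np.exp(-r2 / (2.0 * sigma ** 2))
        g[r2 > r_cut ** 2] = 0.0                            # truncate the tail
        rho[X, Y, Z] += g
    return rho

def cross_correlation(rho_sim, rho_exp):
    # c.c. = <sim, exp> / (||sim|| * ||exp||), evaluated over the whole grid
    num = np.sum(rho_sim * rho_exp)
    den = np.sqrt(np.sum(rho_sim ** 2) * np.sum(rho_exp ** 2))
    return num / den
```

In MDFF-type schemes, the biasing potential is typically of the form V = k(1 − c.c.), with k being the force constants quoted above; that detail is also an assumption here, not a statement about the GENESIS source code.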
#### Flexible Fitting for Ribosome

To prepare the initial model, we first superimposed the structure in a classical E/E state (PDB: 4V67) on that in a hybrid P/E state (PDB: 4V9H) using the phosphorus atoms. Here, we removed the mRNA, tRNA, release factor 2, elongation factor G, and ribosomal proteins L1, L9, L10, L12, and L36 to match the components and sequences of the proteins and RNAs between the two structures. We then generated the synthetic density map from the hybrid P/E state with a voxel size of 1.0 Å and a resolution of 6 Å. In the AAEX model, the structure was solvated with 150 mM MgCl2 solution. The system was composed of ∼2,143,000 atoms (one ribosome, 632,800 TIP3P waters, 3,740 Mg2+, and 3,593 Cl−). We equilibrated the system for 400 ps with positional restraints on the Cα and phosphorus atoms in the ribosome, and performed the flexible fitting in the NPT ensemble, where the same simulation conditions and parameters were used as in the benchmark tests. For the EM biasing force calculation, a force constant of 200,000 kcal/mol and σ = 3 Å were used, and the biasing forces were updated every step. We carried out 5 individual runs (1,000,000 steps each, 2 fs per step) with different initial velocities using mp calculations. In the AAGO model, the total number of atoms was 141,264. Again, we conducted 5 runs (1,000,000 steps each) at 10 K using dp calculations. We used a force constant of 1,000 kcal/mol for the EM biasing potential, which was selected so as to reproduce a c.c. similar to that of the AAEX model within a similar number of MD steps. RMSDt was computed for the Cα atoms in proteins, and for the P, C2, and C4' atoms in RNAs.

### Quantification and Statistical Analysis

We obtained the CPU time profiles using a timer module in GENESIS (Jung et al., 2015; Kobayashi et al., 2017). c.c. was obtained from the log file of the simulations. Cα RMSD was computed using the analysis toolset in GENESIS. The MolProbity server was utilized to analyze protein geometry, such as the clash score, Ramachandran-favored residues, and favored rotamers (Chen et al., 2010). The domain motion of Ca2+-ATPase was analyzed using DynDom (Hayward and Berendsen, 1998). The channel pore size was analyzed using HOLE (Smart et al., 1996). The rotational angle of the 30S subunit was calculated using the E-R method (Mohan et al., 2014), which is available as a plug-in module of PyMOL (Schrödinger, 2015).

### Data and Software Availability

The program is available in GENESIS version 1.4 or later (https://www.r-ccs.riken.jp/labs/cbrt/).

## Acknowledgments

We would like to thank Drs. Chigusa Kobayashi and Motoshi Kamiya at RIKEN for helpful comments and discussions. This work was supported by JSPS KAKENHI grant numbers JP26119006, JP15H05594, JP16K07286, and JP17K07305, a grant from Innovative Drug Discovery Infrastructure through Functional Control of Biomolecular Systems, Priority Issue 1 in Post-K Supercomputer Development (hp170254), FOCUS for Establishing Supercomputing Center of Excellence, and the RIKEN Pioneering Projects, Integrated Lipidology and Dynamic Structural Biology. MD simulations were partially carried out on HOKUSAI GreatWave and BigWaterFall at RIKEN.

### Author Contributions

Conceptualization, T.M. and Y.S.; Methodology and Visualization, T.M.; Software, T.M. and J.J.; Investigation, T.M. and M.K.; Resources, O.M., F.T., and Y.S.; Writing – Original Draft, T.M. and Y.S.; Writing – Review & Editing, M.K., J.J., O.M., and F.T.; Funding Acquisition, T.M., O.M., F.T., and Y.S.; Supervision, Y.S.

### Declaration of Interests

The authors declare no competing interests.

## Supplemental Information

- Document S1. Figures S1–S3 and Tables S1–S3

## References

- Abraham M.J., Murtola T., Schulz R., Páll S., Smith J.C., Hess B., Lindahl E. GROMACS: high performance molecular simulations through multi-level parallelism from laptops to supercomputers. SoftwareX. 2015; 1: 19-25
- Bai X.C., McMullan G., Scheres S.H.W. How cryo-EM is revolutionizing structural biology. Trends Biochem. Sci. 2015; 40: 49-57
- Berendsen H.J.C., Postma J.P.M., van Gunsteren W.F., DiNola A., Haak J.R. Molecular dynamics with coupling to an external bath. J. Chem. Phys. 1984; 81: 3684-3690
- Best R.B., Zhu X., Shim J., Lopes P.E.M., Mittal J., Feig M., Mackerell Jr. A.D. Optimization of the additive CHARMM all-atom protein force field targeting improved sampling of the backbone ϕ, ψ and side-chain χ1 and χ2 dihedral angles. J. Chem. Theory Comput. 2012; 8: 3257-3273
- Borrok M.J., Kiessling L.L., Forest K.T. Conformational changes of glucose/galactose-binding protein illuminated by open, unliganded, and ultra-high-resolution ligand-bound structures. Protein Sci. 2007; 16: 1032-1041
- Brooks B.R., Brooks III C.L., Mackerell Jr. A.D., Nilsson L., Petrella R.J., Roux B., Won Y., Archontis G., Bartels C., Boresch S., et al. CHARMM: the biomolecular simulation program. J. Comput. Chem. 2009; 30: 1545-1614
- Case D.A., Cheatham III T.E., Darden T., Gohlke H., Luo R., Merz K.M., Onufriev A., Simmerling C., Wang B., Woods R.J. The Amber biomolecular simulation programs. J. Comput. Chem. 2005; 26: 1668-1688
- Chen V.B., Arendall W.B., Keedy D.A., Immormino R.M., Kapral G.J., Murray L.W., Richardson J.S., Richardson D.C. MolProbity: all-atom structure validation for macromolecular crystallography. Acta Crystallogr. D Biol. Crystallogr. 2010; 66: 12-21
- Chorley M.J., Walker D.W. Performance analysis of a hybrid MPI/OpenMP application on multi-core clusters. J. Comput. Sci. 2010; 1: 168-174
- Clementi C., Nymeyer H., Onuchic J.N. Topological and energetic factors: what determines the structural details of the transition state ensemble and "en-route" intermediates for protein folding? An investigation for small globular proteins. J. Mol. Biol. 2000; 298: 937-953
- de Berg M., Cheong O., van Kreveld M., Overmars M. Computational Geometry: Algorithms and Applications. Springer-Verlag TELOS, 2008
- Denning E.J., Priyakumar U.D., Nilsson L., Mackerell Jr. A.D. Impact of 2'-hydroxyl sampling on the conformational properties of RNA: update of the CHARMM all-atom additive force field for RNA. J. Comput. Chem. 2011; 32: 1929-1943
- Dubochet J., Chang J.-J., Homo J.-C., Lepault J., McDowall A.W., Schultz P. Cryo-electron microscopy of vitrified specimens. Q. Rev. Biophys. 1988; 21: 129-228
- Ehara H., Yokoyama T., Shigematsu H., Yokoyama S., Shirouzu M., Sekine S. Structure of the complete elongation complex of RNA polymerase II with basal factors. Science. 2017; 357: 921-924
- Essmann U., Perera L., Berkowitz M.L., Darden T., Lee H., Pedersen L.G. A smooth particle mesh Ewald method. J. Chem. Phys. 1995; 103: 8577-8593
- Feller S.E., Zhang Y., Pastor R.W., Brooks B.R. Constant pressure molecular dynamics simulation: the Langevin piston method. J. Chem. Phys. 1995; 103: 4613-4621
- Frank J., Penczek P., Zhu J., Li Y.H., Leith A. SPIDER and WEB: processing and visualization of images in 3D electron microscopy and related fields. J. Struct. Biol. 1996; 116: 190-199
- Grubisic I., Shokhirev M.N., Orzechowski M., Miyashita O., Tama F. Biased coarse-grained molecular dynamics simulation approach for flexible fitting of X-ray structure into cryo electron microscopy maps. J. Struct. Biol. 2010; 169: 95-105
- Gumbart J.C., Trabuco L.G., Schreiner E., Villa E., Schulten K. Regulation of the protein-conducting channel by a bound ribosome. Structure. 2009; 17: 1453-1464
- Hattori M., Gouaux E. Molecular mechanism of ATP binding and ion channel activation in P2X receptors. Nature. 2012; 485: 207-212
- Hayward S., Berendsen H.J. Systematic analysis of domain motions in proteins from conformational change: new results on citrate synthase and T4 lysozyme. Proteins. 1998; 30: 144-154
- Humphrey W., Dalke A., Schulten K. VMD: visual molecular dynamics. J. Mol. Graph. Model. 1996; 14: 33-38
- Ishida H., Matsumoto A. Free-energy landscape of reverse tRNA translocation through the ribosome analyzed by electron microscopy density maps and molecular dynamics simulations. PLoS One. 2014; 9: e101951
- Jolley C.C., Wells S.A., Fromme P., Thorpe M.F. Fitting low-resolution cryo-EM maps of proteins using constrained geometric simulations. Biophys. J. 2008; 94: 1613-1621
- Joseph A.P., Malhotra S., Burnley T., Wood C., Clare D.K., Winn M., Topf M. Refinement of atomic models in high resolution EM reconstructions using Flex-EM and local assessment. Methods. 2016; 100: 42-49
- Jung J., Mori T., Kobayashi C., Matsunaga Y., Yoda T., Feig M., Sugita Y. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2015; 5: 310-323
- Jung J., Mori T., Sugita Y. Efficient lookup table using a linear function of inverse distance squared. J. Comput. Chem. 2013; 34: 2412-2420
- Jung J., Naurse A., Kobayashi C., Sugita Y. Graphics processing unit acceleration and parallelization of GENESIS for large-scale molecular dynamics simulations. J. Chem. Theory Comput. 2016; 12: 4947-4958
- Kenzaki H., Koga N., Hori N., Li W., Okazaki K., Yao X.-Q. CafeMol: a coarse-grained biomolecular simulator for simulating proteins at work. J. Chem. Theory Comput. 2011; 7: 1979-1989
- Khoshouei M., Baumeister W., Danev R. Cryo-EM structure of haemoglobin at 3.2 Å determined with the Volta phase plate. Nat. Commun. 2017; 8: 16099
- Kim D.N., Sanbonmatsu K. Tools for the cryo-EM gold rush: going from the cryo-EM map to the atomistic model. Biosci. Rep. 2017; 37: BSR20170072
- Klauda J.B., Venable R.M., Freites J.A., O'Connor J.W., Tobias D.J., Mondragon-Ramirez C., Vorobyov I., MacKerell Jr. A.D., Pastor R.W. Update of the CHARMM all-atom additive force field for lipids: validation on six lipid types. J. Phys. Chem. B. 2010; 114: 7830-7843
- Kobayashi C., Jung J., Matsunaga Y., Mori T., Ando T., Tamura K., Kamiya M., Sugita Y. GENESIS 1.1: a hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms. J. Comput. Chem. 2017; 38: 2193-2206
- Korostelev A., Asahara H., Lancaster L., Laurberg M., Hirschi A., Zhu J., Trakhanov S., Scott W.G., Noller H.F. Crystal structure of a translation termination complex formed with release factor RF2. Proc. Natl. Acad. Sci. U S A. 2008; 105: 19684-19689
- Li X., Mooney P., Zheng S., Booth C.R., Braunfeld M.B., Gubbens S., Agard D.A., Cheng Y. Electron counting and beam-induced motion correction enable near-atomic-resolution single-particle cryo-EM. Nat. Methods. 2013; 10: 584-590
- Lomize M.A., Lomize A.L., Pogozheva I.D., Mosberg H.I. OPM: orientations of proteins in membranes database. Bioinformatics. 2006; 22: 623-625
- López-Blanco J.R., Chacón P. iMODFIT: efficient and robust flexible fitting based on vibrational analysis in internal coordinates. J. Struct. Biol. 2013; 184: 261-270
- Maragliano L., Vanden-Eijnden E. A temperature accelerated method for sampling free energy and determining reaction pathways in rare events simulations. Chem. Phys. Lett. 2006; 426: 168-175
- Matsumoto A., Ishida H. Global conformational changes of ribosome observed by normal mode fitting for 3D cryo-EM structures. Structure. 2009; 17: 1605-1613
- Merk A., Bartesaghi A., Banerjee S., Falconieri V., Rao P., Davis M.I., Pragani R., Boxer M.B., Earl L.A., Milne J.L.S., et al. Breaking cryo-EM resolution barriers to facilitate drug discovery. Cell. 2016; 165: 1698-1707
- Miyamoto S., Kollman P.A. SETTLE: an analytical version of the SHAKE and RATTLE algorithm for rigid water models. J. Comput. Chem. 1992; 13: 952-962
- Miyashita O., Kobayashi C., Mori T., Sugita Y., Tama F. Flexible fitting to cryo-EM density map using ensemble molecular dynamics simulations. J. Comput. Chem. 2017; 38: 1447-1461
- Mohan S., Donohue J.P., Noller H.F. Molecular mechanics of 30S subunit head rotation. Proc. Natl. Acad. Sci. U S A. 2014; 111: 13325-13330
- Mori T., Jung J., Sugita Y. Surface-tension replica-exchange molecular dynamics method for enhanced sampling of biological membrane systems. J. Chem. Theory Comput. 2013; 9: 5629-5640
- Mori T., Miyashita N., Im W., Feig M., Sugita Y. Molecular dynamics simulations of biological membranes and membrane proteins using enhanced conformational sampling algorithms. Biochim. Biophys. Acta. 2016; 1858: 1635-1651
- Muhs M., Hilal T., Mielke T., Skabkin M.A., Sanbonmatsu K.Y., Pestova T.V., Spahn C.M.T. Cryo-EM of ribosomal 80S complexes with termination factors reveals the translocated cricket paralysis virus IRES. Mol. Cell. 2015; 57: 422-432
- Niethammer C., Becker S., Bernreuther M., Buchholz M., Eckhardt W., Heinecke A., Werth S., Bungartz H.J., Glass C.W., Hasse H., et al. ls1 mardyn: the massively parallel molecular dynamics code for large systems. J. Chem. Theory Comput. 2014; 10: 4455-4464
- Noel J.K., Levi M., Raghunathan M., Lammert H., Hayes R.L., Onuchic J.N., Whitford P.C. SMOG 2: a versatile software package for generating structure-based models. PLoS Comput. Biol. 2016; 12: e1004794
- Noel J.K., Whitford P.C., Sanbonmatsu K.Y., Onuchic J.N. SMOG@ctbp: simplified deployment of structure-based models in GROMACS. Nucleic Acids Res. 2010; 38: W657-W661
- Obara K., Miyashita N., Xu C., Toyoshima L., Sugita Y., Inesi G., Toyoshima C. Structural role of countertransport revealed in Ca2+ pump crystal structure in the absence of Ca2+. Proc. Natl. Acad. Sci. U S A. 2005; 102: 14489-14496
- Orzechowski M., Tama F. Flexible fitting of high-resolution X-ray structures into cryoelectron microscopy maps using biased molecular dynamics simulations. Biophys. J. 2008; 95: 5692-5705
- Plimpton S. Fast parallel algorithms for short-range molecular-dynamics. J. Comput. Phys. 1995; 117: 1-19
- Pronk S., Páll S., Schulz R., Bjelkmar P., Apostolov R., Shirts M.R., Smith J.C., Kasson P.M., van der Spoel D., et al. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit. Bioinformatics. 2013; 29: 845-854
- Roseman A.M. Docking structures of domains into maps from cryo-electron microscopy using local correlation. Acta Crystallogr. D Biol. Crystallogr. 2000; 56: 1332-1340
- Ryckaert J.-P., Ciccotti G., Berendsen H.J.C. Numerical integration of the Cartesian equations of motion of a system with constraints: molecular dynamics of n-alkanes. J. Comput. Phys. 1977; 23: 327-341
- Salomon-Ferrer R., Case D.A., Walker R.C. An overview of the Amber biomolecular simulation package. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2013; 3: 198-210
- Scheres S.H.W. RELION: implementation of a Bayesian approach to cryo-EM structure determination. J. Struct. Biol. 2012; 180: 519-530
- Schröder G.F., Brunger A.T., Levitt M. Combining efficient conformational sampling with a deformable elastic network model facilitates structure refinement at low resolution. Structure. 2007; 15: 1630-1641
- Schrödinger. The PyMOL Molecular Graphics System, Version 1.8. Schrödinger, LLC, 2015
- Singharoy A., Teo I., McGreevy R., Stone J.E., Zhao J., Schulten K. Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps. Elife. 2016; 5: e16105
- Smart O.S., Neduvelil J.G., Wang X., Wallace B.A., Sansom M.S.P. HOLE: a program for the analysis of the pore dimensions of ion channel structural models. J. Mol. Graph. 1996; 14: 354-360
- Srinivasan S.G., Ashok I., Jónsson H., Kalonji G., Zahorjan J. Dynamic-domain-decomposition parallel molecular dynamics. Comput. Phys. Commun. 1997; 102: 44-58
- Stone J.E., McGreevy R., Isralewitz B., Schulten K. GPU-accelerated analysis and visualization of large structures solved by molecular dynamics flexible fitting.
- Stone J.E., Phillips J.C., Freddolino P.L., Hardy D.J., Trabuco L.G., Schulten K. Accelerating molecular modeling applications with graphics processors. J. Comput. Chem. 2007; 28: 2618-2640
- Sugita Y., Kitao A., Okamoto Y. Multidimensional replica-exchange method for free-energy calculations. J. Chem. Phys. 2000; 113: 6042-6051
- Sugita Y., Okamoto Y. Replica-exchange molecular dynamics method for protein folding. Chem. Phys. Lett. 1999; 314: 141-151
- Taketomi H., Ueda Y., Gō N. Studies on protein folding, unfolding and fluctuations by computer simulation. Chem. Biol. Drug Des. 1975; 7: 445-459
- Tama F., Miyashita O., Brooks III C.L. Flexible multi-scale fitting of atomic structures into low-resolution electron density maps with elastic network normal mode analysis. J. Mol. Biol. 2004; 337: 985-999
- Tang G., Peng L., Baldwin P.R., Mann D.S., Jiang W., Rees I., Ludtke S.J. EMAN2: an extensible image processing suite for electron microscopy. J. Struct. Biol. 2007; 157: 38-46
- Tanner D.E., Chan K.Y., Phillips J.C., Schulten K. Parallel generalized Born implicit solvent calculations with NAMD. J. Chem. Theory Comput. 2011; 7: 3635-3642
- Topf M., Webb B., Wolfson H., Chiu W., Sali A. Protein structure fitting and refinement guided by cryo-EM density. Structure. 2008; 16: 295-307
- Tourigny D.S., Fernández I.S., Kelley A.C., Ramakrishnan V. Elongation factor G bound to the ribosome in an intermediate state of translocation. Science. 2013; 340: 1235490
- Toyoshima C., Nakasako M., Nomura H., Ogawa H. Crystal structure of the calcium pump of sarcoplasmic reticulum at 2.6 Å resolution. Nature. 2000; 405: 647-655
- Toyoshima C., Nomura H. Structural changes in the calcium pump accompanying the dissociation of calcium. Nature. 2002; 418: 605-611
- Toyoshima C., Norimatsu Y., Iwasawa S., Tsuda T., Ogawa H. How processing of aspartylphosphate is coupled to lumenal gating of the ion pathway in the calcium pump. Proc. Natl. Acad. Sci. U S A. 2007; 104: 19831-19836
- Trabuco L.G., Villa E., Mitra K., Frank J., Schulten K. Flexible fitting of atomic structures into electron microscopy maps using molecular dynamics. Structure. 2008; 16: 673-683
- Trabuco L.G., Villa E., Schreiner E., Harrison C.B., Schulten K. Molecular dynamics flexible fitting: a practical guide to combine cryo-electron microscopy and X-ray crystallography. Methods. 2009; 49: 174-180
- Vashisth H., Skiniotis G., Brooks III C.L. Using enhanced sampling and structural restraints to refine atomic structures into low-resolution electron microscopy maps. Structure. 2012; 20: 1453-1462
- Villa E. Finding the right fit: chiseling structures out of cryo-electron microscopy maps. Curr. Opin. Struct. Biol. 2014; 25: 118-125
- Whitford P.C., Ahmed A., Yu Y., Hennelly S.P., Tama F., Spahn C.M., Onuchic J.N., Sanbonmatsu K.Y. Excited states of ribosome translocation revealed through integrative molecular modeling. Proc. Natl. Acad. Sci. U S A. 2011; 108: 18943-18948
- Whitford P.C., Noel J.K., Gosavi S., Schug A., Sanbonmatsu K.Y., Onuchic J.N. An all-atom structure-based potential for proteins: bridging minimal models with all-atom empirical forcefields. Proteins. 2009; 75: 430-441
- Wriggers W., Birmanns S. Using Situs for flexible and rigid-body fitting of multiresolution single-molecule data. J. Struct. Biol. 2001; 133: 193-202
- Wriggers W., Chacón P. Modeling tricks and fitting techniques for multiresolution structures. Structure. 2001; 9: 779-788
- Wriggers W., Milligan R.A., McCammon J.A. Situs: a package for docking crystal structures into low-resolution maps from electron microscopy. J. Struct. Biol. 1999; 125: 185-195
- Wu X.W., Subramaniam S., Case D.A., Wu K.W., Brooks B.R. Targeted conformational search with map-restrained self-guided Langevin dynamics: application to flexible fitting into electron microscopic density maps. J. Struct. Biol. 2013; 183: 429-440
- Yelshanskaya M.V., Singh A.K., Sampson J.M., Narangoda C., Kurnikova M., Sobolevsky A.I. Structural bases of noncompetitive inhibition of AMPA-subtype ionotropic glutamate receptors by antiepileptic drugs. Neuron. 2016; 91: 1305-1315
# Complexity of the homomorphism problem parameterized by treewidth The homomorphism problem $\text{Hom}(\mathcal{G}, \mathcal{H})$ for two classes $\mathcal{G}$ and $\mathcal{H}$ of graphs is defined as follows: Input: a graph $G$ in $\mathcal{G}$, a graph $H$ in $\mathcal{H}$ Output: decide if there is a homomorphism from $G$ to $H$, i.e., a mapping $h$ from the vertices of $G$ to those of $H$ such that, for any edge $\{x, y\}$ of $G$, $\{h(x), h(y)\}$ is an edge of $H$. For each $k \in \mathbb{N}$, I will call $\mathcal{T}_k$ the class of the graphs of treewidth at most $k$. I'm interested in the problem $\text{Hom}(\mathcal{T}_k, \mathcal{T}_k)$, which I see as a parameterized problem (by the treewidth bound $k$). My question is: what is the complexity of this parameterized problem? Is it known to be FPT? or is it W[1]-hard? Here are some things that I found about the $\text{Hom}$ problem, but which do not help me answer the question. (I write $-$ for the class of all graphs.) • http://www.sciencedirect.com/science/article/pii/009589569090132J: If $\mathcal{H}$ is bipartite then $\text{Hom}(-, \mathcal{H})$ is in PTIME, otherwise it is NP-complete, but of course the NP-hardness relies on allowing arbitrary $G$. • http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.9013&rep=rep1&type=pdf: If the treewidth of $\mathcal{G}$ (modulo homomorphic equivalence) is bounded by a constant then $\text{Hom}(\mathcal{G}, -)$ is in PTIME (and otherwise it isn't, assuming FPT != W[1]). Hence, in particular my problem $\text{Hom}(\mathcal{T}_k, \mathcal{T}_k)$ is in PTIME for fixed $k$, but this doesn't tell me what is the dependency on the parameter. • From Flum and Grohe's book Parameterized Complexity Theory, Corollary 13.17: The problem $\text{Hom}(\mathcal{T}_k, -)$ is FPT when parameterized by the size of $G$ (but I am parameterizing by the treewidth) • http://users.uoa.gr/~sedthilk/papers/homo.pdf, Corollary 3.2: When fixing a specific graph $H$, the problem $\text{Hom}(\mathcal{T}_k, \{H\})$, parameterized by k, is FPT (this even holds for more complicated counting variants), but I do not want to restrict to fixed $H$. • This question is still open, but one remark: there is an FPT algorithm parameterized by treewidth for the graph isomorphism problem, here: epubs.siam.org/doi/abs/10.1137/… (Daniel Lokshtanov, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh, "Fixed-Parameter Tractable Canonization and Isomorphism Test for Graphs of Bounded Treewidth", SICOMP.) As far as I know, unfortunately, this does not say anything about the homomorphism problem. – a3nm Apr 22, 2022 at 21:26
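As a concrete rendering of the definition above, here is a naïve backtracking test for the existence of a homomorphism. This is just a minimal sketch of the decision problem (plain exponential-time search, making no use of treewidth), and all names are mine:

```python
def has_homomorphism(g_vertices, g_edges, h_adj):
    # h_adj: adjacency dict of H, vertex -> set of neighbours (simple graphs).
    # Plain backtracking over partial maps; exponential in |V(G)| in general.
    g_vertices = list(g_vertices)
    nbrs = {v: set() for v in g_vertices}
    for a, b in g_edges:
        nbrs[a].add(b)
        nbrs[b].add(a)

    assignment = {}

    def extend(i):
        if i == len(g_vertices):
            return True
        v = g_vertices[i]
        for image in h_adj:
            # every already-mapped neighbour of v must map to a neighbour of image
            if all(assignment[w] in h_adj[image] for w in nbrs[v] if w in assignment):
                assignment[v] = image
                if extend(i + 1):
                    return True
                del assignment[v]
        return False

    return extend(0)

# A triangle maps into K3 but not into K2 (hom to K2 = 2-colourability):
k3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
k2 = {0: {1}, 1: {0}}
tri = [(0, 1), (1, 2), (2, 0)]
print(has_homomorphism([0, 1, 2], tri, k3))  # True
print(has_homomorphism([0, 1, 2], tri, k2))  # False
```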
# Maxwell and SR

1. Sep 2, 2010

### Austin0

I am interested in how the Lorentz maths were derived from the Maxwell electrodynamic and field equations. But not in a strict mathematical sense, as the math is outside my range, but on a simpler conceptual level.
For example, contraction seems to have relevance wrt electron electrostatic fields and their interactions.
Is there any relevance of relative simultaneity in the calculations of electrodynamics???
Was the mathematical expression of relative simultaneity in any way derived from the Maxwell maths, or could it be??
Or is it directly a consequence of the clock synchronization convention that was added later, with no correlation to electrodynamics???
Thanks

2. Sep 2, 2010

3. Sep 2, 2010

### Mentz114

I don't think you need Maxwell's equations to show that the Lorentz transformation is the proper length conserving transformation of Minkowski spacetime.

The interesting thing about relativistic electrodynamics is that electric and magnetic fields 'mix' when viewed from a moving frame. When a frame is boosted, space and time 'mix':

t' = Yt + vYx
x' = Yx + vYt

and a similar thing happens to E and B. If we have 3 electric fields Ex, Ey and Ez and we get a velocity v in the z-direction, then the new fields are (Y is the gamma from the z boost)

E'x = YEx
E'y = YEy
E'z = Ez

and now there are magnetic fields where there were none,

B'y = vYEx
B'x = vYEy

I've omitted the constants that convert B -> E for clarity.

4. Sep 3, 2010

### Austin0

From what I have read so far in the link grandpa provided, it seems that the basis for t' = Yt + vYx did appear much earlier, in the form of what they called local time. So it appears the real change is the addition of the gamma transformation factor, but the essence of relative simultaneity appeared even before Lorentz.
I would not bet large sums on the correctness of my understanding here, but so far it seems like this is the basis for simultaneity, and the clock convention came later as a rational implementation of this.
I still want to learn more regarding the meaning of "local time" in electrodynamics, and more about how relative simultaneity would practically relate to particles and fields etc. in the same way as contraction.
Thanks for your explication … food for thought

5. Sep 3, 2010

6. Sep 4, 2010

### clem

"I am interested in how the Lorentz maths were derived from the Maxwell electrodynamic and field equations."
They haven't been, and don't follow from Maxwell. Lorentz hoped that would happen, but no one has done it.
"Is there any relevance of relative simultaneity in the calculations of electrodynamics???"
Not until relativity is added.
"Was the mathematical expression of relative simultaneity in any way derived from the Maxwell maths or could it be??"
Only in the sense that in order for Maxwell to be relativistic, simultaneity has to be relative.
"clock synchronization convention" What does this mean?

7. Sep 4, 2010

### atyy

At a conceptual level, you can try "How to teach special relativity" on p. 67 of http://books.google.com/books?id=FGnnHxh2YtQC&dq=bell+unspeakable&source=gbs_navlinks_s

The handwavy argument has to do with the contraction of the electric field of a moving point charge. The argument has a hole because it needs a system of charges to have a unique equilibrium configuration, which isn't true in classical electrostatics. I've heard an argument that tries to fix this by using quantum mechanics, saying that many systems have unique ground states.
However, the quantum theory of Maxwell's equations does not hold to arbitrarily high energies, so one might object to using a mathematically unsound theory to derive the Lorentz transformations. I wonder if this can be done by saying that QED is a sound and unique theory (has an infrared fixed point) at low energies, so we can use that to derive the Lorentz transformations (i.e., use the canonical form where Lorentz covariance is not manifest, and show that it is equivalent to the covariant form)?

Maxwell's equations are invariant under the Poincaré group, and under a larger group called the conformal group. The restriction to the Poincaré group for special relativity comes when massive fields are considered, in addition to the massless field of Maxwell.

Last edited: Sep 4, 2010

8. Sep 7, 2010

### Austin0

The Einstein convention of synchronization through calculations based on reflected light transmission.
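As a numerical footnote to post #3: in units with c = 1, the textbook transformation of the fields under a boost of velocity v splits into components parallel and perpendicular to v, with E'_par = E_par, B'_par = B_par, E'_perp = Y(E + v×B)_perp, and B'_perp = Y(B − v×E)_perp. A minimal sketch of this (my own, not from the thread; Mentz114 deliberately dropped the unit-conversion constants and signs):

```python
import numpy as np

def boost_fields(E, B, v):
    # Transform E and B to a frame moving with velocity v (units with c = 1,
    # and v must be nonzero). Split into parallel/perpendicular parts:
    #   E'_par = E_par,  B'_par = B_par
    #   E'_perp = gamma * (E + v x B)_perp
    #   B'_perp = gamma * (B - v x E)_perp
    E, B, v = map(np.asarray, (E, B, v))
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    n = v / np.sqrt(v @ v)                      # unit vector along the boost
    def perp(w):
        return w - (w @ n) * n                  # component orthogonal to n
    E_new = (E @ n) * n + gamma * perp(E + np.cross(v, B))
    B_new = (B @ n) * n + gamma * perp(B - np.cross(v, E))
    return E_new, B_new

# A purely electric field, boosted along z, acquires a magnetic part:
E, B = boost_fields([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.6])
print(E)  # Ex and Ey scaled by gamma, Ez unchanged
print(B)  # nonzero Bx, By with magnitude v*gamma*E, as in post #3
```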
# The One vs. the Many

This post continues from my last post. In that post, I had presented a series of diagrams depicting the states of the universe over time, and I had then asked you a simple question pertaining to the physics of it: what the series depicted, physically speaking.

I had also given an answer to that question, the one which most people would give. It would run something like this:

There are two blocks/objects/entities which are initially moving closer towards each other. Following their motions, they come closer to each other, touch each other, and then reverse the directions of their motions. Thus, there is a collision of sorts. (We deliberately didn’t go into the maths of it, e.g., into such narrower, more detailed or higher-level aspects as whether the motions were uniform, or whether they had accelerations/decelerations (implying forces), etc.)

I had then told you that the preceding was not the only answer possible. At least one more answer that captures the physics of it also is certainly possible. This other answer in fact leads to an entirely different kind of mathematics! I had asked you to think about such alternative(s). In this post, let me present the alternative description.

The alternative answer is what school/early college-level text-books never present to students. Neither do the pop-sci. books. However, the alternative approach has been documented, in some form or the other, at least for centuries if not for millennia. The topic is routinely taught in the advanced UG and PG courses in physics. However, the university courses always focus on the maths of it, not the physics. The physical ideas are never explicitly discussed in them. The text-books, too, dive straight into the relevant mathematics. The refusal of physicists (and of mathematicians) to dwell on the physical bases of this alternative description is in part responsible for the endless confusion and debates surrounding such issues as quantum entanglement, action at a distance, etc.

There also is another interesting side to it. Some aspects of this kind of thinking are also evident in philosophical/spiritual/religious/theological thinking. I am sure that you would immediately notice the resonance with such broader ideas as we subsequently discuss the alternative approach. However, let me stress that, in this post, we focus only on the physics-related issues. Thus, if I at times just say “universe,” it is to be understood that the word pertains only to the physical universe (i.e., the sum total of the inanimate objects, and also the inanimate aspects of living beings), not to any broader, spiritual or philosophical issue.

OK. Now, on to the alternative description itself. It runs something like this:

There is only one physical object which physically exists, and it is the physical universe. The grey blocks that you see in the series of diagrams are not independent objects, really speaking. In this particular depiction, what look like two independent “objects” are, really speaking, only two spatially isolated parts of what actually is one and only one object. In fact, the “empty” or the “white” space you see in between the objects is not, really speaking, empty at all—it does not represent the literal void or the nought, so to speak. The region of space corresponding to the “empty” portions is actually occupied by a physical something. In fact, since there is only one physical object in all of existence, it is that same—singleton—physical object which is present also in the apparently empty portions.
This is not to deny that the distinction between the grey and the white/“empty” parts is real. The physically existing distinction between them—the supposed qualitative differences among them—arises only because of some quantitative differences in some property/properties of the universe-object. In other words, the universe does not exist uniformly across all its parts. There are non-uniformities within it, some quantitative differences existing over different parts of itself.

Notice, up to this point, we are talking of parts and variations within the universe. Both these words: “parts” and “within” are to be taken in the broadest possible sense, as in the sense of “logical parts” and “logically within”.

However, one set of physical attributes that the universe carries pertains to the spatial characteristics such as extension and location. A suitable concept of space can therefore be abstracted from these physically existing characteristics. With the concept of space at hand, the physical universe can then be put into an abstract correspondence with a suitable choice of a space. Thus, what this approach naturally suggests is the idea that we could use a mathematical field-function—i.e. a function of the coordinates of a chosen space—in order to describe the quantitative variations in the properties of the physical universe. For instance, assuming a $1D$ universe, it could be a function that looks something like what the following diagram shows.

Here, the function shows that a certain property (like mass density) exists with a zero measure in the regions of the supposedly empty space, whereas it exists with a finite measure, say with a density of $\rho_{g}$, in the grey regions. Notice that if the formalism of a field-function (i.e. a function of a space) is followed, then the property that captures the variations is necessarily a density. Just the way the mass density is the density of mass, similarly, you can have a density of any suitable quantity that is spread over space.

Now, simply because the density function (shown in blue) goes to zero in certain regions, we cannot therefore claim that nothing exists in those regions. The reason is: we can always construct another function that has some non-zero values everywhere, and yet shows sufficiently sharp differences between different regions. For instance, we could say that the graph has a $\rho_{0} \neq 0$ value in the “empty” region, whereas it has a $\rho_{g}$ value in the interior of the grey regions.

Notice that in the above paragraph, we have subtly introduced two new ideas: (i) some non-zero value, say $\rho_{0}$, as being assigned even to the “empty” region—thereby assigning a “something”, a matter of positive existence, to the “empty”-ness; and (ii) the interface between the grey and the white regions is now asserted to be only “sufficiently” sharp—which means, the function does not take a totally sharp jump from $\rho_{0}$ to $\rho_{g}$ at a single point $x_i$ which identifies the location of the interface. Notice that if the function were to have such a totally sharp jump at a single point, it would not in fact even be a proper function, because there would be an infinity of density values between and including $\rho_{0}$ and $\rho_{g}$ existing at the same point $x_i$. Since the density would not have a unique value at $x_i$, it won’t be a function.
However, we can always replace the infinitely sharp interface of zero thickness by a sufficiently sharp (but not infinitely sharp) interface of a sufficiently small but finite thickness. Essentially, what this trick does is to introduce three types of spatial regions, instead of two: (i) the region of the “empty” space, (ii) the region of the interface, and (iii) the interior, grey, region.

Of course, what we want are only two regions, not three. After all, we need to make a distinction only between the grey and the white regions. Not an issue. We can always club the interface region with either of the remaining two. Here is the mathematical procedure to do it.

Introduce yet another quantitative measure, viz., $\rho_{c}$, called the critical density. Using it, we can in fact divide the interface region into two further parts: one which has $\rho < \rho_c$ and another one which has $\rho \geq \rho_c$. This procedure does give us a point-thick locus for the distinction between the grey and the white regions, and yet, the actual changes in the density always remain fully smooth (i.e. the density can remain an infinitely differentiable function). All in all, the property-variation at the interface looks like this (see also the small numerical sketch at the end of this post):

Indeed, our previous solution of clubbing the interface region into the grey region is nothing but having $\rho_c = \rho_0$, whereas clubbing the interface into the “empty”-space region is tantamount to having $\rho_c = \rho_g$. In any case, we do have a sharp demarcation of regions, and yet, the density remains a continuous function.

We can now claim that such is what the physical reality is actually like; that the depiction presented in the original series of diagrams, consisting of infinitely sharp interfaces, cannot be taken as the reference standard, because that depiction itself was just that: a mere depiction, which means: an idealized description. The actual reality never was like that. Our ultimate standard ought to be reality itself. There is no reason why reality should not actually be like what our latter description shows.

This argument does hold. Mankind has never been able to think of a single solid argument against having the latter kind of a description. Even Euclid had no argument for the infinitely sharp interfaces his geometry implies. Euclid accepted the point, the line and the plane as already given entities, as axioms. He did not bother himself with locating their meaning in some more fundamental geometrical or mathematical objects or methods. What can be granted to Euclid can be granted to us. He had some axioms. We don’t believe them. So we will have our own axioms. As part of our axioms, interfaces are only finitely sharp.

Notice that the perceptual evidence remains the same. The difference between the two descriptions pertains to the question of what it is that we regard as object(s), primarily. The considerations of the sharpness or the thickness of the interface are only a detail, in the overall scheme.

In the first description, the grey regions are treated as objects in their own right. And there are many such objects. In the second description, the grey regions are treated not as objects in their own right, but merely as distinguishable (and therefore different) parts of a single object that is the universe. Thus, there is only one object.

So, we now have two alternative descriptions. Which one is correct? And what precisely should we regard as an object anyway? … That, indeed, is a big question!
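As promised above, here is a small numerical sketch of the interface construction (entirely my own illustration, with made-up numbers): a 1D density that varies smoothly from $\rho_0$ to $\rho_g$ over a finite thickness, with the critical density $\rho_c$ recovering a point-thick demarcation.

```python
import numpy as np

# A 1D universe: density rises smoothly from rho_0 ("empty" space) to
# rho_g (the grey region) across an interface of finite thickness w.
rho_0, rho_g, w = 0.1, 1.0, 0.05
x = np.linspace(0.0, 1.0, 2001)

def rho(x, centre=0.5):
    # A tanh profile: infinitely differentiable, yet "sufficiently sharp".
    return rho_0 + 0.5 * (rho_g - rho_0) * (1.0 + np.tanh((x - centre) / w))

rho_c = 0.5 * (rho_0 + rho_g)          # the critical density
grey = rho(x) >= rho_c                 # two regions, point-thick boundary
boundary = x[np.argmax(grey)]          # locus of the demarcation
print(f"interface locus at x = {boundary:.4f}")  # ~0.5, despite smooth rho
```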
🙂

More on that question, and the consequences of the answers, in the next post in this series…. In it, I will touch upon the implications of the two descriptions for such things as (a) causality, (b) the issue of the aether—whether it exists and if yes, what its meaning is, (c) and the issue of the local vs. non-local descriptions (and the implications thereof, in turn, for such issues as quantum entanglement), etc.

Stay tuned.

A Song I Like:
(Hindi) “kitni akeli kitni tanha see lagi…”
Singer: Lata Mangeshkar
Music: Sachin Dev Burman
Lyrics: Majrooh Sultanpuri

[May be one editing pass, later? May be. …]

# Introducing a Very Foundational Issue of Physics (and of Maths)

OK, so I am finally done with moving my stuff, and so, from now on, should be able to find at least some time for ’net activities, including browsing and blogging (not to mention also picking up writing my position paper on QM from where I left it).

Alright, so let me resume my blogging right away by touching on a very foundational aspect of physics (and also of maths).

Before you can even think of building a theory of physics, you must first adopt, implicitly or explicitly, a viewpoint concerning what kind of physical objects are assumed to exist in the physical universe. For instance, Newtonian mechanics assumes that the physical universe is made from massive and charge-less solid bodies that experience and exert the inter-body forces of gravity and those arising out of their direct contact. In contrast, the later development of the Maxwellian electrodynamics assumes that there are two types of objects: massive and charged solid bodies, and the electromagnetic and gravitational fields which they set up and with which they interact. Last year, I had written a post spelling out the different kinds of physical objects that are assumed to exist in the Newtonian mechanics, in the classical electrodynamics, etc.; see here [^].

In this post, I want to highlight yet another consideration which enters physics at the most fundamental level.

Let me illustrate the issue involved via a simple example. Consider a 2D universe. The following series of diagrams depicts this universe as it exists at different instants of time, from $t_{1}$ through $t_{9}$. Each diagram in the series represents the entire universe. Assume that the changes in time actually occur continuously; it’s just that while drawing diagrams, we can depict the universe only at isolated (or “discrete”) instants of time.

Now, consider this seemingly very simple question: What precisely does the above series of diagrams depict, physically speaking? Can you provide a brief description (say, running into 2–3 lines) as to what is happening here, physics-wise?

At this point, you may perhaps be thinking that the answer is obvious. The answer is so obvious, you could be thinking, that it is very stupid of me to even think of raising such a question. “Why, of course, what that series of pictures depicts is this: there are two blocks/objects/entities which are initially moving towards each other. Eventually they come so close to each other that they even touch each other. They thus undergo a collision, and as a result, they begin to move apart. … Plain and simple.” You could be thinking along some lines like that.

But let me warn you, that precisely is your potential pitfall—i.e., thinking that the question is so simple, and the answer so obvious. Actually, as it turns out, there is no unique answer to that question.
That’s why, no matter how dumb the above question may look to you, let me ask you once again to take a moment to think afresh about it. And then, whatever be your answer, write it down. In your answer, try to be as brief and as precise as possible.

I will continue with this issue in my next post, to be written and posted after a few days. I am deliberately taking a break here because I do want you to give it a shot—writing down a precise answer. Unless you actually try out this exercise for yourself, you won’t come to appreciate either of the following two, separate points:

1. how difficult it can be to write very precise answers to what appear to be the simplest of questions, and
2. how unwittingly and subtly some unwarranted assumptions can so easily creep in, in a physical description—and therefore, in mathematics.

You won’t come to appreciate how deceptive this question really is unless you actually give it a try. And it is to ensure this part that I have to take a break here.

Enjoy!

# See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—2

Remember the age-old (well, decade-old) question, viz.: “Stress or strain: which one is more fundamental?”

I myself had posed it at iMechanica about a decade ago [^]. Specifically, on 8th March 2007 (US time, may be EST or something). The question had generated quite a bit of discussion at that time. Even as of today, this thread remains within the top 5 most-hit posts at iMechanica.

In fact, as of today, with about 1.62 lakh reads (i.e. 162 k hits), I think, it is the second most-hit post at iMechanica. The only post with more hits, I think, is Nanshu Lu’s, providing a tutorial for the Abaqus software [^]; it beats mine like hell, with about 5 lakh (500 k) hits! The third most-hit post, I think, again is about sharing scripts for the Abaqus software [^]; as of today, it lags mine very closely, but could overtake mine anytime, with about 1.48 lakh (148 k) hits already. There used to be a general thread on open-source FEM software that used to be very close to my post. As of today, it has fallen behind a bit, with about 1.42 lakh (142 k) hits [^]. (I don’t know, but there could be other widely read posts, too.)

Of course, the attribute “most hit” is in no fundamental way related to “most valuable,” “most relevant,” or even “most interesting.” Yet, the fact of the matter also is that mine is the only one among the top 5 posts which probes a fundamental theoretical aspect. All the others seem to be about software.

Not very surprising, in a way. Typically, hits get registered for topics providing some kind of a practical service. For instance, tips and tutorials on software—how to install a software, how to deal with a bug, how to write a sub-routine, how to produce visualizations, etc. Topics like these tend to get more hits. These are all practical matters, important right in the day-to-day job or studies, and people search the ’net more for such practically useful services.

Precisely for this reason—and especially given the fact that iMechanica is a forum for engineers and applied scientists—it is unexpected (at least it was unexpected to me) that a “basically useless” and “theoretical” discussion could still end up being so popular. There certainly was a surprise about it, to me. … But that’s just one part.
The second, more interesting part (i.e., more interesting to me) has been that, despite all these reads, and despite the simplicity of the concepts involved (stress and strain), the issue went unresolved for such a long time—almost a decade! Students begin to get taught these two concepts right when they are in their XI/XII standard. In my XI/XII standard, I remember, we even had a practical about it: there was a steel wire suspended from a cantilever near the ceiling, and there was hook with a supporting plate at the bottom of this wire. The experiment consisted of adding weights, and measuring extensions. … Thus, the learning of these concepts begins right around the same time that students are learning calculus and Newton’s  3 laws… Students then complete the acquisition of these two concepts in their “full” generality, right by the time they are just in the second- or third-year of undergraduate engineering. The topic is taught in a great many branches of engineering: mechanical, civil, aerospace, metallurgical, chemical, naval architecture, and often-times (and certainly in our days and in COEP) also electrical. (This level of generality would be enough to discuss the question as posed at iMechanica.) In short, even if the concepts are so “simple” that UG students are routinely taught them, a simple conceptual question involving them could go unresolved for such a long time. It is this fact which was (honestly) completely unexpected to me, at least at the time when I had posed the question. I had actually thought that there would surely be some reference text/paper somewhere that must have considered this aspect already, and answered it. But I was afraid that the answer (or the reference in which it appears) could perhaps be outside of my reach, my understanding of continuum mechanics. (In particular, I knew only a little bit of tensor calculus—only that as given in Malvern, and in Schaum’s series, basically. (I still don’t know much more about tensor calculus; my highest reach for tensor calculus remains limited to the book by Prof. Allan Bower of Brown [^].)) Thus, the reason I wrote the question in such a great detail (and in my replies, insisted on discussing the issues in conceptual details) was only to emphasize the fact that I had no hi-fi tensor calculus in mind; only the simplest physics-based and conceptual explanation was what I was looking for. And that’s why, the fact that the question went unresolved for so long has also been (actually) fascinating to me. I (actually) had never expected it. And yes, “dear” Officially Approved Mechanical Engineering Professors at the Savitribai Phule Pune University (SPPU), and authorities at SPPU, as (even) you might have noticed, it is a problem concerning the very core of the Mechanical Engineering proper. I had thought once, may be last year or so, that I had finally succeeded in nailing down the issue right. (I might have written about it on this blog or somewhere else.) But, still, I was not so sure. So, I decided to wait. I now have come to realize that my answer should be correct. I, however, will not share my answer right away. There are two reasons for it. First, I would like it if someone else gives it a try, too. It would be nice to see someone else crack it, too. A little bit of a wait is nothing to trade in for that. (As far as I am concerned, I’ve got enough “popularity” etc. just out of posing it.) 
Second, I also wish to see if the Officially Approved Mechanical Engineering Professors at the Savitribai Phule Pune University (SPPU)) would be willing and able to give it a try. (Let me continue to be honest. I do not expect them to crack it. But I do wish to know whether they are able to give it a try.) In fact, come to think of it, let me do one thing. Let me share my answer only after one of the following happens: • either I get the Official Approval (and also a proper, paying job) as a Full Professor of Mechanical Engineering at SPPU, • or, an already Officially Approved Full Professor of Mechanical Engineering at SPPU (especially one of those at COEP, especially D. W. Pande, and/or one of those sitting on the Official COEP/UGC Interview Panels for faculty interviews at SPPU) gives it at least a try that is good enough. [Please note, the number of hits on the international forum of iMechanica, and the nature of the topic, once again.] I will share my answer as soon as either of the above two happens—i.e., in the Indian government lingo: “whichever is earlier” happens. But, yes, I am happy that I have come up with a very good argument to finally settle the issue. (I am fairly confident that my eventual answer should also be more or less satisfactory to those who had participated on this iMechanica thread. When I share my answer, I will of course make sure to note it also at iMechanica.) This time round, there is not just one song but quite a few of them competing for inclusion on the “A Song I Like” section. Perhaps, some of these, I have run already. Though I wouldn’t mind repeating a song, I anyway want to think a bit about it before finalizing one. So, let me add the section when I return to do some minor editing later today or so. (I certainly want to get done with this post ASAP, because there are other theoretical things that beckon my attention. And yes, with this announcement about the stress-and-strain issue, I am now going to resume my blogging on topics related to QM, too.) Update at 13:40 hrs (right on 19 Dec. 2016): Added the section on a song I like; see below. A Song I Like: (Marathi) “soor maagoo tulaa mee kasaa? jeevanaa too tasaa, mee asaa!” Lyrics: Suresh Bhat Music: Hridaynath Mangeshkar Singer: Arun Date It’s a very beautiful and a very brief poem. As a song, it has got fairly OK music and singing. (The music composer could have done better, and if he were to do that, so would the singer. The song is not in a bad shape in its current form; it is just that given the enormously exceptional talents of this composer, Hridaynath Mangeshkar, one does get a feel here that he could have done better, somehow—don’t ask me how!) … I will try to post an English translation of the lyrics if I find time. The poem is in a very, very simple Marathi, and for that reason, it would also be very, very easy to give a rough sense of it—i.e., if the translation is to be rather loose. The trouble is, if you want to keep the exact shade of the words, it then suddenly becomes very difficult to translate. That’s why, I make no promises about translating it. Further, as far as I am concerned, there is no point unless you can convey the exact shades of the original words. … Unless you are a gifted translator, a translation of a poem almost always ends up losing the sense of rhythm. But even if you keep a more modest aim, viz., only of offering an exact translation without bothering about the rhythm part, the task still remains difficult. 
And it is more difficult if the original words happen to be of the simple, day-to-day usage kind. A poem using complex words (say composite, Sanskrit-based words) would be easier to translate precisely because of its formality, precisely because of the distance it keeps from the mundane life… An ordinary poet’s poem also would be easy to translate regardless of what kind of words he uses. But when the poet in question is great, and uses simple words, it becomes a challenge, because it is difficult, if not impossible, to convey the particular sense of life he pours into that seemingly effortless composition. That’s why translation becomes difficult. And that’s why I make no promises, though a try, I would love to give it—provided I find time, that is. Second Update on 19th Dec. 2016, 15:00 hrs (IST): A Translation of the Lyrics: I offer below a rough translation of the lyrics of the song noted above. However, before we get to the translation, a few notes giving the context of the words are absolutely necessary. Notes on the Context: Note 1: Unlike in the Western classical music, Indian classical music is not written down. Its performance, therefore, does not have to conform to a pre-written (or a pre-established) scale of tones. Particularly in the Indian vocal performance, the singer is completely free to choose any note as the starting note of his middle octave. Typically, before the actual singing begins, the lead singer (or the main instrument player) thinks of some tone that he thinks might best fit how he is feeling that day, how his throat has been doing lately, the particular settings at that particular time, the emotional interpretation he wishes to emphasize on that particular day, etc. He, therefore, tentatively picks up a note that might serve as the starting tone for the middle octave, for that particular performance. He makes this selection not in advance of the show and in private, but right on the stage, right in front of the audience, right after the curtain has already gone up. (He might select different octaves for two successive songs, too!) Then, to make sure that his rendition is going to come out right if he were to actually use that key, that octave, what he does is to ask a musician companion (himself on the stage besides the singer) to play and hold that note on some previously well-tuned instrument, for a while. The singer then uses this key as the reference, and tries out a small movement or so. If everything is OK, he will select that key. All this initial preparation is called (Hindi) “soor lagaanaa.” The part where the singer turns to the trusted companion and asks for the reference note to be played is called (Hindi) “soor maanganaa.” The literal translation of the latter is: “asking for the tone” or “seeking the pitch.” After thus asking for the tone and trying it out, if the singer thinks that singing in that specific key is going to lead to a good concert performance, he selects it. At this point, both—the singer and that companion musician—exchange glances at each other, and with that indicate that the tone/pitch selection is OK, that this part is done. No words are exchanged; only the glances. Indian performances depend a great deal on impromptu variations, on improvizations, and therefore, the mutual understanding between the companion and the singer is of crucial importance. In fact, so great is their understanding that they hardly ever exchange any words—just glances are enough. 
Asking for the reference key is just a simple ritual that assures both that the mutual understanding does exist. And after that brief glance, begins the actual singing. Note 2: Whereas the Sanskrit and Marathi word “aayuShya” means life-span (the number of years, or the finite period that is life), the Sanskrit and Marathi word “jeevan” means Life—with a capital L. The meaning of “jeevan” thus is something like a slightly abstract outlook on the concrete facts of life. It is like the schema of life. The word is not so abstract as to mean the very Idea of Life or something like that. It is life in the usual, day-to-day sense, but with a certain added emphasis on the thematic part of it. Note 3: Here, the poet is addressing this poem to “jeevan” i.e., to the Life with a capital L (or the life taken in its more abstract, thematic sense). The poet is addressing Life as if the latter is a companion in an Indian singing concert. The Life is going to help him in selecting the note—the note which would define the whole scale in which to sing during the imminent live performance. The Life is also his companion during the improvisations. The poem is addressed using this metaphor. Now, my (rough) translation: The Refrain: [Just] How do I ask you for the tone, Life, you are that way [or you follow some other way], and I [follow] this way [or, I follow mine] Stanza 1: You glanced at me, I glanced at you, [We] looked full well at each other, Pain is my mirror [or the reference instrument], and [so it is] yours [too] Stanza 2: Even once, to [my] mind’s satisfaction, You [oh, Life] did not ever become my [true]  mate [And so,] I played [on this actual show of life, just whatever] the way the play happened [or unfolded] And, finally, Note 4 (Yes, one is due): There is one place where I failed in my translation, and most any one not knowing both the Marathi language and the poetry of Suresh Bhat would. In Marathi, “tu tasaa, [tar] mee asaa,” is an expression of a firm, almost final, acknowledgement of (irritating kind of) differences. “If you must insist on being so unreasonable, then so be it—I am not going to stop following my mind either.” That is the kind of sense this brief Marathi expression carries. And, the poet, Suresh Bhat, is peculiar: despite being a poet, despite showing exquisite sensitivity, he just never stops being manly, at the same time. Pain and sorrow and suffering might enter his poetry; he might acknowledge their presence through some very sensitively selected words. And yet, the underlying sense of life which he somehow manages to convey also is as if he is going to dismiss pain, sorrow, suffering, etc., as simply an affront—a summarily minor affront—to his royal dignity. (This kind of a “royal” sense of life often is very well conveyed by ghazals. This poem is a Marathi ghazal.) Thus, in this poem, when Suresh Bhat agrees to using pain as a reference point, the words still appear in such a sequence that it is clear that the agreement is being conceded merely in order to close a minor and irritating part of an argument, that pain etc. is not meant to be important even in this poem let alone in life. Since the refrain follows immediately after this line, it is clear that the stress gets shifted to the courteous question which is raised following the affronts made by one fickle, unfaithful, even idiotic Life—the question of “Just how do I treat you as a friend? 
Just how do I ask you for the tone?” (The form of “jeevan” or Life used by Bhat in this poem is masculine in nature, not neutral the way it is in normal Marathi.) I do not know how to arrange the words in the translation so that this same sense of life still comes through. I simply don’t have that kind of a command over languages—any of the languages, whether Marathi or English. Hence this (4th) note. [OK. Now I am (really) done with this post.] Anyway, take care, and bye for now… Update on 21st Dec. 2016, 02:41 AM (IST): Realized a mistake in Stanza 1, and corrected it—the exchange between yours and mine (or vice versa). [E&OE] / # Conservation of angular momentum isn’t [very] fundamental! What are the conservation principles (in physics)? In the first course on engineering mechanics (i.e. the mechanics of rigid bodies) we are taught that there are these three conservation principles: Conservation of: (i) energy, (ii) momentum, and (iii) angular momentum. [I am talking about engineering programs. That means, we live entirely in a Euclidean, non-relativistic, world.] Then we learn mechanics of fluids, and the conservation of (iv) mass too gets added. That makes it four. Then we come to computational fluid dynamics (CFD), and we begin to deal with only three equations: conservation of (i) mass, (ii) momentum, and (iii) energy. What happens to the conservation of the angular momentum? Why does the course on CFD drop it? For simplicity of analysis? Ask that question to postgraduate engineers, even those who have done a specialization in CFD, and chances are, a significant number of them won’t be able to answer that question in a very clear manner. Some of them may attempt this line of reasoning: That’s because in deriving the fluids equations (whether for a Newtonian fluid or a non-Newtonian one), the stress tensor is already assumed to be symmetrical: the shear stresses acting on the adjacent faces are taken to be equal and opposite (e.g. $\sigma_{xy} = \sigma_{yx}$). The assumed equality can come about only after assuming conservation of the angular momentum, and thus, the principle is already embedded in the momentum equations, as they are stated in CFD. If so, ask them: How about a finite rotating body—say a gyroscope? (Assume rigidity for convenience, if you wish.) Chances are, a great majority of them will immediately agree that in this case, however, we have to apply the angular momentum principle separately. Why is there this difference between the fluids and the finite rotating bodies? After all, both are continua, as in contrast to point-particles. Most of them would fall silent at this point. [If not, know that you are talking with someone who knows his mechanics well!] Actually, it so turns out that in continua, the angular momentum is an emergent/derivative property—not the most fundamental one. In continua, it’s OK to assume conservation of just the linear momentum alone. If it is satisfied, the conservation of angular momentum will get satisfied automatically. Yes, even in case of a spinning wheel. Don’t believe me? Let me direct you to Chad Orzel; check out here [^]. Orzel writes: [The spinning wheel] “is a classical system, so all of its dynamics need to be contained within Newton’s Laws. Which means it ought to be possible to look at how angular momentum comes out of the ordinary linear momentum and forces of the components making up the wheel. Of course, it’s kind of hard to see how this works, but that’s what we have computers for.” [Emphasis in italics is mine.] 
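Before turning to Orzel's own demo, here is a minimal sketch of the very same point (my own illustration, not Orzel's code; the two-mass setup, the spring constant, and all the numbers are made up for the purpose). Two point masses joined by a stiff spring are set spinning about their common center of mass. The only physics coded in is $F = ma$ with an equal-and-opposite central force, i.e., purely the linear momentum law; the total angular momentum $L = \sum_i m_i (x_i \dot{y}_i - y_i \dot{x}_i)$ is merely measured, never enforced:

import numpy as np

m = np.array([1.0, 1.0])                  # two point masses
x = np.array([[-0.5, 0.0], [0.5, 0.0]])   # positions
v = np.array([[0.0, -1.0], [0.0, 1.0]])   # velocities: a spinning pair
k, rest, dt = 1000.0, 1.0, 1e-4           # spring stiffness, rest length, time step

def forces(x):
    r = x[1] - x[0]
    d = np.linalg.norm(r)
    f = k * (d - rest) * r / d            # central force along the joining line
    return np.array([f, -f])              # Newton's third law pair

def ang_mom(x, v):
    return np.sum(m * (x[:, 0] * v[:, 1] - x[:, 1] * v[:, 0]))

L0 = ang_mom(x, v)
for _ in range(100000):                   # semi-implicit Euler; only F = ma is used
    v += forces(x) / m[:, None] * dt
    x += v * dt
print("relative drift in L:", abs(ang_mom(x, v) - L0) / abs(L0))

Run it, and the reported drift sits essentially at round-off level: the conservation of angular momentum simply falls out of the linear momentum law; nothing about rotation was ever put in.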
He proceeds to put together a simple demo in Python. Then, he also expands on it further, here [^]. Cool. If you think you have understood Orzel’s argument well, answer this [admittedly deceptive] question: How about point particles? Do we need a separate conservation principle for the angular momentum, in addition to that for the linear momentum at least in their case? How about the earth and the moon system, granted that both can be idealized as point particles (the way Newton did)? Think about it. A Song I Like: (Hindi) “baandhee re kaahe preet, piyaa ke sang” Singer: Sulakshana Pandit Music: Kalyanji-Anandji Lyrics: M. G. Hashmat [E&OE] /
# zbMATH — the first resource for mathematics

Nonlinear small data scattering for the wave and Klein-Gordon equation. (English) Zbl 0538.35063

The author considers the pair of partial differential equations $$(1)\quad u_{tt}+Au+f(u)=0, \qquad (2)\quad u_{tt}+Au=0$$ and arbitrarily given initial data $$(\phi^-,\psi^-)$$. Here $$A$$ denotes the operator $$m^2-\sum^{n}_{j=1}\partial^2/\partial x^2_j$$ and $$f(u)=\lambda |u|^{\rho-1}u$$ with $$m,\lambda\in\mathbb{R}$$ and $$\rho>1$$. The main objective is to study the existence and uniqueness problem associated with the operator $$S$$ (the scattering operator), which maps the initial data $$(\phi^-,\psi^-)$$ into other initial data $$(\phi^+,\psi^+)$$ having the following properties: Let $$u^-_0$$ and $$u^+_0$$ denote the solutions of the Cauchy problem for (2) with data $$(\phi^-,\psi^-)$$ and $$(\phi^+,\psi^+)$$, respectively. Then there is a solution $$u$$ of (1) such that $$\|u-u^-_0\|_e\to 0$$ as $$t\to -\infty$$, while $$\|u-u^+_0\|_e\to 0$$ as $$t\to +\infty$$. Here $$\|\cdot\|_e$$ denotes the energy norm, defined by $$\|v\|^2_e=\frac{1}{2}\left\{\|A^{\frac{1}{2}}v\|^2+\|v_t\|^2\right\}.$$ He gives sufficient conditions under which $$S$$ exists and is unique. The results related to the cases $$m=0$$ (nonlinear wave equation) and $$m\neq 0$$ (nonlinear Klein-Gordon equation) are stated in different theorems. We remark that some factors $$(2\pi)$$ and $$(-1)$$ are omitted in some places. For example, the signs before the integrals cited in theorems 2, 3 and 4 should be $$(-)$$ in order that $$u(t)$$ and $$u^+_0(t)$$ satisfy (1) or (2).

Reviewer: M. Idemen

##### MSC:

35P25 Scattering theory for PDEs
35L70 Second-order nonlinear hyperbolic equations
81Q05 Closed and approximate solutions to the Schrödinger, Dirac, Klein-Gordon and other equations of quantum mechanics
35R15 PDEs on infinite-dimensional (e.g., function) spaces (= PDEs in infinitely many variables)
58D25 Equations in function spaces; evolution equations

##### References:

[1] Brenner, Ph.: On scattering and everywhere defined scattering operators for nonlinear Klein-Gordon equations. To appear in J. Differential Equations · Zbl 0513.35066
[2] Marshall, B.: Mixed norm estimates for the Klein-Gordon equation; in: Proceedings of a Conference on Harmonic Analysis (Chicago 1981)
[3] Pecher, H.: Lp-Abschätzungen und klassische Lösungen für nichtlineare Wellengleichungen I. Math. Z. 150, 159-183 (1976) · Zbl 0347.35053
[4] Stein, E.M.: Singular integrals and differentiability properties of functions. Princeton, New Jersey: Princeton University Press 1970 · Zbl 0207.13501
[5] Strauss, W.A.: Nonlinear scattering theory at low energy. J. Funct. Analysis 41, 110-133 (1981) · Zbl 0466.47006
[6] Strauss, W.A.: Nonlinear scattering theory at low energy: sequel. J. Funct. Analysis 43, 281-293 (1981) · Zbl 0494.35068
[7] Strichartz, R.S.: Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations. Duke Math. J. 44, 705-714 (1977) · Zbl 0372.35001

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
Back in 2017, I wanted to learn WebAssembly, so I started investigating it. The lack of good material led me to learning "the hard way", breaking every repository I found. Having spent hundreds of hours behind the keyboard, on flights, on trains, and in endless toilet-hours thinking about this project, it is difficult to start writing about it. Mostly by accident, I ended up creating a simple compiler that translates simple mathematical expressions and functions into WebAssembly. Suddenly this language began to grow and I started treating it more seriously. During this year or so, I gathered a lot of information about languages in general and WASM itself. Some of the takeaways were: • There is very little information about the decision processes behind major languages. • WASM works well, but in terms of compilers it is almost a hack on top of LLVM. This deserves its own article. • It is not yet a first-class citizen of the web. It requires a lot of glue code, which is often an auto-generated black box. I have published this side project to see what the internet can do with it. At some point I tried to organize the work by following some simple rules: • Avoid human errors as much as possible: First of all, do not do nasty things and do not let the user make mistakes: 1. Do not include null pointers, or even pointers at all 2. Do not include implicit unsafe type casting (Hello JS!) • Functional, but try to hide the complexity: Lots of developers get doubtful the first time they see a functional language. The goal is to have a fully functional language with a user-friendly syntax. No parentheses everywhere (LISP-like languages), no absence of visual structure (Haskell), no syntax heavily loaded with symbols (Rust, Scala) • Magic is bad: Avoid doing black magic for the user; do not let the user overload any possible symbol (Scala), or make the user decipher the symbology of the language (Rust, Haskell), or inject implicit contexts (Scala). • It has to be consistent: This is one of the most important things while learning something. It has to be consistent so you can easily abstract it in your head. One of the strategies I follow to achieve this is to tear everything down to the smallest possible building blocks and build just those small pieces. The rest will be sugar syntax using those blocks. • Bootstrappable: It has to be useful for something; a usual way to prove that for a language is to write the compiler in the same language. To do so, I'll keep the compiler simple so I can translate the compiler to this language "easily". I mean, if implementing async/awaits in this new language is a pain in the rear, I'll try to avoid async/awaits in the compiler. ## How does it look?
### Structs & Implementing operators struct Vector3(x: f32, y: f32, z: f32) impl Vector3 { fun -(lhs: Vector3, rhs: Vector3): Vector3 = Vector3( lhs.x - rhs.x, lhs.y - rhs.y, lhs.z - rhs.z ) fun property_length(this: Vector3): f32 = system::math::sqrt( this.x * this.x + this.y * this.y + this.z * this.z ) } fun distance(from: Vector3, to: Vector3): f32 = { (from - to).length } ### Pattern matching // this snippet is an actual unit test import support::test enum Color { Red Green Blue Custom(r: i32, g: i32, b: i32) } fun isRed(color: Color): boolean = { match color { case is Red -> true case is Custom(r, g, b) -> r == 255 && g == 0 && b == 0 else -> false } } #[export] fun main(): void = { mustEqual(isRed(Red), true, "isRed(Red)") mustEqual(isRed(Green), false, "isRed(Green)") mustEqual(isRed(Blue), false, "isRed(Blue)") mustEqual(isRed(Custom(255,0,0)), true, "isRed(Custom(255,0,0))") mustEqual(isRed(Custom(0,1,3)), false, "isRed(Custom(0,1,3))") mustEqual(isRed(Custom(255,1,3)), false, "isRed(Custom(255,1,3))") } ### Algebraic data types // this snippet is an actual unit test enum Tree { Node(value: i32, left: Tree, right: Tree) Empty } fun sum(arg: Tree): i32 = { match arg { case is Empty -> 0 case is Node(value, left, right) -> value + sum(left) + sum(right) } } #[export] fun main(): void = { val tree = Node(42, Node(3, Empty, Empty), Empty) support::test::mustEqual(sum(tree), 45, "sum(tree) returns 45") } ### Types and overloads are created in the language itself The compiler only knows how to emit functions and how to link function names. I did that so I had fewer things hardcoded into the compiler and allows me to write the language in the language. To do that, I had to add a %wasm { ... } code block, and a %stack { ... } type literal. • %wasm { ... }: can only be used as a function body, not as an expression. It is literally the code that will be emited to WAST. The parameter names remain the same (prefixed with $ as WAST indicates). Other symbols can be resolved with fully::qualified::names. • %stack { wasm="i32", size=4 }: it is a type literal, it indicates how much memory should be allocated for structs (size), and what type to use in locals and function parameters (wasm, it needs a better name). /** We first define the type int */ type int = %stack { wasm="i32", size=4 } /** Implement some operators for the type int */ impl int { fun +(lhs: int, rhs: int): int = %wasm { (i32.add (get_local$lhs) (get_local $rhs)) } fun -(lhs: int, rhs: int): int = %wasm { (i32.sub (get_local$lhs) (get_local $rhs)) } fun >(lhs: int, rhs: int): boolean = %wasm { (i32.gt_s (get_local$lhs) (get_local $rhs)) } } fun fibo(n: int, x1: int, x2: int): int = { if (n > 0) { fibo(n - 1, x2, x1 + x2) } else { x1 } } #[export "fibonacci"] // "fibonacci" is the name of the exported function fun fib(n: int): int = fibo(n, 0, 1) ## Some sugar ### Enum types enum Tree { Node(value: i32, left: Tree, right: Tree) Empty } Is the sugar syntax for type Tree = Node | Empty struct Node(value: i32, left: Tree, right: Tree) struct Empty() impl Tree { fun is(lhs: Tree): boolean = lhs is Node || lhs is Empty // ... } impl Node { fun as(lhs: Node): Tree = %wasm { (local.get$lhs) } // ... many methods were removed for clarity .. } impl Empty { fun as(lhs: Node): Tree = %wasm { (local.get $lhs) } // ... } ### is and as operators are just functions impl u8 { /** * Given an expression with the shape: * * something as Type * ^^^^^^^^^ ^^^^ *$lhs $rhs * * A function with the signature: * fun as($lhs: LHSType): $rhs = ??? 
* * Will be searched in the impl of LHSType * */ fun as(lhs: u8): f32 = %wasm { (f32.convert_i32_u (get_local$lhs)) } } fun byteAsFloat(value: u8): f32 = value as f32 struct CustomColor(rgb: i32) type Red = void impl Red { fun is(lhs: CustomColor): boolean = match lhs { case is Custom(rgb) -> (rgb & 0xFF0000) == 0xFF0000 else -> false } } var x = CustomColor(0xFF0000) is Red // this may not be a good thing, but you get the idea ### There are no dragons behind the structs The struct keyword is only a high level construct that creats a type and base implementation of something that behaves like a data type, normally in the heap. struct Node(value: i32, left: Tree, right: Tree) Is the sugar syntax for // We need to keep the name and order of the fields for deconstructors type Node = %struct { value, left, right } impl Node { fun as(lhs: Node): Tree = %wasm { (local.get $lhs) } #[explicit] fun as(lhs: Node): ref = %wasm { (local.get$lhs) } // the discriminant is the type number assigned by the compiler #[inline] private fun Node$discriminant(): u64 = { val discriminant: u32 = Node.^discriminant discriminant as u64 << 32 } // this is the function that gets called when Node is used as a function call fun apply(value: i32, left: Tree, right: Tree): Node = { // a pointer is allocated. Then using the function fromPointer it is converted // to a valid Node reference var$ref = fromPointer(system::memory::calloc(1 as u32, Node.^allocationSize)) property$0($ref, value) property$1($ref, left) property$2($ref, right) $ref } // this function converts a raw address into a valid Node type private fun fromPointer(ptr: u32): Node = %wasm { (i64.or (call Node$discriminant) (i64.extend_u/i32 (local.get $ptr))) } fun ==(a: Node, b: Node): boolean = %wasm { (i64.eq (local.get$a) (local.get $b)) } fun !=(a: Node, b: Node): boolean = %wasm { (i64.ne (local.get$a) (local.get $b)) } fun property_value(self: Node): i32 = property$0(self) fun property_value(self: Node, value: i32): void = property$0(self, value) #[inline] private fun property$0(self: Node): i32 = i32.load(self, Node.^property$0_offset) #[inline] private fun property$0(self: Node, value: i32): void = i32.store(self, value, Node.^property$0_offset) fun property_left(self: Node): Tree = property$1(self) fun property_left(self: Node, value: Tree): void = property$1(self, value) #[inline] private fun property$1(self: Node): Tree = Tree.load(self, Node.^property$1_offset) #[inline] private fun property$1(self: Node, value: Tree): void = Tree.store(self, value, Node.^property$1_offset) fun property_right(self: Node): Tree = property$2(self) fun property_right(self: Node, value: Tree): void = property$2(self, value) #[inline] private fun property$2(self: Node): Tree = Tree.load(self, Node.^property$2_offset) #[inline] private fun property$2(self: Node, value: Tree): void = Tree.store(self, value, Node.^property$2_offset) fun is(a: (Node | ref)): boolean = %wasm { (i64.eq (i64.and (i64.const 0xffffffff00000000) (local.get$a)) (call Node$discriminant)) } fun store(lhs: ref, rhs: Node, offset: u32): void = %wasm { (i64.store (i32.add (local.get$offset) (call addressFromRef (local.get $lhs))) (local.get$rhs)) } fun load(lhs: ref, offset: u32): Node = %wasm { (i64.load (i32.add (local.get $offset) (call addressFromRef (local.get$lhs)))) } } Repository: https://github.com/lys-lang/lys. Homepage: https://lys-lang.dev
# When using nodal analysis of a circuit involving CCCS, how do you know which currents are entering and which are leaving? I am trying to solve the following circuit: I believe the answer I'm getting for $i_b$ is wrong because I put it into LTSpice and I'm getting that $i_b = -3.63636$ This is my LTSpice diagram: I found $i_b = 1$mA by doing a loop voltage analysis on the left loop; for the voltage drop across the $200\Omega$ resistor I assumed that it would be $i_b + 29i_b$, which works out to be a nice number and in fact all of the numbers are nice in this case--usually when the numbers are nice, you know you're doing it right. At this point, I'm not sure if I incorrectly modeled this in LTSpice, or if I incorrectly assumed which way the current was flowing. Instead of giving me the answer directly, I would just like to know how to determine whether the current at a node is entering or leaving a branch. • possible duplicate of Node Analysis - current calculation – Ignacio Vazquez-Abrams Jun 16 '15 at 16:04 • General answer without looking at your example: Leaving and entering is simply a matter of sign convention. If you know the relative polarity of the sources you can deduce current polarity. If not then plugging in what you know in a consistent manner will produce consistent results. The problems usually come from an inconsistent application of this basic concept. – Russell McMahon Jun 16 '15 at 16:07 • @IgnacioVazquez-Abrams The answer given in that question states that you can just "guess the current" and it will work out positive or negative, but in this case if I guess the opposite then I will get a voltage drop of 200*(28ib) at the 200 ohm resistor--vs 200*(30ib). This will change the answer... (I say this unconfidently because I don't know what my mistakes are). – Klik Jun 16 '15 at 16:08 • @RussellMcMahon Since I have a CCCS, won't the answer change, as I described in my previous comment? – Klik Jun 16 '15 at 16:09 • It looks like your Spice simulation is wrong because I1 is an independent current source, not a CCCS. – Null Jun 16 '15 at 16:48 Your paper analysis is correct, but your LTspice simulation is incorrect. I get the same (incorrect) result as you if I use a gain of $+29$ for the F device (your $I_1$). But the gain should be $-29$ since $i_b$ flows from the negative to positive terminal of $V_{\text{ib}}$. Changing the gain gives you the correct result. Circuit: F device attributes: Result: If I change the gain to $+29$ the result is: Note that the simulation result is $v_y = v_{y1} - v_{y2} \approx 98$V when using a gain of $+29$, which is clearly wrong. The two simulations highlight the importance of maintaining consistency in the direction of currents. The problem statement defines $i_b$ and $29i_b$ as both flowing toward the middle "T" node. LTspice defines $i_b$ as flowing away from it since it defines the current through $V_{\text{ib}}$ as flowing from positive to negative terminal. That means you also have to define the CCCS $29i_b$ as flowing away from the middle "T" node. In the incorrect simulation (with gain of $+29$), $29i_b$ is still flowing toward the "T" node while $i_b$ is flowing away from it. The correct simulation defines them both as flowing away from the "T" node. Alternatively, you could just switch the direction of the "F" device and use a positive current gain -- it would then also be defined as flowing away from the "T" node. • Thank you very much for going to the length to find the mistake! 
I spent a lot of time trying to figure out what I was doing wrong. At least I learned something from this. – Klik Jun 16 '15 at 22:09 • @Klik Happy to help. – Null Jun 17 '15 at 1:45 • I don't know if it's too late, but LTspice's currents are distributed in a grid, and their directions are chosen to start from top left, going right and downwards. This can be easily verified by pressing Alt and hovering the mouse over the wires, in any topology. It's just a matter of arbitrary choice for the programming part, the same as current through elements that are fixed at entering into one pin, and exiting at the other: a fixed reference from the solver's perspective. – a concerned citizen Sep 1 '16 at 17:59 While doing this type of problem, don't worry about current directions. First express $i_b$ in terms of the principal node, then apply nodal analysis and find all the currents in the branches. Then you will know the direction of the currents.
# Revision history [back]

Looks like a bug in sympy. Here is an alternative method, for polynomial equations with a finite number of solutions:

sage: R.<x,y> = PolynomialRing(QQ)
sage: I = R.ideal([x*x*x-y*y-10.5, 3.0*x*y+y-4.6])
sage: I.variety(RR)
[{y: 0.601783026716651, x: 2.21465035058553}]

You might also like AA (the field of real algebraic numbers) instead of RR, for exact computations.
Sep 22, 2009 at 16:19
This has no practical consequences for me, it's just something that I'm curious about. If I were to create a function

bool foo(bool a, bool b) { return a && b; }

And use it…

foo(false, function_with_side_effects());

Would the compiler be allowed to inline it such that the function with side effects isn't called? In other words, could it inline it to this:

false && function_with_side_effects();

Again, this isn't causing me any problems; it's just a curiosity that recently crossed my mind.

5 Replies

Sep 22, 2009 at 16:21
No; when calling a function all arguments are always completely evaluated.

Sep 22, 2009 at 16:55
Well, yes. I suppose the implication is that this rule applies to functions that are inlined as well? What about overloads of operator&&? Are they all special functions, or are they only special for primitive types?

Sep 22, 2009 at 17:00
Yes; inlining a function doesn't change the fact that all arguments are evaluated. As for overloads of operator&&, I'm 95% sure they behave just like regular functions, and the short-circuiting behavior only applies to the built-in operator&&.

Sep 22, 2009 at 17:40
Yay for consistency.

Sep 22, 2009 at 20:05
@Reedbeta Yes; inlining a function doesn't change the fact that all arguments are evaluated. As for overloads of operator&&, I'm 95% sure they behave just like regular functions, and the short-circuiting behavior only applies to the built-in operator&&.
Correctamundo
# Center for Analysis and Design of Intelligent Agents

public:t-720-atai:atai-19:lecture_notes_w3

====What Kind of Task-Environments do AGI Systems Target?====

|  Worlds  | Complex, intricate worlds, large number of variables (relative to the system's CPU and memory). \\ Complexity lies somewhere between randomness and regularity. \\ Many levels of temporal and spatial detail. \\ Ultimately, any system worthy of being called "AGI" must be capable of successful operation in the physical world.  |
|  Environments  | Somewhere between random and static. Dynamic; large number of variables (relative to the system's CPU and memory capacity). \\ Many levels of temporal and spatial detail.  |
|  Tasks  | Dynamic; large number of variables (relative to the system's CPU and memory capacity). \\ Underspecified.  |
|  Goals  | Multiple goals can easily be specified.  |
|  Solutions  | New solutions can be found.  |

====How it Hangs Together: Worlds, Environments, Tasks, Goals====
# Show sparse matrices like chessboards

I am trying to display sparse matrices like chessboards, where white places indicate 0 entries and black ones non-zero entries (in this case the matrices are boolean, so every non-zero entry is a one entry), but I can't find a proper way. Because I am looking to show more than one matrix (in more detail, I have to show matrix A and its powers), I will have to print more than one on the same page and specify their layout, like figures. For example, given this matrix as input: | 0 | 1 | 0 | | 1 | 0 | 1 | | 0 | 1 | 0 | I would expect such output:

With TikZ this is rather straightforward.

\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\begin{document}
\begin{tikzpicture}[0/.style={draw,ultra thin},1/.style={0,fill=black}]
\matrix[matrix of nodes,cells={minimum size=1.5em,anchor=center}]
{|[0]| & |[1]| & |[0]| \\ |[1]| & |[0]| & |[1]|\\ |[0]| & |[1]| & |[0]|\\ };
\end{tikzpicture}
\end{document}

If you have a simple pattern as this one, you could also do

\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\begin{document}
\begin{tikzpicture}[my cell/.style={/utils/exec={%
\pgfmathtruncatemacro{\itest}{mod(\the\pgfmatrixcurrentrow+\the\pgfmatrixcurrentcolumn,2)}
\ifnum\itest=1
\pgfkeysalso{/tikz/fill=black}
\fi}}]
\matrix[matrix of nodes,nodes in empty cells,
nodes={minimum size=1.5em,anchor=center,draw,ultra thin,my cell}]
{ & & \\ & & \\ & & \\ };
\end{tikzpicture}
\end{document}

An addendum, just for fun. It is rather similar to @egreg's nice answer; in fact \sparsezero, \sparseone and the name of the environment are just stolen from there. The difference is that instead of making 0 and 1 active characters, collcell is employed, which is also hacky but arguably less violent. It defines a new column type that just employs a macro. However, extending the entries to larger values will be as easy as adding a few \ors to the \ifcase, so I feel that this may be easier to customize than egreg's nice solution that this is conceptually building on.
\documentclass[12pt]{article} \usepackage{amsmath} \usepackage{array} \usepackage{collcell} \newlength{\sparsesize} \setlength{\sparsesize}{12pt} \newcommand{\sparsezero}{% \begingroup \setlength{\fboxsep}{-0.2pt}% \setlength{\fboxrule}{0.2pt}% \fbox{\hspace{\sparsesize}\rule{0pt}{\sparsesize}}% \endgroup } \newcommand{\sparseone}{\rule{\sparsesize}{\sparsesize}} \newcommand{\sparseentry}[1]{\ifcase#1 \sparsezero \or \sparseone \fi} \newcolumntype{F}{>{\collectcell\sparseentry}c<{\endcollectcell}} \newenvironment{sparsematrix} {% \renewcommand{\arraycolsep}{0pt}% \renewcommand{\arraystretch}{0}% \begin{array}{*{20}{F}}% } {\end{array}} \begin{document} $\begin{sparsematrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ \end{sparsematrix}=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ \end{pmatrix}$ \end{document} With a fairly natural syntax: \documentclass{article} \usepackage{amsmath} \newlength{\sparsesize} \setlength{\sparsesize}{12pt} \newcommand{\sparsezero}{% \begingroup \setlength{\fboxsep}{-0.2pt}% \setlength{\fboxrule}{0.2pt}% \fbox{\hspace{\sparsesize}\rule{0pt}{\sparsesize}}% \endgroup } \newcommand{\sparseone}{\rule{\sparsesize}{\sparsesize}} \newcommand{\activate}[2]{% \begingroup\lccode~=#1\lowercase{\endgroup\let~}#2% \mathcode#1="8000 } \newenvironment{sparsematrix} {% \renewcommand{\arraystretch}{0}% \setlength{\arraycolsep}{0pt}% \activate{0}{\sparsezero}\activate{1}{\sparseone}% \begin{matrix}% } {\end{matrix}} \begin{document} $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} = \begin{sparsematrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{sparsematrix}$ \end{document} For general matrices with integer coefficients it's a bit more difficult. \documentclass{article} \usepackage{amsmath,xparse} \newlength{\sparsesize} \setlength{\sparsesize}{12pt} \newcommand{\sparsezero}{% \begingroup \setlength{\fboxsep}{-0.2pt}% \setlength{\fboxrule}{0.2pt}% \fbox{\hspace{\sparsesize}\rule{0pt}{\sparsesize}}% \endgroup } \newcommand{\sparseone}{\rule{\sparsesize}{\sparsesize}} \ExplSyntaxOn \NewDocumentEnvironment{sparsematrix}{b} { \renewcommand{\arraystretch}{0}% \setlength{\arraycolsep}{0pt}% {% make a subformula \begin{matrix} \eagleone_sparsematrix:n { #1 } \end{matrix} } }{} \seq_new:N \l__eagleone_sparsematrix_rows_seq \seq_new:N \l__eagleone_sparsematrix_row_in_seq \seq_new:N \l__eagleone_sparsematrix_row_out_seq \cs_new_protected:Nn \eagleone_sparsematrix:n { \seq_set_split:Nnn \l__eagleone_sparsematrix_rows_seq { \\ } { #1 } \seq_map_function:NN \l__eagleone_sparsematrix_rows_seq \__eagleone_sparsematrix_row:n } \cs_new_protected:Nn \__eagleone_sparsematrix_row:n { \seq_set_split:Nnn \l__eagleone_sparsematrix_row_in_seq { & } { #1 } \seq_map_inline:Nn \l__eagleone_sparsematrix_row_in_seq { \int_compare:nTF { ##1 = 0 } { \seq_put_right:Nn \l__eagleone_sparsematrix_row_out_seq { \sparsezero } } { \seq_put_right:Nn \l__eagleone_sparsematrix_row_out_seq { \sparseone } } } \seq_use:Nn \l__eagleone_sparsematrix_row_out_seq { & } \\ } \ExplSyntaxOff \begin{document} $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} = \begin{sparsematrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{sparsematrix}$ $\begin{sparsematrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{sparsematrix}^2 = \begin{sparsematrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{sparsematrix}$ \end{document} ` • For my humble opinion: excellent. +1. – Sebastiano May 30 '19 at 20:14
# Technical Studies Reference

### Volume Weighted Average Price (VWAP) with Standard Deviation Lines

This study calculates and displays the Volume Weighted Average Price (VWAP) over the specified period of time for the symbol of the chart. The period of time is set by the Time Period Type and Time Period Length Inputs. This calculation gives greater weight to trade prices that have a higher volume. The calculation resets at the beginning of each new period in the chart.

Let $$X$$ be a random variable denoting the Input Data, let $$X_i$$ be the value of the Input Data at chart bar $$i$$, let $$V_i$$ be the Volume at chart bar $$i$$, and let $$n$$ be the length of the period in chart bars for the calculation as specified by the Inputs Time Period Type and Time Period Length. We begin by computing the Period Volume $$V_P$$ for the period. $$V_P = \sum_{i=1}^nV_i$$ Then the Volume Weighted Average Price during the period for the given Inputs is denoted as $$VWAP(X,n)$$, and is calculated as follows. $$VWAP(X,n) = \left(\sum_{i=1}^nX_iV_i\right)/V_P$$

The start of the trading day is determined from the Session Times set in Chart >> Chart Settings. For example, when the Time Period Length and Time Period Type are set to 1 Day, then the calculations will begin at the start of each trading day according to the Session Times and end at the end of the trading day. The study also supports calculating and displaying Fixed Offset/Standard Deviation band lines, the calculation of which is explained further down on this page.

#### Displaying or Hiding Standard Deviation/Fixed Offset Band Lines

Up to 4 Standard Deviation/Fixed Offset lines based upon the Volume Weighted Average Price line can be displayed. To display these standard deviation band lines, follow the steps below. • Open the Study Settings window for the Volume Weighted Average Price study on the chart. For instructions, refer to Adding/Modifying Chart Studies. • Select the Subgraphs tab. • The Standard Deviation/Fixed Offset lines are labeled Top Band 1-4 and Bottom Band 1-4. To make a line visible, set its Draw Style to Dash or another visible Draw Style. To hide it, set its Draw Style to Ignore. Refer to the description for the Band # Std. Deviation Multiplier/Fixed Offset Input for information about how these lines are calculated.

#### Differences Between VWAP and Standard Deviation Lines On Different Timeframe Bars

The VWAP is a fairly simple calculation, but is very dependent on the data that it is calculated from. For the highest accuracy and the same values on different timeframe bars, it is necessary to set the Base on Underlying Data Input to Yes, so that the study uses the underlying data that makes up the bars. It is also necessary to have tick by tick data in the chart data file for the highest accuracy. Refer to Tick by Tick Data Configuration. When you compare VWAPs on different timeframe bars, the values will be exact when using Base on Underlying Data. If Base on Underlying Data is set to No, the values will be different, and this is expected. The Standard Deviation Bands for the Volume Weighted Average Price on different timeframe bars can be different. This is because the Standard Deviation is calculated in part using the chart bar values, and the chart bar values can be significantly different between chart bar timeframes. For example, there is only every fifth value on a 5 minute bar chart versus a 1 minute bar chart.
#### Standard Deviation Band Calculation

The following explanation of the standard deviation band calculation applies when the Standard Deviation Band Calculation Method is set to VWAP Variance. The Variance during one period for the given Inputs is denoted as $$Var(X,n)$$, and is calculated as follows. $$Var(X,n) =\sum_{i=1}^n\left(X_i-VWAP_i(X,n)\right)^2V_i$$ The Standard Deviation during the period for the given Inputs is denoted as $$SD(X,n)$$, and is calculated as follows. $$SD(X,n) = \sqrt{Var(X,n)}$$ Next, the Offset during the period for the given Inputs is denoted as $$Off(X,n)$$ and is calculated as follows, where $$V_P$$ is documented above. $$Off(X,n) = \sqrt{Var(X,n)/V_P}$$ The Standard Deviation Bands are computed using a Multiplier $$b$$. Let $$TB_j$$ and $$BB_j$$ be Top Band and Bottom Band number $$j$$, respectively $$(j=1,2,3,4)$$. We compute the Bands for each Period as follows. $$TB_1 = VWAP + b\cdot Off(X,n)$$ $$BB_1 = VWAP - b\cdot Off(X,n)$$ $$TB_2 = VWAP + 2b\cdot Off(X,n)$$ $$BB_2 = VWAP - 2b\cdot Off(X,n)$$ $$TB_3 = VWAP + 3b\cdot Off(X,n)$$ $$BB_3 = VWAP - 3b\cdot Off(X,n)$$ $$TB_4 = VWAP + 4b\cdot Off(X,n)$$ $$BB_4 = VWAP - 4b\cdot Off(X,n)$$

#### Inputs

• Input Data • Time Period Type: Sets the type of time period for the calculation. This Input works in conjunction with Time Period Length. For a 1 Day period, set this to Days. The number of Days specified always refers to calendar days and not trading days. • Time Period Length: Sets the quantity to be used with Time Period Type. For example, for a period of 1 Day, set this to 1 and set Time Period Type to Days. • Base on Underlying Data: This Input setting only applies to Intraday charts and not to Historical charts. When this Input is set to No, which is the default, then the price and volume data for the calculations are based on the bars in the chart. The last trade price of the bar is used, which is the default, and the total volume of the chart bar is used. To base the calculations on the underlying price and volume data, which generally is more detailed than the chart bars, set this Input to Yes. When this Input is set to Yes, the chart may be automatically reloaded to load in the more detailed Volume at Price data. It is recommended when using this setting that, since Intraday charts are required, you select Chart >> Chart Settings and select Chart Data Type >> Intraday Chart Only to always ensure the chart is set to use Intraday data. • Start Date-Time: This Input can optionally be set to a starting Date-Time to begin the calculations at for the Volume Weighted Average Price study. It is necessary to specify both the Date and the Time. You cannot just specify the Time only. The Time Period Length and Time Period Type Inputs still apply when using a Start Date-Time. The Start Date-Time setting does not refer to the starting time of day when the Time Period Length and Time Period Type are set to 1 Day. The purpose of this Input is to reduce the amount of calculations performed within the chart by starting at a particular Date-Time. • Standard Deviation Band Calculation Method: This can be set to VWAP Variance, Fixed Offset or Standard Deviation. For the formulas for each, refer to Standard Deviation Band Calculation. • Band # Std. Deviation Multiplier/Fixed Offset: When the standard deviation Top # Band and Bottom # Band Subgraphs are set to be displayed, this Input specifies how far the band lines are from the Volume Weighted Average Price line.
When the Standard Deviation Band Calculation Method Input is set to VWAP Variance or Standard Deviation, then this Input specifies the value multiplied by the VWAP Variance or Standard Deviation. For example, if the Input is set to 2.0, then the band would be offset by 200% of the Standard Deviation. When the Standard Deviation Band Calculation Method Input is set to Fixed Offset, then this Input becomes a fixed offset and is used to offset the Bands from the Volume Weighted Average Price by the specified amount. Bands 1-4 are offset by the exact amounts.
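To make the notation above concrete, here is a small sketch (an illustration for this page, not Sierra Chart's actual code; it reads $$VWAP_i(X,n)$$ in the Variance sum as the running VWAP value at bar $$i$$) that computes the VWAP, the VWAP Variance, the Offset, and the first Top/Bottom Band pair for one period, in Python:

def vwap_bands(X, V, b=1.0):
    # X: bar prices X_i, V: bar volumes V_i for one period; b: band Multiplier
    cum_pv = 0.0   # running sum of X_i * V_i
    cum_v = 0.0    # running Period Volume
    var = 0.0      # VWAP Variance accumulator
    for x, v in zip(X, V):
        cum_pv += x * v
        cum_v += v
        vwap_i = cum_pv / cum_v                # running VWAP_i at bar i
        var += (x - vwap_i) ** 2 * v
    vwap = cum_pv / cum_v                      # VWAP(X, n)
    off = (var / cum_v) ** 0.5                 # Off(X, n) = sqrt(Var(X, n) / V_P)
    return vwap, vwap + b * off, vwap - b * off   # VWAP, TB_1, BB_1

print(vwap_bands([10.0, 10.5, 10.2], [100, 150, 50], b=2.0))

The higher bands scale the same Offset by $$2b$$, $$3b$$ and $$4b$$, per the formulas above.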
# Vector Fun: projection of a vector on another I just realized that I used a technique – projecting a vector on a line given by another vector – in my last post on ray-tracing that I did not justify in any way. Look at this picture: Given vector $a$ and $b$ we are looking for vector $b'$. We saw in the dot-product article, that we have $a \cdot b = \left| a \right| \left| b \right| cos \alpha$ and of course $cos \alpha = \frac { \left| b' \right| } { \left| b \right| }$ combining these we get $a \cdot b = \left| a \right| \left| b' \right|$ and therefore as $b' = \frac {\left| b' \right| }{ \left| a \right| } a$ finally $b' = \frac { a \cdot b}{ {\left| a \right|}^{2}} a$ which reduces to $b' = (b \cdot n) n$ in the case that $n = a$ is a unit-vector. This is the result we lacked.
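As a quick numerical check of this result (my own addition, not part of the original derivation), the reduced formula takes only a few lines of Python:

import numpy as np

def project(b, a):
    # projection of b onto the line spanned by a: b' = (a.b / |a|^2) a
    return (np.dot(a, b) / np.dot(a, a)) * a

a = np.array([3.0, 0.0])
b = np.array([2.0, 2.0])
b_proj = project(b, a)
print(b_proj)                 # [2. 0.], as expected for a along the x-axis
print(np.dot(b - b_proj, a))  # 0.0: the residual is orthogonal to a

The vanishing dot product confirms that $b - b'$ is orthogonal to $a$, which is exactly the defining property of the projection.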
# Effect of Zn dust reduction of phenolic -OH group on other groups

We use distillation with zinc dust to remove the -OH group from phenol. $$\ce{Ph-OH + Zn \rightarrow Ph-H + ZnO}$$ What I want to know is whether this reaction has any effect on other groups attached to the benzene ring, like $\ce{-CH3, -NH2, -NO2, -CN, -CONH2}$ etc., or not. • Nitro and cyano will react – Waylander Aug 25 '17 at 15:52 • @Waylander Please explain the reactions in detail. Answer with examples if possible. – Shoubhik Raj Maiti Oct 2 '17 at 7:40 • Zinc plus a proton source and heating is a standard way of reducing aromatic nitro. I have run this with ammonium chloride as the proton source in methanol – Waylander Oct 2 '17 at 9:18 • @Waylander, I understand. But can you give the whole reaction? I mean, a NO2 group can be reduced to an NHOH or NH2 group, and a CN group can be reduced to CH=NH or CH2NH2 groups. – Shoubhik Raj Maiti Oct 10 '17 at 7:59 • I think at this point you should do your own research into the reductive properties of zinc. – Waylander Oct 10 '17 at 12:17
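To make the point in the comments concrete, here is one standard instance of the zinc reduction of an aromatic nitro group (my own illustrative equation, assuming the usual Zn/HCl stoichiometry; it is not taken from the question or comments): $$\ce{Ph-NO2 + 3 Zn + 6 HCl -> Ph-NH2 + 3 ZnCl2 + 2 H2O}$$ So a nitro substituent would not be expected to survive the reducing conditions unchanged.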
# Box Pushed Up an Incline Ramp: Basic Newton's Law Questions

## Homework Statement

A 90 kg box is pushed by a horizontal force F at constant speed up a ramp inclined at 28°, as shown. Determine the magnitude of the applied force. 1. when the ramp is frictionless. 2. when the coefficient of kinetic friction is 0.18

## Homework Equations

F = mg
FN = mg
μ = Fk/FN

## The Attempt at a Solution

a) mg = (90)(9.8) = 882 N = Fg
sin28 = Fgx/882 N
(0.469)(882) = Fgx
414 N = Fgx = magnitude of applied force

b) FN = mg = 882 N
μ = Fk/FN
0.18 = Fk/882
(882)(0.18) = Fk
Fk = 158.76 N

** This is where I am unsure of my approach**

Fapplied = Fgx + Fk = 414 N + 158.76 N = 572 N

That seems like too much to me, but I'm not quite sure. Does it look like I have that equation correct, or am I supposed to subtract the value I found for Fk from Fgx, or something totally different? Any help would be very much appreciated.

> FN = mg = 882 N
> μ = Fk/FN
> 0.18 = Fk/882
> (882)(0.18) = Fk
> Fk = 158.76

You have not drawn FN correctly. Redraw correctly and try again. Hope this helps.

> You have not drawn FN correctly. Redraw correctly and try again. Hope this helps.

oh okay, is it opposing the Fy aspect instead of Fg? I was confused when I was drawing that! Thank you for your response

SammyS
Staff Emeritus
# Boeing - Design Issues...

#### BJC
##### Well-Known Member
HBA Supporter
So the regulator (the FAA) was not effective at doing their job. Now they are blaming their ineptness on Boeing?
BJC

#### Vigilant1
##### Well-Known Member
> Meanwhile... The answer and probably simple solution to avoid the chaos was on pprune since November of 2018. https://www.pprune.org/rumours-news/614857-indonesian-aircraft-missing-off-jakarta-62.html#post10311501 In other words - if MCAS screws up, 1 notch of flaps is all it takes to deactivate.

Or, just use the cutoff switches and trim manually using the wheels (like it says in the checklist). When there is an unexplained flight control anomaly, randomly reconfiguring the plane is a risky course of action. In particular, deploying flaps can be expected to produce a downward pitch force, which is the problem the crew is already fighting. Finally, at the airspeeds shown in some of these incidents, flap deployment would have been bad. There were already switches and procedures to address the problem. They will be enhanced, as they should be.

#### TFF
##### Well-Known Member
Deactivating is what these crews were trying to avoid, to the point of crashing, in a way. It's not that the plane is unflyable with it off; it's that you have to accept you got what you got once it's off. Rebooting the system in hopes that it will get better is against training. Yet one crew did it a couple of times and the other seemed to have not recognized the true problem. The two problems are that the plane has a flaw and the crews did not know how to put up with it. It's a dumbed-down system, and that is the elephant in the room no one wants to admit. Boeing, FAA, pilots, airlines. The public is scared of flying, and now, as they perceive it, the magic box that keeps them safe does not work.

#### plncraze
##### Well-Known Member
HBA Supporter
I quoted BJC and it didn't save! What I had said after the quote in the above post was that Boeing apparently had internal procedures which were not followed, and these procedures were the justification for continued manufacturing of the product. The FAA would catch these later and complain but would never pull their big guns out. It would be interesting to see how far up the FAA food chain these issues travelled and why nothing ever happened. It would be frustrating to see the FAA get like the NTSB, where they can only act on their "wish list" after people were killed.

#### Hephaestus
##### Well-Known Member
> Or, just use the cutoff switches and trim manually using the wheels (like it says in the checklist). When there is an unexplained flight control anomaly, randomly reconfiguring the plane is a risky course of action. In particular, deploying flaps can be expected to produce a downward pitch force, which is the problem the crew is already fighting. Finally, at the airspeeds shown in some of these incidents, flap deployment would have been bad. There were already switches and procedures to address the problem. They will be enhanced, as they should be.

From reading the pprune thread and jumping around: sounds like flaps 1 is a pretty safe spot (what was it, 2? 5? It's not 10). But you've got an AOA warning light on the panel, the trim starts to run away... Selecting flaps 1 disables the MCAS and gives you back your electric trim control buttons, so you're not trying to fight the manual wheel, which sounds nearly impossible. Get things controlled first and fly the plane...
#### Wanttaja ##### Well-Known Member
I quoted BJC and it didn't save! What I had said after the quote in the above post was that Boeing apparently had internal procedures which were not followed, and these procedures were the justification for continued manufacturing of the product. The FAA would catch these later and complain but would never pull their big guns out. It would be interesting to see how far up the FAA food chain these issues travelled and why nothing ever happened. It would be frustrating to see the FAA get like the NTSB, where they can only act on their "wish list" after people were killed.
Basic problem is expertise on the Government side. Where DOES an FAA regulator get the technical know-how to monitor the development of an airliner? The only way they can is to actually work for a company developing aircraft for years, even decades, before leaving and going to work for the FAA. The problem here is, good engineers generally don't want to do that. They'd rather keep building aircraft, rather than monitor the people who are doing the work. Plus, there's often a conflict of interest issue...they usually can't go directly from Boeing/Airbus to a monitoring position at the FAA. Think about it: Where is the FAA going to get people who understand aircraft software to the point where they can detect issues with MCAS?
So the FAA ends up with the "Authorized Representative" system...experienced Boeing engineers are designated as FAA watchdogs (they no longer use the terms DAR or DER, just AR). The monitoring ability of these folks depends on what degree of autonomy they actually have. In a recent article, the Seattle Times interviewed several ARs regarding their attempts to provide oversight. In one man's case, he disagreed with Boeing's desire to eliminate/streamline some testing. Boeing took him off the Max program and replaced him with a more-amenable AR.
Had a talk with a friend over the weekend...he's worked all sides of this: he's a former Boeing manager, he worked for an airline doing acceptance inspections when they acquired Boeing aircraft, and he also worked for the FAA (he's an A&P, too). He traced the problem to one factor: the merger of Boeing with McDonnell-Douglas. Many upper-level management positions were assumed by McD personnel, and these brought in lower-level managers that they were used to working with. His point was that the previous Boeing emphasis on partnership with customers and Government went by the wayside, with McD's more aggressive philosophy intended to maximize profitability. The AR program, which worked under Boeing's old philosophy, was viewed as just another way to cut costs in the new environment.
The result? The 787, plagued with schedule and cost issues stemming from attempts to save costs by subcontracting a large portion of the aircraft to foreign companies (and, eventually, to set up a whole new factory for it). The 737 Max (the 737-900 was developed prior to the merger). The 767 tanker, mired in development delays stemming from the use of (cheaper) inexperienced engineers and quality issues due to attempts to reduce the cost overrun.
Ron Wanttaja

#### pwood66889 ##### Well-Known Member
"The engineers didn't do a good job of analyzing failure modes. Furthermore, the assumptions of pilot reactions weren't reflective of current training practices."
As an ol' computer code cutter, I know that one needs to translate from the "Domain Experts" to the programmers to get it right. I have blundered, but was saved when one Expert said "What about..."
I never would have guessed. Now, if one of the above engineers would have soloed... As to "current training practices," there may be cultural issues. I note that no American (not chauvinist, just saying) carriers have experienced MCAS problems. This whole thread seems devoted to a chain of bad practices leading to bad results.

#### davidb ##### Well-Known Member
Recently a 737 crew (not a Max) had a bird strike on takeoff climb out that sheared off the AoA vane. The flaps were still in the takeoff setting. They got a continuous stick shaker. They left the flaps at the takeoff setting, flew around the pattern and configured for a safe landing. Had it been a Max, the result would have been the same. Their "instinctive" actions were likely from training they had decades prior to this event. When something weird happens, focus on flying the airplane with known pitch and power settings. If it's controllable, don't be in a rush to change things. Since we're still building new aircraft with new ways to get in trouble, it'd be nice to have pilots that have been trained to have more of a test pilot mentality. We can't train for everything, and training is expensive, but recent history has shown that pilots with vast experience and training fare better in the unforeseen realm.

#### Wanttaja ##### Well-Known Member
Latest from AvWeb: https://www.avweb.com/aviation-news/boeing-outsourced-coding-for-9-an-hour/ "Bloomberg is reporting that Boeing outsourced coding of software on the Boeing 737 MAX to engineers who were paid as little as $9 an hour. The company and some of its suppliers laid off their own engineers in favor of subcontracting coding work to offshore companies...." Ron Wanttaja

#### BJC ##### Well-Known Member HBA Supporter

#### bmcj ##### Well-Known Member HBA Supporter
Rebooting the system in hopes that it will get better is against training.
I mentioned this before, but I was told by someone that the MCAS had a max allowable pitch trim deviation (let's call it 2° for the sake of an example), after which it would not trim further. He went on to say that each time the system was rebooted, the MCAS cache was reset to zero and allowed another retrim to occur, thinking that it had not trimmed yet (in other words, it would drive the trim from 2° down to 4° down before registering its max allowable value). Each reboot would allow another 2° of retrim. For the record, I've not seen any verification of this, so I cannot vouch for its accuracy.

#### bmcj ##### Well-Known Member HBA Supporter
He traced the problem to one factor: the merger of Boeing with McDonnell-Douglas. Many upper-level management positions were assumed by McD personnel, and these brought in lower-level managers that they were used to working with. His point was that the previous Boeing emphasis on partnership with customers and Government went by the wayside, with McD's more aggressive philosophy intended to maximize profitability.
That sounds like a scapegoat argument to me. Douglas had their own successful line of airliners before the merger and had worked with the FAA under the rules just like Boeing did. It's easy to throw someone under the bus when they are no longer around to refute it.

#### Wanttaja ##### Well-Known Member
That sounds like a scapegoat argument to me. Douglas had their own successful line of airliners before the merger and had worked with the FAA under the rules just like Boeing did. It's easy to throw someone under the bus when they are no longer around to refute it.
Could be.
But for me, the situation is similar to finding out that the guy living next door just got arrested for some long string of crimes. You think, "Gee, he was such a nice guy," then start thinking about all the little creepy clues that you just shrugged off over the years.
I worked for the company from 1981 to 2017, and saw the effect of the M-D takeover in my particular area. The effect of the change of leadership between the 1994 (one day) and 2000 (40 day) engineers' strikes was especially apparent. One of the things I remember from the 2000 strike was older, retired engineers who just couldn't believe we could be so disloyal to Boeing. They just didn't understand how much the company had changed after the merger....
Ron Wanttaja
Last edited:

#### Richard6 ##### Well-Known Member
Latest from AvWeb: https://www.avweb.com/aviation-news/boeing-outsourced-coding-for-9-an-hour/ "Bloomberg is reporting that Boeing outsourced coding of software on the Boeing 737 MAX to engineers who were paid as little as $9 an hour. The company and some of its suppliers laid off their own engineers in favor of subcontracting coding work to offshore companies...." Ron Wanttaja
Well, unfortunately, this is not a single story about Indian "engineers" working from India for companies in the US. The company I worked for here in Minneapolis started sending our drawing work to India. The quality of the work was crap; a lot of rework was required. The next stage was to bring the Indian people over here to work alongside our engineers, showing them our strategy and design guidelines. Well, it wasn't long after that that designers were being laid off. As far as I know, we didn't use any Indian programmers, but I could be wrong, as I was an electrical system designer at the time. Our products did not involve any danger to humans if something went wrong. Richard

#### Wanttaja ##### Well-Known Member
The editorial in the Seattle Times pretty much echoes what I've been posting: https://www.seattletimes.com/opinion/what-will-it-be-boeing-great-airplanes-that-generate-cash-flow-or-great-cash-flow-period/
"In the '90s, Boeing business culture turned to employee engagement, process improvement and productivity — adopting the "quality" business culture that made Japanese manufacturers formidable competitors.
"In the late '90s, Boeing's business culture shifted again, putting cost-cutting and shareholder interests first...."
He doesn't make the connection, but of course "the late '90s" coincides with the McDonnell-Douglas merger.
Ron Wanttaja

#### BJC ##### Well-Known Member HBA Supporter
The editorial in the Seattle Times pretty much echoes what I've been posting: https://www.seattletimes.com/opinion/what-will-it-be-boeing-great-airplanes-that-generate-cash-flow-or-great-cash-flow-period/
"In the '90s, Boeing business culture turned to employee engagement, process improvement and productivity — adopting the "quality" business culture that made Japanese manufacturers formidable competitors.
"In the late '90s, Boeing's business culture shifted again, putting cost-cutting and shareholder interests first...."
He doesn't make the connection, but of course "the late '90s" coincides with the McDonnell-Douglas merger.
Ron Wanttaja
FWIW, a relative had a nice career with McDonnell. He often speaks about the change in culture at McDonnell that began with Sandy McDonnell's retirement. He has nothing but disdain for Boeing management.
BJC
Last edited:

#### Vigilant1 ##### Well-Known Member
Underlying reason for the change in Boeing corporate culture?
Likely external? As competition increased, pressure to improve efficiency also increased. In the year prior to their merger in '97, McDonnell-Douglas had about 10% of the world commercial airliner market, Boeing had 60%, and Airbus had 30%. Boeing had meager competition and could afford to pile on extra layers/costs. After the merger (and the costs of gobbling up McD-D), they (correctly) saw Airbus as a real competitor and commenced to tighten things up. Maybe some poor decisions were made in the process, but the bleeding has stopped.

#### BBerson ##### Well-Known Member HBA Supporter
That editorial author contradicts himself. First he said the airplanes are mature, with little innovation possible. Then he said they should have invested in several unneeded new designs and executive bonuses instead of buying the stock back. And Boeing doesn't need to attract new investors if they are buying back the stock. It is never in the interest of any stock owners to destroy the safety reputation Boeing had with these crashes.
{}
# Is there air in the volume of crystallized sodium hydroxide?

I have a container of crystallized NaOH (98%) which has a volume of 500 ml. Now, as per Wikipedia, sodium hydroxide's density is 2.13 g/cm³. This should put the weight of a 500 ml container of NaOH at 1065 g. However, when measured, the crystals weighed in at around 560 g (minus the weight of the container). Is it because the NaOH is not a liquid or a single homogeneous crystal? Is the rest of the volume around the crystals air?

• @ChinmayChandak Your comment is misleading - OP has powder, not single ions – Mithoron Oct 6 '15 at 19:13
• @Mithoron understood my mistake. I misunderstood the question... :( – Chinmay Chandak Oct 6 '15 at 19:22

Imagine a container with sand; it may seem "full", but surely you may pour in some water, and it will fit in the cavities. Same thing here, except in the case of $\ce{NaOH}$ I'd rather not check it directly with water, for the response may be quite intense. $\ce{NaOH}$ commonly comes in granules; if you had one big solid brick inside your container, that would be another story.
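A quick sanity check of the numbers in the question supports the packing explanation (an added back-of-the-envelope estimate, using only the figures quoted above):

$$\rho_\text{bulk}=\frac{560\ \text{g}}{500\ \text{cm}^3}=1.12\ \frac{\text{g}}{\text{cm}^3},\qquad \frac{\rho_\text{bulk}}{\rho_\text{NaOH}}=\frac{1.12}{2.13}\approx0.53,$$

so roughly half of the container volume is void space (air) between the granules, a plausible packing fraction for loosely poured pellets and consistent with the sand analogy in the answer.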
{}
# How to find ${\large\int}_0^1\frac{\ln^3(1+x)\ln x}x\mathrm dx$

Please help me to find a closed form for this integral:
$$I=\int_0^1\frac{\ln^3(1+x)\ln x}x\mathrm dx\tag1$$
I suspect it might exist because there are similar integrals having closed forms:
\begin{align}\int_0^1\frac{\ln^3(1-x)\ln x}x\mathrm dx&=12\zeta(5)-\pi^2\zeta(3)\tag2\\
\int_0^1\frac{\ln^2(1+x)\ln x}x\mathrm dx&=\frac{\pi^4}{24}-\frac16\ln^42+\frac{\pi^2}6\ln^22-\frac72\zeta(3)\ln2-4\operatorname{Li}_4\!\left(\tfrac12\right)\tag3\\
\int_0^1\frac{\ln^3(1+x)\ln x}{x^2}\mathrm dx&=\frac34\zeta(3)-\frac{63}4\zeta(3)\ln2+\frac{23\pi^4}{120}\\&-\frac34\ln^42-2\ln^32+\frac{3\pi^2}4\ln^22-18\operatorname{Li}_4\!\left(\tfrac12\right).\tag4\end{align}
Thanks!

• Check out these techniques. Aug 25, 2014 at 4:39
• Aug 25, 2014 at 4:49
• I'm surprised there isn't a single mention in the answers below that the integral is just a special case of the Nielsen generalized polylogarithm, so $$I = 6S_{2,3}(-1)$$ and the evaluation of $S_{n,p}(-1)$ is known for small $n,p$, such as in this post. Jun 1, 2019 at 6:32

Start with integration by parts (IBP) by setting $u=\ln^3(1+x)$ and $dv=\dfrac{\ln x}{x}\ dx$, which yields
\begin{align}
I&=-\frac32\int_0^1\frac{\ln^2(1+x)\ln^2 x}{1+x}\ dx\\
&=-\frac32\int_1^2\frac{\ln^2x\ln^2 (x-1)}{x}\ dx\quad\Rightarrow\quad\color{red}{x\mapsto1+x}\\
&=-\frac32\int_{\large\frac12}^1\left[\frac{\ln^2x\ln^2 (1-x)}{x}-\frac{2\ln^3x\ln(1-x)}{x}+\frac{\ln^4x}{x}\right]\ dx\quad\Rightarrow\quad\color{red}{x\mapsto\frac1x}\\
&=-\frac32\int_{\large\frac12}^1\frac{\ln^2x\ln^2 (1-x)}{x}\ dx+3\int_{\large\frac12}^1\frac{\ln^3x\ln(1-x)}{x}\ dx-\left.\frac3{10}\ln^5x\right|_{\large\frac12}^1\\
&=-\frac32\color{red}{\int_{\large\frac12}^1\frac{\ln^2x\ln^2 (1-x)}{x}\ dx}+3\int_{\large\frac12}^1\frac{\ln^3x\ln(1-x)}{x}\ dx-\frac3{10}\ln^52.
\end{align}
Applying IBP again to evaluate the red integral, by setting $u=\ln^2(1-x)$ and $dv=\dfrac{\ln^2 x}{x}\ dx$, yields
\begin{align}
\color{red}{\int_{\large\frac12}^1\frac{\ln^2x\ln^2 (1-x)}{x}\ dx}&=\frac13\ln^52+\frac23\color{blue}{\int_{\large\frac12}^1\frac{\ln^3x\ln (1-x)}{1-x}\ dx}.
\end{align}
For simplicity, let
$$\color{blue}{\mathbf{H}_{m}^{(k)}(x)}=\sum_{n=1}^\infty \frac{H_{n}^{(k)}x^n}{n^m}\qquad\Rightarrow\qquad\color{blue}{\mathbf{H}(x)}=\sum_{n=1}^\infty H_{n}x^n.$$
Introduce a generating function for the generalized harmonic numbers for $|x|<1$,
$$\color{blue}{\mathbf{H}^{(k)}(x)}=\sum_{n=1}^\infty H_{n}^{(k)}x^n=\frac{\operatorname{Li}_k(x)}{1-x}\qquad\Rightarrow\qquad\color{blue}{\mathbf{H}(x)}=-\frac{\ln(1-x)}{1-x},$$
and the following identity
$$H_{n+1}^{(k)}-H_{n}^{(k)}=\frac1{(n+1)^k}\qquad\Rightarrow\qquad H_{n+1}-H_{n}=\frac1{n+1}.$$
Let us now integrate the indefinite form of the blue integral.
\begin{align}
\color{blue}{\int\frac{\ln^3x\ln (1-x)}{1-x}\ dx}=&-\int\sum_{n=1}^\infty H_nx^n\ln^3x\ dx\\
=&-\sum_{n=1}^\infty H_n\int x^n\ln^3x\ dx\\
=&-\sum_{n=1}^\infty H_n\frac{\partial^3}{\partial n^3}\left[\int x^n\ dx\right]\\
=&-\sum_{n=1}^\infty H_n\frac{\partial^3}{\partial n^3}\left[\frac{x^{n+1}}{n+1}\right]\\
=&-\sum_{n=1}^\infty H_n\left[\frac{x^{n+1}\ln^3x}{n+1}-\frac{3x^{n+1}\ln^2x}{(n+1)^2}+\frac{6x^{n+1}\ln x}{(n+1)^3}-\frac{6x^{n+1}}{(n+1)^4}\right]\\
=&-\ln^3x\sum_{n=1}^\infty \frac{H_{n+1}x^{n+1}}{n+1}+\ln^3x\sum_{n=1}^\infty \frac{x^{n+1}}{(n+1)^2}+3\ln^2x\sum_{n=1}^\infty \frac{H_{n+1}x^{n+1}}{(n+1)^2}\\&-3\ln^2x\sum_{n=1}^\infty \frac{x^{n+1}}{(n+1)^3}-6\ln x\sum_{n=1}^\infty \frac{H_{n+1}x^{n+1}}{(n+1)^3}+6\ln x\sum_{n=1}^\infty \frac{x^{n+1}}{(n+1)^4}\\&+6\sum_{n=1}^\infty \frac{H_{n+1}x^{n+1}}{(n+1)^4}-6\sum_{n=1}^\infty \frac{x^{n+1}}{(n+1)^5}\\
=&\ -\sum_{n=1}^\infty\left[\frac{H_nx^{n}\ln^3x}{n}-\frac{x^{n}\ln^3x}{n^2}-\frac{3H_nx^{n}\ln^2x}{n^2}+\frac{3x^{n}\ln^2x}{n^3}\right.\\&
\left.\ +\frac{6H_nx^{n}\ln x}{n^3}-\frac{6x^{n}\ln x}{n^4}-\frac{6H_nx^{n}}{n^4}+\frac{6x^{n}}{n^5}\right]\\
=&\ -\color{blue}{\mathbf{H}_{1}(x)}\ln^3x+\operatorname{Li}_2(x)\ln^3x+3\color{blue}{\mathbf{H}_{2}(x)}\ln^2x-3\operatorname{Li}_3(x)\ln^2x\\&\ -6\color{blue}{\mathbf{H}_{3}(x)}\ln x+6\operatorname{Li}_4(x)\ln x+6\color{blue}{\mathbf{H}_{4}(x)}-6\operatorname{Li}_5(x).
\end{align}
Therefore
\begin{align}
\color{blue}{\int_{\Large\frac12}^1\frac{\ln^3x\ln (1-x)}{1-x}\ dx}
=&\ 6\color{blue}{\mathbf{H}_{4}(1)}-6\operatorname{Li}_5(1)-\left[\color{blue}{\mathbf{H}_{1}\left(\frac12\right)}\ln^32-\operatorname{Li}_2\left(\frac12\right)\ln^32\right.\\&\left.\ +3\color{blue}{\mathbf{H}_{2}\left(\frac12\right)}\ln^22-3\operatorname{Li}_3\left(\frac12\right)\ln^22+6\color{blue}{\mathbf{H}_{3}\left(\frac12\right)}\ln 2\right.\\&\ -6\operatorname{Li}_4\left(\frac12\right)\ln 2+6\color{blue}{\mathbf{H}_{4}\left(\frac12\right)}-6\operatorname{Li}_5\left(\frac12\right)\bigg]\\
=&\ 12\zeta(5)-\pi^2\zeta(3)+\frac{3}8\zeta(3)\ln^22-\frac{\pi^4}{120}\ln2-\frac{1}{4}\ln^52\\&\ -6\color{blue}{\mathbf{H}_{4}\left(\frac12\right)}+6\operatorname{Li}_4\left(\frac12\right)\ln 2+6\operatorname{Li}_5\left(\frac12\right).
\end{align}
Using the same approach as for the blue integral,
\begin{align}
\int\frac{\ln^3x\ln (1-x)}{x}\ dx&=-\int\sum_{n=1}^\infty \frac{x^{n-1}}{n}\ln^3x\ dx\\
&=-\sum_{n=1}^\infty \frac{1}{n}\int x^{n-1}\ln^3x\ dx\\
&=-\sum_{n=1}^\infty \frac{1}{n}\frac{\partial^3}{\partial n^3}\left[\int x^{n-1}\ dx\right]\\
&=-\sum_{n=1}^\infty \frac{1}{n}\frac{\partial^3}{\partial n^3}\left[\frac{x^{n}}{n}\right]\\
&=-\sum_{n=1}^\infty \frac{1}{n}\left[\frac{x^{n}\ln^3x}{n}-\frac{3x^{n}\ln^2x}{n^2}+\frac{6x^{n}\ln x}{n^3}-\frac{6x^{n}}{n^4}\right]\\
&=\sum_{n=1}^\infty \left[-\frac{x^{n}\ln^3x}{n^2}+\frac{3x^{n}\ln^2x}{n^3}-\frac{6x^{n}\ln x}{n^4}+\frac{6x^{n}}{n^5}\right]\\
&=6\operatorname{Li}_5(x)-6\operatorname{Li}_4(x)\ln x+3\operatorname{Li}_3(x)\ln^2x-\operatorname{Li}_2(x)\ln^3x.
\end{align}
Hence
$$\int_{\large\frac{1}{2}}^1\frac{\ln^3x\ln (1-x)}{x}\ dx=\frac{\pi^2}{6}\ln^32-\frac{21}{8}\zeta(3)\ln^22-6\operatorname{Li}_4\left(\frac{1}{2}\right)\ln2-6\operatorname{Li}_5\left(\frac{1}{2}\right)+6\zeta(5).$$
Combining everything, we have
\begin{align}
I=&\ \frac{\pi^4}{120}\ln2-\frac{33}4\zeta(3)\ln^22+\frac{\pi^2}2\ln^32-\frac{11}{20}\ln^52+6\zeta(5)+\pi^2\zeta(3)\\&\ +6\color{blue}{\mathbf{H}_{4}\left(\frac12\right)}-18\operatorname{Li}_4\left(\frac12\right)\ln2-24\operatorname{Li}_5\left(\frac12\right).
\end{align} Continuing my answer in: A sum containing harmonic numbers $\displaystyle\sum_{n=1}^\infty\frac{H_n}{n^3\,2^n}$, we have \begin{align} \color{blue}{\mathbf{H}_{3}\left(x\right)}=&\frac12\zeta(3)\ln x-\frac18\ln^2x\ln^2(1-x)+\frac12\ln x\left[\color{blue}{\mathbf{H}_{2}\left(x\right)}-\operatorname{Li}_3(x)\right]\\&+\operatorname{Li}_4(x)-\frac{\pi^2}{12}\operatorname{Li}_2(x)-\frac12\operatorname{Li}_3(1-x)\ln x+\frac{\pi^4}{60}.\tag1 \end{align} Dividing $(1)$ by $x$ and then integrating yields \small\begin{align} \color{blue}{\mathbf{H}_{4}\left(x\right)}=&\frac14\zeta(3)\ln^2 x-\frac18\int\frac{\ln^2x\ln^2(1-x)}x\ dx+\frac12\int\frac{\ln x}x\bigg[\color{blue}{\mathbf{H}_{2}\left(x\right)}-\operatorname{Li}_3(x)\bigg]\ dx\\&+\operatorname{Li}_5(x)-\frac{\pi^2}{12}\operatorname{Li}_3(x)-\frac12\int\frac{\operatorname{Li}_3(1-x)\ln x}x\ dx+\frac{\pi^4}{60}\ln x\\ =&\frac14\zeta(3)\ln^2 x+\frac{\pi^4}{60}\ln x+\operatorname{Li}_5(x)-\frac{\pi^2}{12}\operatorname{Li}_3(x)-\frac18\color{red}{\int\frac{\ln^2x\ln^2(1-x)}x\ dx}\\&+\frac12\left[\color{purple}{\sum_{n=1}^\infty\frac{H_{n}}{n^2}\int x^{n-1}\ln x\ dx}-\color{green}{\int\frac{\operatorname{Li}_3(x)\ln x}x\ dx}-\color{orange}{\int\frac{\operatorname{Li}_3(1-x)\ln x}x\ dx}\right].\tag2 \end{align} Evaluating the red integral using the same technique as the previous one yields \begin{align} \color{red}{\int\frac{\ln^2x\ln^2(1-x)}x\ dx}&=\frac13\ln^3x\ln^2(1-x)-\frac23\color{blue}{\int\frac{\ln(1-x)\ln^3 x}{1-x}\ dx}. \end{align} Evaluating the purple integral yields \begin{align} \color{purple}{\sum_{n=1}^\infty\frac{H_{n}}{n^2}\int x^{n-1}\ln x\ dx}&=\sum_{n=1}^\infty\frac{H_{n}}{n^2}\frac{\partial}{\partial n}\left[\int x^{n-1}\ dx\right]\\ &=\sum_{n=1}^\infty\frac{H_{n}}{n^2}\left[\frac{x^n\ln x}{n}-\frac{x^n}{n^2}\right]\\ &=\color{blue}{\mathbf{H}_{3}(x)}\ln x-\color{blue}{\mathbf{H}_{4}(x)}. \end{align} Evaluating the green integral using IBP by setting $u=\ln x$ and $dv=\dfrac{\operatorname{Li}_3(x)}{x}\ dx$ yields \begin{align} \color{green}{\int\frac{\operatorname{Li}_3(x)\ln x}x\ dx}&=\operatorname{Li}_4(x)\ln x-\int\frac{\operatorname{Li}_4(x)}x\ dx\\ &=\operatorname{Li}_4(x)\ln x-\operatorname{Li}_5(x). \end{align} Evaluating the orange integral using IBP by setting $u=\operatorname{Li}_3(1-x)$ and $dv=\dfrac{\ln x}{x}\ dx$ yields \begin{align} \color{orange}{\int\frac{\operatorname{Li}_3(1-x)\ln x}x\ dx}&=\frac12\operatorname{Li}_3(1-x)\ln^2 x+\frac12\color{maroon}{\int\frac{\operatorname{Li}_2(1-x)\ln^2 x}{1-x}\ dx}. \end{align} Applying IBP again to evaluate the maroon integral by setting $u=\operatorname{Li}_2(1-x)$ and $$dv=\dfrac{\ln^2 x}{1-x}\ dx\quad\Rightarrow\quad v=2\operatorname{Li}_3(x)-2\operatorname{Li}_2(x)\ln x-\ln(1-x)\ln^2x,$$ we have \small{\begin{align} \color{maroon}{\int\frac{\operatorname{Li}_2(1-x)\ln^2 x}{1-x}\ dx}=&\left[2\operatorname{Li}_3(x)-2\operatorname{Li}_2(x)\ln x-\ln(1-x)\ln^2x\right]\operatorname{Li}_2(1-x)\\ &-2\int\frac{\operatorname{Li}_3(x)\ln x}{1-x}\ dx+2\int\frac{\operatorname{Li}_2(x)\ln x}{1-x}\ dx+\color{blue}{\int\frac{\ln(1-x)\ln^3 x}{1-x}\ dx}. \end{align}} We use the generating function for the generalized harmonic numbers evaluate the above integrals involving polylogarithm. 
\begin{align} \int\frac{\operatorname{Li}_k(x)\ln x}{1-x}\ dx&=\sum_{n=1}^\infty H_{n}^{(k)}\int x^n\ln x\ dx\\ &=\sum_{n=1}^\infty H_{n}^{(k)}\frac{\partial}{\partial n}\left[\int x^n\ dx\right]\\ &=\sum_{n=1}^\infty H_{n}^{(k)}\left[\frac{x^{n+1}\ln x}{n+1}-\frac{x^{n+1}}{(n+1)^2}\right]\\ &=\sum_{n=1}^\infty\left[\frac{H_{n+1}^{(k)}x^{n+1}\ln x}{n+1}-\frac{x^{n+1}\ln x}{(n+1)^{k+1}}-\frac{H_{n+1}^{(k)}x^{n+1}}{(n+1)^2}+\frac{x^{n+1}}{(n+1)^{k+2}}\right]\\ &=\sum_{n=1}^\infty\left[\frac{H_{n}^{(k)}x^{n}\ln x}{n}-\frac{x^{n}\ln x}{n^{k+1}}-\frac{H_{n}^{(k)}x^{n}}{n^2}+\frac{x^{n}}{n^{k+2}}\right]\\ &=\color{blue}{\mathbf{H}_{1}^{(k)}(x)}\ln x-\operatorname{Li}_{k+1}(x)\ln x-\color{blue}{\mathbf{H}_{2}^{(k)}(x)}+\operatorname{Li}_{k+2}(x). \end{align} Dividing generating function of $\color{blue}{\mathbf{H}^{(k)}(x)}$ by $x$ and then integrating yields \begin{align} \sum_{n=1}^\infty \frac{H_{n}^{(k)}x^n}{n}&=\int\frac{\operatorname{Li}_k(x)}{x(1-x)}\ dx\\ \color{blue}{\mathbf{H}_{1}^{(k)}(x)}&=\int\frac{\operatorname{Li}_k(x)}{x}\ dx+\int\frac{\operatorname{Li}_k(x)}{1-x}\ dx\\ &=\operatorname{Li}_{k+1}(x)+\int\frac{\operatorname{Li}_k(x)}{1-x}\ dx. \end{align} Repeating the process above yields \begin{align} \sum_{n=1}^\infty \frac{H_{n}^{(k)}x^n}{n^2} &=\int\frac{\operatorname{Li}_{k+1}(x)}{x}\ dx+\int\frac{\operatorname{Li}_k(x)}{x(1-x)}\ dx\\ \color{blue}{\mathbf{H}_{2}^{(k)}(x)}&=\operatorname{Li}_{k+2}(x)+\operatorname{Li}_{k+1}(x)+\int\frac{\operatorname{Li}_k(x)}{1-x}\ dx, \end{align} where it is easy to show by using IBP that \begin{align} \int\frac{\operatorname{Li}_2(x)}{1-x}\ dx&=-\int\frac{\operatorname{Li}_2(1-x)}{x}\ dx\\ &=2\operatorname{Li}_3(x)-2\operatorname{Li}_2(x)\ln(x)-\operatorname{Li}_2(1-x)\ln x-\ln (1-x)\ln^2x \end{align} and $$\int\frac{\operatorname{Li}_3(x)}{1-x}\ dx=-\int\frac{\operatorname{Li}_3(1-x)}{x}\ dx=-\frac12\operatorname{Li}_2^2(1-x)-\operatorname{Li}_3(1-x)\ln x.$$ Now, all unknown terms have been obtained. Putting altogether to $(2)$, we have \small{\begin{align} \color{blue}{\mathbf{H}_{4}(x)} =&\ \frac1{10}\zeta(3)\ln^2 x+\frac{\pi^4}{150}\ln x-\frac{\pi^2}{30}\operatorname{Li}_3(x)-\frac1{60}\ln^3x\ln^2(1-x)+\frac65\operatorname{Li}_5(x)\\&-\frac15\left[\operatorname{Li}_3(x)-\operatorname{Li}_2(x)\ln x-\frac12\ln(1-x)\ln^2x\right]\operatorname{Li}_2(1-x)-\frac15\operatorname{Li}_4(x)\\&-\frac35\operatorname{Li}_4(x)\ln x+\frac15\operatorname{Li}_3(x)\ln x+\frac15\operatorname{Li}_3(x)\ln^2x-\frac1{10}\operatorname{Li}_3(1-x)\ln^2 x\\&-\frac1{15}\operatorname{Li}_2(x)\ln^3x-\frac15\color{blue}{\mathbf{H}_{2}^{(3)}(x)}+\frac15\color{blue}{\mathbf{H}_{2}^{(2)}(x)} +\frac15\color{blue}{\mathbf{H}_{1}^{(3)}(x)}\ln x\\&-\frac15\color{blue}{\mathbf{H}_{1}^{(2)}(x)}\ln x+\frac25\color{blue}{\mathbf{H}_{3}(x)}\ln x-\frac15\color{blue}{\mathbf{H}_{2}(x)}\ln^2x+\frac1{15}\color{blue}{\mathbf{H}_{1}(x)}\ln^3x+C.\tag3 \end{align}} The next step is finding the constant of integration. Setting $x=1$ to $(3)$ yields \small{\begin{align} \color{blue}{\mathbf{H}_{4}(1)} &=-\frac{\pi^2}{30}\operatorname{Li}_3(1)+\frac65\operatorname{Li}_5(1)-\frac15\operatorname{Li}_4(1)-\frac15\color{blue}{\mathbf{H}_{2}^{(3)}(1)}+\frac15\color{blue}{\mathbf{H}_{2}^{(2)}(1)}+C\\ 3\zeta(5)+\zeta(2)\zeta(3)&=-\frac{\pi^2}{30}\operatorname{Li}_3(1)+\frac{19}{30}\operatorname{Li}_5(1)+\frac{3}{5}\operatorname{Li}_3(1)+C\\ C&=\frac{\pi^4}{450}+\frac{\pi^2}{5}\zeta(3)-\frac35\zeta(3)+3\zeta(5). 
\end{align}} Thus \small{\begin{align} \color{blue}{\mathbf{H}_{4}(x)} =&\ \frac1{10}\zeta(3)\ln^2 x+\frac{\pi^4}{150}\ln x-\frac{\pi^2}{30}\operatorname{Li}_3(x)-\frac1{60}\ln^3x\ln^2(1-x)+\frac65\operatorname{Li}_5(x)\\&-\frac15\left[\operatorname{Li}_3(x)-\operatorname{Li}_2(x)\ln x-\frac12\ln(1-x)\ln^2x\right]\operatorname{Li}_2(1-x)-\frac15\operatorname{Li}_4(x)\\&-\frac35\operatorname{Li}_4(x)\ln x+\frac15\operatorname{Li}_3(x)\ln x+\frac15\operatorname{Li}_3(x)\ln^2x-\frac1{10}\operatorname{Li}_3(1-x)\ln^2 x\\&-\frac1{15}\operatorname{Li}_2(x)\ln^3x-\frac15\color{blue}{\mathbf{H}_{2}^{(3)}(x)}+\frac15\color{blue}{\mathbf{H}_{2}^{(2)}(x)} +\frac15\color{blue}{\mathbf{H}_{1}^{(3)}(x)}\ln x\\&-\frac15\color{blue}{\mathbf{H}_{1}^{(2)}(x)}\ln x+\frac25\color{blue}{\mathbf{H}_{3}(x)}\ln x-\frac15\color{blue}{\mathbf{H}_{2}(x)}\ln^2x+\frac1{15}\color{blue}{\mathbf{H}_{1}(x)}\ln^3x\\&+\frac{\pi^4}{450}+\frac{\pi^2}{5}\zeta(3)-\frac35\zeta(3)+3\zeta(5)\tag4 \end{align}} and setting $x=\frac12$ to $(4)$ yields \begin{align} \color{blue}{\mathbf{H}_{4}\left(\frac12\right)}=&\ \frac{\ln^52}{40}-\frac{\pi^2}{36}\ln^32+\frac{\zeta(3)}{2}\ln^22-\frac{\pi^2}{12}\zeta(3)\\&+\frac{\zeta(5)}{32}-\frac{\pi^4}{720}\ln2+\operatorname{Li}_4\left(\frac12\right)\ln2+2\operatorname{Li}_5\left(\frac12\right).\tag5 \end{align} Finally, we obtain \begin{align} \int_0^1\frac{\ln^3(1+x)\ln x}x\ dx=&\ \color{blue}{\frac{\pi^2}2\zeta(3)+\frac{99}{16}\zeta(5)-\frac25\ln^52+\frac{\pi^2}3\ln^32-\frac{21}4\zeta(3)\ln^22}\\&\color{blue}{-12\operatorname{Li}_4\left(\frac12\right)\ln2-12\operatorname{Li}_5\left(\frac12\right)}, \end{align} References : $[1]\$ Harmonic number $[2]\$ Polylogarithm • @Tunk-Fey Very impressive! Aug 28, 2014 at 23:23 • Consider going through your question and making the latex narrower is some places. Especially where you are aligning equal signs. Alot of whitespace is wasted there. It will make it clearer to read. Other than that this answer is amazing :) Aug 29, 2014 at 19:58 • @Aditya Considering your age, you're still young and one day when you go to college and major in math (physics, engineering, or science cs), you will learn something like these stuffs. For now, you can start to learn from Achille Hui, sos440, Felix Marin, Random Variable, Sasha, Vladimir Reshetnikov, Pranav Arora, Omran Kouba, Integrals and Series, Rob John, Olivier Oloa, Integrals, Jack D'Aurizio, SuperAbound, Raymond Manzoni, etc. Lots of users here are better than me at integration. And please, don't become like me. Just be yourself. $\ddot\smile$ Aug 30, 2014 at 13:46 • @JackD'Aurizio Indeed! This answer crashed my browser. And to Tunk, I'm very impressed with the overall organization. You're getting very good at these polylog integrals. +1 Aug 30, 2014 at 18:27 • Thanks @FelixMarin. of course you're one of my teachers in polylog integrals. $\ddot\smile$ Sep 2, 2014 at 7:25 Indeed, there is a closed form for this integral: $$I=\frac{\pi^2}3\ln^32-\frac25\ln^52+\frac{\pi^2}2\zeta(3)+\frac{99}{16}\zeta(5)-\frac{21}4\zeta(3)\ln^22\\-12\operatorname{Li}_4\left(\frac12\right)\ln2-12\operatorname{Li}_5\left(\frac12\right).$$ • Well, Cleo's at it again. Well done. Aug 25, 2014 at 4:33 • Notice that $\ln2$ acts here like a regularized value of $\zeta(1)$. Aug 25, 2014 at 6:33 • @BennetGardiner I agree. Cleo at its best! :-) Her answers make me chuckling and I'm pretty sure, that Ramanujan is her favorite. Nevertheless I hope that other users can provide additional helpful information. 
Best regards, Aug 25, 2014 at 7:05
• @Cleo Do you mind giving a slight hint as to how one should proceed with this integral? Thanks. Aug 26, 2014 at 5:52
• @Lucian What do you mean by a regularized value? Aug 27, 2014 at 0:56

This is an updated partial answer that is rather similar to Jack D'Aurizio's approach. (I really hope he doesn't mind.)

Step 1: Expressing the integral as a sum.

It is easy to derive the formula
$$\left(\sum^{\infty}_{n=1}a_nx^n\right)\left(\sum^{\infty}_{n=1}b_nx^n\right)=\sum^\infty_{n=1}\sum^{n}_{k=1}a_kb_{n-k+1}x^{n+1}$$
We apply this formula to derive the Taylor series of $\ln^2(1+x)$.
\begin{align}
\ln^2(1+x)
&=\left(\sum^{\infty}_{n=1}\frac{(-1)^{n-1}}{n}x^n\right)\left(\sum^{\infty}_{n=1}\frac{(-1)^{n-1}}{n}x^n\right)\\
&=\sum^\infty_{n=1}\sum^n_{k=1}\frac{(-1)^{k-1}(-1)^{n-k}}{k(n-k+1)}x^{n+1}\\
&=\sum^\infty_{n=1}\frac{(-1)^{n+1}}{n+1}\sum^n_{k=1}\left(\frac{1}{k}+\frac{1}{n-k+1}\right)x^{n+1}\\
&=\sum^\infty_{n=1}\frac{(-1)^{n+1}2H_n}{n+1}x^{n+1}
\end{align}
Apply this formula again to obtain the Taylor series of $\displaystyle\frac{\ln^2(1+x)}{1+x}$.
\begin{align}
\frac{\ln^2(1+x)}{1+x}
&=\left(\sum^\infty_{n=1}\frac{(-1)^{n+1}2H_n}{n+1}x^{n+1}\right)\left(\sum^{\infty}_{n=1}(-1)^{n-1}x^{n-1}\right)\\
&=\sum^\infty_{n=1}\sum^n_{k=1}\frac{(-1)^{k+1}(-1)^{n-k}2H_k}{k+1}x^{n+1}\\
&=\sum^\infty_{n=1}2(-1)^{n+1}\sum^n_{k=1}\frac{H_k}{k+1}x^{n+1}\\
\end{align}
The inner sum is
\begin{align}
\sum^n_{k=1}\frac{H_k}{k+1}
&=\sum^n_{k=1}\frac{H_{k+1}}{k+1}-\sum^n_{k=1}\frac{1}{(k+1)^2}\\
&=\sum^{n+1}_{k=1}\frac{H_k}{k}-H_{n+1}^{(2)}\\
&=\sum^{n+1}_{k=1}\frac{1}{k}\sum^k_{j=1}\frac{1}{j}-H_{n+1}^{(2)}\\
&=\sum^{n+1}_{j=1}\frac{1}{j}\left(\sum^{n+1}_{k=1}\frac{1}{k}-\sum^{j-1}_{k=1}\frac{1}{k}\right)-H_{n+1}^{(2)}\\
&=H_{n+1}^2-\sum^{n+1}_{j=1}\frac{H_j}{j}\\
&=\frac{H_{n+1}^2-H_{n+1}^{(2)}}{2}
\end{align}
Hence
$$\frac{\ln^2(1+x)}{1+x}=\sum^\infty_{n=1}(-1)^{n+1}\left(H_{n+1}^2-H_{n+1}^{(2)}\right)x^{n+1}$$
Plug this into the integral.
\begin{align}
\int^1_0\frac{\ln^3(1+x)\ln{x}}{x}{\rm d}x
&=-\frac{3}{2}\int^1_0\frac{\ln^2(1+x)\ln^2{x}}{1+x}{\rm d}x\\
&=-\frac{3}{2}\sum^\infty_{n=1}(-1)^{n+1}\left(H_{n+1}^2-H_{n+1}^{(2)}\right)\int^1_0x^{n+1}\ln^2{x} \ {\rm d}x\\
&=-3\sum^\infty_{n=1}\frac{(-1)^{n+1}\left(H_{n+1}^2-H_{n+1}^{(2)}\right)}{(n+2)^3}\\
&=3\sum^\infty_{n=1}\frac{(-1)^{n}\left(H_{n}^{(2)}-H_{n}^2\right)}{(n+1)^3}\\
\end{align}

Step 2: Evaluation of $\displaystyle\sum^\infty_{n=1}\frac{(-1)^nH_n^{(2)}}{(n+1)^3}$

We begin with some simple manipulations of the sum.
\begin{align}
\sum^\infty_{n=1}\frac{(-1)^nH_n^{(2)}}{(n+1)^3}
&=\sum^\infty_{n=1}\frac{(-1)^nH_{n+1}^{(2)}}{(n+1)^3}-\sum^\infty_{n=1}\frac{(-1)^n}{(n+1)^5}\\
&=-\frac{15}{16}\zeta(5)-\underbrace{\sum^\infty_{n=1}\frac{(-1)^nH_n^{(2)}}{n^3}}_{S}
\end{align}
Consider the function $\displaystyle f(z)=\frac{\pi\csc(\pi z)\psi_1(-z)}{z^3}$.
At the positive integers,
\begin{align}
{\rm Res}(f,n)
&=\operatorname*{Res}_{z=n}\left[\frac{(-1)^n}{z^3(z-n)^3}+\frac{(-1)^n(H_n^{(2)}+2\zeta(2))}{z^3(z-n)}\right]\\
&=\frac{6(-1)^n}{n^5}+\frac{(-1)^nH_n^{(2)}}{n^3}+\frac{2(-1)^n\zeta(2)}{n^3}
\end{align}
Summing them up gives
$$\sum^\infty_{n=1} {\rm Res}(f,n)=-\frac{45}{8}\zeta(5)+S-\frac{3}{2}\zeta(2)\zeta(3)$$
At the negative integers,
\begin{align}
{\rm Res}(f,-n)
&=-\frac{(-1)^n\psi_1(n)}{n^3}\\
&=\frac{(-1)^nH_n^{(2)}}{n^3}-\frac{(-1)^n\zeta(2)}{n^3}-\frac{(-1)^n}{n^5}
\end{align}
Summing them up gives
$$\sum^\infty_{n=1} {\rm Res}(f,-n)=S+\frac{3}{4}\zeta(2)\zeta(3)+\frac{15}{16}\zeta(5)$$
At $z=0$,
\begin{align}
{\rm Res}(f,0)
&=[z^2]\left(\frac{1}{z}+\zeta(2)z\right)\left(\frac{1}{z^2}+\zeta(2)+2\zeta(3)z+3\zeta(4)z^2+4\zeta(5)z^3\right)\\
&=4\zeta(5)+2\zeta(2)\zeta(3)
\end{align}
Since the sum of the residues $=0$,
$$\sum^\infty_{n=1}\frac{(-1)^nH_n^{(2)}}{(n+1)^3}=-\frac{41}{32}\zeta(5)+\frac{5}{8}\zeta(2)\zeta(3)$$

Step 3: Evaluation of $\displaystyle\sum^\infty_{n=1}\frac{(-1)^nH_n^{2}}{(n+1)^3}$

Formula $(45)$ in this page states that this sum is equal to
$$4{\rm Li}_5\left(\frac{1}{2}\right)+4{\rm Li}_4\left(\frac{1}{2}\right)\ln{2}+\frac{2}{15}\ln^5{2}-\frac{107}{32}\zeta(5)+\frac{7}{4}\zeta(3)\ln^2{2}-\frac{2}{3}\zeta(2)\ln^3{2}-\frac{3}{8}\zeta(2)\zeta(3)$$
Using a previously derived result is really unsatisfactory for me. Nevertheless, I have not been able to derive this result, as contour integration fails here due to the power of the denominator being odd (which implies that the sum will vanish when I add the residues at the positive and negative integers up). It seems that Tunk-Fey's brilliant approach would be the most viable method to crack this last sum.

Step 4: Obtaining the final result

Combining our previous results, we get
\begin{align}
&\ \ \ \ \ \small{\int^1_0\frac{\ln^3(1+x)\ln{x}}{x}{\rm d}x}\\
&=\small{3\sum^\infty_{n=1}\frac{(-1)^n\left(H_{n}^{(2)}-H_n^2\right)}{(n+1)^3}}\\
&=\small{3\left(\frac{33}{16}\zeta(5)+\zeta(2)\zeta(3)-4{\rm Li}_5\left(\frac{1}{2}\right)-4{\rm Li}_4\left(\frac{1}{2}\right)\ln{2}-\frac{2}{15}\ln^5{2}-\frac{7}{4}\zeta(3)\ln^2{2}+\frac{2}{3}\zeta(2)\ln^3{2}\right)}\\
&=\small{\frac{99}{16}\zeta(5)+\frac{\pi^2}{2}\zeta(3)-12{\rm Li}_5\left(\frac{1}{2}\right)-12{\rm Li}_4\left(\frac{1}{2}\right)\ln{2}-\frac{2}{5}\ln^5{2}-\frac{21}{4}\zeta(3)\ln^2{2}+\frac{\pi^2}{3}\ln^3{2}}
\end{align}

• Perhaps for the last integral you can use the identities $$\sum_{n=1}^\infty H_nx^n=-\frac{\ln(1-x)}{1-x}$$ and $$\int_{1/2}^1\frac{\partial^3}{\partial n^3}x^n\ dx=\frac{\partial^3}{\partial n^3}\left[\frac1{n+1}-\frac1{2^{n+1}(n+1)}\right].$$ Although, it'll be tedious. Aug 25, 2014 at 13:35
• @Tunk-Fey Thank you for your suggestion. In fact, I have previously tried that method; however, the third derivative of the second term turned out to be very ugly. wolframalpha.com/input/… If all else fails though, I would probably revert to using this method. Aug 25, 2014 at 14:10
• Nice approach @SuperAbound. $\displaystyle\sum^\infty_{n=1}\frac{(-1)^nH_n^{2}}{(n+1)^3}$ can be evaluated elegantly. I will post my solution to this integral and your sum at the right time. I have a different, short approach. Apr 30, 2019 at 23:55

Just a partial answer for now.
We have:
$$I = -\frac{3}{2}\int_{0}^{1}\frac{\log^2(1+x)\log^2 x}{1+x}\,dx$$
and since:
$$\log(1+z)=\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}}{n}z^n$$
it follows that:
$$[z^N]\log^2(1+z)=(-1)^{N+1}\sum_{n=1}^{N-1}\frac{1}{n(N-n)}=(-1)^{N+1}\frac{2H_{N-1}}{N},$$
$$\log^2(1+z)=\sum_{n=1}^{+\infty}\frac{2(-1)^{n+1} H_{n-1}}{n}z^{n}.\tag{1}$$
Let us now focus on:
$$J_n = \int_{0}^{1}\frac{x^n\log^2 x}{1+x}\,dx=\frac{\partial^2}{\partial n^2}\int_{0}^{1}\frac{x^n}{1+x}\,dx.$$
We have:
$$J_n = \frac{1}{4}\left(H_{n/2}^{(3)}-H_{(n-1)/2}^{(3)}\right),$$
hence:
$$\color{blue}{I = -\frac{3}{4}\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}H_{n-1}\left(H_{n/2}^{(3)}-H_{(n-1)/2}^{(3)}\right)}{n}},\tag{2}$$
or, by partial summation:
$$\color{purple}{I=-\frac{3}{4}\sum_{n=1}^{+\infty}H_{n/2}^{(3)}(-1)^n\left(\frac{H_n}{n+1}+\frac{H_{n-1}}{n}\right).}\tag{3}$$
Another identity that follows from the Taylor series of $\log^3(1-z)$ is:
$$\color{red}{I=3\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}\left(H_n^2-H_n^{(2)}\right)}{(n+1)^3}.}\tag{4}$$

An alternate form of the answers given by @Cleo and @Tunk-Fey, as a sum of products of polylogarithms at arguments $1$ and $\frac12$ with rational coefficients:
$$I = \frac{99}{16}\operatorname{Li}_5(1)-12\operatorname{Li}_5\left(\frac{1}{2}\right) + 15\operatorname{Li}_1\left( \frac{1}{2} \right)\operatorname{Li}_4(1) - 12\operatorname{Li}_1\left(\frac{1}{2}\right)\operatorname{Li}_4\left(\frac{1}{2}\right) - 15\operatorname{Li}_2\left( \frac{1}{2} \right)\operatorname{Li}_3(1)-\frac{51}{4}\operatorname{Li}_1^2\left( \frac{1}{2} \right)\operatorname{Li}_3(1)+12\operatorname{Li}_2(1)\operatorname{Li}_3\left( \frac{1}{2} \right) - \frac{2}{5}\operatorname{Li}_1^5\left(\frac{1}{2}\right),$$
where $\operatorname{Li}_n$ is the polylogarithm function, and specifically
\begin{align}
& \operatorname{Li}_5(1) \ \ \ = \zeta(5) \\
& \operatorname{Li}_5\left(\textstyle\frac{1}{2}\right) = \textstyle \sum_{k=1}^\infty {2^{-k} \over k^5} \\
& \operatorname{Li}_4(1) \ \ \ = \zeta(4) = \frac{\pi^4}{90} \\
& \operatorname{Li}_4\left(\textstyle\frac{1}{2}\right) = \textstyle \sum_{k=1}^\infty {2^{-k} \over k^4} \\
& \operatorname{Li}_3(1) \ \ \ = \zeta(3) \\
& \operatorname{Li}_3\left(\textstyle\frac{1}{2}\right) = \frac{7}{8} \zeta(3) - \frac{\pi^2}{12} \ln 2 + \frac{1}{6} \ln^3 2 \\
& \operatorname{Li}_2(1) \ \ \ = \zeta(2) = \frac{\pi^2}{6} \\
& \operatorname{Li}_2\left(\textstyle\frac{1}{2}\right) = \frac{\pi^2}{12} - \frac{1}{2} \ln^2 2 \\
& \operatorname{Li}_1\left(\textstyle\frac{1}{2}\right) = \ln2,
\end{align}
where $\zeta$ is the Riemann zeta function.
Let's start by letting $$x=(1-y)/y$$; we have:
\begin{align}
I&=\int_0^1 \frac{\ln^3(1+x)\ln x}{x}\ dx\\
&=\int_{1/2}^1\frac{\ln^4x}{x}\ dx+\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx-\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{x}\ dx-\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{1-x}\ dx
\end{align}
Applying IBP to the second integral, we get
\begin{align}
I&=3\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{x}\ dx-\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{1-x}\ dx-\frac45\ln^52\\
&=4\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{x}\ dx-\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{x(1-x)}\ dx-\frac45\ln^52\\
&=4I_1-I_2-\frac45\ln^52
\end{align}
Evaluating the first integral:
\begin{align}
I_1&=\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{x}\ dx=-\sum_{n=1}^\infty\frac1n\int_{1/2}^1x^{n-1}\ln^3x\ dx\\
&=-\sum_{n=1}^\infty\frac1n\left(\frac{6}{n^42^n}+\frac{6\ln2}{n^32^n}+\frac{3\ln^22}{n^22^n}+\frac{\ln^32}{n2^n}-\frac{6}{n^4}\right)\\
&=-6\operatorname{Li_5}\left(\frac12\right)-6\ln2\operatorname{Li_4}\left(\frac12\right)-3\ln^22\operatorname{Li_3}\left(\frac12\right)-\ln^32\operatorname{Li_2}\left(\frac12\right)+6\zeta(5)
\end{align}
Evaluating the second integral:
\begin{align}
I_2&=\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{x(1-x)}\ dx=-\sum_{n=1}^\infty H_n\int_{1/2}^1 x^{n-1}\ln^3x\ dx\\
&=-\sum_{n=1}^\infty H_n\left(\frac{6}{n^42^n}+\frac{6\ln2}{n^32^n}+\frac{3\ln^22}{n^22^n}+\frac{\ln^32}{n2^n}-\frac{6}{n^4}\right)\\
&=-6\left(\color{blue}{\sum_{n=1}^\infty\frac{H_n}{n^42^n}+\ln2\sum_{n=1}^\infty\frac{H_n}{n^32^n}}\right)-3\ln^22\sum_{n=1}^\infty\frac{H_n}{n^22^n}-\ln^32\sum_{n=1}^\infty\frac{H_n}{n2^n}+6\sum_{n=1}^\infty\frac{H_n}{n^4}
\end{align}
I was able here to prove:
$$\color{blue}{\sum_{n=1}^\infty\frac{H_n}{n^42^n}+\ln2\sum_{n=1}^\infty\frac{H_n}{n^32^n}} =-\frac12\ln^22\sum_{n=1}^{\infty}\frac{H_n}{n^22^n}-\frac16\ln^32\sum_{n=1}^{\infty}\frac{H_n}{n2^n}+\frac12\sum_{n=1}^{\infty}\frac{H_n}{n^4}-\frac{47}{32}\zeta(5) +\frac{1}{15}\ln^52+\frac{1}{3}\ln^32\operatorname{Li_2}\left( \frac12\right)+\ln^22\operatorname{Li_3}\left( \frac12\right)+2\ln2\operatorname{Li_4}\left( \frac12\right) +2\operatorname{Li_5}\left( \frac12\right)$$
from which it follows that:
\begin{align*}
I_2&=3\sum_{n=1}^{\infty}\frac{H_n}{n^4} -12\operatorname{Li_5}\left(\frac12\right)-12\ln2\operatorname{Li_4}\left( \frac12\right)-6\ln^22\operatorname{Li_3}\left( \frac12\right)\\
&\quad-2\ln^32\operatorname{Li_2}\left(\frac12\right)-\frac6{15}\ln^52+\frac{141}{16}\zeta(5)
\end{align*}
Grouping $$I_1$$ and $$I_2$$ we have:
\begin{align}
I&=-3\sum_{n=1}^\infty\frac{H_n}{n^4}-12\operatorname{Li_5}\left(\frac12\right)-12\ln2\operatorname{Li_4}\left( \frac12\right)-6\ln^22\operatorname{Li_3}\left( \frac12\right)\\
&\quad-2\ln^32\operatorname{Li_2}\left( \frac12\right)+\frac{243}{16}\zeta(5)-\frac25\ln^52
\end{align}
Using the following common values:
$$\sum_{n=1}^\infty \frac{H_n}{n^4}=3\zeta(5)-\zeta(2)\zeta(3)$$
$$\operatorname{Li_3}\left( \frac12\right)=\frac78\zeta(3)-\frac12\ln2\zeta(2)+\frac16\ln^32$$
$$\operatorname{Li_2}\left( \frac12\right) =\frac12\zeta(2)-\frac12\ln^22$$
Finally we get:
\begin{align}
I&=-12\operatorname{Li}_5\left(\frac12\right)-12\ln2\operatorname{Li}_4\left(\frac12\right)+\frac{99}{16}\zeta(5)+3\zeta(2)\zeta(3)\\
&\quad-\frac{21}4\ln^22\zeta(3)+2\ln^32\zeta(2)-\frac25\ln^52
\end{align}

UPDATE: The approach below may be found in the preprint, A new perspective on the evaluation of the logarithmic integral $$\int_0^1\frac{\log(x)\log^3(1+x)}{x}\textrm{d}x$$, by C. I. Valean.
A magical way proposed by Cornel Ioan Valean We use the powerful form of the Beta function presented in the book, (Almost) Impossible Integrals, Sums, and Series, $$\displaystyle \int_0^1 \frac{x^{a-1}+x^{b-1}}{(1+x)^{a+b}} \textrm{d}x = \operatorname{B}(a,b)$$, (see pages $$72$$-$$73$$). Here is the magic ... By cleverly differentiating in two different ways to get rid of a nasty integral, we simply get the wonderful result $$4\lim_{\substack{a\to0 \\ b \to 0}}\frac{\partial^{4}}{\partial a^3 \partial b}\operatorname{B}(a,b)-6\lim_{\substack{a\to0 \\ b \to 0}}\frac{\partial^{4}}{\partial a^2 \partial b^2}\operatorname{B}(a,b)$$ $$=8\int_0^1 \frac{\log(x)\log^3(1+x)}{x}\textrm{d}x-4\int_0^1 \frac{\log^3(x)\log(1+x)}{x}\textrm{d}x-4\int_0^1 \frac{\log^4(1+x)}{x}\textrm{d}x.$$ ... and we're wonderfully done! A first note: a similar strategy has been used in this answer https://math.stackexchange.com/q/3531878. A BIG BONUS (the extraction of the series $$\displaystyle \sum_{n=1}^{\infty}(-1)^{n-1}\frac{H_n}{n^4}$$): The extraction of the series $$\displaystyle \sum_{n=1}^{\infty}(-1)^{n-1}\frac{H_n}{n^4}$$ is achieved immediately by observing that using the same Beta function limits, we arrive at $$\lim_{\substack{a\to0 \\ b \to 0}}\frac{\partial^{4}}{\partial a^3 \partial b}\operatorname{B}(a,b)-\lim_{\substack{a\to0 \\ b \to 0}}\frac{\partial^{4}}{\partial a^2 \partial b^2}\operatorname{B}(a,b)$$ $$=\underbrace{\int_0^1 \frac{\log^2(x)\log^2(1+x)}{x}\textrm{d}x}_{\displaystyle 15/4\zeta(5)-4\sum_{n=1}^{\infty} (-1)^{n-1} H_n/n^4}-\int_0^1 \frac{\log^3(x)\log(1+x)}{x}\textrm{d}x,$$ which assures the desired extraction after turning the second integral into the series we want to calculate. • Very useful technique for such integrals. (+1) Feb 3, 2020 at 2:08 Here is a simple approach that does not involve many results. 
First, let $$x=(1-y)/y$$ to have: \begin{align} I&=\int_0^1 \frac{\ln^3(1+x)\ln x}{x}\ dx\\ &=\int_{1/2}^1\frac{\ln^4x}{x}\ dx+\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx-\underbrace{\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{x}\ dx}_{IBP}-\underbrace{\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{1-x}\ dx}_{x\mapsto 1-x}\\ &=\frac15\ln^52+\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx-\left(\frac14\ln^52+\frac14\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx\right)-\underbrace{\int_{0}^{1/2}\frac{\ln^3(1-x)\ln x}{x}\ dx}_{\int_0^1-\int_{1/2}^1}\\ &=-\frac1{20}\ln^52+\frac34\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx-\int_0^1\frac{\ln^3(1-x)\ln x}{x}\ dx+\color{blue}{\int_{1/2}^1\frac{\ln^3(1-x)\ln x}{x}\ dx} \end{align} We have (proved below) $$\color{blue}{\int_{1/2}^1\frac{\ln^3(1-x)\ln x}{x}\ dx}=\frac3{16}\zeta(5)+\frac3{20}\ln^52-\frac14\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx+\frac12\int_0^1\frac{\ln^3(1-x)\ln x}{x}\ dx$$ Then we can write $$I=\frac3{16}\zeta(5)+\frac1{10}\ln^52+\frac12\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx-\frac12\int_0^1\frac{\ln^3(1-x)\ln x}{x}\ dx$$ Lets evaluate the first integral $$\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx=\sum_{n=1}^\infty\int_{1/2}^1 x^{n-1}\ln^4x\ dx$$ $$=\sum_{n=1}^\infty\left(\frac{24}{n^5}-\frac{24}{n^52^n}-\frac{24\ln2}{n^42^n}-\frac{12\ln^22}{n^32^n}-\frac{4\ln^32}{n^22^n}-\frac{\ln^42}{n2^n}\right)$$ $$=24\zeta(5)-24\operatorname{Li}_5\left(\frac12\right)-24\ln2\operatorname{Li}_4\left(\frac12\right)-12\ln^22\operatorname{Li}_3\left(\frac12\right)-4\ln^32\operatorname{Li}_2\left(\frac12\right)-\ln^52$$ $$=\boxed{4\ln^32\zeta(2)-\frac{21}2\ln^22\zeta(3)+24\zeta(5)-\ln^52-24\ln2\operatorname{Li}_4\left(\frac12\right)-24\operatorname{Li}_5\left(\frac12\right)}$$ where we used $$\operatorname{Li}_2\left(\frac12\right)=\frac12\zeta(2)-\frac12\ln^22$$ and $$\operatorname{Li}_3\left(\frac12\right)=\frac78\zeta(3)-\frac12\ln^22\zeta(2)+\frac16\ln^32$$ and the second integral $$\int_0^1\frac{\ln^3(1-x)\ln x}{x}\ dx=\int_0^1\frac{\ln^3x\ln(1-x)}{1-x}\ dx$$ $$=-\sum_{n=1}^\infty H_n\int_0^1x^n\ln^3x\ dx=6\sum_{n=1}^\infty\frac{H_n}{(n+1)^4}$$ $$=6\sum_{n=1}^\infty\frac{H_n}{n^4}-6\zeta(5)=6\left(3\zeta(5)-\zeta(2)\zeta(3)\right)-6\zeta(5)=\boxed{12\zeta(5)-6\zeta(2)\zeta(3)}$$ Combining the boxed results gives \begin{align} I&=-12\operatorname{Li}_5\left(\frac12\right)-12\ln2\operatorname{Li}_4\left(\frac12\right)+\frac{99}{16}\zeta(5)+3\zeta(2)\zeta(3)\\ &\quad-\frac{21}4\ln^22\zeta(3)+2\ln^32\zeta(2)-\frac25\ln^52 \end{align} Proof of the blue integral: $$\color{blue}{A=\int_{1/2}^1\frac{\ln^3(1-x)\ln x}{x}\ dx}$$ We have the algebraic identity $$4a^3b=a^4+b^4-(a-b)^4-4ab^3+6a^2b^2$$ set $$a=\ln(1-x)$$ and $$b=\ln x$$ and divide both sides by $$x$$ then integrate we get $$\color{blue}{4A}=\underbrace{\int_{1/2}^1\frac{\ln^4(1-x)}{x}dx}_{x\mapsto1-x}+\underbrace{\int_{1/2}^1\frac{\ln^4x}{x}dx}_{\frac15\ln^52}-\underbrace{\int_{1/2}^1\frac1x\ln^4\left(\frac{1-x}{x}\right)dx}_{(1-x)/x= y}\\-4\underbrace{\int_{1/2}^1\frac{\ln(1-x)\ln^3x}{x}dx}_{IBP}+\underbrace{6\int_{1/2}^1\frac{\ln^2(1-x)\ln^2x}{x}dx}_{B}$$ $$=\underbrace{\int_0^{1/2}\frac{\ln^4x}{1-x}\ dx}_{\int_0^1-\int_{1/2}^1}+\frac15\ln^52-\underbrace{\int_0^1\frac{\ln^4x}{1+x}\ dx}_{\frac{45}2\zeta(5)}-4\left(\frac14\ln^52+\frac14\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx\right)+B$$ $$=\int_0^1\frac{\ln^4x}{1-x}\ dx-2\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx-\frac45\ln^52-\frac{45}2\zeta(5)+B$$ $$=24\zeta(5)-2\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx-\frac45\ln^52-\frac{45}2\zeta(5)+B\tag{1}$$ Lets simplify the integral $$B$$ \begin{align} B&=6\int_{1/2}^1\frac{\ln^2(1-x)\ln^2x}{x}\ 
dx\overset{IBP}{=}2\ln^52+4\int_{1/2}^1\frac{\ln^3x\ln(1-x)}{1-x}\ dx\\ &\overset{x\mapsto1-x}{=}2\ln^52+4\underbrace{\int_{0}^{1/2}\frac{\ln^3(1-x)\ln x}{x}\ dx}_{\int_0^1-\int_{1/2}^1}\\ &=2\ln^52+4\int_{0}^{1}\frac{\ln^3(1-x)\ln x}{x}\ dx-\color{blue}{4A}\tag{2} \end{align} Plugging (2) in (1) we have that $$\color{blue}{8A}=\frac32\zeta(5)+\frac6{5}\ln^52-2\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx+4\int_0^1\frac{\ln^3(1-x)\ln x}{x}\ dx$$ Or $$\boxed{\color{blue}{A}=\frac3{16}\zeta(5)+\frac3{20}\ln^52-\frac14\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx+\frac12\int_0^1\frac{\ln^3(1-x)\ln x}{x}\ dx}$$ Here is a proof for $$\left(4\right)$$ since i couldn't find one: $$\int _0^1\frac{\ln ^3\left(1+x\right)\ln \left(x\right)}{x^2}\:dx$$ $$\overset{\operatorname{IBP}}=-\ln ^3\left(2\right)+3\int _0^1\frac{\ln ^2\left(1+x\right)}{x\left(1+x\right)}\:dx+3\int _0^1\frac{\ln \left(x\right)\ln ^2\left(1+x\right)}{x\left(1+x\right)}\:dx$$ $$3\underbrace{\int _0^1\frac{\ln ^2\left(1+x\right)}{x\left(1+x\right)}\:dx}_{x=\frac{1}{1+x}}=3\int _0^1\frac{\ln ^2\left(x\right)}{1-x}\:dx-3\int _0^{\frac{1}{2}}\frac{\ln ^2\left(x\right)}{1-x}\:dx$$ $$=6\sum _{k=1}^{\infty }\frac{1}{k^3}-6\sum _{k=1}^{\infty }\frac{1}{k^3\:2^k}-6\ln \left(2\right)\sum _{k=1}^{\infty }\frac{1}{k^2\:2^k}-3\ln ^3\left(2\right)$$ $$=6\zeta \left(3\right)-6\operatorname{Li}_3\left(\frac{1}{2}\right)-6\ln \left(2\right)\operatorname{Li}_2\left(\frac{1}{2}\right)-3\ln ^3\left(2\right)$$ $$=\frac{3}{4}\zeta \left(3\right)-\ln ^3\left(2\right)$$ $$3\underbrace{\int _0^1\frac{\ln \left(x\right)\ln ^2\left(1+x\right)}{x\left(1+x\right)}\:dx}_{x=\frac{1}{1+x}}$$ $$=3\int _0^{\frac{1}{2}}\frac{\ln \left(x\right)\ln ^2\left(1-x\right)}{x}\:dx-3\int _{\frac{1}{2}}^1\frac{\ln ^3\left(x\right)}{1-x}\:dx$$ $$=-6\sum _{k=1}^{\infty }\frac{H_k}{k^3\:2^k}-6\ln \left(2\right)\sum _{k=1}^{\infty }\frac{H_k}{k^2\:2^k}+6\sum _{k=1}^{\infty }\frac{1}{k^4\:2^k}+6\ln \left(2\right)\sum _{k=1}^{\infty }\frac{1}{k^3\:2^k}+18\sum _{k=1}^{\infty }\frac{1}{k^4}$$ $$-18\sum _{k=1}^{\infty }\frac{1}{k^4\:2^k}-18\ln \left(2\right)\sum _{k=1}^{\infty }\frac{1}{k^3\:2^k}-9\ln ^2\left(2\right)\sum _{k=1}^{\infty }\frac{1}{k^2\:2^k}-3\ln ^4\left(2\right)$$ $$=\frac{69}{4}\zeta \left(4\right)-18\operatorname{Li}_4\left(\frac{1}{2}\right)-\frac{63}{4}\ln \left(2\right)\zeta \left(3\right)+\frac{9}{2}\ln ^2\left(2\right)\zeta \left(2\right)-\frac{3}{4}\ln ^4\left(2\right)$$ Where $$\ln ^2\left(1-x\right)=2\sum _{k=1}^{\infty }\left(\frac{H_k}{k}-\frac{1}{k^2}\right)x^k$$ is used on the $$2$$nd line. See here and here for the $$1$$st and $$2$$nd sum. Collecting the results yields: $$\int _0^1\frac{\ln ^3\left(1+x\right)\ln \left(x\right)}{x^2}\:dx=\frac{69}{4}\zeta \left(4\right)+\frac{3}{4}\zeta \left(3\right)-18\operatorname{Li}_4\left(\frac{1}{2}\right)-\frac{63}{4}\ln \left(2\right)\zeta \left(3\right)$$ $$+\frac{9}{2}\ln ^2\left(2\right)\zeta \left(2\right)-2\ln ^3\left(2\right)-\frac{3}{4}\ln ^4\left(2\right)$$ • Your result gives $-0.1487$ while both usual numerical approximation and the use of Nielsen generalized polylogarithm function implemented by Mathematica by PolyLog[n,p,z] give $−0.0576$ Sep 21, 2020 at 9:08 • Wolfram|Alpha throws a numerical approximation of $-0.148706$ which coincides with my closed form. Sep 21, 2020 at 16:14 • Yes! 
But OP asked for ${\large\int}_0^1\frac{\ln^3(1+x)\ln x}x\mathrm dx$ Sep 21, 2020 at 16:32
• Oh sorry, I misunderstood; I just wanted to prove that other case given by the OP, since the main question has already been answered in similar fashion. Sep 21, 2020 at 16:36
• You did really nice work, compliments! +1 Sep 21, 2020 at 17:08

Related problems and techniques: (I), (II). Here is a different form of the solution:
$$I = -3\sum_{n=0}^{\infty} \sum_{k=0}^{n}\frac{(-1)^k{ n\brack k}k(k-1) }{(n+1)^3n!} ,$$
where ${n \brack k}$ denotes the Stirling numbers of the first kind.
• Interesting. But this is hardly a closed form. Aug 28, 2014 at 23:22
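As a numeric cross-check of the accepted closed form (an added verification sketch, not part of any answer above; it assumes Python with the mpmath library installed):

```python
from mpmath import mp, mpf, quad, log, pi, zeta, polylog

mp.dps = 30  # work with 30 significant decimal digits

# Direct numerical quadrature of the integral I
lhs = quad(lambda x: log(1 + x)**3 * log(x) / x, [0, 1])

# Cleo's / Tunk-Fey's closed form
L2 = log(2)
rhs = (pi**2 / 3 * L2**3 - mpf(2) / 5 * L2**5
       + pi**2 / 2 * zeta(3) + mpf(99) / 16 * zeta(5)
       - mpf(21) / 4 * zeta(3) * L2**2
       - 12 * L2 * polylog(4, mpf(1) / 2)
       - 12 * polylog(5, mpf(1) / 2))

print(lhs)        # approximately -0.0576
print(rhs - lhs)  # approximately 0 to working precision
```

Both quantities agree to the working precision, matching the value of about −0.0576 quoted in the comments under the last proof.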
{}
# How do you solve and find the value of cos(arccos(-1/2))?

$-\frac{1}{2}$

By definition, $\cos(\arccos y) = y$ whenever $y$ lies in $[-1, 1]$, so $\cos\left(\arccos\left(-\frac12\right)\right) = -\frac12$ directly. For the angle itself, $\arccos$ returns the unique angle in $[0, \pi]$, so $\arccos\left(-\frac12\right) = \frac{2\pi}{3}$. (The related equation $\cos x = -\frac12$ has two solution arcs on $[0, 2\pi)$, namely $x = \frac{2\pi}{3}$ and $x = \frac{4\pi}{3}$, with $\cos\left(\frac{2\pi}{3}\right) = \cos\left(\frac{4\pi}{3}\right) = -\frac12$; only the first is the principal value that $\arccos$ picks.)
{}
# zbMATH — the first resource for mathematics

Fano manifolds of Calabi-Yau Hodge type. (English) Zbl 1314.14085

Given a smooth projective variety $$X$$, the authors define it to be of Calabi-Yau Hodge type if it satisfies the following three conditions.
1. The middle-dimensional Hodge structure is numerically similar to that of a Calabi-Yau threefold, that is, $$h^{n+2,n-1}(X)=1$$ and $$h^{n+p+1,n-p}(X) = 0$$ for $$p \geq 2$$.
2. For any generator $$\omega \in H^{n+2,n-1}(X)$$, the contraction map $$H^1(X, TX) \rightarrow H^{n-1}(X, \Omega^{n+1}_X)$$ is an isomorphism.
3. The Hodge numbers $$h^{k, 0}(X) = 0$$ for $$1 \leq k \leq 2n$$.

The authors study some basic properties of varieties of this type, give examples among complete intersections and hypersurfaces in homogeneous spaces, and study the derived categories of some of the examples.

Reviewer: Zhiyu Tian (Bonn)

##### MSC:
14J45 Fano varieties
14J40 $$n$$-folds ($$n>4$$)
14J32 Calabi-Yau manifolds (algebro-geometric aspects)
{}
# Energy Storage Control with Aging Limitation

Abstract: Energy Storage Systems (ESS) are often proposed to mitigate the fluctuations of renewable power sources like wind turbines. In such a context, the main objective for the ESS control (its energy management) is to maximize the performance of the mitigation scheme. However, most ESS, and electrochemical batteries in particular, can only perform a limited number of charge/discharge cycles over their lifetime. This limitation is rarely taken into account in the optimization of the energy management, because of the lack of an appropriate formalization of cycling aging. We present a method to explicitly embed a limitation of cycling aging, as a constraint, in the control optimization. We model cycling aging with the usual "exchanged energy" counting method. We demonstrate the effectiveness of our aging-constrained energy management using a publicly available wind power time series. Day-ahead forecast error is minimized while keeping storage cycling just under an acceptable target level.

Document type: Conference paper

Citation: Pierre Haessig, Hamid Ben Ahmed, Bernard Multon. Energy Storage Control with Aging Limitation. PowerTech 2015, Jun 2015, Eindhoven, Netherlands. doi:10.1109/PTC.2015.7232683. hal-01147369.
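The "exchanged energy" counting mentioned in the abstract is a standard throughput-based cycle count: the total absolute energy exchanged with the storage, divided by twice its usable capacity, gives the number of equivalent full cycles. The sketch below illustrates that general idea only; it is not code from the paper, and the function name, units, timestep and example numbers are all assumptions:

```python
import numpy as np

def equivalent_full_cycles(p_sto_kw, dt_h, e_rated_kwh):
    """Equivalent full cycles by the 'exchanged energy' counting method.

    p_sto_kw:    storage power series in kW (sign = charge/discharge direction)
    dt_h:        timestep in hours
    e_rated_kwh: usable storage capacity in kWh

    One full cycle corresponds to 2 * e_rated_kwh of exchanged energy
    (charge the store completely, then discharge it completely).
    """
    throughput_kwh = float(np.sum(np.abs(p_sto_kw)) * dt_h)
    return throughput_kwh / (2.0 * e_rated_kwh)

# Toy example: one day of zero-mean mitigation power at 15-minute resolution
rng = np.random.default_rng(seed=1)
p = rng.normal(0.0, 200.0, size=96)   # kW (assumed synthetic profile)
cycles_today = equivalent_full_cycles(p, 0.25, 1000.0)
print(f"{cycles_today:.2f} equivalent full cycles today")

# An aging budget of, say, 3000 cycles over a 15-year life would cap daily
# cycling near 3000 / (15 * 365), roughly 0.55 cycles/day; a constraint of
# this kind is what the paper embeds in the control optimization.
```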
{}
# Quantitative Aptitude

## Series: AP, GP and other formulas

Arithmetic Progression (AP): If $a, a+d, a+2d, a+3d,\dots$ are in AP, where 'a' is the first term and 'd' is the common difference, then:

• nth term = $a+(n-1)d$
• Sum of the first n terms = $\frac n2[2a+(n-1)d]$
• Sum of the first n terms = $\frac n2(a+l)$, where 'l' is the last term of the series.

Geometric Progression (GP): If $a, ar, ar^2, ar^3,\dots$ are in GP, where 'a' is the first term and 'r' is the common ratio, then:

• nth term = $ar^{n-1}$
• Sum of n terms = $a\frac{1-r^n}{1-r}$, when r<1.
• Sum of n terms = $a\frac{r^n-1}{r-1}$, when r>1.

For the natural number series, $(1+2+3+4+5+\dots+n)=\frac n2(n+1)$

For the natural square number series, $(1^2+2^2+3^2+\dots+n^2)=\frac n6(n+1)(2n+1)$

For the natural cube number series, $(1^3+2^3+3^3+\dots+n^3)=\frac{n^2}4(n+1)^2$

## Simplification

There are three main points to remember in order to solve simplification problems easily:

• BODMAS rule: This rule gives the correct sequence of operations to find the value of a simplification problem. Here B means 'Bracket', O means 'of', D means 'Division', M means 'Multiplication', A means 'Addition' and S means 'Subtraction'. For bracket operations we must follow the order (), {} and [] when removing the brackets; then we apply 'of', Division, Multiplication, Addition and Subtraction in that order.
• Modulus of a real number a is defined as |a| = a, if a ≥ 0, and |a| = -a, if a < 0.
• Virnaculum or Bar: When an expression contains a Virnaculum, we must simplify the expression under the Virnaculum before applying the BODMAS rule. Example: $\overline{a+b}$; here (a+b) is under the Virnaculum, or bar.

Examples:

Example: Simplify $[\{(2+3)+5\}-6]+(\overline{6-3}+2)$.
Answer: In the above example there is a Virnaculum above the term '6-3', so we need to simplify the expression under the Virnaculum first, before applying BODMAS.
= [{(2+3)+5}-6]+(3+2)
= [{5+5}-6]+5
= [10-6]+5
= 4+5
= 9, final answer.

Example: Simplify $[2\times3\div3+\{(2-1)+6\}]$.
Answer: Here we apply the BODMAS rule to find the simplified value. According to the BODMAS rule, brackets must be removed first, in the order (), {} and [].
= [2×3÷3+{1+6}]
= [2×3÷3+7]
= [2×1+7]
= [2+7]
= 9, final answer.

## Profit and Loss

To solve profit and loss problems we need the following formulas:

• Gain = $Selling\;Price\;(SP)-Cost\;Price\;(CP)$
• Loss = $Cost\;Price\;(CP)-Selling\;Price\;(SP)$
• Gain% = $\frac{Gain\times100}{CP}$
• Loss% = $\frac{Loss\times100}{CP}$
• SP = $\frac{(100+Gain\%)\times CP}{100}$
• SP = $\frac{(100-Loss\%)\times CP}{100}$
• CP = $\frac{100\times SP}{100+Gain\%}$
• CP = $\frac{100\times SP}{100-Loss\%}$
• If a person sells two similar items, one at a gain of x% and the other at a loss of x%, then the seller always incurs a loss, given by Loss% = $\left\{\frac{Common\;Loss\;and\;Gain\;\%}{10}\right\}^2=\left\{\frac x{10}\right\}^2$
• If a trader uses a false weight to sell his goods at cost price, then Gain% = $\left[\frac{Error\times100}{(True\;Value)-(Error)}\right]\%$

Example 1: A man sold cloth at a loss of 10%. If the selling price had been increased by Rs. 150, there would have been a gain of 15%. What was the cost price of the article?

Solution: Let the CP of the cloth be Rs. X. At a loss of 10% (applying the unit method of solving):
For a CP of Rs. 100, the SP is Rs. 90
For a CP of Rs. 1, the SP is Rs. $\frac{90}{100}$
For a CP of Rs. X, the SP is Rs. $\frac{90X}{100}$
Now the SP is increased by Rs. 150, so the new SP = $\left(\frac{90X}{100}+150\right)$ Rs.
= $\left(\frac{90X+15000}{100}\right)$ Rs.
Since the gain is 15% at the new SP, using the CP formula for a gain we get
$$X=\frac{100\times SP}{100+Gain\%}=\frac{100}{115}\times\frac{90X+15000}{100}=\frac{90X+15000}{115}$$
115X = 90X+15000
$$X=\frac{15000}{25}=600\;Rs.$$
So the CP of the cloth is Rs. 600. (The whole problem can also be solved using the formula only, instead of the unit method.)

Example 2: After two successive discounts, a shirt with a list price of Rs. 150 is available at Rs. 105. If the second discount is 12.5%, find the first discount.

Solution: Let the first discount be X%. Then,
87.5% of (100 - X)% of 150 = 105
$$\frac{87.5\times(100-X)\times150}{100\times100}=105\Leftrightarrow100-X=\frac{105\times100\times100}{87.5\times150}=80\Leftrightarrow X=20$$
So the first discount is 20%.

## Time and Work

To solve time and work problems we mostly use the unit method, because there is no single formula that covers all questions of this type. The method is best understood with the help of some examples.

Example 1: A can do a work in 5 days of 9 hours each and B can do it in 7 days of 7 hours each. How long will they take to do it, working together 8 hours a day?

Solution: A can complete the work in (5×9) = 45 hours
B can complete the work in (7×7) = 49 hours
A's 1 hour's work = $\frac1{45}$
B's 1 hour's work = $\frac1{49}$
(A+B)'s 1 hour's work = $\left(\frac1{45}+\frac1{49}\right)=\frac{94}{2205}$
So both will finish the work in $\frac{2205}{94}$ hours.
Number of days of 8 hours each = $\frac{2205}{94\times8}$ days = 2.93 ≈ 3 days (approx.).

Example 2: A and B working separately can do a work in 8 and 12 days respectively. If they work on alternate days, A beginning, in how many days will the work be completed?

Solution: (A+B)'s 2 days' work = $\left(\frac18+\frac1{12}\right)=\frac5{24}$
Work done in 4 pairs of days (8 days) = $\frac{4\times5}{24}=\frac{20}{24}$
Remaining work = $1-\frac{20}{24}=\frac16$
On the 9th day it is A's turn, and A does $\frac18$ of the work that day.
The remaining work is then $\frac16-\frac18=\frac1{24}$
On the 10th day it is B's turn; B does $\frac1{12}$ of the work in 1 day,
so $\frac1{24}$ of the work is done by B in $\frac{12}{24}$ day = 0.5 day.
Total time taken = (9+0.5) = 9.5 days.

Example 3: 2 men and 5 boys can do a piece of work in 10 days, while 3 men and 6 boys can do the same work in 8 days. In how many days can 3 men and 1 boy do the work?

Solution: Let 1 man's 1 day's work = x and 1 boy's 1 day's work = y. Then, according to the question, we have the two conditions
$2x+5y=\frac1{10}$
$3x+6y=\frac18$
Solving these two equations we get $x=\frac1{120}$, $y=\frac1{60}$
(3 men + 1 boy)'s 1 day's work = $\left(\frac{3\times1}{120}+\frac1{60}\right)=\left(\frac1{40}+\frac1{60}\right)=\frac1{24}$
So 3 men and 1 boy can finish the work together in 24 days.

## Relative speed, distance and problems on trains

For this type of problem we need to remember some important formulas:

• $Time=\frac{Distance}{Speed}$
• $1\;\frac{km}{hr}=\frac5{18}\;\frac m{sec}$
• $1\;\frac m{sec}=\frac{18}5\;\frac{km}{hr}$

Some other important facts are given below:

1. When a man covers a certain distance at x km/hr and an equal distance at y km/hr, the average speed over the whole journey is $\frac{2xy}{x+y}$.
2. If two trains of lengths a metres and b metres are moving in the same direction at u m/s and v m/s (u > v), the time taken by the faster train to cross the slower train = $\frac{a+b}{u-v}$ sec.
3. If the two trains are moving in opposite directions, with the same parameters, the time = $\frac{a+b}{u+v}$ sec.

Example 1: While covering a distance of 24 km, a man noticed that after walking for 1 hour, the distance covered by him was 5/6 of the remaining distance. What was his speed in metres per second?

Solution: Let the speed be x km/hr. Then the distance covered in 1 hr = x km
Remaining distance = (24 - x) km
So, according to the question,
$x=\frac56(24-x)\Rightarrow6x=120-5x\Rightarrow11x=120\Rightarrow x=10.9\approx11$
Hence speed = $\frac{11\times5}{18}$ m/sec = 3.06 m/sec.

Example 2: Walking at 5/6 of its usual speed, a train is 10 minutes late. Find its usual time to cover the journey.

Solution: New speed = $\frac56$ of the usual speed, so new time taken = $\frac65$ of the usual time.
$\left(\frac65\;of\;usual\;time\right)-\left(usual\;time\right)=10\;min$
So the usual time = 50 min.

Example 3: A train 450 metres long is running at a speed of 60 kmph. In what time will it pass a man who is running at 7 kmph in the direction opposite to that in which the train is going?

Solution: Speed of the train relative to the man = (60+7) kmph = 67 kmph = $\frac{67\times5}{18}$ m/sec = $\frac{335}{18}$ m/sec.
Time taken by the train to cross the man = time taken to cover its own length of 450 metres at $\frac{335}{18}$ m/sec = $\frac{450\times18}{335}$ = 24.18 sec.

Example 4: A boy is standing on a platform 200 metres long. He finds that a train crosses the platform in 20 seconds but crosses him in 10 seconds. Find the length and the speed of the train.

Solution: Let the length of the train be x metres. Then the train covers x metres in 10 seconds and (x+200) metres in 20 seconds.
So $\frac x{10}=\frac{x+200}{20}\Rightarrow2x=x+200\Rightarrow x=200$ metres.
Speed of the train = $\frac{200}{10}\;\frac m{sec}$ = 20 m/sec = $\left(20\times\frac{18}5\right)$ kmph = 72 kmph.

Example 5: Two trains 160 metres and 185 metres in length are running towards each other on parallel lines, one at 45 kmph and the other at 55 kmph. In what time will they be clear of each other from the moment they meet?

Solution: Relative speed of the trains = (45+55) kmph = 100 kmph = $\frac{250}9$ m/sec.
Time taken by the trains to pass each other = time taken to cover (160+185) = 345 metres at $\frac{250}9$ m/sec = $\frac{345\times9}{250}$ seconds = 12.42 seconds.

## Boats and streams

Important facts and formulas:

• In water, the direction along the stream is called downstream and the direction against the stream is called upstream.
• If the speed of the boat in still water is u km/hr and the speed of the stream is v km/hr, then
speed downstream = (u+v) km/hr
speed upstream = (u-v) km/hr
• If the speed downstream is a km/hr and the speed upstream is b km/hr, then
speed in still water = $\frac{a+b}2$ km/hr
rate of stream = $\frac{a-b}2$ km/hr

Example 1: A man can row 22 kmph in still water. It takes him thrice as long to row up as to row down the river. Find the rate of the stream.

Solution: Let the man's rate upstream be x kmph. Then his rate downstream = 3x kmph.
Rate in still water = $\frac{3x+x}2$ = 2x kmph, so 2x = 22, i.e. x = 11.
So rate upstream = 11 kmph and rate downstream = 33 kmph.
Hence the rate of the stream = $\frac{33-11}2$ kmph = 11 kmph.
Example 2: A boat takes 12 hours to travel downstream from a place A to a place B and come back to a place C midway between A and B. If the velocity of the stream is 6 kmph and the speed of the boat in still water is 10 kmph, what is the distance between A and B?

Solution: Given: speed of the boat in still water = 10 kmph, rate of stream = 6 kmph.
So speed downstream = (10+6) = 16 kmph and speed upstream = (10-6) = 4 kmph.
Now let the distance between A and B be x km. Then the distance between B and C = $\frac x2$ km. So,
$$\frac x{16}+\frac{x/2}4=12\Leftrightarrow\frac x{16}+\frac x8=12\Leftrightarrow\frac{3x}{16}=12\Leftrightarrow x=\frac{12\times16}3=64\;km$$
Distance between A and B = 64 km.

## Alligation or Mixture

First of all, what is alligation? "Alligation is the rule with which we can find the ratio in which two or more ingredients at given prices must be mixed to produce a mixture of a desired price."

Important formulas and facts:

1. Rule of alligation: If two ingredients are mixed together, then
$$\frac{Quantity\;of\;cheaper}{Quantity\;of\;dearer}=\frac{CP\;of\;dearer-Mean\;price}{Mean\;price-CP\;of\;cheaper}$$
where CP = cost price. Here C = cost price of a unit of the cheaper ingredient, D = cost price of a unit of the dearer ingredient, and M = mean price. So,
(Cheaper quantity) : (Dearer quantity) = (D-M) : (M-C)
2. Suppose a vessel contains X units of liquid from which y units are taken out and replaced by water. After n such operations, the quantity of pure liquid = $X\left(1-\frac yX\right)^n$ units.

Example 1: How much water must be added to 60 litres of milk, bought at 2 litres for Rs. 30, so as to have a mixture worth Rs. 12 a litre?

Solution: Cost price of 1 litre of milk = Rs. 15 (as the cost price of 2 litres of milk is Rs. 30).
Water is assumed to be priceless in alligation-type problems, so the cost price of 1 litre of water = 0.
By the rule of alligation, the ratio of water to milk = (15-12):(12-0) = 3:12 = 1:4.
So the quantity of water to be added to 60 litres of milk = $\frac{1\times60}4$ litres = 15 litres.

Example 2: A vessel contains 30 litres of alcohol. From this vessel 3 litres of alcohol are taken out and replaced by milk. This process is then repeated 3 more times. How much alcohol is now contained in the vessel?

Solution: Applying the formula, the amount of alcohol after 4 operations = $30\left(1-\frac3{30}\right)^4$ litres = $30\times\frac{9\times9\times9\times9}{10\times10\times10\times10}$ litres = 19.68 litres.

Example 3: A container is filled with liquid, 4 parts of which are water and 6 parts milk. How much of the mixture must be drawn off and replaced with water so that the mixture may be half water and half milk?

Solution: Suppose the container initially contains 10 litres of liquid (4 parts + 6 parts). Let x litres of this liquid be replaced with water.
Quantity of water in the new mixture = $\left(4-\frac{4x}{10}+x\right)$ litres
Quantity of milk in the new mixture = $\left(6-\frac{6x}{10}\right)$ litres
Since the new mixture contains the same amount of milk and water,
$\left(4-\frac{4x}{10}+x\right)=\left(6-\frac{6x}{10}\right)\Rightarrow6x+40=60-6x\Rightarrow12x=20\Rightarrow x=\frac53$
So the part of the mixture replaced = $\frac53\times\frac1{10}=\frac16$.

## Simple and Compound Interest

Some important formulas and facts for simple interest, in which interest is reckoned uniformly:

• S.I. = $\frac{P\times R\times T}{100}$, where S.I. = simple interest, P = principal, R = rate of interest in % per annum, T = time in years.
• From the above formula we get P = $\frac{100\times S.I.}{R\times T}$, R = $\frac{100\times S.I.}{P\times T}$, T = $\frac{100\times S.I.}{P\times R}$

In compound interest (C.I.), the amount after the first unit of time becomes the principal for the second unit, the amount after the second unit becomes the principal for the third unit, and this process goes on. Some formulas related to C.I. are given below:

• When interest is compounded annually, A = $P\left(1+\frac R{100}\right)^n$, where A = amount, P = principal, R = rate in % per annum, n = time in years.
• When interest is compounded half-yearly, A = $P\left(1+\frac{R/2}{100}\right)^{2n}$, where A, P, R and n have their usual meanings.
• When interest is compounded quarterly, A = $P\left(1+\frac{R/4}{100}\right)^{4n}$, where A, P, R and n have their usual meanings.
• The present worth of Rs. X due n years hence is given by Present Worth = $\frac X{\left(1+\frac R{100}\right)^n}$, where R = rate in % per annum, n = time in years.

Example 1: A certain sum of money amounts to Rs. 1000 in 4 years and Rs. 1200 in 5 years. Find the sum and the rate of interest.

Solution: S.I. for 1 year = Rs. (1200-1000) = Rs. 200
S.I. for 4 years = Rs. (200×4) = Rs. 800
So principal = Rs. (1000-800) = Rs. 200
Now P = 200, T = 4, S.I. = 800:
R = $\frac{100\times800}{200\times4}$% = 100%
So the rate of interest is 100%.

Example 2: What annual installment will discharge a debt of Rs. 1200 due in 3 years at 15% simple interest?

Solution: Let each installment be Rs. X. Then,
$\left(X+\frac{X\times15\times1}{100}\right)+\left(X+\frac{X\times15\times2}{100}\right)+X=1200$
$\frac{23X}{20}+\frac{13X}{10}+X=1200\Leftrightarrow(23X+26X+20X)=1200\times20\Leftrightarrow X=347.83$
So each installment is Rs. 347.83.

Example 3: If the simple interest on a sum of money at 6% per annum for 2 years is Rs. 900, find the compound interest on the same sum for the same period at 5% per annum.

Solution: Principal, P = Rs. $\frac{100\times900}{2\times6}$ = Rs. 7500
Amount, A = Rs. $\left[7500\times\left(1+\frac5{100}\right)^2\right]$ = Rs. $\left(7500\times\frac{21\times21}{20\times20}\right)$ = Rs. 8268.75
C.I. = A - P = Rs. (8268.75 - 7500) = Rs. 768.75.

Example 4: In what time will Rs. 4000 become Rs. 4300 at 15% per annum compounded half-yearly?

Solution: Let the time be n years. Applying the formula we get
$\left[4000\times\left(1+\frac{15/2}{100}\right)^{2n}\right]=4300\;\Leftrightarrow\;\left(\frac{43}{40}\right)^{2n}=\frac{43}{40}\;\Leftrightarrow\;2n=1\;\Leftrightarrow\;n=\frac12\;year$
The required time is 6 months.

Example 5: What annual payment will discharge a debt of Rs. 7600 due in 2 years at 20% per annum compound interest?

Solution: Let each installment be Rs. X. Then,
(present worth of Rs. X due 1 year hence) + (present worth of Rs. X due 2 years hence) = 7600
$\Leftrightarrow\frac X{\left(1+{\frac{20}{100}}\right)^1}+\frac X{\left(1+{\frac{20}{100}}\right)^2}=7600$
$\Leftrightarrow\frac{5X}6+\frac{25X}{36}=7600$
$\Leftrightarrow30X+25X=7600\times36$
$\Leftrightarrow X=\frac{7600\times36}{55}=4975\;(approx.)$
So the amount of each installment = Rs. 4975.

## Permutation and Combination

Some definitions:

Factorial notation: Let n be a positive integer. Then factorial n, denoted n!, is defined by n! = n(n-1)(n-2)…3.2.1

Permutation: The different arrangements of a given number of things, taking some or all at a time, are called permutations. The number of permutations is ${}^nP_r=\frac{n!}{(n-r)!}$

Combination: Each of the different groups or selections which can be formed by taking some or all of a number of objects is called a combination.
The number of combinations is ${}^nC_r=\frac{n!}{r!(n-r)!}$

Example 1: How many words can be formed by using all the letters of the word "DELHI"?

Solution: There are 5 different letters in "DELHI".
Required number of words = 5! = (5×4×3×2×1) = 120.

Example 2: How many words can be formed from the letters of the word "ASSOM" so that the vowels never come together?

Solution: There are 5 letters in the word "ASSOM", with 'S' occurring twice.
Writing the vowels together we get "(AO)SSM". Treating AO as one letter, these 4 letters (with 'S' occurring twice) can be arranged in $\frac{4!}{2!}$ = 12 ways, and the two vowels A and O can be arranged among themselves in 2! = 2 ways.
So the number of words having the vowels together = 12×2 = 24.
The total number of words formed by using all the letters of the given word = $\frac{5!}{2!}$ = 60.
Number of words with the vowels never together = (60 - 24) = 36.

Example 3: In how many ways can a football eleven be chosen out of a batch of 15 players?

Solution: Required number of ways = ${}^{15}C_{11}=\frac{15\times14\times13\times12\times11!}{11!\times4\times3\times2\times1}$ = 15×13×7 = 1365.

Example 4: A box contains 3 white cubes, 4 blue cubes and 2 red cubes. In how many ways can 3 cubes be drawn so that at least 1 white cube is included in the draw?

Solution: For getting at least 1 white cube we may have (3 white) or (1 white and 2 non-white) or (2 white and 1 non-white); there are 6 non-white cubes in all.
So the required number of ways = ${}^3C_3+({}^3C_1\times{}^6C_2)+({}^3C_2\times{}^6C_1)$ = 1 + (3×15) + (3×6) = 1+45+18 = 64.

## Probability

"Probability is a measure of the likelihood of an event's occurrence. Uncertain events can be predicted with the idea of probability."

The idea of probability can be understood with the help of tossing a coin. When a coin is tossed, there are two possible outcomes:

• Heads (H)
• Tails (T)

So we say that the probability of the coin landing H is 1/2, and the probability of the coin landing T is also 1/2.
If 'S' is the sample space and 'E' is the event, the probability of occurrence of the event is
P(E) = $\frac{n(E)}{n(S)}$

Some results on probability:

1. P(S) = 1
2. P(∅) = 0
3. $P(\overline A)=1-P(A)$, where $\overline A$ denotes (not A).

Example 1: Three unbiased coins are tossed. What is the probability of getting at least one tail?

Solution: Here S = {HHH, HHT, HTH, THH, TTT, THT, TTH, HTT}
Let E = event of getting at least one tail (T).
So E = {HHT, HTH, THH, TTT, THT, TTH, HTT}
So, $P(E)=\frac{n(E)}{n(S)}=\frac78$

Example 2: A speaks the truth in 80% of cases and B in 90% of cases. In what percentage of cases are they likely to contradict each other when narrating the same incident?

Solution: Let A = event that A speaks the truth and B = event that B speaks the truth. Then,
$P(A)=\frac{80}{100}=\frac45$ and $P(B)=\frac{90}{100}=\frac9{10}$
And $P(\overline A)=\left(1-\frac45\right)=\frac15$ and $P(\overline B)=\left(1-\frac9{10}\right)=\frac1{10}$
Now, P(A and B contradict each other) = P[(A speaks the truth and B tells a lie) or (A tells a lie and B speaks the truth)]
= P(A and $\overline B$) + P($\overline A$ and B)
= P(A)·P($\overline B$) + P($\overline A$)·P(B)
$=\frac{(4\times1)}{(5\times10)}+\frac{(1\times9)}{(5\times10)}=\frac4{50}+\frac9{50}=\frac{13}{50}=\left(\frac{13}{50}\times100\right)\%=26\%$
So A and B contradict each other in 26% of cases.

Example 3: In a single throw of a die, what is the probability of getting a number greater than 3?
Solution: When a die is thrown, we have S = {1, 2, 3, 4, 5, 6}
Let E be the event of getting a number greater than 3, so E = {4, 5, 6}.
So, P(E) = $\frac{n(E)}{n(S)}=\frac36=\frac12$
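As a quick cross-check of several of the worked answers above, here is a short Python sketch (an addition to these notes; the function choices are mine):

```python
from math import comb

# AP and GP sums from the formula section
n, a, d = 10, 2, 3
assert sum(a + k*d for k in range(n)) == n*(2*a + (n-1)*d)//2

a, r = 2, 3
assert sum(a*r**k for k in range(n)) == a*(r**n - 1)//(r - 1)

# Compound-interest Example 5: installment for Rs. 7600 due in 2 years at 20%
X = 7600*36/55
assert abs(X/1.2 + X/1.44 - 7600) < 1e-9   # the two discounted installments repay the debt

# Permutation Example 2: "ASSOM" with vowels never together
total = 120//2                  # 5!/2! distinct arrangements
together = (24//2)*2            # (4!/2!) x 2! with AO treated as a block
assert total - together == 36

# Combination Example 3: choosing 11 players from 15
assert comb(15, 11) == 1365
```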
{}
# I'm not sure how to transform this into two ODEs

#### josh146

Wave equation with inhomogeneous boundary conditions. (Sorry about the thread title; I've tried changing it but it won't work.)

1. Homework Statement

Solve the wave equation (1) on the region 0<x<2, subject to the boundary conditions (2) and the initial condition (3), by separation of variables.

2. Homework Equations

(1) $\frac{\partial^2 u}{\partial t^2}=c^2\frac{\partial^2 u}{\partial x^2}$

(2) $\frac{\partial u}{\partial x}(0,t)=1$ ; $\frac{\partial u }{\partial x}(2,t)=1$

(3) $\frac{\partial u}{\partial t}(x,0)=0$

3. The Attempt at a Solution

I've defined $\theta(x,t)=u(x,t)-u_{st}(x) = u(x,t)-x-h(t)$, where $u_{st}$ is the steady-state solution. I've used this to create a new PDE with homogeneous boundary conditions. The PDE is:

$\frac{\partial^2 \theta}{\partial t^2} + h''(t)=c^2 \frac{\partial^2 \theta}{\partial x^2}$

By substituting $\theta=f(t)g(x)$ I get:

$f''(t)g(x)+h''(t)=c^2 f(t) g''(x)$

I'm not sure how to transform this into two ODEs. Can someone help?

Last edited:
{}
Addition of initial segments. I. (English) Zbl 0658.03030

The author investigates an extension of addition and subtraction to cuts (complete parts of the class of all natural numbers N) in the Alternative Set Theory. This extension is useful in much the same way as the extension of arithmetical operations to ordinal and cardinal numbers in classical set theory. The main theorem of the paper asserts that every real cut $$\sigma$$ is of the form $$a+\rho$$ or $$a-\rho$$, where $$\rho$$ is a real cut closed under addition and $$a\in N$$. The cut $$\rho$$ is defined from $$\sigma$$ and is uniquely determined; $$a$$ is determined only up to a difference lying in $$\rho$$. (Remember that the system of real classes contains all definable classes, e.g. the class FN of finite natural numbers; real classes may be used in definitions as parameters, too, and all sets are real classes. On the other hand, some classes obtained via the axiom of choice, such as a well-ordering of the universal class V or a one-one mapping of a set onto V, cannot be real.) An example of a (non-real) cut not having the given property is constructed. Thanks to the main theorem, addition and subtraction of real cuts are quite easy. The author investigates the extended operations in more detail and describes some examples of cuts and formulas which fail for cuts, in spite of the fact that they hold for natural numbers. (From another branch, recall that the semiregular cuts introduced by Paris and Kirby are closed under addition.)

Reviewer: K.Čuda

##### MSC:
03E70 Nonclassical and second-order set theories
03H15 Nonstandard models of arithmetic
{}
The statistical factors affecting the freezing of the road pavement

Title & Authors
The statistical factors affecting the freezing of the road pavement
Kim, Hyun-Ji; Lee, Jea-Young; Kim, Byung-Doo; Cho, Gyu-Tae;

Abstract
Due to the character of the climate of Korea, road pavement is influenced by freezing in the winter season and thawing in the thawing season. In the last few years, several articles have been devoted to studying how to minimize the damage of freeze-thaw action. The purpose of this paper is to assess the factors that influence road pavement thickness. We conduct a decision tree analysis on field data for road pavement. The target variable is 'Frost penetration'; this value was calculated from the temperature data. The input variables are 'Region', 'Type of road pavement', 'Anti-frost layer', 'Month' and 'Air temperature'. The region was divided into 9 regions by freezing index: 350~450 °C·day, 450~550 °C·day, 550~650 °C·day. The type of road pavement has three sections: area of cutting, boundary area of cutting and banking, and lower area of banking. As the result, the variables that influence 'Frost penetration' are month, followed by anti-frost layer, air temperature and region.

Language: Korean
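The abstract names a decision-tree analysis with 'Frost penetration' as the target and five input variables. A minimal sketch of that kind of analysis is below; the column names, synthetic values, and the scikit-learn choice are my assumptions, not the authors' code or data:

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical field records shaped like the variables named in the abstract
df = pd.DataFrame({
    "Region":         ["A", "B", "C", "A", "B", "C"],  # freezing-index band
    "PavementType":   ["cut", "boundary", "bank", "cut", "bank", "boundary"],
    "AntiFrostLayer": [1, 0, 1, 0, 1, 0],
    "Month":          [12, 1, 2, 12, 1, 2],
    "AirTemp":        [-3.2, -7.5, -1.1, -5.0, -9.3, -2.4],
    "FrostPenetration": [18.0, 42.0, 12.0, 30.0, 55.0, 15.0],  # cm
})

X = pd.get_dummies(df.drop(columns="FrostPenetration"))  # one-hot encode categoricals
y = df["FrostPenetration"]

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# Feature importances yield the kind of variable ranking reported in the abstract
print(sorted(zip(tree.feature_importances_, X.columns), reverse=True))
```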
{}
# Does the maximum value of the following integral exist?

1. Oct 17, 2016

### Tspirit

Suppose $\intop_{-\infty}^{+\infty}(f(x))^{2}dx=1$, and $a=\intop_{-\infty}^{+\infty}(\frac{df(x)}{dx})^{2}dx$. Does a maximum value of $a$ exist? If it exists, what is the corresponding $f(x)$?

2. Oct 18, 2016

### andrewkirk

No, it doesn't exist. Consider the function $f:\mathbb R\to\mathbb R$ that is equal to $\frac{\sin nx}{\sqrt\pi}$ on the interval $[0,2\pi]$ and zero outside it. The first integral is 1 regardless of the value of $n$, but the second integral increases without limit as $n$ increases.

3. Oct 18, 2016

### Tspirit

Yes, you are right. Thanks.

4. Oct 18, 2016

### zinq

Nice example, andrewkirk! (Now I wonder what happens if the original problem is changed only so that the second integral uses the 4th power of the derivative instead of its square.)

5. Oct 20, 2016

### Tspirit

I think it is like this: $\intop_{-\infty}^{+\infty}(f(x))^{2}dx=1$, and $a=\intop_{-\infty}^{+\infty}(\frac{df(x)}{dx})^{4}dx$; does a maximum value of a exist? If we use the example andrewkirk gave, $\frac{\sin(nx)}{\sqrt{\pi}}$, we have
$$a=\intop_{-\infty}^{+\infty}(\frac{df(x)}{dx})^{4}dx=\intop_{-\infty}^{+\infty}(\frac{n\cos(nx)}{\sqrt{\pi}})^{4}dx,$$
$$(n\cos(nx))^{4}=n^{4}\left(\frac{1+\cos2nx}{2}\right)^{2}=n^{4}\left[\frac{1}{4}+\frac{1}{2}\cos2nx+\frac{1}{8}(1+\cos4nx)\right],$$
so as $n$ grows without bound, $a$ also diverges.
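A numerical check of andrewkirk's counterexample (a sketch of mine, not from the thread): with $f(x)=\frac{\sin nx}{\sqrt\pi}$ on $[0,2\pi]$, the normalization integral stays at 1 while $\int(f')^2$ grows like $n^2$.

```python
import numpy as np

def integrals(n, samples=200_001):
    # f(x) = sin(nx)/sqrt(pi) on [0, 2*pi], zero elsewhere
    x = np.linspace(0.0, 2*np.pi, samples)
    f = np.sin(n*x)/np.sqrt(np.pi)
    df = n*np.cos(n*x)/np.sqrt(np.pi)   # derivative inside the interval
    return np.trapz(f**2, x), np.trapz(df**2, x)

for n in (1, 10, 100):
    norm, a = integrals(n)
    print(f"n={n:4d}  int f^2 ~ {norm:.4f}   int (f')^2 ~ {a:.1f}")  # a is about n**2
```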
{}
## Electrostatics: non/conducting sphere/sheet in E field

Hello, I was reading Griffiths, Intro to Electrodynamics, and I came across some difficulties in understanding some cases which are not covered well in the book. I would really appreciate your insight. Here are my questions:

1) If we bring a charge near a grounded infinite metal sheet, it will induce charge (we can find the potential using the method of images). Now, can we say that the E field on the other side of the sheet is zero? Basically, does this sheet shield the E field on one side produced by the charge on the other side? It is obvious the divergence is non-zero only where charge exists, but the image solution is not for that region.

2) Under a uniform external E field, if we place a non-conducting sphere with total charge Q, then I would expect it to move along the direction of the field. Is that right?

3) What happens if this is a metal sphere with total charge Q? Will it also move? I know that it will redistribute its charge such that it will cancel the E field inside, but does that affect the overall motion?

4) How can the E field inside a metal sphere still be zero if the external E field is so strong that there is not enough charge to balance the field?

5) If the sphere is grounded, will it still make the E field inside zero? And can it hold out against any amount of external E field? What happens if the field is very strong?

:) Well, this was neither a homework nor a coursework question, just a curiosity one.

Quote by babyeintein (questions 1-5 above)

1. If there are no charges on the other side of the conducting plane and the plane is infinite, the electrostatic potential on the other side will be zero. Proof: Zero is a solution to Laplace's Eqn. and this solution matches all the boundary conditions; therefore, by the power vested in us by the Uniqueness Theorem, it is the solution.
What the conducting plate essentially does is to bring electrostatic "infinity" (where the potential is zero) near the point charge.

2. Correct.

3. It will not move in a uniform E field. There will be an induced dipole moment on the sphere, and dipoles experience no net force in uniform E-fields.

4. There are gazillions of free charges that will always move around to cancel the E field inside a conductor. Think of it this way: if you create an electric field inside a conductor, the free charge carriers will experience an electric force and start moving ... and keep on moving until they have no more reason to move, i.e. until the electric field inside the conductor is zero. If the electric field is extremely strong, you will start ripping electrons off the conductor's surface. This is known as field emission.

5. Grounding means that you provide a conducting path to an infinite reservoir of electrons (the Earth) that sits at zero potential. So grounding is a way to specify that the potential of a grounded conductor is zero no matter what else is happening around it.

Thank you for the explanations. For #1, I thought of it in terms of the potential difference: at infinity the potential is zero, and on the surface of the sheet it is also zero. So $$\int_{\infty}^{Sheet} \vec E \cdot d\vec l =V(sheet)-V(\infty)=0.$$ Since the solution has to be unique, regardless of the path chosen, this integral holds and therefore the E field must be zero. I am not sure if it is reasonable to approach it like this, though. Your way, approaching from Laplace, is a much more solid one. Since in Laplace's equation extremes only occur at the boundaries, which are zero in this case, the potential has to be zero too, and so does the E field. It makes more sense now.

Regarding #3, I am a little bit confused. The metal sphere is connected to nowhere, so the total charge has to be conserved, and it is +Q. I would understand if the sphere were neutral: then the E field creates a dipole like -q <--> q, which remains still in the E field (except that the dipole rotates). But you have an extra +Q charge on the sphere. Regardless of its distribution, I would expect the sphere to be positively charged overall, which should cause a motion along the E field while it cancels the E field inside. Is there an easy way to prove the charge distribution of a metal sphere with total charge +Q under a uniform E field?

Quote by babyeintein (the #3 question above)
You are correct for 3. If there is net charge on the conducting sphere, there will be a net electric force. I must have had a mental lapse; sorry about that.

I am not sure I understand what you mean by "prove the charge distribution." If you mean "find what it is", yes, there is a way. First find the potential outside the sphere as if the sphere has no charge. This is something that is done in many (if not all) textbooks as an example. Note that this potential is zero on the sphere (it can't be zero at infinity because the uniform electric field extends all the way out there, an artifact of the problem). Add a new term to your potential (what could it conceivably be?) to take into account that there is charge Q on the sphere. Derive the surface charge density from this potential. Prove that the new potential is a solution to Laplace's equation and satisfies all the boundary conditions. Note: part of this step involves integrating the surface charge density over the sphere; that is why you need to find the charge density first. Also note that putting charge on the sphere is equivalent to asking "Suppose I connect the sphere to a battery that raises the potential of the sphere to V0. What is the charge Q that is added to the sphere?"
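For reference, the standard textbook result that these steps lead to (added here as a worked sketch; it is not part of the original thread): for a conducting sphere of radius $R$ carrying net charge $Q$ in a uniform field $E_0\hat{z}$, the outside potential and surface charge density are

$$V(r,\theta)=-E_0\left(r-\frac{R^3}{r^2}\right)\cos\theta+\frac{Q}{4\pi\varepsilon_0 r},\qquad r\geq R,$$

$$\sigma(\theta)=\varepsilon_0\left(-\frac{\partial V}{\partial r}\right)\bigg|_{r=R}=3\varepsilon_0 E_0\cos\theta+\frac{Q}{4\pi R^2}.$$

The cosine term integrates to zero over the sphere, so the total charge is indeed $Q$; the uniform term $\frac{Q}{4\pi R^2}$ is what produces the net force $QE_0$ along the field, while the cosine term reproduces the induced-dipole distribution of the neutral case.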
{}
# Is it an integer, a string, or a decimal?

Your challenge is to determine whether the given input is an integer, a string, or a decimal.

# Rules

• A string is any input that is not an integer or a float
• An integer must contain only numeric characters and must not start with a zero
• A decimal is any input that contains the period (.) and the period is surrounded by numeric characters. Note: .01 is not considered a valid decimal.
• The program should output a raw string, either "string", "integer", or "decimal".
• You may assume only printable ASCII characters are used

Cases:

```
asdf -> string
asdf3.4 -> string
2 -> integer
2.0 -> decimal
02 -> string
40. -> string
. -> string
.01 -> string
0.0 -> decimal
.9.9.9 -> string
[empty space] -> string
```

EDIT: Fixed the typo. I meant .01 without the leading zero, not with. If that made it unclear, it's fixed now!

This is code-golf, so the shortest answer wins.

• Why is 02 not an integer? These just feel like arbitrary restrictions in order to increase challenge difficulty. – Addison Crump Dec 26 '15 at 0:57
• I think 02 isn't considered an integer because most languages trim leading zeros when the type is an integer but keep leading zeros when it is stored as a string. Although, I'm with @isaacg that if 0.0 is considered a decimal, then 0.01 should be too. .01 not counting makes sense, I guess... – hargasinski Dec 26 '15 at 1:14
• @Zequ .01 not counting makes sense, I guess... - why? It's valid in almost every language. – mınxomaτ Dec 26 '15 at 1:30
• Welcome to Programming Puzzles & Code Golf! There's no need to unnecessarily ping everyone who's commented on your question; your edit automatically puts your question into the reopen queue, where it will be reopened if necessary. Furthermore, many of your challenges seem to have been closed; you might want to try running them through our Sandbox first. Thanks! – Doorknob Dec 26 '15 at 2:41
• @CrazyPython I think the idea you're getting at with "valid integer" and "valid decimal" is the idea of a canonical representation. As I understand your rules, there's exactly one way to write each integer and each decimal. If that's the intent, adding that to the challenge will clarify why the rules are the way they are. – isaacg Dec 26 '15 at 2:48

# Pyth, 33 bytes (39 without packed string)

@_c."at%¸Ã9hàãáÊ"7.x/MsB+vz0z0

Some bytes are stripped due to Markdown. Official code and test suite.

Without packed string:

@_c"integerdecimalstring"7.x/MsB+vz0z0

It passes all of the above test cases. Basically, to check if a string is an integer or decimal, it checks whether the string can be evaluated as a Python literal (v), and if so, whether you can add 0 to it, convert it back to its string representation, and get the input string. If so, it's an integer or a decimal. If you can also cast it to an int and still get the original string back, it's an integer.

# Javascript, ~~112~~ ~~121~~ 87 bytes

Thanks to @edc65 for saving 34 bytes by converting the original code (in the explanation) to ES6. I didn't change the explanation because it shows the logic better.

b=a=>/^[1-9]\d*$/.test(a)?"integer":/^([1-9]\d+|\d)\.\d+$/.test(a)?"decimal":"string"

This basically converts the rules for an integer and decimal in the question into regex checks, and tests them against the given input. If the input doesn't match, then it must be a string. It passes all of the tests given in the question.
## Ungolfed + explanation

```javascript
function b(a) {
  if (/^[1-9]\d*$/.test(a))           // regex check for the rules of an 'integer':
    return "integer";                 // ^[1-9] - the first digit must be 1-9
                                      // \d*    - followed by zero or more digits
                                      // $      - ensures the check runs to the end of the word
  if (/^([1-9]\d+|\d)\.\d+$/.test(a)) // regex check for the rules of a 'decimal':
    return "decimal";                 // \. checks for a '.' in the word
                                      // ([1-9]\d+|\d) and \d+ require one or more numeric
                                      // characters on each side of the '.'
                                      // ^ and $ make the match cover the whole word
  return "string";                    // none of the others match, so it must be a string
}
```

• This seems to fail on inputs such as 01.23. – LegionMammal978 Dec 26 '15 at 11:15
• I fixed it, it passes the b("0.123") case. Sorry about that; since it was only explicitly mentioned in the question that an integer couldn't have leading zeros, I assumed it didn't apply to decimals. – hargasinski Dec 26 '15 at 22:30

# Java, 133 bytes

String t(String v){if(v.matches("[1-9]\\d*"))return "integer";if(v.matches("(0|[1-9]\\d+)\\.\\d+"))return "decimal";return "string";}

# JavaScript (ES6), ~~75~~ 74

Edit: 1 byte saved, thx Zequ

f=i=>(i=i.match(/^(0|[1-9]\d*)(\.\d+)?$/))?i[2]?'decimal':'integer':'string'

Test

```javascript
f=i=>(i=i.match(/^(0|[1-9]\d*)(\.\d+)?$/))?i[2]?'decimal':'integer':'string'

console.log=x=>O.textContent +=x +'\n';

// test cases from the question and some more
s=['asdf','asdf3.4','02','40.','.','.01','.9.9.9','','0.0.0','00.00','02.00']
i=['2', '11', '1000']
d=['2.0','0.0', '1.009', '911.1','123.4567890']

console.log('Strings:')
s.forEach(x=>console.log('<'+x+'> -> '+f(x)))
console.log('Integers:')
i.forEach(x=>console.log('<'+x+'> -> '+f(x)))
console.log('Decimals:')
d.forEach(x=>console.log('<'+x+'> -> '+f(x)))
```

<pre id=O></pre>

• You could save a byte by changing [^0\D] in the regex match to [1-9] – hargasinski Dec 27 '15 at 19:22
• @Zequ good hint, thanks ... using a compound range seemed so clever :( – edc65 Dec 27 '15 at 20:49

# Perl 5, 59 bytes

With the -p argument on the command line (which is calculated into the byte count):

chop;$_=!/\D|^0/?"integer":/^\d+\.\d+$/?"decimal":"string"

• Fails for any 00.nn (try 00.00) – edc65 Dec 27 '15 at 15:53
• Fixed. Though perhaps that should be in the test cases given. – Codefun64 Dec 27 '15 at 18:43
• It should. On the other hand, often the test cases do not cover all the possible cases. – edc65 Dec 28 '15 at 19:51
• Still wrong, now it gives 'integer' for input .0. What is the chop for? – edc65 Dec 28 '15 at 19:53
• Fixed. Lost interest in this challenge. Not sure if I could've optimized this fix or not. Chop was necessary in a previous iteration of the script. It didn't like the newline from user input.
– Codefun64 Dec 28 '15 at 22:19

# Perl 6, 61 bytes

{<string integer decimal>[?/^[(\d+\.\d+)|<[1..9]>\d*]$/+?$0]} # 61

Usage:

say «asdf asdf3.4 2 2.0 02 40. . .01 0.0 .9.9.9 ''».map: {...}
(string string integer decimal string string string string decimal string string)

# Python 2, 148 bytes

```python
def f(s):
 try:
  t=float(s);assert s[0]!='0'
  if `t+0`==s:return'decimal'
  if `int(t)`==s:return'integer'
  return'string'
 except:return'string'

assert f('asdf') == 'string'
assert f('asdf3.4') == 'string'
assert f('2') == 'integer'
assert f('2.0') == 'decimal'
assert f('.') == 'string'
assert f('.01') == 'string'
assert f('.9.9.9') == 'string'
assert f(' ') == 'string'
assert f('40.') == 'string'
assert f('02') == 'string'
assert f('0.0') == 'string'
assert f('00.00') == 'string'
```

(The backticks around `t+0` and `int(t)` are Python 2 repr shorthand; Markdown appears to have stripped them in the original rendering.)

## JavaScript ES6, ~~74~~ 70 bytes

w=>w.match(/^\d+\.\d+$/)?"decimal":w.match(/^\d+$/)?"integer":"string"

• it fails with the test cases from the question. Really, please, test before posting. – edc65 Dec 28 '15 at 19:41
• @edc Thanks for the feedback, though could you please tell me what cases do fail except 02? – nicael Dec 28 '15 at 19:45
• Glad that you find it by yourself – edc65 Dec 28 '15 at 20:01
• Have a look at my answer for a fiddle. – Pavlo Dec 29 '15 at 18:14
• It should work with the test cases if you changed /^\d+$/ to ^[1-9]\d* (75 bytes). – Chiru Feb 8 '16 at 18:18
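Not an entry: a plain, ungolfed Python reference sketch of one reading of the rules above, handy for checking submissions against the test cases (the function name and structure are mine):

```python
import re

def classify(s):
    # integer: digits only, no leading zero
    if re.fullmatch(r"[1-9]\d*", s):
        return "integer"
    # decimal: a '.' with digits on both sides, no leading zero on the whole part
    if re.fullmatch(r"(0|[1-9]\d*)\.\d+", s):
        return "decimal"
    return "string"

cases = {"asdf": "string", "asdf3.4": "string", "2": "integer", "2.0": "decimal",
         "02": "string", "40.": "string", ".": "string", ".01": "string",
         "0.0": "decimal", ".9.9.9": "string", "": "string"}
assert all(classify(k) == v for k, v in cases.items())
```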
{}
# Area of a Triangle

## Area of Triangle Examples

The simplest way to work out the area of a triangle is with the following formula:

Area = $\frac{1}{2}$ × BASE × HEIGHT

provided you know the value of both the BASE and the HEIGHT.

For example, consider the following 2 triangles. Despite the difference in appearance, the area of both triangles is given by the same formula shown above.

(1.1) [triangle figure omitted]

(1.2) Area = $\frac{1}{2}$ × 12 × 16 = $\frac{1}{2}$ × 192
Area = 96 cm²

(1.3) Area = $\frac{1}{2}$ × 22 × 30 = $\frac{1}{2}$ × 660
Area = 330 cm²

(1.4) This triangle has an area of 36 cm² and a base of 8 cm. What is the length of the height of the triangle in cm?

Solution
Area = $\frac{1}{2}$ × BASE × HEIGHT
36 = $\frac{1}{2}$ × 8 × HEIGHT
36 = 4 × HEIGHT
$\frac{36}{4}$ = HEIGHT
Thus the height of the triangle is 9 cm.

## Area of Triangle Examples, Heron's Formula

Another way of establishing a triangle's area is with Heron's formula, a formula for triangle area which has been around for a very long time.

If you have a standard triangle with sides a, b and c, the area can be obtained with the following approach.

First, work out half of the triangle's perimeter, denoted s:

$s=\frac{a+b+c}{2}$

Then use this value s in a further formula to establish the triangle area:

Area = $\sqrt{s(s-a)(s-b)(s-c)}$

Examples

(2.1) $s=\frac{3+4+3}{2}=\frac{10}{2}=5$
Area = $\sqrt{5(5-3)(5-4)(5-3)}=\sqrt{5\times2\times1\times2}=\sqrt{20}$ = 4.47 cm²

(2.2) $s=\frac{6+9+7}{2}=\frac{22}{2}=11$
Area = $\sqrt{11(11-6)(11-9)(11-7)}=\sqrt{11\times5\times2\times4}=\sqrt{440}$ = 20.98 cm²

(2.3) $s=\frac{8+8+8}{2}=\frac{24}{2}=12$
Area = $\sqrt{12(12-8)(12-8)(12-8)}=\sqrt{12\times4\times4\times4}=\sqrt{768}$ = 27.71 cm²
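Heron's formula translates directly into a few lines of code; a quick sketch of my own, useful for checking the worked examples above:

```python
from math import sqrt

def heron_area(a, b, c):
    s = (a + b + c) / 2               # half the perimeter
    return sqrt(s*(s-a)*(s-b)*(s-c))  # Heron's formula

print(round(heron_area(3, 4, 3), 2))  # 4.47, matching example (2.1)
print(round(heron_area(6, 9, 7), 2))  # 20.98, example (2.2)
print(round(heron_area(8, 8, 8), 2))  # 27.71, example (2.3)
```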
{}
# GRU

class torch.nn.GRU(*args, **kwargs)[source]

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. For each element in the input sequence, each layer computes the following function:

$\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array}$

where $h_t$ is the hidden state at time t, $x_t$ is the input at time t, $h_{(t-1)}$ is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and $r_t$, $z_t$, $n_t$ are the reset, update, and new gates, respectively. $\sigma$ is the sigmoid function, and $*$ is the Hadamard product.

In a multilayer GRU, the input $x^{(l)}_t$ of the $l$-th layer ($l >= 2$) is the hidden state $h^{(l-1)}_t$ of the previous layer multiplied by dropout $\delta^{(l-1)}_t$ where each $\delta^{(l-1)}_t$ is a Bernoulli random variable which is $0$ with probability dropout.

Parameters

• input_size – The number of expected features in the input x
• hidden_size – The number of features in the hidden state h
• num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
• bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
• batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
• dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0
• bidirectional – If True, becomes a bidirectional GRU. Default: False

Inputs: input, h_0

• input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() for details.
• h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.

Outputs: output, h_n

• output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features h_t from the last layer of the GRU, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
• h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size).

Shape:

• Input1: $(L, N, H_{in})$ tensor containing input features, where $H_{in}=\text{input\_size}$ and L represents a sequence length.
• Input2: $(S, N, H_{out})$ tensor containing the initial hidden state for each element in the batch, where $S=\text{num\_layers} * \text{num\_directions}$ and $H_{out}=\text{hidden\_size}$. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
• Output1: $(L, N, H_{all})$ where $H_{all}=\text{num\_directions} * \text{hidden\_size}$
• Output2: $(S, N, H_{out})$ tensor containing the next hidden state for each element in the batch

Variables

• ~GRU.weight_ih_l[k] – the learnable input-hidden weights of the $\text{k}^{th}$ layer (W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size)
• ~GRU.weight_hh_l[k] – the learnable hidden-hidden weights of the $\text{k}^{th}$ layer (W_hr|W_hz|W_hn), of shape (3*hidden_size, hidden_size)
• ~GRU.bias_ih_l[k] – the learnable input-hidden bias of the $\text{k}^{th}$ layer (b_ir|b_iz|b_in), of shape (3*hidden_size)
• ~GRU.bias_hh_l[k] – the learnable hidden-hidden bias of the $\text{k}^{th}$ layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)

Note

All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$

Note

If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, and 5) input data is not in PackedSequence format, then the persistent algorithm can be selected to improve performance.

Examples:

```
>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
```
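The direction-separation described under Outputs is easy to verify in a few lines; a short shape check of my own, using only the view(...) calls the documentation itself describes:

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size, num_layers = 5, 3, 10, 20, 2
rnn = nn.GRU(input_size, hidden_size, num_layers, bidirectional=True)

x = torch.randn(seq_len, batch, input_size)
h0 = torch.randn(num_layers * 2, batch, hidden_size)   # num_directions = 2
output, hn = rnn(x, h0)

# Separate the forward (index 0) and backward (index 1) directions
out_dirs = output.view(seq_len, batch, 2, hidden_size)
print(out_dirs[:, :, 0].shape, out_dirs[:, :, 1].shape)  # both (5, 3, 20)

# Likewise for the final hidden state, split by layer and direction
hn_layers = hn.view(num_layers, 2, batch, hidden_size)
print(hn_layers.shape)  # (2, 2, 3, 20)
```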
{}
# Aerobic exercise training improves hepatic and muscle insulin sensitivity, but reduces splanchnic glucose uptake in obese humans with type 2 diabetes

## Abstract

### Background

Aerobic exercise training is known to have beneficial effects on whole-body glucose metabolism in people with type 2 diabetes (T2D). The responses of the liver to such training are less well understood. The purpose of this study was to determine the effect of aerobic exercise training on splanchnic glucose uptake (SGU) and insulin-mediated suppression of endogenous glucose production (EGP) in obese subjects with T2D.

### Methods

Participants included 11 obese humans with T2D, who underwent 15 ± 2 weeks of aerobic exercise training (AEX; n = 6) or remained sedentary for 15 ± 1 weeks (SED; n = 5). After an initial screening visit, each subject underwent an oral glucose load clamp and an isoglycemic/two-step (20 and 40 mU/m2/min) hyperinsulinemic clamp (ISO-clamp) to assess SGU and insulin-mediated suppression of EGP, respectively. After the intervention period, both tests were repeated.

### Results

In AEX, the ability of insulin to suppress EGP was improved during both the low (69 ± 9 and 80 ± 6% suppression; pre-post, respectively; p < 0.05) and high (67 ± 6 and 82 ± 4% suppression, respectively; p < 0.05) insulin infusion periods. Despite markedly improved muscle insulin sensitivity, SGU was reduced in AEX after training (22.9 ± 3.3 and 9.1 ± 6.0 g pre-post in AEX, respectively; p < 0.05).

### Conclusions

In obese T2D subjects, exercise training improves whole-body glucose metabolism, in part, by improving insulin-mediated suppression of EGP and enhancing muscle glucose uptake, which occur despite reduced SGU during an oral glucose challenge.

## Introduction

Type 2 diabetes (T2D) is a metabolic disease characterized by the dysfunction of several key glucoregulatory organs during the fasted state and in response to glucose ingestion [1]. Among these organs, impaired glucose metabolism by the liver is recognized as an important contributor to T2D because of the central role it plays in the regulation of both fasting glucose levels and glucose tolerance. In healthy humans, insulin and glucagon regulate hepatic glucose production (HGP) to maintain euglycemia during fasting. In contrast, hepatic insulin resistance, along with hyperglucagonemia [2], increases fasting HGP in patients with T2D, thereby contributing to hyperglycemia [3]. Insulin signaling is also important for normal liver function during the postprandial state because it promotes hepatic glucose uptake and glycogen deposition, a complex process that accounts for the disposal of one-third of ingested carbohydrate [4,5,6,7,8,9]. It is therefore not surprising that T2D patients also exhibit impaired liver glucose uptake and glycogen synthesis during the postprandial state [5,10,11,12,13,14,15].

Although treatments such as surgical weight loss [16] and insulin-sensitizing medications [7,8] can improve whole-body glucose metabolism in T2D patients, lifestyle intervention is the ideal method for improving glucoregulation in this population. Whereas the benefit of aerobic exercise training on skeletal muscle insulin sensitivity continues to be extensively investigated [17], the effect of training on hepatic glucose metabolism is less clear. It has been shown previously that aerobic exercise training periods of ~12–16 weeks can enhance hepatic insulin sensitivity, manifested by improved suppression of HGP in response to submaximal doses of insulin [18,19].
However, it remains unclear whether these training-induced improvements in hepatic insulin sensitivity extend to enhanced hepatic glucose metabolism during the postprandial state. To this end, the purpose of this study was to determine how aerobic exercise training by subjects with T2D affects hepatic glucose metabolism during the fasting and postprandial states.

## Materials and methods

### Subjects

Vanderbilt University's Institutional Review Board approved the methods of this study. Prior to enrollment, all subjects were informed of the risks associated with participation and provided written consent. Inclusion criteria included males and non-pregnant females aged 40–60 years with T2D, a BMI range of 30–40 kg/m2, a hemoglobin A1C of ≤8.5%, and being sedentary for the previous six months. Exclusion criteria included an abnormal exercise stress test and the presence of neuropathy, nephropathy, or retinopathy. Individuals reporting musculoskeletal or other conditions that would make strenuous exercise difficult or dangerous were also excluded.

### Study overview

For 5 days prior to each study visit, subjects discontinued their diabetes-related medications to minimize the confounding effect of the drugs on exercise-induced responses. Because of this, individuals taking insulin or thiazolidinediones (because of the long half-life) were excluded. Each subject underwent an initial screening visit during which baseline blood was drawn, followed by a 75 g oral glucose tolerance test (OGTT). After the OGTT, each subject performed a VO2 max test so that exercise prescriptions could be generated. At least 1 week after this screening visit, we assessed hepatic glucose metabolism using two protocols separated by at least one week. First, baseline splanchnic glucose uptake (SGU) was determined using the 75 g oral glucose load clamp technique (OGL-clamp). Second, hepatic insulin sensitivity was measured using the isoglycemic/hyperinsulinemic clamp method (ISO-clamp). After the completion of these metabolic tests, subjects either remained sedentary (SED; n = 5) or participated in ~15 weeks of aerobic exercise training (AEX; n = 6). After the intervention period, each subject repeated the OGL-clamp and ISO-clamp in the same order. All subjects were instructed either to continue to exercise (AEX) or remain sedentary (SED) until the day before each post-intervention metabolic study.

### Screening visit

Each participant reported to the Vanderbilt Clinical Research Center (CRC) in the morning after an overnight fast. Upon arrival, an intravenous (IV) catheter was inserted into the antecubital vein and blood was drawn to measure hemoglobin A1C and a complete blood count. The following tests were then conducted:

### Oral glucose tolerance test

While the subject rested comfortably in a hospital bed, blood samples were taken at min 0 for the measurement of basal plasma glucose and insulin. Next, each subject consumed a 75-g glucose solution within 5 min, after which plasma glucose and insulin levels were assessed for 2 h. A plasma glucose concentration >199 mg/dL after 2 h confirmed T2D.

### VO2 max/EKG testing

After completing the OGTT, each subject underwent a VO2 max test, which included 12-lead EKG monitoring to screen for cardiac disease. A modified Balke 3.0 protocol was used, with each stage of the test lasting 3 min to allow oxygen consumption to reach a steady state.
VO2 max was considered to be attained if two of the following three criteria were achieved: (1) a respiratory exchange ratio >1.10; (2) a heart rate within 10 beats of the age-predicted maximum (220-age); or (3) no change in oxygen consumption despite an increase in workload. Exercise prescriptions for the AEX group equaled the treadmill workload that elicited 70% of the individually measured VO2 max20.

### OGL-clamp

Splanchnic glucose uptake (SGU) was assessed using the OGL-clamp technique as previously described5,6,7,9. In brief, subjects discontinued their diabetes-related medications for 5 days prior to reporting to the CRC the evening before the metabolic study. Upon admission, each subject consumed a standardized meal and remained fasted thereafter. At ~7 a.m., a catheter was inserted into one antecubital vein for infusions and another was inserted into the contralateral antecubital vein for periodic blood draws. A heating blanket was used to allow the draw of arterialized blood. The OGL-clamp lasted 390 min and was divided into a 180-min euglycemic/hyperinsulinemic lead-in period (−180 to 0 min), followed by the ingestion of a 75 g OGL and periodic blood draws to assess SGU (0–210 min). At −180 min, a primed (25 µCi), continuous (0.25 µCi/min) infusion of 3-3H glucose was begun, thereby allowing the assessment of glucose turnover. At the same time, a primed21, continuous (120 mU/m2/min) infusion of insulin was started to suppress HGP, while simultaneously stimulating muscle glucose uptake. During the first 3 h of the OGL-clamp (min −180 to 0), the plasma glucose level was measured every 5–10 min and euglycemia (~95 mg/dL) was maintained by a variable IV-infusion of 20% dextrose (d20) as necessary. During the final 30 min of this period, glucose turnover was measured to confirm that HGP was suppressed prior to the OGL. At minute 0, while the IV-infusion of insulin was continued, each participant consumed a 75-g OGL, labeled with 500 mg acetaminophen, within 5 min. To minimize any rise in plasma glucose that would occur after the ingestion of the OGL, the pre-existing exogenous glucose infusion rate (GIR) was lowered as needed for each subject. The complete absorption of the OGL was indicated by a return of the exogenous GIR to a rate similar to what existed prior to the OGL. A total of 210 min after consumption of the OGL, the insulin infusion was discontinued and subjects were fed a meal. After stabilization of the plasma glucose level, the d20 infusion was stopped and the subject was discharged from the CRC, after which they resumed their previously existing medication regimen.

### ISO-clamp

Approximately 1 week after the OGL-clamp, hepatic insulin sensitivity was measured using the isoglycemic/hyperinsulinemic clamp technique. For this visit, subjects again discontinued their diabetes-related medications 5 days prior. Upon reporting to the CRC, each subject ate dinner, then remained fasted for ~12 h. The next morning, catheters were inserted and 3-3H glucose tracer was infused as described previously for the OGL-clamp. The initial 120 min of the study allowed for tracer equilibration, followed by a 30-min control period during which hormones, substrates, and glucose turnover rates were assessed during the basal state. The average plasma glucose level during this sampling period then became the isoglycemic level at which the subject’s plasma glucose would be clamped for the remainder of the study.
At minute 150, a primed21, continuous (20 mU/m2/min) infusion of insulin was started and maintained for the next 150 min (i.e., min 150–300). Isoglycemia was sustained during this period by monitoring the plasma glucose level every 5–10 min and infusing d20 as necessary. The tracer infusion rate from the stock solution was lowered from 0.25 to 0.06 µCi/min and the d20 infusate was labeled with 3-3H glucose as described previously20. As was the case with the basal period, hormones, substrates, and glucose turnover were assessed during the final 30 min of this period. During the final 150-min period (i.e., min 300–450), the insulin dose was increased to 40 mU/m2/min. Isoglycemia was maintained during this period by monitoring the plasma glucose level every 5–10 min and adjusting the d20 infusion rate as necessary. As was the case during the previous two periods, hormones, substrates, and glucose turnover were assessed during the final 30 min of this period. At min 450, the insulin and tracer infusions were discontinued and subjects were fed. After the plasma glucose level was stabilized, the exogenous glucose infusion was stopped and the subject was discharged and instructed to resume their medication regimen.

### Exercise training

Each subject assigned to the AEX group performed treadmill walking 4–5 days per week for ~15 weeks. Each session consisted of 2 × 25 min bouts of treadmill walking at 70% VO2 max. A 10-min break was allowed between bouts. Each session included a brief warm-up and cool-down period. Each subject in AEX was instructed to continue exercising until after the second ISO-clamp (i.e., the second of the two clamp studies).

### Analysis of plasma hormones and metabolites

Plasma insulin, glucagon, cortisol, and c-peptide (Millipore, St. Louis, USA) were measured by the Vanderbilt Diabetes Research and Training Center’s (DRTC) hormone assay and analytical services core. Plasma glucose was measured using the glucose oxidase method (ref. 22; Beckman Instruments). Glycerol and lactate23 and plasma specific activity20 were measured as described previously and free fatty acids were measured using a commercially available kit (NEFA-HR kit; Wako Chemicals; Osaka, Japan).

### Calculations

Samples were taken every 10 min during the final 30 min of each clamp period (i.e., the euglycemic lead-in period of the OGL-clamp and the basal, low- and high-insulin periods of the isoglycemic clamp) and analyzed for glucose specific activity at each time point. During the basal state of the ISO-clamp, glucose turnover was calculated for each time point by dividing the tracer infusion rate by the glucose specific activity. During the lead-in period of the OGL-clamp and the latter two periods of the ISO-clamp, when exogenous glucose was being infused, the total rate of appearance (Ra) of glucose was calculated for each time point the same way, whereas endogenous glucose production (EGP) was calculated by subtracting the exogenous GIR from Ra. Splanchnic glucose uptake (SGU) was calculated by subtracting the amount of glucose escaping the splanchnic bed from the total amount of glucose ingested (75 g). Splanchnic glucose escape (SGE) can be determined by calculating the reduction in the exogenous GIR required to maintain euglycemia over the OGL period6. We also followed this strategy, where the exogenous GIR used was the average of the GIR that existed prior to ingestion of the OGL and that which existed at the end of the study.
However, because plasma glucose rose unexpectedly in response to the OGL in our subjects, the accompanying increase in glucose utilization was accounted for by referring to the data of Hansen and colleagues24. In that study, it was shown that at steady state insulin levels, peripheral glucose utilization (PGU) in insulin resistant human subjects increases by 0.0425 mg/kg/min per 1 mg/dL rise in the plasma glucose level when PGU is 7.5 mg/kg/min and the plasma glucose level is 95 mg/dL. Because PGU was lower than this in our T2D subjects, this correction was applied on a sliding scale, thereby resulting in the following calculation for SGE:

$$\begin{array}{l}{\mathrm{SGE}} = \left( {{\mathrm{GIR}}_{{\mathrm{avg}}} - {\mathrm{GIR}}_{t\,0 - 10}} \right) \cdot {\mathrm{kg}} \cdot {\mathrm{d}}t + \\ \left[ {0.0425 \cdot \frac{{{\mathrm{GIR}}_{{\mathrm{avg}}}}}{{7.5}} \cdot \left( {\frac{{{\mathrm{glucose}}_t \ + \ {\mathrm{glucose}}_{t + 10}}}{2} - 95} \right) \cdot {\mathrm{kg}} \cdot {\mathrm{d}}t} \right]\end{array}$$

where GIRavg equals the average of the GIRs that existed prior to and after intestinal absorption of the OGL (in mg/kg/min); GIRt0–10 is the average GIR over the 10-min period being calculated; kg is body mass in kg and dt is the time interval (10 min). This calculation was made for every 10-min interval over the OGL-clamp absorption period and summed to provide a final value for SGE. SGU was finally determined by subtracting the final value for SGE from the amount of glucose ingested (75 g).

### Statistical analysis

Data were analyzed using repeated measures ANOVA and post-hoc analyses were made as appropriate using the Student–Newman–Keuls method. Data are summarized as mean ± SEM, unless noted otherwise.

## Results

### Screening visit

#### Anthropometrics

Anthropometric data are presented in Table 1. Subjects who participated in the study included six obese people with type 2 diabetes, who performed aerobic exercise for 15 ± 2 weeks (AEX), and five subjects with T2D who remained sedentary for the same length of time (SED; 15 ± 1 weeks). VO2 max was similar in each group at baseline and subjects from both groups remained weight stable over the study’s duration. In contrast to SED, whose hemoglobin A1C did not change (n = 4 in SED for this measure), hemoglobin A1C was lower in AEX after the study (p < 0.05; n = 6), thereby indicating that the training program had a beneficial effect on whole-body glucose metabolism.

#### Screen OGTT

Fasting plasma glucose was higher in AEX compared to SED (181 ± 5 vs 129 ± 7 mg/dL, respectively; p < 0.05). Two hours after ingestion of the 75 g glucose load, the plasma glucose level remained elevated but was not different between groups (321 ± 17 vs 254 ± 23 mg/dL, respectively; p = NS). Importantly, however, the change in plasma glucose in response to the 75 g oral glucose load was similar in each group (140 ± 10 and 125 ± 16 mg/dL in AEX and SED, respectively; p = NS), suggesting similar glucose tolerance. All subjects had a 2-h plasma glucose level >200 mg/dL. Plasma insulin levels were similar between groups at baseline (26 ± 4 and 30 ± 3 µU/mL in AEX and SED, respectively), and at the 120 min mark of the OGTT (74 ± 18 vs 110 ± 29 µU/mL, respectively; p = NS).

### Basal period

During the basal period of the pre-intervention isoglycemic/hyperinsulinemic clamp, fasting plasma c-peptide and insulin concentrations were similar in AEX and SED, whereas plasma glucagon was elevated in SED (Table 2).
This elevation in glucagon in SED, however, did not elevate EGP (Fig. 1a) or fasting plasma glucose concentrations (Table 2) compared to AEX. Levels of cortisol, NEFA and glycerol were also similar among groups (Table 2).

### Low insulin period

In response to the IV infusion of insulin at 20 mU/m2/min, venous levels of the hormone were increased twofold in each group, while the levels of both c-peptide and glucagon fell (Table 2). Despite this increase in plasma insulin, isoglycemia (Table 2) was maintained in both groups by IV infusion of dextrose (1.16 ± 0.05 and 1.35 ± 0.28 mg/kg/min in SED and AEX, respectively; p = NS). Endogenous glucose production (EGP) fell similarly in both SED and AEX (Fig. 1b). Peripheral glucose uptake (PGU) did not increase above what was observed for whole-body glucose turnover during the basal state in either group (1.81 ± 0.20 and 2.10 ± 0.37 mg/kg/min in SED and AEX, respectively), thereby demonstrating significant peripheral insulin resistance (Fig. 1f). Because of hyperinsulinemia, NEFA and glycerol in the plasma declined in both groups (Table 2).

### High-insulin period

During the high-insulin infusion period, the insulin infusion rate was doubled again, resulting in c-peptide and glucagon falling further (Table 2). Again, plasma glucose (Table 2) was clamped in both groups by peripheral infusion of dextrose that averaged 2.45 ± 0.36 and 3.44 ± 0.81 mg/kg/min in SED and AEX, respectively (p = NS). EGP remained suppressed during the high-insulin period in both groups (Fig. 1d, e), as PGU rose in both groups above the level seen during the basal period (3.21 ± 0.58 and 4.25 ± 0.83 mg/kg/min in SED and AEX, respectively). Plasma glycerol and FFA continued to decline in each group from the low insulin infusion period.

### Basal period

During the basal period of the post-intervention isoglycemic/hyperinsulinemic clamp, the plasma glucose, insulin and c-peptide levels within each group were similar to pre-intervention levels (Table 2). In addition, the elevated basal glucagon level that was observed in SED during the pre-intervention clamp was lower during the post-intervention study (p < 0.05 for each time point during post- compared to pre-intervention values in SED; Table 2). The levels of glucose, insulin, c-peptide, glucagon and cortisol were not different between SED and AEX during the basal period of their post-intervention metabolic study. During the basal period of the post-intervention metabolic study, EGP in SED and AEX remained unchanged compared to their respective pre-intervention values (Fig. 1a).

### Low insulin period

During the low insulin infusion period of the post-intervention isoglycemic/hyperinsulinemic study, the plasma glucose level was again clamped at a similar level in each group. As with the pre-intervention clamp studies, the insulin infusion doubled the hormone’s level in plasma, causing a further decline in both c-peptide and glucagon (Table 2). The glucose infusion rate required to maintain isoglycemia in SED (1.23 ± 0.13 mg/kg/min) was similar to what it was during the pre-intervention clamp (1.16 ± 0.05 mg/kg/min), whereas the glucose infusion rate in AEX increased from its pre-intervention value of 1.35 ± 0.28 mg/kg/min to 2.10 ± 0.32 mg/kg/min (p = 0.01). EGP was similar during the pre- and post-intervention clamps in SED during the low insulin infusion period (Fig. 1b). In AEX, however, both EGP and percent suppression of EGP changed favorably after the training period (Fig. 1b, c; p = 0.03 for each).
In SED, PGU was unchanged from baseline during the low insulin infusion period of the post-intervention clamp (1.82 ± 0.20 mg/kg/min). PGU in AEX was increased during the post-intervention clamp compared with the pre-intervention clamp, but the difference was not statistically significant (2.54 ± 0.25 mg/kg/min).

### High-insulin period

During the high-insulin infusion, the post-intervention plasma glucose level remained similar between groups (Table 2). As with the pre-intervention clamp, the increase in circulating insulin concentrations further suppressed c-peptide and glucagon levels (Table 2). The glucose infusion rate required to maintain isoglycemia in SED (2.72 ± 0.22 mg/kg/min) was similar to the pre-intervention study (2.45 ± 0.36 mg/kg/min; p = NS), while the amount of glucose required to maintain isoglycemia in AEX increased from 3.44 ± 0.81 mg/kg/min pre-exercise to 4.59 ± 0.89 mg/kg/min post-exercise (p < 0.01). EGP during high-insulin infusion was similar to the pre-intervention clamp in SED (Fig. 1d, e; p = NS), whereas EGP (p = 0.02) and the percent suppression of EGP (p = 0.01) were improved in AEX during the post-intervention clamp (Fig. 1d, e, respectively). During this period, PGU remained similar to the pre-intervention value in SED (3.36 ± 0.29 mg/kg/min), but increased to 5.01 ± 0.91 mg/kg/min in AEX (p = 0.07).

### OGL-clamp

#### Pre-intervention

During the pre-intervention OGL-clamp’s lead-in period (−30 to 0 min), the IV-insulin levels were raised in both SED and AEX (Table 3). Meanwhile, the plasma glucose levels (Table 3) were clamped at euglycemia in both groups by a similar exogenous GIR in each group (Fig. 2c). Plasma c-peptide, glucagon, and cortisol were similar in each group and acetaminophen was negligible in the plasma. This hormonal milieu suppressed EGP to −0.03 ± 0.25 and 0.29 ± 0.15 mg/kg/min (p = NS) in SED and AEX, respectively, thereby making PGU and the exogenous GIR functionally the same.

#### OGL-clamp period

After consuming the 75-g glucose load, the plasma insulin level remained similar in each group, indicating that endogenous insulin secretion was markedly inhibited by the high-insulin infusion rate. On the other hand, the levels of glucose and acetaminophen in the plasma rose similarly in both SED and AEX (Table 3), before glucose returned to pre-OGL levels. Interestingly, despite the prevailing hyperinsulinemia, plasma c-peptide more than doubled in response to the OGL, after which it waned over time; this, however, had only a small (~10–15%) effect on plasma insulin levels and was not different between groups. The OGL likely had no effect on pre-intervention plasma glucagon concentrations in either group because of the pre-existing hyperinsulinemia (Table 3). SGU was similar in SED and AEX during the pre-intervention clamp (Fig. 2; p = NS).

#### Post-intervention

As was the case during the pre-intervention clamp, insulin levels were similar in SED and AEX during the lead-in period of the post-intervention OGL-clamp (Table 3). Likewise, there were no differences between groups in c-peptide or glucagon, and acetaminophen was undetectable. The GIR required to maintain euglycemia was unchanged in SED, but increased in AEX (Fig. 2c; p = 0.01). This hormonal milieu completely suppressed EGP (0.02 ± 0.19 and −0.13 ± 0.20 mg/kg/min in SED and AEX, respectively), thereby making PGU tantamount to the GIR.
#### OGL-clamp period

Although plasma glucose levels were slightly higher in SED and slightly lower in AEX, the glucose AUC in response to the post-intervention OGL was similar in both groups (Table 3). Likewise, acetaminophen from the OGL rose and fell in concert in both groups, thereby indicating no difference in the intestinal absorption of glucose in response to exercise training (Table 3). As was the case during the pre-intervention OGL-clamp, ingestion of the OGL increased c-peptide similarly in both groups but did not impact glucagon or cortisol (Table 3). In response, SGU was unchanged in SED, but markedly reduced in AEX (Fig. 2a; p = 0.04); a 25 ± 15% increase and 55 ± 32% decrease in the respective groups (Fig. 2b; p = 0.04).

## Discussion

Obesity-associated T2D is characterized by insulin resistance and the dysfunction of several glucoregulatory organs including skeletal muscle and liver. Importantly, however, exercise training reduces insulin resistance in these tissues, thereby making it a cornerstone in the treatment of the disease. While a considerable amount of research continues to focus on the mechanistic basis of exercise-induced improvements in muscle insulin sensitivity, much less is known about how exercise affects liver function in T2D; this is unfortunate, given the central role the liver plays in the regulation of blood glucose homeostasis. With this in mind, we investigated the effect of aerobic exercise training on hepatic glucose metabolism, during both the fasted and fed states, in patients with T2D.

Endogenous glucose production (EGP) is elevated in human patients with T2D, and the suppression of EGP in response to hyperinsulinemia is diminished25,26. At the cellular level, this is accounted for by elevated rates of both gluconeogenesis and glycogenolysis, which, in turn, increase the flux of substrate through glucose-6-phosphatase (G6Pase) and thus glucose production. Our data demonstrate that 15 weeks of aerobic exercise training improved EGP suppression over a broad range of physiological hyperinsulinemia, thereby contributing to improved glycemic regulation. Although we did not investigate the mechanisms responsible for this improvement, it likely occurs as a function of increased activity of glucokinase relative to glucose-6-phosphatase. Activity of these enzymes is known to be unbalanced in T2D, such that it causes elevated EGP in this population27,28. Because each enzyme’s activity is regulated by insulin29, improved hepatic sensitivity to the hormone after exercise training would be expected to lower EGP during hyperinsulinemia.

An unanticipated finding was that exercise training reduced SGU in response to a 75 g oral glucose load. Even more surprisingly, this occurred despite gains in hepatic insulin sensitivity. While this finding could be interpreted as unfavorable, it occurred in association with a reduction in hemoglobin A1C over the 15-week intervention period, thereby indicating a positive net effect of exercise training on glucoregulation over the same period. Because catheterizing blood vessels into and out of the liver is prohibitively invasive, hepatic glucose uptake cannot be directly measured in humans. A surrogate for the measurement of SGU, however, is the OGL-clamp. This method has been validated in humans6 and, along with more mechanistic studies in animals, its use has shown that the liver plays an important role in postprandial glucose metabolism.
Not only does the liver lower its own glucose production in response to a meal, it also takes up approximately one-third of ingested carbohydrate, storing most of it as glycogen for later use during fasting6,7,9. A number of metabolic cues facilitate maximal postprandial hepatic glucose uptake, including hyperinsulinemia, hyperglycemia and a negative arterial-hepatic portal vein glucose gradient30,31,32,33,34. Because SGU is impaired in subjects with T2D5,10,11,12, we hypothesized that exercise-induced improvements in hepatic insulin sensitivity would increase it. Interestingly, however, the opposite actually occurred. Despite exhibiting improved hepatic insulin sensitivity during the ISO-clamp, a marked reduction in SGU was observed during the OGL-clamp after exercise training; a finding that could be interpreted as diminished hepatic function. However, Maehlum et al.35 observed a similar result in healthy humans after they exercised exhaustively just prior to an oral glucose load, and when the exercise happened 14–15 h before the OGL. Likewise, in canine studies, Wasserman and colleagues found that intraduodenal glucose infusion immediately after an exercise bout increased splanchnic glucose output due to accelerated intestinal absorption of the sugar36,37. The observation that healthy humans and animals with intact glucose metabolism show diminished SGU both immediately and 14–15 h after exercise suggests that the lower SGU in our T2D subjects after exercise training is not maladaptive. Illustrating this is the fact that insulin-induced muscle glucose uptake more than accounted for the reduction in SGU. In AEX, the average PGU during the OGL-clamp was 3.8 mg/kg/min prior to the intervention and 5.1 mg/kg/min after it. This makes the increase in peripheral glucose uptake after exercise ~22 g (over 180 min), which is 58% higher than the reduction in SGU (13.8 g) and may represent a repartitioning of glucose disposal away from the splanchnic bed in favor of replenishing muscle glycogen.

At this time, the mechanism that reduced SGU after exercise training remains unclear. Based on the acetaminophen data, a rise in intestinal glucose absorption does not appear to explain the reduction in SGU. On the other hand, Knudsen and colleagues38 showed that liver glycogen content is increased in muscle IL-6 knockout mice at rest and after exercise. This raises the possibility that inter-organ crosstalk, facilitated by exercise-induced myokines such as IL-6, could orchestrate the subsequent reduction in SGU during the postprandial state. Future studies will be needed to verify this hypothesis and will also be required to differentiate between the effect of an acute bout of exercise on hepatic glucose metabolism compared to chronic training.

Although the plasma glucose concentration was similar in both groups before and after the intervention period, it invariably rose above basal (95 mg/dL) during the first ~60–90 min after the 75 g OGL, despite hyperinsulinemia of ~300 µU/mL. We are not aware of any similar reports of hyperglycemia using the OGL-clamp method, but this demonstrates the significant insulin resistance of our T2D cohort, which was likely exacerbated by the discontinuance of their diabetes-related medications for the previous five days. On the one hand, the increase in plasma glucose more closely mimics the rise seen in response to oral glucose ingestion39. On the other hand, we were required to account for the increase in the plasma glucose level when calculating splanchnic glucose escape.
To do so, we utilized the work of Hansen et al.24, who showed that at steady state insulin levels, the relationship between PGU and the rise in plasma glucose is linear and is not impacted in vivo by insulin resistance24,40. All told, the largest proportion (>80%) of calculated splanchnic glucose escape was still accounted for by the AUC of the fall in the GIR during the OGL period. Likewise, because the AUCs for the plasma glucose responses to the OGL were similar in both groups regardless of time, the GIR-derived portion of the calculation accounted for the entirety of the increase in splanchnic glucose escape (and thus the lowering of SGU) after exercise training. In the unlikely event that we underestimated splanchnic glucose escape because exercise training enhanced the relationship between the plasma glucose level and muscle glucose uptake at steady state insulin levels of ~300 µU/mL, it would have led to an underestimation of peripheral glucose uptake and splanchnic glucose output after training. In turn, it would mean that the reduction in SGU after exercise training is even greater than what we report.

In summary, the current results further demonstrate that in addition to markedly ameliorating muscle insulin resistance in obese human subjects with T2D, aerobic exercise training also improves insulin-mediated suppression of EGP. Despite this improvement in hepatic insulin sensitivity, however, SGU during the postprandial state is diminished after exercise training. We hypothesize that this reduction marks a diversion of ingested glucose away from the liver in favor of skeletal muscle glycogen repletion. Future studies will be required to determine the mechanistic basis for this repartitioning, as it may involve crosstalk between skeletal muscle and the liver.

## References

1. Defronzo, R. A. Banting Lecture. From the triumvirate to the ominous octet: a new paradigm for the treatment of type 2 diabetes mellitus. Diabetes 58, 773–795 (2009).
2. D’Alessio, D. The role of dysregulated glucagon secretion in type 2 diabetes. Diabetes Obes. Metab. 13, 126–132 (2011).
3. Bogardus, C., Lillioja, S., Howard, B. V., Reaven, G. & Mott, D. Relationships between insulin secretion, insulin action, and fasting plasma glucose concentration in nondiabetic and noninsulin-dependent diabetic subjects. J. Clin. Invest. 74, 1238–1246 (1984).
4. Capaldo, B. et al. Splanchnic and leg substrate exchange after ingestion of a natural mixed meal in humans. Diabetes 48, 958–966 (1999).
5. Ludvik, B. et al. Evidence for decreased splanchnic glucose uptake after oral glucose administration in non-insulin-dependent diabetes. J. Clin. Invest. 100, 2354–2361 (1997).
6. Ludvik, B. et al. A noninvasive method to measure splanchnic glucose uptake after oral glucose administration. J. Clin. Invest. 95, 2232–2238 (1995).
7. Bajaj, M. et al. Pioglitazone reduces hepatic fat content and augments splanchnic glucose uptake in patients with type 2 diabetes. Diabetes 52, 1364–1370 (2003).
8. Kawamori, R. et al. Pioglitazone enhances splanchnic glucose uptake as well as peripheral glucose uptake in non-insulin-dependent diabetes mellitus. AD-4833 Clamp-OGL Study Group. Diabetes Res. Clin. Pract. 41, 35–43 (1998).
9. Bajaj, M. et al. Free fatty acids reduce splanchnic and peripheral glucose uptake in patients with type 2 diabetes. Diabetes 51, 3043–3048 (2002).
10. Basu, A. et al. Type 2 diabetes impairs splanchnic uptake of glucose but does not alter intestinal glucose absorption during enteral glucose feeding: additional evidence for a defect in hepatic glucokinase activity. Diabetes 50, 1351–1362 (2001).
11. Basu, A. et al. Effects of type 2 diabetes on the ability of insulin and glucose to regulate splanchnic and muscle glucose metabolism: evidence for a defect in hepatic glucokinase activity. Diabetes 49, 272–283 (2000).
12. Basu, R., Basu, A., Johnson, C. M., Schwenk, W. F. & Rizza, R. A. Insulin dose-response curves for stimulation of splanchnic glucose uptake and suppression of endogenous glucose production differ in nondiabetic humans and are abnormal in people with type 2 diabetes. Diabetes 53, 2042–2050 (2004).
13. Krssak, M. et al. Alterations in postprandial hepatic glycogen metabolism in type 2 diabetes. Diabetes 53, 3048–3056 (2004).
14. Hwang, J. H. et al. Impaired net hepatic glycogen synthesis in insulin-dependent diabetic subjects during mixed meal ingestion. A 13C nuclear magnetic resonance spectroscopy study. J. Clin. Invest. 95, 783–787 (1995).
15. Tomiyasu, M. et al. Monitoring of liver glycogen synthesis in diabetic patients using carbon-13 MR spectroscopy. Eur. J. Radiol. 73, 300–304 (2010).
16. Dunn, J. P. et al. Hepatic and peripheral insulin sensitivity and diabetes remission at 1 month after Roux-en-Y gastric bypass surgery in patients randomized to omentectomy. Diabetes Care 35, 137–142 (2012).
17. Goodpaster, B. H. & Sparks, L. M. Metabolic flexibility in health and disease. Cell Metab. 25, 1027–1036 (2017).
18. Coker, R. H. et al. The impact of exercise training compared to caloric restriction on hepatic and peripheral insulin resistance in obesity. J. Clin. Endocrinol. Metab. 94, 4258–4266 (2009).
19. Haus, J. M. et al. Free fatty acid-induced hepatic insulin resistance is attenuated following lifestyle intervention in obese individuals with impaired glucose tolerance. J. Clin. Endocrinol. Metab. 95, 323–327 (2010).
20. Winnick, J. J. et al. Short-term aerobic exercise training in obese humans with type 2 diabetes mellitus improves whole-body insulin sensitivity through gains in peripheral, not hepatic insulin sensitivity. J. Clin. Endocrinol. Metab. 93, 771–778 (2008).
21. DeFronzo, R. A., Tobin, J. D. & Andres, R. Glucose clamp technique: a method for quantifying insulin secretion and resistance. Am. J. Physiol. 237, E214–E223 (1979).
22. Kadish, A. H., Litle, R. L. & Sternberg, J. C. A new and rapid method for the determination of glucose by measurement of the rate of oxygen consumption. Clin. Chem. 14, 116–131 (1968).
23. Lloyd, B., Burrin, J., Smythe, P. & Alberti, K. G. Enzymic fluorometric continuous-flow assays for blood glucose, lactate, pyruvate, alanine, glycerol, and 3-hydroxybutyrate. Clin. Chem. 24, 1724–1729 (1978).
24. Hansen, I. L., Cryer, P. E. & Rizza, R. A. Comparison of insulin-mediated and glucose-mediated glucose disposal in patients with insulin-dependent diabetes mellitus and in nondiabetic subjects. Diabetes 34, 751–755 (1985).
25. Basu, R., Schwenk, W. F. & Rizza, R. A. Both fasting glucose production and disappearance are abnormal in people with “mild” and “severe” type 2 diabetes. Am. J. Physiol. Endocrinol. Metab. 287, E55–E62 (2004).
26. Basu, R., Chandramouli, V., Dicke, B., Landau, B. & Rizza, R. Obesity and type 2 diabetes impair insulin-induced suppression of glycogenolysis as well as gluconeogenesis. Diabetes 54, 1942–1948 (2005).
27. Barzilai, N. & Rossetti, L. Role of glucokinase and glucose-6-phosphatase in the acute and chronic regulation of hepatic glucose fluxes by insulin. J. Biol. Chem. 268, 25019–25025 (1993).
28. Hawkins, M. et al. Fructose improves the ability of hyperglycemia per se to regulate glucose production in type 2 diabetes. Diabetes 51, 606–614 (2002).
29. O’Brien, R. M. & Granner, D. K. Regulation of gene expression by insulin. Physiol. Rev. 76, 1109–1161 (1996).
30. Winnick, J. J. et al. A physiological increase in the hepatic glycogen level does not affect the response of net hepatic glucose uptake to insulin. Am. J. Physiol. Endocrinol. Metab. 297, E358–E366 (2009).
31. Myers, S. R., McGuinness, O. P., Neal, D. W. & Cherrington, A. D. Intraportal glucose delivery alters the relationship between net hepatic glucose uptake and the insulin concentration. J. Clin. Invest. 87, 930–939 (1991).
32. DeFronzo, R. A., Ferrannini, E., Hendler, R., Wahren, J. & Felig, P. Influence of hyperinsulinemia, hyperglycemia, and the route of glucose administration on splanchnic glucose exchange. Proc. Natl Acad. Sci. USA 75, 5173–5177 (1978).
33. Pagliassotti, M. J., Holste, L. C., Moore, M. C., Neal, D. W. & Cherrington, A. D. Comparison of the time courses of insulin and the portal signal on hepatic glucose and glycogen metabolism in the dog. J. Clin. Invest. 97, 81–91 (1996).
34. Winnick, J. J. et al. Hepatic glycogen supercompensation activates AMP-activated protein kinase, impairs insulin signaling, and reduces glycogen deposition in the liver. Diabetes 60, 398–407 (2011).
35. Maehlum, S., Felig, P. & Wahren, J. Splanchnic glucose and muscle glycogen metabolism after glucose feeding during postexercise recovery. Am. J. Physiol. 235, E255–E260 (1978).
36. Galassetti, P., Coker, R. H., Lacy, D. B., Cherrington, A. D. & Wasserman, D. H. Prior exercise increases net hepatic glucose uptake during a glucose load. Am. J. Physiol. 276, E1022–E1029 (1999).
37. Hamilton, K. S. et al. Effect of prior exercise on the partitioning of an intestinal glucose load between splanchnic bed and skeletal muscle. J. Clin. Invest. 98, 125–135 (1996).
38. Knudsen, J. G. et al. Skeletal muscle IL-6 regulates muscle substrate utilization and adipose tissue metabolism during recovery from an acute bout of exercise. PLoS ONE 12, e0189301 (2017).
39. Kowalski, G. M., Moore, S. M., Hamley, S., Selathurai, A. & Bruce, C. R. The effect of ingested glucose dose on the suppression of endogenous glucose production in humans. Diabetes 66, 2400–2406 (2017).
40. Alzaid, A. A. et al. Assessment of insulin action and glucose effectiveness in diabetic and nondiabetic humans. J. Clin. Invest. 94, 2341–2348 (1994).

## Acknowledgements

We wish to thank Jon Hastings and Marta Smith for their assistance with some of the biochemical assays and sample preparations and the staff of the Vanderbilt Diet, Body Composition and Human Metabolism Core. We also wish to thank Alan Cherrington, Naji Abumrad, and David Wasserman (Vanderbilt University) for their valuable input as this work was performed. Portions of these data were presented at the 2018 European Association for the Study of Diabetes Conference (Berlin, Germany).

### Funding

NIDDK career development award to J.J.W. (K01-DK-093799), the Vanderbilt Diabetes Research and Training Center (DK-20593) and the Vanderbilt Institute for Clinical and Translational Research (UL1-TR-000445).
Hormone and acetaminophen assays were performed by the Vanderbilt University Medical Center Hormone Assay and Analytical Services Core, which is supported by NIH grants DK-059637 and DK-020593. J.M.G. was funded by F32-DK-1000114-01A1 and K12HD087023. B.G.E. was funded by an NHLBI career development award (K23-HL-122143-01A). Clinical trial registration: NCT01783275.

## Author information

Correspondence to Jason J. Winnick.

## Ethics declarations

### Conflict of interest

The authors declare that they have no conflict of interest.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
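To make the SGE/SGU bookkeeping described in the Calculations section concrete, the following is a minimal sketch of the interval-by-interval sum. It is not the authors' code; the function name, argument names, and the example numbers are all hypothetical.

```python
def splanchnic_glucose_escape(gir_avg, gir_intervals, glucose_samples, kg, dt=10.0):
    """Sum SGE (mg) over successive 10-min intervals of the OGL period.

    gir_avg         -- average of the pre-OGL and end-of-study GIR (mg/kg/min)
    gir_intervals   -- average exogenous GIR for each 10-min interval (mg/kg/min)
    glucose_samples -- plasma glucose (mg/dL) at the interval endpoints;
                       one element longer than gir_intervals
    """
    sge = 0.0
    for i, gir_t in enumerate(gir_intervals):
        glu = 0.5 * (glucose_samples[i] + glucose_samples[i + 1])
        # fall in the exogenous GIR relative to its steady-state average
        sge += (gir_avg - gir_t) * kg * dt
        # sliding-scale correction for glucose-driven peripheral uptake
        # (0.0425 mg/kg/min per mg/dL above 95, scaled by GIRavg/7.5)
        sge += 0.0425 * (gir_avg / 7.5) * (glu - 95.0) * kg * dt
    return sge

# hypothetical 100-kg subject, three intervals shown for brevity
sge_mg = splanchnic_glucose_escape(
    gir_avg=5.0, gir_intervals=[3.0, 2.5, 3.5],
    glucose_samples=[95, 120, 130, 110], kg=100.0)
sgu_g = 75.0 - sge_mg / 1000.0  # SGU = 75 g ingested minus SGE
print(f"SGE = {sge_mg / 1000:.1f} g, SGU = {sgu_g:.1f} g")
```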
Run External Commands The Run menu allows you to run arbitrary external commands from inside Notepad++, and to save the commands into new entries in the Run menu and even to assign keyboard shortcuts to those saved commands. Because of the variable syntax (defined in the Configuration Files Details > User Defined Commands section of this manual), you can even use the filename or the current selected text or similar as arguments to the programs. That section of the manual also describes the underlying format of how saved Run-menu commands are stored in the shortcuts.xml config file. Dialog The Run > Run… menu entry launches the Run… dialog, which is the way to run a new command. You can type any command that you could type from the Windows OS Run dialog (Win+R or Start Menu > Run). If you prefix the command with cmd.exe /c (making sure to use appropriate syntax), it will run the command in an old Windows command prompt window; prefixing with cmd.exe /k will run it in the command prompt window and will keep the window open after the command is done. If you use a valid PowerShell command starting with powershell.exe, it will run the command in the PowerShell environment. The Program to Run entry field allows you to type the command to run. If the command is not in your PATH, you will need to use c:\full\path\to\application.exe. If you need a path that has spaces in it, make sure to use quotes around the path, like "c:\program files\myapp\myapplication.exe" "d:\some other\file name\as argument.txt" (following cmd.exe argument quoting rules). The pulldown for this entry remembers previous commands you’ve run in this instance of Notepad++. The ... button allows you to browse for the program executable to run. The Run button actually runs the command. The Save… button brings up a dialog to save the command you’ve typed as a named entry for the Run menu. You can also assign the saved command a keyboard shortcut for use inside Notepad++ in that sub-dialog. OK will save the command (with the shortcut you defined); Cancel will abort the process, so the command doesn’t get saved. The Cancel button (or the upper right X in the dialog) will allow you to exit the dialog without running the command. This is useful if you’ve changed your mind about running the external process, or if you just wanted to Save the command but not immediately run it. Running Saved Commands If you’ve saved commands, they will show up underneath the Run… entry in the Run menu. You can execute those commands by clicking on them. Manage Shortcuts Run > Modify Shortcut / Delete Command will allow you to add or change or delete the shortcut for a command, or remove the command from the Run menu, using the Shortcut Mapper interface. But in short: stay on the Run Commands tab of the Shortcut Mapper to deal with the Run menu entries; Modify will allow you to add or edit a shortcut; Clear will remove the shortcut but leave the command in the menu; Delete will completely remove the command from the menu; Close will exit the Shortcut Mapper dialog. Example Usage Run the Current File with its default association The Windows OS defines default file associations based on the file extension. If you run the command "$(FULL_CURRENT_PATH)" the OS will take the saved contents of the file and run them based on the file extension (so a .bat file would run as normal, a .csv might launch a spreadsheet program, a .py file might run your installed Python interpreter, and so on). 
Additional Browser Options

The Run menu can be an alternative to the View menu’s View Current File In… submenu: if you don’t like the browser selection available, you can use the "$(FULL_CURRENT_PATH)" shown earlier to run your .html file with your default browser (whatever browser Windows uses when you double-click a .html file). If you want even more options, define additional commands like "c:\program files\alternate browser\browser.exe" "$(FULL_CURRENT_PATH)" to load the active file in whatever browser you want.

Enter Fancy Characters

If you don’t have a fancy Unicode keyboard, or if you want to be able to enter emojis, you could save a Run-menu command that runs charmap.exe to launch Windows’ built-in Character Map. Or if you have a super-fancy emoji keyboard app, you could run "c:\program files\super fancy emoji keyboard\emojikeyboard.exe" to launch that keyboard from inside Notepad++.

Run your makefile

If you are editing source code, you might want to have a shortcut that will run make or nmake or gmake or a similar build program in the current directory.

Getting External Help

You could define Run menu entries to allow you to get help. For example, running the command https://en.wikipedia.org/wiki/Special:Search?search=$(CURRENT_WORD) will look up the currently selected word (or the word where your caret is if you don’t have a selection) in Wikipedia using your default browser. Or running https://docs.python.org/3/search.html?check_keywords=yes&q=$(CURRENT_WORD) will look up the current word in the Python 3 online documentation.

A Filename in Your Text

If your cursor or selection is on the name of a file in your text, you can open that file in the current Notepad++ instance by running: "$(NPP_DIRECTORY)\notepad++.exe" "$(CURRENT_WORD)" Or, if you want to force it to open in a new instance, use: "$(NPP_DIRECTORY)\notepad++.exe" "$(CURRENT_WORD)" -nosession -multiInst

You Are Limited Only By Your Imagination

If you can run it from cmd.exe or powershell.exe in a single command line, you can run it from here. Let your imagination run wild.

Going Beyond Single-Line Commands

If a single-line command isn’t sufficient for your needs, you may want to consider one of the following options:

1. Write a batch file: writing a Windows .bat or .cmd or .ps1 file using cmd.exe or PowerShell syntax will allow you to specify a group of commands; you can then just use Run > Run… to call that batch file to do your more complicated task. You can pass any of the special variables as arguments to this batch file (a minimal helper-script example, called from the Run menu in just this way, appears at the end of this page).
2. Use the NppExec plugin: this plugin can be installed from the Plugins > Plugins Admin interface, and gives you access to a custom batch language, but with extended features that give you access to all the special variables plus extra access to the internals of the Notepad++ interface. This also allows you to view the output of commands in an embedded interactive console window that can be docked in Notepad++. (Does not use the Run menu.)
3. Use PythonScript or LuaScript or jN Notepad++ plugins (or similar) to write a script in your favorite programming language (and all those languages should give you access to any feature of that language, plus a way to access applications that live on your filesystem). These may provide interactive console windows to give you even more flexibility. (Does not use the Run menu.)

Security is Your Responsibility

This menu allows you to run arbitrary programs from your PC. Please ensure that you use paths to known safe applications.
Understand that you are solely responsible for what happens when you run an external application from Notepad++, and the developers of and contributors to Notepad++ are not responsible for or liable for external commands or the results of running them.
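A Minimal Helper-Script Example

As a concrete illustration of option 1 above, here is a hedged sketch of a small Python helper you could call from the Run menu; the path C:\tools\word_count.py and the script itself are hypothetical examples, not part of Notepad++. A Run-menu command such as cmd.exe /k python "C:\tools\word_count.py" "$(FULL_CURRENT_PATH)" would invoke it: Notepad++ substitutes $(FULL_CURRENT_PATH) before the command runs, and /k keeps the console open so you can read the output.

```python
# word_count.py: print a line/word count for the file Notepad++ passes in.
import sys

def main() -> None:
    path = sys.argv[1]  # the file name supplied by $(FULL_CURRENT_PATH)
    with open(path, encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    lines = len(text.splitlines())
    words = len(text.split())
    print(f"{path}: {lines} lines, {words} words")

if __name__ == "__main__":
    main()
```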
The same heat transfer into identical masses of different substances produces different temperature changes. Calculate the final temperature when 1.00 kcal of heat transfers into 1.00 kg of the following, originally at $20.0^{\circ} \mathrm{C}:$ (a) water; (b) concrete; (c) steel; and (d) mercury.

Answer: (a) $21^{\circ} \mathrm{C}$ (b) $25^{\circ} \mathrm{C}$ (c) $29.3^{\circ} \mathrm{C}$ (d) $50.3^{\circ} \mathrm{C}$

Video Transcript

For this question, we have 1.00 kcal of heat transferred into 1.00 kg of four different substances, each starting at an original temperature of $T_0 = 20.0^{\circ}\mathrm{C}$, and we are looking for the final temperature of each. We use the classic heat equation $Q = mc\,\Delta T$, where $m$ is the mass and $c$ is the specific heat of the material. Since $\Delta T = T_f - T_0$, we can write $Q = mc(T_f - T_0)$; dividing both sides by $mc$ and solving for the final temperature gives $T_f = T_0 + \frac{Q}{mc}$. Because the specific heats are tabulated in calories per gram per degree, we convert 1.00 kcal to 1000 cal and 1.00 kg to 1000 g, so that $\frac{Q}{mc} = \frac{1}{c}\,^{\circ}\mathrm{C}$ with $c$ in $\mathrm{cal/(g\cdot{}^{\circ}C)}$.

(a) Water: $c = 1.00\ \mathrm{cal/(g\cdot{}^{\circ}C)}$, so $T_f = 20^{\circ}\mathrm{C} + 1^{\circ}\mathrm{C} = 21^{\circ}\mathrm{C}$.

(b) Concrete: $c = 0.20\ \mathrm{cal/(g\cdot{}^{\circ}C)}$, so $T_f = 20^{\circ}\mathrm{C} + \frac{1}{0.20}\,^{\circ}\mathrm{C} = 25^{\circ}\mathrm{C}$.

(c) Steel: $c \approx 0.108\ \mathrm{cal/(g\cdot{}^{\circ}C)}$, so $T_f = 20^{\circ}\mathrm{C} + \frac{1}{0.108}\,^{\circ}\mathrm{C} \approx 29.3^{\circ}\mathrm{C}$.

(d) Mercury: $c = 0.033\ \mathrm{cal/(g\cdot{}^{\circ}C)}$, so $T_f = 20^{\circ}\mathrm{C} + \frac{1}{0.033}\,^{\circ}\mathrm{C} \approx 50.3^{\circ}\mathrm{C}$.

Those are the four answers for parts (a) through (d).
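For readers who want to verify the arithmetic, here is a small sketch that evaluates $T_f = T_0 + Q/(mc)$ for all four substances; the specific heats are the ones assumed in the solution above, quoted in cal/(g·°C).

```python
# Final temperature after adding Q = 1.00 kcal to m = 1.00 kg, from Q = m c (Tf - T0)
Q = 1000.0   # heat added, cal
m = 1000.0   # mass, g
T0 = 20.0    # initial temperature, degrees C

for name, c in [("water", 1.00), ("concrete", 0.20),
                ("steel", 0.108), ("mercury", 0.033)]:
    Tf = T0 + Q / (m * c)
    print(f"{name}: Tf = {Tf:.1f} C")
# prints 21.0, 25.0, 29.3, and 50.3 degrees C, matching answers (a)-(d)
```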
# How do I compare student pre-test scores with post-test scores to evaluate whether or not they "learned"?

We need to track student advancement in a topic based on pre- and post-test scores. That is, we give a pre-test on day 1 of class, then on the last day we give the exact same test, renamed as a post-test. Somehow we have to take the first score and assign a target number for the second score to show improvement. This second number has to be higher than the first number. Students who hit this number, or above, on the post-test are considered a success. Students who do not are counted as a failure. Based on the percent of students who are a "success", we are evaluated on how well the students learned.

My problem is trying to figure out how to calculate how well the students did, or did not, improve. My first thought was simply to take the difference in scores and calculate a percentage change. But how do I then measure student success? A 50% change in their score? This seems somewhat easy for people who did rather poorly on their pre-test, but it seems to penalize those who did rather well. That is, jumping from 0% to 50% is great, but jumping from 75% to 85% does not seem as good if we just measure percent change. The first student would be a "success" even while failing the course, but the second student would be considered a "failure" even while passing the course rather well.

If it helps, here are the pre-test scores for one class: 20 40 44 12 48 32 24 44 28 0 36 40 and pre-test scores from another... 76 40 40 32 60 64 68 48 36 72 56 20 24 36 52 The exact same method must be used in each class to show "success" versus "failure".

• Such questions are better asked at SE stats (CV): stats.stackexchange.com Jan 22 '15 at 20:18

With educational research, unless you have a very well-calibrated test, the quantitative differences (i.e., magnitudes) are not overly informative. However, a useful measure is the fraction of students that "didn't fail". Define your "success variable" ($S$) as follows:

1. For each student $i$, let their pre-test score be $I_i$ and their post-test score be $P_i$.
2. Assign each student a success variable $S_i$ which equals $1$ if $P_i > I_i$ and $0$ otherwise.
3. The success rate for a class of $N$ students is simply $\frac{\sum S_i}{N}$
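A minimal sketch of the success metric from the answer above: compute $S_i$ for each student and average. The post-test scores here are made-up placeholders, since the question only lists pre-test scores.

```python
def success_rate(pre, post):
    """Fraction of students whose post-test score beats their pre-test score."""
    if len(pre) != len(post):
        raise ValueError("need one post-test score per pre-test score")
    s = [1 if after > before else 0 for before, after in zip(pre, post)]
    return sum(s) / len(s)

pre = [20, 40, 44, 12, 48, 32, 24, 44, 28, 0, 36, 40]    # first class, from the question
post = [35, 55, 50, 30, 60, 40, 30, 55, 45, 20, 30, 52]  # hypothetical post-test scores
print(f"success rate: {success_rate(pre, post):.0%}")
```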
# An Inequality in Triangle III

### Problem

Prove that in $\Delta ABC,\;$ with angles $A,B,C\;$ and side lengths $a,b,c,\;$ the following inequality holds:

$\displaystyle\frac{a(b+c)}{bc\cdot\cos^2\frac{A}{2}}+\frac{b(c+a)}{ca\cdot\cos^2\frac{B}{2}}+\frac{c(a+b)}{ab\cdot\cos^2\frac{C}{2}}\ge 8.$

### Solution 1

WLOG, assume $a\ge b\ge c.\;$ Then $ab+ac\ge ab+bc\ge ac+bc;\;$ also, $\displaystyle\frac{1}{bc}\ge\frac{1}{ca}\ge\frac{1}{ab}.\;$ From these, $\displaystyle\frac{ab+ac}{bc}\ge\frac{ab+bc}{ac}\ge\frac{bc+ac}{ab}.\;$ On the other hand, $\displaystyle\frac{1}{\cos^2 x}=1+\tan^2x\;$ and the tangent function is strictly increasing and positive on $\displaystyle\left(0,\frac{\pi}{2}\right),\;$ hence $\displaystyle 1+\tan^2\frac{A}{2}\ge 1+\tan^2\frac{B}{2}\ge 1+\tan^2\frac{C}{2}.\;$ Since the two sequences are similarly ordered, we can now apply Chebyshev's inequality to get

$\displaystyle\sum_{cyc}\frac{ab+ac}{\displaystyle bc\cdot\cos^2\frac{A}{2}}\ge\frac{1}{3}\left(\frac{ab+ac}{bc}+\frac{ab+bc}{ac}+\frac{bc+ac}{ab}\right)\left(3+\tan^2\frac{A}{2}+\tan^2\frac{B}{2}+\tan^2\frac{C}{2}\right).$

Suffice it to prove that

$\displaystyle\frac{1}{3}\left(\frac{ab+ac}{bc}+\frac{ab+bc}{ac}+\frac{bc+ac}{ab}\right)\left(3+\tan^2\frac{A}{2}+\tan^2\frac{B}{2}+\tan^2\frac{C}{2}\right)\ge 8.$

But obviously, by the AM-GM inequality, $\displaystyle\frac{ab+ac}{bc}+\frac{ab+bc}{ac}+\frac{bc+ac}{ab}\ge 6.\;$ On the other hand,

\displaystyle\begin{align} 3+\tan^2\frac{A}{2}+\tan^2\frac{B}{2}+\tan^2\frac{C}{2} &\ge 3+\frac{1}{3}\left(\tan\frac{A}{2}+\tan\frac{B}{2}+\tan\frac{C}{2}\right)^2\\ &\ge 3 +1=4. \end{align}

Multiplying the two estimates, the left-hand side is at least $\displaystyle\frac{1}{3}\cdot 6\cdot 4=8,$ which completes the proof.

### Solution 2

From the half-angle formula and the Law of Cosines, $\displaystyle\cos^2\frac{A}{2}=\frac{1+\cos A}{2}=\frac{(b+c)^2-a^2}{4bc},$ and similarly for the other two angles. Thus the inequality at hand is equivalent to

$\displaystyle\frac{a(b+c)}{(b+c)^2-a^2}+\frac{b(c+a)}{(c+a)^2-b^2}+\frac{c(a+b)}{(a+b)^2-c^2}\ge 2,$

or,

$\displaystyle\frac{a(b+c)}{b+c-a}+\frac{b(c+a)}{c+a-b}+\frac{c(a+b)}{a+b-c}\ge 2(a+b+c),$

or else,

$\displaystyle\frac{a^2}{b+c-a}+\frac{b^2}{c+a-b}+\frac{c^2}{a+b-c}\ge a+b+c.$

Now, by the Cauchy-Schwarz inequality, for any $x,y,z\gt 0,$

$\displaystyle\frac{a^2}{x}+\frac{b^2}{y}+\frac{c^2}{z}\ge \frac{(a+b+c)^2}{x+y+z},$

which, with $x=b+c-a,\;$ $y=c+a-b,\;$ $z=a+b-c\;$ gives

$\displaystyle\frac{a^2}{b+c-a}+\frac{b^2}{c+a-b}+\frac{c^2}{a+b-c}\ge \frac{(a+b+c)^2}{a+b+c}=a+b+c.$

### Solution 3

We know that $\displaystyle\cos^2\frac{A}{2}=\frac{p(p-a)}{bc},\;$ where $p=(a+b+c)/2\;$ is the semiperimeter of $\Delta ABC.\;$ Thus, the required inequality can be rewritten as

$\displaystyle\frac{a(b+c)}{p(p-a)}+\frac{b(c+a)}{p(p-b)}+\frac{c(b+a)}{p(p-c)}\ge 8.$

Now observe that $\displaystyle\frac{a(b+c)}{p(p-a)}=\frac{2a}{p}+\frac{a^2}{p(p-a)},\;$ and similarly for the other two fractions. Since $\displaystyle\sum_{cyc}\frac{2a}{p}=4,\;$ the required inequality reduces to

$\displaystyle\frac{a^2}{p(p-a)}+\frac{b^2}{p(p-b)}+\frac{c^2}{p(p-c)}\ge 4.$

Consider the function $\displaystyle f(x)=\frac{x^2}{p-x}.\;$ $\displaystyle f'(x)=\frac{x(2p-x)}{(p-x)^2}\gt 0\;$ and $\displaystyle f''(x)=\frac{2p^2}{(p-x)^3}\gt 0\;$ for $x\in (0,p).\;$ Thus the function is increasing and convex on $(0,p).\;$ Keeping $p\;$ fixed, we may apply Jensen's inequality:

\displaystyle\begin{align} \frac{a^2}{p(p-a)}+\frac{b^2}{p(p-b)}+\frac{c^2}{p(p-c)}&\ge 3\frac{\displaystyle\left(\frac{a+b+c}{3}\right)^2}{p\left(p-\displaystyle\frac{a+b+c}{3}\right)}\\ &=3\frac{\displaystyle\left(\frac{2p}{3}\right)^2}{p\left(p-\displaystyle\frac{2p}{3}\right)}\\ &=3\cdot\frac{4}{9}\cdot\frac{3}{1}\\ &= 4. \end{align}

### Solution 4

\displaystyle \begin{align} \sum_{cycl}\frac{a(b+c)}{\displaystyle bc\cdot\cos^2\frac{A}{2}}&=\sum_{cycl}\frac{a(b+c)}{\displaystyle bc\cdot\frac{p(p-a)}{bc}}\\ &=\sum_{cycl}\frac{a(b+c)}{p(p-a)}=\sum_{cycl}\frac{a(2p-a)(p-b)(p-c)}{p(p-a)(p-b)(p-c)}\\ &=\frac{4R}{r}\ge 8, \end{align}

due to Euler's inequality $R\ge 2r.$

### Acknowledgment

The problem from the Math Phenomenon has been posted at the CutTheKnotMath facebook page by Dan Sitaru, along with a solution (Solution 1) by Leo Giugiuc and Dan Sitaru. Solution 2 is by Kunihiko Chikaya; Solution 4 is by Marin Chirciu.
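The inequality can also be checked numerically. The sketch below samples random triangles (side lengths chosen to satisfy the triangle inequality) and confirms that the cyclic sum, written via the half-angle identity $\cos^2\frac{A}{2}=\frac{p(p-a)}{bc}$ as in Solution 3, never drops below $8.$

```python
import random

def cyclic_sum(a, b, c):
    # a(b+c)/(bc*cos^2(A/2)) = a(b+c)/(p(p-a)), with p the semiperimeter
    p = (a + b + c) / 2
    return sum(x * (y + z) / (p * (p - x))
               for x, y, z in [(a, b, c), (b, c, a), (c, a, b)])

print(cyclic_sum(1, 1, 1))  # equilateral triangle: exactly 8

random.seed(0)
worst = float("inf")
for _ in range(100_000):
    a, b = random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)
    c = random.uniform(abs(a - b) + 1e-6, a + b - 1e-6)  # valid third side
    worst = min(worst, cyclic_sum(a, b, c))
print(worst)  # stays above 8; 8 is approached only near the equilateral case
```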
Dynamic Programming - Fordham University 2005-11-13آ  Dynamic Programming 163 11 Dynamic Programming • View 8 0 Embed Size (px) Text of Dynamic Programming - Fordham University 2005-11-13آ  Dynamic Programming 163 11 Dynamic... • Dynamic Programming 163 11 Dynamic Programming The Bellman Equation In Chapter 10 we studied optimization over time. The constraints (10.1) expressing increments to stocks or state variables, and the additively separable form (10.3) of the objective function, were spe- cial features that enabled us to express the first-order conditions in a useful special form, namely as the Maximum Principle. Dynamic Programming is an alternative way of solving the same problem. It proves especially useful when time and uncertainty appear to- gether, as they so often do in reality. Let us begin with time to keep the exposition simple, and introduce uncertainty later. The vectors of initial stocks yo and terminal stocks y,+~ were given when we maxirnised subject to the constraints and G ( ~ t , z t , t ) < 0, (10.2) for t = 0, 1 . . . T. Keep the terminal stock requirement fixed for now. As in Chapter 5, we can define the resulting maximum value 1 as a function of the initial stocks, say V(y0). The vector of deriva- tives Vy(yO) will be the vector of the shadow prices of these initial stocks. The separability of the objective and the constraints allows us to generalize this greatly. Instead of the starting time 0, consider another particular time, say t = r . For the decisions starting at I r, the only thing that matters about the past is the vector y, of ; stocks that emerges from the past decisions. We can take that ' as parametric, and start the whole problem afresh at T. In other words, we maximize a sum just like (10.3), but extending only from T to T, subject to constraints just like (10.1) and (10.2), but holding only for T, T + 1, . . . T. Let V(y,, T) be the maximum value function of this problem; the explicit argument T is necessary because the limit of the summation depends on it. The vector of derivatives Vy ( y,, T) is the marginal increase in the maximized sum when we start with a small increment to the initial stocks y, at T , that is, the vector of shadow prices of the initial stocks for the optimization problem that starts at r. What happens when we embed the sub-~roblem starting at T into the full problem starting at O? In Chapter 10 we could interpret the Lagrange multiplier on the constraint (10.1) for T as the shadow price vector n,+l of stocks at ( ~ + 1 ) . A slight relaxation of this constraint meant an exogenous increase in y,+l, and the multiplier told us the resultant increase in the objective function (10.3). At first sight this differs from the vector of derivatives 17y(~r+1, T + 1) for the sub-problem starting at (T + 1). In the full problem, we know at time 0 that the stocks at (T + 1) are going to increase a little. Then we can plan ahead and change the control variables at earlier dates. For example, if we know at time 0 that a windfall of wealth is going to occur at (T + I) , we will consume more in anticipation as well as after the realization. But the Envelope Theorem comes to the rescue. For the small changes that are involved when we look at first-order derivatives, the direct effect on the objective function is all that counts; the induced effect of optimal readjustments in the choice variables can be ignored. Therefore we can indeed identify the derivatives Vy with the shadow prices n at all times. 
Now pick any t, and consider the decision about the control variables zt at that time. Consider the consequences of any partic- ular choice of zt. It will lead to next period's stocks yt+l according to (10.1). Thereafter it remains to solve the subproblem starting at (t + I) , and achieve the maximum value V ( Y ~ + ~ , t + 1). Then the total value starting with yt at t can be broken down into two terms: F(yt , zt, t) that accrues at once, and V(yt+t, t + 1) that accrues thereafter. The choice of zl should maximize the sum of these two terms. In other words, ~ ( ~ ~ , t ) = max { F(yt,zt , t) + V(yt+l,t + 1) 1, (11.1) +t , • 164 Optimization in Economic Theory Dynamic Programming 165 subject to the constraints (10.1) and (10.2) for just this one t. This equation gives us a brute-force way of solving the original optimization problem. The idea is to start at the end and proceed to earlier times recursively. At time T there is no future, only the fixed terminal stock requirement y T + ~ . Therefore subject to This is in principle a straightforward static optimization problem, and yields the maximum value function V(yT, T). That can then be used on the right-hand side in (11.1) for t = T - 1. This is another static problem, and yields the maximum value function V(yT-, , T - 1). And so on all the way back to 0. In practice this works only for the simplest problems. Analytical solutions of this kind are possible only when the functions F, G, and Q have very simple forms. Numerical solutions can be computed for somewhat harder problems, but if the state variables form a vector of more than two dimensions, even that quickly becomes unmanageable. Luckily, the brute-force method is only a backstop. In many economic applications, there are better methods to find the solution, or at least obtain useful insights about it. This method of optimization over time as a succession of static programming problems was pioneered by Richard Bellman, and named Dynamic Programming. The idea that whatever the de- cision at t, the subsequent decisions should proceed optimally for the subproblem starting at (t + 1) is known as Bellman's Principle of Optimality. The maximum value function V(yt, t) is called the Bellman value function, and equation (1 1.1) the Bellman equation. Let us look at the maximization problem on the right-hand side of the Bellman equation. Substituting for yt+l from (10.1), we are to choose zt to maximize subject to G(Yt ,W) 1 0 . Letting At denote the row vector of the multipliers on the con- straints, the &st-order conditions are Recognizing the derivatives Vy as the shadow prices n, this becomes These are exactly the first-order conditions for zt to maximize the Harniltonian H(yt, zt, nt+l, t) defined in Chapter 10, subject to the single-period constraints (10.2) as there. Thus Dyanamic Program- ming leads to the same rule for setting the choice variables as the Maximum Principle. In fact the Maximum Principle and Dynamic Programming are fully equivalent alternative methods for optimization over time. You should use whichever is simpler for tackling the particular problem at hand. The Maximum Principle is generally better when time is continuous and there is no uncertainty; Dynamic Program- ming in the opposite case of discrete time and uncertainty. But that is not a hard and fast rule. Later in this chapter I shall illustrate the use of Dynamic Programming in some economic applications. 
To conclude this section I use it to establish the intertemporal arbitrage equation (10.13) in a different way. When $z_t$ is chosen optimally, (11.1) holds with equality, that is,

$$V(y_t, t) = F(y_t, z_t, t) + V(y_{t+1}, t+1).$$

Differentiate this with respect to $y_t$, noting that $y_{t+1}$ depends on $y_t$, and using the Envelope Theorem on the right-hand side. Then

$$V_y(y_t, t) = F_y(y_t, z_t, t) + V_y(y_{t+1}, t+1)\,\bigl[\,I + Q_y(y_t, z_t, t)\,\bigr].$$

Using the shadow prices $\pi$, this becomes (10.13).

## Uncertainty

Dynamic Programming is particularly well suited to optimization problems that combine time and uncertainty. Suppose that the process governing the evolution of stocks $y_t$ through time has a random component. Given the stocks $y_t$ at the beginning of period $t$, and the controls $z_t$ during the period, we know only the probability density function of next period's stocks $y_{t+1}$. Write this as $\phi(y_{t+1};\, y_t, z_t)$. The arguments are separated out for notational clarity. The first argument $y_{t+1}$ is the actual vector of random variables whose probability density function this is; the others are like parameters that can alter the functional form of the distribution. As a simple example, $y_{t+1}$ could have a vector normal distribution with a mean vector $\mu$ and a variance-covariance matrix $\Sigma$, both of which depend on $(y_t, z_t)$. As an even more special case, $\mu$ could equal $y_t + Q(y_t, z_t, t)$, the value of $y_{t+1}$ in the previous discussion without uncertainty.

Now the problem is to maximize the mathematical expectation of (10.3), subject to (10.2) for all $t$, and (10.1) replaced by the stochastic law of motion for $y_{t+1}$ described by the function $\phi$. Write $V(y_t, t)$ for the maximum value function of the subproblem starting at $t$. For the moment fix the choice of $z_t$. Consider what happens after the actual value of $y_{t+1}$ becomes known at the beginning of period $(t+1)$. The rest of the decisions will be made optimally, and yield $V(y_{t+1}, t+1)$. From our perspective of period $t$, this is still a random variable, and we are concerned about its mathematical expectation,

$$E[V(y_{t+1}, t+1)] = \int V(y_{t+1}, t+1)\,\phi(y_{t+1};\, y_t, z_t)\, dy_{t+1}, \tag{11.2}$$

where the integral is taken over the range over which $y_{t+1}$ is distributed. Then the Principle of Optimality becomes

$$V(y_t, t) = \max_{z_t} \bigl\{\, F(y_t, z_t, t) + E[V(y_{t+1}, t+1)] \,\bigr\}. \tag{11.3}$$

The maximization on the right-hand side of (11.3) is somewhat more difficult than the corresponding certainty case (11.1). The first-order condition with respect to $z_t$ requires differentiation of $\phi$ with respect to $z_t$ inside the integral, and the results of that can be hard to characterize and understand. But in principle (11.3) allows us to start at $T$ and solve the problem recursively backward to 0 just as before. In simple but useful models, the solution can be completed analytically. At the end of the chapter, I develop two examples of Dynamic Programming under uncertainty.
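Numerically, the only change relative to the deterministic sketch above is that the continuation value is replaced by the expectation (11.2); a common device is to replace the density $\phi$ by a small set of discrete shocks with probabilities. A minimal illustration (again my own, with assumed shock values, not from the text):

```
import numpy as np

# Assumed toy problem: y' = y - z + shock, reward log(z), finite horizon.
T = 5
grid = np.linspace(0.01, 1.5, 300)
shocks = np.array([-0.05, 0.0, 0.05])   # discretized random component
probs = np.array([0.25, 0.5, 0.25])     # their probabilities (sum to 1)
V = np.zeros(len(grid))                 # terminal value V(., T+1) = 0

def interp_V(V, y):
    # Linear interpolation of the value function, clamped to the grid.
    return np.interp(np.clip(y, grid[0], grid[-1]), grid, V)

for t in range(T, -1, -1):
    V_new = np.empty(len(grid))
    for i, y in enumerate(grid):
        z = np.linspace(0.001, max(y - 0.001, 0.001), 50)  # candidate controls
        # E[V(y', t+1)] as in (11.2), the integral replaced by a finite sum
        EV = sum(p * interp_V(V, y - z + s) for p, s in zip(probs, shocks))
        V_new[i] = np.max(np.log(z) + EV)                  # equation (11.3)
    V = V_new

print(V[-1])
```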
{}
# The effect of global travel on the spread of SARS

The goal of this paper is to study the global spread of SARS. We propose a multiregional compartmental model using medical geography theory (central place theory) and regarding each outbreak zone (such as Hong Kong, Singapore, Toronto, and Beijing) as one region. We then study the effect of the travel of individuals (especially the infected and exposed ones) between regions on the global spread of the disease.

Mathematics Subject Classification: 92D30.
{}
# Prime - number theory [duplicate]

Why is the number 1 not a prime number? 1 can be divided by 1 and itself. I think it's because we can express it like $1 = 1\times 1\times 1 \times \cdots$. Is that true or not?

• Simply because, by definition, a prime number must be larger than $1$ or smaller than $-1$. – user164524 Apr 15 '15 at 20:41
• Also, notice that if $1$ were a prime number, then the fundamental theorem of arithmetic wouldn't give a unique prime factorization, since $1^n = 1$ for any $n\in\mathbb{Z}$. – noobProgrammer Apr 15 '15 at 20:42
• It is not prime, to justify multiple theorems and results in number theory. The fundamental theorem of arithmetic especially. @noobProgrammer beat me to it. – Addison Apr 15 '15 at 20:43
• Prime numbers have four divisors, 1 has only two. – mvw Apr 15 '15 at 20:44
• @mvw Apparently, you also allow negative divisors. – Peter Aug 15 '18 at 13:58
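To make the uniqueness point in the comments concrete, here is a one-line worked example (my own illustration, not part of the original exchange): if $1$ were admitted as a prime, then

$$6 = 2 \cdot 3 = 1 \cdot 2 \cdot 3 = 1^{2} \cdot 2 \cdot 3 = \cdots$$

would all be distinct "prime factorizations" of $6$. This is why the fundamental theorem of arithmetic is stated for primes greater than $1$.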
{}
# Non-diffusive atmospheric flow #12: dynamics warm-up

The analysis of preferred flow regimes in the previous article is all very well, and in its way quite illuminating, but it was an entirely static analysis: we didn't make any use of the fact that the original $Z_{500}$ data we used was a time series, so we couldn't gain any information about transitions between different states of atmospheric flow. We'll attempt to remedy that situation now.

What sort of approach can we use to look at the dynamics of changes in patterns of $Z_{500}$? Our $(\theta, \phi)$ parameterisation of flow patterns seems like a good start, but we need some way to model transitions between different flow states, i.e. between different points on the $(\theta, \phi)$ sphere. Each of our original $Z_{500}$ maps corresponds to a point on this sphere, so we might hope that we can come up with a way of looking at trajectories of points in $(\theta, \phi)$ space that will give us some insight into the dynamics of atmospheric flow. Since atmospheric flow clearly has some stochastic element to it, a natural approach to take is to try to use some sort of Markov process to model transitions between flow states.

Let me give a very quick overview of how we're going to do this before getting into the details. In brief, we partition our $(\theta, \phi)$ phase space into $P$ components, assign each $Z_{500}$ pattern in our time series to a component of the partition, then count transitions between partition components. In this way, we can construct a matrix $M$ with

$$M_{ij} = \frac{N_{i \to j}}{N_{\mathrm{tot}}}$$

where $N_{i \to j}$ is the number of transitions from partition $i$ to partition $j$ and $N_{\mathrm{tot}}$ is the total number of transitions. We can then use this Markov matrix to answer some questions about the type of dynamics that we have in our data: splitting the Markov matrix into its symmetric and antisymmetric components allows us to respectively look at diffusive (or irreversible) and non-diffusive (or conservative) dynamics. Before trying to apply these ideas to our $Z_{500}$ data, we'll look (in the next article) at a very simple Markov matrix calculation by hand to get some understanding of what these concepts really mean.

Before that though, we need to take a look at the temporal structure of the $Z_{500}$ data. In particular, if we're going to model transitions between flow states by a Markov process, we really want uncorrelated samples from the flow, and our daily $Z_{500}$ data is clearly correlated, so we need to do something about that.

## Autocorrelation properties

Let's look at the autocorrelation properties of the PCA projected component time series from our original $Z_{500}$ data. We use the autocorrelation function in the statistics package to calculate and save the autocorrelation for these PCA projected time series. There is one slight wrinkle: because we have multiple winters of data, we want to calculate autocorrelation functions for each winter and average them. We do not want to treat all the data as a single continuous time series, because if we do we'll be treating the jump from the end of one winter to the beginning of the next as "just another day", which would be quite wrong. We'll need to pay attention to this point when we calculate Markov transition matrices too.

Here's the code to calculate the autocorrelation:

```
npcs, nday, nyear :: Int
npcs = 10
nday = 151
nyear = 66

main :: IO ()
main = do
  -- Open projected points data file for input.
  Right innc <- openFile $ workdir </> "z500-pca.nc"
  let Just ntime = ncDimLength <$> ncDim innc "time"
  let (Just projvar) = ncVar innc "proj"
  Right (HMatrix projsin) <-
    getA innc projvar [0, 0] [ntime, npcs] :: HMatrixRet CDouble

  -- Split projections into one-year segments.
  let projsconv = cmap realToFrac projsin :: Matrix Double
      lens = replicate nyear nday
      projs = map (takesV lens) $ toColumns projsconv

  -- Calculate autocorrelation for each one-year segment and average.
  let vsums :: [Vector Double] -> Vector Double
      vsums = foldl1 (SV.zipWith (+))
      fst3 (x, _, _) = x
      doone :: [Vector Double] -> Vector Double
      doone ps = SV.map (/ (fromIntegral nyear)) $
                 vsums $ map (fst3 . autocorrelation) ps
      autocorrs = fromColumns $ map doone projs

  -- Generate output file.
  let outpcdim = NcDim "pc" npcs False
      outpcvar = NcVar "pc" NcInt [outpcdim] M.empty
      outlagdim = NcDim "lag" (nday - 1) False
      outlagvar = NcVar "lag" NcInt [outlagdim] M.empty
      outautovar = NcVar "autocorr" NcDouble [outpcdim, outlagdim] M.empty
      outncinfo = emptyNcInfo (workdir </> "autocorrelation.nc") # ...
  flip (withCreateFile outncinfo) (putStrLn . ("ERROR: " ++) . show) $ \outnc -> do
    -- Write coordinate variable values.
    put outnc outpcvar $
    -- (the listing breaks off here in the source)
```
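To make the transition-counting step described above concrete in a language-neutral way, here is a small Python sketch (my own, with toy inputs) that builds the matrix $M$ from per-winter label sequences, so that the jump between winters is never counted as a transition, and that splits $M$ into the symmetric and antisymmetric parts mentioned earlier:

```
import numpy as np

def markov_matrix(winters, n_components):
    """Estimate M[i, j] = N(i -> j) / N_total from label sequences.

    winters: a list of 1-D integer arrays, one per winter, so that the
    jump from the end of one winter to the start of the next is never
    counted as a transition.
    """
    counts = np.zeros((n_components, n_components))
    for labels in winters:
        for i, j in zip(labels[:-1], labels[1:]):
            counts[i, j] += 1
    return counts / counts.sum()

# Toy example: two short "winters" with 3 partition components.
winters = [np.array([0, 0, 1, 2, 1]), np.array([2, 1, 0, 0])]
M = markov_matrix(winters, 3)
sym = 0.5 * (M + M.T)        # diffusive (irreversible) part
antisym = 0.5 * (M - M.T)    # non-diffusive (conservative) part
print(M, sym, antisym, sep="\n")
```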
{}
# areEqual(DualSpace,DualSpace) -- approximate equality of dual spaces

## Synopsis

• Function: areEqual
• Usage: b = areEqual(A,B)
• Inputs: two dual spaces, A and B
• Optional inputs:
  • Projective => ..., default value false, determine if solutions are equal projectively
  • Tolerance => ..., default value .000001, the tolerance of a numerical computation
• Outputs:
  • b, whether A and B are approximately equal

## Description

Two dual spaces are approximately equal if they have (approximately) the same base point and the linear spaces spanned by their differential operators are approximately equal.

```
i1 : R = CC[x,y];

i2 : A = dualSpace(matrix{{y^2,x^2+x*y}},point{{1,1}})

o2 = | y2 x2+xy |

o2 : DualSpace

i3 : B = dualSpace(matrix{{x^2+x*y+y^2,y^2+0.00000001}},point{{1,1+0.00000001}})

o3 = | x2+xy+y2 y2+1e-8 |

o3 : DualSpace

i4 : b = areEqual(A,B)

o4 = true
```
{}
## Zimin Words and Bifixes

One of the earliest contributions to the On-Line Encyclopedia of Integer Sequences (OEIS) was a family of sequences counting the number of words that begin (or don't begin) with a palindrome:

• Let $$f_k(n)$$ be the number of strings of length $$n$$ over a $$k$$-letter alphabet that begin with a nontrivial palindrome, for various values of $$k$$.
• Let $$g_k(n)$$ be the number of strings of length $$n$$ over a $$k$$-letter alphabet that do not begin with a nontrivial palindrome.
• Number of binary strings of length $$n$$ that begin with an odd-length palindrome. (A254128)

(If I had known better, I would have published fewer sequences in favor of a table, and I would have requested contiguous blocks of A-numbers.)

I must have written some Python code to compute some small terms of this sequence, and I knew that $$g_k(n) = k^n - f_k(n)$$, but I remember being in my friend Q's bedroom when the recursion hit me for $$f_k(n)$$:

$$f_k(n) = kf_k(n-1) + k^{\lceil n/2 \rceil} - f_k\big(\lceil \tfrac n 2 \rceil \big)$$

## "Bifix-free" words

One sequence that I didn't add to the OEIS was the "Number of binary strings of length $$n$$ that begin with an even-length palindrome". That's because this was already in the Encyclopedia under a different name:

A094536: Number of binary words of length n that are not "bifix-free".
0, 0, 2, 4, 10, 20, 44, 88, 182, 364, 740, 1480, 2980, 5960, …

A "bifix" is a shared prefix and suffix, so a "bifix-free" word is one such that all prefixes are different from all suffixes. More concretely, if the word is $$\alpha_1\alpha_2 \dots \alpha_n$$, then $$(\alpha_1, \alpha_2, \dots, \alpha_k) \neq (\alpha_{n-k+1},\alpha_{n-k+2},\dots,\alpha_n)$$ for all $$k \geq 1$$.

The reason why the number of binary words of length $$n$$ that begin with an even-length palindrome is equal to the number of binary words of length $$n$$ that have a bifix is that we have a bijection between the two sets. In particular, find the shortest palindromic prefix, cut it in half, and stick the first half at the end of the word, backward. I've asked for a better bijection on Math Stack Exchange, so if you have any ideas, please share them with me! In 2019–2020, Daniel Gabric and Jeffrey Shallit wrote a closely related paper called Borders, Palindrome Prefixes, and Square Prefixes.

## Zimin words

A Zimin word can be defined recursively, but I think it's most suggestive to see some examples:

• $$Z_1 = A$$
• $$Z_2 = ABA$$
• $$Z_3 = ABACABA$$
• $$Z_4 = ABACABADABACABA$$
• $$Z_n = Z_{n-1} X Z_{n-1}$$

All Zimin words $$Z_n$$ are examples of "unavoidable patterns", because every sufficiently long string with letters in any finite alphabet contains a substring that matches the $$Z_n$$ pattern. For example the word $$0100010010111000100111000111001$$ contains a substring that matches the Zimin word $$Z_3$$. Namely, let $$A = 100$$, $$B = 0$$, and $$C = 1011$$, visualized here with each $$A$$ emboldened: $$0(\mathbf{100}\,0\,\mathbf{100}\,1011\,\mathbf{100}\,0\,\mathbf{100})111000111001$$.

I've written a Ruby script that generates a random string of length 29 and uses a regular expression to find the first instance of a substring matching the pattern $$Z_3 = ABACABA$$. You can run it on TIO, the impressive (and free!) tool from Dennis Mitchell.

```
# Randomly generates a binary string of length 29.
random_string = 29.times.map { [0,1].sample }.join("")
p random_string

# Finds the first Zimin word ABACABA
p random_string.scan(/(.+)(.+)\1(.+)\1\2\1/)[0]
# Pattern:            A   B  A  C  A B A
```
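Before getting to the natural question (why length 29?), here is a quick sanity check of the recursion for $$f_k(n)$$ stated at the top of this post. This is my own sketch, not part of the original post; it compares the recursion against brute-force enumeration for small cases.

```
from functools import lru_cache
from itertools import product
from math import ceil

@lru_cache(maxsize=None)
def f(k, n):
    # Number of length-n words over a k-letter alphabet that begin
    # with a nontrivial palindrome, computed via the recursion above.
    if n < 2:
        return 0
    m = ceil(n / 2)
    return k * f(k, n - 1) + k**m - f(k, m)

def f_brute(k, n):
    # The same count, by exhaustive enumeration (small n only).
    def begins_with_palindrome(w):
        return any(w[:m] == w[:m][::-1] for m in range(2, len(w) + 1))
    return sum(begins_with_palindrome(w) for w in product(range(k), repeat=n))

assert all(f(2, n) == f_brute(2, n) for n in range(1, 15))
print([f(2, n) for n in range(1, 11)])
```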
Why 29? Because all binary words of length 29 contain the pattern $$Z_3 = ABACABA$$. However, Joshua Cooper and Danny Rorabaugh's paper provides 48 words of length 28 that avoid that pattern (the 24 below together with their reversals):

```
1100000010010011011011111100  1100000010010011111101101100
1100000010101100110011111100  1100000010101111110011001100
1100000011001100101011111100  1100000011001100111111010100
1100000011011010010011111100  1100000011011011111100100100
1100000011111100100101101100  1100000011111100110011010100
1100000011111101010011001100  1100000011111101101100100100
1100100100000011011011111100  1100100100000011111101101100
1100100101101100000011111100  1100110011000000101011111100
1100110011000000111111010100  1100110011010100000011111100
1101010000001100110011111100  1101010000001111110011001100
1101010011001100000011111100  1101101100000010010011111100
1101101100000011111100100100  1101101100100100000011111100
```

## The Zimin Word $$Z_2 = ABA$$ and Bifixes

The number of words of length $$n$$ that match the Zimin pattern $$ABA$$ is equal to the number of words that begin with an odd-length palindrome. Analogously, the number of words with a bifix is equal to the number of words that begin with an even-length palindrome. These counts agree when $$n$$ is odd.

I've added OEIS sequences A342510–A342512, which relate to how numbers viewed as binary strings avoid (or fail to avoid) Zimin words. I asked users to implement this on Code Golf Stack Exchange.

## My Favorite Sequences: A263135

This is the fourth installment of My Favorite Sequences. This post discusses sequence A263135, which counts penny-to-penny connections among $$n$$ pennies on the vertices of a hexagonal grid. I published this sequence in October 2015 when I was thinking about hexagonal-grid analogs to the "Not Equal" grid. The square-grid analog of this sequence is A123663.

## A263135: Placing Pennies

The sequences A047932 and A263135 are about placing pennies on a hexagonal grid in such a way that maximizes the number of penny-to-penny contacts, which occurs when you place the pennies in a spiral. A047932 counts the contacts when the pennies are placed on the faces of the grid; A263135 counts the contacts with the pennies placed on the vertices.

While spiral shapes maximize the number of penny-to-penny contacts, there are sometimes non-spiral shapes that have the same number of contacts. For example, in the case of the square grid, there are $$A100092(n)$$ such ways to lay down $$n$$ pennies on the square grid with the maximum number of connections. Problem 108 in my Open Problems Collection asks about generalizing this OEIS sequence to other settings such as the hexagonal grid.

#### Comparing contacts

Notice that the "face" pennies in A047932 can have a maximum of six neighbors, while the "vertex" pennies in A263135 can have a maximum of three. In the limit, most pennies are "interior" pennies with the maximum number of contacts, so $$A047932(n) \sim 3n$$ and $$A263135(n) \sim \frac32 n$$.

Looking at the comparative growth rates, it is natural to ask how the number of connections of $$n$$ face pennies compares to the number of connections of $$2n$$ vertex pennies. In October 2015 I made a conjecture on the OEIS that this difference grew like sequence A216256.
Conjecture: For $$n > 0$$,

$$A263135(2n) - A047932(n) = \lceil\sqrt{3n - 3/4} - 1/2\rceil = A216256(n).$$

I believe that the sequence A216256 on the right-hand side is the same as the sequence "n appears $$\displaystyle\left\lfloor \frac{2n+1}{3} \right\rfloor$$ times," but I'd have to crack open my Concrete Mathematics book to prove it. This is Problem 20 in my Open Problem Collection, and I've placed a small $5 bounty on solving this conjecture, so if you have an idea of how to prove it, let me know in exchange for a latte! I've asked about this in my Math Stack Exchange question "Circle-to-circle contacts on the hexagonal grid", so feel free to answer there or let me know on Twitter, @PeterKagey.

## My Favorite Sequences: "Not Equal" Grid

This is the third installment in a recurring series, My Favorite Sequences. This post discusses OEIS sequence A278299, a sequence that took over two years to compute enough terms to add to the OEIS with confidence that it was distinct.

This sequence is discussed in Problem #23 of my Open Problems Collection, which asks for the smallest polyomino (by number of cells) whose cells you can color with $$n$$ different colors such that any two different colors are adjacent somewhere in the polyomino. For example, when there are $$n=5$$ colors (say, green, brown, blue, purple, and magenta), there is a $$13$$-cell polyomino which has a green cell adjacent to a blue cell, a purple cell adjacent to a brown cell, and so on for every color combination. This is the smallest polyomino with the $$5$$-coloring property.

## The Genesis: Unequal Chains

The summer after my third undergraduate year, I decided to switch my major to Math and still try to graduate on time. Due to degree requirements, I had to go back and take some lower-division classes that I was a bit over-prepared for. One of these classes, and surely my favorite, was Bill Bogley's linear algebra class, where I half-way paid attention and half-way mused about other things. Bill wrote something simple on the board that sparked inspiration for me:

$$a \neq b \neq c \neq a.$$

He wrote this to indicate that $$a$$, $$b$$, and $$c$$ were all distinct, and this got me thinking: if we have to write a string of four variables in order to say that three variables are distinct, how many would we have to write down to say that four variables were distinct? It turns out that $$8$$ will do the trick, with one redundancy:

$$a\neq b \neq c \neq a \neq d \neq b \color{red}{\neq} c \neq d.$$

Five variables? $$11$$:

$$a_1 \neq a_2 \neq a_3 \neq a_4 \neq a_5 \neq a_3 \neq a_1 \neq a_4 \neq a_2 \neq a_5 \neq a_1.$$

What about $$n$$ variables? My colleague and the then-President of the OSU Math Club, Tommy Pitts, made quick work of this problem. He pointed out that "not equal" is a symmetric, non-transitive, non-reflexive relation. This means that we can model this with a complete graph on $$n$$ vertices, where each edge is a relation. Then the number of variables needed in the expression is the number of edges in the complete graph, plus the minimum number of Eulerian paths that we can split the graph into. Searching for this in the OEIS yields sequence A053439:

$$A053439^*(n) = \begin{cases} \binom{n}{2} + 1 & n \text{ is odd} \\ \binom{n}{2} + \frac n 2 & n \text{ is even}\end{cases}$$
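As a quick numerical check of this formula (my own sketch, not from the post), read the count as the number of edges of $$K_n$$ plus one extra starting symbol per trail in a minimal trail decomposition:

```
from math import comb

def chain_length(n):
    # Tokens needed to assert pairwise distinctness of n variables:
    # every edge of K_n contributes one "!=" step, and each trail in a
    # minimal trail decomposition costs one extra starting symbol.
    # K_n is Eulerian for odd n (one trail); for even n it has n
    # odd-degree vertices, hence n/2 trails.
    trails = 1 if n % 2 == 1 else n // 2
    return comb(n, 2) + trails

print([chain_length(n) for n in range(2, 8)])  # [2, 4, 8, 11, 18, 22]
```

The values for $$n = 3, 4, 5$$ match the chains written out above.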
## A Generalization: Unequal Chainmail

This was around 2014, at which time I was writing letters to my friend Alec Jones whenever I (rather frequently!) stumbled upon a new math problem that interested me. In the exchange of letters, he suggested a 2D version of this puzzle: write the $$n$$ variables in the square grid, and say that two variables are unequal if they're adjacent. While Tommy solved the 1D version of the problem quickly, the 2D version was much more stubborn! However, we were able to make some progress. We found some upper bounds (e.g. the 1D solution) and some lower bounds, and we were able to prove that some small configurations were optimal. Finally, in November 2016, we had ten terms: enough to prove that this sequence was not in the OEIS. We added it as A278299.

"$$a(n)$$ is the tile count of the smallest polyomino with an $$n$$-coloring such that every color is adjacent to every other distinct color at least once." (OEIS sequence A278299)

(In May 2019, Alec's student Ryan Lee found the $$11$$th term: $$A278299(11) = 34$$. $$A278299(12)$$ is still unknown.)

We found these terms by establishing some lower bounds (as explained below) and then implementing a Javascript game (which you can play here) with a Ruby on Rails backend to allow people to submit their hand-crafted attempts. Each solution was a constructive proof of an upper bound, so when a user submitted a solution that matched the lower bound, we were able to confirm that term of the sequence. (One heuristic for making minimal configurations is to start with the construction in OEIS sequence A260643 and add cells as necessary in an ad hoc fashion.)

## Lower bounds

There are a few different ways of proving lower bounds:

• We know that there need to be at least $$\binom{n}{2}$$ relations, one between each pair of variables. OEIS sequence A123663 gives the "number of shared edges in a spiral of n unit squares," which can be used to compute a lower bound: $$A039823(n) = \left\lceil \frac{n^2+n+2}{4}\right\rceil$$
• Every number needs to be in contact with at least $$n-1$$ other numbers, and each occurrence can be in contact with at most $$4$$ others. So each number needs to occur at least $$\lceil \frac{n-1}{4}\rceil$$ times, for a total of $$n\lceil \frac{n-1}{4}\rceil$$ occurrences. This bound is usually weaker than the above bound.
• For the cases of $$n = 5$$ and $$n=9$$, the lower bounds were proved using ad hoc methods, by looking at how many cells would need to have a given number of neighbors.

## Upper Bounds

Besides the upper bound that comes from the 1-dimensional version of the problem, the only upper bounds that I know of come from hand-crafted submissions to my Javascript game on my website. Do you have any ideas for an explicit and efficient algorithm for constructing such solutions? If so, let me know on Twitter @PeterKagey.

## Asymptotics

The lower and upper bounds show that this is asymptotically bounded between $$n^2/4$$ and $$n^2/2$$. It's possible that this doesn't have a limit at all, but it would be interesting to bound the liminf and limsup further. My intuition is that $$n^2/4$$ is the right answer; can you prove or disprove this?

## Generalizations

• We could play this game on the triangular grid, or in the 3-dimensional cubic grid. Do you have ideas of other graphs that you could do this on?
• This game came from Tommy's analysis of looking at "not equal to" as a symmetric, non-reflexive, non-transitive relation. Can you do a similar analysis on other kinds of relations?
• Is there a good way of defining what it means for two solutions to be the same? For a given number of variables, how many essentially different solutions exist? (Related: Open Problem #108.)
• What if we think of left-right connections as being different from up-down connections, and want both? Or what if we want each variable $$x$$ to be neighbors with another $$x$$?

If you have ideas about these questions or any questions of your own, please share them with me by leaving a comment or letting me know on Twitter, @PeterKagey!

## My Favorite Sequences: A261865

This is the first installment in a new series, "My Favorite Sequences". In this series, I will write about sequences from the On-Line Encyclopedia of Integer Sequences that I've authored or spent a lot of time thinking about.

I've been contributing to the On-Line Encyclopedia of Integer Sequences since I was an undergraduate. In December 2013, I submitted sequence A233421 based on problem A2 from the 2013 Putnam Exam, which is itself based on "Ron Graham's Sequence" (A006255), a surprising bijection from the natural numbers to the non-primes. As of today, I've authored over 475 sequences based on puzzles that I've heard about and problems that I've dreamed up.

## A261865: Multiples of square roots

(This problem is closely related to Problem 13 in my Open Problems Collection.)

In September 2015, I submitted sequence A261865: $$A261865(n)$$ is the least integer $$k$$ such that some multiple of $$\sqrt k$$ falls in the interval $$(n, n+1)$$.

For example, $$A261865(3) = 3$$ because there is no multiple of $$\sqrt 1$$ in $$(3,4)$$ (since $$3 \sqrt{1} \leq 3$$ and $$4 \sqrt{1} \geq 4$$); there is no multiple of $$\sqrt{2}$$ in $$(3,4)$$ (since $$2 \sqrt{2} \leq 3$$ and $$3 \sqrt 2 \geq 4$$); but there is a multiple of $$\sqrt 3$$ in $$(3,4)$$, namely $$2\sqrt 3$$.

The sequence begins

$$\color{blue}{ 2,2,3,2,2},\color{red}{3},\color{blue}{2,2,2},\color{red}{3},\color{blue}{2,2},\color{red}{3},\color{blue}{2,2,2},\color{red}{3},\color{blue}{2,2},\color{red}{3},\color{blue}{2,2},\color{magenta}{7},\dots.$$

## A conjecture about density

As the example illustrates, $$1$$ does not appear in the sequence. And almost by definition, asymptotically $$1/\sqrt 2$$ of the values are $$2$$s. Let's denote the asymptotic density of terms that are equal to $$n$$ by $$d_n$$. It's easy to check that $$d_1 = 0$$ (because multiples of $$\sqrt 1$$ are never strictly between any integers) and $$d_2 = 1/\sqrt 2$$, because multiples of $$\sqrt 2$$ are always inserted. I conjecture in Problem 13 of my Open Problem Collection that

$$d_n = \begin{cases}\displaystyle\frac{1}{\sqrt n}\left(1 - \sum_{i=1}^{n-1} d_i\right) & n \text{ is squarefree}\\[5mm] 0 & \text{otherwise}\end{cases}$$

If this conjecture is true, approximate densities can be computed with the following Mathematica code:

```
d[i_] := (d[i] = If[
   SquareFreeQ[i],
   N[(1 - Sum[d[j], {j, 2, i - 1}])/Sqrt[i], 50],
   0
])
```

## Finding Large Values

I'm interested in values of $$n$$ such that $$A261865(n)$$ is large, and I reckon that there are clever ways to construct these, perhaps by looking at some Diophantine approximations of $$\sqrt{2}, \sqrt{3}, \sqrt{5}, \sqrt{6}, \dots$$. In February, I posted a challenge on Code Golf Stack Exchange to have folks compete in writing programs that can quickly find large values of $$A261865(n)$$. Impressively, Noodle9's C++ program won the challenge. In under a minute, this program found that the input $$n=1001313673399$$ makes $$A261865$$ particularly large: $$A261865(1001313673399) = 399$$. Within the time limit, no other programs could find a value of $$n$$ that makes $$A261865(n)$$ larger.
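For readers who want to experiment, here is a minimal reference implementation (my own sketch, using exact integer arithmetic to avoid floating-point edge cases). A multiple $$m\sqrt k$$ lies strictly inside $$(n, n+1)$$ exactly when $$n^2 < m^2 k < (n+1)^2$$, and its output matches the opening terms shown above.

```
from math import isqrt

def a261865(n):
    # Least k such that some multiple of sqrt(k) lies strictly in (n, n+1).
    k = 1
    while True:
        # Smallest m with m*sqrt(k) > n, i.e. with m^2 * k > n^2.
        m = isqrt(n * n // k) + 1
        if n * n < m * m * k < (n + 1) ** 2:
            return k
        k += 1

print([a261865(n) for n in range(1, 24)])
# [2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2, 2, 3, 2, 2, 7]
```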
## Related Ideas

Sequence $$A327953(n)$$ counts the number of positive integers $$k$$ such that there is some integer $$\alpha^{(n)}_k > 2$$ where $$\alpha^{(n)}_k\sqrt{k} \in (n, n+1)$$. It appears to grow roughly linearly like $$A327953(n) \sim 1.3n$$, but I don't know how to prove this.

• Take any function $$f\colon\mathbb N \rightarrow \mathbb R$$ that is positive, has positive first derivative, and has negative second derivative. Then what is the least $$k$$ such that some multiple of $$f(k)$$ is in $$(n,n+1)$$?
• For example, what is the least integer $$k \geq 3$$ such that there is a multiple of $$\ln(k)$$ in $$(n, n+1)$$?
• What is the least $$k \in \mathbb N$$ such that there exists $$m \in \mathbb N$$ with $$k2^{1/m} \in (n,n+1)$$?
• What is the least $$m \in \mathbb N$$ such that there exists $$k \in \mathbb N$$ with $$k2^{1/m} \in (n,n+1)$$?
• A343205 is the auxiliary sequence that gives the value $$m$$ such that $$m\sqrt{A261865(n)} \in (n, n+1)$$. Does this sequence have an infinite limit inferior?

If you can answer any of these questions, or if you spend time thinking about this, please let me know on Twitter, @PeterKagey!

## Richard Guy's Partition Sequence

Neil Sloane is the founder of the On-Line Encyclopedia of Integer Sequences (OEIS). Every year or so, he gives a talk at Rutgers in which he discusses some of his favorite recent sequences. In 2017, he spent some time talking about a 1971 letter that he got from Richard Guy, and some questions that went along with it. In response to the talk, I investigated the letter and was able to sort out Richard's 45-year-old idea, and correct and compute some more terms of his sequence.

## Richard Guy and his sequences

Richard Guy was a remarkable mathematician who lived to the remarkable age of 103 years, 5 months, and 9 days! His life was filled with friendships and collaborations with many of the giants of recreational math: folks like John Conway, Paul Erdős, Martin Gardner, Donald Knuth, and Neil Sloane. But what I love most about Richard is how much joy and wonder he found in math. (Well, that and his life-long infatuation with his wife Louise.)

"[I'm] an amateur [mathematician], I mean I'm not a professional mathematician. I'm an amateur in the more genuine sense of the word in that I love mathematics and I would like everybody in the world to like mathematics." (Richard Guy, in Fascinating Mathematical People: Interviews and Memoirs)

### Richard's letter to Neil

In January 2017, Neil Sloane gave a talk at Doron Zeilberger's Experimental Mathematics Seminar, and about six minutes in, Neil discusses a letter that Richard sent to him at Cornell in June 1971, which was then forwarded to Bell Labs.

"When I was working on the book, the 1973 Handbook of Integer Sequences, I would get letters from Richard Guy from all over the world. As he traveled around, he would collect sequences and send them to me." (Neil Sloane, Rutgers Experimental Mathematics Seminar)

At 11:30, Neil discusses "sequence I" from Richard's letter, which he added to the OEIS as sequence A279197: Number of self-conjugate inseparable solutions of $$X + Y = 2Z$$ (integer, disjoint triples from $$\{1,2,3,\dots,3n\}$$).

Neil mentioned in the seminar that he didn't really know exactly what the definition meant. With some sleuthing and programming, I was able to make sense of the definition, write a Haskell program, correct the 7th term, and extend the sequence by a bit.
The solutions for $$A279197(1)$$ through $$A279197(10)$$ are listed in a file I uploaded to the OEIS, and Fausto A. C. Cariboni was able to extend the sequence even further, submitting terms $$A279197(11)$$–$$A279197(17)$$.

## How the sequence works

The idea here is to partition $$\{1,2,3,\dots,3n\}$$ into length-3 arithmetic progressions, $$\bigl\{\{X_i,Z_i,Y_i\}\bigr\}_{i=1}^{n}$$. And in particular, we want them to be inseparable and self-conjugate.

An inseparable partition is one whose "smallest" subsets are not a solution for a smaller case. For example, if $$n=3$$, then the partition

$$\bigl\{ \{1,3,5\}, \{2,4,6\}, \{7,8,9\} \bigr\}$$

is separable, because the subset $$\bigl\{ \{1,3,5\}, \{2,4,6\} \bigr\}$$ is a solution to the $$n=2$$ case.

A self-conjugate partition is one in which swapping each $$i$$ with each $$3n+1-i$$ gets back to what we started with. For example, $$\bigl\{\{1,3,5\}, \{2,4,6\}\bigr\}$$ is self-conjugate, because if we replace the $$1$$ with a $$6$$, the $$2$$ with a $$5$$, and in general each $$i$$ with a $$7-i$$, then we get the same set: $$\bigl\{\{6,4,2\}, \{5,3,1\} \bigr\}$$.

### Generalizing Richard Guy's idea

Of course, it's natural to wonder about the separable solutions, or what happens if the self-conjugate restriction is dropped. In exploring these cases, I found four cases already in the OEIS, and I computed five more: A282615–A282619.

## Generalizing further

There are lots of other generalizations that might be interesting to explore. Here's a quick list:

• Look at partitions of $$\{1,2,\dots,kn\}$$ into $$n$$ parts, all of which are an arithmetic sequence of length $$k$$.
• Count partitions of $$\{1,2,\dots,n\}$$ into any number of parts of (un)equal size in a way that is (non-)self-conjugate and/or (in)separable.
• Consider partitions of $$\{1,2,\dots,3n\}$$ into $$n$$ parts, all of which are an arithmetic sequence of length $$3$$, and whose diagram is "non-crossing", that is, none of the line segments overlap anywhere. (See the 6th and 11th cases in the example for $$A279197(6) = 11$$.)

If you explore any generalizations of this problem on your own, if you'd like to explore together, or if you have any anecdotes about Richard Guy that you'd like to share, let me know on Twitter!

## A π-estimating Twitter bot: Part I

This is the first part of a three-part series about making the Twitter bot @BotfonsNeedles. In this part, I will write a Python 3 program that simulates Buffon's needle drops and draws images of the results.

In the second part, I'll explain how to

• post the images to Twitter via the Python library Tweepy, and
• keep track of all of the Tweets to get an increasingly accurate estimate of $$\pi$$.

In the third part, I'll explain how to

• package all of this code up into a Docker container,
• push the Docker image to Amazon Web Services (AWS), and
• set up a function on AWS Lambda to run the code on a timer.

### Buffon's needle problem

Buffon's needle problem is a surprising way of computing $$\pi$$. It says that if you throw $$n$$ needles of length $$\ell$$ randomly onto a floor that has parallel lines that are a distance of $$\ell$$ apart, then the expected number of needles that cross a line is $$\frac{2n}\pi$$. Therefore one way to approximate $$\pi$$ is to divide $$2n$$ by the number of needles that cross a line.

I had my computer simulate 400 needle tosses, and 258 of them crossed a line. Thus this experiment approximates $$\pi \approx 2\!\left(\frac{400}{258}\right) \approx 3.101$$, about a 1.3% error from the true value.
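As a quick standalone check of this estimator before building the full classes in the next section (my own sketch, not part of the original post): with the needle length equal to the line spacing, a needle crosses a line exactly when its center lies within $$\frac{\ell}{2}\sin\theta$$ of the nearest line.

```
import math
import random

def estimate_pi(n=1_000_000):
    crossings = 0
    for _ in range(n):
        d = random.uniform(0, 0.5)              # center-to-nearest-line distance
        theta = random.uniform(0, math.pi / 2)  # acute angle with the lines
        if d <= 0.5 * math.sin(theta):          # crossing condition
            crossings += 1
    return 2 * n / crossings

print(estimate_pi())   # typically within a few thousandths of pi
```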
### Modeling in Python

Our goal is to write a Python program that can simulate tossing needles on the floor both numerically (e.g. "258 of 400 needles crossed a line") and graphically (i.e. creates PNG images like the example above).

#### The RandomNeedle class

We'll start by defining a RandomNeedle class which takes

• a canvas_width, $$w$$;
• a canvas_height, $$h$$;
• and a line_spacing, $$\ell$$.

It then initializes by choosing a random angle $$\theta \in [0,\pi]$$ and a random placement for the center of the needle in

$$(x,y) \in \left[\frac{\ell}{2},\, w - \frac{\ell}{2}\right] \times \left[\frac{\ell}{2},\, h - \frac{\ell}{2}\right]$$

in order to avoid issues with boundary conditions.

Next, it uses the angle and some plane geometry to compute the endpoints of the needle:

$$\begin{bmatrix}x\\y\end{bmatrix} \pm \frac{\ell}{2}\begin{bmatrix}\cos(\theta)\\ \sin(\theta)\end{bmatrix}.$$

The class's first method is crosses_line, which checks that the $$x$$-values at either end of the needle are in different "sections". Since we know that the parallel lines occur at all multiples of $$\ell$$, we can just check that

$$\left\lfloor\frac{x_\text{start}}{\ell}\right\rfloor \neq \left\lfloor\frac{x_\text{end}}{\ell}\right\rfloor.$$

The class's second method is draw, which takes a drawing_context via Pillow and simply draws a line.

```
import math
import random

class RandomNeedle:
    def __init__(self, canvas_width, canvas_height, line_spacing):
        theta = random.random() * math.pi
        half_needle = line_spacing // 2
        self.x = random.randint(half_needle, canvas_width - half_needle)
        self.y = random.randint(half_needle, canvas_height - half_needle)
        self.del_x = half_needle * math.cos(theta)
        self.del_y = half_needle * math.sin(theta)
        self.spacing = line_spacing

    def crosses_line(self):
        initial_sector = (self.x - self.del_x) // self.spacing
        terminal_sector = (self.x + self.del_x) // self.spacing
        return abs(initial_sector - terminal_sector) == 1

    def draw(self, drawing_context):
        color = "red" if self.crosses_line() else "grey"
        initial_point = (self.x - self.del_x, self.y - self.del_y)
        terminal_point = (self.x + self.del_x, self.y + self.del_y)
        drawing_context.line([initial_point, terminal_point], color, 10)
```

By generating $$100\,000$$ instances of the RandomNeedle class, and keeping a running estimate of $$\pi$$ based on what percentage of the needles cross a line, you can watch the estimate converge toward $$\pi$$.

## The NeedleDrawer class

The NeedleDrawer class is all about running these simulations and drawing pictures of them. In order to draw the images, we use the Python library Pillow, which I installed by running `pip3 install Pillow`.

When an instance of the NeedleDrawer class is initialized, it makes a "floor" and "tosses" 100 needles (by creating 100 instances of the RandomNeedle class). The main function in this class is draw_image, which makes a $$4096 \times 2048$$ pixel canvas, draws the vertical lines, then draws each of the RandomNeedle instances. (It saves the files to the /tmp directory in root because that's the only place we can write files in our Docker instance on AWS Lambda, which will be a step in part 2 of this series.)
```
from PIL import Image, ImageDraw
from random_needle import RandomNeedle

class NeedleDrawer:
    def __init__(self):
        self.width = 4096
        self.height = 2048
        self.spacing = 256
        self.random_needles = self.toss_needles(100)

    def draw_vertical_lines(self):
        for x in range(self.spacing, self.width, self.spacing):
            self.drawing_context.line([(x, 0), (x, self.height)], width=10, fill="black")

    def toss_needles(self, count):
        return [RandomNeedle(self.width, self.height, self.spacing) for _ in range(count)]

    def draw_needles(self):
        for needle in self.random_needles:
            needle.draw(self.drawing_context)

    def count_needles(self):
        cross_count = sum(1 for n in self.random_needles if n.crosses_line())
        return (cross_count, len(self.random_needles))

    def draw_image(self):
        img = Image.new("RGB", (self.width, self.height), (255, 255, 255))
        self.drawing_context = ImageDraw.Draw(img)
        self.draw_vertical_lines()
        self.draw_needles()
        del self.drawing_context
        img.save("/tmp/needle_drop.png")
        return self.count_needles()
```

## Next Steps

In the next part of this series, we're going to add a new class that uses the Twitter API to post needle-drop experiments to Twitter. In the final part of the series, we'll wire this up to AWS Lambda to post to Twitter on a timer.

## Polytopes with Lattice Coordinates

Problems 21, 66, and 116 in my Open Problem Collection concern polytopes with lattice coordinates, that is, polygons, polyhedra, or higher-dimensional analogs with vertices on the square or triangular grids. (In higher dimensions, I'm most interested in the $$n$$-dimensional integer lattice and the $$n$$-simplex honeycomb.) This was largely inspired by one of my favorite mathematical facts: given a triangular grid with $$n$$ points per side, you can find exactly $$\binom{n+2}{4}$$ equilateral triangles with vertices on the grid. However, it turns out that there isn't a similarly nice polynomial description of tetrahedra in a tetrahedron or of triangles in a tetrahedron. (Thanks to Anders Kaseorg for his Rust program that computed the number of triangles in all tetrahedra with 1000 or fewer points per side.)

The $$4$$-simplex (the $$4$$-dimensional analog of a triangle or tetrahedron) with $$n-1$$ points per side has a total of $$\binom{n+2}{4}$$ points, so there is some correspondence between points in a $$4$$-dimensional polytope and triangles in the triangular grid. This extends to other analogs of this problem: the number of squares in the square grid is the same as the number of points in a $$4$$-dimensional pyramid.

## The $$\binom{n+2}{4}$$ equilateral triangles

I put a Javascript applet on my webpage that illustrates a bijection between size-$$4$$ subsets of $$n+2$$ objects and triangles in the $$n$$-points-per-side grid. You can choose different subsets and see the resulting triangles. (The applet does not work on mobile.)

## Polygons with vertices in $$\mathbb{Z}^n$$

This was also inspired by the Mathologer video "What does this prove? Some of the most gorgeous visual 'shrink' proofs ever invented", where Burkard Polster visually illustrates that the only regular polygons with vertices in $$\mathbb{Z}^n$$ (and thus the $$n$$-simplex honeycomb) are equilateral triangles, squares, and regular hexagons.

## Polyhedra with vertices in $$\mathbb{Z}^3$$

There are some surprising examples of polyhedra in the grid, including cubes with no faces parallel to the $$xy$$-, $$xz$$-, or $$yz$$-planes; a quick verification of one such cube follows below.
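As a concrete illustration (my own, using the well-known construction from the pairwise-orthogonal lattice vectors $$(1,2,2)$$, $$(2,1,-2)$$, and $$(2,-2,1)$$), here is a quick check that these three vectors span a lattice cube of side length $$3$$ whose faces are not parallel to any coordinate plane:

```
from itertools import product

# Three pairwise-orthogonal lattice vectors of equal length 3; since every
# component is nonzero, no face normal is parallel to a coordinate axis.
u, v, w = (1, 2, 2), (2, 1, -2), (2, -2, 1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

assert dot(u, v) == dot(u, w) == dot(v, w) == 0   # pairwise orthogonal
assert dot(u, u) == dot(v, v) == dot(w, w) == 9   # all of length sqrt(9) = 3

# The 8 vertices of the tilted cube: all 0/1 combinations of u, v, w.
vertices = [tuple(a * x + b * y + c * z for x, y, z in zip(u, v, w))
            for a, b, c in product((0, 1), repeat=3)]
print(vertices)
```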
While there are lots of polytopes that can be written with vertices in $$\mathbb{Z}^3$$, Alaska resident and friend RavenclawPrefect cleverly uses Legendre's three-square theorem to prove that there's no way to write the uniform triangular prism this way! However, he provides a cute embedding in $$\mathbb{Z}^5$$: the convex hull of

$$\{(0,0,1,0,0),(0,1,0,0,0),(1,0,0,0,0),(0,0,1,1,1),(0,1,0,1,1),(1,0,0,1,1)\}.$$

## Polygons on a "centered $$n$$-gon"

I asked a question on Math Stack Exchange, "When is it possible to find a regular $$k$$-gon in a centered $$n$$-gon", where "centered $$n$$-gon" refers to the diagram that you get when illustrating central polygonal numbers. These diagrams are one of many possible generalizations of the triangular, square, and centered hexagonal grids. (Although it's worth noting that the centered triangular grid is different from the ordinary triangular grid.) If you have any ideas about this, let me know on Twitter or post an answer to the Stack Exchange question above.

## A catalog of polytopes and grids

On my OEIS wiki page, I've created some tables that show different kinds of polytopes in different kinds of grids. There are quite a number of combinations of polygons/polyhedra and grids that either don't have an OEIS sequence or that I have been unable to find. If you're interested in working on filling in some of the gaps in this table, I'd love it if you let me know! And if you'd like to collaborate or could use help getting started, send me a message on Twitter!

## Stacking LEGO Bricks

Back in May, I participated in The Big Lock-Down Math-Off from The Aperiodical. In the Math-Off, I went head-to-head against Colin Beveridge (who has, hands-down, my favorite Twitter handle: @icecolbeveridge). Colin wrote about using generating functions to do combinatorics about Peter Rowlett's toy Robot Caterpillar. Coincidentally and delightfully, I wrote about using generating functions to do combinatorics about Peter Kagey's toy LEGOs.

Counting LEGO configurations is a problem dating back to at least 1974, when Jørgen Kirk Kristiansen counted that there are 102,981,500 ways to stack six 2×4 LEGOs of the same color into a tower of height six. According to Søren Eilers, Jørgen undercounted by 4!

In my Math-Off piece, I wrote about a fact that I learned from Math Stack Exchange user N. Shales, a fact that may be my (and Doron Zeilberger's) favorite in all of mathematics: there are exactly $$3^{n-1}$$ ways to make a tower out of $$1 \times 2$$ LEGO bricks following some simple and natural rules. Despite this simple formula, the simplest known proof is relatively complicated and uses some graduate-level combinatorial machinery.

## The Rules

1. The bricks must lie in a single plane.
2. No brick can be directly on top of any other.
3. The bottom layer must be a continuous strip.
4. Every brick that's not on the bottom layer must have at least one brick below it.

Gouyou-Beauchamps and Viennot first proved this result in their 1988 statistical mechanics paper, but the nicest proof that I know of can be found in Miklós Bóna's Handbook of Enumerative Combinatorics (page 26). Bóna's proof decomposes the stacks of blocks in a clever way and then uses a bit of generating function magic.

## Other rules

In preparation for the Math-Off piece, I asked a question on Math Stack Exchange about counting the number of towers without Rule 2.
The user joriki provided a small and delightful modification of Bóna's proof showing that there are $$4^{n-1}$$ towers if only Rules 1, 3, and 4 are followed. It might also be interesting to consider the 14 other subsets of the rules. I encourage you to compute the number of corresponding towers and add any new sequences to the On-Line Encyclopedia of Integer Sequences. If you do so, please let me know! And I'd be happy to work with you if you'd like to contribute to the OEIS but don't know how to get started.

Another natural question to ask: how many different towers can you build out of $$n$$ bricks if you consider mirror images to be the same? In the example above with the red bricks, there are six different towers, because there are three pairs of mirror images. By Burnside's Lemma (or a simpler combinatorial argument), this is equivalent to counting the number of symmetric brick stacks: if there are $$s(n)$$ symmetric towers with $$n$$ bricks, then there are $$\displaystyle \frac 12 (3^{n-1}+s(n))$$ towers up to reflection. For $$n = 4$$, there are three such symmetric towers.

I asked about this function on Math Stack Exchange and wrote a naive Haskell program to compute the number of symmetric towers consisting of $$n \leq 19$$ bricks, which I added to the OEIS as sequence A320314. OEIS contributor Andrew Howroyd impressively extended the sequence by 21 more terms. I also added sequence $$A264746 = \frac 12 (3^{n-1}+A320314(n))$$, which counts towers up to reflection, and A333650, a table that gives the number of towers with $$n$$ bricks and height $$k$$.

## Stacking Ordinary Bricks

It is also interesting to count the number of (stable) towers that can be made out of ordinary bricks without any sort of mortar. I asked on Math Stack Exchange for a combinatorial rule for determining when a stack of ordinary bricks is stable. MSE user Jens commented that this problem is hard, and pointed to the OEIS sequence A168368 and the paper "Maximum Overhang" by Mike Paterson, Yuval Peres, Mikkel Thorup, Peter Winkler, and Uri Zwick, which provides a surprising example of a tower that one might expect to be stable, but in fact is not. I'd still like to find a combinatorial rule, or implement a small physics engine, to determine when a stack of bricks is stable.

These problems and some generalizations can be found in Problem 33 of my Open Problem Collection. If you'd like to collaborate on any of these problems, let me know on Twitter. If you find yourself working on your own, I'd love for you to keep me updated with your progress! (The graphics of LEGO bricks were rendered using the impressive and free LEGO Studio from BrickLink.)

## Regular Truchet Tilings

I recently made my first piece of math art for my apartment: a 30″×40″ canvas print based on putting Truchet tiles on the truncated trihexagonal tiling. I first became interested in these sorts of patterns after my former colleague Shane sent me a YouTube video of the one-line Commodore 64 BASIC program:

```
10 PRINT CHR$(205.5+RND(1)); : GOTO 10
```

I implemented a version of this program on my website, with the added feature that you can click on a section to recolor the entire component, and this idea was also the basis of Problems 2 and 31 in my Open Problem Collection. I saw this idea generalized by Colin Beveridge in the article "Too good to be Truchet" in Chalkdust Magazine.
In this article, Colin counts the ways of drawing hexagons, octagons, and decagons with curves connecting the midpoints of edges, and regions colored in an alternating fashion. In the case of the hexagon, there are three ways to do it, one of which looks like Palago tiles.

It turns out that if you ignore the colors, the number of ways to pair up midpoints of the sides of a $$2n$$-gon in such a way that the curves connecting the midpoints don't overlap is given by the $$n$$-th Catalan number. For example, there are $$C_4 = 14$$ ways of connecting midpoints of the sides of an octagon, where different rotations are considered distinct.

There are three regular tilings of the plane by $$2n$$-gons: the square tiling, the truncated square tiling, and the truncated trihexagonal tiling. Placing a Truchet tile uniformly at random over each of the $$2n$$-gons results in a really lovely emergent structure.

If you find these designs as lovely as I do, I'd recommend taking a look at the Twitter bots @RandomTiling by Dave Richeson and @Truchet_Nested/@Trichet_Nested by @SerinDelaunay (based on an idea from Christopher Carlson), which feature a series of visually interesting generalizations of Truchet tilings and which are explained in Christopher's blog post "Multi-scale Truchet Patterns".

Edward Borlenghi has a blog post "The Curse of Truchet's Tiles" about how he tried, mostly unsuccessfully, to sell products based on Truchet tiles, like carpet squares and refrigerator magnets (perhaps similar to "YoYo" magnets from Magnetic Poetry). The post is filled with lots of cool, alternative designs for square Truchet tiles and how they fit together. Edward got a patent for some of his ideas, and his attempt to sell these very cool products feels like it could have been my experience in another life.

If you want to see more pretty images and learn more about this, make sure to read "Truchet Tilings Revisited" by Robert J. Krawczyk! If you want to see what this looks like in a spherical geometry, check out Matt Zucker's tweet. And if you want to try to draw some of these patterns for yourself, take a look at @Ayliean's Truchet Tiles Zine.
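A quick check of the Catalan-number claim above (my own sketch, not from the post): count the non-crossing pairings of the $$2n$$ side-midpoints directly, and compare with the closed form $$C_n = \frac{1}{n+1}\binom{2n}{n}$$.

```
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def noncrossing_pairings(m):
    # Number of ways to pair up m points on a circle with non-crossing
    # chords (m even): the first point pairs with an even-offset partner,
    # splitting the remaining points into two independent arcs.
    if m == 0:
        return 1
    return sum(noncrossing_pairings(k) * noncrossing_pairings(m - 2 - k)
               for k in range(0, m - 1, 2))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

assert all(noncrossing_pairings(2 * n) == catalan(n) for n in range(8))
print(noncrossing_pairings(8))   # 14 pairings for the octagon, i.e. C_4
```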
{}
# Reduce function on custom data structure using Cats Semigroup

This is for purely pedagogical purposes only. So, within the contrived data structure and its constraints, I'm interested in knowing how one might improve that reduce function, and whether I've properly used the Semigroup typeclass here.

```
object ComplexData {
  import cats.Semigroup
  import cats.instances.all._

  case class Element(name: String, value: Int, els: List[Element] = Nil) {
    def reduce[A : Semigroup](z: A)(f: Element => A): A = {
      Semigroup[A].combine(f(this), els match {
        case Nil => z
        case _   => els.map(_.reduce(z)(f)).reduce((a, b) => Semigroup[A].combine(a, b))
      })
    }
  }

  val e1 = Element("Zero", 0, List(
    Element("One", 1, List(Element("Two", 2, List(
      Element("Three", 3),
      Element("Four", 4))))),
    Element("Five", 5, List(
      Element("Six", 6),
      Element("Seven", 7),
      Element("Eight", 8, List(
        Element("Nine", 9),
        Element("Ten", 10)))))))

  e1.reduce(0)(_.value)                                 //res0: Int = 55
  e1.reduce("")(_.name + " ")                           //res1: String = Zero One Two Three...
  e1.reduce(0)(_.els.length)                            //res2: Int = 10
  e1.reduce("")(e => e.name + " -> " + e.value + ", ")  //res3: String = One -> 1, Two -> 2, Three -> 3, ...
}
```

Specifically:

1. While it works, I'm not excited by the use of view bounds, given that they are long since deprecated (attempting to use `A <: Semigroup[A]` in the function signature did not compile). Do I really need an implicit definition of the semigroup here if I wanted to go this way?
2. That pattern match seems accidentally complex; even given my constraints, there's probably a more elegant or at least more straightforward way to do that, yes?
3. If I used a `Monoid[A]` instead of a semigroup, I could get rid of the `z` parameter and provide a `Zero[A]` or `Empty[A]`, I think. Is that the preferred way to go?

• Hi, you flagged your own question to migrate to Stack Overflow. I think it's better here. – Jan 2, 2018 at 18:54

Yes, using a Monoid can simplify what you want:

```
def foldMonoidal[A : Monoid](f: Element => A): A =
  Monoid.combine(f(this), Monoid.combineAll(els.map(f)))

...

e1.foldMonoidal(_.value)
```

One issue with that, though, is that you lose the information of the nested structure: you may as well have just a list of (name, value).

Any nested data structure can have a fold operation defined for it in a way that keeps knowledge of its structure. In your case, this would be:

```
def fold[A](f: (Element, List[A]) => A): A = f(this, els.map(_.fold(f)))

...

e1.fold[Int]((el, list) => el.value + list.sum)
```

You can see this allows more freedom; e.g. you may want to sum the value with the average of the nested elements, which you couldn't do with the monoidal solution above.

```
e1.fold[Double]((el, list) => el.value + list.sum / list.size)
```

Or for pretty-printing:

```
e1.fold[String]((el, strEls) => s"(${el.name} -> ${el.value}, ${strEls.mkString("[", ",", "]")}")
// (Zero -> 0, [(One -> 1, [(Two -> 2, [(Three -> 3, [],(Four -> 4, []]],(Five -> 5, [(Six -> 6, [],(Seven -> 7, [],(Eight -> 8, [(Nine -> 9, [],(Ten -> 10, []]]]
```

• I know I am not supposed to thank you, but THANK YOU! – Jan 6, 2018 at 3:13
• In case anyone runs into this: the implementation of foldMonoidal should recurse, i.e. `def foldM[A: Monoid](f: Element => A): A = Monoid.combine(f(this), Monoid.combineAll(els.map(_.foldM(f))))` – Feb 26, 2018 at 23:33
• True, due to nested structure, missed that +1 – Feb 27, 2018 at 10:58
{}
Conductivity. In the periodic table below you can see the trend of electrical conductivity across the elements; a corresponding table shows the trend of thermal conductivity. From such a table it can be seen that most materials generally associated with being good conductors have a high thermal conductivity, $\kappa$. With this in mind, the two columns (electrical and thermal) are not always consistent.

Thermal conductivity, k, is the quantity of heat transmitted, due to a unit temperature gradient, in unit time under steady conditions, in a direction normal to a surface of unit area; it is the coefficient used in Fourier's equation. Its unit is watt per metre per kelvin: W·m⁻¹·K⁻¹. The thermal conductivity coefficient k is a material parameter depending on temperature, physical properties of the material, water content, and the pressure on the material [3] (Zong-Xian Zhang, in Rock Fracture and Blasting, 2016). Unless otherwise noted, thermal conductivity values refer to a pressure of 100 kPa (1 bar), or to the saturation vapor pressure if that is less.

Generally speaking, dense materials such as metals and stone are good conductors of heat, while low-density substances such as gases and porous insulation are poor conductors of heat. The highest thermal conductivity is inherent in metals (see Table 1). In the temperature range above ambient, λ for nearly all pure metals falls with increasing temperature; with growing temperature the thermal conductivity goes through maximums … The ratio between the thermal and electrical conductivities of metals can be expressed in terms of the ratio κ/(σT), which may be called the Wiedemann-Franz ratio or the Lorenz constant.

Typical entries in tables of the thermal conductivity of metals, metallic elements and alloys include: Aluminum - Duralumin (94-96% Al, 3-5% Cu, trace Mg); Copper - Brass (Yellow Brass) (70% Cu, 30% Zn); Copper - German Silver (62% Cu, 15% Ni, 22% Zn); Copper - Phosphor bronze (10% Sn, UNS C52400). In component datasheets and tables, since actual, physical components with distinct physical dimensions and characteristics are under consideration, thermal resistance is frequently given in absolute units of K/W or °C/W, since the two are equivalent.

In physics, a fluid is a substance that continually deforms (flows) under an applied shear stress. Fluids are a subset of the phases of matter and include liquids, gases, plasmas and, to some extent, plastic solids. Because the intermolecular spacing is much larger and the motion of the … For gases, the table "Thermal Conductivity of Gases" (Marcia L. Huber and Allan H. Harvey) gives the thermal conductivity of some common gases as a function of temperature. At low pressures and high temperatures the thermal conductivity sharply increases due to dissociation. Notable exceptions among gases are helium (0.15) and hydrogen (0.18).

Two tables survive only partially in this extraction:

• Table A, thermal conductivity and density values at 0 °C of fiberglass insulation by type (Type I-VI), with conductivity quoted as (W m⁻¹ °C⁻¹)/(kcal h⁻¹ m⁻¹ °C⁻¹), e.g. Type I: 0.033/0.028, and density in kg/m³ (ranges such as 10-18, 19-30, 31-45, 46-65 appear); the row-to-value pairing of the remaining entries is not recoverable.
• Table 6, thermal conductivity (W/mK), specific heat capacity (J/kgK) and density (kg/m³) from a system material database, covering asphalts and other roofing finishes (asbestos cement decking, asbestos cement sheet, asphalt); the rows are likewise garbled.

The thermal conductivity of a material is a measure of its ability to conduct heat, and it is highly dependent on composition and structure. The lower the thermal conductivity of a material, the slower the rate at … The thermal conductivity of the sample containing 10 wt% GNP at 10 °C increased by 400% relative to pure eicosane; these measurements were made using a hot wire probe in situ at two depths. Current commercial software requires that the thermal conductivity, specific heat capacity, latent heat, … If the designer considers that a more accurate calculation is appropriate for the design of the building then, for …

An empirical correlation equation with an average deviation of ±2% is given for the thermal conductivity of aqueous NaCl solutions from 20 °C to 330 °C at saturation pressures; a table of smoothed values generated using this correlation equation is provided for NaCl concentrations between 0 and 5 molal over this temperature range.
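Since the text above notes that k is the coefficient in Fourier's equation, here is a minimal sketch of that relation (added here, not part of the source page): steady heat flow through a flat wall is q = k·A·ΔT/L. The k value is the Type I fiberglass figure quoted above; the helper name and the area, thickness and temperatures are assumed, illustrative inputs.

```python
# Steady one-dimensional conduction through a flat wall (Fourier's law):
#   q = k * A * (t_hot - t_cold) / L
# k comes from the Type I fiberglass row above; A, L and the two
# temperatures are hypothetical values chosen only for illustration.

def conductive_heat_flow(k, area, thickness, t_hot, t_cold):
    """Heat flow in watts for steady conduction across a plane layer."""
    return k * area * (t_hot - t_cold) / thickness

k = 0.033  # W/(m K), Type I fiberglass (Table A above)
q = conductive_heat_flow(k, area=10.0, thickness=0.05, t_hot=20.0, t_cold=0.0)
print(f"q = {q:.0f} W")  # -> q = 132 W
```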
{}
# Book recommendations for linear algebra [duplicate]

I have been wanting to learn about linear algebra (specifically about vector spaces) for a long time, but I am not sure what book to buy. Any suggestions?

## marked as duplicate by Zain Patel, Namaste Aug 1 '17 at 0:15 (tag: abstract-algebra)

• Axler's Linear Algebra Done Right is a good text on linear mappings and vector spaces without being bogged down by determinants (banished to the end of the text). – Sean Roberson Jul 31 '17 at 18:15
• Are you looking for a vector-space-theory-oriented linear algebra or a matrix-theory-oriented linear algebra text? Recommendations could depend on this. :) – Megadeth Jul 31 '17 at 18:26
• I am looking more for vector space theory – user468462 Jul 31 '17 at 18:47

A standard book for a first course in linear algebra is Gilbert Strang's Linear Algebra and Its Applications. After getting an initial exposure, Sheldon Axler's Linear Algebra Done Right is a good book for getting a more abstract view of linear algebra (at Carnegie Mellon, this is used for a second course in linear algebra). Finally, if you want a very abstract view of linear algebra in relation to other algebraic structures such as fields and modules, you can read the relevant portions of the legendary Abstract Algebra by Dummit and Foote.

Some free sources well worth their salt (more so, I think, than many existing books):

Here are some others that I haven't taken a proper look at, but suspect to be of high quality as well:

All of the above cover vector spaces. As far as linear algebra without abstract vector spaces (i.e., "matrix algebra") is concerned, I can highly recommend the following:

I think Linear Algebra by Hoffman and Kunze and Linear Algebra by Serge Lang are great books. Also, MIT OCW has a very good online linear algebra course (including assignments, but you would need Strang's book for doing those): https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/index.htm

• Serge Lang's book is terse but very widely used. – nurdyguy Jul 31 '17 at 22:24

Differential Equations and Linear Algebra (Third Edition) by Stephen W. Goode and Scott A. Annin is a good textbook, as is Abstract Algebra by Dummit and Foote, and Linear Algebra by Stephen H. Friedberg, Arnold J. Insel, and Lawrence E. Spence.
{}
# 1.5.7 Inverse Function Example 7

Example 7 (Comparison Method)

Given that $f:x\to \frac{2h}{x-3k}$, $x\neq 3k$, where h and k are constants, and $f^{-1}:x\to \frac{14+24x}{x}$, $x\neq 0$, find the value of h and of k.

Solution:
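The solution body is missing after the heading; the following worked steps are added here and simply apply the comparison method named in the title: invert $f$ directly and compare coefficients with the given $f^{-1}$.

$$y=\frac{2h}{x-3k}\;\Longrightarrow\; x-3k=\frac{2h}{y}\;\Longrightarrow\; f^{-1}(x)=3k+\frac{2h}{x}=\frac{3kx+2h}{x}.$$

Comparing with $f^{-1}(x)=\dfrac{14+24x}{x}=\dfrac{24x+14}{x}$ gives $3k=24$ and $2h=14$, hence $k=8$ and $h=7$.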
{}
PGF/TikZ Manual

# The TikZ and PGF Packages

Manual for version 3.1.10

## The Basic Layer

#### 107 Matrices

• \usepgfmodule{matrix} % LaTeX and plain TeX and pure pgf
• \usepgfmodule[matrix] % ConTeXt and pure pgf
• The present section documents the commands of this module.

##### 107.1 Overview

Matrices are a mechanism for aligning several so-called cell pictures horizontally and vertically. The resulting alignment is placed in a normal node and the command for creating matrices, \pgfmatrix, takes options very similar to the \pgfnode command. In the following, the basic idea behind the alignment mechanism is explained first. Then the command \pgfmatrix is explained. At the end of the section, additional ways of modifying the width of columns and rows are discussed.

##### 107.2 Cell Pictures and Their Alignment

A matrix consists of rows of cells. Cells are separated using the special command \pgfmatrixnextcell, rows are ended using the command \pgfmatrixendrow (the command \\ is set up to mean the same as \pgfmatrixendrow by default). Each cell contains a cell picture, although cell pictures are not complete pictures as they lack layers. However, each cell picture has its own bounding box like a normal picture does. These bounding boxes are important for the alignment as explained in the following.

Each cell picture will have an origin somewhere in the picture (or even outside the picture). The positions of these origins are important for the alignment: On each row the origins will be on the same horizontal line and for each column the origins will also be on the same vertical line. These two requirements mean that the cell pictures may need to be shifted around so that the origins wind up on the same lines. The top of a row is given by the top of the cell picture whose bounding box's maximum $y$-position is largest. Similarly, the bottom of a row is given by the bottom of the cell picture whose bounding box's minimum $y$-position is the most negative. Similarly, the left end of a column is given by the left end of the cell whose bounding box's $x$-position is the most negative; and similarly for the right end of a column.

##### 107.3 The Matrix Command

All matrices are typeset using the following command:

• \pgfmatrix{shape}{anchor}{name}{usage}{shift}{pre-code}{matrix cells}
• This command creates a node that contains a matrix. The name of the node is name, its shape is shape and the node is anchored at anchor. The matrix cell parameter contains the cells of the matrix. In each cell drawing commands may be given, which create a so-called cell picture. For each cell picture a bounding box is computed and the cells are aligned according to the rules outlined in the previous section. The resulting matrix is used as the text box of the node. As for a normal node, the usage commands are applied, so that the path(s) of the resulting node is (are) stroked or filled or whatever.

Specifying the cells and rows. Even though this command uses \halign internally, there are three special rules for indicating cells:

• 1. Cells in the same row must be separated using the macro \pgfmatrixnextcell rather than &. Using & will result in an error message. However, you can make & an active character and have it expand to \pgfmatrixnextcell. This way, it will "look" as if & is used.
• 2. Rows are ended using the command \pgfmatrixendrow, but \\ is set up to mean the same by default. However, some environments like {minipage} redefine \\, so it is good to have \pgfmatrixendrow as a "fallback".
• 3. Every row including the last row must be ended using the command \\ or \pgfmatrixendrow.

Both \pgfmatrixnextcell and \pgfmatrixendrow (and, thus, also \\) take an optional argument as explained in Section 107.4.

Anchoring matrices at nodes inside the matrix. The parameter shift is an additional negative shift for the node. Normally, such a shift could be given beforehand (that is, the shift could be preapplied to the current transformation matrix). However, when shift is evaluated, you can refer to temporary positions of nodes inside the matrix. In detail, the following happens: When the matrix has been typeset, all nodes in the matrix temporarily get assigned their positions in the matrix box. The origin of this coordinate system is at the left baseline end of the matrix box, which corresponds to the text anchor. The position shift is then interpreted inside this coordinate system and then used for shifting. This allows you to use the parameter shift in the following way: If you use text as the anchor and specify \pgfpointanchor{inner node}{some anchor} for the parameter shift, where inner node is a node that is created in the matrix, then the whole matrix will be shifted such that inner node.some anchor lies at the origin of the whole picture.

Rotations and scaling. The matrix node is never rotated or scaled, because the current coordinate transformation matrix is reset (except for the translational part) at the beginning of \pgfmatrix. This is intentional and will not change in the future. If you need to rotate or scale the matrix, you must install an appropriate canvas transformation yourself. However, nodes and stuff inside the cell pictures can be rotated and scaled normally.

Callbacks. At the beginning and at the end of each cell the special macros \pgfmatrixbegincode, \pgfmatrixendcode and possibly \pgfmatrixemptycode are called. The effect is explained in Section 107.5.

Executing extra code. The parameter pre-code is executed at the beginning of the outermost TeX group enclosing the matrix node. It is inside this TeX group, but outside the matrix itself. It can be used for different purposes:

• 1. It can be used to simplify the next cell macro. For example, saying \let\&=\pgfmatrixnextcell allows you to use \& instead of \pgfmatrixnextcell. You can also set the catcode of & to active.
• 2. It can be used to issue an \aftergroup command. This allows you to regain control after the \pgfmatrix command. (If you do not know the \aftergroup command, you are probably blessed with a simple and happy life.)

Special considerations concerning macro expansion. As said before, the matrix is typeset using \halign internally. This command does a lot of strange and magic things like expanding the first macro of every cell in a most unusual manner. Here are some effects you may wish to be aware of:

• It is not necessary to actually mention \pgfmatrixnextcell or \pgfmatrixendrow inside the matrix cells. It suffices that the macros inside matrix cells expand to these macros sooner or later.
• In particular, you can define clever macros that insert columns and rows as needed for special effects.

##### 107.4 Row and Column Spacing

It is possible to control the space between columns and rows in considerable detail. Two commands are important for the row spacing and two commands for the column spacing.

• \pgfsetmatrixcolumnsep{sep list}
• This macro sets the default separation list for columns. The details of the format of this list are explained in the description of the next command.
• \pgfmatrixnextcell[additional sep list]
• This command has two purposes: First, it is used to separate cells. Second, by providing the optional argument additional sep list you can modify the spacing between the columns that are separated by this command. The optional additional sep list may only be provided when the \pgfmatrixnextcell command starts a new column. Normally, this will only be the case in the first row, but sometimes a later row has more elements than the first row. In this case, the \pgfmatrixnextcell commands that start the new columns in the later row may also have the optional argument. Once a column has been started, subsequent uses of this optional argument for the column have no effect.

To determine the space between the two columns that are separated by \pgfmatrixnextcell, the following algorithm is executed:

• 1. Both the default separation list (as set up by \pgfsetmatrixcolumnsep) and the additional sep list are processed, in this order. If the additional sep list argument is missing, only the default separation list is processed.
• 2. Both lists may contain dimensions, separated by commas, as well as occurrences of the keywords between origins and between borders.
• 3. All dimensions occurring in either list are added together to arrive at a dimension $d$.
• 4. The last occurrence of either of the keywords is located. If neither keyword is present, we proceed as if between borders were present.

At the end of the algorithm, a dimension $d$ has been computed and one of the two modes between borders and between origins has been determined. Depending on which mode has been determined, the following happens:

• For the between borders mode, an additional horizontal space of $d$ is added between the two columns. Note that $d$ may be negative.
• For the between origins mode, the spacing between the two columns is computed differently: Recall that the origins of the cell pictures in both pictures lie on two vertical lines. The spacing between the two columns is set up such that the horizontal distance between these two lines is exactly $d$. This mode may only be used between columns already introduced in the first row.

All of the above rules boil down to the following effects:

• A default spacing between columns should be set up using \pgfsetmatrixcolumnsep. For example, you might say \pgfsetmatrixcolumnsep{5pt} to have columns spaced apart by 5pt. You could say \pgfsetmatrixcolumnsep{1cm,between origins} to specify that the horizontal space between the origins of cell pictures in adjacent columns should be 1cm by default – regardless of the actual size of the cell pictures.
• You can now use the optional argument of \pgfmatrixnextcell to locally overrule the spacing between two columns. By saying \pgfmatrixnextcell[5pt] you add 5pt to the space between the two columns, regardless of the mode. You can also (locally) change the spacing mode for these two columns. For example, even if the normal spacing mode is between origins, you can say \pgfmatrixnextcell[5pt,between borders] to locally change the mode for these columns to between borders.

The mechanism for the between-row spacing is the same, only the commands are called differently.

• \pgfsetmatrixrowsep{sep list}
• This macro sets the default separation list for rows.

• \pgfmatrixendrow[additional sep list]
• This command ends a line. The optional additional sep list is used to determine the spacing between the row being ended and the next row. The modes and the computation of $d$ are done in the same way as for columns. For the last row the optional argument has no effect.
Inside matrices (and only there) the command \\ is set up to mean the same as this command. ##### 107.5Callbacks¶ There are three macros that get called at the beginning and end of cells. By redefining these macros, which are empty by default, you can change the appearance of cells in a very general manner. • \pgfmatrixemptycode • This macro is executed for empty cells. This means that pgf uses some macro magic to determine whether a cell is empty (it immediately ends with \pgfmatrixemptycode or \pgfmatrixendrow) and, if so, put this macro inside the cell. As can be seen, the macro is not executed for empty cells at the end of row when columns are added only later on. • \pgfmatrixbegincode • This macro is executed at the beginning of non-empty cells. Correspondingly, \pgfmatrixendcode is added at the end of every non-empty cell. Note that between \pgfmatrixbegincode and \pgfmatrixendcode there will not only be the contents of the cell. Rather, pgf will add some (invisible) commands for book-keeping purposes that involve \let and \gdef. In particular, it is not a good idea to have \pgfmatrixbegincode end with \csname and \pgfmatrixendcode start with \endcsname. • \pgfmatrixendcode • See the explanation above. The following two counters allow you to access the current row and current column in a callback: • \pgfmatrixcurrentrow • This counter stores the current row of the current cell of the matrix. Do not even think about changing this counter. • \pgfmatrixcurrentcolumn • This counter stores the current column of the current cell of the matrix.
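As a quick illustration of the \pgfmatrix command documented in Section 107.3, here is a minimal sketch (added to this section, constructed from the argument order shown above rather than copied from the manual). The node name mymatrix is arbitrary, and the pre-code argument makes \& a shorthand for \pgfmatrixnextcell:

```latex
\begin{tikzpicture}
  % arguments: shape, anchor, name, usage, shift, pre-code, cells
  \pgfmatrix{rectangle}{center}{mymatrix}
            {\pgfusepath{stroke}}         % usage: stroke the node's path
            {\pgfpointorigin}             % shift: none
            {\let\&=\pgfmatrixnextcell}   % pre-code: \& separates cells
  {
    \node {a}; \& \node {b}; \\
    \node {c}; \& \node {d}; \\
  }
\end{tikzpicture}
```

Note that every row, including the last one, ends with \\, as required by rule 3 above.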
{}
## STAT536 : HW7Questions

Dorman Wiki, Dorman Lab Wiki

Hw7 question (1): is the likelihood $L(n\mid F_{IS},p,c)$ a multinomial distribution $(n_{i1},n_{i2},n_{i3}\mid P_{11},P_{12},P_{22})$, where $P_{11}/P_{12}/P_{22}$ can be estimated from $F_{IS}$, $q$ and $c$? I am not sure I understand the question right.

The likelihood, which I'll write in Bayesian-style notation as $P(n\mid F_{IS},p,c)$, can be written, using the law of total probability by conditioning and integrating over the unobserved $q$, as
$$P(n\mid F_{IS},p,c)=\int_q P(n\mid F_{IS},q)\,P(q\mid p,c)\,dq$$
Here, I have dropped from the dependence quantities that are independent of the random variables, i.e. $p$ is dropped from the distribution of $n$ because once $q$ is known, $p$ is irrelevant, and $F_{IS}$ has been dropped from the distribution of $q$ because $F_{IS}$ is a within-population property and $q$ are allele frequencies across populations (also see notes). $P(q\mid p,c)$ was given explicitly in class; I'm just asking you to repeat it. And the other conditional probability is multinomial, as you suggest.

Hw7 question (2): do I need equation (2) to estimate $\hat F_{IS}$ and $\hat c$? I mean, will the data, if any, from $\ln L(n\mid F_{IS},c)$ be used for estimation, or will some other formula be used for estimation, like $\operatorname{Var}(q)=cp(1-p)$?

Yes. Even if you use $\operatorname{Var}(q)$ (to get $F_{ST}$, for example), you are going to need something, namely $\hat c$, to plug in for $c$. You get $\hat c$ by maximizing eq. (2). [Note $\operatorname{Var}(q)\neq cp(1-p)$ because the distribution assumed for $q$ is truncated at 0 and 1.]

Let me just try to explain what is going on. See equation (1). It is the likelihood that you technically need to maximize over $F_{IS}$, $p$, and $c$ in order to generate estimates of $F_{IS}$, $F_{ST}$, and $F_{IT}$ (the latter two being functions of the first three MLEs). Unfortunately (1) is difficult to maximize, so eq. (2) makes some assumptions. It first assumes that the integral in (1) accumulates most of its volume at and near $q=\hat q$. Therefore, rather than computing the integral, it just substitutes $\hat q$ into the integrand and dispenses with the integral. Second, equation (2) also assumes that the likelihood in (1), which is a function sitting in 4-dimensional space above the $(F_{IS},p,c)$ space, has a sharply peaked ridge above the plane defined by $p=\hat p$. In other words, for all possible values of $F_{IS}$ and $c$, including at the special values $\hat F_{IS}$ and $\hat c$, equation (2) assumes that $p=\hat p$ maximizes the likelihood.

Do I estimate 12 $F_{IS}$, one for each subpopulation?

This is a good question, because I failed to state an assumption: assume $F_{IS}$ is constant across subpopulations. Please note, for a subpopulation with allele frequency $q$,
$$F_{IS}=\frac{E(A_1A_1\mid q)-q^2}{q(1-q)}$$
(from the notes, p. 3 of lec13). Rearrange:
$$E(A_1A_1\mid q)=q^2+F_{IS}\,q(1-q)$$
$E(A_1A_1\mid q)$ is the expected proportion of the $A_1A_1$ genotype in the subpopulation. In other words, $F_{IS}$ is just the inbreeding $f$ we used before for single populations. Here, we are assuming the inbreeding within each subpopulation is the same.

> 2. similarly, would I have 12 $F_{IS}$ and $c$ MLEs for each subpopulation? (what's the R code for n! btw?)

$c$ is a parameter in the distribution of $q$. There is one $q$ for each subpopulation; let's call them $q_1,q_2,\ldots,q_{12}$. In particular, $c$ relates to the variance in $q_1,q_2,\ldots,q_{12}$.
It does not make sense to have a $c$ for each population, so there is only one $c$. (n! = factorial(n))

2. When using optim to find the $F_{IS}$ and $c$ estimates, what is the function to be optimized?

You are trying to maximize the log likelihood of the data over the unknown parameters $F_{IS}$ and $c$. What is the likelihood? Parts (a) and (b) are all about obtaining a simplified version of the log likelihood, resulting finally in eq. (2).

You ask for a numerical approach to get $\operatorname{Var}(q)$, and I assume bootstrap is one strategy to get it. Again the same question: since $q=(q_1,q_2,\ldots,q_{12})$, how should I do the simulation then?

Bootstrap is for getting the variance of a statistic estimated from data. Here you are asked ONLY to get the MLEs. However, $\hat F_{ST}$ is a function of $\hat F_{IS}$ and $\hat c$. Specifically, this function involves the variance of a truncated normal distribution. If it weren't truncated, the variance would be just $cp(1-p)$, and you would plug in $\hat c$ (and $\hat p$) for $c$ (and $p$). However, now you are asked to get the variance of this truncated normal distribution.

Now, suppose you didn't know the variance of some arbitrary distribution $g$, but you could simulate from it. How would you estimate the variance? Yes, a concept RELATED to, but not the same thing as, the bootstrap enters here. ... You simulate much data $X$ from $g$ and then compute $\operatorname{var}(X)$.

So, now the key question: Can you simulate data from the truncated normal?
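To make the closing question concrete, here is a minimal sketch (added to this page, not part of the original wiki) that simulates from a normal distribution truncated to [0, 1] by rejection sampling and estimates its variance, the quantity needed for $\hat F_{ST}$. The mean $p$ and the untruncated variance $cp(1-p)$ follow the parameterization discussed above; the helper name and the numeric values of $p$ and $c$ are assumptions for illustration.

```python
# Sketch: estimate Var(q) for a normal(p, c*p*(1-p)) truncated to [0, 1]
# by simulating from it (rejection sampling) and taking the sample variance.
import random
import statistics

def sample_truncated_normal(mu, sigma, lo=0.0, hi=1.0):
    """Draw one value from N(mu, sigma^2) conditioned on lo <= x <= hi."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

p, c = 0.2, 0.05                      # assumed values for illustration
sigma = (c * p * (1 - p)) ** 0.5      # untruncated standard deviation
draws = [sample_truncated_normal(p, sigma) for _ in range(100_000)]
print("untruncated Var =", c * p * (1 - p))
print("truncated   Var =", statistics.variance(draws))  # smaller than above
```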
{}
## tywower one year ago How many liters of water vapor can be produced if 8.9 liters of methane gas (CH4) are combusted, if all measurements are taken at the same temperature and pressure? Show all of the work used to solve this problem. CH4 (g) + 2 O2 (g) yields CO2 (g) + 2 H2O (g) 1. tywower @Abhisar 2. tywower @Preetha 3. tywower @sweetburger 4. tywower @dan815 5. tywower @Kbug 6. tywower @Abhisar 7. Abhisar This one will also be solved using PV=nRT 8. Abhisar We can see from the equation that for each mole of CH4, 2 moles of oxygen is produced. And also pressure and temperature is same. We can write that $$\sf P= \huge \frac{nRT}{8.9} = \frac{2nRT}{V}$$, Here V is the volume of oxygen formed. Solve the equation for v. 9. Abhisar Any problem? 10. tywower not yet @Abhisar you may continue 11. Abhisar That's it. V will be the volume of oxygen. Solve the equation for v. 12. tywower im not sure how to solve the equation @Abhisar 13. Abhisar $$\sf \huge \frac{nRT}{8.9} = \frac{2nRT}{V}$$ Can you solve it for V now? 14. Abhisar Cancel out the common terms on both sides. 15. tywower 4.5? 16. tywower @dan815 17. tywower @Abhisar 18. taramgrant0543664 It's not 4.5 unless my math is really off, all you have to do is cancel out the like terms as someone has already shown above. And if you're having a problem with that just factor out nRT from each side and that should help you get your answer 19. aaronq Because "all measurements are taken at the same temperature and pressure" moles are proportionate to volume and therefore conversions between moles and L can be ignored - that is we don't need the ideal gas law. We set up a ratio (as we normally would) except using liters instead of moles, and plug in the variables we know: $$\sf \dfrac{L~of ~CH_4}{CH_4's ~coefficient}=\dfrac{L~of~H_2O}{H_2O's ~coefficient}\rightarrow \sf \dfrac{8.9~L}{1}=\dfrac{L~of~H_2O}{2}$$ We solve this algebraically and obtain: $$\sf L~of~H_2O=\dfrac{2*8.9~L}{1}$$
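Since all measurements are at the same temperature and pressure, Avogadro's law lets the stoichiometric mole ratio act directly on volumes, which is exactly what the last answer in the thread does; a one-step sketch of that arithmetic (added here for illustration, not part of the thread):

```python
# Avogadro's law at fixed T and P: volume ratios equal mole ratios, so
# CH4 + 2 O2 -> CO2 + 2 H2O gives 2 L of H2O vapor per 1 L of CH4 burned.
v_ch4 = 8.9                      # liters of methane combusted
coeff_ch4, coeff_h2o = 1, 2      # stoichiometric coefficients
v_h2o = v_ch4 * coeff_h2o / coeff_ch4
print(f"{v_h2o:.1f} L of water vapor")  # -> 17.8 L of water vapor
```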
{}
# Ethics of answering one's own question when it has very low votes A while ago, I asked a simple question that attracted a lot of good answers, and although none of them fully answered my question, it gave me a really strong base; I made more research, based on those answers, and posted one myself. For most parts, I took the other answers and developed them further, then added my own findings when adequate. It was pretty detailed, and in my opinion, answered the question extensively. Now, I know self-answers are more critically judged, and they're fine if it actually answers the question. However, this self-answer received a lot less upvotes than the other answers, making me wonder if I should accept it. I know the most-voted answer isn't always the acceptable one, but there's a 13-to-2 difference here. The lack of comment on the answer leads me to think it wasn't read by many, but I prefer not to make more assumptions than needed. Should I accept my own answer if it only got a couple upvotes, even after a few months? • If you feel that your answer builds on the others more than being your own work then you always have the option of posting an answer with the community wiki flag set. – Tim B Jun 6 '15 at 9:03 • @TimB oh, that's a very good point! In this case there was a fair amount of my own research, but I'll definitely remember that. – Linkyu Jun 6 '15 at 10:35 I had a similar situation with a question on another site -- I asked a question, got lots of helpful answers (which I upvoted), and ultimately synthesized them. That left me with the question of how to record the outcome on the question, so I wrote a summary answer, acknowledging/linking the other answers, and accepted it. It felt weird to me to do so, but I wanted to signal that I didn't still need help. New answers are always welcome, but that acceptance mark says "I got something that works for me so feel free to move on". I haven't read your whole answer, but it looks like you have a similar case. The other answers prompted you to do more work, you wrote it up, you acknowledged what you built on, and you got something sufficient for your needs. There's nothing wrong with going ahead and accepting that. Note that if you accept your own answer that answer doesn't float to the top, unlike other acceptances, so you won't be pushing higher-voted answers down. • I remember I wanted to do something similar with my question about long-armed weapons. Especially due to the wide variety of answers on WB, I feel like summary answers are necessary in a lot of cases. – DaaaahWhoosh Apr 8 '15 at 20:04 • @DaaaahWhoosh All too often, I see answers that are "added details" to another answer. Isn't there a way to merge answers? I feel it would be very adequate in many questions here. – Linkyu Apr 8 '15 at 20:18 • @Linkyu answers can't be merged (and if they could, who would get the rep from future votes?). We should avoid small answers that add "just one thing" to other answers, though; the best thing to do there is to comment on the other answer suggesting the addition. (Or, if you think it's non-controversial, like adding a source for something the answer says, propose an edit.) – Monica Cellio Apr 8 '15 at 20:22 That you gave people a few months to review and vote on questions is good practice in my opinion. That being said you are completely within your rights to accept your own answer. 
I feel like we do have an issue where late answers are not reviewed by nearly enough people...I think this spawns from plenty of fresh content showing up daily (which is a good thing for the site) and people don't look at older content. • [...] content showing up daily [...] and people don't look at older content. I must admit I am guilty of a variation of this; I often put limited time on each question I read, and as a result, don't always read the less upvoted answers. – Linkyu Apr 8 '15 at 19:11
{}
# How to define the mirror symmetry operator for Kane-Mele model?

Let us take the famous Kane-Mele (KM) model as our starting point. Due to the time-reversal (TR), 2-fold rotational (or 2D space inversion), 3-fold rotational and mirror symmetries of the honeycomb lattice system, we can derive the intrinsic spin-orbit (SO) term. Furthermore, if we apply a spatially uniform electric field perpendicular to the 2D lattice (now the mirror symmetry is broken), an (extra) Rashba-type SO term will emerge. To present my question more clearly, I will first give a more detailed description of the above symmetry operations in both first- and second-quantization formalism. In what follows, a 3D Cartesian coordinate system has been set up where the 2D lattice lies in the $xoy$ plane.

First-quantization language:

(1) TR symmetry operator $\Theta:$ $\Theta\phi(x,y,z)\equiv \phi^*(x,y,z)$; hereafter, $\phi(x,y,z)$ represents an arbitrary wave function for a single electron.

(2) 2-fold rotational operator $R_2:$ $R_2\phi(x,y,z)\equiv \phi(-x,-y,z)$, where we choose the middle point of the nearest-neighbour bond as the origin point $o$ of the coordinate.

(3) 3-fold rotational operator $R_3:$ $R_3\phi(\vec{r})\equiv \phi(A\vec{r})$, where $A=\begin{pmatrix} \cos\frac{2\pi}{3}& -\sin\frac{2\pi}{3}& 0\\ \sin\frac{2\pi}{3}& \cos\frac{2\pi}{3} & 0\\ 0& 0 & 1 \end{pmatrix}$, $\vec{r}=(x,y,z)$, and we choose the lattice site as the origin point $o$ of the coordinate.

(4) Mirror symmetry operator $\Pi:$ $\Pi\phi(x,y,z)\equiv \phi(x,y,-z)$.

Second-quantization language:

(1) TR symmetry operator $T:$ $TC_{i\uparrow}T^{-1}=C_{i\downarrow}, TC_{i\downarrow}T^{-1}=-C_{i\uparrow}$, where $C=a,b$ are the annihilation operators referred to the two sublattices of graphene.

(2) 2-fold rotational operator $P_2:$ $P_2a(x,y)P_2^{-1}\equiv b(-x,-y), P_2b(x,y)P_2^{-1}\equiv a(-x,-y)$; $P_2$ is unitary and we choose the middle point of the nearest-neighbour bond as the origin point $o$ of the coordinate.

(3) 3-fold rotational operator $P_3:$ $P_3C(\vec{x})P_3^{-1}\equiv C(A\vec{x}), \vec{x}=(x,y), C=a,b$, where $A=\begin{pmatrix} \cos\frac{2\pi}{3}& -\sin\frac{2\pi}{3}\\ \sin\frac{2\pi}{3}& \cos\frac{2\pi}{3} \\ \end{pmatrix}$ and $P_3$ is unitary; we choose the lattice site as the origin point $o$ of the coordinate.

(4) Mirror symmetry operator $M:$ ???? As you see, that's what I want to ask: how to define the mirror symmetry operator $M$ in terms of second-quantization language for this 2D lattice system? Or maybe there is no well defined $M$ for this model? Thanks in advance.

Remarks:

(1) A direct way to verify whether your definition of $M$ is correct is as follows: the intrinsic SO term $i\lambda\sum_{\ll ij \gg }v_{ij}C_i^\dagger\sigma_zC_j$ should be invariant under $M$, while the Rashba term $i\lambda_R\sum_{<ij>}C_i^\dagger \left ( \mathbf{\sigma}\times\mathbf{p}_{ij}\right )_zC_j$ will not be.

(2) Here the mirror operation is just reflection in one of the three spatial axes (i.e. $(x,y,z)\rightarrow (x,y,-z)$), not the "parity" operation in the context of "CPT symmetry" in field theory.

- What it seems you are actually asking is: "how does the mirror symmetry work in the tight-binding model?". As you probably realized, the lattice sites are invariant and the Bloch wavefunctions are invariant under the mirror symmetry. However, the electron spin is not invariant. Keeping in mind that the Pauli matrices form a "pseudo-vector", you see that the correct transformation is: $C\rightarrow \sigma_z C\sigma_z$.
@BebopButUnsteady, thanks for your good comment. I have one more question: mirror symmetry is a symmetry of spatial degrees of freedom, so how is it related to the spin space? Furthermore, if the electron is spinless in our tight-binding model, does your definition $C\rightarrow \sigma_zC\sigma_z$ still work? – Kai Li Jun 29 '13 at 9:13

@BebopButUnsteady, to confirm one thing: does the symbol $\sigma_z$ in your answer mean $\sigma_z=C_\uparrow^\dagger C_\uparrow-C_\downarrow^\dagger C_\downarrow$? If so, the operator $\sigma_z$ represents a symmetry transformation (unitary operator) **only if** the single occupation condition $C_\uparrow^\dagger C_\uparrow+C_\downarrow^\dagger C_\downarrow=1$ is satisfied. – Kai Li Jun 29 '13 at 9:21
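Remark (1) of the question can be checked against this answer directly (a verification added here, not part of the thread). Conjugation by $\sigma_z$ flips the in-plane Pauli matrices:
$$\sigma_z\sigma_x\sigma_z=-\sigma_x,\qquad \sigma_z\sigma_y\sigma_z=-\sigma_y,\qquad \sigma_z\sigma_z\sigma_z=\sigma_z,$$
so the intrinsic SO term $i\lambda\sum_{\ll ij\gg}v_{ij}C_i^\dagger\sigma_z C_j$ is invariant under $M$, while the Rashba term contains $(\boldsymbol\sigma\times\mathbf p_{ij})_z=\sigma_x p^y_{ij}-\sigma_y p^x_{ij}$, which changes sign (the in-plane bond vector $\mathbf p_{ij}$ is unaffected by $z\to-z$); hence the Rashba term is not invariant, exactly as the remark requires.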
{}
Opened 7 years ago Closed 10 days ago

# Text cannot include commas

Reported by: Owned by: treaves@… David Roussel normal ColorMacro normal 0.11

### Description

I've just downloaded and put in place the file in the 0.12 directory. I then add this text to my wiki page: [[Color(red, What does visible mean here? Do we need a Site, or do we need an Equipment? How do we determine this?)]] The only text that is displayed is "or do we need an Equipment? How do we determine this?", and it is red. The other text is not displayed.

### comment:2 Changed 6 years ago by Ryan J Ollos

Summary: Bug with comma in text. → Text cannot include commas

### comment:3 Changed 6 years ago by Ryan J Ollos

The following does work: [[Color(red, black, What does visible mean here? Do we need a Site, or do we need an Equipment? How do we determine this?)]] So there is still room for improvement in the allowed calling syntax, but it can be made to work.

### comment:4 Changed 10 days ago by Ryan J Ollos

Resolution: → wontfix new → closed
{}
# Analytical Chemistry (Mixing) Chemistry Level pending

We have 2 litres of 1.5 N $\ce{NaOH}$. We need to raise the concentration to 3.6 N by adding a known volume from a stock solution whose concentration is 5.2 N. Then we need to convert all the $\ce{NaOH}$ in the resulting solution (whose concentration is 3.6 N and whose volume is unknown) into $\ce{NaCl}$ by adding $\ce{HCl}$ that is 37% w/w, with specific gravity 1.19 g/mL and molecular weight 36.46. Calculate the exact volume of $\ce{HCl}$ needed, without any excess. Write the volume in litres to one decimal place.
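A sketch of the computation (added here; the problem page gives no solution): a C₁V₁ balance fixes the stock volume, then an equivalents balance fixes the HCl volume. All inputs come from the problem statement.

```python
# Step 1: volume V of 5.2 N stock that raises 2 L of 1.5 N NaOH to 3.6 N:
#   1.5*2 + 5.2*V = 3.6*(2 + V)
v_stock = 2 * (3.6 - 1.5) / (5.2 - 3.6)      # = 2.625 L
total_volume = 2 + v_stock                   # = 4.625 L of 3.6 N NaOH
eq_naoh = 3.6 * total_volume                 # = 16.65 equivalents

# Step 2: normality of 37% w/w HCl, specific gravity 1.19 g/mL, M = 36.46:
n_hcl = 1.19 * 1000 * 0.37 / 36.46           # ~ 12.08 eq/L

# Step 3: exact HCl volume for complete neutralization, no excess:
v_hcl = eq_naoh / n_hcl
print(round(v_hcl, 1))                       # -> 1.4 (litres)
```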
{}
# Talk:Spline interpolation

## Where is equation 2?

Where is equation 2?

## Interpolation using natural cubic spline

I'm missing the good old and easy to understand description of natural cubic splines as there has been around October 2009. I was able to easily implement that. The new and more general description makes no sense to me.. :( Here's the old description: Interpolation using natural cubic spline. Would be great if someone could look into that and maybe make the article more understandable again. Thanks! — Preceding unsigned comment added by 85.31.3.11 (talk) 11:41, 5 August 2011 (UTC)

## Definition

Given n+1 distinct knots $x_i$ such that $x_0<x_1<\cdots<x_k$ with n+1 knot values $y_i$ we are trying to find a spline function of degree n
$$S(x):=\begin{cases}S_0(x) & x\in[x_0,x_1]\\ S_1(x) & x\in[x_1,x_2]\\ \vdots & \vdots\\ S_{k-1}(x) & x\in[x_{k-1},x_k]\end{cases}$$
with each $S_i(x)$ a polynomial of degree n.

Is this not confusing, using n both for the degree of the polynomial, and for the number of points? --Anonymus, wiki nl

There's a k for that. Very well explained then ;) --217.136.81.22, 11:16, 13 Jun 2005 (UTC)

## Natural cubic spline oscillation

"Clamped and natural cubic splines yield the least oscillation about f than any other twice continuously differentiable function."

In the above sentence from the article, just what is f? Perhaps the article can be updated to clarify this. --Abelani, 19 November 2005

This is to confirm that someone has posted a clarification. --Abelani, 2:16, 27 November 2005 (UTC)

"Amongst all twice continuously differentiable functions, clamped and natural cubic splines yield the least oscillation about the function f which is interpolated."

In the above sentence from the article, surely f itself is the function with the least oscillation about f. What is the restricted set of interpolation functions for which the statement is true and interesting? Harold f 03:27, 20 August 2006 (UTC)

## Interpolation using natural cubic spline

In the formulas for interpolation using natural cubic spline, it seems that one could replace each $z_i$ by $6z_i$, enabling one to cancel 6s and obtain
$$S_i(x)=\frac{z_{i+1}(x-x_i)^3+z_i(x_{i+1}-x)^3}{h_i}+\left(\frac{y_{i+1}}{h_i}-h_iz_{i+1}\right)(x-x_i)+\left(\frac{y_i}{h_i}-h_iz_i\right)(x_{i+1}-x)$$
and
$$h_{i-1}z_{i-1}+2(h_{i-1}+h_i)z_i+h_iz_{i+1}=\frac{y_{i+1}-y_i}{h_i}-\frac{y_i-y_{i-1}}{h_{i-1}}$$
Is there any reason not to do that? --Jwwalker 01:51, 26 July 2006 (UTC)

## Graphs

Those graphs are a little kooky.. are they 1d or 2d? approximating in what sense? The second one in particular, a spline approximation of an even function should still be even... I agree, moving it here:

[Figure moved from article: "The graph below is an example of a spline function (blue lines) and the function it is approximating (red lines) for k=4."]

This might be Quadratic spline, but it is NOT how one would normally set up the values.
I am pretty sure the quadratic curve should cross/align the midpoints of the linear interpolation curve. —Preceding unsigned comment added by 80.216.134.151 (talk) 14:17, 9 December 2008 (UTC)

## Usage of the word 'knot'

Is it not 'nodes' in English instead of 'knots'? I know you would say 'knots' when translating directly from e.g. German, but all English textbooks I have call them 'nodes'. Somewikian (talk) 16:55, 6 January 2009 (UTC)

## Minimality Section

In the minimality of cubic spline section it says that the cubic spline minimizes $J(f)=\int_a^b|f''(x)|^2\,dx$ over the functions in the Sobolev space $H^2([a,b])$. That doesn't make any sense, because the minimum is attained, for example, by the null function. I think the minimization should be over the functions that interpolate a given function, but I'm not sure; someone please correct this. Bunder (talk) 13:10, 12 March 2009 (UTC)

## complete cubic spline

Is this formulation for the complete cubic spline correct?
$$S''(x_0)=f'(x_0),\quad S''(x_n)=f'(x_n)$$
The mixing of second and first derivatives seems off to me. According to [1], I think it should be (all first derivatives):
$$S'(x_0)=f'(x_0),\quad S'(x_n)=f'(x_n)$$
-- 128.104.112.179 (talk) 18:23, 20 October 2009 (UTC)

## Rewrite

{{Mergefrom|Spline (mathematics)#Algorithm for computing natural cubic splines|date=February 2010}} and "Spline (mathematics)" also

I agree! I have here a draft of a radical rewrite that could/should replace both articles! Comments!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Making hand-drawn technical drawings for ship building or other constructions, elastic rulers were used that were bent to pass through a number of predefined points (the "knots"), as illustrated by the following figure.

[Figure: Interpolation with cubic splines between eight points. Making traditional hand-drawn technical drawings for ship-building etc., flexible rulers were bent to follow pre-defined points (the "knots").]

The approach to mathematically model the shape of such elastic rulers fixed by n+1 "knots" $(x_i,y_i)$, $i=0,1,\cdots,n$, is to interpolate between all the pairs of "knots" $(x_{i-1},y_{i-1})$ and $(x_i,y_i)$ with polynomials $y=q_i(x)$, $i=1,2,\cdots,n$.

The curvature of a curve $y=f(x)$ is
$$\kappa=\frac{y''}{(1+y'^2)^{3/2}}$$
As the elastic ruler will take a shape that minimizes the bending under the constraint of passing through all "knots", both $y'$ and $y''$ will be continuous everywhere, also at the "knots". To achieve this one must have that $q'_i(x_i)=q'_{i+1}(x_i)$ and $q''_i(x_i)=q''_{i+1}(x_i)$ for all $i$, $1\le i\le n-1$. This can only be achieved if polynomials of degree 3 or higher are used. The classical approach is to use polynomials of degree 3; this is the case of "Cubic splines".
### Cubic splines

A third order polynomial $q(x)$ for which
$$q(x_1)=y_1,\quad q(x_2)=y_2,\quad q'(x_1)=k_1,\quad q'(x_2)=k_2$$
can be written in the symmetrical form
$$q = (1-t)\,y_1 + t\,y_2 + t(1-t)\,\bigl(a(1-t)+bt\bigr) \qquad (1)$$
where
$$t=\frac{x-x_1}{x_2-x_1} \qquad (2)$$
and
$$a=k_1(x_2-x_1)-(y_2-y_1) \qquad (3)$$
$$b=-k_2(x_2-x_1)+(y_2-y_1) \qquad (4)$$
Computing the derivatives one finds that
$$q' = \frac{y_2-y_1}{x_2-x_1} + (1-2t)\,\frac{a(1-t)+bt}{x_2-x_1} + t(1-t)\,\frac{b-a}{x_2-x_1} \qquad (5)$$
$$q'' = 2\,\frac{b-2a+(a-b)\,3t}{(x_2-x_1)^2} \qquad (6)$$
Setting $x=x_1$ and $x=x_2$ in (6) one gets that
$$q''(x_1)=2\,\frac{b-2a}{(x_2-x_1)^2} \qquad (7)$$
$$q''(x_2)=2\,\frac{a-2b}{(x_2-x_1)^2} \qquad (8)$$

If now $(x_i,y_i)$, $i=0,1,\cdots,n$, are n+1 points and
$$q_i = (1-t)\,y_{i-1} + t\,y_i + t(1-t)\,\bigl(a_i(1-t)+b_i t\bigr), \quad i=1,\cdots,n \qquad (9)$$
where $t=\frac{x-x_{i-1}}{x_i-x_{i-1}}$, are n third degree polynomials interpolating $y$ in the interval $x_{i-1}\le x\le x_i$ for $i=1,\cdots,n$, such that $q'_i(x_i)=q'_{i+1}(x_i)$ for $i=1,\cdots,n-1$, then the n polynomials together define a differentiable function in the interval $x_0\le x\le x_n$ and
$$a_i=k_{i-1}(x_i-x_{i-1})-(y_i-y_{i-1}) \qquad (10)$$
$$b_i=-k_i(x_i-x_{i-1})+(y_i-y_{i-1}) \qquad (11)$$
for $i=1,\cdots,n$, where
$$k_0=q'_1(x_0) \qquad (12)$$
$$k_i=q'_i(x_i)=q'_{i+1}(x_i),\quad i=1,\cdots,n-1 \qquad (13)$$
$$k_n=q'_n(x_n) \qquad (14)$$
If the sequence $k_0,k_1,\cdots,k_n$ is such that in addition $q''_i(x_i)=q''_{i+1}(x_i)$ for $i=1,\cdots,n-1$, the resulting function will even have a continuous second derivative. From (7), (8), (10) and (11) it follows that this is the case if and only if
$$\frac{k_{i-1}}{x_i-x_{i-1}} + \left(\frac{1}{x_i-x_{i-1}}+\frac{1}{x_{i+1}-x_i}\right)2k_i + \frac{k_{i+1}}{x_{i+1}-x_i} = 3\left(\frac{y_i-y_{i-1}}{(x_i-x_{i-1})^2}+\frac{y_{i+1}-y_i}{(x_{i+1}-x_i)^2}\right) \qquad (15)$$
for $i=1,\cdots,n-1$.

The relations (15) are n-1 linear equations for the n+1 values $k_0,k_1,\cdots,k_n$. For the elastic rulers being the model for the spline interpolation, one has that to the left of the left-most "knot" and to the right of the right-most "knot" the ruler can move freely and will therefore take the form of a straight line with $q''=0$. As $q''$ should be a continuous function of $x$, one gets that for "Natural Splines" one in addition to the n-1 linear equations (15) should have that
$$q''_1(x_0) = 2\,\frac{3(y_1-y_0)-(k_1+2k_0)(x_1-x_0)}{(x_1-x_0)^2}=0$$
$$q''_n(x_n) = -2\,\frac{3(y_n-y_{n-1})-(2k_n+k_{n-1})(x_n-x_{n-1})}{(x_n-x_{n-1})^2}=0$$
i.e.
that
$$\frac{2}{x_1-x_0}k_0 + \frac{1}{x_1-x_0}k_1 = 3\,\frac{y_1-y_0}{(x_1-x_0)^2} \qquad (16)$$
$$\frac{1}{x_n-x_{n-1}}k_{n-1} + \frac{2}{x_n-x_{n-1}}k_n = 3\,\frac{y_n-y_{n-1}}{(x_n-x_{n-1})^2} \qquad (17)$$
(15) together with (16) and (17) constitute n+1 linear equations that uniquely define the n+1 parameters $k_0,k_1,\cdots,k_n$.

Example: In case of three points the values for $k_0,k_1,k_2$ are found by solving the linear equation system
$$\begin{bmatrix}a_{11}&a_{12}&0\\a_{21}&a_{22}&a_{23}\\0&a_{32}&a_{33}\end{bmatrix}\begin{bmatrix}k_0\\k_1\\k_2\end{bmatrix}=\begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix}$$
with
$$a_{11}=\frac{2}{x_1-x_0},\quad a_{12}=\frac{1}{x_1-x_0},\quad a_{21}=\frac{1}{x_1-x_0},\quad a_{22}=2\left(\frac{1}{x_1-x_0}+\frac{1}{x_2-x_1}\right),$$
$$a_{23}=\frac{1}{x_2-x_1},\quad a_{32}=\frac{1}{x_2-x_1},\quad a_{33}=\frac{2}{x_2-x_1},$$
$$b_1=3\,\frac{y_1-y_0}{(x_1-x_0)^2},\quad b_2=3\left(\frac{y_1-y_0}{(x_1-x_0)^2}+\frac{y_2-y_1}{(x_2-x_1)^2}\right),\quad b_3=3\,\frac{y_2-y_1}{(x_2-x_1)^2}.$$
For the three points $(-1,0.5),\ (0,0),\ (3,3)$ one gets that
$$k_0=-0.6875,\quad k_1=-0.1250,\quad k_2=1.5625$$
and from (10) and (11) that
$$a_1=k_0(x_1-x_0)-(y_1-y_0)=-0.1875$$
$$b_1=-k_1(x_1-x_0)+(y_1-y_0)=-0.3750$$
$$a_2=k_1(x_2-x_1)-(y_2-y_1)=-3.3750$$
$$b_2=-k_2(x_2-x_1)+(y_2-y_1)=-1.6875$$
In the following figure the spline function consisting of the two cubic polynomials $q_1(x)$ and $q_2(x)$ given by (9) is displayed.

[Figure: Interpolation with cubic "natural" splines between three points.]

Stamcose (talk) 11:20, 26 January 2011 (UTC)

I am not a pro when it comes to splines, but it seems to me that equation 17 is wrong when I compare it to equation 16. -Nic — Preceding unsigned comment added by 142.41.247.10 (talk) 20:52, 10 January 2013 (UTC)

## Completely wrong!

Section "Spline interpolant" says:

"Using polynomial interpolation, the polynomial of degree n which interpolates the data set is uniquely defined by the data points. The spline of degree n which interpolates the same data set is not uniquely defined, and we have to fill in n−1 additional degrees of freedom to construct a unique spline interpolant."

But it is not the question of using a spline of degree n to interpolate n points, that would be nonsense! It is about using splines of degree 3, for which there are two additional degrees of freedom because there are n-1 linear equations! Using splines of degree 2 there is one additional degree of freedom, but spline interpolation with splines of degree 2 is anyway not really an option! But a radical re-write of the article is also for other reasons required! Stamcose (talk) 10:21, 30 January 2011 (UTC)
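The three-point example above is easy to check numerically; the sketch below (added to this talk page, not part of the original discussion) uses SciPy's natural-spline boundary conditions and should reproduce the knot slopes $k_0,k_1,k_2$ from the derivation.

```python
# Check the worked example: natural cubic spline through
# (-1, 0.5), (0, 0), (3, 3); the knot slopes should match
# k0 = -0.6875, k1 = -0.1250, k2 = 1.5625 from the derivation above.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([-1.0, 0.0, 3.0])
y = np.array([0.5, 0.0, 3.0])
cs = CubicSpline(x, y, bc_type='natural')   # q'' = 0 at both end knots
print(cs(x, 1))                              # first derivatives at the knots
# expected: [-0.6875 -0.125   1.5625]
```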
{}
ruby,gitlab,raspberry-pi2 , Installing GitLab CI Runner on Raspberry Pi 2 (Raspbian)

## Question:

Tag: ruby,gitlab,raspberry-pi2

I want to install the GitLab Runner for CI on my RPI 2 machine running Raspbian. There is no armhf package available or mentioned on the official page: https://gitlab.com/gitlab-org/omnibus-gitlab-runner/blob/master/doc/install/README.md and I could not find one on the net. I've tried building it from source, but it failed to make Ruby 2.1.5. I tried to install it as per the following guide: http://qiita.com/honeniq/items/b5c767f947725280662e but it fails:

    Installing ruby-2.1.5...
    BUILD FAILED (Raspbian GNU/Linux 7 using ruby-build 20150519-11-g6f1ed3d)
    Inspect or clean up the working tree at /tmp/ruby-build.20150616110149.29126
    Results logged to /tmp/ruby-build.20150616110149.29126.log
    Last 10 log lines:
    make[1]: Nothing to be done for 'srcs'.
    make[1]: Leaving directory '/tmp/ruby-build.20150616110149.29126/ruby-2.1.5'
    generating transdb.h
    verifying static-library libruby-static.a
    collect2: ld returned 1 exit status
    Makefile:217: recipe for target 'libruby-static.a' failed
    make: *** [libruby-static.a] Error 1
    make: *** Waiting for unfinished jobs....
    transdb.h unchanged

Have any of you guys managed to install and run the CI runner?

Managed to install an unofficial runner. Guide here: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-manually.md
{}
Topology induced by a metric

The set of all open balls of a metric space generates a topology and forms a basis for that topology. Recall that a metric space is a set X together with a function d : X × X → ℝ≥0 satisfying non-negativity, symmetry, and the triangle inequality. Given a metric space (X, d), the collection B = { B_r(x) | x ∈ X and r ∈ ℝ>0 } satisfies the conditions needed to generate a topology, called the metric topology, which can be defined in terms of the metric. (Notice that the union over the empty family of balls is the empty set, so the empty set is open.) More generally, given sets X₀ and X₁, a map f : X₀ → X₁ into a topological space induces a topology on X₀: the coarsest topology such that f is continuous. For the most part I will just say "a metric space X", using the letter d for the metric unless indicated otherwise. (Source: https://www.maths.kisogo.com/index.php?title=Topology_induced_by_a_metric)

Different metrics can induce the same topology. On ℝⁿ the square (maximum) metric and the euclidean metric induce the same topology: every euclidean ball contains a square ball about each of its points and vice versa, so each topology is finer than the other, and the two define the same subsets of P(ℝⁿ). Another example is a bounded metric inducing the same topology as d, namely d/(1 + d). The picture to keep in mind: let p be a point inside a circle with center c and let q be any point on the circle; we know that the distance from c to p is less than the distance from c to q, which is why a small ball around p still fits inside the circle. Every norm induces a metric, and hence a topology, on a vector space, but not all topologies can be generated by a metric. By contrast, a Lorentzian metric need not recover the manifold topology from its causal structure: on the cylinder S¹ × ℝ with the time direction along S¹, any two points are joined by a timelike curve, so the only non-empty open diamond is the whole spacetime. Fuzzy topology plays an important role in quantum particle physics, and the fuzzy topology induced by a fuzzy metric space was investigated by many authors in the literature; see for example [1–6]. For metrics whose range is disconnected, see K. Broughan, "Topologies induced by metrics with disconnected range", Volume 25, Issue 1.

Valuation rings and the induced metric. In an earlier section we placed a topology on the valuation group G; in this section we place a topology on the field F itself, and in fact F becomes a metric space. This process assumes the valuation group G can be embedded in the reals. Choose a real number c between 0 and 1 and raise c to the power given by the valuation: for x ≠ y set |x, y| = c^v(x−y), and let the metric equal 0 when x = y. Larger valuations lead to smaller metrics. Since the valuation does not depend on the sign, |x, y| = |y, x|. For the triangle inequality, let x, y, and z be elements of the field F: the valuation of z − x = (z − y) + (y − x) is at least the lesser of the valuations of z − y and y − x, so |x, z| is bounded by the larger of |x, y| and |y, z|, and that proves the triangle inequality (indeed an ultrametric version of it). Applied to the p-adic valuation, this is called the p-adic topology on the rationals.

The field operations are continuous in this metric. Fix x and y, let v be any valuation that is larger than the valuation of x or y, and let ε = c^v. Perturb x by s and y by t, where s and t are less than ε, that is, have valuation at least v. For addition, s + t has valuation at least v, and the same is true of the sum, so (x + s) + (y + t) is within ε of x + y. Multiplication is also continuous: (x + s)(y + t) − xy = xt + sy + st; since s is under our control, make sure its valuation is at least v minus the valuation of y, and similarly for t; then xt, sy, and st each have valuation at least v, and the product is within ε of xy. Next look at the inverse map 1/x. The difference between 1/x and 1/(x + s) is s over x(x + s); the denominator has essentially the same valuation as x², which is twice the valuation of x, so if s has valuation at least v plus twice the valuation of x, then the valuation of s/x² is at least v, and we are within ε of 1/x.
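As a concrete sketch of the valuation-induced metric (my own illustration, not part of the text above): the 5-adic valuation on the rationals with c = 0.5, checked against the ultrametric inequality. The prime, the constant, and the sample points are all arbitrary choices.

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    num, den = x.numerator, x.denominator
    v = 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def dist(x, y, p, c=0.5):
    """Induced metric |x,y| = c**v(x-y), with d(x,x) = 0."""
    return 0.0 if x == y else c ** vp(x - y, p)

p = 5
x, y, z = Fraction(7, 4), Fraction(2), Fraction(1, 10)
# Ultrametric inequality: d(x,z) <= max(d(x,y), d(y,z)).
assert dist(x, z, p) <= max(dist(x, y, p), dist(y, z, p))
print(dist(x, y, p), dist(y, z, p), dist(x, z, p))  # 1.0 2.0 2.0
```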
{}
# Evaluating the improper integral $\int\limits_{0}^{\infty} \frac{x^{a-1} - x^{b-1}}{1-x} \ dx$

How does one evaluate the integral

$$\int\limits_{0}^{\infty} \frac{x^{a-1} - x^{b-1}}{1-x} \ dx \quad \text{for} \ a,b \in (0,1)?$$

Here is an illustration of how we can evaluate the integral without complex analysis.

$$\text{Let} \quad I(a) = \int_0^\infty \frac{x^{a-1}}{1-x} dx.$$

Split the range into the two intervals $(0,1)$ and $(1,\infty)$, then use the substitution $x=1/t$ in the latter part to get

$$I(a) = \int_0^1 \frac{x^{a-1}-x^{-a}}{1-x} dx.$$

Expand the integrand as a power series using $(1-x)^{-1} = \sum_{n=0}^\infty x^n$ and integrate to get

$$I(a) = \frac{1}{a} + \sum_{n=1}^\infty \left( \frac{1}{a+n} + \frac{1}{a-n} \right).$$

Now, by differentiating logarithmically the product formula for $\sin x$,

$$\sin \pi x = \pi x \prod_{n=1}^{\infty} \left( 1 - \frac{x^2}{n^2} \right),$$

we note that

$$\pi \cot \pi x = \frac{1}{x} + \sum_{n=1}^\infty \left( \frac{1}{x+n} + \frac{1}{x-n} \right).$$

Thus $I(a) = \pi \cot(\pi a)$, and the result follows since the integral in question is $I(a)-I(b)$.

I think it could be done like the proof of $B(x,1-x)=\pi\csc(\pi x)$. Let $x=\exp(y)$, then evaluate the integral $\int_{-\infty}^\infty \frac{\exp(ay)-\exp(by)}{\exp(y)-1}\,dy=\pi(\cot(a\pi)-\cot(b\pi))$ using contour integration. I'll leave the details as an exercise (that I can't be bothered finishing!).

To round off Simon's answer, see Example 4.3.3 (p. 244--245) in Complex Variables: Introduction and Applications by Ablowitz & Fokas.

This looks reminiscent of Frullani's integrals; see for example the article:
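A quick numerical sanity check of the closed form (my own addition; a = 0.3 and b = 0.7 are arbitrary test values in (0,1), and the quadrature interval is split at x = 1, where the combined integrand has only a removable singularity with limit b − a):

```python
import mpmath as mp

a, b = mp.mpf("0.3"), mp.mpf("0.7")
f = lambda x: (x**(a - 1) - x**(b - 1)) / (1 - x)
# Split the path at the removable singularity x = 1; mpmath's
# tanh-sinh quadrature never evaluates exactly at the endpoints.
numeric = mp.quad(f, [0, 1, mp.inf])
closed = mp.pi * (mp.cot(mp.pi * a) - mp.cot(mp.pi * b))
print(numeric, closed)  # both ~ 4.5646
```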
{}
# Time taken to hit a pattern of heads and tails in a series of coin-tosses

Inspired by Peter Donnelly's talk at TED, in which he discusses how long it would take for a certain pattern to appear in a series of coin tosses, I created the following script in R. Given two patterns 'hth' and 'htt', it calculates how long it takes (i.e. how many coin tosses) on average before you hit one of these patterns.

```r
coin <- c('h','t')

hit <- function(seq) {
  miss <- TRUE
  fail <- 3
  trp <- sample(coin, 3, replace = T)
  while (miss) {
    if (all(seq == trp)) {
      miss <- FALSE
    } else {
      trp <- c(trp[2], trp[3], sample(coin, 1, T))
      fail <- fail + 1
    }
  }
  return(fail)
}

n <- 5000
trials <- data.frame("hth" = rep(NA, n), "htt" = rep(NA, n))
hth <- c('h','t','h')
htt <- c('h','t','t')

set.seed(4321)
for (i in 1:n) {
  trials[i,] <- c(hit(hth), hit(htt))
}
summary(trials)
```

The summary statistics are as follows,

```
      hth             htt
 Min.   : 3.00   Min.   : 3.000
 1st Qu.: 4.00   1st Qu.: 5.000
 Median : 8.00   Median : 7.000
 Mean   :10.08   Mean   : 8.014
 3rd Qu.:13.00   3rd Qu.:10.000
 Max.   :70.00   Max.   :42.000
```

In the talk it is explained that the average number of coin tosses would be different for the two patterns, as can be seen from my simulation. Despite watching the talk a few times I'm still not quite getting why this would be the case. I understand that 'hth' overlaps itself and intuitively I would think that you would hit 'hth' sooner than 'htt', but this is not the case. I would really appreciate it if someone could explain this to me.

Think about what happens the first time you get an H followed by a T.

Case 1: you're looking for H-T-H, and you've seen H-T for the first time. If the next toss is H, you're done. If it's T, you're back to square one: since the last two tosses were T-T you now need the full H-T-H.

Case 2: you're looking for H-T-T, and you've seen H-T for the first time. If the next toss is T, you're done. If it's H, this is clearly a setback; however, it's a minor one since you now have the H and only need -T-T. If the next toss is H, this makes your situation no worse, whereas T makes it better, and so on. Put another way, in case 2 the first H that you see takes you 1/3 of the way, and from that point on you never have to start from scratch. This is not true in case 1, where a T-T erases all progress you've made.

• Ohh, so in this scenario the coin flipping doesn't stop when one pattern wins! That makes sense. This confused me for a while (haven't watched the TED talk) so I figured I'd comment to help others who might have been thinking the same thing. – user136692 Nov 2 '16 at 12:54

Suppose you toss the coin $8n+2$ times and count the number of times you see a "HTH" pattern (including overlaps). The expected number is $n$. But it is also $n$ for "HTT". Since "HTH" can overlap itself and "HTT" cannot, you would expect more clumping with "HTH", which increases the expected time for the first appearance of "HTH". Another way of looking at it is that after reaching "HT", a "T" will send "HTH" back to the start, while an "H" will start progress towards a possible "HTT".

You can work out the two expected times using Conway's algorithm [I think], by looking at the overlaps: if the first $k$ tosses of the pattern match the last $k$, then add $2^k$. So for "HTH" you get $2+0+8=10$ as the expectation and for "HTT" you get $0+0+8=8$, confirming your simulation. The oddness does not stop there.
If you have a race between the two patterns, they have an equal probability of appearing first, and the expected time until one of them appears is $5$ (one more than the expected time to get "HT", after which one of them must appear). It gets worse: in Penney's game you choose a pattern to race and then I choose another. If you choose "HTH" then I will choose "HHT" and have 2:1 odds of winning; if you choose "HTT" then I will choose "HHT" again and still have 2:1 odds in my favour. But if you choose "HHT" then I will choose "THH" and have 3:1 odds. The second player can always bias the odds, and the best choices are not transitive.

• +1 Thanks for the link to Penney's game; more sleepless nights :) – lafrasu Jun 21 '11 at 18:53
• Dear Henry, I asked a similar question on this site, and got told to look for an answer here. I looked at Penney's game, but still can't work out my problem. Any help will be appreciated. – superAnnoyingUser Apr 3 '13 at 20:09

I like to draw pictures. These diagrams are finite state automata (FSAs). They are tiny children's games (like Chutes and Ladders) that "recognize" or "accept" the HTT and HTH sequences, respectively, by moving a token from one node to another in response to the coin flips. The token begins at the top node, pointed to by an arrow (line i). After each toss of the coin, the token is moved along the edge labeled with that coin's outcome (either H or T) to another node (which I will call the "H node" and "T node," respectively). When the token lands on a terminal node (no outgoing arrows, indicated in green) the game is over and the FSA has accepted the sequence. Think of each FSA as progressing vertically down a linear track. Tossing the "right" sequence of heads and tails causes the token to progress towards its destination. Tossing a "wrong" value causes the token to back up (or at least stand still). The token backs up to the most advanced state corresponding to the most recent tosses. For instance, the HTT FSA at line ii stays put at line ii upon seeing a head, because that head could be the initial sequence of an eventual HTT. It does not go all the way back to the beginning, because that would effectively ignore this last head altogether. After verifying that these two games indeed correspond to HTT and HTH as claimed, and comparing them line by line, it should now be obvious that HTH is harder to win. They differ in their graphical structure only on line iii, where an H takes HTT back to line ii (and a T accepts) but, in HTH, a T takes us all the way back to line i (and an H accepts). The penalty at line iii in playing HTH is more severe than the penalty in playing HTT. This can be quantified. I have labeled the nodes of these two FSAs with the expected number of tosses needed for acceptance. Let us call these the node "values." The labeling begins by (1) writing the obvious value of 0 at the accepting nodes. Let the probability of heads be p(H) and the probability of tails be 1 - p(H) = p(T). (For a fair coin, both probabilities equal 1/2.) Because each coin flip adds one to the number of tosses, (2) the value of a node equals one plus p(H) times the value of the H node plus p(T) times the value of the T node. These rules determine the values. It's a quick and informative exercise to verify that the labeled values (assuming a fair coin) are correct. As an example, consider the value for HTH on line ii.
The rule says 8 must be 1 more than the average of 8 (the value of the H node on line i) and 6 (the value of the T node on line iii): sure enough, 8 = 1 + (1/2)*8 + (1/2)*6. You can just as readily check the remaining five values in the illustration.

• The FSA approach is a great way to analyze Penney's Game (in the reply by @Henry). The values are labeled a little differently: the FSA now has one accepting node per pattern. To find the chance of your pattern winning, label its accepting node with 1 and all other accepting nodes with 0. The value at any other node equals the average of the values of its H and T nodes. The value of the (unique) start node is the chance of winning. – whuber Jun 21 '11 at 23:24
• I find your picture helpful & intuitive, @whuber, but I don't quite follow the rules for assigning the numbers to the nodes. Eg, your example uses 1 + (1/2)*10 + (1/2)*6. Shouldn't this be 9? Since I gather you fill these out by starting w/ $0$ at the terminal node, it might be easier to follow if your example was how to get the # for node iii, given iv=0. – gung - Reinstate Monica Jul 26 '13 at 15:46
• @gung Thanks for catching that. I fixed the example. However, there is a typo in the figure: it looks like the value of HTT at line iii should be 4, rather than 2. – whuber Jul 26 '13 at 16:03

Some great answers. I'd like to take a slightly different tack, and address the question of counter-intuitivity. (I quite agree, BTW.) Here's how I make sense of it. Imagine a column of random sequential coin-toss results printed on a paper tape, consisting of the letters "H" and "T". Arbitrarily tear off a section of this tape, and make an identical copy. On a given tape, the sequence HTH and the sequence HTT will each occur as often, if the tape is long enough. But occasionally the HTH instances will run together, i.e. HTHTH (or even, very occasionally, HTHTHTH). This overlap cannot happen with HTT instances. Use a highlighter to pick out the "stripes" of successful outcomes, HTH on one tape and HTT on the other. A few of the HTH stripes will be shorter due to the overlap. Consequently the gaps between them, on average, will be slightly longer than on the other tape. It's a bit like waiting for a bus when, on average, there's one every five minutes. If the buses are allowed to overlap each other, the interval will be slightly longer than five minutes, on average, because sometimes two will go past together. If you arrive at an arbitrary time, you'll be waiting slightly longer for the next (to you, first) bus, on average, if they're allowed to overlap.

I was looking for the intuition to this in the integer case (as I'm slogging through Ross' Intro. to Probability Models). So I was thinking about integer cases. I found this helped: Let $A$ be the symbol needed to begin the pattern I'm waiting for. Let $B$ be the symbol needed to complete the pattern I'm waiting for. In the case of an overlap, roughly $A=B$ and so $P(A \cap \tilde{B})=0$. Whereas in the case of no overlap $A \ne B$ and so $P(A \cap \tilde{B}) > 0$. So, let me imagine that I have a chance to finish the pattern on the next draw. I draw the next symbol and it doesn't finish the pattern. In the case where my pattern doesn't overlap, the symbol drawn might still allow me to begin building the pattern from the beginning again. In the case of an overlap, the symbol I needed to finish my partial pattern was the same as the symbol I would need to start rebuilding.
So I can't do either, and therefore will definitely need to wait until the next draw for a chance to start building again.
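For completeness, a tiny sketch of the overlap rule Henry quotes (valid for a fair coin; the pattern strings below are just illustrative), added here for readers who want to check other patterns:

```python
# Conway's "leading numbers" rule for a fair coin: add 2**k whenever
# the first k symbols of the pattern equal the last k symbols.
def expected_tosses(pattern):
    n = len(pattern)
    return sum(2**k for k in range(1, n + 1)
               if pattern[:k] == pattern[-k:])

print(expected_tosses("HTH"))   # 10, matching the simulated mean ~10.08
print(expected_tosses("HTT"))   # 8,  matching the simulated mean ~8.014
print(expected_tosses("HHHH"))  # 2 + 4 + 8 + 16 = 30
```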
{}
# The twin prime conjecture

Maynard, J

1 January 2019

## Journal:

Japanese Journal of Mathematics

## Last Updated:

2019-07-17T11:01:30.657+01:00

## DOI:

10.1007/s11537-019-1837-z

## abstract:

© 2019, The Mathematical Society of Japan and Springer Japan KK, part of Springer Nature. The Twin Prime Conjecture asserts that there should be infinitely many pairs of primes which differ by 2. Unfortunately this long-standing conjecture remains open, but recently there have been several dramatic developments making partial progress. We survey the key ideas behind proofs of bounded gaps between primes (due to Zhang, Tao and the author) and developments on Chowla's conjecture (due to Matomäki, Radziwiłł and Tao).

998167 Submitted Journal Article
{}
## anonymous 3 years ago If a 3-letter "word" is formed by randomly choosing 3 letters from the word OCEAN, what is the probability that it is composed only of vowels? First draw: P(vowel) = 3/5 Second draw: P(vowel) = 2/4 Third draw: P(vowel) = 1/3 $P(3\ vowels)=\frac{3\times 2\times 1}{5\times 4\times 3}=\frac{6}{60}=\frac{1}{10}$
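A brute-force check of that 1/10 (my own addition, enumerating all 3-letter selections directly):

```python
from itertools import combinations

letters = "OCEAN"
vowels = set("OEA")  # the three vowels in OCEAN
combos = list(combinations(letters, 3))          # C(5,3) = 10 selections
hits = [c for c in combos if set(c) <= vowels]   # only {O, E, A} qualifies
print(len(hits), len(combos), len(hits) / len(combos))  # 1 10 0.1
```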
{}
# Algebraic numbers close to both 0 and 1

Released Journal Article

##### MPS-Authors

/persons/resource/persons236497 Zagier, Don, Max Planck Institute for Mathematics, Max Planck Society

##### Citation

Zagier, D. (1993). Algebraic numbers close to both 0 and 1. Mathematics of Computation, 61(203), 485-491. doi:10.2307/2152970. Cite as: http://hdl.handle.net/21.11116/0000-0004-3903-9

##### Abstract

Let $\alpha$ be a number algebraic over the rationals and let $H(\alpha)$ denote the absolute logarithmic height of $\alpha$, which can be defined as $H(\alpha) = \log M(f)^{1/n}$, where $\alpha$ is a root of the irreducible polynomial $f(x)$ with rational coefficients and degree $n$, and where $M(f)$ denotes the Mahler measure of $f$. The author gives an elementary proof of a sharp version of a remarkable inequality of Shouwu Zhang. He shows that, for all $\alpha \ne 0, 1, (1\pm\sqrt{-3})/2$, the following inequality holds: $$H(\alpha) + H(1-\alpha) \geq \frac{1}{2}\log\frac{1+\sqrt{5}}{2} = 0.2406059\dots,$$ with equality if and only if $\alpha$ or $1-\alpha$ is a primitive 10th root of unity. He also proves a sharp projective version of this inequality for the curve $x+y+z=0$ and gives an outline of how to prove similar results for other curves.
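A numeric check of the equality case (my own addition, assuming the standard conjugate formula for the height of an algebraic integer, $H(\beta) = \frac{1}{n}\sum_i \log\max(1, |\beta_i|)$; here $\alpha$ runs over the primitive 10th roots of unity, so $H(\alpha)=0$ and $H(1-\alpha)$ should equal $\frac{1}{2}\log\frac{1+\sqrt5}{2}$):

```python
import cmath, math

# Primitive 10th roots of unity: exp(2*pi*i*k/10) for k coprime to 10.
prim = [cmath.exp(2j * cmath.pi * k / 10) for k in (1, 3, 7, 9)]
# The conjugates of 1 - zeta are 1 - zeta^k over the same k, degree 4.
H_one_minus = sum(math.log(max(1.0, abs(1 - z))) for z in prim) / 4
print(H_one_minus)                                  # ~ 0.2406059...
print(0.5 * math.log((1 + math.sqrt(5)) / 2))       # ~ 0.2406059...
```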
{}
##### Problem-Solving Case: Suspension of a Sportscaster. Marc Giangreco, a longtime and popular sports anchor of ABC station WLS-7, recently tweeted that President Donald Trump was a "cartoon lunatic" in a "country full of simpletons." The station, owned by the Disney Company, re...

##### N2tieg "e D I(n + 1)(2n + 1) 0 b %(n + 1) Kdlcln? Dilal 2n+l z4) { ? 0 0 f n(n + 1)(2n + 1) 6 0 co TiFuz)(i+u)z "4 0 0 i (n + 2) (n? 1)

##### A sailboat is sitting at rest near its dock. A rope attached to the bow of the boat is drawn in over a pulley that stands on a post on the end of the dock that is 5 feet higher than the bow. If the rope is being pulled in at a rate of 2 feet per second, how fast is the boat approaching the dock when the length of rope from bow to pulley is 13 feet?

##### Question 6 (2 points). Given a right-tailed hypothesis test where n = 49 and α = 0.025, what is the critical value? Choices: 72.46, 74.08, 75.19, 74.69.

##### INSTRUCTION: Perform all calculations up to six decimals. If necessary, calculate the value of z using two decimals. If necessary, calculate the value of t using three decimals.

##### Concept Question 2.12. The following table shows the relationship between workers and output for a small factory in the short run, with capital held constant. Find the marginal product of labor (MPL)...

##### Why do people choose to eat vegetarian because of health? Write 800 or 1000 words.

##### <!DOCTYPE html> <html> <head> <!-- JavaScript 6th Edition Chapter 4 Hands-on Project 4-3 Author: Da... Filename: index.htm --> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> ...

##### How do you write y = 3(x+1)^2 - 27 in standard form?

##### '[Etokt EtOH I Ph 2 78 0 1) LDA THF I 2) I

##### Given what you know about substitution reactions of square planar complexes, explain what would happen in the reaction [Pt(NH3)4]2+ + Cl- → [Pt(NH3)3Cl]+ + NH3 if the entering group were changed from Cl- to Br-. How would the rate be affected? Write the mechanism for the reaction (be sure to des...

##### Why would the probability of an employer offering retiree health insurance increase with the number of employees for large firms (over 200 workers)?

##### Question 16 (1 point). Which of the following is NOT a character that Mendel examined in his initial set of monohybrid crosses? Pod colour / Seed shape / Green seeds / Plant height / Flower colour

##### Explain how the amplitude and period of a sinusoidal graph are used to establish the scale on each coordinate axis.

##### Circle A has a radius of 2 and a center of (2, 5). Circle B has a radius of 3 and a center of (3, 8). If circle B is translated by <4, -1>, does it overlap circle A? If not, what is the minimum distance between points on both circles?
{}
Contributor
Posts: 25

# Proc report breaking after column help needed

Hi Guys, I have to prepare a report like this....

```
col1     col2  col3  col4  col5
name1    20    30    40    50
name2    60    70    80    90
...
Research       40          100
```

This Research variable is at line 10 and should only contain values for col3 and col5... I am currently using line @ and put functions in a compute after block... but it's not providing a good display... What approach should I apply here??? Any hints would be really appreciated.

SAS Super FREQ
Posts: 9,368

## Re: Proc report breaking after column help needed

Hi: You didn't show any code. Are you using an RBREAK AFTER statement? When you use PROC REPORT's automatic break processing statements, it will only summarize the variables and report items that have been defined (DEFINE statement) with an analytic usage, such as SUM. If you DEFINE the variables (such as COL2 and COL4) with a usage of DISPLAY, they will NOT be summarized when PROC REPORT does break processing. For example, the code below produced the attached screen shot.

cynthia

```
data names;
  length col1 $15;
  infile datalines;
  input col1 $ col2 col3 col4 col5;
  return;
datalines;
alan     20   30   40    50
barb     60   70   80    90
carl     30   40   50    60
dana     50   60   70    80
eddie    10   20   30    40
fran     10   10   10    10
george   20   20   20    20
;
run;

ods listing close;
ods html file='c:\temp\sum_col3_col5.html' style=sasweb;

proc report data=names nowd;
  title 'Proc REPORT -- only usage of SUM will summarize on grand total line';
  title2 'Usage of DISPLAY (COL2, COL4) will not summarize';
  column col1 col2 col3 col4 col5;
  define col1 / order;
  define col2 / display;
  define col3 / sum;
  define col4 / display;
  define col5 / sum;
  rbreak after / summarize;
  compute after;
    col1 = 'Research';
  endcomp;
run;

ods _all_ close;
title;
```

Contributor
Posts: 25

## Proc report breaking after column help needed

Thank you cynthia.... This gave me a good guideline to solve my purpose....

Super User
Posts: 10,778

## Re: Proc report breaking after column help needed

You can make a variable to hold this string. Adjust the blanks for your destination. Like:

```
str='Result:                         '||strip(col3.sum)||'           ';
line str $400.;
```

Ksharp

Contributor
Posts: 25

## Re: Proc report breaking after column help needed

Thanks a lot guys, I have one more question regarding conditional statements...... My condition is: if EXTR_REFR='Diversification' then only output column 2 and column 4.... Now I am confused whether I should do it in a DATA step or compute it in PROC REPORT. Please look into my compute after block and help me out.
As in my final row it is showing only nulls.

```
proc report data=final_temp nowd;
  column ('Initial absolute values before shock'
          SCC_AF_SHK_INCLDG_LAC_TPR_TUR=t1
          SCC_AF_SHK_EXCLDG_LAC_TPR_TUR=t2
          EXT_REFR
          ABS_VAL_OF_AST_BFR_SHK
          ABS_VAL_OF_LBY_BFR_SHK)
         ('Absolute values after shock'
          ABS_VAL_OF_AST_AF_SHK
          ABSVAL_LBY_AF_SHK_INCDG_LAC_TPR
          SCC_AF_SHK_RSK_INCDG_LAC_TPR
          ABSVAL_LBY_AF_SHK_EXCDG_LAC_TPR
          SCC_AF_SHK_RSK_EXCDG_LAC_TPR);
  define t1 / format=commax14.2 noprint;
  define t2 / format=commax14.2 noprint;
  define EXT_REFR / 'Life underwriting risk - Basic information';
  define ABS_VAL_OF_AST_BFR_SHK / display format=commax14.2 'Assets';
  define ABS_VAL_OF_LBY_BFR_SHK / display format=commax14.2 'Liabilities';
  define ABS_VAL_OF_AST_AF_SHK / display format=commax14.2 'Assets';
  define ABSVAL_LBY_AF_SHK_INCDG_LAC_TPR / display format=commax14.2
    'Liabilities (including the loss absorbing capacity of technical provisions)';
  define SCC_AF_SHK_RSK_INCDG_LAC_TPR / sum format=commax14.2
    'Net solvency capital requirement (including the loss-absorbing capacity of technical provisions)';
  define ABSVAL_LBY_AF_SHK_EXCDG_LAC_TPR / display format=commax14.2 'Liabilities';
  define SCC_AF_SHK_RSK_EXCDG_LAC_TPR / sum format=commax14.2
    'Gross solvency capital requirement';
  rbreak after /summarize;
  compute after;
    if EXT_REFR='Diversification' then do;
      _col2_=t1;
      _col4_=t2;
  endcomp;
run;
```

SAS Super FREQ
Posts: 9,368

## Proc report breaking after column help needed

Hi: I have a few comments... without data, it is hard to make more than just code-based comments:

1) You have a missing DO/END in your COMPUTE block. I would expect that you would get the following ERROR message in the log: ERROR: There was 1 unclosed DO block.

2) By the time a COMPUTE AFTER block is executed, PROC REPORT is at the bottom of the report, after all the report rows have been written. So I would expect the value of any variable (such as EXT_REFR) to be blank. Without a better understanding of what you are trying to do with this COMPUTE block, it will be hard to help with suggestions. It is possible that the COMPUTE AFTER is the wrong place to make this conditional test. In a COMPUTE block for EXT_REFR, you can test for the end of the report (or another condition):

```
compute EXT_REFR;
  if _break_ = '_RBREAK_' then do;
    ...more code...
  end;
endcomp;
```

but, if you want to change the value of another variable, such as ABS_VAL_OF_AST_BFR_SHK, when there is a certain value of EXT_REFR, then you would need something more like this:

```
compute ABS_VAL_OF_AST_BFR_SHK;
  if EXT_REFR = 'Diversification' then do;
    ...more code...
  end;
endcomp;
```

3) You say that you want to "input" col2 and col4. PROC REPORT does not have the concept of reading in variables like an INFILE statement, but since you used an assignment statement, it seems to me that you want to assign the value of t1 to what???? _COL2_ and _COL4_ are NOT on your COLUMN statement; are they temporary variables? If you were trying to mimic PROC REPORT's syntax, you have the wrong absolute column names. PROC REPORT does have the concept of absolute column names, but only with items under an ACROSS variable, which you do not have. In this case, it would not be appropriate to use absolute column names. So, it would be better to refer to your columns by their name in the COLUMN statement rather than calling them _COL2_ and _COL4_. Which columns do you mean as #2 and #4??? ABS_VAL_OF_AST_BFR_SHK and ABS_VAL_OF_AST_AF_SHK?
4) I don't actually understand why you create T1 and T2 -- maybe you only want to show the summary values for SCC_AF_SHK_INCLDG_LAC_TPR_TUR and SCC_AF_SHK_EXCLDG_LAC_TPR_TUR on the summary line and NOT on every row???? I would expect that the code you have is not giving you the desired results. It might be better for you to work with Tech Support on this report. They can look at all your data and all your code and help you achieve your desired results.

cynthia

Contributor
Posts: 25

## Re: Proc report breaking after column help needed

Hi Cynthia, thanks a lot for digging deep into this problem... Let me provide sample data so that we have a clear view of what I am set out to achieve.

```
Life underwriting risk   col1  col2  col3  col4  col5  col6  col7
Derivative                100   200   300   400   500   600   700
Morality                  200   400   500   600   700   800   900
Simplification                        400   600
Risk Category             400   600   700   900  1300   500   600
Manipulations                         300   200
Total category                       2600  3400
```

See here: if the value for Life underwriting risk is Simplification or Manipulations, then I have to output only col3 and col4 and the remaining fields should be blank, and finally I need a summary on col3 and col4. I can get the summary on these 2 fields by grouping, but how I should conditionally display them, and that too at the same position, is something I am worried about. Hope I have made my requirement clear..... Thanks a lot again for your efforts

Super User
Posts: 10,778

## Re: Proc report breaking after column help needed

I think your compute after is not right. ext_refr is always null in your situation. You'd better post some sample data and the final report you want to see. I also notice that the Total category row (2600, 3400) does not equal the total of col3 and col4.

Ksharp

SAS Super FREQ
Posts: 9,368

## Re: Proc report breaking after column help needed

Hi: Are the requirements really firmed up yet? Initially you said you wanted to sum only the COL3 and COL5 variables; now you only need to sum and show COL3 and COL4 on selected rows. I am still not certain what the data looks like, since your example seems like the final report and not the initial data. This modification to the program I posted above "blanks out" various columns based on the value of the EXT_REFR variable. Do note that I have called my numeric variables COL1-COL7....and refer to them as such in my COMPUTE block. If your numeric variables were named FRED, ETHEL, LUCY, RICKY, KERMIT, ELMO and OSCAR, then you would use those variable names in your COLUMN statement and in your COMPUTE block. (Of course, if those were the variable names, it would be a truly interesting report.) See the attached screenshot for a possible approach. Since the COL1-COL7 variables are numeric, they must be set to missing (.) and then using the MISSING option allows the . to display as a blank in the final report.

cynthia
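As a language-neutral illustration only (not SAS, and not the poster's real data): the "blank selected cells, then append a summary row" logic discussed in this thread looks roughly like this in pandas. All column names and numbers below are stand-ins modeled on the sample table.

```python
import pandas as pd

df = pd.DataFrame({
    "risk": ["Derivative", "Morality", "Simplification",
             "Risk Category", "Manipulations"],
    "col3": [300, 500, 400, 700, 300],
    "col4": [400, 600, 600, 900, 200],
    "col5": [500.0, 700.0, 1100.0, 1300.0, 900.0],
})
special = df["risk"].isin(["Simplification", "Manipulations"])
df.loc[special, "col5"] = None  # keep only col3/col4 on the special rows
total = pd.DataFrame([{"risk": "Total category",
                       "col3": df["col3"].sum(),
                       "col4": df["col4"].sum()}])
print(pd.concat([df, total], ignore_index=True))
```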
{}
# Notes ## Notes: Generalized harmonic form of Einstein’s equations from a gauge-fixed action The generalized harmonic formulation can be derived by adding a gauge-fixing term to the Einstein-Hilbert action ## Notes: A family of ramp functions and the Beta function Sometimes in a numerical method, you need to be able to continuously turn a calculation on or off in space or time. ## (Notes) When is a metric conformal to Ricci-flat? Integrability conditions when trying to solve for a conformal factor ## Note on simple(r) equations for Einstein-dilaton-Gauss-Bonnet and dynamical Chern-Simons theories Here to save you some algebra and column-inches ## Note on commutation coefficients in two ways An identity between vector commutation coefficients and coframe connection coefficients ## Note on exterior, interior, and Lie derivative superalgebra These three objects form a superalgebra! Whoa! ## Note on Lie derivatives and divergences When can you integrate-by-parts with Lie derivatives? ## Note on generating a divergence identity Special thanks to Ben Mares for coming up with this identity. ## Note on a (dimension-dependent) Weyl identity There is a nice 4-dimensional Weyl identity that I can never seem to remember off the top of my head; so I decided I need to write this note so I don’t have ... ## Notes on the Weyl, E/B, and 3+1 decompositions of Riemann, and curvature invariants The Weyl decomposition, further splitting into E/B, and computing in terms of 3+1 objects ## Notes on the pullback connection In an effort to keep myself organized, I decided I should type up some notes I have laying around.
{}
# All Questions

### Can you help me understand what a cryptographic "salt" is?

6k views. I'm a beginner to cryptography and looking to understand in very simple terms what a cryptographic "salt" is, when I might need to use it, and why I should or should not use it. Can anyone offer me a ...

### Can one generalize the Diffie-Hellman key exchange to three or more parties?

3k views. Does anyone know how to do a Diffie-Hellman or ECDH key exchange with more than two parties? I know how to do a key exchange between 2 parties, but I need to be able to have a key agreement between 3 ...
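For context on the second question: one common way to generalize Diffie-Hellman to n parties (as in group-DH protocols) circulates partial exponentials around a ring, so that after n-1 exponentiations every party holds g^(product of all secrets). A minimal three-party sketch is below; the modulus, generator, and secrets are toy values for illustration only, and nothing about this sketch is secure.

```python
p = 2**127 - 1           # toy Mersenne-prime modulus, not a safe choice
g = 3
a, b, c = 271, 314, 577  # Alice's, Bob's, and Carol's secret exponents

# Round 1: each party sends g^secret to the next party in the ring.
ga, gb, gc = pow(g, a, p), pow(g, b, p), pow(g, c, p)

# Round 2: each party exponentiates what it received and forwards it;
# the third exponentiation yields the shared key g^(abc) for everyone.
k_carol = pow(pow(ga, b, p), c, p)  # g^a -> g^(ab) -> g^(abc)
k_alice = pow(pow(gb, c, p), a, p)  # g^b -> g^(bc) -> g^(bca)
k_bob   = pow(pow(gc, a, p), b, p)  # g^c -> g^(ca) -> g^(cab)
assert k_alice == k_bob == k_carol
print(hex(k_alice))
```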
{}
I’ve been doing some programming in Excel’s VBA recently.  I have a series of blocks of code, where the only thing different in the blocks is that each block calls a different function.  Pseudo-code resembles

```
blah1
blah2
Call Function1
blah3
blah4
...

blah1
blah2
Call Function2
blah3
blah4
...

blah1
blah2
Call Function3
blah3
blah4
...
```

The obvious change here would be to create a subroutine with one of the blocks of code, and to call the subroutine passing the unique function as a parameter to the subroutine.  Excel 2003 VBA does not permit this directly, so my next thought was to use Eval, and pass the function name as a string.  Excel 2003 VBA does not include Eval.

But… Excel VBA does permit one to pass an object as an argument.  So the solution becomes:

• Define 3 different objects, and give each object a single function, each function named the same.
• Pass the object as an argument, instead of the function.
• Have the subroutine call the function via the object.
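For comparison, and purely as an illustration (this is not the original VBA; Python can pass functions directly, so the object indirection below only mirrors the VBA workaround, and all names are made up):

```python
# Each "strategy" object exposes a method with the same name, mirroring
# the VBA trick of giving each class one identically-named function.
class Strategy1:
    def run(self):
        print("function 1 body")

class Strategy2:
    def run(self):
        print("function 2 body")

def common_block(strategy):
    # blah1 / blah2 setup code would go here
    strategy.run()  # the only part that differs between the blocks
    # blah3 / blah4 teardown code would go here

for s in (Strategy1(), Strategy2()):
    common_block(s)
```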
{}
## anonymous one year ago A certain reaction has an activation energy of 46.39 kJ/mol. At what Kelvin temperature will the reaction proceed 5.50 times faster than it did at 299 K? • This Question is Open 1. Photon336 What's the pre-exponential factor for this reaction? 2. cuanchi http://chemwiki.ucdavis.edu/Physical_Chemistry/Kinetics/Modeling_Reaction_Kinetics/Temperature_Dependence_of_Reaction_Rates/The_Arrhenius_Law/The_Arrhenius_Law%3A_Activation_Energies You can use this formula, a derivation of Arrhenius; you don't need the A factor, since it is the same at both temperatures. Then k1/k2 = 1/5.5 and T1 = 299 K; T2 = ?, R = 8.314 J/molK, Ea = 46.39 x 10^3 J/mol. ln(k1/k2)=(1/T2−1/T1)Ea/R $\ln(\frac{ k _{1} }{ k _{2} })= (\frac{ 1 }{ T _{2} }-\frac{ 1 }{ T _{1} })\frac{ Ea }{ R }$ 3. Photon336 @cuanchi So we don't need A in this problem because it's the same for the particular reaction we're studying? So we basically isolate A and then rewrite the equation to solve for temperature. His question said 5.5 times faster, so does this mean k2 = 5.5 k1, and then substitute that into the expression: ln(5.5k1/k1) = ln(5.5) = (rest of equation)? I mean, temperature would affect our k. 4. cuanchi Yes, your logic is correct, but if you choose k2 = 5.5, then in the formula ln(k1/k2) = ln(1/5.5)!!! Just be careful: if you choose k2 = 5.5, T2 = ?, k1 = 1, T1 = 299 K. The activation energy is given in kJ and R = 8.314 J/molK is in joules, so you have to convert the Ea to joules (x10^3). It will not affect the answer which one you assign 1 or 2, as long as you keep the pairs k1,T1 and k2,T2 according to the formula. If you mix them up you will get a Kelvin temperature with a negative sign (doesn't exist!!). I got ~ 328 K 5. Photon336 Great explanation! Oh... yeah... I just saw that! I assume that the incorrect ln value would affect our T2 value. Once you get that you can probably just rearrange and solve for T2
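A quick numeric check of the answer above (my own addition; Ea in J/mol, R in J/(mol·K)):

```python
import math

Ea, R, T1, ratio = 46.39e3, 8.314, 299.0, 5.5
# ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)  =>  solve for T2
inv_T2 = 1.0 / T1 - math.log(ratio) * R / Ea
T2 = 1.0 / inv_T2
print(round(T2, 1))  # ~ 329.1 K, consistent with the ~328 K above
```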
{}
# Recent posting and editing events (since Apr 28, 2017) Bott periodicity for Dirac matrices?Answer posted by 40227, 4 hours ago Bott periodicity for Dirac matrices?Comment posted by Arnold Neumaier, 13 hours ago Bott periodicity for Dirac matrices?Comment posted by Jia Yiyang, 22 hours ago Bott periodicity for Dirac matrices?Answer posted by Arnold Neumaier, 1 day ago Bott periodicity for Dirac matrices?Question posted by Jia Yiyang, 2 days ago $P$-wave $ππ$ scattering and the $ρ$ resonance from lattice QCDSubmission posted by SubmissionBot, 2 days ago Coadjoint orbits in physicsAnswer posted by anonymous, 2 days ago 12 pages of posting events since Mar 15. For earlier items see the main page
{}
Article

# Particle Filtering for Multiple Object Tracking in Dynamic Fluorescence Microscopy Images: Application to Microtubule Growth Analysis

Dept. of Med. Inf., Erasmus MC-Univ. Med. Center, Rotterdam (Impact Factor: 3.8). 07/2008; 27(6):789 - 804. DOI: 10.1109/TMI.2008.916964 Source: IEEE Xplore

ABSTRACT Quantitative analysis of dynamic processes in living cells by means of fluorescence microscopy imaging requires tracking of hundreds of bright spots in noisy image sequences. Deterministic approaches, which use object detection prior to tracking, perform poorly in the case of noisy image data. We propose an improved, completely automatic tracker, built within a Bayesian probabilistic framework. It better exploits spatiotemporal information and prior knowledge than common approaches, yielding more robust tracking also in cases of photobleaching and object interaction. The tracking method was evaluated using simulated but realistic image sequences, for which ground truth was available. The results of these experiments show that the method is more accurate and robust than popular tracking methods. In addition, validation experiments were conducted with real fluorescence microscopy image data acquired for microtubule growth analysis. These demonstrate that the method yields results that are in good agreement with manual tracking performed by expert cell biologists. Our findings suggest that the method may replace laborious manual procedures.

### Full-text available from: W.J. Niessen, Aug 18, 2013

• Source • "where each element of $b$ takes the value of the background intensity $I_b$. Note that a similar image likelihood is also used to compute the weights of samples in tracking approaches based on particle filters (e.g., [19], [21]). Once all weights have been evaluated with the image likelihood $p(z|x)$, the weights $\beta_i$, $i = 1, \dots$ "

##### Article: Tracking Multiple Particles in Fluorescence Time-Lapse Microscopy Images via Probabilistic Data Association

ABSTRACT: Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to re-calculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2D and 3D images as well as to real 2D and 3D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2D and 3D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012. IEEE Transactions on Medical Imaging 09/2014; 34(2).
DOI:10.1109/TMI.2014.2359541 · 3.80 Impact Factor

• Source • "The positions and directions of motion of the objects are randomly chosen within the image plane. The speed (i.e., the displacement in pixels per frame) is drawn uniformly at random over the interval [2, 7] for large objects and over [2, 4] for small objects. The SNR of the images of large objects is 2 (ca. "

##### Article: Approximate Sequential Importance Sampling for Fast Particle Filtering

ABSTRACT: Particle filters are key algorithms for object tracking under non-linear, non-Gaussian dynamics. The high computational cost of particle filters, however, hampers their applicability in cases where the likelihood model is costly to evaluate, or where large numbers of particles are required to represent the posterior. We introduce the approximate sequential importance sampling/resampling (ASIR) algorithm, which aims at reducing the cost of traditional particle filters by approximating the likelihood with a mixture of uniform distributions over pre-defined cells or bins. The particles in each bin are represented by a dummy particle at the center of mass of the original particle distribution and with a state vector that is the average of the states of all particles in the same bin. The likelihood is only evaluated for the dummy particles, and the resulting weight is identically assigned to all particles in the bin. We derive upper bounds on the approximation error of the so-obtained piecewise constant function representation, and analyze how bin size affects tracking accuracy and runtime. Further, we show numerically that the ASIR approximation error converges to that of sequential importance sampling/resampling (SIR) as the bin size is decreased. We present a set of numerical experiments from the field of biological image processing and tracking that demonstrate ASIR's capabilities. Overall, we consider ASIR a promising candidate for simple, fast particle filtering in generic applications.

• Source • "The dynamics model assumes nearly constant velocity, and the appearance model approximates each object by a Gaussian intensity profile in the final microscopy image. These are standard models that adequately describe biological fluorescence microscopy [13], [14]. The state vector in this case is $x = (\hat{x}, \hat{y}, v_x, v_y, I_0)^T$, where $\hat{x}$ and $\hat{y}$ are the estimated x- and y-positions of the object, $(v_x, v_y)$ its velocity vector, and $I_0$ its estimated fluorescence intensity. "

##### Article: Adaptive Distributed Resampling Algorithm with Non-Proportional Allocation

ABSTRACT: Distributed resampling algorithm with proportional allocation (RNA) draws serious attention to parallel systems and how they can be used in particle tracking applications. We extend the original work by Bolić et al. by introducing the adaptive RNA (ARNA), which improves RNA by enabling (1) adjustable particle exchange ratio and (2) randomized ring topology. These features of ARNA boost the runtime performance of the fastest RNA (i.e., RNA with 10% particle exchange ratio) by 9%. In such parallel settings, it is important to have all processing elements (PE) tracking the object and thus keeping a high PE efficiency percentage $PE_{\textrm{eff}}$. ARNA shows a 25-times $PE_{\textrm{eff}}$ improvement over the RNA methods in a network of 384 PEs. Moreover, the ARNA algorithm requires only few modifications in the original RNA code and thus ARNA is considered a better alternative to RNA.
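None of the abstracts above include code; as a rough, generic illustration of the bootstrap SIR particle filter that these papers build on, here is a minimal 1-D sketch. The constant-position motion model, the Gaussian noise levels, the particle count, and the fake measurements are arbitrary stand-ins, not the papers' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                          # number of particles
x = rng.normal(0.0, 1.0, N)      # particle states (1-D position)
w = np.full(N, 1.0 / N)          # importance weights

def step(x, w, z, proc_std=0.5, obs_std=1.0):
    # Predict: propagate particles through the (random-walk) motion model.
    x = x + rng.normal(0.0, proc_std, x.size)
    # Update: reweight by the Gaussian observation likelihood p(z|x).
    w = w * np.exp(-0.5 * ((z - x) / obs_std) ** 2)
    w /= w.sum()
    # Resample (multinomial) when the effective sample size drops.
    if 1.0 / np.sum(w**2) < x.size / 2:
        idx = rng.choice(x.size, size=x.size, p=w)
        x, w = x[idx], np.full(x.size, 1.0 / x.size)
    return x, w

for z in [0.2, 0.5, 0.9, 1.4]:   # fake measurements of a drifting object
    x, w = step(x, w, z)
    print(np.sum(w * x))         # posterior mean estimate
```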
{}
Transformations of Quadratic Functions

Standard: F.BF.3

Quadratic functions can be graphed just like any other function. They're usually in this form: f(x) = ax^2 + bx + c. One thing to note about that equation is that the coefficient a cannot be equal to zero. Every quadratic graphs into a curved shape called a parabola, which is a u-shape. The simplest quadratic, f(x) = x^2, is the parent function, and every other quadratic can be described as a transformation of it. So what are the four types of transformations of a function? Translations (shifts), reflections (flips), stretches, and compressions.

Vertical shifts. Adding a constant to the function moves the graph up or down: the graph of y = x^2 + c sits c units above the parent parabola (or below it, if c is negative). The parent function y = x^2 is the special case of y = x^2 + c where c = 0. Repeat the exercise a few times with different values of c to observe how the curve y = x^2 + c moves.

Horizontal shifts. Replacing x with (x - h) moves the graph left or right: y = (x - 3)^2 shifts the parabola three units to the right, and y = (x + 3)^2 shifts it three units to the left. Look out for the sign: a change inside the parentheses moves the graph the opposite way to what the sign suggests.

Reflections. Making the entire function negative flips the graph over the x-axis: y = -x^2 looks exactly like y = x^2 in all regards except that it opens downward. This means the u-shape of the parabola will turn upside down.

Stretches and compressions. Multiplying the entire function by a number a changes the width of the graph: y = ax^2 is narrower than the parent for |a| > 1 and wider for 0 < |a| < 1, and negative values of a also reflect the curve. So, ordered from widest to narrowest, y = (1/4)x^2 comes before y = x^2, which comes before y = 3x^2 and then y = 5x^2. You can also change the width by multiplying just x by a number, which compresses or stretches the graph in the horizontal direction; if that number is greater than one, the graph will be compressed.

Putting it all together, you can write the equation of a transformed quadratic function using the vertex form, y = a(x - h)^2 + k: a handles the stretch, compression and reflection, h the horizontal shift, and k the vertical shift. For example, let's shift our graph to the left 10, down 5, and flip it: y = -(x + 10)^2 - 5. The new graph will look like an upside down U. You just transformed your parabola!
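Here is a minimal sketch of the vertex form in code (the function and variable names are my own, not from the lesson):

```python
import numpy as np

def quadratic(x, a=1.0, h=0.0, k=0.0):
    # Vertex-form quadratic y = a*(x - h)^2 + k:
    # a stretches/compresses (and flips if negative),
    # h shifts left/right, k shifts up/down.
    return a * (x - h) ** 2 + k

x = np.linspace(-15.0, 5.0, 21)
parent = quadratic(x)                    # y = x^2
moved = quadratic(x, a=-1, h=-10, k=-5)  # left 10, down 5, flipped
print(moved[x == -10])                   # vertex value: [-5.]
```

Evaluating the transformed function at x = -10 returns -5, confirming that the vertex really did move to (-10, -5).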
{}
## How To Find The Function Of A Taylor Series

Can someone please explain how to find the power series expansion of the function in the title? Also, how would you do it in the reverse case? That is, given the power series expansion, how can you deduce the function $\frac{x}{1-x-x^3}$?

The Taylor series of a function will never know the domain of your function. One might hope that this could be solved by extending the domain to infinity in either direction, but Taylor series have a radius of convergence which does not in general allow for this.

The formula for a convergent geometric series can be used to represent some functions as power series. To use the geometric series formula, the function must be able to be put into a specific form, which is often impossible. However, use of this formula does quickly illustrate how functions can be represented as a power series. Differentiation (and integration) of a known power series is another way to obtain new ones from old.

Taylor series expansion requires a function to have derivatives up to an infinite order around the expansion point. Taylor series expansion around x = 0 is called Maclaurin series expansion.

### Find function corresponding to a Taylor series

• A Taylor series is a polynomial of infinite degree that can be used to represent all sorts of functions, particularly functions that aren't polynomials. It can be assembled in many creative ways to help us solve problems through the normal operations of function addition, multiplication, and composition.
• A function is analytic if and only if a power series converges to the function; the coefficients in that power series are then necessarily the ones given by the Taylor series formula. If a = 0, the series is also called a Maclaurin series.
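For the specific function asked about, a computer algebra system gives the expansion directly; going the other way generally means recognizing a pattern, such as a geometric series or the rational generating function of a linear recurrence. A quick sketch using sympy:

```python
import sympy as sp

x = sp.symbols('x')
f = x / (1 - x - x**3)   # the function asked about above
print(sp.series(f, x, 0, 8))
# x + x**2 + x**3 + 2*x**4 + 3*x**5 + 4*x**6 + 6*x**7 + O(x**8)
```

The coefficients satisfy the recurrence a_n = a_(n-1) + a_(n-3), which you get by multiplying the series by (1 - x - x^3) and matching powers of x.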
{}
# Quantum numbers and radial probability of the electrons

In this book it has been written:

The $ns$, $(n - 1)d$, and $(n - 2)f$ orbitals are so close to one another in energy, and interpenetrate one another so extensively.

And in the Wikipedia article on the Pauli exclusion principle it has been written:

The Pauli exclusion principle is the quantum mechanical principle that states that two identical fermions (particles with half-integer spin) cannot occupy the same quantum state simultaneously. In the case of electrons in an atom, it can be stated as follows: it is impossible for two electrons of a poly-electron atom to have the same values of the four quantum numbers.

Does this mean that two electrons of an atom can have significant radial probability at the same location even if they are defined by different sets of quantum numbers?

• Yes, of course! Pauli's exclusion principle is only about the 'quantum numbers'. More correctly, it states that 'a system containing several electrons must be described by an antisymmetric total eigenfunction', which is the stronger statement. The weaker statement is that no two electrons can have an identical set of quantum numbers. It doesn't say anything about the probability or the energy or any other observable. Any property deduced satisfying the condition thereof is purely a mathematical result and entirely in accordance with the principle. Oct 17, 2016 at 7:14
• @PrasadMani, thank you very much. Please write this as an answer so that you may get the deserved reputation. – user132865 Oct 17, 2016 at 7:34

If we consider the space variables of two electrons (identical particles) to have almost the same values, then their wavefunctions are 'almost' identical if they are in the same quantum state, i.e., $\psi_{a}(1) \simeq \psi_{a}(2)$ and $\psi_{b}(1) \simeq \psi_{b}(2)$ [the labels 1 and 2 denote the spatial coordinates of electrons '1' and '2', i.e. ($x_1,y_1,z_1$) and ($x_2,y_2,z_2$), and the labels a and b for the wavefunction denote the three quantum numbers $n,l,m$ of two different quantum states]. The antisymmetric two-electron wavefunction then nearly vanishes:
$$\frac{1}{\sqrt2}[\psi_{a}(1)\psi_{b}(2) - \psi_{a}(2) \psi_{b}(1)]\simeq\frac{1}{\sqrt2}[\psi_{a}(2)\psi_{b}(1) - \psi_{a}(2) \psi_{b}(1)] = 0$$
So two electrons in the same spatial quantum state (with the same spin) cannot be found at the same place, but nothing prevents electrons in different quantum states from having significant radial probability at the same location.
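For reference, the 'stronger statement' mentioned in the comment, that the total wavefunction must be antisymmetric, is usually written for two electrons as a Slater determinant over spin-orbitals (standard textbook notation, not taken from the thread above):

```latex
\Psi(1,2) = \frac{1}{\sqrt{2}}
\begin{vmatrix}
\chi_a(1) & \chi_b(1) \\
\chi_a(2) & \chi_b(2)
\end{vmatrix}
= \frac{1}{\sqrt{2}}\bigl[\chi_a(1)\chi_b(2) - \chi_b(1)\chi_a(2)\bigr]
```

Here $\chi_a$ and $\chi_b$ are spin-orbitals (spatial part times spin). If $a = b$, the determinant vanishes identically, which recovers the familiar quantum-number form of the principle.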
{}
# Converting between EY, AEP and ARI

The latest version of Australian Rainfall and Runoff (ARR2016) proposes new terminology for flood risk (see Book 1, Chapter 2.2.5). Preferred terminology is provided in Figure 1.2.1, which is reproduced below.

Definitions:

• EY – Number of exceedances per year
• AEP – Annual exceedance probability
• AEP (1 in x) – 1/AEP
• ARI – Average Recurrence Interval (years)

Australian Rainfall and Runoff preferred terminology

For floods rarer than 5%, the relationship between the various frequency descriptors can be estimated by the following straightforward equations.

$\mathrm{EY} = \frac{1}{\mathrm{ARI}}$

$\mathrm{EY} = \mathrm{AEP}$

$\mathrm{AEP(1\; in\; x \;Years)} = \frac{1}{\mathrm{AEP}}$

$\mathrm{ARI} = \mathrm{AEP(1\; in \; x \; Years)}$

$\mathrm{AEP} = \frac{1}{\mathrm{ARI}}$

For common events, more complex equations are required (these will also work for any frequency):

$\mathrm{EY} = \frac{1}{\mathrm{ARI}}$

$\mathrm{AEP(1\; in\; x \;Years)} = \frac{1}{\mathrm{AEP}}$

$\mathrm{AEP(1\; in\; x \;Years)} = \frac{\exp(\mathrm{EY})}{\left( \exp(\mathrm{EY}) - 1 \right)}$

$\mathrm{ARI} =\frac{1}{-\log_e(1-\mathrm{AEP})}$

$\mathrm{AEP} = \frac{\exp(\frac{1}{\mathrm{ARI}}) - 1}{\exp(\frac{1}{\mathrm{ARI}})}$

A key result is that we can't use the simple relationship ARI = 1/AEP for frequent events. So, for example, the 50% AEP event is not the same as the 2-year ARI event.

### Example calculations

For an ARI of 5 years, what is the AEP?

$\mathrm{AEP} = \frac{\exp(\frac{1}{\mathrm{5}}) - 1}{\exp(\frac{1}{\mathrm{5}})} = 0.1813$

For an AEP of 50%, what is the ARI?

$\mathrm{ARI} =\frac{1}{-\log_e(1-0.5)} = 1.443$

R functions and example calculation available as a gist.
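The gist is in R; for convenience, here is a rough Python equivalent of the two exact conversions, reproducing the example calculations above:

```python
import math

def aep_from_ari(ari):
    """AEP = (exp(1/ARI) - 1) / exp(1/ARI); valid for any frequency."""
    return (math.exp(1 / ari) - 1) / math.exp(1 / ari)

def ari_from_aep(aep):
    """ARI = 1 / (-ln(1 - AEP)); valid for any frequency."""
    return 1 / (-math.log(1 - aep))

print(round(aep_from_ari(5), 4))    # 0.1813
print(round(ari_from_aep(0.5), 3))  # 1.443
```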
# Highlights from ARR Book 7

Book 7 of Australian Rainfall and Runoff is titled Application of Catchment Modelling Systems. It has been written by experienced people and there is some great information. A few, paraphrased, highlights follow.

• It's often challenging to get good calibrations for all the available historical events, and there may be good reasons why. Difficulties in calibrating a model to observed flood events of different magnitude should be taken as an indication of the changing role of processes. In many cases a significant change occurs between floods that are mostly contained within the stream channel and floods in which floodplain storage plays an important role in the routing process. If the model has only been calibrated to in-bank floods, confidence in its ability to represent larger floods will be lower.

• Calibration needs to focus on what the model is to be used for, not just ensuring past events are well represented. The focus of model calibration is not just to develop a model that is well calibrated to the available flood data. Application of the model to the design requirements must be the primary focus. It is often the case that calibration floods are relatively frequent while design applications require much rarer floods. In this case, work in refining the model calibration to the frequent floods may not be justified. Parameter values should account for the expected future design conditions, rather than an unrepresentative calibration event. Calibration usually works with historic flood events while the design requirements are for probabilistic events. The parameters calculated for the historic events may not be applicable to the design flood events.

• On using all available data. Even if the data is of poor quality or incomplete, it is important that the model calibration be at least consistent with the available information. Even poor quality observations may be sufficient to apply a 'common sense test': at least ensure that model performance is consistent with the minimal data available.

• On inconsistent data. Effort should be concentrated on resolving the source of the inconsistency rather than pursuing further calibration.

• Dealing with poor calibration. It is far more important to understand why a model may not be calibrating well at a particular location than to use unrealistic parameter values to 'force' the model to calibrate.

• Don't expect your model to provide a good fit to all data. It is extremely unlikely that your simple model is perfectly representing the complex real world well, that all your data has been collected without error, or that it is unaffected by local factors.

• The appearance of great calibrations may mean: the model has been overfitted to the data with unrealistic parameter values, or some of the data that does not fit well has been ignored or not presented. Calibration events should be re-run with adopted parameters and results should show at least reasonable performance for all of the calibration events.

• Confirming model suitability for design events. Model performance, for design events, should be confirmed using Flood Frequency Analysis results, if available, or regional flood frequency information.

Book 7 also has worthwhile guidance on uncertainty analysis, model checking and reporting.

# ARR update from the FMA conference

There were several papers related to Australian Rainfall and Runoff at the FMA conference last week. Once the papers become available on the FMA website, it would be worth checking at least these three:

• What Do Floodplain Managers Do Now That Australian Rainfall and Runoff Has Been Released? – Monique Retallick, WMAwater.
• Australian Rainfall and Runoff: Case Study on Applying the New Guidelines – Isabelle Testoni, WMAwater.
• Impact of Ensemble and Joint Probability Techniques on Design Flood Levels – David Stephens, Hydrology and Risk Consulting.

There was also a workshop session where software vendors and maintainers discussed how they were updating their products to become compliant with the new ARR. A few highlights:

1. The ARR team are working on a single temporal pattern that can be used with hydrologic models to get a preliminary and rapid assessment of flood magnitudes for a given frequency. This means an ensemble or Monte Carlo approach won't be necessary in all cases but is recommended for all but very approximate flood estimates.
2. The main software vendors presented on their efforts to incorporate ARR2016 data and procedures into models. This included RORB, URBS, WBMN and RAFTS. Drains has also included functionality. All the models use similar approaches but speakers acknowledged further changes were likely as we learn more about the implications of ARR2016. The modelling of spatial rainfall patterns did not seem well advanced, as most programs only accept a single pattern, so they don't allow for the influence of AEP and duration.
3. WMA Water have developed a guide on how to use ARR2016 for flood studies. This has been done for the NSW Office of Environment and Heritage (OEH) and looks to be very useful as it includes several case studies. The guide is not yet publicly available but will be provided to the NFRAG committee so may be released.
4. Hydrologists need to take care when selecting the hydrograph, from the ensemble of hydrographs, to use for hydraulic modelling. A peaked, low-volume hydrograph may end up being attenuated by hydraulic routing. We need to look at the peaks of the ensemble of hydrographs as well as their volumes. The selection of a single design hydrograph from an ensemble of hydrographs was seen as an area requiring further research.
5. Critical duration – the identification of a single critical duration is often much less obvious now we are using ensemble rainfall patterns. It seems that many durations produce similar flood magnitudes. The implications of this are not yet clear. Perhaps if the peaks are similar, we should consider hydrographs with more volume as they will be subject to less attenuation from further routing.
6. There was a lot of discussion around whether we should use the mean or median of an ensemble of events. The take-away message was that in general we should be using the median of inputs and the mean of outputs.
7. When determining the flood risk at many points in a large catchment, different points will have different critical durations. There was talk of "enveloping" the results. This is likely to be an envelope of means rather than extremes.
8. The probabilistic rational method, previously used for rural flood estimates in ungauged catchments, is no longer supported. The RFFE is now recommended.
9. The urban rational method will only be recommended for small catchments such as a "two lot subdivision".
10. There was no update on when a complete draft of ARR Book 9 would be released.
11. Losses should be based on local data if there is any available. This includes estimating losses by calibration to a flood frequency curve. Only use data hub losses if there is no better information. In one case study that was presented, the initial loss was taken from the data hub and the continuing loss was determined by calibration to a flood frequency curve.
12. NSW will not be adopting the ARR2016 approach to the interaction of coastal and riverine flooding. Apparently their current approaches are better and have an allowance for entrance conditions that is not embedded in the ARR approach.
13. NSW will not be using ARR approaches to estimate the impacts of climate change on flooding. Instead they will use NARCLIM.
14. NSW have mapped the difference between the 1987 IFD and the 2016 IFD rainfalls and use this to assist in setting priorities for undertaking flood studies.
15. A case study was presented for a highly urbanized catchment in Woolloomooloo. There was quite an involved procedure to determine the critical duration for all points in the catchment and the temporal patterns that led to the critical cases. Results using all 10 patterns were mapped, gridded and averaged. I didn't fully understand the approach as presented, but there may be more information in the published version of Isabelle Testoni's paper once it becomes available.

There is still much to learn about the new Australian Rainfall and Runoff and much to be decided. The papers at the FMA conference were a big help in understanding how people are interpreting and responding to the new guideline.

# Time of concentration: Pilgrim McDermott formula

There are many formulas for the time of concentration. A previous post discussed the Bransby Williams approach.
Here I look at the Pilgrim McDermott formula, which is another method commonly used in Australia and relates time of concentration to catchment area (A):

$t_c = 0.76A^{0.38}$    (hours)                                                     (equation 1)

where A is measured in km2. This formula is a component of the Probabilistic Rational Method as discussed in Australian Rainfall and Runoff 1987 (ARR1987) Book IV and is recommended for use in:

• Eastern New South Wales
• Victoria (as developed by Adams, 1987)
• Western Australia – wheatbelt region

McDermott and Pilgrim (1982) needed a formula for the time of concentration to develop their probabilistic rational method approach, which was ultimately adopted in ARR1987. They make the point that, for their statistical method, it is not necessary that the time of concentration closely matches the time for water to traverse a catchment; rather, a characteristic time is required for a catchment to determine the duration of the design rainfall. This characteristic time must be able to be determined directly by designers and lead to consistent values of the runoff coefficient and design flood values.

The basic formula for the probabilistic rational method is:

$Q_y = C_y I_{(y,t_c)} A$                                                                  (equation 2)

Where:

• $Q_y$ is the flood of $y$ years average recurrence interval.
• $C_y$ is the runoff coefficient for a particular average recurrence interval.
• $I$ is the rainfall intensity, which is a function of $t_c$ (time of concentration) and $y$.
• $A$ is the catchment area.

For a catchment with a stream gauge, where flood frequency analysis can be undertaken, this will provide the $Q_y$ values on the left hand side of equation 2. We also know the catchment area ($A$). If $t_c$ can be estimated via a time of concentration formula, then the rainfall intensity can be looked up in an IFD table for the location and the only unknown is $C_y$:

$C_y = \frac{Q_y}{I_{(y,t_c)} A}$                                                              (equation 3)

This was the approach used in ARR1987. A large number of gauges were selected and $C_y$ values calculated. Ultimately $C_{10}$ values were mapped in Volume 2 of Australian Rainfall and Runoff. For floods other than those with a 10 year average recurrence interval, frequency factors were provided to calculate the required runoff coefficient values. This meant design floods could be estimated for ungauged catchments given information on design rainfall intensity, which is available everywhere in Australia.
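As a quick illustration of how equations 1 and 2 chain together (a sketch only: the runoff coefficient and design intensity below are made-up placeholders, where real values come from the ARR1987 maps and the site's IFD table; 0.278 is the usual factor converting mm/h times km2 to m3/s):

```python
def time_of_concentration(area_km2):
    """Pilgrim McDermott formula (equation 1), in hours."""
    return 0.76 * area_km2 ** 0.38

def rational_method_peak(c_y, intensity_mm_h, area_km2):
    """Q_y = C_y * I * A (equation 2), converted to m^3/s."""
    return 0.278 * c_y * intensity_mm_h * area_km2

area = 25.0                            # km^2
tc = time_of_concentration(area)       # ~2.6 h: the duration for the IFD lookup
i_design = 20.0                        # mm/h, placeholder design intensity
print(rational_method_peak(0.4, i_design, area))  # ~55.6 m^3/s
```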
For this approach to work, some relationship is required between $t_c$ and catchment characteristics, i.e., we need a time of concentration formula. McDermott and Pilgrim (1982) began their development of such a formula by testing the Bransby Williams approach, because that had been shown to be the best of 8 methods examined by French et al. (1974). McDermott and Pilgrim found that Bransby Williams wasn't suitable for their purposes because it often resulted in runoff coefficients greater than 1 and they thought the use of such large values would be resisted by practising engineers. Equation 2 doesn't preclude runoff coefficient values greater than 1, but the intuitive definition of $C$ as being "the proportion of rainfall that runs off" requires it.

An alternative time of concentration formula was developed by considering the 'minimum time of rise of the flood hydrograph', which McDermott and Pilgrim collected or collated for 96 catchments. This is the time from when storm rainfall starts until stream discharge begins to increase. McDermott and Pilgrim adopted this as their definition of the time of concentration.

The measured times of concentration were regressed against catchment characteristics that included:

• Catchment area
• Main stream length
• Main stream equal area slope
• Main stream average slope
• Catchment shape factor
• Stream slope non-uniformity index
• Vegetation cover
• Median annual rainfall
• Soil type.

Three formulas provided a similar fit to the data, with the simple relationship with catchment area (equation 1) ultimately adopted.

One of the important implications of the probabilistic rational method approach is that the time of concentration used for design must be calculated using the same formula that was used in the derivation of the runoff coefficients (equation 3). So, in Victoria (and Eastern NSW and the Wheatbelt of WA), when using the probabilistic rational method to estimate floods in ungauged catchments, it is important to adopt the Pilgrim McDermott formula for the time of concentration and not use any of the many other approaches.

### References

Adams, C. A. (1987) Design flood estimation for ungauged rural catchments in Victoria. Road Construction Authority, Victoria. (link)

French, R., Pilgrim, D. H. and Laurenson, E. M. (1974) Experimental examination of the rational method for small rural catchments. Civil Engineering Transactions CE16: 95-102.

McDermott, G. E. and Pilgrim, D. H. (1982) Design flood estimation for small catchments in New South Wales. Department of National Development and Energy. Australian Water Resources Council Technical Paper No. 73, pp. 233. (link)

Pilgrim, D. H. and McDermott, G. E. (1982) Design floods for small rural catchments in eastern New South Wales. Civil Engineering Transactions, Institution of Engineers CE24: 226-234.

# Where is ARR?

[Edit 13 Oct 2016] RFFE software is now available at http://rffe.arr-software.org/

ARR project reports are available at:

The main ARR page is http://arr.ga.gov.au/

The new version of Australian Rainfall and Runoff, plus supporting documents and software, was all available via www.arr.org.au. This general address has recently been changed to http://arr.ga.gov.au/. The web-based version of the draft ARR guideline is here or via this direct link. Flike is still available here.

It seems that all the project reports have been removed; however, the Way Back Machine, a web archiving service, has copies of old ARR sites. If you go to www.waybackmachine.org and then enter arr.org.au, it will bring up a calendar that shows when archives were made. The most recent version is available via this link http://web.archive.org/web/20160501050605/http://www.arr.org.au/. All the project reports seem to be available.

I've also made a link to a dropbox folder with those reports that were distributed at the Hydrology and Water Resources Symposium in Hobart in Dec 2015.

The RFFE software does not seem to be available yet on the new site but I know WMA Water are working to get this back up. Let's hope this is sorted out soon.

# New Draft of Australian Rainfall and Runoff

[Edit 17 Oct 2016: Some web links have changed. See Where is ARR?]

A new draft version of Australian Rainfall and Runoff has just been released and is available for download here. The epub file is dated 2016-07-07.

So, what has changed? After skimming through the document, my preliminary assessment is as follows:

• There is now more than one editor.
The previous version listed James Ball as the editor while this one lists James along with Mark Babister, Rory Nathan, Bill Weeks, Erwin Weinmann, Monique Retalick and Isabelle Testoni, with Peter Coombes and Steve Roso associate editors for Book 9.

• Industry consultation on the current draft will take place until October 2016, when the editorial team will meet to consider feedback and decide on when the next update will be published.
• Referencing has been improved and many in-text citations are hyperlinked to their location in a reference list.
• Book 2 Rainfall Estimation, Chapter 1, Introduction, has been re-written.
• There is a new chapter on climate change impacts on rainfall (Book 2, Chapter 2.7).
• Book 2, Chapter 4, has a name change, from Spatial Patterns to Areal Reduction Factors. Alan Seed is no longer listed as an author. Spatial patterns are now addressed in a new chapter (Book 2, Chapter 6).
• Book 2, Chapter 4, Areal Reduction Factors: the equations used to calculate areal reduction factors have changed (but are still difficult to interpret, at least in the epub version).
• Book 2, Chapter 4, Figure 2.4.1: the ARF regions map looks different, but it may just be a change in colour scheme.
• Book 2, Chapter 5 – the chapter on temporal patterns has been greatly expanded and design patterns are now available at data.arr.org.au (although the website is currently unavailable). The example in Book 2, Chapter 5.10 shows the use of design temporal patterns in RORB modelling.
• Book 2, Chapter 7 now covers continuous rainfall simulation (mainly just a change in chapter numbering).
• Book 3, Chapter 3.12. This is a new chapter, RFFE Implementation and Limitations, which includes a discussion of the likely accuracy of the RFFE and additional checking to be undertaken when using the tool.
• Book 4, Catchment Simulation. There was no content in this book in the Dec 2015 version of ARR. Now a draft of the whole book is available (Disclosure – I'm the lead author of Chapter 2).
• Book 5, Flood Hydrograph Estimation, has been extensively revised. There are new chapters on: catchment representation (Book 5, Chapter 2); flood routing principles (Book 5, Chapter 5); and flood hydrograph modelling approaches (Book 5, Chapter 6).
• The losses chapter (Book 5, Chapter 3) has been revised. There are new methods to select losses for design flood estimation (Chapter 3.5). The loss regions have changed (Figure 5.3.16). There are now only 4 regions and new prediction equations are available for each region. Median IL and CL values for much of Australia are provided in Figures 5.3.18 and 5.3.19.
• Book 6, Flood Hydraulics, was well advanced in the previous draft but there are several updates. The chapters on Rock Chutes and Rock Riprap have been removed and a new chapter on safety design criteria has been added (Book 6, Chapter 7). This was previously in Book 9.
• Book 7, Application of Catchment Modelling Systems, is now available as a complete draft. There was no content in the December 2015 version of ARR. This book includes information relevant for the hydrologic models RORB, RAFTS, URBS and WBMN.
• Book 8, Estimation of Very Rare to Extreme Floods, was well advanced in the Dec 2015 draft ARR and a quick review suggests there have been few changes.
• Book 9, Runoff in Urban Areas, was not included in the earlier draft. It was available as a separate PDF but is now integrated into ARR. All of Book 9 is now available except Chapter 6, 'Modelling Approaches'.
• The safety design criteria information from the earlier version of Book 9 has been moved to Book 6.

One issue that is not specifically addressed is the continued use of the urban rational method. It is not included in the urban book (Book 9) and the subtext is that there are better approaches. However, the urban rational method is widely used in practice and is recommended by some authorities, e.g. Melbourne Water (see their hydrologic and hydraulic design guidelines here).

A general issue is that authority guidelines and standards will need to be updated to relate to the new Australian Rainfall and Runoff. For example, the Austroads Guide to Road Design, Part 5: Drainage – General and Hydrology Considerations, refers to the 1987 version of Australian Rainfall and Runoff. There is a similar issue with the stormwater drainage code, AS/NZS3500.3.

# Teaching hydrology

I'm reading Richard Dawkins' book An Appetite for Wonder. In it, he writes about the tutorial system at Oxford. As a senior undergraduate, his weekly tutorial assignments included:

• Reading a PhD thesis and writing something similar to an examiner's report, reviewing the history of the field in which the thesis was written, proposing follow-up research and discussing the theoretical and philosophical issues raised by the thesis.
• Becoming close to a world expert in some topic by reading theses, papers, and books and writing an essay on a topic selected by his tutor.

Then there was a one hour, one-on-one session with a tutor, discussing and defending his essay.

Imagine doing something like that with undergraduates in hydrology. In one week, a student could become close to the world expert in something like the rational method: its history, applications, limitations and requirements for further research. Or they could cover hydrologic routing, or some aspect of flood frequency analysis.

At a university where I taught, our students were taking four units in disparate fields, working part-time, and taking tutorials with 20 or more colleagues. Hydrology was covered in perhaps 2 units, in a few lectures and a few tutorials. The teaching was commendably broad but so shallow in comparison with that described by Dawkins. The common feedback from students was that they did not get sufficient feedback. How would it be if they had an hour, one-on-one with a tutor, to defend and discuss their written work every week?

Teaching engineering hydrology is a particular challenge. At a minimum, you would like graduates to be able to apply the current methods, work through the steps and come up with a defensible design. But it would be nice if they could do more. Ideally, graduates should be able to criticise the current methods, understand the limitations, know when they should be used and when other approaches are better, and be able to identify where research is required. I remember setting an exam question once where I asked students to follow a particular procedure and come up with an answer. They generally did well. Next I asked the circumstances when the procedure should be used and when it was not appropriate. The answers were poor. Clearly my teaching was lacking. They had learned to follow standard procedures but not much about when they should be used or when they wouldn't work.

There are a lot of new tools available with the new version of Australian Rainfall and Runoff. We need to help the next crop of hydrologists to learn and use these new methods, but we also need to be clear about their limitations.
There is some good material on approaches and limitations in ARR Book 1, Section 3, and in the 'limitations' tab of the regional flood frequency tool. The limitations of the recommended approaches to flood frequency analysis are also important.

### References

Dawkins, R. (2001) Evolution in biology tutoring? In: Palfreyman, D. (Ed) The Oxford Tutorial: 'Thanks, you taught me how to think'. Oxford Centre for Higher Education Policy Studies. Second edition (link). Dawkins' essay starts on page 36.

Dawkins, R. (2013) An appetite for wonder: the making of a scientist. Bantam Press. www.richarddawkins.net/afw (link).
{}
# Does it make a difference whether or not a discretized Poisson process is used in Bayesian post-processing?

The answer is both "yes" and "no":

• "Yes", it makes a difference, because the posterior distributions are of different types. When learning the rate of a Poisson process, the posterior distribution is typically a Gamma distribution. When working with a discretized Poisson process and learning the probability of occurrence, the problem can be interpreted as a Monte Carlo simulation and the posterior becomes a Beta distribution.
• "No", it does not make a difference if all of the following conditions are met: (i) The intervals into which the Poisson process is discretized are so small that the probability of two or more occurrences within an interval is orders of magnitude lower than the probability of one occurrence. (ii) Weakly informative (and consistent) prior distributions are used in both cases. (iii) The rate of the Poisson process of interest is sufficiently small. Sufficiently small means that if the rate is smaller than $10^{-2}$, the difference is very small; if the rate is smaller than $10^{-3}$, the difference is negligible.

For example, assume an observation of a Poisson process is available in which $3$ occurrences were observed within $10^4$ hours. If you directly learn the rate of the Poisson distribution using a Bayesian approach (and a weakly informative prior), the posterior mean is $4\times10^{-4}$, the upper bound of the one-sided $95\%$ credible interval is $7.75\times10^{-4}$ and that of the $99\%$ credible interval is $1.00\times10^{-3}$. If you discretize the observation into $1$ hour intervals and interpret the problem as a Monte Carlo simulation, you get a probability of failure of $4\times10^{-4}$, with a $95\%$ credible bound of $7.75\times10^{-4}$ and a $99\%$ credible bound of $1.00\times10^{-3}$. As the probability of failure is with respect to one-hour intervals, the results of both approaches are indeed the same.
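The example numbers can be checked in a few lines (a sketch; it assumes the "weakly informative" priors are a flat prior on the rate, giving a Gamma(k+1, T) posterior, and a uniform Beta(1, 1) prior, giving a Beta(k+1, n-k+1) posterior):

```python
from scipy.stats import beta, gamma

k, T = 3, 1e4            # 3 occurrences in 10^4 one-hour intervals
n = int(T)

rate = gamma(a=k + 1, scale=1 / T)   # posterior for the Poisson rate
p = beta(k + 1, n - k + 1)           # posterior for the per-interval probability

for name, dist in [("Gamma", rate), ("Beta", p)]:
    print(name, dist.mean(), dist.ppf(0.95), dist.ppf(0.99))
# Both lines print approximately 4.0e-4, 7.75e-4, 1.0e-3
```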
{}
# Bidirectional reflectance distribution function

The bidirectional reflectance distribution function (BRDF; $f_r(\omega_i, \omega_o)$) is a 4-dimensional function that defines how light is reflected at an opaque surface. The function takes an incoming light direction, $\omega_i$, and outgoing direction, $\omega_o$, both defined with respect to the surface normal $n$, and returns the ratio of reflected radiance exiting along $\omega_o$ to the irradiance incident on the surface from direction $\omega_i$. Physically based BRDFs have additional restrictions, including Helmholtz reciprocity, $f_r(\omega_i, \omega_o) = f_r(\omega_o, \omega_i)$, and energy conservation. The BRDF has units sr-1, with steradians (sr) being a unit of solid angle.

## Applications

The BRDF is a fundamental radiometric concept, and accordingly is used in computer graphics for photorealistic rendering of synthetic scenes (see the Rendering equation), as well as in computer vision for many inverse problems such as object recognition.

## Models

The BRDF was first defined by Fred Nicodemus in the mid-sixties [1]. BRDFs can be measured directly from real objects using calibrated cameras and light sources [2]; however, many phenomenological and analytic models have been proposed, including the Lambertian reflectance model frequently assumed in computer graphics. Some noteworthy examples are the phenomenological Phong reflectance model, Ward's anisotropic reflectance model [3], and the Torrance-Sparrow microfacet-based reflection model [4].

## Acquisition

Traditionally, BRDF measurements were taken for a specific lighting and viewing direction at a time using gonioreflectometers. Unfortunately, using such a device to densely measure the BRDF is very time consuming. One of the first improvements on these techniques used a half-silvered mirror and a digital camera to take many BRDF samples of a planar target at once [5]. Since this work, many researchers have developed other devices for efficiently acquiring BRDFs from real world samples, and it remains an active area of research.

## References

1. Nicodemus, Fred. "Directional reflectance and emissivity of an opaque surface". Applied Optics 4 (7): 767-775.
2. S. Rusinkiewicz. A Survey of BRDF Representation for Computer Graphics. Retrieved on 2007-09-05.
3. Ward, Gregory. "Measuring and Modeling Anisotropic Reflection". SIGGRAPH 1992 Proceedings 26: 265-272.
4. K. Torrance and E. Sparrow. "Theory for Off-Specular Reflection from Roughened Surfaces". J. Optical Soc. America, vol. 57, 1967, pp. 1105-1114.
5. Ward, G. "Measuring and Modeling Anisotropic Reflection", Siggraph 1992.
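As a minimal illustration (my own sketch, not from the article), the simplest physically based BRDF is the Lambertian model: a constant albedo/pi, where albedo <= 1. The division by pi is what keeps it energy conserving, and a constant trivially satisfies Helmholtz reciprocity:

```python
import numpy as np

def lambertian_brdf(w_i, w_o, albedo=0.5):
    # Ideal diffuse reflection: the ratio of outgoing radiance to incident
    # irradiance is the same constant for every direction pair.
    return albedo / np.pi

def normalize(v):
    return v / np.linalg.norm(v)

w_i = normalize(np.array([0.3, 0.1, 0.9]))   # incoming direction
w_o = normalize(np.array([-0.2, 0.4, 0.8]))  # outgoing direction

# Helmholtz reciprocity: f_r(w_i, w_o) == f_r(w_o, w_i)
assert lambertian_brdf(w_i, w_o) == lambertian_brdf(w_o, w_i)
print(lambertian_brdf(w_i, w_o))  # ~0.159 sr^-1
```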
{}
Document type: Journal article
Author(s): Drotziger, S.; Pfleiderer, C.; Uhlarz, M.; v. Löhneysen, H.; Souptel, D.; Löser, W.; Behr, G.
Title: Pressure-Induced Magnetic Quantum Phase Transition in CeSi1.81
Abstract: We report the DC magnetisation of the easy-plane ferrimagnet CeSi1.81 for magnetic field in the range $\pm$0.5 T as a function of temperature T down to 2.3 K and hydrostatic pressure p up to 14.4 kbar. We observe that the magnetic moment for the crystallographic a-axis vanishes continuously at pc $\approx$ 13 $\pm$ 0.2 kbar. The variation of the ordered moment as a function of pressure at low T is qualitatively similar to its variation as a function of temperature at p = 0, hinting at the possible existence o...    »
Keywords: Ferromagnetism, Hydrostatic pressure, Quantum phase transition
Journal title: Physica B: Condensed Matter
Year: 2005
Journal volume: 359-361
Month: April
Pages contribution: 92
Fulltext / DOI:
Print-ISSN: 0921-4526
{}
Witt ring

of a field $k$, ring of types of quadratic forms over $k$

The ring $W(k)$ of classes of non-degenerate quadratic forms on finite-dimensional vector spaces over $k$ with the following equivalence relation: the form $q_1$ is equivalent to the form $q_2$ ($q_1 \sim q_2$) if and only if the orthogonal direct sum of the forms $q_1$ and $h_1$ is isometric to the orthogonal direct sum of $q_2$ and $h_2$ for certain neutral quadratic forms $h_1$ and $h_2$ (cf. also Witt decomposition; Quadratic form). The operations of addition and multiplication in $W(k)$ are induced by taking the orthogonal direct sum and the tensor product of forms.

Let the characteristic of $k$ be different from 2. The definition of equivalence of forms is then equivalent to the following: $q_1 \sim q_2$ if and only if the anisotropic forms which correspond to $q_1$ and $q_2$ (cf. Witt decomposition) are isometric. The equivalence class of the form $q$ is said to be its type and is denoted by $[q]$. The Witt ring, or the ring of types of quadratic forms, is an associative, commutative ring with a unit element. The unit element of $W(k)$ is the type of the form $\langle 1 \rangle$. (Here $\langle a_1,\dots,a_n \rangle$ denotes the quadratic form $a_1x_1^2 + \dots + a_nx_n^2$.) The type of the zero form of zero rank, containing also all the neutral forms, serves as the zero. The type $[\langle -a_1,\dots,-a_n \rangle]$ is opposite to the type $[\langle a_1,\dots,a_n \rangle]$. The additive group of the ring $W(k)$ is said to be the Witt group of the field $k$ or the group of types of quadratic forms over $k$.

The types of quadratic forms of the form $\langle a \rangle$, where $a$ is an element of the multiplicative group $k^*$ of $k$, generate the ring $W(k)$. $W(k)$ is completely determined by the following relations for the generators:

$$[\langle ab^2 \rangle] = [\langle a \rangle], \quad [\langle a \rangle] + [\langle -a \rangle] = 0, \quad [\langle a \rangle][\langle b \rangle] = [\langle ab \rangle], \quad [\langle a \rangle] + [\langle b \rangle] = [\langle a+b \rangle](1 + [\langle ab \rangle]) \ \ (a + b \neq 0).$$

The Witt ring may be described as the ring isomorphic to the quotient ring of the integer group ring $\mathbb{Z}[k^*/k^{*2}]$ of the group $k^*/k^{*2}$ over the ideal generated by the elements $\bar{1} + \overline{-1}$ and $\bar{a} + \bar{b} - \overline{a+b}\,(\bar{1} + \overline{ab})$, $a + b \neq 0$. Here $\bar{a}$ is the residue class of the element $a$ with respect to the subgroup $k^{*2}$.

The Witt ring can often be calculated explicitly. Thus, if $k$ is a quadratically (in particular, algebraically) closed field, then $W(k) \cong \mathbb{Z}/2\mathbb{Z}$; if $k$ is a real closed field, $W(k) \cong \mathbb{Z}$ (the isomorphism is realized by sending the type $[q]$ to the signature of the form $q$); if $k$ is a Pythagorean field (i.e. the sum of two squares in $k$ is a square) and is not real, then $W(k) \cong \mathbb{Z}/2\mathbb{Z}$; if $k$ is a finite field, $W(k)$ is isomorphic to either the residue ring $\mathbb{Z}/4\mathbb{Z}$ or the ring $\mathbb{Z}/2\mathbb{Z}[t]/(t^2-1)$, depending on whether $q \equiv 3 \pmod 4$ or $q \equiv 1 \pmod 4$, respectively, where $q$ is the number of elements of $k$; if $k$ is a complete local field and its residue class field $\bar{k}$ has characteristic different from 2, then $W(k) \cong W(\bar{k})[t]/(t^2-1)$.

An extension $K/k$ defines a homomorphism of Witt rings $W(k) \to W(K)$ for which $[q] \mapsto [q \otimes_k K]$. If the extension is finite and is of odd degree, it is a monomorphism, and if, in addition, it is a Galois extension with group $G$, the action of $G$ can be extended to $W(K)$ and $W(k) = W(K)^G$.

The general properties of a Witt ring may be described by Pfister's theorem: 1) For any field $k$ the torsion subgroup $W_t(k)$ of $W(k)$ is 2-primary; 2) If $k$ is a real field and $\tilde{k}$ is its Pythagorean closure (i.e. the smallest Pythagorean field containing $k$), the sequence $0 \to W_t(k) \to W(k) \to W(\tilde{k})$ is exact (in addition, if $W_t(k) = 0$, the field $k$ is Pythagorean); 3) If $\{k_\alpha\}$ is the family of real closures of $k$, the following sequence is exact: $0 \to W_t(k) \to W(k) \to \prod_\alpha W(k_\alpha)$; in particular, a type has finite order if and only if all of its signatures are zero; 4) If $k$ is not a real field, the group $W(k)$ is torsion.

A number of other results concern the multiplicative theory of forms. In particular, let $M$ be the set of types of quadratic forms on even-dimensional spaces. Then $M$ will be a two-sided ideal in $W(k)$, and $W(k)/M \cong \mathbb{Z}/2\mathbb{Z}$; the ideal $M$ will contain all zero divisors of $W(k)$; the set of nilpotent elements of $W(k)$ coincides with the set of elements of finite order of $M$ and is the Jacobson radical and the primary radical of $W(k)$.
The ring $W(k)$ is finite if and only if $k$ is not real while the group $k^*/k^{*2}$ is finite; the ring $W(k)$ is Noetherian if and only if the group $k^*/k^{*2}$ is finite. If $k$ is not a real field, $M$ is the unique prime ideal of $W(k)$. If, on the contrary, $k$ is a real field, the set of prime ideals of $W(k)$ is the disjoint union of the ideal $M$ and the families of prime ideals

$$P_\alpha = \{ [q] : \operatorname{sgn}_\alpha q = 0 \}, \qquad P_{\alpha,p} = \{ [q] : \operatorname{sgn}_\alpha q \equiv 0 \pmod{p} \}$$

corresponding to the orders $\alpha$ of $k$, where $p$ runs through the set of prime numbers, and $\operatorname{sgn}_\alpha q$ denotes the signature of the form $q$ for the order $\alpha$.

If $A$ is a ring with involution, a construction analogous to that of a Witt ring leads to the concept of the Witt group of a ring with involution. From a broader point of view, the Witt ring (group) is one of the first examples of a $K$-functor (cf. Algebraic $K$-theory); these play an important role in unitary algebraic $K$-theory.

References

[1] E. Witt, "Theorie der quadratischen Formen in beliebigen Körpern" J. Reine Angew. Math., 176 (1937) pp. 31-44
[2] N. Bourbaki, "Algebra", Elements of mathematics, 1, Addison-Wesley (1973) pp. Chapts. 1-2 (Translated from French)
[3] S. Lang, "Algebra", Addison-Wesley (1974)
[4] F. Lorenz, "Quadratische Formen über Körpern", Springer (1970)
[5] O.T. O'Meara, "Introduction to quadratic forms", Springer (1973)
[6] T.Y. Lam, "The algebraic theory of quadratic forms", Benjamin (1973)
[7] J. Milnor, D. Husemoller, "Symmetric bilinear forms", Springer (1973)
{}
Problem 30

Indicate which of the following is independent of the path by which a change occurs: (a) the change in potential energy when a book is transferred from table to shelf, (b) the heat evolved when a cube of sugar is oxidized to $\operatorname{CO}_{2}(g)$ and $\mathrm{H}_{2} \mathrm{O}(g)$, (c) the work accomplished in burning a gallon of gasoline.

a) The change in potential energy of the book is independent of the path.
b) The heat evolved by the oxidation of the sugar cube to form $\mathrm{CO}_{2}(\mathrm{g})$ and $\mathrm{H}_{2} \mathrm{O}(g)$ is dependent on the path.
c) The work accomplished in burning the gasoline is dependent on the path.

## Video Transcript

This is number thirty in Chapter five. It asks which of the following is independent of the path by which a change occurs. Part (a) asks about the change in potential energy when a book is transferred from a table to a shelf. Remember the formula for potential energy: $E_p = mgh$. You can think of potential energy as proportional to the mass you're using, proportional to the force of gravity, and proportional to the height above the ground. Potential energy is a state function, so it's independent of the path. Whether I put the book directly from the table onto the shelf, or lift it halfway in the air first and then put it on the shelf, the change in potential energy of the system is the same, because the overall change in the height of the book relative to the ground stays the same. So part (a) is a yes.

In part (b) we're looking at the heat evolved when a cube of sugar is oxidized to $\mathrm{CO}_2(g)$ and $\mathrm{H}_2\mathrm{O}(g)$. Heat, $q$, is not a state function, so it is not independent of the path: the amount of heat produced by the reaction depends on the specific steps taken. Be careful not to confuse heat with enthalpy; enthalpy is a state function, heat is not. So part (b) is a no.

For part (c), the work accomplished in burning a gallon of gasoline, the answer is also no, because it depends on many factors, for example how thoroughly the gasoline is burned. Anything that affects whether the gasoline burns completely can change the amount of work the system does. Work is not a state function, so it is not independent of the path.
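To make the state-versus-path distinction concrete (standard first-law notation; this is my addition, not part of the original solution):

```latex
\Delta E_p = mg\,(h_2 - h_1) \quad \text{(state function: depends only on the endpoints)} \\
\Delta U = q + w \quad \text{($\Delta U$ is a state function; $q$ and $w$ individually are path-dependent)}
```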
{}
# Beginning Database Design - P22

This book focuses on the relational database model from a beginning perspective. The title is, therefore, Beginning Database Design. A database is a repository for data. In other words, you can store lots of information in a database. A relational database is a special type of database using structures called tables. Tables are linked together using what are called relationships. You can build tables with relationships between those tables, not only to organize your data, but also to allow later retrieval of information from the database.

Advanced Database Structures and Hardware Resources

Hash Keys and ISAM Keys

There are other, less commonly used indexes, such as hash keys and Indexed Sequential Access Method (ISAM) keys. Both are somewhat out of date in the larger-scale relational database engines; however, Microsoft Access does make use of a mixture of ISAM/BTree indexing techniques in its JET database. Neither ISAM nor hash indexes are good for heavily changing data, because their structures will overflow with newly introduced records. Similar to bitmap indexes, hash and ISAM keys must be rebuilt regularly to maintain their processing speed advantage. Frequent rebuilds minimize performance-killing overflow.

Clusters, Index Organized Tables, and Clustered Indexes

Clusters are used to contain fields from tables, usually a join, where the cluster contains a physical copy of a small portion of the fields in a table - perhaps the most commonly accessed fields. Essentially, clusters have been somewhat superseded by materialized views. A clustered index (index organized table, or IOT) is a more complex type of a cluster where all the fields in a single table are reconstructed, not in a usual heap structure, but in the form of a BTree index. In other words, for an IOT, the leaf blocks in the diagram shown in Figure 13-3 would contain not only the indexed field value, but also all the rest of the fields in the table (not just the primary key values).

Understanding Auto Counters

Sequences are commonly used to create internally generated (transparent) counters for surrogate primary keys. Auto counters are called sequences in some database engines. This command would create a sequence object:

CREATE SEQUENCE BAND_ID_SEQUENCE START=1 INCREMENT=1 MAX=INFINITY;

Then you could use the previous sequence to generate primary keys for the BAND table (see Figure 13-1), as in the following INSERT command, creating a new band called "The Big Noisy Rocking Band."

INSERT INTO BAND (BAND_ID, GENRE_ID, BAND, FOUNDING_DATE)
VALUES (
   BAND_ID_SEQUENCE.NEXT,
   (SELECT GENRE_ID FROM GENRE WHERE GENRE="Rock"),
   "The Big Noisy Rocking Band",
   25-JUN-2005
);

Understanding Partitioning and Parallel Processing

Partitioning is just that - it partitions. It separates tables into separate physical partitions. The idea is that processing can be executed against individual partitions, and even in parallel against multiple partitions at the same time. Imagine a table with 1 million records. Reading those 1 million records can take an inordinately horrible amount of time; however, dividing that 1 million record table into 100 separate physical partitions can allow queries to read much fewer records. This, of course, assumes that records are read within the structure of partition separation. As in previous sections of this chapter, the easiest way to explain partitioning, what it is, and how it works, is to just demonstrate it. The diagram in Figure 13-5 shows the splitting of a data warehouse fact table into separate partitions.
Reading those 1 million records can take an inordinately long time; however, dividing that 1 million record table into 100 separate physical partitions can allow queries to read far fewer records. This, of course, assumes that records are read within the structure of partition separation. As in previous sections of this chapter, the easiest way to explain partitioning, what it is, and how it works, is to just demonstrate it. The diagram in Figure 13-5 shows the splitting of a data warehouse fact table into separate partitions.

[Figure 13-5: Splitting a data warehouse table into separate physical partitions. The diagram shows the central Fact table split into five partitions, each carrying the same fields (fact_id, foreign keys to the dimensions, and the measures cd_sale_amount, merchandise_sale_amount, advertising_cost_amount, and show_ticket_sales_amount), surrounded by the dimension tables Band, Advertisement, Genre, Discography, Musician, Show_Venue, Instrument, and Merchandise.]

In some database engines, you can even split materialized views into partitions, in the same way as tables can be partitioned. The fact table shown in Figure 13-5 is (as fact tables should be) all referencing surrogate primary keys, as foreign keys to dimensions. It is easier to explain some of the basics of partitioning using the materialized view created earlier in this chapter. The reason is that the materialized view contains the descriptive dimensions, as well as the surrogate key integer values. In other words, even though not technically correct, it is easier to demonstrate partitioning on dimensional descriptions, such as a region of the world (North America, South America, and so on), as opposed to partitioning based on an inscrutable LOCATION_ID foreign key value.
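To make the payoff concrete before looking at the syntax: once the PART_MV_REGIONAL table created below is list-partitioned by REGION, an engine that supports partition pruning can satisfy a region-filtered query by scanning a single partition rather than the whole table. The following query is a sketch of the idea, not vendor-exact syntax:

-- Prunable to the PART_EUROPE partition alone, reading roughly
-- one partition's worth of records instead of the entire table.
SELECT SUM(CD_SALE_AMOUNT)
FROM PART_MV_REGIONAL
WHERE REGION = 'Europe';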
This is the materialized view created earlier:

CREATE MATERIALIZED VIEW MV_MUSIC
ENABLE REFRESH ENABLE QUERY REWRITE
AS SELECT F.*, I.*, MU.*, G.*, B.*, A.*, D.*, SV.*, ME.*, T.*, L.*
FROM FACT F
   JOIN INSTRUMENT I ON (I.INSTRUMENT_ID = F.INSTRUMENT_ID)
   JOIN MUSICIAN MU ON (MU.MUSICIAN_ID = F.MUSICIAN_ID)
   JOIN GENRE G ON (G.GENRE_ID = F.GENRE_ID)
   JOIN BAND B ON (B.BAND_ID = F.BAND_ID)
   JOIN ADVERTISEMENT A ON (A.ADVERTISEMENT_ID = F.ADVERTISEMENT_ID)
   JOIN DISCOGRAPHY D ON (D.DISCOGRAPHY_ID = F.DISCOGRAPHY_ID)
   JOIN SHOW_VENUE SV ON (SV.SHOW_ID = F.SHOW_ID)
   JOIN MERCHANDISE ME ON (ME.MERCHANDISE_ID = F.MERCHANDISE_ID)
   JOIN TIME T ON (T.TIME_ID = F.TIME_ID)
   JOIN LOCATION L ON (L.LOCATION_ID = F.LOCATION_ID);

Now, partition the materialized view based on regions of the world — this one is called a list partition:

CREATE TABLE PART_MV_REGIONAL
PARTITION BY LIST (REGION)
(
   PARTITION PART_AMERICAS VALUES ('North America', 'South America'),
   PARTITION PART_ASIA VALUES ('Middle East', 'Far East', 'Near East'),
   PARTITION PART_EUROPE VALUES ('Europe', 'Russian Federation'),
   PARTITION PART_OTHER VALUES (DEFAULT)
)
AS SELECT * FROM MV_MUSIC;

The DEFAULT option implies all regions not in the ones listed so far. Another type of partition is a range partition, where each separate partition is limited by a range of values. This partition uses the release date of CDs, stored in the field called DISCOGRAPHY.RELEASE_DATE:

CREATE TABLE PART_CD_RELEASE
PARTITION BY RANGE (RELEASE_DATE)
(
   PARTITION PART_2002 VALUES LESS THAN ('01-JAN-2003'),
   PARTITION PART_2003 VALUES LESS THAN ('01-JAN-2004'),
   PARTITION PART_2004 VALUES LESS THAN ('01-JAN-2005'),
   PARTITION PART_2005 VALUES LESS THAN (MAXVALUE)
)
AS SELECT * FROM MV_MUSIC;

The MAXVALUE option implies all dates into the future, from January 1, 2005, and beyond the year 2005. You can also create indexes on partitions. Those indexes can be created as local to each partition, or global to all partitions created for a table or materialized view. That is partitioning. There are other more complex methods of partitioning, but those methods are too detailed for this book.

That's all you need to know about advanced database structures. Take a quick peek at the physical side of things in the guise of hardware resources.

Understanding Hardware Resources

This section briefly examines some facts about hardware, including some specialized database server architectural structures, such as RAID arrays and Grid computing.

How Much Hardware Can You Afford?

Windows computers are cheap, but they have a habit of breaking. UNIX boxes (computers are often called "boxes") are expensive and have excellent reliability. I have heard of cases of UNIX servers running for years, with no problems whatsoever. Typically, a computer system is likely to remain stable as long as it is not tampered with. The simple fact is that Windows boxes are much more easily tampered with than UNIX boxes, so perhaps Windows machines have an undeserved poor reputation, as far as reliability is concerned.

How Much Memory Do You Need?

OLTP databases are memory- and processor-intensive. Data warehouse databases are I/O-intensive, and other than heavy processing power, couldn't care less how much RAM is allocated. The heavy type of memory usage for a relational database usually has a lot to do with concurrency, and managing the load of large numbers of users accessing your database all at the same time.
That's all about concurrency, and much more applicable to OLTP databases than to data warehouse databases. For an OLTP database, quite often the more RAM you have, the better. Note, however, that sizing up buffer cache values to the maximum amount of RAM available is pointless, even for an OLTP database. The more RAM allocated for use by a database, the more complex those buffers become for the database to manage. In short, data warehouses do not need a lot of memory for temporarily storing the most heavily used tables in RAM. There is no point, as data warehouses tend to read lots of data, from lots of tables, occasionally. RAM is not as important in a data warehouse as it is in an OLTP database. Now, briefly examine some specialized aspects of hardware usage, more from an architectural perspective.

Understanding Specialized Hardware Architectures

This section examines the following:

❑ RAID arrays
❑ Standby databases
❑ Replication
❑ Grids and computer clustering

RAID Arrays

The acronym RAID stands for Redundant Array of Inexpensive Disks. That means a bunch of small, cheap disks. Some RAID array hardware setups are cheap. Some are astronomically expensive. You get what you pay for, and you can purchase what suits your requirements. RAID arrays can give huge performance benefits for both OLTP and data warehouse databases.

Some of the beneficial factors of using RAID arrays are recoverability (mirroring), fast random access (striping and multiple disks with multiple bus connections — higher throughput capacity), and parallel I/O activity, where more than one disk can be accessed at the same time (concurrently). There are numerous types of RAID array architectures, with the following being the most common:

❑ RAID 0 — RAID 0 is striping. Striping splits files into pieces, spreading them over multiple disks. RAID 0 gives fast random read and write access, and is thus appropriate for OLTP databases. Rapid recoverability and redundancy are not catered for. RAID 0 is a little risky because of the lack of recoverability. Data warehouses that need to be highly contiguous (data on disk is all in one place) are not catered for by random access; however, RAID 0 can sometimes be appropriate for data warehouses, where large I/O executions utilize parallel processing, accessing many disks simultaneously.

❑ RAID 1 — RAID 1 is mirroring. Mirroring makes multiple copies of files, duplicating database changes at the I/O level on disk. Mirroring allows for excellent recoverability capabilities. RAID 1 can sometimes cause I/O bottleneck problems because of all the constant I/O activity associated with mirroring, especially with respect to frequently written tables — creating mirrored hot blocks. A hot block is a block in a file that is accessed more heavily than the hardware can cope with. Everything is trying to read and write that hot block at the same time. RAID 1 can provide recoverability for OLTP databases, but can hurt performance. RAID 1 is best used in data warehouses, where mirroring allows parallel read execution of more than one mirror at the same time.

❑ RAID 0+1 — RAID 0+1 combines RAID 0 and RAID 1, using both striping and mirroring. Both OLTP and data warehouse I/O performance will be slowed somewhat, but RAID 0+1 can provide good all-around recoverability and performance, perhaps offering the best of both worlds for both OLTP and data warehouse databases.
❑ RAID 5 — RAID 5 is essentially a minimized form of mirroring, duplicating only parity and not the real data. RAID 5 is effective with expensive RAID architectures, containing large chunks of purpose-built, RAID-array-contained, onboard buffering RAM memory.

Those are some of the more commonly implemented RAID array architectures. It is not necessary for you to understand the details; it is more important that you know this stuff actually exists.

Standby Databases

A standby database is a failover database. A standby database has minimal activity, usually only adding new records, changing existing records, and deleting existing records. Some database engines do allow for more sophisticated standby database architectures, but once again, the intention in this chapter is to inform you of the existence of standby databases.

Figure 13-6 shows a picture of how standby databases work. A primary database in Silicon Valley (San Jose) is used to service applications, catering to all changes to a database. In Figure 13-6, two standby databases are used, one in New York and one in Orlando. The simplest form of change tracking is used to transfer changes from primary to standby databases. The simplest form of transfer is log entries. Most larger database engines have log files, containing a complete history of all transactions.

[Figure 13-6: Standby database architecture allows for instant switchover (failover) recoverability. A primary database in San Jose transfers log entries to standby databases in New York and Orlando.]

Log files allow for recoverability of a database. Log files store all changes to a database. If you had to recover a database from backup files that are a week old, the database could be recovered by applying all changes stored in log files (for the last week). The result of one-week-old cold backups, plus log entries for the last week, would be an up-to-date database.

The most important use of standby database architecture is for failover. In other words, if the primary database fails (such as when someone pulls the plug, or San Jose is struck by a monstrous earthquake), the standby database automatically takes over. In the case of Figure 13-6, if the big one struck near San Jose, the standby database in New York or Orlando would automatically fail over, assuming all responsibilities, and become the new primary database. What is implied by failover is that a standby database takes over the responsibilities of servicing applications immediately — perhaps even within a few seconds.

The purest form of standby database architecture is as a more or less instant-response backup, generally intended to maintain full service to end-users. Some relational database engines allow standby databases to be utilized in addition to being just a failover option. Standby databases can sometimes be used as read-only, slightly behind, reporting databases. Some database engines even allow standby databases to be changeable, as long as structure and content from the primary database are not disturbed. In other words, a standby database could contain extra and additional tables and data, on top of what is being sent from the primary database. Typically, this scenario is used for more sophisticated reporting techniques, and possibly standby databases can even be utilized as a basis for a data warehouse database.
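The backup-plus-log recovery described earlier in this section can be sketched concretely. This is Oracle RMAN-style syntax, offered purely as an illustration; the chapter itself does not tie the discussion to any one engine:

-- Recovery sketch (Oracle RMAN-style commands):
RESTORE DATABASE;   -- bring back the week-old backup files
RECOVER DATABASE;   -- roll forward by applying the logged changes made since that backup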
Replication

Database replication is a method used to duplicate (replicate) data from a primary or master database, out to a number of other copies of the master database. As you can see in Figure 13-7, the master database replicates (duplicates) changes made on the master, out to two slave databases in New York and Orlando. This is similar in nature to standby database architecture, except that replication is much more powerful, and, unfortunately, more complicated to manage than standby database architecture. Typically, replication is used to distribute data across a wide area network (WAN) for a large organization.

[Figure 13-7: Replication is often used for distributing large quantities of data. A master database in San Jose replicates master-to-slave to slave databases in New York and Orlando.]

Tables and data can't be altered at slave databases — only by changes passed from the master database. In the case of Figure 13-8, a master-to-master, rather than master-to-slave, configuration is adopted. A master-to-slave relationship implies that changes can only be passed in one direction, obviously from the master to the slave database; therefore, database changes are distributed from master to slave databases. Of course, being replication, slave databases might need to have changes made to them. However, changes made at slave databases can't be replicated back to the master database. Figure 13-8 shows just the opposite, where all relationships between all replicated (distributed) databases are master-to-master. A master-to-master replication environment implies that changes made to any database are distributed to all other databases in the replicated environment across the WAN. Master-to-master replication is much more complicated than master-to-slave replication.

[Figure 13-8: Replication can be both master-to-slave and master-to-master. Here, all three sites (San Jose, New York, Orlando) exchange changes master-to-master.]

Replication is all about distribution of data to multiple sites, typically across a WAN. Standby is intentionally created as failover; however, in some database engines, standby database technology is now so sophisticated that it is very close in capability even to that of master-to-master replicated databases.

Grids and Computer Clustering

Computer grids are clusters of cheap computers, perhaps distributed on a global basis, connected using even something as loosely connected as the Internet. The Search for Extra Terrestrial Intelligence (SETI) program, where processing is distributed to people's personal home computers (processing when a screensaver is on the screen), is a perfect example of grid computing. Where RAID arrays cluster inexpensive disks, grids can be made of clusters of relatively inexpensive computers. Each computer acts as a portion of the processing and storage power of a large, grid-connected computer, appearing to end users as a single computational processing unit.

Clustering is a term used to describe a similar architecture to that of computer grids, but the computers are generally very expensive, and located within a single data center, for a single organization. The difference between grid computing and clustered computing is purely one of scale — one being massive and the other localized. Common to both grids and clusters is that computing resources (CPU and storage) are shared transparently.
In other words, a developer writing programs to access a database does not even need to know that the computer for which code is being written is in reality a group of computers, built as either a grid or a cluster. Grid Internet-connected computers could be as much as five years old, which is geriatric for a computer — especially a personal computer. They might have all been purchased in a yard sale. If there are enough senior computers, and they are connected properly, the grid itself could contain enormous computing power. Clustered architectures are used by companies to enhance the power of their databases. Grids, on the other hand, are often used to help processing for extremely large and complex problems that perhaps even a supercomputer might take too long to solve.

Summary

In this chapter, you learned about:

❑ Views and how to create them
❑ Sensible and completely inappropriate uses of views
❑ Materialized views and how to create them
❑ Nested materialized views and QUERY REWRITE
❑ Different types of indexes (including BTree indexes, bitmap indexes, and clustering)
❑ Auto counters and sequences
❑ Partitioning and parallel processing
❑ Creating list and range partitions
❑ Partitioning materialized views
❑ Hardware factors (including memory usage as applied to OLTP or data warehouse databases)
❑ RAID arrays for mirroring (recoverability) and striping (performance)
❑ Standby databases for recoverability and failover
❑ Replication of databases to cater to distribution of data
❑ Grid computing and clustering to harness as much computing power as possible

This chapter has moved somewhat beyond the realm of database modeling, examining specialized database objects, some brief facts about hardware resources, and finally some specialized database architectures.

Glossary

1st Normal Form (1NF) — Eliminate repeating groups, such that all records in all tables can be identified uniquely, by a primary key in each table. In other words, all fields other than the primary key must depend on the primary key. All Normal Forms are cumulative. (See Normal Forms.)

1st Normal Form made easy — Remove repeating fields by creating a new table, where the original and new tables are linked together with a master-detail, one-to-many relationship. Create primary keys on both tables, where the detail table will have a composite primary key, containing the master table primary key field as the prefix field of its primary key. That prefix field is also a foreign key back to the master table.

2nd Normal Form (2NF) — All non-key values must be fully functionally dependent on the primary key. No partial dependencies are allowed. A partial dependency exists when a field is fully dependent on a part of a composite primary key. All Normal Forms are cumulative. (See Normal Forms.)

2nd Normal Form made easy — Performs a seemingly similar function to that of 1st Normal Form, but creates a table where repeating values (rather than repeating fields) are removed to a new table. The result is a many-to-one relationship, rather than a one-to-many relationship (see 1st Normal Form), created between the original (master) table and the new table. The new table gets a primary key consisting of a single field. The master table contains a foreign key pointing back to the primary key of the new table. That foreign key is not part of the primary key in the original table.

3rd Normal Form (3NF) — Eliminate transitive dependencies.
What this means is that a field is indirectly determined by the primary key, because the field is functionally dependent on another field, where the other field is dependent on the primary key. All Normal Forms are cumulative. (See Normal Forms.)

3rd Normal Form made easy — Elimination of a transitive dependency, which implies creation of a new table, for something indirectly dependent on the primary key in an existing table.

4th Normal Form (4NF) — Eliminate multiple sets of multi-valued dependencies. All Normal Forms are cumulative. (See Normal Forms.)

5th Normal Form (5NF) — Eliminate cyclic dependencies. This is also known as Projection Normal Form (PJNF). All Normal Forms are cumulative. (See Normal Forms.)

Abstraction — In computer jargon, this implies something created to generalize a number of other things. It is typically used in object models, where an abstract class caters to the shared attributes and methods of inherited classes.

Active data — Information in a database constantly accessed by applications, such as today's transactions, in an OLTP database.

Ad-hoc query — A query sent to a database by an end-user or power user, just trying to get some information quickly. Ad-hoc queries are subjected to a database where the content, structure, and performance of said query are not necessarily catered for by the database model. The result could be a performance problem, and in extreme cases, even an apparent database halt.

Aggregated query — A query using a GROUP BY clause to create a summary set of records (a smaller number of records).

Algorithm — A computer program (or procedure) that is a step-by-step procedure, solving a problem in a finite number of steps.

Alternate index — An alternate to the primary relational structure of a table, determined by primary and foreign key indexes. Alternate indexes are "alternate" because they are in addition to primary and foreign key indexes, existing as alternate sorting methods to those provided by primary and foreign keys.

Analysis — The initial fact-finding process discovering what is to be done by a computer system.

Anomaly — With respect to relational database design, essentially an erroneous change to data, more specifically to a single record.

ANSI — American National Standards Institute.

Application — A front-end tool used by developers, in-house staff, and end-users to access a database.

Ascending index — An index built sorted in a normally ascending order, such as A, B, C.

Attribute — The equivalent of a relational database field, used more often to describe a similar low-level structure in object structures.

Auto counter — Allows automated generation of sequences of numbers, usually one after the other, such as 101, 102, 103, and so on. Some database engines call these sequences.

Backus-Naur form — A syntax notation convention.

BETWEEN — Verifies expressions between a range of two values.

Binary object — Stores data in binary format, typically used for multimedia (images, sound, and so on).

Bitmap index — An index containing binary representations for each record using 0's and 1's. For example, a bitmap index creates two bitmaps for the two values of M for Male and F for Female. When M is encountered, the M bitmap is set to 1 and the F bitmap is set to 0.

Black box — Objects or chunks of code that can function independently, where changes made to one part of a piece of software will not affect others.

Boyce-Codd Normal Form (BCNF) — Every determinant in a table is a candidate key.
If there is only one candidate key, then 3rd Normal Form and Boyce-Codd Normal Form are one and the same. All Normal Forms are cumulative. (See Normal Forms.)

BTree index — A binary tree. If drawn out on a piece of paper, a BTree index looks like an upside-down tree. The tree is called "binary" because binary implies two options under each branch node: branch left and branch right. The binary counting system of numbers contains two digits, namely 0 and 1. The result is that a binary tree only ever has two options as leafs within each branch. A BTree consists of a root node, branch nodes, and ultimately leaf nodes containing the indexed field values in the ending (or leaf) nodes of the tree.

Budget — A determination of how much something will cost, whether it is cost-effective, whether it is worth the cost, whether it is affordable, and whether it gives the company an edge over the competition without bankrupting the company.

Business processes — The subject areas of a business. The method by which a business is divided up. In a data warehouse, the subject areas become the fact tables.

Business rules — The processes and flow of whatever is involved in the daily workings of an organization. The operation of that business and the decisions made to execute the operational processes of that organization.

Cache — A term commonly applied to buffering data into fast access memory, for subsequent high-speed retrieval.

Candidate key — Also known as a potential key, or permissible key. A field or combination of fields that can act as a primary key field for a table. A candidate key uniquely identifies each record in the table.

Cartesian product — A mathematical term describing a set of all the pairs that can be constructed from a given set. Statistically, it is known as a combination, not a permutation. In SQL jargon, a Cartesian product is also known as a cross join.

Cascade — Changes to data in parent tables are propagated to all child tables containing foreign key field copies of a primary key from the parent table.

Cascade delete — A deletion that occurs when the deletion of a master record automatically deletes all child records in child-related tables, before deleting the record in the master table.

Central Processing Unit (CPU) — The processor (chip) in your computer.

Check constraint — A constraint attached to a field in a database table, as a metadata field setting, and used to validate a given input value.

Class — An object methodology term for the equivalent of a table in a relational database.

Client-server — An environment that was common in the pre-Internet days, where a transactional database serviced users within a single company. The number of users could range from as little as one to thousands, depending on the size of the company. The critical factor was actually a mixture of both individual record change activity and modestly sized reports. Client-server database models typically catered for low concurrency and low throughput at the same time, because the number of users was always manageable.

Cluster — Allows a single table or multiple table partial copy of records, from underlying tables. Materialized views have superseded clusters.

Clustered index — See Index organized table.

Coding — Programming code, in whatever language is appropriate. For example, C is a programming language.

Column — See Field.

COMMIT — Completes a transaction by storing all changes to a database.
Complex datatype — Typically used in object structures, consisting of a number of fields.

Composite index — Indexes that can be built on more than a single field. Also known as composite field indexes or multiple field indexes.

Composite key — A primary key, unique key, or foreign key consisting of more than one field.

Concurrent — More than one process executed at the same time means two processes are executing simultaneously, or more than one process accessing the same piece of data at the same time.

Configuration — A computer term used to describe the way in which a computer system (or part thereof, such as a database) is installed and set up. For example, when you start up a Windows computer, all of your desktop icons are part of the configuration (of you starting up your computer). What the desktop icons are, and where on your desktop they are placed, are stored in a configuration file on your computer somewhere. When you start up your computer, the Windows software retrieves that configuration file, interprets its contents, and displays all your icons on the screen for you.

Constraint — A means to constrain, restrict, or apply rules both within and between tables.

Construction — A stage at which you build and test code. For a database model, you build scripts to create tables, referential integrity keys, indexes, and anything else such as stored procedures.

Cross join — See Cartesian product.

Crow's foot — Used to describe the many side of a one-to-many or many-to-many relationship. A crow's foot looks quite literally like the imprint of a crow's foot in some mud, with three splayed toes.

Cyclic dependency — In the context of the relational database model, X is dependent on Y, which in turn is also dependent on X, directly or indirectly. Cyclic dependence, therefore, indicates a logically circular pattern of interdependence. Cyclic dependence typically occurs with tables containing a composite primary key with three or more fields, where, for example, three fields are related in pairs to each other. In other words, X relates to Y, Y relates to Z, and X relates to Z.

Data — A term applied to organized information.

Data Definition Language (DDL) — Commands used to change metadata. In some databases, these commands require a COMMIT command; in other database engines, this is not the case. When a COMMIT command is not required, these commands automatically commit any pending changes to the database, and cannot be rolled back.

Data Manipulation Language (DML) — Commands that change data in a database. These commands are INSERT, UPDATE, and DELETE. Changes can be committed permanently using the COMMIT command, and undone using the ROLLBACK command. These commands do not commit automatically.

Data mart — A subset part of a data warehouse. Typically, a data mart is made up of a single star schema (a single fact table).

Data warehouse — A large transactional history database used for predictive and planning reporting.

Database — A collection of information, preferably related information, and preferably organized.

Database block — A physical substructure within a database and the smallest physical unit in which data is stored on disk.

Database event — See Trigger.

Database model — A model used to organize and apply structure to other disorganized information.

Database procedure — See Stored procedure.

Datatype — Restricts values in fields, such as allowing only a date or a number.

Datestamp — See Timestamp.

DDL — See Data Definition Language.
Decimal — Datatypes that contain decimal or non-floating-point real numbers.

Decision Support System (DSS) — Commonly known as DSS databases, these support decisions, generally more management-level and even executive-level decision-type objectives.

DEFAULT — A setting used as an optional value for a field in a record, when a value is not specified.

DELETE — A command that can be used to remove one, some, or all rows from a table.

Delete anomaly — A record cannot be deleted from a master table unless all sibling records are deleted first.

Denormalization — Most often the opposite of normalization, more commonly used in data warehouse or reporting environments. Denormalization decreases granularity by reversing normalization.

Dependency — Something relies on something else.

Descending index — An index sorted in a normally descending order (such as in C, B, A).

Design — Analysis discovers what needs to be done. Design figures out how what has been analyzed can and should be done.

Determinant — Determines the value of another value. If X determines the value Y (at least partially), then X determines the value of Y, and is thus the determinant of Y.

Dimension table — A descriptive or static data table in a data warehouse.

DISTINCT clause — A query SELECT command modifier for retrieving unique rows from a set of records.

DML — See Data Manipulation Language.

Domain Key Normal Form (DKNF) — The ultimate application of normalization. This is more a measurement of conceptual state, as opposed to a transformation process in itself. All Normal Forms are cumulative. (See Normal Forms.)

DSS — See Decision Support System.

Dynamic data — Data that changes significantly, over a short period of time.

Dynamic string — See Variable length string.

End-user — Ultimate users of a computer system. The clients and staff of a company who actually use software to perform business functions (such as sales people, accountants, and busy executives).

Entity — A relational database modeling term for a table.

Entity Relationship Diagram (ERD) — A diagram that represents the structural contents (the fields) in tables for an entire schema, in a database. Additionally included are schematic representations of relationships between entities, represented by various types of relationships, plus primary and foreign keys.

Event Trigger — See Trigger.

Expression — In mathematical terms, a single or multi-functional (or valued) value, ultimately equating to a single value, or even another expression.

External procedure — Similar to stored procedures, except they are written in a non-database-specific programming language. External procedures are chunks of code written in a language not native to the database engine, such as Java or C++; however, external procedures are still executed from within the database engine itself, perhaps on data within the database.

Fact table — The biggest table in a data warehouse, central to a Star schema, storing the transactional history of a company.

Fact-dimensional structure — See Star schema.

Field — Part of a table division that imposes structure and datatype specifics onto each of the field values in a record.

Field list — This is the part of a SELECT command listing fields to be retrieved by a query. When more than one field is retrieved, the fields become a list of fields, or field list.

Fifth Normal Form — See 5th Normal Form.

File system — A term used to describe the files in a database at the operating system level.
Filtered query — See Filtering.

Filtering — Retrieve a subset of records, or remove a subset of records from the source. Filtering is done in SQL using the WHERE clause for basic query record retrieval, and using the HAVING clause to remove groups from an aggregated query.

First Normal Form — See 1st Normal Form.

Fixed-length records — Every record in a table must have the same byte-length. This generally prohibits use of variable-length datatypes such as variable-length strings.

Fixed length string — The CHAR datatype is a fixed-length string. For example, setting a CHAR(5) datatype to "ABC" will force padding of spaces onto the end of the string, up to five characters ("ABC  ").

Flat file — A term generally applying to an unstructured file, such as a text file.

Floating point — A real number where the decimal point can be anywhere in the number.

Foreign key — A type of constraint where columns contain copies of primary key values, uniquely identified in parent entities, representing the child or sibling side of what is most commonly a one-to-many relationship.

Formal method — The application of a theory, a set of rules, or a methodology. Used to quantify and apply structure to an otherwise completely disorganized system. Normalization is a formal method used to create a relational database model.

Format display setting — A field setting used to determine the display format of the contents of a field. For example, the datatype definition of INTEGER $9,999,990.99, when set to the value 500, will be displayed as $500.00 (format models can be database specific).

FROM clause — The part of a query SELECT command that determines tables retrieved from, and how tables are joined (when using the JOIN, ON, and USING clauses).

Front-end — Customer-facing software. Usually, applications either purchased, online over the Internet, or in-house as custom-written applications.

Full functional dependence — X determines Y, but X combined with Z does not determine Y. In other words, Y depends on X and X alone. If Y depends on X combined with anything else, then there is not full functional dependence. (See Functional dependence.)

Full outer join — A query finding the combination of intersection, plus records in the left-sided table but not in the right-sided table, and records in the right-sided table but not in the left (a combination of both left and right outer joins).

Function — A programming unit or expression returning a single value, also allowing determinant values to be passed in as parameters. Thus, parameter values can change the outcome or return result of a function. The beauty of a function is that it is self-contained and can thus be embedded into an expression.

Functional dependence — Y is functionally dependent on X if the value of Y is determined by X. In other words, if Y = X + 1, the value of X will determine the resultant value of Y. Thus, Y is dependent on X as a function of the value of X. Functional dependence is the opposite of determinance. (See Full functional dependence.)

Generic database model — A database model usually consisting of a partial set of metadata, about metadata; in other words, tables that contain tables, which contain data. In modern-day, large, and very busy databases, this can be extremely inefficient.

Granularity — The depth of detail stored, typically applied to a data warehouse. The more granularity the data warehouse contains, the bigger fact tables become, because they contain more records.
The safest option is to include all historical data down to the lowest level of granularity. This ensures that any possible future requirements for detailed analysis can always be met, without needed data perhaps missing in the future (assuming hardware storage capacity allows it).

Grid computing — Clusters of cheap computers, perhaps distributed on a global basis, connected using even something as loosely connected as the Internet.

GROUP BY clause — A clause in the query SELECT command used to aggregate and summarize records into aggregated groups of fewer records.

Hash index — A hashing algorithm is used to organize an index into a sequence, where each indexed value is retrievable based on the result of the hash key value. Hash indexes are efficient with integer values, but are usually subject to overflow as a result of changes.

Heterogeneous system — A computer system consisting of dissimilar elements or parts. In database parlance, this implies a set of applications and databases where the database engines are different. In other words, a company could have a database architecture consisting of multiple database engines, such as Microsoft Access, Sybase, Oracle, Ingres, and so on. All databases, regardless of type, are melded together into a single (apparently one and the same) transparent database-application architecture.

Hierarchical database model — An inverted tree-like structure. The tables of this model take on a child-parent relationship. Each child table has a single parent table, and each parent table can have multiple child tables. Child tables are completely dependent on parent tables; therefore, a child table can only exist if its parent table does. It follows that any entries in child tables can only exist where corresponding parent entries exist in parent tables. The result of this structure is that the hierarchical database model can support one-to-many relationships, but not many-to-many relationships.

Homogeneous system — Everything is the same, such as database engines, application SDKs, and so on.

Hot block — A small section of disk that, when accessed too frequently, can cause too much competition for that specific area. It can result in a serious slow-down in general database and application performance.

Hybrid database — A database installation mixing multiple types of database architectures. Typically, the mix includes both OLTP (high concurrency) and data warehouse (heavy reporting) in the same database. (See Online Transaction Processing.)

Identifying relationship — The child table is partially identified by the parent table, and partially dependent on the parent table. The parent table primary key is included in the primary key of the child table. In other words, if the child record exists, then the foreign key value in the child table must be set to something other than NULL. So, you can't create the child record unless the related parent record exists. In other words, the child record can't exist without a related parent record.

Implementation — The process of creating software from a design of that software. A physical database is an implementation of a database model.

Inactive data — Inactive data is information passed from an OLTP database to a data warehouse, where the inactive data is not used in the customer-facing OLTP database on a regular basis. Inactive data is used in data warehouses to make projections and forecasts, based on historical company activities. (See Online Transaction Processing.)
Index — Usually (and preferably) a copy of a very small section of a table, such as a single field, and preferably a short-length field.

Index Organized Table (IOT) — Builds a table in the sorted order of an index, typically using a BTree index. It is also called a clustered index in some database engines, because data is clustered into the form and structure of a BTree index.

Indexed Sequential Access Method (ISAM) index — A method that uses a simple structure with a list of record numbers. When reading the records from the table, in the order of the index, the indexed record numbers are read, accessing the records in the table using pointers between index and table records.

In-house — A term applied to something occurring or existing within a company. An in-house application is an application serving company employees only. An intranet application is generally in-house within a company, or within the scope of its operational capacity.

Inline constraint — A constraint created when a field is created, applying to a single field.

Inner join — An SQL term for an intersection, where records from two tables are selected, but only related rows are retrieved, and joined to each other.

Input mask setting — A field setting used to control the input format of the contents of a field. For example, the datatype definition of INTEGER $990.99 will not accept an input of 5000, but will accept an input of 500.

INSERT — The command that allows addition of new records to tables.

Insert anomaly — A record cannot be added to a detail table unless the record exists in the master table.

Integer — A whole number. For example, 555 is an integer, but 55.43 is not.

Internet Explorer — A Microsoft Windows tool used to gain access to the Internet.

Intersection — A term from mathematical set theory describing items common to two sets (existing in both sets).

IOT — See Index Organized Table.

Iterative — In computer jargon, implies a process that can be repeated over and over again. When there is more than one step, all steps can be repeated, sometimes in any order.

Java — A powerful and versatile programming language, often used to build front-end applications, but not restricted as such.

Join — A joined query implies that the records from more than a single record source (table) are merged together. Joins can be built in various ways, including set intersections, various types of outer joins, and otherwise.

Key — A specialized field determining uniqueness, or application of referential integrity through use of primary and foreign keys.

KISS rule — "Keep it simple stupid."

Kluge — A term often used by computer programmers to describe a clumsy or inelegant solution to a problem. The result is often a computer system consisting of a number of poorly matched elements.

Left outer join — A query finding the combination of intersection, plus records in the left-sided table but not in the right-sided table.

Legacy system — A database or application using an out-of-date database engine or application tools. Some legacy systems can be as much as 30, or even 40, years old.

Linux — An Open Source operating system with similarities to both UNIX and Microsoft Windows.

Location dimension — A standard table used within a data warehouse, constructed from fact table address information, created to facilitate queries dividing up facts based on regional values (such as countries, cities, states, and so on).
Macro — A pseudo-type series of commands, typically not really a programming language, and sometimes a sequence of commands built from GUI-based commands (such as those seen on the File menu in Microsoft Access). Macros are not really programming-language-built, but more power-user, GUI-driven sequences of steps.

Many-to-many — This relationship represents an unresolved relationship between two tables. For example, students in a college can take many courses at once. So, a student can be enrolled in many courses at once, and a course can contain many enrolled students. The solution is to resolve the many-to-many relationship into three, rather than two, tables. Each of the original tables is related to the new table as a one-to-many relationship, allowing access to unique records (in this example, unique course and student combinations).
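As a concrete sketch of the many-to-many resolution just described (generic SQL; the STUDENT, COURSE, and ENROLLMENT names are invented for this example, not taken from the book):

-- Resolving a many-to-many relationship into three tables.
CREATE TABLE STUDENT
(
   STUDENT_ID INTEGER PRIMARY KEY,
   NAME       VARCHAR(64)
);

CREATE TABLE COURSE
(
   COURSE_ID INTEGER PRIMARY KEY,
   TITLE     VARCHAR(64)
);

-- The resolving table: one-to-many from STUDENT, one-to-many from COURSE.
-- Each record is a unique student and course combination.
CREATE TABLE ENROLLMENT
(
   STUDENT_ID INTEGER REFERENCES STUDENT,
   COURSE_ID  INTEGER REFERENCES COURSE,
   PRIMARY KEY (STUDENT_ID, COURSE_ID)
);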
{}
# Computer Number Systems

All digital computers – from supercomputers to your smartphone – ultimately can do one thing: detect whether an electrical signal is on or off. That basic information, called a bit (binary digit), has two values: a 1 (or true) when the signal is on, and a 0 (or false) when the signal is off. Larger values can be stored by a group of bits. For example, 3 bits together can take on 8 different values.

Computer scientists use the binary number system (that is, base 2) to represent the values of bits. Proficiency in the binary number system is essential to understanding how numbers and information are represented in a computer. Since binary numbers representing moderate values quickly become rather lengthy, bases eight (octal) and sixteen (hexadecimal) are frequently used as shorthand. In octal, groups of 3 bits form a single octal digit; in hexadecimal, groups of 4 bits form a single hex digit.

In this category, we will focus on conversion between binary, octal, decimal, and hexadecimal numbers. There may be some arithmetic in these bases, and occasionally, a number with a fractional part. We will not cover how negative numbers or floating point numbers are represented in binary.

## Resources

Ryan's Tutorials covers this topic beautifully. Rather than trying to duplicate that work, we'll point you to the different sections:

1. Number Systems - An introduction to what number systems are all about, with emphasis on decimal, binary, octal, and hexadecimal. ACSL will typically identify the base of a number using a subscript. For example, $123_8$ is an octal number, whereas $123_{16}$ is a hexadecimal number.
2. Binary Conversions - This section shows how to convert between binary, decimal, hexadecimal and octal numbers. In the Activities section, you can practice converting numbers.
3. Binary Arithmetic - Describes how to perform various arithmetic operations (addition, subtraction, multiplication, and division) with binary numbers. ACSL problems will also cover basic arithmetic in other bases, such as adding and subtracting two hexadecimal numbers. ACSL problems will not cover division in other bases.
4. Negative Numbers - ACSL problems will not cover how negative numbers are represented in binary.
5. Binary Fractions and Floating Point - The first part of this section is relevant to ACSL: fractions in other bases. ACSL will not cover floating point numbers in other bases. So, focus on the section Converting to a Binary Fraction, but keep in mind that ACSL problems may also cover octal and hexadecimal fractions.

The CoolConversion.com online calculator is another online app for practicing conversion from/to decimal, hexadecimal, octal and binary; this tool shows the steps that one goes through in the conversion.

## Format of ACSL Problems

The problems in this category will focus on converting between binary, octal, decimal, and hexadecimal, basic arithmetic of numbers in those bases, and, occasionally, fractions in those bases.

To be successful in this category, you must know the following facts cold:

1. The decimal value of each hex digit A, B, C, D, E, F
2. The binary value of each hex digit A, B, C, D, E, F
3. Powers of 2, up to 4096
4. Powers of 8, up to 4096
5. Powers of 16, up to 65,536

## Sample Problems

### Sample Problem 1

Solve for $x$ where $x_{16}=3676_8$.

Solution: One method of solution is to convert $3676_8$ into base 10, and then convert that number into base 16 to yield the value of $x$.
An easier solution, less prone to arithmetic mistakes, is to convert from octal (base 8) to hexadecimal (base 16) through the binary (base 2) representation of the number:

\begin{align} 3676_8 &= 011 ~ 110 ~ 111 ~ 110_2 & \text{convert each octal digit into base 2}\hfill\cr &= 0111 ~ 1011 ~ 1110_2 & \text{group by 4 bits, from right-to-left}\hfill\cr &= 7 ~ \text{B} ~ \text{E}_{16} & \text{convert each group of 4 bits into a hex digit}\cr \end{align}

### Sample Problem 2

Solve for $x$ in the following hexadecimal equation: $x= \text{F5AD}_{16} - \text{69EB}_{16}$

Solution: One could convert the hex numbers into base 10, perform the subtraction, and then convert the answer back to base 16. However, working directly in base 16 isn't too hard. As in conventional decimal arithmetic, one works from right-to-left, from the least significant digits to the most.

The rightmost digit becomes 2, because D-B=2. The next column is A-E; we need to borrow a one from the 5 column, and 1A-E=C. In the next column, 4-9=B, again borrowing a 1 from the next column. Finally, in the leftmost column, E-6=8. Combining the results of each column, we get a final answer of $8BC2_{16}$.

### Sample Problem 3

How many numbers from 100 to 200 in base 10 consist of distinct ascending digits and also have distinct ascending hex digits when converted to base 16?

Solution: There are 13 numbers that have ascending digits in both bases from 100 to 200. They are (in base 10): 123 (7B), 124, 125, 126, 127 (7F), 137 (89), 138, 139 (8B), 156 (9C), 157, 158, 159 (9F), 189 (BD)

## Video Resources

There are many YouTube videos about computer number systems. Here are a handful that cover the topic nicely, without too many ads:

Number Systems - Converting Decimal, Binary and Hexadecimal (Joe James) - An introduction to number systems, and how to convert between decimal, binary and hexadecimal numbers.

Lesson 2.3: Hexadecimal Tutorial (Carl Herold) - The video focuses on hexadecimal numbers: their relationship to binary numbers and how to convert to decimal numbers.

Collins Lab: Binary & Hex (Adafruit Industries) - A professionally produced video that explains the number systems, how and why binary numbers are fundamental to computer science, and why hexadecimal is important to computer programmers.
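The sample problems above do not exercise the base fractions mentioned in the Resources section, so here is one quick worked example of our own (not from an ACSL contest):

$$0.101_2 = 1 \cdot 2^{-1} + 0 \cdot 2^{-2} + 1 \cdot 2^{-3} = \frac{1}{2} + \frac{1}{8} = 0.625_{10}$$

The same place-value idea works in octal and hexadecimal; for instance, $0.4_8 = 4 \cdot 8^{-1} = 0.5_{10}$.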
{}
# Math Help - System

1. ## System

Solve the system in R:

x + y = 1
y + z = 1
x + z = 1

2. Guideline:

Express y as a function of x, and put it in the second equation. Express z as a function of x, and put it in the third equation. Find z, put z in the second equation, and find y. Put y in the first equation and find x.

3. Hello, dhiab!

Solve the system in R: . . $\begin{array}{cccccccc} x & + & y & & & = & 1 & [1] \\ & & y & + & z & = & 1 & [2] \\ x & & & + &z & = & 1 & [3] \end{array}$

$\begin{array}{ccccc}\text{Subtract [1]-[2]:} & x - z &=& 0 \\ \text{Add [3]:} & x + z &=& 1 \end{array}$

And we have: . $2x \:=\:1 \quad\Rightarrow\quad \boxed{x \:=\:\tfrac{1}{2}}$

Substitute into [1]: . $\tfrac{1}{2} + y \:=\:1 \quad\Rightarrow\quad\boxed{ y \:=\:\tfrac{1}{2}}$

Substitute into [2]: . $\tfrac{1}{2} + z \:=\:1 \quad\Rightarrow\quad \boxed{z \:=\:\tfrac{1}{2}}$

4. Another approach: Add all the equations: $2(x+y+z)=3\Rightarrow x+y+z=\frac{3}{2}$.

$x+y=1\Rightarrow z=\frac{1}{2}$

$y+z=1\Rightarrow x=\frac{1}{2}$

$x+z=1\Rightarrow y=\frac{1}{2}$
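To spell out the implicit step in the last approach: since $x+y+z=\frac{3}{2}$ and $x+y=1$, subtracting gives

$$z = (x+y+z) - (x+y) = \frac{3}{2} - 1 = \frac{1}{2},$$

and the other two substitutions work the same way.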
{}
# Electric field in a cylindrical capacitor

I like Serena (Homework Helper):

> It seems the sum of enclosed charges is zero, because k-k=0.

Yes, it seems that way, and it is. ;)

> Yes, it seems that way, and it is. ;)

However, when I think of it, it doesn't make sense. I think there will be an electric field outside the two capacitor cylinders. And from the answer for the potential, the electric field isn't 0 in this case.

[Attachment: 1.jpg — the book's solution for the potential]

I like Serena (Homework Helper):

Whaaaaat? You peeked! ;)

Ah well, did you try to calculate the gradient of phi outside the capacitor? What is it?

Actually, I'm a bit surprised the book gives this as a solution. You are free to select the integration constant in this problem, but usually it is taken such that the potential is zero at infinity. Here they have chosen to have the potential zero inside the inner cylinder.

Btw, now that I see the solution, I suddenly realize the volume integral you calculated is wrong. Sorry. You calculated the entire volume and assumed the charge density was constant everywhere and equal to k. But it isn't. The charge is k per meter just on the outside of the cylinder. To be clear: for a cylinder of radius $r$ and height $\Delta z$, with $r$ between $R_1$ and $R_2$, the calculation is:

$$\iint\kern-1.4em\bigcirc\kern.7em E \cdot dA = \iiint \frac \rho {\epsilon_0} dV$$

$$E_r(r) \cdot 2 \pi r \Delta z = \frac k {\epsilon_0} \Delta z$$

Oh! I suddenly know why. The electric field outside the outer cylinder is zero. However, that doesn't imply that the potential is zero. Instead, it means there is no change of potential. Since at $r=R_2$ the potential is $\frac{k\ln(R_1/R_2)}{2\pi \epsilon_0}$, the potential outside the cylinder is still $\frac{k\ln(R_1/R_2)}{2\pi \epsilon_0}$. Am I right?

I like Serena (Homework Helper):

Yep. The potential (outside the outer cylinder) does not depend on any coordinate, so its derivative (the electric field) is zero.

In fact, I have answered that question, and we are now on page 2. Can you help me check whether I got the correct answer? Sorry, I posted it on page 1 but it is now on page 2... I don't know why...

It seems all questions are solved :-) Thanks everyone!
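For reference, the potential quoted in the thread follows from integrating the field found above between the cylinders. This is our own filled-in step, using the thread's notation (charge $k$ per unit length on the inner cylinder, and $\phi(R_1)=0$ as the book's choice of integration constant):

$$\phi(R_2) - \phi(R_1) = -\int_{R_1}^{R_2} E_r(r)\,dr = -\int_{R_1}^{R_2} \frac{k}{2\pi\epsilon_0\, r}\,dr = -\frac{k}{2\pi\epsilon_0}\ln\frac{R_2}{R_1} = \frac{k}{2\pi\epsilon_0}\ln\frac{R_1}{R_2}$$

With $\phi(R_1)=0$, this gives exactly the $\frac{k\ln(R_1/R_2)}{2\pi\epsilon_0}$ value discussed above, and since $E=0$ outside the outer cylinder, the potential keeps that constant value for all $r>R_2$.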
{}