# Unwrapping problem relevant to SNAPHU

Hi. I am trying to derive a DEM from Sentinel-1 data by following this video; the whole process I follow is shown at 11:07 (https://youtu.be/7w_-deMSRTs?t=667). Unfortunately, I ran into a problem at the unwrapping step shown at 11:19 (https://youtu.be/7w_-deMSRTs?t=679). Although SnaphuExport completed successfully and I chose in SNAP
# Lower bound confidence interval NaN from time to time 43 posts / 0 new Offline Joined: 11/19/2018 - 19:32 Lower bound confidence interval NaN from time to time Hi, I was wondering about some lower bound NaN values I've come across (or also when, for example the AE model has C set to 0, but the variance component has a lower bound CI NaN value even though all of them should have a value of 0 (lower bound, estimate, upper bound)). Specifically, I have had NaN values for the A standardized variance component lower bound of the confidence interval from time to time. Does anyone know why this may be? Is there a work-around on how to adjust it? It's for relatively few estimates, but it does come up. I am using the latest OpenMx version and it seems to be running fine otherwise in general (on a univariate model, etc.). I know there is also calculating a CI via bootstrapping, but I am not entirely sure how to do that as of yet (but maybe that would lend itself toward resolving this issue). Offline Joined: 12/12/2012 - 12:15 verbose output? Have you looked at the summary(..., verbose=TRUE) output? Offline Joined: 11/19/2018 - 19:32 Thanks for the reply! I just Thanks for the reply! I just included it and it lists the following: R[write to console]: OpenMx version: 2.19.5 [GIT v2.19.5] R version: R version 4.1.0 (2021-05-18) Platform: x86_64-conda-linux-gnu Default optimizer: SLSQP NPSOL-enabled?: No OpenMP-enabled?: Yes I thought NPSOL was the default optimizer (at some point, though maybe I am mistaken)? Not sure if that may have to do with it. It also gives NaN values for the upper CI bound for the C estimate (when C is fixed to 0) at times as well. Offline Joined: 12/12/2012 - 12:15 verbose CI output That looks fine. I think SLSQP is used for confidence intervals regardless of which optimizer you select. The output I was after is the detailed CI output. Here's what it outputs by default, confidence intervals: lbound estimate ubound note common.A[1,1] 0.5566175 6.173024e-01 0.68400870 common.C[1,1] NA 2.406416e-13 0.05269798 !!! common.E[1,1] 0.1537491 1.730463e-01 0.19563705 and here's the detailed output with summary(model, verbose=TRUE): CI details: parameter side value fit diagnostic statusCode method a c e mean 1 common.A[1,1] lower 0.55661745 4071.507 success OK neale-miller-1997 0.7460680 5.399907e-04 0.4244517 21.39288 2 common.A[1,1] upper 0.68400870 4071.519 success OK neale-miller-1997 0.8270482 4.229033e-06 0.4095962 21.39341 3 common.C[1,1] lower 0.00000000 4067.663 alpha level not reached infeasible non-linear constraint neale-miller-1997 0.7856859 0.000000e+00 0.4159883 21.39293 4 common.C[1,1] upper 0.05269798 4071.549 success infeasible non-linear constraint neale-miller-1997 0.7560895 2.295604e-01 0.4181163 21.39237 5 common.E[1,1] lower 0.15374906 4071.505 success infeasible non-linear constraint neale-miller-1997 0.7968068 2.489554e-08 0.3921085 21.39306 6 common.E[1,1] upper 0.19563705 4071.512 success infeasible non-linear constraint neale-miller-1997 0.7729641 9.786281e-08 0.4423088 21.39289 Offline Joined: 11/19/2018 - 19:32 Oh I see, yeah that looks Oh I see, yeah that looks very in-depth, I didn't print it out, so now I see it. Thanks a lot for the clarification. Here are two outputs (for a C and A lower bound respectively where it's NaN) (hopefully the formatting is okay): 1) (for C lower bound NaN) confidence intervals: lbound estimate ubound note MZ.StdVarComp[1,1] 0.07422233 0.3950067 0.6354808 MZ.StdVarComp[2,1] NA 0.0000000 0.0000000 !!! 
MZ.StdVarComp[3,1] 0.36451917 0.6049933 0.9257777 CI details: parameter side value fit diagnostic 1 MZ.StdVarComp[1,1] lower 0.07422233 -37.95743 success 2 MZ.StdVarComp[1,1] upper 0.63548083 -37.95903 success 3 MZ.StdVarComp[2,1] lower 0.00000000 -41.80563 alpha level not reached 4 MZ.StdVarComp[2,1] upper 0.00000000 -37.93677 success 5 MZ.StdVarComp[3,1] lower 0.36451917 -37.95903 success 6 MZ.StdVarComp[3,1] upper 0.92577767 -37.95743 success statusCode method a e 1 OK neale-miller-1997 0.05779659 0.2041213 2 OK neale-miller-1997 0.18255742 0.1382638 3 infeasible non-linear constraint neale-miller-1997 0.13495746 0.1670206 4 infeasible non-linear constraint neale-miller-1997 0.17279164 0.1425522 5 infeasible non-linear constraint neale-miller-1997 0.18255742 0.1382638 6 infeasible non-linear constraint neale-miller-1997 0.05779659 0.2041213 2) for the A component case: confidence intervals: lbound estimate ubound note MZ.StdVarComp[1,1] NA 4.752728e-19 0.184599 !!! MZ.StdVarComp[2,1] 0.0000000 0.000000e+00 0.000000 !!! MZ.StdVarComp[3,1] 0.8153433 1.000000e+00 1.000000 CI details: parameter side value fit diagnostic 1 MZ.StdVarComp[1,1] lower 2.842761e-41 -252.6202 alpha level not reached 2 MZ.StdVarComp[1,1] upper 1.845990e-01 -248.7728 success 3 MZ.StdVarComp[2,1] lower 0.000000e+00 -248.7756 success 4 MZ.StdVarComp[2,1] upper 0.000000e+00 -248.7447 success 5 MZ.StdVarComp[3,1] lower 8.153433e-01 -248.7712 success 6 MZ.StdVarComp[3,1] upper 1.000000e+00 -248.8087 success statusCode method a e 1 infeasible non-linear constraint neale-miller-1997 -5.366488e-22 0.10065144 2 infeasible non-linear constraint neale-miller-1997 -4.407308e-02 0.09262844 3 infeasible non-linear constraint neale-miller-1997 -2.513257e-02 0.10927582 4 infeasible non-linear constraint neale-miller-1997 -2.463694e-02 0.10243448 5 infeasible non-linear constraint neale-miller-1997 -4.407965e-02 0.09262449 6 infeasible non-linear constraint neale-miller-1997 -3.161669e-10 0.09930529 Offline Joined: 12/12/2012 - 12:15 CI interpretation > 1) (for C lower bound NaN) If the upper bound is zero then you can probably just regard the lower bound as zero. The algorithm is very particular and wants to find the correct amount of misfit, but the model is already backed up into a corner and the optimizer gets stuck. > 2) for the A component case: This is similar to the first case. Here, the optimizer got closer the target fit of -248.77, but didn't quite make it because the parameters got cornered again. 2.842761e-41 can be regarded as zero. It looks like you're using the ACE model that does not allow variance components to go negative. This model is miscalibrated and will result in biased intervals. For better inference, you should use the model that allows the variance components to go negative. You can truncate the intervals at the [0,1] interpretable region for reporting. Offline Joined: 11/19/2018 - 19:32 Got it. That makes a lot of Got it. That makes a lot of sense. I guess I can go with that in this case. I really appreciate the clarification and suggestion. Is that normal practice to go ahead and just turn NaN values to zero (in that circumstance), etc.? Otherwise, I can just list that as being what was done in this circumstance (unless there are other workarounds). Do you have any references on setting it so the variance components are allowed to go negative? I am not too familiar with this, and have seen some posts, but am not entirely sure where would be the best place to look. 
Offline Joined: 01/24/2014 - 12:15 reference Got it. That makes a lot of sense. I guess I can go with that in this case. I really appreciate the clarification and suggestion. Is that normal practice to go ahead and just turn NaN values to zero (in that circumstance), etc.? Otherwise, I can just list that as being what was done in this circumstance (unless there are other workarounds). If a model fixes the shared-environmental component to zero, then under that model, the lower confidence limit for the shared-environmental component is trivially zero (as is the upper confidence limit). Do you have any references on setting it so the variance components are allowed to go negative? I am not too familiar with this, and have seen some posts, but am not entirely sure where would be the best place to look. See here. Offline Joined: 03/01/2013 - 14:09 Verhulst & Neale Hi There's a paper in Behavior Genetics about estimating unbounded A C and E variance components, instead of the usual implicitly-bounded path coefficient specification, which constrains variance components to be non-negative. It's here: https://pubmed.ncbi.nlm.nih.gov/30569348/ Offline Joined: 11/19/2018 - 19:32 Thank you both (and I Thank you both (and I apologize for the delay in response). That definitely makes sense. I still need to read the paper more carefully, but I understand the need / benefits for it. Ideally I would like to use it given its benefits, so I am trying to figure out what best way to incorporate it may be. I usually have been using path based implementations of the univariate ACE, and from what I've seen the direct variance approach can either be done in umx or there is a script which is more non-path based (which would require more familiarity, though both do). As for umx (I am new to this), would it be sufficient to use the umxACEv function, along with the umxConfint function (to get the standardized estimates + standardized CIs)? Or is there some other pointer to look at? That's what I've gathered from one of the tutorials and also the documentation of it. And if I am to go with this other (direct variance approach) approach, in order to limit the range of the bounds, would I just do that after obtaining the estimates and associated CIs? And, lastly, it is okay to just approximate the bounds which are very close to 0 but the optimizer fails (like was said above, outside of the triviality of it already being constrained to 0)? So for example the A estimate? I really appreciate all the great amount of help you all have given! Offline Joined: 11/19/2018 - 19:32 So, I figured out most of So, I figured out most of these answers. One question that is coming up, is I am using umx (just for convenience at this point), and I run into an issue as in this thread: https://openmx.ssri.psu.edu/node/4593 for at least one of the estimates. Is it possible to adjust either using umxModify or umxACE the lower bound for the C parameter as in that post? Or is that strictly limited to doing it directly without umx (because it cannot be adjusted with umx as that's not a feature)? I know there is a umx_scale() function, which maybe can help (to a degree with this)--on that note, though, does that not affect the ultimate result and bias it in ways? I appreciate it. If not, I hope there is a way for umx to skip the estimate instead of crashing, but I am not sure that is feasible, also. 
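A minimal sketch of the umxACEv/umxConfint sequence being discussed here, assuming wide-format twin data frames named mzData and dzData, a phenotype called "pheno", and a "_T" twin suffix; those names, and some argument names, are illustrative and may differ across umx versions:

```r
library(umx)

# Direct-variance ACE model (variance components free to go negative), fit to
# wide MZ and DZ twin data. Data object and variable names are placeholders.
m1 <- umxACEv(selDVs = "pheno", sep = "_T", mzData = mzData, dzData = dzData)

# Request likelihood-based confidence intervals on the model's parameters
# (depending on the umx version, parm = "all" may be needed), then inspect the
# detailed CI table discussed in this thread.
m1 <- umxConfint(m1, run = TRUE)
summary(m1, verbose = TRUE)
```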
Offline Joined: 11/19/2018 - 19:32 So it looks like this So it looks like this crashing issue ends up being an issue a lot when it comes to submodels (AE/CE). For now, since I am not sure how that would be addressed consistently using the direct variance approach (without biasing it in some sense), I will just stick with the older approach that is biased since it doesn't throw any errors. But I would definitely be curious if there is a consistent / fair work-around for the above which still is using the direct variance approach based on any of your advice, since that would ultimately be unbiased relative the older approach which would be ideal. If anyone has any advice, I appreciate it as always! Offline Joined: 05/24/2012 - 00:35 crashing? In the thread you reference, the problem is starting values. If you use a large enough error variance then you should be able to get the model to optimize. In contrast to the referenced thread, I would not add lower or upper bounds since the whole point of the variance parameterization is to not have these bounds. When you report the confidence interval, you can apply the bounds. For example, if the A variance is estimated as -.2 to .2 then you can report the interval [0,.2]. Offline Joined: 11/19/2018 - 19:32 Hi jpritikin, thank you for Hi jpritikin, thank you for the response! That is helpful (as to how to apply the bounds after the fact). Hmm, if I am using umx, I am not sure of how to get large enough error variance. By error, do you mean the environmental variance in this case (or just in general the variance that needs to be large for the MZ/DZ twins)? Or would this be something to explore by multiplying the actual values of the data by 100 or so (and would that be okay to do with no other alterations, or would that cause issues elsewhere if it's not renormalized, so to speak)? I am not sure if the freeToStart, or value variables of umxModify are relevant in this instance (with the freeToStart parameter, for example, or tryHard which didn't seem to work too well). Also maybe xmuValues or xmu_starts would be relevant in this case? For reference, here are two errors I am explicitly getting: Error: The job for model 'AE' exited abnormally with the error message: fit is not finite (The continuous part of the model implied covariance (loc2) is not positive definite in data 'MZ.data' row 20. Detail: covariance = matrix(c( # 2x2 0.0132678091792278, -0.0151264409931152 , -0.0151264409931152, 0.0132678091792278), byrow=TRUE, nrow=2, ncol=2) ) and Error: The job for model 'CE' exited abnormally with the error message: fit is not finite (The continuous part of the model implied covariance (loc2) is not positive definite in data 'DZ.data' row 53. Detail: covariance = matrix(c( # 2x2 0.142828309693162, 0.18385055831094 , 0.18385055831094, 0.142828309693162), byrow=TRUE, nrow=2, ncol=2) ) Offline Joined: 11/19/2018 - 19:32 I just used the umxModify I just used the umxModify function to set the starting value for E (just as you would for any fixed parameter, before running, and it seems to work for CE (I will try also for AE). If that sounds right to you, let me know. Otherwise, for now it sounds about right to me. Thanks so much!! Offline Joined: 05/24/2012 - 00:35 starting values Yeah, that sounds like the correct approach. Based on what you wrote, I'm not sure that you realize that umx models are OpenMx models. If you do umxACEv(..., autoRun=FALSE) then you get the OpenMx model which you can adjust before running. 
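A sketch of that autoRun = FALSE workflow, reusing the placeholder data names from the sketch above; the "E_r1c1" label follows the parameter naming that appears later in this thread, and the 0.8 start value is only an example:

```r
library(umx)

# Build the direct-variance model without optimizing it yet.
m0 <- umxACEv(selDVs = "pheno", sep = "_T", mzData = mzData, dzData = dzData,
              autoRun = FALSE)

# umx models are ordinary OpenMx models, so they can be adjusted before running,
# e.g. giving the E (error/environmental) variance a larger start value.
m0 <- umxSetParameters(m0, labels = "E_r1c1", values = 0.8)

m1 <- mxRun(m0)
```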
Offline Joined: 11/19/2018 - 19:32 That's actually exactly what That's actually exactly what I did (so I had two lines one with FALSE and the other with not) :) but I am sure my understanding could always be better (umx is still quite new to me). One question that came to mind is, does it matter if one identifies the regex parameter for any given component of interest as A_r.c. vs A_r1c1? I saw this post https://openmx.ssri.psu.edu/node/4229 where it's used to drop the entire free parameter set, but I am not sure if it makes any technical difference. Offline Joined: 05/24/2012 - 00:35 regex Regex matching is just for ease of use. You can always list out all the parameters one-by-one. Offline Joined: 11/19/2018 - 19:32 I see. I was actually I see. I was actually wondering if there is any difference between the notation for removing all of C (C_r.c.) versus just one part of the matrix (C_r1c1), but I looked into it and I guess that since it is symmetric it is okay (is my guess)? Also, in the path version of OpenMx the starting values I did by taking, say V = sqrt(phenotypic variance across all twin pairs)/3. Should this equivalently be set for all the parameters A, C and E in umx/is it possible/needed, with the direct variance approach? Right now I have only directly set the E parameter as needed for the subnested models for AE/CE to 0.5 in umxmodify, but am wondering if there is a more systematic approach (i.e. should the 0.5 just be replaced with the V listed above)? Since I know the start value can affect the ultimate CI bounds at the very least. And, can this be set prior to running ACEv? I am not sure if the xmu_start_value_list() function is relevant. Finally, is there a way to suppress warnings, or the attempt to print to browser for umx? Offline Joined: 11/19/2018 - 19:32 quick follow-up My mistake, from here it looks like since it is concerned directly with variance the square root shouldn't be taken, right? https://github.com/tbates/umx/issues/38 But does this mean that it's already included directly in umx internally? If not, is there a rule of thumb then for making E large or small (when it comes to doing the subnested models), or any starting values in general when doing ACEv? And the warning suppression + browser suppression may still be helpful (since it prints out browser related information even if I have options(Browser = 'NULL'). Offline Joined: 05/24/2012 - 00:35 starting values Regarding starting values, the critical thing is to start with a larger enough error/environmental variance. If your error variance is too small then residuals will get ridiculously large Z scores which can cause optimization failure. Offline Joined: 11/19/2018 - 19:32 Got it, I see. So just Got it, I see. So just arbitrarily set it high I am guessing (when it comes to say, umxModify since that's where the issue comes up). The only other question I have related to this is can this be done prior to running umxACEv/is it needed, or should it only be done using umxModify() when it comes to the submodels (using regex, etc. for example)? Thanks so much for the quick responses! And I will try and submit a bug within the week for sure regarding the output issue. Offline Joined: 05/24/2012 - 00:35 warnings etc If you can't figure out how to suppress umx output then you might want to file a bug in the umx github. Offline Joined: 07/31/2009 - 14:25 umxModify, umx_set_silent, umx_set_auto_plot, umxSetParameters > Is there any difference in umxModify between regex = "C_r.c." 
and update="C_r1c1" As the matrix is symmetric these are equivalent. It's always easy to check what you've done with m1$top$C, or parameters(m1) , or, for path-based models, try tmx_show(m1) - it shows all the matrix properties in nice browser tables with roll-overs for properties. > Do I need to set start values in umx models? No - umx takes care this for you. But if you want to, you can set them directly. They are just parameters, so just set them: for instance if you wondered about sensitivity to the start value for C, just set the C values quite high to start, e.g. see what the parameters are with parameters(m1), and set with, e.g. umxSetParameters(m1, "C_r1c1", values=1) > Finally, is there a way to suppress warnings, or the attempt to print to browser for umx? Yes: umx_set_silent(TRUE) umx_set_auto_plot(FALSE) Offline Joined: 11/19/2018 - 19:32 Thanks so much for the Thanks so much for the response! I just figured out about the auto_plot before I saw this, that is exactly what one of them was (and I had umx_set_silent prior too). The only thing that is left at the moment seems to be as a result of the xmu functions (as I found online to match what I am getting). One of them isn't a warning, but the other is. I am wondering if it might be possible to disable these kinds of messages since I am looking into quite a few phenotypes. The specific popups are from: xmu_show_fit_or_comparison which automatically outputs the log likelihood estimate (this isn't as major but everything adds into computation in terms of print out), and more apparently from xmu_check_variance: Polite note: Variance of variable(s) '' and '' is < 0.1. You might want to express the variable in smaller units, e.g. multiply to use cm instead of metres. Alternatively umx_scale() for data already in long-format, or umx_scale_wide_twin_data for wide data might be useful. Given that the phenotypes I am working with are already in their native form, I see this note a lot, and am not sure if it could be suppressed. And thanks a lot for those clarifications--that all makes plenty of sense. Offline Joined: 01/24/2014 - 12:15 I partially disagree In contrast to the referenced thread, I would not add lower or upper bounds since the whole point of the variance parameterization is to not have these bounds. I partially disagree, in that it's not a bad idea to use a bound to ensure that the E variance is strictly positive. For example, if the A variance is estimated as -.2 to .2 then you can report the interval [0,.2]. Won't that cause CIs to have smaller coverage probability than they're supposed to? Offline Joined: 05/24/2012 - 00:35 coverage probability > > For example, if the A variance is estimated as -.2 to .2 then you can report the interval [0,.2]. > > Won't that cause CIs to have smaller coverage probability than they're supposed to? No because variance proportions are proportions. The true values are always between 0 and 1. Or you could regard values outside of 0 and 1 as rejections of the model. For example, if DZ twins are more correlated than MZ twins then there is something else going on besides genetic effects. Hence, it is inappropriate to use the classical ACE model to analyze such data. Offline Joined: 11/19/2018 - 19:32 Lower/Upper bound for E with direct variance approach Is it normal to get NaN values (instead of 1) for the upper/lower bound of the E model? This comes up frequently, with Code 3. I am not sure if this is the same case as mentioned by AdminRobK where C is trivially 0, etc. 
I am guessing this is the case, though and can be safely regarded as 1 (upper or lower) with corresponding log likelihood that is produced? Offline Joined: 11/19/2018 - 19:32 Example As an example: lbound estimate ubound lbound Code ubound Code top.A_std[1,1] NA 0 0 NA 3 top.C_std[1,1] NA 0 0 NA 3 top.E_std[1,1] 1 1 NA 3 NA Offline Joined: 11/19/2018 - 19:32 Phenotypes where E lower bound gets code 3 in alternative models And, one last question hopefully (outside of the E bound question): I get very few code 3 NaNs for the lower bound of the E estimate in any model in general (AE, etc.) of the ones I select for. Sometimes this is fixable by a change in the E starting value (a bit higher than what I had already set and not too high in certain cases, though this doesn't always work), and sometimes it is fixable by changing the seed. Are these alterations okay to do in this circumstance, even though it's not necessarily consistent with the rest of what I would be using for the rest of the phenotypes? I definitely appreciate it! Offline Joined: 11/19/2018 - 19:32 CSOLNP optimizer I also had type 6 error codes in a few cases (absent upper bound of an A estimate, for example), and along with the type 3 error codes, I could fix these by switching the optimizer to CSOLNP. I guess, again, going to the other question, would this be acceptable even if the vast majority of estimates involve SLSQP as the primary optimizer (it's a bit inconsistent across phenotypes)? The packages are really nice :) Offline Joined: 01/24/2014 - 12:15 confidence intervals Is it normal to get NaN values (instead of 1) for the upper/lower bound of the E model? This comes up frequently, with Code 3. I am not sure if this is the same case as mentioned by AdminRobK where C is trivially 0, etc. I am guessing this is the case, though and can be safely regarded as 1 (upper or lower) with corresponding log likelihood that is produced? Yes, in an E-only model, the upper and lower limits of the confidence interval for the standardized E variance component are trivially 1 (because the standardized E component is fixed to 1 under that model). And, one last question hopefully (outside of the E bound question): I get very few code 3 NaNs for the lower bound of the E estimate in any model in general (AE, etc.) of the ones I select for. Sometimes this is fixable by a change in the E starting value (a bit higher than what I had already set and not too high in certain cases, though this doesn't always work), and sometimes it is fixable by changing the seed. Are these alterations okay to do in this circumstance, even though it's not necessarily consistent with the rest of what I would be using for the rest of the phenotypes? I definitely appreciate it! I also had type 6 error codes in a few cases (absent upper bound of an A estimate, for example), and along with the type 3 error codes, I could fix these by switching the optimizer to CSOLNP. I guess, again, going to the other question, would this be acceptable even if the vast majority of estimates involve SLSQP as the primary optimizer (it's a bit inconsistent across phenotypes)? If you're getting different results for your CIs by changing the start values, the RNG seed, and/or the optimizer, then I'm concerned that you're also getting a different solution in the primary optimization (i.e., to find the MLE). 
The fact that changing the start values apparently affects your CI results is especially concerning, since every confidence-limit search begins at the MLE and not at the initial start values. Have you checked whether or not you're getting substantially equivalent point estimates, standard errors, and -2logL each time you try? You might want to first run your MxModel with intervals=FALSE to get a good initial solution, and then use omxRunCI() to subsequently get confidence intervals. Offline Joined: 11/19/2018 - 19:32 Thanks a lot for the reply. From what I can tell, it looks like the point estimates are stable (I will look into this more, though). The - 2logL seems consistent as well. The only thing that seems to change is, for example, when changing the E start value, I will get a different lower bound CI (not upper bound, for example) in the cases in which this did not succeed for say, the AE model. Here is an example where it doesn't succeed for the upper bound of the ACE model (info column is -2logLL top row, and AIC second row). This is with no change in the starting value. lbound estimate ubound note info top.A_std[1,1] -1.062900 -0.245001 NaN !!! 139.933203 top.C_std[1,1] -0.182275 0.492349 1.059306 -271.866406 top.E_std[1,1] 0.489673 0.752652 1.078756 0.000000 This is the case when I change the optimizer to CSOLNP lbound estimate ubound note info top.A_std[1,1] -1.064765 -0.245001 0.571626 139.933203 top.C_std[1,1] -0.186698 0.492349 1.060668 -271.866406 top.E_std[1,1] 0.489466 0.752652 1.078679 0.000000 When I look into the AE model with the error and change the starting value for E (from 0.5 to 0.8) for an estimate I get this: lbound estimate ubound note info top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281 top.E_std[1,1] NaN 0.569847 0.820538 !!! 0.000000 to this: lbound estimate ubound note info top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281 top.E_std[1,1] 0.393639 0.569847 0.820538 0.000000 and if I change the above (same AE model case) to the CSOLNP optimizer (instead of the starting value to 0.8, so keep that at 0.5), I get: lbound estimate ubound note info top.A_std[1,1] 0.178853 0.430153 0.624640 36.693141 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281 top.E_std[1,1] 0.375360 0.569847 0.821147 0.000000 Offline Joined: 11/19/2018 - 19:32 top row - log likelihood My mistake, small correction: top row/last column is just the log likelihood. Offline Joined: 11/19/2018 - 19:32 umx equivalent Outside of the rest of the examples I posted, etc. is there an equivalent option for what you are suggesting specifically in umx? All I know of is the addCI parameter when running the ACE model / variants, so I am not sure how that method translates from OpenMx to the umx interface, etc. So far I haven't changed anything in that respect. The optimizer seems to fail for very few cases, specifically with respect to the Cis (usually it's just one bound of one parameter estimate that it has difficulty with). But out of the phenotypes I am checking, it's relatively few where this occurs (though ideally there would be none). Thanks a lot for all of the help--I really appreciate it. 
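In plain OpenMx terms (rather than through the umx interface asked about here), the earlier suggestion amounts to roughly the following, where `model` stands for any MxModel that already has mxCI() requests attached:

```r
library(OpenMx)

fit   <- mxRun(model, intervals = FALSE)  # primary optimization only: MLE, SEs, -2logL
fitCI <- omxRunCI(fit)                    # then profile-likelihood CIs, starting from the MLE
summary(fitCI, verbose = TRUE)            # 'CI details' shows diagnostics for each bound
```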
Offline Joined: 11/19/2018 - 19:32 Another phenotype example if perturbed For another estimate (which didn't have any errors), these are the following differences it has if I change the optimizer/starting E value/seed (below with a hashtag): lbound estimate ubound note info top.A_std[1,1] 0.063975 0.342624 0.565213 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434787 0.657376 0.936025 0.000000 # E_start = 0.5, CSOLNP lbound estimate ubound note info top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000 # E_start = 0.5, default optimizer lbound estimate ubound note info top.A_std[1,1] 0.063897 0.342624 0.565175 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434825 0.657376 0.936082 0.000000 # E_start = 0.8, default optimizer lbound estimate ubound note info top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000 # E_Start = 0.5, default optimizer, different seed Offline Joined: 11/19/2018 - 19:32 More comprehensive output Here is a more comprehensive version (with standard errors). It does look like it's relatively pretty consistent even with SEs (if I am not mistaken). Here is an example of an estimate where there is a CI bound NaN error. # seed = first, optimizer = SLSQP, E = 0.5 free parameters: name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.60409619 0.018456489 2 A_r1c1 top.A 1 1 0.01609547 0.005227711 3 E_r1c1 top.E 1 1 0.02132252 0.004326436 lbound estimate ubound note info top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281 top.E_std[1,1] NaN 0.569847 0.820538 !!! 0.000000 # seed = first, optimizer = SLSQP, E = 0.8 name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.60409620 0.018456490 2 A_r1c1 top.A 1 1 0.01609547 0.005227712 3 E_r1c1 top.E 1 1 0.02132252 0.004326436 lbound estimate ubound note info top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281 top.E_std[1,1] 0.393639 0.569847 0.820538 0.000000 # seed = second, optimizer = SLSQP, E = 0.5 free parameters: name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.60409619 0.018456489 2 A_r1c1 top.A 1 1 0.01609547 0.005227711 3 E_r1c1 top.E 1 1 0.02132252 0.004326436 lbound estimate ubound note info top.A_std[1,1] 0.179462 0.430153 0.625016 36.693141 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281 top.E_std[1,1] 0.375239 0.569847 0.820538 0.000000 # seed = first, optimizer = CSOLNP, E = 0.5 free parameters: name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.60409614 0.018456474 2 A_r1c1 top.A 1 1 0.01609544 0.005227694 3 E_r1c1 top.E 1 1 0.02132248 0.004326420 lbound estimate ubound note info top.A_std[1,1] 0.179462 0.430153 0.624903 36.693141 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! 
-67.386281 top.E_std[1,1] 0.375201 0.569847 0.820538 0.000000 Here is an example of the working case: # seed = first, optimizer = SLSQP, E = 0.5 ree parameters: name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.42486799 0.016192053 2 A_r1c1 top.A 1 1 0.01035623 0.004402426 3 E_r1c1 top.E 1 1 0.01986998 0.004057314 lbound estimate ubound note info top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000 # seed = first, optimizer = SLSQP, E = 0.8 free parameters: name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.42486797 0.016192049 2 A_r1c1 top.A 1 1 0.01035622 0.004402423 3 E_r1c1 top.E 1 1 0.01986997 0.004057314 lbound estimate ubound note info top.A_std[1,1] 0.063897 0.342624 0.565175 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434825 0.657376 0.936082 0.000000 # seed = second, optimizer = SLSQP, E = 0.5 name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.42486799 0.016192053 2 A_r1c1 top.A 1 1 0.01035623 0.004402426 3 E_r1c1 top.E 1 1 0.01986998 0.004057314 lbound estimate ubound note info top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000 # seed = first, optimizer = CSOLNP, E = 0.5 name matrix row col Estimate Std.Error A 1 expMean_var1 top.expMean means var1 0.42486797 0.016192034 2 A_r1c1 top.A 1 1 0.01035620 0.004402407 3 E_r1c1 top.E 1 1 0.01986994 0.004057299 lbound estimate ubound note info top.A_std[1,1] 0.063819 0.342624 0.565174 50.340513 top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026 top.E_std[1,1] 0.434826 0.657376 0.936118 0.000000 Based on this, should there be any concerns as to switching the optimizer, or maybe an idea as to what really is going on in the few cases in which one of the CI bounds are not calculated? Offline Joined: 01/24/2014 - 12:15 One miscellaneous hint I One miscellaneous hint I forgot to mention yesterday: if you're using SLSQP, try setting mxOption "Feasibility tolerance" to a smaller value than its on-load default (say, 1e-3 or 1e-4). That might help, at least as of OpenMx v2.19. What is your mxVersion() output, come to think of it? Edit: Never mind. I just saw your mxVersion() output upthread. Based on this, should there be any concerns as to switching the optimizer, or maybe an idea as to what really is going on in the few cases in which one of the CI bounds are not calculated? I do not think there are any concerns relating to switching the optimizer or changing the RNG seed. Neither of those things is considerably changing the point estimates or standard errors, right? But, from what you've included in your posts, I can't really tell what's going on when a confidence limit is reported as NaN. I would need to see at least the first seven columns of the 'CI details' table (which prints when you use summary() with argument verbose=TRUE). I would also need the -2logL at the MLE (which you seem to have intended to include in your posts?). The information in your posts is not very easy to read, either. The tables would be easier to read if they displayed in a fixed-width font, which can be done with Markdown or with HTML tags. Outside of the rest of the examples I posted, etc. is there an equivalent option for what you are suggesting specifically in umx? I don't know. Sorry. 
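For reference, the two adjustments mentioned in this exchange can be set globally before fitting; the option keys are the ones named in the thread, and whether umx overrides them is exactly the question raised in the next post:

```r
library(OpenMx)

# Tighten SLSQP's feasibility tolerance for the confidence-limit searches.
mxOption(key = "Feasibility tolerance", value = 1e-4)

# Or switch the optimizer used for the whole job.
mxOption(key = "Default optimizer", value = "CSOLNP")
```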
Offline Joined: 11/19/2018 - 19:32 That sounds promising, and it That sounds promising, and it does work with the alternate optimizer. I have tried changing the tolerance using mxOptions() but it doesn't look like that makes a difference (setting it as such mxOption(key="Feasibility tolerance", value=1e-3)), unless umx might be overriding this, etc (which I wouldn't think it would, but I don't know). I've never really typed in html, takes more time, but yeah I agree it looks much nicer (and I personally also didn't like how it was saving prior). confidence intervals: lbound estimate ubound note top.A_std[1,1] 0.1794616 0.4301532 0.6244231 top.C_std[1,1] 0.0000000 0.0000000 0.0000000 !!! top.E_std[1,1] NA 0.5698468 0.8205384 !!! CI Details: parameter side value fit diagnostic 1 top.A_std[1,1] lower 0.1794616 -69.54434 success 2 top.A_std[1,1] upper 0.6244231 -69.54440 success 1 top.C_std[1,1] lower 0.0000000 -69.54471 success 1 top.C_std[1,1] upper 0.0000000 -69.54464 success 1 top.E_std[1,1] lower 0.4012878 -69.82063 alpha level not reached 1 top.E_std[1,1] upper 0.8205384 -69.54434 success StatusCode method expMean_var1 A_r1c1 E_r1c1 1 OK neale-miller-1997 0.6039050 0.006549955 0.02994786 2 OK neale-miller-1997 0.6042243 0.026113065 0.01570644 3 OK neale-miller-1997 0.6330415 0.010966001 0.02533962 4 OK neale-miller-1997 0.6384895 0.018864979 0.02289753 5 infeasible non-linear constraint neale-miller-1997 0.6040962 0.022071510 0.01479346 6 infeasible non-linear constraint neale-miller-1997 0.6039050 0.006549955 0.02994786 This is when it fails. Also there is this information: Model Statistics: Parameters Degrees of Freedom Fit (-2lnL units) 3 141 -73.38628 And, outside of that, it looks like the std errors are NA under the "free parameters" column, specifically when I run summary(verbose = TRUE) on the umxConfint result (which is what gives the results/CIs above). But when I run summary(verbose=TRUE) on just the model prior to umxConfint, the SEs (albeit no CIs) are stable (and can be referred to in the other post, etc.). And no worries--you've helped me a lot! Hopefully this last bit will give a bit of closure. I will say switching the optimizer fixes the issue though for those few estimates this occurs. Offline Joined: 11/19/2018 - 19:32 Optimizer for CI vs optimizer for the model itself including CI It looks like I can change the optimizer specifically for the CI or also in general for the model estimate / calculation itself, and maybe these two would give slightly different output. Is there a preference of doing either, or is there a benefit, etc. since I would only be doing this for a few phenotypes out of the majority, anyway? Offline Joined: 01/24/2014 - 12:15 confidence intervals I've never really typed in html, takes more time, but yeah I agree it looks much nicer (and I personally also didn't like how it was saving prior). OK, now I can see what's wrong with the lower confidence limit for E. The optimizer was unable to adequately worsen the -2logL. For a 95% CI, the target worsening of fit at both limits is about 3.841. For E's lower limit, the worsening was only by about 3.566, which isn't too far off. It looks like I can change the optimizer specifically for the CI or also in general for the model estimate / calculation itself, and maybe these two would give slightly different output. Is there a preference of doing either, or is there a benefit, etc. since I would only be doing this for a few phenotypes out of the majority, anyway? 
It's basically impossible to give a general recommendation about that. Do whatever seems to work best for you. Offline Joined: 11/19/2018 - 19:32 Got it--that all makes sense. Thanks for all of the help!!
In the given conformation, if $C_2$ is rotated about the $C_2-C_3$ bond anticlockwise by an angle of $120^\circ$, then the conformation obtained is [AIIMS 2004]

A) Fully eclipsed conformation
B) Partially eclipsed conformation
C) Gauche conformation
D) Staggered conformation
The superannuation industry has had phenomenal growth over the last 30 years, much of it attributable to the increasing mandatory employer contributions (now sitting at 9.5% of wages). The Rice Warner  Superannuation Market Projections Report 2019 showed that industry assets grew by nearly 10% a year in the decade to June 2019.  Although this rate is unsustainable in the long run, we did project (pre-coronavirus) a further 7% a year growth rate over the following 10 years.  That will need to be reviewed once we understand the full impact of the COVID-19 pandemic. Asset Allocation World-leading investment returns have also been a major contributor to the industry’s growth.  The major factor here is asset allocation.  Over long periods, those funds with higher allocations to growth assets do better as they milk the equity risk premium.  When the mandatory SG system began in 1992, superannuation funds mainly invested in listed (public) assets with a small allocation to property (usually via a fund manager).  However, asset consultants and funds began to see the value in private assets which provided higher potential returns through an illiquidity premium. Initially, superannuation funds began to invest in infrastructure and many large funds benefited from State government privatisations.  These private assets, including airports, toll roads and ports, have a monopoly rent which provides a regular income which grows with economic activity.  The new owners often revamped and geared the assets, which enhanced the income streams giving even stronger returns. Since then, funds have invested in other private assets such as direct property holdings, private equity, and venture capital.  The mix varies by superannuation fund, based on their own cash flows and membership demographics but the overall results have been similar over long periods as we showed in an Insight column last October. These real assets have benefited from the decline in interest rates over the last 15 years – the reduced discount rates have increased the valuations.  The relative stability of their cash flows during an extended period of economic growth has led to many funds increasingly using these assets as proxies for government bonds, but with higher returns. Liquidity Over time, some funds have used the relative price stability of unlisted assets to shift higher amounts into growth assets, even for pension accounts.   The guaranteed cash flows from SG contributions gives positive net income for the industry and provides a cash buffer and strong overall liquidity.  Until recently, many funds were concerned about how to allocate the huge cash flows, given many assets appeared to be expensive.  They did not want to leave too much in cash and bonds given the low interest rates currently on offer. Cash flows vary from fund to fund.   Many have strong brands and retain the bulk of their members in the MySuper default investment strategy.  However, most funds offer a range of investment choices and they must transfer assets immediately if a member requires a new allocation – or if they retire into a different portfolio.  Members can also opt to join a different fund and these roll-overs must be dealt with promptly.  Further, large employers could use a Successor Fund Transfer (SFT) to move their employees to or from a different fund.  All these events impact on cash flows and must be included in any liquidity stress testing. Some funds experienced liquidity issues following the GFC in 2007/8.  
There were a handful of funds which had problems due to high allocations to unlisted assets and there were also funds which invested in property funds or mortgage trusts which were frozen after their own liquidity issues (not enough cash to pay withdrawals). APRA released SPG530 in 2013 to deal with many aspects of Investment Governance, including liquidity management, which must be monitored and managed on an ongoing basis.  Each fund must set up a Liquidity Management Plan (LMP) and this must be stress-tested against likely poor outcomes.  APRA states that: An RSE licensee may also consider, for example, RSE-specific events such as large-scale redundancies within an employer-sponsor, successor fund transfers or market-driven events such as illiquidity of underlying investments as observed during the global financial crisis. Much of the retail superannuation segment has been developed with multiple products and a plethora of investment choices.  This has meant a dilution of the investment funds, giving mass but not scale.  The structure has meant that MySuper products have not held a high proportion of the RSE’s total assets, so liquidity needs were greater and accumulated over several products.  Consequently, the entry to unlisted assets was generally later and at lower levels than for industry and public offer funds. Valuing unlisted assets One claimed advantage of unlisted assets is that they are not subject to the volatile daily prices of the stock markets.  While this is true, the underlying worth of the assets should be no different.  Over time, this plays out, but, in the short term, listed prices are driven by confidence and expectations and emotions (fear and greed), as well as supply and demand for the stock, so they are often priced at a premium or discount to theoretical values. The main disadvantage of unlisted assets is that they are illiquid, and any sale is more complex and slower than an on-market transaction.  This shows up in the illiquidity premium. Where, for instance, infrastructure and property assets are held via a listed vehicle, the market prices of the listed vehicle generally stand at a discount to the value of the underlying assets. In this pandemic, all assets have fallen in value.  They will resettle once we understand the expected future earnings pattern better.  Some will go back to a similar pre-virus world; others will be permanently impaired.  Since unlisted assets are only valued periodically, they lag the immediacy of listed markets to changes in circumstances and, as a result, there could be arbitrage opportunities. Superannuation funds are required to ensure equity between different groups of members.  New members (or new contributions) should not pay more for an asset than it is worth.  The difficulty is in assessing the value of an asset in these uncertain economic times.  How do you value a convention centre when there may be no (international) conferences in the next two years?  What about airports if international travel is largely stopped until the end of 2021?  In practice, the funds will increase the frequency of valuations to ensure they are reasonably up to date as economic circumstances change.  That might mean that the initial falls of 5% to 15% on some unlisted assets could be lowered again later if future earnings are expected to be lower for longer. 
Impact of the early release of super

All liquidity plans were unsettled by the recent legislation to allow members to withdraw up to $10,000 tax free before 1 July and then again before 1 October. This Coronavirus event could lead to $25 to $50 billion of unforeseen withdrawals over the next four months (as most of the second instalment will take place early in July). The media and government have taken the view that funds should have allowed for this in their LMPs. This is harsh as the legislation is new and very little notice of payment was given. These calls are essentially berating Trustee Boards for not having provided for the sovereign risk of the Government requiring cash withdrawals well outside their own enunciated (but not yet legislated) Objective of Superannuation.

Most funds will manage, but some operate for members in industries with high levels of unemployment. As JobKeeper does not allow for any superannuation contributions, the regular cash flow has been severely disrupted. None of this could have been foreseen!

The heavily impacted funds might need to sell some assets to support their cash flows. Not only does this mean selling at depressed prices, but it reduces the scope to buy other discounted assets as they become available. Fortunately, we are not aware of any funds that will need to ask APRA for approval to cease rollovers due to illiquidity (Section 6.36 of the SIS Act).

One of the implications of the Early Release scheme is that funds will hold more money in cash (earning very little). They will be worried that this precedent could be repeated, and they need to be better prepared. Superannuation funds with a significant proportion of members in those industries most affected at the moment (such as hospitality, retail and tourism) may seek to attract more members in other areas as well as building up membership of their account-based pensions. This will provide an increased buffer against another one of these (hopefully) one-in-a-hundred-year events.

Future earnings

Over the next decade, the superannuation industry is not likely to have the same earnings pattern as enjoyed over the last 20 years. This will mean new targets – is CPI + 3% to 4% still viable over the next ten years? Perhaps it will be if CPI is negative for some of this time! If targets are reduced, that will flow into communications material, online calculators and financial advice models. It will also reduce projected future retirement benefits for members and possibly reduce confidence in the system, despite its clear value for most Australians.

It is also fair to say that forcing superannuation funds to hold more liquidity in anticipation of another unexpected Government requirement will reduce the long-term earning capacity of superannuation and eventually lead to lower tax revenue and higher Age Pension costs.
doc-src/IsarImplementation/Thy/logic.thy, changeset 20929 (cd2a6d00ec47), parent 20547 (796ae7fa1049), child 21324 (a5089fc012b5), comparing 20928:74910a189f1d with 20929:cd2a6d00ec47. The only change in this excerpt is at line 769, where `text {*` becomes `text %FIXME {*`; the surrounding lines are unchanged:

```
764  *}
765
766
767  section {* Rules \label{sec:rules} *}
768
769  text %FIXME {*
770
771  FIXME
772
773  A \emph{rule} is any Pure theorem in HHF normal form; there is a
774  separate calculus for rule composition, which is modeled after
```
# Resmack: Part 4: Grammar Mutations

Note: This post is part of a series on developing resmack. See the resmack tag for all posts in the series.

# Fuzzing and Mutations

Feedback-driven fuzzing uses a form of genetic algorithm to generate inputs that ideally cause more and more of the target program to be executed.

• Have a collection of inputs
• Create new inputs from previous inputs (mutate, crossover)
• Have a fitness function to determine the “success” of an input

The corpus of inputs used during fuzzing is the collection of inputs. New inputs are created by mutating items within the corpus. The feedback in feedback-driven fuzzing is often the fitness function.

# Feedback-driven Grammar Fuzzing

Doing feedback-driven fuzzing with grammars poses a different set of problems than purely mutation-based fuzzing. This blog post focuses on the corpus and mutation aspects of feedback-driven fuzzing.

## Questions

### What do you store in the corpus?

Storing only the generated data throws away knowledge of the structure, but not storing the generated data means you need to regenerate derivative inputs from scratch.

### How Do You Reuse Structural Knowledge of Previous Inputs?

This hinges largely on the limitations of how previous inputs are stored in the corpus. On one end of the spectrum, if you only store the generated data, you cannot use the structural knowledge about that input that existed when it was created. You may be able to store start/end offsets of the data that each rule generated, but this method falls apart when you’re working with non-static (dynamic) grammars. On the other end, if you have full knowledge about how an input was generated, you should be able to modify the decisions used during its creation to create a valid, but mutated, descendant of an input.

## Grammar-specific Corpus

For resmack, I wanted to store each item in the corpus in a way that preserved the tree structure of the decisions used when data was generated by the grammar. For example, the grammar below defines a run-on sentence. The zero-based line numbers indicate the rule indices, which will be used later:

0 run_on_sentence: sentence | sentence conjunction run_on_sentence
1 sentence: subject verb fruit_list
2 fruit_list: fruit | fruit conjunction fruit_list
3 subject: I | we | you
4 verb: eat | throw | stomp on | demolish
5 fruit: apples | bananas | pears | watermelon
6 conjunction: and | or

The sentence "I eat apples and bananas or we throw watermelon" would be represented by the tree:

To preserve the decisions used at each junction in the tree, I found it worked well to save the full state of the pseudo-random number generator (PRNG) for each rule that was generated. Implementing the same grammar as above in resmack would produce this metadata saved in the corpus: Corpus Metadata: ... num_entries: 1 entry[0]: ... num_states: 16 state[ 0]: ... rule_idx: 0 "run_on_sentence" rand_state: ebe8df8d|fa44b3ba|6be3539c|f28d3079 state[ 1]: ... rule_idx: 1 "sentence" rand_state: e3215c4e|7a4f3fab|096cf811|4c1e1846 state[ 2]: ... rule_idx: 3 "subject" rand_state: e3215c4e|7a4f3fab|096cf811|4c1e1846 state[ 3]: ... rule_idx: 4 "verb" rand_state: d5707ba3|90029bf4|7432f25f|893f69b2 state[ 4]: ... rule_idx: 2 "fruit_list" rand_state: cc4d89e5|31401208|a47561fc|ef9230c9 state[ 5]: ... rule_idx: 5 "fruit" rand_state: 129fab24|5978fa11|e81cf819|91160ef6 state[ 6]: ... rule_idx: 6 "conjunction" rand_state: daf15fc3|a3fba92c|0b77713d|77a73e43 state[ 7]: ... rule_idx: 2 "fruit_list" rand_state: 0eadc8ac|727d87d2|26d476fe|e4bb7ea2 state[ 8]: ...
rule_idx: 5 "fruit" rand_state: 0eadc8ac|727d87d2|26d476fe|e4bb7ea2 state[ 9]: ... rule_idx: 6 "conjunction" rand_state: 986b31dc|5a043980|d3761a52|37cb84b6 state[10]: ... rule_idx: 0 "run_on_sentence" rand_state: f5a48cea|1119120e|436e2b8e|7de9b36e state[11]: ... rule_idx: 1 "sentence" rand_state: 99542d8a|a7d3b56a|84eebb64|850b0367 state[12]: ... rule_idx: 3 "subject" rand_state: 99542d8a|a7d3b56a|84eebb64|850b0367 state[13]: ... rule_idx: 4 "verb" rand_state: bb8c9b87|ba692384|bad042ee|c5b06916 state[14]: ... rule_idx: 2 "fruit_list" rand_state: c455d115|bb35faed|d31bd169|ca5493fe state[15]: ... rule_idx: 5 "fruit" rand_state: c455d115|bb35faed|d31bd169|ca5493fe This is a portion of real data from my local, persisted corpus in resmack. The rule names were added for readability. Notice how there are 16 PRNG states saved to generate this corpus item - one for each non-terminal node in the graph above (all nodes above the pink ones). Saving the state of the PRNG at each rule generation results in a decision tree of sorts. Replaying the PRNG states in order at each rule generation will cause the same output to be generated. On the flip side, modifying the persisted PRNG state at any node in the tree would: 1. invalidate all descendant nodes • new decisions will be made, but we don’t know what they are yet 2. cause that portion of the generated data to change ## Mutating Grammar Decisions Suppose I wanted to change the fruit_list value in the first sentence to something other than apples and bananas: I eat apples and bananas or we throw watermelon I eat ?????????????????? or we throw watermelon This could be accomplished by changing the saved PRNG state values for that rule generation: Note The image above is for illustration purposes only. The actual resmack corpus is a single binary file, not text-based like above. Which would create the following tree: Replaying the newly mutated saved state then yields a similar sentence, except it now has a new fruit list: I eat watermelon or we throw watermelon ### Crossover The same technique can also be used to crossover multiple inputs in the corpus by swapping PRNG states for the same rules. For example, the PRNG state for a different corpus item’s fruit_list in the first sentence could overwrite the fruit_list PRNG state from another corpus item: ## Remarks ### PRNG Space Requirements Due to the random number generator that I am using (Xoshiro128**), four 32-bit integers are used to track the PRNG’s state. The number of integers required could be reduced if a different random number generator algorithm was used. ### Inefficiencies With Small Inputs It is possible that a large number of PRNG states would be required to generate a very small input. The worst case would be one PRNG state being required for 0-1 bytes in the input. For my use cases, this is an acceptable tradeoff since my primary targets will be generating much more than 16 bytes per rule.
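The following toy sketch (in R, purely for illustration: it is not resmack's implementation and uses R's built-in RNG rather than Xoshiro128**) shows the core idea with a flat, pre-chosen rule order instead of a recursive grammar walk: record the PRNG state before each rule's decision, replay the recorded states to reproduce an input exactly, and overwrite one saved state to mutate that node while everything after it is regenerated.

```r
# Toy grammar: a fixed sequence of rules, each expanded by one random draw.
rules <- list(
  subject     = c("I", "we", "you"),
  verb        = c("eat", "throw", "stomp on", "demolish"),
  fruit       = c("apples", "bananas", "pears", "watermelon"),
  conjunction = c("and", "or")
)

gen <- function(order, replay = list()) {
  states <- vector("list", length(order))
  out    <- character(length(order))
  for (i in seq_along(order)) {
    # Restoring a previously saved state forces the same decision to be re-made.
    if (i <= length(replay)) assign(".Random.seed", replay[[i]], envir = .GlobalEnv)
    states[[i]] <- get(".Random.seed", envir = .GlobalEnv)  # persist this node's PRNG state
    out[i] <- sample(rules[[order[i]]], 1)
  }
  list(text = paste(out, collapse = " "), states = states)
}

set.seed(1)  # make sure .Random.seed exists
order <- c("subject", "verb", "fruit", "conjunction", "fruit")

a <- gen(order)                      # a random five-token sentence
b <- gen(order, replay = a$states)   # replaying the saved states reproduces it exactly
stopifnot(identical(a$text, b$text))

# Mutation/crossover: keep the first two saved states, overwrite the third (a fruit
# node) with another node's state. That node's decision changes and everything after
# it is regenerated, i.e. its descendants are invalidated.
mut <- a$states[1:3]
mut[[3]] <- a$states[[5]]
m <- gen(order, replay = mut)        # same subject/verb prefix, regenerated tail
```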
“log_alpha” is a variable (I think); every time I update its data with its grad, the grad changes into None. Code as follows:

1 alpha_loss = -torch.mean(self.log_alpha * (log_pis.double() + self._target_entropy))
3 alpha_loss.backward()
5 self.log_alpha = self.log_alpha - grad * 3e-4

but if I use the line 2 code, there is no error. So I’m a little confused about the grad-clearing mechanism, and I’m not sure whether the grad keeps being accumulated.

It seems you are replacing `self.log_alpha` in your example. Have a look at these examples, where we 1) replace `x`, 2) assign to a new tensor, and 3) modify `x` inplace:

```
# 1) replace tensor
loss = torch.mean(x)
loss.backward()
> tensor([0.5000, 0.5000])
> None

# 2) assign to other tensor
loss = torch.mean(x)
loss.backward()
> tensor([0.5000, 0.5000])
> tensor([0.5000, 0.5000])
> None

# 3) inplace
loss = torch.mean(x)
loss.backward()
> tensor([0.5000, 0.5000])
```
Perspectives from Around the World

# Effects of fiscal shocks in a globalised world

Alan J AUERBACH, Professor of Economics and Law at the University of California, Berkeley (full bio: http://www.voxeu.org/person/alan-j-auerbach)

Yuriy GORODNICHENKO, Associate Professor in the Department of Economics, University of California, Berkeley (full bio: http://www.voxeu.org/person/yuriy-gorodnichenko)

The impact of fiscal policy on exchange rates is of key interest to policymakers. This column argues that unexpected government spending instantly affects exchange rates. The finding, based on daily data reported by the US Department of Defense, may suggest that unexpected government spending has broader macroeconomic effects as well. The results, however, do not hold if low-frequency data are used instead.

What are the effects of fiscal policy on aggregate economic activity in a globalised world? This is a key question in current policy and academic debates. The central challenge in this debate is how to identify fiscal shocks in the data. Previous research (e.g., Blanchard and Perotti 2002, Romer and Romer 2010) has used structural or narrative time series methods to isolate unanticipated, exogenous innovations to government spending or revenue. While these approaches have many desirable properties, they typically have been applied at quarterly or even annual frequencies. These low frequencies can limit the plausibility of identifying assumptions (e.g., the minimum delay restriction for government spending) and reduce statistical power (e.g., narrative shocks can account for only a few historical changes in fiscal variables).

## New evidence using daily data on government spending

In recent work, we address this challenge by using daily data on US government spending (Auerbach and Gorodnichenko 2015). Using daily variation does limit the scope of our investigation, for we are unable to measure the effects of shocks on slow-moving aggregate variables, like real GDP, for which comparable high-frequency data are unavailable. However, high-frequency analysis greatly enhances our ability to assess reactions of forward-looking variables such as exchange rates, asset prices, yields, etc. In previous research, analyses of how these variables react to government spending or revenue shocks were limited because it was hard to rule out reverse causality using low-frequency data. In contrast, one can be fairly certain that, on a given day, shocks to actual or contracted payments of the US government are not affected by economic news and hence causation is likely to flow from fiscal variables to forward-looking variables.

In our research, we have constructed two daily series for government defence spending. The first series is payments to defence contractors reported in the daily statements of the US Treasury. The second series is the announced volume of contracts awarded daily by the US Department of Defense. Since one series measures actual outlays while the other provides a measure of future government spending, using these two series helps us to underscore the key role of fiscal foresight for timing shocks to government spending as well as responses to these shocks.
While it is possible to construct more government spending variables at the daily frequency, we focus on military spending to minimise the possibility of reverse causality and other forms of endogeneity, and because defence procurement is such an important part of the federal budget and a major source of volatility in that budget. While interpretation of spending shocks at this high frequency may be complex - we discuss how these shocks may include 'level' (how much to spend), 'timing' (when to spend), and 'identity' (who receives government funding) components - we document that certain shocks to government spending have a non-negligible 'level' component, which is the component typically studied with data at quarterly or annual frequencies. Specifically, we show that announcements about future military spending move the index of stock prices for firms in the defence industry.

## Department of Defense contracts

Since 1994, nearly every weekday at 5 pm, the Department of Defense has announced (on http://www.defense.gov/contracts/) its new contract awards greater than $6.5 million. A typical announcement specifies the duration of the contract, the awarded amount, the name of the winner, the location of contract execution, and additional details about the nature of the contract. Each contract is assigned a unique code and is summarised by a paragraph in an announcement. The contracts tend to be of multi-year duration. The Department also makes announcements about modifications to existing contracts. To avoid mixing anticipated and unanticipated awards, we use only announcements of new contracts, that is, contracts that appear for the first time on the Department of Defense's website.

One drawback of using these data is that the Department does not provide them in a format suitable for statistical analysis. To convert this information into usable form, we have downloaded web pages with announcements from the Department's archive and parsed data from the web pages. To verify the quality of the information, we use several algorithms for parsing information from the text of announcements, have at least two people check the consistency of collected data, and randomly check the validity of information extracted from a sample of web pages by independent research assistants. Overall, the quality of the data appears to be high.

While the announcements are not immediately translated into actual disbursements, using announcements offers one key potential advantage. Standard theory predicts that unconstrained, forward-looking agents should react at the time of the news rather than when actual spending occurs. The announcements can thus provide better timing for spending shocks, as measured by the present value of contract awards.

Our series of daily totals of announced contracts shows huge variation (Panel A, Figure 1). The awarded amounts vary from $3 million to almost $25 billion, with a standard deviation of $1.2 billion and a mean of $450 million. Consistent with the view that these announcements do not simply reflect high-frequency timing, daily contract awards do not appear much smoother when aggregated to monthly frequency (Panel B, Figure 1). The time series of monthly totals of awarded contracts is characterised by low serial correlation and spikes without any discernible seasonal pattern. Furthermore, these spikes in monthly totals can be related to major military developments.
For example, we observe a surge in awarded contracts immediately after the 9/11 terrorist attack, the start of the second Iraq war in 2003, the Russo-Georgian war in 2008, and the start of Operation New Dawn. In contrast, we observe no significant movements in actual payments on defence contracts, the other series used in our analysis.

We follow our earlier work (Auerbach and Gorodnichenko 2012, 2013) and estimate the effect of government spending using direct projections as in Jorda (2005). Specifically, we construct impulse responses by running a series of regressions (see the paper for details). As in our earlier work, we extend the direct-projections approach to allow the responses to vary by the state of the economy, for example, where regimes correspond to recessions and expansions. Furthermore, given the number of observations available at a daily frequency, it is feasible to extend the approach to estimation based on a more sophisticated classification of regimes, such as recession with a binding zero lower bound (ZLB) on short-term nominal interest rates.

## Results

Panel A of Figure 2 shows the impulse response of the nominal exchange rate (the Trade Weighted US Dollar Index, Major Currencies) to a unit shock in the Department of Defense's announcements (daily log volume of awarded contracts, de-seasonalised and de-trended). At the time of the shock, the dollar appreciates by 0.0001 (that is, 0.01%). This contemporaneous response is statistically significant at the 95% level. Over time, the exchange rate appreciates further and reaches the maximum appreciation of 0.00052 after about 25 working days. Given the amount of volatility in both series (the exchange rate and the Department's announcements), the statistical significance of the point estimates is remarkable.

The direction of the response of the exchange rate is consistent with basic macroeconomic theory; in an economy with a flexible exchange rate, government spending shocks should lead to appreciation of the domestic currency. The dynamics of the appreciation are broadly in line with economic theory as well. While the exchange rate peaks with a delay, the duration of the delay is fairly short relative to previous studies where the maximum reaction was delayed by many months. Interestingly, when we aggregate data to the monthly frequency, the delay becomes more pronounced: the response peaks after six months. Therefore, the delayed responses in the previous literature may be in part due to the use of low-frequency data.

To contrast the difference between announced and actual spending, we present the daily response of the exchange rate to actual spending (daily payments to defence contractors) in Panel B of Figure 2. We find no significant response at any horizon. The pattern is similar when we estimate the response using data at the monthly frequency; if anything, the point estimates suggest that the dollar depreciates. This difference in responses to actual and announced government spending shocks can explain why previous studies using actual spending and data at low frequencies failed to find a robust link between exchange rate movements and fundamentals such as the fiscal deficit.

This article first appeared on www.VoxEU.org on May 10, 2015. Reproduced with permission.

References

• Auerbach, A, and Y Gorodnichenko (2012), "Fiscal Multipliers in Recession and Expansion," in Fiscal Policy after the Financial Crisis, A Alesina and F Giavazzi, eds., University of Chicago Press.
• Auerbach, A, and Y Gorodnichenko (2013), "Output Spillovers from Fiscal Policy," American Economic Review Papers and Proceedings 103, 141-146.
• Auerbach, A, and Y Gorodnichenko (2015), "Effects of Fiscal Shocks in a Globalized World," NBER Working Paper 21100.
• Blanchard, O, and R Perotti (2002), "An Empirical Characterization of the Dynamic Effects of Changes in Government Spending and Taxes on Output," Quarterly Journal of Economics 117(4), 1329-1368.
• Jorda, O (2005), "Estimation and Inference of Impulse Responses by Local Projections," The American Economic Review 95(1), 161-182.

June 1, 2015
• Create Account # Switching between Camera Types (TrackBall -> First Person -> etc.) Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. 3 replies to this topic ### #1MegaPixel  Members   -  Reputation: 241 Like 0Likes Like Posted 17 January 2013 - 06:58 AM Hi to everyone, I'm trying to build a generic camera class (so just one class) that with few simple operators can allow one to create any type of high level camera. I'd like to do it this way because I think that, then, switching for example between a first person and a trackball like camera will be easier. For now I've successfully implemented the usual 1st person camera by defining few simple operators on the camera class and creating orientation using quaternions. Operators/Methods: moveXCameraRelative moveYCameraRelative rotateXCameraRelative rotateYCameraRelative The thing is that I can't figure out how to switch between (say) a 1st person and a trackball without screwing everything. What I mean is flying a bit with a 1st person and then from that exact pov switch to trackball and use it and then back to 1st person transparently (like in a DCC tool). What I thought is that I should accumulate orientations etc. but I guess that my current method is not very correct because instead of accumulating orientations deltas I accumulate the angle and calculate the orientation of accumulated angles instead of defining an offset quaternion. I saw some implementation they do something like: Quaternion q(axis,delta) //the offset quaternion (relative rotation quaternion) I do something like: angle += delta; Quaternion q(axis,angle); //the absolute rotation quaternion should I use the first solution and accumulate quaternions instead of accumulate the absolute angle to have the possibility to implement the behaviour that I described before ? ### #2haegarr  Crossbones+   -  Reputation: 4587 Like 0Likes Like Posted 17 January 2013 - 08:35 AM There are several problems mentioned in the OP. My advices are: 1. You should stay away from using Euler angles if possible. 2. You should apply delta rotations and translations, but store the current placement as either a matrix or a pair of position vector and orientation quaternion. 3. You should not integrate too much responsibility into a single class. E.g. the camera class stores and grants read access to the placement of the camera as well as some simple manipulators (in extreme just a setter) for this placement, but it should not provide higher level control like a concept of 1st person and 3rd person camera. Instead, provide a basic CameraControl class and derive FPCameraControl and others from it. 4. When switching the active CameraControl, the next active control may need to alter the camera's placement to get it into a prescribed state. If you want to avoid sudden changes, then make a soft transition so that the current placement (i.e. those stored in the camera object, usually as left from the previous control) and the new required placement are interpolated over a short time, e.g. a second or so, before actually granting control to the following CameraControl object. Notice please that this can be integrated into the schema nicely: Define a TransitionalCameraControl class that is parametrized with the CameraControl that should become active. 
Let the former one do the interpolation (it asks the camera for the current placement and the given CameraControl for the required placement), and let it replace itself by the given CameraControl when done.

### #3 MegaPixel - Posted 17 January 2013 - 09:04 AM

> There are several problems mentioned in the OP. My advices are:
> 1. You should stay away from using Euler angles if possible.
> 2. You should apply delta rotations and translations, but store the current placement as either a matrix or a pair of position vector and orientation quaternion.
> 3. You should not integrate too much responsibility into a single class. E.g. the camera class stores and grants read access to the placement of the camera as well as some simple manipulators (in extreme just a setter) for this placement, but it should not provide higher level control like a concept of 1st person and 3rd person camera. Instead, provide a basic CameraControl class and derive FPCameraControl and others from it.
> 4. When switching the active CameraControl, the next active control may need to alter the camera's placement to get it into a prescribed state. If you want to avoid sudden changes, then make a soft transition so that the current placement (i.e. those stored in the camera object, usually as left from the previous control) and the new required placement are interpolated over a short time, e.g. a second or so, before actually granting control to the following CameraControl object. Notice please that this can be integrated into the schema nicely: Define a TransitionalCameraControl class that is parametrized with the CameraControl that should become active. Let the former one do the interpolation (it asks the camera for the current placement and the given CameraControl for the required placement), and let it replace itself by the given CameraControl when done.

So that means that there is no way to switch from one camera control to another without interpolating between them? I thought it was possible to just accumulate in the right order (based on the current camera control type) to be consistent with one camera control or another without sudden changes showing up.

Plus, currently I'm calculating my orientation like this and it works stably with no gimbal lock or numerical instabilities, but I do not understand why I should work with delta rotations instead of absolute angles (it works anyway). Here is a code snippet:

```cpp
// create orientation
QuatfFromAxisAngle(Vec3f(1.f, 0.f, 0.f), mPitch, &mRotX);
QuatfFromAxisAngle(Vec3f(0.f, 1.f, 0.f), mYaw,   &mRotY);
QuatfFromAxisAngle(Vec3f(0.f, 0.f, 1.f), mRoll,  &mRotZ);
QuatfMult(mRotX, mRotY, &mRotXY);
QuatfMult(mRotZ, mRotXY, &mOrientation);

// normalize quaternion
QuatfNormalize(mOrientation, &mOrientation);

// now extract the orientation part of the view matrix
Mat44fInitFromQuaternion(mOrientation, &mViewMatrix.mat44f);
mViewMatrix.mat44f.v03 = -Vec3fDot(cameraPos, mRight);
mViewMatrix.mat44f.v13 = -Vec3fDot(cameraPos, mUp);
mViewMatrix.mat44f.v23 = -Vec3fDot(cameraPos, mForward);
```

It works smoothly and perfectly (I keep Roll == 0 all the way). Pitch and Yaw are just accumulated absolute angles:

```cpp
Pitch += dPitch; // same for Yaw
```

What I'm thinking is: is it possible to make just one class that gives just very bare-bones operators to implement different camera behaviours, without having to implement a class for every camera type?
I saw some implementations showing something like rotateXCameraRelative or rotateXWorldRelative, which leads me to think they are just basic operators and there is no reference to first person or third or trackball... and the idea is that a very specific combination of them can implement, for example, a first-person behaviour, or a trackball if you use a different combination of them.

### #4 haegarr - Posted 17 January 2013 - 11:01 AM

You could of course switch camera control without interpolation. However, that may introduce abrupt changes. Think for example of switching from a 1st person camera to a 3rd person camera. The former one is obviously located much closer to the avatar (if any, but let us think so) than the latter one. Notice that both kinds of camera control have their own idea of how to locate (especially position) the camera. Switching without interpolation lets the camera position jump. If you want it that way ... no problem. For pleasure, however, it is usual to interpolate just to avoid abrupt changes. Even with the system described you have the choice of whether to interpolate or not, simply by using a TransitionCameraControl or not doing so. The TransitionCameraControl may also terminate immediately if it detects that both placements are already identical.

> I thought it was possible to just accumulate in the right order (based on the current camera control type) to be consistent with one camera control

Notice please that this would work exactly only for non-prescribed parts of the placement. As written above, e.g. the position of a 1st person and a 3rd person camera is prescribed by the control. When switching from 3rd person to 1st person, the 1st person control sees the position of the camera out of range. But this out-of-range isn't an accident, because it was left that way by the former control. Why should each and every CameraControl have the need to correct what was left by its predecessor?

> why I should work with delta rotations instead of absolute angles

Using delta rotations gives you a sequence like

    Ry(h_n) * Rx(p_n) * Ry(h_{n-1}) * Rx(p_{n-1}) * ... * Ry(h_0) * Rx(p_0)

while accumulating heading and pitching separately gives you

    Ry(h_n + h_{n-1} + ... + h_0) * Rx(p_n + p_{n-1} + ... + p_0) = Ry(h_n) * Ry(h_{n-1}) * ... * Ry(h_0) * Rx(p_n) * Rx(p_{n-1}) * ... * Rx(p_0)

which is obviously not the same in general. However, the former one is usually what one expects to get.

> which leads me to think they are just basic operators and there is no reference to first person or third or trackball ... and the idea is that a very specific combination of them can implement for example a first person behaviour or a trackball if you use a different combination of them.

That idea is close to what I meant with "the camera class ... grants ... some simple manipulators" in the previous post. However, what manipulators do you expect to need? You need a setter for sure. All others depend on the kinds of controls you want to implement. It is likely that another kind of control will need another kind of manipulator. Even if not ... even doing look-at computations and friends is not strictly a domain of the camera. Why aren't they part of the vector math package? Then all the controls can still use shared code without burdening the camera class, and tracking and path following and so on will be available for other game objects, too.
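To see the difference described in the last reply, here is a small self-contained Python sketch (deliberately not using the poster's Quatf API) that composes per-frame delta rotations and compares the result with a rotation built from accumulated absolute angles; the per-frame values are arbitrary.

```python
import math

def quat_from_axis_angle(axis, angle):
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

X, Y = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
frames = [(0.20, 0.10), (0.05, 0.30), (-0.15, 0.25)]   # (pitch, yaw) deltas per frame

# a) accumulate delta rotations: q <- Ry(dyaw) * Rx(dpitch) * q, once per frame
q = (1.0, 0.0, 0.0, 0.0)
for dpitch, dyaw in frames:
    q = quat_mul(quat_mul(quat_from_axis_angle(Y, dyaw),
                          quat_from_axis_angle(X, dpitch)), q)

# b) accumulate absolute angles and build the rotation once: Ry(sum yaw) * Rx(sum pitch)
pitch = sum(p for p, _ in frames)
yaw = sum(y for _, y in frames)
q_abs = quat_mul(quat_from_axis_angle(Y, yaw), quat_from_axis_angle(X, pitch))

print(q)       # the two orientations differ once more than one axis is involved
print(q_abs)
```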
# Improving 2D raycasting performance

## Recommended Posts

Hello everybody. I have implemented a 2D raycasting algorithm in my game engine but I must be doing something very wrong, since my game is pretty much unplayable if I cast around 10 rays each frame (it runs fine at around 5-6 casts per frame). I tried profiling but, as you can see from the two pictures, I ended up on STL territory, which tells me that the performance issues arise from my approach to the problem.

I'm using the Sweep-and-Prune algorithm for my broadphase detection, which divides my entire map into sections, and colliders inside a section are only checked for collisions with other colliders from that section. This approach is used in my raycasting algorithm as well. Initially, I get a list of all colliders inside the raycast's radius and then I check the ray against every collider in that list. As far as I understand it the problem lies not in the Ray vs. Shape checks, but in the preparation work before that. Any tips or guidelines on how to correct my approach, or specifics of the algorithms, are most welcome.

**Raycast**

```cpp
bool MCollisionEngine::Raycast(const glm::vec2& Origin, const glm::vec2& Direction, float MaxDistance, std::string LayerMask, bool bHitTriggers) const
{
    for (PCollider* const Collider : PossibleColliders)
    {
        const PCollisionLayer& ColliderLayer = Layers.at(Collider->LayerKey);
        {
            SShape* Shape = Collider->Shape.get();
            switch (Shape->GetType())
            {
            case EShapeType::AABB:
                if (RayIntersectsAABB(Origin, Direction, MaxDistance, static_cast<SAABB*>(Shape)))
                {
                    return true;
                }
                break;
            case EShapeType::Circle:
                if (RayIntersectsCircle(Origin, Direction, MaxDistance, static_cast<SCircle*>(Shape)))
                {
                    return true;
                }
                break;
            case EShapeType::OBB:
                if (RayIntersectsOBB(Origin, Direction, MaxDistance, static_cast<SOBB*>(Shape)))
                {
                    return true;
                }
                break;
            }
        }
    }
    return false;
}

bool MCollisionEngine::BroadCircleToOBB(const SCircle& Circle, const SOBB* const OBB)
{
    const std::vector<glm::vec2>& Normals = OBB->GetNormals();
    for (int i = 0; i < 2; i++)
    {
        const glm::vec2& Normal = Normals[i];
        Line CircleProj = Circle.ProjectInto(Normal);
        Line OBBProj = OBB->ProjectInto(Normal);
        if (glm::length(CircleProj.CalculatePenetrationVector(OBBProj)) < Utils::EPSILON)
        {
            return false;
        }
    }
    return true;
}
```

**SOBB::GetSupportPoint**

```cpp
glm::vec2 SOBB::GetSupportPoint(const glm::vec2& Direction) const
{
    const std::vector<glm::vec2>& Vertices = GetVertices();

    float BestProjection = -std::numeric_limits<float>::max();
    glm::vec2 BestVertex;

    for (const glm::vec2& Vertex : Vertices)
    {
        float Projection = glm::dot(Vertex, Direction);
        if (Projection > BestProjection)
        {
            BestProjection = Projection;
            BestVertex = Vertex;
        }
    }

    return BestVertex + Position;
}
```

Edited by Kercyn

---

You should click on that button top right of the profiling results that says something like "generate detailed report". Then you can click around the hot path and see which lines of code are hot. Anyways, just from the call stack it looks like your function GetCollidersInRadius is super slow.

Edit: Also because you keep using "const auto&" I have no clue what your code is doing. It's completely unreadable for me personally.

Edited by Randy Gaul

---

16 hours ago, Randy Gaul said:

> You should click on that button top right of the profiling results that says something like "generate detailed report".
> Then you can click around the hot path and see which lines of code are hot. Anyways, just from the call stack it looks like your function GetCollidersInRadius is super slow.
>
> Edit: Also because you keep using "const auto&" I have no clue what your code is doing. It's completely unreadable for me personally.

Thanks for the reply. It's too late right now to check your suggestion so I'll do it tomorrow. For the const auto& issue, I replaced all autos in the first post with their actual type. I'm also including the headers used in the snippets above, just in case.

**PCollisionLayer**

```cpp
class PCollisionLayer final
{
public:
    PCollisionLayer() = default;
    ~PCollisionLayer() = default;

    //! The layer's index in the collision mask.
    std::size_t Index;
};
```

**Line**

```cpp
class Line final
{
public:
    Line() = default;
    Line(glm::vec2 a_, glm::vec2 b_);
    ~Line() = default;

    glm::vec2 a;
    glm::vec2 b;

    //! Calculates the penetration vector between this and Other.
    // http://www.metanetsoftware.com/technique/tutorialA.html
    glm::vec2 CalculatePenetrationVector(const Line& Other) const;

    Line ProjectInto(const glm::vec2& Axis) const;
    bool IsPointInside(const glm::vec2& Point) const;
    float GetLength() const;

    bool operator<(const Line& lhs) const;
    bool operator>(const Line& lhs) const;
    bool operator<=(const Line& lhs) const;
    bool operator>=(const Line& lhs) const;
    bool operator==(const Line& lhs) const;
    bool operator!=(const Line& lhs) const;
};
```

EDIT: I tried what you suggested but unfortunately all I got was a fancier way of viewing what already existed in the call stack table. My main concern is that my entire approach is wrong, since the "hottest" line ends up being a range-based for.

Edited by Kercyn

---

On 7/6/2017 at 4:12 PM, Kercyn said:

> EDIT: I tried what you suggested but unfortunately all I got was a fancier way of viewing what already existed in the call stack table. My main concern is that my entire approach is wrong, since the "hottest" line ends up being a range-based for.

Try posting that image so others can see the hot path in your code, if you still wanted advice on this stuff.

---

I was struggling with this for a couple of days, but I finally decided to replace my algorithm with a much simpler one that runs way more efficiently. Thanks for the guidelines!

---

On 7/15/2017 at 5:22 AM, Kercyn said:

> I was struggling with this for a couple of days, but I finally decided to replace my algorithm with a much simpler one that runs way more efficiently. Thanks for the guidelines!

---

14 hours ago, Andrew Feng said:

You can find the source code here. What I'm doing is applying a reverse-rotation to the ray, so that I can treat the problem as a ray vs AABB collision. Then I'm implementing the algorithm described here. It may not be the fastest or most efficient implementation, but for my current needs, it's "good enough".

Edited by Kercyn
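For reference, the ray-vs-AABB "slab" test mentioned in the last reply can be written in a few lines. This is a generic 2D sketch in Python rather than the engine's RayIntersectsAABB; it assumes the box is given by its min/max corners and the ray by an origin, a direction, and a maximum distance.

```python
def ray_intersects_aabb(origin, direction, max_distance, box_min, box_max, eps=1e-8):
    """Slab test: clip the ray against the x and y slabs and keep the overlapping interval."""
    t_min, t_max = 0.0, max_distance
    for axis in range(2):                      # 2D: x and y slabs
        o, d = origin[axis], direction[axis]
        if abs(d) < eps:                       # ray parallel to this slab
            if o < box_min[axis] or o > box_max[axis]:
                return False
            continue
        inv_d = 1.0 / d
        t1 = (box_min[axis] - o) * inv_d
        t2 = (box_max[axis] - o) * inv_d
        if t1 > t2:
            t1, t2 = t2, t1
        t_min = max(t_min, t1)
        t_max = min(t_max, t2)
        if t_min > t_max:                      # slab intervals do not overlap
            return False
    return True

# example: ray from the origin pointing along +x, box straddling x in [2, 3]
print(ray_intersects_aabb((0.0, 0.0), (1.0, 0.0), 10.0, (2.0, -1.0), (3.0, 1.0)))  # True
```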
## Use 64bit precision (GPU)

### #1 montify - Posted 16 April 2014 - 12:35 PM

Hello, I've been working on a large terrain, and I'm now at the point where I reach the precision limit of float. I'm working with DX11 / ShaderModel 5, and this provides doubles on the GPU.

My question is: how can I use double (Matrix, inputPosition) instead of float? I read something about rendering relative to the eye or so, but the simple solution should be to use doubles, but how?

### #2 spazzarama - Posted 16 April 2014 - 04:51 PM

> My question is: how can I use double (Matrix, inputPosition) instead of float?

From MSDN HLSL Scalar Types: http://msdn.microsoft.com/en-us/library/windows/desktop/bb509646(v=vs.85).aspx

> double - 64-bit floating point value. You cannot use double precision values as inputs and outputs for a stream. To pass double precision values between shaders, declare each double as a pair of uint data types. Then, use the asdouble function to pack each double into the pair of uints and the asuint function to unpack the pair of uints back into the double.

So I think you will have to encode the doubles into a pair of uint values to pass them around.

Justin Stenning | Blog | Book - Direct3D Rendering Cookbook (using C# and SharpDX) | Projects: Direct3D Hook, EasyHook, Shared Memory (IPC), SharpDisasm (x86/64 disassembler in C#), Afterglow, C#raft | @spazzarama

### #3 DementedCarrot - Posted 16 April 2014 - 09:31 PM

If you want to render stuff relative to the eye in float space using doubles, you:

1. Use doubles for your position vectors.
2. Use the double vector for every object position, and for your camera.

Then you have to translate your positions into a float-capable space for rendering. You translate every object position to get the position relative to the eye with:

```cpp
DoubleVector3 objectPosition = object.somePosition;
DoubleVector3 cameraPosition = camera.position;
DoubleVector3 doubleRelativePosition = objectPosition - cameraPosition;

// When you translate the object by the camera position, the resulting number is representable by a float.
// Just cast the double-vector components down to floats!
FloatVector3 relativePosition;
relativePosition.x = (float)doubleRelativePosition.x;
relativePosition.y = (float)doubleRelativePosition.y;
relativePosition.z = (float)doubleRelativePosition.z;
```

and then that's the position you pass into the shader for rendering. This is really cumbersome for a ton of objects because you have to recompute this translation every time you move your camera.

There is an extension of this method to keep you from creating relative coordinates every frame. You have to create a relative anchor point that moves with your camera. To do this you have to:

1. Create a double-vector anchor point that moves with your camera periodically. You move this anchor point when float precision starts to become insufficient to represent points inside the float-anchor area.
2. You build relative float-vector positions for everything relative to the anchor point, as we did before with the camera but with the anchor point.
3. When you move far enough away from the anchor, you re-locate it.
4. When the anchor moves you re-translate everything relative to the new anchor point.
This means everything has a double-vector world position and a float-vector relative anchor position.

5. You use a regular camera view matrix to move around inside this anchor float space.
6. Draw everything normally as if the anchor-relative position is the position, and the anchor-relative camera position is the camera location.

I hope this helps!

Edits: Typo-city

Edited by DementedCarrot, 16 April 2014 - 09:47 PM.

### #4 Chris_F - Posted 16 April 2014 - 10:09 PM

You should probably consider whether or not lack of precision is really the issue you are facing. Modern GPUs may support DP floating point because of SM5, but it is a feature which is almost never used by anyone (for games anyway), so consumer GPUs tend to be really bad at it. Take my GPU for example, a GTX 760, which is probably pretty typical. Switching from SP floats to DP floats is the difference between 2.25 teraflops and 94 gigaflops. That's a huge performance decrease.

### #5 montify - Posted 17 April 2014 - 01:49 AM

Thank you for the reply.

Ok, I should use relative-to-the-eye. So I have one vertexbuffer for each childnode and rootnode. Only the worldmatrix is changing.

My problem is that I use cube2sphere mapping, which requires that the cube center is 0,0,0. With relative-to-the-eye the camera position is at 0,0,0? So what's my line? When I subtract the camera position from the patch position, the cube is out of the origin in worldspace.

### #6 DementedCarrot - Posted 17 April 2014 - 09:12 AM

> My problem is that I use cube2sphere mapping, which requires that the cube center is 0,0,0.

Can you be more specific about what cube2sphere mapping is, and what it's used for?

### #7 montify - Posted 17 April 2014 - 09:32 AM

Oh sorry, so: I'm working on a procedural planet with ChunkedLOD at Earth size. To get a sphere out of 6 planes, I normalize each vertex. But the requirement is that the center of the cube (made of 6 planes) must be 0,0,0 (world origin), like here: http://britonia.wordpress.com/2010/05/20/planet-geometry/

But there they do the Vector3.Normalize on the CPU, whereas I do it on the GPU:

```hlsl
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output; // (declaration restored for completeness)

    float3 worldPosition = mul(input.Position, World).xyz;
    float3 normalized = normalize(worldPosition);
    float4 scaled = float4(normalized * SeaLevel, 1);

    output.Position = mul(mul(scaled, View), Projection);
    output.UV = input.UV;

    return output;
}
```

So, when I subtract the Camera Position from the QuadTree-Patch Position, the center of my Sphere/Cube is not 0,0,0.

Edited by montify, 17 April 2014 - 10:28 AM.

### #8 DementedCarrot - Posted 17 April 2014 - 10:45 AM

That's a tough one. I'm not totally sure how to accomplish this when the world meshes are created that way. I'll think on it.

### #9 montify - Posted 17 April 2014 - 11:32 AM

No no, the worldmesh is created like this: I have one VertexBuffer of 33x33 vertices (a flat plane). I make 6 of these planes (rotated, translated) to get a cube. Then I normalize the Input.Position in the shader = a perfect sphere.

Is it possible, and would it have the same effect, to translate the Position (Position - cameraPosition) NOT in world space but rather in view space?

### #10 montify - Posted 19 April 2014 - 10:40 AM

So I have it.
I set the Camera Position to Vector3.Zero and "hack" the worldMatrix:

```csharp
// new View Matrix
var View = Matrix.CreateLookAt(Vector3.Zero, cam.Forward, cam.Up);

Vector3 cameraSpacePosition = (position - cam.Position); // Relative to the Camera
world.M41 = cameraSpacePosition.X;
world.M42 = cameraSpacePosition.Y;
world.M43 = cameraSpacePosition.Z;
```

But I scale/normalize each vertex (the entire planet) in the shader like this (with *6000):

```hlsl
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output; // (declaration restored for completeness)

    float3 worldPosition = mul(input.Position, World).xyz;
    float3 normalized = normalize(worldPosition);
    float4 scaled = float4(worldPosition * 6000, 1);

    output.Position = mul(scaled, ViewProj);
    output.Position.z = log(C*output.Position.z + 1) / log(C*Far + 1) * output.Position.w; // Logarithmic Depth Buffer

    return output;
}
```

The scale and normalize don't work anymore... anybody out there who has a solution for this problem?

Edited by montify, 19 April 2014 - 10:41 AM.

### #11 montify - Posted 21 April 2014 - 11:14 AM

Update: Now I do the relative-to-the-eye transformation in the VertexShader:

```hlsl
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output; // (declaration restored for completeness)

    float3 worldPosition = mul(input.Position, World).xyz;
    float3 normalized = normalize(worldPosition);      // Normalize all vertices to get a sphere
    float4 scaled = float4(normalized * SeaLevel, 1);  // Scale all vertices with SeaLevel = 6000
    scaled -= float4(camPosition, 1);                  // All geometry relative to the eye

    output.Position = mul(scaled, ViewProj);
    output.Position.z = log(C*output.Position.z + 1) / log(C*Far + 1) * output.Position.w; // Logarithmic Depth Buffer

    return output;
}
```

It works, no jittering, but I have some odd "stairs". So what's going on here?
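The "stairs" are consistent with float32 quantisation at planetary magnitudes. A quick NumPy check (my own illustration, using an Earth-like radius in metres rather than the 6000-unit SeaLevel from the shader) shows why large absolute positions need to be made camera-relative in double precision before they are rounded to 32-bit floats:

```python
import numpy as np

R = 6_371_000.0                                 # metres, roughly an Earth-sized radius
p   = np.float64([R + 3.21, 12.345, 67.891])    # a vertex near the surface, far from the origin
cam = np.float64([R,        10.0,   60.0])      # camera close to that vertex

# a) round the absolute position to float32 first, then subtract the camera
#    (what effectively happens when world-space float positions reach the vertex shader)
absolute = p.astype(np.float32) - cam.astype(np.float32)

# b) subtract in double precision on the CPU first, then cast the small offset to float32
relative = (p - cam).astype(np.float32)

print(np.spacing(np.float32(R)))   # ~0.5 m: the grid float32 snaps to at this magnitude
print(absolute)                    # the large component is quantised to that grid -> "stairs"
print(relative)                    # the same offset kept to sub-millimetre accuracy
```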
# MCQs on chemical kinetics for NEET | Pdf Chemical kinetics is a very important chapter for NEET Aspirants. NEET exam is important in your life, because your future career depends upon your score in NEET exam Chemical kinetics MCQ for NEET test your knowledge, intelligence, memory and quick response. Your speed and accuracy is the essence of this NEET MCQ. For this you have to cultivate a different frame of mind. For this first solve the different chemical kinetics MCQs for NEET given in this page and then try to complete each Chapter of chemistry NEET MCQs given on ybstudy.com With Our chemical kinetics MCQ for NEET Find out where you stand, your strong points and weak points and try to take corrective steps immediately. In this process your subconscious mind will be thinking about the correct answers of chemical kinetics MCQS for those questions and in the second round, you will be getting most of the answers. Remember before starting to solve our chemical kinetics MCQs online test all the important notes and the meaning of all definitions should be understood. ## Here are the important chemistry objective questions for 12th pdf 1) Rate constant in not independent on.... a) concentration of reactants b) concentration of product c) molecularities of reaction d) temperature 2) aA +bB product, if rate= K[A]x [B]y then the following is correct ? a) x+y=a+b b) x+y≠a+b or x+y=a+b c) X+y=a+b=0 d) x+y= (a + b)2 3) Rate law is...... a) determined from given chemical equation b) determined theoretically c) determined experimental d) not determined experimentally 4) For general reaction aA+bB--> product rate =K [A]x [B]y  when x=-ve then, a) rate increases as, [A] increases b) rate decreases as, [A] decreases c)rate is independent on constant A d) rate decreases as, [A] increases Answer: d) rate decreases as, [A] increases 5) For the reaction 2H2,O2(g)---> 2H2O(l) +O2(g) rate =........ a) K [H2 O2]1/2 b) K [K [H2 O2]1/3 c) K [H2 O2]1 d) K [H2 O2]2 6) For the reaction, NO2(g) +CO(g)--> NO(g)+CO2(g) Which of the following is correct? a) rate=K[NO2]1/2[CO] b) rate = K[NO2]2 c)rate= K [CO]1 d) rate= K[NO2]1[CO]1 7) Which of the following is not correct Stated? a) rate law estimate rate of reaction b) rate law estimate concentration of reactants not product c) rate law estimate mechanism of complex reactant d) rate law estinate concentration of products at any time interval during reaction Answer: b) rate law estimate concentration of reactants not product 8) Chemical reaction is characterized by .. a) rates of reactions b) feasibility of reactions c) position of equilibrium d) all above characteristics 9) Chemical kinetics study deals with. a) rates of chemical reaction b) mechanism of rate reaction c) factors affecting rates of reaction d) all 10) The half life for Zero order reaction is.... a) At/2K b) [A]o/2At c) [A]o/2 d) [A]o/2K 11)......represents example of zero order reaction a) decomposition of ammonia gas on platinum Surface b) decomposition of nitrogen oder on platinum surface c) decomposition of phosphine gas on tungsten surface d) all 12)......not represents pseudo-order reaction. a) Acid hydrolysis of ester b) Inversion of cane sugar c) Decomposition of PCI5 d) Conversion of methyl acetate into methanol and acetic acid in presence of H2SO4 13) Rate law and order of reaction can't be calculated by.... a) Molecularity method b) Isolation method c) initial rate method d Integrated rate law 14) Molecularity term is applied for.... 
a) simple reactions b) complex reactions c)  both simples complex reactions d) none of the above 15).....is not a an example of first order. a) Acid hydrolysis of ester b) Decomposition of N2O5 C) cyclopropene into cyclopropane d) Decomposition of H2O2 16) The term (-dc/dt) in rate equation refers to... a) the concentration of a reactant b) the decrease in concentration of the reactant with time c) the velocity constant of reaction d) the concentration of a product Answer: b) the decrease in concentration of the reactant with time 17) Number of moles of a substance present in one litre of volume is known as... a) activity b) molar concentration c) active mass d) concentration 18) Rate of which reactions increases with temperature... a) of any b) of exothermic c) of endothermic reaction d) of reversible reaction 19) The rate of chemical reaction is directly proportional to.... a) active masses of reactants b) equilibrium constant c) active masses of products d) pressure Answer: a) active masses of reactants 20) The rate of a reaction can be increased in general by all the factors except... a) using a catalyst b) increasing the temperature c) increasing the activation energy d) increasing the concentration of reactants Answer: c) increasing the activation energy 21) According to rate law, Rate= k[A] [B]2 [C]  If C is taken in large excess, then the overall order of the reaction is... a) 2 b) 4 c) 3 d) 1 22) The unit of the velocity constant in case of zero order reaction is... a) conc. × time-1 b) conc-1. x time c) conc. X (time)2 d) conc-1. x time-1 23) If the concentration of a reactant A is doubled and the rate of its reaction increased by a factor of 2, the order of reaction with respect to A is... a) first b) zero c) third d) second 24) Some statements are given below about a reaction of 1st order a) Units of concentration affect the value of k b) Plot of 't1/2 Vs 'a' is a straight line parallel to conc. axis C) Unit of k is mol dm time d) Hydrolysis of ethyl acetate, using a mineral acid, is an example of it. Among the above.. a) only A is false b) B,C and D are true c) A, B and D are true d) A and C are false Answer: d) A and C are false 25) The unit of rate constant for a zero order reaction is... a)  litre mol-1 sec-1 b) mol litre-1 sec-1 c) mol sec-1 d) litre sec-1 26) The rate constant of a reaction has same units the rate of reaction.the reaction is of.. a) first order b) zero order c) second order d) third order 27) The rate constant of a reaction is 2.5x 10-2  minutes -1 The order of the reaction is... a) one b) zero c) three d) two 28) The decomposition of NH3 on the surface of finely divided Platinum as catalyst.. a) is always a zero order reaction b) is Zero order at high concentration but 1st order at low temperature c) is zero order at low concentration but 1st order at high concentration d) it is always a first order reaction Answer: b) is Zero order at high concentration but 1st order at low temperature 29) If 'a' is the initial concentration or the reactant, the time taken for completion of the reaction , if it is  of.... zero order, will be a) a/2k b) a/k c) 2a/k d) k/a 30) The specific rate constant of a first order reaction depends upon... a) concentration of the reactants b) concentration of products c) time d) temperature 31) A reaction is of first order when... 
a) the amount ot product formed increases with linearly with time b) the rate decreases linearly with time c) the rate is linearly related to the concentration of the reactant d) the concentration of the reactant decreases linearly with time Answer: c) the rate is linearly related to the concentration of the reactant 32) If the concentration is expressed in moles per litre, the unit of the rate constant for a first order reaction is... a) mole litre sec b) mole litre C) sec-1 d) mole-1 33) Order of a reaction is.... a) equal to the sum of the concentration terms in the stoichiometric equation b)  equal to the sum of the concentration terms in the rate equation C) always equal to the molecularity of the reaction d) equal to the sum of the powers of the concentration terms in the rate equation. Answer: d) equal to the sum of the powers of the concentration terms in the rate equation 34) If the initial concentration is doubled, the time half change is also doubled, the order of the reaction is... a) 2 b) 3 c) 1 d) 0 35) The dimensions of rate constant for a first order reaction involve.... a) time and concentration b) concentration only c) time only d) neither time nor concentration 36) The rate of reaction at unit concentration of reactant is called..... a) average rate b) instantaneous rate c) rate law d) rate constant 37) Which of the following statements is incorrect... a) Rate law expression cannot be obtained from the stoichiometric equation b)Law of mass action expression can be written from the balanced equation c) Specific reaction rate of a reaction is constant at constant temperature d) Rate of reaction and rate constant have same units Answer: d) Rate of reaction and rate constant have same units 38) The rate constant of a reaction does not change when.... b) concentrations of the reactants are changed c)temperature is changed Answer: b) concentrations of the reactants are changed 39) The rate at which a substance reacts depends on its.... a) atomic weight b) molecular weight C) active mass d) equivalent weight 40) 2A----> B+C would be a zero Order reaction when... a) the rate of reaction 1s proportional to square of conc. of A b) the rate of reaction remains same any conC. of A c) the rate remains unchanged at any- conc. of B and C d) the rate of reaction doubles if conc. of B is increased to double Answer: b) the rate of reaction remains same any conC. of A 41) Order of reaction is decided by... a) temperature b) mechanism of reaction c)molecularity d) pressure 42) Rate of first order reaction depends upon.. a) time b) concentration of reactant c) temperature d) all the these 43) Which of the following is true for order of a reaction? a) It is equal to the sum of exponents of the molar concentrations of the reactants in the rate equation b) It is always a whole number c) It is never zero d) t can be determined theoretically Answer: a) It is equal to the sum of exponents of the molar concentrations of the reactants in the rate equation 44) radioactive decay follows... order kinetics a) zero b) I c) II d 111 45) What is the half life of Cl if its disintegration constant is 2.31 x 10 year 1.... a) 0.3x 104 years b) 0.3 x 102 years c) 0.3 x 108 years d) 0.3 x 103 years Answer: d) 0.3 x 103 years 46) Decomposition of nitrogen pentoxide is known to be a first order reaction. 75 percent of the oxide had decomposed in the first 24 minutes. At the end of an hour, after the start of the reaction, the amount of oxide left will be... 
a) nil 47) The thermal decomposition of a compound is of first order. If a sample of the compound decomposes 50 % in 120 minutes, in what times will it undergo 90 % decomposition... a) nearly 240 minutes b) nearly 480 minutes c)nearly 450 minutes d) nearly 400 minutes 48) The rate constant of a first order reaction is 3x 10-6 per second. If the initial concentration is 0.10 M, the initial rate of reaction is... a) 3x 10-5Ms-1 b) 3x 10-8 Ms-1 c) 3x 10-6 Ms--1 d) 3x 10-7 Ms-1 49) for a chemical reaction...... Can never be a fractional a) molecularity b) rate constant C) half life d) order 50) molecularity of a reaction..... a) can be zero b) can have a fractional value C) is always whole number d) can not be less than 2 Answer: C) is always whole number
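For quick revision alongside the questions above, the standard zero-order and first-order results they rely on are summarised below, together with a worked half-life calculation in the spirit of Q45 (assuming the decay constant there is 2.31 x 10^-3 per year, which is consistent with the printed answer):

```latex
% Zero-order reaction (rate = k) on the first row, first-order reaction (rate = k[A]) on the second:
\begin{align*}
  [A]_t &= [A]_0 - kt, & t_{1/2} &= \frac{[A]_0}{2k}, & [k] &= \mathrm{mol\,L^{-1}\,s^{-1}} \\[4pt]
  \ln\frac{[A]_0}{[A]_t} &= kt, & t_{1/2} &= \frac{0.693}{k}, & [k] &= \mathrm{s^{-1}}
\end{align*}

% Worked half-life example (radioactive decay is first order):
\[
  t_{1/2} = \frac{0.693}{2.31 \times 10^{-3}\ \mathrm{yr^{-1}}}
          = 3.0 \times 10^{2}\ \mathrm{yr}
          = 0.3 \times 10^{3}\ \text{years}
\]
```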
Main To catch a ball, turn when called or pick up a cup, our brains must direct not just what to do, but where to do it. Inherent to this process is a ‘sensorimotor transformation’2,8,9 in which an object’s location detected in sensory space, such as the position on the retina, is converted into movement direction in motor coordinates, such as the direction of limb or joint angle changes. There is considerable evidence that topographically organized brain regions in a wide range of species encode the location and identity of visual objects10,11,12,13; however, how neural connectivity patterns convey such information to downstream premotor networks, and how developmental programs specify this connectivity, remains poorly understood. In Drosophila, VPNs that have dendrites in the optic lobe and axon terminals in the central brain detect ethologically relevant visual features, such as small-object motion or looming of dark objects6,7,14,15,16,17, and are close to the sensorimotor interface. Multiple VPN types initiate visually guided behaviours6,18,19,20,21, and some VPN types synapse directly onto a subset of the ≈500 premotor descending neurons (DNs) per hemibrain whose activation drives distinct motor actions22,23,24. There are 20–30 different types of VPN, each a population of 20–200 neurons per hemibrain (Fig. 1a), with small receptive fields (20–40°) that together cover visual space6,15,16. VPN dendrites in the optic lobe thus form a topographic map of visual space, and object location on the fly’s retina is theoretically encoded by which VPN neurons within a given type are excited. However, it has been unclear whether, and how, this spatial information is passed to downstream partners because the axons of all VPNs within a given type terminate in narrow, distinct glomeruli within the central brain (Fig. 1a) with little25 or no6,15,26,27 observable topography at the light-microscopy level. Yet several VPN cell types have been associated with direction-specific behaviours, including backing up and turning, escaping looming stimuli from different directions, collision avoidance and, in flight, saccade turns away from a visual stimulus6,28,29,30. Here we examine how direction-specific visual information is transformed onto downstream premotor networks by exploring the VPN-to-postsynaptic partner interface using electron microscopy (EM), light microscopy, physiology and behaviour. Neural control of looming escape direction Looming visual cues indicate an impending collision or predator attack and drive rapid escape actions in most visual animals31,32. Flies orient their escape takeoff away from the direction of a looming stimulus28,33. Several Drosophila VPN types respond to looming stimuli6,16,33,34, in particular LC4, a population of about 60 neurons per hemibrain, whose activation is critical for fast escape takeoffs through direct synapses onto the giant fibre (GF) DN35 (Fig. 1a). To investigate the control of escape direction, we measured fly responses to three different directions of looming using the FlyPEZ33 automated assay and machine-learning-based automated tracking (Extended Data Fig. 1a). Flies moved their centre of mass (COM) away from the stimulus direction (Extended Data Fig. 1a), and takeoffs were generally33 away from the stimulus (Extended Data Fig. 1b). As previously suggested28, we found takeoff direction arose from pre-takeoff postural shifts of a fly’s COM relative to its middle pair of legs (Δ[T2 leg angle]; Extended Data Fig. 1c,d), which power the takeoff jump. 
This indicates that object location encoded by looming-sensitive VPNs, such as LC4, is passed downstream. GF activation does not drive postural adjustments36 and is not expected to control the escape takeoff direction. LC4 axons, however, overlap with dendrites of nine other DNs24 (here called LC4-DNs). To examine whether LC4-DNs control takeoff direction, we focused on seven for which we had DN-specific genetic driver lines24 (Fig. 1b). Analysis of the Drosophila ‘hemibrain connectome’, reconstructed from EM data27, confirmed that these DNs receive direct visual input from looming-sensitive VPNs, and (except for DNp06) a substantial portion of this is from LC4 (Fig. 1c) with four of them (DNp04, GF, DNp02 and DNp11) among the top 10 downstream partners of LC4 (ref. 27). We optogenetically activated each DN, as well as two ‘combination’ lines targeting either two or three LC4-DNs together, and analysed the resulting behaviour with high-speed video33. GF activation produced takeoff rates of greater than 90% (refs. 33,34,35,36). Only DNp04, DNp11 and combination line activation increased takeoff rates significantly compared to that of controls (Extended Data Fig. 1e,f and Supplementary Table 1), albeit with rates lower than that for GF activation (that is, 15–40% versus >90%), suggesting that natural threats may simultaneously activate multiple LC4-DNs to drive downstream escape motor circuits. DNp04- and DNp11-activated takeoffs were almost exclusively ‘long-mode’, in which the wings are raised before the takeoff jump, whereas GF activation produced ‘short-mode’ escapes without prior wing-raising as previously described36 (Extended Data Fig. 1g,h and Supplementary Table 1). Combination line activation drove primarily long-mode takeoff, but did also unexpectedly produce many short-mode takeoffs, which are thought to rely on GF activation. Taken together with the findings of our previous work37, this mixed result indicates either that the combination of DNp02, DNp04 and DNp06 inputs to the GFs, or that these DNs are not naturally co-activated with the strong intensity of optogenetic activation. To evaluate whether any of these DNs triggered postural adjustments critical for escape directionality, we tracked 11 body points using Animal Part Tracker software (Branson Lab, see Methods) and created a metric for postural shift (Fig. 1d). DNp11 activation drove flies to lean forwards, whereas activation of DNp02 (including combinations of DNp02 and DNp04 or DNp02, DNp04 and DNp06) promoted backward leaning (Fig. 1e–g and Supplementary Videos 1 and 2). We next assessed whether these induced postural shifts led to directional takeoffs (Fig. 1h,i). Activation of DNp11 evoked forward takeoffs (Fig. 1i), whereas activation of DNp02 and DNp04 together evoked a strong bias towards backward takeoffs (Fig. 1i). As activation of DNp04 alone resulted in omnidirectional takeoffs (Fig. 1i), we reasoned that DNp02 was the main contributor to the movements leading to backward takeoff. The weak forward takeoff bias from GF activation probably results from the average resting posture of the fly, which was previously observed to have the COM slightly in front of the T2 legs28. To further test whether DNp02 and DNp11 contribute to directional control during looming-evoked escape, we silenced each DN by selectively expressing Kir2.1, an inwardly rectifying potassium channel, and then measured responses to frontal (0°) or rear (180°) looming stimuli (Extended Data Fig. 2). 
DNp02-silenced flies took off normally (forwards) in response to rear stimuli but showed significant impairment in their ability to take off backwards in response to frontal stimuli—on average most DNp02-silenced flies took off forwards, directly towards the stimulus. This is consistent with the activation of DNp02 driving a backward postural shift, and supports a critical role for DNp02 in the postural adjustments that control backward takeoffs. Notably, flies in which DNp11 was silenced had a similar phenotype—these flies took off forwards in response to both frontal and rear looming stimuli. This could indicate that more DNs, possibly with interconnections, are involved in the control of forward takeoffs than backward ones, and also probably reflects the bias of the fly to jump forwards if no postural adjustment is made from the common resting posture. We conclude that, as flies with either DNp11 or DNp02 inactivated did not respond with normal takeoff directions to anterior or posterior looming stimuli, both DNs contribute to directional control of the fly’s natural escape behaviour. We next sought to determine how LC4 neurons differentially convey the spatial location of the looming stimulus to DNp11 and DNp02 (Fig. 2a,b). In the right hemisphere of a complete serial section transmission EM dataset, we traced all LC4 neurons, DNp02 and DNp11 (FAFB dataset38; Fig. 2c) and marked synapses between LC4 neurons and each DN. We found a wide range (1 to 75) in the number of synapses individual LC4 neurons made with a given DN (Extended Data Fig. 3a). We next investigated whether LC4 neurons that synapsed more with DNp11 or DNp02 had dendrites located in a particular region of the lobula neuropil. We visualized the LC4 dendrites in the lobula and coloured each neuron by the number of synapses it made with a given DN. This revealed antiparallel synaptic number gradients along the lobula anterior–posterior (A–P) axis for DNp02 and DNp11 (Fig. 2d). By contrast, A–P gradients were not seen in LC4 connectivity onto the GF and DNp04 (Extended Data Fig. 3b,c). The same A–P gradient patterns with LC4 synapses onto DNp11 and DNp02 were seen in an EM dataset from a second brain (hemibrain)27 (Fig. 2e–g). This was supported by a strong negative correlation between the number of synapses a given LC4 makes with DNp11 and with DNp02 (Fig. 2h). The orientation of these gradients corresponds to the backward- and forward-jumping motor outputs of DNp02 and DNp11, respectively. Taken together, the behaviour and connectomic data support a simple model: antiparallel synaptic gradients transform locally detected object location into oppositely directed behaviours. A frontward looming stimulus activates anterior LC4 neurons that provide relatively more drive to DNp02, which produces backward body movements generating a backward escape trajectory following co-activation with DNp04 or other escape pathways. For a stimulus looming from behind, posterior LC4 neurons become more active and drive DNp11 to generate forward postural shifts and a forward-directed takeoff. The synapse gradient model is based on the assumption that synapse number correlates with connection strength. To directly test this, we carried out in vivo whole-cell patch-clamp recordings from DNp02, DNp11, DNp04 and the GF during visual looming stimulation at varying locations along the A–P axis of the visual space. We presented vertical arrays of small dark expanding discs at four different azimuthal locations ipsilateral to the targeted DN (Fig. 
3a and Extended Data Fig. 4a). DNp02, DNp11, the GF and DNp04 all depolarized in response to looming, and all except the GF produced action potentials (Fig. 3a–f and Extended Data Fig. 4b–j; see Methods for identification of action potentials). DNp02 produced more action potentials in response to anterior, compared to more posterior, stimuli (44 versus 13 spikes across all trials), whereas DNp11 exhibited the opposite trend (Fig. 3b,c,e and Extended Data Fig. 4i). These trends were consistent for both individual (Fig. 3c and Extended Data Fig. 4c,i) and averaged (Fig. 3e) responses. By contrast, DNp04 produced bursts of action potentials without significant azimuth tuning (Extended Data Fig. 4c–f,i). In agreement with the action potential tuning curves, depolarizing membrane potentials in DNp02 were larger for more anterior azimuthal locations of the looming stimulus, whereas those for DNp11 were larger for more posterior looming locations (Fig. 3d,f and Extended Data Fig. 4j). For the GF, we did not see distinct tuning properties for the anterior–posterior location of the stimuli. DNp04 did show a trend towards stronger responses to anterior stimuli, although the responses were more variable than for DNp11 or DNp02 (Extended Data Fig. 4g,h,j). If synapse number correlates directly with input current drive to the postsynaptic cell, we should be able to predict the DN responses to looming stimuli at different azimuthal locations. To assess this, we used the EM data to make a model incorporating both the spatial profile of LC4 dendrites and the synaptic connectivity of LC4 axons with DNs. Main dendritic branches of all 55 LC4 neurons in the FAFB dataset38 were mapped from lobula to eye coordinates following a previously established method25 (Fig. 3g and Extended Data Fig. 5a–c). The normalized estimated responses to looming recapitulated the azimuthal tunings predicted by the synaptic gradients and matched the responses for all four DNs we measured (Fig. 3h,i and Extended Data Fig. 5d–f). We conclude that the synaptic numbers observed from EM data can be interpreted as functional synaptic weights. We used this model to simulate responses to looming from azimuthal locations across the whole visual hemisphere, including those not possible in our physiology experiments. Our simulation showed strong antiparallel looming response profiles for DNp02 and DNp11 across nearly the whole visual hemifield (30°–130°), supporting the observed synaptic gradients as predictive of functional response profiles (Fig. 3j). Taken together, these results corroborate the model that anterior LC4 neurons provide stronger inputs to DNp02 in response to anterior stimuli whereas posterior LC4 neurons provide more drive to DNp11 in response to posterior stimuli in a graded fashion. This differential connectivity drives the backward (DNp02) or forward (DNp11) escape takeoffs away from looming threats. Synaptic gradients are a common wiring motif To address the question of whether visuomotor transformation through gradients of synapses is limited to just LC4 and DNp02 and DNp11 or whether it represents a general circuit wiring logic, we analysed the output connectivity patterns of 20 VPN cell types6 using data from the hemibrain connectome27. 
First, we used principal component analysis and k-means analyses to cluster individual neurons within a VPN cell type on the basis of the similarity of their outputs (that is, the number of synapses they form onto the set of synaptic partners within their respective optic glomerulus; Extended Data Fig. 6a and Methods). Next, we colour-coded each cluster to visualize the relationship between a neuron’s cluster identity and the spatial location of its dendrites in the lobula. A striking spatial separation of the clusters was found in most VPN cell types (Fig. 4a,b), revealing widespread differential synaptic connectivity, such that individual neurons within one VPN cell type elaborated quantitatively and qualitatively different outputs in the glomerulus depending on the location of their dendrites in the lobula (Fig. 4c and Extended Data Fig. 6b–g). To investigate these properties in more detail, we analysed synaptic connectivity between two VPN cell types (LC4 and LPLC2) and the top 25 postsynaptic partners of each of them (Fig. 4d–g). Both VPN cells types are looming detectors and share some postsynaptic partners, including the GF6,34,39. For each VPN cell type, we first assessed the similarity of its outputs onto different postsynaptic neurons by measuring the pairwise correlation for all 300 possible pairs of its top 25 postsynaptic partners (similar to LC4 and DNp02 or DNp11 in Fig. 2h). The resulting matrices revealed that postsynaptic targets of LC4 and LPLC2 formed three and five connectivity-based clusters, respectively (Fig. 4d,f). Thus, different postsynaptic partners receive different patterns of input from the same VPN cell type. Next, to visualize the relationship between this differential input and VPN dendritic maps, we calculated weighted dendritic input centroids for each of the top 25 postsynaptic partners of LC4 and LPLC2, and measured pairwise distances between them (Extended Data Fig. 9a–h and Methods). These indicate spatial regions of the lobula providing the most input to a given postsynaptic partner. The resulting topographic maps (Fig. 4e,g) revealed that all three connectivity-based clusters for LC4 clearly segregated along the A–P axis of the lobula (Fig. 4e). By contrast, two out of five clusters for LPLC2 segregated along the A–P axis of the lobula, two segregated along the D–V axis, and one cluster had no spatial bias (that is, neurons from this cluster receive uniform input from all LPLC2 neurons; Fig. 4g). Notably, both the numbers and topographic positions of these clusters largely match the results of k-means analysis for both VPN cell types (Fig. 4a). These examples illustrate how the topographic map of VPN dendritic inputs in the optic lobe is converted into maps of graded synaptic weights in the optic glomerulus. We observed synaptic gradients reflecting both the A–P and D–V axes of the dendritic map across all 20 VPN cell types (Fig. 4h and Extended Data Fig. 7), analogous to those we originally found in the fly directional escape circuit (Fig. 2). The ethological relevance of some of these gradients may be deduced from the known function of postsynaptic neurons in the literature. For example, the D–V gradient from LPLC4 onto DNp07 may control landing behaviour22 (Fig. 4h) and the A–P gradient from LPLC1 onto PLP219 (Extended Data Fig. 7) could regulate collision avoidance29. Thus, we propose that conversion of visual space coordinates into graded synaptic connectivity is a shared feature of VPN wiring. 
Synaptic gradients with or without axon topography Topographic arrangement of VPN axons would provide a simple mechanism for the development of synaptic gradients. Previous studies concluded that this was unlikely6,15,25,26 (with an exception of LC10 (refs. 6,13) and traces of topography in the LC6 (ref. 25) glomerulus). Here we revisited this issue using EM data27 and looked for axon topography corresponding to dendritic arrangement along either the A–P or D–V axis of the lobula. We found five additional VPN cell types (LC4, LC9, LC22, LPLC1 and LPLC4) that have axon terminals retaining rough A–P topography, and one (LC16) whose axons maintain traces of D–V topography (Fig. 5a, Extended Data Fig. 8a–e and Supplementary Videos 3 and 4). These observations were confirmed using light microscopy and MultiColor FlpOut40: the axon terminals of sparsely labelled VPNs with dendrites in either the anterior or posterior lobula targeted distinct domains in their corresponding glomeruli and also exhibited differential morphology as assessed by EM and light microscopy (Extended Data Fig. 8g–j). No axon topography, however, was observed for most (12/20) VPN cell types (Fig. 5b and Extended Data Fig. 8f) at the resolution of our analysis. Therefore, synaptic gradients in these cases (Fig. 4h and Extended Data Fig. 7) must emerge by an alternative mechanism. In summary, VPNs fall into two classes (Fig. 5c). In one, synaptic gradients correlate with axon topography within the glomerulus and in the other they do not. DN dendrite location matches LC4 synaptic gradients We focused on LC4 to understand how axon topography leads to the formation of synaptic gradients. We found that for the top 25 postsynaptic partners of LC4, the spatial distribution of postsynaptic sites in the LC4 glomerulus strongly correlated with the positions of LC4 dendrites in the lobula (Fig. 6a, Extended Data Fig. 9g,i and Methods). This is exemplified by DNp02 and DNp11 receiving anticorrelated inputs from LC4 axons (Figs. 2h and 4d,e) and having spatially segregated postsynaptic sites in the LC4 glomerulus (Fig. 6b and Extended Data Fig. 9i). Topographic mapping of the LC4 axon terminals alone cannot account for these patterns. To assess whether the spatial distribution of DN dendrites also contributes to differential connectivity, we mapped the positions of dendrites of different DN neurons within the LC4 glomerulus using light microscopy. DNp02 and DNp11 dendrites occupy unique glomerular sub-compartments where axons of LC4 corresponding to anterior and posterior visual fields selectively terminate. By contrast, dendrites of DNp04, a postsynaptic neuron with no A–P synaptic gradient with LC4, arborize uniformly within the LC4 glomerulus (Fig. 6c,d and Extended Data Fig. 10a–c). To map synapses at the light level, we used a modification of the STaR41 method to visualize presynaptic sites in sparsely labelled LC4 neurons (Extended Data Fig. 10d,e) and assessed their proximity to DNp02 and DNp11 dendrites (Fig. 6e–h). The presynaptic sites of LC4 from the anterior lobula were much closer on average to the DNp02 dendrites than those from the posterior (Fig. 6e,g and Supplementary Videos 5 and 6). Conversely, DNp11 dendrites were closer to the presynaptic sites of LC4 from the posterior lobula. In summary, LC4 utilizes a spatial wiring strategy to attain graded synaptic connectivity. 
A combination of topographic arrangement of LC4 axons and placement of DNp02 and DNp11 dendrites within different spatial domains in the glomerulus determines the directional specificity of the escape response to looming stimuli from different regions of the visual field. Spatially independent synaptic gradients in LPLC2 The synaptic gradients elaborated by LPLC2 form in a fundamentally different way from those elaborated by LC4. Analysis of the top 25 postsynaptic partners of LPLC2 found no significant relationship between positions of LPLC2 dendrites in the lobula (that is, synaptic output specificity) and the spatial arrangement of synapses in the LPLC2 glomerulus (Fig. 6i and Extended Data Fig. 9h,j). For example, the postsynaptic neurons PVLP071 and PVPL076 have anticorrelated inputs from LPLC2 (Fig. 4f,g), yet their postsynaptic sites are intermingled in the LPLC2 glomerulus (Fig. 6j and Extended Data Fig. 9j). We confirmed this principle by labelling presynaptic sites in axons of individual LPLC2 neurons with dendrites within the dorsal and ventral lobula and measuring the proximity of these presynaptic sites to the GF dendrites (Fig. 6k and Extended Data Fig. 10f). No significant difference in distances was found (Fig. 6l) despite a marked difference in synapse counts (Fig. 4h). Thus, the spatial distribution of synapses in the LPLC2 glomerulus seems random. To assess this principle in a more systematic manner, we further analysed EM data (hemibrain) and measured the correlation between axo-dendritic overlap and synaptic counts for four topographic and four non-topographic VPNs and their postsynaptic partners (Extended Data Fig. 11). Our results strengthened the notion that VPNs utilize two qualitatively different wiring strategies to form synaptic gradients. We next sought to assess whether the synaptic gradients of LPLC2 onto the GF were functionally significant (Fig. 4h). The dendrites of LPLC2 neurons expressing the P2X2 receptor were locally activated by injection of ATP in the dorsal and ventral regions of the lobula, and the response in the GF was assessed using electrophysiological recordings (Extended Data Fig. 12a). GF responses following activation of dorsal LPLC2 were significantly stronger than those following ventral ATP injections. By contrast, little difference was seen in response following stimulation of dorsal versus ventral LC4 (also connected to the GF, but without a notable D–V synaptic gradient; Fig. 6m and Extended Data Fig. 12b,c). In summary, functionally relevant graded synaptic connectivity of LPLC2 is established through a spatially independent mechanism. Discussion We took advantage of cell-type-specific genetic tools, behavioural and physiological analyses, and densely reconstructed neuron connectivity maps to examine a central brain sensory-to-motor interface at synaptic resolution. We showed that the transformation of object location from retinal to body coordinates is solved by gradients of synapses between spatially ordered visual-feature-detecting neurons (that is, VPNs) and movement-direction-specific premotor neurons (that is, DNs). We demonstrated that such numeric gradients produce functional synaptic weights and lead to predictable response differences in postsynaptic neurons that drive fly escape takeoffs correctly oriented away from looming threats. Individual cells within one VPN cell type are thus functionally heterogeneous with connectivity profiles often as dissimilar as ones found between different neuron types. 
It is this continuous heterogeneity that converts visual stimuli into ethologically relevant behavioural responses. We discovered behavioural roles for individual DNs (DNp02 and DNp11), and it may be tempting to consider these as command neurons for particular body movement directions. However, several observations suggest that they act instead as members of a larger DN group whose combined activity represents both the strength of the drive to takeoff and movement direction. First, when optogenetically activated alone no LC4-DN drove a high takeoff rate (25% takeoff rate maximum, all long-mode takeoffs). By contrast, activation of the command-like GF drove nearly 100% takeoff (all short-mode takeoffs). Second, activation of DN combinations (for example, DNp02 and DNp04 or DNp02, DNp04 and DNp06), increased takeoff rates significantly, although only up to about 40% takeoff. This suggests that co-activation of multiple DNs drives the long-mode takeoff and more DNs than we identified probably participate. Finally, whereas co-activation of DNp02 and DNp04 increased the backward shift of flies compared to activation of DNp02 alone, this shift was reduced by additional co-activation of DNp06. Thus, different DNs may ‘vote’ for movement in a particular direction and the resulting behaviour is the sum of these votes, much like the population activity in directionally selective motor cortex neurons correlates with movement direction in primates42. This mechanism could extend beyond forward and backward control if the left and right DNs of the same type, which would be differentially activated in the event of a looming stimulus from the side, also independently ‘voted’ for leftward or rightward body shifts, much like unilateral activity in DNg02 neurons correlates with left or right flight saccades in flying flies43. By this mechanism it would be plausible for the fly to obtain the ability to takeoff in any direction relative to its body, as has been observed in behavioural data. Expanding our analysis to 20 different VPN cell types and their postsynaptic partners revealed synaptic gradients as a general property of visual feature detector output in Drosophila. Evidence consistent with a gradient motif has been observed at the sensorimotor interface of the cockroach cercal system, where input from directionally selective abdominal wind-sensitive hairs has graded effects on the response of downstream giant interneurons, which drive escape44. Thus, synaptic number gradients may be a general principle for transmission of spatial information between sensory and motor networks. VPNs guide innate visual behaviours of the fly, including looming-evoked backing or takeoff and small-object tracking6,16. We expect that the synaptic gradients we described here are specified by genetically hard-wired developmental processes, rather than through experience. In support of a developmental origin, we observed substantially the same LC4-DN gradients in EM volumes of two different fly brains27,38. The same wiring motif, however, could be present in more flexible areas of sensorimotor interface such as ellipsoid body ‘compass’ neurons45 and would provide a simple mechanism for how learning-induced changes in numbers of synapses between neurons could result in different stimulus–behaviour pairings. We identified two different circuit wiring strategies producing synaptic gradients in different VPN cell types. 
In the ‘spatial’ strategy, topographic mapping of VPN axon terminals organizes the optic glomerulus and is ‘read out’ by stereotypically positioned dendrites of different target neurons. Axonal topography may arise through age-dependent mechanisms as described for more peripheral regions of the fly visual system46, or through graded expression of cell surface molecules (for example, Eph receptors and ephrins) as described in the vertebrate visual system47. Developmental mechanisms must act in parallel to target dendritic processes of different postsynaptic neuron types to discrete domains within the glomerulus. Most VPN cell types we examined (12/20), however, did not show clear topographic organization of their axonal projections. Thus, in most cases, gradients emerge in the absence of spatial cues. Molecular heterogeneity within one cell type previously found in the fly visual system48 and mouse visual cortex49 may underlie such differential synaptic specificity. Future work should examine whether spatial gradients of molecular regulators instruct differential expression of cell adhesion and recognition molecules in VPNs, thereby transforming a retinotopic arrangement of dendritic arbours in the optic lobe into a graded distribution of synapses in the central brain. Methods Experimental model details Flies were reared under standard conditions at 25 °C and 50% humidity with a 16-h light/8-h dark cycle on a standard cornmeal fly food. Male and female flies 3–5 days after eclosion were used for all experiments except if specified otherwise. Flies used for optogenetic activation experiments were raised on 0.2 mM retinal (Sigma R2500) food, and maintained on 0.4 mM retinal food as adults. These flies were kept in the dark in foil-covered vials until they were prepared for experiments. Supplementary Table 2 provides detailed descriptions of fly genotypes used in each experiment and origins of transgenic stocks. Behavioural experiments High-throughput takeoff assay We tested escape responses of unrestrained flies using our previously developed FlyPEZ33 system to automate fly behaviour experiments and collect large sample sizes necessary to quantitatively characterize differences in escape behaviour. In FlyPEZ, individual flies were released one at a time onto a 5 mm by 5 mm glass platform through an automated gate without undue perturbation, where they were targeted for visual or optogenetic stimulation. The fly position on the platform was tracked using a real-time tracking algorithm, which coordinated the triggering of a high-speed video camera and either looming stimulus or light stimulus. For visual stimulation, we used digital micromirror device projectors running at a refresh rate of 360 Hz, controlled by MATLAB using the Psychophysics Toolbox. Dark looming discs expanding from 10° to 180° at an elevation of 45° and azimuth of 0°, 90° or 180° ± 22.5° relative to the fly head position were presented on a 7-inch-diameter back-projection coated dome centred over the fly platform, which covers 360° in azimuth and 120° in elevation of the fly’s visual field. To simulate an object approaching with constant velocity, the projected looming disc centre remained constant while the disc radius increased nonlinearly over time on the basis of the following equation $$\theta \left(t\right)=2{{\rm{\tan }}}^{-1}\frac{l}{{vt}}$$ in which $$\theta$$ is the angular size of the stimulus (in radians), l is the radius of the virtual object, and v is its simulated approach velocity. 
$$t$$ = 0 is the theoretical time of contact, when the object would reach 180°, so that t < 0 during object expansion. For optogenetic stimulation, CsChrimson was activated in flies raised on retinal food with four 624-nm wavelength light-emitting diodes (total irradiance of 500 W m−2, as measured from the location of the fly on the platform). Escape responses were captured using a macro lens on a high-speed camera, and two perspectives of the fly (side and bottom views) were filmed at 6,000 frames per second under 850-nm infrared illumination. Only one stimulus was presented per fly, and the platform was cleared before release of the subsequent fly. All looming experiments were carried out during the 4-h activity peak in the afternoon light cycle, and all optogenetic experiments were carried out in the dark.

Behavioural data analysis

Escape sequence durations in the CsChrimson activation and Kir2.1-silencing experiments were manually annotated by labelling the first frame of wing raising and the last frame of tarsal contact from the FlyPEZ video data. For the analysis of postural shifts and takeoff angles following either optogenetic activation or looming stimulus presentation, we used a machine-learning software package, Animal Part Tracker (APT, developed by the Branson Lab at Janelia) v0.3.4, which allowed us to automatically track locations of body parts in the input videos. For automated tracking, the videos were subsampled at 600 Hz (1.67-ms interval), which was sufficient to observe smooth changes in leg and body movements. Missing tracking data due to occlusions (body part out of frame) were interpolated for gaps less than five frames (8.33 ms), and a moving-average filter was applied to smooth the raw tracking data. For optogenetic activation experiments, videos in which visibility of T2 legs was lost over the 100 ms of annotation were excluded, except for cases in which the fly performed a takeoff. For silencing experiments, videos in which visibility of T2 legs was lost between the stimulus start and the start of jumping leg extension were excluded from the COM movement, COM flow field and T2 leg angle analyses. Individual takeoff vectors were obtained from two locations of the COM, one at takeoff, when the last of the middle tarsi loses contact with the ground (t_end), and one either at a manually annotated frame of the start of jumping leg extension, or at 5 ms before the takeoff (t_start; Fig. 1i). The population mean resultant length, $$\bar{R}$$, is calculated by the following equation $$\bar{R}=\frac{1}{n}\left|\sum _{j=1}^{n}{e}^{i{\theta }_{j}}\right|$$ in which $$n$$ is the total number of takeoff vectors, and $${e}^{i{\theta }_{j}}$$ uses Euler’s formula to represent the jth takeoff vector as a unit vector. $$\bar{R}$$ is a statistic between 0 and 1 for the spread of a circular variable in the population, such that 1 means all of the takeoff directions are concentrated at a single angle, and 0 means the spread is more uniform. The COM referenced to fly body-centric coordinates was obtained by translating and rotating the COM as described in Extended Data Fig. 1c. Δ[T2 leg angle] at a given time frame of the FlyPEZ video was obtained using the APT-tracked tarsal tips of the middle legs and the COM as described in Fig. 1d. A Butterworth filter was applied to the T2 leg angle time series results. Individual COM movement vectors were calculated as the vector from COM_0 to COM_pre (Extended Data Fig. 1d).
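The mean resultant length defined above can be computed directly from a set of takeoff angles. The sketch below is in Python/NumPy rather than the MATLAB/APT pipeline used in the study, and the example angles are invented purely for illustration.

```python
import numpy as np

def mean_resultant_length(theta):
    """R_bar = |(1/n) * sum_j exp(i * theta_j)| for takeoff angles theta (radians).
    Values near 1 mean the takeoff directions are concentrated at a single angle;
    values near 0 mean the spread is closer to uniform."""
    theta = np.asarray(theta, dtype=float)
    return np.abs(np.mean(np.exp(1j * theta)))

# Hypothetical takeoff directions, one tightly clustered set and one spread-out set.
forward_biased = np.deg2rad([5.0, -10.0, 12.0, 3.0, -7.0])
omnidirectional = np.deg2rad([0.0, 72.0, 144.0, 216.0, 288.0])
print(mean_resultant_length(forward_biased))   # close to 1
print(mean_resultant_length(omnidirectional))  # close to 0
```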
Electrophysiological experiments Electrophysiological recordings and data analysis Female flies of 2–4 days in age were anaesthetized on a Peltier-driven cold plate and positioned ventral side up to be tethered on a custom polyether-ether-ketone recording plate by applying ultraviolet-cure glue to the head and thorax. We used only female flies because: female flies are larger and hence less prone to desiccation than male flies, and so have the potential to provide longer-lasting electrophysiological recordings; and both the hemibrain and full brain (FAFB) EM datasets were collected from female flies, so our direct measurements of the gradients are both in female flies. For recording stability, the proboscis was glued in a retracted position and the front pair of legs were clipped and glued at the femur. To access the DN soma for whole-cell recording, a window was cut in the cuticle on the posterior side of the head, and the overlying fat and trachea were removed. The brain was continuously perfused during electrophysiology with the external solution containing (in mM): 103 NaCl, 3 KCl, 5 N-Tris (hydroxymethyl)methyl-2-aminoethane-sulfonic acid, 8 trehalose, 10 glucose, 26 NaHCO3, 1 NaH2PO4, 1.5 CaCl2 and 4 MgCl2, bubbled with 95% O2 and 5% CO2, and adjusted to pH 7.3 and 273–276 mOsm. To disrupt the perineural sheath around the soma of interest, collagenase (0.25 mg ml−1 in external solution) was applied locally with a large-bore pipette to the surface of the brain. A small amount of tissue was then removed by using suction from a pipette filled with external solution to gain unrestricted patch pipette access. Patch pipettes were made from borosilicate glass using a Sutter p-1000 puller and fire-polished after pulling using a Narishige MF-900 microforge to achieve a final resistance of 4–8 MΩ. The internal solution contained (in mM): 140 potassium aspartate, 10 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, 1 ethylene glycol tetraacetic acid, 4 MgATP, 0.5 Na3GTP and 1 KCl. The pH was 7.3 and the osmolarity was adjusted to approximately 265 mOsm. To obtain patch-clamp recordings, DN somata were visually targeted through brief GFP excitation. Recordings were acquired in current-clamp mode with a MultiClamp 700B amplifier (Molecular Devices), low-pass filtered at 10 kHz, and digitized at 40 kHz (Digidata 1440A, Molecular Devices). Whole-cell recording data were analysed in MATLAB using custom written code or using Clampfit 11 software (Molecular Devices), and graphical representation was carried out by using Prism 9.2.0 software (GraphPad). Spike events in response to looming stimuli were determined on the basis of the rise slope (mV ms−1) in the response region above a threshold given from the averaged maximum slope in the baseline region across individual recordings, followed by visual inspection of the raw data. The baseline region of each trial corresponded to the 2-s time window before the beginning of the looming stimulus. The response region was the 150-ms period after the onset of the stimulus. To estimate the magnitude of depolarization in response to looming stimuli, membrane potentials were averaged across individual trials (4–8 trials per neuron), and the area (ms × mV) was calculated in the 150-ms response region. Visual stimulation for electrophysiology Custom visual stimuli were produced in MATLAB using the Psychophysics Toolbox to display looming stimuli with different approach angles around the fly. 
We were limited in how far posterior we could show stimuli owing to constraints of the plate to which the fly was tethered for access to the back of the head capsule, and of the microscope. This was especially an issue for DNp11 recordings, as the microscope objective blocks presentation of the posterior stimuli that should most strongly excite DNp11. Thus, our strategy for assessing the functional gradient of the receptive field (RF) was to compare directly measured visual responses in the experimentally accessible visual field to responses predicted by a model we generated from the measured synaptic numbers and an alignment with the visual world (see the section below entitled Mapping the LC4 anatomical RF). Within our accessible visual area, we generated looming stimuli at 32.5°, 45°, 57.5° and 70° along the eye equator (anterior to posterior) and then pitched the plane of these stimuli down 20° to roughly coincide with the tilt of the synaptic gradients we measured. Looming stimuli from different azimuths were shown in randomized sets. Looming stimuli were arrays of three discs, black on a white background, and programmed to expand from 0° to 30° in azimuth in each disc with a 12-s inter-stimulus interval. We used three-disc vertical arrays because we wanted to use a stimulus that would produce as strong a response as possible and which could be varied in azimuth. As LC4 neurons have only an approximately 40° RF, only a handful of LC4 neurons may be excited by a single looming stimulus. Therefore, to activate more LC4 neurons along a given azimuth, we used a column of three. See Extended Data Fig. 4a for a depiction of the looming stimuli used. Visual stimuli were back-projected at 360 Hz onto a 4-inch-diameter dome at 768 × 768 resolution. Stimulus frames were synchronized with the recording trace by simultaneously recording a photodiode that monitored a patch of each frame projected just outside the dome and coloured black or white on alternate frames. Constant angular velocity stimuli were generated using the following equation $$\theta \left(t\right)={v}_{{\rm{a}}}t$$ in which $$\theta$$ is the angular size of the stimulus, $${v}_{{\rm{a}}}$$ is the angular velocity, and $$\theta$$ = 0 at $$t$$ = 0. All stimuli were corrected for distortion and irradiance differences as described previously.

P2X2 experiments

Whole-cell patch-clamp recordings from the GF were carried out in 2–4-day-old female flies as described above. For P2X2 receptor activation of LC4 or LPLC2 VPNs, a glass capillary pulled to a 1-μm diameter was positioned on the VPN dendrites, which expressed both GFP and the P2X2 receptor, approximately 50 μm below the surface of the brain. ATP (Sigma A9187, 5 mM) was microinjected (5 psi, 200-ms pulse) under the control of a Picospritzer (Parker Hannifin). To test dorsoventral gradients of functional connectivity between the VPNs and the GF, either the dorsal or ventral part of the lobula was stimulated in an alternating fashion at 90-s intervals to permit recovery between pulses. Whole-cell recording data were analysed as mentioned above. Before calculating the peak amplitudes of the GF response, the membrane potential traces acquired during ATP applications were low-pass filtered and averaged across individual trials as specified in the figure legends.
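A minimal sketch of that last preprocessing step (low-pass filter each trial, average across trials, then read off the peak depolarization) is shown below. It is written in Python rather than the analysis software used in the study, the filter order and cutoff are placeholders, and the input data are synthetic; only the 40-kHz sampling rate matches the digitization rate stated above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def peak_response(trials, fs=40_000.0, cutoff_hz=100.0, baseline=0.0):
    """trials: array of shape (n_trials, n_samples) of membrane potential (mV).
    Low-pass filter each trial, average across trials and return the peak
    depolarization relative to `baseline`. Filter settings are placeholders."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    filtered = filtfilt(b, a, trials, axis=1)
    mean_trace = filtered.mean(axis=0)
    return np.max(mean_trace - baseline)

# Synthetic example: 5 trials of 0.5 s sampled at 40 kHz with a ~4 mV "response" bump.
rng = np.random.default_rng(0)
fake_trials = rng.normal(0.0, 0.3, size=(5, 20_000))
fake_trials[:, 8_000:9_000] += 4.0
print(peak_response(fake_trials))
```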
Generation of single-cell STaR transgenic flies A combination of HIFI DNA assembly (NEB) and restriction-enzyme-based cloning was used to generate either 13XLexAoP2-FRT-STOP-FRT-myr::GFP-2A-R::PEST or 13XLexAoP2-FRT-STOP-FRT-myr::tdTomato-2A-R::PEST through modification of pJFRC177 (Addgene: 10XUAS-FRT-STOP-FRT-myrGFP, plasmid no. 32149). First, the 10XUAS sequence of pJFRC177 was replaced by 13XLexAoP2 from pJFRC19 (Addgene: 13XLexAoP2-IVS-myrGFP, plasmid no. 26224). Second, the GFP-coding sequence of pJFRC177 was replaced by either GFP-2A (cassette C: GS linker-FRT-STOP-FRT-GFP-2A-LexAVP16) or tdTomato-2A (UAS-DIPalpha-2A-tdTomato), both followed by the coding sequence of R::PEST recombinase from pJFRC165 (Addgene: 20XUAS-IVS-R::PEST plasmid no. 32142). Transgenic flies were generated by integration of either construct into the VK00033 landing site using a commercial injection service (BestGene). To generate sparsely labelled VPNs with visualized presynaptic sites (sparse StaR), 13XLexAoP2-FRT-STOP-FRT-myr::GFP-2A-R::PEST constructs were recombined with StaR41 (Brp-RSRT-stop-RSRT-myr::smGdP-V5-2A-LexA, laboratory stock). Female flies carrying the recombined constructs were crossed into male flies with VPN-specific LexA driver lines and hsFLP recombinase. At 48 h after puparium formation, pupae were heat-shocked for 15 min in 37 °C water bath. Immunohistochemistry Unless otherwise specified, dissected flies were aged 3–4 days post eclosion. Brains were dissected in ice-cold Schneider’s Drosophila Medium (Gibco 21720–024), and fixed in acid-free glyoxal (Addax Biosciences) containing 5% sucrose (Sigma S9378) overnight at 4 °C. Brains were rinsed repeatedly with PBST (PBS (Bioland Scientific LLC PBS01-03) containing 0.5% Triton-X100 (Sigma T9284)), and incubated in blocking solution (PBST containing 10% normal goat serum (Sigma G6767)) for 2 h at room temperature before incubation with antibodies. Brains were incubated sequentially with primary and secondary antibodies diluted in blocking solution for 24 h at 4 °C, with three rinses in PBST followed by 1 h incubations at room temperature in between and afterwards. Primary antibodies were used at 1:20 (nc82), 1:500 (chicken anti-GFP) and 1:200 (all others) dilutions. All secondary antibodies were used at 1:300 dilutions. The full list of antibodies used is available in the Reporting Summary. The technique for subsequent mounting in DPX was adapted from the Janelia protocol for mounting the central nervous system of adult Drosophila in DPX. After being washed to remove residual secondary antibodies, brains were additionally fixed with PBS containing 4% paraformaldehyde (Electron Microscopy Sciences 15710) for 3 h at room temperature, rinsed with PBS and mounted on 22 × 22-mm square No. 1.5H cover glass (Thorlabs CG15CH2) (with the posterior side of the brain facing the cover glass) previously coated with poly-l-lysine (0.078% solution in deionized water, Sigma P1524) with added 0.2% Kodak Photo-Flo 200 Solution (Electron Microscopy Sciences 74257) followed by a quick 1–2-s rinse with MilliQ water. Brains were dehydrated by placing the cover glass into baths with successively increasing ethanol (Sigma 459844) concentrations (30–50–75–95–100–100–100%, 10 min each) followed by three successive baths of xylene (Thermo Fisher Scientific X5–500), 5 min each. Afterwards the glass was uniformly covered with 8–10 drops of DPX (Electron Microscopy Sciences 13510) and placed on a prepared slide between the spacers made of two 22 × 22 mm square No. 
2 cover glasses (Fisher Scientific 12-540B). The slide was left for 24 h in the hood for drying, and then transferred to room temperature and imaged at least 24 h afterwards, Confocal image acquisition and processing Immunofluorescence images were acquired using a Zeiss LSM 880 confocal microscope with Zen digital imaging software using an oil-immersion ×63 objective. Serial optical sections were obtained from whole-mount brains with a typical resolution of 1,024 μm × 1,024 μm × 0.5 μm. Image stacks were exported to Imaris 9.7 for level adjustment, cropping and removal of signal in off-target brain regions and background noise, as well as 3D volume reconstructions. Analysis of neuroanatomical data from confocal image stacks To assess and measure the differential placement of DN dendrites within the LC4 glomerulus, confocal image stacks of colocalized glomeruli and DN dendrites were aligned so that the x axis corresponded to the sagittal diameter (width) of the glomerulus and cropped at the edges of the glomerulus to exclude any extraglomerular DN dendrites from consideration. 3D reconstructions of LC4 axon terminals and DN dendrites were obtained using the Imaris Filaments tool (Extended Data Fig. 10b). The x coordinates of the filaments were exported to GraphPad Prism 9.2.0 and normalized to the sagittal diameter of the LC4 glomerulus (0–1 range). The x coordinate of the centroid of the DN dendritic arbour was calculated as a mean of x coordinates of all filaments and used as a final metric of spatial distribution of dendrites within the glomerulus (Extended Data Fig. 10c). To assess the spatial proximity between presynaptic sites of individual LC4 or LPLC2 neurons and DN dendrites (single-cell STaR experiments), Brp puncta in single VPN cells were reconstructed using the Imaris Spots tool, followed by identification of their centroids, as well as centroids of reconstructed dendritic filaments. Distance between Brp puncta and DN dendrite centroids was measured along the sagittal diameter of the glomerulus (LC4) or along three cardinal axes (A–P, D–V and L–M) of the glomerulus (for LPLC2). Only female flies were used for analysis to be consistent with the available connectome data, which are in a female fly. Analyser was not blinded to genotype due to characteristic identifiable morphology of DNp02, DNp11 and DNp04, as well as clear anatomical positions of anterior–posterior LC4 and LPLC2. Connectomics analysis FAFB connectome reconstruction analysis We annotated the FAFB serial section transmission EM volume using the CATMAID software to determine the chemical synaptic connectivity between the LC4 neurons and four DNs of interest, DNp02, DNp11, GF and DNp04. As a starting point, we used previously traced skeletons for LC4 neurons. To start tracing the DNs, we used morphological cues from confocal fluorescence imaging in distinct strategies to locate a starting point for tracing each DN. For DNp02, confocal microscopy stacks suggested that the somata neurite travels close to the path of the GF somata neurite. We found DNp02 by locating its neurite within a shared soma tract, which, along with several other neurites, appears encased in a dark sheath. DNp04 was located when tracing the LC4 neurons. The skeleton was then traced out and linked to the same soma tract as DNp02 and GF. DNp11 was located by searching for candidate DNs that cross the midline dorsal of the oesophagus. 
From each starting node, the full skeleton was traced and compared to the confocal image stacks for confirmation of cell type identity. To determine the chemical synaptic connectivity, we searched for four criteria: T-bars, presynaptic vesicles, synaptic clefts and postsynaptic densities. If a potential synapse possessed two out of four criteria, it was labelled as a synapse. We focused our efforts on LC4 (presynaptic) and DNp02, DNp11, GF and DNp04 (postsynaptic) synapses to gain a representative view of the connectivity between LC4 and the DNs. Mapping the LC4 anatomical RF To model the real-world RFs of the LC4 population, we followed a previously established method25, and applied it to newly reconstructed LC4 neurons. We first mapped all 55 LC4 dendrites (FAFB volume) onto a layer of the lobula by fitting a second-order surface to all of the dendritic arbours. Each projected dendrite traced out a polygon that represented the field of view of the corresponding LC4 neuron. We modelled each LC4 as a 2D circular Gaussian on this surface. Its height was set to be unity, and its width was given by the radius of a circle that had the same area as the projected polygon. To map each LC4 neuron’s location (COM of the dendrite) onto eye coordinates, we used as reference points previously reconstructed Tm5 neurons25 from two medulla columns, which correspond to the centre of the eye and a dorsal position on the central meridian (the line that partitions the eye between anterior and posterior halves). To estimate an LC4-DN’s RF, we first multiplied each LC4 Gaussian’s height by the number of synaptic connections to that LC4-DN. We then summed all LC4 Gaussians to produce a 2D multi-Gaussian distribution, which was the LC4-DN’s RF. To estimate an LC4-DN’s response to a looming stimulus, we multiplied the LC4 Gaussian’s height by both the number of synaptic connections and the percentage of the LC4 RF that was covered by the stimulus at its maximum size (30°). For instance, if the stimulus overlapped with 40% of an LC4‘s RF, then that LC4 Gaussian’s effective height was the number of connections times 0.4. Finally, all LC4 contributions were summed to produce the estimated response of the LC4-DN to the looming stimulus. Note that LC4s that did not overlap at all with a stimulus contributed nothing to the DN’s response. Hemibrain connectome reconstruction analysis Volumetric data of neurons and neuropils, as well as connectivity data and synapse locations, were obtained from the neuPrint (hemibrain v1.1) database, (https://neuprint.janelia.org/) and have been processed with the natverse package51 for R (v4.0.3) using custom scripts. All coordinates in these datasets are based on the original voxel resolution of 8 nm. k-means clustering of individual neurons within VPN cell type populations For each VPN cell type, a matrix of synaptic connections between individual VPN neurons and their postsynaptic partners was constructed using the neuprintR package. Postsynaptic partners forming fewer than 50 total synapses with the entire VPN cell type population were excluded (about 1 synapse per individual VPN on average; we reasoned that this threshold would reflect the limit of EM data reconstruction error rate). Synaptic connections within the population of VPN cell type were also removed (for example, LC4 to LC4 synapses). The resulting matrix was scaled such that the variables (individual postsynaptic partners) had unit variance across the observations (individual VPN cells in the population). 
Principal component analysis was carried out on the scaled matrix. Up to ten principal components were used for k-means clustering on the individual VPNs (the number of PCs was determined on the basis of the drop in the eigenvalues in the scree plots for each VPN type). A value of k was subsequently determined from the corresponding scree plots by the drop in the within-cluster sum of squared distance (example in Extended Data Fig. 6a).

Correlation in synaptic connectivity

Matrices of correlation in synaptic connectivity (Fig. 4d,f) were generated using the pairwise Spearman’s correlation coefficient of the 300 unique pairs derived from the top 25 postsynaptic partners (based on the total number of synapses and excluding connections with the same VPN cell type) of LC4 and LPLC2, ordered using hierarchical clustering. Each entry evaluates the monotonic relationship between a pair of the synaptic connectivity gradients. For each pair, the correlation coefficient was calculated using the vectors containing the number of synapses between the selected postsynaptic partners and each individual VPN cell within the population (example in Fig. 2h).

Weighted dendritic centroids

To evaluate the distances between weighted dendritic map centroids for each postsynaptic partner of LC4 and LPLC2, we identified the endpoints of the dendrites innervating the lobula for each individual VPN cell. These were isolated using cut-planes that were manually selected to optimally separate the lobula region (Extended Data Fig. 9a–d). We then evaluated the centroid of the selected endpoints by calculating their spatial average. We repeated these steps for all VPN cells within a population (71 for LC4 and 85 for LPLC2). The resulting 3D centroids were then projected onto the cut-plane. The outlines of the lobula were obtained by evaluating the convex hull of the projections of all the selected endpoints for all of the cells of the examined VPN. To identify a weighted innervation centroid for a given postsynaptic partner, we calculated the overall weighted median using the number of synapses associated with each centroid as weights. We then identified the top anticorrelated pairs of postsynaptic partners by selecting those for which the Spearman’s correlation coefficient is below a certain threshold that was determined by evaluating, for each VPN, the value that optimizes the correlation between the dendritic map and the synaptic connectivity correlation. For each one of these top pairs, we estimated the perpendicular to the line connecting the corresponding weighted median centroids. These lines were combined using the median operator to reduce the influence of potential outliers. This resulted in a single line identifying the optimal unbiased separator of the most anticorrelated pairs (median separation line in Fig. 4e,g). The distance between their projections onto the line perpendicular to the optimal separator (projection line in Fig. 4e,g) was used as a final metric to generate the matrix and was calculated for each pair of postsynaptic partners (Extended Data Fig. 9g,h). The projection line for LC4 was almost parallel to the A–P axis of the lobula (Fig. 4e), and slightly deviated from that for LPLC2 owing to the dual nature of synaptic gradients in this cell type (both A–P and D–V).
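The clustering workflow described in the k-means subsection above (scale the cell-by-partner synapse matrix, keep the leading principal components, then cluster) can be sketched as follows. The study ran this analysis in R with natverse/neuprintr; this Python version uses a random placeholder matrix and placeholder choices for the number of components and k, so it only illustrates the logic, not the actual data or parameters.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder matrix: rows = individual VPN cells, columns = postsynaptic partners,
# entries = synapse counts (a real matrix would come from a connectome query).
rng = np.random.default_rng(0)
synapse_counts = rng.poisson(5, size=(71, 40)).astype(float)

# Drop weakly connected partners (analogous to the <50 total-synapse threshold above).
keep = synapse_counts.sum(axis=0) >= 50
scaled = StandardScaler().fit_transform(synapse_counts[:, keep])  # unit variance per partner

# Up to ten PCs; in the study the number is read off the scree plot.
pcs = PCA(n_components=10).fit_transform(scaled)

# k is likewise chosen from the within-cluster sum-of-squares elbow; 3 is a placeholder.
cluster_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
```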
Spatial distribution of postsynaptic sites in optic glomeruli

A similar approach based on the estimation of an unbiased separator was used to evaluate the correlation between the centroids of postsynaptic sites for postsynaptic partners of VPNs. To estimate this separator, we started by isolating all postsynaptic sites within the glomerulus using a cut-plane. We then selected the top anticorrelated pairs of postsynaptic partners, in a manner similar to how we analysed the dendritic map centroids. For each pair, we split the postsynaptic sites into two different classes depending on the postsynaptic partner they belonged to and used a support vector machine with a linear kernel to evaluate the optimal separating plane. We then computed the median of these planes. This resulted in a single plane identifying the unbiased optimal separator of the most anticorrelated pairs (median separation plane in Fig. 6b,j). We then projected the postsynaptic sites of each postsynaptic partner onto the line perpendicular to the optimal separator and calculated the distance between the median of the respective projections. The distance matrices for a given VPN cell type were obtained by calculating the pairwise distances between each of the 300 pairs of postsynaptic partners of LC4 and LPLC2 (Extended Data Fig. 9i,j). For selected pairs of postsynaptic neurons, the distributions of postsynaptic site projections were compared using the two-sample Kolmogorov–Smirnov test (Fig. 6b,j).

Assessment of topographic mapping in VPN optic glomeruli

Skeletons of individual neurons within each VPN cell type were selected manually on the basis of A–P and D–V topographic location of their dendrites and/or the pattern of k-means clustering of the dendritic maps (15 cells per topographic domain, unless stated otherwise in the figure legends). Groups of neurons with dendrites in different topographic domains were differentially coloured. Axonal processes of the corresponding neurons were traced in the optic glomerulus and visually examined for traces of spatially ordered organization. LC10 neurons were excluded from the analysis owing to previously reported A–P axonal topography6,13. LC6 neurons were excluded owing to previous extensive analysis25 indicating the presence of coarse glomerular retinotopy inaccessible through visual examination.

Statistical analysis

All statistical analyses were carried out in RStudio 1.4.1103, MATLAB or Prism 9.2.0 software (GraphPad). NS: P > 0.05, *P < 0.05, **P < 0.01, ***P < 0.001 and ****P < 0.0001 for all figures where applicable. Statistical tests for Figs. 1e and 3e,h and Extended Data Figs. 1, 2, 4 and 12 are described in Supplementary Table 1. In all box plots (Fig. 6 and Extended Data Fig. 11), the solid line depicts the median; the upper and lower bounds of the box depict the third and first quartiles of the data spread, respectively. Whiskers indicate minimum and maximum values. All other statistical tests, number of replicates, statistical significance levels and other elements of statistical analysis are reported in the corresponding section of the Methods, along with the associated results and/or in the corresponding figure legends. No data were excluded from the analysis except as noted for the behaviour experiments (see the section in the Methods entitled Behavioural data analysis). All measurements were taken from distinct samples.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
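Looking back at the 'Spatial distribution of postsynaptic sites in optic glomeruli' subsection above, its core step (fit a linear separator between the synapse clouds of two partners, then compare the medians of their projections onto the separator's normal) can be sketched as below. The 3D point clouds here are synthetic stand-ins, not connectome data, and the single-pair example omits the median-over-planes step used for the full pairwise matrices.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic postsynaptic-site clouds for two postsynaptic partners (x, y, z, arbitrary units).
sites_a = rng.normal(loc=(0.0, 0.0, 0.0), scale=1.0, size=(200, 3))
sites_b = rng.normal(loc=(2.0, 0.5, 0.0), scale=1.0, size=(200, 3))

points = np.vstack([sites_a, sites_b])
labels = np.r_[np.zeros(len(sites_a)), np.ones(len(sites_b))]

# A linear-kernel SVM gives the separating plane; its coefficient vector is the plane normal.
plane = SVC(kernel="linear").fit(points, labels)
normal = plane.coef_[0] / np.linalg.norm(plane.coef_[0])

# Project each partner's sites onto the normal and compare the medians,
# analogous to the pairwise distance metric described above.
separation = abs(np.median(sites_a @ normal) - np.median(sites_b @ normal))
print(separation)
```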
# Re: MF hackery (arrow kit)

• To: math-font-discuss@cogs.susx.ac.uk
• Subject: Re: MF hackery (arrow kit)
• From: Matthias Clasen <clasen@pong.mathematik.uni-freiburg.de>
• Date: Mon, 3 Nov 1997 15:30:49 +0100

Ulrik Vieth wrote:

> > I have tried |zero_width|, but that did not correctly center the
> > glyph. Now I use an explicit |adjust_fit| to cancel the width.
>
> Should be fine, if it solves the problem. However, it would be
> interesting to find out why |zero_width| didn't work as expected.

If I interpreted the outcome of my short experiment with |zero_width| correctly, it lets the glyph bitmap hang out to the right. This is what you need for the \not glyph, but I want the negations in the arrow kit to be centered, since they are placed between two glyphs, not overlaid.

> >> * The gapped squiggly arrows produce some stray pixels within the
> >> area that should be erased. Anyone knows how to fix this???
>
> > Instead of |unfill|ing a box, I now calculate the intersection point
> > and draw a shorter spline. This eliminates the stray pixels and gives
> > correctly rounded endings.
>
> This sounds like a better approach. Too bad that Metafont doesn't
> have MetaPost's |cutafter| and |cutbefore| macros.

What are these doing? Their names seem to indicate that they are automating what I have done manually.

> > This turned out to be a bit tricky. We cannot guarantee uniform
> > spacing between the dots without sacrificing device-independence,
> > so I increased the space between the dots a bit, making the
> > nonuniformity much less prominent. This is only a problem for
> > low resolutions, it was quite noticeable at 300dpi, but invisible
> > at 900dpi.
>
> OK, as I see that you've used 6 dots per 12u, which makes 9 dots
> per 18u, of which 3 dots would be deleted in the gapped versions.
> Sounds reasonable. Remaining question: Is it the right approach
> to have dotted arrows rather than dashed ones in the first place?

Yes, I think that is how I derived the original widths and dot separations. About the dash/dot question: IIRC, I have taken them from Alan Jeffrey's paper about requirements for arrows. I don't know why he didn't have dashed arrows.

> > I was surprised by the difference between the mf programs for the
> > brackets and floors in punct.mf and symbol.mf. While the brackets
> > are very sophisticated, the floors are almost trivial. Is this
> > because the brackets are designed for text fonts, while the floors
> > are not?
>
> I suppose so. Maybe we should leave the text brackets alone and
> design new math brackets based on the design of floors/ceilings.

Good idea. Should I give them rounded corners (like the floors/ceilings) or should I change the floors/ceilings to have sharp corners?

> P.S. At the moment I have a couple of projects in the pipeline,
> none of which is ready for distribution yet, but the following
> is upcoming. Please don't rush the next release.

On my list of changes for the next version are

* Reshuffling of the extensible fonts to remove the extensible recipes from the variable area (an idea found in the archives)
* Made `all bigops should have small counterparts, but not necessarily vice versa' true (also from the archives).
* Most new bigops in MX2 now have glyphs in the cm version (their design is improvable though).
* Various bugs fixed.

None of these is really urgent, so I'll wait for your stuff.

Regards, Matthias

PS Yesterday evening I noticed that the xta family doesn't pick up any italic glyphs from psyro.
That is another point to fix before the next version.

PPS Now that Michael Downes announced the alpha version of `breqn.sty', should we try to make `newmath.sty' compatible with it? From my short look at breqn.sty, I think we would just need to provide a command \DeclareMathCompound and use it for everything that is not a simple \mathchardef. Another point I want to investigate in the near future is how much of AMSLaTeX math works unchanged (from experiments I did long ago, I remember that \sqrt was problematic, since it assumes OMS/OMX encoding).
# Intersolar: Most of U.S. Solar Market to Reach Grid Parity by 2015

Few ideas drive as much excitement in the solar industry as grid parity, the point at which solar systems — or any other renewable source of energy — cost the same as producing electricity from conventional sources like fossil fuels.

Travis Bradford, founder of The Prometheus Institute, on Monday said that the research group forecasts that two-thirds of the U.S. market (by electricity sold) using photovoltaic systems will be at grid parity by 2015. “The U.S. will be an extraordinary market in a few years,” he said while giving a morning keynote address at the Intersolar conference in San Francisco. The figure includes federal tax incentives and assumes that electricity rates will rise on average 1 percent per year, a conservative assumption, according to Bradford.

Solar systems can produce electricity at or below grid prices in about 10 percent of the U.S. market today, Bradford said. That number will rise to two-thirds of the U.S. market because of the fast decline in the cost of modules and other system components like racks. Bradford said commercial solar systems would reach about $2 to $3 per watt installed and residential about $4 per watt installed by 2015, down from more than $6 per watt today.

Bradford said the United States “is entering a period” when it could become the largest market for and producer of solar modules. The rapid advance toward grid parity and government incentives like tax credits and renewable portfolio standards are the main drivers behind this potential shift. “I don’t think a lot of people believe this yet, but that doesn’t mean it’s not true,” Bradford said.

Image courtesy of NREL.
mne.channels.read_dig_montage(hsp=None, hpi=None, elp=None, point_names=None, unit='mm', transform=True, dev_head_t=False)

Read digitization data from a file and generate a DigMontage.

Parameters:

hsp : None | str | array, shape (n_points, 3)
    If str, this corresponds to the filename of the headshape points. This is typically used with the Polhemus FastSCAN system. If numpy.array, this corresponds to an array of positions of the headshape points in 3d. These points are in the native digitizer space.
hpi : None | str | array, shape (n_hpi, 3)
    If str, this corresponds to the filename of Head Position Indicator (HPI) points. If numpy.array, this corresponds to an array of HPI points. These points are in device space.
elp : None | str | array, shape (n_fids + n_hpi, 3)
    If str, this corresponds to the filename of electrode position points. This is typically used with the Polhemus FastSCAN system. Fiducials should be listed first: nasion, left periauricular point, right periauricular point, then the points corresponding to the HPI. These points are in the native digitizer space. If numpy.array, this corresponds to an array of fids + HPI points.
point_names : None | list
    If list, this corresponds to a list of point names. This must be specified if elp is defined.
unit : 'm' | 'cm' | 'mm'
    Unit of the input file. If not 'm', coordinates will be rescaled to 'm'. Default is 'mm'. This is applied only for hsp and elp files.
transform : bool
    If True, points will be transformed to Neuromag space. The fiducials 'nasion', 'lpa' and 'rpa' must be specified in the montage file. Useful for points captured using Polhemus FastSCAN. Default is True.
dev_head_t : bool
    If True, a Dev-to-Head transformation matrix will be added to the montage. To get a proper dev_head_t, the hpi and the elp points must be in the same order. If False, an identity matrix will be added to the montage. Default is False.

Returns:

montage : instance of DigMontage
    The digitizer montage.

Notes

All digitized points will be transformed to head-based coordinate system if transform is True and fiducials are present.

New in version 0.9.0.
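A typical call might look like the sketch below. The file names are hypothetical Polhemus FastSCAN-style exports, and the assumed set of four HPI coils is only an example; the point names must match whatever the elp file actually contains, in the documented order (fiducials first, then HPI points).

```python
from mne.channels import read_dig_montage

# Hypothetical digitization files exported from a Polhemus FastSCAN session.
montage = read_dig_montage(
    hsp='subject01_headshape.hsp',        # head-shape points (digitizer space)
    hpi='subject01_hpi.txt',              # HPI coil positions (device space)
    elp='subject01_fiducials_hpi.elp',    # nasion, LPA, RPA, then the HPI points
    point_names=['nasion', 'lpa', 'rpa',
                 'hpi1', 'hpi2', 'hpi3', 'hpi4'],  # must match the elp file order
    unit='mm',            # input coordinates are rescaled to metres
    transform=True,       # move points into Neuromag head space using the fiducials
    dev_head_t=True,      # also compute the device-to-head transform from the HPI points
)
```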
# Problems converging using custom gamma2 distribution

Wow, the speedups look quite nice, especially with 100 iterations! Tagging @wds15, I am sure he is interested.

Eek! Exciting! A couple questions:

1. seed is set to 54647 at the beginning of the vignette, and the initial models are run without specifying a seed. Then the benchmark_threading function calls update without a seed for the scaling_model, followed by another update with seed=1234 in the run_benchmark function. Are all these right? This might explain the weird dips in the beginning of the graphs.
2. The same for iter in the update for the scaling_model.
3. In the first pic, were the chunks run serially on the one core?

Good point about the seeds. @wds15 set the seeds so perhaps he knows more.

On second thought, the seed and iter look fine. If that's the latency in Amdahl's law, then with 8 cores there's about an 87% improvement. Here it is with increased reps from 20 (n=3240) to 200 (n=32400):

Very nice to see reduce_sum in brms show its utility. The speedup which you see with more chunking on a single core is possible due to better CPU cache use when working on smaller parts. I think it's usually a good thing to increase the number of chunks until you see a slowdown. BTW, your machine has how many cores? 32? Are these all physical cores?

Did you get that from eyeballing the Runtime with Varying Cores plot? To clarify, the slowdown plot runs them sequentially to gauge the effect of chunking. And chunking is the number of partial sums? So you're saying to increase the number of partitions into partial sums until you see a slowdown when running sequentially on one core, and this helps gauge the number of within-chain threads? Why does one chunk have slowdown? (It's not completely clear to me how to read this plot.) Also, should we expect the pattern moving from 50 to 100 iter to continue when moving to 2000? What about when max_treedepth is increased? In my last plot, I see that for grainsize 450, speedup stalls for 2, 4, and 8 cores at 100 iters but not with iter 50. And in all plots, the model maxes out treedepth of 10 and I'll be needing a depth of 17. I'm running an AWS EC2 C4.8xlarge. There are 36 vCPUs. I did eyeball your speed up for 8 cores, which is about 7x.

This helps gauge the grainsize. I thought that's extensively explained in the vignette, no? It should converge to a stable, non-changing curve. Deeper trees are more leapfrog steps, which is just like increasing the number of iterations in this context, from my understanding. I had very mixed experience with these virtual CPUs… you do not know if these are really on just one physical machine.

It explained the overhead of chunking, but it also said that 1 chunk would be simply using all the data, which to me is just the original problem without chunking, so how would it be slower than itself. It was just vague enough to confuse me lol. hmm… were those experiences bad? Is there something in particular I should be aware of? I doubt they are on the same one physical machine. I haven't had issues with parallel cloud computing before, but they weren't with AWS. It is a Compute Optimized instance:

Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors.
Instances belonging to this family are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications. I was afraid that the text was not clear… I will need to review it then. The point is: 1 chunk is the original problem, yes. For each additional chunk we have to create copies of all the parameters in memory. Thus, for each additional chunk we end up having one more copy in memory (and actually even more work is needed for more chunks). As a result, calculating 2 chunks is more work than calculating just 1 chunk, as there is overhead which increases with more chunks. This translates into a slowdown when constraining to just 1 core. However, as you have observed a speedup with a few chunks, it's clear that more things are going on. This is due to better CPU cache use with smaller chunks… so it's complex. And after all, the key thing to take away is that this game is not trivial, and one can be very happy when one has a problem like yours which scales not so badly with up to 8 cores. Virtual CPUs were bad in my experience, but then I don't know if I had a "compute-tailored node". A few more questions: 1. In the paper describing the custom gamma2 distribution, @paul.buerkner chooses to fit the variance instead of the shape. I don't understand how this is different (or better) than fitting the shape. I get that the variance is now independent of the mean, but so what… the shape was already independent, and wouldn't the shape fit to the data the same way? 2. And since shape is on a smaller scale than v, wouldn't it be easier to model (sample)? 3. Will priors affect benchmarking? I think modelling variance in this case is simply that: a modelling choice. Whether it is a good choice for your data is hard to guess, though a good use of posterior predictive checks could tell you if one of those has problems for your data (e.g. looking at the "stat_grouped" check with stats like sd, min, max over various subgroups of your data could give some clues). The difference between fitting shape and fitting variance would manifest if you have sets of observations which differ in predictors for the mean, but have the same predictors for shape/variance. This is easiest to see if we model only the mean. Changing the mean (\mu) and holding the shape (\alpha) constant means the variance is (if my math is correct) \sigma^2 = \frac{\mu^2}{\alpha}. Obviously, if we hold \sigma^2 constant, the implied \alpha will change with \mu. I can imagine cases where one is preferred and cases where the other is preferred. Generally yes, this can happen - poorly chosen priors will affect your model and your inferences. However, if your data contain enough information about all the model parameters, your inferences will be the same under a broad range of priors. If you have the time, it is IMHO good practice to check if your inferences change with a bit wider and a bit narrower priors. If your inferences do not change, you are quite likely OK. Does that make sense? How would this imply one over the other and not problems with priors? I don't understand. I can't see the difference… could you share those examples? Wait… are you saying the difference is whether we want the variance to change with the mean or whether we want the shape to change with the mean?
If we model the variance, then we want the shape to change with the mean, and if we model the shape, then we want the variance to change with the mean? … Nope, still don't get it. In my problem, it is possible to have the same mean but different variances depending on which group you're in. For example, with low noise and dense data, I expect the standard interpolator to perform reasonably well, with RMSEs that are small and close together. However, with a different, worse-performing interp, I could get the same mean but a larger variance. As the noise and sparsity increase, I expect RMSE to increase. If the interp is worse-performing, I expect the RMSE to increase. As the RMSE increases, I expect there to be less stability and hence larger variances. But I expect this variance-to-mean relationship, or shape, to be different for each size, noise, interp group. Does that make sense? I feel like I should model shape, but I'm not sure and don't know if it's worth the time and effort to wait for custom distributions with within-chain threading. OK. I initially ran the benchmarking with very broad priors as the initial guess, and I was not sure if I should re-run the benchmarking with narrower priors to make sure I didn't choose a terrible grainsize. Understanding a model is always a bit of detective work, so no hard rules. But let's say you find that the variability (sd) is not well modelled for a subgroup, while the mean is modelled well. One possibility is that the relation between mean and shape is wrong. If you use the mean + variance version and see that the sd is lower than it should be in cases where the mean is large and higher than it should be when the mean is low, then it suggests that modelling shape instead might help. That sort of thing. Generally, posterior predictive checks (unlike prior predictive checks) can uncover a broad range of problems with the model, not just bad priors. But going from "something is bad with the model" to "this specific thing is bad" can be challenging. Yes, that's what I meant. I don't have very specific examples, but I think that when the gamma distribution is used because I actually assume the process to be roughly a sum of exponentials (say, time until failure of a multi-component system where multiple components have to break to cause a failure of the whole), then modelling shape seems like a good idea, as it has a direct interpretation. If I use gamma just because I have a positive outcome and the lognormal has too large a variance, then maybe modelling variance is more natural. This is IMHO possible with both parametrizations. What differs is exactly how the mean/variance relationship is allowed to behave. But how does this affect sampling? Why would modelling variance instead of shape allow for better sampling when they are all related? If all your predictors are discrete (booleans/factors) and you use the same predictors for both mean and shape/variance, then the difference should be negligible - you allow each subgroup with the same predictors to have its own mean and shape/variance (although priors could also be affecting this to a lesser extent - what is a relatively wide prior for shape might turn out to be a narrow prior for variance, and vice versa). In any other scenario there might (and might not) be a substantial difference in both model fit to the data and the inferences you will make. Why is that important?
The short answer would be: it is a different model (mathematically), and different models can have different properties, including how well they converge on a given dataset. To be a bit more specific, Andrew has written a lot about the "folk theorem", which is the observation that often a model that is hard to compute is also a bad model for the data. So if one of the parametrizations fits your data while the other "fights" your data, this may (and may not) manifest as differences in speed of convergence (or whether the model converges at all). Does that make sense? I guess… I have only factors. The mean is predicted by g_shape, g_size, g_noise, g_interps. Shape/variance is predicted by all but g_shape, because they are similar enough. Then, from what you say, there should be a negligible difference, but so far in my runs I see that for the large dataset with 200 reps, I was able to converge using Gamma2 in 3 days, while with Gamma I am going on a week with only about 60% done on each chain. So it's having a much more difficult time converging/sampling when modeling shape. I have set the appropriate priors for shape and for variance. Priors for shape are narrower, N(0,2) with shape_intercept N(8,2), while priors for variance are N(0,3) with variance_intercept N(3,1). I still don't get how this variance parameterization is having such an effect on sampling. I guess I need a more theoretical understanding of the mechanisms taking place that are affected by parameterization. I'll look more into it and then get back with more questions. I honestly don't know exactly what is happening, just that something is weird. But one thing that should help you figure it out would be to reduce the dataset - e.g. choose some combination of g_size, g_noise, g_interps, select only this subset, and fit a simpler model y ~ g_shape + (1 | g_rep), v ~ 1. This should let you iterate faster and also make it clear whether the problems are with the shape/variance parametrization… (you can obviously try multiple such subsets to see if things are different in other parts of the model). Some observations: 1. It was easy to set the priors and run log(y) with a Gaussian distribution, but setting the priors for a lognormal distribution has been difficult, as determined by the pp_check looking great for the Gaussian but having too long a tail for the lognormal. I tried prior predictive checks but got nowhere. I read from another post, according to Paul, that prior predictive checks are not possible yet. I haven't been able to reduce the tail of the lognormal, so either the failures of the Gaussian are hidden in the pp_check or I'm not understanding how to get the two to match up; but more than likely, it seems that when there is not enough data, the priors are more important for the lognormal, which is not something I expected. 2. When using the Gamma, I have low Bulk and Tail ESS for only the Intercept (about 200-300) with an Rhat of 1.01. From shinystan, I get warnings of params with < 10% n_eff and high MCSE. Increasing the number of iterations doesn't seem to do much to increase ESS. However, for the lognormal, while the Rhat is about the same, the ESS were better. Gamma2, while it finished faster, had terribly low ESS (all much less than 100). So maybe this parameterization is not the way to go after all. 3. I did not model 1|r|g_rep for both y and shape, even though this is how it was done in all the brms docs. This is because that model would be too complex, overparameterized.
However, I don't believe that not modeling 1|r|g_rep for shape would cause the low ESS for the intercept. When I tried modeling it, it was overparameterized; the model converged quickly with high ESS for all params, but the Est.Error was enormous for all of them (about 2-3 as opposed to 0.2). 4. Monotonic effects: I read the paper and vignette on this. Monotonic effects force this monotonic behavior, and wrongly assuming it has the same effect as any other wrong modeling assumption. Failing to use monotonic effects affects ESS, although this is probably not what I'm seeing. I believe my g_size and g_noise should be monotonic, but that is not what I'm seeing when modeling them as unordered factors. This is similar to Figure 12 in the paper. But I didn't understand the explanation very well.
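To make the mean/shape versus mean/variance distinction discussed in this thread concrete, here is a minimal numeric sketch (plain Python, not the brms model itself; the numbers are made up for illustration). With the mean and shape held fixed, the implied variance grows with the mean as \sigma^2 = \mu^2 / \alpha; with the mean and variance held fixed, the implied shape grows with the mean instead:

```python
import numpy as np

# Hypothetical means for three subgroups that share the same shape/variance predictors.
mu = np.array([1.0, 2.0, 4.0])

# (a) mean + shape parameterization: alpha fixed, implied variance changes with mu
alpha = 10.0
implied_var = mu**2 / alpha          # sigma^2 = mu^2 / alpha -> [0.1, 0.4, 1.6]

# (b) mean + variance parameterization: sigma^2 fixed, implied shape changes with mu
sigma2 = 0.4
implied_shape = mu**2 / sigma2       # alpha = mu^2 / sigma^2 -> [2.5, 10.0, 40.0]

print("implied variance under fixed shape:", implied_var)
print("implied shape under fixed variance:", implied_shape)
```

The two parameterizations only coincide when every subgroup has its own free mean and shape/variance parameters, which is the "all predictors are discrete factors" case described above.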
auto_math_text
web
# Solar Ecology Portfolio

Because solar energy is photovoltaics, and so much more…

## What is Solar Ecology?

Solar ecology is the systems framework of discovery and design associated with solar energy conversion writ large, coupled with the dynamic context of locale (e.g. a regime of place and time), the affected stakeholders, and the diverse technologies or ecosystems providing preferred solar goods and services (i.e. solar utility). Solar photovoltaics are established global commodities, and solar electricity is being planned at the gigawatt scale internationally, a phenomenal success story. This growth has coincided with a rapid maturation of the solar field, creating a grand opportunity to explore other solar goods and services in the wake of photovoltaic successes. The work of solar ecology discusses solar in transition, moving from the current wave of photovoltaic growth toward major global impacts in the next century. Photovoltaic goods are identified as a bellwether for future opportunities that include integrative "solar" expressed across three existing cultures of design. Solar utility is the vehicle aiding project development in solar ecology, describing stakeholders' preference for solar goods and services fit within the dynamic perspective of the locale. The broader field of solar ecology is described as an emerging transdisciplinary systems field of solar energy within the context of the environment, society and technology, connecting science with design, business, lifestyle, health, and well-being. A solar ecology framework will contribute to a shared wave of coupled discoveries, inventions, and social change strongly influencing the food-energy-water nexus by 2100.

## Community Solar

We are working on a new space called Community Solar, addressing the new trends for differentiated models of shared solar deployment. This space is based on the work of Community Solar on State: a Living Laboratory Framework for Outreach, Education, and Research, funded by the Reinvention Fund of Penn State's Sustainability Institute. Penn State has developed a strong community interest and passion for solar energy research and development. The students, staff, faculty, and surrounding community deserve a path that enables the low-carbon future by growing photovoltaic (PV) installations within the framework of the Living Laboratory. We also wish to create an outreach and educational platform that will allow our community to proceed with a pilot community PV project, called a "solar garden", retain the best practices learned, and then move forward with enabling prospective solar projects as they evolve in the future.

## Solar ASE: the All-Seeing Eye

In collaboration with Dr. George Young (Penn State Dept. of Meteorology and Atmospheric Science) and our PhD student Mr. Vivek Srikrishnan, our team has confirmed the viability of a new form of solar resource monitoring technology: a 5x multipyranometer array trained with an Artificial Neural Network to extract the anisotropy of the solar resource (beam irradiance, diffuse sky irradiance, and ground diffuse irradiance components) for any oriented surface. The Solar ASE system, where "ASE" has been abbreviated from the All-Seeing Eye (yes, several of us are Tolkien fans…), greatly reduces measurement errors over standard empirical correlations used throughout the solar industry today, verified in Golden, CO by our team.
Existing empirical solar irradiance models that decouple the components of light will estimate the beam (direct) component of solar light with 20-30% error relative to true pyrheliometer-measured data. The tested ASE system offers estimates with <10% error. Under snowy conditions, existing empirical solar irradiance models are even worse, with 48% error, while the ASE still achieves 15%. The ASE system will ultimately be of value to society for solar photovoltaics, for agriculture such as grape productivity for wine, and for smart building controls. In addition, by substituting the ASE for pyrheliometer devices, we hope to reduce system costs by an order of magnitude, from $20,000-$30,000 per system to well below $1,000. All efforts have been made to embrace an open-source approach to developing the ASE system, in order to maximize the viral dispersion of such an important and transformative technology. Both undergraduate and graduate students have been involved in the next stage of deployment, supported by the 2015 PSIEE Seed Grant funds.
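As a rough illustration of the idea only (not the team's actual model, data, or training procedure), a five-sensor array feeding a small neural network that predicts the three irradiance components could be sketched as follows; the synthetic data, network size, and scikit-learn choice are all assumptions made for the example:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5 pyranometer readings per sample (W/m^2) ...
X = rng.uniform(0.0, 1000.0, size=(2000, 5))
# ... mapped to 3 irradiance components (beam, sky diffuse, ground diffuse)
# by an arbitrary linear rule plus noise, purely so the example runs end to end.
true_map = rng.uniform(0.0, 0.5, size=(5, 3))
y = X @ true_map + rng.normal(0.0, 5.0, size=(2000, 3))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network mapping the 5 readings to the 3 components.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("held-out R^2:", model.score(X_test, y_test))
```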
auto_math_text
web
# Enthalpy Of Combustion Of Methane

Standard enthalpy of combustion $$\left(\Delta H_{C}^{\circ}\right)$$ is the enthalpy change when 1 mole of a substance burns (combines vigorously with oxygen) under standard state conditions; it is sometimes called the "heat of combustion", and the process is also known simply as burning. For example, the enthalpy of combustion of ethanol, −1366.8 kJ/mol, is the amount of heat produced when one mole of ethanol burns completely. The chosen standard pressure is 1 atm.

Methane releases its chemical energy by undergoing hydrocarbon combustion; when we say that methane is combustible, it means that it is possible to burn it. Its formula is CH4, its molar mass is about 16.04 g/mol, and the combustion reaction for methane is CH4 + 2 O2 → CO2 + 2 H2O. The enthalpy of combustion of methane is about −891 kJ mol−1 when liquid water is formed and about −803 kJ mol−1 when water vapour is formed instead. Every mole of methane (16 g) therefore releases roughly 890 kJ of energy on burning, which is why natural gas is a good way to heat your home.

The enthalpy change for this reaction is measured by pressurizing a strong metal reaction vessel (called a bomb) with a mixture of methane and oxygen gas. About 4.2 J of energy are required to raise the temperature of 1 g of water by 1 °C, and once the heat capacity of the calorimeter has been determined it can be used to calculate (i) the molar internal energy of combustion, (ii) the molar enthalpy of combustion, and (iii) the molar enthalpy of formation of the substance burned (the original exercise used fumaric and maleic acids). For comparison, the ΔH for the solution process when solid sodium hydroxide dissolves in water is about −44 kJ/mol.

Another way to state Hess's Law is: if a chemical equation can be written as the sum of several other chemical equations, the enthalpy change of the first chemical equation equals the sum of the enthalpy changes of the others.

Worked question: the enthalpy of combustion of methane at 25 °C is 890 kJ mol−1. The heat liberated when 3.2 g of methane is burnt in air is (1) −890 kJ, (2) 178 kJ, (3) 445 kJ, (4) 278 kJ. Since 3.2 g is 0.2 mol, 0.2 × 890 kJ = 178 kJ of heat is liberated.

Bond dissociation energies: the bond dissociation energy (enthalpy change) for a bond A–B, which is broken through the reaction AB → A + B, is defined as the standard-state enthalpy change for the reaction at a specified temperature, here 298 K.

Enthalpy – combustion of alcohols. Aim: the purpose of this experiment is to determine the heats of combustion of a series of five primary alcohols, methanol to pentanol.
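The difference between the liquid-water and vapour-water figures quoted above is just the heat needed to vaporize the two moles of water produced (roughly 44 kJ per mole of water). Using approximate textbook values, the Hess's-law cycle is:

$$\mathrm{CH_4(g) + 2\,O_2(g) \rightarrow CO_2(g) + 2\,H_2O(l)} \qquad \Delta H^{\circ} \approx -890\ \mathrm{kJ\ mol^{-1}}$$

$$\mathrm{2\,H_2O(l) \rightarrow 2\,H_2O(g)} \qquad \Delta H^{\circ} \approx +88\ \mathrm{kJ\ mol^{-1}}$$

$$\mathrm{CH_4(g) + 2\,O_2(g) \rightarrow CO_2(g) + 2\,H_2O(g)} \qquad \Delta H^{\circ} \approx -802\ \mathrm{kJ\ mol^{-1}}$$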
This video from Frankly Chemistry shows the determination of the approximate enthalpy change of combustion of methane using average bond energy terms; this method gives an estimate of the reaction's enthalpy change. Exothermic reactions have negative enthalpy values (−ΔH), and fuels release heat on burning: the heat of combustion is the total amount of heat released when a fuel undergoes complete combustion with oxygen.

The standard enthalpy of formation refers to the enthalpy change when one mole of a compound is formed from its elements. For methane the formation reaction is C(s) + 2 H2(g) → CH4(g). The enthalpies of combustion of methane, graphite and dihydrogen at 298 K are −890.3 kJ mol−1, −393.5 kJ mol−1 and −285.8 kJ mol−1 respectively, and these three values can be combined by Hess's law to obtain the enthalpy of formation of methane.

Methane is a colorless, odorless gas that occurs abundantly in nature and as a product of certain human activities. Chemically, its combustion consists of a reaction between methane and the oxygen in the air; methane by itself cannot be made to burn effectively, it must first be mixed with air. On a mass basis, methane's heat of combustion is about 55 MJ/kg, and the value of natural gas is calculated from its Btu content.

Related symbols from thermochemical tables: Δ_trs H, enthalpy of phase transition; Δ_trs S, entropy of phase transition; Δ_c H° (gas), enthalpy of combustion of the gas at standard conditions; Δ_f H° (gas), enthalpy of formation of the gas at standard conditions; Δ_sub H, enthalpy of sublimation; Δ_vap H, enthalpy of vaporization; Δ_vap S, entropy of vaporization; ρ_c, critical density.

Enthalpy of combustion via calorimetry. Introduction: this experiment measures the enthalpy change when a system consisting of a known amount of a substance, in the presence of excess oxygen, is quantitatively reacted to form simple oxides, i.e. when the substance is burned. A typical exercise based on CH4 + 2 O2 → CO2 + 2 H2O is to assign the enthalpies of formation (with the correct sign) to each molecule and then answer: (a) write an equation for the combustion of methane; (b) what volume of natural gas, assumed to be pure methane, measured at STP, must be burnt to heat 1.00 kg of water from 15 °C to 85 °C? Assume that there are negligible heat losses to the surroundings.
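A minimal sketch of part (b), using the rounded values quoted in this article (4.18 J g−1 K−1 for water, about −890 kJ per mole of methane, and 22.4 L mol−1 at STP); these are textbook approximations, not precise data:

```python
# Heat needed to warm the water
mass_water_g = 1000.0          # 1.00 kg
c_water = 4.18                 # J / (g * K)
delta_T = 85.0 - 15.0          # K

q_joules = mass_water_g * c_water * delta_T
q_kj = q_joules / 1000.0       # ~292.6 kJ

# Moles of methane required, assuming ~890 kJ released per mole
moles_ch4 = q_kj / 890.0       # ~0.329 mol

# Volume at STP using the molar volume 22.4 L/mol
volume_L = moles_ch4 * 22.4    # ~7.4 L

print(f"q = {q_kj:.1f} kJ, n(CH4) = {moles_ch4:.3f} mol, V = {volume_L:.1f} L at STP")
```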
Enthalpy change of combustion of methane. A typical computer application calculates the enthalpy change of combustion of methane at standard conditions, given the reaction CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g), with the enthalpies of methane, oxygen, carbon dioxide and water computed from empirical correlations (for example, the ThermophysicalData:-Chemicals package). A methane combustion reaction releases about 891 kilojoules of heat energy per mole of methane; for comparison, the molar enthalpy of combustion of propane is about −2043 kJ/mol, that of butane about −2657 kJ/mol, and that of hexane (C6H14) about −4163 kJ/mol. The amount of methane that undergoes combustion can be determined from the pressure of the methane and the ideal gas law.

Complete combustion of a hydrocarbon such as methane produces carbon dioxide and water; incomplete combustion means burning in a lack of air (not enough oxygen), and partial oxidation takes place in a limited air supply. The change in enthalpy of a reaction equals the sum of the enthalpies of formation of the products minus the sum of the enthalpies of formation of the reactants, so one procedure is: look up ΔHf (the heat of formation) for methane, carbon dioxide, and water (l), then subtract the enthalpy of methane from the sum of the enthalpies of CO2 and H2O. Enthalpy is a state function; the enthalpy change of a reaction is independent of its path and depends only on the initial and final states of the reactants and products. Enthalpy can be written as H = U + pV, where U is the internal energy, p is the pressure, and V is the volume of the system, and the heat flow for a reaction at constant pressure, q_p, is the enthalpy change ΔH. For the combustion of methane to liquid water, Δn_g = 1 − 3 = −2, which relates the enthalpy of combustion to the internal energy of combustion measured in a bomb. Enthalpies of combustion are usually measured in a Berthelot bomb calorimeter, while an adiabatic flame calorimeter measures the ΔT resulting from combustion of a substance in O2(g). The heating value (or energy value or calorific value) of a substance, usually a fuel or food, is the amount of heat released during the combustion of a specified amount of it; the enthalpy of combustion (HHV, higher heating value) is the difference between the reactants' enthalpy and the combustion products' enthalpy at the reference temperature of 298 K.

In the bond-energy picture, four C–H bonds must be broken in the combustion of methane, together with two O=O bonds, while two new C=O bonds are made when the carbon from methane ends up in a CO2 molecule and four new O–H bonds are made in the water. Typical average bond energies in kJ/mol are: C–H 414, O–O 142, O=O 498, C=O 736, C–O 360, O–H 464 (another common table lists C–H +410, O–H +465, O=O +500; table data are obtained from sources such as the CRC Handbook of Chemistry and Physics). Enthalpy of atomisation, Δ_a H°, is the change in enthalpy when one mole of bonds is completely broken to obtain atoms in the gas phase. A related exercise: the enthalpy of formation of ammonia is −46 kJ mol−1 and the bond dissociation enthalpies of nitrogen gas and hydrogen gas are +945 kJ mol−1 and +436 kJ mol−1 respectively; calculate the average bond enthalpy of an N–H bond. The standard enthalpy change of combustion of ammonia, expressed per mole of ammonia and with condensation of the water formed, is about −382 kJ mol−1.

Methane (natural gas) is mainly used to generate industrial and utility electric power, produce industrial process steam and heat, and heat residential and commercial space; natural gas consists of a high percentage of methane. A furnace that provides heat by burning methane gas must have the correct mixture of air and fuel. A drawback of liquefied natural gas is the unavoidable boil-off loss that comes with keeping the tank cold. In engine combustion, the initial mixture-preparation phase occurs after the fuel leaves the injector nozzle, up to the time of ignition, when a standing, fuel-rich premixed flame is established; Peters four-step chemistry is a systematically reduced four-step mechanism which describes the burning of methane, and high-accuracy thermochemical values for these species are tabulated in version 1.118 of the ATcT Thermochemical Network described by Ruscic et al. In combustion-air calculations, air is commonly represented as O2 + 3.76 N2, so the stoichiometric coefficient on (O2 + 3.76 N2) gives the number of moles of O2 in the combustion air per mole of fuel; the large quantity of nitrogen diluent substantially reduces the mole fractions of the combustion products from the values they would have in its absence, and for gaseous fuels the air-fuel ratio is usually expressed in m3/m3.

Combustion, an act or instance of burning, is an oxidation reaction that produces heat and is therefore always exothermic; the energy produced comes from the energy stored in the fuel's chemical bonds, the products having less energy stored in their bonds than the reactants. Consider, for comparison, the combustion of wood, or the classroom experiment in which a measured mass of an alcohol is burned in a spirit lamp and the heat energy released is transferred to a calorimeter containing water, so that the enthalpy changes of combustion of a series of alcohols can be compared. A related laboratory question: write an equation to represent the reaction that occurs during the measurement of the enthalpy change of combustion of cyclohexanol.

The standard enthalpy of combustion is the enthalpy change when one mole of a reactant completely burns in excess oxygen under standard thermodynamic conditions (although experimental values are usually obtained under different conditions and subsequently adjusted). A specific example can be made from our old familiar combustion of methane reaction. Example: work out the enthalpy change of combustion of methane using bond energies.
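A worked estimate using the average bond energies listed above (bond-energy values vary slightly between tables, so this is an approximation rather than the calorimetric value):

$$\Delta H \approx \sum E(\text{bonds broken}) - \sum E(\text{bonds formed}) = [4(414) + 2(498)] - [2(736) + 4(464)] = 2652 - 3328 = -676\ \mathrm{kJ\ mol^{-1}}$$

Using a larger C=O value of about 800 kJ/mol, as many tables do for CO2, brings the estimate much closer to the measured −802 kJ/mol for gaseous water; the method only ever gives an approximation because average bond energies are averaged over many different compounds.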
Hess's law also lets you work backwards from a measured heat of combustion. For the combustion of benzene, for example, the measured reaction enthalpy (−6535 kJ for 2 C6H6 + 15 O2 → 12 CO2 + 6 H2O) can be rearranged as 12 ΔHf(CO2) = ΔH_reaction − 6 ΔHf(H2O) + 2 ΔHf(C6H6), so any one formation enthalpy can be recovered from the others. Reactions that liberate heat are termed exothermic and reactions that absorb heat are termed endothermic; when we consider systems with no chemical change, the enthalpy of formation is not an issue. In the symbol ΔH_c⦵, the Δ means a change, H means enthalpy, c indicates combustion, and the ⦵ character indicates that everything is in its standard state. Through similar procedures we can obtain the heat of combustion of carbon as graphite (the standard state of the element carbon) and the heat of combustion of methane itself; note that at very high temperatures and pressures diamond becomes more stable than graphite, so the conversion of graphite to diamond can occur. Forgetting to give the states when writing these equations is probably the most common mistake you are likely to make, because ΔH depends on the states of all reactants and products: if the reaction is written with gaseous H2O instead of liquid H2O, ΔH is −802 kJ instead of −890 kJ. The molar heat of combustion of a substance is the heat liberated when 1 mole of the substance undergoes complete combustion with oxygen at constant pressure, and the specific energy and energy density of a fuel express the same energy content per unit mass and per unit volume, which is more practical for storage and handling. Classic calorimetric data include Rossini's heats of combustion of methane, ethane, propane, n-butane, isobutane, and n-pentane, each in the gaseous state. As an example of a per-gram value, the enthalpy of combustion of isooctane is about 5.45 × 10³ kJ/mol, and dividing by its molar mass of 114.2 g/mol gives the enthalpy change per gram.

On the engineering side, when the enthalpy of the reactants equals the enthalpy of the combustion products, one can calculate the adiabatic flame temperature of the products. The most important stages of combustion for prescribed burners are pre-ignition (the fuel is about to burst into flame), flaming (active combustion), and smoldering, in which combustion of the fuel is essentially complete where oxygen is available and the continued smoldering produces smoke. Oxides of nitrogen (NOx) formed in combustion processes are due either to thermal fixation of atmospheric nitrogen in the combustion air ("thermal NOx") or to the conversion of chemically bound nitrogen in the fuel ("fuel NOx"); pollutant formation during combustion obeys hundreds of elementary chemical reactions. Stoichiometric air means the minimum air in a stoichiometric mixture, and a simple heating-value estimate is often sufficient, but for fuel-intensive processes, and as NOx minimization becomes more important, you may want a more accurate value. The ratio of the specific gas constants of methane and nitrogen is about 1.75, so for the same gas flow the pressure should be about 1.75 times higher. Coals, by comparison, are classified by rank according to their progressive alteration in the natural metamorphosis from lignite to anthracite, and sources of natural gas include fossil fuel deposits that can be processed to yield natural gas as well as biofuel generators that make methane from biological material; in steam reforming, heat is typically supplied by combustion of a mixture of natural gas and off-gases from the synthesis. Methane is easily ignited and is burned for fuel in a combustion reaction. Decane, a 10-carbon n-alkane and one of the highest vapor-phase constituents of jet propellant-8 (JP-8), was selected to represent the semivolatile fraction for the initial development of a physiologically based pharmacokinetic (PBPK) model for JP-8.

Methanol makes an instructive comparison. Methanol is an ingredient in many products for car and home maintenance and arts and crafts; workers that use methanol may breathe in mists or have direct skin contact, and the general population may be exposed through vapors, ingestion, or dermal or eye contact, so it is highly recommended that you consult the Material Safety Data Sheet (MSDS) for the chemical from a reliable source and follow its directions. If methanol is released to the environment, it is broken down in air. In one demonstration, equal moles of methane and methanol are combusted in a cylinder to launch a foam ball into the air, and a typical exercise is to calculate the heat produced by combustion per liter of methanol. By converting methanol to formaldehyde and hydrogen, the enthalpy of the fuel is increased; because formaldehyde is a product of that conversion, its combustion reaction must be turned around when the equations are added in a Hess's-law treatment.

You usually calculate the enthalpy change of combustion from enthalpies of formation: the balanced equation showing the combustion of methane is CH4 + 2 O2 → CO2 + 2 H2O + heat energy (enthalpy), and the reaction can be broken into steps to show the entire energy "using" and "forming" process. Problem: calculate the approximate enthalpy change, ΔH_rxn, for the combustion of methane, CH4 + 2 O2 → 2 H2O + CO2, given that ΔHf for CH4 = −74.53 kJ/mol, ΔHf for O2 = 0 kJ/mol, ΔHf for H2O(g) = −241.8 kJ/mol, and ΔHf for CO2 = −393.5 kJ/mol.
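A short sketch of the formation-enthalpy arithmetic for this problem (values as quoted above; the liquid-water variant uses −285.8 kJ/mol and reproduces the familiar −890 kJ/mol figure):

```python
# Standard enthalpies of formation in kJ/mol, as quoted in the problem above.
dHf = {"CH4": -74.53, "O2": 0.0, "CO2": -393.5, "H2O(g)": -241.8, "H2O(l)": -285.8}

def combustion_enthalpy(water_state: str) -> float:
    """CH4 + 2 O2 -> CO2 + 2 H2O: sum over products minus sum over reactants."""
    products = dHf["CO2"] + 2 * dHf[water_state]
    reactants = dHf["CH4"] + 2 * dHf["O2"]
    return products - reactants

print(combustion_enthalpy("H2O(g)"))  # about -802.6 kJ/mol
print(combustion_enthalpy("H2O(l)"))  # about -890.6 kJ/mol
```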
In particular, it is the amount of heat released when a given amount (usually 1 mole ) of a combustible pure substance is burned to form incombustible products (e. I've recently found out my calculated value of internal energy of methane largely deviates from the ab initio output (at $\pu{1000 K}$ \$0. Simple diatomic molecules. At very high temperatures and pressures, diamond becomes more stable than graphite. We calculated the enthalpy change during this transformation before from traditional thermochemcial methods. Aim: The enthalpy change of combustion of a fuel is a measure of the energy transferred when one mole of fuel burns completely. Methane by itself cannot be caused to burn effectively. Calculate the enthalpy change for the process and calculate bond enthalpy of in 134 Views. Methane releases its chemical energy by undergoing hydrocarbon combustion. As an example, consider again the complete combustion of Methane (CH 4) with theoretical air: Notice that in the reactants and the products of the above example we have basic elements O 2 and N 2 as well as compounds CH 4, CO 2, and H 2 O. The standard enthalpy change for the reaction. jpgHf = -74. Tel: 86 516 83883009 E-mail address: [email protected] doi: 10. The enthalpy change when a compound undergoes complete combustion at constant temperature and pressure is called the enthalpy of combustion. CH4(g) + 2O2(g) ® CO2(g) + 2H2O(g) (i) Use the average bond enthalpies given in the table below to calculate a value for the enthalpy change of combustion of methane, DHC. Here Δn g = ( 1-3) Hence Δn g = -2. Its change in a system is equal to the heat brought to the system at constant pressure. You may recall that the products of the complete combustion of a hydrocarbon are water vapor and carbon dioxide gas. Hence, methane has the highest enthalpy per unit mass when combusted. DEPARTMENTOFCOMMERCE. Determine the enthalpy of combustion of methane (CH 4) at 25°C and 1 atm, using the enthalpy of formation data from Table A−26. It is a chemical reaction in which hydrocarbon is burnt and it produces carbon dioxide, heat and water. This method gives an estimate of the reaction's enthalpy change. Question: Calculate {eq}\Delta H {/eq}, {eq}\Delta S {/eq}, and {eq}\Delta G {/eq} for the combustion of methane. You can write your answers on your student sheet. The standard enthalpy of formation "The enthalpy of formation is the energy change when 1 mole of a substance is formed from its constituent elements in their standard states" Particular points to note: The elements are in their usual states under standard conditions. (b) Combustion of compounds CH 4 (g) + O 2 (g) → CO 2 (g) + 2H 2 O(l) When 1 mole of methane burns completely in oxygen to form carbon dioxide and water, 890 kJ of heat is released. The chemical reaction for the combustion is typically that of a hydrocarbon fuel reacting with oxygen derived from atmospheric air to form carbon dioxide , water and heat. 00 atm and 23 ∘C, what is the volume of carbon dioxide. This operation is outlined above for the combustion of methane. 48 PAC, 1990, 62, 2167. The Δ means a change, H means an enthalpy, c indicates combustion, and the ⦵ character indicates that everything is in its standard state. 75 times higher. The standard enthalpy of combustion of ethane is the energy released when one mole of ethane is completely burned in excess oxygen under standard conditions. 
Results of calculations to determine thermodynamic, transport, and flow properties The product gases are those resulting from The oxygen content of of combustion product gases are presented. Thus the Enthalpy of Vaporization must be included in the calculation (Δhvap). By definition, the heat of combustion (enthalpy of combustion, ΔH c) is minus the enthalpy change for the combustion reaction, ie, -ΔH. The use of external enthalpy support (e. 04g/mol, therefore calculating number of moles, 1. 8 kJ/mol), CO 2 (ΔH f ° = -393. 2 grams of methane = 0. asked by Hannah on January 30, 2012; chemistry. 5 and -286?. Enter a mass or volume in one of the boxes below. The combustion of methane, CH4, releases mole of methane is burned, 890. Let's add it up to see if we get the reaction we want. Because natural gas consists primarily of methane, it is expected that reaction (1) will liberate heat. The heat of combustion can be expressed as follows: energy/mole of fuel, energy/volume of the fuel. By converting the methanol to formaldehyde and hydrogen the enthalpy of the fuel has been increased. formation during combustion obeys hundreds of elementary chemical reactions. For instance if you just input an assumed flue gas temperature the program will calculate the specific heat and enthalpy of each component gas in the program. Heat of Solution of Calcium Hydroxide: Strategies are discussed for studying systems in which two chemical. Most research related to methane hydrates has been focused on their formation and. The heat of combustion of methane is -890 kJ mol-1. Methane combustion is a chemical reaction that occurs from burning methane gas — which is an odorless and colorless fossil fuel — causing high amounts of heat and pressure to be produced. 0 g CH4/1mole CH4 = 800 kilojoules/mole. You cannot apply bond enthalpies to this. Comment on the difference. CH 4 + 2O 2 → CO 2 + 2H 2 O + Heat (1,013. It is mainly used to generate industrial and utility electric power, produce industrial process steam and heat, and heat residential and commercial space. Coals are classified by rank according to their progressive alteration in the natural metamorphosis from lignite to anthracite. 0 which is exothermic. These graphs can be used to estimate (1) the optimum mixture ratio of the combustion reactants, (2) the adiabatic flame temperature of the combustion reaction, (3) the average molecular weight of the combustion products, and (4) the specific heat ratio of the combustion products. The viscosity on this page is the dynamic (absolute) viscosity. 8 k J m o l − 1; Δ H s u b C (s. To find the required energy, we consider the enthalpy change when the temperature of the propellants is increased. Decane, a 10-carbon n-alkane and one of the highest vapor phase constituents of jet propellent-8 (), was selected to represent the semivolatile fraction for the initial development of a physiologically based pharmacokinetic (PBPK) model for JP-8. 2 grams of methane = 0. For example, the enthalpy of change for a methane combustion reaction is ΔH = -891 per kJ/mol. By definition, the heat of combustion (enthalpy of combustion, ΔH c) is minus the enthalpy change for the combustion reaction, ie, -ΔH. use the values, keeping in mind. 8 The enthalpy change equals the heat of reaction at constant pressure. Fuels release heat on burning: Heat of combustion is total amount of heat released when a fuel is burnt when there is complete combustion with oxygen. The standard enthalpy of combustion is ΔH∘ c. 
82 kJ/mol) according to the equation below. If the product in the combustion of methane (Equation 5. When that is done, use a heat of formation table to determine the heat of formation (ΔHf) values for the compounds involved in the equation. By volume there…. The enthalpy of combustion of methane, graphite and dihydrogen at 2 9 8 K are − 8 9 0. For example, the enthalpy. So, the enthalpy change in this reaction (which should be the standard enthalpy of combustion) is -2334. 04246 g/mol. of combustion Tc is evaluated from the balance of subtracts and products enthalpies. (ii) Air and fuel enter the combustion chamber at 25°C and the products leave at 798 K. " This is a very common chemical reaction, to take something and combust (burn) it in oxygen. Use the following bond energies to estimate the enthalpy change for the combustion of one mole of methane (CH4). Bonds Bond Energies in kJ/mol C-H 414 kJ/mol O-O 142 kJ/mol O=O 498 kJ/mol C=O 736 kJ/mol C-O 360 kJ/mol O-H 464 kJ/mol Enter the answer in kJ. The definition of the standard. Enthalpy of formation from a reaction. 1 Combustion ofOctane in Air Detennine the stoichiometric fuel/air mass ratio and product gas composition for combus­ tion ofoctane (CSH1S ) in air. The air for combustion enters the process boundary under the ambient conditions, and the fuel for combustion (“reformate”) is an internal stream comprising unconverted methane, CO, and unrecovered hydrogen from the syngas. Exothermic reactions have negative enthalpy values (-ΔH). 0 k J m o l − 1; Bond energy C = O = 7 2 8. 50 g CH4 x 16. Incomplete combustion of ethene. It's probably a good way to heat your home. The heat of combustion can be expressed as follows: energy/mole of fuel, energy/volume of the fuel. Endothermic reactions have positive enthalpy values (+ΔH). Enthalpy – Combustion of alcoholsAimThe purpose of this experiment is to determine the heats of combustion of a series of 5 primary alcohols, methanol to pentanol. The four hydrogen atoms are bonded to the central carbon atom and are directed in four. C (s) + O 2 (g) → CO 2 (g) ΔH = -394 kJ. This project calculated the adiabatic flame temperature of a combustion reaction of pure methane and oxygen, assuming that all of the heat liberated by the combustion reaction goes into heating the resulting mixture. Fluegas Composition, mole % #N#Carbon Dioxide.
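The two routes described above are easy to check numerically. The short Python sketch below is not from any of the quoted sources; it simply plugs in the bond energies and formation enthalpies listed above, and the function names are my own.

```python
# Enthalpy of combustion of methane, CH4 + 2 O2 -> CO2 + 2 H2O (illustrative sketch only).

# Average bond enthalpies in kJ/mol (energy needed to break one mole of bonds).
BOND_ENERGY = {"C-H": 414.0, "O=O": 498.0, "C=O": 736.0, "O-H": 464.0}

def delta_h_from_bonds() -> float:
    """delta_H ~ sum(bonds broken) - sum(bonds formed), per mole of CH4."""
    broken = 4 * BOND_ENERGY["C-H"] + 2 * BOND_ENERGY["O=O"]   # reactant bonds
    formed = 2 * BOND_ENERGY["C=O"] + 4 * BOND_ENERGY["O-H"]   # product bonds
    return broken - formed

# Standard enthalpies of formation in kJ/mol (gaseous water, as quoted above).
DELTA_HF = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O": -241.8}

def delta_h_from_formation() -> float:
    """delta_H = sum(delta_Hf of products) - sum(delta_Hf of reactants), per mole of CH4."""
    products = DELTA_HF["CO2"] + 2 * DELTA_HF["H2O"]
    reactants = DELTA_HF["CH4"] + 2 * DELTA_HF["O2"]
    return products - reactants

if __name__ == "__main__":
    print(f"Bond-energy estimate:     {delta_h_from_bonds():8.1f} kJ/mol")
    print(f"Heats-of-formation value: {delta_h_from_formation():8.1f} kJ/mol")
```

With these numbers the bond-energy route gives about −676 kJ/mol and the formation route about −802 kJ/mol (that is the value to gaseous water; using liquid water, ΔHf° = −285.8 kJ/mol, brings it to roughly −890 kJ/mol), which illustrates how crude average bond energies are.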
# 2D vector tutorials

## Recommended Posts

I have been searching for quite some time and would like tutorials for anything I've missed. I have a basic vector struct created. It was copied from RPG Toolkit's SourceForge page (edited a little). The toolkit is basically a story passed from person to person; the programming has been warped. It was also programmed using MSVC 6, and it would take a great deal of reprogramming to compile in any other compiler. I've heard rumors of it acting in strange ways and being less strict about what you can do. That said, it was almost impossible to figure out what I actually needed in order to use these vectors. I am starting from the beginning and would like tutorials covering everything from creating the struct to the various uses of the vectors themselves. I am a fan of formulas and algorithms even though my understanding of them is, well... not there. Thanks

http://justmathtutoring.com/
http://en.wikipedia.org/wiki/Euclidean_vector
http://www.sacredsoftware.net/tutorials/Vectors/Vectors.xhtml

Edit: Added links

[Edited by - rpg gamer 17 on December 4, 2009 2:20:48 PM]

##### Share on other sites

Sounds like you've found a few good sources for vectors. What exactly are you looking for? If you could be more clear, we could help better. If you just want to understand vectors and how to use them, try making a sim of a charged particle moving in a magnetic field (wiki it). Also, try making a program to calculate the power output of a solar panel (given the light vector and the panel normal vector). That should cover most of the subtleties of vectors.

As far as structs for vectors go, if you're having trouble with a particular implementation, post it. Otherwise, the struct should just have components x, y and z, which correspond to the usual vector components. The struct should have accompanying length(u), dot(u, v), cross(u, v) and vector(x, y, z) functions:

vector is just a constructor
dot(u, v) = u.x*v.x + u.y*v.y + u.z*v.z
length(u) = sqrt(dot(u, u))
cross(u, v) = vector(u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x)

I hope this helps; as I said, I don't know what you're looking for.

##### Share on other sites

Those are good ways to learn vectors. The way I learned them was by creating my own vector class for ActionScript, so that I had to actually think about how they worked. Here is another link, I hope it's useful to you:

http://www.euclideanspace.com/maths/geometry/space/vector/index.htm

##### Share on other sites

I'm designing the below for a 2D game. My vectors will have 3 components, but the third I do not plan to use. I am planning on using bounding boxes and vectors for my collision system. The main problem I am having is deciding how to construct my vector holder. This will hold all vectors that are within the bounding box. In order to do that I need to know which type of vector I should use. I was thinking of using free vectors (vectors whose initial position is not needed). I am guessing an offset value stating the actual position of the point would be needed. This seems redundant. Thanks

EDIT: REEDIT: I'm looking around for line collision tutorials.

EDIT: Alright, I have box vs. box collision implemented.
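For reference, the box-versus-box and point-in-box tests being discussed can be written very compactly. The sketch below is my own illustration (in Python rather than the thread's C++, with hypothetical names); the original poster's C++ attempt follows in the next post.

```python
# Generic axis-aligned bounding box (AABB) tests. A box stores its left/right/top/bottom
# edges; y grows downward, as in most 2D screen coordinate systems.

class AABB:
    def __init__(self, left, top, right, bottom):
        self.left, self.top, self.right, self.bottom = left, top, right, bottom

def point_in_box(x, y, box):
    """A point is inside if it lies between left/right AND between top/bottom."""
    return box.left <= x <= box.right and box.top <= y <= box.bottom

def boxes_overlap(a, b):
    """Two AABBs overlap unless one lies entirely to one side of the other."""
    if a.right < b.left or a.left > b.right:
        return False
    if a.bottom < b.top or a.top > b.bottom:
        return False
    return True

if __name__ == "__main__":
    wall = AABB(0, 0, 10, 10)
    person = AABB(8, 8, 12, 12)
    print(point_in_box(5, 5, wall))     # True
    print(boxes_overlap(wall, person))  # True (they share the 8..10 corner region)
```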
There is a problem with my implementation, though. (Screenshot omitted: red box = person, green outline = wall.)

Here is how I calculate the size of the bounding box:

void BoundingBox::Update(void)
{
    std::vector<Vector> vVector = m_shape.VECTOR(); // returns vector list from shape
    unsigned int type = m_shape.TYPE();
    switch(type)
    {
    case SH_LINE:
        // unsure, some may be slanted
        break;
    case SH_CIRCLE:
        // (x, y) is the center point, vVector[0] is the radius
        left   = ((float)x) - CVector::Magnitude(vVector[0]);
        right  = ((float)x) + CVector::Magnitude(vVector[0]);
        top    = ((float)y) - CVector::Magnitude(vVector[0]);
        bottom = ((float)y) + CVector::Magnitude(vVector[0]);
        break;
    case SH_POLYGON:
        left = (float)x;
        top  = (float)y;
        for(std::vector<Vector>::const_iterator ci = vVector.begin(); ci != vVector.end(); ci++)
        {
            // find the farthest right
            if(x + (*ci).x > right)
                right = x + (*ci).x;
            // find the farthest down
            if(y + (*ci).y > bottom)
                bottom = y + (*ci).y;
        }
        break;
    }
}

Here is how I check to see if there is a collision. This part will need to be extended to see whether the object actually crosses the vector the box contains, but I'm unsure how to do that.

bool PB_Collision::BoxWithBox(BoundingBox box)
{
    for(unsigned int i = 0; i < vBox.size(); i++)
    {
        bool Possible = true;
        if(Possible)
            if(box.bottom <= vBox[i].top)
                Possible = false;
        if(Possible)
            if(box.top >= vBox[i].bottom)
                Possible = false;
        if(Possible)
            if(box.right <= vBox[i].left)
                Possible = false;
        if(Possible)
            if(box.left >= vBox[i].right)
                Possible = false;
        if(Possible)
            return true;
    }
    return false;
}

I was adapting LazyFoo's collision detection. Under the last if(Possible) would be a CheckVector(BoundingBox) function.

[Edited by - rpg gamer 17 on December 7, 2009 7:52:03 PM]

##### Share on other sites

Hey, it looks like you might be confusing vectors with line segments. A line segment is constructed from two vectors (begin, end). There are no "types" of vectors, unless you want to use vectors of floats vs. vectors of ints. What do you mean by a point lying on a vector? Or past a vector? A point is a vector, so you can just test them for equality, though I assume that is not what you want; please clarify.

For AABBs, you need 2 vectors (low corner, high corner). The pairs ((lowx, highx), (lowy, highy), ...) define intervals; if you want to test whether a point is in a box, you just test whether it is within those intervals. To test two AABBs against each other, you have to test the pairs of intervals; they all have to overlap for there to be a collision.

It also looks like you want to test line segments against each other. To test line segments, you have to test whether the two points that define segment 'A' are on opposite sides of line 'B', and vice versa. You'll have to look up the exact test, I can't pull it off the top of my head. It has to do with testing the clockwise/counter-clockwise orientation of two triples of points defined by the two segments. Keep in mind that line segment intersection is meaningless in 3D.

Again, I'm not sure what you are asking, so I don't know if this helps. If you can state questions more clearly, I can give better help.

##### Share on other sites

I edited the previous post. I guess while you were posting. Sorry about that. I'm still using vectors (I think).
I'm just moving the position of the bounding box:

class Shape
{
private:
    unsigned int m_type;
    std::vector<Vector> vVector; // list of vectors

public:
    Shape(void);
    Shape(unsigned int type);
    ~Shape(void);

    void push_back(Vector v); // ways to add vectors to the list
    void push_back(float x, float y, float z = 0);

    inline std::vector<Vector> VECTOR() { return vVector; }
    inline unsigned int TYPE() { return m_type; }
};

class BoundingBox
{
public:
    void Update(void); // update bounding box right, left, top, bottom coordinates

    Shape m_shape;

    BoundingBox(void);
    BoundingBox(int _x, int _y, Shape shape);
    BoundingBox(Vector pos, Shape shape);
    ~BoundingBox(void);

    int x;
    int y;
    float right;
    float left;
    float top;
    float bottom;
};

I'm using the thinking behind the RPG Toolkit's collision system as the basis of what I know about vectors. The vectors used in the collision system may be a bunch of line segments grouped together. The toolkit had a point function (in its source code) that was used to see if the sprite bounding box was on the vectors contained inside of their bounding box. I'll see if I can find the function again and decipher it. If I need to convert this to use line segments I will do so.

I basically want to check:

If a point (vector) is between the left and right of the box.
-> If a point (vector) is between the top and bottom of the box.
->-> If so, a collision has been met. The vector is inside the box.

EDIT: I found the function:
Line numbers 294-345
Line numbers 566-645 are used to determine intersection
Line numbers 350-378: PointOnLine function
The bottom one contains several other functions; most are located a little above 566.
RPG Toolkit SourceForge: click on CVector.cpp -> Revision 1.30 view

[Edited by - rpg gamer 17 on December 7, 2009 6:31:29 PM]

##### Share on other sites

I had a similar problem today. What you could do is determine the distance between 2 vectors (object, collisionObject) and then do the appropriate action if the distance is within a certain value. I used this in Unity to create a primitive navigation system between nodes.

I bought the 3D Math Primer for Graphics and Games Development (catchy title) after reading the chapter about vectors online; I don't remember the site, but it's a great way to get to understand all that's needed. It could have been Amazon or Google Books, it appears on both of those sites and I think it's worth checking. If you have any specific questions feel free to ask.

What you could do is see if the vector distance between the 2 is equal to half the width of the box you need to collide with, so that you know when it's touching.

##### Share on other sites

I am about to do an epic restructuring of my current code. Please disregard any posted code, as it has been or will be tweaked or restructured.

@The Almighty Snark: Thanks. I'm looking for the book. If I happen to find the site that you might be talking about I'll post it.

EDIT: I think I found the site: http://www.scribd.com/doc/11997032/3D-Math-Primer-for-Graphics-and-Game-Development ^^

EDIT: I have a stupid question... How do you find the top right corner of a box if you have a circle inside it and you only know the radius? Hold on... I'll draw a picture...

EDIT: Never mind... I figured it out... >.< The diameter is the width and height of the circle.

EDIT: Restructuring completed. Now I'm looking at the book.

[Edited by - rpg gamer 17 on December 9, 2009 9:11:27 AM]
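For completeness, here is a minimal sketch of the clockwise/counter-clockwise test mentioned in the earlier reply for intersecting two line segments. It is my own illustration in Python rather than the RPG Toolkit's CVector.cpp code, and it ignores the collinear-overlap edge case.

```python
# Minimal 2D segment-intersection test using the orientation (CCW) trick.

def ccw(a, b, c):
    """> 0 if a->b->c turns counter-clockwise, < 0 if clockwise, 0 if collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (collinear overlap not handled)."""
    d1 = ccw(q1, q2, p1)
    d2 = ccw(q1, q2, p2)
    d3 = ccw(p1, p2, q1)
    d4 = ccw(p1, p2, q2)
    # p1 and p2 must lie on opposite sides of q1-q2, and vice versa.
    return (d1 * d2 < 0) and (d3 * d4 < 0)

if __name__ == "__main__":
    print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True: they cross at (2, 2)
    print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False: disjoint collinear pieces
```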
## Wednesday, March 30, 2011 ### Samsung Plants Keyloggers On Laptops http://yro.slashdot.org/story/11/03/30/2148244/Samsung-Plants-Keyloggers-On-Laptops "Mohammed Hassan writes in Network World that he found a keylogger program installed on his brand-new laptop — not once, but twice. After initial denials, Samsung has admitted they did this, saying it was to 'monitor the performance of the machine and to find out how it is being used.' As Hassan says, 'In other words, Samsung wanted to gather usage data without obtaining consent from laptop owners.' Three PR officers from Samsung have so far refused comment." ## Sunday, March 27, 2011 ### A Sun Made Out Of Bananas I was cruising the information superhighway today, and came across some random article about neat science facts.  I usually treat these with quite a bit of skepticism, as they generally take a single basic fact and ignore everything else in order to derive something which is "neat".  This one didn't disappoint.  The first fact caught my eye: If the Sun were made of bananas, it would be just as hot The Sun is hot, as the more astute of you will have noticed. It is hot because its enormous weight – about a billion billion billion tons – creates vast gravity, putting its core under colossal pressure. Just as a bicycle pump gets warm when you pump it, the pressure increases the temperature. Enormous pressure leads to enormous temperature. If, instead of hydrogen, you got a billion billion billion tons of bananas and hung it in space, it would create just as much pressure, and therefore just as high a temperature. So it would make very little difference to the heat whether you made the Sun out of hydrogen, or bananas, or patio furniture. As I said above, this has a nugget of truth, i.e., massive objects will compress themselves and increase in temperature.  There is no doubt about this, indeed much of the Earth's heat is left over from it coalescing and the residual effects.  However, it ignored the fact that the heat would quickly (in astronomical time scales) dissipate and the object would cool down.  Nuclear fusion is what sustains a star's heat for billions of years. Luckily my fellow pedantic internet nerds corrected this oversight and the following was added: Edit: this might be a little confusing. The heat caused by the internal pressure would be similar to that of our Sun. However, if it's not made of hydrogen, the fusion reaction that keeps it going wouldn't get under way: so a banana Sun would rapidly cool down from its initial heat rather than burning for billions of years. Thanks to people who pointed this out. That being said, I felt the desire to go one step further in my pedantry.  I knew bananas are mostly water, which is mostly hydrogen.  I wondered if the sun would be able to undergo fusion if it were made out of bananas. Thanks to the wonders of the internet I knew within seconds that a banana is about 70% water and 30% carbohydrates.  I'm going to assume the peel is about the same in makeup as the insides (or that our banana sun would be peeled).  We all know that water is H2O, but what are carbohydrates?  I looked up which carbohydrates are in a banana, and then looked up what those carbohydrates are made of.  If I knew anything about organic chemistry, I would have known that they all have roughly the same ratio of 1 C : 2 H : 1 O.  Now that I knew the chemical ratios I had to figure out what the ratios were by mass.  Hydrogen has a mass of about 1 amu, carbon 12 amu, and oxygen 16 amu.  
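Before the percentages are quoted below, here is a small Python sketch of the arithmetic (my own, using the post's assumptions of a 70/30 water/carbohydrate split, a CH2O empirical unit for the carbohydrates, and atomic masses of 1, 12 and 16 amu).

```python
# Mass fractions of H, O and C in a "banana sun", assuming a banana is
# 70% water (H2O) and 30% carbohydrate (empirical unit CH2O).

H, C, O = 1.0, 12.0, 16.0  # approximate atomic masses in amu

water = {"H": 2 * H / (2 * H + O), "O": O / (2 * H + O)}          # ~11.1% H, ~88.9% O
carb = {"C": C / (C + 2 * H + O),
        "H": 2 * H / (C + 2 * H + O),
        "O": O / (C + 2 * H + O)}                                 # ~40% C, ~6.7% H, ~53.3% O

banana = {
    "H": 0.70 * water["H"] + 0.30 * carb["H"],
    "O": 0.70 * water["O"] + 0.30 * carb["O"],
    "C": 0.30 * carb["C"],
}

for element, fraction in banana.items():
    print(f"{element}: {100 * fraction:.2f}%")   # ~9.8% H, ~78.2% O, ~12.0% C
```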
Which means that water is 11.1% hydrogen by mass, 88.9% oxygen.  Carbohydrates are 40.0% carbon, 6.7% hydrogen, and 53.3% oxygen, by mass. By the powers of math, we find that the final make up of our banana sun would be 9.78% H, 78.22% O, 12.00% C.  Oxygen and carbon can fuse, but our sun isn't massive enough to make it happen.  So only the 10% hydrogen could fuse.  In addition, the oxygen and carbon would settle in the core possibly preventing hydrogen fusion in the outer layers.  I don't know enough about stellar evolution to say for sure what the result would be.  But, I'd guess that today we'd have something pretty similar to a white dwarf. While the banana sun would likely be rather bright today, and still qualify as a star, it is unlikely that it would have given off enough heat for life to form on Earth.  I suppose you should thank your lucky stars that a sea of free electrons and protons will form hydrogen atoms and not bananas (well at least not for a very long time). As an epilogue I'll answer how many bananas one would need to equal the Sun's mass.  Google tells us that a single banana is 125 grams, and Google's wonderful calculator makes short work of the calculation: 1.6 x 1031 or 16,000,000,000,000,000,000,000,000,000,000 bananas.  You're gonna need a bigger boat. ## Friday, March 25, 2011 ### Self-Referential Aptitude Test I've seen this a few times before, but never bothered with it because I figured it would be crazy.  Well it was pretty crazy, but only took an hour or two. You can stop reading here if you want to try it in its default state, however, I'd like to clarify some ambiguity. Also, the last question is a bit of an opinion, but the correct answer should be clear based on the other answers. ## Monday, March 14, 2011 ### Japanese Nuclear Plants Pretty good discussion of what is happening with the Japanese nuclear plants that were affected by the recent earthquake. ## Saturday, March 12, 2011 ### You might be a winter camper if...... http://www.wintercampers.com/the-lighter-side-of-winter-camping/you-might-be-a-wintercamper-if/ "You might be a WinterCamper if……You like to go winter camping to catch up on your sleep." ## Wednesday, March 9, 2011 ### Police Use Licensing Laws to Do End-Run Around Constitution http://www.huffingtonpost.com/james-peron/police-use-licensing-laws_b_830080.html Inspectors for "health and safety" purposes don't need warrants and police have learned that by inviting fellow government officials to join them, they can conduct armed, warrantless searches under the guise of routine "health and safety" inspections. ## Monday, March 7, 2011 ### Judge Allows Subpoenas For GeoHot YouTube Viewers, Blog Visitors "Stepping up Sony's lawsuit against PS3 jailbreak developer George Hotz, this Thursday a judge approved multiple subpoenas which seek logs of all viewers and commenters to his YouTube video, visitors to his blog and website, and all information associated with his Twitter account." http://yro.slashdot.org/story/11/03/05/1954216/Leave-a-Message-Go-To-Jail "A man in Weare, New Hampshire was charged with felony wiretapping for recording the police during a traffic stop — based on a cell phone call he made as an officer approached his vehicle. From the article: Police considered it wiretapping because the call was being recorded by a voice mail service without the officer's consent." ## Thursday, March 3, 2011 ### Soldier in Leaks Case Jailed in Cell Naked, Lawyer Says http://www.nytimes.com/2011/03/04/us/04manning.html?_r=1 A lawyer for Pfc. 
Bradley Manning, the Army intelligence analyst accused of leaking secret government files to WikiLeaks, has complained that his client was stripped and left naked in his cell for seven hours on Wednesday.
# The Lieb-Robinson bound

1. Aug 27, 2014

### friend

Does the Lieb-Robinson bound actually prove the speed of light from the assumptions of quantum theory? If it did, would that be a derivation of special relativity? Thanks.

2. Aug 27, 2014

### Physics Monkey

Not quite. For one thing, the Lieb-Robinson bound is typically not tight; that is, there is a slower velocity (of sound, for example) which controls the spread of information. The Lieb-Robinson bound is really a microscopic construction that knows very little about the actual dynamics of the system.

Furthermore, even in systems where the Lieb-Robinson bound is obeyed, the system does not have to obey special relativity. There is still a preferred frame set by the lattice, and sometimes this can play a crucial role in the physics. Of course, to be fair, sometimes Lorentz invariance does emerge, but this is a complex dynamical phenomenon not captured by Lieb-Robinson.

Finally, the Lieb-Robinson bound permits violation of the "light cone" by exponential tails. To the extent that one wants to believe the relativistic light cone is sharper than that, one needs more than Lieb-Robinson. Since this is the Beyond the Standard Model section, it should be mentioned that it is not clear exactly how seriously to take the relativistic light cone, especially if, for example, geometry itself is fluctuating.

Hope this helps.

3. Aug 27, 2014

### friend

I wonder if it can be simplified to prove the speed of light. For example, a lattice of one particle, or a spin network that's continuous, etc.?
# What is wrong? 1. Jun 13, 2013 ### jaumzaum What is wrong?? Hi, I want to know where is the mistake in the statement below (I really don't know what is wrong, but it is obviously wrong) $\frac{\partial ^{2}x}{\partial t^{2}} = \frac{\partial ^{2}x}{\partial x^{2}} (\frac{\partial x}{\partial t} )^{2}$ But $\frac{\partial ^{2}x}{\partial x^{2}} = \frac{\partial \frac{\partial x}{\partial x}}{\partial x} = \frac{\partial 1}{\partial x} = 0$ This way: $\frac{\partial ^{2}x}{\partial t^{2}} = 0$ Thanks, John 2. Jun 13, 2013 ### voko After so many posts, you surely know that you should post what is given and what you are supposed to find. Really. 3. Jun 13, 2013 ### 1MileCrash That first statement doesn't look true. 4. Jun 13, 2013 ### Staff: Mentor What does the following simplify to? $$\frac{\partial x}{\partial x}$$ 5. Jun 13, 2013 ### jaumzaum Sorry. Imagine a car traveling in the x axis. It's initial position is zero and its initial velocity ∂x/∂t is zero too. Although it has a variable acceleration a. $a = \frac{\partial ^{2}x}{\partial t^{2}}$ I've just multiplied by $\frac{\partial x^{2}}{\partial x^{2}}$ to get $\frac{\partial ^{2}x}{\partial x^{2}} (\frac{\partial x}{\partial t} )^{2}$ In my conception I just multiplied by 1 and got an absurd result. "a" is not always zero (I haven't even mentioned the function a(t)) 6. Jun 13, 2013 ### voko First of all, it is far from clear why you are using partial derivatives. You have functions of just one argument - the time - so ordinary derivatives would work fine. Second, $a = \frac {d^2x} {dt^2} \ne (\frac {dx} {dt})^2 = v^2$. 7. Jun 13, 2013 ### 1MileCrash Yikes, I think you're having a problem with notation vs. concepts. First of all, $\frac{\partial ^{2}x}{\partial x^{2}} (\frac{\partial x}{\partial t} )^{2}$ is in no way, shape, or form, the same as $\frac{\partial ^{2}x}{\partial t^{2}}$ You did multiply by 1, but your mistake is thinking that the Leibniz notation is just like a simple fraction involving numbers being multiplied because it looks the same. It is a fraction, but ∂ is not a number. ∂/∂ is not 1, it's not anything, it's like saying +/+ or (grapefruit)/(I think Michael Shannon is a talented actor.) -edited out- I proved that it couldn't be true, but misread the equation myself in doing so. :/ Last edited: Jun 13, 2013 8. Jun 13, 2013 ### jaumzaum But why can I multiply dx/dt by dθ/dθ to get (dx/dθ)(dθ/dt) = w (dx/dθ). I've just multipied by one too, and only rearranged the terms, as I did in the initial derivative. What's the difference between them? Why in the first one I cannot do this and in the second one I can? 9. Jun 13, 2013 ### 1MileCrash Ok... this may be challenging to explain. What you've done is used symbols that look algebraic, who's values are not algebraic but defined, then did algebra with these symbols, assuming their values would be preserved. Put it this way, some people write arcsin(x) as sin^-1(x). But, that doesn't mean I can do algebra to both sides and get a true statement, because the symbol "sin^-1(x)" gets its value from definition. Here: See, $\frac{\partial ^{2}x}{\partial x^{2}}$ Is 0. This symbol, as a whole, means the acceleration of x with respect to itself. It will always be changing at a constant rate relative to itself (namely, 1) so you have made the claim: $\frac{\partial ^{2}x}{\partial t^{2}} = 0(\frac{\partial x}{\partial t} )^{2}$ $\frac{\partial ^{2}x}{\partial t^{2}} = 0$ That's the claim, period. 
The fact that if you leave 0 as you wrote it and do algebra to get something that looks true means absolutely nothing because our "zero" is zero through conceptual definition and not algebraically so (IE, the numerator is not 0.) So it's like "changing your mind" mid way through. "This is zero because of what we say the symbols mean, not algebraically" to "Do normal algebra with those symbols" back to "Now let's return to what the symbols mean again and observe the result." Which will of course, end up as nonsense. Last edited: Jun 13, 2013 10. Jun 13, 2013 ### Staff: Mentor Yes. That's where I was going when I asked this question. Minor quibble. It means the second derivative of x with respect to itself. Acceleration usually means the second derivative of position with respect to time. 11. Jun 13, 2013 ### 1MileCrash Well, of course I agree, but I wanted to stay in the OP's terms. 12. Jun 13, 2013 ### voko In fact, the symbols were not used correctly even by the 17th century's standard. Mr Leibniz did a marvelous job to ensure that $\frac {d} {dt} \frac {d} {dt}$ naturally ends up being $\frac {d^2} {dt^2}$, not $(\frac {d \text{something}} {dt})^2$, simply because it has two d's and none of the 'something'. 13. Jun 13, 2013 ### 1MileCrash I think $\frac{\partial ^{2}x}{\partial t^{2}} = \frac{\partial ^{2}x}{\partial x^{2}} (\frac{\partial x}{\partial t} )^{2}$ was thought to be true because if you treat the RHS purely algebraically, you get $\frac{\partial ^{2}x\partial ^{2}x^{2}}{\partial ^{2} x^{2} \partial ^{2} t ^{2}}$ to which "cancelations" give $\frac{\partial ^{2}x}{ (\partial t)^{2}}$ Not due to an error involving Unless I'm missing it somewhere. :)
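One concrete way to see that the proposed identity fails is to test it on a specific trajectory. The sketch below is my own (it assumes SymPy is installed) and uses x(t) = t³: the true second time derivative is 6t, while the right-hand side of the claimed identity is 0, because the second derivative of x with respect to itself vanishes.

```python
# Test the claimed identity  d2x/dt2 == (d2x/dx2) * (dx/dt)**2  on x(t) = t**3.
import sympy as sp

t = sp.symbols("t")
x = t**3

lhs = sp.diff(x, t, 2)                                           # d2x/dt2 = 6*t
second_wrt_itself = sp.diff(sp.Symbol("x"), sp.Symbol("x"), 2)   # d2x/dx2 = 0
rhs = second_wrt_itself * sp.diff(x, t)**2                       # 0 * (3*t**2)**2 = 0

print(lhs)              # 6*t
print(rhs)              # 0
print(sp.Eq(lhs, rhs))  # Eq(6*t, 0): only true at t = 0, not identically
```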
# Bandaid: A band name generator I have owned an electric bass for the last twenty five years. I managed to play it in the low 10s of hours and at some point could riff noises resembling the Pink Panther theme song. I decided it was time to get a band name matching my talent. Given that I am an engineer, I obviously automated the whole process. This post explains how I trained a deep neural network to create new band names on demand. We will go over the training dataset generation process, we will review the model architecture, we will build intuition around what that model learned and we will use this model to generate new band names. ## Band names dataset I leveraged Wikidata and downloaded a dataset of music band names using the following sparql query: SELECT DISTINCT ?entity ?bandName WHERE { ?entity wdt:P31/wdt:P279* wd:Q2088357. ?entity rdfs:label ?bandName. FILTER (LANG(?bandName) = 'en') } Here is a sample of what we get: entity bandName http://www.wikidata.org/entity/Q396 U2 http://www.wikidata.org/entity/Q371 !!! http://www.wikidata.org/entity/Q689 Bastille http://www.wikidata.org/entity/Q50598 Infinite http://www.wikidata.org/entity/Q18788 Epik High For more details on how this works, please go through my previous post on dataset generation. The resulting dataset contains 85,153 band names and 547 unique characters. Half of these characters are used only once or twice. Given that our model will have a hard time learning how to use rare characters and that each additional character makes the model slower to train, I took the following design decisions: • All characters are lowercased; • Only ASCII characters are included. Using these two rules, I reduced the initial 547 characters to these 67: {' ': 1, '!': 2, '"': 3, '#': 4, '\$': 5, '%': 6, '&': 7, "'": 8, '(': 9, ')': 10, '*': 11, '+': 12, ',': 13, '-': 14, '.': 15, '/': 16, '0': 17, '1': 18, '2': 19, '3': 20, '4': 21, '5': 22, '6': 23, '7': 24, '8': 25, '9': 26, ':': 27, ';': 28, '=': 29, '?': 30, '@': 31, '[': 32, '\\': 33, ']': 34, '^': 35, '_': 36, '': 37, 'a': 38, 'b': 39, 'c': 40, 'd': 41, 'e': 42, 'f': 43, 'g': 44, 'h': 45, 'i': 46, 'j': 47, 'k': 48, 'l': 49, 'm': 50, 'n': 51, 'o': 52, 'p': 53, 'q': 54, 'r': 55, 's': 56, 't': 57, 'u': 58, 'v': 59, 'w': 60, 'x': 61, 'y': 62, 'z': 63, '{': 64, '|': 65, '}': 66, '~': 67} Using this alphabet, we can encode our band names to list of numbers that our model can understand: In [22]: alphabet.encode("Glass Animals".lower()) Out[22]: [44, 49, 38, 56, 56, 1, 38, 51, 46, 50, 38, 49, 56] Out[23]: [38, 55, 40, 38, 41, 42, 1, 43, 46, 55, 42] The number of band name we can represent from our dataset however drops from 85,153 to 79,753. Here are some of the names that can't be spelled with this restricted alphabet: Some of the characters in ASCII like ], ^ and \ are still used only once in the dataset. We will live with this inefficiency for now. ## Model architecture Now that we have a dataset, we can teach a model to create new band names. I started from TensorFlow's tutorial on text generation and poked around until I was satisfied. The result is this architecture: model = tf.keras.Sequential( [ # map each characters to an embedding of size 64 tf.keras.layers.Embedding( len(alphabet), 64, batch_size=64, ), # we use 1,024 hidden units in our GRU. 
tf.keras.layers.GRU( 1024, return_sequences=True, stateful=True, recurrent_initializer="glorot_uniform", ), # predict a distribution over the next possible characters # knowing a prefix tf.keras.layers.Dense(len(alphabet)), ] ) This model learns to write band names one character at a time. It is made of three building blocks which we will cover in more details: • A character embedding layer that maps characters into an embedding space; • A layer of Gated Recursive Units (GRUs) acting as a state machine writing band names one character at a time; • A densely connected layer estimating the probability of the next character from the state of the GRUs. I trained this model using the sparse_categorical_crossentropy loss for eight epochs using mini-batch of size 64. Let's look at what was learned in the process. #### Character embeddings layer The first layer of our neural network maps each character to an embedding of size 64. Here is the embedding that was learned for the character a (with index 38 according to our alphabet): In [60]: model.layers[0].get_weights()[0][38] Out[60]: array([ 0.30702987, -0.63329417, -0.20212476, -0.4470627 , 0.36042994, -0.49842888, 0.2777874 , -0.09102639, 0.19714546, -0.17154549, -0.21538487, 0.40895164, 0.37431315, -0.28506562, -0.44888547, 0.7362037 , -0.15533094, -0.17730029, 0.36867294, -0.3623726 , -0.24717565, 0.44966665, 0.2590245 , -0.3569541 , 0.6784191 , 0.08784037, -0.43929407, 0.07683449, 0.00999499, 0.2224479 , -0.32996455, 0.25540373, 0.436953 , -0.4415921 , -0.2441453 , -0.21282889, -0.13839865, -0.5111227 , 0.55712277, 0.11951732, 0.05748424, -0.24553397, 0.5800741 , 0.21185097, -0.2751697 , -0.16367064, -0.5004835 , -0.3733032 , 0.30201647, -0.25884396, -0.47911265, -0.26210967, 0.20878884, -0.35981387, -0.11836641, 0.27695036, -0.10165487, -0.04859921, -0.4266084 , 0.04561161, 0.19834217, -0.59851754, 0.11871306, -0.3452472 ], dtype=float32) Characters that are used similarly in band names should be close of each other. For example, the 1 is probably closer to 2 than to m. Let's see if this intuition is supported by our data: In [63]: embedding_1 = model.layers[0].get_weights()[0][18] In [64]: embedding_2 = model.layers[0].get_weights()[0][19] In [65]: embedding_m = model.layers[0].get_weights()[0][50] In [68]: np.linalg.norm(embedding_1 - embedding_2) Out[68]: 0.88233894 In [69]: np.linalg.norm(embedding_1 - embedding_m) Out[69]: 3.327488 In [70]: np.linalg.norm(embedding_2 - embedding_m) Out[70]: 3.3125098 We see that our embeddings are behaving as we expect. They put 1 and 2 close together while keeping m further away. A more scalable way to visualize these distance is to use a tool like Embedding Projector. Here is what it looks like: The red dots are all digits, the purple dots are letters and the blue dots are other characters. We see that letters and digits are grouped together as expected. There is also a blue dot for # hanging out with the digits. Inspecting the band names with # we see that it is used in the same context as number validating our intuition: You can explore these embeddings yourself on Embedding Projector. #### GRU layer The GRU layer predicts the probability of a character being added to a prefix. We can express this probability as $$p(x_i | x_{i-1}, x_{i-2}, …, x_1)$$ and the probability of generating a band name as $$p(x_1^n) = \prod_{i = 1}^n p(x_i | x_{i-1}, x_{i-2}, …, x_1)$$ where $$x_1^n$$ is a sequence of $$n$$ characters and $$x_i$$ is the i-th character in the sequence. 
In traditional Natural Language Processing, we approximate the probability of adding a new character using the markovian assumption (we limit the state of the model to the last $$k$$ characters): $$p(x_i | x_1^{i-1}) \approxeq p(x_i | x_{i-k}, x_{i - k + 1}, …, x_{i - 1})$$ Since these models are trained using frequency tables, using small values of $$k$$ helps fight data sparsity. When using Gated Recurrent Units (GRU), we can condition our probability on the GRU's state instead: \begin{gather} gru_i = f(gru_{i-1}, x_{i-1}) \\\ p(x_i | x_1, x_2, …, x_{i-1}) = p(x_i | gru_{i}) \end{gather} where $$gru_{i-1}$$ is a high dimension continuous state (i.e. an embedding) and $$f(\cdot, \cdot)$$ is a state transition function learned by the model. I mapped the final state of a sample our band name dataset using Embedding Projector: We observe many clusters of band names. If the markovian assumption is helpful, band names ending with the same characters should be grouped together. Here are all the bands ending with the letter a: We observe that they are all grouped together in few clusters. Here is the list of bands whose name ends with Cr's where we observe the same thing: If we look at embeddings of bands with the same starting letter, we observe that it is all over the place: We can also look at how a given band name embedding evolve as we are predicting letters. In this example, all red dots are states visited while generating the name glass animals: The state of the band name moves around, which supports our intuition that the GRU layer capture just enough of last few characters to help predict the next character. ### Dense layer The dense layer uses the GRU layer's state to predict the likelihood of each character following the characters generated so far. ## Creating new band names using the model The model we trained gives us the probability of a character given a prefix. We use it in a decoding algorithm to generate band names one character at a time. The simplest decoding algorithms starts with a prefix and greedily sample characters one at a time: def generate( model, prefix: List[int], eos: int, max_length: int): """ Generate a sequence from a recurrent model. :param model: Model to generate the sequence with. :param prefix: Prefix to continue generating with the model. :param eos: Id to identify the end of a sequence (should be 0 most of the time). :param max_length: Maximum lenght of the sequence to generate. """ output_values = [] # Reset the state of the GRU layer (the embedding of the previous # prefix). model.reset_states() input_values = tf.expand_dims(prefix, 0) output_values.extend(prefix) for i in range(max_length): # Compute the distribution over the next character. The # prediction will also affect the state of our GRU layers and # memorize the prefix embedding. predictions = model.predict(input_values, batch_size=1) predictions = tf.squeeze(predictions, 0) # Sample a character from this distribution. x_i = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy() if x_i == eos: # This is the end of the name, we can return it. break # Feed the prediction back into the model to predict the next # character. input_values = tf.expand_dims([x_i], 0) output_values.append(x_i) return output_values I wrote a variant of this algorithm using beam search to generate the search graphs presented in the following section. ## Results I am now ready to pick a name for my band. I just need to prime the model with prefixes. 
Here is what the model outputted for a band name that starts with /rage /: The white ellipses are partial band names that were dropped during the search and the blue ellipses are band names generated by the model. The edges are labeled with a character and its likelihood score. I am Canadian, I might as well have a band name starting with beaver: Finally, I have options if I ever want a vanity project: Given all of these options, my top three contenders are: • Rage and Stars • Beaver Fingers • Alex and the Wilder Trio I will go with a melancholic band names Rage and Stars but will keep Beaver Fingers in mind as a solid contender. ## Wrapping up We trained a model generating new band names from the following components: • A list of existing band names retrieved from Wikidata. I am extremely grateful to this project organizing advocating for an organized knowledge base in a world of embeddings. • A deep neural network generating text one character at a time. We saw that even if we use deep learning, we can still relate the model to classical Natural Language Processing approaches. The model learns to group characters into classes (e.g. digits and letters) and the model learns the markovian hypothesis by itself (i.e. state of words with the same suffix are grouped together). I am grateful for TensorFlow. It amazes me to see that what costed me sweat and blood to implement 12 years ago can now be done in few lines of Python. • A simple decoder that generate a band name one character at a time. Organizing my code base for experimentation was more challenging than I expected. I refactored two or three times and ended up with a mess anyway. I am both happy and ashamed to share the it here. If everything works as expected, you should be able to train your own model using make bandaid PREFIXES="foo "` on a fresh checkout.
# Proteomics/Protein Separations - Chromatography/Chromatography Theory

Chapter written by: Laura Grell and Alexander Butarbutar
Contact llg3875@rit.edu or nbb3924@rit.edu for contributions
Chapter modified by Kai Burnett and Dalia Ghoneim
Contact kab9783@rit.edu or dxg6098@rit.edu

## Chromatography Theory

Chromatography is a method of separating molecules. The method takes advantage of differences between a mobile phase and a stationary phase to separate the different components in a mixture. The target molecules can interact with the stationary phase based on characteristics such as charge, size, and hydrophobicity. There are two theories of chromatography:

1. Plate theory
2. Rate theory

### Plate Theory of Chromatography

Archer John Porter Martin and Richard Laurence Millington Synge created the plate theory of chromatography. The plate theory describes the stationary phase and the mobile phase as being in equilibrium. The partition coefficient K between these two phases can be defined as:

${\displaystyle K={\frac {Concentration\ of\ solute\ in\ stationary\ phase}{Concentration\ of\ solute\ in\ mobile\ phase}}}$

As K increases, the solute spends more time in the stationary phase and takes longer to elute. If chromatography is being performed in a column of fixed length and flow rate, the retention time ${\displaystyle (t_{R})}$ and retention volume ${\displaystyle (V_{r})}$ can be measured and used to determine the value of K.

### Rate Theory of Chromatography

Rate theory was introduced by van Deemter to account for chromatographic behavior that could not be explained by plate theory. Rate theory is based on three terms: path-dependent diffusion (A), longitudinal diffusion (B) and mass transfer (C).

A. Path-dependent diffusion occurs when the packing in a chromatography column is not uniform, so that two identical analytes may migrate differently because one had further to travel than the other.

B. Longitudinal diffusion is the result of materials moving from an area of high concentration (the center of a band on a chromatography column) to an area of low concentration (the outside edges of the same band). Longitudinal diffusion increases with increasing temperature.

C. Mass transfer effects result from the fact that materials take time to equilibrate between the stationary and mobile phases. While that time is elapsing, the mobile phase is still moving. This leads to band broadening.
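The three contributions A, B and C are conventionally combined in the van Deemter equation, H = A + B/u + C u, where H is the plate height and u is the linear velocity of the mobile phase; the equation itself is standard chromatography theory, although the chapter above does not write it out. The sketch below is illustrative only, with made-up coefficients.

```python
# Illustrative van Deemter curve: plate height H versus mobile-phase velocity u.
# H = A + B/u + C*u ; the coefficients below are invented for illustration.

A, B, C = 1.0, 10.0, 0.05   # path-dependent diffusion, longitudinal diffusion, mass transfer

def plate_height(u: float) -> float:
    return A + B / u + C * u

# The minimum plate height (best efficiency) occurs at u_opt = sqrt(B / C).
u_opt = (B / C) ** 0.5
print(f"u_opt = {u_opt:.1f}, H_min = {plate_height(u_opt):.2f}")

for u in (2, 5, 10, 14, 20, 40):
    print(f"u = {u:>4}: H = {plate_height(u):.2f}")
```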
# pygplates.ReconstructionTreeBuilder

class pygplates.ReconstructionTreeBuilder

Bases: Boost.Python.instance

Enable incremental building of a reconstruction tree by inserting total reconstruction poles.

The following example demonstrates the general procedure for incrementally building a ReconstructionTree:

builder = pygplates.ReconstructionTreeBuilder()
...
builder.insert_total_reconstruction_pole(fixed_plate_id1, moving_plate_id1, total_rotation_moving_plate_id1_relative_fixed_plate_id1)
builder.insert_total_reconstruction_pole(fixed_plate_id2, moving_plate_id2, total_rotation_moving_plate_id2_relative_fixed_plate_id2)
...
reconstruction_tree = builder.build_reconstruction_tree(anchor_plate_id, reconstruction_time)

The ReconstructionTree.__init__() method of class ReconstructionTree uses ReconstructionTreeBuilder to create itself from rotation features.

__init__()

Methods

__init__()
build_reconstruction_tree(anchor_plate_id, ...): Builds a ReconstructionTree from the total reconstruction poles inserted via insert_total_reconstruction_pole().
insert_total_reconstruction_pole(...): Insert the total reconstruction pole associated with the plate pair moving_plate_id and fixed_plate_id.

build_reconstruction_tree(anchor_plate_id, reconstruction_time)

Builds a ReconstructionTree from the total reconstruction poles inserted via insert_total_reconstruction_pole().

Parameters:
anchor_plate_id (int) – the anchored plate id of the reconstruction tree
reconstruction_time (float or GeoTimeInstant) – the reconstruction time of all the total reconstruction poles inserted

Raises:
InterpolationError if reconstruction_time is distant past or distant future

The top (root) of the tree is the plate anchor_plate_id. The total reconstruction poles inserted via insert_total_reconstruction_pole() are all assumed to be for the time reconstruction_time, although this is not checked.

NOTE: This method resets the state of this ReconstructionTreeBuilder to where it was before any calls to insert_total_reconstruction_pole(). So a second call to this method (without any intervening calls to insert_total_reconstruction_pole()) will result in an empty reconstruction tree.

insert_total_reconstruction_pole(fixed_plate_id, moving_plate_id, total_reconstruction_pole)

Insert the total reconstruction pole associated with the plate pair moving_plate_id and fixed_plate_id.

Parameters:
fixed_plate_id (int) – the fixed plate id of the total reconstruction pole
moving_plate_id (int) – the moving plate id of the total reconstruction pole
total_reconstruction_pole (FiniteRotation) – the total reconstruction pole

The total reconstruction pole is associated with the reconstruction time of the ReconstructionTree that will be built by build_reconstruction_tree().

A total reconstruction pole can be obtained from a (rotation) feature of type ‘gpml:TotalReconstructionSequence’ and inserted into a reconstruction tree builder:

fixed_plate_id, moving_plate_id, total_reconstruction_pole = rotation_feature.get_total_reconstruction_pole()
interpolated_rotation = total_reconstruction_pole.get_value(reconstruction_time)
if interpolated_rotation:
    builder.insert_total_reconstruction_pole(
        fixed_plate_id, moving_plate_id,
        interpolated_rotation.get_finite_rotation())
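As a usage sketch, the two documented methods can be combined with the documented way of extracting total reconstruction poles from rotation features. The snippet below is illustrative rather than copied from the pygplates documentation: the file name is hypothetical, loading rotations with pygplates.FeatureCollection is an assumption, and it also assumes every feature in the file is a 'gpml:TotalReconstructionSequence'.

```python
# Build a reconstruction tree at 50 Ma, anchored on plate 0, from a rotation file.
import pygplates

rotation_features = pygplates.FeatureCollection('rotations.rot')  # hypothetical file name
reconstruction_time = 50.0
anchor_plate_id = 0

builder = pygplates.ReconstructionTreeBuilder()
for rotation_feature in rotation_features:
    # Each total reconstruction sequence gives one fixed/moving plate pair.
    fixed_plate_id, moving_plate_id, total_reconstruction_pole = \
        rotation_feature.get_total_reconstruction_pole()
    # Interpolate the rotation sequence at the reconstruction time.
    interpolated_rotation = total_reconstruction_pole.get_value(reconstruction_time)
    if interpolated_rotation:
        builder.insert_total_reconstruction_pole(
            fixed_plate_id, moving_plate_id,
            interpolated_rotation.get_finite_rotation())

reconstruction_tree = builder.build_reconstruction_tree(anchor_plate_id, reconstruction_time)
```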
## Special topic

Since the middle of the 20th century, research on complex systems has developed rapidly and become a prominent new field with broad applications. A complex system may be complex in its structure, complex in its evolution, or, in most cases, both. Unlike the regular media usually treated by traditional physics, many complex systems have complicated structures; the complex network structures that have attracted so much attention in recent years are the most typical example. At the same time, complex systems can also display diverse and complicated evolutionary behavior. Even when the structure of a system is not complex, the nonlinear interactions within it can produce complex dynamics, including instabilities of every kind, rich pattern dynamics, and all sorts of self-organization, emergence and evolutionary behavior.

Physics has been deeply involved in the study of complex systems from the very beginning, and statistical physics is without doubt the principal tool for studying and understanding them. Research on complex systems is closely tied to two major trends in contemporary science. The first is the crossing and merging of different disciplines. In recent years physics and mathematics have penetrated ever more deeply into other fields, especially biology and the social sciences, turning disciplines that were traditionally largely qualitative into data-based quantitative sciences, and almost all of this interdisciplinary research falls within the scope of complex systems. The second is the rapid development and application of big-data science. Thanks to dramatic progress in data collection and storage based on the Internet and the Internet of Things, the amount of usable data is growing explosively. These data contain an enormous amount of useful information about nature and society, and exploiting them properly can bring great and ever-growing wealth. However, the systems that generate these data, and the systems that may be influenced by them, are usually complex systems whose behavior is highly unpredictable, so this wealth is not easy to obtain. Studying complex systems in depth and developing effective methods of data analysis are the key to making good use of this potential wealth.

Statistical physics is a powerful means of attacking all of the above difficulties and problems. Over its long history of dealing with trajectories and states that cannot be predicted exactly, statistical physics has developed a wealth of ideas, methods and technical tools, which have provided, and will continue to provide, powerful instruments for the study of complex systems. In turn, the many new features of the structure and behavior of complex systems provide a strong impetus for innovation in statistical physics.

For this special topic we invited experts working at the frontier of the field to contribute 18 research and review articles presenting their latest progress and results. The contents include studies of complex systems in physics as well as in biology, economics, industry and other interdisciplinary areas; discussions of macroscopic classical systems as well as explorations of complex behavior in quantum systems; papers on the fundamental statistical theory of complex-system behavior, and papers analyzing synchronization, pattern dynamics and their control in the evolution of complex systems. Several papers deal with complex networks: the formation and stability of network structures, the inference of network structure from data generated on networks, information spreading on networks, and human activity, economic evolution and the functioning of society on networked structures. Statistical physics and complex systems form a vast field, and the papers in this special topic reflect the authors' own research interests and insights, touching on only a small part of it. Nevertheless, we hope that the results presented here will strengthen exchanges among Chinese researchers in this field, attract interested young researchers and students to work in it, and raise the level of research in this field in China.

##### Topics

2020, 69 (8): 1-5.
2020, 69 (8): 080101. doi: 10.7498/aps.69.080101

###### SPECIAL TOPIC—Statistical physics and complex systems

2020, 69 (8): 080201. doi: 10.7498/aps.69.20191584

Rank aggregation aims to combine multiple rank lists into a single one, which has wide applications in recommender systems, link prediction, metasearch, proposal selection, and so on. Some existing studies have summarized and compared different rank aggregation algorithms. However, most of them cover only a few algorithms, the data used to test the algorithms do not have clear statistical properties, and the metrics used to quantify the aggregated results have certain limitations. Moreover, every algorithm claims to be superior to existing ones when it is proposed, while the baseline algorithms, the testing samples, and the application scenarios differ from case to case. Therefore, it is still unclear which algorithm is better suited for a particular task. Here we review nine rank aggregation algorithms and compare their performance in aggregating a small number of long rank lists. We use an algorithm to generate different types of rank lists with known statistical properties, and we adopt a more reliable metric to quantify the aggregation results. We find that despite the simplicity of heuristic algorithms, they work quite well when the rank lists are full and have high similarities. In some cases they can reach or even surpass the optimization-based algorithms in performance. The number of ties in a list reduces the quality of the consensus rank and increases its fluctuations. The quality of the aggregated rank changes non-monotonically with the number of rank lists that need to be combined. Overall, the algorithm FAST outperforms all others for the three different rank types and can satisfactorily complete the task of aggregating a small number of long rank lists.

2020, 69 (8): 080203. doi: 10.7498/aps.69.20200170

The brain is a typical complex system with characteristics such as self-adaptation, self-organization, and multistability. The activity of the default mode network (DMN), a crucial functional subnetwork of the human brain in the resting state, obeys typical non-equilibrium statistical mechanical processes in which the system continually switches among multiple metastable states.
Revealing the underlying dynamical mechanism of these processes has important scientific significance and clinical application prospects. In this paper, according to the blood oxygen level dependent (BOLD) signals obtained from functional magnetic resonance imaging (fMRI), we build an energy landscape, disconnectivity graph and transition network to explore the non-equilibrium processes of DMN switching among different attractors in resting state. Taking the activities of high-level visual and auditory cortices for examples, we verify the intimate relationship between the dynamics of DMN and the activity modes of these external brain regions, through comparing the distributions in state space and the algorithms such as XGBoost and deep neural networks. In addition, we analyze the interaction between various DMN regions in the resting state by using the techniques such as compressive-sensing-based partial correlation and convergence cross mapping. The results in this paper may presnt new insights into revealing the dynamics of the intrinsic non-equilibrium processes of brain in resting state, and putting forward clinically significant biomarkers for brain dysfunction from the viewpoint of dynamics. 2020, 69 (8): 080502. doi: 10.7498/aps.69.20191968 Abstract + Rhythmic behaviors, i.e. temporally periodic oscillations in a system, can be ubiquitously found in nature. Interactions among various rhythms can lead to self-organized behaviors and synchronizations. This mechanism is also responsible for many phenomena such as nonlinear waves, spatiotemporal patterns, and collective behaviors in populations emerging in complex systems. Mathematically different oscillations are described by limit-cycle oscillators (pacemakers) with different intrinsic frequencies, and the synchrony of these units can be described by the dynamics of coupled oscillators. Studies of microscopic dynamics reveal that the emergence of synchronization manifests itself as the dimension reduction of phase space, indicating that synchrony can be considered as no-equilibrium phase transition and can be described in terms of order parameters. The emergence of order parameters can be theoretically explored based on the synergetic theory, central manifold theorem and statistical physics. In this paper, we discuss the order-parameter theory of synchronization in terms of statistical physics and set up the dynamical equations of order parameters. We also apply this theory to studying the nonlinear dynamics and bifurcation of order parameters in several typical coupled oscillator systems. 2020, 69 (8): 080503. doi: 10.7498/aps.69.20191934 Abstract + Spiral waves are ubiquitous in diverse physical, chemical, and biological systems. Periodic external fields, such as polarized electric fields, especially circularly polarized electric fields which possess rotation symmetry may have significant effects on spiral wave dynamics. In this paper, control of spiral waves in excitable media under polarized electric fields is reviewed, including resonant drift, synchronization, chiral symmetry breaking, stabilization of multiarmed spiral waves, spiral waves in subexcitable media, control of scroll wave turbulence, unpinning of spiral waves in cardiac tissues, control of spiral wave turbulence in cardiac tissues, etc. 2020, 69 (8): 080505. doi: 10.7498/aps.69.20200450 Abstract + Casimir force in quantum electrodynamics is the representation of zero point energy of vacuum. 
2020, 69 (8): 080505. doi: 10.7498/aps.69.20200450 Abstract + Casimir force in quantum electrodynamics is the representation of zero point energy of vacuum. Depending on the type of fluctuation medium, the generalized Casimir force covers a wide spectrum of topics in physics, such as quantum, critical, Goldstone-mode, and non-equilibrium Casimir forces. In general, long-range correlated fluctuations and constraints are two conditions for generating the Casimir force. In this paper, through a survey of the development of Casimir physics, we discuss several types of Casimir forces and several regularization methods. We end the paper with an outlook for the further development of Casimir physics in the future. 2020, 69 (8): 080506. doi: 10.7498/aps.69.20200360 Abstract + Quantum scarring is an intriguing phenomenon in quantum or wave dynamics in which the wavefunction takes an exceptionally large value around an unstable periodic orbit. It has attracted much attention and advances the understanding of semiclassical quantization. Most previous research involving quantum scars focuses on hard-wall quantum billiards. Here we investigate a quantum billiard with a smooth confinement potential which possesses complex classical dynamics. We demonstrate that the semiclassical quantization approach works well for both the stable and unstable classical periodic orbits, besides the fact that the shape of the orbits varies as the energy increases or even the stability switches. The recurrence rule of the quantum scars in this complex soft-wall billiard differs from that of the hard-wall nonrelativistic quantum billiard, such as being equally spaced in energy instead of being equally spaced in the square root of energy. These results complement the previous knowledge and may be used for understanding the measurements of density of states and transport properties in two-dimensional electron systems with random long-range impurities. 2020, 69 (8): 080507. doi: 10.7498/aps.69.20200561 Abstract + In biological active systems there commonly exist active rod-like particles under elastic confinement. Here in this work, we study the collective behavior of self-propelled rods confined in an elastic semi-flexible ring. By changing the density of particles and the noise level in the system, it is clearly shown that the system has an ordered absorbing phase-separated state of self-propelled rods and the transition to a disordered state as well. The radial polar order parameter and asphericity parameter are characterized to distinguish these states. The results show that the gas near the central region of the elastic confinement has a saturated density that co-exists with the absorbed liquid crystal state at the elastic boundary. In the crossover region, the system suffers an abnormal fluctuation that drives the deformation of the elastic ring. The non-symmetric distribution of particles in the transition region contributes significantly to the collective translocation of the elastic ring. 2020, 69 (8): 084203. doi: 10.7498/aps.69.20191721 Abstract + Pedestrian tracking is a hotspot and a difficult topic in computer vision research. Through the tracking of pedestrians in video materials, trajectories can be extracted to support the analysis of individual or collective behavior dynamics. In this review, we first discuss the difference between pedestrian tracking and pedestrian detection. Then we summarize the development of traditional tracking algorithms and deep learning-based tracking algorithms, and introduce classic pedestrian dynamic models.
In the end, typical applications, including intelligent monitoring, congestion analysis, and anomaly detection are introduced systematically. With the rising use of big data and deep learning techniques in the area of computer vision, the research on pedestrian tracking has made a leap forward, which can support more accurate, timely extraction of behavior patterns and then to facilitate large-scale dynamic analysis of individual or crowd behavior. ## COVER ARTICLE 2020, 69 (8): 084701. doi: 10.7498/aps.69.20200362 Abstract + Polymer microparticles with various compositions and morphologies have recently received much attention. Their surface-roughness significantly affects the physical and chemical properties, which especially counts in regulating the interaction between biological materials and living systems. In this paper, we design a polystyrene microsphere with controllable surface textures. At first, a microfluidic device is used to generate droplets with uniform size containing the hydrophobic polymer and a co-surfactant. During the volatilization of the organic solvent, the shrinking droplets appear to be unstable at the interface. Thus, the surface area increases spontaneously, and microspheres with wrinkles on the surface are obtained after being solidified. The results show that tuning the concentration of the co-surfactant and the rate of solvent evaporation can effectively regulate the surface roughness of the microspheres. Circulating tumor cell capture experiments reveal that this textured structure can facilitate the cell adhesion and increase the number of the captured cells. These features indicate that the coarse microspheres possess a promising application prospect in the field of biomedical analysis. 2020, 69 (8): 086102. doi: 10.7498/aps.69.20200332 Abstract + In the B4 phase of bent-core liquid crystals, smectic layers of tilted achiral bent-core molecules are chiral and polar, which, driven by intra-layer structural mismatch, eventually twist into helical nanofilaments. We design a NOBOW/hexadecane organogel system, which is different from traditional organogel system, and the studied organogels show reversible gel-liquid transitions under temperature cycles. At high temperature, the NOBOW molecules dissolve in hexadecane and the storage modulus and viscous modulus show typical liquid characteristics. At low temperature, the mobility of NOBOW molecules decreases and the storage modulus of the organogels increases as the temperature decreases. We conduct a rheology experiment to systematically investigate the viscoelasticity of the organogel to understand the property of the organogel and develop the application in soft matter. The viscoelastic studies of the organogels reveal that the helical nanofilaments are internally strained and their 3D networks are relatively stiff, which provides an in-depth insight into the properties of the organogels and paves the way for their applications in soft matter. 2020, 69 (8): 088901. doi: 10.7498/aps.69.20191817 Abstract + Link prediction in complex networks has attracted much attention in recent years and most of work focuses on proposing more accurate prediction algorithms. In fact, “how difficultly the target network can be predicted” can be regarded as an important attribute of the network itself. In this paper it is intended to explain and characterize the link predictability of the network from the perspective of spectrum. 
By analyzing the characteristic spectrum of the network, we propose the network link predictability index. Through calculating the index, it is possible to learn how difficultly the target network can be predicted before choosing algorithm, and to solve the problem whether the network is unpredictable or the algorithm is inappropriate. The results are useful for the selecting and matching the complex network and link prediction algorithms. 2020, 69 (8): 088902. doi: 10.7498/aps.69.20191973 Abstract + In recent years, the study of partial synchronization of coupled oscillators in complex networks has attracted great attention. The underlying reason is both the extensive existence of the patterns of partial synchronization in brain network and their close relationship to brain functions of cognition and memory. In this paper, we briefly review the research progress in this field. According to the researches by different groups, we classify them as three types, i.e. chimera state, remote synchronization, and clustering synchronization. We mainly discuss the conditions of these three states, as well as their models, detections, and their applications in biology. We discuss the relationship among the three types of states and give some outlooks for future studies. 2020, 69 (8): 088903. doi: 10.7498/aps.69.20191686 Abstract + Many spatial mobility of people, goods and information, such as human travel, population migration, commodity trade, information communication, social interaction and scientific cooperation, follow a law similar to Newton’s law of universal gravitation. This law, named social gravity law, is that the flow between two locations is directly proportional to the product of the vitality of these two locations, and inversely proportional to a power function of their distance. The gravity model established by analogy with the gravity law has also been widely used to predict trip distribution, population migration, interregional trade flows, etc. But why do many complex social systems have such a simple law? It is an interesting and valuable issue. This paper reviews the research on exploring the roots of the social gravity law from various perspectives, including statistical physics, microeconomics, and game theory. 2020, 69 (8): 088904. doi: 10.7498/aps.69.20192000 Abstract + In real life, most of the infrastructure networks closely related to the national economy and people's livelihood do not exist independently, but are interconnected with or dependent on each other, so the multilayer network model is proposed to study the independent complex systems and infrastructures. When the nodes in the multilayer network suffer initial failure or attack, the cascade occurs due to the interaction between the “intra-layer” and “inter-layer”, and the failure can propagate in the network layer and across the layers iteratively, so that the scale of the failures is enlarged gradually. As a result, many multilayer networks are more fragile than single networks. The cascading failure of multilayer network usually brings very serious catastrophes to our society. So, conducting the research on preventing the multilayer network from cascading failure and recovering is of great significance. As far as the prevention of cascading failure is concerned, what are mainly included are the strategies such as the fault detection, the protection of important nodes, the optimization of the coupling method of networks, and the backup of nodes. 
As for the recovery of multi-layer network, included mainly are the strategies such as common boundary node recovery, the idle connected link recovery, the link addition, the priority recovery of important nodes, the topology perturbation, and the repairing of localized attack and adaptive link. 2020, 69 (8): 088905. doi: 10.7498/aps.69.20191970 Abstract + In this paper, we propose a new type of relativistic regional innovation index by using the international patent application data. Based on the super-linear relationship between regional innovation and economic development, the new index can eliminate the influence of economic development level on innovation capabilities, and can effectively achieve the comparison of innovation capabilities among economies at different economic development levels. This new index is quite simple, and points out a series of new findings that are sharply different from the traditional cognitive phenomena, e.g. the index shows that the technological innovation capabilities of mainland China are among the highest in the world in 1980s. Moreover, the use of this new index not only can efficiently explain the economic growth of countries in the world at a higher level, but also find that there is a novel 20-year business cycle in the correlation between the index and economic growth rate. These results show that the index, as a simple single indicator, can achieve a higher degree of explanatory ability with minimal data dependence. This new index not only repositions the innovation capacity of world’s economies, but also provides a new insight into an in-depth understanding of the relationship between innovation and economic development, and implies the development potential and application space such a kind of relativistic economic indicator. 2020, 69 (8): 088906. doi: 10.7498/aps.69.20200001 Abstract + Open complex systems far from equilibrium widely exist in the nature and the fields of society and technology, which are the main research objects of complexity science. Through the exchange of energy and material with the outside world, complex systems can form a variety of internal structures, orders and laws by self-organization behaviors, which poses an arduous challenge to the understanding and predicting complex systems. With the improvement of experimental technology and the progress of science and technology, the data reflecting the mechanism of various complex systems are increasing exponentially, thereby providing new opportunities for studying complex systems. Revealing the structures and dynamics of complex systems from the measured data is an inverse problem in the field of physics, which is the premise of understanding complex systems, predicting the evolution of system state, and regulating system state. However, it is very difficult to solve this inverse problem due to the diversity and complexity of complex system. Therefore, we need to fully mine the hidden knowledge and deep mechanism in the data with the help of interdisciplinary integration. In this paper we briefly review the research results of complex system in recent years, especially the reconstruction of complex network structures, hoping to inspire the innovation to the inverse problem of complex systems. Meanwhile, we hope that researchers in different fields can pay much attention to the inverse problems of complex systems, promote the cross and integration of nature, society, economy, biology and technology, and solve the scientific problems that we are facing. 
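As a drastically simplified illustration of the network-reconstruction inverse problem discussed above, the sketch below infers links by thresholding pairwise correlations of simulated node time series. It is a toy baseline under strong assumptions (linear dynamics, plentiful data, a hand-picked threshold), not one of the reconstruction methods reviewed in the paper.

```python
# Toy network reconstruction from node time series via thresholded correlations.
# A simplified baseline for illustration only, not the methods reviewed above.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_steps = 20, 5000

# Ground-truth random network (symmetric, no self-loops).
adjacency = (rng.random((n_nodes, n_nodes)) < 0.15).astype(float)
adjacency = np.triu(adjacency, 1)
adjacency = adjacency + adjacency.T

# Simulate noisy linear diffusion-like dynamics on the network.
x = rng.standard_normal(n_nodes)
series = np.empty((n_steps, n_nodes))
for t in range(n_steps):
    x = 0.5 * x + 0.1 * adjacency @ x + rng.standard_normal(n_nodes)
    series[t] = x

# Reconstruct: threshold the absolute correlation matrix.
corr = np.corrcoef(series.T)
np.fill_diagonal(corr, 0.0)
estimate = (np.abs(corr) > 0.2).astype(float)

true_edges = adjacency > 0
recall = (estimate[true_edges] == 1).mean()
print("fraction of true edges recovered:", round(float(recall), 3))
```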
2020, 69 (8): 088907. doi: 10.7498/aps.69.20191954 Abstract + With the development of power electronic technology and requirement for clean energy, the traditional power systems which are dominated by synchronous generators are gradually changing into the power-electronic-based power systems with diversified power electronic equipment. The power systems are facing a great revolution in their primary equipment, and this has not happened in the past one hundred years. In recent years, with great increasing penetration of power electronic devices into power grids, the large-scale blackouts caused by power electronic devices have been reported, which seriously threatens the safe and stable operation of power systems. Under the above background, in this paper we first introduce several methods of analyzing the traditional power system transient stability from the equal area criterion for the single machine infinite bus system to several Lyapunov function based direct methods for multi-machine systems. Then we introduce some of our recent work on the nonlinear modeling and analysis of a key component of power-electronic-based power systems, voltage source converter (VSC), and propose a multiple machine system model including power electronic equipment and traditional synchronous machines. Finally, we illustrate the transient characteristics of the power electronic devices, and summarize the basic problems and challenges for the transient stability of power-electronic-based power systems. We hope that these basic problems in power-electronic-based power system dynamics including nonlinearity, multi-time-scale, and complexity could arouse the general interest of researchers in the fields of complex systems and statistical mechanics. 2020, 69 (8): 088908. doi: 10.7498/aps.69.20191776 Abstract + With the rapid development of mobile communication and Internet technologies, online live streaming has gradually become popular for information communication and entertainment in the new media environment. Live streaming has been widely used in teaching, reality show, E-sports games and events, brand marketing and other aspects. With the active participation of millions of streamers and hundreds of millions of viewers, massive online crowd behavior activity data are generated, which offers rich experimental scenarios for large-scale crowd behavior dynamics research, live streaming channel recommendation and online community evolution. In this paper, we summarize the relevant research literature of live streaming, and review current studies from a comprehensive list of aspects: workload pattern, viewers and streamers behavior, community network discovery and analysis, etc. We summarize the temporal and spatial patterns of live streaming platform workload, heavy tailed effect of large-scale crowd behavior in live streaming platform, etc. We believe that the future work on live streaming can be directed in the examination of formation and evolution mechanism of various community networks formed by large-scale users, as well as the recommendation and detection of live streaming content. ###### GENERAL 2020, 69 (8): 080202. doi: 10.7498/aps.69.20191829 Abstract + The phase separation phenomenon between different matters plays an important role in many science fields. And the high order nonlinear Cahn-Hilliard (C-H) equation is often used to describe the phase separation phenomenon between different matters. 
However, it is difficult to solve the high-order nonlinear C-H equations by theoretical methods and grid-based methods. Therefore, in this work the meshless methods are addressed, and a local refinement finite pointset method (LR-FPM) is proposed to numerically investigate the high-order nonlinear C-H equations with different boundary conditions. Its construction process is as follows. 1) The fourth derivative is decomposed into two second derivatives, and then the spatial derivative is discretized by FPM based on the Taylor series expansion and weighted least square method. 2) The local refinement and quintic spline kernel function are employed to improve the numerical accuracy. 3) The Neumann boundary condition with high-order derivatives is accurately imposed when solving the local linear equation sets. The 1D/2D C-H equations with different boundary conditions are first solved to show the ability of the LR-FPM, and the analytical solutions are available for comparison. Meanwhile, we also investigate the numerical error and convergence order of LR-FPM with uniform/non-uniform particle distribution and local refinement. Finally, 1D/2D C-H equations without analytical solutions are predicted by using LR-FPM, and compared with the FDM results. The numerical results show that the implementation of the boundary condition is accurate, and that the LR-FPM indeed has a higher numerical accuracy and convergence order, is more flexible and applicable than the grid-based FDM, and can accurately predict the time evolution of the nonlinear diffusive phase separation phenomenon between different materials. ## EDITOR'S SUGGESTION 2020, 69 (8): 080301. doi: 10.7498/aps.69.20192001 Abstract + A qubit encoded in single neutral atoms is a basic experimental platform for studying quantum computation, quantum information processing and quantum simulation.
The extension of the coherence time has been an important task in recent years. On the basis of a single neutral cesium atom trapped in a blue-detuned dipole trap, we study the coherence time of a qubit, which is encoded in a pair of magnetically insensitive ground states of the cesium atom ($\left| {\rm{0}} \right\rangle = \left| {{\rm{6}}{{\rm{S}}_{1/2}},F = 3,{m_F} = - 1} \right\rangle$ and $\left| 1 \right\rangle = \left| {{\rm{6}}{{\rm{S}}_{1/2}},F = 4,{m_F} = + 1} \right\rangle$), in the “magic” magnetic field condition. By adopting a two-photon process, in which a microwave photon and an RF photon are used, we obtain the coherent manipulation of the qubit. The dependence of the differential energy shift on magnetic field is experimentally studied, and the “magic” magnetic field is determined. In this magic condition, the first derivative of the differential energy shift between $\left| {\rm{0}} \right\rangle = \left| {{\rm{6}}{{\rm{S}}_{1/2}},F = 3,{m_F} = - 1} \right\rangle$ and $\left| 1 \right\rangle = \left| {{\rm{6}}{{\rm{S}}_{1/2}},F = 4,{m_F} = + 1} \right\rangle$ in the quantized magnetic field is zero, which means that the qubit is immune to the fluctuation of the magnetic field and the coherence time can be substantially prolonged. The experimentally obtained magic magnetic field is B = 1.4(2) Gauss, which is in good agreement with the theoretical calculation value B = 1.393 Gauss. Finally, we measure the qubit coherence time by setting the quantized magnetic field to be at the magic point B = 1.396 Gauss. The qubit coherence time is measured to be 11(1) ms by a Ramsey interferometer, where the main decoherence factor is the inhomogeneous dephasing due to the atomic motion in the dipole trap. This decoherence factor can be dramatically suppressed by a spin-echo process where an additional π-pulse is inserted in between the two π/2 pulses. At the magic magnetic point the qubit coherence time can be extended to 1 s by the spin-echo method. 2020, 69 (8): 080302. doi: 10.7498/aps.69.20200025 Abstract + The ability to support frictionless motion is one of the manifestations of superfluidity. An impurity immersed in a superfluid can move without dissipation below the critical velocity, which, according to the Landau criterion, is determined by the elementary excitation spectrum of the system. In a quantum gas of ultracold atoms, the critical velocity can be measured by stirring a laser beam through the atomic cloud, and the emergence of dissipation can be observed via the heating effect above the threshold stirring speed. Recently, such a technique was exploited to study the superfluidity of the Bose-Einstein condensate (BEC) of 162Dy atoms with dipole-dipole interactions. It is shown that both the critical velocity and the heating rate reflect the anisotropy of the underlying dipolar excitation spectrum. In this work, we theoretically investigate the anisotropic dissipation of a point-like impurity moving through a dipolar BEC. For the motion along the principal axis, the dissipation rate above the critical velocity is analytically derived according to the linear response theory. At a given reduced velocity, we find that the dissipation rate is higher in the direction parallel to the dipole moment, which qualitatively explains the recent experimental observation in dysprosium atoms.
Moreover, in the moving direction away from the principal axis, the asymptotic expressions for the dissipation rate are obtained in the high-speed limit, as well as in the regime close to the dissipation threshold. By combining these analytical results with the numerical calculations, we conclude that, in a dipolar BEC, the angular dependence of the dissipation rate always shows the same anisotropy as the critical velocity. Our predictions can be examined in the current experiments with cold atoms, and the results presented here may be also helpful in understanding the anisotropic superfluidity in other systems. 2020, 69 (8): 080501. doi: 10.7498/aps.69.20191964 Abstract + Hofstadter ladder describes a Boson ladder under a uniform magnetic field and supports nontrivial energy band and fractional quantum Hall states. Staggered hopping is illuminated from the SSH model and proved to have non-trivial effects on current phases. We introduce staggered hopping on Hofstadter ladder to study the novel current phases. Exact diagonalization (ED) and density matrix renormalization group (DMRG) methods have been employed to study the current phases of the ladder in noninteraction and strong interaction (hard core boson) cases. By observing energy singularities and the new flux patterns when increasing the staggered hopping strength, we extend Meissner and vortex phase to horizontal current phase, vertical current phase and vortex phase. The horizontal current phase has stronger chiral currents in horizontal direction, which is the long direction of the ladder. The vertical current phase has stronger chiral currents in vertical direction. The above two phases do not break translational invariance while the vortex phase does. The current patterns of horizontal current phase are proved to be continuously deformed form the Meissner phase, and the vortex phase has similar signatures. The vertical current phase is only visible when the hopping is staggered. These phases generally exist in noninteraction regimes and interacting superfluid regimes. We have defined new quantities (i.e. current inhomogeneity and nearest overlap) to characterize different quantum phases. In noninteraction case, the horizontal current phase go through the vortex phase to enter the vertical current phase by second order phase transitions, but in strong interaction case such a change can be directly made in a first order phase transition. The direct transition is made in higher fillings with almost identical flux. Surprisingly, the three phases turn into only two phases in Mott regimes, and the phase transition between the horizontal current phase and the vertical current phase has disappeared. We call the new phase as Mott-homogenous phase. The staggered hopping has exotic effects in strong interaction case. For n = 0.25 filling, the staggered hopping shrinks the region of vortex phases and produces Mott-SF transition. When the staggered hopping is weak, the system achieves Mott-SF transition just by varying the flux. This research can enrich current phases in lattice systems and illuminate further studies on chiral currents. 2020, 69 (8): 080504. doi: 10.7498/aps.69.20191774 Abstract + With the improvement of people's living standards, large-scaled public activities have increased considerably, and the emergency probability has increased greatly. When an emergency occurs, the emergency evacuation can effectively reduce casualties and economic losses. 
Therefore, how to quickly evacuate crowds is a current research hotspot in this field. The path planning of emergency evacuation is one of the effective ways to implement the crowd evacuation. Aiming at the problem of path planning for emergency evacuation and taking the grid map as the background, the ant colony cellular optimization (ACCO) algorithm is proposed as the path planning algorithm based on the cellular automata theory and ant colony algorithm. Firstly, in order to solve the problem of inconsistent time steps in the quadrilateral grid map, the grid map based on hexagonal cells is established and the ACCO algorithm is developed based on the hexagonal grid map. And the method of solving grid coordinates is given. Then, in order to improve the convergence speed and search ability of the ACCO algorithm, the static field is used to optimize the heuristic function, and the segment update rule is used to optimize the pheromone update method. Finally, the parameters of the ACCO algorithm are optimized through the particle swarm optimization (PSO) algorithm. The method of designing the fitness evaluation function is proposed, and the optimal combination of parameters of the ACCO algorithm is implemented according to the fitness function. In order to verify the validity and effectiveness of the algorithm proposed in this research and also to systematically verify the optimization strategy, in this research the exhibition hall on the B-deck of a large cruise ship is used as the engineering background, and the traditional algorithm and the ACCO algorithm are adopted to perform the simulations. The simulation results show that compared with the traditional quadrilateral grid, the hexagonal grid proposed in this research unifies the simulation time step and can be used as the division method of the simulation environment. At the same time, the ACCO algorithm can effectively perform the evacuation path planning, and the optimization strategy proposed in this research not only accelerates the search speed, but also increases the solution space and improves the search ability, which can effectively avoid falling into the local optimal solution. 2020, 69 (8): 080701. doi: 10.7498/aps.69.20200285 Abstract + In this paper, a simple susceptible-infected (SI) model is built for simulating the early phase of the COVID-19 transmission process. By using the data collected from the newest epidemiological investigation, the parameters of the SI model are estimated and compared with those from some other studies. The population migration data during Spring festival in China are collected from Baidu.com and also extracted from different news sources, the migration characteristic of Wuhan city in the early phase of the epidemic situation is captured, and substituted into a simple difference equation model which is modified from the SI model for supporting migrations. Then several simulations are performed for the spatiotemporal transmission process of COVID-19 in China. The following conclusions are drawn from the simulations and experiments. 1) With 95% confidence, the infection rate of COVID-19 is estimated to be in a range of 0.2068–0.2073 in the general situation, and the corresponding basic reproduction number R0 is estimated to be in a range of 2.5510–2.6555. A case study shows that under an extreme condition, the infection rate and R0 are estimated to be 0.2862 and 3.1465, respectively.
2) The Pearson correlation coefficient between the Baidu migration index and the number of travelers sent by railway is 0.9108, which indicates a strong linear correlation between them; thus it can be deduced that the Baidu migration index is an efficient tool for estimating the migration situation. 3) The epidemic arrival times for different provinces in China are estimated via simulations; the estimation error is no more than 1 day for 41.38% of the provinces, no more than 3 days for 79.31%, and no more than 5 days for 95.55%. The average estimation error is 2.14 days.
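To make the discrete-time SI setup described above concrete, here is a minimal sketch of an SI difference-equation model; the infection rate is taken from the range quoted in the abstract, while the population size, seed infections, and time horizon are illustrative assumptions rather than values from the paper.

```python
# Minimal discrete-time susceptible-infected (SI) model, for illustration only.
# beta is taken from the range quoted above; N, I0 and the horizon are
# hypothetical values, not the ones used in the paper.
def si_model(beta=0.207, N=10_000_000, I0=40, days=60):
    """Iterate I_{t+1} = I_t + beta * I_t * S_t / N and return daily infected counts."""
    infected = [float(I0)]
    for _ in range(days):
        i = infected[-1]
        s = N - i
        infected.append(i + beta * i * s / N)
    return infected

if __name__ == "__main__":
    curve = si_model()
    for day in (0, 10, 20, 30):
        print(f"day {day:2d}: ~{curve[day]:,.0f} infected")
```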
###### ATOMIC AND MOLECULAR PHYSICS 2020, 69 (8): 083401. doi: 10.7498/aps.69.20200132 Abstract + The C(3P) + H2 → CH+H reaction in a collision energy range of 1.0–2.0 eV with the initial state $\nu = 0{\rm{ }},j = 0$ is investigated based on the new potential energy surface (PES) by using the Chebyshev wave packet method. All partial wave contributions up to J = 60 are calculated explicitly by the coupled state (CS) approximation method and the Coriolis coupling (CC) effect. Dynamic properties such as reaction probabilities, integral cross sections, and state specific rate constants are calculated. The calculated probabilities and integral reaction cross sections display an increasing trend with the increase of the collision energy and an oscillatory structure due to the CH2 well on the reaction path. The thermal rate constants of the endoergic reaction with a temperature ranging from 1000 K to 2000 K are also obtained. The calculated rate constants increase in the entire temperature range, showing a sharp T dependence in a range of 1400–2000 K. The rate constants are sensitive to the temperature due to the high threshold of the title reaction. In addition, the results of the exact calculations including the CC effect are compared with those from the CS approximation. For smaller J, the CS probabilities are larger than the CC results, while for larger J, they are smaller than the CC ones. For reaction cross sections and rate constants, the CS results and the CC ones are in good agreement with each other at lower energy. However, they differ at higher energy. The comparison between the CC and CS results indicates that neglecting the Coriolis coupling leads the cross sections and the rate constants to be underestimated, due to the formation of a CH2 complex supported by the stationary point of the CH2(${\tilde{\rm X}}{}^3 \rm A''$) PES. It is suggested that the CH2 complex plays an important role in the process of the title reaction. However, it seems to overestimate the CS and CC rate constants because the barrier recrossing is neglected. Unfortunately, the results obtained in the present work have no corresponding theoretical or experimental data to be compared with; therefore these results only provide a reference for follow-up studies of the title reaction. ###### ELECTROMAGNETISM, OPTICS, ACOUSTICS, HEAT TRANSFER, CLASSICAL MECHANICS, AND FLUID DYNAMICS 2020, 69 (8): 084201. doi: 10.7498/aps.69.20191933 Abstract + Micro-impurity pollution is always one of the key factors affecting the quality and service life of precision devices. Micro- and nano-sized impurity particles are difficult to remove by traditional cleaning methods (ultrasonic cleaning, etc.) and are removed only with low efficiency by laser cleaning methods (dry laser cleaning, etc.). The laser plasma shock wave has high-pressure and high-temperature characteristics, which can remove nano-scaled impurity particles, and has great potential applications. In this work, we mainly study the thermodynamic effect of the laser plasma in the process of removing micro- and nano-particles. In the experiment, the Al particles on the Si substrate are removed by the laser plasma shock wave, and the transformation of the particle state is discussed through the changes of the experimental sample morphology after different pulse effects. The experimental results show the following. With the increase of the pulse number, the micro- and nano-particle residues gradually decrease. At the same time, on the surface of the sample after these particles are removed, it can be found that large particles break up into small particles, and some of the particles will change into smooth spheres when their temperatures reach the melting point. These phenomena are the result of the interaction of the thermodynamic effects of the plasma. In order to study the transformation process of the particle state, based on the plasma shock wave propagation theory, the evolution laws of the pressure characteristic and temperature characteristic of the shock wave are obtained. From the evolution law, it can be seen that with the increase of the shock wave radius, the pressure and temperature gradually decrease.
When the shock wave propagates to the surface of a sample, it can reach the compression threshold and correspondingly the surface temperature arrives at melting temperature of particles, which are consistent with the experimental results. By using the finite element simulation method, the pressure and temperature of laser plasma shock wave acting on particles are studied. The stress distribution and temperature distribution in particles varying with time are obtained. The analysis results are consistent with the experimental results, and therefore the thermodynamic mechanism of plasma on particles is obtained. ## EDITOR'S SUGGESTION 2020, 69 (8): 084202. doi: 10.7498/aps.69.20191989 Abstract + Soliton is a universal format of nonlinear wave propagation in nature. Soliton can maintain its shape during propagation. This unique property has been widely observed in plasma physics, high energy electromagnetics, hydrodynamics, and nonlinear optics. Soliton interactions can reflect collective dynamic behaviors in complex nonlinear systems, showing significant basic research value. Passive mode-locked laser is an ideal platform for studying soliton interaction. The attraction and repulsion between two optical solitons can form soliton molecules. Their properties have been intensively studied by optical spectral analysis. However, conventional optical spectrum analyzers show low resolution and long average time. Time-stretched dispersive Fourier transformation (TS-DFT) is an emerging-powerful measurement technology, which can map the spectrum of an optical pulse to a temporal waveform under sufficient dispersion. The TS-DFT makes it possible to detect the dynamics of the solitons in real time. Based on TS-DFT, the internal dynamics of the solitons in Ti:sapphire femtosecond laser is studied in experiment. By changing the pump power, the stable soliton molecules with a separation of 180 fs and the weak phase oscillatory soliton molecules with a separation of 105 fs are observed. The amplitude in the weak oscillation state is merely 0.05 rad. We also find that the soliton molecules in stable state can transform into phase sliding state under environmental perturbation. These optical soliton molecules with a binding separation of 100 fs are of great significance for studying the short-range nonlinear interactions of solitons. ###### PHYSICS OF GASES, PLASMAS, AND ELECTRIC DISCHARGES 2020, 69 (8): 085201. doi: 10.7498/aps.69.20191864 Abstract + There are several methods of diagnosing the capacitively coupled plasma, such as microwave resonance probe, Langmuir probe, etc, but methods like microwave resonance probe are mainly used for determining the electron density. Moreover, in the diagnosing of plasma potential, the emissive probe has a higher accuracy than the traditional electrostatic probes, and it can directly monitor the potential in real time. However, in the existing work, emissive probe is mostly applied to the diagnosis of plasmas with high density or plasmas modulated by pulsed dual frequency (one of the radio frequency sources is modulated), the experiments on the emissive probe diagonising plasma excited by a pulsed single frequency are quite rare. In this paper, the temporal evolution of the plasma potential and electron temperature with input power and pressure in a pulsed 27.12 MHz capacitively coupled argon plasma are investigated by using an emissive probe operated in floating point mode. 
The plasma potential is obtained by measuring emissive probe potential under a strongly heated condition, while the electron temperature is estimated from the potential difference between the emissive probe under strongly heating and cold conditions. The measurements show that as the pulse is on, the plasma potential will rise rapidly and become saturated within 300 μs due to the requirement for neutrality condition; while the pulse is off, the plasma potential undergoes a rapid decline and then stabilizes. An overshoot for the electron temperature occurs as the onset of the pulse, because of the influence of radio frequency electric field and residual electrons from the last pulse; during the pulse-off time, rapid loss of high-energy electrons causes the electron temperature to rapidly drops to 0.45 eV within 300 μs, then it rises slightly, which is related to the electrons emitted by the probe. The plasma potential basically has a linear dependence on the change of input power and pressure for the pulse-on and pulse-off time; and the input power has a greater influence on the difference between the overshoot electron temperature and the steady state electron temperature during the pulse-on time. Corresponding explanations are given for the temporal evolution of plasma potential and electron temperature in different pulse stages and under different discharge conditions. ###### CONDENSED MATTER: STRUCTURAL, MECHANICAL, AND THERMAL PROPERTIES 2020, 69 (8): 086101. doi: 10.7498/aps.69.20191896 Abstract + Fin field effect transistor (FinFET) is a most widely used structure when the field effect transistor is scaled down to 30 nm or less. And there are few studies on single-event transient of FinFET devices with gate length below 30 nm. The single-event-transient on FinFET with gate length below 30 nm is worth studying. The single-event-transient responses of bulk FinFETs with 30 nm, 40 nm, 60 nm and 100 nm gate length are examined by using the pulsed laser and technology computer-aided design (TCAD) simulation in this article. First, we use the pulsed laser to ionize the gate of the FinFET device and detect the transient drain current of the FinFET device. The experimental results show that there are obvious platforms for the transient drain current tails of FinFETs with different gate lengths, and the platform current increases as the gate length of FinFET becomes shorter. The charges collected in the platform of FinFET devices with gate lengths of 100, 60, 40, and 30 nm are 34%, 40%, 51%, and 65% of the total charge collected in transient drain current, respectively. Therefore, when the FinFET device with the gate length below 100 nm, the platform current will seriously affect the device performance. Second, we use TCAD to simulate the heavy ion single-event effect of FinFET device and study the generation mechanism of platform region in transient drain current. The TCAD simulation explains this mechanism. Laser or heavy ions ionize high concentration electron-hole pairs in the device. The holes are quickly collected and the high concentration electrons are left under the FinFET channel. High concentration electrons conduct source and drain, generating the source-to-drain current at the tail of the transient drain current. Moreover the source-drain conduction enhances the electrostatic potential below the FinFET channel and suppresses high-concentration electron diffusion, making source-to-drain current decrease slowly and form the platform. 
The transient drain current tail has a long duration and a large quantity of collected charges, which seriously affects FinFET performance. This is a problem that needs studying in the single-event effect of FinFET device. It is also a problem difficult to solve when the FinFET devices are applied to spacecraft. And the generation mechanism of the transient drain current plateau region of FinFET device can provide theoretical guidance for solving these problems. ###### CONDENSED MATTER: ELECTRONIC STRUCTURE, ELECTRICAL, MAGNETIC, AND OPTICAL PROPERTIES 2020, 69 (8): 087801. doi: 10.7498/aps.69.20191923 Abstract + A green and low-cost method to prepare high-quality GaN (gallium nitride) nanowires is important for the applications of GaN-based devices on a large scale. In this work, high-quality GaN nanowires are successfully prepared by a green plasma enhanced chemical vapor deposition method without catalyst, with Al2O3 used as a substrate, metal Ga as a gallium source and N2 as a nitrogen source. The obtained GaN nanomaterials are investigated by using X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman spectroscopy, and photoluminescence (PL) spectroscopy. The XRD results demonstrate that hexagonal-wurtzite GaN is obtained and no other phases exist. The SEM results show that GaN nanowires and hexagonal GaN microsheets are obtained at different temperatures. When the growth temperature is at 950 ℃ (reaction time for 2 h), the hexagonal GaN microsheets each with a size of 15 μm are obtained. When the growth temperature is at 1000 ℃(reaction time for 2 h), the GaN nanowires with the lengths in a range of 10–20 μm are obtained. With the reaction temperature increasing from 0.5 h to 2 h, the lengths of GaN nanowires increase. The TEM results suggest that the GaN nanowires are of high crystallinity and the growth direction of GaN nanowires is in the [0001] direction. The Raman results indicate that there exists a compressive stress in the GaN nanowires and its value is 0.84 GPa. Meanwhile, the growth mechanism of GaN nanowires is also proposed. The morphologies of GaN nanomaterials are tailed by the growth temperature, which may be caused by Ga atomic surface diffusion. Ga atoms have low diffusion energy and small diffusion length at 950 ℃. They gather in the non-polar m-plane. The (0001) plane with the lowest energy begins to grow. Then, hexagonal GaN microsheets are obtained. When reaction temperature is at 1000 ℃, the diffusion length of Ga atoms increases. Ga atoms can diffuse into (0001) plane. In order to maintain the lowest surface energy, the GaN nanowires grow along the [0001] direction. The PL results indicate that the obtained GaN nanowires have just an intrinsic and sharp luminescence peak at 360 nm, which possesses promising applications in photoelectric devices such as ultraviolet laser emitter. Our research will also provide a low-cost and green technical method of fabricating the new photoelectric devices. 2020, 69 (8): 087901. doi: 10.7498/aps.69.20200026 Abstract + For a microwave device filled with dielectrics, the secondary electron (SE) emission has a very important influence on the mechanism of microwave breakdown including low pressure discharge and multipactor. In this work, the SE yields (SEYs) and the SE energy spectra of seven kinds of dielectric materials are first measured and then used to examine their effects. 
In the positive charging process under electron irradiation, the surface potential of the dielectric layer trends to be steady with the SEY being one. Based on the measurement data, the steady surface potential is calculated under the charging stability condition. The steady surface potential is bigger for a bigger SEY. For a given SEY, the steady surface potential is found to be proportional to the peak energy Epeak of the SE energy spectrum. Furthermore, the effect of steady surface potential on low pressure discharge and multipactor are respectively studied for a parallel plate system filled with a dielectric layer. A static electric field related to the positive charging is introduced. The electron diffusion model in low pressure discharge process is modified by considering the static electric field. The electrons drift in a fixed direction under the action of static electric field, and the electron diffusion length decreases. Consequently, the effective electrons for low discharge decreases and the threshold microwave power increases. Therefore, a dielectric material with higher SEY and bigger Epeak is helpful in suspending the inhibition of low pressure discharge. Furthermore, the effect of steady electric field on multipactor is also explored. Two effects related to dielectric material and metal are analyzed in detail. The SE emission from dielectric material is held back by the steady electric field and some low energy electrons return back to the dielectric materials. The effective SEY thus decreases. On the other hand, the electric field reduces the landing electron energy on the metal, and the corresponding SEY also decreases. The electron oscillation condition with considering both microwave field and stead electric field is derived and the threshold values for microwave power of multipactor are calculated. The susceptibility curves corresponding to different materials are plotted. Our result may be used to choose the filling dielectric materials for a microwave device. 2020, 69 (8): 088701. doi: 10.7498/aps.69.20191908 Abstract + Fluorescence microscopic imaging technology realizes specific imaging by labeling biological tissue with fluorescence molecules, which has a high signal-to-noise ratio and has been widely used in the field of medical biology research. Some typical fluorescence microscopy techniques, such as confocal microscopy and two-photon microscopy, have high fluorescence intensity, but the long exposure can cause phototoxicity and photobleaching of biological tissue, which is difficult to meet the demand for long-time observation or noninvasive imaging. Then, light sheet fluorescence microscopy (LSFM) has become a hot research topic in fluorescence micro-imaging in recent years due to its fast speed, high resolution, low photobleaching and low phototoxicity. The imaging speed of a typical light sheet microscopy is not fast enough to observe fast biological activities such as transmission of neural signals, blood flow, and heart beats. At present, many reported light-sheet fluorescence microscopies still have some problems such as fixed imaging surface, slow imaging speed, small imaging depth or residual artifacts. Therefore, in this paper, a rapid light-sheet fluorescence microscopy based on electrically tunable lens is built. To achieve the rapid movement of the focal plane of the detection objective lens, the electrically tunable lens is introduced to meet the reqirement for fast changing of the diopter. 
Similarly, the rapid movement of light sheet is achieved by introducing one-dimensional galvanometer to change the rotation angle. Fast imaging requires the light sheet and focal plane to overlap in real time, which is then combined with a high-speed sCMOS receiving fluorescence to complete the whole imaging. In the experiment, the vertical depth significantly increases by modifying the optical path, and the LABVIEW programming is used to coordinate and improve the dynamic imaging quality, which effectively reduces the artifacts generated in rapid imaging. Finally, an imaging speed of 275 frames/s with a lateral resolution of ~0.73 μm, vertical resolution of ~5.5 μm, and an imaging depth of ~138 μm is achieved. This is of significance for developing the real-time and non-invasive imaging of living biological tissues.
# Trucks and SUVs should be heavily taxed. Large, heavy vehicles are safe, as everyone knows. If you’re going to be in an accident, would you rather be in a Miata or an Escalade? More large vehicles on the road make us safer, and we should worry about anything which reduces vehicle sizes. Notably, fuel economy standards decrease vehicle size, so we will become less safe as fuel economy standards become more strict. See for example Crandall and Graham (1989). Or so conventional wisdom goes, but it turns out the truth is more subtle, according to recent research. The issue is not how big cars are on average, but rather the dispersion of vehicle types and weights.  A large body of evidence shows that people in small cars are much less likely to be killed in a collision with another small car than with a larger vehicle, particularly a truck or an SUV. Heavier vehicles are safer for their occupants, given a crash occurs because heavier vehicles are favored by the laws of physics in a crash. Trucks and SUVs gain an additional advantage because they are also tall.  When a truck or SUV hits a smaller car the force of the crash impacts on the upper bodies of the occupants of the car rather than on the steel frame of the car below. Further, pedestrians and cyclists are more likely to be killed if they are struck by a truck or SUV than by a car. Finally, trucks and SUVs tend to have relatively poor braking and maneuverability and, given driving styles, may cause an increase in accidents. Heavy vehicles, particularly trucks and SUVs, are not safer than smaller cars, they’re only safer for their occupants conditional on a crash occurring. For everyone else trucks and SUVs are a hazard. An increase in the proportion of trucks and SUVs on the road could increase or decrease overall safety, depending on the mix of vehicles already on the road.  Note that an increase in the proportion of trucks and SUVs which has no effect on overall safety involves a perverse outcome: the occupants of the new trucks and SUVs experience more safety as a result of their vehicle selection, but pedestrians, cyclists, and occupants of smaller vehicles experience less safety. A Prisoner’s Dilemma writ large arises: we would all be better off if we all cooperate and drive small cars, but anyone can defect and buy an Escalade. An “arm’s race” occurs in which we all wind up buying vehicles which are inefficiently too large. These considerations are troubling given the market share of trucks and SUVs has risen from 17% in 1981 to 50% in 2006, at least in part because fuel economy standards are lower for trucks and SUVs, effectively a subsidy. Several recent econometric studies,  including Anderson (2007)Li (2009), and Jacobsen (2010),  suggest that the increase in the proportion of trucks and SUVs has cost many lives (all statistics in this post were drawn from one of these papers). Anderson (2007) estimates that the elasticity of the mortality rate to truck and SUV share is about 0.34 (or 143 deaths per percentage point increase in trucks and SUVs), that about 80% of the increase in deaths as more trucks and SUVs hit the road are to pedestrians, cyclists, and car occupants, and that a Pigouvian tax on trucks and SUVs would come in at just under $4,000. He concludes: Overall, light trucks pose a significant hazard to other users of the highway system but on average provide no additional protection to their own occupants. 
Li’s (2009) estimates suggest that about 12% of truck and SUV sales can be attributed to the arm’s race for private safety and that a tax on trucks and SUVs should be set at about $2,500. Jacobsen (2010) provides selection-corrected estimates of the effects of changes in fleet composition on safety and finds that a one mile per gallon increase in CAFE standards costs 164 lives due to the discrepancy in standards for trucks and cars, but a unified standard has little effect on overall safety. These estimates are remarkably consistent given these authors use a variety of data and empirical methods. The conclusion is statistically robust and well-grounded in theory: large vehicles, particularly trucks and SUVs, pose substantial external costs and should face large corrective taxes, roughly $2,500 to $4,500. Further, these estimates likely underestimate how dangerous trucks and SUVs are as they assume away, due to data limitations, an effect of driving a truck or SUV on a given driver’s behavior: if people in such vehicles feel safer because they are in large, heavy vehicles, they may respond by driving more recklessly. A question I have after perusing this literature: how about a uniform, possibly nonlinear, per-kilogram tax on all vehicles? Such a tax would not solve the vehicle configuration issue (an additional tax on trucks and SUVs would be required) but it would solve the arm’s race in vehicle weight.
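To get a feel for the orders of magnitude behind the corrective-tax estimates cited above, here is a deliberately crude back-of-the-envelope sketch. The 143 deaths per percentage point is Anderson's figure quoted earlier; the value of a statistical life, fleet size, and vehicle lifetime are round hypothetical numbers chosen purely for illustration, so the output should not be read as reproducing the papers' estimates.

```python
# Back-of-the-envelope Pigouvian tax on trucks/SUVs.
# deaths_per_pp is the Anderson (2007) figure quoted in the post;
# the other inputs are hypothetical round numbers for illustration only.
deaths_per_pp = 143                 # extra deaths per 1-percentage-point rise in truck/SUV share
value_of_statistical_life = 7e6     # dollars, hypothetical
fleet_size = 250e6                  # registered light vehicles, hypothetical
lifetime_years = 15                 # average vehicle life, hypothetical

vehicles_per_pp = 0.01 * fleet_size                       # vehicles in one percentage point of the fleet
annual_cost_per_vehicle = deaths_per_pp * value_of_statistical_life / vehicles_per_pp
one_time_tax = annual_cost_per_vehicle * lifetime_years   # lifetime external cost per new truck/SUV

print(f"annualized external cost per truck/SUV: ${annual_cost_per_vehicle:,.0f}")
print(f"implied one-time purchase tax:          ${one_time_tax:,.0f}")
# With these round inputs the one-time figure lands in the low thousands of dollars,
# the same broad range as the $2,500-$4,500 estimates above; the exact number is
# driven entirely by the hypothetical inputs and ignores injury and property-damage channels.
```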
# Colloquium Paris VI de Cosmologie et Astroparticules, LPNHE, LPTHE & GreCo/IAP

The Cosmology Colloquium takes place once a month on the Jussieu campus, on Wednesdays at 2:00 pm. Its aim is to present pedagogical seminars of general interest on current topics in cosmology and related fields, both theoretical and observational. All researchers on the Jussieu campus are welcome, as are those from the neighbouring campuses: CdF, ENS, IAP-Observatoire de Paris. This colloquium is supported by the Fédération de Recherche Interactions Fondamentales (organizers: P. Astier, H. de Vega, P. Peter). ## Année 2006-2007 Octobre 2006 MERCREDI 25 octobre 2006, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Kevork Abazajian (Univ. of Maryland) "Dark matter and neutrino physics" The nature of the dark matter is still unknown. Hidden in the neutrino sector of particle physics may be one or more fermions with no standard model interactions that nonetheless couple to neutrinos via their mass generation mechanism, namely sterile neutrinos. Such a particle may be the dark matter, produced in the early universe through matter-suppressed neutrino mixing or matter-enhanced resonant mixing. I will overview the kinetics of relativistic mixed neutrinos in dense environments, with specific application to sterile neutrino dark matter production in the early universe. I will discuss how this candidate alters cosmological structure formation, and the resulting constraints from observed cosmological clustering. In addition, I discuss how this candidate may be detected by X-ray telescopes. Novembre 2006 MERCREDI 15 novembre 2006, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Rainer Beck (Max Planck & Univ. Bonn) "Cosmic Magnetism Revealed with the Square Kilometer Array (SKA)" The origin of magnetic fields in stars, galaxies and clusters is an open problem in astrophysics and fundamental physics. "The Origin and Evolution of Cosmic Magnetism" is one of the Key Science Projects for the Square Kilometre Array (SKA), the international next-generation radio telescope. An all-sky survey of Faraday rotation measures (RM) at 1.4 GHz will serve to model the structure and strength of the magnetic fields in the intergalactic medium, the interstellar medium of intervening galaxies and of the Milky Way. Spectro-polarimetry will allow us to separate RM components from distinct foreground and background regions and hence to perform 3-D "Faraday tomography" of the magnetized interstellar medium of the Milky Way and nearby galaxies. Furthermore, polarization imaging with the SKA will open a new era in the observation of magnetic fields in galaxies, in galaxy clusters and in the intergalactic medium. Decembre 2006 MERCREDI 6 decembre 2006, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Andreas Zech (LPNHE, Paris) "Observing the most energetic particles with the Pierre Auger Observatory" The origin and nature of ultra-high energy cosmic rays is still an open question. Resolving this question is of considerable importance for both astrophysics and particle physics. The Pierre Auger Observatory is the world's largest detector to study the high energy end of the cosmic ray spectrum. It combines two observational techniques, an array of water Cherenkov detectors and four air fluorescence telescope stations, to observe the extensive air showers generated in the atmosphere by cosmic rays.
This hybrid observation  mode yields an excellent resolution and allows for important systematic cross-checks. The Auger South site, located in the province of Mendoza in Argentina, has started data acquisition in 2004 with only  a small fraction of its full aperture. Since then it has been growing continuously and is now nearing its completion. The  collected data provide already some insights into the energy spectrum, the origin and composition of ultra-high energy cosmic rays. I will give an update of the status of the observatory and discuss our first scientific results. Janvier 2007 MERCREDI  17 janvier 2007, 14h00,  Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Peter L. Biermann (MPIfR Bonn et Tuscaloosa) "Origin and physics of the highest energy cosmic rays" The highest energy cosmic ray particles are the most energetic particles known to us in the universe, and their observations have led us to build one of the largest detector system in the world, the AUGER air-shower array. We have detected particles to 3 10^{20} eV, which is a macroscopic energy.  There are a number of options how to generate them, and how to get these particles from their sources to us.  These particles may be accelerated to high energy in a shock wave, such as in a radio galaxy, or some other shock-wave such as during the formation of large scale structure in the universe. Other propositions assume that they are the product of a decay of a very massive particle (possibly connected to dark matter) and the merger of black holes. One key point will be to understand the cosmological web of magnetic fields, which may influence the propagation of high energy particles; here it is especially important to understand the role of a galactic wind and its magnetic structure.  I will discuss the observational and theoretical limits for an exemplary set of models, and also the predictions, that result from these models. I will place special emphasis on the search strategy that will be important once we will have statistically relevant AUGER data.  Detailed observations may allow us not only to set limits to the cosmic magnetic field, and the physics of sources, but also to aspects of particle physics. Fevrier 2007 MERCREDI  14 fevrier 2007, 14h00,  Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Liping Fu(IAP) "Cosmic shear from CFHTLS Wide" A primary scientific goal of the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) is the exploration of the dark matter power spectrum properties and its evolution with redshift using weak gravitational lensing (cosmic shear). I will present the current state of the cosmic shear measurement using CFHTLS Wide 3rd release. It is the first time that the cosmic shear signal is explored beyond the one degree scale, which will be strong constraints on cosmological parameters. Meanwhile, the reliability of the current shear measuring pipeline is checked using simulation data, which shows high accuracy. In a short review, we compare our cosmic shear from CFTHLS with the other non-CFHTLS surveys showing a consistent signal. Mars 2007 MERCREDI  28 mars 2007, 14h00,  Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Patrick Valegeas (Saclay) "Formation of large-scale structures in the Universe: non-linear regime" The large-scale structures we observe in the present universe (such as galaxies and clusters of galaxies) have formed thanks to gravitational instability which amplified the small density perturbations created in the early universe. 
Moreover, the power increases at small scales as in the CDM model which leads to a hierarchical scenario where small scales become non-linear first. Thus, at large scales or at early times (e.g. to study the CMB) one can use a perturbative approach. However, many observations (e.g weak-lensing) probe the weakly non-linear or highly non-linear regimes. This requires to go beyond the usual perturbative expansion. After a brief review of the dynamics of gravitational clustering in the cosmological context, I shall present various analytical methods which have been devised to investigate the weakly non-linear regime. In particular, I shall focus on recent systematic methods which amount to reorganize the standard perturbative expansion by performing some partial infinite resummations. Mai 2007 MERCREDI  23 mai 2007, 14h00,  Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Didier Barret  (CESR Toulouse) "General Relativistic Motion of Matter in regions of extremely strong gravity revealed through quasi-periodic oscillations by the X-ray Rossi satellite" Kilo-Hz quasi-periodic oscillations (QPOs) have been discovered in the X-ray flux of several low-mass accreting X-ray binaries. These oscillations probe the motion of matter in a region of extreme gravity, where fundamental predictions of strong field general relativity, such as the existence of an innermost stable circular orbit (ISCO), have yet to be tested. We have undertaken a systematic analysis of the kilo-Hz quasi-periodic oscillations detected from several sources by the Rossi X-ray Timing Explorer and observed in several sources a reproducible effect that we interpret as the signature of the ISCO. In this talk (intended for non specialists), I will introduce kilo-Hz QPOs and review their general properties before discussing their potential for strong gravity and dense matter. I will then present the various pieces of evidence that we have accumulated in favor of our recent claim that the sharp drop of the coherence of the kilo-Hz QPOs at some critical frequency is related to the ISCO. ## Année 2005-2006 Septembre  2005 MERCREDI 14 SEPTEMBRE 2005, 14h00,  Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Ben Moore (University of Zurich) "Dark matter and structure formation in the Universe" The initial conditions for structure formation are now well constrained and we know that the Universe is dominated by dark energy and dark matter. However several key problems remain: astronomers have only identified and understood one percent of the Universe and we are unsure how the matter arranges itself into stars, planets and galaxies. Recent supercomputer calculations have allowed us to make substantial progress in these areas. I will discuss our quest to understand the nature of dark matter and to predict the distribution of cold dark matter from the scale of the lecture room to that of galaxies and clusters. ## Octobre 2005 MERCREDI 12 OCTOBRE 2005, 14h00, Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Ken Ganga (APC, Tolbiac) "Polarization of the Cosmic Microwave Background and the QUaD experiment" I will discuss the anisotropies in the cosmic microwave background (CMB) radiation, its polarization, and the goals we hope to achieve in measuring it. QUaD is an experiment for measuring the polariziation in the CMB from the South Pole. It uses similar technology as that successfully used by previous experiments such as BOOMERanG and which will be used by future experiments such as the Planck/HFI instrument. 
QUaD has taken data for one season, and will continue operation for at least one more. Though no scientific results will be presented, I will review the goals of the experiment, give the status of the instrument, and will present our expectations and schedule. ## Novembre 2005 MERCREDI 16 NOVEMBRE 2005, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Jose Valle (IFC, Valencia) "Neutrinos yesterday, today and tomorrow" I briefly review the milestone events in the development of neutrino physics that culminated with the discovery of neutrino masses and the understanding of solar and atmospheric neutrino "anomalies". I highlight the role that reactors and accelerators have played in providing final confirmation of observations from underground experiments. I also discuss the issues of robustness of the oscillation interpretation, and the challenges for the next generation of experiments. Finally I discuss the significance of these findings towards the understanding of the ultimate origin of neutrino masses and mixings. ## Decembre 2005 **MARDI** 6 DECEMBRE 2005, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Daniel Eisenstein (University of Arizona) "Dark Energy and Cosmic Sound" I present galaxy clustering results from the Sloan Digital Sky Survey that reveal the signature of acoustic oscillations of the photon-baryon fluid in the first million years of the Universe. The scale of this feature can be computed and hence the detection in the galaxy clustering serves as a standard ruler, giving a geometric distance to a redshift of 0.35. I will discuss the implications of this measurement for the composition of the universe, including dark energy and spatial curvature, and the prospects for future redshift surveys to use the acoustic peak to map the expansion history of the universe. ## Janvier 2006 MERCREDI 18 JANVIER 2006, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Georg Raffelt (Max Planck Institute for Physics, Munich) "Neutrino Physics in Heaven" After an overview of the various themes that connect neutrino physics with astrophysics and cosmology I will focus on two main topics. First, what can we learn about physics from the neutrino signal of a future supernova. Second, what can we learn about neutrino properties from cosmological observables, notably about the neutrino absolute mass scale and about secret neutrino interactions from cosmological large-scale structure observables. ## Fevrier 2006 MERCREDI 22 FEVRIER 2006, **16h30**, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Helene Sol (LUTH, Meudon) "Ground-based gamma-ray astronomy: a new picture of the cosmos from the European HESS experiment" Very-high-energy astronomy has just passed a decisive milestone thanks to the sensitivity and spatial resolution achieved by the HESS experiment (High Energy Stereoscopic System), fully operational since early 2004. We are now witnessing the emergence of a renewed vision of our universe, which this seminar will try to illustrate, structured around the physics of cosmic accelerators such as pulsar winds, supernova remnants, X-ray binaries and microquasars, unidentified galactic sources (possible "dark" accelerators), the galactic centre, and active galactic nuclei.
## Mars 2006 MERCREDI 22 MARS 2006, 14h00, Salle des Seminaires, Sous-Sol, IAP, 98bis Boul Arago David Hogg (New York University) "Galaxies and galaxy merging in the last billion years" I review what we know about the Local (nearest few hundred Mpc) Universe from studies of millions of galaxies with enormous surveys, such as the Sloan Digital Sky Survey. Structure in the Universe grows by merging and accretion, as do galaxies. I show that small-scale galaxy clustering measurements, galaxy morphology studies, and simple analyses of recent-past star-formation histories of galaxies can all put complementary constraints on the rate at which galaxies are accreting material and growing. These studies suggest that the galaxy population grows (in mass) by of order one percent per Gyr, with large variance among galaxies. I will focus on the observations and their interpretation, but there are also theoretical challenges. I am optimistic that in the future these kinds of observations will non-trivially constrain the dynamics of the dark matter on small scales. ## Avril 2006 MERCREDI 26 AVRIL 2006, **16h30**, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Norma Sanchez (Observatoire de Paris, LERMA) "Black holes in the Universe: birth, life, death and remnants of black holes" In the last few years, black holes have ceased to be purely theoretical objects and have become real physical objects that are part of the universe. Astrophysical observations of recent years (in the X-ray, optical, infrared and gamma-ray domains) have produced growing evidence for the existence of black holes, ranging from the supermassive black holes at the centres of galaxies, including our own, to black holes of stellar mass, including those in binary systems. These black holes are described by classical (non-quantum) physics; they interact with their environment (stars, gas, accretion discs), producing observable effects. On the other hand, primordial or microscopic black holes, although not yet detected, are the object of growing cosmological and physical interest. They are semiclassical black holes emitting, through tunnelling and pair creation, particles of all kinds with a universal thermal spectrum (Hawking emission). In the last stages of their evaporation, black holes become purely quantum, turning into strings and decaying into all kinds of particles with a non-thermal emission. The birth, life, end of life and remnants of black holes over their whole range of masses will be presented. ## Mai 2006 MERCREDI 10 MAI 2006, **16h30**, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Olivier Doré (CITA, Toronto) "Mapping the polarized sky with WMAP: methods and cosmological implications" The Wilkinson Microwave Anisotropy Probe (WMAP) is a NASA satellite designed to produce high resolution full sky maps of the temperature and polarization of the cosmic microwave background (CMB). The accurate characterization of the fluctuations in the CMB contains exquisite information about the global structure, composition, and evolution of the universe. Relying on the first three years of observations, WMAP has now measured these fluctuations with unprecedented accuracy. I will illustrate how a greater signal-to-noise in the temperature measurement but also a new large scale polarization signal detection have significantly sharpened our cosmological interpretation.
A simple six-parameters cosmological model (flat LCDM); consisting of baryons, dark matter, a cosmological constant, initial perturbation spectrum amplitude and slope, and optical depth; is an excellent fit to the WMAP data, as well as a host of other astronomical experiments. The new WMAP data also hint at a small deviation from scale invariance in the primordial fluctuation power spectrum, a key prediction of inflation. If confirmed this would strengthen our confidence in the inflationary scenario and allow detailed model testing. Besides, the combination of WMAP data and other astronomical data places even stronger constraints on the density of dark matter and dark energy, the properties of neutrinos, the properties of dark energy and the geometry of the Universe ## Juin 2006 MERCREDI 21 JUIN 2006,  14h00, Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Miguel Angel Aloy (MPI-Garching et Valencia) "Progenitors of Gamma Ray Bursts: theory and numerical simulations" I start by a summary of the current theoretical status and present observational data on Gamma Ray Bursts (GRB). I will put particular emphasis on the physical properties of the systems that can be progenitors of both short and long GRBs. Numerical simulations based on general relativitistic hydrodynamics have shown the physical realism of progenitor models as collapsars and mergers of neutron star binaries or neutron stars and black holes. These simulations provide quantitative estimates of generic properties of the relativistic outflows that can be compared with the available observational data and disentangle which progenitor models are best suited to reproduce the observed phenomenology. Page web entretenue par Michael Joyce ## Année 2004-2005 Septembre  2004 MERCREDI 29 SEPTEMBRE 2004, 14h30,  Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Massimo Giovannini (CERN-TH) "Magnetic fields in the early and in the present Universe" I shall review the present evidence of large-scale magnetic fields in galaxies, galaxy clusters and superclusters and I shall outline the main techniques used to infer the existence of cosmic magnetism. After discussing the problems related to the origin of large-scale magnetic fields, I shall focus on different suggestions emerging from field theory in curved space-times and from string theory arguing for a cosmological origin of large-scale magnetism. If magnetic fields were generated prior to the decoupling of photons from baryons, then, the thermodynamical history of the Universe may be modified already at the electroweak epoch. Direct experimental consequences can follow on the primary anisotropies of the Cosmic Microwave Background (CMB), on the polarization of the CMB itself (Faraday effect) and on the production of stochastic backgrounds of gravitational waves. Octobre 2004 MERCREDI 20 OCTOBRE 2004, 14h00, Bibliotheque , LPTHE, Tour 24, 5eme etage, Jussieu Eric Linder (LBL, Berkeley) "Dark Energy and the Dynamics of the Universe" Discoveries in the last few years have revolutionized our knowledge of the universe and our ideas of its ultimate fate. Measurements of the expansion of the universe show that it is not slowing down under normal gravity but accelerating due to an unknown, gravitationally repulsive "dark energy". This may be a clue to new properties of quantum physics or of gravity "beyond Einstein". I present an overview of the puzzles of dark energy and the means for unraveling them through cosmological probes. 
Next generation experiments such as the Supernova/Acceleration Probe (SNAP) satellite would measure the supernova distance-redshift relation to high accuracy and map the evolution of structure and dark matter through gravitational lensing. These observations will explore the frontiers of physics and aim to uncover what makes up the still unknown 95% of our universe. Novembre 2004 MERCREDI 24 NOVEMBRE 2004, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Francesco Sylos Labini (LPT, Orsay) "Non linear structures in gravitation and cosmology" I will first give a brief overview of the state of observations of large scale structure in the distribution of galaxies in the Universe, and also of the main theoretical instrument --- gravitational N-body simulation --- used to explain their origin in standard cosmological models. I will then discuss the principal properties of the "non linear" structures encountered in both contexts, describing some of the basic statistical methods for their characterization. I explain that despite a similar power-law two-point correlation function characterising both cases, the fluctuations may in fact be qualitatively very different in nature, and I report observational evidence that this is indeed the case. I conclude with a discussion of some of the open theoretical questions raised by these results. Decembre 2004 MERCREDI 15 DECEMBRE 2004, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Hector de Vega (LPTHE, Jussieu) "Inflation, cosmic microwave background anisotropies and quantum field theory effects" Inflation was proposed to explain the homogeneity, isotropy and flatness of the Universe. At the same time, inflation generates the scalar density fluctuations that seed large scale structure, thus explaining the temperature anisotropy in the cosmic microwave background (CMB), as well as tensor perturbations (primordial gravitational waves). The predictions of inflationary models (gaussian, nearly scale invariant adiabatic perturbations) are in spectacular agreement both with the CMB (the recent WMAP data) and with the large scale structure observations. The theory of inflation is presented starting from the basic paradigm and its realization as an effective field theory for the early universe, where inflation turns out to act as an attractor. The deep connection between inflation and grand unification is stressed. The spectrum of observable primordial fluctuations is derived from the inflaton dynamics and the new quantum effects arising from the quantum field theory treatment are presented. The inflaton decay during inflation (into lighter particles or into itself) induces novel observable effects in the primordial spectrum and generates non-gaussianity. Janvier 2005 MERCREDI 26 JANVIER 2005, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Eric Gourgoulhon (LUTH, Meudon) "Numerical relativity and sources of gravitational waves" The aim of numerical relativity is to solve Einstein's equations with the help of computers, in particular to study the astrophysical sources of gravitational waves detectable by VIRGO or LISA. These sources involve compact objects (neutron stars, black holes) that can only be described correctly by general relativity. A particular feature of numerical relativity is that it requires in-depth analytical studies prior to any numerical implementation.
I will present two of these studies: the first on the dynamics of spacetime and the second on the local treatment of black holes. Finally, I will discuss the numerical implementation and present some results. Fevrier 2005 MERCREDI 16 FEVRIER 2005, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Julien Guy (LPNHE, Jussieu) "SuperNova Legacy Survey: first-year results and cosmological implications" Type Ia supernovae (SNe Ia) are currently the best-performing class of objects for studying the evolution of the luminosity distance with redshift, and thus for measuring the expansion history of the Universe. Earlier surveys revealed a recent acceleration of this expansion, attributed to a hypothetical Dark Energy. The task is now to constrain its equation of state and to measure the cosmological parameters precisely, which requires large statistics. The SuperNova Legacy Survey (SNLS) is currently the most ambitious such project. It plans to observe more than 700 SNe Ia with redshifts between 0.2 and 1, increasing the available statistics by a factor of 10. I will present the results of the first year of the SNLS and discuss their cosmological implications. Mars 2005 MERCREDI 16 MARS 2005, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Felix Mirabel (European Southern Observatory & CEA-Saclay-Sap) "Black holes in the Universe" I will review the observational evidence for the existence of supermassive black holes and black holes of stellar-mass. I will concentrate on Microquasars, which are stellar-mass black holes in our own Galaxy that mimic, on a smaller scale, many of the phenomena seen in quasars. Microquasars are ideal laboratories to test the Physics in the limits of the strongest gravitational fields. Their discovery has opened the way for a new understanding of the connection between accretion of matter onto black holes and the origin of relativistic jets seen everywhere in the Universe. I will review the open questions and future perspectives in this new field of research. Avril 2005 MERCREDI 13 AVRIL 2005, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Riccardo Barbieri (Scuola Normale Superiore, Pisa) "Particle Physics and 'fundamental' physics" I summarize for non experts the current status of particle physics and I describe a possible evolution that the field could take in the future. Mai 2005 MERCREDI 11 MAI 2005, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Gabriele Veneziano (College de France and CERN) "Conventional vs. Unconventional Cosmic Accelerators" We are quite sure that, in its long history, the Universe underwent an accelerated expansion at least twice, once in early cosmology, and once in our recent past. In either case, we are not sure about what provided the repulsive force that is needed to accelerate. I will discuss both the most conventional "accelerator candidates" and some less conventional ones that appear to be preferred by string theory and point out ways to distinguish these alternatives through their observational consequences. Juin 2005 MERCREDI 8 JUIN 2005, 14h00, Bibliotheque, LPTHE, Tour 24, 5eme etage, Jussieu Sacha Dolgov (INFN-Ferrara, ITEP-Moscow and LERMA-Observatoire de Paris) "Cosmological Magnetic Fields and CMB Polarization" The origin of the observed magnetic fields in galaxies and intergalactic space presents a profound cosmological mystery.
An overview of possible mechanisms of their generation is presented. If the large scale magnetic fields were created in the early universe they could lead to an observable rotation of the polarization plane of the cosmic microwave background radiation. This phenomenon is discussed in detail. ## Année 2003-2004 Novembre 2003 MERCREDI 26 NOVEMBRE 2003, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Daniel Boyanovsky (LERMA, Observatoire de Paris & University of Pittsburgh) "Phase Transitions in the Early and Present Universe" I will present a summary of Early Universe cosmology and the physics of compact stars stressing the contact with particle physics. The successive phase transitions that happened in the Early Universe as well as novel phases in Neutron Stars will be discussed, including their observational consequences. Janvier 2004 MERCREDI 14 JANVIER 2004, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Yannick Mellier (IAP) "Gravitational distortion and cosmology" Statistical analyses of the distribution of galaxy ellipticities reveal a coherent field produced by the gravitational lensing effects of the large-scale structures of the universe (cosmic shear). The cosmological interpretation of this field makes it possible to extract information on the properties of dark matter, on the relation between matter and light, and on dark energy. I will present the current state of our knowledge of cosmic shear, as well as the results expected from the CFHTLS, which is just beginning to produce the first MegaCam images, in particular on the equation of state of dark energy. I will then show what experiments such as JDEM/SNAP should achieve within 10 years. Fevrier 2004 MERCREDI 25 FEVRIER 2004, 14h00, Bibliotheque, LPTHE, Tour 16 (1er etage) Jussieu "Turbulence and Scaling in Astrophysics" I shall introduce the concept of turbulence using Kolmogorov turbulence in a non-magnetized incompressible fluid as an example. Then I shall show how the properties of the fluid are modified by magnetic fields. Finally I shall talk about the effects of compressibility and partial ionization on the properties of magnetic turbulence. Astrophysical fluids are as a rule turbulent, magnetized, compressible and sometimes only partially ionized. Star formation, accretion processes, cosmic ray transport, polarized radiation and heat, etc. are affected by magnetic turbulence. Recent research also shows that interstellar turbulence is essential to understanding the properties of various foregrounds that interfere with the measurements of CMB polarization. I shall show that the scattering of cosmic rays by the interstellar medium is substantially changed when the properties of magnetic turbulence are taken into account. I shall end my talk by describing how the theory can be tested with observations. Mars 2004 MERCREDI 17 MARS 2004, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Joseph Silk (Oxford University) "In search of dark matter" I will give an account of the global abundance of dark matter. I will describe the current state of our knowledge of baryonic dark matter and the prospects for detecting non-baryonic dark matter. MERCREDI 24 MARS 2004, 14h00, Bibliotheque, LPTHE, Tour 16 (1er etage) Jussieu Robert Brandenberger (Brown University) "Challenges for String Cosmology" The inflationary scenario provides the current paradigm of early Universe cosmology.
Although this scenario has been very successful phenomenologically, it is plagued by serious conceptual problems. I will discuss some of these problems and explain why superstring theory might provide a good framework in which to address these issues. I will give an overview of some of the key challenges which a new paradigm of the early Universe based on string theory faces, and will discuss one approach to addressing these questions, "brane gas cosmology". Avril 2004 MERCREDI 28 AVRIL 2004, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Mikhail Shaposhnikov (EPFL Lausanne) "Baryon asymmetry of the universe: a window to physics beyond the standard model" I will discuss the problem of the baryon asymmetry of the Universe and different proposals suggested for its solution. All of them require new physics beyond the standard model of particle physics. Mai 2004 MERCREDI 26 MAI 2004, 14h00, Bibliotheque, LPTHE, Tour 16 (1er etage) Jussieu Patrick Petitjean (IAP) "Structures of the intergalactic medium at high redshift, galaxy formation and the variation of fundamental constants" The absorption spectra of very distant quasars reveal the gas located between us and the quasar. The study of this intergalactic medium and its cosmological evolution shows that it constitutes the reservoir of baryons from which galaxies form. A coherent model of the spatial distribution of baryonic matter in the Universe has thus emerged, in which intergalactic clouds follow the filamentary network of dark matter, whose nodes are the preferred sites of galaxy formation. Moreover, the observation of the absorption spectra of very distant quasars makes it possible to constrain the time variation of the fundamental constants of physics, $\alpha_{em}$ and $\frac{m_{proton}}{m_{electron}}$, and possibly to measure the temperature of the cosmic background radiation. Juin 2004 MERCREDI 9 JUIN 2004, 14h00, Salle Grossetete, LPNHE, Tour 33 (RdC) Jussieu Neil Turok (University of Cambridge) "A Cyclic Universe Scenario" Cosmology has seen dramatic progress in recent years, with observations ruling out many popular theories and focussing attention on a 'concordance model' involving high energy inflation in the early universe and low energy cosmic acceleration today. The first part of the talk will review some of the key phenomenology which singles out this model amongst many rival theories. However, the model has deep consistency puzzles: what drives the inflation? how did inflation begin? will the future really be a cold, eternal void? New ideas from string and M theory on the nature of spacetime motivate us to reconsider these questions. We have proposed a new cosmological scenario, the 'cyclic Universe', in which today's dark energy plays the central role. It is required to drive an eternal series of cycles, each consisting of a big bang followed by a period of activity followed by accelerating expansion which 'cleans up' the debris, restoring the Universe to a smooth, pristine state ready for the next bang. This scenario reproduces the phenomenological successes of inflation with completely different, long wavelength physics. It also casts new light on the cosmic singularity, suggesting it was not the beginning of time. An observational test distinguishing the cyclic model from inflation will be described.
### Operational Amplifiers

In electronics, an operational amplifier, or op-amp, is a DC-coupled high-gain electronic voltage amplifier with differential inputs and, usually, a single output. Typically the output of the op-amp is controlled either by negative feedback, which largely determines the magnitude of its output voltage gain, or by positive feedback, which facilitates regenerative gain and oscillation. High input impedance at the input terminals and low output impedance are typical characteristics. Op-amps are among the most widely used electronic devices, used in a vast array of consumer, industrial, and scientific devices. Many standard IC op-amps cost only a few cents in moderate production volume; however, some integrated or hybrid operational amplifiers with special performance specifications may cost over $100 US in small quantities. Modern designs are electronically more rugged than earlier implementations and some can sustain direct short-circuits on their outputs without damage.

Circuit notation: The circuit symbol for an op-amp is shown in Figure 1, where:

• V+: non-inverting input
• V−: inverting input
• Vout: output
• VS+: positive power supply
• VS−: negative power supply

The power supply pins (VS+ and VS−) can be labeled in different ways (see IC power supply pins). Despite different labeling, the function remains the same. Often these pins are left out of the diagram for clarity, and the power configuration is described or assumed from the circuit. The positions of the inverting and non-inverting inputs may be reversed in diagrams where appropriate; the power supply pins are not commonly reversed.

For example, the IC 741 is an operational amplifier. It is used for performing arithmetic operations in analog computers, instrumentation, and other control systems. Operational amplifiers belong to the class of linear ICs. Linear ICs have the peculiarity that they can handle continuous voltage signals, like their analog counterparts. They are widely used today because of their high reliability and low cost, mainly as voltage amplifiers. A basic operational amplifier is organized as the following sequence of stages: input stage ---> intermediate stage ---> level shifter ---> output stage. The input stage has a high input impedance and amplifies the difference between the given input signals. The intermediate stage consists of cascaded amplifiers that further amplify the signal from the input stage. Due to the high amplification, the DC level of the signal rises, so in order to bring it back down to the rated value, a level shifter (level translator) is used. The output stage consists of a class AB or class B power amplifier that amplifies the power of the output signal.

## Operation of ideal op-amps

The amplifier's differential inputs consist of an inverting input and a non-inverting input, and ideally the op-amp amplifies only the difference in voltage between the two. This is called the "differential input voltage". In its most common use, the op-amp's output voltage is controlled by feeding a fraction of the output signal back to the inverting input. This is known as negative feedback.
If that fraction is zero, i.e., there is no negative feedback, the amplifier is said to be running "open loop" and its output is the differential input voltage multiplied by the total gain of the amplifier, as shown by the following equation:

$V_\mathrm{out} = (V_+ - V_-) \cdot G_\mathrm{openloop}$

where $V_+$ is the voltage at the non-inverting terminal, $V_-$ is the voltage at the inverting terminal and $G_\mathrm{openloop}$ is the total open-loop gain of the amplifier. Because the magnitude of the open-loop gain is typically very large and not well controlled by the manufacturing process, op-amps are not usually used without negative feedback. Unless the differential input voltage is extremely small, open-loop operation results in op-amp saturation (see below in Nonlinear imperfections). An example of how the output voltage is calculated when negative feedback exists is shown below in Basic non-inverting amplifier circuit. Another typical configuration of op-amps is positive feedback, which takes a fraction of the output signal back to the non-inverting input. An important application of it is the comparator with hysteresis (see Schmitt trigger).

For any input voltages the ideal op-amp has

• infinite open-loop gain,
• infinite bandwidth,
• infinite input impedances (resulting in zero input currents),
• zero offset voltage,
• infinite slew rate,
• zero output impedance, and
• zero noise.

The inputs of an ideal op-amp under negative feedback can be modeled using a nullator, the output with a norator and the combination (complete ideal op-amp) by a nullor.
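The "Basic non-inverting amplifier circuit" section referred to above is not reproduced here, so the following is only a brief sketch of the standard result it points to, using resistor labels of our own choosing: a feedback resistor $R_f$ from the output to the inverting input and a resistor $R_g$ from the inverting input to ground. With the ideal assumptions listed above, the infinite open-loop gain forces $V_- = V_+$ and the zero input currents leave the feedback divider unloaded, so

$V_- = V_\mathrm{out} \frac{R_g}{R_f + R_g} = V_+ = V_\mathrm{in}$

and therefore

$\frac{V_\mathrm{out}}{V_\mathrm{in}} = 1 + \frac{R_f}{R_g}$

The closed-loop gain is set by the resistor ratio alone, which is why the large but poorly controlled open-loop gain drops out once negative feedback is applied.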
HepSim is a public repository with Monte Carlo simulated events for particle-collision experiments. The HepSim repository was started at ANL during the US long-term planning study of the American Physical Society's Division of Particles and Fields (Snowmass 2013). It contains predictions from leading-order (LO) parton shower models, next-to-leading order (NLO) and NLO with matched parton showers. It also includes Monte Carlo events after fast ("parametric") and full (Geant4) detector simulations and event reconstruction. HepSim contains event samples for physics and detector-performance studies for the High-Luminosity LHC (HL-LHC), International Linear Collider (ILC), Very Large Hadron Collider (VLHC), Future Circular Collider (FCC-ee and FCC-hh), Compact Linear Collider (CLIC), Circular Electron-Positron Collider (CEPC), Electron-Ion Collider (EIC) and other future particle colliders. HepSim Monte Carlo samples are registered by the U.S. Department of Energy Office of Scientific and Technical Information, OSTI.GOV. Each event sample is assigned a DOI (Digital Object Identifier) number as listed on the ANL-HEPSIM OSTI.GOV page. The assigned DOI numbers are also indicated on the HepSim information panel for datasets. HepSim was designed following the guidelines and principles of the DOE Public Access Plan for unclassified and otherwise unrestricted scientific data in digital formats.

# Physics and detector studies

Here are several links for extending this Wiki for particular detector-performance topics:

# For developers

If you plan to contribute to HepSim (Monte Carlo events, data storage etc), read this:

# How to cite

If you use HepSim event samples, Python/Jython analysis scripts and output XML files in your research, talks or publications, please cite this project as: S.V. Chekanov. HepSim: a repository with predictions for high-energy physics experiments. Advances in High Energy Physics, vol. 2015, ID136093, 2015. arXiv:1403.1886.

@article{Chekanov:2014fga,
  author = "Chekanov, S.V.",
  title = "{HepSim: a repository with predictions for high-energy physics experiments}",
  year = "2015",
  eprint = "1403.1886",
  journal = "Advances in High Energy Physics",
  volume = "2015",
  pages = "136093",
  archivePrefix = "arXiv",
  primaryClass = "hep-ph",
  note = {Available as \url{http://atlaswww.hep.anl.gov/hepsim/}}
}

# Acknowledgement

The current work is supported by UChicago Argonne, LLC, Operator of Argonne National Laboratory ("Argonne"). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357. Sergei Chekanov 2016/04/29 16:24
# Can Keras be used to build clustering models The keras.wrappers.scikit_learn module can be used to build KerasClassifier model. Can Keras be used to build clustering models? If it can be, are there any examples for that? • Depends on what exactly you want, autoencoders are an example of that. What clustering algorithm do attempt to use? – Media May 9 '18 at 12:43 • you know i want to use some features like age, city, education, company, job title and so on to cluster people into some groups and to get the key features of each group. – sanjie May 9 '18 at 13:08 • Deep Learning tools generally suck at clustering. At best they include the slowest variant of k-means. One of the least useful methods. I doubt you find one that has, e.g., DBSCAN. Or even any of the fast k-means variants. Or OPTICS. Or support for other distance functions such as Canberra. – Has QUIT--Anony-Mousse May 9 '18 at 16:02
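To make the comments above concrete, here is a rough sketch (not an actual posted answer) of the approach they hint at: train an autoencoder in Keras, then cluster the learned low-dimensional codes with scikit-learn's KMeans. The feature matrix X, the layer sizes and the number of clusters are made-up placeholders, and it assumes the tf.keras API and scikit-learn are available.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.cluster import KMeans

# Placeholder for numerically encoded features (age, city, education, ...).
X = np.random.rand(1000, 20).astype("float32")

inputs = keras.Input(shape=(20,))
encoded = layers.Dense(10, activation="relu")(inputs)
encoded = layers.Dense(3, activation="relu")(encoded)       # bottleneck / code layer
decoded = layers.Dense(10, activation="relu")(encoded)
decoded = layers.Dense(20, activation="linear")(decoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)  # learn to reconstruct the inputs

codes = encoder.predict(X)                                  # low-dimensional representation
labels = KMeans(n_clusters=4, n_init=10).fit_predict(codes) # group people in code space

Averaging the original features within each labels group would then give the "key features" of each group that the question asks about; as the last comment notes, k-means on learned codes is only one of many possible clustering choices.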
Why are the colligative properties of an electrolyte solution of a given concentration found to be larger than those of a non-electrolyte solution of the same concentration?

The colligative properties of a solution depend on the total number of solute particles present in solution. Since electrolytes ionise and give more than one particle per formula unit in solution, the colligative effect of an electrolyte solution is always greater than that of a non-electrolyte of the same molar concentration. Discrepancies arise when solute particles are present in either associated or dissociated form. Consider benzoic acid molecules, which are present as dimers in C6H6: the number of solute particles is halved, therefore the value of ΔT for benzoic acid (dissolved in benzene) would be half the normal value. During dissociation of NaCl into Na+ and Cl– ions in aqueous solution, the presence of twice the number of particles causes a doubling of the depression/elevation (ΔT) of the freezing/boiling point.

Calculate the molarity of each of the following solutions: (a) 30 g of Co(NO3)2.6H2O in 4.3 L of solution, (b) 30 mL of 0.5 M H2SO4 diluted to 500 mL.

Solution: Molarity (M) is defined as the number of moles of solute dissolved in one litre (or one cubic decimetre) of solution. (a) Molar mass of Co(NO3)2.6H2O = 59 + 2(14 + 48) + 6(18) = 291 g mol–1, so moles of Co(NO3)2.6H2O = 30/291 = 0.103 mol. Volume of solution = 4.3 L, therefore molarity = 0.103/4.3 = 0.024 M. (b) Number of moles present in 1000 mL of 0.5 M H2SO4 = 0.5 mol, therefore the number of moles present in 30 mL of 0.5 M H2SO4 = $\frac{0.5×30}{1000}$ mol = 0.015 mol. After dilution to 500 mL, molarity = 0.015 mol / 0.5 L = 0.03 M.

Calculate the mass percentage of benzene (C6H6) and carbon tetrachloride (CCl4) if 22 g of benzene is dissolved in 122 g of carbon tetrachloride.

Mass % of benzene = $\frac{22}{22+122}$ × 100 = 15.28%. Mass % of carbon tetrachloride = 100 – 15.28 = 84.72%.

Calculate (a) the molality, (b) the molarity and (c) the mole fraction of KI if the density of a 20% (mass/mass) aqueous KI solution is 1.202 g mL-1.

20% (mass/mass) means that 20 g of KI is present in 80 g of water, i.e. in 100 g of solution. Moles of KI = 20/166 = 0.12 mol; moles of water = 80/18 = 4.44 mol. (a) Molality = 0.12 mol / 0.080 kg = 1.51 m. (b) Volume of solution = 100 g / 1.202 g mL-1 = 83.2 mL = 0.0832 L, so molarity = 0.12/0.0832 = 1.45 M. (c) Mole fraction of KI = 0.12/(0.12 + 4.44) = 0.0263.

Calculate the mole fraction of benzene in a solution containing 30% by mass of benzene in carbon tetrachloride.

Let the total mass of the solution be 100 g and the mass of benzene be 30 g; therefore the mass of carbon tetrachloride = (100 – 30) g = 70 g. Molar mass of benzene (C6H6) = 78 g mol–1 and of CCl4 = 154 g mol–1, so moles of benzene = 30/78 = 0.385 mol and moles of CCl4 = 70/154 = 0.455 mol. Mole fraction of benzene = 0.385/(0.385 + 0.455) = 0.458.

Calculate the mass of urea (NH2CONH2) required to make 2.5 kg of a 0.25 molal aqueous solution.

Solution: Molality (m) is defined as the number of moles of the solute per kilogram (kg) of the solvent. Molar mass of urea (NH2CONH2) = 14 + 2 + 12 + 16 + 14 + 2 = 60 g mol–1. Moles of solute = 0.25 mol kg–1 × 2.5 kg = 0.625 mol. Mass of urea = moles of solute × molar mass = 0.625 × 60 = 37.5 g.
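For quick reference, the standard defining relations used in all of the worked solutions above are

$\text{Molarity} = \frac{n_{solute}}{V_{solution}\ (\text{in L})}, \qquad \text{Molality} = \frac{n_{solute}}{m_{solvent}\ (\text{in kg})}, \qquad x_A = \frac{n_A}{n_A + n_B}$

so that, for instance, part (a) of the molarity problem is simply $\frac{30/291}{4.3} \approx 0.024$ M, taking 291 g mol–1 as the molar mass of Co(NO3)2.6H2O.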
hw question 4B.3 Brian_Ho_2B Posts: 221 Joined: Fri Aug 09, 2019 12:16 am hw question 4B.3 A system's internal energy is increased by 982 J when 492 J of heat was supplied. Because change in energy is equal to the sum of work and heat, it makes sense that work must have been involved for the the change in energy to be that high. Using that equation, it makes sense that 982 J = 492 J + w, shouldn't the work be equal to 490 Joules ? How come the answer is 90 * 10^2 J? Ipsita Srinivas 1K Posts: 50 Joined: Mon Jun 17, 2019 7:24 am Re: hw question 4B.3 I know this isn't a satisfactory answer, but the textbook might just be wrong? It seems weird to me that the work done is approximately 10 times more than the internal energy change! (982 J vs 9000 J) Luyan Zhang - 2D Posts: 103 Joined: Sat Jul 20, 2019 12:16 am Re: hw question 4B.3 Brian_Ho_2B wrote:A system's internal energy is increased by 982 J when 492 J of heat was supplied. Because change in energy is equal to the sum of work and heat, it makes sense that work must have been involved for the the change in energy to be that high. Using that equation, it makes sense that 982 J = 492 J + w, shouldn't the work be equal to 490 Joules ? How come the answer is 90 * 10^2 J? The solution manual should be wrong. There is no way work done is 9000J.
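For reference, the arithmetic being discussed is just the first law with the sign convention ΔU = q + w:

w = ΔU − q = 982 J − 492 J = 490 J ≈ 4.9 × 10^2 J (work done on the system)

So, as the replies point out, a value of 90 × 10^2 J (9000 J) does not follow from these numbers; 4.9 × 10^2 J is the result consistent with the equation, which suggests the printed answer is likely a misprint.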
A function to add an edge to a graph.

Keywords: manip

##### Usage

addEdge(from, to, graph, weights)

##### Arguments

from: The node the edge starts at.

to: The node the edge goes to.

graph: The graph that the edge is being added to.

weights: A vector of weights, one for each edge.

##### Details

Both from and to can be vectors. They need not be the same length (if not, the standard rules for replicating the shorter one are used). Edges are added to the graph between the supplied nodes. The weights are given for each edge. The implementation is a bit too oriented towards the graphNEL class and will likely change in the next release to accommodate more general graph classes. If the graph is undirected then the edge is bidirectional (and only needs to be added once). For directed graphs the edge is directional.

##### Value

A new instance of a graph object with the same class as graph but with the indicated edges added.

##### See Also

addNode, removeEdge, removeNode

##### Examples

V <- LETTERS[1:4]
edL2 <- vector("list", length=4)
names(edL2) <- V
for(i in 1:4)
  edL2[[i]] <- list(edges=c(2,1,2,1)[i], weights=sqrt(i))
gR2 <- graphNEL(nodes=V, edgeL=edL2, edgemode="directed")
gX <- addEdge("A", "C", gR2, 1)

gR3 <- randomEGraph(letters[10:14], .4)
gY <- addEdge("n", "l", gR3, 1)

Documentation reproduced from package graph, version 1.50.0, License: Artistic-2.0
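Building on the documented example above, here is a small illustrative follow-up that is not part of the original manual page; it relies only on the recycling rule described in Details, and gR2 and its node names come from the Examples section:

# 'from' is a vector and the single 'to' node is recycled, so this adds the
# directed edges A->D and B->D with weights 1.5 and 2.5 respectively.
gZ <- addEdge(c("A", "B"), "D", gR2, c(1.5, 2.5))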
# dgl.rand_bipartite¶ dgl.rand_bipartite(utype, etype, vtype, num_src_nodes, num_dst_nodes, num_edges, idtype=torch.int64, device=device(type='cpu'))[source] Generate a random uni-directional bipartite graph and return. It uniformly chooses num_edges from all possible node pairs and form a graph. The random choice is without replacement, which means there will be no multi-edge in the resulting graph. To control the randomness, set the random seed via dgl.seed(). Parameters • utype (str, optional) – The name of the source node type. • etype (str, optional) – The name of the edge type. • vtype (str, optional) – The name of the destination node type. • num_src_nodes (int) – The number of source nodes. • num_dst_nodes (int) – The number of destination nodes. • num_edges (int) – The number of edges • idtype (int32, int64, optional) – The data type for storing the structure-related graph information such as node and edge IDs. It should be a framework-specific data type object (e.g., torch.int32). By default, DGL uses int64. • device (Device context, optional) – The device of the resulting graph. It should be a framework-specific device object (e.g., torch.device). By default, DGL stores the graph on CPU. Returns The generated random bipartite graph. Return type DGLGraph Examples >>> import dgl >>> dgl.rand_bipartite('user', 'buys', 'game', 50, 100, 10) Graph(num_nodes={'game': 100, 'user': 50},
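A small usage sketch (not taken from the DGL manual) exercising the optional arguments documented above; dgl.seed() is the seeding hook mentioned in the description, and num_nodes()/num_edges() are the usual DGLGraph accessors:

import dgl
import torch

dgl.seed(0)  # make the uniform edge sampling reproducible, as noted above
g = dgl.rand_bipartite('user', 'buys', 'game', 50, 100, 10,
                       idtype=torch.int32, device=torch.device('cpu'))
print(g.num_nodes('user'), g.num_nodes('game'))  # 50 100
print(g.num_edges('buys'))                       # 10, with no multi-edges by construction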
## Beginning Functional Programming

Published on February 15, 2018 by Vasily Vasinov

Functional programming has been getting more and more accepted by the wider community of software engineers. Lots of mainstream languages implement ideas that were considered esoteric only 15 years ago. Many engineers say that they learned more about computer science and computer systems after being introduced to the world of functional programming. Many also claim that it's more fun to build software with functional tools. The reason functional programming forces you to learn so much is because it challenges your every assumption about building software. In this lesson we'll cover some functional programming concepts that are considered the low-hanging fruit of functional programming and that are somewhat easy to grasp. My goal here is to spark interest in engineers that are already quite experienced in languages like Java, Ruby, and Python. Don't worry if you don't understand some of the Scala code that we'll use throughout this lesson. Its main purpose is to demonstrate functional programming concepts in broad strokes.

### Immutable State

First functional programming experiences are pretty similar: engineers start working on a project with the assumption that Scala (or another language with advanced functional capabilities) is like Java but with a few cool features and "Rubyisms." It's pretty natural to have this assumption initially even though it's wrong. Code written by functional rookies has mutable state all over the place and there is usually a lack of understanding of what immutable data structures are good for. "How do I modify values in a list?" "How do I modify a map?" "How do I keep state in loops?" Those are some pretty common early questions. To demonstrate the benefits of immutability let's look at two versions of the same program: one in Java and one in Scala.

##### Immutability

Immutability is the property of objects describing their inability to be changed after they are created. In other words, once an object is initialized no outside component can change it.

The following imperative Java program filters a list of users by the active flag, sorts them by ID, and then pulls last names from the sorted objects into a list:

public class User {
    private final int id;
    private final String firstName;
    private final String lastName;
    private final Boolean active;

    // omitting constructors, getters, and setters for brevity...
}

public static List<String> activeById(List<User> us) {
    List<User> users = new ArrayList<User>();

    for (User u: us) {
        if (u.getActive()) {
            users.add(u);
        }
    }

    Collections.sort(users, new Comparator<User>() {
        public int compare(User a, User b) {
            return a.getId() - b.getId();
        }
    });

    List<String> finalUsers = new ArrayList<String>();

    for (User u: users) {
        finalUsers.add(u.getLastName());
    }

    return finalUsers;
}

List<User> inputUsers = new ArrayList<User>();
List<String> activeUsersById = activeById(inputUsers);

This is pretty typical pre-Java 8 code: each collection is mutated by a set of explicit instructions. The whole program is somewhat verbose: every line of activeById describes what we want to do with the data as opposed to how the data should flow from start to finish.
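As an aside that is not part of the original post, Java 8's Streams API can already express the same pipeline in a declarative style. The sketch below assumes the same User getters as above and the usual java.util.Comparator, java.util.List and java.util.stream.Collectors imports:

List<String> activeUsersById = inputUsers.stream()
    .filter(User::getActive)                       // keep only active users
    .sorted(Comparator.comparingInt(User::getId))  // order by id
    .map(User::getLastName)                        // project to last names
    .collect(Collectors.toList());

Even so, the Scala version that follows is terser still and, unlike standard Java collections, works with data structures that are immutable by default.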
Here is what the same program looks like when written functionally: case class User(id: Int, firstname: String, lastname: String, active: Boolean) def activeById(us: Seq[User]) = us.filter(_.active).sortBy(_.id).map(_.lastname) val activeUsersById = activeById(Seq( User(11, "Nick", "Smith", false), User(89, "Ken", "Pratt", true), User(23, "Jack", "Sparrow", true) )) This looks a lot cleaner than the imperative version largely because there is no state that we have to keep track of. ActiveById accepts one argument and then passes it through chained functions that are part of the standard collection library. It’s important to note that filter, sortBy, and map are not arbitrary. They are very well defined and studied by the functional programming practitioners. A Rubyist would immediately notice that functional Scala code is similar to what she would write in Ruby. The code might look similar but the underlying mechanism of immutability is very different. The problem with Ruby is that it doesn’t enforce immutability. Every variable and data structure can potentially be mutated, which makes Ruby not as trustworthy. In Scala vals (read-only variables) and immutable collections are actually immutable from the programmer’s perspective. What does immutability bring to the table? From the practical standpoint, we end up with cleaner code, a smaller margin for error (we always know what’s in our collections and read-only variables), and richer abstractions. Another benefit of immutability is that we can write concurrent programs without worrying about mutexes and semaphores. ### Functions Most of functional programming is about using functions—not a huge surprise. There are different kinds of functions and techniques of functional composition that we can use. #### Pure Functions Pure functions are one of the main pillars of functional programming. A pure function is a function that depends only on its input parameters. It also must only return a specific result without mutating the outside state. sin(x: Double) or md5(s: String) are great examples of pure functions that only depend on their input parameters and always return an expected result since they don’t rely on anything from the outside. This makes pure functions easily testable and less bug prone. Not all abstractions can be directly implemented with pure functions. Think about input/output operations, logging, DB reading and writing, etc. In functional programming, there are models and abstractions that allow us to deal with those impure abstractions in a pure way, which results in cleaner and more composable code. Functions in Scala are first-class citizens. It means that they are not just class methods that can be declared and called; they are separate typed entities that can be passed to other functions. Functions can also return other functions. We can store them in variables or data structures. We can also work with them in the literal form without naming them: val ints = Seq(1, 2, 3, 4, 5) ints.filter(n => n % 2 == 0) n => n % 2 == 0, or its full form (n) => { n % 2 == 0 }, is a function literal without a name that checks whether a number is even. We can pass it around as another function argument or use it as a return value. Functions can be nested inside other functions. It’s a useful feature if we need to implement recursive calls with “subroutines” that we don’t want to put in the higher level scope. 
def toList: List[A] = {
  @annotation.tailrec
  def go(s: Stream[A], l: List[A]): List[A] = s match {
    case Empty => l
    case Cons(h, t) => go(t(), h() +: l)
  }
  go(this, List[A]()).reverse
}

In this example we don’t want go to be in the same scope as toList because it’s too specific to its “parent” function and there could be other functions at the same level as toList that have their own functions named go.

#### Currying and Partial Function Application

Currying and partial function application are pure math concepts that can be practically applied in functional languages. Both approaches allow us to store partially applied functions in variables or pass them to other functions. Let’s take a closer look at partial function application:

sealed trait Resource
case class User(id: Int) extends Resource
case class Record()
case class FormattedRecord()

def loadRecordsFor(r: Resource, since: Long): List[Record] = ???
def formatRecords(f: Long => List[Record], since: Long): List[FormattedRecord] = ???

// partially apply loadRecordsFor to one user (the id 42 is just a placeholder),
// leaving the `since` parameter open:
val userRecordsLoader: Long => List[Record] = loadRecordsFor(User(42), _)

In this example we have two generic functions: loadRecordsFor and formatRecords. We can partially apply loadRecordsFor for some user and save it to userRecordsLoader. Then, in another part of the program, we can call formatRecords with userRecordsLoader as its first parameter, since userRecordsLoader matches the type of formatRecords’s first parameter (Long => List[Record]) perfectly. This kind of function composition comes in handy in a lot of situations and makes our code less rigid.

Currying is similar to partial function application. It’s the process of decomposing a function of multiple arguments into a chained sequence of functions of one argument. In Scala, we use multiple parameter lists to implement currying:

def line(a: Int, b: Int, x: Int): Int = a * x + b
def curriedLine(a: Int)(b: Int)(x: Int): Int = line(a, b, x)
def defaultLine(x: Int): Int = curriedLine(1)(0)(x)

defaultLine(5) // returns 5

The curriedLine method does all the currying work here. Its signature is Int => (Int => (Int => Int)). This means that curriedLine takes an integer as a parameter and returns another function that takes an integer as a parameter…and so on.

### Options and Pattern Matching

The Option data type is an abstraction that represents optional values. It might seem like it’s nothing special but in everyday work it’s an extremely powerful mechanism to represent null, empty, or corrupt object and variable values. Options are containers that either contain a value of a certain type, represented by Some[T], or contain nothing, which is represented as None. You can also think of it as a list that’s either empty or has one value. When applied throughout the whole program, this powerful abstraction allows us to eliminate countless edge cases that result in null pointer exceptions or type incompatibilities when null values get extracted. Consider the following example:

case class Project(id: String, name: String, priority: Int, description: Option[String])

object ProjectsAccessor {
  def find(id: String): Option[Project] = ???
}

val project = ProjectsAccessor.find("123")

We are trying to retrieve a project record from the database but we don’t know if the project with a specific ID exists. Instead of returning null or throwing an exception, we are either going to return Some[Project] or None as defined by the Option[Project] return type of the find method.

Container types like that allow us to use a powerful tool called pattern matching. Pattern matching is a way to process data based on its structure.
For example, if we wanted to process the result of the find method from the example above and extract the name of the project we’d do something like this:

ProjectsAccessor.find("123") match {
  case Some(p) => p.name
  case None => ""
}

Here we are matching the result of find to see if the project exists. If it exists then we return its name, otherwise we return an empty string. At first, it might look like a switch-case structure from Java but it’s very different. With pattern matching we can add custom logic to our patterns:

ProjectsAccessor.find("123") match {
  case Some(p) if 1 until 5 contains p.priority => p.name
  case Some(p) if p.name == "Default Project" => p.name
  case Some(p) => ""
  case None => ""
}

We can also match our result directly based on the actual structure of the object:

def descriptionWrapper(p: Project) = p match {
  case Project(_, _, _, None) => "No description."
  case Project(id, _, _, Some(d)) => s"Project $id's description: $d"
}

### One-Liners and For Comprehensions

One of the greatest things that advanced function composition brings to the table is function chaining. Instead of repeatedly iterating over the same data collection in a bunch of for loops, we can do it in one elegant expression, or a one-liner:

case class Participant(name: String, score: Int, active: Boolean)

val ps = Seq(Participant("Jack", 34, true), Participant("Tom", 51, true), Participant("Bob", 90, false))

ps.filter(_.score < 50).filter(_.active).map(_.copy(active = false))
// returns List(Participant(Jack, 34, false))

In this one-liner we grabbed all participants whose score is lower than 50 and who are still active; then we changed the active status of the selected participants to false. There are many situations where similar one-liners save functional programmers time and dramatically reduce the amount of code. If a one-liner becomes too dense we can always break it down with a technique called for comprehensions. Let’s rewrite our example from before:

for {
  loser <- ps if loser.score < 50
  activeLoser <- Some(loser) if activeLoser.active
  deactivatedLoser <- Some(activeLoser.copy(active = false))
} yield deactivatedLoser
// returns List(Participant(Jack, 34, false))

This is more verbose than a one-liner but in cases where logic becomes too dense, this can really help code readability, yet keep all the benefits of immutability and function chaining.

### Type System

Coming from the world of Ruby programming, strict typing in Scala can feel like a burden at the outset. After exploring its benefits, most engineers tend to change their minds. Some functional languages have extremely sophisticated type systems with properties that imperative programmers would never use. Those type systems allow for flexible and composable programs. Let’s go over some of the properties of functional programming language type systems.

#### Type Inference

Type inference is the ability of a programming language to deduce types of an expression without explicit type annotations. Scala is not great at type inference in certain cases and sometimes we have to hold its hand and give it hints about what types to use.
Let’s look at a real example:

// We always have to specify types in method signatures:
def nameStartsWith(ns: Seq[String], n: String): Seq[String] =
  // Scala can't infer types for "generic" collections,
  // so we can't just say Seq.empty:
  ns.foldLeft(Seq.empty[String]) {
    // But it doesn't need types in this anonymous function:
    (l, r) => if (r.startsWith(n)) r +: l else l
  }

// Type inference works really well with list declarations:
val names = Seq("Bob", "Alice", "Ann")

nameStartsWith(names, "A") // returns List(Ann, Alice)

This example demonstrates both sides of type inference in Scala: we still have to explicitly define some types but in other cases, like when we pass a function (l, r) => ..., types don’t have to be annotated. In purely functional languages, like Haskell, we hardly ever have to specify types. The compiler is smart enough to infer them.

#### Type Bounds

Type bounds are another important concept in functional programming. They mean that we can support class hierarchies in generic type declarations. In Java, we can use generics to parameterize our code over types while keeping it type safe at compile time. For example, to define a generic list of elements we’d use this interface in Java:

public interface MyList<T>

If we want to define a list of, say, maps, but we don’t know what the implementation of those maps is, we’d use an upper bound for a type in our generic:

public interface MyList<T extends Map>

Now we can use this list to fill it with Hashtable, LinkedHashMap, HashMap, and TreeMap, or in other words, all default descendants of the Map interface. If there are children inheriting from Map, they can be valid elements of MyList as well. No other type can be used because of the type bound.

Here is an example of using an upper bound in Scala:

def convertToInts[T <: AnyVal](es: Seq[T], f: T => Int): Seq[Int] = {
  es.map(f(_))
}

AnyVal is a parent class of Double, Float, Int, and many others. In this function we simply define that we want T to be a child of AnyVal for type safety. On top of defining upper bounds we can define lower bounds, like [T >: Int], which would accept Int and its parent types. We can also combine type bounds for different generics in the same function signature: [T >: Int <: Any].

#### Type Variance

One other important property of an advanced type system is type variance. In Java, if we have class List<T> then List<Object> and List<String> will be unrelated or invariant even though Object and String are directly related. Arrays are covariant, meaning that String[] is a subtype of Object[]. Since arrays are mutable, we can end up with ArrayStoreException runtime exceptions if values of different types are stored and retrieved incorrectly. In Scala, arrays are invariant and immutable collections (or container types) are covariant, which in Scala syntax is defined as [+A]. Since collections are immutable all of our potential type errors will be discovered at compile time as opposed to run time. We can also define a container as contravariant: [-A]. Contravariance means that a container with a parent type is a subtype of a container with a child type.
Here is how it all works in real code:

case class InvariantContainer[A]()
case class CovariantContainer[+A]()
case class ContravariantContainer[-A]()

class Person
class User extends Person

// works
val inv1: InvariantContainer[User] = InvariantContainer[User]
// doesn't work
val inv2: InvariantContainer[Person] = InvariantContainer[User]
// works
val cov1: CovariantContainer[User] = CovariantContainer[User]
// works
val cov2: CovariantContainer[Person] = CovariantContainer[User]
// works
val con1: ContravariantContainer[User] = ContravariantContainer[User]
// doesn't work
val con2: ContravariantContainer[Person] = ContravariantContainer[User]
// works
val con3: ContravariantContainer[User] = ContravariantContainer[Person]

Covariance and contravariance are widely used in collection implementations and function type trickery.

### Lazy Evaluation and Infinite Data Structures

The concept of lazy evaluation doesn’t directly exist in non-functional languages but it’s pretty easy to grasp. Think of a typical if statement:

def expensiveOperation() = ???

val a = "foo"
val b = "foo"

if ((a == b) || expensiveOperation()) true else false

In most imperative languages the || operator evaluates its arguments (a == b) and expensiveOperation() lazily, meaning that expensiveOperation() doesn’t get executed if (a == b) is true. It’s only executed if (a == b) returns false.

Lazy evaluation in Scala allows us to define similar behavior in more contexts. We can define our variables to be lazy, meaning that they don’t get evaluated until they are accessed for the first time, as opposed to normal variables, which are evaluated when they are defined. Once a lazy variable is evaluated its value is cached.

case class Order(name: String, price: Int)

case class User(id: Int, orders: Seq[Order]) {
  lazy val cheapOrders: Seq[Order] = orders.filter(_.price < 50)
  lazy val expensiveOrders: Seq[Order] = orders.filter(_.price >= 50)
}

In this example we have a case class for the user abstraction that contains a list of shopping orders. cheapOrders and expensiveOrders don’t get evaluated during case class initialization like a normal val would. They only get evaluated if we call them directly. Why not use a method instead? The problem is that if we have an expensive computation or a DB call to make, calling a method multiple times will execute it multiple times. Lazy variables cache the return value once they are called, which makes for very effective optimizations in certain scenarios.

Another example of delayed execution is by-name function parameters. Normally, function parameters get evaluated right when they are passed. However, in some cases that involve DB access or heavy computations we don’t want to execute function parameters until absolutely necessary:

sealed trait Person
case class User() extends Person

// loadPeople is just a placeholder name; the important part is the by-name parameter
def loadPeople(loadUsers: => Seq[User]): Seq[Person] = {
  loadUsers // the expression passed as loadUsers is only evaluated at this point
}

Here loadUsers is a by-name parameter that wraps a potentially expensive DB operation. We don’t want it to be executed eagerly, so we can’t pass it by value as we normally would. The arrow symbol => in the parameter type means that we are passing the expression itself as opposed to its already-computed return value. Now we can evaluate it whenever we need to.

Laziness and by-name parameters are used to implement one of the most powerful constructs in functional programming: infinite data structures. In imperative programming, all data structures have a predefined size, which works fine in most cases but sometimes we don’t know what the size of the data structure is until the very end. With delayed execution it becomes possible to define our data structures in general form without “filling them up” with data until we actually have to do it. All of this sounds great in theory but how does it actually work?
Let’s use an infinite data structure, called a stream, for generating prime numbers. In Java, in order to generate primes, we would write a function that would generate prime numbers up to some limit. Then we would call this function to generate a list of N prime numbers and pass the result elsewhere. If we need a different list of primes (say, prime numbers up to N + M), we’d have to recalculate our list from scratch. In Scala, we would do it differently:

val primes: Stream[Int] = {
  def generatePrimes(s: Stream[Int]): Stream[Int] =
    s.head #:: generatePrimes(s.tail.filter(_ % s.head != 0))
  generatePrimes(Stream.from(2))
}

This syntax probably doesn’t make much sense if you have never worked with Scala but it’s not important in this case. What’s important is what we can do with this infinite data structure (not needing an implicit lower or upper bound). Say, we want to get the first five prime numbers that are greater than 100. It’s a piece of cake with our implementation:

primes.filter(_ > 100).take(5).toList

This chain of functions will return List(101, 103, 107, 109, 113) as expected. We can pass primes around to functions anywhere in our program without it being executed. We can also chain any actions on top of it (like filter in the example above) and pass the expression around, again, without generating any actual results until we need them. This allows us to come up with composable abstractions and to mold programs like play-doh.
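As a quick sanity check that nothing really is computed until we ask for it (a sketch assuming the primes definition above and a Scala 2.x REPL, where Stream is still the standard lazy sequence), we can watch the laziness directly:

// Naming the stream computes nothing beyond its head:
primes
// prints something like: Stream(2, ?) -- the "?" stands for the not-yet-evaluated tail

// Elements are computed (and memoized) only when we actually demand them:
primes.take(10).toList
// returns List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)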
# Amy Popular questions and responses by Amy 86. ## word unscramble need help unscrambling a word or words tarluna what is it HELP ME UNSCRAMBLE THESE LETTERS TO MAKE A WORD THAT PERTAINS TO SPACE AND IS A P POSSESSIVE NOUN. SEGHRTMLAN J'aodredefis Please post the word or words you need to unscramble, and we'll try to help 87. ## physics A ball thrown straight up from the ground passes a window 5.6 m up. An observer looking out the window sees the ball pass the window again, going down, 3.4s later. Find the velocity with which the ball was initially thrown. 88. ## physics A box of unknown mass is sliding with an initial speed vi = 5.60 m/s across a horizontal frictionless warehouse floor when it encounters a rough section of flooring d = 3.40 m long. The coefficient of kinetic friction between the rough section of flooring 89. ## Chemistry which is the stronger acid, HIO4 or HBrO4? 90. ## physics The figure shows an overhead view of a 3.00-kg plastic rod of length 1.20 m on a table. One end of the rod is attached to the table, and the rod is free to pivot about this point without friction. A disk of mass 37.0 g slides toward the opposite end of the 91. ## Bus. Math a. Graduated payments result in the borrower paying A. more at the beginning of the mortgage. B. less at the beginning of the mortgage. C. the mortgage at 1¨M2 the standard rate. D. less at the end of the mortgage. Answer: B 2. When are annuity due 92. ## Math: does anyone know how to solve this 1 + tan^2 (5) - csc^2 (85) = ? Would this equal 0 could someone explain this to me and the correct answer? 93. ## Chemistry-weak acid affects the pH? For which salt in each of the following groups will the solubility depend on pH? AgCl AgF AgBr please explain why its the answer 94. ## Probability/Statistics A quiz consists of 10 multiple-choice questions, each with 4 possible answers. For someone who makes random guesses for all of the answers, find the probability of passing if the minimum passing grade is 70 %. I'm not sure what to do. I figured that 95. ## Statistics Please help, I do not understand how to do this: Let x be a random variable that represents white blood cell count per cubic milliliter of whole blood. Assume that x has a distribution that is approximately normal, with mean μ = 7400 and estimated 96. ## Ethics and Tech The Enlightenment view of addiction is that: A. there is nothing wrong with addiction. B. addiction is not real. C. addiction can never be overcome by will-power alone. D. people are responsible for the choices they make. is it D 97. ## English Scarlet Letter Chapter 9 1.The subject of the main clause in the sentence beginning "If the latter possess native sagacity . . . " (line 68) is (A) "latter" (line 68) (B) "he" (line 70) (C) "revelations" (line 76) (D) "qualifications" (line 80) (E "soul" 98. ## Chemistry To extract gold from its ore, the ore is treated with sodium cyanide solution in the presence of oxygen and water. 4 Au(s) + 8 NaCN(aq) + O2(g) + 2 H2O(l) -> 4 NaAu(CN)2(aq) + 4 NaOH(aq) A parking lot charges $3 to park a car for the first hour and$2 per hour after that. If you use more than one parking space, the second and each subsequent car will be charged 75% of what you pay to park just one car. If you park 3 cars for t hours, which
# Some comments on QM and CM—Part 2: Without ontologies, “classical” mechanics can get very unintuitive too. (Also, a short update.) We continue from the last post. If you haven’t read and understood it, it can be guaranteed that you won’t understand anything from this one! [And yes, this post is not only long but also a bit philosophical.] The last time, I gave you a minimal list of different ontologies for physics theories. I also shared a snap of my hurriedly jotted (hand-written) note. In this post, I will come to explain what I meant by that note. 1. In the real world, you never get to see the objects of “classical” mechanics: OK, let’s first take a couple of ideas from Newtonian mechanics. 1.1. Point-particles: The Newtonian theory uses a point particle. But your perceptual field never holds the evidence for any such an object. The point particle is an abstraction. It’s an idealized (conceptual-level) description of a physical object, a description that uses the preceding mathematical ideas of limits (in particular, the idea of the vanishingly small size). The important point to understand here isn’t that the point-particle is not visible. The crucial point here is: it cannot be visible (or even made visible, using any instrument) because it does not exist as a metaphysically separate object in the first place! 1.2. Rigid bodies: It might come as a surprise to many, esp. to mechanical engineers, but something similar can also be said for the rigid body. A rigid body is a finite-sized object that doesn’t deform (and unless otherwise specified, doesn’t change any of its internal fields like density or chemical composition). Further, it never breaks, and all its parts react instantaneously to any forces exerted on any part of it. Etc. When you calculate the parabolic trajectory of a cricket ball (neglecting the air resistance), you are not working with any entity that can ever be seen/ touched etc.—in principle. In your calculations, in your theory, you are only working with an idea, an abstraction—that of a rigid body having a center of mass. Now, it just so happens that the concepts from the Newtonian ontologies are so close to what is evident to you in your perceptual field, that you don’t even notice that you are dealing with any abstractions of perceptions. But this fact does not mean that they cease to be abstract ideas. 2. Metaphysical locus of physics abstractions, and epistemology of how you use them: 2.1. Abstractions do exist—but only in the mind: In general, what’s the metaphysics of abstractions? What is the metaphysical locus of its existence? An abstraction exists as a unit of mental integration—as a concept. It exists in your mind. A concept doesn’t have an existence apart from, or independent of, the men who know and hold that concept. A mental abstraction doesn’t exist in physical reality. It has no color, length, weight, temperature, location, speed, momentum, energy, etc. It is a non-material entity. But it still exists. It’s just that it exists in your mind. In contrast, the physical objects to which the abstractions of objects make a reference, do exist in the physical reality out there. 2.2. Two complementary procedures (or conceptual processings): Since the metaphysical locus of the physical objects and the concepts referring to them are different, there have to be two complementary and separate procedures, before a concept of physics (like the ideal rigid body) can be made operational, say in a physics calculation: 2.2.1. 
Forming the abstraction: First, you have to come to know that concept—you either learn it, or if you are an original scientist, you discover/invent it. Next, you have to hold this knowledge, and also be able recall and use it as a part of any mental processing related to that concept. Now, since the concept of the rigid body belongs to the science of physics, its referents must be part of the physical aspects of existents. 2.2.2. Applying the abstraction in a real-world situation: In using a concept, then, you have to be able to consider a perceptual concrete (like a real cricket ball) as an appropriate instance of the already formed concept. Taking this step means: even if a real ball is deformable or breakable, you silently announce to yourself that in situations where such things can occur, you are not going to apply the idea of the rigid body. The key phrases here are: “inasmuch as,” “to that extent,” and “is a.” The mental operation of regarding a concrete object as an instance of a concept necessarily involves you silently assuming this position: “inasmuch as this actual object (from the perceptual field) shows the same characteristics, in the same range of “sizes”, as for what I already understand by the concept XYZ, therefore, to that extent, this actual object “is a” XYZ. 2.2.3. Manipulation of concepts at a purely abstract level is possible (and efficient!): As the next step, you have to be able to directly manipulate the concept as a mere unit from some higher-level conceptual perspective. For example, as in applying the techniques of integration using Newton’s second law, etc. At this stage, your mind isn’t explicitly going over the defining characteristics of the concept, its relation to perceptual concretes, its relation to other concepts, etc. Without all such knowledge at the center of your direct awareness, you still are able to retain a background sense of all the essential properties of the objects subsumed by the concept you are using. Such a background sense also includes the ideas, conditions, qualifications, etc., governing its proper usage. That’s the mental faculty automatically working for you when you are born a human. You only have to will, and the automatic aspects of your mind get running. (More accurately: Something or the other is always automatically present at the background of your mind; you are born with such a faculty. But it begins serving your purpose when you begin addressing some specific problem.) All in all: You do have to direct the faculty which supplies you the background context, but you can do it very easily, just by willing that way. You actually begin thinking on something, and the related conceptual “material” is there in the background. So, free will is all that it takes to get the automatic sense working for you! 2.2.4. Translating the result of a calculation into physical reality: Next, once you are done with working ideas at the higher-level conceptual level, you have to be able to “translate the result back to reality”. You have to be able to see what perceptual-level concretes are denoted by the concepts related to the result of calculation, its size, its units, etc. The key phrase here again are: “inasmuch as” and “to that extent”. For example: “Inasmuch as the actual cricket ball is a rigid body, after being subjected to so much force, by the laws governing rigid bodies (because the laws concern themselves only with the rigid bodies, not with cricket balls), a rigid body should be precisely at 100.0 meter after so much time. 
Inasmuch as the cricket ball can also be said to have an exact initial position (as for a rigid body used in the calculations), its final position should be exactly 100 meter away. Inasmuch as a point on the ground can be regarded as being exactly 100 meter away (in the right direction), the actual ball can also be expected, to that extent, to be at [directly pointing out] that particular spot after that much time. Etc. 2.3: A key take-away: So, an intermediate but big point I’ve made is: Any theory of classical mechanics too makes use of abstractions. You have to undertake procedures involving the mappings between concretes and abstractions, in classical mechanics too. 2.4. Polemics: You don’t see a rigid body. You see only a ball. You imagine a rigid body in the place of the given ball, and then decide to do the intermediate steps only with this instance of the imagination. Only then can you invoke the physics theory of Newtonian mechanics. Thus, the theory works purely at the mental abstractions level. A theory of physics is not an album of photographs; an observation being integrated in a theory is not just a photograph. On the other hand, a sight of a ball is not an abstraction; it is just a concretely real object in your perceptual field. It’s your mind that makes the connection between the two. Only can then any conceptual knowledge be acquired or put to use. Acquisition of knowledge and application of knowledge are two sides of the same coin. Both involve seeing a concrete entity as an instance subsumed under a concept or a mental perspective. 2.5. These ideas have more general applicability: What we discussed thus far is true for any physics theory: whether “classical” mechanics (CM) or quantum mechanics (QM). It’s just that the first three ontologies from the last post (i.e. the three ontologies with “Newtonian” in their name) have such abstractions that it’s very easy to establish the concretes-to-abstractions correspondence for them. These theories have become, from a hindsight of two/three centuries and absorption of its crucial integrative elements into the very culture of ours, so easy for us to handle, they seem to be so close to “the ground” that we have to think almost nothing to regard a cricket ball as a rigid body. Doesn’t matter. The requirement of you willingly having to establish the correspondenc between the concretes and abstractions (and vice versa) still exists. Another thing: The typical application of all the five pre-quantum ontologies also typically fall in the limited perceptual range of man, though this cannot be regarded as the distinguishing point of “classical” mechanics. This is an important point so let me spend a little time on it. Trouble begins right from Fourier’s theory. 3. “Classical” mechanics is not without tricky issues: 3.1. Phenomenological context for the Fourier theory is all “classical”: In its original form, Fourier’s theory dealt with very macroscopic or “every day” kind of objects. The phenomenological context which gave rise to Fourier’s theory was: the transmission of heat from the Sun by diffusion into the subterranean layers of the earth, making it warm. That was the theoretical problem which Fourier was trying to solve, when he invented the theory that goes by his name. Actually, that was a bit more complicated problem. A simpler formulation of the same problem would be: quantitatively relating the thermal resistance offered by wood vs. metal, etc. 
The big point I want to note here is: All these (the earth, a piece of wood or metal) are very, very “everyday” objects. You wouldn’t hesitate saying that they are objects of “classical” physics.

3.2. But the Fourier theory makes weird predictions in all classical physics too:

But no matter how classical these objects look, an implication is this: The Fourier theory ends up predicting infinite velocity for signal propagation for “classical” objects too. This is a momentous implication. Make sure you understand it right. Pop-sci writers never highlight this point. But it’s crucial. The better you understand it, the less mysterious QM gets!

In concrete terms, what the Fourier theory says is this: If you pour a cup of warm water on the ground at the North Pole, no doubt the place will get warmer for some time. But this is not the only effect your action would have. Precisely and exactly at the same instant, the South Pole must also get warmer, albeit to a very small extent. Not only the South Pole, every object at every place on the earth, including the cell phone of your friend sitting in some remote city, also must get warmer. [Stretching the logic, and according a conduction mode also to the intergalactic dust: Not just that, every part of the most distant galaxies too must get warmer—in the same instant.] Yes, the warming at remote places might be negligibly small. But in principle, it is not zero. And that’s classical physics of ordinary heat conduction for you.

3.3. Quantum entanglement and Heisenberg’s uncertainty principle are direct consequences of the same theory:

Now, tell me, how intuitive were Fourier’s predictions? My answer: Exactly as unintuitive as is the phenomenon of quantum entanglement—and, essentially, for exactly the same ontological-physical-mathematical reasons! Quantum entanglement is nothing but just another application of the Fourier theory. And so is Heisenberg’s uncertainty principle. It too is a direct consequence of the Fourier theory.

3.4. Another key take-away:

So, the lesson is: Not all of “classical” mechanics is as “intuitive” as you were led to believe.

3.5. Why doesn’t anyone complain?

If classical physics too is that unintuitive, then how come no one goes around complaining about it? The reason is this: Classical mechanics involves and integrates a conceptually smaller range of phenomena. Most of its application scenarios too are well understood—even if not by you, then at least by some learned people, and they have taken care to explain all these scenarios to you.

For instance, if I ask you to work out how the Coriolis force works for two guys sitting diametrically opposite on a rotating disco floor and throwing balls at each other, I am willing to take a good bet that you won’t be able to work out everything on your own using vector analysis and Newton’s laws. So, this situation should actually be non-intuitive to you. It in fact is: Without searching on the ‘net, be quick and tell me whether the ball veers in the direction of rotation or opposite to it. See? It’s just that no pop-sci authors highlight issues like this, and so, no philosophers take notice. (And, as usual, engineers don’t go about mystifying anything.)

So, what happens in CM is that some expert works out the actual solution and explains it to you. You then snatch some bits and pieces, maybe just a few clues from his explanation, and memorize them. Slowly, as the number of such use-cases increases, you get comfortable enough with CM. Then you begin to think that CM is intuitive.
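To pin down the claim in section 3.2 with a minimal bit of standard textbook math (nothing here is specific to this post): the 1D heat-conduction equation and its fundamental solution are

$$\frac{\partial T}{\partial t} = \alpha \, \frac{\partial^2 T}{\partial x^2}, \qquad T(x, t) = \frac{1}{\sqrt{4 \pi \alpha t}} \, e^{-x^2 / (4 \alpha t)}.$$

For every $t > 0$, this $T(x, t)$ is strictly positive at every $x$, however large. A pulse of heat deposited at $x = 0$ at $t = 0$ therefore raises the temperature everywhere instantly—by an exponentially tiny amount far away, but not by zero. That is the infinite signal speed of the diffusion model, stated in one line. So much for the mathematics; back to how people come to regard CM as intuitive.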
And then, the next time when your grandma asks you how come that motorcyclist spinning inside the vertical well doesn’t fall off, you say that he sticks to the wall due to the centrifugal force. Very intuitive! [Hint, hint: Is it actually centrifugal or centripetal?] OK, now let’s go over to QM. 4. The abstract-to-concretes mappings are much more trickier when it comes to QM: 4.1. The two-fold trouble: The trouble with QM is two-fold. First of all, the range of observations (or of phenomenology) underlying it is not just a superset of CM, it’s a much bigger superset. Second: Physicists have not been able to work out a consistent ontology for QM. (Most often, they have not even bothered to do that. But I was talking about reaching an implicit understanding to that effect.) So, they are reduced to learning (and then teaching) QM in reference to mathematical quantities and equations as the primary touch-stones. 4.2. Mathematical objects refer to abstract mental processes alone, not to physical objects: Now, mathematical concepts have this difference. They are not only higher-level abstractions (on top of physical concepts), but their referents too in themselves are invented and not discovered. So, it’s all in the mind! It’s true that physics abstractions, qua mental entities, don’t exist in physical reality. However, it also is true that the objects (including their properties/characteristics/attributes/acctions) subsumed under physics concepts do have a physical existence in the physical world out there. For instance, a rigid body does not exist physically. But highly rigid things like stones and highly pliable or easily deformable things like a piece of jelly or an easily fluttering piece of cloth, do exist physically. So, observing them all, we can draw the conclusion that stones have much higher rigidity than the fluttering flag. Then, according an imaginary zero deformability to an imaginary object, we reach the abstraction of the perfectly rigid body. So, while the rigid body itself does not exist, rigidity as such definitely is part of the natural world (I mean, of its physical aspects). But not so with the mathematical abstractions. You can say that two (or three or $n$ number of) stones exist in a heap. But what actually exists are only stones, not the number $2$, $3$, or $n$. You can say that a wire-frame has edges. But you don’t thereby mean that its edges are geometrical lines, i.e., objects with only length and no thickness. 4.3. Consequence: How physicists hold, and work with, their knowledge of the QM phenomena: Since physicists could not work out a satisfactory ontology for QM, and since concepts of maths do not have direct referents in the physical reality as apart from the human consciousness processing it size-wise, their understanding of QM does tend to be a lot more shaky (the comparison being with their understanding of the pre-quantum physics, esp. the first three ontologies). As a result, physicists have to develop their understanding of QM via a rather indirect route: by applying the maths to even more number of concrete cases of application, verifying that the solutions are borne out by the experiments (and noting in what sense they are borne out), and then trying to develop some indirect kind of a intuitive feel, somehow—even if the objects that do the quantum mechanical actions aren’t clear to them. 
So, in a certain sense, the most talented quantum physicists (including Nobel laureates) use exactly the same method as you and I use when we are confronted with the Coriolis forces. That, more or less, is the situation they find themselves in.

The absence of a satisfactory ontology has been the first and foremost reason why QM is so extraordinarily unintuitive. It also is the reason why it’s difficult to see CM as an abstraction from QM. Ask any BS in physics. Chances are 9 out of 10 that he will quote something like Planck’s constant going to zero or so. Not quite.

4.4. But why didn’t anyone work out an ontology for QM?

But what were the reasons that physicists could not develop a consistent ontology when it came to QM? Ah. That’s too complicated. At least 10 times more complicated than all the epistemology and physics I’ve dumped on you so far. That’s because now we get into pure philosophy. And you know where the philosophers sit? They all sit on the Humanities side of the campus! But to cut a long story short, very short, so short that it’s just a collage-like thingie: There are two reasons for that. One simple and one complicated.

4.4.1. The simple reason is this: If you don’t bother with ontologies, and then, if you dismiss ideas like the aether, and go free-floating towards ever higher and still higher abstractions (especially with maths), then you won’t be able to get even EM right. The issue of extracting the “classical” mechanical attributes, variables, quantities, etc. from the QM theory simply cannot arise in such a case.

Indeed, physicists don’t recognize the very fact that ontologies are more basic to physics theories. Instead, they whole-heartedly accept and vigorously teach and profess the exact opposite: They say that maths is most fundamental, even more fundamental than physics. Now, since QM maths is already available, they argue, it’s only a question of going about looking for a correct “interpretation” for this maths. But since things cannot be very clear with such an approach, they have ended up proposing some 14+ (more than fourteen) different interpretations. None works fully satisfactorily. But some then say that the whole discussion about interpretation is bogus. In effect, as Prof. David Mermin characterized it: “Shut up and calculate!”

That was the simple reason.

4.4.2. The complicated reason is this: The nature of the measurement problem itself is like that. Now, here, I find myself in a tricky position. I think I’ve cracked this problem. So, even if I think it was a very difficult problem to crack, please allow me to not talk a lot more about it here; else, doing so runs the risk of looking like blowing your own tiny piece of work out of all proportion.

So, to appreciate why the measurement problem is complex, refer to what others have said about this problem. Coleman’s paper gives some of the most important references too (e.g., von Neumann’s process 1 vs. process 2 description) though he doesn’t cover the older references like the 1927 Bohr-Einstein debates etc. Then there are others who say that the measurement problem does not exist; that we have to just accept a probabilistic OS at the firmware level by postulation. How to answer them? That’s a homework left for you.

5. A word about Prof. Coleman’s lecture:

If Prof. Coleman’s lecture led you to conclude that everything was fine with QM, you got it wrong. In case this was his own position, then, IMO, he too got it wrong. But no, his lecture was not worthless. It had a very valuable point.
If Coleman were conversant with the ontological and epistemological points we touched on (or hinted at), then he would have said something to the following effect: All physics theories presuppose a certain kind of ontology. An ontology formulates and explains the broad nature of objects that must be assumed to exist. It also puts forth the broad nature of causality (objects-identities-actions relations) that must be assumed to be operative in nature. The physics theory then makes detailed, quantitative, statements about how such objects act and interact. In nature, physical phenomena differ very radically. Accordingly, the phenomenological contexts assumed in different physical theories also are radically different. Their radical distinctiveness also get reflected in the respective ontologies. For instance, you can’t explain the electromagnetic phenomena using the pre-EM ontologies; you have to formulate an entirely new ontology for the EM phenomena. Then, you may also show how the Newtonian descriptions may be regarded as abstractions from the EM descriptions. Similarly, we must assume an entirely new kind of ontological nature for the objects if the maths of QM is to make sense. Trying to explain QM phenomena in terms of pre-quantum ontological ideas is futile. On the other hand, if you have a right ontological description for QM, then with its help, pre-QM physics may be shown as being a higher-level, more abstract, description of reality, with the most basic level description being in terms of QM ontology and physics. Of course, Coleman wasn’t conversant with philosophical and ontological issues. So, he made pretty vague statements. 6. Update on the progress in my new approach. But RSI keeps getting back again and again! I am by now more confident than ever that my new approach is going to work out. Of course, I still haven’t conducted simulations, and this caveat is going to be there until I conduct them. A simulation is a great way to expose the holes in your understanding. So take my claim with a pinch of salt, though I must also hasten to note that with each passing fortnight (if not week), the quantity of the salt which you will have to take has been, pleasantly enough (at least for me), decreasing monotonically (even if not necessarily always exponentially). I had written a preliminary draft for this post about 10 days ago, right when I wrote my last post. RSI had seemed to have gone away at that time. I had also typed a list of topics (sections) to write to cover my new approach. It carried some 35+ sections. However, soon after posting the last blog entry here, RSI began growing back again. So, I have not been able to make any substantial progress since the last post. About the only things I could add were: some 10–15 more section or topic names. The list of sections/topics includes programs too. However, let me hasten to add: Programs can’t be written in ink—not as of today, anyway. They have to be typed in. So, the progress is going to be slow. (RSI.) All in all, I expect to have some programs and documentation ready by the time Q1 of 2021 gets over. If the RSI keeps hitting back (as it did the last week), then make it end-Q2 2021. OK. Enough for this time round. A song I like: [When it comes to certain music directors, esp. from Hindi film music, I don’t like the music they composed when they were in their elements. For example, Naushad. For example, consider the song: मोहे पनघट पे (“mohe panghat pe”). 
I can sometimes appreciate the typical music such composers have produced, but only at a somewhat abstract level—it never quite feels like “my kind of music” to me. Something similar, for the songs that Madan Mohan is most famous for. Mohan was a perfectionist, and unlike Naushad, IMO, he does show originality too. But, somehow, his sense of life feels like too sad/ wistful/ even fatalistic to me. Sadness is OK, but a sense of inevitability (or at least irromovability) of suffering is what gets in the way. There are exceptions of course. Like, the present song by Naushad. And in fact, all songs from this move, viz. साथी (“saathi”). These are so unlike Naushad! I have run another song from this movie a while ago (viz. मेरे जीवन साथी, कली थी मै तो प्यासी (“mere jeevan saathee, kalee thee main to pyaasee”). That song had actually struck me after a gap of years (may be even a decade or two), when I was driving my old car on the Mumbai-Pune expressway. The air-conditioner of my car is almost never functional (because I almost never have the money to get it repaired). In any case, the a/c was neither working nor even necessary, on that particular day late in the year. So, the car windows were down. It was pretty early in the morning; there wasn’t much traffic on the expressway; not much wind either. The sound of the new tires made a nice background rhythm of sorts. The sound was very periodic, because of the regularity of the waviness that comes to occur on cement-concrete roads after a while. That waviness? It’s an interesting problem from mechanics. Take a photo of a long section of the railway tracks while standing in the middle, especially when the sun is rising or setting, and you see the waviness that has developed on the rail-tracks too—they go up and down. The same phenomenon is at work in both cases. Broadly, it’s due to vibrations—a nonlinear interaction between the vehicle, the road and the foundation layers underneath. (If I recall it right, in India, IIT Kanpur had done some sponsored research on this problem (and on the related NDT issues) for Indian Railways.) So, anyway, to return to the song, it was that rhythmical sound of the new tires on the Mumbai-Pune Expressway which prompted something in my mind, and I suddenly recalled the above mentioned song (viz. मेरे जीवन साथी, कली थी मै तो प्यासी (“mere jeevan saathee, kalee thee main to pyaasee”). Some time later, I ran it here on this blog. (PS: My God! The whole thing was in 2012! See the songs section, and my the then comments on Naushad, here [^]) OK, cutting back to the present: Recently, I recalled the songs from this movie, and began wondering about the twin questions: (1) How come I did end up liking anything by Naushad, and (2) How could Naushad compose anything that was so much out of his box (actually, the box of all his traditional classical music teachers). Then, a quick glance at the comments section of some song from the same film enlightened me. (I mean at YouTube.) I came to know a new name: “Kersi Lord,” and made a quick search on it. Turns out, Naushad was not alone in composing the music for this film: साथी (“saathee”). He had taken assistance from Kersi Lord, a musician who was quite well-versed with the Western classical and Western pop music. (Usual, for a Bawa from Bombay, those days!) The official credits don’t mention Kersi Lord’s name, but just a listen is enough to tell you how much he must have contributed to the songs of this collaboration (this movie). Yes, Naushad’s touch is definitely there. 
(Mentally isolate Lata’s voice and compare to मोहे पनघट पे (“mohe panghat pe”).) But the famous Naushad touch is so subdued here that I actually end up liking this song too! So, here we go, without further ado (but with a heartfelt appreciation to Kersi Lord): (Hindi) ये काैन आया, रोशन हो गयी (“yeh kaun aayaa, roshan ho gayee) Singer: Lata Mangeshkar Music: [Kersi Lord +] Naushad Lyrics: Majrooh Sultanpuri A good quality audio is here [^]. ] PS: May be one little editing pass tomorrow? History: — 2020.12.19 23:57 IST: First published — 2020.12.20 19:50 IST and 2020.12.23 22:15 IST: Some very minor (almost insignificant) editing / changes to formatting. Done with this post now.
# 751.1 Preliminary Design

## 751.1.1 Overview

### 751.1.1.1 Introduction

The Preliminary Design of a structure begins with the district submitting a Bridge Survey and Preliminary Geotechnical Report indicating their need for a structure, and ends with the completion of the Design Layout or TS&L submittal (type, size and location). This section is intended to be a guide for those individuals assigned the task of performing the Preliminary Design or “laying out” of a structure. The types of structures can be broken into five categories:

1.) Bridge Over Water
3.) Box Culvert Over Water
4.) Retaining Wall
5.) Rehabilitation or Overlay of Existing Structure

In addition to the following information, the Preliminary Design shall consider hydraulic issues where applicable.

### 751.1.1.2 Bridge Survey

The Preliminary Design process starts with the receipt of the Bridge Survey. The Structural Resource Manager records the receipt date in his file and then passes the Bridge Survey on to the Bridge Survey Processor. The following is a list of steps that are taken by the Bridge Survey Processor.

**Assign a Bridge Number to the Structure**

Use the next number in the Bridge Index. Book 1 is for bridge numbers that start with an “A” thru A9999 then with a “B” starting at B0001, while book 2 is for all other state bridge numbers. The “UIP” book is for off-system bridges. Anytime a bridge is rehabilitated or changed in any way, it will receive a new bridge number. For example, bridge no. A-1234 would become A12341 for the first rehab and A12342 for the second rehab. Another example is that bridge no. L-123R would become L01232 for its next rehab. New timber bridges start with the letter “T” in book 2.

Enter the Bridge No. and other required information in the Bridge Survey Book.

Enter the Bridge No. in the Survey Received database – Microsoft Access (J:\Brhist\survrec.mdb). The password required to do this is available from the Office clerk.

Enter the Bridge No., survey received date and feature crossed in the Bloodhound 2000 database – Access (T:\br-proj\bloodhound2000\bloodhound-2000.mdb).

Write the Bridge No. on the plat and profile sheets as well as the cover letter from the district.

**Create Job Folders**

Check to see if a Correspondence File has been created. If the Correspondence File has been created, record the Bridge No. and make a Design Layout File for each structure received. If the Correspondence File has not been created, make a Correspondence File, an outer folder and a Design Layout File for each structure received. Here is the information for each type of folder/file:

| Folder Type | Required Information on Folder |
|---|---|
| Outer (pink label) | County, Route and Job No. |
| Correspondence | County, Route and Job No. |
| Design Layout | County, Route, Bridge No., Location and Job No. |

Also be sure to notify the Structural Resource Manager and appropriate Structural Project Manager (SPM) or Structural Liaison Engineer when a new Correspondence File is created. Include in your note the Job No., County, Route, Bridge No. and the Bridge Division contact. The Bridge Division contact is either the Structural Project Manager (SPM) or Structural Liaison Engineer.

**Fill Out Bridge Survey Report (BR 105R)**

If the district did not include a completed form BR 105R for the structure, one should be completed by the Bridge Division. This form is not necessary on rehabs, overlays and retaining walls.
An acknowledgement letter shall be sent to the district informing them that Bridge Division received the Bridge Survey. Include the Bridge No. and the name of the Bridge Division contact. Locate the Structure on the County Map Pull out the map for the county this particular structure is in, circle the location and write the Bridge No. on the map (Bridges only, not walls and culverts). Calculate Drainage Information For structures over streams or waterways, calculate the drainage area, length of stream, 10% elevation, 85% elevation and slope from topographic maps. Record this information on Form BR 105R in the Design Layout folder. If the district’s calculations indicate that the drainage area is less than 1.5 sq. miles, do not calculate the drainage area. The accuracy of the drainage area should be to the nearest 0.10 sq. mile for drainage areas less than 10 sq. miles and to the nearest 1.0 sq. mile for drainage areas greater than or equal to 10 sq. miles. Process Electronic Files If electronic files of the Bridge Survey plat and profile sheets were not included, contact the Transportation Project Manager (TPM) in the district or the Roadway Consultant to arrange for the files to be sent. Consult with the SPM or Bloodhound 2000 to determine who the district TPM is. When the electronic files are received, verify that the scale is 1”=10’ and that the necessary reference files are included. The Bridge Survey Processor may have to work with the district to correct any errors. Final Step for Bridge Survey Processor Once all of these steps are completed, the Bridge Survey Processor should deliver the Correspondence File, outer folder and the Design Layout Folder to the Structural Resource Manager. The Structural Resource Manager will then pass the files to the SPM. The SPM then requests the Structural Resource Manager to assign a Preliminary Designer. ### 751.1.1.3 Beginning Preliminary Design Once the Preliminary Designer is assigned to a structure, they should meet with the Structural Project Manager to go over the Correspondence and Layout files to see if anything out of the ordinary has come up at Core Team Meetings prior to that date. It is important that any correspondence or calculations used in the laying out of the structure should be included in the bound portion of the Layout Folder. The Preliminary Designer should then examine the Bridge Survey closely for any errors or omissions. Consult Section 748.6 Bridge Reports and Layouts. Pay special attention to the scales used. Make sure the district's submittal includes photographs and details of staging and/or bypasses (if applicable). Contact the district to resolve any discrepancies or questions. Look at the Bridge Survey cover letter to determine who the District Contact is. A visit to the bridge site by the Preliminary Designer may be warranted to help determine Manning’s “n” values, examine adjacent properties, etc. If you decide to make this trip, advise the Structural Project Manager and the District Contact since they may also want to attend. ## 751.1.2 Bridges/Boxes ### 751.1.2.1 End Slopes/Spill Fills The end slopes are determined by the Construction and Materials Division and are supplied to the Bridge Division by way of the Preliminary Geotechnical Report. If this report is not in the Correspondence file, contact the District to get a copy of it. 
The Bridge Division has made a commitment to the districts that we will have the bridge plans, specials and estimate completed 12 months after the date the Bridge Survey and Preliminary Geotechnical Report are received. The "12 month clock" does not start ticking until both the Bridge Survey and the Preliminary Geotechnical Report are in the Bridge Division.

When laying out a skewed structure, adjust the end slope for the skew angle. On higher skews, this will have a significant effect on the lengths of the spans. Often the slope of the spill fills will be steeper than the roadway side slopes. On a skewed structure, this makes it necessary to "warp" the slopes.

Whenever there will be a berm under any of the spans, its elevation should be such that there is a minimum of 4 feet clear between the ground line and the bottom of the girder, as shown in the berm elevation figure. (Figure: Berm Elevation — (*) specify berm elevation or 4'-0" minimum clearance.)

If a rock cut is encountered in the spill slope, a slope of 1:1 may be used to the top of the rock.

### 751.1.2.2 Wing Lengths

The lengths of the wings at the end bents are to be determined prior to the issuance of the Bridge Memorandum. There are two reasons for this. First, the district will use these lengths to determine the placement of their guardrail (bridge anchor section). Second, if the lengths of the wings exceed 22 feet, they will have to be broken into a stub wing and a detached wing wall. If this happens, then you will need to include this extra cost in your Preliminary Cost Estimate and request soundings for the wall. The request for soundings for the wall should include a request for the determination of the allowable bearing of the soil (if in cut - assume piling if it is in fill) and the angle of internal friction for the material retained by the detached wing wall. Also include the bottom of wing footing elevation.

On divided highway bridges with high skews and shallow end slopes, the wing lengths on the median side of the bridge may be less than the other side due to the difference in sideslope between the median and the outside.

### 751.1.2.3 Live Load

The live load requirements for a structure shall be HL-93. On box culverts, the actual live load applied to the structure is dependent upon the amount of fill on top of the box; however, see the Structural Project Manager for the live load that goes on the Bridge Memorandum.

### 751.1.2.4 Skew Angle

Determining the most appropriate skew angle for the structure involves some judgement. On bridges over streams, pick the angle that will allow floodwater to pass through the bridge opening with the least amount of interference from intermediate bent columns. Another consideration on meandering streams is to avoid a skew which will cause the spill fill – side slope transition to block the stream. Often a trip to the field may be justified just for determining the angle (you can even ask the district to stake some different skews for you to observe in the field). On stream crossings, avoid skews between zero and five degrees and try to use five degree increments. On grade separations, often the skew must be accurate to the nearest second to maintain minimum horizontal clearances. Keep all bents on a bridge parallel whenever possible and avoid skews over 55 degrees. Also keep in mind that the higher the skew, the higher the Preliminary Cost Estimate due to the beam caps and wings being longer.
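A rough worked illustration of how much a skew stretches things out (an illustration only, not a policy requirement, and it assumes the skew angle is measured from a line perpendicular to the roadway centerline): a spill slope falls away perpendicular to the end bent, but the roadway centerline crosses it at an angle, so the length the slope occupies along the centerline grows as

L (along centerline) = L (perpendicular to bent) / cos(skew)

For example, a 1V:2H spill slope falling 20 feet runs 40 feet measured perpendicular to the end bent; at a 45 degree skew, that same slope takes up about 40 / cos 45° ≈ 57 feet measured along the roadway centerline. This is part of why higher skews lengthen the end spans, wings and beam caps, and with them the Preliminary Cost Estimate.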
### 751.1.2.5 Structure Type Selection

As the size of the creek/river increases, here is a rough approximation of structure type selection:

• Box Culvert (single, double or triple)
• Prestressed I-Girder (type 2, 3, 4 or 6)
• Prestressed Bulb-Tee Girder (63.5" or 72.5")
• Plate Girder

Sometimes a Solid Slab, Voided Slab, Prestressed Box Girder, Prestressed Voided Slab Beam or NU Girder bridge will need to be used instead of a Prestressed I-Girder due to limited vertical clearance or freeboard. Other times a Plate Girder or Wide Flange may need to be used instead of a Prestressed I-Girder for the same reason. Higher strength concrete girders may allow you to span further with shallower girders. Higher strength concrete prestressed I-girders should also be considered as a means to save money by eliminating girder lines. Prestressed concrete double-tee girders should be avoided if possible due to the redecking concerns for future maintenance.

On grade separations with high skews, you may want to consider using a 4 span bridge with integral end bents rather than a 2 span bridge with semi-deep abutments. This should be considered if the semi-deep slab length exceeds 30'.

On Prestressed I-Girder bridges, it is usually more cost effective to shorten the end spans of a 3 span Prestressed I-Girder bridge rather than having all spans the same length. The optimum span ratio is 1.1 to 1.0. For example, a span layout of (67' - 76' - 67') is structurally more efficient than (70' - 70' - 70').

### 751.1.2.6 Box Culverts

A general rule of thumb for whether or not a culvert may be used in place of a bridge is that the most a culvert can handle is about 1,000 cfs per cell, with 3 cells being the usual maximum. This can vary if the slope of the streambed is unusually flat or steep. Another rule of thumb is that the water from a drainage area of less than 5 square miles can usually be handled by a concrete box culvert.

Most districts prefer a box culvert to a bridge because of the lower maintenance costs; however, if a stream crossing is on the borderline between a box culvert and a bridge, each option should be explored and presented to the district. The presentation to the district should include the cost estimate for each option as well as a recommendation as to which option is preferred by the Bridge Division. Keep in mind that box culverts should be avoided on streams with medium to heavy drift, as shown on the Bridge Survey.

Hydraulics for some small box culverts are handled by the district. For drainage areas of 1,000 acres (approx. 1.5 sq. miles or 4 sq. kilometers) or less, the district will do the hydraulics. For drainage areas larger than this, the Bridge Division will do the hydraulics.

If you must curve or kink your concrete box culvert, try to limit each bend to 15 degrees. The FHWA publication HDS-5 "Hydraulic Design of Highway Culverts" recommends that you space these bends a minimum of 50 feet apart. If this is not practical, you will need to account for the head loss resulting from the sharper bend.

The Final Design of box culverts (structural calculations and contract plans preparation) will be done by the Bridge Division unless it is a single cell box culvert, in which case the plans are done by the district.

When sizing the proposed concrete box culvert, use the standard cell sizes whenever possible. Consult the most recent set of Missouri Standard Plans to determine the current standard cell sizes.
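As a quick illustration of the box culvert rules of thumb above (about 1,000 cfs per cell, 3 cells as the usual maximum, and drainage areas under roughly 5 sq. miles), a screening sketch might look like the following. The discharge and drainage area values are made up, and the function is only a rough screen under those assumptions, not a substitute for the hydraulic analysis described above.

```python
import math

def screen_box_culvert(design_discharge_cfs, drainage_area_sq_mi,
                       max_cfs_per_cell=1000.0, max_cells=3):
    """Rough screening of a concrete box culvert using the rules of thumb above."""
    cells_needed = math.ceil(design_discharge_cfs / max_cfs_per_cell)
    feasible = cells_needed <= max_cells and drainage_area_sq_mi < 5.0
    return cells_needed, feasible

cells, ok = screen_box_culvert(design_discharge_cfs=2400.0, drainage_area_sq_mi=3.2)
print(f"{cells} cells needed; {'box culvert worth exploring' if ok else 'a bridge is probably needed'}")
```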
Locate the inside face of the headwalls of the culvert at or beyond the edge of the roadway clear zone. It is best to confirm this with the district because they may have gotten a design exception. If the headwalls cannot be placed beyond the clear zone, common in the situation of a very low fill, then guardrail will need to be attached to the top slab at least 10" from the headwall of the culvert.

Check the Preliminary Geotechnical Report for recommendations concerning the use of collars on longer box culverts. These are called for in the Preliminary Geotechnical Report when substantial differential settlement is expected.

Do not allow the precast option on box culvert extensions or other "oddball" situations.

### 751.1.2.7 Girder Type Selection

Once you have determined that your structure will have girders, you must decide what type of girders to use. For checking your vertical clearance or freeboard, you will need to know the maximum span length of each type of girder. See 751.22 P/S Concrete I Girders or 751.14 Steel Superstructure. You will need to make adjustments if the span ratios get over 1.25.

Notify the District Contact as soon as you know that the profile grade will need to be raised to meet the minimum vertical clearance or freeboard requirements. If the district says the profile grade can't be raised, consider using more girder lines, using higher strength concrete if Prestressed I-Girders or NU Girders are being used, or switching to a voided slab bridge. As a last resort, request a Design Exception for the substandard item.

Prestressed I-Girder types 2, 3 and 4 cost roughly the same per foot (\$100), and even the type 6 girders cost only slightly more (\$130/ft.).

If you decide to go with a Prestressed Bulb-Tee Girder, try to limit the maximum span to 125'. We have gone as far as 133', but the strands had to be at 1 1/2" centers. Also keep in mind that these types of girders are very heavy, will often require two or three cranes to set them and may be difficult to transport to the site.

If you decide to use Plate Girders, then you have to decide if the girders should be painted or not. The use of weathering steel (ASTM A709 Grades 50W and HPS70W) is preferred due to the lower maintenance costs; however, there are situations where the use of weathering steel is not advisable. Here is a brief list of times when weathering steel should NOT be used (based on FHWA Technical Advisory T5140.22):

1. If the distance from Ordinary High Water to low steel is less than 8' (or 3' between Design High Water and low steel).
2. If the bridge is located in either the St. Louis or Kansas City urban areas.
3. If the bridge is over a road with an ADT greater than 10,000.
4. If the bridge is over a road with an ADTT greater than 1,200.

If the vertical clearance is at least 25', the limitations of items 2, 3 and 4 do not apply.

If weathering steel cannot be used, the girders should be painted gray (Federal Standard #26373). If the district doesn't want gray, they can choose brown (Federal Standard #30045). If the district or the local municipality wants a color other than gray or brown, they must meet the requirements of Section 1045.5 Policy on Color of Structural Steel Paint. System H paint should be used on weathering steel, while System G should be used on all other steel plate girders.
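The weathering steel restrictions above can be summarized in a short screening sketch. The function and argument names are illustrative assumptions; item 1 always applies, while items 2 through 4 are waived at 25 ft of vertical clearance, as noted above. The final call still rests with the Structural Project Manager.

```python
def weathering_steel_ok(ohw_to_low_steel_ft, dhw_to_low_steel_ft,
                        in_stl_or_kc_urban_area, adt_of_road_under,
                        adtt_of_road_under, vertical_clearance_ft):
    """Screen a site against the weathering steel limitations listed above."""
    if ohw_to_low_steel_ft < 8.0 or dhw_to_low_steel_ft < 3.0:
        return False                      # item 1: too close to the water surface
    if vertical_clearance_ft >= 25.0:
        return True                       # items 2-4 do not apply at 25 ft or more
    if in_stl_or_kc_urban_area:
        return False                      # item 2
    if adt_of_road_under > 10000 or adtt_of_road_under > 1200:
        return False                      # items 3 and 4
    return True

# Example: rural grade separation with 23 ft of clearance (made-up values)
print(weathering_steel_ok(999.0, 999.0, False, 8500, 900, 23.0))  # True
```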
### 751.1.2.8 Longer Bridges

For bridges that are longer than normal (more than 6 spans being a general rule of thumb), other items must be considered.

If the feature you are crossing allows flexibility in bent placement, the most cost efficient span length is one that will result in the cost of one span's superstructure being equal to the cost of one bent. For example, calculate the cost of one intermediate bent, and then adjust the length of the span until the cost of the girders, slab and curb equals the cost of the bent. The use of higher strength concrete in Prestressed I-Girders can allow spans to be increased 16% to 21% as a means to eliminate intermediate bents.

Another item to consider is the placement of expansion devices. Be sure to include the costs of the expansion devices and deadman anchors (if applicable) in your Preliminary Cost Estimate.

### 751.1.2.9 Staged Construction

If the new structure you are laying out replaces an existing structure, the exact details of the staging must be coordinated with the District Contact. If the new structure is on a new alignment, there is little cause for concern. However, if the new structure is on the same or slightly different alignment, the location of the bents for the new structure should be spaced to avoid the existing substructure units if at all possible.

Also, if the new structure is on the same or slightly different alignment, the question of traffic handling will need to be addressed. If the district wants to use a temporary bypass, then you need to determine if the district can size some drainage-diversion pipes for the bypass. If the district decides pipes cannot be used, then a temporary bridge is necessary. A separate Bridge Survey/Memo/Bridge No. is required.

If the district does not want to use a temporary bypass, and they want to maintain traffic on the existing bridge while the new one is constructed, then the new structure will have to be staged. One important item to verify in this situation is that the new girders will clear the existing substructure. Another item to consider in setting up the staging is the temporary barrier curb and the required minimum horizontal distance from the edge of the deck, which depends on whether the temporary barrier curb is attached to the slab.

### 751.1.2.10 Temporary Barriers

Bridge plans must include reference to Temporary Barrier attachment if required. Coordination with Design is required.

a. No attachment (sufficient distance available to accommodate lateral deflection of barriers).
b. Tie down strap system (refer to Standard Plan 617.20B). Coordinate with Design to provide a minimum of 4 temporary barrier sections on the approach slab roadway.
c. Bolt through deck system (to be used only on existing decks, with sufficient strength, that will be removed) (refer to Standard Plan 617.20B). Coordinate with Design for required transition barrier attachments.

Lateral deflection requirements due to traffic impact on barriers must be considered if a project requires the use of Temporary Barriers. When the Temporary Barrier is used in a free standing mode immediately adjacent to the edge of a bridge deck, the distance from the edge of the bridge slab to the center of gravity of the barrier shall be 45.3 inches minimum. The 45.3 inch minimum shall be used where vertical displacement of traffic at the edge of pavement is a safety issue. For all other applications of a free standing Temporary Barrier, the design lateral deflection of the barrier shall be 24 inches minimum.
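A minimal sketch of the free standing Temporary Barrier checks just described is shown below. The parameter names are illustrative, and the sketch does not cover the attached-barrier case or the superelevation restriction discussed next.

```python
def free_standing_barrier_ok(edge_to_barrier_cg_in, at_deck_edge,
                             available_deflection_in):
    """Check the free standing Temporary Barrier minimums described above.

    edge_to_barrier_cg_in: distance from edge of slab to the barrier center of gravity.
    at_deck_edge: True when the barrier sits immediately adjacent to the deck edge.
    available_deflection_in: lateral deflection room available for other applications.
    """
    if at_deck_edge:
        return edge_to_barrier_cg_in >= 45.3   # 45.3 in. minimum at the deck edge
    return available_deflection_in >= 24.0     # 24 in. design deflection otherwise

# Example with assumed dimensions
print(free_standing_barrier_ok(edge_to_barrier_cg_in=48.0, at_deck_edge=True,
                               available_deflection_in=0.0))  # True
```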
Regardless of the deflection distance available, if the bridge deck is superelevated or has a large roadway slope, a free standing Temporary Barrier should not be used because the barrier has the potential for movement due to gravity forces on the barrier. When the Temporary Barrier is adequately attached to the bridge deck (refer to Standard Plan 617.20B), a minimum distance of 6 inches shall exist from the edge of the bridge slab to the face of the barrier.

### 751.1.2.11 Earthquake Consideration

If the structure you are laying out falls in seismic design category B, C or D, there are a few items to keep in mind. Box culverts are preferable to bridges on stream crossings because they are exempt from seismic design unless crossing a known exposed fault. Pile cap intermediate bents are preferable to open column bents on footings because footings can grow quite large due to seismic forces. Minimize the number of expansion joints in the deck because each of these locations may require earthquake restrainers, which are very costly. Make the superstructure as light as possible, which usually means using steel plate girders or wide flanges instead of prestressed concrete girders wherever possible. For shorter spans, voided and solid slab bridges perform well.

### 751.1.2.12 Replacing an Existing Structure

If you are replacing an existing structure with a new one, you may have to calculate a cost estimate for rehabilitating the old bridge. The sufficiency rating, which can be found on the SI&A form (Structure, Inventory & Appraisal), provides information on the eligibility status of the bridge for using federal bridge (HBRRP) funds:

| Status and Sufficiency Rating (SR) | Comment |
| --- | --- |
| Deficient and SR < 50 | Qualifies for full federal bridge replacement funds. |
| Deficient and 50 < SR < 80 | Qualifies for partial federal bridge replacement funds. |
| Not Deficient | Federal bridge replacement funds cannot be used; however, other federal funds could possibly be used. |

If the sufficiency rating is greater than 50 but less than 80, then a cost analysis will need to be included in the layout folder showing that it is a better value or more cost effective to replace the bridge than it is to rehab/widen it. If rehab/widen is more cost effective, the state may still elect to replace the bridge; however, federal bridge replacement funds may be capped at 80% of the rehab/widen cost estimate. See the following FHWA letter for a more detailed explanation.

The SI&A form can be requested from a Bridge Inventory Analyst in the Rating Section. Include a copy of this form in the Layout Folder.

An interstate job (job no. with an "I" in it) is an example of using federal funds to replace a bridge without worrying about the sufficiency rating of the existing bridge. The reason this is acceptable is that you are using federal "interstate" funds, not federal "bridge replacement" funds.

An example of an SI&A form can be seen below, followed by a letter from FHWA explaining guidelines for use of federal bridge replacement money.

### 751.1.2.13 Temporary Bridges

If the district will be using a bypass on stream crossings, a temporary bridge may be necessary. The district should first consider using large drainage-diversion pipes to carry the water under the bypass. If the district determines this is not practical, they should submit a Bridge Survey for a temporary bridge on the bypass. Check with the Structural Project Manager for hydraulic design frequency.
Once the number of 40' spans has been determined, the district should be contacted so they can locate the pieces necessary for the construction of the bridge. Make sure the pieces the district intends to use have the "new" beam caps that take 14" H-pile. The district should provide you with the location of where the pieces are coming from and where they should be taken by the contractor at the end of the project. If the district is unable to find the pieces, then they will need to be contractor furnished. This has a big impact on costs. See Preliminary Cost Estimate.

### 751.1.2.14 Railroads

Consult the AREA (American Railway Engineering Association) manuals located in the Development Section for more detailed information. Here are some basic points to keep in mind:

• Railroads often raise their tracks, so provide some cushion in your vertical clearance.
• Horizontal clearance shall be 12'-0" plus 1.5 inches per each degree of track curvature. (MoDOT's R.R. liaison will obtain the degree of curvature from the R.R.)
• Will the railroad want room for an extra track or maintenance roadway?
• Keep the ballast free draining.
• Drainage needs to be designed for a 100 year storm.
• Slope protection shall consist of a 1'-6" thick rock blanket (Type 2) placed on top of permanent erosion control geotextile. Some railroads may require changes to this; however, this will be determined on a case-by-case basis.
• Some railroads also now require the barrier curbs and slab overhangs to be designed to accommodate fences that may be added in the future.

If the face of the columns of an intermediate bent falls within 25 feet of the centerline of the railroad track, a collision wall is required. The elevation for the top of the collision wall is set at 6 feet above the top of rail.

The Railroad Liaison in the Multimodal Operations Division is a very good resource for answering questions at any stage of the layout. It typically takes a very long time to receive approval of a layout from the railroad. The Railroad has to approve both the Preliminary Design and the Final Plans! When making a submittal to the Railroad Liaison for approval of the Preliminary Design, include three sets of half-sized plat and profile sheets, as well as a copy of the Design Layout Sheet.

### 751.1.2.15 Historical Bridge Considerations

You also need to check with the Historical Bridge Coordinator in the Design Division when replacing a bridge. There is not a magic age for a bridge for it to become "historical"; age does not matter. All "Bridge Resources" that will be impacted by MoDOT need to be cleared through the Department of Natural Resources (DNR) Historic Preservation Program (HPP) before they can be replaced, demolished, extensively rehabilitated or deeded to a new owner (county, city, etc.). The following is a definition of "Bridge Resources":

"Bridge Resources are both public and privately owned highway, railroad and pedestrian bridges, viaducts and culverts. This does not include metal and plastic pipes, unless they are encased in an older concrete, stone or brick structure."

The following is the information on this topic supplied to the district (FYI):

"Bridge Resources on any given job or location study need to be checked out and cleared just like historic buildings (architecture) and archaeological sites. Standard size color photographs can be submitted to the Historic Bridge Coordinator directly and/or attached to the Request for Environmental Assessment (RES) or Questionnaire to Determine Need for Cultural Resources Assessment.
The Historic Bridge Coordinator will then determine and execute procedures for clearance, if required."

Bridges that are older than 50 years stand a better chance of being evaluated as eligible for the National Register of Historic Places (NRHP) in Clayton Fraser's 1996 draft Missouri Historic Bridge Inventory. This is a study that was undertaken under STURAA (Surface Transportation and Uniform Relocation Assistance Act of 1987) in order to inventory all potentially NRHP eligible historic bridges in the state. Any of these that are determined NRHP eligible by the HPP will require special mitigation (or avoidance) if they are to be affected by project activities. For this reason, it is important that all bridge resources be identified early in the process.

Usually, bridge resources do not stand in the way of right of way acquisition (A-dates) because they are generally located on roadways that the state already owns; however, there are cases in which bridge resources are privately owned and located on private property. In these rare cases, bridge resources would need to be checked out prior to our right of way acquisition approval.

### 751.1.2.16 Preliminary Cost Estimate

The Preliminary Cost Estimate should be neat, legible and dated since a copy of it is included with the Bridge Memo. It should also be rounded to the nearest thousand dollars. The accepted method of calculating the Preliminary Cost Estimate is to actually calculate some approximate quantities for the bridge and then multiply them by the unit prices supplied by the Review Section. A spreadsheet should be used to calculate these quantities.

To estimate the pounds of reinforcing steel in a structure, multiply the number of cubic yards of concrete in the structure by 125 for bridges. See the table below for box culverts.

Box Culvert Reinforcing Steel (lbs.) Estimate

| Design Fill (ft.) | Multiplier (lbs/cy of concrete) |
| --- | --- |
| 2.00 | 225 |
| 6.00 | 168 |
| 10.00 | 116 |
| 25.00 | 96 |
| 32.00 | 84 |

The Preliminary Cost Estimate should be increased for the following items (see the Cost Estimate Guide for rural preliminary design; do not compound the increases, and use your judgment):

| Item | % Increase |
| --- | --- |
| Staged Construction | 10 |
| Horizontally Curved | 5 |
| Seismic Performance Cat. B | 10 * |
| Seismic Performance Cat. C | 25 * |
| Seismic Performance Cat. D | 40 * |
| Tight Site/Limited Access | 3 |

* These factors assume estimated quantities have not been increased due to seismic forces.

Here are some guidelines for estimating the cost of the removal of existing bridges:

| Type of Bridge | Removal Cost per Square Foot |
| --- | --- |
| Simple Structures Over Streams | \$5 |
| Girder Structures Over Roads | \$7 |
| Conc. Slab Structures Over Interstates | \$25 (quick opening of lanes to traffic) |

After you calculate the Preliminary Cost Estimate, divide it by the area of the deck and compare the price per sq. ft. to the table below. The average costs vary, but they usually fall within these ranges.

| Type of Bridge | Avg. Price/Sq. Ft. of Deck |
| --- | --- |
| Prestressed I-Girder | \$65 - \$90 |
| Prestressed Bulb-Tee | \$75 - \$100 |
| Plate Girder | \$90 - \$125 |
| Voided/Solid Slab | \$90 - \$125 |
| Temp. Bridge (state furn.) | \$25 - \$45 |
| Temp. Bridge (cont. furn.) | \$115 - \$145 |
| Major Lake Crossing | \$175 - \$200 |
| Major River Crossing | \$200 - \$250 |

The cost estimate spreadsheet should be stored at T:\br-proj\current estimates after being reviewed by the Structural Project Manager.
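A minimal sketch of the arithmetic described above is shown below. The quantities, base cost and bridge type are made-up inputs; actual unit prices come from the Review Section, and the percentage increases are summed rather than compounded, per the note above.

```python
def preliminary_cost_estimate(base_cost, increase_percents, deck_area_sqft):
    """Apply the (non-compounded) percentage increases above, round to the
    nearest thousand dollars, and return the estimate and its cost per sq. ft."""
    total = base_cost * (1.0 + sum(increase_percents) / 100.0)
    total = round(total, -3)                       # nearest thousand dollars
    return total, total / deck_area_sqft

# Illustrative bridge: assumed quantities and base cost
reinf_steel_lbs = 320.0 * 125.0                    # 320 cy of concrete x 125 lbs/cy
estimate, per_sqft = preliminary_cost_estimate(
    base_cost=610_000.0,
    increase_percents=[10, 5],                     # staged construction + horizontally curved
    deck_area_sqft=8_400.0)
print(f"{reinf_steel_lbs:,.0f} lbs of reinforcing steel; estimate ${estimate:,.0f} (${per_sqft:.0f}/sq. ft. of deck)")
```

The cost per square foot printed at the end can then be compared against the ranges in the table above as a sanity check on the estimate.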
### 751.1.2.17 Bridge Memorandums (Memos)

The Bridge Memo is the document sent to the District that tells them where we plan to put the bridge, what kind of structure it will be, the Preliminary Cost Estimate and any other pertinent information. More information is required on more complicated structures. If you are not sure if the District needs to have a certain piece of information concerning the structure, include it on the Bridge Memo to be safe. Too much information is better than too little.

Here is a sample listing of what to include on the Bridge Memorandum:

1. Identify the type of structure, span lengths, skew, loading, roadway width, wing lengths and special end fill considerations. For curved structures, specify how the design span lengths are to be measured, i.e., "measured along the CL of Roadway".
2. Indicate all pertinent profile grade, alignment and superelevation transition information.
3. Identify the fill exception stations or ends of the bridge. The district uses this to coordinate the bridge with their roadway design features such as guardrail. For P/S I-Girder bridges, take into account the layout length when calculating these stations.
4. Identify slopes at end bents.
5. Indicate the elevation of any berms to be constructed at the end bents.
6. If applicable, call for old roadway fill to be removed to the natural ground line.
7. For box culverts, indicate the location of the headwalls and the type of wings to be provided (flared or straight). Also include the upper and lower flow line elevations along the CL of the box.
8. Identify any bridge related items that the district will need to address in their plans or special provisions as a "Roadway Item".
9. Include the cost estimate for construction (Preliminary Cost Estimate). Include supporting calculations with the Bridge Memo packet sent to the district.
10. Include the method of traffic handling while construction is underway. Attach sketches for staged construction when appropriate.
11. For stream crossings, show all pertinent hydrologic data used for the layout of the structure. See Section 751.5.3 Hydraulics for Hydraulic Data tables.
12. For grade separations, include all minimum vertical and horizontal clearances (final and construction). For bridges over railroads, also include the minimum lateral clearance from the centerline of track to the nearest construction falsework.
13. Quite often, the district will add items to a bridge late in the final design process because they "didn't think of them" earlier. This often causes extra work due to the necessary redesigns. Include a statement similar to the following to reduce this occurrence:
   • "No conduit, lighting, utility supports or sidewalks are to be included in the final plans for this bridge."
   • If the district has already indicated that they want special items attached to the bridge, include the specifics on the Bridge Memorandum and modify the above note.
14. The design year ADT (average daily traffic) and ADTT (average daily truck traffic). Request this from the district if it is not shown on the plat sheet. On grade separations, get the ADT and ADTT for both roads.
15. For box culverts, include the following notes:
   • "Provide grading of the channel bottom within the R/W limits as needed for culvert flowline elevations and transition of the channel bed to the culvert openings. Taper channel banks to match the ends of the culvert opening as required (Roadway Item)."
   • "Roadway width is ______ from outside of shoulder to outside of shoulder. The __:1 roadway sideslopes are to be 'rolled up and over' the culvert to provide minimum cover on the barrel (see road plans)."
   • (Use this note when the headwalls are placed to satisfy clear zone requirements and/or when the fill height on top of the culvert is very shallow, resulting in a flatter sideslope than that indicated on the roadway typical section.)
16. Also for box culverts, state if guardrail (Roadway Item) is to be provided in lieu of meeting the clear zone requirements. If there will be guardrail over the box culvert and the fill height is less than 2 feet, indicate that attachment of the guardrail to the top slab will be handled in the bridge plans, even though the guardrail itself is a roadway item.

Once the Preliminary Designer has the Bridge Memo completed, they should submit it to the Structural Project Manager for their review. The SPM will then request a Bridge Memo Conference with the Assistant State Bridge Engineer and the Structural Resource Manager. After this review and/or conference, the Preliminary Designer will then proceed with preparing the Bridge Memo package for delivery to the district.

The Bridge Memo should be signed and dated the day you send it out. You should include spaces for two signatures from the District. When you send the Bridge Memo, you only need to send one copy on white paper. Your original signature should appear on this copy. The cover letter accompanying the Bridge Memo should be addressed to the Transportation Project Manager. A cover letter is more desirable than a Letter of Transmittal.

The packet sent to the district should include a minimum of the following:

• 1 copy of the Bridge Memo
• 1 copy of the calculations used for the Preliminary Cost Estimate
• 1 copy of the Constructability Questionnaire (modify to address project issues)
• 1 copy of the Layout for Soundings

Once the signed Bridge Memo is received from the District, one copy should be sent to the State Design Engineer. Once again, it is preferable for a cover letter to be used for this instead of a Letter of Transmittal. The reason for this is that, as of December 1998, you need to include information pertaining to floodplains in this cover letter. Specifically, you need to state whether or not the bridge is in a Floodway, Zone A or other designation. You should also include a statement indicating either that a Floodplain Development Permit is required or that one is not required, and that the Bridge Division will request such a permit if necessary. The original Bridge Memo should be placed in the Layout folder upon its return from the district.

### 751.1.2.18 Soundings (Borings)

The purpose of the borings is to define subsurface conditions at the project site. This information will be used to determine the type of foundation (driven piles, footings, spread footings), a preliminary estimate of pile lengths and engineering design properties. If boulders or cobbles are indicated, driven piles will need "shoes", also known as pile point reinforcement.
If there is a possibility that drilled shafts will be used, request borings based on using drilled shafts so the appropriate lab work can be done the first time.

Borings should be requested at each bent. For bents on columns, estimate the number and location of the columns for each bent and request borings for these locations. Cores should be taken at each station, alternating locations at the field party's discretion. Each boring should be taken to rock, or 30' into material with a blow count of 20 or higher.

Completed standard forms shall be sent to the Construction and Materials Division to request soundings. This is typically done at the same time that the Bridge Memo is sent to the District. The packet sent to the district should include a minimum of the following (consultants should contact the Structural Liaison Engineer):

• 1 copy of the Request for Final Soundings of Structure
• 2 copies of the Soundings Layout
• 2 copies of the Bridge Unit Request for Soil Properties
• 2 copies of the Plat and Profile Sheets (half-sized)
• 2 copies of Sheet 1A of the Existing Bridge Plans (if applicable)

In addition, an email should be sent to the Geotechnical Engineer and Geotechnical Director in the Construction and Materials Division. This email should have the electronic files of the three standard forms attached.

### 751.1.2.19 Substructure Type

Once the signed Bridge Memo and the borings are received, the entire layout folder should be given to the Preliminary Detailer (requested by the SPM, assigned by the Structural Resource Manager). The Preliminary Detailer will copy the appropriate Microstation drawings into their own directory (do not rename files). Consultants should contact the Structural Liaison Engineer.

The Preliminary Detailer will then draw the proposed bridge on the plat and profile sheets and add the borings to the profile sheets. The bridge should also be drawn on the contracted profile for a perspective of the profile grade relative to the ground line for drainage considerations. The Preliminary Detailer will also generate a draft Design Layout Sheet and then return the layout folder to the Preliminary Designer for review.

The Preliminary Designer will then choose the substructure types for each of the bents. Pile cap bents are less expensive than column bents, but they should not be used in the following locations:

• Where drift has been identified as a problem. *
• Where the height of the unbraced piling is excessive (kl/r < 120 is a general rule of thumb; take scour into account). *

* Consider encasing the piling in concrete to allow a pile cap bent.
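For the unbraced piling rule of thumb above, a minimal slenderness check is sketched below. The effective length factor, pile properties and function names are illustrative assumptions; the radius of gyration should come from the section tables for the actual pile being considered.

```python
def pile_cap_bent_slenderness(unbraced_length_ft, radius_of_gyration_in, k=1.0):
    """Return kl/r and whether it satisfies the kl/r < 120 rule of thumb.

    unbraced_length_ft should include any anticipated scour, per the note above.
    """
    kl_over_r = k * unbraced_length_ft * 12.0 / radius_of_gyration_in
    return kl_over_r, kl_over_r < 120.0

# Example with assumed values: 25 ft unbraced length, r = 3.5 in. (illustrative)
ratio, ok = pile_cap_bent_slenderness(25.0, 3.5)
print(f"kl/r = {ratio:.0f} -> "
      f"{'pile cap bent may be considered' if ok else 'consider a column bent or encasing the piling'}")
```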
For column bents, an economic analysis should be performed to compare drilled shafts to footings with cofferdams. When evaluating the drilled shaft option, keep in mind that if casing is used (see Geotechnical information), it should extend at least as high as the elevation that would be used for the seal course design. Also keep in mind that the permanent casing should be kept at least one foot below the ground line or low water elevation. Any casing above this elevation will be temporary.

End bents are usually pile caps; however, if quality rock is abundant at or just below the bottom of beam elevation, a stub end bent on spread footings may be used. If you have any doubt about the suitability and uniformity of the rock, you can still use a pile cap end bent. Just include prebore to get a minimum of 10 feet of piling into the rock. If you have concerns about temperature movements, you can require that the prebore holes be oversized to allow for this movement.

Once the substructure type has been determined, re-examine your Preliminary Cost Estimate and notify the district if it needs to be adjusted.

### 751.1.2.20 Type of Footings

Once it has been determined that a bent will have columns on footings, the next decision is whether the footings should be pile or spread (on shale or rock). If it is a stream crossing, the bottom of footing elevation should be based on the scour calculations found in the Hydraulic Design section. The borings should then be studied to see if a minimum of 10' of piling can be placed below the footings. If this is doubtful because of the presence of shale or rock, spread footings or drilled shafts should be used. In instances where it appears that a spread footing can be used but there are pinnacles in the area, you may want to use a pile footing and just require prebore to ensure that you get the minimum embedment of 10 feet. For spread footings on grade separations, include a "not above" elevation to ensure a footing cover of at least 3 feet.

Note that two types of soundings are typically provided by a sounding investigation:

1. Auger Borings - These are the most typical type of sounding provided due to availability of equipment and low cost. This type of boring is generally stopped immediately upon encountering "hard rock". All description of the type of soil and rock encountered is determined in the field.
2. Core Samples - These are more time consuming and expensive. They are also subject to the availability of the specialized equipment and are therefore provided as sparingly as possible by the sounding crew. Once "hard rock" is encountered at a coring location, drilling is continued for an additional 10 feet to ensure a consistent layer of actual hard rock (not a boulder). If a void layer is encountered in the additional drilling, the drilling is continued until another 10 feet of consistent hard rock is encountered. In addition to field determination of soil layer type and performance of the Standard Penetration Test (SPT), samples are returned to the lab for additional tests such as determination of rock quality (% RQD).

### 751.1.2.21 Types of Piling

The two types of piling commonly used are bearing piles and friction piles. Bearing piles are H-piles and are commonly used when shale or rock will be encountered at an elevation that will limit the pile lengths to about 100'. Use shoes (pile point reinforcement) if boulders or cobbles are anticipated. Prebore if necessary to achieve minimum embedment. Here are some guidelines for minimum embedment:

| Pile Type | Location | Minimum Embedment |
| --- | --- | --- |
| Steel H Pile | All | 10' |
| CIP Pile | End Bents | 10' into natural ground |
| CIP Pile | Int. Bents | 15'-20' below scour depth * |

* 15' if the material is hard cohesive or dense granular; 20' if the material is soft cohesive or loose granular.

### 751.1.2.22 Estimating the Lengths of Friction Piles

All designers doing preliminary design should use the DRIVEN computer program to estimate the lengths for CIP piling. One way to check the validity of your DRIVEN results is to look at the piling information for existing bridges in the vicinity. Please also be on the lookout for any borings that contain "glacial till" (gravelly clay). This material is notorious for stopping CIP pile. This procedure is not a substitute for experience and engineering judgment. It is simply an attempt to have a more uniform method for estimating pile lengths.
All soil data must be obtained, as well as elevation information pertaining to intermediate and end bents. The soil borings and core information are then observed. The unit weights of the different soil layers are determined by correlating information from the core data with information found in reference tables. The resulting unit weights are written on the soil boring page. If the soil is cohesive, the undrained shear strength should be determined by dividing the results of the pocket penetrometer test by two. If there was no pocket penetrometer test performed, then a correlation between the SPT blow counts and the undrained shear strength can be determined from reference tables.

The water table must be identified or estimated and labeled on each of the borings and cores. The water table is usually distinguishable by the presence of gray colored soil. Note that more accurate data is obtained from cores than from borings because borings are performed using an auger type apparatus that mixes and remolds the soil.

### 751.1.2.23 Drilled Shafts

Drilled shafts are to be used when their cost is comparable to that of large cofferdams and footings. Other examples include when there are subsurface items to avoid (culverts, utilities, etc.) or when there are extremely high soil pressures due to slope failures.

The Foundation Investigation request should include a request for an opinion regarding the necessity of permanent casing when drilled shafts are investigated. Cost estimate savings and supporting subsurface information shall be discussed with Construction and Materials before permanent casing is omitted on a project.

The borings report for drilled shafts should supply you with the allowable end bearing and side friction as well as the elevations for which the allowable rock values are applicable. The Design Layout Sheet should include the following information:

• Top of Shaft Elevation
• Top of Permanent Casing Elevation
• Top of Sound Rock Elevation
• Bent Elevation
• Side Friction (tsf)
• End Bearing (tsf)

### 751.1.2.24 Excavation Datum

An Excavation Datum should be placed on the Layout Sheet when water is expected to be encountered during the excavation for footings. The elevation used is usually the Low Water Elevation plus 1 foot (rounded up to the next even foot) but may be made slightly higher on bigger streams and rivers. Everything above this datum is Class 1 Excavation, while everything below it is Class 2 Excavation.

### 751.1.2.25 Seal Courses

On structures over water with pile footings, a determination should be made as to whether or not to include seal courses. Seal courses are used in conjunction with cofferdams when a contractor may have trouble dewatering the footing excavation. They are usually necessary when you have sandy or gravelly soils and footing elevations below the stream bed. You will need to include a water surface elevation on the Design Layout Sheet for which the Seal Courses should be designed. Typically the elevation used is the average of the Low Water Elevation and the Design High Water Elevation; however, a site visit may be required to determine how reasonable this is. In no case should this elevation be higher than the 10 year high water elevation or the overbank elevation.

### 751.1.2.26 Cofferdams

Cofferdams should be included if the depth of the hole for the footing exceeds 8 feet and/or the bottom of footing elevation is below the Ordinary High Water (OHW) elevation. Any bent that requires a seal course will also require a cofferdam.
These are bid lump sum per bent. Consult with the Assistant State Bridge Engineer about this. All piling in pile footings should be straight (not battered) when a cofferdam is expected.

### 751.1.2.27 Webs

On structures over water where medium to heavy drift has been indicated on the Bridge Survey, consider using web walls between the columns on the column bents near or in the stream. The bottom elevation for the web is typically 1' higher than the overbank elevation.

### 751.1.2.28 Protection of Spill Slopes

The District shall be consulted for the type of slope protection. Concrete Slope Protection is a Roadway Pay Item. On stream crossings, Rock Blanket is usually placed. The type and thickness of Rock Blanket is to be determined by the District based on the flow velocity from the Design High Water. This flow velocity is determined by the person doing the hydraulic calculations and should be placed on the Bridge Memo.

When Rock Blanket is used, an elevation for the upper limit of this protection needs to be calculated. First, calculate the following two elevations:

• 100 year High Water Elevation plus 2 feet
• 500 year High Water Elevation plus 1 foot

Take the higher of these two elevations and compare it to the Low Girder Elevation minus 1.2 feet. Use the lower of these two elevations for the upper limit of your Rock Blanket. This elevation should be placed on the profile sheets.

If the toe of the abutment slope falls on the overbank, the rock blanket apron should extend from the toe toward the channel a distance equal to twice the 100 year flow depth on the overbank, but need not exceed 25 feet.
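The upper-limit calculation above can be expressed as a short sketch. The elevations in the example call are made up; the function simply follows the two-step comparison described in the text.

```python
def rock_blanket_upper_limit(hw100_ft, hw500_ft, low_girder_elev_ft):
    """Upper limit of rock blanket per the procedure above.

    Takes the higher of (100-yr HW + 2 ft) and (500-yr HW + 1 ft),
    then caps it at the low girder elevation minus 1.2 ft.
    All arguments are elevations in feet; names are illustrative.
    """
    flood_based = max(hw100_ft + 2.0, hw500_ft + 1.0)
    return min(flood_based, low_girder_elev_ft - 1.2)

# Example with made-up elevations
print(rock_blanket_upper_limit(hw100_ft=652.4, hw500_ft=654.0, low_girder_elev_ft=658.0))
# -> 655.0 (the 500-yr value governs and stays below low girder elevation - 1.2 ft)
```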
### 751.1.2.29 Design Exceptions

Anytime MoDOT standards are not followed, a Design Exception is necessary. These are usually initiated by the Transportation Project Manager in the district; however, if the item is related to the bridge, the Bridge Division will initiate the Design Exception. The Design Exception Information Form should be filled out by the Preliminary Designer and then reviewed by the Structural Project Manager (SPM). The SPM should then submit the Design Exception to the Assistant State Bridge Engineer for review. After this review, the Design Exception should be submitted to the State Bridge Engineer for his signature. This submission should include written comments from the SPM on why the Design Exception should be approved. Once the Design Exception has been signed by the State Bridge Engineer, the SPM should mail the Design Exception Information Form and cover letter to the Transportation Project Manager in the district. The TPM will sign it and then send it to the General Headquarters Design Division for final approval. The Design Division will supply copies of the signed Design Exception to both the district and the Bridge Division.

Some examples of Design Exceptions initiated by the Bridge Division are:

Hydraulic Standards

Vertical Clearance

If the vertical clearance under a new or widened bridge does not meet the standard, a Design Exception is required. If the reduction in vertical clearance is due solely to the overlay of the road under the bridge, the Bridge Division would not initiate the Design Exception.

Roadway/Shoulder Width Less Than Standard (New Structures)

On new structures, if the roadway and/or shoulder widths on the bridge match the approach roadway, the Design Exception would be initiated by the district. If the roadway and/or shoulder widths on a new bridge are less than the approach roadway, the Design Exception would be initiated by the Bridge Division.

Roadway/Shoulder Width Less Than Standard (Existing Structures)

On Non-Interstate Rehab (3R) jobs, an exception for width is required any time we don't meet the new design standards. The approach lanes being referred to in 3R Minimum Design Standards (Rural) note (8) are the new lanes. The last note should be modified to read "Bridges programmed for replacement within 5 years may be allowed to remain in place as is and should be looked at on a case by case basis."

On Interstate Rehab (4R) jobs, an exception for width is required any time we don't meet the new design standards. If an existing bridge is over 200 feet long, FHWA has said that they will routinely approve the width if both shoulders are at least 3.5' wide, but we should still request the Design Exception. FHWA will want to see any approved Design Exceptions before they approve the preliminary design.

### 751.1.2.30 Finishing Up Design Layout

Once the Preliminary Detailer has created the Design Layout Sheet and added the borings and details of the proposed bridge to the plat and profile sheets, they should be checked by the Preliminary Designer. These sheets are the end product of the Preliminary Design process and will be used to perform the structural calculations for the Final Design phase of the bridge, which results in the production of the contract plans. Here is a list of items to include:

1.) General Information
   a. Live load designation
   b. Traffic counts for the design year (ADT and ADTT)
   c. Tie station (if applicable)
   d. Beginning station
   e. Horizontal curve data
   f. Profile grade information (including offset from CL of roadway or median)
   g. Excavation datum
2.) Superstructure
   a. Type and span lengths
   b. Roadway widths and type of barrier curbs
3.) Substructure
   a. Skew(s) of all bents
   b. Types of all bents
   c. Locations of cross-bracing or webs
   d. Locations and top of wall elevations for collision walls
4.) End Bents (Abutments)
   a. Type of end fill and maximum slope. Include earth plugs for piling in rock fill.
   b. Berm elevations
   c. Type and extent of slope protection and need for geotextile material
   d. Angle of internal friction to be used for deadman anchors
5.) Foundations
   a. Type and lengths of all piling
   b. Minimum tip elevations for friction piles
   c. Location and elevation for any preboring
   d. Location of any pile point reinforcement (shoes)
   e. Types of footings, their elevations and allowable bearing (if applicable)
   f. Location of any cofferdams and/or seal courses
   g. End bearing and side bearing capacity for any drilled shafts
   h. Top of Rock Socket elevations and their minimum lengths
6.) Traffic Handling
   a. How will traffic be handled (bypass, road closure, staging, other)
   b. Include a sketch of any staging.
7.) Disposition of Existing Structure
   a. Bridge No(s). of structures slated for removal
   b. Estimate the cost of removal and indicate that this cost is included in the total.
8.) Hydraulic Information
   a. Drainage area and terrain description
   b. Design frequency
   c. Design discharge
   d. Design high water elevation
   e. Estimated backwater
   f. Overtopping frequency and discharge if less than 500 yr.
9.) Miscellaneous
   a. Locations of Bridge Approach Slabs
   b. Call out slab drain requirements if other than the standard procedure.
   c. The location of the stationing reference line (CL roadway, CL median, other)
   d. Station equations
   e. Minimum final and construction clearances (vertical and horizontal)
   f. Use of weathering steel or color of paint (steel girders)
   g. Name and phone number of District Contact
   h. Preliminary cost estimate
   i. Details of any utilities to be attached to the bridge
   j. Details of any conduit, light supports or any other unusual attachments
   k. Channel change requirements
   l. Temporary shoring requirements and whether it is a Bridge or Roadway Item
   m. Location of the Maintenance facility the contractor is to use for delivery of MoDOT retained items
   n. Directory/path for any Ceal, Geopak or Microstation files used for layout of the bridge

Once the Preliminary Detailer and Designer are in agreement on these items, the entire layout folder should be submitted to the SPM for their review. The SPM will then request a Design Layout Conference with the Assistant State Bridge Engineer and the Structural Resource Manager. Following this conference, the Preliminary Detailer and Designer will make any requested changes and complete the assembly of the Layout Folder by including the approved Design Layout Sheet and one set of half-sized plat and profile sheets.

The Layout Folder should then be delivered to the SPM along with one set of half-sized plat and profile sheets and a copy of the Design Layout Sheet. The SPM should then use a cover letter to send the one set of half-sized plat and profile sheets, as well as the copy of the Design Layout Sheet, to the Transportation Project Manager in the district. Include in this cover letter any changes in the Preliminary Cost Estimate and the current Plans Completion Date. An example can be found on the next page.

The Preliminary Detailer should provide a copy of the Design Layout Sheet to the Bridge Survey Processor. The Bridge Survey Processor should then perform the following tasks:

• Enter the Date to Final Design in the Bridge Survey Book and the Survey Rcv. Database.
• Supply a copy of the Design Layout Sheet to Development and Review.
• Copy all of the Microstation files in house to pwname:\\MoDOT\Documents\Central Office\Bridge\A_Prelim_design\district\job no. (Consultants contact the Structural Liaison Engineer.)

The SPM should then enter the following information into Bloodhound. All other fields in Bloodhound should be updated at this time by the SPM. The SPM will then send a request for a Final Designer to the Structural Resource Manager.

### 751.1.2.31 FHWA Submittal

Full FHWA oversight is required for the following projects:

• Interstate projects equal to or exceeding \$1 million in estimated costs
• Intelligent Transportation System (ITS) projects
• Major bridge projects (over 1,000 feet long or with a span over 400 feet), or projects on the National Highway System (NHS) with unique designs, operational features, or unusual geotechnical or hydraulic features

If the project is located off of the interstate and is not sufficiently complex to warrant FHWA involvement (i.e., the project is primarily for applying a latex modified wearing surface), then FHWA oversight is not required. For FHWA oversight, the layout needs to be submitted to FHWA for their approval.
The submittal should include the following:

• Cover letter
• One set of half-sized plat and profile sheets
• One copy of the Design Layout Sheet
• One copy of completed form BR105R (gray sheet)
• One copy of the Borings report, including the cover letter from Materials
• One copy of each approved Design Exception (if applicable)
• One copy of the Bridge Deck Condition Survey Summary (if applicable)
• One copy of the Bridge Rehab Checklist (if applicable)
• One copy of the Bridge Inspection Report for the existing bridge (if applicable)
• One copy of half-sized existing bridge plans (if applicable)
• One copy of anything else referred to on the Design Layout Sheet (an example would be top of pavement elevations if these are to be used in Final Design)

That is the end of the Preliminary Design phase of bridge design at MoDOT.

### 751.1.2.32 Aesthetic Enhancements

Aesthetic enhancements can include everything from form liners and different colored paints to actual brick or stonework on the bridge. The district is required to inform the Bridge Division if aesthetic enhancements will be required on a bridge. Aesthetic enhancements should be discussed by the core team during the scoping process.

Note: Galvanized slab drains are to remain unpainted unless otherwise requested by the district. The required special provision is available if the district wishes to paint the galvanized slab drains.

## 751.1.3 Overlays/Rehabs/Redecks/Widenings

### 751.1.3.1 Overview

Modifying existing bridges is quite different than laying out new bridges. These types of projects can be broken into four general categories:

1. Overlaying an existing bridge as part of a roadway overlay project.
2. Rehabilitating and/or redecking an existing bridge as a stand-alone programmed project.
3. Widening an existing bridge to meet minimum shoulder width requirements as part of a roadway overlay project.
4. Widening an existing bridge to add lanes as part of a roadway project.

### 751.1.3.2 Bridges on Resurfacing Projects

This is probably the most common type of project. The first step is to determine the limits of the project. This can be done by looking at the description and log miles of the project in the Program Book. The District Contact should also be consulted to make sure the project limits have not changed. The second step is using the Bridge Maps produced by the Maintenance Division to locate any and all bridges within the limits of the project. Once the Bridge Nos. for these structures are known, obtain a copy of the Bridge Maintenance report for each structure. These reports contain the log mile for each structure. Compare this to the log mile limits of the project. If the log mile on the report indicates the bridge is outside of the project limits, check with the District Contact again to see if the bridge is to be included in the project.

If a bridge falls within the project limits, it must be evaluated to see if it meets the current safety criteria for such items as shoulder width and curb type/height. If the job will be built with federal funds, any substandard safety item must be remedied or handled with a Design Exception. If the job will be built with 100% state funds, the bridge can be left alone (no safety improvements).

### 751.1.3.3 Curb Type and Height

Three types of curbs are acceptable in Missouri: Thrie Beams, Safety Barrier Curbs (SBC) and Curb and Parapets. When using the SBC or Curb and Parapet, a five-hole bolt pattern must be used to connect the approach railing to the bridge curb.
A) Thrie Beam

i) If the deck is less than 8.5" thick, the attachment must bolt through the deck with a plate on the bottom side of the deck. The details showing anchoring with a bent stud formed within the deck are no longer acceptable. (The deck is too thin and the deck edge breaks off during a collision.)
ii) The center of the rail shall be 21" above the top of the finished driving surface.
iii) Thrie Beams are not a preferred railing for interstates or high ADTs.

B) Safety Barrier Curb (SBC)

i) If installed at the time of the driving surface, the top of the curb should be no less than 2'-8" above the driving surface.
ii) If the wearing surface is installed after the SBC is in place, the wearing surface shall be no greater than 2", making the curb 2'-6".

C) Curb and Parapet

i) The concrete portions of the curb and parapet are the only components that are used in calculating the height of the rail. The handrails are not crashworthy.
ii) Curb and Parapets can be as short as 2'-3" from the driving surface if no raise in grade is added. Once a wearing surface (other than 1/4" epoxy) is applied, the parapet must then be heightened to 2'-8" above the finished driving surface. This is generally done by adding curb blockouts to the existing curb and parapet.
iii) The horizontal dimension of the step from the driving face of the curb to the driving face of the parapet is recommended to be between 0" and 3" and cannot exceed 6". If a curb blockout is used, this dimension cannot exceed 3".
iv) Many times the end posts are not the same width as the parapets. Check to see if the end posts are wider and if they extend towards the driving lanes or to the outside edge. It may be necessary to remove the end posts altogether to accommodate the blockouts.

[Figure: Acceptable curb blockout configurations. Part elevations showing blockouts on top of the existing curb and holes for new bridge anchor attachment to the existing curb and parapet. Notes: Existing holes (for guard rail attachment) in the existing bridge parapet shall be filled with an approved epoxy mortar. (*) Remove this area of Safety Barrier Curb for guard rail attachment.]

[Figure: Acceptable rails. 16" SBC (2'-8") new curb; 16" SBC (2'-8") existing curb with overlay (2" max. overlay); Curb & Parapet (no overlay); Curb & Parapet (change in grade); Thrie Beam (change in grade).]

### 751.1.3.4 Bridge Rehab Checklist

An example of a "Check List for Rehabilitation Work on Existing Bridges" is shown below.

## 751.1.4 Retaining Walls

### 751.1.4.1 Overview

This section is intended to help with the issues unique to retaining walls. Many sections in the "Bridges/Boxes" section of this manual will still need to be used when working on retaining walls. Retaining walls are very much like bridges in that they require many of the same items, such as:

• Bridge Survey
• Bridge Number
• Bridge Memorandum
• Soundings
• Design Layout Sheet

### 751.1.4.2 Types of Walls

There are two general types of retaining walls used by MoDOT: cast-in-place (CIP) concrete walls and mechanically stabilized earth (MSE) walls. MSE walls are the preferred type due to their lower cost; however, there are several times when MSE walls cannot be used. These include:

• When barrier curb must be attached to the top of the wall.
• When the underlying soil cannot support the weight of the fill and wall (must use CIP on piling).
• When you don't have adequate room behind the wall for the reinforcing straps.

In general, a minimum reinforcement length of 8.0 ft, regardless of wall height, has been recommended based on historical practice, primarily due to size limitations of conventional spreading and compaction equipment. Shorter minimum reinforcement lengths, on the order of 6.0 ft, but no less than 70 percent of the wall height, can be considered if smaller compaction equipment is used, facing panel alignment can be maintained, and minimum requirements for wall external stability are met.

The requirement for a uniform reinforcement length equal to 70 percent of the structure height has no theoretical justification, but it has been the basis of many successful designs to date. Parametric studies considering minimum acceptable soil strengths have shown that structure dimensions satisfying all of the requirements of Article 11.10.5 require length to height ratios varying from 0.8H for low structures, i.e., 10.0 ft, to 0.63H for high structures, i.e., 40.0 ft. Significant shortening of the reinforcement elements below the minimum recommended ratio of 0.7H may only be considered when accurate, site specific determinations of the strength of the unreinforced fill and the foundation soil have been made. Christopher et al. (1990) presents results which strongly suggest that shorter reinforcement length to height ratios, i.e., 0.5H to 0.6H, substantially increase horizontal deformations.

The reinforcement length shall be uniform throughout the entire height of the wall, unless substantiating evidence is presented to indicate that a variation in length is satisfactory. A nonuniform reinforcement length may be considered under the following circumstances:

• Lengthening of the uppermost reinforcement layers to beyond 0.7H to meet pullout requirements, or to address seismic or impact loads.
• Lengthening of the lowermost reinforcement layers beyond 0.7H to meet overall (global) stability requirements based on the results of a detailed global stability analysis.
• Shortening of the bottom reinforcement layers to less than 0.7H to minimize excavation requirements, provided the wall is bearing on rock or very competent foundation soil.

For walls on rock or very competent foundation soil, i.e., SPT > 50, the bottom reinforcements may be shortened to a minimum of 0.4H, with the upper reinforcements lengthened to compensate for external stability issues, in lieu of removing rock or competent soil for construction. Design guidelines for this case are provided in FHWA Publication No. FHWA-NHI-00-043 (Elias et al. 2001). For conditions of marginal stability, consideration must be given to ground improvement techniques to improve foundation stability, or to lengthening of the reinforcement.

MSE walls are pre-qualified and listed on the internet in two categories:

• Small block walls
• Large block walls

Small block walls are battered walls with a maximum height of 10 feet. Large block walls are vertical walls with heights that may exceed 10 feet. Combination wall systems are considered small block wall systems and shall be battered, with a maximum height of 10 feet. Any deviation from the criteria listed shall be discussed with the Structural Project Manager.

### 751.1.4.3 MSE Walls

Both the horizontal alignment and the top of wall elevations are supplied by the district in the Bridge Survey.
You do need to check the top of wall elevations to make sure the district accounted for any concrete gutters placed behind the top of the wall. These are necessary if the slope of the fill will direct water towards the top of the wall. The district should decide whether to use type A or type B gutters (Mo. Std. Plan 609.00) and where they should drain to. You will also need to set the elevations for the top of the leveling pad. The minimum embedment, which is the distance between the finished ground line and the top of the leveling pad, is based on this table (FHWA Demo. #82):

| Slope in Front of Wall | Minimum Embedment |
|------------------------|-------------------|
| Horizontal             | H/20              |
| 3H:1V                  | H/10              |
| 2H:1V                  | H/7               |

The absolute minimum embedment is 2’. When the soundings are returned, they will include a minimum embedment necessary for global stability.

Estimating the cost of MSE walls is quite simple. Use \$40 to \$50 per square foot of the area of the face of the wall.

The request for soundings for MSE walls should include requests for the angle of internal friction (Ø) for both the foundation and the retained material. Request that soundings be taken every 25 feet along the wall alignment. Soundings shall be made to rock or to a point which is 20 feet below the bottom of the wall, whichever is higher. If soundings indicate weak material exists, then the designer should investigate that sufficient right of way limits exist to address the required length for the soil reinforcement.

### 751.1.4.4 CIP Concrete Walls

Once you determine that you must use a CIP concrete wall, there is very little to do as far as the layout of the structure. Both the horizontal alignment and the top of wall elevations are supplied by the district in the Bridge Survey. You do need to check the top of wall elevations to make sure the district accounted for any concrete gutters placed behind the top of the wall. These are necessary if the slope of the fill will direct water towards the top of the wall. The district should decide whether to use type A or type B gutters (Mo. Std. Plan 609.00) and where they should drain to.

You will also need to set the elevations for the top of the footing, which should be a minimum of 2 feet below the finished ground line for walls south of Interstate 70 and 3 feet below the finished ground line for walls north of Interstate 70. In tight roadway situations where a barrier curb is to be placed on top of the wall, make sure that a stem thickness of 16" will fit. Check with the District Contact to determine if they want any coping on the exposed face of the wall.

French drains will be used to relieve water pressure behind the CIP wall as a default. If you expect to encounter springs or swampy conditions, then check with the District Contact on calling for an underdrain. If the decision is made to use an underdrain, the porous backfill and pipes are Roadway Items and this must be noted on the Bridge Memorandum and Design Layout.

The request for soundings for CIP walls should include requests for the angle of internal friction (Ø) for the retained material as well as an allowable bearing value for the foundation. Request that soundings be taken every 25 feet along the wall alignment. Soundings shall be made to rock or to a point which is 20 feet below the bottom of the wall, whichever is higher.

A guide to estimating the costs of CIP retaining walls can be found on the following page. This is relatively accurate as long as you don’t need to place the wall on piling.
If you have indications that the foundation material is very poor in quality (less than 1 ton per sq. foot allowable bearing), add some money for piling.

| Wall Height in Feet | Cost per Linear Foot |
|---------------------|----------------------|
| 1  | \$75    |
| 2  | \$125   |
| 3  | \$175   |
| 4  | \$250   |
| 5  | \$270   |
| 6  | \$300   |
| 7  | \$325   |
| 8  | \$350   |
| 9  | \$450   |
| 10 | \$575   |
| 11 | \$650   |
| 12 | \$780   |
| 13 | \$860   |
| 14 | \$1,000 |
| 15 | \$1,080 |
| 16 | \$1,160 |
| 17 | \$1,200 |
| 18 | \$1,275 |
| 19 | \$1,440 |
| 20 | \$1,525 |
| 21 | \$1,610 |
| 22 | \$1,710 |
| 23 | \$1,790 |
| 24 | \$1,875 |
| 25 | \$1,975 |
| 26 | \$2,085 |
| 27 | \$2,185 |
| 28 | \$2,295 |
| 29 | \$2,395 |
| 30 | \$2,500 |

Prices are sporadic beyond 30 feet in height. Wall height is measured from the top of the footing to the top of the wall.

### 751.1.4.5 Obstructions

Any time the retaining wall will encounter obstructions, provisions must be made on the final plans. Therefore, if you are aware of any obstructions, they should be called out on the Bridge Memorandum and Design Layout Sheet. Here are some examples of types of obstructions and how to describe them on the layout:

| Type of Obstruction   | Description |
|-----------------------|-------------|
| Lighting Foundation   | Std. 45’ Light Pole, Sta. 167+48.50, 16 ft. left |
| Sign Truss Foundation | Truss T-72, Sta. 172+41.80, 31 ft. right |
| Drop Inlet            | 2’ x 2’ Type D Drop Inlet, Sta. 163+12.45, 14 ft. left |
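For planning-level estimates, the guidance in the sections above reduces to simple arithmetic. The Python sketch below is only an illustration of that arithmetic using the unit costs and the per-linear-foot table quoted above; the function names are ours and this is not an official MoDOT tool.

```python
# Illustrative only: planning-level retaining wall cost estimates based on the
# unit costs quoted in this section. Not an official MoDOT tool.

# CIP cost per linear foot, keyed by wall height in feet (table above, 1-30 ft).
CIP_COST_PER_LF = {
    1: 75, 2: 125, 3: 175, 4: 250, 5: 270, 6: 300, 7: 325, 8: 350, 9: 450,
    10: 575, 11: 650, 12: 780, 13: 860, 14: 1000, 15: 1080, 16: 1160,
    17: 1200, 18: 1275, 19: 1440, 20: 1525, 21: 1610, 22: 1710, 23: 1790,
    24: 1875, 25: 1975, 26: 2085, 27: 2185, 28: 2295, 29: 2395, 30: 2500,
}

def mse_wall_estimate(face_area_sqft, low=40.0, high=50.0):
    """MSE walls: $40 to $50 per square foot of the wall face."""
    return low * face_area_sqft, high * face_area_sqft

def cip_wall_estimate(height_ft, length_ft):
    """CIP walls: cost per linear foot from the table above (heights 1-30 ft).
    Wall height is measured from the top of the footing to the top of the wall."""
    if height_ft not in CIP_COST_PER_LF:
        raise ValueError("Table covers 1-30 ft; prices are sporadic beyond 30 ft.")
    return CIP_COST_PER_LF[height_ft] * length_ft

# Example: a 12 ft tall, 200 ft long wall.
lo, hi = mse_wall_estimate(12 * 200)
print(f"MSE estimate: ${lo:,.0f} to ${hi:,.0f}")
print(f"CIP estimate: ${cip_wall_estimate(12, 200):,.0f}")
```

Remember that these figures exclude piling; add money for piling whenever the allowable bearing is poor, as noted above.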
# Entrainment of Voluntary Movement to Undetected Auditory Regularities

## Abstract

In physics “entrainment” refers to the synchronization of two coupled oscillators with similar fundamental frequencies. In behavioral science, entrainment refers to the tendency of humans to synchronize their movements with rhythmic stimuli. Here, we asked whether human subjects performing a tapping task would entrain their tapping to an undetected auditory rhythm surreptitiously introduced in the guise of ambient background noise in the room. Subjects performed two different tasks, one in which they tapped their finger at a steady rate of their own choosing and one in which they performed a single abrupt finger tap on each trial after a delay of their own choosing. In both cases we found that subjects tended to tap in phase with the inducing modulation, with some variability in the preferred phase across subjects, consistent with prior research. In the repetitive tapping task, if the frequency of the inducing stimulus was far from the subject’s own self-paced frequency, then entrainment was abolished, consistent with the properties of entrainment in physics. Thus, undetected ambient noise can influence self-generated movements. This suggests that uncued decisions to act are never completely endogenous, but are subject to subtle unnoticed influences from the sensory environment.

## Introduction

Rarely is our sensory context devoid of sound, and subtle sounds in the background might influence our behavior in ways that we are not aware of. For example, imagine that you are humming a tune while in front of the stove cooking, and only later do you realize that you happened to be humming in the key suggested by the ventilation fan. Or, while writing a letter you start tapping your fingers on the table thinking of what to write next, only later realizing that you were tapping in sync with a cricket outside in the garden. We sought to experimentally examine the effect of undetected background auditory noise on self-paced movements (repetitive movement where the pace is self-chosen) and self-initiated movement (single movement initiated at a self-chosen time, without any sensory cue). We also assessed whether this effect satisfies the definition of entrainment found in physics by happening only when the two frequencies are close to each other1, and whether it operates in the same way for periodic movements and abrupt one-time movements. In experiments where an ongoing stimulus is modulated or perturbed, it is important to distinguish between an above-threshold stimulus and an above-threshold perturbation (or modulation): the perturbation may be below threshold even if the stimulus itself is above threshold. Or both can be below threshold. Previous research on sensorimotor synchronization (SMS) using tapping tasks has shown that subjects make automatic corrections to their finger tapping in response to small perturbations in the timing of an above-threshold periodic auditory stimulus2. This has been shown even for perturbations that were well below the detection threshold3,4 (i.e., the phasic perturbations were below threshold, while the stimuli themselves were well above threshold).
More generally, a number of prior studies have provided evidence that behavioral responses to distractor sequences (e.g. phase correction) can be automatic and involuntary5,6,7,8. Prior studies have also shown that human movements may become unintentionally synchronized with a plainly-audible, but irrelevant environmental rhythm1,9,10. A more recent study11 found that runners spontaneously adapt their cadence to the tempo of the music they are listening to while running. However, in all of the above-mentioned studies the stimuli to which the subjects entrained their movements were plainly audible or visible and were an overt part of the experiment. Here we wanted to ask whether subjects would spontaneously and involuntarily entrain to a subtle periodic signal even when that signal went completely unnoticed. Entrainment under these conditions would imply that auditory processing impacts motor actions in the absence of awareness, which would contribute to the growing evidence of cross-modal unconscious processing12,13. In addition, we wanted to examine the extent to which such unconscious signals might influence different behaviors, by testing their impact on a simple repetitive tapping task, and on a task requiring participants to perform a single abrupt spontaneous tap on each trial after a random self-chosen delay. In experiment 1, we had subjects perform a self-paced repetitive tapping task inside of a sound-proof testing chamber. We communicated with the subjects during the experiment via an intercom through which we surreptitiously introduced continuous faint background noise (< ~20 dB; sound file sound_sample.wav in the supplementary online material). Although the noise was present continuously throughout the experiment, we could transition between periodic and non-periodic modulation of the noise (Fig. 1). Unlike in previous experiments, even the noise itself often went completely unnoticed (we confirmed this using a structured questionnaire delivered after the experiment, see Methods). In this experiment subjects exhibited statistically significant entrainment to the sine-wave modulated noise, but only when the frequency of the modulation was close to the subject’s own preferred tapping frequency, consistent with the physics of entrainment, and with prior studies1. We also had subjects perform a spontaneous self-initiated movement task wherein they performed a single abrupt finger-tap spontaneously at a random moment. In this case we found that single one-time finger taps also had a significantly consistent phase relationship with the noise modulation, but unlike with repetitive tapping, this effect could also be present when the modulation frequency was not close to the subject’s preferred tapping frequency. For both kinds of task we used a structured post-experiment questionnaire to evaluate whether or not each subject had been aware of the noise and found that the effects remained significant even when the analyses were restricted to subjects who had not been aware of the noise.

## Materials and Methods

### Human subjects

A total of 60 subjects participated in the study: 30 in experiment 1 (15 males, mean age = 22.0 years, SD = 3.6 years) and 30 in experiment 2 (11 males, mean age = 25.3 years, SD = 4.1 years).
Data from three of the subjects in experiment 1 could not be used (N = 27 in experiment 1): one subject could not perform the task properly; one subject had a mean U-value that was an extreme outlier relative to the other subjects (absolute distance from the median was more than 2.5 times the inter-quartile range); and the data from a third subject could not be saved due to a technical problem. All subjects were right-handed, had normal hearing, and no psychiatric or neurological history. They were naive to the purpose of the study and gave informed consent, in accordance with institutional guidelines and the Declaration of Helsinki. The experimental protocol was approved by the commission cantonale d’éthique de la recherche sur l’être humain of the canton de Vaud, and all methods were performed in accordance with the relevant guidelines and regulations. Experiment 1 was carried out at the École Polytechnique Fédérale de Lausanne (EPFL), Department of Life Sciences, Lausanne, Switzerland. Experiment 2 was carried out at the Campus Biotech in Geneva, Switzerland. From the point of view of the subjects, there were no stimuli in this experiment. However, we surreptitiously introduced a small amount of noise through a pair of computer speakers placed on either side of the base of the computer display on the table in front of the subject (Fig. 1). Also on the table in front of the subject was a small microphone. The speakers and microphone were placed in the sound-proof testing room ostensibly to serve as an intercom through which we communicated with the subject from outside of the testing room without having to open the door. However, the real purpose of the speaker was to deliver auditory noise stimulation. The amplitude of the noise was adjusted so as to be barely audible (< ~20 dB) when the door to the testing room was closed and the subject was completely still, and even then hardly noticeable and easily ignored (sound file sound_sample.wav in the supplementary online material). To create the noise stimuli we randomly generated spikes, each with random sign (+1 or −1), and then softened the result by imposing a $$1/{f}^{\beta }$$ power spectrum with β = 1.5 (i.e., we made the noise more pink). During the experiment we could modulate the temporal density of these spikes with a sinusoidal envelope. The magnitude of the modulation was set so as to be just barely detectable (spike probability oscillating between 0.05 and 0.1 at a sampling rate of 22050 Hz) (sound file sound_sample.wav in the supplementary online material). For the control condition, the modulation envelope was a random series of pulses with the intervals in between pulses drawn from a Poisson distribution (lambda = 1). In experiment 1 (N = 27), subjects performed a simple self-paced repetitive finger-tapping task (“rhythmic task”). A subset of these subjects (N = 10) also performed (in separate blocks) a task in which they had to initiate a single abrupt tap on each trial at a spontaneously chosen moment (“single-tap task”). Tapping blocks were always done in pairs, the first of which was a rhythmic task lasting 30 seconds and the second of which was either a rhythmic task or a single-tap task lasting 60 seconds. For the subject these were just two consecutive blocks, but in fact we used the data from the first block (the “estimation” block), with steady unmodulated noise in the background, to estimate the subject’s preferred tapping rate. 
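As a concrete illustration of the noise construction described above, here is a minimal NumPy sketch. It follows the stated parameters (random-sign spikes, a 1/f^1.5 spectral shaping, spike probability oscillating between 0.05 and 0.1, 22050 Hz sampling); everything else, including the function name and the exact order of the shaping step, is our own simplification and not the authors' code.

```python
# Illustrative sketch of the modulated-noise stimulus; not the authors' code.
import numpy as np

FS = 22050  # sampling rate (Hz)

def modulated_spike_noise(duration_s, mod_freq_hz=None, beta=1.5, seed=0):
    """Random-sign spikes whose density is either constant or sinusoidally
    modulated, then softened toward a 1/f**beta ('pinker') spectrum."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * FS)
    t = np.arange(n) / FS

    # Spike probability: constant, or oscillating between 0.05 and 0.1.
    if mod_freq_hz is None:
        p = np.full(n, 0.075)
    else:
        p = 0.075 + 0.025 * np.sin(2 * np.pi * mod_freq_hz * t)

    spikes = (rng.random(n) < p) * rng.choice([-1.0, 1.0], size=n)

    # Impose an approximate 1/f**beta power spectrum on the spike train.
    spec = np.fft.rfft(spikes)
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    shaping = np.ones_like(freqs)
    shaping[1:] = freqs[1:] ** (-beta / 2.0)   # amplitude ~ f^(-beta/2)
    noise = np.fft.irfft(spec * shaping, n)

    return noise / np.max(np.abs(noise))       # normalize; played back very faintly

# e.g. 60 s of noise modulated at a subject's preferred 1.4 Hz tapping rate
stim = modulated_spike_noise(60, mod_freq_hz=1.4)
```

The mod_freq_hz argument is where the per-subject tapping rate estimated in the first, unmodulated block would enter.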
We then used that to determine the modulation frequency used in the second block, which could be equal to the subject’s mean tapping rate (“near” condition), 1.5 times the subject’s mean tapping rate (“far” condition), or the modulation could be aperiodic (“baseline” condition). A new group of participants was recruited for Experiment 2 (N = 30), which was focused on the single-tap task. In this experiment blocks of trials were also done in pairs, the first of which was the 30-second estimation block, and the second of which was always the single-tap task for 60 seconds. Each trial began with the appearance of a written instruction telling the subject to tap their finger once whenever s/he wanted to. This remained on the screen for 2 seconds after which it disappeared cueing the subject that s/he can now perform the finger tap at any time. After performing the single finger tap, there was a jittered ITI of approximately 1 second and then the instruction reappeared cueing the start of the next trial. Subjects tapped on a custom-made sensor (Salomon et al., 2016) consisting of a wooden tray (25 × 25 cm) with a small copper plate (width: 2.5 cm, length: 9 cm) connected to a wristband with a 2.5 × 2 cm ground electrode. The sensor was connected with an Ethernet cable to an Arduino Uno microprocessor, which was connected to the computer by a USB cable. For each tap, we recorded both the time of the finger lift and the time of the finger drop by measuring electrical conductivity between the finger and the copper plate. Subjects were instructed that their index finger should remain in contact with the copper plate by default (i.e. in the “down” position). Each finger tap was then composed of a “lift” followed immediately by a “drop” of the finger back to the default position. At the end of the experiment, all subjects completed a structured questionnaire, which served to estimate awareness of the auditory noise stimulation. On the questionnaire we asked (1) whether the conditions in which they performed the experiment were quiet enough to perform the tasks, (2) whether they noticed any noise during the experiment, and if yes (3) whether they noticed any rhythmic component in the noise. Finally, if they noticed a rhythm in the noise, we asked (4) whether or not they thought it influenced their tapping behavior. The questions were revealed one at a time in this order, and the subject was not allowed to see the next question until s/he had finished responding to the current one. ### Data analyses and statistics The onsets of finger drops (i.e., the time at which the finger touched the plate after an initial lift) and the continuous time-varying phase of the noise modulation were extracted to compute the phase of the noise modulation at the time of each finger tap across blocks and participants. Unintentional synchronization of movement to external stimuli is known to be somewhat unstable, with movements being attracted alternatively to either 0° or 180° rather than simply being phase locked1,11, and tending to drift in and out of phase14,15. For auditory stimuli in particular phase locking can apparently happen at a variety of different angles, not just 0° or 180°, and the angle tends to be different for different subjects16. Therefore we chose to use a statistical test that is sensitive to non-uniformity in the distribution of phases, rather than being sensitive to the concentration of phase angles in a single region of phase space. 
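To make the phase analysis concrete, the sketch below computes the modulation phase at each tap time, the Rao's spacing statistic U used as the non-uniformity index in the next paragraph, and a surrogate-tap version of the Monte Carlo control described later in this section. This is our own Python illustration; as noted below, the original analyses were carried out in R, and the choice of a normal interval distribution for the surrogates is an assumption.

```python
# Illustrative sketch (not the authors' analysis code, which was written in R).
import numpy as np

def phase_at(times, mod_freq_hz):
    """Phase (degrees, 0-360) of a sinusoidal modulation at the given times,
    assuming phase zero at t = 0."""
    return (360.0 * mod_freq_hz * np.asarray(times)) % 360.0

def rao_spacing_U(angles_deg):
    """Rao's spacing statistic: U = 0.5 * sum(|T_i - 360/n|), where the T_i
    are the spacings between sorted angles, including the wrap-around."""
    a = np.sort(np.asarray(angles_deg) % 360.0)
    n = a.size
    spacings = np.diff(np.concatenate([a, [a[0] + 360.0]]))
    return 0.5 * np.sum(np.abs(spacings - 360.0 / n))

def surrogate_U_values(tap_times, mod_freq_hz, n_sim=1000, seed=0):
    """Monte Carlo control: surrogate tap sequences with the same mean and SD
    of inter-tap intervals as the observed block, but no entrainment."""
    rng = np.random.default_rng(seed)
    tap_times = np.asarray(tap_times)
    itis = np.diff(tap_times)
    out = np.empty(n_sim)
    for i in range(n_sim):
        fake_itis = np.abs(rng.normal(itis.mean(), itis.std(), size=itis.size))
        fake_taps = tap_times[0] + np.cumsum(fake_itis)
        out[i] = rao_spacing_U(phase_at(fake_taps, mod_freq_hz))
    return out

# Observed U for one block, and its position in the surrogate distribution:
# U_obs = rao_spacing_U(phase_at(taps, f_mod))
# p_mc  = np.mean(surrogate_U_values(taps, f_mod) >= U_obs)
```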
Each subject’s phase values were submitted to Rao’s spacing test of uniformity (Rao, 1976), providing U-values as a non-uniformity index for the distribution of phases. U-values greater or smaller than the median by more than 2.5 times the interquartile range were excluded, and the remaining U-values were averaged across blocks. The significance of a main effect of condition was assessed using linear mixed effects models, with U-value ranks averaged across blocks as the dependent variable17, condition as a fixed effect, and intercepts for subjects as random effects. Note that condition could not be treated as a random effect, otherwise the number of observations would be smaller than the number of random effects. In all models, p-values were obtained by likelihood ratio tests, and degrees of freedom were estimated using the Satterthwaite approximation. Statistical significance of the difference in U-values between conditions was assessed using permutation tests: a null distribution of the mean difference in U-values at the group-level was created by shuffling the condition labels over 5000 iterations. In line with 2-sided tests, p-values were estimated by counting the proportion of shuffled samples exceeding the observed average difference in U-values in the near (or far) versus baseline conditions. All analyses were performed with R (2016) using the lme4 and lmerTest packages (Bates, Maechler, Bolker, & Walker, 2014; Kuznetsova, Brockhoff, & Christensen, 2014). ### Monte Carlo Simulation If two oscillators have the same frequency, then they will have a consistent phase relationship over time – no entrainment necessary1,10. Even if their frequencies are not perfectly constant over time, as long as the variance of the interval distribution is very small relative to the window of time over which the oscillators are sampled, then there will be a relatively consistent phase relationship. Thus if a subject were to tap at a constant rate throughout the training block, and then maintain exactly the same rate of tapping throughout the subsequent “near” block, then there would be a consistent phase relationship without necessarily implying entrainment. In order to control for this possibility we performed a Monte Carlo simulation in which, for each “near” block of each subject, we drew random intervals from a distribution with the same mean and variance as this particular block. We used the mean and variance to generate 1000 surrogate sequences of intervals and then computed the 1000 corresponding U-values on the phases of the inducing stimulus at the time of each surrogate tap. If the subject’s mean frequency was close enough to that of the inducing stimulus and the variance of the subject’s intervals was small enough, then the surrogate taps should yield U-values close to what was actually observed. ## Results On average, participants tapped at a frequency of 1.44 Hz (+/− 0.12 Hz). A linear mixed model on ranked U-values revealed a main effect of condition (F(2,77) = 11.52, p < 0.001), showing that U-values were smaller in the baseline condition (mean U-value = 128.77 + /− 8.78) compared to the near (mean U-value = 141.85 +/− 15.83; permutation test: p < 0.001), but not the far condition (mean U-value = 129.02 + /− 7.81; permutation test: p = 0.47) (Fig. 2). In the near condition 8 of the 27 subjects had U-values that were individually significant at p < 0.05 (p < 0.0001, binomial test). 
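As an aside on the permutation procedure described in the Methods above, the label-shuffling test amounts to the following sketch (our simplification, with labels swapped within subjects; the original analysis was done in R):

```python
# Illustrative sketch of the label-shuffling permutation test; not the
# authors' code.
import numpy as np

def permutation_p(u_cond, u_baseline, n_perm=5000, seed=0):
    """Two-sided p-value for the mean difference in per-subject U-values
    between a condition and baseline, by shuffling condition labels."""
    rng = np.random.default_rng(seed)
    u_cond, u_baseline = np.asarray(u_cond), np.asarray(u_baseline)
    observed = np.mean(u_cond - u_baseline)
    count = 0
    for _ in range(n_perm):
        flip = rng.random(u_cond.size) < 0.5            # swap labels per subject
        diff = np.where(flip, u_baseline - u_cond, u_cond - u_baseline)
        if abs(diff.mean()) >= abs(observed):
            count += 1
    return count / n_perm
```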
The average phase values across subjects did not depart from uniformity (Rao’s uniformity test, p > 0.1), indicating that subjects were phase locked at different angles. However, as mentioned in the Methods section, unintentional synchronization of movement to auditory stimuli is known to be somewhat unstable, with movements being attracted to different phase angles depending on the subject, rather than simply being phase locked16. To test whether entrainment in the near condition changed over time during the course of each trial, we computed U-values within 10 contiguous mini-blocks each containing 10% of the total number of taps in a trial. A linear mixed model on ranked U-values with intercepts for subjects as random effects, and a per-subject random slope for the effect of condition and mini-blocks confirmed the main effect of condition reported above (F(2,50.36) = 20.19, p < 0.001), but did not reveal any effect of mini-block (F(1,25.40) = 0.08, p = 0.79) nor an interaction between mini-block and condition (F(2,690.90) = 1.74, p = 0.18) (Fig. 2). On the structured questionnaire following the experiment, no subject reported perceiving noises or noise modulations, suggesting that entrainment in the near condition occurred despite unawareness. Taken together, these results suggest that entrainment does occur for self-paced tapping in the absence of awareness. Consistent with what is found in physics and in prior studies1, such entrainment was observed only when the entraining frequency was close to the subject’s preferred pace. In order to control for the possibility that the apparent entrainment may have resulted simply because subjects maintained a very precise and constant rate of tapping between the “training” and “near” blocks, we performed a Monte Carlo simulation (see Methods). If the subject’s mean frequency was close enough to that of the inducing stimulus and the variance of the subject’s intervals was small enough, then the surrogate data produced by the simulation should have yielded a significant value for Rao’s U. The difference in U-values between the near and baseline condition over 1000 Monte Carlo simulations was 7.31 (SD = 2.31), significantly different from 0 (Monte Carlo p-value < 0.001). This suggests that a part of the observed effect could have stemmed from a consistent tapping frequency between the training and the near condition. However, the average difference in U-values between the near and baseline conditions observed in all three experiments was 13.02 (SD = 18.42), greater than 99.1% of the Monte Carlo differences. This suggests that the observed effect is not completely accounted for by a consistent frequency of tapping, and that actual entrainment did occur in the near condition. A subset of subjects also had to perform a spontaneous self-initiated movement task, wherein they performed a single spontaneous tap on each trial after a random delay of their own choosing. A linear mixed model on the ranked U-values in these trials revealed a trend for a main effect of condition (F(2,24) = 3.22, p = 0.06). We further explored this main effect, and found that the U-values were significantly smaller in the baseline condition (mean U-value = 121.12 +/− 5.21) compared to both the near (mean U-value = 130.96 +/− 5.90; permutation test: p = 0.02) and to the far condition (mean U-value = 130.00 +/− 7.23; permutation test: p = 0.004).
As opposed to what we found previously, these results suggest that the noise modulation in both the “near” and “far” conditions had an effect on the finger tap initiation. A replication of the above preliminary result was obtained in Experiment 2, which included 30 participants. The linear mixed model on the ranked U-values in these trials revealed a significant main effect of condition (F(2,60) = 6.41, p = 0.003), with smaller U-values in the baseline condition (mean U-value = 123.98 + /− 3.77) compared to both the near (mean U-value = 132.22 + /− 3.71; permutation test: p = 0.002) and to the far condition (mean U-value = 131.25 + /− 3.27; permutation test: p = 0.008). Compared to the previous experiments, a greater number of participants noticed a sound while they were performing the task (n = 18), and a subset of them even noticed a rhythmic pattern in it (n = 12). This may be due to the fact that Experiment 2 was performed in a different lab, with differences in terms of acoustics that we could not account for. When excluding these participants, keeping only those who reported hearing no modulation, the difference in U-values between the baseline (mean U-value = 125.61 + /− 4.90) and the near condition remained significant (mean U-value = 133.79 + /− 5.12; permutation test: p = 0.008), and a trend was found for the difference between the baseline and the far condition (mean U-value = 130.22 + /− 4.25; permutation test: p = 0.08) (Fig. 3). While a significant number of subjects had individually significant U-values in experiment 1, the same was not true of experiment 2 (3 subjects out of 30 individually significant at p < 0.05), where the effect was only detectable in the aggregate across subjects. This may have been because there were far fewer taps per subject in experiment 2, due to the nature of the single-tap task. ## Discussion At any given moment if you remain very still and listen carefully, you will likely hear all manner of background sounds – birds chirping outside, the hum of your desktop computer cooling fan, the ticking of a clock on the wall, the sound of a refrigerator, etc. These kinds of background sounds may typically go unnoticed, especially if they are relatively faint. We asked whether or not undetected rhythmic regularities in auditory background noise could influence self-paced or self-initiated movement. In order to test this, we introduced faint background noise into an intercom system and then modulated the noise in different ways in order to measure its effect on behavior in two different types of tapping tasks, under three different conditions. In one task subjects simply tapped rhythmically at their own preferred pace (task 1), and in the other task (task 2) subjects performed a single spontaneous tap after waiting an unspecified amount of time. For each task the noise could be modulated at the subject’s own preferred tapping rate (“near” condition), at 1.5 times the subject’s preferred tapping rate (“far” condition), or the modulation could be random and aperiodic (“baseline” condition). For task 1, subjects’ tapping was entrained in the “near” condition but was not entrained in the “far” or “baseline” conditions. For task 2, subjects tended to perform their single tap (one per trial) in synchrony with the modulation in both the “near” and “far” conditions. Together, these results indicate that unconscious auditory processing can impact rhythmic motor actions, and can even modulate the onset of spontaneous actions unbeknownst to the participants. 
This attests to the complexity of sensorimotor processing occurring in the absence of awareness (Deroy et al., 2016; Faivre et al., 2017). Prior studies of synchronized finger tapping have demonstrated effects of undetected timing variability in otherwise supra-threshold rhythmic auditory stimuli7,18,19,20. Here we extended these findings by showing that undetected auditory regularities (rhythmic modulation of faint background noise) could influence self-paced finger tapping, even when both the inducing stimulus and its effect on behavior went completely undetected. Indeed, in experiment 1 subjects denied perceiving any sounds or sound modulations, and in experiment 2 the results remained significant even when restricted to subjects who denied perceiving any sounds or modulations. This suggests that effects in task 1 and task 2 occurred despite the absence of auditory awareness. Here, we probed auditory awareness using subjective measures at the end of experiment, since asking subjects to perform an objective task on the sound during the experiment would uncover its presence. As the sound modulations became faintly audible when attending to them, the use of objective measures was incompatible with our experimental design. Sub-threshold perturbations of supra-threshold and plainly-heard distractor sequences are known to engage corrective mechanisms automatically, without subjects intending to make corrections or even noticing that any correction had been made20,21. Our data suggest that such corrective mechanisms may still be engaged when the distractor sequence (in our case the background noise) goes undetected. Entrainment not only applies to bodily movement, but to brain activity as well, and prior research has shown that entrainment of low-frequency brain oscillations might mediate the effects of expectation on reaction time22. This suggests a mechanism by which the auditory regularities in the present study might influence the timing of self-paced and spontaneous tapping. Recent work on uncued “self-initiated” movements23,24 argues that, in the absence of an imperative stimulus, the precise onset time of self-initiated movements may at least in part be determined by ongoing endogenous fluctuations of brain activity. The current work connects to those prior findings by showing that the precise onset time of uncued movements may also be influenced by exogenous fluctuations in the sensorium (a slight shift in the phase of a finger tap is equivalent to a change in the precise onset time of that movement). Future research should look at the relationship between the phase of slow EEG fluctuations and tapping times in our task. Counter to intuition, it has previously been shown that irrelevant stimuli may have a stronger effect on behavior when they are just below threshold compared to just above5,25. Presumably this is because cognitive control mechanisms are only engaged when task irrelevant stimuli are processed consciously, as if cognitive control mechanisms are recruited in order to suppress distracting stimuli. Future research could test this hypothesis by examining the effect of periodic stimulation at just above and just below the perceptual threshold, and also examine how exogenous and endogenous factors interact to fix the precise onset time of movement when there is no imperative stimulus. ## References 1. Lopresti-Goodman, S. M., Richardson, M. J., Silva, P. L. & Schmidt, R. C. Period basin of entrainment for unintentional visual coordination. 
Journal of motor behavior 40, https://doi.org/10.3200/jmbr.40.1.3-10 (2008). 2. Michon, J. A. Timing in temporal tracking PhD thesis, Leiden University, (1967). 3. Repp, B. H. Compensation for subliminal timing perturbations in perceptual-motor synchronization. Psychol Res 63, 106–128 (2000). 4. Repp, B. H. Phase correction, phase resetting, and phase shifts after subliminal timing perturbations in sensorimotor synchronization. J Exp Psychol: Human Percept Perform 27, 600–621 (2001). 5. Repp, B. H. Automaticity and voluntary control of phase correction following event onset shifts in sensorimotor synchronization. J Exp Psychol Hum Percept Perform 28, 410–430 (2002). 6. Repp, B. H. Phase correction in sensorimotor synchronization: nonlinearities in voluntary and involuntary responses to perturbations. Hum Mov Sci 21, 1–37 (2002). 7. Repp, B. H. Does an auditory distractor sequence affect self-paced tapping? Acta Psychologica 121, 81–107, https://doi.org/10.1016/j.actpsy.2005.06.006 (2006). 8. Schmidt, R. C. & O’Brien, B. Evaluating the Dynamics of Unintended Interpersonal Coordination. Ecological Psychology 9, 189–206, https://doi.org/10.1207/s15326969eco0903_2 (1997). 9. Repp, B. H. & Penel, A. Rhythmic movement is attracted more strongly to auditory than to visual rhythms. Psychol Res 68, 252–270 (2004). 10. Schmidt, R. C., Richardson, M. J. & Arsenault, C. A. & Galantucci, B. Visual Tracking and Entrainment to an Environmental Rhythm. J Exp Psychol: Human Percept Perform 33, 860–870 (2007). 11. Van Dyck, E. et al. Spontaneous Entrainment of Running Cadence to Music Tempo. Sports Medicine - Open 1, 15, https://doi.org/10.1186/s40798-015-0025-9 (2015). 12. Deroy, O. et al. The Complex Interplay Between Multisensory Integration and Perceptual Awareness. Multisensory research 29, 585–606, https://doi.org/10.1163/22134808-00002529 (2016). 13. Faivre, N. et al. Self-Grounded Vision: Hand Ownership Modulates Visual Location through Cortical beta and gamma Oscillations. J Neurosci 37, 11–22, https://doi.org/10.1523/JNEUROSCI.0563-16.2016 (2017). 14. Kelso, J. A. S. Dynamic patterns: the self-organization of brain and behavior. (MIT Press, 1995). 15. von Holst, E. In The collected papers of Erich von Holst: Vol. 1. The behavioral physiology of animal and man (ed R. Martin) (University of Miami Press, 1973). 16. Hattori, Y., Tomonaga, M. & Matsuzawa, T. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans. PLOS ONE 10, e0130682, https://doi.org/10.1371/journal.pone.0130682 (2015). 17. Conover, W. J. & Iman, R. L. Rank transformations as a bridge between parametric and nonparametric statistics. American Statistician 35, 124–129 (1981). 18. Kagerer, F. A., Viswanathan, P., Contreras-Vidal, J. L. & Whitall, J. Auditory–motor integration of subliminal phase shifts in tapping: better than auditory discrimination would predict. Experimental Brain Research 232, 1207–1218, https://doi.org/10.1007/s00221-014-3837-9 (2014). 19. Thaut, M. H. & Kenyon, G. P. Rapid motor adaptations to subliminal frequency shifts during syncopated rhythmic sensorimotor synchronization. Human Movement Science 22, 321–338, https://doi.org/10.1016/S0167-9457(03)00048-4 (2003). 20. Thaut, M. H., Tian, B. & Azimi-Sadjadi, M. R. Rhythmic finger tapping to cosine-wave modulated metronome sequences: Evidence of subliminal entrainment. Human Movement Science 17, 839–863, https://doi.org/10.1016/S0167-9457(98)00031-1 (1998). 21. Repp, B. H. 
Sensorimotor synchronization: A review of the tapping literature. Psychon Bull Rev 12, 969–992 (2005). 22. Stefanics, G. et al. Phase Entrainment of Human Delta Oscillations Can Mediate the Effects of Expectation on Reaction Speed. J. Neurosci. 30, 13578–13585 (2010). 23. Schurger, A., Sitt, J. & Dehaene, S. An accumulator model for spontaneous neural activity prior to self-initiated movement. PNAS 109, E2904–E2913 (2012). 24. Murakami, M., Vicente, M. I., Costa, G. M. & Mainen, Z. F. Neural antecedents of self-initiated actions in secondary motor cortex. Nat Neurosci 17, 1574–1582, https://doi.org/10.1038/nn.3826 (2014). 25. Tsushima, Y., Sasaki, Y. & Watanabe, T. Greater Disruption Due to Failure of Inhibitory Control on an Ambiguous Distractor. Science 314, 1786–1788, https://doi.org/10.1126/science.1133197 (2006).

## Acknowledgements

NF is an EPFL Fellow co-funded by a Marie Skłodowska-Curie fellowship. Special thanks to Nicolas Oeggerli for his assistance in setting up and running the experiments. AS and BT were supported by ERC Starting Grant #640626 (ACTINIT).

## Author information

### Contributions

A.S. conceived the research. A.S., N.F., L.C., and O.B. designed and developed the experiment. L.C., B.T., N.F., and A.S. carried out the experiments. N.F. and A.S. analyzed the data. N.F., A.S., and O.B. wrote the manuscript.

### Corresponding author

Correspondence to Aaron Schurger.

## Ethics declarations

### Competing Interests

The authors declare that they have no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Schurger, A., Faivre, N., Cammoun, L. et al. Entrainment of Voluntary Movement to Undetected Auditory Regularities. Sci Rep 7, 14867 (2017). https://doi.org/10.1038/s41598-017-15126-w
# Deferred evaluation in Renjin, Riposte, and pqR

July 24, 2013 | By Radford Neal (This article was first published on Radford Neal's blog » R Programming, and kindly contributed to R-bloggers)

The previously sleepy world of R implementation is waking up. Shortly after I announced pqR, my “pretty quick” implementation of R, the Renjin implementation was announced at useR! 2013. Work also proceeds on Riposte, with release planned for a year from now. These three implementations differ greatly in some respects, but interestingly they all try to use multiple processor cores, and they all use some form of deferred evaluation. Deferred evaluation isn’t the same as “lazy evaluation” (which is how R handles function arguments). Deferred evaluation is purely an implementation technique, invisible to the user, apart from its effect on performance. The idea is to sometimes not do an operation immediately, but instead wait, hoping that later events will allow the operation to be done faster, perhaps because a processor core becomes available for doing it in another thread, or perhaps because it turns out that it can be combined with a later operation, and both done at once. Below, I’ll sketch how deferred evaluation is implemented and used in these three new R implementations, and also comment a bit on their other characteristics. I’ll then consider whether these implementations might be able to borrow ideas from each other to further expand the usefulness of deferred evaluation. My pqR implementation (discussed also in my blog post here and others it links to) is the most conservative of these projects. It is based on R-2.15.0, and so retains compatibility with the huge number of R packages that work with that version of R (apart from any bugs that I may have introduced, and excepting any packages that use internal characteristics of the R interpreter that they weren’t supposed to rely on). Many of the performance improvements in pqR result simply from detailed rewrites of C code — indeed, the project began with my realization of just how much scope there must be for such improvements. But there are other more profound changes in pqR. Two relating to deferred evaluation are the use of helper threads to perform numerical computations concurrently on multicore systems, and to a lesser extent the use of variant returns to optimize some operations. Renjin (see also this talk) is written in Java, but tries to retain compatibility by reusing many parts of R-2.14.2 (e.g., by translating C components to Java). One of its goals is to allow R to use huge objects that don’t fit in memory. Perhaps related to this, its objects are immutable, which is apparently tolerable because it can create “views” of objects that are the same except for some simple changes. These views also play a role in how Renjin can defer computations until a tree of operations has been built, which can then be optimized and perhaps done in parallel. Riposte (see also this paper) appears to be mostly written in C++, and to currently target only Intel processors. Of the three projects, Riposte’s implementation seems to be the most distinct from the current R Core implementation, which may perhaps lead to compatibility problems. However, this may also allow it to employ more sophisticated techniques. Its use of deferred evaluation seems generally similar to Renjin’s, but I expect that many details differ. (The available documentation on Renjin isn’t sufficient to really tell.) So, when do these implementations defer evaluations?
Consider the following R code, and assume that the variables a and b are numeric vectors of the same length:

    f <- function (a,b) { u <- a*b+1; exp(u); }
    print(f(a,b)/2)

When evaluating this code, pqR, Renjin, and Riposte may all not do the multiply, not do the add, not do the exponentiation, and not do the division, until the final value is actually needed by print. Renjin and Riposte also do the equivalent of scheduling tasks for the multiply, add, exponentiation, and division, though for Renjin this is seen as constructing a directed graph of operations, and for Riposte it is seen as creating a “vector trace”. They both will also ensure that these computations are actually performed when the result is needed for printing. However, Renjin and Riposte differ from pqR in that, as far as I can tell, neither will start any of the vector computations above until the call of print, even if there are idle processor cores. Because of this, it seems that only pqR can overlap interpretive operations in the master thread with numeric computations done in other threads. The advantage of not immediately starting these computations is that when the result is finally needed, the whole tree of computations is available, allowing optimizations to be performed. In particular, operations can be combined, so that possibly long intermediate vectors do not need to be allocated. In the above example, a single loop (perhaps split amongst processor cores) can read successive elements, i, of a and b, compute exp(a[i]*b[i]+1)/2, and store the result (or even just print it without ever storing it). This merging of operations can be beneficial even if only one processor core is available. Since the general apparatus for deferred evaluation is already implemented in pqR, adding a capability for merging operations might not be too much work, though I suspect the result would be a bit of a “hack” compared to what Renjin or Riposte do. Actually, pqR already does a very limited amount of merging via its variant result mechanism. For example, the evaluation of sum(exp(x)) will not store exp(x) in memory, but just sum its elements as they are computed. In the other direction, I wonder whether pqR’s immediate use of available processor cores could be incorporated into Renjin or Riposte. For very long vectors, starting work immediately may not help much, but I think many R programs work on vectors of moderate size (thousands of elements), for which the time it takes for the interpreter to reach the point where the result is needed may be comparable to the time required for the vector computation. Starting the computation immediately will then give a substantial speedup. Finally, Renjin, Riposte, and pqR aren’t the only new implementations of R in the works. There are at least two more — FastR, which uses Java, and CXXR, which uses C++. As far as I can tell, neither of these projects involve deferred evaluation, but they may have other interesting ideas.
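To make the deferred-evaluation idea concrete in a language-neutral way, here is a small Python sketch of the general technique described above: an expression tree that is built instead of computed, and only evaluated, in one fused pass, when the result is needed. It is not how pqR, Renjin, or Riposte are implemented; it only illustrates the fusion idea.

```python
# Toy illustration of deferred evaluation with operation fusion; not taken
# from pqR, Renjin, or Riposte.
import math

class Deferred:
    """A node in a deferred expression tree over equal-length vectors."""
    def __init__(self, fn, *children):
        self.fn, self.children = fn, children

    def force(self):
        """Evaluate the whole tree in a single fused pass, element by element,
        so no intermediate vectors are ever allocated."""
        return [self._eval_at(i) for i in range(self._length())]

    def _eval_at(self, i):
        args = [c._eval_at(i) if isinstance(c, Deferred) else c[i]
                for c in self.children]
        return self.fn(*args)

    def _length(self):
        c = self.children[0]
        return c._length() if isinstance(c, Deferred) else len(c)

    # Operators build the tree instead of computing immediately.
    def __mul__(self, other): return Deferred(lambda x, y: x * y, self, other)
    def __add__(self, other): return Deferred(lambda x, y: x + y, self, other)
    def __truediv__(self, other): return Deferred(lambda x, y: x / y, self, other)

def vec(values):                      # wrap a plain list as a deferred leaf
    return Deferred(lambda x: x, values)

def vexp(node):
    return Deferred(math.exp, node)

a, b = vec([0.1, 0.2, 0.3]), vec([1.0, 2.0, 3.0])
result = vexp(a * b + vec([1.0] * 3)) / vec([2.0] * 3)   # nothing computed yet
print(result.force())    # fused: exp(a[i]*b[i] + 1) / 2 for each i
```

A real system would of course fuse loops at a much lower level and split them across cores, which is exactly the territory the three implementations discussed above are exploring.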
# Elucidation

To start off, the right side looks to be four lines of text. We can divide the pixels up into 3×3 chunks to read them:

DONT EXIST
USES A FOIL
NEVER-BEFORE-SEEN
TYPE OF CAKE

Alright, those seem important. But what about the other side? Well, the colorful rectangle at the bottom appears to be divisible in the same way: This looks like the numbers 123456789. And there are similar shapes appearing in the top... This leads to the main aha moment of the puzzle: The top left can be cut up the same way too -- but parts are being obscured by the number squares at the bottom! So the second and third letters of the first row are 8 (rotated) and 9; on the far right of that row, we have 7, but it's been rotated and color-flipped. The colors help us disambiguate some identical-looking numbers, like 2 and 5, and 1 and 4. (Though 6 and 9 aren't disambiguated.) Now the clues make sense - they're giving hints as to what's underneath the squares. For instance, the second row is pretty readable already as D?E[L/C][S/J]: given the clue "USES A FOIL", the answer must be DUELS. So we can fill in DUELS, using the font on the right as a reference for the "official" letter shapes. Then we can continue, and reveal what was previously hidden: The answers to the clues are ARENT, DUELS, NOVEL, and LAYER. There are a few ambiguities in the tiles (which is 6 and which is 9? is 8 rotated or flipped?). But these can all be resolved, since all but one option for each will give you a non-letter. So, the legibility of this puzzle is: EXECRABLE!
This post was originally written on the site below. https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning This is a PyTorch Tutorial to Image Captioning. This is the first in a series of tutorials I’m writing about implementing cool models on your own with the amazing PyTorch library. Basic knowledge of PyTorch, convolutional and recurrent neural networks is assumed. If you’re new to PyTorch, first read Deep Learning with PyTorch: A 60 Minute Blitz and Learning PyTorch with Examples. Questions, suggestions, or corrections can be posted as issues. I’m using PyTorch 0.4 in Python 3.6. Objective Concepts Overview Implementation Training Inference # Objective To build a model that can generate a descriptive caption for an image we provide it. In the interest of keeping things simple, let’s implement the Show, Attend, and Tell paper. This is by no means the current state-of-the-art, but is still pretty darn amazing. The authors’ original implementation can be found here. This model learns where to look. As you generate a caption, word by word, you can see the model’s gaze shifting across the image. This is possible because of its Attention mechanism, which allows it to focus on the part of the image most relevant to the word it is going to utter next. Here are some captions generated on test images not seen during training or validation: There are more examples at the end of the tutorial. # Concepts • Image captioning. duh. • Encoder-Decoder architecture. Typically, a model that generates sequences will use an Encoder to encode the input into a fixed form and a Decoder to decode it, word by word, into a sequence. • Attention. The use of Attention networks is widespread in deep learning, and with good reason. This is a way for a model to choose only those parts of the encoding that it thinks is relevant to the task at hand. The same mechanism you see employed here can be used in any model where the Encoder’s output has multiple points in space or time. In image captioning, you consider some pixels more important than others. In sequence to sequence tasks like machine translation, you consider some words more important than others. • Transfer Learning. This is when you borrow from an existing model by using parts of it in a new model. This is almost always better than training a new model from scratch (i.e., knowing nothing). As you will see, you can always fine-tune this second-hand knowledge to the specific task at hand. Using pretrained word embeddings is a dumb but valid example. For our image captioning problem, we will use a pretrained Encoder, and then fine-tune it as needed. • Beam Search. This is where you don’t let your Decoder be lazy and simply choose the words with the best score at each decode-step. Beam Search is useful for any language modeling problem because it finds the most optimal sequence. # Overview In this section, I will present an overview of this model. If you’re already familiar with it, you can skip straight to the Implementation section or the commented code. ### Encoder The Encoder encodes the input image with 3 color channels into a smaller image with “learned” channels. This smaller encoded image is a summary representation of all that’s useful in the original image. Since we want to encode images, we use Convolutional Neural Networks (CNNs). We don’t need to train an encoder from scratch. Why? Because there are already CNNs trained to represent images. 
For years, people have been building models that are extraordinarily good at classifying an image into one of a thousand categories. It stands to reason that these models capture the essence of an image very well. I have chosen to use the 101 layered Residual Network trained on the ImageNet classification task, already available in PyTorch. As stated earlier, this is an example of Transfer Learning. You have the option of fine-tuning it to improve performance. These models progressively create smaller and smaller representations of the original image, and each subsequent representation is more “learned”, with a greater number of channels. The final encoding produced by our ResNet-101 encoder has a size of 14x14 with 2048 channels, i.e., a 2048, 14, 14 size tensor. I encourage you to experiment with other pre-trained architectures. The paper uses a VGGnet, also pretrained on ImageNet, but without fine-tuning. Either way, modifications are necessary. Since the last layer or two of these models are linear layers coupled with softmax activation for classification, we strip them away.

### Decoder

The Decoder’s job is to look at the encoded image and generate a caption word by word. Since it’s generating a sequence, it would need to be a Recurrent Neural Network (RNN). We will use an LSTM. In a typical setting without Attention, you could simply average the encoded image across all pixels. You could then feed this, with or without a linear transformation, into the Decoder as its first hidden state and generate the caption. Each predicted word is used to generate the next word. In a setting with Attention, we want the Decoder to be able to look at different parts of the image at different points in the sequence. For example, while generating the word football in a man holds a football, the Decoder would know to focus on – you guessed it – the football! Instead of the simple average, we use the weighted average across all pixels, with the weights of the important pixels being greater. This weighted representation of the image can be concatenated with the previously generated word at each step to generate the next word.

### Attention

The Attention network computes these weights. Intuitively, how would you estimate the importance of a certain part of an image? You would need to be aware of the sequence you have generated so far, so you can look at the image and decide what needs describing next. For example, after you mention a man, it is logical to declare that he is holding a football. This is exactly what the Attention mechanism does – it considers the sequence generated thus far, and attends to the part of the image that needs describing next. We will use soft Attention, where the weights of the pixels add up to 1. If there are P pixels in our encoded image, then at each timestep t,

$$\sum_{p=1}^{P} \alpha_{p,t} = 1$$

You could interpret this entire process as computing the probability that a pixel is the place to look to generate the next word.

### Putting it all together

It might be clear by now what our combined network looks like.

• Once the Encoder generates the encoded image, we transform the encoding to create the initial hidden state h (and cell state C) for the LSTM Decoder.
• At each decode step,
  • the encoded image and the previous hidden state are used to generate weights for each pixel in the Attention network.
  • the previously generated word and the weighted average of the encoding are fed to the LSTM Decoder to generate the next word.
We use a linear layer to transform the Decoder’s output into a score for each word in the vocabulary. The straightforward – and greedy – option would be to choose the word with the highest score and use it to predict the next word. But this is not optimal because the rest of the sequence hinges on that first word you choose. If that choice isn’t the best, everything that follows is sub-optimal. And it’s not just the first word – each word in the sequence has consequences for the ones that succeed it. It might very well happen that if you’d chosen the third best word at that first step, and the second best word at the second step, and so on… that would be the best sequence you could generate. It would be best if we could somehow not decide until we’ve finished decoding completely, and choose the sequence that has the highest overall score from a basket of candidate sequences. Beam Search does exactly this.

• At the first decode step, consider the top k candidates.
• Generate k second words for each of these k first words.
• Choose the top k [first word, second word] combinations considering additive scores.
• For each of these k second words, choose k third words, choose the top k [first word, second word, third word] combinations.
• Repeat at each decode step.
• After k sequences terminate, choose the sequence with the best overall score.

As you can see, some sequences (struck out) may fail early, as they don’t make it to the top k at the next step. Once k sequences (underlined) generate the <end> token, we choose the one with the highest score.

# Implementation

The sections below briefly describe the implementation. They are meant to provide some context, but details are best understood directly from the code, which is quite heavily commented.

### Dataset

I’m using the MSCOCO ‘14 Dataset. You’d need to download the Training (13GB) and Validation (6GB) images. We will use Andrej Karpathy’s training, validation, and test splits. This zip file contains the captions. You will also find splits and captions for the Flickr8k and Flickr30k datasets, so feel free to use these instead of MSCOCO if the latter is too large for your computer.

### Inputs to model

We will need three inputs.

#### Images

Since we’re using a pretrained Encoder, we would need to process the images into the form this pretrained Encoder is accustomed to. Pretrained ImageNet models are available as part of PyTorch’s torchvision module. This page details the preprocessing or transformation we need to perform – pixel values must be in the range [0,1] and we must then normalize the image by the mean and standard deviation of the ImageNet images’ RGB channels.

    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]

Also, PyTorch follows the NCHW convention, which means the channels dimension (C) must precede the size dimensions. We will resize all MSCOCO images to 256x256 for uniformity. Therefore, images fed to the model must be a Float tensor of dimension N, 3, 256, 256, and must be normalized by the aforesaid mean and standard deviation. N is the batch size.

#### Captions

Captions are both the target and the inputs of the Decoder as each word is used to generate the next word. To generate the first word, however, we need a zeroth word, <start>. At the last word, we should predict <end>. The Decoder must learn to predict the end of a caption. This is necessary because we need to know when to stop decoding during inference.
<start> a man holds a football <end>

Since we pass the captions around as fixed size Tensors, we need to pad captions (which are naturally of varying length) to the same length with <pad> tokens.

<start> a man holds a football <end> <pad> <pad> <pad>....

Furthermore, we create a word_map which is an index mapping for each word in the corpus, including the <start>, <end>, and <pad> tokens. PyTorch, like other libraries, needs words encoded as indices to look up embeddings for them or to identify their place in the predicted word scores.

9876 1 5 120 1 5406 9877 9878 9878 9878....

Therefore, captions fed to the model must be an Int tensor of dimension N, L where L is the padded length.

#### Caption Lengths

Since the captions are padded, we would need to keep track of the lengths of each caption. This is the actual length + 2 (for the <start> and <end> tokens). Caption lengths are also important because you can build dynamic graphs with PyTorch. We only process a sequence up to its length and don’t waste compute on the <pad>s. Therefore, caption lengths fed to the model must be an Int tensor of dimension N.

### Data pipeline

See create_input_files() in utils.py. This saves the following files:

• An HDF5 file containing images for each split in an I, 3, 256, 256 tensor, where I is the number of images in the split. Pixel values are still in the range [0, 255], and are stored as unsigned 8-bit Ints.
• A JSON file for each split with a list of N_c * I encoded captions, where N_c is the number of captions sampled per image. These captions are in the same order as the images in the HDF5 file. Therefore, the ith caption will correspond to the i // N_cth image.
• A JSON file for each split with a list of N_c * I caption lengths. The ith value is the length of the ith caption, which corresponds to the i // N_cth image.
• A JSON file which contains the word_map, the word-to-index dictionary.

Before we save these files, we have the option to only use captions that are shorter than a threshold, and to bin less frequent words into an <unk> token. We use HDF5 files for the images because we will read them directly from disk during training / validation. They’re simply too large to fit into RAM all at once. But we do load all captions and their lengths into memory.

See CaptionDataset in datasets.py. This is a subclass of PyTorch Dataset. It needs a __len__ method defined, which returns the size of the dataset, and a __getitem__ method which returns the ith image, caption, and caption length. We read images from disk, convert pixels to [0,255], and normalize them inside this class. The Dataset will be used by a PyTorch DataLoader in train.py to create and feed batches of data to the model for training or validation.

### Encoder

See Encoder in models.py. We use a pretrained ResNet-101 already available in PyTorch’s torchvision module. Discard the last two layers (pooling and linear layers), since we only need to encode the image, and not classify it. We do add an AdaptiveAvgPool2d() layer to resize the encoding to a fixed size. This makes it possible to feed images of variable size to the Encoder. (We did, however, resize our input images to 256, 256 because we had to store them together as a single tensor.) Since we may want to fine-tune the Encoder, we add a fine_tune() method which enables or disables the calculation of gradients for the Encoder’s parameters.
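The authoritative Encoder lives in the repository's models.py; the sketch below is only a condensed rendering of the description above (ResNet-101 with the last two layers removed, an adaptive pooling layer, and a fine_tune() switch). Details such as the exact child indices are assumptions, not a copy of the original file.

```python
# Condensed sketch of an encoder along the lines described above; the actual
# models.py in the repository may differ in details.
import torch
import torch.nn as nn
import torchvision

class Encoder(nn.Module):
    def __init__(self, encoded_size=14):
        super().__init__()
        resnet = torchvision.models.resnet101(pretrained=True)
        # Drop the final average-pooling and linear (classification) layers.
        self.resnet = nn.Sequential(*list(resnet.children())[:-2])
        # Resize feature maps to a fixed spatial size, e.g. 14x14.
        self.adaptive_pool = nn.AdaptiveAvgPool2d((encoded_size, encoded_size))
        self.fine_tune(False)

    def forward(self, images):
        # images: (N, 3, 256, 256) -> features: (N, 2048, 14, 14)
        feats = self.adaptive_pool(self.resnet(images))
        # Rearrange to (N, 14, 14, 2048) so the pixels can be flattened later.
        return feats.permute(0, 2, 3, 1)

    def fine_tune(self, fine_tune=True):
        """Enable or disable gradients; only the later convolutional blocks
        (children 5 onward here) are ever unfrozen."""
        for p in self.resnet.parameters():
            p.requires_grad = False
        for child in list(self.resnet.children())[5:]:
            for p in child.parameters():
                p.requires_grad = fine_tune

enc = Encoder()
out = enc(torch.randn(2, 3, 256, 256))   # -> torch.Size([2, 14, 14, 2048])
```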
We only fine-tune convolutional blocks 2 through 4 in the ResNet, because the first convolutional block would have usually learned something very fundamental to image processing, such as detecting lines, edges, curves, etc. We don’t mess with the foundations. ### Attention See Attention in models.py. The Attention network is simple – it’s composed of only linear layers and a couple of activations. Separate linear layers transform both the encoded image (flattened to N, 14 * 14, 2048) and the hidden state (output) from the Decoder to the same dimension, viz. the Attention size. They are then added and ReLU activated. A third linear layer transforms this result to a dimension of 1, whereupon we apply the softmax to generate the weights alpha. ### Decoder See DecoderWithAttention in models.py. The output of the Encoder is received here and flattened to dimensions N, 14 * 14, 2048. This is just convenient and prevents having to reshape the tensor multiple times. We initialize the hidden and cell state of the LSTM using the encoded image with the init_hidden_state() method, which uses two separate linear layers. At the very outset, we sort the N images and captions by decreasing caption lengths. This is so that we can process only valid timesteps, i.e., not process the <pad>s. We can iterate over each timestep, processing only the colored regions, which are the effective batch size N_t at that timestep. The sorting allows the top N_t at any timestep to align with the outputs from the previous step. At the third timestep, for example, we process only the top 5 images, using the top 5 outputs from the previous step. This iteration is performed manually in a for loop with a PyTorch LSTMCell instead of iterating automatically without a loop with a PyTorch LSTM. This is because we need to execute the Attention mechanism between each decode step. An LSTMCell is a single timestep operation, whereas an LSTM would iterate over multiple timesteps continously and provide all outputs at once. We compute the weights and attention-weighted encoding at each timestep with the Attention network. In section 4.2.1 of the paper, they recommend passing the attention-weighted encoding through a filter or gate. This gate is a sigmoid activated linear transform of the Decoder’s previous hidden state. The authors state that this helps the Attention network put more emphasis on the objects in the image. We concatenate this filtered attention-weighted encoding with the embedding of the previous word (<start> to begin), and run the LSTMCell to generate the new hidden state (or output). A linear layer transforms this new hidden state into scores for each word in the vocabulary, which is stored. We also store the weights returned by the Attention network at each timestep. You will see why soon enough. # Training Before you begin, make sure to save the required data files for training, validation, and testing. To do this, run the contents of create_input_files.py after pointing it to the the Karpathy JSON file and the image folder containing the extracted train2014 and val2014 folders from your downloaded data. The parameters for the model (and training it) are at the beginning of the file, so you can easily check or modify them should you wish to. To train your model from scratch, simply run this file – python train.py To resume training at a checkpoint, point to the corresponding file with the checkpoint parameter at the beginning of the code. Note that we perform validation at the end of every training epoch. 
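Before moving on to the loss, here is a small illustrative sketch of the additive attention computation described in the Attention section above – a hypothetical stand-in, not the repository's Attention class:

```python
import torch
import torch.nn as nn


class AdditiveAttentionSketch(nn.Module):
    """Hypothetical additive attention over the 14x14 grid of image features."""

    def __init__(self, encoder_dim=2048, decoder_dim=512, attention_dim=512):
        super().__init__()
        self.enc_att = nn.Linear(encoder_dim, attention_dim)  # transform image features
        self.dec_att = nn.Linear(decoder_dim, attention_dim)  # transform decoder hidden state
        self.full_att = nn.Linear(attention_dim, 1)           # scalar score per pixel

    def forward(self, encoder_out, decoder_hidden):
        # encoder_out: (N, num_pixels, encoder_dim), decoder_hidden: (N, decoder_dim)
        att1 = self.enc_att(encoder_out)                             # (N, num_pixels, attention_dim)
        att2 = self.dec_att(decoder_hidden).unsqueeze(1)             # (N, 1, attention_dim)
        scores = self.full_att(torch.relu(att1 + att2)).squeeze(2)   # (N, num_pixels)
        alpha = torch.softmax(scores, dim=1)                         # weights sum to 1 over pixels
        context = (encoder_out * alpha.unsqueeze(2)).sum(dim=1)      # (N, encoder_dim)
        return context, alpha
```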
### Loss Function Since we’re generating a sequence of words, we use CrossEntropyLoss. You only need to submit the raw scores from the final layer in the Decoder, and the loss function will perform the softmax and log operations. The authors of the paper recommend using a second loss – a “doubly stochastic regularization”. We know the weights sum to 1 at a given timestep. But we also encourage the weights at a single pixel p to sum to 1 across all timesteps T This means we want the model to attend to every pixel over the course of generating the entire sequence. Therefore, we try to minimize the difference between 1 and the sum of a pixel’s weights across all timesteps. We do not compute losses over the padded regions. An easy way to do get rid of the pads is to use PyTorch’s pack_padded_sequence(), which flattens the tensor by timestep while ignoring the padded regions. You can now aggregate the loss over this flattened tensor. Note – This function is actually used to perform the same dynamic batching (i.e., processing only the effective batch size at each timestep) we performed in our Decoder, when using an RNN or LSTM in PyTorch. In this case, PyTorch handles the dynamic variable-length graphs internally. You can see an example in dynamic_rnn.py in my other tutorial on sequence labeling. We would have used this function along with an LSTM in our Decoder if we weren’t manually iterating because of the Attention network. ### Early stopping with BLEU To evaluate the model’s performance on the validation set, we will use the automated BiLingual Evaluation Understudy (BLEU) evaluation metric. This evaluates a generated caption against reference caption(s). For each generated caption, we will use all N_c captions available for that image as the reference captions. The authors of the Show, Attend and Tell paper observe that correlation between the loss and the BLEU score breaks down after a point, so they recommend to stop training early on when the BLEU score begins to degrade, even if the loss continues to decrease. I used the BLEU tool available in the NLTK module. Note that there is considerable criticism of the BLEU score because it doesn’t always correlate well with human judgment. The authors also report the METEOR scores for this reason, but I haven’t implemented this metric. ### Remarks I recommend you train in stages. I first trained only the Decoder, i.e. without fine-tuning the Encoder, with a batch size of 80. I trained for 20 epochs, and the BLEU-4 score peaked at about 23.25 at the 13th epoch. I used the Adam() optimizer with an initial learning rate of 4e-4. I continued from the 13th epoch checkpoint allowing fine-tuning of the Encoder with a batch size of 32. The smaller batch size is because the model is now larger because it contains the Encoder’s gradients. With fine-tuning, the score rose to 24.29 in just about 3 epochs. Continuing training would probably have pushed the score slightly higher but I had to commit my GPU elsewhere. An important distinction to make here is that I’m still supplying the ground-truth as the input at each decode-step during validation, regardless of the word last generated. This is called Teacher Forcing. While this is commonly used during training to speed-up the process, as we are doing, conditions during validation must mimic real inference conditions as much as possible. I haven’t implemented batched inference yet – where each word in the caption is generated from the previously generated word, and terminates upon hitting the <end> token. 
Since I’m teacher-forcing during validation, the BLEU score measured above on the resulting captions does not reflect real performance. In fact, the BLEU score is a metric designed for comparing naturally generated captions to ground-truth captions of differing length. Once batched inference is implemented, i.e. no Teacher Forcing, early-stopping with the BLEU score will be truly ‘proper’. With this in mind, I used eval.py to compute the correct BLEU-4 scores of this model checkpoint on the validation set without Teacher Forcing, at different beam sizes – Beam Size Validation BLEU-4 1 29.98 3 32.95 5 33.17 This is higher than the result in the paper, and could be because of how our BLEU calculators are parameterized, the fact that I used a ResNet encoder, and actually fine-tuned the encoder – even if just a little. Also, remember – when fine-tuning during Transfer Learning, it’s always better to use a learning rate considerably smaller than what was originally used to train the borrowed model. This is because the model is already quite optimized, and we don’t want to change anything too quickly. I used Adam() for the Encoder as well, but with a learning rate of 1e-4, which is a tenth of the default value for this optimizer. On a Titan X (Pascal), it took 55 minutes per epoch without fine-tuning, and 2.5 hours with fine-tuning at the stated batch sizes. ### Model Checkpoint You can download this pretrained model and the corresponding word_map here. Note that this checkpoint should be loaded directly with PyTorch, or passed to caption.py – see below. # Inference During inference, we cannot directly use the forward() method in the Decoder because it uses Teacher Forcing. Rather, we would actually need to feed the previously generated word to the LSTM at each timestep. caption_image_beam_search() reads an image, encodes it, and applies the layers in the Decoder in the correct order, while using the previously generated word as the input to the LSTM at each timestep. It also incorporates Beam Search. visualize_att() can be used to visualize the generated caption along with the weights at each timestep as seen in the examples. To caption an image from the command line, point to the image, model checkpoint, word map (and optionally, the beam size) as follows – python caption.py --img='path/to/image.jpeg' --model='path/to/BEST_checkpoint_coco_5_cap_per_img_5_min_word_freq.pth.tar' --word_map='path/to/WORDMAP_coco_5_cap_per_img_5_min_word_freq.json' --beam_size=5 Alternatively, use the functions in the file as needed. Also see eval.py, which implements this process for calculating the BLEU score on the validation set, with or without Beam Search. ### Some more examples The Turing Tommy Test – you know AI’s not really AI because it hasn’t watched The Room and doesn’t recognize greatness when it sees it. # FAQs You said soft attention. Is there, um, a hard attention? Yes, the Show, Attend and Tell paper uses both variants, and the Decoder with “hard” attention performs marginally better. In soft attention, which we use here, you’re computing the weights alpha and using the weighted average of the features across all pixels. This is a deterministic, differentiable operation. In hard attention, you are choosing to just sample some pixels from a distribution defined by alpha. Note that any such probabilistic sampling is non-deterministic or stochastic, i.e. a specific input will not always produce the same output. 
But since gradient descent presupposes that the network is deterministic (and therefore differentiable), the sampling is reworked to remove its stochasticity. My knowledge of this is fairly superficial at this point – I will update this answer when I have a more detailed understanding. How do I use an attention network for an NLP task like a sequence to sequence model? Much like you use a CNN to generate an encoding with features at each pixel, you would use an RNN to generate encoded features at each timestep i.e. word position in the input. Without attention, you would use the Encoder’s output at the last timestep as the encoding for the entire sentence, since it would also contain information from prior timesteps. The Encoder’s last output now bears the burden of having to encode the entire sentence meaningfully, which is not easy, especially for longer sentences. With attention, you would attend over the timesteps in the Encoder’s output, generating weights for each timestep/word, and take the weighted average to represent the sentence. In a sequence to sequence task like machine translation, you would attend to the relevant words in the input as you generate each word in the output. You could also use Attention without a Decoder. For example, if you want to classify text, you can attend to the important words in the input just once to perform the classification. Can we use Beam Search during training? Not with the current loss function, but yes. This is not common at all. What is Teacher Forcing? Teacher Forcing is when we use the ground truth captions as the input to the Decoder at each timestep, and not the word it generated in the previous timestep. It’s common to teacher-force during training since it could mean faster convergence of the model. But it can also learn to depend on being told the correct answer, and exhibit some instability in practice. It would be ideal to train using Teacher Forcing only some of the time, based on a probability. This is called Scheduled Sampling. (I plan to add the option). Can I use pretrained word embeddings (GloVe, CBOW, skipgram, etc.) instead of learning them from scratch? Yes, you could, with the load_pretrained_embeddings() method in the Decoder class. You could also choose to fine-tune (or not) with the fine_tune_embeddings() method. After creating the Decoder in train.py, you should provide the pretrained vectors to load_pretrained_embeddings() stacked in the same order as in the word_map. For words that you don’t have pretrained vectors for, like <start>, you can initialize embeddings randomly like we did in init_weights(). I recommend fine-tuning to learn more meaningful vectors for these randomly initialized vectors. decoder = DecoderWithAttention(attention_dim=attention_dim, embed_dim=emb_dim, decoder_dim=decoder_dim, vocab_size=len(word_map), dropout=dropout) decoder.load_pretrained_embeddings(pretrained_embeddings) # pretrained_embeddings should be of dimensions (len(word_map), emb_dim) decoder.fine_tune_embeddings(True) # or False Also make sure to change the emb_dim parameter from its current value of 512 to the size of your pre-trained embeddings. This should automatically adjust the input size of the decoder LSTM to accomodate them. How do I keep track of which tensors allow gradients to be computed? With the release of PyTorch 0.4, wrapping tensors as Variables is no longer required. 
Instead, tensors have the requires_grad attribute, which decides whether it is tracked by autograd, and therefore whether gradients are computed for it during backpropagation. • By default, when you create a tensor from scratch, requires_grad will be set to False. • When a tensor is created from or modified using another tensor that allows gradients, then requires_grad will be set to True. • Tensors which are parameters of torch.nn layers will already have requires_grad set to True. How do I compute all BLEU (i.e. BLEU-1 to BLEU-4) scores during evaluation? You’d need to modify the code in eval.py to do this. Please see this excellent answer by kmario23 for a clear and detailed explanation.
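As a rough illustration, all four scores can be computed with NLTK's corpus_bleu by varying the n-gram weights; the references and hypotheses below are toy stand-ins for the token lists you would collect in eval.py:

```python
from nltk.translate.bleu_score import corpus_bleu

# Toy stand-ins: one image with one reference caption and one generated caption.
# In eval.py you would instead collect all references and hypotheses while decoding.
references = [[["a", "man", "holds", "a", "football"]]]
hypotheses = [["a", "man", "holding", "a", "football"]]

weights_per_n = {
    "BLEU-1": (1.0, 0, 0, 0),
    "BLEU-2": (0.5, 0.5, 0, 0),
    "BLEU-3": (1 / 3, 1 / 3, 1 / 3, 0),
    "BLEU-4": (0.25, 0.25, 0.25, 0.25),
}
for name, weights in weights_per_n.items():
    print(name, corpus_bleu(references, hypotheses, weights=weights))
```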
# Quarkonium (Redirected from Bottomonium) In particle physics, quarkonium (from quark + onium, pl. quarkonia) designates a flavorless meson whose constituents are a quark and its own antiquark. Examples of quarkonia are the J/ψ meson (the ground state of charmonium, c c ) and the ϒ meson (bottomonium, b b ). Because of the high mass of the top quark, toponium does not exist, since the top quark decays through the electroweak interaction before a bound state can form. Usually quarkonium refers only to charmonium and bottomonium, and not to any of the lighter quark–antiquark states. This usage is because the lighter quarks (up, down, and strange) are much less massive than the heavier quarks, and so the physical states actually seen in experiments (η, η′, and π0 mesons) are quantum mechanical mixtures of the light quark states. The much larger mass differences between the charm and bottom quarks and the lighter quarks results in states that are well defined in terms of a quark–antiquark pair of a given flavor. ## Charmonium states Charmonium In the following table, the same particle can be named with the spectroscopic notation or with its mass. In some cases excitation series are used: Ψ' is the first excitation of Ψ (for historical reasons, this one is called J/ψ particle); Ψ" is a second excitation, and so on. That is, names in the same cell are synonymous. Some of the states are predicted, but have not been identified; others are unconfirmed. The quantum numbers of the X(3872) particle have been measured recently by the LHCb experiment at CERN[1] . This measurement shed some light on its identity, excluding the third option among the three envised, which are : • a charmonium hybrid state; • a ${\displaystyle D^{0}{\bar {D}}^{*0}}$ molecule. • a candidate for the 11D2 state; In 2005, the BaBar experiment announced the discovery of a new state: Y(4260).[2][3] CLEO and Belle have since corroborated these observations. At first, Y(4260) was thought to be a charmonium state, but the evidence suggests more exotic explanations, such as a D "molecule", a 4-quark construct, or a hybrid meson. Term symbol n2S + 1LJ IG(JPC) Particle mass (MeV/c2) [1] 11S0 0+(0−+) ηc(1S) 2983.4±0.5 13S1 0(1−−) J/ψ(1S) 3096.900±0.006 11P1 0(1+−) hc(1P) 3525.38±0.11 13P0 0+(0++) χc0(1P) 3414.75±0.31 13P1 0+(1++) χc1(1P) 3510.66±0.07 13P2 0+(2++) χc2(1P) 3556.20±0.09 21S0 0+(0−+) ηc(2S), or η′ c 3639.2±1.2 23S1 0(1−−) ψ(3686) 3686.097±0.025 11D2 0+(2−+) ηc2(1D) 3639.2±1.2 13D1 0(1−−) ψ(3770) 3773.13±0.35 13D2 0(2−−) ψ2(1D) 13D3 0(3−−) ψ3(1D) 21P1 0(1+−) hc(2P) 23P0 0+(0++) χc0(2P) 23P1 0+(1++) χc1(2P) 23P2 0+(2++) χc2(2P) ???? 0+(1++) X(3872) 3871.69±0.17 ????  ??(1−−) Y(4260) 4263+8 −9 Notes: * Needs confirmation. Predicted, but not yet identified. Interpretation as a 1−− charmonium state not favored. ## Bottomonium states In the following table, the same particle can be named with the spectroscopic notation or with its mass. Some of the states are predicted, but have not been identified; others are unconfirmed. 
Term symbol n2S+1LJ IG(JPC) Particle mass (MeV/c2)[2] 11S0 0+(0−+) ηb(1S) 9390.9±2.8 13S1 0(1−−) Υ(1S) 9460.30±0.26 11P1 0(1+−) hb(1P) 13P0 0+(0++) χb0(1P) 9859.44±0.52 13P1 0+(1++) χb1(1P) 9892.76±0.40 13P2 0+(2++) χb2(1P) 9912.21±0.40 21S0 0+(0−+) ηb(2S) 23S1 0(1−−) Υ(2S) 10023.26±0.31 11D2 0+(2−+) ηb2(1D) 13D1 0(1−−) Υ(1D) 13D2 0(2−−) Υ2(1D) 10161.1±1.7 13D3 0(3−−) Υ3(1D) 21P1 0(1+−) hb(2P) 23P0 0+(0++) χb0(2P) 10232.5±0.6 23P1 0+(1++) χb1(2P) 10255.46±0.55 23P2 0+(2++) χb2(2P) 10268.65±0.55 33S1 0(1−−) Υ(3S) 10355.2±0.5 33PJ 0+(J++) χb(3P) 10530±5 (stat.) ± 9 (syst.)[4] 43S1 0(1−−) Υ(4S) or Υ(10580) 10579.4±1.2 53S1 0(1−−) Υ(5S) or Υ(10860) 10865±8 63S1 0(1−−) Υ(11020) 11019±8 Notes: * Preliminary results. Confirmation needed. The Υ(1S) state was discovered by the E288 experiment team, headed by Leon Lederman, at Fermilab in 1977, and was the first particle containing a bottom quark to be discovered. The χb (3P) state was the first particle discovered in the Large Hadron Collider. The article about this discovery was first submitted to arXiv on 21 December 2011.[4][5] On April 2012, Tevatron's DØ experiment confirms the result in a paper published in Phys. Rev. D.[6][7] ## QCD and quarkonia The computation of the properties of mesons in Quantum chromodynamics (QCD) is a fully non-perturbative one. As a result, the only general method available is a direct computation using lattice QCD (LQCD) techniques. However, other techniques are effective for heavy quarkonia as well. The light quarks in a meson move at relativistic speeds, since the mass of the bound state is much larger than the mass of the quark. However, the speed of the charm and the bottom quarks in their respective quarkonia is sufficiently smaller, so that relativistic effects affect these states much less. It is estimated that the speed, v, is roughly 0.3 times the speed of light for charmonia and roughly 0.1 times the speed of light for bottomonia. The computation can then be approximated by an expansion in powers of v/c and v2/c2. This technique is called non-relativistic QCD (NRQCD). NRQCD has also been quantized as a lattice gauge theory, which provides another technique for LQCD calculations to use. Good agreement with the bottomonium masses has been found, and this provides one of the best non-perturbative tests of LQCD. For charmonium masses the agreement is not as good, but the LQCD community is actively working on improving their techniques. Work is also being done on calculations of such properties as widths of quarkonia states and transition rates between the states. An early, but still effective, technique uses models of the effective potential to calculate masses of quarkonia states. In this technique, one uses the fact that the motion of the quarks that comprise the quarkonium state is non-relativistic to assume that they move in a static potential, much like non-relativistic models of the hydrogen atom. One of the most popular potential models is the so-called Cornell potential ${\displaystyle V(r)=-{\frac {a}{r}}+br}$[8] where ${\displaystyle r}$ is the effective radius of the quarkonium state, ${\displaystyle a}$ and ${\displaystyle b}$ are parameters. This potential has two parts. The first part, ${\displaystyle a/r}$ corresponds to the potential induced by one-gluon exchange between the quark and its anti-quark, and is known as the Coulombic part of the potential, since its ${\displaystyle 1/r}$ form is identical to the well-known Coulombic potential induced by the electromagnetic force. 
The second part, ${\displaystyle br}$, is known as the confinement part of the potential, and parameterizes the poorly understood non-perturbative effects of QCD. Generally, when using this approach, a convenient form for the wave function of the quarks is taken, and then ${\displaystyle a}$ and ${\displaystyle b}$ are determined by fitting the results of the calculations to the masses of well-measured quarkonium states. Relativistic and other effects can be incorporated into this approach by adding extra terms to the potential, much in the same way that they are for the hydrogen atom in non-relativistic quantum mechanics. This form has been derived from QCD up to ${\displaystyle {\mathcal {O}}(\Lambda _{\text{QCD}}^{3}r^{2})}$ by Y. Sumino in 2003.[9] It is popular because it allows for accurate predictions of quarkonia parameters without a lengthy lattice computation, and provides a separation between the short-distance Coulombic effects and the long-distance confinement effects that can be useful in understanding the quark/anti-quark force generated by QCD. Quarkonia have been suggested as a diagnostic tool of the formation of the quark–gluon plasma: both disappearance and enhancement of their formation depending on the yield of heavy quarks in plasma can occur. ## References 1. ^ LHCb collaboration; Aaij, R.; Abellan Beteta, C.; Adeva, B.; Adinolfi, M.; Adrover, C.; Affolder, A.; Ajaltouni, Z.; et al. (February 2013). "Determination of the X(3872) meson quantum numbers". Physical Review Letters. 110 (22): 222001. arXiv:. Bibcode:2013PhRvL.110v2001A. doi:10.1103/PhysRevLett.110.222001. PMID 23767712. 2. ^ "A new particle discovered by BaBar experiment". Istituto Nazionale di Fisica Nucleare. 6 July 2005. Retrieved 2010-03-06. 3. ^ B. Aubert et al. (BaBar Collaboration) (2005). "Observation of a broad structure in the π+πJ/ψ mass spectrum around 4.26 GeV/c2". Physical Review Letters. 95 (14): 142001. arXiv:. Bibcode:2005PhRvL..95n2001A. doi:10.1103/PhysRevLett.95.142001. 4. ^ a b ATLAS Collaboration (2012). "Observation of a new χ b state in radiative transitions to ϒ (1S) and ϒ (2S) at ATLAS". arXiv: [hep-ex]. 5. ^ Jonathan Amos (2011-12-22). "LHC reports discovery of its first new particle". BBC. 6. ^ Tevatron experiment confirms LHC discovery of Chi-b (P3) particle 7. ^ Observation of a narrow mass state decaying into Υ(1S) + γ in pp collisions at 1.96 TeV 8. ^ Hee Sok Chung; Jungil Lee; Daekyoung Kang (2008). "Cornell Potential Parameters for S-wave Heavy Quarkonia". Journal of the Korean Physical Society. 52 (4): 1151. arXiv:. Bibcode:2008JKPS...52.1151C. doi:10.3938/jkps.52.1151. 9. ^ Y. Sumino (2003). "QCD potential as a "Coulomb-plus-linear" potential". Phys. Lett. B. 571: 173–183. arXiv:. Bibcode:2003PhLB..571..173S. doi:10.1016/j.physletb.2003.05.010.
# Embedding One of the design goals for Zephyr is to ensure that the forward-modelling architecture (viz., zephyr.backend) may be straightforwardly embedded in other programs. As such, an effort has been made to limit the dependencies for the backend infrastructure that handles simulating wave equations. In particular, Zephyr has been designed to interface with a new version of FULLWV, a well-known academic seismic full-waveform inversion package by Prof. R. Gerhard Pratt et al.
# Hall Effect in a Plasma

## Hall Effect in a Plasma Description (HAL)

1. Note that there is NO eating or drinking in the 111-Lab anywhere, except in rooms 282 & 286 LeConte on the bench with the BLUE stripe around it. Thank you, the Staff.

In general, the Hall effect refers to the fact that a voltage can develop across two boundaries in a direction transverse to the current flow in a system of charged particles in a magnetic field, owing to the Lorentz force q(v x B). In this experiment you will measure the Hall voltage as a function of current, magnetic field strength, and gas pressure in a gaseous discharge tube. These data are then used to determine the density and mean speed of the free electrons in the system. This experiment requires a review of electricity and magnetism, thermal physics, and atomic physics. It is also an excellent introduction to plasma physics.

1. Pre-requisites: None
2. Days Allotted for the Experiment: 6
3. Consecutive days: No

Note: the Pre-Lab is printed separately from the full lab write-up.

I. Hall Effect in a Plasma

This lab will be graded 30% on theory, 20% on technique, and 50% on analysis.

# Before the Lab

Complete the following before your experiment's scheduled start date:

1. View the Hall Effect in a Plasma video.
2. Complete the HAL Pre-Lab and Evaluation sheets. Print and fill them out. The Pre-Lab must be printed separately. Discuss the experiment and pre-lab questions with any faculty member or GSI and get it signed off by that faculty member or GSI. Turn in the signed pre-lab sheet with your lab report.

References:

1. W. B. Kunkel, "Hall Effect In A Plasma", American Journal of Physics 49, 733 (1981).
2. Golant, V.E. et al., "Chapter 5, Chapter 7, Chapter 8, and Symbols." Fundamentals of Plasma Physics, Wiley, New York (1980). \#QC718.G6213
3. M. N. Hirsh and J. J. Oskam, eds., "Ch. 2 Glow Discharges at DC and Low Frequencies." Gaseous Electronics, Vol. 1, Academic Press, New York (1978). A good book on plasma discharge structures.
4. L. Spitzer, Physics of Fully Ionized Gases, 2nd Ed., Wiley, New York (1962). A short and very readable introduction to plasma physics (this is a simple general reference to read). \#QC718.S6.1962
5. Read about striations: A.V. Nedospasov, "Striations", Soviet Physics Uspekhi 11, 174-187 (1968).
6. Hall Effect, Melissinos, 2nd ed.

You should keep a laboratory notebook. The notebook should contain a detailed record of everything that was done and how/why it was done, as well as all of the data and analysis, also with plenty of how/why entries. This will aid you when you write your report.

Other References 1. S. C.
Brown, Introduction to Electrical Discharges in Gases, Wiley, New York, (1965). \#QC711.5763 2. R. N. Franklin, Plasma Phenomena in Gas Discharges, Clarendon Press, Oxford (1976), page 48. 3. C. Kittel, Introduction to Solid State Physics, 4th Ed., Wiley, New York (1971), pp. 287-289. 4. C. Kittel and H. Kroemer, Thermal Physics, Freeman, SFO (1980). \#QC311.5.K52.1980 5. P. Lorrain & R. D. Corson , "Electromagnetic Fields and Waves", Section 7.3, pp. 299-301. 6. L. Pekarek, "Ionization Waves (Striations) in a Discharge Plasma", Soviet Physics Uspekhi 11, 188-208 (1968). 7. F. Chen, Introduction to Plasma Physics and Controlled Fusion, Chapter 1 & Chapter 5, Vol. 1, 2nd ed., Plenum Press (1984). Reference # 1. Lyman Spitzer, should be read to understand the basic concepts of Plasma discharges. Index to the full Hall Effect Reprints List Hall Reprint List Reprints and other information can be found on the Physics 111 Library Site # Introduction In low-density plasmas, such as the positive columns of glow discharges, the Hall effect is large and easily observable. In this experiment the Hall voltage across a helium discharge column is determined as a function of magnetic field, discharge current, and gas pressure. Electron drift velocities and densities are inferred from measurements of electrical parameters. A measurement of the resistance permits evaluation of the collision frequency of the electrons. The electron temperature can be calculated. # The Hall Effect To measure the Hall effect in a conductor we apply both electric and magnetic fields. The analysis is simplest if we take care to apply them perpendicular to each other. In the most common Hall effect geometry, we measure the current that flows in the direction of the applied electric field. We confine the carriers in the direction perpendicular to the electric field, establishing a boundary condition that this transverse current is zero. To analyze the magnitudes of the various currents and voltages that are developed in steady-state, we use some simple force balance considerations. In thinking about the electrons in a plasma, we should be careful about the distinction between the velocity of individual electrons, $\overrightarrow {v}$, and the drift velocity of an ensemble of electrons $\overrightarrow {\Delta v}\equiv <\overrightarrow {v}>$. Before we apply any fields, the electrons are moving randomly at a high speed that reflects their temperature. The average velocity of the ensemble is zero. When we apply the longitudinal electric field, the ensemble will move in the direction of the applied force, at a drift velocity that is in general much slower than the mean thermal velocity. To determine the drift velocity, we consider what happens to an electron during its motion between collision events. During this time it experiences the electric force, $\overrightarrow {F}_{E}=q \overrightarrow {E}$, where q=-e for our problem. If the velocity of the electron just after its last collision is $\overrightarrow {v}$, then its velocity just before the next collision is $\overrightarrow {v}+\overrightarrow {\Delta v}$, where $\overrightarrow {\Delta v}= \frac{\tau q}{m}\overrightarrow {E}$. In the equation above, τ is the mean free time. It is also common to relate the drift velocity to the collision frequency, ν, which is just reciprocal of τ(make sure you can see the difference between v and $\nu$ in the typography!): $\overrightarrow {\Delta v}= \frac{q}{m \nu}\overrightarrow {E}$. 
Next, we average $\overrightarrow {v}+\overrightarrow {\Delta v}$, over many collisions. As we assume that the initial velocity is random, the average velocity is just given by $\overrightarrow {\Delta v}$. The effect of the magnetic field on the electrons is a little bit trickier. In the absence of an electric field, the velocities are random, and the B field has no net effect. In other words, electrons are equally likely to be curving one way as the other. However, in the presence of drift induced by the electric field, there will be a time-averaged magnetic force. $\overrightarrow {F}_{M}=q \overrightarrow {\Delta v} \times \overrightarrow {B}$. Let's choose coordinates such that the magnetic field is along the z-axis and the electric field is along the x-axis. Under such conditions, electrons will drift in the x-direction under the influence of the applied field. The force caused by the magnetic field will therefore be in the y-direction. Note that neither E nor B will yield a force in the z-direction. As stated above, in the most common Hall geometry, we confine the electrons so that the net current in the y-direction is zero. As a result of this boundary condition, an electric field develops that balances the magnetic force on the drifting electrons. The quantitative study of the Hall effect is all about the ratio of this electric field to the applied B field. The magnetic force on the drifting electrons is obtained by substituting the drift velocity into the Lorentz force equation, $F_y = -\frac {q^2E_xB_z} {m \nu}$. To maintain a condition of zero current flow in the y-direction, an electric field given by, $E_y = \frac {qE_xB_z} {m \nu}$, must be established. Since physicist's love dimensionless quantities, we should consider the ratio, $\frac{E_y}{E_x} = \frac {qB_z} {m \nu}$, which is known as the Hall angle. Notice that familiar combination of parameters appears here. Yes, qB/m is just the cyclotron frequency, which is 1.8x1011 Hz for a magnetic field of 1 Tesla. A nice expression for the Hall angle is then, ΘHall = ωcτ. The analysis of the Hall effect that we have done is valid when the cyclotron frequency is smaller than the collision frequency. In this regime, the electron's motion is interrupted by collision before it can complete a cyclotron orbit. To learn as much as we can about conduction in the plasma we should measure the longitudinal current as well as the Hall angle. The electric current density $\overrightarrow {j}$ is given by, $\overrightarrow{j} = q n \overrightarrow {\Delta v}$. The resistance of the plasma, which we call ρ, is the ratio of this current to the longitudinal electric field, or, $\rho = \frac{E_x}{j_x}$. In the longitudinal (unconfined) direction, the current is determined by balancing the force due to the applied electric field and the frictional force. This balancing act yields the following expression for the resistance, $\rho = \frac{m\nu}{q^2n}$. If we can measure both the Hall angle and the resistance, this gives us two independent observables. In general, a conductor under study has four independent parameters: the density, charge, mass, and scattering rate of the mobile charged species. In the case of our plasma, the current is carried by electrons, and therefore the mass and charge are known to us. In this case our two independent observables are sufficient to determine the remaining two parameters - the density and collision rate of the electrons. 
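As a numerical sanity check on these two relations, here is a short sketch that backs out the collision frequency and electron density from a Hall angle and a resistivity. The input values are illustrative, chosen to be of the same order as the glow-discharge example discussed in the next section:

```python
# Physical constants
q = 1.602e-19   # electron charge magnitude, C
m = 9.109e-31   # electron mass, kg

# Illustrative inputs (hypothetical, not measurements from this apparatus)
E_x = 5000.0    # longitudinal (ohmic) field, V/m
E_y = 300.0     # Hall field, V/m
B_z = 0.01      # magnetic field, T (100 gauss)
j_x = 10.0      # longitudinal current density, A/m^2

# Hall angle: E_y / E_x = q * B_z / (m * nu)  ->  collision frequency nu
nu = q * B_z * E_x / (m * E_y)

# Resistivity: rho = E_x / j_x = m * nu / (q**2 * n)  ->  electron density n
rho = E_x / j_x
n = m * nu / (q**2 * rho)

# Drift speed from j = q * n * dv
dv = j_x / (q * n)

print(f"collision frequency nu ~ {nu:.2e} 1/s")
print(f"electron density n ~ {n:.2e} m^-3")
print(f"drift speed ~ {dv:.2e} m/s")
```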
# Hall Effect in Glow Discharge Columns In the discussion above we assumed that the density of electrons was uniform in space. For homogeneous systems in thermal equilibrium this is a reasonable assumption. However, this is not case for the plasma. The free electrons in the plasma are produced in ionizing collisions with gas atoms or molecules confined at a low pressure in a long narrow discharge tube. At the low current densities under consideration here, recombination of the charged particles takes place almost exclusively at the tube walls. This means that the electrons (and positive ions) produced in the gas must find their ways to the walls before they disappear. The resulting electron density distribution is not uniform, but has a maximum in the center of the tube, and falls nearly to zero at the walls. When a transverse magnetic field is applied the electrons are redistributed to counteract the Lorentz force, and a Hall field is set up. This Hall field is not uniform across the plasma. Therefore, the Hall voltage, which is the integral of the Hall field, is not simply the product of the Hall field and separation of the electrodes. However, it turns out that Hall voltage that you actually measure is, in fact, exactly one-half the expected value, for the case of slab geometry. The proof of this can be found in the book by Golant et al (SEE Golant Reading Materials). Armed with this knowledge, you can use the measured Hall voltage and longitudinal current density to obtain the electron density and scattering rate. As a quantitative example, we can operate a glow discharge at a current density of 10 A/m2 in a mixture of 1% argon in helium at p = 30 torr pressure and find a Hall field EH = 300 V/m when B = 100 gauss, and an ohmic field Eo = 5000 V/m. This tells us that the electron drift speed is ue = 30,000 m/sec, and therefore the free electron density is approximately ne = 2 x 1015 m-3. The gas density calculated from the ideal gas law, on the other hand, is Ng = 1024 m-3, so that the degree of ionization ne/Ng is only about 2 x 10–9. Note: Measure the distance between the pole pieces! While we have learned a lot, notice that we don't obtain any direct information about the thermal velocity of the electrons in the plasma. However, we can make some indirect inferences. We know the electron collision frequency and the density of the atoms that the electrons are colliding with. According to kinetic theory, these are related by the scattering cross section, that is, $\nu \cong N_g \sigma v$. In our experiment using He gas the cross section is approximately equal to 3.8 x 10–20 m2. Using this value for cross section and the expression above, the mean electron speed and hence the mean electron energy, or temperature can be estimated. The collision rate equation gives the average speed and the temperature is related to the mean square speed. For the MB distribution the mean square speed is not equal to the square of the average speed and you should take this factor into account. See Kittle for discussion on this point. If you do your calculations correctly, you should obtain some rather large electron speeds and temperatures. The electron temperature Te can exceed 10,000K. You might be wondering if you should keep your hands off the glass tube that encloses the plasma. In fact, the glass is only barely warm to the touch. What is this telling about the nonequilibrium state in the plasma? 
A hint is that it should be related to the efficiency of energy transfer from the electrons to the He atoms. Can you explain why the thermal contact between the free electrons and the background gas is so weak? It can also be shown that the energy in random motion of the free electrons has an upper bound given by $Energy = \left ( \frac {1}{2} \right ) m \left \langle v^2 \right \rangle < \left ( \frac {1}{2} \right ) M_g \Delta v^2$ (8) where Δv denotes again the drift speed and Mg is the mass of the gas atom or molecule. The upper limit is reached when electron energy loss due to inelastic collisions (excitation and ionization) is negligible compared to that caused by elastic collisions. A substitution of the mass of the helium atom and the electron drift speed of our example into eq. (8) yields an energy upper limit of 18 eV. Although 1 eV is very much higher than the thermal energy of the gas, it is much lower than most ionization energies. The ions produced in our gas mixture are primarily argon, which has an ionization energy of 15.8 eV. How can ionization take place when the mean electron energy is much lower than the energy required to liberate an electron from the gas atoms? You may have noticed that in the steady state, a local rate balance between ionization and recombination at an electron temperature of 10,000K and at the densities quoted here, the degree of ionization should be very high rather than the very low level observed. The explanation is that the loss of ions in these discharges at low gas pressures is controlled by transport to and recombination at the relatively cool walls. This is a second and equally important process causing pronounced deviations from equilibrium conditions. The matter of energy balance and of charged particle production, transport, and loss are thoroughly discussed in the texts on ionization phenomena in gases. The steady state condition for the column of a glow discharge is a delicate balance between several non equilibrium processes. It is not surprising that such columns display a variety of oscillations and non-uniformities that sometimes interfere with our Hall effect observations and therefore deserve some attention. # Instabilities, Oscillations, and Striations Plasmas are in general unstable entities. If plasmas were intrinsically stable, plasma fusion reactors could already be supplying commercial electricity, but they are not. The stability of a plasma is dependent on almost anything imaginable, not only the obvious: current, voltage, pressure, magnetic field and mass flow rate; but potentially also the non-obvious: contaminant gases, ambient temperature (untested), ambient light (not observed), and time (very important). It is characteristic of all plasmas that they can support many types of waves. Some of these waves grow spontaneously from thermal fluctuations to large amplitudes. In that case they are called instabilities. Most instabilities in our discharge are driven by the electric current, somewhat like whistle tones are excited by air streaming through pipes. A small change in current or in gas density can change the oscillatory behavior completely. Therefore, pay attention and don’t change the gas pressure or flow in the middle of taking a set of data. You should have read the articles on Striations Observations reveal that oscillation frequencies are in the multi-kilohertz range, with amplitudes sometimes in excess of a few volts. 
It is therefore important that measuring devices be protected by low-pass filters consisting of RC input circuits with about 0.2 sec time constants. Quiescent (oscillation-free) operation is a necessity for all measurements. You may have to invest some time finding a stationary state, by varying the current, gas pressure, and flow rates. Patience is needed. The gas is a mixture of helium with 1% of argon and 0.1% of nitrogen. The argon is most easily ionized and supplies most of the free electrons, while the nitrogen is added to make the entire mixture relatively insensitive to contamination by air and other residual impurities. A small flow rate is required to keep the mixture constant. Too rapid a flow causes turbulence in the gases. Under many operating conditions, particularly at the lower pressures, a striking stationary structure appears in the visible glow consisting of alternating bright and dim regions or striations. These can be considered as large-amplitude ionization waves. The amplitude modulation of the luminous intensity, which may approach 100%, is much larger than the axial variation of electron density which rarely exceeds 20%. It turns out to be primarily caused by the variation in the tail of the electron energy distribution which is responsible for most of the visible radiation. The effect on our Hall voltage is therefore only minor, and we can ignore the presence of stationary striations for our purposes. The effect on Eo is also minor if the distance between probes is made large enough to permit averaging over several wavelengths of these striations. # Apparatus and Equipment 1. The equipment required for this experiment is contained in the Plasma Hall Effect rack. The labels on the rack controls are intended to faithfully reflect the descriptions in this text, though schematic labels in the text and on the gas flow diagram are NOT attached to the physical valve handles. Rather, physical valve handles are labeled on the rack corresponding to their functional description, such as “Gas Input Adjust” (for valve v2). 2. The gauss meter, an oscilloscope (any scope with a 1M input will do), the gas supply cylinder (right side) and the vacuum pump (under the bench) are the required components that are external to the Plasma Hall Effect rack. 3. The schematic diagrams of the gas flow system, the plasma electrical measurement system (including supplies, meters, and the discharge tube), and the magnet circuits, are included in the following sections. These systems are functionally independent though they all affect the plasma and its characteristics. 4. The impedances of the supplies and the meters are explicitly identified to assist the experimenter in identifying potential measurement errors. Like most physical experiments utilizing electrical measurements, the results depend on the interaction of a non-ideal physical construction (the discharge tube containing the plasma) with non-ideal electrical instruments. The thoughtful experimenter is mindful of these interactions, taking care to minimize the deleterious effects on the measurements. The instruments perturb the physics. # Equipment Notes General, the gas and the plasma: The vacuum and gas supply system, including the line between the inflow needle valve and the high-pressure regulator, is leak-free enough to permit a base vacuum pressure below 2 mm of mercury (2 Torr) pressure. 
However, due to back-pressure after shutting off the pump and closing the valves, pressure will build up in the system at a rate of ~1Torr per 11 minutes, which is insignificant. So, when shutting off the system and returning at a later time, don't be alarmed if their is a high pressure reading on the gauge. Glow discharges are sensitive to contamination, especially by organic vapors, and this plasma discharge tube is no exception. Considerable flushing (flowing gas through the discharge tube with the plasma established) helps establish a stable plasma discharge characteristics. The longer the duration of a plasma discharge, the more stable and reproducible the discharge conditions become and the wider the discharge operating range in pressure (up to 30 Torr) and current (below 0.3 mA). The settling time of the plasma that is to a stable current-voltage, and noise operating point may be several minutes. As a result of this long settling time, the plasma control parameters: pressure, flow rate, and discharge voltage need to be changed slowly while monitoring the plasma response parameters discharge current, plasma physical appearance (striations), discharge current/voltage noise at the discharge monitor. If the control parameters are changed too quickly, one of the most common and annoying results is that the discharge extinguishes and the control parameters need to be returned to a starting point. As a guideline, changes to pressure should be slower than 1 Torr every 10 seconds and the voltage changes 100V in 5 seconds. The color of the glow is sensitive to gas composition, which for the helium/argon/nitrogen mixture used in this experiment, is pink. . Although the system is leak tight, it is desirable to keep the gas pressure upstream of the input valve (labeled “Gas input adjust”)slightly above atmospheric pressure in order to prevent air from entering the gas system between the input valve and the gas regulator. This higher pressure has the disadvantage of operating with a large pressure drop across the input valve, thereby making the plasma discharge pressure very sensitive to the setting of this valve. On the other hand, once the input valve conductance is set, a nearly constant gas flow rate has been established. How to set a gas pressure and gas flow rate: The drawing below illustrates the complete plasma discharge gas flow system, which is similar to other low pressure gas delivery systems. A gas source (He/Ar/N tank) supplies gas at about 5 psi (~1000 Torr at p2). The discharge tube pressure (p3) is nominally 10-30 Torr so that valve v2 conducts gas a nearly constant rate determined by the ~1000 Torr differential pressure valve v2’s conductance G2. G2 is a needle valve with relatively low conductance. This procedure establishes a constant gas flow rate into the discharge tube. The rotary-vane vacuum pump has a very high evacuation capacity and would require a significant gas flow through v2 to achieve the proper discharge pressure at p3. It is common practice to insert a “throttling” valve (v4 & v5) between the vacuum pump and the discharge tube to allow reduce the effective evacuation capacity, effectively throttling the vacuum pump. The throttling valve allows a pressure differential between the vacuum pump (~0 Torr) and the discharge tube (p3, 10-30 Torr). When the pressure in the discharge tube is stable, the gas inflow equals the gas outflow. With valve v5 closed then (p2-p3)*G2=(p3-0)*G4. With p2=1000 Torr and p3=10 Torr, the ratio of valve conductance, G4/G2~100. 
That is, the conductance of valve v4 has to be 100 times that of valve v2, or practically stated, needle valve v2 will be almost closed while valve v4 will be significantly open. To avoid damaging the valves in the Hall plasma system and as general information, some guidelines should be observed. All the accessible valves are right handed, that is they close when turned clockwise. Only hands without tools should be used to adjust valves. Valves v2, v3, and v5 can be closed with one finger and a thumb. Valve v1, on the top of the gas cylinder requires a full grip to open and close (and considerable torque), but only needs to be opened and closed about ¼ turn. It is very important to close v1 when the experiment is not in use. Each gas cylinder is a ten-year supply. So why is there a spare cylinder of gas? The gas regulator, r1, attached to the gas cylinder, and controlled by the large black knob, does not normally need to be adjusted. When v1 is open, p1 should indicate 100-2000 psi and p2 indicate 4-6 psi. If for some reason p2 is not 4-6 psi when v1 is open, first tap on the regulator r1 and gauge p2 with your knuckle. If p2 still does not indicate a pressure of 4-6 psi, the pressure at p2 may be increased by slowly turning the black knob (r1) clockwise. If the pressure is too high, a counter-clockwise rotation of the black knob of r1 will reduce the pressure at p2 ONLY if the gas has someplace to go. Therefore, before turning the black knob to adjust p2, it is required to have the vacuum pump on and v2 and v4 both open ¼ turn. There are two techniques for establishing a flow and a pressure with this gas flow configuration. The first technique is the more traditional of the two: set flow (v2) then set pressure (v4) at constant flow. The second technique is a non-constant flow required to reach discharge pressures in the 12-35 Torr range and requiring a higher conductance than v4 can provide using the first technique. The second technique requires that the discharge is established AND then the flow AND pressure are increased. With v5 and v2 closed, open v4 3 turns (3.5 turns is all the way open) and wait about 30 seconds, p3 should read about 1 Torr. Technique 1: Open v4 by ½ turn, open and adjust v2 until p3, “discharge pressure”, indicates 10 Torr. This is the low gas flow set point. Do not change v2 as the flow rate is set. Open v4 2 turns and enable the discharge voltage. Technique 2: Open v4 3 turns, open and adjust v2 until p3, “discharge pressure”, indicates 15 Torr. This is the fixed conductance point of v4. Do not adjust v4. The gas flow through v2 and v4 is proportional to p3/G4. Enable the discharge voltage. To increase the discharge pressure, open v2 which will increase p3 and the flow rate proportionally. How to set a gas pressure: 1st pump out the system with the coarse pump-out valve open, on the left side of rack, then open the Gas Input Adjust valve on the right side of rack one turn, with the main gas cylinder tank closed. When the vacuum meter reads near zero, close the main coarse pump-out valve (Throttle Coarse) and open all the way the needle valve (Throttle Fine) on the pump-out side of the system (NOTE: these two valves are in parallel). Now open the cylinder tank main valve and adjust the pressure meter to read above 1 lb. Time to adjust the gas inlet valve on the right side of rack to the lowest pressure you want for the first pressure set point. Now adjust the pump-out needle valve for the other pressures you want for the experiment. 
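To see what the flow-balance relation above implies for the discharge pressure, a tiny sketch with hypothetical conductances and supply pressure:

```python
# Steady state: inflow equals outflow, (p2 - p3) * G2 = p3 * G4,
# so the discharge pressure is p3 = p2 * G2 / (G2 + G4).
p2 = 1000.0   # supply-side pressure, Torr (illustrative)
G2 = 1.0      # inlet needle-valve conductance, arbitrary units (hypothetical)
G4 = 100.0    # throttle-valve conductance, same units (hypothetical)

p3 = p2 * G2 / (G2 + G4)
print(f"discharge pressure p3 ~ {p3:.1f} Torr")  # ~9.9 Torr when G4/G2 = 100
```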
Acceptable discharge conditions -- the oscilloscope showing no fluctuations larger than 0.1 V -- can be produced at pressures between 3 torr and 35 torr, with currents between 0.5 and 2 mA. At the low-pressure end of this range stationary striations are very pronounced and can interfere with the measurements. So keep the current low if possible. Some modest stationary striations (relatively small intensity modulation) are visible over most of the range but are mild enough not to distort the reading of Eo and EH by more than 10%. The conditions can be kept reproducible by maintaining a very small flow rate which replaces the gas content of the tube about every five seconds. It is essential that the discharge power supply have a floating ground connection so that the Hall probes can be kept close to ground potential by means of the potentiometer bridge arrangement. See the Circuit Diagrams. The reason for this requirement is the following: the effective contact impedance between a small probe and a low density plasma is several mega-ohms. It is also nonlinear, increasing rapidly when drawing primarily ion current from the plasma. The leakage resistance of standard cable connectors and feed-through insulators is between 109 and 1010 ohms. If either the cathode or anode were at ground potential, the probes and Hall measuring circuits would be at positive or negative potentials of about one kilo-volt, and unwanted probe currents would be quite large. The biasing potentiometer, which adjusts the potential of the probes, needs adjustment each time a discharge parameter is changed. When this precaution is taken, good linear relationships are found between the Hall probes and the B-field up to at least 200 gauss in either direction. Observed deviations from the linear relationship given in eq. (4) at higher magnetic fields are presumably caused by magnetically induced distortions of the discharge. When the field is too strong the visible appearance of the glow between the pole faces is noticeably changed, indicating excessive deviations from axial symmetry. # Plasma Discharge and Associated Electrical Instruments The plasma discharge and associated electrical instruments: The discharge tube containing the plasma is powered through a 0-3 kV supply “discharge voltage” that is voltage referenced to the chassis (earth) ground through a resistor divider, the ground-to-discharge voltage offset. This offset voltage as well as the potentials of the discharge electrodes 1 and 3 can be measured by means of a high impedance (109 ohms, < 100 pF), bipolar voltmeter with full scale voltages of 2500 V to 250 mV as illustrated below. Electrode 4 is capacitively coupled to an external oscilloscope to monitor the discharge stability via the “discharge monitor” port. The voltmeter, pictured as an ideal amplifier with 109 ohm input impedance, can be connected to either of 4 connections via a low-leakage current relays. One connection is to a 25 V calibration voltage. The other 3 connections are to electrodes 1, 3 and the chassis ground. The return voltage to the voltmeter, or the reference voltage, is electrode 2. Note that the connection to electrode 3 is via a 1010 resistor. The voltmeter may be zeroed by selecting the “zero” range on the voltmeter range selection switch and rotating the “meter zero” knob. This is the only configuration in which the voltmeter should be zeroed. The discharge current should be zero during this operation. 
Another point worth noting is that the discharge voltage is connected to the discharge tube via 0.5M ohms. This resistance effectively limits the discharge current to 6 mA if the plasma resistance were 0 ohms. However, as the minimum voltage across the discharge tube required to sustain plasma is about 1.5 kV, the discharge current is effectively limited to about 3 mA. # Procedure ## Turning on the system 1. Check all electrical and gas flow connections. All gas valves must be closed, all electrical power off. 2. Turn on the pump. 3. Open the pump-out valve (Throttle Valve Coarse) (see Fig. 2). Vacuum gauge should go down towards 2 Torr. If not, call a Staff person and ask for help. 4. Open the main valve and coarse valve on He supply tank. ASK IF YOU DON'T UNDERSTAND WHAT EACH KNOB OR HANDLE DOES, OR WHAT THE GAUGES MEASURE. The high-pressure gauge should read between 100 and 2,000 lb/in2 (psi) pressure. If the tank pressure is below 50 lb/in2, get a new bottle. (Call for assistance from the staff) 5. Close the pump-out valve (Throttle Valve Coarse). The low pressure in the discharge tube should not rise rapidly. If it does, there is a leak. Get help. 6. Open the outflow valve (Throttle Valve Fine) 1 turn. 7. Open the "Gas Input Adjust" needle valve slowly 1/4 turn, or until pressure rise is very noticeable. At the same time, adjust the regulator valve of the gas cylinder - set to the blue mark on the left-hand regulator gauge. 8. Adjust the pump-out needle valve (Throttle Valve Fine) to obtain steady conditions between 10 and 30 Torr pressure. Once a small flow rate is established it is best not to touch the input valve again, but to do all regulation with the outflow needle valve (Throttle Valve Fine). It permits much finer adjustment because the pressure drop across it is much smaller. 9. After a few minutes of steady flow, turn on the high voltage to about 2,500 V. There will be a discrepancy between the voltage value read on the kilovolt meter and the value set on the knobs, but the turnable knobs used to set the high voltage is the true indicator of actual voltage being applied, and the meter is just a reference. There should be a glow, purplish pink in color, and the ammeter should read about 1 mA. 10. Turn on the oscilloscope, to monitor the fluctuations in the plasma potential by means of the grounded probe. Under proper conditions, oscillations should be much smaller than 0.1 V, perhaps 50 mV peak-to-peak. If oscillations are too large, change the gas pressure, flow rate, high-voltage setting, etc. until quiescent operation is found. If unsuccessful, turn discharge off for one minute, then start it again. The start-up voltage may have to be higher than the operating voltage, particularly at the higher pressures. Repeat the process a few times if necessary. If oscillations are still too large, get help. 11. If all is well, turn on the probe circuits with the voltmeter range setting at 250 V or 2500 V. Leave the magnet power off. DO NOT USE THE DVM. 12. Adjust the discharge ground with the potentiometer so that probe #2 floats near the ground potential using dials on the front panels. First, make sure the operational amplifier is properly adjusted to give the correct reading. Set 'Scale' to zero, then adjust 'Meter Zero' until the adjacent voltmeter gives a zero reading. Set the 'Select' dial to 'Ground-to-2' and 'Scale' to the appropriate scale. 
Then, use the 'Coarse' and/or 'Fine' dials of the potentiometer (labeled 'Ground to discharge voltage offset' on the front panel) to bring probe #2 to ground. If conditions are steady, this probe floating potential will not drift by more than a few volts (a fraction of one volt on our meter) in several minutes. Every change in conditions requires a potentiometer adjustment, however.

## Measurements

1. Id and Vd: The purpose of these first measurements is to illustrate some of the interesting properties of a plasma discharge. For a range of discharge tube pressures between 15 and 30 torr, measure the discharge current, Id, and the discharge voltage, Vd, as a function of high voltage. The discharge voltage is given by the potential between probes 2 and 3. Note that there is a 10^10 ohm resistor in series with probe 3, to keep the current flow around 10^-8 A. Before you take data, play around with the high voltage, discharge current, gas pressure, and gas flow, in order to get a feel for the limits on the parameters. When taking data, a typical procedure is to set the pressure to 15 torr, set the high voltage somewhat higher than it takes to keep the discharge going properly, measure the current, and record it and the voltage. Then increase the current by adjusting the high voltage, and again record the current and voltage. Repeat until you have enough points to plot a curve. Now change the pressure to 20 torr, and repeat. Continue until 4 or 6 pressure plots are obtained. Think about what you are doing, and what other data you might need. What do you need to know to go from current to current density, from power supply voltage to discharge tube voltage, etc.?

2. B vs. Im: Using a model 5180 gaussmeter, with the HIGH VOLTAGE SUPPLY OFF, measure the magnetic field strength as a function of magnet current for both directions of magnet current flow (measure both the FORWARD and REVERSE directions of magnet current flow – just flip the COIL POLARITY switch into the correct position). On the gaussmeter: to select AUTO ZERO operation, press the ZERO pushbutton; the unit automatically returns to normal operation. To select AUTO RANGE operation, press the SHIFT pushbutton followed by the RANGE pushbutton; press SHIFT followed by RANGE again to exit Auto Range mode. To select MANUAL RANGE operation, press the RANGE pushbutton, use the UP (5) and DOWN (6) arrow pushbuttons to select ranges, and press the RANGE pushbutton to return to normal operation. For the complete manual see [Gausmeter]; for an interactive manual see [HAL 5180Manual.exe]. See the Circuit Diagrams for more details from the schematic. Plot B as a function of Im. Does the magnet display significant hysteresis? See the Hysteresis page [| Hysteresis]. Is the B-I relationship linear? In this experiment, errors owing to hysteresis are small compared to other errors; do not spend too much time calculating and explaining them or the phenomenon of hysteresis.

3. EH vs. B: For a range of pressures between 15 and 30 torr measure the Hall field between probes 1 and 2 as a function of magnetic field (note that probes 1 and 2 are about 8 mm apart). Take data for the full range of magnet current. Remember to keep the probes near ground potential by adjusting the potentiometer. Plot your results for each pressure. Is EH linearly dependent on B?
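As an editorial aid (not part of the original instructions), here is a minimal Python sketch of how the linearity check and slope extraction might look, assuming eq. (4) has the familiar form EH = ve B so that the slope of the linear region estimates the electron drift speed; the data arrays and the 300-gauss cutoff below are placeholders to be replaced with your own measurements.

```python
import numpy as np

# Placeholder data for one pressure; replace with your measured values.
B_gauss = np.array([25, 50, 75, 100, 150, 200, 300, 400], dtype=float)
E_H     = np.array([0.9, 1.8, 2.7, 3.7, 5.5, 7.4, 10.8, 13.0]) * 1e2   # V/m

B_tesla = B_gauss * 1e-4                 # 1 gauss = 1e-4 tesla
linear  = B_gauss < 300                  # keep only the (roughly) linear region

slope, intercept = np.polyfit(B_tesla[linear], E_H[linear], 1)
v_e = slope                              # m/s, if E_H = v_e * B
print(f"electron drift speed ~ {v_e:.2e} m/s")
```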
For the linear parts of your data (usually B less than 300 gauss) calculate quantities describing the electron gas (ve, ne, νe, $\left \langle \sigma v \right \rangle$, Te, etc.). Are your results reasonable? How do these quantities change with pressure? Explain your results.

## Glow Discharge Structure

The figure below shows the voltage-current curve of a self-sustaining glow discharge. Notice the Normal Glow region, which is linear; this is the region where probes 2 and 3 are located, and the voltage here should be linear and constant.

## Shutting off the system

1. Shut off the probe circuits and oscilloscope. Turn off the magnet power and the discharge high-voltage power supply.
2. Open the pump-out valve.
3. Close the main valve on the helium gas cylinder.
4. Open the inflow valve wide, until the high-pressure regulator gauge goes to about 1/2 Torr (it will not pump out all the way to zero). Then close the inflow valve.
5. Close the outflow valve. Turn off the pump.
6. Shut off all valves not closed already.
7. As stated above, there will be a slow back-pressure that builds up in the tube, but do not worry about this. The system is now shut down; call for help if there are any problems. The main thing to be careful of is putting a pressure greater than atmospheric in the discharge tube. There is an automatic relief valve, but if it fails to operate -- as it has in the past -- the glassware explodes.

# References

1. S. C. Brown, Introduction to Electrical Discharges in Gases, Wiley, New York (1965). \#QC711.5763
2. R. N. Franklin, Plasma Phenomena in Gas Discharges, Clarendon Press, Oxford (1976), page 48.
3. C. Kittel, Introduction to Solid State Physics, 4th Ed., Wiley, New York (1971), pp. 287-289.
4. C. Kittel and H. Kroemer, Thermal Physics, Freeman, San Francisco (1980). \#QC311.5.K52.1980
5. P. Lorrain & R. D. Corson, "Electromagnetic Fields and Waves", Section 7.3, pp. 299-301.
6. L. Pekarek, "Ionization Waves (Striations) in a Discharge Plasma", Soviet Physics Uspekhi 11, 188-208 (1968).
7. F. Chen, Introduction to Plasma Physics and Controlled Fusion, Chapter 1 & Chapter 5, Vol. 1, 2nd ed., Plenum Press (1984).

Reference #1, Lyman Spitzer, should be read to understand the basic concepts of plasma discharges. Other reprints and reference materials can be found on the Physics 111 Library Site.
auto_math_text
web
# Determine if a number is divisible by 13 (without using 13 itself) [closed] Your challenge, should you choose to accept it, is to create a function or a program that outputs "yes" if a given number is divisible by 13 and outputs "no" if it isn't. Rules: - You're not allowed to use the number 13 anywhere. - No cop-out synonyms for 13 either (like using 15 - 2). - Bonus points will be awarded for not using modulus, additional bonus for not using division. Scoring: - Your score will be the number of bytes in your code (whitespace not included) multiplied by your bonus. - If you didn't use modulus, that bonus is 0.90; if you didn't use division, that bonus is 0.90. - If you didn't use either, that bonus is 0.80. - The lower your score, the better. The input will always be an integer greater than 0 and less than 2^32. Your output should be a simple "yes" or "no". Clarifications: - Using some roundabout method of generating the number 13 for use is acceptable. Simple arithmetic synonyms like (10 + 3) are not allowed. - The function or program must literally output "yes" or "no" for if the given number is divisible by 13. - As always, clever solutions are recommended, but not required. • is 'true' or 'false' a valid output? Feb 23, 2012 at 19:09 • JavaScript (27 chars) function f(n){return "yes"}. This will return 'yes' for all the numbers that can be divided by 13 Feb 23, 2012 at 21:59 • "(whitespace not included)" always have been resulted in one of these two situation : a program encodes its content in whitespace, or a program written in Whitespace (programming language). Feb 23, 2012 at 22:29 • Using some roundabout method of generating the number 13 for use is acceptable. How do you determine what is "roundabout enough"? Apr 17, 2014 at 15:57 • @Rusher To be honest, I didn't notice that it was 2 years old, it just recently became active. As for your suggestion, I'd rather not ninja-change as non-OP a question with 2 pages of answers.. Apr 17, 2014 at 18:59 # Powershell, 43.2 param($a)if((1.2307692307*$a)-band15){"no";exit}"yes" Ungolfed param($a) if((1.2307692307 *$a) -band 15) {"no";exit} "yes" Magic number is of course 16/13. Multiplies by this, checks if the last four bits are all zero. ## GAP, 45 bytes * 0.8 = 36 Instead of giving one more solution that computes Gcd(26,n), I multiply with the one of the field with 169 elements (which has characteristic 13) and get zero iff n is divisible by 13. If I wanted a truthy result, I could just use IsZero, but to get a yes/no-String I turn that result into an ordinary integer, compute 0 to that power, add one and use that as index to a string list. Normally I could say list[index], but that doesn't work with list literals. However there is another short way: n->(["no","yes"],1+0^Int(n*One(GF(169)))) # Befunge, 121 bytes * 0.8 = 96.8 &:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1-:!#v_1- # yes"< > > > > > > > > > > > >"on",,@,,," Uses the wraparound nature of Befunge's code--the top row repeatedly decrements the input number until reaching 0. When it reaches 0, it takes the next v down to the bottom row, where it either hits the one < or one of the twelve >s. The bottom row outputs "yes" if executed from right to left, and "no" from left to right. # R, 79 Bytes f=function(n)ifelse(any(1:n*as.integer(IQR(rnorm(100000,,10)))==n),"yes","no") Relies on the interquartile range of a standard normal being a bit higher than 13. 
ungolfed f=function(n) { number=as.integer(IQR(rnorm(10000,,10))) seq=1:n*number if (any(seq==n)) { return("yes") } else {return("no")} } # JavaScript, 113 Bytes var input = document.querySelector("input").value;if(input/parseInt(atob("MTM=")) != 0) {alert(0);}else{alert(1)}
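For readers puzzled by the PowerShell answer's magic constant, here is a small editorial check (in Python, and using the literal 13 only for verification, so it is not itself a valid entry): if n is a multiple of 13, then n·16/13 is a multiple of 16 and its low four bits vanish, which is exactly what the `-band 15` test detects.

```python
def yes_no(n: int) -> str:
    # multiply by 16/13; multiples of 13 land exactly on multiples of 16
    return "yes" if round(n * 16 / 13) & 15 == 0 else "no"

# brute-force comparison against the ordinary modulus test
assert all((yes_no(n) == "yes") == (n % 13 == 0) for n in range(1, 100_000))
```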
auto_math_text
web
espressomd-users

## Re: [ESPResSo] General Suitability of Espresso for Fine Particles

From: Burkhard Duenweg
Subject: Re: [ESPResSo] General Suitability of Espresso for Fine Particles
Date: Tue, 09 Oct 2007 14:28:12 +0200
User-agent: Mozilla Thunderbird 1.0.7 (X11/20050923)

Hello, Lorenzo Isella wrote:

Dear All, I think I now have a smattering of the basics of Espresso and I have to start thinking how and if to use it for my research. I have browsed the web and found espresso applications for polymers, ions, proteins, and so on, but my task is really to simulate the Langevin dynamics of exhaust fine particles (think of them as carbonaceous particles whose diameter ranges from 2 to 600 nanometers, the larger ones created by the agglomeration of the small ones) to investigate agglomeration. These particles are typically suspended in air, there may or may not be convection from a carrier flow. People typically assume that their motion is ruled by a Langevin equation, and that these particles stick when they collide, giving rise to complicated structures I would like to investigate.

(1) First of all, am I right to say that the dynamics in the Langevin thermostat as implemented in Espresso simulates stochastic particle paths? This is my understanding of the Langevin thermostat in general, but I am also obviously concerned about the implementation.

==> Yes. Actually, the diffusion is in momentum space. You should distinguish between a Langevin equation of the type

$$\frac{dx}{dt} = \mu f + \text{noise}$$

(that would only live in real space) from

$$\frac{dx}{dt} = \frac{p}{m}, \qquad \frac{dp}{dt} = F - \Gamma \frac{p}{m} + \text{noise},$$

where the noise couples to the momentum, and the particle starts to diffuse only on longer time scales t >> m / \Gamma. It is the latter case which is implemented in Espresso as the Langevin thermostat.

(2) Can e.g. the fene or the harmonic potential be twisted to simulate this "sticking upon collision"? Basically I need a strong binding potential with a short interaction range, the interaction range being identified with the particle radius. If not, is there any conceptual problem in tabulating it?

===> I think you should twist the LJ potential, and stochastically add a FENE bond once you (or your random generator) have decided that two particles stick. I don't know about implementational details, but I think in principle this should be doable. Likewise, tabulated potentials should be implementable. All this will probably require some coding, though.

(3) Back to the particle (stochastic) trajectories: the treatment of the friction and noise terms is particularly delicate. In my case, this noise stands for the effects of air molecules kicking the particles. Depending on the air temperature, the air mean-free path could be larger or smaller than the particle radius and this has to be taken into account. Can I "tune" the noise term in the Langevin equation?

==> Note that the temperature is a parameter in the simulation. Higher temperature means a larger mean-square noise amplitude. Therefore the effect which you mention should be automatically included. The air mean free path is irrelevant. It is rather important how much momentum is transferred onto the particle, and here the mass ratio between particle and air molecules is much more crucial. Let me add: If you think the convection of surrounding air is important, then you can do this via coupling to a lattice Boltzmann fluid.
Then you also get the hydrodynamic correlations between the particles right. See P. Ahlrichs and B. Duenweg, Journal of Chemical Physics 111, 8225 (1999).

(4) Related to the previous questions: let us say you have a set of single particles, each of them separately obeying a Langevin equation with a certain noise. After colliding and giving rise to a certain agglomerate, the noise acting on the agglomerate will NOT in general be the sum of the noises on the individual particles, due to shielding effects (inner particles may be difficult to reach by air molecules). Can this be somehow accounted for in Espresso?

===> You could, for example, use the DPD thermostat (see, e.g., T. Soddemann, B. Duenweg, and K. Kremer, Physical Review E 68, 046702 (2003)), and then exploit that the friction is a pairwise property and can be different for each particle pair - as was done in Jacqueline Yaneva, Burkhard Duenweg and Andrey Milchev, Journal of Chemical Physics 122, 204105 (2005). In your case, you would reduce your friction as soon as the particle pair becomes bonded. Or you simulate with explicit air and the DPD thermostat. Then the effect which you mention would automatically be included. Again I cannot tell how easily this is realized in Espresso. You should however be warned that you leave the realm of well-defined equilibrium statistical mechanics as soon as you start to hook up particles irreversibly during the course of the simulation. But you are probably simulating a non-equilibrium system anyways.

Regards Burkhard.

-- /------------------------------------------------------------------\ | Burkhard Duenweg e-mail: address@hidden | | Max-Planck-Institut Phone: +49-6131-379-198 | | fuer Polymerforschung Fax: +49-6131-379-340 | | Ackermannweg 10 Home Phone: +49-6721-186221 | | D-55128 Mainz | | Germany | \------------------------------------------------------------------/
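To make the distinction quoted above concrete, here is a minimal editorial sketch (plain Python, not ESPResSo code) that integrates the momentum-space Langevin equation with a simple Euler-Maruyama scheme; all parameter values are arbitrary illustrative choices.

```python
import numpy as np

# dx/dt = p/m,  dp/dt = -Gamma*(p/m) + sqrt(2*Gamma*kT)*xi(t),  with F = 0
m, Gamma, kT, dt, steps = 1.0, 1.0, 1.0, 1e-3, 100_000
rng = np.random.default_rng(0)

x, p = 0.0, 0.0
for _ in range(steps):
    p += -(Gamma / m) * p * dt + np.sqrt(2.0 * Gamma * kT * dt) * rng.standard_normal()
    x += (p / m) * dt

# For t << m/Gamma the motion is ballistic; for t >> m/Gamma it is diffusive,
# with <x^2> ~ 2*D*t and D = kT/Gamma (fluctuation-dissipation).
print(x, p)
```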
auto_math_text
web
# Volume 4, Number 2 o - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - o - - - March 15, 1997 - - O P - S F N E T Volume 4, Number 2 - - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Editors: - - Tom H. Koornwinder thk@wins.uva.nl - - Martin Muldoon muldoon@yorku.ca - - - - The Electronic News Net of the SIAM Activity Group - - on Orthogonal Polynomials and Special Functions - - - - Please send contributions to: poly@siam.org - - & address changes to: poly-request@siam.org - - - o - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - o Today's Topics: 1. Minisymposium on "Handbooks for Special Functions and the World Wide Web" 2. University of Wisconsin Centenary Conference, including minisymposium on special functions 3. VIII International Conference on Symmetry Methods in Physics, Dubna, Russia 4. Symposium on orthogonal polynomials in Sevilla 5. Report from Madras Workshop on Special Functions & Differential Equations 6. Bourbaki Lecture on Dunkl operators 7. Death of Louis Auslander 8. Reprinting of Olver's "Asymptotics and Special Functions" 9. Proceedings of 1995 Toronto Workshop on "Special Functions, q-Series and Related Topics" 10. The Ramanujan Journal 11. End of Problem Section in SIAM Review? 12. Compiled Booklist and list of Electronic Services (Wolfram Koepf) 13. ftp site for papers in Orthogonal Polynomials and Special Functions 14. Changes of Address, WWW Pages, etc. 15. Obtaining back issues of OP-SF Net and submitting contributions Calendar of events: see issue/topic: 1997 March 17 - May 30: MSRI program on Symmetric functions and representation theory 3.3 #5 May 22-24: Centenary Conference, including minisymposium on special functions in Madison, Wisconsin 3.4 #5 and 4.2 #2 June 3-7: First ISAAC Conference (International Society for Analysis, its Applications and Computation) in Newark, Delaware 4.1 #2 June 9-20: CRM Workshop on Algebraic Combinatorics, Montreal 3.5 #4 June 24-28: Continued Fractions and Geometric Function Theory, Trondheim, Norway 3.2 #8 July 14-18: SIAM 45th Anniversary Meeting, Stanford, California including Minisymposium on "Handbooks for Special Functions and the World Wide Web" 4.2 #1 July 14-18: 9th International Conference on Formal Power Series and Algebraic Combinatorics, Vienna, Austria 3.4 #7 July 28 - August 2: VIII International Conference on Symmetry Methods in Physics, Dubna, Russia 4.2 #3 September 22-26: VIII Simposium sobre Polinomios Ortogonales y Aplicaciones, Seville, Spain 3.5 #5 and 4.2 #4 Topic #1 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: Willard Miller, Jr. <miller@ima.umn.edu> Subject: Minisymposium on "Handbooks for Special Functions and the World Wide Web" The minisymposium will be held at the July 14-18 1997 SIAM Annual Meeting in Stanford as an initiative of the SIAM Activity Group on Orthogonal Polynomials and Special Functions. Dick Askey and Willard Miller are co-organizers. The principal handbooks on special functions, the "Bateman Project" and the NIST "Handbook of Mathematical Functions" are among the most useful, widely consulted technical volumes ever published, but they are now out of date, due to rapid research progress and to revolutionary changes in technology. The Minisymposium will feature talks by representatives of the groups that are proposing to update the Bateman Project and Abramowitz & Stegun, respectively, and talks with critiques of those CD-Rom and WWW handbook projects that are already available. 
The Minisymposium will conclude with a general discussion concerning the appropriate format and structure for handbook projects, and funding possibilities. Confirmed talks to date are the following: Speaker: Daniel W. Lozier Mathematical and Computational Sciences Division National Institute of Standards and Technology Gaithersburg, MD 20899 Title: Toward a Revised NBS Handbook of Mathematical Functions Abstract: A modernized and updated revision of Abramowitz and Stegun, Handbook of Mathematical Functions, first published in 1964 by the National Bureau of Standards, is being planned for publication on the World Wide Web. The authoritative status of the original will be preserved by enlisting the aid of qualified mathematicians and scientists. The practical emphasis on formulas, graphs, and numerical evaluation will be extended by providing an interactive capability to permit generation of tables and graphs on demand. --------------------- Speaker: Department of Mathematics University of South Florida Tampa, Florida, 33620 Abstract: We hope to update the Bateman Project to reflect the developments in the subject over the last fifty years and cover topics of importance that were not covered in the initial project. A presentation will be made as to the current state of this project, the need for it, and a sketch of the contents and the personnel to be involved. Suggestions, recommendations, criticisms and any useful input will be welcome and greatly appreciated. --------------------- Speaker: Department of Mathematics University of Wisconsin 480 Lincoln Drive Title: Handbooks of special functions through the decades Abstract: Handbooks of special functions have been some of the most widely used mathematics books. Features of some of the better ones will be described and some uses will be illustrated. Further information on the SIAM meeting may be found at the URL: http://www.siam.org/meetings/an97/an97home.htm or by email from meetings@siam.org Topic #2 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: Charles Dunkl <cfd5z@virginia.edu> Subject: University of Wisconsin Centenary Conference, including minisymposium on special functions Here is the most up-to-date list of speakers and titles for the minisymposium on special functions to be held during the University of Wisconsin Centenary Conference to be held in Madison, Wisconsin, May 22-24, 1997. (See OP-SF NET 3.4 #5). I do not have a time schedule, but there will be two sessions as indicated. The speakers have all accepted. Session 1 Gilbert G. Walter, University of Wisconsin-Milwaukee, "Wavelets and special functions" Steve Milne, Ohio State University, "Sums of squares, Jacobi elliptic functions and continued fractions, and Schur functions" Paul Nevai, Ohio State University, "Orthogonal polynomials on an arc of the unit circle" Alan Schwartz, University of Missouri-St. Louis , "Multivariate orthogonal polynomials, measure algebras, and differential operators" Charles F. Dunkl, University of Virginia, "Generalized Hermite polynomials and root systems" Session 2 George Gasper, Northwestern University, "q-Analogues of Erdelyi's fractional integrals and applications" "Leonard Systems and the q-Racah polynomials" Anatol N. 
Kirillov, CRM , Universite de Montreal, "Quantum Schur functions and quantum Schubert polynomials" Mourad Ismail, University of South Florida, "Toda lattice and orthogonal polynomials" Dennis Stanton, University of Minnesota, "Applications of cubic hypergeometric transformations" Topic #3 -------------- OP-SF NET 4.2 --------------- March 15, 1997 From: G. Pogosyan <symphys8@thsun1.jinr.dubna.su> Subject: VIII International Conference on Symmetry Methods in Physics, Dubna, Russia The VIII International Conference on Symmetry Methods in Physics will be held in Dubna, Russia during July 28 - August 2, 1997. This Conference, organized by the Bogoliubov Laboratory of Theoretical Physics of the Joint Institute for Nuclear Research, is dedicated to the 80th anniversary of Professor Smorodinsky's birth. One of the topics of the conference will be "Quantum groups and q-special functions". See the web page mentioned below for further topics. Plenary speakers are ( * means to be confirmed): M.Charlton (London) V.K.Dobrev (Sofia) H.D.Doebner * (Clausthal) J.P.Draayer (Baton Rouge) F.Iachello (New Haven) A.U.Klimyk * (Kiev) P.Kulish (St.Petersburg) V.B.Kuznetsov (Leeds) F.J.Lambert * (Brussels) I.Meshkov (Dubna) W.Miller Jr. (Minneapolis) P.Van Moerbeke (Louvain-la-Neuve) A.Yu.Morozov (Moscow) A.M.Perelomov (Zaragoza) L.O'Raifeartaigh (Dublin) N.Reshetikhin * (Berkeley) A.B.Shabat (Moscow) D.V.Shirkov (Dubna) G.Sudarshan * (Austin) It is possible to apply for giving a contributed talk. The Proceedings of the conference will be published. Persons interested in participating are kindly asked to apply before March 31, 1997 by fax or e-mail. For further information and for the application form see the Web page http://thsun1.jinr.dubna.su:80/~symphys8/ or send an email to the chairman of the local organizing committee Dr. George Pogosyan <symphys8@thsun1.jinr.dubna.su>. Topic #4 -------------- OP-SF NET 4.2 --------------- March 15, 1997 From: OP-SF Net editor <thk@wins.uva.nl> Subject: Symposium on orthogonal polynomials in Sevilla The VIII Simposium sobre Polinomios Ortogonales y Aplicaciones in Sevilla, Spain during September 22-26, 1997 was announced in OP-SF Net 3.5, Topic #5. Updated information can be found on the web page http://www.wis.kuleuven.ac.be/wis/applied/walter/sevilla.html For the reader's convenience we give here a list of plenary speakers: * D. Alpay: Orthogonal matrix polynomials and reproducing kernel spaces * Alexander I. Aptekarev (no title yet) * Richard Askey: Combinatorics and the classical orthogonal polynomials * Christian Berg: Indeterminate moment problems * Doron Lubinsky: Orthogonal polynomials for exponential weights * Andrei Martinez: On asymptotic properties of Sobolev orthogonal polynomials * Paul Nevai: The L^p (p>2) variant of Steklov's conjecture fails too * Evgeni Rakhmanov: On asymptotics of orthogonal polynomials of a discrete variable * Edward B. Saff (no title yet) * Herbert Stahl: Spurious poles of Pade approximants * Vilmos Totik: General orthogonal polynomials Registration should be done not later than May 30, 1997. 
Send email to 8spoa@obelix.cica.es Topic #5 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: Tom Koornwinder <thk@wins.uva.nl>, Walter Van Assche <walter@wis.kuleuven.ac.be> Subject: Report from Madras Workshop on Special Functions & Differential Equations: Chennai (Madras), India, January 13-24, 1997 A workshop on special functions and differential equations was held at the "Institute of Mathematical Sciences" in Chennai (formerly known as Madras) in India, from January 13 to January 24, 1997. The main organizer was K. Srinivasa Rao, who succeeded in getting about 75 participants, with approximately two thirds of them from India. The other participants were from Belgium (7), The Netherlands (3), Australia (2), New Zealand, Finland, Canada, Poland, Germany, Austria, France, and Italy (each 1), and some people from India working temporarily abroad. The background of speakers and participants was partly from pure and applied mathematics and partly from mathematical and theoretical physics. At least half of the lectures dealt with the first conference theme, special functions. The other theme of differential equations was approached both from the analytic side and from the numerical side. A few lectures merged both themes. All lectures were plenary and invited and had a standard length of 45 minutes. There were quite a few minicourses consisting of two or three lectures. The topics of these minicourses ranged over: Ramanujan's mock theta functions; connection and linearization coefficients for orthogonal polynomials; generalizations of Laguerre polynomials by adding (a derivative of) delta(x) to the weight function; applications of 3j, 6j and 9j coefficients to special function theory; irrationality and transcendence proofs of some famous numbers by approximation theory; special functions associated with root systems; creation operators for Jack and Macdonald polynomials; numerical methods for solving o.d.e.'s; parallel algorithms for solving o.d.e.'s; non-linear quantum mechanics; the uncertainty principle; nonlinear evolution equations. D.-N. Verma (of Verma module fame) gave some informal seminars on Lie theory, Clebsch-Gordan coefficients and related matters, and Tom Koornwinder filled a gap in the program by giving a seminar lecture on Zeilberger's algorithm. The workshop even made the local newspaper and television due to a special lecture "A nuclear-weapon free world: desirable? possible? probable?" by F. Calogero, secretary general of the Pugwash conferences on science and world affairs, which was attended by his excellency R. Venkataraman, former president of India. During the opening ceremony of the workshop, R.P. Agarwal gave a survey on special functions in India during the last century. Among others, the theory of q-special functions, the (Miller type) Lie theoretic approach to special functions, and applications to theoretical physics and probability theory are well represented nowadays in India. Professor Agarwal concluded his survey with a call to avoid superficial work and to look always for deep results. Tom Koornwinder, as vice-chair of our SIAM Activity Group, was asked to make some remarks in reply. 
He indicated some further active research areas in our field, such as special functions of several variables, their relation with certain algebraic structures (Lie groups and algebras, root systems, quantum groups, Hecke algebras), the general theory of orthogonal polynomials (including the Sobolev inner products), computer algebra methods for finding hypergeometric identities, and the application of special functions in real life situations (engineering). Some of these aspects indeed were the subject of subsequent lectures. During the workshop there was an informal meeting for founding a Society for Special Functions & Applications in India. Its aim will be to promote research in this area, to inform people of what is going on, and possibly to create a new India-based forum for bringing out research publications of international standard on special functions. Application forms for life-long membership are already available. Interaction with our SIAM activity group looked desirable to all present at the discussions, A very extensive cultural program was prepared by the organizers. Among the events were a dance performance, a concert of traditional Indian music, a trip to a drive-in movie to see a Tamil version of Mrs. Doubtfire, and on Sunday a visit to the temple cities Kancheepuram and Mahabalipuram to visit the temples (barefoot of course) and to do business with the local sandal makers and sculptors (bargaining skill is desirable). Ramanujan is of course very closely connected with Chennai and a visit to the Ramanujan museum and the Ramanujan Institute for Mathematical Sciences of the University of Madras was therefore a natural part of the program. The Ramanujan museum is a rather recent realisation located in a mathematics education center. Srinivasa Rao gave a lecture on the life and work of Ramanujan and afterwards we were able to see the displays in the museum. Some of Ramanujan's work is very suitable for use in mathematics courses at all levels and the Ramanujan museum wants to advertize this idea. Our visit to the museum ended with a delicious high tea organized with great care and effort by the mathematics education center. All western participants were impressed by the quality of the hosting Institute of Mathematical Sciences, the great hospitality, and (for newcomers) the fascinating intricacies of Indian culture and society. The efforts of prof. K. Srinivasa Rao to make this workshop into a success are really beyond praise. Tom H Koornwinder Walter Van Assche Topic #6 -------------- OP-SF NET 4.2 --------------- March 15, 1997 From: OP-SF Net editor <thk@wins.uva.nl> Subject: Bourbaki Lecture on Dunkl operators On March 1, 1997 Gert Heckman (Catholic University of Nijmegen) delivered a lecture in the Seminaire Bourbaki, Paris on "Dunkl operators". We congratulate Gert Heckman and Charles Dunkl. Topic #7 -------------- OP-SF NET 4.2 ------------- March 15,1997 Subject: Death of Louis Auslander Louis Auslander died at 68 on February 25, 1997. His early work was on differential geometry. Later he worked on nilpotent Lie groups, and this led to the study of theta functions. Two of his books are "Abelian Harmonic Analysis, Theta Functions and Function Algebras on a Nilmanifold", Springer, 1975, and "Lecture Notes on Nil-Theta Functions", work was on finite Fourier transforms, which also involves some special functions. 
His expository paper with Tolimieri: "Is computing the finite Fourier transform pure or applied mathematics?", BAMS (New Series), 1 (1979) 847-897, has been cited frequently, and is worth reading. The first of the books mentioned above is also joint with Richard Tolimieri. Auslander attended the Oberwolfach meeting on special functions and group theory run by Askey, Koornwinder and Schempp in 1983. He was fascinated by some of the work described there, as were some of us by his work. Topic #8 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: Frank Olver <olver@ipst.umd.edu> Subject: Reprinting of "Asymptotics and Special Functions" Readers may be interested to learn that my book "Asymptotics and Special Functions" has been reprinted by A.K. Peters, Ltd. It is again in hardback form and it lists at $69. Copies can be ordered through booksellers (ISBN: 1-56881-069-5) or directly from the publisher at 289 Linden Street, Wellesley, MA 02181; email: akpeters@tiac.net. Topic #9 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: Martin Muldoon <muldoon@yorku.ca> Subject: Workshop Proceedings on "Special Functions, q-Series and Related Topics" The proceedings volume for the 1995 Toronto Workshop on "Special Functions, q-Series and Related Topics" (OP-SF NET 2.4, Topic #8) has now appeared. The following information is taken from the AMS Web Page. "Special Functions, q-Series and Related Topics" Edited by: Mourad E. H. Ismail (University of South Florida), David R. Masson (University of Toronto) and Mizan Rahman (Carleton University). This book contains contributions from the proceedings at The Fields Institute workshop on Special Functions, q-Series and Related Topics that was held in June 1995. The articles cover areas from quantum groups and their representations, multivariate special functions, q-series, and symbolic algebra techniques as well as the traditional areas of single-variable special functions. The book contains both pure and applied topics and reflects recent trends of research in the various areas of special functions. Contents K. Alladi -- Refinements of Rogers-Ramanujan type identities B. C. Berndt, H. H. Chan, and L.-C. Zhang -- Ramanujan's class invariants with applications to the values of q-continued fractions and theta functions G. Gasper -- Elementary derivations of summation and transformation formulas for q-series R. W. Gosper, Jr. -- \int_{n/4}^{m/6}\ln \Gamma (z)\,dz F. A. Grunbaum and L. Haine -- On a q-analogue of Gauss equation and some q-Riccati equations R. A. Gustafson and C. Krattenthaler -- Determinant evaluations and U(n) extensions of Heine's _2\phi _1-transformations M. E. H. Ismail, D. R. Masson, and S. K. Suslov -- Some generating functions for q-polynomials E. Koelink -- Addition formulas for q-special functions T. H. Koornwinder -- Special functions and q-commuting variables M. Noumi, M. S. Dijkhuizen, and T. Sugitani -- Multivariable Askey-Wilson polynomials and quantum complex Grassmannians P. Paule and A. Riese -- A Mathematica q-analogue of Zeilberger's algorithm based on an algebraically motivated approach to q-hypergeometric telescoping W. Van Assche -- Orthogonal polynomials in the complex plane and on the real line Y.
Xu -- On orthogonal polynomials in several variables Appendix I: Program list of speakers and topics Appendix II: List of participants Details: Publisher: American Mathematical Society Distributor: American Mathematical Society Series: Fields Institute Communications, ISSN: 1069-5265 Volume: 14 Publication Year: 1997 ISBN: 0-8218-0524-X Paging: 277 pp. Binding: hardcover List Price:$82 Institutional Member Price: $66 Individual Member Price:$49 Itemcode: FIC/14 Topic #10 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: Frank G. Garvan <frank@math.ufl.edu> Subject: The Ramanujan Journal The Ramanujan Journal, an international journal devoted to the areas of mathematics influenced by Ramanujan was announced in OP-SF NET 2-4, #15 ---------------------------------------------------------------------- Editorial ---------------------------------------------------------------------- The Well-Poised Thread: An Organized Chronicle of Some Amazing Summations and Their Implications (Survey Article) George E. Andrews 7 ---------------------------------------------------------------------- Divisibility of Certain Partition Functions by Powers of Primes Basil Gordon and Ken Ono 25 ---------------------------------------------------------------------- Ramanujan's Master Theorem for Hermitian Symmetric Spaces Hongming Ding 35 ---------------------------------------------------------------------- Ramanujan's Singular Moduli Bruce C. Berndt, Heng Huat Chan, and Liang-Cheng Zhang 53 ---------------------------------------------------------------------- On the Ramanujan-Gollnitz-Gordon Continued Fraction Heng Huat Chan and Sen-Shan Huang 75 ---------------------------------------------------------------------- Rogers-Ramanujan type Identities for Partitions with Attached Odd Parts George E. Andrews and J. Plinio Santos 91 ---------------------------------------------------------------------- Lecture Hall Partitions Mireille Bousquet-Melou and Kimmo Eriksson 101 ---------------------------------------------------------------------- http://www.math.ufl.edu/~frank/ramanujan.html Topic #11 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: OP-SF Net Editor <muldoon@yorku.ca> Subject: End of Problem Section in SIAM Review? We heard from Joop Boersma (Technical University Eindhoven) of a report that SIAM Review is considering dropping its Problem Section. Boersma would regret this very much. According to him the Problem Section provides a forum where problems from Applied Analysis, in particular, can be brought to the attention of a wider audience. He asks whether our Activity Group can intervene. Please send us your opinion. There has been some discussion among the editors and other officers of the Activity Group. Here are some extracts Tom Koornwinder: "The number of people submitting or solving any problems is only a small minority. Part of these people invest a lot of time in these activities, and try to deal with problem sections in many different journals. Some even form teams for this purpose (e.g., the well-known O.P. Lossers team in Eindhoven). I think that it is not so harmful for this category of people when a problem section in one journal will stop, because there are many other journals (including our Newsletter) where such sections will remain. More serious is Boersma's argument that researchers can call the help of other specialists in such a way (for instance applied mathematicians calling the help of specialists in special functions). 
However I think that it is better not to hide such questions in a problem section, but rather call it a section on research questions. One other point: the Problem Section brought problems about Special Functions to the SIAM community (almost every issue of the Problems section had at least one problem on SF). In a sense that was good, because OP & SF does not get much other coverage in SIAM News and SIAM Review. On the other hand, by the very nature of a Problem Section, the aspects of Special Functions being treated there emphasize the "special" and the formula aspect of Special Functions, much less the qualitative and conceptual sides of the field. In this way such a Problem Section can also help to maintain or strengthen existing prejudices against special functions." Martin Muldoon: "The problems [in SIAM Review] are much too difficult. I suspect that many other readers would agree and that may explain why the Section may be chopped. I believe that its main strength, compared to problem sections in other journals, is that many of the problems arise from applications. While it is true that it provides a way for applied mathematicians to call on experts, in practice it is surely much too slow for this. A member once told me that he thought our Newsletter should not have a Problem Section since it competes with and takes problems away from SIAM Review. I answered that I thought that there were more than enough good problems to go around." Nico Temme: "My opinion about problem sections in journals is that the reader should have a fair chance, and that much detailed and technical expertise should not be needed to solve the problems. In this sense it is good to have a few places where one can find problems. Nieuw Archief voor Wiskunde offers only problems of which the solutions are known. But they had difficult years some time ago to get good editors. "In some cases the problems are too difficult for a non-introduced reader. This is acceptable when the solution is not known. "Usually I am not motivated to solve the first category. It is more "educational" and it consumes too much time. When I see a problem with a * in SIAM Review (which means an "open problem") I become interested. "I think that there should be a place for both categories, and that it should be clear where to find both categories. SIAM Review has a long tradition on this, and the journal is available in many places. ... "I think that having a Problem Section in our Newsletter for open problems is important and of interest to many readers." Topic #12 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: Wolfram Koepf <koepf@zib.de> Subject: Compiled Booklist and Electronic Services (This item appeared in The Activity Group's Newsletter, February 1997). Here I give the announced list of books and survey papers that should give a basis for understanding the current trends and needs in OP-SF. Let me say that the given list can be neither complete nor perfect. If somebody feels that an important item is missing, please let me know. The following list should rather be understood as under construction. We will try to put this list on OP-SF Web, where it can easily be updated. Electronically, the material could also be more easily organized in various ways. Readers who are interested in classical orthogonal polynomials should probably first look at [Chihara], [NU], [Rainville], [Szego], or [Tricomi]; readers looking for applications in mathematical physics might consider [NU]. [Artin] Artin, E.: "The Gamma Function." Holt, Rinehart and Winston, New York, 1964.
[Askey1] Askey, R. (Ed.): "Theory and Application of Special Functions." Proceedings of an advanced seminar sponsored by the Mathematics Research Center, University of Wisconsin-Madison, March 21-April 2, 1975. Academic Press, New York, 1975. [Askey2] Askey, R.: "Orthogonal Polynomials and Special Functions." Regional Conference Series in Applied Mathematics 21, SIAM, Philadelphia, 1975. [AKS] Askey, R. A., Koornwinder, T. H. and Schempp, W. (Eds.): "Special Functions: Group Theoretical Aspects and Applications." Mathematics and Its Applications, Vol. 18. Reidel, Dordrecht-Boston-Lancaster, 1984. [AW3] Askey, R. A. and Wilson, J.: Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc. 319, Providence, Rhode Island, 1985. [Bailey] Bailey, W. N.: "Generalized Hypergeometric Series". Cambridge University Press, England, 1935; reprinted 1964 by Stechert-Hafner Service Agency, New York-London. [Chihara] Chihara, T. S.: "An Introduction to Orthogonal Polynomials." Gordon and Breach Publ., New York, 1978. [Freud] Freud, G.: "Orthogonale Polynome". Birkhauser, Basel, 1969; English translation, Pergamon Press, Oxford, 1971. [GR] Gasper, G. and Rahman, M.: "Basic Hypergeometric Series." Encyclopedia of Mathematics and Its Applications, Vol. 34. Cambridge University Press, 1990. [Geronimus1] Geronimus, Ya. L.: "Polynomials Orthogonal on a Circle and Interval." International Series of Monographs on Pure and Applied Mathematics, Vol. 18. Pergamon Press, Oxford-London-New York-Paris, 1961. [Geronimus2] Geronimus, Ya. L.: Appendix to the Russian translation of Szego's book "Orthogonal Polynomials." Staatsverlag fur physikalisch-mathematische Literatur, Moskau, 1962. [Henrici1] Henrici, P.: "Applied and Computational Complex Analysis, Vol. 1: Power Series, Integration, Conformal Mapping, Location of Zeros." John Wiley & Sons, New York, 1974. [Henrici2] Henrici, P.: "Applied and Computational Complex Analysis, Vol. 2: Special Functions, Integral Transforms, Asymptotics, Continued Fractions." John Wiley & Sons, New York, 1977. [Hua] Hua, L.K.: "Harmonic Analysis of Functions of Several Complex Variables in the Classical Domains." Translations of Mathematical Monographs, Vol. 6, Amer. Math. Soc., Providence, Rhode Island, 1963. [Lebedev] Lebedev, N. N.: "Special Functions and their Applications." Translated and edited by Richard A. Silverman. Dover Publications, New York, 1972. [LW] Lorentzen, L. and Waadeland, H.: "Continued Fractions with Applications." Studies in Computational Mathematics, Vol. 3. North-Holland, Amsterdam, 1992. [Miller1] Miller, W., Jr.: "Lie Theory and Special Functions." Academic Press, New York, 1968. [Miller2] Miller, W., Jr.: "Symmetry and Separation of Variables." Encyclopedia of Mathematics and Its Applications, Vol. 4. Addison-Wesley, Reading, MA, 1977. [Nevai1] Nevai, P. G.: "Orthogonal Polynomials." Memoirs Amer. Math. Soc., Vol. 213, Providence, Rhode Island, 1979. [Nevai2] Nevai, P. (Ed.): "Orthogonal Polynomials: Theory and Practice." Proceedings of the NATO Advanced Study Institute on Orthogonal Polynomials and Their Applications, Columbus, Ohio, U.S.A., May 22-June 3, 1989. Kluwer, Dordrecht, 1990. [NU] Nikiforov, A. F. and Uvarov, V. B.: "Special Functions of Mathematical Physics". Translated from the Russian by R. P. Boas, Birkhauser, Basel, 1988. [NSU] Nikiforov, A. F., Suslov, S. K. and Uvarov, V. B.: "Classical Orthogonal Polynomials of a Discrete Variable." Springer-Verlag, Berlin-Heidelberg-New York, 1991. [NS] Nikishin, E. M. and Sorokin, V. N.: "Rational Approximations and Orthogonality." Translations of Mathematical Monographs 92, Amer. Math. Soc., Providence, Rhode Island, 1991. [Olver] Olver, F. W. J.: "Asymptotics and Special Functions."
Academic Press, New York, 1974. (reprinted A. K. Peters, 1997) [Perron] Perron, O.: "Die Lehre von den Kettenbruchen." Teubner, Leipzig, 1913; second edition, Chelsea, New York, 1950. [PWZ] Petkovvsek, M., Wilf, H. S. and Zeilberger, D.: "A=B." A. K. Peters, Wellesley, 1996. [Rainville] Rainville, E. D.: "Special Functions". The MacMillan Co., New York, 1960. [Richards] Richards, D. St. P. (Ed.): "Hypergeometric Functions on Domains of Positivity, Jack Polynomials, and Applications." Proceedings of an AMS special session, March 22-23, 1991 in Tampa, FL, USA. Contemporary Mathematics 138, Amer. Math. Soc., Providence, Rhode Island, 1992. [Shohat] Shohat, J. A. and Tamarkin, J. D. : "The Problem of Moments." Amer. Math. Soc., Providence, Rhode Island, 1963. [Szego] Szego, G.: "Orthogonal Polynomials." Amer. Math. Soc. Coll. Publ., Vol. 23, New York, 1939; 4th Edition, 1975. [ST] Stahl, H. and Totik, V.: "General Orthogonal Polynomials." Encyclopedia of Mathematics and Its Applications. Cambridge University Press, Cambridge, 1992. [Talman] Talman, J.: "Special Functions: a Group Theoretic Approach." W. A. Benjamin, New York, 1968. [Temme] Temme, N. M.: "Special Functions. An Introduction to the Classical Functions of Mathematical Physics." John Wiley & Sons Inc., New York, 1996. [Tricomi] Tricomi, F. G.: "Vorlesungen uber Orthogonalreihen." Grundlehren der Mathematischen Wissenschaften 76, Springer-Verlag, Berlin-Gottingen-Heidelberg, 1955. [VanAssche] Van Assche, W.: "Asymptotics for Orthogonal Polynomials." Lecture Notes Math. 1265, Springer, Berlin-Heidelberg-New York, 1987. [Vilenkin1] Vilenkin, N. Ya.: "Special Functions and the Theory of Group Representations." Translations of Mathematical Monographs 22, Amer. Math. Soc., Providence, Rhode Island, 1968. [Vilenkin2] Vilenkin, N. Ya., and Klimyk, A. U.: "Representation of Lie Groups and Special Functions, Vols. 1-3, and "Recent Advances", Kluwer [Wall] Wall, H. S.: "Analytic Theory of Continued Fractions." Chelsea, Bronx, NY, 1973. The following are handbooks and other reference manuals for orthogonal polynomials and special functions. [AS] Abramowitz, M. and Stegun, I. A.: "Handbook of Mathematical Functions." Dover Publ., New York, 1964. [Erdelyi1] Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F. G.: "Higher Transcendental Functions, Vols. 1-3." McGraw-Hill, New York, 1953-1955. [Erdelyi1] Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F. G.: "Tables of Integral Transforms, Vols. 1-2." McGraw-Hill, New York, 1954. [GR] Gradshteyn, I. S. and Ryzhik, I. M.: "Table of Integrals, Series, and Products." Printed and CD-ROM Version. Academic Press, San Diego, California, 1996. [Han] Hansen, E. R.: "A Table of Series and Products." Prentice-Hall, Englewood Cliffs, NJ, 1975. [Swarttouw] Koekoek, R. and Swarttouw, R. F.: "The Askey-scheme of hypergeometric orthogonal polynomials and its q analogue." Report 94-05, Technische Universiteit Delft, Faculty of Technical Mathematics and Informatics, Delft, 1994. Electronic version: http://www.can.nl/~renes/index.html [MOS] Magnus, W., Oberhettinger, F. and Soni, R. P.: "Formulas and Theorems for the Special Functions of Mathematical Physics". Springer, Berlin-Heidelberg-New York, 1966. [PBM] Prudnikov, A.P., Brychkov, Yu.A. and Marichev, O.I.: "Integrals and Series, Vols.1-3." Gordon and Breach Publ., New York, 1989-1990. Electronic Services Here is a list of electronic services and WWW sites which are of interest to the members of our Activity Group. 
[SIAM OPSF] http://www.math.yorku.ca/Who/Faculty/Muldoon/siamopsf SIAM Activity Group on Orthogonal Polynomials and Special Functions. Martin Muldoon and Tom H. Koornwinder: A collection of electronic services relevant for OPSF [OPSF] ftp://unvie6.un.or.at/siam/ Hans Haubold: OPSF ftp site or [MathSciNet] http://www.ams.org/msnhtml/mathscinet or http://ams.mathematik.uni-bielefeld.de/mathscinet Mathematical Reviews: MathSciNet [MR] http://www.ams.org/committee/publications/author-lookup.html Mathematical Reviews: Author Lookup [Zentralblatt] http://www.emis.de/cgi-bin/MATH Zentralblatt fur Mathematik: MATH Database 1931-1996 on-line [AMSPPS] http://www.ams.org/preprints AMS: Preprint Server [netlib] http://www.netlib.org Netlib: collection of mathematical software, papers, and databases [CPC] http://www.cpc.cs.qub.ac.uk/cpc Elsevier: The CPC International Program Library [AWS] http://www.can.nl/~renes/index.html Rene Swarttouw: Electronic version of the Askey-Wilson scheme report. [Swarttouw] http://www.can.nl/~demo/CAOP/CAOP.html Rene Swarttouw: An interactive on-line version of the Askey-Wilson scheme, using Koepf's Maple implementations of Zeilberger's algorithm [OMI] http://www.integrals.com Wolfram Research: On-line Mathematica Integrator [ISC] http://www.cecm.sfu.ca/projects/ISC CECM: Inverse Symbolic Calculator [IS] http://netlib.att.com/math/sloane/doc/eistop.html N. J. A. Sloane: Integer Sequences [TI] http://http.cs.berkeley.edu/~fateman/htest.html Richard Fateman: Table of Integrals Look Up [FMC] http://www.mathsoft.com/cgi-shl/constant.bat Steven Finch: Favorite Mathematical Constants [RNC] http://www.cecm.sfu.ca/organics/papers/bailey David H. Bailey and Simon Plouffe: Recognizing Numerical Constants [Numbers] http://archives.math.utk.edu/subjects/numbers.html Mathematics Archives: Numbers [NESF] http://math.nist.gov/nesf Daniel Lozier: Numerical Evaluation of Special Functions Wolfram Koepf Topic #13 -------------- OP-SF NET 4.2 --------------- March 15, 1997 From: OP-SF Net editor <thk@wins.uva.nl> Subject: ftp site for papers in Orthogonal Polynomials and Special Functions Hans Haubold's ftp archive for preprints in the area of Orthogonal Polynomials and Special functions is the continuation of Waleed Al-Salam's preprint archive. One can approach the archive by anonymous ftp to unvie6.un.or.at, directory siam. Very recently, Hans Haubold has constructed a convenient WWW interface for this ftp site, at the address ftp://unvie6.un.or.at/siam/opsf_new/00index.html Via this home page you can move to a page listing all available files in alphabetical order of authors, and offering a link to each file. You can also move from the home page to the ftp interface. At the moment it is only in this way that you can reach the submissions directory, where the most recent contributions reside. Hans Haubold is sending regular information about new submissions to a list is no longer correct. Since January 15, 1997 the following paper has been submitted to the archive: W. Koepf and D. Schmersau, Representations of Orthogonal Polynomials. (see siam/submissions/koepf_schmersau6.ps). Topic #14 -------------- OP-SF NET 4.2 ------------- March 15,1997 From: OP-SF Net Editors <thk@wins.uva.nl>, <muldoon@yorku.ca> Subject: Changes of Address, WWW Pages, etc. 1. The preprint archive "Orthogonal polynomials and related special functions" maintained by Hans Haubold has got a new WWW address: ftp://unvie6.un.or.at/siam/opsf_new/00index.html 2. 
The URL for AT-NET should be changed to: http://www.mi.uni-erlangen.de/~approx/ 3. A particularly extensive list of Mathematical links is maintained at: Mathematics Information Servers --- Penn State http://www.math.psu.edu/MathLists/Contents.html 4. Daniel Loeb has moved from Universite de Bordeaux, France to a position at Daniel H. Wagner, Associates, Malvern, PA, USA. 171 Stoneway Lane, Bala Cynwyd, PA 19004-3124, USA Phone +1-610-668-7732, Fax: +1-610-644-6293, email: loeb@sprynet.com or loeb@pa.wagner.com His Web site remains in Bordeaux but the address has changed slightly: WWW: http://dept-info.labri.u-bordeaux.fr/~loeb/index.html Topic #15 -------------- OP-SF NET 4.2 --------------- March 15, 1997 From: OP-SF Net editor <thk@wins.uva.nl> Subject: Obtaining back issues of OP-SF Net and submitting contributions Back issues of OP-SF Net can be obtained from ftp: ftp.wins.uva.nl, in directory pub/mathematics/reports/Analysis/koornwinder/opsfnet.dir or WWW: ftp://ftp.wins.uva.nl/pub/mathematics/reports/Analysis/koornwinder/opsfnet.dir or WWW: http://www.math.ohio-state.edu/JAT/DATA/OPSFNET/opsfnet.html Contributions to the OP-SF Net 4.3 should reach the email address poly@siam.org before May 1, 1997. Koepf. Deadline for submissions to be included in the June 1997 issue is May 15, 1997 and for the October 1997 issue it is September 15, 1997. Wolfram Koepf Takustr. 7 D-14195 Berlin-Dahlem, Germany tel.: +49-30-841 85-348/347 fax: +49-30-841 85-269/125 email: koepf@zib.de preferably by email, and in latex format. Other formats are also acceptable and can be submitted by email, regular mail or fax. mathematics symbols or pictures) are automatically considered for publication in OP-SF Net, and vice versa, unless the writer requests otherwise. Previous issues of the Newsletter, but not the most recent one, can be obtained as dvi or PostScript files from Wolfram Koepf's WWW homepage: http://www.zib.de/koepf/ or by anonymous ftp at ftp.zib.de in directory pub/UserHome/Koepf/SIAM In order to join the SIAM Activity Group on Orthogonal Polynomials you have to become a member of SIAM. The annual dues are $93 for SIAM plus$10 for the Group. Contact the email address join@siam.org . o - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - o - OP-SF Net is a forum of the SIAM Activity Group on - - Special Functions and Orthogonal Polynomials. - - We disseminate your contributions on anything of interest to the - - special functions and orthogonal polynomials community. This - - includes announcements of conferences, forthcoming books, new - - software, electronic archives, research questions, job openings. - o - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - o - Send submissions to: poly@siam.org - - Send address changes to: poly-request@siam.org - - Get back issues by ftp from: ftp.wins.uva.nl, in directory - - pub/mathematics/reports/Analysis/koornwinder/opsfnet.dir - - http://www.math.yorku.ca/Who/Faculty/Muldoon/siamopsf/ - - Information on joining SIAM - - and this activity group: service@siam.org - o - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - o - The elected Officers of the Activity Group are: - - Charles Dunkl, Chair - - Tom H. Koornwinder, Vice Chair and OP-SF Net editor - - Nico M. 
Temme, Secretary - - Willard Miller, Jr., Program Director - - The appointed officers are: - - Wolfram Koepf, Newsletter editor - - Martin Muldoon, Webmaster and OP-SF Net editor - o - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - o
auto_math_text
web
Tool to find long gapped k-mers (k ~ 1000)

Posted 6 weeks ago by giova34

I'm searching for a way to find very long k-mers (k ~ 2000). I realize the sequence entropy of 1000 nt is quite low, so I'm looking to search genome wide for long k-mers with gaps allowed - with the minimum threshold that at least 1200 bp be congruent in each discovered motif. So far, I've tried to do this with glam2 just to prototype. This never converges however - I first split human chromosome 21 into 1 Mb chunks, saving each 1 Mb chunk as a separate line. I then ask glam2 to find local alignments of ~ 2000 bp across these 1 Mb chunks. glam2 n chr21_chunked1000000.fa -z 10 -a 1200 -b 2000 -w 1500 & I wonder if there is already a tool out there that is better poised to accommodate long k-mer/motif discovery. Any recommendation/advice greatly appreciated! -G

k-mers gapped motif
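For readers who want to reproduce the chunking step described above, here is a hypothetical helper (not from the original post); it assumes Biopython is installed and that the chromosome sequence lives in a file named chr21.fa - adjust the names to your setup.

```python
from Bio import SeqIO
from Bio.SeqRecord import SeqRecord

CHUNK = 1_000_000  # 1 Mb per record, as in the post

records = []
for rec in SeqIO.parse("chr21.fa", "fasta"):
    for start in range(0, len(rec.seq), CHUNK):
        records.append(SeqRecord(rec.seq[start:start + CHUNK],
                                 id=f"{rec.id}_chunk{start // CHUNK}",
                                 description=""))

SeqIO.write(records, "chr21_chunked1000000.fa", "fasta")
```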
auto_math_text
web
## Planck's oscillators and the energy assumption

First of all, why is $$E = h\nu$$? Second, where can I find the derivation behind Planck's "oscillators in a box" calculations that led to the assumption that energy is quantized? I realize that my questions are a bit vague, but I cannot make them more specific as I do not have a firm grasp of the subject.

Reply (Dr Transport): Any introductory QM book should have the answer to the first question. The second question follows from the first and is found in texts on statistical physics.

Quote by Dr Transport: "Any introductory QM book should have the answer to the first question. The second question follows from the first and is found in texts on statistical physics."

The problem is that the only book I have access to is "Introduction to the Quantum Theory" by David Park. The first question is not answered in this book. As for the second question, I don't have access to statistical physics texts at all. Is there an explanation on the internet somewhere? (Is this a good one?)
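For reference, a compressed version of the standard textbook argument behind the second question (a sketch only; Planck's own 1900 route went through entropy counting, and any statistical-mechanics text fills in the details): assume each oscillator of frequency $\nu$ can only hold energies $E_n = n h\nu$ and weight those levels with Boltzmann factors. The average energy per oscillator is then

$$
\langle E \rangle
= \frac{\sum_{n=0}^{\infty} n h\nu\, e^{-n h\nu / k_B T}}
       {\sum_{n=0}^{\infty} e^{-n h\nu / k_B T}}
= \frac{h\nu}{e^{h\nu / k_B T} - 1},
$$

which reduces to the classical equipartition value $k_B T$ only when $h\nu \ll k_B T$. Multiplying by the density of standing-wave modes in the box gives Planck's radiation law instead of the divergent Rayleigh-Jeans result.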
auto_math_text
web
Conference paper, Open Access
Tassi, Francesco; De Momi, Elena; Ajoudani, Arash

### Citation Style Language JSON Export

{
  "publisher": "Zenodo",
  "DOI": "10.5281/zenodo.4663753",
  "author": [
    { "family": "Tassi, Francesco" },
    { "family": "De Momi, Elena" },
    { "family": "Ajoudani, Arash" }
  ],
  "issued": { "date-parts": [ [ 2021, 5, 29 ] ] },
  "abstract": "<p>Today&#39;s robots are expected to fulfill different requirements originated from executing complex tasks in uncertain environments, often in collaboration with humans. To deal with this type of multi-objective control problem, hierarchical least-square optimization techniques are often employed, defining multiple tasks as objective functions, listed in hierarchical manner. The solution to the Inverse Kinematics problem requires to plan and constantly update the Cartesian trajectories. However, we propose an extension to the classical Hierarchical Quadratic Programming formulation, that allows to optimally generate these trajectories at control level.<br>\nThis is achieved by augmenting the optimization variable, to include the Cartesian reference and allow for the formulation of an adaptive compliance controller, which retains an impedance-like behaviour under external disturbances, while switching to an admittance-like behavior when collaborating with a human. The effectiveness of this approach is tested using a 7-DoF Franka Emika Panda manipulator in three different collaborative scenarios.</p>",
  "type": "paper-conference",
  "id": "4663753"
}
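Records in this CSL JSON form are easy to post-process. Below is a minimal, hypothetical sketch (plain Python, no external CSL processor) that turns the record above into a one-line citation string; the file name and the output format are assumptions, not part of the Zenodo export.

```python
import json

# Hypothetical file holding the CSL JSON record shown above.
with open("zenodo_4663753.csl.json") as fh:
    rec = json.load(fh)

authors = "; ".join(a["family"] for a in rec["author"])
year = rec["issued"]["date-parts"][0][0]

# Build a rough "authors (year). doi" string; a real CSL processor
# (e.g. citeproc) would apply a proper citation style instead.
citation = f"{authors} ({year}). doi:{rec['DOI']}"
print(citation)
# -> Tassi, Francesco; De Momi, Elena; Ajoudani, Arash (2021). doi:10.5281/zenodo.4663753
```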
auto_math_text
web
### Areas Related to Circles

Chapter 12 of the Mathematics textbook for class 10.

License: Source NCERT
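For orientation, the two formulas this chapter is built around (standard textbook results, quoted here for reference rather than taken from the source page): for a sector subtending an angle $\theta$ in degrees in a circle of radius $r$,

$$
\text{length of arc} = \frac{\theta}{360} \times 2\pi r,
\qquad
\text{area of sector} = \frac{\theta}{360} \times \pi r^{2}.
$$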
auto_math_text
web
• Quantum interference and phonon-mediated back-action in lateral quantum dot circuits (1301.7461), Jan. 30, 2013, cond-mat.mes-hall
Spin qubits have been successfully realized in electrostatically defined, lateral few-electron quantum dot circuits. Qubit readout typically involves spin to charge information conversion, followed by a charge measurement made using a nearby biased quantum point contact. It is critical to understand the back-action disturbances resulting from such a measurement approach. Previous studies have indicated that quantum point contact detectors emit phonons which are then absorbed by nearby qubits. We report here the observation of a pronounced back-action effect in multiple dot circuits where the absorption of detector-generated phonons is strongly modified by a quantum interference effect, and show that the phenomenon is well described by a theory incorporating both the quantum point contact and coherent phonon absorption. Our combined experimental and theoretical results suggest strategies to suppress back-action during the qubit readout procedure.

• Determination of energy scales in few-electron double quantum dots (1105.3570), Jan. 3, 2012, cond-mat.mes-hall
The capacitive couplings between gate-defined quantum dots and their gates vary considerably as a function of applied gate voltages. The conversion between gate voltages and the relevant energy scales is usually performed in a regime of rather symmetric dot-lead tunnel couplings strong enough to allow direct transport measurements. Unfortunately this standard procedure fails for weak and possibly asymmetric tunnel couplings, often the case in realistic devices. We have developed methods to determine the gate voltage to energy conversion accurately in the different regimes of dot-lead tunnel couplings and demonstrate strong variations of the conversion factors. Our concepts can easily be extended to triple quantum dots or even larger arrays.

• Relaxation of hot electrons in a degenerate two-dimensional electron system: transition to one-dimensional scattering (1104.1645), Aug. 4, 2011, cond-mat.mes-hall
The energy relaxation channels of hot electrons far from thermal equilibrium in a degenerate two-dimensional electron system are investigated in transport experiments in a mesoscopic three-terminal device. We observe a transition from two dimensions at zero magnetic field to quasi-one-dimensional scattering of the hot electrons in a strong magnetic field. In the two-dimensional case electron-electron scattering is the dominant relaxation mechanism, while the emission of optical phonons becomes more and more important as the magnetic field is increased. The observation of up to 11 optical phonons emitted per hot electron allows us to determine the onset energy of LO phonons in GaAs at cryogenic temperatures with a high precision, $E_{\mathrm{LO}} = 36.0 \pm 0.1\,$meV. Numerical calculations of electron-electron scattering and the emission of optical phonons underline our interpretation in terms of a transition to one-dimensional dynamics.

• An electron jet pump: The Venturi effect of a Fermi liquid (1011.2289), June 1, 2011, cond-mat.mes-hall
A three-terminal device based on a two-dimensional electron system is investigated in the regime of non-equilibrium transport. Excited electrons scatter with the cold Fermi sea and transfer energy and momentum to other electrons. A geometry analogous to a water jet pump is used to create a jet pump for electrons. Because of its phenomenological similarity we name the observed behavior "electronic Venturi effect".

• Electron-avalanche amplifier based on the electronic Venturi effect (1001.5201), Oct. 26, 2010, cond-mat.mes-hall
Scattering of otherwise ballistic electrons far from equilibrium is investigated in a cold two-dimensional electron system. The interaction between excited electrons and the degenerate Fermi liquid induces a positive charge in a nanoscale region which would be negatively charged for diffusive transport at local thermal equilibrium. In a three-terminal device we observe avalanche amplification of electrical current, resulting in a situation comparable to the Venturi effect in hydrodynamics. Numerical calculations using a random phase approximation are in agreement with our data and suggest Coulomb interaction as the dominant scattering mechanism.

• Phonon-mediated vs. Coulombic Back-Action in Quantum Dot circuits (0910.4093), April 20, 2010, cond-mat.mes-hall
Quantum point contacts (QPCs) are commonly employed to capacitively detect the charge state of coupled quantum dots (QD). An indirect back-action of a biased QPC onto a double QD laterally defined in a GaAs/AlGaAs heterostructure is observed. Energy is emitted by non-equilibrium charge carriers in the leads of the biased QPC. Part of this energy is absorbed by the double QD where it causes charge fluctuations that can be observed under certain conditions in its stability diagram. By investigating the spectrum of the absorbed energy, we identify both acoustic phonons and Coulomb interaction being involved in the back-action, depending on the geometry and coupling constants.

• Telegraph Noise in Coupled Quantum Dot Circuits Induced by a Quantum Point Contact (0801.4002), May 4, 2008, cond-mat.mes-hall
Charge detection utilizing a highly biased quantum point contact has become the most effective probe for studying few-electron quantum dot circuits. Measurements on double and triple quantum dot circuits are performed to clarify the back-action role of charge sensing on the confined electrons. The quantum point contact triggers inelastic transitions, which occur quite generally. Under specific device and measurement conditions these transitions manifest themselves as bounded regimes of telegraph noise within a stability diagram. A nonequilibrium transition from artificial atomic to molecular behavior is identified. Consequences for quantum information applications are discussed.
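A back-of-the-envelope illustration of the phonon-counting argument in the third abstract above (my numbers, not the authors'): if a hot electron injected at energy $E_{\mathrm{inj}}$ above the Fermi sea relaxes mainly by emitting LO phonons, the maximum number it can emit is

$$
N_{\max} = \left\lfloor \frac{E_{\mathrm{inj}}}{E_{\mathrm{LO}}} \right\rfloor ,
$$

so seeing up to $N_{\max} = 11$ phonons only once the injection energy exceeds roughly $11 \times 36\,\mathrm{meV} \approx 0.4\,\mathrm{eV}$ is what pins down the onset energy $E_{\mathrm{LO}} \approx 36\,\mathrm{meV}$.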
auto_math_text
web
58 sujets IRFU Dernière mise à jour : 21-01-2021 ##### Probing new sources of CP violation in the Universe using Higgs boson production at the LHC SL-DRF-21-0364 Location : Service de Physique des Particules (DPHP)Groupe Atlas (ATLAS)Saclay Contact : Laurent SCHOEFFEL Starting date : 01-10-2021 Contact : Laurent SCHOEFFELCEA - DRF/IRFU/DPHP 01.69.08.25.83 Thesis supervisor : Laurent SCHOEFFELCEA - DRF/IRFU/DPHP 01.69.08.25.83 The proposed PhD is aiming at probing new sources of CP violation in the Universe by studying the Higgs boson properties at the LHC, in particular through a modification of the coupling of the Higgs boson with the heaviest elementary particle: the top quark. The goal of this PhD is to develop a new data analysis strategy within the ATLAS collaboration probing the CP properties of the ttH coupling in pp collisions. The idea would be to design an analysis based exclusively on pure CP observables, meant to complement the existing model-dependent approach relying on machine learning methods that mix observables specific and non-specific to CP. This new analysis should also have increased sensitivity to other rarer top quark associated Higgs production modes such as tHq. New observables that will need to be developed will be made flexible enough to be applicable to a large panel of Higgs and top-quark decay channels. They will first be tested on the multilepton channel. A focus will be put on the reconstruction of the heavy particles (Higgs and top quark) in the final state, which is quite challenging in this channel. ##### Measuring the Higgs-top coupling CP properties in the multilepton channel at the ATLAS experiment SL-DRF-21-0365 Location : Service de Physique des Particules (DPHP)Groupe Atlas (ATLAS)Saclay Contact : Henri BACHACOU Starting date : 01-10-2021 Contact : Henri BACHACOUCEA - DRF/IRFU +41227675650 Thesis supervisor : Henri BACHACOUCEA - DRF/IRFU +41227675650 The proposed PhD is aiming at measuring the CP properties of the Higgs-top coupling in the multilepton channel within the ATLAS experiment. Discovering new sources of CP violation is one of the most pressing questions in particle physics today. The Yukawa-like interactions in the Higgs boson sector could provide a particularly attractive way for additional sources of CP violation. The goal of PhD is to measure the ttH process in two innovative ways. First new reconstruction algorithms to identify the heavy particles in ttH events will be developed, which is made very challenging by the multilepton final state. Then the analysis will be designed exclusively based on pure CP observables. New observables based on ratios or angles will be explored, and measurements in regions of phase space where the separation between the various Higgs-top processes is less dependent on the CP properties of the Higgs-top-quark coupling will be studied. This new analysis should also have increased sensitivity to other rarer top quark associated Higgs production modes such as tHq. ##### Weak-gravitational lensing mass maps for cosmology and gravitational wave astronomy SL-DRF-21-0350 Location : Direction d’Astrophysique (DAP)Laboratoire CosmoStat (LCS)Saclay Contact : Martin Kilbinger Starting date : 01-09-2021 Contact : 21753 Thesis supervisor : Martin KilbingerCEA - DRF/IRFU/DAP/LCS 21753 Personal web page : http://www.cosmostat.org/jobs/mk_wlcosmogw_2020 Weak lensing denotes galaxy image distortions induced by structures on large scales. 
Dark-matter maps obtained from weak lensing help us to shed light on the mystery of the recent acceleration of the Universe. In addition, they are important for gravitational waves (GW), which can be magnified by dark matter along the line of sight. This PhD thesis will develop methods to analyse weak-lensing data. Machine-learning techniques will be used to calibrate the measurements. These methods will be applied to survey UNIONS (Ultraviolet Near-Infrared Optical Northen Sky Survey), and WFST (Wide-Field Survey Telescope), which will be built in China. The goal is to measure the properties of dark energy, and to correct magnification of GW signals. ##### "3x2pt" analysis: Cross-correlations of cosmological probes, and application to state-of-the-art weak-lensing and galaxy clustering surveys SL-DRF-21-0278 Location : Direction d’Astrophysique (DAP)Laboratoire CosmoStat (LCS)Saclay Contact : Martin Kilbinger Valeria Pettorino Starting date : 01-09-2021 Contact : 21753 Thesis supervisor : Valeria PettorinoCEA - DRF/IRFU/DAP/LCS Personal web page : http://www.cosmostat.org/jobs/mk_3x2pt_2020 The large upcoming cosmological experiments such as the space telescopes Euclid and Nancy Grace Roman, or the ground-based survey LSST, will use two main probes with the goal to measure the properties of dark matter and dark energy: weak gravitational lensing, which is the deformation of the images of distant galaxies by tidal fields on very large scales, and galaxy clustering, denoting the distribution of galaxies in the cosmic web. This PhD thesis will explore the cross-correlation between those two probes, which is of great importance since these observables are sensitive to the same structures, which interrelates them. This work will be applied to data from the surveys UNIONS and DESI. The student will be trained at the interface between theory and observations, and provide key tools for upcoming large surveys. ##### Optimization of the booster for the electron-positron collider FCC-ee SL-DRF-21-0083 Research field : Accelerators physics Location : Département des Accélérateurs, de Cryogénie et de Magnétisme (DACM)Laboratoire d’Etudes et de Développements pour les Accélérateurs (LEDA)Saclay Contact : Antoine CHANCE Starting date : 01-11-2020 Contact : (+33) 1 69 08 17 19 Thesis supervisor : Antoine CHANCECEA - DRF/IRFU/DACM/LEDA (+33) 1 69 08 17 19 Currently, one of the burning questions in particle physics is the understanding of the mass origin of the particles by exploring Higgs properties, more specifically its self-interaction. An electron-positron collider is then a powerful tool for precision physics. In this purpose, the project "Future Circular Collider Innovation Study" (FCCIS) aims to deliver a conceptual report and to give a sustainable implementation long-term plan for a 1OO-km-long electron-antielectron collider at CERN. The PhD student will join an international collaboration with CERN, DESY, INFN or KIT. The PhD will focus on the booster, the ring which accelerates electrons up to nominal energy before injecting into the collider. The main challenges of the booster are i) the injection energy. The PhD student will determine the optimum injection energy of the booster ; this choice will have a great impact on the injection complex and its cost ii) the booster optics. the PhD student will have to explore different optics and propose innovative solutions to improve and boost equilibrium conditions. iii) the injection into the collider. 
The PhD student will study how to inject into the ring and will design the transfer lines up to the collider. The PhD student will use the MAD-X code for the optics calculations, a reference code developed at CERN. ##### ADVANCED AND ARTIFICIAL INTELLIGENCE TECHNIQUES TO MITIGATE LINEAR AND NON-LINEAR IMPERFECTIONS IN FUTURE CIRCULAR COLLIDERS SL-DRF-21-0279 Research field : Accelerators physics Location : Département des Accélérateurs, de Cryogénie et de Magnétisme (DACM)Laboratoire d’Etudes et de Développements pour les Accélérateurs (LEDA)Saclay Contact : Barbara Dalena Starting date : 01-10-2021 Contact : Thesis supervisor : Barbara DalenaCEA - DRF/IRFU/DACM Personal web page : http://dalena.web.cern.ch/dalena/ After the discovery of the Higgs boson at the LHC, particle physics community is exploring and proposing next accelerators, to address the remaining open questions on the underlying mechanisms and on the constituents of the present universe. One of the studied possibilities is FCC (Future Circular Collider), a 100-km-long collider at CERN. The hadron version of FCC (FCC-hh) seems to be the only approach to reach energy levels far beyond the range of the LHC, in the coming decades, providing direct access to new particles with masses up to tens of TeV. The electron version of FCC brings a tremendous increase of production rates for phenomena in the sub-TeV mass range, making precision physics studies possible. A first study has shown no major showstopper in the colliders’ feasibility but has identified several specific challenges for the beam dynamics: large circumference (civil engineering constraints), beam stability with high current, the small geometric emittance, unprecedented collision energy and luminosity, the huge amount of energy stored in the beam, large synchrotron radiation power, plus the injection scenarios. This thesis will focus on the optimization of the hadron option of the future circular collider against linear and non-linear imperfections (i.e. magnets alignments and their field quality). A key point of this thesis is the comparison of current advanced correction schemes to techniques based on machine learning. The application of these techniques to accelerators is one of current hot topics in the field and pursued worldwide. ##### Machine learning for unmixing gravitational wave signals from the LISA interferometer SL-DRF-21-0300 Research field : Artificial intelligence & Data intelligence Location : Département d’Electronique, des Détecteurs et d’Informatique pour la physique (DEDIP)Laboratoire ingénierie logicielle et applications spécifiquesSaclay Contact : Jérôme Bobin Starting date : 01-09-2021 Contact : Jérôme BobinCEA - DRF/IRFU/DEDIP/LILAS 0169084463 Thesis supervisor : Jérôme BobinCEA - DRF/IRFU/DEDIP/LILAS 0169084463 Personal web page : www.jerome-bobin.fr Following the first detections of gravitational waves in 2015, that were awarded the Nobel prize in Physics in 2017, a new window is now open to observe our Universe. In contrast to ground-based interferometers, the LISA observatory (Laser Interferometer Space Antenna) will be sensitive to a very large number of signals of different physical natures: galactic binaries, supermassive black holes, extreme mass ratio inspirals, etc. This wealth of signals raise paramount data analysis challenges: unmixing a large number of gravitational events of very different nature. The goal of this PhD thesis is to develop the first gravitational signal unmixing method for LISA. 
The proposed approach will make use of adapted signal representations for each category of signals to be retrieved, which will make profit of their different temporal signatures. The design of such representations will be based on advanced machine learning techniques. The proposed methods will be evaluated with the participation to the LISA Data Challenges (LDC). ##### Uncertainties for large scale deep learning-based image reconstruction SL-DRF-21-0336 Research field : Artificial intelligence & Data intelligence Location : Direction d’Astrophysique (DAP)Laboratoire CosmoStat (LCS)Saclay Contact : Jean-Luc STARCK Starting date : 01-10-2021 Contact : 01 69 08 57 64 Thesis supervisor : Jean-Luc STARCKCEA - DSM/IRFU/SAp/LCS 01 69 08 57 64 Personal web page : http://jstarck.cosmostat.org Deep learning (DL) has changed the way of solving inverse problems. Many scientific challenges remain that must be met for its deployment in astronomical imagery: i) taking into account the physical training model, ii) estimating the uncertainties on reconstructed images, iii) generalization, and iv ) the volume of data for scaling up. To quantify the uncertainties, we have introduced a probabilistic DL approach (Remy et al., 2020), which makes it possible to derive the a posteriori distribution of the solution. This requires however to use expensive simulation techniques (MCMC) which does not allow its use in ambitious projects like Euclid or SKA. Several challenges will be addressed in this thesis: - Develop a new DL method to quantify uncertainties, while enjoying theoretical guarantees of coverage. We will rely on conformal quantile regression, a new method derived from theoretical statistics (Romano et al., 2019). - Generalization: We recently proposed a new architecture of neural networks (the learnets, Ramsi et al., 2020), which has the advantage of including certain properties of the wavelet transform such as exact reconstruction. This type of architecture should provide a solution to the generalization problem. - The scaling on data of dimension 3 or 4. It will then be a question of extending the results obtained in order to be able to efficiently handle this type of data. The last challenge of this thesis will be to set up these new tools to solve problems in two large international projects, for dark matter maps with Euclid and SKA. [1] B. Remy, F. Lanusse, Z. Ramzi, J. Liu, N. Jeffrey and J.-L. Starck, "Probabilistic Mapping of Dark Matter by Neural Score Matching", NeurIPS 2019 Machine Learning and the Physical Sciences Workshop. [2] Y. Romano E. Patterson E. J. Candès, Conformalized quantile regression. Advances in neural information processing systems 32 NeurIPS, 2019. [3] Z. Ramzi, JL Starck, T Moreau, P Ciuciu, "Wavelets in the deep learning era", European Signal Processing Conference, accepted submission to the EUSIPCO 2020 conference. ##### Natural language processing in time domain astrophysics SL-DRF-21-0773 Research field : Astroparticles Location : Service de Physique des Particules (DPHP)Groupe HESS 2Saclay Contact : Fabian Schussler Starting date : 01-10-2021 Contact : +33169083020 Thesis supervisor : Fabian SchusslerCEA - DRF/IRFU/DPHP/HESS 2 +33169083020 Personal web page : http://irfu.cea.fr/Pisp/fabian.schussler/index.html Time domain and high-energy astrophysics is dealing with the most violent phenomena in the universe. Rapid exchange of information is crucial to detect these transient, i.e. 
short-lived, events with multiple observatories covering the full multi-wavelength domain and all cosmic messengers. Victim of its own success, the current way of manual reading, analyzing and classifying information shared by astrophysicists in Astronomers Telegrams or circulars within the Gamma-ray Coordinates Network is approaching saturation. One of the most promising novel approaches is to build on the recent progress in artificial intelligence and especially natural language processing (NLP) and feature extraction. This thesis will bring together leading experts in two exiting domains: Artificial Intelligence and time domain, multi-messenger astrophysics. The project will be part of the UDOPIA program of the Paris-Saclay university and benefit from a rich ecosystem in both astrophysics and AI as well strong ties with leading industry partners. ##### High-energy multi-messenger astrophysics with H.E.S.S. and CTA SL-DRF-21-0237 Research field : Astroparticles Location : Service de Physique des Particules (DPHP)Groupe Astroparticules (GAP)Saclay Contact : Fabian Schussler Starting date : 01-10-2021 Contact : +33169083020 Thesis supervisor : Fabian SchusslerCEA - DRF/IRFU/DPHP/HESS 2 +33169083020 Personal web page : http://irfu.cea.fr/Pisp/fabian.schussler/index.html Very recently a fundamentally new domain of astronomy and astrophysics has shown its first results: multi-messenger and real-time astrophysics. The simultaneous detection of various new astrophysical messengers (gravitational waves, high-energy gamma rays and high-energy neutrinos) and the exchange and combination of data from very different observatories allows to open new windows and provides unprecedented insights into the most violent phenomena ever observed. New and significant conclusions can be obtained by combining these new messengers. Joint analyses of archival observations in different wavelengths have brought enormous insights in the past and, as this technique provides an assured and certain scientific return, it will also be used in the proposed thesis project. At the same time it has becomes clear that another important step does greatly enhance the sensitivity of multi-messenger searches: the need to gain full access to the wealth of information provided by analyzing and combining the data in real-time. The proposed thesis project will allow opening this new window to the high-energy universe: real-time multi-messenger astronomy at very high energies. The combination of the various particles and radiations in a truly multi-messenger online alert system will resolve several challenges faced in high-energy astrophysics and especially allow detecting and studying violent transient phenomena that are supposed to be at the origin of high-energy cosmic rays. The project will introduce the time domain to high-energy astrophysics and has the potential to cause a paradigm shift in how observations and data analyses are performed. The core of the proposed project will be H.E.S.S., currently the world’s most sensitive gamma-ray instrument, and CTA, the next generation, global high-energy gamma-ray observatory. We’ll combine their data with events recorded by IceCube, the world’s largest neutrino telescope and the advanced Virgo and Ligo gravitational wave interferometers. 
The detection of a transient high-energy gamma-ray source in coincidence with gravitational waves or high-energy neutrinos will provide the long sought evidence for their common origin and may resolve the century old quest for the origin of high-energy cosmic rays. We’ll also collaborate with the world’s most sensitive radio observatories (e.g. the SKA precursors MeerKAT and ASKAP) to search for counterparts to Fast Radio Bursts and in general study a large variety of messengers like Gamma-Ray Bursts or flares from active galactic nuclei. By scanning the data acquired with high-energy gamma-ray observatories in real-time, it will also possible to send alerts to the wider astronomical community to ensure simultaneous observations at other wavelengths. SL-DRF-21-0661 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire de modélisation des plasmas astrophysiques (LMPA)Saclay Contact : Patrick Hennebelle Matthias GONZALEZ Starting date : 01-10-2021 Contact : 0169089987 Thesis supervisor : Matthias GONZALEZUniversité de Paris - DRF/IRFU/DAp/LMPA 33 1 69 08 17 79 Personal web page : http://irfu.cea.fr/Pisp/matthias.gonzalez/ ##### Dark Energy Tomography with the Euclid survey SL-DRF-21-0206 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire CosmoStat (LCS)Saclay Contact : Valeria Pettorino Starting date : 01-10-2021 Contact : Thesis supervisor : Valeria PettorinoCEA - DRF/IRFU/DAP/LCS Personal web page : https://www.valeriapettorino.com/ While the Universe is expanding with increasing velocity, the question of what is causing cosmic acceleration remains unsolved. Acceleration seems to act against gravitational attraction, as if a new source of energy, dubbed dark energy, were responsible for it. This PhD proposal is meant to contribute to the Euclid mission, to tackle this dilemma by implementing the possibility to test dark energy at different redshifts, or what I refer to here as ‘dark energy tomography’, and integrate it in the Euclid Consortium validated likelihood. The PhD student will be able to work at the interface between data and theory and concretely collaborate to a large collaboration like the Euclid satellite. Objectives include 1) extending the likelihood software to test dark energy at different redshift epochs, 2) contribute to the collaboration effort on comparing theoretical predictions with data 3) investigate different machine learning methods to reconstruct the dark energy contribution in each redshift bin. ##### Multi-physics interaction between exoplanet atmospheres and their host stars SL-DRF-21-0543 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire de dynamique des étoiles des (Exo) planètes et de leur environnement (LDE3)Saclay Contact : Antonio Garcia Muñoz Antoine Strugarek Starting date : 01-10-2021 Contact : Thesis supervisor : Antoine StrugarekCEA - DRF/IRFU/DAP/LDE3 0169083018 The census of known exoplanets includes >4,000 planets in >3,000 systems. The current exoplanet demographics show that exoplanets exhibit a large diversity in their mass, radius (and thus density and composition) and orbital arrangements. The research field moves today from exoplanet detection to the characterization of their atmospheres. The focus of this project is to study numerically the physical mechanisms (3D dynamics, photochemistry, magnetic interactions) that drive the escape of strongly irradiated atmospheres. 
The project will provide insight into exoplanet systems for which upper atmospheric in-transit observations of e.g. Lyman-alpha, C II, H-alpha, He I at 1083 nm exist but that remain without a proper theoretical interpretation. Our priority is thus to develop physically-motivated models that embrace the complexity of these interactions and help place in context the available observables. We see this project as a first step into the development of a versatile and powerful 3D multi-physics model that can become the international reference for exoplanet-host star interactions. ##### Cosmology - Clusters of galaxies - Artificial intelligence SL-DRF-21-0332 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire de Cosmologie et d’Evolution des Galaxies (LCEG)Saclay Contact : Marguerite PIERRE Starting date : 01-10-2021 Contact : 0169083492 Thesis supervisor : Marguerite PIERRECEA - DRF/IRFU/SAp/LCEG 0169083492 Personal web page : https://sci.esa.int/s/WLg9apw Clusters of galaxies are the most massive entities in the universe. As such, they constitute powerful cosmological probes. The XLL survey is the largest programme of the European satellite XMM (X-ray band). It has enabled the detection of several hundreds of galaxy clusters. The goal of the PhD is to perform the cosmological analysis of the complete XXL cluster sample by using innovative machine learning techniques. ##### Measurement of the mass of galaxy clusters using gravitational lensing of the cosmic microwave background SL-DRF-21-0763 Research field : Astrophysics Location : Service de Physique des Particules (DPHP)Groupe Cosmologie MillimétiqueSaclay Contact : Jean-Baptiste Melin Starting date : 01-10-2021 Contact : 01 69 08 73 80 Thesis supervisor : Jean-Baptiste MelinCEA - DRF/IRFU/DPHP/Cosmo mm 01 69 08 73 80 Galaxy clusters, located at the node of the cosmic web, are the largest gravitationally bound structures in the Universe. Their abundance and spatial distribution are very sensitive to cosmological parameters. Galaxy clusters thus constitute a powerful cosmological probe. They have proven to be an efficient probe in the last years (Planck, South Pole Telescope, XXL, etc.) and they are expected to make great progress in the coming years (Euclid, Vera Rubin Observatory, CMB-S4, etc.). Theoretical predictions of the cluster abundance and spatial distribution depend on cosmological parameters and cluster mass. To determine cosmological parameters from cluster surveys, one needs to be able to measure accurately cluster mass. The error on the mass estimation is currently the main systematic error for the determination of cosmological parameters with galaxy clusters. This is the reason why it is crucial to improve on the measurement of the cluster mass and to control associated errors. The most direct method to measure cluster mass is based on gravitational lensing. It is now used routinely in optical surveys: a cluster induces distortions of the shapes of background galaxies. Using these distortions, it is possible to reconstruct cluster mass. It was shown recently that it is also possible to detect these distortions at millimetre wavelengths in the cosmic microwave background (CMB) instead of using background galaxies, and reconstruct the mass of galaxy clusters. 
The main advantage of using the cosmic microwave background is because it is located at very high distance allowing for mass measurement of distant clusters; it is not possible to do this measurement with background galaxies, which are too few for distant clusters. Irfu/DPhP has developed the first tools to measure galaxy cluster masses using gravitational lensing of the cosmic microwave background for the Planck mission. The PhD thesis work will consist in taking hands on the tools and improve them to make them compatible with ground-based data. They will then be applied to public SPT-SZ (https://pole.uchicago.edu) data and SPT-SZ+Planck data jointly. The ACT (https://act.princeton.edu) data has also been made public recently and a joint analysis ACT+Planck will also be made. In the second part of the thesis, the tools will be used to find observation strategies and compute integration times to measure cluster masses for high resolution ground based experiments such as NIKA2 (http://ipag.osug.fr/nika2/), alone and jointly with Planck. The current methods are optimal for maps in total intensity and in the low signal-to-noise regime as shown in the Figure above. The future experiments will have lower noise levels and will be very sensitive to polarization. The third part of the thesis will be dedicated to development of new methods to extract the masses for the future low noise cosmic microwave background experiments such as CMB-S4 (https://cmb-s4.org), PICO (arXiv:1902.10541) or CMB Backlight (arXiv: 1909.01592). Finally, we will study the precision on cosmological parameters that can be reached from galaxy cluster catalogues, given the precision on the mass expected from these future experiments. ##### Measuring the growth of massive structures in the distant Universe with deep spectroscopic surveys SL-DRF-21-0166 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire de Cosmologie et d’Evolution des Galaxies (LCEG)Saclay Contact : Starting date : 01-10-2021 Contact : Thesis supervisor : ##### Intergalactic magnetic field and gamma ray bursts with CTA SL-DRF-21-0143 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire d’Etudes des Phénomènes Cosmiques de Haute Energie (LEPCHE)Saclay Contact : Renaud Belmont Thierry STOLARCZYK Starting date : 01-09-2021 Contact : Renaud BelmontUniversité de Paris (Paris 7) - DRF/IRFU/DAP/LEPCHE Thesis supervisor : Thierry STOLARCZYKCEA - DRF/IRFU/DAp/LEPCHE +33 1 69 08 78 12 Personal web page : http://irfu.cea.fr/Pisp/thierry.stolarczyk/ The intergalactic magnetic field pervading the cosmic voids is suspected to be a relic field originating from the very first epoch of the cosmic history. The goal of this PhD is to look for signatures of this field in the high-energy data of gamma-ray bursts, and to predict the ability of the future CTA observatory to constrain its properties. This work combines both theoretical modelling and analysis of simulated CTA data. ##### Towards a 3D characterisation of supernova remants in X-rays SL-DRF-21-0318 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire d’Etudes des Phénomènes Cosmiques de Haute Energie (LEPCHE)Saclay Contact : Fabio Acero Starting date : 01-10-2021 Contact : Fabio AceroCEA - DRF/IRFU/DAP/LEPCHE 0169084705 Thesis supervisor : Fabio AceroCEA - DRF/IRFU/DAP/LEPCHE 0169084705 X-ray data are multidimensional by nature. For each photon the energy and position is recorded by the X-ray satellite. 
Here we propose to develop novel techniques to fully exploit the multidimensional nature of the data by combining blind source separation technique with feature learning. ##### Characterization of SVOM Gamma-Ray Bursts Afterglows using MXT data SL-DRF-21-0153 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire des spectro-Imageurs spatiaux (LISIS)Saclay Contact : Diego GOTZ Starting date : 01-10-2021 Contact : Diego GOTZCEA - DRF/IRFU/DAP/LISIS +33-1-69-08-59-77 Thesis supervisor : Diego GOTZCEA - DRF/IRFU/DAP/LISIS +33-1-69-08-59-77 More : http://www.svom.fr SVOM is a mission dedicated to the detection and characterization of Gamma-Ray Bursts (GRBs) and other multi-messenger sources, and it is scheduled for launch in June 2022. SVOM carries a unique multi-wavelength payload, sensitive from gamma-rays to the visible band, which is complemented on ground by dedicated wide field and narrow field robotic telescopes, distributed over the entire Earth. The SVOM space segment consists of ECLAIRS, a coded mask telescope operating in the 4-150 keV energy range, GRM, a gamma-ray (20 keV-5 MeV) spectrometer, and two follow-up narrow field telescopes, VT (visible) and MXT (0.2-10 keV). The Microchannel X-ray Telescope (MXT) is a compact and light focusing X-ray telescope. The main goal of MXT is to precisely localize the X-ray counterparts of SVOM GRBs and to study in detail their spectral and temporal characteristics. Gamma-Ray Bursts are issued either by the collapse of a very massive star (> 50 times the mass of the Sun) or by the coalescence of two compact objects (most likely neutron stars). In both scenari a short lived (less than ~100 s) gamma ray emission is followed by a longer lived (hours to days) X-ray emission, that can provide useful information about the GRB environment, the associated emission processes and, possibly, the nature of the GRB progenitors. The successful PHD candidate will in first place contribute to the analysis the MXT flight model calibration data obtained in summer 2021 at the PANTER X-ray testing facility. In particular, the PHD student will be in charge of producing the MXT spectral response matrices before the SVOM launch, and of updating them during the first two years of the mission, by analyzing the in-flight calibration data. The PHD student will be part of the MXT science team, act as a SVOM Burst Advocate, and thanks to this experience and to the deep instrumental knowledge acquired she/he will be able to correctly analyze, since the very beginning of the SVOM mission, X-ray afterglow data, and couple them to the multi wavelength data in order to build a clear phenomenological picture of the SVOM GRBs. In fact, the early GRB afterglow phase is still not fully understood, in particular from the point of view of the contribution of the GRB central engine to the so-called “plateau phases” of the afterglow light curve, whose interpretation could lead to a better understanding of the GRB progenitors. ##### Study of Polarimetric Bolometer Arrays with Spectroscopic Capabilities for Astrophysics SL-DRF-21-0652 Research field : Astrophysics Location : Direction d’Astrophysique (DAP)Laboratoire des spectro-Imageurs spatiaux (LSIS)Saclay Contact : Louis RODRIGUEZ Vincent REVERET Starting date : 01-10-2021 Contact : 0169086948 Thesis supervisor : Vincent REVERETCEA - DSM/IRFU/DAp/LSIS 01 69 08 74 02 The Herschel Space Observatory, launched in 2009, has revolutionized our vision of the "Cold Universe". 
In particular, it has radically changed our understanding of star formation by highlighting the ubiquitous filamentary structures of gas and dust and their essential role in the very early stages of star formation. On the other hand, the observations of the Planck satellite (also launched in 2009) in polarimetric mode have highlighted the presence of magnetic fields on large scales in molecular clouds. In these regions, the filaments can be either parallel to the magnetic field (low density filaments) or perpendicular (very dense filaments). But many questions remain. In order to understand the set of physical processes implemented in these stellar formation zones and to explain the links with the complex structure of the surrounding interstellar medium (ISM), new extremely sensitive instruments must be developed in the submillimeter domain. It seems necessary, on the one hand, to be able to finely characterize the magnetic field (via the detection of polarized light) in several spectral bands and, on the other hand, to detect the presence of several tracers of the ISM via the emission of certain spectral lines (C+ at 158 µm in particular). These observations, made from space or aboard stratospheric balloons, strongly constrain the volume and mass of the on-board charge. The idea of gathering one or more light analysis functions within a compact detector is a step in this direction. In this context, CEA has been developing for a few years now ultra-sensitive submillimeter bolometer arrays, capable of measuring the polarization within pixels, without the help of external polarizers. Developed in close collaboration with CEA-LETI in the framework of the B-BOP instrument on the SPICA observatory, this technology is in line with the developments of the Herschel-PACS detectors. These bolometers are developed in the framework of Labex Focus, 2 R&T CNES and ESA funding. In 2019, a thesis defended at the laboratory demonstrated that it was possible to add spectroscopic capacity to these new generation arrays by coupling the detector arrays to a compact Fabry-Perot interferometric system. The experimental demonstration of the complete device remains to be done and this is the core of this thesis topic: to study, implement and characterize experimentally the scientific performances of this compact spectro-imager-polarimeter. The first step will be to experimentally validate in a cryostat the Fabry-Perot mirror displacement system and to deduce its technical limitations. The second phase will consist in coupling this system to the bolometer arrays in order to produce and characterize the first prototypes of this new type of detectors. Finally, in a third part, the data processing aspect will be studied in order to extract the scientific signal as well as possible and to propose an adequate calibration. This work may also pave the way to more applied applications in medical imaging or in the field of security controls in the TeraHertz band, as proposed by LETI with its developments of room temperature micro-bolometers. 
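To make the spectral-selection idea in the spectro-imager-polarimeter topic above concrete: the transmission of an ideal Fabry-Perot cavity of mirror spacing $d$, internal refractive index $n$ and mirror reflectivity $R$, illuminated at normal incidence with wavelength $\lambda$, follows the standard Airy function (a textbook result quoted as background, not a description of the actual instrument design):

$$
T(\lambda) = \frac{1}{1 + F \sin^{2}\!\left( \frac{2\pi n d}{\lambda} \right)},
\qquad
F = \frac{4R}{(1-R)^{2}},
$$

so stepping the mirror spacing $d$ scans the transmitted wavelength, which is what allows a compact scanning Fabry-Perot to add spectroscopic capability to a bolometer array.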
##### Impact of the density of galaxies in the analysis of the large spectroscopic survey DESI SL-DRF-21-0281 Research field : Astrophysics Location : Service de Physique des Particules (DPHP)Groupe Cosmologie (GCOSMO)Saclay Contact : Etienne Burtin Vanina RUHLMANN-KLEIDER Starting date : 01-10-2021 Contact : Etienne BurtinCEA - DRF/IRFU/DPHP/GCOSMO 01 69 08 53 58 Thesis supervisor : Vanina RUHLMANN-KLEIDERCEA - DRF/IRFU/DPHP/GCOSMO 01 69 08 61 57 In the last 30 years, the study of the Universe has seen the emergence of a standard model of the Universe based on general relativity. In this model, our Universe is made of ordinary matter, dark matter and a mysterious component called “dark energy”, responsible for the recent acceleration of the expansion of the universe. The upcoming large surveys, such as DESI in the US, will provide a map of the distribution of the galaxies 10 times more precise than the current state-of-the-art. The scientific community is gearing up to define new analysis techniques in order to extract the maximum of information from these surveys and hence enter the era of precision cosmology especially as far as growth of structure measurements are concerned. In this thesis, we propose to study a novel approach based on using the density at large scales to improve the precision on those measurements and to compare it with General Relativity predictions in order to search for possible deviations. The thesis will take place at Irfu, the Institute for Research on the Fundamental laws of the Universe. The PhD student will join the cosmology group of Irfu/DPhP, composed of 10 physicists, 4 PhD students and 2 post-docs. Actively involved in the eBOSS and DESI experiments, the group also participates in Euclid and has in the past had a strong contribution in the SNLS, Planck and BOSS international collaborations. The future PhD student will be integrated into the DESI collaboration and will benefit from all the group’s expertise acquired on BOSS and eBOSS ##### Inventive Conditions and Dimensions of Digital Design SL-DRF-21-0348 Research field : Computer science and software Location : DIRLaboratoire de recherche sur les sciences de la matièreSaclay Contact : Vincent Bontems Starting date : 01-05-2021 Contact : 0169087094 Thesis supervisor : Vincent BontemsCEA - DSM/IRFU 0169087094 More : https://www.octo.com/ A large part of digital innovation is structured by a few technological clusters anchored in specific geographic sites (Silicon Valley, Shenzhen, etc.). These places are ecosystems, that is to say natural, social and technical environments, which play the role of "technological terroirs". The first objective of this thesis is to clarify the specificity and impact of these original local contexts regarding inventiveness in digital design. On the one hand, they welcome and generate the possibility of innovation; on the other hand, they modulate and organize a global space, made up of all the territories connected to information networks. The Paris-Saclay Campus will be one of the study sites. The activities of OCTO Technology will provide others. The notion of design designates a recursive design process that produces and evolves a technical object by integrating the constraints and resources resulting from its social and cultural integration. The philosopher of technology Gilbert Simondon has developed a method to analyze the invention and evolution of technical lineages in relation to their "associated milieu". 
This thesis will measure the relevance of these concepts applied to digital objects, revise them and supplement them with the contribution of other thinkers of digital technologies. The deployment of algorithms or artificial intelligences is the result of a complex process of dissemination operating at multiple scales. An analysis of the conditions for inventiveness in terms of design regimes and middle grounds is relevant. However, the property of "scalability" is specific to digital design (due to marginal cost replication) and transforms the dimensions of this process. The thesis will therefore have to take into account the scale relativity in order to objectify the different levels where the inventiveness of digital design operates. This thesis requires a reflection that embraces the richness and diversity of technological dimensions, including its ethical dimension. It should lead to avenues, for the CEA as well as for OCTO technology, aimed at promoting the emergence of digital innovations from the eco-responsible perspective of "right technology". ##### Design of a novel Analog to Digital Converter with internal Machine Learning calibration SL-DRF-21-0349 Research field : Electronics and microelectronics - Optoelectronics Location : Département d’Electronique, des Détecteurs et d’Informatique pour la physique (DEDIP)Systèmes Temps Réel, Electronique d’Acquisition et MicroélectroniqueSaclay Contact : Fabrice Guilloux Philippe Schwemling Starting date : 01-10-2021 Contact : 33 1 69 08 67 31 Thesis supervisor : Philippe SchwemlingCEA - DRF/IRFU/DPHP/Atlas 33 1 69 08 85 85 In current and future high energy physics experiments (as the LHC at CERN), the particle detector uses sub-micron integrated circuits. The signals from these circuits are digitized upstream of the processing chain and conveyed far from the experience by ultra-fast digital links. The development of new analog to digital converters (ADC) that perform well in potentially extreme environments, especially in radiation environments, is a challenge. Up to now, the trend has been to make these circuit responses stable and insensitive to variations in T°, dose or technology. Another possibility is to establish precise calibration tables that can be "downloaded" into the ASIC when conditions change or, even better, automatically generated by the ASIC itself. This calibration parameter generation, in or outside of the ASIC, can be considered in the Machine Learning (ML) context. The thesis approach is therefore to understand both the complexity of a real ADC and the software analysis of the errors correction, by carrying out the ML algorithms resulting in the ADC calibration. With an accurate ADC calibration, it is also foreseen to greatly enhanced the ADC performance by combining several ADC channels into one, leading to conversion rates or resolutions unreachable for a single core ADC. ##### Design of a new readout circuit for highly pixelated hybrid detectors for hard X-ray spectro-imaging and polarimetry space applications. 
SL-DRF-21-0346 Research field : Electronics and microelectronics - Optoelectronics Location : Département d’Electronique, des Détecteurs et d’Informatique pour la physique (DEDIP)Systèmes Temps Réel, Electronique d’Acquisition et MicroélectroniqueSaclay Contact : Olivier GEVIN Olivier LIMOUSIN Starting date : 01-10-2021 Contact : Olivier GEVINCEA - DSM/IRFU/SEDI 0169081716 Thesis supervisor : Olivier LIMOUSINCEA - DRF/IRFU/DAP/LSIS 01 64 50 15 03 This space instrumentation thesis consists in designing a microelectronic matrix circuit integrating numerous analog and digital functions in 250 µm pixels for the reading of CdTe or silicon semiconductor detectors. Since 2011, our research team has been developing a new concept of hybrid detectors called MC2 (Mini CdTe on Chip) based on 3D WDoD(TM) (wireless die on die) technologies that should support the thermomechanical and radiative environment of a space mission. The ambition is to realize large focal planes with unequalled performances in time-resolved spectro-imaging and polarimetry, intended to serve the next discoveries in X and gamma astrophysics and solar flare physics. The targeted microelectronics technology is the XFAB 180 nm technology, which is particularly attractive for space applications due to its perennial and affordable commercial availability and its good radiation resistance. It is a credible choice as an alternative to the AMS 0.35 µm technology, massively exploited until now, especially for the space projects SVOM (gamma camera ECLAIRs) and Solar Orbiter (X STIX telescope) in our group. Future generations of our detectors will be able to benefit from this advantageous technology in R&D as well as in production, even in cases where the dose resistance is important. Two generations of matrix circuits realized in the XFAB 180 nm technology have shown very promising results to integrate ultra low noise and low power self-triggered spectroscopy chains in a 250 µm pixel side. These circuits have also shown the need to design, characterize and optimize several critical functions at the pixel level, common blocks and inter-pixel operators in order to obtain a better response uniformity and the desired ultimate noise level. The objective of the thesis is to provide innovative and high-performance solutions for a new circuit of 32 x 32 pixels with a 250 µm pitch, connectable on 2 sides, with an optimized interface and a modular architecture to be integrated in a spatially-integrated detection module. Spinoffs from these developments are also envisaged in the medical field, notably for breast cancer tomography, as well as in the field of environmental monitoring in the nuclear field. ##### Deep Learning and gamma spectroscopy: a new signal processing approach for CdTe detectors data analysis SL-DRF-21-0316 Research field : Instrumentation Location : Direction d’Astrophysique (DAP)Laboratoire des spectro-Imageurs spatiaux (LSIS)Saclay Contact : Olivier LIMOUSIN Starting date : 01-10-2021 Contact : Olivier LIMOUSINCEA - DRF/IRFU/DAP/LSIS 01 64 50 15 03 Thesis supervisor : Olivier LIMOUSINCEA - DRF/IRFU/DAP/LSIS 01 64 50 15 03 This thesis at the interface between nuclear instrumentation and applied mathematics consists in developing and implementing advanced methods for processing spectral data from CdTe Caliste detectors for high-energy photons. 
These sensors, resulting from fundamental research in space astrophysics, are the basic building block of the Spid-X gamma camera born from joint technological developments between the CEA and the company 3D PLUS. It aims at characterizing radiative environments in the framework of nuclear surveillance, for the safety of nuclear operations or research facilities, or for the dismantling of installations. The methods studied will use Deep Learning tools with the objective of analyzing gamma spectra acquired in a complex environment inducing spectral distortions, potentially difficult to interpret with classical algorithms. For this purpose, the PhD student will carry out the following lines of study: - The identification of radioelements and the measurement of their proportion in the signal with one or several absorbing and scattering materials between the sources and the detector (methods: Monte-Carlo Geant4 simulations, Bayesian neural networks, confidence robust learning and experimentation). - Determination of the nature of the material crossed and the thickness crossed (methods: adversarial neural networks (GANs), self-encoding, experimentation). - The application to coded mask imaging methods. Depending on the results obtained in the two previous axes and the resulting discovery space, the methods may be applied to the theme of coded mask methods for gamma-ray imaging. ##### Neutron and beta imaging with Micromegas detectors with optical readout SL-DRF-21-0319 Research field : Neutronics Location : Département d’Electronique, des Détecteurs et d’Informatique pour la physique (DEDIP)DÉtecteurs: PHYsique et SimulationSaclay Contact : Thomas PAPAEVANGELOU Esther FERRER RIBAS Starting date : 01-10-2021 Contact : 01 69 08 2648 Thesis supervisor : Esther FERRER RIBASCEA - DRF/IRFU/DEDIP/DEPHYS 0169083852 Personal web page : http://irfu.cea.fr/Pisp/esther.ferrer-ribas/ Recent developments have shown that coupling a Micromegas gaseous detector on a glass substrate with a transparent anode and a CCD camera enable the optical readout of Micromegas detectors with an impressive spatial resolution showing that the glass Micromegas detector is well-suited for imaging. This feasibility test has been effectuated with low-X-ray photons permitting energy resolved imaging. This test opens the way to different applications. Here we will focus, on one hand, on neutron imaging for non-destructive examination of highly gamma-ray emitting objects, such as fresh irradiated nuclear fuel or radioactive waste and on the other hand, we would like to develop a beta imager at the cell level in the field of anticancerous drug studies. Both applications require gas simulations to optimize light yields, optimization of the camera operation mode and design of the detectors in view of the specific constraints of reactor dismantling and medical applications: spatial resolution and strong gamma suppression for neutron imaging and precise rate and energy spectrum measurements for the beta. The image acquisition will be optimized for each case and dedicated processing algorithms will be developed. 
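As a point of comparison for the deep-learning gamma-spectroscopy topic above (identifying radioelements and measuring their proportions in a measured spectrum), a classical baseline is to unmix the spectrum against a library of reference templates with non-negative least squares. The sketch below is illustrative only, with synthetic templates, arbitrary channel positions and no detector response model; it is not part of the thesis project.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_channels = 1024
energies = np.arange(n_channels)

def template(peaks):
    """Toy reference spectrum: Gaussian photopeaks on a flat continuum."""
    spec = np.full(n_channels, 0.01)
    for mu, amp in peaks:
        spec += amp * np.exp(-0.5 * ((energies - mu) / 8.0) ** 2)
    return spec / spec.sum()

# Hypothetical library: one normalized template per radionuclide
# (peak channels are arbitrary, not calibrated energies).
library = {
    "Cs-137": template([(331, 1.0)]),
    "Co-60":  template([(586, 1.0), (666, 0.8)]),
    "Am-241": template([(30, 1.0)]),
}
A = np.column_stack(list(library.values()))

# Synthetic measurement: 70% Cs-137 + 30% Co-60, with Poisson counting noise.
truth = A @ np.array([0.7, 0.3, 0.0])
measured = rng.poisson(truth * 1e5).astype(float)

# Non-negative least-squares unmixing of the normalized measured spectrum.
weights, residual = nnls(A, measured / measured.sum())
fractions = weights / weights.sum()
for name, f in zip(library, fractions):
    print(f"{name}: {f:.2f}")
```

A deep-learning approach would aim to stay accurate when absorbers and scatterers distort the spectrum, which is precisely where a fixed-template fit like this one degrades.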
##### Pushing ab initio calculations of atomic nuclei to higher precision SL-DRF-21-0293 Research field : Nuclear Physics Location : Service de Physique Nucléaire (DPhN)Laboratoire études du noyau atomique (LENA) (LENA)Saclay Contact : Thomas DUGUET Vittorio SOMA Starting date : 01-10-2021 Contact : 0169082338 Thesis supervisor : Vittorio SOMACEA - DRF/IRFU/DPhN/LENA 0169083236 Pushing ab initio calculations of atomic nuclei to higher precision The theoretical description of atomic nuclei from first principles, or in a so-called ab initio fashion, has become possible only recently thanks to crucial advances in many-body theory and the availability of increasingly powerful high-performance computers. Such ab initio techniques are being successfully applied to study the structure of nuclei starting from the lighter isotopes. Still, extensions to heavy elements and nuclear reactions are posing considerable difficulties. The objective of the thesis is to contribute to this on-going progress in nuclear many-body theory. The project will focus on a developing ab initio technique (the so-called Gorkov-Green function approach, devised at CEA Saclay) designed to describe open-shell or superfluid systems (the majority of atomic nuclei). After the first promising applications to light and medium-mass nuclei, the method faces crucial upgrades to reach the precision and competitiveness of state-of-the-art approaches. The proposed work will aim to put in place the necessary tools towards this direction. In particular, the Gorkov-Green function approach will be extended to the next level of precision. After some formal work, this will require a careful numerical implementation on top of the existing code. Given the increased cost of the corresponding numerical calculations, expected to go from moderately (100 proc.) to massively parallel (1000 proc.), special attention will have to be paid to the code optimisation and the use of pre-processing techniques like importance truncation or tensor factorisation. Overall, the thesis work will exploit the latest advances in nuclear theory, including the use of nuclear interactions from chiral effective field theory and renormalisation group techniques, as well as high-performance computing codes and resources. The work will consist in formal developments, computational tasks and application of the new technology to cases of experimental interest. International collaborations are envisaged. ##### Study of reaction mechanisms for the synthesis of super-heavy elements SL-DRF-21-0285 Research field : Nuclear Physics Location : Département Grand Accélérateur National d’Ions LourdsGrand Accélérateur National d’Ions LourdsSaclay Contact : Dieter ACKERMANN Starting date : 01-10-2021 Contact : 0231454742 Thesis supervisor : Dieter ACKERMANNCEA - DRF/IRFU//GANIL 0231454742 One of the major research activities in nuclear physics is the study of nuclei at their limit of existence. These very exotic nuclei are expected to exhibit new properties, but are also very difficult to produce. The thesis deals with the study of nuclear reactions leading to the synthesis of super-heavy nuclei. The models are not accurate enough to guide experiments and there is little experimental data to constrain the models. The goal of this thesis is therefore to use innovative methods to constrain the models. 
It will consist in using state-of-the-art statistical methods for the analysis of low-statistics data, as well as in designing experiments devoted to a better understanding of the reaction mechanisms involved.

##### Is there a dark decay of neutrons in 6He?

SL-DRF-21-0287

Research field: Nuclear physics
Location: Département Grand Accélérateur National d'Ions Lourds, Saclay
Contact: Hervé SAVAJOLS
Starting date: 01-10-2021
Contact: Hervé SAVAJOLS, CNRS - GANIL, UPR 3266, 02 31 45 4699
Thesis supervisor: Hervé SAVAJOLS, CNRS - GANIL, UPR 3266, 02 31 45 4699

Recently, two theoretical physicists put forward a thrilling hypothesis: neutrons may undergo a dark-matter decay mode. Such a decay could explain the existing discrepancy of 4 standard deviations between two different methods of neutron lifetime measurement. If such a neutron decay is possible, then it could also occur, as a quasi-free neutron decay, in nuclei with sufficiently low neutron binding energy. In this work, we consider the case of 6He, whose two-neutron separation energy is lower than the one for a single neutron. The observation of a free neutron from 6He decay would, although difficult to achieve, be a unique signature of the dark neutron decay.

##### Study of drip-line phenomena in neutron-rich nuclei

SL-DRF-21-0286

Research field: Nuclear physics
Location: Département Grand Accélérateur National d'Ions Lourds, Saclay
Contact: Olivier SORLIN
Starting date: 01-10-2021
Contact: 02 31 45 4525
Thesis supervisor: Olivier SORLIN, CNRS - GANIL/Grand Accélérateur National d'Ions Lourds, 02 31 45 4525

The present project aims at better understanding the superfluidity of atomic nuclei, which manifests itself, for instance, in a smaller moment of inertia than that of a rigid body, deduced from their rotational or vibrational spectra. Superfluidity is thought to be induced by the pairing of nucleons, but the size of the pairs and their evolution as a function of the atomic mass, the binding energy or the shell structure are not known. Moreover, superfluidity may be caused by a larger number of correlated nucleons, such as quartets. These properties of the atomic nucleus are almost impossible to study as long as the nucleons are bound inside the nuclear potential. We have proposed an innovative route: suddenly promote neutron pairs or quartets out of the nuclear potential and study the correlations they had inside the nucleus from the observation of their many-body decay. The experimental strategy is to use the instrumentation of the R3B beam line at FAIR, which offers the unique possibility to measure all relevant information, such as the neutron and residual-nucleus momenta, to study nuclear superfluidity and its possible change of regime towards the neutron drip line.

##### Systematic studies of the continuum-coupling correlations in near-threshold states

SL-DRF-21-0284

Research field: Nuclear physics
Location: Département Grand Accélérateur National d'Ions Lourds, Saclay
Contact: Marek PLOSZAJCZAK
Starting date: 01-10-2021
Contact: 02 31 45 4590
Thesis supervisor: Marek PLOSZAJCZAK, CEA - DRF/IRFU/GANIL, 02 31 45 4590

It is proposed to study the salient effects of the coupling between discrete states and continuum states near various particle-emission thresholds, using the shell model in the complex energy plane.
This model provides a unitary formulation of the standard shell model within the open-quantum-system framework, for the description of well-bound, weakly bound and unbound nuclear states. Recent studies have demonstrated the importance of the residual correlation energy arising from the coupling to continuum states for the understanding of eigenstates, their energies and their decay modes in the vicinity of reaction channels. This residual energy has not yet been studied in detail. The studies of this thesis will deepen our understanding of the structural effects induced by the coupling to the continuum and will provide support for experimental studies at GANIL and elsewhere. The student of this theoretical thesis will develop the numerical tools necessary for the further development of the Gamow Shell Model (GSM), a tool par excellence for spectroscopic studies.

##### 3-dimensional scintillation dosimetry for the control of small irradiation fields in proton therapy

SL-DRF-21-0288

Research field: Nuclear physics
Location: Département Grand Accélérateur National d'Ions Lourds, Saclay
Contact: Anne-Marie FRELIN-LABALME
Starting date: 01-10-2021
Contact: 02 31 45 45 30
Thesis supervisor: Anne-Marie FRELIN-LABALME, CEA - DRF/IRFU/GANIL, 02 31 45 45 30

Radiotherapy is an important modality in cancer treatment. In this domain, proton beams have a ballistic superiority over photon beams. Nevertheless, the use of proton therapy to treat small-volume tumors (typically less than 27 cm3) is limited by the lack of well-adapted dosimetry tools for the quality assurance of small irradiation fields. To address this issue, an innovative dosimetry system has been developed. It is based on a scintillating block of 10 × 10 × 10 cm3 and two ultra-fast cameras recording the scintillation from different points of view to reconstruct 3-dimensional dose maps. The current reconstruction method uses a library of preliminary beam measurements. The objective of this PhD thesis will be to develop a new method directly converting scintillation maps into dose maps. This includes, in the first stage of the thesis, the study of the energy dependence of the scintillation yield with proton beams. The new reconstruction method will then be evaluated and compared with ionization chamber and dosimetry film measurements. Finally, the dosimetric system will be used to study dose uncertainties in treatment plans.

##### DETECTORS FOR TIME-OF-FLIGHT PET IMAGING WITH HIGH SPATIAL RESOLUTION

SL-DRF-21-0221

Research field: Nuclear physics
Location: Service de Physique des Particules (DPHP), Groupe Santé et Energie (GSE), Saclay
Contact: Dominique YVON, Viatcheslav SHARYY
Starting date: 01-10-2021
Contact: 01 6908 3625
Thesis supervisor: Viatcheslav SHARYY, CEA - DRF/IRFU/DPHP, 0169086129
Personal web page: http://irfu.cea.fr/Pisp/dominique.yvon/

DESCRIPTION

Positron emission tomography (PET) is a nuclear imaging technique widely used in oncology and in neurobiological research. The decay of the radioactive tracer emits positrons, which annihilate in the nearby tissue. Each positron annihilation produces two gamma quanta of 511 keV that allow one to reconstruct the annihilation vertex and the distribution of the tracer activity in the body of the patient. The precise determination of the position of the positron annihilation vertex is important for an accurate image reconstruction with good contrast.
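As a back-of-the-envelope illustration (not part of the proposal text) of why picosecond timing matters here: a coincidence time resolution Δt localizes the annihilation point along the line of response to roughly

```latex
\Delta x \simeq \frac{c\,\Delta t}{2},
\qquad
\Delta t = 100~\mathrm{ps}
\;\Rightarrow\;
\Delta x \simeq \tfrac{1}{2}\,(3\times 10^{10}~\mathrm{cm\,s^{-1}})(10^{-10}~\mathrm{s}) = 1.5~\mathrm{cm},
```

so improving the detector time resolution directly sharpens the vertex constraint used in the image reconstruction.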
In particular, it is useful for neuroimaging studies of the brain and for pre-clinical studies with animal models (rodents), as well as for full-body PET imaging. In this thesis we propose to contribute to an ambitious detector based on Cherenkov/scintillating crystals. We have selected technologies that are particularly effective for PET imaging. The principles of the detector are patented, and they should allow one to produce PET scanners with highly improved performance. The device uses advanced particle-detector technologies: a dense scintillator crystal, micro-channel-plate photomultipliers, gigahertz-bandwidth amplifiers and fast data acquisition modules (WaveCatcher, SAMPIC). Data processing will involve Monte-Carlo simulations and data analysis based on the GATE/Geant4 and ROOT C++ software libraries.

SUPERVISION

The successful candidate will work in the Department of Particle Physics of IRFU, in close collaboration with the Department of Electronics, Detectors and Computer Science for Physics. The CaLIPSO group includes two physicists and two students. Two postdocs will join the project next spring. We collaborate closely with CNRS-IJCLab on the fast readout electronics, with CPPM in Marseille, CEA-SHFJ and CEA-DES for the simulation of medical imaging devices and image reconstruction algorithms, and with the University of Münster (Germany).

THE PROPOSED WORK

You will calibrate and optimize the detector prototypes and analyze the measured data. Your work will focus on the optimization of the detector time and spatial resolution. This will involve many skills of an instrument scientist: fast photo-detection, fast electronics readout (analog and digital) with picosecond precision, hardware, and detector simulations with the GEANT4 and GATE software.

REQUIREMENTS

Knowledge of physics, particle interactions with matter, radioactivity and particle-detector principles, a vocation for instrumental (hardware) work and data analysis are mandatory. Being comfortable with programming, and having a background in GATE/Geant4 simulation and C++, will be an asset.

ACQUIRED SKILLS

You will acquire skills in particle-detector instrumentation, simulation of ionizing-radiation detectors, photo-detection, implementation and operation of fast digitizing electronics, and data analysis.

##### Testing nuclear interaction at the dripline

SL-DRF-21-0181

Research field: Nuclear physics
Location: Service de Physique Nucléaire (DPhN), Laboratoire études du noyau atomique (LENA), Saclay
Contact: Aldric REVEL, Anna CORSI
Starting date: 01-10-2021
Contact: Aldric REVEL, CEA - DRF/IRFU/DPhN/LENA
Thesis supervisor: Anna CORSI, CEA - DRF/IRFU/DPhN/LENA, 01 69 08 7554

The exploration of nuclei close to the limit of their existence (called the drip line) offers the unique opportunity to observe and study many phenomena not, or insufficiently, predicted by theory, such as the appearance of neutron "halos", the emergence of new magic numbers and the disappearance of those observed in nuclei close to stability. The proposed thesis topic revolves around the study of these emerging phenomena in exotic nuclei (or even beyond the drip line) via the analysis of data from experiments carried out at RIKEN (Japan) using the state-of-the-art experimental devices SAMURAI and MINOS, which are key for the study of these phenomena.
##### Continuum QCD approaches and 3D structure of the nucleon

SL-DRF-21-0297

Research field: Nuclear physics
Location: Service de Physique Nucléaire (DPhN), Laboratoire structure du nucléon (LSN), Saclay
Contact: Cédric Mezrag, Hervé Moutarde
Starting date: 01-10-2021
Thesis supervisor: Hervé Moutarde, CEA - DRF/IRFU/SPhN/Théorie Hadronique, 33 1 69 08 73 88

Most of the visible mass of the universe is contained in nucleons. However, the origin of this mass remains mysterious, with the portion from the Higgs mechanism in standard renormalization schemes corresponding to only a few percent of the total. The answer is to be found in the dynamics of the strong interaction, described by the theory of quantum chromodynamics (QCD) in terms of quarks and gluons. The interaction between quarks and gluons is thus responsible for the emergence of the known and measured properties of hadrons, such as their masses or spins. There is now strong theoretical and experimental momentum to determine the 3D structure of hadrons in terms of quarks and gluons. From a theoretical point of view, the classical tools of quantum field theory, namely perturbative expansions, do not allow the study of these emerging properties of hadrons, which are inherently non-perturbative. The aim of this thesis is to develop and use a non-perturbative formalism based on the Dyson-Schwinger and Bethe-Salpeter equations to determine the 3D structure of hadrons, in particular the nucleon. Different dynamical assumptions will be used to obtain a 3D mapping of the charge, mass and orbital-angular-momentum effects. To do so, a significant part of the PhD will be dedicated to numerical development and analysis, in order to tackle different inverse problems. A comparison of the results obtained with the experimental data will be carried out in collaboration with the other LSN members.

##### INVESTIGATION OF THE NUCLEAR TWO-PHOTON DECAY IN SWIFT FULLY STRIPPED HEAVY IONS

SL-DRF-21-0139

Research field: Nuclear physics
Location: Service de Physique Nucléaire (DPhN), Laboratoire études du noyau atomique (LENA), Saclay
Contact: Wolfram KORTEN
Starting date: 01-10-2021
Contact: +33169084272
Thesis supervisor: Wolfram KORTEN, CEA - DRF/IRFU/DPhN/LENA, +33169084272
Personal web page: https://www.researchgate.net/profile/Wolfram_Korten

The nuclear two-photon, or double-gamma, decay is a rare decay mode of atomic nuclei whereby a nucleus in an excited state emits two gamma rays simultaneously. Even-even nuclei with a first excited 0+ state are favorable cases to search for a double-gamma decay branch, since the emission of a single gamma ray is strictly forbidden for 0+ → 0+ transitions by angular-momentum conservation. The double-gamma decay still remains a very small decay branch (< 10^-4), competing with the dominant (first-order) decay modes of atomic internal-conversion electrons (ICE) or internal positron-electron (e+e-) pair creation (IPC). Therefore we will make use of a new technique to search for the double-gamma decay in bare (fully stripped) ions, which are available at the GSI facility in Darmstadt, Germany. The basic idea of our experiment is to produce, select and store exotic nuclei in their excited 0+ state in the GSI storage ring (ESR). For neutral atoms the excited 0+ state is a rather short-lived isomeric state, with a lifetime of the order of a few tens to hundreds of nanoseconds.
At the relativistic energies available at GSI, however, all ions are fully stripped of their atomic electrons, and decay by ICE emission is hence not possible. If the state of interest is located below the pair-creation threshold, the IPC process is not possible either. Consequently, bare nuclei are trapped in a long-lived isomeric state, which can only decay to the ground state by double-gamma emission. The decay of the isomers would be identified by so-called time-resolved Schottky Mass Spectroscopy. This method allows one to distinguish the isomer and the ground state by their (very slightly) different revolution times in the ESR, and to observe the disappearance of the isomer peak in the mass spectrum with a characteristic decay time. An experiment to search for the double-gamma decay in 72Ge and 70Se has already been accepted by the GSI Programme Committee and should be realised in 2021/22.

##### Study and Modeling of an axisymmetric electron cyclotron resonance ion source

SL-DRF-21-0283

Research field: Nuclear physics
Location: Département Grand Accélérateur National d'Ions Lourds, Saclay
Contact: Laurent MAUNOURY
Starting date: 01-10-2021
Contact: 02.31.45.47.87
Thesis supervisor: Laurent MAUNOURY, CNRS - DSM/GANIL, 02.31.45.47.87

GANIL has a long tradition in the use and development of low-pressure, out-of-equilibrium ion sources based on the Electron Cyclotron Resonance (ECR) process, feeding the GANIL accelerators with highly charged ions. One of the challenges for this type of source, used upstream of the accelerator, is to deliver high-charge-state ions at high intensity, specifically for the production of metal ions. An early simulation tool (PhD thesis of Alexandre Leduc) was developed to better understand how the ECR Phoenix V3 source operates. This work has already led to the understanding of some limitations of these sources, but a step forward must be taken to obtain a more realistic modelling of an ECR plasma: including electron dynamics and the creation of the self-consistent electrostatic potentials essential for the production of multicharged ions. Before achieving that, an intermediate milestone will be reached by developing the simulation tools on an axisymmetric ECR ion source, before extending them to a standard ECR ion source.

##### Towards super-heavy elements: new paths for the study of heavy nuclei

SL-DRF-21-0371

Research field: Nuclear physics
Location: Service de Physique Nucléaire (DPhN), Laboratoire études du noyau atomique (LENA), Saclay
Contact: Barbara Sulignano
Starting date: 01-10-2021
Contact: Barbara Sulignano, CEA - DSM/IRFU/SPhN/LENA, 0169 08 42 27
Thesis supervisor: Barbara Sulignano, CEA - DSM/IRFU/SPhN/LENA, 0169 08 42 27

Hunting for super-heavy elements has been one of the most exciting and active topics of the last few years and has already produced new elements, such as 113, 115, 117 and 118, in accelerator experiments. All these nuclei can be produced through fusion-evaporation reactions. However, their study is greatly hampered by the extremely low production rates, hence experimental information in this region is very scarce. The high-intensity stable beams of the superconducting linear accelerator of the SPIRAL2 facility at GANIL, coupled with the Super Separator Spectrometer (S3) and a high-performance focal-plane spectrometer (SIRIUS), will open new horizons for research in the domain of such rare nuclei and of low-cross-section phenomena at the limit of nuclear stability.
The student will take an active part in the tests of the whole SIRIUS detector. Information on the heaviest elements has been obtained up to now via fusion-evaporation reactions. It is, however, well known that the only nuclei one can reach using fusion-evaporation reactions are neutron deficient and, moreover, limited in number (because of the limited number of beam-target combinations). An alternative to fusion-evaporation could be a revolutionary method based on deep-inelastic collisions. The student will therefore also take an active part in the new scientific activities of the group, whose primary aim is the investigation of nuclear structure in the heavy elements employing this new alternative method using multi-nucleon transfer reactions.

##### Prompt and non-prompt quarkonium production in Pb-Pb collisions at 5 TeV in LHC Run 3

SL-DRF-21-0329

Research field: Nuclear physics
Location: Service de Physique Nucléaire (DPhN), Laboratoire plasma de quarks et gluons (LQGP), Saclay
Contact: Javier CASTILLO
Starting date: 01-10-2021
Contact: +33 169087255
Thesis supervisor: Javier CASTILLO, CEA - DRF/IRFU/DPhN/LQGP, +33 169087255

A few microseconds after the Big Bang, the Universe was in a quark-gluon plasma (QGP) state. Such a state is predicted by Quantum Chromodynamics, the theory of the strong interaction, and should be reached at very high temperature or energy density. Such conditions are reproduced in ultra-relativistic heavy-ion collisions at the LHC at CERN. Among the various QGP observables, the study of hadrons with heavy-flavour quarks (charm c or beauty b) and of quarkonia (c-cbar or b-bbar bound states) is particularly important to understand the properties of the QGP. Quarkonia are rare and heavy particles which are produced in the initial stages of the collision, even before the QGP is formed, mainly through gluon-fusion processes, and are therefore ideal probes of the QGP. As they traverse the QGP, the quark/anti-quark pairs get screened by the many free quarks and gluons of the QGP. Quarkonia are then suppressed by a colour-screening mechanism in the QGP. Since the various quarkonium states have different binding energies, each state has a different probability of being dissociated. This results in a sequential suppression pattern of the quarkonium states. Additionally, if the initial number of produced quark/anti-quark pairs is large and if heavy quarks thermalise in the QGP, then new quarkonia could be produced in the QGP by recombination of heavy quarks. This mechanism is known as regeneration. At the LHC, the Upsilon (b-bbar) and J/psi (c-cbar) states are complementary: the former are thought to be better suited to address the sequential suppression, while the latter should allow the study of possible regeneration mechanisms. In addition, non-prompt J/psi, i.e. those from weak decays of hadrons containing one valence b quark, give access to the transport properties of b quarks in the QGP. More recently, photoproduction of J/psi has been observed in peripheral Pb-Pb collisions; these J/psi are produced from the photon flux of the moving Pb ions, mostly at very low transverse momenta. The characterization of these photoproduced quarkonia would help to better constrain the initial state of the collisions as well as the properties of the QGP. We propose to study the production of prompt and non-prompt quarkonia in Pb-Pb collisions at a center-of-mass energy per nucleon pair (sqrt(sNN)) of 5 TeV at the LHC with the first data of Run 3 (2022-2024).
An upgrade of the ALICE apparatus is ongoing with, in particular, the addition of a silicon pixel tracker that will complement the ALICE forward spectrometer, as well as new readout electronics for the latter. These upgrades will allow us to:

- profit from the planned increase in luminosity of the LHC, thus tripling in one year the data collected in the full LHC Run 2 (2015-2018);
- separate the prompt and non-prompt contributions thanks to the precise measurement of the decay vertex of quarkonia into two muons.

The student will first develop the procedures to separate prompt and non-prompt quarkonia. In doing so, the student will contribute to the development of the new software for data reconstruction, simulation, calibration and analysis that the ALICE Collaboration is developing for Runs 3 and 4 of the LHC. Secondly, the student will study the production of prompt and non-prompt quarkonia in terms of production yields and azimuthal anisotropy. These studies could be performed as a function of the centrality of the collision and of the transverse momentum and rapidity of the quarkonia, for various types of quarkonia. Depending on the progress of the thesis work, these studies, which are a priority for quarkonia produced in the hadronic collision, could be extended to photoproduced quarkonia.

##### Studies on neutron-induced reactions with MEDLEY at GANIL

SL-DRF-21-0513

Research field: Nuclear physics
Location: Département Grand Accélérateur National d'Ions Lourds, Saclay
Contact: Xavier LEDOUX
Starting date: 01-10-2021
Contact: Xavier LEDOUX, CEA - DRF/IRFU/GANIL, 02 31 45 46 03
Thesis supervisor: Xavier LEDOUX, CEA - DRF/IRFU/GANIL, 02 31 45 46 03
Personal web page: https://www.ganil-spiral2.eu

This thesis is devoted to the study of nuclear reactions induced by neutrons between 15 and 40 MeV at the NFS facility, using the Medley detector. The double-differential cross sections of the light charged particles emitted during reactions on carbon and chromium will be measured in order to enrich databases and improve certain reaction codes. The fission cross sections of uranium-235 and uranium-238, which are standards, will also be measured relative to the elastic scattering cross section on hydrogen.

##### Fission Studies at VAMOS in Inverse Kinematics (FISVIK)

SL-DRF-21-0511

Research field: Nuclear physics
Location: Département Grand Accélérateur National d'Ions Lourds, Saclay
Contact: J.D. FRANKLAND
Starting date: 01-10-2021
Contact: J.D. FRANKLAND, CNRS - GANIL/Grand Accélérateur National d'Ions Lourds, 0231454628
Thesis supervisor: J.D. FRANKLAND, CNRS - GANIL/Grand Accélérateur National d'Ions Lourds, 0231454628

The nuclear fission process is driven by a complex interplay between the dynamical evolution of a quantum system composed of a large number of nucleons, the intrinsic nuclear structure of the system at extreme deformations, and heat flows. The balance between these various aspects decides the characteristics of the emerging fragments.
Innovative experiments are conducted to widen

##### Axion searches with the International Axion Observatory with ultra-low background Micromegas detectors

SL-DRF-21-0302

Research field: Particle physics
Location: Département d'Electronique, des Détecteurs et d'Informatique pour la physique (DEDIP), DÉtecteurs: PHYsique et Simulation, Saclay
Contact: Thomas PAPAEVANGELOU, Esther FERRER RIBAS
Starting date: 01-10-2021
Contact: 01 69 08 2648
Thesis supervisor: Esther FERRER RIBAS, CEA - DRF/IRFU/DEDIP/DEPHYS, 0169083852
Personal web page: http://irfu.cea.fr/Pisp/esther.ferrer-ribas/

Axions were introduced as the most promising solution to explain the absence of charge-parity symmetry violation in the strong interaction. These neutral, very light particles interact so weakly with ordinary matter that they could contribute to dark matter. Axion search techniques rely on their interaction with photons. Helioscopes search for axions produced in the solar core by the conversion of plasma photons into axions, giving rise to a solar axion flux at the Earth's surface with an energy spectrum in the 1-10 keV region. The International Axion Observatory (IAXO) will achieve a signal-to-background ratio about 4-5 orders of magnitude better than the most sensitive experiments today. BabyIAXO, an intermediate experimental stage of IAXO, will be hosted at DESY (Germany). BabyIAXO is conceived to test all IAXO subsystems (magnet, optics and detectors) at a scale relevant for the final system, and thus to serve as a prototype for IAXO, but at the same time as a fully-fledged helioscope with relevant physics reach in itself and with potential for discovery. IAXO and BabyIAXO will be equipped with X-ray optics coupled to low-background X-ray detectors. The required background levels are extremely challenging, a factor 10 better than current levels. The PhD student will work on the X-ray detector development, in particular on the new generation of Micromegas detectors. The development will focus on the optimization of the background level through a multi-approach strategy combining ground measurements, screening campaigns of detector components, underground measurements, background models and in-situ background measurements, as well as the refinement of rejection algorithms. Physics analysis of BabyIAXO data is expected in the last year of the PhD.

##### Measuring four and three top-quark production in the multilepton channel at the ATLAS experiment

SL-DRF-21-0366

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay
Contact: Frédéric DELIOT
Starting date: 01-10-2021
Contact: Frédéric DELIOT, CEA - DRF/IRFU, 0169086628
Thesis supervisor: Frédéric DELIOT, CEA - DRF/IRFU, 0169086628

The proposed PhD aims at observing, for the first time, the production of four top quarks (tttt) at the LHC using the multilepton signature in data collected with the ATLAS experiment. The production of four top quarks is one of the most spectacular final states that has become accessible at the LHC. While expected to be small in the Standard Model, the tttt cross section is predicted to be strongly enhanced in many new-physics scenarios. The tttt process is also sensitive to the detailed properties of the Yukawa-like interaction between the top quark and the Higgs boson.
Probing this class of events further over the next years will therefore be crucial to better understand the true nature of the Higgs-top-quark interaction and potentially to pinpoint subtle deviations from the Standard Model. Several innovative approaches will be pursued to reach the observation of the tttt process. First, a better separation between the signal and the different background processes will be studied, by designing various multivariate discriminants and by better understanding the modeling of the ttW process. Another path towards observation will be to study the possibility of reconstructing the different top quarks in the final state, which is particularly challenging in this channel. Achieving a measurement of tttt production will make it possible to probe a key property of the top-Higgs interaction, i.e. the CP nature of the top-Higgs coupling. Ultimately, the tttt process should also be measured separately from the production of three top quarks, which is currently totally unexplored experimentally.

##### Design, characterisation and exploitation of resistive MICROMEGAS for the near detector of DUNE

SL-DRF-21-0291

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Neutrinos Accélérateurs (GNA), Saclay
Contact: Guillaume Eurin, Samira Hassani
Starting date: 01-10-2021
Contact: Guillaume Eurin, CEA - DRF/IRFU/DPHP, 0169085925
Thesis supervisor: Samira Hassani, CEA - DRF/IRFU/DPHP/TK2, 0169087226

Description and context:

The discovery of neutrino oscillations proved that neutrinos are not massless. This cannot be explained in the framework of the Standard Model of particle physics. Several experiments, such as T2K or NOvA, are currently investigating neutrino oscillation properties. These experiments study a beam of muon neutrinos produced at particle accelerators. DUNE is one of the main projects under development in this field. The DUNE target mass of 4 × 17 kt, together with a larger beam power and a longer baseline, will allow a very precise measurement of the CP-violating phase (deltaCP) and of the neutrino mass hierarchy. The measurement of CP violation in the leptonic sector would be a major discovery. In several models of new physics it is connected to the matter-antimatter asymmetry observed in the Universe. T2K has already set the first constraint on the value of deltaCP, excluding at the 90% level the conservation of CP in the leptonic sector (deltaCP = 0). The concept of DUNE is very similar to that of existing experiments. Neutrinos are produced in the interaction of a beam with a target and are detected at two locations: the near detector site at ~210 m and the far detector site at ~1300 km. The comparison between the spectra observed at both sites allows the parameters governing neutrino oscillations to be measured. The synergy between the detectors on both sites enables the reduction of the uncertainties coming from the flux or from the neutrino interaction cross-sections. The construction of the near detector of DUNE should start at the end of 2026 or at the beginning of 2027. This makes the coming years crucial for the development of the technologies to be deployed in the detectors and of the data analysis techniques necessary for their scientific exploitation. The beam monitoring for DUNE will be performed by the SAND detector, largely inspired by the one of T2K. The student will take part in the development and optimisation of the resistive Micromegas detectors for the charge readout of the time projection chambers performing track measurements in SAND.
The work will take place within the DUNE international collaboration, with a possible contribution to the tests and commissioning of the detectors for the near detector of T2K: ND280-Upgrade.

Description of the group, institute and supervision:

The DUNE collaboration is an international collaboration of more than a thousand members, with a strong contribution from Europe. The accelerator neutrino group at IRFU/DPhP has been actively involved in the collaboration for several years, notably through the developments for the charge readout of the dual-phase technology for the far detector. The activities on the near detector SAND directly benefit from the expertise acquired in the design of the resistive Micromegas detectors for the near detector of T2K. In this context, the accelerator neutrino group comprises 6 permanent scientists and 2 students involved in the resistive Micromegas activities. The student will also benefit from the collaboration with the detector and technical services at IRFU/DEDIP, one of the international leaders in the development of micro-pattern gaseous detectors. Very advanced tools for detectors, DAQ, slow control and electronics will therefore be available.

Proposed activities:

The student will take part in the optimisation of the design of the resistive Micromegas detectors for the TPCs of SAND, using test-bench data and simulations. Cosmic-ray tests and beam tests will be necessary to characterise the detectors in real conditions. These tests will be performed at the beam lines of CERN and/or DESY and on the equipment at IRFU. A large contribution to the detector installation on site and to their operation is foreseen. The analysis of the data acquired during these tests, which will be done in collaboration with a post-doctoral researcher, will be one of the main activities and could result in a publication. The construction of the near detector of T2K is also expected to be completed by summer 2022. This will give the student the opportunity, already during the internship, to take part in the testing of the resistive detectors in the context of the production of a final detector. The other main proposed activity will be the production and analysis of simulation data of the entire SAND detector. This work will allow the theoretical systematic uncertainties to be re-evaluated using new neutrino-nucleon interaction models, an important contribution to the sensitivity of SAND for the oscillation measurements with DUNE. This analysis of the nuclear models, currently being developed in the group for T2K, will need to be adapted to DUNE while incorporating the specific instrumental developments based on the various prototypes available. The development of the track reconstruction algorithms for the TPCs of SAND will also be carried out in close collaboration with the members of the group currently involved in this activity for the near detector of T2K.

Education and skills required:

A Master's degree in particle physics with knowledge of the Standard Model is a requirement for this thesis. A strong interest in instrumental activities is expected, and a motivation for neutrino physics is a very positive point. Knowledge of C++ and ROOT will also be very useful.

Acquired skills:

By the end of his/her PhD, the student will have a good understanding of the detectors and computing tools used in a particle physics collaboration, thanks to his/her involvement in their development.
He/she will be able to apply his/her technical skills on detectors and data analysis methods in other contexts.

Collaboration/Partnerships:

The student will work within a large international collaboration. This will provide very good experience in particle physics and significant visibility through participation in physics schools, workshops and conferences, where he/she will present his/her results.

##### STUDY OF THE RARE DECAY OF THE HIGGS BOSON TO A PAIR OF MUONS WITH THE ATLAS DETECTOR AT THE LHC

SL-DRF-21-0352

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay
Contact: RODANTHI NIKOLAIDOU
Starting date: 01-09-2021
Contact: RODANTHI NIKOLAIDOU, CEA - DRF/IRFU/SPP/Atlas, 0169086157
Thesis supervisor: RODANTHI NIKOLAIDOU, CEA - DRF/IRFU/SPP/Atlas, 0169086157

In July 2012, the ATLAS and CMS collaborations announced the discovery of a new particle with a mass of about 125 GeV at the Large Hadron Collider (LHC) at CERN. Since this discovery, the two collaborations have been actively studying the properties of this new particle, so far consistent with those of the Standard Model Higgs boson. In the Standard Model, the Brout-Englert-Higgs mechanism predicts that the Higgs boson interacts with the particles of matter (quarks and leptons, called fermions) with a strength proportional to the mass of the particle. It also predicts that the Higgs boson interacts with the force-carrier particles (W and Z bosons) with a strength proportional to the square of the particle's mass. Therefore, by measuring the Higgs boson decay and production rates, which depend on the interaction strength with these other particles, one can perform a fundamental test of the Standard Model. The ATLAS and CMS collaborations have already observed the decay of the Higgs boson into tau leptons, belonging to the third "generation" of fermions. Since muons are much lighter than tau leptons, the decay of the Higgs boson into a muon pair is expected to occur about 300 times less often than that into a tau-lepton pair. Despite this scarceness, the H → µµ decay provides the best opportunity to measure the interaction of the Higgs boson with second-generation fermions at the LHC, providing new information on the origin of mass for the different generations of fermions. The ATLAS and CMS collaborations recently presented results on this decay using the dataset collected during the second phase of the LHC (Run-2, from 2015 to 2018). Figure 1 shows the mass distribution of muon pairs, while Figure 2 shows an example of a candidate event for a Higgs boson decaying into two muons, as recorded by the ATLAS detector. The study of this process is one of the main objectives of the third phase of the LHC (Run-3). The aim of this thesis is the search for a Higgs boson decaying into two muons by analyzing the whole Run-3 dataset and combining it with the data from the second phase (Run-2), in order to establish the discovery of the decay of the Higgs boson into two muons and to constrain possible theories of physics beyond the Standard Model that would affect this decay mode. The thesis will also include work on the performance evaluation of the ATLAS muon spectrometer. Particular attention will be paid to the understanding, analysis and operation of MicroMegas-type gas detectors. The purpose of phase-I of the ATLAS detector upgrade is to prepare for the high luminosities that the LHC will deliver.
In this context, the two large detection planes called NSW (New Small Wheel) will be equipped with new MicroMegas-type detectors; they will replace part of the ATLAS muon spectrometer and be operational for the restart of the LHC in 2022.

##### MEASUREMENT OF VECTOR-BOSON SCATTERING WITH THE ATLAS DETECTOR AT THE LHC

SL-DRF-21-0369

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay
Contact: Maarten Boonekamp
Starting date: 01-09-2021
Contact: Maarten Boonekamp, CEA - DRF/IRFU/SPP/Atlas
Thesis supervisor: Maarten Boonekamp, CEA - DRF/IRFU/SPP/Atlas

The vector-boson scattering process, pp → VV+jj+X, where V = W, Z, is characterized by the presence of leptons from the W or Z decay and of high-energy forward jets in the final state. WW, WZ and ZZ scattering have all been observed during LHC Run 2, with partial datasets. The next step for the community in this field, using improved event selections and event classification and larger data samples, is to enhance the visibility of the signal and to confront the SM predictions with more precise analysis results. This measurement constitutes a fundamental test of the coupling between the Higgs and the vector bosons, and of the Standard Model as a whole.

##### CALIBRATION OF BOLOMETERS AT THE keV SCALE AND COHERENT NEUTRINO SCATTERING WITH THE NUCLEUS EXPERIMENT

SL-DRF-21-0270

Research field: Particle physics
Location: Service de Physique Nucléaire (DPhN), Laboratoire etudes et applications des reactions nucleaires (LEARN), Saclay
Contact: David LHUILLIER
Starting date: 01-10-2021
Contact: 01 69 08 94 97
Thesis supervisor: David LHUILLIER, CEA - DSM/IRFU/SPhN/MNM, 01 69 08 94 97

The central topic of this thesis is the NUCLEUS experiment, whose motivation is to measure the coherent scattering of neutrinos emitted by the reactors of the EDF power plant at Chooz, in the Ardennes. Although, in the MeV energy range that concerns us, coherent scattering on nuclei is the most probable mode of interaction of neutrinos with matter, it is extremely difficult to detect because its only signature is the tiny recoil of the target nucleus. Thus the first observation of this process dates only from 2017, with neutrinos of a few tens of MeV from the Oak Ridge spallation source. Measurements at reactors have yet to be made, and NUCLEUS aims to carry out a precise study of this as yet unexplored neutrino-matter coupling, with a unique sensitivity to potential new physics in the electroweak sector of the Standard Model. Coherent scattering differs from the inverse beta decay reaction used up to now by an interaction cross section several orders of magnitude higher, allowing a miniaturization of the detectors: only 10 g of target for the first phase of NUCLEUS. Finally, the absence of a reaction threshold (compared to 1.8 MeV for inverse beta decay) could also allow direct monitoring of the accumulation of plutonium in nuclear reactor cores. NUCLEUS will use sapphire (Al2O3) and calcium tungstate (CaWO4) bolometers in the form of cubic crystals of 5 mm edge. A detection threshold of 20 eV has already been reached with this technology. The thesis work proposed here will focus on two central aspects of the experiment: the calibration of the detectors and the rejection of cosmic rays, the main source of background. An accurate calibration is indeed crucial to study coherent scattering and to reach the best sensitivity to potential new physics.
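For orientation, an illustrative kinematic estimate (not taken from the proposal text): a neutrino of energy E_nu scattering elastically off a nucleus of mass M can transfer at most

```latex
T_{\max} \;=\; \frac{2E_\nu^{2}}{M + 2E_\nu} \;\approx\; \frac{2E_\nu^{2}}{M},
\qquad
E_\nu = 4~\mathrm{MeV},\; M(^{184}\mathrm{W}) \approx 171~\mathrm{GeV}
\;\Rightarrow\; T_{\max} \approx 190~\mathrm{eV},
```

which is why the recoils of interest discussed next are of order 100 eV, just above the already demonstrated 20 eV thresholds.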
Although the energy range of the expected nuclear recoils, of the order of 100 eV, is above the detection thresholds already achieved, no absolute calibration method for bolometers currently exists in this new region of interest. The extrapolation of the available measurements from the keV scale is problematic, due to a rapid and non-trivial evolution of the contributions of the different excitation modes: phonons, ionization and scintillation. A new method proposed by the Department of Nuclear Physics of CEA-Saclay (DPhN) would give access, for the first time, to calibrated nuclear recoils in the 100 eV range, uniformly distributed in the volume of the bolometer. The validation of this method and a first measurement with a NUCLEUS bolometer will be developed during the thesis, in collaboration with IJCLab in Orsay, the Technical University of Munich (TUM) and the Technical University of Vienna (TU Wien). Applicable to different types of bolometers, this method has a potentially strong scientific impact for coherent neutrino scattering programs and light dark matter searches, but also for solid-state physics.

DPhN is also heavily involved in the development of the NUCLEUS muon veto. This active shielding surrounds the central detectors as hermetically as possible with plastic scintillator panels, whose light is extracted by optical fibers connected to silicon photomultipliers (SiPMs). Its purpose is to tag the passage of cosmic rays near the bolometers in order to reject any event (potentially background) during the following ~100 microseconds. Data from this detector are a natural input to the NUCLEUS analysis. The start of data collection on the EDF site is planned for the end of 2022 or the beginning of 2023.

Finally, DPhN is also at the origin of the STEREO experiment, which is motivated by the search for sterile neutrinos and the precise measurement of the neutrino spectrum resulting from the fission of 235U. It is installed at the ILL research reactor and is completing its data collection this year. Part of the thesis work could be oriented towards combining the final results of STEREO with those of other neutrino experiments, an effort already started with the PROSPECT collaboration. Some of the techniques involved in spectrum unfolding and global fits could be transferable to NUCLEUS.

Organization of the work:

The priority at the beginning of the thesis will be put on the development of the calibration method for 100 eV bolometers, with a first proof-of-concept step at CEA and Orsay in 2021-22, followed by a measurement with a NUCLEUS bolometer in Germany in 2022-23. This work should lead to several publications. Involvement in the analysis of NUCLEUS data will be stepped up in the second part of the thesis. The entry point will be the exploitation of data from the muon veto, installed on the EDF site from the end of 2022. The first task will be the optimization of the gains and thresholds of each SiPM, in order to ensure a high rejection of ambient gamma rays, a high muon detection efficiency and a controlled acquisition dead time. Automatic monitoring of the time evolution of the performance will be set up. Further analysis will then focus on a specific source of background generated by cosmic rays. In connection with the work on the calibration of bolometers, sensitivity studies could be carried out within the framework of the low-energy tests of the Standard Model accessible to NUCLEUS: the running of sin²θW, the magnetic moment of the neutrino, etc.
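As an illustration of how the weak mixing angle enters such low-energy tests (the standard tree-level scaling, not text from the proposal): the coherent elastic scattering rate goes as the square of the nuclear weak charge,

```latex
\frac{d\sigma}{dT} \;\propto\; Q_W^{2},
\qquad
Q_W \;=\; N - \left(1 - 4\sin^{2}\theta_W\right) Z ,
```

so with 1 - 4 sin²θW of order 0.05 the rate is dominated by the neutron number N, and a precise rate measurement constrains sin²θW at very low momentum transfer.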
A synergy with some developments of the final STEREO analysis would then be exploitable. Through this work the student will receive complete training as an experimental physicist, with aspects of simulation, detector development and data analysis. The physics topics addressed, coherent neutrino scattering and bolometer calibration, are very active in the community and will offer many research perspectives at the end of the thesis. The student will work within international collaborations. Within the CEA, he or she will benefit from the cross-cutting character of neutrino physics and will be in regular interaction with the nuclear physics, particle physics and reactor physics communities.

##### Coherent Elastic Neutrino-Nucleus scattering and search for new physics with the NUCLEUS experiment

SL-DRF-21-0298

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Sources et Réacteurs (GNSR), Saclay
Contact: Matthieu Vivier
Starting date: 01-10-2021
Contact: 0169086626
Thesis supervisor: Matthieu Vivier, CEA - DRF/IRFU/DPHP/Double Chooz, 0169086626

This PhD topic is about the NUCLEUS experiment, which aims at precisely measuring coherent elastic neutrino-nucleus scattering (CEvNS) at the Chooz nuclear power plant (France). Although CEvNS is the predominant interaction process of neutrinos with matter at ~MeV energies, it remained unobserved for a very long time because of the difficulty of measuring the very low-energy nuclear recoils it induces. It was only 40 years after its first prediction that this process was observed, in 2017, with neutrinos of a few tens of MeV at the Oak Ridge laboratory (Tennessee). The first detection of CEvNS at a nuclear reactor remains to be achieved, especially because the corresponding nuclear recoils lie in an energy regime (~100 eV) which is difficult to measure with conventional detection technologies, and also because of the unfavorable background conditions that nuclear power plant environments generally offer. The NUCLEUS collaboration is therefore working on the design of an innovative detection system using two cryogenic calorimeter arrays capable of reaching ~10 eV energy thresholds, surrounded by a twofold system of instrumented cryogenic vetoes. This set of cryogenic detectors will be protected by an external passive shielding and by a muon veto to improve the identification and discrimination of backgrounds. With this system, NUCLEUS aims at a precise measurement of CEvNS in order to push the study of the fundamental properties of the neutrino, as well as the search for beyond-Standard-Model physics, towards the low-energy frontier. Interestingly, CEvNS also exhibits a cross section 10 to 1000 times larger than the usual ~MeV neutrino detection channels (the inverse beta decay reaction and the neutrino-electron scattering process), making it possible to miniaturize future long-range neutrino detection setups. The first phase of the NUCLEUS experiment will, for instance, deploy an array of cryogenic calorimeters made of sapphire (Al2O3) and calcium tungstate (CaWO4) crystals, totaling 10 g of detector. In addition to the characterization and preparation of the experimental site at Chooz, our team at Irfu is taking a leading role in the project through several hardware and software developments. In particular, DPhP is strongly involved in the realization of one of the instrumented cryogenic shields of the experiment, here called the cryogenic outer veto.
This detector consists of an arrangement of high-purity germanium crystals, erected around the two cryogenic calorimeter arrays and operated in ionization mode. This detection system will play a central role in the identification and discrimination of external backgrounds, such as ambient gamma radioactivity or atmospheric muons resulting from the interaction of primary cosmic rays in the atmosphere. The exploitation of the data delivered by this detector is therefore a natural entry point into the global analysis effort to extract a first CEvNS signal at a reactor, with first background data collected in 2021/2022 during the blank assembly phase at the Technical University of Munich, and with data collected during the first physics run planned in 2023 at Chooz. The work proposed in this PhD thesis focuses on the external cryogenic veto of the experiment, with the ultimate goal of achieving a comprehensive understanding of the backgrounds in the CEvNS region of interest, between 0.01 and 1 keV. Priority will initially be given to the realization and commissioning of the external cryogenic veto system during the blank assembly phase in Munich. This work includes the assembly of the different detector elements (crystals, support mechanics, readout electronics, etc.) in the cryostat of the experiment, as well as all the tests necessary to validate and quantify the performance of this detector. In a second step, the student will ramp up in the collaboration analysis effort by contributing to the development of analysis and simulation tools. These tools will be used to interpret the background and detector calibration data acquired during the blank assembly phase and during the first physics run. He or she will focus on the study of a specific source of external background and will quantify its impact on the physics potential of the experiment. This work will require a good understanding of the processes governing radiation interactions in matter and of the solid-state physics driving the behavior of cryogenic detectors (e.g. phonon propagation). Finally, the student will use the first data from the physics run at Chooz to conduct a search for new physics beyond the Standard Model (measurement of the weak mixing angle at low energies, search for new neutrino couplings, constraints on the electromagnetic properties of the neutrino, etc.). This work will require the implementation of advanced statistical methods for interpreting the data, on the one hand to understand the impact of the various sources of uncertainty on the constraints obtained, and on the other hand to guarantee the reliability of the results.

##### LHC luminosity measurement with the ATLAS Liquid Argon Calorimeter and search for long-lived massive particles

SL-DRF-21-0321

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay
Contact: Philippe Schwemling
Starting date: 01-10-2021
Contact: Philippe Schwemling, CEA - DRF/IRFU/DPHP/Atlas, 33 1 69 08 85 85
Thesis supervisor: Philippe Schwemling, CEA - DRF/IRFU/DPHP/Atlas, 33 1 69 08 85 85

Since the discovery of the Higgs boson, efforts have focused on the search for new phenomena beyond the Standard Model. One of the important aspects of the comparison between experimental measurements and theory is the need to normalize experimental results to theory as precisely as possible. In practice, this means being able to measure the luminosity of the LHC as precisely as possible.
The goal is to reach a precision better than 1% within the next few years, a factor of two or three better than the precision reached up to now. After the LHC restart, foreseen in 2022, it is planned to increase the luminosity by a factor of about two. To make the best of this luminosity increase, the calorimeter trigger system is being significantly modified and upgraded. The upgraded trigger system is based on the real-time analysis, by FPGAs, of the digitized detector signals. An essential feature of the upgraded trigger system is its ability to measure the energy deposited in the calorimeter bunch crossing by bunch crossing. Combined with the stability, excellent linearity and response uniformity of the ATLAS Liquid Argon calorimeter, the upgraded trigger system offers the potential to measure the luminosity with excellent linearity and stability performance. A very promising analysis technique would be to use a neural net that could be implemented in the core of the FPGA that processes the data. Another feature of the upgraded trigger system is its ability to keep track of all the interactions taking place in the detector over a much longer period of time than the main readout. The main readout system is able to keep in memory only up to four or five consecutive interactions, whereas the trigger system has the capability to keep track of each individual bunch crossing over a period corresponding to several tens of consecutive bunch crossings. This long-term-memory feature makes it possible to compensate in real time for the effect of space-charge accumulation, which will be crucial for data taken after 2025, at very high luminosity. More importantly, it also opens up the possibility to detect particles reaching the detector long after their production (several tens or even hundreds of ns, to be compared with the 25 ns between two consecutive bunch crossings). Such particles are slow and very heavy, and can be detected almost up to the kinematic limit of 7 TeV. This is significantly higher than the limits reachable by more classic techniques. Such particles typically appear in many classes of supersymmetric models.

##### ARTIFICIAL INTELLIGENCE TO SIMULATE BIG DATA AND SEARCH FOR THE HIGGS BOSON DECAY TO A PAIR OF MUONS WITH THE ATLAS EXPERIMENT AT THE LARGE HADRON COLLIDER

SL-DRF-21-0478

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay
Contact: RODANTHI NIKOLAIDOU
Starting date: 01-09-2021
Contact: RODANTHI NIKOLAIDOU, CEA - DRF/IRFU/SPP/Atlas, 0169086157
Thesis supervisor: RODANTHI NIKOLAIDOU, CEA - DRF/IRFU/SPP/Atlas, 0169086157

New artificial intelligence techniques are attracting growing interest for handling the massive volume of data collected by particle physics experiments, particularly at the LHC collider. This thesis proposes to study these new techniques for the simulation of the background to the rare events originating from the decay of the Higgs boson into two muons, as well as to set up a new artificial intelligence method to extract these rare events from the enormous dimuon background. In 2012, the Higgs boson, a fundamental part of the Standard Model of particle physics, was discovered at the LHC. The demonstration of its dimuon decay is now at the heart of the LHC program to measure the coupling of the Higgs boson to second-generation particles. Simulating the dimuon background with sufficient statistics is the first challenge of this analysis. The thesis proposes to test, for the first time, the use of very promising artificial intelligence models as a simulation method, using Generative Adversarial Networks (GANs) with an architecture of two competing networks. In addition, the thesis also foresees a complete redesign of the analysis in order to implement new data processing methods (deep neural networks) to optimize the extraction of the weak signal.
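As a purely illustrative sketch of the "two competing networks" idea (a generic toy example, not the thesis analysis code; the falling toy "mass" spectrum and all network sizes are assumptions), a minimal GAN in PyTorch can be written as:

```python
import torch
import torch.nn as nn

# Toy stand-in for a dimuon background: a steeply falling "mass" spectrum in GeV.
def sample_background(n):
    return 105.0 + torch.distributions.Exponential(0.05).sample((n, 1))

generator = nn.Sequential(           # maps random noise to fake "mass" values
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 1))
discriminator = nn.Sequential(       # scores how "real" a mass value looks
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 1))                # raw logit; BCEWithLogitsLoss applies the sigmoid

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = sample_background(256)
    fake = generator(torch.randn(256, 8))

    # Discriminator update: real events -> label 1, generated events -> label 0.
    d_loss = loss_fn(discriminator(real), torch.ones(256, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call its output "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generator(torch.randn(N, 8)) yields synthetic background events.
```

In a realistic application the generator would of course produce the full set of event-level variables and be trained against fully simulated background samples.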
##### MEASUREMENT OF THE W-BOSON MASS WITH THE ATLAS DETECTOR AT THE LHC

SL-DRF-21-0367

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay
Contact: Maarten Boonekamp
Starting date: 01-09-2021
Contact: Maarten Boonekamp, CEA - DRF/IRFU/SPP/Atlas
Thesis supervisor: Maarten Boonekamp, CEA - DRF/IRFU/SPP/Atlas

The goal of the thesis is the measurement of a fundamental parameter of the Standard Model of particle physics, the W boson mass, with the ATLAS detector at the LHC, using leptonic W boson decays. The analysis will be based on dedicated low-pile-up data samples, which have limited integrated luminosity but optimal experimental resolution in the reconstruction of the missing transverse energy, a requirement in the analysis of final states with neutrinos. The candidate will participate in the installation and commissioning of the New Small Wheel, an upgraded muon detector for the ATLAS endcaps. IRFU has played a leading role in its construction and will be strongly involved in its scientific exploitation. In addition, the candidate will calibrate the muon momentum with sufficient precision for the measurement. The second phase of the project consists in improving the QCD aspects of the modelling of W-boson production and decay, and in optimizing the analysis to minimize the resulting measurement uncertainty. After completion, the measurement will be interpreted in terms of compatibility with the Standard Model or as a hint of new physics.

##### 3D imaging of a nuclear reactor during its decommissioning using muon tomography

SL-DRF-21-0372

Research field: Particle physics
Location: Service de Physique des Particules (DPHP), Groupe Santé et Energie (GSE), Saclay
Contact: Hector GOMEZ, Sébastien Procureur
Starting date: 01-10-2021
Thesis supervisor: Sébastien Procureur, CEA - DRF/IRFU/DPhP, (+33)(0)1 69 08 39 22

The final goal of this PhD thesis is to obtain the very first 3D image of a nuclear reactor with a non-invasive method, namely muography. This penetrating imaging technique has developed rapidly in recent years, with major technological improvements including several innovative contributions from the CEA. Muon telescopes of unprecedented resolution have thus unveiled the 2D structure of very large objects such as Khufu's Pyramid or a nuclear reactor. Recently, an algorithm was developed to combine these 2D images into a 3D tomography, despite the small number of available projections and the huge size of the corresponding matrix system. This PhD will therefore be dedicated to applying this algorithm to ongoing muography measurements on a nuclear reactor in its decommissioning phase. The PhD student will actively participate in the data taking, the data analysis and the corresponding simulations. He/She will first apply the algorithm to smaller objects, in particular nuclear waste containers in various environments, in order to understand and optimize its performance.
These intermediate steps, beyond their own interest, will help not only to better tune the algorithm parameters but also to define the future measurements (positions, orientations, acquisition times, etc.). The overall goal of this work is thus the development of a generic, innovative 3D imaging tool in the field of decontamination and decommissioning, with certainly many more applications in the societal and academic domains. ##### Gluon tomography with exclusive vector meson production SL-DRF-21-0568 Research field : Particle physics Location : Service de Physique Nucléaire (DPhN), Laboratoire structure du nucléon (LSN), Saclay Contact : Francesco BOSSU Franck SABATIE Starting date : 01-10-2021 Contact : Thesis supervisor : Franck SABATIE, CEA - DRF/IRFU/SPhN 01 69 08 32 06 Thesis: Gluon tomography with exclusive vector meson production The understanding of the origin of the mass, the spin and the structure of the nucleons (i.e. protons and neutrons) in terms of their elementary constituents (quarks and gluons, collectively called partons) is among the unanswered questions in particle physics. The theoretical framework of Generalized Parton Distributions (GPDs) encodes the three-dimensional structure of a nucleon, and its study will provide insights into the origin of the fundamental properties of protons and neutrons. Experimentally, the cleanest method to study the internal structure of nucleons is to collide them with electrons at high energies. CEA/Irfu staff members are among the principal investigators of ongoing experiments at Jefferson Lab (JLab) in the USA, where a high-current electron beam of up to 11 GeV collides with fixed targets of several types, and of the future experiments at the Electron Ion Collider (EIC), where electrons and protons will collide at center-of-mass energies of up to 140 GeV. The high luminosities available at JLab and at the future EIC allow the properties of the nucleons to be studied with high statistical accuracy, including via rare processes. Contrary to naive expectations, it has been shown that it is not the valence quarks but rather the gluons that carry the major contribution to the mass and the spin of the nucleons. Therefore, it is crucial to precisely characterize gluon distributions in order to fully understand the properties of the nucleons. In particular, the current knowledge of the gluon GPDs is rather limited. GPDs are accessible through the study of exclusive processes in which all the final-state particles are detected; specifically, gluon GPDs can be accessed via the study of the exclusive electro-production of vector mesons such as the rho, phi and omega mesons. The goal of this thesis will be to analyze the data taken with the CLAS12 experiment at Jefferson Lab, focusing on measurements of exclusive meson production. Given the large size of the datasets, the student will have the opportunity to develop and apply machine learning algorithms to improve the reconstruction and the selection of event candidates. Extensive studies on simulated data will be necessary to fully understand the data, to train and optimize the candidate-selection algorithms, to adapt ML models to the real data and to tame possible systematic uncertainties. Building on the experience gained through the analysis of CLAS12 data, the candidate will also participate in simulation studies for the feasibility and optimization of the future EIC detectors for exclusive vector meson electro-production at high energies.
The thesis will be carried out within the Laboratory of Nucleon Structure of the Department of Nuclear Physics of CEA/Irfu. The laboratory is composed of both experimentalists and theorists: the frequent interactions make the work environment very enriching. Knowledge of particle physics and computer science would help the candidate to quickly and actively participate in the data analysis effort. Basic knowledge of particle detectors would also be an advantage for efficiently understanding the experimental setup used for data collection. The student will also have the opportunity to collaborate with several researchers both locally (e.g. IJCLab in Orsay and CPHT at Ecole Polytechnique) and internationally. The student will be part of the CLAS collaboration and will also join the EIC user group, which will require trips to the United States for data taking and workshops. The student will have the opportunity to present the results of this research at international conferences. Contact: Francesco Bossù, CEA Saclay – IRFU/DPhN/LSN (francesco.bossu@cea.fr) ##### Charged particle tracking in heavy-ion collisions in LHCb and data analysis in fixed-target collisions at the LHC SL-DRF-21-0500 Research field : Particle physics Location : Service de Physique Nucléaire (DPhN), Laboratoire plasma de quarks et gluons (LQGP), Saclay Contact : Michael Winn Alberto Baldisseri Starting date : 01-10-2021 Contact : +33 1 69 08 55 86 Thesis supervisor : Alberto Baldisseri, CEA - DRF/IRFU/SPhN/ALICE +33 169089333 Created in heavy-ion collisions at the LHC (CERN), the quark-gluon plasma (QGP) is an extreme state of matter in which the constituents of nucleons are 'deconfined' for long enough to be studied. Among the CERN LHC collaborations, LHCb studies the QGP both in collider mode and through a fixed-target programme that is unique at the LHC. The current performance of the tracking detectors is limited in the most violent collisions, but several upgrades are foreseen on the horizon of 2030. The first goal of this thesis is to develop the tracking in order to ensure optimal performance in future heavy-ion data taking. These studies will make it possible to define the performance parameters that need to be achieved by the different subdetectors. Furthermore, alternative algorithms based on artificial intelligence will be explored in order to achieve the maximal detector performance. In parallel, an analysis component is proposed, based on the fixed-target data. In particular, we propose to measure charm particle production. Unique in this kinematic and energy range, these fixed-target collision measurements with the LHCb detector at the LHC will help to better establish the role of charm quarks as observables sensitive to deconfinement.
##### Towards a high spatial resolution pixel detector for particle identification: new detectors' contribution to physics SL-DRF-21-0714 Research field : Particle physics Location : Département d’Electronique, des Détecteurs et d’Informatique pour la physique (DEDIP), DÉtecteurs: PHYsique et Simulation, Saclay Contact : Nicolas FOURCHES Starting date : 01-09-2021 Contact : Nicolas FOURCHES, CEA - DRF/IRFU/DEDIP/DEPHYS 0169086164 Thesis supervisor : Nicolas FOURCHES, CEA - DRF/IRFU/DEDIP/DEPHYS 0169086164 Future experiments at linear colliders (e+e-) with low hadronic background require the spatial resolution of pixel vertex detectors to be improved to the micron range, in order to determine precisely the primary and secondary vertices of particles with high transverse momentum. This kind of detector sits closest to the interaction point. This will provide the opportunity to make precision lifetime measurements of short-lived charged particles. We need to develop pixel arrays with a pixel area below one square micron. The proposed technologies (DOTPIX: Quantum Dot Pixels) should give a significant advance in particle tracking and vertexing. Although the principle of these new devices has already been studied at IRFU (see reference), this doctoral work should focus on the study of real devices, which should then be fabricated using nanotechnologies in collaboration with other institutes. This will require the use of simulation codes and the fabrication of test structures. Applications outside basic physics include X-ray imaging and optimum-resolution sensors for visible-light holographic cameras. ##### Z boson precision physics with the Atlas detector at LHC SL-DRF-21-0105 Research field : Particle physics Location : Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay Contact : Fabrice Balli Starting date : 01-09-2021 Contact : +33169081715 Thesis supervisor : Fabrice Balli, CEA - DRF/IRFU/DPHP/Atlas +33169081715 The thesis will start in Autumn 2021. ATLAS, one of the major experiments at the LHC, is preparing for the expected increase in luminosity for Run 3 and the HL-LHC. The first part of the thesis is dedicated to a qualification task, which could consist either of participating in the commissioning of the new muon detectors being integrated into the experiment, or of taking part in the muon momentum calibration effort in view of Run 3, which will start in 2022. Both options are closely related to the main thesis subject. This will be followed by a precision physics measurement in the Z-boson sector with ATLAS data. The subject is focused on electroweak precision physics in ATLAS. The aim is to measure, with the best possible precision, the electroweak mixing angle as well as the mass of the Z boson, using Run 2 and Run 3 data. The explored channel is that of the Z boson decaying into a muon-antimuon pair. The student will work on muon momentum calibration using the J/Psi resonance as a standard candle, and will also reduce, through advanced fitting methods, the uncertainties related to the parton distribution functions (PDFs). These measurements should lead to a significant improvement in the electroweak fit and thus significantly constrain the Standard Model, as well as physics beyond the Standard Model. The CEA ATLAS group is part of the Department of Particle Physics (DPhP) of the Institute of Research into the Fundamental Laws of the Universe (IRFU) at CEA Paris-Saclay. DPhP comprises about 110 physicists.
DPhP scientific themes include the elementary components of matter at the highest energies at the CERN LHC collider, R&D for future accelerators, the study of antimatter, neutrino physics, gamma-ray astronomy, the study of gravitational waves, observational cosmology and instrumentation for medical applications. The group has world-leading expertise in electroweak physics, namely measurements of Z, W and Higgs boson cross sections and the measurement of the W-boson mass, achieved for the first time at the LHC. It builds on competences in muon reconstruction, muon spectrometer alignment and electron/photon identification. ##### Deep learning to discover rare complex signals with the Atlas experiment at the LHC SL-DRF-21-0755 Research field : Particle physics Location : Service de Physique des Particules (DPHP), Groupe Atlas (ATLAS), Saclay Contact : Frédéric DELIOT Starting date : 01-10-2021 Contact : Frédéric DELIOT, CEA - DRF/IRFU 0169086628 Thesis supervisor : Frédéric DELIOT, CEA - DRF/IRFU 0169086628 This PhD proposes to apply artificial intelligence algorithms to big data in two innovative ways, by exploiting the large proton-proton collision dataset collected by the ATLAS experiment at the Large Hadron Collider (LHC). The challenge is to extract processes that are both rare and complex from the huge amount of LHC data. Cutting-edge deep learning techniques will first be explored to reconstruct complex final states with underconstrained kinematics. This would allow the final-state particle energies and momenta to be reconstructed using known conservation laws. Second, deep learning will be used to extract rare signals. These new developments will be applied to two very rare and complex processes (ttH and 4-top). Combined, these two processes will allow a test of the true nature of the coupling between the cornerstone of the Standard Model, the Higgs boson, and the heaviest elementary particle, the top quark, and could reveal new sources of asymmetry between matter and antimatter. Unsupervised training will be tested for the first time for final-state reconstruction. Observables based on the fully or partially reconstructed final-state particles should then improve the ability to extract rare signals, using for instance Graph Neural Network classifiers newly introduced in high-energy physics. Exploring these new strategies for event reconstruction and classification will pave the way to understanding how the increased amount of data expected in the next phase of the LHC can be exploited in an optimal way. ##### A simultaneous determination of parton-distribution and fragmentation functions using artificial neural networks SL-DRF-21-0317 Research field : Theoretical Physics Location : Service de Physique Nucléaire (DPhN), Laboratoire structure du nucléon (LSN), Saclay Contact : Valerio Bertone Hervé Moutarde Starting date : 01-10-2021 Contact : Thesis supervisor : Hervé Moutarde, CEA - DRF/IRFU/SPhN/Théorie Hadronique 33 1 69 08 73 88
Title: LOW-RESOLUTION ROTATIONAL SPECTRA. Creators: Scharpen, Leroy H. Issue Date: 1969 Publisher: Ohio State University Abstract: The microwave rotational spectra of a large number of organic and several organometallic molecules in the molecular weight range 70-348 have been observed under conditions of low resolution. In many (more than 40) of the spectra, the prominent feature is one or more series of broad ($\sim 10$–$200$ MHz) but generally quite intense "bands". These bands are interpreted as the sum of unresolved, high-K lines for particular $J \rightarrow J+1$ transitions of near-symmetric top molecules. J-values as high as 63 were observed for one of the heavier molecules. Within the measurement uncertainties, the band frequencies $\nu_{J}$ in a series are reproduced by the simple expression $\nu_{J} = B^{\ast}(J+1)$, where $B^{\ast}$ will be approximately $(B+C)$ for a near-prolate rotor. Compounds studied had asymmetry parameters in the range $0.8 < |\kappa| < 1.0$. For the more asymmetric compounds, the bands showed some structure and could be resolved into components under conditions of high-resolution search. Rotational isomerism is thought to be the origin of the multiple series of bands found in the 1-haloalkanes ($C_{3}$ to $C_{6}$ compounds) and certain benzyl derivatives. Molecular constants and conformational information extracted from these band spectra will be compared to those from high-resolution studies (where available) and to those calculated from assumed molecular structures. Description: Author Institution: Hewlett-Packard Company, 1501 Page Mill Road, Palo Alto URI: http://hdl.handle.net/1811/15791 Other Identifiers: 1969-O-9
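As a small worked example of the band expression quoted in the abstract, the sketch below evaluates $\nu_J = B^{\ast}(J+1)$ with $B^{\ast} \approx B + C$ for a hypothetical near-prolate rotor. The rotational constants are arbitrary illustrative values, not those of any molecule studied in the paper.

```python
# Sketch: band positions predicted by the low-resolution expression
# nu_J = B*(J+1) with B* ~ (B + C) for a near-prolate top.
# B and C below are assumed, illustrative rotational constants.

B, C = 1510.0, 1460.0          # MHz, hypothetical values
B_star = B + C                 # effective constant for the unresolved bands

for J in range(20, 26):
    print(f"J = {J} -> {J + 1}:  nu ~ {B_star * (J + 1) / 1000:.2f} GHz")
```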
## Application of Genetic Algorithm to Solar Panel Efficiency; A Case Study of Port-Harcourt Metropolis #### Roland Uhunmwangho [1], Berebon Victor Leesi [2], Ameze Big-Alabo [3] This study focuses on the evaluation of the efficiency of solar panels used within the Port Harcourt environment. A major factor affecting the efficiency of solar panels is the difference in region or weather and the ability of the solar panel to convert incident radiation into electrical energy. Solar panels have varying efficiency levels under different weather conditions, and they often fall short of their expected efficiency. It is therefore important to have adequate knowledge of the performance characteristics of a panel under specific weather conditions to ensure maximum output. For this research work, the panel evaluated is a 125 W polycrystalline solar panel made of Gallium Arsenide. The efficiency of the panel is calculated and recorded. The weather data for Port Harcourt are collected from the centre for data collation at Rivers State University. The peak radiation value obtained from the weather data for the year under consideration is used to evaluate the efficiency, and a Genetic Algorithm is used to determine the optimal parameters of the cells making up the panel, yielding an optimized cell that improves the efficiency of the panel. To do this, the initial cell properties are extracted and tabulated; the Genetic Algorithm is then used to improve these properties, achieving better efficiency in the process. Genetic Algorithm, Solar Panels, Weather, Optimization, Efficiency
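As an illustration of how a genetic algorithm can tune cell parameters toward higher efficiency, the sketch below evolves a small population of candidate cells against a toy fitness function. It is a generic GA skeleton written for this summary, not the authors' implementation; the parameter names, bounds and surrogate efficiency model are assumptions.

```python
# Minimal genetic-algorithm sketch for tuning solar-cell parameters.
# The "efficiency" function is a simplified placeholder, not the authors'
# 125 W panel model; parameter names and bounds are illustrative assumptions.
import random

BOUNDS = {                                   # hypothetical search ranges
    "series_resistance": (0.1, 2.0),         # ohms
    "shunt_resistance": (50.0, 500.0),       # ohms
    "ideality_factor": (1.0, 2.0),
}

def efficiency(cell):
    """Toy surrogate: efficiency improves with low series resistance,
    high shunt resistance and an ideality factor close to 1."""
    rs, rsh, n = (cell["series_resistance"], cell["shunt_resistance"],
                  cell["ideality_factor"])
    return 0.20 * (1 - rs / 2.0) * (rsh / 500.0) * (2.0 - n)

def random_cell():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(cell, rate=0.1):
    out = dict(cell)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            out[k] = random.uniform(lo, hi)
    return out

def run_ga(pop_size=50, generations=100):
    population = [random_cell() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=efficiency, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=efficiency)

if __name__ == "__main__":
    best = run_ga()
    print(best, efficiency(best))
```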
Article| Volume 101, ISSUE 9, P2190-2200, November 02, 2011 # Crowding of Molecular Motors Determines Microtubule Depolymerization Open Archive ## Abstract The assembly and disassembly dynamics of microtubules (MTs) is tightly controlled by MT-associated proteins. Here, we investigate how plus-end-directed depolymerases of the kinesin-8 family regulate MT depolymerization dynamics. Using an individual-based model, we reproduce experimental findings. Moreover, crowding is identified as the key regulatory mechanism of depolymerization dynamics. Our analysis reveals two qualitatively distinct regimes. For motor densities above a particular threshold, a macroscopic traffic jam emerges at the plus-end and the MT dynamics become independent of the motor concentration. Below this threshold, microscopic traffic jams at the tip arise that cancel out the effect of the depolymerization kinetics such that the depolymerization speed is solely determined by the motor density. Because this density changes over the MT length, length-dependent regulation is possible. Remarkably, motor cooperativity affects only the end-residence time of depolymerases and not the depolymerization speed. ## Introduction Microtubules (MTs) are cytoskeletal filaments that serve a central role in intracellular organization ( • Hayles J. • Nurse P. A journey into space. , • Tolić-Nørrelykke I.M. Force and length regulation in the microtubule cytoskeleton: lessons from fission yeast. ) and several cellular processes, including mitosis ( • Sharp D.J. • Rogers G.C. • Scholey J.M. Microtubule motors in mitosis. , • Karsenti E. • Vernos I. The mitotic spindle: a self-made machine. ), cytokinesis ( • Eggert U.S. • Mitchison T.J. • Field C.M. Animal cytokinesis: from parts list to mechanisms. ), and intracellular transport ( • Hirokawa N. • Noda Y. • Niwa S. • et al. Kinesin superfamily motor proteins and intracellular transport. ). They can cope with these diverse tasks because they are highly dynamic structures that continually assemble and disassemble through the addition and removal of tubulin heterodimers at their ends. GTP hydrolysis is the energy source that drives switching between persistent states of growth and shrinkage, in a stochastic process termed dynamic instability ( • Mitchison T. • Kirschner M. Dynamic instability of microtubule growth. , • Dogterom M. • Leibler S. Physical aspects of the growth and regulation of microtubule structures. , • Desai A. • Mitchison T.J. Microtubule polymerization dynamics. , • Howard J. • Hyman A.A. Dynamics and mechanics of the microtubule plus end. ). Each cellular process uses a specific set of MT-associated proteins (MAPs) to tightly regulate the rates of growth and shrinkage as well as the rate of transition between these states ( • Wordeman L. Microtubule-depolymerizing kinesins. , • Howard J. • Hyman A.A. Microtubule polymerases and depolymerases. , • Howard J. • Hyman A.A. Growth, fluctuation and switching at microtubule plus ends. ). Depolymerases from the kinesin-8 and kinesin-13 protein families (e.g., Kip3p and MCAK, respectively) are important regulators of MT dynamics. They are thought to promote switching of MTs from growth to shrinkage (catastrophes) ( • Howard J. • Hyman A.A. Microtubule polymerases and depolymerases. ). Whereas MCAK lacks directed motility and diffuses along MTs ( • Helenius J. • Brouhard G. • Howard J. • et al. The depolymerizing kinesin MCAK uses lattice diffusion to rapidly target microtubule ends. 
), Kip3p is a highly processive plus-end-directed motor ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Gupta Jr., M.L. • Carvalho P. • Pellman D. • et al. Plus end-specific depolymerase activity of Kip3, a kinesin-8 protein, explains its role in positioning the yeast mitotic spindle. ). Proteins from the kinesin-8 family are important for regulating MT dynamics in diverse organisms. Kif18A is a key component in chromosome positioning in mammalian cells ( • Mayr M.I. • Hümmer S. • Mayer T.U. • et al. The human kinesin Kif18A is a motile microtubule depolymerase essential for chromosome congression. , • Stumpff J. • von Dassow G. • Wordeman L. • et al. The kinesin-8 motor Kif18A suppresses kinetochore movements to control mitotic chromosome alignment. , • Du Y. • English C.A. • Ohi R. The kinesin-8 Kif18A dampens microtubule plus-end dynamics. ), where it regulates plus-end dynamics. Its orthologs, the plus-end-directed motors Kip3p in budding yeast ( • Gupta Jr., M.L. • Carvalho P. • Pellman D. • et al. Plus end-specific depolymerase activity of Kip3, a kinesin-8 protein, explains its role in positioning the yeast mitotic spindle. ) and Klp5/6 in fission yeast ( • Unsworth A. • Masuda H. • Toda T. • et al. Fission yeast kinesin-8 Klp5 and Klp6 are interdependent for mitotic nuclear retention and required for proper microtubule dynamics. , • Tischer C. • Brunner D. • Dogterom M. Force- and kinesin-8-dependent effects in the spatial regulation of fission yeast microtubule dynamics. , • Grissom P.M. • Fiedler T. • McIntosh J.R. • et al. Kinesin-8 from fission yeast: a heterodimeric, plus-end-directed motor that can couple microtubule depolymerization to cargo movement. ), show depolymerizing activity. A notable feature shared by these MT plus-end depolymerases is that they depolymerize longer MTs more rapidly than they do shorter ones ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Mayr M.I. • Hümmer S. • Mayer T.U. • et al. The human kinesin Kif18A is a motile microtubule depolymerase essential for chromosome congression. , • Tischer C. • Brunner D. • Dogterom M. Force- and kinesin-8-dependent effects in the spatial regulation of fission yeast microtubule dynamics. , • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ). A similar length-dependent regulation of MT assembly by kinesin-5 motors was observed in in vivo studies of chromosome congression in budding yeast ( • Gardner M.K. • Bouck D.C. • Odde D.J. • et al. Chromosome congression by kinesin-5 motor-mediated disassembly of longer kinetochore microtubules. ). The key experimental observations from in vitro studies of Kip3p ( • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ) are that 1), the end-residence time of Kip3p at the tip depends on the bulk concentration of Kip3p and correlates inversely with the macroscopic depolymerization speed; and 2), the macroscopic depolymerization rate is directly proportional to the flux of Kip3p toward the MT plus-end. It is thought that length-dependent depolymerization kinetics serves several purposes ( • Tolić-Nørrelykke I.M. Force and length regulation in the microtubule cytoskeleton: lessons from fission yeast. ). 
For example, positioning of the nucleus at the cell center during interphase is achieved by growing MTs that push against the cell poles while remaining attached to the nucleus. A higher rate of catastrophes for longer MTs implies that shorter MTs have an increased contact time with the cell poles. Computer simulations show that this leads to a higher efficiency of nuclear positioning during interphase ( • Foethke D. • Makushok T. • Nédélec F. • et al. Force- and length-dependent catastrophe activities explain interphase microtubule organization in fission yeast. ). There is convincing experimental evidence that molecular traffic along MTs strongly affects the MT depolymerization dynamics. However, in vitro experiments cannot yet fully explore the underlying traffic dynamics. Theoretical investigations using individual-based models can be instrumental in furthering a mechanistic understanding of this process. Fortunately, such models can be constructed on the basis of substantial quantitative data available from in vitro experiments ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ) characterizing the binding kinetics and the motor activity of plus-end-directed motors. Therefore, we sought to identify the molecular mechanisms underlying the observed correlation between depolymerization dynamics and molecular traffic along MTs. In this study, we constructed an individual-based model for the coupled dynamics of MT depolymerization and molecular traffic of plus-end-directed motors. This model quantitatively reproduces previous experimental results ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ). Moreover, we make precise quantitative predictions for the density profiles of molecular motors on the MT and demonstrate that molecular crowding and ensuing traffic jams regulate the depolymerization dynamics. We find two qualitatively distinct regimes of depolymerization dynamics: At low bulk concentrations of depolymerases, the depolymerization speed of MTs is density-limited and is a function of the bulk concentration and average motor speed alone. There is a sharp threshold in bulk depolymerase concentration above which macroscopic traffic jams emerge and the depolymerization speed is simply given by the microscopic depolymerization rate. Of note, none of these features are affected by the degree of cooperativity in the depolymerization kinetics. In contrast, the end-residence time of a depolymerase (i.e., the typical time it spends at the plus-end) is strongly correlated with cooperativity. We outline how these predictions from our theoretical analysis can be tested experimentally. ## Results ### Model definition We use an individual-based model, as illustrated in Fig. 1, to describe the dynamics of plus-end-directed depolymerases. Motor proteins, present at a constant bulk concentration c, are assumed to randomly bind to and unbind from the MT lattice with rates ωa and ωd, respectively. Bound motors are described as Poisson steppers (A more detailed biochemical model for motors on MTs has to await further experimental analysis. 
One of the different possible schemes has recently been studied by Klumpp et al. ( • Klumpp S. • Chai Y. • Lipowsky R. Effects of the chemomechanical stepping cycle on the traffic of molecular motors. ).) that processively walk along individual protofilaments toward the plus-end at an average speed u ( • Howard J. The movement of kinesin along microtubules. ). These motors hinder each other sterically because individual binding sites $i=1,…,L$ on each protofilament can be either empty $(ni=0)$ or occupied by a single motor $(ni=1)$. Because switching between protofilaments is rare ( • Howard J. The movement of kinesin along microtubules. ), transport along each of the protofilaments can be taken as independent, and the model becomes effectively one-dimensional ( • Ray S. • Meyhöfer E. • Howard J. • et al. Kinesin follows the microtubule's protofilament axis. ) (Fig. 1 B). Models of this type were recently discussed as minimal models for intracellular transport ( • Parmeggiani A. • Franosch T. • Frey E. Phase coexistence in driven one-dimensional transport. , • Parmeggiani A. • Franosch T. • Frey E. Totally asymmetric simple exclusion process with Langmuir kinetics. , • Lipowsky R. • Klumpp S. • Nieuwenhuizen T.M. Random walks of cytoskeletal motors in open and closed compartments. , • Klumpp S. • Lipowsky R. Traffic of molecular motors through tube-like compartments. ). In its given formulation, where the cytosol is considered as a homogeneous and constant reservoir of motors, it is equivalent to the driven lattice gas model known as the totally asymmetric simple exclusion process with Langmuir kinetics (TASEP/LK) ( • Parmeggiani A. • Franosch T. • Frey E. Phase coexistence in driven one-dimensional transport. ). A central finding from this model is that the interplay between on-off (Langmuir) kinetics and directed transport along protofilaments can result in “traffic jams” in which the density profile of motors along a protofilament shows a sharp increase from a low-density to a crowded high-density regime ( • Parmeggiani A. • Franosch T. • Frey E. Phase coexistence in driven one-dimensional transport. , • Lipowsky R. • Klumpp S. • Nieuwenhuizen T.M. Random walks of cytoskeletal motors in open and closed compartments. ). Crowding effects such as these ( • Pierobon P. • Mobilia M. • Frey E. • et al. Bottleneck-induced transitions in a minimal model for intracellular transport. , • Telley I.A. • Bieling P. • Surrey T. Obstacles on the microtubule reduce the processivity of kinesin-1 in a minimal in vitro system and in cell extract. ) are important for a molecular understanding of MT dynamics. Previous theoretical studies on this topic largely disregarded crowding effects or considered parameter regimes in which they are unimportant ( • Govindan B.S. • Gopalakrishnan M. • Chowdhury D. Length control of microtubules by depolymerizing motor proteins. , • Brun L. • Rupp B. • Nédélec F. • et al. A theory of microtubule catastrophes and their regulation. , • Hough L.E. • Schwabe A. • Betterton M.D. • et al. Microtubule depolymerization by the kinesin-8 motor Kip3p: a mathematical model. ). Depolymerization, including crowding effects, has also been investigated for diffusive depolymerases such as MCAK ( • Klein G.A. • Kruse K. • Jülicher F. • et al. Filament depolymerization by motor molecules. ). At the plus-end of the systems, we consider depolymerization dynamics that arise due to the interaction of molecular motors with the MT tip. Motivated by recent experiments ( • Varga V. • Leduc C. 
• Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ), we assume nonprocessive depolymerization, i.e., a molecular motor dissociates from the lattice after triggering depolymerization. Because the molecular mechanisms are not yet fully resolved, we study two scenarios of depolymerization (see Fig. 1 B). In the noncooperative scenario, the dissociation rate depends only on whether the last site is empty or occupied by a motor. If the last site is occupied, $n_L = 1$, the MT depolymerizes at rate $\delta_0$. However, recent single-molecule studies indicate that Kip3p may act cooperatively ( • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ), which we consider as our second scenario. After arriving at the plus-end, the motor is observed to pause and depolymerize a tubulin dimer only after a second Kip3p has arrived behind it. In this scenario, a tubulin dimer is depolymerized with rate $\delta_1$ if both the last and the second-to-last sites are occupied, $n_{L-1} = n_L = 1$. Therefore, the total depolymerization rate can be written as: $$\Delta = \delta_0\, n_L + \delta_1\, n_{L-1} n_L$$ (1) For stabilized MTs, the spontaneous depolymerization rate is small ( • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ) and thus is not considered here. The relative magnitude of the noncooperative rate $\delta_0$ and the cooperative rate $\delta_1$ determines the degree of cooperativity of the depolymerization kinetics. In an average over many realizations of the stochastic process (ensemble average), the depolymerization speed $V_\mathrm{depol}$ depends on the occupation of the last two binding sites by depolymerases (Fig. 1 B): $$V_\mathrm{depol} = (\delta_0\, \rho_+ + \delta_1\, \kappa_+)\, a,$$ (2) where $a$ is the lattice spacing. Here $\rho_+ := \langle n_L \rangle$ is the probability that the last site is occupied (i.e., the expected motor density at the plus-end), and $\kappa_+ := \langle n_{L-1} n_L \rangle$ denotes the probability that both the last and the second-to-last sites are occupied. We analyzed this model via stochastic simulations and analytic calculations (for further details, see the Supporting Material).

### Validation of the model and its parameters

The model parameters are, as far as they are available, fixed by experimental data. The motor speed, $u$, the motor run length, $\ell$, and the motor association rate, $\omega_a$, were measured previously ( • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ): $u = 3.2\ \mu\mathrm{m\ min^{-1}}$, $\omega_a = 24\ \mathrm{nM^{-1}\ min^{-1}\ \mu m^{-1}}$, and $\ell \approx 11\ \mu\mathrm{m}$. Using an MT lattice spacing of $a = 8.4\ \mathrm{nm}$, we derive the corresponding parameters in our model as follows: The motor speed $u$ corresponds to 6.35 lattice sites per second, i.e., a hopping rate of $\nu = u/a = 6.35\ \mathrm{s^{-1}}$. The inverse hopping rate $\tau := \nu^{-1} = 0.16\ \mathrm{s}$ and the size $a$ of a tubulin dimer serve as our basic timescale and length scale, respectively. Then, the measured association rate corresponds to a rate $\omega_a \approx 5.3 \times 10^{-4}\ \mathrm{nM^{-1}\ site^{-1}\ \tau^{-1}}$. The dissociation rate, $\omega_d = u/\ell$, is derived as the ratio of the mean motor speed, $u$, and the mean motor run length, $\ell$. The latter equals 1310 lattice sites. Thus, the dissociation rate is expressed as $\omega_d \approx 7.6 \times 10^{-4}\ \mathrm{site^{-1}\ \tau^{-1}}$. In contrast to the transport behavior on the MT, the parameters concerning the depolymerization rates, $\delta_{0/1}$, cannot be directly extracted from experiments. However, there is evidence for a depolymerization rate as high as the motor speed, u ( • Varga V. • Helenius J.
• Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ). As a starting point for the following discussion, we tentatively take $δ0=ν$. Using the above set of parameters, we now phenomenologically compare the results from numerical simulations of our model with observations from experiments. Specifically, we consider kymographs of the MT, which show how the MT length and the motor density on the MT evolve over time. For the simulation data shown in Fig. 2, we consider an MT consisting of 14 independent protofilaments and investigate the dynamics for the noncooperative scenario and a range of motor concentrations, $c=1.2,1.8,2.6nM$ (Fig. 2, AC). Surprisingly, as shown later, neither the cooperativity of the motors nor a decrease in the depolymerization rates led to different shapes of kymographs (see also Fig. S1). We find an initial time period in which, starting from an empty MT lattice, the motors first fill up the lattice ( • Vilfan A. • Frey E. • Mandelkow E. • et al. Dynamics and cooperativity of microtubule decoration by the motor protein kinesin. , • Frey E. • Vilfan A. Anomalous relaxation kinetics of biological lattice-ligand binding models. ). This is followed by a time window in which the motor density exhibits a quasi-stationary profile, i.e., the density at a certain distance from the minus-end does not change except for boundary effects induced by the plus-end. The corresponding density profiles are illustrated in Fig. 2 E and discussed in more detail in the following section. In this quasi-stationary regime, the depolymerization dynamics shows qualitatively different behavior depending on the concentration of free motor molecules: At a low concentration, $c<1.4nM$, and thus a low density of motors on the MT, depolymerization slows down gradually in the course of time (Fig. 2 A). When the motor concentration increases to larger values, $c>1.4nM$, an intermediate regime emerges in which the depolymerization speed stays roughly constant (Fig. 2, B and C). Remarkably, we find that during this regime, the depolymerization speed is directly proportional to the motor density, $Vdepol(L)=ρ−(L)u$ (Fig. 2 D). At a third stage in the depolymerization process, there is a rather abrupt change in the depolymerization speed right where the density profile also shows a steep drop (Fig. 2, CE). After we have elaborated more on the theoretical model, we will discuss why there is such a tight correlation between the depolymerization dynamics and the density profile. All of these qualitative features of MT dynamics are identical to those found experimentally ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ), and suggest that the density profile and, in particular, traffic jams formed on the MT lattice are the main determinants of the depolymerization dynamics. Moreover, the timescales of the dynamics agree quantitatively well with experimental results for the same motor concentrations ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Varga V. • Leduc C. • Howard J. • et al. 
Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ). This validates our theoretical model because up to the depolymerization rate δ, all of the model parameters were derived from experimental data ( • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ).

### Density profiles at the minus-end (bulk density)

The above observations strongly point toward a tight correlation between the depolymerization speed and the motor density profile at the minus-end, $\rho_-(x)$, which we henceforth call the bulk (motor) density. The quasi-stationary bulk density profiles shown in Fig. 2 E were obtained by assuming very long lattices; effects caused by the plus-end are not visible in the vicinity of the minus-end. A more detailed discussion of these simulations can be found in the Supporting Material. Because this bulk density will play an important role in the following analysis, we summarize its features here as obtained from analytical calculations detailed in the Supporting Material. At the minus-end, the density profiles show an initial linear increase. This is an “antenna effect” ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. ), as illustrated in Fig. 3 A. Motors that attach in proximity to the MT minus-end immediately move toward the plus-end, thereby generating an approximately linearly increasing accumulation of motors. The slope is given by $K/\ell$, where $K = c\,\omega_a/\omega_d$ denotes the binding constant. At sufficiently large distances from the minus-end, the density profile becomes flat and dominated by Langmuir kinetics with the ensuing Langmuir density: $$\rho_\mathrm{La} = \frac{K}{1+K} = \frac{c\,\omega_a}{c\,\omega_a + \omega_d}$$ (3) The full density profile is obtained by concatenating the antenna profile and the flat Langmuir profile such that the motor current is continuous along the MT. We find two qualitatively distinct scenarios (Fig. 2 E). For low concentrations of molecular motors, c, the antenna profile matches the asymptotic Langmuir density continuously, resulting in a wedge-like profile. In contrast, above a certain threshold concentration, determined by the binding constant through $K(c^-) = 1$, the two profiles can no longer be matched continuously and the density profile displays a sharp discontinuity, also termed a “domain wall” (DW) ( • Parmeggiani A. • Franosch T. • Frey E. Phase coexistence in driven one-dimensional transport. ). In other words, if the Langmuir density rises above a critical value of $\rho_\mathrm{La}^c = 0.5$, a crowding-induced traffic jam will result ( • Frey E. • Parmeggiani A. • Franosch T. Collective phenomena in intracellular processes. ) (Fig. 3 A). The density profiles obtained from the analytic calculations and the stochastic simulations agree nicely, as illustrated in Fig. 2 E. In particular, the theoretical analysis gives an explicit expression for the width of the antenna-like profile: $$\ell_- \approx \ell \begin{cases} \dfrac{1}{1+K} & \text{for } K < 1, \\ \dfrac{1}{K(1+K)} & \text{for } K > 1. \end{cases}$$ (4) This result reduces to the average run length of molecular motors, $\ell = u/\omega_d$, in the limit of a very low binding constant, $K \ll 1$, where crowding effects can be neglected ( • Hough L.E. • Schwabe A. • Betterton M.D. • et al. Microtubule depolymerization by the kinesin-8 motor Kip3p: a mathematical model. ). However, with increasing K, the regime with an antenna-like profile becomes significantly shorter than $\ell$ (Fig. 2 F).
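To make Eqs. 3 and 4 concrete, the short Python sketch below evaluates the binding constant, the Langmuir density and the antenna width for the motor concentrations used in Fig. 2. It is an illustrative calculation based on the per-site rates quoted above, not part of the authors' analysis code, and the function names are ours.

```python
# Sketch: evaluate the Langmuir density (Eq. 3) and the antenna-profile width
# (Eq. 4) from the per-site rates quoted in the text.

omega_a = 5.3e-4     # association rate per site, nM^-1 tau^-1
omega_d = 7.6e-4     # dissociation rate per site, tau^-1
run_length = 11.0    # mean run length l = u/omega_d, in um

def binding_constant(c_nM):
    return c_nM * omega_a / omega_d               # K = c*omega_a/omega_d

def langmuir_density(c_nM):
    """Eq. 3: rho_La = K/(1+K) = c*omega_a/(c*omega_a + omega_d)."""
    K = binding_constant(c_nM)
    return K / (1.0 + K)

def antenna_width(c_nM):
    """Eq. 4: l_- ~ l/(1+K) for K < 1 and l/(K*(1+K)) for K > 1."""
    K = binding_constant(c_nM)
    return run_length / (1.0 + K) if K < 1.0 else run_length / (K * (1.0 + K))

for c in (1.2, 1.8, 2.6):   # nM, the concentrations of Fig. 2
    print(f"c = {c} nM:  K = {binding_constant(c):.2f},  "
          f"rho_La = {langmuir_density(c):.2f},  l_- = {antenna_width(c):.1f} um")
```

With these rates, K crosses 1 near c ≈ 1.4 nM, which is where the text locates the change from the wedge-like to the domain-wall profile.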
### Depolymerization dynamics is independent of cooperativity

We now address how the cooperativity of the depolymerization kinetics affects the macroscopic depolymerization speed. There are two limiting cases: noncooperative depolymerization (nc) with $(\delta_0, \delta_1) = (\delta, 0)$, and fully cooperative depolymerization (fc) with $(\delta_0, \delta_1) = (0, \delta)$ (for an illustration, see Fig. 3, B and C). Remarkably, we find from our stochastic simulations, shown in Fig. 4, that there is no difference in depolymerization speed for these two limiting cases. Even when the depolymerization dynamics contains cooperative as well as noncooperative terms, we do not find any significant differences in the depolymerization speed (Fig. 4 B). This observation from our stochastic simulations can be explained by the following molecular mechanism: Consider a model with fully cooperative depolymerization kinetics. Then, after the first motor has arrived at the plus-end, the terminal site of the MT will remain occupied from that time on. Depolymerization only occurs if another motor arrives at the second-to-last site. In other words, while the last site remains occupied, the second-to-last site triggers the depolymerization. Hence, as far as the depolymerization speed is concerned, the fully cooperative model is identical to a noncooperative model with the same molecular rate δ. In the noncooperative model, the terminal tubulin dimer is removed at rate δ once a molecular motor has arrived at the last site (Fig. 3 B). In the fully cooperative model, the terminal tubulin dimer is removed once a molecular motor has arrived at the second-to-last site next to a permanently occupied last site (Fig. 3 C).

### Depolymerization dynamics is strongly affected by crowding

To gain further insight into the correlation between the depolymerization speed and the density of motors on the MT, we performed stochastic simulations focusing on the MT plus-end by regarding the dynamics in a comoving frame. Instead of simulating the full-length MT with an antenna profile and a subsequent flat Langmuir density, we considered a reduced model in which the density at the left end is set equal to the Langmuir density $\rho_\mathrm{La}$. For long MTs, the Langmuir density is always reached, so that the reduced system is fully equivalent to the original model. Our simulations show two clearly distinct regimes of depolymerization dynamics (Fig. 4): For small, microscopic depolymerization rates, $\delta\tau < \rho_\mathrm{La}$, the depolymerization speed is rate-limited: $V_\mathrm{depol} = a\delta$. In contrast, for rates $\delta\tau > \rho_\mathrm{La}$, the depolymerization speed is density-limited, and the Langmuir density is the limiting factor: $V_\mathrm{depol} = \rho_\mathrm{La}\, u$. The boundary between the two regimes is remarkably sharp and given by $$\rho_\mathrm{La}^\ast = \delta\tau.$$ (5) This implies that the depolymerization speed can switch between being density-limited and rate-limited by changing the concentration c or the values of the biochemical rates of depolymerases binding to and unbinding from the MT lattice. Overall, the depolymerization speed obeys a scaling law $$V_\mathrm{depol} = \rho_\mathrm{La}\, u\, V\!\left(\frac{\delta\tau}{\rho_\mathrm{La}}\right) = \begin{cases} a\delta & \text{for } \delta\tau \le \rho_\mathrm{La}, \\ \rho_\mathrm{La}\, u & \text{for } \delta\tau > \rho_\mathrm{La}, \end{cases}$$ (6) where $V(x)$ is a universal scaling function with the simple form $V(x) = x$ for $x < 1$ and $V(x) = 1$ for $x > 1$. Experimentally, this implies that one should find data collapse when using such a scaling plot (Fig. 4 A). To gain a molecular understanding of these remarkable features of the depolymerization speed, one needs to have a closer look at the density profile of the molecular motors at the MT tip.
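As a rough numerical illustration of the reduced model and the scaling law in Eq. 6, the following kinetic Monte-Carlo sketch simulates a single protofilament with a reservoir of density $\rho_\mathrm{La}$ at the left boundary, Langmuir kinetics in the bulk, and noncooperative removal of the occupied terminal site at rate δ. It is a simplified, discrete-time scheme written for this text, not the authors' simulation code; the helper name, boundary treatment and parameter choices are ours.

```python
# Crude kinetic Monte-Carlo sketch of the reduced plus-end model: hopping with
# exclusion toward the plus-end, Langmuir attachment/detachment in the bulk,
# a fixed density rho_La imposed at the left boundary, and removal of the
# occupied terminal site at rate delta (motor leaves with the dimer).
import random

def depolymerization_speed(rho_la, delta, nu=1.0, omega_d=7.6e-4,
                           n_sites=200, t_max=5000.0, dt=0.1, seed=1):
    random.seed(seed)
    # choose an effective attachment rate so the Langmuir equilibrium equals rho_la
    omega_a_eff = omega_d * rho_la / (1.0 - rho_la)
    lattice = [1 if random.random() < rho_la else 0 for _ in range(n_sites)]
    removed, t = 0, 0.0
    while t < t_max:
        t += dt
        for _ in range(n_sites):                 # random sequential update
            i = random.randrange(n_sites)
            if i == 0:                           # crude reservoir boundary
                lattice[0] = 1 if random.random() < rho_la else 0
            elif lattice[i] == 0:
                if random.random() < omega_a_eff * dt:
                    lattice[i] = 1               # attachment
            else:
                r = random.random()
                if r < omega_d * dt:
                    lattice[i] = 0               # detachment
                elif r < (omega_d + nu) * dt and i + 1 < n_sites and lattice[i + 1] == 0:
                    lattice[i], lattice[i + 1] = 0, 1   # hop toward plus-end
        if lattice[-1] == 1 and random.random() < delta * dt:
            lattice.pop()                        # remove terminal dimer and its motor
            lattice.insert(0, 1 if random.random() < rho_la else 0)
            removed += 1
    return removed / t_max                       # sites removed per unit time (a/tau)

rho = 0.4                                        # imposed Langmuir density
for delta in (0.2, 0.5, 1.0):                    # delta*tau, in units of the hop rate
    v = depolymerization_speed(rho_la=rho, delta=delta)
    print(f"delta*tau = {delta}: V ~ {v:.2f} a/tau (Eq. 6 predicts {min(delta, rho):.2f})")
```

The run takes on the order of tens of seconds in plain Python; the measured speeds should fall roughly on the rate-limited value $a\delta$ for $\delta\tau \le \rho_\mathrm{La}$ and on $\rho_\mathrm{La} u$ above it, up to boundary-layer and finite-size corrections.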
If the depolymerization rate is small, $\delta < \nu$, motors leave the tip more slowly than they arrive. Therefore, the MT tip acts as a bottleneck for molecular transport that disturbs the density profiles either locally or macroscopically. A weak bottleneck induces a local perturbation (“spike”) ( • Pierobon P. • Mobilia M. • Frey E. • et al. Bottleneck-induced transitions in a minimal model for intracellular transport. ). These spikes are sharp changes of the density profile with a typical extension that scales with the size of a heterodimer. However, if the strength of a bottleneck exceeds a threshold value, the spike extends to a macroscopic perturbation (“traffic jam”) ( • Pierobon P. • Mobilia M. • Frey E. • et al. Bottleneck-induced transitions in a minimal model for intracellular transport. ). Fig. 5 A illustrates how, for a given Langmuir density, $\rho_\mathrm{La} = 2/3$, the effect on the density profile changes from a spike (blue) to an extended traffic jam (red and green) as the depolymerization rate δ is lowered. Let us now analyze the conditions and consequences of such bottlenecks in more detail. Suppose we are in a parameter regime where the plus-end disturbs the density profile only locally, i.e., on the scale of a heterodimer. Then, we may take the bulk density to be equal to the Langmuir density, $\rho_\mathrm{La}$, up to the last site (the plus-end) where it jumps to some higher or lower value $\rho_+$. The particle loss current at the plus-end due to MT depolymerization is then given by $$J_\mathrm{depol} = (1 - \rho_\mathrm{La})\, \rho_+\, \delta.$$ (7) The factor $1 - \rho_\mathrm{La}$ arises because the particle number decreases only if a particle depolymerizes the MT and the second-to-last site, $L-1$, is unoccupied. Otherwise, the depolymerization dynamics and the associated frame shift of the MT lattice do not change the occupation of the last site. This particle loss has to be balanced by the incoming particle flux, $$J_\mathrm{La} = \rho_\mathrm{La}\, (1 - \rho_\mathrm{La})\, \nu.$$ (8) Equating these particle fluxes ((7), (8)) implies the following condition for the motor density at the plus-end: $$\rho_+ = \begin{cases} \rho_\mathrm{La}/(\delta\tau) & \text{for } \rho_\mathrm{La} \le \delta\tau, \\ 1 & \text{for } \rho_\mathrm{La} > \delta\tau, \end{cases}$$ (9) where the fact that the motor density is bounded, $\rho_+ \le 1$, is already accounted for. The particle density on the last site, in turn, determines the depolymerization speed. For $\rho_\mathrm{La} < \delta\tau$, one obtains according to (2), (9): $$V_\mathrm{depol} = \rho_+\, \delta\, a = \rho_\mathrm{La}\, u.$$ (10) Remarkably, here the effect of the depolymerization kinetics (δ) cancels out such that the macroscopic depolymerization speed is independent of the molecular details of the depolymerization kinetics and is solely determined by the Langmuir density, i.e., the motor density in the bulk, $\rho_-(x)$, and not at the tip of the MT. This result crucially depends on the presence of a microscopic spike. It explains the hitherto puzzling experimental result that the depolymerization speed is directly proportional to the bulk motor current along the MT ( • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ) (Fig. S2). Because the density is bounded, $\rho_+ \le 1$, density profiles with a spike are only possible if the densities are not too large, $\rho_\mathrm{La} < \delta\tau$. This is the case for the blue curve in Fig. 5 A. For densities exceeding the critical density, $\rho_\mathrm{La}^\ast = \delta\tau$, the bottleneck-induced perturbation in the density profile can no longer remain a local spike, but has to become macroscopic in extent ( • Pierobon P. • Mobilia M. • Frey E. • et al. Bottleneck-induced transitions in a minimal model for intracellular transport. ) (see green and red curves in Fig. 5 A and the Supporting Material).
One finds that over an extended region, the binding sites at the plus-end then remain permanently occupied such that $\rho_+ = 1$. This immediately implies that the depolymerization speed becomes density-independent and proportional to the microscopic depolymerization rate: $$V_\mathrm{depol} = a\delta.$$ (11) There is a tight correlation between the shape of the density profiles and the macroscopic depolymerization speed. The analytic results explain the molecular mechanism behind the numerically observed scaling law (Eq. 6), with a sharp transition from density-regulated to rate-limited depolymerization dynamics at a critical value of $\rho_\mathrm{La}^\ast = \delta\tau$ (cf. the classification of density profiles and depolymerization regimes shown in Fig. 5 B). Actually, the above calculations can be generalized to the regime in which the motor density exhibits an antenna-like linear profile, i.e., for MT lengths shorter than $\ell_-$. As detailed in the Supporting Material, we find that the depolymerization speed is rate-limited, $V_\mathrm{depol} = a\delta$, if MTs are shorter than $\ell_-$ but still longer than a second threshold length: $$\ell_d := \frac{\delta a}{c\,\omega_a} = \ell\, \frac{\delta\tau}{K}.$$ (12) In contrast, for $\ell_d > \ell_-$, the depolymerization speed in the antenna regime is always length-dependent and strictly follows the shape of the antenna profile, $\rho_-(x)$: $$V_\mathrm{depol} = \rho_-(L)\, u.$$ (13) Using Eq. 4, the condition $\ell_d > \ell_-$ on the threshold lengths is equivalent to $\delta\tau > \rho_\mathrm{La}$ for $K < 1$, and to $\delta\tau > 1 - \rho_\mathrm{La}$ for $K > 1$. Combining all of the above results, we find three mechanisms that govern the depolymerization dynamics, as illustrated in Fig. 5 C:

• α. For $\delta\tau > \rho_\mathrm{La}$, the depolymerization speed is always density-regulated and given by $V_\mathrm{depol}(L) = \rho_-(L)\, u$, where L is the time-dependent length of the MT. In this parameter regime, the depolymerization speed is a direct map of the bulk motor density profile on the MT, $\rho_-(x)$, a feature that can be exploited experimentally to measure the profile.
• β. For $\rho_\mathrm{La} > \delta\tau > 1 - \rho_\mathrm{La}$, the depolymerization speed is rate-limited for MTs longer than $\ell_-$, and becomes density-limited as soon as the MT length falls below $\ell_-$, where the density profile is antenna-like. This implies that there is a discontinuous jump in the depolymerization speed right at $L = \ell_-$.
• γ. Finally, for all other values of $\delta\tau$, the depolymerization speed of the MT remains rate-limited for lengths larger than a threshold length $\ell_d$. At $\ell_d$, which is smaller than $\ell_-$ in this parameter regime, there is again a discontinuous jump to density-limited depolymerization dynamics.

If the depolymerization rate is larger than or equal to the hopping rate of the molecular motors, $\delta\tau \ge 1$, then $\delta\tau > \rho_\mathrm{La}$ is always obeyed simply because $\rho_\mathrm{La} \le 1$. In this regime, all of the molecular details of the depolymerization kinetics are irrelevant. Neither the cooperativity nor the actual value of the depolymerization rate matters in terms of the depolymerization speed; instead, only the bulk density regulates the speed. Note that this was the case for the data shown in Fig. 2, where we tentatively made the parameter choice $\delta\tau = 1$. If the motors are faster than the depolymerization process, $\delta\tau < 1$, we have to distinguish between the parameter regimes (α, β, and γ, Fig. 5 C). Here the value of the depolymerization rate matters if the bulk density exceeds a certain threshold, $\rho_\mathrm{La} > \delta\tau$, and the MTs are long enough. Finally, the depolymerization speed always becomes density-dependent and hence length-dependent if the MT length is short enough; the corresponding threshold length is $\ell_\mathrm{reg} = \min[\ell_-, \ell_d]$.
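The classification into the regimes α, β and γ can be summarized in a few lines of code. The sketch below is a hypothetical helper (names and interface are ours) that simply restates Eqs. 3, 4 and 11–13 to predict the depolymerization speed for a given MT length.

```python
# Sketch: classify the depolymerization regime (alpha, beta, gamma of Fig. 5 C)
# and predict V_depol(L) from the formulas above. Illustrative only.

def classify_regime(delta_tau, K):
    """Return which mechanism (alpha, beta, gamma) governs the dynamics."""
    rho_la = K / (1.0 + K)                        # Eq. 3
    if delta_tau > rho_la:
        return "alpha"                            # always density-regulated
    if delta_tau > 1.0 - rho_la:
        return "beta"                             # rate-limited above l_-, density-limited below
    return "gamma"                                # rate-limited above l_d, density-limited below

def bulk_density(L, K, run_length):
    """Quasi-stationary bulk density rho_-(L): linear antenna, then flat (Eqs. 3, 4)."""
    rho_la = K / (1.0 + K)
    l_minus = run_length / (1.0 + K) if K < 1 else run_length / (K * (1.0 + K))
    return rho_la if L >= l_minus else K * L / run_length

def v_depol(L, delta_tau, K, run_length, u=1.0):
    """Predicted depolymerization speed for an MT of length L (Eqs. 11-13)."""
    l_minus = run_length / (1.0 + K) if K < 1 else run_length / (K * (1.0 + K))
    l_d = run_length * delta_tau / K              # Eq. 12
    regime = classify_regime(delta_tau, K)
    threshold = {"alpha": 0.0, "beta": l_minus, "gamma": l_d}[regime]
    if regime != "alpha" and L >= threshold:
        return delta_tau * u                      # rate-limited plateau, a*delta = delta*tau*u
    return bulk_density(L, K, run_length) * u     # density-limited, Eq. 13

for dt_, K_ in ((0.8, 1.0), (0.45, 1.5), (0.2, 1.5)):
    print(f"delta*tau = {dt_}, K = {K_}: regime {classify_regime(dt_, K_)}, "
          f"V(L=8 um) = {v_depol(8.0, dt_, K_, run_length=11.0):.2f} u")
```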
### The end-residence time strongly depends on cooperativity

In contrast to the depolymerization speed, the mean end-residence time $\tau_\mathrm{res}$ is strongly affected by the degree of cooperativity. Fig. 6 displays $\tau_\mathrm{res}$ as obtained from our stochastic simulations for noncooperative and fully cooperative depolymerization kinetics. Our simulations show that the end-residence time for the fully cooperative model is identical to the average lifetime of a terminal tubulin dimer, $\tau_\mathrm{res}^\mathrm{fc} = \tau_d := a/V_\mathrm{depol}$ (Fig. 6 A). Even for the noncooperative model, $\tau_\mathrm{res}^\mathrm{nc}$ equals $\tau_d$ for large residence times and deviates from it only at small values. The relatively sharp transition to a constant lifetime of the terminal tubulin dimer occurs right at $\tau_\mathrm{res}^\mathrm{nc} = \tau/\rho_\mathrm{La}$, i.e., where the end-residence time equals the waiting time for a molecular motor to arrive at the MT tip. For $\tau_\mathrm{res}^\mathrm{nc} < \tau/\rho_\mathrm{La}$, the lifetime of the terminal tubulin dimer is identical to the arrival time (Fig. 6, A and B). Once the arrival time becomes shorter than the inverse depolymerization rate, the end-residence time levels off at $\tau_\mathrm{res}^\mathrm{nc} = 1/\delta$. These results show that the dependence of the end-residence time on density can be used to quantify the degree of cooperativity. This would require experiments with motor densities on the MT larger than those studied up to now ( • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. , • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ). The observation that the depolymerization speed is independent of the degree of cooperativity seems to be at odds with the experimental finding that the end-residence time, $\tau_\mathrm{res}$, of Kip3p depends on the total Kip3p concentration and is inversely proportional to the macroscopic depolymerization speed ( • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. ). Actually, however, there is no contradiction and the findings are readily explained within our theoretical model: For a noncooperative model, $\tau_\mathrm{res}^\mathrm{nc}$ is simply set by the depolymerization rate, because after they arrive, the particles stay at the tip until they depolymerize the MT: $$\tau_\mathrm{res}^\mathrm{nc} = \frac{1}{\delta}.$$ (14) For a fully cooperative model, $\tau_\mathrm{res}^\mathrm{fc}$ depends not only on δ but also on the rate at which the second-to-last site becomes populated. Say the probability for the second-to-last site to be occupied is $\rho_+$. Then, $\tau_\mathrm{res}^\mathrm{fc}$ is given by a sum of two contributions arising from the cases in which the second-to-last site is empty or occupied, respectively: $$\tau_\mathrm{res}^\mathrm{fc} = (1 - \rho_+)\left(\frac{\tau}{\rho_\mathrm{La}} + \frac{1}{\delta}\right) + \rho_+\, \frac{1}{\delta}.$$ (15) If the second-to-last site is empty (which is the case with probability $1 - \rho_+$), $\tau_\mathrm{res}$ is the sum of the arrival time $\tau/\rho_\mathrm{La}$ and the depolymerization time $1/\delta$. Otherwise, the end-residence time $\tau_\mathrm{res}$ simply equals $1/\delta$. As shown in the previous section, two distinct scenarios arise: For small bulk densities such that $\rho_\mathrm{La} < \delta\tau$, the density profile at the plus-end exhibits a microscopic spike with $\rho_+ = \rho_\mathrm{La}/(\delta\tau)$. For large densities, $\rho_\mathrm{La} > \delta\tau$, a macroscopic traffic jam emerges such that $\rho_+ = 1$. This result obtained for the motor density at the MT tip (Eq. 9) may now be used to calculate $\tau_\mathrm{res}^\mathrm{fc}$ using Eq. 15: $$\tau_\mathrm{res}^\mathrm{fc} = \begin{cases} 1/\delta & \text{for } \rho_\mathrm{La} > \delta\tau, \\ \tau/\rho_\mathrm{La} & \text{else.} \end{cases}$$ (16) This agrees well with the results from stochastic simulations displayed in Fig. 6. A comparison with Eq.
A comparison with Eq. 6 shows that the end-residence time equals the typical depolymerization time, i.e., the expected lifetime of a terminal tubulin dimer, $\tau_{\mathrm{res}}^{\mathrm{fc}} = \tau_d$. This is in agreement with experimental findings regarding the unbinding rate of motors at the plus-end (Varga et al. 2009) and strongly supports the conclusion that depolymerization of MTs by Kip3p is fully cooperative. Varga et al. (2009) measured the end-residence time of motors on double-stabilized MTs, i.e., where depolymerization is switched off. They observed that the end-residence time is inversely correlated with the concentration of Kip3p, and fit their data with an exponential using a cutoff. This is in accordance with our results shown in Fig. 6 B. However, because depolymerization has been switched off in the experiment, the rate δ, corresponding to the cutoff, now has to be interpreted as an unbinding rate of motors at the plus-end. It would be highly interesting to design experiments in which the depolymerization kinetics remains switched on, because this would allow one to measure the magnitude of the microscopic depolymerization rate δ.

## Discussion

In this work, we analyzed the effect of crowding and cooperativity on the depolymerization dynamics of MTs. To that end, we constructed an individual-based model for the coupled dynamics of plus-end-directed motor traffic and MT depolymerization kinetics. The model is based on well-established molecular properties of motors from the kinesin-8 family, i.e., the motors move on single protofilaments with high processivity at an average speed u, and exchange of motors between the bulk and the MT follows Langmuir kinetics. All parameters of the model, including the average walking speed, run length, and attachment rate, were directly extracted from available in vitro data (Varga et al. 2009). We validated our model by reproducing the onset of length-dependent depolymerization as studied recently (Varga et al. 2006; Varga et al. 2009). Without using any additional fitting parameter, we found the same regimes of density profiles and ensuing depolymerization dynamics as in the experiments, i.e., a linear antenna-profile with a length-dependent depolymerization speed and a flat profile with a constant depolymerization speed. Moreover, we identified a threshold density of motors above which a crowding-induced traffic jam emerges at the minus-end. The predicted shape and extent of these traffic jams should be amenable to experiments that raise the depolymerase concentration c or change its rates of binding to and unbinding from the MT. The interplay between motor traffic and depolymerization kinetics at the MT plus-end leads to strong correlations between the depolymerization dynamics and density profiles of depolymerases. The plus-end acts as a bottleneck, and crowding effects cause traffic jams.
We find two qualitatively distinct regimes: Motor densities below a critical threshold value, $\rho^{*}_{\mathrm{La}} = \delta\tau$, always show a local spike-like perturbation at the plus-end, the extent of which is the size of a heterodimer. Above this threshold density, macroscopic traffic jams may emerge. These distinct density profiles at the plus-end affect the depolymerization speed and the end-residence time in qualitatively different ways. A quantitative analysis of the model using stochastic simulations as well as analytical calculations led to the following main results: The end-residence time of a depolymerase strongly depends on the degree of cooperativity. Whereas for noncooperative depolymerization kinetics the end-residence time is set by the microscopic depolymerization rate δ, it is density-dependent in the fully cooperative case: Increasing the Langmuir density above the threshold value $\rho^{*}_{\mathrm{La}} = \delta\tau$, the end-residence time changes from being inversely proportional to the density $\rho_{\mathrm{La}}$ to a constant value $\delta^{-1}$. These results suggest an interesting way to determine the cooperativity of the depolymerization kinetics and to measure the value of the depolymerization rate δ: when the concentration c is increased, the end-residence time should be independent of concentration for noncooperative kinetics, whereas it should depend strongly on concentration in the cooperative case. Experimental evidence points toward the latter (Varga et al. 2009). In contrast, the depolymerization speed does not depend on the degree of cooperativity of the depolymerization kinetics. Noncooperative and fully cooperative versions of the model give identical results. As a function of depolymerase concentration and MT length, the depolymerization dynamics exhibits two qualitatively distinct regimes: The depolymerization speed is either density-limited and determined by the bulk density of molecular motors, $\rho_-(x)$, or rate-limited and dictated by the value of the microscopic depolymerization rate, δ. Both regimes emerge due to crowding of molecular motors at the plus-end, which acts as a bottleneck for molecular traffic. Density-limited regimes are correlated with microscopic traffic jams ("spikes") at the plus-end: The density profile self-organizes into a shape that cancels out all the effects of the depolymerization kinetics such that the depolymerization speed is solely determined by the bulk motor density, $\rho_-(x)$, and the average motor speed, u. Note that length-dependent regulation is possible only in this regime, because the density changes over the MT length. As emphasized above, if the depolymerization rate δ is larger than the hopping rate of the molecular motors, $\delta > \nu$, this remains the only regime of depolymerization dynamics. Then, the depolymerization speed is limited by the velocity of the plus-end-directed motors, which is in accordance with recent experimental findings for Kip3p (Varga et al. 2009). In a parameter regime where motors depolymerize more slowly than they walk, $\delta < \nu$, there is a second, rate-limited regime above the threshold density $\rho^{*}_{\mathrm{La}}$ and for MTs longer than some threshold length $\ell_{\mathrm{reg}}$, where $V_{\mathrm{depol}} = a\delta$. In this regime, the plus-end acts as a strong bottleneck for molecular traffic.
This causes a macroscopic traffic jam such that the motor density steeply rises to full occupation of all lattice sites at the plus-end of the MT. The cellular system sacrifices its ability to regulate the speed of depolymerization and only regains it once the MT length falls below $\ell_{\mathrm{reg}}$, where the depolymerization speed again becomes density-regulated. From an evolutionary perspective, one might speculate that the system has evolved toward $\delta = \nu$, because this would allow regulation of the depolymerization dynamics over the broadest possible range. Beyond these observations, other predictions of our stochastic model can be put to the test in experiments. By varying the motor concentration, two interesting observations could be made: First, the phase diagram for the density profiles at the minus-end could be scrutinized experimentally. Second, the predictions on the density profiles at the plus-end and their predicted strong correlations to the macroscopic depolymerization dynamics might be accessible to single-molecule studies. Manipulation of the molecular properties of the motor (e.g., the run length, attachment rate (Cooper et al. 2010), average speed, and depolymerization rate) would change the intrinsic biochemical rates of the system and could potentially lead to new parameter regimes. In addition, our results regarding the length and concentration dependence of the depolymerization process might be relevant in vivo, e.g., for mitotic chromosome alignment (Stumpff et al. 2008). In our theoretical studies, we explored the full parameter range, and therefore clear predictions are available for comparison. We believe that in a more general context, our theoretical work provides new conceptual insights into the role of collective and cooperative effects in MT assembly and disassembly dynamics. Future research could focus on the antagonism between polymerases and depolymerases (Howard and Hyman 2007; Kinoshita et al. 2001; Brouhard et al. 2008), spontaneous MT dynamics mediated by GTP hydrolysis, the abundance of molecular motors in a cell, or more-detailed modeling of molecular motors (Klumpp et al. 2008). This may finally lead to a molecular understanding of the regulatory mechanisms of cellular processes in which MT dynamics plays a central role. The authors thank Cécile Leduc for discussions; Varga et al. (2009) for kindly providing their data; Ulrich Gerland, Günther Woehlke, and Jonas Cremer for critical readings of the original manuscript; Anton Winkler for helpful suggestions on the revised manuscript; and Andrej Vilfan for drawing Fig. 1 A. This work was supported by the Deutsche Forschungsgemeinschaft in the framework of the SFB 863 and the German Excellence Initiative via the program "Nanosystems Initiative Munich".
## Supporting Material • Document S1. Additional details, two figures, and references ## References • Hayles J. • Nurse P. A journey into space. Nat. Rev. Mol. Cell Biol. 2001; 2: 647-656 • Tolić-Nørrelykke I.M. Force and length regulation in the microtubule cytoskeleton: lessons from fission yeast. Curr. Opin. Cell Biol. 2010; 22: 21-28 • Sharp D.J. • Rogers G.C. • Scholey J.M. Microtubule motors in mitosis. Nature. 2000; 407: 41-47 • Karsenti E. • Vernos I. The mitotic spindle: a self-made machine. Science. 2001; 294: 543-547 • Eggert U.S. • Mitchison T.J. • Field C.M. Animal cytokinesis: from parts list to mechanisms. Annu. Rev. Biochem. 2006; 75: 543-566 • Hirokawa N. • Noda Y. • Niwa S. • et al. Kinesin superfamily motor proteins and intracellular transport. Nat. Rev. Mol. Cell Biol. 2009; 10: 682-696 • Mitchison T. • Kirschner M. Dynamic instability of microtubule growth. Nature. 1984; 312: 237-242 • Dogterom M. • Leibler S. Physical aspects of the growth and regulation of microtubule structures. Phys. Rev. Lett. 1993; 70: 1347-1350 • Desai A. • Mitchison T.J. Microtubule polymerization dynamics. Annu. Rev. Cell Dev. Biol. 1997; 13: 83-117 • Howard J. • Hyman A.A. Dynamics and mechanics of the microtubule plus end. Nature. 2003; 422: 753-758 • Wordeman L. Microtubule-depolymerizing kinesins. Curr. Opin. Cell Biol. 2005; 17: 82-88 • Howard J. • Hyman A.A. Microtubule polymerases and depolymerases. Curr. Opin. Cell Biol. 2007; 19: 31-35 • Howard J. • Hyman A.A. Growth, fluctuation and switching at microtubule plus ends. Nat. Rev. Mol. Cell Biol. 2009; 10: 569-574 • Helenius J. • Brouhard G. • Howard J. • et al. The depolymerizing kinesin MCAK uses lattice diffusion to rapidly target microtubule ends. Nature. 2006; 441: 115-119 • Varga V. • Helenius J. • Howard J. • et al. Yeast kinesin-8 depolymerizes microtubules in a length-dependent manner. Nat. Cell Biol. 2006; 8: 957-962 • Gupta Jr., M.L. • Carvalho P. • Pellman D. • et al. Plus end-specific depolymerase activity of Kip3, a kinesin-8 protein, explains its role in positioning the yeast mitotic spindle. Nat. Cell Biol. 2006; 8: 913-923 • Mayr M.I. • Hümmer S. • Mayer T.U. • et al. The human kinesin Kif18A is a motile microtubule depolymerase essential for chromosome congression. Curr. Biol. 2007; 17: 488-498 • Stumpff J. • von Dassow G. • Wordeman L. • et al. The kinesin-8 motor Kif18A suppresses kinetochore movements to control mitotic chromosome alignment. Dev. Cell. 2008; 14: 252-262 • Du Y. • English C.A. • Ohi R. The kinesin-8 Kif18A dampens microtubule plus-end dynamics. Curr. Biol. 2010; 20: 374-380 • Unsworth A. • Masuda H. • Toda T. • et al. Fission yeast kinesin-8 Klp5 and Klp6 are interdependent for mitotic nuclear retention and required for proper microtubule dynamics. Mol. Biol. Cell. 2008; 19: 5104-5115 • Tischer C. • Brunner D. • Dogterom M. Force- and kinesin-8-dependent effects in the spatial regulation of fission yeast microtubule dynamics. Mol. Syst. Biol. 2009; 5: 250 • Grissom P.M. • Fiedler T. • McIntosh J.R. • et al. Kinesin-8 from fission yeast: a heterodimeric, plus-end-directed motor that can couple microtubule depolymerization to cargo movement. Mol. Biol. Cell. 2009; 20: 963-972 • Varga V. • Leduc C. • Howard J. • et al. Kinesin-8 motors act cooperatively to mediate length-dependent microtubule depolymerization. Cell. 2009; 138: 1174-1183 • Gardner M.K. • Bouck D.C. • Odde D.J. • et al. Chromosome congression by kinesin-5 motor-mediated disassembly of longer kinetochore microtubules. Cell. 
2008; 135: 894-906 • Foethke D. • Makushok T. • Nédélec F. • et al. Force- and length-dependent catastrophe activities explain interphase microtubule organization in fission yeast. Mol. Syst. Biol. 2009; 5: 241 • Klumpp S. • Chai Y. • Lipowsky R. Effects of the chemomechanical stepping cycle on the traffic of molecular motors. Phys. Rev. E. 2008; 78: 041909 • Howard J. The movement of kinesin along microtubules. Annu. Rev. Physiol. 1996; 58: 703-729 • Ray S. • Meyhöfer E. • Howard J. • et al. Kinesin follows the microtubule's protofilament axis. J. Cell Biol. 1993; 121: 1083-1093 • Parmeggiani A. • Franosch T. • Frey E. Phase coexistence in driven one-dimensional transport. Phys. Rev. Lett. 2003; 90: 086601 • Parmeggiani A. • Franosch T. • Frey E. Totally asymmetric simple exclusion process with Langmuir kinetics. Phys. Rev. E. 2004; 70: 046101 • Lipowsky R. • Klumpp S. • Nieuwenhuizen T.M. Random walks of cytoskeletal motors in open and closed compartments. Phys. Rev. Lett. 2001; 87: 108101 • Klumpp S. • Lipowsky R. Traffic of molecular motors through tube-like compartments. J. Stat. Phys. 2003; 113: 233-268 • Pierobon P. • Mobilia M. • Frey E. • et al. Bottleneck-induced transitions in a minimal model for intracellular transport. Phys. Rev. E. 2006; 74: 031906 • Telley I.A. • Bieling P. • Surrey T. Obstacles on the microtubule reduce the processivity of kinesin-1 in a minimal in vitro system and in cell extract. Biophys. J. 2009; 96: 3341-3353 • Govindan B.S. • Gopalakrishnan M. • Chowdhury D. Length control of microtubules by depolymerizing motor proteins. Europhys. Lett. 2008; 83: 40006 • Brun L. • Rupp B. • Nédélec F. • et al. A theory of microtubule catastrophes and their regulation. Proc. Natl. Acad. Sci. USA. 2009; 106: 21173-21178 • Hough L.E. • Schwabe A. • Betterton M.D. • et al. Microtubule depolymerization by the kinesin-8 motor Kip3p: a mathematical model. Biophys. J. 2009; 96: 3050-3064 • Klein G.A. • Kruse K. • Jülicher F. • et al. Filament depolymerization by motor molecules. Phys. Rev. Lett. 2005; 94: 108102 • Vilfan A. • Frey E. • Mandelkow E. • et al. Dynamics and cooperativity of microtubule decoration by the motor protein kinesin. J. Mol. Biol. 2001; 312: 1011-1026 • Frey E. • Vilfan A. Anomalous relaxation kinetics of biological lattice-ligand binding models. Chem. Phys. 2002; 284: 287-310 • Frey E. • Parmeggiani A. • Franosch T. Collective phenomena in intracellular processes. Genome Inform. 2004; 15: 46-55 • Cooper J.R. • Wagenbach M. • Wordeman L. • et al. Catalysis of the microtubule on-rate is the major parameter regulating the depolymerase activity of MCAK. Nat. Struct. Mol. Biol. 2010; 17: 77-82 • Kinoshita K. • Arnal I. • Hyman A.A. • et al. Reconstitution of physiological microtubule dynamics using purified components. Science. 2001; 294: 1340-1343 • Brouhard G.J. • Stear J.H. • Hyman A.A. • et al. XMAP215 is a processive microtubule polymerase. Cell. 2008; 132: 79-88
# Debating taxes with Nick Pearce at the IPPR

Yesterday evening I took part in a debate on Radio 4's PM programme, with the IPPR think tank's Nick Pearce, about high marginal tax rates, particularly on income and capital gains. You can listen to it here:

[soundcloud url="http://api.soundcloud.com/tracks/34403379" params="auto_play=false&show_artwork=false&color=669933" width="100%" height="166" iframe="true" /]

Last year Nick wrote that the "over-reliance of the UK on revenues from financial services, the housing market and wealthy individuals was brutally exposed in the financial crisis". The kind of argument he was making on Radio 4 yesterday would exacerbate that problem.

The amount people pay in Capital Gains Tax is often quoted and can sound low, so many politicians have argued that we need to raise the Capital Gains Tax rate to the Income Tax rate as a result. What they miss is that Capital Gains have already been taxed. To understand why, look at the Gordon Growth Model (there is a short worked example at the end of this post):

$P = \frac{D_1}{r - g}$

where P is the price; D_1 is the value of next year's dividends; r is the cost of capital; and g is the rate at which dividends are expected to grow. Of course, things can get a lot more complicated than that in the world of high finance. But the fundamental picture is the one that simple equation sets out: the value of shares is the earnings they will pay to the current owner or someone else who might own them in the future.

If a company's profits rise, that increase currently gets taxed twice. Once through Corporation Tax on profits (and often again when it is paid out in dividends too). Then again because the rise in profits increases expected dividends, immediately or in the future, and thereby increases the price of the shares (producing a capital gain).

If we want more good jobs to go around then people who invest in rapidly growing new companies, the ones who get hit by that double tax, are really important. Back in 2009 we set out just how important they are in a paper on tax and entrepreneurship.

So no, Mitt Romney almost certainly didn't pay just 13.9 per cent tax, and it would be a huge risk to our future prosperity if we started setting new tax rates as if he did. Some rich Americans probably do find elaborate ways of avoiding paying their fair share in taxes. One of the problems with a tax system that is too complicated and poorly designed is that those with the most expensive lawyers and accountants to work the system can get a better deal.

But actually the United States, with its relatively low marginal tax rates, gets more revenue out of the rich than any other developed country, according to research by the Tax Foundation based on OECD figures. Far more than many European countries that have much higher rates. Over time, and as the success of tax cuts in the 1980s shows, tax cuts can be so economically successful that they increase revenue.

Lower spending and lower taxes, particularly lower marginal rates of tax on your income, have a great track record - established by years of academic research which we're working to summarise with the 2020 Tax Commission - of delivering greater prosperity. And simple, proportionate taxes are a more reliable way of ensuring everyone pays their fair share.
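Here is the short worked example mentioned above: a quick sketch with made-up numbers, just to show how a rise in already-taxed profits shows up a second time as a capital gain.

```python
def gordon_price(d1, r, g):
    """Gordon Growth Model: share price as next year's dividend divided by (r - g)."""
    return d1 / (r - g)

# Illustrative numbers only: next year's dividend 5.00, cost of capital 9%, growth 4%.
price_before = gordon_price(5.00, 0.09, 0.04)   # 100.00
# Post-Corporation-Tax profits (and so expected dividends) rise by 10 per cent...
price_after = gordon_price(5.50, 0.09, 0.04)    # 110.00
capital_gain = price_after - price_before       # 10.00 of gain on earnings taxed once already
```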
# Interface effects in hybrid hBN-graphene nanoribbons

## Abstract

We analyze the electronic properties of a hybrid graphene-BN nanoribbon system, using a Hubbard model Hamiltonian within a mean-field approximation. Due to the different electronegativities of the boron and nitrogen atoms, an electric field is induced across the zigzag graphene strip, breaking the spin degeneracy of the electronic band structure. Optimal tight-binding parameters are found from first-principles calculations. Edge potentials are proposed as corrections for the on-site energies, modeling the BN-graphene nanoribbon interfaces. We show that half-metallic responses in the hybrid systems may be driven with the help of an external electric field. We also study the role of defects across the graphene nanoribbon and at the h-BN/graphene interface regions. Modulations of the spin-dependent gaps may be achieved depending on the nature and position of the defect, constituting a way towards spin-gap engineering by means of spatial doping.

## Introduction

Graphene is a zero-band-gap semiconductor with valence and conduction bands touching at the corners of the Brillouin zone. However, in order to use graphene in semiconductor electronics, such as in field-effect transistors (FETs), it is very important to open a band gap. This can be achieved, for instance, by patterning graphene into quasi-one-dimensional nanoribbons or tuning its properties for its use as a two-dimensional semiconductor1. In nanoribbons, the opened gap depends on the edge type and the width of the strip2,3,4. Alternatively, combining the properties of large-band-gap semiconductors, such as hexagonal boron nitride (h-BN), with the conductive properties of graphene appears as a promising route for gap modulation5. In fact, h-BN has the same honeycomb lattice as graphene, but is composed of alternating boron and nitrogen atoms with a lattice constant (2.50 Å) very similar to that of graphene (2.46 Å). The band gap of h-BN is 5–6 eV6. Hybrid graphene and h-BN materials have been synthesized using lithography patterning and sequential CVD growth steps7. Actually, efficient epitaxial growth routes of BN onto graphene edges are possible under particular experimental conditions8, allowing a good matching between both lattices. Gap engineering is also possible by means of external electric fields9,10,11. Insulator-metal transitions in BN nanoribbons have been predicted with the application of transversal electric fields12; the actual critical field depends on the width of the ribbon. In particular, when an electric field is applied in the transversal direction of zigzag graphene nanoribbons (ZGNRs), the system behaves as a half-metallic material13,14. Namely, the spin degeneracy is broken and the energy gap for electrons with spin down (spin up) decreases (increases) until a critical electric field is reached, for which there is a null gap for the spin-down channel, while the electrons in the other channel have a direct band gap. This opens the possibility of exploring the spin-filtering properties of carbon systems15. Interestingly, such half-metallic behavior can be achieved even in the absence of an external electric field for some systems16,17,18,19, such as ZGNRs embedded in zigzag h-BN nanoribbons (ZBNNRs)20. Graphene/h-BN heterostructures have been proposed for engineering applications, for example CO2-capture devices21, electronic rectifiers22, and spin filters activated by strain23 or by edge hydrogenation effects24.
Induced half-metallic responses due to spin polarization through defective interfaces of h-BN/graphene were reported in the work of Lan et al.25. Also, laterally repeating h-BN/graphene systems present a variety of electronic properties according to the geometry of the h-BN layers, and can potentially be used in optical devices26. The electronic properties can also be modified by dihydrogenation of the edges of hybrid C/BN nanostructures27, promoting interesting semiconductor-to-half-metal-to-metal transitions. Our study focuses on hybrid systems of the form m-ZBNNR/n-ZGNR/m-ZBNNR (see Fig. 1), with m and n being the number of zigzag boron nitride and carbon chains, respectively. For the sake of simplicity, in what follows we label this hybrid system with zigzag edges and interfaces as m BN/n G/m BN. A tight-binding (TB) model with electronic correlation given by a Hubbard-like term is used to account for spin-dependent solutions. Simple TB calculations can reproduce quite well the electronic bands near the Fermi level for ZGNRs when compared to first-principles results28. For the case of ZBNNRs, more complex descriptions are necessary due to the large ionicity of BN. Approaches to overcome this difficulty in similar systems include corrections to on-site energies and hopping integrals at the edges of nanoribbons29, mainly related to the π-bands. Alternatively, an edge-potential approach was successfully proposed by Zheng et al. as an effective correction to on-site energy values in ZBNNRs30. This attempts to explain not only π-band features, but also some σ-band effects appearing in density functional theory (DFT) results. For the hybrid system m BN/n G/m BN, the presence of a ZGNR in the middle of the boron nitride nanoribbons changes the alternating electron polarization of the B and N atoms. In fact, a single edge potential is unable to explain the electronic properties of the hybrid system. For this reason, we look for an effective electronic potential across the BN/graphene interfaces, deriving it from DFT results31, which yields on-site energy corrections for our TB calculations. With this parameterization, we study the band-gap behavior of hybrid systems as a function of their geometry and structure. We also consider the presence of defects that are usually frequent in such nanostructured systems, mainly at the edges and interfaces, such as edge disorder and atom migration. Spontaneous magnetization in ZBNNRs doped with carbon atoms at substitutional positions in the lattice has been reported32, so understanding the effects of such lattice imperfections on the electronic properties is relevant to improve the controlled electronic responses of these nanostructured materials. Indeed, atomically controlled substitutional boron doping of graphene nanoribbons has been reported to provide a route for the design of mobility gaps and novel types of graphene transistors33.

## Models and Methods

### Hubbard Model

The hybrid system is described by a TB approach involving a set of optimal parameters that properly reproduce DFT calculations. The intrasite electron-electron interaction is included via a Hubbard term, which reproduces the half-metallic characteristics related to spin polarization.
The Hubbard Hamiltonian is written as

$$\mathcal{H} = \sum_{i} \varepsilon_{i}\, c_{i}^{\dagger} c_{i} + \sum_{\langle i,j \rangle} t_{ij}\, c_{i}^{\dagger} c_{j} + \sum_{i\sigma} U_{i}\, n_{i\sigma} n_{i\bar{\sigma}} + \mathrm{h.c.},$$ (1)

where the operator $c_{i\sigma}^{\dagger}$ ($c_{i\sigma}$) creates (destroys) an electron with spin σ at site i, $n_{i\sigma}$ is the spin-dependent occupation-number operator, and $\varepsilon_i$, $U_i$, and $t_{ij}$ are the on-site energy, the intra-atomic Coulomb repulsion, and the hopping integral, respectively. Using the mean-field approximation, the Coulomb term in Eq. (1) reduces to $n_{i\uparrow} n_{i\downarrow} \approx \langle n_{i\downarrow}\rangle n_{i\uparrow} + \langle n_{i\uparrow}\rangle n_{i\downarrow}$, and the on-site energy may be written as

$$\varepsilon_{i}^{\prime} \equiv \varepsilon_{i} + U_{i}\left(\langle n_{i\bar{\sigma}}\rangle - \tfrac{1}{2}\right),$$ (2)

where the constant factor 1/2 shifts the band center to $\varepsilon = 0$34. A charge-neutrality condition is imposed, involving a self-consistent determination of the Fermi-level energy. In this process, the average densities $\langle n_{i\sigma}\rangle$ were calculated following a real-space renormalization procedure within the Green function formalism, which has been successfully employed to describe different carbon nanomaterials15,35,36,37. Finally, the new on-site energies are calculated from the converged occupation numbers, according to Eq. (2). From these, the final band structures are obtained. A portion of the pristine hybrid system is shown in Fig. 1. A pair of dimer lines constitutes a translational unit cell, which we call an armchair chain. Hence, Fig. 1 shows a 2-armchair-chain supercell. The nanoribbon has translational symmetry along the longitudinal axis, parallel to the zigzag edges. Therefore, we use the same $U_C$ ($U_B$, $U_N$) values for carbon (boron, nitrogen) atoms along the length of the nanoribbon. For non-pristine systems, we consider periodic defects by taking a defective supercell, and following the same self-consistent renormalization procedure described above.

### DFT calculations

In order to obtain the Hubbard and TB parameters, we follow a procedure similar to that used for other h-BN/graphene systems38. We use the Quantum ESPRESSO (QE)39 code to compute the electronic structure of the hybrid systems, followed by the WanT package40,41 to obtain the on-site and hopping parameters. We call these the post-processed DFT parameter sets. Pseudopotentials of the ultrasoft Vanderbilt type, a cutoff energy of 60 Ry, and a k-point sampling of 52 points for the zigzag nanoribbons are employed. The codes provide a procedure based on projecting the Kohn-Sham solutions onto localized atomic orbitals. The corresponding Hamiltonian in real space has a Slater-Koster matrix shape42, from which we obtain the TB parameter values. Although the post-processed DFT calculations consider interactions over a large range of neighboring atoms, we are interested only in the on-site energies and first-nearest-neighbor hoppings, so the parameters obtained from QE and WanT are adjusted to build an optimal set of TB parameters that gives the best agreement with the DFT calculations. Optimal on-site energies and hopping parameter values for other BN/graphene systems are reported elsewhere5.
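Before describing the parameter adjustment, it may help to see how Eqs. 1 and 2 are typically used in practice. The following is a minimal self-consistent mean-field sketch in our own notation, not the paper's implementation: it diagonalizes a small spin-resolved Bloch Hamiltonian on a k-grid instead of using the real-space Green-function renormalization described above, and `h0_of_k`, the k-grid, and all parameter names are assumptions for illustration.

```python
import numpy as np

def mean_field_hubbard(h0_of_k, kpts, eps0, U, n_elec, n_iter=200, mix=0.3, tol=1e-6):
    """Self-consistent mean-field Hubbard loop based on Eq. 2:
    eps_i(sigma) = eps0_i + U_i * (<n_{i,-sigma}> - 1/2), iterated to convergence.

    h0_of_k : callable returning the spin-independent N x N Bloch Hamiltonian at k
    eps0, U : length-N arrays of bare on-site energies and Hubbard parameters
    n_elec  : electrons per unit cell (charge neutrality fixes the Fermi level)
    """
    N = len(eps0)
    # slightly spin-polarized seed so that magnetic solutions can develop
    occ = {+1: np.full(N, 0.55), -1: np.full(N, 0.45)}
    e_fermi = 0.0
    for _ in range(n_iter):
        eps = {s: np.asarray(eps0) + np.asarray(U) * (occ[-s] - 0.5) for s in (+1, -1)}  # Eq. 2
        evals, probs = {}, {}
        for s in (+1, -1):
            es, ps = [], []
            for k in kpts:
                e, v = np.linalg.eigh(h0_of_k(k) + np.diag(eps[s]))
                es.append(e)
                ps.append(np.abs(v) ** 2)   # |c_i|^2 weight of each eigenstate on site i
            evals[s], probs[s] = es, ps
        # charge neutrality: fill n_elec states per unit cell (both spins, all k points)
        all_e = np.sort(np.concatenate([e for s in (+1, -1) for e in evals[s]]))
        e_fermi = all_e[int(n_elec * len(kpts)) - 1]
        new_occ = {}
        for s in (+1, -1):
            n_s = np.zeros(N)
            for e, p in zip(evals[s], probs[s]):
                n_s += p[:, e <= e_fermi].sum(axis=1)   # <n_{i,sigma}> from filled states
            new_occ[s] = n_s / len(kpts)
        if max(np.max(np.abs(new_occ[s] - occ[s])) for s in (+1, -1)) < tol:
            occ = new_occ
            break
        occ = {s: (1 - mix) * occ[s] + mix * new_occ[s] for s in (+1, -1)}  # linear mixing
    return occ, e_fermi
```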
For the present hybrid system, we adjust those parameter values to obtain an effective electronic potential31, which accounts for the presence of the ZBNNR edges and the BN/C interfaces. A more detailed description of the optimized tight-binding model is presented in the Supplementary Material.

## Results

### Edge and interface potentials

Taking into account the important contribution from edge states to the electronic density of zigzag BN nanoribbons and the electronegativity difference between B and N atoms, Zheng et al. proposed a TB model for a ZBNNR system with on-site energy corrections in the form of effective edge potentials30. Here we follow a similar approach. In addition to the edge corrections, we also consider the effects of the zigzag interfaces (N/C and C/B) on the on-site energies of the hybrid system. The results obtained from post-processed DFT calculations for the on-site energy values, $\varepsilon_i$, of the B, N and C atoms are shown as black circles in Fig. 2 (left panel); they may be fitted by an envelope function according to

$$\varepsilon_{i} = \varepsilon_{i}^{(0)} + V(x_{i}),$$ (3)

where $\varepsilon_{i}^{(0)}$ are the on-site energies for sites i far enough from the edges or interfaces. The envelope function V(x) is not directly obtained from post-processed DFT calculations due to the influence of a large number of neighbor interactions. Here, we are interested in an effective TB model considering only first-nearest-neighbor interactions. The blue triangles in the left panel of Fig. 2 are obtained by using an optimal envelope function that reproduces quite well the band structures of several hybrid systems. This function is obtained following the approach by Zheng et al. to correct the on-site energies in ZBNNRs using exponential-type edge potentials30. Furthermore, our procedure incorporates the effects from the BN edges and the BN/graphene interfaces, i.e., for the BN region we have the following contribution from the edges,

$$V_{\mathrm{edge}}(x) = P_{B}\, e^{-|x - x_{B}|/\lambda} + P_{N}\, e^{-|x - x_{N}|/\lambda},$$ (4)

which accounts for the different electronegativities and the high π-electron density at the $x_B$ and $x_N$ edges, while in the internal graphene strip the contributions from the interfaces are chosen as decaying exponential functions centered at the left and right interfaces, $x_L$ and $x_R$,

$$V_{\mathrm{int}}(x) = -P_{G}\, e^{-|x - x_{L}|/\lambda_{G}} + P_{G}\, e^{-|x - x_{R}|/\lambda_{G}}.$$ (5)

On-site energies taken directly from post-processed DFT calculations do not reproduce some features of the band structures, due to the contributions of many neighbors and the poor projection of the Kohn-Sham solutions onto localized atomic orbitals43. In order to identify the edge-localized states in a band, we compute the analog of the inverse participation number for the tight-binding coefficients of the wavefunction, which quantifies the localization of a state44,45. Thus, we estimate the degree of localization in every band according to

$$F \equiv \sum_{i=1}^{N} |c_{i}|^{4},$$ (6)

where the $c_i$'s are the probability amplitudes of the electronic eigenfunction for the orbital $|\varphi_i\rangle$ centered at site i, in an N-atom unit cell. By recognizing the edge states in a band structure, we can compute the corresponding B and N on-site energies by performing a fit to those bands. The band structures are shown in the right panel of Fig. 2 for both DFT and TB calculations.
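A minimal sketch of the envelope correction of Eqs. 3-5 and of the localization measure of Eq. 6 (our own function and parameter names; the split between the BN regions and the graphene strip follows the description above):

```python
import numpy as np

def envelope_potential(x, xB, xN, xL, xR, PB, PN, PG, lam, lam_G):
    """Edge plus interface correction V(x) added to the bare on-site energies (Eq. 3).
    Eq. 4 acts in the BN regions (outer edges at xB and xN); Eq. 5 acts inside the
    graphene strip (interfaces at xL and xR)."""
    v_edge = PB * np.exp(-np.abs(x - xB) / lam) + PN * np.exp(-np.abs(x - xN) / lam)
    v_int = -PG * np.exp(-np.abs(x - xL) / lam_G) + PG * np.exp(-np.abs(x - xR) / lam_G)
    in_graphene = (x > xL) & (x < xR)
    return np.where(in_graphene, v_int, v_edge)

def inverse_participation(c):
    """Eq. 6: F = sum_i |c_i|^4 for a normalized eigenvector; F -> 1 for a state
    localized on a single site, F -> 1/N for a fully delocalized state."""
    c = np.asarray(c)
    return float(np.sum(np.abs(c) ** 4))
```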
For the TB bands, the degree of localization is indicated in color, with a maximum value of F = 1 when the electron is localized on just one atom, corresponding to B or N atoms at the edges of the hybrid zigzag nanoribbon unit cell (dotted lines). By adjusting those on-site energy values, we are able to match the boron-nitride edge states of the DFT and TB band structures. The resulting parameters for the edge and interface potentials are shown in Table 1. $\lambda_G$ describes a short-range interface potential when compared to the edge-potential range, λ, in the BN nanoribbon regions. This is reasonable, since the graphene strip is composed only of carbon atoms, where charge carriers are screened more effectively than in the BN region. It is important to remark that the same TB parameters explain the main band-structure features of other hybrid systems. In fact, we have worked with a set of m BN/n G/m BN systems such as (m;n) = (5;2), (5;3)...(5;6), (3;6), (4;6), (4;4), (2;12), and the proposed interface potential fits them even for wider graphene strips, as shown in Fig. 3, where the exponential behavior from the interfaces is clearer. The curves were fitted by taking $\varepsilon_i^{(0)}$ values in agreement with results from the literature for other heterostructures made up of graphene and h-BN5,46. The work by Zhao et al. provides TB parameters for BN quantum dots embedded in graphene5. Starting from those values, we have modified them (see Table 2) to better reproduce the DFT band structures of the hybrid systems. Importantly, we depart 0.10 eV from their $t_{CN}$ value to ensure non-magnetic solutions for 5 BN/2 G/5 BN, as predicted by DFT results (see Fig. 4(a)). An interesting result for the hybrid systems is their spontaneous half-metallicity without the need of an external field, which is explained by the presence of an effective electric potential generated across the graphene strip, as can be noticed from Eq. 5. If the width of the graphene nanoribbon is comparable to $\lambda_G$, the interface potential resembles the potential of a uniform transversal electric field in a graphene nanoribbon, which induces half-metallic behavior above a critical electric field13. In particular, as shown in Fig. 4 (top panels), our DFT results predict a non-magnetic solution for the shortest nanoribbon m BN/n G/m BN with (m;n) = (5;2), but predict a half-metallic response for wider ribbons, with (m;n) = (5;4) and (5;6), in agreement with Ding et al.4. Similar band structures with a half-metallic response are also exhibited by other h-BN/graphene configurations47. Tight-binding band structures are also calculated with our parameterization and compared with the DFT calculations. The TB results are plotted in Fig. 4(d-f), displaying quite similar behavior for the edge states and valence bands. Additionally, the TB results present the same trend of closing the gap shown by the DFT results for wider graphene strips.

### Applying an external electric field

In order to further explore the half-metallicity features observed in the hybrid graphene/h-BN nanoribbons, we consider the case in which an extra electric field is applied in the transversal direction of the nanoribbons. To introduce the electric field into the calculations, we have followed the same procedure as in Wakabayashi et al.34, i.e.,
the on-site energies are modified by an electrostatic potential:

$$\varepsilon_{i}^{\prime} = \varepsilon_{i} + V(x),$$ (7)

where V is the electrostatic potential produced by a transversal electric field, parallel to the X axis of the figures shown in this article and perpendicular to the edges,

$$V(x) = x\, E_{\mathrm{field}},$$ (8)

with V being antisymmetric with respect to the axis of the nanoribbon. We have chosen this model since our system is electrically neutral, and our calculations follow a procedure that ensures charge neutrality for the system as a whole. In the presence of a transversal electric field, first-principles calculations on zigzag BNNRs have been shown to exhibit asymmetric responses48. Such behavior can be understood from the absence of electron-hole symmetry. TB calculations show that when a positive external electric field is added to the hybrid system, the original half-metallicity disappears; with increasing electric field, the bandgap for the spin-up channel decreases rapidly, while the bandgap for the spin-down channel widens. The bandgaps for the up and down spin channels eventually become equal at a particular electric field, marked in Fig. 5(a) by an arrow. Further increasing the electric field, the dependence of the spin-up and spin-down gaps on the applied electric field resembles that of a pristine ZGNR. Eventually, the system reaches the critical electric field at which it exhibits half-metallic behavior3,49. After this point, the system becomes a nonmagnetic semiconductor. In other words, in the hybrid system a first critical field occurs when a nonzero external field is applied (removing the original half-metallicity), and a second one happens for larger values, marking the occurrence of the half-metallic behavior. This second critical electric field is shifted to higher values compared to pure graphene nanoribbons, due to the intrinsic field produced by the ZBNNR edges. According to our TB calculations, the rate of change of the spin-up gap with respect to the electric field is very high around the first critical electric-field value (close to zero field for the h-BN/graphene heterostructure). We have checked that the exact electric field at which this abrupt transition takes place depends on the Coulomb potential U adopted, but the shape of the gap curve as a function of the electric field is completely preserved (more details are presented in the Supplementary Material). TB calculations by Culchac et al. also show a fast evolution of the gap value around the critical electric-field value for ZGNRs49. The rate of change is smoother according to DFT calculations on ZGNRs, but strongly depends on the functional employed14,50. It is important to remark that the TB calculations can reproduce the removal of half-metallicity for large values of the electric field, a common feature of ZGNRs and h-BN/graphene heterostructures. For negative values of the electric field, the different spin-dependent gaps move towards a non-magnetic solution, with degenerate up and down solutions. Therefore, the addition of an external electric field breaks the intrinsic half-metallic behavior that occurs for certain hybrid system geometries. This response is in agreement with the DFT results by Bhowmick et al.20.
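Because the field enters the calculation only through the on-site shift of Eqs. 7 and 8, a field sweep can be sketched schematically as below. This is our own illustration: `solve_occupations` is a hypothetical stand-in for a converged spin-resolved solver (for instance, the mean-field loop sketched in the Methods section), and all names are assumptions.

```python
import numpy as np

def apply_transverse_field(eps0, x_positions, E_field):
    """Eqs. 7-8: shift each on-site energy by V(x) = x * E_field, with x measured
    from the nanoribbon axis so that the potential is antisymmetric."""
    x = np.asarray(x_positions, dtype=float)
    return np.asarray(eps0, dtype=float) + (x - x.mean()) * E_field

def magnetization_vs_field(eps0, x_positions, fields, solve_occupations):
    """Total magnetization M = sum_i (n_i_up - n_i_down) / 2 versus E_field, where
    solve_occupations(eps) returns the converged spin-up and spin-down occupations
    for the shifted on-site energies."""
    sweep = []
    for E in fields:
        n_up, n_dn = solve_occupations(apply_transverse_field(eps0, x_positions, E))
        sweep.append((E, 0.5 * float(np.sum(np.asarray(n_up) - np.asarray(n_dn)))))
    return sweep
```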
We also calculate the magnetization of the hybrid system 5 BN/6 G/5 BN as a function of the applied electric field. We use the expression $M_i = (n_{i,\uparrow} - n_{i,\downarrow})/2$ for the local magnetization at site i, given in terms of the Bohr magneton, with $n_{i,\uparrow}$ ($n_{i,\downarrow}$) being the mean number of electrons with spin up (down) at site i. The results plotted in Fig. 5(b) clearly show magnetization quenching for large electric-field values (in both directions). This response also resembles the magnetization behavior of a zigzag graphene nanoribbon in the neighborhood of the critical value of the external transverse electric field49,51. Increasing a positive external field also leads to magnetization quenching, but with a different field dependence due to the internal electric field induced by the BN edges. Of course, the critical electric field that produces the magnetization suppression depends on the characteristic sizes of the hybrid system, which may additionally be engineered.

### Substitutional defects: roughness and diffusion in hybrid nanoribbons

A more realistic description of the studied hybrid nanoribbons should certainly include roughness at the zigzag interfaces as well as possible diffusion of boron or nitrogen atoms from the h-BN region into the graphene strip52, and vice versa. The presence of defects at the interface has been reported elsewhere53 and could lead to an enhancement of the half-metallic behavior54. A DFT description of these hybrids would require a high computational cost. Dieb et al. overcame this difficulty55 by proposing a machine-learning algorithm to describe B-doped graphene systems and to find the most stable configurations. Alternatively, we model hybrid systems by means of the tight-binding approximation, using our TB parameterization and fitting curves to model the hybrid-system bandgaps. We first consider the possibility of boron or nitrogen atoms placed at carbon-atom positions inside the graphene strip, mimicking a migration process, as shown schematically in Fig. 6(a). Notice that the considered supercell with a single substitutional B or N atom is repeated along the edge direction. This can be described following a real-space renormalization scheme. We analyze the gap evolution shown in Fig. 6(b,c) with respect to the B and N substitution at the n-th site. We should remember that the electronegativity of nitrogen is higher than that of carbon and boron. Usually, in a B-doped AGNR the Fermi level is shifted downward due to the missing π electrons of the B atoms56. As expected, impurity substitutions with boron and nitrogen atoms induce opposite trends for the bandgap behavior as a function of the impurity position across the graphene strip: on average, a higher/smaller spin-up gap as the B/N atom migrates from the N-C (left) to the C-B (right) interface. N-substitutions along the graphene strip show, on average, a more robust half-metallic response compared with B-substitutions and with the pristine case. Note also that the system enhances its half-metallic response when the B and N substitutions happen at the first-neighbor carbon atom of either interface: site 11 for the B and site 22 for the N impurity. When a B atom occupies the 11th position, it follows the natural B-N sequence along the h-BN armchair chain, leading to a narrower effective ZGNR width, which makes the gap increase. The same is valid for an N impurity at site 22. Calculations for less dilute systems (one impurity per supercell of 3, 5, and 7 armchair chains) reveal similar bandgap trends, but with more variations.
Since we have not directly included structural distortions in the discussion of the defects, we have explored an alternative approach that effectively includes lattice-distortion effects, based on linear strain theory, which modifies the hopping energies within tight-binding descriptions. Details are presented in the Supplementary Material. We have shown that similar results are obtained, validating our approach. Using our TB parameters with edge and interface potentials, we calculate the total energy of the system when a boron (or a nitrogen) atom substitutes a carbon atom in the graphene strip. The results show (not presented here) that a boron atom replacing a carbon atom at the interface is the energetically most favorable configuration, with the boron bonded to a nitrogen atom. As expected, an analogous behavior occurs for the nitrogen substitution at the other interface. These results have been corroborated by DFT calculations. We also study other defect configurations. Instead of having atomic diffusion across the graphene strip, we analyze the effect of impurities located at opposite interfaces. Figure 7(a) shows an example of defects at both h-BN/graphene interfaces; i.e., a nitrogen atom is located at the B/C right interface, substituting a carbon atom, while a boron atom is located at the N/C left interface in the graphene strip. A similar framework is considered in Fig. 7(b), with a carbon impurity substituting a nitrogen atom at the N/C left interface while the nitrogen impurity is kept fixed at the right interface (red arrow). The effect of the relative position of the defects on the energy bandgap is displayed in Fig. 7(c). We define $\Delta N_a$ as a parameter that indicates how far the two defects are from the frontal relative position, in terms of the number of armchair chains separating them. Thus, $\Delta N_a = 0$ when both impurities belong to the same armchair chain but are located at different interfaces. Figure 7(c) shows two main features: an oscillatory gap behavior with respect to the relative impurity positions at the opposite interfaces for the spin-up component, and a nearly constant energy gap for the spin-down component. The former can be explained in terms of the modification of the interface potentials due to the presence of impurities at each interface. The effective field generated by the charged boron and nitrogen atoms at the interface is perturbed in the presence of defects at the interfaces, which leads to a change in the bandgap of the system. Figure 7(c) shows an overall increase of the bandgap values with respect to the pristine case. The half-metallic character is destroyed; both up and down spin gaps are finite. Moreover, Fig. 7(c) shows a maximum gap value when the impurities are exactly in front of each other, i.e., when $\Delta N_a = 0$, while for other neighboring relative positions of the defects the gap starts decreasing as the impurities move apart from each other. In fact, the configuration with opposite impurities amounts to an effective reduction of the width of the ribbon, which yields an increased gap. We should also keep in mind that the oscillatory gap behavior as a function of $\Delta N_a$ shown in Fig. 7(a,b) is a direct consequence of the translational periodicity imposed by the repetition of the supercell containing the two impurities. We have obtained the same oscillatory trend for other supercell sizes. Additionally, we have analyzed the role of the supercell size in the calculations.
Figure 8 shows the dependence of the spin-up and spin-down gaps as functions of the supercell size N for the particular system 5 BN/6 G/5 BN. In the dilute-impurity limit the system tends to a half-metallic configuration, with the spin-down gap tending to zero for larger sizes while still showing a sizable spin-up gap, as in Fig. 7; the overall behavior, however, is already well described with smaller supercells. More systematic studies are nevertheless necessary, considering for instance the effect of different impurity concentrations at the edges on the electronic properties of h-BN/graphene heterostructures, and also doping of the graphene strip region. Miao et al. reported57 that doping graphene sheets above a certain concentration leads to a half-metallic response. Therefore, the combined effects of interface potentials and B(N)-doping in h-BN/graphene/h-BN systems may result in an enhancement of the half-metallic behavior, depending on the doping concentration.

### Final Remarks

In summary, we have found TB parameters for the n BN/m C/n BN heterostructure that reproduce the main features of the DFT predictions, including the effect of external electric fields. The TB parameters are not only valid for a specific size of the heterostructure: they are robust enough to explain the main trends of the electronic band structures for a wide range of graphene/h-BN system sizes. Due to the ionic edges of the ZBNNRs and the BN/C interfaces in the hybrid systems described here, we can estimate an effective potential that gives the best fit to the DFT predictions. Another advantage of our TB parameters is that they enable transport studies using Green functions in large hybrid systems, including the influence of impurities in a scattering region. The coupling regions between electrodes and scattering regions are minimized, reducing the computational cost of the transport calculations. We have enlarged our discussion on the adequacy of our approach, and on the possibility of validating it with other computational methods, in the Supplementary Material. With respect to the applied electric fields, due to the polarity of the material, the boron and nitrogen edges induce an electric field that competes with the external field, which we have successfully modeled. Half-metallicity becomes intrinsic to the BN/5 C/BN system even in the absence of an external electric field. Furthermore, the graphene/h-BN heterostructure can work not only as a spin filter, but also as a spin-swapping device, by just turning on an external electric field. Taking advantage of our approach, we aim to study larger systems that would be computationally expensive for first-principles calculations. We use our TB parameters to analyze more realistic graphene/h-BN heterostructures by considering boron and nitrogen diffusion across the graphene strip, or graphene/h-BN interfaces with roughness due to substitutional impurities. We have found that, in general, nitrogen diffusion enhances the half-metallic response. We have also found an oscillatory gap behavior whose maximum occurs for a pair of boron and nitrogen impurities facing each other at the two BN/graphene interfaces. We attributed this maximum gap value to an effective reduction of the width of the nanoribbon. Further studies are needed to analyze a possible suppression or enhancement of the half-metallic response by modulating the concentration of boron and nitrogen impurities at the interfaces. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References 1. 1. Schwierz, F. Graphene transistors. Nat. Nanotechnol. 5, 487 (2010). 2. 2. Yang, L., Park, C.-H., Son, Y.-W., Cohen, M. L. & Louie, S. G. Quasiparticle energies and band gaps in graphene nanoribbons. Phys. Rev. Lett. 99, 18680, https://doi.org/10.1103/PhysRevLett.99.186801 (2007). 3. 3. Son, Y.-W., Cohen, M. L. & Louie, S. G. Energy gaps in graphene nanoribbons. Phys. Rev. Lett. 97, 216803, https://doi.org/10.1103/PhysRevLett.97.216803 (2006). 4. 4. Ding, Y., Wang, Y. & Ni, J. Electronic properties of graphene nanoribbons embedded in boron nitride sheets. Appl. Phys. Lett. 95, 123105, https://doi.org/10.1063/1.3234374 (2009). 5. 5. Zhao, R., Wang, J., Yang, M., Liu, Z. & Liu, Z. BN-embedded graphene with a ubiquitous gap opening. The J. Phys. Chem. C 116, 21098–21103, https://doi.org/10.1021/jp306660x (2012). 6. 6. Golberg, D. et al. Boron nitride nanotubes and nanosheets. ACS Nano 4, 2979–2993, https://doi.org/10.1021/nn1006495 (2010). 7. 7. Kim, S. M. et al. Synthesis of patched or stacked graphene and hBN flakes: A route to hybrid structure discovery. Nano Lett. 13, 933–941, https://doi.org/10.1021/nl303760m (2013). 8. 8. Liu, W., Zhang, K., Wang, R., Zhong, J. & Li, L. Modulation of the electron transport properties in graphene nanoribbons doped with BN chains. AIP Adv. 4 (2014). 9. 9. Sahu, B., Min, H., MacDonald, A. H. & Banerjee, S. K. Energy gaps, magnetism, and electric-field effects in bilayer graphene nanoribbons. Phys. Rev. B 78, 045404, https://doi.org/10.1103/PhysRevB.78.045404 (2008). 10. 10. Guo, Y., Guo, W. & Chen, C. Semiconducting to half-metallic to metallic transition on spin-resolved zigzag bilayer graphene nanoribbons. The J. Phys. Chem. C 114, 13098–13105, https://doi.org/10.1021/jp102147a (2010). 11. 11. Tang, K. et al. Electric-field-induced energy gap in few-layer graphene. The J. Phys. Chem. C 115, 9458–9464, https://doi.org/10.1021/jp201761p (2011). 12. 12. Zhang, P., Wang, Z., Shi, J., Xiao, D. & Niu, Q. Theory of conserved spin current and its application to a two-dimensional hole gas. Phys. Rev. B 77, 075304, https://doi.org/10.1103/PhysRevB.77.075304 (2008). 13. 13. Son, Y.-W., Cohen, M. L. & Louie, S. G. Half-metallic graphene nanoribbons. Nature 444, 347 (2006). 14. 14. Kan, E.-J., Li, Z., Yang, J. & Hou, J. G. Will zigzag graphene nanoribbon turn to half metal under electric field? Appl. Phys. Lett. 91, 243116, https://doi.org/10.1063/1.2821112 (2007). 15. 15. León, C. & Latgé, A. Half-metallicity study of graphene nanoribbon bilayers under external fields. Phys. Rev. B 88, 245446, https://doi.org/10.1103/PhysRevB.88.245446 (2013). 16. 16. Rai, H. et al. Half-metallicity in armchair boron nitride nanoribbons: A first-principles study. Solid State Commun. 212, 19–24, https://doi.org/10.1016/j.ssc.2015.04.003 (2015). 17. 17. Kim, S.-W., Kim, H.-J., Choi, J.-H., Scheicher, R. H. & Cho, J.-H. Contrasting interedge superexchange interactions of graphene nanoribbons embedded in h-bn and graphane. Phys. Rev. B 92, 035443, https://doi.org/10.1103/PhysRevB.92.035443 (2015). 18. 18. Wu, H., hui Chen, G., peng Yu, Y., Wu, D. & Wang, Q. Theoretical exploration of the half-metallicity of graphene nanoribbons/boron nitride bilayer system. Comput. Mater. Sci. 95, 384–392, https://doi.org/10.1016/j.commatsci.2014.07.048 (2014). 19. 19. Lai, L. et al. Magnetic properties of fully bare and half-bare boron nitride nanoribbons. The J. Phys. Chem. C 113, 2273 (2009). 20. 20. Bhowmick, S., Singh, A. K. & Yakobson, B. I. 
Quantum dots and nanoroads of graphene embedded in hexagonal boron nitride. The J. Phys. Chem. C 115, 9889–9893, https://doi.org/10.1021/jp200671p (2011). 21. Tan, X., Tahini, H. A. & Smith, S. C. Hexagonal boron nitride and graphene in-plane heterostructures: An experimentally feasible approach to charge-induced switchable CO2 capture. Chem. Phys. 478, 139–144, https://doi.org/10.1016/j.chemphys.2016.04.001 (2016). 22. Modarresi, M., Roknabadi, M. & Shahtahmassebi, N. Rectifying behavior of graphene/h-boron-nitride heterostructure. Phys. B: Condens. Matter. 415, 62–66, https://doi.org/10.1016/j.physb.2013.01.038 (2013). 23. Zhu, S. & Li, T. Strain-induced programmable half-metal and spin-gapless semiconductor in an edge-doped boron nitride nanoribbon. Phys. Rev. B 93, 115401, https://doi.org/10.1103/PhysRevB.93.115401 (2016). 24. Ouyang, J. et al. Modulating the spin transport behaviors in ZBNCNRs by edge hydrogenation and position of BN chain. AIP Adv. 6, 035116, https://doi.org/10.1063/1.4944796 (2016). 25. Lan, T. N., Ho, L. B. & Hai, T. H. Electronic, magnetic, and spin-polarized transport properties of hybrid graphene/boron-nitride nanoribbons having 5-8-5 line defects at the heterojunction. Physica status solidi (b) 252, 573–581, https://doi.org/10.1002/pssb.201451442 (2015). 26. Özçelik, V. O., Durgun, E. & Ciraci, S. Modulation of electronic properties in laterally and commensurately repeating graphene and boron nitride composite nanostructures. The J. Phys. Chem. C 119, 13248–13256, https://doi.org/10.1021/acs.jpcc.5b01598 (2015). 27. Liu, Y., Wu, X., Zhao, Y., Zeng, X. C. & Yang, J. Half-metallicity in hybrid graphene/boron nitride nanoribbons with dihydrogenated edges. The J. Phys. Chem. C 115, 9442–9450, https://doi.org/10.1021/jp201350e (2011). 28. Castro Neto, A. H., Guinea, F., Peres, N., Novoselov, K. & Geim, A. The electronic properties of graphene. Rev. Mod. Phys. 81, 109–162, https://doi.org/10.1103/RevModPhys.81.109 (2009). 29. Zhao, K., Zhao, M., Wang, Z. & Fan, Y. Tight-binding model for the electronic structures of SiC and BN nanoribbons. Phys. E: Low-dimensional Syst. Nanostructures 43, 440–445, https://doi.org/10.1016/j.physe.2010.08.025 (2010). 30. Zheng, F., Sasaki, K., Saito, R., Duan, W. & Gu, B.-L. Edge states of zigzag boron nitride nanoribbons. J. Phys. Soc. Jpn. 78, 074713, https://doi.org/10.1143/JPSJ.78.074713 (2009). 31. Pruneda, J. M. Origin of half-semimetallicity induced at interfaces of C-BN heterostructures. Phys. Rev. B 81, 161409, https://doi.org/10.1103/PhysRevB.81.161409 (2010). 32. Du, A., Smith, S. C. & Lu, G. First-principle studies of electronic structure and C-doping effect in boron nitride nanoribbon. Chem. Phys. Lett. 447, 181–186, https://doi.org/10.1016/j.cplett.2007.09.038 (2007). 33. Kawai, S. et al. Atomically controlled substitutional boron-doping of graphene nanoribbons. Nat. Commun. 6, 8098, https://doi.org/10.1038/ncomms9098 (2015). 34. Wakabayashi, K. & Dutta, S. Nanoscale and edge effect on electronic properties of graphene. Solid State Commun. 152, 1420–1430, https://doi.org/10.1016/j.ssc.2012.04.025 (2012). 35. Ritter, C., Makler, S. S. & Latgé, A. Energy-gap modulations of graphene ribbons under external fields: A theoretical study. Phys. Rev. B 77, 195443, https://doi.org/10.1103/PhysRevB.77.195443 (2008). 36. Rosales, L. et al. Transport properties of antidot superlattices of graphene nanoribbons. Phys. Rev.
B 80, 073402, https://doi.org/10.1103/PhysRevB.80.073402 (2009). 37. Faria, D., Carrillo-Bastos, R., Sandler, N. & Latgé, A. Fano resonances in hexagonal zigzag graphene rings under external magnetic flux. J. Physics: Condens. Matter. 27, 175301 (2015). 38. Nakamura, J., Nitta, T. & Natori, A. Electronic and magnetic properties of BNC ribbons. Phys. Rev. B 72, 205429, https://doi.org/10.1103/PhysRevB.72.205429 (2005). 39. Giannozzi, P. et al. Quantum ESPRESSO: a modular and open-source software project for quantum simulations of materials. J. Physics: Condens. Matter. 21, 395502 (2009). 40. WanT code by A. Ferretti, B. Bonferroni, A. Calzolari, & M. Buongiorno Nardelli (http://www.wannier-transport.org). 41. Calzolari, A., Marzari, N., Souza, I. & Nardelli, M. B. Ab initio transport properties of nanostructures from maximally localized Wannier functions. Phys. Rev. B 69, 035108, https://doi.org/10.1103/PhysRevB.69.035108 (2004). 42. Slater, J. C. & Koster, G. F. Simplified LCAO method for the periodic potential problem. Phys. Rev. 94, 1498–1524, https://doi.org/10.1103/PhysRev.94.1498 (1954). 43. Agapito, L. A., Ferretti, A., Calzolari, A., Curtarolo, S. & Buongiorno Nardelli, M. Effective and accurate representation of extended Bloch states on finite Hilbert spaces. Phys. Rev. B 88, 165127, https://doi.org/10.1103/PhysRevB.88.165127 (2013). 44. Wegner, F. Inverse participation ratio in 2 + ε dimensions. Z. Physik B 209 (1980). 45. Kramer, B. & MacKinnon, A. Localization: theory and experiment. Reports on Prog. Phys. 56, 1469 (1993). 46. Ashhadi, M., Hadavi, M. & Sarri, Z. Electronic transport properties and first-principles study of graphene/h-BN and h-BN bilayers. Phys. E: Low-dimensional Syst. Nanostructures 87, 312–316, https://doi.org/10.1016/j.physe.2016.11.012 (2017). 47. Meng, F., Zhang, S., Lee, I.-H., Jun, S. & Ciobanu, C. V. Strain-tunable half-metallicity in hybrid graphene-hBN monolayer superlattices. Appl. Surf. Sci. 375, 179–185, https://doi.org/10.1016/j.apsusc.2016.03.085 (2016). 48. Park, C.-H. & Louie, S. G. Energy gaps and Stark effect in boron nitride nanoribbons. Nano Lett. 8, 2200–2203, https://doi.org/10.1021/nl080695i (2008). 49. Culchac, F., Capaz, R., Costa, A. & Latgé, A. Magnetic response of zigzag nanoribbons under electric fields. J. Physics: Condens. Matter. 26, 216002 (2014). 50. Rudberg, E., Salek, P. & Luo, Y. Nonlocal exchange interaction removes half-metallicity in graphene nanoribbons. Nano Lett. 7, 2211–2213, https://doi.org/10.1021/nl070593c (2007). 51. Nomura, T., Yamamoto, D. & Kurihara, S. Electric field effects in zigzag edged graphene nanoribbons. J. Physics: Conf. Ser. 200, 062015 (2010). 52. Novoselov, K. S., Mishchenko, A., Carvalho, A. & Castro Neto, A. H. 2D materials and van der Waals heterostructures. Science 353 (2016). 53. Ding, N. et al. Structures and electronic properties of vacancies at the interface of hybrid graphene/hexagonal boron nitride sheet. Comput. Mater. Sci. 117, 172–179, https://doi.org/10.1016/j.commatsci.2015.12.052 (2016). 54. Zeng, J., Chen, W., Cui, P., Zhang, D.-B. & Zhang, Z. Enhanced half-metallicity in orientationally misaligned graphene/hexagonal boron nitride lateral heterojunctions. Phys. Rev. B 94, 235425, https://doi.org/10.1103/PhysRevB.94.235425 (2016). 55. M. Dieb, T., Hou, Z. & Tsuda, K. Structure prediction of boron-doped graphene by machine learning. The J. Chem. Phys. 148, 241716, https://doi.org/10.1063/1.5018065 (2018).
56. Yu, S. S., Zheng, W. T. & Jiang, Q. Electronic properties of nitrogen-/boron-doped graphene nanoribbons with armchair edges. IEEE Transactions on Nanotechnol. 9, 78–81, https://doi.org/10.1109/TNANO.2009.2020797 (2010). 57. Miao, L. et al. Certain doping concentrations caused half-metallic graphene. J. Saudi Chem. Soc. 21, 111–117, https://doi.org/10.1016/j.jscs.2016.03.007 (2017). ## Acknowledgements This work has been partially supported by Brazilian Agencies CAPES and CNPq and Spanish MINECO under grant FIS2015-64654-P. A.L. thanks the financial support of FAPERJ under grant E-26/202.953/2016, and the INCT de Nanomateriais de Carbono. C.L. thanks the financial support of CNPq. L.C. gratefully acknowledges helpful discussions with M.J. Ruiz. ## Author information ### Author notes 1. Leonor Chico and Andrea Latgé contributed equally. ### Affiliations 1. Instituto de Física, Universidade Federal Fluminense, Av. Litorânea sn, 24210-340, Niterói-Rio de Janeiro, RJ, Brazil: Carlos Leon, Marcio Costa & Andrea Latgé 2. Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Científicas (CSIC), Madrid, 28049, Spain: Leonor Chico ### Contributions C.L. and M.C. processed the calculation and prepared the figures; A.L., C.L. and L.C. wrote the manuscript; all authors discussed and commented on the methods and results and contributed to the paper’s final version. ### Competing Interests The authors declare no competing interests. ### Corresponding author Correspondence to Andrea Latgé.
Hey! I'm David, the author of the Real-World Cryptography book. Previously I was the security lead for Diem (Libra) at Facebook, and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting. # Gröbner basis on numb3rs posted April 2014 It doesn't seem to be the only appearance of the Gröbner basis on the show: In the Season 4 opening episode 'Trust Metric' (2007) of the television crime drama NUMB3RS, math genius Charlie Eppes mentions that he used Gröbner bases in an attempt to derive an equation describing friendship. From Wolfram Alpha. # An awesome explanation of the Fourier Transform posted April 2014 I've run into this über cool explanation of the Fourier Transform thanks to mtodd's blog. Here's a bit from the introduction: What does the Fourier Transform do? Given a smoothie, it finds the recipe. How? Run the smoothie through filters to extract each ingredient. Why? Recipes are easier to analyze, compare, and modify than the smoothie itself. How do we get the smoothie back? Blend the ingredients. And cool examples of what can be done with the Fourier Transform: • If earthquake vibrations can be separated into "ingredients" (vibrations of different speeds & strengths), buildings can be designed to avoid interacting with the strongest ones. • If sound waves can be separated into ingredients (bass and treble frequencies), we can boost the parts we care about, and hide the ones we don't. The crackle of random noise can be removed. Maybe similar "sound recipes" can be compared (music recognition services compare recipes, not the raw audio clips). • If computer data can be represented with oscillating patterns, perhaps the least-important ones can be ignored. This "lossy compression" can drastically shrink file sizes (and why JPEG and MP3 files are much smaller than raw .bmp or .wav files). • If a radio wave is our signal, we can use filters to listen to a particular channel. In the smoothie world, imagine each person paid attention to a different ingredient: Adam looks for apples, Bob looks for bananas, and Charlie gets cauliflower (sorry bud). # Diffie-Hellman, ElGamal and RSA posted April 2014 I'm on holiday for a week (Easter, I think); anyway, I didn't know what to do, so I coded the Diffie-Hellman handshake, the ElGamal cryptosystem and the RSA cryptosystem in Python. You can check the code on GitHub here: github.com/mimoo/crypto_studies Check the tests.py file to see how the classes are used. Here's an extract:
    """Testing Diffie Hellman """
    # 1. BOB
    bob = DiffieHellman()
    # G and g are generated automatically
    print("G is a group mod %i and of order %i, and the generator g is %i" % (bob.G[0], bob.G[1], bob.g))
    # We generate a secret and a public key
    bob.generate_secret()
    bob.generate_public()
    # 2. ALICE
    # We already know G and g
    alice = DiffieHellman(bob.G, bob.g)
    # We generate the secret key and the public key
    alice.generate_secret()
    alice.generate_public()
    # 3. WE CREATE THE SHARED KEY
    bob.generate_sharedkey(alice.publickey)
    alice.generate_sharedkey(bob.publickey)
    # Bob and Alice now have the same _sharedkey and the same public (G, g)
As the README says, it might be oversimplified and not totally correct. I mostly did that to do something in Python and also to try to memorize how those systems work. I've also done a lot of Unity this weekend. And also a bit of wxPython, but I don't really like it.
I think I should focus on Qt and C++. # Whitebox Cryptography posted April 2014 These past few months I've been working on a C implementation of a whitebox using Chow et al.'s paper and the DES algorithm. It's not done yet, but the code is available on GitHub. I also did a C implementation of DES just to get a grip on it; it's available on GitHub as well. I just did a presentation of the research my team and I did, and I think it went pretty well. The slides are here. # How hard is it to find an internship? posted April 2014 I've been looking for a summer internship and I haven't really found anything so far. Although I've had some interviews with some start-ups from Silicon Valley (including TrueVault, which really seemed like a good fit for a cryptographer in progress like me :D). But I've been unlucky so far since they're pretty busy; it's demo-day time for those applying to Y Combinator there. Anyway, I still have 4 months of holidays this summer and I'm wondering what I'll do if I can't find anything in Mountain View (n_n I really want to go there). If you know someone, or are interested in a passionate coder and eager learner, you can take a look at my resume here and rush to contact me before someone else does :) Otherwise I'll spend more time coding personal projects and writing this summer (by the way, Korben, a famous and influential blogger in France, has written about me and my application 3pages.fr in a blog post. Huge amount of traffic in a few hours, 600 people signing up in a day. I envy his traffic.) # How good is flask? posted April 2014 I've used Django for my last project and I found the documentation unclear, and the list of things I had to do to code simple things and deploy was... a bit too much for a simple project. I've glanced at the Flask documentation and have found it über-clear. The syntax seems to be pretty straightforward as well. I'm really thinking about learning Flask for my next project and putting Django on hold. What do you guys think? There's also a talk on web2py at the current PyCon. I don't know if it's for me, but I really need something I can do quick prototypes with. Sometimes I wonder if I should go back to PHP and try the new Laravel that really looks super cool :) # NAT with iptables : super fast tutorial posted April 2014 So I know how to use iptables, I know what a NAT is, but I don't want to learn how to do it exactly. Misery... I have to learn how to do it because I have an exam that will probably ask me how to do it in a few days. So I've been looking for a super simple tutorial, a 1-minute tutorial, on how to set up a NAT configuration with iptables in 1 minute. Couldn't really find it, so here it is; if this is somewhat useful for someone, you're welcome. ## First Step For NAT to work, you have to allow forwarding on your server. Easy peasy:
    $ echo 1 > /proc/sys/net/ipv4/ip_forward
Also, before adding new iptables rules, be sure to check what rules you already have:
    $ iptables -L
You should allow some forwarding for it to work (if the policy defaults to DROP). But this is not a tutorial about iptables. ## Static I have a server with: • eth0 connected to the network • eth1 connected to internet Let's modify the PREROUTING part.
Traffic coming from the internet on our public address (@pub) and trying to reach our machine:
    $ iptables -t nat -A PREROUTING -d @pub -i eth1 -j DNAT --to-destination @priv
Let's modify the table nat and append a rule to the PREROUTING chain: something is trying to reach @pub? Let's take it on our internet-facing interface eth1 and jump to the DNAT (Destination NAT) target, which tells us to send the packet to @priv. Now let's modify the POSTROUTING part. Traffic coming from inside our network and trying to reach something, somewhere on the internet:
    $ iptables -t nat -A POSTROUTING -s @priv -o eth1 -j SNAT --to-source @pub
If the packet is coming from @priv, let's put it on our output interface eth1 and jump to the SNAT (Source NAT) target, which will modify the packet so it has the public address (@pub) as source. Here! You did it. One private IP address mapped to one public IP address. ## Dynamic Same kind of configuration, but now we have several private addresses and only one public address.
    $ iptables -t nat -A POSTROUTING -s @priv/mask -j MASQUERADE
We can make every packet coming from the subnetwork @priv get masqueraded.
    $ iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
Or we can just tell the whole network to get masqueraded. And this is it. No PREROUTING needed. Again, you're welcome ;) # So... The Heartbleed Challenge has been completed posted April 2014 A few hours after the start of the Heartbleed challenge (actually, just 3 hours after the start), Fedor Indutny seems to have cracked it. So now, chaos begins. If you own a certificate, you not only have to change it, but you also have to revoke it. I wonder how many will change, and how many will revoke. You can check that he indeed did it by doing this: Just to confirm it: put this into your /etc/hosts: "165.225.128.15 www.cloudflarechallenge.com" and visit "https://www.cloudflarechallenge.com/". Here's why it works: Putting that mapping in /etc/hosts lets your machine skip DNS lookup for that hostname, and just use his IP for that domain name. Then, your browser checks the received certificate against the authenticated TLS connection, and sees that all is well, allowing you to connect without a warning. Since the browser does not warn of a certificate mismatch, he must have a valid certificate for 'cloudflarechallenge.com'. QED. The Cloudflare team reviewing the attack:
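That DNS-skipping check can also be scripted. Here is a rough Python sketch of the same idea (the challenge server is long gone, so today this would simply fail to connect; it only illustrates the logic): connect to the IP from the quote above while asking, via SNI, for the challenge hostname, and let the TLS library verify that the presented certificate is valid for that name.

```python
import socket
import ssl

# IP from the quoted instructions and the hostname whose key was supposedly extracted
ip, hostname = "165.225.128.15", "www.cloudflarechallenge.com"

context = ssl.create_default_context()  # verifies the chain and the hostname

with socket.create_connection((ip, 443), timeout=10) as sock:
    # server_hostname sets SNI and the name the certificate is checked against
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("certificate subject:", cert["subject"])
```

If the handshake completes without a verification error, whoever runs that IP is serving a certificate valid for the challenge domain, which is exactly what the /etc/hosts trick demonstrates in the browser.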
# Past Theoretical Physics Seminars at BNL • ## Wednesday, October 31, 2018 2:30pm, Small Seminar Room Unveiling New Physics Through Angular Distributions at the LHC Rodolfo Capdevilla (Notre Dame) HET Seminar • ## Thursday, November 1, 2018 12:00pm, Room 2-160, Bldg. 510 DIS on "Nuclei" using holography RIKEN Lunch Seminar 4:00pm, CFNS Seminar Room 2-38 TBA Al Mueller (Columbia University) CFNS Seminar • ## Friday, November 2, 2018 2:00pm, CFNS Seminar Room 2-38 Diffractive Electron-Nucleus Scattering and Ancestry in Branching Random Walks Al Mueller (Columbia) Nuclear Theory / RIKEN seminar • ## Friday, November 9, 2018 2:00pm, CFNS Seminar Room 2-38 TBA Ajit Srivastava (Institute of Physics, Bhubaneswar) Nuclear Theory/RIKEN seminar • ## Wednesday, November 14, 2018 2:30pm, Small Seminar Room TBA Konstantinos Orginos (College of William and Mary) HET Seminar • ## Thursday, November 15, 2018 12:00pm, 2-160, Bldg. 510 Exclusive $\rho$ meson production in $eA$ collisions: collinear factorization and the CGC Renaud Boussarie (Brookhaven National Laboratory) We will focus on the theoretical description of exclusive ρ meson production in eA collisions, using a hybrid factorization scheme which involves Balitsky's shockwave description of the Color Glass Condensate in the t channel, and Distribution Amplitudes (DAs) in the s channel. We will first give a quick introduction to the shockwave framework and to collinear factorization up to twist 3 for DAs, then we will apply these frameworks to the production of a longitudinal meson at NLO accuracy, and to the production of a transverse meson at twist 3 accuracy. We will insist on the experimental applications, and on several theoretical questions raised by our results: the dilute BFKL limit at NLO for diffraction, and collinear factorization breaking at twist 3. • ## Friday, November 16, 2018 2:00pm, CFNS Seminar Room 2-38 N/A Dimitra Karabali (Lehman College CUNY) Nuclear Theory/RIKEN • ## Thursday, November 29, 2018 12:00pm, 2-160, Bldg. 510 TBA Mario Mitter (Brookhaven National Laboratory) • ## Friday, November 30, 2018 2:00pm, CFNS seminar room 2-38 TBA Juan Rojo (VU University) Nuclear Theory / RIKEN seminar • ## Wednesday, December 5, 2018 2:30pm, YITP Stony Brook TBA Jiji Fan (Syracuse) Joint YITP/HET Theory Seminar • ## Thursday, December 6, 2018 12:00pm, Room 2-160, Bldg. 510 Proton decay Jun-Sik Yoo (Stony Brook University) RIKEN Lunch Seminar 12:00pm, 2-160, Bldg. 510 On QCD and its Phase Diagram from a Functional RG Perspective Mario Mitter (BNL) • ## Wednesday, December 12, 2018 2:30pm, YITP Stony Brook TBA TBA Joint YITP/HET Theory Seminar • ## Thursday, January 10, 2019 12:00pm, 2-160, Bldg. 510 A novel background subtraction method for jet studies in heavy ion collisions Alba Soto Ontoso (BNL) • ## Friday, January 18, 2019 2:00pm, CFNS seminar room 2-38 TBA Nuclear Theory / RIKEN seminar • ## Thursday, January 24, 2019 12:00pm, 1-224, Bldg. 510 (different from usual room)
Xiaojun Yao (Duke University) In this talk, I will present a connection between two approaches of studying quarkonium dynamics inside quark-gluon plasma: the open quantum system formalism and the transport equation. I will discuss insights from the perspective of quantum information. I will show that under the weak coupling and Markovian approximations, the Lindblad equation turns to a Boltzmann transport equation after a Wigner transform is applied to the system density matrix. I will demonstrate how the separation of physical scales justifies the approximations, by using effective field theory of QCD. Finally, I will show some phenomenological results based on the derived transport equation. • ## Friday, January 25, 2019 2:00pm, CFNS Seminar Room 2-38 TBA Paolo Glorioso (Kadanoff Center for Theoretical Physics and Enrico Fermi Institute, University of Chicago) Nuclear Theory/RIKEN seminar
# The 16th International Workshop on Tau Lepton Physics (TAU2021) (Virtual Edition) September 27, 2021 to October 1, 2021 Indiana University ## Tau identification in CMS during LHC Run 2 Sep 29, 2021, 1:10 PM 20m Virtual (Indiana University) Oral contribution Tau2021 Abstracts ### Description The LHC Run 2 data-taking period was characterized by an increase in instantaneous luminosity and center-of-mass energy. Several techniques have been deployed in the CMS experiment to reconstruct and identify tau leptons in this environment. The DeepTau identification algorithm is used to discriminate hadronically decaying tau leptons from quark- and gluon-induced jets, electrons, and muons. Compared to previously used MVA identification algorithms, the use of deep-learning techniques brought a noticeable improvement in the tau identification and in the rejection of contaminating sources. Low transverse momentum topologies were addressed separately with a dedicated identification algorithm, while machine learning techniques were implemented to improve the identification of the tau hadronic decay channels. These algorithms have already been used for several published physics analyses in CMS. The algorithms are presented together with their measured performances.
## dnd 5e – To maximize a combo with True Strike, which is the most damaging spell with attack roll? I'm pretty sure that inflict wounds is your best bet. The 5e spell list is small enough that it's possible to scan the whole thing, and there's nothing that deals more damage under the specific constraints you listed (pre-cast true strike, attack roll, not using any daily powers other than the ninth-level spell slot). I've got some space left in this answer, so I'd like to address a related question. This is a question you haven't asked, but it's a question which some of the people reading this answer might be curious about: "I'm fighting something that has Legendary Resistance and will choose to succeed at any saving throws I offer. I have a buff spell pre-cast. What's the most damage I can deal, ideally without walking into melee range?" It turns out that most of the good answers don't take advantage of the true strike. For instance, the old standby meteor swarm deals half of 40d6 on even a successful save, so that's 70 damage plus whatever bonuses you can get from modifiers. The true strike spell grants advantage on the first attack per round, but a twentieth-level caster might be willing to use a stronger buff, such as greater invisibility. Used against a creature that can't see through it, this spell grants advantage on all attacks in a given round. With this buff, scorching ray is better than inflict wounds: it fires ten rays maximum and deals 2d6 damage per ray, for an average of 70 damage, or 58.8 damage after applying the 84% hit chance. ## macos – Keyboard Shortcut to maximize a window? ## complex – Is this a correct result about the maximum of a sum of moduli? I am trying to find the maximum of the expression $$S = |z-1| + |z+1| + |z+\sqrt{3}\,i|$$ knowing that $$|\sqrt{3}\,z + i| = 2$$. I tried
```
Maximize[ComplexExpand /@ {Abs[x + I y - 1] + Abs[x + I y + 1] + Abs[x + I y + Sqrt[3] I], Abs[Sqrt[3] x + Sqrt[3] y + I] == 2}, {x, y}]
```
and get {\[Infinity], {x -> Indeterminate, y -> Indeterminate}}. Is this a correct result? ## algorithm – How to maximize enclosed area and minimize perimeter on a grid with obstacles? Developing an RTS game, I have a tile-based terrain (grid) filled with obstacles. It looks like this (red shapes are obstacles): Now I need to plan an enclosed area on the terrain with some restrictions: • the tile marked with a "star" must be included (it is a seed) • the area needs to be at least 20 tiles (for this example) • the farthest area tile from the seed needs to be no more than 7 steps away (for this example) • the area needs to be traversable by moving in 4 directions • the area should be bounded by existing obstacles and by new fences • building new fences takes time and resources, so it is beneficial to use existing obstacles as much as possible and place the smallest number of fences • there are cases where the terrain could be free of any obstacles or, on the contrary, be a maze-like labyrinth • it is okay to fail (e.g. there aren't enough walkable tiles around the seed, or the area/fences ratio is too low) • processing is done on the CPU (if that matters) So, here are some "manual" attempts at outlining an area of the required size using the fewest fences: As you can see, there are 3 enclosed areas (green, blue, orange) with very similar perimeters, yet different areas. The best one in this case is the green one: it has the fewest fences added (just 10) and sufficient area (21).
## What is the algorithm to plan an area of a given size (~20) while minimizing the number of additional fences required? So far I have tried A* to get the area within reach (7 steps), clipped the last step (since it is unexplored and has no neighbor info) and clipped the obvious protruding buds, but I'm out of good ideas on how to improve from that: • the yellow tile is the seed • red tiles are walkable (saturation shows distance from the seed) • dark grey tiles are obstacles • light grey tiles are obstacles that also need fences • purple tiles got clipped by the existing incomplete algo • black tiles are unexplored • yellow dashes are required fences As you can see, it looks quite sub-optimal, but I'm at a loss as to how to proceed. ## dynamic programming – Select each item from a sequence of $$n$$ buckets to maximize value? You have a sequence of n buckets. Each bucket $$B_i$$ contains three items, where each item has some value and belongs to some category. Multiple items may belong to the same category. Your job is to select exactly one item from each bucket so that the total value of the selected items is maximized, but such that no two items selected from adjacent buckets belong to the same category. Design an algorithm to determine which item should be selected from each bucket to maximize the total value. If there is no solution, your algorithm should state this. ## Problem We are given 2 arrays `a` and `b`, both of length `n`. We build a third array `c` by rearranging the values in `b`. The goal is to find the optimal `c` that maximizes
```
result = (a[0] ^ c[0]) & (a[1] ^ c[1]) & ... & (a[n - 1] ^ c[n - 1])
```
where `^` is XOR and `&` is AND. Is it possible to do this efficiently? It's straightforward to iterate through all possible permutations of `b`, but this is infeasible for large `n`. ## More details • The order of the values in `a` is fixed. • The order of the values in `b` may be rearranged to form `c`. That is, starting with `b = [1, 2, 3]`, it may be that the maximum result is obtained when the values are rearranged to `c = [2, 1, 3]`. • `b` may be rearranged in-place if needed. • Since the optimal `c` is not necessarily unique, any optimal `c` may be returned. • Assume all values are 32-bit unsigned integers. • `1 <= n <= 10,000`. ## Test cases
```
Input:
a = [3, 4, 5]
b = [6, 7, 8]
Output:
c = [8, 7, 6] (result = 3)
```
```
Input:
a = [1, 11, 7, 4, 10, 11]
b = [6, 20, 8, 9, 10, 7]
Output:
c = [8, 6, 10, 9, 7, 20] (result = 9)
```
```
Input:
a = [0, 1, 2, 4, 8, 16]
b = [512, 256, 128, 64, 32, 16]
Output:
c = [16, 32, 64, 128, 256, 512] (result = 0)
```
## optimization – Reordering an array to maximize expression You are given 2 arrays $$a$$ and $$b$$ of size $$n$$. Your task is to reorder array $$b$$ such that the following expression is maximized. $$(a_1 \oplus b_1) \;\&\; (a_2 \oplus b_2) \;\&\; (a_3 \oplus b_3) \;\&\; \dots \;\&\; (a_n \oplus b_n)$$ Constraints: $$1 \le n \le 10^5$$ $$0 \le a_i, b_i \le 2^{30}$$ ## algorithms – Maximize payout of job scheduling Suppose you have a set of jobs $$1, 2, \dots, n$$ with corresponding payouts $$j_1, j_2, \dots, j_n$$ where each job has a certain cooldown period $$p_1, p_2, \dots, p_n$$. If you choose job $$i$$ you receive a payout of $$j_i$$ but must skip the next $$p_i$$ jobs before you can choose another job. You can only choose jobs in the order provided, but you can skip a job if the cooldown period is too large. Find a recursive relation to determine the maximum payout and prove its correctness.
I've been struggling to come up with a recursive solution, but here's what I have so far: So suppose the optimal payout up to job $$i$$ can be expressed as $$P(i)$$. Then if we don't choose $$i$$, $$P(i) = P(i - 1)$$. If we do choose $$i$$, we take payout $$j_i$$ and must skip the next $$p_i$$ jobs. Then is it the case that $$P(i) = j_i + P(i - p_i)$$ (job $$i$$'s payout + the maximum payout from $$p_i$$ jobs before)? If so, then we just take $$\max\{P(i-1),\; j_i + P(i - p_i)\}$$, but I'm not 100% sure this is correct. ## mathematical optimization – Maximize a function over complexes Assume that I have a bounded real-valued function $$h(a,b)$$ such that $$a \in \mathbb{C}$$ and $$b \in (0,1)$$. Can Mathematica maximize such a function over $$\mathbb{C} \times (0,1)$$? I tried some functions such as `Maximize` and `NMaximize`, but following the documentation, they only consider real parameters. ## maximum – Unable to formulate the correct formulation to maximize a function So, I want to express this: $$\max\bigl(\cos(x)\cos(y)\cos(z)\bigr), \quad x+y+z=\pi, \quad x,y,z \in (0,\pi)$$ Attempt:
```
In[81]:= NMaximize[{Cos[x] Cos[y] Cos[z], x + y + z == Pi && 0 < x <= Pi && 0 < y <= Pi && 0 < z <= Pi}, {x, y, z}]

Out[81]= {0.125, {x -> 1.0472, y -> 1.0472, z -> 1.0472}}
```
But how so? I tried manually like this:
```
In[82]:= N[Cos[30 Degree] Cos[30 Degree] Cos[30 Degree]]

Out[82]= 0.649519
```
The above calculation implies there are numbers greater than the maximum value, I mean $$0.649 > 0.125$$. I think I made a mistake in formulating the expression. Could you help me, please? If the formulation is correct, the result should be greater than or equal to `Out[82]`. Thanks in advance!
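One way to reconcile the two outputs: the manual check uses angles that do not satisfy the constraint, since

$$30^\circ + 30^\circ + 30^\circ = 90^\circ \neq 180^\circ,$$

while the reported maximum sits at the feasible point

$$x = y = z = \frac{\pi}{3} \approx 1.0472, \qquad \cos^3\!\Bigl(\frac{\pi}{3}\Bigr) = \Bigl(\frac{1}{2}\Bigr)^3 = 0.125,$$

which is exactly what `NMaximize` returned in `Out[81]`.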
# A Possible Decision Theory for Many Worlds Living post by Evan Ward · 2019-05-04T21:20:42.127Z · score: 0 (8 votes) · 9 comments ## Contents Background My Proposal If one CoA is twice as choice-worthy as another, then I argue that we should commit to doing that CoA with 2:1 odds or 66% of the time based on radioactive particle decay. Why? What this Theory Isn't Is it Incrementally Useful? Crucial Considerations Is RECMDT Safer if Applied Only with Particular Mindsets? Converting Radioactive Decay to Random Bit Strings Converting Random Bit Strings to Choices What Does Application Look Like? Can We Really Affect the Distribution of Other Worlds through Our Actions? What if Many Worlds Isn't True? Hey LessWrong! I may have gone in over my head as I am not well-versed in decision theory literature, but I tentatively believe I have a new decision theory for decision-making in a MWI universe. Let me know what you think! -------------------------------------------- Originally posted at: https://www.evanward.org/a-decision-theory-for-many-worlds-living/ ---------------------------------------------- Here, I describe a decision theory that I believe applies to Many-Worlds living and that combines principles of quantum mechanical randomness, evolutionary theory, and choice-worthiness. Until someone comes up with a better term for it, I will refer to it as Random Evolutionary Choice-worthy Many-worlds Decision Theory, or RECMDT. ### Background If the Many Worlds Interpretation (MWI) of quantum mechanics is true, does that have any ethical implications? Should we behave any differently in order to maximize ethical outcomes? This is an extremely important question that I'm not aware has been satisfactorily answered. If MWI is true and if we can affect the distribution of worlds through our actions, it means that our actions have super-exponentially more impact on ethically relevant phenomena. I take ethically relevant phenomena to be certain fundamental physics operations responsible for the suffering and well-being associated with the minds of conscious creatures. ### My Proposal We ought to make decisions probabilistically, based on sources of entropy which correspond with the splitting of worlds (e.g. particle decay) and the comparative choice-worthiness of different courses of action (CoA). By choice-worthiness, I mean a combination of the subjective degree of normative uncertainty and the expected utility of a CoA. I will go into determining choice-worthiness in another post. If one CoA is twice as choice-worthy as another, then I argue that we should commit to doing that CoA with 2:1 odds, or 66% of the time, based on radioactive particle decay. ### Why? Under a single unfolding of history, the traditional view is that we should choose whichever CoA available to us has the highest choice-worthiness. When presented with a binary decision, the thought is that we should choose the most choice-worthy option given the sum of evidence every single time. However, the fact that a decision is subjectively choice-worthy does not mean it is guaranteed to actually be the right decision—it could actually move us towards worse possible worlds. If we think we are living in a single unfolding of history but are actually living under MWI, then a significant subset of the 3↑↑↑3+ (but a finite number) of existing worlds end up converging on similar futures, which are by no means destined to be good.
However, if we are living in a reality of constantly splitting worlds, I assert that it is in everyone's best interest to increase the variance of outcomes in order to more quickly move towards either a utopia or extinction. This essentially increases the evolutionary selection pressure that child worlds experience, so that they either more quickly become devoid of conscious life or more quickly converge on worlds that are utopian. As a rough analogy, imagine having a planet covered with trillions of identical, simple microbes. You want them to evolve towards intelligent life that experiences much more well-being. You could leave these trillions of microbes alone and allow them to slowly incur gene edits so that some of their descendants drift towards more intelligent/evolved creatures. However, if you had the option, why not just increase the rate of the gene edits, by, say, UV exposure? This will surely push up the timeline for intelligence and well-being and allow a greater magnitude of well-being to take place. Each world under MWI is like a microbe, and we might as well increase the variance, and thus the evolutionary selection pressure, in order to help utopias happen as soon and as abundantly as possible. ### What this Theory Isn't A key component of this decision heuristic is not maximizing chaos and treating different CoAs equally, but choosing CoAs relative to their choice-worthiness. For example, in a utopian world with, somehow, 99% of the proper CoAs figured out, only in 1 out of 100 child worlds must a less choice-worthy course of action be taken. In other words, once we get confident in a particular CoA, we can take that action the majority of the time. After all, the goal isn't for 1 world to end up hyper-utopian, but to maximize utility over all worlds. If we wanted just a single world to end up hyper-utopian, then we would want to act in as many different ways as possible based on the results of true sources of entropy. It would be ideal to flip a (quantum) coin for any and every decision and go off its results like Two-Face. Again, the goal is to maximize utility over all worlds, so we only want to explore paths in proportion to the odds that we think a particular path is optimal. ### Is it Incrementally Useful? A key component of most useful decision theories is that they are useful insofar as they are followed. As long as MWI is true, each time RECMDT is deliberately adhered to, it is supposed to increase the variance of child worlds. Following this rule just once, depending on the likelihood of worlds becoming utopian relative to the probability of them being full of suffering, likely ensures many future utopias will exist. ### Crucial Considerations While RECMDT should increase the variance and selection pressure on any child worlds of worlds that implement it, we do not know enough about the likelihood and magnitude of suffering at an astronomical level to guarantee that the worlds that remain full of life will overwhelmingly tend to be net-positive in subjective well-being. It could be possible that worlds with net-suffering are very stable and do not tend to approach extinction. The merit of RECMDT may largely rest on the landscape of energy-efficiency of suffering as opposed to well-being. If suffering is very energy inefficient compared to well-being, then that is good evidence in favor of this theory. I will write more about the implications of the energy-efficiency of suffering soon. ### Is RECMDT Safer if Applied Only with Particular Mindsets?
One way to hedge against astronomically bad outcomes may be to only employ RECMDT when one fully understands and is committed to ensuring that survivability remains dependent on well-being. This works because following this decision theory essentially increases the variance of child worlds, like using birdshot instead of a slug. If one employs this heuristic only while having a firm belief and commitment to a strong heuristic to reduce the probability of net-suffering worlds, then it seems that your selves in child worlds will also have this belief and be prepared to act on it. You can also only employ RECMDT while you believe in your ability to take massive action on behalf of your belief that survivability should remain dependent on well-being. Whenever you feel unable to carry out this value, you should perhaps not act to increase the variance of child worlds, because you will not be prepared to deal with the worst-case scenarios in those child worlds. Evidence against applying RECMDT only when one holds certain values strongly, however, is all the Nth-order effects of our actions. For decisions that have extremely localized effects, where one's beliefs dominate the ultimate outcome, the plausible value of RECMDT over not applying it is rather small. For decisions with many Nth-order effects, such as deciding which job to take (which, for example, has many unpredictable effects on the economy), it seems that one cannot control for the majority of the effects of one's actions after an initial decision is made. The ultimate effects likely rest on features of our universe (e.g. the nature of human market economies in our local group of many-worlds) that one's particular belief has little influence over. In other words, for many decisions, one can affect the world once, but cannot control the Nth-order effects through acting a second time. Thus, while certain mindsets are useful to hold dearly regardless of whether one employs RECMDT, it seems that it is not generally useful for one to not employ RECMDT if they are not holding any particular mindsets. ### Converting Radioactive Decay to Random Bit Strings In order to implement this decision theory, agents must have access to a true source of entropy—pseudo-random number generators will NOT work. There are a variety of ways to implement this, such as by having an array of Geiger counters surrounding a radioactive isotope and looking at which groups of sensors get triggered first in order to yield a decision. However, I suspect one of the cheapest and most reliably random sensors would be built to implement the following algorithm from HotBits: Since the time of any given decay is random, then the interval between two consecutive decays is also random. What we do, then, is measure a pair of these intervals, and emit a zero or one bit based on the relative length of the two intervals. If we measure the same interval for the two decays, we discard the measurement and try again, to avoid the risk of inducing bias due to the resolution of our clock. John Walker from HotBits ### Converting Random Bit Strings to Choices We have a means above to generate truly random bit strings that should differ between child worlds. The next question is how we convert these bit strings into choices regarding which CoA we will execute. This depends on the number of CoAs we were considering and the specific ratios that we arrived at for comparative choice-worthiness.
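As a concrete sketch of the whole pipeline, the following Python illustrates both the interval-comparison rule quoted above and the rejection-sampling selection rule spelled out in the next paragraph (the function names and the toy numbers are mine, not from HotBits or any existing tool):

```python
import math
import secrets

def bits_from_decay_times(timestamps):
    """HotBits-style extraction: compare two consecutive inter-decay intervals,
    emit 0 or 1 depending on which is longer, and discard ties."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bits = []
    for t1, t2 in zip(intervals[::2], intervals[1::2]):
        if t1 != t2:               # equal intervals are discarded (clock-resolution bias)
            bits.append(1 if t1 > t2 else 0)
    return bits

def choose_action(odds, next_bit):
    """Pick an index into `odds` (e.g. [4, 1] for a 4:1 split) by rejection sampling:
    draw just enough bits to cover sum(odds) equally likely states, redraw whenever
    the value falls outside that range, then map the value onto the weighted choices."""
    total = sum(odds)                              # 5 states for the 4:1 example
    n_bits = max(1, math.ceil(math.log2(total)))   # 3 bits cover 0..7
    while True:
        draw = 0
        for _ in range(n_bits):
            draw = (draw << 1) | next_bit()
        if draw < total:                           # accept 0..total-1, reject the rest
            break
    cumulative = 0
    for index, weight in enumerate(odds):
        cumulative += weight
        if draw < cumulative:
            return index

# Demo with an ordinary entropy source standing in for the Geiger counter;
# in the intended setup, next_bit would pop fresh bits from bits_from_decay_times.
print(choose_action([4, 1], lambda: secrets.randbits(1)))
```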
We simply need to determine the least common multiple of the individual odds of each CoA, and acquire a bit string that is long enough that its representation as a binary number is higher than the least common multiple. From there, we can use a simple preconceived encoding scheme to have the base-2 number encoded in the bit string select a particular course of action. For example, in a scenario where one CoA is 4x as choice-worthy as another, we need a random number that represents the digits 0 to 4 equally. Drawing the number 4 can mean we must do the less choice-worthy CoA, and drawing 0-3 can mean we do the more choice-worthy CoA. We need at least 3 random bits in order to do this. Since 2^3 is 8 and there is no way to divide the states 5, 6, and 7 equally among the states 0, 1, 2, 3, and 4, we cannot use the bit string if its value is over 4, and must acquire another one until we get a value of at most 4. Once we have a bit string with a value below our least common multiple, we can use that value to select our course of action. This selection method prevents us from having to make any rounding errors, and it shouldn't take that many bits to implement, as any given bit string of the proper length always has over a 50% chance of working out. Other encoding schemes introduce rounding errors, which only detract from the uncertainty of our choice-worthiness calculations. ### What Does Application Look Like? I think everyone with solid choice-worthiness calibration ability should have access to truly random bits to choose courses of action from. Importantly, the time of the production of these random bits is relevant. A one-year-old random bit string captured from radioactivity is just as random as one captured 5 seconds ago, but employing the latter is key for ensuring the maximum number of recent sister universes make different decisions. Thus, people need access to recently created bit strings. These could come from a portable, personal Geiger counter, but they could also come from a centralized Geiger counter in, say, the middle of the United States. The location does not matter as much as the recency of bit production. Importantly, however, bit strings should never be reused, because whatever made you decide to reuse them is non-random information. ### Can We Really Affect the Distribution of Other Worlds through Our Actions? One may wonder: since everything is quantum mechanics, including our brains, can we really affect the distribution of child worlds through our intentions and decisions? This raises the classic problem of free will and our place in a deterministic universe. I think the simplest question to ask is: do our choices have an effect on ethically relevant phenomena? If the answer is no, then why should we care about decision theory in general? I think it's useful to think of the answer as yes. ### What if Many Worlds Isn't True? If MWI isn't true, then RECMDT optimizes for worlds that will not exist, at a potential cost to our own. This may seem to be incredibly dangerous and costly. However, as long as people make accurate choice-worthiness comparisons between different CoAs, I will argue that adhering to RECMDT is not that risky. After all, choice-worthiness is distinct from expected utility. It would be a waste to have people, in a binary choice of actions with one having 9x more expected utility than the other, choose the action with less expected utility even 10% of the time.
However, it seems best, even in a single unfolding of history, that where we are morally uncertain, we should actually cycle through actions based on our moral uncertainty via relative choice-worthiness. By always acting to maximize choice-worthiness, we risk not capturing any value at all through our actions. While I agree that we should maximize expected utility in one-shot and iterative scenarios alike, and be risk-neutral assuming we have adequately defined our utility function, I think that given the fundamental uncertainty at play in a normative uncertainty assessment, it is risk-neutral to probabilistically decide to implement different CoAs relative to their comparative choice-worthiness. Importantly, this is only the ideal method if the CoAs are mutually exclusive; if they are not, one might as well optimize for both moral frameworks. Hence, while I think RECMDT is true, I also think that even if MWI is proven false, a decision theory exists which combines randomness and relative choice-worthiness. Perhaps we can call this Random Choice-worthy Decision Theory, or RCDT. --------------------------------------------------------- Thanks for reading. Let me know what you think of this! comment by Donald Hobson (donald-hobson) · 2019-05-04T11:37:05.143Z · score: 7 (4 votes) I think that your reasoning here is substantially confused. FDT can handle reasoning about many versions of yourself, some of which might be duplicated, just fine. If your utility function is such that where . (and you don't intrinsically value looking at quantum randomness generators) then you won't make any decisions based on one. If you would prefer the universe to be in than a logical bet between and . (i.e. you get if the 3^^^3 th digit of is even, else ) Then flipping a quantum coin makes sense. I don't think that randomized behavior is best described as a new decision theory, as opposed to an existing decision theory with odd preferences. I don't think we actually should randomize. I also think that quantum randomness has a lot of power over reality. There is already a very wide spread of worlds. So your attempts to spread it wider won't help. comment by Alexei · 2019-05-05T01:14:20.206Z · score: 3 (2 votes) If you would prefer the universe to be in ... If I were to make Evan's argument, that's the point I'd try to make. My own intuition supporting Evan's line of argument comes from the investing world: it's much better to run a lot of uncorrelated positive-EV strategies than a few really good ones, since the former reduces your volatility and drawdown, even if at the expense of EV measured in USD.
I agree that QM already creates a wide spread of worlds, but I don't think that means it's safe to put all of one's eggs in one basket when one has doubt that their moral system is fundamentally wrong. comment by Donald Hobson (donald-hobson) · 2019-05-05T08:59:22.092Z · score: 1 (1 votes) · LW · GW If you think that there is 51% chance that A is the correct morality, and 49% chance that B is, with no more information available, which is best. Optimize A only. Flip a quantum coin, Optimize A in one universe, B in another. Optimize for a mixture of A and B within the same Universe. (Act like you had utility U=0.51A+0.49B) (I would do this one.) If A and B are local objects (eg paperclips, staples) then flipping a quantum coin makes sense if you have a concave utility per object in both of them. If your utility is Then if you are the only potential source of staples or paperclips in the entire quantum multiverse, then the quantum coin or classical mix approaches are equally good. (Assuming that the resource to paperclip conversion rate is uniform. ) However, the assumption that the multiverse contains no other paperclips is probably false. Such an AI will run simulations to see which is rarer in the multiverse, and then make only that. The talk about avoiding risk rather than expected utility maximization, and how your utility function is nonlinear, suggests this is a hackish attempt to avoid bad outcomes more strongly. While this isn't a bad attempt at decision theory, I wouldn't want to turn on an ASI that was programmed with it. You are getting into the mathematically well specified, novel failure modes. Keep up the good work. comment by Evan Ward · 2019-06-09T19:37:26.718Z · score: 1 (1 votes) · LW · GW I really appreciate this comment and my idea definitely might come down trying to avoid risk rather than maximize expected utility. However, I still think there is something net positive about diversification. I write a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and if you could spare the time, I would love your feedback. comment by Alexei · 2019-05-05T01:12:28.866Z · score: 3 (2 votes) · LW · GW I'm actually very glad you wrote this up, because I have had a similar thought for a while now. And my intuition is roughly similar to yours. I wouldn't use terms like "decision theory," though, since around here that has very specific mathematical connotations. And while I do think my intuition on this topic is probably incorrect, it's not yet completely clear to me how. comment by Evan Ward · 2019-06-09T19:32:33.446Z · score: 3 (2 votes) · LW · GW I am glad you appreciated this! I'm sorry I didn't respond sooner. I think you are write about the term "decision theory" and have opted for "decision procedure" in my new, refined version of the idea at https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ comment by Pattern · 2019-05-04T21:40:39.358Z · score: 1 (1 votes) · LW · GW Intuitively, one does not want to take actions a and b with probabilities of 2/3 and 1/3, whenever the EU of a is twice that of b. Rather, it might be useful to not act entirely as utility estimates based on the uncertainty present - but if you are absolutely certain U(a) = 2*U(b), then it seems obvious one should take action a, if they are mutually exclusive. (If there is a 1/2 chance that U(a) = 1, and U(b) = 2, and a 1/2 chance that U(a) = 1, and U(b) = 1/2, then EU(a) = 1, and EU(b) = 1.5.) 
comment by Evan Ward · 2019-06-09T19:37:26.718Z · score: 1 (1 vote) I really appreciate this comment, and my idea definitely might come down to trying to avoid risk rather than maximizing expected utility. However, I still think there is something net positive about diversification. I wrote a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and if you could spare the time, I would love your feedback. comment by Alexei · 2019-05-05T01:12:28.866Z · score: 3 (2 votes) I'm actually very glad you wrote this up, because I have had a similar thought for a while now. And my intuition is roughly similar to yours. I wouldn't use terms like "decision theory," though, since around here that has very specific mathematical connotations. And while I do think my intuition on this topic is probably incorrect, it's not yet completely clear to me how. comment by Evan Ward · 2019-06-09T19:32:33.446Z · score: 3 (2 votes) I am glad you appreciated this! I'm sorry I didn't respond sooner. I think you are right about the term "decision theory" and have opted for "decision procedure" in my new, refined version of the idea at https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ comment by Pattern · 2019-05-04T21:40:39.358Z · score: 1 (1 vote) Intuitively, one does not want to take actions a and b with probabilities of 2/3 and 1/3 whenever the EU of a is twice that of b. Rather, it might be useful to not act entirely on utility estimates, given the uncertainty present, but if you are absolutely certain U(a) = 2*U(b), then it seems obvious one should take action a, if they are mutually exclusive. (If there is a 1/2 chance that U(a) = 1 and U(b) = 2, and a 1/2 chance that U(a) = 1 and U(b) = 1/2, then EU(a) = 1 and EU(b) = 1.25.)
mcrl2::lps::untime_algorithm
Include file: #include "mcrl2/lps/untime.h"
class mcrl2::lps::untime_algorithm
Protected types
type mcrl2::lps::untime_algorithm::action_summand_type
typedef for process_type::action_summand_type
type mcrl2::lps::untime_algorithm::process_type
typedef for Specification::process_type
type mcrl2::lps::untime_algorithm::super
typedef for detail::lps_algorithm< Specification >
Protected attributes
bool mcrl2::lps::untime_algorithm::m_add_invariants
bool mcrl2::lps::untime_algorithm::m_apply_fm
data::set_identifier_generator mcrl2::lps::untime_algorithm::m_identifier_generator
Identifier generator, for generating fresh identifiers.
data::variable mcrl2::lps::untime_algorithm::m_last_action_time
Variable denoting the time at which the last action occurred.
const data::rewriter &mcrl2::lps::untime_algorithm::m_rewriter
data::data_expression mcrl2::lps::untime_algorithm::m_time_invariant
Data expression expressing the invariant for variables relating to time. For all parameters x relating to time, the expression 0<=x && x<=m_last_action_time, provided that in the initial vector the variable x gets the value 0, and in each summand the new value for x is either x, or the value that is assigned to last action time, which is the time tag of the action in that summand.
Protected member functions
data::data_expression calculate_time_invariant()
Data expression expressing the invariant for variables relating to time. For all parameters x relating to time, the expression 0<=x && x<=m_last_action_time is returned, provided that in the initial vector the variable x gets the value 0, and in each summand the new value for x is either x, or the value that is assigned to last action time, which is the time tag of the action in that summand.
void untime(action_summand_type &s)
Apply untime to an action summand.
Public member functions
void run()
untime_algorithm(Specification &spec, bool add_invariants, bool apply_fourier_motzkin, const data::rewriter &r)
# NMF - Subscript out of bounds I've calculated the cophenetic coefficients for the NMF of gene expression data, but it is giving an error when performing the clustering information step, as shown below:
    clustered_data <- NMF_estimate[[num.opt - 1]][[2]]
    Error in NMF_estimate[[num.opt - 1]] : subscript out of bounds
Here num.opt is
    num.opt <- coph[which(coph[,2]==max(coph[-1,2])),1]
which gives 2, 5 as outputs, and NMF_estimate is a list of 6 elements. Please suggest a solution as soon as possible.
# The globalizability of temporal discounting ## Abstract Economic inequality is associated with preferences for smaller, immediate gains over larger, delayed ones. Such temporal discounting may feed into rising global inequality, yet it is unclear whether it is a function of choice preferences or norms, or rather the absence of sufficient resources for immediate needs. It is also not clear whether these reflect true differences in choice patterns between income groups. We tested temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries (N = 13,629). Across a diverse sample, we found consistent, robust rates of choice anomalies. Lower-income groups were not significantly different, but economic inequality and broader financial circumstances were clearly correlated with population choice patterns. ## Main Effective financial choices over time are essential for securing financial well-being1,2, yet individuals often prefer immediate gains at the expense of future outcomes3,4. This tendency, known as temporal discounting5, is often treated as a behavioural anomaly measured by presenting a series of choices that vary values, timelines, framing (for example, gains or losses) and other trade-offs6. Responses can then be aggregated or indexed in ways that test different manifestations of the anomaly, whether strictly the trade-off of immediate versus future or the threshold at which individuals are willing to change their preference6. Anomalies identified under temporal discounting are routinely associated with lower wealth7,8,9,10,11,12,13,14, which is especially concerning given incongruent impacts on economic inequality brought about by the COVID-19 pandemic15. Inequality and low incomes have also routinely been associated with greater discounting of future outcomes13,16,17, so it is not surprising that global studies would find temporal discounting (to varying degrees) in populations around the world8. However, the prevailing interpretations (that is, that lower-income groups show more extreme discounting18,19) may result from narrow measurement approaches, such as only assessing immediate gains versus future gains. Another limitation of interpretations regarding discounting and economic classes involves the relative aspect of financial choices compared to income and wealth. Consider the patterns presented in Fig. 1a, which represent six months of spending patterns for 15,568 individuals in the United States who received stimulus payments as part of the 2020 CARES Act20. If the average amount spent 60 days prior to receiving the payment is used as a baseline, the lower-income group spent over 23 times more than baseline immediately after receipt, compared with around 10 times more than baseline for middle- and higher-income individuals. Apart from those days immediately following receipt, the relative spending patterns are almost identical for all three groups.
However, as indicated on the right, those with higher incomes spent more in raw values, indicating that behaviours are more extreme only relative to income, and in fact, high-income individuals spent the most on average after receiving stimulus payments. While relative values may differentiate the consequences of spending, the spending patterns were generally about the same. In this research, we aimed to test how broadly generalizable patterns of temporal discounting are around the world, incorporating social and economic factors as well as multiple measures of intertemporal choice. With broader testing of more anomalies, rather than being limited to indifference points (a threshold value for preferring now versus later), more robust conclusions can be drawn about choice patterns. In this vein, the most comprehensive related study found that lower-income countries had lower trust in systems and had the steepest rates of discounting (that is, the threshold for giving up an immediate gain for a later, larger one was much higher)8,21. As the indifference point was the primary indicator, these results are extremely important but do not necessarily mean that lower-income populations have distinct decision-making patterns. Three similar studies also tested temporal choice in large, multi-national populations, some including more than 50,000 participants from more than 50 countries18,22. These studies largely focused on smaller-sooner versus larger-later constructs of temporal discounting. Most concluded that lower income and wealth, among other micro and macro variables, were strong predictors of higher discounting (or lower patience). However, these studies did not incorporate a broad range of temporal choice constructs, as their focus was typically specific to time preferences. To avoid the limitations of relying only on indifference points and to assess the generalizability of temporal discounting on a near-global scale, we used a similar method to those studies but tested multiple intertemporal choice domains. Our approach allows the rates of certain anomalies to be considered along with specific value thresholds. Our aim was to test each of these patterns for generalizability while also factoring in multiple economic aspects across populations, primarily wealth, inequality, debt and inflation. We pre-registered (https://osf.io/jfvh4) six primary hypotheses, anticipating that temporal discounting would be observed in all countries to varying extents, though mean differences between countries would be less extreme than variability within countries, both overall and for specific anomalies. We also anticipated that economic inequality would be a strong predictor of national discounting averages. Inflation, which tends to be higher in lower-income countries23, is also associated with stronger preferences for immediate gains24,25. In our final hypothesis, we expected to confirm this pattern, indicating that such preferences may be associated with increased probability that future gains will be worth substantially less than their current value. We expected that this might be even more broadly impactful than income or wealth, though each interacts in some way and all should be considered. We limited our hypotheses to inflation versus extreme inflation: we expected that differences in preferences would emerge only at substantially larger inflation rates (over 10%) and hyperinflation (over 50%), and less so between regions with varied but less extreme differences (substantively below 10%). 
To test our hypotheses, we used four choice anomalies outlined in one of the most influential articles26 on intertemporal choice—absolute magnitude, gain–loss asymmetry, delay–speedup asymmetry and common difference (we refer to this as present bias, which is the more common term)—plus a fifth, subadditivity, to complete three inter-related time intervals27. In contrast to most discounting research, using a series of intertemporal choice anomalies28 identified in WEIRD labs allows us to test patterns that choice models often ignore. When multiple anomalies are tested alongside a simplified indifference measure (derived from the first set of choices), the prevalence of each anomaly provides a more robust determination of the generalizability of the construct than an indifference point alone. By addressing both the depth of the method used and concerns about the generalizability of behavioural research29, the richer perspective of our approach to measuring intertemporal decision-making in a global sample allows us to assess the presence and prevalence of anomalies in local contexts. It also allows us to test potential relationships with economic inequality to determine whether low-income groups are somehow more extreme decision-makers or whether the environment, beyond simply individual circumstances, is a more impactful factor across populations. Most research on temporal preferences uses indifference points6, which determine the threshold at which individuals will shift from immediate to delayed (and vice versa). Data from that approach are robust and converge on an inverse relationship between income/wealth and discounting rate. However, multiple binary choice comparisons are ideal for demonstrating multidimensional choice patterns, as in prospect theory, expected utility and other choice paradoxes or cognitive biases. They are also better suited for testing in multiple countries30,31 when multiple small adaptations to values in different currencies are necessary. Taking this into consideration, our method leveraged one of the most widely cited papers on decision-making26, which proposed four critical intertemporal choice anomalies. While studies of individual anomalies exist from various regions32,33,34, our approach aimed to produce a comprehensive multi-country assessment that simultaneously tested the generalizability of all four:
• Absolute magnitude: Increased preference for delayed gains when values become substantially larger, even when relative differences are constant (for example, prefer $500 now over $550 in 12 months and prefer $5,500 in 12 months over $5,000 now4,7).
• Gain–loss asymmetry: Gains are discounted more than losses, though differences (real and relative) are constant (for example, prefer to receive $500 now over $550 in 12 months, but also prefer to pay $500 now over paying $550 in 12 months).
• Delay–speedup asymmetry: Accepting an immediate, smaller gain if the delay is framed as added value, but preferring the larger, later amount if an immediate gain is framed as a reduction (for example, prefer to receive a gain of $500 rather than wait 12 months for an additional $50, and prefer to wait 12 months to receive $550 rather than pay $50 and receive the gain now).
• Present bias: Lower discounting over a given time interval when the start of the interval is shifted to the future (for example, prefer $500 now over $550 in 12 months and prefer $550 in two years over $500 in 12 months).
We also assess subadditivity27 effects, which adds an interval of immediate to 24 months, thereby allowing us to fully assess discounting over three time intervals (0–12, 12–24 and 0–24 months)35. Subadditivity is considered present if discounting is higher for the two 12-month intervals than for the 24-month interval. All data were collected independent of any other study or source, with a 30-item instrument developed specifically for assessing a base discounting level and then the five anomalies. To validate the metric, a three-country pilot study (Australia, Canada and the United States) was conducted to confirm that the method elicited variability in choice preferences. We did not assess what specific patterns of potential anomalies emerged to avoid biasing methods or decisions related to currency adaptations. For the full study, all participants began with choosing either approximately 10% of the national monthly household income average (either median or mean, depending on the local standard) immediately, or 110% of that value in 12 months. For US participants, this translated into US$500 immediately or US$550 in one year. Participants who chose the immediate option were shown the same option set, but the delayed value was now 120% (US$600). If they continued to prefer the immediate option, a final option offered 150% (US$750) as the delayed reward. If participants chose the delayed option initially, subsequent choices were 102% (US$510) and 101% (US$505). This progression was then inverted for losses, with the same values presented as payments, increasing for choosing delayed and decreasing for choosing immediate. Finally, the original gain set was repeated using 100% of the average monthly income to represent higher-magnitude choices (Supplementary Table 1). After the baseline scenarios, the anomaly scenarios incorporated the simplified indifference point (the largest value at which the participants chose the delayed option in the baseline items; see Supplementary Methods). Finally, the participants answered ten questions on financial circumstances, (simplified) risk preference, economic outlook and demographics. The participants could choose between the local official language (or languages) and English. By completion, 61 countries (representing approximately 76% of the world population) had participated (Supplementary Tables 2 and 3). We assessed temporal choice patterns in three ways. First, we used the three baseline scenarios to determine preferences for immediate or delayed gains (at two magnitudes) and losses (one). Second, we calculated the proportion of participants who exhibited the theoretically described anomaly for each anomaly scenario (Supplementary Table 4). We also calculated proportions of participants who exhibited inconsistent decisions even if not specifically aligned with one of the defined anomalies. Finally, we computed a discounting score based on responses to all choice items, ranging from 0 (always prefer delayed gains or earlier losses) to 19 (always prefer immediate gains or delayed losses). The score then represents the consistency of discounting behaviours, irrespective of the presence of other choice anomalies (see Supplementary Information for details on reliability and validity). To explore individual and country-level differences, we performed a series of multilevel linear and generalized mixed models that predicted standardized temporal discounting scores and anomalies, respectively. 
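To make the scoring concrete, the sketch below shows one way such a 0–19 discounting score could be computed and standardized from binary responses; the data frame `choices` and its column names are assumptions for illustration, not the study's actual data or code.

```r
# Illustrative sketch only (not the authors' code). Assumes `choices` holds one
# row per participant with 19 binary items item_01 ... item_19, each coded
# 1 = immediate gain (or delayed loss) preferred and 0 = otherwise.
library(dplyr)

scored <- choices %>%
  mutate(
    discount_raw = rowSums(across(starts_with("item_"))),  # raw 0-19 score
    discount_std = as.numeric(scale(discount_raw))         # mean 0, s.d. 1
  )

summary(scored$discount_raw)
```

Standardizing in this way matches the use, described below, of scores with a mean of 0 and a standard deviation of 1 for modelling and visualization.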
We ran a set of increasingly complex models, including inequality indicators, while controlling for individual debt and assets, age, education, employment, log per-capita gross domestic product (GDP) and inflation at the individual and country levels. Because the raw scores (0–19) have no standard to compare against, we primarily used standardized scores (with a mean of 0 and standard deviation of 1) for analysis and visualization. We detected several relevant nonlinear effects (debt, financial assets and inflation; Supplementary Tables 57), which we incorporated into our final models via spline modelling36. The models were estimated using both frequentist (Supplementary Tables 8 and 9 and Supplementary Figs. 1 and 2) and Bayesian techniques (Supplementary Tables 10 and 11), assessing the consistency of the results. Support for potential null effects was evaluated using a variety of Bayesian approaches (Supplementary Table 12). There are some limitations in our approach. The most noteworthy is that we are limited to hypothetical scenarios in which the participants had no motivation to give a particular answer, which might have impacted responses had true monetary awards been offered. Though Japanese participants received payment, it was not contingent on their choices, so the same limitation holds. While that might have been an ideal approach, substantial evidence indicates that such hypothetical scenarios do not differ substantively from actual choices, and many such approaches have been validated to correlate with real-world behaviours37,38,39,40,41,42. Naturally, this does not provide a perfect replacement for comprehensive real-world behavioural observations, but there is sufficient evidence to indicate that hypothetical approaches yield reasonably valid results. The second limitation is that our approach to minimizing bias through highly randomized and broad data collection yielded demographics that varied in representativeness. For indications of how this may have impacted the results, we included a complementary demographics table for comparison between the sample and true national characteristics (Supplementary Table 18). Finally, in terms of robustness in our methods, we opted for five anomalies tested in relatively short form rather than a smaller number of domains in long form. We did this in part because it would be a meaningful contribution to the field as well as because it was more important to demonstrate the existence of anomalies than to emphasize precise thresholds (for example, indifference points). Though it was impractical to do comprehensive, adaptive measures for our approach, we strongly encourage future studies involving both a broad number of choice domains and extensive measures within each to offer greater precision. ## Results For 13,629 participants from 61 countries, we find that temporal discounting is widely present in every location, indicating consistency and robustness (with some variability) across all five intertemporal choice anomalies (Fig. 2). Income, economic inequality, financial wealth and inflation demonstrated clear links to the shape and magnitude of intertemporal choice patterns. Better financial environments were consistently associated with lower rates of temporal discounting, whereas higher levels of inequality and inflation were associated with higher rates of discounting. Yet, the overall likelihood of exhibiting anomalies remained stable irrespective of most factors. 
Differences between locations are evident, though remarkable consistency of variability exists within countries. Such patterns demonstrate that temporal discounting and intertemporal choice anomalies are widely generalizable, and that differences between individuals are wider than differences between countries. Being low-income is not alone in relating to unstable decision-making; being in a more challenging environment is also highly influential. The scientific and policy implications from these findings challenge simple assumptions that low-income individuals are fundamentally extreme decision-makers. Instead, these data indicate that anyone facing a negative financial environment—even with a better income within that environment—is likely to make decisions that prioritize immediate clarity over future uncertainty. While we do not explicitly test risk in the temporal measures, all future prospects inherently hold a risk component, which is compounded by temporal distance and environmental instability (that is, the further the distance between two prospects and the less stable the future may be, the greater the inherent risk difference may be perceived between an immediate and a future prospect)43,44,45. Likewise, the data indicate that all individuals at all income levels in all regions are more likely than not to demonstrate one or more choice anomalies. ### Detailed analysis of temporal choice anomalies We collected 13,629 responses from 61 countries (median sample size of 209, Supplementary Tables 2 and 3). Though the absolute minimum sample size necessary was 30 per country, the sliding scale used for ensuring full power (see Selection of countries) started at 120, increasing to 360 for larger countries. Forty-six countries achieved the target sample size, and 56 had at least 120 (with at least four countries per continent at 120), thus providing a wide range of economic and cultural environments. Only two countries, where data collection was exceptionally challenging, had below 90 participants, but all locations were still substantially above the absolute minimum. As well as exceeding the minimum sample size, we chose to retain these participants in the analyses because they represent groups often not included in behavioural science46,47. In line with related research8, Fig. 3 shows how countries with lower incomes typically had greater temporal discounting levels in the baseline items (Supplementary Table 14). This was most evident in the tendency to prefer immediate gains, even as delayed prospects increased. This pattern was not found for the loss scenario. However, as noted, these items give a useful measure for the indifference level for each individual but do not give a robust indication of whether temporal choice anomalies are present. Between-countries random-effect meta-analyses estimated pooled and unpooled effects for aggregate scores and individual anomalies (Supplementary Figs. 38). Temporal discounting was present in all countries, with only modest variability in national means (aggregate mean, 10.3; prediction interval, (6.8, 13.8); from Japan (mean = 7.1, s.d. = 3.9) to Argentina (mean = 14.1, s.d. = 3.0); Fig. 4). Overall, 54% of participants showed at least one anomaly, with 33% presenting multiple and only 2% showing four (Supplementary Table 15). Anomalies were present in all locations, and aggregate values indicated the widespread presence of the four primary anomalies (from 13.8% for absolute magnitude to 40.1% for gain–loss asymmetry, Fig. 3). 
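As a rough illustration of the between-country random-effects meta-analysis of anomaly prevalence described above, the sketch below pools a per-country proportion with the meta package; the input table `country_tab` and its columns are assumptions for illustration and are not the study's files.

```r
# Sketch only. Assumes `country_tab` has one row per country with `events`
# (participants showing the anomaly), `n` (country sample size) and `country`
# (a label).
library(meta)

m <- metaprop(event = events, n = n, studlab = country,
              data = country_tab, sm = "PLOGIT")
summary(m)  # pooled prevalence, prediction interval, Q test and I^2
forest(m)   # per-country (unpooled) and pooled estimates
```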
Gain–loss rates were the most common anomaly in 80.3% (49) of the countries, with substantially higher rates observed than for the other anomalies. While only 10.7% of the sample engaged in subadditivity behaviour (range, 2.7% (Lebanon) to 20.7% (New Zealand)), the criteria were stricter for this anomaly. In all cases, significant Q-tests and I2 values over 70% suggested that effect size variation at the country level could not be accounted for by sampling variation alone. There were strong relationships between the individual and aggregate scores and some anomalies (that is, positive for absolute magnitude and negative for present bias and delay–speedup; Supplementary Fig. 9). Additionally, we found a negative link between GDP and temporal discount scores (β = −0.07; P = 0.001; 95% confidence interval, (−0.12, −0.03)), and positive effects for present bias (odds ratio (OR), 1.09; P = 0.003; 95% confidence interval, (1.03, 1.16)) and delay–speedup (OR = 0.95; P = 0.002; 95% confidence interval, (0.91, 0.99)). We found no evidence of an association for the remaining anomalies (0.95 < OR < 1.01, 0.027 < P < 0.688). We note that some ORs in the non-significant anomalies were similar to those that were significant, but given the sample size, we adhered to a strict cut-off for significance; future research may benefit from reanalysing these data within each country to explore whether more delineated patterns may exist between aggregate wealth and temporal choice anomalies. Despite between-country differences in mean scores and anomaly rates, there was substantial overlap between response distributions. Accordingly, results from multilevel models indicated that no more than 20% of the variance was ever explained by between-country differences for scores and was between 2% (absolute magnitude) and 8% (present bias) for anomalies. We thus find temporal discounting to be globally generalizable, robust and highly consistent (in line with expectations) (Supplementary Table 6 and Supplementary Fig. 10), where within-country differences between individuals are substantially greater than between-country differences. In other words, we find temporal discounting to be a globalizable (though not universal) construct. We also find that there is nothing WEIRD about intertemporal choice anomalies. #### Inequality We defined inequality at the level of the country and at the level of the individual. For countries, we used the most recently published Gini coefficients48. For individuals, we calculated the difference between their reported income and the adjusted net median local (country) income. At the country level, Gini had a positive relationship with temporal discounting scores (β = 0.09; P = 0.002; 95% confidence interval, (0.02, 0.06); Supplementary Table 8), yet no such pattern emerged for specific anomalies, as we observed no significant effect for the remaining cases (0.92 < OR < 1.01, 0.023 < P < 0.825, Supplementary Table 8). Individual income inequality did not predict temporal discounting scores (β = −0.01; P = 0.121; 95% confidence interval, (−0.03, 0.001)) or rates of anomalies (0.96 < OR < 1.04, 0.045 < P < 0.867, Supplementary Tables 8 and 9), except two small effects for present bias (OR = 1.07; P = 0.006; 95% confidence interval, (1.03, 1.13)) and absolute magnitude (OR = 0.92; P = 0.006; 95% confidence interval, (0.87, 0.98); Supplementary Table 9). As shown in Fig. 
5, these patterns are largely in line with expectations, indicating that, in aggregate, greater inequality is associated with increased rates of discounting. However, as indicated in Fig. 3, intertemporal choice anomalies overall are not unique to a specific income level, and worse financial circumstances may be associated with more consistent choice patterns (that is, fewer anomalies) due to sustained preference for sooner gains. Whether this aligns with arguments that scarcity leads individuals to focus on present challenges is worthy of further exploration49. It also reiterates that patterns in population (that is, country) aggregates are not the same as predicting individual choices50. #### Assets and debt We found consistently that greater willingness to delay larger gains tends to be associated with greater wealth (financial assets), except for the extremely wealthy. Temporal discounting scores generally decreased as wealth increased, except for the wealthiest individuals (expected degrees of freedom (e.d.f.) (see ‘Further details on modeling temporal discounting’ in the Supplementary Information), 2.88; P < 0.0001; Supplementary Table 8 and Supplementary Fig. 2). We also observed assets being associated with present bias (e.d.f. = 1.01, P < 0.0001) and with delay–speedup (e.d.f. = 2.78, P < .0001). We observed the reverse pattern for absolute magnitude (e.d.f. = 1.96, P = 0.0009). For gain–loss asymmetry (e.d.f. = 0.474, P = 0.144) and subadditivity (e.d.f. = 0.001, P = 0.472), we found no meaningful relationship between assets and the likelihood of observing either (Supplementary Table 9 and Supplementary Fig. 2). Higher levels of debt were associated with lower discount rates, particularly for people with lower to medium debt (e.d.f. = 2.91, P < 0.0001, Supplementary Fig. 1), though there was no significant effect observed regarding debt and the likelihood of engaging in any specific anomaly (0.95 < OR < 1.01, 0.035 < P < 0.944, Supplementary Table 9). #### Inflation We observed strong relationships between inflation rates and temporal discounting scores as well as all anomalies. There was a particularly strong effect of hyperinflation on temporal discounting (e.d.f. = 1.81, P < 0.0001, Supplementary Table 8 and Supplementary Fig. 1), with some levelling out at the extremes. Countries experiencing severe hyperinflation demonstrate extreme discounts only for gains but not for payments, which minimizes the effect on total scores. However, if limiting to only gains, the effect remains extreme, as indicated by the two gain scenarios in Fig. 3. We observed a reverse trend of higher inflation being associated with a lower likelihood of engaging in anomalies (Supplementary Table 9 and Supplementary Fig. 2)—namely, for present bias (e.d.f. = 1.63, P < 0.0001), absolute magnitude (e.d.f. = 1.92, P < 0.0001), delay–speedup (e.d.f. = 1.75, P < 0.0001) and subadditivity (e.d.f. = 1.37, P = 0.0019). The only positive (but weaker) effect in the case of anomalies was found for gain–loss asymmetry (e.d.f. = 1.675, P = 0.0051). ## Discussion For good reason, psychological theory has come under considerable recent criticism due to a number of failed replications of previously canonical constructs51. There is also wide support to consider that the absence of testing (or adapting methods to test) across populations limits the presumed generalizability of conclusions in the field29. 
To the extent that it is possible for any behavioural phenomenon, we find temporal discounting and common intertemporal choice anomalies to be globally generalizable. This is largely based on finding remarkable consistency and robustness in patterns of intertemporal choice across 61 countries, with substantially more variability within each country than between their means. We emphasize that while discounting may be stronger in worse financial circumstances, particularly those with poorer economic outlooks, it exists in all locations at measurable levels. We do not imply that temporal discounting and specific intertemporal choice anomalies are universal (that is, present in all individuals at all times). Instead, our findings provide extreme confidence that the constructs tested are robust on a global level. In our view, they also disrupt some notions that lower-income individuals are somehow inherently unstable decision-makers, as negative environments are widely influential. Under such circumstances, it is both rational and, as our data show, entirely typical to follow the choice preferences we present. We hope these findings will be considered in both science and policy, particularly in how governments and institutions can directly impact inequality. Consider excessive savings requirements to acquire mortgages52, less favourable lending terms for low earners53, harmful interest rates on financing necessities such as education, restricting access to foreign currency and focusing taxes on income without considering wealth, assets or capital54. Some of these are based on assumptions of how income and wealth are primary indicators of long-term decision-making, but in fact those policies alone can create economic barriers that impact upward economic mobility. On top of impeding mobility, these policies risk institutional resilience by offering better terms (and therefore taking on greater risk) to higher-wealth groups on the basis of reductionist presumptions about who has the lowest discounting rates, or ignoring how inflation may impact spending and saving behaviours among the most financially vulnerable. The scope of the work, particularly the diversity of these 13,629 participants across 61 countries, should encourage more tests of global generalizability of fundamental psychological theory that adapt to local standards and norms. Similarly, policymakers should consider the effects of economic inequality and inflation beyond incomes and growth and give greater consideration to how they directly impact individual choices for entire populations, affecting long-term well-being. ## Methods Ethical approval was given by the Institutional Review Board at Columbia University for both the pilot study and the full study. For the full study, all countries involved had to provide attestations of cultural and linguistic appropriateness for each version of the instrument. Because this was not possible for the pilot study, ethical approval was given only to check the quality, flow and appropriateness of the survey instrument, but not to analyse or report data. For all data, all participants provided informed consent at the start of the survey, and no forms of deception or hidden purpose existed, so all aspects were fully explained. The materials and methods followed our pre-registered plan (https://osf.io/jfvh4). Substantive deviations from the original plan are highlighted in each corresponding section, alongside the justification for the deviation. 
All details on the countries included, translation, testing and sampling are included in the Supplementary Information. ### Participants The final dataset was composed of 13,629 responses from 61 countries. The original sample size was 25,877, which was reduced almost by half after we performed pre-registered data exclusions. We removed 6,141 participants (23.7%) who did not pass our attention check (a choice between receiving 10% of monthly income now or paying the same amount in one year). We removed 69 participants for presenting non-sensical responses to open data text (for example, ‘helicopter’ as gender). We removed 13 participants claiming to be over 100 years old. We included additional filters to our original exclusion criteria. Regarding the length of time for responses, individuals faster than three times the absolute deviation below the median time or that took less than 120 seconds to respond were removed. This criterion allowed us to identify 5,870 inappropriate responses. We further removed responses from IP addresses identified as either ‘tests’ or ‘spam’ by the Qualtrics service (264 answers identified). Lastly, we did not consider individuals not completing over 90% of the survey (9,434 responses failed this criterion). Note that these values add up to more than 100% because participants could fail multiple criteria. For analyses including income, assets and debt, we conducted additional quality checks. We first removed 38 extreme income, debt or assets (values larger than 1 × 108) responses. Next, we removed extreme outliers larger than 100 times the median absolute deviation above the country median for income and 1,000 times larger than the median absolute deviation for national median assets. We further removed anyone that simultaneously claimed no income while also being employed full-time. These quality checks identified 54 problematic responses, which were removed from the data. The final sample and target size are presented in Supplementary Table 2. We provide descriptive information on the full and by-country samples in Supplementary Table 3 and the main variables in Supplementary Table 4. ### Instrument The instrument was designed by evaluating methods used in similar research, particularly those with a multi-country focus8,21,29 or that covered multiple dimensions of intertemporal choice13,28. On the basis of optimal response and participation in two recent studies6,49 of a similar nature, we implemented an approach that could incorporate these features while remaining brief. This design increased the likelihood of reliable and complete responses. To confirm the viability of our design, we assessed the overall variability of pilot study data from 360 participants from the United States, Australia and Canada. The responses showed that the items elicited reasonable answers, and the three sets of baseline measures yielded responses that would be expected for the three countries. Specifically, it was more popular to choose earlier gains over larger, later ones for the smaller magnitude and closer to 50–50 for the larger magnitude and the payment set. The subsequent choice anomalies also yielded variability within items, which showed some variability between countries. These results confirmed that using baseline choices to set trade-off values in anomaly items was appropriate and would capture relevant differences. We did not analyse these data in full per our Institutional Review Board approval, as we did not want a detailed analysis of subsequent bias decisions. 
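For concreteness, the response-time screen described under Participants above could be implemented roughly as follows; the data frame `raw` and its `duration` column (in seconds) are assumptions for illustration.

```r
# Sketch of the timing exclusions described above (assumed input). R's mad()
# applies a 1.4826 scaling constant by default; constant = 1 returns the raw
# median absolute deviation.
med  <- median(raw$duration, na.rm = TRUE)
madv <- mad(raw$duration, constant = 1, na.rm = TRUE)

keep <- raw$duration >= med - 3 * madv &  # not faster than 3 MADs below the median
  raw$duration >= 120                     # and at least 120 seconds in total
cleaned <- raw[keep, ]
```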
The pilot was completed in April 2021 with participants on the Prolific platform (compensated for participation, not for choices made). The final version of the instrument required the participants to respond to as few as 10 to as many as 13 anomaly items. All items were binary. During the first three anomaly sets, if a participant chose immediate and then delay (or vice versa), they proceeded to the next anomaly, so only two questions were required. If they decided on immediate–immediate or delay–delay, they would see the third set. After the anomalies, the participants answered ten questions about financial preferences, circumstances and outlook (most of these will be analysed in independent research). Finally, the participants provided age, race/ethnicity/immigration status, gender, education, employment and region of residence. Supplementary Table 1 presents all possible values for each set of items used in the final version of the instrument. All materials associated with the method are available in the pre-registration repository. ### Selection of countries By design, there was no systematic approach to country inclusion. Through a network of early career researchers worldwide, multiple invitations were sent and posted to collaborate. We explicitly emphasized including countries that are not typically included in behavioural research, and in almost every location, we had at least one local collaborator engaged. All contributors are named authors. Following data collection, 61 countries were fully included, using 40 languages. All countries also had an English version to include non-native speakers who were uncomfortable responding in the local language. Of the 61 countries, 11 were from Asia, 8 were from the Americas, 5 were from sub-Saharan Africa, 6 were from the Middle East and North Africa, 2 were from Oceania, and 29 were from Europe (19 from the European Union). Several additional countries were attempted but were unable to fulfil certain tasks or were removed for ethical concerns. ### Translation of survey items All instruments went through forward-and-back translation for all languages used. In each case, this required at least one native speaker involved in the process. All versions were also available in English, applying the local currencies and other aspects, such as race and education reporting standards. A third reviewer was brought in if discrepancies existed that could not be solved through simple discussion. Similar research methods were also used for wording. The relevant details where issues arose are included in the Supplementary Information. For cultural and ethical appropriateness, demographic measures varied heavily. For example, in some countries, tribal or religious categories are used as the standard. Other countries, such as the US, have federal guidelines for race and ethnicity, whereas France disallows measures for racial identity. The country-by-country details are posted on the pre-registration page associated with this project. All data were collected through Qualtrics survey links. For all countries, an initial convenience sampling of five to ten participants was required to ensure that comprehension, instrument flow and data capture were functional. Minor issues were corrected before proceeding to ‘open’ collection. Countries aimed to recruit approximately 30 participants before pausing to ensure functionality and that all questions were visible. 
We also checked that currency values had been appropriately set by inspecting responses’ variability (that is, if options were poorly selected, this would be visible in having all participants make the same choices across items). Minimal issues arose and are outlined in the Supplementary Information. For data circulation, all collaborators were allowed a small number of convenience participants. This decision limited bias while ensuring the readiness of measures and instruments, as multiple collaborators in each country used different networks, thereby reducing bias. Once assurances were in place, we implemented what we refer to as the Demić–Većkalov method, which two prior collaborators in recent studies developed. This method involves finding news articles online (on social media, popular forums, news websites, discussion threads, sports team supporter discussion groups/pages and so on) and posting in active discussions, encouraging anyone interested in the subject to participate. Circulation included direct contact with local organizations (non-governmental organizations and non-profits, often with thematic interests in financial literacy, microcredit and so on) to circulate with stakeholders and staff, email circulars, generic social media posts, informal snowballing and paid samples (in Japan only; no other participants were compensated). We note that this approach to data collection with a generally loose structure was intentional to avoid producing a common bias across countries. Similar to recent, successful multi-country trials30,55, this generates more heterogeneous backgrounds, though it still skews toward populations with direct internet access (that is, younger, higher education and somewhat higher income). As described in the pre-registration (https://osf.io/jfvh4), the minimum sample threshold to achieve a power of 0.95 for the models presented was 30 participants per country. However, to produce a more robust sample, we used three tiers for sample targets: population ≤ 10 million, 120 participants; 10 million ≤ population ≤ 100 million, 240 participants; and population > 100 million, 360 participants. Comprehensive details about methods, guidelines, measurement building and instruments are available in the Supplementary Information and on the pre-registration site. ### Procedure For the full study, all participants began by choosing from two gains of approximately 10% of the national household income average (either median or mean, depending on the local standard) immediately, or 110% of that value in 12 months. For US participants, this translated into US$500 immediately or US$550 in one year. Participants who chose the immediate option were shown the same option set, but the delayed value was now 120% (US$600). If they preferred the immediate prospect, a final option offered 150% (US$750) as the delayed reward. If participants chose the delayed option initially, subsequent choices were 102% (US$510) and 101% (US$505). This progression was then inverted for losses, with the identical values presented as payments, increasing for choosing delayed and decreasing for choosing immediately. Finally, the original gain set was repeated using 100% of the monthly income to represent higher-magnitude choices. Following the baseline scenarios, the anomaly scenarios incorporated the simplified indifference point, the largest value at which the participants chose the delayed option in the baseline items. 
For example, if an individual chose US$500 immediately over US$550 in 12 months, but US$600 in 12 months over US$500 immediately, then US$600 was the indifference value for subsequent scenarios. Those choices were then between US$500 in 12 months versus US$600 in 24 months (present bias), US$500 immediately versus US$700 in 24 months (subadditivity) and either being willing to wait 12 months for an additional US$100 in one set or being willing to lose US\$100 to receive a reward now rather than in 12 months (delay–speedup). For consistency, the values were initially derived from local average income (local currency) and then from constant proportions based on the initial values (Supplementary Information). This approach was chosen over directly converting fixed amounts in each country due to the substantial differences in currencies and income standards. Participants answered four additional questions related to the choice anomalies (gain–loss and magnitude effects were already collected in the first three sets). Due to contingencies in the instrument, all participants were then shown a present bias scenario (choice between 12 months and 24 months) followed by a subadditivity scenario (choice between immediate and 24 months). They were then randomly presented one of two delay–speedup scenarios (one framed as a bonus to wait, the other stated as a reduction to receive the gain earlier). After two similar but general choice and risk measures, they were presented with the second delay–speedup scenario. Due to the similarity in their wording, these scenarios were anticipated to have the lowest rates of anomalous choice. Finally, participants answered ten questions on financial circumstances, (simplified) risk preference, outlook and demographics. Participants could choose between the local official language (or languages) and English. By completion, 61 countries (representing approximately 76% of the world population) had participated. We assessed temporal choice patterns in three ways. First, we tested discounting patterns from three baseline scenarios to determine preference for immediate or delayed choices for gains (at two magnitudes) and losses (one). Second, we analysed the prevalence of all choice anomalies using three additional items. Finally, with this information, we computed a discounting score based on responses to all choice items and anomalies, which ranged from 0 (always prefer delayed gains or earlier losses) to 19 (always prefer immediate gains or delayed losses). ### Deviations from the pre-registered method There were minor deviations from the pre-registered method in terms of procedure. First, we did include an attention check, and the statement that we would not should have been removed; this was an error. Second, we had initially not planned to include students in the main analyses. Still, our recruitment processes turned out to be generally appropriate in terms of engaging students (16%) and non-students (84%) in the sample. We are therefore not concerned about skew and instead consider this a critical population. The impact of these deviations in the analyses is explained in the Supplementary Information. ### Statistical analysis Hierarchical generalized additive models36 were estimated using fast restricted maximum likelihood and penalized cubic splines56. We selected the shrinkage version of cubic splines to avoid overfitting and foster the selection of only the most relevant nonlinear smooths57. 
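A minimal sketch of the kind of hierarchical generalized additive model just described is shown below, assuming a participant-level data frame `survey` with the named columns; it is not the published model specification, whose exact terms differ.

```r
# Sketch only (assumed column names). Shrinkage cubic splines (bs = "cs") for
# continuous predictors, country as a random intercept (bs = "re", requires a
# factor), fitted by fast REML as described above.
library(mgcv)

fit <- bam(discount_std ~ s(assets, bs = "cs") + s(debt, bs = "cs") +
             s(inflation, bs = "cs") + log_gdp + gini +
             s(country, bs = "re"),
           data = survey, method = "fREML")
summary(fit)  # e.d.f. and approximate significance of the smooth terms
```

For the binary anomaly outcomes, the same structure could be fitted with family = binomial(), with gamm4 and brms providing the frequentist and Bayesian mixed-model counterparts reported in the paper.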
Robustness checks were performed for the selection of knots (Supplementary Fig. 10) and spline basis (Supplementary Table 7), leaving the results unchanged. In these models, we estimated all effects of continuous variables as smooths to identify potential nonlinear variables, plus country of residence as a random effect. Relevant nonlinear effects were incorporated into our main linear and generalized mixed models. These models were fitted using restricted maximum likelihood. Model convergence and assumptions were visually inspected. Bayesian versions of these models were estimated using four chains with 500 warmups and 1,000 iteration samples (4,000 total samples). We confirmed that all parameters presented $\hat{R}$ values equal to or below 1.01 and tail effective sample sizes above 1,000. We set the average proposal acceptance probability (delta) to 0.90 and the maximum tree depth to 15 (ref. 58) to avoid divergent transitions. We employed a set of weakly informative priors, including t distributions with three degrees of freedom and a standard deviation of 10 for the model intercept and random-effect standard deviations, and a normal distribution with a mean of zero and a standard deviation of three for the fixed-effect regression coefficients. For the standard deviation of the smooth parameter, we employed an exponential distribution with a rate parameter of one59. For smooth terms, we assessed whether each term was significant in the generalized additive model and presented substantial variance in the final models. We explored 95% confidence/credibility intervals for fixed effects58 and examined support for potential null effects. All reported tests were two-tailed. Our power estimation considered unstandardized fixed regression effects of |0.15| and |0.07| as ultra-low effect sizes (categorical and continuous variables). Thus, assuming a null effect of a similar or lower magnitude (|0.10|), we computed log Bayes factors to quantify evidence favouring null effects of this range60. To understand the sensitivity of our results, we explored support for narrower null effects (ranges of |0.05| and |0.01|). As Bayes factors depend on prior specification, we also estimated the percentage of posterior samples within these regions (which can be understood as a region of practical equivalence analysis61). Both statistics provide sensitive, complementary evidence of whether null effects were supported or not60,61. Unfortunately, such analyses could not be conducted for smooth effects, as no single parameter summarizes the relationship between the predictor and the dependent variable. The analyses were conducted in R v.4.0.2 (ref. 62) using the Microsoft R Open distribution63. The meta-analyses were conducted using the meta package64. Nonlinear effects were studied using the mgcv57 package, with the main models being estimated using the gamm4 (ref. 65) and brms58 packages for frequentist and Bayesian estimation, respectively. All graphs were created using the ggplot2 (ref. 66) (v.3.3.3) package. Data manipulations were conducted using the tidyverse67 family of packages (v.1.3.0). ### Deviation from the pre-registered plan We aimed to follow our pre-registration analyses as closely as possible. On certain occasions, we decided to broaden the scope of the analyses and present robustness checks for the reported results by employing alternative estimation and inference techniques. There was only one substantive deviation from our pre-registered analyses aside from the delay–speedup calculation.
In the original plan, we intended to explore the role of financial status. In our final analysis, we employed individual assets and debts to this end. Assets and debts were included as raw indicators instead of inequality measures because we did not find reliable national average assets or individual debt sources. One minor adaptation from our pre-registration involved our plan to test for nonlinear effects and use Bayesian estimation only as part of our exploratory analyses. However, as we identified several relevant nonlinear effects, we modified our workflow to accommodate those as follows: (1) we initially explored nonlinear effects using hierarchical generalized additive (mixed) models, (2) we included relevant nonlinear effects in our main pre-registered models and (3) we estimated Bayesian versions of these same models to test whether null effects could be supported in certain cases. ### Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. ## Data availability All data will be posted at https://osf.io/njd62 on September 1, 2022, while additional work is completed on an interactive tool with these data. Prior to this date, the data are available on request. Source data are provided with this paper. ## Code availability All code will be posted at https://osf.io/njd62 on September 1, 2022, while additional work is completed on an interactive tool with these data. Prior to this date, the code is available on request. ## References 1. Angrisani, M., Burke, J., Lusardi, A. & Mottola, G. The Stability and Predictive Power of Financial Literacy: Evidence from Longitudinal Data Working Paper No. 28125 (NBER, 2020); https://doi.org/10.3386/w28125 2. Haushofer, J. & Fehr, E. On the psychology of poverty. Science 344, 862–867 (2014). 3. Chapman, G. B. Temporal discounting and utility for health and money. J. Exp. Psychol. Learn. Mem. Cogn. 22, 771–791 (1996). 4. Green, L., Myerson, J. & Mcfadden, E. Rate of temporal discounting decreases with amount of reward. Mem. Cogn. 25, 715–723 (1997). 5. Critchfield, T. S. & Kollins, S. H. Temporal discounting: basic research and the analysis of socially important behavior. J. Appl. Behav. Anal. 34, 101–122 (2001). 6. Basile, A. G. & Toplak, M. E. Four converging measures of temporal discounting and their relationships with intelligence, executive functions, thinking dispositions, and behavioral outcomes. Front. Psychol. 6, 728 (2015). 7. Green, L., Myerson, J., Lichtman, D., Rosen, S. & Fry, A. Temporal discounting in choice between delayed rewards: the role of age and income. Psychol. Aging 11, 79–84 (1996). 8. Falk, A. et al. Global evidence on economic preferences. Q. J. Econ. 133, 1645–1692 (2018). 9. Adamkovič, M. & Martončik, M. A review of consequences of poverty on economic decision-making: a hypothesized model of a cognitive mechanism. Front. Psychol. 8, 1784 (2017). 10. Brown, J. R., Ivković, Z. & Weisbenner, S. Empirical determinants of intertemporal choice. J. Financ. Econ. 116, 473–486 (2015). 11. Shah, A. K., Mullainathan, S. & Shafir, E. Some consequences of having too little. Science 338, 682–685 (2012). 12. Sheehy-Skeffington, J. & Rea, J. How Poverty Affects People’s Decision-Making Processes (JRF, 2017); https://www.jrf.org.uk/report/how-poverty-affects-peoples-decision-making-processes 13. Epper, T. et al. Time discounting and wealth inequality. Am. Econ. Rev. 110, 1177–1205 (2020). 14. Lawrance, E. C. 
Poverty and the rate of time preference: evidence from panel data. J. Polit. Econ. 99, 54–77 (1991). 15. Deaton, A. COVID-19 and Global Income Inequality Working Paper No. 28392 (NBER, 2021); https://doi.org/10.3386/w28392 16. Ludwig, R. M., Flournoy, J. C. & Berkman, E. T. Inequality in personality and temporal discounting across socioeconomic status? Assessing the evidence. J. Res. Pers. 81, 79–87 (2019). 17. Ruggeri, K. & Folke, T. Unstandard deviation: the untapped value of positive deviance for reducing inequalities. Perspect. Psychol. Sci. https://doi.org/10.31234/osf.io/8wky5 (2021). 18. Burro, G., McDonald, R., Read, D. & Taj, U. Patience decreases with age for the poor but not for the rich: an international comparison. J. Econ. Behav. Organ. 193, 596–621 (2022). 19. Carvalho, L. S., Meier, S. & Wang, S. W. Poverty and economic decision-making: evidence from changes in financial resources at payday. Am. Econ. Rev. 106, 260–284 (2016). 20. Baker, S. R., Farrokhnia, R. A., Meyer, S., Pagel, M. & Yannelis, C. Income, Liquidity, and the Consumption Response to the 2020 Economic Stimulus Payments Working Paper No. 27097 (NBER, 2020); https://doi.org/10.3386/w27097 21. Falk, A. & Hermle, J. Relationship of gender differences in preferences to economic development and gender equality. Science 362, eaas9899 (2018). 22. Rieger, M. O., Wang, M. & Hens, T. Universal time preference. PLoS ONE 16, e0245692 (2021). 23. Ha, J., Ivanova, A., Montiel, P. & Pedroni, P. Inflation in Low-Income Countries Working Paper No. 8934 (World Bank, 2019); https://doi.org/10.1596/1813-9450-8934 24. Gong, L. Endogenous time preference, inflation, and capital accumulation. J. Econ. 87, 241–255 (2006). 25. De Mello, L. R. Jr. & Carneiro, F. G. Consumption behaviour and persistently high inflation: evidence from Latin America. Rev. Bras. Econ. 54, 227–246 (2000). 26. Loewenstein, G. & Prelec, D. Anomalies in intertemporal choice: evidence and an interpretation. Q. J. Econ. 107, 573–597 (1992). 27. Read, D. Is time-discounting hyperbolic or subadditive? J. Risk Uncertain. 23, 5–32 (2001). 28. Read, D. & Scholten, M. in Economic Psychology (ed. Ranyard, R.) 35–50 (John Wiley & Sons, 2017); https://doi.org/10.1002/9781118926352.ch3 29. Yarkoni, T. The generalizability crisis. Behav. Brain Sci. https://doi.org/10.1017/S0140525X20001685 (2020). 30. Ruggeri, K. et al. Replicating patterns of prospect theory for decision under risk. Nat. Hum. Behav. 4, 622–633 (2020). 31. Macchia, L., Plagnol, A. C. & Reimers, S. Does experience with high inflation affect intertemporal decision making? Sensitivity to inflation rates in Argentine and British delay discounting choices. J. Behav. Exp. Econ. 75, 76–83 (2018). 32. Clot, S. & Stanton, C. Y. Present bias predicts participation in payments for environmental services: evidence from a behavioral experiment in Uganda. Ecol. Econ. 108, 162–170 (2014). 33. Blumenstock, J. E., Callen, M. & Ghani, T. Mobile-Izing Savings with Automatic Contributions: Experimental Evidence on Present Bias and Default Effects in Afghanistan Discussion Paper No. DP11400 (CEPR, 2016); https://papers.ssrn.com/abstract=2814075 34. Ebrahimi Sarv Olia, M. H., Salimi, M. J., Bolo, G. & Ghouchifard, H. Sign effect, speedup–delay asymmetry and gender effect in the Tehran stock exchange. Int. J. Finance Manage. Account. 5, 41–53 (2020). 35. Scholten, M., Read, D. & Sanborn, A. Weighing outcomes by time or against time? Evaluation rules in intertemporal choice. Cogn. Sci. 38, 399–438 (2014). 36. Pedersen, E. 
J., Miller, D. L., Simpson, G. L. & Ross, N. Hierarchical generalized additive models in ecology: an introduction with mgcv. PeerJ 7, e6876 (2019). 37. Wiseman, D. B. & Levin, I. P. Comparing risky decision making under conditions of real and hypothetical consequences. Organ. Behav. Hum. Decis. Process. 66, 241–250 (1996). 38. Kühberger, A., Schulte-Mecklenbeck, M. & Perner, J. Framing decisions: hypothetical and real. Organ. Behav. Hum. Decis. Process. 89, 1162–1175 (2002). 39. Amlung, M. & MacKillop, J. Further evidence of close correspondence for alcohol demand decision making for hypothetical and incentivized rewards. Behav. Process. 113, 187–191 (2015). 40. Madden, G. J., Begotka, A. M., Raiff, B. R. & Kastern, L. L. Delay discounting of real and hypothetical rewards. Exp. Clin. Psychopharmacol. 11, 139–145 (2003). 41. Locey, M. L., Jones, B. A. & Rachlin, H. Real and hypothetical rewards. Judgm. Decis. Mak. 6, 552–564 (2011). 42. Brañas-Garza, P., Estepa-Mohedano, L., Jorrat, D., Orozco, V. & Rascón-Ramírez, E. To pay or not to pay: measuring risk preferences in lab and field. Judgm. Decis. Mak. 16, 1290–1313 (2021). 43. Halevy, Y. Strotz meets Allais: diminishing impatience and the certainty effect. Am. Econ. Rev. 98, 1145–1162 (2008). 44. Chakraborty, A., Halevy, Y. & Saito, K. The relation between behavior under risk and over time. Am. Econ. Rev. Insights 2, 1–16 (2020). 45. Epper, T. F. & Fehr-Duda, H. The missing link: unifying risk taking and time discounting. SSRN J. https://doi.org/10.2139/ssrn.2175461 (2012). 46. Urassa, M. et al. Cross-cultural research must prioritize equitable collaboration. Nat. Hum. Behav. 5, 668–671 (2021). 47. IJzerman, H. et al. Psychological science needs the entire globe. APS Obs. 34 (2021). 48. Gini Index (World Bank) (2021); https://data.worldbank.org/indicator/SI.POV.GINI 49. Shah, A. K., Mullainathan, S. & Shafir, E. An exercise in self-replication: replicating Shah, Mullainathan, and Shafir (2012). J. Econ. Psychol. 75, 102127 (2019). 50. Hensher, D. A. & Johnson, L. W. Applied Discrete-Choice Modelling (Routledge, 2018). 51. Camerer, C. F. et al. Evaluating replicability of laboratory experiments in economics. Science 351, 1433–1436 (2016). 52. Desmond, M. & Wilmers, N. Do the poor pay more for housing? Exploitation, profit, and risk in rental markets. Am. J. Sociol. 124, 1090–1124 (2019). 53. Cardaci, A. Inequality, household debt and financial instability: an agent-based perspective. J. Econ. Behav. Organ. 149, 434–458 (2018). 54. Causa, O., Hermansen, M., Ruiz, N., Klein, C. & Smidova, Z. Inequality in Denmark through the Looking Glass (OECD, 2016). 55. Ruggeri, K. et al. The general fault in our fault lines. Nat. Hum. Behav. https://doi.org/10.1038/s41562-021-01092-x (2021). 56. Wood, S. N. Generalized Additive Models: An Introduction with R 2nd edn (CRC, 2017). 57. Wood, S. mgcv: Mixed GAM computation vehicle with automatic smoothness estimation. R package, version 1.8-39 (2021). 58. Bürkner, P.-C. brms: an R package for Bayesian multilevel models using Stan. J. Stat. Softw. 80, 1–28 (2017). 59. Wesner, J. S. & Pomeranz, J. P. F. Choosing priors in Bayesian ecological models by simulating from the prior predictive distribution. Ecosphere 12, e03739 (2021). 60. Makowski, D., Ben-Shachar, M. S., Chen, S. H. A. & Lüdecke, D. Indices of effect existence and significance in the Bayesian framework. Front. Psychol. 10, 2767 (2019). 61. Kelter, R. How to choose between different Bayesian posterior indices for hypothesis testing in practice. 
Multivariate Behav. Res. https://doi.org/10.1080/00273171.2021.1967716 (2021). 62. R Core Team. R: a language and environment for statistical computing. https://www.R-project.org/ (R Foundation for Statistical Computing, 2021). 63. Microsoft R Open: The Enhanced R Distribution (MRAN) (2021); https://mran.microsoft.com/open 64. Balduzzi, S., Rücker, G. & Schwarzer, G. How to perform a meta-analysis with R: a practical tutorial. Evid. Based Ment. Health 22, 153–160 (2019). 65. Simon, W. & Scheipl, F. gamm4: Generalized additive mixed models using ‘mgcv’ and ‘lme4’. R package, version 0.2-6 (2021). 66. Wickham, H. ggplot2: Elegant graphics for data analysis (Springer, 2016). 67. Wickham, H. et al. Welcome to the tidyverse. J. Open Source Softw. 4, 1686 (2019). ## Acknowledgements The authors received no specific funding for this work. A small amount of discretionary funding provided by K.R.’s institution paid for the pilot study participants and for honoraria to organizations that assisted with data collection in several locations. These were provided by Columbia University Undergraduate Global Engagement and the Department of Health Policy and Management. Funds to support open-access publication were provided by the MRC-CBU at the University of Cambridge through a UKRI grant (UKRI-MRC grant no. MC_UU_00005/6). None of these funders had any role in or influence over design, data collection, analysis or interpretation. All collaborators contributed in a voluntary capacity. We thank the Columbia University Office for Undergraduate Global Engagement. We also thank X. Li and L. Njozela, as well as the Centre for Business Research in the Judge Business School at the University of Cambridge. ## Author information ### Contributions Conceptualization: K.R. Methodology: K.R., A.P., E.G.-G. and M.Vdo. Project coordination and administration: K.R. and Ta.Du. Supervision: K.R., J.K.B.L., Ma.Fr., P.K., Jo.Raz., C.E.-S., L.W. and Z.Z. Writing: K.R., E.G.-G., A.P., R.S.R. and Ir.Sob. Advisory: A.P. and R.S.R. Instrument adaptation, translation, circulation and recruitment: all authors. Analysis and visualization: E.G.-G. and K.R. ### Corresponding author Correspondence to Kai Ruggeri. ## Ethics declarations ### Competing interests The authors declare no competing interests. ## Peer review ### Peer review information Nature Human Behaviour thanks Matúš Adamkovič, David Hardisty and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. ## Supplementary information Supplementary Methods, Figs. 1–10 and Tables 1–18.
# MV-7008: Co-simulation with AcuSolve

In this tutorial, you will learn how to set up a model in MotionView that will be coupled with AcuSolve. With the addition of a co-simulation interface between MotionSolve and AcuSolve, you can now solve multi-physics problems that involve complex rigid body movement, coupled with fluid flow, that generates pressure forces on the rigid bodies. This capability lets you enhance the fidelity of your multi-body system and generate more realistic results. In this scenario, MotionSolve computes the displacements and rotations for the rigid bodies, while AcuSolve computes the forces and moments on those bodies. Both solvers exchange data with each other while stepping forward in time via the TCP socket protocol. This means that the two solvers can be located on different machines and on different platforms and still communicate with one another. For example, the CFD simulation can run on an HPC, while the MBS simulation can run locally on a laptop.

## Tutorial Objectives

You will use the MotionSolve-AcuSolve co-simulation interface to couple the rigid body dynamics of a check valve within a pipe with the flow field. The AcuSolve model has already been set up for you and is located at <installation_directory>\tutorials\mv_hv_hg\mbd_modeling\motionsolve\cosimulation\Check_Valve_Coupled.acs. Steps for running this model in AcuSolve are included as part of this tutorial.

## Software Requirements

To successfully complete this tutorial, the following must be installed:

| Machine | Software | Platform |
| --- | --- | --- |
| Machine A | MotionSolve/MotionView | Windows 64-bit or Linux 64-bit |
| Machine B | AcuSolve | Windows 64-bit or Linux 64-bit |

As the table above shows, co-simulation is supported on both Windows and Linux platforms (64-bit). Cross-platform co-simulation is also possible.

## Simulation Environment

The co-simulation interface between MotionSolve and AcuSolve consists of a “middleware” utility executable, acuMSI.exe. This executable is responsible for:
• Establishing a connection to both MotionSolve and AcuSolve.
• Communicating the displacements and rotations from MotionSolve to AcuSolve.
• Communicating the forces and moments from AcuSolve to MotionSolve.
• Managing runtime and licensing.
This is shown schematically below.

## Pipe with a check valve

A check valve is a mechanical device that permits fluid to flow in only one direction. This is controlled by a shutter body. Fluid flowing in one direction pushes the shutter body in one direction, thereby opening the valve. Fluid flowing in the opposite direction pushes the shutter body in the other direction, which causes the valve to shut and prevents flow reversal in the pipe. Check valves are found in pumps, chemical and power plants, dump lines, irrigation sprinklers and hydraulic jacks, for example. The geometry that is modeled in this tutorial is illustrated in the figure below. It consists of:
• A pipe with an inlet and outlet for the fluid flow.
• A check valve assembly that consists of a shutter plate attached to a stem.
• A stop mounted on a perforated plate downstream of the shutter body.

The fluid flow in the pipe is assumed to be axisymmetric. This allows you to model only a part of the check valve. In this example, a 30 degree section of the geometry is modeled, as shown by the blue part in the figure below. The advantage of doing this is a reduced simulation time while still capturing an accurate solution. The check valve assembly consists of a disc-like body mounted on a stem.
When fluid flows in the direction specified by the red arrows in the figure above, the fluid forces the shutter body to translate in the same direction as the fluid. The motion of the shutter body is also affected by a spring damper attached between the shutter body and the perforated plate. Finally, 3D rigid body contact is modeled between the shutter body and the stop to arrest the motion of the shutter body in the direction of the flow. For the MBS model, only 1/12 of the shutter body and the perforated plate are modeled.

At the start of the simulation, the flow field is stationary. A pressure specified at the inlet drives the flow, which varies over time as a piecewise linear function. This is illustrated in the figure below. As this pressure rises, the flow accelerates, which in turn pushes the shutter body open and allows flow through the pipe. The dynamics of this kind of model can be termed “tightly” coupled between the two solvers. This means that the motion of the rigid bodies affects the fluid flow field, which in turn affects the rigid body motion in a cyclical fashion.

The rest of this tutorial assumes that this model has been correctly set up in AcuSolve. Note that the model is designed to translate the shutter body until it collides with the perforated plate. The MotionView model has been designed with a contact between these two bodies that causes the shutter body to rebound appropriately. To allow the rigid bodies to come into contact without the AcuSolve finite element mesh fully collapsing, the perforated plate in the fluid model has been offset by 0.002 m in the positive X direction. This allows the MotionView model to react as specified by the contact entity while keeping the AcuSolve mesh from fully collapsing.

## Load the Model in MotionView

1. From the Start menu, select Altair <version> MotionView <version>.
2. Open the model Valve_model.mdl from <altair>\<version>\tutorials\mv_hv_hg\mbd_modeling\motionsolve\cosimulation.

This model is prepared to run in MotionSolve but requires modifications to run in co-simulation with AcuSolve. These steps are outlined below. Once the model is loaded into MotionView, the graphical window displays the shutter valve, perforated plate, joint and spring entities, as well as a graphical representation of the spring damper, as shown in the figure below.

The MotionSolve model consists of the following components:

| Component Name | Component Type | Description |
| --- | --- | --- |
| Ground Body | Rigid Body | Ground body. |
| Shutter Body | Rigid Body | 30 degree section of the shutter body. |
| Perforated Body | Rigid Body | 30 degree section of the perforated plate. |
| Contact | 3D Rigid-Rigid Contact Force | 3D rigid-rigid contact force between the Shutter Body and the Perforated Body. |
| Solver Units | Data Set | The units for this model (Newton, Meter, Kilogram and Second). |
| Gravity | Data Set | Gravity specified for this model. The gravity is turned on and acts in the negative Y direction. |
| Shutter Body Graphic | Graphic | The graphic that represents the shutter body. This graphic is used both for the co-simulation and for the contact force calculations. |
| Perforated Plate Graphic | Graphic | The graphic that represents the perforated plate body. This graphic is used both for the co-simulation and for the contact force calculations. |
| Spring | Graphic | The graphic that represents the spring modeled between the shutter body and the perforated plate body. This is only for visualization and does not affect the co-simulation results. |
| Fixed | Fixed Joint | This fixed joint clamps the perforated plate body to the ground. |
| Translation | Translational Joint | This translational joint allows motion of the shutter body along the X axis. |
| Spring | Spring Damper | This is a simple spring damper mounted between the shutter body and the perforated plate body. |
| ContactOutput | Output | An output signal that measures the contact force. |
| Displacement | Output | An output signal that measures the displacement between the shutter body and the ground. |
| Velocity | Output | An output signal that measures the velocity of the shutter body with respect to the ground. |

## Specify the “Wet” Body That Will Interact with AcuSolve

To couple with AcuSolve, you need to specify one or more "wet" bodies. A "wet" body is a body in the MotionSolve model which interacts with the fluid flow and thus has forces and moments acting on it. Such a body can translate or rotate due to the actuating fluid force/moment as computed by AcuSolve, as well as due to any actuating forces/moments in the MotionSolve model. In this example, we will define a single "wet" body – the shutter body that translates along the X axis due to fluid impinging on it. To specify a body as "wet" in MotionView, you have to make use of a system definition, which is described below.

1. Add the system definition to your model by locating the Model system in your Project Browser and selecting it. This changes the panel selection at the bottom.
2. From the Import/Export tab, select Import and click the File open icon.
3. The system definition that is used for this co-simulation is located at <installation_directory>\utility\mbd\fluid_force\sys_fluid_force.mdl. Click Import to import the file. The Import Definition dialog is displayed.
4. Leave the labels as-is and click OK. A new system called System Wet Body is created in your model.
5. Specify the "wet" body by clicking the newly created System Wet Body and clicking the Attachments tab as shown in the figure below.
6. Click the Body collector and resolve it to Shutter Body.

If you examine the contents of this system under the project browser on the left, you will see that the following new components have been created:

| Component Name | Component Type | Description |
| --- | --- | --- |
| Shutter Body - AcuSolveForce | Action Only, TransRotational Force | The force and moment calculated by AcuSolve is applied to the “wet” body through this force component. |
| Plant Input | Control Plant Input | AcuSolve deposits the forces and torques into this modeling element. |
| FromAS_FX, FromAS_FY, FromAS_FZ, FromAS_TX, FromAS_TY, FromAS_TZ | Solver Variables | These variables hold the force (X, Y and Z) and moment (X, Y and Z) values from AcuSolve. |
| Define wet_body attribute | Template | This template adds the attribute is_wet_body to the wet body that is chosen for this system. |
| Define co-simulation with AcuSolve | Data Set | This template adds the attribute acusolve_cosim to the model to instruct MotionSolve to co-simulate with AcuSolve. |

At this point, you have set up the model in MotionView to interact with AcuSolve.

## Run the Model without Co-simulating with AcuSolve

To make sure that the MotionView model is set up correctly, run the model in MotionSolve and make sure there are no warning/error messages from MotionSolve.

1. To deactivate the template Define co-simulation with AcuSolve shown in the figure below, right-click on it in the browser and select Deactivate. By doing this, you are deactivating the flag which tells MotionSolve that this model is intended for co-simulation with AcuSolve.
Thus, MotionSolve simulates this model as a stand-alone model without co-simulating with AcuSolve. You may run this model using the Run panel in MotionView and ensure that there are no error or warning messages reported. This is recommended to ensure that the model works properly before attempting a co-simulation. If you load the animation H3D generated from running this model, you will see that there is no motion in any of the parts. This is because all of the actuation for this model comes from AcuSolve, which was disabled for this simulation.
2. After you have verified the model, re-activate the template Define co-simulation with AcuSolve to perform a co-simulation, as shown in the figure below.
3. To activate the template, right-click on it in the browser and select Activate.
4. To export the model to your working directory, click the Export to Solver button and export the solver deck to your working directory. You may change the default name of the model.

## Verify the Model between MotionSolve and AcuSolve

To successfully run the co-simulation, the model created in MotionView and the model created in AcuSolve must be consistent. The following criteria need to be met in order for the two models to be consistent:

1. The names of the “wet” body/bodies need to match between MotionSolve and AcuSolve. The names of the “wet” body/bodies are specified in the *.inp file on the AcuSolve side. Note: The names are case-sensitive and must match exactly (see the text in red below).

MotionSolve (.xml):
<!-- MODEL.b_Part1 -->
<Body_Rigid full_label = "Model-Shutter Body"

AcuSolve (.inp):
EXTERNAL_CODE_SURFACE( "Valve wall" ) {
rigid_body_name = "Model-Shutter Body"

2. The print_interval for the MotionSolve model needs to match the step size for the AcuSolve model. For this tutorial, it is set to 0.002 s.
3. You must set the MotionSolve step size, h_max, to match the print_interval (0.002 s in this case).
4. Also, verify that the end times for both models are set to the same values. For this tutorial, the end times for both the AcuSolve and MotionSolve models are set to 0.35 s.

See the Run Panel in MotionView to set the print_interval, step size (h_max), and end times. Note that the units in the MotionSolve and AcuSolve models do not need to match to run the co-simulation; however, the units must match to overlay the results animations in HyperView. The units in MotionView are set via the Units form shown below. All values in the MotionSolve model are set with respect to these units settings.

## Run the MotionSolve and Middleware Executables for Co-simulation from the Compute Console

1. From the Start menu, select All Programs > Altair <version> > Compute Console and select MotionSolve as the solver. Locate the model you just exported by clicking on the file open icon.
2. Click the ellipsis button next to the Options field to open the Available Options dialog.
3. Activate the -as_cosim option. When this flag is enabled, it tells the Run Manager to do the following:
   1. Invoke the MotionSolve executable and run the model that is specified.
   2. Invoke the middleware acuMSI, which enables communication between MotionSolve and AcuSolve.
   When you activate this option, the following dialog is displayed and you are prompted for additional options. You may specify the following options here:

| acuMSI Option | Description |
| --- | --- |
| -aport <integer> | Specifies the communication port number for communication between AcuSolve and acuMSI. The default is 48000. Note: If you need to change the default port for communication between AcuSolve and acuMSI, in addition to changing this argument, you also have to specify the changed port number in the AcuSolve input file. |
| -mport <integer> | Specifies the communication port number for communication between MotionSolve and acuMSI. The default is 94043. Note: If you need to change the default port for communication between MotionSolve and acuMSI, in addition to changing this argument, you also have to specify the changed port number in the environment variable MS_AS_PORT. MotionSolve checks for this environment variable at the start of the simulation and changes its listening port accordingly. |
| -mi <integer> | Specifies the maximum number of iterations per time step between the two solvers. The default is 0. |
| -v <integer> | Specifies the verbosity level of the output file from acuMSI. The default is set to 0 (verbosity OFF). |

4. If you retain the default options, click None.
5. Click Apply Options and click Close.
6. You are now set up to start the co-simulation on the MotionSolve side. Click Run. This launches MotionSolve as well as the acuMSI executable. The MotionSolve run is paused at the first time step – it is now in waiting mode and the co-simulation will start as soon as AcuSolve is run.

## Run the AcuSolve Executable for Co-simulation

1. To run the co-simulation model in AcuSolve, first copy the model file Check_Valve_Coupled.acs from <installation_directory>\tutorials\mv_hv_hg\mbd_modeling\motionsolve\cosimulation to your working directory.
2. From the Start menu, select All Programs > Altair <version> > AcuSolve Job Launcher. In the window that is displayed, change the field Problem name as specified by the AcuSolve .inp file. Make sure your Problem directory is set to your current working directory. For this model, the default values are used. AcuSolve runs using a single processor, and AcuConsole generates the input files and launches AcuSolve.
3. Press Launch to launch the solution process. As the solution starts, an AcuTail window opens. After executing AcuPrep, AcuSolve stops at the acuRun path step. It is waiting for the execution of the MotionSolve process.

Soon, AcuSolve and MotionSolve should begin to communicate with one another. You should be able to see relevant time stepping information in both solver windows. For example, you should see something like the following in the MotionSolve window at the beginning of the co-simulation:

INFO: [AS-COSIM] Connected to AcuMsi on port 94043
…
Time=2.000E-06; Order=1; H=2.000E-06 [Max Phi=1.314E-16]
Time=3.600E-02; Order=2; H=2.000E-03 [Max Phi=1.653E-08]
…

The co-simulation should take roughly 15 minutes to complete on a laptop (Intel i7, 2.8 GHz). Note that there is no order dependency on launching the co-simulation – either MotionSolve or AcuSolve can be launched first.

## Post-process the Results from the Co-simulation

HyperView and HyperGraph can be used to post-process the co-simulation results within the HyperWorks Desktop environment. To launch HyperView (HyperGraph), from the Start menu, select Altair > <version number> HyperView (HyperGraph). The animation H3D generated by the MotionSolve part of the co-simulation contains only the results from MotionSolve. Similarly, the result files from AcuSolve only contain the results for the AcuSolve model. To animate the results from the co-simulation, follow these steps:

1. Load the animation H3D generated by MotionSolve in HyperView.
   1. Go to the Load Model panel.
   2. Click the file open button next to Load model and navigate to the results directory (the same directory where the .xml file is located).
   3. Select the .h3d file and click Open.
   4. Click Apply.
2. HyperView loads the MotionSolve results file into the graphical window at the first time step. From this point, you can animate the transient results using Start/Stop on the Animation Slider.
3. To load the AcuSolve results, they must first be converted to the .h3d format. This can be accomplished by using the AcuSolve utility, AcuTrans. AcuTrans is available from the Start menu: select Altair <version> AcuSolve > AcuSolve Cmd Prompt.
4. Navigate to the results directory and execute the following command to convert the AcuSolve database into the .h3d format:

acuTrans -out -to h3d -h3dopt single -ts A

This creates a single .h3d file containing all of the time steps available for the simulation.
5. In HyperView, overlay the newly created H3D over the MotionSolve result H3D. This is accomplished by repeating Step 1 described above and activating Overlay when selecting the AcuSolve result H3D. Once loaded, the graphical window contains both results and can be animated as before. To visualize the information contained within the AcuSolve results, a Contour plot may be used. Click on the Contour button to display the panel.
6. Set the options as shown in the figure below and click Apply. This creates a contour plot of the velocity magnitude overlaid with the results from MotionSolve in one window.

### Plotting the MotionSolve Results in HyperGraph

You can also interpret the results with a two-dimensional plot using HyperGraph. HyperWorks Desktop can be used in a multi-window layout, allowing both HyperView and HyperGraph to be open at the same time.

1. First, open HyperView following the steps described in the previous section.
2. From the toolbar, click the Page Window Layout button and split the page into two vertical pages. This automatically adjusts the graphical window to accommodate two pages, defaulting to two instances of HyperView.
3. Click anywhere in the page on the right and switch to HyperGraph by clicking the Page Selector button and selecting HyperGraph 2D from the drop-down list.
4. Click the Build Plots button to load the .plt file from the same results directory. Once the .plt file is loaded into HyperGraph, the two outputs are available for plotting.
5. Perform the following selections:
   1. Under Y Type, select Displacement.
   2. Under Y Request, select Displacement (on Shutter Body).
   3. Under Y Component, select X.
   4. Click Apply.

HyperGraph can be used to create additional traces on the same plot to generate the following plots.
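Finally, the model-consistency rules from "Verify the Model between MotionSolve and AcuSolve" can be scripted so that they are checked automatically before each run. The sketch below is a hypothetical helper, not part of the Altair installation or of this tutorial's official steps: the file names and the simple regular-expression matching are illustrative assumptions (a full parser would be more robust), and it only covers the wet-body name check (the print_interval, h_max and end-time checks would be analogous).

```python
# Hypothetical pre-flight check for the MotionSolve/AcuSolve co-simulation setup.
# Assumed inputs: the exported MotionSolve .xml and the AcuSolve .inp as plain text.
import re

def rigid_body_names_match(ms_xml_path: str, as_inp_path: str) -> bool:
    """Check that every rigid_body_name referenced by EXTERNAL_CODE_SURFACE in the
    AcuSolve .inp has a matching full_label in the MotionSolve .xml.
    The comparison is case-sensitive, as required by the co-simulation interface."""
    xml_text = open(ms_xml_path, encoding="utf-8", errors="ignore").read()
    inp_text = open(as_inp_path, encoding="utf-8", errors="ignore").read()
    ms_labels = set(re.findall(r'full_label\s*=\s*"([^"]+)"', xml_text))
    as_names = set(re.findall(r'rigid_body_name\s*=\s*"([^"]+)"', inp_text))
    missing = as_names - ms_labels
    if missing:
        print("No MotionSolve body found for:", ", ".join(sorted(missing)))
    return not missing

if __name__ == "__main__":
    # Example usage with the names used in this tutorial (paths are placeholders).
    ok = rigid_body_names_match("Valve_model.xml", "Check_Valve_Coupled.inp")
    print("Wet-body names consistent:", ok)
    # Remember to also confirm that print_interval == h_max == AcuSolve time step
    # (0.002 s here) and that both end times are 0.35 s, as described above.
```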
## Thursday, July 21, 2011 ... /////

### Fermilab: Higgs is probably between 114 and 137 GeV

Two minutes ago, Fermilab released a new press release:

Fermilab experiments close in on favored Higgs mass range (interactions.org)

Science Daily, Phys ORG, NewsTV.US

According to some high-precision techniques based on both D0 and CDF data - techniques that are not described in much detail in the press release, but the precise W-boson and top quark masses are the key - the Higgs mass is most likely to be between 114 and 137 GeV. It is not clear from the press release how much new evidence they have actually found that this interval is the right one. They say that they have analyzed 2/3 of the data that will be available by September 30th, when the Tevatron shuts down. It means that the dataset will jump by 50% relative to what we have now.

There will be two separate (D0 and CDF) Tevatron talks about the Higgs tomorrow morning. However, the full combined analysis by D0 and CDF will be presented on Wednesday, July 27th, at 3:00 p.m. French Summer Time, the last day of the EPS-HEP2011 conference. At 3:30 p.m., a similar plenary talk will consolidate all the LHC data on the Higgs, which may tell us something even more accurate or striking than the 114-137 GeV interval. Only tea and outlooks from continents and from theory :-) will be awaiting the participants of the conference - and the excited scientific public observing the events through TRF and a few other means - afterwards.

See What a light Higgs boson would mean for particle physics (TRF).

Don Garbutt: The Higgs Field

The results in Grenoble seem to indicate a complete validity of the Standard Model. Still, the preferred value of the Higgs mass suggests that the Standard Model breaks down at relatively low energies, so it can't be the whole story. If supersymmetry is relevant, the superpartner masses probably exceed 1 TeV in most cases. Still, there's nothing wrong with it.

I find the new inflow of robust and cold experimental results extremely refreshing. Pretty much all models that wanted to see new signals "behind the corners" have already been exterminated. This includes well-motivated models as well as the unmotivated ones. You could say that I would be equally or more happy if the colliders were producing totally new and unexpected results. And you're right. I am happy while learning the truth about Nature, whatever it is. Still, the identity of the Higgs sector remains unknown, and it may be the only unknown part of physics below 1 TeV. We could learn something tomorrow - on Friday - when dozens of new papers are likely to be released, and they will be more relevant for the Higgs enigma than the papers released today.
## Introduction Fibre lasers are unique light sources that have an increasing number of applications in industry1, nonlinear2 and multiphoton3 microscopy, micro-processing4, generation of optical vortices5, vector beams6, high power laser development7,8, and imaging9, among many others. Important efforts have been devoted to generating ultrashort pulses from these lasers10,11,12,13,14,15,16. In particular, a very active research field is supercontinuum generation in fibre lasers17,18,19,20. However, it is also well known that the supercontinuum generation process can be associated to pulse train instabilities21,22,23,24, which have been studied theoretically25,26 and experimentally with different approaches, e.g. based on interferometry27 and spectral-temporal analysis23. Regarding the characterization of stable ultrashort pulses, different techniques have been developed28. Over the last few years, the problem of the temporal measurement of unstable pulse trains has relied on using temporal characterization techniques such as autocorrelation, SPIDER (spectral phase interferometry for electric field reconstruction) or FROG (frequency-resolved optical gating)29,30. In autocorrelation, the pulse train instability is reflected in a narrow spike at the centre of the signal, which can mislead to a wrong pulse duration measurement. In interferometric measurements, such as SPIDER, previous works reported that the instability results in a reduction of contrast that cannot be experimentally identified29. In the case of FROG, the 2D trace is sensitive to the pulse train instability, which is associated to a higher error in the convergence29, very recently having been shown a specific analysis to identify the average pulse and coherent artifact contributions30. The effect of spectral amplitude and phase instabilities has been studied theoretically using the MIIPS (multiphoton intrapulse interference phase scan) technique and experimentally applied to a titanium-sapphire oscillator and amplifier31,32. The dispersion scan (d-scan) technique33 enables simultaneous measurement and compression of ultrashort pulses down to single-cycle (1.04-cycle) durations34 and has recently been used in the theoretical study of instabilities35,36. D-scan is based on measuring the spectrum of a nonlinear signal (such as second-harmonic generation) produced by a pulse as a function of (usually known) dispersion applied to the pulse. The resulting two-dimensional trace (the d-scan trace) enables retrieving the spectral phase of the pulse using a numerical algorithm. Previous works report, as expected, that pulse train instabilities lead to a spreading of the d-scan trace along the dispersion axis31,36. This effect is due to the fact that, in the presence of instabilities, the imparted dispersion for which the spectral phase is compensated for the different wavelengths of the input pulse is also inheriting that instability, therefore the nonlinear signal is redistributed along the dispersion axis. Here we present a d-scan-based method to experimentally assess the presence of pulse train instabilities and apply it to the measurement and optimization of supercontinuum fibre lasers. This method is based on the recently introduced self-calibrating d-scan (SC d-scan)37, which enables measuring a pulse with an arbitrary (i.e., unknown) compressor, since the latter’s nominal dispersion is also retrieved by the d-scan algorithm. The details of the SC d-scan technique are provided in the Methods section. 
Based on the retrieval of the applied dispersion, we can define a metric that effectively accounts for the instabilities. The resulting temporal characterization is therefore self-diagnosed against the pulse instabilities. ## Results and Discussion ### Theoretical study with second-order dispersion and random phase instabilities We start by validating the method numerically, using two sets of simulations devoted to two important types of instabilities: group delay dispersion (GDD) and random phase (RND). In the theoretical calculations that follow, we always use the measured spectrum (see Fig. 1c) of the fibre laser used in the experiments presented in the next section for a pump current I = 5 A, which gives a bandwidth of 50 nm centred at 1064 nm and a Fourier-limited duration τ0 = 32 fs FWHM (full-width at half maximum). We simulate a base (initial) pulse with a pure third-order dispersion (TOD) of −25,000 fs3 (duration τp = 40 fs FWHM), to which the pulse train instability is then added. We chose a non-flat spectral phase of the base pulse to study the effect of the pulse train instability on the retrieved pulse duration compared to the base pulse duration. As a GDD in the pulse would simply shift the d-scan trace in the dispersion axis, we introduced a moderate TOD leading to a 25% increase of the pulse duration FWHM. The known dispersion range of the simulated compressor, GDDK, is 20,000 fs2. In all the theoretical results, the retrieved pulse is calculated at an imparted GDD = 0 fs2. In Fig. 1a we show the simulated d-scan trace of the stable pulse train (for comparison purposes), together with the corresponding retrieved SC d-scan trace (Fig. 1b). Notice that the known simulated spectral phase (Fig. 1c) and the temporal intensity and phase (Fig. 1d) are not shown in the figure as they are equal to the retrieved ones. In a first set of simulations, a GDD instability with magnitude denoted by $$\gamma (f{s}^{2})$$ is added to the base pulse, with values ranging from 0 to 9000 fs2 (for γ = 0 we recover the base pulse), where the upper GDD value leads to high instabilities in the pulse train. For each value of γ, we simulate a train of 101 pulses with added random values of GDD normally distributed between $$\pm \gamma (f{s}^{2})$$. We then calculate and average the corresponding d-scan traces. For increasing values of γ, the d-scan trace becomes increasingly stretched over the z-axis, as seen, e.g., in Fig. 2a (Supplementary Video 1). To estimate the mean pulse duration, τγ, we calculate it as the temporal duration of the average of the pulse train (i.e., as the average duration of the 101 pulse temporal intensities). Despite GDDK being known, we can apply the SC d-scan algorithm to reconstruct the trace, hence obtaining the SC value for the total dispersion introduced by the compressor, GDDSC. For example, with $$\gamma =3600\,f{s}^{2}$$, the retrieved trace (Fig. 2b) is clearly stretched compared to the trace in the absence of instability (Fig. 1a), giving $$GD{D}_{SC} < GD{D}_{K}$$. Notice the different scales in the dispersion axes that reveal this behaviour. This means that the presence of an instability can be directly inferred from the discrepancy between the two GDD values ($$GD{D}_{SC}$$ and $$GD{D}_{K}$$). In Fig. 2a,b (Supplementary Video 1), we show the simulated and retrieved traces for different values of γ. The complete analysis of these simulations is given in Fig. 3 (1st column). In the absence of instability, the two GDD values are equal. 
As the instability increases, the ratio $$GD{D}_{SC}/GD{D}_{K}$$ decreases (Fig. 3(b1)). The retrieved pulse duration, $${\tau }_{SC}$$, is always below the base pulse duration $${\tau }_{P}$$ (Fig. 3(c1)). The merit function provided by the d-scan error38, which we will refer to as ε, initially increases with γ, but for high instability this tendency is reversed (Fig. 3(a1)), hindering its applicability to the evaluation of the amount of instability. It should be noticed that, both in the simulations and the experiments, for a trace generated by an instable pulse train, there is no retrieved trace that perfectly matches the structure. The retrieved trace corresponds to the GDDSC accounting for the instability and the described retrieved pulse (shorter than the base pulse). To quantify the GDD instability in the simulations, we define a quantity $${\Gamma }_{\gamma }=1-{({\tau }_{P}/{\tau }_{\gamma })}^{2}$$, which increases with γ. We now define the following general metric, which can be obtained from the SC d-scan retrieval provided the introduced dispersion is also known $${\Gamma }_{SC}=1-\frac{GD{D}_{SC}}{GD{D}_{K}}.$$ (1) In Fig. 3(d1), we show the evolution of $${\Gamma }_{SC}$$ and $${\Gamma }_{\gamma }$$, where both metrics provide similar quantifications of the instability. Also, the new metric $${\Gamma }_{SC}$$ is a monotonically increasing function of γ (contrarily to the merit function ε) further validating the use of $${\Gamma }_{SC}$$ as a measurement of the instability. To cross check these conclusions, we performed a second set of simulations with a different type of instability. To the same base pulse, we now added a normally distributed random phase $${\phi }_{\alpha }({\omega }_{j})=\alpha \cdot rnd(-\pi ,+\pi )$$, $$\forall {\omega }_{j}$$, graded by the random instability parameter, α, from 0 to 0.85 (α = 0 recovers the base pulse). Here, we also simulated the average d-scan trace of a train with 101 pulses, as shown in Fig. 2c (Supplementary Video 2) for α = 0.6, and used the average temporal intensity to calculate the mean duration τα. Like in the GDD instability case, the retrieved d-scan trace is stretched for increasing instability and the corresponding retrieval (Fig. 2d) yields $$GD{D}_{SC} < GD{D}_{K}$$. In Fig. 2c,d (Supplementary Video 2), we show the simulated and retrieved traces for different values of α. The complete simulation results of the SC retrievals are given in Fig. 3 (2nd column). In this case, the merit function is also increasing with the instability degree α (Fig. 3(a2)), but for high values of α it decreases, similarly to Fig. 3(a1). Regarding the retrieved pulse duration, $${\tau }_{\alpha }$$, it decreases from the base pulse duration $${\tau }_{P}$$ to the Fourier-limit $${\tau }_{0}$$, except for high amounts of instability (Fig. 3(c2)). We again find that the retrieved dispersion, $$GD{D}_{SC}$$, decreases with α (Fig. 3(b2)), confirming that it is a good indicator of the degree of instability. In this example, we evaluate the degree of instability as $${\Gamma }_{\alpha }={(1-{I}_{\alpha }/{I}_{0})}^{2}$$, with $${I}_{\alpha }$$ the peak intensity of the average pulse. Therefore $${\Gamma }_{\alpha }$$ is expected to increase from 0 to 1 as $$\alpha$$ increases. Here, the behaviour of the general metric, $${\Gamma }_{SC}$$, is also monotonically increasing and matches the tendency of our definition of degree of instability, $${\Gamma }_{\alpha }$$, as shown in (Fig. 3(d2)). 
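For readers who want to reproduce this behaviour qualitatively, the following minimal Python sketch (not the authors' code) averages the d-scan traces of a train of pulses with GDD jitter, in the spirit of the first set of simulations. The Gaussian spectrum, the Gaussian jitter distribution, the scan range and the FFT conventions are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of the train-averaged d-scan trace under GDD jitter.
import numpy as np

c = 299.792458                                  # speed of light, nm/fs
w0 = 2 * np.pi * c / 1064.0                     # carrier angular frequency, rad/fs
w = w0 + np.linspace(-0.4, 0.4, 1024)           # uniform angular-frequency grid
A = np.exp(-((w - w0) / 0.05) ** 2)             # assumed Gaussian spectral amplitude
phi_base = (-25000.0 / 6.0) * (w - w0) ** 3     # base pulse: pure TOD of -25,000 fs^3
gdd_scan = np.linspace(-10000.0, 10000.0, 101)  # imparted GDD values, fs^2 (assumed range)

def dscan_trace(phi):
    """SHG d-scan trace of a pulse with spectral phase phi (Eq. (2), GDD-only psi)."""
    trace = np.empty((gdd_scan.size, w.size))
    for i, gdd in enumerate(gdd_scan):
        spec = A * np.exp(1j * (phi + 0.5 * gdd * (w - w0) ** 2))
        e_t = np.fft.ifft(np.fft.ifftshift(spec))                      # field envelope in time
        trace[i] = np.abs(np.fft.fftshift(np.fft.fft(e_t ** 2))) ** 2  # SHG spectrum
    return trace

rng = np.random.default_rng(1)
gamma = 3600.0                                  # GDD jitter amplitude, fs^2
unstable = np.mean(
    [dscan_trace(phi_base + 0.5 * rng.normal(0.0, gamma) * (w - w0) ** 2)
     for _ in range(101)],                      # train of 101 pulses, as in the text
    axis=0,
)
stable = dscan_trace(phi_base)
# Comparing 'unstable' with 'stable' shows the stretching along the dispersion axis
# that the SC d-scan retrieval converts into GDD_SC < GDD_K, i.e. Gamma_SC > 0.
```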
### Experimental application to unstable fibre laser pulse trains Using the framework established above, we applied SC d-scan to study experimentally the pulse train instability in two different configurations of a broadband mode-locked oscillator-amplifier fibre laser system (Fig. 4). The oscillator is a laser diode-pumped mode-locked fibre laser with a pulse repetition rate of 75 MHz, central wavelength of 1060 nm and spectral bandwidth of 13.6 nm (FWHM). At the output fibre from the oscillator the peak power is Pp = 86 W and the temporal width is 3.1 ps (FWHM). In order to avoid nonlinear effects in the subsequent pulse amplification process, the pulses are temporally stretched by means of an optical fibre with normal group delay dispersion (GDD > 0) before the amplifier. The amplifier is an Yb-doped fibre amplifier (YDFA) also with positive GDD, in a fibre-based CPA (chirped pulsed amplification) architecture. After the amplifier the pulses are compressed using a hollow-core photonic bandgap microstructured fibre with anomalous group delay dispersion (GDD < 0), which compensates for the dispersion introduced at the stretching and amplifying stages. The optical properties of the pulsed signal at the end of the compressing stage fiber are the following: spectral bandwidth of 14.5 nm (FWHM), temporal pulse duration of 200 fs (FWHM), average power of 0.3 W, and peak power of 20 kW. For additional spectral broadening, these pulses are free space coupled into a photonic crystal fibre (PCF) at a peak intensity of 45 GW/cm2. The shape and coherence properties of the resulting supercontinuum spectrum show a very strong dependence on the dispersion and nonlinearity characteristics of the PCF, as shown further below. In the first configuration of the laser system, the spectral broadening stage used a negative dispersion PCF. We measured d-scan traces for different pump laser currents, from I = 2 to 6A (Fig. 5, rows). The pulses were sent to a grating compressor composed of a 600 lines/mm grating in a four-pass configuration. The inter-grating distance, z, was varied over a total range of 170 mm, which provided the amount of dispersion required by the d-scan measurements (Fig. 5, column a). The experimental nominal dispersion (known) introduced by the compressor is $$GD{D}_{K}/L=1550f{s}^{2}/mm$$, calculated from the geometry of the compressor and the grating groove density39. We found that the trace was stretched along the z-axis (Fig. 5, column a) when compared to the trace of a stable Fourier-limited pulse train (Fig. 5, column c), which is a clear indication of pulse train instability. When using SC d-scan to retrieve the trace (Fig. 5, column b), we obtained compressor dispersions $$GD{D}_{SC}/L$$ monotonically varying from 315 to 14 fs2/mm as the pump current increases (see the values in Table 1), as expected for increasing pulse train instability. Despite the similarities (especially for higher pump currents and stronger nonlinear spectral broadening) between measured and SC d-scan retrieved traces (although the algorithm convergence is worse than for traces of stable pulse trains), the dispersion retrieved by the SC d-scan algorithm, $$GD{D}_{SC}/L$$, was much smaller (from 5 × to 111 × less) than the known dispersion introduced by the actual compressor (Table 1). The large difference in stretching in the dispersion axis scale of the simulated stable Fourier-limit pulse (Fig. 5, column c) compared to the experimental traces (Fig. 
5, column a) –e.g., >100 × stretching for I = 6 A– is indicative of a high instability of the laser source and provides important quantitative information for its design and optimization. Following our general metric of Eq. (1), we find values of $${\Gamma }_{SC}$$ from 0.797 to 0.991, hence confirming the high instability of the pulse train. These experimental results are complemented by the data given in Table 1. As we increased the pump current, the laser spectrum experienced spectral broadening, $$\Delta \lambda$$, from 15 nm to 100 nm (FWHM), as shown in Fig. 5 (column d). Therefore, as the Fourier-limit of the pulse, $${\tau }_{FTL}$$, decreases from 83 fs to 25 fs, the corresponding trace should be narrower in the z-axis, contrarily to what actually occurs due to the increasing instability. The Fourier-limited pulse and the SC d-scan retrieved pulse are shown in Fig. 5 (column e). The SC d-scan retrieved pulse duration of the unstable pulse train, $${\tau }_{SC}$$, is closer to the Fourier limit as the instability increases (see Table 1), being consistent with the numerical simulations. It is remarkable that despite having a varying pulse spectrum (common in nonlinear instabilities), the metric $${\Gamma }_{SC}$$ stands correctly for the instability, reinforcing it as a good parameter to evaluate the pulse train instability. ### Optimization of the fibre laser source The pulse train instabilities measured with SC d-scan were identified as originating from nonlinear dynamics within the anomalous dispersion PCF, whereas the generation of stable pulses in fibres is usually achieved in all-normal dispersion (ANDi) schemes10,17,18,19,40. The overall dispersion curve of an ANDi PCF is negative but relatively close to zero over the whole bandwidth (Fig. 6a, red). The dominant nonlinear effect generated under these circumstances is self-phase modulation (SPM), and the spectrum can be broadened without incurring into pulse train instabilities (Fig. 6b, red). On the other hand, when using a PCF with anomalous dispersion (Fig. 6a, black), spectral broadening results from a mix of nonlinear effects, including SPM, stimulated Raman scattering, nth-order soliton breaking, and dispersive wave generation. In these circumstances, the pulsed emission loses its temporal coherence and presents a noisy spectrum (Fig. 6b, black). We should point out that slight differences in geometry of the PCF (Fig. 6c), namely in hole diameter and pitch, translate into marked differences in the spectral broadening behaviour. The required control of the PCF geometry when working close to zero dispersion is indeed at the frontier of current fibre manufacturing technology, which reinforces the need of techniques for determining the level of coherence of the pulses after nonlinear propagation in the PCF. Based on the above information and measurements, we opted for the ANDi PCF as the best choice for an optimized system. In this second configuration, the ANDi regime resulted in broadband spectra with a Fourier-limit of 14.3 fs FWHM (note that the spectral bandwidth is considerably larger than in the unstable cases previously presented). Since the previously used grating compressor would introduce a huge dispersion for these stable pulses, we needed to use a different compressor consisting of a pair of glass wedges and chirped mirrors to perform the d-scan. In Fig. 
7 we show the corresponding d-scan results, where the pulse is shown to be well compressed, presenting a relatively small remaining TOD and fourth-order dispersion, as shown by the tilt and the curvature in the trace, respectively. At the optimum compression insertion (5.6 mm), the retrieved pulse has a duration of 14.7 fs (FWHM). Also, the compressor dispersion retrieved with SC d-scan was $$GD{D}_{SC}/L$$ = 160 fs2/mm, which is close to the nominal value of 140 fs2/mm estimated from the material and geometry of the wedges, thus confirming the stability of the fibre laser source. As the PCF is seeded with pulses with a Fourier-limit duration of ~90 fs (FWHM), our results show a compression factor of 6. Compared to previous works using PCFs, Hooper et al.18 measured 26 fs with autocorrelation (14 fs Fourier-limit), or Heidt et al.17 measured 5 fs with SPIDER (note that pulse train instabilities cannot be discarded in these cases). Other authors did not measure the temporal evolution of the supercontinuum generation16. ## Conclusions In conclusion, we have experimentally shown the capability of self-calibrating (SC) d-scan to evaluate the presence and degree of pulse train instabilities in post-compressed ultrafast fibre lasers. We have identified the origin of the instabilities to be the nonlinear dynamics associated to the anomalous dispersion regime in the PCF used for spectral broadening. Such instabilities can be assessed with a general metric, $${\Gamma }_{SC}$$, which is a function of the ratio between the actual introduced dispersion and the SC d-scan retrieved dispersion, and even a simple visual inspection of the measured trace can already reveal the presence of instabilities. This method has enabled us to detect and solve instability issues in a broadband fibre laser by using all-normal dispersion fibres, where we obtained a properly compressed 15 fs stable pulse train. The use of SC d-scan enables integrating the instability detection within the temporal diagnostic, which is very helpful, e.g., for the design and optimization of broadband mode-locked fibre lasers and can also be applied to other laser sources. The d-scan traces have been shown to be sensitive to different sources of amplitude and phase instabilities both theoretically and experimentally in the present and in previous works31,32,35,36. Therefore, one would expect that other sources of instabilities can also be quantified with SC d-scan, for example in solitonic mode-locked fibre lasers41,42 or saturable absorber based fibre mode-locked lasers12,14,43. ## Methods ### Self-calibrating d-scan technique The d-scan technique33 can simultaneously compress and characterize ultrashort laser pulses. The electric field of the input pulse to be measured can be written as $$E(\omega )=A(\omega )\exp [i\phi (\omega )]$$, where its amplitude can be obtained from the measured spectrum, $$S(\omega )$$, as $$A(\omega )=\sqrt{S(\omega )}$$, and the spectral phase $$\phi (\omega )$$ is retrieved from the measurement. In d-scan, a range of dispersions is applied to the input pulse while measuring the spectrum of the second-harmonic generation (or other nonlinear signal) from the resulting pulse (Fig. 8). The dispersion is typically imparted by a pulse compressor, for example a combination of chirped mirrors and glass-wedges33,44,45, grating compressors46 or prism compressors37. 
The measured nonlinear signal is a two-dimensional trace, the d-scan trace, being a function of the second-harmonic wavelength (or equivalently the frequency ω) and the imparted dispersion. The scanned dispersion ranges from negative to positive and, for an arbitrary spectral phase of the pulse, leads to compression of different parts of the spectrum at different added dispersions. Around optimum pulse compression, the nonlinear signal is higher, while that signal decreases for higher imparted dispersions. The particular structure of the d-scan trace encodes the spectral phase $$\phi (\omega )$$ of the pulse. The d-scan trace is given by the expression $${S}_{d\cdot scan}(\omega ,z)={| {\mathcal F} [{({ {\mathcal F} }^{-1}\{A(\omega )\exp [i\phi (\omega )]\exp [i\psi (\omega )\cdot z]\})}^{2}]|}^{2},$$ (2) where $$\psi (\omega )\cdot z$$ represents the imparted dispersion and is parametrized by z. Depending on the type of dispersion control, the variable z can account for the amount of glass wedge insertion in a wedge compressor, or the variation in distance between dispersive elements in a prism or grating compressor, among others. In SC d-scan37, both the pulse phase and the compressor nominal dispersion are simultaneously retrieved. The compressor dispersion per unit length can be expanded in a Taylor series $$\psi (\omega )={\psi }_{0}+{\psi }_{1}\cdot (\omega -{\omega }_{0})+\frac{GD{D}_{tot}}{2L}{(\omega -{\omega }_{0})}^{2}+\frac{TO{D}_{tot}}{6L}{(\omega -{\omega }_{0})}^{3}+\ldots ,$$ (3) where L is the total scan range in the variable z, with $$GD{D}_{tot}$$ and $$TO{D}_{tot}$$ denoting, respectively, the total GDD and TOD introduced during a whole scan by varying the parameter z over an amount L. The $${\psi }_{0},{\psi }_{1}$$ terms, corresponding to the carrier envelope phase and to a net group delay (i.e., a pulse arrival time), respectively, can be ignored as the trace is not sensitive to them. In most cases it is enough to use the GDD and TOD parameters in the expansion given in Eq. (3)37. For the SC d-scan retrievals we use the multi-variable optimization Levenberg-Marquardt algorithm (as previously used for SC d-scan retrievals34,37,47). We parametrize the unknown phase function, $$\phi (\omega )$$, in 32 discrete points (interpolated over the complete frequency grid for the calculations), while we model the compressor with pure GDD (a single parameter).
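As an illustration of how Eqs. (2) and (3) combine in practice, the sketch below (a simplified stand-in, not the authors' implementation) builds the forward model that an SC d-scan fit compares to the measured trace: the pulse phase is parametrized by 32 nodes and the compressor by a single GDD-per-unit-insertion term. A Levenberg-Marquardt routine such as scipy.optimize.least_squares could then adjust these parameters to minimize the trace residuals; all numerical values below are placeholders.

```python
# Simplified SC d-scan forward model (illustrative only).
# Unknowns: 32 phase nodes plus one compressor coefficient (GDD per unit z).
import numpy as np

c = 299.792458                                   # nm/fs
w0 = 2 * np.pi * c / 1064.0                      # rad/fs
w = w0 + np.linspace(-0.4, 0.4, 512)             # angular-frequency grid
A = np.exp(-((w - w0) / 0.05) ** 2)              # placeholder spectral amplitude
z = np.linspace(-1.0, 1.0, 64)                   # compressor positions (arbitrary units)
w_nodes = np.linspace(w[0], w[-1], 32)           # 32-node phase parametrization

def forward_trace(phase_nodes, gdd_per_z, tod_per_z=0.0):
    """d-scan trace from Eq. (2), with the compressor of Eq. (3) truncated at TOD."""
    phi = np.interp(w, w_nodes, phase_nodes)     # interpolate nodes onto the full grid
    trace = np.empty((z.size, w.size))
    for i, zi in enumerate(z):
        psi_z = (0.5 * gdd_per_z * zi * (w - w0) ** 2
                 + tod_per_z * zi * (w - w0) ** 3 / 6.0)
        spec = A * np.exp(1j * (phi + psi_z))
        e_t = np.fft.ifft(np.fft.ifftshift(spec))
        trace[i] = np.abs(np.fft.fftshift(np.fft.fft(e_t ** 2))) ** 2
    return trace / trace.max()

def residuals(params, measured):
    """Residual vector for a least-squares retrieval: params = [32 nodes, GDD per z]."""
    return (forward_trace(params[:-1], params[-1]) - measured).ravel()

# Example: a synthetic "measured" trace (real data would be used in practice).
measured = forward_trace(np.interp(w_nodes, w, 2000.0 * (w - w0) ** 2), 8000.0)
# from scipy.optimize import least_squares
# fit = least_squares(residuals, x0=np.r_[np.zeros(32), 5000.0],
#                     args=(measured,), method="lm")
# fit.x[-1] would be the retrieved GDD per unit z; comparing it with the known
# compressor value gives the instability metric Gamma_SC of Eq. (1).
```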
dc.contributor.author Eager, Dan en_GB
dc.contributor.author Hobbs, Benjamin en_GB
dc.contributor.author Bialek, Janusz en_GB
dc.date.accessioned 2012-11-15T15:26:14Z
dc.date.available 2012-11-15T15:26:14Z
dc.date.issued 2012-04-25 en_GB
dc.identifier.other CWPE1217
dc.identifier.uri http://www.dspace.cam.ac.uk/handle/1810/243973
dc.identifier.uri https://www.repository.cam.ac.uk/handle/1810/243973
dc.description.abstract Many governments who preside over liberalised energy markets are developing policies aimed at promoting investment in renewable generation whilst maintaining the level of security of supply customers have come to expect. Of particular interest is the mix and amount of generation investment over time in response to policies promoting high penetrations of variable output renewable power such as wind. Modelling the dynamics of merchant generation investment in market environments can inform the debate. Such models need improved methods to calculate expected output, costs and revenue of thermal generation subject to varying load and random independent thermal outages in a power system with high penetrations of wind. This paper presents a dynamic simulation model of the aggregated Great Britain (GB) generation investment market. The short-term energy market is simulated using probabilistic production costing based on the Mix of Normals distribution technique with a residual load calculation (load net of wind output). Price mark-ups due to market power are accounted for. These models are embedded in a dynamic model in which generation companies use a Value at Risk (VaR) criterion for investment decisions. An 'energy-only' market setting is used to estimate the economic profitability of investments and forecast the evolution of security of supply. Simulated results for the GB market case study show a pattern of increased relative security of supply risk during the 2020s. In addition, fixed cost recovery for many new investments can only occur during years in which more frequent supply shortages push energy prices higher. A sensitivity analysis on a number of key model assumptions provides insight into factors affecting the simulated timing and level of generation investment. This is achieved by considering the relative change in simulated levels of security of supply risk metrics such as de-rated capacity margins and expected energy unserved. The model can be used as a decision support tool in policy design, in particular in addressing the increased energy-only market revenue risk facing thermal generation, particularly peaking units, that rely on a small number of high price periods to recover fixed costs and make adequate returns on investment. en_GB
dc.publisher Faculty of Economics
dc.relation.ispartofseries Cambridge Working Papers in Economics
dc.rights All Rights Reserved en
dc.rights.uri https://www.rioxx.net/licenses/all-rights-reserved/ en
dc.subject Power generation economics en_GB
dc.subject Mix of Normals distribution en_GB
dc.subject Thermal power generation en_GB
dc.subject Wind power generation en_GB
dc.title Dynamic Long-Term Modelling of Generation Capacity Investment and Capacity Margins en_GB
dc.type Working Paper en_GB
dc.identifier.doi 10.17863/CAM.4970
dc.identifier.url http://www.econ.cam.ac.uk/dae/repec/cam/pdf/cwpe1217.pdf
#### Coherent macroscopic quantum tunneling in boson-fermion mixtures

04 Apr 2007 | cond-mat.mes-hall, cond-mat.str-el | arxiv.org/abs/0704.0650

Abstract. We show that the cold atom systems of simultaneously trapped Bose-Einstein condensates (BEC's) and quantum degenerate fermionic atoms provide promising laboratories for the study of macroscopic quantum tunneling. Our theoretical studies reveal that the spatial extent of a small trapped BEC immersed in a Fermi sea can tunnel and coherently oscillate between the values of the separated and mixed configurations (the phases of the phase separation transition of BEC-fermion systems). We evaluate the period, amplitude and dissipation rate for $^{23}$Na and $^{40}$K-atoms and we discuss the experimental prospects for observing this phenomenon.
# Carbon emissions, income inequality and economic development

Abebe Hailemariam, Ratbek Dzhumashev and Muhammad Shahbaz

Empirical Economics, 2020, vol. 59, issue 3, No 5, 1139-1159

Abstract: This paper investigates whether changes in income inequality affect carbon dioxide ($$\mathrm{CO}_2$$) emissions in OECD countries. We examine the relationship between economic growth and $$\mathrm{CO}_2$$ emissions by considering the role of income inequality in the carbon emissions function. To do so, we use a new source of data on top income inequality measured by the share of pretax income earned by the richest 10% of the population in OECD countries. We also use Gini coefficients, as the two measures capture different features of income distribution. Using recently innovated panel data estimation techniques, we find that an increase in top income inequality is positively associated with $$\mathrm{CO}_2$$ emissions. Further, our findings reveal a nonlinear relationship between economic growth and $$\mathrm{CO}_2$$ emissions, consistent with the environmental Kuznets curve. We find that an increase in the Gini index of inequality is associated with a decrease in carbon emissions, consistent with the marginal propensity to emit approach. Our results are robust to various alternative specifications. Importantly, from a policy perspective, our findings suggest that policies designed to reduce top income inequality can reduce carbon emissions and improve environmental quality.

Keywords: Carbon dioxide emissions; Income inequality; Economic development; Environmental Kuznets curve; Panel data; O4; Q0; Q1; Q3

Date: 2020
Now showing items 1-20 of 170

• Modified Hamiltonian Monte Carlo for Bayesian Inference (Statistics and Computing, 2019-07-22). The Hamiltonian Monte Carlo (HMC) method has been recognized as a powerful sampling tool in computational statistics. We show that performance of HMC can be significantly improved by incorporating importance sampling and ...
• Opportunities at the Intersection of Synthetic Biology, Machine Learning, and Automation (ACS Synthetic Biology, 2019-07). Our inability to predict the behavior of biological systems severely hampers progress in bioengineering and biomedical applications. We cannot predict the effect of genotype changes on phenotype, nor extrapolate the ...
• A concerted systems biology analysis of phenol metabolism in Rhodococcus opacus PD630 (Metabolic Engineering, 2019-06). Rhodococcus opacus PD630 metabolizes aromatic substrates and naturally produces branched-chain lipids, which are advantageous traits for lignin valorization. To provide insights into its lignocellulose hydrolysate utilization, ...
• Conductance-Based Refractory Density Approach for a Population of Bursting Neurons (Bulletin of Mathematical Biology, 2019). The conductance-based refractory density (CBRD) approach is a parsimonious mathematical-computational framework for modeling interacting populations of regular spiking neurons, which, however, has not been yet extended ...
• Exploring Li-ion conductivity in cubic, tetragonal and mixed-phase Al-substituted Li7La3Zr2O12 using atomistic simulations and effective medium theory (Acta Materialia, 2019-08-15). Garnet Li7La3Zr2O12 (LLZO) is a promising solid electrolyte candidate for solid-state Li-ion batteries, but at room temperature it crystallizes in a poorly Li-ion conductive tetragonal phase. To this end, partial substitution ...
• Brain energetics plays a key role in the coordination of electrophysiology, metabolism and hemodynamics: evidence from an integrated computational model (Journal of Theoretical Biology, 2019-06-05). The energetic needs of brain cells at rest and during elevated neuronal activation has been the topic of many investigations where mathematical models have played a significant role providing a context for the interpretation ...
• Patient-specific modelling of cortical spreading depression applied to migraine studies (2019-06-17). Migraine is a common neurological disorder and one-third of migraine patients suffer from migraine aura, a perceptual disturbance preceding the typically unilateral headache. Cortical spreading depression (CSD), a ...
• Machine learning framework for assessment of microbial factory performance (PLoS ONE, 2019-01-01). Metabolic models can estimate intrinsic product yields for microbial factories, but such frameworks struggle to predict cell performance (including product titer or rate) under suboptimal metabolism and complex bioprocess ...
• Lessons from Two Design–Build–Test–Learn Cycles of Dodecanol Production in Escherichia coli Aided by Machine Learning (ACS Synthetic Biology, 2019-01-01). The Design–Build–Test–Learn (DBTL) cycle, facilitated by exponentially improving capabilities in synthetic biology, is an increasingly adopted metabolic engineering framework that represents a more systematic and efficient ...
• Glioma invasion and its interplay with the nervous tissue: a multiscale model (2019). A multiscale mathematical model for glioma cell migration and proliferation is proposed, taking into account a possible therapeutic approach. Starting with the description of processes taking place on the subcellular level, ...
• SDE-driven modeling of phenotypically heterogeneous tumors: The influence of cancer cell stemness (Discrete and Continuous Dynamical Systems - Series B, 2019). We deduce cell population models describing the evolution of a tumor (possibly interacting with its environment of healthy cells) with the aid of differential equations. Thereby, different subpopulations of cancer cells ...
• Tissue drives lesion: computational evidence of interspecies variability in cardiac radiofrequency ablation (The 10th International Conference on Functional Imaging and Modeling of the Heart, 2019). Radiofrequency catheter ablation (RFCA) is widely used for the treatment of various types of cardiac arrhythmias. Typically, the efficacy and the safety of the ablation protocols used in the clinics are derived from tests ...
• How does radiofrequency ablation efficacy depend on the stiffness of the cardiac tissue? Insights from a computational model (2019). Objective. Radiofrequency catheter ablation (RFCA) is an effective treatment for the elimination of cardiac arrhythmias, however it is not exempt from complications that can risk the patients' life. The efficacy of the ...
• Is minimising the convergence rate a good choice for efficient optimized Schwarz preconditioning in heterogeneous coupling? The Stokes-Darcy case (2019-01-05)
• Qualitative analysis of kinetic-based models for tumor-immune system interaction (Discrete and Continuous Dynamical Systems - Series B, 2018-08). A mathematical model, based on a mesoscopic approach, describing the competition between tumor cells and immune system in terms of kinetic integro-differential equations is presented. Four interacting populations are ...
• Computational predictive modeling of integrated cerebral metabolism, electrophysiology and hemodynamics (2019-02-12). Understanding the energetic requirement of brain cells during resting state and during high neuronal activity is a very active research area where mathematical models have contributed significantly by providing a context ...
• Anticipation via canards in excitable systems (Chaos: An Interdisciplinary Journal of Nonlinear Science, 2019-01-14). Neurons can anticipate incoming signals by exploiting a physiological mechanism that is not well understood. This article offers a novel explanation on how a receiver neuron can predict the sender's dynamics in a ...
• Atomistic Insight into Ion Transport and Conductivity in Ga/Al-Substituted Li$_7$La$_3$Zr$_2$O$_{12}$ Solid Electrolytes (ACS Applied Materials & Interfaces, 2018-01-09). Garnet-structured Li$_{7}$La$_{3}$Zr$_{2}$O$_{12}$ is a promising solid electrolyte for next-generation solid-state Li batteries. However, sufficiently fast Li-ion mobility required for battery applications only emerges ...
• A least-squares implicit RBF-FD closest point method and applications to PDEs on moving surfaces (Journal of Computational Physics, 2018-10). The closest point method (Ruuth and Merriman, J. Comput. Phys. 227(3):1943-1961, [2008]) is an embedding method developed to solve a variety of partial differential equations (PDEs) on smooth surfaces, using a closest point ...
• Pro-C congruence properties for groups of rooted tree automorphisms (Archiv der Mathematik, 2018-11-21). We propose a generalisation of the congruence subgroup problem for groups acting on rooted trees. Instead of only comparing the profinite completion to that given by level stabilizers, we also compare pro-$\mathcal{C}$ ...
# Characterization of the Gelling Process in Acidifying Milk

Prepared for LS Instruments AG by Kitty van Gruijthuijsen from the Adolphe Merkle Institute, University of Fribourg, February 2011

## Introduction

Diffusing Wave Spectroscopy (DWS) is a powerful optical technique to study the microrheological properties of viscoelastic samples [1]. It is based on the analysis of the fluctuations of light that has been scattered multiple times within a sample. Because DWS inherently averages over all scattering events, it can yield valuable information even on highly complex systems, such as milk or paint [2,3]. The DWS ResearchLab (the model preceding the current DWS RheoLab) of LS Instruments is a versatile tool to conduct such DWS measurements in both transmission and backscattering mode. The patented DWS echo technology reduces the usual DWS measurement time significantly, so that typical measurements take as little as one minute instead of hours, enabling the study of a wide range of samples that evolve in time. All measurements in this application note were conducted with the DWS ResearchLab.

## The Yoghurt-Making Process

Milk is probably the most studied natural colloidal system. It consists of a continuous water phase, in which dispersed fat droplets are stabilized by milk proteins. The latter include nanometer-sized whey proteins, like β-lactoglobulin, and caseins, which are 100-400 nm in size and constitute 80% of milk protein. Yoghurt is a fermentation product of milk; lactic-acid bacteria convert lactose into lactic acid, which lowers the pH towards the iso-electric point of the caseins. Once the casein proteins start to lose their charges, they start to aggregate: the sample gels to form yoghurt! In the laboratory this process is mimicked by adding glucono-δ-lactone, which slowly dissociates and thus gradually lowers the pH, as shown in Figure 1.

Figure 1. pH decrease in time for milk mixed with 0.6 wt% glucono-δ-lactone.

## Non-Ergodicity

A central parameter in DWS data treatment is the transport mean free path, l*. It is a measure of the length scale in the system over which the direction of scattered light is fully randomized. Thus the inverse of l* is a measure for the turbidity of the system. Because l* is hard to determine without a detailed knowledge of the sample properties, we extract it from a comparison of the transmitted intensity, TI, to that of a known reference sample containing 1 wt% of polystyrene particles (r = 200 nm) in water (Eqn. (1)):

$$l_{\text{sample}}^{\ast} = \frac{TI_{\text{sample}} \cdot l_{\text{reference}}^{\ast}}{TI_{\text{reference}}}. \tag{1}$$

Milk is a liquid in which the scattering objects can freely move. This yields a scattered signal which is automatically time averaged over several microscopic configurations of the sample: it is ergodic. In yoghurt the proteins and fat droplets form a network, which effectively does not change over the time span of a measurement. We call such samples non-ergodic, and the resulting transmitted intensity signal will strongly depend on the exact position of the detector. The DWS ResearchLab uses a spinning ground glass to counteract this effect (Figure 2). It thus allows DWS measurements not only on milk but also on yoghurt [4].

Figure 2. Transport mean free path, l*, for acidifying milk determined with the regular DWS (open symbols) and with a rotating ground glass (closed symbols).
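As a minimal illustration of Eqn. (1), the snippet below computes l* for a sample from measured transmitted intensities and the known l* of the polystyrene reference. It is a sketch only; the numerical values are made-up placeholders, not data from this note.

```python
def transport_mean_free_path(ti_sample, ti_reference, l_star_reference):
    """Estimate l* of a sample from transmitted intensities (Eqn. 1).

    l*_sample = l*_reference * TI_sample / TI_reference
    """
    return l_star_reference * ti_sample / ti_reference

# Placeholder numbers for illustration only (arbitrary intensity units, micrometers).
ti_milk = 0.8        # transmitted intensity of the milk sample (assumed)
ti_ref = 2.4         # transmitted intensity of the 1 wt% polystyrene reference (assumed)
l_star_ref = 220.0   # l* of the reference sample in micrometers (assumed)

print(f"l* of sample: {transport_mean_free_path(ti_milk, ti_ref, l_star_ref):.1f} um")
```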
## Qualitative Information from the Correlation Function

The DWS ResearchLab measures the dynamic sample properties in the form of normalized intensity correlation functions, g2(t)-1. Here the ground glass is employed in a second, different way. Using the patented DWS echo technology, correlation times up to 12 s can be measured much faster than usual (within minutes instead of hours). The correlation function obtained with the echo technology is appended to the regular correlation function, thus extending the accessible measurement range and yielding a more complete correlation function, as highlighted in Figure 3.

Acidifying milk shows the typical dynamic behavior of a gelling process: the correlation function shifts to longer times (see arrow in Figure 3), indicating a slowing down of the dynamics. At the lowest pH values the correlation function no longer fully decays to zero, because the scatterers are trapped in the space-spanning yoghurt network. The shift to longer decay times can be quantified by the time at which the correlation function has decayed to half its initial value. Indeed the half time, plotted in Figure 4 as a function of decreasing pH, remains almost constant down to a pH of 5.2, which corresponds to the onset of aggregation. It then gradually increases over several orders of magnitude. Although the actual gel point cannot be directly determined from the half time, it can be taken as a measure for process optimization and regulation. This is for instance applied in cheese making, where the gelling curd has to be cut once it has achieved a specific consistency.

Figure 3. Correlation functions of acidifying milk measured in time. The curves are labelled with their decreasing pH values.

Figure 4. Times at which the correlation function has decayed to half its value as a function of decreasing pH for acidifying milk.
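To make the half-time metric concrete, here is a small sketch (not part of the original note) that estimates the lag time at which a measured g2(t)-1 curve has decayed to half of its initial value, interpolating between the two bracketing points. The sample data are synthetic, not measurements from this note.

```python
import numpy as np

def half_decay_time(t, g2m1):
    """Return the lag time at which g2(t)-1 first falls to half its initial value."""
    target = 0.5 * g2m1[0]
    below = np.where(g2m1 <= target)[0]
    if below.size == 0:
        raise ValueError("correlation function never decays to half its initial value")
    j = below[0]            # first point at or below the target
    i = j - 1               # last point above the target
    # Geometric (log-time) interpolation between the two bracketing points.
    frac = (g2m1[i] - target) / (g2m1[i] - g2m1[j])
    return t[i] * (t[j] / t[i]) ** frac

# Synthetic example: a single-exponential decay with tau = 1 ms.
t = np.logspace(-6, 1, 200)          # lag times in seconds
g2m1 = np.exp(-t / 1e-3)
print(f"half time = {half_decay_time(t, g2m1):.2e} s")   # about ln(2) * 1 ms
```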
SECTION I

## INTRODUCTION

Optical frequency combs that offer hundreds of carriers of highest quality across a large spectral band have drawn a tremendous amount of attention within recent years. Research on frequency combs culminated in the Nobel Prize that was awarded to John Hall and Theodor Hänsch in 2005 for their work in laser-based precision spectroscopy [1], [2]. Optical frequency combs were recognized as a useful tool not only for spectroscopy but also for various other applications such as optical and microwave waveform generation [3], [4], optical signal processing [5], [6], fiber-optic communication [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], optical coherence tomography [18], and precision ranging [19]. Depending on the application, the individual linewidth, the line spacing or the total number of lines (e.g., spanning an octave for self-referencing) are important and ultimately influence the design.

Optical frequency combs are most attractive for the latest generation of optical multi-carrier communication systems, where information is encoded in so-called superchannels. To generate Tbit/s superchannels, data is encoded on multiple closely spaced, equidistant optical carriers and subsequently multiplexed. Such systems require a precisely controlled carrier spacing, which is inherently provided by optical comb sources. Here, a single comb source could replace hundreds of lasers, which would otherwise have to be controlled precisely in their relative and absolute frequencies. Prominent examples of such superchannel communication systems are based on all-optical orthogonal frequency division multiplexing (OFDM) [16], no guard interval OFDM [20], coherent wavelength division multiplexing (Co-WDM) [9], and Nyquist WDM [8], [21], [22], [23]. However, such schemes require a precise control of carrier frequencies [9], [16], [20], [21], [22], and in rare cases even of the carrier phases with respect to the symbol time slot [9]. Fortunately, in optical frequency comb sources, the carrier spacing is extremely precise, and strictly linear time dependencies of the phases between carriers are maintained, making them ideally suited for this application.

However, in order to use the carriers of an optical frequency comb for communications they must have a sufficiently narrow spacing (e.g., 12.5 or 25 GHz) and offer sufficient quality. This means that:

• All the optical carriers should be of equal power (or follow a defined spectral distribution). This is needed to achieve a similar performance of the data encoded on the carriers and to avoid one modulated carrier limiting the performance of all neighbors through crosstalk. Equal power in all carriers typically can be achieved through equalization of a generated frequency comb [14], [16].
• The carriers must have a minimum carrier-power to noise-power-density ratio (CNR). The CNR ultimately limits the amount of information that can be encoded on a single subcarrier. The CNR directly defines the maximum achievable signal-to-noise ratio (SNR), for which minimum values for common modulation formats are tabulated in the literature.
• The linewidth of all carriers has to be narrow. A narrow linewidth (low phase noise) is fundamentally needed, particularly for transmission systems making use of phase-encoded signals. Coherent transmission systems typically need a laser linewidth that is significantly smaller than 100 kHz.
Quite a few schemes have been experimentally investigated for use in optical communication systems that can fulfill these requirements to a smaller or larger extent. Among these are comb generators based on optical modulators [9], [10], [24], also in the form of recirculating frequency shifters [7], [11], [12], micro resonators [15], [25], [26], and mode-locked lasers (MLL) with spectral broadening in highly nonlinear fibers (HNLF) [13], [14]. All these schemes have different advantages and drawbacks: Schemes using modulators typically have a sufficient carrier quality but only offer a few lines, and the operation points of the modulators have to be controlled precisely. Schemes that exploit self-phase modulation (SPM) typically provide a sufficient number of subcarriers, but the CNR is low at the spectral minima and at the outer edges of the spectrum.

In this paper, we present a comb generator scheme based on slicing of spectra broadened in highly nonlinear fibers. The scheme extends the usable bandwidth of frequency combs generated through SPM by removing the minima [27] in broadened spectra. The scheme promises a large number of carriers with ample power and CNR for optical transmission systems by spectrally composing suitable subspectra [16]. In our demonstration, we generate a total number of 325 carriers, which we derived from a single MLL. The MLL was measured to provide a linewidth of approximately 1 kHz. The CNR of all carriers of the frequency comb was larger than 25.8 dB and the quality was proven in a transmission experiment.

SECTION II

## SPECTRAL MINIMA IN SELF-PHASE MODULATION

Spectral broadening through self-phase modulation (SPM) can generate extremely broad frequency combs. However, as already observed by Stolen et al. in 1978 [27], spectral minima occur for phase shifts significantly larger than $\pi$. Typically, these minima are explained as destructive interference of spectral components that have experienced a relative phase shift of $\pi$ [28]. In a simplified picture, the generation of the spectral minima in SPM can be illustrated as shown in Fig. 1. The intensity of an optical impulse is displayed in a moving time frame $z - v_{g}t$ (group velocity $v_{g}$) along the propagation direction $z$, Fig. 1(a). Due to the Kerr nonlinearity, the optical phase shift $\varphi \propto I$ is in proportion to the local intensity $I$. The slopes of the impulse lead to a frequency offset $\Delta f$, Fig. 1(b). If the maximum phase shift $\varphi_{\max}$ is larger than $\pi$, then above (and also below) $\Delta f = 0$ there will be pairs of points with the same $\Delta f$ but a relative phase shift $\Delta\varphi = \pi$. In this simple picture, this will lead to destructive interference of these spectral components, and therefore to minima in the spectrum. In our example, these frequencies are labeled $\Delta f_{1}$ and $\Delta f_{2}$.

Fig. 1. Schematic phase and frequency shift in self-phase modulation of a single impulse with intensity $I$ in a moving time frame $z - v_{g}t$ (propagation coordinate $z$, time $t$, group velocity $v_{g}$). (a) Nonlinear phase shift $\varphi \propto I$. The phase shift has a maximum $\varphi_{\rm max}$ at the peak of the pulse and translates to a frequency shift $\Delta f$ as seen in (b). In a simplified picture, each frequency shift occurs twice within one pulse (see example in this figure), but with a different phase.
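To make the simplified picture above concrete, the following sketch (not part of the original paper) evaluates the SPM phase $\varphi(t) \propto I(t)$ of a Gaussian pulse and the resulting instantaneous frequency shift $\Delta f(t) = -\frac{1}{2\pi}\,\mathrm{d}\varphi/\mathrm{d}t$, and checks numerically that any shift below the peak value occurs at two points of the pulse. Pulse width and peak phase are arbitrary illustrative values.

```python
import numpy as np

# Gaussian pulse intensity and SPM phase (arbitrary illustrative parameters).
t = np.linspace(-4e-12, 4e-12, 4001)      # time axis, seconds
t0 = 1e-12                                # ~1 ps pulse width (assumed)
intensity = np.exp(-t**2 / t0**2)         # normalized intensity I(t)
phi_max = 2.5 * np.pi                     # peak nonlinear phase shift (> pi)
phi = phi_max * intensity                 # phi(t) proportional to I(t)

# Instantaneous frequency shift, Delta f(t) = -(1/(2*pi)) * d(phi)/dt
delta_f = -np.gradient(phi, t) / (2 * np.pi)

# Every |Delta f| below the peak shift is reached at two different times,
# i.e. by two spectral components that can interfere destructively.
peak_shift = delta_f.max()
probe = 0.5 * peak_shift
crossings = np.where(np.diff(np.sign(delta_f - probe)))[0]
print(f"peak shift = {peak_shift/1e9:.1f} GHz, "
      f"Delta f = {probe/1e9:.1f} GHz occurs at {len(crossings)} time(s)")
```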
If the maximum phase shift $\varphi_{\rm max}$ is larger than $\pi$, there will be at least two pairs of points ($\Delta f_{1}$ and $\Delta f_{2}$), one with a positive $\Delta f_{1}$ and one with a negative frequency shift $\Delta f_{2}$, which have a phase difference of $\Delta \varphi = \pi$. This phase shift then leads to spectral destructive interference at these frequency shifts $\Delta f_{1}$ and $\Delta f_{2}$. In our example these frequency shifts are symmetric around $\Delta f = 0$, which is not necessarily the case for asymmetric impulses or in the presence of dispersion.

Fig. 2 presents simulated and measured power spectra that exhibit these minima. The simulated example spectra in Fig. 2(a) are computed for a single spectrally broadened pulse. If $\Delta\varphi \geq \pi$, spectral minima develop. As seen in the figure, the number of minima $m$ allows for an estimation of the maximum phase shift $\varphi_{\max}$ according to

$$\varphi_{\max} \approx \frac{2m + 1}{2}\pi. \tag{1}$$

This equation is related to Eq. (4.1.14) in [28], which gives a similar relation for the number of spectral maxima resulting from a certain maximum phase shift.

Fig. 2. Simulated and measured spectra for spectral broadening through self-phase modulation in a highly nonlinear fiber. (a) Single pulse with its spectrum labeled "input". After spectral broadening in a highly nonlinear fiber (HNLF), the maximum nonlinear phase shift is $\varphi_{\max}$. It increases proportional to the input power, which increases from left to right and from upper to lower row. The spectra exhibit minima for maximum nonlinear phase shifts significantly larger than $\pi$ [27]. (b) Non-broadened line spectrum of the mode locked laser (MLL), and line spectra broadened by SPM in the HNLF. The average input power to the HNLF is given in dBm. Spectral minima are marked with a white "v". For higher powers, not all minima are within the displayed bandwidth of 30 nm. The spectra were measured at the output of the HNLF in the setup seen in Fig. 4.

The measurement in Fig. 2(b) relates to the mode locked laser (MLL) with a repetition rate $R = 12.5\ \text{GHz}$, which was used for all experiments in this paper. Its comb line spectrum was amplified to the specified average powers and broadened by SPM. The minima are marked with white symbols "v". Because interference is involved, any instability in the optical MLL power leads to a frequency shift of these minima, and therefore to a pronounced instability of the neighboring comb lines.

SECTION III

## COMB GENERATION BY SPECTRAL SLICING

The following scheme generates a frequency comb with a large bandwidth and a high carrier quality for all carriers. We propose to compose this high quality frequency comb from a seed spectrum and one or more broadened spectra [16]. We replace carriers at spectral minima of broadened spectra with a low carrier power and CNR by carriers from differently broadened spectra with a high carrier quality. This process has the four steps shown in Fig. 3. First, we generate the seed spectrum using an optical pulse source, such as a MLL. From this seed spectrum, we then generate $N$ optical spectra via spectral broadening elements (NLE), such as a highly nonlinear fiber (HNLF). Next, we use spectral slicing and composing in a reconfigurable optical multiplexer (e.g., waveshaper). Here, the usable parts of the seed spectrum and the $N$ additional spectra are sliced and combined to form a continuous broad output spectrum.
This effectively replaces all unstable and low-power parts of the optical spectra. Finally, the spectrum is equalized. The bottom inset in Fig. 3 illustrates this process for $N = 1$.

Fig. 3. Concept of the optical comb generator by spectral slicing. First, $N$ optical spectra are generated; a mode locked laser (MLL) generates the initial seed spectrum (as shown in green) that is spectrally broadened in a nonlinear element (NLE) to generate $N$ broadened spectra with a large number of carriers (as shown in red). Second, the output spectrum is generated from the $N$ spectra using spectral slicing and composing; for this, the generated spectra are sliced and combined to form a wide spectrum. This wide spectrum is subsequently equalized to obtain the output spectrum of the comb generator. The lower part in this figure shows exemplary spectra for two combined optical spectra.

For current implementations of this scheme with discrete components, the following points have to be kept in mind as they could limit the overall performance and/or usability for different applications:

• A poor filter stop band could limit the performance in several ways.
  1. As a result of a poor filter stop band, multipath interference between carriers from different paths $0, 1, \ldots N$ in Fig. 3 could lead to low-frequency fluctuations in the kHz range due to thermal path length changes. This could result in phase and amplitude fluctuations of the comb lines. This did not pose an issue in our experiments, as the 40 dB filter extinction ratio of the waveshaper was sufficient, as outlined in Section 4.
  2. The filter's extinction ratio often is not sufficient at the edge of the filters when going from stop band to pass band. In our experiment we indeed noticed some small crosstalk in the two lines where spectral slices are merged. However, this multipath interference should not be mistaken for coherent crosstalk in communication systems [29], as the interfering laser lines do not carry any data at that point. And indeed, in our case it did not lead to any signal degradation.
  3. A poor filter stop band could also degrade the overall CNR as a result of summing up noise floors from different paths. This, however, is quite unlikely, as the noise floor of the interfering spectrum would have to be much higher than the noise floor in the desired spectrum, since it is attenuated by the filter extinction. In our experiment this was most definitely not an issue, as we started with a very good CNR and the interfering noise was further suppressed in the filters.
• As the different paths in the comb generator consist of different lengths of fiber, a decorrelation of the different segments of the comb could occur if the lengths are equal to or larger than the coherence length. This could lead to very small frequency shifts between spectral comb slices if the center frequency of the source laser drifts. Due to the short fiber length of 100 m and a coherence length in the order of 100 km, this was no issue in our experiments.

Of course, in an ultimate integrated solution, the best stability could be achieved by miniaturization that includes on-chip nonlinear elements instead of HNLFs.

SECTION IV

## EXPERIMENTAL SETUP OF THE BROADBAND SPECTRALLY SLICED COMB SOURCE

Fig. 4 shows our experimental setup. A passively mode locked laser (MLL—Ergo XG) with a repetition rate of 12.5 GHz and a pulse width below 2 ps generates the initial seed spectrum [Fig. 4(a)].
The spacing of the spectral lines equals the repetition rate, while the pulse width and shape determine the spectral envelope. The chirp of the MLL is adjusted by 5.4 m of standard dispersion compensating fiber (DCF). Different fiber lengths were tested, and the length generating the largest amount of SPM at the output of the HNLF was chosen for the experiment. The spectrum is amplified, and a 5 nm bandpass filter suppresses the amplifier noise. The laser signal is now split in two parts. One part is spectrally broadened by SPM [see Fig. 4(b)] in a highly nonlinear photonic crystal fiber (HNLF). The HNLF is a photonic crystal fiber with the following parameters: length 100 m, dispersion 1.25 ps/nm/km, mode field diameter 2.8 ± 0.5 $\mu\hbox{m}$, attenuation < 9 dB/km, nonlinear coefficient 19 $W^{- 1}\ \hbox{km}^{-1}$. The other part is not broadened; instead, it is just passed through. Finally, the waveshaper slices, combines, and equalizes the broadened and the original MLL spectrum, thus producing the final output spectrum, Fig. 4(c). To show the suppression and estimate possible fluctuations of the power in individual lines, we have performed the measurement shown in Fig. 4(d). The most critical carriers are the carriers at the edges, where we only achieve a suppression of > 16 dB. All other carriers have a suppression > 30 dB and typically 40 dB. This corresponds to a power fluctuation of < 3 dB for the carriers at the edges and a fluctuation of < 0.6 and typically < 0.2 dB for the central carriers. The speed of these fluctuations only depends on linewidth and frequency stability of the MLL and the stability of the interferometer. In our experiment, these fluctuations were extremely slow due to the high stability of the MLL. Fig. 4. Experimental setup of the ultra-broad spectrally sliced comb. The output spectrum of the mode locked laser (MLL—Ergo-XG, repetition rate 12.5 GHz) is displayed in the first inset (a—shown in green). The MLL pulses are amplified and filtered in a 5 nm bandpass filter. The amplified pulses are split, and one path is passed through and the other path is spectrally broadened in a highly nonlinear photonic crystal fiber (b—shown in red—HNLF). A waveshaper (WS) performs the spectral slicing and composing to generate the output spectrum (c—shown in green and red). The suppression of the neighboring spectra at one of the edges is shown (d). The suppression at the carriers next to the edges is > 16 dB. For all other carriers, a suppression > 30 dB, typically 40 dB is achieved. The resulting flat frequency comb shown in Fig. 4(c) has 325 high-quality carriers that have been used for multiple Terabit/s transmission experiments [8], [16] with aggregate data rates of up to 32.5 Tbit/s [8]. In these experiments, we carefully measured data transmission on each and every spectral line of the generated frequency comb. We observed a similar performance for all subcarriers and did not see any impact of multipath interference [8], [16]. SECTION V ## CHARACTERIZATION ### 1. Linewidth Characterization of the Mode Locked Laser The linewidth of the individual modes of the MLL fundamentally limits the linewidth of the carriers in the broadened comb. We therefore measured the linewidths of five modes of the MLL, which we found to be approximately 1 kHz. We were limited to these five modes around 1550 nm due to the fixed wavelength of the narrow linewidth reference laser. 
The linewidth of lasers is often measured using a delayed self-heterodyne technique [30], [31], in which the laser is split and one copy is decorrelated in a fiber delay line before being mixed with the original laser line. However, from our transmission experiments with this comb source [8], we expect an extremely low linewidth. This would result in an excessively long fiber length required for decorrelation [30]. Such a long fiber delay renders self-heterodyne measurements extremely susceptible to acoustic noise and other environmental influences. As such types of noise have significant frequency components in the same order of magnitude as the linewidth of interest, self-heterodyne measurements were not practical. We therefore chose to measure the linewidth through a heterodyne measurement [31]. Normally, one would perform the measurement by combining the laser line under test with a narrow linewidth laser (NLWL) and then detecting the signal in a photodiode. The photocurrent is then analyzed in an RF spectrum analyzer. However, due to the low measurement speed of these analyzers, this scheme requires wavelength tracking to compensate for slow drifts of the laser wavelengths [31]. Therefore, we opted for a new technique.

To measure the linewidth of the laser without the drift of the laser wavelengths, we chose to perform the measurement using real-time acquisition and subsequent spectral analysis of the recorded signal, see Fig. 5(c). In this scheme we used a coherent receiver (Agilent optical modulation analyzer) with a narrow linewidth laser (NLWL) as external local oscillator (LO). The receiver has an electrical bandwidth of 32 GHz per in-phase and per quadrature channel and a sampling rate of 80 GSa/s per channel. The NLWL had a linewidth of approximately 1 kHz, and the down-converted optical pulse train of the MLL was recorded for a time span of 12.5 ms. The advantage of this new technique is that we can restrict the data acquisition to a time interval within which the relative laser frequency drift is significantly smaller than the linewidth. This way there is no need for performing active wavelength tracking even though the frequency might drift over several kHz within a few seconds. This slow frequency drift, however, is not critical for optical communication systems, as the frequency offset compensation in coherent receivers can easily compensate for it.

Fig. 5. Linewidth characterization of the mode locked laser (MLL). As shown in the bottom left subfigure (c), the mode locked laser (MLL) is recorded for 12.5 ms in a coherent receiver. The local oscillator is a narrow linewidth laser (NLWL) with a linewidth of approximately 1 kHz. The upper two subfigures (a), (b) display the power spectral density (shown in black) of one of the MLL lines mixed with the NLWL in an observation interval of 3.35 ms. The full width half maximum (FWHM) of the measured spectrum is less than 2 kHz, while the FWHM of the Lorentzian fit (shown in red) is 2.2 kHz. Side peaks at ±590 kHz are visible in addition to the Lorentzian line shape. The frequency drift dominates the spectrum for a longer record length of 12.5 ms (d). As reference, we also display the Lorentzian fit obtained from the shorter record length.

First, we analyze a subset of $2^{28}$ samples (3.35 ms). Within this short time window, the influence of frequency drift of the two lasers is marginal.
After performing a fast Fourier transform (FFT) on the temporal data, we analyze each of the 5 MLL lines within the receiver bandwidth of 64 GHz centered on 1550 nm. To this end, we process spectra with a width of $2^{15}$ samples ($\sim$9.8 MHz) centered at each MLL line. All spectral lines look alike, so we only show one line in Fig. 5. The full-width half-maximum (FWHM) of the power spectra of all five analyzed lines is found to be 1.9 kHz. The FWHM of a Lorentzian fit, red lines in Fig. 5(a), (b), and (d), is 2.2 kHz. Because the sidelobes at ±590 kHz were included in the fitting process, and because of the limitation in the sampling duration and the 1 kHz linewidth of the reference laser, the inferred linewidth is a worst-case estimate of the actual 3 dB bandwidth. Next, we process the full data set. In this case, the spectra are distorted by the drift of the two lasers, Fig. 5(d). For the recorded 5 MLL lines we observe linewidths between 260 Hz and 1.33 kHz. From these measurements, we conclude that the characterized modes of the MLL have a linewidth of approximately 1 kHz. The sidelobes seen in Fig. 5 were also observed when investigating the MLL spectra with self-heterodyne measurements. However, due to environmental fluctuations, this measurement was not suitable for determining the linewidth.

### 2. CNR and OCNR Characterization of the Comb Source

The signal-to-noise ratio (SNR) and therefore the minimum carrier-power to noise-power-density ratio (CNR) have to be as large as possible in order to allow for reliable transmission of information. We first implemented the comb source to study advanced multiplexing schemes at aggregate data rates beyond 10 Tbit/s [8], [16]. When investigating the noise characteristics in these experiments, we observed circular noise "clouds" around all constellation points. A statistical analysis of the error vector pointing from a nominal constellation point to the momentarily received signal yields the distribution of the noise in the system. From our statistical analysis in [8], we inferred a Gaussian distribution of the corresponding noise. We conclude from these observations that the limiting factor for the signal quality is additive Gaussian noise due to amplified spontaneous emission in the optical amplifiers in the system, and not the phase noise of the optical carriers themselves. In the presence of significant phase noise the distribution of the received constellation points would have been distorted in an angular direction.

Next, we measured the CNR of the optical comb source. The CNR is defined as the ratio of the carrier power $P_{C}$ and the noise power density $N_{0}$ at the position of the carrier:

$$\text{CNR}\ [\text{dBHz}] = 10\log_{10}\!\left(\frac{P_{C}}{N_{0}\,\Delta f}\right). \tag{2}$$

Here, the noise power density $N_{0}$ is normalized to a bandwidth of $\Delta f = 1\ \text{Hz}$. Characterization setup, measurement principle, and results of the CNR measurement are shown in Fig. 6. To measure the noise floor between the carriers with a spacing of 12.5 GHz, we used a high resolution optical spectrum analyzer (Apex AP2050A). The comb generated by spectral slicing has been implemented as described in Section 4 and Fig. 4. The waveshaper (WS) now serves a different purpose. It is programmed to realize a bandpass filter function with a 3 dB bandwidth of 60 GHz. The center of the bandpass coincides with the carrier of interest.
The filtered spectrum is then amplified in a low noise EDFA (not depicted in Fig. 4, noise figure $\text{NF} = 3.7\ \text{dB}$) to increase the power of the measured noise to a value significantly larger than the receiver noise of the spectrum analyzer. To ensure that the measurement is not limited by the electronic noise of the spectrum analyzer, the spectra are measured with three different resolution bandwidths of 50 pm, 20 pm, and 10 pm. The noise level at the carrier frequency is determined by linear interpolation between the noise levels on both sides of the carrier, see illustration in Fig. 6(b). All noise density measurements are normalized to a reference bandwidth of 1 Hz. Noise density measurements with different resolution bandwidths deviated by less than 0.6 dB, therefore electronic noise had only minimal influence on the measurement. Both the amplified seed and the broadened spectrum were present at the waveshaper inputs simultaneously during the CNR measurements.

Fig. 6. Carrier-power to noise-power-density ratio (CNR) characterization of the comb source. (a) For the characterization of the CNR, the generation of spectra is implemented as illustrated in Fig. 4. Instead of equalization and spectral slicing, the waveshaper (WS) extracts 60 GHz out of the generated spectra (see top right) around the carrier to be characterized. A low noise EDFA amplifies the excerpt for characterization in the Apex AP2050A high resolution spectrum analyzer. The CNR is measured as illustrated in (b). The peak of the carrier is measured and the noise is determined by pseudo-linear interpolation between the minima between the measured carrier and its neighbors. The obtained CNR values are shown in the plot (bottom right).

In optical communication systems, a commonly used measure for communication signals is the optical signal-to-noise ratio (OSNR), which is given as the signal power in relation to the noise power density normalized to a bandwidth of 0.1 nm. This measure was established for historic reasons, as 0.1 nm used to be the minimum resolution of optical spectrum analyzers. We therefore also provide the OCNR

$$\text{OCNR}\ [\text{dB}(0.1\,\text{nm})] = 10\log_{10}\!\left(\frac{P_{C}}{N_{0}}\,\frac{\lambda_{c}^{2}}{c\,\Delta\lambda_{\text{ref}}}\right) \tag{3}$$

with the speed of light $c$. Here, the noise power density is normalized to a bandwidth of $\Delta\lambda_{\text{ref}} = 0.1$ nm at a reference wavelength $\lambda_{c} = 1550\ \text{nm}$.

The CNR (and OCNR) of the amplified seed spectrum, see green range in Fig. 6(d), is larger than 133.5 dBHz—reference bandwidth 1 Hz (32.5 dB (0.1 nm)—reference bandwidth 0.1 nm) in the center region of the flattened comb—the green part in Fig. 4(d). It varies between 114 dBHz (13 dB (0.1 nm)) and 140.6 dBHz (39.6 dB (0.1 nm)) over the measured bandwidth of 9.5 nm—here, we also measured the CNR across the spectral lines that were removed by the filter. The CNR of the broadened spectrum—the red part that was used to generate the spectrum in Fig. 4(d) and without the central green part—varies between 126.8 dBHz (25.8 dB (0.1 nm)) and 136 dBHz (35 dB (0.1 nm)). The CNR of the broadened spectrum that was replaced by the amplified seed spectrum was measured as well. We found that the CNR in this region was heavily fluctuating, as one would expect from our discussion in Section 2. As a result we found scattered red dots as seen in Fig. 6(d).
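The conversion between Eq. (2) and Eq. (3) is a fixed offset set by the 0.1 nm reference bandwidth; the snippet below (an illustration, not from the paper) computes that offset and converts a CNR given in dBHz to an OCNR in dB(0.1 nm) at 1550 nm.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ocnr_from_cnr(cnr_dbhz, wavelength_m=1550e-9, ref_bw_m=0.1e-9):
    """Convert CNR [dBHz] (1 Hz reference) to OCNR [dB(0.1 nm)], cf. Eqs. (2)-(3).

    The 0.1 nm reference bandwidth corresponds to a frequency bandwidth
    delta_nu = c * delta_lambda / lambda^2, about 12.5 GHz at 1550 nm.
    """
    delta_nu = C * ref_bw_m / wavelength_m**2
    return cnr_dbhz - 10 * math.log10(delta_nu)

# Example: the paper's minimum broadened-spectrum CNR of 126.8 dBHz
print(f"126.8 dBHz -> {ocnr_from_cnr(126.8):.1f} dB(0.1 nm)")  # about 25.8
```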
It should be mentioned that we sliced the spectrum at the points, where the carriers of the amplified seed and the broadened spectrum had a similar CNR and also a similar power. The plots in Fig. 6(c) and (d) also show that there was a strong correlation between the CNR and the power of the individual lines. One might argue that this measurement scheme might underestimate the CNR for real operating condition, as the central part of the comb is attenuated by up to 25 dB and noise crosstalk from the inner part of the broadened spectrum might occur. This is not the case. The input spectrum to the HNLF has the same power as the input to the waveshaper for the center part of the spectrum. Therefore, the noise levels in between the lines for both spectra are similar. This crosstalk noise is at least 20 dB below the noise of the attenuated original spectrum due to the additional insertion loss of the HNLF (> 5 dB) and the extinction of the waveshaper (> 40 dB). To put these numbers in relation, we provide the required SNR for transmission of two of the most common modulation formats for current and next generation communication systems. The SNR needed to transmit a polarization multiplexed 12.5 GBd QPSK or 16QAM signal with a bit error ratio of $2 \times 10^{- 3}$ is 10 or 16 dB, respectively [32]. The maximum symbol rate for our frequency comb with a spacing of 12.5 GHz is 12.5 GBd at Nyquist channel spacing [8]. For this symbol rate, the SNR is approximately equal to the OCNR. Thus, our frequency comb provides a carrier quality that is sufficient for an SNR larger than 25.8 dB for all carriers. SECTION VI ## CONCLUSION We present a scheme for the generation of broad frequency combs via self-phase modulation. Differently broadened spectra are sliced and combined to a compound spectrum. Unstable, low-power spectral portions with a low CNR are removed and replaced with stable parts of the differently broadened spectra. A mode locked laser with a mode-linewidth of approximately 1 kHz serves as a seed spectrum. The generated frequency comb has 325 optical carriers and covers a bandwidth larger than 4 THz. We measured carrier-power to noise-power-density ratios (CNR) between 118.6 dBHz and 104.6 dBHz with 1 Hz reference bandwidth (39.6 dB(0.1nm) and 25.9 dB(0.1nm) with 0.1 nm reference bandwidth) for the regions of interest. The carrier quality suffices for transmission of multiple terabit/s using 16QAM [8], [16] with aggregate data rates of up to 32.5 Tbit/s [8]. ### ACKNOWLEDGMENT We thank Dr. Marc Eichhorn of the French-German Research Institute of Saint-Louis for fruitful discussions on the measurement of laser linewidth and self-phase modulation, and for his support in the interpretation of the results. Special thanks go to Dr. Erel Granot of the Ariel University Center of Samaria in Israel for his support in the evaluation of the linewidth measurements. ## Footnotes This work was supported in part by the European FP7 projects ACCORDANCE (FP7-ICT-2009-4) and FOX-C (FPC-ICT-2011-8), by the Agilent University Relations Program, by the German BMBF project CONDOR (BMBF 16BP1015), by the Helmholtz International Research School of Teratronics, by the Karlsruhe School of Optics and Photonics (KSOP), and by the German Research Foundation (DFG). Corresponding author: D. Hillerkuss (e-mail: dhillerkuss@ethz.ch). 
## Search Result

### Search Conditions

Years: All Years, for journal 'PTP', author 'K.* Harada' (20 total: 20)

### Search Results

20 articles were found.

1. Progress of Theoretical Physics Vol. 15 No. 6 (1956) pp. 545-560: Elastic Scattering of Alpha-Particles by Heavy Elements (Nobuo Oda and Kichinosuke Harada)
2. Progress of Theoretical Physics Vol. 21 No. 2 (1959) pp. 260-268: Radial Dependence of Imaginary Part of Nuclear Optical Potential (Kichinosuke Harada and Nobuo Oda)
3. Progress of Theoretical Physics Vol. 26 No. 5 (1961) pp. 667-679: Alpha-Particle Reduced Widths in Heavy Nuclei
4. Progress of Theoretical Physics Vol. 26 No. 6 (1961) pp. 1010-1011: Imaginary Part of the Optical Potential for Nuclei $A \sim 60$ and $A \sim 100$ (Atsushi Sugie, Kichinosuke Harada and Hisashi Horie)
5. Progress of Theoretical Physics Vol. 27 No. 2 (1962) pp. 430-432: Alpha-Cluster and Nuclear Surface
6. Progress of Theoretical Physics Vol. 51 No. 5 (1974) pp. 1617-1619: (4) Asymmetric Fission of $^{236}$U (Akira Iwamoto, Shota Suekane, Shuhei Yamaji and Kichinosuke Harada)
7. Progress of Theoretical Physics Vol. 55 No. 1 (1976) pp. 115-130: (4) Potential Energy Surfaces for the Fission of the Actinide Nuclei (Akira Iwamoto, Shuhei Yamaji, Shota Suekane and Kichinosuke Harada)
8. Progress of Theoretical Physics Vol. 60 No. 6 (1978) pp. 1824-1833: (5) Hadron Mass Corrections and Parton Transverse Momentum in Hadronic $\mu$-Pair Production (Kuni Harada, Toshiaki Kaneko, Norisuke Sakai and Osamu Sawada)
9. Progress of Theoretical Physics Vol. 63 No. 3 (1980) pp. 982-992: (5) Application of Perturbative QCD to the Photoproduction of Lepton Pairs
10. Progress of Theoretical Physics Vol. 65 No. 2 (1981) pp. 783-786: (5) Infra-Red Asymptotic Form of the One-Fermion Green's Function in Two-Dimensional Quantum Chromodynamics
11. Progress of Theoretical Physics Vol. 66 No. 4 (1981) pp. 1515-1518: (5) Infra-Red Asymptotic Form of the Gluon Propagator in Four-Dimensional Quantum Chromodynamics
12. Progress of Theoretical Physics Vol. 67 No. 4 (1982) pp. 1255-1257: (5) Infra-Red Asymptotic Form of the Gluon Propagator in Quantum Chromodynamics. II
13. Progress of Theoretical Physics Vol. 67 No. 6 (1982) pp. 1877-1888: (5) Softly Broken Supersymmetric Theories (Kuni Harada and Norisuke Sakai)
14. Progress of Theoretical Physics Vol. 68 No. 4 (1982) pp. 1324-1339: (5) Infrared Asymptotic Forms of the Gluon and Quark Propagators
15. Progress of Theoretical Physics Vol. 78 No. 3 (1987) pp. 675-679: (5) A Consistent Gauss Law in Anomalous Gauge Theories (Koji Harada and Izumi Tsutsui)
16. Progress of Theoretical Physics Vol. 78 No. 4 (1987) pp. 878-885: (5) Revealing the Gauge Freedom in the Path-Integral Formalism (Koji Harada and Izumi Tsutsui)
17. Progress of Theoretical Physics Vol. 93 No. 6 (1995) pp. 1059-1066: (4) Noticeable Change of $p-p$ Spin-Orbit Interaction at Short Distance (Masanori Matsuda, Junichi Nagata, Hiro Yoshino, Kazuo Harada and Soji Ohara)
18. Progress of Theoretical Physics Vol. 113 No. 6 (2005) pp. 1315-1366: (4) Effective Theory Approach to the Skyrme Model and Application to Pentaquarks (Koji Harada, Yohei Mitsunari and Nao-aki Yamashita)
19. Progress of Theoretical Physics Vol. 120 No. 4 (2008) pp. 741-749: (c) Problems in the Derivations of the Renormalization Group Equation for the Low Momentum Nucleon Interactions
20. Apparently Noninvariant Terms of $U(N) \times U(N)$ Nonlinear Sigma Model in the One-Loop Approximation
# Binding thermodynamics of host-guest systems with SMIRNOFF99Frosst 1.0.5 from the Open Force Field Initiative This manuscript (permalink) was automatically generated from slochower/smirnoff-host-guest-manuscript@ad64b44 on October 4, 2019. ## Abstract Designing ligands that bind their target biomolecules with high affinity and specificity is a key step in small-molecule drug discovery, but accurately predicting protein-ligand binding free energies remains challenging. Key sources of errors in the calculations include inadequate sampling of conformational space, ambiguous protonation states, and errors in force fields. Noncovalent complexes between a host molecule with a binding cavity and a drug-like guest molecules have emerged as powerful model systems. As model systems, host-guest complexes reduce many of the errors in more complex protein-ligand binding systems, as their small size greatly facilitates conformational sampling, and one can choose systems that avoid ambiguities in protonation states. These features, combined with their ease of experimental characterization, make host-guest systems ideal model systems to test and ultimately optimize force fields in the context of binding thermodynamics calculations. The Open Force Field Initiative aims to create a modern, open software infrastructure for automatically generating and assessing force fields using data sets. The first force field to arise out of this effort, named SMIRNOFF99Frosst, has approximately one tenth the number of parameters, in version 1.0.5, compared to typical general small molecule force fields, such as GAFF. Here, we evaluate the accuracy of this initial force field, using free energy calculations of 43 α and β-cyclodextrin host-guest pairs for which experimental thermodynamic data are available, and compare with matched calculations using two versions of GAFF. For all three force fields, we used TIP3P water and AM1-BCC charges. The calculations are performed using the attach-pull-release (APR) method as implemented in the open source package, pAPRika. For binding free energies, the root mean square error of the SMIRNOFF99Frosst calculations relative to experiment is 0.9 [0.7, 1.1] kcal/mol, while the corresponding results for GAFF 1.7 and GAFF 2.1 are 0.9 [0.7, 1.1] kcal/mol and 1.7 [1.5, 1.9] kcal/mol, respectively, with 95% confidence ranges in brackets. These results suggest that SMIRNOFF99Frosst performs competitively with existing small molecule force fields and is a parsimonious starting point for optimization. ## Introduction The accurate prediction of protein-ligand binding free energies is a central goal of computational chemistry, with key applications in early stage drug discovery. However, calculations of protein-ligand binding thermodynamics still involve a number of challenging choices, including the choice of empirical force field, specifying the protonation states of ionizable residues, adding hydrogens and otherwise adjusting the initial protein structure, and positioning the candidate ligand in the binding pocket. Predictions of protein-ligand absolute binding free energies have achieved root mean square errors around 1-2 kcal/mol for “well-behaved” systems [1,2,3], with deviations an order of magnitude larger for some protein families with slow degrees of freedom [4]. Retrospective relative free energy calculations on a series of congeneric ligands, using proprietary methods, have also achieved root mean square errors compared to experiment of around 1 kcal/mol [5,6,7]. 
However, it is not possible to determine how much of the prediction error can be attributed to each of the decisions made by the modeler, as opposed to accuracy limitations of the force field. By minimizing the ambiguities involved in modeling protein-ligand complexes, host-guest systems offer a way to isolate and directly probe force field error. A variety of techniques for computing absolute binding free energies have been applied to host-guest systems, and some have shown accuracy as good as ~1 kcal/mol, as highlighted in the recent SAMPL5 and SAMPL6 blind challenges [1,8]. The techniques applied to this problem have included both quantum and classical dynamics, employing a range of energy and solvation models, with some techniques having knowledge-based steps, docking, or clustering [9,10,11,12,13,14,15,16]. The attach-pull-release (APR) method has consistently been ranked among the most reliable techniques for predicting binding thermodynamics of host-guest complexes in blind challenges [8,17]. In APR, the reversible work of transferring the guest from the binding site to solution, via a physical pathway, is computed using a series of umbrella sampling windows. Simulating each window and integrating over the partial derivative of the restraint energy with respect to the restraint target, in each window, is used to generate a potential of mean force along the pulling coordinate, yielding the binding free energy at standard state, ΔG° after applying an analytic correction to account for the effective concentration of the guest during the simulation [18]. Furthermore, subtracting the mean potential energies obtained from long simulations of the solvated bound complex and the solvated dissociated complex yields the binding enthalpy, ΔH [19]. Together, ΔG° and ΔH can be combined to determine the binding entropy at standard state, ΔS°. Thus, APR provides the complete thermodynamic signature of a host-guest binding reaction: ΔG°, ΔH, and −TΔS°. Cyclodextrins, in particular, are ideal host molecules for testing computational methods. They are neutral across a broad pH range, with well-characterized structures [20], and bind both small molecule fragments and drug-like guest molecules with reasonable affinity, from near -1 kcal/mol to about -5 kcal/mol in the present work [21], and with higher affinity for some cyclodextrin derivatives [21]. Moreover, cyclodextrins are stable in a wide range of experimental conditions and their high millimolar aqueous solubility allows a range of different experimental techniques to be used to measure their binding to guests [22]. Here, we report the calculation of binding free energies, enthalpies, and entropies of small guest molecules with functional groups often found in drugs to α- and β-cyclodextrin host molecules, converged to within 0.1 kcal/mol statistical uncertainty, using the APR method. These calculations offer an opportunity to benchmark—and ultimately optimize—new and existing force fields. The first force field produced by the Open Force Field Initiative, SMIRNOFF99Frosst v1.0.5, was released in late 2018 [23,24]. It is derived from AMBER parm99 [25] and Merck’s parm@Frosst [26]. 
Instead of relying on atom types to assign force field parameters to compounds, which is the procedure followed by the LEaP program used to assign parameters to molecules in AmberTools [27], SMIRNOFF99Frosst and the Open Force Field Toolkit use separately defined local chemical environments for each atom, bond, angle, and dihedral, to apply force field parameters specified by SMIRKS strings [28]. This process simplifies and effectively uncouples the parameters for each term in the force field. For example, the addition of a new Lennard-Jones parameter does not require creating a new atom type that forces the addition of new bonded, angle, and dihedral parameters. This approach leads to a much leaner force field specification; there are over 3000 lines of parameters in GAFF v1.7 [29], over 6000 lines of parameters in GAFF v2.1, and just 322 lines of parameters in SMIRNOFF99Frosst v1.0.5 [30]. It is important to note that SMIRNOFF99Frosst is not yet optimized at this stage, only compressed; subsequent work will focus on optimizing SMIRNOFF99Frosst and other SMIRNOFF-family force fields to fit quantum and experimental data [31]. In the following text, SMIRNOFF99Frosst refers to version 1.0.5 of the force field, unless otherwise noted. Thus far, SMIRNOFF99Frosst has been tested on hydration free energies of 642 small molecules and the densities and dielectric constants of 45 pure organic liquids [23]. Here, we benchmark SMIRNOFF99Frosst, GAFF v1.7, and GAFF v2.1 using noncovalent binding thermodynamics for 43 host-guest complexes (including two hosts and 33 unique guests) for which experimental thermodynamics data are available, representing three different functional group moieties. We first compare the results of SMIRNOFF99Frosst with those of the conventional force fields GAFF v1.7 and GAFF v2.1, based on calculations of experimental binding free energies, enthalpies, and entropies. We then characterize the differences in host conformations sampled by SMIRNOFF99Frosst compared to the other two force fields. ## Methods ### Choice of host-guest systems In this study, we report the binding thermodynamics of 43 host-guest complexes (Figure 1 and Table 1) computed using three different force fields. The complexes consist of either α- or β-cyclodextrin as host molecules and a series of small molecule guests containing ammonium, carboxylate, or cyclic alcohol functional groups. The cyclodextrins in the current study are cyclic polymers consisting of six (αCD) or seven (βCD) glucose monomers in the shape of a truncated cone. The equilibrium constants and standard molar enthalpies of binding for these 43 complexes have been measured using isothermal titration calorimetry (ITC) at pH = 6.90 and T = 298 K, and nuclear magnetic resonance spectroscopy (NMR) at pH = 7.0 and T = 298 ± 1 K [32]. Calculations on these host-guest systems have been performed previously [33], and, as in the prior study, we considered only a single stereoisomer for the 1-methylammonium guests because it was not clear whether a mixture or a pure solution was used in Rekharsky, et al. [32], and the ΔG° difference between each stereoisomer is expected to be < 0.1 kcal/mol [34]. Table 1: The 43 unique host-guest combinations used in this study. The formal charge of each guest is listed in brackets. The guest names correspond to Tables 1 and 2 in Rekharsky et al. [32]. a Only the R enantiomer was considered. b Only the S enantiomer was considered. 
SMILES strings are written as canonical isomeric SMILES as implemented in the OpenEye OEChem Toolkit version 2.0.2 [35].

| Host-guest ID | Host | Guest | Charge | SMILES |
|---|---|---|---|---|
| a-bam | αCD | 1-butylamine | +1 | CCCC[NH3+] |
| a-nmb | αCD | n-methylbutylamine | +1 | CCCC[NH2+]C |
| a-mba | αCD | 1-methylbutylamine^a | +1 | CCC[C@@H](C)[NH3+] |
| a-pam | αCD | 1-pentylamine | +1 | CCCCC[NH3+] |
| a-ham | αCD | 1-hexylamine | +1 | CCCCCC[NH3+] |
| a-nmh | αCD | n-methylhexylamine | +1 | CCCCCC[NH2+]C |
| a-mha | αCD | 1-methylhexylamine^a | +1 | CCCCC[C@@H](C)[NH3+] |
| a-hpa | αCD | 1-heptylamine | +1 | CCCCCCC[NH3+] |
| a-mhp | αCD | 1-methylheptylamine^b | +1 | CCCCCC[C@H](C)[NH3+] |
| a-oam | αCD | 1-octylamine | +1 | CCCCCCCC[NH3+] |
| b-ham | βCD | 1-hexylamine | +1 | CCCCCC[NH3+] |
| b-mha | βCD | 1-methylhexylamine^a | +1 | CCCCC[C@@H](C)[NH3+] |
| b-oam | βCD | 1-octylamine | +1 | CCCCCCCC[NH3+] |
| a-cbu | αCD | cyclobutanol | 0 | C1CC(C1)O |
| a-cpe | αCD | cyclopentanol | 0 | C1CCC(C1)O |
| a-chp | αCD | cycloheptanol | 0 | C1CCCC(CC1)O |
| a-coc | αCD | cyclooctanol | 0 | C1CCCC(CCC1)O |
| b-cbu | βCD | cyclobutanol | 0 | C1CC(C1)O |
| b-cpe | βCD | cyclopentanol | 0 | C1CCC(C1)O |
| b-mch | βCD | 1-methylcyclohexanol | 0 | CC1(CCCCC1)O |
| b-m4c | βCD | cis-4-methylcyclohexanol | 0 | CC1CCC(CC1)O |
| b-m4t | βCD | trans-4-methylcyclohexanol | 0 | CC1CCC(CC1)O |
| b-chp | βCD | cycloheptanol | 0 | C1CCCC(CC1)O |
| b-coc | βCD | cyclooctanol | 0 | C1CCCC(CCC1)O |
| a-but | αCD | butanoate | -1 | CCCC(=O)[O-] |
| a-pnt | αCD | pentanoate | -1 | CCCCC(=O)[O-] |
| a-hex | αCD | hexanoate | -1 | CCCCCC(=O)[O-] |
| a-hx2 | αCD | trans-2-hexenoate | -1 | CCC/C=C/C(=O)[O-] |
| a-hx3 | αCD | trans-3-hexenoate | -1 | CC/C=C/CC(=O)[O-] |
| a-hep | αCD | heptanoate | -1 | CCCCCCC(=O)[O-] |
| a-hp6 | αCD | 6-heptenoate | -1 | C=CCCCCC(=O)[O-] |
| a-oct | αCD | octanoate | -1 | CCCCCCCC(=O)[O-] |
| b-pnt | βCD | pentanoate | -1 | CCCCC(=O)[O-] |
| b-hex | βCD | hexanoate | -1 | CCCCCC(=O)[O-] |
| b-hep | βCD | heptanoate | -1 | CCCCCCC(=O)[O-] |
| b-ben | βCD | benzoate | -1 | c1ccc(cc1)C(=O)[O-] |
| b-pha | βCD | phenylacetate | -1 | c1ccc(cc1)CC(=O)[O-] |
| b-mp3 | βCD | 3-methylphenylacetate | -1 | Cc1cccc(c1)CC(=O)[O-] |
| b-mp4 | βCD | 4-methylphenylacetate | -1 | Cc1ccc(cc1)CC(=O)[O-] |
| b-mo3 | βCD | 3-methoxyphenylacetate | -1 | COc1cccc(c1)CC(=O)[O-] |
| b-mo4 | βCD | 4-methoxyphenylacetate | -1 | COc1ccc(cc1)CC(=O)[O-] |
| b-pb3 | βCD | 3-phenylbutanoate | -1 | C[C@H](CC(=O)[O-])c1ccccc1 |
| b-pb4 | βCD | 4-phenylbutanoate | -1 | c1ccc(cc1)CCCC(=O)[O-] |

### Application of force field parameters

We sought to compare force fields directly and therefore attempted to minimize additional differences among the simulations with each force field. In all simulations, we applied AM1-BCC [36,37] partial atomic charges to both the host and guest molecules using the antechamber program in AmberTools16 [27]. The Open Force Field Toolkit provides a mechanism for user-specified charges; if no charges are supplied, the toolkit will generate AM1-BCC charges, which is the recommended charge scheme. The host charges were calculated using a single glucose molecule with methoxy caps on the O1 and O4 alcohols (Figure 2); each glucose monomer in the cyclodextrin polymer has identical charges. After removing the capping atoms, the net charge of the glucose monomer was -0.064 e. To ensure neutrality of the glucose monomer, the charge remainder was proportionally distributed across all atoms according to the magnitude of the partial charge for each atom. The minimum and maximum charge adjustments were 0.000684 and 0.007245 e, respectively. Using the entire αCD molecule as an input to antechamber results in partial atomic charges that differ by at most 0.02 e, compared to using a single monomer, and requires reducing the maximum path length used to determine the equivalence of atomic charges (Figure 16). We used TIP3P water [38] and Joung-Cheatham monovalent ion parameters [39] in each simulation set.
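The proportional redistribution of the residual monomer charge described above can be written in a few lines. The sketch below is an illustration of that scheme, not the paper's actual workflow; it spreads a net-charge remainder over the atoms in proportion to the magnitude of their partial charges.

```python
import numpy as np

def neutralize_by_magnitude(charges):
    """Distribute the net-charge remainder over atoms in proportion to |q_i|.

    Returns adjusted charges that sum to zero while preserving the relative
    weighting of the original AM1-BCC partial charges.
    """
    charges = np.asarray(charges, dtype=float)
    remainder = charges.sum()                     # e.g. -0.064 e for the capped glucose
    weights = np.abs(charges) / np.abs(charges).sum()
    return charges - remainder * weights

# Toy example with made-up partial charges (units of e).
q = np.array([0.12, -0.38, 0.45, -0.25, 0.02])
q_fixed = neutralize_by_magnitude(q)
print(q_fixed.sum())        # ~0.0, i.e. neutral
print(q_fixed - q)          # per-atom adjustments, largest for largest |q|
```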
GAFF v1.7 bond, angle, torsion, and Lennard-Jones parameters were applied using the tleap program distributed with AmberTools16. GAFF v2.1 parameters were applied in an identical manner to the GAFF v1.7 parameters, using the tleap program distributed with AmberTools18 and replacing leaprc.gaff with leaprc.gaff2 in the tleap input file. To apply SMIRNOFF99Frosst parameters, we followed a multistep process, beginning with the AMBER-format .prmtop and .inpcrd GAFF v1.7 files. The host and guest molecules were parameterized with version 0.0.3 of the Open Force Field Toolkit, which uses the OpenEye OEChem Toolkit version 2.0.2 [35] to read molecular coordinates and topologies and create a serialized representation of the molecular system, together with version 1.0.5 of the SMIRNOFF99Frosst force field, specified in version 1.0 of the SMIRNOFF format. Once parameterized with SMIRNOFF99Frosst, the topology and coordinates for the host-guest complex were combined with the solvent and ions, which retained their TIP3P water parameters and Joung-Cheatham ion parameters, respectively. This was accomplished by the ParmEd program [40], which enables saving the OpenMM system created by the Open Force Field Toolkit in AMBER-format .prmtop and .inpcrd files. Ongoing updates to the Open Force Field Toolkit may result in changes to how this procedure is carried out in the future.

### Thermodynamic calculation

We used the attach-pull-release (APR) method, as implemented in the open source package pAPRika version 0.0.3, to calculate absolute binding free energies. A complete description of the APR method has been provided in the literature [13,17,19,41]. The attachment and release phases each consisted of 15 independent windows. During the attachment phase, the force constants on the host and guest are scaled by a $\lambda$ parameter that goes from $\lambda = 0$, at which point all restraints are turned off, to $\lambda = 1$, at which point all restraints are at their maximum force constant. The $\lambda$ windows are more densely spaced where the force constant is smaller, to improve sampling along highly curved regions of the potential of mean force. These restraints include a set of distance, angle, and torsion restraints that orient the host and guest along the long axis of the simulation box. A separate set of conformational restraints was applied between neighboring glucose units of the cyclodextrin to minimize deformations of the host molecule as the guest molecule is pulled out. The conformational restraints were applied along the pseudodihedrals O5n–C1n–O1n–C4n+1 and C1n–O1n–C4n+1–C5n+1 to improve convergence and sampling of the bound state (see Figure 2 for atom names). To further improve sampling of weak-binding guests, we applied a hard wall restraint that confined the guest molecule to within 12.3 Å of αCD or 13.5 Å of βCD during the bound state. The release phase is the conceptual reverse of the attach phase, in which the conformational restraints on the host are gradually turned off ($\lambda = 1 \rightarrow 0$) in the absence of the guest. This explicit release phase is performed once for αCD and once for βCD, as it is independent of the guest molecule. Finally, an analytic correction is performed to compute the work of moving the guest from the restricted volume enforced by the APR restraints to standard state at 1 M concentration. The pulling phase consisted of 45 independent, equally spaced windows.
During the pulling phase, the $$\lambda$$ parameter represents the target value of a distance restraint with a constant force constant. This target distance is increased uniformly in 44 increments of 0.4 Å, yielding windows that separate the host and guest by 18 Å over the course of the calculation. Due to the asymmetry of the primary and secondary alcohols of cyclodextrin (Figure 18), as well as of the small molecule guests, there are generally two distinct binding poses that do not interconvert on the simulation timescale. To account for this effect, we separately compute the binding free energy and enthalpy for each orientation [13] and combine the results to produce a single value for each host-guest combination using the following equation:

$$\Delta G^\circ = -RT \ln\left(\exp(-\beta \Delta G_\text{primary}^\circ) + \exp(-\beta \Delta G_\text{secondary}^\circ)\right).$$

The total binding enthalpy is weighted by both the binding enthalpy and the binding free energy in each orientation using the following equation:

$$\Delta H = \frac{\Delta H_\text{primary} \exp(-\beta \Delta G_\text{primary}) + \Delta H_\text{secondary} \exp(-\beta \Delta G_\text{secondary})}{\exp(-\beta \Delta G_\text{primary}) + \exp(-\beta \Delta G_\text{secondary})}.$$

In this manuscript, we refer to calculations where the guest functional group in the bound state is at the primary face of cyclodextrin with a -p suffix, and calculations where it is at the secondary face of cyclodextrin with a -s suffix. Thermodynamic integration [42] and the multistate Bennett acceptance ratio estimator (MBAR) [43] were used to compute the binding free energy (ΔG°). The results presented in the main text are those analyzed using thermodynamic integration, to be consistent with the prior analysis presented in Henriksen, et al. [33]. The binding enthalpy (ΔH) was computed as the difference in mean potential energy between the bound state (in the absence of any restraints) and the unbound state (where the guest is held far away from the host, but the conformational restraints on the host are disabled). The binding entropy (ΔS°) was computed by subtraction using ΔG° and ΔH.
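The two equations above can be applied directly once ΔG° and ΔH are available for each orientation. The following is a minimal sketch, assuming a temperature of 298.15 K and illustrative (not actual) input values; it is not the analysis code used for this work.

```python
import numpy as np

R = 0.0019872  # gas constant, kcal/(mol K)
T = 298.15     # K, assumed temperature

def combine_orientations(dG_p, dG_s, dH_p, dH_s):
    """Combine primary (-p) and secondary (-s) results into a single binding
    free energy and enthalpy using Boltzmann weights exp(-beta * dG)."""
    beta = 1.0 / (R * T)
    w_p, w_s = np.exp(-beta * dG_p), np.exp(-beta * dG_s)
    dG = -np.log(w_p + w_s) / beta
    dH = (dH_p * w_p + dH_s * w_s) / (w_p + w_s)
    return dG, dH

# Illustrative numbers in kcal/mol (not taken from the tables in this work)
print(combine_orientations(-3.5, -2.0, -4.0, -1.0))
```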
### Simulations

Simulations were performed with the pmemd.cuda module of AMBER 16 (for calculations with the GAFF v1.7 force field) and AMBER 18 (for calculations with the GAFF v2.1 and SMIRNOFF99Frosst force fields) molecular dynamics software [27,44]. Each window for each system was independently solvated and simulated. Simulation data for the host-guest complexes using GAFF v1.7 were taken from Henriksen, et al. [33] and are described in additional detail therein. Solvation consisted of 2000 TIP3P waters for the αCD systems and 2210 waters for the βCD systems in an orthorhombic box. The host and guest were oriented via non-interacting dummy atoms along the simulation box's long $$z$$ axis, to allow use of an elongated periodic box that reduces the amount of solvent required for the calculation. Each simulation contained enough Na+ or Cl- ions to neutralize the host-guest complex and an additional 50 mM NaCl to match the experimental conditions in [32]. In the GAFF simulations, hydrogen mass repartitioning [45] was used to increase the mass of hydrogen atoms by a factor of 3 and decrease the mass of the bonded heavy atoms proportionally, keeping the total molecular weight of each molecule constant and enabling a simulation timestep of 4 fs. Hydrogen mass repartitioning produces negligible changes in computed thermodynamic observables for other cyclodextrin-guest calculations, with deviations within statistical uncertainty [13]. Equilibration consisted of 50,000 steps of energy minimization, 100 ps of heating from 0 to 300 K, and then 2000 ps of additional NPT simulation. AMBER's Langevin thermostat with a collision frequency of 1 ps$^{-1}$, the Monte Carlo barostat, a nonbonded cutoff of 9 Å, and default PME parameters were used for the NPT simulations. An isotropic analytic correction to the Lennard-Jones interactions was applied beyond the cutoff distance [46]. Production NPT simulations were run for a minimum of 2.5 ns and a maximum of 50 ns per window, except for the windows used to calculate the enthalpy, which were each simulated for 1 μs. In the GAFF v1.7 and GAFF v2.1 simulations, the exact length of each window's simulation was determined by the uncertainty in the work done in each λ window. In particular, for restraint energy $$U$$ in $$\lambda$$ window $$i$$, we define the instantaneous SEM of $$\partial U/\partial \lambda_i$$ as $$\sigma(\lambda_i)$$, and each window (except for the windows used to calculate ΔH) was simulated until the value of $$w(\lambda_i)$$,

$$
w(\lambda_i) =
\begin{cases}
\sigma(\lambda_i) \, \frac{\lambda_{i+1}}{2} & i = 0 \\
\sigma(\lambda_i) \, \frac{\lambda_{i+1} - \lambda_{i-1}}{2} & i \in [1, N-1] \\
\sigma(\lambda_i) \, \frac{1 - \lambda_{i-1}}{2} & i = N
\end{cases}
\tag{1}
$$

fell below a threshold of 0.02 kcal/mol during the attach phase and 0.1 kcal/mol during the pull phase. The second factor in Equation 1 scales the uncertainty in the work in each λ window by the nonuniform spacing of the λ windows, so that $$w(\lambda_i)$$ is the approximate contribution of window $$\lambda_i$$ to the overall PMF uncertainty. Excluding the first and last window, the average window length was 11.8 ns and 5.39 ns for the GAFF v1.7 and GAFF v2.1 simulations, respectively. We took a more direct approach with the SMIRNOFF99Frosst simulations, due to changes in pAPRika that allowed us to target uncertainties of the same magnitude as in the GAFF simulations, by running each window for a constant length of 10 ns, except for the first and last window, which ran for 1 μs to converge ΔH for all three force fields.

### Statistics

The uncertainty in the work done by each restraint in each simulation window, σ(λi), was estimated using blocking analysis [47], in a manner which has been shown to yield good agreement with uncertainties obtained from independent replicates [13]. In particular, rather than looking for a plateau in the SEM as the size of the blocks increased, as originally described by Flyvbjerg and Petersen [47], we instead use the largest standard error of the mean (SEM) obtained for any block size. This avoids the requirement of detecting a plateau and yields a more conservative estimate; i.e., a larger SEM. Then, using Gaussians with the mean and SEM of $$\frac{\partial U}{\partial \lambda}$$ in each window, new values of $$\frac{\partial U}{\partial \lambda}$$ were bootstrap sampled for each window 100,000 times and combined to create artificial data for 100,000 notional APR calculations. These were integrated across all windows with splines to generate 100,000 estimates of ΔG°. We report the mean and standard deviation of these 100,000 results as the final mean and its SEM. The SEM of ΔH was computed from the SEM of the total potential energy in each end point window, estimated using blocking analysis, added in quadrature. The standard error of the mean of −TΔS° was calculated using the uncertainties in ΔG° and ΔH added in quadrature.
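A compact sketch of the two uncertainty ingredients described above is given below: a blocking estimate of the SEM that takes the largest value over all block sizes, and the per-window contribution w(λi) of Equation 1 for nonuniformly spaced λ values. The λ values and threshold usage are illustrative; this is not the pAPRika implementation.

```python
import numpy as np

def blocking_sem(x):
    """Conservative SEM: the largest standard error of the block means over all
    block sizes that leave at least two blocks (Flyvbjerg-Petersen style blocking)."""
    x = np.asarray(x, float)
    sems = []
    for b in range(1, len(x) // 2 + 1):
        n_blocks = len(x) // b
        means = x[: n_blocks * b].reshape(n_blocks, b).mean(axis=1)
        sems.append(means.std(ddof=1) / np.sqrt(n_blocks))
    return max(sems)

def window_weights(lambdas, sems):
    """w(lambda_i) from Equation 1: the SEM of dU/dlambda in each window scaled
    by that window's share of the (possibly nonuniform) lambda spacing."""
    lam, sem = np.asarray(lambdas, float), np.asarray(sems, float)
    w = np.empty_like(lam)
    w[0] = sem[0] * lam[1] / 2.0
    w[1:-1] = sem[1:-1] * (lam[2:] - lam[:-2]) / 2.0
    w[-1] = sem[-1] * (1.0 - lam[-2]) / 2.0
    return w

# Hypothetical attach-phase spacing and SEMs; windows with w > 0.02 kcal/mol would be extended.
lams = np.array([0.0, 0.01, 0.05, 0.10, 0.20, 0.35, 0.50, 0.75, 1.00])
sems = np.full(lams.shape, 0.05)
print(window_weights(lams, sems) > 0.02)
print(blocking_sem(np.random.default_rng(0).normal(size=4000)))
```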
For each force field, we computed the root mean squared error (RMSE), mean signed error (MSE), coefficient of determination (R²), Kendall's rank correlation coefficient (τ), and the slope and intercept of the linear regressions of the computed properties against the experimental values. The R² values for the subsets of ligands in each class are also reported in the bottom right corner of each graph. Comparisons with experiment have 43 measurements, for the 43 unique host-guest complexes listed in Table 1; comparisons between force fields have 86 data points, representing the calculations for the two orientations of the guest, "p" and "s", in the binding site (see above). The overall RMSE and R² statistics for each comparison are reported as the sample mean estimated using all the data, with the 95% confidence interval, obtained from bootstrapping over the set of complexes, in brackets.

## Results

This results section is organized as follows. We first present a comparison of binding free energies (ΔG°) and binding enthalpies (ΔH) of small molecule guests to α-cyclodextrin (αCD) and β-cyclodextrin (βCD), computed with SMIRNOFF99Frosst and two versions of the General AMBER Force Field (GAFF [29]). We then detail how the conformational preferences of the host molecules change between force fields, and seek insight into key parameter differences between SMIRNOFF99Frosst and GAFF and their effects.

### Comparison with experimental binding free energies, enthalpies, and entropies

#### Binding free energies

Despite having far fewer numerical parameters, SMIRNOFF99Frosst does about as well as GAFF v1.7 and arguably better than GAFF v2.1 at replicating binding free energies measured by ITC or NMR. Thus, SMIRNOFF99Frosst yields an overall ΔG° RMSE from experiment of 0.9 [0.7, 1.1] kcal/mol across the 43 host-guest systems, compared to the statistically indistinguishable 0.9 [0.7, 1.1] kcal/mol for GAFF v1.7, and distinct from 1.7 [1.5, 1.9] kcal/mol for GAFF v2.1 (where the 95% confidence interval is written in brackets), as detailed in Figure 3 and Tables 2 and 5. On the whole, GAFF v1.7 agrees well with SMIRNOFF99Frosst (Figure 21), as the RMSE and MSE between their results are 0.8 [0.6, 1.0] kcal/mol and -0.5 [-0.3, -0.7] kcal/mol, respectively. This result is not surprising, as GAFF v1.7 and SMIRNOFF99Frosst may be considered cousin force fields with a common ancestor in AMBER's parm99. Both SMIRNOFF99Frosst and GAFF v1.7 systematically underestimate the binding affinity for cyclic alcohols, with MSEs of 0.7 [0.2, 1.2] kcal/mol and 0.9 [0.4, 1.4] kcal/mol, respectively. In contrast, GAFF v2.1 significantly overestimates the binding of all compounds, leading to MSE and RMSE values of -1.6 [-1.7, -1.4] kcal/mol and 1.6 [1.4, 1.8] kcal/mol, respectively. However, GAFF v2.1 has a particularly good correlation with experiment across all functional group classes, with R² of 0.8 [0.6, 0.9], compared with 0.3 [0.1, 0.6] and 0.5 [0.3, 0.7] for SMIRNOFF99Frosst and GAFF v1.7, respectively. This may trace to differences in the host conformations sampled by GAFF v2.1, which indicate a more consistently open cyclodextrin "pocket" for guests to bind (Figure 14), as detailed below.
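The summary statistics quoted in this section can be reproduced along the following lines. This is a sketch, not the authors' analysis code; the sign convention MSE = mean(calculated − experimental) is assumed here because it is consistent with the tables below, and scipy is used for the regression and rank correlation.

```python
import numpy as np
from scipy import stats

def error_metrics(calc, expt):
    """RMSE, MSE (calc - expt), R^2, regression slope/intercept, and Kendall's tau."""
    calc, expt = np.asarray(calc, float), np.asarray(expt, float)
    fit = stats.linregress(expt, calc)
    tau, _ = stats.kendalltau(expt, calc)
    return {"RMSE": np.sqrt(np.mean((calc - expt) ** 2)),
            "MSE": np.mean(calc - expt),
            "R2": fit.rvalue ** 2,
            "Slope": fit.slope,
            "Intercept": fit.intercept,
            "Tau": tau}

def bootstrap_ci(calc, expt, key, n_boot=10_000, seed=0):
    """95% confidence interval of one metric, bootstrapping over the complexes."""
    calc, expt = np.asarray(calc, float), np.asarray(expt, float)
    rng = np.random.default_rng(seed)
    samples = [error_metrics(calc[idx], expt[idx])[key]
               for idx in rng.integers(0, len(calc), size=(n_boot, len(calc)))]
    return np.percentile(samples, [2.5, 97.5])
```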
#### Binding enthalpies and entropies

In the case of binding enthalpies (Figure 3), SMIRNOFF99Frosst agrees best with experiment (RMSE = 1.8 [1.4, 2.3] kcal/mol), followed by GAFF v2.1 (RMSE = 2.2 [1.8, 2.7] kcal/mol), and then GAFF v1.7 (RMSE = 2.5 [2.0, 3.0] kcal/mol). In some cases, GAFF v1.7 underestimates ΔH by over 3 kcal/mol and up to 5 kcal/mol (b-chp). For binding entropies, GAFF v2.1 has the lowest RMSE relative to experiment (RMSE = 1.47 [1.1, 2.0] kcal/mol), followed by SMIRNOFF99Frosst (RMSE = 1.9 [1.5, 2.3] kcal/mol), and GAFF v1.7 (RMSE = 2.2 [1.7, 2.7] kcal/mol) (Figure 17). All force fields perform poorly at replicating −TΔS° for carboxylate guests, with RMSEs ranging from 1.8 [0.7, 3.2] kcal/mol (GAFF v2.1) to 3.0 [2.1, 3.9] kcal/mol (GAFF v1.7). All force fields also underestimate the entropic component of binding of a-coc (αCD:cyclooctanol) relative to experiment, by 3-5 kcal/mol. This is likely due to the poor fit of cyclooctanol inside the cavity of αCD, particularly in the primary orientation (Figure 4). Overall, SMIRNOFF99Frosst and GAFF v1.7 yield rather different binding enthalpies (RMSE = 1.6 [1.3, 2.0] kcal/mol) and entropies (RMSE = 1.6 [1.2, 2.0] kcal/mol). The deviations between SMIRNOFF99Frosst and GAFF v2.1 are higher for ΔH (RMSE = 3.0 [2.5, 3.4] kcal/mol) and lower for −TΔS° (RMSE = 1.9 [1.6, 2.2] kcal/mol).

Table 2: Predicted thermodynamic properties for each force field relative to experiment, in kcal/mol.

|  | RMSE | MSE | R² | Slope | Intercept | Tau |
|---|---|---|---|---|---|---|
| ΔG° SMIRNOFF99Frosst | 0.91 [0.71, 1.13] | -0.01 [-0.28, 0.26] | 0.34 [0.12, 0.56] | 0.49 [0.26, 0.72] | -1.55 [-0.80, -2.29] | 0.40 [0.57, 0.23] |
| ΔG° GAFF v1.7 | 0.88 [0.72, 1.08] | 0.46 [0.23, 0.69] | 0.54 [0.33, 0.71] | 0.69 [0.47, 0.91] | -0.48 [0.22, -1.16] | 0.52 [0.65, 0.38] |
| ΔG° GAFF v2.1 | 1.68 [1.51, 1.85] | -1.56 [-1.74, -1.37] | 0.82 [0.61, 0.92] | 1.19 [0.96, 1.34] | -1.00 [-0.52, -1.62] | 0.73 [0.82, 0.61] |
| ΔH SMIRNOFF99Frosst | 1.85 [1.41, 2.30] | 0.76 [0.26, 1.28] | 0.44 [0.21, 0.66] | 0.85 [0.54, 1.19] | 0.41 [1.55, -0.50] | 0.53 [0.69, 0.34] |
| ΔH GAFF v1.7 | 2.54 [2.08, 3.00] | 1.84 [1.31, 2.37] | 0.39 [0.17, 0.62] | 0.80 [0.47, 1.18] | 1.36 [2.67, 0.31] | 0.50 [0.65, 0.32] |
| ΔH GAFF v2.1 | 2.21 [1.77, 2.65] | -1.64 [-2.10, -1.20] | 0.75 [0.58, 0.87] | 1.38 [1.15, 1.63] | -0.69 [0.16, -1.43] | 0.67 [0.79, 0.52] |
| −TΔS° SMIRNOFF99Frosst | 1.90 [1.49, 2.32] | -0.78 [-1.29, -0.24] | 0.40 [0.14, 0.63] | 0.90 [0.51, 1.29] | -0.83 [-0.34, -1.34] | 0.33 [0.50, 0.13] |
| −TΔS° GAFF v1.7 | 2.21 [1.74, 2.68] | -1.38 [-1.90, -0.86] | 0.43 [0.16, 0.68] | 0.95 [0.54, 1.38] | -1.41 [-0.96, -1.89] | 0.32 [0.50, 0.10] |
| −TΔS° GAFF v2.1 | 1.80 [0.68, 3.19] | -0.00 [-0.98, 1.27] | 0.48 [0.00, 0.97] | 1.13 [-0.22, 1.96] | 0.08 [1.14, -1.79] | 0.46 [0.82, -0.02] |

Analysis of the simulations with MBAR produces very slightly improved results for SMIRNOFF99Frosst ΔG°, ΔH, and −TΔS° compared to experiment (Table 8), but the differences do not appear to be statistically significant.

### Guest preferences for binding in the primary or secondary orientation

The asymmetry of the hosts and the guests leads to two distinct bound states for each host-guest pair: one where the functional group of the guest sits at the primary face of the host and another where it sits at the secondary face (Figure 18). The difference in binding free energy between these two orientations (ΔΔGorientation) can be large, at around 2 kcal/mol for SMIRNOFF99Frosst and GAFF v1.7 and 5 kcal/mol for GAFF v2.1.
SMIRNOFF99Frosst predicts the largest ΔΔGorientation for the ammonium-containing butylamine and pentylamine guests with αCD (4), with the primary orientation being more favorable. Thus, the cationic ammonium groups are predicted to prefer the narrower primary portal of the host. GAFF v1.7 predicts a large ΔΔGorientation for the cyclic alcohols cyclooctanol and cycloheptanol, with the secondary orientation having a more favorable ΔG. When GAFF v2.1 is used, the differences between primary and secondary binding range even higher, greater than 4 kcal/mol, for αCD with these two guests. This effect is due, at least in part, to steric clashes in the bound state for very large guests (Figure 4, D), especially in the narrow primary cavity of the smaller αCD. It is worth noting that the experimental measurement for the a-coc (αCD:cyclooctanol) complex has very large uncertainties associated with both ΔG° and ΔH.

### Comparison of results for αCD versus βCD

It is of interest to compare the results between αCD and βCD by focusing on the ten guests for which experimental data are available with both hosts. The SMIRNOFF99Frosst and GAFF v1.7 force fields both yield somewhat more accurate binding affinities for αCD (RMSE = 0.8 [0.5, 1.1] kcal/mol) than for βCD (RMSE = 1.0 [0.8, 1.3] kcal/mol), whereas no clear pattern is observed for GAFF v2.1 (Figure 24). Much as seen for the two orientations of the guest molecules within each host, GAFF v2.1 yields relatively large differences in predicted free energies for each guest between the two hosts, but it does not seem to be more accurate for either host relative to the other. The SMIRNOFF99Frosst force field yields rather accurate binding free energies for the ammonium guests (MSE = -0.1 [-0.5, 0.3] kcal/mol and RMSE = 0.7 [0.4, 1.1] kcal/mol) with both αCD and βCD (Figure 6 and Table 9). It also replicates the experimental trends that shorter-chain molecules bind less strongly, and that each guest binds more strongly to αCD than to βCD. The results are also reasonably good for the cyclic alcohols (MSE = 0.7 [0.2, 1.2] kcal/mol and RMSE = 1.1 [0.7, 1.6] kcal/mol) (Figure 7 and Table 11), though the predicted affinities for αCD are uniformly too weak, while those for βCD are mostly too strong. Finally, SMIRNOFF99Frosst yields rather accurate binding affinities for the carboxylate guests with both αCD and βCD (MSE = -0.4 [-0.7, 0] kcal/mol and RMSE = 0.9 [0.6, 1.2] kcal/mol) (Figure 8 and Table 10). GAFF v1.7 tends to predict slightly weaker binding than SMIRNOFF99Frosst, whereas GAFF v2.1 predicts much stronger binding for all classes of guest compounds (Figures 25, 26, and 27).

### Differences in cyclodextrin force field parameters between SMIRNOFF99Frosst and GAFF

We now summarize differences among the parameters assigned to the host αCD by SMIRNOFF99Frosst, a descendant of parm99 and parm@Frosst; GAFF v1.7 (released circa March 2015 according to the gaff.dat distributed with AMBER16); and GAFF v2.1 (which has not yet been published).
On going from GAFF v1.7 to GAFF v2.1, the bond and angle parameters were updated to reproduce small molecule geometries obtained from high-level quantum mechanical calculations and vibrational spectra of over 600 molecules; the torsion parameters were optimized to reproduce the potential energy surfaces of torsion angles in 400 model compounds; and the Lennard-Jones coefficients were redeveloped to reproduce interaction energies and pure liquid properties, as specified in the footer of the gaff2.dat provided with AmberTools18. Note that chemically analogous atoms, bonds, angles, and torsions in αCD and βCD are assigned identical parameters.

#### Lennard-Jones

The SMIRNOFF99Frosst and GAFF v1.7 force fields assign identical σ and ε parameters to the atoms of αCD. Note that hydroxyl hydrogens are assigned σ = 0 Å and ε = 0 kcal/mol in both GAFF v1.7 and SMIRNOFF99Frosst v1.0.5, but later versions of SMIRNOFF99Frosst, produced after the calculations in the current manuscript, adopt small σ and ε values based on a similar atom type in parm@Frosst [48,49,50]. The GAFF v2.1 parameters differ in assigning shallower wells for oxygens and larger σ values for the hydroxyl hydrogens (Figure 9).

#### Bond stretches

Equilibrium bond lengths are very similar among the three force fields (Figure 28), but there are noticeable differences among the force constants (Figure 10). Thus, compared to GAFF v1.7, SMIRNOFF99Frosst tends to have slightly larger bond force constants, except for the O–H hydroxyl bond force constant, which is much stronger. In GAFF v2.1, the O–H hydroxyl bond force constant is very close to that of SMIRNOFF99Frosst, but the carbon-oxygen bond force constants are distinctly weaker.

#### Bond angles

Relative to GAFF v1.7 and GAFF v2.1, SMIRNOFF99Frosst has fewer unique angle parameters applied to αCD; several distinct parameters appear to be compressed into a single force constant, around 50 kcal/mol/rad2 (Figure 11). These parameters correspond to C–C–C, C–O–C, and O–C–O angles. The C–C–C angles are primarily around the ring of the glucose monomer. The C–O–C angles are both around the ring and between monomers (e.g., C1–O1–C4 and C1–O5–C5). Weaker force constants for these parameters in GAFF v1.7 compared to GAFF v2.1 may lead to increased flexibility.

#### Dihedral parameters

The dihedral parameters in SMIRNOFF99Frosst and GAFF v1.7 are extremely similar (where differences in barrier heights occur, they are in the hundredths or thousandths of 1 kcal/mol), with the exception of the H1–C1–C2–O2 parameter (Figure 2). For this dihedral, which corresponds to GAFF atom types h2-c3-c3-oh and SMIRKS pattern [#1:1]-[#6X4:2]-[#6X4:3]-[#8X2:4], SMIRNOFF99Frosst applies a single term with periodicity = 1 and GAFF v1.7 applies a single term with periodicity = 3 (Table 12, Figure 12). The dihedral parameters in GAFF v2.1 differ from those in SMIRNOFF99Frosst in a number of ways. There are several dihedrals that have a different number of terms (Table 13). This is partly due to the addition of dihedral terms with a barrier height of exactly 0.00 kcal/mol in GAFF, which are used to override wildcard parameters that might match the same atom types. For example, GAFF v2.1 applies a three-term energy function to the atom types c3-os-c3-c3, whereas SMIRNOFF99Frosst employs a two-term energy function for the corresponding SMIRKS pattern [#6X4:1]-[#6X4:2]-[#8X2H0:3]-[#6X4:4], but only the terms with periodicity 2 and 3 have nonzero barrier heights in GAFF v2.1.
Similarly, SMIRNOFF99Frosst uses two nonzero terms to model the potential barrier for the SMIRKS pattern [#6X4:1]-[#6X4:2]-[#8X2H1:3]-[#1:4], yet GAFF v2.1 applies a single term with a barrier height of exactly 0.00 kcal/mol for this rotation (atom types c3-c3-oh-ho). The fact that GAFF employs dihedral terms with zero amplitude highlights the complexity that would be required to optimize existing force fields that have accumulated legacy parameters needed to maintain backwards compatibility with older force fields and simulation codes. In other cases, SMIRNOFF99Frosst and GAFF v2.1 disagree on the barrier height even when the periodicity and phase match for a given dihedral. For example, the amplitudes for the O1–C1–O5–C5 dihedral are 1.35 kcal/mol and 0.97 kcal/mol for SMIRNOFF99Frosst and GAFF v2.1, respectively, for the term with periodicity = 1, whereas the amplitudes are 0.85 kcal/mol and 1.24 kcal/mol for SMIRNOFF99Frosst and GAFF v2.1, respectively, for the term with periodicity = 2. It is notable that the barrier heights in GAFF v2.1 are similar in magnitude to those in SMIRNOFF99Frosst, yet GAFF v2.1 produces much more rigid structures (Table 3, Figure 14), as detailed in the following section. Moreover, many of the dihedrals that act between a pair of neighboring glucose monomers (i.e., inter-residue dihedrals) in cyclodextrin differ in their periodicities, phases, and amplitudes between SMIRNOFF99Frosst and GAFF v2.1 (Table 4, Figure 13). The dihedral acting on atoms O1n–C4n+1–C5n+1–O5n+1 differs quite significantly, in both the positions of its minima and its barrier heights. This dihedral partially controls the rotation of glucose monomers towards or away from the interior of the cyclodextrin cavity. Surprisingly, glucose monomers in GAFF v2.1 penetrate the open cavity much less frequently than in SMIRNOFF99Frosst, despite the lower and broader dihedral energy in GAFF v2.1.
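To make the comparison concrete, the sketch below evaluates an AMBER-style periodic torsion energy for the O1–C1–O5–C5 parameters quoted above (and listed in Table 3). Whether the tabulated heights enter the functional form directly as k_n or as V_n/2 is an assumption here; the same convention is applied to both force fields, so the relative comparison is unaffected.

```python
import numpy as np

def torsion_energy(phi, terms):
    """AMBER-style periodic torsion: U(phi) = sum_n k_n * (1 + cos(n*phi - gamma_n)),
    with phi in radians, k_n in kcal/mol, and gamma_n in radians."""
    return sum(k * (1.0 + np.cos(n * phi - gamma)) for n, gamma, k in terms)

phi = np.linspace(-np.pi, np.pi, 721)

# (periodicity, phase, height) for the O1-C1-O5-C5 dihedral
smirnoff_terms = [(1, 0.0, 1.35), (2, 0.0, 0.85), (3, 0.0, 0.10)]
gaff21_terms   = [(1, 0.0, 0.97), (2, 0.0, 1.24), (3, 0.0, 0.00)]

diff = torsion_energy(phi, smirnoff_terms) - torsion_energy(phi, gaff21_terms)
print(f"max |U_SMIRNOFF99Frosst - U_GAFFv2.1| = {np.abs(diff).max():.2f} kcal/mol")
```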
Table 3: Dihedral barrier height differences between SMIRNOFF99Frosst and GAFF v2.1 for cases where the phase and periodicity of the energy term match but the barrier height does not. Atom names refer to Figure 2. Barrier heights in kcal/mol.

| SMIRKS | Atom 1 | Atom 2 | Atom 3 | Atom 4 | Per | Phase | Height, SMIRNOFF99Frosst | Height, GAFF v2.1 |
|---|---|---|---|---|---|---|---|---|
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C1 | C2 | C3 | C4 | 1 | 0 | 0.20 | 0.11 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C1 | C2 | C3 | C4 | 2 | 0 | 0.25 | 0.29 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C1 | C2 | C3 | C4 | 3 | 0 | 0.18 | 0.13 |
| [*:1]-[#6X4:2]-[#6X4:3]-[*:4] | C1 | C2 | C3 | O3 | 3 | 0 | 0.16 | 0.21 |
| [*:1]-[#6X4:2]-[#8X2H0:3]-[*:4] | C1 | O5 | C5 | H5 | 3 | 0 | 0.38 | 0.34 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C2 | C3 | C4 | C5 | 1 | 0 | 0.20 | 0.11 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C2 | C3 | C4 | C5 | 2 | 0 | 0.25 | 0.29 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C2 | C3 | C4 | C5 | 3 | 0 | 0.18 | 0.13 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C3 | C4 | C5 | C6 | 1 | 0 | 0.20 | 0.11 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C3 | C4 | C5 | C6 | 2 | 0 | 0.25 | 0.29 |
| [#6X4:1]-[#6X4:2]-[#6X4:3]-[#6X4:4] | C3 | C4 | C5 | C6 | 3 | 0 | 0.18 | 0.13 |
| [*:1]-[#6X4:2]-[#6X4:3]-[*:4] | C4 | C5 | C6 | O6 | 3 | 0 | 0.16 | 0.21 |
| [#1:1]-[#6X4:2]-[#6X4:3]-[#1:4] | H1 | C1 | C2 | H2 | 3 | 0 | 0.15 | 0.16 |
| [#1:1]-[#6X4:2]-[#6X4:3]-[#1:4] | H2 | C2 | C3 | H3 | 3 | 0 | 0.15 | 0.16 |
| [*:1]-[#6X4:2]-[#8X2:3]-[#1:4] | H2 | C2 | O2 | HO2 | 3 | 0 | 0.17 | 0.11 |
| [#1:1]-[#6X4:2]-[#6X4:3]-[#1:4] | H3 | C3 | C4 | H4 | 3 | 0 | 0.15 | 0.16 |
| [*:1]-[#6X4:2]-[#8X2:3]-[#1:4] | H3 | C3 | O3 | HO3 | 3 | 0 | 0.17 | 0.11 |
| [#1:1]-[#6X4:2]-[#6X4:3]-[#1:4] | H4 | C4 | C5 | H5 | 3 | 0 | 0.15 | 0.16 |
| [#1:1]-[#6X4:2]-[#6X4:3]-[#1:4] | H5 | C5 | C6 | H61 | 3 | 0 | 0.15 | 0.16 |
| [#1:1]-[#6X4:2]-[#6X4:3]-[#1:4] | H5 | C5 | C6 | H62 | 3 | 0 | 0.15 | 0.16 |
| [#6X4:1]-[#8X2:2]-[#6X4:3]-[#8X2:4] | O1 | C1 | O5 | C5 | 1 | 0 | 1.35 | 0.97 |
| [#6X4:1]-[#8X2:2]-[#6X4:3]-[#8X2:4] | O1 | C1 | O5 | C5 | 2 | 0 | 0.85 | 1.24 |
| [#6X4:1]-[#8X2:2]-[#6X4:3]-[#8X2:4] | O1 | C1 | O5 | C5 | 3 | 0 | 0.10 | 0.00 |
| [*:1]-[#6X4:2]-[#6X4:3]-[*:4] | O2 | C2 | C3 | C4 | 3 | 0 | 0.16 | 0.21 |
| [#8X2:1]-[#6X4:2]-[#6X4:3]-[#8X2:4] | O2 | C2 | C3 | O3 | 2 | 0 | 1.18 | 1.13 |
| [#8X2:1]-[#6X4:2]-[#6X4:3]-[#8X2:4] | O2 | C2 | C3 | O3 | 3 | 0 | 0.14 | 0.90 |
| [*:1]-[#6X4:2]-[#6X4:3]-[*:4] | O3 | C3 | C4 | C5 | 3 | 0 | 0.16 | 0.21 |
| [*:1]-[#6X4:2]-[#8X2:3]-[#1:4] | H61 | C6 | O6 | HO6 | 3 | 0 | 0.17 | 0.11 |
| [*:1]-[#6X4:2]-[#8X2:3]-[#1:4] | H62 | C6 | O6 | HO6 | 3 | 0 | 0.17 | 0.11 |

Table 4: Inter-residue dihedral parameter differences between SMIRNOFF99Frosst and GAFF v2.1. Atom names refer to Figure 2. NP: not present. Barrier heights in kcal/mol.

| ID | Atom 1 | Atom 2 | Atom 3 | Atom 4 | Per | Phase | Height, SMIRNOFF99Frosst | Height, GAFF v2.1 |
|---|---|---|---|---|---|---|---|---|
| 1 | C1 (n) | O1 (n) | C4 (n+1) | C3 (n+1) | 1 | 0 | NP | 0.00 |
| 1 | C1 (n) | O1 (n) | C4 (n+1) | C3 (n+1) | 2 | 0 | 0.10 | 0.16 |
| 1 | C1 (n) | O1 (n) | C4 (n+1) | C3 (n+1) | 3 | 0 | 0.38 | 0.24 |
| 2 | C1 (n) | O1 (n) | C4 (n+1) | C5 (n+1) | 1 | 0 | NP | 0.00 |
| 2 | C1 (n) | O1 (n) | C4 (n+1) | C5 (n+1) | 2 | 0 | 0.10 | 0.16 |
| 2 | C1 (n) | O1 (n) | C4 (n+1) | C5 (n+1) | 3 | 0 | 0.38 | 0.24 |
| 3 | C2 (n) | C1 (n+1) | O1 (n+1) | C4 (n+1) | 1 | 0 | NP | 0.00 |
| 3 | C2 (n) | C1 (n+1) | O1 (n+1) | C4 (n+1) | 2 | 0 | 0.10 | 0.16 |
| 3 | C2 (n) | C1 (n+1) | O1 (n+1) | C4 (n+1) | 3 | 0 | 0.38 | 0.24 |
| 4 | O1 (n) | C4 (n+1) | C3 (n+1) | O3 (n+1) | 1 | 0 | NP | 0.02 |
| 4 | O1 (n) | C4 (n+1) | C3 (n+1) | O3 (n+1) | 2 | 0 | 1.18 | 0.00 |
| 4 | O1 (n) | C4 (n+1) | C3 (n+1) | O3 (n+1) | 3 | 0 | 0.14 | 1.01 |
| 5 | O1 (n) | C4 (n+1) | C5 (n+1) | O5 (n+1) | 1 | 0 | NP | 0.17 |
| 5 | O1 (n) | C4 (n+1) | C5 (n+1) | O5 (n+1) | 2 | 0 | 1.18 | 0.00 |
| 5 | O1 (n) | C4 (n+1) | C5 (n+1) | O5 (n+1) | 3 | 0 | 0.14 | 0.00 |

There are no improper dihedrals in αCD or βCD, nor in any of the guests.

### Structural consequences of the force field parameter differences

We observed a substantial difference in the conformational flexibility of the uncomplexed cyclodextrins in solution when simulated with GAFF v2.1 versus SMIRNOFF99Frosst and GAFF v1.7. With SMIRNOFF99Frosst and GAFF v1.7, the average RMSD of βCD, relative to the initial structure, is between 2.0 and 2.5 Å over 43 μs of unrestrained simulation, while with GAFF v2.1, the average RMSD is <1.0 Å (Figure 14). Not only are the RMSDs greater for SMIRNOFF99Frosst and GAFF v1.7, but there is also greater variance in their RMSDs compared to GAFF v2.1, indicating greater flexibility.
This large difference in structural fluctuations is clearly visible in the structure overlays also shown in the figure, which show that GAFF v2.1 is the only one of the three force fields that leads to maintenance of a clearly defined binding cavity. In this respect, it is similar to the q4md-CD force field [51], which was designed specifically for cyclodextrins and which also maintains a relatively well-defined cavity [33]. This difference may be further analyzed by considering the "flip" pseudodihedral O2n–C1n–C4n+1–O3n+1, which characterizes the orientation of glucose monomers relative to their neighbors. An angle of 0° corresponds approximately to a glucose that forms part of a cylindrical wall of the binding cavity, while an angle of ±90° indicates a glucose that has flipped to put its plane parallel to the top and bottom of the cylinder, partly filling the cavity. This dihedral is tightly distributed in GAFF v2.1, with all seven instances having a Gaussian-like distribution centered around -10° (Figure 15, A). GAFF v1.7 and SMIRNOFF99Frosst display a mixed population of monomers both aligned with, and perpendicular to, the cyclodextrin cavity. In particular, during a single 1 μs simulation, each monomer will sample conformations at 0° and ±90°, as indicated by the time series in Figure 15, B. As detailed in the Discussion, the less flexible representation afforded by GAFF v2.1 agrees better with available NMR and crystallographic data.

## Discussion

As a terse representation of a GAFF-like force field, SMIRNOFF99Frosst performs remarkably well. Despite having far fewer parameters than GAFF v1.7 and GAFF v2.1, SMIRNOFF99Frosst performs as well as GAFF v1.7 and arguably better than GAFF v2.1 on estimated binding free energies of small molecules to αCD and βCD, based on the mean signed error relative to experiment. Moreover, SMIRNOFF99Frosst performs better than either GAFF v1.7 or GAFF v2.1 on predicted binding enthalpies, with a mean signed error of less than 1 kcal/mol. It should be noted that the binding free energy and enthalpy root mean squared errors (RMSE) and mean signed errors (MSE) for GAFF v2.1 are not substantially worse than those of SMIRNOFF99Frosst, and GAFF v2.1 has statistically significantly better correlations with the experimental data. GAFF v2.1 has excellent agreement with experiment on predicted binding entropy, followed by SMIRNOFF99Frosst and then GAFF v1.7. Taken together, these results support the notion that a force field with many fewer parameters can provide competitive performance. The reduction in the number of parameters, and the simplification of the force field specification, will make it easier to iteratively refine and optimize SMIRNOFF99Frosst against experimental data and the results of quantum mechanical calculations. However, both SMIRNOFF99Frosst and GAFF v1.7 result in excessively flexible representations of the cyclodextrin hosts, as detailed below. Cézard, et al. present strong NMR evidence that the vicinal 3J H5–H6′ (atom names H5–H62 in Figure 2) and 3J H5–H6″ (atom names H5–H61) couplings show minimal fluctuation over a number of timescales, suggesting little change in the population of rotamers [51]. This is also evident in X-ray structures, where the rigidity of the cyclodextrin ring is retained as long as water is present in the cavity and the torsional angles between adjacent glucose units show little variance (0.3–0.6°) across different crystal structures [52].
The combination of X-ray and NMR data suggests that the specialized q4md-CD [51] force field, and the rigid GAFF v2.1 [33] force field, better model the flexibility of the CD cavity. The CHARMM36 force field displays similar structural dynamics to q4md-CD, with certain GROMOS force fields even more rigid than those [53]. The present results suggest that, as SMIRNOFF99Frosst is further developed, it will be important to include sugars and other carbohydrates in the training sets used to develop parameters. Unfortunately, it may be challenging to find the types of high quality experimental data typically used to train force fields (heats of vaporization, heats of mixing, hydration free energies, and partition coefficients, among others) for biologically relevant sugars. Proper accounting of sugars, and of protein-sugar interactions, will be especially useful for modeling physiologically relevant structures such as proteoglycans and glycopeptides. The greater rigidity of the cyclodextrins when simulated with GAFF v2.1 may contribute to its tendency to generate greater binding affinities and more negative enthalpies than the other two force fields, as a more rigid host may avoid an energy penalty associated with flipping the glucose residues out of the binding cavity to accommodate a guest molecule. The better preorganized cavity might also relate to the uniformly higher correlations between calculation and experiment for GAFF v2.1. On the other hand, it is perhaps unexpected that the force field which best represents the conformational preferences of the cyclodextrins yields binding free energies and enthalpies that are consistently too negative. It is worth noting that the magnitude of these effects will depend on the guest parameters, as well as on the water model and ion parameters. More broadly, the results presented in this manuscript further demonstrate that host-guest binding thermodynamics can be used to benchmark force fields, to help diagnose issues with parameters applied to specific functional groups, and to suggest directions for improvements. We are therefore continuing to build out experimental host-guest datasets tuned for this purpose, and to further streamline host-guest binding thermodynamics calculations so that binding data can be used alongside other data types, such as liquid properties, by automated tools for optimizing force field parameters.

## Code and data availability

- GitHub repository used to convert AMBER input files from the GAFF force field to SMIRNOFF99Frosst.
- GitHub repository for setting up the attach-pull-release calculations using pAPRika version 0.0.3.
- GitHub repository for analyzing the simulations and generating the plots in this manuscript.
- GitHub repository for the Open Force Field group containing the toolkit and force field XML file.

## Author contributions

Conceptualization: DRS, NMH, DLM, JDC, MKG; Methodology: DRS, NMH; Software: DRS, NMH; Formal Analysis: DRS, NMH, JDC, MKG; Investigation: DRS, NMH; Resources: MKG, JDC; Data Curation: DRS, NMH; Writing - Original Draft: DRS; Writing - Review and Editing: DRS, NMH, JDC, MKG, LPW; Visualization: DRS; Supervision: DLM, JDC, MKG; Project Administration: MKG; Funding Acquisition: MKG.

## Acknowledgments

This work was funded in part by grant GM061300 to MKG from the National Institute of General Medical Sciences of the NIH.
JDC was funded in part by grants R01 GM121505 and R01 GM124270 from the National Institute of General Medical Sciences of the NIH and P30 CA008748 from the National Cancer Institute of the NIH. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. ## Disclosures The authors declare the following competing financial interest(s): MKG has an equity interest in and is a cofounder and scientific advisor of VeraChem LLC. JDC is a member of the Scientific Advisory Board of OpenEye Scientific Software. The Chodera laboratory receives or has received funding from multiple sources, including the National Institutes of Health, the National Science Foundation, the Parker Institute for Cancer Immunotherapy, Relay Therapeutics, Entasis Therapeutics, Silicon Therapeutics, EMD Serono (Merck KGaA), AstraZeneca, XtalPi, the Molecular Sciences Software Institute, the Starr Cancer Consortium, the Open Force Field Consortium, Cycle for Survival, a Louis V. Gerstner Young Investigator Award, the Einstein Foundation, and the Sloan Kettering Institute. A complete funding history for the Chodera lab can be found at http://choderalab.org/funding. ## List of abbreviations APR, attach-pull-release; CD, cyclodextrin; GAFF, Generalized AMBER Force Field ## References 1. Overview of the SAMPL6 host–guest binding affinity prediction challenge Andrea Rizzi, Steven Murkli, John N. McNeill, Wei Yao, Matthew Sullivan, Michael K. Gilson, Michael W. Chiu, Lyle Isaacs, Bruce C. Gibb, David L. Mobley, John D. Chodera Journal of Computer-Aided Molecular Design (2018-10) https://doi.org/gfpzh5 DOI: 10.1007/s10822-018-0170-6 · PMID: 30415285 · PMCID: PMC6301044 2. Attach-Pull-Release Calculations of Ligand Binding and Conformational Changes on the First BRD4 Bromodomain Germano Heinzelmann, Niel M. Henriksen, Michael K. Gilson Journal of Chemical Theory and Computation (2017-05-31) https://doi.org/gbpj4m DOI: 10.1021/acs.jctc.7b00275 · PMID: 28564537 · PMCID: PMC5541932 3. Computation of protein–ligand binding free energies using quantum mechanical bespoke force fields Daniel J. Cole, Israel Cabeza de Vaca, William L. Jorgensen MedChemComm (2019) https://doi.org/gfz353 DOI: 10.1039/c9md00017h · PMID: 31391883 · PMCID: PMC6644397 4. Predictions of Ligand Selectivity from Absolute Binding Free Energy Calculations Matteo Aldeghi, Alexander Heifetz, Michael J. Bodkin, Stefan Knapp, Philip C. Biggin Journal of the American Chemical Society (2017-01-09) https://doi.org/f9kr6h DOI: 10.1021/jacs.6b11467 · PMID: 28009512 · PMCID: PMC5253712 5. Accurate and Reliable Prediction of Relative Ligand Binding Potency in Prospective Drug Discovery by Way of a Modern Free-Energy Calculation Protocol and Force Field Lingle Wang, Yujie Wu, Yuqing Deng, Byungchan Kim, Levi Pierce, Goran Krilov, Dmitry Lupyan, Shaughnessy Robinson, Markus K. Dahlgren, Jeremy Greenwood, … Robert Abel Journal of the American Chemical Society (2015-02-12) https://doi.org/f64pdz DOI: 10.1021/ja512751q · PMID: 25625324 6. OPLS3e: Extending Force Field Coverage for Drug-Like Small Molecules Katarina Roos, Chuanjie Wu, Wolfgang Damm, Mark Reboul, James M. Stevenson, Chao Lu, Markus K. Dahlgren, Sayan Mondal, Wei Chen, Lingle Wang, … Edward D. Harder Journal of Chemical Theory and Computation (2019-02-15) https://doi.org/gfvpnf DOI: 10.1021/acs.jctc.8b01026 · PMID: 30768902 7. Validation of AMBER/GAFF for Relative Free Energy Calculations Lin Song, Tai-Sung Lee, Chun Zhu, Darrin M. 
York, Kenneth M. Merz Jr. American Chemical Society (ACS) (2019-02-04) https://doi.org/gf22kq DOI: 10.26434/chemrxiv.7653434 8. Overview of the SAMPL5 host–guest challenge: Are we doing better? Jian Yin, Niel M. Henriksen, David R. Slochower, Michael R. Shirts, Michael W. Chiu, David L. Mobley, Michael K. Gilson Journal of Computer-Aided Molecular Design (2016-09-22) https://doi.org/f9m82x DOI: 10.1007/s10822-016-9974-4 · PMID: 27658802 · PMCID: PMC5241188 9. The Movable Type Method Applied to Protein–Ligand Binding Zheng Zheng, Melek N. Ucisik, Kenneth M. Merz Journal of Chemical Theory and Computation (2013-11-26) https://doi.org/f5kjt2 DOI: 10.1021/ct4005992 · PMID: 24535920 · PMCID: PMC3924725 10. Blinded predictions of standard binding free energies: lessons learned from the SAMPL6 challenge Michail Papadourakis, Stefano Bosisio, Julien Michel Journal of Computer-Aided Molecular Design (2018-08-29) https://doi.org/gfpbrc DOI: 10.1007/s10822-018-0154-6 · PMID: 30159717 11. Resolving the problem of trapped water in binding cavities: prediction of host–guest binding free energies in the SAMPL5 challenge by funnel metadynamics Soumendranath Bhakat, Pär Söderhjelm Journal of Computer-Aided Molecular Design (2016-08-29) https://doi.org/f9m894 DOI: 10.1007/s10822-016-9948-6 · PMID: 27573983 · PMCID: PMC5239820 12. Absolute binding free energies for octa-acids and guests in SAMPL5 Florentina Tofoleanu, Juyong Lee, Frank C. Pickard IV, Gerhard König, Jing Huang, Minkyung Baek, Chaok Seok, Bernard R. Brooks Journal of Computer-Aided Molecular Design (2016-09-30) https://doi.org/f9m9c2 DOI: 10.1007/s10822-016-9965-5 · PMID: 27696242 · PMCID: PMC6472255 13. Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics Niel M. Henriksen, Andrew T. Fenley, Michael K. Gilson Journal of Chemical Theory and Computation (2015-08-26) https://doi.org/f7q3mj DOI: 10.1021/acs.jctc.5b00405 · PMID: 26523125 · PMCID: PMC4614838 14. Prediction of CB[8] host–guest binding free energies in SAMPL6 using the double-decoupling method Kyungreem Han, Phillip S. Hudson, Michael R. Jones, Naohiro Nishikawa, Florentina Tofoleanu, Bernard R. Brooks Journal of Computer-Aided Molecular Design (2018-08-06) https://doi.org/gfpzp2 DOI: 10.1007/s10822-018-0144-8 · PMID: 30084077 · PMCID: PMC6347468 15. Comparison of the umbrella sampling and the double decoupling method in binding free energy predictions for SAMPL6 octa-acid host–guest challenges Naohiro Nishikawa, Kyungreem Han, Xiongwu Wu, Florentina Tofoleanu, Bernard R. Brooks Journal of Computer-Aided Molecular Design (2018-10) https://doi.org/gfz352 DOI: 10.1007/s10822-018-0166-2 · PMID: 30324304 · PMCID: PMC6413509 16. Detailed potential of mean force studies on host–guest systems from the SAMPL6 challenge Lin Frank Song, Nupur Bansal, Zheng Zheng, Kenneth M. Merz Journal of Computer-Aided Molecular Design (2018-08-24) https://doi.org/gfpw5t DOI: 10.1007/s10822-018-0153-7 · PMID: 30143917 17. The SAMPL4 host–guest blind prediction challenge: an overview Hari S. Muddana, Andrew T. Fenley, David L. Mobley, Michael K. Gilson Journal of Computer-Aided Molecular Design (2014-03-06) https://doi.org/f5585s DOI: 10.1007/s10822-014-9735-1 · PMID: 24599514 · PMCID: PMC4053502 18. The statistical-thermodynamic basis for computation of binding affinities: a critical review M. K. Gilson, J. A. Given, B. L. Bush, J. A. McCammon Biophysical Journal (1997-03) https://doi.org/dkv3sk DOI: 10.1016/s0006-3495(97)78756-3 · PMID: 9138555 · PMCID: PMC1184492 19. 
Bridging Calorimetry and Simulation through Precise Calculations of Cucurbituril–Guest Binding Enthalpies Andrew T. Fenley, Niel M. Henriksen, Hari S. Muddana, Michael K. Gilson Journal of Chemical Theory and Computation (2014-08) https://doi.org/f6hv4x DOI: 10.1021/ct5004109 · PMID: 25221445 · PMCID: PMC4159218 20. Structures of the Common Cyclodextrins and Their Larger AnaloguesBeyond the Doughnut Wolfram Saenger, Joël Jacob, Katrin Gessler, Thomas Steiner, Daniel Hoffmann, Haruyo Sanbe, Kyoko Koizumi, Steven M. Smith, Takeshi Takaha Chemical Reviews (1998-07) https://doi.org/dgz7hm DOI: 10.1021/cr9700181 21. Toward Expanded Diversity of Host–Guest Interactions via Synthesis and Characterization of Cyclodextrin Derivatives K. Kellett, S. A. Kantonen, B. M. Duggan, M. K. Gilson Journal of Solution Chemistry (2018-06-04) https://doi.org/gfp949 DOI: 10.1007/s10953-018-0769-1 22. Cyclodextrins and their uses: a review E. M.Martin Del Valle Process Biochemistry (2004-05) https://doi.org/bxthjg DOI: 10.1016/s0032-9592(03)00258-9 23. Escaping Atom Types in Force Fields Using Direct Chemical Perception David L. Mobley, Caitlin C. Bannan, Andrea Rizzi, Christopher I. Bayly, John D. Chodera, Victoria T. Lim, Nathan M. Lim, Kyle A. Beauchamp, David R. Slochower, Michael R. Shirts, … Peter K. Eastman Journal of Chemical Theory and Computation (2018-10-11) https://doi.org/gffnf3 DOI: 10.1021/acs.jctc.8b00640 · PMID: 30351006 · PMCID: PMC6245550 24. Open Force Field Initiative The Open Force Field Initiative https://openforcefield.org/ 25. A Modified Version of the Cornellet al.Force Field with Improved Sugar Pucker Phases and Helical Repeat Thomas E. Cheatham III, Piotr Cieplak, Peter A. Kollman Journal of Biomolecular Structure and Dynamics (1999-02) https://doi.org/gfzp4j DOI: 10.1080/07391102.1999.10508297 · PMID: 10217454 26. An Informal AMBER Small Molecule Force Field: parm@Frossthttp://www.ccl.net/cca/data/parm_at_Frosst/ 27. AMBER 2018 D. A. Case, I. Y. Ben-Shalom, S. R. Brozell, D. S. Cerutti, T. E. III Cheatham, V. W. D. Cruzeiro, T. A. Darden, R. E. Duke, D. Ghoreishi, M. K. Gilson, … P. A. Kollman 28. Daylight>SMIRKS Tutorialhttps://daylight.com/dayhtml_tutorials/languages/smirks/ 29. Development and testing of a general amber force field Junmei Wang, Romain M. Wolf, James W. Caldwell, Peter A. Kollman, David A. Case Journal of Computational Chemistry (2004) https://doi.org/cdmcnb DOI: 10.1002/jcc.20035 · PMID: 15116359 30. A general small molecule force field descended from AMBER99 and parm@Frosst, available in the SMIRNOFF format: openforcefield/smirnoff99Frosst Open Force Field Initiative (2019-09-27) https://github.com/openforcefield/smirnoff99Frosst Michael R Shirts, John Damon Chodera, David L Mobley, Michael K Gilson, Lee-Ping Wang Unpublished (2019) https://doi.org/gf2stw DOI: 10.13140/rg.2.2.27587.86562 32. Thermodynamic and Nuclear Magnetic Resonance Study of the Reactions of α- and β-Cyclodextrin with Acids, Aliphatic Amines, and Cyclic Alcohols Mikhail V. Rekharsky, Martin P. Mayhew, Robert N. Goldberg, Philip D. Ross, Yuko Yamashoji, Yoshihisa Inoue The Journal of Physical Chemistry B (1997-01) https://doi.org/cmwfwn DOI: 10.1021/jp962715n 33. Evaluating Force Field Performance in Thermodynamic Calculations of Cyclodextrin Host–Guest Binding: Water Models, Partial Charges, and Host Force Field Parameters Niel M. Henriksen, Michael K. 
Gilson Journal of Chemical Theory and Computation (2017-07-11) https://doi.org/gd2z2t DOI: 10.1021/acs.jctc.7b00359 · PMID: 28696692 · PMCID: PMC5606194 34. Chiral Recognition Thermodynamics of β-Cyclodextrin:  The Thermodynamic Origin of Enantioselectivity and the Enthalpy−Entropy Compensation Effect Mikhail Rekharsky, Yoshihisa Inoue Journal of the American Chemical Society (2000-05) https://doi.org/c2tvdg DOI: 10.1021/ja9921118 35. OEChem Toolkit 2019.Apr.2 OpenEye Scientific Software 36. Fast, efficient generation of high-quality atomic charges. AM1-BCC model: II. Parameterization and validation Araz Jakalian, David B. Jack, Christopher I. Bayly Journal of Computational Chemistry (2002-10-18) https://doi.org/cktk6g DOI: 10.1002/jcc.10128 · PMID: 12395429 37. Fast, efficient generation of high-quality atomic charges. AM1-BCC model: I. Method Araz Jakalian, Bruce L. Bush, David B. Jack, Christopher I. Bayly Journal of Computational Chemistry (2000-01-30) https://doi.org/cvvpkv DOI: 10.1002/(sici)1096-987x(20000130)21:2<132::aid-jcc5>3.0.co;2-p 38. Comparison of simple potential functions for simulating liquid water William L. Jorgensen, Jayaraman Chandrasekhar, Jeffry D. Madura, Roger W. Impey, Michael L. Klein The Journal of Chemical Physics (1983-07-15) https://doi.org/dg9sq8 DOI: 10.1063/1.445869 39. Determination of Alkali and Halide Monovalent Ion Parameters for Use in Explicitly Solvated Biomolecular Simulations In Suk Joung, Thomas E. Cheatham III The Journal of Physical Chemistry B (2008-07) https://doi.org/cgwnj7 DOI: 10.1021/jp8001614 · PMID: 18593145 · PMCID: PMC2652252 40. Lessons learned from comparing molecular dynamics engines on the SAMPL5 dataset Michael R. Shirts, Christoph Klein, Jason M. Swails, Jian Yin, Michael K. Gilson, David L. Mobley, David A. Case, Ellen D. Zhong Journal of Computer-Aided Molecular Design (2016-10-27) https://doi.org/f9m3wn DOI: 10.1007/s10822-016-9977-1 · PMID: 27787702 · PMCID: PMC5581938 41. Overcoming dissipation in the calculation of standard binding free energies by ligand extraction Camilo Velez-Vega, Michael K. Gilson Journal of Computational Chemistry (2013-08) https://doi.org/gbdv9w DOI: 10.1002/jcc.23398 · PMID: 24038118 · PMCID: PMC3932244 42. Statistical Mechanics of Fluid Mixtures John G. Kirkwood The Journal of Chemical Physics (1935-05) https://doi.org/djkdtx DOI: 10.1063/1.1749657 43. Statistically optimal analysis of samples from multiple equilibrium states Michael R. Shirts, John D. Chodera The Journal of Chemical Physics (2008-09-28) https://doi.org/cvzgk7 DOI: 10.1063/1.2978177 · PMID: 19045004 · PMCID: PMC2671659 44. The Amber Molecular Dynamics Package(2019) http://www.ambermd.org 45. Long-Time-Step Molecular Dynamics through Hydrogen Mass Repartitioning Chad W. Hopkins, Scott Le Grand, Ross C. Walker, Adrian E. Roitberg Journal of Chemical Theory and Computation (2015-03-30) https://doi.org/f697mk DOI: 10.1021/ct5010406 · PMID: 26574392 46. Nonlinear scaling schemes for Lennard-Jones interactions in free energy calculations Thomas Steinbrecher, David L. Mobley, David A. Case The Journal of Chemical Physics (2007-12-07) https://doi.org/fc7n55 DOI: 10.1063/1.2799191 · PMID: 18067350 47. Error estimates on averages of correlated data H. Flyvbjerg, H. G. Petersen The Journal of Chemical Physics (1989-07) https://doi.org/bjpm5j DOI: 10.1063/1.457480 48. 
Add hydroxyl hydrogen radii, remove generics, update for 1.0.7 release by davidlmobley · Pull Request #74 · openforcefield/smirnoff99Frosst GitHub https://github.com/openforcefield/smirnoff99Frosst/pull/74 49. Remove generics, add hydrogen radii, rename ffxml files by davidlmobley · Pull Request #101 · openforcefield/openforcefield GitHub https://github.com/openforcefield/openforcefield/pull/101 50. Adjust hydroxyl hydrogen to have a small radius, requires more research · Issue #61 · openforcefield/smirnoff99Frosst GitHub https://github.com/openforcefield/smirnoff99Frosst/issues/61 51. Molecular dynamics studies of native and substituted cyclodextrins in different media: 1. Charge derivation and force field performances Christine Cézard, Xavier Trivelli, Frédéric Aubry, Florence Djedaïni-Pilard, François-Yves Dupradeau Physical Chemistry Chemical Physics (2011) https://doi.org/cv3hss DOI: 10.1039/c1cp20854c · PMID: 21792425 52. Topography of cyclodextrin inclusion complexes. 8. Crystal and molecular structure of the .alpha.-cyclodextrin-methanol-pentahydrate complex. Disorder in a hydrophobic cage B. Hingerty, W. Saenger Journal of the American Chemical Society (1976-05) https://doi.org/cmfz5v DOI: 10.1021/ja00427a050 53. Validation and Comparison of Force Fields for Native Cyclodextrins in Aqueous Solution Julia Gebhardt, Catharina Kleist, Sven Jakobtorweihen, Niels Hansen The Journal of Physical Chemistry B (2018-01-24) https://doi.org/gcv3xq DOI: 10.1021/acs.jpcb.7b11808 · PMID: 29287148 ## Supporting Information Table 5: Experimental and predicted binding free energies (ΔG°). Values in kcal/mol. System Experimental SMIRNOFF99Frosst GAFF v1.7 GAFF v2.1 Mean SEM Mean SEM Mean SEM Mean SEM a-bam -1.58 0.02 -3.25 0.44 -0.82 0.21 -2.93 0.23 a-but -1.51 0.04 -1.49 0.27 -1.09 0.20 -3.14 0.22 a-cbu -2.02 0.02 -1.33 0.19 -0.89 0.22 -3.73 0.21 a-chp -2.51 0.06 -2.38 0.28 -1.69 0.24 -4.11 0.23 a-coc -3.23 1.14 -1.78 0.29 -1.86 0.24 -3.35 0.24 a-cpe -2.13 0.02 -1.59 0.25 -1.50 0.29 -3.79 0.22 a-ham -3.53 0.00 -3.43 0.30 -3.02 0.19 -5.99 0.17 a-hep -3.99 0.01 -3.95 0.21 -3.93 0.20 -6.23 0.17 a-hex -3.38 0.01 -2.70 0.21 -2.92 0.21 -5.27 0.20 a-hp6 -3.60 0.00 -3.32 0.23 -3.37 0.18 -5.41 0.18 a-hpa -4.14 0.00 -3.02 0.32 -3.16 0.22 -6.03 0.22 a-hx2 -3.34 0.01 -2.74 0.20 -2.60 0.19 -4.79 0.18 a-hx3 -3.01 0.01 -2.39 0.25 -1.58 0.23 -3.94 0.23 a-mba -1.76 0.02 -1.22 0.30 -0.89 0.25 -3.17 0.23 a-mha -3.60 0.00 -3.60 0.29 -2.89 0.17 -5.55 0.22 a-mhp -4.17 0.00 -3.98 0.29 -3.82 0.19 -6.23 0.21 a-nmb -1.69 0.02 -1.95 0.42 -0.83 0.18 -2.74 0.21 a-nmh -3.52 0.01 -4.15 0.59 -2.92 0.18 -5.56 0.19 a-oam -4.61 0.01 -4.68 0.49 -4.33 0.17 -6.99 0.19 a-oct -4.62 0.02 -4.64 0.30 -4.85 0.24 -6.81 0.19 a-pam -2.72 0.00 -2.66 0.77 -1.53 0.18 -4.00 0.23 a-pnt -2.60 0.01 -2.56 0.23 -1.74 0.19 -4.14 0.19 b-ben -1.64 0.02 -2.85 0.62 -1.83 0.29 -2.45 0.17 b-cbu -1.55 0.17 -1.88 0.20 -1.64 0.36 -2.77 0.17 b-chp -4.56 0.01 -3.08 0.25 -2.79 0.34 -6.27 0.23 b-coc -4.97 0.04 -3.28 0.23 -3.36 0.26 -7.13 0.22 b-cpe -3.05 0.01 -3.57 0.34 -3.55 0.31 -5.93 0.27 b-ham -2.49 0.08 -2.52 0.20 -2.01 0.26 -4.14 0.19 b-hep -3.39 0.18 -3.41 0.28 -3.34 0.35 -4.15 0.23 b-hex -2.28 0.03 -2.93 0.25 -2.47 0.27 -3.59 0.17 b-m4c -4.32 0.01 -2.89 0.24 -2.68 0.29 -5.64 0.22 b-m4t -4.54 0.01 -3.82 0.19 -3.50 0.26 -6.33 0.17 b-mch -4.18 0.01 -3.69 0.22 -3.31 0.26 -6.07 0.17 b-mha -2.56 0.07 -3.46 0.24 -2.14 0.28 -4.66 0.18 b-mo3 -2.16 0.01 -2.87 0.38 -2.73 0.41 -2.79 0.20 b-mo4 -2.51 0.01 -4.19 0.41 -3.10 0.31 -3.49 0.21 b-mp3 -1.46 0.04 -3.03 0.27 -2.48 
0.28 -3.00 0.19 b-mp4 -2.19 0.01 -3.02 0.32 -2.77 0.34 -3.06 0.21 b-oam -3.59 0.12 -3.35 0.28 -2.60 0.30 -5.25 0.23 b-pb3 -3.52 0.01 -3.49 0.32 -2.87 0.30 -4.58 0.17 b-pb4 -3.60 0.02 -3.62 0.33 -3.34 0.35 -4.71 0.23 b-pha -1.70 0.05 -3.24 0.31 -2.55 0.29 -3.98 0.19 b-pnt -1.27 0.32 -2.22 0.25 -1.73 0.29 -2.00 0.16 Table 6: Experimental and predicted binding enthalpies (ΔH). Values in kcal/mol. System Experimental SMIRNOFF99Frosst GAFF v1.7 GAFF v2.1 Mean SEM Mean SEM Mean SEM Mean SEM a-bam -2.17 0.05 -0.43 0.28 -0.84 0.59 -3.05 0.38 a-but -2.53 0.12 -0.76 0.59 -1.08 0.37 -4.91 0.42 a-cbu -2.75 0.05 -2.08 0.21 -0.71 0.49 -4.94 0.29 a-chp -2.99 0.23 -3.42 0.39 -2.33 0.28 -5.27 0.35 a-coc -0.93 0.32 -3.80 0.45 -2.93 0.32 -6.17 0.32 a-cpe -2.74 0.02 -1.93 0.30 -1.06 0.42 -4.86 0.29 a-ham -4.19 0.02 -4.02 0.33 -2.33 0.30 -6.91 0.29 a-hep -4.19 0.09 -4.72 0.33 -4.05 0.36 -8.68 0.24 a-hex -3.40 0.02 -4.33 0.30 -2.95 0.30 -7.43 0.31 a-hp6 -4.48 0.02 -4.86 0.31 -3.73 0.21 -8.24 0.31 a-hpa -4.66 0.02 -4.47 0.36 -2.65 0.30 -7.38 0.26 a-hx2 -4.12 0.06 -4.24 0.31 -2.35 0.32 -6.56 0.30 a-hx3 -3.36 0.05 -2.25 0.55 -2.80 0.32 -5.51 0.28 a-mba -2.68 0.07 -0.95 0.41 -0.32 0.37 -3.11 0.36 a-mha -4.28 0.02 -3.31 0.50 -2.16 0.25 -6.40 0.34 a-mhp -4.74 0.02 -4.89 0.23 -3.41 0.23 -8.12 0.28 a-nmb -2.57 0.06 -1.10 0.28 0.03 0.23 -3.34 0.27 a-nmh -4.20 0.08 -4.20 0.48 -2.54 0.24 -6.74 0.30 a-oam -5.46 0.03 -4.93 0.28 -3.73 0.33 -8.02 0.28 a-oct -4.89 0.03 -6.08 0.21 -4.69 0.29 -9.53 0.30 a-pam -3.28 0.02 -1.72 0.61 -0.84 0.28 -4.45 0.33 a-pnt -2.75 0.01 -2.05 0.37 -1.62 0.33 -5.99 0.30 b-ben -2.51 0.08 -0.45 0.56 -0.76 0.82 -1.30 0.26 b-cbu 0.88 0.17 0.05 0.83 -0.19 0.46 0.87 0.29 b-chp -2.96 0.01 0.88 0.40 1.82 0.61 -4.39 0.31 b-coc -3.92 0.06 0.80 0.47 0.45 0.98 -5.57 0.44 b-cpe -1.09 0.01 1.86 0.37 3.62 0.65 -1.32 0.30 b-ham 0.60 0.05 1.66 0.68 2.29 0.78 0.42 0.33 b-hep 0.42 0.04 -0.05 0.38 1.91 0.29 -0.92 0.27 b-hex 1.31 0.04 0.41 0.65 1.30 0.60 -0.21 0.27 b-m4c -2.27 0.01 2.18 0.48 2.62 0.55 -3.13 0.30 b-m4t -2.17 0.02 1.26 0.45 2.49 0.51 -3.11 0.29 b-mch -2.29 0.03 -0.79 1.08 2.27 0.73 -3.37 0.34 b-mha 0.47 0.03 0.53 0.55 2.48 0.64 0.28 0.29 b-mo3 -2.93 0.03 -2.66 0.46 -0.59 0.45 -2.50 0.28 b-mo4 -1.96 0.01 -2.69 0.58 -0.91 0.33 -3.05 0.29 b-mp3 -2.75 0.13 -1.09 0.68 0.68 0.48 -2.64 0.31 b-mp4 -2.89 0.05 -2.84 0.76 0.78 0.60 -2.34 0.28 b-oam -0.48 0.03 0.98 0.41 2.66 0.43 -0.52 0.34 b-pb3 -2.25 0.01 -1.59 0.94 1.78 0.36 -2.24 0.30 b-pb4 -2.82 0.01 -0.02 0.62 -1.44 0.78 -3.70 0.29 b-pha -1.79 0.11 -1.10 0.69 -1.34 0.97 -3.45 0.36 b-pnt 1.89 0.53 -0.79 1.01 -0.51 0.84 0.40 0.31 Table 7: Experimental and predicted binding entropies (−TΔS°). Values in kcal/mol. 
| System | Experimental Mean | Experimental SEM | SMIRNOFF99Frosst Mean | SMIRNOFF99Frosst SEM | GAFF v1.7 Mean | GAFF v1.7 SEM | GAFF v2.1 Mean | GAFF v2.1 SEM |
|---|---|---|---|---|---|---|---|---|
| a-bam | 0.59 | 0.05 | -2.82 | 0.53 | 0.02 | 0.63 | 0.11 | 0.45 |
| a-but | 1.02 | 0.13 | -0.73 | 0.65 | -0.01 | 0.42 | 1.77 | 0.47 |
| a-cbu | 0.73 | 0.05 | 0.75 | 0.28 | -0.17 | 0.54 | 1.21 | 0.36 |
| a-chp | 0.48 | 0.24 | 1.04 | 0.48 | 0.63 | 0.37 | 1.16 | 0.41 |
| a-coc | -2.30 | 1.18 | 2.02 | 0.53 | 1.07 | 0.40 | 2.82 | 0.40 |
| a-cpe | 0.61 | 0.03 | 0.33 | 0.39 | -0.44 | 0.51 | 1.07 | 0.37 |
| a-ham | 0.66 | 0.02 | 0.59 | 0.44 | -0.68 | 0.36 | 0.92 | 0.34 |
| a-hep | 0.20 | 0.09 | 0.77 | 0.39 | 0.12 | 0.41 | 2.45 | 0.29 |
| a-hex | 0.02 | 0.02 | 1.62 | 0.37 | 0.03 | 0.36 | 2.17 | 0.37 |
| a-hp6 | 0.88 | 0.02 | 1.54 | 0.38 | 0.37 | 0.27 | 2.84 | 0.36 |
| a-hpa | 0.52 | 0.02 | 1.45 | 0.48 | -0.50 | 0.37 | 1.35 | 0.34 |
| a-hx2 | 0.78 | 0.06 | 1.49 | 0.37 | -0.25 | 0.37 | 1.77 | 0.35 |
| a-hx3 | 0.35 | 0.05 | -0.13 | 0.61 | 1.23 | 0.40 | 1.58 | 0.36 |
| a-mba | 0.92 | 0.07 | -0.27 | 0.51 | -0.57 | 0.44 | -0.06 | 0.43 |
| a-mha | 0.68 | 0.02 | -0.29 | 0.58 | -0.73 | 0.30 | 0.85 | 0.40 |
| a-mhp | 0.57 | 0.02 | 0.92 | 0.37 | -0.41 | 0.30 | 1.89 | 0.35 |
| a-nmb | 0.88 | 0.06 | -0.85 | 0.50 | -0.86 | 0.29 | 0.61 | 0.35 |
| a-nmh | 0.68 | 0.08 | 0.05 | 0.76 | -0.38 | 0.30 | 1.18 | 0.36 |
| a-oam | 0.85 | 0.03 | 0.25 | 0.57 | -0.60 | 0.37 | 1.02 | 0.34 |
| a-oct | 0.27 | 0.04 | 1.44 | 0.37 | -0.16 | 0.38 | 2.72 | 0.36 |
| a-pam | 0.56 | 0.02 | -0.94 | 0.98 | -0.68 | 0.34 | 0.45 | 0.40 |
| a-pnt | 0.15 | 0.01 | -0.51 | 0.43 | -0.12 | 0.38 | 1.84 | 0.36 |
| b-ben | 0.87 | 0.08 | -2.40 | 0.83 | -1.07 | 0.87 | -1.15 | 0.31 |
| b-cbu | -2.43 | 0.24 | -1.93 | 0.85 | -1.45 | 0.58 | -3.64 | 0.34 |
| b-chp | -1.60 | 0.01 | -3.96 | 0.47 | -4.61 | 0.69 | -1.87 | 0.39 |
| b-coc | -1.05 | 0.07 | -4.08 | 0.53 | -3.81 | 1.02 | -1.56 | 0.49 |
| b-cpe | -1.96 | 0.01 | -5.43 | 0.51 | -7.17 | 0.72 | -4.60 | 0.40 |
| b-ham | -3.09 | 0.09 | -4.19 | 0.71 | -4.29 | 0.82 | -4.56 | 0.38 |
| b-hep | -3.81 | 0.18 | -3.36 | 0.47 | -5.25 | 0.45 | -3.23 | 0.35 |
| b-hex | -3.59 | 0.05 | -3.34 | 0.70 | -3.77 | 0.66 | -3.38 | 0.32 |
| b-m4c | -2.05 | 0.01 | -5.07 | 0.54 | -5.29 | 0.62 | -2.51 | 0.37 |
| b-m4t | -2.37 | 0.02 | -5.08 | 0.49 | -5.99 | 0.57 | -3.22 | 0.33 |
| b-mch | -1.89 | 0.03 | -2.90 | 1.10 | -5.58 | 0.78 | -2.70 | 0.38 |
| b-mha | -3.03 | 0.08 | -4.00 | 0.60 | -4.62 | 0.69 | -4.94 | 0.34 |
| b-mo3 | 0.77 | 0.03 | -0.20 | 0.60 | -2.14 | 0.61 | -0.29 | 0.34 |
| b-mo4 | -0.55 | 0.01 | -1.50 | 0.71 | -2.19 | 0.46 | -0.45 | 0.36 |
| b-mp3 | 1.29 | 0.14 | -1.94 | 0.74 | -3.16 | 0.56 | -0.35 | 0.37 |
| b-mp4 | 0.70 | 0.05 | -0.19 | 0.83 | -3.55 | 0.69 | -0.72 | 0.35 |
| b-oam | -3.11 | 0.12 | -4.33 | 0.50 | -5.26 | 0.53 | -4.73 | 0.41 |
| b-pb3 | -1.27 | 0.01 | -1.90 | 0.99 | -4.64 | 0.47 | -2.34 | 0.34 |
| b-pb4 | -0.78 | 0.02 | -3.60 | 0.71 | -1.90 | 0.85 | -1.01 | 0.37 |
| b-pha | 0.09 | 0.12 | -2.14 | 0.76 | -1.21 | 1.01 | -0.52 | 0.41 |
| b-pnt | -3.16 | 0.62 | -1.43 | 1.04 | -1.22 | 0.89 | -2.40 | 0.35 |

Table 8: Predicted thermodynamic properties for SMIRNOFF99Frosst relative to experiment in kcal/mol, analyzed using MBAR.

| | RMSE | MSE | R² | Slope | Intercept | Tau |
|---|---|---|---|---|---|---|
| ΔG° SMIRNOFF99Frosst | 0.80 [0.62, 1.00] | -0.04 [-0.28, 0.20] | 0.67 [0.22, 0.65] | 0.53 [0.33, 0.73] | -1.45 [-0.80, -2.07] | 0.48 [0.32, 0.62] |
| ΔH SMIRNOFF99Frosst | 1.83 [1.37, 2.28] | 0.73 [0.26, 1.24] | 0.44 [0.21, 0.66] | 0.84 [0.54, 1.18] | 0.36 [-0.53, 1.47] | 0.52 [0.34, 0.68] |
| −TΔS° SMIRNOFF99Frosst | 1.84 [1.44, 2.25] | -0.76 [-1.26, -0.26] | 0.40 [0.15, 0.63] | 0.87 [0.50, 1.24] | -0.84 [-1.33, -0.36] | 0.32 [0.11, -0.50] |

Table 9: Predicted thermodynamic properties for each force field relative to experiment on ammonium guests. Values in kcal/mol.

| | RMSE | MSE | R² | Slope | Intercept | Tau |
|---|---|---|---|---|---|---|
| ΔG° SMIRNOFF99Frosst | 0.76 [0.43, 1.11] | -0.10 [-0.54, 0.31] | 0.48 [0.07, 0.84] | 0.69 [0.19, 1.16] | -1.06 [0.54, -2.77] | 0.44 [0.75, 0.04] |
| ΔG° GAFF v1.7 | 0.77 [0.59, 0.95] | 0.69 [0.51, 0.88] | 0.90 [0.76, 0.98] | 1.08 [0.88, 1.26] | 0.95 [1.56, 0.32] | 0.74 [0.91, 0.50] |
| ΔG° GAFF v2.1 | 1.85 [1.59, 2.09] | -1.79 [-2.04, -1.53] | 0.93 [0.83, 0.98] | 1.32 [1.13, 1.51] | -0.80 [-0.20, -1.46] | 0.76 [0.92, 0.53] |
| ΔH SMIRNOFF99Frosst | 1.15 [0.77, 1.51] | 0.83 [0.39, 1.27] | 0.89 [0.76, 0.97] | 1.15 [0.89, 1.53] | 1.31 [2.81, 0.38] | 0.78 [0.92, 0.56] |
| ΔH GAFF v1.7 | 2.12 [1.77, 2.47] | 2.02 [1.67, 2.37] | 0.92 [0.80, 0.98] | 1.09 [0.86, 1.35] | 2.29 [3.34, 1.39] | 0.75 [0.90, 0.54] |
| ΔH GAFF v2.1 | 1.90 [1.31, 2.43] | -1.51 [-2.15, -0.88] | 0.96 [0.91, 0.99] | 1.54 [1.38, 1.83] | 0.09 [1.18, -0.44] | 0.81 [0.95, 0.62] |
| −TΔS° SMIRNOFF99Frosst | 1.47 [0.90, 2.10] | -0.93 [-1.59, -0.31] | 0.65 [0.13, 0.91] | 0.99 [0.58, 1.35] | -0.88 [-0.09, -1.66] | 0.26 [0.64, -0.26] |
| −TΔS° GAFF v1.7 | 1.45 [1.14, 1.79] | -1.33 [-1.66, -1.00] | 0.88 [0.18, 0.97] | 1.04 [-0.02, 1.37] | -1.27 [-0.55, -1.62] | 0.28 [0.64, -0.21] |
| −TΔS° GAFF v2.1 | 1.04 [0.67, 1.40] | -0.27 [-0.84, 0.26] | 0.89 [0.29, 0.98] | 1.36 [-0.53, 1.66] | -0.12 [1.16, -0.59] | 0.23 [0.62, -0.26] |

Table 10: Predicted thermodynamic properties for each force field relative to experiment on carboxylate guests. Values in kcal/mol.

| | RMSE | MSE | R² | Slope | Intercept | Tau |
|---|---|---|---|---|---|---|
| ΔG° SMIRNOFF99Frosst | 0.87 [0.59, 1.16] | -0.36 [-0.74, -0.01] | 0.34 [0.02, 0.68] | 0.45 [0.11, 0.75] | -1.85 [-0.91, -2.83] | 0.40 [0.67, 0.07] |
| ΔG° GAFF v1.7 | 0.68 [0.49, 0.88] | 0.03 [-0.28, 0.34] | 0.52 [0.16, 0.80] | 0.68 [0.33, 0.97] | -0.84 [0.08, -1.75] | 0.53 [0.76, 0.23] |
| ΔG° GAFF v2.1 | 1.46 [1.21, 1.71] | -1.36 [-1.61, -1.10] | 0.81 [0.61, 0.93] | 1.18 [0.85, 1.46] | -0.87 [0.02, -1.74] | 0.72 [0.87, 0.54] |
| ΔH SMIRNOFF99Frosst | 1.41 [0.94, 1.93] | 0.20 [-0.43, 0.84] | 0.53 [0.20, 0.79] | 0.83 [0.40, 1.53] | -0.14 [2.12, -1.30] | 0.59 [0.80, 0.30] |
| ΔH GAFF v1.7 | 1.95 [1.34, 2.55] | 1.24 [0.55, 1.93] | 0.47 [0.13, 0.78] | 0.79 [0.32, 1.49] | 0.82 [3.10, -0.54] | 0.53 [0.75, 0.23] |
| ΔH GAFF v2.1 | 2.43 [1.75, 3.06] | -1.73 [-2.51, -0.96] | 0.69 [0.49, 0.85] | 1.40 [0.99, 2.29] | -0.66 [2.15, -1.61] | 0.63 [0.82, 0.35] |
| −TΔS° SMIRNOFF99Frosst | 1.73 [1.17, 2.29] | -0.57 [-1.32, 0.16] | 0.29 [0.02, 0.61] | 0.62 [0.16, 1.09] | -0.68 [0.05, -1.43] | 0.27 [0.58, -0.09] |
| −TΔS° GAFF v1.7 | 2.07 [1.35, 2.76] | -1.22 [-2.00, -0.46] | 0.29 [0.00, 0.67] | 0.63 [-0.02, 1.18] | -1.31 [-0.58, -2.09] | 0.27 [0.58, -0.09] |
| −TΔS° GAFF v2.1 | 1.46 [1.12, 1.77] | 0.37 [-0.27, 1.00] | 0.50 [0.13, 0.76] | 0.93 [0.58, 1.30] | 0.37 [1.07, -0.34] | 0.37 [0.67, -0.01] |

Table 11: Predicted thermodynamic properties for each force field relative to experiment on cyclic alcohol guests. Values in kcal/mol.

| | RMSE | MSE | R² | Slope | Intercept | Tau |
|---|---|---|---|---|---|---|
| ΔG° SMIRNOFF99Frosst | 1.07 [0.66, 1.58] | 0.71 [0.22, 1.21] | 0.54 [0.09, 0.86] | 0.55 [0.20, 0.84] | -0.84 [0.16, -2.09] | 0.44 [0.75, 0.02] |
| ΔG° GAFF v1.7 | 1.22 [0.86, 1.67] | 0.93 [0.45, 1.41] | 0.56 [0.12, 0.89] | 0.59 [0.25, 0.89] | -0.47 [0.64, -1.77] | 0.47 [0.78, 0.05] |
| ΔG° GAFF v2.1 | 1.80 [1.48, 2.15] | -1.64 [-2.04, -1.14] | 0.73 [0.19, 0.98] | 1.01 [0.49, 1.27] | -1.63 [-0.67, -3.19] | 0.66 [0.89, 0.27] |
| ΔH SMIRNOFF99Frosst | 2.88 [1.99, 3.68] | 1.66 [0.21, 3.03] | 0.09 [0.00, 0.44] | 0.07 [-1.28, 1.67] | -0.29 [3.93, -4.06] | 0.09 [0.56, -0.35] |
| ΔH GAFF v1.7 | 3.63 [2.67, 4.47] | 2.66 [1.13, 4.07] | 0.10 [0.00, 0.57] | 0.12 [-1.09, 2.28] | 0.91 [6.66, -2.47] | 0.14 [0.60, -0.31] |
| ΔH GAFF v2.1 | 2.08 [1.18, 3.16] | -1.64 [-2.54, -0.91] | 0.54 [0.00, 0.97] | 1.08 [-0.37, 1.90] | -1.51 [0.83, -5.50] | 0.46 [0.89, -0.09] |
| −TΔS° SMIRNOFF99Frosst | 2.47 [1.62, 3.36] | -0.96 [-2.22, 0.52] | 0.40 [0.00, 0.93] | 1.18 [-0.45, 2.26] | -0.88 [0.36, -3.60] | 0.30 [0.71, -0.20] |
| −TΔS° GAFF v1.7 | 3.00 [2.07, 3.88] | -1.73 [-3.14, -0.18] | 0.37 [0.00, 0.93] | 1.23 [-0.38, 2.48] | -1.59 [-0.31, -4.23] | 0.29 [0.71, -0.20] |
| −TΔS° GAFF v2.1 | 1.80 [0.68, 3.19] | -0.00 [-0.98, 1.27] | 0.48 [0.00, 0.97] | 1.13 [-0.22, 1.96] | 0.08 [1.14, -1.79] | 0.46 [0.82, -0.02] |

Table 12: Dihedral parameter differences between SMIRNOFF99Frosst and GAFF v1.7. Atom names refer to Figure 2. NP: not present.

| Atom 1 | Atom 2 | Atom 3 | Atom 4 | Per | Phase | SMIRNOFF99Frosst Height (kcal/mol) | GAFF v1.7 Height (kcal/mol) |
|---|---|---|---|---|---|---|---|
| H1 | C1 | C2 | O2 | 1 | 0 | 0.25 | NP |
| H1 | C1 | C2 | O2 | 3 | 0 | 0.00 | 0.16 |

Table 13: Dihedral parameter differences between SMIRNOFF99Frosst and GAFF v2.1, where one dihedral has fewer or more periodicity terms than the corresponding term in the other force field. Atom names refer to Figure 2. NP: not present.

| Atom 1 | Atom 2 | Atom 3 | Atom 4 | Per | Phase | SMIRNOFF99Frosst Height (kcal/mol) | GAFF v2.1 Height (kcal/mol) |
|---|---|---|---|---|---|---|---|
| C1 | C2 | O2 | HO2 | 1 | 0 | 0.25 | NP |
| C1 | C2 | O2 | HO2 | 3 | 0 | 0.16 | 0.00 |
| C1 | O5 | C5 | C4 | 1 | 0 | NP | 0.00 |
| C1 | O5 | C5 | C4 | 2 | 0 | 0.10 | 0.16 |
| C1 | O5 | C5 | C4 | 3 | 0 | 0.38 | 0.24 |
| C1 | O5 | C5 | C6 | 1 | 0 | NP | 0.00 |
| C1 | O5 | C5 | C6 | 2 | 0 | 0.10 | 0.16 |
| C1 | O5 | C5 | C6 | 3 | 0 | 0.38 | 0.24 |
| C2 | C1 | O5 | C5 | 1 | 0 | NP | 0.00 |
| C2 | C1 | O5 | C5 | 2 | 0 | 0.10 | 0.16 |
| C2 | C1 | O5 | C5 | 3 | 0 | 0.38 | 0.24 |
| C2 | C3 | O3 | HO3 | 1 | 0 | 0.25 | NP |
| C2 | C3 | O3 | HO3 | 3 | 0 | 0.16 | 0.00 |
| C5 | C6 | O6 | HO6 | 1 | 0 | 0.25 | NP |
| C5 | C6 | O6 | HO6 | 3 | 0 | 0.16 | 0.00 |
| H1 | C1 | C2 | O2 | 1 | 0 | 0.25 | NP |
| H1 | C1 | C2 | O2 | 3 | 0 | 0.00 | 0.16 |
| O1 | C1 | C2 | O2 | 1 | 0 | NP | 0.02 |
| O1 | C1 | C2 | O2 | 2 | 0 | 1.18 | 0.00 |
| O1 | C1 | C2 | O2 | 3 | 0 | 0.14 | 1.01 |
| O2 | C2 | C1 | O5 | 1 | 0 | NP | 0.02 |
| O2 | C2 | C1 | O5 | 2 | 0 | 1.18 | 0.00 |
| O2 | C2 | C1 | O5 | 3 | 0 | 0.14 | 1.01 |
| O5 | C5 | C6 | O6 | 1 | 0 | NP | 0.02 |
| O5 | C5 | C6 | O6 | 2 | 0 | 1.18 | 0.00 |
| O5 | C5 | C6 | O6 | 3 | 0 | 0.14 | 1.01 |
| HO2 | O2 | C2 | C3 | 1 | 0 | 0.25 | NP |
| HO2 | O2 | C2 | C3 | 3 | 0 | 0.16 | 0.00 |
| HO3 | O3 | C3 | C4 | 1 | 0 | 0.25 | NP |
| HO3 | O3 | C3 | C4 | 3 | 0 | 0.16 | 0.00 |
Mathematician: Nigel J. Cutland. British mathematician whose main fields of interest are non-standard analysis, Loeb spaces, and applications in probability and stochastic analysis. Born: 10 February 1944.
# Will we ever build a Galactic Empire?

I devoured science fiction as a kid. One of my favorite books was Asimov's Foundation, which told the story of a splintering Galactic Empire and the seeds of a new civilization that would grow from its ashes. I marveled and was filled with joy at the notion that in our far future humanity might spread across the galaxy, settling countless new planets and bringing civilization to every corner of it. Then I went to college and learned physics, especially special relativity and cosmology, and learned that there are actually some hugely significant barriers in the way of these visions ever being manifest in reality.

Ever since, it has seemed obvious to me that the idea of a galactic civilization, or of humans "colonizing the universe", is pure fantasy. But I am continuously surprised when I hear those interested in futurism throw around these terms casually, as if their occurrence were an inevitability so long as humans stay around long enough. This is deeply puzzling to me. Perhaps these people are just being loose with their language, and when they refer to a galactic civilization, they mean something much more modest, like a loosely-connected civilization spread across a few nearby stars. Or maybe there is a fundamental misunderstanding of both the limitations imposed on us by physics and the enormity of the scales in question. Either way, I want to write a post explaining exactly why these science fiction ideas are almost certainly never going to become reality. My argument in brief:

1. We are really slow.
2. The galaxy is really big.
3. The universe is even bigger.

Back in 1977, humans launched the Voyager 1 spacecraft, designed to explore the outer solar system and then to head for the stars. Its trajectory was coordinated to slingshot around Jupiter and then Saturn, picking up speed at each stage, and then finally to launch itself out of the solar system into the great beyond. About 35 years later, in 2012, it finally crossed the Sun's heliopause, marking the first steps into interstellar space. It is now 11.7 billion miles from Earth, which sounds great until you realize that this distance is still less than two-tenths of a single percent of one light year. Compare this to, say, the distance to the nearest star, Alpha Centauri, and we find that it has traveled less than .05% of the distance. At this rate, it would take another 80,000 years to make contact with Alpha Centauri (if it were aimed in that direction)! On the scale of distances between stars, our furthest current exploration has gotten virtually nowhere. It's the equivalent of somebody who started in the center of the Earth hoping to burrow up all the way to the surface, and takes 5 days to travel a single meter. Over 42 years, they would have travelled just over 3000 meters.

OK, you say, but that's unfair. The Voyager was designed in the 70s! Surely we could do better with modern spacecraft design. To which my reply is, sure we can do better now, but not better enough. The fastest thing humans have ever designed is the Helios 2 probe, which hit a speed of 157,078 mph at its fastest (.023% of the speed of light). If we packed some people in this probe (which we couldn't) and sent it off towards the nearest star (assuming that it stayed at this max speed the entire journey), guess how long it would take? Over 18 thousand years. And keep in mind, this is only talking about the nearest star, let alone spreading across the galaxy!
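These back-of-the-envelope numbers are easy to check. Here's a minimal Python sketch, using standard rounded values for the length of a light year, the distance to the nearest star, and Voyager's cruising speed (none of which are spelled out exactly above):

```python
# Rough check of the probe numbers quoted above.
LIGHT_YEAR_MILES = 5.88e12   # miles in one light year (rounded)
NEAREST_STAR_LY = 4.24       # distance to the nearest star, in the Alpha Centauri system

voyager_distance_miles = 11.7e9   # Voyager 1's distance from Earth
voyager_speed_mph = 38_000        # Voyager 1's cruising speed (rounded)
helios_speed_mph = 157_078        # the Helios 2 figure quoted above

nearest_star_miles = NEAREST_STAR_LY * LIGHT_YEAR_MILES
hours_per_year = 24 * 365.25

print(100 * voyager_distance_miles / LIGHT_YEAR_MILES)    # ~0.2% of one light year
print(100 * voyager_distance_miles / nearest_star_miles)  # ~0.05% of the way to the nearest star
print(nearest_star_miles / voyager_speed_mph / hours_per_year)  # ~75,000 years at Voyager's speed
print(nearest_star_miles / helios_speed_mph / hours_per_year)   # ~18,000 years at Helios 2's speed
```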
It should be totally evident to everybody that the technology we would need to be able to reach the stars is still far far away. But let's put aside the limits of our current technology. We are still in our infancy as a space-faring species in many ways, and it is totally unfair to treat our current technology level as if it's something set in stone. Let's give the futurists the full benefit of the doubt, and imagine that in the future humans will be able to harness incredible quantities of energy that vastly surpass anything we have today. If we keep increasing our energy capacity more and more, is there any limitation on how fast we could get a spacecraft going?

Well, yes there is! There is a fundamental cosmic speed limit built into the fabric of the universe, which is of course the speed of light. The speed-vs-energy curve has a vertical asymptote at this value; no finite amount of energy can get you past this speed. So much for arbitrarily high speeds. What can we do with spacecraft traveling near the speed of light? It turns out, a whole lot more!

Suppose we can travel at 0.9 times the speed of light, and grant also that the time to accelerate up to this speed is insignificant. Now it only takes us 4.7 years to get to the nearest star! The next closest star takes us 6.6 years. Next is 12.3 years. This is not too bad! Humans in the past have made years-long journeys to discover new lands. Surely we could do it again to settle the stars. But now let's consider the trip to the center of the galaxy. At 90% the speed of light, this journey would take 29,000 years. As you can see, there's a massive difference between jumping to nearby stars and attempting to actually traverse the galaxy. The trouble is that most people don't realize just how significant this change in distance scale is. When you hear "distance to Alpha Centauri" and "distance to the center of the Milky Way", you are probably not intuitively grasping how hugely different these two quantities are. Even if we had a shuttle traveling at 99.99% of the speed of light, it would take over 100,000 years to travel the diameter of the Milky Way.

You might be thinking "Ok, that is quite a long time. But so what! Surely there will be some intrepid explorers that are willing to leave behind the safety of settled planets to bring civilization to brand new worlds. After all, taking into account time dilation, a 1000 year journey from Earth's perspective only takes 14 years to pass from the perspective of a passenger on a ship traveling at 99.99% of the speed of light." And I accept that! It doesn't seem crazy that if humans develop shuttles that travel at a significant percentage of the speed of light, we could end up with human cities scattered all across the galaxy.

But now the issue is with the idea of a galactic CIVILIZATION. Any such civilization faces a massive problem, which is the issue of communication. Even having a shared society between Earth and a planet around our nearest star would be enormously tricky, being that any information sent from one planet to the other would take more than four years to arrive. The two societies would be perpetually four years behind each other, and this raises some serious issues for any central state attempting to govern both. And it's safe to say that no trade can exist between two planets that have a 10,000 year delay in communication. Nor can any diplomacy, or common leadership, or shared culture or technology, or ANY of the necessary prerequisites for a civilization.
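The relativistic figures above come straight from the Lorentz factor. Here's a quick sketch, taking the usual rough values of 26,000 light years to the galactic center and 100,000 light years for the disk diameter:

```python
import math

def gamma(beta: float) -> float:
    """Lorentz factor for a speed of beta times the speed of light."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# Earth-frame travel times (in years) are just the distance in light years divided by beta.
print(4.24 / 0.9)        # nearest star at 0.9c: ~4.7 years
print(26_000 / 0.9)      # galactic center at 0.9c: ~29,000 years
print(100_000 / 0.9999)  # across the Milky Way at 0.9999c: just over 100,000 years

# On-board (proper) time divides the Earth-frame time by the Lorentz factor.
print(gamma(0.9999))         # ~70.7
print(1000 / gamma(0.9999))  # a 1000-year Earth-frame journey: ~14 years for the passengers
```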
I think that for these reasons, it's evident that the idea of a Galactic Empire like Asimov's could never come into being. The idea of such a widespread civilization must rely on an assumption that humans will at some point learn to send faster than light messages. Which, sure, we can't rule out! But everything in physics teaches us that we should be betting very heavily against it. Our attitude towards the idea of an eventual real life Galactic Empire should be similar to our attitude towards perpetual motion machines, as both rely on a total rewriting of the laws of physics as we know them.

But people don't stop at a Galactic Empire. Not naming names, but I hear people that I respect throwing around the idea that humans can settle the universe. This is total madness. The jump in distance scale from the diameters of galaxies to the size of the observable universe is roughly 40 times larger than the jump in distance scale we previously saw from nearby stars to galactic diameters. The universe is really really big, and as it expands, every second more of it is vanishing from our horizon of observable events.

Ok, so let's take the claim to be something much more modest than the literal 'settle galaxies all across the observable universe'. Let's take it that they just mean that humans will spread across our nearest neighborhood of galaxies. The problem with this is again that the distance scale in question is just unimaginably large. Clearly we can't have a civilization across galaxies (as we can't even have a civilization across a single galaxy). But now even making the trip is a seemingly insurmountable hurdle. The Andromeda Galaxy (the nearest spiral galaxy to the Milky Way) is about 2.5 million light years away. Even traveling at 99.99% of the speed of light, this would be a journey of roughly 35,000 years for those within the ship. From one edge of the Virgo supercluster (our cluster of neighbor galaxies) to the other takes 110 million years at the speed of light. To make this trip doable in a single lifetime requires a speed of 99.999999999999% of c (which would result in the trip taking 16 years inside the spacecraft). The kinetic energy required to get a mass m to these speeds is given by KE = (γ − 1)mc². If our shuttle and all the people on it have a mass of, say, 1000 kg, the required energy ends up being approximately the entire energy output of the Sun per second.

Again, it's possible that if humans survive long enough and somehow get our hands on the truly enormous amounts of energy required to get this close to the speed of light, then we could eventually have human societies springing up in nearby galaxies, in total isolation from one another. But (1) it's far from obvious that we will ever be capable of mastering such enormous amounts of energy, and (2) I think that this is not what futurists are visualizing when they talk about settling the universe.

Let me just say that I still love science fiction, love futurism, and stand in awe when I think of all the incredible things the march of technological progress will likely bring us in the future. But this seems to be one area where the way that our future prospects are discussed is really far off the mark, and where many people would do well to significantly adjust their estimates of the plausibility of the future trajectories they are imagining. Though science and future technology may bring us many incredible new abilities and accomplishments as a species, a galactic or intergalactic civilization is almost certainly not one of them.
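That closing energy estimate is a one-liner as well; here is a minimal sketch, using the standard figure of about 3.8 × 10²⁶ watts for the Sun's luminosity:

```python
import math

C = 3.0e8            # speed of light in m/s (rounded)
SUN_WATTS = 3.8e26   # the Sun's total power output, i.e. joules radiated per second (standard value)

def kinetic_energy(mass_kg: float, beta: float) -> float:
    """Relativistic kinetic energy, KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

ke = kinetic_energy(1000, 0.99999999999999)  # a 1000 kg ship at 99.999999999999% of c
print(ke)              # ~6 x 10^26 joules
print(ke / SUN_WATTS)  # ~1.7, i.e. roughly the Sun's entire output over a second or two
```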
# Solving the Linear Cournot Model

The Cournot model is a simple economic model used to describe what happens when multiple companies compete with one another to produce some homogenous product. I've been playing with it a bit and ended up solving the general linear case. I assume that this solution is already known by somebody, but couldn't find it anywhere. So I will post it here! It gives some interesting insight into the way that less-than-perfectly-competitive markets operate.

First let's talk about the general structure of the Cournot model. Suppose we have n firms. Each produces some quantity of the product, which we'll label as $q_1, q_2, ..., q_n$. The total amount of product on the market will be given the label $Q = q_1 + q_2 + ... + q_n$. Since the firms are all selling identical products, it makes sense to assume that the consumer demand function $P(q_1, q_2, ..., q_n)$ will just be a function of the total quantity of the product that is on the market: $P(q_1, q_2, ..., q_n) = P(q_1 + q_2 + ... + q_n) = P(Q)$. (This means that we're also disregarding effects like customer loyalty to a particular company or geographic closeness to one company location over another. Essentially, the only factor in a consumer's choice of which company to go to is the price at which that company is selling the product.)

For each firm, there is some cost to producing the good. We capture this by giving each firm a cost function $C_1(q_1), C_2(q_2), ..., C_n(q_n)$. Now we can figure out the profit of each firm for a given set of output values $q_1, q_2, ..., q_n$. We'll label the profit of the kth firm as $\Pi_k$. This profit is just the amount of money they get by selling the product minus the cost of producing the product: $\Pi_k = q_k P(Q) - C_k(q_k)$.

If we now assume that all firms are maximizing profit, we can find the outputs of each firm by taking the derivative of the profit and setting it to zero: $\frac{d\Pi_k}{dq_k} = P(Q) + q_k \frac{dP}{dQ} - \frac{dC_k}{dq_k} = 0$. This is a set of n equations with n unknowns, so solving this will fully specify the behavior of all firms! Of course, without any more assumptions about the functions $P$ and $C_k$, we can't go too much further with solving this equation in general. To get some interesting general results, we'll consider a very simple set of assumptions. Our assumptions will be that both consumer demand and producer costs are linear. This is the linear Cournot model, as opposed to the more general Cournot model.

In the linear Cournot model, we write that $P(Q) = a - bQ$ (for some a and b) and $C_k(q_k) = c_k q_k$. As an example, we might have that P(Q) = \$100 − \$2 × Q, which would mean that at a price of \$40, 30 units of the good will be bought in total. The constants $c_k$ represent the marginal cost of production for each firm, and the linearity of the cost function means that the cost of producing the next unit is always the same, regardless of how many have been produced before. (This is unrealistic, as generally it's cheaper per unit to produce large quantities of a good than to produce small quantities.)

Now we can write out the profit-maximization equations for the linear Cournot model: $\frac{d\Pi_k}{dq_k} = P(Q) + q_k \frac{dP}{dQ} - \frac{dC_k}{dq_k} = a - bQ - b q_k - c_k = 0$. Rewriting, we get $q_k + Q = \frac{a - c_k}{b}$. We can't immediately solve this for $q_k$, because remember that Q is the sum of all the quantities produced.
All n of the quantities we're trying to solve for appear in each equation, so to solve the system of equations we have to do some linear algebra!

$2q_1 + q_2 + q_3 + ... + q_n = \frac{a - c_1}{b} \\ q_1 + 2q_2 + q_3 + ... + q_n = \frac{a - c_2}{b} \\ q_1 + q_2 + 2q_3 + ... + q_n = \frac{a - c_3}{b} \\ \ldots \\ q_1 + q_2 + q_3 +... + 2q_n = \frac{a - c_n}{b}$

Translating this to a matrix equation…

$\begin{bmatrix} 2 & 1 & 1 & 1 & 1 & \ldots \\ 1 & 2 & 1 & 1 \\ 1 & 1 & 2 & & \ddots \\ 1 & 1 & & \ddots \\ 1 & & \ddots \\ \vdots \end{bmatrix} \begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ \vdots \\ q_{n-2} \\ q_{n-1} \\ q_n \end{bmatrix} = \frac{1}{b} \begin{bmatrix} a - c_1 \\ a - c_2 \\ a - c_3 \\ \vdots \\ a - c_{n-2} \\ a - c_{n-1} \\ a - c_n \end{bmatrix}$

Now if we could only find the inverse of the first matrix, we'd have our solution!

$\begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ \vdots \\ q_{n-2} \\ q_{n-1} \\ q_n \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 & 1 & 1 & \ldots \\ 1 & 2 & 1 & 1 \\ 1 & 1 & 2 & & \ddots \\ 1 & 1 & & \ddots \\ 1 & & \ddots \\ \vdots \end{bmatrix} ^{-1} \frac{1}{b} \begin{bmatrix} a - c_1 \\ a - c_2 \\ a - c_3 \\ \vdots \\ a - c_{n-2} \\ a - c_{n-1} \\ a - c_n \end{bmatrix}$

I found the inverse of this matrix by using the symmetry in the matrix to decompose it into two matrices that were each easier to work with:

$\mathcal{I} = \begin{bmatrix} 1 & 0 & 0 & 0 \ldots \\ 0 & 1 & 0 \\ 0 & 0 & \ddots \\ 0 \\ \vdots \end{bmatrix}$

$\mathcal{J} = \begin{bmatrix} 1 & 1 & 1 & 1 \ldots \\ 1 & 1 & 1 \\ 1 & 1 & \ddots \\ 1 \\ \vdots \end{bmatrix}$

$\begin{bmatrix} 2 & 1 & 1 & 1 & 1 & \ldots \\ 1 & 2 & 1 & 1 \\ 1 & 1 & 2 & & \ddots \\ 1 & 1 & & \ddots \\ 1 & & \ddots \\ \vdots \end{bmatrix} = \mathcal{I} + \mathcal{J}$

As a hypothesis, suppose that the inverse matrix has a similar form (one value for the diagonal elements, and another value for all off-diagonal elements). This allows us to write an equation for the inverse matrix:

$(\mathcal{I} + \mathcal{J}) (A \mathcal{I} + B \mathcal{J}) = \mathcal{I}$

To solve this, we'll use the following easily proven identities.

$\mathcal{I} \cdot \mathcal{I} = \mathcal{I} \\ \mathcal{I} \cdot \mathcal{J} = \mathcal{J} \\ \mathcal{J} \cdot \mathcal{I} = \mathcal{J} \\ \mathcal{J} \cdot \mathcal{J} = n \mathcal{J} \\$

$(\mathcal{I} + \mathcal{J}) (A \mathcal{I} + B \mathcal{J}) \\ = A \mathcal{I} + A \mathcal{J} + B \mathcal{J} + nB \mathcal{J} \\ = A \mathcal{I} + \left( A + B(n+1) \right) \mathcal{J} \\ = \mathcal{I}$

$A = 1 \\ A + B(n+1) = 0$

$A = 1 \\ B = - \frac{1}{n+1}$

$(\mathcal{I} + \mathcal{J})^{-1} = \mathcal{I} - \frac{1}{n+1} \mathcal{J} = \frac{1}{n+1} \begin{bmatrix} n & -1 & -1 & -1 & -1 & \ldots \\ -1 & n & -1 & -1 \\ -1 & -1 & n & & \ddots \\ -1 & -1 & & \ddots \\ -1 & & \ddots \\ \vdots \end{bmatrix}$

Alright awesome! Our hypothesis turned out to be true! (And it would have even if the entries in our matrix hadn't been 1s and 2s. This is a really cool general method to find inverses of this family of matrices.) Now we just use this inverse matrix to solve for the output from each firm!
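Before doing that, the claimed inverse is easy to sanity-check numerically. A minimal sketch with numpy, for an arbitrarily chosen n:

```python
import numpy as np

n = 5                            # arbitrary number of firms for the check
I = np.eye(n)                    # the identity matrix
J = np.ones((n, n))              # the all-ones matrix
M = I + J                        # the system matrix: 2s on the diagonal, 1s everywhere else

M_inv_formula = I - J / (n + 1)  # the claimed inverse

print(np.allclose(M @ M_inv_formula, I))              # True
print(np.allclose(M_inv_formula, np.linalg.inv(M)))   # True
```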
$\begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ \vdots \\ q_{n-2} \\ q_{n-1} \\ q_n \end{bmatrix} = (\mathcal{I} - \frac{1}{n+1} \mathcal{J}) \ \frac{1}{b} \begin{bmatrix} a - c_1 \\ a - c_2 \\ a - c_3 \\ \vdots \\ a - c_{n-2} \\ a - c_{n-1} \\ a - c_n \end{bmatrix}$ Define: $\mathcal{C} = \sum_{i=1}^{n}{c_i}$ $q_k = \frac{1}{b} (a - c_k - \frac{1}{n+1} \sum_{i=1}^{n}{(a - c_i)}) \\ ~~~~ = \frac{1}{b} (a - c_k - \frac{1}{n+1} (an - \mathcal{C})) \\ ~~~~ = \frac{1}{b} (\frac{a + \mathcal{C}}{n+1} - c_k)$ $Q^* = \sum_{k=1}^n {q_k} \\ ~~~~~ = \frac{1}{b} \sum_{k=1}^n ( \frac{a + \mathcal{C}}{n+1} - c_k ) \\ ~~~~~ = \frac{1}{b} \left( \frac{n}{n+1} (a + \mathcal{C}) - \mathcal{C} \right) \\ ~~~~~ = \frac{1}{b} \left( \frac{n}{n+1} a - \frac{\mathcal{C}}{n+1} \right) \\ ~~~~~ = \frac {an - \mathcal{C}} {b(n+1)}$ $P^* = a - bQ^* \\ ~~~~~ = a - \frac{an - \mathcal{C}}{n+1} \\ ~~~~~ = \frac{a + \mathcal{C}}{n+1}$ $\Pi_k^* = q_k^* P^* - c_k q_k^* \\ ~~~~~ = \frac{1}{b} (\frac{a+\mathcal{C}}{n+1} - c_k) \frac{a + \mathcal{C}}{n+1} - \frac{c_k}{b} (\frac{a + \mathcal{C}}{n+1} - c_k) \\ ~~~~~ = \frac{1}{b} \left( \left( \frac{a + \mathcal{C}}{n+1} \right)^2 - 2c_k\left( \frac{a + \mathcal{C}}{n+1} \right) + c_k^2 \right) \\ ~~~~~ = \frac{1}{b} \left( \frac{a + \mathcal{C}}{n+1} - c_k \right)^2$ And there we have it, the full solution to the general linear Cournot model! Let’s discuss some implications of these results. First of all, let’s look at the two extreme cases: monopoly and perfect competition. Monopoly: n = 1 $Q^* = \frac{1}{2b} (a - c) \\ P^* = \frac{1}{2} (a + c) \\ \Pi^* = \frac{1}{b} \left( \frac{a - c}{2} \right)^2$ Perfect Competition: n → ∞ $q_k^* \rightarrow \frac{1}{b} (\bar c - c_k) \\ Q^* \rightarrow \frac{1}{b} (a - \bar c) \\ P^* \rightarrow \bar c \\ \Pi^* \rightarrow \frac{1}{b} (\bar c - c_k)^2$ The first observation is that the behavior of the market under monopoly looks very different from the case of perfect competition. For one thing, notice that the price under perfect competition is always going to be lower than the price under monopoly. This is a nice demonstration of the so-called monopoly markup. The quantity $a$ intuitively corresponds to the highest possible price you could get for the product (the most that the highest bidder would pay). And the quantity $c$, the production cost, is the lowest possible price at which the product would be sold. So the monopoly price is the average of the highest price you could get for the good and the lowest price at which it could be sold. The flip side of the monopoly markup is that less of the good is produced and sold under a monopoly than under perfect competition. There are trades that could be happening (trades which would be mutually beneficial!) which do not occur. Think about it: the monopoly price is halfway between the cost of production and the highest bidder’s price. This means that there are a bunch of people that would buy the product at above the cost of production but below the monopoly price. And since the price they would buy it for is above the cost of production, this would be a profitable exchange for both sides! But alas, the monopoly doesn’t allow these trades to occur, as it would involve lowering the price for everybody, including those who are willing to pay a higher price, and thus decreasing net profit. Things change as soon as another firm joins the market. This firm can profitably sell the good at a lower price than the monopoly price and snatch up all of their business. 
This introduces a downward pressure on the price. Here's the exact solution for the case of duopoly.

Duopoly: n = 2

$q_1 = \frac{1}{3b} (a - 2c_1 + c_2) \\ q_2 = \frac{1}{3b} (a + c_1 - 2c_2) \\ Q^* = \frac{1}{3b} (2a - c_1 - c_2) \\ P^* = \frac{1}{3} (a + c_1 + c_2) \\ \Pi_1^* = \frac{1}{9b} (a - 2c_1 + c_2)^2 \\ \Pi_2^* = \frac{1}{9b} (a + c_1 - 2c_2)^2 \\$

Interestingly, in the duopoly case the market price still rests at a value above the marginal cost of production for either firm. As more and more firms enter the market, competition pushes the price down further and further until, in the limit of perfect competition, it converges to the cost of production. The implication of this is that in the limit of perfect competition, firms do not make any profit!

This may sound a little unintuitive, but it's the inevitable consequence of the line of argument above. If a bunch of companies were all making some profit, then their price is somewhere above the cost of production. But this means that one company could slightly lower its price, thus snatching up all the customers and making massively more money than its competitors. So its competitors will all follow suit, pushing down their prices to get back their customers. And in the end, all the firms will have just decreased their prices and their profits, even though every step in the sequence appeared to be the rational and profitable action by each firm!

This is just an example of a coordination problem. If the companies could all just agree to hold their price fixed at, say, the monopoly price, then they'd all be better off. But each individual has a strong monetary incentive to lower their price and gather all the customers. So the price will drop and drop until it can drop no more (that is, until it has reached the cost of production, at which point it is no longer profitable for a company to lower their price).

This implies that in some sense, the limit of perfect competition is the best possible outcome for consumers and the worst outcome for producers. Every consumer that values the product above the cost of its production will get it, and they will all get it at the lowest possible price. So the consumer surplus will be enormous. And companies producing the product make no net profit; any attempt to do so immediately loses them their entire customer base. (In which case, what is the motivation for the companies to produce the product in the first place? This is known as the Bertrand paradox.)

We can also get the easier-to-solve special case where all firms have the same cost of production.

Equal Production Costs $\forall k (c_k = c)$

$q_k^* = \frac{1}{n+1} \frac{a - c}{b} \\ Q^* = \frac{n}{n+1} \frac{a - c}{b} \\ P^* = \frac{a + nc}{n + 1} \\ \Pi^* = \frac{1}{b} \left( \frac{a - c}{n+1} \right)^2$

It's curious that in the Cournot model, prices don't immediately drop to the cost of production as soon as you go from a monopoly to a duopoly. After all, the intuitive argument I presented before works for two firms: if both firms are pricing the good at any value above the cost of production, then each stands to gain by lowering the price a slight bit and getting all the customers. And this continues until the price settles at the cost of production. We didn't build any ability of the firms to collude into the model, so what gives? What the Cournot model tells us is certainly more realistic (we don't expect a duopoly to behave like a perfectly competitive market), but where does this realism come from?
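To make the convergence concrete before getting to that question, here's a minimal sketch that plugs arbitrary numbers for a, b, and a common cost c into the equal-cost formulas above:

```python
# Equal-cost linear Cournot: price and per-firm profit as the number of firms grows.
a, b, c = 100.0, 2.0, 20.0   # arbitrary demand intercept, demand slope, and common marginal cost

def cournot_price(n: int) -> float:
    return (a + n * c) / (n + 1)

def cournot_profit(n: int) -> float:
    return ((a - c) / (n + 1)) ** 2 / b

for n in [1, 2, 5, 100]:
    print(n, round(cournot_price(n), 2), round(cournot_profit(n), 2))

# n = 1 gives the monopoly price (a + c)/2 = 60; as n grows, the price
# falls toward the marginal cost c = 20 and per-firm profit falls toward 0.
```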
The answer is that in a certain sense we did build in collusion between firms from the start, in the form of agreement on what price to sell at. Notice that our model did not allow different firms to set different prices. In this model, firms compete only on quantity of goods sold, not prices. The price is set automatically by the consumer demand function, and no single individual can unilaterally change their price. This constraint is what gives us the more realistic-in-character results that we see, and also what invalidates the intuitive argument I’ve made here. One final observation. Consider the following procedure. You line up a representative from each of the n firms, as well as the highest bidder for the product (representing the highest price at which the product could be sold). Each of the firms states their cost of production (the lowest they could profitably bring the price to), and the highest bidder states the amount that he values the product (the highest price at which he would still buy it). Now all of the stated costs are averaged, and the result is set as the market price of the good. Turns out that this procedure gives exactly the market price that the linear Cournot model predicts! This might be meaningful or just a curious coincidence. But it’s quite surprising to me that the slope of the demand curve ($b$) doesn’t show up at all in the ultimate market price, only the value that the highest bidder puts on the product! # Building an analog calculator Op-amps are magical little devices that allow us to employ the laws of electromagnetism for our purposes, giving us the power to do ultimately any computation using physics. To explain this, we need to take three steps: first transistors, then differential amplifiers, and then operational amplifiers. The focus of this post will be on purely analog calculations, as that turns out to be the most natural type of computation you get out of op amps. ## 1. Transistor Here’s a transistor: It has three ports, named (as labelled) the base, the collector and the emitter. Intuitively, you can think about the base as being like a dial that sensitively controls the flow of large quantities of current from the collector to the emitter. This is done with only very small trickles of current flowing into the base. More precisely, a transistor obeys the following rules: 1. The transistor is ON if the voltage at the collector is higher than the voltage at the emitter. It is OFF otherwise. 2. When the transistor is in the ON state, which is all we’ll be interested in, the following regularities are observed: 1. Current flows along only two paths: base-to-emitter and collector-to-emitter. A transistor does not allow current to flow from the emitter to either the collector or the base. 2. The current into the collector is proportional to the current into the base, with a constant of proportionality labelled β whose value is determined by the make of the transistor. β is typically quite large, so that the current into the collector is much larger than the current into the base. 3. The voltage at the emitter is 0.6V less than the voltage at the base. Combined, these rules allow us to solve basic transistor circuits. Here’s a basic circuit that amplifies input voltages using a transistor: Applying the above rules, we derive the following relationship between the input and output voltages: The +15V port at the top is important for the functioning of the amplifier because of the first rule of transistors, regarding when they are ON and OFF. 
If we just had it grounded, then the voltage at the collector would be negative (after the voltage drop from the resistor), so the transistor would switch off. Having the voltage start at +15V allows $V_C$ to be positive even after the voltage drop at the resistor (although it does set an upper bound on the input currents for which the circuit will still operate). And the end result is that any change in the input voltage will result in a corresponding change in the output voltage, amplified by a factor of approximately $-R_C/R_E$.

Why do we call this amplification? Well, because we can choose whatever values of resistance for $R_C$ and $R_E$ that we want! So we can make $R_C$ a thousand times larger than $R_E$, and we have created an amplifier that takes millivolt signals to volt signals. (The amplification tops out at +15V, because a signal larger than that would result in the collector voltage going negative and the transistor turning off.)

Ok, great, now you're a master of transistor circuits! We move on to Step 2: the differential amplifier.

## 2. Differential Amplifier

A differential amplifier is a circuit that amplifies the difference between two voltages and suppresses the commonalities. Here's how to construct a differential amplifier from two transistors:

Let's solve this circuit explicitly:

$V_{in} = 0.6 + R_E i_E + R_f (i_E + i_E') \\ V_{in}' = 0.6 + R_E i_E' + R_f (i_E + i_E')$

$\Delta(V_{in} - V_{in}') = R_E (i_E - i_E') = R_E (\beta + 1)(i_B - i_B') \\ \Delta(V_{in} + V_{in}') = (R_E + 2R_f)(i_E + i_E') = (R_E + 2R_f)(\beta + 1)(i_B + i_B')$

$V_{out} = 15 - R_C i_C \\ V_{out}' = 15 - R_C i_C'$

$\Delta(V_{out} - V_{out}') = -R_C (i_C - i_C') = -R_C \beta (i_B - i_B') \\ \Delta(V_{out} + V_{out}') = -R_C (i_C + i_C') = -R_C \beta (i_B + i_B')$

$A_{DM} = \text{Differential Amplification} = \frac{\Delta(V_{out} - V_{out}')}{\Delta(V_{in} - V_{in}')} = -\frac{\beta}{\beta + 1} \frac{R_C}{R_E}$

$A_{CM} = \text{Common Mode Amplification} = \frac{\Delta(V_{out} + V_{out}')}{\Delta(V_{in} + V_{in}')} = -\frac{\beta}{\beta + 1} \frac{R_C}{R_E + 2R_f}$

We can solve this directly for changes in one particular output voltage:

$\Delta V_{out} = A_{DM} \, \Delta(V_{in} - V_{in}') + A_{CM} \, \Delta(V_{in} + V_{in}')$

To make a differential amplifier, we require that $A_{DM}$ be large and $A_{CM}$ be small. We can achieve this if $R_f \gg R_C \gg R_E$. Notice that since the amplification is a function of the ratio of our resistors, we can easily make the amplification absolutely enormous.

Here's one way this might be useful: say that Alice wants to send a signal to Bob over some distance, but there is a source of macroscopic noise along the wire connecting the two. Perhaps the wire happens to pass through a region in which large magnetic disturbances sometimes occur. If the signal is encoded in a time-varying voltage on Alice's end, then what Bob gets may end up being a very warped version of what Alice sent. But suppose that instead Alice sends Bob two signals, one with the original message and the other just a static fixed voltage. Now the difference between the two signals represents Alice's message. And crucially, if these two signals are sent on wires that are right side-by-side, then they will pick up the same noise! This means that while the original message will be warped, so will the static signal, and by roughly the same amount! Which means that the difference between the two signals will still carry the information of the original message. This allows Alice to communicate with Bob through the noise, so long as Bob takes the two wires on his end and plugs them into a differential amplifier to suppress the common noise factor.

## 3. Operational Amplifier

To make an operational amplifier, we just need to make some very slight modifications to our differential amplifier. The first is that we'll make our amplification as large as possible.
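To get a feel for the sizes involved, here's a minimal sketch that evaluates the two gains above for some plausible, but entirely made-up, component values:

```python
# Differential and common-mode gain of the two-transistor differential amplifier.
beta = 100          # transistor current gain (assumed)
R_C = 10_000.0      # collector resistor, ohms (assumed)
R_E = 100.0         # emitter resistor, ohms (assumed)
R_f = 1_000_000.0   # shared tail resistor, ohms (assumed)

A_DM = -(beta / (beta + 1)) * R_C / R_E              # ~ -99: voltage differences get amplified ~100x
A_CM = -(beta / (beta + 1)) * R_C / (R_E + 2 * R_f)  # ~ -0.005: common signals get squashed

print(A_DM, A_CM)
```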
We can get this by putting multiple stages of differential amplifiers side by side (so if your differential amplifier amplifies by 100, two of them will amplify by 10,000, and three by 1,000,000). What'll happen now if we send in two voltages, with Vin very slightly larger than Vin'? Well, suppose that the difference is on the order of a single millivolt (mV). Then the output voltage will be on the order of 1 mV multiplied by our amplification factor of 1,000,000. This will be around 1000V.

So… do we expect to get out an output signal of 1000V? No, clearly not, because our maximum possible voltage at the output is +15V! We can't create energy from nowhere. Remember that the output voltage Vout is equal to $15\,\text{V} - R_C i_C$, and that the transistor only accepts current traveling from the collector to the emitter, not the other way around. This means that $i_C$ cannot be negative, so Vout cannot be larger than +15V. In addition, we know that Vout cannot be smaller than 0V (as this would turn off the transistor via Rule 1 above).

What this all means is that if Vin is even the slightest bit smaller than Vin', Vout will "max out", jumping to the peak voltage of +15V (remember that $A_{DM}$ is negative). And similarly, if Vin is even the slightest bit larger than Vin', then Vout will bottom out, dropping to 0V. The incredible sensitivity of the instrument is given to us by the massive amplification factor, so that it will act as a binary measurement of which of the voltages is larger, even if that difference is just barely detectable.

Often, the bottom voltage will be set to -15V rather than ground (0V), so that the signal we get out is either +15V (if Vin is smaller than Vin') or -15V (if Vin is greater than Vin'). That way, perfect equality of the inputs will be represented as 0V. That's the convention we'll use for the rest of the post. Also, instead of drawing out the whole diagram for this modified differential amplifier, we will use the following simpler image:

Ok, we're almost to an operational amplifier. The final step is to apply negative feedback! What we're doing here is just connecting the output voltage to the Vin' input. Let's think about what this does.

Suppose that Vin is larger than Vin'. Then Vout will quickly become negative, approaching -15V. But as Vout decreases, so does Vin'! So Vin' will decrease, getting closer to Vin. Once it passes Vin, the quantity Vin – Vin' suddenly becomes negative, so Vout will change direction, becoming more positive and approaching +15V. So Vin' will begin to increase! This will continue until it passes Vin again, at which point Vout will change signs again and Vin' will start decreasing. The result of this process will be that no matter what Vin' starts out as, it will quickly adjust to match Vin to a degree dictated by the resolution of our amplifier. And we've already established that the amplifier can be made to have an extremely high resolution. So this device serves as an extremely precise voltage-matcher!

This device, a super-sensitive differential amplifier with negative feedback, is an example of an operational amplifier. It might not be immediately obvious to you what's so interesting or powerful about this device. Sure it's very precise, but all it does is match voltages. How can we leverage this to get actually interesting computational behavior? Well, that's the most fun part!

## 4. Let's Compute!

Let's start out by seeing how an op-amp can be used to do calculus.
Though this might seem like an unusually complicated starting place, doing calculus with op amps is significantly easier and simpler than doing something like multiplication. As we saw in the last section, if we simply place a wire between Vout and Vin' (the input labelled with a negative sign on the diagram), then we get a voltage matcher. We get more interesting behavior if we place other circuit components along this wire. The negative feedback still exists, which means that the circuit will ultimately stabilize to a state in which the two inputs are identical, and where no current is flowing into the op-amp. But now the output voltage will not just be zero.

Let's take a look at the following op-amp circuit:

Notice that we still have negative feedback, because the V– input is connected to the output, albeit not directly. This means that the two inputs to the op amp must be equal, and since V+ is grounded, the other must be at 0V as well. It also means that no current is flowing into the op amp, as current only flows in while the system is stabilizing. Those two pieces of information – that the input voltages are equal and that no current flows through the device – are enough to allow us to solve the circuit. And there we have it, a circuit that takes in a voltage function that changes with time and outputs the integral of this function!

A natural question might be "integral from what to what"? The answer is just: the integral from the moment the circuit is connected to the present! As soon as the circuit is connected it begins integrating.

Alright, now let's just switch the capacitor and resistor in the circuit:

See if you can figure out for yourself what this circuit does! Alright, here's the solution. We have a differentiator! Feed in some input voltage, and you will get out a precise measurement of the rate of change of this voltage! I find it pretty amazing that doing integration and differentiation is so simple and natural.

Addition is another easy one:

We can use diodes to produce exponential and logarithm calculators. We utilize the ideal diode equation:

This equation can now be used to produce circuits for calculating exponentials and logarithms:

Again, these are all fairly simple-looking circuits. But now let's look at the simplest way of computing multiplication using an op amp. Schematically, it looks like this:

And here's the circuit in full detail:

There's another method besides this one that you can use to do multiplication, but it's similarly complicated. It's quite intriguing that multiplication turns out to be such an unnatural thing to get nature to do to voltages, while addition, exponentiation, integration, and the rest are so simple.

What about boolean logic? Well, it turns out that we already have a lot of it. Our addition corresponds to an OR gate if the input voltages are binary signals. We also already have an AND gate, because we have multiplication! And depending on our choice of logic levels (which voltage corresponds to True and which corresponds to False), we can really easily sketch a circuit that does negation:

And of course if we have NOT and we have AND, then we have NAND, and if we have NAND, then we have all the rest of binary logic. We can begin to build flip-flop gates out of the NAND gates to get simple memory cells, and we're on our way to Turing-completeness!

# Complex numbers in physics

"You cannot observe an imaginary number in reality." Have you ever heard this claim before? It has a nice ring to it, and sounds almost tautological.
I’ve personally heard the claim made by several professors of physics, alongside a host of less qualified people. So let’s take a close look at it and see if it holds up to scrutiny. Can you in fact observe imaginary numbers in reality? First of all, let’s ask a much simpler sounding question. Can you ever observe a real number in reality? Well, yeah. Position, time, charge, mass, and so on; pretty much any physical quantity you name can be represented by real numbers. But if we’re going to be pedantic, when we measure the position of a particle, we are not technically observing a number. We are observing a position, a physical quantity, which we find has a useful representation as a real number. Color is another physical phenomena that is usually represented mathematically as a real number (the frequency of emitted light). But we do not necessarily want to say that color is a number. No, we say that color is a physical phenomena, which we find is usefully described as a real number. More specifically, we have some physical phenomena whose structure contains many similarities to the abstract structure of these mathematical objects known as real numbers, so it behooves us to translate statements about the phenomena over to the platonic realm of math, where the work we do is precise and logically certain. Once we have done the mathematical manipulations we desire to get useful results, we translate our statements about numbers back into statements about the physical phenomena. There are really just two possible failure points in this process. First, it might be that the mathematical framework actually doesn’t have the same structure as the physical phenomena we have chosen it to describe. And second, it might be that the mathematical manipulations we do once we have translated our physical statements into mathematical statements contain some error (like maybe we accidentally divided by zero or something). So on one (overly literal) interpretation, when somebody asks whether a certain abstract mathematical object exists in reality, the answer is always no. Numbers and functions and groups and rings don’t exist in reality, because they are by definition abstract objects, not concrete ones. But this way of thinking about the relationship between math and physics does give us a more useful way to answer the question. Do real numbers exist in reality? Well, yes, they exist insofar as their structure is mirrored in the structure of some real world phenomena! If we’re being careful with our language, we might want to say that real numbers are instantiated in reality instead of saying that they exist. So, are imaginary numbers instantiated in reality? Well, yes, of course they are! The wave function in quantum mechanics is an explicitly complex entity — try doing quantum mechanics with only real-valued wave functions, and you will fail, dramatically. The existence of imaginary values of the wave function is absolutely necessary in order to get an adequate description of our reality. So if you believe quantum mechanics, then you believe that imaginary numbers are actually embedded in the structure of the most fundamental object there is: the wave function of the universe. A simpler example: any wave-like phenomena is best described in the language of complex numbers. A ripple in the water is described as a complex function of position and time, where the complex phase of the function represents the propagation of the wave through space and time. 
Any time you see a wave-like phenomena (by which I mean any process that is periodic, including something as prosaic as a ticking clock), you are looking at a physical process that really nicely mirrors the structure of complex numbers. Now we’ll finally get to the main point of this post, which is to show off a particularly elegant and powerful instance of complex numbers applying to physics. This example is in the realm of electromagnetism, specifically the approximation of electromagnetism that we use when we talk about circuits, resistors, capacitors, and so on. Suppose somebody comes up to you in the street, hands you the following circuit diagram, and asks you to solve it: If you’ve never studied circuits before, you might stare at it blankly for a moment, then throw your hands up in puzzlement and give them back their sheet of paper. If you’ve learned a little bit about circuits, you might stare at it blankly for a few moments, then write down some complicated differential equations, stare at them for a bit, and then throw your hands up in frustration and hand the sheet back to them. And if you know a lot about circuits, then you’ll probably smile, write down a few short lines of equations involving imaginary numbers, and hand back the paper with the circuit solved. Basically, the way that students are initially taught to solve circuits is to translate them into differential equations. These differential equations quickly become immensely difficult to solve (as differential equations tend to do). And so, while a few simple circuits are nicely solvable with this method, any interesting circuits that do nontrivial computations are at best a massive headache to solve with this method, and at worst actually infeasible. This is the real numbers approach to circuits. But it’s not the end of the story. There’s another way! A better and more beautiful way! And it uses complex numbers. Here’s the idea: circuits are really easy to solve if all of your circuit components are just resistors. For a resistor, the voltage across it is just linearly proportional to the current through it. No derivatives or integrals required, we just use basic algebra to solve one from the other. Furthermore, we have some nice little rules for simplifying complicated-looking resistor circuits by finding equivalent resistances. The problem is that interesting circuits don’t just involve resistors. They also contain things like inductors and capacitors. And these circuit elements don’t have that nice linear relationship between current and voltage. For capacitors, the relationship is between voltage and the integral of current. And for inductors, the relationship is between voltage and the derivative of current. Thus, a circuit involving a capacitor, a resistor, and an inductor, is going to be solved by an equation that involves the derivative of current, the current itself, and the integral of current. In other words, a circuit involving all three types of circuit elements is in general going to be solved by a second-order differential equation. And those are a mess. The amazing thing is that if instead of treating current and voltage as real-valued functions, you treat them as complex-valued functions, what you find is that capacitors and inductors behave exactly like resistors. Voltage and current become related by a simple linear equation once more, with no derivatives or integrals involved. 
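Here's a minimal sketch of that bookkeeping using Python's built-in complex numbers, with made-up component values and a made-up driving frequency:

```python
import cmath, math

# A series circuit driven by V0 * exp(i * w * t): every element becomes a complex
# "resistance" (an impedance), and the whole circuit is solved with ordinary algebra.
V0 = 10.0                      # drive amplitude in volts (made up)
w = 2 * math.pi * 1000         # angular frequency of a 1 kHz drive (made up)
R, L, C = 100.0, 50e-3, 1e-6   # resistance, inductance, capacitance (made up)

Z_R = R                  # resistor: a positive real constant of proportionality
Z_L = 1j * w * L         # inductor: a positive imaginary one
Z_C = 1 / (1j * w * C)   # capacitor: a negative imaginary one

Z_total = Z_R + Z_L + Z_C   # impedances in series add, exactly like resistances
I = V0 / Z_total            # "Ohm's law", now with complex numbers

print(abs(I))           # amplitude of the physical current, in amps
print(cmath.phase(I))   # phase of the current relative to the drive, in radians
```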
And the distinguishing characteristic of a capacitor or an inductor is that the constant of proportionality between voltage and current is an imaginary number instead of a real number. More specifically, a capacitor is a circuit element for which voltage is equal to a negative imaginary number times the current. An inductor is a circuit element for which voltage is equal to a positive imaginary number times the current. And a resistor is a circuit element for which voltage is equal to a positive real number times the current. Suppose our voltage is described by a simple complex function: Vexp(iωt). Then we can describe the relationship between voltage and current for each of these circuit elements as follows: Notice that now the equations for all three circuit components look just like resistors! Just with different constants of proportionality relating voltage to current. We can even redraw our original diagrams to make the point: Fourier showed that any function whatsoever can be described as a sum of functions that look like Vo exp(iωt). And there’s a nice theorem called the superposition theorem that allows us to use this to solve any circuit, no matter what the voltage is. So, let’s go back to our starting circuit. What we can now do is just redraw it, with all capacitors and inductors substituted for resistors with imaginary resistances: This may look just as complicated as before, but it’s actually much much simpler. We can use the rules for reducing resistor equations to solve an immensely simpler circuit: Our circuit is now much simpler, but the original complexity had to be offloaded somewhere. As you can see, it’s been offloaded onto the complex (in both senses of the word) value of the resistance of our simplified circuit. But this type of complexity is much easier to deal with, because it’s just algebra, not calculus! To solve the current in our circuit now, we don’t need to solve a single differential equation, we just have to do some algebraic rearranging of terms. We’ll give the final equivalent resistance the name “Z”. Now suppose that our original voltage was just the real part of V (that is, Vcos(ωt)). Then the current will also be the real part of I. And if our original voltage was just the imaginary part of V, then the current will be the imaginary part of I. And there we have it! We’ve solved our circuit without struggling over a single differential equation, simply by assuming that our quantities were complex numbers instead of real numbers. This is one of my favorite examples of complex numbers playing an enormously important role in physics. It’s true that it’s a less clear-cut example of complex numbers being necessary to describe a physical phenomena, because we could have in principle done everything with purely real-valued functions by solving some differential equations, but it also highlights the way in which accepting a complex view of the world can simplify your life. # The Central Paradox of Statistical Mechanics: The Problem of The Past This is the third part in a three-part series on the foundations of statistical mechanics. 1. The Necessity of Statistical Mechanics for Getting Macro From Micro 2. Is The Fundamental Postulate of Statistical Mechanics A Priori? 3. The Central Paradox of Statistical Mechanics: The Problem of The Past — — — What I’ve argued for so far is the following set of claims: 1. To successfully predict the behavior of macroscopic systems, we need something above and beyond the microphysical laws. 2. 
This extra thing we need is the fundamental postulate of statistical mechanics, which assigns a uniform distribution over the region of phase space consistent with what you know about the system. This postulate allows us to prove all the things we want to say about the future, such as “gases expand”, “ice cubes melt”, “people age” and so on. 3. This fundamental postulate is not justifiable on a priori grounds, as it is fundamentally an empirical claim about how frequently different micro states pop up in our universe. Different initial conditions give rise to different such frequencies, so that a claim to a priori access to the fundamental postulate is a claim to a priori access to the precise details of the initial condition of the universe. There’s just one problem with all this… apply our postulate to the past, and everything breaks. Notice that I said that the fundamental postulate allows us to prove all the things we want to say about the future. That wording was chosen carefully. What happens if you try to apply the microphysical laws + the fundamental postulate to predict the past of some macroscopic system? It turns out that all hell breaks loose. Gases spontaneously contract, ice cubes form from puddles of water, and brains pop out of thermal equilibrium. Why does this happen? Very simply, we start with two fully time reversible premises (the microphysical laws and the fundamental postulate). We apply it to present knowledge of some state, the description of which does not specify a special time direction. So any conclusion we get must as a matter of logic be time reversible as well! You can’t start with premises that treat the past as the mirror image of the future, and using just the rules of logical equivalence derive a conclusion that treats the past as fundamentally different from the future. And what this means is that if you conclude that entropy increases towards the future, then you must also conclude that entropy increases towards the past. Which is to say that we came from a higher entropy state, and ultimately (over a long enough time scale and insofar as you think that our universe is headed to thermal equilibrium) from thermal equilibrium. Let’s flesh this argument out a little more. Consider a half-melted ice cube sitting in the sun. The microphysical laws + the fundamental postulate tell us that the region of phase space consisting of states in which the ice cube is entirely melted is much much much larger than the region of phase space in which it is fully unmelted. So much larger, in fact, that it’s hard to express using ordinary English words. This is why we conclude that any trajectory through phase space that passes through the present state of the system (the half-melted cube) is almost certainly going to quickly move towards the regions of phase space in which the cube is fully melted. But for the exact same reason, if we look at the set of trajectories that pass through the present state of the system, the vast vast vast majority of them will have come from the fully-melted regions of phase space. And what this means is that the inevitable result of our calculation of the ice cube’s history will be that a few moments ago it was a puddle of water, and then it spontaneously solidified and formed into a half-melted ice cube. This argument generalizes! What’s the most likely past history of you, according to statistical mechanics? 
It’s not that the solar system coalesced from a haze of gases strewn through space by a past supernova, such that a planet would form in the Goldilocks zone and develop life, which would then gradually evolve through natural selection to the point where you are sitting in whatever room you’re sitting in reading this post. This trajectory through phase space is enormously unlikely. The much much much more likely past trajectory of you through phase space is that a little while ago you were a bunch of particles dispersed through a universe at thermal equilibrium, which happened to spontaneously coalesce into a brain that has time to register a few moments of experience before dissipating back into chaos. “What about all of my memories of the past?” you say. As it happens the most likely explanation of these memories is not that they are veridical copies of real happenings in the universe but illusions, manufactured from randomness. Basically, if you buy everything I’ve argued in the first two parts, then you are forced to conclude that the universe is most likely near thermal equilibrium, with your current experience of it arising as a spontaneous dip in entropy, just enough to produce a conscious brain but no more. There are at least two big problems with this view. Problem 1: This conclusion is, we think, extremely empirically wrong! The ice cube in front of you didn’t spontaneously form from a puddle of water, uncracked eggs weren’t a moment ago scrambled, and your memories are to some degree veridical. If you really believe that you are merely a spontaneous dip in entropy, then your prediction for the next minute will be the gradual dissolution of your brain and loss of consciousness. Now, wait a minute and see if this happens. Still here? Good! Problem 2: The conclusion cannot be simultaneously believed and justified. If you think that you’re a thermal fluctuation, then you shouldn’t credit any of your memories as telling you anything about the world. But then your whole justification to coming to the conclusion in the first place (the experiments that led us to conclude that physics is time-reversible and that the fundamental postulate is true) is undermined! Either you believe it without justification, or you don’t believe despite justification. Said another way, no reflective equilibrium exists at an entropy minimum. David Albert calls this peculiar epistemic state cognitively unstable, as it’s not clear where exactly it should leave you. Reflect for a moment on how strange of a situation we are in here. Starting from very basic observations of the world, involving its time-reversibility on the micro scale and the increase in entropy of systems, we see that we are inevitably led to the conclusion that we are almost certainly thermal fluctuations, brains popping out of the void. I promise you that no trick has been pulled here, this really is the state of the philosophy of statistical mechanics! The big issue is how to deal with this strange situation. One approach is to say the following: Our problem is that our predictions work towards the future but not the past. So suppose that we simply add as a new fundamental postulate the proposition that long long ago the universe had an incredibly low entropy. That is, suppose that instead of just starting with the microphysical laws and the fundamental postulate of statistical mechanics, we added a third claims: the Past Hypothesis. The Past Hypothesis should be understood as an augmentation of our Fundamental Postulate. 
Taken together, the two postulates say that our probability distribution over possible microstates should not be uniform over phase space. Instead, it should be what you get when you take the uniform distribution, and then condition on the distant past being extremely low entropy. This process of conditioning clearly privileges one direction of time over the other, and so the symmetry is broken.

It's worth reflecting for a moment on the strangeness of the epistemic status of the Past Hypothesis. It happens that we have over time accumulated a ton of observational evidence for the occurrence of the Big Bang. But none of this evidence has anything to do with our reasons for accepting the Past Hypothesis. If we buy the whole line of argument so far, our conclusion that something like a Big Bang occurred becomes something that we are forced to believe for deep logical reasons, on pain of cognitive instability and self-undermining belief. Anybody who denies that the Big Bang (or some similar enormously low-entropy past state) occurred has to contend with their view collapsing in self-contradiction upon observing the physical laws!

# Is The Fundamental Postulate of Statistical Mechanics A Priori?

This is the second part in a three-part series on the foundations of statistical mechanics.

1. The Necessity of Statistical Mechanics for Getting Macro From Micro
2. Is The Fundamental Postulate of Statistical Mechanics A Priori?
3. The Central Paradox of Statistical Mechanics: The Problem of The Past

— — —

The fantastic empirical success of the fundamental postulate gives us a great amount of assurance that the postulate is a good one. But it's worth asking whether that's the only reason that we should like this postulate, or if it has some solid a priori justification. The basic principle of "when you're unsure, just distribute credences evenly over phase space" certainly strikes many people as highly intuitive and justifiable on a priori grounds. But there are some huge problems with this way of thinking, one of which I've already hinted at.

Here's a thought experiment that illustrates the problem. There is a factory in your town that produces cubic boxes. All you know about this factory is that the boxes that they produce all have a volume between 0 m³ and 1 m³. You are going to be delivered a box produced by this factory, and are asked to represent your state of knowledge about the box with a probability distribution. What distribution should you use?

Suppose you say "I should be indifferent over all the possible boxes. So I should have a uniform distribution over the volumes from 0 m³ to 1 m³." This might seem reasonable at first blush. But what if somebody else said "Yes, you should be indifferent over all the possible boxes, but actually the uniform distribution should be over the side lengths from 0 m to 1 m, not volumes." This would be a very different probability distribution! For example, if the probability that the side length is greater than 0.5 m is 50%, then the probability that the volume is greater than (0.5)³ = 1/8 m³ is also 50%! Uniform over side length is not the same as uniform over volume (or surface area, for that matter).

Now, how do you choose between a uniform distribution over volumes and a uniform distribution over side lengths? After all, you know nothing about the process that the factory is using to produce the boxes, and whether it is based on volume or side length (or something else); all you know is that all boxes are between 0 m³ and 1 m³.
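A quick numerical check (a sketch I am adding, with one million simulated boxes) makes the divergence concrete: "indifference" over side lengths and "indifference" over volumes assign very different probabilities to the same event.

```python
# Sketch: "uniform over side length" and "uniform over volume" are different priors.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

side = rng.uniform(0.0, 1.0, n)      # indifference over side lengths (m)
vol_from_side = side ** 3            # the implied volumes (m^3)
vol = rng.uniform(0.0, 1.0, n)       # indifference over volumes (m^3)

print("P(volume > 1/8) if uniform over side length:", np.mean(vol_from_side > 1/8))  # ~0.5
print("P(volume > 1/8) if uniform over volume:     ", np.mean(vol > 1/8))            # ~0.875
```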
The lesson of this thought experiment is that the statement we started with (“I should be indifferent over all possible boxes”) was actually not even well-defined. There’s not just one unique measure over a continuous space, and in general the notion that “all possibilities are equally likely” is highly language-dependent. The exact same applies to phase space, as position and momentum are continuous quantities. Imagine that somebody instead of talking about phase space, only talked about “craze space”, in which all positions become positions cubed, and all momentum values become natural logs of momentum. This space would still contain all possible microstates of your system. What’s more, the fundamental laws of nature could be rewritten in a way that uses only craze space quantities, not phase space quantities. And needless to say, being indifferent over phase space would not be the same as being indifferent over craze space. Spend enough time looking at attempts to justify a unique interpretation of the statement “All states are equally likely”, when your space of states is a continuous infinity, and you’ll realize that all such attempts are deeply dependent upon arbitrary choices of language. The maximum information entropy probability distribution is afflicted with the exact same problem, because the entropy of your distribution is going to depend on the language you’re using to describe it! The entropy of a distribution in phase space is NOT the same as the entropy of the equivalent distribution transformed to craze space. Let’s summarize this section. If somebody tells you that the fundamental postulate says that all microstates compatible with what you know about the macroscopic features of your system are equally likely, the proper response is something like “Equally likely? That sounds like you’re talking about a uniform distribution. But uniform over what? Oh, position and momentum? Well, why’d you make that choice?” And if they point out that the laws of physics are expressed in terms of position and momentum, you just disagree and say “No, actually I prefer writing the laws of physics in terms of position cubed and log momentum!” (Substitute in any choice of monotonic functions). If they object on the grounds of simplicity, point out that position and momentum are only simple as measured from a standpoint that takes them to be the fundamental concepts, and that from your perspective, getting position and momentum requires applying complicated inverse transformations to your monotonic transformation of the chosen coordinates. And if they object on the grounds of naturalness, the right response is probably something like “Tell me more about this ’naturalness’. How do you know what’s natural or unnatural? It seems to me that your choice of what physical concepts count as natural is a manifestation of deep selection pressures that push any beings whose survival depends on modeling and manipulating their surroundings towards forming an empirically accurate model of the macroscopic world. So that when you say that position is more natural than log(position), what I hear is that the fundamental postulate is a very useful tool. And you can’t use the naturalness of the choice of position to justify the fundamental postulate, when your perception of the naturalness of position is the result of the empirical success of the fundamental postulate!” In my judgement, none of the a priori arguments work, and fundamentally the reason is that the fundamental postulate is an empirical claim. 
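To make the entropy remark above concrete, here is a small sketch (mine, using a Monte Carlo estimate of differential entropy) showing that the very same samples, relabeled by the monotonic map u = x³, are assigned a different entropy.

```python
# Sketch: differential entropy is not invariant under a change of coordinates,
# so "the maximum-entropy distribution" depends on the chosen description.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 1_000_000)   # "phase space" coordinate: density 1 on [0, 1]
u = x ** 3                             # "craze space" coordinate: the same samples, relabeled

# Density of x is 1 on [0, 1], so its differential entropy is exactly 0 nats.
h_x = 0.0
# Density of u = x^3 is (1/3) * u**(-2/3) on (0, 1]; estimate H = -E[log density] by Monte Carlo.
h_u = -np.mean(np.log((1.0 / 3.0) * u ** (-2.0 / 3.0)))

print(f"entropy described in x:       {h_x:.3f} nats")   # 0.000
print(f"entropy described in u = x^3: {h_u:.3f} nats")   # about -0.90: same samples, different "entropy"
```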
There's no a priori principle of rationality that tells us that boxes of gases tend to equilibrate, because you can construct a universe whose initial microstate is such that its entire history is one of entropy radically decreasing, gases concentrating, eggs unscrambling, ice cubes unmelting, and so on. Why is this possible? Because it's consistent with the microphysical laws that the universe started in an enormously low entropy configuration, so it's gotta also be consistent with the microphysical laws for the entire universe to spend its entire lifetime decreasing in entropy. The general principle is: If you believe that something is physically possible, then you should believe its time-inverse is possible as well.

Let's pause and take stock. What I've argued for so far is the following set of claims:

1. To successfully predict the behavior of macroscopic systems, we need something above and beyond the microphysical laws.
2. This extra thing we need is the fundamental postulate of statistical mechanics, which assigns a uniform distribution over the region of phase space consistent with what you know about the system. This postulate allows us to prove all the things we want to say about the future, such as "gases expand", "ice cubes melt", "people age" and so on.
3. This fundamental postulate is not justifiable on a priori grounds, as it is fundamentally an empirical claim about how frequently different microstates pop up in our universe. Different initial conditions give rise to different such frequencies, so that a claim to a priori access to the fundamental postulate is a claim to a priori access to the precise details of the initial condition of the universe.

There's just one problem with all this… apply our postulate to the past, and everything breaks.

Up next: Why does statistical mechanics give crazy answers about the past? Where did we go wrong?

# A Cognitive Instability Puzzle, Part 2

This is a follow-up to this previous post, in which I present three unusual cases of belief updating. Read it before you read this. I find these cases very puzzling, and I don't have a definite conclusion for any of them. They share some deep similarities. Let's break all of them down into their basic logical structure:

Joe
Joe initially believes in classical logic and is certain of some other stuff, call it X. An argument A exists that concludes that X can't be true if classical logic is true. If Joe believes classical logic, then he believes A. If Joe believes intuitionist logic, then he doesn't believe A.

Karl
Karl initially believes in God and is certain of some other stuff about evil, call it E. An argument A exists that concludes that God can't exist if E is true. If Karl believes in God, then he believes A. If Karl doesn't believe in God, then he doesn't believe A.

Tommy
Tommy initially believes in her brain's reliability and is certain of some other stuff about her experiences, call it Q. An argument A exists that concludes that her brain can't be reliable if Q is true. If Tommy believes in her brain's reliability, then she believes A. If Tommy doesn't believe in her brain's reliability, then she doesn't believe A.

First of all, note that all three of these cases are ones in which Bayesian reasoning won't work. Joe is uncertain about the law of the excluded middle, without which you don't have probability theory. Karl is uncertain about the meaning of the term 'evil', such that the same proposition switches from being truth-apt to being meaningless when he updates his beliefs.
Probability theory doesn't accommodate such variability in its language. And Tommy is entertaining a hypothesis according to which she no longer accepts any deductive or inductive logic, which is inconsistent with Bayesianism in an even more extreme way than Joe.

The more important general theme is that in all three cases, the following two things are true:

1) If an agent believes A, then they also believe an argument that concludes -A.
2) If that agent believes -A, then they don't believe the argument that concludes -A.

Notice that if an agent initially doesn't believe A, then they have no problem. They believe -A, and also happen to not believe that specific argument concluding -A, and that's fine! There's no instability or self-contradiction there whatsoever. So that's really not where the issue lies.

The mystery is the following: If the only reason that an agent changed their mind from A to -A is the argument that they no longer buy, then what should they do? Once they've adopted the stance that A is false, should they stay there, reasoning that if they accept A they will be led to a contradiction? Or should they jump back to A, reasoning that the initial argument that led them there was flawed? Said another way, should they evaluate the argument against A from their own standards, or from A's standards? If they use their own standards, then they are in an unstable position, where they jump back and forth between A and -A. And if they always use A's standards… well, then we get the conclusion that Tommy should believe herself to be a Boltzmann brain. In addition, if they are asked why they don't believe A, then they find themselves in the weird position of giving an explanation in terms of an argument that they believe to be false!

I find myself believing that either Joe should be an intuitionist, Karl an atheist, and Tommy a radical skeptic, OR Joe a classical logician, Karl a theist, and Tommy a believer in her brain's reliability. That is, it seems like there aren't any significant enough disanalogies between these three cases to warrant concluding one thing in one case and then going the other direction in another.

# Logic, Theism, and Boltzmann Brains: On Cognitively Unstable Beliefs

First case
Classical propositional logic accepts that the proposition A ∨ -A is necessarily true. This is called the law of the excluded middle. Intuitionist logic differs in that it denies this axiom. Suppose that Joe is a believer in classical propositional logic (but also reserves some credence for intuitionist logic). Joe also believes a set of other propositions, whose conjunction we'll call X, and has total certainty in X. One day Joe discovers that a contradiction can be derived from X, in a proof that uses the law of the excluded middle. Since Joe is certain that X is true, he knows that X isn't the problem, and instead it must be the law of the excluded middle. So Joe rejects the law of the excluded middle and becomes an intuitionist.

The problem is, as an intuitionist, Joe now no longer accepts the validity of the argument that starts at X and concludes -X! Why? Because it uses the law of the excluded middle, which he doesn't accept. Should Joe believe in classical propositional logic or intuitionism?

Second case
Karl is a theist. He isn't absolutely certain that theism is correct, but holds a majority of his credence in theism (and the rest in atheism).
Karl is also 100% certain of the following claim: "If atheism is true, then the concept of 'evil' is meaningless", and believes that logically valid arguments cannot be made using meaningless concepts. One day somebody presents the problem of evil to Karl, and he sees it as a crushing objection to theism. He realizes that theism, plus some other beliefs about evil that he's 100% confident in, leads to a contradiction. So since he can't deny these other beliefs, he is led to atheism.

The problem is, as an atheist, Karl no longer accepts the validity of the argument that starts at theism and concludes atheism! Why? Because the argument relies on using the concept of 'evil', and he is now certain that this concept is meaningless, and thus cannot be used in logically valid arguments. Should Karl be a theist or an atheist?

Third case
Tommy is a scientist, and she believes that her brain is reliable. By this, I mean that she trusts her ability to reason both deductively and inductively. However, she isn't totally certain about this, and holds out a little credence for radical skepticism. She is also totally certain about the content of her experiences, though not their interpretation (i.e. if she sees red, she is 100% confident that she is experiencing red, although she isn't necessarily certain about what in the external world is causing the experience). One day Tommy discovers that reasoning deductively and inductively from her experiences leads her to a model of the world that entails that her brain is actually a quantum fluctuation blipping into existence outside the event horizon of a black hole. She realizes that this means that with overwhelmingly high probability, her brain is not reliable and is just producing random noise uncorrelated with reality.

The problem is, if Tommy believes that her brain is not reliable, then she can no longer accept the validity of the argument that led her to this position! Why? Well, she no longer trusts her ability to reason deductively or inductively. So she can't accept any argument, let alone this particular one. What should Tommy believe?

— — —

How are these three cases similar and different? If you think that Joe should be an intuitionist, or Karl an atheist, then should Tommy believe herself to be a black hole brain? Because it turns out that many cosmologists have found themselves to be in a situation analogous to Case 3! (Link.) I have my own thoughts on this, but I won't share them for now.

# How will quantum computing impact the world?

A friend of mine recently showed me an essay series on quantum computers. These essays are fantastically well written and original, and I highly encourage anybody with the slightest interest in the topic to check them out. They are also interesting to read from a pedagogical perspective, as experiments in a new style of teaching (self-described as an "experimental mnemonic medium"). There's one particular part of the post which articulated the potential impact of quantum computing better than I've seen it articulated before. Reading it has made me update some of my opinions about the way that quantum computers will change the world, and so I want to post that section here with full credit to the original authors Michael Nielsen and Andy Matuschak. Seriously, go to the original post and read the whole thing! You won't regret it.

## No, really, what are quantum computers good for?
It's comforting that we can always simulate a classical circuit – it means quantum computers aren't slower than classical computers – but doesn't answer the question of the last section: what problems are quantum computers good for? Can we find shortcuts that make them systematically faster than classical computers? It turns out there's no general way known to do that. But there are some interesting classes of computation where quantum computers outperform classical.

Over the long term, I believe the most important use of quantum computers will be simulating other quantum systems. That may sound esoteric – why would anyone apart from a quantum physicist care about simulating quantum systems? But everybody in the future will (or, at least, will care about the consequences). The world is made up of quantum systems. Pharmaceutical companies employ thousands of chemists who synthesize molecules and characterize their properties. This is currently a very slow and painstaking process. In an ideal world they'd get the same information thousands or millions of times faster, by doing highly accurate computer simulations. And they'd get much more useful information, answering questions chemists can't possibly hope to answer today. Unfortunately, classical computers are terrible at simulating quantum systems.

The reason classical computers are bad at simulating quantum systems isn't difficult to understand. Suppose we have a molecule containing n atoms – for a small molecule, n may be 1, for a complex molecule it may be hundreds or thousands or even more. And suppose we think of each atom as a qubit (not true, but go with it): to describe the system we'd need 2^n different amplitudes, one amplitude for each n-bit computational basis state, e.g., |010011…⟩.

Of course, atoms aren't qubits. They're more complicated, and we need more amplitudes to describe them. Without getting into details, the rough scaling for an n-atom molecule is that we need k^n amplitudes, where k > 2. The value of k depends upon context – which aspects of the atom's behavior are important. For generic quantum simulations k may be in the hundreds or more. That's a lot of amplitudes! Even for comparatively simple atoms and small values of n, it means the number of amplitudes will be in the trillions. And it rises very rapidly, doubling or more for each extra atom. If k = 100, then even n = 10 atoms will require 100 million trillion amplitudes. That's a lot of amplitudes for a pretty simple molecule.

The result is that simulating such systems is incredibly hard. Just storing the amplitudes requires mindboggling amounts of computer memory. Simulating how they change in time is even more challenging, involving immensely complicated updates to all the amplitudes. Physicists and chemists have found some clever tricks for simplifying the situation. But even with those tricks simulating quantum systems on classical computers seems to be impractical, except for tiny molecules, or in special situations.

The reason most educated people today don't know that simulating quantum systems is important is that classical computers are so bad at it that it's never been practical to do. We've been living too early in history to understand how incredibly important quantum simulation really is. That's going to change over the coming century. Many of these problems will become vastly easier when we have scalable quantum computers, since quantum computers turn out to be fantastically well suited to simulating quantum systems.
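To put rough numbers on the storage problem described above, here is a back-of-the-envelope sketch (my own, assuming 16 bytes per complex amplitude); the k = 100, n = 10 row reproduces the "100 million trillion amplitudes" figure.

```python
# Rough memory cost of storing all k**n amplitudes of an n-"atom" system on a classical machine.
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number (assumption)

def classical_memory_bytes(k, n):
    """Bytes needed to store all k**n amplitudes."""
    return (k ** n) * BYTES_PER_AMPLITUDE

for k, n in [(2, 30), (2, 50), (100, 5), (100, 10)]:
    petabytes = classical_memory_bytes(k, n) / 1e15
    print(f"k={k:>3}, n={n:>2}: {k**n:.3e} amplitudes, about {petabytes:.3e} PB")
# k=2, n=50 already needs ~18 PB; k=100, n=10 (10**20 amplitudes) needs ~1.6e6 PB.
```

The contrast with the quantum case, where each extra simulated atom costs only a handful of extra qubits rather than a multiplicative blow-up in memory, is exactly the point the next paragraph makes.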
Instead of each extra simulated atom requiring a doubling (or more) in classical computer memory, a quantum computer will need just a small (and constant) number of extra qubits. One way of thinking of this is as a loose quantum corollary to Moore's law:

The quantum corollary to Moore's law: Assuming both quantum and classical computers double in capacity every few years, the size of the quantum system we can simulate scales linearly with time on the best available classical computers, and exponentially with time on the best available quantum computers.

In the long run, quantum computers will win, and win easily. The punchline is that it's reasonable to suspect that if we could simulate quantum systems easily, we could greatly speed up drug discovery, and the discovery of other new types of materials. I will risk the ire of my (understandably) hype-averse colleagues and say bluntly what I believe the likely impact of quantum simulation will be: there's at least a 50 percent chance quantum simulation will result in one or more multi-trillion dollar industries. And there's at least a 30 percent chance it will completely change human civilization. The catch: I don't mean in 5 years, or 10 years, or even 20 years. I'm talking more over 100 years. And I could be wrong.

What makes me suspect this may be so important? For most of history we humans understood almost nothing about what matter is. That's changed over the past century or so, as we've built an amazingly detailed understanding of matter. But while that understanding has grown, our ability to control matter has lagged. Essentially, we've relied on what nature accidentally provided for us. We've gotten somewhat better at doing things like synthesizing new chemical elements and new molecules, but our control is still very primitive. We're now in the early days of a transition where we go from having almost no control of matter to having almost complete control of matter. Matter will become programmable; it will be designable. This will be as big a transition in our understanding of matter as the move from mechanical computing devices to modern computers was for computing. What qualitatively new forms of matter will we create? I don't know, but the ability to use quantum computers to simulate quantum systems will be an essential part of this burgeoning design science.

Quantum computing for the very curious (Andy Matuschak and Michael Nielsen)

I only recently realized how philosophical the original EPR paper was. It starts out by providing a sufficient condition for something to be an "element of reality", and proceeds from there to try to show the incompleteness of quantum mechanics. Let's walk through this argument here:

The EPR Reality Condition: If at time t we can know the value of a measurable quantity with certainty without in any way disturbing the system, then there is an element of reality corresponding to that measurable quantity at time t. (I.e., this is a sufficient condition for a measurable property of a system at some moment to be an element of the reality of that system at that moment.)

Example 1: If you measure an electron spin to be up in the z direction, then quantum mechanics tells you that you can predict with certainty that the spin in the z direction will be up at any future measurement. Since you can predict this with certainty, there must be an aspect of reality corresponding to the electron z-spin after you have measured it to be up the first time.
Example 2: If you measure an electron spin to be up in the z-direction, then QM tells you that you cannot predict the result of measuring the spin in the x-direction at a later time. So the EPR reality condition does not entail that the x-spin is an element of the reality of this electron. It also doesn't entail that the x-spin is NOT an element of the reality of this electron, because the EPR reality condition is merely a sufficient condition, not a necessary condition.

Now, what does the EPR reality condition have to say about two particles with entangled spins? Well, suppose the state of the system is initially

|Ψ⟩ = (|↑↓⟩ – |↓↑⟩) / √2

This state has the unusual property that it has the same form no matter what basis you express it in. You can show for yourself that in the x-spin basis, the state is equal to

|Ψ⟩ = (|→←⟩ – |←→⟩) / √2

Now, suppose that you measure the first electron in the z-basis and find it to be up. If you do this, then you know with certainty that the other electron will be measured to be down. This means that after measuring it in the z-basis, the EPR reality condition says that electron 2 has z-spin down as an element of reality. What if you instead measure the first electron in the x-basis and find it to be right? Well, then you can predict with certainty that electron 2 will be measured to be left, and the EPR reality condition will tell you that electron 2 has x-spin left as an element of reality.

Okay, so we have two claims: 1. That after measuring the z-spin of electron 1, electron 2 has a definite z-spin, and 2. that after measuring the x-spin of electron 1, electron 2 has a definite x-spin. But notice that these two claims are not necessarily inconsistent with the quantum formalism, since they refer to the state of the system after a particular measurement.

What's required to bring out a contradiction is a further assumption, namely the assumption of locality. For our purposes here, locality just means that it's possible to measure the spin of electron 1 in such a way as to not disturb the state of electron 2. This is a really weak assumption! It's not saying that any time you measure the spin of electron 1, you will not have disturbed electron 2. It's just saying that it's possible in principle to set up a measurement of the first electron in such a way as to not disturb the second one. For instance, take electrons 1 and 2 to opposite sides of the galaxy, seal them away in totally closed off and causally isolated containers, and then measure electron 1. If you agree that this should not disturb electron 2, then you agree with the assumption of locality.

Now, with this additional assumption, Einstein, Podolsky, and Rosen realized that our earlier claims (1) and (2) suddenly come into conflict! Why? Because if it's possible to measure the z-spin of electron 1 in a way that doesn't disturb electron 2 at all, then electron 2 must have had a definite z-spin even before the measurement of electron 1! And similarly, if it's possible to measure the x-spin of electron 1 in a way that doesn't disturb electron 2, then electron 2 must have had a definite x-spin before the first electron was measured!

What this amounts to is that our two claims become the following: 1. Electron 2 has a definite z-spin at time t before the measurement. 2. Electron 2 has a definite x-spin at time t before the measurement. And these two claims are in direct conflict with quantum theory! Quantum mechanics refuses to assign a simultaneous x and z spin to an electron, since these are incompatible observables.
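As an aside, the claim above that the singlet has the same form in the x-spin basis is easy to verify numerically. Here is a small check (mine, not part of the original argument, using numpy); the two constructions agree up to an overall minus sign, which is a physically irrelevant global phase.

```python
# Build the singlet in the z basis and in the x basis and compare the two vectors.
import numpy as np

up = np.array([1.0, 0.0]); down = np.array([0.0, 1.0])               # z basis
right = (up + down) / np.sqrt(2); left = (up - down) / np.sqrt(2)     # x basis

singlet_z = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
singlet_x = (np.kron(right, left) - np.kron(left, right)) / np.sqrt(2)

print(singlet_z)                              # [ 0.     0.707 -0.707  0.   ]
print(singlet_x)                              # the same vector, up to an overall sign
print(np.allclose(singlet_z, -singlet_x))     # True: identical up to a global phase
```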
This entails that if you buy into locality and the EPR reality condition, then you must believe that quantum mechanics is an incomplete description of nature, or in other words that there are elements of reality that cannot be described by quantum mechanics.

## The Resolution(s)

Our argument rested on two premises: the EPR reality condition and locality. Its conclusion was that quantum mechanics was incomplete. So naturally, there are three possible paths you can take to respond: accept the conclusion, deny the second premise, or deny the first premise.

To accept the conclusion is to agree that quantum mechanics is incomplete. This is where hidden variable approaches fall, and was the path that Einstein dearly hoped would be vindicated. For complicated reasons that won't be covered in this post, but which I talk about here, the prospects for any local realist hidden variables theory (which was what Einstein wanted) look pretty dim.

To deny the second premise is to say that in fact, measuring the spin of the first electron necessarily disturbs the state of the second electron, no matter how you set things up. This is in essence a denial of locality, since the two electrons can be space-like separated, meaning that this disturbance must have propagated faster than the speed of light. This is a pretty dramatic conclusion, but is what orthodox quantum mechanics in fact says. (It's implied by the collapse postulate.)

To deny the first premise is to say that in fact there can be some cases in which you can predict with certainty a measurable property of a system, but where nonetheless there is no element of reality corresponding to this property. I believe that this is where Many-Worlds falls, since measurement of z-spin doesn't result in an electron in an unambiguous z-spin state, but in a combined superposition of yourself, your measuring device, the electron, and the environment. Needless to say, in this complicated superposition there is no definite fact about the z-spin of the electron.

I'm a little unsure about where the right place is to put psi-epistemic approaches like Quantum Bayesianism, which resolve the paradox by treating the wave function not as a description of reality, but solely as a description of our knowledge. In this way of looking at things, it's not surprising that learning something about an electron at one place can instantly tell you something about an electron at a distant location. This does not imply any faster-than-light communication, because all that's being described is the way that information-processing occurs in a rational agent's brain.
# Pw2Wannier90

## Description

Use this plugin to support inputs of the Quantum ESPRESSO pw2wannier90.x executable. It computes the $$M_{mn}$$, $$A_{mn}$$ and similar matrices needed as input by the Wannier90 code. See the Wannier90 documentation to know which quantities are computed and their meaning.

## Supported codes

• Tested from pw2wannier90.x v.5.1.2 onwards.

## Inputs

• parent_calculation: a PW calculation. It is also recommended that a bands calculation be used as the parent for the best viewing results, though this is not mandatory.
• parameters, class Dict: input parameters of pw2wannier90.x, as a nested dictionary, mapping the input of QE. See the QE documentation for the full list of variables and their meaning.
• nnkp_file, class SinglefileData: a SinglefileData containing the .nnkp file, typically generated by Wannier90 during the preprocess phase (e.g., using the -pp flag to the wannier90.x executable).
• settings, class Dict (optional): an optional dictionary that activates non-default operations. See the discussion below for possible options.

## Outputs

As no parser is implemented yet, no specific outputs except for the standard AiiDA ones (like the RemoteData output) are created. If you want to retrieve some files (like the .mmn or .amn files), you can add a key-value pair to the optional settings input Dict node, with key ADDITIONAL_RETRIEVE_LIST, where the value is a list of filenames to retrieve. They will, as usual, be saved in an output FolderData node.

## Errors

Errors of the parsing are reported in the log of the calculation (accessible with the verdi calculation logshow command). No parsing is performed at the moment.
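For example, a minimal sketch of such a settings node might look like the following (the file names and the builder attribute are my own assumptions for illustration, and the exact Dict constructor may differ between AiiDA versions):

```python
# Hypothetical sketch: 'aiida.mmn' and 'aiida.amn' are placeholder file names, and
# `builder` stands for whatever process builder or input dictionary you are using.
from aiida.orm import Dict

settings = Dict(dict={
    'ADDITIONAL_RETRIEVE_LIST': ['aiida.mmn', 'aiida.amn'],
})

# Attach it alongside the other inputs (parameters, parent_calculation, nnkp_file), e.g.:
# builder.settings = settings
```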
# The Moser Lecture Prize

By Mr. James E Haines

## Full Moser Lecture Guidelines and Previous Winners

Jürgen Moser (1928-1999) was one of the world's leading mathematicians who helped develop influential theories in celestial mechanics and dynamical systems theory (among others) and is renowned for his work on the Kolmogorov-Arnold-Moser (KAM) theory. He received his doctorate from Göttingen and went on to be a Professor at the Courant Institute, NYU (1955-1980) and at ETH-Zürich (1980-1995). Jürgen Moser received the Wolf Prize in Mathematics in 1995.

#### Prize Endowments

We would like to ask for your help in contributing to a lasting financial base for the prizes. It is our goal to raise an endowment of at least $10,000 for each prize. Contributions for the Moser Lecture endowment may be sent to:

The Moser Lecture Fund
Attn: Michelle Montgomery
Society for Industrial and Applied Mathematics
3600 University City Science Center
Open Access Publications from the University of California

## Measurement of the Higgs Boson Properties at $\sqrt{s} = 13$ TeV With CMS in the $\gamma\gamma$ Decay Channel

• Author(s): Olmedo, Manuel Alejandro
Pulley Questions Physics

Pulley Questions Physics - Recent Articles

Tension on pulleys Physics Question - Physics Stack Exchange

• Collar A is connected to a 50 lb load on a frictionless horizontal rod. Determine the magnitude of P to maintain equilibrium when x = 4.5. I'm confused on the concept of tension. I originally got the answer 11.25 lb like the images below, but my solution manual is different. The solution manual says: $$\tan \alpha = 20/4.5, \quad \alpha = 77.3^\circ$$ Sum of forces in x: $$\sum F_x = 0: \quad -P + T \cos 77.3^\circ = 0$$ $$P = 50\ \mathrm{lb} \times \cos 77.3^\circ = 10.98\ \mathrm{lb}$$ Is this because the tension in the rope is equivalent to the mass of the hanging weight? Is there a way to solve for 10.98 lb using a method like the one below, i.e. adding the vector components and solving for the unknowns? I'm confused especially on how the solution manual is using 50 lb for the magnitude of T and, when using vector addition, getting the 11 or so lbs. Thanks! ...
Source: Tension on pulleys Physics Question - Physics Stack Exchange Pulley Questions Physics Best books Barron's AP Physics C, Advanced Placement Examination Creator: Robert A. Pelcovits, Joshua Farkus | Science - 2008 Combined. Rotation. and. Translation. One very common AP problem involves calculating the acceleration of a system including both rotation (e.g., the rotation of a pulley) and translation (e.g., the vertical motion of masses connected by a ... Publisher: Barron's Educational Series College Physics: Reasoning and Relationships Creator: Nicholas Giordano | Science - 2009-02-13 First consider the part of the rope held by the person: it exerts an upward force on the pulley, equal in magnitude to the ... However, it is sometimes very useful to solve a problem approximately, perhaps because some important quantities ... Publisher: Cengage Learning IIT Physics Topic Wise Solved Questions Creator: Tmh | A light inextensible string that goes over a smooth fixed pulley as shown in the figure connects two blocks of masses 0.36 kg and 0.72 kg. Taking g = 10 m/s2, find the work done (in joules) by the string on the block of mass 0.36 kg during the ... Publisher: Tata McGraw-Hill Education The World of Physics Creator: John Avison | 1989 Questions. 7. Assume g = 1 0 N/kg. 1 Calculate the work done when a force of 20 N moves an object a distance of a) 5 m in ... 1 7 shows a wire of British Rail's electrification system, being held taut by a load L and a pulley system P. a) By what ... Publisher: Nelson Thornes IGCSE Physics Challenging Drill Questions (Yellowreef) Creator: Thomas Bond, Chris Hughes | Science - 2013-11-03 The figure below shows the motor pulley, the belt and the machine pulley. The belt does not slip on either pulley. Clockwise rotation Motor pulley Motor pulley belt (a) The diameter of the motor pulley is 12.0 cm; the diameter of the machine ... Publisher: Yellowreef Limited Pulley Questions Physics Apr 19, 7668 by foreva_lynn | Posted in Physics i am a little behind in physics bcuz i have been busy studying for my chem exams, hence im a little lost when it comes to problems like this. please help me out. include the equations, why u used it, and also plug in the numbers with the answers, bcuz A. The tension should be 130N. That's the force on each side and hence the tension experienced by the rope. The system is in static equilibrium. B I can't remember my physics from old days very well, but the tension should be 70N+ (m*9.8).
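Returning to the Stack Exchange statics problem quoted above, here is a quick numerical check (a sketch using the quoted values: x = 4.5, a vertical drop of 20, and a 50 lb load over a frictionless pulley) of why the solution manual gets 10.98 lb rather than 11.25 lb.

```python
# Horizontal equilibrium of the collar: P = T * cos(alpha), with T equal to the hanging load
# because the pulley is frictionless.  Values are the ones quoted in the question above.
import math

x, y, W = 4.5, 20.0, 50.0           # horizontal offset, vertical drop, hanging weight (lb)

alpha = math.atan2(y, x)            # angle of the cord with the horizontal rod
T = W                               # frictionless pulley: cord tension equals the 50 lb load
P = T * math.cos(alpha)

print(f"alpha = {math.degrees(alpha):.1f} deg")   # ~77.3 deg
print(f"P     = {P:.2f} lb")                      # ~10.98 lb, matching the solution manual

# The 11.25 lb attempt corresponds to 50 * 4.5 / 20, i.e. treating the 20-unit leg as the
# cord length; the correct hypotenuse is sqrt(4.5**2 + 20**2) ~ 20.5.
print(f"hypotenuse = {math.hypot(x, y):.2f}")     # ~20.5
```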
# Speed of our reality perception…

## Main Question or Discussion Point

I'm really puzzled by our perception of fundamental physical reality… Have I got it right or all wrong? (I am no scientist, just a curious person, so please bear with me.) Some constants first:

- Planck time (PT): 5.4×10^-44 s
- Planck length (PL): 1.6×10^-35 m
- Electron radius (EL): 2.4×10^-17 m
- Light speed (LS): 299792458 m/s
- Electron speed (ES1): 299792456 m/s (4 GeV beam)
- Electron speed (ES2): 2200000 m/s (in hydrogen atom)
- Age of Universe (AU): 4.3*10^17 s (13.7 billion years)

Imagine one electron traveling 1 meter at the speed ES1 (almost at the speed of light), and that we record this with a perfect recording system (for this thought experiment let's dismiss all sorts of technical or physical limitations). If I am not wrong, mainstream science says that space-time is continuous and not discrete, but nonetheless, let's imagine that space-time is discrete, where the smallest measurable length is the Planck length. (In the end, it doesn't really matter if time is continuous or discrete; the Planck length is still the smallest theoretical length, while the smallest practical measurable length is way larger.) As I've already learned, a fundamental particle, like an electron, is not really a solid particle but exists as a "wave packet" that is distributed over space-time, so, in truth it would move in steps much smaller than the Planck length or Planck time, because it is distributed over many space-time points… But to make things simpler, let's say that we count only the steps which are of "full" Planck length. So, my question is, how many frames per second (FPS) would we record with such a perfect camera? Well, the way I see it, it's simply 1 m / PL, which is about 6x10^34 FPS. Now, what looks most curious to me, almost incredible, is that if we were to review such a recording on a human time-scale, on our TV, which is 30 FPS in the USA, where we'd look just 1 second at every frame (step of that motion), it would take us about as much time as the age of the Universe multiplied by itself to review it! Which means that our perception of reality is incredibly slow compared to reality itself (if I remember it correctly our awareness "records" a few events per second), so we actually miss out on most of the happening in this reality/Universe… Which makes me consider that if there were some kind of aware beings who operate on a much faster time-scale, it might happen that they already visited us, or might be right here among us, but we weren't / aren't even aware of them – to them we might appear as if we are frozen in time, and they might pass right beside us and we'd not even notice it… Is it only me finding this fascinating?

ZapperZ Staff Emeritus
First of all, electron "speeds" are not "constants"! They are certainly not considered to be one of the fundamental constants. And neither is "electron speed in hydrogen". Secondly, it seems as if you are trying to "look" at the trajectory of an electron with this "perfect camera". Is this true? If it is, then you haven't clearly defined the mechanism of tracking such a thing (are you shooting photons at it to know where it is at any given position? Are you detecting its charge?).
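(Editorial aside, not part of the thread: a quick check of the opening post's arithmetic, using the values quoted there. Note that 1 m / PL counts Planck-length steps along the 1 m path, not strictly frames per second.)

```python
# Check of the numbers in the opening post.
PLANCK_LENGTH = 1.6e-35      # m, value quoted in the post
AGE_OF_UNIVERSE = 4.3e17     # s, value quoted in the post

steps = 1.0 / PLANCK_LENGTH          # ~6.3e34 Planck-length steps along a 1 m path
review_time = steps * 1.0            # seconds, at one second of viewing per step
print(f"steps in 1 m:                  {steps:.2e}")
print(f"review time / age of universe: {review_time / AGE_OF_UNIVERSE:.2e}")
# ~1.5e17, i.e. the review would take on the order of 10^17 ages of the universe,
# the same ballpark as the "age of the Universe multiplied by itself (in seconds)"
# comparison made in the post.
```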
Inevitably, you are basing your "observation" on a number of things that you take for granted, the same way you take for granted that you can watch a tennis match and follow the trajectory of a tennis ball, without realizing that what you are doing is observing light that hits the tennis ball and then enters your eyes. Zz. Interesting read, one thing which I think may limit the ability for life in 'faster' reality or whatever are maybe: The constants of nature perhaps will put a limit to what sort of 'life' can actually exist. I believe your suggestion is these lifeforms might see what to us is just a second as what to us feels like a millennium (much greater orders of magnitude in your figures!) so before long by living in that reality, things like the speed of light 'catch up' with the rate of their reality. By living in a reality where time is effectively 'slower' before long you might actually be able to see light come toward you (Which you wouldn't see, as it's light which hasn't reached you yet) . I can't think of consequences this would have though, but I'm sure you can quickly draw up some other interesting 'scenario's' other constants would imply. ZapperZ Staff Emeritus Interesting read, one thing which I think may limit the ability for life in 'faster' reality or whatever are maybe: The constants of nature perhaps will put a limit to what sort of 'life' can actually exist. I believe your suggestion is these lifeforms might see what to us is just a second as what to us feels like a millennium (much greater orders of magnitude in your figures!) so before long by living in that reality, things like the speed of light 'catch up' with the rate of their reality. By living in a reality where time is effectively 'slower' before long you might actually be able to see light come toward you (Which you wouldn't see, as it's light which hasn't reached you yet) . I can't think of consequences this would have though, but I'm sure you can quickly draw up some other interesting 'scenario's' other constants would imply. This post is rife with bad physics. I don't think you've understood the basic postulate of special relativity at all. Light doesn't "catch up" with anything, and the proper time doesn't slow down your own reference frame. Do you see your time slowing down, even though to some other galaxy, you are actually moving utterly fast? Please do not ignore the PF Rules that you had agreed to, especially our policy on speculative posts. These rules ARE enforced, they are not mere window dressings. Zz. Drakkith Staff Emeritus I think you are talking about how "fast" your brain can process information. While being able to process and communicate faster could be considered experience reality faster, it is not the same thing as what you are implying. Let's say that my brain used Light instead of electro-chemical reactions and impulses to operate. I would probably be able to "think" faster than any computer we have today. However, I am still 100% constrained to the rules of physics. If I wanted to move I'd still have to apply the same forces and use the same amount of energy as everyone else. Apply too much force too quickly and SNAP, broken bones. So even if you could think faster than most all that really gets you is more "time" to think. (Having a hard time using accurate terms for all this) You have a good insight though about experience and time. 
For example someone living on the surface of a neutron star would experience life at a relatively slower rate and you would be born and dead before they finish their dinner. Of course they would also be thinking slower relative to you, so they would experience a normal life from their reference frame. First of all, electron "speeds" are not "constants"! They are certainly not considered to be one of the fundamental constants. And neither is "electron speed in hydrogen". Secondly, it seems as if you are trying to "look" at the trajectory of an electron with this "perfect camera". Is this true? If it is, then you haven't clearly defined the mechanism of tracking such a thing (are you shooting photons at it to know where it is at any given position? Are you detecting its charge?). Inevitably, you are basing your "observation" on a number of things that you take for granted, the same way you take for granted that you can watch a tennis match and follow the trajectory of a tennis ball, without realizing that what you are doing is observing light that hits the tennis ball and then enters your eyes. I see you focused more on the details than on the idea which fascinates me, all fine of course. I agree, perhaps instead of an electron I should mention some "less-complex" object, like tennis ball, or a bullet. And sure, I am aware that perfect camera can only be recording light bouncing back from that object. Moreover, as it happens I even know that the resolution of a single visible photon to motion is on the order of about 10^−15 seconds (this is the time period for one oscillation of the photon wave). And that movements shorter in time than this just won't impact the behavior of the photon much. Which means, that we can never make such a perfect camera which would record motion of objects in all the details (steps, frames) as it really happens (that's why I said in beginning let's dismiss physical and technical limitations of such perfect camera). But in the end, the main point I wanted to present is that any motion has at least (if space-time is discrete) as many steps (frames) as there are Plank's length fitting the distance that object travels (length travelled / Plank length). Which brings us to the idea, that one second, which for a human perception of time is something “comfortable”, might in reality be something like a century to potential beings that operate on a much faster time-scale in perception… Last edited: I believe your suggestion is these lifeforms might see what to us is just a second as what to us feels like a millennium (much greater orders of magnitude in your figures!)... Yes, that's the main idea, thanks for catching it that well. I was thinking how to present it, and I am sure I didn't do a very good job… And I am not sure if this idea was already presented by someone else, I guess it had to happen, because it's not hard to think it up, it’s just not something we'd normally consider... I think you are talking about how "fast" your brain can process information. While being able to process and communicate faster could be considered experience reality faster, it is not the same thing as what you are implying. Let's say that my brain used Light instead of electro-chemical reactions and impulses to operate. I would probably be able to "think" faster than any computer we have today. However, I am still 100% constrained to the rules of physics. If I wanted to move I'd still have to apply the same forces and use the same amount of energy as everyone else. 
Apply too much force too quickly and SNAP, broken bones. So even if you could think faster than most all that really gets you is more "time" to think. (Having a hard time using accurate terms for all this) Ohh yes, that's a good and valid point, and I did consider it as well. But such beings, who might operate on a much faster time-scale, in sense of perceiving existence, would not have to be made of flash and bones like we humans are... I imagine, that if there are such beings they must be much smaller than us. I relate our physical size to our perception of time, is that wrong? I mean, we humans, being close to 2m tall, are huge compared to fundamental particles, and I imagine that there could be beings still much bigger than fundamental particles, but also much smaller than us, meaning, they could also physically move much faster (not just think/perceive faster) -- the bigger one is the more energy one needs to move own body parts, and the more "fragile" at the same time... Whatchathink? Last edited: Drakkith Staff Emeritus Ohh yes, that's a good and valid good point, and I did consider this too. But such beings, who might operate on a much faster time-scale, in sense of perceiving existence, would not have to be made of flash and bones like we humans are... I imagine, that if there are such beings they must be much smaller than us. I relate our physical size to our perception of time, is that wrong? I mean, we humans, being close to 2m tall, are huge compared to fundamental particles, and I imagine that there could be beings still much bigger than fundamental particles, but also much smaller than us, meaning, they could also physically move much faster (not just think/perceive faster). Whatchathink? I don't know. Perhaps you can post in the Biology forum. They might know a little more about how the nervous system works. I don't know. Perhaps you can post in the Biology forum. They might know a little more about how the nervous system works. But you do agree, if such beings were really small, that the energy they had to spend for own movements would be much lower than ours, thus, this idea of such beings is physically possible? But then, I kinda see your point (even if you didn't imply it), such beings would also have to have very small physical brains, and thus, even if they were aware of reality they would not really have any good capacity of understand it, as we do. (Of course, if we assume, that all brains work in alike manner, well, they could have something like CPUs instead of the brains, but still, we'd come to some physical limit of how big such beings would be, and I guess they wouldn't be really small, in sense, that perception of time for them would really be dramatically different that ours.) Seems, that we humans, are just of an ideal physical size to experience reality in such (self-aware) "fullness", not too small to be too stupid, and not too big to waste too much energy on own movements. Last edited: ZapperZ Staff Emeritus I see you focused more on the details than on the idea which fascinates me, all fine of course. I agree, perhaps instead of an electron I should mention some "less-complex" object, like tennis ball, or a bullet. And sure, I am aware that perfect camera can only be recording light bouncing back from that object. Moreover, as it happens I even know that the resolution of a single visible photon to motion is on the order of about 10^−15 seconds (this is the time period for one oscillation of the photon wave). 
And that movements shorter in time than this just won't impact the behavior of the photon much. Which means, that we can never make such a perfect camera which would record motion of objects in all the details (steps, frames) as it really happens (that's why I said in beginning let's dismiss physical and technical limitations of such perfect camera). You have a universal time scale for the period of a photon wave? Whoa! When did you discover this? But in the end, the main point I wanted to present is that any motion has at least (if space-time is discrete) as many steps (frames) as there are Plank's length fitting the distance that object travels (length travelled / Plank length). An assumption that has no verification. Which brings us to the idea, that one second, which for a human perception of time is something “comfortable”, might in reality be something like a century to potential beings that operate on a much faster time-scale in perception… You made several speculative assumptions here. At what point do you actually use valid physics? Maybe these are the "details" you don't care about? Unfortunately, as Meis Van deRohe used to say, god is in the details! You might also want to re-read the PF Rules that you had agreed to. That's another "detail" that you should not overlook. Zz. In this “thought experiment” I assumed that space-time (ST) is discrete, that there are as many steps (frames) in motion as there are Plank's length fitting that distance (length travelled / Plank length). Well, in truth, even if ST is discrete there are more steps but just not measurable… But what if ST is continuous, which is a more likely scenario by what current science tells us, then how many steps are there in every motion? And this fascinates me in whole different way, in sense, how's motion even possible? Even if we had a perfect camera (to exaggerate, God’s camera), without any physical and technical limitations, but which still has to record in discrete steps, it could still never record motion of objects the way they really moved, right? Since in continuous ST, if we try to discretize ST with such a perfect camera, there are infinite steps... I just cannot apply this idea of ST being continuous to how motion happens. Even a simple movement of my own hand, perceived through this idea, looks like a miracle to me ;) Can anyone help me out with understanding this (motion in our reality)? You made several speculative assumptions here. At what point do you actually use valid physics? Maybe these are the "details" you don't care about? Of course I value the details - I am just no expert on this, so, I shouldn't post then? Also, I did say that your focus on details is fine as well, I just think that those less-than-perfect assumptions are not changing the validity of idea which I proposed. Am I wrong, if so, how? ZapperZ Staff Emeritus Of course I value the details, I also did say that your focus on details is fine as well, I just think that those less-than-perfect assumptions are not changing the validity of idea which I proposed. Am I wrong, if so, how? The problem is that many of your assumptions are Not Even Wrong! This is neglecting the fact that you think a "photon wave" has only one time scale. Check the period for, say, a radio wave versus a gamma wave. Zz. You have a universal time scale for the period of a photon wave? Whoa! When did you discover this? 
Your respected memeber (science advisor "Chalnoth") said that to me some time ago, I'll quote him now: I asked: "What would be the shortest time to get enough photons for generating a sensible image with current technology, if you might know? Actually, I am asking how many frames per second is possible to capture using best current technology." That really depends on the situation. It depends upon how high-resolution you want your image to be. It depends upon how low you want the noise to be. It depends upon how bright your source is. And it depends upon the collecting area of your camera. One might be able to consider a theoretical upper limit for a massively-bright source on a camera with a very large collecting area, in which case the minimum time required would be the minimum time to absorb some minimal number of photons, which would be a couple orders of magnitude more time than the period of a single photon. So if you have a bright enough source, perhaps somewhere in the $10^{-12}$ second range for visible light? I don't think you'll get even remotely close to this limit for any realistic scenario, as this would require a tremendously bright light source. Bear in mind that the Planck time is ~$10^{-44}$ seconds. The resolution of a single visible photon to motion is on the order of about $10^{-15}$ seconds (this is the time period for one oscillation of the photon wave). Movements shorter in time than this just won't impact the behavior of the photon much. I think you are just too harsh at people who are not experts, sorry. Last edited: ZapperZ Staff Emeritus Your respected memeber (science advisor "Chalnoth") said that to me some time ago, I'll quote him now: You are neglecting (another detail, perhaps?) the fact that VISIBLE light is being discussed in the quote you cited! He is estimating the UPPER LIMIT of the visible light spectrum! Photons are all electromagnetic wave, and covers the whole known spectrum of EM wave, not just visible light! I think you are just too harsh at people who are not experts, sorry. Here's the problem. You need to learn how to crawl first before wanting to run the sprint at the Olympics. Zz. An assumption that has no verification. Again I'll quote your respected memeber: One way of perhaps thinking of it is this. Imagine a single particle. This particle exists as a "wave packet" that is distributed over space. Since we're considering a discretized space-time, this wave packet can only take "location" values at specific points. That is, instead of being a smooth wave, it is made up of a series of more or less randomly-distributed points. As we move forward in time, the wave packet covers a different random distribution of points. But to make it all a bit more complicated, the discretization is not just in space, but also in time, so that you can't even sensibly talk about what it's doing at one particular instant, but have to take a chunk of time and count up all of the points that randomly fit within that chunk. So we might imagine our "chunk" of time as being one Planck time in length, and call that "now". We can then slowly move our "chunk" of time forward, and one by one, space-time points will fall behind into the past, while new space-time points will become part of the present. 
Thus one can't even talk about the particle itself making steps of a Planck length or Planck time, because it is distributed over many space-time points, and stepping a tiny fraction of a Planck length, or moving forward a tiny fraction of a Planck time, can lead to the particle covering many new points in space-time. This means that as long as the particle has a wavelength much longer than the Planck length, this discretization really doesn't make any difference. It doesn't even make "steps" of a Planck length in size, but much much smaller steps (and the bigger the particle's wavelength, the smaller those steps). In just simplified all of this... and later I even extended my explanation to be more correct, by saying (I guess you missed it): In this “thought experiment” I assumed that space-time (ST) is discrete, that there are as many steps (frames) in motion as there are Plank's length fitting that distance (length travelled / Plank length). Well, in truth, even if ST is discrete there are more steps but just not measurable… And "Chalnoth” above explains this, I think, very well. ZapperZ, I understand your view, it isn't easy to deal with such ignorant/stupid people (as me) on the daily basis. But I'd still like to ask you to focus on what was my main idea. Which is, perception of time and motion... It would be really delicious, if experts here would just take the main idea, and talk among themselves to see, where it can bring us. Last edited: ZapperZ Staff Emeritus ZapperZ, I understand your view, it isn't easy to deal with such ignorant/stupid people (as me) on the daily basis. But I'd still like to ask you to focus on what was my main idea. Which is, perception of time and motion. It would be really delicious, if experts here would just take the main idea, and talk among themselves to see, where it can bring us. You and I must read the same thing and understand them differently. To me, what you quoted here is exactly the argument on why your question makes very little sense. So I'm not sure why you are using it against me, rather than looking back at your original post and seeing why it really isn't a valid assumption. BTW, even IF there is such a thing as Planck length scale, it doesn't mean we can detect it. http://www.physorg.com/news/2011-06-physics-einstein.html So yes, experts ARE talking among themselves about this topic. We may not do it here on a public forum, but we ARE talking! Zz. My main question for the experts is (Zz, I'll do you a favor and stop posting after asking this): Is it possible that there might be different kind of beings, quite unlike us humans, who could perceive time and motion in a very different time-scale than we do? Say, what we perceive to be one second would to them be a much longer period of time… is this possibility realistic in physical sense? P.S. Plank constants was not my question, nor focus, I just put them out for imagining better what I wanted to present. ZapperZ Staff Emeritus My main question for the experts is (Zz, I'll do you a favor and stop posting after asking this): Is it possible that there might be different kind of beings, quite unlike us humans, who could perceive time and motion in a very different time-scale than we do? Say, what we perceive to be one second would to them be a much longer period of time… is this possibility realistic in physical sense? P.S. Plank constants was not my question, nor focus, I just put them out for imagining better what I wanted to present. 
See, this is exactly the point I made in my comment about trying to learn the basic first before applying the faulty knowledge to other areas. First of all, on what physical basis would you base the existence of such a thing, i.e. different time scales? The ONLY physical basis that we currently have is Relativity. In particular, we don't need another "unlike humans" to achieve that. Any frame of reference moving at a different speed will have its time being perceived to be different when compared to another inertial frame! But it is unclear if this is what you're after, or if you are asking about different inertial frames having actually different proper time scale! If that's the case, on what physical basis would that based on, and how would one compare it to know there is a difference? The latter is crucial (a detail?) because without the ability to detect, it might as well not happen! Zz. See, this is exactly the point I made in my comment about trying to learn the basic first before applying the faulty knowledge to other areas. You are sure good at taking away my inspiration for such topics (and I guess I am not alone here who experienced this). I don't have the time to study physic, I am over 40 years old, got a job and family to take care of. Though, I am still curious, very much so, about nature of things, I shouldn't be? I shouldn't ask others for their opinion and understanding, because I am just too ignorant? So, are you telling me: either study or don't post here? First of all, on what physical basis would you base the existence of such a thing, i.e. different time scales? The ONLY physical basis that we currently have is Relativity. In particular, we don't need another "unlike humans" to achieve that. Any frame of reference moving at a different speed will have its time being perceived to be different when compared to another inertial frame! ZZ, I let it to you, and other experts, to ponder on this a bit more, if you and other find this idea interesting enough. Humans are searching for life out there, right? But, did those scientists consider what I mentioned here? Did they implement in their search the possibility, that perhaps we should not only try to reach out by communicating at "human time-scale", meaning, perhaps when we send out our communication that we should also make it "time-compressed" or "time-expanded", perhaps this means we should "compress" frequencies, make the waves narrower (I don't know how to express this properly, but at least I want to let others imagine what I mean). Perhaps science should consider this idea, I don't know. (I do read popular science magazines, but never saw this idea being mentioned.) But it is unclear if this is what you're after, or if you are asking about different inertial frames having actually different proper time scale! If that's the case, on what physical basis would that based on, and how would one compare it to know there is a difference? The latter is crucial (a detail?) because without the ability to detect, it might as well not happen! Again, it would be delicious if experts take the "bones" I offered, so to say, and out of them make a fine "construction" (of idea). If my idea is boring, and perhaps even stupid (it’s how you make me feel), well then, just delete this whole thread and accept my apologizes for wasting your time. Last edited: ZapperZ Staff Emeritus You are sure good at taking away my inspiration for such topics (and I guess I am not alone here who experienced this). 
I don't have the time to study physic, I am over 40 years old, got a job and family to take care of. Though, I am still curious, very much so, about nature of things, I shouldn't be? I shouldn't ask others for their opinion and understanding, because I am just too ignorant? So, are you telling me: either study or don't post here? ZZ, I let it to you, and other experts, to ponder on this a bit more, if you and other find this idea interesting enough. Humans are searching for life out there, right? But, did those scientists consider what I mentioned here? Did they implement in their search the possibility, that perhaps we should not only try to reach out by communicating at "human time-scale", meaning, perhaps when we send out our communication that we should also make it "time-compressed" or "time-expanded", perhaps this means we should "compress" frequencies, make the waves narrower (I don't know how to express this properly, but at least I want to let others imagine what I mean). Perhaps science should consider this idea, I don't know. (I do read popular science magazines, but never saw this idea being mentioned.) Again, it would be delicious if experts take the "bones" I offered, so to say, and out of them make a fine "construction" (of idea). If my idea is boring, and perhaps even stupid (it’s how you make me feel), well then, just delete this whole thread and accept my apologizes for wasting your time. The problem here is that you appear to be here to TEACH us "experts" a lesson, rather than trying to LEARN from where your basic premise is faulty. Re-read my first response to your original post. When I look at something, I try to (i) figure out the underlying principle involved, or (ii) understand the starting premise, or (iii) discover the impetus for either a question, suggestion, or idea. Notice what I questioned in the very beginning: your concept of what "constants" are, and misconception on how we observed (perceived?) things. If you used these are either the starting point, or impetus for the rest of your query, aren't you in the least bit interested in knowing if they are correct or valid? Because if they aren't, then the rest of what you built on is moot because the foundation is incorrect! One of the things we try to strive for here in this forum is not only presenting the "material", but also getting people to THINK for themselves in ways in which, even when they don't have the knowledge, they at least have a systematic way of making an analytical evaluation of any ideas that they either hold, or come across. This skill transcends beyond just physics or science. It allows for anyone to examine and discover what assumptions they hold, and to what degree are they certain on the validity of such assumptions. I tried to convey that to you from the very beginning, hoping that you'd have an interest in trying to learn basic ideas with which we can build things on. It appears that I was mistaken. Zz. Drakkith Staff Emeritus
## Results from HPT 2012 One unexpected perk of being in China: I woke up before 7:30 this morning. That would never happen without jet lag. Unfortunately, even waking up at 7:30 every day hasn’t given me any time to write up a mid-conference blog post. Talks have been running from 8:30-6:30, with the rest of the time mostly taken up by meals and discussions. So I’ll just post this “teaser” of some of the more interesting results that were presented. Of the presentations that gave new results, most of them are based the September proton-lead run at the LHC. This was just a pilot run, meant to ensure that there wouldn’t be any unexpected problems with colliding two different types of particles, so there wasn’t a lot of data collected — only 2 million collisions — but it was already enough to start shedding some light on the underlying physics. # No initial state effects Ion-ion collisions have already been extensively studied at both RHIC and the LHC, and as you might imagine, when you smash a blob of a hundred blobs of particles into another blob of a hundred blobs of particles, what you get is a mess. But it’s an orderly mess, in a sense, one which physicists have managed to characterize as forming in several distinct stages. Roughly speaking: first, the ions approach and the individual partons (quarks and gluons) start to pass by each other, then they collide and form a blob of quark-gluon plasma (the “medium”), and finally the interactions within the medium produce hadrons that then have to make their way out of the QGP. Interactions between the partons in the first step are called initial state effects, and interactions between the hadrons and the medium in the last step are called final state effects. (figure from Larry McLerran’s slides) Now, from the results of ion-ion collisions at RHIC and also at the LHC, we know that final state effects do significantly affect what comes out to be detected. Jet quenching is the typical example: if a beam of reaction products (a jet) has to go through enough of the medium, it will be spread out into a weak “spray” rather than a strong jet. But there are other effects, less well understood, that make proton-proton collisions different from ion-ion collisions, and we can’t really tell whether or to what extent they are initial state effects or final state effects. Proton-ion collisions allow us to make that distinction because in a pA collision, there aren’t enough particles involved to form a final-state medium, so any difference between pp and pA must be due to initial state effects. This is where the results of the LHC’s pA run come in. ALICE has measured $$R_{pA}$$, a way of characterizing the difference between proton-ion collisions and proton-proton collisions, and found it to be basically equal to 1 for high-momentum particles. That means the proton-ion collisions generate about as many particles as would be expected just by scaling up a proton-proton collision to the number of nucleons the projectile proton hits on its way through the ion, which in turn suggests that any initial-state effects are negligible. # The Ridge Meanwhile, an interesting feature known simply as “the ridge” has been capturing everyone’s interest in proton-proton collisions. This takes a little bit of explanation, so bear with me. When two protons collide with a large amount of energy, the individual parton collision will often produce two jets coming out in opposite directions, in the center-of-mass reference frame of the partons. 
However, those partons can have a net motion relative to the detector, in which case the jets will “inherit” that motion and will both emerge toward one direction or the other. This is essentially the same thing as relativistic beaming. However, this only affects the jets’ direction along the beam axis; they will always still be on opposite sides in the azimuthal direction $$\phi$$. The standard way to visualize this is, for each event, to measure the difference in longitudinal coordinate, $$\Delta\eta$$, and the difference in azimuthal coordinate, $$\Delta\phi$$, between the jets, and make a histogram. When you do this, you get a peak at $$(0,0)$$ corresponding to the leading jet, and a hump at $$\Delta\phi = \pi$$ spread out along all values of $$\Delta\eta$$ corresponding to the subleading jet. When this was actually done with the LHC proton-proton (pp) data for the collisions which produce the largest numbers of particles, in addition to the peak and hump, they also saw a bit of a ridge at $$\Delta\phi = 0$$ for all values of $$\Delta\eta$$, i.e. for jets coming out in the same transverse direction, but separated by an angle along the beam axis. This means there are a significant number of dijet events where the jets both come out on the same side of the detector, but not in the same direction! Getting these results is interesting not only because these same-side jets require two partons to collide with a huge amount of transverse momentum, which is supposed to be fairly unusual, but also because any collision that involves that much transverse momentum you’d expect to produce two jets going in nearly the same direction. Clearly, that doesn’t always happen. This behavior is strange enough that nobody has a convincing explanation for what could be causing it. To help pin down any possible causes of this ridge, the next natural step is to see whether it occurs in other circumstances, like proton-ion collisions (as opposed to two protons). And the new results presented at HPT 2012 show that it does! As with proton-proton collisions, the ridge shows up only in the collisions that produce the largest numbers of particles. Here’s the plot from CMS for collisions producing 110 or more particles: This and the previous plot are taken from Gunther Roland’s presentation at the conference. So what does this mean? In the immediate future, it means a lot of theorists get a chance to come up with a lot of crazy ideas in an attempt to explain this ridge :-P Seriously though, it means that something is going on in these heavy ion collisions, and consequently in the structure of atomic nuclei, which isn’t close to being covered by existing theories. The heavy-ion program at the LHC is definitely proving its worth.
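To make the construction described above concrete, here is a toy sketch (not the ALICE or CMS analysis code) of how a two-particle $$\Delta\eta$$–$$\Delta\phi$$ correlation histogram can be filled, assuming each event is simply a list of (η, φ) values for the reconstructed particles or jets:

```python
import numpy as np
import matplotlib.pyplot as plt

def delta_phi(phi1, phi2):
    """Wrap the azimuthal difference into (-pi/2, 3*pi/2), the usual plotting range."""
    dphi = phi1 - phi2
    return (dphi + np.pi / 2) % (2 * np.pi) - np.pi / 2

def correlation_histogram(events, eta_bins=40, phi_bins=40):
    """Fill a (delta_eta, delta_phi) histogram from all particle pairs in each event.

    events: list of (eta, phi) array pairs, one pair of arrays per event.
    """
    d_eta, d_phi = [], []
    for eta, phi in events:
        i, j = np.triu_indices(len(eta), k=1)   # all distinct pairs within the event
        d_eta.append(eta[i] - eta[j])
        d_phi.append(delta_phi(phi[i], phi[j]))
    return np.histogram2d(
        np.concatenate(d_eta), np.concatenate(d_phi),
        bins=[eta_bins, phi_bins],
        range=[[-4, 4], [-np.pi / 2, 3 * np.pi / 2]])

# Toy usage with random tracks; real input would come from the experiment's event records.
rng = np.random.default_rng(0)
toy_events = [(rng.uniform(-2.4, 2.4, 50), rng.uniform(0, 2 * np.pi, 50)) for _ in range(100)]
H, eta_edges, phi_edges = correlation_histogram(toy_events)
plt.imshow(H.T, origin="lower", aspect="auto",
           extent=[eta_edges[0], eta_edges[-1], phi_edges[0], phi_edges[-1]])
plt.xlabel(r"$\Delta\eta$")
plt.ylabel(r"$\Delta\phi$")
plt.show()
```

The near-side peak at (0, 0), the away-side hump at $$\Delta\phi = \pi$$, and the ridge discussed above would only appear with genuinely correlated events; the flat toy input here just exercises the bookkeeping.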
Mobile QR Code 1. (Intelligent Image Processing Laboratory, Konkuk University, Seoul, Korea jinye96@konkuk.ac.kr) 2. (Intelligent Image Processing Laboratory, Konkuk University, Seoul, Korea cyim@konkuk.ac.kr ) ## 1. Introduction Haze is a phenomenon in which particles in the air scatter light and obscure an image [1]. As a result, outdoor images may have visual limitations. Haze-induced issues can be fatal, especially in situations where traffic conditions require high visibility in real time [2], such as car accidents, suspension of aircraft operations, and docking of ships. Various studies have been conducted to remove the haze effects from images to ensure consistent visibility regardless of weather conditions. In addition, during disasters such as fires, it is essential to secure visibility in the event of high smoke levels in the air. Removing the haze effects in images is essential to observe situations through video equipment such as CCTV to cope with disaster situations. Consistent visible images have many advantages. Enhanced images after haze removal can be used as data for various deep learning applications, such as object recognition and tracking. In deep learning, the learning effect of recognizing objects in images may vary even when the illumination varies at weak points [3]. Hence, dehazed images are highly desirable in various fields. In robot vision, cameras are critical because they are responsible for the visual capabilities. Obtaining clear images from the camera regardless of the weather conditions can determine the performance of robot vision and mobility [4]. Dehazing can be used to obtain haze-free real-time images in driving environments and can provide improved visibility during driving and parking [5]. Consistently available high visibility can also reduce the probability of accidents in autonomous driving. Additionally, image dehazing can be applied to crime prevention by assisting in identifying the face of a perpetrator in a hazy image. Hence, image dehazing is important in the field of image processing and image enhancement. Because image dehazing is an ill-posed problem, it is necessary to test various approaches based on an atmospheric scattering model to solve it. Many studies on image dehazing have been performed using convolutional neural networks (CNNs) [6-8]. The DehazeNet method [6] was used to estimate a transmission map directly from a hazy image. The AOD-Net method [7] obtains a dehazed image from a hazy image using CNNs. Recently, various studies have been conducted using methods to estimate feature maps in hazy images to produce haze-free images [8]. One method [9] uses CNNs to estimate the depth, and another method [10] applies transfer learning using a simple encoder-decoder network structure with skip connections. In this paper, we propose a way to estimate the transmission map indirectly from depth estimation using CNNs to generate dehazed images based on an atmospheric scattering model. ## 2. Related Works In the past, haze removal was performed using image processing, in which basic contrast enhancement could be used. However, most haze removal methods are based on hypotheses or empirical evidence. For example, the dark channel prior (DCP) method [12] is based on the hypothesis that there is a low value in at least one of three RGB channels in color images. Several methods using CNNs that can remove haze within an image were developed with advancements in deep learning techniques. 
This led to the development of the DehazeNet method [6], which is based on an atmospheric scattering model and estimates the transmission map from a hazy image through a CNN. An illustration of the atmospheric scattering model is shown in Fig. 1. The atmospheric scattering model [11] can be represented as:

##### (1)
$I\left(x\right)=J\left(x\right)t\left(x\right)+\alpha\left(1-t\left(x\right)\right).$

From Eq. (1), the clean (haze-free) image $J\left(x\right)$ can be expressed as:

##### (2)
$J\left(x\right)=\frac{1}{t\left(x\right)}I\left(x\right)-\frac{1}{t\left(x\right)}\alpha+\alpha.$

In Eq. (1), $I\left(x\right)$ is the hazy image, $J\left(x\right)$ is the clean image, $t\left(x\right)$ is the transmission map, and $\alpha$ is the global atmospheric light. The transmission map $t\left(x\right)$ [1,6] can be expressed as:

##### (3)
$t\left(x\right)=e^{-\beta d\left(x\right)}.$

In Eq. (3), $\beta$ is the scattering coefficient, and $d\left(x\right)$ is the depth (distance). Eq. (3) shows that depth information affects the transmission values $t\left(x\right)$. Typical image dehazing methods estimate the transmission map. In some previous works, the depth information was used for a similar problem of fog removal. Fog effects have been removed using depth estimation, which is based on the assumption that the difference between brightness and saturation becomes larger as the depth becomes larger [19]. Depth values have been estimated from the degree of blur for single-image fog removal [20]. Unlike previous studies, we propose the application of depth information with deep learning for the estimation of transmission maps. In the proposed method, the depth is estimated using deep learning methods, which give more accurate depth values.

DehazeNet [6] is a typical haze removal method that uses CNNs to estimate the transmission map and obtain a dehazed image from it based on an atmospheric scattering model. It requires guided image filtering as a post-processing procedure to refine the transmission map. An advantage of this method is that CNNs are used for deep learning to solve the image dehazing problem. Unlike the DehazeNet [6] method, the AOD-Net [7] method combines a transmission map and global atmospheric light parameters into a single parameter function $K\left(x\right)$, which can be learned through deep learning networks. An advantage of AOD-Net is that it is an end-to-end deep learning network. The densely connected pyramid dehazing network (DCPDN) [13] is a GAN-based method that can produce an image similar to a dehazed image. In this method, separate networks are used for learning the transmission map and global atmospheric light so that it can generate a haze-free image through a joint discriminator. A recent method called FFA-Net [8] learns channel attention and pixel attention maps on a block-by-block basis. It removes the haze by concatenating a hazy image and learns the feature maps as residual networks.

Deep learning networks have been used for depth estimation as well as haze removal. The Monodepth method [14] learns from stereo images and can predict depth information from a single image, whereas the Densedepth method [10] can estimate depth information using transfer learning. In the Monodepth method [14], KITTI data [15] are used as stereo images to estimate the disparity map derived from the left image, which is consistent with the right image. The right image requires the disparity map estimated on the left image to calculate the error.
As this process repeats, it is possible to create an image in the opposite direction, which allows the creation of stereo images and depth information. Monodepth2 [9] is a follow-up to the Monodepth method [14]. It uses the characteristics of the KITTI dataset [15], which was constructed using consecutive images captured by a moving car. In Monodepth2, the results can be corrected through reprojection. This method leads to fewer errors because of the creation of stereo images. The Densedepth method [10] uses layers consisting of an encoder–decoder structure, which are interconnected using skip connections. In this method, KITTI data [15] and NYU Depth V2 data [16] can be used to estimate depth information for both indoor and outdoor images. ##### Table 1. Parameters of depth estimation networks. Parameter Monodepth2 Densedepth Training dataset KITTI dataset NYU2 depth dataset KITTI dataset Batch size 12 4 Epoch 20 20 Learning rate 0.0001 0.0001 Min depth 0.1 10 Max depth 100.0 1000 Optimizer Adam Adam ## 3. The Proposed Method The proposed method generates a transmission map indirectly from a depth map, which can be generated by using previous depth estimation networks based on deep learning. Then, we obtain a dehazed image using a transmission map based on an atmospheric scattering model. Fig. 2 shows a sequence diagram of the proposed method. As shown in Fig. 2, the depth map is estimated before the estimation of the transmission map. For the depth map, a training process is performed using depth estimation networks based on deep learning. After the training process is complete, a depth estimation model is obtained. Once the depth estimation model is obtained, image dehazing can be performed on a hazy image. For a hazy input image, we estimate the depth map using the depth estimation model. The transmission map is estimated from the depth map using the relationship described in Eq. (3). Finally, we obtain the dehazed image using the atmospheric scattering model described in Eq. (2). Two methods were tested for the depth estimation model. The first method is Monodepth2 [9], which allows the correction of loss values in learning by applying additional information to the network for the depth estimation. The second method is Densedepth [10], which uses transfer learning with both indoor and outdoor image data. The network structure of Monodepth2 method is shown in Fig. 3. The depth network is based on the U-Net structure, which enables the prediction of the overall depth information. The pose network assists in predicting the depth information from the movement of objects that are in the front and rear image frames. Using the information in the pose networks, the networks adjust the parameters to generate a depth map. For this method, training is conducted using the KITTI datasets [15], which include mono images, stereo images, and mono and stereo images. Fig. 4 presents the detailed network structure of the Densedepth method [10]. This network was originally applied for image classification, and the encoder–decoder structure method was used to estimate the depth. The training of this method uses the KITTI data and NYU2 depth data. The NYU2 depth data are indoor data, and the KITTI data are outdoor data. ## 4. Experimental Results Table 1 presents the parameters used for training Monodepth2 and Densedepth in the experiments. ### 4.1 Results of Depth Estimation Networks Fig. 5 shows the experimental results using Monodepth2. Fig. 
5(a) shows a test image of the Berkeley dataset [17] as the input. Figs. 5(b)-(d) show the resulting depth maps using mono images as the training data. Figs. 5(b) and (c) show the results of training with image sizes of 640 ${\times}$ 192 and 1024 ${\times}$ 320, respectively. Fig. 5(d) shows the result of training with an image size of 640 ${\times}$ 192, as shown in Fig. 5(b) without applying the pose network. Figs. 5(e)-(g) are the resulting depth map images using stereo images as the training data. Figs. 5(e) and (f) show the results of training with image sizes of 640 ${\times}$ 192 and 1024 ${\times}$ 320, respectively. Fig. 5(g) shows the resulting depth map without applying the pose network with a size of 640 ${\times}$ 192, which is same as the size in Fig. 5(e). Figs. 5(h)-(j) show the resulting depth maps using both mono and stereo images as the training data. Figs. 5(h) and (i) show the results of training with sizes of 640 ${\times}$ 192 and 1024 ${\times}$ 320, respectively. Fig. 5(j) show the result of training without applying the pose network with a size of 640${\times}$192, which is the same as the size in Fig. 5(h). There is a tendency for small objects to be perceived at farther distances and for large objects to be perceived at nearer distances. If the pose network is not applied, the overall depth outlines are blurred, and the depth estimation values become less accurate. Fig. 6 shows the experimental results obtained using Densedepth. The encoder part of Densedepth network was set as DenseNet-169 [21] for the experiments. We compared the results of depth estimation for indoor and outdoor images using the NYU2 depth dataset and KITTI dataset. Fig. 6(a) shows the indoor image data [16] used as the input for Figs. 6(b) and (c). Fig. 6(d) shows the outdoor image data [19] used as the input for Figs. 6(e) and (f). Figs. 6(b) and (e) show the depth map images obtained by training using the NYU2 depth dataset. Figs. 6(c) and (f) show the depth map images obtained by training using the KITTI dataset. The NYU2 depth dataset provides indoor image data, and the KITTI dataset provides outdoor image data, so the resulting depth maps are different. With the NYU2 depth dataset, the results preserve more edges of objects. The results obtained with the NYU2 depth dataset show more detailed depth results than those obtained with the KITTI dataset for indoor images. For the outdoor images, the results obtained with the NYU2 depth dataset cannot predict the overall depth map, while the results with the KITTI dataset can predict the depth map more evenly. ### 4.2 Transmission Map and Dehazed Image Obtained using the Proposed Method For depth estimation, we used previously described depth estimation networks [9,10]. Both networks were implemented using TensorFlow codes. For the test, unannotated real-world hazy images were used [18]. Haze removal experiments were carried out by converting the depth map into the transmission map using the relationship described in Eq. (3). Figs. 7 and 8 show the process of image dehazing using the proposed method. Figs. 7(a) and 8(a) show the input hazy images [18]. Figs. 7(b) and 8(b) show the depth maps from the input images using the depth estimation model with the depth estimation network. Figs. 7(c) and 8(c) show the visualized transmission maps from the depth map. Figs. 7(d) and 8(d) show the dehazed images after the haze removal is carried out from the transmission map using the atmospheric scattering model described in Eq. (2). 
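To make the pipeline of Figs. 7 and 8 concrete, the following is a minimal sketch (not the authors' implementation) of the depth-to-dehazed-image step, assuming the depth map has already been produced by one of the depth estimation models and that the global atmospheric light $\alpha$ and the scattering coefficient $\beta$ are chosen by hand:

```python
import numpy as np

def dehaze_from_depth(hazy, depth, beta=1.0, alpha=0.8, t_min=0.1):
    """Recover a haze-free image from a hazy image and an estimated depth map.

    hazy  : float array in [0, 1] with shape (H, W, 3)
    depth : float array with shape (H, W), relative depth from the estimation network
    beta  : scattering coefficient in Eq. (3)
    alpha : global atmospheric light (a hand-picked scalar here)
    """
    t = np.exp(-beta * depth)                      # Eq. (3): transmission from depth
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]    # avoid dividing by very small t
    dehazed = (hazy - alpha) / t + alpha           # Eq. (2), rearranged
    return np.clip(dehazed, 0.0, 1.0)
```

In practice the estimated depth would typically be rescaled to a convenient range before applying Eq. (3), and $\alpha$ would be estimated from the image rather than fixed, as noted in the conclusion.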
In these results, the depth value becomes lower for nearby objects, and the transmission value become higher. In addition, objects at farther distances changed more. ### 4.3 Result Comparison of Proposed Method and DehazeNet DehazeNet [6] directly generates a transmission map from a hazy image using the CNNs. Comparisons of the results for hazy natural outdoor images for the proposed method and the DehazeNet method are shown in Figs. 9-11. Figs. 9(a)-(c) show the hazy images obtained using Bdd100k [17]. Figs. 10(a)-(c) show the dehazed images obtained using the DehazeNet method. Figs. 11(a)-(c) show the dehazed images obtained using the proposed method. In the DehazeNet method, haze effects are sufficiently removed, changes in the intensity levels are high, and the roads are excessively darkened from recognizing the bright parts of the road as haze. In the results obtained using the proposed method, the haze is removed more evenly, and the road parts are dehazed correctly while preserving the intensity levels without any darkening effects. We also performed experiments using the synthetic objective testing set (SOTS) [18] for the comparison of image dehazing results by DehazeNet and the proposed method with Monodepth2 and Densedepth. Figs. 12 and 13 show the results with the PSNR values, which were calculated using the groundtruth images of SOTS. The figures show that the proposed method gives better PSNR results for image dehazing than the DehazeNet method. The dehazed images from DehazeNet are darker than those from the proposed method. ### 4.4 Comparison of the Results with Respect to β The degree of change in the transmission map by depth value can be adjusted using the scattering coefficient β in Eq. (3). Figs. 14-18 show the results obtained with various β values (0.2, 0.4, 0.6, 0.8, 1.0, and 1.2, respectively). As β increases, the degree of change in the transmission map by depth value becomes higher. In this case, there is more difference in these values for the dehazed image compared to the input hazy image. Conversely, as β decreases, the degree of change in the transmission map by depth value becomes lower. In these results, it is observed that the β value of 1 results in more appropriate dehazed images. Depending on the characteristics of the original hazy image, a disadvantage of creating an excessively dark area appears when β becomes high in high-contrast areas such as shadows. If the estimated depth information does not match well with the original image, a high β value may result in artifacts due to the errors in depth information. If the depth information is somewhat similar to the original image, a higher β value provides better dehazing effects. When applying the depth estimation network trained with the KITTI dataset, the dehazing process was performed relatively well for the road environment. The dehazing effects were relatively low when there were buildings on both sides without any vehicles, as shown in Figs. 15 and 16. ## 5. Conclusion In this paper, we proposed a novel technique for image dehazing by indirectly creating a transmission map through the estimation of the depth map as opposed to direct estimation of the transmission map in previous image dehazing methods. The dehazing results using the proposed method were superior to those of previous methods that generate the transmission map directly with post-processing from the input image. However, the proposed method has limitations. 
First, the dataset covering the road environment in daylight could provide incorrect results for depth map estimation. Second, it is necessary to set the value of atmospheric light adaptively as each test set needs the appropriate atmospheric light value to be estimated for image dehazing. Future research should be directed to resolve these issues. ### ACKNOWLEDGMENTS This research was supported by the MSIT (Ministry of Science, ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-2016-0-00465) and supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation). This work was also supported by the National Foundation of Korea (NRF) grant, which is funded by the Korean government (MIST) (NRF-2019R1H1A2079873). ### REFERENCES 1 Narasimhan S. G., Nayar S. K., Jul. 2002, Vision and the atmosphere, Int. J. Comput. Vision, Vol. 48, No. 3, pp. 233-254 2 Jingkun Z., Sep. 2015, Analysis of causes and hazards of China’s frequent hazy weather, The Open Cybernetics & Systemics Journal, Vol. 9, pp. 1311-1314 3 Yan Z., Zhang H., Wang B., Paris S., Yu. Y., 2016, Automatic photo adjustment using deep learning, ACM Trans. Graphics, Vol. 35, No. 2, pp. 11 4 Cowan C. K., Kovesi P. D., May. 1988, Automatic sensor placement from vision task requirements, IEEE Trans. Pattern Analysis Machine Intelligence, Vol. 10, No. 3, pp. 407-416 5 Lee S., Maik V., Jang J., Shin J., Paik J., May. 2005, Noise-adaptive spatio-temporal filter for real-time noise removal in low light level images, IEEE Trans. Consumer Electronics, Vol. 51, No. 2, pp. 648-653 6 Cai B., Xu X., Jia K., Qing C., Tao D., Jan. 2016, DehazeNet: an end-to-end system for single image haze removal, IEEE Trans. Image Processing, Vol. 25, No. 11, pp. 5187-5198 7 Li B., Peng X., Wang Z., Xu J-Z., Feng D., 2017, AOD-Net: all-in-one dehazing Network, IEEE Int. Conf. Computer Vision, pp. 4770-4778 8 Xu Qin , Zhilin Wang , Yuanchao Bai , Xiaodong Xie , Huizhu Jia , 2019, FFA-Net: Feature fusion attention network for single image dehazing, arXiv preprint arXiv:1911.07559 9 Godard C., Aodha O. M., Brostow. G. J., 2019, Digging into self-supervised monocular depth estimation, IEEE Int. Conf. Computer Vision 10 Alhashim I., Wonka. P., 2018, High quality monocular depth estimation via transfer learning, arXiv e-prints, abs/1812.11941 11 McCartney E. J., 1976, Optics of the atmosphere: Scattering by molecules and particles, New York, NY, USA: Wiley 12 He K., Sun J., Tang X., Dec. 2011, Single image haze removal using dark channel prior, IEEE Trans. Pattern Analysis Machine Intelligence, Vol. 33, No. 12, pp. 2341-2353 13 Zhang H., Patel V. M., 2018, Densely connected pyramid dehazing network, IEEE Int. Conf. Computer Vision Pattern Recognition, pp. 3194-3203 14 Godard C., Aodha O. M., Brostow G. J., 2017, Unsupervised monocular depth estimation with left-right consistency, IEEE Int. Conf. Computer Vision Pattern Recognition, pp. 270-279 15 Geiger A., Lenz P., Stiller C., Urtasun R., Sep. 2013, Vision meets robotics: The KITTI dataset, International Journal of Robotics Research, Vol. 32 16 Silberman N., Hoiem D., Kohli P., Fergus R., 2012, Indoor segmentation and support inference from rgbd images, European Conf. Computer Vision 17 Yu F., Chen H., Wang X., Xian W., Chen Y., Liu F., Madhavan V., Darrell T., 2020, Bdd100k: a diverse driving dataset for heterogeneous multitask learning, IEEE Conf. Computer Vision Pattern Recognition 18 Li B., Ren W., D.Fu , Tao D., Feng D., Zeng W., Wang. 
Z., Aug. 2019, Benchmarking single image dehazing and beyond, IEEE Transactions on Image Processing, Vol. 28, No. 1, pp. 492-505 19 Pal D., Arora A., 2018, Removal of fog effect from highly foggy images using depth estimation and fuzzy contrast enhancement method, International Conference on Computing Communication and Automation, pp. 1-6 20 Jiwani M. A., Dandare S. N., Jun 2013, Single image fog removal using depth estimation based on blur estimation, International Journal of Scientific and Research Publications, Vol. 3, No. 6, pp. 1-6 21 Huang G., Liu Z., Maaten L., Weinberger K. Q., 2017, Densely connected convolutional networks, IEEE Conference on Computer Vision and Pattern Recognition, pp. 2261-2269 ## Author ##### Yejin Kim Yejin Kim received a BSc in Software from Konkuk University, Korea, in 2018. Currently, she is a graduate student at the Department of Software at Konkuk University and a researcher in the Intelligent Image Processing Laboratory. Her research interests include image dehazing via deep learning. ##### Changhoon Yim Changhoon Yim received a BSc from the Department of Control and Instrumentation Engineering, Seoul National University, Korea, in 1986, an MSc in Electrical and Electronics Engineering from the Korea Advanced Institute of Science and Technology in 1988, and a PhD in Electrical and Computer Engineering from the University of Texas at Austin in 1996. He worked as a research engineer at the Korean Broadcasting System from 1988 to 1991. From 1996 to 1999, he was a member of the technical staff in the HDTV and Multimedia Division, Sarnoff Corporation, New Jersey, USA. From 1999 to 2000, he worked at Bell Labs, Lucent Technologies, New Jersey, USA. From 2000 to 2002, he was a Software Engineer at KLA-Tencor Corporation, California, USA. From 2002 to 2003, he was a Principal Engineer at Samsung Electronics, Suwon, Korea. Since 2003, he has been a faculty member and is currently a professor in the Department of Computer Science and Engineering, Konkuk University, Seoul, Korea. His research interests include digital image processing, video processing, multimedia communication, and deep learning.
## An algorithm to determine probability of one string appearing earlier than another string in a uniformly random binary sequence

Given two binary strings of length $$n$$, determine in polynomial time the probability of one string appearing in front of the other one in a uniformly random binary sequence. (A sketch of one approach appears at the end of this page.)

## Hide the subfolder from appearing in url

I want to move from `www.example.com/_/randomfolder` to `example.com/randomfolder`. It should access the contents from `/_/randomfolder` but should show `example.com/randomfolder`. What I have tried:

# Trial 1

```
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L,QSA]
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

Problem: it fulfills my requirement (content from `/_/randomfolder` is shown at `example.com/randomfolder`), but it affects my main domain: `example.com` shows nothing, only a 404 page. I want it to show `public_html/index.php` only.

# Trial 2:

```
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# To load site from /_/xyz to /xyz without changing url
RewriteRule ^[A-Za-z0-9._]+ /_/%{REQUEST_URI} [L]
```

Problem: I have to manually change the address bar from `www.example.com/_/randomfolder` to `example.com/randomfolder`. It loads contents with or without `_`, but on first load it redirects to `/_/randomfolder`.

## Why aren't my hero images appearing as thumbnails in the SERP?

I display hero images across the top of my blog pages. However, I use CSS images rendered through div tags, and I read that Google doesn't typically index those. So the images aren't included as thumbnails in SERPs, see below. Can you please confirm if there is a way to direct Google to index the images, such as with an image sitemap? Does an image have to be included in the IMG tag to be considered for a thumbnail in the SERP?

## News not appearing in News-Webpart and Sites not appearing in Activity Webpart

Basically what the title says. I can add a news site in my SharePoint 2019 Server, but it will not appear in any of the webparts. It is shared as a news article, and it is readable by all users. I already tried starting manual re-indexing. I also tried giving the users all kinds of additional permissions, but so far nothing worked. I also noticed that I cannot see the webpart options such as "news sources" for the news-webpart that I have seen in various tutorials. I also cannot choose any lists from the list-webpart, but I am not sure if this is the same issue.

## Chrome appearing completely blank on Ubuntu

I am running Ubuntu 18.04 on Parallels for Mac Business Edition 14 (MacBook Pro Late 2013 Retina with i7, 16 GB RAM, GT 750M). The problem is: Chrome for some reason appears blank on Ubuntu, like this: Desktop screenshot

## Mails sent from the command line not appearing in inbox or spam folder

I sent some test mails using the command line. In the log I can see that `status = sent`, but I am not getting any emails in my inbox or spam folder.

```
Oct 11 15:51:01 ip-10-0-1-80 postfix/local[20606]: 724AB6D5B: to=<root@localhost>, orig_to=<root>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
```

## Ubuntu and Windows Partitions Appearing Under the Same Name

I'm trying to install Ubuntu on my SSD alongside Windows 10. I started by installing Windows 10, disabling fast boot and then trying to install Ubuntu. I created a swap partition of 16 GB.
Ubuntu told me I needed an EFI partition, so I created one of about 100 MB. The root took the remaining space. However, when I restarted my computer I didn't see any partitions, and simply saw this: (https://i.imgur.com/6RTFWBB.jpg). If I click the hard drive it'll just say no OS found. I went back to check the partitions through the Ubuntu installer and see this: https://i.imgur.com/3ZcVeLN.jpg It looks like the partitions were installed correctly, but I'm not getting a proper boot loader to choose between Windows and Ubuntu. What should I do?

## List attachments not appearing on query.iqy in Excel

I have exported a list to Excel so that I can view all new submissions in Excel without having to view them in SharePoint, but attachments are not in the list even though they are on the original SP list I exported. Help!

## How to stop subsites from appearing in Quick launch and top bar in SP Online if they are created via a workflow

So we are creating subsites through a K2 workflow. I know you can specify for the subsites to not appear in the top bar and quick launch of the parent site if created manually. But if it's created through K2 I don't get the option to turn off those features. Can anyone help me with this? Much appreciated.
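For the first question on this page — the probability that one binary string shows up before another in a stream of independent fair bits — one classical polynomial-time approach is Conway's "leading numbers" from Penney's game. A minimal sketch, assuming the two strings are distinct and both of length $$n$$ (so neither occurs strictly inside the other):

```python
def leading_number(x: str, y: str) -> int:
    """Conway's correlation of x over y: sum of 2**(k-1) over every k such that
    the last k characters of x equal the first k characters of y."""
    return sum(2 ** (k - 1)
               for k in range(1, min(len(x), len(y)) + 1)
               if x[-k:] == y[:k])

def prob_a_before_b(a: str, b: str) -> float:
    """Probability that pattern `a` appears before pattern `b` in an i.i.d. fair bit stream.

    Assumes a != b and both patterns have the same length, the setting of the question.
    """
    aa, ab = leading_number(a, a), leading_number(a, b)
    bb, ba = leading_number(b, b), leading_number(b, a)
    odds_a = bb - ba   # odds in favour of a appearing first
    odds_b = aa - ab   # odds in favour of b appearing first
    return odds_a / (odds_a + odds_b)

# Classic Penney's game example written in bits: "011" beats "111" with probability 7/8.
print(prob_a_before_b("011", "111"))  # 0.875
```

The same answer can also be obtained by building the Aho–Corasick automaton of the two patterns and solving a small linear system for the absorption probabilities, which generalizes to strings of different lengths.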
# allennlp.modules.lstm_cell_with_projection¶ An LSTM with Recurrent Dropout, a hidden_state which is projected and clipping on both the hidden state and the memory state of the LSTM. class allennlp.modules.lstm_cell_with_projection.LstmCellWithProjection(input_size: int, hidden_size: int, cell_size: int, go_forward: bool = True, recurrent_dropout_probability: float = 0.0, memory_cell_clip_value: Optional[float] = None, state_projection_clip_value: Optional[float] = None)[source] Bases: torch.nn.modules.module.Module An LSTM with Recurrent Dropout and a projected and clipped hidden state and memory. Note: this implementation is slower than the native Pytorch LSTM because it cannot make use of CUDNN optimizations for stacked RNNs due to and variational dropout and the custom nature of the cell state. Parameters: input_size : int, required. The dimension of the inputs to the LSTM. hidden_size : int, required. The dimension of the outputs of the LSTM. cell_size : int, required. The dimension of the memory cell used for the LSTM. go_forward: bool, optional (default = True) The direction in which the LSTM is applied to the sequence. Forwards by default, or backwards if False. recurrent_dropout_probability: float, optional (default = 0.0) The dropout probability to be used in a dropout scheme as stated in A Theoretically Grounded Application of Dropout in Recurrent Neural Networks . Implementation wise, this simply applies a fixed dropout mask per sequence to the recurrent connection of the LSTM. state_projection_clip_value: float, optional, (default = None) The magnitude with which to clip the hidden_state after projecting it. memory_cell_clip_value: float, optional, (default = None) The magnitude with which to clip the memory cell. output_accumulator : torch.FloatTensor The outputs of the LSTM for each timestep. A tensor of shape (batch_size, max_timesteps, hidden_size) where for a given batch element, all outputs past the sequence length for that batch are zero tensors. final_state: Tuple[torch.FloatTensor, torch.FloatTensor] The final (state, memory) states of the LSTM, with shape (1, batch_size, hidden_size) and (1, batch_size, cell_size) respectively. The first dimension is 1 in order to match the Pytorch API for returning stacked LSTM states. forward(inputs: torch.FloatTensor, batch_lengths: List[int], initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None)[source] Parameters: inputs : torch.FloatTensor, required. A tensor of shape (batch_size, num_timesteps, input_size) to apply the LSTM over. batch_lengths : List[int], required. A list of length batch_size containing the lengths of the sequences in batch. initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None) A tuple (state, memory) representing the initial hidden state and memory of the LSTM. The state has shape (1, batch_size, hidden_size) and the memory has shape (1, batch_size, cell_size). output_accumulator : torch.FloatTensor The outputs of the LSTM for each timestep. A tensor of shape (batch_size, max_timesteps, hidden_size) where for a given batch element, all outputs past the sequence length for that batch are zero tensors. final_state : Tuple[torch.FloatTensor, torch.FloatTensor] A tuple (state, memory) representing the initial hidden state and memory of the LSTM. The state has shape (1, batch_size, hidden_size) and the memory has shape (1, batch_size, cell_size). reset_parameters()[source]
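A short usage sketch based on the constructor and forward signatures documented above (the tensors are made up, and the batch is assumed to be sorted by decreasing sequence length):

```python
import torch
from allennlp.modules.lstm_cell_with_projection import LstmCellWithProjection

lstm = LstmCellWithProjection(
    input_size=10,
    hidden_size=5,
    cell_size=7,
    go_forward=True,
    recurrent_dropout_probability=0.1,
    memory_cell_clip_value=3.0,
    state_projection_clip_value=3.0,
)

batch_size, max_timesteps = 3, 6
inputs = torch.randn(batch_size, max_timesteps, 10)
# Lengths of each sequence in the batch, assumed here to be in decreasing order.
batch_lengths = [6, 4, 2]

outputs, (state, memory) = lstm(inputs, batch_lengths)
print(outputs.shape)  # (3, 6, 5) -- zero-padded past each sequence's length
print(state.shape)    # (1, 3, 5)
print(memory.shape)   # (1, 3, 7)
```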
# pyribs¶ Website Source PyPI Conda CI/CD Docs Docs Status Twitter pyribs.org GitHub docs.pyribs.org A bare-bones Python library for quality diversity optimization. pyribs is the official implementation of the Covariance Matrix Adaptation MAP-Elites (CMA-ME) algorithm and implements the Rapid Illumination of Behavior Space (RIBS) redesign of MAP-Elites detailed in the paper Covariance Matrix Adapation for the Rapid Illumination of Behavior Space. ## Overview¶ Quality diversity (QD) optimization is a subfield of optimization where solutions generated cover every point in a behavior space while simultaneously maximizing (or minimizing) a single objective. QD algorithms within the MAP-Elites family of QD algorithms produce heatmaps (archives) as output where each cell contains the best discovered representative of a region in behavior space. While many QD libraries exist, this particular library aims to be the QD analog to the pycma library (a single objective optimization library). In contrast to other QD libraries, this library is “bare-bones,” meaning pyribs (like pycma) focuses solely on optimizing fixed-dimensional continuous domains. Focusing solely on this one commonly-occurring problem allows us to optimize the library for performance as well as simplicity of use. For applications of QD on discrete domains, we recommend using qdpy or sferes. A user of pyribs selects three components that meet the needs of their application: • An Archive saves the best representatives generated within behavior space. • Emitters control how new candidate solutions are generated and affect if the algorithm prioritizes quality or diversity. • An Optimizer joins the Archive and Emitters together and acts as a scheduling algorithm for emitters. The Optimizer provides an interface for requesting new candidate solutions and telling the algorithm how candidates performed. ## Citation¶ If you use pyribs in your research, please cite it as follows. Note that you will need to include the hyperref package in order to use the \url command. @misc{pyribs, title = {pyribs: A bare-bones Python library for quality diversity optimization}, author = {Bryon Tjanaka and Matthew C. Fontaine and Yulun Zhang and Sam Sommerer and Stefanos Nikolaidis}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/icaros-usc/pyribs}}, } If you use the CMA-ME algorithm, please also cite Fontaine 2020. @inproceedings{10.1145/3377930.3390232, author = {Fontaine, Matthew C. and Togelius, Julian and Nikolaidis, Stefanos and Hoover, Amy K.}, title = {Covariance Matrix Adaptation for the Rapid Illumination of Behavior Space}, year = {2020}, isbn = {9781450371285}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3377930.3390232}, doi = {10.1145/3377930.3390232}, booktitle = {Proceedings of the 2020 Genetic and Evolutionary Computation Conference}, pages = {94–102}, numpages = {9}, location = {Canc\'{u}n, Mexico}, series = {GECCO '20} } ## Usage¶ Here we show an example application of CMA-ME in pyribs. To initialize the algorithm, we first create: • A 2D GridArchive where each dimension contains 20 bins across the range [-1, 1]. • An ImprovementEmitter, which starts from the search point 0 in 10 dimensional space and a Gaussian sampling distribution with standard deviation 0.1. • An Optimizer that combines the archive and emitter together. 
After initializing the components, we optimize (pyribs maximizes) the negative 10-D Sphere function for 1000 iterations. Users of pycma will be familiar with the ask-tell interface (which pyribs adopted). First, the user must ask the optimizer for new candidate solutions. After evaluating the solution, they tell the optimizer the objective value and behavior characteristics (BCs) of each candidate solution. The algorithm then populates the archive and makes decisions on where to sample solutions next. Our toy example uses the first two parameters of the search space as BCs. import numpy as np from ribs.archives import GridArchive from ribs.emitters import ImprovementEmitter from ribs.optimizers import Optimizer archive = GridArchive([20, 20], [(-1, 1), (-1, 1)]) emitters = [ImprovementEmitter(archive, [0.0] * 10, 0.1)] optimizer = Optimizer(archive, emitters) for itr in range(1000): solutions = optimizer.ask() objectives = -np.sum(np.square(solutions), axis=1) bcs = solutions[:, :2] optimizer.tell(objectives, bcs) To visualize this archive with matplotlib, we then use the grid_archive_heatmap function from ribs.visualize. import matplotlib.pyplot as plt from ribs.visualize import grid_archive_heatmap grid_archive_heatmap(archive) plt.show() For more information, refer to the documentation. ## Installation¶ pyribs supports Python 3.6-3.8 (for now, 3.9 will only work if you are able to build llvmlite on your system). Earlier Python versions may work but are not officially supported. To install from PyPI, run pip install ribs This command only installs dependencies for the core of pyribs. To install support tools like ribs.visualize, run pip install ribs[all] Equivalently, you can install the base version (equivalent to ribs) from Conda with conda install -c conda-forge pyribs-base The full version (equivalent to ribs[all]) can be installed with conda install -c conda-forge pyribs To test your installation, import it and print the version with: python -c "import ribs; print(ribs.__version__)" You should see a version number like 0.2.0 in the output. ### From Source¶ To install a version from source, clone the repo git clone https://github.com/icaros-usc/pyribs Then cd into it cd pyribs And run pip install -e .[all] ## Documentation¶ See here for the documentation: https://docs.pyribs.org To serve the documentation locally, clone the repo and install the development requirements with pip install -e .[dev] Then run make servedocs This will open a window in your browser with the documentation automatically loaded. Furthermore, every time you make changes to the documentation, the preview will also reload. ## Contributors¶ pyribs is developed and maintained by the ICAROS Lab at USC. We thank Amy K. Hoover and Julian Togelius for their contributions deriving the CMA-ME algorithm. ## License¶ pyribs is released under the MIT License. ## Credits¶ The pyribs package was initially created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
# The Broncos' John Fox -- By the Numbers

John Fox? Really?

These were the words from a friend when he heard that the Broncos had agreed to terms with John Fox to be Denver's next head coach. I must admit that my own reaction was somewhat similar, given the fact that I had never paid any particular attention to John Fox's career with the Panthers. My image of Carolina is one of a team that is perpetually near the bottom of their division and rarely among the better teams in the NFC. What I have found since Fox became one of the Broncos' head coaching candidates is that my perception is not completely accurate. In Fox's nine years with the Panthers, they finished in first place in the NFC South three times, in second place twice, third place twice and fourth place twice. The question, then, becomes how the Panthers have stacked up against the rest of the league with John Fox at the helm. After the jump, we'll take a look at some answers to that question.

The most basic question is one of Fox's win-loss record. That is the foundation upon which playoff appearances and championships are built. Fox led the Panthers to a 73-71 record in his nine years. By way of comparison, the Broncos had a 78-66 record during the same period. However, those numbers might be just a tad deceiving. Fox led the Panthers to three divisional titles -- compared to Denver's one divisional title during the same span. Both Carolina and the Broncos had three playoff appearances between 2002 and 2010. Fox led the Panthers to five post season wins, including an NFC Championship, for an overall post season record of 5-3. The Broncos went 1-3 in their playoff appearances.

When we look at the Panthers' year-by-year record we find:

| Year | Record | Notes |
|------|--------|-------|
| 2002 | 7-9 | Panthers had gone 8-8, 7-9 and 1-15 in 1999, 2000 and 2001. |
| 2003 | 11-5 | Won division. Beat Dallas, St. Louis & Philadelphia to win the NFC Championship. Lost by 3 points to New England in the Super Bowl. |
| 2004 | 7-9 | |
| 2005 | 11-5 | Won division. Beat New York and Chicago before losing to Seattle in the NFC Championship game. |
| 2006 | 8-8 | |
| 2007 | 7-9 | |
| 2008 | 12-4 | Won division. Lost to Arizona in a divisional playoff game. |
| 2009 | 8-8 | |
| 2010 | 2-14 | |

We see that Fox led the Panthers to three winning seasons, two .500 seasons, three 7-9 seasons and one 2-14 season in nine seasons. Overall, not the greatest record, but not the worst either. What is a little disconcerting is the roller coaster nature of the Panthers' season records: a winning season one year, followed by a .500 or losing season the following year. It would be worth further research to discern the reasons for this pattern.

Another perspective on Fox's success can be found by looking at how Carolina has stacked up against the rest of the NFL in a variety of categories. The first set of statistics looks at the following categories: Win/Loss Percentage (obviously the lower the number, the higher the Panthers were ranked in the NFL), Take Away/Give Away Ratio (lower numbers represent more take aways than give aways), Points +/- (the lower the ranking, the more the Panthers outscored their opponents) and Yards +/- (the lower the number, the more Carolina outgained their opponents).
| Year | Record | Win/Loss % | Take Away/Give Away | Points +/- | Yards +/- |
|------|--------|------------|---------------------|------------|-----------|
| 2002 | 7-9 | 20th | 23rd | 23rd | 22nd |
| 2003 | 11-5 | 7th | 25th | 16th | 11th |
| 2004 | 7-9 | 18th | 4th | 12th | 19th |
| 2005 | 11-5 | 5th | 3rd | 4th | 11th |
| 2006 | 8-8 | 13th | 23rd | 20th | 14th |
| 2007 | 7-9 | 18th | 18th | 21st | 23rd |
| 2008 | 12-4 | 2nd | 7th | 8th | 16th |
| 2009 | 8-8 | 16th | 8th | 17th | 15th |
| 2010 | 2-14 | 32nd | 25th | 32nd | 31st |

As with the Panthers' win/loss records, we see something of a wide spread of rankings. They have fallen in the top ten in some years, but in the bottom ten in others. It is hard to tell from these just where Fox will lead Denver.

A second set of statistics are the Panthers' offensive rankings. We will look at three offensive categories, each with a set of statistics.

Overall Offense: Yards, Points, Give Aways. Please note in Yards and Points, the lower the number, the better Carolina did, in Give Aways, the higher the ranking, the less they gave the ball away.

| Year | Yards | Points | Give Aways |
|------|-------|--------|------------|
| 2002 | 31st | 30th | 29th |
| 2003 | 16th | 15th | 20th |
| 2004 | 13th | 13th | 12th |
| 2005 | 22nd | 8th | 13th |
| 2006 | 24th | 27th | 16th |
| 2007 | 29th | 26th | 14th |
| 2008 | 10th | 7th | 6th |
| 2009 | 19th | 21st | 22nd |
| 2010 | 32nd | 32nd | 29th |

Fox's offense has not consistently been high in most categories and we can see a steady pattern of improvement, followed by a slide, followed by improvement, etc.

Rushing Offense: Yards, Touchdowns, Yards/Attempt and Fumbles Lost. In Yards, Touchdowns and Yards/Attempt, lower numbers are better while the reverse is true for Fumbles Lost.

| Year | Yards | Touchdowns | Yards/Attempt | Fumbles Lost |
|------|-------|------------|---------------|--------------|
| 2002 | 25th | 21st | 31st | 27th |
| 2003 | 7th | 24th | 17th | 25th |
| 2004 | 28th | 19th | 28th | 14th |
| 2005 | 19th | 8th | 29th | 10th |
| 2006 | 24th | 28th | 19th | 10th |
| 2007 | 14th | 28th | 15th | 14th |
| 2008 | 3rd | 1st | 2nd | 5th |
| 2009 | 3rd | 10th | 3rd | 17th |
| 2010 | 13th | 31st | 12th | 28th |

The Panthers seem to have made strong progress in rushing offense. Overall, however, their performance in these areas was as up and down as the rest of the statistics we have looked at above.

Passing Offense: Yards, Touchdowns, Interceptions, Net Yards/Attempt. In Yards, Touchdowns and Net Yards/Attempt, lower numbers are better. In Interceptions, higher numbers are better.

| Year | Yards | Touchdowns | Interceptions | Net Yards/Attempt |
|------|-------|------------|---------------|-------------------|
| 2002 | 30th | 30th | 25th | 27th |
| 2003 | 18th | 16th | 12th | 9th |
| 2004 | 9th | 5th | 13th | 12th |
| 2005 | 17th | 5th | 13th | 12th |
| 2006 | 15th | 17th | 17th | 20th |
| 2007 | 29th | 17th | 17th | 30th |
| 2008 | 19th | 24th | 9th | 4th |
| 2009 | 27th | 24th | 27th | 22nd |
| 2010 | 32nd | 32nd | 25th | 32nd |

During Fox's first years, the Panthers passing game appeared to be steadily improving, before beginning to slump during the second half of his tenure.

A third and final set of statistics are the Panthers' defensive rankings. We will look at three defensive categories, each with a set of statistics.

Overall Defense: Yards, Points, Take Aways. Please note, the lower the number, the better Carolina did.

| Year | Yards | Points | Take Aways |
|------|-------|--------|------------|
| 2002 | 2nd | 5th | 7th |
| 2003 | 8th | 10th | 18th |
| 2004 | 20th | 15th | 2nd |
| 2005 | 3rd | 5th | 2nd |
| 2006 | 7th | 8th | 28th |
| 2007 | 16th | 15th | 11th |
| 2008 | 18th | 12th | 15th |
| 2009 | 8th | 9th | 4th |
| 2010 | 18th | 26th | 11th |

Fox's defense appears to be fairly consistently strong. It was often in the top ten in key categories. One question that does arise is: Why did the defense start very strong, then gradually decline?

Rushing Defense: Yards, Touchdowns, Yards/Attempt and Fumbles Recovered. Please note, the lower the number, the better Carolina did.
| Year | Yards | Touchdowns | Yards/Attempt | Fumbles Recovered |
|------|-------|------------|---------------|-------------------|
| 2002 | 8th | 6th | 1st | 7th |
| 2003 | 11th | 6th | 12th | 23rd |
| 2004 | 17th | 28th | 12th | 13th |
| 2005 | 4th | 4th | 4th | 2nd |
| 2006 | 11th | 7th | 9th | 28th |
| 2007 | 18th | 21st | 4th | 5th |
| 2008 | 20th | 15th | 23rd | 8th |
| 2009 | 22nd | 21st | 22nd | 1st |
| 2010 | 23rd | 28th | 10th | 10th |

Once again we can see how Fox's defenses were strong in the early years, but not so strong in later years.

Passing Defense: Yards, Touchdowns, Interceptions, Net Yards/Attempt. Please note, the lower the number, the better Carolina did.

| Year | Yards | Touchdowns | Interceptions | Net Yards/Attempt |
|------|-------|------------|---------------|-------------------|
| 2002 | 4th | 7th | 15th | 4th |
| 2003 | 9th | 13th | 12th | 8th |
| 2004 | 18th | 7th | 1st | 21st |
| 2005 | 9th | 2nd | 4th | 4th |
| 2006 | 4th | 20th | 22nd | 9th |
| 2007 | 17th | 15th | 23rd | 16th |
| 2008 | 16th | 10th | 21st | 6th |
| 2009 | 4th | 2nd | 5th | 9th |
| 2010 | 11th | 6th | 11th | 17th |

The Panthers' passing defense rankings show the same kind of up and down patterns as the rest of the rankings during Fox's tenure. Overall, these rankings show a team which has averaged being in the middle of the league -- sometimes excelling, other times lagging behind. What I found encouraging was the improvement in Fox's first year. Carolina had ranked in the bottom five in nearly every offensive and defensive category in the league on their way to going 1-15. Fox was able to get them into the top ten in nearly every defensive category. This is precisely the major type of help the Broncos need.
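For readers who want to check the win-loss comparison above, the percentages behind the 73-71 and 78-66 records work out as follows (a quick illustrative calculation using only numbers quoted in this article):

    # Winning percentages implied by the records quoted above (2002-2010).
    fox_wins, fox_losses = 73, 71      # Panthers under John Fox
    den_wins, den_losses = 78, 66      # Broncos over the same span

    fox_pct = fox_wins / (fox_wins + fox_losses)
    den_pct = den_wins / (den_wins + den_losses)

    print(round(fox_pct, 3))   # 0.507
    print(round(den_pct, 3))   # 0.542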
• ### The Binary Neutron Star event LIGO/VIRGO GW170817 a hundred and sixty days after merger: synchrotron emission across the electromagnetic spectrum(1801.03531) Feb. 25, 2018 astro-ph.HE We report deep Chandra, HST and VLA observations of the binary neutron star event GW170817 at $t<160$ d after merger. These observations show that GW170817 has been steadily brightening with time and might have now reached its peak, and constrain the emission process as non-thermal synchrotron emission where the cooling frequency $\nu_c$ is above the X-ray band and the synchrotron frequency $\nu_m$ is below the radio band. The very simple power-law spectrum extending for eight orders of magnitude in frequency enables the most precise measurement of the index $p$ of the distribution of non-thermal relativistic electrons $N(\gamma)\propto \gamma^{-p}$ accelerated by a shock launched by a NS-NS merger to date. We find $p=2.17\pm0.01$, which indicates that radiation from ejecta with $\Gamma\sim3-10$ dominates the observed emission. While constraining the nature of the emission process, these observations do \emph{not} constrain the nature of the relativistic ejecta. We employ simulations of explosive outflows launched in NS ejecta clouds to show that the spectral and temporal evolution of the non-thermal emission from GW170817 is consistent with both emission from radially stratified quasi-spherical ejecta traveling at mildly relativistic speeds, \emph{and} emission from off-axis collimated ejecta characterized by a narrow cone of ultra-relativistic material with slower wings extending to larger angles. In the latter scenario, GW170817 harbored a normal SGRB directed away from our line of sight. Observations at $t\le 200$ days are unlikely to settle the debate as in both scenarios the observed emission is effectively dominated by radiation from mildly relativistic material.
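As background for the quoted measurement of $p$ (a standard optically thin synchrotron relation, not taken from the abstract itself): between the synchrotron and cooling frequencies the flux density follows

$$F_\nu \propto \nu^{-(p-1)/2}, \qquad \nu_m < \nu < \nu_c,$$

so a measured spectral index $\beta$ (defined by $F_\nu \propto \nu^{\beta}$) translates into $p = 1 - 2\beta$. The fact that a single power law holds over eight orders of magnitude in frequency, from radio to X-rays, is what allows $p$ to be pinned down to the percent-level precision quoted above.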
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01r494vk20n

Title: Wave-driven rotation and mass separation in rotating magnetic mirrors
Authors: Fetterman, Abraham
Advisors: Fisch, Nathaniel J
Contributors: Plasma Physics Department
Subjects: Plasma physics
Issue Date: 2012
Publisher: Princeton, NJ : Princeton University

Abstract: Axisymmetric mirrors are attractive for fusion because of their simplicity, high plasma pressure at a given magnetic pressure, and steady state operation. Their subclass, rotating mirrors, are particularly interesting because they have increased parallel confinement, magnetohydrodynamic stability, and a natural heating mechanism. This thesis finds and explores an unusual effect in supersonically rotating plasmas: particles are diffused by waves in both potential energy and kinetic energy. Extending the alpha channeling concept to rotating plasmas, the alpha particles may be removed at low energy through the loss cone, and the energy lost may be transferred to the radial electric field. This eliminates the need for electrodes in the mirror throat, which have presented serious technical issues in past rotating plasma devices. A high azimuthal mode number perturbation on the magnetic field is a particularly simple way to achieve the latter effect. In the rotating frame, this perturbation is seen as a wave near the alpha particle cyclotron harmonic, and can break the azimuthal symmetry and magnetic moment conservation without changing the particle's total energy. The particle may exit if it reduces its kinetic energy and becomes more trapped if it gains kinetic energy, leading to a steady state current that maintains the field. Simulations of single particles in rotating mirrors show that a stationary wave can extract enough energy from alpha particles for a reactor to be self-sustaining. In the same way, rotation can be produced in non-fusion plasmas. Waves are identified to produce rotation in plasma centrifuges, which separate isotopes based on their mass difference. Finally, a new high throughput mass filter which is well suited to separating nuclear waste is presented. The new filter, the magnetic centrifugal mass filter (MCMF), has well confined output streams and less potential for nuclear proliferation than competing technologies. To assess the usefulness of the MCMF, a metric for comparing mass filters is developed. With this metric, the MCMF is compared with other mass filters such as the Ohkawa filter and the conventional plasma centrifuge.

URI: http://arks.princeton.edu/ark:/88435/dsp01r494vk20n
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu/
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Plasma Physics
# Quark Matter 2012

12-18 August 2012, US/Eastern timezone

## Midrapidity antibaryon-to-baryon ratios in pp and Pb-Pb collisions measured by the ALICE experiment

15 Aug 2012, 12:00 (20m), Regency 2/3 -- Oral Presentation, Global and collective dynamics

### Speaker

Michal Broz (Comenius University (SK))

### Description

The ALICE Experiment features low material budget and high resolution tracking, which allow for precise measurements of charged particle production. The measurement of the antibaryon-to-baryon ratios ($\bar{B}/B$), in particular, probes the baryon transport and the degree of baryon stopping in high energy collisions, providing insight into the collision dynamics and the structure of baryons. In this talk, we discuss the measurement of different $\bar{B}/B$ ratios ($\bar{p}/p$, $\bar{\Lambda}/\Lambda$, $\Xi^+/\Xi^-$, $\Omega^+/\Omega^-$) in pp collisions at $\sqrt{s} = 0.9$, 2.76, and 7 TeV and in Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV, as a function of charged particle multiplicity, rapidity and transverse momentum. Results from pp and Pb-Pb collisions are presented and compared to models.

### Primary author

Collaboration ALICE (CERN, Geneva, Switzerland)

### Co-author

Christine Nattrass (University of Tennessee (US))
• ### Exploring Cosmic Origins with CORE: Extragalactic sources in Cosmic Microwave Background maps(1609.07263) May 18, 2017 astro-ph.CO, astro-ph.GA We discuss the potential of a next generation space-borne Cosmic Microwave Background (CMB) experiment for studies of extragalactic sources. Our analysis has particular bearing on the definition of the future space project, CORE, that has been submitted in response to ESA's call for a Medium-size mission opportunity as the successor of the Planck satellite. Even though the effective telescope size will be somewhat smaller than that of Planck, CORE will have a considerably better angular resolution at its highest frequencies, since, in contrast with Planck, it will be diffraction limited at all frequencies. The improved resolution implies a considerable decrease of the source confusion, i.e. substantially fainter detection limits. In particular, CORE will detect thousands of strongly lensed high-z galaxies distributed over the full sky. The extreme brightness of these galaxies will make it possible to study them, via follow-up observations, in extraordinary detail. Also, the CORE resolution matches the typical sizes of high-z galaxy proto-clusters much better than the Planck resolution, resulting in a much higher detection efficiency; these objects will be caught in an evolutionary phase beyond the reach of surveys in other wavebands. Furthermore, CORE will provide unique information on the evolution of the star formation in virialized groups and clusters of galaxies up to the highest possible redshifts. Finally, thanks to its very high sensitivity, CORE will detect the polarized emission of thousands of radio sources and, for the first time, of dusty galaxies, at mm and sub-mm wavelengths, respectively. • We examine the cosmological constraints that can be achieved with a galaxy cluster survey with the future CORE space mission. Using realistic simulations of the millimeter sky, produced with the latest version of the Planck Sky Model, we characterize the CORE cluster catalogues as a function of the main mission performance parameters. We pay particular attention to telescope size, key to improved angular resolution, and discuss the comparison and the complementarity of CORE with ambitious future ground-based CMB experiments that could be deployed in the next decade. A possible CORE mission concept with a 150 cm diameter primary mirror can detect of the order of 50,000 clusters through the thermal Sunyaev-Zeldovich effect (SZE). The total yield increases (decreases) by 25% when increasing (decreasing) the mirror diameter by 30 cm. The 150 cm telescope configuration will detect the most massive clusters ($>10^{14}\, M_\odot$) at redshift $z>1.5$ over the whole sky, although the exact number above this redshift is tied to the uncertain evolution of the cluster SZE flux-mass relation; assuming self-similar evolution, CORE will detect $\sim 500$ clusters at redshift $z>1.5$. This changes to 800 (200) when increasing (decreasing) the mirror size by 30 cm. CORE will be able to measure individual cluster halo masses through lensing of the cosmic microwave background anisotropies with a 1-$\sigma$ sensitivity of $4\times10^{14} M_\odot$, for a 120 cm aperture telescope, and $10^{14} M_\odot$ for a 180 cm one. 
[abridged] • ### Candidate Clusters of Galaxies at $z>1.3$ Identified in the Spitzer SPT Deep Field Survey(1404.0023) March 31, 2014 astro-ph.CO, astro-ph.GA We present 279 galaxy cluster candidates at $z > 1.3$ selected from the 94 deg$^{2}$ Spitzer South Pole Telescope Deep Field (SSDF) survey. We use a simple algorithm to select candidate high-redshift clusters of galaxies based on Spitzer/IRAC mid-infrared data combined with shallow all-sky optical data. We identify distant cluster candidates in SSDF adopting an overdensity threshold that results in a high purity (80%) cluster sample based on tests in the Spitzer Deep, Wide-Field Survey of the Bo\"otes field. Our simple algorithm detects all three $1.4 < z \leq 1.75$ X-ray detected clusters in the Bo\"otes field. The uniqueness of the SSDF survey resides not just in its area, one of the largest contiguous extragalactic fields observed with Spitzer, but also in its deep, multi-wavelength coverage by the South Pole Telescope (SPT), Herschel/SPIRE and XMM-Newton. This rich dataset will allow direct or stacked measurements of Sunyaev-Zel'dovich effect decrements or X-ray masses for many of the SSDF clusters presented here, and enable systematic study of the most distant clusters on an unprecedented scale. We measure the angular correlation function of our sample and find that these candidates show strong clustering. Employing the COSMOS/UltraVista photometric catalog in order to infer the redshift distribution of our cluster selection, we find that these clusters have a comoving number density $n_c = (0.7^{+6.3}_{-0.6}) \times 10^{-7} h^{3} \mathrm{Mpc}^{-3}$ and a spatial clustering correlation scale length $r_0 = (32 \pm 7) h^{-1} \rm{Mpc}$. Assuming our sample is comprised of dark matter halos above a characteristic minimum mass, $M_{{\rm min}}$, we derive that at $z=1.5$ these clusters reside in halos larger than $M_{{\rm min}} = 1.5^{+0.9}_{-0.7} \times 10^{14} h^{-1} M_{\odot}$. (abridged) • Planck's all sky surveys at 30-857 GHz provide an unprecedented opportunity to follow the radio spectra of a large sample of extragalactic sources to frequencies 2-20 times higher than allowed by past, large area, ground-based surveys. We combine the results of the Planck Early Release Compact Source Catalog (ERCSC) with quasi-simultaneous ground-based observations, as well as archival data, at frequencies below or overlapping Planck frequency bands, to validate the astrometry and photometry of the ERCSC radio sources and study the spectral features shown in this new frequency window opened by Planck. The ERCSC source positions and flux density scales are found to be consistent with the ground-based observations. We present and discuss the spectral energy distributions (SEDs) of a sample of "extreme" radio sources to illustrate the richness of the ERCSC for the study of extragalactic radio sources. Variability is found to play a role in the unusual spectral features of some of these sources. • We present the XMM-Newton follow-up for confirmation of Planck cluster candidates. Twenty-five candidates have been observed to date using snapshot (~10 ksec) exposures, ten as part of a pilot programme to sample a low range of signal-to-noise ratios (4<S/N<6), and a further 15 in a programme to observe a sample of S/N>5 candidates. The sensitivity and spatial resolution of XMM-Newton allows unambiguous discrimination between clusters and false candidates. The 4 false candidates have S/N <= 4.1. 
A total of 21 candidates are confirmed as extended X-ray sources. Seventeen are single clusters, the majority of which are found to have highly irregular and disturbed morphologies (about ~70%). The remaining four sources are multiple systems, including the unexpected discovery of a supercluster at z=0.45. For 20 sources we are able to derive a redshift estimate from the X-ray Fe K line (albeit of variable quality). The new clusters span the redshift range 0.09 <= z <= 0.54, with a median redshift of z~0.37. A first determination is made of their X-ray properties including the characteristic size, which is used to improve the estimate of the SZ Compton parameter, Y_SZ. The follow-up validation programme has helped to optimise the Planck candidate selection process. It has also provided a preview of the X-ray properties of these newly-discovered clusters, allowing comparison with their SZ properties, and to the X-ray and SZ properties of known clusters observed in the Planck survey. Our results suggest that Planck may have started to reveal a non-negligible population of massive dynamically perturbed objects that is under-represented in X-ray surveys. However, despite their particular properties, these new clusters appear to follow the Y_SZ-Y_X relation established for X-ray selected objects, where Y_X is the product of the gas mass and temperature. • ### Residual noise covariance for Planck low-resolution data analysis(0906.0175) May 31, 2009 astro-ph.CO, astro-ph.IM Aims: Develop and validate tools to estimate residual noise covariance in Planck frequency maps. Quantify signal error effects and compare different techniques to produce low-resolution maps. Methods: We derive analytical estimates of covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in different resolution downgrading schemes considered in this work. Results: We find an excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. The significance of signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles, ell > 2Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable contraints on large scale anisotropy. Conclusions: We have described how to compute the low-resolution maps, with a controlled sky signal level, and a reliable estimate of covariance of the residual noise. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth limited maps. • ### Making Maps from Planck LFI 30GHz Data with Asymmetric Beams and Cooler Noise(0806.3167) April 9, 2009 astro-ph The Planck satellite will observe the full sky at nine frequencies from 30 to 857 GHz. 
The goal of this paper is to examine the effects of four realistic instrument systematics in the 30 GHz frequency maps: non-axially-symmetric beams, sample integration, sorption cooler noise, and pointing errors. We simulated one year long observations of four 30 GHz detectors. The simulated timestreams contained CMB, foreground components (both galactic and extra-galactic), instrument noise (correlated and white), and the four instrument systematic effects. We made maps from the timelines and examined the magnitudes of the systematics effects in the maps and their angular power spectra. We also compared the maps of different mapmaking codes to see how they performed. We used five mapmaking codes (two destripers and three optimal codes). None of our mapmaking codes makes an attempt to deconvolve the beam from its output map. Therefore all our maps had similar smoothing due to beams and sample integration. Temperature to polarization cross-coupling due to beam mismatch causes a detectable bias in the TE spectrum of the CMB map. The effects of cooler noise and pointing errors did not appear to be major concerns for the 30 GHz channel. The only essential difference found so far between mapmaking codes that affects accuracy (in terms of residual RMS) is baseline length. All optimal codes give essentially indistinguishable results. A destriper gives the same result as the optimal codes when the baseline is set short enough. For longer baselines destripers require less computing resources but deliver a noisier map. • ### Making Maps from Planck LFI 30GHz Data(astro-ph/0702483) Feb. 19, 2007 astro-ph This paper is one of a series describing the performance and accuracy of map-making codes as assessed by the Planck CTP working group. We compare the performance of multiple codes written by different groups for making polarized maps from Planck-sized, all-sky cosmic microwave background (CMB) data. Three of the codes are based on destriping algorithm, whereas the other three are implementations of a maximum-likelihood algorithm. Previous papers in the series described simulations at 100 GHz (Poutanen et al. 2006) and 217 GHz (Ashdown et al. 2006). In this paper we make maps (temperature and polarisation) from the simulated one-year observations of four 30 GHz detectors of Planck Low Frequency Instrument (LFI). We used Planck Level S simulation pipeline to produce the observed time-ordered-data streams (TOD). Our previous studies considered polarisation observations for the CMB only. For this paper we increased the realism of the simulations and included polarized galactic foregrounds to our sky model. Our simulated TODs comprised of dipole, CMB, diffuse galactic emissions, extragalactic radio sources, and detector noise. The strong subpixel signal gradients arising from the foreground signals couple to the output map through the map-making and cause an error (signal error) in the maps. Destriping codes have smaller signal error than the maximum-likelihood codes. We examined a number of schemes to reduce this error. On the other hand, the maximum-likelihood map-making codes can produce maps with lower residual noise than destriping codes. • ### Cosmological constraints from a 2D SZ catalog(astro-ph/0407436) July 21, 2004 astro-ph We perform a Fisher matrix analysis to quantify cosmological constraints obtainable from a 2-dimensional Sunyaev-Zel'dovich (SZ) cluster catalog using the counts and the angular correlation function. 
Three kinds of SZ survey are considered: the almost all-sky Planck survey and two deeper ground-based surveys, one with 10% sky coverage, the other one with a coverage of 250 square degrees. With the counts and angular function, and adding the constraint from the local X-ray cluster temperature function, joint 10% to 30% errors (1 sigma) are achievable on the cosmological parameter pair (sigma_8, Omega_m) in the flat concordance model. Constraints from a 2D distribution remain relatively robust to uncertainties in possible cluster gas evolution for the case of Planck. Alternatively, we examine constraints on cluster gas physics when assuming priors on the cosmological parameters (e.g., from cosmic microwave background anisotropies and SNIa data), finding a poor ability to constrain gas evolution with the 2-dimensional catalog. From just the SZ counts and angular correlation function we obtain, however, a constraint on the product between the present-day cluster gas mass fraction and the normalization of the mass-temperature relation, T_*, with a precision of 15%. This is particularly interesting because it would be based on a very large catalog and is independent of any X-ray data. • ### The XMM--NEWTON Omega Project: I. The X-ray Luminosity - Temperature Relation at z>0.4(astro-ph/0311344) March 29, 2004 astro-ph (abridged) We describe XMM-Newton Guaranteed Time observations of a sample of eight high redshift (0.45<z<0.62) clusters. The goal of these observations was to measure the luminosity and the temperature of the clusters to a precision of \~10%, leading to constraints on the possible evolution of the luminosity--temperature relation, and ultimately on the values of the matter density, Omega_M and, to a lesser extent, the cosmological constant Omega_L. The clusters were drawn from the SHARC and 160 Square Degree (160SD) ROSAT surveys. Here we describe our data analysis techniques and present, for the first time with XMM-Newton,Lx-Tx relation. For each of the eight clusters in the sample, we have measured total bolometric luminosities, performed beta-model fits to the radial surface profiles and made spectral fits to a single temperature isothermal model. We describe data analysis techniques that pay particular attention to background mitigation. Characterizing the Lx-Tx relation as Lx = L_{6} (T/6keV)^{alpha},we find L_{6}=16.8 +7.6/-5.2 10^{44} erg/s and alpha=2.7 +/-0.4 for a EdS H=50 cosmology at a typical redshift z =0.55. Comparing with the low redshift study by Markevitch, assuming L-T to evolve as (1+z)^A, we find A=0.68 +/-0.26 for the same cosmology and A=1.52 +0.26/-0.27 for a concordance cosmology. We conclude that there is now evidence from both XMM-Newton and Chandra for an evolutionary trend in the L-T relation. Our observations lend support to the robustness and completeness of the SHARC and 160SD surveys. • ### The XMM--NEWTON Omega Project: II.Cosmological implications from the high redshift L-T relation of X-ray clusters(astro-ph/0311381) Nov. 17, 2003 astro-ph The evolution with redshift of the temperature-luminosity relation of X-ray galaxy clusters is a key ingredient to break degeneracies in the interpretation of X-ray clusters redshift number counts. 
We therefore take advantage of the recent measurements of the temperature-luminosity relation of distant clusters observed with XMM-Newton and Chandra satellites to examine theoretical number counts expected for different available X-rays cluster samples, namely the RDCS, EMSS, SHARC, 160deg^2 and the MACS at redshift greater than 0.3. We derive these counts without any adjustment, using models previously normalized to the local temperature distribution function and to the high-z (z = 0.33) TDF. We find that these models having Omega_M in the range [0.85-1.] predict counts in remarkable agreement with the observed counts in the different samples. We illustrate that this conclusion is weakly sensitive to the various ingredients of the modeling. Therefore number counts provide a robust evidence of an evolving population. A realistic flat low density model (Omega_M = 0.3), normalized to the local abundance of clusters is found to overproduce cluster abundance at high redshift (above z = 0.5) by nearly an order of magnitude. This result is in conflict with the popular concordance model. The conflict could indicate a deviation from the expected scaling of the M-T relation with redshift. • ### On the Angular Correlation Function of SZ Clusters : Extracting cosmological information from a 2D catalog(astro-ph/0302567) Oct. 24, 2003 astro-ph We discuss the angular correlation function of Sunyaev-Zel'dovich (SZ)-detected galaxy clusters as a cosmological probe. As a projection of the real-space cluster correlation function, the angular function samples the underlying SZ catalog redshift distribution. It offers a way to study cosmology and cluster evolution directly with the two-dimensional catalog, even before extensive follow-up observations, thereby facilitating the immediate scientific return from SZ surveys. As a simple illustration of the information content of the angular function, we examine its dependence on the parameter pair Om_m, sigma_8 in flat cosmologies. We discuss sources of modeling uncertainty and consider application to the future Planck SZ catalog, showing how these two parameters and the normalization of the SZ flux-mass relation can be simultaneously found when the local X-ray cluster abundance constraint is included. • ### Goodness--of--fit Statistics and CMB Data Sets(astro-ph/0305428) May 22, 2003 astro-ph Application of a Goodness--of--fit (GOF) statistic is an essential element of parameter estimation. We discuss the computation of GOF when estimating parameters from anisotropy measurements of the cosmic microwave background (CMB), and we propose two GOF statistics to be used when employing approximate band--power likelihood functions. They are based on an approximate form for the distribution of band--power estimators that requires only minimal experimental information to construct. Monte Carlo simulations of CMB experiments show that the proposed form describes the true distributions quite well. We apply these GOF statistics to current CMB anisotropy data and discuss the results. • ### Cosmological Parameter Estimation from CMB and X-ray clusters(astro-ph/0112220) Dec. 10, 2001 astro-ph We present the results of a combined analysis of cosmic microwave background (CMB) and X-ray galaxy clusters baryon fraction to deduce constraints over 6 in flationnary cosmological parameters. Such a combination is necessary for breaki ng degeneracies inherent to the CMB. • ### Searching and studying clusters with the SZ effect(astro-ph/0111211) Nov. 
11, 2001 astro-ph I discuss galaxy cluster surveys based on the Sunyaev--Zel'dovich effect and their relevance for cosmological studies. The unique aspects of cluster selection by this method are emphasized and certain issues of surveying are addressed. Finally, I briefly present prospects for upcoming surveys. • ### Simulations of Sunyaev-Zel'dovich maps and their applications(astro-ph/0109186) Sept. 12, 2001 astro-ph We describe a fast method to simulate in a semi-analytical way consistent maps of the thermal, kinetic and polarised Sunyaev-Zel'dovich effect, featuring both cluster spatial correlations and large scale velocity flows. • ### The XMM-Newton $\Omega$ Project(astro-ph/0106098) June 6, 2001 astro-ph The abundance of high-redshift galaxy clusters depends sensitively on the matter density $\OmM$ and, to a lesser extent, on the cosmological constant $\Lambda$. Measurements of this abundance therefore constrain these fundamental cosmological parameters, and in a manner independent and complementary to other methods, such as observations of the cosmic microwave background and distance measurements. Cluster abundance is best measured by the X-ray temperature function, as opposed to luminosity, because temperature and mass are tightly correlated, as demonstrated by numerical simulations. Taking advantage of the sensitivity of XMM-Newton, our Guaranteed Time program aims at measuring the temperature of the highest redshift (z>0.4) SHARC clusters, with the ultimate goal of constraining both $\OmM$ and $\Lambda$. • ### Concerning Parameter Estimation Using the Cosmic Microwave Background(astro-ph/0104366) April 23, 2001 astro-ph Most parameter constraints obtained from cosmic microwave background (CMB) anisotropy data are based on power estimates and rely on approximate likelihood functions; computational difficulties generally preclude an exact analysis based on pixel values. With the specific goal of testing this kind of approach, we have performed a complete (un-approximated) likelihood analysis combining the COBE, Saskatoon and MAX data sets. We examine in detail the ability of certain approximate techniques based on band-power estimates to recover the full likelihood constraints. The traditional $\chi^2$-method does not always find the same best-fit model as the likelihood analysis (a bias), due mainly to the false assumption of Gaussian likelihoods that makes the method overly sensitive to data outliers. Although an improvement, other approaches employing non-Gaussian flat-band likelihoods do not always faithfully reproduce the complete likelihood constraints either; not even when using the exact flat-band likelihood curves. We trace this to the neglect of spectral information by simple flat band-power estimates. A straightforward extension incorporating a local effective slope (of the power spectrum, $C_l$) provides a faithful representation of the likelihood surfaces without significantly increasing computing cost. Finally, we also demonstrate that the best-fit model to this particular data set is a {\em good fit}, or that the observations are consistent with Gaussian sky fluctuations, according to our statistic. 
• ### Cosmological Constraints from the Cosmic Microwave Background(astro-ph/0004282) April 23, 2001 astro-ph Using an approximate likelihood method adapted to band--power estimates, we analyze the ensemble of first generation cosmic microwave background anisotropy experiments to deduce constraints over a six--dimensional parameter space describing Inflation--generated adiabatic, scalar fluctuations. The basic preferences of simple Inflation scenarios are consistent with the data set: flat geometries $(\OmT \equiv 1-\Omk \sim 1)$ and a scale--invariant primeval spectrum ($n\sim 1$) are favored. Models with significant negative curvature ($\OmT < 0.7$) are eliminated, while constraints on postive curvature are less stringent. Degeneracies among the parameters prevent independent determinations of the matter density $\OmM$ and the cosmological constant $\Lambda$, and the Hubble constant $\Ho$ remains relatively unconstrained. We also find that the height of the first Doppler peak relative to the amplitude suggested by data at larger $l$ indicates a high baryon content ($\Omb h^2$), almost independently of the other parameters. Besides the overall qualitative advance expected of the next generation experiments, their improved dipole calibrations will be particularly useful for constraining the peak height. Our analysis includes a {\em Goodness--of--Fit} statistic applicable to power estimates and which indicates that the maximum likelihood model provides an acceptable fit to the data set. • ### An Approximation to the Likelihood Function for Band-Power Estimates of CMB Anisotropies(astro-ph/9903045) April 23, 2001 astro-ph Band-power estimates of cosmic microwave background fluctuations are now routinely used to place constraints on cosmological parameters. For this to be done in a rigorous fashion, the full likelihood function of band-power estimates must be employed. Even for Gaussian theories, this likelihood function is not itself Gaussian, for the simple reason that band-powers measure the {\em variance} of the random sky fluctuations. In the context of Gaussian sky fluctuations, we use an ideal situation to motivate a general form for the full likelihood function from a given experiment. This form contains only two free parameters, which can be determined if the 68% and 95% confidence intervals of the true likelihood function are known. The ansatz works remarkably well when compared to the complete likelihood function for a number of experiments. For application of this kind of approach, we suggest that in the future both 68% and 95% (and perhaps also the 99.7%) confidence intervals be given when reporting experimental results. • ### A New Local Temperature Distribution Function for X-ray Clusters: Cosmological Applications(astro-ph/9908037) Dec. 26, 2000 astro-ph (abridged) We present a new determination of the local temperature function of X-ray clusters. We use a new sample comprising fifty clusters for which temperature information is now available, making it the largest complete sample of its kind. It is therefore expected to significantly improve the estimation of the temperature distribution function of moderately hot clusters. We find that the resulting temperature function is higher than previous estimations, but agrees well with the temperature distribution function inferred from the BCS and RASS luminosity function. 
We have used this sample to constrain the amplitude of the matter fluctuations on cluster's scale of $8\sqrt[3]{\Omega_0}^{-1}h^{-1}$Mpc, assuming a mass-temperature relation based on recent numerical simulations. We find $\sigma_8 = 0.6\pm 0.02$ for an $\Omega_0 = 1$ model. Our sample provides an ideal reference at $z \sim 0$ to use in the application of the cosmological test based on the evolution of X-ray cluster abundance (Oukbir & Blanchard 1992, 1997). Using Henry's sample, we find that the abundance of clusters at $z = 0.33$ is significantly smaller, by a factor larger than 2, which shows that the EMSS sample provides strong evidence for evolution of the cluster abundance. A likelihood analysis leads to a rather high value of the mean density parameter of the universe: $\Omega =0.92 \pm 0.22$ (open case) and $\Omega =0.86 \pm 0.25$ (flat case), which is consistent with a previous, independent estimation based on the full EMSS sample by Sadat et al.(1998). Some systematic uncertainties which could alter this result are briefly discussed. • ### CMB Cosmological Parameter Estimation: Methods and Current Results(astro-ph/0004283) April 19, 2000 astro-ph The majority of present efforts to constrain cosmological parameters with cosmic microwave background (CMB) anisotropy data employ approximate likelihood functions, the time consuming nature of a complete analysis being a major obstacle. We have performed a full (unapproximated) likelihood analysis on several experiments that allows us to examine the various assumptions made in these approximate methods and to evaluate their performance. Our results indicate that care should be taken when using such approaches. With an improved approximate method, we present some constraints on cosmological parameters using the entire present CMB data set. • ### The ESO Slice Project (ESP) Galaxy Redshift Survey. VII. The Redshift and Real-Space Correlation Functions(astro-ph/9901378) Jan. 27, 1999 astro-ph We present analyses of the two-point correlation properties of the ESP galaxy redshift survey. From the redshift-space correlation function xi(s), we see positive clustering out to separations ~50/h Mpc, with a smooth break on larger scales and zero-crossing between 60 and 80/h Mpc. xi(s) is reasonably well described by a shallow power law with \gamma~1.5 between 3 and 50/h Mpc, while on smaller scales (0.2-2/h Mpc) it has a shallower slope (\gamma~ 1). We examine the full effect of redshift-space distortions through the two-dimensional correlation function xi(rp,pi), from which we project out the real-space xi(r) below 10/h Mpc. This function is well described by a power-law model (r/r_o)^{-\gamma}, with r_o=4.15^{+0.20}_{-0.21} h^{-1} Mpc and \gamma=1.67^{+0.07}_{-0.09}. Comparison to other redshift surveys shows a consistent picture in which clustering remains positive out to separations of 50/h Mpc or larger, in substantial agreement with the results obtained from angular surveys like the APM and EDSGC. Also the shape of the two-point correlation function is remarkably unanimous among these data sets, in all cases requiring more power above 5/h Mpc (a `shoulder'), than a simple extrapolation of the canonical xi(r)=(r/5)^{-1.8}. xi(s) for volume-limited subsamples shows evidence of luminosity segregation only for the most luminous sample with M_{b_J}\le -20.5. When redshift-space distortions are removed through projection of xi(rp,pi), however, a weak dependence on luminosity is seen at small separations also at fainter magnitudes. 
This effect is masked in redshift space, as the mean pairwise velocity dispersion experiences a parallel increase, basically erasing the effect of the clustering growth on xi(s). • ### Approximating the Likelihood Function of CMB Experiments(astro-ph/9810316) Oct. 20, 1998 astro-ph We discuss the problem of constraining cosmological parameters with cosmic microwave background band--power estimates. Because these latter are variances, they do not have gaussian distribution functions and, hence, the standard $\chi^2$--approach is not strictly applicable. A general purpose approximation to experimental band--power likelihood functions is proposed, which requires only limited experimental details. Comparison with the full likelihood function calculated for several experiments shows that the approximation works well. • ### Constraints on Cosmological Parameters from Current CMB Data(astro-ph/9810318) Oct. 20, 1998 astro-ph We discuss the constraints one can place on cosmological parameters using current cosmic microwave background data. A standard $\chi^2$--minimization over band--power estimates is first presented, followed by a discussion of the more correct likelihood approach. We propose an approximation to the complete likelihood function of an arbitrary experiment requiring only limited and easily found information about the observations. Examination of both open models -- $(\Omega,h,Q,n)$ -- and flat models ($\Omega+\Omega_\Lambda=1$) -- $(\Omega,\Omega_b,h,Q,n)$ -- leaves one rather robust result: models with small curvature are favored.
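As standard background for the likelihood-approximation abstracts above (a textbook property, not a result of these papers): even for Gaussian sky fluctuations, an ideal full-sky power estimate at a single multipole is a sum of squared Gaussian amplitudes,

$$\hat{C}_\ell = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} |a_{\ell m}|^2 ,$$

so $(2\ell+1)\,\hat{C}_\ell / C_\ell$ follows a $\chi^2$ distribution with $2\ell+1$ degrees of freedom rather than a Gaussian. This skewed distribution, which only approaches Gaussianity at high $\ell$ or for wide bands, is the reason the abstracts stress that band-power estimates "do not have gaussian distribution functions" and that the standard $\chi^2$ fitting approach is not strictly applicable.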
# Thread: For the life of me I can't resolve the correct inversion of this simple thing...

1. ## For the life of me I can't resolve the correct inversion of this simple thing...

I have a simple formula, but I also need the inverse of it. Basically, it is: y / x * (y / a * b)

An example of numbers would be 25000 / 52000 * (25000 / 260000 * 65000). *(this is exactly what I need for y <= x)

When I try to invert it, I get something like this, which is incorrect..: (1-x / y+1) * (y / a * b)

So an example would be (1-52000/75000+1)*(75000 / 260000 * 65000) (and this is too high for y > x)

Thanks, and sorry to trouble you. I just spent two days trying to resolve this and was (some freaking how) unable to. I must have forgotten a lot of math.....

2. ## Re: For the life of me I can't resolve the correct inversion of this simple thing...

Originally Posted by jeremy000000

I have a simple formula, but I also need the inverse of it. Basically, it is: y / x * (y / a * b) An example of numbers would be 25000 / 52000 * (25000 / 260000 * 65000). *(this is exactly what I need for y <= x) When I try to invert it, I get something like this, which is incorrect..: (1-x / y+1) * (y / a * b) So an example would be (1-52000/75000+1)*(75000 / 260000 * 65000) (and this is too high for y > x) Thanks, and sorry to trouble you. I just spent two days trying to resolve this and was (some freaking how) unable to. I must have forgotten a lot of math.....

We need more information. What is this a formula for? And how did you "invert" that? None of this makes any sense.

-Dan

3. ## Re: For the life of me I can't resolve the correct inversion of this simple thing...

Originally Posted by jeremy000000

I have a simple formula, but I also need the inverse of it. Basically, it is: y / x * (y / a * b)

It is not clear what you mean by the inverse of this formula. Also note that $y/a*b=\frac{y}{a}\cdot b$ and $y/(a*b)=\frac{y}{ab}.$

4. ## Re: For the life of me I can't resolve the correct inversion of this simple thing...

The correct form would be y/a * b, instead of y/(a*b)..... What y/a*b represents is just another way to do an absolute flat tax (like what the US wants to move to), only now we are taking a snap-shot of national income, personal income, and the amount of tax we need, instead of just going 'yeah - 25%'... This is like trying to figure out a progressive tax code without resorting to 'tax brackets'.

The variables are as follows: y = income (10,000-100,000 ex), a = national income (260000 ex), b = necessary national tax (65,000 ex), and x = flat average (52,000 ex - 260000 / 5 people).

So what I want to do is give a 0-100% of all tax for incomes 0-52,000, and 100%-200% for incomes 52,000+. Comparing to actual numbers, people can do the 0-100% part, and the equation holds: income / 260,000 * 65,000 = .25 (avg tax rate if everyone were making 52,000). y / x then becomes the 'adjustment' to that 'avg "tax"'... I need to be able to invert it so instead of 0-1, it is 1-2...

*note* for any income, this does not come out to .25 if you do y/(a*b).... the actual one we need is y/a*b

Then I want to apply the 0-200% rule. For y <= x, this is what I need. But I just can't get y > x right - I always end up with too much. It's just an adjustment to the .25 flat-rate... So for the lower incomes, it's 0-100% of .25. For higher incomes, it should be 100-200% of .25. The problem is, the way I had it, the higher incomes overall pay too much "tax" and the national tax then doesn't equal 65,000.
As a simple example, assume we have 5 'groups' that we use to represent "5 people", and their incomes are 10,000, 25,000, 50,000, 75,000 and 100,000. The total is 260,000, and we need 65,000 in funding. We have to arrive at 65,000 regardless, but instead of a flat .25, we want to make it so everyone pulls their fair 'load' based on deviation in incomes from the flat average of 52,000.

5. ## Re: For the life of me I can't resolve the correct inversion of this simple thing...

I guess a little background on why this poc is necessary is this: Right now, congress wants a flat 25% tax rate. What that would do, if you did actual numbers with 50% of the population making under $30,000/yr, the median middle class making $41,000 and the median lower class making $21,000, is that the median lower class would immediately fall to extreme poverty by today's measure and the median middle class falls to the poverty line by today's measure - all because of a flat tax rate. If GDP is stalling now, it's sure to vomit at this 'tax code'.

So then the answer becomes how do we fix that once it goes wrong? And here is the reason why I am trying to work out this problem. It should be mathematically feasible for any income earner in relation to the flat-average income. Once we know how to do that, we can make further adjustments on other measurements such as 'median income'. In effect, a proper continuous tax code would reverse the problems in a flat tax code, without re-introducing a progressive tax or tax brackets......

Thanks, (sorry this algebraic problem was moved to business. It's more math than business, but whatever...)

6. ## Re: For the life of me I can't resolve the correct inversion of this simple thing...

I do not know of any politically serious proposal for a "flat tax rate" that implies what you seem to think is implied. Virtually all such proposals continue to be progressive in this sense: tax is applied at a flat percentage above some threshold. It reduces the number of brackets to two, one being a zero tax bracket. Furthermore, most of these proposals involve the elimination of deductions that benefit primarily higher income families (such as the special rate on income from capital gains and the absence of tax on interest from obligations of states and municipalities.) I doubt anyone except a specialist can figure out the likely initial effects of the elimination of many deductions.

But if I understand what you are trying to do on a simple basis, you need to be looking at both the level of income on which no tax is levied and the flat rate applied above that level. You could analyze as an example the actual proposal made by Steve Forbes, who called for a 17% flat rate on income, with no deductions except a general deduction of 46,000 per household. So households with income of 46,000 or less would pay no federal income tax at all. A household with an income of 100,000 would pay about 9% of their income in federal income tax, and a household with an income of 200,000 would pay about 13%. It is in fact a progressive tax.

The effective tax rate on the Forbes proposal would be $e = \dfrac{(I - 46000) \times 0.17}{I}$, where $I$ is annual income. A general formula would be

$I \le D \implies e = 0; \qquad I > D \implies e = \dfrac{(I - D) \times f}{I},$

where e = effective tax rate, I = annual household income, D = income not subject to tax, and f = flat tax rate.

7. ## Re: For the life of me I can't resolve the correct inversion of this simple thing...

Thanks Jeff!
I can figure the rest of what I wanted out from here, with just the information you gave. I will apply these to a realistic sample distribution on the web to see how it does.

On a further note, if you want to know where I obtained the 25% flat tax rate, which was on the GOP agenda after the 2012 elections, started by Rick Perry as a 20% flat tax idea: Rick Perry's Flat Tax: A Bold Challenge to 9-9-9 | The Fiscal Times

That idea has been floating around the GOP ever since. I can also see what you are saying that I missed, which is a newer version of GOP tax overhaul: House Republicans unveil tax reform plan -- does it stand a chance? | Fox News

However, I have to wonder, under this new tax code on the second link, if the government will still get the same level of funding when it has dropped almost 3 trillion in taxes by 10%, giving us probably an additional 700 billion dollar deficit beyond today's. However, it does not limit taxes for poverty programs, which is essential in maintaining the viability of those programs. Here's what I mean:

1. If I can solve this further with real numbers on realistic data, I can solve both healthcare (medicare, medicaid, private insurance, public insurance) with a national one, and I can give the government each year exactly the funding it budgeted for that year. This at least solves two of our biggest fiscal problems. Then we need to replace the unsustainable social security as a retirement option, which, if we can just give ourselves 65 years before it is put into action, we can do without any interruption. Solving the first two problems gives us the fiscal footing we need to be able to solve that last one (buys us time). Then we wait for the education bubble to pop and only then step in. It may fix itself by doing nothing about it at all, but I would hate to end up with only trade schools. At that time, extra money could be used to give a budget to state colleges just to keep them afloat during collapse.

All the economic bubbles burst within 15 years, and by my measure the two biggest are all of healthcare (including medicare and medicaid, hospitals, doctors, and pharma - leaving us with clinics and generic medication), and then higher education. They are bubbles not because of 'perceived value', but rather because we are pumping money that isn't realistic into them, allowing them to think we still have the ability to pay for it, when in fact we already cannot. In 15 years, a person newly attending any 4 year degree will not be able to mathematically pay it back (going by inflation, wage stagnation, and average student loan amount) at all. They will default almost no matter what in only a year. Also in 15 years, that is when medicare and medicaid will no longer be acceptable, and at this time public and private insurance cannot be paid for, again going by income vs. necessity costs. Other sectors are similar. Want to know why? Simply unsustainable growth patterns. We have a lot of them, but these are the big ones. If we solve these, the damage is dramatically reduced. And it starts with properly and realistically addressing the national budget and budgets of these areas.

8. ## Re: For the life of me I can't resolve the correct inversion of this simple thing...

I can already see a flaw in Forbes' tax code. Assume the nation's poverty level is 10,000. Assume 5 people in our little 'economy' make incomes. The distribution is the same as I stated above: 10,000, 25,000, 50,000, 75,000, and 100,000. The 'national income' is 260,000.
The nation has a budget of 25% of national income (3.5 trillion in 2013 dollars out of 14 trillion in income), and it needs 80% of that from income tax. In effect, it needs 20% of national income as an income tax. Using Steve Forbes' approach, we would need a flat tax of 20% (overdoing it just to show you) to give us the needed income tax we budgeted. So here's what this comes out to with the Forbes-style tax:

- 10,000: e = 0, tax = 0
- 25,000: e = .12, tax = 3,000
- 50,000: e = .16, tax = 8,000
- 75,000: e = .17, tax = 13,000
- 100,000: e = .18, tax = 18,000

Total income tax for the year = 42,000. Expected and needed (target): 52,000.

If you continue with trying to come up with the exact target flat-tax rate according to Forbes, you'll not only see that it needs to change every year according to total national income and how much is budgeted, but you can also notice that it is only progressive up to 20%. At least it gives a tax break to the poor, I guess, but if you worked out the realistic numbers, you would see that this creates a similar tax scenario that is not exact (how do you know who pays what?), has to change every year (the effective tax rate needs to be variable every year in this scenario, which becomes a guessing game - 20%? 25%? 24%? Who knows?), and is relatively unpredictable (how can both the government and the individual at tax season have any reasonable expectation?). However, that doesn't mean it isn't at least a start. I'll show you what Forbes' plan failed to accomplish....
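To make the arithmetic in the last post easy to check, and to see what flat rate would actually hit the 52,000 target under the same 10,000 deduction, here is a small sketch applying the effective-rate formula from post #6 to the five-person example (the function and variable names are ours, for illustration only):

    # Effective-rate rule from post #6: no tax up to D, flat rate f on income above D.
    def tax_owed(income, deduction, flat_rate):
        return max(0.0, income - deduction) * flat_rate

    incomes = [10_000, 25_000, 50_000, 75_000, 100_000]  # the five-person economy above
    deduction = 10_000   # poverty level used as the untaxed amount in post #8
    flat_rate = 0.20     # the 20% rate tried in post #8
    target = 52_000      # 20% of the 260,000 national income

    total = sum(tax_owed(i, deduction, flat_rate) for i in incomes)
    print(total)         # 42000.0 -- short of the target, as post #8 finds

    # Rate that would exactly hit the target with the same deduction:
    taxable = sum(max(0, i - deduction) for i in incomes)
    print(target / taxable)   # ~0.2476, i.e. about a 24.8% flat rate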
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript. # The quantum Zeno and anti-Zeno effects with strong system-environment coupling ### Subjects To date, studies of the quantum Zeno and anti-Zeno effects focus on quantum systems that are weakly interacting with their environment. In this paper, we investigate what happens to a quantum system under the action of repeated measurements if the quantum system is strongly interacting with its environment. We consider as the quantum system a single two-level system coupled strongly to a collection of harmonic oscillators. A so-called polaron transformation is then used to make the problem in the strong system-environment coupling regime tractable. We find that the strong coupling case exhibits quantitative and qualitative differences as compared with the weak coupling case. In particular, the effective decay rate does not depend linearly on the spectral density of the environment. This then means that, in the strong coupling regime that we investigate, increasing the system-environment coupling strength can actually decrease the effective decay rate. We also consider a collection of two-level atoms coupled strongly with a common environment. In this case, we find that there are further differences between the weak and strong coupling cases since the two-level atoms can now indirectly interact with one another due to the common environment. ## Introduction By repeatedly measuring a quantum system very frequently, the evolution of the quantum system can be slowed down, an effect that has been dubbed as the Quantum Zeno effect (QZE)1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22. On the other hand, if the quantum system is measured repeatedly not very rapidly, the measurements can actually speed up the temporal evolution. This effect, the opposite of the QZE, is known as the Quantum anti-Zeno effect (QAZE)23,24,25,26,27,28. Both the QZE and the QAZE have attracted tremendous theoretical and experimental interest due to their great importance for emerging quantum technologies as well as their fundamental theoretical interest. However, it is worth noting that the emphasis in studies performed on the QZE and the QAZE to date has been on the population decay of quantum systems. In these studies, the quantum system is prepared in an excited state, and then the system is repeatedly checked to see if the system is still in the excited state or not23,24,25,26,27,28,29,30,31,32,33,34. It is well-known then that the decay rate of the quantum system depends on the overlap of the spectral density of the environment and a measurement-induced level width23. Depending on this overlap, decreasing the measurement interval can lead to a decrease (the QZE) or an increase (the QAZE) of the decay rate. While studies of the QZE and the QAZE performed to date by and large focus on the population decay model where only decay takes place, we also know from the study of open quantum systems that, in general, quantum systems interacting with their environment also undergo dephasing. To this end, the QZE and the QAZE were studied for the exactly solvable pure dephasing model in ref. 
35 where it was shown that the QZE and the QAZE are significantly different for the pure dephasing case as compared with the population decay case. This study was then extended to arbitrary system-environment models in ref. 36 where a general framework for calculating the effective decay rate of the system for an arbitrary system-environment model was presented. It was found that the effective decay rate can be written as an overlap integral of the spectral density of the environment and an effective ‘filter function’ that depends on the system-environment model at hand, the measurement interval, and the measurement being repeatedly performed. This general formalism was then used to study the QZE and the QAZE when both dephasing and population decay are present. For example, repeated measurements for the paradigmatic spin-boson model37 were considered and it was shown that the presence of both population decay and dephasing make the results differ considerably both quantitatively and qualitatively as compared to the pure population decay case. It should be pointed out, however, that the results presented in ref. 36 were derived under the assumption that the system-environment coupling is weak. This is consistent with studies performed for the population decay models, where the effective decay rate can be derived using time-dependent perturbation theory24. On the other hand, the behavior of a quantum system, subjected to repeated measurements, that is interacting strongly with its environment is not well understood38. For instance, one could ask whether or not the effective decay rate is still an overlap integral of the spectral density function and a ‘filter’ function. This paper intends to answer precisely such questions by looking at what happens to the spin-boson model under the action of repeated measurements if the central two-level system is interacting strongly with a surrounding environment of harmonic oscillators. Since the system-environment coupling is strong, the system-environment interaction cannot be treated perturbatively, and thus the treatment given in ref. 36 is no longer applicable. Our strategy then is to perform a unitary transformation, known as the polaron transformation, on the system-environment Hamiltonian39, 40, 42,43,44, 46. One then finds that the system and the environment can end up interacting weakly in this new ‘polaron’ frame. Perturbation theory can then be applied and the effect of repeated measurements is analyzed. We find that the analysis of the QZE and QAZE are in general very different compared to the population decay case. For example, it is clear that for the usual population decay case, increasing the system-environment strength increases the effective decay rate. However, for the strong system-environment regime that we investigate, we find that increasing the system-environment coupling regime can actually decrease the effective decay rate. We also study the QZE and the QAZE for more than one two-level system interacting with a common environment. For the weak coupling regime, the effective decay rate is directly proportional to the number of two-level systems coupled to the common environment36. On the other hand, for the strong system-environment coupling regime, we find that the effective decay rate for more than one two-level system is very different compared to the single two-level system case. 
The indirect interaction between the two-level systems due to their interaction with a common environment now plays a very important role, and the effective decay rate is no longer simply proportional to the number of two-level systems coupled to the common environment. ## Results ### Spin-boson model with strong system-environment coupling We start with the paradigmatic spin-boson model Hamiltonian37, 47, 48 which we write as (we set ħ = 1 throughout) $${H}_{L}=\frac{\varepsilon }{2}{\sigma }_{z}+\frac{{\rm{\Delta }}}{2}{\sigma }_{x}+\sum _{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}+{\sigma }_{z}\sum _{k}({g}_{k}^{\ast }{b}_{k}+{g}_{k}{b}_{k}^{\dagger }),$$ (1) where the system Hamiltonian is $${H}_{S,L}=\frac{\varepsilon }{2}{\sigma }_{z}+\frac{{\rm{\Delta }}}{2}{\sigma }_{x}$$, the environment Hamiltonian is $${H}_{B}={\sum }_{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}$$, and the system-environment coupling is $${V}_{L}={\sigma }_{z}{\sum }_{k}({g}_{k}^{\ast }{b}_{k}+{g}_{k}{b}_{k}^{\dagger })$$. ε is the energy level difference of the two-level system, Δ is the tunneling amplitude, ω k are the frequencies of the harmonic oscillators, b k and $${b}_{k}^{\dagger }$$ are the annihilation and creation operators for the harmonic oscillators, and σ x and σ z are the standard Pauli operators. The ‘L’ denotes the ‘lab’ frame. If the system-environment coupling is strong, we cannot treat the system-environment coupling perturbatively. Furthermore, the system-environment correlation effects are significant as well in general. To motivate our basic approach in this strong coupling regime, we note that if the system tunneling amplitude is negligible and the initial system state is an eigenstate of σ z , then, even though the system and the environment are strongly interacting, the evolution of the system state is negligible. This then means that we should look to unitarily transform H L such that the effective system-environment coupling contains the tunneling amplitude Δ. This unitary transformation is provided by the ‘polaron’ transformation, whereby the system-environment Hamiltonian in this new ‘polaron’ frame becomes $$H={e}^{\chi {\sigma }_{z}\mathrm{/2}}{H}_{L}{e}^{-\chi {\sigma }_{z}/2}$$, where $$\chi ={\sum }_{k}[\frac{2{g}_{k}}{{\omega }_{k}}{b}_{k}^{\dagger }-\frac{2{g}_{k}^{\ast }}{{\omega }_{k}}{b}_{k}]$$ 39,40,41,42,43,44,45,46. The system-environment Hamiltonian in the polaron frame is then H = H S  + H B  + V, where $${H}_{S}=\frac{\varepsilon }{2}{\sigma }_{z}$$, $${H}_{B}={\sum }_{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}$$, and $$V=\frac{{\rm{\Delta }}}{2}[{\sigma }_{+}X+{\sigma }_{-}{X}^{\dagger }]$$, with $$X={e}^{\chi }$$ (see Methods for details). Now, if the tunneling amplitude is small, we can use time-dependent perturbation theory, treating V as the perturbation. This is the key idea to deal with the strong system-environment coupling regime. Although the system and the environment are strongly interacting, in the polaron frame, they are effectively interacting weakly. Let us now use this fact in order to calculate the survival probability, and thereby the effective decay rate. For concreteness, we assume that the initial state prepared is |↑〉, where σ z |↑〉 = |↑〉. In other words, we consider the same initial state as that considered in the analysis of the usual population decay model23, 24. At time t = 0, we prepare the system state |↑〉, and we subsequently perform measurements with time interval τ to check if the system state is still |↑〉 or not. 
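To make the protocol concrete, the following toy sketch (not from the paper) couples the two-level system to a single harmonic mode instead of the full oscillator bath of Eq. (1), starts from a simple product state |↑〉 ⊗ |0〉 for convenience, and repeats the evolve-and-project cycle; all numerical values are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Toy illustration of the repeated-measurement protocol: a two-level system
# coupled to ONE harmonic mode (not the full bath of Eq. (1)); all parameters
# below are illustrative choices, not values used in the paper.
N = 15                       # Fock-space truncation of the single mode
eps, Delta, omega, g = 1.0, 0.1, 1.0, 0.5
tau, n_meas = 0.5, 20        # measurement interval and number of measurements

# Pauli and bosonic operators
sz = np.diag([1.0, -1.0]); sx = np.array([[0.0, 1.0], [1.0, 0.0]])
a = np.diag(np.sqrt(np.arange(1, N)), 1)          # annihilation operator, a|n> = sqrt(n)|n-1>
I2, IN = np.eye(2), np.eye(N)

H = (0.5 * eps * np.kron(sz, IN) + 0.5 * Delta * np.kron(sx, IN)
     + omega * np.kron(I2, a.conj().T @ a)
     + g * np.kron(sz, a + a.conj().T))           # lab-frame H_L with a single mode

U = expm(-1j * H * tau)                           # free evolution over one interval
P_up = np.kron(np.diag([1.0, 0.0]), IN)           # projector onto |up> (any mode state)

psi = np.zeros(2 * N, dtype=complex)
psi[0] = 1.0                                      # |up> x |0>: product state, for simplicity

survival = 1.0
for n in range(1, n_meas + 1):
    psi = U @ psi                                 # evolve for time tau
    p = np.vdot(psi, P_up @ psi).real             # probability of still finding |up>
    survival *= p                                 # survival after n successful measurements
    psi = (P_up @ psi) / np.sqrt(p)               # renormalised post-measurement state

gamma_eff = -np.log(survival) / (n_meas * tau)    # effective rate from S = exp(-Gamma N tau)
print(f"S(N tau) = {survival:.4f},  effective decay rate = {gamma_eff:.4f}")
```

Shrinking τ in this toy and watching how the resulting effective decay rate responds already gives a hands-on feel for the Zeno and anti-Zeno regimes analyzed below.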
The survival probability after time interval τ is then s(τ) = Tr s,B [|↑〉〈↑|ρ L(τ)], where $${\rho }_{L}(\tau )$$ is the combined density matrix of the system and the environment at time τ just before the projective measurement. Then, $$s(\tau )={{\rm{Tr}}}_{S,B}[|\uparrow \rangle \langle \uparrow |{e}^{-i{H}_{L}\tau }{\rho }_{L}\mathrm{(0)}{e}^{i{H}_{L}\tau }]\mathrm{.}$$ (2) It is important to note that the initial state that we have prepared cannot simply be taken as the usual product state |↑〉〈↑|$$\otimes {e}^{-\beta {H}_{B}}/{Z}_{B}$$, with $${Z}_{B}={{\rm{Tr}}}_{B}[{e}^{-\beta {H}_{B}}]$$ since the system and the environment are strongly interacting and consquently there will be significant initial system-environment correlations49, 50. Rather, the initial state that we should consider is $${\rho }_{L}\mathrm{(0)}={P}_{\uparrow }{e}^{-\beta {H}_{L}}{P}_{\uparrow }/Z$$, where P  = |↑〉〈↑|, and $$Z={{\rm{Tr}}}_{S,B}[{P}_{\uparrow }{e}^{-\beta {H}_{L}}]$$. Keeping this in mind, we use the polaron transformation to cast the expression for the survival probability $$s(\tau )$$ after the measurement at time τ in terms of quantities in the polaron frame. Doing so leads us to $$s(\tau )={{\rm{Tr}}}_{S,B}[|\,\uparrow \,\rangle \langle \,\uparrow \,|{e}^{-iH\tau }{P}_{\uparrow }\frac{{e}^{-\beta H}}{Z}{P}_{\uparrow }{e}^{iH\tau }],$$ (3) with the Hamiltonian H now in the polaron frame. Now, for small Δ, to a first approximation, the initial state in the polaron frame can be written as $${P}_{\uparrow }\otimes {e}^{-\beta {H}_{B}}/{Z}_{B}$$. This is a similar approximation as the usual assumption that the initial system-environment state is $${\rho }_{S}\mathrm{(0)}\otimes {\rho }_{B}$$ since, in the polaron frame, the system and the environment are weakly interacting. We thus get $$s(\tau )={{\rm{Tr}}}_{S,B}[|\uparrow \rangle \langle \uparrow |{e}^{-iH\tau }(|\uparrow \rangle \langle \uparrow |\otimes {e}^{-\beta {H}_{B}}/{Z}_{B}){e}^{iH\tau }]={{\rm{Tr}}}_{S}[|\uparrow \rangle \langle \uparrow |{\rho }_{S}(\tau )],$$ (4) where $${\rho }_{S}(\tau )={{\rm{Tr}}}_{B}[{e}^{-iH\tau }(|\uparrow \rangle \langle \uparrow |\otimes {e}^{-\beta {H}_{B}}/{Z}_{B}){e}^{iH\tau }]$$. Our objective then is to find $${\rho }_{S}(\tau )$$, given the initial system-environment state ρ(0) = |↑〉〈↑| $$\otimes \,{e}^{-\beta {H}_{B}}/{Z}_{B}$$. 
We find that (see the Methods section) $$\begin{array}{rcl}{\rho }_{S}(\tau ) & = & {U}_{S}(\tau )({\rho }_{S}\mathrm{(0)}+i\sum _{\mu }{\int }_{0}^{\tau }d{t}_{1}[{\rho }_{S}\mathrm{(0),}{\tilde{F}}_{\mu }({t}_{1})]{\langle {\tilde{B}}_{\mu }({t}_{1})\rangle }_{B}\\ & & +\,\sum _{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}\{{C}_{\mu \nu }({t}_{1},{t}_{2})[{\tilde{F}}_{\nu }({t}_{2}){\rho }_{S}\mathrm{(0),}{\tilde{F}}_{\mu }({t}_{1})]+{\rm{h}}{\rm{.c}}{\rm{.}}\}){U}_{S}^{\dagger }(\tau \mathrm{).}\end{array}$$ (5) Here $${U}_{S}(\tau )={e}^{-i{H}_{S}\tau }$$, $${F}_{1}=\frac{{\rm{\Delta }}}{2}{\sigma }_{+}$$, $${B}_{1}=X$$, $${F}_{2}=\frac{{\rm{\Delta }}}{2}{\sigma }_{-}$$, $${B}_{2}={X}^{\dagger }$$, $${\tilde{F}}_{\mu }(t)={U}_{S}^{\dagger }(t){F}_{\mu }{U}_{S}(t)$$, $${\tilde{B}}_{\mu }(t)={U}_{B}^{\dagger }(t){B}_{\mu }{U}_{B}(t)$$ with $${U}_{B}(t)={e}^{-i{H}_{B}t}$$, $${\langle \ldots \rangle }_{B}={{\rm{Tr}}}_{B}[{\rho }_{B}(\ldots )]$$ where $${{\rm{Tr}}}_{B}$$ denotes taking trace over the environment, the environment correlation functions are defined as $${C}_{\mu \nu }({t}_{1},{t}_{2})={\langle {\tilde{B}}_{\mu }({t}_{1}){\tilde{B}}_{\nu }({t}_{2})\rangle }_{B}$$, and h.c. denotes the hermitian conjugate. Now, since the system-environment coupling in the polaron frame is weak, we can neglect the build up of correlations between the system and the environment. Thus, we can write the survival probability after time $$t=N\tau$$, where N is the number of measurements performed after time t = 0, as $$S(t=N\tau )={[s(\tau )]}^{N}\equiv {e}^{-{\rm{\Gamma }}(\tau )N\tau }$$, thereby defining the effective decay rate $${\rm{\Gamma }}(\tau )$$. It then follows that $${\rm{\Gamma }}(\tau )=-\frac{1}{\tau }\,\mathrm{ln}\,s(\tau )$$. Since we have the system density matrix in the polaron frame, we can work out the survival probability $$s(\tau )$$ and hence the effective decay rate $${\rm{\Gamma }}(\tau )$$. The result is that (see the Methods section for details) $${\rm{\Gamma }}(\tau )=\frac{{{\rm{\Delta }}}^{2}}{2\tau }{\int }_{0}^{\tau }dt{\int }_{0}^{t}dt^{\prime} {e}^{-{{\rm{\Phi }}}_{R}(t^{\prime} )}\,\cos [\varepsilon t^{\prime} -{{\rm{\Phi }}}_{I}(t^{\prime} )],$$ (6) where $${{\rm{\Phi }}}_{R}(t)={\int }_{0}^{\infty }\,d\omega \,J(\omega )\frac{1\,-\,\cos (\omega t)}{{\omega }^{2}}\,\coth (\frac{\beta \omega }{2}),$$ (7) $${{\rm{\Phi }}}_{I}(t)={\int }_{0}^{\infty }\,d\omega \,J(\omega )\frac{\sin (\omega t)}{{\omega }^{2}},$$ (8) and the spectral density of the environment has been introduced as $${\sum }_{k}{|{g}_{k}|}^{2}(\ldots )\to {\int }_{0}^{\infty }d\omega J(\omega )(\ldots )$$. At this point, it is useful to compare this expression for the effective decay rate for the case of strong system-environment coupling with the case of the usual population decay model where the effective decay rate is $${\rm{\Gamma }}(\tau )=\tau {\int }_{0}^{\infty }d\omega \,J(\omega ){{\rm{sinc}}}^{2}[\frac{(\varepsilon -\omega )\tau }{2}]$$ 23, 24. It should be clear that for the strong system-environment coupling case, the effective decay rate given by Eq. (6) has a very different qualitative behavior. In particular, the effective decay rate can no longer be regarded as simply an overlap integral of the spectral density of the environment with a sinc-squared function. Rather, the effective decay rate now has a very prominent non-linear dependence on the spectral density, leading to very different behavior as compared with the population decay case. 
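Equation (6) is straightforward to evaluate numerically once a spectral density is chosen. As a minimal sketch, assume an Ohmic spectral density J(ω) = Gω e^(−ω/ω_c) in the zero-temperature limit (coth → 1), for which Eqs. (7) and (8) reduce to the closed forms Φ_R(t) = (G/2) ln(1 + ω_c²t²) and Φ_I(t) = G tan⁻¹(ω_c t); the parameter values below are illustrative only.

```python
import numpy as np
from scipy.integrate import dblquad

# Evaluate the effective decay rate of Eq. (6) for an Ohmic spectral density
# J(w) = G * w * exp(-w / w_c) in the zero-temperature limit (coth -> 1),
# where Eqs. (7)-(8) reduce to closed forms. Parameter values are illustrative.
eps   = 1.0    # two-level splitting epsilon
Delta = 0.05   # tunneling amplitude (must be small for the perturbative result)
w_c   = 10.0   # cutoff frequency
G     = 2.0    # dimensionless coupling strength

def phi_R(t):
    return 0.5 * G * np.log(1.0 + (w_c * t) ** 2)

def phi_I(t):
    return G * np.arctan(w_c * t)

def decay_rate(tau):
    """Gamma(tau) of Eq. (6): a double time integral over 0 < t' < t < tau."""
    integrand = lambda tp, t: np.exp(-phi_R(tp)) * np.cos(eps * tp - phi_I(tp))
    val, _ = dblquad(integrand, 0.0, tau, lambda t: 0.0, lambda t: t)
    return Delta ** 2 / (2.0 * tau) * val

for tau in (0.05, 0.1, 0.2, 0.5, 1.0):
    print(f"tau = {tau:5.2f}   Gamma(tau) = {decay_rate(tau):.3e}")
```

Sweeping τ with this function for a few values of G makes the trends discussed next easy to reproduce.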
For example, as the system-environment coupling strength increases, $${{\rm{\Phi }}}_{R}(t)$$ increases, and thus we expect Γ(τ) to decrease. To make this claim concrete, let us model the spectral density as $$J(\omega )=G{\omega }^{s}{\omega }_{c}^{1-s}{e}^{-\omega /{\omega }_{c}}$$, where G is a dimensionless parameter characterizing the system-environment coupling strength, ω c is the cutoff frequency, and s is the Ohmicity parameter48. For concreteness, we look at the Ohmic case (s = 1). In this case, $${{\rm{\Phi }}}_{R}(t)=\frac{G}{2}\,\mathrm{ln}(1+{\omega }_{c}^{2}{t}^{2})$$, while $${{\rm{\Phi }}}_{I}(t)=G{\tan }^{-1}({\omega }_{c}t)$$, leading to $${\rm{\Gamma }}(\tau )=\frac{{{\rm{\Delta }}}^{2}}{2\tau }{\int }_{0}^{\tau }dt{\int }_{0}^{t}dt^{\prime} \frac{\cos [\varepsilon t^{\prime} -G{\tan }^{-1}({\omega }_{c}t^{\prime} )]}{{\mathrm{(1}+{\omega }_{c}^{2}t{^{\prime} }^{2})}^{G\mathrm{/2}}}.$$ (9) The double integral can be worked out numerically. Results are shown in Fig. 1(a) for different system-environment coupling strengths G. For the strong system-environment regime that we are dealing with, it is clear that increasing the system-environment coupling strength G actually decreases the effective decay rate. This is in contrast with what happens in the weak system-environment regime for the paradigmatic population decay model [see Fig. 1(b)]. Here it is clear that increasing the system-environment coupling strength increases the effective decay rate as expected. It should also be noted that the behaviour of Γ(τ) as a function of τ allows us to identify the Zeno and anti-Zeno regimes. One approach is to simply say that if Γ(τ) decreases when τ decreases, we are in the Zeno regime, while if Γ(τ) increases if τ decreases, then we are in the anti-Zeno regime23, 30, 33, 35. From Fig. 1(b), it should also be noted that increasing the coupling strength does not change the qualitative behavior of the Zeno to anti-Zeno transition, but for the strong coupling regime [see Fig. 1(a)], while we only observe the Zeno effect for G = 1, both the Zeno and anti-Zeno effects are observed for G = 2.5. Similarly, as shown in Fig. 2(a), increasing the cutoff frequency for the strong coupling case decreases the effective decay rate, but the opposite behaviour is observed for the weak coupling case [see Fig. 2(b)]. In our treatment until now, we have considered the change in the system state due to the tunneling term. This tunneling term, due to its presence in $${H}_{S,L}=\frac{\varepsilon }{2}{\sigma }_{z}+\frac{{\rm{\Delta }}}{2}{\sigma }_{x}$$, leads to the system state changing even if the system and the environment are not coupled to each other. Thus, an alternative way to quantify the effective decay rate would be to remove the evolution due to the system Hamiltonian (in the ‘lab’ frame) $${H}_{S,L}$$ before performing each measurement since what we are really interested in is the change in the system state due to the system-environment interaction. A similar approach has been followed in refs 35, 36 and 51 Therefore, we now derive an expression for the effective decay rate of the system state when, just before each measurement, we remove the system evolution due to $${H}_{S,L}$$. 
The survival probability, after one measurement, is now (starting from the state |↑〉) $$s(\tau )={{\rm{Tr}}}_{S,B}[(|\uparrow \rangle \langle \uparrow |){e}^{i{H}_{S,L}\tau }{e}^{-i{H}_{L}\tau }{\rho }_{{\rm{L}}}\mathrm{(0)}{e}^{i{H}_{L}\tau }{e}^{-i{H}_{S,L}\tau }].$$ (10) Notice now the presence of $${e}^{i{H}_{S,L}\tau }$$ and $${e}^{-i{H}_{S,L}\tau }$$ which remove the evolution of the system due to the system Hamiltonian before performing the measurement. Once again transforming to the polaron frame, we obtain $$s(\tau )=1-{{\rm{Tr}}}_{S,B}[(|\downarrow \rangle \langle \downarrow |){e}^{i{H}_{S,P}\tau }{e}^{-iH\tau }(|\uparrow \rangle \langle \uparrow |\otimes {\rho }_{B}){e}^{iH\tau }{e}^{-i{H}_{S,P}\tau }],$$ (11) where $${H}_{S,P}=\frac{\varepsilon }{2}{\sigma }_{z}+\frac{{\rm{\Delta }}}{2}({\sigma }_{+}X+{\sigma }_{-}{X}^{\dagger })$$ and $${\rho }_{B}={e}^{-\beta {H}_{B}}/{Z}_{B}$$. Since we are assuming that the tunneling amplitude is small, the unitary operator $${e}^{-i{H}_{S,P}\tau }$$ can be expanded as a perturbation series. At the same time, $${e}^{-iH\tau }$$ can also expanded as a perturbation series. Keeping terms to second order in the tunneling amplitude (see the Methods section), we find that now the modified decay rate Γ n (τ) is $${{\rm{\Gamma }}}_{n}(\tau )={\rm{\Gamma }}(\tau )+{{\rm{\Gamma }}}_{{\rm{mod}}}(\tau ),$$ (12) where the modification to the previous decay rate is $$\begin{array}{rcl}{{\rm{\Gamma }}}_{{\rm{mod}}}(\tau ) & = & \frac{{{\rm{\Delta }}}^{2}}{\tau }\{\frac{1}{{\varepsilon }^{2}}{\sin }^{2}(\frac{\varepsilon \tau }{2}){e}^{-{{\rm{\Phi }}}_{R}\mathrm{(0)}}{e}^{-i{{\rm{\Phi }}}_{I}\mathrm{(0)}}\\ & & -\,\frac{1}{\varepsilon }\,\sin (\frac{\varepsilon \tau }{2}){\int }_{0}^{\tau }dt{e}^{-{{\rm{\Phi }}}_{R}(t)}\,\cos \,[\varepsilon (t-\frac{\tau }{2})-{{\rm{\Phi }}}_{I}(t)]\}.\end{array}$$ (13) Using these expressions, we have plotted the behavior of Γ n (τ) for the strong system-environment coupling regime in Fig. 3(a). It should be clear that once again increasing the system-environment coupling strength generally decreases the effective decay rate Γ n (τ). This is in sharp contrast with what happens in the weak coupling regime. For the weak coupling case, it is known that36 $${{\rm{\Gamma }}}_{n}(\tau )={\int }_{0}^{\infty }d\omega J(\omega )Q(\omega ,\tau ),$$ (14) where the filter function $$Q(\omega ,\tau )$$ is $$Q(\omega ,\tau )=\frac{2}{\tau }\{\coth (\frac{\beta \omega }{2}){D}_{1}(\omega ,\tau )+{D}_{2}(\omega ,\tau )\},$$ (15) with $${D}_{1}(\omega ,\tau )={\int }_{0}^{\tau }dt{\int }_{0}^{t}dt^{\prime} \,\cos (\omega t^{\prime} )[{a}_{x}(t-t^{\prime} ){a}_{x}(t)+{a}_{y}(t-t^{\prime} ){a}_{y}(t)],$$ (16) and $${D}_{2}(\omega ,\tau )={\int }_{0}^{\tau }dt{\int }_{0}^{t}dt^{\prime} \,\sin (\omega t^{\prime} )[-{a}_{x}(t-t^{\prime} ){a}_{y}(t)+{a}_{x}(t){a}_{y}(t-t^{\prime} )]\mathrm{.}$$ (17) Here $${a}_{x}(t)=\frac{2\varepsilon {\rm{\Delta }}}{{{\rm{\Omega }}}^{2}}{\sin }^{2}(\frac{{\rm{\Omega }}t}{2})$$ and $${a}_{y}(t)=\frac{{\rm{\Delta }}}{{\rm{\Omega }}}\,\sin ({\rm{\Omega }}t)$$ with $${{\rm{\Omega }}}^{2}={\varepsilon }^{2}+{{\rm{\Delta }}}^{2}$$. Using these expressions, we can investigate how the decay rate varies as the measurement interval changes for different system-environment coupling strengths in the weak coupling regime. Typical results are illustrated in Fig. 3(b) from which it should be clear that increasing the coupling strength in the weak coupling regime increases the effective decay rate. 
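For comparison, the weak-coupling result of Eqs. (14)–(17) can also be evaluated directly. The sketch below uses a crude grid (simple rectangle sums) with the frequency integral truncated at 30 ω_c and purely illustrative parameters, so it only shows the structure of the calculation.

```python
import numpy as np

# Grid-based sketch of the weak-coupling decay rate, Eqs. (14)-(17).
# Parameters are illustrative; the frequency integral is truncated at 30*w_c
# and all integrals are rectangle sums, so the numbers are rough estimates.
eps, Delta, w_c, G, beta = 1.0, 0.05, 10.0, 0.1, 1.0
Omega = np.sqrt(eps**2 + Delta**2)

J   = lambda w: G * w * np.exp(-w / w_c)                  # Ohmic spectral density
a_x = lambda t: 2 * eps * Delta / Omega**2 * np.sin(Omega * t / 2) ** 2
a_y = lambda t: Delta / Omega * np.sin(Omega * t)

def gamma_weak(tau, nt=200, nw=400):
    t  = np.linspace(0.0, tau, nt)                        # outer time grid
    dt = t[1] - t[0]
    w  = np.linspace(1e-6, 30 * w_c, nw)                  # frequency grid (avoid w = 0)
    dw = w[1] - w[0]

    D1 = np.zeros_like(w)                                 # Eq. (16)
    D2 = np.zeros_like(w)                                 # Eq. (17)
    for i, ti in enumerate(t):
        tp = t[: i + 1]                                   # inner grid 0 <= t' <= t
        k1 = a_x(ti - tp) * a_x(ti) + a_y(ti - tp) * a_y(ti)
        k2 = -a_x(ti - tp) * a_y(ti) + a_x(ti) * a_y(ti - tp)
        # accumulate the t' and t integrals for every frequency at once
        D1 += dt * dt * np.cos(np.outer(w, tp)) @ k1
        D2 += dt * dt * np.sin(np.outer(w, tp)) @ k2

    Q = 2.0 / tau * (D1 / np.tanh(beta * w / 2.0) + D2)   # filter function, Eq. (15)
    return np.sum(J(w) * Q) * dw                          # overlap integral, Eq. (14)

for tau in (0.1, 0.5, 1.0, 2.0):
    print(f"tau = {tau:4.1f}   Gamma_n(tau) ~ {gamma_weak(tau):.3e}")
```

Since Q(ω, τ) does not depend on G, the weak-coupling rate scales linearly with the coupling strength here, consistent with Fig. 3(b).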
Furthermore, changing the coupling strength has no effect on the measurement time interval at which the Zeno to anti-Zeno transition takes place for the weak coupling regime as the three curves in Fig. 3(b) achieve their maximum value for the same value of τ. This is not the case for the strong coupling regime [see Fig. 3(a)]. At this point, it is worth pausing to consider where the qualitative difference in the behavior of the effective decay rate in the weak and the strong coupling regime comes from. The effective decay rate is derived from the survival probability after one measurement $$s(\tau )$$. For both the weak and the strong coupling regimes, the survival probability after one measurement is given by Eq. (10). For both cases, the Hamiltonian $${H}_{S,L}$$ and $${H}_{L}$$ are the same. The only difference is in the choice of the system-environment state $${\rho }_{L}\mathrm{(0)}$$. For the weak coupling case, this state is simply the product state |↑〉〈↑|$$\otimes {e}^{-\beta {H}_{B}}/{Z}_{B}$$. This is not the case for the strong coupling due to the significant system-environment correlations. Thus, we can say that the qualitative difference in the behavior of the effective decay rate is because of the presence of the system-environment correlations. It seems that these correlations can protect the quantum state of the system - as the coupling strength increases, these correlations become more and more significant, and at the same time, the effective decay rate goes down. ### Large spin-boson model with strong system-environment coupling Let us now generalize the usual spin-boson model to deal with N S two-level systems interacting with a common environment. In this case, the system-environment Hamiltonian (in the ‘lab’ frame) is given by36, 40, 50 $${H}_{L}=\varepsilon {J}_{z}+{\rm{\Delta }}{J}_{x}+\sum _{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}+2{J}_{z}\sum _{k}({g}_{k}^{\ast }{b}_{k}+{g}_{k}{b}_{k}^{\dagger }),$$ (18) where $${J}_{x,y,z}$$ are the usual angular momentum operators obeying the commutation relations $$[{J}_{k},{J}_{l}]=i{\varepsilon }_{klm}{J}_{m}$$. We now start from the spin coherent state |j〉 such that J z |j〉 = j|j〉 with $$j={N}_{S}\mathrm{/2}$$. Other eigenstates of $${J}_{z}$$ can be considered as the initial state in a similar manner. Our objective is to again perform repeated projective measurements, described by the projector |j〉〈 j|, with time interval τ and thereby investigate what happens to the effective decay rate. As before, the survival probability after one measurement is $$s(\tau )={{\rm{Tr}}}_{S,B}[|j\rangle \langle \,j|{e}^{-i{H}_{L}\tau }{\rho }_{L}\mathrm{(0)}{e}^{i{H}_{L}\tau }]$$. Since we consider the system and the environment to be strongly interacting, we once again perform the polaron tranformation given by $$H={e}^{\chi {J}_{z}}{H}_{L}{e}^{-\chi {J}_{z}}$$, with $$\chi$$ the same as before. Then, we find that $$H=\varepsilon {J}_{z}+\sum _{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}-\kappa {J}_{z}^{2}+\frac{{\rm{\Delta }}}{2}({J}_{+}X+{J}_{-}{X}^{\dagger }),$$ (19) where $$\kappa =4{\sum }_{k}\frac{{|{g}_{k}|}^{2}}{{\omega }_{k}}$$, and $${J}_{\pm }={J}_{x}\pm i{J}_{y}$$ are the standard raising and lowering operators. Interestingly, the transformed Hamiltonian now contains a term proportional to $${J}_{z}^{2}$$. This term arises because the collection of two-level systems interacting with the collective environment are indirectly interacting with each other. 
This term is obviously proportional to the identity operator for a single two-level system, and thus has no influence in that case. If the tunneling amplitude is small, then we again use perturbation theory and assume that, in the polaron frame, the system-environment correlations can be neglected. We find that now the effective decay rate is (see the Methods section) $${\rm{\Gamma }}(\tau )=\frac{{{\rm{\Delta }}}^{2}j}{\tau }{\int }_{0}^{\tau }dt{\int }_{0}^{t}dt^{\prime} {e}^{-{{\rm{\Phi }}}_{R}(t^{\prime} )}\,\cos [\varepsilon t^{\prime} +\kappa \mathrm{(1}-2j)t^{\prime} -{{\rm{\Phi }}}_{I}(t^{\prime} )]\mathrm{.}$$ (20) Here $${{\rm{\Phi }}}_{R}(t)$$ and $${{\rm{\Phi }}}_{I}(t)$$ are the same as defined before. This result obviously agrees with the result that we obtained for a single two-level system. Moreover, it is clear from Eq. (20) that increasing the system-environment coupling strength G should reduce the effective decay rate due to the $${e}^{-{{\rm{\Phi }}}_{R}(t^{\prime} )}$$ factor in the integrand. This is precisely what we observe in Fig. 4(a). Furthermore, it may be thought that increasing j (or, equivalently, N S ) increases the effective decay rate. On the other hand, the dependence on j is not so clear because of the presence of the indirect interaction. Namely, increasing j increases the oscillatory behavior of the integrand due to the dependence of the integrand on $$\cos [\varepsilon t^{\prime} +\kappa \mathrm{(1}-2j)t^{\prime} -{{\rm{\Phi }}}_{I}(t^{\prime} )]$$. Thus, once the integral over this rapidly oscillating integrand is taken, we can again get a small number. Such a prediction is borne out by Fig. 4(b) where the effective decay rate has been plotted for different values of j. It is obvious that there is a big difference between the single two-level system case and the more than one two-level system case. Furthermore, it seems that increasing j can largely reduce the value of the effective decay rate, meaning that in the strong coupling regime, the indirect interaction helps in keeping the quantum state alive. Let us now consider the situation where the evolution due to the system Hamiltonian $${H}_{S,L}=\varepsilon {J}_{z}+{\rm{\Delta }}{J}_{x}$$ is removed before each measurement. In the polaron frame $${H}_{S,L}$$ becomes $${H}_{S,P}=\varepsilon {J}_{z}+\frac{{\rm{\Delta }}}{2}({J}_{+}X+{J}_{-}{X}^{\dagger })$$. The major difference now compared to the previous single two-level system case is that the total system-environment Hamiltonian in the polaron frame $$H={H}_{S,P}+{H}_{B}-\kappa {J}_{z}^{2}$$ contains a term (namely, $$-\kappa {J}_{z}^{2}$$) that is not part of the system Hamiltonian in the polaron frame. As a result, when the system evolution is removed just before performing each measurement, the evolution induced by this extra term survives. Keeping this fact in mind, the effective decay rate $${{\rm{\Gamma }}}_{n}(\tau )$$ is now $${{\rm{\Gamma }}}_{n}(\tau )={\rm{\Gamma }}(\tau )+{{\rm{\Gamma }}}_{{\rm{mod}}}(\tau ),$$ (21) where $${\rm{\Gamma }}(\tau )$$ is given by Eq. 
(20) and $$\begin{array}{rcl}{{\rm{\Gamma }}}_{{\rm{mod}}}(\tau ) & = & \frac{{{\rm{\Delta }}}^{2}}{\tau }\mathrm{(2}j)\{\frac{1}{{\varepsilon }^{2}}{\sin }^{2}(\frac{\varepsilon \tau }{2}){e}^{-{{\rm{\Phi }}}_{R}\mathrm{(0)}-i{{\rm{\Phi }}}_{I}\mathrm{(0)}}\\ & & -\,\frac{1}{\varepsilon }\,\sin (\frac{\varepsilon \tau }{2}){\int }_{0}^{\tau }dt{e}^{-{{\rm{\Phi }}}_{R}(t)}\,\cos \,[\kappa \mathrm{(2}j-\mathrm{1)(2}\tau -t)+\varepsilon (t-\frac{\tau }{2})-{{\rm{\Phi }}}_{I}(t)]\}\mathrm{.}\end{array}$$ (22) In Fig. 5(a), we have shown the behavior of $${{\rm{\Gamma }}}_{n}(\tau )$$ when the system-environment coupling strength is increased for $${N}_{S}=2$$. It should be obvious that we observe multiple Zeno-anti Zeno regimes. Also, increasing the coupling strength does not generally increase the effective decay rate $${{\rm{\Gamma }}}_{n}(\tau )$$. This behavior should be contrasted with the weak coupling scenario. For weak coupling, it has been found that the effective decay rate is still given by Eq. (14), but now the filter function is N S times the filter function given by Eq. (15)36. Thus, increasing the coupling strength should now increase the effective decay rate. This is precisely what is observed in Fig. 5(b). Consequently, the weak coupling and the strong coupling regimes are very different for the strong and the weak coupling regimes. The difference is again due to the system-environment correlations. ## Discussion We have investigated the quantum Zeno and anti-Zeno effects for a single two-level system interacting strongly with an environment of harmonic oscillators. Although it seems that perturbation theory cannot be applied, we have applied a polaron transformation that can make the coupling strength effectively small in the transformed frame and thereby validate the use of perturbation theory. We have obtained general expressions for the effective decay rate, independent of any particular form of the spectral density of the environment. Thereafter, we have shown that the strong coupling regime shows both qualitative and quantitative differences in the behavior of the effective decay rate as a function of the measurement interval and the QZE to QAZE transitions as compared with the weak system-environment coupling scenario. The effective decay rate is no longer an overlap integral of the spectral density of the environment and some other function. Rather, there is a very pronounced non-linear dependence on the spectral density of the environment. Most importantly, increasing the coupling strength in the strong coupling regime can actually reduce the effective decay rate. These differences can be understood in terms of the significant role played by the system-environment correlations. Moreover, we have extended our results to many two-level systems interacting with a common environment. Once again, we obtained expressions for the effective decay rate that are independent of the spectral density of the environment. We illustrated that in this case as well the behavior of the effective decay rate is very different from the commonly considered weak coupling regime. Our results should be important for understanding the role of repeated measurements in quantum systems that are interacting strongly with their environment. ## Methods ### The polaron transformation For completeness, let us sketch how to transform the spin-boson Hamiltonian to the polaron frame39, 40, 42,43,44, 46. We need to find $$H={e}^{\chi {\sigma }_{z}\mathrm{/2}}{H}_{L}{e}^{-\chi {\sigma }_{z}\mathrm{/2}}$$. 
We use the identity $${e}^{\theta A}B{e}^{-\theta A}=B+\theta [A,B]+\frac{{\theta }^{2}}{\mathrm{2!}}[A,[A,B]]+\cdots$$ (23) Now, it is clear that $$[\chi {\sigma }_{z}\mathrm{/2,}{\sigma }_{z}\mathrm{/2}]=0$$. Also, $$[{\sum }_{k}(\frac{{g}_{k}}{{\omega }_{k}}{b}_{k}^{\dagger }-\frac{{g}_{k}^{\ast }}{{\omega }_{k}}{b}_{k}),{\sum }_{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}]=-\,{\sigma }_{z}{\sum }_{k}({g}_{k}{b}_{k}^{\dagger }+{g}_{k}^{\ast }{b}_{k}).$$ Carrying on, we find that $$[{\sigma }_{z}\chi \mathrm{/2,}{\sigma }_{z}{\sum }_{k}({g}_{k}^{\ast }{b}_{k}+{g}_{k}{b}_{k}^{\dagger })]=-\,2{\sum }_{k}\frac{{|{g}_{k}|}^{2}}{{\omega }_{k}}\mathrm{.}$$ This is simply a c-number, so the higher-order commutators are zero. Furthermore, this c-number leads to a constant shift in the transformed Hamiltonian, and can thus be dropped. Putting all the commutators together, we find that $${e}^{\chi {\sigma }_{z}\mathrm{/2}}[\frac{\varepsilon }{2}{\sigma }_{z}+\sum _{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}+{\sigma }_{z}\sum _{k}({g}_{k}^{\ast }{b}_{k}+{g}_{k}{b}_{k}^{\dagger })]{e}^{-\chi {\sigma }_{z}\mathrm{/2}}=\frac{\varepsilon }{2}{\sigma }_{z}+\sum _{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}.$$ (24) Next, we observe that $${e}^{\chi {\sigma }_{z}\mathrm{/2}}\frac{{\rm{\Delta }}}{2}{\sigma }_{x}{e}^{-\chi {\sigma }_{z}\mathrm{/2}}={e}^{\chi {\sigma }_{z}\mathrm{/2}}\frac{{\rm{\Delta }}}{2}({\sigma }_{+}+{\sigma }_{-}){e}^{-\chi {\sigma }_{z}\mathrm{/2}}$$, where $${\sigma }_{+}$$ and $${\sigma }_{-}$$ are the standard spin half raising and lowering operators. Furthermore, $$[\chi {\sigma }_{z}\mathrm{/2,}{\sigma }_{+}]={\sigma }_{+}\chi$$, leading to $${e}^{\chi {\sigma }_{z}\mathrm{/2}}{\sigma }_{+}{e}^{-\chi {\sigma }_{z}\mathrm{/2}}={\sigma }_{+}{e}^{\chi }$$. Similarly, $${e}^{\chi {\sigma }_{z}\mathrm{/2}}{\sigma }_{-}{e}^{-\chi {\sigma }_{z}\mathrm{/2}}={\sigma }_{-}{e}^{-\chi }$$. Thus, we finally have the required Hamiltonian in the polaron frame. For the large spin case, the calculation is very similar40. The major difference is that now the c-number term that we dropped before cannot be dropped any longer since this term is proportional to $${J}_{z}^{2}$$ (for the spin half case, this is proportional to the identity operator, so this is just a constant shift for the spin half case). Namely, we now find that $$[\chi {J}_{z},\varepsilon {J}_{z}+\sum _{k}{\omega }_{k}{b}_{k}^{\dagger }{b}_{k}+2{J}_{z}\sum _{k}({g}_{k}^{\ast }{b}_{k}+{g}_{k}{b}_{k}^{\dagger })]=-\,2{J}_{z}\sum _{k}({g}_{k}{b}_{k}^{\dagger }+{g}_{k}^{\ast }{b}_{k})-8{J}_{z}^{2}\sum _{k}\frac{{|{g}_{k}|}^{2}}{{\omega }_{k}}\mathrm{.}$$ (25) Also, $$[\chi {J}_{z},-2{J}_{z}\sum _{k}({g}_{k}{b}_{k}^{\dagger }+{g}_{k}^{\ast }{b}_{k})-8{J}_{z}^{2}\sum _{k}\frac{{|{g}_{k}|}^{2}}{{\omega }_{k}}]=8{J}_{z}^{2}\sum _{k}\frac{{|{g}_{k}|}^{2}}{{\omega }_{k}}\mathrm{.}$$ (26) The rest of the calculation is very similar to the spin half case, and leads to the Hamiltonian in the polaron frame. ### Finding the system density matrix in the polaron frame Here we describe how to obtain the system density matrix in the polaron frame $${\rho }_{S}(\tau )$$ just before performing the measurement at time τ. We define $${U}_{{\rm{tot}}}(\tau )={e}^{-iH\tau }={U}_{0}(\tau ){U}_{I}(\tau )$$, where $${U}_{0}(\tau )$$ is the unitary time-evolution operator corresponding to H S and H B , while $${U}_{I}(\tau )$$ is the ‘left over’ part that we can find using time-dependent perturbation theory. 
Writing the system-environment coupling in the polaron frame as $${\sum }_{\mu }{F}_{\mu }\otimes {B}_{\mu }$$, with $${F}_{1}=\frac{{\rm{\Delta }}}{2}{\sigma }_{+}$$, $${B}_{1}=X$$, $${F}_{2}=\frac{{\rm{\Delta }}}{2}{\sigma }_{-}$$ and $${B}_{2}={X}^{\dagger }$$, $${U}_{I}(\tau )$$ can be found to be $${U}_{I}(\tau )=1+{A}_{1}+{A}_{2}$$, with $${A}_{1}=-i{\sum }_{\mu }{\int }_{0}^{\tau }{\tilde{F}}_{\mu }({t}_{1}){\tilde{B}}_{\mu }({t}_{1})d{t}_{1}$$ and $${A}_{2}=-\,{\sum }_{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}{\tilde{F}}_{\mu }({t}_{1}){\tilde{F}}_{\nu }({t}_{2}){\tilde{B}}_{\mu }({t}_{1}){\tilde{B}}_{\nu }({t}_{2})$$. Correct to second order in the tunneling amplitude Δ (in particular, we assume that Δτ is small enough such that higher order terms can be ignored), we can then write $$\begin{array}{rcl}{\rho }_{S}(\tau ) & \approx & {{\rm{Tr}}}_{B}\{{U}_{0}(\tau )[{\rho }_{{\rm{tot}}}\mathrm{(0)}+{\rho }_{{\rm{tot}}}\mathrm{(0)}{A}_{1}^{\dagger }+{\rho }_{{\rm{tot}}}\mathrm{(0)}{A}_{2}^{\dagger }\\ & & +\,{A}_{1}{\rho }_{{\rm{tot}}}\mathrm{(0)}+{A}_{2}{\rho }_{{\rm{tot}}}\mathrm{(0)}+{A}_{1}{\rho }_{{\rm{tot}}}\mathrm{(0)}{A}_{1}^{\dagger }]{U}_{0}^{\dagger }(\tau )\},\end{array}$$ (27) where $${\rho }_{{\rm{tot}}}\mathrm{(0)}={\rho }_{S}\mathrm{(0)}\otimes {\rho }_{B}$$. Eq. (27) can now be simplified term by term. First, we find that $${{\rm{Tr}}}_{B}\{{U}_{0}(\tau ){\rho }_{{\rm{tot}}}\mathrm{(0)}{U}_{0}^{\dagger }(\tau )\}={\tilde{\rho }}_{S}(\tau )$$, where $${\tilde{\rho }}_{S}(\tau )={U}_{S}(\tau ){\rho }_{S}\mathrm{(0)}{U}_{S}^{\dagger }(\tau )$$ is the system density matrix if the tunneling amplitude is zero. Next, we find that $${{\rm{Tr}}}_{B}\{{U}_{0}(\tau ){\rho }_{{\rm{tot}}}\mathrm{(0)}{A}_{1}^{\dagger }{U}_{0}^{\dagger }(\tau )\}=i{\sum }_{\mu }{\int }_{0}^{\tau }d{t}_{1}\,{U}_{S}(\tau ){\rho }_{S}\mathrm{(0)}{\tilde{F}}_{\mu }({t}_{1}){U}_{S}^{\dagger }(\tau ){\langle {\tilde{B}}_{\mu }({t}_{1})\rangle }_{B}$$, where $${\langle {B}_{\mu }({t}_{1})\rangle }_{B}={{\rm{Tr}}}_{B}\{{U}_{B}(\tau ){\rho }_{B}{\tilde{B}}_{\mu }({t}_{1}){U}_{B}^{\dagger }(\tau )\}$$. Similarly, $${{\rm{Tr}}}_{B}\{{U}_{0}(\tau ){A}_{1}{\rho }_{{\rm{tot}}}\mathrm{(0)}{U}_{0}^{\dagger }(\tau )\}=-\,i{\sum }_{\mu }{\int }_{0}^{\tau }d{t}_{1}\,{U}_{S}(\tau )$$ $${\tilde{F}}_{\mu }({t}_{1}){\rho }_{S}\mathrm{(0)}{U}_{S}^{\dagger }(\tau ){\langle {\tilde{B}}_{\mu }({t}_{1})\rangle }_{B}$$. Carrying on, $${{\rm{Tr}}}_{B}\{{U}_{0}(\tau ){A}_{2}{\rho }_{{\rm{tot}}}\mathrm{(0)}{U}_{0}^{\dagger }(\tau )\}=-\,\sum _{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}\,{U}_{S}(\tau ){\tilde{F}}_{\mu }({t}_{1}){\tilde{F}}_{\nu }({t}_{2}){\rho }_{S}\mathrm{(0)}{U}_{S}^{\dagger }(\tau ){C}_{\mu \nu }({t}_{1},{t}_{2}),$$ (28) with the environment correlation function $${C}_{\mu \nu }({t}_{1},{t}_{2})$$ defined as $${C}_{\mu \nu }({t}_{1},{t}_{2})={\langle {\tilde{B}}_{\mu }({t}_{1}){\tilde{B}}_{\nu }({t}_{2})\rangle }_{B}={{\rm{Tr}}}_{B}\{{\tilde{B}}_{\mu }({t}_{1}){\tilde{B}}_{\nu }({t}_{2}){\rho }_{B}\}$$. 
Similarly, $${{\rm{Tr}}}_{B}\{{U}_{0}(\tau ){\rho }_{{\rm{tot}}}\mathrm{(0)}{A}_{2}^{\dagger }{U}_{0}^{\dagger }(\tau )\}=-\,\sum _{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}\,{U}_{S}(\tau ){\rho }_{S}\mathrm{(0)}{\tilde{F}}_{\nu }({t}_{2}){\tilde{F}}_{\mu }({t}_{1}){U}_{S}^{\dagger }(\tau ){C}_{\nu \mu }({t}_{2},{t}_{1}\mathrm{).}$$ (29) Finally, $${{\rm{Tr}}}_{B}\{{U}_{0}(\tau ){A}_{1}{\rho }_{{\rm{tot}}}{A}_{1}^{\dagger }{U}_{0}^{\dagger }(\tau )\}=\sum _{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{\tau }d{t}_{2}{U}_{S}(\tau ){\tilde{F}}_{\mu }({t}_{1}){\rho }_{S}\mathrm{(0)}{\tilde{F}}_{\nu }({t}_{2}){U}_{S}^{\dagger }(\tau ){C}_{\nu \mu }({t}_{2},{t}_{1}\mathrm{).}$$ (30) Using the fact that $${\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{\tau }d{t}_{2}={\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}+{\int }_{0}^{\tau }d{t}_{2}{\int }_{0}^{{t}_{2}}d{t}_{1}$$, $$\begin{array}{rcl}{{\rm{Tr}}}_{B}\{{U}_{0}(\tau ){A}_{1}{\rho }_{{\rm{tot}}}{A}_{1}^{\dagger }{U}_{0}^{\dagger }(\tau )\} & = & \sum _{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}{U}_{S}(\tau ){\tilde{F}}_{\mu }({t}_{1}){\rho }_{S}\mathrm{(0)}\\ & & \times \,{\tilde{F}}_{\nu }({t}_{2}){U}_{S}^{\dagger }(\tau ){C}_{\nu \mu }({t}_{2},{t}_{1})+{\rm{h}}{\rm{.c}}\mathrm{.,}\end{array}$$ (31) where h.c. denotes hermitian conjugate. Putting all the terms back together, the system density matrix can be written as Eq. (5). ### Finding the effective decay rate We now explain how to find the effective decay rate given by Eq. (6). With the system density matrix at time τ available, we first calculate the survival probability s(τ). This can be done via $$s(\tau )=1-$$〈↓|ρ s (τ)|↓〉. Since the state |↓〉 is an eigenstate of H S , and ρ S (0) = |↑〉〈↑|, it is straightforward to see that $$s(\tau )=1-2{\rm{Re}}\,(\sum _{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}{C}_{\mu \nu }({t}_{1},{t}_{2})|\downarrow \rangle {\tilde{F}}_{\nu }({t}_{2})|\uparrow \rangle \langle \uparrow |{\tilde{F}}_{\mu }({t}_{1})|\downarrow \rangle )\mathrm{.}$$ (32) We now note that, since $${F}_{1}=\frac{{\rm{\Delta }}}{2}{\sigma }_{+}$$, $${F}_{2}=\frac{{\rm{\Delta }}}{2}{\sigma }_{-}$$, and $${H}_{S}=\frac{\varepsilon }{2}{\sigma }_{z}$$, $${\tilde{F}}_{1}(t)=\frac{{\rm{\Delta }}}{2}{\sigma }_{+}{e}^{i\varepsilon t}$$ and $${\tilde{F}}_{2}(t)=\frac{{\rm{\Delta }}}{2}{\sigma }_{-}{e}^{-i\varepsilon t}$$. Therefore, $$\begin{array}{rcl}s(\tau ) & = & 1-2{\rm{Re}}\,({\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}{C}_{12}({t}_{1},{t}_{2})\langle \downarrow |{\tilde{F}}_{2}({t}_{2})|\downarrow \rangle \langle \downarrow |{\tilde{F}}_{1}({t}_{1})|\downarrow \rangle )\\ & = & 1-\frac{{{\rm{\Delta }}}^{2}}{2}{\rm{Re}}\,({\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}{C}_{12}({t}_{1},{t}_{2}){e}^{i\varepsilon ({t}_{1}-{t}_{2})})\mathrm{.}\end{array}$$ (33) What remains to be worked out is the environment correlation function $${C}_{12}({t}_{1},{t}_{2})={{\rm{Tr}}}_{B}[{\rho }_{B}\tilde{X}({t}_{1}){\tilde{X}}^{\dagger }({t}_{2})]$$. Using the cyclic invariance of the trace, it is clear that this correlation function is actually only a function of $${t}_{1}-{t}_{2}$$ only, since $${{\rm{Tr}}}_{B}\,[{\rho }_{B}\tilde{X}({t}_{1}){\tilde{X}}^{\dagger }({t}_{2})]={{\rm{Tr}}}_{B}\,[{\rho }_{B}{e}^{i{H}_{B}({t}_{1}-{t}_{2})}X{e}^{-i{H}_{B}({t}_{1}-{t}_{2})}{X}^{\dagger }]={C}_{12}({t}_{1}-{t}_{2})$$. 
Thus, $$s(\tau )=1-\frac{{{\rm{\Delta }}}^{2}}{2}{\rm{Re}}\,({\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{{t}_{1}}d{t}_{2}{C}_{12}({t}_{1}-{t}_{2}){e}^{i\varepsilon ({t}_{1}-{t}_{2})})=1-\frac{{{\rm{\Delta }}}^{2}}{2}{\rm{Re}}\,({\int }_{0}^{\tau }dt{\int }_{0}^{t}dt^{\prime} {C}_{12}(t^{\prime} ){e}^{i\varepsilon t^{\prime} }),$$ (34) where we have introduced $$t^{\prime} ={t}_{1}-{t}_{2}$$. The calculation of $${C}_{12}(t^{\prime} )$$ can be performed as follows. First, we use the useful fact that for $${{\rm{Tr}}}_{B}[{\rho }_{B}{e}^{Z}]={e}^{\langle {Z}^{2}\rangle \mathrm{/2}}$$ where Z is a linear function of the creation and annihilation operators. Second, to obtain a single exponential so that the previous identity can be used, we use the identity that for any two operators X and Y, $${e}^{X}{e}^{Y}={e}^{X+Y+\frac{1}{2}[X,Y]\ldots }$$. Fortunately for us, the series terminates for our case, so the higher order terms are zero. Using these two identities, we find that $${C}_{12}(t^{\prime} )={e}^{-{\Phi }_{R}(t^{\prime} )}{e}^{-i{\Phi }_{I}(t^{\prime} )}$$ where $${{\rm{\Phi }}}_{R}(t)$$ and $${{\rm{\Phi }}}_{I}(t)$$ have been defined in Eqs (7) and (8), and the spectral density of the environment has been introduced as $${\sum }_{k}{|{g}_{k}|}^{2}(\ldots )\to {\int }_{0}^{\infty }d\omega J(\omega )(\ldots )$$. This finally leads to $$s(\tau )=1-\frac{{{\rm{\Delta }}}^{2}}{2}{\int }_{0}^{\tau }dt{\int }_{0}^{t}dt^{\prime} {e}^{-{{\rm{\Phi }}}_{R}(t^{\prime} )}\,\cos [\varepsilon t^{\prime} -{{\rm{\Phi }}}_{I}(t^{\prime} )]$$. We can define an effective decay rate $${\rm{\Gamma }}(\tau )=-\frac{1}{\tau }\,\mathrm{ln}\,s(\tau )$$. For small Δ, we expect the deviation of the survival probability from one to be small. Thus, we end up with Eq. (6). ### Calculating the modified decay rate Let us now briefly sketch how to obtain Eq. (12). We note that the system Hamiltonian $${H}_{S,L}$$ becomes in the polaron frame $${H}_{S,P}=\frac{\varepsilon }{2}{\sigma }_{z}+\frac{{\rm{\Delta }}}{2}({\sigma }_{+}X+{\sigma }_{-}{X}^{\dagger })$$. Then, to second order in $${\rm{\Delta }},{e}^{-i{H}_{S,P}\tau }\approx 1+{A}_{SP}^{\mathrm{(1)}}+{A}_{SP}^{\mathrm{(2)}}$$, with $${A}_{SP}^{\mathrm{(1)}}=-\,i{\int }_{0}^{\tau }d{t}_{1}{\sum }_{\mu }{\tilde{F}}_{\mu }({t}_{1}){B}_{\mu }$$, and $${A}_{SP}^{\mathrm{(2)}}=-\,{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{\tau }d{t}_{2}{\sum }_{\mu \nu }{\tilde{F}}_{\mu }({t}_{1}){B}_{\mu }{\tilde{F}}_{\nu }({t}_{2}){B}_{\nu }$$. Here, $${\tilde{F}}_{\mu }(t)={e}^{i\varepsilon {\sigma }_{z}t\mathrm{/2}}{F}_{\mu }{e}^{-i\varepsilon {\sigma }_{z}t\mathrm{/2}}$$. Substituting these expressions in the expression for the survival probability as well as the perturbation expansions for $${e}^{iH\tau }$$ and $${e}^{-iH\tau }$$, and keeping terms up to second order in Δ, we find that the new survival probability consists of the previous survival probability plus some additional terms. It can be be easily seen that most of these additional terms, once the trace with the projector |↓〉〈↓| is taken, give zero. 
The additional terms that need to be worked out are $${{\rm{Tr}}}_{B}[\langle \,\downarrow \,|{U}_{B}(\tau ){A}_{1}{\rho }_{S}\mathrm{(0)}{\rho }_{B}{U}_{B}^{\dagger }(\tau ){A}_{SP}^{\mathrm{(1)}}|\,\downarrow \,\rangle ]$$, $${{\rm{Tr}}}_{B}[\langle \,\downarrow \,|{A}_{SP}^{\mathrm{(1)}\dagger }{U}_{B}(\tau ){\rho }_{S}\mathrm{(0)}{\rho }_{B}{A}_{1}^{\dagger }{U}_{B}^{\dagger }(\tau )|\,\downarrow \,\rangle ]$$, and $${{\rm{Tr}}}_{B}[\langle \,\downarrow \,|{A}_{SP}^{\mathrm{(1)}\dagger }{U}_{B}(\tau ){\rho }_{S}\mathrm{(0)}{\rho }_{B}{A}_{1}^{\dagger }{U}_{B}^{\dagger }(\tau )|\,\downarrow \,\rangle ]$$. The first of these terms is equal to $$-{\sum }_{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{\tau }d{t}_{2}\langle \,\downarrow \,|{\tilde{F}}_{\nu }({t}_{1}){\rho }_{S}\mathrm{(0)}{\tilde{F}}_{\mu }({t}_{2})|\,\downarrow \,\rangle {C}_{\mu \nu }(\tau -{t}_{1})$$, while the second is simply the hermitian conjugate of the first. On the other hand, the last term is equal to $${\sum }_{\mu \nu }{\int }_{0}^{\tau }d{t}_{1}{\int }_{0}^{\tau }d{t}_{2}\langle \,\downarrow \,|{\tilde{F}}_{\nu }({t}_{1}){\rho }_{S}\mathrm{(0)}{\tilde{F}}_{\mu }({t}_{2})|\,\downarrow \,\rangle {C}_{\mu \nu }\mathrm{(0).}$$ Next, we use the fact that $${F}_{1}=\frac{{\rm{\Delta }}}{2}{\sigma }_{+}$$ and $${F}_{2}=\frac{{\rm{\Delta }}}{2}{\sigma }_{-}$$ to simply the inner products. Putting all the pieces together, we arrive at Eq. (12). The calculation of $${{\rm{\Gamma }}}_{n}(\tau )$$ for the large spin case is quite similar. One only needs to be careful about the fact that the system-environment Hamiltonian, in the polaron frame, contains a term, namely $$-\kappa {J}_{z}^{2}$$, that is not a part of the transformed system Hamiltonian $${H}_{S,P}$$. ## References 1. 1. Misra, B. & Sudarshan, E. C. G. The zeno’s paradox in quantum theory. J. Math. Phys. (N. Y.) 18, 756–763, doi:10.1063/1.523304 (1977). 2. 2. Facchi, P., Gorini, V., Marmo, G., Pascazio, S. & Sudarshan, E. Quantum zeno dynamics. Phys. Lett. A 275, 12–19, doi:10.1016/S0375-9601(00)00566-1 (2000). 3. 3. Facchi, P. & Pascazio, S. Quantum zeno subspaces. Phys. Rev. Lett. 89, 080401, doi:10.1103/PhysRevLett.89.080401 (2002). 4. 4. Facchi, P. & Pascazio, S. Quantum zeno dynamics: mathematical and physical aspects. J. Phys. A: Math. Theor. 41, 493001, doi:10.1088/1751-8113/41/49/493001 (2008). 5. 5. Wang, X.-B., You, J. Q. & Nori, F. Quantum entanglement via two-qubit quantum zeno dynamics. Phys. Rev. A 77, 062339, doi:10.1103/PhysRevA.77.062339 (2008). 6. 6. Maniscalco, S., Francica, F., Zaffino, R. L., Lo Gullo, N. & Plastina, F. Protecting entanglement via the quantum zeno effect. Phys. Rev. Lett. 100, 090503, doi:10.1103/PhysRevLett.100.090503 (2008). 7. 7. Facchi, P. & Ligabò, M. Quantum zeno effect and dynamics. J. Phys. A: Math. Theor. 51, 022103 (2010). 8. 8. Militello, B., Scala, M. & Messina, A. Quantum zeno subspaces induced by temperature. Phys. Rev. A 84, 022106, doi:10.1103/PhysRevA.84.022106 (2011). 9. 9. Raimond, J. M. et al. Quantum zeno dynamics of a field in a cavity. Phys. Rev. A 86, 032120, doi:10.1103/PhysRevA.86.032120 (2012). 10. 10. Smerzi, A. Zeno dynamics, indistinguishability of state, and entanglement. Phys. Rev. Lett. 109, 150410, doi:10.1103/PhysRevLett.109.150410 (2012). 11. 11. Wang, S.-C., Li, Y., Wang, X.-B. & Kwek, L. C. Operator quantum zeno effect: Protecting quantum information with noisy two-qubit interactions. Phys. Rev. Lett. 110, 100505, doi:10.1103/PhysRevLett.110.100505 (2013). 12. 12. McCusker, K. 
T., Huang, Y.-P., Kowligy, A. S. & Kumar, P. Experimental demonstration of interaction-free all-optical switching via the quantum zeno effect. Phys. Rev. Lett. 110, 240403, doi:10.1103/PhysRevLett.110.240403 (2013). 13. 13. Stannigel, K. et al. Constrained dynamics via the zeno effect in quantum simulation: mplementing non-abelian lattice gauge theories with cold atoms. Phys. Rev. Lett. 112, 120406, doi:10.1103/PhysRevLett.112.120406 (2014). 14. 14. Zhu, B. et al. Suppressing the loss of ultracold molecules via the continuous quantum zeno effect. Phys. Rev. Lett. 112, 070404, doi:10.1103/PhysRevLett.112.070404 (2014). 15. 15. Schäffer, F. et al. Experimental realization of quantum zeno dynamics. Nat. Commun. 5, 3194, doi:10.1038/ncomms4194 (2014). 16. 16. Signoles, A. et al. Confined quantum Zeno dynamics of a watched atomic arrow. Nat. Phys. 10, 715–719, doi:10.1038/nphys3076 (2014). 17. 17. Debierre, V., Goessens, I., Brainis, E. & Durt, T. Fermi’s golden rule beyond the zeno regime. Phys. Rev. A 92, 023825, doi:10.1103/PhysRevA.92.023825 (2015). 18. 18. Kiilerich, A. H. & Mølmer, K. Quantum zeno effect in parameter estimation. Phys. Rev. A 92, 032124, doi:10.1103/PhysRevA.92.032124 (2015). 19. 19. Qiu, J. et al. Quantum zeno and zeno-like effects in nitrogen vacancy centers. Sci. Rep 5, 17615, doi:10.1038/srep17615 (2015). 20. 20. Slichter, D. H. et al. Quantum zeno effect in the strong measurement regime of circuit quantum electrodynamics. New J. Phys. 18, 053031, doi:10.1088/1367-2630/18/5/053031 (2016). 21. 21. Mueller, M. M., Gherardini, S. & Caruso, F. Stochastic quantum Zeno-based detection of noise correlations. Sci. Rep 6, 38650, doi:10.1038/srep38650 (2016). 22. 22. Gherardini, S. et al. Stochastic quantum Zeno by large deviation theory. New J. Phys. 18, 013048, doi:10.1088/1367-2630/18/1/013048 (2016). 23. 23. Kofman, A. G. & Kurizki, G. Acceleration of quantum decay processes by frequent observations. Nature (London) 405, 546–550, doi:10.1038/35014537 (2000). 24. 24. Koshino, K. & Shimizu, A. Quantum zeno effect by general measurements. Phys. Rep 412, 191–275, doi:10.1016/j.physrep.2005.03.001 (2005). 25. 25. Chen, P.-W., Tsai, D.-B. & Bennett, P. Quantum zeno and anti-zeno effect of a nanomechanical resonator measured by a point contact. Phys. Rev. B 81, 115307, doi:10.1103/PhysRevB.81.115307 (2010). 26. 26. Barone, A., Kurizki, G. & Kofman, A. G. Dynamical control of macroscopic quantum tunneling. Phys. Rev. Lett. 92, 200403, doi:10.1103/PhysRevLett.92.200403 (2004). 27. 27. Fujii, K. & Yamamoto, K. Anti-zeno effect for quantum transport in disordered systems. Phys. Rev. A 82, 042109, doi:10.1103/PhysRevA.82.042109 (2010). 28. 28. Fischer, M. C., Gutiérrez-Medina, B. & Raizen, M. G. Observation of the quantum zeno and anti-zeno effects in an unstable system. Phys. Rev. Lett. 87, 040402, doi:10.1103/PhysRevLett.87.040402 (2001). 29. 29. Maniscalco, S., Piilo, J. & Suominen, K.-A. Zeno and anti-zeno effects for quantum brownian motion. Phys. Rev. Lett. 97, 130402, doi:10.1103/PhysRevLett.97.130402 (2006). 30. 30. Segal, D. & Reichman, D. R. Zeno and anti-zeno effects in spin-bath models. Phys. Rev. A 76, 012109, doi:10.1103/PhysRevA.76.012109 (2007). 31. 31. Zheng, H., Zhu, S. Y. & Zubairy, M. S. Quantum zeno and anti-zeno effects: Without the rotating-wave approximation. Phys. Rev. Lett. 101, 200404, doi:10.1103/PhysRevLett.101.200404 (2008). 32. 32. Ai, Q., Li, Y., Zheng, H. & Sun, C. P. Quantum anti-zeno effect without rotating wave approximation. Phys. Rev. 
A 81, 042116, doi:10.1103/PhysRevA.81.042116 (2010). 33. 33. Thilagam, A. Zeno–anti-zeno crossover dynamics in a spin–boson system. J. Phys. A: Math. Theor. 43, 155301, doi:10.1088/1751-8113/43/15/155301 (2010). 34. 34. Thilagam, A. Non-markovianity during the quantum zeno effect. J. Chem. Phys. 138, 175102, doi:10.1063/1.4802785 (2013). 35. 35. Chaudhry, A. Z. & Gong, J. Zeno and anti-zeno effects on dephasing. Phys. Rev. A 90, 012101, doi:10.1103/PhysRevA.90.012101 (2014). 36. 36. Chaudhry, A. Z. A general framework for the quantum zeno and anti-zeno effects. Sci. Rep 6, 29497, doi:10.1038/srep29497 (2016). 37. 37. Leggett, A. J. et al. Dynamics of the dissipative two-state system. Rev. Mod. Phys. 59, 1–85, doi:10.1103/RevModPhys.59.1 (1987). 38. 38. Wu, W. & Lin, H.-Q. Quantum Zeno and anti-Zeno effects in quantum dissipative systems. e-print arXiv:1701.02100 (2017). 39. 39. Silbey, R. & Harris, R. A. Variational calculation of the dynamics of a two level system interacting with a bath. J. Chem. Phys. 80, 2615–2617, doi:10.1063/1.447055 (1984). 40. 40. Vorrath, T. & Brandes, T. Dynamics of a large spin with strong dissipation. Phys. Rev. Lett. 95, 070402, doi:10.1103/PhysRevLett.95.070402 (2005). 41. 41. Jang, S., Cheng, Y.-C., Reichman, D. R. & Eaves, J. D. Theory of coherent energy transfer. J. Chem. Phys. 129, 101104, doi:10.1063/1.2977974 (2008). 42. 42. Chin, A. W., Prior, J., Huelga, S. F. & Plenio, M. B. Generalized polaron ansatz for the ground state of the sub-ohmic spin-boson model: An analytic theory of the localization transition. Phys. Rev. Lett. 107, 160601, doi:10.1103/PhysRevLett.107.160601 (2011). 43. 43. Lee, C. K., Moix, J. & Cao, J. Accuracy of second order perturbation theory in the polaron and variational polaron frames. J. Chem. Phys. 136, 204120, doi:10.1063/1.4722336 (2012). 44. 44. Lee, C. K., Cao, J. & Gong, J. Noncanonical statistics of a spin-boson model: Theory and exact monte carlo simulations. Phys. Rev. E 86, 021109, doi:10.1103/PhysRevE.86.021109 (2012). 45. 45. Jang, S., Zhang, P.-P. & Cheng, Y.-C. Criteria for the accuracy of small polaron quantum master equation in simulating excitation energy transfer dynamics. J. Chem. Phys. 139, 224112, doi:10.1063/1.4840795 (2013). 46. 46. Gelbwaser-Klimovsky, D. & Aspuru-Guzik, A. Strongly coupled quantum heat machines. J. Chem. Phys. Lett. 6, 3477–3482, doi:10.1021/acs.jpclett.5b01404 (2015). 47. 47. Weiss, U. Quantum dissipative systems (World Scientific: Singapore, 2008). 48. 48. Breuer, H.-P. & Petruccione, F. The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2007). 49. 49. Chaudhry, A. Z. & Gong, J. Amplification and suppression of system-bath-correlation effects in an open many-body system. Phys. Rev. A 87, 012129, doi:10.1103/PhysRevA.87.012129 (2013). 50. 50. Chaudhry, A. Z. & Gong, J. Role of initial system-environment correlations: A master equation approach. Phys. Rev. A 88, 052107, doi:10.1103/PhysRevA.88.052107 (2013). 51. 51. Matsuzaki, Y., Saito, S., Kakuyanagi, K. & Semba, K. Quantum zeno effect with a superconducting qubit. Phys. Rev. B 82, 180518, doi:10.1103/PhysRevB.82.180518 (2010). ## Acknowledgements Support from the LUMS startup grant is gratefully acknowledged. Support from HEC under Grant No 5917/Punjab/NRPU/R&D/HEC/2016 is also acknowledged. ## Author information Authors ### Contributions A.Z.C. contributed solely to this work. ## Ethics declarations ### Competing Interests The authors declare that they have no competing interests. 
Chaudhry, A.Z. The quantum Zeno and anti-Zeno effects with strong system-environment coupling. Sci Rep 7, 1741 (2017). https://doi.org/10.1038/s41598-017-01844-8
Quark Matter 2014 - XXIV International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, 19-24 May 2014

Long-range angular correlations at the LHC with ALICE
20 May 2014, 14:40 (20m), europium - Contributed Talk, Correlations and Fluctuations
Speaker: Leonardo Milano (CERN)

Description: The observation of long-range correlations on the near- and away-side (also known as the double-ridge) in high-multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV and its similarity to Pb-Pb collisions remains one of the open questions from the p-Pb run at the Large Hadron Collider. It has been attributed to mechanisms that involve initial-state effects, such as gluon saturation and colour connections forming along the longitudinal direction, and final-state effects, such as parton-induced interactions and collective effects developing in a high-density system possibly formed in these collisions. In order to understand the nature of this double-ridge structure, the two-particle correlation analysis has been extended to identified particles. The observed mass dependence in p-Pb resembles qualitative expectations from hydrodynamics, and is also observed in Pb-Pb collisions. A study of correlations at forward rapidity probes the low-x regime of the nucleus, where saturation effects are expected to become stronger. The possibility of accessing this regime using the ALICE forward muon detector is explored. In addition, a possible ridge signal within the ALICE acceptance in pp collisions at $\sqrt{s} = 7$ TeV is also investigated.

On behalf of collaboration: ALICE
Primary author: Leonardo Milano (CERN)
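As a schematic of the two-particle correlation technique referred to in the abstract (a toy construction with random tracks, not the ALICE analysis code), the correlation in Δφ can be built from same-event pairs and corrected for pair acceptance with mixed events:

```python
import numpy as np

# Schematic two-particle angular correlation in Delta-phi on toy data:
# same-event pair distribution divided by a mixed-event acceptance estimate.
# Event and track multiplicities are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n_events, n_tracks = 500, 30
phi = rng.uniform(-np.pi, np.pi, size=(n_events, n_tracks))   # toy azimuthal angles

def dphi_pairs(phi_a, phi_b, exclude_diagonal=False):
    """All pairwise Delta-phi values, folded into (-pi/2, 3pi/2)."""
    d = phi_a[:, None] - phi_b[None, :]
    if exclude_diagonal:                       # drop trivial self-pairs in the same-event case
        d = d[~np.eye(len(phi_a), dtype=bool)]
    return (d.ravel() + np.pi / 2) % (2 * np.pi) - np.pi / 2

bins = np.linspace(-np.pi / 2, 3 * np.pi / 2, 37)
same  = np.zeros(len(bins) - 1)
mixed = np.zeros(len(bins) - 1)

for i in range(n_events):
    same  += np.histogram(dphi_pairs(phi[i], phi[i], exclude_diagonal=True), bins=bins)[0]
    j = (i + 1) % n_events                     # pair trigger particles with another event
    mixed += np.histogram(dphi_pairs(phi[i], phi[j]), bins=bins)[0]

# Acceptance-corrected correlation: flat (up to normalisation) for this uncorrelated toy sample
C = same / mixed
centers = 0.5 * (bins[1:] + bins[:-1])
for c, v in zip(centers, C):
    print(f"dphi = {c:6.2f}   C = {v:.3f}")
```

For this uncorrelated toy sample C(Δφ) comes out flat; a double ridge would show up as an excess near Δφ ≈ 0 and Δφ ≈ π that persists over a wide Δη range.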