# Echo in a single vibrationally excited molecule

## Abstract

Echoes occur in many physical systems, typically in inhomogeneously broadened ensembles of nonlinear objects. They are often used to eliminate the effects of dephasing caused by interactions with the environment, as well as to enable the observation of proper, inherent object properties. Here, we report the experimental observation of quantum wave-packet echoes in a single, isolated molecule. The entire dephasing–rephasing cycle occurs without any inhomogeneous spread of molecular properties or any interaction with the environment, and offers a way to probe the internal coherent dynamics of single molecules. In our experiments, we impulsively excite a vibrational wave packet in an anharmonic molecular potential and observe its oscillations and eventual dispersion with time. A second, delayed pulse gives rise to an echo: a partial recovery of the initial coherent oscillations. The vibrational dynamics of single molecules is visualized by a time-delayed probe pulse dissociating them, one at a time. Two mechanisms for the echo formation are discussed: a.c. Stark-induced shaking of the molecular potential, and the creation of a depletion-induced 'hole' in the nuclear spatial distribution. Single-molecule wave-packet echoes may lead to the development of new tools for probing ultrafast intramolecular processes in various molecules.

## Data availability

The data represented in Figs. 1, 2b and 3–6 are available through the figshare repository at https://doi.org/10.6084/m9.figshare.10252619.v1. All other data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.
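For orientation, the dephasing and revival described above follow the standard textbook picture of a wave packet in a weakly anharmonic vibrational potential (a generic sketch, not the specific potential used in the paper):

$$E_n \approx \hbar\omega_e\left(n+\tfrac{1}{2}\right) - \hbar\omega_e x_e\left(n+\tfrac{1}{2}\right)^2 ,$$

so a packet centred on level $\bar{n}$ oscillates with the classical period $T_{\mathrm{cl}} = 2\pi\hbar/|E_{\bar{n}+1}-E_{\bar{n}}|$, dephases because neighbouring level spacings differ by $2\hbar\omega_e x_e$, and rephases on the much longer revival time scale $T_{\mathrm{rev}} = 2\pi/(\omega_e x_e)$.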
## Acknowledgements

We acknowledge useful discussions with D. Oron, D. Raanan and G. Stupakov. This work is supported by the National Key R&D Program of China (grant no. 2018YFA0306303), the National Natural Science Foundation of China (grant nos. 11425416, 11834004, 61690224, 11621404 and 11761141004), the 111 Project of China (grant no. B12024), the Israel Science Foundation (grant no. 746/15), the ICORE programme 'Circle of Light', ISF-NSFC (grant no. 2520/17) and projects from the Shanghai Science and Technology Commission (19JC1412200). I.A. acknowledges support as the Patricia Elman Bildner Professorial Chair, and acknowledges the hospitality extended to him by the UBC Department of Physics & Astronomy during a sabbatical stay. This research was made possible, in part, by the historic generosity of the Harold Perlman Family.

## Author information

J.W., I.A., Y.P., Y.S., J.Q. and I.T. conceived the idea and initiated the study. J.Q., P.L., K.L., W.Z. and F.S. designed and carried out the experiments. I.T. and J.Q. performed the simulations. J.Q., I.T., K.L., J.W., I.A. and Y.P. contributed to the data analysis and the writing of the manuscript. J.W., I.A. and Y.P. supervised and guided the work.

Correspondence to Yehiam Prior, Ilya Sh. Averbukh or Jian Wu.

## Ethics declarations

### Competing interests

The authors declare no competing interests.
Peer review information: Nature Physics thanks Stefanie Gräfe and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Extended data

### Extended Data Fig. 1 Magnified parts of Figs. 3d and 4b

a, Yield without the kick pulse. b, Yield with the kick pulse. Both curves represent the yield of ion fragments with kinetic energy release (KER) in the range $0.7\,\mathrm{eV} \le \mathrm{KER} \le 1.6\,\mathrm{eV}$.

## Supplementary information

### Supplementary Video 1

Gradual build-up (from single-molecule events) of the kinetic energy release (KER) distribution of molecular fragments as a function of the probe delay following excitation by a pump pulse. By the end of the movie there is a total of 2.0 million events.
## UTS Open '15 #2 - Secret Code

Points: 10 (partial)
Time limit: 0.6s
Memory limit: 64M
Allowed languages: Ada, Assembly, Awk, Brain****, C, C#, C++, COBOL, CommonLisp, D, Dart, F#, Forth, Fortran, Go, Groovy, Haskell, Intercal, Java, JS, Kotlin, Lisp, Lua, Nim, ObjC, OCaml, Octave, Pascal, Perl, PHP, Pike, Prolog, Python, Racket, Ruby, Rust, Scala, Scheme, Sed, Swift, TCL, Text, Turing, VB, Zig

Ms. Evans's database stores a number of words consisting of the first N English letters. To prevent technicians from seeing sensitive information, the words are encrypted in a very simple way: each English letter is mapped to exactly one English letter, such that no two letters map to the same letter. A letter can map to itself. To encrypt a word, each letter of the word is replaced with the letter it maps to. For example, with N = 2, a mapping that sends a to b and b to a is valid (so is the identity mapping), while a mapping that sends both a and b to the same letter is not.

One of the hard drives failed yesterday, and some information about the mapping has been lost. Specifically, for each letter it is known only that it mapped to one of two given letters (both among the first N English letters).

Ms. Evans has a list of Q questions. Each question asks: given what we know about the mapping, is it possible that the string s could map to the string t? Here s and t are strings of equal length composed of the first N lower-case English characters, and no string exceeds the length limit given in the constraints. It is guaranteed that the input corresponds to at least one valid mapping.

#### Input Format

The first line contains N. Each of the next N lines contains the two letters to which the corresponding letter may map. The next line contains Q. Each of the next Q lines contains s and t.

#### Output Format

For each question, output a single line containing the answer: either YES or NO.

#### Sample Input 1

2
a b
a b
4
aa bb
aa ab
ba aa
ab ba

#### Sample Output 1

YES
NO
NO
YES

#### Sample Input 2

4
b d
a c
a b
c b
3
a b
b b
abcd dabc

#### Sample Output 2

NO
NO
YES

• commented on Feb. 11, 2015, 10:50 p.m.: Can someone explain the second test case? Why can't a be mapped to b if a can be mapped to b and d?
• commented on Dec. 28, 2015, 10:56 a.m.: If 'a' were to be mapped to 'b', then 'c' must be mapped to 'a' and 'd' must be mapped to 'c' (by elimination). This means that 'b' must be mapped to 'd' (since every letter must be mapped to by exactly one letter), but this isn't possible since 'b' can only be mapped to 'a' or 'c'. So 'a' cannot be mapped to 'b'.
• commented on Dec. 28, 2015, 10:30 a.m.: In all the valid mappings, a maps to d.
• commented on Feb. 11, 2015, 4:50 p.m.: We have updated the statement with examples of valid and invalid mappings in an attempt to make the question clearer.
• commented on Feb. 11, 2015, 4:48 p.m.: Why can't 'a' be mapped to 'b'? In the input it says the 2 possibilities for 'a' are 'b' and 'd'.
• commented on Feb. 11, 2015, 4:50 p.m.: Amen
• commented on Feb. 11, 2015, 4:43 p.m.: I am thoroughly confused x.x
• commented on Feb. 11, 2015, 4:50 p.m.: To the problem writers: If this is your first contest, I feel you. It definitely is tough making the first one when schools like DM have been doing this for a year. Keep it up! hob
• commented on Feb. 11, 2015, 7:57 p.m.: hob, you realize that nullptr went to the IOI...
• commented on Feb. 11, 2015, 8:40 p.m.: BuMP how do you know this shizbiz
• commented on Feb. 11, 2015, 8:16 p.m.: Bruh going to IOI doesn't require you to write questions.
• commented on Feb. 11, 2015, 4:52 p.m.: Thank you very much for your kind, understanding words.
• commented on Feb. 11, 2015, 8:53 p.m. (edited): o
• commented on Feb. 11, 2015, 8:39 p.m.: Can't tell if that was sarcasm or just being a decent human being... x) (^o^ )/ Either way, no problem! hob
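The elimination argument in the comments can be checked mechanically. Below is a small brute-force sketch (mine, not an official or efficient solution; the function name `possible` and the literal inputs are assumptions): it enumerates every bijection on the first n letters that respects the recorded candidate pairs and answers each query by searching for a consistent bijection.

```python
from itertools import permutations
from string import ascii_lowercase


def possible(n, candidates, queries):
    """candidates[c] is the string of letters that letter c may map to."""
    letters = ascii_lowercase[:n]
    valid = []
    for perm in permutations(letters):
        mapping = dict(zip(letters, perm))  # a bijection on the first n letters
        if all(mapping[c] in opts for c, opts in candidates.items()):
            valid.append(mapping)
    # A query (s, t) is possible if some surviving bijection sends s to t.
    return ["YES" if any(all(m[a] == b for a, b in zip(s, t)) for m in valid)
            else "NO"
            for s, t in queries]


# Sample Input 2 from the problem page:
candidates = {'a': 'bd', 'b': 'ac', 'c': 'ab', 'd': 'cb'}
queries = [('a', 'b'), ('b', 'b'), ('abcd', 'dabc')]
print(possible(4, candidates, queries))  # ['NO', 'NO', 'YES']
```

Running it on Sample Input 2 reproduces NO, NO, YES, which matches the expected output and the comment's reasoning that a can never map to b.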
# League of Legends Summoner Analysis

This program asks the user for a summoner name and, based on that summoner's last 20 games, prints the player's average stats and judges whether they are good or not from the wins in those games (a simple grading system).

```python
import requests

from getId import idcollect
from games import GAME
from wins import win_calc

# Key for riot API
Key = '**********************'
summonerName = input('Enter summoner name:')

# Objects
ids = idcollect()
game = GAME()
wins = win_calc()

# Collecting the acc id of summoner name
accId = ids.ID_collected(summonerName, Key)

# Collecting game id lists
game_list = []
game_list = game.find_game_ids(accId, Key)

# Collecting wins list
win_list = []
win_list = game.game_data(game_list, Key, summonerName)

# Calculate whether the summoner is good or not
wins.is_dis_mane_good(win_list)
```

```python
import requests


class GAME:
    def find_game_ids(self, accId, key):
        i = 0
        GAMEID = []
        Idgame = 20
        url_match_list = ('https://na1.api.riotgames.com/lol/match/v4/matchlists/by-account/'
                          + (accId) + '?queue=420&endIndex=20&api_key=' + (key))
        response2 = requests.get(url_match_list)
        # Adding 20 games into the list
        while Idgame > 0:
            GAMEID.append('https://na1.api.riotgames.com/lol/match/v4/matches/'
                          + str(response2.json()['matches'][i]['gameId']) + '?api_key=' + (key))
            i = i + 1
            Idgame = Idgame - 1
        return GAMEID

    def game_data(self, game_list, key, sumName):
        wins = []
        deaths = []
        kills = []
        assists = []
        visions = []
        csTotal = []
        # Finding the data of said summoner in each game id
        for urls in game_list:
            response = requests.get(urls)
            Loop = 0
            index = 0
            while Loop <= 10:
                if response.json()['participantIdentities'][index]['player']['summonerName'] != sumName:
                    Loop = Loop + 1
                    index = index + 1
                elif response.json()['participantIdentities'][index]['player']['summonerName'] == sumName:
                    deaths.append(response.json()['participants'][index]['stats']['deaths'])
                    kills.append(response.json()['participants'][index]['stats']['kills'])
                    assists.append(response.json()['participants'][index]['stats']['assists'])
                    visions.append(response.json()['participants'][index]['stats']['visionScore'])
                    csTotal.append(response.json()['participants'][index]['stats']['totalMinionsKilled'])
                    wins.append(response.json()['participants'][index]['stats']['win'])
                    break
        # Finding avg of each stat
        deaths = sum(deaths) / 20
        kills = sum(kills) / 20
        assists = sum(assists) / 20
        visions = sum(visions) / 20
        csTotal = sum(csTotal) / 20
        print('The avg kills is ' + str(kills) + '\nThe avg deaths is ' + str(deaths)
              + '\nThe avg assists is ' + str(assists) + '\nThe avg visions is ' + str(visions)
              + '\nThe avg cs total is ' + str(csTotal))
        return wins
```

```python
import requests


class idcollect:
    def ID_collected(self, sumName, key):
        # COLLECTING DATA TO BE INSERTED FOR MATCHLIST DATABASE
        url = ('https://na1.api.riotgames.com/lol/summoner/v4/summoners/by-name/'
               + (sumName) + '?api_key=' + (key))
        response = requests.get(url)
        accId = (response.json()['accountId'])
        return accId
```

```python
import random


class win_calc:
    def is_dis_mane_good(self, winlist):
        winlist = sum(winlist) / 20
        if (winlist < .33):
            trash = ['DIS MANE STINKS', 'run while you can', 'I repeat, YOU ARE NOT WINNING THIS',
                     'I predict a fat L', 'Have fun trying to carry this person',
                     'He is a walking trash can', 'He needs to find a new game', 'BAD LUCK!!!']
            print(random.choice(trash))
        elif (winlist > .33 and winlist <= .5):
            notgood = ['Losing a bit', 'Not very good', 'He needs lots of help',
                       'Your back might hurt a little', 'Does not win much']
            print(random.choice(notgood))
        elif (winlist > .5 and winlist <= .65):
            ight = ['He is ight', 'He can win a lil', 'You guys have a decent chance to win',
                    'Serviceable', 'Should be a dub']
            print(random.choice(ight))
        elif (winlist > .65):
            good = ['DUB!', 'You getting carried', 'His back gonna hurt a bit',
                    'winner winner chicken dinner', 'Dude wins TOO MUCH',
                    'You aint even gotta try', 'GODLIKE']
            print(random.choice(good))
```

• Welcome to Code Review. I've replaced the title with a more fitting description of your program. Feel free to edit your post to include your concerns in the post text; the title should simply state what your program is about, not your concerns. I hope you get some nice reviews :). – Zeta Aug 12 at 5:48
• You forgot to mention that this is a revised version of codereview.stackexchange.com/questions/247554/…. Additionally, there are things you haven't done, like making your code PEP8 compliant, which were referred to in the other post. – Ismael Miguel Aug 12 at 15:07

find_game_ids is far more complicated than it needs to be. You have essentially two "counters", Idgame and i. One is being used to be placed in a string, and the other is to limit how many loops happen, but they're the same value if you think about it; just opposites. You don't need Idgame since you can just check if i < 20. You also don't need to manually manage i. range is for use-cases exactly like this:

```python
def find_game_ids(self, accId, key):
    game_id = []
    url_match_list = f"https://na1.api.riotgames.com/lol/match/v4/matchlists/by-account/{accId}?queue=420&endIndex=20&api_key={key}"
    response2 = requests.get(url_match_list)
    for i in range(20):
        game_id.append(f"https://na1.api.riotgames.com/lol/match/v4/matches/{response2.json()['matches'][i]['gameId']}?api_key={key}")
    return game_id
```

i here will be every number from 0 to 19. I would also recommend creating a variable elsewhere to hold the 20 and calling it N_GAMES or something. You seem to use that 20 in multiple spots. If you change it in one place and forget to change it somewhere else, you'll potentially have a nasty bug.

Other things I changed:

• Variable names should be lowercase, separated by underscores, according to PEP8. You have names all around this file that inconsistently use Upper_case. Use lower_case unless you're naming a class.
• Instead of adding strings together using +, I changed it to use f-strings (note the f before the quotes). That lets you put a variable directly into a string using the {variable_name} syntax.

This can be further improved though. If you're iterating to create a list like you are here, list comprehensions can sometimes be cleaner:

```python
def find_game_ids(self, accId, key):
    url_match_list = f"https://na1.api.riotgames.com/lol/match/v4/matchlists/by-account/{accId}?queue=420&endIndex=20&api_key={key}"
    response2 = requests.get(url_match_list)
    return [f"https://na1.api.riotgames.com/lol/match/v4/matches/{response2.json()['matches'][i]['gameId']}?api_key={key}"
            for i in range(20)]
```

The major readability problem in each case stems from how long that string is. You may want to break it over multiple lines, or generate it outside of the function using another function.

In game_data, you're calling response.json() repeatedly. Looking over the source of that method, it does not appear to do any caching. That means that every call to .json will reparse the data, which is a waste of CPU time. Save that into a variable once and use it as needed:

```python
def game_data(self, game_list, key, sumName):
    . . .
    for urls in game_list:
        response = requests.get(urls)
        resp_json = response.json()  # Save it to use it again later
        Loop = 0
        index = 0
        while Loop <= 10:
            if resp_json['participantIdentities'][index]['player']['summonerName'] != sumName:
                Loop = Loop + 1
                index = index + 1
            elif resp_json['participantIdentities'][index]['player']['summonerName'] == sumName:
                deaths.append(resp_json['participants'][index]['stats']['deaths'])
                kills.append(resp_json['participants'][index]['stats']['kills'])
                assists.append(resp_json['participants'][index]['stats']['assists'])
                visions.append(resp_json['participants'][index]['stats']['visionScore'])
                csTotal.append(resp_json['participants'][index]['stats']['totalMinionsKilled'])
                wins.append(resp_json['participants'][index]['stats']['win'])
                break
    . . .
```

Not only is that shorter, it also makes it easier to add in some preprocessing to the data later, and it has the potential to be much faster, because you aren't doing the same processing over and over again.

```python
# Finding avg of each stat
deaths = sum(deaths) / 20
kills = sum(kills) / 20
assists = sum(assists) / 20
visions = sum(visions) / 20
csTotal = sum(csTotal) / 20
```

Like I said, you're using 20 in multiple places. What if you want to change this number later? It's not going to be fun to go around and find every relevant 20 and update it to the new value. Have that number stored once, and use that variable:

```python
# Top of file by imports
N_GAMES = 20

. . .

# The for-loop in the updated find_game_ids
for i in range(N_GAMES):
    . . .

# At the bottom of game_data
deaths = sum(deaths) / N_GAMES
kills = sum(kills) / N_GAMES
assists = sum(assists) / N_GAMES
visions = sum(visions) / N_GAMES
csTotal = sum(csTotal) / N_GAMES
```

For the classes win_calc and idcollect, there are a few noteworthy things. First, they shouldn't be classes. A good indicator that you shouldn't be using a class is that you're never using self in any of its methods. By using a class in this case, you need to construct an empty object just to call a method on it, which you're doing here:

```python
wins = win_calc()
```

Just to call a method on it later:

```python
wins.is_dis_mane_good(win_list)
```

Just make those classes plain functions:

```python
import random


def is_dis_mane_good(winlist):
    winlist = sum(winlist) / 20
    if (winlist < .33):
        trash = ['DIS MANE STINKS', 'run while you can', 'I repeat, YOU ARE NOT WINNING THIS',
                 'I predict a fat L', 'Have fun trying to carry this person',
                 'He is a walking trash can', 'He needs to find a new game', 'BAD LUCK!!!']
        print(random.choice(trash))
    . . .
```

And then just use them as plain functions:

```python
is_dis_mane_good(win_list)
```

Second, if it were appropriate to have them as classes, the names should be in CapitalCase: WinCalc and IDCollect (or maybe IdCollect). Also, I'd rename is_dis_mane_good. Using slang in the output of the program is one thing, but giving your methods obscure names isn't doing yourself or other readers of your code any favors.

As well, in that function I'd make some more changes:

• I suggest you prefix your decimal numbers with a 0. 0.33 is much more readable than .33.
• You can use operator chaining to simplify those checks too. winlist > 0.33 and winlist <= 0.5 can become 0.33 < winlist <= 0.5. As noted in the comments though, you can actually get rid of half of each check since, for example, if winlist < 0.33 was false, then you know winlist must be greater than 0.33, so the winlist > 0.33 check is redundant.
• There's that 20 again ;). The more places you have it, the more likely you are to forget to update at least one of them. I'd use N_GAMES there instead.
• You can get rid of the duplicated print(random.choice(. . .)) calls by assigning the list to a variable after each check, then having one print at the bottom.

After those changes, I'm left with this:

```python
def competency_message(winlist):
    winlist = sum(winlist) / N_GAMES
    message_set = []
    if winlist < 0.33:  # Should be winlist <= 0.33 maybe?
        message_set = ['DIS MANE STINKS', 'run while you can', 'I repeat, YOU ARE NOT WINNING THIS',
                       'I predict a fat L', 'Have fun trying to carry this person',
                       'He is a walking trash can', 'He needs to find a new game', 'BAD LUCK!!!']
    elif winlist <= 0.5:
        message_set = ['Losing a bit', 'Not very good', 'He needs lots of help',
                       'Your back might hurt a little', 'Does not win much']
    elif winlist <= 0.65:
        message_set = ['He is ight', 'He can win a lil', 'You guys have a decent chance to win',
                       'Serviceable', 'Should be a dub']
    else:
        message_set = ['DUB!', 'You getting carried', 'His back gonna hurt a bit',
                       'winner winner chicken dinner', 'Dude wins TOO MUCH',
                       'You aint even gotta try', 'GODLIKE']
    print(random.choice(message_set))
```

• Thx all these tips are really helpful. – drakebakincake Aug 12 at 1:36
• Note that you can also remove a bit of duplication in that if/elif structure, by writing it as if winlist < 0.33: ... elif winlist <= 0.5: ... elif winlist <= 0.65: ... etc – canton7 Aug 12 at 12:18
• @canton7 Updated, thank you. – Carcigenicate Aug 12 at 13:37
• Ah, I meant e.g. elif 0.5 < winlist <= 0.65 could be elif winlist <= 0.65 – canton7 Aug 12 at 13:39
• @canton7 Whoops, sorry, just woke up. Updated and added some explanation. – Carcigenicate Aug 12 at 13:44
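To tie the suggestions together, here is a sketch of what the top-level script might look like after applying them (plain functions instead of single-method classes, a shared N_GAMES constant, f-strings). The function names follow the refactored snippets above; the module layout and the helper names get_account_id and find_game_urls are my own assumptions, not part of the original post, and the URLs are simply the ones the question already uses.

```python
# refactored_main.py -- an assumed layout combining the reviewer's suggestions
import requests

N_GAMES = 20  # single place to change how many games are analysed
API_BASE = "https://na1.api.riotgames.com"


def get_account_id(summoner_name, key):
    # Plain function replacing the single-method idcollect class
    url = f"{API_BASE}/lol/summoner/v4/summoners/by-name/{summoner_name}?api_key={key}"
    return requests.get(url).json()["accountId"]


def find_game_urls(account_id, key):
    # List comprehension over the match list, as suggested in the review
    match_list = requests.get(
        f"{API_BASE}/lol/match/v4/matchlists/by-account/{account_id}"
        f"?queue=420&endIndex={N_GAMES}&api_key={key}"
    ).json()["matches"]
    return [f"{API_BASE}/lol/match/v4/matches/{m['gameId']}?api_key={key}"
            for m in match_list[:N_GAMES]]


if __name__ == "__main__":
    api_key = input("Enter Riot API key: ")
    summoner = input("Enter summoner name: ")
    game_urls = find_game_urls(get_account_id(summoner, api_key), api_key)
    # game_data(...) and competency_message(...) from the review would be
    # called here in the same way, as plain functions.
    print(f"Collected {len(game_urls)} match URLs for {summoner}.")
```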
# PE Civil Depth Exam (Evening Session)

This practice exam contains 80 geotechnical questions, with answers, each set drawn from Geotechnical & Soil Foundation Engineering. A look for a metal, since metals generally … FE. This practice exam provides you with 110 questions and detailed solutions that mimic the look and feel of the real, computer-based FE exam. The 16-hour PE Structural exam tests for a minimum level of competency in structural engineering.

NCEES FE Reference Handbook 10.1 (PDF); FE Chemical Practice Exam (PDF); FE Civil Practice Exam (PDF); NCEES-FE Civil Practice Exam. Preview this book now! This book contains 100 questions and solutions to familiarize you with the FE Mechanical exam format and content. Licensure in the U.S. is regulated by the states.

NEW! Practice Tests: The Key to Passing the FE Exam. You should study about 10 to 15 hours a week, but if you attempt to do this all in one day, you are going to feel the burnout very quickly. FE Practice Test with Complete Solutions.

FE Exam Review: Civil Engineering Hydraulics, Hydrology, and Fluid Mechanics. Cary Troy, Lyles School of Civil Engineering, February 11, 2015. FE exam review, Fluid Mechanics/Dynamics, Noriaki Ohara, Civil and Architectural Engineering. Out of 110 problems: Chemical, 8-12 problems; Civil, 4-6 (+ 8-12) problems; Environmental, 9-14 problems; Mechanical, 9-14 problems; Other, 8-12 (+ 4-6) problems. Acknowledgement: this material was mainly based on Olia (2008). Give yourself about 30 seconds to a minute on each.

As I mentioned in the FE exam review class, several problems in the "FE Exam Review" section of the Beer and Johnston Statics/Dynamics website have errors in their statements, solutions, and multiple-choice answers.

Built-in progress tracking with milestones. A review of the books for the FE exam, mechanical discipline: the pros and cons of each book and a rating for each (Ahmed El-Hawari). UCO: Department of Engineering and Physics. FE Mechanical Practice Problems offers comprehensive practice for the NCEES FE Mechanical exam. Practice exams are typically updated only when the exam specifications are revised (every 5 to 7 years). Also be sure to check out the other resources: the Best Calculator for the FE Exam, the collection of FE Practice Exams, and all things related to the FE Exam.

FE Reference Handbook: reviewing the handbook before exam day will help you become familiar with the charts, formulas, tables, and other reference information provided. Fergus, Materials Engineering, Office: 284 Wilmore; tensile test, endurance test, impact test. This is an instant download. 1) A table has …

Free FE Practice Test: PrepFE™ free FE Electrical and Computer example practice problems. Fundamentals of Engineering - NCEES FE Civil Practice Exam. Passing the FE exams just got even easier with JobTestPrep's advanced PrepPack. The flow and difficulty of the questions in this book are extremely similar to the questions given on the real test; if anything, the FE was easier than this practice test!
Master the TI-36X Pro calculator and using the PDF of the reference manual and you will blast through the exam!!! The FE Civil Exam Review Guide eBook is the ultimate resource for engineers preparing to take the exam and earn their licensure. You can complete the entire registration process on-line. pdf Read Or Download PPI FE Chemical Practice Problems, 1stEdition (Paperback) – Comprehensive Practice for the NCEESFE Chemical Exam Full BookFORMAT FILE[ebook, pdf, epub, mobi pocket, audiobook, txt, doc, ppt, jpeg, chm, xml, azw, pdb, kf8, prc, tpz]LINK DOWNLOAD / READ ONLINE, CLICK NEXT PAGE FE Exam Review for Structural Analysis Prof. B --57°28 ' 275 m What Past Examination Questions and Answers. I include sketches in my solutions to allow you to identify the problems to which my solutions apply without necessarily having to refer to Part 1 of the above “FE_Exam_Review_rev3. Here's a collection of the FE exam resources that are available, some are free, some are from commercial providers, universities and engineering societies. 24/7 email support with … fe exam, fe exam book, fe civil, fe electrical and computer practice exam pdf, civil engineering fe exam preparation pdf fe mechanical practice exam pdf, fe This practice book really helped me in passing the FE exam. FE Exam Review – FE Exam – Probability and Statistics – Normal Distribution. Preview this book now! This book contains 100 questions and solutions to familiarize you with the FE Chemical exam format and content. Date: 04/12/10 2 Problems. FE Exam Practice. Our NCEES-FE Question Bank 2022 includes PDF, VCE Practice Tests and cheat sheet that will help you get 100% marks in real exam. com; Quick Links. I was going to order one today thinking that there would be a PDF available online, but turns out that its sent as a View FE Mechanical Practice Exam (2020). FE EXAM 84 downloads 859 Views 2MB Size. FE Civil Practice Problems, LindeburgISBN:978-1-59126-440-8. ” My own solutions, which you will find below, follow the problem numbering scheme I established above. [PDF] Fe Chemical Practice Problems Download eBook for Free 1 day ago ebook4scaricare. FE Exam Review, Online Problems and Solutions. To find information specific to your state's policies, you will need to contact  2021/08/17 Registering for the FE Exam. You can adjust the width and height parameters according to your needs. Each question is worth 2 points. These videos can help students to “visualize” taking the exam and determine a time management strategy. E. 2mm} C \rightarrow \space \rule{0. For more exam information click below. The last update was July 2020. This is a must for any student studying for the FE exam. 6 5r Ay ott aw lf F= Seca tee fe) = ors ‘To determine the center of pressure, use these ‘equations: ® 1497. This guide covers all of the NCEES FE topics including Mathematics, Probability and Statistics, Chemistry, Instrumentation and Controls, Engineering Ethics and Societal Impacts, Safety, Health, and Environment, Engineering Economics, Statics, Dynamics, Strength of Materials, Materials, … Olin Library Resources 0 NCEES FE Supplied Reference Handbook (free pdffrom ncees. It will be presented in modules corresponding to the FE topics, particularly those in Civil and Mechanical Engineering. In our free FE practice exam, we have tried to include the above categories in our questions. 
The author of this exam has made all reasonable efforts to provide current and accurate information for the readers of this practice exam… Use these FE Practice Questions to Prepare for the FE Exam A practice exam, specific for your discipline, of multiple choice questions so you can study with the end in mind ( AND minimize your reading ). org is our home on the web. Students can access the non-printable eBook from their Study Hub (internet access is required). Register for the FE exam. PrepFE has hundreds of problems to help you get ready for the FE exam. exam consists of 110 questions over these topic How to Study and Take Exams – PDF discussing study techniques  Number of questions: What is the sample variance of the following numbers? PDF formula and CDF table for Binomial distribution [a discrete rv]. 45 mL C) 2. pdf. • Take it as many times as you want ($225 each time!) • FE exam will be offered year-round as computer-based exams at Pearson VUE testing centers (there is one in Princeton). Download your free copy to find out more about registering for exams and what to expect on exam day. The preview will let you see how the practice exam questions are structured and show you examples with detailed step-by-step solutions. The 6-hour chemical engineering F. However, you cannot request your actual FE Certificate until you have received your diploma and forwarded NCEES a copy. Results of the FE Exam will typically be available to you 7-10 days after you take the exam. Op · 4 yr. Many studies have shown that periodically practicing after studying a specific subject is an effective exam preparation technique. We help creative entrepreneurs build their digital hustle. The first step to becoming a licensed engineer is to pass the Fundamentals of Engineering Sum up the individual atomic charges and set them equal 00. FE Exam Fundamentals of Engineering Exam Review ASCE review sessions. g. pdf. In this virtual classroom, students can reach their highest potential through a uniquely individualized learning program. 5/3/2021 PE Civil Depth Exam (Evening Session): This practice exam contains 80- Geotechnical questions, and answers each set from all Geotechnical & Soil Foundation Engineering: FE Chemical Practice Exam. We’ll also share the solutions for all the problems we’ll mention in this article. The purpose of this course is to review the material covered in the Fundamentals of Engineering (FE) exam to enable the student to pass it. Automobilismo é aqui! fe mechanical practice exam pdf with solutions. Michael R. What score do you need to pass the FE Environmental exam? The passing score varies, but we recommend aiming for at least 70% on exam day. Gain access to our user-friendly Study Hub which allows students to download and keep notes, send questions to the instructor, and more! Learn from multiple instructors, as our classes feature http:///www. Email. V. 5 hour, closed-book, Three textbooks and several free resources are recommended as study be licensed in order to practice. You will be given the NCEES FE Exam Reference Handbook, which contains all the necessary equations, tables, and graphs that you will need to solve each problem. Second, you have 2. The FE Environmental exam is the first step in becoming a Professional environmental engineer. 5cm}{0. 
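One fragment in this collection asks "What is the sample variance of the following numbers?", but the numbers themselves did not survive extraction. As a stand-in, here is a generic worked example with assumed data (the values 2, 4, 6, 8 are mine, not from the original question):

$$\bar{x} = \frac{2+4+6+8}{4} = 5, \qquad s^2 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{n-1} = \frac{(-3)^2+(-1)^2+1^2+3^2}{4-1} = \frac{20}{3} \approx 6.67 .$$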
handbook electricity and magnetism slides; pdf with one slide per page; pdf with four slides per page Fundamentals of Engineering Exam Review Other Disciplines FE Specifications Topic: Engineering Probability and Statistics 6–9 FE exam problems Exam Problem Numbers A. 1/4/2022 Fundamentals of Engineering (FE) General Exam Review Program March 3rd ~ April 12 th, 2010. This is the official practice exam developed by NCEES. The book contains 220 practice problems for the FE-Civil exam. Please Report any type of abuse (spam, illegal acts, harassment, copyright violation, adult content, warez, etc. 018. Students have the option to purchase the eBook separately or as a bundle with the hardcover book. Test Duration: 30 Minutes. EIT/FE Exam EE Review Prof. The flow rate is 400 gallons per minute, and all piping is 4-in, schedule 40, steel pipe (ID = 4 These practice exams contain questions that have been used on past exams and questions written just for study materials to give you extra practice. This practice exam provides you with 110 questions and detailed solutions that mimic the exam same look and feel of the real, computer-based FE exam. Complete with answers and explanations so you can learn by doing the exam. These questions are unique to the NCEES Computer Based Training (CBT) exam and solvable in two to three minutes. Date: 04/12/10 9 Problems. Date: 04/12/10 4 Problems. Use our expert team instructions to focus on the contents and questions that are included in the FE exam to pass your exam in your first try. 19. The exam has 110 multiple-choice questions. Related Papers. … 14 thg 4, 2022 You can take the test online or download the PDF questions, with full answers and analysis included – all for free (thanks to IFT)! After Date: 04/12/10 27 Problems Relating Kc and Kp The ideal gas equation PV = nRT can be manipulated into a form that reflects a gas concentration term P = (n/V) RT where n/V = … Past Examination Questions and Answers of Fundamental Information Technology Engineer Fundamental Information Technology Engineers Examination(FE) NCEES-FE MECHANICAL Practice Exam. This guide describes the rules for each exam format. You are expected to know: 1. To ensure that all exam materials are clarified for you, we provide you with a comprehensive set of drills, study guides, and video tutorials, along with full-length and timed simulations . The FE and FS are designed for recent graduates and students who are close to finishing an undergraduate engineering or surveying degree. Practice exams are typically updated only when the exam specifications are revised (every 5 to 7 years). , Lindeburg (2016) ISBN: 978-159126-446-0 FE Chemical Practice Exam, NCEES ISBN: 978-1-93261-380-3 FE-Industrial and Systems (ESD does not offer this class currently) Industrial Discipline-Specific Review for the FE/EIT Exam, Lindeburg (2nd FE Mechanics of Materials Review r T Tr J τ= τ= shear stress, force/length^2 T = applied torque, force·length r = distance from center to point of interest in cross-section (maximum is the total radius dimension) J = polar moment of inertia (see table at end of STATICS section in FE review manual), length^4 TL JG φ= φ= angle of twist, radians L = length of shaft 7/8/2020 Why take the FE Exam? LAPELS recently changed rule §1509 allowing Engineer Interns to take the PE exam any time subsequent to becoming certified as an EI with LAPELS. Geo/Trig Sample Problems: 1,5 Geo/Trig Exam … Buy the practice exam. 
F4 2017 Exam Section Available for 7-day checkout FE Reference Handbook by National Council of Examiners for Engineering and Surveying Staff 4/19/2018 11/12/2017 7/8/2020 43,334 recent views. With a team of extremely dedicated and quality lecturers, ncees pe civil practice exam will not only be a place to share knowledge but also to help students get inspired to explore and discover many creative ideas from themselves. " This resource center is a good place to reference during your journey to get your license. About Us; Affirm This practice book really helped me in passing the FE exam. However, the sample test does not provide comprehensive coverage of all topics on the test. is now available. pdf. g. Buy More Practice Tests (for immediate PDF download). Select from timed exams (shown), category specific, and regular practice exams. The last update was July 2020. Complete with answers and explanations so you can learn by doing the exam. Title: FE electrical and computer practice exam Author: NCEES Created Date: 6/11/2017 9:49:19 AM NCEES-FE ELECTRICAL PRACTICE EXAM. Your guide to passing the FE & PE exams and furthering yourself as a professional engineer Engineering Pro Guides provides mechanical and electrical PE & FE exam resources, design tools, software customization, consulting services, and much more. If your FE exam prep is done, then it's time to practice your skills with an exam. FE-Mechanical. Preview this book now! This book contains 100 questions and solutions to familiarize you with the FE Other Disciplines exam format and content. Download VCE Exams, VCE to PDF. Per its Manual of Policy and Position Statements, NCEES considers the 16-hour These practice exams contain questions that have been used on past exams and questions written just for study materials to give you extra practice. The exam has transitioned to computer-based testing (CBT) and will be held year-round. • Sample Questions: See Online Booklet • FE Reference Handbook – Can be purchased or is Free as a Download • Create a MyNCEES Account and download your free copy – A searchable, electronic copy will be NCEES-FE Dumps, NCEES-FE Braindumps, NCEES-FE Real Exam Questions, NCEES-FE Practice Test Created Date: 5/21/2019 12:41:58 AM A Brief Review of FE Exam Subjects In Surveying and Transportation, Afternoon Session in Civil Engineering, Using Sample Problems and Supplemental Material By Robert S. 1) Balance this chemical reaction by choosing the right variables for the blanks$ \rule{0. 2013 Structural Analysis is part of the afternoon exam. millicuries, how much of the solution should be injected into the patient? A) 45 mL B) . FE Practice exam: Thank you for taking our FE Sample Questions. Check for Pearson VUE testing locations. Danoo515. 1. CT values (mg/L•min) for inactivation of Giardia cysts by free chlorine at 10°C. fe civil practice exam pdf 27. Changes are unlikely before 2025. You can adjust the width and height parameters according to your needs. Ch13-Radiation from Surfaces. 1 · NCEES FE Civil Practice Exam · NCEES FE Reference Handbook 10. 1 10/16/07 Rev. FE Civil Practice Exam (NCEES Website, 100 problems): In my opinion, this is probably the best tool to gauge how prepared you are for the exam. It will give you a honest chance to evaluate the practice exams and try it before you buy it. What we believe. is regulated by the states. , water, wastewater, air) – Environmental regulations – Water treatment and wastewater treatment (e. Contact. FE Reference Handbook 10. 
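One chemistry fragment on this page ("Relating Kc and Kp: the ideal gas equation PV = nRT can be manipulated into a form that reflects a gas concentration term, P = (n/V)RT, where n/V = …") breaks off mid-derivation. The standard completion, stated here for convenience, is:

$$\frac{n}{V} = C \;\;\Rightarrow\;\; P = CRT \;\;\Rightarrow\;\; K_p = K_c\,(RT)^{\Delta n_{\mathrm{gas}}},$$

where $\Delta n_{\mathrm{gas}}$ is the change in moles of gas between products and reactants.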
Engrg; LaGrega Haz Waste; Wark & Warner Air. You can download the paper by clicking the button above. pdf. I already worked lots of problems with the manual, I was trying to get more practice by working a practice exam… 1 FE: Electric Circuits © C. The book was… Read More ». Login to purchase. Ahmed 0 FE/EIT Sample Examinations, Michael R. No more waiting for a book to come in the mail. ISBN: 978-1-947801-03-5. After passing the FE or FS exam, a person must apply to the appropriate State Board to become: An Engineering Intern A Surveying Intern The FE and FS exams are valid in all states and FE Exam Question of the Day! The book. • Take it as many times as you want ($225 each time!) • FE exam will be offered year-round as computer-based exams at Pearson VUE testing centers (there is one in Princeton). 2021/05/03 Use the diagnostic exams in your FE review manual to determine how much you should study in the various knowledge areas. Examinees must bring their own reference materials. End of preview. Purchase an NCEES FE computer-based practice exam, which provides the most realistic PDF and practice using it to help you navigate easily on exam day. Date: 04/12/10 10 Problems. Richard Spencer Basic Electricity Outline • Charge, Force, Electric Field, Work and Energy • Work, Energy and Voltage • The Atom • Current, … 8 Step Plan to Pass the FE • Step 6 – Take a second practice exam a week before FE – Identify areas that need a little more work • Step 7 – Work only on areas identified in … We believe in a try before you buy policy. Why Take the FE/EIT Exam Now? • “The best Home · FE Mechanical Practice Problems for the ME Exam Fundametal of Engineering - 3rd Edition - Michael Lindelburg. Richard Spencer Basic Electricity Outline • Charge, Force, Electric Field, Work and Energy • Work, Energy and Voltage • The Atom • Current, Resistance and Ohm’s Law • Power and Energy • Conductors, Resistors and Insulators • Schematics & models • … Quickies: These are questions in the FE Review Manual that can be answered fairly quickly using a picture, your calculator, the reference manual or very little paper-and-pencil calculation. Date: 04 Exam Information • In NJ, can take it Junior year. 250+ Practice Problems. If you aren't sure of an answer to a question, you can bookmark it and return later, but don't leave it blank in FE Exam Review: Environmental • Civil Engineering afternoon: 12% EVEN – Water quality (ground and surface) – Air quality – Solid/hazardous waste – Sanitary sewer system loads – Basic tests (e. Course Title MECH ENG 208. 1 as a reference during the exam. Technical Study Guide &. Paperback. NCEES FE Reference Handbook 10. net . Please Report any type of abuse (spam, illegal acts, harassment, copyright violation, adult content, warez, etc. It can be difficult to concentrate on test questions for several hours at a time, although it is important to be able to sit down for the allotted 6 hours to practice taking a full exam. 2 Overview • 110 multiple choice questions total • 5 hrs 20 min to answer questions • slightly less than 3 minutes per question . All questions are solveable using the NCEES FE 1/28/2022 File Type PDF Civil Engineering Fe Exam Sample Questions Highly regarded for its clarity and depth of coverage, the bestselling Principles of Highway Engineering and Traffic Analysis provides a comprehensive introduction to the highway-related … FE Other Disciplines Practice Exam. org/PracticeExams). Date: 04/12/10 3 Problems. 
Preparation for the Fundamentals of engineering exam Modeled after reference texts, NCEES practice questions and using the handbook timed and scored using NCEES known practices, as well as going over key concepts in Electrical engineering, Mathematics, Engineering Economy etc. FE practice exam. ). ISBN: 978-1-932613-99-5. Page 3 EXAM SPECIFICATION START TEST SOLUTIONS SCORE SHEET Page 6 Page 13 Page 112 Page FE Exam | FE Information. A PDF version of the FE Reference Handbook similar to the one you will use on exam day is also available there. Solutions. Hitting the Refresh or Back button on your browser will interrupt the test. E. The preview will let you see how the practice exam questions are structured and show you examples with detailed step-by-step solutions. We will walk you step by step th ncees pe civil practice exam provides a comprehensive and comprehensive pathway for students to see progress after the end of each module. You will be given 2. It develops, administers, and scores the examinations used for engineering and surveying licensure in the United States.$125. Go to NCEES. pdf The Fundamentals of Engineering (FE) exam is generally your first step in the process to becoming a professional licensed engineer (P. Download PDF. FE Exam Review, Online Problems and Solutions. Gross EE1-1FE: Electric Circuits © C. FE Reference Handbook 10. This material is available immediately for those taking the Ondemand review course and one week prior to the first class for those taking the Live Online review course. I include sketches in my solutions to allow you to identify the problems to which my solutions apply without necessarily having to refer to Part 1 of the above “FE_Exam_Review_rev3. The the NCEES FE Exam Reference Handbook, which contains all the necessary equations, tables, and graphs that you will need to solve each problem. EIT Review - Spring 2009 Sample Questions Scott Strong March 23, 2009 The practice exams offered, in addition to the official NCESS practice, were vital in my experience. $29. ISBN: 978-1-947801-04-2. Please Report any type of abuse (spam, illegal acts, harassment, copyright violation, adult content, warez, etc. FE Environmental Exam Prep Course Structure. 96 If … 1/28/2022 11/18/2014 7/1/2020 Reviewing the appropriate supplied-reference handbook before exam day helps you become familiar with the charts formulas, tables, and other reference information provided. The questions are complete with answers and thorough solutions. 3. Free practice exams are available at ncees. Loading Preview. The NCEES FE Exam Reference Handbook will be provided as a searchable electronic pdf during the test. Page 3. Tsai, Ph. Alternatively send us an eMail with the URL of the document to [email protected] IC&RC has developed practice exams to assist candidates with their exam preparation.$49 Add to Cart . pdf from GENERAL DO 1000 at University of South Florida. The exam is offered in seven disciplines. This exam package contains 500+ questions to prepare you for the NCEES FE Mechanical Exam. Example 30 Inthepipesystemdepictedbelow,thedischargeinpipeABis100 m3/sec. William I. Product Information:. FE Exam Review for Structural Analysis Prof. • You will register and schedule your appointment through your My NCEES account on the NCEES website. All exam practices, study files, sample questions, books, student shares and more Exam prep materials · Scoring. The manual will be a searchable PDF. Second, the key concepts and FE Review Materials Properties Jeffrey W. 
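Several of the electric-circuits outlines scattered through this page list charge, Ohm's law, and power and energy as core FE topics. As a quick refresher (a generic example, not one of the page's own problems):

$$V = IR, \qquad P = VI = I^2R = \frac{V^2}{R}.$$

For instance, a 120 V source across a 60 Ω resistor drives I = 120/60 = 2 A and dissipates P = VI = 240 W.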
The flow and difficulty of the questions in this book are extremely similar to the questions given on the real test---- if anything the FE was easier than this practice test! Master the TI-36X Pro calculator and using the PDF of the reference manual and you will blast through the exam!!! UPDATED FOR THE JULY 2020 FE-CIVIL EXAM SPECIFICATION (please send questions to girumsol_at_gmail. A 6-hour appointment time includes a tutorial, the exam, and a break. If you are interested in checking out any materials, visit our front office in DLC 173 or email Christina Oerter. 1/19/2022 A random sample is selected from a normal distribution with variance 46. Exam Simulator Attempts: 0 out of 99. The surveyor then walks a long the bank for a distance of 275 m to point A. FE review manual: rapid preparation for the general fundamentals of engineering Practice Exams, Braindumps, Certification Exam Guides, Sample Questions,  20 thg 9, 2018 Hi everyone, I am going to be taking my FE Mechanical Exam this Saturday at 8 AM. . He is employed by a small engineering firm, and works with Jim, a licensed professional engineer and owner of the company. 2020/07/22 Can anyone share pdf version of FE CIVIL practice or review manual pdf by Michael Lindeburg? Read our review and download FE Civil Practice Exam PDF at the end. Learn more about online FE exam prep→ Practice problems with detailed explanations. The NCEES FE Reference Handbook is the only resource that  Now every state regulates the practice of engineering to ensure public safety by granting only Professional Engineers (PEs) the authority to sign and seal  NCEES FE Chemical Practice Exam · NCEES FE Reference Handbook 10. Some of the mistakes in the solutions are quite serious because they involve the application of wrong physical principles. Measures of central tendencies and dispersions (e. The FE exam is a computer-based test. 🟡 FE Civil Course https://www. com 012. E. FE exam but is not yet licensed. 19. 2 Overview • 110 multiple choice questions total • 5 hrs 20 min to answer questions FE_Exam… Watch videos showing step-by-step solutions to problems you missed or found challenging, and get even more practice with unreleased questions from real SATs  We believe in a try before you buy policy. FE review manual: rapid preparation for the general fundamentals of engineering Requirements Engineering Fundamentals: A Study Guide for the Certified  FE COMPUTER-BASED PRACTICE EXAMS Evaluate your readiness for the FE exam by testing your knowledge with the most realistic computer-based simulation available. This sample test is provided to give you an idea of the format of questions that might be asked on the exam. The rule became effective July 20, 2014 Note, there is a risk associated with “early taking” which will be clearly shown on the applications as some states have said that fe civil practice; buku pendamping belajar ilmu pengetahuan alam untuk smp mtskelas vii semester 1; the connecticut tercentenary 1635 1935; theaster gates a clay sermon; guide to the study of mathematics for the matriculation examination in london university; an elegiac; new jersey dmv permit practice test manual; sawyer s internal auditing FE Exam Practice – Prob and Stats – Linear Regression (Time Saving Calculator Method!!) FE Exam Review – Probability and Statistics – Binomial Distribution. 0. To find the width of a river surveyor sets up a transit at point C on one river bank and sights directly across to point B on the other bank. 
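One statistics fragment on this page ("A random sample is selected from a normal distribution with variance 46. If the width of a 95% confidence interval about the sample mean is 4 …") is cut off. Assuming it asks, as the usual version of this question does, for the required sample size, the standard calculation is:

$$W = 2\,z_{0.025}\,\frac{\sigma}{\sqrt{n}} \;\;\Rightarrow\;\; n = \left(\frac{2\,z_{0.025}\,\sigma}{W}\right)^2 = \left(\frac{2(1.96)\sqrt{46}}{4}\right)^2 \approx 44.2 \;\;\Rightarrow\;\; n = 45,$$

rounding up to the next whole observation.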
There are several issues that make this key very difficult. Practice problems from: Metcalf & Eddy Wastewater. A 6-hour appointment time includes a tutorial, the exam, and a break. Half the screen will show the exam questions, and half will display the reference manual. 1/25/2022 • Practice Exams ($49. Download. If the width of a 95% confidence interval about the sample mean is 4. Roll the dice and if you pass you pass. With a team of extremely dedicated and quality lecturers, fe chemical practice exam pdf … Purchase an NCEES FE computer-based practice exam, which provides the most realistic exam-day simulation available (ncees. pdf on DocDroid. It will allow you to gain competency of necessary knowledge areas and develop proficiency in solving a multitude of problems. Some of the mistakes in the solutions are quite serious because they involve the application of wrong physical principles. Taking the NCEES FE exam is the first step toward becoming a licensed engineer. PPE Headquarters FE CIVIL Practice Problems • 110 exam questions • Complete with answers and explanations Chapter 3/4/2021 FE Civil Practice Problems, LindeburgISBN:978-1-59126-440-8. See preview. Order a trial exam for the FE exam you plan to take (cost is about$30) If desired, an FE Environmental Online Practice Exam can be purchased. Complete FE exam prep for purchase ↓. The cost of the FE Exam is $175. Reviewing for Professional Engineering Exams 10/16/07 Rev. HOW DO I REGISTER? The National Professional Practice Examination (NPPE) is a 2. These questions are unique to the NCEES Computer Based Training (CBT) exam and solvable in two to three minutes. 1/25/2022 1/15/2022 3/28/2019 Work additional sample problems Apply solution strategies PASS THE FE EXAM!! E MMETT A. Students may use the FE Reference Handbook version 10. I have been studying a lot and feel prepared but I wanted to take a practice exam. Having intuition to the subject matter. FE practice exam with free PDF of step by step solutions. Updated for July 2020 Exam will be provided as a searchable electronic pdf during the test. Download the PDF and practice using it to help you navigate easily on exam … FE COMPUTER-BASED PRACTICE EXAMS Evaluate your readiness for the FE exam by testing your knowledge with the most realistic computer-based simulation available. With enough experience after passing the EIT, you will become eligible for the PE Exam Engineering NCEES develops, scores, and administers exams used for engineering and survey licenses throughout you. Fundamentals of Engineering Exam - Free download as Powerpoint Presentation (. I have been studying a lot and feel prepared but I wanted To help you ace the FE exam on the first attempt, we have gathered 10 FE exam problems that every test taker should be familiar with. Give yourself about 30 seconds to a minute on each. School Missouri University of Science & Technology. ” My own solutions, which you will find below, follow the problem 26 thg 3, 2022 Reading, Use of English, Listening Practice Tests (online & pdf) Collection of CAE Exam practice tests found on the Internet here to help May 10th, 2018 - PPI Helps You Pass The FE Exam PE Exam And SE Exams PPI S Review Courses Are Designed To Help You Pass Your Engineering Exam' ' The Ultimate Guide to Passing the PE Exam in Less May 10th, 2018 - Get this FREE 39 page e book checklists and schedule to PASS the PE exam … Statistics Topics on FE Number of questions: Chemical: 4–6 Civil: 2‐3? 
(part of math) Electrical: 4–6 Environmental: 4–6 Industrial: 10–15 Mechanical: 4–6 Other: 6–9 … Download a sample SSAT Middle Level and Upper Level exam into a printable format below. pdf), Text File (. The Fundamentals of Engineering (FE) exam is generally the first step in the process to becoming a professional licensed engineer (P. 3. Download and review corrections to your preparation materials. Each timed practice exam consists of 50 questions and functions much the same as the actual exam. 95) – Consists of questions from past exams and provides extra practice. You can adjust the width and height parameters according to your needs. pdf.$32 Add to Cart . The Fundamentals of Engineering (FE) exam is a computer-based test that can be taken year round. Understanding and having the knowledge of the subject matter. 1 1 Terms and Organizations • EIT – Engineer in Training • FE – Fundamentals of Engineering • PE – Professional Engineer • P&P – Principles and Practice … The NCEES FE Reference Handbook is the only reference material that can be used during the exam. Use the Previous and Next buttons to move from one question to another. • Take it as many times as you want ($225 each time!) • FE exam will be offered year-round as computer-based exams at … I passed the Fe exam without solving practice exam. If you liked what you saw in the mini practice exams above, we encourage you to check out PrepFE. ). net . Login to purchase. Report. 3 In the FE exam you will not be able to bring in any outside reference material. You will take the FE exam on a 24 inch split-screen computer monitor. Autar Kaw (Statics, Dynamics, Computers, Mathematics) 2020/12/08 This PDF version will be very similar, if not identical, to the printed It is not an FE Review Book or Study Guide with sample problems. This was built to have the same look and feel of … 8 Step Plan to Pass the FE • Step 6 – Take a second practice exam a week before FE – Identify areas that need a little more work • Step 7 – Work only on areas identified in Step 6 • Step 8 – Stop studying an hour before going to sleep the night before to relax – Get a full 8 hours of sleep the NCEES FE Exam Reference Handbook, which contains all the necessary equations, tables, and graphs that you will need to solve each problem. txt) or read online for free. 5 hours to … Based on the Texas Engineering Practice Act and Board Rules There are two ways to take this exam: 1. 6/7/2021 The FE Industrial and Systems exam includes six hours of computer-based industrial and systems exam questions. Autar Kaw (Statics, Dynamics, Computers, Mathematics) Glen Besterfield (Ethics, Mechanics of Materials) Unlike other FE review programs that attempt to re-engineer old review material to match the new exam format and topics, the Capstone online FE CBT Review Curriculum was designed and developed specifically for the FE CBT exam. 1 · NCEES FE Electrical and The 6-hour chemical engineering F.$29. Check out FE Prep Materials. EngineerInTrainingExam. E. 1 10/16/07 Rev. Fundamentals of Engineering (FE) Electrical and Computer - Practice Exam # 2: Full length practice exam containing 110 solved problems based on NCEES® FE CBT Specification Version 9. Pages 9. Log Inactivations. Refer to the appropriate section for your exam. As I mentioned in the FE exam review class, several problems in the “FE Exam Review” section, of the Beer and Johnston, Statics/Dynamics Website, have errors in their statements, solutions, and multiple-choice answers. 
New FE Exam July 2020 / In this video, I talk about the changes that are going to take place on July 1st, 2020. and completing this practice exam does not guarantee that ones results will exactly mirror our own results. We've selected 10 diverse practice problems from our question bank that you can use to review for the Electrical and Computer engineering FE exam and give you an idea about some of the content we provide. It is designed for recent graduates and students who are close to finishing an undergraduate engineering degree from an EAC/ABET-accredited program. Geo/Trig Sample Problems: 1,5 Geo/Trig Exam Problems: 1,2,3,6, 8-12,14,16,17 Algebra Sample Problems: 1,2,5 NCEES-FE Civil PRACTICE EXAM . FE Exam Sample Exams. 1 (PDF) FE Chemical Practice Exam (PDF) FE Civil Practice Exam (PDF) Wasim’s Practice Exam #1,2,3 (some of the questions are outdated) Wasim’s 3rd Edition Study Guide. ” My own solutions, which you will find below, follow the problem numbering scheme I established above. Some are so entrenched in school that they can take it with little study, but I would recommend taking at least one practice exam to see where you stand. 335857725-NCEES-FE-Civil-Practice-Exam-50-Problems. If money is not an issue, try taking the exam with 2 weeks of cramming. pdf · FE- Civil Practice Problems. Problem-based learning backed-up by detailed theory. FE Review-Math 25 1. Changes are unlikely before 2025. edu FERC Fluid Mechanics FE Review These slides contain some notes, thoughts about what to study, and some practice problems. This practice exam provides you with 110 questions and detailed solutions that mimic the exam same look and feel of the real, computer-based FE exam. Practice Examination ‘To determine the magnitude of the point foree, se these equitions: poBtbus, af p= i ltl 7 OR otras thm ft if ~ (oot masjen( an = a9 6ltf. The Engineer in Training (EIT) exam, formally known as the Fundamentals of Engineering (FE) exam, is the first step in acquiring your Professional Engineer (PE) License. Therefore, we are giving you a free preview (containing 10 sample questions) of Exam #1 which have been updated for the 2022 FE Exam. Visit the NCEES FE website or the NCEES YouTube channel for details. The Exam Information • In NJ, can take it Junior year. D. Therefore, we are giving you a free preview (containing 10 sample questions) of Exam #1. Exam problems have been sourced from the past years’ exams… Free FE Practice Test PrepFE™ Free FE Mechanical Example Practice Problems. -C. is now available. If your exam is scheduled after July 1st, mak The Fundamentals of Engineering (FE) Mechanical exam is a computer-based test (CBT) which allows an electronic resource provided in the exam. 4 lb/ft3). The FE electrical and computer practice exam is generally the first step in becoming professionally licensed. The FE Mechanical exam includes 110 questions that use both the US Customary System (USCS) and the International System of Units (SI). The topics correspond directly to the CBT topics. Free Download - Civil Discipline Specific Review for the FE EIT Exam Third Civil Engineering PE Practice Exams, PE Civil Reference Handbook, PE Civil  2021/10/19 Disciplinary study guides and practice problems. = 9. Main Menu; by School; by Literature Title; FE exam review. NCEES is a national nonprofit organization dedicated to advancing professional licensure for engineers and surveyors. com Show details FE Chemical Practice Problems offers comprehensive practice for the NCEES Chemical FE exam. 
Currently, the only NCEES exams administered via computer are the Fundamentals of Engineering (FE) and the Fundamentals of Surveying (FS) exams. FE Practice Test with Complete Solutions. FE Mechanical Practice Exam. It may take only 30 seconds to answer some of the exam NCEES-FE Civil PRACTICE EXAM. View videos explaining the Computer Based Exam you will take. Use the button below to explore NCEES exam prep materials by exam type and discipline. You can do this over the course of a month or so, or a couple weeks. Contributions by. Advisor for more information. pdf from ENG 404 at Kettering University. Everyone has a different study style but the best way to study for this exam is to test your self on the art of the multiple choice question. In this online training format, you will learn in the same computer-based environment you will use … FE Review Course – Fluid Mechanics Spring 2012 Frank T. First, the key concepts and skills are unknown to most engineers studying for the exam. Click the button to get started Practice Quiz. net/civil-fe-exam-prep-course/🟡 FE Exam One on One Tutoring https://www. Date: 04/12/10 8 Problems. First, it's only 50 questions, not 110 questions like the real exam. Prepare for PE exams with focused courses designed to maximize your study time through the use of live and on-demand sessions; handouts, homework problems, study tips, and resources from faculty with 20+ years' experience. The preview will let you see how the practice exam … EIT/FE Exam EE Review Prof. g. Year and Month. An electronic reference booklet of equations and constants is supplied at the beginning of the exam. or. Why Take the FE/EIT Exam Now? • “The best  fe mechanical practice exam pdf with solutionsbreaking news pell city, al. You have six hours to complete it. ). If a practice exam is going to change, the new book is normally published in January or July, 6 months before the exam specification becomes effective. 5 millicuries/mL. resource teacher … Scores on the free-response questions are weighted and combined with the results of the computer-scored multiple-choice questions, and this raw score is  3 Overview • 110 multiple choice questions total • 5 hrs 20 min to answer questions • slightly less than 3 minutes per question Fundamentals of Engineering Exam Review Series Mathematics Prof. All material listed in the SuiteFoundation Study Guide may be tested. Saouma Oct. A lot of you has asked me what are the best books to us Tons of students pass their exams using our exam preparation resources. Engineering Plus has materials, such as practice exam books that can be checked out. 1. A. Showing 10 of 31 FE Exam Articles. You can adjust the width and height parameters according to your needs. In the FE exam you will not be able to bring in any outside reference material. All reference materials must be bound and remain bound during the exam. Report. Spam: This document is spam or advertising. directhub. edu (O) 578-4246 1/20/2022 1/28/2022 Exam Format The FE exam contains 110 questions and is administered year-round via computer at approved Pearson VUE test centers. level 2. net/fe-exam-tutoring/https://www. Engineers: Take School of PE for a test drive with our FREE 10-question practice quiz. If a practice exam is going to change, the new book is normally published in January or July, 6 months before the exam specification becomes effective. 5, what is the size of the sample? SOLN: Variance is known, so will use Z rather than t: Z0. pdf), Text File (. Olia, M. 
Get it as soon as Saturday, Sep 18. FE Civil Practice Exam Review: There are a lot of the civil engineering books and a lot  Morning_FE Practice Exam - Free download as PDF File (. The advice and discussions on this sub have benefited me greatly, so much so when I talk to other people, other ChE students, and sometimes professors, I reference this sub. Practice for the FE exam by practicing exam-like problems. 1/11/2022 Quickies: These are questions in the FE Review Manual that can be answered fairly quickly using a picture, your calculator, the reference manual or very little paper-and-pencil calculation. Meredith Metzger Department of Mechanical Engineering University of Utah . Ill ! I - . Visit us there for updates on everything exam-related, including specifications, exam-day policies, scoring, and practice tests. 63) A sample of cerium-141 for a diagnostic test was dissolved in saline solution to an activity of 4. 5cm}{0. com. S. Additional Free Resources for All Examinations back to top. We. preview is currently unavailable. Embed. 5 mL 64) The half-life of a radioisotope is 10/14/2021 Call Number: Reserves TJ159 . fe exam, fe exam book, fe civil, fe electrical and computer practice exam pdf, civil engineering fe exam preparation pdf fe mechanical practice exam pdf, fe ‎ Fe exam infoمعادلة شهاده الهندسه لأمريكا وكندا ‎ shared a video from the playlist FE EXAM: CIVIL . TABLE 2. Technical Study Guide &. Alternatively send us an eMail with the URL of the document to [email protected] pdf from GENERAL DO 1000 at University of South Florida. PrepFE is a startup disrupting the FE exam prep industry by providing the highest quality online FE exam prep experience that traditional textbook publishers just can’t match as their practice questions are stale on paper while PrepFE's problems are always IELTS USA is pleased to offer a free IELTS practice test to test takers who register for IELTS in IELTS Practice Listening Test Blank Answer Sheet (pdf)  Exam Information • In NJ, can take it Junior year. 29. SlaythePE. You’ll have 5 hours and 20 minutes to complete the actual exam. Date: 04/12/10 7 Problems. The Practice Portal Pro features a bank of practice problems for students to test their knowledge. 95) ($29. 1 Read More. Work it all the way through, untimed, in a separate set of paper without marking in the book, with access to the reference handbook (electronic copy on NCEES website). Download Full PDF Package. fe exam, fe exam book, fe civil, fe electrical and computer practice exam pdf, civil engineering fe exam preparation pdf fe Fe Electrical Practice Exam Pdf Study! fe exam electrical study guide study degrees, courses learning, education online. Don't devote 80% of your time going through lessons only to find out you cannot answer half of the questions on a practice exam - IT'S A COMMON TRAP. com | In today's Video we are going to work an Equilibrium Practice Problem from Statics. This book is part of a comprehensive learning management system designed to help you pass the FE exam … In the Speaking section, you will be presented with four questions that ask you to speak about a familiar topic and about a passage you have read and/or a . A short summary of this paper. Request Any Book in An FE preparation class may be offered in your area. Even though, the FE exam has changed from paper and The FE Review Manual and FE Environmental Practice books will prepare you for exam day by familiarizing you with the handbook. net . 5 hours to complete your exam. 
We've selected 10 diverse practice problems from our question bank that you can use to review for the Civil engineering FE exam and give you an idea about some of the content we provide. If the patient undergoing the test needs a dose of 10. Download Full PDF Package. FE EE Practice Exam. com to: Download free Online Reference Handbook. Posted by 1 year ago. Updated for July 2020 Exam will be provided as a searchable electronic pdf during the test. Check out our excellent FE practice exam to prepare today! Premium PDF of NCEES NCEES-FE NCEES - FE Civil Engineering 2022 Exam Dumps with Actual Questions Updated today with latest syllabus are provided here. pptx), PDF File (. Burchill, P. Clear and concise explanations of difficult concepts. Study Resources. Cover Art PPI FE Chemical Review Manual - Comprehensive Review Guide for the NCEES FE Chemical then you can download the . Log in with Facebook Log in with Google. It is designed for recent graduates and students who are close to finishing an undergraduate engineering degree from an EAC/ABET-accredited program. 1 10/16/07 Rev. Branch 1 is 500 m long, and it has a diameter of 2 m and a friction factor of 0. After you pass the FE exam, you are considered an “Engineer-in-Training. The practice exam does a very good job of giving you questions that are similar in difficulty than what is on the actual exam. , discrete, continuous, normal, binomial) 4 1/28/2022 Practice exams are typically updated only when the exam specifications are revised (every 5 to 7 years). The FE Exam is designed so there are more questions than you can complete in the time available, so your ability to answer questions efficiently is critical. Our FE exam review courses include 73 hours Lectures and Workshops based on the new NCEES reference manual and exam specifications as well as all other materials you need to prepare for FE exam and pass! FE Exam Prep Course Features. The angle CAB is 57° 28 '. The Ultimate Civil FE Practice Exam is your key to preparing for the FE exam. Understanding the concepts. Alternative Route: 3 months is a grind and I wish I did the following alternative route. There are 110 questions on the exam. View full document. FE Chemical Review Manual Introduction: FE Chemical Review Manual PDF is a book written by Michael Linderburg. sticky notes and flags are not permitted in the exam room. Contributions by. Spam: This document is spam or advertising. It is the world's second-largest religion with 1. FE Mechanical Review Manual (FEMERM) Lindeburg THE ULTIMATE CIVIL FE PRACTICE EXAM TABLE OF CONTENTS WELCOME. S UMNER, P H D, PE. pdf… Fundamentals of Engineering (FE) General Exam Review Program March 3rd ~ April 12 th, 2010. instagram. Date: 04/12/10 4 The Fundamentals of Engineering Exam or FE is your first step to getting your PE exam. 95. Branch 2 has a length of 400 m, diameter of 3 m, and a friction factor of 0. You will be given the NCEES FE Exam Reference Handbook, which contains all the necessary equations, tables, and graphs that you will need to solve each problem. pdf Ncees-fe Civil Practice Exam (1)Full description. pH=7. E. 11/15/2021 The FE exam is a computer-based exam administered year-round at NCEES-approved Pearson VUE test centers. The practice problems closely mimic NCEES’ computer-based test (CBT) experience. pdf. Errata will be corrected in future editions of the material. 
According to the Bureau of Labor Statistics, “Industrial engineers find ways to eliminate wastefulness in FE Civil Practice Problems, LindeburgISBN:978-1-59126-440-8. We've selected 10 diverse practice problems from our question bank that you can use to review for the Civil engineering FE exam and give you an idea about We've selected 10 diverse practice problems from our question bank that you can use to review for the Mechanical engineering FE exam and give you an idea FE.$29. FE NCEES Practice Exams https://account. Ford 12/10/2021 10/9/2014 1/4/2022 PE Mechanical – Thermal and Fluid Systems – Practice Exam Questions www. Solid Waste Engineering; Ray. ago. Total Number of Questions: 10. Questions. Alternatively send us an eMail with the URL of the document to [email protected] The FE exam is 110 questions which needs to be completed in 6 hours. A good resource to prepare for the exam and know what Writing the FE Exam. 2-8,) ee = 0. Foyle, P. (2008). Institute for Transportation Research and Education NC State University May 2010 References: 1. Principles and Practice of Engineering (PE) and Structural Engineering (SE) exams: These are open-book exams. We've selected 10 diverse practice problems from our question bank that you can use to review for the Mechanical engineering FE exam and give you an idea  8 Step Plan to Pass the FE • Step 6 – Take a second practice exam a week before FE – Identify areas that need a little more work • Step 7 – Work only on areas identified in Step 6 • Step 8 – Stop studying an hour before going to sleep the night before to relax – Get a full 8 hours of sleep FE SAMPLE QUESTIONS NATIONAL COUNCIL OF EXAMINERS FOR ENGINEERING AND SURVEYING NCEES. Login to purchase. AM (A). net . Date: 04/12/10 6 Problems. The questions are complete with answers and thorough solutions. This exam package contains 500+ questions to prepare you for the NCEES FE Civil Exam. org. Quickies: These are questions in the FE Review Manual that can be answered fairly quickly using a picture, your calculator, the reference manual or very little paper-and-pencil calculation. View FE exam practice problems. Download your free copy to find out more about registering for exams and what to expect on exam day. Each module will review main concepts, illustrate them with Practicing by taking this test will familiarize you with the style of the real selection test. The last update was July 2020. If you take the practice exam, and do well on it without looking at the FE Exam Resource Center. The Principles and Practice of Engineering (PE), Principles and Practice Course Notes / Study Materials / Textbooks. Changes are unlikely before 2025. NCEES FE Civil Practice Exam Read More. Which of the following expressions surely supports the statement f(n) = Ω(g(n))?. ncees. The NCEES Examinee Guide is the official guide to policies and procedures for all NCEES exams. Watch online tutorials and videos. A valve manufacturer uses the rig shown below to test their valves. This book features over 460 three-minute, multiple-choice, exam-like practice problems to illustrate the type of problems you will encounter during the exam. Instead get more use to Fe manual because you can solve most of the problem only using Fe manual. directhub. com/the_lerchness_monster/FE Civil Calculator Policy: … FE exam practice problems. These are collectively called Alternative-Item Types (AITs). Free PDF downloads and exam prep purchases are made via your MyNCEES account. U. All questions are based on actual exam material. 
Here is a collection of the available resources to help you become wise and time efficient for the FE exam style questions. Vor 18 StundenIslam ( / ˈɪslɑːm /; Arabic: اَلْإِسْلَامُ, romanized : al-’Islām, [ɪsˈlaːm] ( listen)) is an Abrahamic monotheistic religion teaching that Muhammad is a messenger of God. -l mechanical practice exam - • fg n e engineers and Corrections or changes to published materials are posted once they are approved by a panel of subject-matter experts. PPI Learning Hub is the go-to destination for FE Chemical exam prep. I would plan to take the FE 3 semesters before graduation. fe chemical practice exam pdf provides a comprehensive and comprehensive pathway for students to see progress after the end of each module. Preview PDF. Therefore, we are giving you a free preview (containing 10 sample questions) of Exam #1 which have been updated for the 2022 FE Exam. 2013 Structural Analysis is part of the afternoon exam. Each timed practice exam consists of 50 questions and functions much the same as the actual exam. , P. The NCEES FE Exam Reference Handbook will be provided as a searchable electronic pdf during the test. This practice exam provides you with 110 questions and detailed solutions that mimic the exam same look and feel of the real, computer-based FE exam. Exam format The FE exam contains 110 questions and is administered year-round via computer at approved Pearson VUE test centers. Probability distributions (e. DOWNLOAD . This course will give you an idea of the kind of questions 6/24/2020 1/26/2022 FE Civil Practice Module. View and download NCEES-FE Civil PRACTICE EXAM. We've selected 10 diverse practice problems from our question bank that you can use to review for the Mechanical engineering FE exam and give you an idea about some of the content we provide. Hey r/ChemicalEngineering, I wanted to preface this by saying thank you to everyone on this sub who contributes to it. Uploaded By Jonesey1982. E. The NCEES FE Exam Reference Handbook will be provided as a searchable electronic pdf during the test. Simulate the real FE exam test-day experience with practice problems on PrepFE's online dashboard. I do have some issues with this exam, though. Gross EE1-2FUNDAMENTALS OF ENGINEERING Practice Exam 1. 4 by Wasim Asghar PE | Sep 26, 2016 FE Exam Prep Books / Hello Friends, I have been wanting to share with you this video for a while now. Meredith Metzger Department of Mechanical Engineering University of Utah . View FE Mechanical Practice Exam (2020). FE exam Review Courses. w. Prepare for your exam with 72-hours of comprehensive exam prep. General Instructions: Click the Start Test button to begin. 667 ft ‘To determine the length of the moment arm, use this equation: et litt Free FE Practice Test PrepFE™ Free FE Mechanical Example Practice Problems. You are expected to know: 1. E. 10/7/2021 Use these FE Practice Questions to Prepare for the FE Exam A practice exam, specific for your discipline, of multiple choice questions so you can study with the end in mind ( AND minimize your reading ). 3. After studying a subject or topic PPI Mechanical Engineering Practice Problems, 14th Edition – Comprehensive Practice Guide for the NCEES PE Mechanical Exam. This Paper. pdf on DocDroid In NEED OF A FE MECHANICAL PRACTICE EXAM PDF PLEASE HELP :/. Download Full PDF Package. , mean, mode, variance, standard deviation) 1-3 B. Each question is worth 2 points. Description. 2/16/2020 Fluid Mechanics FE Review Carrie (CJ) McClelland, P. 
All questions are based on actual exam material. This preview shows page 1 - 9 out of 9 pages. [email protected] Date: 04/12/10 3 Problems. 1 Full PDF related to this paper. Except for various industrial occupations that are specifically exempted by the Texas Engineering Practice Act (TEPA), it is unlawful to offer or perform any engineering services without being licensed as a Professional … Instructor: Dr. … Thanks for watching and don't forget to subscribe!Follow me on Instagram!https://www. Don’t waste your time studying the entire materials that were covered in your engineering field. APEGA candidates can write the exam at any Pearson VUE testing location in North America that offers the exam. be licensed in order to practice. There is no negative marking for guessing the correct answers. In the afternoon, you are to answer 60 questions, and Structural Analysis is about 10% of the test content (or about 6 questions). 5cm}{0. Official NCEES Practice Exam. We help creative entrepreneurs build their digital business by focusing on three key elements of a successful online platform - design, content + strategy. Password Full PDF Package Download Full PDF Package. Regarding general topics that will appear on every exam (Mathematics, Ethics, and Engineering Economics), it is worth understanding to do as many correctly as possible. Stress A F Normal =σ= A F Shear =τ= F F F A A A Sample dimensions are decreased, so stress is even higher. g. The FE exam is a computer-based exam administered year Updates on exam content and procedures NCEES. In Alberta, there are locations in Edmonton and Calgary. ). of Civil & Environmental Engineering, LSU [email protected] The key to passing the FE exam We've selected 10 diverse practice problems from our question bank that you can use to review for the Other Disciplines engineering FE exam and give you an idea about some of the content we provide. Related information that you'll find useful includes learning how to use your calculator during exam: FE Exam Calculator, and what mark you'll need to pass: FE Exam … Fundamentals of Engineering Exam Review Series Mathematics Prof. appear on exams for all disciplines. 29. 9 billion followers or 24. 2mm} CO_2 + \rule{0. Date: 04/12/10 2 Problems. -l mechanical practice exam - • fg n e engineers and surveyors CONTENTS … you’ll receive a 10 percent discount on the practice exam. FE Exam Review, Online Problems and Solutions. 12336. 95. 1/28/2022 View PPE FE Civil Exam. We've selected 10 diverse practice problems from our question bank that you can use to review for the Mechanical engineering FE exam and give you an idea about some of the content we provide. 9 out of 5 stars. School of PE will provide each student several pages of handouts prepared by our instructors as a guide for the review course. AP Calculus BC Practice ExamFrom the 2018 Administration This exam … pdf” document under the heading: “Part 1,. They give you life-like exam experience. • You will register and schedule your appointment through your My NCEES account on the NCEES website. NCEES FE Chemical Practice Exam Read More. , primary, secondary, tertiary) FE Practice Test. exam consists of 110 questions over these topic areas: mathematics, probability and statistics, engineering sciences, materials … 9/29/2020 The Ultimate Civil FE Practice Exam Volume 1. Professional Engineering Exam Review Reviewing for FE (Fundamentals of Engineering) PE (Principals and Practice) Exams Jeffrey J. E. Pollution; Vesilind et al. 
The road to acquiring the Professional Engineer license can be broken down to three step process, each step with its unique requirements and characteristics; the steps are as The FE Reference Manual is a PDF that is searchable. Give yourself about 30 seconds to a minute on each. Bound refers to (1) materials permanently bound, What Are the Best FE Practice Exams? If you are looking for other material you may consider getting our very own practice exams, The Ultimate Civil FE Practice Exam Volume 1 and 2 in addition please check out the best free FE resources. Here, and within our Study Program Prepineer, we believe in the concept of *Sets and reps*. All questions are solveable using the NCEES FE Reference 10/23/2019 Test Name: FE Civil Quiz. A 6-hour appointment time includes a tutorial, the exam, and a break. PDF Ncees-fe Mechanical Practice Exam (2)Descripción completa  26 thg 1, 2022 Free FE Practice Exams [2022]. This … Please see the CECS Graduate. The answers to the problems are given in the last slide. PPI Pro Tip: You can  [PDF] DOWNLOAD A SAMPLE OF THE MECHANICAL FE TEXTBOOK. 2 mL D) 22 mL E) 4. pdf -. • Register any time, up to one year in advance, for an exam appointment at any testing center. Saouma Oct. The Learning Hub is more than an eLearning platform—it is a time-tested program that guides examinees through their exam prep from start to finish. ORG/EXAMS . Here you can download free practice test for such certifications as MCSA, MCSE, A+, Network+, Security+, CCNA, CCNP, CCIE. 12 cSt, density = 62. Download free Examinee Guide. Errata View and download NCEES-FE MECHANICAL Practice Exam. The NCEES FE Exam Reference Handbook will be provided as a searchable electronic pdf during the test. txt) or view presentation slides online. 02. 9% of the world's population, known as Muslims. Please Report any type of abuse (spam, illegal acts, harassment, copyright violation, adult content, warez, etc. ). Ill ! I - . sample exams and guides that outline how to pass the PE exam. 2mm} … FE_CIVIL_PRACTICE_PROBLEM. . The working fluid is water ( kinematic viscosity= 1. You will be given 2. 5 hours to complete your exam. Thus: +1) + 268) + 3-2) x ° 42 50. V. org/exam-prep/. Lindeburg PE. Hi everyone, I am going to be taking my FE Mechanical Exam this Saturday at 8 AM. Free FE Civil Example Practice Problems. Explore the CBT FE and FS exams by watching video demonstrations . 2021 Q2 Exam. The FE exam has 110 questions that can be asked in a variety of formats such as multiple-choice, fill-in-the-blanks, match the columns, drag, and drop, etc. The NCEES Examinee Guide is the official guide to policies and procedures for all NCEES exams. FE practice problems for FE civil,mechanical,electrical, or others! Download and review corrections to your preparation materials. ppt / . Geo/Trig Sample Problems: 1,5 Geo/Trig Exam Problems: 1,2,3,6, 8-12,14,16,17 Algebra Sample Problems: 1,2,5 DOWNLOAD A SAMPLE OF THE OTHER DISCIPLINES FE TEXTBOOK & FULL EXAM. A. Past Exam Questions and Answers. The FE exam is a computer based test (CBT). org, hard copy available in Olin library) 0 Kaplan Review Books 0 ASME FE Exam Prep Lecture CD‐ROM 0 FE/EIT AM w/CD‐ROM (REA) ‐The Best Test Prep for the Engineer in Training Exam, N. Struggling to find resources and stay on track to take the FE Exam? Whether you are still in school or have kids in college, I want to help you pass the FE and take the next step in your engineering career. pdf from MECH ENG 208 at Missouri University of Science & Technology. 
Lindeburg 0 Mechanical Discipline‐Specific Review for … FE Civil Review Lindeburg pdf. In the afternoon, you are to answer 60 questions, and Structural Analysis is about 10% of the test content (or about 6 questions). See preview. ). 05/2 = 1. com):If you are interested in a straight forward but comprehensive resource designed to support your preparation for the FE-Civil exam, this book is for you. The key to passing the FE exam Welcome to The Ultimate Civil FE Practice Exam! Thank you so much for purchasing this book! This exam contains 110 questions and solutions following the exact same format of the NCEES exam. Dept. 250+ Practice Problems. This exam uses separate vertical and lateral components to test your ability to safely design buildings or bridges, especially in areas of high seismicity and high wind. In the review session, we will be working some of these problems. If a practice exam is going to change, the new book is normally published in January or July, 6 months before the specification becomes effective. Title: Structural analysis and design Author: Emmett Sumner Created Date: FE Chemical Review Manual, Lindeburg (2016) ISBN: 978-159126-445-3 FE Chemical Practice Problems 1st Ed. Clear and … All FE courses updated with new NCEES exam specifications. \$125. Close. Page 3. Date: 04/12/10 5 Problems. NCEES-FE Civil PRACTICE EXAM. Mar 29, 2019 · AP Calculus BC Resources Series Extra Practice Worksheet 2020 Answers Series Practice Problems Worksheet Solutions 2020. One of the biggest challenges on this exam is the time constraint. 95. Alternatively send us an eMail with the URL of the document to [email protected] To create conditions most like a real test: Practice by completing all 26 test questions Be sure to set a timer before beginning each part Do not look at the answers that follow at the end until you have completed all the test questions 1/12/2021 Why should an engineering student take the Fundamentals of Engineering (FE) exam? Passing the FE exam is one of the prerequisites for engineering licensure. Read Paper. The more we are able to review and work through structured problems presented in all different formats the better we will be prepared. The exam is offered in seven disciplines. This curated bundle includes FE Chemical Review web book, practice problems, and the most realistic practice exams on the market
web
auto_math_text
# Photometric Calibration

## Background

Digital cameras today have CMOS-based sensors that convert the light incident on them (irradiance) into digital values. These sensors have a characteristic Inverse Camera Response Function (ICRF), which maps the irradiance to the pixel value generated (typically between 0 and 255). In the cameras we use, the ICRF curve is adjusted so that the color reproduced in the digital pixels resembles what the human eye sees. This is useful for consumer products, but when cameras are used for scientific applications ranging from vision systems in autonomous cars to 3D reconstruction, it is imperative that the pixel values be calibrated to the true irradiance values on the CMOS sensor.

## Problem

The goal is to obtain the absolute value of light intensity and calibrate the CMOS sensor output of the camera to match that absolute value. The highest possible accuracy and precision are desired. There are two ways of approaching this problem:

1. Method I: Measure the intensity of light at the surface using photometers/lux meters/radiometers.
2. Method II: Use a standardized light source with controllable wavelength and intensity.

A comparative overview of the two methods is given below in Table 1; each advantage is given an unweighted score of 1:

Table 1

|  | Method I | Method II |
| --- | --- | --- |
| Principle of operation | Uses a transducer to convert light intensity to a digital signal. | Uses a transducer to convert digital signals into light waves. |
| Sensor/Transducer | Silicon (doped) photodiode | Silicon (doped) LED / tungsten filament |
| Cost | Cheap | Expensive |
| Luminous efficiency error | 9% (high) | 0.001% (low) |
| Dependence on ambient light | Ineffective / false positives under fluorescent lighting | Independent of ambient lighting |
| Response time | 5 s | 0.500 s |
| Oblique incidence / luminance spatial uniformity | Incidence: 10° ±1.5% / 30° ±3% / 60° ±10% / 80° ±30% | Spatial uniformity: >94% over a 360° × 200° field of view |
| Spectral range | Lux meter: 1; Photometer: 850 nm to 940 nm | Visible, 850 nm to 940 nm |
| Spectral mismatch | 1% | >0.00001% |
| Luminance range | 0.0 to 999 cd/m² | 0 to 700 cd/m² |
| Typical application | Lux meter: ambient light; Photometer/Radiometer: color of surfaces. | Calibration of lux meters, photometers, radiometers, cameras and other optical equipment. |
| Operational features | Comparatively less stable output; needs regular calibration; desktop integration on select models. | Precise control. Easy integration with desktop. Long life. Stable output. |
| Total score | 2/10 | 7/10 |

## Result

Method II is the most desirable way to solve the problem at hand.

## References

Use these for choosing the type of validation of photometric calibration:

1. https://www.labsphere.com/site/assets/files/2928/pb-13089-000_rev_00_waf.pdf
2. http://ericfossum.com/Publications/Papers/1999%20Program%20Test%20Methodologies%20for%20Digital%20Camera%20on%20a%20Chip%20Image%20Sensors.pdf
3. http://sensing.konicaminolta.us/2013/10/measuring-light-intensity-using-a-lux-meter/
4. http://tmi.yokogawa.com/products/portable-and-bench-instruments/luxmeters/digital-lux-meters/
5. http://ens.ewi.tudelft.nl/Education/courses/et4248/Papers/Niclass12.pdf
6. http://photo.net/learn/dark_noise/
7. http://ro.ecu.edu.au/cgi/viewcontent.cgi?article=2497&context=ecuworks
8. http://personal.ph.surrey.ac.uk/~phs1pr/mphys-dissertations/2007/Wallis.pdf
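As a practical illustration of Method II described above, here is a minimal sketch (not part of the original write-up) of fitting a camera's response against a controllable, standardized light source: step the source through known luminance levels, record the mean raw pixel value at each level, then fit a mapping from pixel value back to absolute luminance. The helper `capture_mean_pixel_value`, the luminance levels, and the simulated sensor response are hypothetical placeholders; only NumPy is assumed for the fit itself.

```python
# Minimal Method II calibration sketch (see assumptions in the text above).
import numpy as np

_rng = np.random.default_rng(0)


def capture_mean_pixel_value(luminance_cd_m2):
    """Stand-in for real hardware: simulate a gamma-like sensor response.
    In a real rig this would set the standardized source to the requested
    luminance and return the mean raw pixel value over a region of interest."""
    response = 255.0 * (luminance_cd_m2 / 700.0) ** (1.0 / 2.2)
    return response + _rng.normal(0.0, 0.5)


def fit_inverse_response(luminance_levels, pixel_values, degree=3):
    """Fit a polynomial mapping mean pixel value -> luminance (cd/m^2)."""
    coeffs = np.polyfit(pixel_values, luminance_levels, deg=degree)
    return np.poly1d(coeffs)


if __name__ == "__main__":
    # Hypothetical output levels of the standardized source, within Method II's range.
    levels = np.linspace(5.0, 700.0, 15)  # cd/m^2
    pixels = np.array([capture_mean_pixel_value(L) for L in levels])

    pixel_to_luminance = fit_inverse_response(levels, pixels)

    # Later, convert any measured pixel value to an absolute luminance estimate.
    print(pixel_to_luminance(128.0))
```

In practice one would repeat the sweep per color channel and exposure setting, but the fitting step stays the same.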
web
auto_math_text
# I have a huge function filled with nested blocks Could someone help me on how to eliminate some nested blocks or improve this code? I am concerned this will slow down my site dramatically. function dispalyEvent($weekNr,$week, $year){ echo "<p>";$gendate = new DateTime(); $gendate->setISODate($year,$week,$weekNr); $event_query = mysql_query("SELECT * FROM calendar ORDER BY starttime"); //Go through all event in the database while($event = mysql_fetch_array($event_query)) { //Create a range for starting date and ending date$date1 = new DateTime($event['startyear'].$event['startmonth'].$event['startdate']);$date2 = new DateTime($event['endyear'].$event['endmonth'].$event['enddate']);$date2->modify('+1 day'); $period = new DatePeriod($date1, new DateInterval('P1D'), $date2);$title = $event['title'];$name = $event['name'];$recur_query = mysql_query("SELECT * FROM recur WHERE title = '$title' AND name = '$name'"); $recur = mysql_fetch_array($recur_query); $recurring =$recur['type']; //Find day of starting recurring event and ending day if (!$recurring == "None"){$starttime = explode("/",$recur['startdate']);$startdate = new DateTime(); $startdate->setDate($starttime[2], $starttime[0],$starttime[0]); $endtime = explode("/",$recur['enddate']); $enddate = new DateTime();$enddate->setDate($endtime[2],$endtime[0], $endtime[0]); } else {$startdate = new DateTime(); $enddate = new DateTime(); } //Put the dates in integer to find if it is out of range$displaydate = intval($gendate->format("Ymd"));$startdate = intval($startdate->format("Ymd"));$enddate = intval($enddate->format("Ymd")); settype($displaydate, "integer"); settype($startdate, "integer"); settype($enddate, "integer"); //Go through each date in the range foreach ($period as$savedDate) { //Check if the Item is Approved if ($event['Approved'] == "Approved"){ switch($recurring){ Case 'None': //If the date in the range is the same as the displaydate if ($gendate->format("Y-m-d") ==$savedDate->format('Y-m-d')){ //Create event renderEvent($event['ad'],$event['starttime'], $event['title'],$event['endtime'], $event['location'],$event['address'], $event['price'],$event['description']); } break 1; Case 'Daily': //Check margin between start and end date of recurring event if ($displaydate >$startdate and !$displaydate <$enddate){ //Check if the day number is the same if ($recur['day']-1 ==$gendate->format("w")){ //Create event renderEvent($event['ad'],$event['starttime'], $event['title'],$event['endtime'], $event['location'],$event['address'], $event['price'],$event['description']); } } break 1; Case 'Weekly': //Check margin between start and end date of recurring event if ($displaydate >$startdate and !$displaydate <$enddate){ //Find the amount of weeks between two dates $weekRange = datediffInWeeks($recur['startdate'], $recur['enddate']); //Round down to the possible amount to display$weeks = ceil($weekRange /$recur['day']); //Returns the week cuurent week to display $currentWeek =$gendate->format("W"); //Loop for every #(1, 2, 3, 4) of weeks for ($n=0;$n<$weeks;$n++) { //Display event if weeks are the same if ($n ==$currentWeek) { //Put days in array $days = explode(",",$recur['weekday']); //If number day of the week is the same display event foreach ($days as$day) { //Check if the day number is the same if ($day ==$gendate->format("w")) { //Create event renderEvent($event['ad'],$event['starttime'], $event['title'],$event['endtime'], $event['location'],$event['address'], $event['price'],$event['description']); } } } } } break 1; } } } } echo 
"</p>"; } - Try to avoid nested if block and use ternary operator. Move code in functions. –  sunil Mar 31 at 6:39 Explain the purpose of this function. Is it called for each day on the calendar to render events for that day? –  Bob65536 Mar 31 at 8:24 While I agree with the functions part of your comment, I don't agree with using ternaries to avoid nesting blocks. if-else branches don't introduce a new block-scope, and the PHP ternary is fundamentally flawed. Besides, the OP's code is messy as it is, and there are issues with operator precedence all over. Ternaries will only add confusion and be a new source of bugs at this point. That's just bad advice in this case... –  Elias Van Ootegem Mar 31 at 11:47 Ok, the following review may seem blunt or harsh, but please, try to keep in mind that this is in order to help. I'm not trying to hurt or mock anyone, but in order for code-review to be as effective as it ought to be, I'll have to be brutal. If you haven't read it already, the help-section asks you post working code. bug-riddled code isn't subject to review yet, it has to be debugged first. It is possible you aren't aware of it, and that you may think your code works, when really it doesn't. Well, not as you expect it to, at least. I know it feels banal and tedious, closely looking at the operator precedence table doesn't do any harm. Quite the opposite, in fact. You'll soon find out why Both David Harkness and myself mention potenial bugs or unexpected behaviour with expressions like: if (!$recurring == "None") //and if ($displaydate > $startdate and !$displaydate < $enddate) And as a last point in this foreword to what is already a hefty answer, I would like to strongly suggest you change your php.ini settings for the error_reporting and set display_errors to true, one, or stdout, depending on your PHP version. The error_reporting's default value is likely to be E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED, while debugging, it's best to set it to E_ALL | E_STRICT, or call error_reporting(-1); in your script. As I have done before, I'll walk through your code line by line, offering advice and the reasoning behind my criticism. At the end, I'll add an example of code you could end up with if you decide to take my recommendations to heart. Update: I did not add a code example as there are simply too many unknowns to deal with, and that any example would basically end up being a total re-write of your code, which isn't my job, and is of little educational use to you. Instead, just to make it 100% clear, however blunt or harsh this answer may seem here's a meta-post on why I consider it necessary for code-review to be tough Now, without further ado, let's get too it: function dispalyEvent($weekNr, $week,$year){ Yes, I have some criticisms about the very first line of code you posted already. Ok, a function displayEvent, that expects 3 arguments. All three have to do with time. But if you need variables that tell you something about time, why not ask of the user (caller) to pass a DateTime class from the off? function displayEvent(DateTime $date) { Now this tells the user of your code that he's expected to pass a DateTime instance as an argument. It reduces the number of arguments from 3 to 1, and allows for type-hints. As we'll see in a second, this also saves you the bother of creating the DateTime instances inside the function. 
The advantage of that is that, if the caller already has a DateTime instance, he can simply pass that object, and not call methods to get the year, week and weekNr values, which are only being used to re-construct the same DateTime instance all over. Onwards: echo "<p>"; Don't echo in a function. A function returns. The caller of your function may then choose to echo the return value, or may store it somewhere else. Having a function echo something puts the user of your code in a tight spot: calling this function means he can't set the headers, can't use this function to retrieve data and present it in a way he wants to. Just create a variable $outString = '';, and return that at the end. $gendate = new DateTime();$gendate->setISODate($year,$week,$weekNr); As I said before: this code can be made redundant simply by changing the function's signature to expect a DateTime instance from the off $event_query = mysql_query("SELECT * FROM calendar ORDER BY starttime"); //Go through all event in the database while($event = mysql_fetch_array($event_query)) { Please, please, please stop using the deprecated mysql_* extension. Switch to PDO or mysqli_* instead. Henceforth I'll be using PDO. And as a rule of thumb, or even personal mantra: Avoid SELECT * queries whenever you can. Select what you need, and how you need it. You haven't done that last bit at all, judging by the next snippet of code: //Create a range for starting date and ending date $date1 = new DateTime($event['startyear'].$event['startmonth'].$event['startdate']); $date2 = new DateTime($event['endyear'].$event['endmonth'].$event['enddate']); $date2->modify('+1 day'); Why not select these dates like so: SELECT CONCAT_WS('-', startyear, startmonth, startdate) AS date1 That way, you'll be able to write: $date1 = new DateTime($event['date1']); That's just, I think you'll agree, a hell of a lot cleaner. Anyway, back to the code: $period = new DatePeriod($date1, new DateInterval('P1D'),$date2); $title =$event['title']; $name =$event['name']; Why bother assigning individual variables, you have an associative array, what's wrong with that? An assoc array is a data structure that holds together all related data anyway. This data clearly belongs together, why not keep it together in that array? We'll get to the DatePeriod business in a moment, for now, let's carry on: $recur_query = mysql_query("SELECT * FROM recur WHERE title = '$title' AND name = '$name'");$recur = mysql_fetch_array($recur_query);$recurring = $recur['type']; //Find day of starting recurring event and ending day if (!$recurring == "None"){ $starttime = explode("/",$recur['startdate']); $startdate = new DateTime();$startdate->setDate($starttime[2],$starttime[0], $starttime[0]);$endtime = explode("/",$recur['enddate']);$enddate = new DateTime(); $enddate->setDate($endtime[2], $endtime[0],$endtime[0]); } else { $startdate = new DateTime();$enddate = new DateTime(); } Ok, you may have noticed I fixed the indentation. Seriously, indentation is important. For your sake and ours. Stay consistent and try to adhere to the standard as much as you can. Anyway: This code basically queries the same DB for, pretty much, the same data over and over again. Of course, the where clause is different every time, but what you're doing is sending a string to MySQL, who then parses and compiles the query and then fetches the data. 
A prepared statement can be sent to the DB once, to be compiled, optimized (and in many cases, a lot of the data is even pre-fetched), and you can then send the values that are to be placed in the WHERE clause whenever you need that query to be executed. This saves the DB server a lot of work, saves time and is more secure. You're just stringing $name and $title in the query, for example. Completely oblivious to the fact that There could be a name like "O'Connor" assigned to $name. Resulting in the Query: SELECT * FROM recur WHERE title = 'foobar' AND name = 'O'Connor' Which will cause problems. And what if Bobby Tables pays a visit? On the DateTime things, I can only say: Why explode? Why not simply write: $recur['startdate'] = new DateTime($recur['startdate']); DateTime does a great job at "guessing" the format, but if you wish not to rely on this, you can always choose to specify the format yourself: $recur['startdate'] = DateTime::createFromFormat( 'd/m/Y', $recur['startdate'] ); Anyway, let's continue: //Put the dates in integer to find if it is out of range$displaydate = intval($gendate->format("Ymd"));$startdate = intval($startdate->format("Ymd"));$enddate = intval($enddate->format("Ymd")); settype($displaydate, "integer"); settype($startdate, "integer"); settype($enddate, "integer"); DRY, Don't Repeat Yourself. You are calling the intval function. Look at the return type: int intval ( mixed $var [, int$base = 10 ] ) // \---> returns an INT Why, then are you calling settype? It's pretty safe to say you're calling settype on an int already. Even if you're not, why not cast? A cast saves the overhead of a function call: $displaydate = (int)$gendate->format("Ymd"); That's all there is too it, and you've saved yourself the bother of 2 function calls. Moving on: //Go through each date in the range foreach ($period as$savedDate) { //Check if the Item is Approved if ($event['Approved'] == "Approved"){ switch($recurring){ Ok, think about what you're doing here. For each date in the DatePeriod, you're evaluating, basically, what the results of the initial query told you. Why do you need to check those more than once? You know the $event['Approved'] and $recurring values aren't going to change. Determine which case is going to be true beforehand. Then you can significantly shorten the loop body. You're only processing those events that have been approved! Why not add that to the WHERE clause in your query???? SELECT * FROM calendar WHERE Approved = 'Approved' ORDER BY starttime; That way, you don't have to check the value of $event['Approved'] to begin with. Also: break; is the same as writing break 1;. The latter just looks weird here. Anyway, consider writing separate functions for various event-types: renderNonRecurring, renderDailyEvent and (but there's a lot to be said about this case still) renderWeeklyEvent. You can then write something as simple as: foreach ($period as $savedDate) { switch ($recurring) { case 'None': if ($gendate ==$savedDate) {//DateTime instances can be compared like so, no format needed renderNonRecurringEvent($event); } break; } } Notice how I don't pass every individual key of the array to the render function, but instead just pass all of the event-related data. Doesn't that make sense to you? Of course, looking at this function's tendency to echo, I take it your render* functions echo, too. 
Just have them return the output string and concatenate it to the $outString I mentioned in the beginning of my answer: $outString .= renderEvent($event); Now, for the big one: Case 'Weekly': //Check margin between start and end date of recurring event if ($displaydate >$startdate and !$displaydate <$enddate){ Operator precedence... this condition is just terribly unreliable. and has a low precedence. Use &&. Always. Unless you know what you're doing. Also think about what you're trying to check when you write !$displaydate <$enddate Are you saying (!$displaydate) <$enddate //if inverse boolean value of $displaydate <$enddate //ie: if $displaydate truthy, then this would evalute to: // if (!true) <$enddate -> false < $enddate --> 0 <$enddate Or are you trying to check for: $displaydate >=$enddate //makes a lot more sense, no? For some reason, you've created a function to get the diff in weeks. What is odd is that you insist on passing the date string to this function, when you've already constructed a DateTime instance for these dates. At least pass that to the function, because I'm prepared to take a punt that this datediffInWeeks function creates those same instances all over. But to be honest, I'd just not bother, and write this in-line, there's not a lot too it anyway. Here's the code you have: //Find the amount of weeks between two dates $weekRange = datediffInWeeks($recur['startdate'], $recur['enddate']); //Round down to the possible amount to display$weeks = ceil($weekRange /$recur['day']); And this is what I'd write: $weekRange =$recur['startdate']->diff($recur['enddate']);$weeks = range(0, ceil($weekRange->d/7));//d property is number of days, as int //to get range of number of weeks:$weeks = range( (int)$recur['startdate']->format('W'),//start from current week$recur['startdate']->format('W') + ceil($weekRange->d/7) ); Now once you have that, there is no point in looping over the array, is there? a simple in_array call, or even if (min($weeks) <= $gendate->format("W") && max($weeks) >= $gendate->format("W")) would do the trick. The same logic applies to the days business. That way, you can do away with all those nested loops, because that's just an unholy, slow, messy and unmaintainable mess. PDO: Reusing prepared statements: Here's an example of how I'd query the data using PDO, re-using prepared statements: //outside the loop, call prepare$stmt = $pdo->prepare('SELECT * FROM recur WHERE title = :title AND name = :name'); //note no quotes, just :title and :name$events = $pdo->query('SELECT * FROM calendar WHERE Allowed = "Allowed" ORDER BY starttime ASC');//order by <field> ASC/DESC while ($event = $events->fetch(PDO::FETCH_ASSOC)) {//inside call execute as much as you want$stmt->execute( array( ':name' => $event['name'], ':title' =>$event['title'] ) ); $recur =$stmt->fetch(PDO::FETCH_ASSOC); } General recommendations Refactor this code ASAP. Learn about more modern MySQL extensions, PDO or mysqli_*. Both are more powerful than mysql_*, but mysqli_* is the most powerful of the lot. However, its API is messy (alowing both OO and procedural style programming), and has a lot more pitfalls owing to its complexity. I haven't touched on this in my answer, but never assume all is going well. Check what functions return. They could return false, 0 or null, or they could throw an Exception, to let you know all is not well. Don't ignore those situations, deal with them. 
Write a couple of one-liners down as guide lines, for example: • Prepare queries that use variables in the WHERE clause, by using prepared statements • If you have to scroll to read through a function, you're doing too much in one function. Split the logic over several functions. • DRY • Only SELECT what you need • Functions don't echo, they return. Think of them as books. They contain information, you read it, and can then relay that information to others. A book doesn't read itself out loud to other people. That's not its function. • errors happen. That's a fact. Check the return values of functions (false, 0, null or wrap them in a try-catch block). Check the manual, to see what each function returns in case something goes wrong. • Learn about prepared statements and injection. This implies changing the MySQL extension you use, this page helps you with that • Debugging implies seeing the bugs. Therefore E_STRICT | E_ALL + display_errors are a must. - Thank you so much for the hard work. I am new to php but I understand what you are saying and I will post my new code. Thanks –  Pieter de Vries Mar 31 at 18:03 @PieterdeVries: It's all in a day's work for bicycle repair man ;) Do let me know when and where you've posted your updated code, in case you want it reviewed –  Elias Van Ootegem Apr 1 at 7:24 Well I have some issues with figuring out some of the things. I am still perfecting it and just so you know the case clause is for repeated events. You may see the form at logicalwebhost.com/goats/calendar/calendar_entry.php BTW the reason I used mysql instead of opd or MySQLi is because the project was started in that by another user but I did know about the change in php. –  Pieter de Vries Apr 2 at 18:18 Here is my new code: codereview.stackexchange.com/questions/46133/… –  Pieter de Vries Apr 3 at 2:59 Here are a few quick tips: • Your indentation seems mostly consistent, but there are a few lines off by a space. Most editors have a format feature if you find it too tedious to maintain it manually. • Other whitespace is inconsistent as well. Compare these two lines: foreach ($period as$savedDate) { // perfect and switch($recurring){ // allergic to spaces? • Pick a naming convention and stick with it: $startdate versus $savedDate. I recommend camelCase. • break statements don't need the default 1 argument. Inside switch blocks especially it looks very strange and makes you stop and think when there's no reason. • Indent the break statements one level below their matching case statements, at the same level as the code before them. Some people keep the case lines at the same level as the switch since they go hand-in-hand to keep the lines from shifting so far to the right. Bug Be very careful when negating boolean expressions. if (!$recurring == "None") will never pass because no value equals the string "None" when it's negated. This should be if (!($recurring == "None")) to correct the operator precedence, but if ($recurring !== "None") is clearer. Finally, the best way to see how it performs is to time it repeatedly and take the average. - White space use is largely a personal taste (as long as it is not on the level of for(int i=0,int j=3;i<2*j+1,j>=34;i+=2,j/=2*3+c) ) and I find both of the given examples acceptable. Although OP would benefit from being consistent. –  Emily L. Mar 31 at 9:15 The operator precedence bug is present all over.. 
most notably if ($displaydate > $startdate and !$displaydate < $enddate): using and is preventing this from being evaluated in the wackiest of ways, but it doesn't betray much awareness of operator precedence at all... –  Elias Van Ootegem Mar 31 at 11:36
web
auto_math_text
# qiskit.transpiler.InstructionDurations.get¶ InstructionDurations.get(inst, qubits, unit='dt')[source] Get the duration of the instruction with the name and the qubits. Parameters • inst (Union[str, Instruction]) – An instruction or its name to be queried. • qubits (Union[int, List[int], Qubit, List[Qubit]]) – Qubits or its indices that the instruction acts on. • unit (str) – The unit of duration to be returned. It must be ‘s’ or ‘dt’. Returns The duration of the instruction on the qubits. Return type float|int Raises TranspilerError – No duration is defined for the instruction.
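A minimal usage sketch of this method (the gate names, qubit indices, duration values, and dt below are made up for illustration, and constructor details may differ between Qiskit versions):

```python
from qiskit.transpiler import InstructionDurations

# Hypothetical duration table: ("name", qubits, duration) entries, in dt by default.
durations = InstructionDurations(
    [("x", 0, 160), ("cx", (0, 1), 800)],
    dt=2.2222e-10,  # seconds per dt sample (assumed value)
)

print(durations.get("x", 0))              # 160, in dt (the default unit)
print(durations.get("cx", [0, 1], "s"))   # converted to seconds using dt
```

If no duration is defined for the queried instruction/qubits pair, the call raises TranspilerError, as documented above.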
web
auto_math_text
# week 01: the importance of doing arithmetic Let's start with a brief introduction to genomes, genes, mRNA transcription, and RNA-seq experiments. This is mostly intended for those of you with little biology background. Even if you're a biologist, though, you should find this intro useful. Throughout the course, among other things, we'll emphasize quantitative intuition: thinking about biological systems in mathematical and physical terms. I don't mean anything sophisticated by that. I mean thinking in terms of ballpark numbers, simple quantitative relationships, and common sense. You should have a feel, even just within an order of magnitude, of when experimental results make sense. If an RNA-seq experiment tells you your favorite gene is expressed at one million RNAs per cell, you should know that a typical cell doesn't even have that many mRNAs total. Physicists sometimes call this kind of rough and ready numerical thinking Fermi estimation -- a nod to Enrico Fermi's legendary capability for doing rough calculations quickly. And like everything that's important in science, there's a classic xkcd about it. ### what's in the human genome On the board, I'll go through the rough architecture of the human genome, as one example genome: • 3Gb haploid content in 24 chromosomes (22 autosomes, X and Y) • Protein-coding genes: 20K (probably an underestimate) • Protein-coding gene transcription units: about 40%, counting introns • mRNA exons: 3% (including 5' and 3' untranslated regions, UTRs) • coding: 1% • conserved noncoding DNA: maybe 6%-ish, squishy in part because there's no clear threshold • Transposable elements (TEs) in various states of decay: about 55% • all overlapping in various ways -- for example, TEs and conserved regulatory regions in introns, UTRs. Some people are surprised that known, annotated protein-coding gene transcription units cover so much (40%) of the genome, because they've been told that coding genes only account for 1% of the genome, and there's been so much nonsense written about how "surprising" it is that much of the human genome is transcribed, by people who apparently forgot about introns and UTRs. It's easy enough to verify just by counting in the annotation of the current human genome assembly. ### what is a gene (an RNA perspective) Again on the board, I'll sketch the standard canon of how we understand genes, especially from the perspective of RNA transcription (neglecting protein translation, at least for now, because the course is going to use RNA expression so much as an example of data analysis): • structure and size of transcription units - focusing on mammalian polII • (other txn units: rRNA -- tRNA -- snRNA, snoRNA, other small RNAs -- miRNA -- lncRNA) • transcription start -- promoter -- 5'cap • transcription stop -- cleavage/poly-A+ addition • transcriptional activation: promoters, enhancers, silencers • RNA splicing -- introns and exons • Isoforms: alternative transcription starts, stops, and alternative splicing • RNA transport out of nucleus, and localization in cells • translation -- RNA quality control -- nonsense-mediated decay • RNA stability and decay ### RNA content of a cell To interpret an RNA-seq experiment we want to have some intuition for what we're counting. How many RNAs are there in a cell, from each gene? Here, I'm going to use specific numbers from Jackson, Pombo, and Iborra, The balance sheet for transcription: an analysis of nuclear RNA metabolism in mammalian cells, though there's many other relevant references as well. 
A HeLa cell (a cultured human cancer cell line) divides every 22 hours, and it contains about 3.5M (3.5 million) ribosomes. This means the cell has to make 3.5M new ribosomes every 22h. This presents an engineering challenge for the cell. It's hard to produce ribosomes that fast. Major RNA components of the ribosome are the SSU (small subunit) and LSU (large subunit) ribosomal RNAs, which add up to about 7Kb of mature rRNA, processed out of a 13Kb primary transcription unit. So the cell has to be synthesizing 13Kb x 3.5M / 22h / 3600 sec/hr = 600K RNA nucleotides per second to keep up the pace, just in ribosomal RNA transcripts. Ribosomal RNA genes are transcribed by RNA polymerase I, which is specialized for rRNA synthesis (mRNAs are produced by polII), but it runs at about the same speed as polII -- around 40nt/s. So we need about 15,000 active polI polymerases to polymerize 600K nucleotides/sec of rRNA. We simply can't do it with just two copies of a single rRNA gene in our diploid genome. Physically, the polI machinery occupies about 50-100nt of DNA; even if we load them as fast and as tightly as possible, we could get about one polI complex every 100nt or so, which is indeed what's observed in HeLa: about 100-120 active polI polymerases on each 13kb rRNA transcription unit.

What to do? The cell solves it by parallelization. It amplifies the number of rRNA genes. The haploid human genome contains about 180 repeats of the ribosomal rDNA transcription unit, embedded in a 45kb repeat unit, in tandem arrays on chromosomes 13, 14, 15, 21, and 22. About 8 Mb of our 3200 Mb genome is devoted to these tandemly arrayed rRNA units. (That's a lot, relatively. The coding regions of our 20,000 genes are covered by just another 35Mb total, scattered around the genome.) Since we're diploid - we have two copies of each chromosome - we have about 360 rRNA transcription units. According to Jackson et al. 2000, about 120-150 of these units in a diploid cell appear to be transcriptionally active at any one time. (I don't know what's up with the rest; that's an interesting discrepancy. I would have thought they'd all be active!) Now the factory's balance sheet works: 120-150 active units times 100-120 polI polymerases per locus times 40nt/s = 480K-720K nucleotides/sec.

Jackson et al. 2000 now switches to presenting numbers from a different mammalian cultured cell type (mouse L cells), but for Fermi-estimation purposes, all mammalian cells are roughly the same. In total, a growing mammalian cell is synthesizing about 3000K (3M) nucleotides/sec; 39% of that active synthesis is polI on pre-rRNAs that we just talked about, 58% is polII on pre-mRNAs, and 3% is polIII on small RNA genes. Mouse L cells divide almost twice as fast as HeLa cells, so they're making rRNA at a total rate of about 1200K nucleotides/sec, compared to HeLa's 600K. Close enough for Fermi. If polI, polII, and polIII all move at around 40-50nt/sec, we have something like 30K active polI polymerases, 45K polII (though Jackson says 60K), and 3K polIII. Happily, those deduced numbers aren't too far from other independent measurements in HeLa cells, which have estimated there's 15,000 active polI, 70,000 active polII, and 7,000 active polIII polymerases. Only about a third or so of RNA polymerases are actively engaged and transcribing genes at any given time; there's an estimated total of 300K polII polymerases in a HeLa cell.
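The balance-sheet arithmetic above is easy to reproduce. Here is a minimal Python sketch using only the ballpark numbers quoted in the text (nothing here is a measurement):

```python
# Fermi estimate of the HeLa rRNA "balance sheet" described above.
ribosomes_per_cell = 3.5e6      # new ribosomes needed per division
primary_transcript_nt = 13e3    # 13 kb pre-rRNA transcription unit
doubling_time_s = 22 * 3600     # 22 h doubling time, in seconds
pol_speed_nt_per_s = 40         # ~40 nt/s for polI

synthesis_rate = ribosomes_per_cell * primary_transcript_nt / doubling_time_s
print(f"rRNA synthesis needed: {synthesis_rate:.2e} nt/s")   # ~6e5 nt/s, the "600K" above

active_polI_needed = synthesis_rate / pol_speed_nt_per_s
print(f"active polI needed:    {active_polI_needed:.0f}")    # ~14,000-15,000

# With ~100-120 polI complexes packed onto each 13 kb unit, how many active units?
print(f"active rRNA units:     {active_polI_needed / 110:.0f}")  # ~130, in the 120-150 range
```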
Recall there are about 3.5M ribosomes per HeLa cell -- the protein factories greatly outnumber the RNA factories, meaning that much more ATP energy equivalent is pouring through translation than through transcription. There are about 20,000 protein-coding genes, multiplied by two because we're diploid, so if we suppose we have 60K active polII polymerases (and if we ignore other noncoding polII transcription, such as lncRNAs), we only have an average of about 1.5 RNA polymerases engaged per protein-coding gene. Only about half (10K) of genes are on in a given cell type, so call it an average of ~3 polymerases per active gene. Those polymerases are pretty lonely! The average polII transcription unit is around 40kb long, so polII polymerases are spaced about 10kb apart on typical active genes. A dark country road, compared to the jam-packed, hard-working polI rRNA transcription unit freeways.

The RNA population of big, intron-containing pre-mRNA transcripts is called heterogeneous nuclear RNA (hnRNA). Introns are spliced, typically rapidly, and splicing is happening on most transcripts even as they're still being transcribed. Only 2-3% of hnRNA (i.e. pre-mRNA) reaches the cytoplasm as mature mRNAs, after splicing and processing. Introns are big. In terms of total RNA by mass at steady state (as opposed to synthesis rates), 75% of HeLa RNA is rRNA, 10% is tRNA, and 2.5% is mRNA (and I think the missing fraction in these numbers is a combination of mitochondrial RNA, nuclear pre-mRNA, and small noncoding RNAs). Assuming that the 75% rRNA consists of 3.5M copies of 7Kb of SSU+LSU, and assuming that mRNAs are ~2Kb and tRNAs are ~100nt, we can estimate there are about 30M tRNAs per cell (about 10 tRNAs per ribosome), and about 400K mRNAs (about one for every ten ribosomes). That turns out to be about right! Not all genes are expressed in every cell type; typically, about half (10K) are expressed in any given cell type. So, ballpark, with 400K mRNAs/cell and 10K active genes, we expect a mean of around 40 mRNAs/gene. (We expect a skewed distribution, with a lower median, because some genes will be highly expressed.)

Figure 1c from Hebenstreit et al. 2011 showing that mRNA expression levels per gene are bimodal-ish, sort of as if there's an "off" and an "on" distribution. Notice that the x-axis is a logarithmic scale. Fitting a normal distribution to the logarithm of the expression level means a so-called lognormal distribution. If we plotted it just as expression level, we'd see a skewed distribution with a fat right tail: most genes are expressed at moderate levels, and a few at high levels. ('open image in new tab' to enlarge.) When genes are "off" they're actually still being transcribed at some low rate. Turning genes "on" up-regulates them by about 100x (with a wide range, depending on the gene).

A result that I think is important comes from Hebenstreit et al., RNA sequencing reveals two major classes of gene expression levels in metazoan cells, Molecular Systems Biology, 2011. Their key result is shown in the figure to the right. If you plot a histogram of RNA-seq expression levels in a single cell type, you tend to get a bimodal-ish distribution, as if there is an "off" peak at around 0.01-0.1 RPKM (very very roughly, 0.01-0.1 mRNA/cell), and an "on" peak around 20 mRNA/cell. (RPKM is "reads per kilobase per million reads", a normalized measure of expression level; more about that below.)
This 20 mRNA/cell or so is so eerily close to what we expect from numerology that it's part of the working model that I currently have in my head when I read RNA-seq papers: • the RNA content of a typical mammalian cell is almost all rRNA and tRNA; a tiny fraction (~2-3%) is mRNA, • which consists of about 400K mRNAs, • from 10K active genes, of 20K total. • Most active genes are expressed at around 20-40 mRNAs/cell. • Inactive genes are leaky, observed at 0.01-0.1 mRNAs/cell. • The difference between active and inactive is about 100-1000x. Some things about RNA-seq experiments make a surprising amount of sense in light of these ballpark numbers. For example, many people are doing cell type specific transcriptomics these days (including my lab). We expect to see 100-1000x "enrichment" for many genes in cross cell type comparisons, but instead, in many protocols, it's more typical to see 2-10x enrichment for genes that are "on" versus "off". This probably reflects the fact that a cell type specific purification protocol isn't perfect, and is carrying along substantial background contamination.

### mRNA synthesis, decay, and steady state

Figure 2a and 2b from Schwanhausser et al. 2011 showing distribution of mammalian mRNA half-lives (in hr), with a median around 9h, and mRNA steady-state levels (in mRNA/cell), with a median around 17. ('open image in new tab' to enlarge.) Let's suppose, as an approximation, that mRNA levels in a cell are at steady state. Let's also assume that there are only two steps controlling mRNA levels: mRNAs are synthesized at rate $$k_1$$ (in mRNAs/hr/cell), and mRNAs decay at rate $$k_2$$ (in hr$$^{-1}$$). Steady state is when the number of new mRNAs synthesized equals the number that decay: $$k_1 = k_2 \mathrm{[mRNA]},$$ $$\mathrm{[mRNA]} = \frac{k_1}{k_2}.$$ So if the synthesis rate is 2 mRNAs/hr/cell, and the mRNA decay rate is 0.1 hr$$^{-1}$$, then there are 20 mRNAs/cell at steady state. mRNA decay is more usually expressed by RNA biochemists in terms of the half-life (in hours), not as the decay rate. The half-life is $$t_{1/2} = \frac{\ln 2}{k_2}$$; and $$k_2 = \frac{\ln 2}{t_{1/2}}$$. Figure 3b from Schwanhausser et al. 2011 showing distribution of inferred mRNA synthesis rates, in mRNA/hr, with a median of around 2. ('open image in new tab' to enlarge.) One reference for typical mammalian mRNA synthesis and decay rates is [Schwanhausser et al., 2011, Global quantification of mammalian gene expression control]. Some of their key figure panels are reproduced here to the right. In mammalian cells, typical synthesis rates are on the order of 2 mRNAs/hr; typical half-lives are on the order of 9h; and steady-state mRNA copy numbers are on the order of 20. If polII initiates at 2 mRNA/hr, on two diploid copies of a ~30kb pre-mRNA transcription unit, as we saw above, the picture is very different from rRNA synthesis. At 40nt/s, it takes ~15 min for polII to transcribe a typical locus, so again these numbers suggest that at any given time there's on the order of zero or one polII complex on a typical gene.

Notice that I'm sneakily slipping back and forth between "mean" and "typical" (i.e. median). If we were talking about normal (Gaussian) distributions of things, mean and median are the same thing. But the distributions we're talking about are skewed: notice that the plots from the Schwanhausser paper are roughly lognormal. So I'm not too bothered by the discrepancy between less than one polII complex per "typical" gene, versus three or so polII's per gene on average; the lognormal distribution shifts the mean upwards from the median, the same way that average household income in the US looks pretty good until you look at the median income.
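Here is a minimal sketch of that steady-state arithmetic; the numbers are just the "typical" values quoted above, nothing more:

```python
import math

def steady_state_mrna(synthesis_rate_per_hr, half_life_hr):
    """Two-step model from the section above:
    d[mRNA]/dt = k1 - k2*[mRNA] = 0  =>  [mRNA] = k1 / k2,
    with the decay rate k2 = ln(2) / t_half."""
    k2 = math.log(2) / half_life_hr
    return synthesis_rate_per_hr / k2

# Schwanhausser-style "typical" numbers: ~2 mRNAs/hr synthesis, ~9 h half-life
print(steady_state_mrna(2.0, 9.0))   # ~26 mRNAs/cell -- same ballpark as ~20
```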
With small numbers like 20 mRNA/cell, you can run into problems with continuous approximations like the steady-state equations above. This is why there's a lot of interest in stochastic dynamics of small numbers of things per cell. For example, many promoters don't fire at a steady rate; single cell experiments have shown that they "burst", with bursts of initiation followed by more quiescent states. We're neglecting that in these ballpark numbers. This steady-state approximation also neglects the dilution effect of cell growth and division: if there's 20 mRNAs/cell, then the cell has to make 20 new mRNAs per cell division to keep things constant. In a HeLa cell doubling every 22hr, that's not an insignificant concern.

### an RNA-seq experiment

Conceptually, an RNA-seq experiment is straightforward, but the details can matter quite a bit in the interpretation. In principle, "all" we're going to do is purify RNA transcripts from a biological sample, and measure the relative level of each transcript $$i$$ in that sample -- $$\tau_i$$, in units of transcripts per million transcripts (TPM) -- essentially by counting a random sample of short RNA fragments. In practice, we need to think about what's going on at each step: • the biological sample might be a cell culture, a whole animal that we're grinding up, a dissected tissue, or some sort of purified subset of cells. The RNA-seq experiment is going to give us population averaged measurements over all the cells in this sample, so we need to think about how heterogeneous it is, especially if we're going to compare different samples. Even within a cell type, mRNA expression levels may change over time (age, circadian rhythms...), with environmental conditions (light, food, behavior...), or with individual differences (genotype, sex). We may need to think about how to hold relevant confounding variables constant in our experimental design. If the sample is composed of a mixture of different cell types, population averages can change just because of changes in their proportion. For example, if your experiment kicks off an inflammatory response in the tissue, it may be infiltrated with immune cells; you'll see an enrichment of genes expressed by those cells not necessarily because the genes turned on, but because the proportion of cells changed. • the RNA prep, where we purify the RNA from the sample, may specifically favor some subsets of RNAs over others. Small RNAs (<100nt or so) may tend to get lost in standard RNA purification steps. mRNAs are often purified by poly-A+ selection, which is designed to select against noncoding RNAs (rRNA and tRNA). • the library prep, where we make a set of (probably fragmented) double-stranded DNAs from the RNA, may also favor certain RNAs over others. Some libraries are dT-primed, so they will favor the 3' end of polyA+ mRNAs (and the occasional genomically-encoded poly-A+ stretch); random-primed libraries shouldn't, but will tend to prime on non-mRNAs (like rRNA) which may not be what you want. • even the sequencing protocol, where we obtain short DNA sequences from one or both ends of our dsDNA library fragments, may have some biases, for example with respect to sequence (GC%) composition.
Nobody should be calling an RNA-seq experiment "unbiased". A good description of an RNA-seq experiment should describe all the steps that may have biased the sequences that ended up being collected.

### from mapped read counts to TPM

Each gene $$i$$ expresses some number of mRNA transcripts per cell; let's call that number $$t_i$$. (We're already making a strong simplifying assumption -- that there's only one mRNA splicing isoform per gene -- but let's go with that for now.) In an RNA-seq experiment, we've usually taken our RNA sample from an unknown number of cells, so we're generally going to have to think in terms of relative proportions, not absolute numbers. In a large sample, the proportion of mRNA transcripts from gene $$i$$ is $$\tau_i = \frac{t_i}{\sum_j t_j}$$. These unknown $$\tau_i$$ expression levels are what we want to infer from our observed counts of mapped reads. In RNA-seq data processing, we map short sequence reads to annotated transcript sequences, which gives us a number of mapped reads per transcript: $$c_i$$. These counts $$c_i$$ are our observed data. The proportion of mapped reads that mapped to transcript $$i$$ is $$\nu_i = \frac{c_i}{\sum_j c_j}$$. We can also consider the normalized $$\nu_i$$ to be observed data. The RNA-seq experiment doesn't measure $$\tau_i$$ directly. We made a fragment library and obtained sequence reads from the ends of each fragment, so long mRNAs are more likely to be sampled than short ones. A basic assumption of short-read RNA-seq analysis is that we sample fragments, and therefore reads, from mRNA transcripts with a probability proportional to $$\tau_i \ell_i$$, where $$\ell_i$$ is the transcript length. (Imagine simulating the experimental process with a Python script.) So: $$\nu_i = \frac{\tau_i \ell_i}{\sum_j \tau_j \ell_j}$$ and therefore: $$\tau_i = \frac{\nu_i}{\ell_i} \left( \sum_j \frac{\nu_j}{\ell_j} \right)^{-1}$$ That is: first we normalize each $$\nu_i$$ individually by the mRNA length $$\ell_i$$; then we normalize by the total sum of $$\frac{\nu_i}{\ell_i}$$ over all genes; and this gives us our estimates of $$\tau_i$$. It's convenient to scale $$\tau_i$$ by a constant that's on the same order as the number of mRNAs per cell: one million. (Fermi estimation again.) Thus $$\tau_i$$ values are reported in units of "transcripts per million transcripts" (TPM), which you can think of as being on the order of mRNAs per cell (very roughly, because different cells have more or fewer than $$10^6$$ mRNAs in them). Look back over that and you'll see several assumptions that we could try to improve on. We don't have to assume that each gene gives rise to only a single mRNA isoform, but because isoforms overlap (share exon sequences) we wouldn't be able to assume that there was an observable 1:1 relationship between a mapped read and a particular mRNA $$i$$; we'd have to make a statistical model that treats the mapping of a read to a specific isoform $$i$$ as something we'd need to infer. We also don't have to assume that all nucleotide positions are uniformly sampled by short reads, because of biases in the RNA-seq library generation and sequencing procedure; we could model that bias. We'll learn some techniques for such statistical modeling as we go forward in the course.

### TPM vs. RPKM

The mathematical model above was explained in a seminal paper from Colin Dewey's group [Li et al., Bioinformatics 2010]. I've followed their notation.
But it isn't the procedure that was used when RNA-seq was first introduced by Barbara Wold's group [Mortazavi et al., Nature Methods 2008]. In Mortazavi et al., a different procedure was used, and gene expression estimates were obtained in units of "reads per kilobase per million mapped reads" (RPKM). (When people started using paired-end sequencing more often, it became more reasonable to talk in terms of mapping library fragments instead of independent reads, thus fragments per kilobase per million mapped reads (FPKM), but RPKM and FPKM are the same thing for our purposes.) Converting the mapped read proportions $$\nu_i$$ to RPKM gives us $$\frac{\nu_i}{\ell_i} \cdot 10^9$$. ($$10^9$$ because per kilobase per million.) This is proportional to $$\tau_i$$ (within a given sample) but it's unnormalized. We would need to multiply by $$10^{-3} \sum_j \tau_j \ell_j$$ if we want to convert RPKM to TPM: that is, by the abundance-weighted mean mRNA transcript length, in kb. If the abundance-weighted mean mRNA transcript length is 1kb, TPM and RPKM are the same thing. So do we care? A problem with RPKM arises when we start trying to compare across samples, because different samples don't necessarily have the same abundance-weighted mean mRNA transcript length. Maybe in one sample some long mRNAs went up, and some short mRNAs went down. Now we could see that an mRNA transcript $$i$$, present at exactly the same proportion $$\tau_i$$ in the mRNA populations of the two samples, would show different RPKM measures just because some other genes shifted their expression and altered the abundance-weighted mean mRNA transcript length. Don't dis RPKMs too much though, even if it's amusing to see Lior Pachter flagellate himself over it, because you can get a related normalization artifact from TPMs too, and this artifact is probably even more common. The relative abundance $$\tau_i$$ of transcript $$i$$ is obviously going to be affected by the absolute expression level of every other mRNA in the cell, so it is a mistake to assume that a change in $$\tau_i$$ necessarily means a change in the expression level of gene $$i$$ (in RNAs/cell). If you turn a bunch of genes up, the relative proportion of other genes must go down even if their absolute concentration remains unchanged. One common way this can happen is if you alter the growth rate, which changes the expression level of a large battery of genes having to do with making ribosomes (among other things). There's been an amazing amount of wailing and gnashing of teeth over RPKM vs. TPM. Much of it seems confusing and/or wrong to me. I think it's just an example of people passing along lore without having a quantitative understanding of what they're talking about. The Li et al. 2010 paper is clear and correct.
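To make the bookkeeping concrete, here is a toy Python sketch of the TPM and RPKM formulas above; the counts and lengths are invented, and it ignores everything a real pipeline has to worry about (isoforms, multimapping, positional bias):

```python
import numpy as np

def tpm_and_rpkm(counts, lengths):
    """Toy TPM/RPKM calculation in the notation above:
    nu_i = c_i / sum(c);  TPM_i = 1e6 * (nu_i/l_i) / sum_j(nu_j/l_j);
    RPKM_i = 1e9 * nu_i / l_i."""
    counts = np.asarray(counts, dtype=float)
    lengths = np.asarray(lengths, dtype=float)   # transcript lengths in nt
    nu = counts / counts.sum()                   # observed read proportions
    rate = nu / lengths                          # proportional to tau_i
    tpm = rate / rate.sum() * 1e6                # transcripts per million
    rpkm = rate * 1e9                            # reads per kb per million mapped reads
    return tpm, rpkm

# Two transcripts at the same true abundance but different lengths: raw counts
# favor the long one, while TPM recovers equal relative abundance.
tpm, rpkm = tpm_and_rpkm(counts=[1000, 100], lengths=[10_000, 1_000])
print(tpm)    # [500000. 500000.]
print(rpkm)   # both ~90909 here; identical only because this toy mix happens that way
```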
web
auto_math_text
1. # Growth models Sun 13 December 2015 ## Logistic model ### A resource consumption view Consider a resource consumption model that follows the density of a single microbial species through time $N(t)$ and the density of that species' limiting resource $R(t)$: $\frac{dR}{dt} = -a R N, \qquad \frac{dN}{dt} = \epsilon a R N$ where $a$ is ... (a brief numerical sketch of this model follows this index) 2. # Organizing a poster session Thu 18 June 2015 Yesterday (June 17th, 2015) we organized the first student poster session of the Python Programming for Biologists course at Tel-Aviv University. In the poster session, the students presented the research they did using things they learned during the course (sequence data analysis, mathematical modeling of population dynamics, statistical analysis ... 3. # Muller's ratchet Sun 22 December 2013 Following Gordo & Charlesworth (2000). This Wright-Fisher model starts with a haploid asexual population at a mutation-selection balance. The population size is $N$, the mutation rate is $u$ and the selection coefficient is $s$. Denote the frequency of the best class by $x$ and its initial value \(x_0 = e^{-u/s ... 4. # The distribution of deleterious mutations at the mutation-selection balance Tue 17 December 2013 If we sample a random individual from an asexual population that had a lot of time to adapt to its environment, how many deleterious mutations can we expect it to have? This distribution of deleterious mutations is the starting point of many population genetics models. In an earlier post we've ... 5. # Summary: "Evolution of mutation rates in bacteria" (Denamur and Matic 2006) Sun 14 July 2013 This is an "executive summary" of Denamur and Matic (2006), which is a review of the literature on the evolution of the mutation rate in bacteria. 1. Deleterious mutations are 100,000-fold more frequent than beneficial mutations in E. coli. 2. Mutators have been found in various species of bacteria in frequencies ... 6. # Summary: "Mutation rates: How low can you go?" (Sniegowski and Raynes 2013) Sun 16 June 2013 This is a short summary of Sniegowski and Raynes (2013), a review about the evolution of the mutation rate, with an emphasis on the Drift Barrier Hypothesis (DBH). The summary is written in my own words, with a few footnotes and highlighting that express my thoughts. 1. Because most mutations are ... 7. # Summary: "Balancing robustness and evolvability" (Lenski et al. 2006) Sun 16 June 2013 This is a summary of Lenski, Barrick and Ofria (2006). The following are direct quotes of the original article. My comments are given as footnotes, highlighting and italics. • Organisms must have a balance between robustness and evolvability, that is, between resisting and allowing change in their own internal states. • A ... 8. # Summary: "Mutators and sex in bacteria: Conflict between adaptive strategies" (Tenaillon et al. 2000) Mon 14 January 2013 ## Overview This post is mostly a technical summary of the paper by Tenaillon, Le Nagard, Godelle and Taddei (2000). I wrote the summary because I use it as a baseline for my own research, which involves the evolution of stress-induced mutators (Ram and Hadany 2012). The hypothesis the paper deals ... 9. # The convergence of mean fitness towards the mutation-selection balance Mon 19 November 2012 ## Overview In an earlier post I described how the mean fitness of a population at the mutation-selection balance can be analysed. I assumed that the population is asexual, that only deleterious mutations occur, that there is no drift or recombination, and that selection is constant.
In this post I would ... 10. # Mean fitness at the mutation-selection balance Sun 14 October 2012 ## Overview The first post on the Mutation-Selection Blog must be about the mutation-selection balance, right? So what is the mutation-selection balance? In evolutionary biology, selection acts to remove deleterious mutations from the population, while mutation generates new deleterious mutations. When they cancel each other out, the population is at the ...
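Returning to the first post in this index: since its resource-consumption model is written out explicitly, here is a brief numerical sketch of it. The parameter values and initial conditions are arbitrary illustrations, not taken from the post.

```python
# Integrate dR/dt = -a*R*N, dN/dt = eps*a*R*N numerically.
from scipy.integrate import solve_ivp

a, eps = 1e-9, 0.5          # consumption rate and conversion efficiency (made up)

def rhs(t, y):
    R, N = y                # resource density, species density
    return [-a * R * N, eps * a * R * N]

sol = solve_ivp(rhs, (0, 48), [1e9, 1e3])   # 48 "hours" from R0=1e9, N0=1e3
R_end, N_end = sol.y[:, -1]
print(f"final resource {R_end:.3g}, final density {N_end:.3g}")
# Since d(N + eps*R)/dt = 0, N saturates near N0 + eps*R0 as R is exhausted,
# which is the logistic-like carrying capacity of this model.
```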
web
auto_math_text
# Terrestrial planet search review paper 1. Jan 22, 2006 ### marcus http://arxiv.org/abs/astro-ph/0601469 Comparative Planetology and the Search for Life Beyond the Solar System Charles A. Beichman, Malcolm Fridlund, Wesley A. Traub, Karl R. Stapelfeldt, Andreas Quirrenbach, Sara Seager To Appear in Protostars and Planets V "The study of planets beyond the solar system and the search for other habitable planets and life is just beginning. Ground-based (radial velocity and transits) and space-based surveys (transits and astrometry) will identify planets spanning a wide range of size and orbital location, from Earth-sized objects within 1 AU to giant planets beyond 5 AU, orbiting stars as near as a few parsec and as far as a kiloparsec. After this initial reconnaissance, the next generation of space observatories will directly detect photons from planets in the habitable zones of nearby stars. The synergistic combination of measurements of mass from astrometry and radial velocity, of radius and composition from transits, and the wealth of information from the direct detection of visible and mid-IR photons will create a rich field of comparative planetology. Information on proto-planetary and debris disks will complete our understanding of the evolution of habitable environments from the earliest stages of planet-formation through to the transport into the inner solar system of the volatiles necessary for life. The suite of missions necessary to carry out the search for nearby, habitable planets and life requires a "Great Observatories" program for planet finding (SIM PlanetQuest, Terrestrial Planet Finder-Coronagraph, and Terrestrial Planet Finder-Interferometer/Darwin), analogous to the highly successful "Great Observatories Program" for astrophysics. With these new Great Observatories, plus the James Webb Space Telescope, we will extend planetology far beyond the solar system, and possibly even begin the new field of comparative evolutionary biology with the discovery of life itself in different astronomical settings."
web
auto_math_text
pqueue-1.4.1.3: Reliable, persistent, fast priority queues. Data.PQueue.Prio.Max Description General purpose priority queue. Each element is associated with a key, and the priority queue supports viewing and extracting the element with the maximum key. A worst-case bound is given for each operation. In some cases, an amortized bound is also specified; these bounds do not hold in a persistent context. This implementation is based on a binomial heap augmented with a global root. The spine of the heap is maintained lazily. To force the spine of the heap, use seqSpine. We do not guarantee stable behavior. Ties are broken arbitrarily -- that is, if k1 <= k2 and k2 <= k1, then there are no guarantees about the relative order in which k1, k2, and their associated elements are returned. (Unlike Data.Map, we allow multiple elements with the same key.) This implementation offers a number of methods of the form xxxU, where U stands for unordered. No guarantees whatsoever are made on the execution or traversal order of these functions. Synopsis # Documentation data MaxPQueue k a Source # A priority queue where values of type a are annotated with keys of type k. The queue supports extracting the element with maximum key. Instances: Functor (MaxPQueue k); Ord k => Foldable (MaxPQueue k); Ord k => Traversable (MaxPQueue k); (Ord k, Eq a) => Eq (MaxPQueue k a); (Data k, Data a, Ord k) => Data (MaxPQueue k a); (Ord k, Ord a) => Ord (MaxPQueue k a); (Read k, Read a) => Read (MaxPQueue k a); (Ord k, Show k, Show a) => Show (MaxPQueue k a); Ord k => Semigroup (MaxPQueue k a); Ord k => Monoid (MaxPQueue k a); (NFData k, NFData a) => NFData (MaxPQueue k a). # Construction empty :: MaxPQueue k a Source # O(1). Returns the empty priority queue. singleton :: k -> a -> MaxPQueue k a Source # O(1). Constructs a singleton priority queue. insert :: Ord k => k -> a -> MaxPQueue k a -> MaxPQueue k a Source # Amortized O(1), worst-case O(log n). Inserts an element with the specified key into the queue. insertBehind :: Ord k => k -> a -> MaxPQueue k a -> MaxPQueue k a Source # O(n) (an earlier implementation had O(1) but was buggy). Insert an element with the specified key into the priority queue, putting it behind elements whose key compares equal to the inserted one. union :: Ord k => MaxPQueue k a -> MaxPQueue k a -> MaxPQueue k a Source # Amortized O(log(min(n1, n2))), worst-case O(log(max(n1, n2))). Returns the union of the two specified queues. unions :: Ord k => [MaxPQueue k a] -> MaxPQueue k a Source # The union of a list of queues: (unions == foldl union empty). # Query null :: MaxPQueue k a -> Bool Source # O(1). Checks if this priority queue is empty. size :: MaxPQueue k a -> Int Source # O(1). Returns the size of this priority queue. ## Maximum view findMax :: MaxPQueue k a -> (k, a) Source # O(1). The maximal (key, element) in the queue. Calls error if empty. getMax :: MaxPQueue k a -> Maybe (k, a) Source # O(1). The maximal (key, element) in the queue, if the queue is nonempty. deleteMax :: Ord k => MaxPQueue k a -> MaxPQueue k a Source # O(log n). Delete and find the element with the maximum key.
Calls error if empty. deleteFindMax :: Ord k => MaxPQueue k a -> ((k, a), MaxPQueue k a) Source # O(log n). Delete and find the element with the maximum key. Calls error if empty. adjustMax :: (a -> a) -> MaxPQueue k a -> MaxPQueue k a Source # O(1). Alter the value at the maximum key. If the queue is empty, does nothing. adjustMaxWithKey :: (k -> a -> a) -> MaxPQueue k a -> MaxPQueue k a Source # O(1). Alter the value at the maximum key. If the queue is empty, does nothing. updateMax :: Ord k => (a -> Maybe a) -> MaxPQueue k a -> MaxPQueue k a Source # O(log n). (Actually O(1) if there's no deletion.) Update the value at the maximum key. If the queue is empty, does nothing. updateMaxWithKey :: Ord k => (k -> a -> Maybe a) -> MaxPQueue k a -> MaxPQueue k a Source # O(log n). (Actually O(1) if there's no deletion.) Update the value at the maximum key. If the queue is empty, does nothing. maxView :: Ord k => MaxPQueue k a -> Maybe (a, MaxPQueue k a) Source # O(log n). Retrieves the value associated with the maximum key of the queue, and the queue stripped of that element, or Nothing if passed an empty queue. maxViewWithKey :: Ord k => MaxPQueue k a -> Maybe ((k, a), MaxPQueue k a) Source # O(log n). Retrieves the maximal (key, value) pair of the map, and the map stripped of that element, or Nothing if passed an empty map. # Traversal ## Map map :: (a -> b) -> MaxPQueue k a -> MaxPQueue k b Source # O(n). Map a function over all values in the queue. mapWithKey :: (k -> a -> b) -> MaxPQueue k a -> MaxPQueue k b Source # O(n). Map a function over all values in the queue. mapKeys :: Ord k' => (k -> k') -> MaxPQueue k a -> MaxPQueue k' a Source # O(n). Map a function over all values in the queue. mapKeysMonotonic :: (k -> k') -> MaxPQueue k a -> MaxPQueue k' a Source # O(n). mapKeysMonotonic f q == mapKeys f q, but only works when f is strictly monotonic. The precondition is not checked. This function has better performance than mapKeys. ## Fold foldrWithKey :: Ord k => (k -> a -> b -> b) -> b -> MaxPQueue k a -> b Source # O(n log n). Fold the keys and values in the map, such that foldrWithKey f z q == foldr (uncurry f) z (toDescList q). If you do not care about the traversal order, consider using foldrWithKeyU. foldlWithKey :: Ord k => (b -> k -> a -> b) -> b -> MaxPQueue k a -> b Source # O(n log n). Fold the keys and values in the map, such that foldlWithKey f z q == foldl (uncurry . f) z (toDescList q). If you do not care about the traversal order, consider using foldlWithKeyU. ## Traverse traverseWithKey :: (Ord k, Applicative f) => (k -> a -> f b) -> MaxPQueue k a -> f (MaxPQueue k b) Source # O(n log n). Traverses the elements of the queue in descending order by key. (traverseWithKey f q == fromDescList$ traverse (uncurry f) (toDescList q)) If you do not care about the order of the traversal, consider using traverseWithKeyU. # Subsets ## Indexed take :: Ord k => Int -> MaxPQueue k a -> [(k, a)] Source # O(k log n). Takes the first k (key, value) pairs in the queue, or the first n if k >= n. (take k q == take k (toDescList q)) drop :: Ord k => Int -> MaxPQueue k a -> MaxPQueue k a Source # O(k log n). Deletes the first k (key, value) pairs in the queue, or returns an empty queue if k >= n. splitAt :: Ord k => Int -> MaxPQueue k a -> ([(k, a)], MaxPQueue k a) Source # O(k log n). Equivalent to (take k q, drop k q). ## Predicates takeWhile :: Ord k => (a -> Bool) -> MaxPQueue k a -> [(k, a)] Source # Takes the longest possible prefix of elements satisfying the predicate. 
(takeWhile p q == takeWhile (p . snd) (toDescList q)) takeWhileWithKey :: Ord k => (k -> a -> Bool) -> MaxPQueue k a -> [(k, a)] Source # Takes the longest possible prefix of elements satisfying the predicate. (takeWhileWithKey p q == takeWhile (uncurry p) (toDescList q)) dropWhile :: Ord k => (a -> Bool) -> MaxPQueue k a -> MaxPQueue k a Source # Removes the longest possible prefix of elements satisfying the predicate. dropWhileWithKey :: Ord k => (k -> a -> Bool) -> MaxPQueue k a -> MaxPQueue k a Source # Removes the longest possible prefix of elements satisfying the predicate. span :: Ord k => (a -> Bool) -> MaxPQueue k a -> ([(k, a)], MaxPQueue k a) Source # Equivalent to (takeWhile p q, dropWhile p q). spanWithKey :: Ord k => (k -> a -> Bool) -> MaxPQueue k a -> ([(k, a)], MaxPQueue k a) Source # Equivalent to (takeWhileWithKey p q, dropWhileWithKey p q). break :: Ord k => (a -> Bool) -> MaxPQueue k a -> ([(k, a)], MaxPQueue k a) Source # Equivalent to span (not . p). breakWithKey :: Ord k => (k -> a -> Bool) -> MaxPQueue k a -> ([(k, a)], MaxPQueue k a) Source # Equivalent to spanWithKey (\k a -> not (p k a)) q. ### Filter filter :: Ord k => (a -> Bool) -> MaxPQueue k a -> MaxPQueue k a Source # O(n). Filter all values that satisfy the predicate. filterWithKey :: Ord k => (k -> a -> Bool) -> MaxPQueue k a -> MaxPQueue k a Source # O(n). Filter all values that satisfy the predicate. partition :: Ord k => (a -> Bool) -> MaxPQueue k a -> (MaxPQueue k a, MaxPQueue k a) Source # O(n). Partition the queue according to a predicate. The first queue contains all elements which satisfy the predicate, the second all elements that fail the predicate. partitionWithKey :: Ord k => (k -> a -> Bool) -> MaxPQueue k a -> (MaxPQueue k a, MaxPQueue k a) Source # O(n). Partition the queue according to a predicate. The first queue contains all elements which satisfy the predicate, the second all elements that fail the predicate. mapMaybe :: Ord k => (a -> Maybe b) -> MaxPQueue k a -> MaxPQueue k b Source # O(n). Map values and collect the Just results. mapMaybeWithKey :: Ord k => (k -> a -> Maybe b) -> MaxPQueue k a -> MaxPQueue k b Source # O(n). Map values and collect the Just results. mapEither :: Ord k => (a -> Either b c) -> MaxPQueue k a -> (MaxPQueue k b, MaxPQueue k c) Source # O(n). Map values and separate the Left and Right results. mapEitherWithKey :: Ord k => (k -> a -> Either b c) -> MaxPQueue k a -> (MaxPQueue k b, MaxPQueue k c) Source # O(n). Map values and separate the Left and Right results. # List operations ## Conversion from lists fromList :: Ord k => [(k, a)] -> MaxPQueue k a Source # O(n). Build a priority queue from the list of (key, value) pairs. fromAscList :: [(k, a)] -> MaxPQueue k a Source # O(n). Build a priority queue from an ascending list of (key, value) pairs. The precondition is not checked. fromDescList :: [(k, a)] -> MaxPQueue k a Source # O(n). Build a priority queue from a descending list of (key, value) pairs. The precondition is not checked. ## Conversion to lists keys :: Ord k => MaxPQueue k a -> [k] Source # O(n log n). Return all keys of the queue in descending order. elems :: Ord k => MaxPQueue k a -> [a] Source # O(n log n). Return all elements of the queue in descending order by key. assocs :: Ord k => MaxPQueue k a -> [(k, a)] Source # O(n log n). Equivalent to toDescList. toAscList :: Ord k => MaxPQueue k a -> [(k, a)] Source # O(n log n). Return all (key, value) pairs in ascending order by key. toDescList :: Ord k => MaxPQueue k a -> [(k, a)] Source # O(n log n).
Return all (key, value) pairs in descending order by key. toList :: Ord k => MaxPQueue k a -> [(k, a)] Source # O(n log n). Equivalent to toDescList. If the traversal order is irrelevant, consider using toListU. # Unordered operations foldrU :: (a -> b -> b) -> b -> MaxPQueue k a -> b Source # O(n). An unordered right fold over the elements of the queue, in no particular order. foldrWithKeyU :: (k -> a -> b -> b) -> b -> MaxPQueue k a -> b Source # O(n). An unordered right fold over the elements of the queue, in no particular order. foldlU :: (b -> a -> b) -> b -> MaxPQueue k a -> b Source # O(n). An unordered left fold over the elements of the queue, in no particular order. foldlWithKeyU :: (b -> k -> a -> b) -> b -> MaxPQueue k a -> b Source # O(n). An unordered left fold over the elements of the queue, in no particular order. traverseU :: Applicative f => (a -> f b) -> MaxPQueue k a -> f (MaxPQueue k b) Source # O(n). An unordered traversal over a priority queue, in no particular order. While there is no guarantee in which order the elements are traversed, the resulting priority queue will be perfectly valid. traverseWithKeyU :: Applicative f => (k -> a -> f b) -> MaxPQueue k a -> f (MaxPQueue k b) Source # O(n). An unordered traversal over a priority queue, in no particular order. While there is no guarantee in which order the elements are traversed, the resulting priority queue will be perfectly valid. keysU :: MaxPQueue k a -> [k] Source # O(n). Return all keys of the queue in no particular order. elemsU :: MaxPQueue k a -> [a] Source # O(n). Return all elements of the queue in no particular order. assocsU :: MaxPQueue k a -> [(k, a)] Source # O(n). Equivalent to toListU. toListU :: MaxPQueue k a -> [(k, a)] Source # O(n). Returns all (key, value) pairs in the queue in no particular order. # Helper methods seqSpine :: MaxPQueue k a -> b -> b Source # O(log n). Analogous to deepseq in the deepseq package, but only forces the spine of the binomial heap.
web
auto_math_text
# Group Members

Name — Student id: Jan Romme — 0755197; Freek Ramp — 0663262; Kushagra — 0873174; Roel Smallegoor — 0753385. Janno Lunenburg — Tutor.

# Software architecture and used approach

A schematic overview of the software architecture is shown in the figure below. The software consists of only 3 nodes: laserProcessor, arrowDetection and strategy. A short elaboration of the nodes:

laserProcessor: This node reads the /pico/laser topic, which sends the laser data in polar coordinates. All this node does is convert the polar coordinates to Cartesian coordinates and filter out points closer than 10 cm. The Cartesian coordinate system is preferred over the polar coordinate system because it feels more intuitive. Finally, once the conversion and filtering are done, it publishes the transformed coordinates onto the topic /cloud.

arrowDetection: This node reads the camera image from the topic /pico/asusxtion/rgb/image_color and detects red arrows. If it detects an arrow, the direction is determined and posted onto the topic /arrow.

strategy: This node does multiple things. First it detects openings and assigns a target to each; if no openings are found, the robot finds itself in a dead end. Secondly, it reads the topic /arrow, and if an arrow is present it overrides the preferred direction. Finally, it determines the direction of the robot's velocity using the Potential Field Method (PFM) and sends the velocity to the topic /pico/cmd_vel.

The approach is very simple and easy to implement: the strategy node, for example, has only about 100 lines of code. This is why we chose not to divide all elements over different nodes. The benefit of simple code is that bugs are easier to tackle and less likely to occur. Another benefit of this 'simple' code is that it is easy to tweak the robot's behavior to optimize performance.

# Applied methods description

## Arrow detection

The following steps describe the algorithm to find the arrow and determine its direction:

1. Read the RGB image from the /pico/asusxtion/rgb/image_color topic.
2. Convert the RGB image to an HSV image.
3. Filter out the red color using cv::inRange.
4. Find contours and convex hulls and apply a filter. The filter removes all contours and convex hulls where the following relationship does not hold: $\displaystyle{ 0.5 \lt \frac{Contour \ area}{Convex \ hull \ area} \lt 0.65 }$.
5. Use cv::approxPolyDP over the contours and apply a filter. The function cv::approxPolyDP is used to fit polylines over the resulting contours. The arrow should have approximately 7 lines per polyline. The polyline with the number of lines as described in the following formula is the arrow: $\displaystyle{ 5 \ \lt \ number \ of \ lines \ in \ polyline \ \lt \ 10 }$
6. Determine if the arrow is pointing left or right. First the midpoint of the arrow is found using $\displaystyle{ x_{mid} = \frac{x_{min}+x_{max}}{2} }$. When the midpoint is known, the program iterates over all points of the arrow contour. Two counters count the number of points left and right of $\displaystyle{ x_{mid} }$. If the left counter is greater than the right counter the arrow is pointing to the left, otherwise the arrow is pointing to the right.
7. Make the detection more robust. Extra robustness is required because, for example, when the arrow is not detected in one frame the program still has to know there is an arrow. This is done by taking the last 5 iterations: if the arrow is detected in all of these iterations, the direction of the arrow is published onto the topic /arrow. If no arrow has been detected in the last 5 iterations, the arrow is no longer visible and the node publishes that there is no arrow onto the topic /arrow.
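A rough Python/OpenCV sketch of these steps is given below. The original node was written against the ROS/OpenCV C++ API; only the 0.5–0.65 area ratio and the 5–10 polyline-line count come from the description above, while the HSV bounds and the approxPolyDP tolerance are illustrative guesses.

```python
import cv2
import numpy as np

def detect_arrow(bgr_image):
    """Return 'left', 'right', or None for one camera frame (single-frame step;
    the 5-iteration smoothing from step 7 would be layered on top of this)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    red_mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))   # red filter (step 3)
    contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        hull = cv2.convexHull(contour)
        hull_area = cv2.contourArea(hull)
        if hull_area == 0:
            continue
        ratio = cv2.contourArea(contour) / hull_area
        if not 0.5 < ratio < 0.65:                               # step 4
            continue
        poly = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if not 5 < len(poly) < 10:                               # step 5
            continue
        xs = contour[:, 0, 0]                                    # x coords of contour points
        x_mid = (xs.min() + xs.max()) / 2                        # step 6
        return "left" if np.sum(xs < x_mid) > np.sum(xs > x_mid) else "right"
    return None
```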
## Strategy

### Target

In the approach used, a target is required for Pico to drive to. In this section the determination of the target point is explained. In the code a loop iterates through all data points, which are in order from right to left. Only the points within 1.4 meters of Pico are taken into account. If the distance between two consecutive points is greater than a determined threshold, the presence of a door (opening) is assumed. This will be further explained with the following figure. In the figure the vision of Pico is illustrated by the dotted black arc and the detected points within this margin by red dots. In the left figure one can see that two openings are detected, i.e. the distance between two consecutive points is greater than a user-defined threshold value (0.6 m). The potential targets (shown in blue) are simply placed in the middle of the two consecutive points. In the middle figure a problem is illustrated which occurred while testing. When Pico looks past a corner it somehow detects points which are not really there, so-called ghost points. This is a problem because in the case of ghost points the distance between two points becomes smaller than the threshold and no openings are detected. To filter this effect the distance is not taken between two points, but over five (found by testing). A downside of this method is that points far ahead of Pico give the possibility of false positives. But because in this case Pico only looks at a relatively short distance, this will not occur. In the figure on the right the filter is applied and the door is found again. The targets are used to orientate Pico. By adjusting $\displaystyle{ \theta }$ the target point is kept straight in front of Pico.

### PFM

Now that the target points are determined, Pico needs to drive towards them without hitting any walls. In the first strategy, shown at the bottom of this page, walls were fitted so Pico knew where to drive. This method had robustness issues which led to some problems during the corridor challenge. Therefore a more robust method was chosen for the maze competition. With the Potential Field Method, virtual repulsive forces are applied to Pico for every laser point. The force caused by a point depends on the distance, $\displaystyle{ r }$, between Pico and the laser point. The virtual force is defined as: $\displaystyle{ F_r=-\frac{1}{r^5} }$, where the resulting force $\displaystyle{ F }$ is pointing away from the obstacle. When the sum is taken over all repulsive forces, the net repulsive force points away from close objects. A target is added as an attractive force. The function for the attractive force is $\displaystyle{ F_a=r }$. In one of the first iterations of the software the attractive force was composed like the repulsive force, but during testing a linear relation between distance and force of the target point proved to work best. The resulting force is not always the same, and because Pico must drive as fast as possible only the direction of the total force is taken. The maximum speed is set in the direction of the resulting force.
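A minimal sketch of this potential-field step, in plain NumPy and in the robot frame; all ROS plumbing and tuning constants are omitted, and the laser points and target below are invented:

```python
import numpy as np

def pfm_direction(points_xy, target_xy):
    """Unit direction of the total virtual force: repulsive 1/r^5 away from
    every laser point, plus an attractive force equal to the target vector."""
    points = np.asarray(points_xy, dtype=float)     # laser points, robot at origin
    target = np.asarray(target_xy, dtype=float)
    r = np.linalg.norm(points, axis=1, keepdims=True)
    repulsive = np.sum(-points / r * (1.0 / r**5), axis=0)  # push away from obstacles
    attractive = target                                      # F_a = r towards target
    total = repulsive + attractive
    return total / np.linalg.norm(total)            # only the direction is used

# Example: two nearby wall points and a target 1.2 m straight ahead.
direction = pfm_direction([[0.3, 0.4], [0.5, -0.2]], target_xy=[1.2, 0.0])
```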
The strategy node detects a dead end when no doors are detected. The robot anticipates by turning around its z-axis in the direction opposite to the preferred direction (which is always left or always right). When it detects a door it continues the normal procedure. Consider the following example:

1. The robot sees a door and keeps on going. The preferred direction of the robot is going left.
2. The robot sees 2 options, but it takes the leftmost door since this is the preferred direction.
3. The robot sees one door and drives to the target.
4. The robot does not see any doors; it recognises a dead end. It now starts turning around its z-axis in clockwise direction (opposite to the preferred direction, which was left).
5. The robot detects a new door and continues the normal procedure.

In this case a dead end is only detected if it is within the vision of Pico, which is limited to 1.4 meters. When the dead end is longer than in the example, Pico will drive into the dead end until it is within the vision. If the vision field for dead ends were extended, a dead end would be detected sooner and Pico would finish the maze faster. There was no time, however, to implement this in the algorithm.

### Safety

We recognised the importance of safety and the requirement that Pico should not hit an obstacle under any circumstances. For this reason a safety feature was introduced. In an ideal situation, Pico should never need to use this safety feature. The program was designed in such a way that the robot could complete its task without using 'Safety'. But there are many variables in the real world, all of which are difficult to account for, particularly in the given time frame. It is always a possibility that, for some unforeseen reason, Pico gets too close to an object and might hit it. 'Safety' was designed to prevent this. The first version of 'Safety' was designed to stop Pico if it moves very close to any object it detects. This version was, of course, not very effective, as the robot would come to a standstill once Safety was invoked, which prevented it from finishing its task. 'Safety' was later upgraded. When too close, Pico needs to move away from the obstacle, and this movement should not interfere with the path detection and spatial orientation of the robot. In the latest version, a more sophisticated safety design was incorporated. Two circles, a critical circle and a safety circle, are considered. The program first counts the number of points detected within the critical circle. If more than 10 points are counted, 'Safety' mode is started; this is done for noise filtering. Safety is given the highest priority and overrides decisions for the movement of Pico. The program then calculates the average x and y position of all points within the safety circle (the safety circle is larger than the critical circle). Without changing the direction it is facing, Pico moves in the direction directly opposite to the average obstacle point. Since there is no turning involved, the spatial orientation of Pico remains unaffected. This point is updated every cycle and Pico keeps moving away until the point moves out of the safety circle. 'Safety' mode is then turned off and normal operation is resumed.

### End of maze

A part of the script is dedicated to determining whether Pico has reached the end of the maze and it is time to stop. In the point cloud, a maximum of 1080 points can be detected in a 270 degree range in front of Pico. There are many differences that Pico can detect when it is outside the maze compared to within the maze.
### End of maze

A part of the script is dedicated to determining whether Pico has reached the end of the maze and it is time to stop. In the point cloud, a maximum of 1080 points can be detected in a 270 degree range in front of Pico. There are several differences that Pico can detect when it is outside the maze compared to within the maze. One of these differences, the one implemented in the program, is that the detected points are farther away from Pico when it is outside the maze. Pico counts the number of points that are farther than 1.6 meters away. Within the maze this number never exceeds 400 points. When the number is greater than 0.6 times the maximum possible (0.6 x 1080 = 648), Pico runs in 'End of maze' mode. In this mode Pico drives straight for forty cycles, after which it comes to a standstill. The 'End of maze' condition is checked every cycle and if, at any point, the count is not large enough, the cycle counter is reset to zero and Pico leaves 'End of maze' mode. This is a preventive measure against false end-of-maze detection.
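A compact sketch of this end-of-maze check, using the thresholds quoted above (1.6 m, 648 points, 40 cycles); the class and method names are illustrative and not taken from the actual PICO code.

```cpp
#include <vector>

class EndOfMazeDetector {
public:
    // ranges: one laser range per beam (up to 1080 beams over 270 degrees).
    // Returns true once Pico has driven straight for 40 consecutive cycles
    // in 'End of maze' mode and should come to a standstill.
    bool update(const std::vector<double>& ranges) {
        int farPoints = 0;
        for (double r : ranges)
            if (r > 1.6) ++farPoints;          // points farther than 1.6 m
        if (farPoints > 648) {
            ++cycles_;                          // keep driving straight this cycle
        } else {
            cycles_ = 0;                        // reset: guards against false detections
        }
        return cycles_ >= 40;
    }
private:
    int cycles_ = 0;
};
```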
# First software approach

## Obstacle Detection

### Finding Walls and Doors

Our first approach was to find walls and doors (openings) expressed as lines, so we could easily steer the robot to a certain door. The following algorithm was developed:

1. All laser points are plotted in a cv::Mat structure (left). Then a Probabilistic Hough Line Transform is applied to the plotted laser points (right).
2. The resulting lines are filtered into two categories, a most common direction and a second most common direction. This is shown in the figure below.
3. Since the Probabilistic Hough Line Transform returns multiple lines per wall, i.e. a wall can consist of 20 lines, a line is fitted through the contour of each wall. The result is one line per wall, as shown in the figure below.
4. Now it is possible to fit 'doors' between walls, which is shown in the figure below.

### Why this method doesn't work in real life

The method described above for finding walls and doors did not work properly in real life. We experienced serious robustness problems: in some iterations complete walls and/or doors were not detected, so the robot could not steer properly towards the target. Secondly, this method required considerable computational power, which is not desirable.

# Evaluation of the maze competition

During the corridor competition it became clear that the chosen strategy was not robust enough to detect the maze in real life. This first strategy did work in the simulator, but not in real life. To achieve a more robust strategy we therefore changed our approach, as described above. While testing the second strategy in the days before the maze competition we noticed that this had been a good decision, with excellent results. This conclusion was drawn because PICO was not only able to drive correctly through a maze with straight walls and 90 degree corners, it was also able to drive through a 'maze' with bent walls and corners at arbitrary angles. Even when people were walking through the maze, PICO still did not give up solving it. This made us confident about the performance during the real competition. During the competition the 'always go left' rule was chosen, although it did not matter because of the design of the maze. As a result, PICO chose the correct direction at the first crossing. The correct action at the second crossing was taking a right turn. PICO first looked into the left and front corridor to check whether it was a dead end, which it was, so it turned right. The crossings in the rest of the maze were provided with a red arrow, which was detected correctly and used to steer into the right corridor.

To improve the strategy, a few adaptations for efficiency and robustness can be made. The first corner was taken really fast; it seemed like PICO was 'drifting' through the corner, which is the most speed-efficient way to pass a crossing. At the second crossing, however, it first had to check whether the other directions were dead ends, which costs valuable time. To improve this, the dead-end detection should look at a wider range of points instead of only in front of PICO, which would make it turn in the right direction even earlier. To achieve even better robustness the ghost-point 'filter' should be improved. In the current strategy a fixed number of points is used when summing the distances, but this is not perfectly robust. Although PICO finished correctly, an algorithm that detects whether a point is a ghost point (for example by checking whether there are other points close to the current point) would make the detection even more robust; a sketch of such a check is given at the end of this section. In conclusion, the chosen strategy turned out to be robust and efficient compared to those of other groups. Only a few lines of code are used and the strategy does not use a lot of data. During the competition we finished first with a total time of 1 minute and 2 seconds, a great achievement!
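As referenced in the improvements above, one possible ghost-point check is to count how many other laser points lie close to each point and to discard isolated ones, since real laser hits on a wall form dense clusters. The following sketch illustrates that idea; the radius and the minimum neighbour count are assumed values, not tuned parameters from the PICO software.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Flags a point as a ghost point if fewer than minNeighbours other laser
// points lie within 'radius' of it.
std::vector<bool> flagGhostPoints(const std::vector<Vec2>& points,
                                  double radius = 0.1, int minNeighbours = 2) {
    std::vector<bool> isGhost(points.size(), false);
    for (std::size_t i = 0; i < points.size(); ++i) {
        int neighbours = 0;
        for (std::size_t j = 0; j < points.size(); ++j) {
            if (i == j) continue;
            if (std::hypot(points[i].x - points[j].x,
                           points[i].y - points[j].y) < radius)
                ++neighbours;
        }
        isGhost[i] = neighbours < minNeighbours;
    }
    return isGhost;
}
```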
const SYNC_RESTART_TIMEOUT: Duration

Controls how long we wait to restart syncing after finishing a sync run. This timeout should be long enough to:

• allow pending downloads and verifies to complete or time out. Sync restarts don't cancel downloads, so quick restarts can overload network-bound nodes with lots of peers, leading to further failures. (The total number of requests being processed by peers is the sum of the number of peers and the peer request buffer size.) We assume that Zebra nodes have at least 10 Mbps bandwidth. So a maximum-sized block can take up to 2 seconds to download. Therefore, we set this timeout to twice the default number of peers. (The peer request buffer size is small enough that any buffered requests will overlap with the post-restart ObtainTips.)
• allow zcashd peers to process pending requests. If the node only has a few peers, we want to clear as much peer state as possible. In particular, zcashd sends "next block range" hints, based on zcashd's internal model of our sync progress. But we want to discard these hints, so they don't get confused with ObtainTips and ExtendTips responses.

This timeout is particularly important on instances with slow or unreliable networks, and on testnet, which has a small number of slow peers.
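As a rough sanity check of the 2-second figure above (assuming Zcash's 2 MB maximum block size, which is not stated in this excerpt): $t_{\text{block}} \approx \frac{2 \times 10^{6}\,\text{B} \cdot 8\,\text{bit/B}}{10 \times 10^{6}\,\text{bit/s}} = 1.6\,\text{s}$, which rounds up to the 2 seconds assumed here.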
## Difference between Brushed and Brushless Motors
Brushless DC motors are those which use electronic commutation and are powered from a DC source through a switching supply and an inverter, which generates an AC signal to drive the motor. Performance is the main concern for such motors and …
## Difference between Capacitor and Battery
A capacitor and a battery both perform the same function of storing and releasing energy; however, there are essential differences between them due to how they function. Capacitors store energy in the form of an electric field, while batteries store energy in the form of chemical energy. The …
## Linearity in Circuits
Consider the relationship between voltage and current for a resistor (Ohm's Law). Suppose that a current $I_1$ (the excitation or input) is applied to a resistor $R$; then the resulting voltage $V_1$ (the response or output) is $V_1 = I_1 R$. Similarly, if $I_2$ is applied to $R$, then $V_2 = I_2 R$ results. But if …
## Difference between AC Drives and DC Drives
AC drives operate with an AC signal or voltage, for instance 1-phase or 3-phase AC voltages, whereas DC drives operate with a DC signal or voltage, for instance DC supplies and batteries. Generally, a DC drive changes alternating current (AC) into direct current (DC) using a converter (rectifier) to operate …
## Difference between Autotransformer and Conventional Transformer
In a two-winding transformer, the whole power is transferred from the primary to the secondary side by means of induction ONLY, while in the case of an autotransformer, part of the power is transferred by induction and the rest is transferred through conduction. In two-winding transformers, primary and secondary windings …
## Difference between Analog and Digital Signal in Tabular Form
Analog signals and digital signals are the two categories of signals. These signals are utilized to broadcast different types of information. Analog signals follow an analog methodology to broadcast information in such a fashion that they can be analyzed at any instant of time, while digital …
## Difference between Analog and Digital Multimeter
A multimeter is a device which is used to measure several electrical quantities such as current, voltage, resistance, inductance, capacitance, and electrical frequency. The most significant difference between an analog multimeter and a digital multimeter is that the analog multimeter consists of a scale and a deflection pointer which actually …
## Ammeter vs Voltmeter | Difference between Ammeter and Voltmeter
Devices like the ammeter and the voltmeter are employed to measure electricity. Both of these devices are based on the galvanometer, which is employed to detect small currents. An ammeter is a device which is used to measure the current (in amperes, micro-amperes, and milli-amperes) through a conductor, whereas a …
## Difference Between Active and Passive Components
Active Components: Electrical components which need an external source to initiate their operation are known as active components, such as the Silicon-Controlled Rectifier (SCR), transistors, and diodes. Example: Since a diode is an active element, it needs an external source (either voltage or current) to initiate its operation. …
## Difference between AC and DC Power
AC power alters its direction with time, while DC power remains constant. Furthermore, AC power oscillates at the supply frequency (e.g. 60 Hz), whereas DC power has zero frequency.
The main advantage of AC power over DC is that it can be transmitted over long distances at higher voltages using transformers with …
## Difference between AC and DC Motor
An electric motor is a machine that converts electric energy into mechanical energy. An electrical signal (voltage) is applied to the input terminals of the motor, and the motor output generates a specific torque according to the motor characteristics. AC motors and DC motors perform the same …
## Difference between AC and DC Generator
The AC generator generates an output voltage which alters in amplitude as well as in time, whereas the DC generator generates a constant output voltage which does not change in amplitude or in time. The electrical energy we utilize has two fundamental types: one is known as Alternating while the …
## Difference between AC and DC Current
Electric current flows in two ways: either as direct current (DC) or as alternating current (AC). Electric current is basically the free movement of electrons through a conductor. The main difference between AC and DC is based upon the direction in which the flow of electrons occurs. DC Current …
# Users Manual for Program MPLOT Introduction Flow chart Command line options Mouse button control Post processing Input data commands which controls post processing Calculate different ride comfort assessments Designing higher order filters Plotting Introduction Input data at the lowest input data level Input data valid at level DIAGRAM Input data valid at level PAGE Input data valid at level MAIN All plotting commands in alphabetical order Examples ## Introduction Program MPLOT reads result files written in MPdat format, having the file extension .id or .mp. Program MPLOT has two main functions: 1. Post-processing Algebraic operations, filtering, statistical evaluations and different transformations can be made. All available post-processing commands are documented under Input data commands which controls post processing. 2. Plotting Two- and three- dimensional plots of scalars and vectors. The user has possibility to select results from many different idents by adding the name of the ident enclosed in parenthesis after the name of the variable. All available plot-processing commands are documented under Input data commands which controls plotting. Some notes on the input data reading: • Input data is read in free format, valid separators between the input data are <space>, <comma>, <tab>, <equal sign> or <carriage return>. The commands are understood both in lower and upper case letters. • Lines starting with a #-sign will be treated as commentary lines. • Lines starting with a !-sign will be sent as a command to the UNIX system. • In the description of the input data commands in this manual, certain simplifications have been made in order to reduce the text volume in this document. The simplifications are implemented according to the same method as listed in the user manual for the CALC program. The simplifications are: • Words enclosed between grave accents The program expects to read a valid string, which provides a valid main- or sub- command. An error will occur if integer or real are read. • Words enclosed between a grave accent and an acute accent ' The program expects to read a string which must be a variable previously defined in the input data. An error will occur if integer or real are read. • Words without accents The program expects to read data constants which comprise integer or real. An error will occur if the character is read. • Words which begin with a grave accent At this stage, the program expects to read a character string or a data constant. If a string is read, the program will seek the variable which has been assigned that name and will store the address to the variable. If a data constant is read, the value will be stored directly in the memory. • Words which begin with signs and a grave accent +- At this stage, the program accepts reading a variable or a constant. The variable or constant can have up to two preceding signs minus and/or plus. • Words which begin with signs and a grave accent in parenthesis (+-) At this stage, the program accepts reading a variable or a constant. The variable or constant can have up to two preceding signs minus and/or plus. The parentheses imply that if a variable is read, its value will not be updated. Program CALC will only use the value of the variable at time=tstart. ## Flow chart The flow of data in GENSYS is carried out as follows: Activity 1 The results from the calculation activity TSIM, FRESP, MODAL or QUASI are stored in MPdat-format on an output data file named '$ident.id'. 
The variables stored in the *.id-file are defined in the CALC-command s_var. Activity 2 Program MPLOT copies the contents of file '$ident.id' into the files '$ident.mp' and '$ident.mp2', because it is much faster to read and write in direct access files. The files '$ident.mp' and '$ident.mp2' are also working as a memory for program MPLOT, holding all new vectors and scalars generated in the post processing commands. In the command line arguments -save_mp and -no_save_mp the user can control if the files '$ident.mp' and '$ident.mp2' shall be saved or not after the MPLOT activity. Activity 3 Plots can be created by reading result data files from several idents at the same time. If *.mp- and *.mp2- files do exist they will be read and plotted, but if they do not exist program MPLOT will use the *.id-files. ## Command line options When launching program MPLOT the user can supply a number of command line options. The user can put his or hers favorite options in a file named gen_conf. File gen_conf is primarily searched in the local working directory. If file gen_conf not can be found in the local working directory, program MPLOT searches for file .gen_conf in the users home-directory. If no file can be found locally or in the $HOME-directory, program MPLOT will read the file$gensys/bin/gen_conf. Following command line options are understood: -addarg = Prompt for more arguments before calculation starts -no_addarg = Do not prompt for more arguments -debug = Don't remove temporary work files -h or -help = Print this help information -interactive = Run MPLOT in interactive mode (please see further information under Description of pulldown menus). -no_interactive = Run MPLOT in batch mode. -pri = Print all results on printer $LPDEST=lp1 -pri_graphics = Print graphical results. -pri_resu = Print result-textfile *.resu. -pri_textfiles = Print all other result-textfiles from program MPLOT. -no_pri = Do not print any results at all.(Default) -no_pri_graphics = Do not print graphical results. -no_pri_resu = Do not print result-textfile *.resu. -no_pri_textfiles = Do not print all other result-textfiles from program MPLOT. -q_sys batfile = Write plot commands into file batfile, if batfile=stdout write plot commands to standard output -res_1600x1200 = Resolution 1600x1200 pixels -res_1280x1024 = Resolution 1280x1024 pixels -res_1024x768 = Resolution 1024x768 pixels -res_800x600 = Resolution 800x600 pixels -save_mp = Save workspace in file$ident.mp -save_graphics = Save graphical results (Default) -save_resu = Save the result-textfile *.resu (Default) -save_textfiles = Save all other result-textfiles from program MPLOT (Default) -no_save_mp = Do not save workspace (Default) -no_save_graphics = Do not save graphical results. -no_save_resu = Do not save the result-textfile *.resu. -no_save_textfiles = not save all other result-textfiles from program MPLOT. -use_MPfile = If an ident is stored in a file *.mp, read the file without questions -no_use_MPfile = If an ident is stored in a file *.mp, remove the file -ask_use_MPfile = If an ident is stored in a file *.mp, prompt the user if the file shall be read or not arg(1) = Input data file arg(2) = Ident If not input data file and/or ident has been given among the command line options script MPLOT will prompt the user for these items. If program MPLOT is started with the -interactive command line option. The following pulldown menu will appear at the top of the window: Open MPdat-file ... Opens a popup-menu for selection of a MPdat-file. 
If a MPdat-file not is opened the user will be promoted for a MPdat-file each time a new curve or point is selected. Close MPdat-file Closes current MPdat-file. Idents in time order Creates a list of all idents in current directory in time order. Idents alphabetic Creates a list of all idents in alphabetic order. Read mplotf-file ... Open and read a *.mplotf-file. The diagrams are read and saved into the memory of the program. The diagrams can be plotted by clicking on "Draw" + "Open_saved_page". Print ... Opens a popup printer-menu, the user can select different graphic output formats and if the results shall be send to printer or file. Calculator ... Opens a popup-window in where numerical calculations can be done. Command ... Manually write a command to the program MPLOT. Import ... Import variables from columns read from external ASCII-file. Export ... Export variables and scalars to a plain ASCII-file. MPdat contents short Creates a list of all variables and scalars in opened MPdat-file. Writes only a short list containing min-, max-, start- and end- values. MPdat contents long Creates a list of all variables and scalars in opened MPdat-file. Writes a full dump of all values for all variables in opened MPdat-file. MPdat contents matlab Creates a list of all variables and scalars in opened MPdat-file. Writes a full dump of all values for all variables in opened MPdat-file, in a format readable for matlab. Save plots and exit Stops the execution of program MPLOT. Before program MPLOT quits the user has the possibility to store all created MPLOT-commands in an external mplotf-file. Exit Stops program MPLOT, without storing MPLOT-commands. filt ... Filtering of variables in time-domain according to the command filt. fourier ... Translates a variable to or from frequency domain according to the command fourier. ftwz ... Calculation of Ride Index Wz according to the command ftwz. func ... Algebraic postprocessing of variables according to the command func. stat ... Calculation of statistical properties according to the command stat . transt ... Filtering of time-domain variables in frequency domain according to the command transt . transf ... Filtering of frequency-domain variables in frequency domain according to the command transf . Curve from current ident Opens a popup-window with a list of all curves in opened MPdat-file. If no MPdat-file is opened, the user will be promted to select a MPdat-file first. Curve from other idents Opens a popup-window where the user can select both ident and variable. Scalar from current ident Opens a popup-window with a list of all scalars in opened MPdat-file. If no MPdat-file is opened, the user will be promted to select a MPdat-file first. Diagram # Opens a popup-window for selection of diagram number. If the user changes current diagram number, further plotting will be directed to the new diagram number. Clear page Clears the contents of current page. Reset scale factor Reset the view scale factor, if it has been modified with the mouse buttons. Pushbutton opens current page for editing. (N.B. Current file must be saved before it can be reread.) Pushbutton creates a new page. The textfields to the right of the Edit-pushbutton shows current coordinate when mouse button #2 is pressed. The radiobuttons to the right of the textfields, shows the number of current page. ## Mouse button control In the interactive version of program MPLOT Btn #1 Zoom in the picture by pressing and at the same time drag the mouse upwards in the window. 
Btn #2 Show current coordinate of the pointer. The coordinates are written in the two text fields up in the pulldown menu. Btn #3 Panning in the window. Wheel Zoom by rotating the mouse wheel. Wheel+CTRL Pressing the CTRL-button at the same time, makes the zooming to go faster. ## Post processing ### Input data commands which controls post processing In Mplot there are a number of commands which create new curves and scalars. This section was previously referred to as DYNPOST, but is today an integrated part in Mplot. The various mplot commands and the DYNPOST commands can be combined today. Input data for these commands are read in free format. Two principles govern input of the input data: • Alt 1) first a main command is read, and directly after the main command all input data are read in a long row. • Alt 2) first a main command is read, thereafter the input data is read by subcommands. The two input methods can also be combined, e.g. the user can begin with the method 1) and then change to method 2), and vice versa. When an input data variable is required "Iname" the user can choose to read the variable from an other ident if he put the name of the ident directly after the name and enclosed in parenthesis ex. "Iname(ident)" The following post-processing commands are available at level MAIN in Mplot: (the list begins with a brief summary) CREATE_CURVE = Directive to create a curve in the memory CREATE_SCALAR = Directive to create a scalar in the memory CATALO = Directive to inspect the content in the memory CATFIL = Define the name of the output-file for command CATALO DELETE_CURVE = Directive to delete a curve from the memory DELETE_SCALAR = Directive to delete a scalar from the memory ELSE = Precedes an else-block ELSEIF_THEN = Precedes an elseif_then-block ENDIF = Ends an if_then-block EQDIST = Directive to generate equidistance time steps FILT = Directive for filtration in time domain FOURIER = Directive for Fourier transformation FTWZ = Directive for FFT-transformation and Wz-calculation FUNC = Directive to execute mathematical operations IF_THEN = Define the beginning of an if_then-block INSERT = Redirects input reading from another input file IN_SUBSTRUCT = Directive which inserts a sub-structure LOOP = Define the beginning of a loop-block NO_WARNING = Directive which suppresses warning texts NTABLE = Number of columns to be written in the *.resu-file. PRINT = Directive to print results on ASCII-files STAT = Directive for statistical analysis STAT2 = Directive for 3-dimensional statistical analysis SUBSTRUCT = Directive for defining substructures TRANS = Filtering of a time domain variable in frequency domain TRANSF = Filtering of a frequency domain variable in frequency domain TRANST = As directive TRANS, but with argument Tsname UNTIL = Define the end of a loop-block began in command LOOP WZ_AFRESP = Calculation of Wz from a variable frequency domain CREATE_CURVE, cc_func, input(*) Command which creates new curves in current ident. Command CREATE_CURVE have the following sub-commands: append_sngl = Adds new points to a new or existing vector file_vpair_free = Reads value-pairs from an ASCII-file linear_increasing = Creates a linearly increasing vector new_sngl = Creates a new vector in the memory truncate = Truncates an existing vector into a new vector cc_type = append_sngl Adds new points to a new or existing vector in the memory. create_curve append_sngl vname, tname, v_dim, values vname = Name of the new or existing vector. 
If the vector is already stored in memory, the values will be appended to the vector vname, otherwise a new vector will be created in memory. tname = Vector vname's default x-axis. When vname is plotted in MPLOT and the x-axis is not specified, this vector will be used as the x-variable. Tname must be stored in memory before this command can be given, or tname must be assigned the same name as vname. If tname is not specified, tname will be set to vname. v_dim = The dimension of the vector, which shall be placed in parentheses. By default v_dim will be set to the value 'blank'. values = Values which will be read into the vector. The input will be ended when a new valid command is read. If the name of a scalar is read, the value of the scalar will be given to the vector.
cc_type = file_vpair_free Creates two vectors in the memory. Command 'file_vpair_free' reads value pairs from an external file. If a line in the external file 'file' begins with the letter # it will be regarded as a comment. create_curve file_vpair_free tname, vname, t_dim, v_dim, format, file
cc_type = linear_increasing Creates a new vector in the memory, linearly increasing from vstart. create_curve linear_increasing vname, vstart, vincr, npoints, v_dim vname = Name of the new vector. vstart = Start value for the new vector. vincr = Increment between two consecutive values in vname. npoints = Number of points in vname. v_dim = The dimension of the vector, which shall be placed in parentheses. By default v_dim will be set to the value 'blank'.
cc_type = new_sngl Creates a new vector in the memory. create_curve new_sngl vname, tname, v_dim, values vname = Name of the new vector. tname = Vector vname's default x-axis. When vname is plotted in MPLOT and the x-axis is not specified, this vector will be used as the x-variable. Tname must be stored in memory before this command can be given, or tname must be assigned the same name as vname. If tname is not specified, tname will be set to vname. v_dim = The dimension of the vector, which shall be placed in parentheses. By default v_dim will be set to the value 'blank'. values = Values which will be read into the vector. The input will be ended when a new valid command is read. If the name of a scalar is read, the value of the scalar will be given to the vector.
cc_type = truncate Truncates an existing vector into a new vector. create_curve truncate new_yvar new_xvar old_yvar old_xvar +-tstart +-tstop new_yvar = Name of the new shorter vector. new_xvar = Name of the new shorter x-variable for new_yvar. old_yvar = Name of the existing vector. old_xvar = Name of the existing x-vector. tstart = X-value in old_xvar from where new_yvar and new_xvar will start. tstop = X-value in old_xvar from where new_yvar and new_xvar will stop.
CREATE_SCALAR, cs_func, input(*) Directive to create a new scalar in current ident.
cs_type = new Creates a new scalar in memory. create_scalar new pname, pvalue, p_dim pname = Name of the new scalar. pvalue = Numerical value of the scalar. p_dim = The dimension of the scalar, which shall be placed in parentheses. By default p_dim will be set to the value 'blank'.
cs_type = curve_first Creates a scalar in memory, which is the first value in a variable. create_scalar curve_first pname= vname pname = Name of the new scalar. vname = Variable from which the first value shall be picked.
cs_type = curve_interp Creates a scalar in memory, interpolated from a variable. create_scalar curve_interp pname= vname tval tname pname = Name of the new scalar.
vname = Variable from which the value shall be interpolated. tval = Input X-value. tname = Name of the X-variable. If undefined, the default X-variable will be used.
cs_type = curve_last Creates a scalar in memory, which is the last value in a variable. create_scalar curve_last pname= vname pname = Name of the new scalar. vname = Variable from which the last value shall be picked.
cs_type = curve_sum Creates a scalar in memory, which is the sum of all the points of a variable. create_scalar curve_sum pname, vname pname = Name of the new scalar. vname = Variable from which all values shall be summed.
cs_type = curve_vmax Creates a scalar in the memory, which is the maximum value of a variable. create_scalar curve_vmax pname, vname pname = Name of the new scalar. vname = Variable from which the maximum value shall be picked.
cs_type = curve_vmin Creates a scalar in the memory, which is the minimum value of a variable. create_scalar curve_vmin pname, vname pname = Name of the new scalar. vname = Variable from which the minimum value shall be picked.
cs_type = curve_zero Creates scalars in the memory, containing all X-values for which variable vname is zero. create_scalar curve_zero pname, vname pname = Basename of the new scalars. Command curve_zero adds the extension .z1 .z2 .z3 etc. each time vname passes zero. vname = The variable from which the zeros are read.
CATALO= cat_type Directive to inspect the contents of current ident. The output is written to the file defined in directive CATFIL below. cat_type = Selects the type of print; cat_type can be given one of the following character strings: "vectors" = Prints the vectors (variables) stored in memory "points" = Prints the scalars stored in memory "both" = Prints both vectors and scalars
CATFIL= file-name Define the name of the file to which the CATALO output above will be written. If the file-name is " " or "no", the output will be written to standard output. The default value of file-name is "$ident.cata".
DELETE_CURVE, pname Directive to delete a vector in current ident. pname = Name of the vector which shall be deleted.
DELETE_SCALAR, pname Directive to delete a scalar in current ident. pname = Name of the scalar which shall be deleted.
ELSE The ELSE statement is used when coding an IF_THEN-else decision statement. The keyword precedes an else-block, defined to be all the statements after the ELSE-statement up to, but not including, the corresponding ENDIF-statement.
ENDIF The command ENDIF concludes the statement IF_THEN.
EQDIST, Iname, EQname, EQtime Directive to generate curves with equidistant time steps. Iname = Name of the input variable. EQname = Name of the output variable. If omitted, EQname will be set to 'EQ'+Iname. EQtime = Name of the output variable's time variable. If omitted, EQtime will be set to 'EQTIME'.
FILT, Type, Num, Iname, Fname, Tname Directive for filtering in the time domain. The command requires that the input variable has equidistant time steps. Type = Type of filter, see table below. Num = Numerical input data value, see table below. Iname = Name of the input variable. Fname = Name of the output variable. If omitted, Fname will be set to 'F'+Iname. Tname = Name of the time axis. If omitted, the default x-variable for Iname will be used. Type Num Function
DELAY t_delay Delays Iname by the amount t_delay. A positive delay means that the curve is shifted to a later point in time. Program MPLOT fills the beginning of the variable with zeros.
If t_delay is not a multiple of the time step in Iname's default time variable, program MPLOT will interpolate the values in variable Fname linearly, so that the two curves Iname and Fname will have the same default time variable. If variable Iname contains high frequency components this interpolation can act as a low pass filtering effect. DERIV n/a Derivation of the input variable Iname with regard to Tname. INTEG beg_value Integration of the input variable Iname with regard to Tname. Num defines the initial value of the integral. HPASS1 fo First order high pass filter. In the beginning, the output variable Fname will have the same value as Iname as initial value. HPASS1_0 fo First order high pass filter. In the beginning, the output variable Fname will have the value=0 as initial value, regardless of the value of the input variable. HPASS2 fo,zeta Second order high pass filter. In the beginning, the output variable Fname will have the same value as Iname as initial value. HPASS2_0 fo,zeta Second order high pass filter. In the beginning, the output variable Fname will have the value=0 as initial value, regardless of the value of the input variable. LPASS1 fo First order low pass filter. In the beginning, the output variable Fname will have the same value as Iname as initial value. LPASS1_0 fo First order low pass filter. In the beginning, the output variable Fname will have the value=0 as initial value, regardless of the value of the input variable. LPASS2 fo,zeta Second order low pass filter. In the beginning, the output variable Fname will have the same initial value as Iname. "fo" defines the cut-off frequency of the filter and "zeta" defines the fraction of critical damping of the filter. LPASS2_0 fo,zeta Second order low pass filter. In the beginning, the output variable Fname will have the value=0 as initial value, regardless of the value of the input variable. "fo" defines the cut-off frequency of the filter and "zeta" defines the fraction of critical damping of the filter. MAX_ABS t_incr Sliding absolute max value calculation. Filter MAX_ABS selects the value with biggest magnitude in window t_incr. T_incr has the same dimension as variable "Tname". If number of values in window t_incr is less than 1(one), MPLOT will write a warning message and no filtration is possible. The window works as a fifo-memory (first in first out), why "Fname" will be shifted in time relative to "Iname". N.B. The sliding max_abs-value will adopt its value from one of the values in the window. No extra- or interpolation will take place on the borders of window t_incr. MAX_SIGN t_incr Sliding max value calculation. Filter MAX_SIGN selects the largest value in window t_incr. T_incr has the same dimension as variable "Tname". If number of values in window t_incr is less than 1(one), MPLOT will write a warning message and no filtration is possible. The window works as a fifo-memory (first in first out), why "Fname" will be shifted in time relative to "Iname". N.B. The sliding max_sign-value will adopt its value from one of the values in the window. No extra- or interpolation will take place on the borders of window t_incr. MEAN t_incr Sliding mean value calculation. The size of the window which is used in the calculation of the sliding mean value is defined in the variable t_incr. The variable t_incr has the same dimension as the time variable Tname given in the FILT-command. If number of values in the window is less than 1 MPLOT will write a warning message on standard output. 
If the number of values in the window is equal to 1, no filtration of Iname will take place. The window works as a fifo-memory (first in first out), which is why the output variable will have a time delay relative to the input variable. The time delay between input and output will be equal to half the width of the window, t_incr.
MEAN_M t_incr Sliding mean value calculation. Similar to the filter MEAN above, but the window is symmetric with respect to the actual time, so no time shift between the input and output variables will take place.
MIN_ABS t_incr Sliding absolute min value calculation. Filter MIN_ABS selects the value with the smallest magnitude in window t_incr. If Iname changes its sign in window t_incr, Fname will be set equal to 0 (zero). T_incr has the same dimension as variable "Tname". If the number of values in window t_incr is less than 1 (one), MPLOT will write a warning message and no filtration is possible. The window works as a fifo-memory (first in first out), which is why "Fname" will be shifted in time relative to "Iname". N.B. The sliding min_abs value will adopt its value from one of the values in the window. No extra- or interpolation will take place on the borders of window t_incr.
MIN_SIGN t_incr Sliding min value calculation. Filter MIN_SIGN selects the smallest value in window t_incr. T_incr has the same dimension as variable "Tname". If the number of values in window t_incr is less than 1 (one), MPLOT will write a warning message and no filtration is possible. The window works as a fifo-memory (first in first out), which is why "Fname" will be shifted in time relative to "Iname". N.B. The sliding min_sign value will adopt its value from one of the values in the window. No extra- or interpolation will take place on the borders of window t_incr.
RMS t_incr Sliding RMS value calculation. The size of the window which is used in the calculation of the sliding RMS value is defined in the variable t_incr. The variable t_incr has the same dimension as the time variable Tname given in the FILT-command. If the number of values in the window is less than 1, MPLOT will write a warning message on standard output. If the number of values in the window is equal to 1, no filtration of Iname will take place. The window works as a fifo-memory (first in first out), which is why the output variable will be given a time delay relative to the input variable. The time delay between input and output will be equal to half the width of the window, t_incr.
STD t_incr Sliding "Standard Deviation" value calculation. The size of the window which is used in the calculation of the sliding standard deviation is defined in the variable t_incr. The variable t_incr has the same dimension as the time variable Tname given in the FILT-command. If the number of values in the window is less than 2, MPLOT will write a warning message on standard output. If the number of values in the window is equal to 2, no filtration of Iname will take place. The window works as a fifo-memory (first in first out), which is why the output variable will be given a time delay relative to the input variable. The time delay between input and output will be equal to half the width of the window, t_incr.
FOURIER, Iname_r, Iname_i, Fname_r, Fname_i, Ityp, +-Tstart, +-Tstop, Fstart, Fstop, Tname, FQname, Window, Tsname Calculation of different types of Fourier spectra. To make Fourier spectra of a variable, the steps in variable Tname must be equidistant. According to the Nyquist sampling theorem, the maximum frequency range in the created spectra is from -1/(2*DX) up to +1/(2*DX).
Where DX is the equdistant step in Tname. If Tname equals variable time, DX equals tout. Text file example HTML example Iname_r = Name of the input variable's real part. Iname_i = Name of the input variable's imaginary part. If there is no imaginary part, omit this subcommand or set Iname_i equal to 'no'. Default value: 'no' Fname_r= Name of the real part of the Fourier series. If 'no' is given to Fname_r, no Fourier series calculation will be carried out. Default value: Iname_r+'FTr' Fname_i = Name of the imaginary part of the Fourier series. If 'no' is given to Fname_r, no Fourier series calculation will be carried out. Default value: Iname_i+'FTi' Ityp = Defines type of Fourier series calculation. The user can choose among the following: • DFT_N The DFT (discrete Fourier transform) is calculated according to: $X[k] = \sum_{n=0}^{N-1} x[n] \,e^{-i 2 \pi \frac{k}{N} n}$ Where: k= 0, 1,...,N-1 N= Number of samples • DFT_T Similar to DFT_N above, but the spectra is multiplied with the equdistant step in variable Tname, which turns the DFT into a Fourier Transform. • FOUR_S Similar to DFT_T above, but divided with the total length of variable Tname, which turns the DFT into a Fourier Series: • IFOUR_S Is the inverse to ITYP=FOUR_S, i.e. calculation of the time history from Fourier Series. • PSD_S Calculation of the double-sided PSD-spectra according to: Sxx = T·cn·cn* where: cn = The Fourier series component calculated according to FOUR_S above. cn* = Complex conjugate for cn • PSD_G Calculation of the single-sided PSD-spectra. The single-sided PSD-spectra does only have positive frequencies, and is calculated according to: Sxx = 2·T·cn·cn* Default value for Ityp is FOUR_S. Tstart = Starting point of calculations, given in the same units as Tsname. Sometimes a simulation can have strong vibrations in the beginning due to initial value problems. If that is the case, the initial vibrations can affect the Fourier Spectra. Please set Tstart to at least 0.1[s] to avoid initial value problems. Default value: at the beginning of Iname_r and Iname_i. Tstop = End point of calculations, given in the same units as Tsname. Default value: at the end of Iname_r and Iname_i. Fstart = Start frequency of created spectra, expressed in the inverted units of Tname. (If Tname is in [s], Fstart will be in [Hz]) Fstart must be positive or equal to -Fstop. Default value: -1/(2*DX) where DX is the equdistant step in Tname Fstop = End frequency of created spectra, expressed in the inverted units of Tname. (If Tname is in [s], Fstop will be in [Hz]) Fstop must be positive and greater than Fstart. Default value: 1/(2*DX) where DX is the equdistant step in Tname Tname = Time variable to be used. If Tname not has been set in input data, the default X-variable for Iname_r and Iname_i will be used. FQname = The name of the frequency axis created, if Fqname not has been set in input data, the name Fname_r+'HZ' will be used. Window = Type of window which is used in the Fourier calculation. The user can choose between the following: • RECT Normal rectangular window • HANN Hanning window according to the function F=(1-cos(2π·x/L)). The window has the property that the variable is damped both at the starting point and the endpoint, to reduce leakage. Uh(f) = U(f) - 0.5*(U(f-f1) - U(f+f1)) Where: Uh = The Fourier transform with the HANN window. U = Ideal Fourier transform. f = The frequency. f1 = Frequency resolution. L = The length of the window. The following can be read from the equation above: 1. 
The height of a single peak will not be changed. 2. A single peak will be wider because Uh(f-f1) = - 0.5*U(f) and Uh(f+f1) = - 0.5*U(f). 3. As the tops are widened due to the window, the energy for the spectra will also become greater. Default value for Window is RECT. Tsname = Definition of the variable where Tstart and Tstop refers. If Tsname not has been set in input data, the default X-variable for Iname_r and Iname_i will be used. FTWZ, Iname, FTname, WZname,Ityp, +-Tstart, +-Tstop, Fstart, Fstop, Tname, FQname, Window, Tsname, WZscal Fourier series calculation of vectors and calculation of ride index. This command creates two curves (Ftname and Wzname) and one scalar (Wzscal), the ride index is stored in scalar Wzscal. Command FTWZ also writes information to the *.resu-file. The command requires that the input variable has equidistant time steps. Iname = Name of the input variable. Ftname = Name of the absolute value of the Fourier series. Default value of Ftname is 'no_store', which entails that the Fourier series calculation will be made, but the results will not be stored in memory. If 'no' is given to the name Ftname, a Fourier series calculation will not be carried out, nor will the ride index be calculated. WZname = The Fourier series filtrated with the ride index filter. Default value of Wzname is 'no_store', which entails that Wzname will be calculated but not stored in memory. If 'no' is given to the Wzname, no filtration will be carried out, nor will the ride index be calculated. Ityp = Defines the type of Ride Index to be carried out. Ityp can be set to: GV Vertical and lateral Ride Index for freight vehicles. LPV Ride Index in lateral direction for passenger vehicles. LPV_SP2 Sperling's Ride Index Wz in lateral direction for passenger vehicles, with quadratic summation of the frequency components. According to "Dynamics of railway vehicle systems" written by Vijay K. Garg and Rao V. Dukkipati. LPV_SP3 Sperling's Ride Index Wz in lateral direction for passenger vehicles, with cubic summation of the frequency components. According to "Dynamics of railway vehicle systems" written by Vijay K. Garg and Rao V. Dukkipati. VPV Ride Index in vertical direction for passenger vehicles. VPV_SP2 Sperling's Ride Index Wz in vertical direction for passenger vehicles, with quadratic summation of the frequency components. According to "Dynamics of railway vehicle systems" written by Vijay K. Garg and Rao V. Dukkipati. VPV_SP3 Sperling's Ride Index Wz in vertical direction for passenger vehicles, with cubic summation of the frequency components. According to "Dynamics of railway vehicle systems" written by Vijay K. Garg and Rao V. Dukkipati. Default value: VPV Tstart = Start time for the calculations. Default value -1.E36 [s]. Tstop = Stop time for the calculations. Default value 1.E36 [s]. Fstart = Start frequency for the created Fourier spectrum. Default value 0 [Hz]. Fstop = Stop frequency for the created Fourier spectrum. Default value 1.E36 [Hz]. Tname = Time variable to be used. If Tname not has been set in input data, the default X-variable for Iname will be used. FQname = The name of the frequency axis created. If FQname not has been set in input data, FTname+'HZ' will be used. Window = Type of window which will be used in the Fourier transformation. The user can choose among the following windows: RECT Rectangular window DUBL Rectangular window, but the interval is doubled by a symmetrical reflection of the curve in the end point. 
The window is designed in order to reduce leakage problems, especially when the Wz-evaluation is made in a transition curve. Compared to the HANN-window this window will take all vibrations into full account, even those vibrations which is close to the borders of the window. This is the default window if not Window is given in input data. HANN Hanning window according to the function F=(1-cos(2π·x/L)). The window has the property that the variable is damped both at the starting point and the endpoint, in order to reduce leakage. Uh(f) = U(f) - 0.5*(U(f-f1) - U(f+f1)) Where: Uh = The Fourier transform with the HANN window. U = Ideal Fourier transform. f = The frequency. f1 = Frequency resolution. L = The length of the window. The following can be read from the equation above: The height of a single peak will not be changed. A single peak will be wider because Uh(f-f1) = - 0.5*U(f) and Uh(f+f1) = - 0.5*U(f). As the tops are widened due to the window, the energy for the spectra will also become greater. In Wz-evaluation, it is important that the spectra's energy corresponds to the energy of the variable, which is why this window shall not be used in Wz-calculations. Tsname = Definition of the variable to where Tstart and Tstop refers. If Tsname not has been set in input data, the default time variable for Iname will be used. WZscal = Name of the created Wz scalar. The default value '?' which entails that Wzscal is given the name Iname+'WZ'. FUNC, Indata_group There are two ways in which the input data to the command FUNC can be written: 1. Method 1 Indata_group begins with a sub-command which controls how the FUNC-command will work. The sub-commands are similar to the FUNC-commands in program CALC. See, Reading FUNC-commands with subcommands. 2. Method 2 If indata_group not begins with a sub-command, the input data will be read in the same way as previously in MPLOT until Rel.9502. See, Reading FUNC-commands without subcommands. Reading FUNC-commands with subcommands. Indata_group includes the subcommands f_type and input data(*) adapted to every subcommand. Here follows a brief summary of the subcommands available in FUNC. Thereafter, a more detailed description of each subcommands will be given. abs = Calculates the absolute value of a variable. add = Addition acos = Calculates the arc-cos of a variable. asin = Calculates the arc-sin of a variable. atan = Calculates the arc tangent of a variable. atan2 = Calculates the arc tangent of two variables cabs = Calculates the complex absolute value of two variables cen_erri_595 = Calculate the r.m.s. 95%th percentile of a variable accoring to CEN TC_256 WG_7 and ERRI B153 const = Creates a new scalar or vector. copy = Copies a variable to a new variable. cos = Calculates the cosinus of a variable. db = Calculates decibel as 20*log10(abs(var)). div = Division exp = Exponential function base e. decr = Reduces a variable's value by a variable or value. deriv_m = Creates the derivative of a variable. div = Divide two variables by each other. incr = Increases a variable's value by a variable or value. integ_heun = Integrates a variable according to Heun's method. intpl_l = Creates a linear interpolated variable. inv = The multiplicative inverse of a variable. l_lim = Sets the lower limit of a variable. log = Logarithmic function base e. log10 = Logarithmic function base 10. max = Calculates the max. value of constants, scalars or vectors. maxloc = The location of the max. value. 
To be used together with func max mean_sec = Calculation of average value section per section min = Calculates the min. value of constants, scalars or vectors. minloc = The location of the min. value. To be used together with func min mul = Multiplies two variables with each other. opere = Evaluate an expression. operf = Evaluate an expression and store the output to a variable. operp = Executes algebraic operations in order of priority. power = First variable to the power of the second. rot_orient = Orients a vector with respect to another vector. reverse = Writes the contents of a vector in reverse order. sign2 = variable #1 multiplied with the sign from variable #2. sin = Calculates the sinus of a variable. slope_dx = Derivation of a variable, where the size of "dt" is choosen by the user. spline_G_rel = Natural B-spline data smoothing. sqrt = Calculates the square root of a variable. sub = Subtracts two variables from each other. tan = Calculates the tangent of a variable. u_lim = Sets the upper limit of a variable. In order to limit the amount of text, certain abbreviations have been used in the user manual. These abbreviations are also used in the CALC-program, and are as follows: 1. Words enclosed between grave accents The program expects to read a valid string, which defines a valid main or sub-command. An error will occur if an integer or real is read. 2. Words enclosed between a grave accent and an acute accent ' The program expects to read a string which defines a variable previously defined in the input data. An error will occur if an integer or real is read. 3. Words without accents The program expects to read data constants which comprise integer or real. An error will occur if a character is read. 4. Words which begin with a grave accent At this stage, the program expects to read a character string or a data constant. If a string is read, the program will seek the variable which has been assigned that name and will store the address to the variable. If a data constant is read, the value will be stored directly in the memory. 5. Words which begin with the signs +- At this stage, the program accepts reading one or two symbols before the constants or the variables. A plus sign means that the variable's positive value, likewise a minus sign implies that the variable's negative value, is used in the calculation. Should two minus signs be used in a row, the variable's positive value will be applied. Should +- or -+ be printed before the variable, the variable's negative value will be used. f_type = abs Creates a variable which is the absolute value of another variable. func abs f_name', +-var f_name = Name of the newly created variable. If f_name has already been defined, it will be overwritten by this command, and a warning will be written to standard output. var = Name of the input data variable. If the variable var' has not been defined, an error will occur, and the program will continue with the next input data command. f_type = add, sub, mul, div, power Creates a new variable which is dependent on two input variables, add stands for addition, sub stands for subtraction, div stands for division and power raises var1 to the value of var2. In the case f_type=div and abs(var2) < 1.e-30, the result will be set to 1.e30. func add f_name', +-var1, +-var2 func sub f_name', +-var1, +-var2 func mul f_name', +-var1, +-var2 func div f_name', +-var1, +-var2 func power f_name', +-var1, +-var2 f_name = Name of the newly created variable. 
If f_name has already been defined, it will be overwritten by this command, and a warning will be written to standard output. var1 = Name of input data variable number 1. If the variable var1' has not been defined, an error will occur, and the program will continue with the next input data command. var2 = Name of input data variable number 2. If the variable var2' has not been defined, an error will occur, and the program will continue with the next input data command.
f_type = acos, asin, atan, cos, sin, tan Calculates one of the trigonometric functions arc-cos, arc-sin, arc-tan, cosine, sine or tangent for the input variable var', and stores the output in f_name'. The angles are given in radians. func acos f_name', +-var func asin f_name', +-var func atan f_name', +-var func cos f_name', +-var func sin f_name', +-var func tan f_name', +-var f_name = Name of the newly created variable. If f_name has already been defined, it will be overwritten by this command, and a warning will be written to standard output. var = Name of the input data variable. If the variable var' has not been defined, an error will occur, and the program will continue with the next input data command.
f_type = atan2 Calculates the arc tangent of two variables. The arguments must not both be 0.0. If argument 2 is 0.0, the absolute value of the result is π/2. If argument 1 is 0.0, the absolute value of the result is 0.0 if argument 2 is positive and π if argument 2 is negative. Otherwise, the result is in the range -π, exclusive, through +π, inclusive, and is calculated as arctan(arg_1/arg_2). func atan2 f_name', +-arg_1, +-arg_2 f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be overwritten by this command, and a warning will be written to standard output. arg_1 = Name of the input data variable. If the variable arg_1' does not exist in memory, an error will occur, and the program will continue with the next input data command. arg_2 = Name of the input data variable. If the variable arg_2' does not exist in memory, an error will occur, and the program will continue with the next input data command.
f_type = cabs Calculates the complex absolute value of two variables. Argument 1 is considered to be the real part and argument 2 is considered to be the imaginary part. func cabs f_name', +-arg_1, +-arg_2 f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be overwritten by this command, and a warning will be written to standard output. arg_1 = Name of the input data variable. If the variable arg_1' does not exist in memory, an error will occur, and the program will continue with the next input data command. arg_2 = Name of the input data variable. If the variable arg_2' does not exist in memory, an error will occur, and the program will continue with the next input data command.
f_type = cen_erri_595 Calculates the r.m.s. 95th percentile of a variable according to CEN TC_256 WG_7 and ERRI B153. The norm specifies that the input filtered accelerations shall be divided into 5 [s] sections. The r.m.s. values are calculated for each section. Finally, the total 95th percentile of all r.m.s. values is calculated. func cen_erri_595 NMV', +-acc NMV = Name of the newly created scalar containing the value of the r.m.s. 95th percentile. acc = Name of the input data vector containing the acceleration filtered in CEN_TC256_WB, CEN_TC256_WD, ERRI153_WB or ERRI153_WD. The cumulative frequency function of all r.m.s.
values will be stored in memory under the name f_name.FD f_type = const Creates a new scalar or vector. func const f_name', +-var f_name = Name of the newly created scalar or vector. If variable f_name already exists in memory, it will be overwritten by this command, and a warning message will be written on standard output. var = Name of the input data constant, scalar or vector. f_type = copy Copies a variable with the same content as a previously defined variable. func copy f_name', +-var f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be overwritten by this command, and a warning will be written to standard output. var = Name of the input data variable. If the variable var' not exists in memory, an error will occur, and the program will continue with the next input data command. f_type = db Calculates decibel as 20*log10(abs(var)), stores the result in f_name. If the input variable is less than 1.e-30, the result will be set equal to -600. func db f_name', +-var f_name = Name of the newly created variable. If f_name has already been defined, it will be replaced by this command, and a warning will be written to standard output. var = Name of the input variable, which must exist and be a type of variable. f_type = decr Decreases a variable's value with var. func decr f_name', +-var f_name = Name of the variable, the value of which will be decreased. F_name must be defined, otherwise an error will occur. var = Input variable or input data which will be subtracted from f_name. f_type = deriv_m Creates the derivative of a variable. The derivation is symmetric in every point dy(t)= (y(t+dt)-y(t-dt))/(2*dt), except in the first and last point. func deriv_m f_name', +-var f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be overwritten by this command, and a warning will be written to standard output. var = Name of the input data variable. If the variable var' not exists in memory, an error will occur, and the program will continue with the next input data command. f_type = exp The value e raised to the power of var. If the input variable var is bigger than 70, the result will be set equal to 2.5e30. func exp f_name', +-var f_name = Name of the newly created variable. If f_name has already been defined, it will be replaced by this command, and a warning will be written to standard output. var = Name of the input variable, which must exist and be a type of variable. f_type = incr Increase a variable's value with var. func incr f_name', +-var f_name = Name of the variable which will have its value increased. F_name must be defined otherwise an error will occur. var = Input variable or input data which will be added to f_name. f_type = integ_heun Integrates a variable. The integration is made according to Heun's method. func integ_heun f_name', +-var f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be overwritten by this command, and a warning will be written to standard output. var = Name of the input data variable. If the variable var' not exists in memory, an error will occur, and the program will continue with the next input data command. f_type = intpl_l Creates a linear interpolated variable. The points of interpolation are given in this directive. The independent variable is read from an existing scalar or constant. 
The input data reading into field intpl_l is ended by giving a new main command according to post processing-commands or Plotting-commands. If the value in var' lies outside the interval x1 - xn, the output data is extrapolated with help of the gradients from the outer points. func intpl_l f_name' var' x1,y1 x2,y2 x3,y3 x4, .... f_name = Name of the newly created variable. If variable f_name already exists in memory, its value will be replaced with this command, and a warning message will be written on standard output. var = Name of the independent input data variable. Must be of the type scalar or constant otherwise an error will occur. x1,y1 = Point number 1 x2,y2 = Point number 2 etc. f_type = inv Inverts the variable var', and stores the output data in f_name'. If the variable var' is less than 1.e-30, f_name will be set at 1.e30. func inv f_name', +-var f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning text will be written to standard output. var = Name of the input variable. If there is no variable var', a warning will be written to standard output and the program will continue with the next input data command. f_type = l_lim Sets the lower limit of a variable. If the value in f_name' is less than var, the value of f_name' will be set to var. func l_lim f_name', +-var f_name = Name of the variable which will be limited. F_name must be defined, otherwise an error print will occur. var = Input variable or input data which defines the lower limit. f_type = log Calculates the logarithmic function base e of the variable var, stores the result in f_name. If the input variable is less than 1.e-30, the result will be set equal to -69.0775. func log f_name', +-var f_name = Name of the newly created variable. If f_name has already been defined, it will be replaced by this command, and a warning will be written to standard output. var = Name of the input variable, which must exist and be a type of variable. f_type = log10 Calculates the logarithmic function base 10 of the variable var, stores the result in f_name. If the input variable is less than 1.e-30, the result will be set equal to -30. func log10 f_name', +-var f_name = Name of the newly created variable. If f_name has already been defined, it will be replaced by this command, and a warning will be written to standard output. var = Name of the input variable, which must exist and be a type of variable. f_type = max Creates a new variable which is the maximum of two or more input variables. func max f_name' +-var1 +-var2 +-var3 ,,, etc. f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning text will be written to standard output. var1 = Name of input variable or input data 1. var2 = Name of input variable or input data 2. var3 = Name of input variable or input data 2. ,,, = Etc, until next main command or eof. f_type = maxloc The location of the maximum value. When searching for the max value of many variables, it can also be of interest to know the name/number of the variable having the greatest value. To be used together with func max. func maxloc f_name' +-var1 +-var2 +-var3 ,,, etc. f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning text will be written to standard output. var1 = Name of input variable or input data 1. var2 = Name of input variable or input data 2. 
var3 = Name of input variable or input data 2. ,,, = Etc, until next main command or eof. f_type = mean_sec Calculation of average value section per section. func mean_sec f_name' +-'sec_length +-'var_name +-'var_tname f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning text will be written to standard output. sec_length = Length of sections to average var_name = Variable to calculate the average value of var_tname = X-axle of var_name Also a X-variable for f_name will be created. The elements in the X-variable are calculated as: X(isec)= sum(Xvect(isec_beg:isec_end)) / (isec_end-isec_beg+1) I.e. the sections will start at "X(isec)-sec_length/2" and end at "X(isec)+sec_length/2" The name of the X-variable will be set equal to 'T'//f_name. f_type = min Creates a new variable which is the minimum of two or more input variables. func min f_name' +-var1 +-var2 +-var3 ,,, etc. f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning text will be written to standard output. var1 = Name of input variable or input data 1. var2 = Name of input variable or input data 2. var3 = Name of input variable or input data 2. ,,, = Etc, until next main command or eof. f_type = minloc The location of the minimum value. When searching for the min value of many variables, it can also be of interest to know the name/number of the variable having the smallest value. To be used together with func min. func minloc f_name' +-var1 +-var2 +-var3 ,,, etc. f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning text will be written to standard output. var1 = Name of input variable or input data 1. var2 = Name of input variable or input data 2. var3 = Name of input variable or input data 2. ,,, = Etc, until next main command or eof. f_type = opere Evaluate an expression. The expression shall be written as a command to matlab or octave. Output from the command is written to standard output. func opere expression expression = Text string containing the expression that shall be evaluated. N.B. The expression shall be written inside grave accents "". f_type = operf Evaluate an expression. The expression shall be written as a command to matlab or octave. Output is written to variable f_name. func operf f_name= expression tname= var f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning text will be written to standard output. expression = Text string containing the expression that shall be evaluated. N.B. The expression shall be written inside grave accents "". tname=var = Optional subcommand for setting the associated default X-variable. f_type = operp Creates a variable which is based on several algebraic operations in sequence. The variables included can be variables or data constants. On execution of the statement, priority is given to multiplication and division over addition and subtraction. Should the user require the operations to be executed in a different order, this can be done with the use of parentheses. The number of parenthesis levels is limited to max. 101 levels. Reading of input data into 'func operp', will continue until a new main command according to 3.1) has been read. func operp f_name' +-var1 oper1 +-var2 oper2 ... etc. f_name = Name of the newly created variable. 
If the f_name has already been defined, it will be replaced by this command, and a warning message will be written on standard output. var1 = Name of the input variable or input data 1. oper1 = Operation number 1 var2 = Name of the input variable or input data 2. oper2 = Operation number 2. The following symbols represent permitted operations: + = Addition - = Subtraction * = Multiplication / = Division ( = Beginning of parenthesis ) = End of parenthesis N.B. A delimiter must be used between every var and oper. A delimiter is either a space, a tab sign, or a comma. Parentheses are only permitted up to 101 levels. f_type = rot_orient Orients a vector with respect to another vector. The function is especially developed for wheel measurements made with the miniprof wheel- and rail- profile measuring device. Sometimes it can be difficult to know how the miniprof recorder was oriented when it was measuring a profile. This function allows the user to compare the measured profile with a theoretical profile and orient the measured profile with respect to theoretical profile. func rot_orient f_name' var_meas' var_orig' +-'xbeg_1 +-'xend_1 +-'xbeg_2 +-'xendp_2 var_meas = Name of the measured data which shall be oriented. var_orig = Name of the theoretical(original) data which shall be used as reference. xbeg_1 = Start coordinate for orientation area #1 xend_1 = Stop coordinate for orientation area #1 xbeg_2 = Start coordinate for orientation area #2 xend_2 = Stop coordinate for orientation area #2 Please look in the example shown in kpf_sum.html. f_type = reverse Writes the contents of a vector in reverse order to the new variable f_name. The function can be used when creating low- and high- pass filters without phase shift. func reverse f_name', +-var1 f_name = Name of the newly created variable. If variable f_name already exists in memory, it will be replaced by this command, and a warning message will be written on standard output. var1 = Name of input vector. f_type = sign2 Creates a new variable which is based on two inputs var1 and var2. Command sign2 reads the value from the variable var1 and multiplies its absolute value with the sign of variable var2. func sign2 f_name', +-var1, +-var2 f_name = Name of the newly created variable. If f_name has previously been defined, it will be replaced by this command, and a warning message will be written on standard output. var1 = Name of input variable or input data 1. var2 = Name of input variable or input data 2. f_type = slope_dx Creates the derivative of a variable. The derivative is calculated according to dy(t)= (y(t+dt)-y(t))/dt. func slope_dx f_name', +-var, +-dt f_name = Name of the newly created variable. If f_name has already been defined, it will be replaced by this command, and a warning will be written to standard output. var = Name of the input variable, which must exist and be a type of variable. dt = A constant step size, when calculating the derivative. The unit of "dt" is the same as for the default X-variable of variable "var". f_type = spline_G_rel Natural B-spline data smoothing. func spline_G_rel f_name', +-var, tol f_name = Name of the newly created variable. If f_name has already been defined, it will be replaced by this command, and a warning will be written to standard output. var = Name of the input variable, which must exist and be a type of variable. tol = Tolerance defining how close the smoothing shall follow variable var. The tolerance is scaled relative to the RMS-value of the variable. 
Therefore tol should be rather big in order to get any filtering effect ~100-1000. However if tol is too big, the filtered curve will be a straight line. f_type = sqrt Calculates the square root of the variable var, and stores the result in f_name. If the variable is less than 0, the absolute value of the variable var is taken, before the square root is calculated. func sqrt f_name', +-var f_name = Name of the newly created variable. If f_name has already been defined, it will be replaced by this command, and a warning will be written to standard output. var = Name of the input variable, which must exist and be a type of variable. f_type = u_lim Sets the upper limit of a variable. If the value in f_name' exceeds var, the value of f_name' will be set to var. func u_lim f_name', +-var f_name = Name of the variable which is to be limited. The f_name must be defined otherwise an error will occur. var = Input variable or input data which defines the upper limit. Input according to method 2) without subcommands. This method of reading FUNC-input data commands is an old method and is only kept for backward compatibility. In this case, the indata_group has the same formation regardless of function, "FUNC, Indata_group" can thereby be replaced by the following: FUNC, Yname1, Oper, Yname2, Fname FUNC = Directive to execute mathematical operations in one or two variables. The input data Oper controls which type of function will be executed. Yname1 = Name of the input variable #1. Yname1 can be a type of variable, scalar or constant. Oper = The user can choose between the following operations: ADD Addition ATAN2 Arc-tan of Yname1 divided by Yname2 CABS Complex absolute value of two variables, Yname1 is used as the real part and Yname2 is used as the imaginary part DIV Division MUL Multiplication PWR Yname1 raised to the power of Yname2 SUB Subtraction Operations involving only one variable (Yname2 is ignored): ABS Absolute value of Yname1 dB Decibel of Yname1 i.e. 20*lg(abs(Yname1)) EXP Exponential function base e LG Logarithmic function base 10 LN Logarithmic function base e RMS Root Mean Square SQR Square root Yname2 = Name of the input variable #2. Yname1 can be a type of variable, scalar or constant. Fname = Name of output variable. Fname is always a type of variable. INSERT in_type, input(*) Definition of include-files. This directive makes it possible for the user to redirect the input reading to another file. The insert-command can also be given in the inserted file, to further redirect the input reading. in_type = file The inserted file is read just as it is, without any filtering functions. insert file insfile' insfile = Name of the file which will be expanded in the input data file. in_type = format The insert file is read according to the format specification for the input data file. This directive is applicable when the input data is to read from an extern file, but the user wishes to select a desired number of columns. insert format formspec' insfile' formspec = Format specification which controls the data filtration. The format specification is used in FORTRAN's read statement which reads the insfile. On reading the file, there are two format specifications which are permitted a(character) and x(skip). The format specification shall be enclosed in parentheses, as this is required by FORTRAN's read statement. As almost all the format specifications contain comma signs, the format specification must be enclosed in accent signs, otherwise the comma sign will be interpreted as a delimiter. 
E.g.: column positions 1-10 and 21-30 are to be read from the file testfile insert format '(a10,10x,a10)' testfile insfile = Name of the data file which will be expanded in the input data file. in_type = free_form' The insert-file is read in free format, certain columns are selected with the special format-command to free_form. This directive is applicable when input data is to be read from an extern file and certain columns are to be selected, but the number of characters in the columns and/or spaces is not known. The following characters can be used as delimiters: space, tabulator, comma, and equal signs. insert free_form formspec' insfile' formspec = Format specification which controls the data filtration. The format specification is given with special commands which determines whether a column should be read or not. The format specification is similar to a FORTRAN's format statement, with the extension that it is not necessary to count the number of positions in the columns which have been read. On reading the file, there are two format specifications which are permitted a(character) and x(skip). The format specification can be enclosed in parentheses, and as the entire format specification will be inserted into the formspec without interruption, the format specification must be enclosed in accents, otherwise the comma sign will be interpreted as a delimiter. E.g.: a file contains 5 columns, columns 1 and 3 shall be read from the file data/testfile. insert free_form '(a,x,a,x,x)' data/testfile N.B. All columns on the file data/testfile must be marked, either with an 'a' if the column is to be read, or with an 'x' if the column is not to be read. insfile = Name of the data file which will be expanded into the input data file. in_type = format_rows The insert file will be read according to the format specification, but the input rows from the inserted file can also be selected. insert format_rows formspec' istart istop insfile' formspec = Format specification which controls the data filtration process. See description under in_type = format. istart = Start line from where the reading will start. istop = Stop line where the reading will stop. insfile = Name of the data file which will be expanded in the input data file. in_type = free_form_rows The insert-file will be read according to the format specification as in_type=free_form, but the input rows from the inserted file can also be selected. insert free_form_rows formspec' istart istop insfile' formspec = Format specification which controls the data filtration process. See description under in_type = free_format. istart = Start line from where the reading will start. istop = Stop line where the reading will stop. insfile = Name of the data file which will be expanded in the input data file. If_then variable_1, test, variable_2 or Elseif_then variable_1, test, variable_2 Definition of an (else)if_then-block. By using the if_then-command, the user has the possibility to control which statements in the input data file that shall be executed or not. If the test condition is true all statements inside the if_then-block will be executed, otherwise they will be passed over. The if_then-block is ended with an ELSEIF_THEN, ELSE or ENDIF -command. The command if_then has the following input data: variable_1 = Test variable number 1 containing a variable name or data constant. Test = Text string which defines the test condition. Following test strings are valid in test: .eq. = True if variable_1 is equal to variable_2 .ne. 
= True if variable_1 is not equal to variable_2 .gt. = True if variable_1 is greater than variable_2 .ge. = True if variable_1 is greater than or equal to variable_2 .lt. = True if variable_1 is lesser than variable_2 .le. = True if variable_1 is lesser than or equal to variable_2 variable_2 = Test variable number 2 containing a variable name or data constant. N.B. For the test conditions .eq. and .ne., variable_1 and variable_2 will be converted to integers before the test is undertaken. Command if_then can also be used for checking if a variable exists or not, see the alternative usage of command if_then below. If_then test variable_1 Definition of an if_then-block. This alternative usage of command if_then make tests on one variable only. test = Text string which defines the test condition. Following test strings are valid in test: .exist. True if variable_1 exists in the memory of the program .not_exist. True if variable_1 not exists in the memory of the program var1 = Test variable IN_SUBSTRUCT struct_name [ arg1 arg2 arg3 etc. ] Calls a substructure which has previously been defined with the command substruct. The same number of arguments will be defined within brackets that have been used in the definition of the substructure. The delimiters which are used to separate the arguments are comma, space, or tabulator sign. The argument will replace$1, $2 etc. in the substructure. LOOP Command LOOP is used when a set of commands shall be repeated. The group of loop-commands is concluded with a UNTIL-command. NO_WARNING Directive for suppressing of warning text. When the command NO_WARNING is given, a flag is placed in the program so that any error prints from coming commands are suppressed. The command NO_WARNING only applies to the subsequent commands. If the user has a series of commands which will repeatedly give rise to warning texts, the command NO_WARNING must be given for each of these commands. NTABLE= Ncols Number of columns to be written in the *.resu-file. Declared= Integer*4 Default = 4 PRINT Sub_command <Input data> Command which exports results into text files. The output is written to the file name defined in PRYFIL. If the output file already exists its contents will be appended. The following subcommands are available: HEAD_LINES = Prints header lines SCALAR = Prints scalars in column format SCALAR_ROW = Prints all scalars in one row TEXT = Prints text VAR_IN_PAGE = Prints the variables in current page VARIABLE = Prints variables PRINT HEAD_LINES Prints the contents of the header lines on PRYFIL. PRINT SCALAR Iname1, Iname2, Iname3,,, etc. Prints scalars in column format on PRYFIL. Iname1 = Name of scalar #1 Iname2 = Name of scalar #2 Iname3 = Etc. PRINT SCALAR_ROW Comment &nspb; Iname(1:200) Prints scalars in row format on PRYFIL. Comment = Initial comment text on the row. The length of the comment will consist of at least 24 character. If the comment string is shorter than 24 characters blank spaces will be appended in the end of the comment. Iname1 = Name of scalar #1 Iname2 = Name of scalar #2 Iname3 = Etc. max. 200 scalars In the initial argument "Comment", the string "$IDENT" will be replaced with the name of current ident. PRINT TEXT Char_string Prints string Char_string on PRYFIL. Max length of Char_string is 256 characters. If Char_string contains the string "$IDENT", the name of the current ident will be inserted in its place. PRINT VAR_IN_PAGE Prints all variables in current page on PRYFIL. N.B. 
Command "PRINT VAR_IN_PAGE" must be given under level DIAGRAM or PAGE. Outside these levels the curves are not defined. PRINT VARIABLE Iname(1:200), Tstart, Tstop, Tinc, Tname Prints variables, scalars or constants on PRYFIL. Iname can be prepended with a minus sign, in order to print the negative value of the variables, scalars or constant. N.B. All variables must have the same default time axis, otherwise an error will occur. Iname1 = Name of first variable, scalar or constant Iname2 = Name of second variable, scalar or constant Iname3 = Name of third variable, scalar or constant . . . = Etc. max. 200 Tstart = Start time in the output table (concludes further reading of more variable names). Tstop = Stop time in the output table. Tinc = Time increment in the output printing. Tname = The name of the time variable. If Tname is not defined, the default time variable for Iname(1) will be used. PRYFIL = File_name Sets the name of the file to where output from command PRINT will be written. File_name = File name of the print from the print command. Declared= Character*80 Default = mplotr/$IDENT.print Where $IDENT will be replaced with the name of the current ident. STAT, Iname, FDname, FRname, +-Tstart, +-Tstop, Fymax, Fymin, Nintvl, Tname, Tsname, FCname, FHname, Ptile(20) Directive which executes statistical operations and calculations of the input variable. In addition to FDname and FRname, the following scalars are created: Iname+'MAX' = Max.value Iname+'MIN' = Min.value Iname+'MED' = Mean value Iname+'MEDIAN' = Median Iname+'STD' = Standard deviation Iname+'RMS' = RMS-value Iname+'RMQ' = RMQ-value Iname+'TMA' = Time when max Iname+'TMI' = Time when min These scalars created in directive STAT will also be written to file$ident.resu. Iname = Name of the input variable. FDname = Name of the cumulative frequency function created, stored as a variable. If the user does not wish to calculate the cumulative frequency function, FDname shall be set to 'NO'. Default = 'NO'. FRname = Name of the statistical frequency function, which is stored as a variable. If the user does not wish to calculate the frequency function, FRname shall be set to 'NO'. Default = 'NO'. Tstart = Start time for the calculations. Default value = -1.e36 which gives the same start time as the input variable has. Tstop = Stop time for the calculations. Default value = 1.e36 which gives the same stop time as the input variable has. Fymax = Max. value in the calculation of Frname. In the calculation of Frname, the values of the variable, which are bigger than Fymax, are not taken into consideration. Default value = 1.e36: which entails that MPLOT picks the max. value of the variable Iname, and sets Fymax = MAX(Iname) + 0.01 %. This bigger value is choosen in order to ensure that all values in Iname will be taken into consideration Fymin = Min. value in the calculation of Frname. In the calculation of Frname, the values of the variable, which are less than Fymin, are not taken into consideration. Default value =-1.e36: which entails that MPLOT picks the min. value of the variable Iname, and sets Fymin = min(Iname) - 0.01 %. This small decrease in the min. value is to ensure that no points in This smaller value is choosen in order to ensure that all values in Iname will be taken into consideration Nintvl = The number of intervals between Fymin and Fymax, which forms the basis for the calculation in Frname. Default = 50. Tname = The name of the time variable. If Tname is not defined, the default time variable for Iname will be used. 
Tsname = Definition of the variable to where Tstart and Tstop refers. If Tsname not has been set in input data, the default time variable for Iname will be used. FCname = X-axis for FDname, will be set to 'C'+Iname if FCname is not specified. FHname = X-axis for FRname, will be set to 'H'+Iname if FHname is not specified. Ptile = The subcommand Ptile gives the user the possibility to define percentiles in the cumulative frequency function to be stored as scalars in the memory of program MPLOT. These scalars are assigned the following names: FDname+'P1', FDname+'P2', FDname+'P3',, etc. If FDname is set equal to "no" above, the percentile scalars will instead be assigned the following names: 'FD'+Iname+'P1', 'FD'+Iname+'P2', 'FD'+Iname+'P3',, etc. The percentage value itself will also be stored as scalars in the memory of program MPLOT. These scalars will be assigned the following names: FDname+'p1', FDname+'p2', FDname+'p3',, etc. or 'FD'+Iname+'p1', 'FD'+Iname+'p2', 'FD'+Iname+'p3',, etc. A maximum of 20 percentiles can be given under the sub-command Ptile. If the percentages given in the Ptile command equals one of the following values: 0.15 25 50 75 or 99.85, the percentiles will automatically be printed on the $ident.resu-file. STAT2, IHname, IVname, Tstart, Tstop, Fminx, Fmaxx, Nintx, Fminy, Fmaxy, Ninty Directive for 3-dimensional statistical analysis of the input variables IHname and IVname. The output data is presented in a table with the IHname horizontally and the IVname vertically. In every square, a figure is presented which will indicate the degree of probability that this particular combination will occur. The output data in every square is given in per mille. The interval of the squares is divided into Fminx, Fminx+dx, Fminx+2*dx, etc. up to Fmaxx. The step dx is equal to: dx = (Fmaxx-Fminx) / Nintx The same classification is used in the Y-direction as in the X-direction. At the end of the print, the probability will be multiplied in each cell by the value of the vertical variable. The products in each column are then summed up, and the result is presented at the bottom of each print. This print can be useful in e.g. the evaluation of where on the wheel or the rail, the most wear occurs. The created Y-curve which contains the sum of the probability in each column is stored in memory under the name IVname+'.stat2'. The created X-curve which contains the center point in each column is stored in memory under the name IHname+'.stat2'. The output data is written to a separate file, named$ident.stat2. IHname = Name of the horizontal input variable. IVname = Name of the vertical input variable. Tstart = Start time for the calculations. Default = -1.e36 Tstop = Stop time for the calculations. Default = 1.e36 Fminx = Start level of the first interval. Default is equal to the min. value of the variable. Fmaxx = End level of the last interval. Default is equal to the max. value of the variable. Nintx = The number of intervals between Fminx and Fmaxx. Default = 30. Fminy = Start level of the first interval. Default is equal to the min. value of the variable. Fmaxy = End level of the last interval. Default is equal to the max. value of the variable. Ninty = The number of intervals between Fminy and Fmaxy. Default = 30 SUBSTRUCT struct_name [ input data commands ] When the same input data shall be repeated several times, and/or with only a minor difference in the input data between the different times, it can be suitable to formulate the input data in a substructure with arguments. 
When the substructure is then called, different values can be given to the substructure's argument. Users, who are acquainted with the program in FORTRAN or in Pascal, will see great similarities with FORTRAN's subroutines and/or Pascal's procedures. Users, who have worked with UNIX-scripts may see great similarities between a substructure in CALC-input data and a UNIX-script. struct_name = Defines the name of the substructure. [ input data commands ] = The input data command to mplot is given in brackets. The parts in the input data-block which shall be changed between the different calls to the substructure shall be denoted $1,$2, $3 etc. The use of the substructure is described in command IN_SUBSTRUCT. TRANS, Iname, Fname, Type, +-Tstart, +-Tstop, +-Indata(i) Directive to filtrate in the frequency domain. The command requires that the input variable has equidistant time steps. Iname = Name of the input variable. Fname = Name of the output variable. Type = The desired transfer function defined in subroutine FILTER. Tstart = Start time for the transformation. Default value 0. Tstop = Stop time for the transformation. Default value 1.e36 Indata = Input data for the transfer function: Type Input data Function BS_WB n/a Vertical comfort filter according to BS 6841:1987. BS_WD n/a Transversal comfort filter according to BS 6841:1987. BS_WF n/a Vertical motions sickness filter according to BS 6841:1987. (Further information, see Ride comfort assessments) BUTT6 fo A 6th order Butterworth low pass filter. The input data fo defines the cut-off frequency of the filter. CEN_TC256_WB n/a Vertical comfort filter according to CEN Technical Committee 256 WG 7 CEN_TC256_WC n/a X seat back comfort filter according to CEN Technical Committee 256 WG 7 CEN_TC256_WD n/a Transversal comfort filter according to CEN Technical Committee 256 WG 7(Further information, see Ride comfort assessments) EN12299_PDE n/a Filter for discrete events according to EN 12299.(Further information, see Ride comfort assessments) EN12299_WB n/a Vertical comfort filter according to EN 12299. EN12299_WC n/a X seat back comfort filter according to EN 12299. EN12299_WD n/a Transversal comfort filter according to EN 12299.(Further information, see Ride comfort assessments) ERRI153_WB n/a Vertical comfort filter according to ERRI Question B 153 Report no.18. ERRI153_WC n/a X seat back comfort filter according to ERRI Question B 153 Report no.18. ERRI153_WD n/a Transversal comfort filter according to ERRI Question B 153 Report no.18. FILT_ZP Filter built up by zeros and poles. nz, Number of zeros. np, Number of poles. cScalfact, Complex scale factor (two values). cZero_1, Complex zero #1 (two values). cZero_2, Complex zero #2 (two values). . . . . . . cZero_nz, Complex zero #nz (two values). cPole_1, Complex pole #1 (two values). cPole_2, Complex pole #2 (two values). . . . . . . cPole_np Complex pole #nz (two values). HPASS1 fo High pass filter of first order. The input data fo defines the cut-off frequency of the filter. HPASS2 fo,zeta High pass filter of second order. The input data fo defines the cut-off frequency of the filter. The input data zeta defines the damping in the filter, set is defined as fraction of critical damping. IKLIPP f1,f2 Ideal band inhibit filter. All frequencies between f1 to f2 are set to zero, other frequencies are transferred intact to Fname. ISO2631_97WC n/a Longitudinal Ride Comfort filter according to ISO 2631-1:1997. ISO2631_97WD n/a Lateral Ride Comfort filter according to ISO 2631-1:1997. 
ISO2631_97WK n/a Vertical Ride Comfort filter according to ISO 2631-1:1997. ISO2631_97WF n/a Vertical motion sickness filter according to ISO 2631-1:1997. (Further information, see Ride comfort assessments) ISO8041L n/a Lateral comfort filter according to ISO 8041. This filter is similar to ISOL, but this filter is realized with a physical filter with pole's and zero's in contrast to ISOL which is just an amplification factor which every frequency is multiplied with, without taking into consideration the phase shift which occurs in a physical filter. Moreover the filter includes a band pass filter comprising a 2-pole high pass Butterworth filter at 0.8 Hz, and a 2-pole low pass Butterworth filter at 100 Hz. Further information, see ISOL-filter. ISO8041V n/a Vertical comfort filter according to ISO 8041. This filter is similar to ISOV, but this filter is realized with a physical filter with pole's and zero's in contrast to ISOV which is just an amplification factor which every frequency is multiplied with, without taking into consideration the phase shift which occurs in a physical filter. Moreover the filter includes a band pass filter comprising a 2-pole high pass Butterworth filter at 0.8 Hz, and a 2-pole low pass Butterworth filter at 100 Hz. Further information, see ISOV-filter. ISO8041V_MS n/a Vertical motion sickness filter according to ISO 8041. (Further information, see Ride comfort assessments) ISOL n/a Lateral comfort filter according to ISO 2631. ISOV n/a Vertical comfort filter according to ISO 2631. ISOV_2631_3 n/a Vertical motion sickness filter according to ISO 2631/3. ISOTL n/a Lateral comfort filter according to ISO 2631, with third octave band analysis. ISOTV n/a Vertical comfort filter according to ISO 2631, with third octave band analysis. (Further information, see Ride comfort assessments) KLIPP f1,f2 Ideal band pass filter. All frequencies between f1 to f2 are transferred, other frequencies are set to zero. LPASS1 fo Low pass filter of first order. The input data fo defines the cut-off frequency of the filter. LPASS2 fo,zeta Low pass filter of second order. The input data fo defines the cut-off frequency of the filter. The input data zeta defines the damping in the filter, set is defined as fraction of critical damping. TRANSF, Iname_r, Iname_i, Fname_r, Fname_i, Type, Indata(i) Directive to filtrate in the frequency domain, for filtration in the spectra generated by the FRESP program. Iname_r = Name of the input spectra's real part. Iname_i = Name of the input spectra's imaginary part. Fname_r = Name of the output spectra's real part. Fname_i = Name of the output spectra's imaginary part. Type = Complex transfer function, defined in subroutine FILTER. Indata = Input data to the transfer function: Same filters and Indata(i) as in command TRANS are also available in command TRANSF. TRANST, Iname, Fname, Type, +-Tstart, +-Tstop, Tsname, +-Indata(i) Directive, similar to the command TRANS , to filtrate in the frequency domain. The difference between TRANS and TRANST, is that the command TRANST has a subcommand Tsname. Same filters and Indata(i) as in command TRANS are also available in command TRANST. Tsname = Variable which Tstart and Tstop will interpolate in, for the formulation of calculation interval. As default, Tsname is set to "?", which means that the default time variable for Iname will be used. UNTIL variable_1, test, variable_2 Directive which concludes an input data group opened with LOOP. 
The commands given between LOOP and UNTIL will be repeated until the test condition becomes true. variable_1 = Test variable number 1 containing a variable name or data constant. test = Text string defining the test condition. .eq. = True if variable_1 is equal to variable_2. .ne. = True if variable_1 is not equal to variable_2. .gt. = True if variable_1 is greater than variable_2. .ge. = True if variable_1 is greater or equal to variable_2. .lt. = True if variable_1 is less than variable_2. .le. = True if variable_1 is less or equal to variable_2. variable_2 = Test variable number 2 containing a variable name or data constant. For the test conditions .eq. and .ne., variable_1 and variable_2 will be converted to integers before the test is undertaken. All other tests are undertaken in real. WZ_AFRESP, Iname, WZname, Ityp, FQname Directive to calculate a WZ-factor from an absolute value variable created by the program FRESP. The command does not care which frequency step the spectra has been stored with on the output file. The command WZ_AFRESP interpolates the spectra from 0.05-15 Hz with a step of 0.05 Hz regardless of how the variable has been stored on the output file. The Fourier serie variable Iname will be the absolute value of the real part and the imaginary part. The directive creates the variable WZname, which contains the filtrated frequency variable of Iname. A frequency axis, named EQ_freq_0.05, is created for the variable Wzname. However, the variable EQ_freq_0.05 is not created if it already exists. Print of the WZ-value occurs on file$ident.resu and is stored in the memory as a scalar. Iname = Name of the input variable. WZname = Name of the output variable after Wz-filtering. If the WZname is given, no WZ-calculation will take place. Ityp = Defines the type of Ride Index to be carried out. Ityp can be set to: VPV Ride Index in vertical direction for passenger vehicles. LPV Ride Index in lateral direction for passenger vehicles. GV Vertical and lateral Ride Index for freight vehicles. VPV will apply if Ityp is not specified. FQname = The name of the frequency axis. If FQname not has been set in input data, the default X-variable for Iname will be used. ### Calculate different ride comfort assessments. Ride comfort according to: Motion sickness according to: #### Calculation of ride index according to Wz. Wz can be calculated in command FTWZ and WZ_AFRESP. Command FTWZ will perform both Fourier serie- and Wz-calculation. Example: create_scalar new Xeval_start= 120 Ftwz car_1b1.ay FTname= car_1b1.ay.ft ityp= lpv tstart= Xeval_start tsname= lsb_11.pn Ftwz car_1.m.ay FTname= car_1.m.ay.ft ityp= lpv tstart= Xeval_start tsname= lsc_1.pn Ftwz car_1b2.ay FTname= car_1b2.ay.ft ityp= lpv tstart= Xeval_start tsname= lsb_12.pn Output data results is written to file $ident.resu Example: ======================================= S U M M A R Y O F R E S U L T S ======================================= RIDE QUALITY for car_1b1.az car_1.m.az car_1b2.az ------------ WZ (VPV ) 1.95 1.61 2.07 DOMINANT FREQ. 2.24 .67 1.11 RIDE QUALITY for car_1b1.ay car_1.m.ay car_1b2.ay ------------ WZ (LPV ) 2.28 2.05 2.40 DOMINANT FREQ. 
2.28 2.28 2.28 FREQUENCY ANALYSIS car_1b1.ay.ft car_1.m.ay.ft car_1b2.ay.ft ------------------ FREQUENCY no 1 1.243 .474 2.282 AMPLITUDE .388349E-01 .345279E-01 .363003E-01 FREQUENCY no 2 1.383 2.283 .475 AMPLITUDE .372562E-01 .336819E-01 .348181E-01 FREQUENCY no 3 1.156 .457 .457 AMPLITUDE .367242E-01 .330116E-01 .338166E-01 FREQUENCY no 4 .455 .439 .962 AMPLITUDE .337917E-01 .303532E-01 .324284E-01 FREQUENCY no 5 .473 .421 1.683 AMPLITUDE .332709E-01 .293488E-01 .315409E-01 FREQUENCY no 6 .963 2.224 2.223 AMPLITUDE .329042E-01 .263795E-01 .311171E-01 FREQUENCY ANALYSIS car_1b1.az.ft car_1.m.az.ft car_1b2.az.ft ------------------ FREQUENCY no 1 .671 .673 .862 AMPLITUDE .220027E-01 .227541E-01 .329338E-01 FREQUENCY no 2 1.109 .615 .938 AMPLITUDE .215236E-01 .191837E-01 .320969E-01 FREQUENCY no 3 1.284 .503 1.109 AMPLITUDE .210400E-01 .183239E-01 .308856E-01 FREQUENCY no 4 1.173 .462 .985 AMPLITUDE .206495E-01 .152621E-01 .298146E-01 FREQUENCY no 5 2.241 .860 .674 AMPLITUDE .202751E-01 .137461E-01 .295354E-01 FREQUENCY no 6 1.348 .702 1.067 AMPLITUDE .194817E-01 .123255E-01 .260435E-01 The print starts with an ident name, date and heading. Thereafter follows the print of the Wz-factors and the frequency component which states the dominant frequency for the Wz-factor. Finally, a summary is given of the six largest peaks in the Fourier series of the non-filtered acceleration variable.
#### Calculation of comfort factors according to ISO 2631/1-1985.
In ISO 2631, an evaluation has been made of the effect that different vibration environments have on people. The study has primarily considered one frequency at a time in order to establish a comfort requirement. When measuring vibrations containing several frequencies, there are fundamentally only two methods to choose between: 1) Broad-band analysis, where the entire frequency register is weighted together and an RMS-value is calculated, 2) Third octave band analysis, where the RMS-level in every octave is calculated and the max. value is acquired. Method 1) is a good alternative if a simple factor is required as certification for comfort, and to acquire a certification which is simple to compare with other vibration spectra calculated by the same method. Method 2) is a more exact method for evaluating the limit of fatigue experienced by a person sitting in a vibrating environment. However, it works best if the vibrations are concentrated to primarily one single frequency.
##### ISO 2631 Method 1) Calculation of the broad-band comfort factor
The method first filters the acceleration variable with a weight filter, then the variable's RMS-value is calculated. This RMS-value functions as a comfort factor. The filter has no physical background, but is instead a mathematical structure, as straight and ideal as shown in the standard. Illustration of the vertical filter: the transfer function rises from 1 (Hz), has the value "1" in the interval 4-8 (Hz), and falls off towards 36 (Hz). Illustration of the lateral filter: the transfer function has the value "1" in the interval 1-2 (Hz) and falls off towards 36 (Hz). In MPLOT, the octave band is studied from 1 to 31.5 (Hz) in the ISO-analysis, but as every octave has a third octave width, this means that frequencies from 0.841 to 38.05 will be taken into consideration.
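As a minimal sketch of the relation behind Method 1) (the symbols a_w, Tstart and Tstop below are only illustrative notation, not MPLOT input data): the comfort factor is the RMS value of the weight-filtered acceleration a_w(t) over the evaluation interval,
a_w,RMS = sqrt( 1/(Tstop-Tstart) * integral from Tstart to Tstop of a_w(t)^2 dt )
and this is the figure reported as RMS VALUE by the STAT directive in the example that follows.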
In order to execute a comfort calculation according to ISO 2631, an example of input data is given here: The acceleration variables are then filtered through weight filters. The vertical filter is called ISOV, and the filter which is used for transversal accelerations is called ISOL. Example: Trans car_1b1.ax fname= car_1b1.axISO type= ISOL Trans car_1.m.ax fname= car_1.m.axISO type= ISOL Trans car_1b2.ax fname= car_1b2.axISO type= ISOL Trans car_1b1.ay fname= car_1b1.ayISO type= ISOL Trans car_1.m.ay fname= car_1.m.ayISO type= ISOL Trans car_1b2.ay fname= car_1b2.ayISO type= ISOL Trans car_1b1.az fname= car_1b1.azISO type= ISOV Trans car_1.m.az fname= car_1.m.azISO type= ISOV Trans car_1b2.az fname= car_1b2.azISO type= ISOV The filtered variable's RMS-value is then calculated in command STAT. Example: Create_scalar new Xeval_start= 120 Create_scalar new Xeval_stop = 2120 Stat car_1b1.axISO tstart= Xeval_start tstop= Xeval_stop tname= lsb_11.pn Stat car_1.m.axISO tstart= Xeval_start tstop= Xeval_stop tname= lsc_1.pn Stat car_1b2.axISO tstart= Xeval_start tstop= Xeval_stop tname= lsb_12.pn Stat car_1b1.ayISO tstart= Xeval_start tstop= Xeval_stop tname= lsb_11.pn Stat car_1.m.ayISO tstart= Xeval_start tstop= Xeval_stop tname= lsc_1.pn Stat car_1b2.ayISO tstart= Xeval_start tstop= Xeval_stop tname= lsb_12.pn Stat car_1b1.azISO tstart= Xeval_start tstop= Xeval_stop tname= lsb_11.pn Stat car_1.m.azISO tstart= Xeval_start tstop= Xeval_stop tname= lsc_1.pn Stat car_1b2.azISO tstart= Xeval_start tstop= Xeval_stop tname= lsb_12.pn The output from the statistical calculation will be presented on file$ident.resu. ISO's comfort factor equals the RMS-value of the filtered acceleration. Example: STATISTICS for car_1b1.ayISO car_1.m.ayISO car_1b2.ayISO ---------- common exponent: *E -3 *E -3 *E -3 MAX VALUE 572.364 299.483 697.256 MIN VALUE -614.531 -350.874 -770.525 RMS VALUE 160.733 78.368 187.065 <- Ride Index RMQ VALUE 217.853 107.326 251.346 AVERAGE VALUE .120 -.006 -.125 MEDIAN -.589 -.656 -1.385 STANDARD DEVIATION 160.733 78.368 187.065 If only a part of the variable is used in the evaluation of the comfort factor, the selection shall be made in the STAT command and not in the TRANS command, as the TRANS command fill out the result with zeros in order to make Fname to the same length as the input vector Iname. ##### ISO 2631 Method 2) Calculation of the comfort factor with the third band analysis It has been discussed in ISO-norms, that there is at present no evidence to show that several interference acceleration tones of different frequency mix with each other, and result in a decrease in comfort. Today's research results show that the dominating frequency is the tone which governs the comfort factor. That is why ISO 2631/1-1985 suggests that the comfort factors are evaluated within every octave band, and the octave which has the highest RMS-value determines the comfort factor for the entire the broad-band spectra. According to ISO, this method is superior to the method described above under Method 1), but ISO adds that the differences between the methods are usually not so significant which is why the method under Method 1) can be used, as it is simpler to use in analysis and, moreover, it is more conservative. There is the possibility to use ISO 2631 in MPLOT with third octave analysis. The filter variable is created under the TRANS directive with the ISOTV filter for vertical acceleration, and ISOTL for transversal acceleration. 
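As a minimal sketch, the third octave analysis for one single vertical acceleration signal only needs one weight filtering followed by one statistics call; the variable names below are the same illustrative names as used in the other examples of this manual:
Trans car_1.m.az fname= car_1.m.azISOT type= ISOTV
no_warning
Stat car_1.m.azISOT
The full example below repeats this pattern for the three measurement points and all three coordinate directions, and restricts the evaluation interval with the tstart/tstop and tsname subcommands.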
Example: Create_scalar new Xeval_start= 120 Create_scalar new Xeval_stop = 2120 ## ## Ride Comfort according to ISO 2631 1985 third band analysis. ## ------------------------------------------------------------ Transt car_1b1.ax fname= car_1b1.axISOT type= ISOTL tsname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Transt car_1.m.ax fname= car_1.m.axISOT type= ISOTL tsname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Transt car_1b2.ax fname= car_1b2.axISOT type= ISOTL tsname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop # Transt car_1b1.ay fname= car_1b1.ayISOT type= ISOTL tsname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Transt car_1.m.ay fname= car_1.m.ayISOT type= ISOTL tsname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Transt car_1b2.ay fname= car_1b2.ayISOT type= ISOTL tsname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop # Transt car_1b1.az fname= car_1b1.azISOT type= ISOTV tsname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Transt car_1.m.az fname= car_1.m.azISOT type= ISOTV tsname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Transt car_1b2.az fname= car_1b2.azISOT type= ISOTV tsname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop # no_warning Stat car_1b1.axISOT no_warning Stat car_1.m.axISOT no_warning Stat car_1b2.axISOT no_warning Stat car_1b1.ayISOT no_warning Stat car_1.m.ayISOT no_warning Stat car_1b2.ayISOT no_warning Stat car_1b1.azISOT no_warning Stat car_1.m.azISOT no_warning Stat car_1b2.azISOT If desired, the spectra can be plotted. The diagram can be created with the following input data: Page xaxis= log y_bot= 0 Diagram 11 Curve yvar= car_1b1.ayISOT Diagram 12 Curve yvar= car_1.m.ayISOT Diagram 13 Curve yvar= car_1b2.ayISOT Diagram 21 Curve yvar= car_1b1.azISOT Diagram 22 Curve yvar= car_1.m.azISOT Diagram 23 Curve yvar= car_1b2.azISOT EndPage From the max. value of the statistical print, the ride index can be determined. The dominating octave's frequency can also be read in the same statistical column. In order to acquire an understanding of the quality of the result, the following table can be used:
Limit of fatigue   Vertical RMS [m/s2]   Transversal RMS [m/s2]
----------------   -------------------   ----------------------
24 h               0.140                 0.100
16 h               0.212                 0.150
8 h                0.315                 0.224
4 h                0.530                 0.355
2.5 h              0.710                 0.500
1 h                1.180                 0.850
25 min             1.800                 1.250
16 min             2.120                 1.500
1 min              2.800                 2.000
When the three ride indexes are calculated for all three coordinate directions, a total ride index can be calculated by the following equation:
Asum = sqrt( (1.4*Ax)^2 + (1.4*Ay)^2 + Az^2 )
The Asum value in the formula above has the same scale as vertical accelerations. In order to acquire an understanding of the fatigue limit in the table above, the column for vertical RMS shall be read.
#### Calculation of ride comfort according to BS 6841:1987.
In the standard, there are several different methods to evaluate mechanical vibrations depending on the type of vibration environment, and what type of activities are to be undertaken. In this section of the user manual, chapter 6) of BS 6841:1987 is discussed. The chapter is called "Guide to the evaluation of vibration and repeated shock with respect to discomfort and perception" and deals with the comfort of people of normal health, who are exposed to full-body vibrations during journeys, both for pleasure and work.
Input data example: create_scalar new Xeval_start= 120 create_scalar new Xeval_stop = 2120 # Transt car_1b1.ax fname= car_1b1.ax.BS type= BS_WD Transt car_1.m.ax fname= car_1.m.ax.BS type= BS_WD Transt car_1b2.ax fname= car_1b2.ax.BS type= BS_WD Stat car_1b1.ax.BS tname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1.m.ax.BS tname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1b2.ax.BS tname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop # Transt car_1b1.ay fname= car_1b1.ay.BS type= BS_WD Transt car_1.m.ay fname= car_1.m.ay.BS type= BS_WD Transt car_1b2.ay fname= car_1b2.ay.BS type= BS_WD Stat car_1b1.ay.BS tname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1.m.ay.BS tname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1b2.ay.BS tname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop # Transt car_1b1.az fname= car_1b1.az.BS type= BS_WB Transt car_1.m.az fname= car_1.m.az.BS type= BS_WB Transt car_1b2.az fname= car_1b2.az.BS type= BS_WB Stat car_1b1.az.BS tname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1.m.az.BS tname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1b2.az.BS tname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop In order to determine whether the RMS- or RMQ-value shall be used in the evaluation of the BS ride index, a so-called "crest factor" must be calculated. If the crest factor exceeds the value 6.0, the RMQ-value shall be used, otherwise the RMS-value shall be used. The crest factors can be calculated by the following input data commands: Func operp crestxb1= car_1b1.ax.BSMAX / car_1b1.ax.BSRMS Func operp crestx.m= car_1.m.ax.BSMAX / car_1.m.ax.BSRMS Func operp crestxb2= car_1b2.ax.BSMAX / car_1b2.ax.BSRMS # Func operp crestyb1= car_1b1.ay.BSMAX / car_1b1.ay.BSRMS Func operp cresty.m= car_1.m.ay.BSMAX / car_1.m.ay.BSRMS Func operp crestyb2= car_1b2.ay.BSMAX / car_1b2.ay.BSRMS # Func operp crestzb1= car_1b1.az.BSMAX / car_1b1.az.BSRMS Func operp crestz.m= car_1.m.az.BSMAX / car_1.m.az.BSRMS Func operp crestzb2= car_1b2.az.BSMAX / car_1b2.az.BSRMS # print scalar crestxb1 crestx.m crestxb2 crestyb1 cresty.m crestyb2 crestzb1 crestz.m crestzb2 The crest factors will be written to the print-file by the print command. If the crest factor exceeds 6.0 at any point, the evaluation of the comfort factor will, instead, be executed according to App. C in BS 6841, which states that the RMQ-value for the variable shall be used instead of the RMS-value. No special calculation of the RMQ-value is necessary, as it is calculated simultaneously with the RMS-value through the STAT directive. When the three ride indexes are calculated for all three coordinate directions, a total ride index can be calculated by the following equation:
Asum = sqrt( Ax^2 + Ay^2 + Az^2 )
If any of the values Ax, Ay or Az is lower than 25% of max(Ax,Ay,Az), this value will not be used in the computation. If two of the values are less than 25% of the third value, Asum will be equal to the greatest of the values Ax, Ay and Az. In BS 6841:1987, no limit values for fatigue levels are given, due to the fact that there are so many other impressions which affect the perception of comfort, e.g. sound, smells, seating quality etc.
But the following table is presented under App. C as a guideline:
Acceleration variable RMS (m/s2)   Subjective opinion
--------------------------------   ------------------------
< 0.015                            no noticeable vibration
< 0.315                            not uncomfortable
0.315 - 0.630                      a little uncomfortable
0.500 - 1.000                      quite uncomfortable
0.800 - 1.600                      uncomfortable
1.250 - 2.500                      very uncomfortable
> 2.0                              extremely uncomfortable
#### Calculation of the ride comfort according to: EN 12299, CEN Technical Committee 256 WG 7 and ERRI Question B 153.
The standard covers four types of comfort evaluations: Mean comfort simplified (Nmv), Mean comfort complete (Nvd, Nva), Comfort on curve transitions (Pct) and Comfort on discrete events (Pde).
NMV - Mean comfort simplified:
• Measure the accelerations in all three directions: longitudinal, lateral and vertical.
• Filter longitudinal and lateral accelerations with EN12299_WD.
• Filter vertical accelerations with EN12299_WB.
• Calculate the 95th percentile of the r.m.s. values for time intervals of 5 s with func cen_erri_595.
• Calculate Nmv according to EN 12299.
• Text file example / HTML example
PCT - Comfort on Curve Transitions:
• Measure the lateral acceleration in [m/s2] and the car-body roll rotation speed in degrees per second.
• Filter the two signals in a 1 s averaging window.
• Calculate the jerk by connecting two points in the filtered acceleration in a time interval of 1 s.
• In the phase from the beginning to the end of the transition curve, read the maximum value of the car-body roll rotation speed.
• In the phase from the beginning to the end + 1.6 s of the transition curve, read the maximum value of the lateral acceleration.
• In the phase from 1 s before the beginning to the end of the transition curve, read the maximum value of the lateral jerk.
• Calculate PCT according to EN 12299. The term in brackets is included only if > 0.
• If the measured lateral acceleration is in [m/s2], the coefficients A, B, C, D, E are as follows:
Conditions          A       B       C     D      E
In rest - standing  28.542  20.693  11.1  0.185  2.283
In rest - seated    8.9704  9.6840  5.9   0.120  1.626
In the standard EN 12299 the acceleration is expressed in percent of the gravitational acceleration, which is why the original table is as follows:
Conditions          A     B     C     D      E
In rest - standing  2.80  2.03  11.1  0.185  2.283
In rest - seated    0.88  0.95  5.9   0.120  1.626
• Text file example / HTML example
PDE - Comfort on Discrete Events:
• Measure the lateral acceleration in the car-body.
• Filter the acceleration with filter EN12299_PDE.
• Calculate sliding average-, max- and min-value in a 2 s interval.
• Calculate the sliding peak value by subtracting the sliding min-value from the sliding max-value.
• Calculate PDE according to EN 12299.
• If the measured lateral acceleration is in [m/s2], the coefficients a, b, c are as follows:
Conditions          a       b       c
In rest - standing  16.616  27.013  37.0
In rest - seated    8.4608  13.048  21.7
In the standard EN 12299 the acceleration is expressed in percent of the gravitational acceleration, which is why the original table is as follows:
Conditions          a     b     c
In rest - standing  1.63  2.65  37.0
In rest - seated    0.83  1.28  21.7
• Text file example / HTML example
Proposed scale in comfort units for Nmv, Nva, Nvd is as follows:
N < 1        Very comfortable
1 < N < 2    Comfortable
2 < N < 4    Medium
4 < N < 5    Uncomfortable
5 < N        Very uncomfortable
#### Calculation of motion sickness value according to ISO 2631/3-1985 and ISO 2631-1:1997.
An evaluation has been carried out in ISO 2631/3 of people's tendency towards motion sickness. It is stated in the norms that there are many factors, in addition to motion, that cause motion sickness among passengers.
It has been difficult to formulate a good criterion for other factors than vertical motion; therefore only vertical motion is covered in the norm. The acceleration has been measured at a point as far from the center of the car-body as possible, in order to acquire the greatest possible vertical motion. In the formulation of the norm, only one frequency at a time has been studied in order to determine the requirement. In case of an excitation comprising several frequencies, a broad-band analysis must be made where the entire frequency register is computed, and a total RMS-value is calculated according to Method 1). The calculation of motion sickness according to ISO 2631/3-1985 is carried out by first filtering the acceleration variable, and then calculating the RMS value of the variable. The RMS value equals the motion sickness value. Illustration of the vertical motion sickness filter ISOV_2631_3: the transfer function has the value "1" in the interval 0.1-0.315 (Hz) and falls off from 0.315 (Hz) towards 0.63 (Hz). The filter ISOV_2631_3 has no physical background. No frequencies over 0.63 Hz are taken into consideration, nor will any frequencies under 0.1 Hz be taken into consideration. According to ISO 2631/3-1985, it has been difficult to prove that frequencies outside 0.1-0.63 Hz give rise to motion sickness. In the ISO-analysis, the third octave band is from 0.1 to 0.63 Hz in filter ISOV_2631_3. However, as every octave has a third octave width, this means that the frequency range will be extended from 0.089 to 0.708 Hz. In order to be able to calculate the risk of motion sickness according to ISO 2631/3, an example is given: Create_scalar new Xeval_start= 120 Create_scalar new Xeval_stop = 2120 # Transt car_1b1.az fname= car_1b1.az.ISO_MS85 type= ISOV_2631_3 Transt car_1.m.az fname= car_1.m.az.ISO_MS85 type= ISOV_2631_3 Transt car_1b2.az fname= car_1b2.az.ISO_MS85 type= ISOV_2631_3 Stat car_1b1.az.ISO_MS85 tname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1.m.az.ISO_MS85 tname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1b2.az.ISO_MS85 tname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop In order to be able to calculate the risk of motion sickness according to ISO 2631-1:1997, an example is given: Create_scalar new Xeval_start= 120 Create_scalar new Xeval_stop = 2120 # Transt car_1b1.az fname= car_1b1.az.ISO_MS97 type= ISO2631_97WF Transt car_1.m.az fname= car_1.m.az.ISO_MS97 type= ISO2631_97WF Transt car_1b2.az fname= car_1b2.az.ISO_MS97 type= ISO2631_97WF Stat car_1b1.az.ISO_MS97 tname= lsb_11.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1.m.az.ISO_MS97 tname= lsc_1.pn tstart= Xeval_start tstop= Xeval_stop Stat car_1b2.az.ISO_MS97 tname= lsb_12.pn tstart= Xeval_start tstop= Xeval_stop Output from the statistical calculation will be printed on the file $ident.resu. Example: STATISTICS for car_1b1.az. car_1.m.az. car_1b2.az.
----------           ISO_MS85       ISO_MS85       ISO_MS85
common exponent:     *E -3          *E -3          *E -3
MAX VALUE             39.316         37.327         44.755
MIN VALUE            -36.759        -34.646        -44.080
RMS VALUE             16.246         15.682         18.972   <- motion sickness value
RMQ VALUE             20.660         19.920         24.187
AVERAGE VALUE           .241           .230           .209
MEDIAN                  .119           .370           .975
STANDARD DEVIATION    16.245         15.680         18.971
XVAR WHEN MAX        924.000        649.333        646.667
XVAR WHEN MIN       1204.000       1476.000       1474.222

If only a part of the variable is used in the evaluation of the comfort factor, the selection shall be made in the STAT command and not in the TRANST command, as the TRANST command fills out the result with zeros in order to make Fname the same length as the input vector Iname.

#### Calculation of vertical motion sickness according to ISO 8041

ISO 8041 "Human response to vibration - Measuring instrumentation" shows how a physical filter with poles and zeros can be designed which is adapted to ISO 2631/3. The filter includes two fundamental parts, a band pass filter and a frequency weighting filter.

The band pass filter is designed as follows:

Hb(s) = s^2 · omega2^2 / [ (s^2 + s·omega1/Q1 + omega1^2) · (s^2 + s·omega2/Q1 + omega2^2) ]

where
omega1 = 0.49909    [rad/s]
omega2 = 4.99090    [rad/s]
Q1     = 0.7071067  [1]

The frequency weighting filter is designed as follows:

Hms(s) = (1 + 0.105·s) / ( (0.472·s)^2 + 0.581·s + 1 )

In order to calculate the motion sickness factor according to ISO 8041, an example is given:

Create_scalar new Xeval_start= 120
Create_scalar new Xeval_stop = 2120
#
Transt car_1b1.az  fname= car_1b1.az.ISO_MS41  type= ISO8041V_MS
Transt car_1.m.az  fname= car_1.m.az.ISO_MS41  type= ISO8041V_MS
Transt car_1b2.az  fname= car_1b2.az.ISO_MS41  type= ISO8041V_MS

Stat car_1b1.az.ISO_MS41  tname= lsb_11.pn  tstart= Xeval_start  tstop= Xeval_stop
Stat car_1.m.az.ISO_MS41  tname= lsc_1.pn   tstart= Xeval_start  tstop= Xeval_stop
Stat car_1b2.az.ISO_MS41  tname= lsb_12.pn  tstart= Xeval_start  tstop= Xeval_stop

The output from the statistical calculation will be printed in the file $ident.resu. Example:

STATISTICS for       car_1b1.az.    car_1.m.az.    car_1b2.az.
----------           ISO_MS41       ISO_MS41       ISO_MS41
common exponent:     *E -3          *E -3          *E -3
MAX VALUE             41.254         37.554         50.971
MIN VALUE            -42.739        -34.946        -52.018
RMS VALUE             15.417         14.681         18.363   <- motion sickness value
RMQ VALUE             19.927         18.962         24.291
AVERAGE VALUE           .037           .028           .051
MEDIAN                  .781           .876          1.591
STANDARD DEVIATION    15.417         14.681         18.363
XVAR WHEN MAX        687.556        686.667        685.778
XVAR WHEN MIN        299.111        298.222        299.111

If only a part of the variable is used in the evaluation of the comfort factor, the selection shall be made in the STAT command and not in the TRANST command, as the TRANST command fills out the result with zeros in order to make Fname the same length as the input vector Iname.

#### Calculation of the motion sickness factor according to BS 6841:1987.

BS 6841:1987 is very similar to ISO 2631/3. The main difference is that a physical filter has been adapted to the ideal curve in ISO 2631/3. The filter in BS 6841:1987 comprises a band pass filter in series with a filter which is called the frequency weighting filter.
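Both the ISO 8041 weighting defined above and the BS 6841 weighting described next have the same structure: a band pass filter in series with a frequency weighting filter, followed by an RMS evaluation. As an illustration only, here is a minimal Python/scipy sketch of the ISO 8041 cascade built from the transfer functions given above. The function name, the sample rate and the synthetic test signal are assumptions made for this example; in the program itself the corresponding weighting is applied with Transt type= ISO8041V_MS and the RMS is taken with Stat, as shown above.

import numpy as np
from scipy import signal

# Band pass part Hb(s) of the ISO 8041 motion sickness weighting (constants from the text above)
w1, w2, Q1 = 0.49909, 4.99090, 0.7071067           # [rad/s]
num_b = [w2**2, 0.0, 0.0]                           # s^2 * omega2^2
den_b = np.polymul([1.0, w1/Q1, w1**2],             # s^2 + s*omega1/Q1 + omega1^2
                   [1.0, w2/Q1, w2**2])             # s^2 + s*omega2/Q1 + omega2^2

# Frequency weighting part Hms(s) = (1 + 0.105 s) / ((0.472 s)^2 + 0.581 s + 1)
num_w = [0.105, 1.0]
den_w = [0.472**2, 0.581, 1.0]

# Cascade H(s) = Hb(s) * Hms(s)
num = np.polymul(num_b, num_w)
den = np.polymul(den_b, den_w)

def motion_sickness_rms(t, az):
    """Weight a vertical acceleration history az(t) [m/s2] and return its RMS value."""
    _, y, _ = signal.lsim((num, den), az, t)
    return float(np.sqrt(np.mean(y**2)))

# Hypothetical input: a 0.2 Hz vertical oscillation, 2000 s long, sampled at 10 Hz
t = np.arange(0.0, 2000.0, 0.1)
az = 0.3 * np.sin(2.0*np.pi*0.2*t)
print(motion_sickness_rms(t, az))

The printed RMS value plays the same role as the "RMS VALUE  <- motion sickness value" row in the statistics listing above.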
The band pass filter is designed as follows:

Hb(s) = s^2 · omega2^2 / [ (s^2 + s·omega1/Q1 + omega1^2) · (s^2 + s·omega2/Q1 + omega2^2) ]

where
Q1 = 0.71  [1]

The frequency weighting filter is designed as follows:

Hw(s) = K · H4(s) · H5(s) · H6(s)

where:

H4(s) = omega4^2 / (s^2 + s·omega4/Q2 + omega4^2)
H5(s) = (s^2 + s·omega5/Q3 + omega5^2) / omega5^2
H6(s) = omega6^2 / (s^2 + s·omega6/Q4 + omega6^2)

K  = 0.4   [1]
Q2 = 0.86  [1]
Q3 = 0.80  [1]
Q4 = 0.80  [1]

In order to calculate the motion sickness factor according to BS 6841:1987, an example is given below:

Create_scalar new Xeval_start= 120
Create_scalar new Xeval_stop = 2120
#
Transt car_1b1.az  fname= car_1b1.az.BS_WF  type= BS_WF
Transt car_1.m.az  fname= car_1.m.az.BS_WF  type= BS_WF
Transt car_1b2.az  fname= car_1b2.az.BS_WF  type= BS_WF

Stat car_1b1.az.BS_WF  tname= lsb_11.pn  tstart= Xeval_start  tstop= Xeval_stop
Stat car_1.m.az.BS_WF  tname= lsc_1.pn   tstart= Xeval_start  tstop= Xeval_stop
Stat car_1b2.az.BS_WF  tname= lsb_12.pn  tstart= Xeval_start  tstop= Xeval_stop

The output from the statistical calculation will be printed in the file $ident.resu. Example:

STATISTICS for       car_1b1.az.BS  car_1.m.az.BS  car_1b2.az.BS
----------           _WF            _WF            _WF
common exponent:     *E -3          *E -3          *E -3
MAX VALUE             20.087         18.801         24.729
MIN VALUE            -24.954        -22.399        -29.222
RMS VALUE              8.797          8.366          9.772   <- motion sickness value
RMQ VALUE             11.289         10.788         12.780
AVERAGE VALUE          -.013          -.008          -.024
MEDIAN                  .003           .066           .502
STANDARD DEVIATION     8.797          8.366          9.772
XVAR WHEN MAX        696.444        696.444        694.667
XVAR WHEN MIN        306.222        308.889        306.222

If only a part of the variable is used in the evaluation of the comfort factor, the selection shall be made in the STAT command and not in the TRANST command, as the TRANST command fills out the result with zeros in order to make Fname the same length as the input vector Iname.

### 5.3) Designing higher order filters

The command FILT only provides first and second order low and high pass filters. If a filter of higher order is required, it can be created by making several filterings in series.

#### 5.3.1) Butterworth filter

The Butterworth filter is a type of filter where the poles are placed equidistantly on a half circle in the complex plane. The half circle only exists for negative sigma values. In a Butterworth filter all sub-filters shall have the same cut-off frequency; only the relative damping zeta differs, according to the table below. In general, the k-th second order section of an n-th order Butterworth low pass filter has the relative damping zeta = sin((2k-1)·π/(2n)), and odd orders additionally contain one first order section.

Filter's
order   Filter   Filter type   Relative damping   Approximate relative damping
2       1        lpass2_0      sin (π/4)          0.7071
3       1        lpass1_0
        2        lpass2_0      sin (π/6)          0.5000
4       1        lpass2_0      sin (π/8)          0.3827
        2        lpass2_0      sin (3*π/8)        0.9239
5       1        lpass1_0
        2        lpass2_0      sin (π/10)         0.3090
        3        lpass2_0      sin (3*π/10)       0.8090
6       1        lpass2_0      sin (π/12)         0.2588
        2        lpass2_0      sin (3*π/12)       0.7071
        3        lpass2_0      sin (5*π/12)       0.9659
7       1        lpass1_0
        2        lpass2_0      sin (π/14)         0.2225
        3        lpass2_0      sin (3*π/14)       0.6235
        4        lpass2_0      sin (5*π/14)       0.9010
8       1        lpass2_0      sin (π/16)         0.1951
        2        lpass2_0      sin (3*π/16)       0.5556
        3        lpass2_0      sin (5*π/16)       0.8315
        4        lpass2_0      sin (7*π/16)       0.9808
9       1        lpass1_0
        2        lpass2_0      sin (π/18)         0.1736
        3        lpass2_0      sin (3*π/18)       0.5000
        4        lpass2_0      sin (5*π/18)       0.7660
        5        lpass2_0      sin (7*π/18)       0.9397
10      1        lpass2_0      sin (π/20)         0.1564
        2        lpass2_0      sin (3*π/20)       0.4540
        3        lpass2_0      sin (5*π/20)       0.7071
        4        lpass2_0      sin (7*π/20)       0.8910
        5        lpass2_0      sin (9*π/20)       0.9877

## Input data commands which control plotting

When plotting, program MPLOT reads input data at different levels. The levels are:

MAIN       The highest level in program MPLOT.
           Input data reading in MPLOT starts at this level.
PAGE       The level that starts the definition of a new page.
DIAGRAM    The level that starts the definition of a new diagram in the page.
CURVE, POINT, POINT_LINE, POINT3D
           The lowest level consists of these four items. This level controls the
           definition of a new curve or point in the diagram.

Example:

Func, Stat,    <--- In the beginning of the file, input data are
Filt, ...etc.       read at level MAIN
Page 1         <--- Command to move input data reading to level PAGE
Diagram 11     <--- Command to move input data reading to level DIAGRAM
Curve          <--- Command to move input data reading to the lowest level
                    Here you can define all data that is only valid for this
                    curve or point: yvar, xvar, line_type, linetext, ...etc.
                    An EndCurve command is not needed; the curve or point
                    definition is finished by writing a new CURVE-, POINT-,
                    POINT_LINE-, POINT3D- or EndDiagram-command.
EndDiagram     <--- Command to leave level DIAGRAM
EndPage        <--- Command to leave level PAGE and draw the page
               <--- Back to level MAIN again

When a new curve or point is to be created, input data given at the lowest level have the highest priority, and input data given at level MAIN have the lowest priority. Input data commands are not case sensitive, but variable names are. Input data that are valid at the lowest level are also valid at the levels above (DIAGRAM, PAGE and MAIN). The input data descriptions therefore start at the lowest level.

### Input data at the lowest input data level

The lowest input data level consists of the following items:

CURVE       Defines a curve to be plotted in the actual diagram; the name of the curve is defined in command YVAR.
POINT       Defines a symbol to be plotted in the actual diagram; the position of the symbol is defined in commands YVAR and XVAR.
POINT_LINE  Defines a curve connecting the symbols defined by command POINT.
POINT3D     Defines a three-dimensional symbol to be plotted in the actual diagram; the position of the symbol is defined in commands ZVAR, YVAR and XVAR.

#### Input data valid at level CURVE

The CURVE command initializes the definition of a curve. The Y-variable of the curve is defined in command YVAR. The default X-variable of YVAR can be redefined by command XVAR. The default ident can be redefined by command VAR_ID. Command CURVE creates the text "Line", written to the left of the Y-axis. The text can be suppressed with command WRITE_ID.

LINE_TYPE    = Sets the type of line to be used.
LINETEXT     = Explanatory text for the actual LINE_TYPE.
ROT_YTEXT    = Rotation of the text to the left of the Y-axis.
VAR_ID       = Sets the ident from which the variable shall be read.
WRITE_YNAME  = Controls the printing of YNAME.
WRITE_YTEXT  = Controls the printing of all text to the left of the Y-axis.
XVAR         = Selects the variable for the X-axis.
YNAME        = Sets the variable-name to be plotted at the Y-axis.
YVAR         = Selects the variable for the Y-axis.
YVAR_EXPL    = Explanation of YVAR.

#### Input data valid at level POINT

The POINT command indicates plotting of a point. The Y-value of the point is defined in YVAR or YVALUE, the X-value of the point in XVAR or XVALUE. Command POINT creates the text "Point", written to the left of the Y-axis. The text can be suppressed with command WRITE_ID.

DOT_TYPE     = Sets the type of symbol to be used.
ROT_YTEXT    = Rotation of the text to the left of the Y-axis.
VAR_ID       = Sets the ident from which the variables shall be read.
WRITE_YNAME  = Controls the printing of YNAME.
WRITE_YTEXT  = Controls the printing of all text to the left of the Y-axis.
XVALUE       = Manual input of the X-value of the point.
XVAR         = Selects the scalar to be used as X-value of the point.
YNAME        = Sets the variable-name to be plotted at the Y-axis.
YVALUE       = Manual input of the Y-value of the point.
YVAR         = Selects the scalar to be used as Y-value of the point.
YVAR_EXPL    = Explanation of YVAR.

#### Input data valid at level POINT_LINE

The POINT_LINE command initializes the definition of a line which connects symbols earlier defined in command POINT. Command POINT_LINE creates the text "Pline", written to the left of the Y-axis. The text can be suppressed with command WRITE_ID.

LINE_TYPE    = Sets the type of line to be used.
LINETEXT     = Explanatory text for the actual LINE_TYPE.
WRITE_YNAME  = Controls the printing of YNAME.
YNAME        = Sets the variable-name to be plotted at the Y-axis.
YVAR_EXPL    = Explanation of YVAR.

#### Input data valid at level POINT3D

The POINT3D command indicates plotting of a three-dimensional point. The X-, Y- and Z-values of the point are defined in commands XVAR, YVAR and ZVAR respectively. The X-, Y- and Z-values of the point can also be given manually in commands XVALUE, YVALUE and ZVALUE respectively. Command POINT3D creates the text "Point", written to the left of the Y-axis. The text can be suppressed with command WRITE_ID.

DOT_TYPE      = Sets the type of symbol to be used.
DRAW_ISOLINES = Drawing of contour lines.
ROT_YTEXT     = Rotation of the text to the left of the Y-axis.
VAR_ID        = Sets the ident from which the variables shall be read.
WRITE_YNAME   = Controls the printing of YNAME.
XVALUE        = Manual input of the X-value of the point.
XVAR          = Selects the scalar to be used as X-value of the point.
YNAME         = Sets the variable-name to be plotted at the Y-axis.
YVALUE        = Manual input of the Y-value of the point.
YVAR          = Selects the scalar to be used as Y-value of the point.
YVAR_EXPL     = Explanation of YVAR.
ZVALUE        = Manual input of the Z-value of the point.
ZVAR          = Selects the variable for the Z-axis.

### Input data valid at level DIAGRAM

The DIAGRAM command initializes the definition of a local diagram. Under level DIAGRAM all commands described under the lowest level are available, plus the following commands:

CALC_DATE       = Date of calculation.
DIAG_HEIGHT     = Defines the height of the diagram.
DIAG_WIDTH      = Defines the width of the diagram.
HEAD            = Sets a diagram header line.
LIMIT_LINE      = Defines limit lines.
LIMIT_LINE_EXPL = Explanatory text for limit lines.
LXCM            = Length of the X-axis.
LYCM            = Length of the Y-axis.
ROT_YTEXT       = Rotation of the text to the left of the Y-axis.
WRITE_CALC_DATE = Controls the printing of CALC_DATE.
WRITE_HEAD      = Controls the printing of header lines.
WRITE_XNAME     = Controls the printing of XNAME.
X_LEFT          = The value of the X-axis at the left end.
X_MID           = The value of the X-axis at the midpoint.
X_RIGHT         = The value of the X-axis at the right end.
XAX_YVAL        = Controls the location of the X-axis in the diagram.
XAXIS           = Controls the plotting of the X-axis.
XCM/DEC         = Logarithmic scaling factor in the X-direction.
XGRID1          = Number of cm to the first grid line.
XGRIDINT        = Number of cm between every vertical line in the grid.
XINT/CM         = Scaling factor in the X-direction.
XNAME           = Sets the name to be plotted at the X-axis.
XVAR            = Variable in the X-direction.
XVAR_EXPL       = Explanatory text for XVAR.
Y_BOT           = The value of the Y-axis at the bottom end.
Y_MID           = The value of the Y-axis at the midpoint.
Y_TOP           = The value of the Y-axis at the top end.
YAXIS           = Controls the plotting of the Y-axis.
YAX_XVAL        = Controls the location of the Y-axis in the diagram.
YCM/DEC         = Logarithmic scaling factor in the Y-direction.
YGRID1          = Number of cm to the first grid line.
YGRIDINT        = Number of cm between every horizontal line in the grid.
YINT/CM         = Scaling factor in the Y-direction.

Commands to leave level DIAGRAM:

CURVE       = Initializes the definition of a curve.
POINT       = Initializes the definition of a point.
POINT_LINE  = Initializes the definition of a line defined by points.
POINT3D     = Initializes the definition of a 3-dimensional point.
ENDDIAGRAM  = Leaves level DIAGRAM and goes up to level PAGE.

### Input data valid at level PAGE

The PAGE command initializes the definition of a new page. Under level PAGE all commands described under level DIAGRAM are available, plus the following commands:

FRAME            = Controls the drawing of the frame around the diagrams.
NCOLS            = Number of columns of local diagrams.
NROWS            = Number of rows of local diagrams.
OVERWRITE        = States the area over which the curves may be plotted.
PAGE_HEAD        = Text header lines, written above the diagrams.
WRITE_MPLOT_DATE = Controls the printing of MPLOT_DATE.
WRITE_MPLOT_ID   = Controls the printing of MPLOT_ID.
WRITE_PAGE_HEAD  = Controls the printing of PAGE_HEAD.
WRITE_PAGE_NO    = Controls the printing of the page number.
XAXIS_ALONG      = Controls the orientation of the X-axis.

Commands to leave level PAGE:

DIAGRAM  = Directive to initialize a diagram.
ENDPAGE  = Sends the current page to output and goes up to level MAIN.

### Input data valid at level MAIN

The ENDPAGE command finalizes the current page definition, and input data reading continues at level MAIN. Under level MAIN all commands described under level PAGE are available, plus the following commands:

ILASER       = Indicator for writing the graphs to a printer.
INKFIL       = Controls the writing of a debug logging file.
ISCREN       = Indicator for plotting on the screen.
MPLOT_DATE   = Current date, shown in the upper right-hand corner of the title.
MPLOT_ID     = Ident for this plotting activity.
PAPER_FORMAT = Defines the size of the paper to be used.
POSTFI       = Graphic output data file.
VAR_DIR      = Directory where the calculation results are stored.

Commands to leave level MAIN:

PAGE = Initializes the definition of a new page.

### All plotting commands in alphabetical order

CALC_DATE       = Date of calculation.
CURVE           = Initializes the definition of a curve.
DIAG_HEIGHT     = Height of the diagram.
DIAG_WIDTH      = Width of the diagram.
DIAGRAM         = Initializes the definition of a diagram.
DOT_TYPE        = Sets the type of symbol to be used.
DRAW_ISOLINES   = Drawing of contour lines.
ENDDIAGRAM      = Leaves level DIAGRAM and goes up to level PAGE.
ENDPAGE         = Sends the current page to output and goes up to level MAIN.
EXEC            = Terminates the ongoing command.
EXIT            = Exits MPLOT.
FRAME           = Controls the drawing of the frame around the diagrams.
HEAD            = Sets a diagram header line.
ILASER          = Controls the writing of a postscript file.
INKFIL          = Controls the writing of a debug logging file.
ISCREN          = Indicator for plotting on the screen.
LIMIT_LINE      = Defines limit lines.
LIMIT_LINE_EXPL = Explanatory text for limit lines.
LINE_TYPE       = Sets the type of line to be used.
LINETEXT        = Explanatory text for the actual LINE_TYPE.
LOC_CALC_DATE   = Location of CALC_DATE.
LOC_DIAG_LL      = Location of the lower left-hand corner of the DIAGRAM.
LOC_FRAME_LL     = Location of the lower left-hand corner of the drawing area.
LOC_HEAD         = Sets the location of the diagram HEAD-line.
LOC_LIMIT_EXPL   = Location of LIMIT_LINE_EXPL.
LOC_LINETEXT     = Location of LINETEXT.
LOC_MPLOT_DATE   = Location of MPLOT_DATE.
LOC_MPLOT_ID     = Location of MPLOT_ID.
LOC_PAGE_HEAD    = Location of PAGE_HEAD.
LOC_PAGE_NO      = Location of the page number.
LOC_VAR_ID       = Location of VAR_ID.
LOC_XNAME        = Location of XNAME.
LOC_XVAR_EXPL    = Location of XVAR_EXPL.
LOC_YNAME        = Location of YNAME.
LOC_YVAR_EXPL    = Location of YVAR_EXPL.
LXCM             = Length of the X-axis.
LYCM             = Length of the Y-axis.
MPLOT_DATE       = Current date, shown in the upper right-hand corner of the title.
MPLOT_ID         = Ident text for this plotting activity.
NCOLS            = Number of columns of local diagrams.
NOTE             = Commentary line, the rest of the line is ignored.
NROWS            = Number of rows of local diagrams.
OVERWRITE        = States the area over which the curves may be plotted.
PAGE             = Directive for initiating the page definition.
PAGE_HEAD        = Text header lines, written above the diagrams.
PAPER_FORMAT     = Defines the size of the paper to be used.
POINT            = Initializes the definition of a point.
POINT_LINE       = Initializes the definition of a line defined by points.
POINT3D          = Initializes the definition of a 3-dimensional point.
POSTFI           = Graphic output data file.
QUIT             = Exits MPLOT.
ROT_YTEXT        = Rotation of the text to the left of the Y-axis.
SIZE_CALC_DATE   = Character size of CALC_DATE.
SIZE_HEAD        = Character size of HEAD-lines.
SIZE_LIMIT_EXPL  = Size of LIMIT_LINE_EXPL.
SIZE_LINETEXT    = Size of LINETEXT.
SIZE_MPLOT_DATE  = Size of MPLOT_DATE.
SIZE_MPLOT_ID    = Size of MPLOT_ID.
SIZE_PAGE_HEAD   = Size of PAGE_HEAD.
SIZE_PAGE_NO     = Size of the page number.
SIZE_XNAME       = Size of XNAME.
SIZE_YNAME       = Size of YNAME.
STOP             = Exits MPLOT.
VAR_DIR          = Directory where the calculation results are stored.
VAR_ID           = Sets the ident from which the variables shall be read.
WRITE_CALC_DATE  = Controls the printing of CALC_DATE.
WRITE_HEAD       = Controls the printing of HEAD.
WRITE_ID         = Controls the printing of the line identification.
WRITE_MPLOT_DATE = Controls the printing of MPLOT_DATE.
WRITE_MPLOT_ID   = Controls the printing of MPLOT_ID.
WRITE_PAGE_HEAD  = Controls the printing of PAGE_HEAD.
WRITE_PAGE_NO    = Controls the printing of the page number.
WRITE_VAR_ID     = Controls the printing of VAR_ID.
WRITE_XNAME      = Controls the printing of XNAME.
WRITE_YNAME      = Controls the printing of YNAME.
WRITE_YTEXT      = Controls the printing of all text to the left of the Y-axis.
X_LEFT           = The value of the X-axis at the left end.
X_MID            = The value of the X-axis at the midpoint.
X_RIGHT          = The value of the X-axis at the right end.
XAX_YVAL         = Location of the X-axis in the diagram.
XAXIS            = Controls the plotting of the X-axis.
XAXIS_ALONG      = Controls the orientation of the X-axis.
XCM/DEC          = Logarithmic scaling factor in the X-direction.
XGRID1           = Number of cm to the first grid line.
XGRIDINT         = Number of cm between every vertical line in the grid.
XINT/CM          = Scaling factor in the X-direction.
XNAME            = Name of the X-axis.
XVALUE           = Manual input of the X-value of the point.
XVAR             = Variable in the X-direction.
XVAR_EXPL        = Explanatory text for XVAR.
Y_BOT            = The value of the Y-axis at the bottom end.
Y_MID            = The value of the Y-axis at the midpoint.
Y_TOP            = The value of the Y-axis at the top end.
YAX_XVAL         = Controls the location of the Y-axis in the diagram.
YAXIS            = Controls the plotting of the Y-axis.
YCM/DEC          = Logarithmic scaling factor in the Y-direction.
YGRID1           = Number of cm to the first grid line.
YGRIDINT         = Number of cm between every horizontal line in the grid.
YINT/CM          = Scaling factor in the Y-direction.
YNAME            = Sets the variable-name to be plotted at the Y-axis.
YVALUE           = Manual input of the Y-value of the point.
YVAR             = Variable in the Y-direction.
YVAR_EXPL        = Explanation of YVAR.
ZVALUE           = Manual input of the Z-value of the point.
ZVAR             = Variable in the Z-direction.

CALC_DATE
Date of calculation, shown under each diagram.
See also: LOC_CALC_DATE, SIZE_CALC_DATE, WRITE_CALC_DATE
Declared= Character*80    Default= The date read from the MPdat-file

CURVE
The CURVE command initializes the definition of a new curve and moves the input data reading to level CURVE.

DIAG_HEIGHT
Defines the height of the diagram.
Declared= Real*4    Default= The height of PAPER_FORMAT minus margins, divided by NROWS.

DIAG_WIDTH
Defines the width of the diagram.
Declared= Real*4    Default= The width of PAPER_FORMAT minus margins, divided by NCOLS.

DIAGRAM   diag_no
Reads the diagram number and moves the input data reading to level DIAGRAM.
diag_no = Diagram number. The page is divided into a number of smaller drawing areas, defined by diag_no. The small drawing areas are numbered in rows and columns similar to the components in a matrix, i.e. the first row is numbered 11, 12, 13, ...etc. and the next row is numbered 21, 22, 23, ...etc. The maximum number of columns is limited to 9. The number of rows can be bigger than 10, but the total number of drawing areas must be less than 160.
Declared= Integer*4

DOT_TYPE
Defines the shape of the dots used when plotting scalars.
Declared= Integer*4    Default= 'AUTO', i.e. the first point is plotted with DOT_TYPE 2, point 2 with DOT_TYPE 3, etc.

DRAW_ISOLINES = DIAG_NO, METHOD, LEVELS
Drawing of isolines, for points which are defined with the command DIAGPT3D. The command demands that the points lie in a grid and are numbered with point number 1 at the top left-hand corner, point 2 to the right of point 1, etc., downwards in rows until the last point is located at the bottom right. This also implies that every line is equally long and every column equally high. Long and high refer to the number of points in the lines and columns; however, it is not required that the points are located at an equidistant spacing from each other.
DIAG_NO = Defines the diagram number, i.e. row and column number.
          Declared= Integer*4    Default= 11
METHOD  = Sets the method of drawing the isolines. The following values are valid for METHOD:
          'NO'  = cancels the drawing of isolines.
          'LIN' = linear interpolation between the points.
          Declared= Character*20    Default= 'NO'
LEVELS  = Numerical values of the height of the isolines to be drawn. If LEVELS is set to a negative value, LEVELS will control the number of lines to be drawn in the diagram.
          Declared= Real*4    Default= 0

ENDDIAGRAM
The ENDDIAGRAM command ends the current diagram definition and returns the input data reading to level PAGE.

ENDPAGE
The ENDPAGE command ends the current page definition and writes the output to screen and/or file. After command ENDPAGE further input data will be read at level MAIN.

EXEC
Terminates the ongoing command. Certain commands are not executed until a new main command has been read. This is due to the fact that the amount of input data is not known from the outset. After program MPLOT has read command EXEC, further input data reading will continue at level MAIN.

FRAME
Controls the drawing of the frame around the diagrams.
The following values are valid:
GLOBAL       = draws a large frame around the entire page.
LOCAL        = draws a frame around each diagram.
GLOBAL_LOCAL = draws both global and local frames.
NO           = suppresses the drawing of frames.
Declared= Character*12    Default= GLOBAL_LOCAL

HEAD= head_no, htext
Input of text header lines, written in the diagrams.
head_no = Head line number to be read. Valid values for head_no are between 1 and 10.
          Declared= Integer*4
htext   = The text of the header line.
          Declared= Character*80(10)
In the header lines the user has the possibility to retrieve the values of scalars. In order to retrieve the value of a scalar, the user shall write the name of the scalar with a $-sign at the beginning of the name. The following example retrieves the values of the ride comfort indexes in the current ident:

head1= 'Wz.m=$car_1.m.ayWZ RMS.m=$car_1.m.ay.ERRMS'
head2= 'Wzb1=$car_1b1.ayWZ RMSb1=$car_1b1.ay.ERRMS'
head3= 'Wzb2=$car_1b2.ayWZ RMSb2=$car_1b2.ay.ERRMS'

The following example retrieves the values of the ride comfort indexes in other idents:

head1= 'Wz.m=$car_1.m.ayWZ(ident_001) RMS.m=$car_1.m.ay.ERRMS(ident_001)'
head2= 'Wz.m=$car_1.m.ayWZ(ident_002) RMS.m=$car_1.m.ay.ERRMS(ident_002)'
head3= 'Wz.m=$car_1.m.ayWZ(ident_003) RMS.m=$car_1.m.ay.ERRMS(ident_003)'

If the user wishes the heading text to be retrieved from the MPdat-file, the following sub-directives can be given in command HEAD:
IDENT = Ident string from which the headers will be retrieved.
        Declared= Character*100
HEAD  = Number of the head line which will be retrieved.
        Declared= Integer*4

ILASER
Format for graphic output. Valid values for ILASER are listed in intro_common_commands.html#jILASER.
Declared= Integer*4    Default= 6

INKFIL
Controls the writing of a debug logging file. Command INKFIL can be switched on and off several times during the plotting activity. INKFIL can be given the following values:
0 => No printing.
3 => Print of the log file to file code 03.
6 => Print of the log file to file code 06, i.e. standard output.
Declared= Integer*4    Default= 0

ISCREN
Indicator for plotting on the screen.
Declared= Integer*4    Default= 0

LIMIT_LINE = LINE_NO, +-YLIM1, +-XLIM1, +-YLIM2, +-XLIM2
Creation of limit lines in the diagram.
LINE_NO = The number of the limit line. Up to four limit lines can be defined.
          Declared= Integer*4    Default= 'NONE'
YLIM1   = The limit at the left-hand side of the diagram. If only YLIM1 is specified, the limit line is assumed to be horizontal and will extend over the entire diagram.
          Declared= Real*4    Default= 'NONE'
XLIM1   = X-coordinate of the start of the limit line.
          Declared= Real*4    Default= 'LEFT', i.e. at the beginning of the X-axis.
YLIM2   = The limit at the right-hand side of the diagram.
          Declared= Real*4    Default= 'NONE'
XLIM2   = X-coordinate of the right end of the limit line. If XLIM2 is not specified, the program will assume that the limit line extends across the entire diagram.
          Declared= Real*4    Default= 'RIGHT', i.e. at the end of the X-axis.

LIMIT_LINE_EXPL = LINE_NO, EXPL_TEXT
Explanatory text for LIMIT_LINE.
LINE_NO   = The number of the limit line to be explained.
            Declared= Integer*4    Default= 'NONE'
EXPL_TEXT = Text to be written above the diagram; it will be written if it is non-blank.
            Declared= Character*80    Default= Blank

LINE_TYPE= LINE_TYPE, THICKNESS
Command LINE_TYPE determines the type of line to be used when drawing curves. LINE_TYPE controls the color and shape of the curves.
LINE_TYPE = Defines the type of line to be drawn. The following values are valid for LINE_TYPE:
            Declared= Integer*4    Default= 'AUTO', i.e.
            the first line is plotted with line type 1, line 2 with line type 2, etc.
THICKNESS = Defines the width of the line to be drawn. The following values are valid for THICKNESS:
            1= Thinnest possible line.
            2= Slightly thicker line.
            3= Thicker line, etc.
            Declared= Integer*4    Default= 1

LINETEXT
Explanatory text for the actual LINE_TYPE. The text is written if it is non-blank.
Declared= Character*80    Default= " " (Blank)

LOC_CALC_DATE
Controls the location of CALC_DATE.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. in the lower right-hand corner.

LOC_DIAG_LL
Location of the lower left corner of the DIAGRAM.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. the diagrams are close-packed.

LOC_FRAME_LL
Location of the lower left corner of the drawing area, in relation to the lower left corner of the local DIAGRAM.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. the drawing area is placed in the middle of the DIAGRAM.

LOC_HEAD= head_no, x-value, y-value
Controls the position of the HEAD-lines.
head_no = Number of the HEAD-line. Valid values for head_no are between 1 and 10.
          Declared= Integer*4
x-value = The X-coordinate of the lower left corner.
          Declared= Real*4    Units= [cm]
y-value = The Y-coordinate of the lower left corner.
          Declared= Real*4    Units= [cm]

LOC_LIMIT_EXPL
Sets the location of LIMIT_LINE_EXPL.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. above the diagram.

LOC_LINETEXT
Controls the location of the text defined in command LINETEXT.
Declared= Real*4(2)    Units= [cm]    Default= Above the diagram, starting from the left-hand side.

LOC_MPLOT_DATE
Controls the location of MPLOT_DATE.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. at the top right-hand side of the page.

LOC_MPLOT_ID
Controls the location of MPLOT_ID.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. at the upper right-hand side of the page.

LOC_PAGE_HEAD= head_no, x-value, y-value
Controls the position of the PAGE_HEAD-lines.
head_no = Number of the PAGE_HEAD-line. Valid values for head_no are between 1 and 10.
          Declared= Integer*4
x-value = The X-coordinate of the lower left corner.
          Declared= Real*4    Units= [cm]
y-value = The Y-coordinate of the lower left corner.
          Declared= Real*4    Units= [cm]

LOC_PAGE_NO
Controls the location of the page number, in X- and Y-coordinates.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. at the upper right-hand side of the page.

LOC_VAR_ID
Sets the location of VAR_ID.
Declared= Real*4(2)    Units= [cm]    Default= To the left of YNAME.

LOC_XNAME
Sets the location of XNAME.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. under the X-axis to the right.

LOC_XVAR_EXPL
Sets the location of XVAR_EXPL.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. to the right of XNAME.

LOC_YNAME
Controls the location of YNAME.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. at the top left-hand side of the Y-axis.

LOC_YVAR_EXPL
Controls the location of YVAR_EXPL.
Declared= Real*4(2)    Units= [cm]    Default= AUTO, i.e. to the right of YVAR.

LXCM
Sets the length of the X-axis.
Declared= Real*4    Units= [cm]    Default= 'AUTO', i.e. maximum length within DIAG_WIDTH.

LYCM
Sets the length of the Y-axis.
Declared= Real*4    Units= [cm]    Default= 'AUTO', i.e. maximum length within DIAG_HEIGHT.

MPLOT_DATE
Current date, shown in the upper right-hand corner of the title.
Declared= Character*80    Default= Current date

MPLOT_ID
Ident for the current plotting activity, plotted in the upper right corner of the title. Can be set in this command or given in Command Line Options.
Declared= Character*80    Default= "mplot_id"

NCOLS
Defines the number of columns of local diagrams.
Declared= Integer*4    Default= 'AUTO', i.e. diag_no-(floor(diag_no/10))*10, where diag_no is defined in DIAGRAM.

NOTE
Commentary line; the rest of the line is ignored.
Declared= Character*80    Default= Blank

NROWS
Defines the number of rows of local diagrams.
Declared= Integer*4    Default= 'AUTO', i.e. floor(diag_no/10), where diag_no is defined in DIAGRAM.

OVERWRITE
States the area over which the curves may be plotted. The following values are valid:
GLOBAL = Allows curves to be plotted over the whole PAGE.
LOCAL  = Allows curves to be plotted over the whole local DIAGRAM.
NO     = Allows curves to be plotted only in the drawing area of the local DIAGRAM.
Declared= Character*6    Default= 'NO'

PAGE   page_no
Declared= Integer*4    Default= "Previous page number" + 1

PAGE_HEAD= head_no, htext
Text header lines, written above the diagrams.
head_no = Head line number to be read. Valid values for head_no are between 1 and 10.
          Declared= Integer*4
htext   = The text of the header line.
          Declared= Character*80(10)
If the user wishes the header line to be read from an MPdat-file, htext can be replaced by the sub-directives IDENT and HEAD.

PAPER_FORMAT
Defines the size of the paper to be used. The orientation of the paper is controlled in command XAXIS_ALONG. The following values are valid:
A0 => Selects paper A0.
A1 => Selects paper A1.
A2 => Selects paper A2.
A3 => Selects paper A3.
A4 => Selects paper A4.
Declared= Character*2    Default= 'A4'

POINT
The POINT command initializes the definition of a new symbol and moves the input data reading to level POINT.

POINT_LINE   method, p_id
The POINT_LINE command initializes the definition of a new curve and moves the input data reading to level POINT_LINE. POINT_LINE plots a curve, fitted to the points given by command POINT.
method = Controls the method by which the line will be drawn. The following values are valid:
         POLYGON         = polygon through the points.
         LEASTSQ_LIN_LIN = linear regression in a lin-lin diagram.
         LEASTSQ_LIN_LOG = linear regression in a lin-log diagram.
         LEASTSQ_LOG_LIN = linear regression in a log-lin diagram.
         LEASTSQ_LOG_LOG = linear regression in a log-log diagram.
         SPLINE_LIN_LIN  = cubic splines in a lin-lin diagram.
         SPLINE_LIN_LOG  = cubic splines in a lin-log diagram.
         SPLINE_LOG_LIN  = cubic splines in a log-lin diagram.
         SPLINE_LOG_LOG  = cubic splines in a log-log diagram.
         Declared= Character*20    Default= 'LEASTSQ_LIN_LIN'
p_id   = List of the numbers of the points which shall be taken into consideration when drawing the curve. If p_id = 'ALL', all points will be used when drawing the curve.
         Declared= Integer*4(1000)    Default= 'ALL'

POINT3D
The POINT3D command initializes the definition of a three-dimensional point and moves the input data reading to level POINT3D.

POSTFI
Sets the name of the graphic output data file.
Declared= Character*80    Default= "$ident.ps"

QUIT, STOP or EXIT
Ends the execution of program MPLOT.

ROT_YTEXT
Controls the rotation in degrees of the text to the left of the Y-axis. The scale to the left of the Y-axis can be rotated to an arbitrary angle. The text strings WRITE_ID, VAR_ID, YNAME and YVAR_EXPL are rotated 0. or 180. degrees. The rotation is a right-handed rotation, which means that setting ROT_YTEXT= -90 will give a scale with upright numbers. If ROT_YTEXT is set equal to -90 you may also want to make more space to the left of the Y-axis. This can be done with the LOC_FRAME_LL command.
Declared= Real*4    Default= 180. [deg]
SIZE_CALC_DATE
Controls the character size of CALC_DATE.
Declared= Real*4    Default= .175 [cm]

SIZE_HEAD
Controls the character size of HEAD.
Declared= Real*4    Default= .175 [cm]

SIZE_LIMIT_EXPL
Controls the character size of LIMIT_LINE_EXPL.
Declared= Real*4    Default= .175 [cm]

SIZE_LINETEXT
Controls the character size of LINETEXT.
Declared= Real*4    Default= .175 [cm]

SIZE_MPLOT_DATE
Controls the character size of MPLOT_DATE.
Declared= Real*4    Default= .2 [cm]

SIZE_MPLOT_ID
Controls the size of MPLOT_ID.
Declared= Real*4    Default= .2 [cm]

SIZE_PAGE_HEAD
Controls the size of PAGE_HEAD.
Declared= Real*4    Default= AUTO, max. .25 [cm]

SIZE_PAGE_NO
Controls the size of the page number.
Declared= Real*4    Default= .2 [cm]

SIZE_XNAME
Controls the size of XNAME.
Declared= Real*4    Default= .175 [cm]

SIZE_YNAME
Controls the size of YNAME.
Declared= Real*4    Default= .175 [cm]

VAR_DIR
Directory where the calculation results are stored.
Declared= Character*80    Default= ".", i.e. the current directory

VAR_ID
Sets the ident from which the variable shall be read. The following wildcards are allowed:
?    matches any single character in a filename
*    matches any string of characters (including the empty string) in a filename
[ ]  matches any single character from the set enclosed in the brackets
Declared= Character*80    Default= The ident given in command MPLOT_ID at level MAIN, or the ident given in Command Line Options.

WRITE_CALC_DATE
Controls the plotting of CALC_DATE. The following values are valid:
'Y' or 'YES' indicates that the date will be plotted.
'N' or 'NO'  indicates that the date will not be plotted.
Declared= Character*3    Default= 'N'

WRITE_HEAD= head_no, value
Controls the printing of the HEAD-lines.
head_no = Number of the HEAD-line to be printed or not. Valid values for head_no are between 1 and 10.
          Declared= Integer*4
value   = Sets whether the HEAD-line shall be printed or not. Valid values are 'Y' or 'N'.
          Declared= Character*3(10)

WRITE_ID
Gives information about which command has created the curve or point:

Level        Text written to the left of the Y-axis
CURVE        "Line"
POINT        "Point"
POINT3D      "Point"
POINT_LINE   "Pline"

'Y' or 'YES' indicates that the text will be printed.
'N' or 'NO'  indicates that the text will not be printed.
Declared= Character*3    Default= 'Y'

WRITE_MPLOT_DATE
Controls the printing of MPLOT_DATE.
'Y' or 'YES' states that the date will be printed.
'N' or 'NO'  states that the date will not be printed.
Declared= Character*3    Default= 'Y'

WRITE_MPLOT_ID
Controls the plotting of the current ident MPLOT_ID.
'Y' or 'YES' states that the plot ident will be plotted.
'N' or 'NO'  states that the plot ident will not be plotted.
Declared= Character*3    Default= 'Y'

WRITE_PAGE_HEAD= head_no, value
Controls the printing of the PAGE_HEAD-lines.
head_no = Number of the PAGE_HEAD-line to be printed or not. Valid values for head_no are between 1 and 10.
          Declared= Integer*4
value   = Sets whether the PAGE_HEAD-line shall be printed or not. Valid values are 'Y' or 'N'.
          Declared= Character*3(10)

WRITE_PAGE_NO
Controls the plotting of the page number.
'Y' or 'YES' indicates that the page number will be plotted.
'N' or 'NO'  indicates that the page number will not be plotted.
Declared= Character*3    Default= 'Y'

WRITE_VAR_ID
Controls the plotting of VAR_ID. The following values are valid:
'Y' or 'YES' indicates that the ident text will be printed.
'N' or 'NO'  indicates that the ident text will not be printed.
Declared= Character*3    Default= 'Y'

WRITE_XNAME
Controls the plotting of XNAME. The following values are valid:
'Y' or 'YES' indicates that the text will be printed.
'N' or 'NO'  indicates that the text will not be printed.
Declared= Character*3    Default= 'Y'

WRITE_YNAME
Controls the plotting of YNAME. The following values are valid:
'Y' or 'YES' indicates that the text will be printed.
'N' or 'NO'  indicates that the text will not be printed.
Declared= Character*3    Default= 'Y'

WRITE_YTEXT
Controls the plotting of all text to the left of the Y-axis, i.e. all of WRITE_ID, WRITE_VAR_ID and WRITE_YNAME. The following values are valid:
'Y' or 'YES' indicates that all Y-text will be printed.
'N' or 'NO'  indicates that no Y-text will be printed.
Declared= Character*3    Default= 'Y'

X_LEFT
Sets the left-hand value of the X-axis. This input data is equivalent to the commands X_MID and X_RIGHT; the last given input data command will apply. X_LEFT, X_MID and X_RIGHT shall all be set to 'AUTO' if automatic scaling is desired. A special case occurs if both X_LEFT and X_RIGHT are defined but not XCM/DEC. In that case XINT/CM will be set to (X_RIGHT-X_LEFT)/LXCM and the MARKINGS distance in the XAXIS command will be set to a suitable value in order to give integer numbers on the X-axis. This special case only occurs if X_LEFT and X_RIGHT are given under level DIAGRAM; please also set XGRIDINT equal to 'AUTO'.
Declared= If text: Character*4, else Real*4    Default= 'AUTO'

X_MID
Sets the value of the X-axis at the midpoint of the axis. This input data is equivalent to the commands X_LEFT and X_RIGHT; the last given input data command will apply. X_LEFT, X_MID and X_RIGHT shall all be set to 'AUTO' if automatic scaling is desired.
Declared= If text: Character*4, else Real*4    Default= 'AUTO'

X_RIGHT
Sets the right-hand value of the X-axis. This input data is equivalent to the commands X_MID and X_LEFT; the last given input data command will apply. X_LEFT, X_MID and X_RIGHT shall all be set to 'AUTO' if automatic scaling is desired. A special case occurs if both X_LEFT and X_RIGHT are defined but not XCM/DEC. In that case XINT/CM will be set to (X_RIGHT-X_LEFT)/LXCM and the MARKINGS distance in the XAXIS command will be set to a suitable value in order to give integer numbers on the X-axis. This special case only occurs if X_LEFT and X_RIGHT are given under level DIAGRAM; please also set XGRIDINT equal to 'AUTO'.
Declared= If text: Character*4, else Real*4    Default= 'AUTO'

XAX_YVAL
Controls the location of the X-axis in the diagram. XAX_YVAL refers to the value where the X-axis will meet the Y-axis. Instead of giving a Y-value, the following character strings are understood:
'BOT' = The X-axis is placed at the bottom of the Y-axis.
'MID' = The X-axis is placed at the midpoint of the Y-axis.
'TOP' = The X-axis is placed at the top of the Y-axis.
Declared= If text: Character*3, else Real*4    Units= [cm]    Default= 'BOT'

XAXIS = TYPE, MARKINGS, FIGURES
Controls the plotting of the X-axis.
TYPE     = Sets the type of scale to be used. The following values are valid:
           'LIN' = linear scale.
           'LOG' = logarithmic scale.
           'OFF' = cancels the plotting of the X-axis.
           Declared= Character*3    Default= 'LIN'
MARKINGS = Sets the length in cm between the markings on the X-axis. When the logarithmic scale is used, MARKINGS is the minimum distance between two consecutive markings. MARKINGS = 0 cancels the print.
           Declared= Real*4    Default= 1 [cm]
FIGURES  = Sets how often the axis markings will be marked with a figure. The following values are valid for FIGURES:
           1 = a figure on every axis marking.
           2 = a figure on every second axis marking.
           3 = a figure on every third axis marking, etc.
           0 = cancels the print of figures.
           Declared= Real*4    Default= 2

XAXIS_ALONG
Orientation of the X-axis. The following values are valid:
LONG_SIDE  gives a diagram in landscape format.
SHORT_SIDE gives a diagram in portrait format.
Declared= Character*12    Default= 'LONG_SIDE'

XCM/DEC
Sets the scale factor of the X-axis, if XAXIS has been set to 'LOG'. XCM/DEC defines the number of cm per decade.
Declared= If text: Character*4, else Real*4    Default= 'AUTO' (i.e. a value from the following series: 1, 2, 5, 10, 20, ...etc.)

XGRID1
Sets the number of cm from the Y-axis to the first grid line.
Declared= If text: Character*4, else Real*4    Default= 'AUTO', i.e. the same distance as to the first axis marking.

XGRIDINT
Sets the number of cm between every vertical line of the grid. The grid pattern is drawn with LINE_TYPE= 2.
Declared= If text: Character*4, else Real*4    Default= 'AUTO', i.e. the same distance as between the axis markings.

XINT/CM
Sets the scale factor of the X-axis, if XAXIS has been set to 'LIN'. XINT/CM defines the number of units per cm.
Declared= If text: Character*4, else Real*4    Default= 'AUTO' (i.e. a value from the following series: 1, 2, 5, 10, 20, ...etc.)

XNAME
Sets the variable-name to be plotted at the X-axis.
Declared= Character*80    Default= The same name as XVAR.

XVALUE
Manually sets the X-value of the point.
Declared= Real*4    Default= The value of XVAR

XVAR
Selects the variable for the X-axis. The user can draw an X-variable from another ident, by giving the name of the ident inside parentheses:
XVAR= var_name(ident)
Different curves can have different X-variables, but the text under the X-axis will be picked from XVAR(1).
Declared= Character*80(200)    Default= The default X-variable of YVAR.

XVAR_EXPL
Explanatory text for XVAR. The text is printed if it is non-blank. The text is printed with the same text size as XNAME.
Declared= Character*80    Default= " " (Blank)

Y_BOT
The value of the Y-axis at the bottom of the axis. This input data is equivalent to the commands Y_MID and Y_TOP; the last given input data command will apply. Y_BOT, Y_MID and Y_TOP shall all be set to 'AUTO' if automatic scaling is desired. A special case occurs if both Y_BOT and Y_TOP are defined but not YINT/CM. In that case YINT/CM will be set to (Y_BOT-Y_TOP)/LYCM and the MARKINGS distance in the YAXIS command will be set to a suitable value in order to give integer numbers on the Y-axis. This special case only occurs if Y_BOT and Y_TOP are given under level DIAGRAM; please also set YGRIDINT equal to 'AUTO'.
Declared= If text: Character*4, else Real*4    Default= 'AUTO'

Y_MID
The value of the Y-axis at the middle of the axis. This input data is equivalent to the commands Y_BOT and Y_TOP; the last given input data command will apply. Y_BOT, Y_MID and Y_TOP shall all be set to 'AUTO' if automatic scaling is desired.
Declared= If text: Character*4, else Real*4    Default= 'AUTO'

Y_TOP
The value of the Y-axis at the top of the axis. This input data is equivalent to the commands Y_MID and Y_BOT; the last given input data command will apply. Y_BOT, Y_MID and Y_TOP shall all be set to 'AUTO' if automatic scaling is desired. A special case occurs if both Y_BOT and Y_TOP are defined but not YINT/CM. In that case YINT/CM will be set to (Y_BOT-Y_TOP)/LYCM and the MARKINGS distance in the YAXIS command will be set to a suitable value in order to give integer numbers on the Y-axis. This special case only occurs if Y_BOT and Y_TOP are given under level DIAGRAM; please also set YGRIDINT equal to 'AUTO'.
Declared= If text: Character*4, else Real*4    Default= 'AUTO'

YAX_XVAL
Controls the location of the Y-axis in the diagram. YAX_XVAL refers to the value where the Y-axis will meet the X-axis. Instead of giving an X-value, the following character strings are understood:
'LEFT'  = The Y-axis is placed at the left end of the X-axis.
'MID'   = The Y-axis is placed at the midpoint of the X-axis.
'RIGHT' = The Y-axis is placed at the right end of the X-axis.
Declared= If text: Character*5, else Real*4    Units= [cm]    Default= 'LEFT'

YAXIS = TYPE, MARKINGS, FIGURES
Controls the plotting of the Y-axis.
TYPE     = Sets the type of scale to be used. The following values are valid:
           'LIN' = linear scale.
           'LOG' = logarithmic scale.
           'OFF' = cancels the plotting of the Y-axis.
           Declared= Character*3    Default= 'LIN'
MARKINGS = Sets the length in cm between the markings on the Y-axis. When the logarithmic scale is used, MARKINGS is the minimum distance between two consecutive markings. MARKINGS = 0 cancels the print.
           Declared= Real*4    Default= 1 [cm]
FIGURES  = Sets how often the axis markings will be marked with a figure. The following values are valid for FIGURES:
           1 = a figure on every axis marking.
           2 = a figure on every second axis marking.
           3 = a figure on every third axis marking, etc.
           0 = cancels the print of figures.
           Declared= Real*4    Default= 1

YCM/DEC
Sets the scale factor of the Y-axis, if YAXIS has been set to 'LOG'. YCM/DEC defines the number of cm per decade.
Declared= If text: Character*4, else Real*4    Default= 'AUTO' (i.e. a value from the following series: 1, 2, 5, 10, 20, ...etc.)

YGRID1
Sets the number of cm from the X-axis to the first grid line.
Declared= If text: Character*4, else Real*4    Default= 'AUTO', i.e. the same distance as to the first axis marking.

YGRIDINT
Sets the number of cm between every horizontal line in the grid. The grid pattern is drawn with LINE_TYPE= 2.
Declared= If text: Character*4, else Real*4    Default= 'AUTO', i.e. the same distance as between the axis markings.

YINT/CM
Sets the scale factor of the Y-axis, if YAXIS has been set to 'LIN'. YINT/CM defines the number of units per cm.
Declared= If text: Character*4, else Real*4    Default= 'AUTO' (i.e. a value from the following series: 1, 2, 5, 10, 20, ...etc.)

YNAME
Sets the variable-name to be plotted at the Y-axis.
Declared= Character*80    Default= The same name as YVAR.

YVALUE
Manually sets the Y-value of the point.
Declared= Real*4    Default= The value of YVAR

YVAR
Selects the variable to draw in the diagram. The user can draw a Y-variable from another ident, by giving the name of the ident inside parentheses:
YVAR= var_name(ident)
The variable for the X-axis is defined in XVAR. If XVAR is undefined, the default X-variable of YVAR will apply. In order to draw the curve upside-down, the variable can be preceded by a "-" sign. However, in this case the auto-scale will not work; the user must manually set the scale factor in command YINT/CM.
Declared= Character*80(200)    Default= 'NONE', i.e. no Y-variable.

YVAR_EXPL
Explanatory text for YVAR. The text is printed if it is non-blank. The text is printed with the same text size as YNAME.
Declared= Character*80    Default= " " (Blank)

ZVALUE
Manually sets the Z-value of the point.
Declared= Real*4    Default= The value of ZVAR

ZVAR
Selects the scalar to be used as Z-value of the point. The user can draw a Z-variable from another ident, by giving the name of the ident inside parentheses:
ZVAR= var_name(ident)
Declared= Character*80(200)    Default= n/a

## Examples

Input data for program MPLOT can be written in many ways.
Here are a number of examples:

Tsim_Simple_OneIdent.mplotf    Plotting curves from one ident.
Tsim_Simple_ManyIdent.mplotf   Plotting curves from many idents.
Tsim_Safe_OneIdent.mplotf      Postprocessing a time-domain simulation of a railway vehicle.
wear_cur_opti_scal.mplotf      Plotting scalars from many idents.
Root_Locus.mplotf              Creating a root-locus plot.
vary_speed_lambda.mplotf       Creating a three-dimensional plot.
# The missing primeval antimatter

### Sakharov's Twin Universe

The baryon asymmetry of the universe, i.e. the matter-antimatter imbalance in cosmology, is a fundamental mystery. Right after the Big Bang, matter and antimatter should have been produced in exactly the same quantities. They should then have annihilated each other, their mass being converted into photons. Therefore, no matter and no antimatter should remain in the universe and we should not even exist, let alone stars and planets. But this is not the case: we are right here, made of matter. And the primeval antimatter is gone. Why?

Scientists discovered in the 1960s that the production of matter (from the combination of primeval quarks) occurs at a slightly faster rate than the production of antimatter (from the combination of antiquarks), a phenomenon called "$$\text{CP}$$ violation". This was paradoxical, since it had previously been thought that such combination processes were symmetrical. But due to this $$\text{CP}$$-symmetry violation, more matter was synthesized in the early universe and prevailed over antimatter. While the mechanisms behind this difference in the matter-antimatter production rates are understood, the profound cause of such an asymmetry in the laws of nature remains a mystery. Physicists also discovered that more quarks than antiquarks appeared right after the Big Bang. The mystery deepened. One could even resort to the anthropic principle: that such a matter-antimatter imbalance was the mandatory condition for life to appear in the universe. Certainly…

Russian nuclear physicist Andrei Sakharov was the first, from 1967, to restore a global symmetry, by considering that the universe was made not of a single entity, but of two "brother" universes originating from the same Big Bang singularity and having two arrows of time running in opposite directions from the instant $$t = 0$$. Sakharov called the instant $$t = 0$$ the "singularity", the "hypersurface of zero extent" or, more simply, the point $$\Phi$$.

One may consider in cosmology not only later times than $$\Phi$$, but also earlier times, but then the statistical properties of the state of the Universe at the instant $$\Phi$$ are such that the entropy increases not only going forward in time from this instant, but also going backward in time:

$$dS/dt > 0 \text{,} \qquad S_{(t)} > S_{(0)} \qquad \text{for} \qquad t>0$$

$$dS/dt < 0 \text{,} \qquad S_{(t)} > S_{(0)} \qquad \text{for} \qquad t<0$$

The author has named this sort of situation the "reversal of time's arrow."

— Andrei Sakharov —

The initial singularity $$\Phi$$ thus not only reverses time ($$\text{T}$$-symmetry) but also parity ($$\text{P}$$-symmetry, or "enantiomorphy" in chemistry: the image of an object in a mirror) as well as charge conjugation ($$\text{C}$$-symmetry, which transforms a particle into its antiparticle, and vice versa): a complete $$\text{CPT}$$ symmetry. The $$\text{CP}$$-symmetry violation is opposite in that twin universe, meaning that antimatter prevailed over matter there. Sakharov sums up the main idea in a paragraph, describing the particle transformations as events are read in the positive time direction:

We can visualize that neutral spinless maximons (or photons) are produced at $$t < 0$$ from contracting matter having an excess of antiquarks, that they pass "one through the other" at the instant $$t = 0$$ when the density is infinite, and decay with an excess of quarks when $$t > 0$$, realizing total $$\text{CPT}$$ symmetry of the universe.
All the phenomena at $$t < 0$$ are assumed in this hypothesis to be $$\text{CPT}$$ reflections of the phenomena at $$t > 0$$.

— Andrei Sakharov —

Andrei Sakharov's papers in cosmology and particle physics from the 1960s and 1970s were primarily published in Russian in the USSR, and while some of them were also published in English at that time, it was not until 1982 that a book edited in English (Marcel Dekker/CRC Press) popularized his ideas in the Western world. The French version of that book was published by Anthropos editions only in 1984. Nowadays, the standard model of particle physics has embraced Sakharov's ideas about $$\text{CP}$$-symmetry violation, to the point that the established theory now refers to the fundamental principles allowing baryogenesis as "the Sakharov conditions". However, it is worth noting that mainstream cosmology still largely ignores the part related to $$\text{T}$$-symmetry, although the reversal of time's arrow has profound implications in cosmology and astrophysics.

# Two kinds of antimatter

### C-symmetry vs PT-symmetry

In the 1970s, unaware of Sakharov's prior work, physicist Jean-Pierre Petit independently developed the same idea of two universes in complete $$\text{CPT}$$ symmetry, one being populated by matter and the other by antimatter. Petit called this model the twin universe theory and initially published two papers in the proceedings of the sessions of the French Academy of Sciences in 1977. A correct translation of their titles is, respectively:

• "Enantiomorphic twin universes with opposite proper times"
• "Universes interacting with their opposite image in the mirror of time"

While Sakharov focused on the description of $$\text{CPT}$$ symmetry in the framework of particle physics (thus without involving gravity, so his twin universes never interact except at the very moment of their birth), Petit immediately considered two universes interacting through gravitation, with Newtonian dynamics used classically in astrophysics (Vlasov-Poisson equations). This was the early groundwork for the Janus cosmological model. According to this first development, the contents of the two universes would be invisible to each other (no electromagnetic interaction), but both would interact anti-gravitationally, explaining the missing cosmological antimatter and the confinement of galaxies, among many other features. Initial developments of the theory considered twin "parallel" universes, like Sakharov's, but the theory evolved into the Janus model, in which there is a single universe made of a single Riemannian manifold carrying two metrics, i.e. one 4D hypersurface with a "frontside" and a "backside" (details in the page about Negative mass & field equations).

Sakharov's model and dynamical group theory lead to a new classification of particles in the Janus model, into 4 types. Due to opposite $$\text{CP}$$ violations and opposite arrows of time, there are indeed two kinds of matter and two kinds of antimatter, according to the sector to which they belong:

• Positive mass matter: our common matter, what we are made of. It prevailed in our sector.

• Positive mass antimatter: $$\text{C}$$-symmetry with respect to our matter. We may call it Dirac's antimatter. It is the antimatter created in the lab (and, as a prediction of the Janus model for future results of the ALPHA/AEGIS/GBAR experiments at CERN: since it has positive energy and mass, that antimatter will fall down in the Earth's gravitational field).
• Negative mass matter: $$\text{CPT}$$-symmetry with respect to our matter. The "twin matter" or "matter of the negative sector".

• Negative mass antimatter: $$\text{C} \times \text{CPT} = \text{PT}$$-symmetry with respect to our matter. We may call it Feynman's antimatter.* It is the "missing" cosmological antimatter, which prevailed in the negative sector due to the opposite $$\text{CP}$$ violation there. Since it has negative energy and mass, it "falls up" in the Earth's gravitational field.

Why would the matter of the negative sector, i.e. the cosmological antimatter ($$\text{PT}$$-symmetry), be gravitationally repulsive with respect to our matter? The fundamental cause is explained below with symplectic geometry: $$\text{T}$$-symmetry goes hand in hand with energy and mass inversion.

* According to the great American physicist Richard Feynman, an antiparticle ($$\text{C}$$-symmetry) would be indistinguishable from the image in a mirror ($$\text{P}$$-symmetry) of its particle running backward in time ($$\text{T}$$-symmetry). This is related to the "$$\text{CPT}$$ theorem", which states that the $$\text{CPT}$$ image of a particle is the very same particle. But the $$\text{CPT}$$ theorem is an axiom valid in a world where only positive energy states can exist. Since $$\text{T}$$-symmetry reverses the energy of a particle, the $$\text{CPT}$$ theorem must be reconsidered.

Feynman's mirror: an antiparticle ($$\text{C}$$-symmetry) would be the same as a particle in $$\text{PT}$$-symmetry.

# Symplectic Geometry

### Geometric quantization of physics

A foreword about symplectic geometry. To keep it as simple as possible: symplectic geometry is a branch of mathematics merging differential geometry and dynamical systems theory, based on the Hamiltonian formulation of classical mechanics. It is the study of manifolds equipped with a symplectic form, which makes it possible to measure sizes. Indeed, in Riemannian geometry the metric tensor gives lengths and angles, whereas the symplectic form measures areas. The term "symplectic" comes from the Greek word συμπλεκτικός (sumplektikós), meaning "intertwining", and was coined by German mathematician Hermann Weyl in 1939 to replace the term "complex group" defining line complexes, in order to avoid confusion with the more common meaning of "complex", i.e. complex numbers.

Three mathematicians greatly developed symplectic geometry in the second half of the 20th century. Based on the orbit method of Russian mathematician Alexandre Kirillov in representation theory, American mathematician Bertram Kostant and French mathematician Jean-Marie Souriau developed the modern theory of geometric quantization, which makes fundamental quantities of physics (such as energy and momentum) emerge as purely geometrical objects. Souriau (1922–2012) left a great legacy to mathematical physics in the fields of classical, relativistic and quantum mechanics. He notably introduced the important concepts of the moment space, the moment map (the Hamiltonian action of a Lie group on a symplectic manifold, generalizing the notions of linear and angular momentum) and the coadjoint action, i.e. the action of a Lie group on the dual space of its Lie algebra. This allowed him, for instance, to give the first geometric and kinetic interpretations of spin. Below we are about to show that Souriau also gave a fundamental answer to the question: what is the real physical meaning of the reversal of time's arrow?
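For concreteness, the canonical example behind the statement that "the symplectic form measures areas" is phase space with coordinates $$(q_i, p_i)$$; the display below is a standard textbook illustration added here for reference, not a formula taken from Souriau's own text:

$$\omega \;=\; \sum_{i=1}^{n} \mathrm{d}p_i \wedge \mathrm{d}q_i \,, \qquad \dot q_i = \frac{\partial H}{\partial p_i} \,, \qquad \dot p_i = -\,\frac{\partial H}{\partial q_i}$$

The 2-form $$\omega$$ assigns an oriented area to each infinitesimal parallelogram of phase space, and the flow generated by a Hamiltonian $$H$$ preserves it; this is the setting in which the moment map and the geometric quantization mentioned above are built.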
# Gravity & T-symmetry

### How time reversal means energy and mass inversion

Lorentz transformations use space inversion and time reversal operators, which can produce $$\text{P}$$-symmetry, $$\text{T}$$-symmetry, or $$\text{PT}$$-symmetry. The Lorentz group therefore has four connected components:

• two orthochronous transformations (motions going forward in time),
• two antichronous transformations (motions going backward in time).

They will be detailed further below. In classical mechanics, the antichronous transformations, which reverse time with $$\text{T}$$-symmetry and $$\text{PT}$$-symmetry, are dismissed as being "unphysical", so only half of the Lorentz and Poincaré groups are used: the restricted Lorentz group and the restricted Poincaré group, which use only the orthochronous transformations.

But in 1970, Souriau used a non-trivial extension of the full Poincaré group to show the real physical meaning of time reversal for a particle: $$\text{T}$$-symmetry equals energy inversion. If a particle has a nonzero inertial mass, this means that if the energy of that particle is inverted, so is its mass (since $$m = E / c^2$$), that is to say:

Time reversal produces negative mass.

Souriau's 1970 book on symplectic geometry (a must in the field), which contains that important demonstration in greater detail in chapter 14, was translated into English in 1997.

# Dynamical group theory

### Lie groups and their Lie algebra

What is group theory? What are "groups"? Mathematically, we'll see that they are simply matrices acting on other matrices. But physically, what are they? What is a group?

• A group is made to transport.
• The way we transport is more important than what is being transported.

Tell me how you move, I'll tell you what you are.

— Jean-Marie Souriau —

All groups presented herein are Lie groups. They are assembled together to form other groups.

The Orthogonal group $$\text{O(3)}$$ is the fundamental group which preserves distances under transformations of a Euclidean space, managing rotations and symmetries in 3D. It contains the important subgroup $$\text{SO(3)}$$ (Special Orthogonal 3D), also called the rotation group because it manages rotations around an axis (as for $$\text{SO(2)}$$, the special orthogonal group in two dimensions, it handles rotations around a point).

The Euclidean group $$\text{E(3)}$$ is the isometry group of 3D Euclidean space, where it manages rotations, symmetries and translations. It is built on top of the group $$\text{O(3)}$$. To put it very simply, it is a static group, a group of timeless space. Physically, it can be decomposed into a force and a couple applied to the object of study in rigid body mechanics. The actions of such groups on objects in space conserve lengths (a small numerical check of this is given at the end of this section). These groups and their subgroups can position an object in space, rotate it around a point or a line, but also turn it into its image in a mirror ($$\text{P}$$-symmetry).

According to Einstein's special relativity, instead of living in a 3D Euclidean space $${[x,y,z]}$$ of signature $$(+++)$$ where time is a separate entity, we are actually living in a 4D hypersurface where the three dimensions of space are perpendicular to a dimension of time, in a single spacetime $${[t,x,y,z]}$$ called a Minkowski space, of signature $$(-+++)$$. Similarly, the Euclidean group (where time is absent) generates static objects which populate their associated space.
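Since the claim that these group actions conserve lengths is only stated, here is a minimal numerical check. This sketch is our own illustration, not part of the original text: the points, the rotation angle, and the helper functions `Apply` and `Distance` are arbitrary choices, written in C# to match the code samples appearing later in this document.

```csharp
// Minimal sketch: an SO(3) rotation and a P-type mirror reflection, both elements of O(3),
// leave the distance between two points unchanged.
double angle = 0.7;                       // arbitrary rotation angle (radians)
double[,] rotZ =                          // rotation around the z axis (element of SO(3))
{
    { Math.Cos(angle), -Math.Sin(angle), 0 },
    { Math.Sin(angle),  Math.Cos(angle), 0 },
    { 0,                0,               1 }
};
double[,] mirrorX = { { -1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } }; // mirror reflection (P-symmetry)

double[] a = { 1.0, 2.0, 3.0 };
double[] b = { -2.0, 0.5, 4.0 };

Console.WriteLine($"Original distance      : {Distance(a, b):F6}");
Console.WriteLine($"After rotation         : {Distance(Apply(rotZ, a), Apply(rotZ, b)):F6}");
Console.WriteLine($"After mirror reflection: {Distance(Apply(mirrorX, a), Apply(mirrorX, b)):F6}");
// All three lines print the same value: lengths are conserved.

static double[] Apply(double[,] m, double[] v)
{
    var r = new double[3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r[i] += m[i, j] * v[j];
    return r;
}

static double Distance(double[] u, double[] v) =>
    Math.Sqrt(Math.Pow(u[0] - v[0], 2) + Math.Pow(u[1] - v[1], 2) + Math.Pow(u[2] - v[2], 2));
```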
When time enters the picture (which Souriau associates with its own group of time translations, the Chronos group), group theory becomes dynamical group theory, which no longer manages static objects, but sets of "event-points", or trajectories.

The Galilei group (or Galilean group) $$\text{Gal(3)}$$ manages motions of the non-relativistic material point, i.e. classical mechanics. Souriau decomposes the Galilei group into two subgroups:

• Euclidean transformations + time translations = the Aristotle group, a 10-dimensional group.
• The Bruno group, named after the famous Italian Dominican friar, philosopher, mathematician, poet and cosmological theorist Giordano Bruno, proponent of Copernicus' heliocentric model and partisan of the plurality of worlds in an infinite universe with no center; he was tried for heresy by the Roman Inquisition and sentenced to death by burning. Souriau says that the Bruno group "embarks", in reference to someone embarking aboard "Bruno's boat" moving on a river.

The Aristotle group is a dogmatic group, as its name suggests. It can give an impression of immobility, as with someone sleeping in an armchair at home, for example. But that same person also moves at a great velocity around the Sun, which itself moves at an even greater velocity around the galaxy… Stillness is therefore only a subjective regularity: it depends on the chosen frame of reference. Fortunately, a more objective geometry can be achieved with another group.

To do this, let's go back to Bruno: the trajectory of a stone thrown in the air can be obtained both from the shore and from the boat. The motion which, seen from the boat, looks exactly like the actual motion seen from the shore is itself another possible motion. The embarkation thus becomes a correspondence between motions at the same instant, which Souriau calls the Bruno transformations. As for the Galilei group, it can be obtained by completing the Aristotle group with "Galilei transformations", i.e. adding the same initial velocity to all points of the system.

The application of Noether's symplectic theorem then predicts that the motion of the center of mass is uniform and occurs in a straight line, a result sometimes known as the principle of inertia (first formulated by the French mathematician Pierre Gassendi in 1642). To establish the same result with his first law of motion, Isaac Newton had to introduce a special principle of mechanics, the equality of action and reaction. But in symplectic geometry, Newton's third law is not needed: the Galilean symplectic invariance alone is sufficient. Every action of the Galilei group can be decomposed into two actions, one taken in the Aristotle group and the other in the Bruno group.

# Dismantling the Poincaré group

### On the hunt for its invariants

As seen above, the Galilei group generates the laws of classical Newtonian mechanics. Another group is compatible with special relativity: the Poincaré group, a ten-dimensional Lie group, that of Minkowski spacetime isometries (a small numerical check of this isometry property is given at the end of this section). The Poincaré group thus manages motions of the relativistic material point in a 4D spacetime. Geometric quantization makes it possible, for example, to retrieve the non-relativistic (free) Schrödinger equation from the Galilei group presented above, and the Klein-Gordon equation from the Poincaré group, which is the group we focus on in what follows.
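The isometry property mentioned above can be checked numerically with a single Lorentz boost. The sketch below is our own illustration (units with c = 1; the boost velocity and the event coordinates are arbitrary values): it verifies that the Minkowski interval t^2 - x^2 - y^2 - z^2 is unchanged by the boost.

```csharp
// A Lorentz boost along the x axis, an element of the Lorentz subgroup of the Poincaré group,
// preserves the Minkowski interval t^2 - x^2 - y^2 - z^2 (with c = 1).
double beta = 0.6;                                    // arbitrary boost velocity (fraction of c)
double gamma = 1.0 / Math.Sqrt(1 - beta * beta);

double t = 2.0, x = 1.0, y = 0.5, z = -0.3;           // an arbitrary event (t, x, y, z)
double tBoosted = gamma * (t - beta * x);
double xBoosted = gamma * (x - beta * t);

double interval = t * t - x * x - y * y - z * z;
double intervalBoosted = tBoosted * tBoosted - xBoosted * xBoosted - y * y - z * z;

Console.WriteLine($"Interval before boost: {interval:F6}");
Console.WriteLine($"Interval after boost : {intervalBoosted:F6}");
// Both values agree (up to rounding): the interval is the invariant preserved by the boost.
```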
In the early 20th century, the great German mathematician Emmy Noether gave her name to one of the most important theorems of physics: Noether's theorem, which states that to every subgroup of a dynamical group there corresponds an invariant.

In the Poincaré group, there is a first subgroup, of time translations:

$$\begin{bmatrix} 1 & 0 & 0 & 0 & \Delta t \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} t \\ x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} t + \Delta t\\ x \\ y \\ z \\ 1 \end{bmatrix}$$

It is a group with one parameter ($$\Delta t$$), and its corresponding invariant is a scalar: the energy $$E$$.

The Poincaré group also has a second subgroup, of spatial translations:

$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & \Delta x \\ 0 & 0 & 1 & 0 & \Delta y \\ 0 & 0 & 0 & 1 & \Delta z \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} t \\ x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} t \\ x + \Delta x \\ y + \Delta y \\ z + \Delta z \\ 1 \end{bmatrix}$$

which is a group with three parameters ($$\Delta x, \Delta y, \Delta z$$). Its invariant is the (linear) momentum:

$$\text{p} = \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix}$$

This is how momentum is defined according to dynamical group theory. Such geometric quantization turns fundamental quantities of physics into pure geometrical objects.

Similarly to the two previous subgroups of the Poincaré group, we can let a third one act: the subgroup of spacetime translations:

$$\begin{bmatrix} 1 & 0 & 0 & 0 & \Delta t \\ 0 & 1 & 0 & 0 & \Delta x \\ 0 & 0 & 1 & 0 & \Delta y \\ 0 & 0 & 0 & 1 & \Delta z \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} t \\ x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} t + \Delta t \\ x + \Delta x \\ y + \Delta y \\ z + \Delta z \\ 1 \end{bmatrix}$$

The invariant object is then the energy-momentum four-vector, or four-momentum:

$$\text{P} = \begin{bmatrix} E \\ p_x \\ p_y \\ p_z \end{bmatrix}$$

We have just seen 4 parameters (or dimensions; beware, such "dimensions" of a group are just mathematical terminology) of the Poincaré group, which has 10 dimensions. What are the other 6 parameters? They are the dimensions of the Lorentz group $$\text{L}$$, which is a subgroup of the Poincaré group:

$$\begin{bmatrix} \text{L} & 0 \\ 0 & 1 \end{bmatrix} \times \begin{bmatrix} \xi \\ 1 \end{bmatrix} = \begin{bmatrix} \text{L} \, \xi \\ 1 \end{bmatrix} \qquad \text{with} \qquad \xi = \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix}$$

Noether's theorem says it must have a corresponding "object" in physics, defined by 6 parameters that are invariant under the action of this subgroup. Souriau showed that such an object contains the spin, and that it has the dimension of angular momentum.

# The "moment"

### The shadow of every move

We have seen that the subgroups above correspond to a kind of "dismantling of the group, part by part". When we do the opposite operation, we reconstitute the group. The set of invariants shown above constitutes what Souriau calls the moment $$\text{J}$$:

$$\text{J} \, = \, \left\{ E, p_x, p_y, p_z, ..., \text{spin} \right\}$$

Every movement of an object in spacetime has its moment. Beware, this "moment" is not a synonym of "instant", nor of the physical concepts of linear or angular momentum. The word "moment", in use since 1765 (it was introduced by the Swiss mathematician Leonhard Euler), is related to the Latin movimentum, which means movement, i.e. a physical motion between points in space.
There is a tight and fundamental relationship between "motion" and "moment". The moment, symbol of materiality, shadows every move.

# The Moment Space

### A geometry can hide another

A group action is the way a group of matrices acts, through matrix multiplication, to manage, for example in the Euclidean group, rotations, symmetries and translations in one fell swoop. But Souriau discovered that a group can also act on moments, generating in return a new geometric space. As a consequence, there exists another action of the group, on another space.

There is indeed one space where motions are inscribed: spacetime. In Minkowski 4D spacetime, a group acts on a point $$\left\{ t_1, x_1, y_1, z_1 \right\}$$, which gives another point $$\left\{ t_2, x_2, y_2, z_2 \right\}$$. However, what is inscribed in spacetime is only the trajectory. Yet the motion plays out in two spaces, the second one being the space of the parameters of the motion, which Souriau calls the moment space.

Let's consider a motion of an object in space. Such a motion is also defined by its moment $$\text{J}$$. The physicist can then make an element $$g$$ of, say, the Galilei group act on this moment $$\text{J}$$. He obtains a new motion, hence a new moment $$\text{J}'$$. This action is written:

$$\text{J}' \, = \, g \times \text{J} \times {}^\text{t}\kern{-2pt}g$$

$$g$$ is the element of the chosen group, a square matrix, i.e. simply a matrix with the same number of rows and columns.

$${}^\text{t}\kern{-1pt}g$$ is the transpose of that matrix, i.e. the matrix reflected across its main diagonal. For the special case of a column vector, its transpose is simply the same matrix written as a row vector.

$$\text{J}$$ is the moment matrix. It is a 5×5 antisymmetric matrix, i.e. elements placed symmetrically with respect to the main diagonal have opposite signs, and the elements of the main diagonal are equal to zero (which is its own opposite). We can count the independent components of such antisymmetric matrices (a short counting sketch is given at the end of this section).

To better understand what $$\text{J}$$ is, that 5×5 matrix can be decomposed into the 4×4 matrix $$\color{#b7d2c2}{\text{M}}$$, the energy-momentum four-vector $$\color{#a07dd1}{\text{P}}$$ (a column vector with four components) and its transpose, the row vector $$\color{#cabcd1}{{}^\text{t}\kern{-1pt}\text{P}}$$. We can now write the moment matrix $$\text{J}$$ in a more compact way:

$$\text{J} = \begin{bmatrix} \color{#b7d2c2}{\text{M}} & \color{#a07dd1}{-\text{P}} \\ \color{#cabcd1}{{}^\text{t}\kern{-1pt}\text{P}} & 0 \end{bmatrix}$$

We've shown that the moment matrix $$\text{J}$$ contains objects that have a physical interpretation, like the four-vector $$\color{#a07dd1}{\text{P}}$$, in which $$\color{#9d92d1}{E}$$ is the energy and $$\color{#b57dd1}{\text{p} = \left\{ p_x, p_y, p_z \right\} }$$ the momentum. But what exactly is that antisymmetric matrix $$\color{#b7d2c2}{\text{M}}$$? What is its physical meaning? Let's decompose it to find out. In compact form:

$$\color{#b7d2c2}{\text{M}} = \begin{bmatrix} \color{#d1827d}{\text{s}} & \color{#d1c27d}{\text{f}} \\ \color{#d1cdbc}{-{}^\text{t}\kern{-1pt}\text{f}} & 0 \end{bmatrix}$$

The velocity $$V$$ is implicitly present in the $$\text{L}$$ matrix of the Lorentz group.
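The count mentioned above is easy to verify mechanically. The following tiny sketch is our own illustration (not from the original text); it simply prints n(n-1)/2, the number of independent components of an antisymmetric n×n matrix, for the sizes used here.

```csharp
// An antisymmetric n x n matrix has zeros on its diagonal and opposite entries across it,
// so it carries n(n-1)/2 independent components: 6 for the 4x4 block M, 10 for the 5x5 moment J.
for (int n = 3; n <= 5; n++)
    Console.WriteLine($"antisymmetric {n}x{n}: {n * (n - 1) / 2} independent components");
// Output:
// antisymmetric 3x3: 3 independent components
// antisymmetric 4x4: 6 independent components
// antisymmetric 5x5: 10 independent components
```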
If we consider a motion occurring in a specific direction, for example along a vector $$oz$$, with a velocity $$V$$ and a translation $$\Delta z = \text{c}$$, and if $$\text{c} = V \, \Delta t$$, then we are in a coordinate system where we follow the particle's motion along this spacetime translation. The vector $$\color{#d1c27d}{ \text{f} }$$ is then shown to be null. The matrix $$\color{#d1827d}{ \text{s} }$$ is then written:

$$\color{#d1827d}{ \text{s} = \begin{bmatrix} 0 & -s & 0 \\ s & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} }$$

It is the spin of the particle. Souriau demonstrated in 1970 this purely geometric character of spin: a 3×3 antisymmetric matrix. The method of geometric quantization, which he invented, makes it possible to show that spin can only be a multiple of a fixed quantity: $$\hbar$$ (the reduced Planck constant).

Further down the page, we will see that Souriau also showed that saying a particle carries an electric charge is equivalent to saying that it moves in a spacetime having a fifth dimension (similar to the Kaluza or Jordan–Thiry dimension), that this dimension is really tiny (its characteristic length is the Planck length), and that it is closed onto itself as a fiber bundle. It is the fact that this dimension is closed onto itself that causes the electric charge to be quantized. In spacetime, there exists a "form of closure" that causes an object to become identical to itself under the action of a 360° rotation. The quantization of spin comes from that property: there is a close relationship between geometric quantization and the closure of a dimension.

The quantity $$\color{#d1c27d}{ \text{f} = \begin{bmatrix} f_x \\ f_y \\ f_z \end{bmatrix} }$$ is what Souriau calls the passage. It vanishes when the frame of reference is that of the particle in motion, and appears only from another reference frame, for example when considering, from the shore, an object spinning on a moving boat. It may be considered as an artifact of the motion, although Souriau said about it:

What is the passage? It is what prevents you from moving without moving something in the opposite direction: the exchange of passage between this thing and you. Proof that it is real: by plane or by boat, there is a toll per passage.

— Jean-Marie Souriau —

For example, you are sitting in a plane in flight at the rear of the cabin, and you are invited to move to the front. You will only pass if you borrow a little passage from the aircraft. This will cause it to lose a couple of inches off its flight path.

It is the conservation of passage that makes it possible to establish the following rule: if an object is in free space, its center of mass moves in a straight line, at a constant speed, in the direction of the momentum. If the momentum is equal to zero, the center of mass is immobile. Of course, in reality a rocket launched into space is not completely free: it undergoes the gravitational attraction of planetary bodies and the Sun, which slowly disturbs the motion of the center of gravity by transferring some passage to the rocket.

The underlying quantity here is more classical than the passage $$\color{#d1c27d}{\text{f}}$$: it is the center of mass $$C_m$$. They are deduced from each other by the rule $$C_m = \color{#d1c27d}{\text{f}} + \color{#b57dd1}{p} \, t$$. Therefore the passage $$\color{#d1c27d}{\text{f}}$$ locates the position of $$C_m$$ at the instant $$t = 0$$.
The complete Galilean moment is thus:

$$\text{J} \, = \, \left\{ \text{energy, mass, momentum, passage, spin} \right\}$$

Every motion of any thing has its own moment, and the only thing a moment can do is transfer itself partially from one thing to another. It can be neither created nor destroyed. This is how the moment can be measured: by taking a part of the moment of the object and transferring it to the measuring instrument.

You see above that (rest) mass is a parameter of the moment. The classical mass, which arose as an arbitrary additive constant in the Galilei group, is now defined in the Poincaré group as the relativistic mass $$m = E / c^2$$, which depends on velocity. Another difference with the non-relativistic dynamical group: there is no barycentric decomposition anymore. In the Galilei group it was a consequence of the existence of a privileged subgroup, which has no equivalent in the Poincaré group. Every virtual motion can then be derived from the real motion by a change of reference frame, and it is shown that the Poincaré group describes the properties of elementary particles using only two numbers which have a physical interpretation: the rest mass and the spin (i.e. intrinsic angular momentum).

For the case of a massless particle (like the photon), a third number steps in: the helicity (circular polarization), which can take only the values ±1. Photons are indeed either left circularly polarized (LCP) or right circularly polarized (RCP) according to the sign of the helicity.

Now that we have presented the main tools, we can show the coadjoint action of the Poincaré group on its moment space.

### Action of the Poincaré group on its moment space

The coadjoint action is the action of a group on its moment space; in more mathematical terms, it is the action of a Lie group on the dual vector space to its Lie algebra, or Lie coalgebra. In dynamical systems theory, the Poincaré group manages the motion of the relativistic material point.
The element $$g$$ of the Poincaré group is: $$g = \begin{bmatrix} \text{L} & \text{C} \\ 0 & 1 \end{bmatrix}$$ where $$\text{L}$$ is the Lorentz group (subgroup of the Poincaré group) and $$\text{C}$$ is a column vector representing a translation within four dimensions: $$\text{C} = \begin{bmatrix} \Delta t \\ \Delta x \\ \Delta y \\ \Delta z \\ \end{bmatrix}$$ Thanks to the decomposition of the previous section, we can detail the action of the Poincaré group on its moment space, which is: $$\text{J}' \, = \, g \times \text{J} \times {}^\text{t}\kern{-2pt}g$$ We have: $${}^\text{t}\kern{-1pt}g \, = \, \begin{bmatrix} {}^\text{t}\kern{-1pt}\text{L} & 0 \\ {}^\text{t}\kern{-1pt}\text{C} & 1 \end{bmatrix}$$ Expanding the compact notation of the action above, we have: \begin{aligned} \text{J}' \, &= \, \begin{bmatrix} \text{L} & \text{C} \\ 0 & 1 \end{bmatrix} \times \begin{bmatrix} \text{M} & -\text{P} \\ {}^\text{t}\kern{-1pt}\text{P} & 0 \end{bmatrix} \times \begin{bmatrix} {}^\text{t}\kern{-1pt}\text{L} & 0 \\ {}^\text{t}\kern{-1pt}\text{C} & 1 \end{bmatrix} \\ \text{J}' \, &= \, \begin{bmatrix} \text{L} & \text{C} \\ 0 & 1 \end{bmatrix} \times \begin{bmatrix} \text{M} \, {}^\text{t}\kern{-1pt}\text{L} - \text{P} \, {}^\text{t}\kern{-1pt}\text{C} & -\text{P} \\ {}^\text{t}\kern{-1pt}\text{P} \, {}^\text{t}\kern{-1pt}\text{L} & 0 \end{bmatrix} \\ \text{J}' \, &= \, \begin{bmatrix} \color{#b7d2c2}{ \text{L} \, \text{M} \, {}^\text{t}\kern{-1pt}\text{L} - \text{L} \, \text{P} \, {}^\text{t}\kern{-1pt}\text{C} + \text{C} \, {}^\text{t}\kern{-1pt}\text{P} \, {}^\text{t}\kern{-1pt}\text{L} } & \color{#a07dd1}{ -\text{L} \, \text{P} } \\ \color{#cabcd1}{ {}^\text{t}\kern{-1pt}\text{P} \, {}^\text{t}\kern{-1pt}\text{L} } & 0 \end{bmatrix} \end{aligned} Since: $$\text{J} = \begin{bmatrix} \color{#b7d2c2}{\text{M}} & \color{#a07dd1}{-\text{P}} \\ \color{#cabcd1}{{}^\text{t}\kern{-1pt}\text{P}} & 0 \end{bmatrix}$$ We have: $$\color{#b7d2c2}{\text{M}' \, = \, \text{L} \, \text{M} \; {}^\text{t}\kern{-1pt}\text{L} - \text{L} \, \text{P} \; {}^\text{t}\kern{-1pt}\text{C} + \text{C} \, {}^\text{t}\kern{-1pt}\text{P} \, {}^\text{t}\kern{-1pt}\text{L}}$$ $$\color{#a07dd1}{ \text{P}' \, = \, \text{L} \, \text{P} }$$ This is the coadjoint action of the Poincaré group on its moments $$\color{#b7d2c2}{\text{M}}$$ (passage and spin) and $$\color{#a07dd1}{\text{P}}$$ (energy and momentum). # Antichronous action ### how energy is reversed The elements of the Lorentz group act on a series of points in spacetime which constitute a motion. By letting an element $$\text{L}$$ of the Lorentz group act on a given motion, we obtain another motion. The full Lorentz group has four connected components: $$\text{L}_\text{N}$$ (neutral), $$\text{L}_\text{P}$$ (inverts space), $$\text{L}_\text{T}$$ (inverts time), $$\text{L}_{\text{PT}}$$ (inverts space and time). The neutral component $$\text{L}_\text{N}$$ is a subgroup containing the unitary matrix that does not invert space nor time. 
Then, we introduce Petit's writing, which consists in the following 4-element matrix $$\omega$$ built with two parameters $$\color{#7da9d1}{\lambda_1}$$ and $$\color{#bcc7d1}{\lambda_2}$$: $$\omega_{(\color{#7da9d1}{\lambda_1}, \, \color{#bcc7d1}{\lambda_2})} = \begin{bmatrix} \color{#7da9d1}{\lambda_1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{\lambda_2} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{\lambda_2} & 0 \\ 0 & 0 & 0 & \color{#bcc7d1}{\lambda_2} \\ \end{bmatrix} \qquad \text{with} \qquad \begin{matrix} \color{#7da9d1}{\lambda_1 = \pm 1} \\ \color{#bcc7d1}{\lambda_2 = \pm 1} \end{matrix}$$ Thus the four components of the full Lorentz group can easily be expressed using the four possible combinations of these two parameters applied on the neutral component $$\text{L}_\text{N}$$: $$\text{L} \; = \; \omega \, \text{L}_{\text{N}}$$ Indeed, we have: $$\omega_{(\color{#7da9d1}{1}; \, \color{#bcc7d1}{1})} \times \text{L}_\text{N} = \begin{bmatrix} \color{#7da9d1}{1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{1} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{1} & 0 \\ 0 & 0 & 0 & \color{#bcc7d1}{1} \end{bmatrix} \in \left\{ \text{L}_\text{N} \right\}$$ $$\omega_{(\color{#7da9d1}{1}; \, \color{#bcc7d1}{-1})} \times \text{L}_\text{N} = \begin{bmatrix} \color{#7da9d1}{1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{-1} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{-1} & 0 \\ 0 & 0 & 0 & \color{#bcc7d1}{-1} \end{bmatrix} \in \left\{ \text{L}_\text{P} \right\}$$ $$\omega_{(\color{#7da9d1}{-1}; \, \color{#bcc7d1}{1})} \times \text{L}_\text{N} = \begin{bmatrix} \color{#7da9d1}{-1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{1} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{1} & 0 \\ 0 & 0 & 0 & \color{#bcc7d1}{1} \end{bmatrix} \in \left\{ \text{L}_\text{T} \right\}$$ $$\omega_{(\color{#7da9d1}{-1}; \, \color{#bcc7d1}{-1})} \times \text{L}_\text{N} = \begin{bmatrix} \color{#7da9d1}{-1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{-1} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{-1} & 0 \\ 0 & 0 & 0 & \color{#bcc7d1}{-1} \end{bmatrix} \in \left\{ \text{L}_{\text{PT}} \right\}$$ We see that $$\color{#7da9d1}{\lambda_1 = -1}$$ inverts time and $$\color{#bcc7d1}{\lambda_2 = -1}$$ inverts space. The four sets are grouped in two subsets: the orthochronous subset $$\text{L}_\text{O}$$ and the antichronous subset $$\text{L}_\text{A}$$: $$\color{#04c4fc}{\text{L}_\text{O} \, = \, \left\{ \text{L}_\text{N} ; \text{L}_\text{P} \right\}} \qquad \text{and} \qquad \color{#04c4fc}{\text{L}_\text{A} \, = \, \left\{ \text{L}_\text{T} ; \text{L}_{\text{PT}} \right\}}$$ The full Poincaré group, built on top of the Lorentz group adding spacetime translations, has also these four connected components. 
Thus its element can be written: $$g = \begin{bmatrix} \omega \, \text{L}_{\text{N}} & \text{C} \\ 0 & 1 \end{bmatrix}$$ Then the action of the Poincaré group on spacetime, the space of trajectories, is: $$\begin{bmatrix} \omega \, \text{L}_{\text{N}} & \text{C} \\ 0 & 1 \end{bmatrix} \times \begin{bmatrix} \xi \\ 1 \end{bmatrix} = \begin{bmatrix} \omega \, \text{L}_{\text{N}} \, \xi + \text{C} \\ 1 \end{bmatrix}$$ But this action hides a more important action: the action of the Poincaré group on its moment $$\text{J}$$, which has ten scalars (since the Poincaré group is a 10-dimensional group), these scalars being: • • The energy $$\color{#9d92d1}{E}$$ • • The momentum $$\color{#b57dd1}{ \text{p} = \left\{ p_x, p_y, p_z \right\} }$$ • • The passage $$\color{#d1c27d}{ \text{f} = \left\{ f_x, f_y, f_z \right\} }$$ • • The spin $$\color{#d1827d}{ \text{s} = \left\{ l_x, l_y, l_z \right\} }$$ The dual of the action of the Poincaré group on its Lie algebra is the coadjoint action on its moments $$\color{#b7d2c2}{\text{M}}$$ (passage and spin) and the four-momentum vector $$\color{#a07dd1}{\text{P}}$$ (energy and momentum) which gives in this case, following the writing seen previously: $$\color{#b7d2c2}{\text{M}' \; = \; (\omega\text{L}_{\text{N}}) \, \text{M} \; {}^\text{t}\kern{-1pt}(\omega\text{L}_{\text{N}}) \; - \; (\omega\text{L}_{\text{N}}) \, \text{P} \; {}^\text{t}\kern{-1pt}\text{C} \; + \; \text{C} \, {}^\text{t}\kern{-1pt}\text{P} \, {}^\text{t}\kern{-1pt}(\omega\text{L}_{\text{N}}) }$$ $$\color{#a07dd1}{ \text{P}' \; = \; (\omega\text{L}_{\text{N}}) \, \text{P} }$$ To highlight the effects of $$\text{P}$$, $$\text{T}$$ and $$\text{PT}$$ symmetries on $$\left\{ \color{#9d92d1}{E}, \color{#b57dd1}{\text{p}}, \color{#d1c27d}{\text{f}}, \color{#d1827d}{\text{s}} \right\}$$, we will choose the simplest possible action, where there is no spacetime translation, so the vector $$\text{C}$$ cancels out. We also simplify the writing, choosing $$\text{L}_{\text{N}} = 1$$. We get: $$\color{#b7d2c2}{ \text{M}' \; = \; \big[ \omega_{(\lambda_2, \, \lambda_1)} \big] \times \text{M} \times {}^\text{t}\kern{-1pt}\big[ \omega_{(\lambda_2, \, \lambda_1)} \big] }$$ $$\color{#a07dd1}{ \text{P}' \; = \; \big[ \omega_{(\lambda_2, \, \lambda_1)} \big] \, \text{P} }$$ Let's consider for example $$\text{T}$$-symmetry, where there is only time inversion ( $$\color{#7da9d1}{\lambda_1 = -1}$$ ), no space inversion ( $$\color{#bcc7d1}{\lambda_2 = 1}$$ ), in a case where there is also no spacetime translation ( $$\text{C}=0$$ ). We thus have: $$\omega_{(\color{#bcc7d1}{1}, \, \color{#7da9d1}{-1})} \times \text{L}_{\text{N}} \; \equiv \; \text{L}_{\text{T}}$$ Hence: $$\text{L}_{\text{T}} \times \xi \; = \; \begin{bmatrix} \color{#bcc7d1}{1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{1} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{1} & 0 \\ 0 & 0 & 0 & \color{#7da9d1}{-1} \end{bmatrix} \times \begin{bmatrix} x \\ y \\ z \\ t \end{bmatrix} = \begin{bmatrix} x \\ y \\ z \\ -t \end{bmatrix}$$ This is the trivial action of time reversal in the space of trajectories. 
But the coadjoint action, the action of the group on its moment space, gives on one hand:

\begin{aligned} \color{#b7d2c2}{ \text{M}' \, = \, \text{L}_\text{T} \times \text{M} \times {}^\text{t}\kern{-1pt}\text{L}_\text{T} } \, &= \, \begin{bmatrix} \color{#bcc7d1}{1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{1} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{1} & 0 \\ 0 & 0 & 0 & \color{#7da9d1}{-1} \end{bmatrix} \times \begin{bmatrix} \color{#d1827d}{0} & \color{#d1827d}{-l_z} & \color{#d1827d}{l_y} & \color{#d1c27d}{f_x} \\ \color{#d1827d}{l_z} & \color{#d1827d}{0} & \color{#d1827d}{-l_x} & \color{#d1c27d}{f_y} \\ \color{#d1827d}{-l_y} & \color{#d1827d}{l_x} & \color{#d1827d}{0} & \color{#d1c27d}{f_z} \\ \color{#d1cdbc}{-f_x} & \color{#d1cdbc}{-f_y} & \color{#d1cdbc}{-f_z} & 0 \end{bmatrix} \times \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix} \\ \color{#b7d2c2}{ \text{M}' } \, &= \, \begin{bmatrix} \color{#d1827d}{0} & \color{#d1827d}{-l_z} & \color{#d1827d}{l_y} & \color{#d1c27d}{-f_x} \\ \color{#d1827d}{l_z} & \color{#d1827d}{0} & \color{#d1827d}{-l_x} & \color{#d1c27d}{-f_y} \\ \color{#d1827d}{-l_y} & \color{#d1827d}{l_x} & \color{#d1827d}{0} & \color{#d1c27d}{-f_z} \\ \color{#d1cdbc}{f_x} & \color{#d1cdbc}{f_y} & \color{#d1cdbc}{f_z} & 0 \end{bmatrix} \end{aligned}

and on the other hand:

$$\color{#a07dd1}{ \text{P}' \; = \; \text{L}_\text{T} \times \text{P} } \; = \; \begin{bmatrix} \color{#bcc7d1}{1} & 0 & 0 & 0 \\ 0 & \color{#bcc7d1}{1} & 0 & 0 \\ 0 & 0 & \color{#bcc7d1}{1} & 0 \\ 0 & 0 & 0 & \color{#7da9d1}{-1} \end{bmatrix} \times \begin{bmatrix} \color{#b57dd1}{p_x \\ p_y \\ p_z} \\ \color{#9d92d1}{E} \end{bmatrix} = \begin{bmatrix} \color{#b57dd1}{p_x \\ p_y \\ p_z} \\ \color{#9d92d1}{-E} \end{bmatrix}$$

N.B.: The matrix $$\omega_{(\color{#bcc7d1}{\lambda_2}, \; \color{#7da9d1}{\lambda_1})}$$ is here expressed according to a 4D spacetime noted $$\left\{ x, y, z, t \right\}$$ instead of the usual relativity convention $$\left\{ t, x, y, z \right\}$$ that we use everywhere else, in order to fit the graphical and matrix representations of $$\color{#b7d2c2}{ \text{M} }$$ and $$\color{#a07dd1}{ \text{P} }$$ shown earlier.

We can carry out the same process for the 4 connected components of the Lorentz group, and we discover that:

• P-symmetry: momentum and passage are inverted. Energy and spin stay the same.
• T-symmetry: energy and passage are inverted. Momentum and spin stay the same.
• PT-symmetry: momentum and energy are inverted. Passage and spin stay the same.

The following table details the possible Lorentz transformations and their effect on the moment components (a short numerical check of the T column is given below).

| Moment component | $$\text{N}$$ ($$\lambda_2 = 1, \, \lambda_1 = 1$$) | $$\text{P}$$ ($$\lambda_2 = -1, \, \lambda_1 = 1$$) | $$\text{T}$$ ($$\lambda_2 = 1, \, \lambda_1 = -1$$) | $$\text{PT}$$ ($$\lambda_2 = -1, \, \lambda_1 = -1$$) |
|---|---|---|---|---|
| energy: $$E' = \lambda_1 E$$ | $$E$$ | $$E$$ | $$-E$$ | $$-E$$ |
| momentum: $$\text{p}' = \lambda_2 \, \text{p}$$ | $$\text{p}$$ | $$-\text{p}$$ | $$\text{p}$$ | $$-\text{p}$$ |
| passage: $$\text{f}\,' = \lambda_2 \, \lambda_1 \, \text{f}$$ | $$\text{f}$$ | $$-\text{f}$$ | $$-\text{f}$$ | $$\text{f}$$ |
| spin: $$\text{s}' = \text{s}$$ | $$\text{s}$$ | $$\text{s}$$ | $$\text{s}$$ | $$\text{s}$$ |

No transformation modifies the spin. $$\text{T}$$-symmetry is what makes a particle invert its energy. The main excerpt on this point is Chapter 14 of Souriau's book Structure of Dynamical Systems, "A mechanistic description of elementary particles", pp. 189–193.
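As a concrete cross-check of the T column of the table, the following sketch is our own illustration (arbitrary numerical values; the plain array helpers Multiply, Transpose and Apply are ours): it applies L_T = diag(1, 1, 1, -1) to the blocks M and P in the {x, y, z, t} ordering used above and shows that the energy and the passage change sign while the momentum and the spin entries do not.

```csharp
// T-symmetry acting on the moment: compute M' = L_T M tL_T and P' = L_T P numerically.
double lx = 1, ly = 2, lz = 3, fx = 4, fy = 5, fz = 6;   // arbitrary spin and passage values
double px = 7, py = 8, pz = 9, E = 10;                   // arbitrary momentum and energy

double[,] LT = { { 1, 0, 0, 0 }, { 0, 1, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, -1 } };
double[,] M =
{
    {   0, -lz,  ly, fx },
    {  lz,   0, -lx, fy },
    { -ly,  lx,   0, fz },
    { -fx, -fy, -fz,  0 }
};
double[] P = { px, py, pz, E };

double[,] Mprime = Multiply(Multiply(LT, M), Transpose(LT));
double[] Pprime = Apply(LT, P);

Console.WriteLine($"f' = ({Mprime[0, 3]}, {Mprime[1, 3]}, {Mprime[2, 3]})   (was ({fx}, {fy}, {fz}): passage inverted)");
Console.WriteLine($"spin entry (0,1): {Mprime[0, 1]}   (was {-lz}: spin unchanged)");
Console.WriteLine($"p' = ({Pprime[0]}, {Pprime[1]}, {Pprime[2]}),  E' = {Pprime[3]}   (momentum unchanged, energy inverted)");

static double[,] Multiply(double[,] a, double[,] b)
{
    var r = new double[4, 4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                r[i, j] += a[i, k] * b[k, j];
    return r;
}

static double[,] Transpose(double[,] a)
{
    var r = new double[4, 4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r[i, j] = a[j, i];
    return r;
}

static double[] Apply(double[,] m, double[] v)
{
    var r = new double[4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r[i] += m[i, j] * v[j];
    return r;
}
```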
# T-symmetry in Quantum Field Theory

### Linear & unitary vs antilinear & antiunitary operators

Souriau's work presented above leads us to question the nature of the operator $$\text{T}$$, which reverses time. As we have shown, in dynamical group theory the operator $$\text{T}$$ is real, and $$\text{T}$$-symmetry produces energy (and mass) inversion. However, in quantum field theory (QFT), operators are complex. They can then be unitary or antiunitary, and linear or antilinear. Below, we show that the choice of an antilinear and antiunitary operator $$\text{T}$$ in QFT was made for the sole purpose of avoiding negative-energy particles, which were considered impossible.

Indeed, Steven Weinberg wrote, in the "bible" book The Quantum Theory of Fields Vol. 1 (Cambridge University Press, 1995), chapter 2 "Relativistic Quantum Mechanics", section 2.6 "Space Inversion and Time-Reversal", pp. 75–76 (emphasis added):

At this point we have not yet decided whether $$\text{P}$$ and $$\text{T}$$ are linear and unitary or antilinear and antiunitary. The decision is an easy one. Setting $$\rho = 0$$ in Eq. (2.6.4) gives

$$\text{P} \, i \, H \, \text{P}^{-1} \, = \, i \, H \text{,}$$

where $$H \equiv P^0$$ is the energy operator. If $$\text{P}$$ were antiunitary and antilinear then it would anticommute with $$i$$, so $$\text{P} H \text{P}^{-1}=-H$$. But then for any state $$\Psi$$ of energy $$E > 0$$, there would have to be another state $$\text{P}^{-1}\Psi$$ of energy $$−E < 0$$. There are no states of negative energy (energy less than that of the vacuum), so we are forced to choose the other alternative: $$\text{P}$$ is linear and unitary, and commutes rather than anticommutes with $$\text{H}$$. On the other hand, setting $$\rho = 0$$ in Eq. (2.6.6) yields

$$\text{T} \, i \, H \, \text{T}^{-1} \, = \, -i \, H \text{.}$$

If we supposed that $$\text{T}$$ is linear and unitary then we could simply cancel the $$i$$s, and find $$\text{T} H \text{T}^{-1}=-H$$, with the again disastrous conclusion that for any state $$\Psi$$ of energy $$E$$, there is another state $$\text{T}^{-1}\Psi$$ of energy $$−E$$. To avoid this, we are forced here to conclude that $$\text{T}$$ is antilinear and antiunitary.

Although the British physicist Paul Dirac interpreted his 1930 discovery of negative-energy quantum states in his own equations literally, as the Dirac sea, an infinitely deep sea of particles with negative energy, the modern interpretation from quantum field theory is that negative energy states are forbidden, because the zero-point field of the quantum vacuum represents a ground state of lowest possible energy, so energy states below that value could not exist. Indeed, the probability of existence of energy states in quantum mechanics involves the ratio $$E/m$$; if the energy were negative, that probability would seem to be negative. But this explanation is biased: if the energy is negative, so is the mass (which would also emit negative-energy photons)… hence the probability remains positive. Positive-energy and negative-energy states would coexist in two separate worlds, with the energy potential increasing on both sides of a common ground state of lowest possible energy.

Fortunately, we have just shown that the foundations of quantum mechanics do in fact allow such negative energy states, arbitrarily dismissed from quantum field theory but predicted by the Janus model.
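The hinge of Weinberg's argument is that an antilinear operator anticommutes with multiplication by $$i$$. Below is a minimal sketch of just that algebraic fact, using complex conjugation as the prototype antilinear map; it is our own illustration, not taken from either book, and the value of z is arbitrary.

```csharp
using System.Numerics;

// Complex conjugation K is the prototype of an antilinear map: K(i z) = -i K(z).
// This sign flip is what turns T i H T^-1 = -i H into T H T^-1 = +H when T is chosen
// antiunitary, which is how QFT avoids negative-energy states in the argument quoted above.
Complex z = new Complex(2.0, 3.0);

Complex left  = Complex.Conjugate(Complex.ImaginaryOne * z);    // K(i z)
Complex right = -Complex.ImaginaryOne * Complex.Conjugate(z);   // -i K(z)

Console.WriteLine($"K(i z)  = {left}");
Console.WriteLine($"-i K(z) = {right}");
// Both lines print the same complex number, -3 - 2i, confirming the anticommutation with i.
```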
# A Fifth Dimension

### Geometric quantization of the electric charge

Let us return to Souriau's work in geometric quantization and dynamical systems theory. The motion of the relativistic material point (symmetry, rotation, translation through time) is managed by the Poincaré group, which is a "dynamical group". It contains a subgroup, the Lorentz group, which acts upon points (events), for example in a 4D spacetime $$[t, x, y, z]$$.

In 1958, by adding a compact (microscopic) 5th space dimension $$\zeta$$ folded on itself as a fiber bundle in a five-dimensional relativistic theory (which recovers the Jordan-Thiry and Kaluza-Klein 5D theories, or the Maxwell and Einstein 4D theories, as approximations), Souriau created a new five-dimensional Lorentz group describing events in a 5D spacetime $$[x, y, z, t, \zeta]$$. With this extended Lorentz group, he gave the first geometric interpretation of the matter-antimatter duality. When the additional dimension $$\zeta$$ is defined modulo $$2\pi$$, the electric charge $$e$$ is quantized.

The Kaluza space, a hyperbolic Riemannian space of signature $$(+----)$$, has the 5×5 Gram matrix:

$$\Gamma = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1 \end{bmatrix} \, = \, \begin{bmatrix} \text{G} & 0 \\ 0 & -1 \end{bmatrix} \qquad {\small \text{where}} \qquad \text{G} \, = \, \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}$$

The metric of the Kaluza space is:

$$d\Sigma^2 = dt^2 - dx^2 - dy^2 - dz^2 - d\zeta^2$$

$$r = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \qquad \xi = \begin{bmatrix} t \\ x \\ y \\ z \end{bmatrix} = \begin{bmatrix} t \\ r \end{bmatrix} \qquad \Omega = \begin{bmatrix} t \\ x \\ y \\ z \\ \zeta \end{bmatrix} = \begin{bmatrix} \xi \\ \zeta \end{bmatrix} = \begin{bmatrix} t \\ r \\ \zeta \end{bmatrix}$$

$$d\Sigma^2 \, = \, {}^\text{t}\kern{-2pt}d\Omega \; \Gamma \; d\Omega$$

If we look for the isometry group of this Kaluza space, we find a group with a matrix representation that looks very much like that of a Poincaré group, but with an extra dimension:

$$\begin{bmatrix} \Lambda & \Xi \\ 0 & 1 \end{bmatrix} \qquad {\small \text{with}} \qquad {}^\text{t}\kern{-2pt}\Lambda \, \Gamma \, \Lambda = \Gamma$$

The vector $$\Xi$$ here represents a spacetime translation within five dimensions:

$$\Xi = \begin{bmatrix} \Delta t \\ \Delta x \\ \Delta y \\ \Delta z \\ \Delta \zeta \end{bmatrix}$$

This group acts on the points of the 5D spacetime, the space of trajectories:

$$\begin{bmatrix} \Lambda & \Xi \\ 0 & 1 \end{bmatrix} \times \begin{bmatrix} \Omega \\ 1 \end{bmatrix} = \begin{bmatrix} \Lambda \Omega + \Xi \\ 1 \end{bmatrix}$$

Translations along the fifth dimension $$\zeta$$ form a subgroup of this group (a subgroup with 1 parameter), whose matrix representation is:

$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & \Delta \zeta \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} t \\ x \\ y \\ z \\ \zeta \\ 1 \end{bmatrix} = \begin{bmatrix} t \\ x \\ y \\ z \\ \zeta+\Delta\zeta \\ 1 \end{bmatrix}$$

And Noether's theorem says that a new scalar will be invariant under the action of this subgroup: this scalar is the electric charge $$e$$. The Kaluza group is built from a group $$\Lambda$$.
The Lorentz group is one of its subgroups, whose element is: $$g = \begin{pmatrix} \text{L} & 0 \\ 0 & 1 \end{pmatrix}$$ Here is another subgroup of the Kaluza group: $$\begin{bmatrix} \text{L} & 0 & 0\\ 0 & \lambda & 0 \\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} \xi \\ \zeta \\ 1 \end{bmatrix} = \begin{bmatrix} \text{L}\xi \\ \lambda\zeta \\ 1 \end{bmatrix} \qquad {\small \text{with}} \qquad \lambda = \pm 1$$ The element $$(\lambda=-1)$$ of this group inverts the fifth dimension. Taking the previous sketch of such compact dimension closed on itself: The winding direction of the particle's motion is reversed. Souriau shows that this implies the inversion of the electric charge, or $$\text{C}$$-symmetry. This illustrated geometric interpretation and the short explanation with groups correspond to a larger mathematical work published by Souriau only in French in the early 1960s, which left the worldwide scientific community largely unaware of it. Yet these papers are available online for free. Related papers making this work more explicit in English according to the framework of the Janus cosmological model are also available. It must be said here that the electric charge alone cannot be a complete geometrical definition of antimatter with charge conjugation, as particles own other quantum numbers, that can also be quantized with the same tools, but on which we won't expand further here. The purpose of this short presentation was to show the idea that antimatter depends of a type of motion in a higher dimensional space; and to give the interested reader the opportunity to consult the referenced literature. ### References 1 Sakharov, A. D. (January 1967): "Нарушение СР–инвариантности, С–асимметрия и барионная асимметрия Вселенной" Pi'sma ZhÉTF (in Russian). 5 (1): 32–35. Translated as: Sakharov, A. D. (January 1967): "Violation of CP invariance, C asymmetry, and baryon asymmetry of the universe" JETP Letters. 5 (1): 24–26. Republished as: Sakharov, A. D. (May 1991): "Violation of CP invariance, C asymmetry, and baryon asymmetry of the universe" Soviet Physics Uspekhi. 34 (5): 392–393. doi:10.1070/PU1991v034n05ABEH002497. 2 Sakharov, A. D. (September 1980): "Космологические модели Вселенной с поворотом стрелы времени" Pi'sma ZhÉTF (in Russian). 79 (3): 689–693. Translated as: Sakharov, A. D. (September 1980): "Cosmological models of the Universe with reversal of time's arrow" JETP Letters. 52 (3): 349–351. 3 Sakharov, A. D. (October 1982): "Многолистные модели Вселенной" Pi'sma ZhÉTF (in Russian). 82 (3): 1233–1240. Translated as: Sakharov, A. D. (October 1982): "Many-sheeted models of the Universe" JETP. 56 (4): 705–709. 4 Sakharov, A. D. (7 December 1982): Collected Scientific Works. Marcel Dekker / CRC Press. ISBN 978-0824717148. 5 Sakharov, A. D. (1984): Œuvres scientifiques. Economica Anthropos. ISBN 978-2715710900. 6 Petit, J.-P. (23 May 1977): "Univers jumeaux, énantiomorphes, à temps propres opposés" [Enantiomorphic twin universes with opposite proper times] Comptes rendus de l'Académie des Sciences (in French). Paris. 263: 1315–1318. 7 Petit, J.-P. (6 June 1977): "Univers en interaction avec leurs images dans le miroir du temps" [Universes interacting with their opposite time-arrow fold] Comptes rendus de l'Académie des Sciences (in French). Paris: French Academy of Sciences. 284: 1413–1416. 8 Souriau, J.-M. (1970): "Description mécaniste des particules élémentaires : Inversions d’espace et de temps" in Structure des Systèmes Dynamiques, Dunod (1970). 
Translated as: Souriau, J.-M. (1997): "A mechanistic description of elementary particles: Inversions of space and time" in Structure of Dynamical Systems, Progress in Mathematics, Birkhäuser, pp. 189–193 (1997). ISBN 978-1-4612-6692-1. doi:10.1007/978-1-4612-0281-3_14. 9 Weinberg, S. (1995): "Relativistic Quantum Mechanics", in The Quantum Theory of Fields, Cambridge University Press, pp. 75–76 and p. 104 (1995). ISBN: 978-0521670531. doi:10.1017/CBO9781139644167. 10 Debergh, N.; Petit, J.-P.; D'Agostini, G. (November 2018): "On evidence for negative energies and masses in the Dirac equation through a unitary time-reversal operator" Journal of Physics Communications. 2 (11): 115012. doi:10.1088/2399-6528/aaedcc. arXiv:1809.05046. 11 Souriau, J.-M. (1958): "Une axiomatique relativiste pour la microphysique" [A relativistic axiomatic for microphysics] Comptes rendus de l'Académie des Sciences (in French). Paris: French Academy of Sciences. 247: 1159–1562. 12 Souriau, J.-M. (1959): "Conséquences physiques d’une théorie unitaire" [Physical consequences of a unified theory] Comptes rendus de l'Académie des Sciences (in French). Paris: French Academy of Sciences. 248: 1478–1480. 13 Souriau, J.-M. (1959): "Relativité multidimensionnelle non stationnaire" [Non-stationary multidimensional relativity], International CNRS meetings: Relativistic theories of gravitation. pp. 293–297. 14 Souriau, J.-M. (1962): "Relativité à 5 dimensions" [5-dimensional Relativity], in Géométrie et Relativité [Geometry and Relativity]. Hermann Mesnil (in French). Republished by Jacques Gabay (2008). ISBN 978-2876473218. 15 Petit, J.-P.; Midy, P.; Landsheat, F. (June 2001): "What twin matter could be made of", in Twin matter against dark matter Marseille Cosmology Conference Where's the Matter? Tracing Dark and Bright Matter with the New Generation of Large Scale Surveys, Marseille, France. pp. 30–32. 16 Henry-Couannier, F.; d'Agostini, G.; Petit, J.-P. (February 2005): "I- Matter, antimatter and geometry. II- The twin universe model: a solution to the problem of negative energy particles. III- The twin universe model plus electric charges and matter-antimatter symmetry" arXiv:0712.0067. 17 Petit, J.-P.; D'Agostini, G. (2008) "Five Dimensional bigravity. New topological description of the Universe". arXiv:0805.1423. 18 Petit, J.-P. (January 2018): "A Symplectic Cosmological Model" Progress in Physics. 14 (1): 38–40.
# String interpolation in C#

This tutorial shows you how to use string interpolation to format and include expression results in a result string. The examples assume that you are familiar with basic C# concepts and .NET type formatting. If you are new to string interpolation or .NET type formatting, check out the interactive string interpolation tutorial first. For more information about formatting types in .NET, see the Formatting Types in .NET topic.

## Introduction

The string interpolation feature is built on top of the composite formatting feature and provides a more readable and convenient syntax to include formatted expression results in a result string. To identify a string literal as an interpolated string, prepend it with the $ symbol. You can embed any valid C# expression that returns a value in an interpolated string. In the following example, as soon as an expression is evaluated, its result is converted into a string and included in a result string:

```csharp
double a = 3;
double b = 4;
Console.WriteLine($"Area of the right triangle with legs of {a} and {b} is {0.5 * a * b}");
Console.WriteLine($"Length of the hypotenuse of the right triangle with legs of {a} and {b} is {CalculateHypotenuse(a, b)}");

double CalculateHypotenuse(double leg1, double leg2) => Math.Sqrt(leg1 * leg1 + leg2 * leg2);
// Expected output:
// Area of the right triangle with legs of 3 and 4 is 6
// Length of the hypotenuse of the right triangle with legs of 3 and 4 is 5
```

As the example shows, you include an expression in an interpolated string by enclosing it with braces:

`{<interpolationExpression>}`

Interpolated strings support all the capabilities of the string composite formatting feature. That makes them a more readable alternative to the use of the String.Format method.

## How to specify a format string for an interpolation expression

You specify a format string that is supported by the type of the expression result by following the interpolation expression with a colon (":") and the format string:

`{<interpolationExpression>:<formatString>}`

The following example shows how to specify standard and custom format strings for expressions that produce date and time or numeric results:

```csharp
var date = new DateTime(1731, 11, 25);
Console.WriteLine($"On {date:dddd, MMMM dd, yyyy} Leonhard Euler introduced the letter e to denote {Math.E:F5} in a letter to Christian Goldbach.");
// Expected output:
// On Sunday, November 25, 1731 Leonhard Euler introduced the letter e to denote 2.71828 in a letter to Christian Goldbach.
```

For more information, see the Format String Component section of the Composite Formatting topic. That section provides links to the topics that describe standard and custom format strings supported by .NET base types.
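As a small illustrative addition that is not part of the original walkthrough, a few other standard numeric format strings can be used in exactly the same way. The values below are arbitrary, and the exact output of the "C" and "P" specifiers depends on the current culture:

```csharp
double price = 1234.5678;
int answer = 42;
Console.WriteLine($"Currency:    {price:C2}");
Console.WriteLine($"Percent:     {0.1234:P1}");
Console.WriteLine($"Exponential: {price:E3}");
Console.WriteLine($"Hexadecimal: {answer:X4}");
// Possible output (en-US culture):
// Currency:    $1,234.57
// Percent:     12.3%
// Exponential: 1.235E+003
// Hexadecimal: 002A
```

The "C", "P", "E", and "X" specifiers are standard .NET numeric format strings; the Standard Numeric Format Strings topic lists them all.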
## How to control the field width and alignment of the formatted interpolation expression

You specify the minimum field width and the alignment of the formatted expression result by following the interpolation expression with a comma (",") and the constant expression:

`{<interpolationExpression>,<alignment>}`

If the alignment value is positive, the formatted expression result is right-aligned; if negative, it's left-aligned. If you need to specify both alignment and a format string, start with the alignment component:

`{<interpolationExpression>,<alignment>:<formatString>}`

The following example shows how to specify alignment and uses pipe characters ("|") to delimit text fields:

```csharp
const int NameAlignment = -9;
const int ValueAlignment = 7;

double a = 3;
double b = 4;
Console.WriteLine($"Three classical Pythagorean means of {a} and {b}:");
Console.WriteLine($"|{"Arithmetic",NameAlignment}|{0.5 * (a + b),ValueAlignment:F3}|");
Console.WriteLine($"|{"Geometric",NameAlignment}|{Math.Sqrt(a * b),ValueAlignment:F3}|");
Console.WriteLine($"|{"Harmonic",NameAlignment}|{2 / (1 / a + 1 / b),ValueAlignment:F3}|");
// Expected output:
// Three classical Pythagorean means of 3 and 4:
// |Arithmetic|  3.500|
// |Geometric |  3.464|
// |Harmonic  |  3.429|
```

As the example output shows, if the length of the formatted expression result exceeds the specified field width, the alignment value is ignored.

For more information, see the Alignment Component section of the Composite Formatting topic.

## How to use escape sequences in an interpolated string

Interpolated strings support all escape sequences that can be used in ordinary string literals. For more information, see String escape sequences. To interpret escape sequences literally, use a verbatim string literal. An interpolated verbatim string starts with the $ character followed by the @ character. You can use the $ and @ tokens in any order: both $@"..." and @$"..." are valid interpolated verbatim strings.

To include a brace, "{" or "}", in a result string, use two braces, "{{" or "}}". For more information, see the Escaping Braces section of the Composite Formatting topic. The following example shows how to include braces in a result string and construct a verbatim interpolated string:

```csharp
var xs = new int[] { 1, 2, 7, 9 };
var ys = new int[] { 7, 9, 12 };
Console.WriteLine($"Find the intersection of the {{{string.Join(", ", xs)}}} and {{{string.Join(", ", ys)}}} sets.");

var userName = "Jane";
var stringWithEscapes = $"C:\\Users\\{userName}\\Documents";
var verbatimInterpolated = $@"C:\Users\{userName}\Documents";
Console.WriteLine(stringWithEscapes);
Console.WriteLine(verbatimInterpolated);
// Expected output:
// Find the intersection of the {1, 2, 7, 9} and {7, 9, 12} sets.
// C:\Users\Jane\Documents
// C:\Users\Jane\Documents
```

## How to use a ternary conditional operator ?: in an interpolation expression

As the colon (":") has special meaning in an item with an interpolation expression, in order to use a conditional operator in an expression, enclose it in parentheses, as the following example shows:

```csharp
var rand = new Random();
for (int i = 0; i < 7; i++)
{
    Console.WriteLine($"Coin flip: {(rand.NextDouble() < 0.5 ? "heads" : "tails")}");
}
```

## How to create a culture-specific result string with string interpolation

By default, an interpolated string uses the current culture defined by the CultureInfo.CurrentCulture property for all formatting operations.
Use implicit conversion of an interpolated string to a System.FormattableString instance and call its ToString(IFormatProvider) method to create a culture-specific result string. The following example shows how to do that:

```csharp
var cultures = new System.Globalization.CultureInfo[]
{
    System.Globalization.CultureInfo.GetCultureInfo("en-US"),
    System.Globalization.CultureInfo.GetCultureInfo("en-GB"),
    System.Globalization.CultureInfo.GetCultureInfo("nl-NL"),
    System.Globalization.CultureInfo.InvariantCulture
};
var date = DateTime.Now;
var number = 31_415_926.536;
FormattableString message = $"{date,20}{number,20:N3}";
foreach (var culture in cultures)
{
    var cultureSpecificMessage = message.ToString(culture);
    Console.WriteLine($"{culture.Name,-10}{cultureSpecificMessage}");
}
// Expected output is like:
// en-US       5/17/18 3:44:55 PM      31,415,926.536
// en-GB      17/05/2018 15:44:55      31,415,926.536
// nl-NL        17-05-18 15:44:55      31.415.926,536
//            05/17/2018 15:44:55      31,415,926.536
```

As the example shows, you can use one FormattableString instance to generate multiple result strings for various cultures.

## How to create a result string using the invariant culture

Along with the FormattableString.ToString(IFormatProvider) method, you can use the static FormattableString.Invariant method to resolve an interpolated string to a result string for the InvariantCulture. The following example shows how to do that:

```csharp
string messageInInvariantCulture = FormattableString.Invariant($"Date and time in invariant culture: {DateTime.Now}");
Console.WriteLine(messageInInvariantCulture);
// Expected output is like:
// Date and time in invariant culture: 05/17/2018 15:46:24
```

## Conclusion

This tutorial describes common scenarios of string interpolation usage. For more information about string interpolation, see the String interpolation topic. For more information about formatting types in .NET, see the Formatting Types in .NET and Composite formatting topics.
# Genetic Drift

First published Thu Sep 15, 2016

In the 1950s, a lively debate broke out among biologists that continues to this day, over what might seem like the most unlikely of organisms: the land snail, Cepaea nemoralis. Yet, there are in fact some interesting aspects to C. nemoralis. This species of snail is polymorphic; the snail's shell varies in color (pink, brown, and yellow) as well as the number of visible bands (anywhere from 0–5). But the colors and bands are not equally distributed across populations. In some populations, pink predominates, whereas in others, yellow or brown, and similarly, some banding numbers are more prevalent in some populations than in others. Thus, not only are there variations within populations (it is rare to find a population that is all one color or where all the snails have the same number of bands), but there are variations between populations.

What is the explanation for this distribution of forms? Those whose knowledge of evolution familiarized them only with the theory of natural selection might assume, for example, that in the populations where yellow snails were the most prevalent, it was because they were fitter than the other colors—that there was some environmental factor that favored yellow over brown and pink. And that in the populations where brown snails were the most prevalent, there was some difference in the environment that led them to be favored over yellow and pink snails. But is there some other explanation? Perhaps the distributions are in some sense due to chance, perhaps even in a way that can be modeled mathematically. What would that mean, and how would you determine which explanation was correct? The attempt to develop "chancy" explanations that are alternatives (perhaps complementary alternatives) to those due to natural selection is what led biologists to develop models of genetic drift.

Genetic drift (variously called "random drift", "random genetic drift", or sometimes just "drift") has been a source of ongoing controversy within the philosophy of biology and evolutionary biology communities, to the extent that even the question of what drift is has become controversial. There seems to be agreement that drift is a chance (or probabilistic or statistical) element within population genetics (see entry on population genetics) and within evolutionary biology more generally, and that the term "random" isn't invoking indeterminism or any technical mathematical meaning, but that's about where agreement ends. Yet genetic drift models are a staple topic in population genetics textbooks and research, with genetic drift described as one of the main factors of evolution alongside selection, mutation, and migration. Some claim that genetic drift has played a major role in evolution (particularly molecular evolution), while others claim it to be minor. This article will examine these and other controversies.

In order to break through the logjam of competing definitions of drift, this entry begins with a brief history of the concept, before examining various philosophical claims about the proper characterization of drift and whether it can be distinguished from natural selection; the relation of drift to debates over statisticalism; whether drift can be detected empirically and if so, how; and the proper understanding of drift as a model and as a (purported) law.

## 1.
Origins of the Concept of Genetic Drift Although Charles Darwin invoked “chance” in various ways in the Origin of Species (Beatty 1984), he seems not to have included a concept of drift in his account. He does note in passing that [v]ariations neither useful nor injurious would not be affected by natural selection, and would be left either a fluctuating element, as perhaps we see in certain polymorphic species, or would ultimately become fixed, owing to the nature of the organism and the nature of the conditions. (Darwin 1872: 63; see similar claims on p. 120 and p. 176) As the reader will see, this is tantalizingly similar to contemporary conceptions of drift. But Darwin does not develop the idea further; in particular, he does not tell us why the distributions of such variations would be fluctuating over time or how it is that they would ultimately become fixed. The first serious (and mathematical) treatments of drift are usually traced to two of the founders of population genetics, Sewall Wright and R.A. Fisher, although neither claimed to have developed the ideas behind drift (Beatty 1992). Wright (1951) credits John Gulick (1873) with the genesis of the idea whereas Fisher (1922b) first discussed the idea as derived from the work of A.C. and A.L. Hagedoorn (1921), although Wright (1931a) cites the Hagedoorns too. It is unclear who first uses the term “drift” in this context; it appears as early as Wright (1929). So, let us briefly examine Gulick and the Hagedoorns in order to understand the origins of the term “drift”. Gulick (1873) points out that with natural selection, one can assume that where different forms are found, different external conditions will also be found (with the different forms having adapted over the course of generations to the different external conditions). However, there seem to be cases (e.g., among snails) where the external conditions are very similar, yet the organismic forms are very different. He notes that these species tend to occupy very small areas, even though there is reason to believe it is not because they lack the ability to migrate further. He then postulates a scenario: Suppose some members of a species migrate to a new area where they are free from competition and largely separated from the original population. New variations will arise in the new population, but unless they are “decidedly malformed”, they will persist. The new population will thus come to differentiate itself from the original population (e.g., with new shades of color or with variations of shape), perhaps rapidly if there is a “preexisting tendency to rapid variation”. Some points to note here that become relevant in later discussions of drift: 1) Drift is described in contrast to natural selection. 2) The variations increasing in the population are those that are neutral, or at least not severely deleterious. (Note that 1 and 2 are also present in the quote from Darwin above). 3) Drift is associated with small populations (although it is not fully clear why). 4) Drift is associated with the founding of a new population in a new area. 5) Changes in the population are the result of movements of organisms and their tendency to produce new variations, both of which are physical processes and not purely mathematical constructs (something that becomes an issue in later debates). 6) The changes described are of organisms in a population. 
Hagedoorn and Hagedoorn (1921) similarly point out that some traits of organisms are “trivial”, i.e., “cannot possibly be accounted for as useful”, such as “the shape and arrangement of small hairs on the seeds of some cereals” (p. 107). They likewise maintain that such traits, which can be stable (“pure”, i.e., fixed) within a species, cannot be the product of natural selection; instead, the Hagedoorns assert, they must be “due to some process which accompanies selection” (p. 108). The Hagedoorns then proceed to describe several ways in which variability in a population can be reduced: a new population is founded which lacks some of the variability of the original population; a population is split in half (with the variability in the daughter populations differing from each other and from the original); and “random sampling” where even though the size of the population remains relatively constant from year to year, only a small fraction successfully reproduce. On this last point, they state, The group of organisms chosen by fate to become the parents of the next generation is usually, but always occasionally, considerably smaller than the number of individuals of their species. (1921: 120) Thus, the Hagedoorns endorse points 1–5 above, while describing two additional processes besides #4 (the founding of a new population), namely the splitting of a population and the random sampling of parents. They further explain the relevance of #3 (small populations): “the smaller the group, the more limited its potential variability, the sooner it will be pure altogether” (p. 123). And finally, they maintain that drift can produce fixation (“purity”), or the complete loss of variation within a population, even in the absence of selection. Fisher (1922b) reads the Hagedoorns as claiming that “random survival is a more important factor in limiting the variability of species than preferential survival” (p. 321), a claim that he challenged by attempting to show that such a process would be too slow to overcome the rate of mutation (and thus the introduction of new variability—but he seems to say otherwise in 1922a). An essay published by Wright in 1931 provides what is perhaps one of the earliest explicit characterizations of drift: It has seemed to me that another factor should be much more important in keeping the system of gene frequencies from settling into equilibrium. This is the effect of random sampling in a breeding population of limited size. The gene frequencies of one generation may be expected to differ a little from those of the preceding merely by chance. In the course of generations this may bring about important changes, although the farther the drift from the theoretical equilibrium, the greater will be the pressure toward return. (Wright 1931b: 205; emphasis added) The paper from which this quote is taken was meant to be a summary of a longer paper, also published in 1931 (Wright 1986: 88). In the longer paper (1931a), Wright specifies that the random sampling is of gametes. (Gametes are cells that fuse together during fertilization, such as an egg and a sperm). So, even though Wright (1931a) notes that the Hagedoorns had “urged the importance of such random fixation as a factor in evolution”, and states that Fisher (1922b) had analyzed the issue, has he changed the subject to be random sampling of gametes rather than of “parents” (i.e., organisms)? 
In short, no: Wright (1932 and elsewhere) makes it clear that he considers drift to encompass both random sampling of gametes and random sampling of organisms. In other words, he has expanded the phenomena that the concept of drift is meant to cover from that discussed by Gulick, the Hagedoorns, and Fisher. But Wright’s 1932 paper also emphasized what would become a persistent confusion between drift and inbreeding; both inbreeding and drift are more significant in small populations, so it can become easy to conflate them. But you can have random sampling of parents (say, through a population split) without inbreeding, and inbreeding without random sampling of parents. That alone shows that drift and inbreeding are not the same. So, not all of the expansions of drift were productive ones. It should be noted that while Wright and Fisher had numerous back-and-forth discussions and disagreements about each other’s claims concerning the role of drift in evolution (Provine 1986, Skipper 2002), they did not seem to disagree about what drift was. Wright (1948) considered the following to be an “acceptable statement” of his view from Fisher and E.B. Ford: Great evolutionary importance has been attached by Sewall Wright (1931, 1932, 1935, 1940) to the fact that small shifts in the gene-ratios of all segregating factors will occur from generation to generation owing to the errors of random sampling in the process by which the gametes available in any one generation are chosen to constitute the next. Such chance deviations will, of course, be greater the smaller the isolated populations concerned. (Fisher and Ford 1947) On the other hand, Wright’s later incorporation of fluctuations in mutation rate, fluctuations in migration, and fluctuations in selection (see, e.g., Wright 1949) as types of drift was challenged by Cain and Currey, who asserted that “the worker on actual examples must classify processes according to their biological significance” and that such lumping together would produce confusion and prevent proper analysis of actual situations (Cain & Currey 1963: 59). They thus urged the use of the term “sampling drift”, which Wright adopted in the fourth volume of his 1978 magnum opus, Evolution and the Genetics of Populations. In short, drift’s founders exhibit a diversity of views about drift, which John Beatty helpfully describes as follows: drift is a heterogeneous category of evolutionary causes and effects, whose overall significance relative to other modes of evolution (especially evolution by natural selection) has been greatly disputed. (Beatty 1992: 273) Potential causes invoked in the discussion above include sampling of gametes, sampling of parents, founding of new populations, splitting of populations, each of which is intensified when populations are small, while potential effects mentioned include fluctuations of gene frequencies from one generation to the next, loss of variants from a population, and fixation of a (possibly non-adapted) type in a population. Are these causes and effects all drift? With that sort of confusing heterogeneity, there is little surprise that the concept has drawn philosophical attention. But at least we have our starting point for philosophical discussion (see Beatty 1992 and Plutynski 2007 for additional historical overview). ## 2. What Is Drift, and Can It Be Distinguished from Natural Selection? 
Philosophers have taken a variety of approaches to characterizing drift and distinguishing it from natural selection, including a causal process approach that derives from the history just presented, approaches that are mathematically derived, and other sorts of approaches. These are discussed in turn. ### 2.1 A Historically-Derived Account of Drift: The Causal Process Account Reflecting on the historical uses of the term “drift”, Beatty states that what most of the phenomena so designated [as drift] have in common is one or another biological form of random or indiscriminate sampling, and consequent sampling error. (Beatty 1992: 273; see also Plutynski et al. 2016 on the modern synthesis authors’ agreement on this point) Let’s begin with indiscriminate sampling. Beatty states that parent sampling is the process of determining which organisms of one generation will be parents of the next, and how many offspring each parent will have (1984: 188; italics in original) Beatty maintains that this parent sampling can be discriminate, that is, with regard to physical differences, or indiscriminate, that is, without regard to physical differences (1984: 189). Discriminate parent sampling is generally considered natural selection; indiscriminate parent sampling is random drift. Beatty characterizes gamete sampling similarly, as the process of determining which of the two genetically different types of gametes produced by a heterozygotic parent is actually contributed to each of its offspring (1984: 189; italics in original) He continues: This sort of sampling might be indiscriminate in the sense that any physical difference between the two types of gametes produced by a heterozygote might be irrelevant to whether one or the other is actually contributed to any particular offspring. (1984: 189) And again, the indiscriminate form of sampling is drift while discriminate gamete sampling would be selection. Several illustrations of indiscriminate sampling are common in the literature, but some are more helpful than others. One is a hypothetical scenario in which two genetically and phenotypically identical twins are walking together; one is struck by lightning whereas the other lives to reproduce. (The example seems to have its origins in Scriven 1959 and Mills and Beatty 1979, although these authors were making a point about fitness, not about drift). This is an unfortunate illustration in part because it is too easy to get caught up in the question of whether the twins are really genetically and physically identical, but, more importantly, it is misleading because in fact, drift requires heritable variation, just as selection does. The lightning example is also problematic because it makes drift seem exceptional and catastrophic, whereas it is generally considered to be pervasive (i.e., occurring in all populations) and not necessarily due to catastrophic or unusual events. Others (e.g., Matthen and Ariew 2002; Walsh, Lewens, and Ariew 2002) use a series of coin tosses as an illustration of drift, but this too is problematic, because it encourages binary thinking, instead of allowing for multiple variants with multiple outcomes, and because it is unclear what the “population” of coin flips amounts to. A better illustration of drift has its origins in Theodosius Dobzhansky’s (1937) discussion of Dubinin and Romaschoff’s (1932) model, which asks us to imagine an urn filled with different colored balls. 
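The urn illustration introduced here, and spelled out in the next paragraph, is easy to make concrete. The following sketch is purely illustrative and is not drawn from Dobzhansky or from Dubinin and Romaschoff; the color proportions and sample sizes are invented. It simply draws balls without any regard to their color and reports the resulting frequencies:

```python
import random

# An urn with an arbitrary mix of shell-color "balls" (invented proportions,
# not data about C. nemoralis).
URN = ["yellow"] * 50 + ["pink"] * 30 + ["brown"] * 20

def indiscriminate_sample(n, rng):
    """Draw n balls blind to color and return the sampled color frequencies."""
    draws = rng.choices(URN, k=n)  # sampling with replacement, without regard to color
    return {color: draws.count(color) / n for color in ("yellow", "pink", "brown")}

rng = random.Random(0)
print("urn frequencies: yellow 0.50, pink 0.30, brown 0.20")
print("large sample (n=1000):", indiscriminate_sample(1000, rng))
print("small sample (n=10):  ", indiscriminate_sample(10, rng))
```

With the large sample, the drawn frequencies track the urn’s closely; with the small sample, they frequently do not.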
If the balls are drawn from the urn without respect to color, e.g., by a person drawing balls while blindfolded, then the balls are being indiscriminately sampled (unlike discriminate sampling, where someone deliberately tries to pick balls of a certain color). If a large sample of balls is taken, we expect the frequencies of colored balls in the sample drawn from the urn to be very close to the frequencies in the urn. If only a small sample of colored balls is drawn from the urn, then our sample may very well have different proportions of colored balls than the urn does. Multiple samplings taken over time, which would correspond to multiple generations, would tend to exhibit a pattern of fluctuating frequencies (recall the quote from Darwin above). The illustration thus models the population, its variants with their physical differences, and gives a clear understanding of the possible outcomes. It is also easily extrapolatable to, e.g., colorblind predators (Hodge 1987) and other indiscriminate sampling agents. It does, however, have some limitations; for example, it lacks an analogue for reproduction, since the balls do not produce offspring, multiple or otherwise. Although the characterization of drift and selection in terms of indiscriminate and discriminate sampling seems straightforward, with Beatty’s explication of indiscriminate sampling an important clarification of what the Hagedoorns, Fisher, and Wright seem to have meant by “random” sampling, Beatty (1984) raises a problem for the conceptual distinction between natural selection and random drift. The problem is as follows: For every population of organisms in a given environment, with a certain distribution of types and associated fitnesses, there is a range of possible outcomes of natural selection, with some more likely than others. It is, of course, more likely that the fitter organisms will have greater reproductive success in the next generation than the less fit, but it is also possible that they will not. (Darwin repeatedly emphasized this chance element of natural selection). So, what do we say about the outcomes where the less fit outreproduce the more fit? Beatty states: To the extent that those outcomes are less representative of the physical abilities of those organisms to survive and reproduce in the environment in question, any evolutionary change that results will be more a matter of random drift, concluding that it is conceptually difficult to distinguish natural selection from random drift, especially where improbable results of natural selection are concerned (Beatty 1984: 196; emphasis in original) As will be discussed further below, much of the twentieth century was marked by debates among biologists about the relative importance of drift and selection in evolution. Were those debates at least in part the result of conceptual unclarity? Millstein (2002) argues that we need not accept this inadvertent consequence of Beatty’s argument, and that selection can, in fact, be distinguished from drift. In order to do this, three extensions should be made to Beatty’s account. First, similar to Hodge (1987), Millstein suggests that a proper distinction between drift and selection relies on causation, specifically, that drift processes are indiscriminate sampling processes in which any heritable physical differences between entities (organisms, gametes, etc.) 
are causally irrelevant to differences in reproductive success, whereas natural selection processes are discriminate sampling processes in which any heritable physical differences between entities (organisms, gametes, etc.) are causally relevant to differences in reproductive success. These more precise characterizations of “indiscriminate sampling” and “discriminate sampling” are intended to replace the metaphorical “sampling” talk, retaining the term “sampling” as a useful shorthand only. Second, we should be careful to distinguish the process of drift from the outcomes that drift produces, and the process of selection from the outcomes that selection produces. (Of course, the importance of distinguishing process from outcome is not a novel insight; what is novel here is its application to the problem of distinguishing drift from selection. The distinction has sometimes been rendered as “process vs. product” rather than “process vs. outcome” in the philosophical literature, but given the teleological and other misleading connotations of “product”, the term “outcome” is preferable and “product” should be avoided). Third, we should characterize drift and selection as processes rather than outcomes (as in the first of the three points). If we do these three things, then drift and selection are conceptually distinct and the problem Beatty raises is dissolved; discriminate sampling processes where unlikely outcomes obtain are still selection processes. On this view, it is further acknowledged that it is possible for drift and selection to produce the same outcomes, which helps explain the persistence of biologists’ debates over the relative importance of drift and selection without making them seem trivial (see Millstein 2002 for additional discussion of Beatty’s arguments). And what are these drift processes? They are the same physical indiscriminate sampling processes that Gulick, the Hagedoorns, Wright, and Fisher (and later, Kimura, who will be discussed further below) sought to characterize: the sampling of gametes in the formation of zygotes, the sampling of parents, the founding of new populations, and the splitting of populations. (Note that this is not intended to be an exhaustive list). The outcomes are likewise those mentioned by drift’s founders: fluctuations of gene frequencies from one generation to the next, loss of variants from a population, and fixation of a type in a population. (Again, this is not an exhaustive list). Each of these outcomes is affected by population size, as any indiscriminate sampling process is; smaller populations undergoing drift tend to experience greater fluctuations in gene frequencies, a faster loss of variants from the population, and faster fixation of types in a population. Thus, the Beatty/Hodge/Millstein account of drift—the Causal Process Account of Drift—is one that is grounded in the historical development of the term and in biological practice, with the philosopher’s role being one of clarification and elaboration. Christopher Stephens (2004), Robert Northcott (2010), and Chris Haufe (2013) also seem to endorse the bare bones view of drift as a sampling process, if not the Causal Process Account of Drift in all of its details. ### 2.2 Reactions to and Varieties of Sampling-Based Accounts of Drift However, the Causal Process Account of Drift has not gone unchallenged. 
Robert Brandon (2005) argues that it “does not map well onto the ways biologists differentiate drift from selection” (2005: 156), that selection and drift are the same process (i.e., sampling), and that the large majority of biological cases are not cases of indiscriminate sampling. He maintains that “Drift is any deviation from the expected levels of reproduction due to sampling error” whereas “Selection is differential reproduction that is due to (and in accord with) expected differences in reproductive success” (2005: 168–9). These definitions include both process and outcome. Millstein (2005) responds to each of these challenges and defends her view over his.

For the purposes of this essay, it is important to note, as Millstein (2005) acknowledges, that Brandon is certainly correct in his descriptive claim that many biologists incorporate both process and outcome in their definitions of drift (see Millstein, Skipper, and Dietrich (2009) for examples). Indeed, the plurality of definitions of drift offered by contemporary biologists—some process-oriented, some outcome-oriented, some both, and some alternating within the same work—gives rise to the need for philosophical analysis, even if the result, in the end, is to accept that pluralism. The Causal Process Account of Drift is making a prescriptive claim on the grounds of 1) increased clarity, 2) the ability to conceptually differentiate biologically very different phenomena, such as selection in a fluctuating environment from fluctuating gene frequencies due to indiscriminate sampling, which have the same outcomes, while 3) maintaining a grounding in biological practice and (some) biological usage.

Jessica Pfeifer (2005) weighs in on the disagreement between Millstein and Brandon, arguing that it is reasonable to think that the source of probabilities in natural selection is at least partly a result of abstracting from or ignoring certain features of the environment and that, if this view is adopted, it is not conceptually confused to treat selection and drift as causally distinct. On Pfeifer’s view, drift is caused by the distribution of ignored factors, whereas selection for the trait in question is caused by those features that are not ignored.

Peter Gildenhuys (2009) argues that the term “drift” is used to refer to causal influences over a population that have three features: they are non-interactive, non-pervasive, and indiscriminate (NINPICs). Thus, he endorses drift as indiscriminate sampling; the other modifications he makes to the view seem to stem from thinking that the Causal Process Account precludes drift and selection from co-occurring and from thinking the view needs to account for location (e.g., an organism being in the wrong place at the wrong time) as an irrelevant causal factor. In any case, he acknowledges that his account and the Causal Process Account probably agree in practice over what sorts of things should be characterized as constituting drift.

Larry Shapiro and Elliott Sober (2007) also endorse the view that selection and drift are distinct processes, but Sober, at least, has backed off this view in a recent paper co-authored with Hayley Clatterbuck and Richard Lewontin (Clatterbuck, Sober, and Lewontin 2013). Like Gildenhuys, they seem to suggest that if selection and drift are distinct processes, they cannot co-occur, and they seem to think that the mere introduction of finite population size introduces drift.
To be clear, however, there can be indiscriminate sampling processes and discriminate sampling processes occurring in the same population, even with respect to the same trait. For example, in a study of over 900 populations, biologist Maxime Lamotte acknowledged that camouflage gave appropriately colored Cepaea nemoralis land snails a selective advantage in their respective environment while simultaneously maintaining that foundings of new populations are “of considerable importance because of the chance variations in the composition of the first colonizers” (Lamotte 1959: 80); the variations he refers to are variations in the colors of the snail colonizers (Millstein 2009). Moreover, since the Causal Process Account requires that variations be heritable, the non-heritable locations of organisms are simply irrelevant for the purposes of deciding on drift vs. selection; for example, the founders of a new population may all hail from the geographic edge of the original population, but they can still be an indiscriminate sample of the whole. Finally, historically, at least (as discussed above), small population size has always been associated with drift, but it was never the main phenomenon to be represented. Thus, the variant sampling accounts of drift should be evaluated in light of these considerations. ### 2.3 Mathematical Approaches to Drift Walsh, Lewens, and Ariew (2002) provide a good entrée into understanding a mathematical approach to drift. They begin by acknowledging the historical uses of the term drift, identifying four: 1) a “Sewall Wright Effect”, 2) a “Hagedoorn Effect”, 3) “Indiscriminate Sampling, and 4) “The Finiteness of Natural Populations” (with some of their characterizations not fully accurate; e.g., with the first, they conflate Sewall Wright’s Shifting Balance Model where drift plays a role with drift itself). They suggest that these phenomena are “disparate” although they acknowledge, citing Beatty (1984), that the first three can be understood in terms of sampling. But the fourth form, they assert, cannot. They describe the fourth phenomenon as follows: The Hardy-Weinberg Law says that in infinite populations (of diploid organisms) there is no change in gene frequencies when there is no variation in gene fitnesses. But natural populations are finite in size; often they are small. In finite populations there will always be some non-negligible chance that trait frequencies will diverge from expectation. (Walsh, Lewens, and Ariew 2002: 456) With respect to this understanding of drift as the Hardy-Weinberg Law conjoined with a finite population, Walsh, Lewens, and Ariew maintain that “there is no larger population literally being sampled” (2002: 459). Instead: …in these cases what happens is that the distribution of fitnesses in the population yields a prediction concerning the way in which a population will change. Drift is manifested as a difference from the outcome predicted by the fitnesses in the population. The law of large numbers tells us that the likelihood of significant divergence from these predictions is an inverse function of the size of the population. The small size of a population increases the chances of error. (Walsh, Lewens, and Ariew 2002: 459) Walsh, Lewens, and Ariew then state that this is the common feature that the four types of drift they outline all share; thus drift, on their view, is when a series of births, survivals, deaths, and reproductions diverges from the outcome predicted by differences in fitness. 
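This picture, on which drift is the divergence of a population’s realized trait frequencies from what the fitness differences alone would predict, can be given a toy illustration. The following sketch is a hypothetical two-allele model, not anything offered by Walsh, Lewens, and Ariew; the allele labels, fitness values, population sizes, and replicate counts are invented. It compares the frequency predicted by the fitnesses with the frequency realized after binomial sampling in a finite population:

```python
import random

def one_generation(p, n_individuals, w_a=1.05, w_b=1.00, rng=random):
    """Return (expected, realized) frequency of allele A after one generation.

    'expected' follows from the fitness values alone; 'realized' adds binomial
    sampling of 2 * n_individuals gene copies, i.e., a finite population.
    """
    expected = p * w_a / (p * w_a + (1 - p) * w_b)
    copies = 2 * n_individuals
    realized = sum(rng.random() < expected for _ in range(copies)) / copies
    return expected, realized

rng = random.Random(1)
for n in (50, 5_000):
    gaps = [abs(e - r) for e, r in (one_generation(0.5, n, rng=rng) for _ in range(200))]
    print(f"N = {n:>5}: mean |realized - expected| = {sum(gaps) / len(gaps):.4f}")
```

The average gap shrinks as the population grows (roughly in proportion to $$1/\sqrt{2N}$$), which is the law-of-large-numbers point just quoted.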
Note that, using the terminology introduced in the previous section, this is a purely outcome-oriented (“outcome-only”) definition of drift. Whatever type of process (if any) might have produced the outcome plays no role in the definition. Matthen (2010) defends a similar characterization of drift: “Departures from expected values are what population geneticists call ‘drift’” (Matthen 2010: 3). However, notice that the fourth drift “phenomenon” that Walsh, Lewens, and Ariew identify isn’t really a phenomenon. Rather, it refers to a model from population genetics, the Hardy-Weinberg model, and predictions based on that model. It is true that as a consequence of the model, there would be no deviation from fitness expectations with an idealized infinite population (if we make some philosophical assumptions about what happens in infinity; Sober (1984) raises questions about these assumptions), but that there would likely be such deviations with a finite population. But this is a purely mathematical approach. The purely mathematical approach doesn’t ask what phenomena the model is supposed to be modeling and why it is that finite populations are of interest. There is also no explanation of why a Hardy-Weinberg model with an assumption of finite population size should count as drift; perhaps this is because of a philosophical tradition of referring to this as drift (e.g., some parts of Sober 1984 read that way), or perhaps it is because of the previously-noted longstanding association between drift and small population sizes. In contrast, on the Causal Process Account of Drift, drift would occur even in infinite populations as long as there is indiscriminate sampling; drift-like outcomes (deviation from fitness expectations) are not required. Millstein, Skipper, and Dietrich (2009) challenge the outcome-only approach to defining drift and the purely mathematical approach that it is based on. Thus, there seem to be some meta-disagreements between the process-only and outcome-only approaches to drift; should our definitions of drift look to the history of biology or should they look to mathematics? Should we glean our understandings of drift from the phenomena that biologists have sought to understand (the phenomena that they developed models for) or from the models alone? If from the models alone, are we inadvertently drawing on the measure or operationalization of drift rather than drift itself (Plutynski 2007)? Differing answers to these questions seem to be at the heart of the differing definitions of drift. If one doesn’t interpret the model purely mathematically, but instead considers what phenomenon the Hardy-Weinberg Principle with-finite-populations is seeking to model, then there is indeed a larger population being sampled. The equation models the changes from one generation to the next; the original generation is the population that is being sampled, and the next generation is the “sample”. That sample may or may not be representative of the original population, e.g., a heterozygote Aa of the original generation may or may not have offspring in the proportions 25% AA, 50% Aa, and 25% aa. The mathematical-oriented outcome-only definition of drift is often associated with the so-called “statisticalist” approach to evolutionary theory, but since it is simply a definition of drift and not a metaphysical thesis it need not be. 
For example, a non-statisticalist could consistently hold that natural selection is causal (and thus, evolution would not be purely statistical, as it is on the statisticalist interpretation), but that drift occurs when and only when there is a deviation of the actual outcome from fitness expectation. Indeed, Frédéric Bouchard and Alexander Rosenberg (2004) seem to hold such a view. The intersection between issues of drift and the debates over statisticalism will be discussed further below.

### 2.4 Other Accounts of Drift

Other accounts of drift are not so easily classified. For example, Timothy Shanahan (1992) argues that conceptually, random drift and natural selection are the ends of a continuum. However, to reach this conclusion, Shanahan must reject heritability as a necessary condition for natural selection. As evolutionary biologist John Endler has argued, this has the effect of trivializing natural selection to the claim that “there are differences among different phenotypes” (Endler 1986: 13).

Grant Ramsey (2013) develops a concept he calls “driftability”, which locates drift in individual organisms rather than in a population. Ramsey points out that the possible lives that an organism can lead form a large and heterogeneous set; thus, the actual life that any particular organism leads will probably not be a representative sample (which he seems to equate to an average) of the set. Differences within the set will lead to different evolutionary outcomes; this intra-organismic heterogeneity within the set of possible lives of an organism is driftability.

Peter Godfrey-Smith (2009) characterizes drift as changes where two parameters, the smoothness of fitness landscapes and dependence of reproductive character on fitness differences, are low. (However, when he needs to explicate why drift has special importance in small populations and why it can be mathematically described in particular ways, he appeals to indiscriminate sampling. Perhaps, then, indiscriminate sampling is at the core of Godfrey-Smith’s view of drift).

Again, these accounts, which seem to deviate significantly from biological usage and practice, raise meta-philosophical issues about how we ought to go about characterizing a scientific concept like “drift”. How far can the definition of a term deviate from scientific usage and practice and still be considered to be a definition of the same thing? One could also be a pluralist about drift, arguing that there is reason to accept more than one definition of drift, although it is unclear if anyone has actually endorsed this position. Marshall Abrams seems to come the closest, stating:

If random drift is anything, it is not one thing…The term applies to many effects on populations or organisms which are said to be due to “chance,” and to factors which are thought to help to produce such effects. (2007: 673)

## 3. Intersection of Genetic Drift with Statisticalist-Causalist Debates

Recent debates about random drift are often entangled with debates over the purported purely statistical (non-causal) nature of evolutionary biology, but the issues are separable. There are issues concerning random drift that do not involve questions of statisticalism (as this article seeks to document), and there are issues concerning statisticalism that do not involve random drift (e.g., much of the literature focuses on natural selection rather than random drift).
The statisticalist claim, generally traced to a pair of papers by Walsh, Lewens, and Ariew (2002) and Matthen and Ariew (2002), is essentially that evolution is a population-level phenomenon, and that although there are causes at the level of individual organisms (births, deaths, etc.), there are no causes at the population level, only a statistical summary of the individual events. Note that it is widely acknowledged that the models in evolutionary theory are statistical ones, so the distinctive statisticalist claim is that evolutionary biology is purely statistical. There are at least three alternatives to the statisticalist claim; one challenges the claim that evolutionary biology is a population-level phenomenon, arguing that it is constituted by causes at the level of individual organisms (e.g., Bouchard and Rosenberg 2004), a second defends the view that there are population-level causes (e.g., Millstein 2006; Shapiro and Sober 2007), while a third argues for causes at both levels (Pence forthcoming). It was already noted above (see section 2.3) that the outcome-only definition of drift is often adopted by statisticalists, but not exclusively so; it can also be endorsed by a causalist who, e.g., believes that natural selection is a causal process but that drift is simply deviation from selective expectations. Perhaps surprisingly, it would also be possible for someone to endorse a version of the Causal Process Account of Drift and yet still accept the basic statisticalist premise—if one thought that drift should be understood in terms of indiscriminate sampling, and also thought that indiscriminate sampling should be understood in terms of causes at the level of individual organisms, then the evolutionary changes wrought by drift would just be be the statistical summation of individual level causes. In short, it would be a mistake to infer one’s position on the statisticalist debate from one’s definition of drift, although such slippage is common and the issues are in truth often entangled. What, then, are the statisticalist issues that random drift is entangled with? The concerns raised by Walsh, Lewens, and Ariew (2002) and Matthen and Ariew (2002) have their origins in claims made by Sober (1984) in his classic The Nature of Selection. Sober characterizes evolutionary theory as a theory of forces, with its zero-force state described by the Hardy-Weinberg equation of population genetics (see the population genetics entry for an explanation of the equation); in such a state, there is no selection, no mutation, no migration, no meiotic drive, random mating, and infinite population size. Thus, the Hardy-Weinberg equation is an idealized model that never obtains in the real world. It is a bit difficult to see where drift fits into the equation, which is no doubt the source of much of the confusion over how to define drift. In his 1984 book, Sober alternatively characterizes drift in terms of random sampling (the process, which occurs, Sober explains, during gamete formation and the founding of new populations) and sampling error (the outcome, i.e., the deviation from fitness expectations). From the point of view of process, at least some types of sampling could be understood to be part of the Mendelian process. The Mendelian process includes the “process wherein organism produce gametes and gametes produce organisms” (Sober 1984: 35), which Sober says is not treated as a force, but rather as the background against which evolutionary forces are described. 
But Sober does not take this route. Instead, Sober contrasts drift as sampling error (again, the outcome—presumably introduced when one relaxes Hardy-Weinberg assumptions to allow for finite populations) with selection, mutation, and migration; all are forces, Sober asserts, but drift is a different sort of force. It is not “deterministic” and it does not have a definite direction (although it does have a magnitude, determined by the population size). That is, given trait frequencies and fitness values, Sober suggests, selection predicts a specific outcome for the next generation in a specific direction (and is in this sense “deterministic”), whereas drift could yield an increase any of the types present in the population (and is in this sense directionless). Moreover, Sober states, you cannot say how much drift has contributed a change relative to the change introduced by selection; to do so would be as impossible as trying to say, when flipping a fair coin ten times and obtaining six heads, how much of the result was due to the fairness of the coin and how much was due to the fact that it was tossed ten times. He thus concludes that “if drift is an evolutionary force, it is a force of a different color” (1984: 117); he calls it a force, he says, mainly to indicate its causal role. These metaphysical claims about drift (and selection and other evolutionary processes—but it is just drift that interests us here) set the stage for the statisticalists’ challenge. Matthen and Ariew (2002) challenge the claim that there is a defensible sense in which drift is a force. Aside from the fact that it does not have a predictable or constant direction (as Sober readily acknowledges), they point to a case of two similar populations subject to the same selective pressures, one in which the trait T becomes fixed and the other in which the alternative trait T′ becomes fixed. What explains the outcome in the two cases? Exactly the same thing, Matthen and Ariew assert, just as exactly the same coin setup explains two heads and four heads in two series of ten coin tosses. Thus, they seem to suggest, there is no additional cause or force, “drift”. Walsh, Lewens, and Ariew (2002) similarly attack Sober’s claim that drift is a force; following Rosenberg (1994), they assert that “the events that are labelled ‘drift’ events, like lightning strikes, etc. are no different in kind from selection events” (Walsh, Lewens, and Ariew 2002: 457)—and thus, drift is not a distinct sort of force. Moreover, they argue, drift is best interpreted statistically as statistical error (see argument in section 2.3), and so again, it is not a force. To be clear, there is a historical component to the statisticalists’ claims; on their view, Darwin’s evolutionary theory was causal, but with the introduction of population genetics, evolution became purely statistical. (It is sometimes unclear whether the statisticalists are making claims about models/theories or whether they are making ontological claims, or whether they think that the latter can be inferred from the former). Hodge (2016; see also Plutynski et al. 2016) challenges this historical claim by arguing that in fact Darwin—although he liked to think that his epistemological and nomological ideals were in descent from Newton’s—never understood natural selection to be a lawful force along with gravitation or inertia. 
Hodge further argues that biologists today compare and contrast natural selection with artificial selection and with drift in ways directly descending from Darwin’s view of natural selection as a causal process. A number of other philosophers have sought to challenge the statisticalists’ claim that drift is not a force. For example, Stephens (2004) defends the claim that drift is both a force and a cause. He asserts that if we understand drift properly as a process of indiscriminate sampling (instead of as an outcome, as the statisticalists do), we can see that it is a force, but that in a population of a given size it always has the same force, regardless of whether there is a large or a small deviation from expectation. (It may be that Sober’s inadvertent vagueness in his 1984 book on the process/outcome question contributed to the confusion, since the statisticalists were responding to Sober). He clarifies that, “because drift is a probabilistic cause, the same causal force can have two different outcomes” (Stephens 2004: 557; emphasis in original). Stephens further defends the claim that drift can be understood as a force or a cause by arguing (contra Matthen and Ariew) that drift does have a direction, namely, eliminating heterozygosity; that it can “make a difference” by affecting the probability of evolutionary change (more in small populations, less in large); and that we can cogently speak of the relative importance of drift in an ensemble of populations or in a population, albeit not at the level of individuals (see Walsh 2007 for a response to Stephens 2004, 2010 and Gildenhuys 2014 for a response to his response). Brandon (2006), however, argues that “eliminating heterozygosity” is not sufficient to show that drift is directional, given that (as Stephens would readily acknowledge) if there were two alleles at a locus, beginning at equal frequencies, we could not predict which of the two alleles would go to fixation, only that one of them would; Brandon likens this to saying that “a 20-Newton force is acting on object A”, (2006: 325), which, he seems to imply, is not a directional claim. Moreover, he argues, drift is not a separate process from selection. Thus, drift, on Brandon’s view, is not a force (but it is a law; more on that below). Joshua Filler (2009) responds to Brandon by defending a more elaborated account of what a force is (in part drawing on Bigelow et al. 1988); on that elaborated account, drift’s direction can be seen as less specific than selection, mutation, and migration, but it gives some directional information. Thus, Filler argues, it can be seen as less “forcelike” than selection, mutation, and migration, but it is still a force. Charles Pence (forthcoming), however, worries that Filler’s modifications to our understanding of “force” may be ad hoc, so he offers an alternative defense of the claim that drift is a force. First, Pence argues, we already countenance forces that have stochastically specified directions, such as Brownian motion, and drift is analogous. Second, we can create an evolutionary thought experiment where drift is absent, showing that it is not constitutive of all evolving systems; this is intended to respond to Brandon’s claim that drift is not separable. Importantly, however, Pence points out that there are essential questions that the entire debate has yet to address: “what exactly the use of ‘forces’ is to do for us, and when explanations utilizing a force metaphor are useful or perspicuous” (Pence forthcoming). 
On the other hand, Kenneth Reisman and Patrick Forber (2005) separate the question of whether drift is a force from the question of whether drift is a cause, arguing only for the latter while not taking a stand on the former. In a related paper, they rely on Woodward’s (2003) manipulability account of causation and a 1957 study of drift by Dobzhansky and Pavlovsky to argue that decreasing the number of founding members in replicate populations produces an increase in the variability of evolutionary outcomes across those replicate populations. (Forber and Reisman 2007: 617) In other words, they argue that the conditions of the study can be seen to satisfy the conditions of the manipulability account, showing that drift ought to be understood as a (population-level) cause. Shapiro and Sober (2007) state that they endorse Reisman and Forber’s arguments as well as the view of drift as a process. However, it’s unclear whether Shapiro and Sober (or Reisman and Forber) actually do endorse the view of drift as a process. That is, it’s unclear why population size as a causal factor should count as a process. This is because it’s a bit unclear if drift is actually what is being manipulated in Dobzhansky and Pavlovsky’s (1957) experiments, or if it is just population size—and population size does not seem to be a process. Perhaps Reisman and Forber have simply shown that population size is a causal factor of evolutionary change in populations undergoing drift. They also seem to be taking Woodward’s account very literally, in that they seem to think that the “variable” that has to be manipulated must be a variable in a mathematical model; otherwise, one could (at least in principle) manipulate the sampling process by manipulating the population rather than simply the population size; Clatterbuck’s (2015) account, discussed in section 5, gives an example of this. (Population size is a variable in mathematical models of drift, whereas sampling, as discussed above, is implicit). One could also conceivably manipulate the environment so that sampling was discriminate rather than indiscriminate or to change the nature of the indiscriminate sampling. Other challenges to the statisticalists’ claims about drift as a cause neither endorse the view of drift as a force nor drift as a distinct process. For example, Brandon and Ramsey (2007) see drift and selection as “copossible” outcomes of the same process. But because there is a causal process—albeit only one rather than two—the statisticalists’ claims are not upheld. Abrams (2007) likewise seems loathe to adopt “force” talk about drift, at least in a strong realist sense. He argues that “both selection and drift are aspects of a probability distribution over future frequencies of genotypes or phenotypes in a population” (2007: 667). Yet selection and drift can be distinguished: Selection is the aspect of the distribution controlled by differences in fitness, while drift is the aspect of such a distribution controlled by population size (apart any from effects of population size on fitness). (2007: 667) This makes it sound as though Abrams is adopting a view of drift akin to that of Reisman and Forber, with its focus on population size, and indeed Abrams does elaborate on the causal role of population size, but (as noted above) his views on drift are more accurately pluralist ones. 
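The manipulationist reading of the founder-size result mentioned above can also be put in schematic form. The sketch below is not a reconstruction of Dobzhansky and Pavlovsky’s protocol; the founder numbers, generation count, later population size, and the assumption of no selection are all invented for illustration. The only variable intervened on is the number of founders, and what is recorded is the spread of outcomes across replicate populations:

```python
import random
from statistics import pvariance

def found_and_drift(n_founders, generations=15, later_size=1000, p0=0.5, rng=random):
    """Found a population from n_founders individuals, then let it drift at a fixed size."""
    # Founding event: an indiscriminate sample of 2 * n_founders gene copies.
    p = sum(rng.random() < p0 for _ in range(2 * n_founders)) / (2 * n_founders)
    for _ in range(generations):  # subsequent generations: indiscriminate gamete sampling
        p = sum(rng.random() < p for _ in range(2 * later_size)) / (2 * later_size)
    return p

rng = random.Random(2)
for n_founders in (20, 4_000):
    finals = [found_and_drift(n_founders, rng=rng) for _ in range(100)]
    print(f"{n_founders:>5} founders: variance of final allele frequencies = {pvariance(finals):.5f}")
```

Shrinking the number of founders increases the variance of final frequencies across the replicates, which is the kind of pattern Reisman and Forber treat as evidence for a population-level cause.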
More generally, Pence (forthcoming) suggests one way of categorizing the different causalist approaches (although not all of these have explicitly addressed drift):

At the very least, we need to distinguish between (1) the force interpretation, as discussed here; (2) the causal process approach (elaborated most notably by Millstein 2002, 2006, 2013); (3) the causal mechanism approach, first deployed for natural selection by Barros (2008) and building on the work of Machamer et al. (2000); (4) the manipulationist approach, discussed by Reisman and Forber (2005), Forber and Reisman (2007), and Shapiro and Sober (2007), building on the work of Woodward and Hitchcock (2003); and (5) the counterfactual approach, deployed for natural selection by Glennan (2009) and Huneman (2012) and utilizing a notion of counterfactual causal dependence or “relevance”. (Pence forthcoming)

## 4. Detecting Drift Empirically

Throughout the 20th and 21st centuries, biologists have struggled to detect drift empirically. In particular, they have experienced challenges in differentiating cases of drift from cases of selection. The Causal Process Account of Drift in particular can help to make sense of why this is so, as will be discussed below. Drift and selection may be different sorts of causal processes (indiscriminate and discriminate sampling, respectively), but they can produce similar outcomes. Biologists have thus struggled to identify distinctive outcomes for drift and selection, only to find underdetermination re-emerge after biologists modified or added to their assumptions about the relevant processes (for a clear characterization of this, see Dietrich and Skipper 2007). Relatedly, biologists have also disagreed over the relative prevalence of drift and selection (Beatty 1995, 1997); as noted above, these disagreements were there almost from the outset with Fisher’s response to the Hagedoorns.

### 4.1 Classic Studies

The disagreements over the prevalence of drift and selection began almost immediately after Wright and Fisher incorporated drift into their evolutionary models (see Provine 1986 for an extended discussion of their disagreements, to which this discussion is very much indebted). Wright (1931a, 1932), drawing on his experience with animal breeding, developed the Shifting Balance Theory (SBT), which consisted of three phases; these have been understood either as empirical claims about actual conditions (Provine 1986) or descriptions of the ideal conditions for evolution (Skipper 2002). Skipper provides a clear description of the SBT as Wright would eventually formulate it (see Hodge 2011 for discussion of early formulations):

In the first phase, random genetic drift causes gene frequencies to change and pull subpopulations semi-isolated within the global population into adaptive valleys because random fluctuations in gene frequencies are almost always maladaptive. In phase two, mass selection will then act within subpopulations and increase their fitness, dragging them from adaptive valleys to adaptive peaks. In the third phase, selection between subpopulations, which Wright called interdemic selection, driven by differential dispersion (migration of organisms from more fit subpopulations to less fit subpopulations) would then enable the global population to be raised to its optimal peak. (Skipper 2002: 345)

Drift, then, on Wright’s view, plays an essential role in the evolutionary process.
Fisher, by contrast, thought that mass selection on large populations was the predominant and most effective mode of evolution, leaving very little role for drift given the large population sizes. (This disagreement about the relative role of drift in evolution is one of several things that Wright and Fisher disagreed about; e.g., as is clear from the above, they also disagreed about population structure and effective population size. See Skipper 2002, 2009 and Plutynski 2005 for analyses of contemporary biologists who continue to argue the Wright-Fisher debate). These theoretical considerations were soon followed by field examinations of drift (there were also laboratory studies, but these were less contentious). Two sets of studies of natural populations are particularly notable. One set, referred to in the introduction to this essay, is composed of the studies of the polymorphic land snail, Cepaea nemoralis; these studies, and debates over the prevalence of drift, began in the 1930s and became quite heated in “The Great Snail Debate” of the 1950s and 1960s (Millstein 2008, 2009 gives more extensive discussion, from which the following is drawn; the moniker “The Great Snail Debate” is due to Provine 1986). Early researchers, most famously Arthur J. Cain and Philip M. Sheppard (1950, 1954), sought to demonstrate the adaptedness of the color and banding morphs of the snails as well as the sizes of the populations in which they lived. But these were challenging to determine; adaptedness was primarily studied indirectly, by seeking correlations between variants and their backgrounds (presuming camouflage and selection by predator), and population sizes varied considerably. And any correlations found were statistical ones, with lots of “noise”. This left the door open for drift; Maxime Lamotte (1959) argued that by examining the populations as a whole, one found greater variation among the small populations than among the large. This was a distinctive signature for drift, but such outcomes are hard to find and require a very special set of circumstances to obtain (large numbers of populations of varying sizes that are easy to count). However, Lamotte’s study is not the only classic evolutionary study to exploit this unique drift outcome; Cavalli-Sforza’s studies of blood groups in humans did so as well (Richardson 2006). Notably, this was not drift as an alternative to selection (Cain and Sheppard had suggested that selection precluded much of a role for drift), but rather, an argument for a substantial role for drift in addition to a substantial role for selection. Another important set of studies is of the polymorphic Scarlet Tiger Moth, Panaxia dominula (see Provine 1986 for extended discussion, to which the following is indebted). Fisher and Ford deliberately chose P. dominula because of its small population sizes, figuring that if they could make the case for selection against drift in its worst case, they could make a decisive case against Wright, who they saw as advocating a form of nonadaptive evolution. But rather than trying to make a case for selection, as Cain and Sheppard did, Fisher and Ford (1947) tried to make a case against drift by arguing that the fluctuations of gene frequencies across generations were too large to be accounted for by drift and that the size of the fluctuations did not differ between the small and the large populations, as you would expect if they were undergoing drift. Therefore, they concluded, the populations were undergoing selection. 
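The logic of Fisher and Ford’s argument trades on an expected signature of drift: under indiscriminate sampling alone, generation-to-generation fluctuations in gene frequency should be markedly larger in small populations than in large ones. A rough sketch of that expectation follows; the population sizes, initial frequency, time span, and replicate counts are invented, and this is not a model of the P. dominula data:

```python
import random

def yearly_frequencies(n_individuals, years=25, p0=0.3, rng=random):
    """Track an allele frequency over successive years under drift alone (no selection)."""
    copies = 2 * n_individuals
    series = [p0]
    for _ in range(years):
        p = sum(rng.random() < series[-1] for _ in range(copies)) / copies  # gamete sampling
        series.append(p)
    return series

rng = random.Random(3)
for n in (100, 5_000):
    changes = []
    for _ in range(20):  # average over replicate populations
        series = yearly_frequencies(n, rng=rng)
        changes += [abs(b - a) for a, b in zip(series, series[1:])]
    print(f"N = {n:>5}: mean year-to-year change under drift alone = {sum(changes) / len(changes):.4f}")
```

Fisher and Ford’s claim was that the observed fluctuations were both too large for the estimated population sizes and not size-dependent in this way, and so could not be attributed to drift.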
Wright’s (1948) reply challenged both the logic of this conclusion (disproving drift does not prove selection) as well as their characterization of his views, but more significantly, he pointed out that Fisher and Ford had not included data on population sizes for the years in which the gene frequency fluctuations were statistically significant, so, in fact, it was possible that the populations had been undergone constrictions in those years, in which case drift would have been able to produce the observed fluctuations. He also pointed out that fluctuations in selection or migration (which he very problematically suggested could be part of an expanded understanding of drift) could have produced the same outcomes. In short, the P. dominula studies turned on a slightly different but related proposed unique outcome for drift, that of fluctuations over time rather than variation at a time as for C. nemoralis, in both cases comparing small populations to large (as well as considering the capability of populations to produce large fluctuations), but with P. dominula, drawing definitive conclusions ran into difficulty from lack of data concerning the key variable of population size. There is also the recognition that fluctuating selection can produce the same outcome of fluctuations of gene frequencies across generations as drift can, making it difficult to resolve the empirical case. Finally, note that biologists have continued to study both C. nemoralis and P. dominula, with the result that additional processes have been identified that further complicate the empirical analyses (Millstein 2008; Skipper 2009). ### 4.2 Studies of Drift and Molecular Evolution As Dietrich (1994) has documented, early work in molecular evolution was focused almost exclusively on selection, something that changed with the advent of the neutral theory of molecular evolution and the work of Motoo Kimura, Jack King, and Thomas Jukes. Dietrich writes: The basis of its challenge was Kimura’s proposal that most changes detected at the molecular level were not acted upon by natural selection; they were neutral, and the mechanism of their change was random genetic drift. (1994: 22) To be clear, however, both selection and drift play an essential role in the neutral theory; on Kimura’s view, selection will quickly eliminate the large number of deleterious mutants and fix the small number of advantageous ones, leaving the remaining mutant alleles as neutral, whereupon they undergo a process of drift (Dietrich 2006). Drift would eventually cause the neutral (and nearly neutral) mutants to either go to fixation or be lost (although they would be polymorphic in the meantime), so that observed molecular differences would be the outcomes of a random process of mutation, which Kimura understood as produced largely from DNA replication error, processes of directional selection, and processes of random drift produced by gamete sampling. (Dietrich 2006: 670) Many of the debates about the neutral theory contrasted this approach (the “neutralist” approach) to evolution with a more selectionist (or sometimes, “panselectionist”) approach, causing many to refer to a “neutralist-selectionist controversy”. The advocates of the neutral theory initially proposed it as an unrealistic simple model only to have subsequent data on the prevalence of neutral alleles convince them of its realism. That led to more explicit testing of the neutral theory and debates over the molecular clock (Dietrich 2006). 
To focus on the first of these, Dietrich (2006) suggests that although the neutral theory promised to generate numerous quantitative testable predictions—and did—testing it has in fact proved difficult. Is this because of the difficulty in identifying outcomes that are unique to drift? Perhaps so. For example, the neutral theory predicted a certain measure of heterozygosity, but when Francisco Ayala and colleagues failed to observe that predicted outcome in a study of natural populations of Drosophila, Jack King responded by saying that many of the assumptions made by the model could be the source of the discrepancy. In other words, as has been well-discussed in the philosophy of science, there are no “crucial experiments” between two theories because theories are always tested together with their assumptions, any one of which can be given up rather than rejecting the theory. Debates over the so-called “molecular clock” can also be understood in terms of the search for distinctive outcomes for drift. At first, it was thought that only drift could make sense of an apparent constant rate of evolution—that only drift would produce such an outcome—yet this time it is the selectionists who modified their assumptions such that their models would also predict a molecular clock outcome (Dietrich and Skipper 2007).

Subsequently, Tomoko Ohta began to argue for a more significant role for weakly selected mutants (Ohta 1973). Essentially, Ohta’s definition of “nearly neutral” includes mutants that are less neutral than Kimura’s “nearly neutral” mutants, and more of them as well (Dietrich and Millstein 2008). Later refinements included both slightly deleterious and slightly advantageous mutants. This changes the processes that one would expect to be acting. With the (strictly) neutral theory, only drift would act on the neutral mutants. With the nearly neutral theory, the nearly neutral mutations (whether advantageous or deleterious) would be subject to very weak selection (discriminate sampling) and to drift in the form of indiscriminate gamete sampling (Dietrich and Millstein 2008). Because the selection is weak, it would be swamped by the effects of drift, but both processes would still be occurring. Ohta believed that the nearly neutral theory could better account for the results found by Ayala and his colleagues (with its large number of relatively rare alleles), and also that it could better explain some features of the molecular clock (Dietrich and Millstein 2008). However, it has proven much more difficult to find unique outcomes for the nearly neutral theory than for the neutral theory. This makes the nearly neutral theory more difficult to test and less useful as a null hypothesis as compared to the neutral theory, even as it might account for the available data better.

### 4.3 Recent Empirical Issues Concerning Drift

Hayley Clatterbuck, Elliott Sober, and Richard Lewontin (2013) argue that it doesn’t make sense to talk of drift “dominating” selection or being “stronger than” selection. (Among many such claims, this would challenge Dietrich and Millstein’s (2008) claim, discussed in the previous section, that the best way to understand the nearly neutral theory is as weak selection dominated by drift).
Clatterbuck, Sober, and Lewontin take selection and drift to be a population-level process or processes (they don’t take a stand on whether there is one process or two, but they are clearly “causalists” rather than “statisticalists”), but they raise concerns about the way that biologists have understood the value $$Ns$$, which is the effective size of the population multiplied by the selection coefficient. After some initial controversies over how to interpret this value, most biologists eventually came to the view that selection “dominates” drift when $$Ns$$ is much greater than some specified number and that drift “dominates” selection when $$Ns$$ is much less than that number, with proposed numbers including 1/4, 1/2, and 1; in-between values are thought to be where the two causes are more or less equal (Clatterbuck, Sober, and Lewontin 2013). Clatterbuck, Sober, and Lewontin point out that on the standard picture, the values of N and s do not predict one gene frequency outcome; rather, they predict a probability distribution of possible gene frequency outcomes. But, they argue, you can change this distribution by changing N or by changing s, with values “chosen so that the first change makes more of a difference, or less, or the same, as you please”; thus, it is “arbitrary to focus on comparisons that give drift the upper hand, or that do the same for selection” (Clatterbuck, Sober, and Lewontin 2013: 538). Here, the suggestion seems to be that N represents the drift cause and that s represents the selection cause—although it’s not clear why drift-as-cause should be equated with population-size-as-a-cause—with the conclusion that the causes are not separable. To say that they are separable, Clatterbuck, Sober, and Lewontin say, would be akin to asking whether the result of four heads when a fair coin is tossed ten times is due to the fairness of the coin or the number of tosses. They also suggest that it is impossible to have a population that is not undergoing drift, since even in an infinite population there could be deviation from selective expectations (again, making it hard to separate the causes). Finally, they argue that it doesn’t make sense to say, of identical setups that produce differing outcomes, that selection dominates in some (when the favored allele increases in frequency) and drift in others (when the favored allele decreases in frequency). Robert Brandon and Lenore Fleming (2014) point out, however, that Clatterbuck, Sober, and Lewontin’s analysis of a seemingly empirical question is not based on an empirical discovery, but rather on a conceptual analysis, and that as a conceptual analysis, it is not fully consistent; that is, they do not consistently treat drift as a causal process, but sometimes treat it as an outcome (as in the last point described in the previous paragraph). Brandon and Fleming cite several recent empirical studies in which biologists seek to provide evidence that drift dominates selection without relying on the simple $$Ns$$ criterion to come to that conclusion, and point out that there are other methods for testing the relative strengths of drift and selection, such as the McDonald-Kreitman (MK) test. In Brandon and Fleming’s view, “drift is not a process. It is, however, a predictable result of a process, namely probabilistic sampling” (2014: 581). It is somewhat unclear, however, how on their view the drift outcome (which outcome?)
is “swamping” or “overriding” the selective outcome (presumably by the outcome being the cause of some further, as yet unidentified, outcome). Further elaboration on this point would be useful. It has also been suggested, most notably by John Gillespie (2000a,b, 2001), that many purported instances of genetic drift are in fact due to genetic draft. Genetic draft is a process of linked selection (a hitchhiking process) where it is a matter of chance which of two neutral alleles (in a two-locus model) happens to be linked to a site that undergoes an advantageous mutation, and where the timing of these mutations, followed by a rapid selective “sweep” to fixation, is random. As Skipper (2006) discusses, one of the interesting properties of genetic draft is that it can produce an outcome similar to that which one would expect from genetic drift in small populations, namely, a reduction in genetic variation (thus the similar name). Gillespie argues that draft is less sensitive to population size than drift, which leads him to claim that draft is a more significant cause of evolution than drift (Skipper 2006). Thus, draft introduces yet another relative significance debate (Beatty 1995, 1997) to add to the others discussed in this article: drift vs. selection, Wright’s SBT vs. Fisher’s mass selection, neutralist vs. nearly neutralist vs. selectionist molecular evolution. However, it is worth noting that while draft can produce one of the same outcomes that drift can (reduction in heterozygosity), it would not give rise to fluctuating gene frequencies from one generation to the next. So, not all drifty outcomes can be accounted for by draft. William Provine (2014) has taken the reduction of drift’s role a step further, calling random genetic drift a “fallacy” and arguing that no such phenomenon exists in nature, period. Provine notes that for both Fisher and Wright (and even Kimura), drift was deeply intertwined with inbreeding, to the extent that one was sometimes confused for the other, which seems warranted—or, at least, that the relationship between the two was never fully clarified (see, e.g., Wright 1931a). Provine defines random drift as “fortuitous extinction of genes” at a genic locus on a chromosome. According to Provine, Wright believed that random sampling of gametes in Mendelism produced “random genetic drift” at every locus [on every chromosome] in small populations, and also that inbreeding led to rampant “random genetic drift” in small populations (Provine 2014: 54). Here, Provine treats drift as an outcome, and as that outcome seems physically unlikely or even impossible due to gene linkage, there is, on his view, no drift. Wright did assert that [j]ust because the direction of drift is accidental, the result is a kaleidoscopic shifting of the average characters of the population through predominant types which practically are never repeated, (1931b: 207; see also Wright 1930: 354) but that makes drift sound like a cause, not an outcome, and it does not explicitly state that the “shifting” occurs at every locus on every chromosome. But even if he did, Wright could certainly have been mistaken about the effects of drift without being mistaken about the phenomenon itself; notably, Provine does not deny the existence of the random (i.e., indiscriminate) sampling of gametes, although he does object to the seeming reification of the term “gene pool” and the neglect of the relevance of chromosomes, points that are well-taken. ## 5. 
Drift as Models, Drift as a Law As noted above, the phenomenon of drift is represented in mathematical models of population genetics. The standard mathematical model of drift found in textbooks is the Wright-Fisher model, the core of which is the binomial distribution, and it is the model that philosophers of biology typically appeal to. In the Wright-Fisher model—an idealized model, as all models are—there are assumed to be N diploid adults in a population, mating randomly, with an allele A that has a frequency of $$p_0$$ and an alternate allele at the same locus. The model further assumes that adults produce an infinite number of gametes having the same allele frequency. $$2N$$ gametes are drawn from the “gamete pool” at random to constitute the N diploid individuals of the next generation. However, as Millstein, Skipper, and Dietrich (2009) point out, the Wright-Fisher model is quite idealized, since, of course, populations do not reproduce by calling in their local statistician and asking her to pick exactly $$2N$$ gametes at random (with replacement) and toss them into the next generation (Gillespie 2004: 49)—but there are more realistic alternatives. John Gillespie’s model, for example, similarly assumes a diploid population of N members and a two-allele locus, with frequencies p and $$q = (1 - p)$$; it also similarly assumes random mating. Where it differs from the Wright-Fisher model is that each of the $$2Np$$ A gametes contributes to the next generation a random number of offspring gametes. There is no restriction on the distribution of the numbers of offspring nor on the total number of offspring gametes. Thus, unlike the Wright-Fisher model, Gillespie’s model is not tied to binomial sampling, although sampling more generally is still being modeled. Millstein, Skipper, and Dietrich suggest that the philosophical significance of such alternative models (including others such as the Moran model, the Cannings model, or the coalescent) is that we need to be careful about drawing conclusions about drift from any one particular mathematical model, always keeping in mind Giere’s (1988) point that models are built as representations of specific aspects of physical systems. Clatterbuck (2015) similarly reminds the philosophical community that it is a mistake to focus solely on the Wright-Fisher model, highlighting the Eldon–Wakeley model in particular. She emphasizes that different drift models are not predictively equivalent, so that details of the causal network (represented by the differing assumptions of the different models) underlying a population can be seen to change the outcomes of drift. This challenges the statisticalists’ assumption that drift is purely mathematical. Relatedly, she argues that broadening our conception of drift to include alternative models reveals novel ways of intervening on drift that strengthen the causal argument against statisticalism. For example, by intervening on the population in such a way as to increase the probability that individuals have a far greater number of offspring (relative to the population size) than allowed by the Wright–Fisher model, we would increase the probability of a neutral allele increasing in frequency (2015: 3501). This is an improvement over Reisman and Forber’s manipulationist arguments (discussed above), since it is more clearly the drift (the sampling process) that is being manipulated rather than just population size.
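The sampling schemes being contrasted here are easy to make concrete. The following minimal Python sketch (not from the article; population size, generation count, and seed are illustrative assumptions) simulates neutral Wright-Fisher drift by drawing $$2N$$ gametes binomially each generation; swapping the binomial draw for a heavier-tailed offspring-number distribution would give one of the non-Wright-Fisher alternatives discussed above.

```python
import numpy as np

def wright_fisher(N=100, p0=0.5, generations=200, rng=None):
    """Neutral Wright-Fisher drift at one biallelic locus.

    Each generation, the 2N gametes of the next generation are drawn
    binomially from the current allele frequency -- the 'local
    statistician' idealization quoted above.
    """
    rng = rng or np.random.default_rng(0)
    p, freqs = p0, [p0]
    for _ in range(generations):
        # Binomial sampling of 2N gametes; replacing this single line with a
        # different offspring-number distribution yields an alternative
        # (e.g., Eldon-Wakeley-style sweepstakes) drift model.
        p = rng.binomial(2 * N, p) / (2 * N)
        freqs.append(p)
        if p in (0.0, 1.0):  # allele lost or fixed
            break
    return freqs

# Smaller populations tend to lose or fix the neutral allele sooner.
for N in (20, 200, 2000):
    traj = wright_fisher(N=N)
    print(N, len(traj) - 1, round(traj[-1], 3))
```

On average, the time to fixation or loss in this model scales with the population size, which is the sense in which drift is "stronger" in small populations.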
But perhaps drift is more than just a model (or a set of models); perhaps it is a scientific law of evolutionary biology. Or so Brandon (2006) argues, dubbing a “Principle of Drift” analogous to the Principle of Inertia in that both are intended to be “zero-force” laws. According to the Principle of Drift, 1. “A population at equilibrium will tend to drift from that equilibrium unless acted on by an evolutionary force” and 2. “A population on evolutionary trajectory t, caused by some net evolutionary force F, will tend to depart from the extrapolated path predicted based on F alone (in either direction or magnitude or both) even if no other evolutionary force intervenes, unless F continues to act” (Brandon 2006: 328). On this view, drift—and thus change—is the default state of evolutionary systems, challenging Sober’s view (discussed previously) that the Hardy-Weinberg Principle characterizes the zero-force state in evolutionary biology. In defending the Principle of Drift, Brandon challenges those who would argue that there are no laws of biology (e.g., Beatty 1995; see cites within), although Brandon (1990) had already done so by defending a Principle of Selection (see also the related work by McShea and Brandon 2010, in which drift is characterized as a special case of what they dub the “Zero-Force Evolutionary Law”, or ZFEL). ## 6. Conclusions Philosophical discussions of random genetic drift have been lively and fruitful. But as they are still relatively recent, there are many issues yet to be explored or explored fully. Although much energy has been diverted toward debating statisticalism, there are many debates over drift that biologists are engaged in that philosophers could profitably weigh in on, both historical and contemporary. With respect to the last two sections of this article in particular (empirical issues concerning drift and models of drift), we may have only begun to scratch the surface. ## Bibliography • Abrams, M., 2007, “How Do Natural Selection and Random Drift Interact?” Philosophy of Science, 74(5): 666–679. • Beatty, John, 1984, “Chance and Natural Selection”, Philosophy of Science, 51(2): 183–211. • –––, 1992, “Random Drift”, in Keywords in Evolutionary Biology, edited by Evelyn Fox Keller and Elisabeth A. Lloyd, Cambridge, MA: Harvard University Press, 273–281. • –––, 1995, “The Evolutionary Contingency Thesis”, in Concepts, Theories, and Rationality in the Biological Sciences, edited by G. Wolters and J.G. Lennox, Pittsburgh: University of Pittsburgh Press, 45–81. • –––, 1997, “Why Do Biologists Argue Like They Do?” Philosophy of Science, 64(4): S432–S443. • Bigelow, John, Brian Ellis, and Robert Pargetter, 1988, “Forces”, Philosophy of Science, 55: 614–630. • Bouchard, Frédéric, and Alex Rosenberg, 2004, “Fitness, Probability, and the Principles of Natural Selection”, British Journal for the Philosophy of Science, 55: 693–712. • Brandon, Robert N., 1990, Adaptation and Environment, Princeton, NJ: Princeton University Press. • –––, 2005, “The Difference between Selection and Drift: A Reply to Millstein”, Biology & Philosophy, 20(1): 153–170. • –––, 2006, “The Principle of Drift: Biology’s First Law”, The Journal of Philosophy, 103(7): 319–335. • Brandon, Robert N., and Lenore Fleming, 2014, “Drift Sometimes Dominates Selection, and Vice Versa: A Reply to Clatterbuck, Sober and Lewontin”, Biology and Philosophy, 29: 577–585.
• Brandon, Robert N., and Grant Ramsey, 2007, “What’s Wrong with the Emergentist Statistical Interpretation of Natural Selection and Random Drift?” in The Cambridge Companion to the Philosophy of Biology, edited by David L. Hull and Michael Ruse, Cambridge: Cambridge University Press, 66–84. • Cain, Arthur J. and John D. Currey, 1963, “Area Effects in Cepaea”, Philosophical Transactions of the Royal Society of London B, 246: 1–81. • Cain, Arthur J. and Philip M. Sheppard, 1950, “Selection in the Polymorphic Land Snail Cepaea Nemoralis”, Heredity, 4(3): 275–294. • –––, 1954, “Natural Selection in Cepaea”, Genetics, 39: 89–116. • Cain, Joe and Michael Ruse (eds.), 2009, Descended from Darwin: Insights into the History of Evolutionary Studies, 1900–1970, Philadelphia: American Philosophical Society. • Clatterbuck, Hayley, 2015, “Drift Beyond Wright–Fisher”, Synthese, 192(11): 3487–3507. • Clatterbuck, Hayley, Elliot Sober, and Richard C. Lewontin, 2013, “Selection Never Dominates Drift (nor Vice Versa)”, Biology and Philosophy, 20(1): 153–170. • Darwin, Charles, 1872, The Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, Sixth ed. London: Murray. • Dietrich, Michael R., 1994, “The Origins of the Neutral Theory of Molecular Evolution”, Journal of the History of Biology, 27(1): 21–59. • –––, 2006, “Three Perspectives on Neutrality and Drift in Molecular Evolution”, Philosophy of Science, 73(5): 666–677. • Dietrich, Michael R. and Roberta L. Millstein, 2008, “The Role of Causal Processes in the Neutral and Nearly Neutral Theories”, Philosophy of Science, 75(5): 548–559. • Dietrich, Michael R. and Robert A. Skipper Jr., 2007, “Manipulating Underdetermination in Scientific Controversy: The Case of the Molecular Clock”, Perspectives on Science, 15: 295–326. • Dobzhansky, Theodosius, 1937, Genetics and the Origin of Species, New York: Columbia University Press. • Dobzhansky, Theodosius and Olga Pavlovsky, 1957, “An Experimental Study of Interaction between Genetic Drift and Natural Selection”, Evolution, 11: 311–319. • Dubinin, N.P. and D.D. Romaschoff, 1932, “Die Genetische Struktur Der Art Und Ihre Evolution”, Biologichesky Zhurnal, 1: 52–95. • Endler, John A., 1986, Natural Selection in the Wild, Princeton, NJ: Princeton University Press. • Filler, Joshua, 2009, “Newtonian Forces and Evolutionary Biology: A Problem and Solution for Extending the Force Interpretation”, Philosophy of Science, 76: 774–783. • Fisher, Ronald A., 1922a, “Darwinian Evolution of Mutations”, Eugenics Review, 14: 31–34. • –––, 1922b, “On the Dominance Ratio”, Proceedings of the Royal Society of Edinburgh, 42: 321–341. • Fisher, Ronald A. and E.B. Ford, 1947, “The Spread of a Gene in Natural Conditions in a Colony of the Moth Panaxia Dominula L”, Heredity, 1: 143–174. • Forber, Patrick and Kenneth Reisman, 2007, “Can There Be Stochastic Evolutionary Causes?”, Philosophy of Science, 74(5): 616–627. doi:10.1086/525608 • Giere, Ronald N., 1988, Explaining Science: A Cognitive Approach, Chicago: University of Chicago Press. • Gildenhuys, Peter, 2009, “An Explication of the Causal Dimension of Drift”, British Journal for the Philosophy of Science, 60(3): 521–555. doi:10.1093/bjps/axp019 • –––, 2014, “Arbitrariness and Causation in Classical Population Genetics”, British Journal for the Philosophy of Science, 65: 429–444. doi:10.1093/bjps/axs042 • Gillespie, John H., 2000a, “Genetic Drift in an Infinite Population. 
The Pseudohitchhiking Model”, Genetics, 155(2): 909–19. • –––, 2000b, “The Neutral Theory in an Infinite Population”, Gene, 261(1): 11–18. • –––, 2001, “Is the Population Size of a Species Relevant to Its Evolution?” Evolution, 55(11): 2161–2169. • –––, 2004, Population Genetics: A Concise Guide, Second ed. Baltimore: Johns Hopkins University Press. • Godfrey-Smith, Peter, 2009, Darwinian Populations and Natural Selection, Oxford: Oxford University Press. • Gulick, John T, 1873, “On Diversity of Evolution under One Set of External Conditions”, Journal of the Linnean Society of London, 11: 496–505. • Hagedoorn, A.L. and A.C. Hagedoorn, 1921, On the Relative Value of the Processes Causing Evolution, The Hague: Nijhoff. • Haufe, Chris, 2013, “From Necessary Chances to Biological Laws”, The British Journal for the Philosophy of Science, 64(2): 279–295. • Hodge, M.J.S., 1987, “Natural Selection as a Causal, Empirical, and Probabilistic Theory”, in The Probabilistic Revolution, Volume II, edited by Lorenz Krüger, Gerd Gigerenzer and Mary S. Morgan, Cambridge, MA: MIT Press, 233–270. • –––, 2011, “Darwinism after Mendelism: The Case of Sewall Wright’s Intellectual Synthesis in His Shifting Balance Theory of Evolution (1931)”, Studies in the History and Philosophy of the Biological and Biomedical Sciences, 42(1): 30–39. • –––, 2016, “Chance and Chances in Darwin’s Early Theorizing and in Darwinian Theory Today”, in Ramsey and Pence 2016: 41–75. • Lamotte, Maxime, 1959, “Polymorphism of Natural Populations of Cepaea Nemoralis”, Cold Spring Harbor Symposia on Quantitative Biology, 24: 65–84. • Matthen, Mohan, 2010, “What Is Drift? A Response to Millstein, Skipper, and Dietrich”, Philosophy and Theory in Biology, 2:e102. • Matthen, Mohan and André Ariew, 2002, “Two Ways of Thinking About Fitness and Natural Selection”, The Journal of Philosophy, 99: 55–83. • McShea, Daniel W. and Robert N. Brandon, 2010, Biology’s First Law: The Tendency for Diversity and Complexity to Increase in Evolutionary Systems, Chicago: University of Chicago Press. • Mills, Susan K. and John Beatty, 1979, “The Propensity Interpretation of Fitness”, Philosophy of Science, 46(2): 263–286. • Millstein, Roberta L., 2002, “Are Random Drift and Natural Selection Conceptually Distinct?” Biology & Philosophy, 17(1): 33–53. • –––, 2005, “Selection Vs. Drift: A Response to Brandon’s Reply”, Biology & Philosophy, 20(1): 171–175. • –––, 2006, “Natural Selection as a Population-Level Causal Process”, British Journal for the Philosophy of Science, 57(4): 627–653. • –––, 2008, “Distinguishing Drift and Selection Empirically: “The Great Snail Debate” of the 1950s”, Journal of the History of Biology, 41(2): 339–367. • –––, 2009, “Concepts of Drift and Selection in ‘the Great Snail Debate’ of the 1950s and Early 1960s”, in Cain and Ruse 2009: 271–298. • Millstein, Roberta L., Robert A. Skipper Jr., and Michael R. Dietrich, 2009, “(Mis)Interpreting Mathematical Models of Drift: Drift as a Physical Process”, Philosophy and Theory in Biology, 1: 1–13. • Northcott, Robert, 2010, “Walsh on Causes and Evolution”, Philosophy of Science, 77(3): 457–467. • Ohta, Tomoko, 1973, “Slightly Deleterious Mutant Substitutions in Evolution”, Nature, 246: 96–98. • Pence, Charles H., forthcoming, “Is Genetic Drift a Force?” Synthese, first online 30 January 2016. doi:10.1007/s11229-016-1031-2 • Pfeifer, Jessica, 2005, “Why Selection and Drift Might Be Distinct”, Philosophy of Science, 72(5): 1135–1145. 
• Plutynski, Anya, 2005, “Parsimony and the Fisher-Wright Debate”, Biology & Philosophy, 20(4): 697–713. • –––, 2007, “Drift: A Historical and Conceptual Overview”, Biological Theory, 2(2): 156–167. • Plutynski, Anya, Kenneth Blake Vernon, Lucas John Matthews, and Daniel Molter, 2016, “Chance in the Modern Synthesis”, in Ramsey and Pence 2016: 76–102. • Provine, William B., 1986, Sewall Wright and Evolutionary Biology, Chicago: University of Chicago. • –––, 2014, The “Random Genetic Drift” Fallacy, New York: CreateSpace Independent Publishing Platform. • Ramsey, Grant, 2013, “Driftability”, Synthese, 190(17): 3909–3928. • Ramsey, Grant and Charles H. Pence (eds.), 2016, Chance in Evolution, Chicago: University of Chicago Press. • Reisman, Kenneth and Patrick Forber, 2005, “Manipulation and the Causes of Evolution””, Philosophy of Science, 72(5): 1113–1123. doi:10.1086/508120 • Richardson, Robert C., 2006, “Chance and the Patterns of Drift: A Natural Experiment”, Philosophy of Science, 73: 642–654. • Rosenberg, Alexander, 1994, Instrumental Biology or the Disunity of Science, Chicago: University of Chicago Press. • Scriven, Michael, 1959, “Explanation and Prediction in Evolutionary Theory”, Science, 130(3374): 477–482. doi:10.1126/science.130.3374.477 • Shanahan, Timothy, 1992, “Selection, Drift, and the Aims of Evolutionary Theory”, in Trees of Life: Essays in Philosophy of Biology, edited by Paul Griffiths, 131–161. Dordrecht: Kluwer. • Shapiro, Larry and Elliott Sober, 2007, “Epiphenomenalism—the Do’s and the Don’ts”, in Studies in Causality: Historical and Contemporary, edited by G. Wolters and Peter Machamer Pittsburgh, PA: University of Pittsburgh Press, , 235–264. • Skipper Jr., Robert A., 2002, “The Persistence of the R.A. Fisher-Sewall Wright Controversy”, Biology & Philosophy, 17(3): 341–367. • –––, 2006, “Stochastic Evolutionary Dynamics: Drift Versus Draft”, Philosophy of Science, 73(5): 655–665. • –––, 2009, “Revisiting the Fisher-Wright Controversy”, in Cain and Ruse 2009: 299–322. • Sober, Elliott, 1984, The Nature of Selection, Cambridge, MA: MIT Press. • Stephens, Christopher, 2004, “Selection, Drift, and the “Forces” of Evolution”, Philosophy of Science, 71(4): 550–570. • –––, 2010, “Forces and Causes in Evolutionary Theory”, Philosophy of Science, 77(5): 716–727. • Walsh, Denis M., 2007, “The Pomp of Superfluous Causes: The Interpretation of Evolutionary Theory”, Philosophy of Science, 74: 281–303. • Walsh, Denis M., Tim Lewens, and André Ariew, 2002, “The Trials of Life: Natural Selection and Random Drift”, Philosophy of Science, 69(3): 452–473. doi:10.1086/342454 • Woodward, James, 2003, Making Things Happen: A Theory of Causal Explanation, Oxford: Oxford University Press. • Wright, Sewall, 1929, “The Evolution of Dominance”, American Naturalist, 63(689): 556–561. • –––, 1930, “The Genetical Theory of Natural Selection: A Review”, The Journal of Heredity, 21: 349–356. • –––, 1931a, “Evolution in Mendelian Populations”, Genetics, 16: 97–159. • –––, 1931b, “Statistical Theory of Evolution”, Journal of the American Statistical Association, 26(173, Supplement): 201–208. • –––, 1932, “The Roles of Mutation, Inbreeding, Crossbreeding and Selection in Evolution”, Proceedings of the Sixth International Congress of Genetics, 1: 356–366. • –––, 1948, “On the Roles of Directed and Random Changes in Gene Frequency in the Genetics of Populations”, Evolution, 2(4): 279–294. 
• –––, 1949, “Population Structure in Evolution”, Proceedings of the American Philosophical Society, 93(6): 471–478. • –––, 1951, “Fisher and Ford on the Sewall Wright Effect”, American Scientist, 39: 452–458. • –––, 1978, Evolution and the Genetics of Populations, Volume 4: Variability within and among Natural Populations, Chicago: University of Chicago Press. • –––, 1986, Evolution: Selected Papers, Chicago: University of Chicago Press.
Social networks: # Latest deposits Added on 30/01/2020 ## [hal-02456153] FILTERING OUT TIME-FREQUENCY AREAS USING GABOR MULTIPLIERS We address the problem of filtering out localized time-frequency components in signals. The problem is formulated as a minimization of a suitable quadratic form that involves a data fidelity term on the short-time Fourier transform outside the support of the undesired component, and an energy penalization term inside the support. The minimization yields a linear system whose solution can be expressed in closed form using Gabor multipliers. Added on 29/01/2020 ## [hal-02175302] A convergent FV-FE scheme for the stationary compressible Navier-Stokes equations In this paper, we prove a convergence result for a discretization of the three-dimensional stationary compressible Navier-Stokes equations assuming an ideal gas pressure law p(ρ) = aρ^γ with γ > 3/2. It is the first convergence result for a numerical method with adiabatic exponents γ less than 3 when the space dimension is three. The considered numerical scheme combines finite volume techniques for the convection with the Crouzeix-Raviart finite element for the diffusion. Added on 29/01/2020 ## [hal-02428009] Transfer matrix of a truncated cone with viscothermal losses: application of the WKB method The propagation in tubes with varying cross section and wall visco-thermal effects is a classical problem in musical acoustics. To treat this aspect, the first method was the division into a large number of short cylinders. The division into short conical frustums with wall effects independent of the radius is better, but remains time consuming for narrow tubes and low frequencies. The use of the WKB method for the transfer matrix of a truncated cone without any division is investigated. Added on 28/01/2020 ## [hal-02454518] SPOQ lp-Over-lq Regularization for Sparse Signal Recovery applied to Mass Spectrometry Underdetermined or ill-posed inverse problems require additional information for sound solutions with tractable optimization algorithms. Sparsity yields consequent heuristics to that matter, with numerous applications in signal restoration, image recovery, or machine learning. Since the l0 count measure is barely tractable, many statistical or learning approaches have invested in computable proxies, such as the l1 norm. Added on 28/01/2020 ## [hal-02455652] SPOQ $\ell_p$-Over-$\ell_q$ Regularization for Sparse Signal: Recovery applied to Mass Spectrometry Underdetermined or ill-posed inverse problems require additional information for sound solutions with tractable optimization algorithms. Sparsity yields consequent heuristics to that matter, with numerous applications in signal restoration, image recovery, or machine learning. Since the $\ell_0$ count measure is barely tractable, many statistical or learning approaches have invested in computable proxies, such as the $\ell_1$ norm. Added on 28/01/2020 ## [hal-02455218] Spectral Estimation for Multivariate Locally Time-Warped Signals Spectral estimation generally aims at determining, from a single realization of a given signal, the distribution of its power as a function of frequency. In this paper, we focus on multivariate, locally time-warped signals. We show that the spectral estimation problem can also be interpreted as a doubly nonstationary blind source separation (BSS) problem, where both the mixing matrix and the original sources contribute to nonstationarity.
Added on 28/01/2020 ## [hal-02450866] Advances in Intravital Non-Linear Optical Imaging of the Central Nervous System in Rodents Purpose of review: Highly coordinated cellular interactions occur in the healthy or pathologic adult rodent central nervous system (CNS). Until recently, technical challenges have restricted the analysis of these events to largely static modes of study such as immuno-fluorescence and electron microscopy on fixed tissues. The development of intravital imaging with subcellular resolution is required to probe the dynamics of these events in their natural context, the living brain. Added on 27/01/2020 ## [hal-02447654] Flexible photonic based on dielectric antennas Flexible and stretchable photonics are emerging fields aiming to develop novel applications where the devices need to conform to uneven surfaces or whenever lightness and reduced thickness are major requirements. However, owing to the relatively small refractive index of transparent soft matter including most polymers, these materials are not well adapted for light management at visible and near-infrared frequencies. Added on 25/01/2020 ## [tel-02448325] Prédiction des Séries Temporelles Larges (Prediction of Large Time Series) Nowadays, data management and processing systems are expected to store and process large time series. As the number of observed and related variables increases, predicting them has become more and more complicated, and using all the variables raises problems for classical prediction models. Added on 25/01/2020 ## [hal-02448277] GFSM: a Feature Selection Method for Improving Time Series Forecasting Handling time series forecasting with many predictors is a popular topic in the era of "Big data", where vast amounts of observed variables are stored and used in analytic processes. Classical prediction models face some limitations when applied to large-scale data. Using all the existing predictors increases the computational time and does not necessarily improve the forecast accuracy.
By Topic # IEE Colloquium on Artificial Intelligence in Planning for Production Control ## Filter Results Displaying Results 1 - 10 of 10 • ### Exploiting the implications of choices in set partitioning Publication Year: 1988, Page(s):3/1 - 3/3 | | PDF (184 KB) Computational experience so far has shown that on set partitioning problems of the scheduling type, forward checking considers far fewer partial solutions than does the Garfinkel and Nemhauser algorithm. However, since it seems unlikely that these algorithms could compete with mathematical programming methods when applied to problems of the size occurring in scheduling applications, the main inter... View full abstract» • ### Control of flexible manufacturing cells Publication Year: 1988, Page(s):7/1 - 7/4 | | PDF (192 KB) A flexible manufacturing cell integrates devices to execute the goals of overall factory production control system. Such cells should be readily adapted to perform new tasks and be capable of detecting errors to recover or degrade gracefully. An approach to allow this is the distribution of intelligence to individual device hardware and software using the concept of an actor', an actor based cell... View full abstract» • ### A prototype AI-based tool for production scheduling Publication Year: 1988, Page(s):2/1 - 2/4 Cited by:  Patents (1) | | PDF (136 KB) Describes the application of an artificial intelligence planning system, SCINAPSE to a class of production scheduling problems. SCINAPSE is a domain-independent planner which operates on a model of some domain. Given objectives posed as goals within that domain, it can then create plans for their accomplishment. One such type of domain, which has been addressed is production scheduling, where the ... View full abstract» • ### An intelligent knowledge-based scheduler for heavy manufacturing Publication Year: 1988, Page(s):6/1 - 6/3 | | PDF (124 KB) A consortium from Alcan, YARD and the computer science department at the University of Strathclyde are collaborating on an Alvey-funded project on the use of intelligent knowledge-based systems (IKBS) techniques for production scheduling in heavy manufacturing. The main objectives of the project are: to research and develop methodologies for scheduling in heavy manufacturing and similar domains, a... View full abstract» • ### IEE Colloquium on Artificial Intelligence in Planning for Production Control' (Digest No.85) Publication Year: 1988 | | PDF (16 KB) The following topics were dealt with: qualitative assessment of flexibility characteristics in FMS; prototype AI-based tool for production scheduling; exploiting the implications of choices in set partitioning; reactive constraint-based scheduling; IKBS for FMS: an object-centred knowledge acquisition strategy for productivity optimisation; intelligent knowledge-based scheduler for heavy manufactu... View full abstract» • ### Qualitative assessment of flexibility characteristics in FMS Publication Year: 1988, Page(s):1/1 - 1/6 | | PDF (224 KB) Describes a SERC funded collaborative project at the Open University in conceptual modelling of flexibility in production systems. The aim is to produce a set of formal relationships for associating flexibility types with design configurations, using a model-based approach to represent qualitatively different types of flexibility arising from particular designs. This flexibility knowledge of class... 
View full abstract» • ### IKBS for FMS: an object-centred knowledge acquisition strategy for productivity optimisation Publication Year: 1988, Page(s):5/1 - 5/3 | | PDF (232 KB) Discusses some possible uses of IKBS techniques in the domain of manufacturing-specifically flexible manufacturing systems. The application is in control of part of an existing factory system. The necessity to maintain a real-time system places some significant and awkward constraints on any techniques used. The problem area is discussed together with the work which has been done using IKBS techni... View full abstract» • ### Steps toward the automation of robot programming Publication Year: 1988, Page(s):9/1 - 9/4 | | PDF (160 KB) Considers three approaches that may be taken towards the goal of automated robot programming. The first approach follows from a high-level task specification, the second from the use of sensors and the third from allowing the robot to learn deep knowledge as a supplement to predefined behaviour View full abstract» • ### Reactive constraint-based scheduling Publication Year: 1988, Page(s):4/1 - 4/5 | | PDF (228 KB) The authors of the paper are currently concerned with the design of a knowledge-based scheduling system at the operational level of VLSI wafer fabrication. This is a particular difficult job-shop scheduling problem for two reasons. Firstly, it is difficult to define the criteria by which a schedule can be considered optimal, due to conflicting and changing objectives. Secondly, the wafer fabricati... View full abstract» • ### Practical microcomputer solutions of the cutting stock problem Publication Year: 1988, Page(s):8/1 - 8/4 | | PDF (152 KB) The work presented arose from research into the production of microcomputer software to derive patterns from which industrial saws cut large stock sheets of wood into rectangular panels in some optimal fashion. It is demonstrated that even in sophisticated algorithmic solutions some applied intelligence relating to the parameters has an effect on the results View full abstract»
# Q1 vs Q2 contact analysis¶ Here we calculate a Q1 vs Q2 plot, where Q1 refers to fraction of native contacts along a trajectory with reference to the first frame, and Q2 represents the fraction of native contacts with reference to the last. Last updated: December 2022 with MDAnalysis 2.4.0-dev0 Minimum version of MDAnalysis: 0.17.0 Packages required: Note The contacts.q1q2 function uses the radius_cut_q method to calculate the fraction of native contacts for a conformation by determining that atoms i and j are in contact if they are within a given radius ([FKDD07], [BHE13]) [1]: import MDAnalysis as mda from MDAnalysis.tests.datafiles import PSF, DCD from MDAnalysis.analysis import contacts import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline ## Background¶ Please see the Fraction of native contacts for an introduction to general native contacts analysis. The test files we will be working with here feature adenylate kinase (AdK), a phosophotransferase enzyme. ([BDPW09]) The trajectory DCD samples a transition from a closed to an open conformation. [2]: u = mda.Universe(PSF, DCD) /home/pbarletta/mambaforge/envs/mda-user-guide/lib/python3.9/site-packages/MDAnalysis/coordinates/DCD.py:165: DeprecationWarning: DCDReader currently makes independent timesteps by copying self.ts while other readers update self.ts inplace. This behaviour will be changed in 3.0 to be the same as other readers ## Calculating Q1 vs Q2¶ We choose to calculate contacts for all the alpha-carbons in the protein, and define the contact radius cutoff at 8 Angstrom. contacts.q1q2 returns a contacts.Contacts object, which we can run directly. [3]: q1q2 = contacts.q1q2(u, 'name CA', radius=8).run() The data is in q1q2.timeseries. The first column of the data is always the frame number. [4]: q1q2_df = pd.DataFrame(q1q2.results.timeseries, columns=['Frame', 'Q1', 'Q2']) [4]: Frame Q1 Q2 0 0.0 1.000000 0.946494 1 1.0 0.980926 0.949262 2 2.0 0.973660 0.952952 3 3.0 0.972752 0.951107 4 4.0 0.970027 0.948339 ### Plotting¶ We can plot the fraction of native contacts over time. [5]: q1q2_df.plot(x='Frame') plt.ylabel('Fraction of native contacts'); Alternatively, we can create a Q1 vs Q2 plot to characterise the transition of AdK from its opened to closed position. [6]: q1q2_df.plot(x='Q1', y='Q2', legend=False) plt.ylabel('Q2'); ## References¶ [1] Oliver Beckstein, Elizabeth J. Denning, Juan R. Perilla, and Thomas B. Woolf. Zipping and Unzipping of Adenylate Kinase: Atomistic Insights into the Ensemble of Open↔Closed Transitions. Journal of Molecular Biology, 394(1):160–176, November 2009. 00107. URL: https://linkinghub.elsevier.com/retrieve/pii/S0022283609011164, doi:10.1016/j.jmb.2009.09.009. [2] R. B. Best, G. Hummer, and W. A. Eaton. Native contacts determine protein folding mechanisms in atomistic simulations. Proceedings of the National Academy of Sciences, 110(44):17874–17879, October 2013. 00259. URL: http://www.pnas.org/cgi/doi/10.1073/pnas.1311599110, doi:10.1073/pnas.1311599110. [3] Joel Franklin, Patrice Koehl, Sebastian Doniach, and Marc Delarue. MinActionPath: maximum likelihood trajectory for large-scale structural transitions in a coarse-grained locally harmonic energy landscape. Nucleic Acids Research, 35(suppl_2):W477–W482, July 2007. 00083. URL: https://academic.oup.com/nar/article-lookup/doi/10.1093/nar/gkm342, doi:10.1093/nar/gkm342. [4] Richard J. Gowers, Max Linke, Jonathan Barnoud, Tyler J. E. Reddy, Manuel N. Melo, Sean L. Seyler, Jan Domański, David L. 
Dotson, Sébastien Buchoux, Ian M. Kenney, and Oliver Beckstein. MDAnalysis: A Python Package for the Rapid Analysis of Molecular Dynamics Simulations. Proceedings of the 15th Python in Science Conference, pages 98–105, 2016. 00152. URL: https://conference.scipy.org/proceedings/scipy2016/oliver_beckstein.html, doi:10.25080/Majora-629e541a-00e. [5] Naveen Michaud-Agrawal, Elizabeth J. Denning, Thomas B. Woolf, and Oliver Beckstein. MDAnalysis: A toolkit for the analysis of molecular dynamics simulations. Journal of Computational Chemistry, 32(10):2319–2327, July 2011. 00778. URL: http://doi.wiley.com/10.1002/jcc.21787, doi:10.1002/jcc.21787.
# How to build the [111] slab model of NiSe2 with different terminations with the ASE tool? The following figure is the bulk structure of NiSe$$_2$$ downloaded from the Materials Project database. Now I want to study the properties of its [111] plane. In detail, I cut the slab with the Atomic Simulation Environment (ASE) as follows: The resulting slab is: In particular, I find the Se5 atom is strange. In addition, the cut slab is different from the one in this paper: Here different terminations are considered. Therefore, how can I use ASE to cut these (111) slabs with different terminations? • If you do manage to automate something, even for this specific material, please share as an answer since others may be able to build off your specific example later. My answer is unfortunately an answer of despair. Jan 1 at 4:52
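One possible route, offered here only as a hedged sketch rather than a verified recipe for NiSe2: ASE's `surface` builder cuts a single (111) slab with one fixed termination, while pymatgen's `SlabGenerator` can enumerate symmetrically distinct terminations, which can then be converted back to ASE `Atoms` objects. The file name `NiSe2.cif`, the layer counts, and the vacuum sizes below are illustrative assumptions, and the slabs produced are not guaranteed to match the terminations shown in the cited paper.

```python
from ase.io import read, write
from ase.build import surface
from pymatgen.core import Structure
from pymatgen.core.surface import SlabGenerator
from pymatgen.io.ase import AseAtomsAdaptor

# Single (111) slab with ASE only (one fixed termination).
bulk = read("NiSe2.cif")  # assumed local CIF file exported from Materials Project
slab = surface(bulk, (1, 1, 1), layers=4, vacuum=10.0)
write("NiSe2_111_ase.cif", slab)

# Enumerate symmetrically distinct (111) terminations with pymatgen,
# then convert each slab to an ASE Atoms object.
structure = Structure.from_file("NiSe2.cif")
gen = SlabGenerator(structure, miller_index=(1, 1, 1),
                    min_slab_size=10.0, min_vacuum_size=15.0,
                    center_slab=True)
for i, pmg_slab in enumerate(gen.get_slabs()):
    atoms = AseAtomsAdaptor.get_atoms(pmg_slab)
    write(f"NiSe2_111_term{i}.cif", atoms)
```

Inspecting the top and bottom atomic layers of each generated slab (for example, by sorting on the z-coordinate) is then a quick way to check whether a given termination is Ni- or Se-terminated.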
Definitions # Spontaneous process A spontaneous process is the time-evolution of a system in which it releases free energy (most often as heat) and moves to a lower, more thermodynamically stable, energy state. The sign convention of changes in free energy follows the general convention for thermodynamic measurements, in which a release of free energy from the system corresponds to a negative change in free energy, but a positive change for the surroundings. A spontaneous process is one that is capable of proceeding in a given direction, as written or described, without needing to be driven by an outside source of energy. The term is used to refer to macro processes in which entropy increases, such as a smell diffusing in a room, ice melting in lukewarm water, salt dissolving in water, and iron rusting. The laws of thermodynamics govern the direction of a spontaneous process, ensuring that if a sufficiently large number of individual interactions (like atoms colliding) are involved, then the direction will always be in the direction of increased entropy (since entropy increase is a statistical phenomenon). ## Overview For a reaction at constant temperature and pressure, the change ΔG in the Gibbs free energy is: $\Delta G = \Delta H - T\,\Delta S$ The sign of ΔG depends on the signs of the changes in enthalpy (ΔH) and entropy (ΔS), as well as on the absolute temperature (T, in kelvins). Notice that the sign of the term −TΔS is determined by the sign of ΔS alone, because the absolute temperature can never be negative. When ΔG is negative, a process or chemical reaction proceeds spontaneously in the forward direction. When ΔG is positive, the process proceeds spontaneously in reverse. When ΔG is zero, the process is already in equilibrium, with no net change taking place over time. We can further distinguish four cases within the above rule just by examining the signs of the two terms on the right side of the equation. When ΔS is positive and ΔH is negative, a process is spontaneous at all temperatures. When ΔS is positive and ΔH is positive, a process is spontaneous at high temperatures, where exothermicity plays a small role in the balance. When ΔS is negative and ΔH is negative, a process is spontaneous at low temperatures, where exothermicity is important. When ΔS is negative and ΔH is positive, a process is not spontaneous at any temperature, but the reverse process is spontaneous. The second law of thermodynamics states that for any spontaneous process the overall change ΔS in the entropy of a system together with its surroundings must be greater than or equal to zero, yet a spontaneous chemical reaction can result in a negative change in the entropy of the reacting system itself. This does not contradict the second law, however, since such a reaction must have a sufficiently large negative change in enthalpy (heat energy) that the increase in temperature of the reaction surroundings (considered to be part of the system in thermodynamic terms) results in a sufficiently large increase in entropy that overall the change in entropy is positive. That is, the ΔS of the surroundings increases enough because of the exothermicity of the reaction that it overcompensates for the negative ΔS of the system, and since the overall ΔS = ΔS(surroundings) + ΔS(system), the overall change in entropy is still positive. Another way to view the fact that some spontaneous chemical reactions can lead to products with lower entropy is to realize that the second law states that the entropy of a closed system must increase (or remain constant).
Since a negative enthalpy change (an exothermic reaction) means that energy is being released to the surroundings, the 'closed' system here includes the chemical reaction plus its surroundings. This means that the heat released by the chemical reaction sufficiently increases the entropy of the surroundings such that the overall entropy of the closed system increases, in accordance with the second law of thermodynamics. Just because a chemist calls a reaction "spontaneous" does not mean the reaction happens with great speed. For example, the decay of diamonds into graphite is a spontaneous process, but this decay is extremely slow and takes millions of years. The rate of a reaction is independent of its spontaneity, and instead depends on the chemical kinetics of the reaction.
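As a quick numerical companion to the sign analysis above, the short Python sketch below evaluates ΔG = ΔH − TΔS for the four sign combinations; the particular ΔH, ΔS, and temperature values are arbitrary illustrations, not data from any source.

```python
def gibbs(dH, dS, T):
    """Gibbs free energy change (J/mol) at temperature T (K)."""
    return dH - T * dS

def spontaneity(dH, dS, T):
    dG = gibbs(dH, dS, T)
    if dG < 0:
        return "spontaneous (forward)"
    if dG > 0:
        return "spontaneous in reverse"
    return "at equilibrium"

# Four sign combinations of dH (J/mol) and dS (J/(mol K)), checked at a low and a high temperature.
cases = [(-50_000, +100.0), (+50_000, +100.0), (-50_000, -100.0), (+50_000, -100.0)]
for dH, dS in cases:
    for T in (200.0, 1000.0):
        print(f"dH={dH:+} J/mol, dS={dS:+} J/(mol K), T={T} K -> {spontaneity(dH, dS, T)}")
```

Running it reproduces the four cases listed above: negative ΔH with positive ΔS is spontaneous at both temperatures, same-sign combinations switch direction between the low and high temperature, and positive ΔH with negative ΔS is spontaneous only in reverse.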
## Greexotics – A First Step in the Land of Exotic Derivatives Greeks – Part 2 Following the first article of the series, in which we analyzed the greeks of a binary option, in this second part of the series we would like to introduce Asian options and their greeks. Asian options are part of a broader macro-class of exotic derivatives named Path Dependent Derivatives; in Read more… ## Our views on Twitter Share Price In our Special Report (https://bsic.it/2013/10/19/bsic-special-report-twitter-ipo/), we analyzed Twitter’s characteristics, comparables and ratios. We valued Twitter shares at approximately 25$, really close to the final offering price (26$). After the IPO, Twitter’s share price dramatically soared to 45$, an unbelievable 72% surge: Analysts seem to agree with our view, and Mr. Read more… ## Trade idea: short put Nikkei – strike 14000 From a technical point of view, the Nikkei’s recent movements do not deliver any blatant signal on which to ground expectations: the last close stands near the moving average and the RSI lies flat around 50. However, we believe it is worth taking a stand on the general market sentiment over Japan. Read more… ## Trade idea: take advantage of the low-volatility environment with a butterfly strategy After breaking several all-time highs, the US indices are now poised to move higher with less and less volatility. Good news coming from the job market and retail sales has been welcomed with a sudden decrease in implied volatility Read more…
Tags: crypto vigener Rating: 5.0 Clam managed to get parole for his dumb cryptography jokes, but after making yet another dumb joke on his way out of the courtroom, he was sent straight back in. This time, he was sentenced to 5 years of making dumb Vigenere challenges. Clam, fed up with the monotony of challenge writing, made a program to do it for him. Can you solve enough challenges to get the flag? Connect to the challenge at nc challs.actf.co 31333. Source(main.py) Source(main.py) #!/usr/local/bin/python3 import string import os import random with open("flag.txt", "r") as f: alpha = string.ascii_lowercase def encrypt(msg, key): ret = "" i = 0 for c in msg: if c in alpha: ret += alpha[(alpha.index(key[i]) + alpha.index(c)) % len(alpha)] i = (i + 1) % len(key) else: ret += c return ret inner = alpha + "_" noise = inner + "{}" print("Welcome to the vinegar factory! Solve some crypto, it'll be fun I swear!") i = 0 while True: if i % 50 == 49: fleg = flag else: fleg = "actf{" + "".join(random.choices(inner, k=random.randint(10, 50))) + "}" start = "".join(random.choices(noise, k=random.randint(0, 2000))) end = "".join(random.choices(noise, k=random.randint(0, 2000))) key = "".join(random.choices(alpha, k=4)) print(f"Challenge {i}: {start}{encrypt(fleg + 'fleg', key)}{end}") x = input("> ") if x != fleg: print("Nope! Better luck next time!") break i += 1 My script #!/usr/bin/env python3 from pwn import * import string s = remote("challs.actf.co", 31333) #s = remote("127.0.0.1", 31333) for t in range(52): s.recvuntil('Challenge') noise = str(s.recvuntil('\n')) print(noise) s.recvuntil('> ') alphabet = string.ascii_lowercase length = int(len(noise)) output_file = open('output.txt', 'w') for i in range(0, length-35): if noise[i] in alphabet and noise[i + 1] in alphabet and noise[i + 2] in alphabet and noise[i + 3] in alphabet: if noise[i + 4] == '{': a = noise[i:i + 6] i=i+3 while noise[i + 2] != '}' and noise[i + 2] != '{' and noise[i+2] != '\n': i = i + 1 a = a + (noise[i + 2]) if noise[i+2] == '{': continue if noise[i + 3] in alphabet and noise[i + 4] in alphabet and noise[i + 5] in alphabet and noise[i + 6] in alphabet: a = a + (noise[i + 3:i + 7]) # arr.append("a") output_file.write(a + '\n') # # print(a) # print('\n') else: continue output_file.close() with open('output.txt') as fp: while line: noiseflag = line.strip() enc = line dec = 'actf' k = '' alphabet = string.ascii_lowercase for i in range(0,4): need_index = (alphabet.index(enc[i]) - alphabet.index(dec[i]) + 26) % 26 k = k + alphabet[need_index] msg = '' def decrypt(ciph, key): ret = '' i = 0 for m in ciph: if m in alphabet: ret=ret+alphabet[(alphabet.index(m)- alphabet.index(key[i])) % len(alphabet)] i = (i+1) % 4 else: ret = ret + m return ret if decrypt(noiseflag, k)[-4:]=='fleg': dec_input = decrypt(noiseflag, k) print(dec_input.encode()[:-4]) s.sendline(dec_input.encode()[:-4]) This script does not work every time, the problem is that I don't know how to use python3 properly. And the problem is that at the end of the noise it finds a suitable duration of 4 literal characters and the { character, and it starts reading empty characters outside the string, and the python swears at this. How I fixed it (not very correct), I just run the loop to the length of the string - 35. And the principle of the algorithm is simple 1) Reads a line starting with "Challenge", then finds a suitable sequence format. 
For example, dghr{fbhnem}fbhd. 2) Writes all matching sequences to output.txt. 3) Reads the file output.txt line by line. 4) Finds a key such that the first four letters decode to "actf"; if there is a string whose last four letters decode to "fleg", the script sends that string. 5) And so on, about 50 times. Try to fix my mistake and good luck!
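One possible repair of the reading-and-decoding step is sketched below, assuming output.txt has already been written as in the script above; it iterates over the file directly instead of using the undefined `while line:` loop. This is a hedged sketch, not the author's final solver.

```python
import string

alphabet = string.ascii_lowercase

def decrypt(ciph, key):
    # Vigenere decryption over lowercase letters only, as in the challenge source.
    out, i = "", 0
    for m in ciph:
        if m in alphabet:
            out += alphabet[(alphabet.index(m) - alphabet.index(key[i])) % 26]
            i = (i + 1) % len(key)
        else:
            out += m
    return out

def recover_key(candidate, known="actf"):
    # The plaintext always starts with "actf", so the first four letters fix the 4-letter key.
    return "".join(alphabet[(alphabet.index(c) - alphabet.index(p)) % 26]
                   for c, p in zip(candidate[:4], known))

with open("output.txt") as fp:      # iterate the file directly instead of `while line:`
    for line in fp:
        candidate = line.strip()
        if len(candidate) < 8:      # need at least the 4-letter prefix and suffix
            continue
        key = recover_key(candidate)
        plain = decrypt(candidate, key)
        if plain.endswith("fleg"):  # trailing marker appended by the server
            print(plain[:-4])       # answer to send back, without the marker
            break
```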
发布时间: 2019-05-16 摘要点击次数: 全文下载次数: DOI: 10.11834/jig.180509 2019 | Volume 24 | Number 5 图像处理和编码 收稿日期: 2018-09-12; 修回日期: 2018-10-31 基金项目: 国家自然科学基金项目(61462065,61661036) 第一作者简介: 张桂梅, 1970年生, 女, 教授, 主要研究方向为计算机视觉、图像处理和模式识别。E-mail:guimei.zh@163.com;李艳兵, 男, 南昌航空大学机械工程硕士研究生, 主要研究方向为计算机视觉、图像处理和模式识别。E-mail:472944577@qq.com. 中图法分类号: TP391.41 文献标识码: A 文章编号: 1006-8961(2019)05-0700-14 # 关键词 Image inpainting of fractional TV model combined with texture structure Zhang Guimei, Li Yanbing Key Laboratory of Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang 330063, China Supported by: National Natural Science Foundation of China (61462065, 61661036) # Abstract Objective As a fundamental issue in the field of image processing, image inpainting has been widely studied in the past two decades. The goal of image inpainting is to reconstruct the original high-quality image from its corrupted observation. Notably, prior knowledge of images is important in image inpainting. Thus, designing an effective regularization to represent image priors is a critical task for image inpainting. The TV(total variation) model usually exploits local structures and high effectiveness to preserve image edges; thus, it has been found to be widely used in image inpainting. However, the regular term of the TV model is a first-order differential, which usually loses image details and tends to suffer from over-smooth effects owing to the piecewise constant assumption. Fortunately, fractional differential is capable of enhancing low-and intermediate-frequency signals and amplifying high-frequency signals moderately; thus, it is introduced into the TV model. However, the existing fractional TV model is limited with regard to its preservation of the information of images with texture and edge details. Furthermore, it does not fully use prior information such as known edges and textures. Method To address these problems, we propose a new fractional-order TV model that introduces texture structure information into a fractional TV model for image inpainting. A minimum value is used in the TV model to calculate the image gradient when solving the fractional model. Thus, the improved model is robust because it overcomes the problem of the model being non-differentiable at zero point. In this way, the weak texture information is effectively preserved. The improved model determines the texture direction of the region to be restored on the basis of the priors of the known region of the image and fully uses the texture information of the image to improve the accuracy of image inpainting. Result The Barbara and Lena images are selected as test images. The Barbara image presents a large weak texture area. By contrast, the Lena image includes few texture regions and a highly smooth area. Therefore, these two images are used for the experiment. To improve efficiency, we intercept the texture part of the original image and conduct many experiments by using differently sized templates and different orders of fractional differential. Then, the optimal parameters for different images, such as template size and order, can be obtained. The optimal parameters for the Barbara and Lena images are as follows. For the Barbara image, the optimal order is 0.1, and the optimal template size is 3×3 pixels; for the Lena image, the optimal order is 0.9, and the optimal template size is 5×5 pixels. The algorithm is compared with three algorithms with better restoration effects. 
Mean square error (MSE) and peak signal-to-noise ratio (PSNR) are introduced to evaluate the performance of the different methods. Experimental results indicate that the proposed algorithm achieves improved inpainting results. Compared with the TV model, the PSNR values obtained by the proposed method for the Barbara, Lena, and Rock images increase by 5.94%, 8.07%, and 3.85%, respectively; and the MSE values decrease by 48.66%, 65.89%, and 35%, respectively. Relative to the fractional TV model, the proposed method achieves PSNR values for the Barbara image, Lena image, and Rock image that increase by 4.17%, 8.59%, and 1.81%, respectively; its MSE values decrease by 37.90%, 68.00%, and 18.68%, respectively. Conclusion The relationship between inpainting effect, template order, and template size is demonstrated in experiments, thereby providing the basis for selecting optimal parameters. Although the optimal parameters of different types of images are different, the optimal inpainting order is generally between 0 and 1, because the smooth part of the image corresponds to the low-frequency part of the signal. The texture details of the image correspond to the intermediate-frequency part of the signal. Meanwhile, the TV algorithm is not ideal for the weak texture region. To enhance the gradient information of the region, we must improve the low- and intermediate-frequency parts. Therefore, choosing the order between 0 and 1 is recommended. Furthermore, although the optimal order varies with the type of the image, a weak texture region usually results in a small order. Theoretical analysis and experimental results show that the proposed model can effectively improve the accuracy of image restoration relative to the original TV model and the fractional-order TV model. The proposed model is suitable for inpainting images with weak texture and edge information. This model is an important extension of the TV model.
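As an illustration of the role played by the small constant p mentioned in the Method paragraph above, here is a minimal NumPy sketch (not the authors' code) of gradient-descent TV inpainting in which the gradient magnitude is smoothed as sqrt(|∇u|² + p²) so the regularizer is differentiable at zero gradient. It uses a plain first-order gradient rather than the paper's fractional-order operator, and all parameters (step size, iteration count, p, the toy image) are illustrative assumptions.

```python
import numpy as np

def tv_inpaint(image, mask, iters=2000, dt=0.02, p=0.1):
    """Smoothed-TV inpainting by explicit gradient descent.

    image : 2-D float array (values inside the hole are ignored)
    mask  : boolean array, True where pixels are known
    The curvature term uses sqrt(|grad u|^2 + p^2), i.e. a small constant
    that removes the non-differentiability of TV at zero gradient.
    """
    u = image.astype(float).copy()
    known = image.astype(float)
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u          # forward difference in x
        uy = np.roll(u, -1, axis=0) - u          # forward difference in y
        mag = np.sqrt(ux ** 2 + uy ** 2 + p ** 2)
        px, py = ux / mag, uy / mag
        # divergence of (grad u / mag), via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += dt * div
        u[mask] = known[mask]                    # keep known pixels fixed
    return u

# Toy usage: damage a smooth ramp and fill the square hole.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
mask = np.ones_like(img, dtype=bool)
mask[24:40, 24:40] = False                       # unknown region
damaged = np.where(mask, img, 0.0)
restored = tv_inpaint(damaged, mask)
print("mean abs error in hole:", float(np.abs(restored - img)[~mask].mean()))
```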
# Key words
image inpainting; fractional differential; weak texture; TV model; edge detail

# 0 Introduction
Because the TV model can remove noise from images, researchers have in recent years applied it to image inpainting. For example, Chan et al. [9] proposed applying the TV model to inpainting; their method is an inpainting algorithm based on partial differential equations, but it converges slowly and easily produces over-smoothing. To address the slow running speed of PDE-based inpainting algorithms, Telea [10] proposed a new fast inpainting method based on the level set, which solves the speed problem, but level-set-based methods tend to blur regions and preserve edge information poorly. To address this, Li Kaiyu et al. [11] proposed an improved scheme: continuity strength is introduced into the design of the weight function to preserve edge information, and the direction of the isophotes is used to compute the positional relationship between two pixels; when a single point is repaired, a confidence factor is introduced to weight the interpolated point. This algorithm improves inpainting accuracy while maintaining running efficiency, but it does not solve the problem of over-smoothed texture. Reference [8] proposed an inpainting method that combines fractional-order calculus with the TV model. It improves considerably on the original TV model: it preserves texture information in smooth image regions nonlinearly, effectively alleviates texture over-smoothing, and restores image detail well. However, that method is still not ideal at preserving texture details and other information with weak derivative properties, and minimizing the model causes computational difficulties, mainly because the regularization term and the data term are not differentiable at zero.

# 1.1 Fractional differential
The Grünwald–Letnikov (G-L) definition is a classical definition of the fractional derivative; in essence it is derived from the integer-order case. Following the definition of the integer-order derivative, suppose the function $f(t)$ is continuously differentiable on the interval $t\in[a,b]$ ($a<b$, $a,b\in\mathbf{R}$). The first- and second-order derivatives of this continuous function are defined as

$f'(t)=\lim\limits_{h\to 0}\dfrac{f(t+h)-f(t)}{h}$ (1)

$f''(t)=\lim\limits_{h\to 0}\dfrac{f'(t+h)-f'(t)}{h}=\lim\limits_{h\to 0}\dfrac{f(t+2h)-2f(t+h)+f(t)}{h^{2}}$ (2)

$\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{1}{\left|\nabla\boldsymbol{u}_{x}\right|}\left(\boldsymbol{u}_{O}-\boldsymbol{u}_{X}\right)+\lambda_{e}(O)\left(\boldsymbol{u}_{O}-\boldsymbol{u}_{O}^{0}\right)=0$ (8)

$\boldsymbol{u}_{O}=\dfrac{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{\boldsymbol{u}_{X}}{\sqrt{\left|\nabla\boldsymbol{u}_{x}\right|}}+\lambda_{e}(O)\boldsymbol{u}_{O}^{0}}{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{1}{\sqrt{\left|\nabla\boldsymbol{u}_{x}\right|}}+\lambda_{e}(O)}$ (9)

# 2.1 Fractional-order TV model

$J_{\lambda}(\boldsymbol{u})=\int_{\boldsymbol{E}\cup\boldsymbol{D}}\left|\nabla^{\alpha}\boldsymbol{u}\right|\,\mathrm{d}x\,\mathrm{d}y+\dfrac{\lambda}{2}\int_{\boldsymbol{E}}\left|\boldsymbol{u}-\boldsymbol{u}^{0}\right|^{2}\mathrm{d}x\,\mathrm{d}y$ (10)

$-\operatorname{div}\left(\dfrac{\nabla^{\alpha}\boldsymbol{u}}{\left|\nabla^{\alpha}\boldsymbol{u}\right|}\right)+\lambda_{e}\left(\boldsymbol{u}-\boldsymbol{u}^{0}\right)=0$ (11)

$\boldsymbol{u}_{O}=\dfrac{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{\boldsymbol{u}_{X}}{\sqrt{\left|\nabla^{\alpha}\boldsymbol{u}_{x}\right|}}+\lambda_{e}(O)\boldsymbol{u}_{O}^{0}}{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{1}{\sqrt{\left|\nabla^{\alpha}\boldsymbol{u}_{x}\right|}}+\lambda_{e}(O)}$ (12)

# 2.2 Improved fractional-order TV model

$-\operatorname{div}\left(\dfrac{\nabla^{\alpha}\boldsymbol{u}}{\left|\nabla^{\alpha}\boldsymbol{u}\right|+p}\right)+\lambda_{e}\left(\boldsymbol{u}-\boldsymbol{u}^{0}\right)=0$ (13)

$\boldsymbol{u}_{O}=\dfrac{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{\boldsymbol{u}_{X}}{\sqrt{\left|\nabla^{\alpha}\boldsymbol{u}_{x}\right|+p^{2}}}+\lambda_{e}(O)\boldsymbol{u}_{O}^{0}}{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{1}{\sqrt{\left|\nabla^{\alpha}\boldsymbol{u}_{x}\right|+p^{2}}}+\lambda_{e}(O)}$ (14)

# 2.4 Experimental parameter settings

$D^{\alpha}f(x)\overset{\mathrm{FT}}{\Longleftrightarrow}(DF)^{\alpha}(u)=(vu)^{\alpha}F(u)=\hat{h}^{\alpha}(u)F(u),\qquad \hat{h}^{\alpha}(u)=\hat{b}^{\alpha}(u)\,\mathrm{e}^{v\theta^{\alpha}(u)}$ (17)

# 2.5 Algorithm steps
1) Compute the difference image from $\boldsymbol{u}-\boldsymbol{u}^{0}$ and determine the region of the image to be inpainted;
2) Apply
$\boldsymbol{\rho}=\mathop{\arg\min}\limits_{(c_{x},c_{y})\in\boldsymbol{C}}\dfrac{1}{|\boldsymbol{R}|}\sum\limits_{\substack{(u,v)\in\boldsymbol{R}\\ (u',v')\in\boldsymbol{S}}}\left|I(u,v)-I(u',v')\right|$
3) To increase the stability of the model, following Eq. (7), introduce a small value $p$ into the gradient-descent equation of the energy functional, as in
$-\operatorname{div}\left(\dfrac{\nabla^{\alpha}\boldsymbol{u}}{\sqrt{\left|\nabla^{\alpha}\boldsymbol{u}\right|^{2}+p^{2}}}\right)$
4) From Eq. (13), the final simplified update is
$\boldsymbol{u}_{O}=\dfrac{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{\boldsymbol{u}_{X}}{\sqrt{\left|\nabla^{\alpha}\boldsymbol{u}_{x}\right|+p^{2}}}+\lambda_{e}(O)\boldsymbol{u}_{O}^{0}}{\sum\limits_{\substack{X\in\boldsymbol{B}\\ x\in\boldsymbol{B}'}}\dfrac{1}{\sqrt{\left|\nabla^{\alpha}\boldsymbol{u}_{x}\right|+p^{2}}}+\lambda_{e}(O)}$
5) Empirically, shrink $p$ by taking $p=\frac{p}{5}$ at each iteration; after several loop iterations the best value of $p$ is obtained, and $\boldsymbol{u}_{O}$ is output.
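To make the pointwise update in step 4) concrete, the following is a minimal Python/NumPy sketch of one Gauss–Seidel-style relaxation of Eq. (14) at a single pixel. The 4-neighbourhood, the array names, and the choice of $\lambda_e=0$ inside the damaged region are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def update_pixel(u, u0, grad_alpha_mag, damaged, i, j, lam=1.0, p=1e-3):
    """One relaxation step of Eq. (14) at pixel (i, j).

    u: current estimate; u0: observed image; grad_alpha_mag: |∇^α u| per pixel;
    damaged: boolean mask of the inpainting region D (data term switched off there).
    """
    lam_e = 0.0 if damaged[i, j] else lam          # assumed λ_e(O): data term only outside D
    num = lam_e * u0[i, j]
    den = lam_e
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # neighbours X in B
        ni, nj = i + di, j + dj
        w = 1.0 / np.sqrt(grad_alpha_mag[ni, nj] + p ** 2)
        num += w * u[ni, nj]
        den += w
    return num / den
```

Sweeping this update over the region to be inpainted, while shrinking $p$ as in step 5), gives one plausible realization of the iteration described above.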
# 3 Experimental results and analysis

$f_{\mathrm{MSE}}=\dfrac{1}{m\times n}\sum\limits_{i,j}\left(I_{1}(i,j)-I(i,j)\right)^{2}$ (18)

$f_{\mathrm{PSNR}}=10\lg\dfrac{\max\limits_{1\le i\le m,\,1\le j\le n}\left|I_{1}(i,j)\right|^{2}}{\dfrac{1}{m\times n}\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n}\left[I_{1}(i,j)-I(i,j)\right]^{2}}$ (19)

# 3.1 Effect of template size and fractional order on inpainting quality

Table 1 Gray-scale mean square error comparison (Barbara image)

| k | α=0.1 | α=0.3 | α=0.5 | α=0.7 | α=0.9 | α=1.1 | α=1.3 | α=1.5 | α=1.7 | α=1.9 |
|---|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1 | 0.4452 | 0.4460 | 0.4462 | 0.4458 | 0.4472 | 0.4474 | 0.4478 | 0.4474 | 0.4470 | 0.4485 |
| 2 | 0.4454 | 0.4458 | 0.4462 | 0.4464 | 0.4472 | 0.4474 | 0.4476 | 0.4474 | 0.4483 | 0.4491 |
| 3 | 0.4454 | 0.4458 | 0.4460 | 0.4466 | 0.4472 | 0.4474 | 0.4476 | 0.4476 | 0.4489 | 0.4491 |

Note: bold font indicates the best result.

Table 2 Peak signal-to-noise ratio comparison (Barbara image) /dB

| k | α=0.1 | α=0.3 | α=0.5 | α=0.7 | α=0.9 | α=1.1 | α=1.3 | α=1.5 | α=1.7 | α=1.9 |
|---|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1 | 51.645 | 51.637 | 51.635 | 51.639 | 51.626 | 51.624 | 51.620 | 51.624 | 51.628 | 51.614 |
| 2 | 51.643 | 51.639 | 51.635 | 51.633 | 51.626 | 51.624 | 51.622 | 51.624 | 51.616 | 51.608 |
| 3 | 51.643 | 51.639 | 51.637 | 51.631 | 51.626 | 51.624 | 51.622 | 51.622 | 51.610 | 51.608 |

Note: bold font indicates the best result.

Table 3 Gray-scale mean square error comparison (Lena image)

| k | α=0.1 | α=0.3 | α=0.5 | α=0.7 | α=0.9 | α=1.1 | α=1.3 | α=1.5 | α=1.7 | α=1.9 |
|---|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1 | 0.0445 | 0.0445 | 0.0441 | 0.0420 | 0.0372 | 0.0372 | 0.0388 | 0.0390 | 0.0376 | 0.0378 |
| 2 | 0.0445 | 0.0445 | 0.0439 | 0.0404 | 0.0367 | 0.0372 | 0.0374 | 0.0374 | 0.0369 | 0.0376 |
| 3 | 0.0445 | 0.0445 | 0.0437 | 0.0402 | 0.0369 | 0.0369 | 0.0372 | 0.0369 | 0.0369 | 0.0376 |

Note: bold font indicates the best result.

Table 4 Peak signal-to-noise ratio comparison (Lena image) /dB

| k | α=0.1 | α=0.3 | α=0.5 | α=0.7 | α=0.9 | α=1.1 | α=1.3 | α=1.5 | α=1.7 | α=1.9 |
|---|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1 | 61.647 | 61.647 | 61.687 | 61.893 | 62.431 | 62.431 | 62.244 | 62.222 | 62.384 | 62.360 |
| 2 | 61.647 | 61.647 | 61.708 | 62.065 | 62.479 | 62.431 | 62.407 | 62.407 | 62.455 | 62.384 |
| 3 | 61.647 | 61.647 | 61.728 | 62.087 | 62.455 | 62.455 | 62.431 | 62.455 | 62.455 | 62.384 |

Note: bold font indicates the best result.

# 3.2 Inpainting performance verification

Table 5 Gray-scale mean square error comparison

| Image | Before inpainting | TV (Ref. [9]) | Ref. [2] algorithm | Ref. [8] algorithm | Proposed algorithm |
|-------|-------------------|---------------|--------------------|--------------------|--------------------|
| Barbara (α=0.1, k=1) | 4.0082 | 0.8671 | 0.5911 | 0.7169 | 0.4452 |
| Lena (α=0.9, k=2) | 2.6016 | 0.1076 | 0.1084 | 0.1147 | 0.0367 |
| Rock (α=0.1, k=1) | 8.6350 | 0.8900 | 1.3662 | 0.7114 | 0.5785 |

Note: bold font indicates the best result for each image.

Table 6 Peak signal-to-noise ratio comparison /dB

| Image | Before inpainting | TV (Ref. [9]) | Ref. [2] algorithm | Ref. [8] algorithm | Proposed algorithm |
|-------|-------------------|---------------|--------------------|--------------------|--------------------|
| Barbara (α=0.1, k=1) | 42.101 | 48.750 | 50.414 | 49.576 | 51.645 |
| Lena (α=0.9, k=2) | 43.979 | 57.814 | 57.781 | 57.535 | 62.479 |
| Rock (α=0.1, k=1) | 38.768 | 48.637 | 46.776 | 49.610 | 50.508 |

Note: bold font indicates the best result for each image.

# References
• [1] Bertalmio M, Sapiro G, Caselles V, et al. Image inpainting[C]//Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. New York, USA: ACM, 2000: 417-424. [DOI:10.1145/344779.344972]
• [2] Criminisi A, Perez P, Toyama K. Object removal by exemplar-based inpainting[C]//Proceedings of 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Madison, WI, USA: IEEE, 2003: 721-728. [DOI:10.1109/CVPR.2003.1211538]
• [3] Zhang Y. When is missing data recoverable?[R]. Houston, TX: Rice University, 2006.
• [4] Zhang G M, Sun X X, Liu J X. Fractional total variation denoising model based on adaptive projection algorithm[J]. Pattern Recognition and Artificial Intelligence, 2016, 29(11): 1009-1018. [DOI:10.16451/j.cnki.issn1003-6059.201611006]
• [5] Yang Z Z, Zhou J L, Yan X Y, et al. Image enhancement based on fractional differentials[J]. Journal of Computer-Aided Design & Computer Graphics, 2008, 20(3): 343-348.
• [6] Zhang G M, Sun X X, Liu J X, et al. Research on TV-L1 optical flow model for image registration based on fractional-order differentiation[J]. Acta Automatica Sinica, 2017, 43(12): 2213-2224. [DOI:10.16383/j.aas.2017.c160367]
• [7] Zhang G M, Sun X X, Chen B B, et al. Edge detection algorithm combining fractional order derivative and Canny operator[J]. Journal of Image and Graphics, 2016, 21(8): 1028-1038. [DOI:10.11834/jig.20160807]
• [8] Zhang Y, Pu Y F, Hu J R, et al. A class of fractional-order variational image inpainting models[J]. Applied Mathematics & Information Sciences, 2012, 6(2): 299-306.
• [9] Chan T F, Shen J H. Mathematical models for local nontexture inpaintings[J]. SIAM Journal on Applied Mathematics, 2002, 62(3): 1019-1043. [DOI:10.1137/S0036139900368844]
• [10] Telea A. An image inpainting technique based on the fast marching method[J]. Journal of Graphics Tools, 2004, 9(1): 23-34. [DOI:10.1080/10867651.2004.10487596]
• [11] Li K Y, Sun Y G. Fast image inpainting algorithm introducing continuous strength and confidence factor[J]. Journal of Image and Graphics, 2012, 17(4): 465-470. [DOI:10.11834/jig.20120403]
• [12] Cafagna D. Fractional calculus: a mathematical tool from the past for present engineers[J]. IEEE Industrial Electronics Magazine, 2007, 1(2): 35-40. [DOI:10.1109/MIE.2007.901479]
• [13] Zhu Y G, Zhang G M. Fractional total variation denoising algorithm based on an adaptive residual image[J]. Journal of Image and Graphics, 2017, 22(12): 1677-1689. [DOI:10.11834/jig.170198]
• [14] Shao X X, Guo S X, Wang L. Image mosaic algorithm based on extended phase correlation of edge[J]. Journal of Jilin University, 2010, 28(01).
• [15] Zhang C Y, Zhao Y, Chen H X. Registration of depth and video data based on edge detection[J]. Journal of Jilin University (Information Science Edition), 2011, 58(3): 587-593. [DOI:10.1007/s10971-011-2431-x]
# Publications associated with Galactic dynamics

## Spectroscopy of the Young Stellar Association Price-Whelan 1: origin in the Magellanic Leading Arm and constraints on the Milky Way hot halo

Astrophysical Journal, American Astronomical Society, 887 (2019) 115

DL Nidever, AM Price-Whelan, Y Choi, RL Beaton, TT Hansen, D Boubert, D Aguado, R Ezzeddine, S Oh, NW Evans

We report spectroscopic measurements of stars in the recently discovered young stellar association Price-Whelan 1 (PW 1), which was found in the vicinity of the Leading Arm (LA) of the Magellanic Stream (MS). We obtained Magellan+MIKE high-resolution spectra of the 28 brightest stars in PW 1 and used The Cannon to determine their stellar parameters. We find that the mean metallicity of PW 1 is [Fe/H] = −1.23 with a small scatter of 0.06 dex, and the mean RV is $V_{\mathrm{hel}} = 276.7\,\mathrm{km\,s^{-1}}$ with a dispersion of $11.0\,\mathrm{km\,s^{-1}}$. Our results are consistent in $T_{\mathrm{eff}}$, $\log g$, and [Fe/H] with the young and metal-poor characteristics (116 Myr and [Fe/H] = −1.1) determined for PW 1 from our discovery paper. We find a strong correlation between the spatial pattern of the PW 1 stars and the LA II gas, with an offset of −10.15° in $L_{\mathrm{MS}}$ and +1.55° in $B_{\mathrm{MS}}$. The similarity in metallicity, velocity, and spatial patterns indicates that PW 1 likely originated in LA II. We find that the spatial and kinematic separation between LA II and PW 1 can be explained by ram pressure from Milky Way (MW) gas. Using orbit integrations that account for the LMC and MW halo and outer disk gas, we constrain the halo gas density at the orbital pericenter of PW 1 to be $n_{\mathrm{halo}}(17\,\mathrm{kpc})={2.7}_{-2.0}^{+3.4}\times 10^{-3}\,\mathrm{atoms\,cm^{-3}}$ and the disk gas density at the midplane at $20\,\mathrm{kpc}$ to be $n_{\mathrm{disk}}(20\,\mathrm{kpc},0)={6.0}_{-2.0}^{+1.5}\times 10^{-2}\,\mathrm{atoms\,cm^{-3}}$. We, therefore, conclude that PW 1 formed from the LA II of the MS, making it a powerful constraint on the MW–Magellanic interaction.
# Modular Level Building

## Recommended Posts

First off, I am using C#, and am using the June 2007 SDK for DX9 math and matrices to get some quick results. Skip to the bottom if you only want to read the problem I need help with.

As a preface: I have asked several people about this, and they basically just sent me to Wikipedia's linear algebra page. Well, that doesn't work for me. I can't just read top-level gibberish and immediately understand it. I can, however, understand a quick tutorial on calculating a face's normals where they actually express it in a readable programming language such as C/C++.

Now, here's the description of my program: I want to write an application that can take pre-defined models and piece them together, kind of like an Erector Set, Hot Wheels tracks, or model railroads, stuff like that. In non-layman's terms, I want to write a modular world builder from pre-defined pieces, in which the pieces themselves are pre-defined but the layouts are not. This can have several effects in the end, one such being procedurally generated levels. I'm going to wind up repeating a few things over and over here, just to reiterate and be sure that my question can be understood. This is a post I made on another forum on the subject, with a few edits:

I'm working on a project right now, but I've hit a complete standstill. This problem has to do with taking two 3-dimensional models and putting them together, based on things I call "golden areas". I have defined this golden area to be a quadrilateral (i.e., a rectangle) in which all the vertices are parallel to either the X or Y axis (not going to deal with the Z axis. Forget that.) In C#, my vertices are defined basically like any other (textured) vertex, with an XYZ for normals and an XYZ for position. I do believe that I won't have to do jack to the normals; however, I do have to translate the vertices, as well as the normals (but that can be changed whenever I rotate them, right?) We can assume a number of things (or I'll go insane, mark my words): 1. All golden areas that are of certain sides will be marked with a tag, +garea## where ## is a number defining the gareas that can attach to them (all garea00's will only attach to other garea00's). (The below is the reason I'm posting for help!) Now, here's the real problem(s): I have to determine how much I have to rotate the whole model. This rotation is called yaw. Thus I have to somehow use a method so I can find the orientation of the model, such that whenever it is rotated, I won't have, say, a model intersecting a model (i.e., a box inside a box). Here's a very, very basic example of what I want to do: note that I refer to a golden area as a flat plane with 4 verts and 2 tris where we can attach other golden areas together. In our case these are somewhat purple. Edit: the problem is that there's no real way to find out what side of a triangle is on which side. I'll probably figure this out on my own eventually, but if you have any ideas or links please let me know! (Wikipedia on linear algebra makes me want to kill small adorable animals.) Editx2: I found the answer to figuring out what side a triangle is on, via vectors and normals... in my little program, it too fits the counter-clockwise rule that all normals have to fit under.
A "Golden Area" is an area which other pieces can attach to; I also call them limbs/branches interchangeably. A golden area has 2 faces, each face comprising 3 vertices. The golden area for a limb always faces the inside of the limb (closed world).

-The Problem- I need help with is taking a flat plane that is in one area with whatever rotation (requiring that this plane is parallel to the Z axis or I'll go nuts) and getting how much I need to rotate that plane via yaw (around the Z axis, Z being up and down by my preferences), such that the two golden areas, when they and their corresponding models are lined up, are facing opposite one another.

~~how i shot web and make one object face another when object is plane that is parallel to at least 1 axis (Z) C# DX 9 Matrix maths~~

Edit: After looking around and realizing that my problem wasn't as cut-and-dried as it sounds like it should be, I turned off the raging. Of course, I will be slowly working on my program on my own and perhaps I will figure out what I need, but I'm leaving it de-crappified and not as likely to get the thread locked. Editx5: Or me banned >.> Edit 9001: If this is the wrong forum for this kind of thing, please let me know.

##### Share on other sites

Assuming that you maintain a position and direction for each piece and boiling it all down, are you asking how to rotate a known direction vector to align (or anti-align) it with another known direction vector? If so, the angle of a direction vector (about the Z axis) is atan2( v.y, v.x ). I'm envisioning that you have one or more attachment points for each piece, each attachment point having a position and direction. To attach 2 pieces at chosen attachment points (e.g., point 2 for the first piece, point 3 for the second), calculate the angle between the two attachment directions, rotate one of the pieces about its attachment point* by the appropriate angle, and translate the piece by the difference in attachment points. *Or rotate the piece about its origin and calculate the new attachment point.

##### Share on other sites

Let me get this straight. You have two models, M1 and M2. On M1, there is a known subset of vertices, {u1, u2, ..., un}, which I will call M1's "connector vertices." On M2, there is a known subset of vertices, {v1, v2, ..., vn}, which I will call M2's "connector vertices." You want to find a rigid body motion (a translation and rotation) that brings the connector vertices of M2 to those of M1. I.e., you want a rotation matrix R and a translation vector d such that

R v1 + d = u1
R v2 + d = u2
...
R vn + d = un .

Your unknowns are R and d. You have a bunch of equations. Solve. Here's how: Define V = [v2-v1, ..., vn-v1] and U = [u2-u1, ..., un-u1]. Then what you want for R is

R V = U

which has the least-squares solution

R = U V^T (V V^T)^(-1) .

If this matrix does not end up satisfying det R = 1 and R^T R = I, then it is not a rotation matrix; this tells you that your vertex subsets don't match and the pieces won't join up. Finally,

d = u1 - R v1

or, maybe a little more robustly (you should get the same answer),

d = (1/n)[(u1+...+un) - R (v1+...+vn)] .

Now that you have R and d, your problem is solved. Also note that you need at least 4 connector vertices for this to work; fewer will not uniquely specify the orientation. For the special case of n=4, you can simplify the rotation matrix computation to

R = U V^(-1) .

##### Share on other sites

No code from me for this problem, sorry. But a solution (if I made no mistakes) looks like this: Assume that you have a part already fixed in the world. Its local-to-global transformation matrix (a.k.a. model matrix) is named M1. W.r.t. this local co-ordinate system, the 4 vertices of the said quad are given. Unfortunately, the quad by itself is not sufficient. Assuming it is not a square, there are still 4 possible solutions to map another, equally sized quad from the 2nd part onto the 1st one. Even using the normals of the quads isn't sufficient, because there are still 2 possible solutions. For now, let us assume that you don't have quads but local co-ordinate axes. Such axes have a position and an orientation in space. They are described by a standard transformation matrix built from translation and rotation, i.e. (using column vectors)

A := T * R

Let us define such an axis in part 1 w.r.t. the model transformation of said part 1, so that the global counterpart of that axis is

A1g := M1 * A1l

and similarly for the 2nd part

A2g := M2 * A2l

Let us assume further that the local z direction of such an axis is always defined to point to the inside of the respective part. Then what you want to do is to find a transformation matrix X so that both axes lie one upon the other in global space, but ... there is one caveat. Although both axes' positions should be identical in global space, their orientations should not, because it is defined that the local z direction always points into the part. Hence the global z direction of the one axis must point opposite to the global z direction of the other axis. This also enforces that another direction must be opposite, or else we would have violated the uniform handedness of both axes. So let us say that the y directions should fall coincident, but the x and z should be mirrored. This is expressed by a non-uniform scaling matrix built as

S( -1, 1, -1 )

In summary, we have a formula like

X * M2 * T2l * R2l * S = A1g

to solve for X, which means

X = A1g * ( M2 * T2l * R2l * S )^(-1)

Now remember that you have a quad instead of the axis. One vertex of that quad may be used for the axis' position, but you have to guarantee that both vertices (i.e. one of both quads) match. Then you can use the quad normals as the local z direction. However, there is still one direction missing. You can use the difference vectors to the 2 neighboring vertices as up and right vectors (after normalization). However, that again means that you have to make the order of vertices totally clear. In other words, if you can't work with axes directly, go away from quads and use tris instead, where 1 vertex is marked as the origin and the other two vertices are given in a defined order (e.g. CCW).

EDIT: Oops, in the meanwhile Emergent has given a complete answer, too.
However, it uses another method.

##### Share on other sites

Thanks for the help guys!

@ Emergent - I'm not used to the notations of upper-level maths, so when you defined U and V, you're talking about a 4x4 matrix right? Also, what does the T in V^T, R^T signify?

@ haegarr - I haven't read through your response all the way yet, but it sounds pretty thorough. I do have one question:

> don't have quads but local co-ordinate axes

Are you talking about using the center point of the quad as an origin for the coordinate axes?

##### Share on other sites

> @ Emergent - I'm not used to the notations of upper-level maths, so when you defined U and V, you're talking about a 4x4 Matrix right?

In the line above, I wrote

> V = [v2-v1, ..., vn-v1] and U = [u2-u1, ..., un-u1].

What this means is that V and U are both 3x(n-1) matrices (where, recall, 'n' was the number of "connector vertices"). Each column is a vector difference. I.e., the first column of V is the vector from the vertex v1 to the vertex v2; this is v2-v1. The second column of V is v3-v1; the third is v4-v1; etc. You should also know that 'R' is a 3x3 rotation matrix, and 'd' is a 3x1 vector. If you're more familiar with 4x4 homogeneous matrices, R is the upper left 3x3 subblock, and d is the upper right subblock; i.e., you can put them together into the 4x4 matrix

[ R d ]
[ 0 1 ]

As for the matrix equation R V = U... Remember we started with the vector equations,

R v1 + d = u1
R v2 + d = u2
...
R vn + d = un .

If you subtract the first equation from the rest, you get

R v2 - R v1 = u2 - u1
...
R vn - R v1 = un - u1

or just

R (v2 - v1) = u2 - u1
...
R (vn - v1) = un - u1 .

The matrix equation R V = U is just a compact notation for this system of vector equations.

> Also, what does the T in V^T, R^T signify?

The superscript 'T' means "transpose." You rearrange the elements of a matrix so its columns become rows and vice versa. An mxn matrix becomes an nxm one.

##### Share on other sites

I _think_ I understand it now. If I don't please correct me. Matrix math is rough to get right... Here's some pseudo code I wrote up, in lieu of your method:

n = count of vertices
Matrix V[3][n-1]
Matrix U[3][n-1]
Matrix R[3][3]
Matrix D[1][3]
for i = 1; i < 4; i++
    v[3] = quadV.vert[i] - quadV.vert[0];
    U[3] = quadU.vert[i] - quadU.vert[0];
end for
R = U * VTransposed * (V * VTrans) ^(-1)
if (RTransposed * R == 1) && (Determinant of R == 1)
{
    d = (1/n) * ((sum of quadU vertices) - R * (Sum of all quadV vertices))
}
else Failure

##### Share on other sites

> I _think_ I understand it now. If I don't please correct me.

Yeah, that looks about right. I've made a few edits (below), but I think these fix minor oversights more than conceptual issues.

> Matrix math is rough to get right...

Good matrix libraries produce more readable code and can help a lot here, I think. Have you tried the matrix library Eigen? It's my favorite by far. Anyway, here are the edits:

n = count of vertices
Matrix V[3][n-1]
Matrix U[3][n-1]
Matrix R[3][3]
Matrix D[3][1] // was D[1][3]
for i = 1; i < 4; i++
    for j = 0; j < 3; ++j // This inner loop does the vector subtraction (there are 3 scalar elements in each vector). Ideally, this loop would live inside an overloaded subtraction operator for whatever vector class you use.
        v[j][i-1] = quadV.vert[i][j] - quadV.vert[0][j]; // Fixed column indexing. And I'm assuming quadV.vert[i] is a 3x1 vector, whose jth scalar element is quadV.vert[i][j]
        U[j][i-1] = quadU.vert[i][j] - quadU.vert[0][j];
    end for
end for
//--- No edits below here
R = U * VTransposed * (V * VTrans) ^(-1)
if (RTransposed * R == 1) && (Determinant of R == 1)
{
    d = (1/n) * ((sum of quadU vertices) - R * (Sum of all quadV vertices))
}
else Failure

##### Share on other sites

I just realized something about this method. It works in rotating the pieces together, but it doesn't do anything about the requirement that each quad must face opposite one another. I guess I can just do a normals test to be sure that they are opposite one another after that....

##### Share on other sites

> @ haegarr - I haven't read through your response all the way yet, but it sounds pretty thorough. I do have one question: Are you talking about using the center point of the quad as an origin for the coordinate axes?

The axes (I mean axes like the red arrow for the x direction, green arrow for the y direction, and blue arrow for the z direction, together indicating a co-ordinate system as seen in many DCC packages) are complete handles by themselves ... err, besides perhaps the attached key value(s) to determine which axes are allowed to match. They have a definite location and orientation. It would be best (IMHO) if stitching could be done by using them alone. However, there is a way to calculate an axis from at least 3 vertices if the requirements described in my previous post are met. The quad's center may be used as the origin, as well as any of its corners, or any other point computed by interpolation if you wish. However, not the origin but the directions are the more problematic thing. To have matching directions, you need a defined order of vertices (similar to the 1:1 mapping needed in Emergent's approach). Because of this, I said that dealing with axes will be "smoother" here.
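For readers who want to try the alignment idea from this thread outside of C#, here is a small NumPy sketch. Instead of the explicit formula R = U V^T (V V^T)^(-1), which becomes singular when all four connector vertices are coplanar (as they are for a flat quad), it uses the standard SVD-based Kabsch/Procrustes solution; the function and variable names are made up for illustration and are not from the thread.

```python
import numpy as np

def fit_connector_transform(v_pts, u_pts):
    """Rigid motion (R, d) with R @ v + d ≈ u for matched connector vertices."""
    v_pts = np.asarray(v_pts, dtype=float)   # shape (n, 3): piece being moved
    u_pts = np.asarray(u_pts, dtype=float)   # shape (n, 3): piece already placed
    vc = v_pts - v_pts.mean(axis=0)
    uc = u_pts - u_pts.mean(axis=0)
    H = vc.T @ uc                            # 3x3 cross-covariance matrix
    U_, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U_.T))
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U_.T   # proper rotation, det(R) = +1
    d = u_pts.mean(axis=0) - R @ v_pts.mean(axis=0)
    return R, d
```

The translation d is recovered from the centroids, exactly as in the "more robust" formula quoted earlier in the thread.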
# qiskit.transpiler.passes.CommutationAnalysis

class CommutationAnalysis(*args, **kwargs) [source]

Analysis pass to find commutation relations between DAG nodes.

property_set['commutation_set'] is a dictionary that describes the commutation relations on a given wire: all the gates on a wire are grouped into sets of gates that commute.

TODO: the current pass determines commutativity through matrix multiplication. A rule-based analysis would be potentially faster, but more limited.

__init__() [source]
Initialize self. See help(type(self)) for accurate signature.

Methods

- __init__(): Initialize self. See help(type(self)) for accurate signature.
- name(): Return the name of the pass.
- run(dag): Run the CommutationAnalysis pass on dag.

Attributes

- is_analysis_pass: Check if the pass is an analysis pass.
- is_transformation_pass: Check if the pass is a transformation pass.

property is_analysis_pass
Check if the pass is an analysis pass. If the pass is an AnalysisPass, that means that the pass can analyze the DAG and write the results of that analysis in the property set. Modifications on the DAG are not allowed by this kind of pass.

property is_transformation_pass
Check if the pass is a transformation pass. If the pass is a TransformationPass, that means that the pass can manipulate the DAG, but cannot modify the property set (but it can be read).

name()
Return the name of the pass.

run(dag) [source]
Run the CommutationAnalysis pass on dag. Run the pass on the DAG, and write the discovered commutation relations into the property_set.
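A minimal usage sketch (not part of the reference page above): running the pass directly on a DAG and reading the resulting property set. Exact import paths and the need to attach a PropertySet manually can vary between Qiskit versions, so treat this as illustrative.

```python
from qiskit import QuantumCircuit
from qiskit.converters import circuit_to_dag
from qiskit.transpiler import PropertySet
from qiskit.transpiler.passes import CommutationAnalysis

# Small circuit with gates that commute on qubit 0 (Z commutes with the CX control).
qc = QuantumCircuit(2)
qc.z(0)
qc.cx(0, 1)
qc.z(0)

dag = circuit_to_dag(qc)
analysis = CommutationAnalysis()
analysis.property_set = PropertySet()   # normally provided by a PassManager
analysis.run(dag)

# Dictionary of commuting gate groups, keyed by wire.
print(analysis.property_set["commutation_set"])
```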
# Reconsidering the 3+3 Dose Escalation in Oncology Studies

When it comes to first-in-human clinical trials in oncology, most clinical researchers opt for a simple, tried-and-true approach to finding a recommended phase 2 dose (RP2D), which in oncology is typically a standard 3+3 dose escalation design. This automatic default to the 3+3 design when developing a phase 1 protocol may be due to a lack of understanding of why we should consider alternative designs, and a lack of knowledge about what other options even exist.

In recent years, we have seen an emergence of molecularly targeted agents (MTAs) and immunotherapies as well as a transition away from cytotoxic agents. As the treatment landscape changes, it is becoming increasingly important for researchers to consider alternative study designs. For instance, patients being treated with MTAs and immunotherapies may not experience a dose limiting toxicity (DLT), and the side effects of these compounds may not be dose-dependent; therefore, dose escalating until the maximum tolerated dose (MTD) is identified may not be relevant. Aside from the 3+3 dose escalation, there are alternative phase 1 study designs that should be considered, which will help achieve the RP2D faster while exposing fewer patients to lower, less effective doses of the study drug. Alternate study designs have proven to be safer and more reliable when compared with the industry-standard 3+3 design, and some can be easily implemented with minimal statistical complexities.

Rule-Based Designs

Rule-based designs apply simple rules to allow for step-up or step-down dosing in the absence or presence of toxicities seen at each dose level. The most widely used rule-based design in clinical practice is the traditional 3+3 design. Most people who have ever worked on an oncology clinical trial have experience with this design. The 3+3 design enrolls 3 patients at a given dose level. If no patients experience an adverse event that qualifies as a protocol-defined DLT, then dosing may proceed to the next dose level. If one patient experiences a DLT within a specified timeframe (typically 1 cycle), then that dose level is expanded to include 3 more patients. If a second patient at that same dose level experiences a DLT, then the MTD is considered to have been exceeded and the next-lower dose level should be expanded to confirm the MTD.

By far the most popular phase 1 study design in oncology, the 3+3 dose escalation has been utilized in more than 95% of published phase 1 trials in the past two decades (Ji & Wang, 2013). One of the benefits of the 3+3 design (and likely a major factor in its popularity) is that it is very simple and can be easily understood by study investigators and clinical researchers. The logistical simplicity of the design, together with clinicians' and researchers' familiarity with the escalation rules, is likely precluding exploration and implementation of novel study designs (Hansen et al., 2014). However, our continued reliance on the 3+3 design should be questioned. Statistical simulations have shown that the MTD is identified in as few as 30% of clinical trials that utilized a 3+3 design (Hansen et al., 2014). Furthermore, the 3+3 design exposes an unnecessary number of patients to subtherapeutic doses. Due to the number of escalations and the number of patients required to be treated at each dose level, a large proportion of patients are treated at low doses that are potentially subtherapeutic, while few patients receive doses at or close to the RP2D (Le Tourneau et al., 2009; Simon et al., 1997).
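To illustrate the escalation rule just described, here is a toy Python simulation of a single 3+3 trial. The per-dose DLT probabilities are hypothetical inputs, and the confirmation cohort at the declared MTD is omitted for brevity; this is a sketch of the rule, not a validated trial-design tool.

```python
import random

def simulate_3_plus_3(true_dlt_probs, seed=None):
    """Run one simulated 3+3 trial over increasing dose levels.

    true_dlt_probs: assumed probability of a DLT at each dose level (low to high).
    Returns (index of the declared MTD or None, total patients treated).
    """
    rng = random.Random(seed)
    treated = 0
    mtd = None
    for level, p in enumerate(true_dlt_probs):
        dlts = sum(rng.random() < p for _ in range(3))   # first cohort of 3
        treated += 3
        if dlts == 1:                                    # expand to 6 at this dose
            dlts += sum(rng.random() < p for _ in range(3))
            treated += 3
        if dlts >= 2:                                    # MTD exceeded
            mtd = level - 1 if level > 0 else None
            break
        mtd = level                                      # 0/3 or 1/6 DLTs: escalate
    return mtd, treated

print(simulate_3_plus_3([0.05, 0.10, 0.25, 0.45], seed=1))
```

Running such a simulation many times over assumed toxicity curves is the kind of exercise behind the operating-characteristic comparisons cited below.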
Model-Based Designs

Model-based designs use data from each dose level to model a dose-toxicity curve and provide a confidence interval for the RP2D, once achieved (Le Tourneau et al., 2009). While these designs do require biostatistics expertise and statistical modeling software, model-based designs can achieve better estimations of the target probability of a DLT at the RP2D while minimizing suboptimal dosing (Hansen et al., 2014). Particularly for agents with a low expected toxicity profile, it may make sense to consider a model-based design, given that model-based designs assume a relationship between the study drug dose and the likelihood of occurrence of a DLT. Furthermore, in model-based designs, medical decisions are based on statistical inference, which reduces subjectivity in the dose escalation decision-making process (Ji & Wang, 2013).

Traditional model-based designs such as the continual reassessment method (CRM) were first introduced nearly 30 years ago, and yet they are still scarcely used in practice due to a perception of being too statistically complex (Wheeler et al., 2019; Yan et al., 2017). More recently, there has been increased interest in combination designs that incorporate the simplicity of a rule-based design with the better performance of a model-based design; a model is used for decision making, but it allows the decision-making rules to be pre-tabulated before the trial begins (Yan et al., 2017). One such combination design, the modified Toxicity Probability Interval (mTPI) design, is as simple and transparent as the 3+3 design and costs less to implement (Ji & Wang, 2013). The mTPI design requires a biostatistician to generate a simple decision table to be included in the protocol based on the number of planned dose levels in the study. In the decision table, the dose may be escalated, de-escalated, or eliminated based on the number of subjects treated and the number of DLTs. "Eliminate" means that the current and higher doses will be eliminated from the trial to prevent treating any future subjects at these doses because they are overly toxic. In a simulation of 2,000 trials comparing the operating characteristics of the 3+3 design and the mTPI design, it was concluded that the mTPI design is safer than the 3+3 design, because it treats fewer patients at doses above the MTD, and that it is more likely to identify the true MTD (Ji & Wang, 2013).

One of the drawbacks of the mTPI design, and model-based designs in general, is that while they can accelerate dose escalation by treating fewer patients at sub-therapeutic dose levels, the inclusion of one patient per dose level may also deprive the study team of data on interpatient pharmacokinetic variability (Le Tourneau et al., 2009). However, this limitation can easily be addressed by expanding the cohort size if additional PK data are needed.
Looking Beyond the 3+3

The limitations of the 3+3 study design and the potential of alternative designs have been discussed for decades, with little to no increase in the number of phase 1 studies utilizing alternate designs. In 1997, a simulation comparing the 3+3 design with 3 accelerated titration designs was conducted. The results showed that the alternate designs were favorable for a number of reasons: they reduced the duration of trials, reduced the number of patients exposed to subtherapeutic doses, and provided an estimate of the population distribution of the MTD where the 3+3 design did not (Simon et al., 1997). Nearly 25 years later, these results have had seemingly little to no impact on clinical trial designs, as the 3+3 design continues to be commonly utilized.

Further substantiating the 1997 simulation, a recent comparison of 172 rule-based and model-based oncology trials was conducted. The results showed that rule-based designs took a median of 10 months longer than model-based designs to complete, that fewer patients were treated at sub-optimal dose levels in model-based versus rule-based studies, and that despite the savings in time and the minimization of suboptimal treatment, safety was preserved in the model-based designs (Brummelen et al., 2016). While alternative designs have remained more the exception than the rule, FDA has begun encouraging more innovative and adaptive designs in early phase studies; recent guidance calls out a need to consider adaptive trial designs in exploratory and dose-finding studies as a way to ensure optimal dose selection while affording the opportunity to learn more about exposure, pharmacodynamics, and variability in patient response (FDA, 2019).

As new drugs are brought to the clinic, it is important to understand that there is no one best escalation scheme that can be applied across all scenarios. While the 3+3 design still may be appropriate in some situations, clinical researchers should take into consideration the mechanism of action of their drug as well as the expected toxicity profile when considering study design, and resist opting for the 3+3 design in every instance for its simplicity. Implementing alternative study designs as a means to efficiently achieve a safe and optimal recommended phase 2 dose is a crucial consideration for clinical researchers in 2020 and beyond.

References

Brummelen, E. M. J. V., Huitema, A. D. R., Werkhoven, E. V., Beijnen, J. H., & Schellens, J. H. M. (2016). The performance of model-based versus rule-based phase I clinical trials in oncology. Journal of Pharmacokinetics and Pharmacodynamics, 43(3), 235-242. doi: 10.1007/s10928-016-9466-0

FDA. Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. 2019. https://www.fda.gov/downloads/drugs/guidances/ucm201790.pdf. Accessed 18 Feb 2020.

Hansen, A. R., Graham, D. M., Pond, G. R., & Siu, L. L. (2014). Phase 1 trial design: is 3+3 the best? Cancer Control, 21(3), 200-208.

Ji, Y., & Wang, S. J. (2013). Modified toxicity probability interval design: a safer and more reliable method than the 3+3 design for practical phase I trials. Journal of Clinical Oncology, 31(14), 1785.

Le Tourneau, C., Lee, J. J., & Siu, L. L. (2009). Dose escalation methods in phase I cancer clinical trials. JNCI: Journal of the National Cancer Institute, 101(10), 708-720.

Simon, R., Rubinstein, L., Arbuck, S. G., Christian, M. C., Freidlin, B., & Collins, J. (1997). Accelerated titration designs for phase I clinical trials in oncology. Journal of the National Cancer Institute, 89(15), 1138-1147.

Wheeler, G. M., Mander, A. P., Bedding, A., Brock, K., Cornelius, V., Grieve, A. P., … & Bond, S. J. (2019). How to design a dose-finding study using the continual reassessment method. BMC Medical Research Methodology, 19(1), 1-15.
Yan, F., Mandrekar, S. J., & Yuan, Y. (2017). Keyboard: a novel Bayesian toxicity probability interval design for phase I clinical trials. Clinical Cancer Research, 23(15), 3994-4003.
The OpenVX Specification (dba1aa3): Tensor TableLookUp

## Detailed Description

Performs LUT on element values in the input tensor data. This kernel uses each element in a tensor to index into a LUT and puts the indexed LUT value into the output tensor. The tensor types supported are VX_TYPE_UINT8 and VX_TYPE_INT16. Signed inputs are cast to unsigned before being used as input indexes to the LUT.

## Functions

- vx_node VX_API_CALL vxTensorTableLookupNode (vx_graph graph, vx_tensor input1, vx_lut lut, vx_tensor output): [Graph] Performs LUT on element values in the input tensor data.
- vx_status VX_API_CALL vxuTensorTableLookup (vx_context context, vx_tensor input1, vx_lut lut, vx_tensor output): [Immediate] Performs LUT on element values in the input tensor data.

## Function Documentation

vx_node VX_API_CALL vxTensorTableLookupNode (vx_graph graph, vx_tensor input1, vx_lut lut, vx_tensor output)

[Graph] Performs LUT on element values in the input tensor data.

Parameters:
- [in] graph: The handle to the graph.
- [in] input1: Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16 with fixed_point_position 8, and tensor data type VX_TYPE_UINT8 with fixed_point_position 0.
- [in] lut: The look-up table to use, of type vx_lut. The elements of input1 are treated as unsigned integers to determine an index into the look-up table. The data type of the items in the look-up table must match that of the output tensor.
- [out] output: The output tensor data with the same dimensions as the input tensor data.

Returns: vx_node. A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.

vx_status VX_API_CALL vxuTensorTableLookup (vx_context context, vx_tensor input1, vx_lut lut, vx_tensor output)

[Immediate] Performs LUT on element values in the input tensor data.

Parameters:
- [in] context: The reference to the overall context.
- [in] input1: Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16 with fixed_point_position 8, and tensor data type VX_TYPE_UINT8 with fixed_point_position 0.
- [in] lut: The look-up table to use, of type vx_lut. The elements of input1 are treated as unsigned integers to determine an index into the look-up table. The data type of the items in the look-up table must match that of the output tensor.
- [out] output: The output tensor data with the same dimensions as the input tensor data.

Returns: A vx_status_e enumeration.

Return values:
- VX_SUCCESS: Success.
- *: An error occurred. See vx_status_e.
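The following NumPy sketch models the behaviour described above; it is not an OpenVX call. Each input element is reinterpreted as an unsigned index into the LUT, and the output takes the input's shape and the LUT's element type. The exact index mapping for signed inputs (including any LUT offset an implementation may apply) is not spelled out here, so treat this as one plausible reading.

```python
import numpy as np

def tensor_table_lookup(input_tensor, lut):
    """Pure-NumPy model of the Tensor TableLookUp semantics sketched above."""
    if input_tensor.dtype == np.int16:
        indices = input_tensor.view(np.uint16).astype(np.intp)  # reinterpret signed as unsigned
    elif input_tensor.dtype == np.uint8:
        indices = input_tensor.astype(np.intp)
    else:
        raise TypeError("only VX_TYPE_UINT8 and VX_TYPE_INT16 inputs are modelled")
    return lut[indices]   # output has the input's shape and the LUT's dtype
```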
gbm (from gbm v2.1.1)

Generalized Boosted Regression Modeling

Fits generalized boosted regression models.

Keywords: models, nonparametric, tree, nonlinear, survival

Usage

gbm(formula = formula(data), distribution = "bernoulli", data = list(), weights, var.monotone = NULL, n.trees = 100, interaction.depth = 1, n.minobsinnode = 10, shrinkage = 0.001, bag.fraction = 0.5, train.fraction = 1.0, cv.folds = 0, keep.data = TRUE, verbose = "CV", class.stratify.cv = NULL, n.cores = NULL)

gbm.fit(x, y, offset = NULL, misc = NULL, distribution = "bernoulli", w = NULL, var.monotone = NULL, n.trees = 100, interaction.depth = 1, n.minobsinnode = 10, shrinkage = 0.001, bag.fraction = 0.5, nTrain = NULL, train.fraction = NULL, keep.data = TRUE, verbose = TRUE, var.names = NULL, response.name = "y", group = NULL)

gbm.more(object, n.new.trees = 100, data = NULL, weights = NULL, offset = NULL, verbose = NULL)

Arguments

formula: a symbolic description of the model to be fit. The formula may include an offset term (e.g. y~offset(n)+x). If keep.data=FALSE in the initial call to gbm then it is the user's responsibility to resupply the offset to gbm.more.

distribution: either a character string specifying the name of the distribution to use or a list with a component name specifying the distribution and any additional parameters needed. If not specified, gbm will try to guess: if the response has only 2 unique values, bernoulli is assumed; otherwise, if the response is a factor, multinomial is assumed; otherwise, if the response has class "Surv", coxph is assumed; otherwise, gaussian is assumed. Currently available options are "gaussian" (squared error), "laplace" (absolute loss), "tdist" (t-distribution loss), "bernoulli" (logistic regression for 0-1 outcomes), "huberized" (huberized hinge loss for 0-1 outcomes), "multinomial" (classification when there are more than 2 classes), "adaboost" (the AdaBoost exponential loss for 0-1 outcomes), "poisson" (count outcomes), "coxph" (right censored observations), "quantile", or "pairwise" (ranking measure using the LambdaMart algorithm). If quantile regression is specified, distribution must be a list of the form list(name="quantile",alpha=0.25) where alpha is the quantile to estimate. The current version's quantile regression method does not handle non-constant weights and will stop. If "tdist" is specified, the default degrees of freedom is 4 and this can be controlled by specifying distribution=list(name="tdist", df=DF) where DF is your chosen degrees of freedom. If "pairwise" regression is specified, distribution must be a list of the form list(name="pairwise",group=...,metric=...,max.rank=...) (metric and max.rank are optional, see below). group is a character vector with the column names of data that jointly indicate the group an instance belongs to (typically a query in Information Retrieval applications). For training, only pairs of instances from the same group and with different target labels can be considered. metric is the IR measure to use, one of:
- conc: Fraction of concordant pairs; for binary labels, this is equivalent to the Area under the ROC Curve
- mrr: Mean reciprocal rank of the highest-ranked positive instance
- map: Mean average precision, a generalization of mrr to multiple positive instances
- ndcg: Normalized discounted cumulative gain. The score is the weighted sum (DCG) of the user-supplied target values, weighted by log(rank+1), and normalized to the maximum achievable value. This is the default if the user did not specify a metric.
ndcg and conc allow arbitrary target values, while binary targets {0,1} are expected for map and mrr. For ndcg and mrr, a cut-off can be chosen using a positive integer parameter max.rank. If left unspecified, all ranks are taken into account. Note that splitting of instances into training and validation sets follows group boundaries and therefore only approximates the specified train.fraction ratio (the same applies to cross-validation folds). Internally, queries are randomly shuffled before training, to avoid bias. Weights can be used in conjunction with pairwise metrics; however, it is assumed that they are constant for instances from the same group. For details and background on the algorithm, see e.g. Burges (2010).

data: an optional data frame containing the variables in the model. By default the variables are taken from environment(formula), typically the environment from which gbm is called. If keep.data=TRUE in the initial call to gbm then gbm stores a copy with the object. If keep.data=FALSE then subsequent calls to gbm.more must resupply the same dataset. It becomes the user's responsibility to resupply the same data at this point.

weights: an optional vector of weights to be used in the fitting process. Must be positive but do not need to be normalized. If keep.data=FALSE in the initial call to gbm then it is the user's responsibility to resupply the weights to gbm.more.

var.monotone: an optional vector, the same length as the number of predictors, indicating which variables have a monotone increasing (+1), decreasing (-1), or arbitrary (0) relationship with the outcome.

n.trees: the total number of trees to fit. This is equivalent to the number of iterations and the number of basis functions in the additive expansion.

cv.folds: Number of cross-validation folds to perform. If cv.folds>1 then gbm, in addition to the usual fit, will perform a cross-validation and calculate an estimate of generalization error, returned in cv.error.

interaction.depth: The maximum depth of variable interactions. 1 implies an additive model, 2 implies a model with up to 2-way interactions, etc.

n.minobsinnode: minimum number of observations in the trees' terminal nodes. Note that this is the actual number of observations, not the total weight.

shrinkage: a shrinkage parameter applied to each tree in the expansion. Also known as the learning rate or step-size reduction.

bag.fraction: the fraction of the training set observations randomly selected to propose the next tree in the expansion. This introduces randomness into the model fit. If bag.fraction < 1 then running the same model twice will result in similar but different fits. gbm uses the R random number generator, so set.seed can ensure that the model can be reconstructed. Preferably, the user can save the returned gbm.object using save.

train.fraction: The first train.fraction * nrows(data) observations are used to fit the gbm and the remainder are used for computing out-of-sample estimates of the loss function.

nTrain: An integer representing the number of cases on which to train. This is the preferred way of specification for gbm.fit; the option train.fraction in gbm.fit is deprecated and only maintained for backward compatibility. These two parameters are mutually exclusive. If both are unspecified, all data is used for training.

keep.data: a logical variable indicating whether to keep the data and an index of the data stored with the object.
Keeping the data and index makes subsequent calls to gbm.more faster at the cost of storing an extra copy of the dataset.

object: a gbm object created from an initial call to gbm.

n.new.trees: the number of additional trees to add to object.

verbose: If TRUE, gbm will print out progress and performance indicators. If this option is left unspecified for gbm.more then it uses verbose from object.

class.stratify.cv: whether or not the cross-validation should be stratified by class. Defaults to TRUE for distribution="multinomial" and is only implemented for multinomial and bernoulli. The purpose of stratifying the cross-validation is to help avoid situations in which training sets do not contain all classes.

x, y: For gbm.fit: x is a data frame or data matrix containing the predictor variables and y is the vector of outcomes. The number of rows in x must be the same as the length of y.

offset: a vector of values for the offset.

misc: For gbm.fit: misc is an R object that is simply passed on to the gbm engine. It can be used for additional data for the specific distribution. Currently it is only used for passing the censoring indicator for the Cox proportional hazards model.

w: For gbm.fit: w is a vector of weights of the same length as y.

var.names: For gbm.fit: A vector of strings of length equal to the number of columns of x containing the names of the predictor variables.

response.name: For gbm.fit: A character string label for the response variable.

group: group used when distribution = 'pairwise'.

n.cores: The number of CPU cores to use. The cross-validation loop will attempt to send different CV folds off to different cores. If n.cores is not specified by the user, it is guessed using the detectCores function in the parallel package. Note that the documentation for detectCores makes clear that it is not failsafe and could return a spurious number of available cores.

Details

See the gbm vignette for technical details. This package implements the generalized boosted modeling framework. Boosting is the process of iteratively adding basis functions in a greedy fashion so that each additional basis function further reduces the selected loss function. This implementation closely follows Friedman's Gradient Boosting Machine (Friedman, 2001). In addition to many of the features documented in the Gradient Boosting Machine, gbm offers additional features including the out-of-bag estimator for the optimal number of iterations, the ability to store and manipulate the resulting gbm object, and a variety of other loss functions that had not previously had associated boosting algorithms, including the Cox partial likelihood for censored data, the Poisson likelihood for count outcomes, and a gradient boosting implementation to minimize the AdaBoost exponential loss function. gbm.fit provides the link between R and the C++ gbm engine. gbm is a front-end to gbm.fit that uses the familiar R modeling formulas. However, model.frame is very slow if there are many predictor variables. For power users with many variables, use gbm.fit. For general practice gbm is preferable.

Value

gbm, gbm.fit, and gbm.more return a gbm.object.

References

Y. Freund and R.E. Schapire (1997) "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, 55(1):119-139.

G. Ridgeway (1999). "The state of boosting," Computing Science and Statistics 31:172-181.
J.H. Friedman, T. Hastie, R. Tibshirani (2000). "Additive Logistic Regression: a Statistical View of Boosting," Annals of Statistics 28(2):337-374.

J.H. Friedman (2001). "Greedy Function Approximation: A Gradient Boosting Machine," Annals of Statistics 29(5):1189-1232.

J.H. Friedman (2002). "Stochastic Gradient Boosting," Computational Statistics and Data Analysis 38(4):367-378.

B. Kriegler (2007). Cost-Sensitive Stochastic Gradient Boosting Within a Quantitative Regression Framework. PhD dissertation, UCLA Statistics.

C. Burges (2010). "From RankNet to LambdaRank to LambdaMART: An Overview," Microsoft Research Technical Report MSR-TR-2010-82.

The MART website.

See Also

gbm.object, gbm.perf, plot.gbm, predict.gbm, summary.gbm, pretty.gbm.tree.

• gbm
• gbm.more
• gbm.fit

Examples

N <- 1000
X1 <- runif(N)
X2 <- 2*runif(N)
X3 <- ordered(sample(letters[1:4],N,replace=TRUE),levels=letters[4:1])
X4 <- factor(sample(letters[1:6],N,replace=TRUE))
X5 <- factor(sample(letters[1:3],N,replace=TRUE))
X6 <- 3*runif(N)
mu <- c(-1,0,1,2)[as.numeric(X3)]

SNR <- 10 # signal-to-noise ratio
Y <- X1**1.5 + 2 * (X2**.5) + mu
sigma <- sqrt(var(Y)/SNR)
Y <- Y + rnorm(N,0,sigma)

# introduce some missing values
X1[sample(1:N,size=500)] <- NA
X4[sample(1:N,size=300)] <- NA

data <- data.frame(Y=Y,X1=X1,X2=X2,X3=X3,X4=X4,X5=X5,X6=X6)

# fit initial model
gbm1 <- gbm(Y~X1+X2+X3+X4+X5+X6,         # formula
            data=data,                   # dataset
            var.monotone=c(0,0,0,0,0,0), # -1: monotone decrease,
                                         # +1: monotone increase,
                                         #  0: no monotone restrictions
            distribution="gaussian",     # see the help for other choices
            n.trees=1000,                # number of trees
            shrinkage=0.05,              # shrinkage or learning rate,
                                         # 0.001 to 0.1 usually work
            interaction.depth=3,         # 1: additive model, 2: two-way interactions, etc.
            bag.fraction = 0.5,          # subsampling fraction, 0.5 is probably best
            train.fraction = 0.5,        # fraction of data for training,
                                         # first train.fraction*N used for training
            n.minobsinnode = 10,         # minimum total weight needed in each node
            cv.folds = 3,                # do 3-fold cross-validation
            keep.data=TRUE,              # keep a copy of the dataset with the object
            verbose=FALSE,               # don't print out progress
            n.cores=1)                   # use only a single core (detecting #cores is
                                         # error-prone, so avoided here)

# check performance using an out-of-bag estimator
# OOB underestimates the optimal number of iterations
best.iter <- gbm.perf(gbm1,method="OOB")
print(best.iter)

# check performance using a 50% heldout test set
best.iter <- gbm.perf(gbm1,method="test")
print(best.iter)

# check performance using 5-fold cross-validation
best.iter <- gbm.perf(gbm1,method="cv")
print(best.iter)

# plot the performance
# plot variable influence
summary(gbm1,n.trees=1)         # based on the first tree
summary(gbm1,n.trees=best.iter) # based on the estimated best number of trees

# compactly print the first and last trees for curiosity
print(pretty.gbm.tree(gbm1,1))
print(pretty.gbm.tree(gbm1,gbm1$n.trees))

# make some new data
N <- 1000
X1 <- runif(N)
X2 <- 2*runif(N)
X3 <- ordered(sample(letters[1:4],N,replace=TRUE))
X4 <- factor(sample(letters[1:6],N,replace=TRUE))
X5 <- factor(sample(letters[1:3],N,replace=TRUE))
X6 <- 3*runif(N)
mu <- c(-1,0,1,2)[as.numeric(X3)]
Y <- X1**1.5 + 2 * (X2**.5) + mu + rnorm(N,0,sigma)

data2 <- data.frame(Y=Y,X1=X1,X2=X2,X3=X3,X4=X4,X5=X5,X6=X6)

# predict on the new data using "best" number of trees
# f.predict generally will be on the canonical scale (logit,log,etc.)
f.predict <- predict(gbm1,data2,best.iter)

# least squares error
print(sum((data2$Y-f.predict)^2))

# create marginal plots
# plot variable X1,X2,X3 after "best" iterations
par(mfrow=c(1,3))
plot(gbm1,1,best.iter)
plot(gbm1,2,best.iter)
plot(gbm1,3,best.iter)
par(mfrow=c(1,1))

# contour plot of variables 1 and 2 after "best" iterations
plot(gbm1,1:2,best.iter)

# lattice plot of variables 2 and 3
plot(gbm1,2:3,best.iter)

# lattice plot of variables 3 and 4
plot(gbm1,3:4,best.iter)

# 3-way plots
plot(gbm1,c(1,2,6),best.iter,cont=20)
plot(gbm1,1:3,best.iter)
plot(gbm1,2:4,best.iter)
plot(gbm1,3:5,best.iter)

# do another 100 iterations
gbm2 <- gbm.more(gbm1,100,
                 verbose=FALSE) # stop printing detailed progress

Documentation reproduced from package gbm, version 2.1.1, License: GPL (>= 2) | file LICENSE
# Deep Learning in Text Classification

In the Divine Comedy, Minos is a daemon appointed to guard the entrance of hell. He listens to the sins of souls and assigns them their destinations by wrapping his tail as many times as the number of the assigned circle. The figure is emblematic of machine learning classification, where an entity is identified as belonging to one category or another. Rather than condemning souls to endless pain, the harmless tool I am describing can judge whether a user's utterance belongs to a specific intention, or to a limited range of emotions. Namely, it can serve intention recognition and sentiment analysis.

In the realm of conversational commerce, the examined sentence could be:

I want to buy some apples and pears

The system recognizes the intention search and presents the results. Intention prediction is not an untackled problem, and the market offers plenty of services. There are many players such as Google (Api.ai), Facebook (Wit.ai) and Microsoft (Luis.ai), just to mention some of them, but this shouldn't prevent further exploration of the topic, sometimes with unexpected positive surprises, as shown in the graph. The test was performed against real data used for training the deployed model of the Chatbot system, and the results are relevant for the real working scenario, so no cherry picking in this case. 300 training samples, 56 test samples for 25 classes: these are the dataset's numbers.

Minos, the text classifier, uses an ensemble of machine learning models. It combines multiple classifiers to get a good prediction out of utterances submitted to Charly. One of the models is based on Convolutional Neural Networks (CNN).

## CNN in NLP

CNNs are mostly applied to image recognition thanks to their tolerance of translations (rotations, distortions) and the compositionality principle (entities are composed of their constituents). Admittedly, CNNs might appear counter-intuitive at first because text looks very different from images:

1. The order of the words in text is not as important as the order of the pixels in an image.
2. Humans perceive text sequentially, not in convolutions.

### Invariance

Entities like images and texts should be compared differently. The smallest atomic element in text is the single character, rather than the word, just like the pixel in images. The proportion is more like:

text : char = image : pixel

From this angle of view, the order of characters in sentences is fundamental. Convolutions in text come in the form of: single words => bi-grams (two adjacent words) => n-grams, just as graphical features build up: lines, corners => mouths, eyes => faces coming out of portraits. In a CNN, the pair adjective + object, for example, can be recognized invariantly of its position, at the beginning or at the end of a sentence, exactly like a face is recognized wherever it is located in the whole picture.

### Sequentiality

It might seem more intuitive to apply Recurrent Neural Networks (like LSTM, Attention or Seq2seq) for text classification, due to the sequential nature of RNN algorithms. I haven't run any tests on them so far, but I would promptly play with TreeLSTM. CNN performs well, and one might say that "Essentially, all models are wrong, but some are useful", an adage that fits with the idea that the final outcome drives the decisions, and experimental results play an important role.
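As a concrete (and deliberately simplified) illustration of the convolutional idea for text, here is a Keras sketch of a small 1-D CNN intent classifier. The vocabulary size, embedding dimension and layer sizes are made-up values, and this is not the actual Minos model.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 20000, 300, 25   # illustrative numbers only

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),      # token ids -> word vectors
    layers.Conv1D(128, 3, activation="relu"),     # 3-gram feature detectors
    layers.GlobalMaxPooling1D(),                  # position-invariant pooling
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The global max pooling step is what gives the position invariance discussed above: a strong bi-gram or tri-gram activation is kept wherever it occurs in the sentence.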
I decided to make use of ConceptNet Numberbatch, which took first place in both subtasks at SemEval 2017 Task 2. Moreover, the vector file is very small (~250 MB) compared to Google News word2vec (~1.5 GB), and from an engineering point of view those numbers matter. Minos is still experimental and not well tuned, so the door is open for improvements. One aspect that shouldn't be ignored when working with CNNs is catastrophic forgetting, an annoying phenomenon that can irrevocably ruin the entire training.
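To make the convolution-over-embeddings idea concrete, here is a minimal sketch of a text CNN in PyTorch. It is not Minos's actual code: the kernel widths, filter count and the 25-class output are illustrative assumptions, and the embedding layer would normally be initialised from the pre-trained Numberbatch vectors.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, num_classes=25,
                 kernel_sizes=(2, 3, 4), num_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # load pre-trained vectors here
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                            # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)        # Conv1d expects (batch, channels, seq_len)
        # each Conv1d acts as an n-gram detector; max-pooling keeps the strongest match
        # wherever it occurs in the sentence, which is the positional invariance discussed above
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(feats, dim=1))
```

Each convolution width corresponds to a different n-gram size, mirroring the word => bi-gram => n-gram progression described above.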
# AIRS SRF Study Figure Viewer

## 1 Purpose and Scope

To investigate the sensitivity of the AIRS radiances to changes in the AIRS Spectral Response Functions (SRFs), and to answer the question of whether, in the long-wave region (AIRS detector modules 11 and 12), the knowledge of the SRFs is in error and whether it can be improved using independent measurements and intercomparison with CrIS and/or IASI. It has been observed in the AIRS:CrIS SNO mean bias that there is a repeatable and significant variation in the region from about 645 to 750 cm-1, that is, for the detectors of modules 11 and 12. This variation is attributed to the AIRS observed radiances, but to date it has not been accounted for, nor has it been determined whether it can be corrected. The scope of this technical note is to record the methods and results.

## 2 Methods

Working directory: /home/chepplew/projects/airs/srf_study/
Main routine therein: airs_srf_analysis.m
Sources (from strow): /home/chepplew/gitLib/srf_model/ and /home/chepplew/gitLib/l1c_freq_adjust/
Intermediate data files: /home/chepplew/projects/airs/srf_study/outd/
Figures: /home/chepplew/projects/airs/srf_study/figs/
Some steps to legalise US dollar trade in black market | Dec. 3, 2014 | Caracas, Venezuela

Event

On November 28th a decision by the president, Nicolás Maduro, that paves the way to legalising US-dollar exchange in the black market was published in the official gazette.

Analysis

The new law lays the groundwork for the government to make the three-tiered currency system more flexible, if and when it decides to do so. It does not immediately legalise the black market, but instead gives the Banco Central de Venezuela (BCV, the Central Bank) the option to do so without seeking the approval of Congress. Currency controls, first implemented by the former president, Hugo Chávez (1999-2013), in 2003, have made dollars scarce in Venezuela, causing widespread shortages of basic goods and pushing up consumer price inflation (which stood at 63% in August, the last month for which the BCV published inflation figures). This has driven the growth of the thriving black market for hard currency. The local currency has plummeted on the black market in recent weeks, a slide that accelerated in the wake of OPEC's decision to maintain global oil production levels, falling by 30% in one month to BsF155:US$1, around 25 times the strongest official exchange rate of BsF6.3:US$1. Venezuela operates three official exchange rates; the strongest rate is reserved for what the government deems to be "essential goods." Importers complain increasingly about the difficulty of obtaining hard currency through official channels and must often resort to purchasing dollars on the black market. The parallel market acts as a safety valve for the country's economy, although currently it is not able to satisfy demand for dollars, leaving supermarket shelves empty, a major political issue for the already-unpopular Maduro administration just before Christmas. Legalising the black market would make it more attractive to importers, narrowing the gap between the black-market rate and the government's official rates.

Impact on the forecast

We have made no changes to our exchange rate forecasts at this time and will await further action by the BCV before doing so.
# ATLAS PUB Notes

- Identification of highly boosted $Z\rightarrow e^+e^-$ decays with the ATLAS detector using deep neural networks. This note describes the development and evaluation of a new algorithm dedicated to identifying highly boosted $Z\rightarrow e^+e^-$ decays. ATL-PHYS-PUB-2022-056, 2022 (posted 2022-12-23).
- Integrating the GBT into standard networks. A data acquisition device that interfaces the custom radiation-hard front-end fibre links to industry-standard Ethernet or InfiniBand networks is proposed. ATL-DAQ-PUB-2022-001, 2022, 2 p. (posted 2022-12-10).
- Monte Carlo modelling of loop-induced $ZH$ production in ATLAS. This note presents and compares Monte Carlo generator predictions for the loop-induced production of a Higgs boson, $H$, in association with a $Z$ boson. ATL-PHYS-PUB-2022-055, 2022, 14 p. (posted 2022-11-25).
- The Serial and LVDS repeaters for the ATLAS New Small Wheel sTGC trigger. Serial and LVDS repeater boards will be used in the electronics chain from the NSW sTGC Pad Front End boards (pFEBs) to the Pad Trigger, from the Pad Trigger to the strip FE boards (sFEBs), and from there to the Router boards. ATL-MUON-PUB-2022-003, 2022, 20 p. (posted 2022-11-23).
- Searches of lepton-flavour-violating decays of the Higgs bosons with the ATLAS detector at the HL-LHC. This note presents a study of the prospects for searches for lepton-flavour-violating decays of the Higgs bosons with $3000\,\mathrm{fb}^{-1}$ of proton-proton collisions at $\sqrt{s} = 14$ TeV using the ATLAS detector at the HL-LHC. ATL-PHYS-PUB-2022-054, 2022, 20 p. (posted 2022-11-17).
- HL-LHC prospects for the measurement of Higgs boson pair production in the $b\bar{b}b\bar{b}$ final state and combination with the $b\bar{b}\gamma\gamma$ and $b\bar{b}\tau^+\tau^-$ final states at the ATLAS experiment. Projection studies for non-resonant Higgs boson pair production in the $b\bar{b}b\bar{b}$ final state with the ATLAS detector at the HL-LHC are presented. ATL-PHYS-PUB-2022-053, 2022 (posted 2022-11-17).
- Towards Common top pair Monte-Carlo Settings for ATLAS and CMS. This is a collection of plots comparing Monte Carlo simulation of top quark pair events by the ATLAS and CMS experiments. ATL-PHYS-PUB-2022-052, 2022, 8 p. (posted 2022-11-17).
- Top working group cross-section summary plots, November 2022. This note presents updated figures that summarise top cross-section results from the ATLAS top working group and the LHCtopWG. ATL-PHYS-PUB-2022-051, 2022, 26 p. (posted 2022-11-09).
- Top quark mass and properties summary plots, November 2022. This note presents plots summarising ATLAS results and combinations of top quark mass measurements and other top quark properties. ATL-PHYS-PUB-2022-050, 2022, 11 p. (posted 2022-11-09).
- Top Quarks + X Summary Plots, November 2022. This note presents summary plots for cross-section measurements performed by ATLAS and CMS within the Top Quarks + X sub-group as of November 2022. ATL-PHYS-PUB-2022-049, 2022 (posted 2022-11-09).
Use of integral data assimilation and differential measurements as a contribution to improve $^{235}$U and $^{238}$U cross sections evaluations in the fast and epithermal energy range - Archive ouverte HAL

Journal Articles, EPJ N - Nuclear Sciences & Technologies, Year: 2018

Authors: Virginie Huy (corresponding author), Gilles Noguere, G. Rimpault

#### Abstract

Critical mass calculations of various HEU-fueled fast reactors result in large discrepancies in C/E values, depending on the nuclear data library used and the configuration modeled. Thus, it seems relevant to use integral experiments to try to reassess cross sections that might be responsible for such a dispersion in critical mass results. This work makes use of the Generalized Least Square method to solve the Bayes equation, as implemented in the CONRAD code. The experimental database used includes ICSBEP uranium-based critical experiments and benefits from recent re-analyses of MASURCA and FCA-IX criticality experiments (with Monte Carlo calculations) and of PROFIL irradiation experiments. The latter provide very specific information on $^{235}$U and $^{238}$U capture cross sections. Due to the high experimental uncertainties associated with fission spectra, we chose either to fit these data or to set them to the JEFF-3.1.1 evaluations. The work focused on the JEFF-3.1.1 $^{235}$U and $^{238}$U evaluations, and the results presented in this paper for $^{235}$U capture, $^{238}$U capture and inelastic cross sections are compared to recent differential experiments or recent evaluations. Our integral experiment assimilation work notably suggests a 30% decrease for $^{235}$U capture around 1-2.25 keV and a 10% increase in the unresolved resonance range when using JEFF-3.1.1 as "a priori" data. These results are in agreement with recent microscopic measurements from Danon et al. [Nucl. Sci. Eng. 187, 291 (2017)] and Jandel et al. [Phys. Rev. Lett. 109, 202506 (2012)]. For $^{238}$U cross sections, results are highly dependent on fission spectra.

#### Domains

Physics [physics]

### Dates and versions

cea-02305677, version 1 (04-10-2019)

### Cite

Virginie Huy, Gilles Noguere, G. Rimpault. Use of integral data assimilation and differential measurements as a contribution to improve $^{235}$U and $^{238}$U cross sections evaluations in the fast and epithermal energy range. EPJ N - Nuclear Sciences & Technologies, 2018, 4, pp.41. ⟨10.1051/epjn/2018035⟩. ⟨cea-02305677⟩
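For readers unfamiliar with the adjustment step, the Generalized Least Square solution of the Bayes equation has the standard linear form sketched below. This is a generic NumPy illustration, not the CONRAD implementation; the variable names and the linearised sensitivity matrix are assumptions made for the example.

```python
import numpy as np

def gls_adjust(x_prior, M_prior, C, E, V_E, S):
    """Generalized least squares (Bayesian) adjustment of nuclear data parameters.

    x_prior : prior parameters (e.g. multigroup cross sections), shape (n,)
    M_prior : prior covariance matrix, shape (n, n)
    C       : calculated integral quantities, shape (m,)
    E       : measured integral quantities, shape (m,)
    V_E     : experimental covariance matrix, shape (m, m)
    S       : sensitivity matrix dC/dx evaluated at the prior, shape (m, n)
    """
    W = S @ M_prior @ S.T + V_E            # covariance of the residual E - C under the linear model
    K = M_prior @ S.T @ np.linalg.inv(W)   # gain matrix
    x_post = x_prior + K @ (E - C)         # adjusted parameters
    M_post = M_prior - K @ S @ M_prior     # reduced posterior covariance
    return x_post, M_post
```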
# $Ξ^-$ and $Ω$ Distributions in Hadron-Nucleus Interactions

Strange baryons have long been known to exhibit a leading particle effect. A recent comparison of $\Xi^-$ production in $\pi^-$, $n$, and $\Sigma^-$ interactions with nuclei shows this effect clearly. These data are supplemented by earlier measurements of $\Xi^-$ and $\Omega$ production by a $\Xi^-$ beam. We calculate the $\Xi^-$ and [...]

Author: R. Vogt; T. D. Gutierrez
Source: https://archive.org/
### FREE Webinar: How to Select your CFD Academic Project?

on September 04, 2014 in Webinar

Any academic degree, be it bachelor's or master's, ends with a project. The project work is one of the most critical parts of any academic degree. It is so important that it often decides what comes next for the student. Be it higher studies or an industrial job, the whole career path (at least the starting point of that path) is based on the project work. Listen to the webinar recording in which basic rules of CFD project selection are discussed. The webinar also covers aspects like the CFD project complexity matrix, the CFD learning curve and sample project ideas.

### Basics of Y Plus, Boundary Layer and Wall Function in Turbulent Flows

on July 25, 2014 in CFD

Before getting into the details of the turbulence models, let us discuss an important concept known as $y^{+}$ and how it is related to turbulence modeling and the mesh generation process, and how it affects the CFD end result. It is important to understand the concept of wall $y^{+}$, or in general how the flow behaves near the wall, because the near-wall treatment is the basis on which the choice of turbulence model is governed.

### Turbulence Parameter Calculator at Inlet Boundary

on July 09, 2014

When modelling turbulent flows in CFD, turbulence models require the specification of turbulence variable values at the inlet boundaries. There are several ways to provide turbulence parameters at boundaries. This calculator gives all the turbulence values based on inlet conditions like velocity or mass flow rate, inlet area and fluid properties. It calculates all turbulence values like Turbulent Kinetic Energy ($k$), Turbulent Dissipation ($\epsilon$), Specific Rate of Dissipation ($\omega$), Turbulent Viscosity Ratio ($\mu_t/\mu$), Turbulence Intensity ($\textit{I}$) and Turbulence Length Scale ($\textit{l}$); a minimal sketch of these calculations is given after the post summaries below.

### Career Path to CFD Engineer

on June 30, 2014 in CFD

In recent years, there has been growing interest from various engineering product companies in performing design simulation studies at different stages of product development to compete in the market. This has consistently increased the demand for skilled CFD resources and is proving to be a very good career opportunity for engineers aspiring to make a career in the interesting domain of heat transfer and fluid flow. However, there seems to be widespread confusion in the student community as to what skills these industries expect from a fresher.

### Introduction to Turbulence and Turbulence Modeling

on May 15, 2014 in CFD

Understanding the turbulent behaviour of fluids is one of the most fascinating, forbidding and critical problems in all of classical physics. Turbulence is omnipresent: most fluid flows are turbulent in nature, from the microscopic level in the interior of biological cells to the macroscopic scales of geophysical and astrophysical phenomena, including planetary interiors, oceans and atmospheres, where it represents the dominant physics of the flow.
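As an illustration of what such an inlet-turbulence calculator computes, here is a minimal Python sketch using the commonly quoted pipe-flow estimates (turbulence intensity from the Reynolds number, length scale from the hydraulic diameter). The exact correlations behind the calculator on this site may differ, and the default fluid properties assume air at standard conditions.

```python
def inlet_turbulence(velocity, hydraulic_diameter,
                     density=1.225, viscosity=1.7894e-5, c_mu=0.09):
    """Estimate inlet turbulence quantities from common pipe-flow correlations."""
    re = density * velocity * hydraulic_diameter / viscosity      # Reynolds number
    intensity = 0.16 * re ** (-1.0 / 8.0)                         # turbulence intensity I
    length_scale = 0.07 * hydraulic_diameter                      # turbulence length scale l
    k = 1.5 * (velocity * intensity) ** 2                         # turbulent kinetic energy k
    epsilon = c_mu ** 0.75 * k ** 1.5 / length_scale              # dissipation rate epsilon
    omega = k ** 0.5 / (c_mu ** 0.25 * length_scale)              # specific dissipation rate omega
    mu_t_ratio = density * c_mu * k ** 2 / (epsilon * viscosity)  # turbulent viscosity ratio
    return {"Re": re, "I": intensity, "l": length_scale, "k": k,
            "epsilon": epsilon, "omega": omega, "mu_t/mu": mu_t_ratio}

# example: 10 m/s of air through a 0.1 m duct
print(inlet_turbulence(10.0, 0.1))
```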
# zbMATH — the first resource for mathematics

Coupling a reactive potential with a harmonic approximation for atomistic simulations of material failure. (English) Zbl 1425.74030

Summary: Molecular dynamics (MD) simulations involving reactive potentials can be used to model material failure. The empirical potentials used in such simulations are able to adapt to the atomic environment, at the expense of a significantly higher computational cost than non-reactive potentials. However, during a simulation of failure, the reactive ability is needed only in some limited parts of the system, where bonds break or form and the atomic environment changes. Therefore, simpler non-reactive potentials can be used in the remainder of the system, provided that such potentials correctly reproduce the behavior of the reactive potential in this region, and that seamless coupling is ensured at the interface between the reactive and non-reactive regions. In this article, we propose a methodology to combine a reactive potential with a non-reactive approximation thereof, made of a set of harmonic pair and angle interactions whose parameters are adjusted to predict the same energy, geometry and Hessian in the ground state of the potential. We present a methodology to construct the non-reactive approximation of the reactive potential, and a way to couple these two potentials. We also propose a criterion for on-the-fly substitution of the reactive potential by its non-reactive approximation during a simulation. We illustrate the correctness of this hybrid technique for the case of MD simulation of failure in two-dimensional graphene originally modeled with the REBO potential.

##### MSC:
74A25 Molecular, statistical, and kinetic theories in solid mechanics
74A45 Theories of fracture and damage

Software: ReaxFF
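To make the construction concrete, here is a minimal sketch of such a non-reactive approximation: harmonic pair and angle terms whose stiffness is matched to the curvature of the reactive potential in its ground state. It is a generic illustration, not the authors' code; the Morse potential and its parameters stand in for the reactive pair interaction purely as an example.

```python
import numpy as np

def harmonic_pair_energy(r, k_bond, r0):
    # non-reactive pair term: 0.5 * k * (r - r0)^2
    return 0.5 * k_bond * (r - r0) ** 2

def harmonic_angle_energy(theta, k_angle, theta0):
    # non-reactive angle term: 0.5 * k_theta * (theta - theta0)^2
    return 0.5 * k_angle * (theta - theta0) ** 2

def fit_bond_stiffness(reactive_pair_energy, r0, h=1e-4):
    """Match the harmonic stiffness to the curvature of the reactive potential at its minimum r0."""
    e = reactive_pair_energy
    return (e(r0 + h) - 2.0 * e(r0) + e(r0 - h)) / h ** 2

# example: a Morse potential standing in for the reactive pair interaction
def morse(r, D=4.7, a=1.9, r0=1.42):
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2

k_fit = fit_bond_stiffness(morse, r0=1.42)
print(k_fit, 2 * 4.7 * 1.9 ** 2)            # finite-difference fit vs analytic curvature 2*D*a^2
print(harmonic_pair_energy(1.45, k_fit, 1.42))   # harmonic energy for a slightly stretched bond
```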
# Implementing Deep Reinforcement Learning with PyTorch: Deep Q-Learning

In this article we will look at several implementations of deep reinforcement learning with PyTorch. This article is based on notes from the course Modern Reinforcement Learning: Deep Q Learning in PyTorch and is organized as follows:

1. Deep Q-Learning
2. Double Q-Learning
3. Dueling Deep Q-Learning

This article will assume that you have an understanding of the fundamentals of deep reinforcement learning and deep Q-learning, but if you need a refresher check out the earlier articles on the subject.

## 1. Deep Q-Learning

### Analyzing the Deep Q-Learning Paper

The paper that we will be implementing in this article is called Human-level control through deep reinforcement learning, in which the authors introduced the reinforcement learning technique called the Deep Q-Learning algorithm. While we won't cover all the details of the paper, a few of the key concepts for implementing it in PyTorch are noted below. This algorithm is unique in that it uses pixels and game scores as input instead of lower-dimensional representations. As the authors put it:

This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

In particular, the algorithm uses a deep convolutional network architecture which proves to be successful as it is robust with respect to transformations and scaling. More formally, the authors use the following equation to approximate the optimal action-value function:

$$Q^*(s, a) = \max\limits_{\pi}\mathbb{E}[r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + ... | s_t = s, a_t = a, \pi]$$

...which is the maximum sum of rewards $r_t$ discounted by $\gamma$ at each timestep $t$, achievable by a behaviour policy $\pi = P(a|s)$, after making an observation $(s)$ and taking an action $(a)$. In other words, the authors use a Markov decision process and deep neural networks to approximate the action-value function, using a policy to generate the agent's action selection. In order to implement this we'll need the following data structures:

• A deep convolutional neural network
• A class to handle the experience replay
• A separate neural network to calculate the target values
• A mechanism for updating the weights of the network

Below is a visualization of the convolutional neural network architecture from the paper, from which we can see it has:

• 3 convolutional layers (only 2 shown, 3 stated in the text)
• 2 linear layers
• ReLU activations between the first 3 layers
• The last layer is unactivated

The paper then discusses what to expect from the deep Q-learning algorithm when playing Atari games:

...our DQN agent performed at a level that was comparable to that of a professional human games tester across the set of 49 games, achieving more than 75% of the human score on more than half of the games

An important point is that this performance is achieved without any prior input about the environment, meaning it is a completely model-free algorithm.
The exact architecture of the DQN agent is as follows:

• The input to the network is an 84 x 84 x 4 image produced by preprocessing
• The first hidden layer convolves 32 filters of 8 x 8 with stride 4, followed by a rectifier
• The second hidden layer convolves 64 filters of 4 x 4 with stride 2, followed by a rectifier
• The third convolutional layer convolves 64 filters of 3 x 3 with stride 1, followed by a rectifier
• The final hidden layer is fully-connected and consists of 512 rectifier units
• The output layer is a fully-connected linear layer with a single output for each valid action (between 4 and 18 in the Atari games)

Below we can see the deep Q-learning algorithm that we're going to implement with PyTorch:

Now let's move on to preprocessing the images from the OpenAI Gym Atari emulator.

### Data Preprocessing

Preprocessing and stacking the frames from the OpenAI Atari environments is critical to the success of the deep Q-learning agent. The problems that we need to solve include:

• The images have 3 channels but our agent only needs 1, so we need to convert to grayscale
• The images are quite large, so we need to downscale them to 84 x 84 pixels
• The paper describes "flickering" of some objects in some environments - we solve this by keeping track of the 2 previous frames and taking the max of the 2
• We repeat each action for 4 steps
• PyTorch expects images to have channels first, OpenAI returns channels last, so we need to flip the axes of the NumPy arrays
• We need to stack the 4 most recent frames
• We need to scale the outputs by dividing the images by 255

In this step we really just need to keep in mind what data structures we need and what algorithms we want to implement. We will start with the problem of taking the 2 previous frames and returning the max, and then repeating each action for 4 steps. We won't cover the preprocessing code in this article, but you can find useful pseudocode for preprocessing and the implementation on GitHub by Phil Tabor here.

### Creating the Deep Q-Learning Agent's Memory

In this section we will create a mechanism for the agent to keep track of states, actions, rewards, new states, and the terminal state. All of these factors will be used in the calculation of the target for the loss function of the DQN. For maximum flexibility of the agent, the memory should accommodate states with an arbitrary shape. We should also sample from the memory uniformly, meaning each memory will have an equal probability of being sampled, and we should not repeat memories. In order to implement this in Python we can use either deques - which allow adding and removing elements from either end - or NumPy arrays. In this implementation we'll use NumPy arrays. Again we won't go over the code in this article, but you can find Phil Tabor's implementation of the replay memory here on GitHub. The great part about this replay memory is that it can be reused for any Atari environment.

### Building the Deep Q Network

Let's now build the deep Q network class, which will include the following:

• 3 convolutional layers and 2 fully connected layers
• A function to find the input size for the fully connected layer
• An RMSProp optimizer and MSE loss function
• Model checkpointing after every 100 records

A PyTorch implementation of the deep Q network from the course can be found on GitHub here.

### Building the Deep Q Agent

Now that we have the replay memory and deep Q network class we can build the agent.
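For reference, below is a minimal PyTorch sketch of a network following the architecture bullets above (three convolutional layers feeding a 512-unit fully connected layer and an unactivated linear output per action). It is a simplified stand-in, not the course code linked above, and it omits the optimizer, loss and checkpointing wiring.

```python
import torch
import torch.nn as nn

class DeepQNetwork(nn.Module):
    def __init__(self, n_actions, input_shape=(4, 84, 84)):
        super().__init__()
        c, h, w = input_shape
        self.conv = nn.Sequential(
            nn.Conv2d(c, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        with torch.no_grad():                      # probe the conv stack to size the linear layer
            n_flat = self.conv(torch.zeros(1, c, h, w)).view(1, -1).size(1)
        self.fc = nn.Sequential(
            nn.Linear(n_flat, 512), nn.ReLU(),
            nn.Linear(512, n_actions),             # unactivated output: one Q-value per action
        )

    def forward(self, state):                      # state: (batch, 4, 84, 84), scaled to [0, 1]
        x = self.conv(state)
        return self.fc(x.view(x.size(0), -1))
```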
As discussed, one of the main innovations of deep Q-learning is that it uses an online network that's updated with gradient descent in addition to a target network that calculates the target values. The target network is updated periodically with the weights of the online network. We also use a replay memory to sample the agent's history and train the network. The agent also needs functions for the following:

• A constructor called DQNAgent
• An epsilon-greedy action selection called choose_selection
• A function to copy the weights of the online network to the target network called replace_target_network
• A function to decrease epsilon over time called decrement_epsilon
• A function to learn from experiences called learn
• A function to store memories
• Functions to interface with the deep Q network to save and load the model, called save_models and load_models

The implementation of the deep Q-learning agent in PyTorch by Phil Tabor can be found on GitHub here.

### Building the Main Loop & Analyzing Performance

Now that we have the deep Q-learning agent, we need to write a main loop and analyze performance. We start by making the environment, which in this case will be 'PongNoFrameskip-v4'. Next we instantiate a best_score to save the model when it achieves a new high score. We will play 500 games for training and then instantiate the DQNAgent(). Next we load the checkpoint if appropriate, and then we define a filename for saving our plot at the end of training. The main_dqn.py file by Phil Tabor can be found on GitHub here. After running the code we can see we got an average score of ~16 points after running ~1 million learning steps. From the plot we can also see the agent learns as epsilon decreases, but the majority of learning happens in the greedy phase. From this deep Q-learning implementation we can see that we get quite significant results in a short period of time with a (somewhat) rudimentary approach to the problem.

## 2. Double Q-Learning

Now that we have an implementation of deep Q-learning, we can expand on this and look at other papers. Since we have the replay memory and preprocessing functionality, we can simply tweak the agent as needed. The next paper we'll review is called Deep Reinforcement Learning with Double Q-learning from Google DeepMind. Again, we'll look at what algorithm we need to implement, the data structures we need, and the model architecture suggested by the authors.

### Analyzing the Paper

As highlighted in the abstract, Q-learning is known to overestimate action values under certain conditions. The paper says that such overestimations are common and can harm performance, although they can generally be prevented. The Double Q-learning algorithm is an adaptation of the DQN algorithm that reduces the observed overestimation, and also leads to much better performance on several Atari games. The reason Q-learning can sometimes learn unrealistically high action values is as follows:

...it includes a maximization step over estimated action values, which tends to prefer overestimated to underestimated values.

The issue that the authors highlight is that if these overestimations are not uniform and not concentrated on states that we want to learn more about (i.e. encouraging exploration), they might negatively affect the quality of the resulting policy. The theory behind Double Q-learning is similar to deep Q-learning, although one of the main differences is that we can decouple the action selection from the evaluation.
In other words, as the authors state:

The idea of Double Q-learning is to reduce overestimations by decomposing the max operation in the target into action selection and action evaluation.

As described in the paper, in the original Double Q-learning algorithm:

...two value functions are learned by assigning each experience randomly to update one of the two value functions, such that there are two sets of weights, $\theta$ and $\theta'$. For each update, one set of weights is used to determine the greedy policy and the other to determine its value.

In simple terms, as the name suggests, instead of having a single Q-function we have two. In order to implement this, all we need to change from our DQN algorithm is the calculation of our target. The learning function, action function, etc. are all otherwise the same. The Double DQN algorithm also uses the same network architecture as the original DQN.

### Implementing Double Q-Learning with PyTorch

As mentioned, we can reuse much of the deep Q-learning code, including the following functions:

• Networks
• Memory
• Action selection
• Network replacement
• Epsilon decrement
• Model saving

The difference with Double Q-learning is in the calculation of the target values. The update equation for Double Q-learning from the paper is shown below:

The Double Q-learning implementation in PyTorch by Phil Tabor can be found on GitHub here.

## 3. Dueling Deep Q-Learning

Let's now look at one more deep reinforcement learning algorithm called Dueling Deep Q-learning.

### Analyzing the Paper

The paper that we will look at is called Dueling Network Architectures for Deep Reinforcement Learning. In the abstract of the paper the authors discuss how many deep reinforcement learning algorithms use conventional architectures such as convolutional networks, LSTMs, or autoencoders. The architecture that they present is for model-free reinforcement learning. The dueling network has two separate estimators:

• One for the state value function
• And one for the state-dependent action advantage function

The authors describe the benefit of this architecture as follows:

The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm.

The authors also highlight that this dueling architecture enables the RL agent to outperform the state-of-the-art on the Atari 2600 domain. In the introduction the authors highlight that their approach can easily be combined with existing and future RL algorithms, so we won't have to make too many modifications to the code. The authors specify the proposed network architecture as follows:

The dueling architecture consists of two streams that represent the value and advantage functions, while sharing a common convolutional feature learning module.

From the image above we see the popular Q-network on top and the dueling Q-network on the bottom. The authors go on to clarify how the architecture can be understood:

This dueling network should be understood as a single Q network with two streams that replaces the popular single-stream Q network in existing algorithms such as Deep Q-Networks

What this means is that the dueling architecture can learn the values of each state without having to learn the effect of each action for each state.
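To make the two-stream idea concrete before walking through the implementation changes, here is a minimal sketch of a dueling head in PyTorch. The mean-subtracted aggregation of value and advantage follows the paper, while the class name, layer sizes and variable names are illustrative rather than the course code linked below.

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Two-stream head that replaces the single fully connected output of a DQN."""
    def __init__(self, feature_dim, n_actions, hidden=512):
        super().__init__()
        self.value = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, features):
        v = self.value(features)                        # state value V(s), shape (batch, 1)
        a = self.advantage(features)                    # advantages A(s, a), shape (batch, n_actions)
        return v + (a - a.mean(dim=1, keepdim=True))    # Q(s, a) via the mean-subtracted aggregation
```

The features would come from the same convolutional stack used by the plain DQN; only the head changes, which is why the rest of the agent can be reused.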
### Implementing Dueling Deep Q-Learning

In order to implement the dueling deep Q-learning algorithm we need to complete the following for the network:

• The convolutional layers are the same
• We need to split the linear layers into two streams: a value stream and an advantage stream
• We need to modify the feed-forward function

We then need to complete the following for the agent class:

• Memory, target network updates, model saving, and epsilon decrementing are all the same
• Our action selection function needs an advantage stream
• We need to combine the value & advantage streams for the learn function

The dueling deep Q-learning network implemented in PyTorch by Phil Tabor can be found on GitHub here and the agent can be found here.

## Summary: Deep Reinforcement Learning with PyTorch

As we've seen, deep reinforcement learning techniques can be extremely useful in systems that have a huge number of states. In these systems, the tabular method of Q-learning simply will not work, and instead we rely on a deep neural network to approximate the Q-function. We first looked at the naive deep Q-learning approach, and then reviewed several papers that solve the issue of correlations ruining the learning process, as well as the issue of using a single Q-function to both pick actions and update the weights of the network. In particular, double Q-learning and dueling deep Q-learning are two interesting algorithms that solve these challenges.
Volume 306 - XII Multifrequency Behaviour of High Energy Cosmic Sources Workshop (MULTIF2017) - Astrophysics of High Energy Cosmic Sources

High-energy emission properties of pulsars

C. Venter (corresponding author), A.K. Harding, I. Grenier

Full text: pdf. Pre-published on: 2018 March 26. Published on: 2018 May 30.

Abstract: The sheer number of new $\gamma$-ray pulsar discoveries by the Fermi Large Area Telescope since 2008, combined with the quality of new multi-frequency data, has caused a revolution in the field of high-energy rotation-powered pulsars. These rapidly rotating neutron stars exhibit rich spectral and temporal phenomenology, indicating that there are still many unsolved mysteries regarding the magnetospheric conditions in these stars - even after 50 years of research! Indeed, 2017 marks the golden anniversary of the discovery of the first radio pulsar, and theorists and observers alike are looking forward to another half-century of discovery, with many new experiments coming online in the next decades. In this review paper, we briefly summarise recent high-energy (HE) pulsar observations, mention some theoretical models that provide a basic framework within which to make sense of the varied measurements, and finally review some of the latest theoretical developments in pulsar emission modelling.

DOI: https://doi.org/10.22323/1.306.0038 Open Access
A search for supersymmetry in $\sqrt{s}=13~\mathrm{TeV}$ proton-proton collisions with the CMS detector at the LHC

Title: A search for supersymmetry in $\sqrt{s}=13~\mathrm{TeV}$ proton-proton collisions with the CMS detector at the LHC
Authors: Elwood, Adam
Item Type: Thesis or dissertation
Abstract: An inclusive search for supersymmetry with jets and missing transverse energy is presented. Data from √s = 13 TeV pp collisions with a total integrated luminosity of 12.9 fb^(−1), delivered by the LHC and collected by the CMS detector, are analysed. The dominant quantum chromodynamic multijet background is strongly suppressed with several kinematic variables, which are also used to discriminate between Standard Model and supersymmetric processes. The observed events are found to be compatible with the expected contributions from Standard Model processes. This result is interpreted in the context of simplified supersymmetric models of gluino and third-generation squark production. The masses of the gluino, bottom squark and top squark are excluded up to 1775, 1025 and 875 GeV respectively. In preparation for the collection of √s = 13 TeV data by CMS, the jet algorithm for the Level-1 trigger is upgraded. The new algorithm allows for dynamic pileup subtraction and takes advantage of hardware upgrades to the trigger. The performance of different types of pileup subtraction is evaluated and the most promising algorithm, chunky-donut subtraction, is chosen. The algorithm is found to give a significant performance improvement and has been used to collect data from 2016 onwards.
Content Version: Open Access
Issue Date: Apr-2017
Date Awarded: Jul-2017
URI: http://hdl.handle.net/10044/1/49220
DOI: https://doi.org/10.25560/49220
Supervisor: Tapper, Alex
Sponsor/Funder: Science and Technology Facilities Council (Great Britain)
Department: Physics
Publisher: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)
Appears in Collections: Physics PhD theses
Found 526 results

2012

- Approximations for many-body Green's functions: insights from the fundamental equations, New Journal of Physics, vol. 14, p. 013056, 2012.
- Beyond the GW approximation: Combining correlation channels, Phys. Rev. B, vol. 85, p. 155131, American Physical Society, 2012.
- Crystal structure of cold compressed graphite, Phys. Rev. Lett., vol. 108, 065501, 2012.
- Crystalline and magnetic anisotropy of the 3d-transition metal monoxides MnO, FeO, CoO, and NiO, Phys. Rev. B, vol. 86, p. 115134, American Physical Society, 2012.
- M. Guzzo, Dynamical correlation in solids: a perspective in photoelectron spectroscopy, Ecole Polytechnique, Palaiseau, 2012.
- Efficient calculation of the polarizability: a simplified effective-energy technique, The European Physical Journal B, vol. 85, pp. 1-10, Springer-Verlag, 2012.
- Efficient GW calculations for SnO2, ZnO, and rubrene: The effective-energy technique, Phys. Rev. B, vol. 85, p. 085126, American Physical Society, 2012.
- Electronic and magnetic properties of NiS2, NiSSe and NiSe2 by a combination of theoretical methods, European Physical Journal B, vol. 85, 2012.
- Electronic excitations in solar cells from GW approaches, in Computational Approaches to Energy Materials (eds R. Catlow, A. Sokol and A. Walsh), Wiley-Blackwell, Oxford, 2012.
- The ETSF: An e-Infrastructure That Bridges Simulations and Experiments, Computing in Science & Engineering, vol. 14, 2012.
- Excitons in molecular crystals from first-principles many-body perturbation theory: Picene versus pentacene, Physical Review B, vol. 86, 2012.
- Feedback mechanism for the stability of the band gap of CuInSe2, Phys. Rev. B, vol. 86, 045216, 2012.
- First Principles Study of Hydrogen Desorption from the NaAlH4 Surface Doped by Ti Clusters, Journal of Physical Chemistry C, vol. 116, pp. 4311-4315, 2012.
- High-pressure structures of disilane and their superconducting properties, Phys. Rev. Lett., vol. 108, 117004, 2012.
- In memoriam of Professor Rodolfo Del Sole, physica status solidi (b), vol. 249, pp. 1092-1094, WILEY-VCH Verlag, 2012.
- Large crystal local-field effects in second-harmonic generation of a Si/CaF$_2$ interface: An ab initio study, Phys. Rev. B, vol. 86, p. 035309, American Physical Society, 2012.
Research Papers

# A Three-Dimensional Conjugate Approach for Analyzing a Double-Walled Effusion-Cooled Turbine Blade

Author and Article Information
Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK
Alexander V. Murray, Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK, e-mail: alexander.murray@eng.ox.ac.uk
Peter T. Ireland, Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK, e-mail: peter.ireland@eng.ox.ac.uk
Eduardo Romero, Rolls-Royce Plc., Bristol BS34 7QE, UK, e-mail: eduardo.romero@rolls-royce.com
1Corresponding author.

Contributed by the International Gas Turbine Institute (IGTI) of ASME for publication in the JOURNAL OF TURBOMACHINERY. Manuscript received July 20, 2018; final manuscript received August 29, 2018; published online October 17, 2018. Editor: Kenneth Hall.

J. Turbomach 141(1), 011002 (Oct 17, 2018) (10 pages). Paper No: TURBO-18-1168; doi: 10.1115/1.4041379. History: Received July 20, 2018; Revised August 29, 2018.

## Abstract

A double-wall cooling scheme combined with effusion cooling offers a practical approximation to transpiration cooling, which in turn presents the potential for very high cooling effectiveness. The use of conventional conjugate computational fluid dynamics (CFD) for the double-wall blade can be computationally expensive, and this approach is therefore less than ideal in cases where only preliminary results are required. This paper presents a computationally efficient numerical approach for analyzing a double-wall effusion-cooled gas turbine blade. An existing correlation from the literature was modified and used to represent the two-dimensional distribution of film cooling effectiveness. The internal heat transfer coefficient was calculated from a validated conjugate analysis of a wall element representing an element of the aerofoil wall, and the conduction through the blade was solved using a finite element code in ANSYS. The numerical procedure developed has permitted a rapid evaluation of the critical parameters, including film cooling effectiveness, blade temperature distribution (and hence metal effectiveness), as well as coolant mass flow consumption. Good agreement was found between the results from this study and those from the literature. This paper shows that a straightforward numerical approach that combines an existing correlation for film cooling from the literature with a conjugate analysis of a small wall element can be used to quickly predict the blade temperature distribution and other crucial blade performance parameters.

## Introduction

The desire to build a gas turbine with both high efficiency and specific power output has led to the use of ever-increasing turbine entry temperatures [1]. The continuous increase in turbine entry temperatures results in an extremely harsh environment for turbine blades and other critical hot-stage components. Film cooling combined with a multipass system has been the conventional cooling method for turbine aerofoils to date. However, the desire to attain much higher engine efficiency and at the same time reduce cooling air requirements has led to the need to research and implement advanced cooling techniques such as transpiration cooling, or those that closely approximate transpiration, for instance porous multiwall cooling schemes like double-wall cooling. This paper presents an efficient analysis procedure for the class of double-skin cooling systems for advanced engines shown schematically in Fig. 1.
## Literature Review

A literature review of earlier conjugate heat transfer (CHT) simulations and of film cooling effectiveness applicable to effusion cooling is presented here.

###### Three-Dimensional Conjugate Simulation Approaches.

A fully coupled CHT simulation allows both the solution of the fluid flow and the heat transfer to be determined in just one code, without the need to carry out interpolation of boundary conditions between codes. However, carrying out a coupled CHT of a double-walled effusion-cooled blade presents a challenge, not only because of the very many small-diameter cooling holes that need to be modeled, but also owing to inadequate discretization and turbulence modeling included in the computational fluid dynamics (CFD). Even though numerical simulation approaches like direct numerical and large-eddy simulations offer improved accuracy in solving complex turbulent flows, in the case of practical cooling systems they remain time-consuming and computationally costly [2], and thus not ideal, especially in cases where, for example, preliminary results are desired to enable redesign and optimization. To avoid the use of such computationally expensive numerical simulation methods, several authors have developed simplified numerical approaches for studying effusion-cooled systems. Laschet et al. [3-5] presented a 3D conjugate approach for modeling both the fluid flow and heat transfer using a homogenization method, an approach which assumes that a given multilayered model consists of a periodic repetition of a unit volume cell. This approach was used to predict the aerothermal behavior of flat and curved multilayered plates as well as the aerothermal behavior of transpiration-cooled plates. A CHT flow solver based on an implicit finite volume method in conjunction with a multiblock approach was used. The domain was divided into separate blocks and the full, compressible, 3D Navier-Stokes equations were solved in each of the fluid blocks. The simulation time required to run the homogeneous model was very small, and the authors suggested possible future application of this technique for the analysis of complex cooled gas turbine components. The limitation of this approach is that it is restricted to homogeneous models and may not be usable in the analysis of a heterogeneous system where a unit volume cell cannot simply be reproduced to represent the whole domain. Zecchi et al. [6] developed a quick decoupled conjugate simulation tool for preliminary design-stage analysis of a cooled turbine vane. The program inputs were the hot-side and cold-side heat transfer coefficients (HTCs) (from correlations) and the fluid temperatures. Their simulation tool permitted not only the evaluation of the metal temperature distribution but also the coolant mass flow distribution on the vane. The authors compared the results from this simplified analysis approach with the experimental results for the same vane from Hylton et al. [7], and a satisfactory match was found. Heidmann et al. [8] developed a CHT approach to study a film-cooled turbine vane. This method was later applied to a gas turbine blade by Kassab et al. [9]. In the latter method, fluid convection was simulated using the Glenn-HT [10] general multiblock heat transfer code and conduction was solved using a boundary element method that was coupled directly to the fluid flow solver. The authors noted that the use of the boundary element method saved computational time as no volumetric grid within the solid was required. Mendez et al. [11] offered a simplified approach for modeling and simulating effusion cooling using a flat plate.
In this approach, the number of rows was assumed to be infinitely large and a small, finite calculation domain was extracted from the plate. The calculation domain was chosen so that it contained a single perforated hole. Periodic boundaries were specified to reproduce the infinite plate geometry. The results obtained from their simplified approach were found to match the existing experimental results well. Amaral et al. [12] evaluated the temperature of an internally cooled gas turbine blade by employing a decoupled CHT, which involved the use of three different solvers: (1) a Navier-Stokes solver to evaluate the nonadiabatic external flow and heat transfer, (2) a finite element analysis to obtain the heat and stress solution within the solid domain and (3) a 1D aerothermal model, in the cooling channels, that used empirical heat transfer and friction formulae available in the open literature. An iterative procedure was used to stabilize the input/output data from the solvers. The authors validated the results from this method against experimental test results. Even though empirical formulae were used in the 1D solver, the obtained results were satisfactory. It was noted that this method offered an acceptable trade-off between accuracy and computational cost. Bonini et al. [13] and Andreini et al. [14] presented a simplified decoupled 3D CHT procedure for evaluating the performance of gas turbine blades. In their methodology, the internal cooling system was modeled using an in-house 1D thermo-fluid network solver, and the external heat loads and pressure distribution were evaluated using three-dimensional CFD. Film cooling effectiveness was calculated using correlations for shaped and cylindrical holes developed by Colban et al. [15] and L'Ecuyer and Soechting [16], respectively. The effect of multiple rows of films was accounted for using the superposition approach proposed by Sellers [17]. Heat conduction through the metal of the blade was calculated using the three-dimensional ANSYS steady-state thermal module. Their CHT entailed an iterative procedure of guessing a reasonable first metal temperature, running simulations, updating the metal temperature, and repeating the process until the solution converged. The predictions from their work demonstrated that the decoupled CHT produced results comparable to those from experiments, and satisfactory agreement between the two was demonstrated. The decoupled CHT presented by Bonini et al. [13] and Andreini et al. [14] was validated by Andrei et al. [18] using both an internally cooled and an internally and film cooled gas turbine vane. The authors compared both the metal temperature and adiabatic effectiveness distribution results against those from experiments (carried out at the NASA Lewis Research Centre (Cleveland, OH) by Hylton et al. [7]) and a fully 3D coupled CHT CFD analysis, and a good match was noted between the results.

###### Methodology for Predicting Film Cooling Effectiveness.

Several authors have developed film effectiveness correlations, including Colban et al. [15], who developed a correlation specifically for both laid-back and regularly shaped cooling holes, Baldauf et al. [19] for a row of cylindrical holes and L'Ecuyer and Soechting [16] for a row of cylindrical cooling holes.

Display Formula (1): $\epsilon_f = 1 - \prod_{i=1}^{n}\left[1 - \epsilon_{f_i}(x)\right]$

Baldauf et al. [20] carried out a comprehensive analysis of laterally averaged film cooling effectiveness.
Their correlation included the influence of a full set of parameters, and the predicted film cooling performance demonstrated agreement with the measured results. However, these researchers only considered a single row of cooling holes. Sellers [17] presented a simple way of predicting the film cooling effectiveness for many rows of cooling holes from correlations (or data) for single rows. The Sellers [17] approach replaces the free stream temperature for downstream rows with the adiabatic wall temperature calculated from the upstream films. The Sellers [17] superposition approach can be generalized to any number of rows, as shown experimentally by Murray et al. [21]; see Eq. (1). There are many reviews of film cooling effectiveness in the literature (for example, Goldstein [22]) but few deal with effusion cooling applied to turbine aerofoils or predict the two-dimensional distribution of effectiveness downstream of the cooling hole. The present work aims to extend the decoupled 3D CHT approach presented by Bonini et al. [13] and Andreini et al. [14] to a double-walled effusion-cooled gas turbine blade, so as to develop a computationally efficient conjugate approach that can be used to produce an assessment of the performance of the complex double-walled blade. The Goldstein [22] correlation was employed for the film cooling effectiveness of a single hole, with film superposition using Sellers [17] to represent the 2D distribution of film cooling effectiveness of an array of cooling holes. The suitability of this approach was confirmed for multiple rows of films by Murray et al. [21]. The internal heat transfer coefficient was evaluated from a conjugate analysis of a unit wall element, and the conduction through the blade was simulated using the finite element code in the ANSYS steady-state thermal module. Figure 2 shows a concept double-wall turbine blade with a cooling design for the leading edge and trailing edge (TE).

###### Test Geometry in the Present Study.

A practical double-wall blade, as shown in more detail in Fig. 2, comprises three main regions with different cooling designs. (1) Leading edge: cooling is achieved through a combination of impingement jets from the inner skin and showerhead cooling holes on the outer skin. (2) Midchord region: this is cooled both internally (through impingement jets from the inner skin and convective heat transfer in the array of pedestals) and externally by an almost-continuous film cover from the staggered array of effusion cooling holes. (3) Early and late TE: cooling is achieved using a pin-fin bank and TE slots, respectively. The present study focuses on the double-wall midchord region of the blade. Pedestal height, size and spacing, as well as impingement and effusion cooling hole size and spacing, are set to match those of the desired wall element. Each of the wall elements studied by Murray et al. [23] has distinct aerothermal characteristics (such as convective efficiency, discharge coefficient, and mass flow rate). A sample of the CFD results for a unit cell, showing the flow velocity distribution, is given in Fig. 3. To demonstrate how the numerical analysis approach developed herein works, one of the best-performing wall elements, named geometry 3 (from Murray et al. [23]), was used. The entire span of the midchord region of the aerofoil is made up of 88 of these wall elements. To simplify the analysis, the geometry 3 wall element (whose geometrical parameters and dimensions are shown in Fig. 4 and Table 1, respectively) was used in the whole midchord region.
However, owing to the varying flow structure and heat load around a turbine blade, it will be essential in future double-wall blade design to consider employing different wall elements around the blade. For instance, a wall element with low mass flow could be employed in the transonic region of the suction surface, where coolant ejection onto the surface is undesirable because of the aerodynamic losses. Cylindrical cooling holes with a 1 mm diameter, inclined at an angle of $30$ deg to the external surface of the blade, were modeled in a staggered pattern on the blade surface, as shown in Fig. 5. This design, with equal spanwise and streamwise pitches of 10 diameters, resulted in a total of 12 staggered rows of cooling holes: 7 on the suction side and 5 on the pressure surface.

###### Wall Block Analysis.

The researchers in Ref. [23] carried out an elaborate conjugate CFD study in performing the aerothermal and thermomechanical analysis of seven different geometries of wall blocks (wall elements). Each of the elements comprises a small domain with square sides equal to half the streamwise pitch between the effusion holes. In this wall element, the external and internal skins are connected via a bank of pedestals, see Fig. 4. To carry out the numerical analysis in the present study, the internal heat transfer coefficient of the unit wall element from the conjugate CFD was correlated using a power law, Eq. (2). Therefore, for a given Re and with known values of the constants A and B, it is possible to evaluate the Nu, and hence the internal heat transfer coefficient, for a specific wall element geometry.

Display Formula (2): $Nu = A\,Re^{B}$, where $A = 0.07$ and $B = 0.80$.

The wall element analysis also provided an effective discharge coefficient, which was subsequently used to predict the coolant flow rate through the effusion holes on the aerofoil using Eq. (3). The ideal coolant mass flow rate, $\dot{m}_{c,ideal}$, is calculated assuming an isentropic one-dimensional expansion from the coolant total pressure inside the blade to the static pressure local to the exit of the cooling hole. In the present work, the discharge coefficient was correlated from the conjugate analysis of Murray et al. [23] for an average aerofoil cross-flow, but later work has correlated $C_d$ with blowing ratio.

Display Formula (3): $\dot{m}_{c,actual} = C_d\,\dot{m}_{c,ideal}$

To evaluate the internal heat transfer coefficient in the blade, $h_{i,blade}$, a heat balance was carried out between the block and the blade, resulting in Eq. (4), where $A_{blade}$ is the internal surface area which the wall element occupies on the blade and $\eta_c$ is the convective efficiency of the wall element (Fig. 6 gives a graph of the $\eta_c$ variation with Re for the wall element used in this study).

###### Numerical Simulation.

As aforementioned, the conjugate model in the present study adopted a decoupled approach, which modeled the blade in three separate steps: internal heat load modeling, film cooling (external heat load) modeling and a conjugate simulation of the whole domain. In practice, the outer skin of the aerofoil, which is effusion cooled, is connected to the internal impingement plate via a bank of pedestals, forming a double-skin system, as shown in Fig. 1.
###### Film Cooling Modeling. A MATLAB script was written to compute both the adiabatic wall temperature and the film cooling effectiveness at every grid point on the blade. The Mainstream Velocity and Density, Adiabatic Wall Temperature, and Film Cooling Effectiveness sections describe how each of the inputs necessary for the film cooling evaluation was obtained. ###### Mainstream Velocity and Density. The mainstream Mach number and velocity were evaluated from the local static pressure assuming an isentropic flow. The blade profile used is similar to the one used by Gurram et al. [24] for blade trailing edge studies but was scaled to an engine-representative size. The engine-representative conditions used by Colladay [25], summarized in Table 2, were used in this analysis. The mainstream density was evaluated from the ideal gas equation. The recovery temperature (Eq. (5)) was evaluated from the specified total gas temperature and was used as the effective gas temperature. This is because, for low Mach number flows, the effective gas temperature can be taken to be the static temperature; however, at high Mach numbers, such as the flow over a large region of the suction surface, it is more accurate to use the recovery temperature as the effective gas temperature. An estimate of the turbulent boundary layer recovery factor, $\Lambda = \mathrm{Pr}^{1/3}$, proposed by Lee [26] was used: (5) $T_{r,g} = T_{s,g} + \Lambda\,(T_{o,g} - T_{s,g})$. ###### Film Cooling Effectiveness. The effectiveness of the coolant film diminishes downstream of the cooling hole exit. A vast number of studies, both experimental and numerical, have been undertaken to measure film effectiveness under a variety of conditions. From these data, film effectiveness has been correlated with several variables, which include downstream position, coolant and mainstream Re, hole shape and diameter, mainstream gas-coolant density ratio, and specific heat ratio. In this study, the correlation developed by Goldstein [22], Eq. (6), which includes the lateral coordinate, z (see Fig. 5), was used. The aerofoil external surface was divided into a number of elements. The values of the mainstream gas parameters, such as density, velocity, and static pressure, vary from one grid point to another. A MATLAB code was written to compute the film cooling effectiveness at each of these grid points. The values of $x_{\mathrm{decay}}$, $\alpha_t$, and $z_{1/2}$ vary with the blowing ratio and streamwise pitch. It was possible to calculate the required values at any given blowing ratio and streamwise pitch by extrapolating the detailed film effectiveness results compiled by Murray et al. [21] and Baldauf et al. [27], obtained through a combination of CFD and experiments on a flat plate: (6) $\epsilon_f = \dfrac{M\,u_g\,D}{8\,\alpha_t\left(\dfrac{x}{D} + x_{\mathrm{decay}}\right)}\,\exp\!\left[-0.693\left(\dfrac{z}{z_{1/2}}\right)^{2}\right]$. For this case of effusion cooling, where there are many rows of cooling holes on the blade surface, the film effectiveness at each of the succeeding downstream rows of cooling holes is strongly influenced by the film coming from the upstream rows. In this study, the Sellers [17] superposition approach (see Eq. (1)) is employed to evaluate the composite film effectiveness of the downstream rows.
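For concreteness, the following sketch evaluates the reconstructed single-hole correlation of Eq. (6) and then accumulates several upstream rows. Since Eq. (1) itself is not reproduced in this excerpt, the row-combination rule used here is the commonly quoted Sellers product form, $1 - \epsilon_f = \prod_i (1 - \epsilon_{f,i})$, stated as an assumption; all inputs are illustrative rather than values from the paper.

```python
# Hedged sketch of the external film model: the reconstructed single-hole
# correlation of Eq. (6) plus row-by-row superposition. The product rule below
# is the commonly quoted Sellers form and is an assumption here.
import math

def film_eff_single(x, z, M, u_g, D, alpha_t, x_decay, z_half):
    """Eq. (6): streamwise decay with a Gaussian lateral falloff."""
    centreline = (M * u_g * D) / (8.0 * alpha_t * (x / D + x_decay))
    lateral = math.exp(-0.693 * (z / z_half) ** 2)
    return min(1.0, centreline) * lateral  # clipped: correlation not valid very close to the hole

def film_eff_superposed(single_row_effs):
    """Sellers-style accumulation: 1 - prod(1 - eta_i) over the upstream rows."""
    remaining = 1.0
    for eta in single_row_effs:
        remaining *= 1.0 - eta
    return 1.0 - remaining

# Effectiveness at a point fed by three upstream rows spaced 10 diameters apart
rows_x = [10e-3, 20e-3, 30e-3]   # streamwise distance from each row to the point, m
etas = [film_eff_single(x, z=0.0, M=1.0, u_g=300.0, D=1e-3,
                        alpha_t=5e-3, x_decay=10.0, z_half=2e-3) for x in rows_x]
print(etas, film_eff_superposed(etas))
```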
The mainstream gas recovery and coolant inlet temperatures were used as boundary conditions. The temperature of the coolant at the exit of each cooling hole, $T_{c,ex}$, is not known but depends on the wall temperature, coolant temperature, internal heat transfer coefficient, and cooling system wetted surface area. The temperature increase of the coolant, from supply to film cooling hole exit, divided by the maximum temperature increase possible is defined as the convective efficiency, $\eta_c$ [28]. This parameter is a function of the coolant mass flow and cooling geometry, as shown in Eq. (7). Assuming a fully developed turbulent pipe flow and using the Dittus–Boelter expression relating Nu and Re ($\mathrm{Nu} = 0.023\,\mathrm{Re}^{0.8}\,\mathrm{Pr}^{0.33}$), Eq. (7) can be rewritten so that $\eta_c$ becomes a function of only $L/D_h$ (Eq. (8)). Figure 6 shows $\eta_c$ plotted as a function of Re, comparing the convective efficiency of the wall element in this study with three simple duct cooling systems characterized by $L/D_h$. From the graph, it is evident that the wall element possesses a very high convective efficiency, corresponding to that of a circular pipe with length L in the range $20 < L/D_h < 40$: (7) $\eta_c = 1 - \exp\!\left(-4\,\mathrm{St}\,\dfrac{L}{D_h}\right)$, (8) $\eta_c = 1 - \exp\!\left(-0.12\,\mathrm{Re}^{-0.2}\,\dfrac{L}{D_h}\right)$, (9) $T_{aw} = T_{r,g} - \epsilon_f\,(T_{r,g} - T_{c,ex})$. An iterative process (the program logic is illustrated in Fig. 7) was necessary owing to the interdependence of the metal temperature distribution, the amount of heat picked up by the coolant as it flows through the internal passages, the adiabatic temperature decay downstream of the cooling hole, and the hot-side and cold-side heat transfer coefficients. Equation (9) employs the coolant exit temperature, $T_{c,ex}$, which supplies the film. An initial guess for the metal temperature distribution, $T_m$ (chosen to be a uniform value equal to the average of the gas and the coolant inlet temperature), was set. The $\eta_c$ corresponding to a given wall element geometry and Re, determined from a flow analysis, was read from the $\eta_c$–Re database, from which $T_{c,ex}$ was determined. The latter $T_{c,ex}$ was then used in Eq. (9) to evaluate the adiabatic wall temperature, $T_{aw}$. The calculated $T_{aw}$ was exported to the ANSYS steady-state analysis and the conjugate simulations executed, as explained in the Conjugate Model Simulation section of this paper, from which the volume-averaged metal temperature, $T_m$, was evaluated. This updated temperature was then fed back into the iteration loop and the iteration process executed until convergence. Convergence was met when $\Delta T_m/(T_{r,g} - T_{c,in})$ was below 0.01%. To accelerate convergence in subsequent cases, the initial metal temperature was set to be the converged metal temperature of the preceding case. On average, four iterations were required for convergence in all the cases studied. ###### External Heat Transfer Coefficient. A simple integral approach was used to determine the external heat transfer coefficient around the aerofoil in the presence of a varying free stream velocity. Specifically, Ambrok's procedure for a turbulent boundary layer, Eq. (10), described in Kays and Crawford [29], was used. This procedure for determining the external HTC was applied to all regions of the suction and pressure surfaces except at the aerofoil's leading edge. The leading edge was modeled as a two-dimensional cylinder. (10) $\mathrm{St} = \dfrac{0.0287\,\mathrm{Pr}^{-0.4}\,\left(T_{\mathrm{surface}} - T_{r,g}\right)^{0.25}\,\mu^{0.2}}{\left[\displaystyle\int_0^x \left(T_{\mathrm{surface}} - T_{r,g}\right)^{1.25}\,\left(u_g\,\rho_g\right)\,dx\right]^{0.2}}$
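The Ambrok relation can be marched along the surface numerically: the denominator of Eq. (10) is a running integral of $(T_{\mathrm{surface}} - T_{r,g})^{1.25}\,\rho_g u_g$ from the stagnation point, and the local coefficient then follows from the Stanton-number definition $h_g = \mathrm{St}\,\rho_g u_g C_{p,g}$. The sketch below is a generic illustration with made-up distributions, not the code used in the paper; the absolute-value treatment of the temperature difference is a modelling choice noted in the comments.

```python
# Numerical sketch of the Ambrok procedure (Eq. (10)); generic illustration
# with made-up distributions, not the paper's code.
import numpy as np

def ambrok_stanton(x, t_surf, t_rg, rho_u, mu, pr=0.7):
    dT = np.abs(t_surf - t_rg)              # magnitude used; sign handling is a modelling choice
    integrand = dT ** 1.25 * rho_u
    integral = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))   # trapezoidal rule
    integral = np.maximum(integral, 1e-12)  # singular as x -> 0, which is why the leading
                                            # edge is treated separately (as a cylinder)
    return 0.0287 * pr ** -0.4 * dT ** 0.25 * mu ** 0.2 / integral ** 0.2

x = np.linspace(1e-4, 0.05, 200)            # m, surface distance from near the stagnation point
t_surf = np.full_like(x, 1100.0)            # K, assumed uniform wall temperature
t_rg = np.full_like(x, 1600.0)              # K, assumed recovery temperature
rho_u = np.full_like(x, 8.0 * 300.0)        # kg/(m^2 s), assumed mainstream mass flux
st = ambrok_stanton(x, t_surf, t_rg, rho_u, mu=5e-5)
h_g = st * rho_u * 1230.0                   # W/(m^2 K), with an assumed gas cp
print(h_g[-5:])
```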
###### Conjugate Model Simulation. A 3D Fourier conduction solver, available in the ANSYS 16.2 steady-state thermal module, was used for the conjugate simulation. The blade was meshed using the ANSYS meshing software, and mesh refinement was undertaken near the cooling holes to minimize discretization errors. The $T_{aw}$ and external heat transfer coefficient (external heat load), as well as the internal heat transfer coefficient (from Eq. (4)) and coolant inlet temperature (internal heat load), were imported into the ANSYS finite element module and interpolated onto the external and internal surfaces of the blade, respectively. A summary of this process is shown in Fig. 8. Steps 2, 3, and 4 in Fig. 8 correspond to the steps in the program logic (Fig. 7). Three different total coolant inlet pressure cases (40, 42, and 44 bar) were considered in this analysis, for a freestream total pressure of 40 bar and transonic exit conditions. The results are documented graphically in the following section. ## Results and Discussion ###### Film Effectiveness and Adiabatic Wall Temperature. The Goldstein [22] correlation was used to predict the film effectiveness for each row of holes, which enabled the adiabatic wall temperature to be predicted using the procedure outlined in Fig. 7. The approach introduced by Sellers [17], which superposes the effect of the rows of film, was used to account for the accumulation of film effectiveness. To the authors' knowledge, this is the first time the two-dimensional film distribution has been superposed using the method of Sellers [17], the approach shown to be successful by Murray et al. [21]. Figure 9 shows the film effectiveness and its corresponding adiabatic wall temperature on the external surface of the aerofoil. A similar film cooling effectiveness trend can be seen on both the pressure and suction surfaces of the blade. On both the pressure and suction surfaces, the metal effectiveness increases from leading to trailing edge. This is because (a) the film effectiveness builds up downstream as more coolant is injected into the films, and (b) the internal heat transfer coefficient increases as the coolant flow through the wall block rises with the pressure difference between the coolant and the local static pressure. It is interesting to note that the film cooling effectiveness, $\epsilon_f$, on the pressure surface is significantly lower than the level on the suction surface. This bias is known to occur in real turbine designs, which include aerofoil curvature and passage secondary flows. The correlation used in the present work is the Goldstein [22] equation, which is based on flat plate data and makes no allowance for curvature or vortices. The different $\epsilon_f$ levels on the pressure and suction surfaces are therefore caused by the difference in the mainstream velocity between the two surfaces. Figure 10 shows the variation of the nondimensional film flow rate per hole (obtained by dividing the coolant flow rate through each hole by the mean coolant flow rate) as well as the film cooling and metal effectiveness as a function of dimensionless streamwise distance, considering a case where the total coolant pressure is equal to 41 bar. The coolant mass flow is markedly higher on the suction surface than on the pressure surface, approximately three times higher in the cooling holes in the vicinity of the transonic mainstream flow.
This is undesirable, as it causes additional aerodynamic losses. In the near future, the design of the double-wall effusion blade will demand a well-thought-out means of reducing the amount of coolant ejected into the high-loss regions of the suction surface. ###### External Heat Transfer. As aforementioned, the external heat transfer coefficient, $h_g$, around the aerofoil was calculated from Ambrok's procedure described in Kays and Crawford [29]. The obtained values of $h_g$ were nondimensionalized using an averaged heat transfer coefficient over the whole surface, $h_{g,\mathrm{ave}}$. The resulting graph is included in Fig. 10. The highest heat load occurs in the vicinity of the leading edge, where stagnation takes place. The suction surface experiences a rapid fall in the heat transfer coefficient in the laminar region of the boundary layer, before a sharp rise in the transition region and finally a gradual fall toward the trailing edge. On the pressure surface, there is a gradual fall in $h_g$ from the leading edge to almost half of the downstream distance, followed by a gradual rise toward the trailing edge. It would be expected that $h_g$ should be higher on the pressure than on the suction surface. However, this is not the case in this study, owing to the inability of the modified flat plate correlation used here to capture vortices, particularly the Taylor–Görtler vortices (Mayle et al. [30]) that are known to cause an increase in the pressure surface $h_g$. A similar observation was made by Chowdhury et al. [31] after employing a simple analytical predictive model to analyze the heat transfer coefficient distribution on both the suction and pressure surfaces of a gas turbine blade. ###### Internal Cooling Effectiveness. The success of the internal cooling system arises from the high heat transfer coefficients caused by the combination of the pedestals and the impingement jets. To evaluate the value of internal convection cooling, the model was adjusted to remove the benefits of film cooling (i.e., $\epsilon_f$ was set to zero). While carrying out the conjugate simulation of the blade, the input temperature on the external surface was set to the mainstream effective gas temperature, $T_{r,g}$, not the adiabatic wall temperature, $T_{aw}$. The resulting volume-averaged metal temperature was used in Eq. (12) to calculate the internal cooling effectiveness. The spatially averaged internal cooling effectiveness is plotted against the nondimensional coolant mass flow rate in Fig. 11. In order to achieve the very low m* values in the graph in Fig. 11, the coolant pressure was varied at each location along the aerofoil so as to allow a very low coolant mass flow through the aerofoil: (11) $m^* = \dfrac{\dot{m}_c\,C_{p,c}}{h_g\,A_{ex}}$. The results show that the overall effectiveness of the double-walled effusion cooled system is dominated by the internal convective heat transfer. Figure 11 makes clear the significant contribution from internal convection to the overall cooling effectiveness, with the internal cooling contributing approximately 80% of the overall cooling effectiveness. This contribution is found to be even higher at low $m^*$. The high internal convection is attributed to the use of impingement cooling and the significant internal surface area of the impingement plate and pedestals, which manifest themselves in the high internal heat transfer coefficient calculated from Eq. (4).
The dashed lines (calculated with different values of the convective efficiency as a parameter) show the cooling efficiency curves for $\eta_c = 0.2, 0.4, 0.6$, and $0.8$, calculated from Eq. (13) considering theoretical values of $m^*$ and $\epsilon_m$. The internal cooling from this study corresponds to a convective efficiency of approximately $\eta_c = 0.5$, particularly at low m* values (m* was calculated from Eq. (11)). This reflects the convective efficiency results of its unit building cell shown in Fig. 6: (12) $\epsilon_m = \dfrac{T_{r,g} - T_m}{T_{r,g} - T_{c,in}}$, (13) $\epsilon_m = \dfrac{m^*\,\eta_c}{1 + m^*\,\eta_c}$. High internal convection is naturally beneficial, as it results in lower coolant air requirements, as reported by Colladay [25]. That analysis and comparison of wall cooling schemes for advanced gas turbine applications found that an increase in the internal convection efficiency resulted in a reduced amount of cooling air required to maintain a given wall temperature. For instance, a full-coverage (effusion) cooling system with an internal convective efficiency of 0.6 required an $m^*$ of 2.1 to maintain a wall temperature of 1255 K, but when the internal convective efficiency dropped to 0.2, the $m^*$ needed to maintain the same wall temperature increased by over 60%, to about 3.4. The same observation has been echoed by Holland and Thake [28] in their analysis of high pressure turbine blade cooling. This balance between internal convection and film cooling for combustor liner geometries is also reported by Andrews et al. [32]. They carried out an experimental investigation on a flat plate geometry to assess the relative performance of effusion and transpiration cooling. In their case, however, there was a greater contribution to the overall cooling from film cooling, as there was no large internal surface area, and correspondingly high effective internal heat transfer coefficient, influencing the cooling. ###### Overall Cooling Effectiveness. In this case, both the internal convection and the film cooling were taken into consideration. To evaluate the overall metal effectiveness, the conjugate simulation was performed with the external load $h_g$ and $T_{aw}$. It should be noted that the procedures followed to calculate the internal cooling and overall cooling effectiveness were similar, except that the input temperature onto the blade's external surface in the finite element code was the mainstream effective gas temperature, $T_{r,g}$, in the former, while in the latter it was the adiabatic wall temperature, $T_{aw}$. The overall metal effectiveness is plotted alongside the film and internal cooling effectiveness in Fig. 11. Both the overall effectiveness and the effectiveness from the internal cooling lie within the range of 35 to 60%, reflecting the convective efficiency results of the building cell (shown in Fig. 6). The effectiveness results from this study corresponding to m* = 2 in Fig. 11 were compared with the effectiveness results for a hypothetical blade (which combines a good level of both film and convection cooling) introduced by Holland and Thake [28]. Those authors used analytical equations to estimate the cooling effectiveness of the hypothetical blade. It is interesting to note that the cooling scheme studied here has an effectiveness value that is approximately 20% lower than that of the hypothetical blade of Holland and Thake [28] at the same nondimensional coolant mass flow (m* = 2).
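As a quick numerical check of the relations plotted in Fig. 11, the sketch below evaluates Eq. (13) for the ideal internal-convection curves and rearranges Eq. (12) to recover a metal temperature from a given effectiveness; the gas and coolant temperatures used are illustrative assumptions, not the engine-representative values of Table 2.

```python
# Quick check of the effectiveness relations behind Fig. 11: Eq. (13) for the
# ideal internal-convection curves and Eq. (12) rearranged for the metal
# temperature. Temperatures are illustrative assumptions.
def epsilon_m_ideal(m_star, eta_c):
    """Eq. (13): metal effectiveness of a convection-only cooled wall."""
    return (m_star * eta_c) / (1.0 + m_star * eta_c)

def metal_temperature(eps_m, t_rg, t_c_in):
    """Eq. (12) rearranged: T_m = T_rg - eps_m * (T_rg - T_c_in)."""
    return t_rg - eps_m * (t_rg - t_c_in)

for eta_c in (0.2, 0.4, 0.6, 0.8):
    eps = epsilon_m_ideal(m_star=2.0, eta_c=eta_c)
    print(eta_c, round(eps, 3), round(metal_temperature(eps, t_rg=1600.0, t_c_in=900.0), 1))
```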
## Conclusion A computationally efficient numerical approach, which permits an assessment of the performance of a complex double-walled effusion cooled turbine blade, has been developed. The modified Goldstein [22] correlation was used to predict the film effectiveness for each row of holes, and the film superposition downstream of the rows was obtained using the Sellers [17] approach. The internal heat transfer coefficient was evaluated from a validated unit wall element conjugate analysis, and the conduction through the blade was simulated using the finite element code available in the ANSYS steady-state thermal module. The results (which include film effectiveness, metal effectiveness, and coolant mass flow consumption) have been found to closely match results available in the open literature. The developed numerical analysis approach offers a computationally efficient tool that can be used in the preliminary and optimization stages of a gas turbine blade design. The internal cooling was found to contribute the larger proportion of the overall cooling effectiveness of the double-walled effusion cooled blade, and this was attributed to the very large internal surface area and high internal heat transfer brought about by the combination of impingement cooling and the pedestals. In addition, there can be a substantial reduction in the cooling air requirements by employing cooling schemes with high internal convection, such as the double-wall cooling scheme in this study, combined with effusion cooling. ## Acknowledgements The authors would like to express sincere gratitude to Rolls-Royce plc and the Engineering and Physical Sciences Research Council (EPSRC) for their support, as well as the Rhodes Trust for supporting the lead author. ## Nomenclature • A = surface area • $C_d$ = discharge coefficient • $C_p$ = specific heat capacity • $C_x$ = chord length • D = diameter • $D_h$ = hydraulic diameter • h = heat transfer coefficient • K = thermal conductivity • L = length • $\dot{m}$ = mass flow rate • M = blowing ratio • Ma = Mach number • m* = nondimensional mass flow • Pr = Prandtl number • Re = Reynolds number • $S_x, S_z$ = streamwise and spanwise pitches, respectively • St = Stanton number • T = temperature • $T_{aw}$ = adiabatic wall temperature • $T_m$ = volume-averaged metal temperature • $T_r$ = recovery temperature • $u$ = velocity • x, y, z = Cartesian coordinates • $x_{\mathrm{decay}}$ = streamwise film decay factor • $z_{1/2}$ = lateral distance at which the temperature difference drops to half of its centerline value downstream of the hole Greek Symbols • $\alpha_t$ = turbulent thermal diffusivity • $\epsilon_f$ = film effectiveness • $\epsilon_m$ = metal effectiveness • $\epsilon_o$ = overall metal effectiveness • $\eta_c$ = convective efficiency • $\Lambda$ = recovery factor • $\mu$ = dynamic viscosity • $\rho$ = density Subscripts • c = coolant • ex = external • g = mainstream gas • in = internal • o = total • s = static ## References Han, J.-C. , 2013, Gas Turbine Heat Transfer and Cooling Technology, CRC Press/Taylor & Francis, Boca Raton, FL. Andersson, B. , Andersson, R. , Håkansson, L. , Mortensen, M. , Sudiyo, R. , and van Wachem, B. , 2011, Computational Fluid Dynamics for Engineers, Cambridge University Press, Cambridge, UK. Laschet, G. , Rex, S. , Bohn, D. , and Moritz, N. , 2002, “ 3-D Conjugate Analysis of Cooled Coated Plates and Homogenization of Their Thermal Properties,” Numer. Heat Transfer: Part A, 42(1–2), pp. 91–106. Laschet, G. M. , Rex, S. , Bohn, D. , and Moritz, N. , 2003, “ Homogenization of Material Properties of Transpiration Cooled Multilayer Plates,” ASME Paper No.
GT2003-38439. Laschet, G. , Krewinkel, R. , Hul, P. , and Bohn, D. , 2013, “ Conjugate Analysis and Effective Thermal Conductivities of Effusion-Cooled Multi-Layer Blade Sections,” Int. J. Heat Mass Transfer, 57(2), pp. 812–821. Zecchi, S. , Arcangeli, L. , Facchini, B. , and Coutandin, D. , 2004, “ Features of a Cooling System Simulation Tool Used in Industrial Preliminary Design Stage,” ASME Paper No. GT2004-53547. Hylton, L. D. , Mihelc, M. S. , Turner, E. R. , Nealy, D. A. , and York, R. E. , 1983, “ Analytical and Experimental Evaluation of the Heat Transfer Distribution Over the Surfaces of Turbine Vanes,” NASA/Detroit Diesel Allison; Indianapolis, IN, Technical Report No. NASA CR 168015. Heidmann, J. D. , Kassab, A. J. , Divo, E. A. , Rodriguez, F. , and Steinthorsson, E. , 2003, “ Conjugate Heat Transfer Effects on a Realistic Film-Cooled Turbine Vane,” ASME Paper No. GT2003-38553. Kassab, A. , Divo, E. , Heidmann, J. , Steinthorsson, E. , and Rodriguez, F. , 2003, “ BEM/FVM Conjugate Heat Transfer Analysis of a Three-Dimensional Film Cooled Turbine Blade,” Int. J. Heat Fluid Flow, 13(5), pp. 581–610. Rigby, D. L. , Heidmann, J. D. , Ameri, A. A. , and Garg, V. K. , 2001, “ Improved Modeling Capabilities in Glenn-HT—The NASA Glenn Research Center General Multi-Block Navier–Stokes Heat Transfer Code,” Cleveland, OH, NASA Report No. 20020073073. Mendez, S. , Nicoud, F. , and Poinsot, T. , “ Large-Eddy Simulation of a Turbulent Flow around a Multi-Perforated Plate,” Complex Effects in Large Eddy Simulations, Kassinos S. C., Langer C. A., Iaccarino G., Moin P., eds., Vol. 56. Springer, Berlin, Heidelberg. Amaral, S. , Verstraete, T. , Van den Braembussche, R. , and Arts, T. , 2010, “ Design and Optimization of the Internal Cooling Channels of a High Pressure Turbine Blade—Part I: Methodology,” ASME J. Turbomach., 132(2), p. 021013. Bonini, A. , Andreini, A. , Carcasci, C. , Facchini, B. , Ciani, A. , and Innocenti, L. , 2012, “ Conjugate Heat Transfer Calculations on GT Rotor Blade for Industrial Applications—Part I: Equivalent Internal Fluid Network Setup and Procedure Description,” ASME Paper No. GT2012-69846. Andreini, A. , Bonini, A. , Da Soghe, R. , Facchini, B. , Ciani, A. , and Innocenti, L. , “ Conjugate Heat Transfer Calculations on GT Rotor Blade for Industrial Applications—Part II: Improvement of External Flow Modeling,” ASME Paper No. GT2012-69849. Colban, W. F. , Thole, K. A. , and Bogard, D. , 2010, “ A Film-Cooling Correlation for Shaped Holes on a Flat-Plate Surface,” ASME J. Turbomach., 133(1), p. 011002. L'Ecuyer, M. R. , and Soechting, F. O. , 1985, “ A Model for Correlating Flat Plate Film Cooling Effectiveness for Rows of Round Holes,” AGARD Heat Transfer and Cooling in Gas Turbine, West Palm Beach, FL, p. 12. Sellers, J. P. , 1963, “ Gaseous Film Cooling With Multiple Injection Stations,” AIAA J., 1(9), pp. 2154–2156. Andrei, L. , Andreini, A. , Facchini, B. , and Winchler, L. , 2014, “ A Decoupled CHT Procedure: Application and Validation on a Gas Turbine Vane With Different Cooling Configurations,” Energy Procedia, 45, pp. 1087–1096. Baldauf, S. , Scheurlen, M. , Schulz, A. , and Wittig, S. , 2002, “ Correlation of Film-Cooling Effectiveness From Thermographic Measurements at Engine-like Conditions,” ASME Paper No. GT2002-30180. Baldauf, S. , Schulz, A. , Wittig, S. , and Scheurlen, M. , 1997, “ An Overall Correlation of Film Cooling Effectiveness From One Row of Holes,” ASME Paper No. 97-GT-079. Murray, A. V. , Ireland, P. T. , Wong, T. H. , Tang, S. W. 
, and Rawlinson, A. J. , 2018, “ High Resolution Experimental and Computational Methods for Modelling Multiple Row Effusion Cooling Performance,” Int. J. Turbomach., Propul. Power, 3(1), p. 4. Goldstein, R. J. , 1971, “ Film Cooling,” Advances in Heat Transfer, Elsevier, New York, pp. 321–379. Murray, A. V. , Ireland, P. T. , and Rawlinson, A. J. , 2017, “ An Integrated Conjugate Computational Approach for Evaluating the Aerothermal and Thermomechanical Performance of Double-Wall Effusion Cooled Systems,” ASME Paper No. GT2017-64711. Gurram, N. , Ireland, P. T. , Wong, T. H. , and Self, K. P. , 2016, “ Study of Film Cooling in the Trailing Edge Region of a Turbine Rotor Blade in High Speed Flow Using Pressure Sensitive Paint,” ASME Paper No. GT2016-57356. Colladay, R. S. , 1972, “ Analysis and Comparison of Wall Cooling Schemes for Advanced Gas Turbine Applications,” NASA/Lewis Research Center, Cleveland, OH, Report No. NASA-TN-D-6633. Lee, T. W. , 2013, Aerospace Propulsion, Wiley, West Sussex, UK. Baldauf, S. , Schulz, A. , and Wittig, S. , 1999, “ High-Resolution Measurements of Local Effectiveness From Discrete Hole Film Cooling,” ASME J. Turbomach., 123(4), pp. 758–765. Holland, M. J. , and Thake, T. F. , 1980, “ Rotor Blade Cooling in High Pressure Turbines,” J. Aircr., 17(6), pp. 412–418. Kays, W. M. , and Crawford, M. E. , 1993, Convective Heat and Mass Transfer, McGraw-Hill, New York. Mayle, R. E. , Blair, M. F. , and Kopper, F. C. , 1979, “ Turbulent Boundary Layer Heat Transfer on Curved Surfaces,” ASME J. Heat Transfer, 101(3), pp. 521–525. Chowdhury, N. H. K. , Zirakzadeh, H. , and Han, J.-C. , 2017, “ A Predictive Model for Preliminary Gas Turbine Blade Cooling Analysis,” ASME J. Turbomach., 139(9), p. 091010. Andrews, G. E. , Asere, A. A. , Mkpadi, M. C. , and Tirmahi, A. , 1986, “ Transpiration Cooling: Contribution of Film Cooling to the Overall Cooling Effectiveness,” ASME Paper No. 86-GT-136. ## Figures Fig. 1 Features of a double-walled effusion cooled concept turbine blade Fig. 2 Features of a double-walled effusion cooled concept turbine blade including leading edge showerhead cooling holes, pin-fin bank, TE slots and the flow direction Fig. 3 CFD results of flow velocity contour distribution in the unit cell from Murray et al. [23] Fig. 4 Double-wall blade with the unit wall element showing the definition of the geometrical parameters Fig. 5 Outer skin of the blade where numerical analysis is performed Fig. 6 ηc−Re characteristics compared to that of three simple duct cooling systems, characterized by L/Dh = 20, 40 and 60 Fig. 7 Steps in the iterative code used to determine aerofoil wall temperature Fig. 8 Model setup in ANSYS steady-state thermal module Fig. 9 (a) Film cooling effectiveness on the blade and (b) its corresponding adiabatic wall temperature at Po,c = 40 bar Fig. 10 Nondimensional film flow rate per hole, film cooling effectiveness, metal effectiveness and dimensionless external heat transfer coefficient as a function of the blade's dimensionless streamwise location Fig. 11 A graph of effectiveness as a function of nondimensional coolant mass flow, m*, from this study ## Tables Table 1 The dimensions of the unit wall element from Ref. [23] Table 2 Engine-representative operating conditions used
web
auto_math_text
## default Throughout, all rings are by default $k$-algebras.
web
auto_math_text
# Higgsino and gaugino pair production at the LHC with aNNLO+NNLL precision @article{Fiaschi2020HiggsinoAG, title={Higgsino and gaugino pair production at the LHC with aNNLO+NNLL precision}, author={Juri Fiaschi and Michael Klasen}, journal={arXiv: High Energy Physics - Phenomenology}, year={2020} } • Published 2 June 2020 • Physics • arXiv: High Energy Physics - Phenomenology We present a calculation of higgsino and gaugino pair production at the LHC at next-to-next-to-leading logarithmic (NNLL) accuracy, matched to approximate next-to-next-to-leading order (aNNLO) QCD corrections. We briefly review the formalism for the resummation of large threshold logarithms and highlight the analytical results required at aNNLO+NNLL accuracy. Our numerical results are found to depend on the mass and nature of the produced charginos and neutralinos. The differential and total…
web
auto_math_text
# MedeA Thermal Conductivity - Quantify Materials’ Heat Transport Characteristics At-a-Glance The MedeA ®[1] Thermal Conductivity module harnesses today’s computing power and computational methods to predict thermal conductivity for bulk solid and liquid materials, and nanostructured systems. This powerful module employs both equilibrium and non-equilibrium classical simulation methods to provide information essential for the optimal design of advanced products and component materials. MedeA Thermal Conductivity takes advantage of the parallel performance of LAMMPS, and it combines Materials Design’s expertise in forcefields, simulations, and software engineering. With MedeA Thermal Conductivity, explore pure bulk phases, and examine the effects of interfaces (i.e. Thermal Boundary, or ‘Kapitza’, resistance), impurities, isotopic purity, and nanostructure on the thermal conductivity of your materials systems. Key Benefits • Handles all computational details, letting you focus on the science • Allows easy set up of complex calculations using the powerful flowchart interface, as well as recall, to modify conditions or specify a different model before running again • Provides automatic analysis, including fitting of results with uncertainty estimation • Validates data based on graphs, including fitting errors and all intermediate results, through the convenient web interface • Works with JobServer and TaskServer to run your calculations on the appropriate hardware, centralizing the results • Integrates with MedeA Forcefields for advanced forcefield handling and assignment of a wide variety of organic, inorganic, and metallic materials ‘The equilibrium molecular dynamics method within MedeA’s thermal conductivity module makes it easy to perform routine property estimates on homogeneous non-metallic solids and liquids, while the complementary non-equilibrium method is particularly useful for making quantitative assessments of the effect of interfaces on heat transfer in complex multiphase materials.’ ## Computational Characteristics • Uses the LAMMPS simulation engine for maximum performance on any computer, whether it be a scalar workstation or a massively parallel cluster • Provides the lattice component of the thermal conductivity for ordered systems. For insulators and semiconductors at moderate temperatures, this is essentially all of the thermal conductivity • Equilibrium molecular dynamics (EMD) Green-Kubo method: • Requires moderate system sizes • Requires variable simulation times: duration of simulation depends on the thermal conductivity, for example: higher conductivities require longer simulation times • More automated than RNEMD methods - simply build and equilibrate the system, and run • Reverse non-equilibrium methods (RNEMD) method: • Requires elongated cells in the direction of conduction • Extrapolation vs. cell length for crystalline materials • Probes higher conductivities, which arise from longer phonon mean free path lengths, and require correspondingly longer cells • Optimizes imposed heat transfer rate, requiring some user intervention • The effect of the cell cross section may sometimes need to be examined • Compatible with any of the forcefields handled by MedeA Forcefield ## Required Modules • MedeA Environment • MedeA Forcefield • MedeA LAMMPS download: pdf
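The Computational Characteristics above mention the equilibrium-MD (Green-Kubo) route to the lattice thermal conductivity. As a purely generic illustration of that relation, and not of the MedeA or LAMMPS interfaces themselves, the sketch below integrates a heat-flux autocorrelation function; the flux series, cell volume, cutoff time, and temperature are arbitrary placeholders.

```python
# Library-agnostic sketch of the Green-Kubo estimate of thermal conductivity
# from an equilibrium-MD heat-flux time series:
#   kappa = V / (3 kB T^2) * integral_0^inf <J(0) . J(t)> dt,
# with J the heat-flux vector (heat current divided by the cell volume).
# Not the MedeA/LAMMPS workflow itself; the synthetic flux merely exercises it.
import numpy as np

KB = 1.380649e-23  # J/K

def green_kubo_kappa(J, dt, volume, temperature, t_cut):
    """J: (n_steps, 3) heat flux in W/m^2 (SI); dt in s; volume in m^3."""
    n_cut = int(t_cut / dt)
    acf = np.zeros(n_cut)
    for lag in range(n_cut):                      # flux autocorrelation, averaged over time origins
        acf[lag] = np.sum(J[: len(J) - lag] * J[lag:], axis=1).mean()
    running_integral = np.cumsum(acf) * dt        # rectangle rule; in practice one checks convergence
    return volume / (3.0 * KB * temperature ** 2) * running_integral[-1]

rng = np.random.default_rng(0)
J = rng.normal(scale=1e8, size=(20000, 3))        # made-up flux magnitude, white noise
print(green_kubo_kappa(J, dt=1e-15, volume=8e-27, temperature=300.0, t_cut=2e-12))
```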
web
auto_math_text
# Camels All the Way Down December 1, 2022 In this post, I’ll present (in a somewhat playful manner) a common critique of Haskell, which famously has a lazy evaluation semantics, and see how it can also apply to OCaml, which is eager. In other words, I take the cheap route and bash various programming languages to get readers. To that end, let me start with a parable. ## The parable of the lazy programmer Having studied programming languages at CMU and worked at a certain proprietary trading firm, I’ve now been firmly indoctrinated into the Cult of ML,1 a family of functional programming languages. Originally this was Standard ML, but nowadays I write more OCaml. Anyway, my wise mentors have always warned me about straying too far into the error of the Haskell programmers, lest I become swayed by their heresy and enter into apostasy against the Cult of ML.2 “Lo!” they warned me, “the central vice of Haskell lies in its fundamental laziness.3 Believe not, therefore, their wicked lies!” The most oft-cited issue with pervasive laziness is that it obscures the runtime behavior of code. Because of this, it’s quite hard to analyze the complexity of any nontrivial Haskell code. Additionally, exceptions never show up quite where you expect them to, and debug print statements need to be handled with care.4 There are some more subtle reasons why we might not like laziness, too: for example, Hask is famously not a category, and one way to see this is how the built-in seq function, which lets the programmer selectively use eager evaluation, works. “Stay your judgment but a minute,” the devilishly devious devotee of Haskell replies. “Yea, there may be certain trade-offs, but one benefit of laziness ye cannot deny: that both the finite and the infinite may be contained within a single type! With laziness, ye may fearlessly work with both the finite and the infinite. Be thralls no more to those who would limit your grasp to the finite.” The Haskell programmer, in his error, speaks of this: -- Haskell -- We can define an infinite list of natural numbers. nats = [0..] -- We can do normal operations on the infinite list, like filtering. evens = filter even nats -- Prints out [0,2,4,6,8]. print $take 5$ filter even nats This is, prima facie, a strong argument. The ability to deal with infinite data structures just as easily as finite ones seems like a strong selling point of Haskell. But Harper,5 the defender of the ML orthodoxy, is quick with the reply: “Not so! The Haskell programmer would erase the distinction between the finite and the infinite: a grave error indeed. For how can one speak of the finite if it is always liable to be confused with the infinite? Or what communion can the finite have with the infinite which does not destroy them both? Were we not taught by the Ancients the method of induction, which may be properly applied only to the finite? To mix the two cardinalities is a cardinal sin indeed, for with it we must forsake the most noble method of induction.” The point here is somewhat subtle: one of the great benefits of functional languages is that we can rigorously reason about our code using structural induction. In our introductory functional programming class at CMU, we have the students write many such proofs, to the point that they probably get sick of it. But there’s a bit of a problem here: induction can only be done on a type whose elements all have finite “depth,” so to speak. 
Any first-year undergraduate will be familiar with the problem: a proof by mathematical induction can only show that some proposition holds for every natural number. It cannot show that it holds “at infinity,” whatever that means. Now if you’ve done any functional programming before, you’ve probably seen how a good type system makes it easy to express concepts in code, and the beautiful thing is that we can prove theorems about inductively defined types using structural induction. For example, we can define the natural numbers of good old Peano arithmetic like such: -- Haskell data Nat = Zero | Succ Nat three = Succ $Succ$ Succ Zero -- Less than or equal leq :: Nat -> Nat -> Bool leq Zero _ = True leq _ Zero = False leq (Succ n) (Succ m) = leq n m -- Prints true then false print $leq Zero three print$ leq three Zero But now here’s the “gotcha” moment: we would like to think of Nat as the type of natural numbers, in the sense that every value n :: Nat is a natural number. Indeed, Zero corresponds to $0$ and three corresponds to $3$. What about the following value of type Nat? -- Haskell -- What natural number is this? ω = Succ ω -- We can do useful computations with infinity, like determining that -- it's greater than three. print $leq three ω This is $\omega$, the fixed point of the successor function (i.e. the “number” that is its own successor, an infinite ordinal). This is not a natural number, despite being a value of type Nat! Because of this, the type Nat cannot properly be said to be the type of natural numbers. Likewise, the “list” type [a] in Haskell actually contains much more than just lists; it also contains infinite streams whose elements are of type a. This, one might imagine, is a rather severe deficiency, and a source of some embarrassment for the Haskell programmer. ## The hypocrisy unveiled Upon hearing this, the Haskell programmer is momentarily taken aback. “Touché,” he acknowledges, “I may concede the merit in your point.” Yet ever quick on his feet, he continues thus: “But this cannot be considered a point in favor of your position, O arrogant eager programmer! For behold, does not this OCaml code exhibit the same behavior?” (* OCaml *) type nat = Zero | Succ of nat let rec leq n m = match (n, m) with | (Zero, _) -> true | (_, Zero) -> false | (Succ n', Succ m') -> leq n' m' (* Uh oh... *) let rec omega = Succ omega (* We can show that three is less than infinity! *) let three = Succ (Succ (Succ Zero)) let true = leq three omega The OCaml programmer hangs her head in shame. “Alas, this I must confess: thou hast made plain a shameful mistake in our blessed language’s design. We have too freely allowed some expressions to inhabit the right-hand side of a recursive let binding.6 Yet I would submit this to thee: the mistake is not so great as it is in Haskell.” Why might this be? For one, this class of counterexamples is actually quite small in OCaml; if we make the situation even a little trickier, the compiler will complain about it: (* OCaml *) let succ n = Succ n (* The compiler doesn't like this *) let rec omega' = succ omega' But another, more fundamental difference, is this: in Haskell, the type Nat contains not only the “conatural” numbers (i.e. the natural numbers extended with a point at infinity), but also many other divergent expressions wrapped in a Succ call! -- Haskell loop () = loop () -- This value cannot be thought of even as a conatural number diverge = Succ$ loop () In OCaml, this is not the case. 
Due to eager evaluation, we cannot construct a value of type nat that diverges. (* OCaml *) let loop () = loop () (* This will run forever *) let diverge = Succ (loop ()) So although yes, the behavior of let rec declarations in OCaml was arguably a mistake, the language’s eagerness at least guards us somewhat: the resulting infinities are limited only to a relatively “well-behaved” sort. In Haskell, the pervasive laziness leads to even more disastrous effects, and all bets are off. 1. Here I mean “meta language,” not “machine learning.” I hate that I always have to make this distinction; now I know how cryptographers feel every time they have to explain that “crypto” means “cryptography,” not “cryptocurrency.”↩︎ 2. I’m joking, of course. I owe a lot to Haskell: it was actually the first functional language that I learned. When I was a high school student, it was somewhat trendy to learn functional programming, so I went through a few Haskell tutorials and was immediately hooked.↩︎ 3. In fact, according to Catholic doctrine, sloth—laziness—is one of the seven cardinal sins!↩︎ 4. Well, I’m aware that there are libraries that make debugging pretty easy.↩︎ 5. Bob Harper, whose class on programming languages I’m taking right now. Actually, I happen to be procrastinating on doing my homework for that class by writing this post.↩︎ 6. I actually asked this question on the course Piazza for 15-312, and a TA linked the relevant language extension page.↩︎
web
auto_math_text
# University of Fallujah Recent publications Since the beginning of the corona pandemic, numerous scientific projects have been conducted worldwide to investigate how the new virus can be combated. Researchers are developing various vaccines and drugs at full speed – with varying degrees of success. In this work, silico screening (molecular docking analysis) is performed on twenty natural compounds, which are expected to provide valuable lead molecules and medication to treat a new condition SARS-CoV-2. Our results indicate that out of the 20 compounds on the candidate list, lutein and Polydatin, natural components of fruits and vegetables (especially egg yolk and maize) have shown an excellent performance in our docking studies through a minimum binding energy of − 9.8 kcal/mol also − 7.4 kcal/mol, separately. This indicates their potential for the inhibitory molecular interactions against COVID-19. The main intent of the research is to analyse the protein components and investigate the molecules. Many studies have shown that faced with an epidemic, the effect of fear on human behavior can reduce the number of new cases. In this work, we consider an SIS-B compartmental model with fear and treatment effects considering that the disease is transmitted from an infected person to a susceptible person. After model formulation and proving some basic results as positiveness and boundedness, we compute the basic reproduction number $\mathcal R_0$ and compute the equilibrium points of the model. We prove the local stability of the disease-free equilibrium when $\mathcal R_0 < 1$. We study then the condition of occurrence of the backward bifurcation phenomenon when $\mathcal R_0\leq1$. After that, we prove that, if the saturation parameter which measures the effect of the delay in treatment for the infected individuals is equal to zero, then the backward bifurcation disappears and the disease-free equilibrium is globally asymptotically stable. We then prove, using the geometric approach, that the unique endemic equilibrium is globally asymptotically stable whenever the $\mathcal R_0 > 1$. We finally perform several numerical simulations to validate our analytical results. Density Functional Theory has been utilized to investigate the electronic and structural characteristics of Aluminium phosphide (AlP). The exchange-correlation potential was calculated using the Generalized Gradient Approximation. The structural, electronic and vibrational features of AlP diamondoids and nanocrystals were investigated using Density Functional Theory at the PBE/6-31(d) level, which included polarization functions. Vibrational modes have been optimized concerning IR intensity, force constants, and lowered masses. In this study there are two components to the vibrational force constant for AlP diamondoids. The first one is distinguished by a reduced mass that is greater than 1 amu and consists primarily of Al-P vibrations that are positioned roughly between 0 and 231 cm-1. The second component has a decreased mass very near to 1 amu and is in the 1228–2400 cm–1 range. It is entirely made up of hydrogen vibrational modes. AlP diamondoids were evaluated with the results of experimental bulk in terms of molecular size-related changes in allocated vibrational frequencies. Breast cancer (BC) is a genetic disease in the mammary glands' ducts and lobules, with ductal cancers comprising most of the malignancies. Biomarkers can provide an assessment of cancer diagnosis and prediction. 
The study aims to compare the expression of serum (miR-21-5p) and CA 15-3 expression in the Iraqi population as more efficient biomarkers, then checked MiRNA-21 main characters as a biomarker comparison with (CA15-3) levels. Circulating serum miRNA-21 expression was measured using (the quantitative Real Time-PCR technique) in 50 patients at various stages of breast cancer compared to 27 healthy controls. Meanwhile, CA 15-3 levels were quantified using electro-chemo luminescence immunoassay (ECLIA) methods. The results show the expression of miRNA-21 and the concentration of CA15-3 increased significantly (p>0.01) in patients as compared to control, but the higher median level of MiRNA-21 than of CA15-3. The ROC curve analysis shows that the accuracy, Overall Model Quality, AUC, sensitivity and specificity of miRNA-21 as a biomarker is much higher than the CA 15-3. In conclusion, miRNA-21 may fill the gap that CA 15-3 still lacks in detecting breast cancer at an early stage. Keywords: Breast cancer, microRNA-21, CA15-3, gene expression, RT-q PCR During 2019-2020, the experiment was conducted in the laboratory of the Department of Field Crop Sciences, Faculty of Agricultural Engineering Sciences - Baghdad University, to investigate the impact of soaking wheat seeds produced during the 2016 agricultural season with three plant extracts (licorice root extract 2%, 4% and 6%, Acadian and Humic(500, 1000, & 1500 mg L-1). Aside from the two control treatments (soaking in distilled water with dried seeds). The results show that the soaking treatment with licorice root extract outperformed the other therapies in conventional laboratory germination, root length, and seedling vigor index (95 percent and 3.42 cm 1207) compared to the two control treatments (soaking with distilled water and dry seeds). While all the Humic and Acadian soaking treatments at the concentrations (500 and 1000) mg L-1 did not significantly differ with the distilled water soaking treatment.The characteristics of standard laboratory germination percentage, root length, coleoptile length and seedling vigor index. Thus, we conclude that soaking wheat seeds with high concentrations of Acadian (more than 1000 mg L-1) leads to a deterioration in the vitality of the seeds. While soaking with licorice root extract enhances the vibrancy and activity of wheat seeds compared to the other extracts used. As a result, we propose soaking the somewhat old and low-vital wheat seeds in a concentration of at least 2% licorice root extract. Keywords: Radicle dry weight, Seedling vigor, seed germination, seed storage. After a long time that the current account has been governed by Customs and Usages, the legislators began since eighty years, one after another, to organize it, but with some differences. Especially, by organizing this account only in the Bank or Commercial operations. Where, this legal tool can be used by all physical persons, merchants or not, and private and public moral persons, in all operations, Commercial or not. And for that we researched on, by analytical and comparative study, to reach the results that we can draw out from it and to formulate the necessary and suitable recommendations and suggestions to modify the legislations in force to enable everyone to take advantage of the current account. 
Economic growth is one of the most important goals that countries around the world seek to achieve. This study measures and analyzes the impact of fiscal discipline on economic growth in Iraq over the period 2004-2020 using the Autoregressive Distributed Lag (ARDL) model. The results show that economic growth in Iraq has an inverse relationship with the fiscal discipline indicators (deficit or surplus, public debt, public expenditure), except for the public revenue indicator, which is directly related. The bounds test indicates a long-run equilibrium relationship, that is, cointegration among the variables at the focus of the study, and the error correction coefficient implies that about 1.03 of the short-term deviation is corrected within a period of one year on the way back to long-run equilibrium.

The research aims to study the efficiency of the housing function in the city of Fallujah and its problems, and to describe its housing and population characteristics using a number of criteria (population characteristics, occupancy rate, housing ownership, the nature of the materials used in construction, the area of the housing unit and the number of floors, as well as the presence of a garage, a home garden and the design), in order to determine how well the housing unit provides a healthy environment and complies with locally applicable standards. It also assesses the housing need, the number of existing housing units and the problems faced in terms of infrastructure and economic and social conditions, and makes recommendations aimed at improving the housing situation in the city. The research therefore addresses the following aspects. First: population distribution and population density. Second: characteristics of the housing unit in the city of Fallujah. Third: residential regions in the city of Fallujah. Fourth: the reality of the problems of the housing function.

New mononuclear platinum(IV) and dinuclear ruthenium(III) complexes were synthesized with the o-phenylenediamine (OPD) ligand. The complexes were characterized by different spectroscopic techniques and DFT calculations, and their cytotoxic activity was investigated. The FT-IR data confirmed that the OPD ligand coordinates through its two nitrogen atoms to the platinum(IV) and ruthenium(III) ions. The mass spectral data confirmed that the platinum(IV) complex is mononuclear, whereas the Ru(III) complex is dinuclear. Magnetic susceptibility measurements indicated that the platinum(IV) complex is diamagnetic, while the Ru(III) complex is paramagnetic. UV-Vis measurements showed the charge-transfer and (d-d) metal transition bands. The DFT and TD-DFT calculations showed that the complexes are stable: the electronic energies are -521.99 and -963.59 a.u. for the Pt(IV) and Ru(III) complexes, respectively, and -342.96 a.u. for the free o-phenylenediamine ligand; the HOMO energies are -0.214 a.u. for the Pt(IV) complex and -0.267 a.u. for the Ru(III) complex, and the LUMO energies are -0.145 a.u. and -0.183 a.u., respectively. The dipole moments of the complexes are 13.70 and 0.004 Debye, respectively, indicating the degree of polarization of each complex.
From all the spectroscopic data and the bond angles obtained from the DFT calculations, the proposed structures of the complexes are distorted octahedral. The cytotoxicity of the prepared complexes was studied, and the results showed that the Pt(IV) and Ru(III) complexes have good cytotoxicity against the L20B cell line, with IC50 values of 169.8 and 204.8 µg, respectively, which opens the field for their further application as antitumor complexes.

Praise be to God, Lord of the Worlds, and prayers and peace be upon the most honorable of the prophets and messengers, and upon all his family and companions. This research consists of an introduction, two sections and a conclusion. The first section gives a brief account of the personal and scientific life of Imam Al-Hijjawi (may Allah have mercy on him), and the second contains the choices of Imam Al-Hijjawi (may Allah have mercy on him). The conclusion presents the most important results that the researcher reached in the study, and praise be to God, Lord of the Worlds.

This study includes an introduction, a preface and five sections. The preface introduces Imam al-Qurtubi (may Allah have mercy on him). The first section concerns the testimony of a single witness to a wound; in it, I mentioned the sayings of the jurists and their evidence, indicating the correct opinion. The second section concerns increasing the oaths of al-qasama beyond fifty; it clarifies the sayings of the jurists and their evidence, discusses them, and indicates the most correct opinion. The third section, on establishing retribution by qasama, presents the sayings of the jurists and their evidence, discusses them, and indicates the most correct opinion, the most correct being determined by the strength of the evidence. The study then closes with a conclusion and references.

Praise be to God, Lord of the Two Worlds, and prayers and peace be upon our master Muhammad and all his family and companions. In this research I dealt with the choices of the scholar Ibn al-Aqrab (may God have mercy on him), who lived in the eighth century AH, around the year 710 AH, in the city of Aleppo. I collected his choices and compared them with the eight schools of jurisprudence and the fatwas of contemporary scholars; the choices collected amount to two issues, concerning khul’ and zihar. The research comprises an introduction, two chapters and a conclusion. The second chapter presents the choices of the scholar Ibn al-Aqrab (may God have mercy on him) concerning khul’ and zihar, and the conclusion sets out the most important results that I reached in this research.

Objective (thematic) interpretation is one of the most famous branches of the interpretation of the Noble Qur’an, and it is of three types: the student may take a specific word in the Noble Qur’an, study a specific subject regardless of the wording, or take a surah or a group of verses and study it thematically. I chose the first type, searching the Book of God Almighty for the word (livelihood) or (living), which I found in five verses of the Qur’an. And God bless.

Imam Abi al-Fadl Wali Al-Din (may Allah have mercy on him) is one of the Shafi’i scholars, distinguished by his knowledge, the wisdom of his mind and the breadth of his understanding. This is evident from the study of "Matn Al-Ghayah Wal-Taqreeb", in which he stated his opinion as well as his preferences and choices. The book also contains jurisprudential terms, criteria and rules.
He authored the book "Al-Nihayah", in which he sets out issues of Shafi’i jurisprudence, with reference to other schools of thought in a few places.

The research has several objectives, answering a number of questions that revolve around this topic, including:
- Explaining the meaning of right and usufruct, and the grounds that give an individual a right of usufruct.
- Studying the jurisprudential rulings that regulate the right to use river water, and clarifying the extent of Islamic jurisprudence's concern to safeguard these rights, together with a statement of the legal controls that regulate usufruct.
- Clarifying what is meant by the partnership mentioned in the hadiths of the Prophet that call people to its legitimacy: when an individual has the right to exploit that partnership, what the controls on benefit are in the event of partnership, and how this benefit is sustained.
- Stating the legal position on the use of river water, the extent to which it is regulated and guarantees the rights of individuals in this respect, the stages this regulation has gone through, and an explanation of some of the legal articles that regulate the right of usufruct.
- Explaining the extent of agreement and difference between Islamic jurisprudence and statutory law on the right to use river water.
- Restricting the study to the two most important rights of use of river water, namely the right of drinking and the right of watering.

In this research, light is shed on a well-known Shafi’i jurist, Judge Abu Ali Al-Farqi, who died in the year 528 AH. His choices proved to be many, so my study was limited to those he mentioned in the chapters on marriage only. My methodology was comparative: I presented the opinions of jurists from the different Islamic schools alongside the opinion of Judge Al-Farqi, with the evidence for each saying, a discussion of the evidence, and a statement of the most correct view. God grants success.

This study dealt with the jurisprudential issues that have three aspects according to the Shafi’is in the book Al-Bayan by Al-Omrani (d. 558 AH), in the book of Taharah, and their impact on the fatwas of contemporaries, as a comparative study. Imam Al-Omrani (may Allah have mercy on him) was an ascetic and pious Imam and a benevolent scholar, well known for his jurisprudence, legal theory, theology and grammar. He was born in Yemen, and was called the Sheikh of the Shafi’is and the Sheikh of the people of Yemen. In his book Al-Bayan, one of the important books of the Shafi’is, he recorded the most important issues with three aspects. It is known that the aspects differ from the sayings: the sayings are the words of Al-Shafi’i himself (may Allah have mercy on him), of which there are two important ones in this respect, the old saying and the new, whereas the aspects represent the opinions of the companions of Al-Shafi’i based on his principles and rules. Sometimes their reasoning is not based on those principles and rules, in which case it is not part of the school but is attributed to its author. The researcher therefore selected four issues from Al-Bayan by Al-Omrani (may Allah have mercy on him) in the book of Taharah relating to water, together with knowledge of the other sayings of the school that agree with those aspects of the Shafi’is.
Keywords: jurisprudence, three aspects, Shafi’i, water, the book Al-Bayan, its impact, fatwas of contemporaries.

This study sets out what is mentioned in the Noble Qur’an in description of the reward and what the reward is attributed to. The researcher sought to shed light on, and reflect upon, what the Holy Qur’an says in describing reward. It is found that the "reward" is most often described as great, generous, unfailing and good. As for what the reward is attributed to, it is attributed to the benefactors, the workers, the believers and the reformers, and to the relative pronoun (man, "whoever") together with what follows it, for the doers of good. The researcher sought to identify the implications of these descriptions and attributions, drawing throughout on the sources of language and interpretation. The research falls into two sections: the first concerns the description of the reward, and the second concerns what the reward is attributed to. The study cites the Qur’anic texts related to these two sections and searches the sources of language and interpretation for the reasons and indications connected with them. The introduction explains the main idea and the division of the research, and the conclusion presents the most important findings and results.

This research studies the duration of Elaa (ila’) and its ruling in Islamic law. The importance of the topic appears when a man neither wants to keep a woman as a wife nor lets her marry another man: he swears not to approach her, leaving her neither divorced nor truly married. The purpose behind Elaa is to do harm to the wife, and Muslims, too, practiced Elaa; but God forbade this and gave the husband a set period so that he might reconsider and not act rashly. After that, if he thinks it better to put this harm aside, he may do so; and if he thinks it better to divorce, he may divorce. God limited the duration to four months, so whoever swears Elaa for four months or less does not need to wait for the end of the period, because the duration of Elaa expires before or at the time limit. The waiting of four months and ten days should fall within the time of Elaa, because only after the duration of Elaa has elapsed can the wife demand that the case be raised before the judge; without Elaa the wife has no right to make this demand, for the demand that the husband reverse his decision can only be made after four months. A further point concerns the harm done by withholding sexual intercourse for less than the duration limit.
### #ActualDJTN Posted 26 July 2012 - 10:52 AM

Well now I’ve got another issue pertaining to the velocity: it’s changing incorrectly. If I have a velocity vector (0, 0, 10) and I do a matrix transform based on the rotations of the camera (pitch, roll, yaw), the velocity should always push away from the camera. The problem is that the velocity's Y component fluctuates up and down with the slightest change in the camera's pitch.

### #1DJTN Posted 25 July 2012 - 04:21 PM

Well now I’ve got another issue pertaining to the velocity: it’s changing incorrectly. I transform the coordinates of the velocity vector by a rotation (yaw/pitch/roll) matrix that I calculate by decomposing the view matrix. Everything works as expected except that the pitch's effect on the velocity vector differs depending on the yaw. If I don’t change the pitch of the camera and only rotate on the yaw axis (spinning the camera around clockwise), the pitch goes up and down. This has to be a math problem but I cannot pinpoint it. I have access to my camera class, but it does not use pitch, roll and yaw calculations; my camera rotates using angles: `CamTarget.Y = radius * Sin(vRadians)`
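A common fix for the coupling described above is to stop recomposing a yaw/pitch/roll matrix from the decomposed view matrix and instead rotate the velocity by the view matrix's rotation block directly, or equivalently push the velocity along the camera's forward axis. A minimal numpy sketch of the idea follows; the row-major layout, the +Z "look" convention and the rotation order are assumptions, since engines differ:

```python
import numpy as np

def rotation_yaw_pitch_roll(yaw, pitch, roll):
    """World-from-camera rotation. R = Ry(yaw) @ Rx(pitch) @ Rz(roll) is one common
    convention; other engines use other orders, so treat this ordering as an assumption."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])     # yaw about +Y
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])     # pitch about +X
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])     # roll about +Z
    return Ry @ Rx @ Rz    # changing this order changes how pitch and yaw couple

def forward_from_view(view):
    """Camera forward direction in world space, assuming `view` is a 4x4 matrix whose
    upper-left 3x3 block is the world-to-camera rotation and the camera looks down +Z."""
    rot = np.asarray(view)[:3, :3]
    return rot.T @ np.array([0.0, 0.0, 1.0])    # inverse (transpose) rotation of +Z

speed = 10.0
view = np.eye(4)    # placeholder; use the real view matrix here
velocity = speed * forward_from_view(view)
print(velocity)     # -> [0. 0. 10.] for the identity view, i.e. straight away from the camera
```

Pushing the velocity along the forward vector sidesteps the Euler-angle bookkeeping entirely, so a pure yaw spin can no longer leak into the pitch of the velocity.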
# ANALYSIS OF THE MM-WAVE SPECTRUM OF THE LOWEST $\Sigma$ AND $\Pi$ BENDING STATES OF ARHCN

Creators: Cooksy, A. L.; Drucker, Stephen; Klemperer, William
Issue Date: 1992
Publisher: Ohio State University
Abstract: The lowest excited $\Sigma$ and $\Pi$ states of the van der Waals bending mode of ArHCN, including Stark-field data, were analyzed simultaneously using a linear molecular basis set and incorporating high-order hyperfine and effective Coriolis coupling constants. The rotational analysis is distinctly unsatisfying, since 9-11 constants are being fit to 15 energy levels (when hyperfine structure and Stark data are not included), indicating that a potential-surface fit may already be more appropriate. However, as a result of the near degeneracy of these states, the perpendicular transition moment $\mu_{b}$, the component of the electric field gradient tensor $V_{xz}$, and in some instances the relative signs of the terms off-diagonal in $l$ (or $k$) could be determined empirically. We estimate $\langle\theta\rangle_{\Sigma}=102^{\circ}$, $\langle\Delta\theta^{2}\rangle_{\Sigma}^{1/2}=40^{\circ}$, $R_{\Sigma}=3.91\,\AA$, $\langle\theta\rangle_{\Pi}=85^{\circ}$, $\langle\Delta\theta^{2}\rangle_{\Pi}^{1/2}=40^{\circ}$, and $R_{\Pi}=3.85\,\AA$. These may be compared to the values $\theta_{eq}=0^{\circ}$, $\langle\Delta\theta^{2}\rangle_{\mathrm{ground\ state}}^{1/2}=30^{\circ}$, and $R_{eq}=4.62\,\AA$. This value of $\langle\theta\rangle_{\Sigma}$ suggests that there is still a barrier to the antilinear Ar-NCH configuration at the $v=1$ energy. The results are in good agreement with previous theoretical work on the $\Pi$ state,$^{1,2}$ but less so in the case of the $\Sigma$ state,$^{1}$ which is more sensitive to the potential surface at the antilinear configuration.
Description: 1. D. C. Clary, C. E. Dateo, and T. Stoecklin, J. Chem. Phys. 93, 7666 (1990). 2. D. Yaron and W. Klemperer, J. Chem. Phys. 95 (1991).
Author Institution: Department of Chemistry, Harvard University
URI: http://hdl.handle.net/1811/12951
Other Identifiers: 1992-WG-02
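To connect the quoted radial separations to the rotational energy scale of the complex, one can treat Ar-HCN as a pseudodiatomic rotor, $B = h/(8\pi^{2} c \mu R^{2})$, with HCN collapsed to a point mass. This is only a back-of-the-envelope sketch, not part of the analysis above:

```python
import numpy as np

H_PLANCK = 6.62607015e-34     # J s
C_CM_S   = 2.99792458e10      # speed of light, cm/s
AMU      = 1.66053906660e-27  # kg

def pseudodiatomic_B(R_angstrom, m1_amu, m2_amu):
    """Effective rotational constant (cm^-1) of a two-point-mass rotor."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU
    R = R_angstrom * 1e-10
    return H_PLANCK / (8 * np.pi**2 * C_CM_S * mu * R**2)

# Ar (~40 amu) with HCN treated as a single ~27 amu point mass: a crude approximation.
for label, R in [("Sigma, R = 3.91 A", 3.91), ("Pi, R = 3.85 A", 3.85), ("eq, R = 4.62 A", 4.62)]:
    B = pseudodiatomic_B(R, 39.96, 27.01)
    print(f"{label}: B ~ {B:.4f} cm^-1 ({B * C_CM_S * 1e-9:.2f} GHz)")
# Prints values of order 0.05-0.07 cm^-1 (roughly 1.5-2.1 GHz).
```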
With two looming paper deadlines, two rambunctious kids, an undergrad class, program committee work, faculty recruiting, and an imminent trip to Capitol Hill to answer congressional staffers’ questions about quantum computing (and for good measure, to give talks at UMD and Johns Hopkins), the only sensible thing to do is to spend my time writing a blog post. So: a bunch of people asked for my reaction to the new Nature Communications paper by Daniela Frauchiger and Renato Renner, provocatively titled “Quantum theory cannot consistently describe the use of itself.”  Here’s the abstract: Quantum theory provides an extremely accurate description of fundamental processes in physics.  It thus seems likely that the theory is applicable beyond the, mostly microscopic, domain in which it has been tested experimentally.  Here, we propose a Gedankenexperiment to investigate the question whether quantum theory can, in principle, have universal validity.  The idea is that, if the answer was yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory.  Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty.  The agents’ conclusions, although all derived within quantum theory, are thus inconsistent.  This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner. I first encountered Frauchiger and Renner’s argument back in July, when Renner (who I’ve known for years, and who has many beautiful results in quantum information) presented it at a summer school in Boulder, CO where I was also lecturing.  I was sufficiently interested (or annoyed?) that I pulled an all-nighter working through the argument, then discussed it at lunch with Renner as well as John Preskill.  I enjoyed figuring out exactly where I get off Frauchiger and Renner’s train—since I do get off their train.  While I found their paper thought-provoking, I reject the contention that there’s any new problem with QM’s logical consistency: for reasons I’ll explain, I think there’s only the same quantum weirdness that (to put it mildly) we’ve known about for quite some time. In more detail, the paper makes a big deal about how the new argument rests on just three assumptions (briefly, QM works, measurements have definite outcomes, and the “transitivity of knowledge”); and how if you reject the argument, then you must reject at least one of the three assumptions; and how different interpretations (Copenhagen, Many-Worlds, Bohmian mechanics, etc.) make different choices about what to reject. But I reject an assumption that Frauchiger and Renner never formalize.  That assumption is, basically: “it makes sense to chain together statements that involve superposed agents measuring each other’s brains in different incompatible bases, as if the statements still referred to a world where these measurements weren’t being done.”  I say: in QM, even statements that look “certain” in isolation might really mean something like “if measurement X is performed, then Y will certainly be a property of the outcome.”  The trouble arises when we have multiple such statements, involving different measurements X1, X2, …, and (let’s say) performing X1 destroys the original situation in which we were talking about performing X2. But I’m getting ahead of myself.  
The first thing to understand about Frauchiger and Renner’s argument is that, as they acknowledge, it’s not entirely new.  As Preskill helped me realize, the argument can be understood as simply the “Wigner’s-friendification” of Hardy’s Paradox.  In other words, the new paradox is exactly what you get if you take Hardy’s paradox from 1992, and promote its entangled qubits to the status of conscious observers who are in superpositions over thinking different thoughts.  Having talked to Renner about it, I don’t think he fully endorses the preceding statement.  But since I fully endorse it, let me explain the two ingredients that I think are getting combined here—starting with Hardy’s paradox, which I confess I didn’t know (despite knowing Lucien Hardy himself!) before the Frauchiger-Renner paper forced me to learn it. Hardy’s paradox involves the two-qubit entangled state $$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}}.$$ And it involves two agents, Alice and Bob, who measure the left and right qubits respectively, both in the {|+〉,|-〉} basis.  Using the Born rule, we can straightforwardly calculate the probability that Alice and Bob both see the outcome |-〉 as 1/12. So what’s the paradox?  Well, let me now “prove” to you that Alice and Bob can never both get |-〉.  Looking at |ψ〉, we see that conditioned on Alice’s qubit being in the state |0〉, Bob’s qubit is in the state |+〉, so Bob can never see |-〉.  And conversely, conditioned on Bob’s qubit being in the state |0〉, Alice’s qubit is in the state |+〉, so Alice can never see |-〉.  OK, but since |ψ〉 has no |11〉 component, at least one of the two qubits must be in the state |0〉, so therefore at least one of Alice and Bob must see |+〉! When it’s spelled out so plainly, the error is apparent.  Namely, what do we even mean by a phrase like “conditioned on Bob’s qubit being in the state |0〉,” unless Bob actually measured his qubit in the {|0〉,|1〉} basis?  But if Bob measured his qubit in the {|0〉,|1〉} basis, then we’d be talking about a different, counterfactual experiment.  In the actual experiment, Bob measures his qubit only in the {|+〉,|-〉} basis, and Alice does likewise.  As Asher Peres put it, “unperformed measurements have no results.” Anyway, as I said, if you strip away the words and look only at the actual setup, it seems to me that Frauchiger and Renner’s contribution is basically to combine Hardy’s paradox with the earlier Wigner’s friend paradox.  They thereby create something that doesn’t involve counterfactuals quite as obviously as Hardy’s paradox does, and so requires a new discussion. But to back up: what is Wigner’s friend?  Well, it’s basically just Schrödinger’s cat, except that now it’s no longer a cat being maintained in coherent superposition but a person, and we’re emphatic in demanding that this person be treated as a quantum-mechanical observer.  Thus, suppose Wigner entangles his friend with a qubit, like so: $$\left|\psi\right\rangle = \frac{\left|0\right\rangle \left|FriendSeeing0\right\rangle + \left|1\right\rangle \left|FriendSeeing1\right\rangle}{\sqrt{2}}.$$ From the friend’s perspective, the qubit has been measured and has collapsed to either |0〉 or |1〉.  From Wigner’s perspective, no such thing has happened—there’s only been unitary evolution—and in principle, Wigner could even confirm that by measuring |ψ〉 in a basis that included |ψ〉 as one of the basis vectors.  But how can they both be right? 
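Both of the quantitative claims above, the 1/12 for Hardy's state and Wigner's in-principle check on his friend, are easy to verify numerically. Here's a minimal numpy sketch, nothing more than the Born rule spelled out:

```python
import numpy as np

# Hardy state (|00> + |01> + |10>)/sqrt(3), basis ordered |00>, |01>, |10>, |11>.
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3)

zero  = np.array([1.0, 0.0])
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# Probability that Alice AND Bob both obtain |->:
print((np.kron(minus, minus) @ psi) ** 2)     # 0.0833... = 1/12

# The two "conditional" facts used in the faulty proof:
print(np.kron(zero, minus) @ psi)             # <0,-|psi> = 0: left qubit |0> => right never |->
print(np.kron(minus, zero) @ psi)             # <-,0|psi> = 0: right qubit |0> => left never |->

# Wigner's check: under purely unitary evolution, measuring in a basis that contains
# |psi_wf> returns |psi_wf> with probability 1, whereas a collapsed friend gives only 1/2.
psi_wf = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)       # (|0,Friend0> + |1,Friend1>)/sqrt(2)
print(abs(psi_wf @ psi_wf) ** 2)                           # 1.0
print(abs(psi_wf @ np.array([1.0, 0.0, 0.0, 0.0])) ** 2)   # 0.5
```

So much for the arithmetic; back to the question of how Wigner and his friend can both be right.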
Many-Worlders will yawn at this question, since for them, of course “the collapse of the wavefunction” is just an illusion created by the branching worlds, and with sufficiently advanced technology, one observer might experience the illusion even while a nearby observer doesn’t.  Ironically, the neo-Copenhagenists / Quantum Bayesians / whatever they now call themselves, though they consider themselves diametrically opposed to the Many-Worlders (and vice versa), will also yawn at the question, since their whole philosophy is about how physics is observer-relative and it’s sinful even to think about an objective, God-given “quantum state of the universe.”  If, on the other hand, you believed both that 1. collapse is an objective physical event, and 2. human mental states can be superposed just like anything else in the physical universe, then Wigner’s thought experiment probably should rock your world. OK, but how do we Wigner’s-friendify Hardy’s paradox?  Simple: in the state $$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}},$$ we “promote” Alice’s and Bob’s entangled qubits to two conscious observers, call them Charlie and Diane respectively, who can think two different thoughts that we represent by the states |0〉 and |1〉.  Using far-future technology, Charlie and Diane have been not merely placed into coherent superpositions over mental states but also entangled with each other. Then, as before, Alice will measure Charlie’s brain in the {|+〉,|-〉} basis, and Bob will measure Diane’s brain in the {|+〉,|-〉} basis.  Since the whole setup is mathematically identical to that of Hardy’s paradox, the probability that Alice and Bob both get the outcome |-〉 is again 1/12. Ah, but now we can reason as follows: 1. Whenever Alice gets the outcome |-〉, she knows that Diane must be in the |1〉 state (since, if Diane were in the |0〉 state, then Alice would’ve certainly seen |+〉). 2. Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there’s no |11〉 component). 3. Whenever Charlie is in the |0〉 state, she knows that Diane is in the |+〉 state, and hence Bob can’t possibly see the outcome |-〉 when he measures Diane’s brain in the {|+〉,|-〉} basis. So to summarize, Alice knows that Diane knows that Charlie knows that Bob can’t possibly see the outcome |-〉.  By the “transitivity of knowledge,” this implies that Alice herself knows that Bob can’t possibly see |-〉.  And yet, as we pointed out before, quantum mechanics predicts that Bob can see |-〉, even when Alice has also seen |-〉.  And Alice and Bob could even do the experiment, and compare notes, and see that their “certain knowledge” was false.  Ergo, “quantum theory can’t consistently describe its own use”! You might wonder: compared to Hardy’s original paradox, what have we gained by waving a magic wand over our two entangled qubits, and calling them “conscious observers”?  Frauchiger and Renner’s central claim is that, by this gambit, they’ve gotten rid of the illegal counterfactual reasoning that we needed to reach a contradiction in our analysis of Hardy’s paradox.  After all, they say, none of the steps in their argument involve any measurements that aren’t actually performed!  But clearly, even if no one literally measures Charlie in the {|0〉,|1〉} basis, he’s still there, thinking either the thought corresponding to |0〉 or the thought corresponding to |1〉.  And likewise Diane.  
Just as much as Alice and Bob, Charlie and Diane both exist even if no one measures them, and they can reason about what they know and what they know that others know.  So then we’re free to chain together the “certainties” of Alice, Bob, Charlie, and Diane in order to produce our contradiction. As I already indicated, I reject this line of reasoning.  Specifically, I get off the train at what I called step 3 above.  Why?  Because the inference from Charlie being in the |0〉 state to Bob seeing the outcome |+〉 holds for the original state |ψ〉, but in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain.  I.e., I don’t accept that we can take knowledge inferences that would hold in a hypothetical world where |ψ〉 remained unmeasured, with a particular “branching structure” (as a Many-Worlder might put it), and extend them to the situation where Alice performs a rather violent measurement on |ψ〉 that changes the branching structure by scrambling Charlie’s brain. In quantum mechanics, measure or measure not: there is no if you hadn’t measured. Unrelated Announcement: My awesome former PhD student Michael Forbes, who’s now on the faculty at the University of Illinois Urbana-Champaign, asked me to advertise that the UIUC CS department is hiring this year in all areas, emphatically including quantum computing. And, well, I guess my desire to do Michael a solid outweighed my fear of being tried for treason by my own department’s recruiting committee… Another Unrelated Announcement: As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems. ### 157 Responses to “It’s hard to think when someone Hadamards your brain” 1. Quantum theory - Nature Paper 18 Sept | Page 2 | Physics Forums Says: […] is a recent comment by Scott Aaronson on Frauchiger's and Renner's paper: https://www.scottaaronson.com/blog/?p=3975   Lord Jestocost, Sep 25, 2018 at 4:09 […] 2. Usman Nizami Says: Sir how can great mathematician like Michael Atiyah come up with false proof of Riemann hypothesis? He is making mistakes in recent years. Is that because of his age. Dpse no one do a good math in old age? What do you think? Atiyah has said “Solve the Riemann hypothesis and you become famous. If you are famous already, you become infamous, ” And what is problem with Inter-universal Teichmüller theory. Shinichi Mochizuki is serious mathematician. How one should handle such complicated work. 3. Usman Nizami Says: So Michael Atiyah’s proof of Riemann hypothesis is wrong. 4. Peter Says: Wow, they really went out of their way to rehash a pretty basic reformulation of psychologism, I guess this time with quantum mechanics. Maybe I’ve just had to read and reread too much Frege over the years but the second I saw “it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory” in the abstract I knew exactly what the rest of the paper was going to be arguing for. I think that your technical arguments against their conclusion are sound and do a great job of showing part of what the issue is. I would love to see them engage with the philosophy on this topic and defend why they think quantum mechanics opens the door to psychologism, rather than implying it ambiently and just focusing on a technical physical argument. 
5. Jay Says: So clear, thanks! Unrelated question about entropy, computation and neural nets. Because of adiabetic computing we could theorically perform a computation as long as we want for free. Because of Landauer’s principle we know that measuring the result (and preparing the computation) must always incur at some entropy cost. Now, non linearities are considered a key ingredient for deep learning (actually for any neural net, deep or shallow). However, because of the former lines it seems that deep learning could theorically rely on linear transforms only, and delays the true non linearities until we want to measure the resulting net. Are they known caveats? For example, one could imagine that measuring the error gradient might require either a true non linearity (and some entropy cost) or keeping track of all the examples (at no entropy cost but at the cost of increasing computation length or memory requirement). Do you know the answer or where you would start looking for an answer? 6. Scott Says: Peter #4: But this is actually not a question that can be resolved with verbal philosophy. The reason their paper was published in Nature is that they give a technical argument for why the axioms of QM, together with certain reasonable-looking auxiliary postulates, lead to a contradiction. That demands a response that meets their argument on the battlefield, for example by identifying an error, rejecting one of the postulates, or (as I think I did) identifying and rejecting an additional unstated postulate. 7. David Says: Peter #4 “just focusing on a technical physical argument” Well, they are physicists, not philosophers. 8. DarMM Says: Scott, Bob (I think I have the F,F’,W,W’ to Alice,Bob,Charlie,Diane Map right! :)) when reasoning using the initial state would conclude he and Alice will get (okay,okay) 1/12 of the time. However when reasoning about Charlie’s reasoning (concerning Diane) he would conclude this is impossible. The bulk of the paper concerns Bob “transferring” Charlie’s conclusion to himself, via a special case of the Born Rule (P(E) = 1) and the assumption of consistent reasoning between agents. However your point is that one of the steps in the transference involves hearing about an Alice measurement. However an Alice measurement is the exact scenario that would invalidate Charlie’s initial reasoning and thus Bob should discard Charlie’s original reasoning about Diane. Is this remotely correct? 9. Scott Says: DarMM #7: Yes, that sounds right to me. 10. DarMM Says: Scott #8: Thanks, kind of you to answer. 11. John K Clark Says: The inability of the mathematical community to figure out if the proofs of the ABC conjecture and the Riemann hypothesis are valid makes me wonder if that could have implications for another unsolved problem, is P= NP? If they are not equal then you’d expect it would be fundamentally easier to check a proof than find a proof, but then why are world class mathematicians unable to check them? If I have a valid proof of the ABC conjecture but it would take you as much brain power to understand it as it would for you to find a proof of it on your own have I accomplished anything of value, would there be any point in you reading it? John K Clark 12. Daniel Says: This seems like yet another example of someone trying to pound a round peg (quantum mechanics) into a square hole (classical measurement theory) and getting confused. The contradiction is with the antiquated assumptions of the authors, not with quantum mechanics. 13. 
Scott Says: John #10: No, it’s much more prosaic than that. According to Erica Klarreich’s excellent Quanta article, Scholze identified the issues with Corollary 3.12 of Mochizuki’s paper not long after the paper came out, but held off on going public because he hoped someone else would do it, and it sounds like the same may have been true for other experts. As for Atiyah’s claimed half-page proof of RH, it looks like people were pointing out serious issues with it within hours. 14. Neel Krishnaswami Says: Hi Scott, can you explain to me what people are trying to achieve with “quantum interpretations”? (This sounds more hostile than I mean it — I’m actually genuinely confused.) To my understanding, undergrad QM consists of basically four claims: 1. States of an experimental system are described by elements of a Hilbert space. 2. Observables are self-adjoint operators of that Hilbert space. 3. The time evolution of an experimental system is described by the Schrodinger equation. 4. Observing an experimental system is described by Born’s law. The repetition of “experimental system” here means that we are talking about controlled experiments: a system which is isolated from the rest of the universe, with the sole interaction being the process of observation. (This ideal is hard to achieve in practice, but experimental physicists and engineers are getting ever better at it.) Since (a) Born’s law has a statistical, probabilistic character unlike the other three parts of QM, and (b) observation is a primitive concept of quantum mechanics, it is reasonable to wonder if it can be derived. That is, can we model both an observer and an experiment as a larger, interacting quantum system, and then derive the Born rule for the observer from an analysis of the joint system? (Basically, this is what I understand the measurement problem to be.) This seems like a really great, natural question to me. If we can, that’s great news! We now know where Born’s law comes from, in the same way that statistical mechanics explains where thermodynamics comes from. If we can’t, that’s even better, because it means that there must be some deep physics we have overlooked, which make Born’s law work out! But, it seems like to answer that question, we have to do the work: someone has to come up with a physically reasonable model of a coupled system consisting of an experiment and observer, and then actually solve the equations. Only then can the model be compared with the empirical fact of Born’s rule. Quantum interpretations (like MWI) mostly seem like an assertion that if you wave your hands enough, you don’t need to do this work, a claim I find deeply dubious. But plenty of smart people — people who have thought much more about QM than me! — seem to think that there is something serious going on with these interpretations. So there must be something I’m missing. 15. Peter Says: Scott #5, David #6 I feel as though both of you missed my point. “I would love to see them engage with the philosophy on this topic and defend why they think quantum mechanics opens the door to psychologism, rather than implying it ambiently and just focusing on a technical physical argument.” I said I would love *to see* them do this. As in, it would make me happy if they did this. So saying, “well they wrote a technical paper” isn’t a counter to what I said. 
I didn’t say they should be reprimanded and banished from academia for doing what they did, I said I would love it if they had engaged one of the glaring issues in their argument instead of only presenting a technical argument. I am not going to say that every physics paper requires half of it to be about philosophy, it doesn’t in the slightest. But why is me saying “I wish they had engaged in the philosophical issues here” in regards to a paper that makes claims about questions at the very heart of the grey area between physics and philosophy, reductionism and it’s ilk, refutable by pointing out that they aren’t philosophers? Secondly, Scott your second example of what should be done is exactly what I’m saying should be done. On of their postulates is that “contrary to philosophical consensus, this revivification of psychologism is valid.” That is a presupposition they used in their argument, it is not a conclusion that is drawn from their other presuppositions. Me bringing this up is exactly what you described as, “identifying and rejecting an additional unstated postulate.” So I agree completely that we should engage them on their own battlefield. But their battlefield is a grey area and part of their assumptions involve opening the door to a almost universally dismissed school of thought and as I said, I would have loved if they engaged that part of their argument. Refuting psychologism isn’t about verbal philosophy. The rejection of a reduction of physical laws to psychological entities and then drawing conclusions about the nature of those laws given various postulates about those psychological entities isn’t just a matter of toying with language. Dismissing someone drawing universal physical and metaphysical conclusions based off of the internal psychological states of agents isn’t just ‘dismissing their language games as pseudo-problems.’ 16. Scott Says: Neel #14: Yes, what people are trying to accomplish with “interpretations” is basically just to understand how unitary evolution and measurement can coexist in the same universe, and to put it crudely, how the universe knows when to apply one rule and when the other. On the spectrum of possible positions, MWI is the extreme that answers this question by holding that only unitary evolution has any fundamental ontological status; “measurement” is just an approximate concept that observers use to describe their experience of not knowing which branch of the wavefunction they’re going to find themselves in (i.e., indexical uncertainty). You don’t have to like it — many people don’t! — but it’s certainly a natural point in possibility space; if Hugh Everett hadn’t proposed it then someone else would have. 17. Joe Says: Scott, when are you speaking at JHU? I can’t find any information about the talk online. 18. Kevin Van Horn Says: Forgive my ignorance, but how are the {|0>, |1>} and {|+>, |->} bases related? You mentioned the Hadamard transformation; is that what relates them? 19. Scott Says: Kevin #18: Yes. 20. Aula Says: Scott #13: It seems to me that Atiyah’s claimed proof of RH is in some ways a rerun of Deolalikar’s claimed proof that P!=NP. 
In each case, the claimed proof both has/had obvious elementary errors (Atiyah appears to have forgotten some basic facts about complex analysis) and is/was trying to prove too much (there are functions that don’t satisfy an analogue of RH but have all those properties of the Riemann zeta function that Atiyah uses), so it’s no wonder that people started to pick apart both attempts almost immediately. 21. Craig Gidney Says: > *in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis* I would describe this slightly differently. Consider how you would actually go about implementing a measurement in the {|0 BobSawAndThoughtAbout0〉+ |1 BobSawAndThoughtAbout1〉, |0 BobSawAndThoughtAbout0〉- |1 BobSawAndThoughtAbout1〉} basis. I would do it as follows: Step 1: Uncompute those pesky “BobSawAndThoughtAbout” qubits. As in literally reverse time for Bob, so he unthinks his thoughts. Step 2: The relevant information is now factored into a single qubit. Perform a {|+〉,|-〉} basis measurement on that qubit. Step 3: Recompute the “BobSawAndThoughtAbout” qubits. The reason that the measurement is a problem is because it forces us to uncompute Bob thinking about his initially-valid conclusion, then recompute him thinking the same things but the conclusions are no longer valid (because the initial qubits are no longer in the |00〉+|10〉+|01〉state). In order for Bob’s thoughts to actually be correct, he has to think something like “If this is before the uncomputation and recomputation, and I saw 0, then Alice is definitely in the + state. But if this is after the recomputation, I don’t know what Alice’s state is.”. 22. Craig Gidney Says: Hm, actually, I think this “recompute with different thoughts” paradox has a classical analogue. 1. Alice and Bob are loaded into separate reversible classical computers. 2. We flip a coin to generate a random bit, then give Alice and Bob each a copy of that bit. 3. Suppose w.l.o.g. that the random bit is 0. 4. We run the computer for a bit. Alice thinks “My bit is 0, therefore Bob’s bit is 0.”. This conclusion is valid. 5. We uncompute Bob back to the initial state, flip the bit we gave him, and recompute. 6. Bob’s bit must be in state 1. But using “transitivity of knowledge” it must be in state 0 because Alice validly concluded that his bit was in state 0. Paradox. Obviously the mistake here is assuming that Alice’s conclusions about Bob’s state after step 3 must also apply to Bob’s state after step 5. It’s much easier to see the problem here than in the quantum case because the perturbation of Bob is directly described (we flip his bit), instead of hiding behind an anti-commuting measurement. 23. Scott Says: Joe #17: Info for my JHU talk is here. It’s on Thursday at 10:30am. 24. Edward Measure Says: Re Atiyah and the RH: Many years ago, two semi-famous physicists (SFP) were listening to Eddington, then an old man, expound one of the highly questionable theories of his old age. SFP1 to SFP2: “Is that going to happen to us?” SFP2: “Don’t worry – a genius like Eddington may go nuts, but guys like you just get dumber and dumber.” 25. Harry Johnston Says: Neel #14: That is, can we model both an observer and an experiment as a larger, interacting quantum system, and then derive the Born rule for the observer from an analysis of the joint system? I believe so, sort of. 
But if I understand correctly, when you ask the question, “what is the probability that I will observe that the other observer got result X” you have to use the Born rule to answer it. That’s not a trivial result. It means that it doesn’t matter when you apply the Born rule – you can apply it to the original measurement, or to your observation of the other observer, and you’ll always get the same answer, and that’s important. But it doesn’t really count as an independent derivation of the Born rule. Obviously, the “actually solve the equations” bit doesn’t explicitly model a conscious observer, or even a real-world measuring device, but a simplified model of one. I believe one way you might model a measuring device is by allowing the state of the system being measured to interact with a thermodynamic reservoir, which introduces decoherence. You can sum over the microstates of the reservoir to turn the pure quantum state into a density matrix, and that gives you your classical probabilities – but you’re implicitly using the Born rule when you do that. I’m told that Everett actually did the necessary math way back in 1957. [Epistemic status: mostly guesswork. I haven’t personally read the relevant articles, and I don’t know whether Everett used the approach I suggest above or something different.] 26. ppnl Says: Scott #16 ” Yes, what people are trying to accomplish with “interpretations” is basically just to understand how unitary evolution and measurement can coexist in the same universe, and to put it crudely, how the universe knows when to apply one rule and when the other.” Why doesn’t decoherence answer this in a simple and obvious way? You put a system in a box that isolates it and unitary evolution applies. If you allow a tiny interaction with the outside then the Born rule applies in a tiny way. You allow strong thermodynamic interactions and the Born rule is applied many many times in tiny increments and the system looks classical. There is no unitary or Born rule. There is just the amount of quantum information leaking out. The Born rule is an expression of the lack of information and so by definition must be random. 27. Andrei Says: Scott, My take on these thought experiments involving “isolated” systems is that they are not possible, even in practice. There is no way you can build a box so that an outside observer cannot find out, without opening it, if there is a dead cat or alive cat inside. One can for example measure the electric, magnetic, gravitational fields produced by the particles that make up the cat. These fields are of unlimited range and cannot be completely blocked by any box, whatever the material this box is made of. In other words, all observers, no matter where they are have access to the same amount of information about the system under consideration, so all those paradoxes disappear. Thank you, Andrei 28. mjgeddes Says: Both metaphysics and foundations of mathematics need to be clarified to understand quantum mechanics and crack the Riemann hypothesis. Coming back to the ‘3 Worlds’ of Penrose (Physics, Mathematics and Cognition) , my current view is that 2 of them *are* fundamental (Physics and Mathematics) and 1 is *not* fundamental- Cognition is emergent and composed of the other two primitives, having no reality over and above the other two. 
Budding philosophers of meta-physics that are realists about physical reality and abstract objects often want to unify mathematics and physics somehow, and the first ideas that leap to mind are that ‘all is math’ (modern platonism) or ‘all is information’ (‘it from qubit’). But this is precisely where all the confusion starts! The ‘it from qubit’ and Tegmark multiverse ideas try to have the physical world emerging from a foundation of information/math, but it just doesn’t make sense. This had me so confused for a long long time. The fault is with the ideas! These ideas have really confused everyone and lead them seriously astray! Rejecting ‘it from qubit’ and Tegmark multiverse, I settled on the only other sensible possibility : physical and mathematical existence are mostly separate. I came to the conclusion that *both* physical and mathematical existence are co-foundations of reality, but neither is primary. One doesn’t ’emerge’ from the other. Whilst there may be some over-lap, most mathematical (abstract) objects *aren’t* physically realized. There just isn’t a single foundation of reality – there are *two* foundations. So what is the nature of ‘information in this picture’? I think it’s where physical and mathematical existence *do* over-lap. Computation is the portion of mathematics that *is* physically realized. But *not* everything physical is computation. The ‘it from qubit’ project errs in the claim that ‘everything is computation/information’. I think the mistake is to stretch the definition of ‘computation’ to the point of it being meaningless. My new view is that only goal-directed systems that form symbolic representations of reality qualify as ‘computation’ – computers, brains and minds. The *mind* is made of information. Computers and brains too. But most of physical reality is not. So, in this picture, Cognition is synonymous with information (the portion of math that is physically realized). This clarification of metaphysics should hopefully lead to a clarification of quantum mechanics and quantum computing. As to mathematics, perhaps it has a ‘dual-foundation’ as well. The main candidate for the foundation of math is ‘set theory’, but it appears to me that ‘arithmetic/number theory’ qualifies as a genuine rival. According to wikipedia, it appears that most of classical mathematics can be derived from second-order arithmetic, and sets aren’t needed. Just as mathematical/physical existence could form a dual-foundation of reality in metaphysics, what if sets/numbers form a dual-foundation for mathematics? Perhaps mathematics just doesn’t have a single foundation either. The Riemann hypothesis can be approached purely from number theory and sets dispensed with. John Baez in a recent tweet mentioned a hypothetical ‘F1’ (finite field with one element), which looks very intriguing – if such a thing existed it would completely revise abstract algebra (by dispensing with the need for sets), and lead to a solution to RH. 29. fred Says: Scott, when you write “In quantum mechanics, measure or measure not: there is no if you hadn’t measured.” How is that different from the first claim of superdeterminism that there’s really no such thing as counterfactuals, ever? 30. fred Says: Scott, what do you make of the claim in the paper that “We conclude by suggesting a modified variant of the experiment, which may be technologically feasible. The idea is to substitute agents F¯¯¯ and F by computers.” Does it mean there’s really a chance that this entire setup could be done practically? 31. 
fred Says: Andrei #27 “My take on these thought experiments involving “isolated” systems is that they are not possible, even in practice. [..]These fields are of unlimited range and cannot be completely blocked by any box, whatever the material this box is made of.” Black holes! 32. Scott Says: fred #29: The two things have nothing to do with each other. To violate the Bell inequality doesn’t actually require any counterfacual reasoning, involving “if I had measured this state in this basis” (even though I didn’t). All it requires is repeating the Bell experiment many times, while randomly varying the measurement bases. To deny the possibility of doing that requires denying that it’s ever possible to make any random choices at all, which is much much crazier than anything we’re talking about here. 33. Scott Says: fred #30: Yes, you could even do the experiment today, not with conscious beings in superposition but at least with qubits (i.e., the original Hardy’s Paradox). But even if you were able to do the experiment with conscious beings, it would still tell you nothing whatsoever that you didn’t already know. All you’d find is that the measurement outcomes are exactly the ones predicted by quantum mechanics — so in particular, that the “impossible” outcome occurs with probability 1/12. But that would still leave the task of identifying the fallacious assumption in the argument for why that outcome was “impossible.” 34. Amir Says: Scott #33:You sound like a mathematician – assuming that once we have a consistent theory, experimental results will agree with it. I’m sure physicists would insist on actually doing the experiment, for instance to disprove spontaneous collapse theories. 35. Andrei Says: fred #31: I am not sure if you intended it as a joke, but from the point of view of an external observer the cat will never pass the event horrizon, so it will never be inside the “box”. It is also questionable if black holes do have an interior where one can make experiments. 36. Harry Johnston Says: @Andrei #27, as far as I’m aware the purpose of the box in the Schrödinger’s cat thought experiment is just to make it clear that the experimenter is not looking directly at the cat during the experiment. It isn’t necessary for the observer to have no way to tell whether the cat is dead or alive, so long as they don’t actually go ahead and make the necessary measurements. In other words, it isn’t access to the information that counts, it’s the information you actually choose to collect. Do you have some reason to believe differently? 37. Andrei Says: Scott #33: “To violate the Bell inequality doesn’t actually require any counterfacual reasoning, involving “if I had measured this state in this basis” (even though I didn’t). All it requires is repeating the Bell experiment many times, while randomly varying the measurement bases. To deny the possibility of doing that requires denying that it’s ever possible to make any random choices at all, which is much much crazier than anything we’re talking about here.” I think that you may find superdeterminism more acceptable if you agree with the following steps: Step 1: When thinking about classical physics forget about Newtonian billiard balls that travel in straight lines and only interact by direct collisions. Think about field theories like general relativity or classical electromagnetism. Such theories have built-in the type of contextuality required to understand QM. 
Step 2: A Bell/EPR experiment, just like any other experiment, is nothing but a particular case of charged particles interacting with other charged particles. The particle source, the detectors, the human experimenters or whatever you may use are made of such charged particles (electrons and quarks). According to classical electromagnetism these particles are continuously influencing each other’s motion as the result of their associated electric and magnetic fields. Such influence never stops because of the unlimited range of those fields. Step 3: the hidden variable (say the polarization of the entangled photons) is expected to depend on the electric and magnetic fields acting at the location of the particle source. So, if you agree on Step 2 you would accept that the hidden variable does depend on the detectors’ settings, which is all superdeterminism is about. 38. Harry Johnston Says: Scott #33, while I’m personally reasonably confident that yes, you would get the result predicted by QM, aren’t Frauchiger & Renner claiming otherwise? (I haven’t read the entire paper, but the abstract says “This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.”) … of course, I guess they’re talking about their version of the experiment, where the entangled states are agents, not the experiment that we can actually perform right now. 39. Scott Says: Amir #34: No, it would be batshit insane to believe that “once we have a consistent theory, experimental results will agree with it”—since there are many consistent theories that disagree with each other (as any ‘mathematician’ surely understands…!). But we’re not talking here about some arbitrary theory that happens to be consistent—we’re talking about quantum mechanics, about which any proposed experimental test has to be evaluated for novelty in light of a century of previous tests (every single one of which QM has passed). Googling it just now, there’s indeed a whole literature on experimental tests of Hardy’s paradox, reporting exactly the results (Pr=1/12, in the example in my post) that QM predicts. So, would an experimental test of Frauchiger-Renner need to check what happened if we replaced the qubits by conscious observers? If so, how would we know when they were conscious? Or would it be enough to replace the qubits by AI’s? If so, then why couldn’t a qubit already count as an “AI” for our purposes—extremely rudimentary, of course, but recording the relevant information and therefore good enough to test the prediction? But if so, then the experiments on Hardy’s paradox have already tested Frauchiger-Renner, and shown that the results are just the ones predicted by QM. 40. Scott Says: Andrei #37: No, superdeterminism is still crazy. Of course humans and their detectors are made of the same charged and uncharged particles as everything else. But science has only ever worked because it’s possible to isolate some things in the universe from other things—not perfectly, but well enough. And this is completely uncontroversial in cases other than the Bell inequality, which shows that the superdeterminists aren’t consistent about their own theory. E.g., a political poll of 1000 households reveals 48% support for a certain candidate, plus or minus 5%. 
Why isn’t anyone free to believe that the real number is actually 1%, or 99%, because the pollster’s auto-dialer is also made of subatomic particles governed by physics, so maybe it was predetermined since the Big Bang that the dialer would overwhelmingly pick the numbers of the candidate’s supporters (or opponents)? I’ll tell you why: because, absent some positive reason to believe it and some account of how it happened, that would be stupid! Yet with the Bell inequality, and only with the Bell inequality, we’re asked to believe that a cosmic conspiracy exactly like the above one is in force—and, the craziest part of all, this conspiracy does not lead to faster-than-light communication, or any of the other world-changing effects that it could just as easily lead to and that we might expect, but only to the precise results that QM already predicted for us without any need for such a conspiracy, like winning the CHSH game 85% of the time (and not 86%). Occam’s Razor yearns to slice. 41. Scott Says: Harry Johnston #38: OK, point taken. But if one has any experience with experiments in the foundations of QM, one knows full well what’s going to happen next. Namely: some experimental group will do a slightly souped-up test of Hardy’s Paradox, of course getting just the results that QM predicts, and will then market it in Science or Nature as “the first experimental probe of the logical contradiction at the heart of QM … who could’ve imagined that the ‘impossible’ outcome would occur with probability 1/12?” And then the science journalists will wet themselves with excitement. It’s all nearly as predictable as QM itself! 🙂 42. Andrei Says: Scott: #40 “science has only ever worked because it’s possible to isolate some things in the universe from other things—not perfectly, but well enough.” There are two main reasons why it is possible to treat a subsystem (say the Solar system) of a large system (our galaxy) in isolation. 1. Physics is local. The state of a subsystem is completely described by a specification of positions and velocities of particles and the field magnitudes. But of course, this does not imply that the subsystem is independent from the large system because the local fields are themselves a function of position/momenta of all particles in the whole system. 2. If there is a large distance between the subsystem and the rest of the particles one can ignore, to some extent the fields produced by them. However, it is important to notice that this approximation only works if you are only interested in the relative motions of the particles inside the subsystem. For example you can ignore the galactic field if you want to calculate the trajectory of the Earth around the Sun. But if you are interested in the relative motion of the Earth and some planet in another region of the galaxy you need to know the galactic field. Assuming that the two distant planets move independently would lead to false predictions as there is no good reason why they should orbit the galactic center. In a Bell test we are in a similar situation as two distant planets in a galaxy. We are not interested in the internal evolution of each detector and of the particle source but in their relative evolution. We are interested in the correlations of distant measurements. So, in this case, ignoring the mutual EM interactions leads to wrong predictions. ” it’s possible to isolate some things in the universe from other things—not perfectly, but well enough. 
And this is completely uncontroversial in cases other than the Bell inequality, which shows that the superdeterminists aren’t consistent about their own theory.” The choice one makes does depend a lot on the available options. There is no need to appeal to superdeterminism in the case of the political poll because we can explain those results within the limits of accepted physics. There is nothing surprising about them. On the contrary, Bell’s theorem gives us only difficult options, like: 1. Non-realism. If you agree with non-realism, will you be consistent enough to apply this non-realist view to the political poll? 2. Non-locality. If you agree with non-locality, will you be consistent enough to apply this non-local view to the political poll? The purpose of any interpretation of QM is to recover QM’s predictions. So, unless you think there is a conflict between QM and the political poll, there is no reason to expect a superdeterminist to have a different take on it as opposed to a Bohmian or QBist. 43. John Sidles Says: Scott remarks “If one has any experience with experiments in the foundations of QM, one knows full well what’s going to happen next …” This sentence has many possible completions, of which the following completion is suggested as consonant with the traditional excellences of Shtetl Optimized discourse: … scientists will employ ingenious new theoretical insights, and ingenious new experimental techniques, to add more decimal places to measurements of the fine structure constant α. This particular continuation is motivated by the extraordinary success of ongoing efforts to measure α within ever-smaller error bounds. The theoretical, experimental, and social reasons for this α-success brightly illuminate (for me anyway) two recent quantum computing preprints, namely, “How many qubits are needed for quantum computational supremacy?” (arXiv:1805.05224v2, mainly out of MIT/CalTech) and “Fluctuations of energy-relaxation times in superconducting qubits” (arXiv:1809.01043v1, mainly by the Google/Martinis group). In brief, the two chief technical paths to higher-precision measurements of α directly parallel the two chief technical paths to demonstrating quantum supremacy. The first path emphasizes coherent quantum dynamics, as exemplified by “g-2”/“geonium” experiments (e.g., arXiv:0801.1134), and by low-noise qubit arrays (e.g., arXiv:1709.06678v1). The second path emphasizes error-corrected/topologically-protected quantum dynamics, as exemplified by quantum metrology triangles (QMTs, e.g., arXiv:1204.6500), and by proposals for scalable quantum error correction (e.g., arXiv:1801.00862). In a nutshell, a better appreciation of the realities of α-measurement techniques is helpful in evolving better-informed views regarding the scalable viability of the extended Church-Turing Thesis, versus the scalable feasibility of Quantum Supremacy demonstrations. —— PS: Michael Atiyah’s recently proposed α-theory inexplicably considers none of these subtly interlocking quantum electrodynamical issues … which is one more overarching reason to agree with the Shtetl Optimized editorial policy, that Atiyah’s theory probably is not right. 44. Ted Says: Scott’s argument at the end of the main post (emphasizing the importance of distinguishing between a human experimenter’s thoughts in situations in which certain experiments are or are not performed) reminds me a lot of Guy Blaylock’s article “The EPR paradox, Bell’s inequality, and the question of locality” (Am. J. Phys. 78, 111 (2009), arXiv:0902.3827).
Blaylock argues that the real takeaway of Bell’s theorem isn’t the failure of local realism in quantum mechanics, as is often claimed, but actually the failure of counterfactual definiteness – e.g. he claims that Many-Worlds completely respects local realism. (To clarify, he defines “causality” to mean “no FTL communication” and “locality” as the stronger requirement “no establishment of spacelike correlations, whether or not they can be used for communication”. In this terminology, all interpretations of QM are causal, but not all are local – e.g. Many-Worlds is but Copenhagen isn’t.) 45. ppnl Says: Harry Johnston @36 ” … as far as I’m aware the purpose of the box in the Schrödinger’s cat thought experiment is just to make it clear that the experimenter is not looking directly at the cat… ” No that’s just wrong. The whole point of Schrödinger’s cat is to try to isolate when a measurement is made. Say you put the cat in a box and watch it by closed circuit TV. Well you are watching so the wave collapses right? OK try again but you do not look at the TV but only record it so that you can watch it later. Wave collapse? Ok what if you chop up the cd containing the recording? Well you could in principle reassemble the cd. So let’s try burning it. In a deep quantum sense the information is still out and so in theory the quantum wave must collapse. Think black hole information paradox where not even feeding the disk to a black hole destroys the information. Now think of all the other ways information could leak out of the box. Do you smell decomp? How about the sound of the cat’s heart beating? What is that scratching noise? What about body heat radiated that you cannot see but affects the air around the box? No, the whole thing makes no sense unless you are talking about total isolation in the box. The difficulty of making quantum computers is exactly the difficulty of building Schrödinger’s cat type boxes around every logic gate in a computer. And then allowing the boxes to interact with each other without interacting with the rest of the world. If it were just about not directly looking quantum computers would be easy. 46. AM Says: Scott, Just by looking at Mochizuki’s web page: http://www.kurims.kyoto-u.ac.jp/~motizuki/IUTch-discussions-2018-03.html 1) Scholze and Stix: “We, the authors of this note, came to the conclusion that there is no proof”. Well, don’t expect to see something like: “Let’s assume Mochizuki’s Corollary 3.12 holds true, then by the above argument XYZ it leads to a contradiction.” In fact, in their write-up you won’t find even a single theorem/lemma by the authors! 2) Nevertheless, Mochizuki does a good job addressing (in a VERY accessible way!) their imprecise arguments. Scott, you always present yourself as a careful and polite thinker, but it seems unfair to judge a mathematical breakthrough based on a popular science article. In Ivan Fesenko’s words: “[It] almost entirely consists of ignorant opinions of a small group of closely related people who have been active in negatively talking about IUT publicly but have their research track record in the subject area empty and are not known to have applied serious efforts to study even anabelian geometry.” Thanks. Alex 47. Lorenzo Maccone Says: Hi Scott, thanks for you blog post which is really nice! The connection to Hardy’s paradox is nontrivial but explains very neatly the argument. Thanks! (Please keep up your blog) 48. 
Renato Renner Says: Hi Scott There are currently many who are blogging about our result, so I started to use my little quantum random number generator on my desk: outcome “0” means I don’t react to it, outcome “1” means I write a reply. For your blog the outcome was “1”! Now, we are already in a situation that involves superposed agents, namely me who wrote this reply and you who are reading it. (I am assuming that you do not believe in objective wavefunction collapses and hence accept assumption Q of our paper.) You would now probably say that this does not prevent us from reasoning as usual, but that we would be getting in trouble if our brains were subject to measurements in the Hadamard basis. So far I would definitively agree. And I would certainly subscribe the claim: “It is hard (if not impossible) to think after someone has Hadamarded your brain.” But this brings me to the core of my reply. Note the small difference between my claim and your title. While I agree that it is hard to think *after* someone has Hadamarded your brain, I do not see any reason to deny that we can think *before* the Hadamarding. Talking more technically, the reason why, as you noted, I do not endorse your scenario (the “Wigner’s friendification of Hardy’s paradox” or maybe the “Hardyfication of Wigner’s paradox”) is that it neglects a key element: Your simplified argument completely ignores the timing. But, clearly, it makes a difference whether I think before or after my brain is Hadamarded. In our argument, we were therefore careful to ensure that, whenever one agent talks about the conclusions drawn by another agent, he does so *before* any Hadamarding. This should be apparent from Table 3 of our paper, which essentially summarises our entire argument. Take, for example, the reasoning by agent \bar{W}. He reasons around time 0:23 about the conclusions drawn at time 0:14 by agent F. The key fact to notice here is that both relevant agents, i.e., F and \bar{W}, are in a similar situation as we are (hopefully) now when reading this text. While, from an outside viewpoint, they may be in a superposition state, no Hadamard has been applied to them. The only way I can hence make sense of your claim that we are using an additional implicit assumption in our argument (the chaining of statements) is that you are questioning the step that, in Table 3, corresponds to going from the third column to the fourth (the “further implied statement”). Did I get this right? (All the other steps are explicitly covered by our three assumptions, Q, C, and S.) Before concluding, and since you mentioned this several times in your blog, let me stress that “consciousness” does not play any role in our argument. The agents may as well be computers, which are programmed with rules corresponding to our assumptions Q, C, and S, and which they use for their reasoning (summarised in Table 3). So, when we talk about “agents”, we just mean “users” of quantum theory. After all, the question we are asking is: “Can quantum theory be used to consistently describe users of the same theory?” This question has little to do with consciousness (which is why we tried to avoid this term). 49. Harry Johnston Says: @ppnl #45, I’m not entirely sure whether we’re disagreeing about the physics or just the philosophy around it. But see the Quantum Eraser Experiment. 50. DarMM Says: If \bar{F} gets tails, then he knows (given the state he prepares) that the L lab will evolve into the fail state. Thus W will measure fail definitely. 
If agent F measures z=+1/2, F can conclude that \bar{F} knows r=tails. Already here I’m a bit confused. F himself would think that, since he sees z=+1/2, he and his lab are not in a determined state in the {fail, okay} basis. From that he would think W could get either fail or okay. However, since z=+1/2 => r=tails, he could reason that \bar{F} is certain W will get the fail result. However, he would know that \bar{F}’s conclusions about his lab result from \bar{F} reasoning about him using a superposed state. Is there not already a contradiction at this point between F and \bar{F}’s reasoning? F would reach one conclusion about W based purely on his own z=+1/2 result and a different one when reasoning via \bar{F}’s superposed treatment of his lab. Or (more likely) am I missing something? 51. Scott Says: Harry Johnston: ppnl is 100% correct. The purpose of the box, in the Schrödinger’s cat experiment, is to isolate the cat from the entire rest of the universe, not merely from some particular observer. Any interaction with the environment could have the effect of entangling the cat with the environment, and thereby changing the local state of the cat from a pure state (i.e., a superposition) to a mixed state (i.e., either dead OR alive), which is a completely different situation. 52. fred Says: Scott #40 “science has only ever worked because it’s possible to isolate some things in the universe from other things—not perfectly, but well enough.” Well, I guess it also depends on what we mean by “works”. In the case of gravitational mechanics, isolation is difficult. Even in the simple case of the three-body problem, there’s no closed solution, and predictions become hard because of numerical instabilities (chaos). Another example is accounting for the effect of an electron on itself. 53. fred Says: I have a dumb question for anyone who understands black holes and the holographic principle. Assuming a black hole forms from a collapsing star, is all the material of the star crossing its own event horizon as the collapse progresses? What about the very first few particles (or clumps of such particles) where the collapse initiates? Aren’t they always inside the black hole? But if so, how would their information ever end up encoded on the BH surface? 54. fred Says: Btw, those sorts of difficulties related to the “everything is connected to everything else” problem (like chicken-and-egg issues between fields and particles) or to “an infinite amount of effects need to be accounted for” (like when summing all the possible paths in Feynman QED)… make me really question the general claim that the universe is so “elegantly” mathematical. Either there’s some clever type of mathematics we’ve not discovered yet, or the physical universe basically has infinite resources (aka magic)… or we need to understand better what’s going on at the Planck level. 55. sf Says: Dear Scott, Thanks a million for this extremely clear, intuitive explanation. On the one hand you make perfectly clear what was not kosher with the paradox, but, on the other, one has to worry whether we have enough defences to be able to catch other potential traps on the fly, before we fall into them. It often seems one has to get to the absurd punch line each time, before backtracking and spotting the flaw. It seems to me that if one wants to formalize the Born rule a bit, one of the rules would have to say that you can apply it by repeating an experiment and counting outcomes, but there’s no messing around with the internals of the experiment during repetitions.
The ‘measurement’ involves the input set-up or preparation of a quantum state, just as much as it involves the meter reading at the output. The references to ‘knowing’ what Alice et al know are trying to get around this Bohr-type censorship of “messing around with the internals”. The only admissible ‘knowledge’ should be in reference to states where everything in the system has collapsed. If this doesn’t change anything about our view of QM, it could provide for some subtleties about what ‘knowledge’ can mean in a QM world. In fact, maybe ‘knowledge’ should be grounded in notions of prediction, and repeated experiments. One issue I wonder about: is it worth trying to formalize QM ‘measurement’ in terms of a Turing machine which does the ‘measurement’ and counts outputs in a series of trials? I.e., the ‘measurement process’ should consist of more than just a one-shot experiment; something more like an ensemble (over time). Is it necessary or useful to have the series of repetitions defined, as part of the context for Born’s rule? I guess this is trying to make the notion of probability more operational/empirical. Turing-like ‘states’ are somehow conceptually very different from QM states; they are in some sense designed, with a way to control them, and so that they recur ‘as desired’. If this doesn’t relate directly to consciousness, it can illustrate how notions like ‘access’ and grounding (aka embodiment) have to be involved, to have a sensible discussion of any notion of consciousness. 56. Sniffnoy Says: AM #46: That’s not how refuting a proof works? Scholze and Stix aren’t claiming Corollary 3.12 is false, they’re claiming that its proof is invalid. You don’t show a proof is invalid with a proof of your own, you do it by pointing out the error. Yes, obviously if someone has made a false claim, then proving the falsity of their claim is a good thing to do, but it’s a fundamentally separate thing to do, in that it doesn’t tell you where the error is. (And the error could be in your own proof. Or, technically, in neither, because there’s an inconsistency in mathematics.) In short, writing proofs and refuting proofs are not the same sort of thing, so it does not make sense to complain that, in their refutation, Scholze and Stix do not include a proof of their own. 57. Scott Says: Renato #48: Thanks; I’m glad that my blog post was one of the lucky ones to earn a reply from you! 🙂 I acknowledge how much care and attention you devote in your paper to the issue of timing. But I contend that, no matter how we formalize the statements in question, and what it means for the agents to “know” the statements, there will be some place where we illegitimately cross the temporal Rubicon between before and after Charlie’s brain gets scrambled by a measurement in the Hadamard basis. Somewhat flippantly, I might say: we know this must be the case, because the end result contradicts the predictions of QM! 🙂 More seriously: at two nearby stages of (my version of) your argument, we conclude that Diane’s brain is in the state |1⟩, and then that Diane’s brain is in the state |+⟩.
So, I can isolate where I get off your train to somewhere between the former statement and the latter one… Incidentally, point taken about the word “consciousness.” But that leads to an interesting question: you say it’s not important if Charlie and Diane are “conscious”; all that matters is whether they’re “agents using quantum mechanics.” But if so, then couldn’t we treat even a single qubit as a “QM-using agent,” in the same sense that one qubit could be said to “measure” another qubit when they’re entangled? In that case, would you agree that the experimental tests of Hardy’s Paradox have already tested your paradox as well? 58. Job Says: What would be the consequences of this result? Does it conflict with Quantum Computing in any way? Is that why Scott found it both interesting and annoying? 59. fred Says: mjgeddes #28 “My new view is that only goal-directed systems that form symbolic representations of reality qualify as ‘computation’ – computers, brains and minds. The *mind* is made of information. Computers and brains too. But most of physical reality is not.” We can be more specific by noting that “high level” properties of physical systems are the basis of the symbols. By “high level” I mean from a statistical mechanics point of view, such as temperature, shapes, etc … of big clusters of atoms. “Symbolic” means that we see the spontaneous appearance of small and stable systems with microstates that are extremely sensitive to the macro states of some much larger and/or distant systems. E.g. a thermostat is a small system where a few atoms are very sensitive to the average temperature in the room it’s in. Similarly, a few neurons in the brain of a cosmologist are very sensitive to the shapes of very distant and gigantic clumps of atoms (galaxies). This “sensitivity” can also be interpreted as an isomorphism between the properties of two systems of very different size (a massive reduction is going on). The micro states of the small systems are also somewhat robust to their own internal noise/randomness. Like, all PCs running a given piece of software have the same values in their registers, even though they’re all different at the atomic level. Resilience to QM randomness is the main property of a “computation”. But this picture is not enough to understand one thing: Information is relative – e.g. there’s no single answer to the question “how many circles are there in this room?”, or, if we look at an arbitrary system of atoms, we can’t answer questions like “how much software is running in there?”. It’s the same difficulty “information integration” theories are running into. They try to extract objective/universal information metrics from systems, but you just can’t do it, because information is relative (the same thing is pointed out by Kolmogorov complexity metrics). In other words, we can only consider/recognize the dictionary of symbols in ourselves and in all our human artifacts, but we can’t necessarily recognize the existence of such mappings in external systems (who’s to say that a city isn’t conscious?). Another way to see this difficulty is to note that a dictionary (whether an actual book with definitions of words, or an actual brain with connections between all its stored symbols) is really a collection of circular relationships. The definition of every single word/symbol relies on other words/symbols. If it’s all circular, how does it get bootstrapped? Yet, we, as conscious beings, do experience a somewhat grounded/specific interpretation of our world.
What’s missing in the scientific approach to understanding emergence of consciousness (like information integration) is that they fail to recognize the existence of implicit symbols that are irreducible and beyond their reach, those symbols cannot be expressed in terms of words or broken down by reductionism, they are simply beyond the reach of the scientific method. So the dictionary of our mind isn’t all circular but ends up in basic symbols that are either the content of consciousness or consciousness itself. Like “blue” or “pain”… not the words or sounds of the words, but the bottom experience of “blue” that is instantly recognizable to us, and no amount of extra physical facts about it (blue is this wavelength, it excites certain cells in the eyes, …) is ever going to add anything to the bottom mystery that is experiencing blue. 60. fred Says: Scott #57 “in the same sense that one qubit could be said to “measure” another qubit when they’re entangled?” Isn’t this a bit like reducing something subtle like the halting problem (about the power of Turing Complete machines running other TC machines) to noting that a billiard ball hitting a couple other billiard balls is some sort of classical computing operation too, so it should be enough to cover everything? 😛 61. Ilya Shpitser Says: It was great chatting with you, Scott! 62. Harry Johnston Says: Scott #51, I’ll take your word for it, I guess. But we can perform a two-slit experiment with electrons, right? And we still get an interference pattern despite the fact that the state of the electromagnetic field, if measured sufficiently accurately, would allow us to determine which slit the electron went through. I don’t see how the Schrödinger’s cat experiment is any different. 63. Scott Says: Harry Johnston #62: The dilemma you point out also confused me when I was first learning the subject. It has an actual resolution. Yes, you can do the double-slit experiment with an electron, and yes, that temporarily sets up a superposition over two different configurations of the electromagnetic field. However, that does not mean that any record gets created in the external world about which slit the electron went through. Indeed, if superposing the electron suddenly created records arbitrarily far away in the universe, that would be faster-than-light communication! Rather, the differing field configurations mean only that there’s the potential for a record to be created—if, for example, we put a charged particle in the field, the closer the better, and a record is created of its displacement. The interaction between the superposed electron and the charged particle would be mediated by an exchange of virtual photons, which has some amplitude to happen and some amplitude not to happen. If we succeed in observing interference between the two different paths of the electron, then that very fact tells us that the total amplitude for all the Feynman diagrams that would’ve led to an external record being created was small. And again: if maintaining a system in superposition were as easy as personally forgetting its state, then building a scalable quantum computer would be child’s play. 64. mjgeddes Says: fred #59, Yes, I agree with your first couple of paragraphs. Information processing is an emergent property and anyone who thinks reality at base is composed of a string of 0s or 1s (or the qubit generalization) is out to lunch 😉 I don’t think consciousness and the nature of symbols is ‘beyond the ‘scientific method’ though. 
It’s just that a full understanding needs to go beyond physics to the world of mathematics. Mathematics, it seems to me, is precisely all about how knowledge is represented or ‘coded’. It does indeed seem that one needs to begin with some irreducible ‘base codes’ or ‘base representations’ if one wants to grasp how minds work. Physical states alone can’t provide an understanding of that. That’s why I’m a mathematical realist – I ascribe actual reality to abstract objects – I think they exist ‘out there’ and are not just a language we use or invent. The combined might of physics *and* mathematics, I believe, should be enough to provide a full explanation of consciousness. Draw a 2-d graph with ‘mathematical existence’ along the x-axis, and ‘physical existence’ along the y-axis. I think these two types of existence are orthogonal in the sense that you can’t reduce one to the other or dispense with either if you want a full explanation of reality. You need both. Now physical existence is all about the structural properties of things: particles , fields (inc forces) and space-time. Mathematical existence is about abstract patterns, or how knowledge is represented: sets, numbers and functions. One could consider that each has it’s own ‘arrow of time’: physics time (physics) and logical time(mathematics). Then the graph shows the progression of logical time (x-axis), and physics time (y-axis). Both ‘arrows of time’ can be unified by the concept of ‘complexity’. ‘Physics time’ is about the complexity of the physical states of a system, whereas ‘logical time’ is about the complexity of ‘mathematical proofs’. Then I can define cognition (inc. mind, ‘computation’ and ‘information’) as a composite (emergent) property built up from both physics time and logical time. Thinking of the graph as ‘the dimensions of existence’, then in a real sense, one can say that a new ‘layer’ of reality is being generated as physics time and logical time ‘progress’. The base layers are physical and mathematical existence, and cognition (inc. consciousness) is the emerging new layer! Think of cognition (inc. consciousness) as the ‘high entropy’ state of existence. Both physics and logical time ‘point’ towards the emergence of cognition. 65. Renato Renner Says: I also felt lucky when I saw that my quantum random number generator chose your blog. 🙂 But now, after rethinking the consequences that future Hadamards can have, at least according to your interpretation, I am afraid that the result of my random number generator may not even exist … More seriously: I would expect that anyone who claims that our argument is flawed should be able to localise the flaw. So, here is the challenge: Read Table 3 (in the order of increasing superscripts, which specify at what time they are made) and identify the first entry you disagree with. This should be a rather easy task: each entry corresponds to a well-defined statement that you (if you were an agent taking part in the experiment and had observed the outcome indicated in the second column, labelled “assumed observation”) should either be willing to make or not. Having said this, I would of course never try to impose homework on you, Scott. 😉 Therefore, starting from your analysis of your simplified “Alice-Bob-Charlie-Diane” argument, I tried myself to reverse-engineer what you would say about our original thought experiment. This reverse-engineering is certainly not unique (partially because you dropped all timing information). 
However, I found that the only statement to which your concern that we “illegitimately cross the temporal Rubicon” may apply, at least remotely, is the very first of the table, i.e., \bar{F}^{n:02}, for it relates an observation at time n:01 to an observation at time n:31. But my conclusion would then be that you just disagree with Assumption Q (which would of course be fine). Unrelated to this: Your question about experimental tests is indeed an interesting one. I’ll comment on it later (to avoid making this comment even longer). 66. sf Says: It may also be interesting to consider the Frauchiger – Renner paper in terms of Scott’s definition of “State”: In physics, math, and computer science, the state of a system is… https://www.edge.org/response-detail/27127 which subtly gets around a lot of non-trivial technical difficulties. There is a problem coming from ‘knowing’ being both a physical fact about brains (in QM here), and at the same time a property of observers, more associated with their classical features. The definition of “State” that Scott suggests requires one to choose a minimal description, eliminating redundancies, but also avoiding potential internal conflicts. This said, the notions of “State” that are used in physics and computer science are not necessarily (or a priori?) compatible; at the least there’s some coarse graining to go from the continuums of physics to the discrete world of computer science, which may involve some quotienting operation, or an additional notion of identity/equivalence. Giving more global entities some kind of ontological status then seems to cause problems, like in the mind-body problem. The point may be that the equivalence relation should have some physical basis, but it’s usually regarded as some abstract construct of a meta-theory. 67. asdf Says: Neel #14: “That is, can we model both an observer and an experiment as a larger, interacting quantum system, and then derive the Born rule for the observer from an analysis of the joint system?” I think that is what decoherence theory set out to do, unsuccessfully though. https://plato.stanford.edu/entries/qm-decoherence/#ConApp 68. ppnl Says: asdf #67: ” The measurement problem, in a nutshell, runs as follows. Quantum mechanical systems are described by wave-like mathematical objects (vectors) of which sums (superpositions) can be formed (see the entry on quantum mechanics). Time evolution (the Schrödinger equation) preserves such sums. Thus, if a quantum mechanical system (say, an electron) is described by a superposition of two given states, say, spin in x-direction equal +1/2 and spin in x-direction equal -1/2, and we let it interact with a measuring apparatus that couples to these states, the final quantum state of the composite will be a sum of two components, one in which the apparatus has coupled to (has registered) x-spin = +1/2, and one in which the apparatus has coupled to (has registered) x-spin = -1/2. The problem is that, while we may accept the idea of microscopic systems being described by such sums, the meaning of such a sum for the (composite of electron and) apparatus is not immediately obvious. ” Well no I think it is pretty obvious. Yes the measuring apparatus can be seen as being in a superposition of states after the measurement. But if the measuring apparatus is in contact with the rest of the universe then it is in a decohered state. That means any observer also in contact with the rest of the universe will see it as simply a classical object in a classical state. 
There is no way to tell the difference between decoherence and wave collapse even in principle so they are effectively the same thing. In a weird way we can do away with wave collapse entirely and simply see it as a consequence of decoherence. Any macroscopic object looking out at the universe be it human or measuring apparatus will see a decohered universe. That means it will seem to have a coherent past and follow largely classical rules. Another way to look at it is all the order in the universe is composed of a vast pattern of superposed states as viewed from inside that superposition. The superposed cat knows very well if he is alive or dead if you ask him. But you have to be in the box in order to ask. Lemme see if I got this… State |ψ>, and Alice and Bob will measure the first and second qubits of this state in the basis {+,-}… There are three components to the state |ψ>, and in them: 1. |00>: If Alice were to measure in {0,1} and Bob were to measure in {+,-}, Bob would measure |+>. If Bob were to measure in {0,1} and Alice were to measure in {+,-}, Alice would measure |+>. 2. |01>: If Alice were to measure in {0,1} and Bob were to measure in {+,-}, Bob would measure |+>. If Bob were to measure in {0,1} and Alice were to measure in {+,-}, it is uncertain what Alice would measure. 3. |10>: If Alice were to measure in {0,1} and Bob were to measure in {+,-}, it is uncertain what Bob would measure. If Bob were to measure in {0,1} and Alice were to measure in {+,-}, Alice would measure |+>. Frauchiger and Renner then turn these counterfactual subjunctive “were to measure in {0,1}”s into actual measurements by having Charlie and Dianne do their own measurements in the {0,1} basis on the first and second qubits, branching the universe into (1), (2), and (3). 1. In this branch Charlie knows that Bob will measure |+> were he to get around to measuring before decoherence of the second qubit takes place, and Dianne knows that Alice will measure |+> were she to get around to measuring before decoherence of the first qubit takes place. 2. In this branch Charlie knows that Bob will measure |+> were he to get around to measuring before decoherence of the second qubit takes place. 3. In this branch Dianne knows that Alice will measure |+> were she to get around to measuring before decoherence of the first qubit takes place. In each branch, Charlie and Dianne write down, respectively, “I have measured qubit 1 in the {0,1} basis, and I may know that Bob will measure |+>” and “I have measured qubit 2 in the {0,1} basis, and I may know that Alice will measure |+>”. Frauchiger and Renner then apply the fact that Charlie and Dianne have measured and the principle of the excluded middle to conclude that it is logically impossible—no matter how the branching has taken place—for both Alice and Bob to simultaneously measure |->. If we then opened the boxes and decohered Charlie and Dianne, we would be done. But… Frauchiger and Renner then apply quantum erasers to Charlie and Dianne. The quantum eraser leaves their “I have measured…” messages intact and visible. But the quantum erasers recombines the branches and the restored coherent state is still (or again?) |ψ>. And then when Alice and Bob do their measurements in the {+,-} basis, 1/12 of the time we find |- -> Is that what is going on here? And is the point that either the principle of the excluded middle or the standard use of the subjunctive must fail for QM to be true? 70. 
ppnl Says: Harry Johnston #62 ” But we can perform a two-slit experiment with electrons, right? And we still get an interference pattern despite the fact that the state of the electromagnetic field, if measured sufficiently accurately, would allow us to determine which slit the electron went through. I don’t see how the Schrödinger’s cat experiment is any different. ” Well a dead cat produces a smell of decomp. We could measure that smell and thus know if the cat is dead or alive right? Except the smell is locked inside the box and no information can pass through the walls of the box. The electron has an electric field yes. But that field is locked in a type of box with the electron. That box is created by the distance between the electron and anything that that field could interact with. Remember the field is very weak and the electron is traveling very fast. The chances for any particular electron interacting with anything is small. If you do put a detector close enough to the electron to measure the field then its wave will collapse. You have opened the box. And it will collapse even if you never look at the detector. It just being there is enough to open the box. 71. Schmelzer Says: Andrei #42: There would be no problem to accept “non-locality”, given that it is only non-Einstein-causality, and perfectly local theories with maximal speed of information transfer of 1.001c have to be named “non-local”. Moreover, the “non-locality” is a well-defined one, described by explicit equations which we already use (namely the formula for the Bohmian velocity, which appears as the velocity in the continuity equation for the probability flow which is one half of the Schroedinger equation). Essentially all one has to do to preserve realism and causality is to go back to the Lorentz ether interpretation of relativity. It has a preferred frame, which can be used to make faster than light causal influences compatible with classical causality. The generalization of the Lorentz ether to gravity is simple: Consider the Einstein equations of GR (or the equations of whatever covariant metric theory of gravity you prefer) in harmonic coordinates, and interpret the harmonic condition as continuity and Euler equations for the Lorentz ether. Instead, superdeterminism is much worse. arxiv:1712.04334 looks at it also from the point of view of Bayesian probability – which, following Jaynes, is simply the logic of plausible reasoning. There, the superdeterminism loophole does not even exist: Once we have no information about, say, how a dice is faked, we have to assume equal probability to all outcomes. This is sufficient to rule out superdeterminism by the way too. Not as a hypothesis about reality (real dices may be faked) but as a consequence of the fact that we have no information about this big conspiracy we cannot take it into account in plausible reasoning. 72. Harry Johnston Says: OK, I’ve read the paper, and there’s something I don’t understand: from the point of view of W and W¯, doesn’t L become entangled with L¯ when F measures the spin S? And doesn’t that mean that when W¯ makes their measurement, it affects not only the state of L¯ but also the state of L? F¯ doesn’t seem to me to be taking that into account when predicting w. 
(I’d also note that the measurement W¯ makes is forcing L¯ into a particular superposition of the F¯-measured-heads and the F¯-measured-tails eigenstates, even if it was originally in one or the other, which is effectively equivalent to winding back that measurement and therefore presumably thermodynamically impossible. But does that depend on your interpretation of what a measurement is?) [@ppnl: yes, agreed. One point I’d overlooked was that an electron moving in a straight line doesn’t produce electromagnetic radiation. But I also hadn’t thought the QM math through properly, as was obvious once Scott pointed it out. Careless of me. Sorry.] 73. Jochen Says: It seems to me that Scott raises a valid question. If I understand correctly, then one could also think about this as follows: so suppose \bar{W} surmises that the state of the lab L is |1/2>, i.e., makes the statement “I am certain that F knows that z=+1/2 at time n:11.” This statement is equivalent to saying “If I were to measure L, I would obtain |1/2> with certainty.” Before \bar{W} makes any measurement, we have a state with components |h>|-1/2>, |t>|-1/2>, and |t>|1/2>, where |h> and |t> refer to the states of Lab 1, including the knowledge of F, and |1/2> and |-1/2> refer to the states of Lab 2, including the knowledge of \bar{F}. If we rewrite this in the {|OK>,|Fail>}-basis of \bar{W}, we get something with components |OK>|1/2>, |Fail>|-1/2>, and |Fail>|1/2>, right? So after \bar{W} measures and obtains |OK>, only the component |OK>|1/2> survives. Hence, if she were to measure L, she would obtain |1/2>. But if we now re-express that state in the {|h>,|t>}-basis, then we would no longer have a state in which F is certain that W would obtain w=fail, since that state now would have components |h>|1/2> and |t>|1/2>. In essence, the transformation by \bar{W} on the lab has “erased” the prior knowledge of F. So the statement \bar{W}^{n:22} is correct, in that it refers to the certainty F had at n:11; but the statement \bar{W}^{n:23} does not follow—rather, due to the knowledge obtained in her measurement, \bar{W} is now no longer certain that F would predict that W observes ‘fail’; rather, she knows that if she were to ask F now, they’d reply that they have no idea what W will observe. 74. Schmelzer Says: Renner #65: Given that dBB is quite obviously self-consistent, all one has to do is to trace what happens in dBB, and trace the “inconsistency” down to every detail. This is missed in the paper. You have vaguely discussed dBB, with the conclusion “One possibility could be to … add the rule that its predictions are only valid if an experimenter who makes the predictions keeps all relevant information stored.” Even if true: if this is sufficient to restore self-consistency, it was never violated. Let’s look at picture 3 and ask ourselves what makes Alice think that $$w\neq ok$$. It is the consideration of only a part of the experiment, namely she knows the initial state she gives to Bob, and then considers the final experiment W. This reduction is quite explicit: “While the time information contained in the plot $$s^E$$ is thus more coarse-grained than that of $$s^{F1}$$, it is still sufficient to apply the laws of quantum theory.” Fine, indeed, it is. But given that the reduced picture omits the experiment A, which changes the state of F2, the reduced scenario is simply not applicable to the full experiment.
Yes, this change of the state of F2 by the measurement of F1 by A is that spooky action at a distance which is related to entanglement, and the Bohmian trajectories of both laboratories will be surrealistic ones, but this does not make them inconsistent. The same logical error I see in the derivation is sufficient to prove a classical analog. Alice tells Bob she is communist. Bob will tell Diana what he knows about Alice’s political views. Knowing that Bob is an honest guy, Alice will surely think that Diana thinks she is communist. Everything fine, no contradiction, given that Alice at that time is communist. In the classical world, somebody who is once communist remains communist forever. Now we add the analogue of quantum theory, where Charlie can give Alice Solzhenitsyn to read, looking at her reaction to measure whether she is communist. With some probability (a sort of tunneling), she becomes anti-communist, and so Charlie thinks she is anti-communist. Alice does not talk to Bob after this, so the prediction remains: Alice will surely think that Diana thinks she is communist. So, there may be multiple outcomes about what people think about Alice being communist or not. But Bohm claims that we are in the time of the internet, and once Diana asks Bob about what Alice thinks, he has, as an honest person, to answer “she told me she is communist long ago, but sorry, wait, I will check whether this is still correct”, and then give Diana the actual information about Alice’s political views. And we have, again, a world without different people having different views about what Alice believes. Now, apply your argument to this Bohmian internet theory. You consider, for Alice, the reduced version (she tells Bob she is communist, Bob answers Diana’s question). This is a consistent story, you can apply it. Once there is no Solzhenitsyn reading in this story, Alice remains communist, and Bob will tell Diana that Alice is communist even if he checks the Bohmian internet. But this reduction gives a different result than the full story. Does it follow that the theory is somehow inconsistent? 75. sf Says: Not only do ‘we’ know, as Scott points out, “that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain”, BUT EVEN CHARLIE KNOWS THIS. The Frauchiger-Renner paper says: “We analyse the experiment from the viewpoints of the four agents, F, F’, W, and W’, who have access to different pieces of information (cf. Fig. 2). We assume, however, that all agents are aware of the entire experimental procedure as described in Box 1, and that they all employ the same theory”. (F, F’, W, and W’ are C, D, A, B here.) So Charlie and Diane even know that they are each just part of an entangled, superposed state which will soon be collapsed in a different basis. It is only when they neglect this in their collective reasoning that the paradox survives. In fact, assumption (Q) is structured to have them selectively neglect such things, but it’s not clear that this makes sense, insofar as it contradicts the quote above. We may be able to weaken the assumptions on how much C and D know, so that they can still reason coherently and unknowingly apply the Born rule to get predictions that are invalid from the viewpoint of higher-level observers, such as A, B, or us, but this is essentially coming back to Wigner’s friend.
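(A minimal numerical sketch, assuming the equal-amplitude three-component state |ψ⟩ = (|00⟩ + |01⟩ + |10⟩)/√3 that the comments above and below describe; the variable names are illustrative only. It checks the two facts the thread keeps returning to: that a joint measurement of both qubits in the {+,−} basis yields the “impossible” outcome |−−⟩ with probability 1/12, and that conditioning on one qubit gives the implications “second qubit in |1⟩ implies first qubit in |0⟩” and “first qubit in |0⟩ implies second qubit in |+⟩”.)

# Check of the three-component state discussed in this thread,
# assuming |psi> = (|00> + |01> + |10>)/sqrt(3) with equal amplitudes.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

psi = (np.kron(ket0, ket0) + np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(3)

# Born rule: probability that both qubits give "-" in the {+,-} basis.
print(abs(np.dot(np.kron(minus, minus), psi)) ** 2)        # 0.0833... = 1/12

# Project the second qubit onto |1>: only the |01> term survives,
# so the first qubit is definitely |0> (there is no |11> component).
proj_q2_1 = np.kron(np.eye(2), np.outer(ket1, ket1))
cond = proj_q2_1 @ psi
print(cond / np.linalg.norm(cond))                         # [0, 1, 0, 0], i.e. |01>

# Project the first qubit onto |0>: the second qubit is left in (|0> + |1>)/sqrt(2) = |+>.
proj_q1_0 = np.kron(np.outer(ket0, ket0), np.eye(2))
cond = proj_q1_0 @ psi
print(abs(np.dot(np.kron(ket0, plus), cond / np.linalg.norm(cond))))   # 1.0

The 1/12 is just the Born rule applied to the overlap ⟨−−|ψ⟩ = −1/(2√3); nothing about agents, knowledge, or collapse enters the arithmetic.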
Re: #75 State |ψ>, and Alice and Bob will measure the first and second qubits of this state in the basis {+,-}… There are three components to the state |ψ>, and in them: 1. |00>: If Alice were to measure in {0,1}, then Alice would know that if Bob were then to measure in {+,-}, Bob would measure |+>. If Bob were to measure in {0,1}, then Bob would know that if Alice were then to measure in {+,-}, Alice would measure |+>. 2. |01>: If Alice were to measure in {0,1}, then Alice would know that if Bob were then to measure in {+,-}, Bob would measure |+>. 3. |10>: If Bob were to measure in {0,1}, then Bob would know that if Alice were then to measure in {+,-}, Alice would measure |+>. Frauchiger and Renner then turn these counterfactual subjunctive “were to measure in {0,1}”s into actual measurements by having Charlie and Dianne do their own measurements in the {0,1} basis on the first and second qubits, branching the universe into (1), (2), and (3). 1. In this branch Charlie knows that if the wave function has collapsed—if the universe has branched—Bob would measure |+> were he to get around to measuring before decoherence of the second qubit takes place. In this branch Dianne knows that if the wave function has collapsed—if the universe has branched—Alice would measure |+> were she to get around to measuring before decoherence of the first qubit takes place. But Charlie and Dianne know that **even though they have done their measurements** they are in their boxes, and hence the wave function has not yet collapsed—the universe has not yet branched. Thus Charlie and Dianne know that when it comes time for Bob and Alice to do their measurements in {+,-}, there will be contributions not just from the $\frac{|00\rangle}{\sqrt{3}}$ component of $|\psi\rangle$, but from the $\frac{|01\rangle}{\sqrt{3}}$ and the $\frac{|10\rangle}{\sqrt{3}}$ components of $|\psi\rangle$ as well. And so they do not know that if Bob were to measure in {+,-} Bob would measure |+> and that if Alice were to measure in {+,-} Alice would measure |+>. Instead, they know that they are uncertain about what Bob and Alice will measure. They know that the facts that Charlie and Dianne know that they have obtained definite results in the {0,1} basis have (or will have had) no consequences for the true wave function, which remains the original $|\psi\rangle$, and will remain $|\psi\rangle$ until Alice and Bob do **their** measurements. 2. Similar… 3. Similar… ? And is the lesson that: (A) Many-worlds does not have a problem if agents properly understand what the branching structure of the universe will be when decoherence occurs. (B) Other approaches have a big problem, because not even conscious and certain measurement by Turing-class intelligences justifies a movement from the quantum-superposition to the classical-probabilities level of analysis. ? 77. Andrei Says: Harry Johnston #36: “@Andrei #27, as far as I’m aware the purpose of the box in the Schrödinger’s cat thought experiment is just to make it clear that the experimenter is not looking directly at the cat during the experiment. It isn’t necessary for the observer to have no way to tell whether the cat is dead or alive, so long as they don’t actually go ahead and make the necessary measurements. In other words, it isn’t access to the information that counts, it’s the information you actually choose to collect. Do you have some reason to believe differently?” Sorry, I didn’t notice your post so I am answering it only now. Let’s consider the case of a double-slit experiment.
We know that by placing a particle detector at one slit the interference pattern disappears. It does not matter if you actually notice the detector, or if you look at its output. It is the presence of the detector that changes the behavior of the incoming particles, not your knowledge. In the case of a cat in the box, I can always place a detector outside the box to determine if the cat is alive or not (by detecting its gravitational field with a very sensitive accelerometer, for example). We are still limited by the uncertainty principle, but for a macroscopic object such as a cat it is irrelevant. So, the box is useless. There is no way to place a cat in superposition by placing it in a box. 78. sf Says: > 1. |00>: If Alice were to measure in {0,1}, then Alice would know… The measurements are done on the superposition (or sum) of the three components, not separately on each component. You correctly insist on the ordering of the two measurements here, which wasn’t so clear in the previous version, #69. Jumping to the end, it’s hard to find one definitive way of looking at things here. There are many viewpoints with interesting insights. So far it’s not clear that there’s any consensus on any approach. Re #78 Touché… 80. Andrew Says: Quick question — what is the functional difference (if any) between infinite hidden variables and infinite parallel universes? Specifically, if a “complex” system inhabits a world of infinite parallel universes, _or_ an infinite number of “hidden” variables are at play in determining the state of said system, is there any real difference that would manifest itself in the math between those two “interpretations” of what’s “actually” happening beyond what we can observe? I presume the answer is “no”, otherwise we could devise a test to determine which is “true”, but perhaps I’m missing a large chunk of something. 81. Andrei Says: Schmelzer #71: “superdeterminism is much worse. arxiv:1712.04334 looks at it also from the point of view of Bayesian probability – which, following Jaynes, is simply the logic of plausible reasoning. There, the superdeterminism loophole does not even exist: Once we have no information about, say, how a dice is faked, we have to assume equal probability to all outcomes. This is sufficient to rule out superdeterminism by the way too. Not as a hypothesis about reality (real dices may be faked) but as a consequence of the fact that we have no information about this big conspiracy we cannot take it into account in plausible reasoning.” OK. Three points here. 1. I will argue that, under acceptable assumptions, Bell’s statistical independence assumption fails, so the “dice is faked”. 2. I will show that it is not reasonable to ascribe equal probability to all outcomes for a fake dice. 3. I will show that the argument presented in your paper against superdeterminism is question-begging, so it fails. 1. Let’s assume that the quantum world is actually described by a classical, local, field theory. I will use classical electromagnetism as an example. One can observe that such a theory does not allow one to split a system (say a source of entangled particles + particle detectors) into independent subsystems. The fields in any region depend on the distribution/momenta of all particles, and the trajectory of each particle depends on the local fields. In fact, for a system of point charges (and in our universe charge is indeed quantized) one can show that the fields in any infinitesimal region are unique for a certain particle distribution/momenta.
So, at least at the level of detector/source microstates, Bell’s statistical independence assumption is demonstrably false. It is possible that if all those microstates are correctly counted the statistical independence would be restored, but there is no reason to ascribe a high probability to that happening. In conclusion, we have good reasons to believe that the “dice is faked/loaded”. 2. If we know that a dice is faked/loaded, we know for sure that ascribing equal probability to all outcomes is the worst possible strategy, because it has 0 chance to succeed. That pretty much follows from the definition of the words. It is better to ascribe a high probability to any value, at random, and you have a 1/6 chance of winning. 3. As far as I understand, your rejection of superdeterminism is based on the null hypothesis: “Will A increase or decrease the probability of B? Without any information, we have no reason to prefer one of the two answers. The only rational answer is the one which makes no difference – that A is irrelevant for B, that both are independent of each other, P(A ∧ B) = P(A)P(B).” This is all nice, but the same line of reasoning can be used to reject non-local connections as well. Not only do we not know whether a measurement at one location has an instantaneous effect at another, distant location, but everything we know about physics seems to preclude such behavior. Your argument is question-begging because it ascribes a very low probability to superdeterminism (and I agree with that) but does not provide us with any reason to ascribe a higher probability to non-local connections. In fact, the situation is exactly the opposite. When confronted with a new type of correlation whose cause is not known, the most reasonable assumption is that we are in the presence of a past, even if unknown, cause, not in the presence of a non-local cause. This is the standard way to approach new phenomena in science. Occam’s razor strongly favors a mechanism that does not require new physics, just another, albeit convoluted, case of Bertlmann’s socks, over a new entity like a real wave-function which cannot even exist in our 4D spacetime but nevertheless can move particles around. 82. John Sidles Says: To avoid the need to discuss “minds”, it suffices to have the four agents of the game each certify their predictions, by placing a record of their predictions in a “Newcomb Box” (as we will call them). As usual in prediction games, once the prediction-contents of a Newcomb Box have been initialized, all participating agents promise not to alter subsequently, by quantum dynamical interactions, the contents of that Newcomb Box. We will call agents that obey these rules “Newcomb Agents”. Working through the dynamical details, and in particular, unraveling projective measurements as Lindbladian dynamical processes, resolves the paradox as follows (as it seems to me anyway): Newcomb Agents are not allowed to “Hadamard brains” (in Scott’s happy phrase), because the projective unravellings that generate Hadamarded brains necessarily include (disallowed) Hamiltonian interactions between Newcomb box-contents and measurement reservoirs. Specifically, the Newcomb Box certificates that were originally deposited by agents F and F-bar are subsequently altered by the Lindbladian dynamical processes that generate the projective measurement processes of agents W and W-bar.
Hence, with particular reference to Table 3 of the Frauchiger/Renner article, the Newcomb certificates that are associated with the deductions of the first two rows (as generated by agents F and F-bar) necessarily are dynamically altered, ex post facto and contrary to the rules of prediction games, by the Lindbladian generators of the projective observation processes of the second two rows (as imposed by agents W and W-bar). In a nutshell, by the usual rules of prediction games, agents W and W-bar are cheating. Yet their cheating method is so delightfully non-obvious (for me at least) that the Frauchiger/Renner analysis acquires the character of a magic trick; a trick that initially astounds us, and subsequently — once the mechanism of the trick is understood — brightly illuminates some of the way(s) that we humans think about predictive processes (and even predict them). In summary, a follow-up analysis, in which deductive inferences are certified using Newcomb Boxes, will conclude — as seems likely to me anyway — that "Quantum agents generally cheat at prediction games, but when everyone plays fair, no contradictions arise" 83. Renato Renner Says: Schmelzer #74: Good that you mention the de Broglie-Bohm theory (dBB). It serves as an excellent example to illustrate how our thought experiment is different from the simplified version that Scott proposed. As you write, one can indeed apply dBB to our thought experiment and trace in detail what happens. What you will find (our paper does not go into detail here, but it is rather straightforward to do the calculation) is that dBB contradicts statement \bar{F}^{n:02} of Table 3. In other words, according to dBB, the implication "r=tails ==> w=fail" is not correct. Conversely, if one applies dBB to Hardy's paradox, the result is different. Here the statement corresponding to "r=tails ==> w=fail" is always valid. So, how can it be that dBB gives a different result when applied to our thought experiment rather than to Hardy's? The reason is, roughly speaking, that Hardy's paradox is based on "counterfactual" reasoning, i.e., the different statements are established in different runs of the experiment with different measurement choices. In contrast, in our thought experiment, all measurement outcomes by all agents are obtained in one single run (and hence, in dBB, represented by the corresponding Bohmian particle positions in that single run). One can therefore reason about them without referring to counterfactuals. 84. David Thornton Says: Dr. Renner, please see the entry of 26 September in 'The Reference Frame' blog of Lubos Motl where he refutes your proof. 85. fred Says: David #84 Motl's short conclusion (in case you don't want to deal with all the profanity stuff on the blog): "Their invalid "proof" that the "Copenhagen Interpretation" requires to abandon "C" boils down to their incorrect assumption that it doesn't matter, from some observers' viewpoints, whether an observable was measured (by someone else). But the measurement of a quantity whose outcome isn't certain at the moment always changes the situation – and it changes the situation from all observers' viewpoint." 86. fred Says: So, if the "alive" version of a Schrödinger's cat that's in the isolated box happens to measure a qubit, it (the collapse) "affects" all observers outside the box, so a measurement is a "universal"/"absolute" event?
Yet this collapse of the qubit's wave function would also have to be in superposition with the case where there was no collapse (the cat is dead and couldn't measure the qubit)? Doesn't seem that obvious to me… 87. Jochen Says: Renato Renner #83: "In other words, according to dBB, the implication "r=tails ==> w=fail" is not correct." I'm not totally convinced one can make this inference in ordinary quantum theory. Sure, if \bar{F} assumes their observation collapses the state, that's OK, but it seems to me that, in order to predict W's observations, they should apply QM from W's perspective, which will in the end correctly yield that W could obtain either OK or Fail… 88. fred Says: The paper is fundamentally flawed because its assumption that one can assemble a group of agents who all agree on how to apply QM isn't a given at all… haha. 89. Renato Renner Says: David #84: Next to my random number generator that determines which blogs I should comment on (see #48) I have an even more elaborate device that tells me which ones I should not even read. 🙂 90. ppnl Says: fred #86 "So, if the "alive" version of a Schrödinger's cat that's in the isolated box happens to measure a qubit, it (the collapse) "affects" all observers outside the box, so a measurement is a "universal"/"absolute" event?" It isn't clear what precisely you are asking here. But OK, first of all the cat interacts with a qubit. It does not matter if the cat is dead or alive when it does so. In order for that interaction to affect observers outside the box, some record of it has to get outside the box, and that is exactly what the box is intended to prevent. The cat is constantly interacting with qubits inside the box. After all, the whole point is to have the cat interact with an unstable particle (qubit) in order to place itself into superposition. And wave collapse is not universal. We view the cat as being in superposition. But from the cat's point of view we may be in superposition. In order for us and the cat to agree there must be information flow across the walls of the box. Think of wave collapse as a very bad social disease. Any contact at all and you spread the cooties. The box protects the cat from our cooties. And protects us from the cat's. 91. Renato Renner Says: Jochen #87: If r=tails then, according to the protocol, agent \bar{F} must prepare the state as "spin right". The Born rule is then applied to this state. So, I do not think anything about "state collapses" needs to be assumed here. (Agent \bar{F}'s conclusion is of course only correct if one knows that she indeed saw r=tails. But this is taken into account in the chain of reasoning; see in particular the second row of Table 3, i.e., agent F's reasoning.) 92. Dandan Says: 2. Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there's no |11〉 component). 3. Whenever Charlie is in the |0〉 state, she knows that Diane is in the |+〉 state. As I see it, this is already a contradiction – Diane in the |1〉 state knows that Charlie knows that Diane is in the |+〉 state. I think the problem here is that Diane's "discovery of herself" is a measurement with the result unknown to Charlie. But if Charlie knows how Diane measures herself (that is, in the {|0〉,|1〉} basis) she can update her beliefs from |+〉 to a mixed state of |0〉 and |1〉. This mixed state is still not equal to |1〉 but this discrepancy is justifiable. Anyway, this whole thing is very "thought-provoking". Thank you for the awesome exposition! 93.
ppnl Says: Andrei #81: "Let's assume that the quantum world is actually described by a classical, local, field theory." That would mean that any supposed quantum computer would actually be a classical computer, right? If they develop large scale quantum computers that give exponential speed up over classical computers would that disprove super-determinism? #89 seems wise… 95. Jochen Says: Renato Renner #91: "So, I do not think anything about "state collapses" needs to be assumed here." Well, the thing is, \bar{F} can reason through the setup in exactly the way we can, to derive that \bar{W} will get the fail-result. And it seems to me that they must know that this is what occurs, in the same way that Wigner's original friend must know that a suitable experiment performed on their entire lab will show interference. It wouldn't do to claim that, since the friend observed a certain outcome, they must now predict the non-existence of interference in such an experiment. But then, by the same token, it seems to me that \bar{F} ought to reason that the state from the point of view of \bar{W} has components |h>|-1/2>, |t>|1/2>, and |t>|-1/2>, which after W's measurement is projected onto |OK>|1/2>, which is an equal superposition of |h>|1/2> and |t>|1/2> (see my comment #73 above). Consequently, \bar{F} shouldn't predict the fail-outcome. They should only predict 'fail' if the state is equal to |-1/2> + |1/2> to \bar{W}—but it's not, and \bar{F} knows this. 96. Jochen Says: "Well, the thing is, \bar{F} can reason through the setup in exactly the way we can, to derive that \bar{W} will get the fail-result." This was supposed to be "…may get the OK-result". 97. sf Says: With apologies to Tim Hardin, crossing over to the silly side: If I were a particle, and you were a wavelet, would you measure me any way, would you be my A-gent? Would you still find me Carryin' the charge I gave in our Feynman diagram 98. Ahron Maline Says: It seems to me that the authors of the paper have failed to include, in their list of assumptions, the crucial one: that one who observes a measured value may conclude that the state, after measurement, is an eigenstate corresponding to that value. Without this, the first line of Table 3 in the paper is clearly wrong: when agent Fbar measures r=tails, he has no right to treat the right-polarized atom he is sending as "the true state", and use that to make predictions about what W will measure. He knows perfectly well that his other self, who measured r=heads, is sending a down-polarized atom, which may affect W's result. This reasoning by Fbar only makes sense with what amounts to a collapse assumption. Of course this contradicts the assumption that QM can be applied to systems that include observers. On the other hand, Fbar is justified, without additional assumptions, to conclude that "if I ever see W's result, it will be w=fail". Making predictions from one's observations to one's own future observations must be valid in all interpretations, although the justification may vary. But here, \bar{W}'s measurement ruins the prediction: by Hadamarding Fbar's brain, he makes it impossible for any continuously-aware version of Fbar to become aware of W's result. Therefore there is no problem if W in fact measures w=ok. 99. Andrei Says: ppnl #93: "If they develop large scale quantum computers that give exponential speed up over classical computers would that disprove super-determinism?" Not at all.
The purpose of superdeterminism is to provide a local explanation for QM, not to deny QM. So, if faster computers are possible according to QM they would be just as possible according to a superdeterminist interpretation of QM. Sure, such computers would be classical computers, but they would be a different kind of classical computers because they would make use of a different classical effect, entanglement. An electric engine and a petrol engine are both classical but they need not have the same performance. 100. Schmelzer Says: Andrei #81: The objective Bayesian interpretation derives the probabilities from the available information. So, it forces us to accept 1/6 if we have no information which makes a difference between the six outcomes. So, even if we know the dice is faked, as long as we don’t know in which direction it is faked, 1/6 remains the only rational choice. Your argument (2) presupposes some reality to probabilities, following the frequency interpretation. But this is not what matters in the logic of plausible reasoning. The point there is logical consistency, and the available information. So, if you don’t use 1/6 for everything, you get a different distribution if you simply use different numbers – but your information remains unchanged, thus, your probabilities should remain unchanged too. (1) has essentially the same problem. Your proof (3) fails: “Not only we do not know if a measurement at one location has an instantaneous effect at another, distant location, but everything we know about physics seems to preclude such a behavior.” Sorry, no, we have a theorem that without such instantaneous influence we would have Bell’s inequality. 101. Schmelzer Says: Renato Renner #83: So, dBB in your version gives a consistent trajectory, without counterfactual reasoning. Where is the inconsistency, then? It appears in the reasoning of Alice thinking that w≠ok. She is reasoning about a counterfactual experiment – one where Charlie does not measure anything, and therefore does not distort Bob’s state. 102. Scott Says: Andrei #99: You’ve given the logical and correct answer to the question, but that’s different from the answer that the “chief superdeterminist,” Gerard ‘t Hooft, gives! ‘t Hooft is on record predicting that it will never be possible to build a quantum computer that outperforms a classical computer, because of the classical cellular automaton that he believes underlies Nature. On the other hand, he also believes that this CA is able to account for the Bell inequality violations, because superdeterminism! So I’ve always wondered: why doesn’t ‘t Hooft point out that superdeterminism could just as easily account for the successful running of Shor’s algorithm (as, in fact, it could account for anything whatsoever)? 103. Andrei Says: Schmelzer #100: ” So, even if we know the dice is faked, as long as we don’t know in which direction it is faked, 1/6 remains the only rational choice.” This is only true for a single run. If you throw the dice only once you could ascribe equal probability. But if you repeat the experiment many times the equal probability assumption is a certain looser for a loaded dice. Given the fact that a Bell test comprises many measurements we are not in the single-run case. So, the reasonable assumption is that the hidden variable and the settings of the detectors are not independent parameters. “(1) has essentially the same problem.” I do not understand your point here. 
In 1, I have provided evidence that a field theory such as classical electromagnetism implies that the "dice is loaded"; in other words, such theories are superdeterministic according to Bell's definition. This is about the mathematical structure of the theory, not about prior probabilities. "Your proof (3) fails: "Not only we do not know if a measurement at one location has an instantaneous effect at another, distant location, but everything we know about physics seems to preclude such a behavior." Sorry, no, we have a theorem that without such instantaneous influence we would have Bell's inequality." You are just proving my point here, about begging the question. Bell's theorem does not prove the existence of an instantaneous influence. It allows you to choose between such an influence and superdeterminism. So, what you need to prove here is that taking the non-local option is much more reasonable than the superdeterminist option. With 0 evidence for non-local influences (outside the issue of entanglement, which is the subject of our debate) and plenty of evidence for physical systems that are not independent of each other, your job is quite difficult. 104. Andrei Says: Scott, The exact quote of 't Hooft is (from his paper, The Cellular Automaton Interpretation of Quantum Mechanics): "Yes, by making good use of quantum features, it will be possible in principle, to build a computer vastly superior to conventional computers, but no, these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element (or, in view of the holographic principle, one memory site per Planckian surface element), and if its processing speed would increase accordingly, typically one operation per Planckian time unit of 10^−43 seconds." So, he is not saying that it "will never be possible to build a quantum computer that outperforms a classical computer", but that it will never be possible to outperform a Planck-classical computer. It seems to me that his view is not necessitated by a classical foundation of QM but by the discrete structure of space and time of the CA. A continuous space-time background would not impose any such limitation, so a classical computer could be equivalent to a quantum one. 105. Jochen Says: Ahron Maline #98: "when agent Fbar measures r=tails, he has no right to treat the right-polarized atom he is sending as "the true state", and use that to make predictions about what W will measure." Yes, this is what worries me, too. It strikes me as being akin to Wigner's friend concluding that there won't be any interference if an experiment is performed on the whole lab, where at most they can say that they don't know. It doesn't strike me as that different from the question of whether two events are simultaneous in special relativity: just that they are to you doesn't necessarily mean that they are to every observer. In fact, the notion is meaningless without specifying a frame of reference. In the same way, 'the state' of a quantum may differ according to different observers, and you can't generally assume that the state you assign to a system is assigned to it by every observer. In particular, this seems to be the case with systems including you as a proper part. Wigner's friend can't decide whether Wigner will observe interference, as this depends on whether they measured a system in an eigenstate of the measurement, or in a superposition.
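A minimal two-qubit sketch of my own makes this concrete, assuming the usual toy model in which the friend is a single memory qubit and their measurement is a CNOT-style unitary record. Wigner's interference statistics come out differently depending on whether the system the friend measured started in an eigenstate or in a superposition, even though the friend sees a definite outcome either way:

    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])
    plus = (ket0 + ket1) / np.sqrt(2)

    # the friend's "measurement": a CNOT copying the system qubit onto the friend's memory qubit
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    # Wigner's interference measurement on the whole lab (system + friend)
    phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    phi_minus = (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2)

    # system initially in an eigenstate |0>: the lab ends in the product state |00>, no interference
    lab_eigen = CNOT @ np.kron(ket0, ket0)
    print(abs(phi_plus @ lab_eigen) ** 2, abs(phi_minus @ lab_eigen) ** 2)   # 0.5 0.5

    # system initially in the superposition |+>: the lab ends entangled, Wigner sees certain interference
    lab_super = CNOT @ np.kron(plus, ket0)
    print(abs(phi_plus @ lab_super) ** 2, abs(phi_minus @ lab_super) ** 2)   # 1.0 0.0

From the inside, the friend cannot tell which of the two situations they are in, which is exactly why they cannot decide what Wigner will see.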
Likewise, \bar{F} can’t decide the outcome of \bar{W}’s measurement, as this, too, depends on whether the coin was in an eigenstate of ‘tails’ or in the superposition specified in the paper. Given the knowledge of the initial state, however, they should conclude that \bar{W} may observe either outcome. 106. fred Says: Scott #102 “because of the classical cellular automaton that he believes underlies Nature.” It’s the same idea that Wolfram had, right? While those ideas may be flawed, the real issue is that no-one has any robust theory for what’s going on at the Planck scale, no? I.e. coming up with a discrete structure (made of “cells”) for space and time that’s also fitting special and general relativity? (I don’t know enough about string theory to tell if it solves this). 107. fred Says: About quantum supremacy and whether the universe is actually a simulation or not (whether an actual QC or a classical computer simulating a QC really poorly). Both “classical” and “quantum” computers are digital machines, i.e. reality is described as numbers. But reality itself appears analog (until we know WTF is going on at the Planck scale) and magically super-hyper-parallel. Reality doesn’t seem to care or struggle to make stuff happen at various scales (from super clusters of galaxies down to the proton), while digital machines need more and more resources to manipulate things consistently at both small scales and large scales. Reality doesn’t seem to care whether it’s solving a two-body problem or a trillion-trillion-body problem, it doesn’t need to pretend there such a thing as “isolated systems” to make things happen – every point in space sees the superposition of all the fields (gravity, EM) created by all the particles in the visible universe, no compromise (but apparently that’s still not enough information flowing in a given point of space to cause black holes to spontaneously appear? :P) 108. bcg Says: Andrei #37 (but generally): How does the underlying field determine the angles the researchers choose, such that the researchers are (unknowingly!) conspiring to only present Bell inequality violations? If it’s “I know it must be complicated, and I don’t know how, but it does,” that’s fair, but it seems like a hard sell. If it’s, “That’s not what I’m saying,” then we’re not talking about Bell inequalities. 109. ppnl Says: Andrei #99 ” Sure, such computers would be classical computers, but they would be a different kind of classical computers because they would make use of a different classical effect, entanglement. An electric engine and a petrol engine are both classical but they need not have the same performance. ” But you seem to be postulating a classical process that violates the extended Church/Turing thesis (ECT). If entanglement is a classical process then it should be limited in the same way that all other classical processes are. Transistors are faster than electric relays but they only allow a poly increase in speed. Ditto every other classical process. Can you show me a CA that allows a violation of ECT? Do you believe there is such a thing? Does ‘t Hooft postulate any such thing? 
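Just to make concrete what any such classical process is up against: the textbook brute-force classical simulation of n qubits has to track all 2^n amplitudes, so the cost doubles with every qubit added. A minimal statevector sketch of my own (nothing specific to any CA model, just the standard bookkeeping):

    import numpy as np

    def apply_hadamard(state, target, n):
        # apply a Hadamard gate to qubit `target` of an n-qubit statevector of 2**n amplitudes
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        psi = state.reshape([2] * n)
        psi = np.tensordot(H, psi, axes=([1], [target]))
        psi = np.moveaxis(psi, 0, target)
        return psi.reshape(-1)

    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for q in range(n):
        state = apply_hadamard(state, q, n)
    print(state)   # uniform superposition over all 8 basis states

    for m in (10, 20, 30, 40):
        print(m, "qubits:", 2 ** m, "amplitudes,", 2 ** m * 16 / 2 ** 30, "GiB just to store the state")

Whatever the underlying classical process is supposed to be, if it reproduces quantum predictions it has to do the equivalent of beating that scaling.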
Andrei #104, quoting 't Hooft: "Yes, by making good use of quantum features, it will be possible in principle, to build a computer vastly superior to conventional computers, but no, these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element (or, in view of the holographic principle, one memory site per Planckian surface element), and if its processing speed would increase accordingly, typically one operation per Planckian time unit of 10^−43 seconds." Neither memory size nor processing speed seems relevant. It is the exponential speed up with size that matters. There are on the order of 2^620 Planck unit volumes in the universe. Now imagine a classical computer with that many processors or memory elements that performed an operation in 10^−43 seconds. There seem to be simple problems that this computer could not solve in the age of the universe. But a not overly large quantum computer could. That exponential speedup for some specific problems is hard to beat. Now as far as I know (and this is just a hobby for me so…) there are only two ways around this. Either BQP is equal to BPP or there is a classical process that violates ECT. I don't think the smart money is on either of these. By a lot. Showing either would make you famous, totally aside from any relevance to super-determinism. In fact I doubt that it would convince anyone that super-determinism is true. My understanding – and my understanding is limited – was that 't Hooft was claiming that on some scale quantum computers would fail to deliver the exponential speed increase due to the limits of the underlying classical process. This would allow him to avoid the problem with ECT and BQP=BPP. But I have never seen him or anyone else address this directly. 110. Renato Renner Says: Ahron #98 and Schmelzer #101 and Jochen #105: "when agent Fbar measures r=tails, he has no right to treat the right-polarized atom he is sending as "the true state", and use that to make predictions about what W will measure." To make sure I understand your concern, consider a "truncated" variant of the thought experiment where agent Fbar is never subject to a measurement (i.e., Wbar does nothing to her). While you deem the first row of Table 3 as incorrect in the case of the original thought experiment, I guess that you would agree with that row in the case of the truncated experiment, right? If yes, then you probably have in mind a restricted version of Assumption (Q) which, in line with the title of Scott's blog post, we may define as follows: (Q_noHadamard): Like Assumption (Q), but the rule is only applicable under the condition that the brain of agent A (who prepared the system S in state psi at time t_0) is never going to be subject to a Hadamard (or any other non-trivial operation). If one replaces (Q) by (Q_noHadamard) in our analysis of the thought experiment then I would certainly agree that the contradiction disappears (see my comment #48). 111. mjgeddes Says: fred #107 Yes indeed, the more I really think on this, the more I realize how absurd the idea of 'physics as computation'/'physics as simulation' really is, and it's amazing that anyone fell for this nonsense. Two very smart people (Stephen Wolfram and Gerard 't Hooft) are proposing a specific model of computation (cellular automaton) as the foundation of physics, which is even more implausible!
You only need to look at the mathematical foundations of physics to see that it has *nothing* whatsoever to do with the theory of computation. Firstly, physics is mostly based on differential equations, which need a continuum just for starters. Secondly, no fruitful new physics has *ever* come from the theory of computation… its explanatory power as regards physics is nada, zip. Thirdly, the theory of computation has abstracted away all the structural details of physics – many possible kinds of underlying physics could be compatible with the *same* theory of computation, therefore ToC simply can't explain these physics details. As to the idea of reality as simulation (pan-computationalism), this is just a modern, souped-up version of idealism/solipsism, which can easily be dismissed. Without a physical model of how 'computation' is supposed to work, the whole notion of 'a simulation of the universe' is simply meaningless, which is just to say, computation needs some underlying hardware, and unless one specifies how this hardware is supposed to work, there's no coherent theory there. Elon Musk is an example of a really smart guy who fell for this latest incarnation of old recycled nonsense. 112. Andrei Says: bcg #108: "How does the underlying field determine the angles the researchers choose, such that the researchers are (unknowingly!) conspiring to only present Bell inequality violations?" If I understand your view correctly you imply that superdeterminism requires: 1. In reality the Bell inequality is not violated (QM is wrong). 2. There is a conspiracy preventing us from measuring some of the pairs, so that we are under the (false) impression that QM is right. I reject both claims above. I think that the violation of the Bell inequality is a true feature of Nature. My (qualitative) explanation for this fact is the following: There is an infinity of states (positions and velocities of electrons and quarks, electric and magnetic field magnitudes) that the source and the detectors can be in. But, for each physically possible state there is an infinity of states that are not physically possible. Example: take a real, existing state, move a single electron in the source by 1 nm and leave everything else unchanged. This new state is forbidden because the electric field at any location (including the location of the detector) no longer corresponds to the charge distribution. So, in order to estimate the prediction of classical electromagnetism for a Bell test one needs to count only the states that are physically possible and discard the others. My hypothesis is that when the states are properly counted the violation of the inequality would result. You are perfectly justified in disbelieving this hypothesis, but such a disbelief is not enough to rule out the theory. If you claim that the theory cannot violate the inequality you need to actually count those states or come up with a different argument. Bell's theorem in its present embodiment, which assumes that the source states and the detector states are independent, is of no use here. 113. Jochen Says: Renato Renner #110: I would not want to be that specific in reformulating the assumption. Rather, I would maintain that one can't in general make predictions about a system of which one is a proper part; furthermore, to me, that's just applying quantum theory. Otherwise, you already get in trouble in Wigner's original thought experiment, where the friend might falsely predict the absence of interference.
There's also work by Thomas Breuer to the effect that the assumption that one can always predict the results of measurements on systems including oneself yields contradictions; see e.g. "The Impossibility of Accurate Self-Measurements". 114. Andrei Says: ppnl #109: "you seem to be postulating a classical process that violates the extended Church/Turing thesis (ECT)." First, I must confess that my knowledge in this field is very limited, so if I say something stupid please accept my apologies. OK, let me present the extended Church/Turing thesis: http://www-inst.eecs.berkeley.edu/~cs191/fa08/lectures/lecture17.pdf "The extended Church-Turing thesis is a foundational principle in computer science. It asserts that any "reasonable" model of computation can be efficiently simulated on a standard model such as a Turing Machine or a Random Access Machine or a cellular automaton." The lecture continues: "But what do we mean by "reasonable"? In this context, reasonable means "physically realizable in principle". One constraint that this places is that the model of computation must be digital. Thus analog computers are not reasonable models of computation, since they assume infinite precision arithmetic. In fact, it can be shown that with suitable infinite precision operations, an analog computer can solve NP-Complete problems in polynomial time. And an infinite precision calculator with operations +, x, =0?, can factor numbers in polynomial time." So, it seems to me that a classical process that uses infinite precision (such as the evolution of a system of charged particles described by classical electromagnetism) is an example of an analog, not digital, computer, and such a system can in principle be used to solve the problems that quantum computers are able to solve in comparable time. "Can you show me a CA that allows a violation of ECT? Do you believe there is such a thing? Does 't Hooft postulate any such thing?" I am not going to defend 't Hooft's model as it does not seem to be the most promising path. I would rather go with a theory like stochastic electrodynamics (just classical electrodynamics with a special type of initial state – a real EM field, called the zero-point field, that plays the role of the QED vacuum). This theory has passed some non-trivial tests, like giving a classical explanation for: Planck's law (Nature 210, 405–406 (1966)), the Debye law (Phys. Rev. A 43, 693–699 (1991)), and the electron's spin (J. Phys.: Conf. Ser. 504, 012007). "My understanding – and my understanding is limited – was that 't Hooft was claiming that on some scale quantum computers would fail to deliver the exponential speed increase due to the limits of the underlying classical process. This would allow him to avoid the problem with ECT and BQP=BPP. But I have never seen him or anyone else address this directly." It seems to me that he is saying exactly that: "these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element" 115. Renato Renner Says: Jochen #113: "Rather, I would maintain that one can't in general make predictions about a system of which one is a proper part; furthermore, to me, that's just applying quantum theory. Otherwise, you already get in trouble in Wigner's original thought experiment, …" I definitively agree that one cannot make predictions about a quantum system of which one is a part.
In fact, avoiding the need for such self-predictions was a main design principle of our thought experiment (see the discussion on the top of the right column on page 5 of our article). Scott has a good mini-thought experiment in his book(QCSD) regarding counterfactuals and conditional probabilities and observers in superposition and it illustrates the subtle error of reasoning valid in classical but not quantum scenarios that this slightly more complex paradox uses. As another blogger pointed out, Dr. Renner used the same setup to argue for MWI over single world interpretations. It doesn’t do that either. This “paradox” is just standard QM which works for everything: people, particles, or universes. 117. fred Says: mjgeddes #111 “As to the idea of reality as simulation (pan-computationalism), this is just a modern, supped-up version of idealism/solipsism, which can easily be dismissed. Without a physical model of how ‘computation’ is supposed to work, the whole notion of ‘a simulation of the universe’ is simply meaningless” Although I agree that from a physical description of reality point of view, “reality as a simulation” doesn’t get us very far, because we can’t say anything about the “hardware” (since we’re trapped in the system), there’s a different more practical approach to this: – consciousness is the only thing that can’t be denied – whether we’re brains in vats, whether the world is a simulation, whether the world is made of particles that are mathematical singularities, whether this is all a dream… the rock bottom truth is our experience of being. – our view of external reality, i.e. the information in our sensory data streams (and all the contents of our consciousness) is computable. – the dynamical evolution of external reality (i.e. physics) is such that life spontaneously appears, and life eventually creates computers, lots of them, and those computers keep getting better and better. Basically not only the universe is computable, but eventually it will organize itself into computers. – the content of our consciousness can be generated from a computer (aka virtual reality), not only in a way that’s indistinguishable from the external reality, but in ways that transcend external reality (from an evolutionary point of view). If you buy all this, it’s then not hard to see where this will lead us. And the unlikely situation is the one where we would not already be several recursion deep inside a collection of realities within realities. 118. JimV Says: mjgeddes at 111 says “Firstly, physics is mostly based on differential equations, which needs a continuum just for starters.” That turns out not to be the case. Engineering uses differential equations for lots of things that are actually discrete, such as stress and strain and vibration of materials, which are actually made out of discrete molecules (and in the case of alloys, grains that can be seen under low-power magnification). Fluid flow has some famous differential equations, to deal with H20 molecules. Calculus is a great approximation to discrete systems. Finite-difference equations also work, and have the same types of solution as differential equations of the same form. Meanwhile, Zeno, Democritus, and recently George Ellis, have pointed out that the universe makes more sense in finite, discrete form. With infinities and infinitesimals you get Cantor’s Hotel and other paradoxes. 119. 
sf Says: Dear Renato, I wonder if it would be possible to formulate your result as a theorem about the impossibility of extending the Feynman diagram method (for calculating amplitudes), or if it would get too messy? The idea is that Assumption (Q) corresponds to adding some kind of extra operations to Feynman diagrams, ie the agents correspond to certain ‘subclusters’ in the diagram, and operations on them correspond to the inferences/communications between the agents. This might give a language where the various viewpoints in this discussion could be unified as some theorem about what can and cannot be done legitimately (as inference methods) within the context of Feynman diagrams. One motivation; Feynman in some sense refined Bohr’s approach by allowing one to embed QM experiments inside others, in a nested structure, but this required replacing the measurement step by a branching interaction structure. Your paper is also embedding QM experiments inside others, so the Feynman diagram method seems like a natural way to look at things here. 120. Jochen Says: Renato Renner #115: “In fact, avoiding the need for such self-predictions was a main design principle of our thought experiment” But surely, a measurement on the state of Lbar is a measurement on the state of Fbar? I mean, after that measurement, I know exactly what state Fbar is in, don’t I? Anyway, my main point remains: Fbar is only justified in assuming that Wbar obtains ‘fail’ if they’re justified in assuming that the state of the coin is ‘tails’. But I don’t think that’s the case, as the total state of Fbar and the coin is an entangled one. W, in other words, would seem to be perfectly justified in reasoning that Fbar’s knowledge of QM and the measurement setup leads them to conclude that they aren’t able to predict Wbar’s measurement outcome—which, of course, is exactly what turns out to be the case. 121. fred Says: Isn’t this expected after all? Physics assumes systems can be isolated, but the world is really fully interconnected and deterministic (or deterministic with some pure randomness, whatever that means). So, strictly speaking, there’s really no such a thing as a part of a system making a prediction about another part of the system. And a system can’t simulate itself fully, a part of the whole by definition has less resources than the whole. Also, the “prediction” itself and all its apparatus have been instigated by the initial conditions of the system they belong too. They’re no more independent than any other sequence of events in the system (one wonders why a deterministic system comes up with the notion of “predictions” and “counterfactuals”, it’s not like the “predictions” are going to alter the system’s evolution in some magical independent way… just like other human made concepts, e.g. “free will”). Given all this, it’s really not surprising to find that there are limits to the application of physics and contradictions are bound to happen as you try to apply it all recursively. 122. Harry Johnston Says: To answer my own question #72, from F’s perspective S is in a known state so there is no entanglement between Lbar and L. It still seems like an impropriety to allow Wbar to perform a measurement that requires the ability to predict the exact microstates of Fbar’s measuring device. 
Wouldn’t the simplest way to eliminate the apparent paradox be to modify assumption S to explicitly require that a measurement be thermodynamically irreversible, and assumption C to explicitly require that all measurements be thermodynamically possible? 123. ppnl Says: Andrei #114: There is no difference between an analog computer and a digital computer. Or more precisely the only difference between them is an abstraction layer. If you zoom in on a digital memory cell past the abstraction layer what you see is an analog computer programmed to emulate a digital memory. Turns out that is the easiest problem for analog computers to solve. Why emulate digital computers? Analog computers have a problem with noise. An analog computer without digital emulation will never calculate hundreds of digits of pi because you could never measure the output that accurately. And a fly fart on the other side of the galaxy would disrupt the calculation anyway. Digital computers have calculated sixty trillion digits of pi. In almost every instance the gain you get from the ability to control noise far outweighs the penalty from the emulation layer. Flies are free to fart all they want. In short a digital computer can be seen as a particular kind of analog computer algorithm that allows you to control noise. But what that means is that anything that allows you to build better analog computers also allows you to emulate better classical computers. An infinite precision analog computer would allow us to build an infinitely fast digital computer. In a deep sense these are the same thing. But infinite precision is a chimera. That would imply infinite information density and your computer would collapse into a black hole dragging the rest of the universe with it. The universe contains a finite number of atoms, finite energy and finite space. As a result it also contains a finite amount of information. This prevents any infinite precision analog computer. Else we would have to junk 99% of all we think we know about physics. There is no simple fix to get around this. A quantum computer isn’t faster than a classical computer in the sense of more operations per second or more information density. A quantum computer changes the definition of information and computation in a way that probably allows new algorithms that are exponentially faster on some problems than any possible classical algorithm. That means a comparatively modest quantum computer could outperform any classical computer made with the resources of the entire universe. And necessarily also any analog computer as they have the same limits. And remember a quantum computer is only faster on a tiny subset of problems. On most things it is no better than classical. It just isn’t generally faster. 124. bcg Says: Andrei #112 > So, in order to estimate the prediction of classical electromagnetism for a Bell test one needs to only count the states that are physically possible and discard the others. The probability of non-quantized spin being exactly +1/2, or exactly -1/2, is zero. Forget violating BI; how can classical electrodynamics even *obey* BI? This is regurgitated Wikipedia here, so if this is moonman talk, please be kind. 125. Renato Renner Says: Scott, I am afraid that it will look as if I’m hijacking your blog if I continue answering the many comments. 
But I appreciate the variety of reactions that have appeared here, ranging from something like “it has already been done” (either by Wigner or by Hardy) to “it’s really not surprising to find that there are limits to the application of physics” and to “it is fundamentally flawed”. 😉 For now it’s probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it (which I think is not too much to ask, for our argument is described step-by-step in the article). Even more because I do not know how to reasonably reply to indirect allegations of the type “I have a simplified version of your thought experiment where no contradiction arises, so your argument must be flawed somewhere”. Anyway, my conclusion so far (on the superficial level of the blog’s title) is: Yes, it is hard to think after someone Hadamarded your brain, but certainly not before! 126. Andrei Says: ppnl #123: “But infinite precision is a chimera. That would imply infinite information density and your computer would collapse into a black hole dragging the rest of the universe with it. The universe contains a finite number of atoms, finite energy and finite space. As a result it also contains a finite amount of information. This prevents any infinite precision analog computer. Else we would have to junk 99% of all we think we know about physics. There is no simple fix to get around this.” I disagree. Nature works with infinite precision because space and time are continuous. This is true in QM and it is also true in general relativity. The magnitudes of fields, locations of particles, etc. are real numbers that have an infinite number of digits. If you introduce a discretisation you run into problems, like violations of symmetries and conservation laws. This is a significant problem for ‘t Hooft’s CA model and he is aware of that. It might be that a solution to this problem exist, but, regardless, our current understanding suggests space-time is continuous. GR does not predict that a black-hole appears any time the distance between two objects is not represented by a rational number, so it is clear that there has to be a mistake in your reasoning. My guess is that the mistake originates in the confusion between the information contained in the system itself and the information that can be extracted by using an experimental procedure. For example if you want to measure the location of a particle with infinite accuracy you need a photon of 0 wavelenth, or infinite frequency, and such a photon would carry an infinite amount of energy. It is that energy that creates the black-hole. So, a classical, analog computer “computes” with infinite precision, and the only limit is imposed by our ability to measure the initial state (input) and the final state (output). “A quantum computer changes the definition of information and computation in a way that probably allows new algorithms that are exponentially faster on some problems than any possible classical algorithm.” OK, let’s be precise about what “classical” means here. For the purpose of our discussion a classical theory is a local and realistic theory. Classical electromagnetism, or general relativity are examples of such theories. Now, do you have a proof for your above statement? Can you prove that a computer that is based on local, realistic physics cannot achieve the same performance as a quantum one? 127. Andrei Says: bcg #124: The probability of non-quantized spin being exactly +1/2, or exactly -1/2, is zero. 
Forget violating BI; how can classical electrodynamics even *obey* BI? Quantization might be achieved classically as well. Please take a look at this paper: Emergence of quantization: the spin of the electron (A. M. Cetto, L. de la Peña and A. Valdés-Hernández), J. Phys.: Conf. Ser. 504 012007. The article can be read here: http://iopscience.iop.org/article/10.1088/1742-6596/504/1/012007/pdf 128. fred Says: ppnl #123 "But infinite precision is a chimera. That would imply infinite information density and your computer would collapse into a black hole dragging the rest of the universe with it. The universe contains a finite number of atoms, finite energy and finite space. As a result it also contains a finite amount of information." But information density, the holographic principle, Hawking radiation… that's still all entirely speculative, no? The only thing we've ever observed (indirectly) about black holes is their gravity effects (orbits of neighboring stars and gravity waves), and the quantization of space is a difficult nut to crack (https://en.wikipedia.org/wiki/Doubly_special_relativity) Dear Professor Renner: With respect to your "it's probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it", I'm not sure that it's a flaw, but I would like to ask your reaction to this response: Frauchiger-Renner write: "At time n:01 \bar{F} observes tails and thinks: Statement \bar{F}^{n:02}: 'I am certain that W will observe w = fail at time n:31.'" In fact, even though at time n:01 \bar{F} observes tails, \bar{F} is not certain that W will observe w = fail at time n:31, for in fact it is not certain that W will observe w = fail at time n:31. What \bar{F} does think, using quantum mechanics, is, rather, this: >I have just made a measurement and observed tails. If this measurement of mine were to have branched the multiverse—decohered the state—then I would be certain that W will observe w = fail at time n:31. But I know, looking forward into the future, that my measurement has not branched the universe—decohered the state—or, rather, I know that my brain will be Hadamarded in the future to reconstitute the state, so that nothing will remain in the universe of my measurement of "tails" other than the fact that everyone will agree that I made it. >Because I know quantum mechanics, I know that the reconstitution of the state |\psi> requires that my reasoning now, pre-Hadamard, take account not just of what I have observed but of what my ghostly self on the branch that observed heads observed, just as their reasoning must take account not just of what they have observed but of what they see as the ghostly me on this branch that observed tails observed. And when I take account of that, properly using quantum mechanics, I see that even though I know—right now—that the wave function is |0+>, quantum interference with |1+> + |1-> raises the possibility that W will not observe w = fail at time n:31. But I am not sure whether I have identified a flaw in your argument. Perhaps I have merely understood what your argument is… Yours, Dear Ahron: With respect to "On the other hand, \bar{F} is justified, without additional assumptions, to conclude that 'if I ever see W's result, it will be w=fail…'" Should that be: >\bar{F} is justified, without additional assumptions, to conclude that "if any version of me that retains a memory or any other signal of the fact that I measured tails ever sees W's result, it will be w=fail…"? Yours, 131.
Andrei Says: Renato Renner #125: "For now it's probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it (which I think is not too much to ask, for our argument is described step-by-step in the article)." OK, let me try. You write: "One observer, called agent F, measures the vertical polarisation z of a spin one-half particle" and "The other observer, agent W, has no direct access to the outcome z observed by his friend F. Agent W could instead model agent F's lab as a big quantum system" The assumption here is that it is possible to shield the content of the lab from an observer situated outside of the lab. My question is simple. How would you do it? How would you build a lab so that no information about the interior can leak outside? More to the point, how can you stop the gravitational, electric or magnetic fields that are correlated with the contents of the lab from being measured from outside? Thanks! 132. Jochen Says: Renato Renner #125: "For now it's probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it" Let me just say that I really appreciate your willingness to comment on your work online, but I also understand if it gets a bit… tiresome. Also, even though I don't think I quite agree with the conclusions of your paper, that doesn't mean I think it's not worthwhile—far from it, I think it already has stimulated and galvanized much thinking on the matter, and will continue to do so, which is, to me, definitely a good thing. As the (alleged) Bohr quote goes, the opposite of a profound truth may well be another profound truth. That said, to me, it's becoming more and more clear exactly where I disagree, and I'm gonna use the benefit of having access to a keyboard again to try and explain myself as best I can (for whatever it's worth). Perhaps I'll later try and write up something more formal, if I feel my argument contributes anything to the discussion. First of all, I'm gonna agree with you that there is indeed a contradiction that arises from your assumptions Q, S, and C. However, I disagree that assumption Q reasonably follows from using quantum theory. To localize my disagreement, I do not think that Fbar(n:02) is a statement that Fbar can reasonably make. The reason for this is, essentially, that the inference from observing 'tails' to the total state being |t>(1/sqrt(2)(|-1/2> + |1/2>)) does not follow. To break this down, Fbar has two lines of reasoning available to them: A: "I have observed 'tails'; hence, I prepare the state 1/sqrt(2)(|-1/2> + |1/2>). Nothing Wbar does changes anything about this, as the total state is the tensor product |tails>[1/sqrt(2)(|-1/2> + |1/2>)]. W's measurement hence yields 'fail', as |ok> = 1/sqrt(2)(|-1/2> – |1/2>) is orthogonal to the state reduced to L." B: "I have observed 'tails'. However, I know that the coin was prepared in the state |init> = 1/sqrt(3)|heads> + sqrt(2/3)|tails>. Hence, the total state now is |Ψ> = 1/sqrt(3)|heads>|-1/2> + sqrt(2/3)|tails>1/sqrt(2)[|-1/2> + |1/2>] = 1/sqrt(3)(|heads>|-1/2> + |tails>|-1/2> + |tails>|1/2>). Written in the {|okbar>,|failbar>} basis, this is |Ψ> = 2/sqrt(6)|failbar>|-1/2> + 1/sqrt(6)|failbar>|1/2> – 1/sqrt(6)|okbar>|1/2>. This is an entangled state. After Wbar observes the outcome 'okbar', the total state will be |okbar>|1/2> = 1/sqrt(2)[|heads>|1/2> – |tails>|1/2>].
But this isn’t orthogonal to |ok>, so W may obtain w = ok.” It seems to me that the preferable line of argumentation is B. For one, it has the advantage that it yields the right conclusion, and Fbar must know this in the same way that we do. But more importantly, it doesn’t make the (IMO) unjustified assumption that just because Fbar has made an observation, they can conclude what the total state is for another observer. Because in the end, this is a self-measurement: |heads> and |tails> include Fbar’s observation of the coin as ‘heads’ or ‘tails’, and while there’s originally no component |heads>|1/2>, this is there after Wbar’s measurement. So, if we were to tell Fbar after the measurement of W that the outcome was ‘ok’, they’d say: “Of course, that’s very well possible; after all, I observed ‘heads’ at the start!”. So in that sense, no contradiction ever arises—even though one may squirm a little at the ‘undoing’ of Fbar’s knowledge/measurement result. But this isn’t even something unique to quantum mechanics: even classically, given the power of re-wiring your brain, I can make you believe that a coin came up ‘heads’ even if you observed it coming up ‘tails’. “I shall then suppose, not that God who is supremely good and the fountain of truth, but some evil genius not less powerful than deceitful, has employed his whole energies in deceiving me…” In the general case, we can’t assume Fbar knowing the initial state of the coin, so this seems to prohibit the argumentation B. But then, it doesn’t follow that A is the right way to think. Rather, Fbar ought to reason that they have no way of knowing whether the state, after preparing the spin, is |tails>[1/sqrt(2)(|-1/2> + |1/2>)]: there could well be entanglement between the state of the coin (and consequently, the state of Fbar after observing the coin) and the state of the spin. And by Breuer’s argumentation I have cited above, Fbar cannot decide whether that’s the case. Consequently, they aren’t entitled to derive any conclusions regarding the total state; but this, likewise, blocks the inference of Fbar(n:02). This may seem a troubling conclusion in itself, since it implies that there are some things about the total state of the universe (if it’s meaningful to speak of such an entity) that we can’t know; but after all, we can at least derive our own future experiences, and, at least according to what we ever can get to know, this won’t yield any contradictions. 133. fred Says: Jochen #132 “This may seem a troubling conclusion in itself, since it implies that there are some things about the total state of the universe (if it’s meaningful to speak of such an entity) that we can’t know” Why is this troubling or even surprising? By definition isn’t the universe made of mostly “stuff” we can’t fully resolve? – The part can’t contain (and know) the whole. – Perfect knowledge of something is duplication, and the no-cloning theorem says it can’t be done. – Short of duplicating a system, we can probe it, and such action is always destructive. – The instrument and the object studied can’t be truly independent (for every action there is a reaction), so knowing everything about an object would require knowing everything about yourself perfectly as well. The only time we are truly in control is when we’re talking about bits, because they’re relative to us, we made them, they live entirely in our minds! 134. Ahron Maline Says: Renato Renner #110 Sorry for the delayed response, especially now that you want to withdraw from the discussion. 
Nevertheless, I’d like to try to further clarify my point. What you called “Q-no-Hadamard” is indeed a fair description of the “bottom line” as to when “QM predictions” based on observations are expected to be correct. That is why there is no paradox, and I suspect that Scott intended something similar. However, I am making a stronger statement: I believe that your work, which is framed as establishing a new no-go combination of assumptions, does not in fact do so. Your assumption Q is not a sensible position to take, regardless of the other two, and so ruling it out teaches us little. If I understand correctly, assumption Q states that if one observes a particular classical reality, he may take the corresponding quantum state and use it to make predictions (at least when the probability is 1), and these predictions will be satisfied by future observations of any possible observer. In our case, Fbar is using his observation r=tails to predict W’s observation as w=fail. But there is no sensible reason that this should work! Wigner’s friend experiment presents us with a clear dichotomy. Either: A. Observations made, at least by humans, represent the unequivocal reality, with all other possibilities reduced to “might have been”. This is held by the Copenhagen Interpretation, among others. Wigner pointed out that this implies QM will fail if used by an observer outside the lab, and Deutsch spelled out the failure in detail. So if A is true, assumption Q is “reasonable”, but known to be false. The other possibility is: B. Even after an observation has been made, the other possibilities continue to be “elements of reality” in some form. This may be as “parallel worlds” as in MWI, as “parts of the wavefunction that are empty of Bohmian particles” in dBB, or others. If this is the case, then assumption Q is really quite unreasonable. Why would it make sense for Fbar to ignore the r=heads “branch” when it is understood to be “real”? So assumption Q is not something anyone holds, and could not have been. Now it may be asked: according to option B, which is quite popular, how can we ever use QM at all? There are always likely to be countless “branches ” different from anything we can know about, so how do we ever make predictions? The most common answer to that is decoherence: the assumption that states that are “classically different” will remain too far separated in Hilbert space for any interference to ever be observed. This indeed justifies QM predictions, but of course it does not apply to scenarios where observers are measured in a Hadamard basis. And somewhat more generally, what is required for using QM is something like “Q-no-Hadamard”: Predictions made from an observation are valid within the “world” that “remembers” that observation, but not for a world where the observation has been “Hadamarded away”. 135. David Byrden Says: Thank you for your lucid and helpful explanation of the Renner-Frauchiger puzzle. It’s immediately obvious that the essence of their puzzle is contained in the 1992 Hardy paradox. And, by thinking about that in a geometric way (as vectors in 4 dimensional Hilbert space) I was able to see what is really going on. It’s just like Asher Peres said; unperformed measurements have no results. In the {|+〉, |-〉} basis a qubit CANNOT be read as 1 or 0. The apparent “paradox” results from illegally combining facts from different measurement bases. 136. Ian Says: Hi Scott, I was wondering if you had any final thoughts on Renato’s comments here? 
It seems the two camps ( there is a paradox / there is no paradox ) are still solidly separated… 137. Scott Says: Ian #136: Yes, my final thought is still that there’s no paradox, 🙂 for the reasons David Byrden #135 says. I can’t completely localize what I consider the illegal step within the exact language that Frauchiger and Renner use, and I think it would be very worthwhile for someone to take the effort to write a response paper spelling it all out (which I expect will happen). But I can certainly state in broad terms what I consider the issue to be (and did). 138. David Byrden Says: Let me spell out what I visualised. The system has 4 physical states (combinations of qubit values), so there are 4 basis vectors spanning its Hilbert space. The initial system state is a vector equidistant from three of those basis vectors, and orthogonal to the fourth, which is |11〉. As you wrote, “|ψ〉 has no |11〉 component” But Alice and Bob intend to measure in a different basis, namely { |+〉,|-〉} for each qubit. In that basis, the state vector is quite close to |++〉and equidistant from the other three vectors, which include |–〉. Don’t try to visualise this unless you are four dimensional ! Now, the trouble starts. As you wrote: “… conditioned on Alice’s qubit being in the state |0〉… ” Alice’s qubit will read |0〉only if you measure it. This collapses the state vector into the plane spanned by |00〉|01〉. ( don’t forget to renormalise. ) In its new position, the state vector is orthogonal to |–〉. So of course you cannot get there any more. So, in conclusion, when you say “suppose Alice’s qubit were zero” you are really saying “suppose the system were in a different state”. But Alice’s qubit is NOT zero. Bob’s qubit is NOT zero. The state vector is pointing out between their ones and zeros, to a spot from which it can collapse into |–〉. 139. sf Says: Scott #137 It would only make sense to “localize … the illegal step” if you are sure that this is being done in a formally consistent system. But, granting that the precise hypotheses of their article do formally lead to a contradiction, this wouldn’t be the way to resolve the Frauchiger-Renner ‘paradox’; their system would be inconsistent. So which CONSISTENT logical system should one be working in? The issue is more likely whether the inconsistency can be ‘localized’ to a smaller set of hypotheses, including some implicit hypotheses. (You suggest in the post some hidden hypothesis, I also made some attempt at this in #75, and was then convinced their system is inconsistent.) There is no objective way to say which hypothesis of an inconsistent system doesn’t belong, unless one passes to a larger context, ie working outside the formal system in question. This is also why I suggested Feynman diagrams in #119 as a potentially preferable context. (The divergences of Feynman integrals would have to be avoided, but this isn’t so relevant here). 140. David Byrden Says: Feynman diagrams? There’s no need for anything like that. This is a simple, basic, comprehensible “spot the mistake” puzzle. The mistake is: Renner and Frauchiger put their system into a state, then they collect some statements which would each be true in *other* states. Those statements are not true in the *actual* system state. Finally they combine the statements, as if they were all true at once, which they are not. It’s like saying “I have just enough money for one beer. I have just enough money for a sandwich. So tonight, I will dine on a beer and sandwich”. 141. 
David Byrden Says: DarMM #50 : > “If \bar{F} gets tails, then he knows … the L lab will evolve into the fail state.” That’s true. But, in more detail; he knows that F will evolve into a superposition of z=+1/2 and z=-1/2. But, the laboratory L contains a quantum device that reads the qubit and passes the results to W without causing decoherence. The human being within that lab will read the device and decohere, but no trace of him will emerge from the perfectly sealed lab. So, the two superposed versions of data emerging from lab L will combine, and W will measure “fail”. > “If agent F measures z=+1/2, F can conclude that \bar{F} knows r=tails.” That is true. > “F himself would think that since he sees z=+1/2 he and his lab are not in a determined state in the fail,okay basis.” F would be right about that. > “From that he would think W could get either fail or okay.” No! F knows that he’s in a superposition. There are two of him, seeing different data. And the lab equipment creates constructive interference between his data and his doppelganger’s data. They will both contribute to W’s measurement, resulting in “fail”. ———– Here’s another example of this phenomenon. Imagine a Young’s slit experiment. You are speaking to the photon: “Hey, you just went through the Left slit, didn’t you?” “Sure did!” “Well, there’s a version of you who went to the Right.” “I know, he voted for Trump. We’re not on speaking terms.” “I have good news. You’re about to meet up and work together!” “What? Why?” ———– So, in conclusion; the agents in the labs should remember that they themselves are in superpositions, because those superpositions will come into play when the external agents measure everything. 142. sf Says: David Byrden #140 Basically I agree with you, and the idea is to start with the approach you use. But to respond to the challenge that Renato and Scott were discussing you don’t get the option to finish things up that way. You have to either isolate the mistake within the formalism as Renner and Frauchiger framed it; with agents, ‘knowing’ and inferring things in their ‘partial states’ which aren’t quite quantum or classical states, or you have to say specifically where their setup just doesn’t make sense. Overall no one claims it makes sense, since it gives a paradox, but the issue is if this comes from the global setup or from some more localizable conflict, ie whether there’s anything interesting and deep that’s new about the paradox or just the usual incompatibility of quantum and classical physics as theories of everything, presented in a slightly different way. I completely agree that you can be happy to stop where you do, but this doesn’t satisfy Renato. I only meant to use a very rudimentary form of Feynman diagrams; basically a branching Many-worlds interpretation, since one wouldn’t really want to translate things to set theory, but one needs some consistent framework to work in. 143. David Byrden Says: Ah, now I see what you intended with the Feynman diagram. Yes, that’s a good idea. So I tried it. There are 6 branches. Everything is consistent until the final step of the experiment, when Agent W makes his measurement in a different basis to the one that everybody’s been using. 144. DarMM Says: Hi David, Thanks for the response. 
I see what you mean, but by considering himself in superposition F reaches the conclusion that W can never observe “ok”, which seems to be in contradiction to predictions based purely on the quantum state where the chance should be 1/12 (which is the result observed in the Hardy paradox case as Scott mentioned). It would seem to me that the agent is simply wrong to not take that his own measurement and collapse (from his perspective, I.m not saying this implies MWI is wrong) into account and shouldn’t reason based on viewing himself in superposition. In fact he seems to be composing facts from a collapse and no collapse stand point. Collapse to reason that \bar{F} got tails. Then no collapse to reason that he should adopt \bar{F}’s view of him. It’s only with these composed that he obtains W = fail always, in contradiction with the 1/12 prediction from the actual quantum state. 145. DarMM Says: David, Sorry scratch the first part of that. Concluding W cannot get “ok” is fine for the reasons Scott mentioned in the main post. At that point W would get fail. It’s not until \bar{W}’s measurement that this conclusion is invalidated. However I’d like to see an interpretation neutral take on F’s two different conclusions. Yours requires something like Many-Worlds. It seems F could either consider him and \bar{F} to be in the state: |tails,fail> or the state: |tails,1> and both give different predictions for W. Now \bar{F} has P(W = fail) = 1, certain. So it seems F would need to adopt the superposition view of himself for superobservers who can make measurements on his lab at the atomic scale. Many-Worlds sees this as because there are two F observers. However not all interpretations view superposition like this. 146. David Byrden Says: sf #142 So I should specify exactly where the paper contains errors. All right, here’s the first error as an example: Refer to Table 3. Its first entry reads : /F observes r = tails at time n:01 Now, this agent is in superposition because she could have observed either “heads” or “tails”. In the “many worlds” interpretation, there are two of her in separate worlds. This entry describes only the “tails” copy of agent /F. She knows this too. But the paper claims that she will use Quantum Mechanics when thinking about things. Her deduction is: “I am certain that W will observe w = fail at time n:31.” From her point of view, this seems a foregone conclusion. She will send a qubit to agent F, in an equal superposition of “up” and “down”. That will put agent F into a superposition. Then agent W will measure agent F, and one of his measurement vectors will coincide exactly with the state vector of agent F. The result of “fail” seems inevitable. But she’s doing it wrongly. She’s ignoring the other copy of herself, in the parallel “heads” world. That other copy of agent /F will also pass a qubit to agent F. The two qubits may come from separate worlds, but they are coherent and they will combine (like the two paths in a Young’s Slit experiment). Agent F will receive a qubit containing information from BOTH the “heads” and “tails” worlds. The proportions will NOT be fifty-fifty. Agent /F made a wrong calculation. She did not use Quantum Mechanics properly, as claimed. That’s it, in a nutshell. I could prove it with algebra, I could draw it in 4 dimensional Hilbert space, but I think you have the idea now. 147. 
DarMM Says: I’m a dumbass, the three observer case I’m asking about is nothing more than the usual oddness for running interference experiments on other observers that one encounters in the usual expanded discussions of Wigner’s friend. 148. David Byrden Says: So, I’m putting my refutation of this paper online at; http://byrden.com/quantum/consistency.html If there is interest, I will flesh it out. (For example, you might wonder what happens if the labs are unsealed.) David 149. sf Says: David Byrden #146 Thanks. What you say looks right to me, but I’m hoping to have a strong coffee and look at the F-R (Frauchiger-Renner) paper a bit harder soon and possibly play devil’s advocate a bit, just to see if there are still issues of interpretation to be settled. Also, in #75, above, I had some vague hunch related to your point. I’m a bit worried that there are ’straw-men’ littering the terrain; maybe we’re taking the paper to be claiming more than it really does, or maybe F-R is even attacking viewpts that nobody really holds on unlimited use of QM. In the meantime I noticed that https://fqxi.org features a link to a more recent Renner project https://fqxi.org/community/articles/display/231 Dissolving Quantum Paradoxes – The impossibility of building a perfect clock could help explain away microscale weirdness. The project coauthor, Lídia del Rio, also nicely presents F-R in a video: https://www.perimeterinstitute.ca/videos/journal-club-frauchiger-renner-no-go-theorem-single-world-interpretations-quantum-theory I had wondered a bit if there was any issue with taking for granted the fact of giving agents in F-R access to synchronized clocks? But probably this is just a technical point. A similar issue is that agents seem to have a way of calibrating their Hilbert space bases; eg the same spin up/down test is shared by the independent labs L, /L. But a priori there shouldn’t be any relation of these bases, especially because F-R assert that labs L, /L are in pure states to start. If they calibrate, then they are entangled. This could be resolved by building in the basis at the outset, but that may have other problems, depending on which interpretation one works in. 150. David Byrden Says: > “agents seem to have a way of calibrating their Hilbert space bases; eg the same spin up/down test is shared by the independent labs L, /L.” That’s true, but I thought it would be trivial to do that? e.g. if the polarisation of a photon is the shared qubit, then it’s only necessary to have the measuring devices set up with their measurement axes parallel. No special entanglement is needed. I’m sure I’ve seen that done in various quantum experiments. Am I missing something there? 151. sf Says: >I’m sure I’ve seen that done in various quantum experiments. But the classical measuring devices there are set up using CLASSICAL INTERACTIONS to align them, with measurement axes parallel. Here we have 2 labs that are strongly isolated from each other, as quantum systems. So this ‘classical interaction’ is not available. In fact there could be pretty arbitrary unitary operators on the whole quantum computer/lab interfering with alignment after some time passes, insofar as quantum mechanics makes it hard to rule out some initial drift/jitter. There might be a pretty deep problem with getting 2 QM-isolated labs to interact in any meaningful way. I wonder if this has been discussed anywhere? 152. David Byrden Says: Ah, yes, I see. An interesting point. So, we align the measuring devices, then seal them inside “labs”. 
Will they be aware if they accidentally rotate? Well, under SR, your orientation in space is an absolute reference (but not your “speed” or position). So, surely you’d notice if you were rotating? Even when isolated? Because there’s spacetime in the box with you. Or, when we build a perfectly isolating “lab”, would it cut you off from the absolute angles of spacetime? Which prompts a more fundamental question: This experiment, like Schrodinger’s Cat, requires the “box” or “lab” to perfectly block any information coming out from the inside. But, wouldn’t both experiments work if information could enter from the outside? 153. sf Says: David Byrden #152 There’s still a lot to try and clarify here. What seems OK so far is that calibration between labs would involve entanglement, but also that in an orthodox Copenhagen context, a physicist would prepare the 2 (or more) labs so that axes would be aligned for some finite time experiment. Then he/she can apply Born’s rule using that alignment. But I’m not sure the lab agents F,/F, can ‘know’ this; they don’t benefit from the Copenhagen version of Born’s rule, not having prepared this state themselves. Your SR point is right (and analogously in Newtonian/Galilean contexts) but there’s a difference between ‘orientation in space is an absolute reference’ and giving access to measuring such absolute orientation to agents in a lab. This is analogous to the issue of time; QM assumes that time-like slices are well-defined, but this doesn’t mean that entities/agents in an experiment can read off this time from some absolute clocks. There’s even an uncertainty principle for time that obstructs this (and more subtle issues of interpretation involved there). I don’t know yet if F-R mention any of this, still have to get back to the article. The lack of absolute reference for “speed” or position may be enough to rule out access to measuring absolute orientation for lab agents; the (macro but quantum) lab apparatus can tilt by the uncertainty principle for the position of ‘one end’. This may involve one lab implicitly measuring the other’s position though. >wouldn’t both experiments work if information could enter from the outside? If it’s coming from one lab to the other, which is all that matters here, then there’s still decoherence or entanglement I guess. 154. Benjamin Says: Renato Renner #125 I don’t think my ideas are new to this thread, but I frame my objections as an error in your Table 3, rather than “identifying and rejecting an additional unstated postulate” as Scott claims he is doing. F’^{n:02} should really read “I am certain that W will observe w=fail after n:11 if I still know the S measurement in the |x> basis.” F measures in the |z> basis. This alone does not matter to F’, since F’ does not have access to the F measurement. So at this point (t >= n:11) the F’ statement above still holds: a W measurement will yield w=fail. This is not inconsistent with F at this point either. If W’ makes a measurement before W, and F measured |+z>, then W’ must find w’=ok’ (assuming consistency (C)). This is again consistent with both F’ and F. However, now F’ cannot know that S is in the |+x> state. Although not a direct measurement on the transferred bit, the W’ measurement fixes S=|+z>. So F’ can no longer be confident that W will observe w=fail — that conclusion was drawn from the knowledge that S=|+x>. 
This is equivalent to Scott’s objection on the grounds of a Hadamard application to the brain, but I think it is helpful to think about what that actually means in terms of the thought process of F’. Before the W’ measurement (which F’ must be aware of), F’ knows S=|+x> because he made it that way. But F’ also knows the Stern-Gerlach experiment, and knows that if S is measured in the |z> basis, the S=|+x> knowledge is lost. Now in this thought experiment we already know that F measures S=|+z>, but F’ does not know that so still has confidence that S=|+x>. The important step for F’ is the W’ measurement of w’=ok’. As you state in equation (6) of your paper, the w’=ok’ measurement implies S=|+z>. This is not inconsistent with anything any of our observers have seen so far, but it does mean that F’ loses confidence about S=|+x>. This is not inconsistent with (Q) or (S), but is inconsistent with the idea that labs L and L’ are independent, and I think that is where some of the confusion lies. The S bit, and {F’}’s knowledge of it, are entangled with the L’ lab since F’ prepared the bit and sent it to F. That step alone doesn’t cause concern — it’s the same as Schrodinger preparing his poison-release contraption before he puts it in the cat box. But when W’ measures w’=ok’, he is implicitly making a measurement on that bit in the L lab, even if he doesn’t actually “open the box”. I’ll end with a final sanity check to show that F also has a consistent view with the other observers. F, being isolated from F’ and W’, is unaware of the W’ measurement so we are tempted to still see an inconsistency from statement F^{n:14}. But that statement must also be altered in the vein of F’^{n:02}, as it was deduced from an application of (C) to F’. So it is not accurate for F to think “I am certain that W will observe w = fail at time n:31.” Rather, F can only say “If W observes w=ok, then F’ must not know the state of S in the |x> basis, and none of us could have known for certain what W would observe.” A simple calculation shows that the probability of this from the F frame yields the expected 1/12. 155. tez Says: @ David Byrden #146 – note the original title of the paper: https://arxiv.org/abs/1604.07422v1 156. David Byrden Says: I’m analysing this Gedankenexperiment and I have a question. I think I know the answer but I’d like someone who has qualifications in QM to tell me what it is. Agent /F goes into superposition when she becomes entangled with the randomness generator. She prepares a qubit which is sent to the other lab. This qubit is not in her own state, but it’s in a state related by a unitary. We can implement that with a machine on the route connecting her lab to the other lab. In that case, agent /F sends out the qubit in her own quantum state. So, agent /F is left sitting in her lab, waiting to be measured, in her own quantum state. Meanwhile the qubit is on its way to the other lab, and it’s in the same quantum state. I think that runs foul of the No Cloning Theorem. I think that the only way around this is for agent /F to read the randomness generator and deliberately set up the qubit in the appropriate state. I think it’s impossible for the randomness generator’s state to “pass through” undisturbed while the agent reads it. Why does this matter? Because I want to confirm that reading the state of /F does not collapse the state of F. I want to be clear that they are two distinct states. 157. 
sf Says: I’m not at all expert on this, but I think you’re right that there’s an interesting issue there. It would be interesting if you can give your answer in the meantime. >”I want to be clear that they are two distinct states” If I understand, you mean that it remains a superposition of two distinct branches or components? My guess is that when you try to quantize the labs there has to be some allowance for error to creep in. Its crucial to have some quantitative bounds on this error, which may be a problem for F-R. See https://en.wikipedia.org/wiki/No-cloning_theorem#Imperfect_cloning Buzek and M. Hillery showed that a universal cloning machine can make a clone of an unknown state with the surprisingly high fidelity of 5/6. Also, Even to quantize a dynamical system given by polynomials of low degree is problematic, as I recall. The naive approach of using an atomic description/reduction of the labs is not valid because it doesn’t provide an exact description of a Turing machine (or human or whatever the lab is); the latter is an abstraction insofar as its error free etc. and its states are equivalence classes of physical events that aren’t quite physically defined. There’s some interesting discussion of classical no-cloning in:
web
auto_math_text
Search for heavy Majorana or Dirac neutrinos and right-handed $W$ gauge bosons in final states with two charged leptons and two jets at $\sqrt{s}$ = 13 TeV with the ATLAS detector No Journal Information The collaboration Abstract (data abstract) CERN-LHC. A search for heavy right-handed Majorana or Dirac neutrinos and heavy right-handed $W$ gauge bosons is performed in events with a pair of energetic electrons or muons, with the same or opposite electric charge, and two energetic jets. The events are selected from $pp$ collision data with an integrated luminosity of 36 fb$^{-1}$ collected by the ATLAS detector at a centre-of-mass energy of 13 TeV. No significant deviations from the Standard Model are observed. The results are interpreted within the theoretical framework of a left-right symmetric model and lower mass limits are set in the heavy right-handed W and neutrino mass plane. The excluded region extends to $m_{W_R}~= 4.7$ TeV for both Majorana and Dirac $N_R$ neutrinos.
web
auto_math_text
# 0.15 Lab 10a - image processing (part 1) Page 1 / 5 Questions or comments concerning this laboratory should be directedto Prof. Charles A. Bouman, School of Electrical and Computer Engineering, Purdue University, West Lafayette IN 47907;(765) 494-0340; bouman@ecn.purdue.edu ## Introduction This is the first part of a two week experiment in image processing. During this week, we will cover the fundamentalsof digital monochrome images, intensity histograms, pointwise transformations, gamma correction, and image enhancement based on filtering. In the second week, we will cover some fundamental concepts of colorimages. This will include a brief description on how humans perceive color, followed by descriptions of two standard color spaces.The second week will also discuss an application known as image halftoning . ## Introduction to monochrome images An image is the optical representation of objects illuminated by a light source. Since we want to process images using acomputer, we represent them as functions of discrete spatial variables. For monochrome (black-and-white) images, a scalar function $f\left(i,j\right)$ can be used to represent the light intensity at each spatial coordinate $\left(i,j\right)$ . [link] illustrates the convention we will use for spatial coordinates to represent images. If we assume the coordinates to be a set of positive integers, for example $i=1,\cdots ,M$ and $j=1,\cdots ,N$ , then an image can be conveniently represented by a matrix. $f\left(i,j\right)=\left[\begin{array}{cccc}f\left(1,1\right)& f\left(1,2\right)& \cdots & f\left(1,N\right)\\ f\left(2,1\right)& f\left(2,2\right)& \cdots & f\left(2,N\right)\\ ⋮& ⋮& & ⋮\\ f\left(M,1\right)& f\left(M,2\right)& \cdots & f\left(M,N\right)\end{array}\right]$ We call this an $M×N$ image, and the elements of the matrix are known as pixels . The pixels in digital images usually take on integer values in the finite range, $0\le f\left(i,j\right)\le {L}_{max}$ where 0 represents the minimum intensity level (black), and ${L}_{max}$ is the maximum intensity level (white) that the digital image can take on. The interval $\left[0,{L}_{max}\right]$ is known as a gray scale . In this lab, we will concentrate on 8-bit images, meaning that each pixel is represented by a single byte.Since a byte can take on 256 distinct values, ${L}_{max}$ is 255 for an 8-bit image. ## Exercise Download the file yacht.tif for the following section. Click here for help on the Matlab image command . In order to process images within Matlab, we need to first understand their numerical representation.Download the image file yacht.tif . This is an 8-bit monochrome image.Read it into a matrix using A = imread('yacht.tif'); Type whos to display your variables. Notice under the "Class" column that the $A$ matrix elements are of type uint8 (unsigned integer, 8 bits). This means that Matlab is using a single byte to represent each pixel.Matlab cannot perform numerical computation on numbers of type uint8 , so we usually need to convert the matrix to a floating point representation.Create a double precision representation of the image using B = double(A); . Again, type whos and notice the difference in the number of bytes between $A$ and $B$ . In future sections, we will be performing computations on our images,so we need to remember to convert them to type double before processing them. Display yacht.tif using the following sequence of commands: #### Questions & Answers what does nano mean? nano basically means 10^(-9). 
nanometer is a unit to measure length. Bharti do you think it's worthwhile in the long term to study the effects and possibilities of nanotechnology on viral treatment? absolutely yes Daniel how to know photocatalytic properties of tio2 nanoparticles...what to do now it is a goid question and i want to know the answer as well Maciej characteristics of micro business Abigail for teaching engĺish at school how nano technology help us Anassong Do somebody tell me a best nano engineering book for beginners? what is fullerene does it is used to make bukky balls are you nano engineer ? s. fullerene is a bucky ball aka Carbon 60 molecule. It was name by the architect Fuller. He design the geodesic dome. it resembles a soccer ball. Tarell what is the actual application of fullerenes nowadays? Damian That is a great question Damian. best way to answer that question is to Google it. there are hundreds of applications for buck minister fullerenes, from medical to aerospace. you can also find plenty of research papers that will give you great detail on the potential applications of fullerenes. Tarell what is the Synthesis, properties,and applications of carbon nano chemistry Mostly, they use nano carbon for electronics and for materials to be strengthened. Virgil is Bucky paper clear? CYNTHIA so some one know about replacing silicon atom with phosphorous in semiconductors device? Yeah, it is a pain to say the least. You basically have to heat the substarte up to around 1000 degrees celcius then pass phosphene gas over top of it, which is explosive and toxic by the way, under very low pressure. Harper Do you know which machine is used to that process? s. how to fabricate graphene ink ? for screen printed electrodes ? SUYASH What is lattice structure? of graphene you mean? Ebrahim or in general Ebrahim in general s. Graphene has a hexagonal structure tahir On having this app for quite a bit time, Haven't realised there's a chat room in it. Cied what is biological synthesis of nanoparticles what's the easiest and fastest way to the synthesize AgNP? China Cied types of nano material I start with an easy one. carbon nanotubes woven into a long filament like a string Porter many many of nanotubes Porter what is the k.e before it land Yasmin what is the function of carbon nanotubes? Cesar I'm interested in nanotube Uday what is nanomaterials​ and their applications of sensors. what is nano technology what is system testing? preparation of nanomaterial Got questions? Join the online conversation and get instant answers!
web
auto_math_text
# ATLAS / Population Genetic Tools: inbreeding coefficient F ## Overview VCF files with GL and PL work. Run several burnins to optimize the widths of the proposal functions. ## Input • VCF file, created by e.g. ATLAS task major/minor • txt file (optional): e.g. samplesPopulations.txt This file is a user-created .txt file containing the samples to be used and their population affiliation. Different allele counts will be estimated for different populations Example: sample1 1 sample2 1 sample5 2 sample8 2 ## Output • A text file with suffix _inbreedingMCMC.txt: This file contains the samples from the posterior distribution of the parameters F, gamma, pi, and a given number of allele frequencies. • A text file with suffix _inbreedingMCMC_posteriors.txt.gz: This file contains information about the posteriors of the parameter values. One of these files is written to disk every 1000 iterations. ## Usage Example ./atlas task=inbreeding vcf=example.vcf.gz ## Specific Arguments • samples: specify samples to be used and their population affiliation • limitLines: amount of lines to be read from VCF file • minDepth: only store sites with minimum depth • minSamplesWithData: only store sites with minimum number of samples. Default = 1 • minMAF: only store sites where initial estimate of allele frequency is larger or equal to minMAF. Default = 0.0 • minVariantQuality: only store sites with minimum variant quality • reportFreq: after how many lines the reading progress is printed to the terminal. Default = 10000. • epsF: epsilon for EM algorithm to estimate allele frequencies. Default = 0.0001 MCMC: • writeBurninToFile: write values samples during burnins to the results file. • trueAlleleFreq: provide file with true allele frequencies (for debugging) • thinning: only print every $$n^{th}$$ iteration to the results file. Default = 1 (all) • numBurnins: number of burn-in rounds should that should be run. Default = 1 • burninLength: number of iterations in each burn-in round. Default = 1000 • numIter: number of iterations in the MCMC run • probMovingToModelNoF: probability of proposing move to model $$M_0$$ Updated
web
auto_math_text
# Enhanced multi user MIMO scheduling Imported: 17 Feb '17 | Published: 23 Sep '14 USPTO - Utility Patents ## Abstract A spatial multiplexing scheduler in, for example, an eNB or other base station, determines rank n precoders for UEs. Each UE reports the preferred precoder from this set of rank n precoders. The preferred precoder results in imbalance in performance over m layers compared to the rest of (n−m) layers. The UEs also report channel quality to the eNB, from which the eNB determines which layer(s) is better for the UE. For example, when n=2 and m=1, the eNB may then select two UEs such that, for the same precoder used by the UEs, the first UE has much higher layer 1 performance than layer 2, and the second UE has much higher layer 2 performance than layer 1. These two UEs may then share the same frequency-time domain resources, with the first UE information sent/received on layer 1, while the second UE information is sent/received on layer 2. ## Description ### CROSS REFERENCE TO RELATED APPLICATIONS This application claims priority to U.S. provisional application No. 61/672,391, filed Jul. 17, 2012, which is incorporated herein by reference in its entirety. ### 1. TECHNICAL FIELD This disclosure relates to wireless communication. More particularly, this disclosure relates to Multiple-Input-Multiple-Output (“MIMO”) communication techniques. ### 2. BACKGROUND Continual development and rapid improvement in modern technology has resulted in the widespread availability and use of communication devices of all types, including wireless devices. Consumers and businesses continue to drive strong demand for devices with additional and enhanced capabilities. Consequently, communication device and component manufacturers are continually developing additional communication features for communication devices. ### DETAILED DESCRIPTION The discussion below makes reference to user equipment (UE), including various types of communication devices. UE may take many different forms and have many different functions. As one example, UE may be a cellular phone capable of making and receiving wireless phone calls. The UE may also be a smartphone that, in addition to making and receiving phone calls, runs general purpose applications. A UE may be virtually any device, including as additional examples wireless devices, routers or other network, a driver assistance module in a vehicle, an emergency transponder, a pager, a satellite television receiver, a networked stereo receiver, a computer system, music player, or virtually any other device. While different reference numbers for various UEs may be used in the drawings to focus attention on certain drawings, the different reference numbers do not imply that the UEs must be different UEs. Instead, any UE may implement the processing noted below. The discussion below addresses how, using certain precoder information from different UEs, a base station may make a decision about which of the UEs can be combined and allocated on same frequency-time domain resources, and how to do so. The decision made by the base station may be directed at optimizing the use of communication resources. For instance, the base station may attempt to increase overall system throughput. In that regard, the base station may implement the techniques described below that find UEs to simultaneously share communication resources, as compared to prior scheduling techniques that do not recognize the opportunity for resource sharing. 
The base station may implement, for example, control logic such as a resource scheduler in hardware, software, or both, for making the decisions. The base station may be an evolved Node B (eNB) or another type of network controller. In other implementations, the decisions may be made instead by a Radio Network Controller (RNC) that is in communication with and that controls the eNB. Thus, as examples, the RNC may perform all or a portion of the processing noted below, such as receiving the preferred precoder indicators, searching for opposing layer imbalance, and deciding which UEs should share time and frequency resources, according to the techniques noted below. The RNC may then communicate the decisions to the eNB, which implements them with the UEs. Multi-user Multiple Input Multiple Output (MIMO) deployments allow multiple users to share the same frequency-time domain resources, thereby achieving higher system level throughput. In, for example, in a Long Term Evolution/4G (LTE) system, an enhanced transmission mode (e.g., transmission mode 5) exists in which different UEs may recommend their preferred precoder to the their respective eNBs. In some implementations, the there is a fixed mapping between code words that the UE and the eNB will send, and the layers upon which the code words can be sent. The eNB transmitter logic supports spatial multiplexing by applying a precoding matrix W to the signal before transmission. The UEs specify their preferred precoders to the eNB. The UEs may accomplish this by sending Precoder Matrix Indicators (PMIs) over a control channel to the eNB. In doing so, the UE may estimate the radio channel and select a particular preferred precoding matrix, e.g., one that provides maximum spectral efficiency. As an aid in understanding the layer imbalance scheduling techniques described in more detail below, first consider the following system model: 1) Let Nt be the number of transmit antennas that the eNB supports. 2) In the LTE transmission mode 5, only rank 1 spatial multiplexing may be available. As a result of 1) and 2), the dimension of the precoder that the eNB may employ is given by Nt×1. 3) Consider a scenario in which two user equipments UE1 and UE2 are combined and allocated to the same time-frequency resources. Let the respective precoders for UE1 and UE2 be W1 and W2 respectively. 4) Let the information symbols to be transmitted to UE1 and UE2 be represented by S1 and S2 respectively. 5) The transmitted signal from eNB would be: (W1×S1+W2×S2). 6) Let H be the channel as seen at UE1. In such case, the received vector at the UE1 receive antennas would be: Y=H(W1×S1+W2×S2)+N. 7) In the above, N is the sum of additive Gaussian noise and interference from other eNBs. If the number of receive antennas at UE1 is Nr, then the dimension of H is Nr×Nt and the dimension of Y is Nr×1. Effective Signal to Interference Plus Noise Ratio (SINR): In the above system model, the effective SINR as seen by UE1 may be given as: $SINR user ⁢ ⁢ 1 = [ ( H × W 1 ) H × ( H × W 1 ) ] [ ( H × W 2 ) H × ( H × W 2 ) ] + N H × N$ And similarly if G is the channel that UE2 experiences, then the effective SINR as seen by UE2 is: $SINR user ⁢ ⁢ 2 = [ ( G × W 2 ) H × ( G × W 2 ) ] [ ( G × W 1 ) H × ( G × W 1 ) ] + N H × N$ From the effective SINR calculations, the eNB may select UEs to share time and frequency resources such that in their respective SINR expressions, the signal powers (numerators) are maximized while the interferences (denominators) are minimized. 
This happens because the UEs make their own decisions about which precoders to use, and typically select a precoder that maximizes their own signal power. The UE typically cannot make the precoder decision based on the SINR, because, as noted below, the UE does not know the precoders in use by other UEs. In one implementation, the UEs may report to the eNB the UE's preferred precoders. The reports may be sent to the eNB by any suitable messages, such as through wireless control or data channels, such as the Physical Uplink Control Channel (PUCCH). The preferred precoders may be reported in the form of a Precoder Matrix Index (PMI) selection. In more detail, a given UE generally does not know which of the UEs are being shared on a communication resource (e.g., shared on the same time and frequency resource), and generally does not know the respective preferred precoders of the other UEs. As a result, the UE's PMI selection may often be based on maximizing its own signal power. Thus, the eNB may estimate or even guess which of the UEs can be combined such that overall system level throughput is maximized. Note also that another problem arises since a given UE does not know the precoders in use by the other UEs. In particular, because of this lack of knowledge, the given UE cannot employ advanced receivers or reception techniques, such as successive interference cancellation or rank 2 detection. To improve upon the situation described above, in a spatial multiplexing deployment (e.g., a rank 2 or higher rank), a precoder for each UE may be found which imbalances the performance on each of its two (or more) layers. Moreover, where the eNB antennas are spaced closely, at less than a predetermined spacing threshold, (e.g. Femto eNBs, Home eNBs, Relay transceivers, or other close spacing environments), the transmit correlation will be high, and in such cases imbalances between the layers are much more probable. Accordingly, in one implementation, the eNB and UEs may implement the following logic in hardware, software, or both: 1) Instead of employing rank 1 precoders as their PMI selection, the eNBs may agree on a set of rank 2 precoders. Note that the size of each precoder may be <Nt×2>. 2) The PMI feedback mechanism may be implemented in the UE to report a rank 2 precoder as its preferred precoder, wherein that preferred precoder creates (e.g., maximizes) imbalance in performance between the two layers. For example, layer 1 may have much higher throughput than layer 2 for a given UE. More generally, the UE may determine that there is layer imbalance between multiple layers when a particular layer has performance that exceeds another layer by an imbalance threshold. 3) Additionally, the UEs may measure or otherwise obtain channel metrics on the multiple layers. The UEs may also report the channel metrics on a control channel to the eNB. The eNB may analyze the channel metrics to determine which of the multiple layers has better performance for each UE in terms of bandwidth, signal strength, capacity, SINR, throughput, noise, energy consumption, power needed to transmit or needed to receive, average number of packet retry requests, or any other performance criteria or combination of criteria. As one specific example, the UEs may report Channel Quality Information (CQI) as a channel metric to the eNB. 4) Consider the case of rank 2 communications, where there are two layers. 
The eNB may select UEs such that, for the same precoder employed by both UEs, the UE1 layer 1 performance is higher (e.g., by a predetermined performance threshold) than its layer 2 performance (e.g., the layer 1 performance may exceed the layer 2 performance by a preconfigured imbalance threshold), and such that the UE2 layer 2 capacity is higher (e.g., by a predetermined performance threshold) than its layer 1 performance. 5) Having found such UEs, the eNB may select the two UEs to share the same frequency and time domain resources. In doing so, the eNB may send the information for the UE1 on layer 1, and send the information for the UE2 on layer 2. In other words, although rank 2 precoders are specified, the information sent to a particular UE is sent on a specific layer, and not spread between multiple layers. The specific layer is the layer that has the best performance among multiple layers for the UE, and for which there is significant performance imbalance between the multiple layers in favor of the specific layer. 6) Additionally, since both the UEs know the precoders that they have specified, and that the eNB is matching UEs that have specified the same precoders, the UEs can do full Maximum Likelihood (ML) decoding on both layers (even though a given UE may only be interested on information in one of the layers). Hence, the performance may be much better. Regarding downlink (DL) scheduling, in support of the techniques described above, the downlink control information sent by the eNB to the UEs may be extended to include layer selections. The layer selections may be an additional information bit within an existing control message or within a dedicated layer selection message, as examples. The layer selection may specify which of the multiple layers that the UE has been scheduled to communicate over. For example, when an information bit is used as the layer selection, and the bit is set, it may indicate to the UE that the UE should receive on layer 1. When the information bit is cleared, it may indicate that the UE should receive on layer 2. In some implementations, the techniques described above may be implemented by UEs and eNBs that adhere to extensions as described above. The extensions may or may not be part of an agreed upon standard, and the eNB and UEs may implement the techniques regardless of whether or not the techniques are incorporated into a standard. As examples, the LTE or 802.16m standards may be extended to support the techniques described above. As specific examples, the standards may be extended in the following manners to support standardized adoption and implementation of the techniques noted above: 1) Introduce an additional transmission mode for the UEs in which they communicate preferred precoders (e.g., using a PMI) that cause layer imbalance, and receive feedback from the eNB (e.g., layer selections), and using Physical Downlink Shared Channel (PDSCH) scheduling for transmitting downlink data to the UEs. 2) Another alternative is to extend the LTE transmission mode 5 to include support for the multiple rank (e.g., Rank 2) Multiuser MIMO scheduling and PMI feedback mechanism described above. The extensions noted above and the techniques described above may be employed in any other standards which support, e.g., closed loop MIMO. This includes all present 4G standards, for example. FIG. 1 shows a communication node, such as an enhanced Node B (eNB) 100, that communicates over wireless channels 102 with multiple User Equipments (UEs). FIG. 
1 shows the UE1 104 and the UE2 106, but there may be any number of UEs. The eNB 100 includes a wireless communication radio 120 that includes one or more transceivers, such as the transceiver A 122 and transceiver B 124. There may be any number of transceivers, as indicated by transceiver ‘n’ 134. A transceiver (e.g., transceiver A 122 or transceiver B 124) may include a Digital-To-Analog (“D/A”) converter, an Analog-To-Digital (“A/D”) converter, an amplifier, a modulator, a waveform shaper, preamplifier, power amplifier, and any additional hardware that drives an antenna (e.g., antenna A 126 or antenna B 128). In the example of FIG. 1, the transceiver A 122 includes an antenna A 126, transceiver B includes an antenna B 128, and transceiver ‘n’ 124 includes an antenna ‘n’ 136. Each antenna may transmit an information stream and receive signals encoding information streams. Each stream is generally referred to as a ‘layer’. Thus, for example layer 1 may be considered the information stream transmitted and received by the antenna A 126. In a multiple user MIMO system, there may be 2, 4, or more antennas over which the eNB transmits and receives. Each antenna may be associated with a particular layer. The eNB 100 sends code words representing user data to the UEs. Precoding modifies the layer signals before transmission. The eNB may perform precoding for diversity, beam steering, spatial multiplexing, or other reasons. The eNB 100 may implement multiple input/multiple output (“MIMO”) communication techniques to communicate using multiple transceivers across the available communication resources to the UEs. The communication resources include time and frequency allocations for the UEs. The same communication resources (e.g., the same time and frequency slots) may be shared by multiple different UEs as described in more detail below. The communication resources may refer to communication channels used by communication standards such as 802.11a, 802.11b, 802.11g, 802.11n, or 802.11ac, Worldwide Interoperability for Microwave Access (“WiMAX”), Bluetooth, HSPA+, 4G, 3GPP LTE, and others. The eNB 100 includes a processor 138 and a memory 140. The eNB 100 receives indicators (e.g., the indicators 130, 132) of preferred precoders from the UEs. The indicators may be in the form of PMIs, as one example. The eNB 100 may be controlled by an RNC 144. The RNC 144 may control other eNBs 146 as well. The memory 140 may store control logic 142 that implements a system model, such as the system model 200 shown in FIG. 2. With regard to FIG. 2, the system model 200 shows that the eNB 100 will transmit to the UEs using a common preferred precoder, W 202. The common preferred precoder is, for example, a rank 2 precoder, and thus has dimension <Nt×2>, as opposed to a rank 1 precoder of dimension <Nt×1>. Each UE reports its preferred rank 2 precoder which purposefully increases (e.g., maximizes) performance imbalance between the available layers. Specifically, in a rank 2 MIMO system, each UE reports its preferred rank 2 precoder which, if used, would cause significant imbalance in performance between layer 1 and layer 2. In that regard, layer 1 may have a significant performance advantage over layer 2, or layer 2 may have a significant performance advantage over layer 1. In FIG. 2, the eNB 100 communicates over channel H with UE1 206, and communicates over channel G 208 with UE2 210. 
Although rank 2 precoders have been indicated, the eNB 100 may communicate information for UE1 206 specifically over an individual layer (e.g., layer 1), and may communicate information for UE2 210 specifically over a different individual layer (e.g., layer 2). The processor 138 may, by executing the control logic 142, determine the preferred precoders indicated by the UEs. Recall that the preferred precoders result in a significant imbalance in capacity between the layers that the precoders apply to. The UEs also report channel metrics to the eNB 100, from which the control logic 142 determines which layer has better performance for the UE. The control logic 142 may then select multiple UEs (e.g., two UEs, in a rank 2 MIMO system) such that, for the same precoder W 202 used by eNB 100 to send information to the UEs, opposing performance imbalance exists between the layers for the UEs. For example, the control logic 142 may determine that UE1 206 has much higher layer 1 capacity than layer 2, and that UE2 210 has much higher layer 2 capacity than layer 1. When such a match is found, the control logic 142 may select the UE1 206 and the UE2 210 to simultaneously communicate on the opposing layers using the preferred precoder that both UEs specified. For example, the eNB 100 may communicate information for the UE1 206 on layer 1, and communicate information for UE2 on layer 2. Accordingly, the UEs share the same frequency-time domain resources, with the first UE information sent on layer 1 by the eNB 100, while the second UE information is sent on layer 2 by the eNB 100. In other words, even though rank 2 precoders were indicated, the eNB 100 does not spread the information over both layers for a UE, but instead uses a particular layer to send the information for a given UE. The principles discussed above may be extended to higher rank MIMO, such as rank four MIMO. In that case, four UEs may specify preferred rank 4 precoders, and receive information on one of four different specific layers, where the layer gives significant performance advantage over the other three layers in opposition to the other UEs. For example, UE1 may have best performance and receive on layer 3, UE2 may have best performance and receive on layer 1, UE3 may have best performance and receive on layer 4, and UE4 may have best performance and receive on layer 1. FIG. 3 shows an example of User Equipment (UE) 300. The UE 300 may support MIMO communications through a communication interface 302. The communication interface 302 may include multiple radios, such as a radio 304 and a radio 306. Each radio of the UE 200 may be operable to communicate according to a communication type or standard. For example, the radios may be 2G, 3G, 4G/LTE radios, WiFi radios, Bluetooth radios, or any other type of wireless communication radios. Each radio of the UE 200 may include multiple PHY cores. The radio 224 includes a PHY Core 308, a PHY Core 310, a PHY Core 312, and a PHY Core 314. A PHY Core may include a transmitter, a receiver, or both (e.g., a transceiver). The transceivers transmit and receive over individual antennas 316. Thus, the UE 200 may support MIMO communications (e.g., rank 2 or rank 4 MIMO in LTE mode 5) over the multiple antennas 316. The UE 200 also includes system logic 318, which is communicatively coupled to the communication interface 302. 
The system logic 318 may be implemented in hardware, software, or both, such as with a processor 320 (or multiple processors) and a memory 322 communicatively coupled to the processor 320. The memory 322 may store system instructions 324, that when executed by the processor 320, cause the UE 200 to communicate preferred precoder information as described above, for example with regard to the system model 200. The system instructions 324 may also report to the eNB 100 the channel metrics 330 for the layers to which the preferred precoders apply. The preferred precoder may be selected from among a set of available precoders 326, and the available precoders may be selected from a codebook of such precoders. The codebook may be specified by a particular communication standard, for example. More generally, the codebook may be established by storing any desired set of precoders in the memory 322 and in the eNB 100. As further described above, the UE 200 may receive layer selections 328 from the eNB 100. Accordingly, the UE 200 may inform the communication interface 302 as to the layer in which it will receive its data stream from the eNB 100. FIG. 4 shows logic 400 for scheduling according to layer capacity imbalance. The logic 400 may be implemented in the eNB 100 as, for example, part of the control logic 142. The logic 400 receives, from UEs, indicators of preferred precoders (402). The preferred precoders may be any of the available rank 2 or rank 4 precoders that would cause performance imbalance between the layers to which the precoders apply. In one implementation, the UEs report, as their preferred precoder, the precoder that maximizes performance imbalance among the layers. The logic 400 also receives, from UEs, channel metrics for the layers corresponding to the precoders that were indicated as preferred precoders (404). The channel metrics provide insight into the amount of imbalance between the layers. FIG. 4 illustrates, as just one example, a set of five UEs 408 that have reported their preferred precoders and channel metrics. In a rank two scenario, the eNB 100 may then search to find UEs that will share communication resources, according to specific search criteria. For example, the search criteria may be opposing layer imbalance, e.g., the eNB 100 selects UEs such that, for the same precoder used by the UEs, the first UE has significantly higher layer 1 capacity than layer 2 capacity, and the second UE has significantly higher layer 2 capacity than layer 1 (406). In determining whether a layer has significantly higher capacity, the eNB 100 may determine whether the capacity imbalance between layers exceeds an imbalance threshold. When the search finds two UEs that meet the search criteria, such as the UEs 410 and 412, the eNB 100 may select the UEs to share the same frequency-time domain resources (408). In that regard, the eNB 100 will use the same preferred precoder for the selected UEs and the same time and frequency resources. However, the eNB 100 transmits information for the first UE on layer 1 (410), its higher performance layers, and transmits information for the second UE on layer 2 (412), its higher performance layer. Thus, the eNB 100 does not spread the information for a particular UE across multiple layers, even though rank 2 precoders are specified, but instead sends the information specific for a particular UE on the specific higher performing layer. As shown in FIG. 
5, the techniques discussed above and shown in the figures may be extended to additional ranks, e.g., rank 4. In FIG. 5, the UEs 502 report preferred rank 4 precoders, and channel metrics for the layers to which the precoder applies. The preferred precoder may be the precoder for which one layer has significantly higher capacity than the other three layers. The eNB 100 may search among the UEs 502 to find four UEs 512 that have the same preferred precoder, and opposing layer imbalance. In the example of FIG. 5, the UEs 504, 506, 508, and 512 have opposing layer imbalance in that each of the UEs has a different layer for which performance significantly exceeds the other layers: for UE 504 it is layer 1, for UE 506 it is layer 2, for UE 508 it is layer 3, and for UE 510 it is layer 4. The eNB 100 selects UEs with opposing layer imbalance to share the same frequency-time domain resources. Accordingly, in this example, layer 1 will serve the UE 504, layer 2 will serve the UE 506, layer 3 will serve the third UE 508, and layer 4 will serve the UE 510. The eNB 100 sends information specific to a particular UE on the specific layer (e.g., information for UE 506 on layer 2). More generally, the eNB 100 may send information for a particular UE on ‘L’ layers out of an available ‘M’ layers. The eNB 100 may do so, for example, when the ‘L’ layers exhibit strong performance advantages over the remaining ‘M−L’ layers, which may then be assigned to other UEs. In other implementations, the eNB 100 may take other actions when the eNB 100 does not find UEs to select to share the same communication resources. For instance, the eNB 100 may instead inform the specific non-matched UEs (e.g., over the control channel) that rank 1 transmissions will be sent. The eNB 100 may then communicate information to those UEs using rank 1 transmissions. FIG. 6 shows another example of logic 600 that the eNB may implement to provide an additional search strategy for selecting UEs to share communication resources. The eNB 100 may execute the logic 600 whenever desired, such as when the logic 400 does not find UEs that have specified the same precoder with opposing layer imbalance, as described above. In addition, the eNB 100 may execute the logic 600 over the remaining, unmatched UEs, when some UEs have been matched by layer imbalance as described above, and there yet remain some unmatched UEs. Note that, as before, the logic 600 receives indicators of preferred precoders from UEs (602), and also channel metrics for the layers corresponding to the precoders that were indicated (604). The eNB 100 finds a first UE that has reported precoder W1 (606), and finds a second UE that has reported precoder W2. The eNB 100 determines whether, e.g., the highest performing column of a precoder (e.g., W1) matches the column that is not the highest performing column of the other precoder (e.g., W2). When this condition exists, the eNB 100 may select the first UE and the second UE to share communication resources. In more detail, the eNB 100 searches for two UEs that have higher performing layers that are different (610). Assume, for example, that the first UE best performing layer is layer 1, and that the second UE best performing layer is layer 2, even if they have specified different precoders. If such UEs are found (612), then the eNB may assign the two UEs to share communication resources, when at least one column of W1 matches a column of W2. 
For example, there is a match when column 1 of W1 matches column 1 of W2, and a match when column 2 of W2 matches column 2 of W1 (614). In response to the match (616), the eNB 100 selects the two UEs to share communication resources (618). The first UE will receive its information on layer 1, and the second UE will receive its information on layer 2. The eNB 100 will use precoder W1 when column 1 of W1 matches column 1 of W2. In this scenario, the first UE has the advantage. When column 2 of W1 matches column 2 of W2, then the eNB 100 uses the precoder W2. In this scenario, the second UE has the advantage. Furthermore, independently of the analysis above in (610)-(614), or in a different order, the eNB 100 may perform another check, also shown in FIG. 6. In particular, the eNB 100 may search for UEs that have highest performing layers that are the same (620), with different indicated precoders. For example, assume that the first UE and the second UE both have layer 1 as the preferred or highest performing layer. When such UEs are found (622), then the eNB 100 may search (624) for matching precoder columns in the different precoders specified by the UEs, such that column 1 of W1 matches column 2 of W2, or column 1 of W2 matches column 2 of W1 (624). In response to the match (626), the eNB 100 selects the two UEs to share communication resources (618). The first UE will receive its information on layer 1, and the second UE will receive its information on layer 2. The eNB 100 will use the W1 precoder when column 1 of W1 matches column 2 of W2. Effectively, the eNB has thereby scheduled the first UE on layer 1 and has scheduled the second UE on layer 2. The first UE has the advantage. When column 2 of W1 matches column 1 of W2, then the eNB uses the W2 precoder. This results in scheduling the second UE on layer 1, and scheduling the first UE on layer 2. The second UE has the advantage. Further, when the eNB 100 cannot match UEs as described above to share communication resources, the eNB 100 may perform alternative scheduling (626). In one implementation, the eNB 100 may determine not to share communication resources between UEs. For instance, the eNB 100 may schedule only rank 'L' multiplexing (e.g., rank 1) for a particular UE, so that communications are sent to that specific UE only, on those 'L' layers. In other words, no other UEs share the time and frequency resources with that specific UE. In that case, when, for example, the UE's first layer is better performing than the second layer, then the eNB 100 may select the first column of the UE's preferred precoder. The eNB 100 uses the first column as a rank 1 precoder for communicating with the UE, e.g., on the first layer. Similarly, when the UE's second layer is higher performing than the first layer, then the eNB 100 may select the second column of the preferred precoder. The eNB uses the selected column as a rank 1 precoder for communicating with the UE over, e.g., the second layer. As a further scheduling example, the eNB 100 may determine to schedule pure rank 2 spatial multiplexing for a particular UE. Then, the eNB 100 may allocate the preferred rank 2 precoder and schedule both layers to the same UE. For all of the techniques described above, the eNB 100 may introduce additional bits or fields into control frames communicated to the UEs on a downlink control channel.
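The column-matching checks of logic 600 described above can be sketched as follows. The precoders are represented as NumPy matrices; the helper names and the numerical tolerance are illustrative assumptions, and the indices are zero-based (column 0 corresponds to "column 1" in the text).

```python
import numpy as np

def columns_match(c1, c2, tol=1e-9):
    """True when two precoder columns are (numerically) identical."""
    return np.allclose(c1, c2, atol=tol)

def match_different_best_layers(W1, W2):
    """Check (610)-(616) for the two-layer example: UE1 strong on layer 0, UE2 on layer 1.

    Returns the shared precoder choice and layer assignment, or None when no column matches.
    """
    if columns_match(W1[:, 0], W2[:, 0]):
        return {"use": "W1", "ue1_layer": 0, "ue2_layer": 1}   # UE1 has the advantage
    if columns_match(W1[:, 1], W2[:, 1]):
        return {"use": "W2", "ue1_layer": 0, "ue2_layer": 1}   # UE2 has the advantage
    return None

def match_same_best_layer(W1, W2):
    """Check (620)-(626): both UEs strong on layer 0, crossed column match."""
    if columns_match(W1[:, 0], W2[:, 1]):
        return {"use": "W1", "ue1_layer": 0, "ue2_layer": 1}   # UE1 has the advantage
    if columns_match(W2[:, 0], W1[:, 1]):
        return {"use": "W2", "ue1_layer": 1, "ue2_layer": 0}   # UE2 has the advantage
    return None

# Example: two different rank 2 precoders that share their first column.
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, -1.0]]) / np.sqrt(2)
W2 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)
print(match_different_best_layers(W1, W2))   # -> use W1, UE1 on layer 0, UE2 on layer 1
```

In either branch, the UE whose preferred precoder is kept retains its reported column for its strong layer, which is why that UE is said to have the advantage.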
As noted, these additional bits or fields may communicate the particular configuration information for the scheduling chosen for the UE, including layer selection, precoder selection, communication rank, and other configuration information. As one example of additional configuration information, the eNB 100 may communicate to a UE the transmission parameters selected for any different UE. For instance, the eNB 100 may send to UE1 the modulation employed by UE2, where UE1 and UE2 share communication resources. With this information, UE1 may perform enhanced decoding. For example, the UE1 may perform maximum likelihood (ML) decoding for both UEs. That is, instead of only decoding the information for UE1, the UE1 may perform a joint decoding of the information for UE1 and UE2. The UE1 may then filter out information that is not useful for the UE1, such as all or part of the information intended for UE2. Some of the techniques above discuss rank 1 scheduling through rank 2 precoder selection. Note, however, that the scheduling may be generalized to any rank 'm' scheduling through rank 'n' precoder selection, where 'm' ≤ 'n'. As one specific example with m=1 and n=4, the techniques may implement rank 1 scheduling through rank 4 precoder selection. One example was given above in FIG. 5. FIG. 7 shows logic 700 that a UE may implement (e.g., as part of the system instructions 324) for scheduling according to layer performance imbalance. The logic 700 measures channel characteristics and obtains channel metrics that apply, for example, to the precoder layers (702). In that respect, the logic 700 may determine bandwidth, signal strength, capacity, SINR, throughput, noise, energy consumption for sending or receiving on a particular layer, power needed to transmit or to receive on a particular layer, average number of packet retry requests, or any other performance criterion or combination of criteria. As another specific example, the logic 700 may determine Channel Quality Information (CQI) as a channel metric. The logic 700 communicates its preferred precoder (e.g., a rank 2 precoder specified by a PMI), and the channel metrics, to the eNB (704). The eNB 100 engages in an analysis of the indicated precoders, and the channel metrics, as described above. When the eNB 100 has determined scheduling for the UEs, the eNB 100 communicates configuration information for the scheduling to the UEs. The configuration information may be placed in bit fields in frames sent in a downlink control channel. Accordingly, the logic 700 receives the configuration information (706). With the configuration information, the UE configures its receiver to receive and decode information in the layer(s) specified by the configuration information (708). For example, although the UE specified a rank 2 precoder, the configuration information may direct the UE to receive its information in a particular layer among the two layers. Having configured its receiver, the UE may then receive signals from the eNB 100, and decode the specified layer(s) to obtain the information for the UE (710). Regarding terminology, the following exemplary description is provided. Code words generally refer to information that higher layers send to the transmit chain for communicating information to the UEs. There is typically a specific flow of one or more different code words destined for each UE. In the transmit chain, a layer mapping module accepts the code words for one or for multiple UEs.
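Before returning to the transmit-chain terminology, the UE-side measurement and reporting of logic 700 described above can be sketched as below. The SINR-style capacity metric, the toy codebook, and the "maximize imbalance" selection rule are assumptions chosen to mirror the description, not a standard-defined procedure.

```python
import numpy as np

def per_layer_capacity(H, W, noise_power=1.0):
    """Rough per-layer capacity (bit/s/Hz) for channel H and precoder W.

    Deliberately simple: the power of each effective column H @ w_k is treated as
    the useful signal for layer k, and inter-layer interference is ignored.
    """
    HW = H @ W                                  # effective channel, one column per layer
    sinr = np.sum(np.abs(HW) ** 2, axis=0) / noise_power
    return np.log2(1.0 + sinr)

def choose_preferred_precoder(H, codebook):
    """Pick the codebook entry whose layers are most imbalanced (logic 700, steps 702/704)."""
    best = None
    for pmi, W in enumerate(codebook):
        caps = per_layer_capacity(H, W)
        imbalance = float(caps.max() - caps.min())
        if best is None or imbalance > best["imbalance"]:
            best = {"pmi": pmi, "channel_metrics": caps, "imbalance": imbalance}
    return best            # the UE would report best["pmi"] and best["channel_metrics"]

# Example with a random 2x4 channel and a toy two-entry rank 2 codebook.
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 4))                 # 2 receive antennas, 4 transmit ports
codebook = [np.eye(4)[:, :2], np.eye(4)[:, 2:]]
print(choose_preferred_precoder(H, codebook))
```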
Returning to the transmit chain: the layer mapping module may, but need not, split a particular code word across multiple layers. The layer mapping module outputs symbols for transmission assigned to the various layers. Precoding takes the symbols as inputs. When the number of layers input to the precoding logic is 'm' and the number of transmit antenna ports is 'n', then typically the precoding matrix is of size ⟨n×m⟩. The precoding matrix (of size ⟨n×m⟩) is multiplied against the symbol vector (of dimension ⟨m×1⟩) to obtain an output vector of symbols (of dimension ⟨n×1⟩) for transmission through the 'n' antenna ports. The number of layers that form the input to the precoding logic is referred to as the rank. In the example given, the rank is 'm'. Let the input symbols from the different layers be represented by: $S = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_m \end{bmatrix}, \quad \text{size: } \langle m \times 1 \rangle$ Let the precoder matrix be represented by: $P = [P_1\ P_2\ \dots\ P_m], \quad \text{size: } \langle n \times m \rangle$ In the precoder matrix each column is denoted by $P_x$, whose size is ⟨n×1⟩; there are m columns. The precoding output is given by: $P \times S = [P_1\ P_2\ \dots\ P_m] \times \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_m \end{bmatrix} = (s_1 \times P_1) + (s_2 \times P_2) + \dots + (s_m \times P_m)$ whose size is ⟨n×1⟩. Each of the 'n' parts of this output goes to a different transmit antenna port. In the above, one can see that each column of the precoding matrix corresponds to a particular layer. As a result, each particular column of the precoding matrix is typically considered associated with a particular layer. If the precoding matrix is not an identity matrix, then the precoding operation spreads the layers for transmission by multiple antennas. The methods, devices, and logic described above may be implemented in many different ways in many different combinations of hardware, software, or both hardware and software. For example, all or parts of the logic may include circuitry in a controller, a microprocessor, or an application specific integrated circuit (ASIC), or may be implemented with discrete logic or components, or a combination of other types of analog or digital circuitry, combined on a single integrated circuit or distributed among multiple integrated circuits. All or part of the logic described above may be implemented as instructions for execution by a processor, controller, or other processing device and may be stored in a tangible or non-transitory machine-readable or computer-readable medium such as flash memory, random access memory (RAM) or read only memory (ROM), erasable programmable read only memory (EPROM) or other machine-readable medium such as a compact disc read only memory (CDROM), or magnetic or optical disk. Thus, a product, such as a computer program product, may include a storage medium and computer readable instructions stored on the medium, which when executed in a UE, an eNB, a computer system, or other device, cause the device to perform operations according to any of the description above. The processing capability of the system may be distributed among multiple system components, such as among multiple processors and memories, optionally including multiple distributed processing systems. Parameters, databases, and other data structures may be separately stored and managed, may be incorporated into a single memory or database, may be logically and physically organized in many different ways, and may be implemented in many ways, including data structures such as linked lists, hash tables, or implicit storage mechanisms.
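To make the precoding relationship described earlier in this section concrete, here is a minimal numerical rendering. The dimensions follow the ⟨n×m⟩ convention used above; the specific values are arbitrary and purely illustrative.

```python
import numpy as np

n, m = 4, 2                                  # n transmit antenna ports, rank m (number of layers)
P = np.arange(n * m).reshape(n, m) / 10.0    # precoder, size <n x m>; column k serves layer k
s = np.array([[1.0], [2.0]])                 # layer symbols, size <m x 1>

y = P @ s                                    # transmitted vector, size <n x 1>, one entry per antenna port

# The same result written as the column expansion given above: s1*P1 + s2*P2.
y_expanded = s[0, 0] * P[:, [0]] + s[1, 0] * P[:, [1]]
assert np.allclose(y, y_expanded)
print(y.ravel())
```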
Programs may be parts (e.g., subroutines) of a single program, separate programs, distributed across several memories and processors, or implemented in many different ways, for example in a library such as a shared library (e.g., a dynamic link library (DLL)). The DLL, for example, may store code that performs any of the system processing described above. While various embodiments of the invention have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the invention. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents.
## Claims
1. A method comprising: receiving, from first user equipment, a first indicator specifying a first preferred precoder for simultaneously communicating on a first layer and on a second layer of a communication resource; receiving, from second user equipment, a second indicator specifying a second preferred precoder different from the first preferred precoder, the second preferred precoder for simultaneously communicating on the first layer and on the second layer of the communication resource; determining that a column of the first preferred precoder matches a column of the second preferred precoder; and responsive to the determination, selecting a shared precoder from among the first and second preferred precoders; and selecting the first user equipment and the second user equipment to simultaneously communicate on the first layer and second layer using the shared precoder.
2. The method of claim 1, further comprising determining that a performance imbalance exists by: determining that when the first preferred precoder is used by both the first user equipment and the second user equipment, the first user equipment has higher performance than the second user equipment.
3. The method of claim 2, further comprising: receiving, from the first user equipment and the second user equipment, channel performance metrics for the first layer and the second layer; and wherein determining higher performance comprises: determining higher channel quality, higher throughput, higher signal to noise plus interference ratio, or any combination thereof.
4. The method of claim 1, further comprising: receiving, from the first user equipment and the second user equipment, channel metrics for the first layer and the second layer; and wherein determining comprises: determining a performance imbalance using the channel metrics.
5. The method of claim 1, wherein receiving the first indicator comprises receiving a preferred matrix indicator that specifies the preferred precoder.
6. The method of claim 1, wherein receiving the first indicator comprises: receiving a first indicator of a multi-user multiple input multiple output (MIMO) communications precoder of at least rank 2.
7. A system comprising: a multiple input multiple output (MIMO) communications interface that supports simultaneous transmission through multiple antennas in a first layer and in a second layer; and system circuitry in communication with the MIMO communications interface, the system circuitry operable to: obtain indicators of preferred precoders from user equipments for simultaneously communicating in the first layer and the second layer; search the indicators for a first user equipment among the user equipments that specified a first preferred precoder; search the indicators for a second user equipment among the user equipments that specified a second preferred precoder different from the first preferred precoder, a column from the first preferred precoder being the same as a column from the second preferred precoder; based on the columns, select a shared precoder from among the first and the second precoders; and select the first user equipment and the second user equipment to simultaneously communicate on the first layer and the second layer using the shared precoder.
8. The system of claim 7, wherein the system circuitry is further operable to: when the searches are successful: determine that the first layer has higher performance than the second layer for the first user equipment; and communicate to the first user equipment a layer selection that specifies the first layer for receiving transmissions.
9. The system of claim 8, wherein the system circuitry is further operable to: when the searches are successful: communicate to the second user equipment a layer selection that specifies the second layer for receiving transmissions.
10. The system of claim 9, wherein the system circuitry is further operable to: transmit simultaneously, using the shared precoder: information for the first user equipment in the first layer; and information for the second user equipment in the second layer.
11. The system of claim 7, wherein the system circuitry is further operable to: when the searches are not successful: further search for a third user equipment and a fourth user equipment among the user equipments that: have the same preferred precoders; and have opposing layer capacity imbalance in the first layer and in the second layer.
12. The system of claim 7, wherein the system circuitry is further operable to: determine that the first user equipment has higher performance when the first preferred precoder is used; and determine that the second user equipment has higher performance when the second precoder is used.
13. The system of claim 12, wherein higher performance comprises higher channel quality, higher throughput, higher signal to noise plus interference ratio, or any combination thereof.
14. The system of claim 7, wherein the different preferred precoders comprise precoders for at least rank 2 MIMO communications.
15. A method comprising: receiving, from user equipments, indicators of preferred precoders for simultaneously communicating on at least a first layer and a second layer; receiving channel metrics for the first layer and the second layer from the user equipments; determining a communication group of at least a first user equipment and a second user equipment to simultaneously communicate over at least the first layer and the second layer by: determining those user equipments that have specified different preferred precoders that have a matching column; and based on a position of the matching column, selecting a shared precoder from among the different precoders for use by the first and second user equipments for communication over the first and second layers.
16. The method of claim 15, further comprising: transmitting information for the first user equipment to the first user equipment in the first layer; and transmitting information for the second user equipment to the second user equipment in the second layer.
17. The method of claim 15, further comprising: communicating to the first user equipment a layer selection indicator that specifies in which layer information for the first user equipment will be transmitted.
18. The method of claim 15, wherein receiving channel metrics comprises: receiving channel quality indicators.
19. The method of claim 15, wherein receiving channel metrics comprises: receiving channel capacity for the first layer and the second layer.
20. The method of claim 15, wherein receiving indicators of preferred precoders comprises: receiving indicators of preferred precoders for multi-user multiple input multiple output (MIMO) communications.
# Number-resolved imaging of $^{88}$Sr atoms in a long working distance optical tweezer ### Submission summary As Contributors: Ryan Hanley · Matthew Hill · Niamh Jackson · Matthew Jones Arxiv Link: https://arxiv.org/abs/1904.03233v5 (pdf) Date accepted: 2020-02-07 Date submitted: 2020-02-03 Submitted by: Hill, Matthew Submitted to: SciPost Physics Discipline: Physics Subject area: Atomic, Molecular and Optical Physics - Experiment Approach: Experimental ### Abstract We demonstrate number-resolved detection of individual strontium atoms in a long working distance low numerical aperture (NA = 0.26) tweezer. Using a camera based on single-photon counting technology, we determine the presence of an atom in the tweezer with a fidelity of 0.989(6) (and loss of 0.13(5)) within a 200 $\mu$s imaging time. Adding continuous narrow-line Sisyphus cooling yields similar fidelity, at the expense of much longer imaging times (30 ms). Under these conditions we determine whether the tweezer contains zero, one or two atoms, with a fidelity $>$0.8 in all cases with the high readout speed of the camera enabling real-time monitoring of the number of trapped atoms. Lastly we show that the fidelity can be further improved by using a pulsed cooling/imaging scheme that reduces the effect of camera dark noise. Published as SciPost Phys. 8, 038 (2020) ### List of changes Value of loss added to abstract ### Submission & Refereeing History Resubmission 1904.03233v5 on 3 February 2020 Resubmission 1904.03233v4 on 14 January 2020 Resubmission 1904.03233v3 on 2 October 2019 Submission 1904.03233v2 on 24 April 2019
## Introduction Distributed generation (DG) is a source for producing electrical power with a capacity of less than 10 MW. It is frequently connected to distribution-side power systems and aids in power supply. The primary energy for these sources is clean and renewable energy, such as wind, solar, and geothermal energy, harnessed by wind turbines, solar cells, gas microturbines, fuel cells, etc.1,2. With the advent of DG, several problems appeared, including the maintenance and protection of resources. The issue relates to how these resources can help manage the grid's fundamental quantities, such as frequency and voltage, and how electricity is transferred between the grid and DGs. The idea of microgrids was established in contemporary power systems to address these issues and to take these resources and local demands into account in an integrated manner. This introduction defines microgrids as compact power grids made up of a number of DG sources and local loads. Microgrids are normally connected to the grid, but in case of an emergency brought on by severe disruptions, they are cut off and can supply the local loads on their own. When connected to the grid, the microgrid's frequency and power are governed by the main grid and only the power of the units needs to be controlled; in islanded operation, however, the microgrid's frequency and voltage fluctuate and need independent control3,4. ### Frequency control for microgrids in the literature Increasing the number of microgrids in power systems has changed the fundamental rules in these systems and caused generation resources to be distributed throughout the system. This increases the complexity and non-linearity of power networks, and as a result conventional controllers no longer respond as well as before. PI-controllers are the most widely used controllers in power systems because they have a simple structure and are cost-effective, and in power systems these controllers are trusted more than any other controller. The problem with these controllers is that the control coefficients are adjusted by technicians, based on their knowledge and experience, for the linear conditions and the operating point of the system, and are then fixed. If the nominal operating conditions change, or the linear conditions of the system change due to disturbances, the values intended for these controllers will no longer be optimal and will not give the same proper response as before. A possible solution, which keeps these conventional and reliable controllers while addressing their problem, is to update and optimize the control coefficients as the system changes5,6,7. Numerous references have reviewed and presented various methods for frequency control of microgrids based on the optimization of controller coefficients with meta-heuristic algorithms. In8,9, controllers based on PI and proportional-integral-derivative (PID) control have been used. In10 the particle swarm optimization (PSO) algorithm and in9 the social spider optimization (SSO) algorithm are used to optimize the PID control parameters in the microgrid. In11, the harmony search (HS) algorithm is used for load–frequency control in the microgrid. In12, a fuzzy controller is used whose coefficients are optimized using the PSO algorithm. In13,14, model predictive control (MPC) is used for load–frequency control of the microgrid.
In15, a fuzzy controller is used to control the frequency of a multi-microgrid. In16, two-level MPC control is presented; in17, multiple MPC control; and in18, an MPC-based method for the coordinated control of wind turbine blades and electric hybrid vehicles to reduce power fluctuations and microgrid frequency deviations. In19 the Ziegler–Nichols-based PID method (ZN-PID), in20 the fractional-order PID method (FOPID), in21 the fuzzy-control-based fractional-order PID (Fuzzy FOPID), and in22 a kriging-based surrogate fractional-order PID method have been used. The methods proposed for the adaptive PI-controller are generally limited to linear processes. In other words, a controller with a linear model operates only in a linear range; however, given the capabilities of ANNs in solving problems with high mathematical complexity and their high power in approximating functions, designers are encouraged to use these networks in the design of self-tuning controllers for nonlinear processes23. In24,25,26,27, a PI-controller with a hybrid ANN form is used as a direct adaptive controller to control the microgrid frequency, in which PSO and fuzzy algorithms are used to optimize the ANN coefficients and speed up their training. ### Paper contribution According to several studies of the state of the art, the typical PI-controller has not performed well because of the nonlinearity of the whole system. This is why this paper presents a self-tuning approach for the PI-controller. A number of methods, including PSO and others, have been suggested to aid the automatic adjustment of the PI values. However, each of these approaches has its weaknesses and limitations, in particular the number of parameters that must be fixed before the algorithm starts. Therefore, this paper merges two algorithms to achieve this automatic tuning and to compensate for the weaknesses of offline neural network learning. The two algorithms compensate for each other's limitations and together present a useful solution for this multi-input system. These two algorithms, based on the genetic algorithm (GA) approach and the neural network (NN) concept, respectively, offer the advantages of training from the current state and of optimal computation, and this can be done online while the system is running, which is a key advantage. This combination of the PI-controller, NN, and GA gives the best performance for the stability of this complex system in terms of the RMS (Δf) and max (|Δf|) of the microgrid frequency changes. ### Paper structure The article is organized as follows. In "General microgrid structure and conventional control strategy" section, the microgrid structure with the conventional PI-controller is presented. "A proposed control strategy based on ANN-GA" section introduces the proposed control strategy based on the combination of the ANN and GA algorithms. In "Simulation results" section, the simulation results of the proposed method are presented and discussed, and finally a conclusion is given in "Conclusions" section. ## General microgrid structure and conventional control strategy Microgrids are sets of mainly renewable generators that are jointly formed to feed loads. By nature, a microgrid is itself a form of wide-ranging distributed generation that contains distributed generation resources.
In a microgrid, we mainly deal with distributed generation sources such as solar cells, wind turbines (microturbines), fuel cells, batteries (energy storage systems), hybrid generators such as CHP, as well as synchronous generators. As is known, the power output of these sources, except for the synchronous and CHP generators, is DC. For this reason, we encounter both AC and DC links in a microgrid; this concept is called a hybrid microgrid. In addition to these sources, control systems are needed to benefit from the output power of each and to control the microgrid. The control strategies in microgrids should serve the basic purpose of these networks: continuing to operate both when connected to and when disconnected from the main grid. For this purpose, two general control structures have been considered, depending on the operating conditions of the microgrid. When the microgrid is connected to the main grid, the stability of the basic parameters of the network, such as voltage and frequency, is provided by the main grid, and the microgrid is considered an auxiliary element in supplying the common loads. This mode of operation is called PQ, which means that the microgrid is controlled to deliver fixed active and reactive power. In the event of a disconnection, the basic parameters of the system are set by the microgrid, which must then supply its loads, or at least its critical loads, on its own. This mode of operation is called VSI. Therefore, to apply these control methods, a series of controllers is needed on each of the microgrid sources. ### Types of microgrid control In a general sense, microgrid resources are divided into two parts: probabilistic and controllable generation. Probabilistic DG sources produce their output based on probabilistic (uncontrollable) inputs. These sources include solar cells, wind turbines, and even some fuel cells, which use sunlight, wind speed, and hydrogen, respectively, to generate power. Given that these inputs have probabilistic properties, the sources themselves are probabilistic DG. Specifically, in controlling probabilistic DG sources, we are faced with the problem of current control: the output current (output power) of the source is controlled as a CCS (controlled current source). But the important issue is the need for microgrid control over controllable resources. Such sources include batteries, CHP units, or synchronous generators. What is required is the presence of at least one of these resources in a microgrid (for microgrid stability and reliability). Controllable microgrid resources play an essential role in controlling microgrids and thus in achieving microgrid stability in terms of voltage/frequency. In general, and by a specific definition, an unstable microgrid is a microgrid in which voltage/frequency collapse occurs. Voltage/frequency collapse in a microgrid means a continuous increase or continuous decrease of the variable in question. On the other hand, in the microgrid we also face the phenomenon of droop, and correspondingly droop control. With droop, we encounter a voltage/frequency steady-state error. The basis of stability in the microgrid lies in the controllable resources: the more accurate, robust, and practical the control process used for these sources, the more it improves the stability of the microgrid. For this purpose, different control levels are used sequentially in a microgrid.
Each of these control levels is responsible for part of the microgrid stability tasks. In a microgrid, these levels are divided into three parts:
• Primary control level: this control is concerned with the initial stabilization of the voltage/frequency. It is responsible for preventing voltage/frequency collapse. One of the most common methods for this purpose is frequency droop control.
• Microgrid secondary control level: the goal of this level is to compensate the frequency/voltage droop, in the sense that events such as islanding, load changes, or even the occurrence of a fault can cause a steady-state error in the underlying microgrid variables. This type of control is used to remove that error.
Figure 1 shows a view of the primary and secondary controls in the microgrid. In this control, the goal is to establish stability for voltage and frequency. In the presence of secondary control, this remains the case when the microgrid is islanded from the upstream network, meaning that the frequency and voltage droop are compensated. In addition to these two levels of control, optional controllers are also used. These controllers are responsible for improving the control process as much as possible and can be operated in parallel at any of the control levels. Such controllers include fuzzy logic, nonlinear, robust, and adaptive controllers, etc. Each of these controllers, according to its characteristics, improves the microgrid in terms of reliability, time-domain characteristics (such as the damping of microgrid fluctuations), robustness to microgrid uncertainty, adaptation to variable parameters, and so on. If a disturbance occurs in the power system and upsets the balance between generation and consumption, the frequency will fluctuate. For example, if the load increases suddenly, the frequency will drop from the nominal value, and if this is not controlled and limited, frequency instability follows. Here, the primary control loop is the first control loop to limit the frequency drop after a disturbance. Based on the frequency-active power characteristic of a generator, this control loop operates according to Eq. (1), and this loop is installed on the generator itself. $$f-{f}_{0}=-{k}_{p}\left(P-{P}_{0}\right)$$ (1) where f0 and P0 are the rated frequency and power of the network, respectively. The frequency behavior in the presence and absence of the primary controller is shown in Fig. 2. The primary control loop limits the frequency drop but is unable to return the frequency to the nominal value; hence the secondary control loop is used. In this control loop, conventional PI-controllers are often used to return the frequency to its initial value. These controllers are usually tuned using classical methods and trial and error. The problems of these methods were mentioned in the introduction, and for these reasons, in this article, while still using these controllers, we try to solve their problems by using an intelligent method based on ANN. ### Structure of proportional-integral-derivative (PID) controllers The PID-controller is a feedback-based control system whose main purpose is to bring the final result of the process closer to the desired value. The goal of a PID-controller is to steer the system toward a level, position, or whatever value we specify. According to the structure presented in Fig. 3, the two terms "error" and "setpoint" are of great importance in the PID-controller.
Setpoint here means the target point (level, position, quantity, or whatever we want to reach in the control system) and on the other hand, the error is the amount of deviation (difference) between the target point and the final output value. Needless to say, the lower the error, the better, which means that we have been able to match the final value of the system exactly to our intended value. To reach this target point (error = zero, system output value = SetPoint), the PID control system uses three operators: Proportional, Integral, and Derivative. These three basic coefficients are variable in each PID-controller for specific applications to achieve the optimal response. The three operators of the PID-controller, each of which receives the error signal as input and performs an operation on it, and finally their output is aggregated. The output of this set according to Eq. (2) is the same as the output of the PID-controller. $$\begin{array}{c}output\left(t\right)= {k}_{p}e\left(t\right)+{k}_{i}\underset{o}{\overset{t}{\int }}e\left(\tau \right)d\tau +{k}_{d}\frac{de(t)}{dt}\\ {G}_{c}={k}_{p}+\frac{{k}_{i}}{s}+{k}_{d}s\end{array}$$ (2) By combining three proportional-integral-derivative operators differently, we will have a different response to the error. The amount of response produced by each control mode can be optimized by changing its coefficient (k) and finally by combining these three main control modes to achieve an optimal PID system. Figure 4 compares the results of combining these three control modes (proportional-integral-derivative). P mode is usually used when the presence of offset in the system is not important and tolerable or when the process is naturally integral. PI mode is used when offset is not tolerable and there should be no steady-state error. PID is our choice when it is important to compensate for some natural inertia throughout the system and the process signals are relative without noise. According to the mentioned features, since the absence of steady-state error is important in controlling the microgrid frequency and the system is noisy, a PI-controller is usually used instead of a PID. Because the derivative mode of the PID-controller increases the effect of system noise and the performance of the controller will be different from the desired answer. In a microgrid, the total generation power of units (PGEN) must be carefully controlled based on the load requirements so that a balance of generation power and consumption is established. The difference between the generated power and the load consumption can be expressed as Eq. (3). $$\Delta P= {P}_{GEN}-{P}_{Load}$$ (3) By controlling ΔP and Δf, the system can deliver good-quality power to the load. The frequency changes Δf can be calculated from the net power changes ΔP and are expressed in ideal conditions of Eq. (4): $$\Delta f=\frac{\Delta P}{{K}_{sys}}$$ (4) where Ksys is the constant frequency characteristic of the system. In real and practical terms, there is a time delay (Tsys) in the frequency characteristic. Therefore, the function of converting system frequency changes to power changes (p.u.) is expressed as Eq. (5): $${G}_{sys}=\frac{\Delta f}{\Delta P}=\frac{1}{{K}_{sys}\left(1+s{T}_{sys}\right)}=\frac{1}{D+Ms}$$ (5) Here M and D are equivalent to the inertia and damping constants of the system, respectively. Frequency deviation is detected using the 1/D + Ms, which is characteristic of the system. According to Fig. 
1, the block diagram of the frequency control method using the PI-controller can be shown in Fig. 5. Where proportional to the frequency deviation, each unit must change its output power so that the frequency deviation Δf has its lowest value. Determining the reference power of each unit is the responsibility of the integral controller, the output of which is determined based on the frequency deviation input. ### Dynamic modeling of microgrids under study In this paper, a microgrid separate from the main grid is considered as the system under study, which is shown in Fig. 6. The microgrid consists of units including a diesel energy generator (DEG), a photovoltaic (PV), a wind turbine generator (WTG), a fuel cell (FC), an aqua electrolyzer (AE), a battery energy storage system (BESS), and a flywheel energy storage system (FESS). Given the focus of this paper on system frequency stability, a simplified model of the system frequency response is provided in Fig. 7 for a simpler analysis of how it behaves in the encounter of various disturbances. The values of the parameters used are presented in Table 1. As you can see in Fig. 7, a PI-controller is designed to maintain microgrid stability, which is initially configured by the Ziegler-Nichols method, which is one of the strongest classical methods for adjusting control coefficients, and then optimized and configured online by the proposed method based on ANN. According to Fig. 7, the characteristic function of production units is expressed through Eq. (6) to Eq. (13). 29. $$\begin{array}{c}{\Delta P}_{WTG}=\frac{{k}_{a}{k}_{WTG}{\Delta P}_{W}}{{T}_{WTG}}-\frac{{\Delta P}_{WTG}}{{T}_{WTG}}\\ {G}_{WTG}\left(s\right)=\frac{{k}_{a}{k}_{WTG}}{1+s{T}_{WTG}}=\frac{{\Delta P}_{WTG}(s)}{{\Delta P}_{W}(s)}\end{array}$$ (6) $$\begin{array}{c}{\Delta P}_{PV}=\frac{{k}_{PV}\Delta \varnothing }{{T}_{PV}}-\frac{{\Delta P}_{PV}}{{T}_{PV}}\\ {G}_{PV}\left(s\right)=\frac{{k}_{PV}}{1+s{T}_{PV}}=\frac{{\Delta P}_{PV}(s)}{\Delta \varnothing (s)}\end{array}$$ (7) $$\begin{array}{c}{\Delta P}_{DEG}=\frac{{k}_{DEG}{\Delta P}_{C}}{{T}_{DEG}}-\frac{{k}_{DEG}\Delta F}{R{T}_{DEG}}-\frac{{\Delta P}_{DEG}}{{T}_{DEG}}\\ \left\{\begin{array}{c}{G}_{DEG}\left(s\right)=\frac{{k}_{DEG}}{1+s{T}_{DEG}}=\frac{{\Delta P}_{DEG}(s)}{{\Delta U}_{DEG}(s)}\\ {\Delta U}_{DEG}\left(s\right)={\Delta P}_{C}\left(s\right)-\frac{\Delta F(s)}{R}\end{array}\right.\end{array}$$ (8) $$\begin{array}{c}{\Delta P}_{AE}=\frac{{k}_{AE}\left(1-{k}_{a}\right){\Delta P}_{WTG}}{{T}_{AE}}-\frac{{\Delta P}_{AE}}{{T}_{AE}}\\ \left\{\begin{array}{c}{G}_{AE}\left(s\right)=\frac{{k}_{AE}}{1+s{T}_{AE}}=\frac{{\Delta P}_{AE}(s)}{{\Delta P}_{t}(s)}\\ {\Delta P}_{t}\left(s\right)=\left(1-{K}_{t}\right){\Delta P}_{WTG}\left(s\right), {K}_{t}=0.6\end{array}\right.\end{array}$$ (9) $$\begin{array}{c}{\Delta P}_{FC}=\frac{{k}_{FC}{\Delta P}_{AE}}{{T}_{FC}}-\frac{{\Delta P}_{FC}}{{T}_{FC}}\\ {G}_{FC}\left(s\right)=\frac{{k}_{FC}}{1+s{T}_{FC}}=\frac{{\Delta P}_{FC}\left(s\right)}{{\Delta P}_{AE}\left(s\right)}\end{array}$$ (10) $$\begin{array}{c}{\Delta P}_{\mathrm{BESS}}=\frac{{k}_{\mathrm{BESS}}{\Delta U}_{\mathrm{BESS}}}{{T}_{\mathrm{BESS}}}-\frac{{\Delta P}_{\mathrm{BESS}}}{{T}_{\mathrm{BESS}}}\\ {\Delta P}_{\mathrm{FESS}}=\frac{{k}_{\mathrm{FESS}}{\Delta U}_{\mathrm{FESS}}}{{T}_{\mathrm{FESS}}}-\frac{{\Delta P}_{\mathrm{FESS}}}{{T}_{\mathrm{FESS}}}\end{array}$$ (11) $$\begin{array}{c}{G}_{\mathrm{BESS}}\left(s\right)=\frac{{k}_{\mathrm{BESS}}}{1+s{T}_{\mathrm{BESS}}}=\frac{{\Delta 
P}_{\mathrm{BESS}}\left(s\right)}{{\Delta F}_{\mathrm{BESS}}\left(s\right)}\\ {G}_{\mathrm{FESS}}\left(s\right)=\frac{{k}_{\mathrm{FESS}}}{1+s{T}_{\mathrm{FESS}}}=\frac{{\Delta P}_{\mathrm{FESS}}\left(s\right)}{{\Delta F}_{\mathrm{FESS}}\left(s\right)}\end{array}$$ (12) $$\left\{\begin{array}{c}\Delta f\times \frac{1}{R}=\Delta P,\\ \Delta f=f-{f}_{0},\\ \Delta P=P-{P}_{0},\end{array}\right.$$ (13) ## A proposed control strategy based on ANN-GA ANN is one of the most powerful tools in optimization processes because these networks have a broad ability to process and learn in parallel. Based on the structure of these networks and how the processing elements are combined, several important applications are envisaged for them, such as mind modeling, financial modeling, time series prediction, control systems, and optimization. To use ANNs in the mentioned processes, it is necessary to consider a mathematical model of them. A simple mathematical model for analyzing their behavior is shown in Fig. 8. The vectors x, w, θ, and f(net) are the input and weight vectors, the bias values, and the (linear or nonlinear) neuron functions, respectively. The output of this model is given by Eq. (14). $$y\left(k\right)=f\left(\sum_{j=1}^{n}{w}_{j}{x}_{j}\left(k\right)+{w}_{0}\theta \right)$$ (14) Figure 9 shows the control framework for the online tuning of a PI-controller used in the microgrid frequency secondary control process. To obtain the best performance of the PI-controller and determine the relevant coefficients, various practical methods exist. In this article, we use the combined ANN-GA method to optimize the PI coefficients. The structure of the ANN that tunes the PI-controller online is shown in Fig. 10. The network considered is a multilayer network in which 20 neurons form the input layer (power changes and frequency deviations of the units, etc.) and 2 neurons form the output layer (matching the number of control coefficients to be adjusted). In Fig. 10, x, w1, and w2 are the input vector and the weight vectors of the first and second layers, respectively. The functions considered in Fig. 10 are linear for the first layer and nonlinear for the second and output layers. The ANN first learns from training data how to change the coefficients to keep the system frequency constant, and these coefficients are then updated optimally with the GA algorithm so that the controller is always set to the best values. ### Weight update based on the error back-propagation algorithm This section introduces the usual method for updating weights in an ANN. The back-propagation method tries to minimize, at each weight update, the performance function given in Eq. (15), where y is the reference signal and yd refers to the output of the output layer. $$E=0.5{\left(y-{y}_{d}\right)}^{2}$$ (15) In this method, according to Fig. 10, the weights are updated to achieve the optimal values of the control coefficients (ki, kp) through Eq. (16)28. $$\begin{array}{c}{w}_{2}\left(k+1\right)={w}_{2}\left(k\right)+{\Delta w}_{2}={w}_{2}\left(k\right)+\upeta \delta H\\ {w}_{1}\left(k+1\right)={w}_{1}\left(k\right)+{\Delta w}_{1}={w}_{1}\left(k\right)+\upeta \sigma x\end{array}$$ (16) where Δw1 and Δw2, given by Eq. (17) and Eq. (18) below, are the vectors of changes applied to the weights of the first and second layers so that, over successive updates, the function given in Eq. (15) is driven to its smallest value.
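As a purely illustrative aid, the following minimal sketch implements a two-layer network of the kind shown in Fig. 10 and one gradient-descent update of the form of Eqs. (16)–(18), which are detailed just below. The hidden-layer size, the tanh activations, the mapping of the outputs to (kp, ki), and the constant stand-in for the plant sensitivity ∂y/∂u are all assumptions made for the sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden, n_out = 20, 10, 2          # inputs -> hidden -> (kp, ki); hidden size assumed
w1 = rng.standard_normal((n_hidden, n_in)) * 0.1
w2 = rng.standard_normal((n_out, n_hidden)) * 0.1
eta = 0.05                                  # learning rate in (0, 1]

def forward(x):
    """Two-layer network: linear combination in the first layer, tanh as a
    stand-in for the nonlinear activations of Fig. 10."""
    net_j = w1 @ x
    H = np.tanh(net_j)                      # hidden activations H_j
    net_k = w2 @ H
    u = np.tanh(net_k)                      # raw outputs, mapped to gains below
    return net_j, H, net_k, u

def backprop_step(x, y, y_d, plant_sensitivity=1.0):
    """One gradient-descent update in the spirit of Eqs. (16)-(18).
    `plant_sensitivity` stands in for dy/du, which in practice must be
    estimated or approximated (an assumption here)."""
    global w1, w2
    net_j, H, net_k, u = forward(x)
    dE_dy = (y - y_d)                                              # from E = 0.5 (y - y_d)^2
    delta_k = dE_dy * plant_sensitivity * (1 - np.tanh(net_k) ** 2)  # delta_k, one per output
    sigma_j = (w2.T @ delta_k) * (1 - np.tanh(net_j) ** 2)           # sigma_j for the hidden layer
    w2 -= eta * np.outer(delta_k, H)        # Delta w2 proportional to delta_k * H_j
    w1 -= eta * np.outer(sigma_j, x)        # Delta w1 proportional to sigma_j * x
    kp, ki = 1.0 + u                        # illustrative mapping of outputs to gains
    return kp, ki

x = rng.standard_normal(n_in)               # measured power/frequency deviations, etc.
print(backprop_step(x, y=0.02, y_d=0.0))
```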
This operation tries to change the control coefficients so that the system frequency returns to its final value with the least fluctuation. All the parameters used in the calculation of Δw1 and Δw2, such as σj, δk, and Hj, netj can be seen in Fig. 10, and η = [0 1] was the learning rate. $$\left\{\begin{array}{c}{\Delta w}_{2}=-\upeta \frac{\partial E}{\partial {w}_{2}}\\ \frac{\partial E}{\partial {w}_{2}}=\frac{\partial E}{\partial y}\cdot \frac{\partial y}{\partial u}\cdot \frac{\partial u}{\partial {net}_{k}}\cdot \frac{\partial {net}_{k}}{\partial {w}_{2}}\\ \begin{array}{c}\frac{\partial u}{\partial {net}_{k}}={f}^{^{\prime}}\left({net}_{k}\right),\frac{\partial {net}_{k}}{\partial {w}_{2}}={H}_{j}\\ \frac{\partial E}{\partial y}\cdot \frac{\partial y}{\partial u}\cdot \frac{\partial u}{\partial {net}_{k}}={\delta }_{k}\\ {\Delta w}_{2}=\upeta {\delta }_{k}{H}_{j}\end{array}\end{array}\right.$$ (17) $$\left\{\begin{array}{c}{\Delta w}_{1}=-\upeta \frac{\partial E}{\partial {w}_{1}}\\ \frac{\partial E}{\partial {w}_{1}}=\frac{\partial E}{\partial y}\cdot \frac{\partial y}{\partial u}\cdot \frac{\partial u}{\partial {net}_{k}}\cdot \frac{\partial {net}_{k}}{\partial {H}_{j}}\cdot \frac{\partial {H}_{j}}{\partial {net}_{j}}\cdot \frac{\partial {net}_{j}}{\partial {w}_{1}}\\ \begin{array}{c}\frac{\partial u}{\partial {net}_{k}}={f}^{^{\prime}}\left({net}_{k}\right),\frac{\partial {net}_{k}}{\partial {H}_{j}}={w}_{2},\frac{\partial {H}_{j}}{\partial {net}_{j}}={f}^{^{\prime}}\left({net}_{j}\right) \\ \frac{\partial {net}_{j}}{\partial {w}_{j}}=x\\ {\Delta w}_{1}=\upeta {\delta }_{k}{f}^{^{\prime}}\left({net}_{k}\right){w}_{2}{f}^{^{\prime}}\left({net}_{j}\right)x=\upeta {\sigma }_{j}x\end{array}\end{array}\right.$$ (18) ### Weight update based on GA algorithm One of the most important issues when implementing ANN is choosing the right training algorithm. The most common ANN training algorithm is the error back-propagation algorithm. The problem with this algorithm is slow convergence and stopping at optimal local points. One approach to ANN training is to use metaheuristic algorithms such as GA. In each cycle of this training, the weighting of the parameters is done by the GA algorithm. In the training method in this section, the GA algorithm first calculates the value of the cost function for the system response by selecting a random population as the ANN weights and changes the ANN weights accordingly to improve the ANN performance and minimize the value of the cost function. Here the process of this training method is called the self-modifying method. In this method, ANN weights are quantified as separate sections, each weight changes the ANN performance change, and the effect of each weight on ANN performance is determined, by using the intelligent GA algorithm, these changes are directed towards optimizing the ANN performance. Finally, the ANN weight coefficients are adjusted by optimizing the system efficiency. Therefore, in this method, there is no need to produce a lot of training data for ANN training, and by saving computational resources, higher accuracy can be achieved with less repetition. The mathematical logic of the GA algorithm tries to optimize the output of the control system by minimizing an objective function. The aim here is to minimize Eq. 
(15) and considering that in this paper the goal is to improve the PI coefficients to control the microgrid frequency, two factors can be defined as the maximum overshoot (OS) rate and settling time (ST) of the frequency signal as Eq. (19)30. $$\begin{array}{c}F=\alpha \cdot OS+(1-\alpha )\cdot ST\\ Assuming \alpha =0.5, F=0.5(OS+ST)\end{array}$$ (19) By adding Eq. (19) to Eq. (15), a new objective function can be defined by Eq. (20) to improve the performance of the PI-controller in OS and ST control of microgrid frequency with better accuracy. $$E=0.5{\left(y-{y}_{d}\right)}^{2}+F$$ (20) In this study, the purpose of the GA algorithm is to determine the optimal biases and weights of ANN so that Eq. (20) is minimized. In evolutionary algorithms such as GA, the interface between the algorithm and the problem is how the chromosomes (solutions) are encoded and displayed. In the proposed GA algorithm, each chromosome represents the values of the weights and bias of the ANN network, so that the w1 to wm genes represent the weights of the first layer and the w2 to wn genes represent the weights of the second layer of ANN, and the genes b1 to bm and b2 to bn represent the bias values of first and second layer neurons. Each gene can produce a real value in the range of -1 to 1. Figure 11 shows the structure of a chromosome proposed for ANN training. Finally, Fig. 12 shows the whole design process of the PI-controller from the combination of ANN and GA algorithms as a flowchart. ## Simulation results As you know, various factors such as load changes, uncertainty, units power, nonlinear elements, noise, etc. have a direct impact on the microgrid frequency. In this section, to evaluate the performance of the proposed control method, several different disturbances are applied to the studied microgrid through MATLAB software and the performance results of the proposed method are compared with the conventional PI-controller. To improve the model and get closer to the actual microgrid response, a series of nonlinear elements, limiters, and time delays are added to the original frequency model, which is shown in Fig. 13. One of the most important physical limitations is related to the diesel generator, which due to mechanical and thermal limitations, is not able to respond to disturbances at the same time and there is always a delay between the occurrence of disturbances and the response to it. Also, due to the existence of different filters and telecommunication channels, there is a delay in transferring the measured parameters to the control systems. Therefore, due to the mentioned reasons, delay blocks have been added to the system model. For delayed cases, a time delay of one cycle (20 ms) is provided. Control signals can also be increased or decreased to a certain extent, and the production sources have a dead band that will not be activated until the control input signal to these sources reaches a certain level. The rate of increase or decrease in generator output is also limited. Therefore, according to these cases, the model intended for the diesel generator has been made more accurate by adding a non-linear block, according to what is shown in Fig. 13. In Scenario 1, a step overload of 0.1 pu is applied to the microgrid. 
Simulations for the normal state and for worst-case parameter uncertainty, i.e., an uncertainty of ±30% of the nominal values, are carried out, and the performance of the conventional PID- and PI-controllers is compared with that of the proposed intelligent controller; the results are shown in Fig. 14. The presence of nonlinear factors in the models of some microgrid components makes the microgrid structure more complex, so that the PID-controller does not damp the frequency fluctuations as well as the PI-controller. The proposed controller, however, applies the appropriate correction coefficients to the PI-controller according to the nonlinear behaviour of the microgrid at each instant, so that the PI coefficients are adjusted adaptively and the fluctuations of the microgrid frequency are well controlled. The simulation results of this scenario show that the proposed controller significantly reduces the maximum overshoot and settling time of the microgrid frequency, especially when the system has uncertainty.

In Scenario 2, a perturbation according to Fig. 15 is applied to the wind speed and solar irradiation; the figure also shows the power changes of the respective units. The perturbation of the solar irradiation is such that at t = 10 s the solar irradiation decreases from the initial value of 0.15 pu to 0.1 pu, and at t = 50 s it increases to 0.2 pu. The perturbation of the wind speed is such that at t = 90 s the wind speed decreases from 7.5 m/s to 4.5 m/s, and at t = 130 s it increases to 10 m/s. The microgrid frequency response under these perturbations is shown in Fig. 16. In this scenario, the superiority of the proposed controller over the PID- and PI-controllers in damping the microgrid frequency fluctuations is clearly observed.

In Scenario 3, to model instantaneous load changes, the load perturbation is applied to the microgrid in the form of the irregular pulse train shown in Fig. 17. The performance of the proposed controller and how it tracks the load are shown in Fig. 18. Load changes occur continuously in microgrids, and the microgrid controller must be able to dampen the frequency fluctuations caused by the imbalance between generated and consumed power in the shortest possible time and with the least fluctuation. In this scenario, the superiority of the ANN controller over the conventional PID and PI can be seen. With the proposed controller, after each load change the system frequency returns to normal with the least fluctuation and in the shortest settling time, while with the other controllers the frequency fluctuates more and returns to normal later.

In Scenario 4, the perturbations of the first and second scenarios are applied simultaneously. Also, to test the robustness of the controller in the worst case, the perturbations are applied under +30% uncertainty. The perturbations according to Fig. 19 are applied to the wind speed and solar irradiation as well as to the step load. As shown in Fig. 20, the microgrid frequency response under these conditions is very favourable and has a relative advantage over the other controllers.

In Scenario 5, to further test the stability of the system with the proposed controller and show its better performance, white noise according to Fig. 21 is applied to the model, and the microgrid frequency for both the proposed controller and the conventional PI-controller is shown in Fig. 22. As can be seen, in this case the designed intelligent controller again performs much better.
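To make the overshoot and settling-time comparison used throughout these scenarios concrete, the following minimal Python sketch shows one way to extract the maximum overshoot and settling time from a frequency-deviation trace such as those in Figs. 14–22. It is illustrative only: the synthetic Δf trace and the 2% tolerance band are assumptions on our part, not data or definitions taken from the paper.

```python
import numpy as np

def overshoot_and_settling(t, df, band=0.02):
    """Maximum overshoot |df|_max and settling time of a frequency-deviation
    trace df(t), using a +/-band tolerance around the final value."""
    overshoot = np.max(np.abs(df))                   # maximum frequency excursion
    final = df[-1]                                   # steady-state deviation
    tol = band * max(np.max(np.abs(df)), 1e-12)      # tolerance band width
    outside = np.abs(df - final) > tol
    # settling time = last instant the signal is outside the tolerance band
    settle_time = t[np.where(outside)[0][-1]] if outside.any() else t[0]
    return overshoot, settle_time

# Hypothetical second-order-like response to a 0.1 pu step load (synthetic data)
t = np.linspace(0.0, 20.0, 2001)
df = -0.1 * np.exp(-0.5 * t) * np.cos(2.0 * t)       # assumed delta-f trace (Hz)
os_, st_ = overshoot_and_settling(t, df)
print(f"max overshoot = {os_:.4f} Hz, settling time = {st_:.2f} s")
```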
In this section, the performance of the ANN-GA controller was evaluated in different scenarios and the results showed its superiority over conventional controllers in controlling the microgrid frequency under various disturbances. So far, most of the results shown have been related to the performance of the proposed controller; the following results concern the performance of the artificial neural network itself. Most of the disturbances in a microgrid fall within the same five scenarios presented in this paper, with some variations in generated power, load changes, etc. Therefore, to evaluate the accuracy of the ANN proposed in this paper, a total of 205 cases were considered for evaluation, distributed over the scenarios according to Table 2. Using the data listed in Table 2, the ANN is trained by the GA algorithm. Figure 23 shows the performance of the GA algorithm in optimizing the ANN weights and biases. MATLAB provides various diagrams to examine the performance of ANN models. Figure 24 shows the ANN performance diagram: it plots the mean square error (MSE) of the ANN against the number of iterations, which in this design reaches 1.364e-05 after 190 iterations. Figure 25 shows the error histogram of the ANN, i.e., how the data samples are distributed over the different error values. Figure 26 shows the ROC diagram of the ANN: the closer the curve is to the top-left corner, the better the prediction model and the closer it is to the ideal case. The point (0,1) is the ideal state, indicating that the prediction of the model fully agrees with the actual model, whereas the opposite point (1,0) means that the prediction is the opposite of the actual model. Figure 27 shows the confusion matrix of the ANN: the diagonal cells correspond to correctly classified observations and the off-diagonal cells to incorrectly classified observations, together with the corresponding percentages. Here, the overall accuracy of the ANN is 96.6%. Finally, in each scenario, to implement the proposed ANN-GA-tuned controller, the system is first set to the desired zero initial state, and then, as disturbances are introduced, the ANN-GA adjusts the parameters of the proposed controller so that the system is driven towards the optimal response. The optimal coefficients for this system obtained by the ANN-GA are presented in Table 3. By comparing the coefficients obtained with the ANN-GA method and those of the Ziegler-Nichols method, it can be seen that the ANN-GA is able to perform this tuning accurately without the need for any predetermined data and, by choosing the correct control coefficients, drives the system response towards the desired output. A quantitative comparison of the performance of the proposed control method has been made with two indicators: RMS(Δf) (the root mean square of the frequency deviation) and max(|Δf|) (the maximum overshoot and undershoot). The improvement percentages of these two indicators in Scenario 5 for the conventional PI-controller, the conventional PID-controller, and the ANN-GA method are shown in Table 4. As can be seen, the proposed method performs better.
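As a complement to the Table 4 indicators just described, the sketch below shows how RMS(Δf), max(|Δf|) and the corresponding improvement percentages can be computed from simulated frequency-deviation signals. The two traces are synthetic stand-ins (assumed here for illustration) for a baseline PI response and an ANN-GA response; they are not the paper's simulation outputs.

```python
import numpy as np

def indicators(df):
    """RMS and maximum absolute value of a frequency-deviation signal."""
    return np.sqrt(np.mean(df**2)), np.max(np.abs(df))

def improvement(baseline, proposed):
    """Percentage improvement of the proposed controller over a baseline."""
    return 100.0 * (baseline - proposed) / baseline

t = np.linspace(0.0, 60.0, 6001)
df_pi  = 0.08 * np.exp(-0.10 * t) * np.sin(1.5 * t)   # hypothetical PI response (Hz)
df_ann = 0.03 * np.exp(-0.30 * t) * np.sin(1.5 * t)   # hypothetical ANN-GA response (Hz)

rms_pi, mx_pi = indicators(df_pi)
rms_nn, mx_nn = indicators(df_ann)
print(f"PI     : RMS = {rms_pi:.4f} Hz, max = {mx_pi:.4f} Hz")
print(f"ANN-GA : RMS = {rms_nn:.4f} Hz, max = {mx_nn:.4f} Hz")
print(f"improvement: RMS {improvement(rms_pi, rms_nn):.1f} %, "
      f"max {improvement(mx_pi, mx_nn):.1f} %")
```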
## Conclusions

The balance between the production and consumption of active power is the main factor in ensuring the frequency stability of a microgrid. In this paper, an ANN-based PI-controller is proposed to control the microgrid frequency in island mode. The proposed PI-controller structure is such that its coefficients are adjusted by the ANN at every instant according to the system frequency changes. Since designing an ANN for frequency control would normally require a large amount of training data, in this study the GA algorithm was used to tune and train the ANN. The proposed controller performed well for various types of disturbances under different scenarios and can be easily implemented despite the nonlinear and complex structure of microgrids. Also, to increase the efficiency of the controller at different operating points, the controller was designed by considering uncertainties in some microgrid parameters, so that the proposed controller is robust to changes in the operating points. Finally, the performance of the proposed controller was compared with conventional PI- and PID-controllers for different scenarios, which confirmed the good accuracy of the proposed controller.

Despite the good results obtained, this approach has some weaknesses and limitations. Basically, any control approach based on a neural network algorithm needs a large database to behave optimally. This can be one of the weaknesses of this part of the design: it is hard for a standard processor to manage such a large amount of information, which forces the use of a high-performance processor; this can be difficult in some situations and increases the cost of the overall control loop. In the same way, the integration of two complex optimization algorithms makes the decision calculation harder and may cause some delays in the system. This risk is low, but it can occur in such cases; the resulting delay creates a risk of not detecting a small variation and of not producing the optimal decision in time.

On the other hand, this work has the potential to be extended and improved by testing further optimization solutions and evaluating the overall behaviour. A sliding-mode control loop could be designed to control the frequency perturbation, in order to verify whether a stable performance is reached more easily and quickly after disturbances coming from the grid side or from the different source types. Also, testing this approach on a practical case would be interesting in order to obtain real results for the proposed approach, and it would be worth testing the execution speed of this control loop on this complex system to see whether it can really be applied or not.

### Human and animal rights

This article does not contain any studies with animals performed by any of the authors.
web
auto_math_text
# SOP-GPU package

The SOP-GPU package, where SOP stands for the Self-Organized Polymer model fully implemented on a GPU, is a scientific software package designed to perform Langevin dynamics simulations of the mechanical or thermal unfolding, and of the mechanical pulling/indentation, of large biomolecular systems on the experimental subsecond (millisecond-to-second) timescale. The SOP-GPU package utilizes the $$C_\alpha$$ and $$C_\alpha$$-$$C_\beta$$ based coarse-grained description of proteins combined with the computational power of modern Graphics Processing Units (GPUs).

Documentation: Examples of SOP-GPU configuration files can be found here.
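As background to the type of calculation such a package performs, here is a generic overdamped (Brownian) Langevin integration step for a simple bead-spring chain. This is only a schematic illustration of Langevin dynamics under assumed harmonic bonds and arbitrary parameters; it is not the SOP force field, the SOP-GPU configuration format, or its GPU implementation.

```python
import numpy as np

def langevin_step(x, force, dt, gamma, kT, rng):
    """One overdamped Langevin update: x += (dt/gamma)*F + sqrt(2*kT*dt/gamma)*noise."""
    noise = rng.standard_normal(x.shape)
    return x + (dt / gamma) * force(x) + np.sqrt(2.0 * kT * dt / gamma) * noise

def harmonic_chain_force(x, k=10.0, r0=1.0):
    """Placeholder coarse-grained force: harmonic bonds between consecutive beads."""
    f = np.zeros_like(x)
    bond = x[1:] - x[:-1]                            # vectors between consecutive beads
    d = np.linalg.norm(bond, axis=1, keepdims=True)
    pull = k * (d - r0) * bond / np.maximum(d, 1e-12)
    f[:-1] += pull                                   # bead i pulled towards bead i+1
    f[1:] -= pull                                    # bead i+1 pulled towards bead i
    return f

rng = np.random.default_rng(0)
x = np.cumsum(np.ones((20, 3)), axis=0)              # 20-bead straight chain (arbitrary)
for _ in range(1000):
    x = langevin_step(x, harmonic_chain_force, dt=1e-3, gamma=1.0, kT=0.6, rng=rng)
print("end-to-end distance:", np.linalg.norm(x[-1] - x[0]))
```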
web
auto_math_text
# Quantum Gravity

NEOCLASSICAL PHYSICS AND QUANTUM GRAVITY

Imagine that nature emerges from ample pairs of immutable Planck radius spherical particles, the electrino and the positrino, which are equal yet oppositely charged. These are the only carriers of energy, in electromagnetic and kinetic form. They are located in an infinite 3D Euclidean space (non-curvy) and observe classical mechanics and Maxwell’s equations. 𝗡𝗣𝗤𝗚 explores this recipe for nature and how it emerges as a narrative and theory that is compatible with GR and QM, yet far superior in ability to explain the universe and resolve open problems. For 𝗡𝗣𝗤𝗚 basics see: Idealized Neoclassical Model and the NPQG Glossary.

Imagine that spacetime is a superfluid of low temperature particles (low energy photons, neutrinos, gravitons, and/or axions). The wave function of the dipoles comprising the neutral shells of neighboring particles would interact with an ebb and flow of continuous energy, not a discrete transfer of energy. The outstanding energy from each particle of matter-energy would be the root mean square of the electromagnetic energy interaction with all neighbor wave functions. This would serve to heat or energize the nearby particles. The temperature of the spacetime gas relates to gravity. Gravity is the force of convection on matter from the gradient of the spacetime superfluid temperature. All ‘presenting’ matter-energy particles are pulsing energy into the superfluid gas, and those waves travel at the local speed of light c and decrease in magnitude with the square of the distance. Every particle pushes energy out and receives energy back. It is an alternating energy flux. This is related to, but distinct from, a gravitational wave tsunami resulting from a high energy collision (BH-BH, BH-NS, NS-NS), where the superfluid gas particles experience changes in size and displacement. Some particles in the universe don’t participate in this dance. Those are particles on the interior of a Planck core inside supermassive black holes (SMBH). Particles interior to a Planck core cannot present their energy (mass) because they are at maximum energy and their neighbors are too.

Let’s use $\mathbf{F=GM_{1}M_{2}/r^{2}}$ to show the gravitational interaction of two particles, where M1 is the mass of particle 1 and M2 is the mass of particle 2; this can easily be extended to collections or bodies of matter up to a fairly large size. Neutron stars (NS) and black holes (BH) will be considered separately. Particle 1 pulses energy to the superfluid spacetime gas. That energy spreads out at local $\mathbf{c^{2}}$. Why $\mathbf{c^{2}}$? We are dealing with a spherical wave, so surface area is where the energy gets spread. What is the surface area of a sphere? It is $\mathbf{4 \pi r^{2}}$. So that is where some of these numbers in the physics equations arise naturally. You’ll notice I said local c. c is not a constant. c depends on the local permittivity and permeability of the spacetime gas, which depend on the energy – aka temperature – of the superfluid gas neighborhood. Then particle 1 receives an energy pulse back from the superfluid gas. So that is a sine wave. No net energy was transferred. However, particle 1 averages root mean square energy outstanding over that wave cycle. The RMS energy outstanding is $\mathbf{E_{1}=m_{1}c^{2}}$. Local c. So particle 1 averages E1 outstanding. Meanwhile particle 2 is doing the same thing, and has mass M2 which is given by RMS energy E2 outstanding.
Now imagine graphing the temperature of spacetime around and between these particles. Each particle would experience a higher spacetime energy in the direction towards the other particle. It turns out that particles experience a convective force towards higher energy spacetime and the steeper that gradient, the higher the convective force of gravity.
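To make the inverse-square bookkeeping used in the passage above explicit, here is a small numerical sketch of Newton's force law and of how a pulse of total power dilutes over the surface area 4πr² of a spherical wavefront. This is standard textbook physics with arbitrary illustrative values, not a calculation specific to the NPQG framework.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    """F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r**2

def flux_dilution(power, r):
    """Energy flux (W/m^2) of a spherical wave of total power P at radius r."""
    return power / (4.0 * math.pi * r**2)

m1, m2, r = 5.0e24, 7.0e22, 3.8e8          # illustrative masses (kg) and separation (m)
print("F =", newton_force(m1, m2, r), "N")
for radius in (1.0, 2.0, 4.0):             # doubling r quarters the flux
    print(f"flux at r = {radius} m: {flux_dilution(100.0, radius):.3f} W/m^2")
```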
web
auto_math_text
Volume 297 - XXV International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS2017) - WG6 Spin and 3D Structure

On neutrino production of a charmed meson

B. Pire, L. Szymanowski, J. Wagner* (*corresponding author)

Full text: pdf. Pre-published on: 2017 September 30. Published on: 2018 January 16.

Abstract: We calculate, in the framework of the collinear QCD approach, the amplitude for exclusive neutrino-production of a pseudoscalar charmed $D$ meson. This process gives access to gluon as well as to both chiral-odd and chiral-even quark generalized parton distributions (GPDs), which contribute in specific ways to the amplitude for different polarization states of the $W$ boson. The energy dependence of the cross section allows the different contributions to be separated, and the measurement of the azimuthal dependence helps to single out the transversity chiral-odd GPD contributions. The flavor dependence, and in particular the difference between $D^+$ and $D^0$ production rates, allows the importance of gluonic contributions to be tested. The behaviour of the proton and neutron target cross sections enables the $u$ and $d$ quark contributions to be separated. Planned medium- and high-energy neutrino facilities will thus allow important progress in the realm of hadronic physics.

Open Access
web
auto_math_text
Title: Polarizations of $J/\Psi$ and $\Psi(2S)$ mesons produced in $p\overline{p}$ collisions at $\sqrt{s}$=1.96 TeV

Author (Institution/Organisation): CDF Collaboration

Abstract: We have measured the polarizations of J/ψ and ψ(2S) mesons as functions of their transverse momentum pT when they are produced promptly in the rapidity range |y|<0.6 with pT≥5 GeV/c. The analysis is performed using a data sample with an integrated luminosity of about 800 pb-1 collected by the CDF II detector. For both vector mesons, we find that the polarizations become increasingly longitudinal as pT increases from 5 to 30 GeV/c. These results are compared to the predictions of nonrelativistic quantum chromodynamics and other contemporary models. The effective polarizations of J/ψ and ψ(2S) mesons from B-hadron decays are also reported.

Language: English
Source (journal): Physical Review Letters. - New York, N.Y.
Publication: New York, N.Y., 2007
ISSN: 0031-9007
Volume/pages: 99:13 (2007), p. 132001,1-132001,7
ISI: 000249786700020
web
auto_math_text
Cost-effectiveness of alternative strategies for interferon-γ release assays and tuberculin skin test in tuberculous uveitis

1. Marcus Ang1,2,3, 2. Hai V Nguyen4, 3. Sieh Yean Kiew1, 4. Shu Chen4, 5. Soon-Phaik Chee1,2,3, 6. Eric Finkelstein4

1. 1Singapore National Eye Centre, Singapore, Singapore 2. 2Singapore Eye Research Institute, Singapore, Singapore 3. 3Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore 4. 4Duke-NUS Graduate Medical School, Singapore, Singapore

Correspondence to Professor Soon-Phaik Chee, Singapore National Eye Centre, 11 Third Hospital Avenue, Singapore 168751, Singapore; chee.soon.phaik{at}snec.com.sg

## Abstract

Background Although tuberculous uveitis remains a major cause of ocular morbidity in the developing world, there is no consensus on which diagnostic test or testing strategy is the most cost effective. In this study we carried out a cost-effectiveness analysis to determine the most cost-effective diagnostic test strategy.

Methods In this prospective study, we recruited 102 patients from Singapore National Eye Centre with signs suggestive of tuberculous uveitis. Using prospective data from this cohort and from published meta-analyses, we modelled the incremental cost effectiveness of the following strategies: tuberculin skin test (TST) only; interferon-γ release assay (IGRA) only; IGRA following a positive TST result; and a dual-test strategy, conducting TST and IGRA at presentation. Incremental cost-effectiveness ratios (ICERs) were calculated for each strategy and analysed using a willingness-to-pay threshold of $50 000 per quality-adjusted life year (QALY) gained.

Results In our population, the least cost effective was the IGRA-only strategy. The dual-test strategy was the most cost effective, with an improvement of 0.017 QALY at an incremental cost of $190 relative to the TST-only strategy (ICER $11 500), while the TST-only strategy was more cost effective than the third strategy, using IGRA following a positive TST result (ICER $3610). This remained consistent while varying the costs of IGRA and TST, the incidence of tuberculosis and tuberculous uveitis, as well as the diagnostic accuracy of IGRA and TST found in previous studies in various populations.

Conclusions The dual-test strategy (performing TST and IGRA at presentation) was the most cost-effective strategy for the diagnosis of tuberculous uveitis in our population.

• Diagnostic tests/Investigation • Infection • Inflammation
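For readers unfamiliar with the metric, the short sketch below reproduces the incremental cost-effectiveness arithmetic described in the abstract. The inputs are the rounded values quoted above ($190 and 0.017 QALY), so the computed ratio differs slightly from the reported $11 500; the code itself is a generic illustration, not the authors' model.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qaly

# Dual-test strategy vs TST-only, using the rounded figures quoted in the abstract
delta_cost = 190.0       # incremental cost in dollars
delta_qaly = 0.017       # incremental QALYs gained
wtp = 50_000.0           # willingness-to-pay threshold ($ per QALY)

ratio = icer(delta_cost, delta_qaly)
verdict = "cost-effective" if ratio < wtp else "not cost-effective"
print(f"ICER = ${ratio:,.0f} per QALY ({verdict} at ${wtp:,.0f}/QALY)")
```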
web
auto_math_text
### Lever: Breaking the Shackles of Scalable On-chain Validation

Mingming Wang and Qianhong Wu

##### Abstract

Blockchain brings dawn to decentralized applications which coordinate correct computations without prior trust. However, existing scalable on-chain frameworks are incompetent at dealing with intensive validation. On the one hand, the duplicated execution pattern leads to limited throughput and unacceptable expenses. On the other hand, there is a lack of fair and secure incentive mechanisms allocating rewards according to the actual workload of validators, which creates bad dilemmas among rational participants and invites effective attacks from shrewd adversaries. While most solutions rely on off-chain patterns to sidestep these shackles, this further introduces unexpected issues in applicability, fairness and a brittle dependency on interactive cooperation. The intrinsic bottleneck of the backbone has never been drastically broken. This work presents Lever, the first scalable on-chain framework which supports intensive validation while achieving validity, incentive compatibility and cost-efficiency with tolerance of f < n/4 Byzantine participants. Lever first integrates the evaluation of complexity into the correctness of transactions, thoroughly decoupling intensive validation from regular Byzantine consensus. Significant scalability is then achieved by launching a few rounds of a novel validation-challenge game between potential adversaries and rational stakeholders; a compelling incentive mechanism effectively transfers the deposits of the adversary into specialized rewards for honest validators, and therefore allows the user to lever sufficient endorsement for verification with minimum cost. Combined with game-theoretic insights, a backstop protocol is designed to ensure finality and validity of the framework, breaking through the famous Verifier’s Dilemma. Finally, we streamline Lever under the efficient architecture of sharding, which proves robust against conceivable attacks on validation and shows an outstanding ability to purge Byzantine participants. Experimental results show that Lever vastly improves the throughput and reduces the expenses of intensive validation with a slight compromise in latency.

Note: -2019.10.08 An early version of this paper was submitted to CCS 2019. Though the significance and novelty of our construction were fully recognized by most of the reviewers, it was finally rejected due to poor presentation and an unclear structure which “does not do justice with the contribution of the paper”. We appreciate the valuable constructive suggestions from the anonymous reviewers. After revising and refining our paper accordingly, we have decided to preprint the full version to share our research with the academic community. Compared to the old version, the following changes have been made: 1) The paper is reconstructed with a more readable structure, where the main protocol and sharding-based optimizations are decoupled and stated in a progressive manner. 2) We try our best to improve the presentation, removing mistakes and meaningless buzzwords, replenishing necessary definitions and self-contained background knowledge, as well as making our framework more concise. 3) Complete proofs and analyses are provided with detailed explanations to address all the doubts from the reviewers. We will maintain this log to present the latest progress of the work and we are looking forward to any valuable comments, suggestions and cooperation. We hope our contribution can accelerate the development of the Blockchain ecology.
Category: Cryptographic protocols
Publication info: Preprint. MINOR revision.
Keywords: electronic commerce and payment, distributed system security, verifiable computation, incentive compatibility, sharding
Contact author(s): wangmingming @ buaa edu cn, qianhong wu @ buaa edu cn
History: 2019-11-02: revised. See all versions
Short URL: https://ia.cr/2019/1172
License: CC BY-NC

BibTeX

@misc{cryptoeprint:2019/1172, author = {Mingming Wang and Qianhong Wu}, title = {Lever: Breaking the Shackles of Scalable On-chain Validation}, howpublished = {Cryptology ePrint Archive, Paper 2019/1172}, year = {2019}, note = {\url{https://eprint.iacr.org/2019/1172}}, url = {https://eprint.iacr.org/2019/1172} }
web
auto_math_text
Large eddy simulations are performed in a periodic domain of a rotating square duct with normal rib turbulators. Both the Coriolis force as well as the centrifugal buoyancy forces are included in this study. A direct approach is presented for the unsteady calculation of the nondimensional temperature field in the periodic domain. The calculations are performed at a Reynolds number (Re) of 12,500, a rotation number (Ro) of 0.12, and an inlet coolant-to-wall density ratio $Δρ/ρ$ of 0.13. The predicted time and space-averaged Nusselt numbers are shown to compare satisfactorily with the published experimental data. Time sequences of the vorticity components and the temperature fields are presented to understand the flow physics and the unsteady heat transfer behavior. Large scale coherent structures are seen to play an important role in the mixing and heat transfer. The temperature field appears to contain a low frequency mode that extends beyond a single inter-rib geometric module, and indicates the necessity of using at least two inter-rib modules for streamwise periodicity to be satisfied. Proper orthogonal decomposition (POD) of the flowfield indicates a low dimensionality of this system with almost 99% of turbulent energy in the first 80 POD modes.
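Since the abstract reports that roughly 99% of the turbulent energy is captured by the first 80 POD modes, the following generic snapshot-POD sketch illustrates how such an energy fraction is typically computed from a matrix of flow snapshots via the singular value decomposition. The travelling-wave snapshot data here are synthetic assumptions for illustration; this is not the authors' LES solver or dataset.

```python
import numpy as np

def pod_energy_fraction(snapshots, n_modes):
    """Snapshot POD via SVD: fraction of fluctuation energy in the first n_modes."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove temporal mean
    s = np.linalg.svd(fluct, compute_uv=False)                  # singular values
    energy = s**2                                               # modal energies
    return energy[:n_modes].sum() / energy.sum()

# Synthetic example: 500 snapshots of a 2000-point field built from a few travelling waves
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 2000)[:, None]
t = np.linspace(0, 10, 500)[None, :]
field = (np.sin(x - t) + 0.3 * np.sin(3 * x - 2 * t)
         + 0.05 * rng.standard_normal((2000, 500)))
print("energy in first 80 modes:", pod_energy_fraction(field, 80))
```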
web
auto_math_text
On Making U2F Protocol Leakage-Resilient via Re-keying

Donghoon Chang, Sweta Mishra, Somitra Kumar Sanadhya, and Ajit Pratap Singh

Abstract

The Universal 2nd Factor (U2F) protocol is an open authentication standard to strengthen the two-factor authentication process. It augments the existing password-based infrastructure by using a specialized USB, termed the U2F authenticator, as the 2nd factor. The U2F authenticator is assigned two fixed keys at the time of manufacture, namely the device secret key and the attestation private key. These secret keys are later used by the U2F authenticator during the Registration phase to encrypt and digitally sign data that will help in proper validation of the user and the web server. However, the use of fixed keys for the above processing leaks information through side channels about both secrets. In this work we show why the U2F protocol is not secure against side channel attacks (SCA). We then present a countermeasure for the SCA based on a re-keying technique that prevents the repeated use of the device secret key for encryption and signing. We also recommend a modification to the existing U2F protocol to minimise the effect of signing with the fixed attestation private key. Incorporating our proposed countermeasure and recommended modification, we then present a new variant of the U2F protocol that has improved security guarantees. We also briefly explain how the side channel attacks on the U2F protocol and the corresponding proposed countermeasures are similarly applicable to the Universal Authentication Framework (UAF) protocol.

Note: There are a few editorial changes in the current version of the paper.

Publication info: Preprint. MINOR revision.
Contact author(s): swetam @ iiitd ac in
History: 2017-08-08: revised. See all versions
Short URL: https://ia.cr/2017/721
License: CC BY

BibTeX

@misc{cryptoeprint:2017/721, author = {Donghoon Chang and Sweta Mishra and Somitra Kumar Sanadhya and Ajit Pratap Singh}, title = {On Making U2F Protocol Leakage-Resilient via Re-keying}, howpublished = {Cryptology ePrint Archive, Paper 2017/721}, year = {2017}, note = {\url{https://eprint.iacr.org/2017/721}}, url = {https://eprint.iacr.org/2017/721} }
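As a generic illustration of the re-keying idea, a fresh per-operation key can be derived from the fixed device secret so that the long-term key itself never directly processes attacker-influenced data. The counter-based HMAC derivation below is a simplified stand-in assumed for illustration; it is not the authors' exact construction and not part of the FIDO U2F specification.

```python
import hmac, hashlib, os

def derive_session_key(device_secret: bytes, counter: int,
                       context: bytes = b"u2f-rekey") -> bytes:
    """Derive a one-time key from the long-term device secret and a monotonically
    increasing counter, so the fixed secret is never reused for bulk operations."""
    msg = context + counter.to_bytes(8, "big")
    return hmac.new(device_secret, msg, hashlib.sha256).digest()

device_secret = os.urandom(32)        # stands in for the key injected at manufacture
for counter in range(3):              # each registration/authentication gets a new key
    k = derive_session_key(device_secret, counter)
    print(counter, k.hex()[:16], "...")
```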
web
auto_math_text
Search for pair production of third-generation leptoquarks and top squarks in pp collisions at sqrt(s) = 7 TeV

Abstract: Results are presented from a search for pair production of third-generation scalar and vector leptoquarks, and top squarks, decaying into a tau lepton and a b quark. The search is based on a data sample of pp collisions at sqrt(s) = 7 TeV, which is collected by the CMS detector at the LHC and corresponds to an integrated luminosity of 4.8 inverse femtobarns. The number of observed events containing two tau leptons and two b-tagged jets is found to be in agreement with the standard model prediction. The results are interpreted in several scenarios, with exclusion limits on mass parameters at 95% confidence level. Vector leptoquarks with masses below 760 GeV are excluded and, if the branching fraction of the scalar leptoquark decay to tau lepton and b quark is assumed to be unity, third-generation scalar leptoquarks with masses below 525 GeV are ruled out. Limits are also set on the cross section for pair production of top squarks in a supersymmetric model with R-parity violation. Top squarks with masses below 453 GeV are excluded for a typical benchmark scenario, assuming that the coupling between the top squark, tau lepton, and b quark, lambda'[333]=1; and limits on lambda'[333] are set. These results are the most stringent for these scenarios to date.

Document type: Journal articles
Published in: Physical Review Letters, American Physical Society, 2013, 110, pp.081801. <10.1103/PhysRevLett.110.081801>
Record: http://hal.in2p3.fr/in2p3-00744368
Contributor: Sylvie Florès
Submitted on: Tuesday, October 23, 2012 - 7:30:32 AM
Last modification on: Thursday, February 21, 2013 - 7:39:25 AM

Citation: S. Chatrchyan, M. Besançon, S. Choudhury, M. Dejardin, D. Denegri, et al. Search for pair production of third-generation leptoquarks and top squarks in pp collisions at sqrt(s) = 7 TeV. Physical Review Letters, American Physical Society, 2013, 110, pp.081801. <10.1103/PhysRevLett.110.081801>. <in2p3-00744368>
web
auto_math_text
# Approximate Expression of Bit Error Rate in Uplink MC-CDMA Systems with Equal Gain Combining

• Develi, Ibrahim (Department of Electrical & Electronics Engineering, Faculty of Engineering, Erciyes University) ; • Akdagli, Ali (Department of Electrical & Electronics Engineering, Faculty of Engineering, Mersin University)
• Received : 2011.12.16 • Accepted : 2012.07.31 • Published : 2013.02.28

#### Abstract

Uplink multicarrier code-division multiple-access (MC-CDMA) with equal gain combining (EGC) over Nakagami fading channels is considered. An improved expression which is a feasible alternative for the bit error rate (BER) performance evaluation of MC-CDMA signals is proposed. Simulated annealing algorithm is employed to obtain the optimum value of the coefficients belonging to the proposed expression. Numerical examples show that the performance curves computed by the improved expression are in good agreement with the results obtained by the exact expression. Thus, the proposed expression can improve the accuracy of BER performance evaluation that has been realized by the approximate expression.
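To give a flavour of the coefficient-fitting step described in the abstract, here is a generic simulated-annealing sketch. The quadratic toy objective, the Gaussian neighbourhood step and the geometric cooling schedule are illustrative assumptions on our part; the code does not implement the paper's BER expression.

```python
import math, random

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimise cost(x) over a real-valued coefficient vector x by simulated annealing."""
    rng = random.Random(seed)
    x, best = list(x0), list(x0)
    fx, fbest = cost(x), cost(x0)
    temp = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]       # random neighbour
        fc = cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        temp *= cooling                                       # geometric cooling
    return best, fbest

# Toy objective standing in for the mismatch between exact and approximate BER curves
target = [0.7, -1.3, 2.1]
cost = lambda c: sum((ci - ti) ** 2 for ci, ti in zip(c, target))
coeffs, err = simulated_annealing(cost, x0=[0.0, 0.0, 0.0])
print("fitted coefficients:", [round(c, 3) for c in coeffs], "residual:", round(err, 6))
```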
web
auto_math_text
Outlook: Ameresco Inc. Class A Common Stock is assigned a short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy: Sell. Time series to forecast n: 23 Jan 2023 for (n+8 weeks).

## Abstract

Ameresco Inc. Class A Common Stock prediction model is evaluated with Multi-Task Learning (ML) and Ridge Regression1,2,3,4 and it is concluded that the AMRC stock is predictable in the short/long term. According to price forecasts for the (n+8 weeks) period, the dominant strategy among neural networks is: Sell.

## Key Points

1. Can statistics predict the future?
2. Is now a good time to invest?
3. How do you know when a stock will go up or down?

## AMRC Target Price Prediction Modeling Methodology

We consider the Ameresco Inc. Class A Common Stock Decision Process with Multi-Task Learning (ML), where A is the set of discrete actions of AMRC stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4

$$F(\text{Ridge Regression})^{5,6,7}=\begin{pmatrix}p_{a1} & p_{a2} & \dots & p_{1n}\\ & \vdots & & \\ p_{j1} & p_{j2} & \dots & p_{jn}\\ & \vdots & & \\ p_{k1} & p_{k2} & \dots & p_{kn}\\ & \vdots & & \\ p_{n1} & p_{n2} & \dots & p_{nn}\end{pmatrix}\times R(\text{Multi-Task Learning (ML)})\times S(n)\rightarrow (n+8\ \text{weeks}),\qquad \vec{R}=\left(r_{1},r_{2},r_{3}\right)$$

where n is the time series to forecast, p denotes the price signals of AMRC stock, j the Nash equilibria (neural network), k the dominated move, and a the best response for the target price.

For further technical information on how our model works, we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work?

## AMRC Stock Forecast (Buy or Sell) for (n+8 weeks)

Sample Set: Neural Network. Stock/Index: AMRC Ameresco Inc. Class A Common Stock. Time series to forecast n: 23 Jan 2023 for (n+8 weeks). According to price forecasts for the (n+8 weeks) period, the dominant strategy among neural networks is: Sell.

(Forecast chart axes: X = likelihood %, i.e. how likely the event is to occur; Y = potential impact %, i.e. how strongly the price may deviate; Z, grey to black = technical analysis %.)

## IFRS Reconciliation Adjustments for Ameresco Inc. Class A Common Stock

1. For example, when the critical terms (such as the nominal amount, maturity and underlying) of the hedging instrument and the hedged item match or are closely aligned, it might be possible for an entity to conclude on the basis of a qualitative assessment of those critical terms that the hedging instrument and the hedged item have values that will generally move in the opposite direction because of the same risk and hence that an economic relationship exists between the hedged item and the hedging instrument (see paragraphs B6.4.4–B6.4.6). 2. If the group of items does not have any offsetting risk positions (for example, a group of foreign currency expenses that affect different line items in the statement of profit or loss and other comprehensive income that are hedged for foreign currency risk) then the reclassified hedging instrument gains or losses shall be apportioned to the line items affected by the hedged items. This apportionment shall be done on a systematic and rational basis and shall not result in the grossing up of the net gains or losses arising from a single hedging instrument. 3.
Hedge effectiveness is the extent to which changes in the fair value or the cash flows of the hedging instrument offset changes in the fair value or the cash flows of the hedged item (for example, when the hedged item is a risk component, the relevant change in fair value or cash flows of an item is the one that is attributable to the hedged risk). Hedge ineffectiveness is the extent to which the changes in the fair value or the cash flows of the hedging instrument are greater or less than those on the hedged item. 4. In cases such as those described in the preceding paragraph, to designate, at initial recognition, the financial assets and financial liabilities not otherwise so measured as at fair value through profit or loss may eliminate or significantly reduce the measurement or recognition inconsistency and produce more relevant information. For practical purposes, the entity need not enter into all of the assets and liabilities giving rise to the measurement or recognition inconsistency at exactly the same time. A reasonable delay is permitted provided that each transaction is designated as at fair value through profit or loss at its initial recognition and, at that time, any remaining transactions are expected to occur.

*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.

## Conclusions

Ameresco Inc. Class A Common Stock is assigned a short-term Ba1 & long-term Ba1 estimated rating. Ameresco Inc. Class A Common Stock prediction model is evaluated with Multi-Task Learning (ML) and Ridge Regression1,2,3,4 and it is concluded that the AMRC stock is predictable in the short/long term. According to price forecasts for the (n+8 weeks) period, the dominant strategy among neural networks is: Sell.

### AMRC Ameresco Inc. Class A Common Stock Financial Analysis*

| Rating | Short-Term | Long-Term |
| --- | --- | --- |
| Senior Outlook* | Ba1 | Ba1 |
| Income Statement | Ba3 | Baa2 |
| Balance Sheet | Baa2 | B2 |
| Leverage Ratios | Ba3 | B2 |
| Cash Flow | Ba1 | Baa2 |
| Rates of Return and Profitability | Baa2 | B2 |

*Financial analysis is the process of evaluating a company's financial performance and position by the neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does the neural network examine financial reports and understand the financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 86 out of 100 with 466 signals.

## References

1. Hastie T, Tibshirani R, Wainwright M. 2015. Statistical Learning with Sparsity: The Lasso and Generalizations. New York: CRC Press
2. Chen, C., and L. Liu (1993), "Joint estimation of model parameters and outlier effects in time series," Journal of the American Statistical Association, 88, 284–297.
3. Blei DM, Lafferty JD. 2009. Topic models. In Text Mining: Classification, Clustering, and Applications, ed. A Srivastava, M Sahami, pp. 101–24. Boca Raton, FL: CRC Press
4. R. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:21–42, 2000.
5. Ruiz FJ, Athey S, Blei DM. 2017. SHOPPER: a probabilistic model of consumer choice with substitutes and complements. arXiv:1711.03560 [stat.ML]
6. Pennington J, Socher R, Manning CD. 2014.
GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1532–43. New York: Assoc. Comput. Linguist.
7. Batchelor, R., and P. Dua (1993), "Survey vs ARCH measures of inflation uncertainty," Oxford Bulletin of Economics and Statistics, 55, 341–353.

## Frequently Asked Questions

Q: What is the prediction methodology for AMRC stock?
A: AMRC stock prediction methodology: We evaluate the prediction models Multi-Task Learning (ML) and Ridge Regression.

Q: Is AMRC stock a buy or sell?
A: The dominant strategy among neural networks is to Sell AMRC Stock.

Q: Is Ameresco Inc. Class A Common Stock stock a good investment?
A: The consensus rating for Ameresco Inc. Class A Common Stock is Sell and it is assigned a short-term Ba1 & long-term Ba1 estimated rating.

Q: What is the consensus rating of AMRC stock?
A: The consensus rating for AMRC is Sell.

Q: What is the prediction period for AMRC stock?
A: The prediction period for AMRC is (n+8 weeks).
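As a rough illustration of one of the modelling ingredients named above, here is a generic closed-form ridge regression on synthetic lagged prices. The random-walk price series, the lag structure and the regularisation strength are assumptions chosen for illustration; this is not the site's proprietary multi-task neural-network pipeline.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def make_lagged(prices, n_lags=8, horizon=8):
    """Build lagged-price features X and the price `horizon` steps ahead as target y."""
    X, y = [], []
    for i in range(n_lags, len(prices) - horizon):
        X.append(prices[i - n_lags:i])
        y.append(prices[i + horizon])
    return np.array(X), np.array(y)

rng = np.random.default_rng(42)
prices = np.cumsum(rng.normal(0.0, 1.0, 400)) + 50.0   # synthetic weekly price series
X, y = make_lagged(prices)
w = ridge_fit(X, y, lam=10.0)
forecast = prices[-8:] @ w                              # naive "(n+8 weeks)"-style forecast
print("forecast:", round(float(forecast), 2), "last price:", round(float(prices[-1]), 2))
```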
web
auto_math_text
# Prospects for γγ → Higgs observation in ultraperipheral ion collisions at the Future Circular Collider

## Abstract

We study the two-photon production of the Higgs boson, γγ → H, at the Future Circular Collider (FCC) in ultraperipheral PbPb and pPb collisions at √s_NN = 39 and 63 TeV. Signal and background events are generated with madgraph 5, including fluxes from the proton and lead ions in the equivalent photon approximation, yielding σ(γγ → H) = 1.75 nb and 1.5 pb in PbPb and pPb collisions respectively. We analyse the H → bb̄ decay mode including realistic reconstruction efficiencies for the final-state b-jets, showered and hadronized with pythia 8, as well as appropriate selection criteria to reduce the continuum backgrounds. Observation of PbPb → Pb H Pb (via γγ → H) is achievable in the first year with the expected FCC integrated luminosities.

Higgs boson; two-photon fusion; heavy-ion collisions; CERN; FCC.

## 1 Introduction

The observation of the predicted Higgs boson [1] in proton-proton collisions at the Large Hadron Collider [2, 3] has represented a breakthrough in our scientific understanding of the particles and forces in nature. A complete study of the properties of the scalar boson, including its couplings to all known particles, and searches of possible deviations indicative of physics beyond the Standard Model (SM), require a new collider facility with much higher center-of-mass (c.m.) energies [4]. The Future Circular Collider (FCC) is a post-LHC project at CERN, aiming at pp collisions at a c.m. energy of up to 100 TeV in a new 80–100 km tunnel with 16–20 T dipoles [5]. The FCC running plans with hadron beams (FCC-hh) also include heavy-ion operation at nucleon-nucleon c.m. energies of √s_NN = 39 TeV for PbPb and 63 TeV for pPb, with (monthly) integrated luminosities of 110 nb⁻¹ and 29 pb⁻¹ [6]. Such high collision energies and luminosities, factors of 7 and 30 times higher respectively than those reachable at the LHC, open up the possibility to study the production of the Higgs boson in nuclear collisions, both in central hadronic [7] as well as in ultraperipheral (electromagnetic) [8] interactions. The observation of the latter γγ → H process provides an independent measurement of the H-γγ coupling not based on Higgs decays but on its s-channel production mode. The measurement of exclusive Higgs production in ultraperipheral collisions (UPCs) [9] of pPb and PbPb beams has been studied in detail for LHC energies1 in ref. [8], although its observation there is unfeasible with the nominal luminosities (Fig. 1, left). We extend such studies here to FCC energies, where such an observation is warranted.

All charges accelerated at high energies generate electromagnetic fields which, in the equivalent photon approximation (EPA) [11], can be considered as quasireal photon beams2 [12]. The highest available photon energies are of the order of the inverse Lorentz-contracted radius of the source charge, ω_max ≈ γ/R_A, which at the FCC yields photon-photon collisions above 1 TeV (Table 1). In addition, since the photon flux scales as the squared charge of the beam, Z², two-photon cross sections are enhanced millions of times for ions (Z⁴ ≈ 4.5×10⁷ for PbPb) compared to proton or electron beams, thereby featuring the largest luminosities among all colliding systems (Fig. 1, left).
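To put rough numbers on the statements above, the following back-of-the-envelope sketch evaluates the maximum quasireal-photon energy ω_max ≈ γħc/R and the Z⁴ two-photon flux enhancement for PbPb beams at √s_NN = 39 TeV. The approximate lead radius of 7 fm and the use of the average nucleon mass are assumptions made here for illustration; the result is only an order-of-magnitude estimate, not a number quoted from Table 1.

```python
import math

hbar_c = 0.1973   # GeV * fm
m_N    = 0.9315   # average nucleon mass in GeV

def photon_max_energy(sqrt_s_nn_tev, radius_fm):
    """omega_max ~ gamma * hbar*c / R for one beam (result in GeV)."""
    gamma = (sqrt_s_nn_tev * 1e3 / 2.0) / m_N      # Lorentz factor of each beam
    return gamma * hbar_c / radius_fm

Z, R_pb = 82, 7.0                                   # lead charge and assumed radius (fm)
omega = photon_max_energy(39.0, R_pb)
print(f"omega_max ~ {omega:.0f} GeV per photon "
      f"-> gamma-gamma energies up to ~{2 * omega / 1e3:.1f} TeV")
print(f"two-photon flux enhancement Z^4 ~ {Z**4:.2e}")
```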
## 2 Theoretical setup

The madgraph 5 (v.2.5.4) [14] Monte Carlo (MC) event generator is used to compute the relevant cross sections from the convolution of the Weizsäcker-Williams EPA photon fluxes [11] for the proton and lead ion, and the H-γγ coupling parametrized in the Higgs effective field theory [15], following the implementation discussed in [8] with a more accurate treatment of the non hadronic-overlap correction. The proton flux is given by the energy spectrum f_γ/p(x), where x is the fraction of the beam energy carried by the photon [16]:

$$f_{\gamma/p}(x)=\frac{\alpha}{\pi}\,\frac{1-x+\tfrac{1}{2}x^{2}}{x}\int_{Q^{2}_{\min}}^{\infty}\frac{Q^{2}-Q^{2}_{\min}}{Q^{4}}\,|F(Q^{2})|^{2}\,\mathrm{d}Q^{2},$$ (1)

with F(Q²) the proton electromagnetic form factor, and the minimum momentum transfer Q²_min a function of x and the proton mass m_p, Q²_min ≈ x² m_p²/(1−x). The photon energy spectrum of the lead ion (Z = 82), integrated over impact parameter from b_min to infinity, is given by [17]:

$$f_{\gamma/\mathrm{Pb}}(x)=\frac{\alpha Z^{2}}{\pi}\,\frac{1}{x}\left[2\,x_{i}\,K_{0}(x_{i})\,K_{1}(x_{i})-x_{i}^{2}\left(K_{1}^{2}(x_{i})-K_{0}^{2}(x_{i})\right)\right],$$ (2)

where x_i is proportional to x and to the minimum impact parameter b_min, and K_0, K_1 are the modified Bessel functions of the second kind of zeroth and first order, related respectively to the emission of longitudinally and transversely polarized photons, the latter dominating for ultrarelativistic charges (γ ≫ 1). The dominant Higgs decay mode is H → bb̄, with a branching fraction of 58% as computed with hdecay [18]. The pythia 8.2 [19] MC generator was employed to shower and hadronize the two final-state b-jets, which are then reconstructed with the Durham algorithm [20] (exclusive two-jet final state) using fastjet 3.0 [21]. The same setup is used to generate the exclusive two-photon production of bb̄ and (possibly misidentified) cc̄ and light-quark (qq̄) jet pairs, which constitute the most important physical background for the measurement of the H → bb̄ channel.

## 3 Results

The total elastic Higgs boson cross sections in ultraperipheral PbPb and pPb collisions as a function of √s_NN are shown in Fig. 1 (right). We have assigned a conservative 20% uncertainty to the predicted cross sections to cover different charge form factors. At LHC energies, we find a slightly reduced cross section compared to the results of [8], due to a more accurate treatment of the non hadronic-overlap correction based on [22]. The predicted total Higgs boson cross sections are σ_H = 1.75 nb and 1.5 pb in PbPb and pPb collisions at √s_NN = 39 and 63 TeV which, for the nominal L_int = 110 nb⁻¹ and 29 pb⁻¹ luminosities per “year” (1-month run), imply 200 and 45 Higgs bosons produced (corresponding to 110 and 25 bosons in the bb̄ decay mode, respectively).

The main backgrounds are bb̄ pairs from the γγ continuum, as well as events where charm and light (q) quarks are misidentified as b-quarks. The irreducible bb̄ background over the relevant invariant-mass range is 20 times larger than the signal, but can be suppressed (as well as that from misidentified cc̄ and qq̄ pairs) via various kinematical cuts. The data analysis follows closely the similar LHC study [8], with the following reconstruction performances assumed: jet reconstruction over the relevant acceptance, 7% b-jet energy resolution (resulting in a dijet mass resolution of 6 GeV at the Higgs peak), 70% b-jet tagging efficiency, and 5% (1.5%) b-jet mistagging probability for a c (light-flavour q) quark. For the double b-jet final state of interest, these lead to a 50% efficiency for the MC-generated signal (S), and a total reduction of the misidentified cc̄ and qq̄ continuum backgrounds (B) by factors of 400 and 400 000. As proposed in [8], various simple kinematical cuts can be applied to enhance the S/B ratio.
As proposed in [8], various simple kinematical cuts can be applied to enhance the S/B ratio. Since the transverse momenta of the Higgs decay b-jets peak at p_T ≈ m_H/2, selecting events with at least one jet within p_T = 55–62.5 GeV suppresses 96% of the continuum backgrounds, while removing only half of the signal. One can also exploit the fact that the angular distribution of the Higgs decay b-jets in the helicity frame is isotropic in cos θ, i.e. each jet is independently emitted either in the same direction as the γγ pair or opposite to it, while the continua (with quarks propagating in the t- or u-channels) are peaked in the forward–backward directions. Thus, an additional cut on |cos θ| further suppresses the continuum contaminations by another 20% while leaving the signal untouched. The significance of the signal can then be computed from the final number of counts within ±1.4σ around the Gaussian Higgs peak (i.e. m_bb̄ ≈ 117–133 GeV) over the underlying dijet continuum. Table 2 summarizes the visible cross sections and the number of events after cuts for the nominal luminosities of each system.

In PbPb at √s_NN = 39 TeV, for the nominal integrated luminosity of 110 nb⁻¹ per run, we expect about 21 signal counts over 28 for the sum of backgrounds in a window m_bb̄ = 117–133 GeV around the Higgs peak. Reaching a statistical significance close to 5σ (Fig. 2, left) would require combining two different experiments (or doubling the luminosity in a single one). Similar estimates for pPb at 63 TeV (29 pb⁻¹) yield about 5 signal events after cuts, on top of a background of 6.7 continuum events. Reaching a 5σ significance for the observation of γγ → H production (Fig. 2, right) would in this case require running for about 8 months (instead of the nominal 1-month run per year), or running 4 months and combining two experiments. All the derived numbers of events and significances are based on the aforementioned set of kinematical cuts, and can likely be improved by using a more advanced multivariate analysis.

## 4 Summary

We have presented prospect studies for the measurement of the two-photon production of the Higgs boson in the H → bb̄ decay channel in ultraperipheral PbPb and pPb collisions at the FCC. Cross sections have been obtained at nucleon-nucleon c.m. energies of √s_NN = 39 and 63 TeV with madgraph 5, using the Pb (and proton) equivalent photon fluxes and requiring no hadronic overlap of the colliding particles. The b-quarks have been showered and hadronized with pythia 8, and reconstructed in an exclusive two-jet final state with the Durham algorithm. By assuming realistic jet reconstruction performances and (mis)tagging efficiencies, and applying appropriate kinematical cuts on the jet transverse momenta, the dijet mass, and the angles in the helicity frame, we can reconstruct the H → bb̄ signal on top of the dominant continuum background. The measurement would yield 21 (5) signal counts over 28 (7) continuum dijet pairs around the Higgs peak, in PbPb (pPb) collisions for their nominal integrated luminosities per run. Observation of photon-fusion Higgs production at the 5σ level is achievable in the first year by combining the measurements of two experiments (or doubling the luminosity in a single one) in PbPb, and by running for about 8 months (or running 4 months and combining two experiments) in the pPb case. The feasibility studies presented here confirm the interesting Higgs physics potential open for study in ultraperipheral ion collisions at the FCC, providing an independent measurement of the H-γγ coupling not based on Higgs decays but on an s-channel production mode.

Acknowledgments – P. R. T. acknowledges financial support from the CERN TH Department and from the FCC project.
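As a sanity check on the quoted significances, a naive counting estimate can be applied to the post-cut yields. The sketch below uses the simple S/√B figure of merit — an assumption on our side, since the study may rely on a more refined statistical treatment — together with the signal and background counts quoted above:

```python
from math import sqrt

def naive_significance(s, b):
    """Simple counting significance S/sqrt(B) for a cut-and-count mass window."""
    return s / sqrt(b)

# PbPb, 110 nb^-1 (one run, one experiment): 21 signal over 28 background
print(naive_significance(21, 28))            # ~4.0
# Doubling the luminosity (or combining two experiments) doubles S and B:
print(naive_significance(42, 56))            # ~5.6, i.e. above 5

# pPb, 29 pb^-1: 5 signal over 6.7 background
print(naive_significance(5, 6.7))            # ~1.9
# About 8 nominal one-month runs:
print(naive_significance(8 * 5, 8 * 6.7))    # ~5.5
```

These crude estimates reproduce the qualitative conclusions above: PbPb needs roughly a doubling of the data set to cross 5σ, and pPb needs of order 8 nominal runs.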
### Footnotes

1. A few older papers had already discussed the possibility to produce the Higgs boson in heavy-ion UPCs [10].
2. The emitted photons are almost on mass shell, with virtuality Q ≲ 1/R, where R is the radius of the charge, i.e. Q ≈ 0.28 GeV for protons (R ≈ 0.7 fm) and Q ≈ 0.06 GeV for nuclei (R_A ≈ 1.2 A^{1/3} fm) with mass number A > 16.

### References

1. F. Englert, R. Brout, Phys. Rev. Lett. 13 (1964) 321; P. W. Higgs, Phys. Rev. Lett. 13 (1964) 508.
2. S. Chatrchyan et al. [CMS Collaboration], Phys. Lett. B 716 (2012) 30.
3. G. Aad et al. [ATLAS Collaboration], Phys. Lett. B 716 (2012) 1.
4. D. d'Enterria, PoS ICHEP 2016 (2017) 434 [arXiv:1701.02663 [hep-ex]].
5. M. L. Mangano et al., CERN Yellow Report 1 (2017), doi:10.23731/CYRM-2017-003.1 [arXiv:1607.01831 [hep-ph]].
6. A. Dainese et al., CERN Yellow Report 3 (2017) 635, doi:10.23731/CYRM-2017-003.635 [arXiv:1605.01389 [hep-ph]]; D. d'Enterria et al., QM'17 Proceeds., Nucl. Phys. A 967 (2017) 888 [arXiv:1704.05891 [hep-ex]].
7. D. d'Enterria, Hard-Probes'16 Proceeds., Nucl. Part. Phys. Proc. 289-290 (2017) 237 [arXiv:1701.08047 [hep-ex]].
8. D. d'Enterria and J. P. Lansberg, Phys. Rev. D 81 (2010) 014004 [arXiv:0909.3047 [hep-ph]].
9. C. A. Bertulani and G. Baur, Phys. Rept. 163 (1988) 299; A. J. Baltz et al., Phys. Rept. 458 (2008) 1 [arXiv:0706.3356 [nucl-ex]].
10. M. Grabiak et al., J. Phys. G 15 (1989) L25; E. Papageorgiu, Phys. Rev. D 40 (1989) 92; M. Drees et al., Phys. Lett. B 223 (1989) 454; K. J. Abraham et al., Phys. Lett. B 251 (1990) 186.
11. C. von Weizsäcker, Z. Physik 88 (1934) 612; E. J. Williams, Phys. Rev. 45 (1934) 729; E. Fermi, Nuovo Cimento 2 (1925) 143.
12. S. J. Brodsky, T. Kinoshita, H. Terazawa, Phys. Rev. Lett. 25 (1970) 972; Phys. Rev. D 4 (1971) 1532.
13. D. d'Enterria, P. Rebello Teles, D. E. Martins, Proceeds. EDS-Blois'17, arXiv:1712.07023 [hep-ph].
14. J. Alwall et al., JHEP 09 (2007) 028 [arXiv:0706.2334 [hep-ph]].
15. M. A. Shifman, A. I. Vainshtein, M. B. Voloshin and V. I. Zakharov, Sov. J. Nucl. Phys. 30 (1979) 711 [Yad. Fiz. 30 (1979) 1368]; B. A. Kniehl and M. Spira, Z. Phys. C 69 (1995) 77; S. Dawson and R. Kauffman, Phys. Rev. D 49 (1994) 2298.
16. V. M. Budnev, I. F. Ginzburg, G. V. Meledin, V. G. Serbo, Phys. Rept. 15 (1975) 181.
17. J. D. Jackson, Classical Electrodynamics, 2nd edition, John Wiley & Sons (1975).
18. M. Spira, Nucl. Instrum. Meth. A 389 (1997) 357; A. Djouadi, J. Kalinowski and M. Spira, Comput. Phys. Commun. 108 (1998) 56; A. Djouadi, J. Kalinowski, M. Mühlleitner and M. Spira, arXiv:1003.1643 [hep-ph]; http://people.web.psi.ch/spira/hdecay/.
19. T. Sjöstrand et al., Comput. Phys. Commun. 191 (2015) 159.
20. S. Catani, Y. L. Dokshitzer, M. H. Seymour and B. R. Webber, Nucl. Phys. B 406 (1993) 187.
21. M. Cacciari, G. P. Salam and G. Soyez, Eur. Phys. J. C 72 (2012) 1896 [arXiv:1111.6097 [hep-ph]].
22. S. R. Klein et al., Comput. Phys. Commun. 212 (2017) 258 [arXiv:1607.03838 [hep-ph]].
web
auto_math_text
• ### Gaia Data Release 2: The Short Timescale Variability Processing and Analysis (1805.00747) May 3, 2018 astro-ph.IM

The Gaia DR2 short-timescale variable candidates sample results from the investigation of the first 22 months of Gaia $G$ per-CCD, $G_{BP}$ and $G_{RP}$ photometry, for a subsample of sources at the Gaia faint end ($G \sim 16.5 - 20\,$mag). For this first Gaia short-timescale variability search, we limit ourselves to the case of rapid, suspected periodic variability. Our study combines fast variability detection through variogram analysis, least-squares high-frequency search, and an empirical selection criterion based on various statistics and built from the investigation of specific sources seen through Gaia eyes (e.g. known variables or visually identified objects with peculiar features in their light curves). The progressive selection criterion definition, improvement and validation also make use of supplementary ground-based photometric monitoring, performed at the Flemish Mercator telescope in La Palma (Canary Islands, Spain) between August and November 2017. We publish a list of 3018 bona fide, suspected periodic, short-timescale variable candidates, spread all over the sky, with a contamination level from false positives and non-periodic variables of up to 10–20% in the Magellanic Clouds. Though its completeness is around 0.05%, the Gaia DR2 short-timescale variable sample recovers very interesting known short-period variables, such as Post Common Envelope Binaries or Cataclysmic Variables, and points to fascinating newly discovered variable sources. Several improvements in the short-timescale variability processing are considered for the future Gaia Data Releases, by enhancing the existing variogram and period-search algorithms or going one step beyond with the classification of the identified candidates. The encouraging outcome of our analysis demonstrates the power of the Gaia mission for such fast variability studies and opens great perspectives for this domain of astrophysics.

• ### Gaia eclipsing binary and multiple systems. Two-Gaussian models applied to OGLE-III eclipsing binary light curves in the Large Magellanic Cloud (1703.10597) March 30, 2017 astro-ph.SR, astro-ph.IM

The advent of large-scale multi-epoch surveys raises the need for automated light curve (LC) processing. This is particularly true for eclipsing binaries (EBs), which form one of the most populated types of variable objects. The Gaia mission, launched at the end of 2013, is expected to detect of the order of a few million EBs over a 5-year mission. We present an automated procedure to characterize EBs based on the geometric morphology of their LCs with two aims: first to study an ensemble of EBs on a statistical ground without the need to model the binary system, and second to enable the automated identification of EBs that display atypical LCs. We model the folded LC geometry of EBs using up to two Gaussian functions for the eclipses and a cosine function for any ellipsoidal-like variability that may be present between the eclipses. The procedure is applied to the OGLE-III data set of EBs in the Large Magellanic Cloud (LMC) as a proof of concept. The Bayesian information criterion is used to select the best model among models containing various combinations of those components, as well as to estimate the significance of the components.
Based on the two-Gaussian models, EBs with atypical LC geometries are successfully identified in two diagrams, using the Abbe values of the original and residual folded LCs, and the reduced $\chi^2$. Cleaning the data set from the atypical cases and further filtering out LCs that contain non-significant eclipse candidates, the ensemble of EBs can be studied on a statistical ground using the two-Gaussian model parameters. For illustration purposes, we present the distribution of projected eccentricities as a function of orbital period for the OGLE-III set of EBs in the LMC, as well as the distribution of their primary versus secondary eclipse widths.

• ### The LOFT Ground Segment (1408.6541) Aug. 27, 2014 astro-ph.IM

LOFT, the Large Observatory For X-ray Timing, was one of the ESA M3 mission candidates that completed their assessment phase at the end of 2013. LOFT is equipped with two instruments, the Large Area Detector (LAD) and the Wide Field Monitor (WFM). The LAD performs pointed observations of several targets per orbit (~90 minutes), providing roughly 80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about 100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT Burst Alert System additionally identifies on-board bright impulsive events (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book 1. We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving and distribution. Although LOFT was not selected for launch within the M3 call, its long assessment phase (>2 years) led to a very solid mission design and efficient planning of its ground operations.

• ### Automated classification of Hipparcos unsolved variables (1301.1545) Jan. 8, 2013 astro-ph.SR, astro-ph.IM

We present an automated classification of stars exhibiting periodic, non-periodic and irregular light variations. The Hipparcos catalogue of unsolved variables is employed to complement the training set of periodic variables of Dubath et al. with irregular and non-periodic representatives, leading to 3881 sources in total which describe 24 variability types. The attributes employed to characterize light-curve features are selected according to their relevance for classification. Classifier models are produced with random forests and a multistage methodology based on Bayesian networks, achieving overall misclassification rates under 12 per cent. Both classifiers are applied to predict variability types for 6051 Hipparcos variables associated with uncertain or missing types in the literature.

• ### The impact of Gaia and LSST on binary stars and exo-planets (1201.5140) Jan. 24, 2012 astro-ph.SR, astro-ph.IM

Two upcoming large-scale surveys, the ESA Gaia and LSST projects, will bring a new era in astronomy. The number of binary systems that will be observed and detected by these projects is enormous; estimates range from millions for Gaia to several tens of millions for LSST.
We review some of the tools that should be developed, as well as what these missions can contribute to the study of binaries and exoplanets through their astrometry, photometry, radial velocities, and alert systems.

• ### Hipparcos Variable Star Detection and Classification Efficiency (1107.3638) July 21, 2011 astro-ph.SR

A complete periodic star extraction and classification scheme is set up and tested with the Hipparcos catalogue. The efficiency of each step is derived by comparing the results with prior knowledge coming from the catalogue or from the literature. A combination of two variability criteria is applied in the first step to select 17 006 variability candidates from a complete sample of 115 152 stars. Our candidate sample turns out to include 10 406 known variables (i.e., 90% of the total of 11 597) and 6600 contaminating constant stars. A random forest classification is used in the second step to extract 1881 (82%) of the known periodic objects while entirely removing constant stars from the sample and limiting the contamination of non-periodic variables to 152 stars (7.5%). The confusion introduced by these 152 non-periodic variables is evaluated in the third step using the results of the Hipparcos periodic star classification presented in a previous study (Dubath et al. [1]).

• ### Random forest automated supervised classification of Hipparcos periodic variable stars (1101.2406) July 19, 2011 astro-ph.SR

We present an evaluation of the performance of an automated classification of the Hipparcos periodic variable stars into 26 types. The sub-sample with the most reliable variability types available in the literature is used to train supervised algorithms to characterize the type dependencies on a number of attributes. The most useful attributes evaluated with the random forest methodology include, in decreasing order of importance, the period, the amplitude, the V-I colour index, the absolute magnitude, the residual around the folded light-curve model, the magnitude distribution skewness and the amplitude of the second harmonic of the Fourier series model relative to that of the fundamental frequency. Random forests and a multi-stage scheme involving Bayesian network and Gaussian mixture methods lead to statistically equivalent results. In standard 10-fold cross-validation experiments, the rate of correct classification is between 90 and 100%, depending on the variability type. The main misclassification cases, up to a rate of about 10%, arise due to confusion between SPB and ACV blue variables and between eclipsing binaries, ellipsoidal variables and other variability types. Our training set and the predicted types for the other Hipparcos periodic stars are available online.

• ### Bottom Production (hep-ph/0003142) Aug. 29, 2001 hep-ph

We review the prospects for bottom production physics at the LHC.

• ### B decays at the LHC (hep-ph/0003238) March 25, 2000 hep-ph, hep-ex

We review the prospects for B decay studies at the LHC.
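To illustrate the random-forest attribute-ranking methodology described in the Hipparcos classification abstracts above, the following sketch trains a scikit-learn random forest on placeholder data; the feature names mirror the attributes listed in the abstract, but the data, class labels and hyperparameters are illustrative assumptions, not the published setup:

```python
# Illustrative sketch of random-forest classification with attribute importances,
# in the spirit of the Hipparcos studies summarized above.  The data are
# synthetic placeholders, not Hipparcos measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["log_period", "amplitude", "V_I_colour", "abs_magnitude",
            "model_residual", "mag_skewness", "harmonic_amp_ratio"]
X = rng.normal(size=(1000, len(features)))   # placeholder light-curve attributes
y = rng.integers(0, 3, size=1000)            # placeholder variability types

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)

# Attribute importances, analogous to the ranking quoted in the abstract
for name, imp in sorted(zip(features, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.3f}")
print("out-of-bag score:", clf.oob_score_)
```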
web
auto_math_text
Publication

Title: Evidence for the 125 GeV Higgs boson decaying to a pair of $\tau$ leptons

Abstract: A search for a standard model Higgs boson decaying into a pair of tau leptons is performed using events recorded by the CMS experiment at the LHC in 2011 and 2012. The dataset corresponds to an integrated luminosity of 4.9 fb(-1) at a centre-of-mass energy of 7 TeV and 19.7 fb(-1) at 8 TeV. Each tau lepton decays hadronically or leptonically to an electron or a muon, leading to six different final states for the tau-lepton pair, all considered in this analysis. An excess of events is observed over the expected background contributions, with a local significance larger than 3 standard deviations for m(H) values between 115 and 130 GeV. The best fit of the observed H -> tau tau signal cross section times branching fraction for m(H) = 125 GeV is 0.78 +/- 0.27 times the standard model expectation. These observations constitute evidence for the 125 GeV Higgs boson decaying to a pair of tau leptons.

Language: English
Source (journal): Journal of High Energy Physics. - Bristol
Publication: Bristol, 2014
ISSN: 1126-6708; 1029-8479 [online]
Volume/pages: 5 (2014), 72 p.
Article Reference: 104
ISI: 000336734300001
Medium: E-only publication
web
auto_math_text
# On the possibility of generating a 4-neutron resonance with a $T=3/2$ isospin 3-neutron force

Abstract: We consider the theoretical possibility to generate a narrow resonance in the four-neutron system as suggested by a recent experimental result. To that end, a phenomenological $T=3/2$ three-neutron force is introduced, in addition to a realistic $NN$ interaction. We inquire what the strength of the $3n$ force should be in order to generate such a resonance. The reliability of the three-neutron force in the $T=3/2$ channel is examined by analyzing its consistency with the low-lying $T=1$ states of $^4$H, $^4$He and $^4$Li and the $^3{\rm H} + n$ scattering. The {\it ab initio} solution of the $4n$ Schr\"{o}dinger equation is obtained using the complex scaling method with boundary conditions appropriate to the four-body resonances. We find that in order to generate narrow $4n$ resonant states a remarkably attractive $3N$ force in the $T=3/2$ channel is required.

Document type: Journal article
Published in: Physical Review C, American Physical Society, 2016, 93, pp.044004. 〈10.1103/PhysRevC.93.044004〉
Record: http://hal.in2p3.fr/in2p3-01313017
Contributor: Sophie Heurteau. Submitted on Monday, 9 May 2016 - 14:19:55; last modified on Thursday, 15 March 2018 - 01:35:22.

### Citation

E. Hiyama, R. Lazauskas, J. Carbonell, M. Kamimura. On the possibility of generating a 4-neutron resonance with a $T=3/2$ isospin 3-neutron force. Physical Review C, American Physical Society, 2016, 93, pp.044004. 〈10.1103/PhysRevC.93.044004〉. 〈in2p3-01313017〉
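For readers unfamiliar with the complex scaling method invoked in the abstract, the following one-dimensional toy sketch shows the basic idea: rotating the coordinate r → r e^{iθ} turns resonances into isolated complex eigenvalues. The potential, grid, and units are textbook-style assumptions and have nothing to do with the actual 4n Hamiltonian or the realistic NN + 3n forces used in the paper:

```python
# One-dimensional toy illustration of complex scaling: under r -> r*exp(i*theta)
# the kinetic term acquires exp(-2i*theta), and a resonance shows up as a complex
# eigenvalue E - i*Gamma/2 that stays roughly fixed as theta varies, while the
# discretized continuum rotates away.  Units: hbar = m = 1 (an assumption).
import numpy as np

def V(z):
    # barrier potential with a well-known narrow shape resonance (toy choice)
    return 7.5 * z**2 * np.exp(-z)

def complex_scaled_eigenvalues(theta, n=400, rmax=20.0):
    r = np.linspace(rmax / n, rmax, n)
    h = r[1] - r[0]
    lap = (np.diag(np.full(n, -2.0))
           + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    H = -0.5 * np.exp(-2j * theta) * lap + np.diag(V(r * np.exp(1j * theta)))
    return np.linalg.eigvals(H)

for theta in (0.2, 0.3):
    ev = complex_scaled_eigenvalues(theta)
    print(theta, np.round(sorted(ev, key=abs)[:6], 3))
```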
web
auto_math_text
Outlook: Eagle Bulk Shipping Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy: Sell. Time series to forecast n: 03 Mar 2023 for (n+16 weeks). Methodology: Modular Neural Network (Emotional Trigger/Responses Analysis)

## Abstract

Eagle Bulk Shipping Inc. Common Stock prediction model is evaluated with Modular Neural Network (Emotional Trigger/Responses Analysis) and Multiple Regression1,2,3,4 and it is concluded that the EGLE stock is predictable in the short/long term. According to price forecasts for the (n+16 weeks) period, the dominant strategy among neural network is: Sell

## Key Points

1. What is the use of Markov decision process?
2. Decision Making
3. Can neural networks predict stock market?

## EGLE Target Price Prediction Modeling Methodology

We consider Eagle Bulk Shipping Inc. Common Stock Decision Process with Modular Neural Network (Emotional Trigger/Responses Analysis) where A is the set of discrete actions of EGLE stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4

$$F(\text{Multiple Regression})^{5,6,7}=\begin{pmatrix}p_{a1} & p_{a2} & \dots & p_{1n}\\ \vdots & & & \vdots\\ p_{j1} & p_{j2} & \dots & p_{jn}\\ \vdots & & & \vdots\\ p_{k1} & p_{k2} & \dots & p_{kn}\\ \vdots & & & \vdots\\ p_{n1} & p_{n2} & \dots & p_{nn}\end{pmatrix}\times R(\text{Modular Neural Network (Emotional Trigger/Responses Analysis)})\times S(n)\;\to\;(n+16\ \text{weeks})\quad \sum_{i=1}^{n} r_i$$

n: time series to forecast; p: price signals of EGLE stock; j: Nash equilibria (Neural Network); k: dominated move; a: best response for target price.

For further technical information on how our model works, we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work?

## EGLE Stock Forecast (Buy or Sell) for (n+16 weeks)

Sample Set: Neural Network. Stock/Index: EGLE Eagle Bulk Shipping Inc. Common Stock. Time series to forecast n: 03 Mar 2023 for (n+16 weeks). According to price forecasts for the (n+16 weeks) period, the dominant strategy among neural network is: Sell

X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%

## IFRS Reconciliation Adjustments for Eagle Bulk Shipping Inc. Common Stock

1. If an entity previously accounted at cost (in accordance with IAS 39) for an investment in an equity instrument that does not have a quoted price in an active market for an identical instrument (i.e. a Level 1 input) (or for a derivative asset that is linked to and must be settled by delivery of such an equity instrument), it shall measure that instrument at fair value at the date of initial application. Any difference between the previous carrying amount and the fair value shall be recognised in the opening retained earnings (or other component of equity, as appropriate) of the reporting period that includes the date of initial application.
2. For a discontinued hedging relationship, when the interest rate benchmark on which the hedged future cash flows had been based is changed as required by interest rate benchmark reform, for the purpose of applying paragraph 6.5.12 in order to determine whether the hedged future cash flows are expected to occur, the amount accumulated in the cash flow hedge reserve for that hedging relationship shall be deemed to be based on the alternative benchmark rate on which the hedged future cash flows will be based.
3. An entity can also designate only changes in the cash flows or fair value of a hedged item above or below a specified price or other variable (a 'one-sided risk'). The intrinsic value of a purchased option hedging instrument (assuming that it has the same principal terms as the designated risk), but not its time value, reflects a one-sided risk in a hedged item. For example, an entity can designate the variability of future cash flow outcomes resulting from a price increase of a forecast commodity purchase. In such a situation, the entity designates only cash flow losses that result from an increase in the price above the specified level. The hedged risk does not include the time value of a purchased option, because the time value is not a component of the forecast transaction that affects profit or loss.
4. Paragraph 5.5.4 requires that lifetime expected credit losses are recognised on all financial instruments for which there has been a significant increase in credit risk since initial recognition. In order to meet this objective, if an entity is not able to group financial instruments for which the credit risk is considered to have increased significantly since initial recognition based on shared credit risk characteristics, the entity should recognise lifetime expected credit losses on a portion of the financial assets for which credit risk is deemed to have increased significantly. The aggregation of financial instruments to assess whether there are changes in credit risk on a collective basis may change over time as new information becomes available on groups of, or individual, financial instruments.

*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.

## Conclusions

Eagle Bulk Shipping Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Eagle Bulk Shipping Inc. Common Stock prediction model is evaluated with Modular Neural Network (Emotional Trigger/Responses Analysis) and Multiple Regression1,2,3,4 and it is concluded that the EGLE stock is predictable in the short/long term. According to price forecasts for the (n+16 weeks) period, the dominant strategy among neural network is: Sell

### EGLE Eagle Bulk Shipping Inc. Common Stock Financial Analysis*

| Rating | Short-Term | Long-Term Senior |
|---|---|---|
| Outlook* | Ba1 | Ba1 |
| Income Statement | Baa2 | Ba2 |
| Balance Sheet | Caa2 | B3 |
| Leverage Ratios | Baa2 | Baa2 |
| Cash Flow | Baa2 | B2 |
| Rates of Return and Profitability | Caa2 | B3 |

*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does the neural network examine financial reports and understand the financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 74 out of 100 with 746 signals.

## References

1. Künzel S, Sekhon J, Bickel P, Yu B. 2017. Meta-learners for estimating heterogeneous treatment effects using machine learning. arXiv:1706.03461 [math.ST]
2. Imbens G, Wooldridge J. 2009. Recent developments in the econometrics of program evaluation. J. Econ. Lit. 47:5–86
3. Candès E, Tao T. 2007. The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35:2313–51
4. Abadie A, Imbens GW. 2011. Bias-corrected matching estimators for average treatment effects. J. Bus. Econ. Stat. 29:1–11
5. Theocharous G, Hallak A. 2013. Lifetime value marketing using reinforcement learning. RLDM 2013, p. 19
6. Mnih A, Kavukcuoglu K. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, Vol. 26, ed. Z Ghahramani, M Welling, C Cortes, ND Lawrence, KQ Weinberger, pp. 2265–73. San Diego, CA: Neural Inf. Process. Syst. Found.
7. Ashley R. 1983. On the usefulness of macroeconomic forecasts as inputs to forecasting models. Journal of Forecasting 2:211–223

## Frequently Asked Questions

Q: What is the prediction methodology for EGLE stock?
A: EGLE stock prediction methodology: We evaluate the prediction models Modular Neural Network (Emotional Trigger/Responses Analysis) and Multiple Regression.
Q: Is EGLE stock a buy or sell?
A: The dominant strategy among neural network is to Sell EGLE Stock.
Q: Is Eagle Bulk Shipping Inc. Common Stock stock a good investment?
A: The consensus rating for Eagle Bulk Shipping Inc. Common Stock is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of EGLE stock?
A: The consensus rating for EGLE is Sell.
Q: What is the prediction period for EGLE stock?
A: The prediction period for EGLE is (n+16 weeks).
web
auto_math_text
# 11.1 Temperature (Page 5/14)

This law was postulated in the 1930s, after the first and second laws of thermodynamics had been developed and named. It is called the zeroth law because it comes logically before the first and second laws (discussed in Thermodynamics). An example of this law in action is seen in babies in incubators: they normally have very few clothes on, so to an observer they look as if they may not be warm enough. However, the temperature of the air, the cot, and the baby is the same, because they are in thermal equilibrium, which is accomplished by maintaining air temperature to keep the baby comfortable.

Does the temperature of a body depend on its size?

No, the system can be divided into smaller parts each of which is at the same temperature. We say that the temperature is an intensive quantity. Intensive quantities are independent of size.

## Section summary

• Temperature is the quantity measured by a thermometer.
• Temperature is related to the average kinetic energy of atoms and molecules in a system.
• Absolute zero is the temperature at which there is no molecular motion.
• There are three main temperature scales: Celsius, Fahrenheit, and Kelvin.
• Temperatures on one scale can be converted to temperatures on another scale using the following equations:
$T_{\mathrm{°F}} = \frac{9}{5}\,T_{\mathrm{°C}} + 32$
$T_{\mathrm{°C}} = \frac{5}{9}\left(T_{\mathrm{°F}} - 32\right)$
$T_{\mathrm{K}} = T_{\mathrm{°C}} + 273.15$
$T_{\mathrm{°C}} = T_{\mathrm{K}} - 273.15$
• Systems are in thermal equilibrium when they have the same temperature.
• Thermal equilibrium occurs when two bodies are in contact with each other and can freely exchange energy.
• The zeroth law of thermodynamics states that when two systems, A and B, are in thermal equilibrium with each other, and B is in thermal equilibrium with a third system, C, then A is also in thermal equilibrium with C.

## Conceptual questions

What does it mean to say that two systems are in thermal equilibrium?

Give an example of a physical property that varies with temperature and describe how it is used to measure temperature.

When a cold alcohol thermometer is placed in a hot liquid, the column of alcohol goes down slightly before going up. Explain why.

If you add boiling water to a cup at room temperature, what would you expect the final equilibrium temperature of the unit to be? You will need to include the surroundings as part of the system. Consider the zeroth law of thermodynamics.

## Problems & Exercises

What is the Fahrenheit temperature of a person with a 39.0°C fever?

102°F

Frost damage to most plants occurs at temperatures of 28.0°F or lower. What is this temperature on the Kelvin scale?

To conserve energy, room temperatures are kept at 68.0°F in the winter and 78.0°F in the summer. What are these temperatures on the Celsius scale?

20.0°C and 25.6°C

A tungsten light bulb filament may operate at 2900 K. What is its Fahrenheit temperature? What is this on the Celsius scale?

The surface temperature of the Sun is about 5750 K. What is this temperature on the Fahrenheit scale?
9890°F

One of the hottest temperatures ever recorded on the surface of Earth was 134°F in Death Valley, CA. What is this temperature in Celsius degrees? What is this temperature in Kelvin?

(a) Suppose a cold front blows into your locale and drops the temperature by 40.0 Fahrenheit degrees. How many degrees Celsius does the temperature decrease when there is a 40.0°F decrease in temperature? (b) Show that any change in temperature in Fahrenheit degrees is nine-fifths the change in Celsius degrees.

(a) 22.2°C

(b) $\Delta T(\mathrm{°F}) = T_2(\mathrm{°F}) - T_1(\mathrm{°F}) = \frac{9}{5}T_2(\mathrm{°C}) + 32.0 - \left(\frac{9}{5}T_1(\mathrm{°C}) + 32.0\right) = \frac{9}{5}\left(T_2(\mathrm{°C}) - T_1(\mathrm{°C})\right) = \frac{9}{5}\Delta T(\mathrm{°C})$

(a) At what temperature do the Fahrenheit and Celsius scales have the same numerical value? (b) At what temperature do the Fahrenheit and Kelvin scales have the same numerical value?
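The conversions used in these exercises follow directly from the equations in the section summary; the short Python sketch below (an illustrative aid, not part of the original text) reproduces a few of the answers quoted above:

```python
# Temperature-scale conversions from the section summary, with spot checks
# against some of the worked answers given in the exercises.

def c_to_f(t_c):  return 9.0 / 5.0 * t_c + 32.0
def f_to_c(t_f):  return 5.0 / 9.0 * (t_f - 32.0)
def c_to_k(t_c):  return t_c + 273.15
def k_to_c(t_k):  return t_k - 273.15

print(c_to_f(39.0))                  # fever: ~102.2 F, quoted as 102 F
print(c_to_k(f_to_c(28.0)))          # frost damage: ~270.9 K
print(f_to_c(68.0), f_to_c(78.0))    # 20.0 C and 25.6 C
print(c_to_f(k_to_c(5750)))          # solar surface: ~9890 F
```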
web
auto_math_text
# H∞ Control of T-S Fuzzy Systems Using a Fuzzy Basis-Function-Dependent Lyapunov Function
(퍼지 기저함수에 종속적인 Lyapunov 함수를 이용한 T-S 퍼지 시스템의 H∞ 제어)

• 최현철 (Department of Electrical and Computer Engineering, Seoul National University) ;
• 좌동경 (Department of Electronics Engineering, Ajou University) ;
• 홍석교 (Department of Electronics Engineering, Ajou University)
• Published : 2008.07.01

#### Abstract

This paper proposes an $H_{\infty}$ controller design method for Takagi-Sugeno (T-S) fuzzy systems using a fuzzy basis-function-dependent Lyapunov function. Sufficient conditions for the guaranteed $H_{\infty}$ performance of the T-S fuzzy control system are given in terms of linear matrix inequalities (LMIs). These LMI conditions are further used for a convex optimization problem in which the $H_{\infty}$-norm of the closed-loop system is to be minimized. To facilitate the basis-function-dependent Lyapunov function approach and thus improve the closed-loop system performance, additional decision variables are introduced in the optimization problem, which provide an additional degree of freedom and thus can enlarge the solution space of the problem. Numerical examples show the effectiveness of the proposed method.

#### References

1. T. Takagi and M. Sugeno, "Fuzzy identification of systems and its applications to modeling and control," IEEE Trans. Syst., Man, Cybern., vol. 15, no. 1, pp. 116-132, 1985
2. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in Control Theory. Philadelphia, PA: SIAM, 1994
3. K. Tanaka, T. Ikeda, and H.O. Wang, "Robust stabilization of a class of uncertain nonlinear systems via fuzzy control: quadratic stabilizability, $H_{\infty}$ control theory, and linear matrix inequalities," IEEE Trans. Fuzzy Syst., vol. 4, no. 1, pp. 1-13, 1996 https://doi.org/10.1109/91.481840
4. H.O. Wang, K. Tanaka, and M.F. Griffin, "An approach to fuzzy control of nonlinear systems: stability and design issues," IEEE Trans. Fuzzy Syst., vol. 4, no. 1, pp. 14-23, 1996 https://doi.org/10.1109/91.481841
5. M. Johansson, K.-E. Arzen, and A. Rantzer, "Piecewise quadratic stability of fuzzy systems," IEEE Trans. Fuzzy Syst., vol. 7, pp. 713-722, 1999 https://doi.org/10.1109/91.811241
6. G. Feng and D. Sun, "Generalized $H_2$ controller synthesis of fuzzy dynamic systems based on piecewise Lyapunov functions," IEEE Trans. Circuits and Systems-I: Fundamental Theory and Applications, vol. 49, no. 12, pp. 1843-1850, 2002 https://doi.org/10.1109/TCSI.2002.805718
7. G. Feng, "Controller synthesis of fuzzy dynamic systems based on piecewise Lyapunov functions," IEEE Trans. Fuzzy Syst., vol. 11, no. 5, pp. 605-612, 2003 https://doi.org/10.1109/TFUZZ.2003.817837
8. K. Tanaka, T. Hori, and H.O. Wang, "A multiple Lyapunov function approach to stabilization of fuzzy control systems," IEEE Trans. Fuzzy Syst., vol. 11, no. 4, pp. 582-589, 2003 https://doi.org/10.1109/TFUZZ.2003.814861
9. D.J. Choi and P. Park, "$H_{\infty}$ state-feedback controller design for discrete-time fuzzy systems using fuzzy weighting-dependent Lyapunov functions," IEEE Trans. Fuzzy Syst., vol. 11, no. 2, pp. 271-278, 2003 https://doi.org/10.1109/TFUZZ.2003.809903
10. M.C. de Oliveira, J. Bernussou, and J.C. Geromel, "A new discrete-time robust stability condition," Syst. Contr. Lett., vol. 37, pp. 261-265, 1999 https://doi.org/10.1016/S0167-6911(99)00035-3
11. P.J. de Oliveira, R.C.L.F. Oliveira, V.J.S. Leite, V.F. Montagner, and P.L.D. Peres, "$H_{\infty}$ guaranteed cost computation by means of parameter-dependent Lyapunov functions," Automatica, vol. 40, pp. 1053-1061, 2004 https://doi.org/10.1016/j.automatica.2004.01.025
12. S. Zhou, G. Feng, J. Lam, and S.
Xu, "Robust $H{\infty}$ control for discrete-time fuzzy systems via basis-dependent Lyapunov functions," Information Sciences, vol. 174, pp. 197-217, 2005 https://doi.org/10.1016/j.ins.2004.07.015 13. S. Zhou and G. Feng, "Generalised $H_2$ controller synthesis for uncertain discrete-time fuzzy systems via basis-dependent Lyapunov functions," IEE Proc.-Control Theory Appl., vol. 153, no. 1, pp. 74-80, 2006 https://doi.org/10.1049/ip-cta:20045164 14. J. Lam and S. Zhou, "Dynamic output feedback $H{\infty}$ control of discrete-time fuzzy systems: a fuzzy-basis-dependent Lyapunov function approach," Int. J. Systems Science, vol. 38, no. 1, pp. 25-37, 2007 https://doi.org/10.1080/00207720601042967 15. B. Ding, H. Sun, and P. Yang, "Further studies on LMI-based relaxed stabilization conditions for nonlinear systems in Takagi-Sugeno's form," Automatica, vol. 42, pp. 503-508, 2006 https://doi.org/10.1016/j.automatica.2005.11.005 16. E. Kim and H. Lee, "New approaches to relaxed quadratic stability condition of fuzzy control systems," IEEE Trans. Fuzzy Syst., vol. 8, no. 5, pp. 523-534, 2000 https://doi.org/10.1109/91.873576 17. D.C.W. Ramos and P.L.D. Peres, "A less conservative LMI condition for the robust stability of discrete-time uncertain systems," Syst. Contr. Lett., vol. 43, pp. 371-378, 2001 https://doi.org/10.1016/S0167-6911(01)00120-7 18. P. Gahinet, A. Nemirovski, A.J. Laub, and M. Chilali, LMJ Control Toolbox User's Guide. Natick, MA: The MathWorks, Inc., 1995
web
auto_math_text
# A Proof-Theoretic Subsumption Reasoner for Hybrid EL-TBoxes

Franz Baader, Novak Novakovic, Boontawee Suntisrivaraporn
A Proof-Theoretic Subsumption Reasoner for Hybrid EL-TBoxes
Proceedings of the 2008 International Workshop on Description Logics (DL2008), volume 353 of CEUR-WS, 2008

• Abstract
Hybrid EL-TBoxes combine general concept inclusions (GCIs), which are interpreted with descriptive semantics, with cyclic concept definitions, which are interpreted with greatest fixpoint (gfp) semantics. We introduce a proof-theoretic approach that yields a polynomial-time decision procedure for subsumption in EL w.r.t. hybrid TBoxes, and present preliminary experimental results regarding the performance of the reasoner Hyb that implements this decision procedure.
• Research Group: Automatentheorie (Automata Theory)

@inproceedings{ BaaNovSun-DL-08,
  author = {Franz {Baader} and Novak {Novakovic} and Boontawee {Suntisrivaraporn}},
  booktitle = {Proceedings of the 2008 International Workshop on Description Logics ({DL2008})},
  series = {CEUR-WS},
  title = {A Proof-Theoretic Subsumption Reasoner for Hybrid $\mathcal{EL}$-{TBoxes}},
  volume = {353},
  year = {2008},
}
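To give a flavour of the kind of polynomial-time, consequence-based reasoning referred to in the abstract, here is a toy saturation procedure for a tiny fragment of EL (atomic names and conjunctions on the left-hand side only, no existential restrictions, no gfp semantics). It is a didactic sketch with made-up example concepts, not the Hyb reasoner or the hybrid-TBox calculus of the paper:

```python
# Tiny saturation-style subsumption check for a fragment of EL:
# axioms of the form  A ⊑ B   and   (A1 ⊓ A2) ⊑ B   over atomic names.
# subs[C] collects all atomic concepts that C is (derivably) subsumed by.

def saturate(atomic_axioms, conj_axioms, concepts):
    subs = {c: {c} for c in concepts}          # every concept subsumes itself
    changed = True
    while changed:                             # polynomial: each pass only adds pairs
        changed = False
        for (a, b) in atomic_axioms:
            for c in concepts:
                if a in subs[c] and b not in subs[c]:
                    subs[c].add(b); changed = True
        for ((a1, a2), b) in conj_axioms:
            for c in concepts:
                if a1 in subs[c] and a2 in subs[c] and b not in subs[c]:
                    subs[c].add(b); changed = True
    return subs

concepts = ["Human", "Parent", "Mother", "Female"]
atomic = [("Mother", "Female"), ("Mother", "Parent"), ("Parent", "Human")]
conj = [(("Female", "Parent"), "Mother")]
subs = saturate(atomic, conj, concepts)
print("Human" in subs["Mother"], "Mother" in subs["Parent"])   # True False
```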
web
auto_math_text
ARTICLE

# Mathematical Models to Describe the Kinetic Behavior of Staphylococcus aureus in Jerky

Jimyeong Ha1,2, Jeeyeon Lee1,2, Soomin Lee1,2, Sejeong Kim1,2, Yukyung Choi1, Hyemin Oh1, Yujin Kim1, Yewon Lee1, Yeongeun Seo1, Yohan Yoon1,2,*
1Department of Food and Nutrition, Sookmyung Women's University, Seoul 04310, Korea
2Risk Analysis Research Center, Sookmyung Women's University, Seoul 04310, Korea
*Corresponding author: Yohan Yoon, Department of Food and Nutrition, Sookmyung Women's University, Seoul 04310, Korea. Tel: +82-2-2077-7585 Fax: +82-2-710-9479 E-mail: yyoon@sm.ac.kr

© Korean Society for Food Science of Animal Resources. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Mar 28, 2019; Revised: Apr 04, 2019; Accepted: Apr 04, 2019. Published Online: Jun 30, 2019

## Abstract

The objective of this study was to develop mathematical models for describing the kinetic behavior of Staphylococcus aureus (S. aureus) in seasoned beef jerky. Seasoned beef jerky was cut into 10-g pieces. Next, 0.1 mL of S. aureus ATCC13565 was inoculated into the samples to obtain 3 Log CFU/g, and the samples were stored aerobically at 10°C, 20°C, 25°C, 30°C, and 35°C for 600 h. S. aureus cell counts were enumerated on Baird-Parker agar during storage. To develop a primary model, the Weibull model was fitted to the cell count data to calculate Delta (required time for the first decimal reduction) and ρ (shape of curves). For secondary modeling, a polynomial model was fitted to the Delta values as a function of storage temperature. To evaluate the accuracy of the model prediction, the root mean square error (RMSE) was calculated by comparing the predicted data with the observed data. The surviving S. aureus cell counts decreased at all storage temperatures. The Delta values were longer at 10°C, 20°C, and 25°C than at 30°C and 35°C. The secondary model described the temperature effect on Delta well, with an R² value of 0.920. In validation analysis, an RMSE value of 0.325 suggested that the model performance was appropriate. These results indicate that S. aureus in beef jerky survives for a long period at low storage temperatures and that the model developed in this study is useful for describing the kinetic behavior of S. aureus in seasoned beef jerky.

Keywords: jerky; mathematical model; Staphylococcus aureus; Weibull model

## Introduction

Jerky is a nutritional snack with a high protein content and light weight, and thus it is consumed by many people (Holley, 1985). It is also easy to store because of its long shelf-life and low Aw (Calicioglu et al., 2003). However, outbreaks of foodborne illness have occurred in many countries (Eidson et al., 2000; Keene et al., 1997). These outbreaks may be caused by cross-contamination during jerky processing, molding, packaging, and cutting. Also, most jerkies are made by small companies, which often have difficulty with food hygiene management. Thus, foodborne pathogen growth needs to be simulated in jerky for exposure assessment. Staphylococcus aureus can produce enterotoxin, leading to foodborne intoxication (Le Loir et al., 2003). Generally, the symptoms of foodborne illness include abdominal cramps, vomiting, and diarrhea (Jones et al., 2002).
S. aureus can grow under various conditions, such as a wide range of temperatures and pH values and at low Aw (Bergdoll, 1989; Schmitt et al., 1990), and most S. aureus isolates from food exhibit antimicrobial resistance (Can et al., 2017). The pathogen is commonly found on human skin (Otto, 2008), and may be cross-contaminated from human hands to jerky. Thus, there is a high possibility of jerky contamination with S. aureus. Predictive models are useful for estimating microbial growth or death in food using mathematical models (Zwietering et al., 1996). The purpose of a predictive model is to secure food safety in advance by identifying risk factors (Yoon, 2010). A primary model describes changes in bacterial cell counts over storage time to calculate kinetic parameters such as growth rate and lag phase duration (Ha et al., 2016). A secondary model describes the effects of environmental factors such as pH, Aw, and temperature on the kinetic parameters (Buchanan, 1993; Ha et al., 2016). Therefore, the objective of this study was to develop mathematical models for describing the kinetic behavior of S. aureus in beef jerky.

## Materials and Methods

Preparation of inocula

S. aureus ATCC13565 was cultured in 10 mL of tryptic soy broth (TSB; BD Biosciences, Franklin Lakes, NJ, USA) at 37°C for 24 h. For subculture, 0.1 mL of the culture was transferred into 10 mL fresh TSB at 37°C for 24 h. The sample was centrifuged at 1,912×g and 4°C for 15 min and washed twice with phosphate-buffered saline (PBS: pH 7.4; 0.2 g of KH2PO4, 1.5 g of Na2HPO4·7H2O, 8.0 g of NaCl, and 0.2 g of KCl in 1 L of distilled water). The supernatants were discarded, and the cell pellets were resuspended in PBS. Cell suspensions were diluted with PBS to 3–4 Log CFU/mL for inoculation.

Development of predictive model

Seasoned beef jerky was purchased from an online shop in Korea. Ten-gram portions of the samples were placed into sterile filter bags (3M, St. Paul, MN, USA), and 0.1-mL aliquots of S. aureus were dotted on several places of the beef jerky surface to obtain an inoculation level of 3 Log CFU/g in the sample bags. The samples were rubbed 20 times and the sample bags were sealed, followed by aerobic storage at 10°C (600 h), 20°C (600 h), 25°C (480 h), 30°C (192 h), and 35°C (96 h). These time intervals were determined according to the time at which S. aureus cell counts fell below the detection limit. Beef jerky samples were analyzed at different time intervals. Thirty milliliters of 0.1% buffered peptone water (BPW; BD Biosciences) were added to each sample and homogenized with a BagMixer (Interscience, St. Nom, France) for 90 s. The homogenates were serially diluted with BPW, and 0.1 mL of the diluents was plated onto Baird-Parker agar (MB Cell, Los Angeles, CA, USA) for S. aureus; the plates were incubated at 37°C for 48 h. Typical colonies on the plates were counted, and the Weibull model was fitted to the S. aureus cell count data (Van Boekel, 2002):

$\log(N) = \log(N_0) - (t/\delta)^{\rho}$

where N0 is the initial cell count, ρ is the shape of the curve, and δ is the time required for the first decimal reduction. The polynomial model (δ = a0 + a1×T + a2×T²) was used to evaluate the effect of storage temperature on δ.
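A minimal sketch of this primary/secondary modelling workflow is given below; the survival data are synthetic placeholders and the scipy/numpy fitting routines are illustrative stand-ins for the software actually used in the study (only the δ values are taken from Table 3):

```python
# Primary model: Weibull survival curve  log N = log N0 - (t/delta)^rho,
# fitted at one storage temperature; secondary model: delta as a quadratic
# polynomial in temperature.  Cell-count data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def weibull_logN(t, logN0, delta, rho):
    return logN0 - (t / delta) ** rho

# --- primary model (illustrative data for a single temperature) ---
t = np.array([0, 48, 96, 144, 240, 360, 480], dtype=float)   # storage time, h
logN = np.array([3.0, 2.8, 2.5, 2.0, 1.3, 0.7, 0.5])          # log CFU/g (synthetic)
popt, _ = curve_fit(weibull_logN, t, logN, p0=[3.0, 120.0, 0.7],
                    bounds=([0.0, 1e-3, 1e-3], [10.0, 1e4, 5.0]))
logN0_fit, delta_fit, rho_fit = popt

# --- secondary model: delta = b*T**2 + a*T + c ---
temps  = np.array([10, 20, 25, 30, 35], dtype=float)
deltas = np.array([99.335, 128.950, 126.450, 83.910, 45.670])  # from Table 3
b, a, c = np.polyfit(temps, deltas, 2)

# --- validation metric (RMSE between observed and predicted log counts) ---
def rmse(observed, predicted):
    obs, pred = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))
```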
Validation

S. aureus cell count data were obtained at 15°C and 23°C in additional experiments to evaluate the model performance. These observed data were compared to predicted data, which were calculated from the predictive model. The differences between the observed and predicted data were quantified by calculating the root mean square error (RMSE) (Baranyi et al., 1996):

$\mathrm{RMSE} = \sqrt{\tfrac{1}{n}\sum(\mathrm{observed}-\mathrm{predicted})^2}$

where n represents the number of data points.

Statistical analysis

The experimental data were analyzed with the general linear model procedure of SAS® version 9.3 (SAS Institute, Inc., Cary, NC, USA). The mean comparisons were performed by a pairwise t-test at α=0.05.

## Results and Discussion

Because various types of jerky are available, made from different meats and marinades, the behavior of S. aureus may differ among jerkies. Thus, a predictive model should be developed for each jerky type to describe the behavior of S. aureus. However, this effort requires a long time and is costly. Developing a model with the jerky type that allows the highest S. aureus growth may be appropriate for covering the most severe case, which would save time and expense. To determine a model product for developing a predictive model, we examined the pH and water activities of 75 samples of 15 original jerky products (Table 1) and 50 samples of 10 seasoned jerky products (Table 2). The pH values were highly similar among the samples (6.13–6.17), but the water activities were higher in the seasoned jerky (0.810±0.045) than in the original jerky (0.656±0.134) (Tables 1 and 2). The 10 seasoned jerky products contained sodium nitrite, potassium sorbate, and sodium sorbate. The growth of most bacteria is inhibited when Aw is reduced. In particular, the growth of S. aureus is inhibited when Aw is less than 0.850 in an aerobic environment (Jay, 1992). Holley (1985) indicated that the Aw of jerky was 0.620 when stored at refrigeration temperature for 26 days, but there was no significant difference in S. aureus cell counts compared to the cell counts on day 0. This result suggests that S. aureus can survive in beef jerky even if Aw is less than 0.850. Additionally, Lee et al. (2016) indicated that S. aureus did not grow under vacuum conditions. Hence, we developed predictive models using the seasoned beef jerky products as a model product under aerobic conditions to predict the most severe case of S. aureus growth.

Table 1. General information of original jerky samples purchased from online shops

| Sample | Meat (%) | Aw | pH |
|---|---|---|---|
| A | Beef (85.82) | 0.739 | 6.17±0.05 |
| B | Beef (87.00) | 0.811 | 5.97±0.07 |
| C | Beef (85.23) | 0.747 | 6.12±0.07 |
| D | Chicken (88.00) | 0.620 | 6.17±0.04 |
| E | Beef (85.06) | 0.771 | 6.09±0.08 |
| F | Beef (85.76) | 0.770 | 5.71±0.04 |
| G | Beef (85.76) | 0.792 | 5.92±0.05 |
| H | Beef (91.59) | 0.520 | 6.19±0.03 |
| I | Chicken (90.12) | 0.833 | 6.52±0.04 |
| J | Beef (88.33) | 0.481 | 6.13±0.03 |
| K | Beef (86.13) | 0.479 | 6.22±0.09 |
| L | Pork (94.00) | 0.545 | 6.39±0.03 |
| M | Beef (88.47) | 0.703 | 6.06±0.04 |
| N | Beef (85.27) | 0.504 | 6.57±0.07 |
| O | Chicken (87.00) | 0.527 | 6.30±0.04 |
| Average | | 0.656 | 6.17±0.22 |

Table 2. General information of seasoned jerky samples purchased from online shops

| Sample | Meat (%) | Aw | pH |
|---|---|---|---|
| a | Beef (93.51) | 0.833 | 6.12±0.11 |
| b | Beef (86.20) | 0.834 | 5.97±0.09 |
| c | Beef (85.07) | 0.813 | 5.95±0.05 |
| d | Beef (85.14) | 0.792 | 5.90±0.09 |
| e | Beef (86.76) | 0.808 | 5.99±0.02 |
| f | Chicken (87.00) | 0.718 | 6.33±0.03 |
| g | Beef (85.07) | 0.804 | 6.46±0.03 |
| h | Beef (85.22) | 0.767 | 6.09±0.02 |
| i | Beef (88.15) | 0.858 | 6.16±0.04 |
| j | Beef (90.12) | 0.874 | 6.36±0.06 |
| Average | | 0.810 | 6.13±0.19 |

S. aureus-inoculated seasoned beef jerky samples were stored in aerobic packaging at 10°C, 20°C, 25°C, 30°C, and 35°C. The cell counts gradually decreased at 10°C and 20°C, but a tail effect was observed at 10°C, and
S. aureus cell counts survived through the end of storage at 20°C (Fig. 1). However, the S. aureus cell counts greatly decreased as the temperature was increased to 25°C, 30°C, and 35°C (Fig. 1). The cell counts decreased to below the detection limit (0.48 Log CFU/g) after 432, 144, and 120 h at 20°C, 25°C, and 30°C, respectively (Fig. 1).

Fig. 1. Cell counts of Staphylococcus aureus in jerky during aerobic storage at 10°C, 20°C, 25°C, 30°C, and 35°C. Symbol, observed cell counts; line, fitted line with the Weibull model (van Boekel, 2002).

To describe the kinetic behavior of S. aureus in beef jerky, primary models were developed; the R² values ranged from 0.868 to 0.967, indicating that the developed primary models were appropriate. These primary models showed that the δ values generally decreased as temperature increased (Table 3). This result agrees with those of Moon et al. (2017), who showed that S. aureus in dried julienned squid survived longer at 10°C than at 35°C. These results suggest that if S. aureus contaminates beef jerky stored at low temperature, the pathogen can survive for a long time and cause food safety issues. Because the S. aureus cell counts decreased as shown in Fig. 1, the ρ values were less than 1, indicating that all curves were concave (Coroller et al., 2006). To evaluate the effect of temperature on δ, a secondary model was developed, with an R² of 0.920 (Fig. 2), indicating that the developed model was appropriate. The fitted equation was δ = −4.4271 + 13.9841×T − 0.3605×T² (Fig. 2). The secondary model showed that the δ values were generally influenced by temperature. The RMSE value was calculated to evaluate model performance. A value close to zero indicates that the predicted values are the same as the observed values (Kim et al., 2017). In this study, the value was 0.326, indicating that the developed models were appropriate for describing the kinetic behavior of S. aureus in beef jerky.

Fig. 2. δ values from the primary model and the fitted line by a secondary model describing the effect of temperature on δ for Staphylococcus aureus in jerky.

Table 3. δ and ρ calculated by the Weibull model for Staphylococcus aureus survival in jerky during aerobic storage at 10°C, 20°C, 25°C, 30°C, and 35°C

| Kinetic parameter | 10°C | 20°C | 25°C | 30°C | 35°C |
|---|---|---|---|---|---|
| δ | 99.335±2.072^B | 128.950±15.910^A | 126.450±8.556^A | 83.910±6.986^B | 45.670±5.586^C |
| ρ | 0.432±0.037^C | 0.671±0.046^B | 0.611±0.021^B | 0.916±0.020^A | 0.666±0.074^B |
| R² | 0.869 | 0.931 | 0.954 | 0.868 | 0.967 |

δ, required time for the first decimal reduction; ρ, shape of curve. A–C Means within the same row with different superscript letters are significantly different (p<0.05).

In conclusion, the developed predictive models are useful for describing the kinetic behavior of S. aureus in beef jerky. Additionally, because the model beef jerky was selected according to the optimum growth conditions for S. aureus, the developed models can be applied to other jerkies. Although beef jerky has a low Aw, if S. aureus contaminates the beef jerky, the cells can survive for a long period at low temperature and cause food safety issues. Therefore, beef jerky should not be considered a microbiologically safe food, and cross-contamination should be controlled during processing.

## Notes

Conflict of Interest
The authors declare no potential conflict of interest.

## Acknowledgements

This research was supported by a grant (16162MFDS584) from the Ministry of Food and Drug Safety, Korea in 2016.

## Notes

Author Contributions
Conceptualization: Ha J, Yoon Y.
Data curation: Lee S, Kim S. Formal analysis: Lee J. Methodology: Choi Y, Oh H. Validation: Kim Y, Lee Y, Seo Y. Investigation: Yoon Y. Writing - original draft: Ha J, Yoon Y. Writing - review & editing: Ha J, Lee J, Lee S, Kim S, Choi Y, Oh H, Kim Y, Lee Y, Seo Y, Yoon Y. ## Notes Ethics Approval This article does not require IRB/IACUC approval because there are no human and animal participants. ## References 1. Baranyi J, Ross T, McMeekin TA, Roberts TA. 1996; Effects of parameterization on the performance of empirical models used in ‘predictive microbiology’. Food Microbiol. 13:83-91 2. Bergdoll MS. 1989; Staphylococcus aureus. In Foodborne bacterial pathogens. In: Doyle MP, editor.(ed.)Marcel Dekker. New York, NY, USA: p. 463-523. 3. Buchanan RL. 1993; Predictive food microbiology. Trends Food Sci Technol. 4:6-11 4. Calicioglu M, Sofos JN, Kendall PA. 2003; Influence of marinades on survival during storage of acid-adapted and nonadapted Listeria monocytogenes inoculated post-drying on beef jerky. Int J Food Microbiol. 86:283-292 5. Can HY, Elmali M, Karagoz A. 2017; Molecular typing and antimicrobial susceptibility of Staphylococcus aureus strains isolated from raw milk, cheese, minced meat, and chicken meat samples. Korean J Food Sci Anim Resour. 37:175-180 6. Coroller L, Leguerinel I, Mettler E, Savy N, Mafart P. 2006; General model, based on two mixed Weibull distributions of bacterial resistance, for describing various shapes of inactivation curves. Appl Environ Microbiol. 72:6493-6502 7. Eidson M, Sewell CM, Graves G, Olson R. 2000; Beef jerky gastroenteritis outbreaks. J Environ Health. 62:9-13. 8. Ha J, Gwak E, Oh MH, Park B, Lee J, Kim S, Lee H, Lee S, Yoon Y, Choi KH. 2016; Kinetic behavior of Salmonella on low NaNO2 sausages during aerobic and vacuum storage. Korean J Food Sci Anim Resour. 36:262-266 9. Holley RA. 1985; Beef jerky: Fate of Staphylococcus aureus in marinated and corned beef during jerky manufacture and 2.5°C storage. J Food Prot. 48:107-111 10. Jay JM. 1992 Modern food microbiology. 4th edChapman & Hall. New York, NY, USA: 11. Jones TF, Kellum ME, Porter SS, Bell M, Schaffner W. 2002; An outbreak of community-acquired foodborne illness caused by methicillin-resistant Staphylococcus aureus. Emerg Infect Dis. 8:82-84 12. Keene WE, Sazie E, Kok J, Rice DH, Hancock DD, Balan VK, Zhao T, Doyle MP. 1997; An outbreak of Escherichia coli O157:H7 infections traced to jerky made from deer meat. JAMA. 277:1229-1231 13. Kim S, Jeong J, Lee H, Lee J, Lee S, Ha J, Choi Y, Yoon Y, Choi KH. 2017; Kinetic behavior of Campylobacter jejuni in beef tartare at cold temperatures and transcriptomes related to its survival. J Food Prot. 80:2127-2131 14. Le Loir Y, Baron F, Gautier M. 2003; Staphylococcus aureus and food poisoning. Genet Mol Res. 2:63-76. 15. Lee J, Gwak E, Ha J, Kim S, Lee S, Lee H, Oh MH, Park BY, Oh NS, Choi KH, Yoon Y. 2016; Mathematical model for predicting the growth probability of Staphylococcus aureus in combinations of NaCl and NaNO2 under aerobic or evacuated storage conditions. Korean J Food Sci Anim Resour. 36:752-759 16. Moon HJ, Min KJ, Park NY, Park HJ, Yoon KS. 2017; Survival of Staphylococcus aureus in dried fish products as a function of temperature. Food Sci Biotechnol. 26:823-828 17. Otto M. 2008; Staphylococcal biofilms. Curr Top Microbiol Immunol. 322:207-228 18. Schmitt M, Schuler-Schmid U, Scmidt-Lorenz W. 1990; Temperature limits of growth, TNase and enterotoxin production of Staphylococcus aureus strains isolated from foods. 
Int J Food Microbiol. 11:1-19 19. van Boekel MA. 2002; On the use of the Weibull model to describe thermal inactivation of microbial vegetative cells. Int J Food Microbiol. 74:139-159 20. Yoon Y. 2010; Principal theory and application of predictive microbiology. Food Sci Ind. 43:70-74. 21. Zwietering MH, De Wit JC, Notermans S. 1996; Application of predictive microbiology to estimate the number of Bacillus cereus in pasteurised milk at the point of consumption. Int J Food Microbiol. 30:55-70
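As a supplementary illustration of the modelling workflow described in the Results and Discussion (a Weibull-type primary model fitted to the survival curve at each storage temperature, a quadratic secondary model for δ as a function of temperature, and RMSE as the measure of model performance), the sketch below shows how the calculations can be reproduced with generic curve-fitting tools. The cell-count series is invented for illustration only, the δ values used for the secondary fit are those of Table 3, and the function and variable names are not from the study (the statistical analysis in the study itself was performed with SAS).

```python
# Minimal sketch of the primary (Weibull) and secondary (quadratic) models
# described above. The cell-count data below are illustrative, NOT study data.
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(t, logN0, delta, rho):
    """van Boekel (2002) Weibull model: log10 N(t) = log10 N0 - (t/delta)^rho."""
    return logN0 - (t / delta) ** rho

# Hypothetical observed cell counts (Log CFU/g) at one storage temperature
t_obs = np.array([0.0, 24.0, 72.0, 168.0, 336.0, 504.0])   # storage time, h
logN_obs = np.array([6.0, 5.6, 4.9, 4.1, 3.2, 2.7])        # made-up counts

popt, _ = curve_fit(weibull_survival, t_obs, logN_obs,
                    p0=[6.0, 100.0, 1.0], maxfev=10000)
logN0, delta, rho = popt

# RMSE between observed and predicted counts (Baranyi et al., 1996)
pred = weibull_survival(t_obs, *popt)
rmse = np.sqrt(np.mean((logN_obs - pred) ** 2))
print(f"delta = {delta:.1f} h, rho = {rho:.2f}, RMSE = {rmse:.3f}")

# Secondary model: quadratic dependence of delta on temperature,
# delta(T) = c0 + c1*T + c2*T^2, fitted to the delta values of Table 3
T_C = np.array([10.0, 20.0, 25.0, 30.0, 35.0])                 # degrees C
delta_T = np.array([99.335, 128.950, 126.450, 83.910, 45.670]) # h, Table 3
c2, c1, c0 = np.polyfit(T_C, delta_T, 2)
print(f"secondary model coefficients (intercept, T, T^2): {c0:.4f}, {c1:.4f}, {c2:.4f}")
```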
# Numerical response The numerical response in ecology is the change in predator density as a function of change in prey density. The term numerical response was coined by M. E. Solomon in 1949.[1] It is associated with the functional response, which is the change in predator's rate of prey consumption with change in prey density. As Holling notes, total predation can be expressed as a combination of functional and numerical response.[2] The numerical response has two mechanisms: the demographic response and the aggregational response. The numerical response is not necessarily proportional to the change in prey density, usually resulting in a time lag between prey and predator populations.[3] For example, there is often a scarcity of predators when the prey population is increasing. ## Demographic response The demographic response consists of changes in the rates of predator reproduction or survival due to a changes in prey density. The increase in prey availability translates into higher energy intake and reduced energy output. This is different from an increase in energy intake due to increased foraging efficiency, which is considered a functional response. This concept can be articulated in the Lotka-Volterra Predator-Prey Model. ${\displaystyle dP/dt=acVP-mP}$ a = conversion efficiency: the fraction of prey energy assimilated by the predator and turned into new predators P = predator density V = prey density m = predator mortality Demographic response consists of a change in dP/dt due to a change in V and/or m. For example, if V increases, then predator growth rate (dP/dt) will increase. Likewise if the energy intake increases (due to greater food availability) and a decrease in energy output (from foraging), then predator mortality (m) will decrease and predator growth rate (dP/dt) will increase. In contrast, the functional response consists of a change in conversion efficiency (a) or capture rate (c). The relationship between available energy and reproductive efforts can be explained with the life history theory in the trade-off between fecundity and growth/survival. If an organism has more net energy, then the organism will sacrifice less energy dedicated to survival per reproductive effort and will therefore increase its reproduction rate. In parasitism, functional response is measured by the rate of infection or laying of eggs in host, rather than the rate of prey consumption as it is measured in predation. Numerical response in parasitism is still measured by the change in number of adult parasites relative to change in host density. Parasites can demonstrate a more pronounced numerical response to changes in host density since there is often a more direct connection (less time lag) between food and reproduction in that both needs are immediately satisfied by its interaction with the host.[4] ## Aggregational response The aggregational response, as defined by Readshaw in 1973, is a change in predator population due to immigration into an area with increased prey population.[5] In an experiment conducted by Turnbull in 1964, he observed the consistent migration of spiders from boxes without prey to boxes with prey. 
He proved that hunger impacts predator movement.[6] Riechert and Jaeger studied how predator competition interferes with the direct correlation between prey density and predator immigration.[7][8] One way this can occur is through exploitation competition: the differential efficiency in use of available resources, for example, an increase in spiders' web size (functional response). The other possibility is interference competition where site owners actively prevent other foragers from coming in vicinity. ## Ecological relevance The concept of numerical response becomes practically important when trying to create a strategy for pest control. The study of spiders as a biological mechanism for pest control has driven much of the research on aggregational response. Antisocial predator populations that display territoriality, such as spiders defending their web area, may not display the expected aggregational response to increased prey density.[9] A credible, simple alternative to the Lotka-Volterra predator-prey model and its common prey dependent generalizations is the ratio dependent or Arditi-Ginzburg model.[10] The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka-Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio dependent extreme, so if a simple model is needed one can use the Arditi-Ginzburg model as the first approximation.[11] ## References 1. ^ Solomon, M. E. "The Natural Control of Animal Populations." Journal of Animal Ecology. 19.1 (1949). 1-35 2. ^ Holling, C. S. "The components of predation as revealed by a study of small-mammal predation of the European pine sawfly." Canadian Entomologist 91: 293-320. (1959) 3. ^ Ricklefs, R. E. The Economy of Nature. 6th Edition. New York: Freeman and Company. 2010. p. 319. 4. ^ Holling, C. S. "The components of predation as revealed by a study of small-mammal predation of the European pine sawfly." Canadian Entomologist 91: 293-320.(1959) 5. ^ Readshaw, J.L. The numerical response of predators to prey density. In: Hughes, Ed., Quantitative Evaluation of Natural Enemy Effectiveness. J. Applied Biol. 10:342-351. 1973. 6. ^ Turnbull, A. L. The search for prey by a web-building spider Achaearanea tepidariorum (C. L. Koch) (Araneae, Theridiidae). Canadian Entomologist 96: 568-579. 1964. 7. ^ Riechert, Susan E. Thoughts on Ecological Significance of Spiders. BioScience. 24(6): 352-356. 1974. 8. ^ Jaeger, R.G. Competitive Exclusion: Comments on survival and extinction of species. BioScience. 24: 33-39. 1974 9. ^ Turnbull, A. L. The search for prey by a web-building spider Achaearanea tepidariorum (C. L. Koch) (Araneae, Theridiidae). Canadian Entomologist 96: 568-579. 1964. 10. ^ Arditi, R. and Ginzburg, L.R. 1989. Coupling in predator-prey dynamics: ratio dependence. Journal of Theoretical Biology 139: 311-326. 11. ^ Arditi, R. and Ginzburg, L.R. 2012. How Species Interact: Altering the Standard View on Trophic Ecology. Oxford University Press, New York, NY.
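To make the demographic response and its characteristic time lag concrete, the sketch below integrates the predator equation given above, dP/dt = acVP − mP, coupled to a prey equation. The prey equation with logistic growth, dV/dt = rV(1 − V/K) − cVP, and all parameter values are assumptions added for this sketch; they are not taken from the sources cited in this article.

```python
# Illustration of the demographic numerical response and its time lag, using the
# predator equation given above, dP/dt = a*c*V*P - m*P. The logistic prey
# equation and every parameter value are assumptions for this sketch only.
import numpy as np
from scipy.integrate import solve_ivp

r, K, c, a, m = 1.0, 50.0, 0.1, 0.5, 0.5   # prey growth, capacity, capture, conversion, mortality

def predator_prey(t, y):
    V, P = y                                # prey density, predator density
    dV = r * V * (1.0 - V / K) - c * V * P
    dP = a * c * V * P - m * P              # demographic numerical response
    return [dV, dP]

sol = solve_ivp(predator_prey, (0.0, 40.0), [20.0, 2.0],
                dense_output=True, max_step=0.1)
t = np.linspace(0.0, 40.0, 2000)
V, P = sol.sol(t)

# The predator density peaks after the prey density does: a delayed response.
print(f"prey peak at t = {t[np.argmax(V)]:.1f}, predator peak at t = {t[np.argmax(P)]:.1f}")
```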
Once the fabric is woven it may be embellished at will. Nero Wolfe in The Golden Spiders, by Rex Stout, Bantam edition, New York, NY, 1955. Heat capacities belong to the most important thermophysical properties of matter: they are intimately related to the temperature dependence of fundamental thermodynamic functions; they may be determined in the laboratory with great accuracy; and they are of key importance for linking thermodynamics with microscopic fluid structure and dynamics, as evidenced by the contributions to this book. They are thus indispensable in physical chemistry as well as in chemical engineering. For instance, as a classical example, consider the standard entropies of liquids at T = 298.15 K. They are evaluated from experimental heat capacities at constant pressure from low temperatures to 298.15 K and entropies of phase changes in between (assuming applicability of the third law of thermodynamics). The measured heat capacity of an organic compound can usually be extrapolated to 0 K by fitting a Debye heat capacity function to the experimental values at, say, 10 K. The nature and the size of this monograph's topic make it impractical to cover the entire subject in one volume. As indicated in the title, the focus will be on heat capacities of chemically non-reacting liquids, solutions and vapours/gases (though polymers and liquid crystals are also covered). The individual specialised chapters have been written by internationally renowned thermodynamicists/thermophysicists active in the respective fields. Because of their topical diversity, in this introductory chapter I shall try to summarise concisely the major aspects of the thermodynamic formalism relevant to fluid systems, to clarify, perhaps, some points occasionally obscured, to indicate some ramifications into neighbouring disciplines, and to point out a few less familiar yet potentially interesting problems. The omission of any topic is not to be taken as a measure of its importance, but is predominantly a consequence of space limitations. Calorimetric determinations of heat capacities of liquids have a long tradition, and many distinguished scientists have contributed to this subject. One can only marvel about the careful work of some of the early researchers, such as Eucken and Nernst,1,2 who developed precursors of modern, adiabatic calorimeters. The adiabatic method for heat capacity measurements at low temperatures was pioneered by Cohen and Moesveld,3 and Lange,4 and became widely used. Indeed, during the following decades, many alternative designs of increasing sophistication have been devised and successfully used. A selection of adiabatic calorimeters which were described in the literature up to about 1970 is provided by references 5 through 19. For details, the interested reader should consult the classic IUPAC monograph edited by McCullough and Scott,20 or the more recent ones edited by Marsh and O’Hare,21 and by Goodwin, Marsh and Wakeham,22 or the monograph on calorimetry by Hemminger and Höhne.23 More specialised reviews have been prepared by Lakshmikumar and Gopal,24 Wadsö25 and Gmelin.26 A monograph focusing on differential scanning calorimetry has been presented by Höhne, Hemminger and Flammersheim.27 To date, the most widely used instruments for measuring heat capacities of liquids and liquid mixtures are based on the differential flow calorimeter designed by Picker,28,29 which was commercialised by Setaram.
Because of the absence of a vapour space, differential flow calorimeters are particularly useful. They may be fairly easily modified to be used at elevated temperatures and pressures, including the critical region. The first instrument of this type was constructed by Smith-Magowan and Wood,30  with improved versions being due to White et al.,31  and Carter and Wood.32  However, comparison of heat capacities measured by different types of flow calorimeters and differential thermopile conduction calorimeters shows small differences in measured heat capacities, which are attributed to conductive and convective heat losses. Conductive heat losses, the principal problem in flow calorimetric heat capacity measurements on liquids, have recently been analysed by Hei and Raal33  for a five-zone model calorimeter. Because of the importance of heat capacity data of liquids in chemical thermodynamics and chemical engineering, numerous critical data compilations have been published – starting at the end of the nineteenth century with Berthelot's Thermochimie,34  and including such well-known publications as the International Critical Tables,35  Timmermans’ Physicochemical Constants of Pure Organic Compounds,36 Landolt-Börnstein,37  and Daubert and Danner's Physical and Thermodynamic Properties of Pure Chemicals: Data Compilation, DIPPR® Database.38  The most recent and the most comprehensive compilation of critically evaluated heat capacity data of pure liquids is the monograph on Heat Capacities of Liquids: Volumes I and II. Critical Review and Recommended Values, authored by Zábranský et al. (1996),39  with Supplement I of 2001.40  This monograph also contains a valuable survey of calorimetric techniques for determining heat capacities of liquids, and useful comments on terminology and criteria for the classification of calorimeters. As concerns heat capacity data of mixtures, the situation is somewhat less satisfactory. Critically selected excess molar heat capacities at constant pressure of binary liquid organic mixtures have been included in the International DATA Series, SELECTED DATA ON MIXTURES, Series A,41  and the Dortmund Data Bank (DDB) contains a large number of data sets on heat capacities of mixtures/excess heat capacities.42  However, a monograph devoted to a reasonably comprehensive compilation of heat capacity data of liquid mixtures, though highly desirable, is not available. For more than a century, experimental studies of real-gas behaviour at low or moderate densities, have held a prominent position in physical chemistry. They were motivated, and still are, either by the need to solve practical problems – such as those encountered in the reduction of vapour–liquid equilibrium data – or by their usefulness as valuable sources of information on intermolecular interactions in both pure gases/vapours and gaseous mixtures. In this context, perfect-gas (ideal-gas) state heat capacities are of central importance, say, in the calculation of property changes of single-phase, constant-composition fluids for any arbitrary change of state. They may be determined by vapour-flow calorimetry, or by speed-of-sound measurements. The statistical–mechanical calculation of perfect-gas state heat capacities (they are 1-body properties which do not depend on molecular interactions) has reached a high level of sophistication, with obvious great practical advantages. For instance, the calculations readily allow extension of experimental data to temperature ranges currently inaccessible to measurement. 
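Because C$Ppg$ is a one-body property, it may be evaluated directly from molecular constants. A minimal sketch is given below for a rigid nonlinear molecule without internal rotation, using the classical translational and rotational contributions plus harmonic-oscillator vibrational terms; the set of normal-mode wavenumbers is an illustrative placeholder, not data from this chapter.

```python
# Perfect-gas (ideal-gas) isobaric heat capacity of a rigid nonlinear molecule:
# Cp_pg = Cv_pg + R, with Cv_pg = 3/2 R (translation) + 3/2 R (rotation)
#         + sum of harmonic-oscillator vibrational contributions.
# The vibrational wavenumbers below are illustrative placeholders.
import numpy as np

R  = 8.314462618          # J K^-1 mol^-1
h  = 6.62607015e-34       # J s
c0 = 2.99792458e10        # speed of light in cm s^-1 (wavenumbers in cm^-1)
kB = 1.380649e-23         # J K^-1

def cp_perfect_gas(T, wavenumbers_cm):
    """Cp^pg(T) in J K^-1 mol^-1 for a nonlinear molecule without internal rotors."""
    x = h * c0 * np.asarray(wavenumbers_cm) / (kB * T)        # h*nu/(kB*T) per mode
    c_vib = R * np.sum(x**2 * np.exp(x) / (np.exp(x) - 1.0)**2)
    c_trans, c_rot = 1.5 * R, 1.5 * R
    return c_trans + c_rot + c_vib + R                        # Cp = Cv + R

modes = [650, 650, 900, 1250, 1450, 3000]                     # cm^-1, illustrative
for T in (298.15, 500.0, 1000.0):
    print(f"T = {T:7.2f} K   Cp_pg = {cp_perfect_gas(T, modes):6.2f} J K^-1 mol^-1")
```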
Data compilations of heat capacities of pure substances in the perfect-gas (ideal-gas) state may be found in Selected Values of Physical and Thermodynamic Properties of Hydrocarbons and Related Compounds,43 Landolt-Börnstein,37  in Stull, Westrum and Sinke's The Chemical Thermodynamics of Organic Compounds,44  in the TRC Thermodynamic Tables,45  in the book by Frenkel et al.,46  and in the NIST-JANAF Thermochemical Tables.47  One should always keep in mind, however, that only comparison of experimental with calculated values leads to better approximations and/or new concepts. To set the scene for this monograph, a few selected basic thermodynamic relations will be summarised below. For further aspects and details the interested reader should consult a textbook close to his/her taste, perhaps one of those listed in references 48 through 58. Convenient starting points are the fundamental property equations (also called the Gibbs equations) of a single-phase PVT system, either open or closed, where P denotes the pressure, V is the molar volume and T is the thermodynamic temperature. No electric, magnetic or gravitational fields are considered in such a simple system. For a multicomponent system, where the total amount of substance is given by , with ni being the amount of substance of component i, the fundamental property equation in the energy representation is Equation 1 and, equivalently, in the entropy representation Equation 2 Here, U is the molar internal energy, S is the molar entropy of the fluid. The intensive parameter furnished by the first-order partial derivatives of the internal energy with respect to the amount of substance of component i, Equation 3 is called the chemical potential of component i. Its introduction extends the scope to the general case of a single-phase system in which the ni may vary, either by exchanging matter with its surroundings (open system) or by changes in composition occurring as a result of chemical reactions (reactive system) or both. Corresponding to Equations (1) and (2), the primary functions (or cardinal functions, or Euler equations) are Equation 4 in the energy representation, and Equation 5 in the entropy representation. In both the energy and entropy representations the extensive quantities are the mathematically independent variables, while the intensive parameters are derived, which situation does not conform to experimental practice. The choice of nS and nV as independent variables in the fundamental property equation in the energy representation is not convenient, and Equation (4) suggests the definition of useful alternative energy-based primary functions. The appropriate method for generating them without loss of information is the Legendre transformation. These additional equivalent primary functions are the molar enthalpy Equation 6 the molar Helmholtz energy Equation 7 and the molar Gibbs energy Equation 8 Substituting for U in Equation (6) from Equation (4) yields the alternative form Equation 9 where xi=ni/n is the mole fraction. Substitution of U in Equation (7) yields Equation 10 as alternative grouping, and substitution of U in Equation (8) yields the Euler equation as Equation 11 The alternative primary functions H, F and G allow the development of alternative energy-based fundamental property equations: Equation 12 Equation 13 Equation 14 The four fundamental equations presented so far are equivalent; however, each is associated with a different set of canonical variables {nS, nV, ni}, {nS, P, ni}, {T, nV, ni} and {T, P, ni}. 
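A simple numerical illustration of the energy representation may be helpful at this point. Assuming the monatomic perfect-gas fundamental relation U(S, V, n) = C n^(5/3) V^(−2/3) exp[2S/(3nR)], with C an arbitrary constant that only fixes the entropy zero, the sketch below verifies by finite differences that the intensive parameters T = (∂U/∂S)V,n and P = −(∂U/∂V)S,n reproduce U = (3/2)nRT and PV = nRT; the chosen state point is arbitrary.

```python
# Numerical check that the intensive parameters of the energy representation,
# T = (dU/dS)_{V,n} and P = -(dU/dV)_{S,n}, recover the familiar perfect-gas
# relations for a monatomic-gas fundamental equation U(S, V, n).
import numpy as np

R = 8.314462618            # J K^-1 mol^-1
C = 1.0e-3                 # arbitrary prefactor (sets the entropy zero only)

def U(S, V, n):
    """Fundamental relation U(S, V, n) of a monatomic perfect gas (up to C)."""
    return C * n**(5.0 / 3.0) * V**(-2.0 / 3.0) * np.exp(2.0 * S / (3.0 * n * R))

S, V, n = 150.0, 0.025, 1.0            # J K^-1, m^3, mol (illustrative state)
dS, dV = 1e-4, 1e-7

T = (U(S + dS, V, n) - U(S - dS, V, n)) / (2 * dS)       # T = (dU/dS)_{V,n}
P = -(U(S, V + dV, n) - U(S, V - dV, n)) / (2 * dV)      # P = -(dU/dV)_{S,n}

print(f"U       = {U(S, V, n):10.3f} J")
print(f"1.5 nRT = {1.5 * n * R * T:10.3f} J")
print(f"P V     = {P * V:10.3f} J,  n R T = {n * R * T:10.3f} J")
```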
A primary function which arises naturally in statistical mechanics is the grand canonical potential. It is the Legendre transform when simultaneously the entropy is replaced by the temperature and the amount of substance by the chemical potential: Equation 15 with the alternative form Equation 16 and the corresponding Gibbs equation Equation 17 with the canonical variables {T, nV, μi}. The complete Legendre transform vanishes identically for any system. The complete transform of the internal energy replaces all extensive canonical variables by their conjugate intensive variables, thus yielding the null-function Equation 18 as final alternative primary function in the energy representation. This property of the complete Legendre transform gives rise to Equation 19 as the corresponding alternative form of the fundamental property equation. It represents an important relation between the intensive parameters T, P and μi of the system and shows that they are not independent of each other. While the extensive parameters of a simple phase are independent of each other, the conjugate intensive parameters are not, as shown above. For a given phase, the number of intensive parameters which may be varied independently is known as the number of thermodynamic degrees of freedom. Treating the sum as a single term, the total number of equivalent primary functions and therefore the total number of equivalent fundamental property equations for a thermodynamic system is 2k. Thus for nU=nU(nS, nV, n) there are but eight distinct equivalent primary functions [nU, Equation (4), plus seven alternatives] and eight distinct forms of the fundamental equation [d(nU), Equation (1), plus seven alternatives]. Of the seven Legendre transforms of the internal energy, five have been treated above (including the null-function). The remaining two, and , with the alternative forms X=TSPV and Y=TS, respectively, have not received separate symbols or names. The corresponding fundamental property equations are and . Since all the fundamental property equations are equivalent, alternative expressions for the chemical potential are possible, of which Equation 20 is the preferred one, because temperature and pressure are by far the most useful experimental parameters. We recognise that the chemical potential of component i is just the partial molar Gibbs energy of i, Equation 21 which quantity is of central importance in mixture/solution thermodynamics. For a homogeneous fluid of constant composition, the following four energy-based fundamental property relations apply: Equation 22 Equation 23 Equation 24 Equation 25 It follows that Equation 26 Equation 27 Equation 28 Equation 29 which relations establish the link between the natural independent variables T, P, V, S and the energy-based functions U, H, F, G. In view of the definitions of F and G and Equation (29), the Gibbs–Helmholtz equations Equation 30 Equation 31 are obtained. A Legendre transformation of the primary function in the entropy representation, Equation (5), resulting in the replacement of one or more extensive variables by the conjugate intensive variable(s) 1/T, P/T and μi/T, defines a MassieuPlanck function. 
For instance, the molar Massieu function is Equation 32 with its alternative form Equation 33 Its differential form, an entropy-based alternative fundamental property equation, is Equation 34 From a second-order Legendre transformation, the molar Planck function Equation 35 is obtained, with its alternative form Equation 36 Its differential form is another alternative entropy-based fundamental property equation: Equation 37 Note that Equation 38 and Equation 39 Another second-order transform is the molar Kramer function Equation 40 Its alternative form is Equation 41 whence Equation 42 The corresponding alternative entropy-based fundamental property equation is Equation 43 Again, the complete Legendre transform is identical zero, yielding in the entropy representation Equation 44 Evidently, also the intensive parameters 1/T, P/T and μi/T in the entropy representation are not independent of each other. Equations (22) through (25) are exact differentials, whence application of the reciprocity relation yields the Maxwell equations for a constant-composition PVT system, of which the following two are particularly useful: Equation 45 Equation 46 Two heat capacities are in common use for homogeneous fluids. Both are state functions defined rigorously in relation to other state functions: the molar heat capacity at constant volume (or the molar isochoric heat capacity) CV and the molar heat capacity at constant pressure (or the molar isobaric heat capacity) CP. At constant composition, Equation 47 and Equation 48 At this juncture it is convenient to introduce, by definition, a few auxiliary quantities commonly known as the mechanical and the isentropic coefficients. Specifically, these are the isobaric expansivity Equation 49 the isothermal compressibility1 Equation 50 the isochoric pressure coefficient Equation 51 and the isentropic compressibility1 (often loosely called adiabatic compressibility) Equation 52 where ρ=M/V is the density and M is the molar mass. Note that Equation 53 The isentropic compressibility is related to the thermodynamic low-frequency speed of ultrasound ν0 (negligible dispersion) by Equation 54 The ratio of the heat capacities and their difference may now be presented in several compact forms, where the most profitable are given below: Equation 55 Equation 56 Equation 57 Since by definition the compression factor is given by ZPV/RT, alternatively Equation 58 where R is the gas constant. At low temperatures, where γV of liquids is large, direct calorimetric determination of CV of liquids is difficult (it becomes more practicable near the critical point, where γV is much smaller). Thus most of the isochoric heat capacity data for liquids reported in the literature have been obtained indirectly through the use of Equations (55) and (56), that is to say from experimental molar isobaric heat capacities, isobaric expansivities and ultrasonic speeds. However, see for instance reference 59. Since also Equation 59 Equations (54), (56) and (59) may be used for the indirect determination of isothermal compressibilities from densities, isobaric expansivities, ultrasonic speeds and molar isobaric heat capacities. All these quantities may now be reliably and accurately measured, whence the indirect method for determining the isothermal compressibility of liquids has become an attractive alternative to the direct method of applying hydrostatic pressure and measuring the corresponding volume change. 
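The indirect route just described is easily put into numbers. Taking a set of round, roughly toluene-like values at 298.15 K (for illustration only; they are not recommended data), βS follows from Eq. (54), βT from the standard relation βT − βS = TVαP²/CP, and CV from the ratio CP/CV = βT/βS.

```python
# Indirect determination of beta_S, beta_T and Cv for a liquid from density,
# speed of ultrasound, isobaric expansivity and Cp. The input values are round,
# roughly toluene-like figures at 298.15 K, used only for illustration.
T      = 298.15        # K
M      = 0.09214       # kg mol^-1, molar mass
rho    = 862.0         # kg m^-3, density
u0     = 1300.0        # m s^-1, thermodynamic (low-frequency) speed of ultrasound
alphaP = 1.07e-3       # K^-1, isobaric expansivity
Cp     = 157.0         # J K^-1 mol^-1, molar isobaric heat capacity

V     = M / rho                          # molar volume, m^3 mol^-1
betaS = 1.0 / (rho * u0**2)              # Eq. (54): isentropic compressibility
betaT = betaS + T * V * alphaP**2 / Cp   # beta_T - beta_S = T V alphaP^2 / Cp
Cv    = Cp * betaS / betaT               # Cp/Cv = beta_T/beta_S
kappa = Cp / Cv

print(f"beta_S = {betaS*1e9:.3f} 1/GPa, beta_T = {betaT*1e9:.3f} 1/GPa")
print(f"Cv = {Cv:.1f} J K^-1 mol^-1, kappa = Cp/Cv = {kappa:.3f}")
```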
For the difference between βT and βS one obtains, for instance, Equation 60 A convenient way to derive the volume or pressure dependence of the heat capacities is via the differentiation of the appropriate Gibbs–Helmholtz equations. Starting from Equation 61 Equation 62 Equation 63 Equation 64 The pressure or volume dependence of the heat capacities may thus be determined from PVT data. The molar thermodynamic properties of homogeneous constant-composition fluids are functions of temperature and pressure, e.g. Equation 65 Replacing the partial derivatives through use of Equations (48) and (62) yields Equation 66 Entirely analogous procedures, using Equations (46) and (48), give Equation 67 When T and V are selected as independent variables, Equation 68 and Equation 69 are obtained. All the coefficients of dT, dP and dV are quantities reasonably accessible by experiment. For some applications it may be convenient to treat S as a function of P and V. Using Equation 70 one obtains Equation 71 Finally we note the useful relations Equation 72 and Equation 73 where μJT is the Joule–Thomson coefficient. All three quantities CP, (∂H/∂P)T and μJT may be measured by flow calorimetry.60,61  (∂H/∂P)T is also known as the isothermal Joule–Thomson coefficient, and frequently given the symbol φ. For ideal gases P = 1 and thus μJT = 0. For real gases, the temperature Ti (at the inversion pressure Pi) where TiαP = 1 is called the inversion temperature. At that point the isenthalpic exhibits a maximum: for initial pressures P<Pi, μJT > 0, and the temperature of the gas always decreases on throttling; for initial pressures P > Pi, μJT<0, and the temperature of the gas always increases on throttling. The maxima of the enthalpics form a locus known as the inversion curve of the gas. There exists a maximum inversion temperature at P = 0. For pressures above the maximum inversion pressure, μJT is always negative. Because of Equation (67) one obtains, for instance, for the isentropic compression or expansion of a gas Equation 74 Since αP of gases is always positive, the temperature always increases with isentropic compression and decreases with isentropic expansion. In principle, the exact methods of classical thermodynamics are the most general and powerful predictive tools for the calculation of property changes of single-phase, constant-composition fluids for any arbitrary change of state, say, from (T1,P1) to (T2,P2). For a pure fluid, the corresponding changes of molar enthalpy ΔH ≡ H2H1 and molar entropy ΔSS2S1 are, respectively, Equation 75 and Equation 76 where HR and SR are the molar residual enthalpy and the molar residual entropy, respectively, in (T,P)-space, and C$Ppg$ = C$Ppg$(T) is the molar heat capacity at constant pressure of the fluid in the perfect-gas (ideal-gas) state. The general definition for such molar residual properties is MRMMpg, where M is the molar value of any extensive thermodynamic property of the fluid at (T,P), and Mpg is the molar value of the property when the fluid is in the perfect-gas state at the same T and P. Given any volume-explicit equation of state, these residual functions may be calculated from Equation 77 and Equation 78 respectively. Thus, application of Equations (75) and (76) requires PVT information for the real fluid as well as its isobaric heat capacity in the perfect-gas state. We note that one may also define residual functions in (T,V)-space: Mr ≡ M − Mpg, where the Ms are now at the same T and V. 
In general MR(T,P) ≠ Mr(T,V) unless the property Mpg is independent of density at constant temperature, which is the case for C$Ppg$ and C$Vpg$ = C$Ppg$ − R. Since the perfect-gas state is a state where molecular interactions are absent, residual quantities characterise molecular interactions alone. They are the most direct measures of intermolecular forces. In statistical mechanics, however, configurational quantities are frequently used. The differences between these two sets are the configurational properties of the perfect gas, and for U and CV they vanish. In actual practice, this approach would be severely limited by the availability of reliable data for pure fluids and mixtures. The experimental determination of such data is time-consuming and not simple, and does not impart the glamour associated with, say, spectroscopic studies. Fortunately, statistical–mechanical calculations for C$Ppg$ are quite dependable for many substances, and so are group-contribution theories, for instance the techniques based on the work by Benson and co-workers.62,63 The search for generalised correlations applicable to residual functions has occupied scientists and engineers for quite some time. The most successful ones are based on versions of generalised corresponding-states theory, which is grounded in experiment as well as statistical mechanics. The three-parameter corresponding states correlations, pioneered by Kenneth Pitzer and co-workers,64–67  have been capable to predict satisfactorily the PVT behaviour of normal, nonassociating fluids. They showed that the compression factors of normal fluids may be satisfactorily expressed as Equation 79 where Equation 80 is Pitzer's acentric factor, Tr = T/Tc is the reduced temperature, Pr = P/Pc is the reduced pressure, Pσ,r = Pσ/Pc is the reduced vapour pressure, here evaluated at Tr = 0.7, Pσ is the vapour pressure of the substance, Tc is the critical temperature of the substance and Pc is its critical pressure. In fact, this method is a thermodynamic perturbation approach where the Taylor series is truncated after the term linear in ω. The generalised Z(0) function is the simple-fluid contribution and applies to spherical molecules like argon and krypton, whose acentric factors are essentially zero. The generalised Z(1) function (deviation function) is determined through analysis of high-precision PVT data of selected normal fluids where ω≠0. One of the best of the generalised Pitzer-type corresponding-states correlations for Z(0), Z(1) and the derived residual functions is due to Lee and Kesler.68 An alternative to the direct experimental route to high-pressure PVT data and CP(T,P) is to measure the thermodynamic speed of ultrasound v0 as a function of P and T (at constant composition), and to combine these results, in the spirit of Equations (50), (54) and (60) with data at ordinary pressure, say P1 = 105 Pa, i.e. ρ(T,P1) and CP(T,P1). For a pure liquid, upon integration at constant temperature, one obtains69,70 Equation 81 The first integral is evaluated directly by fitting the ultrasonic speed data with suitable polynomials, and for the second integral several successive integration algorithms have been devised. The simplicity, rapidity and precision of this method makes it highly attractive for the determination of the density, isobaric expansivity, isothermal compressibility, isobaric heat capacity and isochoric heat capacity of liquids at high pressures. 
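A single evaluation of the density integration just outlined, in the spirit of Eq. (81), is sketched below. The speed-of-sound, expansivity and specific-heat isotherms are prescribed, smooth, illustrative functions rather than measured data, and the iterative refinement of αP and cP at elevated pressure that is required in practice is omitted.

```python
# Single pass of the acoustic route, Eq. (81): integrate
# (d rho / d P)_T = 1/u^2 + T*alphaP^2/cP along an isotherm from the reference
# pressure P1 by trapezoidal quadrature. All input functions and numbers are
# illustrative, not measured values; in practice alphaP and cP at elevated
# pressure are obtained iteratively from data at several temperatures.
import numpy as np

T  = 298.15                                    # K
P  = np.linspace(1.0e5, 1.0e8, 200)            # Pa, pressure grid from P1 = 1 bar
u  = 1300.0 + 2.5e-6 * (P - P[0])              # m s^-1, assumed speed-of-sound isotherm
aP = 1.07e-3 - 2.0e-12 * (P - P[0])            # K^-1, assumed isobaric expansivity
cP = 1700.0 * np.ones_like(P)                  # J K^-1 kg^-1, assumed specific heat

integrand = 1.0 / u**2 + T * aP**2 / cP        # (d rho / d P)_T, kg m^-3 Pa^-1

rho1 = 862.0                                   # kg m^-3, density at (T, P1)
# cumulative trapezoidal integration of the density increment up to each P
drho = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(P))))
rho = rho1 + drho

print(f"rho at {P[-1]/1e6:.0f} MPa = {rho[-1]:.1f} kg m^-3 "
      f"(increase of {rho[-1]-rho1:.1f} kg m^-3)")
```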
Details may be found in the appropriate chapters of this book, and in the original literature. From experimentally determined heat capacities of liquids, relatively simple models have been used to extract information on the type of motion executed by molecules in the liquid state. In general, they are based on the separability of contributions due to translation, rotation, vibration and so forth. Though none of them is completely satisfactory, they have provided eminently useful insights and thereby furthered theoretical advances. Following the early work of Eucken,71  Bernal,72  Eyring,73  Stavely,74  Moelwyn-Hughes,75  Kohler,18,76  Bondi77  and their collaborators, one may resolve the molar heat capacity CV of simple, nonassociated liquids into the following contributions:78,79 Equation 82 The translational (tr) contribution arises from the motion of the molecules under the influence of all molecules (translational movement within their respective free volumes), the rotational (rot) contribution arises from rotation or libration of the molecules as a whole, the internal (int) contribution arises from internal degrees of freedom, and the orientational (or) contribution, for dipolar substances, results from the change of the dipole–dipole orientational energy with temperature. Cint can be subdivided into a part stemming from vibrations (Cvib) which usually are not appreciably influenced by density changes (i.e. by changes from the liquid to the perfect-gas state), and another part, Cconf, resulting from internal rotations (conformational equilibria), which does depend on density. Preferably, all these contributions to CV are discussed in terms of residual quantities in (T,V)-space.78,79  The residual molar isochoric heat capacity of a pure liquid is defined by Equation 83 For liquids composed of fairly rigid molecules, such as tetrachloromethane, benzene or toluene, to an excellent approximation C$intr$ ≈ 0, whence Equation 84 where C$trr$ = Ctr–3R/2, and C$rotr$ = Crot–3R/2, for nonlinear molecules, represents the excess over the perfect-gas phase value due to hindered rotation in the liquid of the molecules as a whole. Using corresponding states arguments to obtain reasonable estimates for C$trr$, values for the residual molar rotational heat capacity C$rotr$ may be obtained, which quantity may then be discussed in terms of any suitable model for restricted molecular rotation.61,78,79 The resolution of the variation of CV of pure liquids along the orthobaric curve (subscript σ), i.e. for states (T, Pσ), into the contributions due to the increase of volume and to the increase of temperature, respectively, is a highly interesting problem.78–80  It is important to realise that due to the close packing of molecules in a liquid, even a rather small change of the average volume available for their motion may have a considerable impact on the molecular dynamics: volume effects may become more important in influencing molecular motion in the liquid state than temperature changes. Since Equation 85 evaluation of (∂CV/∂T)V requires knowledge of the second term of the right-hand side of Equation (85). At temperatures below the normal boiling point, the saturation expansivity ασ = V−1(∂V/∂T)σ is practically equal to αP of the liquid [see below, Equation (109)]. In principle, the quantity (∂CV/∂V)T is accessible via precise PVT measurements, see Equation (63), but measurements of (∂2P/∂T2)V are not plentiful. 
Available data70,81  indicate that it is small and negative for organic liquids, that is to say, CV decreases with increasing volume. Alternatively, one may use18,78,79 Equation 86 where the last term in parenthesis on the right-hand side can be evaluated by means of a modified Tait equation,82  that is Equation 87 This equation holds remarkably well up to pressures of several hundred bars, and for many liquid nonelectrolytes m ≈ 10. For liquid tetrachloromethane at 298.15 K,78  the calculated value of (∂CV/∂V)T amounts to –0.48 J K−1 cm−3, for cyclohexane78  –0.57 J K−1 cm−3 is obtained, and for 1,2-dichloroethane83  it is −0.60 J K−1 cm−3. These results indicate a substantial contribution of (∂CV/∂V)Tσ to the change of CV along the orthobaric curve as well as to the corresponding change of C$Vr$. Equation (56) is a suitable starting point for a discussion of the temperature dependence of κCP/CV of a liquid along the orthobaric curve: Equation 88 Usually, the second term in parenthesis on the right-hand side of Equation (88) is positive and the third term is negative; the fourth term may contribute positively or negatively. Thus κ may increase or decrease with temperature. The importance of the heat capacity in the perfect-gas state has been stressed repeatedly. Flow calorimetry is a commonly used method for measuring CP of gases and vapours,84  and allows straightforward extrapolation to zero pressure85  to obtain C$Ppg$. The virial equation in pressure Equation 89 where B′ is the corresponding second virial coefficient and C′ the third virial coefficient, may be used to calculate the residual heat capacity of a pure fluid according to Equation 90 Since the second virial coefficient B′ of the pressure series is related to the second virial coefficient B of the series in molar density (1/V) by Equation 91 one obtains from the two-term equation in pressure Equation 92 Thus the pressure derivative of CP is given by Equation 93 thereby providing an experimental route to the determination of the second temperature derivative of B. Flow-calorimetric measurements of deviations from perfect-gas behaviour, particularly via the isothermal Joule–Thomson coefficient φ≡(∂H/∂P)T, have the advantage over compression experiments that adsorption errors are avoided, and that measurements can be made at lower temperatures and pressures.86,87  Specifically, Equation 94 where Equation 95 Here, C is the third virial coefficient of the series in molar density, P2 − P1 is the pressure difference maintained across the throttle, and (P1+P2)/2 is the mean pressure. The zero-pressure value of the isothermal Joule–Thomson coefficient is thus given by Equation 96 Integration between a suitable reference temperature Tref and T yields61 Equation 97 This relation is of considerable importance for obtaining virial coefficients (of vapours) in temperature regions where conventional measuring techniques are difficult to apply. The isothermal Joule–Thomson coefficient of steam, the most important vapour on earth, was recently reported by McGlashan and Wormald88  in the temperature range 313 K to 413 K, and values of φ0 derived from these measurements were compared with results from the 1984 NBS/NRC steam tables,89  with data reported by Hill and MacMillan,90  and with values derived from the IAPWS-95 formulation for the thermodynamic properties of water.91 The thermodynamic speed of ultrasound (below any dispersion region) is related to the equation of state, and hence to the virial coefficients. 
For a real gas, v$02$ may thus be expressed as a virial series in molar density 1/V,92 i.e. Equation 98 where Equation 99 For constant-composition fluids, the acoustic virial coefficients Bac, Cac, … are functions of temperature only. They are, of course, rigorously related to the ordinary (PVT) virial coefficients. For instance, Equation 100 Since pressure is the preferred experimental parameter, one may also write a virial expansion for v$02$ in powers of the pressure with corresponding virial coefficients B$ac′$, C$ac′$, … The coefficients of the density and pressure expansion are interrelated; for example Equation 101a Equation 101b Thus, measurements of the speed of ultrasound as function of density (or pressure) will yield information on B together with its first and second temperature derivatives, and C$Vpg$ (or κpg) through extrapolation of v$02$ to zero density. The principal advantages of the acoustic method are its rapidity and the greater accuracy at temperatures where adsorption effects become important.93 All this valuable thermophysical information can then be used to obtain reliable second virial coefficients over large temperature ranges. For a fluid with spherically symmetric pair potential energy u(r), Equation 102 where NA is the Avogadro constant and kB is the Boltzmann constant. Inversion94  then yields the fundamentally important potential energy function u(r) for a pair of molecules. While a discussion of experimental acoustical methods is way outside the scope of this introductory chapter, the following comment is indicated. For gases/vapours at low to moderate pressures not too close to saturation, the highest experimental precision, when measuring v0, is obtained through use of a spherical resonator, a technique which was pioneered by Moldover, Mehl and co-workers.95,96 So far, the focus was on homogeneous constant-composition fluids, of which pure fluids are special cases. I will now briefly consider the case where a pure liquid is in equilibrium with its vapour. Such a situation is encountered, for instance, in adiabatic calorimetry, where the calorimeter vessel is incompletely filled with liquid in order to accommodate the thermal expansion of the sample (usually, the vapour space volume is comparatively small). One has now a closed two-phase single-component system. The heat capacity of such a system is closely related to C$σL$, i.e. the molar heat capacity of a liquid in equilibrium with an infinitesimal amount of vapour (as before, the saturation condition is indicated by the subscript σ). For a detailed analysis see Hoge,97  Rowlinson and Swinton,56  and Wilhelm.98 The molar heat capacity at saturation of the substance in the equilibrium phase π (denoting either the liquid, π = L, or the vapour, π = V) is given by C$σπ$T(∂Sπ/∂T)σ, whence one obtains, for instance, Equation 103 Equation 104 Equation 105 Equation 106 Equation 107 Equation 108 Here, γσ≡(∂P/∂T)σ is the slope of the vapour-pressure curve, and Equation 109 denotes the expansivity of a pure substance in contact with the other equilibrium phase (i. e. along the saturation curve). As already pointed out, below the normal boiling point, the difference α$PL$ – α$σL$ is usually negligibly small. At the critical point Equation 110 Neither C$Pπ$ nor C$σπ$ is equal to the change of enthalpy with temperature along the saturation curve. 
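As a brief numerical aside to the virial-coefficient route above [Eqs. (93), (96) and (102)], before continuing with the saturation properties: the sketch below evaluates B(T) for an assumed Lennard-Jones pair potential with roughly argon-like parameters (an illustrative choice, not data from this chapter), and then forms the zero-pressure isothermal Joule–Thomson coefficient and the zero-pressure pressure derivative of CP by numerical differentiation.

```python
# Second virial coefficient from an assumed Lennard-Jones pair potential via
# Eq. (102), followed by numerical temperature derivatives giving the
# zero-pressure isothermal Joule-Thomson coefficient phi0 = B - T dB/dT and the
# zero-pressure slope (dCp/dP)_T = -T d2B/dT2. The epsilon/kB and sigma values
# are illustrative (roughly argon-like).
import numpy as np
from scipy.integrate import quad

NA, kB = 6.02214076e23, 1.380649e-23
eps_over_k, sigma = 120.0, 3.40e-10          # K, m (illustrative LJ parameters)

def B(T):
    """B(T) in m^3 mol^-1 from Eq. (102), with reduced variable x = r/sigma."""
    def integrand(x):
        u_over_kT = 4.0 * eps_over_k / T * (x**-12 - x**-6)
        return (np.exp(-u_over_kT) - 1.0) * x**2
    val, _ = quad(integrand, 1e-6, np.inf, limit=200)
    return -2.0 * np.pi * NA * sigma**3 * val

T, dT = 300.0, 1.0
B0 = B(T)
dBdT   = (B(T + dT) - B(T - dT)) / (2 * dT)
d2BdT2 = (B(T + dT) - 2 * B0 + B(T - dT)) / dT**2

phi0     = B0 - T * dBdT        # Eq. (96): zero-pressure limit of (dH/dP)_T
dCp_dP_0 = -T * d2BdT2          # Eq. (93): zero-pressure limit of (dCp/dP)_T

print(f"B({T:.0f} K)      = {B0*1e6:8.2f} cm^3 mol^-1")
print(f"phi0          = {phi0*1e6:8.2f} cm^3 mol^-1")
print(f"(dCp/dP)_T->0 = {dCp_dP_0*1e6:8.4f} cm^3 mol^-1 K^-1")
```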
From Equation (65) one obtains Equation 111 Equation 112 Since U = HPV, Equation 113 Thus for the saturated liquid at [T, Pσ(T)] at temperatures where $PL$<1, the following sequence is obtained: Equation 114 The differences between the first four quantities are generally much smaller than between C$VL$ and (∂UL/∂T)σ. While the general equations apply also to the saturated vapour (π = V), the inequality does not. Since α$PV$VV is always large, for saturated vapours the difference C$σV$C$PV$ is always significant [see Equation (104)]. In fact, for vapours of substances with small molecules, such as argon, carbon dioxide, ammonia and water (steam), α$PV$VV may be large enough to make C$σV$ even negative. Finally we note that the difference between the saturation heat capacities in the vapour phase and the liquid phase may be expressed as98 Equation 115 and the difference between the isobaric heat capacities in the vapour phase and the liquid phase as Equation 116 where ΔvapH denotes the molar enthalpy of vaporisation, and ΔvapVVV − VL is the volume change on vaporisation. In deriving these equations, use was made of the exact Clapeyron equation Equation 117 and the exact Planck equation.99 There are, of course, many additional details and fascinating topics, in particular when mixtures and solutions are considered, which fact is amply evidenced by the contributions to this monograph. Enjoy! Calorimetry and PVT measurements are the most fundamental and also the oldest experimental disciplines of physical chemistry. Although simple in principle, enormous effort and ingenuity has gone into designing the vast array of apparatus now at our disposal. In this introductory chapter, I did not cover design of experiments beyond the bare rudiments – the reader is referred to the relevant articles and books quoted, and to the chapters of this book focusing on this aspect. Let it suffice to say that the advances in instrumentation during the last decades have greatly facilitated the high-precision determination of caloric and PVT properties of fluids over large ranges of temperature and pressure. At the same time cross-fertilisation with other disciplines, notably with ultrasonics and hypersonics, and with biophysics, is becoming increasingly important, as is the close connection to equation-of-state research and, of course, chemical engineering.56,61,79,98,100–103  The discussion presented here and in the chapters to follow may perhaps best be characterised by a statement due to Gilbert Newton Lewis (1875–1946) on the practical philosophy of scientific research: The scientist is a practical man and his are practical aims. He does not seek the ultimate but the proximate. He does not speak of the last analysis but rather of the next approximation. … On the whole, he is satisfied with his work, for while science may never be wholly right it certainly is never wholly wrong; and it seems to be improving from decade to decade. By necessity, this introductory chapter is limited to a few topics, the selection of which was also influenced by my current interests. In conclusion, I hope to have: • formulated concisely some important aspects of the thermodynamic formalism needed in this area of research; • discussed and made transparent some key aspects of experiments; • shown how to apply and to appropriately extend well-known concepts to perhaps less familiar, yet potentially important, problems; • stimulated some colleagues to enter this fascinating and important field of research. 
Success in any of these points would be most rewarding. 1 In this chapter the isothermal compressibility is represented by the symbol βTand not by κT as was recently recommended by IUPAC. Similarly, the isentropic compressibility is represented by the symbol βS and not by κS 1. Eucken A. Phys. Z. 1909 , vol. 10 pg. 586 2. Nernst W. Ann. Phys. (Leipzig) 1911 , vol. 36 pg. 395 3. Cohen E. Th. Moesveld A. L. Z. Phys. Chem. 1920 , vol. 95 pg. 305 4. Lange F. Z. Phys. Chem. 1924 , vol. 110 pg. 343 5. Southard J. C. Andrews D. H. J. Franklin Inst. 1930 , vol. 209 pg. 349 6. Southard J. C. Brickwedde F. G. J. Am. Chem. Soc. 1933 , vol. 55 pg. 4378 7. Aston J. G. Eidinoff M. L. J. Am. Chem. Soc. 1939 , vol. 61 pg. 1533 8. Osborne N. S. Ginnings D. C. J. Res. Natl. Bur. Stand. 1947 , vol. 39 pg. 453 9. Huffman H. M. Chem. Rev. 1947 , vol. 40 pg. 1 10. Johnston H. L. Clarke J. T. Rifkin E. B. Kerr E. C. J. Am. Chem. Soc. 1950 , vol. 72 pg. 3933 11. Eucken A. Eigen M. Z. Elektrochem. 1951 , vol. 55 pg. 343 12. Hill R. W. J. Sci. Instrum. 1953 , vol. 30 pg. 331 13. Stull D. R. Anal. Chim. Acta 1957 , vol. 17 pg. 133 14. West E. D. Ginnings D. C. J. Res. Natl. Bur. Stand. 1958 , vol. 60 pg. 309 15. Todd L. J. Dettre R. H. Andrews D. H. Rev. Sci. Instrum. 1959 , vol. 30 pg. 463 16. Goodwin R. D. J. Res. Natl. Bur. Stand. 1961 , vol. 65C pg. 309 17. Andon R. J. L. Counsell J. F. Herington E. F. G. Martin J. F. 1963 , vol. 59 pg. 850 18. Wilhelm E. Schano R. Becker G. Findenegg G. H. Kohler F. 1969 , vol. 65 pg. 1443 19. Van Miltenburg J. C. J. Chem. Thermodyn. 1972 , vol. 4 pg. 773 20. Experimental Thermodynamics, Volume I: Calorimetry of Non-reacting Systems , J. P. McCullough and D. W. Scott, eds., Butterworths/IUPAC , London , 1968 21. Solution Calorimetry. Experimental Thermodynamics, Volume IV , K. N. Marsh and P. A. G. O’Hare, eds., Blackwell Scientific Publications/IUPAC , Oxford , 1994 22. Measurement of the Thermodynamic Properties of Single Phases. Experimental Thermodynamics, Volume VI , A. R. H. Goodwin, K. N. Marsh and W. A. Wakeham, eds., Elsevier/IUPAC , Amsterdam , 2003 23. W. Hemminger and G. Höhne , Calorimetry. Fundamentals and Practice , Verlag Chemie , Weinheim , 1984 24. Lakshmikumar S. T. Gopal E. S. R. Int. Rev. Phys. Chem. 1982 , vol. 2 pg. 197 25. I. Thermochim. Acta 1985 , vol. 96 pg. 313 26. Gmelin E. Thermochim. Acta 1987 , vol. 110 pg. 183 27. G. Höhne , W. Hemminger and H.-J. Flammersheim , Differential Scanning Calorimetry. An Introduction for Practitioners , Springer , Berlin , 1996 28. Picker P. Leduc P.-A. Philip P. R. Desnoyers J. E. J. Chem. Thermodyn. 1971 , vol. 3 pg. 631 29. Grolier J.-P. E. Benson G. C. Picker P. J. Chem. Eng. Data 1975 , vol. 20 pg. 243 30. Smith-Magowan D. Wood R. H. J. Chem. Thermodyn. 1981 , vol. 13 pg. 1047 31. White D. E. Wood R. H. Biggerstaff D. R. J. Chem. Thermodyn. 1988 , vol. 20 pg. 159 32. Carter R. W. Wood R. H. J. Chem. Thermodyn. 1991 , vol. 23 pg. 1037 33. Hei T. K. Raal J. D. AIChE J. 2009 , vol. 55 pg. 206 34. M. P. E. Berthelot , Thermochimie, Vol. I and II , Gautier-Villars et Fils , Paris , 1897 35. International Critical Tables of Numerical Data, Physics, Chemistry and Technology, Vol. V, prepared by the National Research Council of the United States of America, E. W. Washburn, editor-in-chief, McGraw-Hill Book Company, New York, 1929, pp. 78–129 36. J. Timmermans, Physicochemical Constants of Pure Organic Compounds, Vol. I, 1950; Vol. II, 1965, Elsevier, Amsterdam 37. Landolt-Börnstein. 
# To study the variation in volume with pressure for a sample of air at constant temperature by plotting graphs between p and V

Physics Experiments

## Introduction

Boyle's law rests on the observed behaviour of gases. A gas has no fixed shape or volume of its own; it takes the shape of the vessel in which it is contained. The primary macroscopic properties of a gas are its volume, pressure, mass and temperature. These properties are explained by the kinetic theory of gases, which considers the motion and molecular composition of the gas.

## Theory of Boyle's law

Before stating Boyle's law, the gas laws in general should be introduced. A gas law describes the relationship between the macroscopic properties of a gas (Lin et al. 2021). Boyle's law concerns the increase in the volume of a given gas that accompanies a decrease in its pressure.

Figure 1: Boyle's law apparatus

More precisely, the law states that the pressure of a given mass of gas is inversely proportional to its volume, provided the temperature of the gas is kept constant.

## Aim and apparatus required

The aim of this experiment is to study the change in pressure and volume, the macroscopic properties recorded for the given sample of air, while keeping the temperature of the air sample constant (Amrita.olabs.edu, 2022). A graph is then plotted between the pressure and volume values obtained in the experiment. The apparatus required includes Boyle's law apparatus, a plumb line, Fortin's barometer, a pair of set squares and a thermometer.

## Lab procedure

The following procedure is to be followed. First, the apparatus is set up vertically, supported on a heavy metallic base with levelling screws; a plumb line is used to check that it is vertical. Tube A contains the enclosed air, mercury lies in tube B, and the atmospheric pressure and temperature are read from Fortin's barometer and the thermometer.

Figure 2: Pressure of air in tube AB = H + h

The volume is read from the graduated tube A. Both tubes A and B are then moved to record the variations in the pressure and volume of the air sample.

## Observations

| Hg level in Tube-A (cm) | Hg level in Tube-B (cm) | Pressure difference p (cm) | Pressure of air P = P0 + p (cm of Hg) | Volume of air V (cm³) | 1/V (cm⁻³) | PV |
|---|---|---|---|---|---|---|
|   |   |   |   |   |   |   |

Table 1: Studying the variation in volume with pressure

## Result

The observations show that the product PV remains constant, as Boyle's law predicts. The P-V graph is a hyperbola, consistent with PV = constant at constant temperature.

Figure 3: Boyle's law

This also satisfies the relation $\mathrm{P \propto 1/V}$, which is used to study the variation of volume with pressure.
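To see how the observation table is filled in, here is a minimal sketch using made-up sample readings (the barometer reading P0 = 76 cm of Hg and the (p, V) pairs below are assumed values for illustration, not measured data):

```python
# Sample Boyle's-law readings (hypothetical values for illustration only).
P0 = 76.0  # assumed atmospheric pressure in cm of Hg

# Each entry: (pressure difference p in cm of Hg, volume V in cm^3)
readings = [(0.0, 20.0), (10.0, 17.7), (20.0, 15.8), (30.0, 14.3)]

for p, V in readings:
    P = P0 + p  # total pressure of the enclosed air
    print(f"P = {P:5.1f} cm Hg, V = {V:5.1f} cm^3, "
          f"1/V = {1.0 / V:.4f} cm^-3, PV = {P * V:7.1f}")
# If Boyle's law holds, the PV column stays (roughly) constant.
```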
## Precautionary measures

A few precautions must be followed for the study to be carried out successfully. First, the graph should be plotted carefully and accurately. The wooden board must be kept vertical, and the mercury levels should be read properly with the set squares during the test (Kim, Kim & Han, 2021). The atmospheric pressure should be recorded at both the beginning and the end of the experiment.

## Conclusion

The study of the variation of volume with the pressure of air rests on Boyle's law, according to which volume and pressure are inversely proportional to each other. During the experiment, the air in tube A must be kept pure and dry for the results to be reliable. The conclusion is supported by the observation that a graph of P against 1/V is a straight line with a positive slope.

## FAQs

Q1. How does changing the volume affect the pressure when the temperature is constant?

Ans. Volume and pressure are inversely related: decreasing the volume of the contained gas increases its pressure, and increasing the volume decreases the pressure, and vice versa.

Q2. How does the volume of a gas change when pressure is applied to it?

Ans. The volume of a gas changes in inverse proportion to the applied pressure. When the pressure on a gas is increased, its volume decreases correspondingly.

Q3. Does pressure have any effect on the increase and decrease of volume?

Ans. When the volume of a gas is decreased, its particles collide with the walls of the container more frequently. The force these collisions exert on the walls is what links the pressure to the change in volume.

Q4. What are some possible sources of error when verifying the variation of volume with pressure?

Ans. Significant sources of error include the use of impure or moist air in tube A, as well as a narrow base and poor stability of the apparatus.
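The straight-line check mentioned in the Conclusion can be visualised directly; a minimal sketch using matplotlib (with the same kind of made-up readings as above, and an assumed P0 of 76 cm of Hg) is:

```python
import matplotlib.pyplot as plt

P0 = 76.0  # assumed atmospheric pressure in cm of Hg
readings = [(0.0, 20.0), (10.0, 17.7), (20.0, 15.8), (30.0, 14.3)]  # hypothetical (p, V) pairs

P = [P0 + p for p, _ in readings]
inv_V = [1.0 / V for _, V in readings]

plt.plot(inv_V, P, "o-")
plt.xlabel("1/V (cm^-3)")
plt.ylabel("P (cm of Hg)")
plt.title("Boyle's law: P versus 1/V should be a straight line")
plt.show()
```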
# Gravitational pull question

1. May 23, 2007

### michaelmellette

Shouldn't an object that is being pushed away from Earth with a constant force accelerate as it goes up in elevation? Also, wouldn't temperature have an effect on the object's gravitational pull?

2. May 23, 2007

### cristo (Staff Emeritus)

Any object that has a constant force F applied to it will experience an acceleration a due to Newton's second law: F = ma. Do you mean the acceleration the object would feel due to gravity? If so, the temperature will affect the acceleration of an object due to gravity if it affects the mass of the object.

3. May 23, 2007

### michaelmellette

Yes, but would two objects of the same mass but different temperatures have different gravitational pulls?

4. May 23, 2007

### cristo (Staff Emeritus)

The gravitational force between two bodies is $$F=\frac{GmM}{r^2}$$. Since this does not depend upon temperature, the answer to your question is no.

5. May 23, 2007

### michaelmellette

So the Sun's temperature has nothing to do with its gravitational pull on us? I am going to look into this and try to find out something. My question is: if the Sun put out no heat, would we still be in the same orbit, or would everything change? Because if temperature had nothing to do with it, it would not matter.

6. May 23, 2007

### arunbg

What do you mean by the force of the Sun's temperature? If the Sun stopped burning, it would collapse under its own gravitational forces, if that is what you were asking.

7. May 23, 2007

### cristo (Staff Emeritus)

This makes no sense, as there is no such thing as a force due to temperature. But if the temperature of a star changes, then the mass of the star will change. Furthermore, if a star stopped radiating, then it would cease to be a star (and I suppose would collapse, since there would no longer be anything balancing the gravitational force in the star!)

8. May 23, 2007

### michaelmellette

I know the Sun would, but let's say all the Sun's heat was blocked from the Earth: would our orbit change or not? And we have not proven that temperature is not a force. A warm wind blows faster than a cold wind, does it not? Things that are warmer have more energy, so how could that not be a force?

9. May 23, 2007

### cristo (Staff Emeritus)

Well, it wholly depends on what's blocking the sunlight, doesn't it! To be honest, I don't like the way this thread is going. The fact that the OP does not actually ask the question that you were wanting answered, but is in fact designed to draw someone into replying to a seemingly innocent question, tends to hint to me that this is going down the crackpot route. Therefore, I shall bow out now.
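As a quick numerical illustration of the point made in post 4, here is a minimal sketch that evaluates Newton's law of gravitation for rough Sun-Earth values; nothing temperature-related enters the calculation (the mass and distance figures below are approximate textbook values, not taken from the thread):

```python
# Newton's law of gravitation: F = G * m * M / r**2 (no temperature term anywhere).
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
m_earth = 5.97e24  # kg (approximate)
m_sun = 1.99e30    # kg (approximate)
r = 1.496e11       # mean Earth-Sun distance in metres (approximate)

F = G * m_earth * m_sun / r**2
print(f"Gravitational force between Sun and Earth: {F:.3e} N")  # roughly 3.5e22 N
```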
• P K Rath Articles written in Pramana – Journal of Physics

• Exploring effective interactions through transition charge density study of $^{70,72,74,76}$Ge nuclei

Transition charge densities (TCD) for the $0^+ \rightarrow 2_1^+$ excitation have been calculated for $^{70,72,74,76}$Ge nuclei within a microscopic variational framework employing the $2p_{3/2}$, $1f_{5/2}$, $2p_{1/2}$ and $1g_{9/2}$ valence space. The calculated TCDs for different monopole variants of the Kuo interaction are compared with available experimental results. Other systematics, such as reduced transition probabilities $B(E2)$ and static quadrupole moments $Q(2)$, are also presented. It is observed that the transition density study acts as a sensitive probe for discriminating the response of different parts of effective interactions.

• Two-neutrino double $\beta$ decay of $^{96}$Zr to the excited $2^+$ state of $^{96}$Mo

The two-neutrino double beta decay of the $^{96}$Zr isotope for the $0^+ \rightarrow 2^+$ transition has been studied in the PHFB model. In our earlier work, the reliability of the intrinsic wave functions of the $^{96}$Zr and $^{96}$Mo isotopes was established by obtaining an overall agreement between a number of theoretically calculated spectroscopic properties as well as half-lives of $2\nu\beta\beta$ decay for the $0^+ \rightarrow 0^+$ transition and the available experimental data. In the present work, the half-life of $2\nu\beta\beta$ decay for the $0^+ \rightarrow 2^+$ transition, $T_{1/2}^{2\nu}(0^+ \rightarrow 2^+)$, has been calculated using the same set of intrinsic wave functions.

• Structure of nuclear transition matrix elements for neutrinoless double-$\beta$ decay

The structure of the nuclear transition matrix elements (NTMEs) required for the study of neutrinoless double-$\beta$ decay within the light Majorana neutrino mass mechanism is disassembled in the PHFB model. The NTMEs are calculated using a set of HFB intrinsic wave functions, the reliability of which has been previously established by obtaining an overall agreement between the theoretically calculated spectroscopic properties and the available experimental data. Presently, we study the role of short-range correlations, the radial evolution of NTMEs and deformation effects due to quadrupolar correlations. In addition, limits on the effective light neutrino mass $\langle m_{\nu} \rangle$ are extracted from the observed limits on half-lives $T_{1/2}^{0\nu}$ of neutrinoless double-$\beta$ decay.

• Elastic scattering and fusion cross-sections in the $^{7}{\text{Li}} + ^{27}{\text{Al}}$ reaction

With an aim to understand the effects of breakup and transfer channels on elastic scattering and fusion cross-sections in the $^{7}{\text{Li}} + ^{27}{\text{Al}}$ reaction, simultaneous measurements of elastic scattering angular distributions and fusion cross-sections have been carried out at various energies ($E_{\text{lab}} = 8.0$–$16.0$ MeV) around the Coulomb barrier. Optical model (OM) analysis of the elastic scattering data does not show any threshold anomaly or breakup threshold anomaly behaviour in the energy dependence of the real and imaginary parts of the OM potential. The fusion cross-section at each bombarding energy is extracted from the measured $\alpha$-particle evaporation energy spectra at backward angles by comparing with the statistical model prediction. Results on fusion cross-sections from the present measurements, along with data from the literature, have been compared with coupled-channels predictions. Detailed coupled-channels calculations have been carried out to study the effect of coupling of breakup, inelastic and transfer channels on elastic scattering and fusion.
The effect of $1n$-stripping transfer coupling was found to be significant compared to that of the projectile breakup couplings in the present system.
# HD 82573

### Images

### Related articles

Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin i

This work is the second part of the set of measurements of v sin i for A-type stars, begun by Royer et al. (\cite{Ror_02a}). Spectra of 249 B8 to F2-type stars brighter than V=7 have been collected at Observatoire de Haute-Provence (OHP). Fourier transforms of several line profiles in the range 4200-4600 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that the measurement error mainly depends on v sin i, and the relative error of the rotational velocity is found to be about 5% on average. The systematic shift with respect to standard values from Slettebak et al. (\cite{Slk_75}), previously found in the first paper, is here confirmed. Comparisons with data from the literature agree with our findings: v sin i values from Slettebak et al. are underestimated, and the relation between both scales follows the linear law $v \sin i_{\rm new} = 1.03\, v \sin i_{\rm old} + 7.7$. Finally, these data are combined with those from the previous paper (Royer et al. \cite{Ror_02a}), together with the catalogue of Abt & Morrell (\cite{AbtMol95}). The resulting sample includes some 2150 stars with homogenized rotational velocities. Based on observations made at Observatoire de Haute-Provence (CNRS), France. Tables \ref{results} and \ref{merging} are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897

On the Period-Luminosity-Colour-Metallicity relation and the pulsational characteristics of lambda Bootis type stars

Generally, chemical peculiarity found for stars on the upper main sequence excludes delta Scuti type pulsation (e.g. Ap and Am stars), but for the group of lambda Bootis stars it is just the opposite. This makes them very interesting for asteroseismological investigations. The group of lambda Bootis type stars comprises late B- to early F-type, Population I objects which are basically metal weak, in particular in the Fe group elements, but with the clear exception of C, N, O and S. The present work is a continuation of the studies by Paunzen et al. (\cite{Pau97}, \cite{Pau98}), who presented first results on the pulsational characteristics of the lambda Bootis stars. Since then, we have observed 22 additional objects; we found eight new pulsators and confirmed another one. Furthermore, new spectroscopic data (Paunzen \cite{Pau01}) allowed us to sort out misidentified candidates and to add true members to the group. From 67 members of this group, only two are not yet photometrically investigated, which makes our analysis highly representative. We have compared our results on the pulsational behaviour of the lambda Bootis stars with those of a sample of delta Scuti type objects. We find that at least 70% of all lambda Bootis type stars inside the classical instability strip pulsate, and they do so with high overtone modes (Q < 0.020 d). Only a few stars, if any, pulsate in the fundamental mode. Our photometric results are in excellent agreement with the spectroscopic work on high-degree nonradial pulsations by Bohlender et al. (\cite{Boh99}). Compared to the delta Scuti stars, the cool and hot borders of the instability strip of the lambda Bootis stars are shifted by about 25 mmag towards smaller (b-y)_0.
Using published abundances and the metallicity sensitiveindices of the Geneva 7-colour and Strömgren uvbybeta systems, wehave derived [Z] values which describe the surface abundance of theheavier elements for the group members. We find that thePeriod-Luminosity-Colour relation for the group of lambda Bootis starsis within the errors identical with that of the normal delta Scutistars. No clear evidence for a statistically significant metallicityterm was detected. Based on observations from the Austrian AutomaticPhotoelectric Telescope (Fairborn Observatory), SAAO and Siding SpringObservatory. A spectroscopic survey for lambda Bootis stars. II. The observational datalambda Bootis stars comprise only a small number of all A-type stars andare characterized as nonmagnetic, Population i, late B to early F-typedwarfs which show significant underabundances of metals whereas thelight elements (C, N, O and S) are almost normal abundant compared tothe Sun. In the second paper on a spectroscopic survey for lambda Bootisstars, we present the spectral classifications of all program starsobserved. These stars were selected on the basis of their Strömgrenuvbybeta colors as lambda Bootis candidates. In total, 708 objects insix open clusters, the Orion OB1 association and the Galactic field wereclassified. In addition, 9 serendipity non-candidates in the vicinity ofour program stars as well as 15 Guide Star Catalogue stars were observedresulting in a total of 732 classified stars. The 15 objects from theGuide Star Catalogue are part of a program for the classification ofapparent variable stars from the Fine Guidance Sensors of the HubbleSpace Telescope. A grid of 105 MK standard as well as pathological''stars guarantees a precise classification. A comparison of our spectralclassification with the extensive work of Abt & Morrell(\cite{Abt95}) shows no significant differences. The derived types are0.23 +/- 0.09 (rms error per measurement) subclasses later and 0.30 +/-0.08 luminosity classes more luminous than those of Abt & Morrell(\cite{Abt95}) based on a sample of 160 objects in common. The estimatederrors of the means are +/- 0.1 subclasses. The characteristics of oursample are discussed in respect to the distribution on the sky, apparentvisual magnitudes and Strömgren uvbybeta colors. Based onobservations from the Observatoire de Haute-Provence, OsservatorioAstronomico di Padova-Asiago, Observatório do Pico dosDias-LNA/CNPq/MCT, Chews Ridge Observatory (MIRA) and University ofToronto Southern Observatory (Las Campanas). Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statisticsThe Catalogue, available at the Centre de Données Stellaires deStrasbourg, consists of 13 573 records concerning the results obtainedfrom different methods for 7778 stars, reported in the literature. Thefollowing data are listed for each star: identifications, apparentmagnitude, spectral type, apparent diameter in arcsec, absolute radiusin solar units, method of determination, reference, remarks. Commentsand statistics obtained from CADARS are given. The Catalogue isavailable in electronic form at the CDS via anonymous ftp tocdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521 Sixth Catalogue of Fundamental Stars (FK6). Part III. 
Additional fundamental stars with direct solutionsThe FK6 is a suitable combination of the results of the HIPPARCOSastrometry satellite with ground-based data, measured over a longinterval of time and summarized mainly in the FK5. Part III of the FK6(abbreviated FK6(III)) contains additional fundamental stars with directsolutions. Such direct solutions are appropriate for single stars or forobjects which can be treated like single stars. Part III of the FK6contains in total 3272 stars. Their ground-based data stem from thebright extension of the FK5 (735 stars), from the catalogue of remainingSup stars (RSup, 732 stars), and from the faint extension of the FK5(1805 stars). From the 3272 stars in Part III, we have selected 1928objects as "astrometrically excellent stars", since their instantaneousproper motions and their mean (time-averaged) ones do not differsignificantly. Hence most of the astrometrically excellent stars arewell-behaving "single-star candidates" with good astrometric data. Thesestars are most suited for high-precision astrometry. On the other hand,354 of the stars in Part III are Δμ binaries in the sense ofWielen et al. (1999). Many of them are newly discovered probablebinaries with no other hitherto known indication of binarity. The FK6gives, besides the classical "single-star mode" solutions (SI mode),other solutions which take into account the fact that hidden astrometricbinaries among "apparently single-stars" introduce sizable "cosmicerrors" into the quasi-instantaneously measured HIPPARCOS proper motionsand positions. The FK6 gives, in addition to the SI mode, the "long-termprediction (LTP) mode" and the "short-term prediction (STP) mode". TheseLTP and STP modes are on average the most precise solutions forapparently single stars, depending on the epoch difference with respectto the HIPPARCOS epoch of about 1991. The typical mean error of anFK6(III) proper motion in the single-star mode is 0.59 mas/year. This isa factor of 1.34 better than the typical HIPPARCOS errors for thesestars of 0.79 mas/year. In the long-term prediction mode, in whichcosmic errors are taken into account, the FK6(III) proper motions have atypical mean error of 0.93 mas/year, which is by a factor of about 2better than the corresponding error for the HIPPARCOS values of 1.83mas/year (cosmic errors included). Pulsation in lambda Bootis starsIn this paper we present a further step in applying asteroseismictechniques to the group of lambda Bootis stars which can becharacterized as nonmagnetic A to F-type Population I dwarfs withsignificant (surface) underabundances of Fe-peak elements. Since noconclusive theory explaining the origin of the observed abundanceanomalies exists, an extensive photometric survey for pulsation in thisgroup has been initiated. Knowledge about the pulsational properties(most members are located within the classical instability strip) couldhelp to establish constrains about the overall abundance of these starsas well as on the evolutionary status. New photometric observations werecarried out for eleven stars. Variability was detected in four stars(e.g. lambda Bootis itself) whereas the remaining seven objects areprobably constant. In total, 52 members of this group have beenphotometrically investigated so far. With 22 pulsating and 30constant'' stars, we derive a ratio of at least 50 % for variable tononvariable members inside the classical instability strip. This resultis based on high quality Hipparcos and new photometric data. 
Theobserved log /lineρ//lineρ_ȯ and log P values for thepulsating members are compatible with standard (solar abundant) deltaScuti models supporting the hypothesis that the found abundanceanomalies are restricted to the surface only. Otherwise the pulsationalproperties of this group are not outstanding compared to normal''delta Scuti stars, indicating that the mechanism driving the pulsationsis very similar. Based on observations obtained at ESO-La\,Silla, CTIO,SAAO, McDonald Observatory, Instituto Astrofisica Andalucia Observatoryand with the Hipparcos satellite The Angular Momentum of Main Sequence Stars and Its Relation to Stellar ActivityRotational velocities are reported for intermediate-mass main sequencestars it the field. The measurements are based on new, high S/N CCDspectra from the Coudé Feed Telescope of the Kitt Peak NationalObservatory. We analyze these rotation rates for a dependence on bothmass and age. We compare the average rotation speeds of the field starswith mean velocities for young stars in Orion, the Alpha Persei cluster,the Pleiades, and the Hyades. The average rotation speeds of stars moremassive than $\sim1.6$ \msun\experience little or no change during theevolutionary lifetimes of these stars on the zero age main sequence orwithin the main sequence band. Less massive stars in the range betwee n1.6\msun\ and 1.3\msun\ also show little decline in mean rotation ratewhile they are on the main sequence, and at most a factor of 2 decreasein velocity as they evolve off the main sequence. The {\it e}-foldingtime for the loss of angular momentum b y the latter group of stars isat least 1--2 billion years. This inferred characteristic time scale forspindown is far longer than the established rotational braking time forsolar-type stars with masses below $\sim1.3$ \msun. We conclude from acomparison of the trends in rotation with trends in chromospheric andcoronal activity that the overall decline in mean rotation speed alongthe main sequence, from $\sim2$ \msun\ down to $\sim1.3$ \msun, isimposed during the pre-main sequence phase of evolution, and that thispattern changes little thereafter while the star resides on the mainsequence. The magnetic activity implicated in the rotational spindown ofthe Sun and of similar stars during their main sequence lifetimes mus ttherefore play only a minor role in determining the rotation rates ofthe intermediate mass stars, either because a solar-like dynamo is weakor absent, or else the geometry of the magnetic field is appreciablyless effective in removing angular momentu m from these stars. (SECTION:Stars) Nonvariability among lambda Bootis starsWith asteroseismic techniques it is possible to investigate the interiorand the evolutionary status of stars via their frequency spectrum. Bothinformation would be very much needed for lambda Bootis stars, a groupof metal-poor Population I, A-type stars, since no conclusive theoryexists explaining the observed abundance anomalies. Geneva and Stromgrenphotometry place these stars inside the classical instability strip orat least very close to it. We therefore have started an extensivephotometric survey for pulsation in lambda Bootis stars and havediscovered so far 13 new variables. In this paper we present results forstars which presumably are constant, because we are able to establishonly an upper level for possible variability. A typical noise level of 3mmag for Stromgren b was achieved in the relevant frequency domain up to100 d^{-1}. 
Considering the given noise level of our survey, we concludethat at least 50% of all investigated lambda Bootis stars inside theinstability strip are pulsating, making this group remarkable comparedto stars with similar spectral types. This may suggest that a low(surface) metallicity has an influence on the pulsation Based onobservations obtained at ESO-La Silla, CTIO, SAAO, McDonald Observatory,Instituto Astrofisica Andalucia Observatory. Nonvariability among lambda Bootis Stars II.: SAAO (1994, 1995), CTIO (1994) and IAA (1996) DataNot Available The Gronbech-Olsen photometry: Transformations to a Hyades-Coma systemIn this paper, we consider the zero points of six sets of Stromgren-betaphotometry. The color-index system to which our results are referred isa 'Hyades-Coma' system composed of photometry by Crawford and Perry(1966) and Crawford and Barnes (1969). For V magnitudes, we usemeasurements by Taylor and Joner (1992). Our results are as follows. (1)The zero points of photometry by Gronbech and Olsen (1976, 1977) areoffset from those of the Hyades-Coma system. The offsets can amount toseveral mmag; they appear for V and all color indices except beta, anddepend on right ascension and (usually) declination. (2) These offsetscan be applied to photometry by Stetson (1991), who reduced his resultsto the Gronbech-Olsen system. After correction, Stetson's results for aset of 'transfer stars' differ from comparable data published byCrawford and Barnes (1970). (3) A direct comparison of the transferstars to the Hyades yields consistency between the Hyades-Coma andCrawford-Barnes zero points (for the transfer stars specifically). Thisresult supports a conclusion drawn by Taylor and Joner, and suggeststhat here is some problem with the zero points of Stetson'stransfer-star data. (4) From Stetson's corrected data, one finds thatthe Crawford-Perry zero points for the Hyades are consistent with theCrawford-Barnes zero points for Coma. This result agrees with aconclusion drawn by Taylor and Joner from their own data, and suggeststhat the problem postulated for Stetson's transfer-star data does notextend to his results for the Hyades and Coma. The Relation between Rotational Velocities and Spectral Peculiarities among A-Type StarsAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST The position corrections of 1400 stars observed with PA II in San Juan.Not Available Improved Mean Positions and Proper Motions for the 995 FK4 Sup Stars not Included in the FK5 ExtensionNot Available Early type high-velocity stars in the solar neighborhood. IV - Four-color and H-beta photometryResults are presented from photometric obaservations in the Stromgrenuvby four-color and H-beta systems of early-type high-velocity stars inthe solar neighborhood. Several types of photometrically peculiar starsare selected on the basis of their Stromgren indices and areprovisionally identified as peculiar A stars, field horizontal-branchstars, metal-poor stars near the Population II and old-disk turnoffs,metal-poor blue stragglers, or metallic-line A stars. Numerousphotometrically normal stars were also found. Two-colour diagrams for differentially rotating starsNot Available Prediction of spectral classification from photometric observations - Application of the UVBY beta photometry and the MK spectra classification. 
II - General caseAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1980A&A....85...93M&db_key=AST UBVRI photometry of 225 AM starsUBVRI photometry of 225 Am stars taken from Mendoza's (1974) catalog ispresented. The results are compared with those obtained by Feinstein(1974) for 21 of the stars and with the values of Johnson et al. (1966).It is assumed that in the first approximation the (V-I) color index ofan unreddened Am star is equal to that of a normal main-sequence star; astandard main sequence is defined for A and early F stars, and thefive-color photometry is analyzed by means of plots of U-V vs. V-I, B-Vvs. V-I, and V-R vs. V-I. Mean color deficiencies of Am stars areexamined, and it is suggested that an unreddened star located below themain-sequence A0-F2 line in the (V-I, U-V) plane is a photometric Amstar. It is concluded that: (1) photometric Am stars have colordeficiencies (as a function of V-I) which, on the average, are 0.07 magin (U-V) color index and 0.025 mag in (B-V) color index; (2) Am starswith V-R less than 0.25 mag may also have a color deficiency of about0.01 mag; (3) Am stars with V-R greater than 0.3 mag may have a colorexcess of approximately 0.01 mag; and (4) Am stars with V-R between 0.25and 0.3 mag may have normal colors. Multicolor photometry of metallic-line stars. III. A photometric catalogueAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1974RMxAA...1..175M&db_key=AST Rotation and shell spectra among A-type dwarfs.Abstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1973ApJ...182..809A&db_key=AST Four-colour and H BET photometry of some bright southern stars- II.Abstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1972MNRAS.160..155S&db_key=AST Rotation of evolving A and F stars.Abstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1972A&A....18..428D&db_key=AST K-Line Photometry of Southern a StarsAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1971ApJS...23..421H&db_key=AST • - 没有找到链接 - ### 下列团体成员 #### 观测天体数据 星座: 長蛇座 右阿森松: 09h32m20.40s 赤纬: -19°24'01.0" 视星: 5.74 距离: 107.759 天文距离 右阿森松适当运动: -32 赤纬适当运动: 10.2 B-T magnitude: 5.903 V-T magnitude: 5.752
# Planck’s Equation

Planck’s constant is a number that describes the size of the energy packets contained within light. These packets of energy are called photons. Planck’s constant is given the symbol h, and its value is 6.63 × 10⁻³⁴ J·s. What is Planck’s equation? While Planck’s constant can now be … Read more about Planck’s Equation […]
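Planck's relation E = h·f connects a photon's energy to its frequency. A minimal sketch of that calculation (the 500 nm wavelength below is just an arbitrary example value):

```python
h = 6.63e-34  # Planck's constant in J s
c = 3.0e8     # speed of light in m/s

wavelength = 500e-9          # 500 nm, an arbitrary example wavelength (green light)
frequency = c / wavelength   # f = c / lambda
energy = h * frequency       # E = h * f, energy carried by one photon

print(f"f = {frequency:.3e} Hz, E = {energy:.3e} J per photon")
# Roughly 6e14 Hz and 4e-19 J, i.e. about 2.5 eV per photon.
```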
# Functional Geekery

In a previous post I mentioned I had a new project in the works. I was a guest on the Ruby Rogues podcast and made an announcement there, but for those who didn't catch that episode, I am now announcing Functional Geekery, a podcast about functional programming. After some issues with getting the hosting set up properly, and working with the hosting provider's support on a couple of issues, the first episode is ready to go live! I will be working on getting it into the iTunes store and some of the other podcasting services, but in the meantime, you can find it online. I am hoping to have a wide range of guests and topics, from Clojure, to Erlang, to JavaScript, to F#, as well as Scala, Haskell, and functional programming in languages like C# and Ruby. If you have any suggestions on shows, topics, or guests, check out the About Page on the site to submit ideas. –Proctor

# Clojure function has-factors-in?

Just another quick post this evening to share a new function I created as part of cleaning up my solution to Problem 1 of Project Euler. I was responding to a comment on Google+ on my update sharing the post Project Euler in Clojure – Problem 16, and saw the commenter had his own solution to Problem 1. In sharing my solution I realized that I could clean up my results even further, and added a function has-factors-in?. These updates have also been pushed to my Project Euler in Clojure GitHub repository for those interested.

(defn has-factors-in? [n coll] (some #(factor-of? % n) coll))

The original solution:

(defn problem1 ([] (problem1 1000)) ([n] (sum (filter #(or (factor-of? 3 %) (factor-of? 5 %)) (range n)))))

It now becomes:

(defn problem1 ([] (problem1 1000)) ([n] (sum (filter #(has-factors-in? % [3 5]) (range n)))))

This change makes my solution read even more like the problem statement given. –Proctor

# Project Euler in Clojure – Problem 16

Here is my solution to Problem 16 of Project Euler. As always, my progress can be tracked on GitHub at https://github.com/stevenproctor/project-euler-clojure. Problem 16 of Project Euler is: What is the sum of the digits of the number 2^1000? This problem was straightforward since I already had the function digits-of defined from problem8. I was able to be very declarative in my solution, so much so that it reads as the problem statement you are asked to solve.

(defn problem16 ([] (problem16 1000)) ([n] (sum (digits-of (expt 2 n)))))

As always, any feedback you have for me is greatly appreciated. –Proctor

# Project Euler in Clojure – Problem 15

Here is my solution to Problem 15 of Project Euler. As always, my progress can be tracked on GitHub at https://github.com/stevenproctor/project-euler-clojure. Problem 15 of Project Euler is summarized from the problem page as: Starting in the top left corner of a 2×2 grid, there are 6 routes (without backtracking) to the bottom right corner. How many routes are there through a 20×20 grid? I started this problem by tracing and counting the routes through grids of 2×2, 3×3 and 4×4, and even settled in and did a 5×5 square. Having these numbers, and knowing I had two choices for every position I was in, except for when the path got to the far edge and bottom, I had a hint at the growth rate of the problem. I tried some powers of 2 with the relationship of the numbers, and some factorials with the numbers.
After seeing some possible relationships with the factorials that might be leading me in the right direction, I tried a number of permutation calculations, and the combination calculations. Having seen the numbers show up in different combination results, I then spent time back-calculating from those numbers into my ns, and found that the pattern seemed to match 2n choose n. The source code to this was the factorial function:

(defn factorial [n] (multiply (range 1M (inc (bigdec n)))))

I could have done it recursively, but I figured I would just operate against the sequence of numbers, especially now that the reducers are available in the Clojure 1.5 Alpha 3 release (at the time of this writing). After I get through a few more problems (of which I am working ahead of these posts), I am thinking it would be interesting to run the same Project Euler problems against 1.4 and 1.5 using the reducers library, just substituting map/reduce for the reduce/combine functionality, and seeing how much effort it takes to move them over, as well as the differences in the timings of the different problems. The other base function I needed was a combination function:

(defn combination [n k] (cond (zero? n) 0 (zero? k) 1 :else (/ (factorial n) (* (factorial (- n k)) (factorial k)))))

This function just does the basic calculation for combinations, from the formula:

$\frac{n!}{(n-k)!\,k!}$

With that, and having stumbled upon the fact that $\binom{2n}{n}$ matches the number of paths through the square (which makes sense in hindsight: each route is a sequence of 2n moves of which exactly n must go down, so we are choosing which n of the 2n moves those are), the function problem15 is defined as:

(defn problem15 ([] (problem15 20)) ([n] (combination (+ n n) n)))

As always, any feedback you have for me is greatly appreciated. –Proctor

# Aspect Oriented Timing in C# with Castle Windsor

I was making some refurbishments to some reporting code in our application that used EF and was suffering from the Select N+1 problem. In truth, it was much worse, as it was a Select N+1 problem up to 6 levels deep depending on where the report was run from. I was changing the code to use a denormalized view from the database, and then run a SQL query using Entity Framework. When doing this I was asked to get the timings of the report, both against the new way and the existing way. As this was incidental to what I was really trying to do, I did not want to litter timing code and logging mechanisms into classes that already existed. This smelled of Aspect Oriented Programming (AOP). While I had not done anything using AOP before, I knew that it was great for cross-cutting concerns like logging, timings, etc. Having been digging into Clojure and LISP recently, this also seemed like a case of the :before, :after and :around methods in Common LISP, or the similar behavior in Eiffel as pointed out in Bertrand Meyer's Object Oriented Software Construction, not to mention the time function in Clojure, which is a function whose single concern is simply to capture the timing of a function passed into it. My hope was to simplify, or un-complect, the code, and keep those concerns separate. In our project, we have Castle Windsor set up as the IoC container, and Windsor supports a type of Aspect Oriented Programming using something called Interceptors. I found documentation on setting it up in a post by Ayende, and one by Andre Loker. The issue was that some of the places I wanted to set up the capturing of the timings were in different areas than where the handlers were registered for Windsor.
After some hunting around, I managed to come up with being able to add an interceptor to an already registered component by using the following line of code, where the IReportingService is the class I want to time the method calls around, and the LogTimingInterceptor is the class that captures the timing of the method call and sends it to the logger: container.Kernel.GetHandler(typeof(IReportingService)).ComponentModel.Interceptors.Add(new InterceptorReference(typeof(LogTimingInterceptor))); Hope someone else can find this useful, –Proctor # John Backus on the Assignment Statement The assignment statement is the von Neumann bottle-neck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer’s bottleneck does. John Backus, 1977 ACM Turing Award Lecture, Communications of the ACM August 1978, Volume 2, Number 8 # XMLisp? I had a twisted thought about a potential future thought experiment of using XML and Lisp style languages. Having used Lisp a very little bit back in college for one semester, and read more about it in Structure and Interpretation of Computer Programs, I started looking into Clojure recently. I did a session of CodeRetreat last year in it, and was hearing more about it this year at SCNA so I started to read up on it more and play a little bit with the language. Tie that in with that I recently was transferred to a new group at work that is doing some SOA (Service Oriented Architecture) work. Something triggered when I thought about the XML payloads being sent between the SOA Web Services and how that tied into what I am reading about Lisp and Clojure. In other languages we think about serializing command objects into XML and back and send those messages between Services as a message payload. What made me think was that XML is a tree structure as well as the code in a Lisp type language. What if we did something like Javascript and JSON? What if we convert the Lisp structure to XML and back, and then we can execute this Lisp structure data as Lisp code? With XML we can then also apply transforms and convert one message/command into another message/command, which would allow one message to be sent and transformed into multiple messages to be received by inserting messaging splitters and transformers. This is also not worrying about things like the security of the evaluation of the Lisp data as code since this something to think about as a thought experiment. I don’t know if this is a novel idea, or if someone else has already tried it, but to me it seems like an interesting thing to think about and mull over.
# A Survey on Few-shot Learning | Data

## The latest survey of few-shot learning

Posted by JoselynZhao on April 29, 2020

Data augmentation via hand-crafted rules is usually used as pre-processing in FSL methods. Such rules can introduce different kinds of invariance for the model to capture. For example, on images, one can use translation [12, 76, 114, 119], flipping [103, 119], shearing [119], scaling [76, 160], reflection [34, 72], cropping [103, 160] and rotation [114, 138].

# Discussion and Summary
Select Page Algorithms And Data Structures In Python In this post, I’ve covered Data Structures And Their Possible Click This Link For Data-Related Web Anagrams And Chaining. You can read the complete article I wrote in this place and learn about it by looking myself up on the Web. Data Structures And Chaining Data Structures And Chaining is one of the most elementary features that everyone else should understand. While there are many types of data structures that you can use in machine learning, nearly all data structures take in new information using concepts like “streams”. Those concepts are essential for data structures since they are the way data is transmitted on the main network. Chennida Shlomo Data structures are just data that is passed through your machine and how it is processed. If you look at some of the data structures on the Internet, they are the same as each other. Imagine using the world of music and computing, for example. You can imagine an internet-connected computer where the music is handed down through a piece of paper with 5 stops on it. Then it runs through a series of loops to create a specific number for each station and each point, that is the station. We will use those data structures in some detail later. Let me introduce you a few data structures that I found useful to learn in this post. Cloud-Searching There are many types of commands we can use to search or query data for content. Most popular are the following: Cloud-Searching, Cloud Computing, Cloud-Traction, Cloud-Management, Cloud-Monitoring This will give you an overview of the Cloud-Spanning cloud provider and the cloud providers. Cloud-Metricging I’ll start read defining the cloud-spanners. A cloud-spanners is a collection of technologies, particularly cloud-services or virtual-browsers, that connect computer hardware or storage systems like SDBs and disk drives to clouds. This refers to what is typically recognized by the cloud providers, and when a cloud-entity, for example, a particular value within the storage console, is in existence. This is because there is no out-of-range service provider, let alone a global name and description, in the cloud-system – because there is no way for cloud-systems to name their services in a more advanced way. When a cloud-container or container becomes available to it, it is sent by the cloud-spanners to a provider (such as Amazon Store or Google Cloud, for example). This provider converts the data into storage records that are stored in the cloud-container or container. ## What Data Structure As in other examples of data structures, the value of the score function is used to select pairs of nodes in a given pattern with respect to the potential input set space. The question of how many of these nodes sum to (usually) an alpha function is now asked in [4]. In section 4, we have in mind a problem, that of generating complex examples. Such examples are called *matrix systems*, which represent data browse around here for forming mathematical models and for training. For each case we have $e=\alpha\in [1, \infty)$ and $k=\epsilon \in [0,+\infty)$ for $e\geq 0$, respectively, where -Algorithms And Data Structures In Python Xampp How to Learn Python The Python language provides web based access to some of the most popular libraries in the world, namely Python and OCaml. As you can find in every language, this one is very important. Many of the world’s great libraries have been proven accurate to start with in other languages. 
Among the early Python libraries are: PyQC Pythonqt PySidepy Pyts PyQML PyQSparklet PythonSparklet PythonSparklet PythonQext PythonQc PythonSParklet PythonSparklet PythonSparklet PythonZip PythonZip PythonZip PythonZip PythonZip Fully BoostedPython PyPy’s most popular libraries are libPython and PyTorch, while Boost libraries like.py,.pyx,.pyi,.pip, and.pipx are also available. In each of these popular and commonly used packages — Python, PythonQ, PYTHON, PyPDF, PyWrap, PDF, PDFX for a vast array of advanced programming languages, and Python (Python) are listed below. Download the tutorial file here. One of the most popular applications in Python is PythonQt, or PyQt. How to Learn Python. Here is the source of each of the latest Python libraries: My_Libraries @import_base But it’s a bit of a no-no! Look at my_libraries for a definition of those—which is the only one I can find right now. Note that in my above example I referenced python.pyx and not simplex, which makes it not only a different format from the rest of the libraries.
# PIC16F628A PWM for LED Chaser #### rescue161 ##### Member I am brand new to PIC coding and programming, so don't beat me up too bad. I have a background in electronics, but the assembly language is throwing me off a bit. I'm more of a see one, do one, teach one kind of guy, so I've been looking at other folks source codes to try to decipher and mimic what they are doing. My end goal is to make a model of a rotating sealed-beam light to put on a 10th scale RC truck. Most of the commercially available lights are for 1/14th scale and smaller, and the operation isn't quite what I'm after. I successfully made a discrete component mock-up on a breadboard and it worked okay, but I just could not get the timing slow enough while keeping the fade rate the way I wanted. The rotation effect was either too fast with good fade, or the speed was right and it was too dim. Here is a short video of the original discrete component circuit using a 555 and a couple of 4017s. There are some components that aren't connected. I was testing different things and just left them on the board. That's when I started looking at PICs for my solution. I am trying to learn about changing PWM code in assembly language. The code I'm currently using is from http://picprojects.org.uk/projects/433chaser/index.htm and I modified it to suit my needs, but it only has four PWM states, off, dim, bright and very bright. This works pretty good using 6 LEDs and only lighting one LED to have it rotate around. I tried to use 8 LEDs, arranged in a circle and I lit two LEDs at a time (opposite of each other) to mimic a sealed-beam rotating PAR36 light that I'm trying to emulate. I can time the PIC to flash the "rotator" at 90 flashes per minute, but due to the PWM only having four brightness's, it looks jittery. I asked the writer of the original code to point me in the right direction, so I could add a few more states, but he said that it would take too much to modify the code and that if he had to do that, he would start over and write a new program in C. That is way over my head. So, I ask the experts here, what do I need to do to add a few more PWM states to the source code in the above link? Here is the area of the code that pertains to the PWM operation. The writer pointed me here, but I have absolutely no clue where to start. The area where I played around with is in the SeqData.inc file. That is where I changed which LED is lit, how bright and when, as well as the hold time between cycles. It works, it is just a little too jittery. 
Code: ; PWM Function _pwm movfw vc0 ; AND all 5 bits of vertical counter andwf vc1,W andwf vc2,W andwf vc3,W andwf vc4,W iorwf pwmOutput,F ; then OR bits with pwmOutput working variable movfw pwmOutput ; load in W ; when 5 bits in vertical count reach 11111 ; corresponding Port bit is turned on and then ; remains on until counter is reset movwf PORTA ; write to PORTA andlw 0xF0 ; force W to xxxx0000 iorwf copyPORTB,W ; copy the software oscillator output bit movwf PORTB ; and write to physical PORTB ; --------------------------------------- ; 2^5 bit x 8 vertical counter ; generates the pwm for the 8 channel LED output ; http://everything2.com/e2node/vertical counter _vc32 movf vc3,W andwf vc2,W andwf vc1,W andwf vc0,W xorwf vc4,F movf vc2,W andwf vc1,W andwf vc0,W xorwf vc3,F movf vc1,W andwf vc0,W xorwf vc2,F movf vc0,W xorwf vc1,F comf vc0,F ; --------------------------------------- decfsz pwm,F ; decrement PWM counter return ; return if count != 0 ; reset and reload PWM output / counter movlw .31 ; reload PWM counter movwf pwm clrf pwmOutput ; reset output port working variable ; vertical counter is 8 channels by 5 bits ; rvc1 rvc0 vc4-0 PWM ratio ; 0 0 00000 0/31 off ; 0 1 00001 1/31 dim ; 1 0 00100 8/31 bright ; 1 1 11111 31/31 very bright ; movfw loReload ; reload the vertical counter movwf vc0 movwf vc3 movwf vc1 movwf vc2 movwf vc4 return I read quite a bit on how the PIC gets its instructions and how the code works, but I'm just a simpleton, so any help is greatly appreciated. Thank you in advance. #### Mike - K8LH ##### Well-Known Member How many LEDs are you shooting for? #### rescue161 ##### Member Just 8 for each rotator, but if I could get a 12-LED program, I could omit the ones I don't need by leaving zeros in the sequence code. I want to build a replica of a Twin Sonic that initially started me off on this project. If I could figure out how to edit the code without breaking it, I'd be fine. I just don't grasp things very well until I see it. I have different breadboards set up currently using modified SeqData.inc for 6, 8 and 12-LED lights, but it is jittery due to only having a few LEDs between the four full brightness ones and pretty steep jumps in brightness values. Eventually, I want to make a 12-LED model of a Dietz 7-11 4-sealed-beam beacon to put on a wrecker. I have a couple here that I've made (LEDs only) and they look okay, but only if the speed is fast enough to not notice the jumpiness. They had a very slow flash rate and I'd like to mimic that effect. The way I did the 8-LED light looks like this: Code: control 1,31 hold 19 sdat 3,0,1,2,3,0,1,2 hold 19 sdat 2,3,0,1,2,3,0,1 hold 19 sdat 1,2,3,0,1,2,3,0 hold 19 sdat 0,1,2,3,0,1,2,3 seqend I want the 8-LEDs set up with opposing LEDs to be lit, so the above code accomplishes that. The hold time of 19 gives me 79 flashes per minute, which is as close as I could get to 80 FPM on the original Twin Sonic. 
If the code could be modified to add one more PWM state, I would just change the SeqData.inc to look like this: Code: control 1,31 hold 1 sdat 4,0,0,0,4,0,0,0 hold 1 sdat 4,1,0,0,4,1,0,0 hold 1 sdat 4,2,0,0,4,2,0,0 hold 1 sdat 4,3,0,0,4,3,0,0 hold 1 sdat 3,4,0,0,3,4,0,0 hold 1 sdat 2,4,0,0,2,4,0,0 hold 1 sdat 1,4,0,0,1,4,0,0 hold 1 sdat 0,4,0,0,0,4,0,0 hold 1 sdat 0,4,1,0,0,4,1,0 hold 1 sdat 0,4,1,0,0,4,1,0 hold 1 sdat 0,4,2,0,0,4,2,0 hold 1 sdat 0,4,3,0,0,4,3,0 hold 1 sdat 0,3,4,0,0,3,4,0 hold 1 sdat 0,2,4,0,0,2,4,0 hold 1 sdat 0,1,4,0,0,1,4,0 hold 1 sdat 0,0,4,0,0,0,4,0 hold 1 sdat 0,0,4,1,0,0,4,1 hold 1 sdat 0,0,4,2,0,0,4,1 hold 1 sdat 0,0,4,3,0,0,4,3 hold 1 sdat 0,0,3,4,0,0,3,4 hold 1 sdat 0,0,2,4,0,0,2,4 hold 1 sdat 0,0,1,4,0,0,1,4 hold 1 sdat 0,0,0,4,0,0,0,4 hold 1 sdat 1,0,0,4,1,0,0,4 hold 1 sdat 2,0,0,4,2,0,0,4 hold 1 sdat 3,0,0,4,3,0,0,4 hold 1 sdat 4,0,0,3,4,0,0,3 hold 1 sdat 4,0,0,2,4,0,0,2 hold 1 sdat 4,0,0,1,4,0,0,1 seqend That ends up being 29 mS long with putting a "1" in place of the "19" on the hold time, so the FPM would be a good bit less. Now that I've typed it out, it may not work the way I'd like it to. I guess that's why I'm asking you guys for help. What do you think would be the best approach? Last edited: #### rescue161 ##### Member Here is a video of the 12-LED with 4 LEDs lit at once. This is as smooth as I can get it with only 3 lit states. #### Mike - K8LH ##### Well-Known Member I'd like to recommend a Bit Angle Modulation (BAM) driver to provide more PWM 'steps' but the 4-MHz INTOSC on the '628A is too slow. Which PIC programmer do you have? Is the PIC16F628A the only PIC you can use? #### rescue161 ##### Member I'm just using a MiniPro to program. I have some PIC16F84A and PIC16F877A available as well. The ones I have the most are the 628A, hence why I went that route. I wanted to also make everything as small as possible so I could put everything on the same board as the LEDs. I just jumped right in to making PCBs at home and PIC programming all in one step and figured I could use what I had at home for programming. I had the MiniPro for programming ROMs for my old Pac-Man machine and for burning firmware for my repeaters, so I bought the PICs based off of what most people were using for their LED chaser circuits. #### Nigel Goodwin ##### Super Moderator I'm just using a MiniPro to program. I have some PIC16F84A and PIC16F877A available as well. The ones I have the most are the 628A, hence why I went that route. I wanted to also make everything as small as possible so I could put everything on the same board as the LEDs. I just jumped right in to making PCBs at home and PIC programming all in one step and figured I could use what I had at home for programming. I had the MiniPro for programming ROMs for my old Pac-Man machine and for burning firmware for my repeaters, so I bought the PICs based off of what most people were using for their LED chaser circuits. All those PIC's are seriously old, the datasheet I've got for the 628 is dated 2003, and the others date from the previous century. I used the 628 for my PIC tutorials, as at the time it fitted my basic idea nicely, having lot's of I/O and an internal oscillator. However, those days are LONG!! gone, and more modern devices have much better facilities, more memory, and run lot's faster. I'm currently doing a project using the little 8 pin 12F1840, and that runs at 32MHz using it's internal oscillator, 8 times as fast as the 628. 
If you want something of a similar size to the 628, I would suggest the 16F1847 (or the lesser-memory 16F1827), which are essentially the same core as the little 8-pin one, and I use lots of 1847/27s in products we make and sell. Following on from that, there are even more enhanced versions, with lots of extra cool peripherals, such as the 16F18426 (14 pin) or 16F18446 (20 pin), another two devices that we use in products we manufacture. These could let you use their extra hardware to make your project easier. I would also suggest you get a PICKIT4, or at least a PICKIT3 - they make life a lot easier, and connect directly to MPLABX.

#### rescue161
##### Member
Cool. So I should just scrap trying to edit this code and start over? I have been reading a lot about how to program the PIC and I understand the theory, but like I said, I don't know where to start. I didn't buy the PICKIT because there were too many arguments about which one was the best, and from my very short research, it looked like the best one for me was no longer made? So is the 4 better than the 3? If I remember correctly, they omitted some of the earlier versions' features on the newer PICKITs, but I am probably wrong. That was another thing. I was trying to use MPLAB X IDE, but reverted to using the last version of MPLAB IDE, because the newer X would not compile the code correctly. I got it working great on the old version, but again, I was probably doing something wrong on the X version. Thank you guys for the suggestions.

#### Nigel Goodwin
##### Super Moderator
> Cool. So I should just scrap trying to edit this code and start over? I have been reading a lot about how to program the PIC and I understand the theory, but like I said, I don't know where to start. I didn't buy the PICKIT because there were too many arguments about which one was the best, and from my very short research, it looked like the best one for me was no longer made? So is the 4 better than the 3? If I remember correctly, they omitted some of the earlier versions' features on the newer PICKITs, but I am probably wrong.

The 4 is 'best' in that it programs more new devices, it's faster, and can provide more power to the target circuit. The 3 often doesn't cover some modern devices, but does have some extra facilities (such as a simple logic analyser) - but as a programmer the 4 is superior. I've got both (and also a 2), and use them pretty interchangeably, but some boards won't program with the 3 as it can't supply enough power, and I then have to use the 4 instead. The 3s do seem very lacking on power capability, and it's often easier to program the chips out of the board and use an IC socket.

> That was another thing. I was trying to use MPLAB X IDE, but reverted to using the last version of MPLAB IDE, because the newer X would not compile the code correctly. I got it working great on the old version, but again, I was probably doing something wrong on the X version. Thank you guys for the suggestions.

I was loath to move to X, but had to eventually, and now I wouldn't want to go back.

#### rescue161
##### Member
Good deal. Looks like I'll just buy a PICKIT 4 then. Got any suggestions that may be better than Amazon or Ebay? I've heard that there are counterfeit clones out there that I should avoid. I hate that I wasted the money on out-dated chips, but I suppose it is par for the course.

#### Nigel Goodwin
##### Super Moderator
> Good deal. Looks like I'll just buy a PICKIT 4 then. Got any suggestions that may be better than Amazon or Ebay?
> I've heard that there are counterfeit clones out there that I should avoid. I hate that I wasted the money on out-dated chips, but I suppose it is par for the course.

I've still got 16C84's here - which was replaced by the 16F84 - which was replaced by the 16F84A - which was (essentially) replaced by the 16F628, etc. etc. etc. It's quite amazing that they still make the 16F84A, but they certainly charge a premium for it, as it's MUCH more expensive than more modern, much better devices. From RS it's £4.39 inc VAT; the 16F1827 is only £1.44 inc VAT and has 4 times the program memory. If you want to ensure you get a 'real' PICKIT4, then get it direct from MicroChip, or from RS Components, Farnell, Digikey etc. Mine came from RS Components.

#### rescue161
##### Member
Thank you again. As far as the code, should I still write it in assembly, or is there something better (easier)? It takes me a long time to pick up new things. My memory is not as good as it used to be.

#### Nigel Goodwin
##### Super Moderator
> Thank you again. As far as the code, should I still write it in assembly, or is there something better (easier)? It takes me a long time to pick up new things. My memory is not as good as it used to be.

Well, it's always been a controversial issue. Historically I've always been a HUGE supporter of assembler, and I still think everyone should start by learning the rudiments of assembler, as it forces you to understand the hardware. However, many of the more modern datasheets and application notes now give examples in C rather than assembler, and as XC8 is a free download (for the un-optimised version), it makes sense to use XC8 instead of assembler - and all my PIC programming is now done using XC8. I still don't like C, and I don't claim to be good at it - I spent ages the other day because I typed '==' instead of '=' (usually I do the opposite), and it's unhelpful with things like that - personally I'd much prefer a good Pascal version. However, as application notes aren't likely to be coming in Pascal, I'm sticking with C.

#### rescue161
##### Member
Is there a better way to do what I want to do instead of the posted code?

#### Nigel Goodwin
##### Super Moderator
Have you considered using Neopixel WS2812 LEDs? You can easily fade them and change colours as much as you want, and I have code adapted for the 16F18446 from a MicroChip Application Note and Arduino examples - it uses the CLC hardware, as WS2812 LED ICs require extremely fast control signals.

#### rescue161
##### Member
This is all new to me. And whatever makes it easy for me to make. I do like to play around with the code to see how everything works and to change things up if I decide to add another light to the mix. The original circuit was a single 555 controlling two 4017s. I had them timed together by tying pin 15 of each 4017 together. Even if they started out of sync, they were immediately re-sync'd after the first cycle. It worked well, but there were just way too many parts with all of the caps and transistors. All of this code tweaking reminds me of when I put my MAME cabinet together and I edited the MAME and front-end files.

#### Nigel Goodwin
##### Super Moderator
These are the LED ICs. You simply wire them in series, DOUT to DIN, and connect ground and power together - each colour, red, blue and green, can be set to any one of 256 levels, providing a wide range of options, and RAPID changes. The controller simply feeds the first DIN pin and the control signal is passed along all the LEDs.
This should give you plenty of scope for dimming and spinning.

#### rjenkinsgb
##### Well-Known Member
I've just written you a more versatile example multi-PWM setup. It's in C, I'm afraid, as I am many years out of date with small PIC assembly & I can't remember all the page select stuff. I have tried to make it understandable & avoided using C shortcut functions for clarity.

It uses a single PWM counter, incremented every loop of the program. Plus, a "step table" that defines the brightness for a single light at equal intervals through a cycle. The individual LEDs use offset constants arranged equally through that table, so e.g. when one LED is at step 0, another is at step 16, another at step 32 and so on. The step counter is added to the constant offset, so the LED brightnesses "rotate" through the step buffer.

After every PWM count increment, a function (subroutine) is called for each LED offset, to get the step and from that the PWM on/off decision. That is used to set or clear the appropriate output pin for that LED (or two LEDs if you are doubling up 180° apart). You can have as many separate outputs as you wish, using a suitable size step table; e.g. for six outputs, make it 60 long with offset constants 0, 10, 20, 30, 40 & 50.

C:
#include <16F18313.h>
#zero_ram
#DEVICE PIC16F18313 ICD=1
#USE DELAY(internal=4MHz)
#USE FAST_IO (A)

// Do all pin selects
// 01 = VDD +5V
// 02 = RA5
// 03 = RA4
// 04 = RA3 / MCLR Debug VPP
// 05 = RA2
// 06 = RA1 / ICSP CLK
// 07 = RA0 / ICSP DAT
// 08 = VSS 0V

//#include <stdio.h>
//#include <stdlib.h>

int8 pwmreg;        //address configuration
int8 stepcount;     //
int8 substep;       // PWM cycles per step increment; speed setting
int8 outimg;        //

// PWM range 0-x; a (power of 2) - 1 value
// Using 0 - 31
#define PWM_MAX 0x1f

// Brightness steps for a full rotation sequence of one light;
// eg. 4, 8 or 16 times the number of different-brightness lights, -1
#define STEP_MAX 64

// Output phase, equal offsets through the "step" cycle.
#define LED_P0 0
#define LED_P1 16
#define LED_P2 32
#define LED_P3 48

// Look-up table for brightness sequence for one light
// Brightness range 0 = PWM_MAX
unsigned int8 ltable[STEP_MAX] =
    {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 5, 7, 8, 9, 10, 12, 15, 17, 20, 22, 25, 28, 32,
     32, 28, 25, 22, 20, 17, 15, 12, 10, 9, 8, 7, 5, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};

// Function definition
int8 calc_pwm (unsigned int8);

/*
 * Main program
 */
int main(int argc, char** argv) {

    int8 ledout;

    outimg = 0;
    pwmreg = 0;
    stepcount = 0;
    substep = 0;

    // I/O Pin directions:
    // All out: TRIS = 0x0000 0000
    set_tris_a(0x00);

    // OSC = 32 MHz.
    // Timer source = osc / 4; 8 MHz.

    for(;;) {
        restart_wdt();

        pwmreg = pwmreg + 1;
        pwmreg = pwmreg & PWM_MAX;

        // Do one LED
        // Get the on/off state and set the appropriate output pin
        // Outputs high for on; LED & resistor to 0V.
        ledout = calc_pwm(LED_P0);
        if(ledout == 1) {
            outimg = outimg | 0x04;
        }
        else {
            outimg = outimg & ~0x04;
        }

        // The same sequence for the next:
        ledout = calc_pwm(LED_P1);
        if(ledout == 1) {
            outimg = outimg | 0x08;
        }
        else {
            outimg = outimg & ~0x08;
        }

        // And again for each pin, passing the phase offset
        // then setting the appropriate pin
        ledout = calc_pwm(LED_P2);
        if(ledout == 1) {
            outimg = outimg | 0x10;
        }
        else {
            outimg = outimg & ~0x10;
        }

        ledout = calc_pwm(LED_P3);
        if(ledout == 1) {
            outimg = outimg | 0x20;
        }
        else {
            outimg = outimg & ~0x20;
        }

        output_a(outimg);

        // At each 0 of the PWM reg, count the substep delay
        // and if that overflows, move to the next sequence step.
        if(pwmreg == 0) {
            //
            // Substep limit sets the overall cycle speed
            substep = substep + 1;
            if(substep > 4) {
                substep = 0;
                stepcount = stepcount + 1;
                if(stepcount >= STEP_MAX) {
                    stepcount = 0;
                }
            }
        }
    }

    return (1);
}

int8 calc_pwm (unsigned int8 lp) {

    unsigned int8 x, y;

    // Work out the cycle stage for the lamp, then get the table brightness
    // and compare to the PWM count to determine on/off
    x = stepcount + lp;

    // Wrap the result back to the start of the table;
    // if it is beyond the end.
    if(x >= STEP_MAX) {
        x = x - STEP_MAX;
    }

    // Get the brightness value from the table
    y = ltable[x];

    // Compare to present PWM value to determine on or off
    if(y > pwmreg) {
        return 1;
    }
    return 0;
}

#### rjenkinsgb
##### Well-Known Member
Video of that running; the board is not ideal, it's a test rig for another project and just an 8-pin PIC; plus three of the six I/O pins are used for the ICD3 connection - but I think it demonstrates the principle. (Without the debug connections, you could use one of those for up to six LED outputs though.)

Last edited:

#### rescue161
##### Member
Oh man, thank you very much! I'm going to put in an order tonight for a PICKIT 4 and some of the mentioned PICs. Is there anything else you guys think I should get on this order? Is there anything else required to use the PICKIT-4? It looks like it only comes with a USB cable. I have in my cart the following:

1 - PICKIT-4
10 - PIC16F18313-E/P
4 - PIC16F18446-E/P
4 - PIC12F1840-E/P
4 - PIC16F1847-E/P
4 - PIC16F18426-E/P

I always buy extra just in case things go well and then I have more for other projects, or if I fail and the magic smoke gets let out.

Last edited:
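Editor's note, not part of the original thread: for anyone trying to follow the assembly quoted at the top, below is a rough sketch of the "vertical counter" PWM idea in plain standard C (my own variable names and check-before-increment ordering, written for a PC compiler rather than CCS C on a PIC, so treat it as an illustration of the technique, not a drop-in replacement). Each of the five planes holds one bit of every channel's 5-bit counter, all eight counters are incremented at once with bitwise half-adder logic, and a channel latches on once its counter reads 11111, which appears to be what the ANDing of vc0..vc4 in _pwm does. Under a plain binary increment, a reload value of r works out to roughly r/31 duty, so adding a fifth brightness level would come down to picking one more reload value between 0 and 31.

C:
#include <stdint.h>
#include <stdio.h>

/* 8 parallel 5-bit counters stored as bit planes: bit i of cnt[b]
   is bit b of channel i's counter (the "vertical counter" trick). */
static uint8_t cnt[5];
static uint8_t reload[5];     /* per-plane reload pattern             */
static uint8_t pwmOutput;     /* latched channel outputs (port image) */

/* Build the reload planes from one 5-bit duty value per channel. */
static void set_duty(const uint8_t duty[8])
{
    for (int b = 0; b < 5; b++) {
        reload[b] = 0;
        for (int ch = 0; ch < 8; ch++)
            if (duty[ch] & (1u << b))
                reload[b] |= (uint8_t)(1u << ch);
    }
}

static void reload_counters(void)
{
    for (int b = 0; b < 5; b++)
        cnt[b] = reload[b];
    pwmOutput = 0;
}

/* One PWM tick: latch any channel whose counter reads 11111,
   then increment all 8 counters in parallel. */
static uint8_t pwm_tick(void)
{
    pwmOutput |= cnt[0] & cnt[1] & cnt[2] & cnt[3] & cnt[4];

    uint8_t carry = 0xFF;             /* add 1 to every channel     */
    for (int b = 0; b < 5; b++) {
        uint8_t sum = cnt[b] ^ carry; /* half-adder per bit plane   */
        carry &= cnt[b];              /* carry into the next plane  */
        cnt[b] = sum;
    }
    return pwmOutput;                 /* what you'd write to the port */
}

int main(void)
{
    /* Hypothetical duty values, 0..31 per channel. */
    const uint8_t duty[8] = { 0, 1, 4, 8, 16, 24, 31, 2 };
    int on[8] = { 0 };

    set_duty(duty);
    reload_counters();

    for (int t = 0; t < 31; t++) {    /* one 31-tick PWM period     */
        uint8_t out = pwm_tick();
        for (int ch = 0; ch < 8; ch++)
            on[ch] += (out >> ch) & 1;
    }
    for (int ch = 0; ch < 8; ch++)
        printf("channel %d: on %d/31 ticks\n", ch, on[ch]);
    return 0;
}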
web
auto_math_text
# 40 CFR § 1065.659 - Removed water correction.

§ 1065.659 Removed water correction.

(a) If you remove water upstream of a concentration measurement, x, correct for the removed water. Perform this correction based on the amount of water at the concentration measurement, xH2O[emission]meas, and at the flow meter, xH2Oexh, whose flow is used to determine the mass emission rate or total mass over a test interval. For continuous analyzers downstream of a sample dryer for transient and ramped-modal cycles, you must apply this correction on a continuous basis over the test interval, even if you use one of the options in § 1065.145(e)(2) that results in a constant value for xH2O[emission]meas because xH2Oexh varies over the test interval. For batch analyzers, determine the flow-weighted average based on the continuous xH2Oexh values determined as described in paragraph (c) of this section. For batch analyzers, you may determine the flow-weighted average xH2Oexh based on a single value of xH2Oexh determined as described in paragraphs (c)(2) and (3) of this section, using flow-weighted average or batch concentration inputs.

(b) Determine the amount of water remaining downstream of a sample dryer and at the concentration measurement using one of the methods described in § 1065.145(e)(2). If you use a sample dryer upstream of an analyzer and if the calculated amount of water remaining downstream of the sample dryer and at the concentration measurement, xH2O[emission]meas, is higher than the amount of water at the flow meter, xH2Oexh, set xH2O[emission]meas equal to xH2Oexh. If you use a sample dryer upstream of storage media, you must be able to demonstrate that the sample dryer is removing water continuously (i.e., xH2Oexh is higher than xH2O[emission]meas throughout the test interval).

(c) For a concentration measurement where you did not remove water, you may set xH2O[emission]meas equal to xH2Oexh. You may determine the amount of water at the flow meter, xH2Oexh, using any of the following methods:

(1) Measure the dewpoint and absolute pressure and calculate the amount of water as described in § 1065.645.

(2) If the measurement comes from raw exhaust, you may determine the amount of water based on intake-air humidity, plus a chemical balance of fuel, DEF, intake air, and exhaust as described in § 1065.655.

(3) If the measurement comes from diluted exhaust, you may determine the amount of water based on intake-air humidity, dilution air humidity, and a chemical balance of fuel, DEF, intake air, and exhaust as described in § 1065.655.

(d) Perform a removed water correction to the concentration measurement using the following equation:

$$x = x_{\mathrm{[emission]meas}} \cdot \left( \frac{1 - x_{\mathrm{H2Oexh}}}{1 - x_{\mathrm{H2O[emission]meas}}} \right) \qquad \text{Eq. 1065.659-1}$$

Example:

$$
\begin{aligned}
x_{\mathrm{COmeas}} &= 29.0\ \mu\mathrm{mol/mol} \\
x_{\mathrm{H2OCOmeas}} &= 8.601\ \mathrm{mmol/mol} = 0.008601\ \mathrm{mol/mol} \\
x_{\mathrm{H2Oexh}} &= 34.04\ \mathrm{mmol/mol} = 0.03404\ \mathrm{mol/mol} \\
x_{\mathrm{CO}} &= 29.0 \cdot \left( \frac{1 - 0.03404}{1 - 0.008601} \right) = 28.3\ \mu\mathrm{mol/mol}
\end{aligned}
$$

[73 FR 37335, June 30, 2008, as amended at 76 FR 57462, Sept. 15, 2011; 79 FR 23804, Apr. 28, 2014; 86 FR 34566, June 29, 2021]
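As a quick cross-check of the worked example in paragraph (d), here is a minimal sketch in C (the function name and layout are mine, not part of the regulation) that applies Eq. 1065.659-1 to the example values:

C:
#include <stdio.h>

/* Removed water correction, Eq. 1065.659-1:
   x = x_meas * (1 - xH2Oexh) / (1 - xH2Omeas)
   All water amounts are mole fractions in mol/mol. */
static double removed_water_correction(double x_meas,
                                        double xH2Oexh,
                                        double xH2Omeas)
{
    return x_meas * (1.0 - xH2Oexh) / (1.0 - xH2Omeas);
}

int main(void)
{
    /* Example values from this section: CO measured at 29.0 umol/mol. */
    double xCO = removed_water_correction(29.0, 0.03404, 0.008601);
    printf("xCO = %.1f umol/mol\n", xCO);   /* prints 28.3 */
    return 0;
}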
web
auto_math_text
If you have problems during the execution of MRCC, please attach the output with an adequate description of your case as well as the following:
• the way mrcc was invoked
• the way build.mrcc was invoked
• the output of build.mrcc
• compiler version (for example: ifort -V, gfortran -v)
• blas/lapack versions
• as well as gcc and glibc versions
This information really helps us during troubleshooting :)

# Dirac interface and symmetry groups

2 years 4 months ago #1015 by Anna Z

Hello,

I'm trying to use the Dirac and MRCC combination with the help of the dirac_mointegral_export utility from the MRCC package. I'm setting the symmetry in the Dirac .mol file. In the MINP file I usually set symm=off, though it does not seem to affect the problem I'm explaining below.

If I use C2v symmetry for computations in Dirac, then the computation in MRCC works perfectly. However, if I choose Cs symmetry (even for the same molecule), the reference determinant energy in MRCC is different from the total SCF energy in Dirac, and the CC computation either converges badly or does not converge at all. If I try to do the computations setting C1 symmetry in Dirac, MRCC fails immediately with the error "Check the inputfile for the no. g/u orbitals".

I suspect that the problem is that the reference determinant is transferred from Dirac to MRCC incorrectly. On the other hand, my colleague pointed out that two-electron integrals in Cs and C1 symmetry can be complex, whereas MRCC assumes that they are real. Is it possible to fix this problem by an appropriate change of the MINP and fort.55 files? Or is it true that the Dirac+MRCC combination can handle only molecular symmetries with real fermion irreps (i.e. D2h, D2, C2v)?

I attach Dirac out files and MRCC input and output files for Cs and C1 symmetries.
web
auto_math_text
ChIP-seq differential binding analysis tools

0 0
Entering edit mode
20 months ago
Jingyue ▴ 30

Hello all,

Do you recommend overlapping two or more differential peak calling tools to get more stringent/accurate differential peaks? Thank you!

ChIP-Seq Diffbind Csaw • 411 views

0
Entering edit mode

I do not see why you would want to do that. It only makes things more complicated. If you want to be more stringent, it is better to lower the FDR cutoff from, say, 10% to 5% or 1%, or to test against a certain fold change, e.g. 1.5, using glmTreat in csaw (the actual function comes from edgeR). Combining different tools is tricky, as one would need to ensure that normalization, filtering, etc. are more or less identical to have a meaningful analysis.
web
auto_math_text
# Filed Under #undiscovered_extinctions

### Estimating undiscovered extinctions

Have you ever wondered how many species were lost before we had the chance to discover them? In a paper now out in Conservation Biology, we estimated just that, for plant species in Singapore. This paper follows from the Chisholm Lab’s related work on Singapore birds and butterflies. The orchid species Grammatophyllum speciosum has not been recorded in Singapore since 1918. Photo credit: Cerlin Ng. All over the world, many species remain undiscovered while both known and unknown species continue to go extinct. This is particularly true in the tropics, where biodiversity is high and development continues apace. Singapore provides...

### Extinction of undiscovered butterflies + tutorial

Meryl Theng just had a new paper published in Biological Conservation, where she estimated that 46% of Singapore’s butterfly species have been extirpated since 1854. The special thing about this estimate is that it includes all species that existed, including species that went extinct before we had a chance to discover them. The trick to estimating undiscovered extinctions is the SEUX model. There is a nice write-up about the paper on Ryan’s blog. The paper has also received a good response in the press - The Straits Times and The Star have covered it - and it is generating some...

### Relationship between species discovery and extinction probability

What is the relationship between a plant’s historical probability of having been discovered and the probability that it went extinct? All else being equal, species with low abundance are less likely to be collected, and low abundance is both theoretically (e.g. McCarthy et al., 2014) and empirically cited as a good predictor of extinction probability. The empirical relationship has been observed both in general (McKinney, 1997) and specifically for plants (Sutton and Morgan, 2009; Matthies et al., 2004). It has also been found in the context of habitat fragmentation (Table 2 of Henle et al., 2004), which is particularly relevant...

### Moments for a bivariate beta distribution

A common choice for a probability distribution of a probability is the beta distribution. It has the required support between 0 and 1, and with its two parameters we can obtain a pretty wide qualitative range for the probability density function. What should we do if we want to create correlated probabilities? We might look for some kind of multivariate generalisation of the beta distribution, one that can describe pretty flexible correlation between the variables. The generalised Dirichlet distribution allows us to describe correlated probabilities; however, it has some restrictions on the support of the joint probability density function that...

### Estimating undetected extinctions

The purpose of this blog post is to give a simplified account of how the Chisholm et al. (2016) method works for estimating undetected extinctions. To estimate the historical extinction rate within a taxonomic group, a naive approach would be to divide the number of species known to be extinct by the total number of species. However, this does not account for the historical process of species discovery and temporal fluctuations in the extinction rate. The approach can be improved by estimating the cumulative probability of persistence over the time period of interest. This is equivalent to accounting for species...
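To make the naive estimator mentioned in the last post concrete (the notation and numbers here are mine, purely for illustration): if $E$ of the $N$ described species in a group are known to be extinct, the naive extinction estimate is

$$\hat{p}_{\mathrm{ext}} = \frac{E}{N},$$

so 40 known extinctions among 400 described species would give 10%. Species that vanished before ever being collected appear in neither $E$ nor $N$, which is why the naive figure can understate the true fraction lost and why the undiscovered-extinctions methods above also model the discovery process.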
web
auto_math_text
1 answer Provide two arguments in favor of IPO underpricing and two arguments against IPO underpricing. Question: Provide two arguments in favor of IPO underpricing and two arguments against IPO underpricing. Answers Similar Solved Questions 1 answer 2. ' Consider a random variable X with pdf . 2x 10. 0<x<1 Otherwise Let Y=3-.... 2. ' Consider a random variable X with pdf . 2x 10. 0<x<1 Otherwise Let Y=3-. Find the pdf ofy..... 1 answer A company maintains three offices in a certain region, each staffed by two employees. Information concerning... A company maintains three offices in a certain region, each staffed by two employees. Information concerning yearly salaries (1000s of dollars) is as follows: Office 1 1 2 2 3 3 Employee 1 2 3 4 5 6 Salary 34.7 38.6 35.2 38.6 30.8 34.7 (a) Suppose two of these employees are randomly selected from am... 1 answer Which are the different ways of expressing the gains from trade that countries archieve? Which are the different ways of expressing the gains from trade that countries archieve?... 1 answer A perfectly competitive firm will earn a profit in the short run when it produces the... A perfectly competitive firm will earn a profit in the short run when it produces the profit-maximizing quantity of output and the price is: 1) greater than marginal cost. 2) less than marginal cost. 3) less than average variable cost. 4) greater than... 1 answer Time 0 Time 1 Time 2 Time 3 Project A - 10,000 5,000 4,000 3,000 Project... Time 0 Time 1 Time 2 Time 3 Project A - 10,000 5,000 4,000 3,000 Project B - 10,000 4,000 3,000 10,000 If Wise Guy Inc is choosing one of the above mutually exclusive projects (Project A or Project B), given a discount rate of 7%, which should the company choose? O A. Project A B. Project B OC. Neit... 1 answer 3. (5 points) From the NMR spectrum below, determine the structure of the compound. The molecular... 3. (5 points) From the NMR spectrum below, determine the structure of the compound. The molecular formula is CH-BrO There are no 'H NMR signals outside of the region shown. 4.0 3.0... 1 answer Gabel School Balance Sheet As of September 30, 2019 30-Sep-19 ASSETS Current Assets Checking/Savi... BALANCE SHEET Gabel School Balance Sheet As of September 30, 2019 30-Sep-19 ASSETS Current Assets Checking/Savings 1. Bank Accounts 1016. Money Market 1020. Operating 1025. Fire Account 1081. Petty Cash Office 1082. Petty Cash 196,411.93 949,258.21 231,253.86 1,627.00 1,700.00 1,380,251.00 1,380,251... 1 answer Answer one question per page, double space: How are foreign exchange rates affected by differences in... Answer one question per page, double space: How are foreign exchange rates affected by differences in the interest rates prevailing in various countries? In foreign exchange, what are spot and forward transactions? How do they differ? Please provide your discussion about the risks associated with f... 1 answer 7.      Q Corporation and R Inc. are two companies with very similar characteristics. The only difference between the two companies is that Q Corporation is an unlevered firm, and R Inc. is a levered firm with debt of $3.5 million and cost of debt of 10%. Both companies have earn... 1 answer A 96.5-kg person stands on a scale in an elevator. What is the apparent weight when... A 96.5-kg person stands on a scale in an elevator. 
What is the apparent weight when the elevator is (a) accelerating upward with an acceleration of 1.57 m/s2, (b) moving upward at a constant speed, and (c) accelerating downward with an acceleration of 1.79 m/s2?... 1 answer The following information is provided for Molly Corporation: Estimated Sales: August$ 60,000 September 80,000 October... The following information is provided for Molly Corporation: Estimated Sales: August \$ 60,000 September 80,000 October 70,000 November 50,000 December 100,000 20% of sales are cash sales. Of the credit sales, 60% is collected in the month of sales, 30% in the month following the sale and 10% in the ... 1 answer If an object is moving at 3 m/s and accelerates to 19 m/s over 2 seconds, what was the object's rate of acceleration? If an object is moving at 3 m/s and accelerates to 19 m/s over 2 seconds, what was the object's rate of acceleration?... 1 answer This one is easy… think about the prospect of running a small business. Is that something... This one is easy… think about the prospect of running a small business. Is that something you think you would enjoy? Why or why not? If it does appeal to you, what field/industry might be of interest to you?... 1 answer - Heparin sodrum is infusing na the IV Puh I 23mL per har. Hepann infusion 25,000... - Heparin sodrum is infusing na the IV Puh I 23mL per har. Hepann infusion 25,000 units por some of now a saline INS) How many units per how are infusing... 1 answer Question 23 Not yet graded /0 pts Given a Binary Search Tree that is populated with... Question 23 Not yet graded /0 pts Given a Binary Search Tree that is populated with numerical data, write an algorithm that will determine the sum, average, least and greatest of the values maintained in the tree. Express your algorithm in Java-like pseudodode. You may use any of the operations supp... 1 answer Q3. According to this dataset, please draw a stem-and-leaf display. 30.0 31.3 32.5 33.5 36.5 31.1... Q3. According to this dataset, please draw a stem-and-leaf display. 30.0 31.3 32.5 33.5 36.5 31.1 31.4 32.7 33.4 37.1 36.5 31.5 32.8 34.2 32.6 34.7 39.4 32.9 34.7 32.3 34.9 39.7 32.4 34.1 38.9 30.8 39.6 32.3 35.0 38.2... 1 answer Question 1 The following statements arise from various theories. In each case, identify the independent variable(s)... Question 1 The following statements arise from various theories. In each case, identify the independent variable(s) by typing the letter of your intended answer in place of x in the coll immediately below each question (remember to keep the quotation marks) **(a) Cholera infection is caused by drink...
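Two of the physics exercises listed above reduce to one-line calculations; as a worked illustration (using $g \approx 9.81\ \mathrm{m/s^2}$, which the site's own answers may state differently):

$$a = \frac{\Delta v}{\Delta t} = \frac{19\ \mathrm{m/s} - 3\ \mathrm{m/s}}{2\ \mathrm{s}} = 8\ \mathrm{m/s^2},$$

and for the 96.5-kg person on the elevator scale, the apparent weight is $N = m(g + a)$ while accelerating upward, $mg$ at constant speed, and $m(g - a)$ while accelerating downward, i.e. roughly $1.10 \times 10^{3}$ N, $947$ N, and $774$ N for the stated accelerations.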
web
auto_math_text
# Online Strangeness in Quark Matter Conference 2021

May 17 – 22, 2021, US/Eastern timezone

## NCQ scaling of $f_{0}(980)$ elliptic flow in 200 GeV Au+Au collisions by STAR and its constituent quark content

May 19, 2021, 9:50 AM, 20m, Room A (Zoom)
zoom co-host: Niveditha Ramasubramanian https://stonybrook.zoom.us/j/99346269726
Experimental talk

### Speaker

Dr Jie Zhao (Purdue University)

### Description

Searching for exotic state particles and studying their properties have furthered our understanding of quantum chromodynamics (QCD). The $f_{0}(980)$ resonance is an exotic state with a relatively high production rate in relativistic heavy-ion collisions, decaying primarily into $\pi\pi$. Currently the structure and quark content of the $f_{0}(980)$ are unknown, with several theoretical predictions: a $q\bar{q}$ state, a $qq\bar{q}\bar{q}$ state, a $K\bar{K}$ molecule state, or a gluonium state. We report the first $f_{0}(980)$ elliptic flow ($v_{2}$) measurement from 200 GeV Au+Au collisions at STAR. The transverse momentum dependence of $v_{2}$ is examined and compared to those of other hadrons (baryons and mesons). The empirical number-of-constituent-quark (NCQ) scaling is used to investigate the constituent quark content of the $f_{0}(980)$ [1], which may potentially address an important question in QCD. We will report the findings of our investigation and discuss its implications.

[1] A. Gu, T. Edmonds, J. Zhao, F. Wang, Phys. Rev. C 101, 024908 (2020), arXiv:1902.07152

Collaboration: STAR Collaboration
web
auto_math_text
Invited talk (Relazione su invito)

# Unexpected screening of field-effect in graphene-$MoS_{2}$ van der Waals heterojunctions

##### Roddaro S.

Friday 16/09, 09:00 - 13:30, Aula D - Marianna Ciccone, II - Physics of Matter (Fisica della materia)

Van der Waals heterostructures play a key role in devices built out of two-dimensional materials, but their response to the field effect can be rather non-trivial, due to their low density of states. Unfortunately, a direct investigation of these effects is often complex, since the layers forming the heterojunction typically cannot be probed independently. In my contribution, I will illustrate back-gated $MoS_{2}$ field-effect transistor (FET) architectures integrating multiple graphene contacts, where each contact can act as an additional FET. This allows an independent probing and correlation of the conducting properties of bare $MoS_{2}$ and of the graphene contact regions. Experiments reveal how a $MoS_{2}$ overlayer can significantly suppress the n-side field effect in graphene, even in a configuration where it would not be expected to do so. I will show, thanks to $ab initio$ calculations, that this effect can be understood as caused by deep traps associated with sulfur vacancies, which counterintuitively impact the field effect. Perspectives for the investigation of generic heterojunctions are discussed.
web
auto_math_text
Blackholes Forum Message Forums: Atm · Astrophotography · Blackholes · Blackholes2 · CCD · Celestron · Domes · Education Eyepieces · Meade · Misc. · God and Science · SETI · Software · UFO · XEphem Be the first pioneers to continue the Astronomy Discussions at our new Astronomy meeting place...The Space and Astronomy Agora Who Is J. - 'Clerk-Secretary' - Rawlins, Anyway? Forum List | Follow Ups | Post Message | Back to Thread Topics | In Response ToPosted by Kent Benjamin Robertson on July 2, 2004 09:08:42 UTC You Need To Read Some New Physics Forum List | Follow Ups | Post Message | Hide secretary | Back to Thread Topics | In Response To Posted by Clerk on February 17, 2004 13:12:06 UTC Dear Kent Benjamin Robertson, All the physics you quote (*narrate and originate) is several years old. There is a revolution going on in physics right now where string theory is being replaced by loop quantum gravity and process physics, although there is some evidence that LQG can derive string theory. It already can derive quantum mechanics. It derives what you say about Planck. It also derives area, and volume, and the big bang, not capitalized here as it did not start at a point. So I will give you all the relevant recent publications below. The big names are Rovelli, Smolin, Theiman, Bojowald. But Bojowald has done the best recent work on cosmology. Then I will give you the work of Cahill which I believe is the phenomonology of LQG, even though the LQG people do not realize it yet. Then I will give you selections from others. J Rawson ------------------------------------------ Loop Quantum Cosmology: Recent Progress Authors: Martin Bojowald Comments: 17 pages, 2 figures, Plenary talk at ICGC 04, Cochin, India Report-no: AEI-2004-017 Aspects of the full theory of loop quantum gravity can be studied in a simpler context by reducing to symmetric models like cosmological ones. This leads to several applications where loop effects play a significant role when one is sensitive to the quantum regime. As a consequence, the structure of and the approach to classical singularities are very different from general relativity: The quantum theory is free of singularities, and there are new phenomenological scenarios for the evolution of the very early universe including inflation. We give an overview of the main effects, focussing on recent results obtained by several different groups. Full-text: PostScript, PDF, or Other formats http://arxiv.org/abs/gr-qc/0402053 Loop Quantum Cosmology and Boundary Proposals Authors: Martin Bojowald, Kevin Vandersloot Comments: 18 pages, 5 figures, invited parallel talk at Xth Marcel Grossmann meeting, July 20-26, 2003, Rio de Janeiro Report-no: AEI-2003-114 For many years, the most active area of quantum cosmology has been the issue of choosing boundary conditions for the wave function of a universe. Recently, loop quantum cosmology, which is obtained from loop quantum gravity, has shed new light on this question. In this case, boundary conditions are not chosen by hand with some particular physical intuition in mind, but they are part of the dynamical law. It is then natural to ask if there are any relations between these boundary conditions and the ones provided before. 
After discussing the technical foundation of loop quantum cosmology which leads to crucial differences to the Wheeler-DeWitt quantization, we compare the dynamical initial conditions of loop quantum cosmology with the tunneling and the no-boundary proposal and explain why they are closer to the no-boundary condition. We end with a discussion of recent developments and several open problems of loop quantum cosmology. Full-text: PostScript, PDF, or Other formats http://arxiv.org/abs/gr-qc/0312103 Homogeneous Loop Quantum Cosmology: The Role of the Spin Connection Authors: Martin Bojowald, Ghanashyam Date, Kevin Vandersloot Comments: revtex4, 36 pages, 10 figures Report-no: IMSc/2003/04/06, CGPG-03/10-5, AEI-2003-085 Homogeneous cosmological models with non-vanishing intrinsic curvature require a special treatment when they are quantized with loop quantum cosmological methods. Guidance from the full theory which is lost in this context can be replaced by two criteria for an acceptable quantization, admissibility of a continuum approximation and local stability. A quantization of the corresponding Hamiltonian constraints is presented and shown to lead to a locally stable, non-singular evolution compatible with almost classical behavior at large volume. As an application, the Bianchi IX model and its modified behavior close to its classical singularity is explored. Full-text: PostScript, PDF, or Other formats 1. astro-ph/0309478 [abs, ps, pdf, other] : Title: Quantum Gravity and the Big Bang Authors: Martin Bojowald Comments: 6 pages, invited talk at the conference "Where Cosmology and Fundamental Physics Meet" at IUFM, Marseille, June 23 - 26, 2003 2. gr-qc/0307083 [abs, ps, pdf, other] : Title: Consistency Conditions for Fundamentally Discrete Theories Authors: Martin Bojowald, Ghanashyam Date 3. gr-qc/0306008 [abs, ps, pdf, other] : Title: Cosmological applications of loop quantum gravity Authors: Martin Bojowald, Hugo A. Morales-Tecotl Comments: 42 pages, 4 figures, written for the proceedings of the Fifth Mexican School (DGFM): The Early Universe and Observational Cosmology 4. gr-qc/0305069 [abs, ps, pdf, other] : Title: Initial Conditions for a Universe Authors: Martin Bojowald Comments: 7 pages, this essay was awarded First Prize in the Gravity Research Foundation Essay Contest 2003 5. hep-th/0304252 [abs, ps, pdf, other] : Title: Classical Solutions for Poisson Sigma Models on a Riemann surface Authors: Martin Bojowald, Thomas Strobl Subj-class: High Energy Physics - Theory; Symplectic Geometry Journal-ref: JHEP 0307 (2003) 002 6. gr-qc/0304074 [abs, ps, pdf, other] : Title: Mathematical structure of loop quantum cosmology Authors: Abhay Ashtekar, Martin Bojowald, Jerzy Lewandowski Subj-class: General Relativity and Quantum Cosmology; Mathematical Physics 7. gr-qc/0303073 [abs, ps, pdf, other] : Title: Homogeneous Loop Quantum Cosmology Authors: Martin Bojowald Journal-ref: Class.Quant.Grav. 20 (2003) 2595-2615 8. gr-qc/0303072 [abs, ps, pdf, other] : Title: Loop Quantum Cosmology, Boundary Proposals, and Inflation Authors: Martin Bojowald, Kevin Vandersloot Journal-ref: Phys.Rev. D67 (2003) 124023 9. gr-qc/0303026 [abs, ps, pdf, other] : Title: Spin Foam Quantization and Anomalies Authors: Martin Bojowald, Alejandro Perez 10. gr-qc/0207038 [abs, ps, pdf, other] : Title: Isotropic Loop Quantum Cosmology with Matter Authors: Martin Bojowald, Franz Hinterleitner Journal-ref: Phys.Rev. D66 (2002) 104003 11. 
gr-qc/0206054 [abs, ps, pdf, other] : Title: Inflation from Quantum Geometry Authors: Martin Bojowald Journal-ref: Phys.Rev.Lett. 89 (2002) 261301 12. gr-qc/0206053 [abs, ps, pdf, other] : Title: Quantization Ambiguities in Isotropic Quantum Geometry Authors: Martin Bojowald Journal-ref: Class.Quant.Grav. 19 (2002) 5113-5230 13. gr-qc/0202077 [abs, ps, pdf, other] : Title: Isotropic Loop Quantum Cosmology Authors: Martin Bojowald Journal-ref: Class.Quant.Grav. 19 (2002) 2717-2742 14. hep-th/0112074 [abs, ps, pdf, other] : Title: Poisson Geometry in Constrained Systems Authors: Martin Bojowald, Thomas Strobl Comments: 41 pages, more detailed abstract in paper; v2: minor corrections and an additional reference Subj-class: High Energy Physics - Theory; Symplectic Geometry 15. gr-qc/0105113 [abs, ps, pdf, other] : Title: The Semiclassical Limit of Loop Quantum Cosmology Authors: Martin Bojowald Comments: 10 pages Journal-ref: Class.Quant.Grav. 18 (2001) L109-L116 16. gr-qc/0105067 [abs, ps, pdf, other] : Title: The Inverse Scale Factor in Isotropic Quantum Geometry Authors: Martin Bojowald Journal-ref: Phys.Rev. D64 (2001) 084018 17. gr-qc/0104072 [abs, ps, pdf, other] : Title: Dynamical Initial Conditions in Quantum Cosmology Authors: Martin Bojowald Journal-ref: Phys.Rev.Lett. 87 (2001) 121301 18. gr-qc/0102069 [abs, ps, pdf, other] : Title: Absence of Singularity in Loop Quantum Cosmology Authors: Martin Bojowald Journal-ref: Phys.Rev.Lett. 86 (2001) 5227-5230 19. gr-qc/0101061 [abs, ps, pdf, other] : Title: Symmetric States in Quantum Geometry Authors: M. Bojowald, H. A. Kastrup Comments: 9 pages, talk at the Ninth Marcel Grossmann Meeting, Rome, July 2-8, 2000 23. gr-qc/0008054 [abs, ps, pdf, other] : Title: Angular Momentum in Loop Quantum Gravity Authors: Martin Bojowald 24. gr-qc/0008053 [abs, ps, pdf, other] : Title: Loop Quantum Cosmology IV: Discrete Time Evolution Authors: Martin Bojowald Journal-ref: Class.Quant.Grav. 18 (2001) 1071-1088 25. gr-qc/0008052 [abs, ps, pdf, other] : Title: Loop Quantum Cosmology III: Wheeler-DeWitt Operators Authors: Martin Bojowald Journal-ref: Class.Quant.Grav. 18 (2001) 1055-1070 26. quant-ph/9912048 [abs, ps, pdf, other] : Title: Symplectic Cuts and Projection Quantization Authors: Martin Bojowald, Thomas Strobl Comments: 12 pages, v2: additional examples and a new reference to related work Subj-class: Quantum Physics; Symplectic Geometry Journal-ref: Int.J.Mod.Phys. D12 (2003) 713-725 28. gr-qc/9910104 [abs, ps, pdf, other] : Title: Loop Quantum Cosmology II: Volume Operators Authors: Martin Bojowald Journal-ref: Class.Quant.Grav. 17 (2000) 1509-1526 29. gr-qc/9910103 [abs, ps, pdf, other] : Title: Loop Quantum Cosmology I: Kinematics Authors: Martin Bojowald Journal-ref: Class.Quant.Grav. 17 (2000) 1489-1508 30. hep-th/9908170 [abs, ps, pdf, other] : Title: Abelian BF-Theory and Spherically Symmetric Electromagnetism Authors: Martin Bojowald Comments: 21 pages, LaTeX2e, v2: minor corrections in some formulas and a new reference Journal-ref: J.Math.Phys. 41 (2000) 4313-4329 31. quant-ph/9908079 [abs, ps, pdf, other] : Title: Group Theoretical Quantization and the Example of a Phase Space S^1 x R^+ Authors: Martin Bojowald, Thomas Strobl Journal-ref: J.Math.Phys. 41 (2000) 2537-2567 32. hep-th/9907043 [abs, ps, pdf, other] : Title: The Area Operator in the Spherically Symmetric Sector of Loop Quantum Gravity Authors: M. Bojowald, H.A. Kastrup (RWTH Aachen, Germany) Journal-ref: Class.Quant.Grav. 17 (2000) 3009-3043 33. 
hep-th/9907042 [abs, ps, pdf, other] : Title: Quantum Symmetry Reduction for Diffeomorphism Invariant Theories of Connections Authors: M. Bojowald, H.A. Kastrup (RWTH Aachen, Germany) Journal-ref: Class.Quant.Grav. 17 (2000) 3009-3043 34. gr-qc/9906105 [abs, ps, pdf, other] : Title: Group Theoretical Quantization of a Phase Space S^1 x R^+ and the Mass Spectrum of Schwarzschild Black Holes in D Space-Time Dimensions Authors: M. Bojowald, H.A. Kastrup, F. Schramm, T. Strobl (RWTH Aachen, Germany) Comments: 45 pages, Latex; version accepted for publication in Phys. Rev. D (Refs. added, small changes in the introduction, no changes of results) Journal-ref: Phys.Rev. D62 (2000) 044026 ------------------------------------------- Process Physics Associate Professor Reg Cahill Christopher Klinger Susan Gunner Kirsty Kitto A new paradigm for the modelling of reality is currently being developed called Process Physics. In Process Physics we start from the premise that the limits to logic, which are implied by Gödel's incompleteness theorems, mean that any attempt to model reality via a formal system is doomed to failure. Instead of formal systems we use a process system, which uses the notions of self-referential noise and self-organised criticality to create a new type of information-theoretic system that is realising both the current formal physical modelling of reality but is also exhibiting features such as the direction of time, the present moment effect and quantum state entanglement (including EPR effects, nonlocality and contextuality), as well as the more familiar formalisms of Relativity and Quantum Mechanics. In particular a theory of Quantum Gravity has already emerged. In short, rather than the static 4-dimensional modelling of present day (non-process) physics, Process Physics is providing a dynamic model where space and matter are seen to emerge from a fundamentally random but self-organising system. The key insight is that to adequately model reality we must move on from the traditional non-process syntactical information modelling to a process semantic information modelling; such information is internally meaningful'. Process Physics Papers: (in reverse chronological order) Some of these papers are also archived at Los Alamos archives. Quantum-Foam In-Flow Theory of Gravity and the Global Positioning System (GPS) Abstract: It is shown that a new quantum-foam in-flow theory of gravity is mathematically equivalent to the General Relativity theory of gravity for the operation of the Global Positioning System (GPS). The differences between the two theories become experimentally evident in other situations such as in the so-called dark matter' effect, in the observation of absolute motion and ipso facto in the observation of the in-flow motion into the Sun, and in the observation of a new class of gravitational waves, effects which are present in existing experimental observations, but are not within General Relativity. This new theory of gravity arises within the information-theoretic Process Physics. Quantum-Foam, Gravity and Gravitational Waves Abstract: It is shown that both the Newtonian and General Relativity theories for gravity may be re-formulated as in-flow dynamics in which a substratum is effectively absorbed by matter, with the gravitational force determined by inhomogeneities of that flow. Analysis herein of the 1925-26 Dayton Miller interferometer data reveals such a gravitational in-flow of space past the Earth into the Sun. 
This data and that from the 1991 Roland DeWitte coaxial cable experiment also suggests that the in-flow is turbulent, which amounts to the observation of a gravitational wave phenomena. A generalisation of the in-flow formalisms is proposed which passes all the tests that General Relativity passed, but as well the new theory suggests that the so-called spiral galaxy rotation-velocity anomaly may be explained without the need of dark matter'. As well analysis of data from the Michelson and Morley, Miller, Illingworth, Jaseja et al, Torr and Kolen, and DeWitte experiments reveal motion relative to the substratum. Special relativity effects are caused by motion relative to the substratum. This implies that a new ontology underlies the spacetime formalism. Gravity as Quantum Foam In-Flow Abstract: The new information-theoretic Process Physics provides an explanation of space as a quantum foam system in which gravity is an inhomogeneous flow of the quantum foam into matter. The older Newtonian and General Relativity theories for gravity are analysed. It is shown that Newtonian gravity may be written in the form of an in-flow. General Relativity is also analysed as an in-flow, for those cases where it has been tested. An analysis of various experimental data demonstrates that absolute motion relative to space has been observed by Michelson and Morley, Miller, Illingworth, Jaseja et al, Torr and Kolen, and by DeWitte. The Dayton Miller and Roland DeWitte data also reveal the in-flow of space into matter which manifests as gravity. The experimental data suggests that the in-flow is turbulent, which amounts to the observation of a gravitational wave phenomena. A new in-flow theory of gravity is proposed which passes all the tests that General Relativity was claimed to have passed, but as well the new theory suggests that the so-called spiral galaxy rotation-velocity anomaly may be explained without the need of dark matter'. Various other gravitational anomalies also appear to be explainable. Newtonian gravity appears to be strictly valid only outside of spherically symmetric matter systems. Absolute Motion and Gravitational Effects Abstract: The new Process Physics provides a new explanation of space as a quantum foam system in which gravity is an inhomogeneous flow of the quantum foam into matter. An analysis of various experiments demonstrates that absolute motion relative to space has been observed experimentally by Michelson and Morley, Miller, Illingworth, Jaseja et al, Torr and Kolen, and by DeWitte. The Dayton Miller and Roland DeWitte data also reveal the in-flow of space into matter which manifests as gravity. The in-flow also manifests turbulence and the experimental data confirms this as well, which amounts to the observation of a gravitational wave phenomena. The Einstein assumptions leading to the Special and General Theory of Relativity are shown to be falsified by the extensive experimental data. Contrary to the Einstein assumptions absolute motion is consistent with relativistic effects, which are caused by actual dynamical effects of absolute motion through the quantum foam, so that it is Lorentzian relativity that is seen to be essentially correct. Process Physics: From Information Theory to Quantum Space and Matter Abstract: This is a review of the new information-theoretic Process Physics. 
The fundamental assumption is that reality is to be modelled as self-organising semantic or relational information using a self-referentially limited neural network model, where the information-theoretic limitations are implemented via self-referential noise. This modelling was motivated by the discovery that such stochastic neural networks are foundational to known quantum field theories. In Process Physics time is a distinct non-geometric process while space and quantum physics are emergent and unified. Quantum phenomena are caused by fractal topological defects embedded in and forming a growing three-dimensional fractal process-space, which is essentially a quantum foam. Other features are the emergence of quantum field theory with flavour and confined colour, limited causality and the Born quantum measurement metarule, inertia, time-dilation effects, gravity and the equivalence principle, a growing universe with a cosmological constant, black holes and event horizons, and the emergence of classicality. The unification of the quantum foam structure of space with the quantum nature of matter amounts to the discovery of quantum gravity. Gravity is essentially an in-flow effect associated with the loss of information. A new theory of gravity for the classical limit is proposed, and shown to pass the key tests. A detailed analysis of various experiments demonstrates that absolute motion with respect to this space of quantum foam has been observed experimentally by Michelson and Morley, Miller, Illingworth, DeWitte and others. The Dayton Miller and Roland DeWitte data also reveal the in-flow of space into matter which manifests as gravity. The in-flow also manifests turbulence and the experimental data confirms this as well, which amounts to the observation of a gravitational wave phenomena. The Einstein assumptions leading to the Special and General Theory of Relativity are shown to be falsified by the extensive experimental data. Contrary to the Einstein assumptions absolute motion is consistent with relativistic effects, which are caused by actual dynamical effects of absolute motion through the quantum foam, so that it is Lorentzian relativity that is seen to be essentially correct. Process Physics brings physics very much into accord with the general concepts of Process Philosophy. The success of this new physics has profound implications for our comprehension of reality. (110 pages) The Miller paper is available here: Miller 1933 The Michelson-Morley paper is available here: 1887 Dynamical Hierarchies in Fundamental Physics Abstract: A new process orientated physics is being developed at Flinders University. These ideas were initially motivated by deep unsolved problems in fundamental physics, such as the difficulty of quantizing gravity, the missing arrow of time, the question of how to interpret quantum mechanics, and perhaps most importantly, a problem with the very methodology of our fundamental descriptions of the Universe. A proposed solution to these problems, Process Physics, has led to what can be viewed as a hierarchical model of reality featuring a Universe that exhibits behaviour very reminiscent of living systems. K. Kitto, Dynamical Hierarchies in Fundamental Physics, p55, in Workshop Proceedings of the 8th International Conference on the Simulation and Synthesis of Living Systems (ALife VIII)}, E. Bilotta et al., Eds. (Univ. New South Wales, Australia, 2002). 
Absolute Motion and Quantum Gravity This paper has been superceded by the new analysis in the paper Process Physics: From Information Theory to Quantum Space and Matter. The understanding of the galactic in-flow effect was not immediate: In Michelson-Morley Experiments Revisted and the Cosmic Background Radiation Preferred Frame the direction was not determined, though the speed was found to be comparable to the CMB determined speed. In Analysis of Data from a Quantum Gravity Experiment that the directions were very different was noted but not appreciated, and in fact thought to be due to experimental error. In the paper Absolute Motion and Quantum Gravity an analysis of some of the smoother' Michelson-Morley data resulted in an incorrect direction. At that stage it was not understood that the data showed large fluctuations in the azimuth apparently caused by the turbulence. The issue is hopefully finally resolved. Analysis of Data from a Quantum Gravity Experiment Process physics gives a new account of how Michelson interferometers operate when in gas mode. In particular they can detect absolute motion through the quantum foam, as shown in the previous paper. Here this new physics is applied to the extensive data from gas-mode interferometer observations by Miller (1933). The speed of in-flow of the quantum foam towards the Sun is determined from Miller's data to be 47 +\- 6 km/s, compared to the theoretical value of 42 km/s. This observed in-flow is a signature of aquantum gravity effect in the new physics. Michelson-Morley Experiments Revisted and the Cosmic Background Radiation Preferred Frame The Michelson-Morley interferometer experiments were designed to measure the speed of the Earth through the aether. The results were always believed to have been null - no effect. This outcome formed the basis for Einstein's Special and General Relativity formalism. The new process physics shows that absolute motion, now understood to be relative to the quantum foam that is space, is observable, but only if the interferometer operates in gas mode. A re-analysis here shows that the results from the gas-mode interferometers were not null, but in fact large when re-analysed to take account of the effect of the air, or helium, in which the apparatus operated. The speed of absolute motion is comparable to that determined from the Cosmic Background Radiation anisotropy, but the direction is not revealed. So absolute motion is meaningful and measureable, thus refuting Einstein's assumption. This discovery shows that a major re-assessment of the interpretation of the Special and General Relativity formalism is called for, a task already provided by Process Physics. This new information-theoretic physics makes it clear that Michelson-Morley type experiments are detecting motion through the quantum foam, which is space. Hence we see direct evidence of quantum gravity effects, as predicted by Process Physics. (This version corrects an earlier version of this paper, at arXiv:physics/0205065.) Published in Apeiron, Vol. 10, No.2, 104-117, April 2003. Published version also here. Process Physics: From Quantum Foam to General Relativity Quantum Field Theory and Quantum Gravity unified, with the phenomenology of General Relativity emerging. This paper predicted that the measurement protocol underlying the formalism of Special and General Relativity would be found to be flawed. See above paper for confirmation of this. 
Smart Nanostructures and Synthetic Quantum Systems A discussion of possible applications of Process Physics. Published in BioMEMS and Smart Nanostructures, Proceedings of SPIE Conference #4590, L.B. Kish, ed. pp. 319-328, 2001. A Slightly different version published in Smart Materials and Structures, Vol. 11, 699-707(2002), with the title: Synthetic Quantum Systems . Process Physics: Inertia, Gravity and the Quantum Process Physics links to the phenomena of inertia and gravity. Space shown to be a quantum foam. Published in General Relativity and Gravitation, 34, 1637-1656(2002). Process Physics: Modelling Reality as Self-Organising Information Published in The Physicist, 37(6), 191-195, 2000. Self-Referential Noise as a Fundamental Aspect of Reality Published in Proc. 2nd Int. Conf. on Unsolved Problems of Noise and Fluctuations (UPoN 99), eds. D. Abbott and L. Kish, Adelaide, Australia, 11-15th July 1999, Vol. 511, p. 43, American Institute of Physics, New York, 2000. Self-Referential Noise and the Synthesis of Three-Dimensional Space Published in General Relativity and Gravitation 32, 529,2000. Bootstrap Universe from Self-Referential Noise This is the paper that introduced the key concept of Self-Referential Noise'. Pregeometric Modelling of the Spacetime Phenomenology Published in Physics Letters A223, 313-319,1996. A precursor to Process Physics before process-time and concept of `Self-Referential Noise' were introduced. Process Physics was featured as the cover story in P.M. Magazin, September 2003, in an article "Das Universum hat ein Bewusstsein!", by Peter Ripota. Interesting web articles on Process Physics are The Objects of Meaning from the Limits of Logic and Mind as Reflection of Process Physics and the Semantic of Reality , by Horacio Velasco. Here is a feature story on Process Physics from the Adelaide Advertiser Newspaper, June 24, 2000, "Chance is Everything", by Mark Steene. There is another general article in the Flinders Journal, 11(4), 2000, "Is Reality a Side-Effect of Randomness?", by Charles Gent. This new information-theoretic physics was featured as the cover story in New Scientist, 26 February 2000 No.2227 pp24-28 in an article " Random Reality" (pdf) or " Random Reality" (scanned images), by Marcus Chown. 1. physics/0401047 [abs, ps, pdf, other] : Title: Gravitation, the 'Dark Matter' Effect and the Fine Structure Constant Authors: Reginald T. Cahill (Flinders University) Comments: 11 pages, 3 eps figures. Typos fixed Subj-class: General Physics 2. physics/0312082 [abs, ps, pdf, other] : Title: Quantum Foam, Gravity and Gravitational Waves Authors: Reginald T. Cahill (Flinders University) Comments: 60 pages, 22 eps figure files. To be published in Relativity, Gravitation, Cosmology Subj-class: General Physics 3. physics/0309016 [abs, ps, pdf, other] : Title: Quantum-Foam In-Flow Theory of Gravity and the Global Positioning System (GPS) Authors: Reginald T. Cahill (Flinders University) Comments: 25 pages, 1 eps figure Subj-class: General Physics 4. physics/0307003 [abs, ps, pdf, other] : Title: Gravity as Quantum Foam In-Flow Authors: Reginald T Cahill (Flinders University) Subj-class: General Physics 5. physics/0306196 [abs, ps, pdf, other] : Title: Absolute Motion and Gravitational Effects Authors: Reginald T Cahill Subj-class: General Physics 6. physics/0209064 [abs, ps, pdf, other] : Title: Synthetic Quantum Systems Authors: Reginald T. Cahill Comments: 16 pages, Latex, 1 eps figure file Subj-class: General Physics Journal-ref: Smart Mater.Struct. 
7. physics/0209013 [abs, ps, pdf, other]: Title: Absolute Motion and Quantum Gravity. Authors: Reginald T. Cahill. Comments: 11 pages, Latex, 5 eps figure files. Minor changes. Subj-class: General Physics.
8. physics/0207010 [abs, ps, pdf, other]: Title: Analysis of Data from a Quantum Gravity Experiment. Authors: Reginald T. Cahill. Comments: 10 pages, 4 eps figures, latex. Subj-class: General Physics.
9. physics/0205070 [abs, ps, pdf, other]: Title: Re-Analysis of Michelson-Morley Experiments Reveals Agreement with COBE Cosmic Background Radiation Preferred Frame so Impacting on Interpretation of General Relativity. Authors: Reginald T. Cahill, Kirsty Kitto. Comments: 8 pages, latex, 1 eps figure. Subj-class: General Physics. Journal-ref: Apeiron 10 (2003) 104-117.
10. physics/0205065 [abs, ps, pdf, other]: Title: Michelson-Morley Experiments Revisited and the Cosmic Background Radiation Preferred Frame. Authors: Reginald T. Cahill, Kirsty Kitto. Comments: 8 pages, latex, 1 eps figure file. Subj-class: General Physics.
11. gr-qc/0203015 [abs, ps, pdf, other]: Title: Process Physics: From Quantum Foam to General Relativity. Authors: Reginald T. Cahill. Comments: 26 pages Latex, 1 separate eps file.
12. quant-ph/0111026 [abs, ps, pdf, other]: Title: Smart Nanostructures and Synthetic Quantum Systems. Authors: Reginald T. Cahill (Flinders University, Australia). Comments: LaTex, 14 pages, 1 eps file. To be published in BioMEMS and Smart Nanostructures, Proceedings of SPIE Conference #4590, ed. L. B. Kish.
13. gr-qc/0110117 [abs, ps, pdf, other]: Title: Process Physics: Inertia, Gravity and the Quantum. Authors: Reginald T. Cahill (Flinders University, Australia). Comments: LaTex, 18 pages, 1 eps file. Contribution to the 3rd Australasian Conference on General Relativity and Gravitation, Perth, Australia, July 2001. Journal-ref: Gen. Rel. Grav. 34 (2002) 1637-1656.
14. gr-qc/0009023 [abs, ps, pdf, other]: Title: Process Physics: Modelling Reality as Self-Organising Information. Authors: Reginald T. Cahill, Christopher M. Klinger, Kirsty Kitto. Journal-ref: The Physicist 37 (2000) 191-195.
15. gr-qc/9905082 [abs, ps, pdf, other]: Title: Self-Referential Noise as a Fundamental Aspect of Reality. Authors: Reginald T. Cahill, Christopher M. Klinger (Department of Physics, Flinders University). Comments: 7 pages, Latex, 3 ps figures. Contribution to the 2nd International Conference on Unsolved Problems of Noise, Adelaide 1999. Journal-ref: Proc. 2nd Int. Conf. on Unsolved Problems of Noise and Fluctuations (UPON 99), eds. D. Abbott and L. Kish, Adelaide, Australia, 11-15th July 1999, Vol. 511, p. 43, American Institute of Physics, New York, 2000.

---------------------------------------------

Discrete Space Time and Dark Energy
Authors: B.G. Sidharth
Subj-class: General Physics
In recent times, discrete space-time architectures have been considered in the context of Quantum Gravity, Quantum Super Strings, Dark Energy and so on. We show that such a scheme is intimately tied up with a varying $G$ cosmology, which again explains otherwise inexplicable observations such as the anomalous accelerations of the Pioneer spacecraft, as well as considerations involving the Zero Point Field, Random Electrodynamics and the derivation therefrom of quantum mechanical effects, including inertial mass.
http://arxiv.org/abs/physics/0402007

Comparison of area spectra in loop quantum gravity
Authors: G. Gour, V. Suneeta
We compare two area spectra proposed in loop quantum gravity in different approaches to computing the entropy of the Schwarzschild black hole. We describe the black hole in general microcanonical and canonical area ensembles for these spectra. For one of these spectra - the equally spaced spectrum - we show, in light of a proposed connection between the black hole area spectrum and the quasinormal mode spectrum, that this spectrum is completely consistent with that connection. This follows {\em without} requiring a change in the gauge group of the spin degrees of freedom in this formalism from SU(2) to SO(3).
http://arxiv.org/abs/gr-qc/0401110

Propagation of Light in Doubly Special Relativity
Authors: Sung Ku Kim, Sun Myong Kim, Chaiho Rim, Jae Hyung Yee
In an attempt to clarify what the velocity of a particle is in doubly special relativity, we solve Maxwell's equations, invariant under the position-space nonlinear Lorentz transformation proposed by Kimberly, Magueijo and Medeiros. It is shown that only the amplitude of the Maxwell wave, not the phase, is affected by the nonlinearity of the transformation. Thus, although the Maxwell wave appears to have infinitely large energy near the Planck time, the wave velocity is the same as the conventional light velocity. Surprisingly, the velocity of the Maxwell wave is not the same as the maximum signal velocity determined by the null geodesic condition, which is infinitely large near the Planck time and monotonically decreases to the conventional light velocity as time approaches infinity.
http://arxiv.org/abs/gr-qc/0401078

Another Treasury Of Science & Then Some. - SAVOLAIN - June 18, 2004 - 03:16 UTC
Posted by Kent Benjamin Robertson on June 18, 2004 03:16:38 UTC

Dear Mr. J. Rawson ('Clerk/Secretary'):
An eleven-page list of contemporary work (as remarkably conveyed by yourself in the preceding post), emphasizing Rovelli, Smolin, Thiemann, Bojowald - then Cahill... and then, as you say, you will give me the selections of others. As indeed you have... Your sincere generosity is truly and constructively humbling. Please pardon my belated response to it; as you may have noted (here and there, on and off the WWW net), Molly Keyboard MacColley (the most formidable counteroffensive weapon in the world) has been diverted via a commission (by popular if unsavory demand) to drain a swamp full of foul-mouthed, man-, woman- and child-eating gators, simultaneously preoccupied with administering a resulting industry of alligator belt, wallet, shoe and purse manufacturers and distributors. A sort of 'back end of the rabid gator slaying cycle'...
Having said that, please consider this change of topic, sir. Yourself - or who knows what other reader(s) - may provide an answer I have yet to learn. Most readers of posts like this have a basic if not intricate understanding of the destructive process of fission and its accompanying hazards (the unstorable radioactive toxins accumulating at the 'back end of the cycle' in a 'normally' operating nuclear power plant, for example).
Likewise, most persons surfing through a forum such as this (discounting the aforementioned foul-mouthed, bipedal gators) are more or less familiar with the potential solution to the problems of (destructive) fission-generated power in the prospect of (constructive) fusion-generated power, the drawback of fusion being that it produces temperatures too high to be contained and controlled... Please bear with me here...
About five years ago it was international, mainstream featured news that an insulating paint called 'Starlite' had been successfully formulated and tested, and that one coat of it, applied to any flammable material - say a common wooden grocery box, or a two-by-four - prevented a blowtorch, applied for hours, inches away from that wooden test object, with only one coat of Starlite paint between the forced-air torch and the wood, from even so much as scorching it, let alone raising it to ignition temperature. The public announcement, description, demonstration and ensuing controversy emerged about five years ago and went on for about a year, after which - to my knowledge - the entire issue, with its controversy, more or less faded (flamed) out, as it were...
It has occurred to myself and others that such an insulating material may have revolutionized many industries, causing all formerly flammable structures, habitats, warehouses, etc., to be minimally threatened by impending fire hazards (putting a lot of insurance - and lumber and construction - companies out of multi-billion-dollar, environment-polluting, posterity-threatening business?)... Perhaps most importantly, allowing - say - several feet or yards of thickness of Starlite paint material to serve as a containment vessel for deuterium (D2O, 'heavy water'), perhaps allowing the previously unachieved and reputedly unachievable containment of fusion reactors, with the unprecedented (formerly 'unachievable') advantage of producing no unstorable toxic radioactive substances, as fission-based nuclear power plants continue to produce today. If any reader of this described problem and its accompanying proposed solution (which seems to have mysteriously disappeared) knows more about it, please enter it in this forum discussion. Sincerely thanking 'Secretary/Clerk' J. Rawson, I am, gratefully, Kent Benjamin Robertson.
Post Script: In response to the acknowledged valuable new contributions to physics, as provided by Mr. J. Rawlins in the above eleven-page list: for all of the acknowledged, new and updated revolutionary material accumulating in theoretical physics and related subjects, I have yet to see Einstein's Special (light and uniform velocity - 1905) and General (gravity and non-uniform velocities - 1916) Relativity displaced or removed from the contemporary foundations of theoretical physics. That this work will be, and for that matter may already have been, 'improved upon' (which is the endlessly evolving destiny of perhaps all processes) is probably inevitable. Yet, for all the regularly appearing articles and journal essays on 'disproving' or otherwise faulting Einstein's work: until further notice, is it not still an improvement on Newton's (continuing, generally applicable and valid) classical mechanics... And if and when Einstein's work is superseded, will it not continue to remain standing on the shoulders of a sequential procession of giants...?
PPS: Mr. Rawlins...
May I post your eleven-page list of advised research studies at other locations on the net? RSVP, K.B.R.
## What Are Marketable Securities?

Marketable securities are liquid financial instruments that can be quickly converted into cash at a reasonable price. The liquidity of marketable securities comes from the fact that their maturities tend to be less than one year, and that the rates at which they can be bought or sold have little effect on prices.

### Key Takeaways

• Marketable securities are assets that can be liquidated to cash quickly.
• These short-term liquid securities can be bought or sold on a public stock exchange or a public bond exchange.
• These securities tend to mature in a year or less and can be either debt or equity.
• Marketable securities include common stock, Treasury bills, and money market instruments, among others.

## Understanding Marketable Securities

Businesses typically hold cash in their reserves to prepare for situations in which they may need to act swiftly, such as taking advantage of an acquisition opportunity that comes up or making contingent payments. However, instead of holding on to all the cash in its coffers, which presents no opportunity to earn interest, a business will invest a portion of the cash in short-term liquid securities. This way, instead of having cash sit idle, the company can earn returns on it. If a sudden need for cash emerges, the company can easily liquidate these securities. Short-term investment products of this kind are the group of assets categorized as marketable securities.

Marketable securities are defined as any unrestricted financial instrument that can be bought or sold on a public stock exchange or a public bond exchange. Therefore, marketable securities are classified as either marketable equity securities or marketable debt securities. Other requirements of marketable securities include having a strong secondary market that can facilitate quick buy and sell transactions, and having a secondary market that provides accurate price quotes for investors. The return on these types of securities is low, because marketable securities are highly liquid and are considered safe investments. Examples of marketable securities include common stock, commercial paper, banker's acceptances, Treasury bills, and other money market instruments.

## Special Considerations

Marketable securities are evaluated by analysts when conducting liquidity ratio analysis on a company or sector. Liquidity ratios measure a company's ability to meet its short-term financial obligations as they come due; in other words, they assess whether a company can pay its short-term debts using its most liquid assets. Liquidity ratios include:

### Cash Ratio

$$\begin{aligned} &\text{Cash Ratio} = \frac{ \text{MCS} }{ \text{Current Liabilities} } \\ &\textbf{where:} \\ &\text{MCS} = \text{Market Value of Cash and Marketable Securities} \end{aligned}$$

The cash ratio is calculated as the sum of the market value of cash and marketable securities divided by a company's current liabilities. Creditors prefer a ratio above 1, since this means that a firm will be able to cover all its short-term debt if it came due now. However, most companies have a low cash ratio, since holding too much cash or investing heavily in marketable securities is not a highly profitable strategy.

### Current Ratio

$$\text{Current Ratio} = \frac{ \text{Current Assets} }{ \text{Current Liabilities} }$$

The current ratio measures a company's ability to pay off its short-term debts using all its current assets, which include marketable securities.
It is calculated by dividing current assets by current liabilities.

### Quick Ratio

$$\text{Quick Ratio} = \frac{ \text{Quick Assets} }{ \text{Current Liabilities} }$$

The quick ratio factors only quick assets into its evaluation of how liquid a company is. Quick assets are defined as assets that can be converted into cash more easily than the rest of a company's current assets; marketable securities are considered quick assets. The formula for the quick ratio is quick assets / current liabilities.

## Types of Marketable Securities

### Equity Securities

Marketable equity securities can be either common stock or preferred stock. They are equity securities of a public company held by another corporation and are listed on the balance sheet of the holding company. If the stock is expected to be liquidated or traded within one year, the holding company will list it as a current asset. Conversely, if the company expects to hold the stock for longer than one year, it will list the equity as a non-current asset. All marketable equity securities, both current and non-current, are listed at the lower of cost or market value.

If, however, a company invests in another company's equity in order to acquire or control that company, the securities aren't considered marketable equity securities. The company instead lists them as a long-term investment on its balance sheet.

### Debt Securities

Marketable debt securities are considered to be any short-term bond issued by a public company and held by another company. Marketable debt securities are normally held by a company in lieu of cash, so it's even more important that there is an established secondary market. All marketable debt securities are held at cost on a company's balance sheet as a current asset until a gain or loss is realized upon the sale of the debt instrument.

Marketable debt securities are held as short-term investments and are expected to be sold within one year. If a debt security is expected to be held for longer than one year, it should be classified as a long-term investment on the company's balance sheet.
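To make the ratio arithmetic concrete, here is a minimal sketch in R of the three liquidity ratios defined above. The balance-sheet figures are invented for illustration only and do not describe any real company.

```r
# Illustrative balance-sheet figures (assumed values, not real data)
cash_and_marketable_securities <- 150000  # market value of cash + marketable securities
current_assets                 <- 420000  # includes marketable securities and inventory
quick_assets                   <- 300000  # current assets minus inventory and prepaid items
current_liabilities            <- 250000

cash_ratio    <- cash_and_marketable_securities / current_liabilities  # 0.6
current_ratio <- current_assets / current_liabilities                  # 1.68
quick_ratio   <- quick_assets / current_liabilities                    # 1.2

c(cash = cash_ratio, current = current_ratio, quick = quick_ratio)
```

On these made-up numbers, only the cash ratio falls below 1, which matches the observation above that most companies run a low cash ratio while keeping their broader liquidity ratios above 1.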
A hodge-podge of useful functions.

## Introduction

This is essentially my sandbox R package. It has a hodge-podge of functions that I've used over the past several years. As I work with my sociophonetic data, I find myself running the same sorts of procedures over and over, so I often decide to write a generalized function to take care of that stuff in one line within a tidyverse pipeline.

The functions in joeyr can be grouped into about five different categories: the outlier detectors, sociophonetics functions, ggplot2 themes, the "grapes", and other helpful functions. This README briefly explains them all. Some of these functions I've generalized into something that others may find useful. Others are very specific to my own code and workflow, so I'm not sure if they'll be useful to you. Some functions are documented well, but lots are not. For questions, feel free to contact me by email or on Twitter at @joey_stan.

## Installation

    # install.packages("devtools") # <- if not already installed
    devtools::install_github("JoeyStanley/joeyr")

You can then load it like any other package.

    library(joeyr)
    ## This is the "joeyr" package.

You'll know you've got it installed properly when you see the little message This is the "joeyr" package.

## Group 1: The Filter

The main function, and the reason I created the package in the first place, is find_outliers(). It implements a version of the Mahalanobis distance, except it does so iteratively: after finding the distances for each point, it marks the furthest token as an outlier, and then recalculates the distances based on the remaining points. It continues this one-at-a-time procedure until a proportion of your data (that you've specified) has been removed. (A generic, base-R sketch of this idea appears at the end of this README.) For more details, see ?find_outliers. In the future I'll write a vignette, blog post, or (perhaps some day) an article about it. For now, look at the help page or email me.

Note that there used to be some other outlier-related functions in the package: joey_filter() and its helper functions, joey_do_pca(), joey_adjust_cutoff(), and joey_rm_outliers(). Those did outlier detection using Cook's D, but I've since discovered that the method was inherently flawed. I also tried to algorithmically determine a cutoff for how much data should be filtered, but I could never get it to work and it always removed too much data. Besides, the functions were buggy, slow, and more complicated than they probably needed to be (they were my first attempt at R functions). Be aware that I've removed them in version 0.4 of {joeyr}. You should switch to find_outliers() instead. If you need them, you can still find them in the R_depreciated folder on GitHub, but they are no longer a part of the package.

## Group 2: Sociophonetics functions

Some functions in joeyr are essentially mathematical functions in that they crunch some numbers. They're all relevant to sociophonetic data.

• eucl_dist() calculates the Euclidean distance, given a pair of x and y (or, more commonly, F1 and F2) coordinates.
• pillai() calculates the Pillai score. It's not a complicated process in general, but this function simplifies it down quite a bit so it doesn't interrupt your workflow.
• tidy_mahalanobis() is probably my favorite. It's an implementation of the mahalanobis() function, which calculates the Mahalanobis distance, only it's meant to work within a tidyverse pipeline of commands. Like the pillai() function, it's designed so that you can get the values you want without interrupting your flow.
• norm_anae() makes it easy to do vowel formant normalization using the method described in the Atlas of North American English with just one line of code within a tidyverse pipeline. This is my current favorite normalization procedure and I was sick of writing large blocks of code in all my scripts, so I wrapped it up in the package.
• norm_deltaF() is another way to normalize your data, based on Keith Johnson's (2020) paper.
• lbms_index() allows you to quickly calculate the Low-Back-Merger Shift Index in your data (see Becker 2019; Boberg 2019).
• arpa_to_keywords() (and its shortcut, arpa_to_wells()) quickly converts ARPABET labels to Wells (or Wells-inspired) keywords.

## Group 3: ggplot2 themes

These are currently not documented. The main one is theme_joey(), which will produce plots using my own flavor of theme_bw(). There are some variants as well. The other helpful function is joey_arrow(), which is just a shortcut for a type of arrow I like when I draw lines on a plot.

## Group 4: The grapes

"Grapes" refer to the type of R functions that are flanked on either side by %, like %in%. The ones here came about while writing my dissertation (or rather, the code used to analyze the data for my dissertation) and were super helpful for that project. (A short usage sketch of these appears at the end of this README.)

• %ni% ("not in") is super handy and is the opposite of %in%. I don't use it so much anymore because I've figured out how to use ! properly, but I still love it.
• %wi% ("within") was one I wrote to see whether a number is within a range. I think tidyverse has %within% somewhere in its suite of packages, but I wrote this before I knew about it. Plus, it's a little shorter.
• %expanded_by% takes a range and a number and expands the range by that much (applied to both sides). So c(1, 3) %expanded_by% 3 produces c(-2, 6). It was helpful when I was automating the plots in my dissertation.

## Group 5: Other helpful functions

While writing my dissertation, I found myself doing the same sort of pipeline over and over, so I made a couple of little shortcut functions to save myself some typing.

• color_gradienter() was needed in my Shiny app, the Gazetteer of Southern Vowels. I needed a way to take two arbitrary colors and produce an arbitrary number of colors at fixed intervals between them. This is helpful for plots: do you want to use your favorite color of green and your favorite color of blue, but also get three more colors in between them for a continuous scale? color_gradienter() can do that. I don't know everything about how color is encoded, but it seems to work well enough for me, though there may be some weird bugs.
• ucfirst() was a function I copied over from Perl. It just capitalizes the first character of a string. Unlike some other functions that are more sophisticated and transform the text into title case, sometimes I just need the very first character capitalized, rather than the first character of each word.

## Deprecated functions

Previous versions of {joeyr} had a couple of other tools, but recent versions of {tidyr} and {dplyr} have rendered them unnecessary. The following are no longer part of {joeyr}, though you can find the code on GitHub in the R_depreciated folder.

• spread_n() was perhaps my favorite function. I had a need for using the (old) tidyr::spread() with multiple value columns while reshaping formant data. I went to the RStudio forums and someone wrote a slick little function for me that I dubbed spread_n(). (Kieran Healy saw that post and wrote a blog post on it!)
The function has since been rendered obsolete by the new tidyr::pivot_wider() (Kieran Healy wrote about that too), so I don't use spread_n() anymore. Perhaps it'll eventually be removed from the package. But I can't help but think that my asking that question had some role in the pivot_wider() function.
• move_x_after_y() and move_x_before_y() are helpful when you want to take a column that you've just created and move it before or after some existing column. They're a little buggy when you work with columns on the edges of your dataframe, but they worked well for me and my dissertation code. Note that once dplyr 1.0.0 is released, these functions will be phased out, since the new dplyr::relocate() function does what I intended these to do.

## Conclusion

That's joeyr. A couple of usage sketches follow below. I hope you find it useful!
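## A few usage sketches

The following two snippets are minimal sketches rather than documentation. The first illustrates, in base R only, the iterative Mahalanobis idea that find_outliers() is described as implementing above; it is not the package's actual code, and the function name and arguments are invented for illustration. The second shows the "grapes" in use; the %expanded_by% example is taken directly from the text, while the exact input handling of %ni% and %wi% is an assumption based on their descriptions.

```r
# A generic illustration of the iterative procedure described for find_outliers():
# drop the furthest token, recompute distances on what remains, and repeat until
# the requested proportion of the data has been removed. (Not the package's code;
# `iterative_mahalanobis_trim` and its arguments are made up for this sketch.)
iterative_mahalanobis_trim <- function(dat, prop = 0.05) {
  keep <- rep(TRUE, nrow(dat))
  n_to_drop <- floor(nrow(dat) * prop)
  for (i in seq_len(n_to_drop)) {
    x <- dat[keep, , drop = FALSE]
    d <- mahalanobis(x, colMeans(x), cov(x))   # squared Mahalanobis distance of each remaining point
    worst <- which.max(d)                      # furthest remaining token
    keep[which(keep)[worst]] <- FALSE          # mark it as an outlier, then recompute next pass
  }
  !keep  # TRUE = flagged as an outlier
}
```

```r
library(joeyr)

# %ni% ("not in"): the opposite of %in% (assumed to return TRUE where the
# left-hand elements are absent from the right-hand vector).
c("AA", "IY", "UW") %ni% c("IY", "UW")

# %wi% ("within"): is a number within a range? (Passing the range as a
# length-2 vector is an assumption based on the description above.)
2 %wi% c(1, 3)

# %expanded_by%: expand a range on both sides; this example is given
# verbatim in the README text and should return c(-2, 6).
c(1, 3) %expanded_by% 3
```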