# Medicine: Risk factor

Short description: Variable associated with an increased risk of disease or infection

In epidemiology, a risk factor or determinant is a variable associated with an increased risk of disease or infection.[1]:38 Due to a lack of harmonization across disciplines, determinant, in its more widely accepted scientific meaning, is often used as a synonym. The main difference lies in the realm of practice: medicine (clinical practice) versus public health. As an example from clinical practice, low ingestion of dietary sources of vitamin C is a known risk factor for developing scurvy. Specific to public health policy, a determinant is a health risk that is general, abstract, related to inequalities, and difficult for an individual to control.[2][3][4] For example, poverty is known to be a determinant of an individual's standard of health.

## Correlation vs causation

Risk factors or determinants are correlational and not necessarily causal, because correlation does not prove causation. For example, being young cannot be said to cause measles, but young people have a higher rate of measles because they are less likely to have developed immunity during a previous epidemic. Statistical methods are frequently used to assess the strength of an association and to provide causal evidence, for example in the study of the link between smoking and lung cancer. Statistical analysis along with the biological sciences can establish that risk factors are causal. Some prefer the term risk factor to mean causal determinants of increased rates of disease, and for unproven links to be called possible risks, associations, etc. When done thoughtfully and based on research, identification of risk factors can be a strategy for medical screening.[5]

## Terms of description

Mainly taken from risk factors for breast cancer, risk factors can be described in terms of, for example:

• Relative risk, such as "A woman is more than 100 times more likely to develop breast cancer in her 60s than in her 20s."[6]
• Fraction of incidences occurring in the group having the property of or being exposed to the risk factor, such as "99% of breast cancer cases are diagnosed in women."[7]
• Increase in incidence in the exposed group, such as "each daily alcoholic beverage increases the incidence of breast cancer by 11 cases per 1000 women."[8]
• Hazard ratio, such as "an increase in both total and invasive breast cancers in women randomized to receive estrogen and progestin for an average of 5 years, with a hazard ratio of 1.24 compared to controls."[9]

## Example

At a wedding, 74 people ate the chicken and 22 of them were ill, while of the 35 people who had the fish or vegetarian meal only 2 were ill. Did the chicken make the people ill?

$\displaystyle{ Risk = \frac {\mbox{number of persons experiencing event (food poisoning)}} {\mbox{number of persons exposed to risk factor (food)}} }$[10]

So the chicken eaters' risk = 22/74 = 0.297, and the non-chicken eaters' risk = 2/35 = 0.057. Those who ate the chicken had a risk over five times as high as those who did not, that is, a relative risk of more than five. This suggests that eating chicken was the cause of the illness, but this is not proof. This example of a risk factor is described in terms of the relative risk it confers, which is evaluated by comparing the risk of those exposed to the potential risk factor to those not exposed.
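For illustration, the calculation above can be scripted directly; the following minimal sketch (not part of the original article) reproduces the wedding example and the relative-risk comparison:

```python
# Minimal illustration of the relative-risk calculation from the wedding example.
def risk(events, exposed):
    """Risk = number experiencing the event / number exposed."""
    return events / exposed

chicken_risk = risk(22, 74)   # ~0.297
other_risk = risk(2, 35)      # ~0.057
relative_risk = chicken_risk / other_risk

print(f"chicken eaters' risk: {chicken_risk:.3f}")
print(f"non-chicken eaters' risk: {other_risk:.3f}")
print(f"relative risk: {relative_risk:.1f}")  # > 5, suggesting (but not proving) causation
```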
## General determinants

The probability of an outcome usually depends on an interplay between multiple associated variables. When performing epidemiological studies to evaluate one or more determinants for a specific outcome, the other determinants may act as confounding factors, and need to be controlled for, e.g. by stratification. The potentially confounding determinants vary with what outcome is studied, but the following general confounders are common to most epidemiological associations, and are the determinants most commonly controlled for in epidemiological studies:

• Age (0 to 1.5 years for infants, 1.5 to 6 years for young children, etc.)
• Sex or gender (male or female)[11]:20
• Ethnicity (based on race)[11]:21

Other possible confounders that are less commonly adjusted for include:

• Social status/income[1]:39
• Geographic location
• Genetic predisposition
• Gender identity
• Occupation
• Overwork[12]
• Sexual orientation
• Level of chronic stress
• Diet
• Level of physical exercise
• Alcohol consumption and tobacco smoking
• Other social determinants of health

## Risk marker

A risk marker is a variable that is quantitatively associated with a disease or other outcome, but direct alteration of the risk marker does not necessarily alter the risk of the outcome. For example, driving-while-intoxicated (DWI) history is a risk marker for pilots, as epidemiologic studies indicate that pilots with a DWI history are significantly more likely than their counterparts without a DWI history to be involved in aviation crashes.[13]

## History

The term "risk factor" was coined by former Framingham Heart Study Director Dr. William B. Kannel in a 1961 article in Annals of Internal Medicine.[14]

## References

1. Parritz, Robin Hornik (2017-05-24). Disorders of Childhood: Development and Psychopathology. Troy, Michael F. (Michael Francis) (Third ed.). Boston, MA. ISBN 9781337098113. OCLC 960031712.
2. Improving Health in the Community: A Role for Performance Monitoring: 2. Understanding Health and Its Determinants: A Model of the Determinants of Health. National Academy of Sciences: National Academies Press: Institute of Medicine (US) Committee on Using Performance Monitoring to Improve Community Health. 1997. ISBN 978-0309055345. "Unlike a biomedical model that views health as the absence of disease, this dynamic framework includes functional capacity and well-being as health outcomes of interest. It also presents the behavioral and biologic responses of individuals as factors that influence health but are themselves influenced by social, physical, and genetic factors that are beyond the control of the individual."
3. "Health Impact Assessment (HIA): Glossary of terms used". World Health Organization.
4. "Health Impact Assessment (HIA): The determinants of health". World Health Organization.
5. Wald, N. J.; Hackshaw, A. K.; Frost, C. D. (1999). "When can a risk factor be used as a worthwhile screening test?". BMJ 319 (7224): 1562–1565. doi:10.1136/bmj.319.7224.1562. ISSN 0959-8138. PMID 10591726.
6. "Neoplasms of the Breast". Cancer Medicine (5th ed.). Hamilton, Ontario: B. C. Decker. 2000. §Risk Factors. ISBN 1-55009-113-1. Retrieved 27 January 2011.
7. "Breast carcinoma in men: a population-based study". Cancer 101 (1): 51–7. July 2004. doi:10.1002/cncr.20312. PMID 15221988.
8. "Moderate alcohol intake and cancer incidence in women". Journal of the National Cancer Institute 101 (5): 296–305. March 2009. doi:10.1093/jnci/djn514. PMID 19244173.
9. Heiss, G.; Wallace, R.; Anderson, G. L.; Aragaki, A.; Beresford, S. A. A.; Brzyski, R.; Chlebowski, R. T.; Gass, M. et al. (2008).
"Health Risks and Benefits 3 Years After Stopping Randomized Treatment with Estrogen and Progestin". JAMA: The Journal of the American Medical Association 299 (9): 1036–45. doi:10.1001/jama.299.9.1036. PMID 18319414. 10. Tenny, Steven; Hoffman, Mary R. (2020), "Relative Risk", StatPearls (StatPearls Publishing), PMID 28613574, retrieved 2020-06-10 11. Mash, Eric J. (2019). Abnormal child psychology. Wolfe, David A. (David Allen), 1951- (Seventh ed.). Boston, MA. ISBN 9781337624268. OCLC 1022139949. 12. Pega, Frank; Nafradi, Balint; Momen, Natalie; Ujita, Yuka; Streicher, Kai; Prüss-Üstün, Annette; Technical Advisory Group (2021). "Global, regional, and national burdens of ischemic heart disease and stroke attributable to exposure to long working hours for 194 countries, 2000–2016: A systematic analysis from the WHO/ILO Joint Estimates of the Work-related Burden of Disease and Injury". Environment International 154: 106595. doi:10.1016/j.envint.2021.106595. ISSN 0160-4120. PMID 34011457. 13. Li G., Baker S. P., Qiang Y., Grabowski J. G., McCarthy M. L. Driving-while-intoxicated history as a risk marker for general aviation pilots. Accid Anal Prev. 2005;37(1):179-84./McFadden K. L. Driving while intoxicated (DWI) convictions and job-related flying performance – a study of commercial air safety. J Oper Res Soc. 1998;49:28–32
# Protocol 006_PsCARTHA Carthage

This page contains all the relevant information for protocol 006 Carthage. Each of the main changes is briefly described with links to relevant external documentation and merge requests. There are dedicated sections for all the changes to RPCs and operations. The changelog section contains the most significant commit messages and instructions to regenerate the protocol sources from the GitLab branch.

Test network Carthagenet is available to test Carthage. See details in Test Networks and instructions to join in How to get Tezos.

The code can be found in the GitLab branch proto-006 and its full hash is PsCARTHAGazKbHtnKfLzQg3kms52kSRpgnDY982a9oYsSXRLQEb.

This protocol contains several breaking changes with respect to Babylon. Developers are particularly encouraged to carefully read this page and to monitor it for updates.

## Baking Daemon

The baking daemon requires direct access to the context of the Tezos node. The daemon for 006 requires the new context introduced by Irmin2. As such, bakers that use the default baker need to upgrade to the new storage backend in order to be able to run the 006 baking daemons.

## Smart Contracts

The gas limit per block and per operation was increased by 30%. For operations it changed from 800,000 to 1,040,000 and for blocks it changed from 8,000,000 to 10,400,000.

## Baking and Endorsing

The formula to calculate baking and endorsing rewards was improved in order to provide more accurate results. The formula was further modified in order to make it more resistant to certain types of attacks. A full explanation can be found here.

## Accounts

The assert that was triggered when a delegated account tried to empty itself was replaced by a proper error message.

## Michelson

Protocol 006 contains several improvements to the Michelson smart contract language.

### Optimisation of the CONTRACT instruction

The CONTRACT instruction has been optimized to avoid performing a useless disk access in the case its argument is the address of an implicit (aka tz) account. In this case, this small optimisation saves 132 gas units.

### Comparable pairs in sets and maps

Comparability of pairs was added in the Babylon protocol; it is possible in Babylon to use the COMPARE instruction to lexicographically compare two pairs. However, due to a missing case in the type-checker for comparable types, the Babylon implementation of comparable pairs did not allow the use of pairs as elements in sets or keys in maps and big maps. This is fixed in Carthage.

### Fixing MAPping on maps with side effects

The MAP instruction can be used in Michelson to apply a function to each element of a list or each value of a map, producing respectively a list of the same length as the original or a map with the same keys. During the development of a new unit test suite for the Michelson language, the Runtime Verification team discovered that the Michelson interpreter incorrectly handled side effects in the map case of the MAP instruction. Until Babylon, if the body of the MAP instruction modifies the rest of the stack, then these changes are reverted when the MAP exits. This behaviour is consistent with neither the Michelson documentation nor the list case; it has been fixed in Carthage. This change is not backward compatible, so we have inspected the current state of the mainnet chain and we have checked that, at the time of writing, no contract is affected by this bug.
Until the activation of Carthage (or any protocol including this fix), smart contract authors should avoid relying on the bogus behaviour of the MAP instruction on maps by using the ITER instruction instead when they need to perform side effects on the rest of the stack during an iteration.

### Dead optimisation of the UNPAIR macro

The UNPAIR macro is very commonly used in Michelson to destruct pairs. In order to encourage its use, it received a special treatment in Babylon by which its gas cost was artificially decreased. Unfortunately, a small mistake in the unfolding of the UNPAIR macro made this special treatment dead code; the interpreter is looking for the sequence {DUP; CAR; DIP CDR} but the unfolding of UNPAIR is actually {DUP; CAR; DIP {CDR}} (note the extra pair of curly braces around CDR). Moreover, the Babylon gas update has made this peephole optimisation of the UNPAIR macro much less interesting because the gas costs of all stack and pair instructions are much lower than in previous protocols. We plan to promote UNPAIR as a new Michelson instruction in a future protocol proposal.

### Error message for EMPTY_BIG_MAP arity

The EMPTY_BIG_MAP instruction, which was added in Babylon and can be used to push an empty big_map on the stack, expects two parameters (the types for keys and values). When the instruction is used with another arity, the error message produced in Babylon was unclear because of a missing case in the type checker. This missing case has been added and the error message is clearer in Carthage.

### Typechecking big_map literals

The typechecking RPCs typecheck_script and typecheck_data are useful tools for Michelson editors featuring typechecking. The typecheck_data RPC was restricted to non-big_map types for no good reason. This limitation has been removed; it is possible in Carthage to typecheck big_map literals.

### Checking validity of annotations

Annotations are enforced to only contain valid JSON.

## Changes to RPCs

BREAKING CHANGES: the semantics of the baking_rights RPC and the return values of the block_reward and endorsement_reward RPCs have changed. Below you can find all the RPC changes.

### Baking_rights

In Babylon the argument max_priority causes the RPC to return the rights up to max_priority excluded; for example, setting max_priority=0 returns the empty list. In Carthage the value of max_priority is included; for example, max_priority=0 returns the rights of priority zero.
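As an illustration of this semantic change, the following minimal sketch queries the baking_rights RPC of a node; the local URL and default port are assumptions for the example, not part of this page. The same request returns an empty list under Babylon but the priority-zero rights under Carthage.

```python
# Hypothetical query against a local node's RPC interface (default port assumed).
import requests

url = "http://localhost:8732/chains/main/blocks/head/helpers/baking_rights"
rights = requests.get(url, params={"max_priority": 0}).json()

# Babylon:  []                      (max_priority is excluded)
# Carthage: rights with priority 0  (max_priority is included)
print(rights)
```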
### Block_reward

This constant is accessed by calling /chains/main/blocks/head/constants, which returns a JSON object where the field block_reward was renamed to baking_reward_per_endorsement and its value was changed from a single value to a list of values.

### Endorsement_reward

This constant is accessed by calling /chains/main/blocks/head/constants, which returns a JSON object where the value of the field endorsement_reward was changed from a single value to a list of values.

## Changes to the binary format of operations

There are no changes to the binary format of operations.

## Changelog

You can see the full git history on the branch proto-006. In order to regenerate a protocol with the same hash as Carthage you can run the following from this branch:

$ ./scripts/snapshot_alpha.sh carthage_006 from babylon_005
$ ls src/proto_006_PtXXX

### Detailed Changelog

• Proto: remove .ocamlformat-ignore and make fmt
  Apply the ocamlformat tool to the protocol codebase.
• Protocol/Migration: remove babylon's vanity nonce
• Protocol/Storage: initialize big_map ids only for genesis
• Protocol/RPC: fix 'baking_rights' so that 'max_priority' is included
  Fix a bug where the ../helpers/baking_rights RPC would exclude the max_priority baking right from its result. BREAKING CHANGE: the semantics of the baking_rights RPC has changed.
• Protocol/Emmy+: fix baking and endorsement reward formulae
  Fix the imprecision in the baking reward formula to make it linear in the number of endorsements included instead of a step function. Improve the precision of the endorsement reward computation by applying the priority malus on the total endorsement reward.
• Protocol/Michelson: fix comparable comb pairs
  Allow comb pairs as map keys and set elements, not only as operands of COMPARE.
• Protocol/Michelson: allow all parameter types when typechecking a literal
  Extend the range of the typecheck_data RPC by also allowing big_map values.
• Protocol/Gas: increase the gas limits per block and operation by 30%
  Bump the gas limit for blocks and operations by 30%, going from 800,000 per operation and 8,000,000 per block to 1,040,000 per operation and 10,400,000 per block.
• Protocol/Migration: bump gas limit constants in the context
  Update the gas limit constants in the context on protocol transition.
• Protocol/Michelson: remove the peephole optimisation of UNPAIR
  Remove an unreachable optimisation. A proper UNPAIR instruction shall be proposed in a future protocol.
• Protocol/Michelson: handling of the bad arity error for the EMPTY_BIG_MAP instruction
  Improve error reporting when checking the arity of the EMPTY_BIG_MAP instruction.
• Protocol/Michelson: fix the interpretation of the MAP instruction on maps
  In the previous implementation, accumulating a value during a MAP on a map was impossible because the initial stack tail was restored. This was not the documented behavior of the MAP instruction and it was inconsistent with the case of mapping over a list. BREAKING CHANGE: originated contracts that rely on the previous (and incorrect) semantics might behave incorrectly.
• Protocol/Michelson: improve the performance of the CONTRACT instruction
  Add an optimisation that makes the instruction cheaper in gas for implicit contracts (tz1, tz2, tz3) by saving an I/O.
Procedural Map Prototyping Tool

In a previous set of posts I showed a small 2D procedural map generating tool I made. The idea was to create a tool that allowed for fast iteration on different noise generation techniques.

Welp. That didn't work out as well as I thought it would. I thought using Lua scripting would make it really flexible, and it did, but it was also really laggy despite running the generation on 4 different threads. Also, just using FastNoise means I have to do all the noise combining myself in the scripts. Not really ideal for a prototyping tool! What I need is speed, flexibility and fast iteration.

So I'm making another tool. This one is based on libnoise. Libnoise is a portable C++ library for generating coherent noise. Libnoise has a number of different modules that can be chained together to generate and modify noise maps. From the libnoise tutorials: modules can be combined in various ways to achieve different results.

But. Libnoise on its own doesn't really allow for fast iteration. You'd have to change values and re-compile each time you want to try something different, and it's tedious to rearrange modules. So this tool needs to provide a layer for managing all the different noise modules.

Initial Concept

Above is the initial block diagram I came up with for mapgen2. I'm going for a Model-View-Controller type pattern. Modules (models) are managed in the backend. Views and Controllers are for viewing and modifying data in the backend. A node graph editor is a good choice for this type of editor. I want all the noise module nodes to display a preview of the noise map that they generate. As you change the module's parameters you get a live preview update. I felt that this was critical as it helps to visualize how the different parameters change the final output.

For the GUI I'm going with a combination of Magnum and ImGui. In contrast with the last tool, where I used SFML + ImGui, I felt Magnum was exactly what I was looking for in terms of a graphics library and decided to try it out. On that note I also needed to use a Magnum ImGui binding and ImGui Addons for the node graph editor and tabs. Also looks like I can make a number of contributions to those projects, so I'm excited to dive into open source!

Noise Module Wrapper

I needed a common unit for interacting with noise modules. So I decided to wrap libnoise modules in a class that knows how to interact with the different module types. Looks something like this.

class NoiseModule
{
public:
    enum class Type
    {
        Billow,
        Perlin,
        RidgedMulti,
        ScaleBias,
        Select,
        ...
    };

    NoiseModule(Type type)
    {
        //...
    }
};

The underlying libnoise module can be one of 30 options (some examples listed in the enum). I store these options as a variant and use a factory to create them.

...
using ModuleVariant = boost::variant<
    noise::module::Billow,
    noise::module::Perlin,
    noise::module::RidgedMulti,
    noise::module::ScaleBias,
    noise::module::Select
>;
...

class ModuleFactory
{
public:
    static NoiseModule::ModuleVariant createModule(NoiseModule::Type type)
    {
        switch (type)
        {
        case NoiseModule::Type::Billow:      return { noise::module::Billow() };
        case NoiseModule::Type::Perlin:      return { noise::module::Perlin() };
        case NoiseModule::Type::RidgedMulti: return { noise::module::RidgedMulti() };
        case NoiseModule::Type::ScaleBias:   return { noise::module::ScaleBias() };
        case NoiseModule::Type::Select:      return { noise::module::Select() };
        default:
            throw std::runtime_error("Invalid noise type");
        }
    }
...
Parameters needed a single unit to interact with as well, so I also stored those in variants.

using ParameterVariant = boost::variant<
    int,
    float,
    RangedInt,
    RangedFloat
>;

using ParameterMap = std::map<std::string, ParameterVariant>;
using ParameterMapPtr = std::shared_ptr<ParameterMap>;

Parameters need to be interacted with by other components in the system. Returning the parameter map as a reference was too prone to errors, as I discovered last time. So here I pass them back as a std::shared_ptr. These are also created with a factory method.

static NoiseModule::ParameterMap createParams(NoiseModule::Type type)
{
    switch (type)
    {
    case NoiseModule::Type::Billow:
        return {
            { "seed", 1337 },
            { "frequency", (float)noise::module::DEFAULT_BILLOW_FREQUENCY },
            { "octaves", RangedInt(1, 25, noise::module::DEFAULT_BILLOW_OCTAVE_COUNT) },
            { "persistence", RangedFloat(0.f, 1.f, noise::module::DEFAULT_BILLOW_PERSISTENCE) },
            { "lacunarity", RangedFloat(1.f, 2.f, noise::module::DEFAULT_BILLOW_LACUNARITY) },
        };
    case NoiseModule::Type::Perlin:
        return {
            { "seed", 1337 },
            { "frequency", (float)noise::module::DEFAULT_PERLIN_FREQUENCY },
            { "octaves", RangedInt(1, 25, noise::module::DEFAULT_PERLIN_OCTAVE_COUNT) },
            { "persistence", RangedFloat(0.f, 1.f, noise::module::DEFAULT_PERLIN_PERSISTENCE) },
            { "lacunarity", RangedFloat(1.f, 4.f, noise::module::DEFAULT_PERLIN_LACUNARITY) },
        };
    case NoiseModule::Type::RidgedMulti:
        return {
            { "seed", 1337 },
            { "frequency", (float)noise::module::DEFAULT_RIDGED_FREQUENCY },
            { "octaves", RangedInt(1, 25, noise::module::DEFAULT_RIDGED_OCTAVE_COUNT) },
            { "lacunarity", RangedFloat(1.f, 4.f, noise::module::DEFAULT_RIDGED_LACUNARITY) },
        };
    case NoiseModule::Type::ScaleBias:
        return {
            { "bias", 0.0f },
            { "scale", 1.0f }
        };
    case NoiseModule::Type::Select:
        return {
            { "lower_bound", (float)noise::module::DEFAULT_SELECT_LOWER_BOUND },
            { "upper_bound", (float)noise::module::DEFAULT_SELECT_UPPER_BOUND },
            { "fall_off", (float)noise::module::DEFAULT_SELECT_EDGE_FALLOFF }
        };
    default:
        throw std::runtime_error("Invalid noise type");
    }
}

C++ initializer lists make this pretty slick! The nice part about using boost::variant is visitors. To set parameters in the different noise modules I created a SetParamsVisitor.
struct SetParamsVisitor : public boost::static_visitor<>
{
public:
    SetParamsVisitor(NoiseModule::ParameterMap& params)
        : params_{ params }
    {
    }

    void operator()(noise::module::Billow& module) const
    {
        module.SetSeed(boost::get<int>(params_["seed"]));
        module.SetFrequency(boost::get<float>(params_["frequency"]));
        module.SetOctaveCount(boost::get<RangedInt>(params_["octaves"]).value);
        module.SetPersistence(boost::get<RangedFloat>(params_["persistence"]).value);
        module.SetLacunarity(boost::get<RangedFloat>(params_["lacunarity"]).value);
    }

    void operator()(noise::module::Perlin& module) const
    {
        module.SetSeed(boost::get<int>(params_["seed"]));
        module.SetFrequency(boost::get<float>(params_["frequency"]));
        module.SetOctaveCount(boost::get<RangedInt>(params_["octaves"]).value);
        module.SetPersistence(boost::get<RangedFloat>(params_["persistence"]).value);
        module.SetLacunarity(boost::get<RangedFloat>(params_["lacunarity"]).value);
    }

    void operator()(noise::module::RidgedMulti& module) const
    {
        module.SetSeed(boost::get<int>(params_["seed"]));
        module.SetFrequency(boost::get<float>(params_["frequency"]));
        module.SetOctaveCount(boost::get<RangedInt>(params_["octaves"]).value);
        module.SetLacunarity(boost::get<RangedFloat>(params_["lacunarity"]).value);
    }

    void operator()(noise::module::ScaleBias& module) const
    {
        module.SetBias(boost::get<float>(params_["bias"]));
        module.SetScale(boost::get<float>(params_["scale"]));
    }

    void operator()(noise::module::Select& module) const
    {
        module.SetBounds(boost::get<float>(params_["lower_bound"]),
                         boost::get<float>(params_["upper_bound"]));
        module.SetEdgeFalloff(boost::get<float>(params_["fall_off"]));
    }

private:
    NoiseModule::ParameterMap& params_;
};

Modules are created and removed by a ModuleManager class.

Node Editor

When all the pieces are put together I get a node editor like this:

This is the setup from libnoise tutorial 5!

In action:
# 34-10 GENERATOARE CU MAGNEȚI PERMANENȚI PENTRU AGREGATE EOLIENE ȘI HIDRAULICE

PERMANENT MAGNET GENERATORS FOR WIND AND WATER PLANTS

This paper presents some particularities of the construction of synchronous generators for wind and hydro plants, and of low-power generators with permanent magnets with a simplified construction. A comparison between generator variants with electromagnetic excitation and with permanent magnets is also made. The advantages of using rare-earth supermagnets, Neodymium Iron Boron (NdFeB), are highlighted, especially the increase in the efficiency of these generators; finally, the maximization of efficiency by replacing the conductive materials (copper, aluminium) with ceramic materials that are superconducting at ambient temperature is mentioned.

Keywords: wind power, electrical generators, supermagnets

Cuvinte cheie: energie eoliană, generatoare electrice, supermagnet
# Implementing a multi-label classifier

To implement a multi-label classifier you need to subclass a classifier base class. Currently you can select from a few classifier base classes, depending on which approach to multi-label classification you follow. The scikit-multilearn inheritance tree for classifiers is shown in the figure below.

To implement a classifier compatible with scikit-learn's ecosystem we need to subclass two classes from sklearn.base: BaseEstimator and ClassifierMixin. For that we provide the skmultilearn.base.MLClassifierBase base class. We further extend this class with properties specific to the problem transformation approach in multi-label classification in skmultilearn.base.ProblemTransformationBase.

## Scikit-learn base classes

### BaseEstimator

The base estimator class from scikit is responsible for providing the ability to clone classifiers, for example when multiple instances of exactly the same classifier are needed for cross validation performed using the CrossValidation class. The class provides two functions responsible for that: get_params, which fetches parameters from a classifier object, and set_params, which sets the params of the target clone. The params should also be acceptable by the constructor.

### ClassifierMixin

This is an interface with a non-important method that allows different classes in scikit to detect that our classifier behaves as a classifier (i.e. implements fit/predict etc.) and provides certain kinds of outputs.

## MLClassifierBase

The base multi-label classifier in scikit-multilearn is skmultilearn.base.MLClassifierBase. It provides two abstract methods: fit(X, y) to train the classifier and predict(X) to predict labels for a set of samples. These functions are expected from every classifier. It also provides a default implementation of get_params/set_params that works for multi-label classifiers.

### Copyable fields

One of the most important concepts in scikit-learn's BaseEstimator is the concept of cloning. Scikit-learn provides a plethora of experiment-performing methods, among others cross validation, which require the ability to clone a classifier. Scikit-multilearn's base multi-label class - MLClassifierBase - provides infrastructure for automatic cloning support. All you need to do in your classifier is:

1. subclass MLClassifierBase or a derivative class
2. set self.copyable_attrs in your class's constructor to a list of fields (as strings) that should be cloned (usually it is equal to the list of the constructor's arguments)

An example of this would be:

class AssignKBestLabels(MLClassifierBase):
    """Assigns k most probable labels"""

    def __init__(self, k=None):
        super(AssignKBestLabels, self).__init__()
        self.k = k
        self.copyable_attrs = ['k']

### The fit method

The fit(self, X, y) expects classifier training data:

• X should be a sparse matrix of shape (n_samples, n_features), although for compatibility reasons an array of arrays and a dense matrix are supported.
• y should be a sparse, binary indicator matrix of shape (n_samples, n_labels), with 1 in position i,j when the i-th sample is labeled with label no. j

It should return self after the classifier has been fitted to the training data. It is customary that fit should remember n_labels in some way. In practice we store n_labels as self.label_count in scikit-multilearn classifiers.
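To make the pieces above concrete, here is a minimal, illustrative sketch of how the AssignKBestLabels example could be completed with fit and predict methods. It simply remembers label frequencies and always predicts the k most frequent labels, so it is a toy continuation of the example rather than the library's actual implementation (predict is discussed in the next section):

```python
import numpy as np
from scipy import sparse
from skmultilearn.base import MLClassifierBase


class AssignKBestLabels(MLClassifierBase):
    """Toy classifier: always assigns the k labels most frequent in the training data."""

    def __init__(self, k=None):
        super(AssignKBestLabels, self).__init__()
        self.k = k
        self.copyable_attrs = ['k']

    def fit(self, X, y):
        y = sparse.csr_matrix(y)
        self.label_count = y.shape[1]
        # frequency of each label across the training set
        frequencies = np.asarray(y.sum(axis=0)).ravel()
        self.top_labels_ = np.argsort(frequencies)[::-1][:self.k]
        return self

    def predict(self, X):
        n_samples = X.shape[0]
        prediction = sparse.lil_matrix((n_samples, self.label_count), dtype=int)
        for label in self.top_labels_:
            prediction[:, label] = 1
        return prediction.tocsr()
```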
### The predict method

The predict(self, X) returns a prediction of labels for the samples from X:

• X should be a sparse matrix of shape (n_samples, n_features), although for compatibility reasons an array of arrays and a dense matrix are supported.

The returned value is similar to y in fit. It should be a sparse binary indicator matrix of shape (n_samples, n_labels).

In some cases, while scikit continues to progress towards a complete switch to sparse matrices, it might be needed to convert the sparse matrix to a dense matrix or even an array-like of array-likes. Such is the case for some scoring functions in scikit. This problem should go away in future versions of scikit.

## Selecting the base class

Madjarov et al. divide approaches to multi-label classification into three categories; you should select a scikit-multilearn base class according to the philosophy behind your classifier:

• algorithm adaptation, when a single-label algorithm is directly adapted to the multi-label case, e.g. Decision Trees can be adapted by taking multiple labels into consideration in decision functions; for now the base class for this approach is MLClassifierBase
• problem transformation, when the multi-label problem is transformed to a set of single-label problems, solved there and converted to a multi-label solution afterwards - for this approach we provide a comfortable ProblemTransformationBase base class
• ensemble classification, when multi-label classification is performed by an ensemble of multi-label classifiers to improve performance, overcome overfitting etc. - there are a couple of ensemble classifiers that can serve as base classes, see below

### Problem transformation

The problem transformation approach is centered around the idea of converting a multi-label problem into one or more single-label problems, which are usually solved by single- or multi-class classifiers. Scikit-learn is the de facto standard source of Python implementations of single-label classifiers.

In order to perform the transformation, every problem transformation classifier needs a base classifier. As all classifiers that follow scikit's BaseEstimator are clonable, scikit-multilearn's base class for problem transformation classifiers requires an instance of a base classifier at initialization. Such an instance can be cloned if needed, and its parameters can be set up comfortably.

The biggest problem with joining single-label scikit classifiers with multi-label classifiers is that there exists no way to learn whether a given scikit classifier accepts sparse matrices as input for fit/predict functions. For this reason ProblemTransformationBase requires another parameter - require_dense : [bool, bool] - a list/tuple of two boolean values. If the first one is true, the base classifier expects a dense (scikit-compatible, array-like of array-likes) representation of the sample feature space X. If the second one is true, the target space y is passed to the base classifier as an array-like of numbers. If either of these is false, the corresponding argument is passed as a sparse matrix. If the require_dense argument is not passed, it is set to [false, false] if the base classifier inherits MLClassifierBase and to [true, true] as a fallback otherwise. In short, it assumes a dense representation is required for the base classifier if the base classifier is not a scikit-multilearn classifier.
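The following is a small illustrative helper, an assumption for exposition rather than the library's actual internals, showing how a require_dense-style flag might drive the conversion between sparse and dense representations before data is handed to the base classifier:

```python
import numpy as np
from scipy import sparse


def ensure_input_format(X, require_dense):
    """Return X as a dense array if require_dense is True, else as a CSR sparse matrix."""
    if require_dense:
        return X.toarray() if sparse.issparse(X) else np.asarray(X)
    return X if sparse.issparse(X) else sparse.csr_matrix(X)


# Example: a base classifier that only accepts dense input
X_sparse = sparse.random(10, 5, density=0.3, format='csr')
X_for_base = ensure_input_format(X_sparse, require_dense=True)  # numpy array
```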
### Ensemble classification

Ensemble classification is an approach of transforming a multi-label classification problem into a family (an ensemble) of multi-label subproblems. In the case where your classifier concentrates on clustering the label space, you should look into the existing clustering schemes in the skmultilearn.ensemble module as base classes. In most cases you can take an existing general scheme, such as LabelSpacePartitioningClassifier, which partitions a label space using a clusterer class that implements the LabelSpaceClustererBase interface.

## Unit testing

Scikit-multilearn provides a base unit test class for testing classifiers. Please check skmultilearn.tests.classifier_basetest for a general framework for testing the multi-label classifier. Currently the tests cover the following capabilities of the classifier:

• whether the classifier works with dense/sparse input data: ClassifierBaseTest.assertClassifierWorksWithSparsity()
• whether it is clonable and works with scikit-learn's cross-validation classes: ClassifierBaseTest.assertClassifierWorksWithCV()
# Basic Special Relativity Question

Discussion in 'Physics & Math' started by Fednis48, Apr 22, 2013.

1. ### Pete - It's not rocket surgery (Registered Senior Member, Messages: 10,166)

The Lorentz transform tells us what would be measured in $S''$. Posting the [post=3067875]maths[/post] does. Your equations are a subset of my equations:

\begin{align}
R''(k) &= \left(\gamma'\left(t' - \tfrac{Vk}{c^2}\right),\ \gamma'(k - Vt'),\ -ut' \right) && \begin{cases} 0<k<L \\ -\tfrac{H}{u}<t'<0 \end{cases} \\
&= \left(t'',\ \tfrac{k}{\gamma'} - Vt'',\ -u\left(\tfrac{t''}{\gamma'} + \tfrac{Vk}{c^2}\right) \right) && \begin{cases} 0<k<L \\ -\gamma'\left(\tfrac{H}{u} + \tfrac{Vk}{c^2}\right) < t'' < -\tfrac{Vk\gamma'}{c^2} \end{cases} \\
R''(k) &= \left(\gamma'\left(t' - \tfrac{Vk}{c^2}\right),\ \gamma'(k - Vt'),\ 0 \right) && \begin{cases} 0<k<L \\ t' \ge 0 \end{cases} \\
&= \left(t'',\ \tfrac{k}{\gamma'} - Vt'',\ 0 \right) && \begin{cases} 0<k<L \\ t'' \ge -\tfrac{Vk\gamma'}{c^2} \end{cases}
\end{align}

They predict that the rod bends in $S''$ and not in $S'$, and that's OK - it does not imply any absolute physical contradiction.

Length contracted in the lab reference frame. The proper length is obviously unchanged. Length contraction can't crush a brittle rod, just like the transient bending of the rod in $S''$ can't break the rod. In that sense, the sense used by Gron and Johanessen, they're both not 'physical effects.' This semantic sidetrack is done. If you want to discuss the physicality of length contraction, open another thread.

3. ### Tach - Banned (Messages: 5,265)

So, measure it. I challenge you to put together a valid experimental setup. You can't. Measure the timestamps $t''=-\frac{Vk\gamma'}{c^2}$ in the following:

\begin{align}
R''(k) &= \left(\gamma'\left(t' - \tfrac{Vk}{c^2}\right),\ \gamma'(k - Vt'),\ -ut' \right) && \begin{cases} 0<k<L \\ -\tfrac{H}{u}<t'<0 \end{cases} \\
&= \left(t'',\ \tfrac{k}{\gamma'} - Vt'',\ -u\left(\tfrac{t''}{\gamma'} + \tfrac{Vk}{c^2}\right) \right) && \begin{cases} 0<k<L \\ -\gamma'\left(\tfrac{H}{u} + \tfrac{Vk}{c^2}\right) < t'' < -\tfrac{Vk\gamma'}{c^2} \end{cases} \\
R''(k) &= \left(\gamma'\left(t' - \tfrac{Vk}{c^2}\right),\ \gamma'(k - Vt'),\ 0 \right) && \begin{cases} 0<k<L \\ t' \ge 0 \end{cases} \\
&= \left(t'',\ \tfrac{k}{\gamma'} - Vt'',\ 0 \right) && \begin{cases} 0<k<L \\ t'' \ge -\tfrac{Vk\gamma'}{c^2} \end{cases}
\end{align}

No, they don't. You keep "wishing" that the "rod bends"; there is no measurement that would confirm that. Neither is the rod bent in the train car frame.

but it flattens the ions and it packs them tighter, a measurable effect

Nice try, the length contraction effect is measurable in the lab frame. Now try that with the "rod bending" in the lab frame. Time for you to stop playing games, Pete.

5. ### Undefined - Banned (Messages: 1,695)

Is this guy even more stupid than previously imagined? First he says this:

Then in the same post he says this:

Which depend on animations! No actual "proof" of anything from Tach's links, just more demonstrations of his stupid double standard on what is "proof" from others and what is "proof" from himself! Is this guy the site clown or something? No one could be so stupid to make such a self contradicting post like that for real, could they?

7. ### Tach - Banned (Messages: 5,265)

Nope, they don't, they depend on actual measurements. You need to try understanding the links, not jump at soundbites like "animations".
The people conducting the experiments really measured the effects and generated the animations in order to illustrate the measured effects, so people could understand what went on in the experiment they had already conducted.

8. ### Undefined - Banned (Messages: 1,695)

Are you for real? Measuring "effects" and "interpreting" these effects are two different things. The "effects" only show "collision". The "results" are then "interpreted" via a theoretical framework of assumptions which are used to "model" via "simulations". Nowhere along the way does any "proof" of SR theoretical length contraction arise that is independent of "simulations" of "interpretations" of "assumptions" etc etc. You need to go to a school for scientists and learn the difference before you try to "correct" other people again.

9. ### Tach - Banned (Messages: 5,265)

Did I say any different? The scientists that put together the two websites (that, incidentally, agree with each other) first developed the theoretical foundations, then did the experimental measurements, and only after that they produced the explanations that include the animations. This is how science is conducted. Why are you so twisted in your knickers? I am just clearing up your misconceptions.

10. ### Undefined - Banned (Messages: 1,695)

But it's not the "proof" you were claiming it as, is it? It's theoretical interpretation of something. Epicycles were theoretical interpretations of something, but they too weren't "proof" of Earth as the center of the universe. Take the "hit" now before you dig yourself deeper, Tach. Admit you have no "proof" of what you claimed to Pete, and just continue the discussion with him in a better frame of mind so that I can improve my naive understandings properly without all the "muddy waters" you stir up. And stop playing games if you really want to be a scientist one day.

11. ### Tach - Banned (Messages: 5,265)

For the mainstream people, it is a proof. For crackpots, not so much <shrug>

12. ### Undefined - Banned (Messages: 1,695)

An animation of a simulation of a set of assumptions used to theoretically interpret something is sufficient "proof" for some people that the actual physical effect is as interpreted? Not very "scientific" if you ask me. But then I am naive enough that my mind won't accept or equate such things as "proof" of anything physically true until better support for that claim is made available. Have you any?

13. ### Neddy Bate - Valued Senior Member (Messages: 1,484)

Tach, I have a sincere question for you. Earlier in the thread you said something about the coordinate times being just labels. If the endpoints of the rod impact the floor simultaneously in both frames (as you claim), then what would be the purpose of "labeling" two different times? Why would relativity require that, rather than just labeling both times the same? I'd like to understand that better.

14. ### Tach - Banned (Messages: 5,265)

I didn't claim that, you keep trying this cheap trick and I keep correcting you. Test for you: what did I say exactly about the impact?

15. ### Tach - Banned (Messages: 5,265)

That's not what I said, this is your fringe take of what you think I said; try reading again.

16. ### Undefined - Banned (Messages: 1,695)

It's what your "proof" links amounted to when looked at closely. So by extension it was what you were saying, or you wouldn't have offered them to support your claims to Pete. Try reading your linked "proofs" again.

17. ### Neddy Bate - Valued Senior Member (Messages: 1,484)

I'm not trying any kind of a trick.
You said this:

So now I am asking, if the rod does not hit sequentially, then what is the purpose or meaning of the clock labels which do not match?

18. ### Tach - Banned (Messages: 5,265)

Much better, you found the actual way I phrased things. The timestamps (labels) $t''=-\frac{Vk\gamma'}{c^2}$ applied in $S''$ have no physical meaning, they aren't measurable.

19. ### Neddy Bate - Valued Senior Member (Messages: 1,484)

What I am asking is why would relativity give us equations to calculate times which are meaningless and non-measurable? Why not just say t''=t' instead? Are the labels good for anything at all?

20. ### Tach - Banned (Messages: 5,265)

Well, not all mathematical concepts are measurable. A very good example is the amount of RoS, it isn't measurable. Yet, the theory predicts it.

Because this is a reductionist, incorrect way of thinking, known to be wrong since Einstein discovered RoS.

Sure, they allow us to organize our thoughts, convey ideas, etc. Unfortunately, they aren't measurable entities.

21. ### Pete - It's not rocket surgery (Registered Senior Member, Messages: 10,166)

More games? I'm guessing that the point you're not making is that the animation values don't match the equation. This is because in the animation, I have k=0 at x'=-3, instead of at x'=0 (the rod is 6 units long, the middle of the rod is at x'=0). Here it is again with k=0 at x'=0:

You can see a slowed-down, collision-only animation here: RodCollisionSlow.gif

And the individual collision frames here: http://sdrv.ms/10nGCzm

And here are the $t''$ values for the collision:

(k, t'')
(0.00, 0)
(0.75, -1)
(1.50, -2)
(2.25, -3)
(3.00, -4)
(3.75, -5)
(4.50, -6)
(5.25, -7)
(6.00, -8)

Correct. A measurable effect in the lab frame, just like the rod bending is a measurable effect in the platform frame, according to the Lorentz transform.

22. ### Neddy Bate - Valued Senior Member (Messages: 1,484)

In Einstein's thought experiment with the train and the two lightning strikes, the observer at the midpoint of the train sees the lightning strikes at two different times. He could measure the time interval between those two strikes, then later when he is at rest with the platform, he could show his measurement to the observer at the midpoint of the platform. The observer at the midpoint of the platform would say, "I saw the two lightning strikes simultaneously". Isn't that an example of measuring RoS?

But you say the endpoints of the rod impact the floor simultaneously in both frames. That means there is one time when the endpoints impact in the train frame (t') and there is one time when the endpoints impact in the platform frame (t''). It seems to me that you are saying t''=t' are the real times of impact. I don't understand how meaningless, unmeasurable times would help with that at all.

23. ### Pete - It's not rocket surgery (Registered Senior Member, Messages: 10,166)

That would imply that the train is not length contracted in $S''$. Please demonstrate length contraction in $S''$ without using $t''$.
# Asia Hong Kong Regional Contest 2016

#### Start: 2016-11-06 03:00 CET
#### End: 2016-11-06 08:00 CET

# Problem J: Taboo

Taboo is a popular party game. In this game one player, the Clue Giver, prompts his/her teammates to guess a keyword by giving clues. The Clue Giver is also given a list of taboo strings that must not appear in the clues. For example, if the keyword is "Bruce Lee", the famous kung-fu star, then the taboo strings may be "actor", "kung-fu", "fighting", "martial arts" and "The Game of Death" (Bruce Lee's final film). The Clue Giver may try such clues as "Fist of Fury star" and "Jeet Kune Do master" to avoid the taboo.

Taboo strings bring challenges and fun to the guessing game. Short clues are preferred, but now you are interested in the opposite: what is the longest clue? Given $N$ taboo strings $s_1, \dots , s_N$, what is the longest clue string $s$ such that none of $s_1, \dots , s_N$ appears as a substring of $s$? For simplicity, all taboo strings and your clue are represented as binary strings consisting only of 0's and 1's.

## Input

The first line contains an integer, $N$, the number of taboo strings ($1 \leq N \leq 15\,000$). The following $N$ lines each contain a non-empty binary string $s_i$, for $1 \leq i \leq N$. The sum of lengths of $s_1, \dots , s_N$ will be at most $200\,000$.

## Output

If your clue can be arbitrarily long, output -1. Otherwise, output a line containing the longest binary string that does not contain $s_1, \dots , s_N$ as a substring. If there is more than one such longest string, output the one that is also smallest in lexicographic order.

Sample Input 1:
5
00
01
10
110
111

Sample Output 1:
11

Sample Input 2:
3
00
01
10

Sample Output 2:
-1
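As a quick sanity check of the samples (not part of the official problem statement, and far too slow for the real constraints), a brute-force sketch like the following confirms that for the first sample "11" is the longest valid clue, and that for the second sample the clue can be made arbitrarily long:

```python
from itertools import product

def valid(clue, taboos):
    """True if no taboo string appears as a substring of the clue."""
    return not any(t in clue for t in taboos)

def longest_clue_bruteforce(taboos, max_len=15):
    """Exhaustive search over short binary strings; if a clue of length max_len
    still exists, treat the answer as 'arbitrarily long' and return -1."""
    best = ""
    for length in range(1, max_len + 1):
        candidates = [c for c in ("".join(bits) for bits in product("01", repeat=length))
                      if valid(c, taboos)]
        if not candidates:
            return best          # no longer clue exists; best holds the answer
        best = min(candidates)   # lexicographically smallest among the longest so far
    return -1

print(longest_clue_bruteforce(["00", "01", "10", "110", "111"]))  # -> 11
print(longest_clue_bruteforce(["00", "01", "10"]))                # -> -1
```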
Chapter 21: Indications for Electrophysiologic Testing

During resting rhythm, measurements of conduction through the atrium (PA interval), atrioventricular (AV) node (AH interval), and His-Purkinje system (HV interval) are recorded (Fig. 1). This assessment is followed by atrial pacing, which allows for assessment of sinus node function through the sinus node recovery time (Fig. 2). This is expressed as the longest return cycle after the cessation of atrial pacing and is corrected for the underlying sinus rate. Corrected sinus node recovery times in excess of 525 milliseconds are indicative of abnormal sinus node automaticity. This finding is highly specific for sinus node dysfunction, but this and other techniques are not sensitive predictors and identify only about 50% of cases of proven sinus node disease.
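The correction mentioned above is commonly done by subtracting the baseline sinus cycle length from the longest post-pacing pause; the snippet below is a minimal illustration of that calculation, with numeric values invented for the example rather than taken from the chapter:

```python
def corrected_snrt(longest_return_cycle_ms, baseline_sinus_cycle_ms):
    """Corrected sinus node recovery time: the longest pause after pacing
    minus the baseline sinus cycle length (both in milliseconds)."""
    return longest_return_cycle_ms - baseline_sinus_cycle_ms

# Hypothetical example values:
csnrt = corrected_snrt(longest_return_cycle_ms=1600, baseline_sinus_cycle_ms=900)
print(csnrt, "ms:", "abnormal" if csnrt > 525 else "within normal limits")
```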
# Math Savant

### Staff: Mentor

Meet Daniel Tammet, a 27-year-old math and memory wizard. He can do things with numbers that will truly amaze you. He is a savant... with a difference. Unlike most savants, he shows no obvious mental disability, and most importantly, he can describe his own thought process. Join correspondent Morley Safer as he explores the extraordinary life and mind of Daniel Tammet.

The videos can be slow to play, be patient.

http://60minutes.yahoo.com/segment/44/brain_man

Last edited: Mar 2, 2007

4. ### Kurdt - Staff Emeritus (Messages: 4,941)

Yeah this guy is good. And rather sexy if you're a lady and can stand a man with OCD.

5. ### turbo (Messages: 7,366)

There is a difference between functional, intelligent, and smart. I know a lot of highly functional people who have neither the intelligence nor the smarts to roll with the punches in situations where intelligence and smarts are required, and I know a hell of a lot of "intelligent" people who are not smart enough to ask for the creative help that can rescue them from failure when "smarts" are required. If someone can spout numerical sequences or dates on command, they have a "gift". If someone can refer to texts, papers, etc, and come up with a workable solution to a problem, they have an adequate education. If another person can come in and evaluate the process and put their finger on WHY the engineered process is failing, they are a troubleshooter and are exhibiting "smarts". This is why industries pay serious money to proven talent. I cannot recite pi to x decimal places, but I have solved some very vexing ongoing production problems on paper machines, often within an hour or two, and often ones which have been costing the mills tens of thousands of dollars per day in downtime for many weeks.

6. ### Schrodinger's Dog

That's great, I hope he'll make an impression on the world; the world needs clever people. The more the merrier.

7. ### JasonRox (Messages: 2,327)

I think we are aware of this. I usually categorize it in two parts... street smart or book smart.

8. ### Cyrus

Well, that last thread was about the world's smartest man. He might be a bit nutty, but he is no doubt very very smart.

### Staff: Mentor

You mean besides the OCD and Autism, right? He's functional, but with some difficulty. Still, he's pretty amazing.

Last edited: Mar 3, 2007

10. ### larkspur (Messages: 791)

I know quite a few people who are book smart but life stupid.......

11. ### Schrodinger's Dog

The only thing that worries me is the number of Savants who ever make any sort of contribution to science; it just seems that the greatest scientific minds are not mentally incapacitated by autism or OCD, etc. So I worry that his genius for numbers will never get any practical applications.

12. ### moe darklight (Messages: 411)

lol, you just reminded me how a while ago my dad and I came up with this theory that intelligence and stupidity are completely independent of each other; that a person can have high levels of both... think about it: how many incredibly intelligent, yet incredibly stupid/ignorant people have you met? and how many not-intelligent yet not-stupid either people have you met?

the guy on the video is insane with numbers! holy cow! ... I like the part where he describes how he sees numbers as colors and shapes... maybe from a young age when he first started dealing with numbers his brain used the visual parts to understand them, or something like that... they should really do some scans of his brain while he's at work to learn more.
I'm going to try putting a shape and color to every number in my head and see if it helps me in any way (or the opposite)... I'll try it for a week, it'll be a fun experiment :) ... I can see how it could help in memorizing numbers, I don't know how he uses it for doing calculations though... do the shapes and colors mix? ...

13. ### Kurdt - Staff Emeritus (Messages: 4,941)

Of course the visualisation of numbers is a good way of remembering them, as the world memory champion (Dominic O'Brien) uses this technique for memorising decks of cards for example, but he imagines a journey and each card corresponds to someone he meets. It's quite a hard technique to start off with, but once you learn it the results speak for themselves. This guy obviously has a natural visualisation ability rather than having to learn it.

### Staff: Mentor

I changed the title from what the original news blurb had. I should have known people would get their feathers ruffled. :tongue2: The guy has an uncanny math and memorization ability. He memorized a list of over 22,500 numbers in a couple of weeks and recited them back without a single mistake; it took him 5 hours to recite them all. That's freaking bizarre. Seeing numbers as colors would make him a synesthete. And it's not just math, he learned conversational Icelandic in a week. I'm sure they must be studying his brain.

15. ### turbo (Messages: 7,366)

He does have uncanny abilities - they seem to point to eidetic memory. His ability to channel his talent to linguistics may have been aided by an ability to visually associate Icelandic words with English ones. I am not familiar with Icelandic, but if sentences can be structured grammatically using common rules for English, having an eidetic Icelandic/English dictionary in your brain would get you most of the way there.

Last edited: Mar 3, 2007

16. ### tehno (Messages: 363)

Just a huge memorizing ability! From the title of the thread and this post it is obvious that you don't know what math is. What he is doing isn't math at all. Capability of memorizing 22,500 numbers, or even performing accurate calculations as fast as a digital calculator, is something else, not an extraordinary math talent. Matter of fact, some savants are known for their skills of finding huge primes without knowing how to solve very simple (elementary school) math problems. A good illustration is the movie "Rain Man". Thanks for bringing up this subject, however. I find it very interesting. One more example of how great the mystery of a human mind is.

17. ### Curious3141 (Messages: 2,970)

I think you're being unduly harsh on Evo. There are many aspects to mathematical ability. Certainly, rapidity in arithmetic calculations counts, because arithmetic is a subset of mathematics. Memory for numbers (numerical memory) can be similarly justified as mathematical proficiency, because numerals are indeed mathematical constructs. He may lack extraordinary ability in mathematical abstraction, which is what you may consider to be "true mathematical ability", but that doesn't mean that what he is capable of should be considered non-mathematical.

18. ### Kurdt - Staff Emeritus (Messages: 4,941)

He does speak many other languages as well, with grammar structures completely different to that of English.

19. ### tehno (Messages: 363)

How much? Haven't seen the "Rain Man" movie, ha? BTW, "arithmetic is a subset of mathematics" is a very brilliant definition of arithmetic. I believe mathematicians would be very satisfied with it.

20.
### Kurdt - Staff Emeritus (Messages: 4,941)

I think if you're going to have this argument, tehno should define his terms, more specifically mathematics.

21. ### tehno (Messages: 363)

I will not define it. I don't know what mathematics is. Do you? Perhaps I'm ignorant or just dumb, but I don't know how even a set should be rigorously defined.
# Colossal switchable photocurrents in topological Janus transition metal dichalcogenides

## Abstract

Nonlinear optical properties, such as bulk photovoltaic effects, possess great potential in energy harvesting, photodetection, rectification, etc. To enable efficient light–current conversion, materials with strong photo-responsivity are highly desirable. In this work, we predict that monolayer Janus transition metal dichalcogenides (JTMDs) in the 1T′ phase possess colossal nonlinear photoconductivity owing to their topological band mixing, strong inversion symmetry breaking, and small electronic bandgap. 1T′ JTMDs have inverted bandgaps on the order of 10 meV and are exceptionally responsive to light in the terahertz (THz) range. By first-principles calculations, we reveal that 1T′ JTMDs possess shift current (SC) conductivity as large as 2300 nm μA V⁻², equivalent to a photo-responsivity of 2800 mA/W. The circular current (CC) conductivity of 1T′ JTMDs is as large as 10⁴ nm μA V⁻². These remarkable photo-responsivities indicate that the 1T′ JTMDs can serve as efficient photodetectors in the THz range. We also find that external stimuli such as in-plane strain and an out-of-plane electric field can induce topological phase transitions in 1T′ JTMDs and that the SC can abruptly flip its direction. The abrupt change of the nonlinear photocurrent can be used to characterize the topological transition and has potential applications in 2D optomechanics and nonlinear optoelectronics.

## Introduction

With the development of strong light sources, nonlinear optical (NLO) materials have the potential to engender new physical effects. Recently, the generation of nonlinear direct photocurrent upon light illumination has evoked great interest. This is known as the bulk photovoltaic effect (BPVE)1. The photocurrent under linearly polarized light, or the shift current (SC), has been theoretically predicted and experimentally observed in materials such as multiferroic perovskites2,3,4,5,6,7 and monolayer monochalcogenides8,9,10. The BPVE is a promising alternative source of photocurrent for energy harvesting and sensing. Compared with the conventional solar cells based on p–n junctions, BPVE is not constrained by the Shockley–Queisser limit11 and can produce open-circuit voltage above the bandgap4. Besides SC, the circular photogalvanic effect12,13,14,15,16 that generates circular current (CC) (aka injection current) under circularly polarized light is another nonlinear photocurrent effect. In time-reversal invariant systems, SC is the response under linearly polarized light, while CC is the response under circularly polarized light. The direction of CC can be effectively controlled by the handedness of the circularly polarized light. The nonlinear photocurrent effects can be utilized for photodetection, especially in the mid-infrared (MIR) to terahertz (THz) regions, where efficient photodetectors are highly desirable. Compared with traditional infrared detectors such as the MCT (Hg$_x$Cd$_{1-x}$Te) detector, photodetectors based on nonlinear photocurrent do not require biasing, hence the dark current can be minimized, which is advantageous especially at elevated temperatures.
Particularly, topological materials are promising candidates for NLO photodetection. For example, Weyl semimetals (WSMs) have singular Berry curvature around the Weyl nodes, leading to strong linear and nonlinear optical responses17,18,19,20,21,22. Recently, the unoptimized third-order photo-responsivity of the WSM TaIrTe4 was reported to be 130.2 mA W−1 under 4 μm wavelength illumination at room temperature21, comparable with that of state-of-the-art MCT detectors (600 mA W−1) operating at low temperature23,24. Meanwhile, many other WSMs are predicted to have even larger second-order photo-responsivity22. Compared with WSMs in three dimensions (3D), which have vanishing bandgap and may suffer from overheating problems under strong light, two-dimensional (2D) topological insulators (TIs) with finite bandgaps on the order of 0.01–0.1 eV (within the MIR/THz range) may be a better choice, thanks to their good optical accessibility and easy band dispersion manipulation. (As a matter of nomenclature, despite small bandgap values ~kBTroom, we still call these materials “insulators” due to the literature convention of TIs.) Due to the band inversion, TIs also have augmented Berry connections near the bandgap, which could enhance their optical responses25,26. In this article, we first use a low-energy k · p model to illustrate the guiding principles for designing materials with high nonlinear photoresponse, namely, band inversion, strong spatial inversion asymmetry, and small electronic bandgap. Then, with ab initio calculations, we predict that Janus transition metal dichalcogenides (JTMDs) in the 1T′ phase possess giant nonlinear photoconductivity in the THz range. Being TIs27, 1T′ JTMDs enjoy enhanced optical responses due to the band inversion, and the maximum SC conductivity is found to be around 2300 nm μA V−2 in the THz range. Such colossal SC conductivity is about tenfold larger than that of many WSMs22 and other non-centrosymmetric 2D materials, such as 2H TMDs28 and monochalcogenides8,9,10. The CC conductivity of 1T′ JTMDs is also extremely large. The peak value of the CC conductivity is around 8.5 × 10^3 nm μA V−2, assuming a carrier lifetime of 0.2 ps. Owing to the small bandgap (~10 meV), the SC conductivity peaks lie within the THz region and quickly decay with increasing light frequency. The weak responsivity to light at higher frequencies renders 1T′ JTMDs selective photodetectors in the THz range. Furthermore, we find that the band topology and Rashba splitting of the valence and conduction bands (VB and CB, respectively) of 1T′ JTMDs can be effectively switched/tuned by small external stimuli such as in-plane strain or an out-of-plane electric field. We show that such a topological phase transition could lead to a sign change of the SC conductivity (and the SC direction) while maintaining its large magnitude. Such a colossal and switchable photocurrent may find applications in 2D optomechanics, nonlinear optoelectronics, etc. In addition, by tuning the Fermi level, the photoconductivity can be further enhanced. Besides nonlinear photoconductivity, other NLO effects, such as second-harmonic generation, are boosted in JTMDs as well.
## Results

### A minimal k · p model: guiding principles
In order to better illustrate the guiding principles for designing materials with strong nonlinear photoresponses, we first adopt a generic and minimal two-band model that can describe the band-inversion process26 H0(k) = d(k) · σ, where σ = [σx,σy,σz] are Pauli matrices, and $${\mathbf{d}}\left( {\mathbf{k}} \right) = \left[ {Ak_x,\,Ak_y,M - B\left( {k_x^2 + k_y^2} \right)} \right]$$, with M, A, and B as model parameters. Without loss of generality, we assume A, B > 0 here. When M > 0, the mass term $$M - B(k_x^2 + k_y^2)$$ is positive when $$k_x^2 + k_y^2$$ is small and becomes negative when $$k_x^2 + k_y^2$$ is large. Hence, there can be a band inversion. On the other hand, when M < 0, the mass term $$M - B(k_x^2 + k_y^2)$$ is always negative, and there is no band inversion. In order to obtain finite NLO current responses, the inversion symmetry needs to be broken. Hence, we add an inversion symmetry breaking term HIB = μσx in the model Hamiltonian, where μ is a tunable parameter that controls the strength of the inversion asymmetry and can be likened to, e.g., a static electric field. Finally, px and py are the momentum operators along the x and y directions. In ref. 26, it was demonstrated that band inversion (M > 0) would boost the linear optical response, because band inversion enhances the interband transition matrix 〈c | r | v〉 (Fig. 1 therein), where |c〉 and |v〉 are the wavefunctions of the CB and VB, respectively, and r is the position operator. This is due to the orbital character mixture when band inversion occurs (e.g., both p and d orbital components are mixed in the VB and CB of 1T′ TMD monolayers27 due to band inversion). Note that |〈c | r | v〉| determines the response strength of the SC and CC, thus it should be expected that the band inversion would boost the nonlinear photocurrent responses as well. Then we can calculate the SC response function $$\sigma _{xx}^x$$ (we will elaborate on the formula for calculating the SC conductivity later, as in Eq. (2)) for the model Hamiltonian above. We first set A = 2, B = 1, μ = 0.1, and vary M. The results are shown in Fig. 1a. One can see that when M is positive (with band inversion, blue curve), $$\left| {\sigma _{xx}^x} \right|$$ is ~3 times larger than that when M is negative (no band inversion, red curves) with the same absolute value |M|. This clearly shows that band inversion can boost the nonlinear photocurrent responses for low frequencies near the bandgap. Besides, one can see that, for positive and negative M, $$\sigma _{xx}^x$$ has different signs, indicating that the photocurrents flow in opposite directions25. Another remarkable feature is that, when |M| becomes smaller, the magnitude of the photoconductivity would increase, and there is a rough scaling relation $$\left| {\sigma _{xx}^x} \right|\sim 1/|M|$$. Note that, in the current model, |M| measures the bandgap (Eg ≈ 2|M|). Hence, we suggest that small bandgaps would also boost the nonlinear photoconductivity. We would like to note again that it is the band inversion, rather than the topological nature, that enhances the nonlinear photocurrent. Materials with band inversion can be topologically trivial. Furthermore, the magnitude of the photocurrent response is also dependent on the strength of inversion asymmetry. To elucidate this effect, we fix A = 2, B = 1, M = 1 and vary μ. The results are shown in Fig. 1b. One can see that $$\sigma _{xx}^x$$ scales approximately linearly with μ.
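For readers who want to reproduce the qualitative behavior of this model, the following is a minimal numerical sketch (our own, in Python/NumPy; it uses the Hamiltonian and parameter values quoted above, but it does not compute the SC conductivity itself, which additionally requires the matrix elements entering Eq. (2) below).

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(kx, ky, M, A=2.0, B=1.0, mu=0.1):
    """H(k) = d(k) . sigma + mu * sigma_x, with d = [A kx, A ky, M - B(kx^2 + ky^2)]."""
    dx, dy = A * kx, A * ky
    dz = M - B * (kx**2 + ky**2)
    return dx * sx + dy * sy + dz * sz + mu * sx

def band_info(M, A=2.0, B=1.0, mu=0.1, kmax=2.0, nk=4001):
    """Scan k along kx (ky = 0); return the minimal direct gap and whether the
    mass term d_z changes sign over the window (i.e., whether the bands invert)."""
    gap, dz_min, dz_max = np.inf, np.inf, -np.inf
    for k in np.linspace(-kmax, kmax, nk):
        e = np.linalg.eigvalsh(hamiltonian(k, 0.0, M, A, B, mu))
        gap = min(gap, e[1] - e[0])
        dz = M - B * k**2
        dz_min, dz_max = min(dz_min, dz), max(dz_max, dz)
    return gap, (dz_min < 0.0 < dz_max)

for M in (+0.5, +1.0, -1.0):
    gap, inverted = band_info(M)
    print(f"M = {M:+.1f}: minimal gap = {gap:.3f}, band inversion: {inverted}")
```

Running it confirms the two ingredients used in the argument above: the mass term changes sign only for M > 0, and the minimal gap tracks Eg ≈ 2|M| irrespective of the sign of M.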
The model above suggests that materials with (1) band inversion, (2) strong spatial inversion asymmetry, and (3) small electronic bandgaps may well have large nonlinear photoconductivity. Guided by these principles, we predict that monolayers of JTMDs (denoted as MXY, M = Mo,W, and X,Y = S,Se,Te) in their 1T′ phase possess colossal nonlinear photocurrent conductivity, as we will show in the following. In addition, we would like to remark that the guiding principles stated above are generic regarding all linear and nonlinear optical effects that depend on electron interband transitions, such as second-harmonic generation, etc. ### Monolayer JTMDs: atomic and electronic structures The monolayer JTMDs are composed of three atomic layers: the middle layer of transition metals is sandwiched by two side layers with different chalcogen atoms (Fig. 2). Inherited from pristine TMDs (PTMDs), JTMDs also have different crystalline phase structures. Among them, the 2H and 1T′ are two (meta-)stable structures. The 2H phase JTMDs (space group P3m1, Fig. 2a) have a quasi-Bernal (ABA′) stacking pattern with three-fold in-plane rotational symmetry and have been successfully fabricated recently29,30,31. On the other hand, the 1T′ phase (space group Pm, Fig. 2b) has an ABC stacking pattern, and the in-plane rotational symmetry is broken by a Peierls distortion along the x-axis. With fully relaxed lattice constants, all six MXY have lower energy in 2H phase than in 1T′ phase32, with a small energy difference (0.1 eV per formula unit). Similar to the PTMDs, the relative stability of these two phases can be effectively tuned by strain (see Supplementary Fig. 4). For example, we plot the phase diagram of WSeTe in Fig. 2c, which clearly suggests a tensile strain of <1% along the x-axis can render the 1T′ phase more stable. Also, the energy barriers between 2H and 1T′ phases are high (1 eV per formula unit), thus 1T′ JTMDs are fairly stable even in the strain-free state. The JTMDs inherit many salient properties of PTMDs. As the top–bottom chalcogen layers break the inversion (mirror) symmetry of the 1T′ (2H) phase, JTMDs possess extra properties apart from those of PTMDs, such as larger Rashba spin splitting33,34, more efficient charge separation35, etc. The 1T′ PTMDs are Z2 TIs27 with small bandgaps on the order of 10 meV, indicating a strong optoelectronic coupling in the THz range, because both inverted band structure and small bandgaps would enhance the interband transitions. However, due to the centrosymmetry, the second-order NLO effects are forbidden for 1T′ PTMDs. On the contrary, 1T′ JTMDs are inherently non-centrosymmetric owing to the two different chalcogen layers, and giant second-order NLO effects can be unleashed. Considering MoSSe as an example, we show the electronic properties of 1T′ JTMDs. The band structure of MoSSe is shown in Fig. 3a. Like PTMDs27, the metal d-orbitals and chalcogen p-orbitals are inverted around the Γ point, and the inverted bandgap is around 0.8 eV. The fundamental bandgaps are along the Γ–Y line (±Λ point, inset of Fig. 3b) with a magnitude of Eg ≈ 4 meV. We find that the fundamental bandgaps of all six 1T′ JTMDs lie in the range of 1–50 meV, corresponding to the THz range (Fig. 3b). Interestingly, despite the band inversion around the Γ point, not all 1T′ JTMD are topologically nontrivial. With fully relaxed atomic structures, MSSe and MSeTe have Z2 = 0 while MSTe have Z2 = 1 (M = W,Mo. Z2 = 0 and 1 indicate trivial and nontrivial band topology, respectively). 
This is because the large Rashba splitting from the inversion symmetry breaking could change the band topology by remixing the wavefunctions around the ±Λ points. As we will show later, both in-plane strain and an out-of-plane electric field can induce a topological phase transition by closing and reopening the fundamental bandgap36,37,38,39. Regardless of the band topology (Z2 number), the band inversion around the Γ point gives rise to a strong wavefunction mixing between VBs and CBs26, which could significantly boost the linear and nonlinear responses.

### SC and CC
In materials without inversion symmetry $${\cal{P}}$$, NLO direct currents (dcs) can be generated upon photo-illumination. This current can be divided into two parts, the SC jSC and the CC jCC $$\begin{array}{*{20}{c}} {j_{{\mathrm{SC}}}^c = 2\sigma _{ab}^c\left( {0;\omega , - \omega } \right)E^a\left( \omega \right)E^b\left( { - \omega } \right)} \\ {\frac{{{\mathrm{d}}j_{{\mathrm{CC}}}^c}}{{{\mathrm{d}}t}} = 2\eta _{ab}^c(0;\omega , - \omega )E^a\left( \omega \right)E^b( - \omega )} \end{array}$$ (1) where a, b, c are Cartesian indices and E(ω) is the Fourier component of the optical electric field at angular frequency ω. Equation (1) indicates that, when the optical electric field has both a and b components (a and b can be the same), there will be a dc along the cth direction when $$\sigma _{ab}^c$$/$$\eta _{ab}^c$$ is non-zero. In materials with time-reversal symmetry $${\cal{T}}$$, the response functions within the independent particle approximation in clean, cold semiconductors are40 $$\begin{array}{*{20}{c}} {\sigma _{ab}^c\left( {0;\omega , - \omega } \right) = - \frac{{e^3}}{{2\hslash ^2}}{\int} {\frac{{{\mathrm{d}}{\mathbf{k}}}}{{\left( {2\pi } \right)^3}}\mathop {\sum}\limits_{n,m} {f_{nm}\frac{{r_{mn}^ar_{nm;c}^b + r_{mn}^br_{nm;c}^a}}{{\omega _{mn} - \omega - i/\tau }}} } } \\ {\eta _{ab}^c\left( {0;\omega , - \omega } \right) = - \frac{{ie^3}}{{2\hslash ^2}}{\int} {\frac{{{\mathrm{d}}{\mathbf{k}}}}{{\left( {2\pi } \right)^3}}} \mathop {\sum}\limits_{n,m} {f_{nm}\frac{{{{\Delta }}_{mn}^c\left[ {r_{mn}^a,\,r_{nm}^b} \right]}}{{\omega _{mn} - \omega - i/\tau }}} } \end{array}$$ (2) Here all dependencies on k are omitted. τ is the carrier lifetime. m, n are band indices, while fmn ≡ fm − fn, ωmn ≡ ωm − ωn, and Δmn ≡ vmm − vnn are the differences in occupation number, energy, and band velocity between bands n and m, respectively. rmn ≡ i〈m | ∂k | n〉 is the interband Berry connection, $$\left[{r_{mn}^a,\,r_{nm}^b}\right]=r_{mn}^ar_{nm}^b-r_{mn}^br_{nm}^a$$ is the interband Berry curvature, while rmn;c is the generalized gauge covariant derivative of rmn, defined as $$r_{mn;c}^b = \frac{{dr_{mn}^b}}{{dk_c}} - i\left({\xi _{mm}^c - \xi _{nn}^c}\right)r_{mn}^b,$$ where ξmm = i〈um | ∂k | um〉 is the intraband Berry connection and | um〉 is the periodic part of the wavefunction. Equation (2) differs slightly from the expressions in ref. 40 by explicitly including the τ-dependence. Here, for simplicity, we assume the carrier lifetime τ is mode independent and takes a uniform value of τ = 0.2 ps. When the carrier lifetime satisfies τ ≫ ℏ/Eg, the i/τ term in the denominator of Eq. (2) can be neglected. In this case, $$\sigma _{ab}^c\left( {0;\omega , - \omega } \right)$$ is purely real, while $$\eta _{ab}^c\left( {0;\omega , - \omega } \right)$$ is purely imaginary.
Considering that the dc should be a real quantity, Ea and Eb should have 0 ($$\frac{\pi }{2}$$) phase difference to yield non-vanishing SC (CC), which indicates that SC and CC are responses under linearly and circularly polarized light, respectively. Another noteworthy feature is that, upon light illumination, jCC grows with time at the initial stage, and the saturated static CC should be $$j_{{\mathrm{CC}}} \propto \tau \eta _{ab}^cE^aE^b$$, with τ as the carrier lifetime. Therefore τη can be regarded as the effective CC photoconductivity. Another formula describing the nonlinear photocurrents can be obtained from quadratic Kubo response theory41,42 and reads $$j^c = \frac{{e^3}}{{2\omega ^2\hbar ^2}}{\mathrm{Re}}\left\{ {\mathop {\sum}\limits_{l,m,n}^{{{\Omega }} = \pm {\upomega}} {{\int} {\frac{{{\mathrm{d}}{\mathbf{k}}}}{{\left( {2\pi } \right)^3}}} } f_{ln}\frac{{v_{nl}^a}}{{\left( {\omega _{nl} - {{\Omega }} - i/\tau } \right)}}\left[ {\frac{{v_{lm}^bv_{mn}^c}}{{(\omega _{nm} - i/\tau )}} - \frac{{v_{lm}^cv_{mn}^b}}{{(\omega _{ml} - i/\tau )}}} \right]E_a\left( {{\Omega }} \right)E_b\left( { - {{\Omega }}} \right)} \right\}$$ (3) Here $$v_{nl} \equiv \left\langle {n|\hat v|l} \right\rangle$$ is the velocity matrix element. Equation (2) uses the length gauge, while Eq. (3) uses the velocity gauge. It can be shown (Supplementary Note 2) that, in the presence of time-reversal symmetry $${\cal{T}}$$, Eq. (3) is generally equivalent to Eqs. (1) and (2), and the real and imaginary parts of Eq. (3) correspond to the SC and CC, respectively. Compared with Eqs. (1) and (2), Eq. (3) is more general. For example, it can be used to calculate photocurrents in magnetic materials where $${\cal{T}}$$ is broken43,44. However, numerically Eq. (3) can experience convergence problems at small ω. Therefore, Eqs. (1) and (2) are adopted for computations in this work, which do not involve magnetism. More detailed discussions on the relationship between Eqs. (1) and (2) and Eq. (3) can be found in Supplementary Notes 1 and 2. The consistency between these two methods is well tested. In practice, the Brillouin zone (BZ) integration is carried out by k-mesh sampling with $$\sigma _{3{\mathrm{D}}} = {\int} {\frac{{{\mathrm{d}}{\mathbf{k}}}}{{\left( {2\pi } \right)^3}}} I({\mathbf{k}}) = \frac{1}{V}\mathop {\sum}\nolimits_{\mathbf{k}} {w_{\mathbf{k}}I({\mathbf{k}})}$$, where V is the volume of the unit cell, wk is weight factor, and I(k) is the integrand. However, for 2D materials, the definition of volume V is ambiguous, because the thickness of 2D materials is ill-defined45. Thus we replace volume V with the area S and define $$\sigma _{2{\mathrm{D}}} = \frac{1}{S}\mathop {\sum}\nolimits_{\mathbf{k}} {w_{\mathbf{k}}I({\mathbf{k}})}$$. Note that all ingredients, S, wk and I(k), are well defined and can be directly obtained from numerical computations, hence σ2D is unambiguous for 2D materials. As a result, in this work we mainly show σ2D. The 2D and 3D conductivities satisfy σ2D = Leffσ3D, where Leff should be the effective thickness of the material (not the thickness of the computational cell, which includes the thickness of the vacuum layer). Leff has no standard definitions and is usually set as the interlayer distance when the monolayers are van der Waals stacked along z direction. We use an effective thickness of Leff = 6 Å for JTMDs when σ3D is required for, e.g., the comparison with other materials. 
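As a usage illustration of the σ2D convention just described, the following back-of-envelope sketch (our own, in Python; not a script from the paper) converts the quoted peak SC conductivity of 1T′ MoSTe into a bulk-equivalent value with Leff = 6 Å and estimates the dc photocurrent produced by a normally incident beam. The 10 mW cm−2 intensity and 1 cm sample width are the representative values used in the estimate quoted in the next paragraph, and the field-amplitude convention is an assumption on our part, stated in the comments.

```python
import numpy as np

c = 2.998e8        # speed of light, m/s
eps0 = 8.854e-12   # vacuum permittivity, F/m

# Peak shift-current sheet conductivity quoted for 1T' MoSTe
sigma_2d = 2300e-9 * 1e-6      # 2300 nm uA V^-2  ->  A m V^-2
L_eff = 6e-10                  # assumed effective thickness, 6 Angstrom
sigma_3d = sigma_2d / L_eff    # bulk-equivalent conductivity, A V^-2
print(f"sigma_3D ~ {sigma_3d * 1e6:.0f} uA V^-2")

# Normally incident beam of intensity 10 mW/cm^2: I = (1/2) c eps0 E0^2
intensity = 10e-3 / 1e-4                   # W/m^2
E0 = np.sqrt(2 * intensity / (c * eps0))   # field amplitude, V/m

# For E(t) = E0 cos(wt) the Fourier components in Eq. (1) are E(+/-w) = E0/2,
# so the dc sheet current density is j = 2 * sigma_2D * (E0/2)^2 = sigma_2D * E0^2 / 2
j_sheet = 0.5 * sigma_2d * E0**2   # A per metre of electrode width
width = 1e-2                       # 1 cm wide sample
print(f"shift photocurrent ~ {j_sheet * width * 1e9:.1f} nA")
```

With these assumptions the script returns roughly 1 nA, consistent with the order-of-magnitude estimate given below; the exact prefactor depends on the field convention and on local-field and absorption corrections that this sketch ignores.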
Unless explicitly stated, the carrier lifetime is set as τ = 0.2 ps, which should be a conservative value considering that the carrier lifetimes of 2H TMDs are >1 ps at room temperature46,47. Note that 1T′ JTMDs have mirror symmetry $${\cal{M}}^y$$. The yth components of j and E should be flipped under $${\cal{M}}^y$$, while other components do not change. Consequently, $${\cal{M}}^y$$ enforces $$\sigma _{ab}^c$$/$$\eta _{ab}^c$$ to be zero when there is an odd number of y in [a, b, c], such as $$\sigma _{xx}^y$$. The nonzero SC conductivity components of 1T′ MoSTe are plotted in Fig. 4a. We observe that both in-plane polarizations $$\sigma _{xx}^x$$ and $$\sigma _{yy}^x$$ have striking magnitudes of ~10^3 nm μA V−2 in the THz range (ω < 10 THz ≈ 41 meV). The peak values of $$\sigma _{xx}^x$$ and $$\sigma _{yy}^x$$ are around 2300 and 850 nm μA V−2, respectively, more than tenfold larger than those of other non-centrosymmetric 2D materials, such as hexagonal BN (hBN), 2H MoS2, GeS, and SnSe, which are on the order of 10–100 nm μA V−2 (inset of Fig. 4e). When light with an intensity of 10 mW cm−2 shines on a single-layer MoSTe sheet of 1 cm × 1 cm, the photocurrent generated is on the order of 1 nA. Note that the nonlinear photocurrent can be boosted by (1) focusing the light beam and (2) stacking single-layer detectors to increase the cross-section. For example, when light with the same total power as above is focused onto a 0.01 cm2 spot size, the electric field is enhanced 10×, the photocurrent density would be enhanced 100×, and the total photocurrent would be 100 nA. Notably, the SC conductivities quickly decay for ω ≳ 0.1 eV, indicating that 1T′ MoSTe is relatively insensitive to light beyond the THz range, which can be advantageous when selective photodetectors in the THz range are desired. In addition, an interesting observation is that the SC conductivities remain almost constant in the THz range, which could make the calibration of THz detectors easier. Besides, an in-plane electric field can induce a large photocurrent in the out-of-plane direction: $$\sigma _{xx}^z$$ and $$\sigma _{yy}^z$$ have peak values of 180 and 25 nm μA V−2, respectively. Such an out-of-plane current can be measured if transparent electrodes like graphene are attached directly above and below the MoSTe monolayer. To understand the origin of such giant photoconductivity, the k-specific contribution to the total SC conductivity, $${\mathrm{SC}}\left( {\mathbf{k}} \right) \equiv {\mathrm{Re}}\left\{ {\mathop {\sum}\nolimits_{n,m} {f_{nm}\frac{{r_{mn}^ar_{nm;c}^b + r_{mn}^br_{nm;c}^a}}{{\hbar \left( {\omega _{mn} - \omega - i/\tau } \right)}}} } \right\}$$ at ω = 10 meV is shown in Fig. 4c. We can see that, around the fundamental bandgap at Λ, SC(k) has a peak amplitude of about ±10^8 Å3 eV−1. Away from Λ, SC(k) rapidly decays. This phenomenon is consistent with the argument that the inverted band structure would lead to enhanced Berry connection magnitudes. We also calculate the SC conductivity for the other five 1T′ JTMDs, and their peak values are shown in Fig. 4e. All six 1T′ JTMDs possess a colossal photovoltaic effect, and the peak values of $$\sigma _{xx}^x$$ and $$\sigma _{yy}^x$$ are on the order of 10^3 nm μA V−2. Generally, MSTe exhibits a stronger BPVE than MSSe and MSeTe. This is due to the larger out-of-plane asymmetry in the MSTe system. The electron affinities of the S, Se, and Te atoms are 2.08, 2.02, and 1.97 eV, respectively.
Consequently, the out-of-plane asymmetry should be more significant in MSTe, leading to a stronger BPVE. This point is also verified by the out-of-plane electric dipole Pz. We find that Pz of MSTe is around 0.15 e Å per unit cell, while for MSSe and MSeTe, Pz is only around 0.07–0.08 e Å per unit cell. The CC conductivities are plotted in the lower panels of Fig. 4 (Fig. 4b, d, f). With in-plane polarization, the only non-vanishing element of the CC tensor is $$\eta _{xy}^y$$, based on the symmetry analysis above. $$\tau \eta _{xy}^y$$ of MoSTe has a peak value of 8.5 × 10^3 nm μA V−2 around ω ≈ 50 meV (Fig. 4b). Since $$\tau \eta _{xy}^y$$ depends sensitively on the carrier lifetime, we vary τ and obtain the peak values of $$\tau \eta _{xy}^y$$ (inset of Fig. 4b). Even with τ = 0.04 ps, $$\tau \eta _{xy}^y$$ still has a peak value of around 400 nm μA V−2. The k-specific contribution to the total CC conductivity, $${\mathrm{CC}}\left( {\mathbf{k}} \right) \equiv {\mathrm{Re}}\left\{ {\mathop {\sum}\nolimits_{n,m} {f_{nm}\frac{{{{\Delta }}_{mn}^c\left[ {r_{mn}^a,\,r_{nm}^b} \right]}}{{\omega _{mn} - \omega - i/\tau }}} } \right\}$$ at ω = 50 meV, is plotted in Fig. 4d. Similar to the SC, the major contributions also lie in the vicinity of Λ. Finally, the peak values of $$\tau \eta _{xy}^y$$ for all six 1T′ JTMDs are shown in Fig. 4f. As with the SC, the CC conductivity in MSTe, which has stronger spatial inversion asymmetry, is larger than those in MSSe and MSeTe. Here we would like to mention that, besides SC and CC, which are interband contributions to the nonlinear photocurrent, there could also be intraband contributions. For insulating materials at zero temperature, the intraband part should be zero. But since 1T′ JTMDs have small bandgaps comparable with room temperature (kBTroom ~ 26 meV), we have also calculated the intraband contribution due to the anomalous velocity at finite temperatures. The results are shown in Supplementary Discussion 1, and one can find that the intraband contributions can be on the same order as the interband contributions.

### Topological phase transitions
As discussed above, around the ±Λ points, the Rashba splitting breaks the degeneracy and could close and reopen the bandgap, leading to topological phase transitions. The magnitude of the Rashba splitting can be engineered with external stimuli, such as in-plane strain, an external electric field, etc. For example, with a tensile strain, the vertical distance between the two chalcogen layers of a 1T′ JTMD shrinks (inset of Fig. 5a). The bandgap of MoSSe as a function of biaxial in-plane strain ϵ is plotted in Fig. 5a, where a band closing occurs around ϵ = 0.3%. This band closing/reopening indicates a topological transition. For ϵ < 0.3%, 1T′ MoSSe has trivial band topology with Z2 = 0, while with ϵ > 0.3%, 1T′ MoSSe becomes a Z2 TI. Such sensitive dependence on in-plane strain provides a convenient pathway to trigger topological phase transitions in 1T′ JTMDs. An even more intriguing phenomenon arises in the SC responses. In Fig. 5b, we show the SC conductivity of 1T′ MoSSe as a function of ϵ. All four components of $$\sigma _{ab}^c$$ undergo an abrupt jump upon the topological transition. Particularly, $$\sigma _{xx}^x$$ and $$\sigma _{xx}^z$$ flip their directions. Such an abrupt jump originates from the change in the band characteristics around Λ upon the topological transition25. As discussed above, the major contributions to the total SC conductivity come from k-points around the Λ point (Fig. 3b).
When the bandgap is closed and reopened, the wavefunctions of the lowest CB and highest VB around Λ point undergo a substantial remixing. In ideal cases such as the aforementioned two-band model, $$I_{mn;c}^{ab} = r_{mn}^ar_{nm;c}^b + r_{mn}^br_{nm;c}^a$$ would flip sign since m and n is interchanged and $$I_{mn;c}^{ab}$$ is purely imaginary. When more band contributions are incorporated, $$I_{mn;c}^{ab}$$ does not always flip its sign but would still experience a drastic change. The arguments above are verified by the k-specific contribution to $$\sigma _{xx}^x$$ and $$\sigma _{yy}^x$$ as shown in Fig. 5c–f, where we can see that SC(k) are significantly different on two sides of the topological transition. In addition to in-plane strain, an out-of-plane electric field, which also modifies the magnitude of the Rashba splitting, can trigger the topological transition and alter the SC conductivities as well (see Supplementary Fig. 6 and 7). Thus we propose that the abrupt jump of nonlinear photocurrent can be a universal signature of the topological phase transition in non-centrosymmetric materials and can be used as an online diagnostic tool. The mechanical, electrical, and even optomechanical48,49 approaches to switching the NLO responses would pave the way for efficient and ultrafast nonlinear optoelectronics. ### Fermi-level tuning It is also interesting how the nonlinear photocurrents vary when the Fermi level is buried in the CB or VB by carrier doping. The SC and CC conductivities of MoSTe as the function of the Fermi level EF are shown in Fig. 6a. We can see that for EF within ±50 meV (EF is set as 0 when the Fermi level is on the top of the VB), the SC and CC conductivities remain extremely large in their magnitudes, while for EF far away from the fundamental bandgap (heavily carrier doped), both SC and CC conductivities gradually decay to zero. Here the pure intraband nonlinear anomalous Hall current discussed above50 is not considered. A noteworthy feature is that, when EF is slightly above (below) the bandgap, $$\sigma _{yy}^x$$ would jump to an enormously positive (negative) value, about ten times larger in amplitude than that when EF is inside the bandgap. This effect can be understood by looking at the band structure (Fig. 3a) and the k-specific contribution SC(k) (Fig. 4c). As discussed above, the major contribution to the total SC conductivity comes from k-points close to the fundamental bandgap Λ. When the VB and CB are occupied and empty, respectively, SC(Λ + δky) and SC(Λ − δky) (δ is a small positive parameter) have opposite values and tend to cancel each other. On the other hand, with a positive EF, those CB below the Fermi level would be occupied as well, and the CB–VB transition cannot contribute to SC(k) anymore (Fig. 6b). However, a lager region on the Λ − δky side would have occupied CB than on the Λ + δky side. This is because the CB cone is tilted and the band velocity is smaller on the Λ − δky side, leading to a larger partial density of states in this region. As a result, the positive SC(k) on the Λ + δky side would be canceled less by the negative SC(k) on the Λ − δky side, leading to a larger total SC conductivity (Supplementary Fig. 8 and 9). A similar analysis could show that, when EF is within the VB, the total SC would have a significant negative value. These observations indicate that the photocurrent conductivity could be further enhanced by Fermi-level tuning in materials with tilted CB and/or VB, such as type-II WSM51. From Fig. 
6a, one can see that an ~1 meV shift in EF can dramatically enhance $$\sigma _{yy}^x$$. In practice, EF can be tuned by, e.g., gate voltage. Assuming a gate coupling efficiency of 0.1, then an ~10 mV gate voltage would be able to achieve the enhancement. ## Discussion Before concluding, we would like to note that, in addition to nonlinear photocurrents, other NLO effects such as the second-harmonic generation are also colossal in 1T′ JTMDs (Supplementary Fig. 10). Besides, the inversion symmetry of 1T′ PTMDs can be broken externally by, e.g., an out-of-plane electric field, resulting in nonlinear photocurrents, which can be regarded as a third-order nonlinear effect. The SC conductivity can be giant as well and can flip direction under a vertical electric field (Fig. 7). Also, the SC conductivity depends approximately linearly on the electric field, which characterizes the strength of inversion asymmetry. This is consistent with results with the model Hamiltonian before, when μ plays a similar role as the electric field. In addition, we find that the reflectance and absorbance of 1T′ JTMDs are small (Supplementary Discussion 2), as they are atomically thin monolayers. In conclusion, we reveal the colossal nonlinear photocurrent effects in 1T′ JTMDs. The photo-responsivity peaks within the THz range. As a result, the 1T′ JTMDs can be efficient and selective photodetectors in the THz range. We also investigate the topological order of 1T′ JTMDs and find that it can be conveniently switched by a small external stimulus such as in-plane strain and out-of-plane electric field. Upon the topological transitions, the photocurrents undergo an abrupt change and can flip direction, which can be used as a signal of the topological transition and can lead to sensitive manipulation of NLO effects. The colossal and switchable nonlinear photocurrents could find broad applications in photodetection, nonlinear optoelectronics, optomechanics, etc. ## Methods ### Ab initio calculations The first-principles calculations are based on density functional theory (DFT)52,53, as implemented in Vienna ab initio simulation package (VASP)54,55. Generalized gradient approximation in the form of Perdew–Burke–Ernzerhof56 is used to treat the exchange–correlation interactions. Core and valence electrons are treated by projector augmented wave method57 and a plane wave basis set with a cutoff energy of 520 eV, respectively. For the DFT calculations, the first BZ is sampled by a Γ-centered k-mesh with grid density of at least 2π × 0.02 Å−1 along each dimension. For the electric field calculations, a sawtooth-like potential along the z direction is applied, with discontinuity at the middle of the vacuum layer in the simulation cell. The symmetry constraints are completely switched off in all VASP calculations to avoid incorrect handling of the electric field58. To further test the correctness of the bandgap–electric field relationship, we have redone the calculations with Quantum Espresso59, and the results agree well with that of VASP. ### Wannier function fittings The Bloch wavefunctions from DFT calculations are projected onto the maximally localized Wannier functions (MLWF) with the Wannier90 package60. 
The MLWFs |nR〉 are defined as $$\left| {n{\mathbf{R}}} \right\rangle = \frac{1}{N}\mathop {\sum}\limits_{\mathbf{k}} {e^{ - i{\mathbf{k}}\, \cdot {\mathbf{R}}}} \mathop {\sum}\limits_{m = 1}^J {U_{mn}^{\mathbf{k}}} \left| {m{\mathbf{k}}} \right\rangle$$ (4) where |mk〉 are the Bloch wavefunctions as obtained in the DFT calculations, R are Bravais lattice vectors, J is the number of Wannier bands, and $$U_{mn}^{\mathbf{k}}$$ is a unitary transformation such that the Wannier functions are maximally localized. The Wannier Hamiltonian HW is constructed from the MLWFs, with $$H_{nm{\mathbf{R}}}^W = \left\langle {n0|\hat H|m{\mathbf{R}}} \right\rangle$$ (5) Wannier Hamiltonian in the k space can be obtained with a Fourier transformation $$H_{nm{\mathbf{k}}}^W = \mathop {\sum}\limits_{\mathbf{R}} {e^{i{\mathbf{k}}\, \cdot ({\mathbf{R}} + {\mathbf{r}}_{\mathbf{m}} - {\mathbf{r}}_{\mathbf{n}})}} H_{nm{\mathbf{R}}}^W$$ (6) where we have included the Wannier centers rm in the phase factor61,62. By diagonalizing $$H_{nm{\boldsymbol{k}}}^W$$ at each k-point, one obtains the energy and wavefunctions $$E_n^W({\mathbf{k}})$$ and |nkW. ### Band velocity, Berry connection, and sum rule The Wannier Hamiltonian and wavefunctions are directly applied to calculate the band velocity vmn with $$v_{mn}^a = \left\langle {m{\mathrm{|}}\frac{{\partial H}}{{\partial k_a}}{\mathrm{|}}n} \right\rangle ^W$$ (7) Then the interband Berry connections rmn can be obtained with the relation $$r_{mn} = \frac{{v_{mn}}}{{i\omega _{mn}}}\quad (m \ne n)$$ (8) And the generalized gauge covariant derivative of rmn is calculated with the sum rule9,28,61 $$\begin{array}{l}r_{nm;b}^a = \frac{i}{{\omega _{nm}}}\left[ {\frac{{v_{nm}^a{{\Delta }}_{nm}^b + v_{nm}^b{{\Delta }}_{nm}^a}}{{\omega _{nm}}} - w_{nm}^{ab} + \mathop {\sum}\limits_{p \ne n,m} {\left( {\frac{{v_{np}^av_{pm}^b}}{{\omega _{pm}}} - \frac{{v_{np}^bv_{pm}^a}}{{\omega _{np}}}} \right)} } \right]\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \left( {n \ne m} \right)\end{array}$$ (9) where $$\Delta_{\mathrm{nm}}=v_{nn}-v_{mm}$$ and $$w_{nm}^{ab} = \left\langle {n|\frac{{\partial ^2H}}{{\partial k_a\partial k_b}}|m} \right\rangle ^W$$. ### Nonlinear photoconductivity After all the ingredients, $$v_{mn}^a,\,r_{mn}^a$$, and $$r_{mn;b}^a$$, are obtained from the Wannier interpolations, the nonlinear photoconductivity is calculated based on Eq. (2) in the main text. The BZ integration is sampled with a 1601 × 3201 k-mesh in the first BZ. The k-mesh convergence is tested with a denser 2251 × 4501 k-mesh, and the difference is found to be negligible (Supplementary Fig. 5). ## Code availability The data that support the findings within this paper and the MATLAB code for calculating the shift and circular current conductivity are available from the corresponding authors upon reasonable request. ## References 1. 1. Fregoso, B. M. Bulk photovoltaic effects in the presence of a static electric field. Phys. Rev. B 100, 064301 (2019). 2. 2. Qin, M., Yao, K. & Liang, Y. C. High efficient photovoltaics in nanoscaled ferroelectric thin films. Appl. Phys. Lett. 93, 122904 (2008). 3. 3. Choi, T., Lee, S., Choi, Y. J., Kiryukhin, V. & Cheong, S. W. Switchable ferroelectric diode and photovoltaic effect in BiFeO3. Science 324, 63–66 (2009). 4. 4. Yang, S. Y. et al. Above-bandgap voltages from ferroelectric photovoltaic devices. Nat. Nanotechnol. 5, 143–147 (2010). 5. 5. Daranciang, D. et al. 
Ultrafast photovoltaic response in ferroelectric nanolayers. Phys. Rev. Lett. 108, 087601 (2012). 6. 6. Grinberg, I. et al. Perovskite oxides for visible-light-absorbing ferroelectric and photovoltaic materials. Nature 503, 509–512 (2013). 7. 7. Bhatnagar, A., Roy Chaudhuri, A., Heon Kim, Y., Hesse, D. & Alexe, M. Role of domain walls in the abnormal photovoltaic effect in BiFeO3. Nat. Commun. 4, 2835 (2013). 8. 8. Rangel, T. et al. Large bulk photovoltaic effect and spontaneous polarization of single-layer monochalcogenides. Phys. Rev. Lett. 119, 067402 (2017). 9. 9. Cook, A. M., Fregoso, B. M., De Juan, F., Coh, S. & Moore, J. E. Design principles for shift current photovoltaics. Nat. Commun. 8, 14176 (2017). 10. 10. Wang, H. & Qian, X. Ferroicity-driven nonlinear photocurrent switching in time-reversal invariant ferroic materials. Sci. Adv. 5, eaav9743 (2019). 11. 11. Shockley, W. & Queisser, H. J. Detailed balance limit of efficiency of p-n junction solar cells. J. Appl. Phys. 32, 510–519 (1961). 12. 12. McIver, J. W., Hsieh, D., Steinberg, H., Jarillo-Herrero, P. & Gedik, N. Control over topological insulator photocurrents with light polarization. Nat. Nanotechnol. 7, 96–100 (2012). 13. 13. Yuan, H. et al. Generation and electric control of spin-valley-coupled circular photogalvanic current in WSe2. Nat. Nanotechnol. 9, 851–857 (2014). 14. 14. Dhara, S., Mele, E. J. & Agarwal, R. Voltage-tunable circular photogalvanic effect in silicon nanowires. Science 349, 726–729 (2015). 15. 15. Ji, Z. et al. Spatially dispersive circular photogalvanic effect in a Weyl semimetal. Nat. Mater. 18, 955–962 (2019). 16. 16. De Juan, F., Grushin, A. G., Morimoto, T. & Moore, J. E. Quantized circular photogalvanic effect in Weyl semimetals. Nat. Commun. 8, 15995 (2017). 17. 17. Morimoto, T. & Nagaosa, N. Topological nature of nonlinear optical effects in solids. Sci. Adv. 2, e1501524 (2016). 18. 18. Zhang, Y. et al. Photogalvanic effect in Weyl semimetals from first principles. Phys. Rev. B 97, 241118 (2018). 19. 19. Wu, L. et al. Giant anisotropic nonlinear optical response in transition metal monopnictide Weyl semimetals. Nat. Phys. 13, 350–355 (2017). 20. 20. Osterhoudt, G. B. et al. Colossal mid-infrared bulk photovoltaic effect in a type-I Weyl semimetal. Nat. Mater. 18, 471–475 (2019). 21. 21. Ma, J. et al. Nonlinear photoresponse of type-II Weyl semimetals. Nat. Mater. 18, 476–481 (2019). 22. 22. Xu, Q. et al. Comprehensive scan for nonmagnetic Weyl semimetals with nonlinear optical response. npj Comput. Mater. 6, 32 (2020). 23. 23. Theocharous, E., Ishii, J. & Fox, N. P. A comparison of the performance of a photovoltaic HgCdTe detector with that of large area single pixel QWIPs for infrared radiometric applications. Infrared Phys. Technol. 46, 309–322 (2005). 24. 24. Rogalski, A., Antoszewski, J. & Faraone, L. Third-generation infrared photodetector arrays. J. Appl. Phys. 105, 091101 (2009). 25. 25. Tan, L. Z. & Rappe, A. M. Enhancement of the bulk photovoltaic effect in topological insulators. Phys. Rev. Lett. 116, 237402 (2016). 26. 26. Xu, H., Zhou, J., Wang, H. & Li, J. Giant photonic response of Mexican-hat topological semiconductors for mid-infrared to terahertz applications. J. Phys. Chem. Lett. 11, 6119–6126 (2020). 27. 27. Qian, X., Liu, J., Fu, L. & Li, J. Quantum spin Hall effect in two-dimensional transition metal dichalcogenides. Science 346, 1344–1347 (2014). 28. 28. Wang, C. et al. First-principles calculation of nonlinear optical responses by Wannier interpolation. Phys. Rev. 
B 96, 115147 (2017). 29. 29. Lu, A. Y. et al. Janus monolayers of transition metal dichalcogenides. Nat. Nanotechnol. 12, 744–749 (2017). 30. 30. Zhang, J. et al. Janus monolayer transition-metal dichalcogenides. ACS Nano 11, 8192–8198 (2017). 31. 31. Zheng, B. et al. Band alignment engineering in two-dimensional lateral heterostructures. J. Am. Chem. Soc. 140, 11193–11197 (2018). 32. 32. Li, W. & Li, J. Ferroelasticity and domain physics in two-dimensional transition metal dichalcogenide monolayers. Nat. Commun. 7, 10843 (2016). 33. 33. Cheng, Y. C., Zhu, Z. Y., Tahir, M. & Schwingenschlögl, U. Spin-orbit–induced spin splittings in polar transition metal dichalcogenide monolayers. Europhys. Lett. 102, 57001 (2013). 34. 34. Li, F. et al. Intrinsic electric field-induced properties in Janus MoSSe van der Waals structures. J. Phys. Chem. Lett. 10, 559–565 (2019). 35. 35. Riis-Jensen, A. C., Pandey, M. & Thygesen, K. S. Efficient charge separation in 2D Janus van der Waals structures with built-in electric fields and intrinsic p-n doping. J. Phys. Chem. C 122, 24520–24526 (2018). 36. 36. Murakami, S. Phase transition between the quantum spin Hall and insulator phases in 3D: emergence of a topological gapless phase. New J. Phys. 9, 356 (2007). 37. 37. Murakami, S. & Kuga, S. I. Universal phase diagrams for the quantum spin Hall systems. Phys. Rev. B Condens. Matter Mater. Phys. 78, 165313 (2008). 38. 38. Yang, B. J. & Nagaosa, N. Classification of stable three-dimensional Dirac semimetals with nontrivial topology. Nat. Commun. 5, 4898 (2014). 39. 39. Murakami, S., Iso, S., Avishai, Y., Onoda, M. & Nagaosa, N. Tuning phase transition between quantum spin Hall and ordinary insulating phases. Phys. Rev. B Condens. Matter Mater. Phys. 76, 205304 (2007). 40. 40. Hughes, J. L. P. & Sipe, J. Calculation of second-order optical response in semiconductors. Phys. Rev. B Condens. Matter Mater. Phys. 53, 10751–10763 (1996). 41. 41. Kraut, W. & Von Baltz, R. Anomalous bulk photovoltaic effect in ferroelectrics: a quadratic response theory. Phys. Rev. B 19, 1548–1554 (1979). 42. 42. Von Baltz, R. & Kraut, W. Theory of the bulk photovoltaic effect in pure crystals. Phys. Rev. B 23, 5590–5596 (1981). 43. 43. Zhang, Y. et al. Switchable magnetic bulk photovoltaic effect in the two-dimensional magnet CrI3. Nat. Commun. 10, 3783 (2019). 44. 44. Fei, R., Song, W. & Yang, L. Giant linearly-polarized photogalvanic effect and second harmonic generation in two-dimensional axion insulators. Phys. Rev. B 102, 035440 (2020). 45. 45. Laturia, A., Van de Put, M. L. & Vandenberghe, W. G. Dielectric properties of hexagonal boron nitride and transition metal dichalcogenides: from monolayer to bulk. npj 2D Mater. Appl. 2, 1–7 (2018). 46. 46. Wang, H., Zhang, C. & Rana, F. Surface recombination limited lifetimes of photoexcited carriers in few-layer transition metal dichalcogenide MoS2. Nano Lett. 15, 8204–8210 (2015). 47. 47. Niehues, I. et al. Strain control of exciton-phonon coupling in atomically thin semiconductors. Nano Lett. 18, 1751–1757 (2018). 48. 48. Zhou, J., Zhang, S. & Li, J. Normal-to-topological insulator martensitic phase transition in group-IV monochalcogenides driven by light. NPG Asia Mater. 12, 2 (2020). 49. 49. Xu, H., Zhou, J., Li, Y., Jaramillo, R. & Li, J. Optomechanical control of stacking patterns of h-BN bilayer. Nano Res. 12, 2634–2639 (2019). 50. 50. Wang, H. & Qian, X. Ferroelectric nonlinear anomalous Hall effect in few-layer WTe2. npj Comput. Mater. 5, 119 (2019). 51. 51. Soluyanov, A. A. 
et al. Type-II Weyl semimetals. Nature 527, 495–498 (2015). 52. 52. Hohenberg, P. & Kohn, W. Inhomogeneous electron gas. Phys. Rev. 136, B864–B871 (1964). 53. 53. Kohn, W. & Sham, L. J. Self-consistent equations including exchange and correlation effects. Phys. Rev. 140, A1133–A1138 (1965). 54. 54. Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996). 55. 55. Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996). 56. 56. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996). 57. 57. Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50, 17953–17979 (1994). 58. 58. Liu, Q. et al. Tuning electronic structure of bilayer MoS2 by vertical electric field: a first-principles investigation. J. Phys. Chem. C 116, 21556–21562 (2012). 59. 59. Giannozzi, P. et al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. J. Phys. Condens. Matter 21, 395502 (2009). 60. 60. Mostofi, A. A. et al. An updated version of wannier90: a tool for obtaining maximally-localised Wannier functions. Comput. Phys. Commun. 185, 2309–2310 (2014). 61. 61. Ibañez-Azpiroz, J., Tsirkin, S. S. & Souza, I. Ab initio calculation of the shift photocurrent by Wannier interpolation. Phys. Rev. B 97, 245143 (2018). 62. 62. Železný, J., Zhang, Y., Felser, C. & Yan, B. Spin-polarized current in noncollinear antiferromagnets. Phys. Rev. Lett. 119, 187204 (2017). ## Acknowledgements This work was supported by the Office of Naval Research Multidisciplinary University Research Initiative Award No. ONR N00014-18-1-2497. Y. G. and J. K. acknowledge the support from U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under Award DE-SC0020042. ## Author information Authors ### Contributions J.L. and H.X. conceived the idea and designed the project. H.X. performed the ab initio calculations. H.X., H.W., J.Z., and Y.G. analyzed the data. J.L. and J.K. supervised the project. All authors wrote the paper and contributed to the discussions of the results. ### Corresponding author Correspondence to Ju Li. ## Ethics declarations ### Competing interests The authors declare no competing interests. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Xu, H., Wang, H., Zhou, J. et al. Colossal switchable photocurrents in topological Janus transition metal dichalcogenides. npj Comput Mater 7, 31 (2021). https://doi.org/10.1038/s41524-021-00499-4
# Proposal Information for 2003A-0354
PI: Timothy M. Heckman, Johns Hopkins University, heckman@pha.jhu.edu Address: Physics Department, Bloomberg Center, Baltimore, MD 21218, USA CoI: Arjun Dey, KPNO CoI: Buell Jannuzi, KPNO CoI: Christopher Martin, Caltech CoI: Michael Rich, UCLA CoI: David Schiminovich, Caltech CoI: Todd Small, Caltech CoI: Ted Wyder, Caltech Title: Star Formation and Galaxy Building in the 'Middle Ages': z~1 to 3 Abstract: The history of star-formation has now been documented back to z~5. While this is a remarkable accomplishment, much remains to be done. The data used to measure SFR(z) have been inhomogeneous at different redshifts, and measurements have been exceptionally difficult from z ~ 1 to 2, at or near the peak in SFR(z). To address this, we propose to use the Mosaic Camera on the KPNO 4-m telescope to obtain deep U-band images of the 9 deg^2 area within the NOAO Deep Wide-Field Survey (NDWFS) that is also to be imaged to 25th magnitude in the near- and far-UV by NASA's Galaxy Evolution Explorer (GALEX). GALEX will detect ~4×10^5 star-forming galaxies in this field in the range z ~ 0 to 1.6. The combination of GALEX, NDWFS (B_wRIK), and the requested U-band images will allow us to measure redshifts, UV luminosities, and star-formation rates for these galaxies using the same "Lyman Break" technique that has been so successful at higher redshift. Planned SIRTF near- and far-IR data for the same region will make it possible to robustly measure the effects of dust on UV-based estimates of the star-formation rate. This project is the wide-field counterpart to a deeper ~1 deg^2 U-band survey in the Bootes NDWFS region that was conducted in spring 2002.
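The "Lyman Break" selection mentioned above works by identifying the band in which a galaxy's flux is strongly suppressed blueward of the Lyman limit; the redshift bin follows from which filter contains the break. The snippet below is a purely illustrative sketch of that classification logic (Python assumed); the magnitude limit, break color, and quoted redshift ranges are placeholder values for illustration, not the cuts that would be applied to the actual GALEX/NDWFS catalogs.

```python
def dropout_class(fuv, nuv, u, bw, limit=25.0, break_color=1.5):
    """Assign a crude redshift bin from the band in which the Lyman break falls.
    Magnitudes fainter than `limit`, or very red adjacent colors, count as a dropout."""
    fuv_drop = (fuv > limit) or (fuv - nuv > break_color)
    nuv_drop = (nuv > limit) or (nuv - u > break_color)
    u_drop = (u > limit) or (u - bw > break_color)
    if fuv_drop and nuv_drop and u_drop:
        return "U dropout (roughly z ~ 3)"
    if fuv_drop and nuv_drop:
        return "NUV dropout (roughly z ~ 1.5-2.5)"
    if fuv_drop:
        return "FUV dropout (roughly z ~ 0.7-1.5)"
    return "detected in FUV, likely z < ~0.7"

# Example: a source undetected in FUV but blue in NUV-U and U-Bw
print(dropout_class(fuv=26.1, nuv=23.4, u=23.0, bw=22.8))  # -> FUV dropout
```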
# Chute Tamer - Good idea?

#### uncle_vanya
In another thread I ran across the "Chute Tamer" which Rockets By Melissa is selling for $200. https://shop.rocketsbymelissa.com/s...nid=BE0D107D5E82ED5799C17D795B1C5B46.qscweb21 This device kinda makes me uneasy. I was really excited when I first saw it, then I read the user manual and realized it relies on a g-switch and a timer for determining when to deploy. https://www.locprecision.com/site/loc/ChuteTamer/index.html & https://www.locprecision.com/site/loc/ChuteTamer/User%20Manual%20-%20CT4%20-%20R1.pdf A while back I was trying to figure out a way to create a DIY version of the Aerotech Electronic Forward Closure using a timer - when I talked about doing Dual Deployment with a pair of timers or a dual event timer I was told this was a bad idea. After thinking it through I agree. If the rocket doesn't travel vertically as high as predicted (wind, bad cd estimates, etc.) then the timer could result in a free fall to the ground. Thoughts? EDITED to clarify the comments about the SIMPLE timer. It's a remarkable device in terms of deployment via fishing line etc. However the g-switch/timer method of determining when to deploy is what I am concerned about.

#### fox_racing_guy ##### Well-Known Member
I'll give you some user feedback here. The Timer is a Perfect Flight timer (the Chute Tamer comes with the Perfect Flight owner's manual as well) and the G switch is used to "start" the timer. No, I won't sit here and tell you I'm an expert with the thing, but I found it very easy to use just by reading the extensive user manual. I found it an easy way to convert a 2" rocket to dual deploy without building a small altimeter bay and using an altimeter with E-matches and black powder. I do use altimeters in some of my larger rockets (G-Wiz LCX & various Perfect Flight models) and to me this is just another "tool" at my disposal.

#### cjl ##### Well-Known Member
I don't think it's a very good idea, due to the difficulty in getting an accurate simmed descent rate. I would stick with a true altimeter for dual deployment.

#### Dipstick ##### Well-Known Member TRF Supporter
I started flying dual deploy with a blacksky timer, simply because I could pay less for it (as a high school student). It took some very careful calculation of altitude, descent rate under drogue, weathercocking, and some fudge factor. I found that after some trial and error, I could get my rocket to dual deploy in the 400–600 ft region, but if there is any abnormality in the flight path, you're looking at a charge going off on the ground instead of at 500 ft...:cry: In my opinion/experience, pay a little more (now it's not even that much) and go with the altimeter. Bruce

#### uncle_vanya ##### Well-Known Member
I'll give you some user feedback here. The Timer is a Perfect Flight timer (the Chute Tamer comes with the Perfect Flight owner's manual as well) and the G switch is used to "start" the timer. No, I won't sit here and tell you I'm an expert with the thing, but I found it very easy to use just by reading the extensive user manual. I found it an easy way to convert a 2" rocket to dual deploy without building a small altimeter bay and using an altimeter with E-matches and black powder. I do use altimeters in some of my larger rockets (G-Wiz LCX & various Perfect Flight models) and to me this is just another "tool" at my disposal.
This sounds like a good idea - but how do you deal with flights with greater deviation from vertical than expected?
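To make the concern in these last two posts concrete, here is a toy calculation (not from the Chute Tamer manual; all of the flight numbers are made up) showing what happens to a timer-based main deployment when the rocket weathercocks and comes up short of the simulated apogee.

```python
# Toy comparison: timer setting derived from a simulation vs. an off-nominal flight.
# All numbers are hypothetical.
sim_apogee_ft = 2000.0      # simulated apogee
sim_coast_s = 18.0          # simulated time from liftoff to apogee
drogue_rate_fps = 60.0      # simulated descent rate under drogue
target_main_ft = 500.0      # altitude where we want the main out

# Timer setting from the sim: time to apogee + time to fall from apogee to 500 ft
timer_s = sim_coast_s + (sim_apogee_ft - target_main_ft) / drogue_rate_fps
print(f"timer set to {timer_s:.0f} s after launch detect")

# Same timer, but the rocket weathercocks and only makes 1400 ft, apogee at 16 s
actual_apogee_ft = 1400.0
actual_coast_s = 16.0
fall_time_s = timer_s - actual_coast_s
altitude_at_fire_ft = actual_apogee_ft - drogue_rate_fps * fall_time_s
print(f"altitude when the timer expires: {altitude_at_fire_ft:.0f} ft")
```

With these numbers the timer works out to 43 s, but on the short flight the second print comes out negative: the rocket reaches the ground before the timer expires, which is exactly the failure mode being described, and why a barometric altimeter (firing at an altitude rather than at a time) avoids it. The real descent rate under drogue will also differ from the simmed one, which only adds to the spread.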
#### TWRackers ##### Well-Known Member We need to convince them to design one with a built-in altimeter instead of a timer. Then I'd buy one in a flash. #### fox_racing_guy ##### Well-Known Member This sounds like a good idea - but how do you deal with flights with greater deviation from vertical than expected? Lucky for me all flights so far have been straight as a laser beam so it hasn't been a problem. When I flew mine I deviated from the instructions buy looking at my motor delay (10 sec) and added another 5 seconds, this gave me a deployment time of 15 seconds. It still relies on motor ejection to deploy the devise from the rocket. The Chute Tamer just keeps the chute from deploying till your pre set time. #### uncle_vanya ##### Well-Known Member We need to convince them to design one with a built-in altimeter instead of a timer. Then I'd buy one in a flash. I thought about that... it would have to be a complex design since this device is in the path of the ejection gas of the motor ejection. #### TWRackers ##### Well-Known Member I thought about that... it would have to be a complex design since this device is in the path of the ejection gas of the motor ejection. Oops... forgot about that little detail. Never mind. #### ultrasonicTim ##### Well-Known Member I think it's a nice idea for small lightweight rockets that might stand a chance on surviving without a parachute at all. I would stay very light and would go fairly conservative on the ejection delay. #### ben ##### Well-Known Member I REALLY want to know who voted "I like lawndarts" :rotflol: #### TWRackers ##### Well-Known Member I REALLY want to know who voted "I like lawndarts" :rotflol: Well, it wasn't me, but "it's always fun when it happens to someone else". #### uncle_vanya ##### Well-Known Member I REALLY want to know who voted "I like lawndarts" :rotflol: 5 someone's at last count. I threw it in for fun. The poll is not public on purpose. Guess you'll have to just use your imagination on this one Ben. #### uncle_vanya ##### Well-Known Member I think it's a nice idea for small lightweight rockets that might stand a chance on surviving without a parachute at all. I would stay very light and would go fairly conservative on the ejection delay. The device itself weighs 4.4 oz and fits into a 2" tube. I guess you could have some light and tough rockets this would work with... I don't have anything that would survive without a 'chute - but maybe with an undersized chute used as a "drogue" this could be used to deploy the main and if the main failed the fast landing wouldn't be too bad in some cases. Another idea would be to use this to reef a 'chute and have it unfurl after a certain amount of time. That might work... hummmm. #### Warren ##### Well-Known Member I am new to the Rocketry Forum and am learning to use this wonderful site. Please forgive any newbie mistakes! I am the inventor of the Chute Tamer control and appreciate the valuable concerns that have already been expressed in this thread. I am most interested in sharing my experience with the Chute Tamer contorl and learning from other experienced rocketeers. I would like to start with a hypothetical example (#1): Lets say that I have designed and scratch built a rocket. It is 2.5" in diameter and 40" tall with four square-shaped fins and a 29mm motor mount. I did not simulate this design, but it looks like a couple off-the-shelf kits that I have seen, so I am reasonably confident that it will fly. 
As with any new rocket, I plan to initially fly it with a smaller motor (maybe a G64) to determine the rockets overall flight characteristics. While I am not sure, the 2.5" cross-section and the light to medium weight of this rocket make me think that a 7 second delay is best, but I am not sure. Surely, this is a worst-case scenario of uncertainty for using the Chute Tamer control. Would I recommend using it in this case? - YES! Here is why - The Chute Tamer control does much more than delay the parachute deployment. Indeed in this scenario, I am uncertain about altitude, propensity to weathercock, etc., so programming a maximum parachute delay would be a very bad idea. What I do know is that a G64-7 burns for 2.5 seconds of thrust plus about 7 seconds of delay (total 9.5 seconds). Since I have not flown the rocket on this motor (or any motor), I am concerned that the delay time will be wrong and the resulting air-speed at separation will cause zipper. To avoid this, I will program the Chute Tamer control for 9.5 seconds (time of ejection) plus a little extra time for the rocket to slow down after separation (about 2 seconds), or a total delay time of 11.5 seconds from launch detection. This will not delay the deployment of the parachute by much but will accomplish three important functions for this untested rocket: 1) provide zipper protection, 2) keep the parachute tightly bound inside the rocket during flight, preventing it from unfurling and getting stuck in the rock's air frame, and 3) provide a siren for locating my rocket after it has landed in the tall grass. After this flight and more, I will gain experience with this rocket and various motor combinations. This experience will provide me with better altitude estimates and longer reliable delay times adding the extra convenience of delay deployment. I have flown the Chute Tamer control in small and large rockets for dozens and dozens of flights. I have also experienced several failure modes (some quite interesting). This was all part of the learning/inventing process that I would like to share with anyone who is interested. The Chute Tamer control is now on its 4th significant design revision. The CT4 model overcomes all issues known to me. This is why I am so excited about being a part of this discussion! I am sure there are things that I have not encountered or thought up. Please read the "Inventor's Message" at the back of the manual (https://www.locprecision.com/site/loc/ChuteTamer/User Manual - CT4 - R2color.pdf) for my motivations. Thanks - Great Stuff! #### Warren ##### Well-Known Member A while back I was trying to figure out a way to create a DIY version of the Aerotech Electronic Forward Closure using a timer - when I talked about doing Dual Deployment with a pair of timers or a dual event timer I was told this was a bad idea. After thinking it through I agree. Thoughts? As you may expect, I have many thoughts on this and lots of experience too. First, I want to be sure that I draw a distinction between the Aerotech EFC and the Chute Tamer delayed deployment control. These two devices use similar componets to perform completely different tasks. The EFC uses a G-switch and timer to fire a black powder charge. The Chute Tamer control uses a G-switch and a timer to melt a fishing line and release a parachute. The failure modes are: 1) Timer expires too early - With EFC you get a potentially dangerous situation when the black powder ignites while the rocket is in your hand, on the pad, or on its way up at velocity:surprised:. 
With CT the parachute is released early, causing it to deploy at apogee. 2) Timer expires too late, or not at all - with the EFC you get a late ejection charge, or no ejection charge, resulting in a lawn dart. With CT you still get the motor's ejection charge and a late parachute or no parachute (unless a suitable drogue parachute is left unbound to be deployed at apogee). The likelihood of either of these is small, but #1 is more suspect as it could occur as the result of prematurely activating the G-switch, which is possible. The PerfectFlite timer software contains a sophisticated de-bounce algorithm to prevent shakes and drops from activating the timer. Only a sustained (2G for 0.5 seconds) acceleration will activate the timer. The likelihood of #2 is remote at best. In dozens of CT test flights, I have never had the G-switch fail to activate upon launch. PerfectFlite uses an expensive, high-quality G-switch. In addition, G-switches are widely manufactured today due to their widespread use in the auto industry. Once the G-switch is ground tested and verified as operational, the likelihood that it will fail to trigger is remote. For rockets that tumble (after separation) quickly (anything over 30-50 fps), I would strongly recommend the use of a drogue parachute to control the rocket's descent. This is no different than the standard practice when using dual deploy. (Indeed, I have seen several dual deploy flights where the second black powder charge fails to ignite, causing the rocket to land under drogue only.)

#### ultrasonicTim

##### Well-Known Member

The PerfectFlite timer software contains a sophisticated de-bounce algorithm to prevent shakes and drops from activating the timer. Only a sustained (2G for 0.5 seconds) acceleration will activate the timer.

I have the microTimer and unfortunately this de-bounce feature prevents its use with most of the Warp9 motors, as the burn times are less than 0.5 seconds. I think in the EFC they have reduced the time to around 0.25 seconds so it can work with most of the W9 motors.

#### uncle_vanya

##### Well-Known Member

I have the microTimer and unfortunately this de-bounce feature prevents its use with most of the Warp9 motors, as the burn times are less than 0.5 seconds. I think in the EFC they have reduced the time to around 0.25 seconds so it can work with most of the W9 motors.

Doesn't the miniTimer also have too low a G max? I think it's got a 30G switch and would need a 50G switch for some of the Warp9 applications that I have seen.

#### uncle_vanya

##### Well-Known Member

As you may expect, I have many thoughts on this and lots of experience too. …

All good comments. My thoughts: Late timer without open chute = better than no ejection - but not much better. A late timer is easy to see with a flight that is weathercocked or rod-whipped off of vertical. Have you tried using the 'Chute Tamer as a timed "reefing" device? In this case it would tightly hold the shroud lines only and not the whole 'chute. The chute would deploy, but with a partially opened 'chute it would have less drift. Later, when the timer expired, the 'chute could open fully. I'm not sure if this mode would work because obviously it changes where you "hang" the device.

#### ultrasonicTim

##### Well-Known Member

Doesn't the miniTimer also have too low a G max? I think it's got a 30G switch and would need a 50G switch for some of the Warp9 applications that I have seen.

Both the EFC and mini/microTimers have a 2G switch, which means it needs a minimum of 2 Gs to close (no max limit).

#### Warren

##### Well-Known Member

Have you tried using the 'Chute Tamer as a timed "reefing" device? In this case it would tightly hold the shroud lines only and not the whole 'chute. The chute would deploy, but with a partially opened 'chute it would have less drift. Later, when the timer expired, the 'chute could open fully. I'm not sure if this mode would work because obviously it changes where you "hang" the device.

I have not tried this, but see no reason that it would not work. The only limitation would be space in the body tube to accommodate it. I would load the laundry as follows: fold the main chute (without folding the shroud lines) and place the main plus shock cord into the Nomex blanket. Then I would bind the shroud lines (at the desired distance from the chute) to the Chute Tamer control. The shock cord, main chute, and Chute Tamer control are all connected separately to the eyebolt of the nose cone or payload bay. Also connect a drogue if desired. All this would be loaded into the body tube in this same order to be ejected at apogee. I will give this a try at some point. I am envisioning some extra fabric wrapped around the shroud lines that is attached somewhere, but falls free. The shroud line/fabric bundle would be easy to bind to the Chute Tamer control.
This configuration would be similar to using a properly sized drogue chute. Thank you for the added ideas about Chute Tamer functionality!

#### Warren

##### Well-Known Member

Late timer without open chute = better than no ejection - but not much better. A late timer is easy to see with a flight that is weathercocked or rod-whipped off of vertical.

In my experience, an open/tumbling rocket (unstable but no chute) reaches its maximum velocity quickly. This maximum tumbling velocity depends on the cross-sectional surface area of the rocket, the rocket's weight, and a bunch of other stuff like air density. I have attached an Excel file that shows tumble velocity for various combinations of these factors. In any case, it takes a very "dense" rocket (heavy with little surface area) to tumble faster than 60 fps (3 times 20 fps under chute). On the other hand, a rocket that comes down ballistic (as it went up) presents a very small profile to the air stream and may not reach its maximum speed before hitting the ground. A ballistic rocket can easily be travelling in excess of 300-500 fps. In my book, an open rocket (especially one using a drogue) is much safer than a ballistic recovery.

View attachment ChuteTamer.zip

#### Warren

##### Well-Known Member

I have the microTimer and unfortunately this de-bounce feature prevents its use with most of the Warp9 motors, as the burn times are less than 0.5 seconds. I think in the EFC they have reduced the time to around 0.25 seconds so it can work with most of the W9 motors.

Thank you for this information. I will add this to the Chute Tamer manual and FAQ on the web (when they are published). Without a similar modification to the PerfectFlite mini timer in the Chute Tamer control, CT is not suitable for use with Warp9 propellant engines. I am curious: when does the Aerotech EFC get turned on? I am having trouble imagining loading the engine on the launch pad, so I am assuming that the EFC is armed at the prep table, and stays armed at the LSO table, in line waiting to launch, etc. Is this correct? The Chute Tamer control can easily be turned on at the pad. Because the parachute is bound, it slides easily in and out of the body tube. Just before loading onto the rod/rail, slide it out and turn it on (listening for the correct delay time and verifying heating element continuity). If this is not convenient, then CT can be turned on at the prep table. There will be a tone indicating heating element continuity which will stop if the timer is activated. In addition, a locating siren sounds after the timer completes its operation. In either case, it will be obvious if the parachute is released prior to launch.

#### astrowolf67

##### Well-Known Member

I started flying dual deploy with a Blacksky timer, simply because I could pay less for it (as a high school student). It took some very careful calculation of altitude, descent rate under drogue, weathercocking, and some fudge factor. I found that after some trial and error, I could get my rocket to dual deploy in the 400-600 ft region, but if there is any abnormality in the flight path, you're looking at a charge going off on the ground instead of at 500 ft... In my opinion/experience, pay a little more (now it's not even that much) and go with the altimeter. Bruce

I haven't been on in a while, but, in my research of the Chute Tamer, came across this thread. From what I've read in the online manual, there is no live charge, which makes the CT very safe to use.
It looks like it can even be tested while still held in the hand, with little to no danger of even getting burned.

#### uncle_vanya

##### Well-Known Member

I haven't been on in a while, but, in my research of the Chute Tamer, came across this thread. From what I've read in the online manual, there is no live charge, which makes the CT very safe to use. It looks like it can even be tested while still held in the hand, with little to no danger of even getting burned.

That's fair. The only real danger is that with a flight that is not vertical, the rocket could fail to deploy the main before it hits the ground at 50-60 feet per second. Faster than ideal, but not ballistic, and no danger of a black powder ejection charge causing fire or other damage. That does put this in better perspective, I think.

#### flight4

##### Well-Known Member

I like this discussion. Great information. Hell, it might be worth a try.

#### Warren

##### Well-Known Member

I like this discussion. Great information. Hell, it might be worth a try.

A discussion of "weathercocking" might also be in order. The tendency of some rockets to fly in a direction other than straight up can be unnerving if it is not well understood. Stable rockets always fly into the greatest "wind". Because a stable rocket's center of pressure (CP) is behind its center of gravity (CG), the forces of the "wind" cause the rocket to rotate around the CG (tail down). Another debate could be how "stable" a rocket design should be: how many body tube diameters (calibers) in length should there be between the (lower) CP and the (higher) CG. Generally, a minimum of one and more like two or more calibers are recommended.

On a day with no breeze-wind, the only wind that the rocket experiences is the downward-wind created by the rocket thrusting upwards. Under these conditions, the faster the rocket is moving, the stronger the downward-wind and the more stable the rocket becomes to any horizontal breeze-wind disturbance. On a day when the breeze-wind is strong, the rocket experiences two types of wind during its flight: breeze-wind and downward-wind. It is the (vector) sum of these two winds that determines the reaction of the rocket. Because the rocket accelerates during the engine's boost phase, the rocket's upward velocity is slowest at the beginning of its launch. The slower the rocket's upward velocity, the less downward-wind and the more RELATIVE breeze-wind. Thus at the beginning of launch, the breeze-wind component is at its strongest relative to the downward-wind. To aid in stabilizing the rocket, we use a launch rod or rail. The recommended 5:1 thrust-to-rocket-weight ratio is a rule of thumb intended to give the rocket enough upward velocity (downward-wind) to be stable enough, even with a breeze-wind.

I have found a couple of techniques at the launch pad that reduce weathercocking for any given rocket. First, I always place the rocket on the down-breeze side of the rod or rail. This way, as the rocket accelerates upward, it does not have a tendency to rotate around the rod or bind in the rail, slowing its upward acceleration. Second, if the rocket is known to weathercock a lot, I will point the rod slightly DOWN-breeze. When the rocket leaves the rod and rotates into the breeze-wind, it then takes a more upward trajectory. The reason some rockets are more prone to weathercock than others is in their design. The longer the distance between CG and CP, the more susceptible to breeze-wind the rocket will be.
I find that many of today's kits are more than two calibers and thus somewhat over-stable. Be careful flying overly tall rockets (raises the CG) or rockets with large fin structures (lowers the CP) on their first flight. Regarding the Chute Tamer control, the important thing is to know the flight history of the rocket. Proper location of the Chute Tamer control will leave the rocket's CG-to-CP relationship unchanged. Chute Tamer or not, any rocket with a severe tendency to weathercock should be saved for a day with little to no breeze-wind. If flown with the Chute Tamer control, it should be flown with a very short Chute Tamer delay equal to the engine's thrust time plus delay time plus a couple of seconds, as well as with a drogue parachute. Please let me know if this information is helpful or just redundant with other parts of this forum. Thanks -

#### mr_fixit

##### Well-Known Member

Warren, that was very useful information; as many times as I've heard the stability discussion, it never hurts to hear it again a little bit differently. Thanks. With regards to the CT, it seems like a very viable alternative for the conversion of single deploy rockets to dual deploy. I also really like the choice of not having to use BP for the second ejection charge. My daughter and I almost got hit with a large NC from someone setting up a DD rocket not facing downrange, but sideways because it fit on their table better. Kaboom, sorry.

Couple of thoughts along the way:
a. seems kind of expensive off the top (compared to alt/BP, but probably not)
b. approx $1.60 per flight
c. like the locator beeper feature
d. like the simplicity of the device; seems like it would be great for young ones as well.

Couple of questions:
a. how durable is it, can it survive a tumble recovery (which is much safer than a ballistic something or other!)
b. can it survive a lawn dart?
c. any problems with the tether point to date?
d. is the 40°F a hard and fast limitation? We fly in the northeast and have flown in single digit weather.
e. any possibility of having an input from an altimeter to trigger the device?
f. the possibility of having a device that only receives an input from a timer or altimeter only?
g. about how many are in use/sold now?

Anyhow, great thread and information. Looks like a really great product, and I know Barry would not carry something that he wouldn't back 100%, right Barry? Thanks again, Tom

#### Warren

##### Well-Known Member

Couple of questions:
a. how durable is it, can it survive a tumble recovery (which is much safer than a ballistic something or other!)
b. can it survive a lawn dart?
c. any problems with the tether point to date?
d. is the 40°F a hard and fast limitation? We fly in the northeast and have flown in single digit weather.
e. any possibility of having an input from an altimeter to trigger the device?
f. the possibility of having a device that only receives an input from a timer or altimeter only?
g. about how many are in use/sold now?

a) b) How Durable? I have had the misfortune of two flights with very hard landings during various CT tests. In the first example, the rocket engine's ejection charge never lit. (Examination of the recovered rocket pieces showed no burning of the ejection charge. Discussion within the SkyBusters club was that the age of the reload required sanding or scoring of the delay element to ensure proper ignition of the delay.) The rocket came in ballistic into a large patch of tall grass. Ugh - take the shovel and hope to get lucky!
Turns out, the engine casing and the Chute Tamer control were the only two components undamaged. In fact, the Chute Tamer control had completed its sequence of operations normally and its siren was blaring. The siren made the half-submerged rocket parts extremely easy to find out in the tall grass! The second hard landing occurred when the rocket engine ejection charge broke the attachment point between the shock cord and the nose cone. (I had improperly attached the shock cord to some Kevlar cord and tied this to the "flashing" on either side of the nose cone's eyebolt hole. After a dozen flights, the nose cone flashing failed.) So the rocket with shock cord and attached Chute Tamer control came down separately from the nose cone. The absence of the nose cone allowed the rocket to become stable in its descent. (I should have included a small drogue parachute, but did not.) The rocket body tube (no nose cone) lawn darted into the open field, burying itself about six inches in the ground. The Chute Tamer control with the bound parachute was outside the protection of the body tube and landed hard on the field next to the rocket. The Chute Tamer control had several grass stains and a bit of mud, all of which cleaned up nicely. Today, I cannot tell you which of my CT controls this was because there was no damage externally or internally to the CT unit.

c) Tether Point Problems? I have not had any problems. The nylon strap is attached to the ABS enclosure with two barrel bolts. The plastic D-ring has not failed during tests.

d) 40 Degrees? This is straight from the PerfectFlite manual. The rest of the CT unit (except the cutter) could care less about weather or temperature. The issue with the cutter is that it melts the fishing line with heat. The nichrome cutter wire is energized by the timer circuit for a fixed period of time. The amount of current that flows through the wire depends on this time interval and the freshness of the battery (among other less important things). The nichrome wire heats from its current temperature to above the melting temperature of nylon (around 450 degrees F). The lower the starting temperature of the wire, the more heating that has to be done. I have tested the CT control in Ohio winters and found that prior cutter designs did not get hot enough in the very cold weather. The current cutter design has not failed me in the cold weather, but I would like to gather more test data on this issue. (Remember that the higher the altitude, the colder the temperature.)

e) f) Use an Altimeter? Yes! The heating cutter can be energized by any "ematch" style output. A barometric altimeter has one or more of these outputs. The timer has the added advantage of not being pressure sensitive and can thus be placed inside the body tube where the engine's ejection charge goes off. This would damage a barometric sensor (thus the separate compartment for dual deployment). The patent that has been filed for the CT control includes the use of many triggering mechanisms, including timers, barometric sensors, remote control signals, etc.

g) How Many Sold? Apart from test units, there are four "production" Chute Tamer controls that have been purchased. This product is brand new. I started selling them on July 27 of this year (2007). I had the chance to demonstrate the production version to Barry Lynch (of LOC/Precision) at the NYPower launch. He immediately bought one and had it evaluated/tested.
Based on his evaluation and my enthusiasm for his products and well-deserved reputation, he and I agreed to an exclusive sales arrangement. If you want to purchase one, check out the LOC/Precision web site (www.LOCPrecision.com) or go to www.ChuteTamer.com. Whew - great questions -

#### wkissee

##### Well-Known Member

I flew with a Chute Tamer for the first time this weekend and it works GREAT! I launched Pinky, my PML Callisto, four times this weekend with the Chute Tamer and it worked just as Warren said it would. The hardest part of using it is tying the @%#*#&$ monofilament (once my son got out to the launch and I had a "third hand" it was much easier)! LOL! The next tricky thing is calculating the timing on the timer. I used RockSim for my time calculations (I hate complex math problems) by setting the sims for no deployment. This seemed to work fairly well. The following is an outline of the flights and my observations:

Flight #1:
- First time using it.
- Motor: AT 29mm H165R w/ Med. Delay time
- Time setting: 15 secs
- Result: The monofilament line did not burn through, but the chute released at apogee. Successful recovery.
- Conclusion: Problem was user caused and not device/design caused. I did not have the line tied tight enough and the ejection charge blew the chute out from under the line (thankfully), allowing the chute to deploy as normal.

Flight #2:
- Motor: AT 29mm H165R w/ Med. Delay time
- Time setting: 15 secs
- Result: The Chute Tamer worked perfectly, but deployment was too soon, later than an apogee deployment, but still too soon.
- Conclusion: Set the timer for a longer delay. I began suspecting that using RockSim with no-deployment settings may have its flaws.

Flight #3:
- Motor: AT 38mm I300T w/ Med. Delay time
- Time setting: 25 secs
- Result: AWESOME launch, watched the deployment at apogee, but lost sight of it during the free-fall. As this was my last launch of the day, I was feeling pretty bummed while I was packing up my stuff and then felt REALLY bummed while I was driving home, because I realized that I had not only lost my beloved Pinky, but also my Chute Tamer and motor casing as well. That evening, while I was fixing dinner and describing the day's events to my wife, I got a call from the RSO out at the launch and he told me that someone had found Pinky and it was intact! WOO HOO! Pinky lives! When I got to the launch the next morning, I bee-lined straight to the RSO table and recovered Pinky; she was intact and all was well. Now back to the Chute Tamer:

Flight #3 continued:
- Conclusion: I never found out who found Pinky (thank you, whoever you are), so I am unable to know for sure how she was found, but I do know the following:
  o I did not see a chute deployment at apogee (I observed the free-fall)
  o The chute was not still attached to the Chute Tamer when I got the rocket back from the RSO
  o Pinky was intact with no damage
  So I am going to assume that the Chute Tamer worked as designed.

Flight #4:
- Motor: AT 29mm H210R w/ Med. Delay time
- Time setting: 19.9 secs
- Result: The Chute Tamer worked perfectly, but deployment was too soon, later than an apogee deployment, but still too soon.
- Conclusion: I saw the whole flight and the Chute Tamer worked PERFECTLY. I still need to set the timer for a longer delay.
I realized that the problem with figuring the time for the timer with RockSim in the manner that I was doing it is that the sim does not account for the fact that the rocket breaks apart when the motor ejection occurs, which slows the descent of the rocket. Maybe I will have to use Warren's calculation sheets. All in all, this is a GREAT product, and once I get the timing thing down it will work even better! Thanks Warren for the work that you put into this!
# Re: [isabelle] Code generation for picking arbitrary element from finite set

Peter Lammich wrote:

Hi all, I want to use the Isabelle 2005 code generator to get code for a function that picks an arbitrary element with some property P from a finite set S, i.e. something like:

pick :: "('a => bool) => 'a set => 'a option"

with (pick P S = Some e) ==> (e \in S \and P e) and (\exists e\in S. P e) ==> pick P S \noteq None [...] What is the best way to implement such a function?

Hi Peter,

if everything else fails, you can always provide an ad-hoc ML implementation:

consts_code "pick" ("\<module>pick")
attach {*
fun pick P [] = error "pick"
  | pick P (x :: xs) = if P x then x else pick P xs;
*}

A similar trick is used in HOL/MicroJava/BV/BVExample.thy for implementing the function some_elem, which selects an arbitrary element from a set.

Greetings, Stefan

--
Dr. Stefan Berghofer              E-Mail: berghofe at in.tum.de
Institut fuer Informatik          Phone: +49 89 289 17328
Technische Universitaet Muenchen  Fax: +49 89 289 17307
Boltzmannstr. 3                   Room: 01.11.059
85748 Garching, GERMANY           http://www.in.tum.de/~berghofe
# LMS Intrinsics

lms-intrinsics is a package that enables the use of SIMD x86 instructions in the Lightweight Modular Staging Framework (LMS). While most SIMD instructions are available as low-level machine code, the lms-intrinsics package focuses on the C SIMD intrinsics, which are supported by most modern C compilers such as gcc, Intel Compiler, LLVM, etc., and provides the appropriate generation of vectorized C code. Currently the following instruction sets (ISAs) are supported: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, AVX-512, FMA, KNC and SVML. Each SIMD intrinsic function is implemented as a construct of an Embedded Domain Specific Language (eDSL) in LMS. The intrinsics functions are then categorized according to their ISA and are implemented in separate groups of SIMD eDSLs such that each eDSL corresponds to a particular ISA. This implementation of LMS Intrinsics was done by Ivaylo Toskov as part of a master thesis project at the Department of Computer Science at ETH Zurich, Switzerland, supervised by Markus Püschel and me. This work has been published at CGO’18, obtaining all 4 badges of the conference: Artifacts Available, Artifacts Functional, Results Replicated and Artifacts Reusable.

# Usage

lms-intrinsics is available on Maven and can be used through SBT by including the following in build.sbt:

libraryDependencies += "ch.ethz.acl" %% "lms-intrinsics" % "0.0.5-SNAPSHOT"

A detailed explanation of the usage and a quick start tutorial can be found on the GitHub repository.

# Automatic Generation of SIMD eDSLs

There is a vast number of SIMD instructions available in a given CPU. With the continuous development of the x86 architecture, Intel has extended the instruction set architecture with many new sets, continuously adding more vector instructions. As a result, creating eDSLs that aim to support the majority of intrinsics functions is not an easy challenge. The figure below gives an overview of the available intrinsics functions for each instruction set architecture. As depicted in the image, there are more than 5,000 functions that have to be ported into several eDSLs. Doing this manually is a tedious and error-prone process. To avoid this, we decided to automate the generation of these SIMD eDSLs. A good place to start is the Intel Intrinsics Guide, which provides the specifications of each C intrinsic function. Observing this website, we noticed that it comes with a nice and convenient XML file that provides the name, return type and input arguments of each intrinsic function:

<intrinsic rettype='__m256d' name='_mm256_add_pd'>
  <type>Floating Point</type>
  <CPUID>AVX</CPUID>
  <category>Arithmetic</category>
  <parameter varname='a' type='__m256d'/>
  <parameter varname='b' type='__m256d'/>
  <description>
    floating-point elements in "a" and "b", and store the results in "dst".
  </description>
  <operation>
    FOR j := 0 to 3
      i := j*64
      dst[i+63:i] := a[i+63:i] + b[i+63:i]
    ENDFOR
    dst[MAX:256] := 0
  </operation>
</intrinsic>

As a result, we were able to create a generator that takes each XML entry and produces Scala code tailored for the LMS framework, corresponding to each intrinsic function. This process was quite convenient, as most intrinsics functions are in fact immutable and produce no effects. The generation is done in 4 steps.
Step 1: Generation of definitions:

case class MM256_ADD_PD(a: Exp[__m256d], b: Exp[__m256d]) extends IntrinsicDef[__m256d] {
  val category = List(IntrinsicsCategory.Arithmetic)
  val intrinsicType = List(IntrinsicsType.FloatingPoint)
  val performance = Map.empty[MicroArchType, Performance]
}

Step 2: Automatic SSA conversion (driven by LMS):

def _mm256_add_ps(a: Exp[__m256], b: Exp[__m256]): Exp[__m256] = {
  MM256_ADD_PS(a, b)  // construct the IR node; LMS lifts it to an Exp
}

Step 3: Mirroring (LMS default transformation step):

override def mirror[A:Typ](e: Def[A], f: Transformer)(implicit pos: SourceContext) = (e match {
  // Pattern match against all other nodes
  case _ => super.mirror(e, f)
}).asInstanceOf[Exp[A]]

Step 4: Unparsing to C:

override def emitNode(sym: Sym[Any], rhs: Def[Any]) = rhs match {
  case MM256_ADD_PS(a, b) =>
    emitValDef(sym, s"_mm256_add_ps(${quote(a)}, ${quote(b)})")
  // Pattern match against all other nodes
  case _ => super.emitNode(sym, rhs)
}

However, not all functions are immutable, particularly load and store functions such as _mm256_loadu_ps or _mm256_storeu_ps. The Intel Intrinsics Guide, however, includes a parameter that depicts the category of each instruction. In fact it contains 24 categories, conveniently categorizing load and store instructions. We were able to use this parameter to infer each intrinsic function's mutability and generate the proper LMS effects. Another challenge was the limitation imposed by the JVM - the 64 kB bytecode limit per method. To avoid this issue, we developed the generator such that it generates Scala code split into several sub-classes, constituting a class that represents an ISA by inheriting each sub-class. The resulting Scala code consists of several Scala class files containing a few thousand lines of code, which take the Scala compiler several minutes to compile. To make future use of this work more convenient, we decided to precompile the library and make it available on Maven. To learn more about this work, check out our paper SIMD Intrinsics on Managed Language Runtimes. For an in-depth overview of the process of automatic generation of SIMD eDSLs, have a look at the master thesis work of Ivaylo titled Explicit SIMD instructions into JVM using LMS.
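The actual generator ships with the lms-intrinsics repository and is written in Scala; purely as an illustration of the idea (the file name, helper names and formatting below are our own assumptions, not the project's code), a minimal sketch of turning one XML entry like the one shown above into a Step-1-style definition could look like this:

# Sketch only: convert one <intrinsic> entry of the Intel Intrinsics Guide XML
# into a Step-1-style case class string. All names here are illustrative.
import xml.etree.ElementTree as ET

def emit_case_class(node):
    name = node.get("name")                      # e.g. _mm256_add_pd
    rettype = node.get("rettype")                # e.g. __m256d
    category = node.findtext("category")         # e.g. Arithmetic
    params = [(p.get("varname"), p.get("type")) for p in node.findall("parameter")]
    args = ", ".join(f"{v}: Exp[{t}]" for v, t in params)
    return (
        f"case class {name.lstrip('_').upper()}({args}) "
        f"extends IntrinsicDef[{rettype}] {{\n"
        f"  val category = List(IntrinsicsCategory.{category})\n"
        f"}}"
    )

tree = ET.parse("intel-intrinsics-data.xml")     # hypothetical file name
for node in tree.getroot().iter("intrinsic"):
    print(emit_case_class(node))

The real generator additionally uses the category information to decide whether an intrinsic needs LMS effects (loads and stores) and splits its output across sub-classes to stay under the JVM method-size limit, as described above.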
# RDP 2020-05: How Risky is Australian Household Debt? Read me

This 'read me' file contains details of the code and data included in this archive that were used to generate the results reported in RDP 2020-05. Plotting data for all figures are publicly available and can be found in the spreadsheet 'rdp-2020-05-graph-data.xlsx'. If you make use of any of these files you should clearly attribute the authors in any derivative work.

## Data

The following data sources were used:

• Cross-country DTI panel data:
  • Data obtained from a wide variety of sources. Details of these can be found in Appendix A of the paper, and on a 'metadata' tab in the spreadsheet 'stata_input.xlsx'.
• Household-level data:
  • Obtained from ABS: '6541.0.30.001 – Microdata: Income and Housing, Australia, 2017-18'
  • Obtained from ABS: '6540.0 – Microdata: Household Expenditure, Income and Housing, 2015-16'
  • Obtained from ABS: '6540.0 – Microdata: Household Expenditure Survey and Survey of Income and Housing, Australia, 2009-10 Third Edition'
  • Obtained from ABS: '6540.0 – Microdata: Household Expenditure Survey and Survey of Income and Housing, Basic and Expanded CURF, Australia, 2003-04 (Third Edition)'.
• Aggregate household debt (including and excluding offset account balances):
  • RBA calculation, using data obtained from: ABS Cat Nos '5206.0 – Australian National Accounts: National Income, Expenditure and Product' and '5232.0 – Australian National Accounts: Finance and Wealth'; and APRA.
• Household Expenditure Measure and Henderson Poverty Line:
  • Obtained from Melbourne Institute – not available for release.
• Aggregate interest rates ('rate_changes.xlsx')
  • Obtained from RBA: Statistical Table 'F5 Indicator Lending Rates' – available at <https://www.rba.gov.au/statistics/tables/>. The average change in interest rates from origination to the survey date is calculated for each survey year and age of origination.

The underlying data files from the ABS and the Melbourne Institute that are used by the code referenced in the final four do files below are not included in this archive due to the terms of our access; as such, the code for these files will not run.

## Code

The results reported in this RDP were generated using Stata 16.0. Included in this archive are the following programs:

• DTI panel regression.do
• 1_setup.do
• 2_merge_datasets.do
• 3_unemployment.do
• 4_main_model.do

'DTI panel regression.do' runs the regressions reported in Section 2 of the paper. To do this, it first calls data from 'stata_input.xlsx'. To recreate the contributions data in Figures 5 and 6, use the following formula:

$Contribution_i = \frac{\beta\,\left(\ln(i_t) - \ln(i_{t-k})\right)}{\ln(DTI_t) - \ln(DTI_{t-k})}\,\left(DTI_t - DTI_{t-k}\right)$

where $\beta$ is the coefficient of variable i, t represents the time period and DTI is the debt-to-income ratio (a small worked example of this formula is given after the program descriptions below).

The remaining four programs are used to conduct the stress testing in Section 3 of the paper. The first program defines all the parameters and assumptions, imports the relevant data, and runs the other programs when necessary. After running each of the other programs, the first program also exports the relevant output for the figures in the paper. The second program extracts the relevant household-level and loan-level data from the ABS micro data (not provided in this documentation). The third program extracts the relevant person-level variables and runs the employment loss model. The fourth program merges the data sources, runs the stress testing model and exports the results.
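As the worked example referenced above (the coefficient and series values here are placeholders of ours, not values from the RDP), the contribution calculation can be written directly in Python:

# Illustrative only: contribution of one explanatory variable to the change in
# the DTI ratio between periods t-k and t, following the formula above.
import math

def contribution(beta, x_t, x_tk, dti_t, dti_tk):
    return (beta * (math.log(x_t) - math.log(x_tk))
            / (math.log(dti_t) - math.log(dti_tk))
            * (dti_t - dti_tk))

# placeholder inputs: coefficient 0.5, variable rising 100 -> 120, DTI rising 160 -> 190
print(round(contribution(0.5, 120.0, 100.0, 190.0, 160.0), 2))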
When two variables have the same name (other than the final suffix ‘_’), those with ‘_’ represent post-shock values and those without ‘_’ represent pre-shock values. 28 August 2020
# Wavefunction collapse

In certain interpretations of quantum mechanics, wave function collapse is one of two processes by which quantum systems apparently evolve according to the laws of quantum mechanics. It is also called collapse of the state vector or reduction of the wave packet. The reality of wave function collapse has always been debated, i.e., whether it is a fundamental physical phenomenon in its own right (which may yet emerge from a theory of everything) or just an epiphenomenon of another process, such as quantum decoherence. In recent decades the quantum decoherence view has gained popularity.

## History and Context

By the time John von Neumann wrote his famous treatise Mathematische Grundlagen der Quantenmechanik in 1932, the phenomenon of "wave function collapse" was accommodated into the mathematical formulation of quantum mechanics by postulating that there were two processes of wave function change:

1. The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.
2. The deterministic, unitary, continuous time evolution of an isolated system that obeys Schrödinger's equation (or nowadays some relativistic, local equivalent).

In general, quantum systems exist in superpositions of those basis states that most closely correspond to classical descriptions, and -- when not being measured or observed -- evolve according to the time-dependent Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory, which is process (2) mentioned above. However, when the wave function collapses -- process (1) -- from an observer's perspective the state seems to "leap" or "jump" to just one of the basis states and uniquely acquire the value of the property being measured, $e_i$, that is associated with that particular basis state. After the collapse, the system begins to evolve again according to the Schrödinger equation or some equivalent wave equation. Hence, in experiments such as the double-slit experiment each individual photon arrives at a discrete point on the screen, but as more and more photons are accumulated, they form an interference pattern overall.

The existence of the wave function collapse is required in the Copenhagen interpretation. On the other hand, the collapse is considered redundant or just an optional approximation in interpretations such as the Everett many-worlds interpretation and approaches based on quantum decoherence and consistent histories.

The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics known as the measurement problem. The problem is not really confronted by the Copenhagen interpretation, which simply postulates that this is a special characteristic of the "measurement" process. The Everett many-worlds interpretation deals with it by discarding the collapse-process, thus reformulating the relation between measurement apparatus and system in such a way that the linear laws of quantum mechanics are universally valid, that is, the only process according to which a quantum system evolves is governed by the Schrödinger equation or some relativistic equivalent. Often tied in with the many-worlds interpretation, but not limited to it, is the physical process of decoherence, which causes an apparent collapse. Decoherence is also important for the interpretation based on Consistent Histories. Note that a general description of the evolution of quantum mechanical systems is possible by using density operators and quantum operations.
In this formalism (which is closely related to the C*-algebraic formalism) the collapse of the wave function corresponds to a non-unitary quantum operation. Note also that the physical significance ascribed to the wave function varies from interpretation to interpretation, and even within an interpretation, such as the Copenhagen Interpretation. If the wave function merely encodes an observer's knowledge of the universe then the wave function collapse corresponds to the receipt of new information -- this is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent. One of the paradoxes of quantum theory is that wave function seems to be more than just information (otherwise interference effects are hard to explain) and often less than real, since the collapse seems to take place faster-than-light and triggered by observers.
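For concreteness, the textbook projection postulate (the standard mathematical statement of process (1), independent of which interpretation one favours) can be written as

$|\psi\rangle = \sum_i c_i \, |e_i\rangle \;\longrightarrow\; |e_k\rangle \quad \text{with probability } P(k) = |c_k|^2,$

i.e. a measurement of an observable with eigenstates $|e_i\rangle$ yields the outcome associated with $|e_k\rangle$ with Born-rule probability $|c_k|^2$, after which the system again evolves unitarily -- process (2) -- from the post-measurement state $|e_k\rangle$.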
# pm4py.streaming.algo.discovery.dfg package ## pm4py.streaming.algo.discovery.dfg.algorithm module PM4Py is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. PM4Py is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with PM4Py. If not, see <https://www.gnu.org/licenses/>. class pm4py.streaming.algo.discovery.dfg.algorithm.Variants(value)[source] Bases: enum.Enum An enumeration. FREQUENCY = <module 'pm4py.streaming.algo.discovery.dfg.variants.frequency' from 'C:\\Users\\berti\\pm4py-core\\pm4py\\streaming\\algo\\discovery\\dfg\\variants\\frequency.py'> pm4py.streaming.algo.discovery.dfg.algorithm.apply(variant=Variants.FREQUENCY, parameters=None)[source] Discovers a DFG from an event stream Parameters variant – Variant of the algorithm (default: Variants.FREQUENCY) Returns Streaming DFG discovery object Return type stream_dfg_obj
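As a minimal usage sketch based only on the signature documented above (how the returned object is then fed events -- typically by registering it on a pm4py live event stream -- is assumed and not shown here):

# Create a streaming DFG discovery object with the default frequency variant.
from pm4py.streaming.algo.discovery.dfg import algorithm as stream_dfg_discovery

stream_dfg = stream_dfg_discovery.apply(variant=stream_dfg_discovery.Variants.FREQUENCY)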
# Finding and Exploiting LTL Trajectory Constraints in Heuristic Search

Simon, Salomé and Röger, Gabriele. (2015) Finding and Exploiting LTL Trajectory Constraints in Heuristic Search. In: Proceedings of the 8th Annual Symposium on Combinatorial Search (SoCS 2015). pp. 113-121.

Official URL: http://edoc.unibas.ch/43155/
University of Southampton Institutional Repository

# K2 observations of SN 2018oh reveal a two-component rising light curve for a Type Ia supernova

Dimitriadis, Georgios, Inserra, Cosimo, Gutierrez Avendano, Claudia Patricia and Smith, Mathew, et al. (2018) K2 observations of SN 2018oh reveal a two-component rising light curve for a Type Ia supernova. The Astrophysical Journal Letters, 870.

Record type: Article

## Abstract

We present an exquisite 30 minute cadence Kepler (K2) light curve of the Type Ia supernova (SN Ia) 2018oh (ASASSN-18bt), starting weeks before explosion, covering the moment of explosion and the subsequent rise, and continuing past peak brightness. These data are supplemented by multi-color Panoramic Survey Telescope and Rapid Response System 1 (Pan-STARRS1) and Cerro Tololo Inter-American Observatory 4 m Dark Energy Camera (CTIO 4-m DECam) observations obtained within hours of explosion. The K2 light curve has an unusual two-component shape, where the flux rises with a steep linear gradient for the first few days, followed by a quadratic rise as seen for typical supernovae (SNe) Ia. This "flux excess" relative to canonical SN Ia behavior is confirmed in our i-band light curve, and furthermore, SN 2018oh is especially blue during the early epochs. The flux excess peaks 2.14 ± 0.04 days after explosion, has a FWHM of 3.12 ± 0.04 days, a blackbody temperature of $T = 17{,}500^{+11{,}500}_{-9{,}000}$ K, a peak luminosity of $4.3 \pm 0.2 \times 10^{37}\,\mathrm{erg\,s^{-1}}$, and a total integrated energy of $1.27 \pm 0.01 \times 10^{43}\,\mathrm{erg}$. We compare SN 2018oh to several models that may provide additional heating at early times, including collision with a companion and a shallow concentration of radioactive nickel. While all of these models generally reproduce the early K2 light curve shape, we slightly favor a companion interaction, at a distance of ~$2 \times 10^{12}\,\mathrm{cm}$ based on our early color measurements, although the exact distance depends on the uncertain viewing angle. Additional confirmation of a companion interaction in future modeling and observations of SN 2018oh would provide strong support for a single-degenerate progenitor system.

Full text not available from this repository.

Accepted/In Press date: 31 August 2018
e-pub ahead of print date: 28 December 2018

## Identifiers

Local EPrints ID: 427618
URI: https://eprints.soton.ac.uk/id/eprint/427618
ISSN: 2041-8205
PURE UUID: 9f6d397b-b0e7-4b3a-bea5-78051c6b223c
ORCID for Cosimo Inserra: orcid.org/0000-0002-3968-4409
ORCID for Mathew Smith: orcid.org/0000-0002-3321-1432

## Catalogue record

Date deposited: 24 Jan 2019 17:30

## Contributors

Author: Cosimo Inserra
Author: Claudia Patricia Gutierrez Avendano
Author: Mathew Smith
# Training¶ ## Stochastic Local Closed World Assumption¶ class SLCWATrainingLoop(model, optimizer=None, negative_sampler_cls=None, negative_sampler_kwargs=None, automatic_memory_optimization=True)[source] A training loop that uses the stochastic local closed world assumption training approach. Initialize the training loop. Parameters Find the maximum batch size for training with the current setting. This method checks how big the batch size can be for the current model with the given training data and the hardware at hand. If possible, the method will output the determined batch size and a boolean value indicating that this batch size was successfully evaluated. Otherwise, the output will be batch size 1 and the boolean value will be False. Parameters batch_size (Optional[int]) – The batch size to start the search with. If None, set batch_size=num_triples (i.e. full batch training). Return type Returns Tuple containing the maximum possible batch size as well as an indicator if the evaluation with that size was successful. property checksum The checksum of the model and optimizer the training loop was configured with. Return type str property device The device used by the model. classmethod get_normalized_name() Get the normalized name of the training loop. Return type str property num_negs_per_pos Return number of negatives per positive from the sampler. Property for API compatibility Return type int sub_batch_and_slice(batch_size) Check if sub-batching and/or slicing is necessary to train the model on the hardware at hand. Return type to_embeddingdb(session=None, use_tqdm=False) Parameters • session – Optional SQLAlchemy session • use_tqdm (bool) – Use tqdm progress bar? Return type embeddingdb.sql.models.Collection train(num_epochs=1, batch_size=None, slice_size=None, label_smoothing=0.0, sampler=None, continue_training=False, only_size_probing=False, use_tqdm=True, use_tqdm_batch=True, tqdm_kwargs=None, stopper=None, result_tracker=None, sub_batch_size=None, num_workers=None, clear_optimizer=False, checkpoint_directory=None, checkpoint_name=None, checkpoint_frequency=None, checkpoint_on_failure=False, drop_last=None) Train the KGE model. Parameters • num_epochs (int) – The number of epochs to train the model. • batch_size (Optional[int]) – If set the batch size to use for mini-batch training. Otherwise find the largest possible batch_size automatically. • slice_size (Optional[int]) – >0 The divisor for the scoring function when using slicing. This is only possible for LCWA training loops in general and only for models that have the slicing capability implemented. • label_smoothing (float) – (0 <= label_smoothing < 1) If larger than zero, use label smoothing. • sampler (Optional[str]) – (None or ‘schlichtkrull’) The type of sampler to use. At the moment sLCWA in R-GCN is the only user of schlichtkrull sampling. • continue_training (bool) – If set to False, (re-)initialize the model’s weights. Otherwise continue training. • only_size_probing (bool) – The evaluation is only performed for two batches to test the memory footprint, especially on GPUs. • use_tqdm (bool) – Should a progress bar be shown for epochs? • use_tqdm_batch (bool) – Should a progress bar be shown for batching (inside the epoch progress bar)? • tqdm_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments passed to tqdm managing the progress bar. 
• stopper (Optional[Stopper]) – An instance of pykeen.stopper.EarlyStopper with settings for checking if training should stop early • result_tracker (Optional[ResultTracker]) – The result tracker. • sub_batch_size (Optional[int]) – If provided split each batch into sub-batches to avoid memory issues for large models / small GPUs. • num_workers (Optional[int]) – The number of child CPU workers used for loading data. If None, data are loaded in the main process. • clear_optimizer (bool) – Whether to delete the optimizer instance after training (as the optimizer might have additional memory consumption due to e.g. moments in Adam). • checkpoint_directory (Union[None, str, Path]) – An optional directory to store the checkpoint files. If None, a subdirectory named checkpoints in the directory defined by pykeen.constants.PYKEEN_HOME is used. Unless the environment variable PYKEEN_HOME is overridden, this will be ~/.pykeen/checkpoints. • checkpoint_name (Optional[str]) – The filename for saving checkpoints. If the given filename exists already, that file will be loaded and used to continue training. • checkpoint_frequency (Optional[int]) – The frequency of saving checkpoints in minutes. Setting it to 0 will save a checkpoint after every epoch. • checkpoint_on_failure (bool) – Whether to save a checkpoint in cases of a RuntimeError or MemoryError. This option differs from ordinary checkpoints, since ordinary checkpoints are only saved after a successful epoch. When saving checkpoints due to failure of the training loop there is no guarantee that all random states can be recovered correctly, which might cause problems with regards to the reproducibility of that specific training loop. Therefore, these checkpoints are saved with a distinct checkpoint name, which will be PyKEEN_just_saved_my_day_{datetime}.pt in the given checkpoint_root. • drop_last (Optional[bool]) – Whether to drop the last batch in each epoch to prevent smaller batches. Defaults to False, except if the model contains batch normalization layers. Can be provided explicitly to override. Return type Returns The losses per epoch. property triples_factory The triples factory in the model. Return type TriplesFactory ## Local Closed World Assumption¶ class LCWATrainingLoop(model, optimizer=None, automatic_memory_optimization=True)[source] A training loop that uses the local closed world assumption training approach. Initialize the training loop. Parameters • model (Model) – The model to train • optimizer (Optional[Optimizer]) – The optimizer to use while training the model • automatic_memory_optimization (bool) – bool Whether to automatically optimize the sub-batch size during training and batch size during evaluation with regards to the hardware at hand. Find the maximum batch size for training with the current setting. This method checks how big the batch size can be for the current model with the given training data and the hardware at hand. If possible, the method will output the determined batch size and a boolean value indicating that this batch size was successfully evaluated. Otherwise, the output will be batch size 1 and the boolean value will be False. Parameters batch_size (Optional[int]) – The batch size to start the search with. If None, set batch_size=num_triples (i.e. full batch training). Return type Returns Tuple containing the maximum possible batch size as well as an indicator if the evaluation with that size was successful. 
property checksum The checksum of the model and optimizer the training loop was configured with. Return type str property device The device used by the model. classmethod get_normalized_name() Get the normalized name of the training loop. Return type str sub_batch_and_slice(batch_size) Check if sub-batching and/or slicing is necessary to train the model on the hardware at hand. Return type to_embeddingdb(session=None, use_tqdm=False) Parameters • session – Optional SQLAlchemy session • use_tqdm (bool) – Use tqdm progress bar? Return type embeddingdb.sql.models.Collection train(num_epochs=1, batch_size=None, slice_size=None, label_smoothing=0.0, sampler=None, continue_training=False, only_size_probing=False, use_tqdm=True, use_tqdm_batch=True, tqdm_kwargs=None, stopper=None, result_tracker=None, sub_batch_size=None, num_workers=None, clear_optimizer=False, checkpoint_directory=None, checkpoint_name=None, checkpoint_frequency=None, checkpoint_on_failure=False, drop_last=None) Train the KGE model. Parameters • num_epochs (int) – The number of epochs to train the model. • batch_size (Optional[int]) – If set the batch size to use for mini-batch training. Otherwise find the largest possible batch_size automatically. • slice_size (Optional[int]) – >0 The divisor for the scoring function when using slicing. This is only possible for LCWA training loops in general and only for models that have the slicing capability implemented. • label_smoothing (float) – (0 <= label_smoothing < 1) If larger than zero, use label smoothing. • sampler (Optional[str]) – (None or ‘schlichtkrull’) The type of sampler to use. At the moment sLCWA in R-GCN is the only user of schlichtkrull sampling. • continue_training (bool) – If set to False, (re-)initialize the model’s weights. Otherwise continue training. • only_size_probing (bool) – The evaluation is only performed for two batches to test the memory footprint, especially on GPUs. • use_tqdm (bool) – Should a progress bar be shown for epochs? • use_tqdm_batch (bool) – Should a progress bar be shown for batching (inside the epoch progress bar)? • tqdm_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments passed to tqdm managing the progress bar. • stopper (Optional[Stopper]) – An instance of pykeen.stopper.EarlyStopper with settings for checking if training should stop early • result_tracker (Optional[ResultTracker]) – The result tracker. • sub_batch_size (Optional[int]) – If provided split each batch into sub-batches to avoid memory issues for large models / small GPUs. • num_workers (Optional[int]) – The number of child CPU workers used for loading data. If None, data are loaded in the main process. • clear_optimizer (bool) – Whether to delete the optimizer instance after training (as the optimizer might have additional memory consumption due to e.g. moments in Adam). • checkpoint_directory (Union[None, str, Path]) – An optional directory to store the checkpoint files. If None, a subdirectory named checkpoints in the directory defined by pykeen.constants.PYKEEN_HOME is used. Unless the environment variable PYKEEN_HOME is overridden, this will be ~/.pykeen/checkpoints. • checkpoint_name (Optional[str]) – The filename for saving checkpoints. If the given filename exists already, that file will be loaded and used to continue training. • checkpoint_frequency (Optional[int]) – The frequency of saving checkpoints in minutes. Setting it to 0 will save a checkpoint after every epoch. 
• checkpoint_on_failure (bool) – Whether to save a checkpoint in cases of a RuntimeError or MemoryError. This option differs from ordinary checkpoints, since ordinary checkpoints are only saved after a successful epoch. When saving checkpoints due to failure of the training loop there is no guarantee that all random states can be recovered correctly, which might cause problems with regards to the reproducibility of that specific training loop. Therefore, these checkpoints are saved with a distinct checkpoint name, which will be PyKEEN_just_saved_my_day_{datetime}.pt in the given checkpoint_root. • drop_last (Optional[bool]) – Whether to drop the last batch in each epoch to prevent smaller batches. Defaults to False, except if the model contains batch normalization layers. Can be provided explicitly to override. Return type Returns The losses per epoch. property triples_factory The triples factory in the model. Return type TriplesFactory ## Base Classes¶ class TrainingLoop(model, optimizer=None, automatic_memory_optimization=True)[source] A training loop. Initialize the training loop. Parameters • model (Model) – The model to train • optimizer (Optional[Optimizer]) – The optimizer to use while training the model • automatic_memory_optimization (bool) – bool Whether to automatically optimize the sub-batch size during training and batch size during evaluation with regards to the hardware at hand. Find the maximum batch size for training with the current setting. This method checks how big the batch size can be for the current model with the given training data and the hardware at hand. If possible, the method will output the determined batch size and a boolean value indicating that this batch size was successfully evaluated. Otherwise, the output will be batch size 1 and the boolean value will be False. Parameters batch_size (Optional[int]) – The batch size to start the search with. If None, set batch_size=num_triples (i.e. full batch training). Return type Returns Tuple containing the maximum possible batch size as well as an indicator if the evaluation with that size was successful. property checksum The checksum of the model and optimizer the training loop was configured with. Return type str property device The device used by the model. classmethod get_normalized_name()[source] Get the normalized name of the training loop. Return type str sub_batch_and_slice(batch_size)[source] Check if sub-batching and/or slicing is necessary to train the model on the hardware at hand. Return type to_embeddingdb(session=None, use_tqdm=False)[source] Parameters • session – Optional SQLAlchemy session • use_tqdm (bool) – Use tqdm progress bar? Return type embeddingdb.sql.models.Collection train(num_epochs=1, batch_size=None, slice_size=None, label_smoothing=0.0, sampler=None, continue_training=False, only_size_probing=False, use_tqdm=True, use_tqdm_batch=True, tqdm_kwargs=None, stopper=None, result_tracker=None, sub_batch_size=None, num_workers=None, clear_optimizer=False, checkpoint_directory=None, checkpoint_name=None, checkpoint_frequency=None, checkpoint_on_failure=False, drop_last=None)[source] Train the KGE model. Parameters • num_epochs (int) – The number of epochs to train the model. • batch_size (Optional[int]) – If set the batch size to use for mini-batch training. Otherwise find the largest possible batch_size automatically. • slice_size (Optional[int]) – >0 The divisor for the scoring function when using slicing. 
This is only possible for LCWA training loops in general and only for models that have the slicing capability implemented. • label_smoothing (float) – (0 <= label_smoothing < 1) If larger than zero, use label smoothing. • sampler (Optional[str]) – (None or ‘schlichtkrull’) The type of sampler to use. At the moment sLCWA in R-GCN is the only user of schlichtkrull sampling. • continue_training (bool) – If set to False, (re-)initialize the model’s weights. Otherwise continue training. • only_size_probing (bool) – The evaluation is only performed for two batches to test the memory footprint, especially on GPUs. • use_tqdm (bool) – Should a progress bar be shown for epochs? • use_tqdm_batch (bool) – Should a progress bar be shown for batching (inside the epoch progress bar)? • tqdm_kwargs (Optional[Mapping[str, Any]]) – Keyword arguments passed to tqdm managing the progress bar. • stopper (Optional[Stopper]) – An instance of pykeen.stopper.EarlyStopper with settings for checking if training should stop early • result_tracker (Optional[ResultTracker]) – The result tracker. • sub_batch_size (Optional[int]) – If provided split each batch into sub-batches to avoid memory issues for large models / small GPUs. • num_workers (Optional[int]) – The number of child CPU workers used for loading data. If None, data are loaded in the main process. • clear_optimizer (bool) – Whether to delete the optimizer instance after training (as the optimizer might have additional memory consumption due to e.g. moments in Adam). • checkpoint_directory (Union[None, str, Path]) – An optional directory to store the checkpoint files. If None, a subdirectory named checkpoints in the directory defined by pykeen.constants.PYKEEN_HOME is used. Unless the environment variable PYKEEN_HOME is overridden, this will be ~/.pykeen/checkpoints. • checkpoint_name (Optional[str]) – The filename for saving checkpoints. If the given filename exists already, that file will be loaded and used to continue training. • checkpoint_frequency (Optional[int]) – The frequency of saving checkpoints in minutes. Setting it to 0 will save a checkpoint after every epoch. • checkpoint_on_failure (bool) – Whether to save a checkpoint in cases of a RuntimeError or MemoryError. This option differs from ordinary checkpoints, since ordinary checkpoints are only saved after a successful epoch. When saving checkpoints due to failure of the training loop there is no guarantee that all random states can be recovered correctly, which might cause problems with regards to the reproducibility of that specific training loop. Therefore, these checkpoints are saved with a distinct checkpoint name, which will be PyKEEN_just_saved_my_day_{datetime}.pt in the given checkpoint_root. • drop_last (Optional[bool]) – Whether to drop the last batch in each epoch to prevent smaller batches. Defaults to False, except if the model contains batch normalization layers. Can be provided explicitly to override. Return type Returns The losses per epoch. property triples_factory The triples factory in the model. Return type TriplesFactory ## Lookup¶ get_training_loop_cls(query)[source] Look up a training loop class by name (case/punctuation insensitive) in pykeen.training.training_loops. Parameters query (Union[None, str, Type[TrainingLoop]]) – The name of the training loop (case insensitive, punctuation insensitive). Return type Type[TrainingLoop] Returns The training loop class
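A minimal usage sketch of the API documented above. Names that do not appear on this page, such as the concrete model and dataset classes, the registered loop name "slcwa", the optimizer import, and the exact import path of get_training_loop_cls, are assumptions and may differ between PyKEEN versions:

```python
from torch.optim import Adam

from pykeen.datasets import Nations   # assumed dataset class, not documented above
from pykeen.models import TransE      # assumed model class, not documented above
from pykeen.training import get_training_loop_cls

# Resolve a training loop class by its (case/punctuation-insensitive) name.
TrainingLoopCls = get_training_loop_cls("slcwa")  # "slcwa" assumed to be a registered name

dataset = Nations()
model = TransE(triples_factory=dataset.training)
optimizer = Adam(model.parameters())

training_loop = TrainingLoopCls(
    model=model,
    optimizer=optimizer,
    automatic_memory_optimization=True,
)

# train() returns the losses per epoch; batch_size=None lets the loop search
# for the largest batch size that fits on the available hardware, and the
# checkpoint arguments make the run resumable after an interruption.
losses = training_loop.train(
    num_epochs=5,
    batch_size=None,
    checkpoint_name="my_training_loop.pt",
    checkpoint_frequency=30,  # minutes
)
print(losses)
```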
## Mission Overview

### HST Photometry and Astrometry of the Bootes I Ultrafaint Dwarf Galaxy (BOOCATS)

Principal Investigator: Imants Platais

HLSP Authors: Carrie Filion, Imants Platais, Rosemary Wyse, Vera Kozhurina-Platais

Released: 2022-09-27

Updated: 2022-09-27

Primary Reference(s):

Source Data:

## Overview

Bootes I is a nearby, relatively bright ultrafaint dwarf galaxy. This dataset consists of two catalogs of sources along the line of sight to the galaxy, produced from deep optical imaging of three fields taken with the Hubble Space Telescope Advanced Camera for Surveys, Wide Field Camera, in the F606W and F814W filters. The first catalog contains the photometry that the team produced for the sources in the fields. The second catalog contains relative proper motions for a subset of the brighter sources, which could be measured over a baseline of ~7 years thanks to the existence of earlier-epoch archival imaging.

## Data Products

The data file naming convention for the catalog files is:

hlsp_boocats_hst_acs-wfc_booi_f606w-f814w_v1_astrometric-catalog.csv
hlsp_boocats_hst_acs-wfc_booi_f606w-f814w_v1_photometric-catalog.csv

Data file types:

- _photometric-catalog.csv: Catalog containing HST photometry in the Vega magnitude system
- _astrometric-catalog.csv: Catalog containing relative proper motions

## Citations

Please remember to cite the appropriate paper(s) below and the DOI if you use these data in a published work.

Note: These HLSP data products are licensed for use under CC BY 4.0.
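Both catalogs are plain CSV files, so they can be read with standard tools. The sketch below uses pandas and simply inspects the column layout, since the columns are not listed on this page; it is illustrative only.

```python
import pandas as pd

# File names follow the HLSP convention given above.
phot = pd.read_csv(
    "hlsp_boocats_hst_acs-wfc_booi_f606w-f814w_v1_photometric-catalog.csv"
)
astrom = pd.read_csv(
    "hlsp_boocats_hst_acs-wfc_booi_f606w-f814w_v1_astrometric-catalog.csv"
)

# The column layout is not documented here, so inspect it before using it.
print(phot.columns.tolist())
print(astrom.columns.tolist())
print(phot.describe())
```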
• ### ARPES observation of Mn-pnictide hybridization and negligible band structure renormalization in BaMn$_2$As$_2$ and BaMn$_2$Sb$_2$(1608.06110) We performed an angle-resolved photoemission spectroscopy study of BaMn$_2$As$_2$ and BaMn$_2$Sb$_2$, which are isostructural to the parent compound BaFe$_2$As$_2$ of the 122 family of ferropnictide superconductors. We show the existence of a strongly $k_z$-dependent band gap with a minimum at the Brillouin zone center, in agreement with their semiconducting properties. Despite the half-filling of the electronic 3$d$ shell, we show that the band structure in these materials is almost not renormalized from the Kohn-Sham bands of density functional theory. Our photon energy dependent study provides evidence for Mn-pnictide hybridization, which may play a role in tuning the electronic correlations in these compounds. • ### Experimental Discovery of the First Nonsymmorphic Topological Insulator KHgSb(1605.06824) Topological insulators (TIs) host novel states of quantum matter, distinguished from trivial insulators by the presence of nontrivial conducting boundary states connecting the valence and conduction bulk bands. Up to date, all the TIs discovered experimentally rely on the presence of either time reversal or symmorphic mirror symmetry to protect massless Dirac-like boundary states. Very recently, it has been theoretically proposed that several materials are a new type of TIs protected by nonsymmorphic symmetry, where glide-mirror can protect novel exotic surface fermions with hourglass-shaped dispersion. However, an experimental confirmation of such new nonsymmorphic TI (NSTI) is still missing. Using angle-resolved photoemission spectroscopy, we reveal that such hourglass topology exists on the (010) surface of crystalline KHgSb while the (001) surface has no boundary state, which is fully consistent with first-principles calculations. We thus experimentally demonstrate that KHgSb is a NSTI hosting hourglass fermions. By expanding the classification of topological insulators, this discovery opens a new direction in the research of nonsymmorphic topological properties of materials. • ### Experimental evidence of large-gap two-dimensional topological insulator on the surface of ZrTe5(1601.07056) Two-dimensional (2D) topological insulators (TIs) with a large bulk band-gap are promising for experimental studies of the quantum spin Hall effect and for spintronic device applications. Despite considerable theoretical efforts in predicting large-gap 2D TI candidates, only few of them have been experimentally verified. Here, by combining scanning tunneling microscopy/spectroscopy and angle-resolved photoemission spectroscopy, we reveal that the top monolayer of ZrTe5 crystals hosts a large band gap of ~100 meV on the surface and a finite constant density-of-states within the gap at the step edge. Our first-principles calculations confirm the topologically nontrivial nature of the edge states. These results demonstrate that the top monolayer of ZrTe5 crystals is a large-gap 2D TI suitable for topotronic applications at high temperature.
# SSAT Preparation We've customized our program to the needs of first time test-takers to develop the skills necessary for success on this test day and all test days to come. Interested in learning more? Get in touch ## Process First time navigating a standardized test? While this may be your child’s first experience with standardized testing, it won’t be their last. We can help you start off on the right foot. 1 ### Determine when your child needs to take the test The SSAT is typically administered eight times annually. Since it takes around two weeks for the SSAT to report scores, make sure you account for that timeline alongside application deadlines before scheduling a test date. We recommend getting started early so that your child has plenty of time to prepare for the test and to retest if necessary. 2 ### Familiarize yourself with the format of the exam The SSAT features the following sections: Quantitative (Math), Verbal, Reading, Writing, and Experimental. The Middle Level SSAT exam (for students in grades 5-7) is 2 hours and 5 minutes, while the Upper Level SSAT exam (for students in grade 8-11) is 3 hours and 10 minutes. On both exams, neither the Writing nor the Experimental section count toward your child’s reported score. The writing sample is simply used to assess your child’s present writing skills, while the Experimental section is used as data for the test makers. 3 ### Have your child establish a baseline on a practice test Establishing a baseline score is an important first step in the process of studying for the exam, as it helps reveal your child’s natural strengths and weaknesses. We recommend that your child take a full-length diagnostic exam under timed conditions and in one sitting. 4 ### Create a preparation plan The SSAT is highly coachable, patterned, and predictable. We have found that a systematic tutoring program leads to dramatically superior outcomes, and we recommend a minimum of 12 sessions to cover the entirety of the test. Most of our students space these sessions out over the course of three months with a Comprehensive Package. 5 ### Start tutoring, and later practice testing Once we’ve matched your child with a tutor, the real work begins! Your child will meet with their tutor regularly (ideally, once or twice per week), complete homework in between sessions, and take practice tests. Practice tests are the best way for your tutor to understand your child’s performance, and to gauge progress during the course of tutoring. Based on the results of your child’s practice tests, your tutor will adjust their approach, and make sure your child is getting as much out of the process as possible. ## Tutor spotlight We are a cooperative of writers, historians, and mathematicians who provide exceptional one-on-one tutoring and standardized testing support to our students. ## Testimonials “We absolutely loved working with Tess. Our 10-year-old was struggling while preparing for her first SSAT and Tess made the process of preparation enjoyable and effective. While firm and organized, Tess gave our child personalized, clear, innovative methods of solving challenging problems. She ensured that our child felt supported, engaged, and motivated, all while pushing her to become a stronger student. ” Parent of SSAT StudentSSAT “Andrew was very thorough and detailed. He coached me well and gave me a lot of confidence in test taking. Andrew also helped me with individual school application preparation, including essay writing and interview preparation. 
Each session was well defined and we went over material that was structured to my needs.” Krishna SanthanaScored 99% SSAT, 96% HSPT

“Cambridge Coaching is fantastic! I've recommended Cambridge Coaching to several other parents. Troy has been an absolute pleasure to work with. He has great knowledge of the tests and helped our kids prepare without making them feeling more stressed. He is a treasure! Our daughter improved in Verbal, Reading, and Quantitative on the SSAT! We are thrilled. ” RajaniSSAT

“Mac is friendly, incredibly smart, excited about teaching and able to adapt his style to fit Rachel's needs and personality. He suggested new books to read and some great vocab boosters. He observed how she was approaching the exam and suggested alternatives that made sense. Rachel was so well prepared and felt so confident going in to the test after Mac's tutoring sessions that we knew she was able to do her best. Thank you! ” SusannaSSAT

“Tess is an amazing tutor. She approached SSAT preparation with the right balance of challenge and achievement, planned out the test practices and essays, created goals/milestones, and worked with our daughter on interview techniques and applications. Our daughter has a special confidence having worked with Tess that we have seen carry through to her classes and other interactions. In addition to the diligence and care with which she approaches her role as a tutor, she is a fantastic role model for a young person. ”

“My daughter loved Max. He eased her anxiety and gave her a plan she could follow, which helped her reach the scores she needed for admission to her target schools. Based on how happy she was with Max, all of her friends ended up signing up with Cambridge Coaching. ” MicheleSSAT

## FAQs

• ### What’s on the SSAT?
There are two versions of the SSAT: the lower level test, administered to students in grades 5–7, and the upper level test, administered to students in grades 8–11. Both versions of the SSAT contain a verbal section, a reading comprehension section, a quantitative section, and a writing sample. • ### How is the SSAT different from my child’s tests at school? These standardized tests are designed to be more difficult than the exams your child is currently encountering in school. Consequently, your child’s score on the SSAT may not reflect their current scores on school exams or grades. • ### I’ve heard of the ISEE as well - which test should my child take? While there is considerable overlap in content between the SSAT and ISEE, some schools will state an explicit preference for one over the other. Check the “Admissions” section of your target schools’ websites to get an idea of which test your schools prefer. If a given school requires one of the two exams, then you don’t have a choice, take that one! If you do have a choice, have your child take a practice SSAT and a practice ISEE exam. You can find practice tests at the back of most standard SSAT/ISEE prep books. Compare your child's scores and think about which exam they felt played more to their strengths, or simply, which felt more accessible to them. Most students perform similarly on the two exams, but if they are scoring much higher on one of them, it is better to focus on preparing for that one. • ### How is the SSAT scored? On the SSAT, each question is worth one “raw” point; each incorrect answer deducts ¼ point. Your child will receive both a scaled score and a percentile rank. The percentile rank represents the percent of students who have scored at or below your child’s scaled score. For the Middle Level SSAT, the total scaled score ranges from 1320 to 2130. For the Upper Level SSAT, the total scaled score ranges from 1500 to 2400. • ### When should my child take the SSAT? Most private school application deadlines are in January, though it varies from school to school. While the date you choose to test should ultimately be guided by how much time needed to prepare, we often recommend setting November or December as targets. Why? A September test date would only offer two months over the summer to prepare, which might not be enough, especially bearing in mind summer travel plans and extracurricular activities. October is usually busy with midterms or schoolwork (i.e., important grade determinants), and early January is often busy with prepping application materials and traveling for interviews. So, November or December it is! And, since it takes two weeks for the SSAT to report scores to schools, be sure to schedule your child’s final test date to meet application deadlines appropriately. • ### How long should my child study for the SSAT? As the SSAT is most likely the first time your child will encounter a standardized exam of this nature, it's recommended they start preparing as early as possible to get comfortable with the content, timing, and format of the test. The summer prior to a fall sitting of the exam tends to be the best time to begin prep, as your child will likely have more downtime outside of school. • ### Can my child take the SSAT more than once? The SSAT has no limits on retesting. Still, be aware that the SSAT is only administered eight times a year. ## Plans We’ve created a structured yet flexible pricing plan that offers everything you need to succeed on test day. 
### Hourly Rate (1 Hour)

All of our tutoring is available on an hourly basis. If you're not sure how much tutoring you'll need or when you plan to test, you can enroll in our "pay as you go" option.

Price per hour, by tutor tier: $120 / $160 / $240 / $290

### First Time Package (3 Hours)

The three session package is a good way to get a student's feet wet, evaluate the amount of tutoring they’ll ultimately need, and see if they feel comfortable with a tutor. Most students use this package to gauge their preliminary strengths and weaknesses so that the tutor can chart a longer term plan.

Price by tutor tier: $360 / $480 / $720 / $870

### Comprehensive Package (12 Hours, 5% off)

Our preferred approach to the SSAT offers complete coverage of the test. Our students learn all three sections of the test - quantitative (math), verbal, and reading comprehension - in detail. We review general test strategy and time management extensively.

Price by tutor tier: $1,368 (you save $72) / $1,824 (you save $96) / $2,736 (you save $144) / $3,306 (you save $174)

### 16-Hour Package (10% off)

Because the SSAT heavily rewards repetition and coverage, we offer this package to students who would prefer to space their preparation out over a longer duration. Some students find this package useful if they need to spend more time preparing for particular sections of the test or getting acclimated to the pressures of standardized testing.

Price by tutor tier: $1,728 (you save $192) / $2,304 (you save $256) / $3,456 (you save $384) / $4,176 (you save $464)

### Tutor Tiers

We have 4 tiers of coaches. The coach’s tier is based on the experience level of the coach with our team. All coaches begin working with Cambridge Coaching at the Standard tier.

- 0-150 hours
- Guru: 250-300 hours
# All Posts I am involved with some some experimental work and a lot of the video footage is recorded with a GoPro. They are durable and work well in the field. Unfortunately they don’t support timestamp overlay with the date and time on the video. I decided to use FFmpeg and a shell script to automate the process. The following script will take the video, extract the creation_time tag from the video and use that to generate the timestamp overlay. ## Python - Loops and Exception Block Else Statements The try/except block has an option else clause. That else clause is executed if an exception is not raised in the block. Loops, also have an else clause. I never thought that I would actually need to use those and thought they were superfluous. Today, I used both. In the following code, I wanted to create a folder, but wanted to make sure that I didn’t create a duplicate folder (i.e. I didn’t want to write files into the same folder). ## Windows Terminal and Cmder Using windows to develop can be a bit of a challenge. It doesn’t have any good tools for cross-platform python tools. On Linux, I use make and a makefile to orchestrate building and configuring virtual environments. Clone the repo and make venv and I have a functional and repeatable environment. A few months ago I discovered Cmder. I learned that it has make and most of the tools I use out of the box for windows. The only issue, it is a bit of a pain. Recently I decided to try installing windows terminal and host Cmder in that. That works really well and seems to be pretty stable and is relatively easy to install. It would be nicer if it was an automated install, but these instructions are not too bad. ## Uncertainty Propagation in Calculations In science and engineering, uncertainties and errors are a fact of life. This post is a study of how uncertainties can be used in calculations. More importantly, this post explores how uncertainty is propagated to derived variables. ## Fuel Tracker I have written a simple fuel tracker application and you can find it here. The idea is a simple system to keep track of fuel records far various vehicles I have owned through the years. I have been keeping track of my fuel records since 2002 across 4 vehicles. I have over 800 records stored in the database. A modest amount, but good information. ## LED Strip Calculations The idea is to construct a spiral (Archimedean spiral or others) around a right-cone simulating a Christmas tree. We want to model the situation and understand how many lights or how long the strip(s) should be to wrap the proper amount of loops around the tree. This blog will establish the basic model and mathematics. This article will walk you through the mathematical derivation and the calculations. The derivations are for completeness. An understanding of the process is not required to use the results. ## Configure Git Bash on Windows to run Make I have developed a Python template repository that contains a number of makefiles for managing repositories. Among the tasks, it can help with constructing virtual environments ($make venv) and installing all pip dependencies. It can optionally launch Jupyter notebooks ($ make launch). The real power comes from the fact that I can use the same set of commands for the basic management of the Python repositories. It is really very handy on Linux. I do development work on windows and I wanted to be able to use the makefiles there. Unfortunately, there wasn’t an easy way that I really liked. 
There are options like Cygwin and even WSL for Windows. Both of these options were too heavy to do what I wanted. ## Cribbage Strategies Are Explored With New Code And New Methods This is a rewrite of my previous cribbage article and my article on expected average. It also includes access to completely re-written code. The code is simplified and complete with unit tests. It uses the click library to drive a nice command line/terminal application. This article will assume you are familiar with the rules and the point counting conventions of cribbage. Some of the relevant counting and conventions will be reviewed. ## Vector Reflection (2D Derivation) This notebook will work through the explanation of determining the 2D vector reflection from a surface. I had a problem where I needed to determine the reflected vector from an incident vector in two dimensions. There are a lot of pages out there with good explanations. But there is a lot of seemingly conflicting information that caused me to ask some questions and spawn this article. This source is quite nice. It walks you through the steps and develops a valid relationship: ## Circle/Ray Intersection The problem: We have a 2D circle and we have a ray or line. What are the intersections points between the two, if any? ## Extract Email I have a lot of alerts configured with Google Scholar for various research interests. It’s a very cool concept, setting up a keyword search like blast fragmentation shockwave and Google sending you a summary email of new research that matches. ## Updated on 2021-08-01 New update generated. ## Parts of Hashes and Expected Collisions I am building a documentation system that works using Markdown for the documents and Pandoc to transform the documents to HTML, PDF, etc… It works well and is very easy to use. However there is a problem I have encountered. By default when Pandoc transforms a Markdown file to HTML, it automatically inserts section anchors. In Markdown, an ATX section header could look something like this: ## Measurement Uncertainty - How do measurement errors add up when working with areas? Typically when you measure things, there is a certain amount of error. In everyday life, this is ignored for the most part. I was thinking about uncertainty in measuring areas. Like everyone else, I learned about uncertainty in measurement in high school during science class. It was further reinforced in university in every lab I took. The problem was, it was addressed as a set of rules to memorize and apply that covered the use of significant figures. Most people do not fully explore what this means exactly and consequently have trouble with the concept outside the typical canned responses. I was having trouble understanding why calculating an area with uncertainty was expressed the way it was. ## Cleaning Thunderbird Linked to Gmail IMAP Account I was running backups on my system (Ubuntu 20.04) and realized that my ~/.thunderbird folder was huge. It was about 10 GB in size! That is a bit too much for my liking. I use Gmail as my primary provider. There were a lot of emails (over 50,000) stored on the server. I wanted to organize and move them offline to free up space. Why? Google has always touted that you shouldn’t have to delete anything. Well, I have email going back to 2004, and with Google’s announcement about photos I figured now was the time to free up space. 
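For reference, the relationship that the "Vector Reflection (2D Derivation)" entry above works toward is presumably the standard reflection formula, with d the incident vector and n̂ the unit surface normal:

$$\mathbf{r} \;=\; \mathbf{d} \;-\; 2\,(\mathbf{d}\cdot\hat{\mathbf{n}})\,\hat{\mathbf{n}}$$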
## Installing Python 3 from Source The purpose of this guide is to allow you to install python 3.x into Ubuntu Linux (or its variants) without affecting the system python installation used for system scripts. We’ll install to: ## Change Units in Regression Equation The purpose of this post is to explore the concept of changing the base units for a regression fit equation. Particularly the coefficients. The work is based on a paper called, “Prediction of Compressive Strength from Other Rock Properties.” ## Automating Borg Backup I wanted to use a proper backup solution to replace my rsync script. I decided to use BorgBackup as it seemed to suit the bill. It is a repository based system that has very strong deduplication algorithms. Essentially, you create a backup repository in a particular path and then backup folders and files to the repository. ## Updated on 2022-01-19 Add more traffic cameras, from Sudbury to North Bay. ## Jupyter in a Docker Container I have been using Jupyter notebooks in a virtual environment for some time now. I would compile the version of Python that I wanted into a local folder that did not require any special permissions. I would then create a virtual environment for Jupyter and proceed to install what I needed. Once completed I created the requirements file. It was fairly easy to update items. A bit time consuming and not fully automated. This didn’t work well for windows. I have to use conda for that platform. ## Python - Install from Source - Local This tutorial is about installing the latest versions of python from the source into the users home folder as opposed to a server-wide install. Normally /opt/python/python-3.6.1 would be the best choice for the installation. In this case, installing locally is fine as well. ## Docker - A Cool Option I decided to upgrade the VPS from Ubuntu 14.04 to 16.04. At the same time, I decided to use Docker containers to compartmentalize the blog. Docker is very cool technology that does for applications what version control has done for code. It allows you to create a container that has all of the dependencies of an application in one immutable container. This means, for me at least, that it is straightforward to upgrade the different pieces. ## Install Python and SQLite from Source I was writing some Python to pull text from pdf files and put them into a sqlite database so that I could perform full text searches for various keywords and phrases. I was able to extract the text and put it in the database. I was using Anaconda on windows to do this fully expecting to be able to do the same on Ubuntu (14.04). I had to replace the sqlite3.dll with the latest one from here because the sqlite3.dll included with Anaconda didn’t have FTS4 or FTS5 enabled. This was as simple as copying the new dll over top of the old one and running this script to verify the changes: ## Use Winscp to sync files from Windows to Linux Recently I upgraded my work laptop to Windows 7. At that time I didn’t want to use the previous sync methods that I have blogged about. I wanted to use something simpler (read easier to install and maintain between different machines). After doing some research I settled on using winscp. Winscp supports folder sync operations through a command line. Winscp takes a simple text file listing the commands that it is to execute. This process can be automated on Windows using batches, one to pull changes and the other to push changes. ## Updated Mercurial Batch Pull/Update Python Script It has been awhile since I last posted. 
Here is an update to the mercurial push, pull & update scripts I had posted earlier. The code is much better then the original scripts. All of the functionality is wrapped into one script instead of across a few. It should run on Windows without alteration (I’ll get a chance to test it out on Tuesday). ## Tolerance Testing - Determining an Appropriate Tolerance Programming with floating point values leads to numerical round off errors due to the nature of binary numbers. For a more detailed discussion see this article or this one. Basically it boils down to the fact that not all real numbers can be represented by a finite binary sequence. Due to this phenomena comparing floating point values directly is strongly discouraged as the results can be unexpected. Normally, the absolute value of the difference is taken and if it is less than some tolerance value it is accepted as a match. ## Team Combinations from a Limited Pool of Players I had to determine an arrangement of teams from a player pool. Specifically there were 9 players that needed to be organized into fair teams. It seemed straight forward to arrange them into 3 teams of 3 players. The other caveat was that the teams needed to be as fair as possible. Some players were highly skilled while others were not. It wouldn’t be fair to stack the best players on a single team. In order to determine a fair team I had to figure out how many combinations of teams were possible. This would allow me to iterate through all of the combinations and apply a metric to each combination. The combination that produced the minimal value would be the optimal arrangement. ## Sync files from Windows to Linux using SSH Over the weekend I decided to figure out how to sync files between windows based computers and Linux based computers, specifically Ubuntu. On windows I investigated a number of technologies. Finally I settled on cwrsync. The reason for the choice is that I really like rsync. I have a number of scripts that work really well (and are fast) that I use on my Linux boxes on a regular basis. There is rsync available in cygwin but that is far too heavy for simple file synchronization. cwrsync is the best of both worlds. It packages the cygwin dll and rsync binaries in a form that is easy to use on windows. ## Speed Up Factor I watch a lot of Coursera videos and usually view them at 1.25x or 1.5x normal viewing speed. I started thinking about how much that would translate into viewing time. ## Sequential Generation from an Index Value I needed to be able to generate a sequence of letters from a specific index value. Basically, I wanted to loop through a sequence of values and retrieve the corresponding string value. For example, 0 would be A; 4 would be E; 36 would be AK; etc. ## Rsync between Windows Folders Following from the last post, here is an example script that uses cwrsync to sync a network share and another folder. I had to map the network share to a drive before I could use it properly. ## Reinstall! Yesterday, everything was working well with my Ubuntu installation. I had to go and mess that up! I thought that I would go and remove packages that I no longer needed. After pruning the files from synaptic everything seemed OK till I restarted the computer. I couldn’t boot into the desktop. I figure I removed something critical. I spent a couple of hours trying to recover. ## Reinstall! I have been planning on moving from windows for awhile now. I just hadn’t really analyzed what was keeping me in windows. 
I finally got around to it and realized that I really only use VB.net and c#. With the mono framework, there should really be nothing to hold me back. So I made the decision to switch my home computer over. I had previously installed Ubuntu on my son’s computer and was very impressed with it. ## Python Script to Parse PFSense DHCP Log I have a captive portal setup on my PFSense which allows my laptops and various other devices to connect through wifi. I was looking at the DHCP logs provided by PFsense the other day and realized that I needed a way to verify the macs that were requesting ip addresses. I put together a python script that parses the log and attempts to match the mac addresses that I know with the ones in the log. Enjoy the code and note that the macs have been changed. A while back I finished a pretty good book on python Python Scripting for Computational Science by Hans Petter Langtangen (link). It was a pretty good introduction to python. I really liked the slant towards the sciences and engineering. The problem sets were good. ## New Server I have moved the blog from wordpress to a new hosting provider Cloud-A. They provide a server and I configure it to run. So far the process has been pretty straight forward. I have the blog running on Ubuntu 14.04 and hosted using Nginx. It really didn’t take to long to get things up and running. The longest part was converting my old posts to reStructuredText. I decided to use a so-called static blog generated called Nikola as opposed to another wordpress implementation. ## Mercurial and TortoiseHG on Ubuntu I like Mercurial[1] as a version control system because it is cross-platform (written in python) and is distributed (meaning it doesn’t require a central server to function). I use it on windows quite extensively and was one of the pieces of software that I needed on Linux. The other piece that I needed was TortoiseHG[2]. It is a graphical front end to mercurial and works well. ## Mercurial Push/Pull script with status checking This is a modification to the original script that I published a while back now checks the status (hg status) of the repository before doing anything. If there are uncommitted changes, a message is printed and the repository is ignored in the pull/update mechanism. The check for commit status is also made for pushes as well. It is a very nice improvement to the script. ## Mercurial Push/Pull and Update scripts I like Mercurial[1] as a version control system. It has a number of advantages over more traditional systems such as Subversion[2]. I won’t go into details, they are easy to find on the internet. What I have found with mercurial is that I organize all of my repos under a root directory. I also use TortoiseHG[3] as a graphical client that manages the commits and push/pull cycles. It works well for a single repository. Unfortunately it doesn’t work as well for a large number of repositories, that is it can’t do batch push/pull or updates. ## File and Folder Permissions As I get my Ubuntu system running the way I like I find I am copying files over from my old windows partitions (mp3’s, documents, pictures, etc.). I was looking at the permissions of my pictures - they were set to 777. I didn’t understand why. I think it has to do with the fact that I copied them from a windows ntfs partition. I can understand if it were set to 666, but having an the executable bit set really throw me. I wanted to change my pictures to permissions of 644. I tried running the chmod command in my home folder on my pictures. 
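A small Python sketch of the permissions fix described in the entry above, assuming the photos live under ~/Pictures: directories get 755 so they stay traversable, plain files get 644.

```python
import os

# Walk ~/Pictures and normalize permissions copied over from an NTFS partition:
# directories need the execute bit (755), plain files do not (644).
root = os.path.expanduser("~/Pictures")
for dirpath, dirnames, filenames in os.walk(root):
    os.chmod(dirpath, 0o755)
    for name in filenames:
        os.chmod(os.path.join(dirpath, name), 0o644)
```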
## Copy Pictures from a Digital Camera and Automatically Rename to Date and Time Taken Most digital cameras use some sort of naming scheme that leaves a lot to be desired. The names usually consist of something like: ## Convert MTS (AVCHD) Files to xvid I have a Panasonic Lumix camera that generates MTS (AVCHD) movie files. These files are 720p HD files and are really large. I want to store them in a smaller file format without sacrificing quality. Using ffmpeg it is pretty straight forward to convert an MTS (AVCHD) movie file to xvid using ffmpeg. Using the following command will accomplish the goal nicely: ## Convert MTS (AVCHD) Files to mkv Here is a simple shell script that will use ffmpeg to convert mts files to mkv format using the h264 codec to compress them. ## Convert MP3s to iPod Audio Book format (M4B) I had the need to convert a group of mp3 files into a format that was suitable for playing on my iPod. Of course the mp3s could be played directly on the iPod without any trouble. This is great for songs, but an audio book is significantly longer. In my case I have a 40 minute commute each way and most audio books are too long to listen to during a commute. The iPod supports m4b files which are audio book files and they remember where they were stopped so you can resume listening to it after putting the iPod to sleep or listening to your music collection. The audio book format also supports changing the play back speed so it will be read to you much faster. ## Configuring MathJax on Ghost I am going to add MathJax support. In the code injection portion of your settings, add the following code to the header injection mechanism: ## Configure Syntax Highlighting on Ghost For syntax highlighting I am going to use highlight.js because I don’t have to install anything. Simply add the following code to the blog header code injection in the settings: ## 5-pin Bowling Statistics Calculator My son plays 5 pin bowling and is a member of YBC Canada. I used to keep track of his average and some statistics using a spreadsheet. I would enter the data after the end of every series of games and then copy the cells down so that the formula were applied and the correct statistics were calculated. This process worked well enough except I started to notice small discrepancies between my calculations and the posted results.
# Internet Protocol

##### Definition

The Internet Protocol (IP) is the basic communication protocol in the Internet layer. IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers.

| Version | Address size | Address space | Example |
|---|---|---|---|
| IPv4 | 32 b = 4 B | ~4.3e9 addresses | 192.168.1.1 |
| IPv6 | 128 b = 16 B | ~3.4e38 addresses | 2001:0db8:85a3:0042:0000:8a2e:0370:7334 |

An IPv6 address is 16 bytes, written as 8 groups of 4 hexadecimal characters.

• Leading zeros within a group may be omitted.
• Consecutive groups of only zeros may be replaced by ::. This replacement is only allowed once within an address.

##### Example

Initial address: 2001:0db8:0000:0000:0000:ff00:0042:8329
Removing leading zeroes: 2001:db8:0:0:0:ff00:42:8329
Omitting consecutive groups of zeroes: 2001:db8::ff00:42:8329

(A short Python check of these rules appears after the header field descriptions below.)

An IPv4 address is 4 bytes, written as 4 groups of integers between 0 and 255, e.g. 192.168.0.0.

### Subnets

Subnets share a certain number of identical most-significant bits in their IP addresses (net prefix). The number $n$ of these bits is either noted as /n behind an IP address or as a subnet mask.

• Prefix: x.x.x.x/24: 24 bits for the network, 8 bits for the host
• Subnet mask: 255.255.255.0: 24 bits for the network, 8 bits for the host

### Reserved address ranges

Certain address ranges are reserved for special use cases.

IPv6:

• 2000::/3 (global unicast)
• 2002::/16 (global 6to4 tunnel)
• fc00::/7: Unique Local Addresses (ULA) for LANs
• fe80::/10: Link Local Addresses. Created by interfaces for status communication

IPv4:

• 0.0.0.0/8 (current network)
• 10.0.0.0/8 (private network)
• 100.64.0.0/10 (shared address space / carrier-grade NAT)
• 127.0.0.0/8 (loopback)
• 172.16.0.0/12 (private network)
• 192.168.0.0/16 (private network)
• 224.0.0.0/4 (multicast)
• 240.0.0.0/4 (reserved for future use)
• 255.255.255.255 (limited broadcast)

### IPv6 Header

The IPv6 header has a fixed size of 40 bytes.

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Version| Traffic Class |           Flow Label                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Payload Length        |  Next Header  |   Hop Limit   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                                                               +
|                                                               |
+                         Source Address                        +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                                                               +
|                                                               |
+                      Destination Address                      +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

##### Explanation of the Header Fields

Version – 4 bit: Internet Protocol version number = 6 (0b0110).

Traffic Class – 8 bit: Sets different priorities of IPv6 packets. The default value must be zero for all 8 bits.

Flow Label – 20 bit: May be used by a source to label sequences of packets for which it requests special handling by the IPv6 routers, such as non-default quality of service or "real-time" service. Hosts or routers that do not support the functions of the Flow Label field are required to set the field to zero when originating a packet, pass the field on unchanged when forwarding a packet, and ignore the field when receiving a packet.

Payload Length – 16-bit unsigned integer: Length of the IPv6 payload, i.e., the rest of the packet following this IPv6 header, in octets.

Next Header – 8 bit: Identifies the type of header immediately following the IPv6 header. Uses the same values as the IPv4 Protocol field.

Hop Limit – 8 bit unsigned integer: Decremented by 1 by each node that forwards the packet. When forwarding, the packet is discarded if Hop Limit was zero when received or is decremented to zero. A node that is the destination of a packet should not discard a packet with Hop Limit equal to zero; it should process the packet normally.

Source Address – 128 bit: Address of the originator of the packet.

Destination Address – 128 bit: Address of the intended recipient of the packet.
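As a quick check of the address-compression and prefix rules above, Python's standard ipaddress module reproduces them:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:ff00:0042:8329")

# .exploded restores all leading zeros; .compressed applies both rules
# (drop leading zeros, collapse one run of zero groups to "::").
print(addr.exploded)    # 2001:0db8:0000:0000:0000:ff00:0042:8329
print(addr.compressed)  # 2001:db8::ff00:42:8329

# Subnet arithmetic: a /24 prefix leaves 8 bits (256 addresses) for hosts.
net = ipaddress.IPv4Network("192.168.1.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256
```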
### IPv4 Header

The IPv4 header has a size of 20 bytes if options are not used.

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Version|  IHL  |Type of Service|          Total Length         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Identification        |Flags|      Fragment Offset    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Time to Live |    Protocol   |        Header Checksum        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                         Source Address                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Destination Address                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Options                    |    Padding    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

##### Explanation of the Header Fields

Version – 4 bit: The version of the internet header (4 = 0b0100).

IHL – 4 bit: The Internet Header Length (IHL) is the length of the internet header in 32 bit words, and thus points to the beginning of the data. The minimum value for a correct header is 5.

Type of Service – 8 bit: Indication of the abstract parameters of the quality of service desired.

Total Length – 16 bit: The length of the datagram, measured in octets, including internet header and data. This field allows the length of a datagram to be up to 65,535 octets. Such long datagrams are impractical for most hosts and networks. All hosts must be prepared to accept datagrams of up to 576 octets (whether they arrive whole or in fragments).

Identification – 16 bit: Value assigned by the sender to aid in assembling the fragments of a datagram.

Flags – 3 bit: Various control flags.

• Bit 0: reserved, must be zero
• Bit 1: (DF) 0 = May Fragment, 1 = Don't Fragment
• Bit 2: (MF) 0 = Last Fragment, 1 = More Fragments

Fragment Offset – 13 bit: Offset indicating where in the datagram this fragment belongs. The fragment offset is measured in units of 8 octets (64 bits). The first fragment has offset zero.

Time to Live (TTL) – 8 bit: This field indicates the maximum time the datagram is allowed to remain in the internet system. Every module that processes a datagram must decrease the TTL by at least one. If this field contains the value zero, then the datagram must be destroyed.

Protocol – 8 bit: This field indicates the next level protocol used in the data portion of the internet datagram. Protocol numbers were defined in RFC 1700 but are now maintained by IANA. The most important numbers are 1 (ICMP), 6 (TCP), and 17 (UDP).

Header Checksum – 16 bit: A checksum on the header only. Since some header fields change (e.g., time to live), this is recomputed and verified at each point that the internet header is processed. The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero.

Source Address – 32 bit: Address of the originator of the packet.

Destination Address – 32 bit: Address of the intended recipient of the packet.

Options – variable: The options field is not often used.

## IP protocol numbers

The protocol number is used in the Protocol field of the IPv4 header and the Next Header field of the IPv6 header.

| Hex | Dec | Abbr | Protocol | RFC |
|---|---|---|---|---|
| 0x01 | 1 | ICMP | Internet Control Message Protocol | RFC 792 |
| 0x02 | 2 | IGMP | Internet Group Management Protocol | RFC 1112 |
| 0x06 | 6 | TCP | Transmission Control Protocol | RFC 793 |
| 0x11 | 17 | UDP | User Datagram Protocol | RFC 768 |
| 0x33 | 51 | AH | Authentication Header | RFC 4302 |

A full list of all numbers can be found on IANA or on Wikipedia.
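The checksum rule quoted above is compact enough to implement directly. The Python sketch below sums the header as 16-bit words in one's-complement arithmetic and complements the result; the example header bytes are purely illustrative, and the checksum field is set to zero before the computation as the specification requires.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """16-bit one's complement of the one's-complement sum of all 16-bit words.

    The checksum field inside `header` must be zeroed before calling this.
    """
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    # Fold carries back into the low 16 bits (one's-complement addition).
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Example 20-byte header (illustrative values only), checksum field set to 0.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0,            # version/IHL, type of service
    20,                 # total length
    0x1C46, 0x4000,     # identification, flags + fragment offset
    64, 6,              # TTL, protocol (6 = TCP)
    0,                  # header checksum placeholder
    bytes([192, 168, 0, 1]),
    bytes([192, 168, 0, 199]),
)
print(hex(ipv4_checksum(header)))
```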
## What is eigen value and eigen function in chemistry

In quantum chemistry, the value of a property A can be predicted theoretically by operating on the wave function with the corresponding operator Â. Wave functions that satisfy Âψ = aψ, i.e. that are simply returned multiplied by a constant, are called eigenfunctions (or eigenstates) of the operator, and the constant a is the eigenvalue; multiple measurements of the property A on a system in such a state would all yield the same value a. Computations of eigenfunctions, such as the eigenbasis of angular momentum, tell you what is intrinsic to the system, and the eigenfunctions of the Hamiltonian represent the stationary states of the system.

The same vocabulary is used in linear algebra and control engineering, where solving eigenvalue problems is discussed in most courses. For a square matrix A, an eigenvector v and eigenvalue λ make the equation Av = λv true; the eigenvalues are the roots of the characteristic equation of A (a worked 2 × 2 example below gives the purely imaginary pair −j2 and +j2). Recall that the length of a vector with components x and y is given by l² = x² + y². When a model is represented using the state-space approach, the eigenvalues of the state matrix A are equivalent to the poles in the transfer-function approach.
# What is an eigenvalue and an eigenfunction in chemistry?

Eigenvalues are the special set of scalars associated with a system of linear equations; they are also termed characteristic values, characteristic roots, proper values, or latent roots. The roots of the characteristic equation of a matrix A are the eigenvalues of A. An eigenvector does not change direction under the transformation described by the matrix: the vector may change its length, or become zero ("null"), but not its direction. ("Eigen" simply means "proper" or "characteristic" in German.) In the case of degeneracy, more than one eigenvector is associated with the same eigenvalue.

In quantum chemistry, an operator is defined to be a mathematical symbol that, applied to a function, gives a new function. An eigenvalue equation is one in which operating on a function returns a constant times the same function; the function is called the eigenfunction and the constant is the associated eigenvalue. The time-independent Schrödinger equation is an example of an eigenvalue equation: the Hamiltonian operates on the eigenfunction (the wave function), and the eigenvalue is the energy of the state. Because measurable properties of a quantum system are quantized, a measurement of a property A can only yield one of the eigenvalues of the corresponding operator, and these can be predicted theoretically by operating on the wave function with that operator. Since the wave function depends on the quantum number n, we write it $\psi_n$; for bound states, such as an electron in a hydrogen atom or a particle in a box, there are many eigenfunction solutions, and their eigenvalues are interpreted as their energies. For a particle in a box the eigenfunctions are $\psi_n \propto \sin(n\pi x/L)$ for $0 < x < L$. We are usually only interested in the function itself and not in the constant in front of it, so the normalization constant is generally dropped.

Operators which satisfy the condition

$$\int \psi^{*}(\hat{A}\phi)\,d\tau = \int (\hat{A}\psi)^{*}\phi\,d\tau$$

for any two states $\psi$ and $\phi$ are called Hermitian. An important property of Hermitian operators is that their eigenvalues are real.

The eigenvalue and eigenfunction problem for a Fredholm integral operator consists of finding the complex numbers $\lambda$ for which there is a non-trivial solution (in a given class of functions) of the integral equation

$$\lambda A \phi = \lambda \int\limits_{D} K(x, s)\,\phi(s)\,ds = \phi(x),$$

where K(x, s) is a function (or matrix function) of two groups of variables x and s.
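As a quick check of the particle-in-a-box statement above, the following sympy sketch (added here purely for illustration; it is not part of the original page) applies the kinetic-energy Hamiltonian $\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}$ to $\psi_n = \sin(n\pi x/L)$ and recovers the familiar eigenvalue $E_n = n^2\pi^2\hbar^2/(2mL^2)$:

```python
import sympy as sp

x, L, n, hbar, m = sp.symbols('x L n hbar m', positive=True)

psi = sp.sin(n * sp.pi * x / L)                    # particle-in-a-box eigenfunction
H_psi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)    # apply H = -(hbar^2 / 2m) d^2/dx^2

E_n = sp.simplify(H_psi / psi)                     # constant ratio => psi is an eigenfunction
print(E_n)                                         # pi**2*hbar**2*n**2/(2*L**2*m)
```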
# Introduction

In the era of Artificial Intelligence, one should ideally be able to educate a robot about its mistakes, possibly without needing to dig into the underlying software. Reinforcement learning has become a standard way of training artificial agents that interact with an environment. Several works have explored the idea of incorporating humans in the learning process in order to help the reinforcement learning agent learn faster. In most cases, the guidance comes in the form of a simple numerical (or “good”/“bad”) reward. In this work, natural language is used as a way to guide an RL agent. The author argues that a sentence provides a much stronger learning signal than a numeric reward, in that we can easily point to where the mistakes occur and suggest how to correct them.

Here the goal is to allow a non-expert human teacher to give feedback to an RL agent in the form of natural language, just as one would to a learning child. The author focuses on the problem of image captioning, in which the quality of the output can easily be judged by non-experts.

# Related Works

Several works incorporate human feedback to help an RL agent learn faster.

1. Thomaz et al. [2006] exploit humans in the loop to teach an agent to cook in a virtual kitchen. The users watch the agent learn and may intervene at any time to give a scalar reward. Reward shaping (Ng et al. [1999]) is used to incorporate this information in the Markov Decision Process (MDP).
2. Judah et al. [2010] iterate between “practice”, during which the agent interacts with the real environment, and a critique session where a human labels any subset of the chosen actions as good or bad.
3. Griffith et al. [2013] propose policy shaping, which incorporates right/wrong feedback by utilizing it as direct policy labels.

The above approaches mostly assume that humans provide a numeric reward. A few attempts have been made to advise an RL agent using language.

1. Maclin et al. [1994] translated advice into a short program which was then implemented as a neural network. The units in this network represent Boolean concepts, which recognize whether the observed state satisfies the constraints given by the program. In such a case, the advice network will encourage the policy to take the suggested action.
2. Weston et al. [2016] incorporate human feedback to improve a text-based question answering agent.
3. Kaplan et al. [2017] exploit textual advice to improve the training time of the A3C algorithm in playing an Atari game.

The phrase-based image captioning model is similar to most image captioning models except that it exploits attention and linguistic information. Several recent approaches trained the captioning model with policy gradients in order to directly optimize for the desired performance metrics; this work follows the same line. There are also similar efforts on dialogue-based visual representation learning and conversation modeling. These models aim to mimic human-to-human conversations, while in this work the human converses with and guides an artificial learning agent.

# Methodology

The framework consists of a new phrase-based captioning model, trained with policy gradients, that incorporates natural language feedback provided by a human teacher. The phrase-based captioning model allows natural guidance by a non-expert.

### Phrase-based Image Captioning

The captioning model uses a hierarchical Recurrent Neural Network.
The model is composed of a two-level LSTM: a phrase RNN at the top level, and a word RNN that generates a sequence of words for each phrase. One can think of the phrase RNN as providing a “topic” at each time step, which instructs the word RNN what to talk about. The structure of the model is explained through the following figure. A convolutional neural network is used to extract a set of feature vectors $a = (a_1, \dots, a_n)$, with $a_j$ a feature at location j in the input image. These feature vectors are given to the attention layer. There are also two more inputs to the attention layer: the current hidden state of the phrase-RNN and the output of the label unit. The label unit predicts one out of four possible phrase labels, i.e., noun phrase (NP), prepositional phrase (PP), verb phrase (VP), or conjunction phrase (CP), plus an additional <EOS> token to indicate the end of the sentence. This information can be useful for the attention layer: for example, for an NP the model may look at objects in the image, while for a VP it may focus on more global information. The computations can be expressed with the following equations: $$\small{\text{Hidden state of the phrase-RNN at time step t}} \leftarrow h_t = f_{phrase}(h_{t-1}, l_{t-1}, c_{t-1}, e_{t-1}) \\ \small{\text{Output of the label unit}} \leftarrow l_t = \mathrm{softmax}(f_{phrase\text{-}label}(h_t)) \\ \small{\text{Output of the attention layer}} \leftarrow c_t = f_{att}(h_t, l_t, a)$$ After deciding on the phrases, the outputs of the phrase-RNN go to another LSTM to produce the words for each phrase. $w_{t,i}$ denotes the i-th word output of the word-RNN in the t-th phrase, and $h_{t,i}$ denotes the i-th hidden state of the word-RNN for the t-th phrase. $$h_{t,i} = f_{word}(h_{t,i-1}, c_t, w_{t,i-1}) \\ w_{t,i} = f_{out}(h_{t,i}, c_t, w_{t,i-1})$$
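The two update rules above amount to two nested LSTM cells plus a soft-attention module. The sketch below is a PyTorch-style illustration of that structure, not the authors' implementation; the dimensions, the additive attention form, the one-hot encoding of the previous word, and treating $e_{t-1}$ as an embedding of the previous phrase are all assumptions made here for concreteness.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT, HID, VOCAB, N_LABELS = 512, 256, 1000, 5   # 5 labels: NP, PP, VP, CP, <EOS>

phrase_rnn = nn.LSTMCell(HID + N_LABELS + FEAT, HID)   # input: [e_{t-1}; l_{t-1}; c_{t-1}]
word_rnn   = nn.LSTMCell(FEAT + VOCAB, HID)            # input: [c_t; w_{t,i-1}]
label_head = nn.Linear(HID, N_LABELS)                  # predicts the phrase label l_t
att_head   = nn.Linear(HID + N_LABELS + FEAT, 1)       # scores each image region
word_head  = nn.Linear(HID + FEAT, VOCAB)              # predicts the next word w_{t,i}

def attention(h_t, l_t, a):
    """Soft attention over image regions a: (n_regions, FEAT) -> context c_t: (1, FEAT)."""
    q = torch.cat([h_t.expand(a.size(0), -1), l_t.expand(a.size(0), -1), a], dim=1)
    w = F.softmax(att_head(q).squeeze(1), dim=0)
    return (w.unsqueeze(1) * a).sum(dim=0, keepdim=True)

# One phrase step followed by one word step (batch size 1)
a      = torch.randn(14 * 14, FEAT)       # flattened CNN feature map
h_p    = torch.zeros(1, HID)              # phrase-RNN state
c_p    = torch.zeros(1, HID)
e_prev = torch.zeros(1, HID)              # embedding of the previous phrase (assumed)
l_prev = torch.zeros(1, N_LABELS)
c_prev = torch.zeros(1, FEAT)

h_p, c_p = phrase_rnn(torch.cat([e_prev, l_prev, c_prev], dim=1), (h_p, c_p))
l_t = F.softmax(label_head(h_p), dim=1)   # phrase-label distribution
c_t = attention(h_p, l_t, a)              # attended image context

h_w, c_w = torch.zeros(1, HID), torch.zeros(1, HID)   # word-RNN state for this phrase
w_prev = torch.zeros(1, VOCAB)                        # one-hot of the previous word (assumed)
h_w, c_w = word_rnn(torch.cat([c_t, w_prev], dim=1), (h_w, c_w))
w_logits = word_head(torch.cat([h_w, c_t], dim=1))    # scores over the next word
```

In the full model the word RNN would be unrolled until an end-of-phrase token is produced and its output would feed the next phrase step; that loop is omitted here.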
# Liquid bubble

A bubble is a globule of one substance in another, usually gas in a liquid. Due to surface tension, bubbles may remain intact when they reach the surface of the immersing substance.

Common examples

Bubbles are seen in many places in everyday life, for example:
* As spontaneous nucleation of supersaturated carbon dioxide in soft drinks
* As water vapor in boiling water
* As air mixed into agitated water, such as below a waterfall
* As sea foam
* As given off in chemical reactions, e.g. baking soda + vinegar
* As a gas trapped in glass during its manufacture

Physics and chemistry

Bubbles form, and coalesce into globular shapes, because those shapes are at a lower energy state. For the physics and chemistry behind it, see nucleation.

Appearance

Humans can see bubbles because they have a different refractive index (RI) than the surrounding substance. For example, the RI of air is approximately 1.0003 and the RI of water is approximately 1.333. Snell's Law describes how electromagnetic waves change direction at the interface between two media with different RI; thus bubbles can be identified from the accompanying refraction and internal reflection even though both the immersed and immersing media are transparent. Note that this explanation only holds for bubbles of one medium submerged in another medium (e.g. bubbles of air in a soft drink); the volume of a membrane bubble (e.g. a soap bubble) does not distort light very much, and one can only see a membrane bubble due to thin-film diffraction and reflection.

Applications

Nucleation can be intentionally induced, for example to create bubblegram art.

Pulsation

When bubbles are disturbed, they pulsate (that is, they oscillate in size) at their natural frequency. Large bubbles (negligible surface tension and thermal conductivity) undergo adiabatic pulsations, which means that no heat is transferred either from the liquid to the gas or vice versa. The natural frequency of such bubbles is determined by the equation [Minnaert, Marcel, "On musical air-bubbles and the sounds of running water", Phil. Mag. 16, 235–248 (1933); Leighton, Timothy G., The Acoustic Bubble (Academic, London, 1994)]:

$$f_0 = \frac{1}{2\pi R_0}\sqrt{\frac{3\gamma p_0}{\rho}}$$

where:
* $\gamma$ is the specific heat ratio of the gas
* $R_0$ is the steady-state radius
* $p_0$ is the steady-state pressure
* $\rho$ is the mass density of the surrounding liquid

Smaller bubbles undergo isothermal pulsations. The corresponding equation for small bubbles of surface tension $\sigma$ (and negligible liquid viscosity) is

$$f_0 = \frac{1}{2\pi R_0}\sqrt{\frac{3 p_0}{\rho} + \frac{4\sigma}{\rho R_0}}$$

Excited bubbles trapped underwater are the major source of liquid sounds, such as when a rain droplet impacts a surface of water. [Prosperetti, Andrea; Oguz, Hasan N. (1993), "The impact of drops on liquid surfaces and the underwater noise of rain", Annual Review of Fluid Mechanics 25, 577–602, doi:10.1146/annurev.fl.25.010193.003045, http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.fl.25.010193.003045, accessed 2006-12-09] [Rankin, Ryan C. (June 2005), "Bubble Resonance", The Physics of Bubbles, Antibubbles, and all That, http://ffden-2.phys.uaf.edu/311_fall2004.web.dir/Ryan_Rankin/bubble%20resonance.htm, accessed 2006-12-09]
See also
* Sonoluminescence
* Bubble fusion
* Underwater acoustics

References
* [http://www.physicstoday.org/pt/vol-56/iss-2/p36.html Bubble physics] – touches on vapor pressure, bubble formation, bubble dynamics, cavitation, acoustic oscillations, sound of raindrops underwater, Rayleigh-Plesset equation, snapping shrimp, lithotripsy, ultrasonic cleaning, sonochemistry, sonoluminescence, medical reperfusion imaging, and micro-bubble therapy
* [http://natgeochannel.co.uk/podcasts/?id_p=272250478 Extra large bubbles]
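As a quick numeric check of the adiabatic (Minnaert) formula above, the following sketch computes the natural frequency of an air bubble in water; the bubble radius and liquid properties are assumed values, not taken from the article:

```python
import math

gamma = 1.4        # specific heat ratio of air
p0    = 101325.0   # steady-state (ambient) pressure [Pa]
rho   = 998.0      # density of water [kg/m^3]
R0    = 1e-3       # bubble radius [m] (assumed: 1 mm)

f0 = (1.0 / (2.0 * math.pi * R0)) * math.sqrt(3.0 * gamma * p0 / rho)
print(f"Minnaert frequency of a 1 mm air bubble in water: {f0:.0f} Hz")  # roughly 3.3 kHz
```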
5874   Fri Nov 11 13:35:19 2011 Katrin Update Green Locking Feedback to ETMY [Kiwamu, Katrin] Red and blue curves: frequency fluctuation of the beat note between the PSL and YARM lasers. Green and brown curves: actuation on ETMY. In ALS_CONTROL.adl, ETMY filter banks 4 and 5 were switched on. The gain was 0.3. Nice reduction of the frequency fluctuation. The y axis is in volts^2 per count. In order to convert to MHz/sqrt(Hz) you have to take the square root and then multiply by [20 Volts/(2^16) counts]*[10 Hz/0.04 V]. Started to scan the cavity, but this didn't work. Green light all out of lock. The IR beam was badly aligned to the cavity. Now, my time is over and I have to leave you. Thanks for your help and the nice time. 12999   Fri May 19 19:18:53 2017 Kaustubh Summary General Testing of the new Photo Detectors ET-3010 and ET-3040 Motivation: I got some hands-on experience on using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These were InGaAs photodetectors with model no.: 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we bought the ET-3010 model PD for the 40m lab. It has an operating bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency for getting data on the optical cavities and the resonating modes inside the cavity. We just tested the ET-3040 model today but will test the ET-3010 next week. Tools and Machines Used: We worked on the optical bench right in front of the main entrance to the lab. We put the cables, power cords, etc. in their respective places. We used screws, poles, T's, I's, a multimeter, the Network/Spectrum Analyzer (along with the moving table), a lab computer, an oscilloscope, a power supply and the aforementioned PDs for our testing. We took these items from the stack of tools at the Y-arm and the variously labelled boxes placed near the X-arm. We moved the Network Analyzer (along with the bench) from near the Y-arm to our workplace. Procedure: I will include a rough schematic of the setup later. We aligned the reference PD (High Speed Photoreceiver model 1611) and the test PD (ET-3040 in this case) to get optimal power output. We had set the pump current for the laser at 19.5 mA, which produced a power of 1.00 mW at the output of the fiber coupler. At the reference detector the measured voltage was about 1.8 V and at the DUT it was about 15 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity to 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm and its responsivity is about 0.9 A/W. This amounts to a power of about 0.33 mW. After measuring the DC voltages, we connected the laser input to the Network Analyzer and drove it with an RF signal at -10 dBm, with the modulation frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector is given to Channel A (CHA) and the output from the DUT is given to Channel B (CHB).
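The optical powers quoted above follow from dividing the DC voltage by the product of transimpedance and responsivity. A short sketch of that arithmetic, using the numbers from this entry:

```python
def optical_power(v_dc, transimpedance_ohm, responsivity_a_per_w):
    """Incident optical power inferred from a photodetector's DC output voltage."""
    return v_dc / (transimpedance_ohm * responsivity_a_per_w)

# Reference PD (1611): 1.8 V, 10 kOhm, 0.75 A/W  ->  ~0.24 mW
print(optical_power(1.8, 10e3, 0.75) * 1e3, "mW")
# DUT (ET-3040): 15 mV, 50 Ohm, 0.9 A/W          ->  ~0.33 mW
print(optical_power(15e-3, 50.0, 0.9) * 1e3, "mW")
```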
We got plots of the ratios between the reference detector, the DUT and the coupled reference for the transfer function and the phase. We found that the cut-off frequency for the ET-3040 model was at around 55 MHz (stated as >50 MHz in the data sheet). We have stored the data using the lab PC in the directory .../scripts/general/netgpibdata/data. Result: The bandwidth of the ET-3040 PD is as stated in the data sheet, >50 MHz. Precaution: These PDs have an internal power supply of 3 V for the ET-3040 and 6 V for the ET-3010. Do not leave these connected to any instruments after the experiments have been performed, or else the batteries will get drained if there is any photocurrent on the PDs. To Do: A similar procedure has to be followed in order to test the ET-3010 PD. I will be doing this tentatively on Monday. Attachment 1: IMG_20170519_173247922.jpg Attachment 2: IMG_20170519_173253252.jpg Attachment 3: IMG_20170519_173300174.jpg Attachment 4: PD_test_setup.png 13005   Mon May 22 18:20:27 2017 Kaustubh Summary General Testing of the new Photo Detectors ET-3010 and ET-3040 I am adding the text files with the data readings and parameter settings along with the Bode plot of the data. I plotted these graphs using the matplotlib module with Python 2.7. Quote: Motivation: I got some hands-on experience on using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These were InGaAs photodetectors with model no.: 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we bought the ET-3010 model PD for the 40m lab. It has an operating bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency for getting data on the optical cavities and the resonating modes inside the cavity. We just tested the ET-3040 model today but will test the ET-3010 next week... Attachment 1: ET-3040_test.zip Attachment 2: ET-3040_test.pdf 13009   Tue May 23 18:09:18 2017 Kaustubh Configuration General Testing ET-3010 PD This is in continuation of the previous (ET-3040 PD) test. The ET-3010 PD needs to be fiber coupled for optimal use. I will try to test this model without the fiber coupler tomorrow and see whether it works or not. 13011   Wed May 24 18:19:15 2017 Kaustubh Update General ET-3010 PD Test Summary: In continuation of the previous test conducted on the ET-3040 PD, I performed a similar test on the ET-3010 model. This model requires a fiber-coupled input for proper testing, but I tested it in free space without a fiber coupler as the laser power was only 1.00 mW and there was not much danger of scattering of the laser beam. The data sheet can be found here. Procedure: The schematic (attached below) and the procedure are the same as the previous time. The pump current was set to 19.5 mA, giving us a laser beam of power 1.00 mW at the fiber coupler output. The measured voltage for the reference detector was 1.8 V. For the DUT, the voltage is amplified using a low noise amplifier (model SR-560) with a gain of 100. Without any laser incident on the DUT, the multimeter reads 120.6 mV. After aligning the laser with the DUT, the multimeter reads 348.5 mV, i.e. the voltage due to the laser is 227.9/100 ≈ 2.28 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity to 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW.
The DC transimpedance of the DUT is 50 Ohm and the responsivity is around 0.85 A/W. Using this we calculate the power at the DUT to be 0.054 mW. After this we connect the laser input to the Network Analyzer (AG4395A) and give it an RF signal at -10 dBm, with the modulation frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector is given to Channel A (CHA) and the output from the DUT is given to Channel B (CHB). We got plots of the ratios between the reference detector, the DUT and the coupled reference for the transfer function and the phase. I stored the data under the directory .../scripts/general/netgpibdata/data. The Bode plot is attached below, and from it we observe that the cut-off frequency for the ET-3010 model is at least 500 MHz (stated as >1.5 GHz in the data sheet). Result: The bandwidth of the ET-3010 PD is at least 500 MHz, stated in the data sheet as >1.5 GHz. Precaution: The ET-3010 PD has an internal power supply of 6 V. Don't leave the PD connected to any instrument after the experimentation is done, or else the batteries will get drained if there is any photocurrent on the PDs. To Do: Calibrate the vertical axis in the Bode plot with transimpedance (Ohms) for the two PDs. Automate the procedure by making a Python script for taking multiple sets of readings from the Network Analyzer and also plot the error bands. Attachment 1: PD_test_setup.png Attachment 2: ET-3010_test.pdf Attachment 3: ET-3010_test.zip 13016   Sat May 27 10:26:28 2017 Kaustubh Update General Transimpedance Calibration Using Alberto's paper LIGO-T10002-09-R titled "40m RF PDs Upgrade", I calibrated the vertical axis in the Bode plots I had obtained for the two PDs ET-3010 and ET-3040. I am not sure whether the values I have obtained are correct or not (i.e. whether the calibration is correct or not). Kindly review them. EDIT: Attached the formula used to calculate the transimpedance for each data point and the values of the other parameters. EDIT 2: Updated the plots by changing the conversion for getting the ratio of the transfer functions from 10^(y/10) to 10^(y/20). Attachment 1: ET-3040_test_transimpedance.pdf Attachment 2: ET-3010_test_transimpedance.pdf Attachment 3: Formula_for_Transimpedance.pdf 13077   Fri Jun 23 02:43:43 2017 Kaustubh HowTo Computer Scripts / Programs Taking Measurements From AG4395A Summary: I have written a code (a basic one which needs a lot of improvements, but still does the job) for taking multiple measurements from the AG4395A. I have also written a separate code for plotting the data taken from the previous code along with the error bars up to 1 standard deviation. Details on How To Operate the AG4395A: 1. Under the 'Measurement' tab, press the 'Meas' button and select the Analyzer Type (Network Analyzer or Spectrum Analyzer). 2. Then under the same options select which 'ratio' needs to be measured (A/R, B/R or A/B). 3. Then press the 'Format' button to select what needs to be measured (e.g. Log|Mag|, Phase, etc.). 4. In order to measure and see two channels at the same time (e.g. Log|Mag| and Phase), press the 'Display' button and select 'Dual Channel'. 5. Using the 'Scale' button we can set the scale/div or use autoscale, and also set the attenuator values of the different channels. 6. The 'Bw/Avg' option gives us an averaging option which averages a few sets of data to produce the result.
In doing this we lose quite a lot of data, and the resulting plot isn't able to give us information on the statistical errors. 7. This option also allows us to set the 'Intermediate Frequency' bandwidth. This basically dictates the sampling rate of the Analyzer: the lower the IF bandwidth, the lower the noise (due to less uncertainty in frequency). 8. The 'Cal' button helps us calibrate the Analyzer to the current connections and signals. This is done because there is usually a difference in the 'cable lengths' for the two channels, which introduces an extra phase term depending upon the RF frequency. The calibration can be done simply by removing the Device Under Test (DUT) and directly connecting the coaxial cables to the channels. After this the 'Calibrate Menu' allows us to calibrate the response using the short, open and thru methods. 9. Now, under the 'Sweep' tab, the 'Sweep' button allows us to select various sweep options such as 'Sweep Time' (Auto, or set a time), 'Number of Points' (b/w 201-801) and 'Sweep Type' (Linear, Log, List Freq. etc.). 10. Using the 'Source' button we can set the source power in dBm units (usually kept at -20 to -10 dBm). 11. The scan range can be set in a few ways, such as using the start and end points or using the center and span range/width. 12. After setting up all of the above, we can take the measurement either from the analyzer itself or using one of the control PCs. The command to download the data from the AG4395A is netgpibdata -i 192.168.113.105 -d AG4395A -a 10 -f [filename]. Brief Details on How the 'AGmeasure' Command Works: AGmeasure is a Python script developed by some of the people who work at the 40m. It is set as a global command and can be used from within any directory. The source code is in the scripts folder on the network, or else it can also be found in Eric Quintero's git repository. This command accepts at the very least a parameter file. This is supposed to be a .yml file. A template (TFAG4395Atemplate.yml) can be found in the scripts folder or in Eric's repo. There are some other options that can be passed to this command; see the help for more details. The Multi_Measurement Script: This script calls the 'AGmeasure' command repeatedly and keeps storing the data files in a folder. Right now, the script needs to be fed the template file manually at the prompt. The Test_Plotting Script: This script plots a set of data files obtained from the above-mentioned script and produces a plot along with error bands up to 1 standard deviation of the data. The format (names) and total number of text files need to be explicitly known, for now at least. Attachments: 1. The output test files and the two scripts. 2. This is the 'Bode Plot' for a data set made using the above two scripts. To Do: • Improve upon the two scripts to be as compatible as the AGmeasure function itself. • Try and incorporate the whole script into AGmeasure itself, along with improving the templates. • The above details, with some edits perhaps, can go into the 40m wiki too(?). Update: Increased the font size in the plot. Added a few comments to the two scripts. To Do: Need to consider the transfer function as a single physical quantity (both the magnitude and phase), then take the averages and calculate the standard deviation, and then plot these results. EDIT: The attachment with the test files and the code now also contains a pdf with all the relations/equations I have used to calculate the averages and errors.
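The averaging and error-band idea described above can be sketched as follows. This is not the actual Multi_Measurement/Test_Plotting code; the file names and the column format (frequency in Hz, magnitude in dB, phase in degrees) are assumptions, and the one-sigma band shown is a deliberately crude one on the magnitude only:

```python
import numpy as np
import matplotlib.pyplot as plt

files = [f"TF_{i:02d}.txt" for i in range(10)]        # hypothetical sweep files
sweeps = [np.loadtxt(f) for f in files]               # columns: freq [Hz], mag [dB], phase [deg]

freq = sweeps[0][:, 0]
# Treat the transfer function as one complex quantity rather than dB and degrees separately
tfs = np.array([10 ** (s[:, 1] / 20.0) * np.exp(1j * np.deg2rad(s[:, 2])) for s in sweeps])

mean_tf = tfs.mean(axis=0)
mag = np.abs(mean_tf)
mag_std = np.abs(tfs).std(axis=0)                     # 1-sigma spread of the magnitude

plt.semilogx(freq, 20 * np.log10(mag), label="mean")
plt.fill_between(freq,
                 20 * np.log10(np.maximum(mag - mag_std, 1e-12)),
                 20 * np.log10(mag + mag_std), alpha=0.3, label="1-sigma band")
plt.xlabel("Frequency [Hz]")
plt.ylabel("Magnitude [dB]")
plt.legend()
plt.savefig("Bode_Plot_with_Error_Bands.pdf")
```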
Attachment 1: Test_Files_and_Code.zip Attachment 2: Bode_Plot_with_Error_Bands.pdf 13078   Fri Jun 23 02:55:18 2017 Kaustubh Update Computer Scripts / Programs Script Running I am leaving a script running on Pianosa for the night. For this purpose, even the AG4395A is kept on. I'll see the result of the script in the morning (it should be complete by then). Just check before fiddling with the Analyzer. Thank you. 13086   Thu Jun 29 00:13:08 2017 Kaustubh Update Computer Scripts / Programs Transfer Function Testing In continuation of my previous posts, I have been working on evaluating the transfer function data. Recently, I have calculated the correlation values between the real and imaginary parts of the transfer function. I have also written code for plotting the transfer function data stream at each frequency in the Argand plane, just for reference. I have also done a few calculations and found the errors in magnitude and phase using those in the real and imaginary parts of the transfer function. More details on the process are in this git repository. The following attachments have been added: 1. The correlation plot at different frequencies. This data is for 100 data files. 2. The test files used to produce the above plot, along with the code for plotting it as well as the text file containing the correlation values. (Most of the code is commented out, as that part wasn't needed for the recent changes.) Conclusion: Looking at the correlation values, it seems reasonable that the approximation of Gaussian real and imaginary parts actually holds. This is because the correlation values are mostly quite small. This can be seen by studying the distribution of the transfer function in the Argand plane. The entire distribution can be seen to be somewhat, if not entirely, circular. Even when the ellipticity of the curve seems to be high, the curve still appears to be elliptical along the real and imaginary axes, i.e., the correlation between them is still low. To Do: 1. Use a better way to estimate the errors in magnitude and phase, as the method used right now is only valid in the linear approximation and gives insane values which are totally out of bounds when the magnitude is extremely small and the phase is varying wildly. 2. Use the errors in the transfer function to estimate the coherence of the data at each frequency point, i.e., plot a coherence vs. frequency plot showing how the coherence of the measurements varies as the frequency is varied. In order to test the above again, with an even larger data set, I am leaving a script running on Ottavia. It should take more than just the night (I estimate around 10-11 hours) if there are no problems. Attachment 1: Correlation_Plot.pdf Attachment 2: 2x100_Test_Files_and_Code_and_Correlation_Files.zip 13109   Mon Jul 10 21:31:15 2017 Kaustubh HowTo Computer Scripts / Programs Details on Cavity Scan Analysis Summary: The following elog describes the procedure followed for generating a sample simulation of a cavity scan, fitting an actual cavity scan, and calculating the relevant parameters using the cavity scan and fit data. 1. Cavity Scan Simulation: 1. First, we define the sample cavity parameters, i.e., the reflectivities and transmissivities of the mirrors, the RoCs of the mirrors and the absolute cavity length. 2. We then define a frequency range, using the numpy.linspace function, over which we want to take a scan. 3.
We then define a function that returns the transmission power output of a Fabry-Perot cavity using the cavity equations, as follows: $$P_{t} = \left|\frac{t_{1}t_{2}}{1-r_{1}r_{2}\exp\left(i\left(\frac{4\pi Lf}{c}+(n+m+1)\phi_G\right)\right)}\right|^{2}$$ where $P_t$ is the ratio of the output power to the input power, $t_1, t_2, r_1, r_2$ are the transmissivities and reflectivities of the two mirrors, L is the absolute cavity length, f is the frequency of the input laser, c is the speed of light, and $\phi_G = \arccos{(g_{1}g_{2})}$ is the Gouy phase shift, with $g_1, g_2$ being the g-factors of the two cavity mirrors (g = 1 - L/R). 'n' and 'm' correspond to the TEMnm higher-order mode. (A small numerical sketch of this transmission function and of the Lorentzian fitting described below follows these cavity-scan entries.) 4. We now obtain a cavity scan by giving the above-defined function the cavity parameters and adding the outputs for different higher-order modes ('n', 'm' values). Appropriate factors for the HOMs need to be chosen. The above function with appropriate coefficients can also be used to add the modulated sidebands to the total transmission power. 5. To this total power we can add some random noise using the numpy module's random.normal function. We need to normalise the data with respect to the maximum power transmission ratio. 6. We can now perform fitting on the above data using the procedure stated in the next section and then plot the two data sets using the matplotlib module. 7. A similar code to do the above is given here. 2. Fitting a Cavity Scan: 1. The actual data for a cavity scan can be found in this elog entry or attached below in the zip folder. 2. We read this data and separate the frequency data and the transmission data. 3. Using the peakutils module's indexes function, we find the indices of the various peaks in the data set. 4. These peaks are from the fundamental resonances, the sideband resonances (both 11 MHz and 55 MHz) as well as a few HOMs. 5. Each of these resonances follows the cavity equations and hence can be modelled as a Lorentzian within a small interval around its peak frequency. A detailed description of how this is possible is given here and is in the attached zip folder ('Functionsused.pdf'). 6. We define a Lorentzian function which returns the following: $$\frac{a}{1+\left(\frac{\nu - \nu_0}{b}\right)^{2}}$$ where 'a' is the peak transmission value, 'b' is the 'linewidth' of the Lorentzian and $\nu_0$ is the peak frequency about which the cavity equations behave like a Lorentzian. 7. We now fit the various identified peaks with the Lorentzian function using the curve_fit function of the scipy module. Remember to set the 'absolute_sigma' parameter to 'True'. 8. The parameters thus obtained can be evaluated using the procedure given in the next section. 9. The total transmission power is evaluated by feeding the obtained parameters back into the Lorentzian function and adding the result for each peak. 10. We can plot the actual data set and the data obtained from the fits of the different peaks in one plot using the matplotlib module. We can also plot the residuals for a better depiction of the fit quality. 11. The code to analyse the above-mentioned cavity scan data is given here and attached below in the zip folder. 3. Calculating Physically Relevant Parameters: 1. The data obtained from fitting the peaks in the previous section now needs to be analysed in order to obtain some physically relevant information, such as the FSR value, the TMS value, the modulation depths of the sidebands and perhaps even the linear calibration of the frequency. 2. First we need to identify the fundamental, TEM00 resonances among all the peaks.
This we do by using the numpy.where function: we find the peaks with transmission values greater than 0.9 (or any suitable value). 3. Using these indices we now calculate the FSR and the finesse from the peaks. A description of the relation between the fit parameters and the FSR and finesse is given here. 4. We define a linear fitting function to fit the frequency values of the fundamental resonances against the index i of the i-th fundamental resonance. The slope of this line gives us the value of the FSR and the error in it. 5. The finesse can be calculated by fitting the linewidths with a constant function. 6. The cavity length can be calculated from the FSR as follows: $L = \frac{c}{2\nu_{FSR}}$. 7. Now, the approximate positions of the sideband resonances are given by ($11\times10^{6}$ Hz mod FSR) and ($55\times10^{6}$ Hz mod FSR) away from the fundamental, carrier resonances. 8. The modulation depth, 'm', is given by $\sqrt{\frac{P_{c}}{P_{s}}} = \frac{J_{0}(m)}{J_{1}(m)}$, where $P_c$ is the carrier transmission power, $P_s$ is the transmission power of the sideband and $J_v$ is the Bessel function of order 'v'. 9. We define a 'Bessel ratio' function, with which we fit the transmission power ratio of the carrier to the sideband for the multiple sideband resonances. 10. We also check for linearity in the frequency data by fitting the frequencies corresponding to peaks in the actual data against the ones obtained after fitting. 11. After this we attempt to identify the other HOMs. For this we first determine a rough estimate of the TMS value using the already-known parameters of the mirrors, i.e., the RoCs. We then look in small intervals (0.5 MHz) around the frequencies where we would expect the HOMs to be, i.e., 1*TMS, 2*TMS, 3*TMS... away from the fundamental resonances. These positions are all modulo the FSR. 12. After identifying the HOMs, we take their differences from the fundamental resonance and then study these modulo the FSR. 13. We perform a linear fit between these obtained values and (n+m). As 'n', 'm' are degenerate, we can simply perform the fit against some variable 'k' and obtain the value of the TMS as the slope of the linear fit. 14. The code to do the above-stated analysis is given here. Most of the above info and some smaller details can be found in the markdown readme file in this git repo. Attachment 1: Attachments.zip 13116   Thu Jul 13 16:10:34 2017 Kaustubh Summary Computer Scripts / Programs Cavity Scan Simulation Code The code to produce a cavity scan simulation, fit the data and re-evaluate the initially set parameters can be found in this git repo. The 'CavitScanSimulation' Python script now produces a cavity scan with custom parameters which can be easily modified. It also introduces the first TEMs (n+m = 0, 1, 2, 3, 4) to the laser, with power going as (1/(2(n+m)+1))^2 {selected carefully}. The only care that needs to be taken is that the frequency span should be somewhere near an integral multiple of the FSR, so that there are an equal number of resonances for all modes and sidebands. This code, as of now, also calls the 'FitCavityScan' script, which performs the fitting procedure on the data generated above {this data is actually written to a '.mat' file} and generates the fit parameter data files.
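For reference, here is a minimal numerical sketch of the cavity transmission function and the Lorentzian peak fit described in the cavity-scan entry above. It is not the 'CavitScanSimulation' or 'FitCavityScan' script; the mirror parameters, noise level and fit window are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

c = 299792458.0                       # speed of light [m/s]
L_cav = 37.79                         # cavity length [m] (illustrative)
r1 = r2 = 0.985                       # mirror amplitude reflectivities (illustrative)
t1 = t2 = np.sqrt(1 - r1 ** 2)        # lossless mirrors assumed

def transmission(f, n_plus_m=0, gouy=0.0):
    """Power transmission of a two-mirror cavity versus laser frequency f [Hz]."""
    phi = 4 * np.pi * L_cav * f / c + (n_plus_m + 1) * gouy
    amp = t1 * t2 / (1 - r1 * r2 * np.exp(1j * phi))
    return np.abs(amp) ** 2

def lorentzian(f, a, b, f0):
    """Local model of one resonance: a = peak height, b = half-width, f0 = peak frequency."""
    return a / (1 + ((f - f0) / b) ** 2)

fsr = c / (2 * L_cav)
f = np.linspace(0.1 * fsr, 2.1 * fsr, 20000)                      # scan two FSRs
data = transmission(f)
data = data / data.max() + np.random.normal(0, 0.005, f.size)     # normalise and add noise

i0 = np.argmax(data)                                              # highest peak
win = slice(max(i0 - 60, 0), i0 + 60)
popt, pcov = curve_fit(lorentzian, f[win], data[win],
                       p0=[1.0, 2e4, f[i0]],
                       sigma=0.005 * np.ones(f[win].size), absolute_sigma=True)
print("peak height, half-width [Hz], peak frequency [Hz]:", popt)
print("expected FSR [Hz]:", fsr)
```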
The Simulation code then calls the 'CalculatingPhysicalParameters' script which evaluates the data based on the Fit parameters and outputs some physically relevant results like the FSR, Finesse, Modulation Depths, TMS{Current Output is the Estimated RoCs of the two mirrors which isn't something we want directly, so it can be modified a bit to output TMS based on the HOMs}. The scripts do some 'Linearity' checks which might not really be of much significance but can be seen as a reference. Also, the ipython notebook will show all intermediate plots for the actual data and data with custom noise, fit data, FSR fitting, linearity checks, Bessel Ratio plot with mod_depths. Note: The scripts should be run using either an IDE like 'spyder'{for .py files}{Comes with Anaconda} or using an ipython notebook{for .ipynb files}. 13065   Thu Jun 15 14:24:48 2017 Kaustubh, JigyasaUpdateComputersOttavia Switched On Today, I and Jigyasa connected the Ottavia to one of the unused monitor screens Donatella. The Ottavia CPU had a label saying 'SMOKED''. One of the past elogs, 11091, dated back in March 2015, by Jenne had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphic problem, a damage to the display screen. Its a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it. 13067   Thu Jun 15 19:49:03 2017 Kaustubh, JigyasaUpdateComputersOttavia Switched On It has been working fine the whole day(we didn't do much testing on it though). We are leaving it on for the night. Quote: Today, I and Jigyasa connected the Ottavia to one of the unused monitor screens Donatella. The Ottavia CPU had a label saying 'SMOKED''. One of the past elogs, 11091, dated back in March 2015, by Jenne had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphic problem, a damage to the display screen. Its a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it. 13068   Fri Jun 16 12:37:47 2017 Kaustubh, JigyasaUpdateComputersOttavia Switched On Ottavia had been left running overnight and it seems to work fine. There has been no smell or any noticeable problems in the working. This morning Gautam, Kaustubh and I connected Ottavia to the Matrian Network through the Netgear switch in the 40m lab area. We were able to SSH into Ottavia through Pianosa and access directories. On the ottavia itself we were able to run ipython, access the internet. Since it seems to work out fine, Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now. Quote: It has been working fine the whole day(we didn't do much testing on it though). We are leaving it on for the night. Quote: Today, I and Jigyasa connected the Ottavia to one of the unused monitor screens Donatella. The Ottavia CPU had a label saying 'SMOKED''. One of the past elogs, 11091, dated back in March 2015, by Jenne had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. 
Once it is connected to the Martian Network we can test it further. The Donatella screen we used seems to have a graphic problem, a damage to the display screen. Its a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it. 13071   Fri Jun 16 23:27:19 2017 Kaustubh, JigyasaUpdateComputersOttavia Connected to the Netgear Box I just connected the Ottavia to the Netgear box and its working just fine. It'll remain switched on over the weekend. Quote: Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now. 5273   Sat Aug 20 00:42:22 2011 KeikoUpdateLSCTolerance of PRC, SRC, MICH length = 2 mm ? Keiko, Kiwamu I have run Kiwamu's length tolerance code (in CVS iscmodeling, ArmTolerance.m & analyseArmTorelance.m ) for the vertex ifo. In his previous post, he monte-carlo-ed the arm lengths and saw the histogram of the sensing matrix and the demodulation phase between POP55 MICH and POP55 SRCL. From these plots, he roughly estimated that the tolerance is about 1 cm (sigma of the rondom gaussian) and in that case POP55 MICH and SRCL is separated by the demodulation phase 60-150 degrees. This time I put the length displacements of random gaussian on PRC, SRC, MICH lengths at the same time (Fig.1). Fig. 1. History of random walk in PRC, SRC, MICH lengths parameter space. Same as Kiwamu's previous post, The position of the three degrees are randomly chosen with a Gaussian distribution function in every simulaton. This example was generated when \sigma = 1 cm for all the three lengths, where \sigma is the standard deviation of  the Gaussian function. The number of simulation is 1000 times. When the sigma is 1 cm, we found that the sensing matrix is quite bad if you look at Fig. 2. In Fig.2 row POP55, although the desired degrees of freedoms are MICH and SRCL, they have quite a bit of variety. Their separation in the demodulation phase is plotted in Fig.3. The separation in the demodulation phase varies from 40 degrees to 140 degrees, and around 270 degrees. It is not good as ideally we want it to be 90. Fig. 2 Histgram of the sensing signal power in the matrix when 1 cm sigma rondom gaussian is applied on PRC, SRC, MICH lengths. x axis it the signal power in log10. Fig.3 POP55 MICH and POP55 SRCL separation with the displacement sigma 1 cm. Kiwamu suspected that PRC length as more strict tolerance than other two (SRC, MICH) for POP55, as 55MHz is fast and can be sensitive to the arm length change. So I ran the same monte-carlo with SRC, MICH displacements but no PRC displacements when sigma is the same, 1cm. The results were almost same as above, nothing obvious difference. With 2mm sigma, the signal power matrix and the POP55 MICH and POP55 SRCL separation in the demodulation phase look good (Fig. 4 and Fig. 5). Fig.4 Signal power matrix when PRC, SRC, MICH lengths fractuate with random gaussian distribution with 2mm sigma. The signal powers are shown in log10 in x axis, and they do not vary very much in this case. Fig.5 POP55 MICH and POP55 SRCL separation with the displacement sigma 2 mm. The separation of the two signal is 60-90 degrees, much better than when sigma is 1 cm. We may need to check 60 degree separation is really ok or not. PRC SRC MICH lengths tolerances of 2 mm in the real world will be very difficult ! Next I will check what happens on 3f signals. 
Quote: Required arm length = 37.7974 +/- 0.02 [m] This is a preliminary result of the estimation of the Arm length tolerance. This number was obtained from a simulation based on Optickle. Note that the simulation was done by considering misplacements in only the arm lengths while keeping PRCL, SRCL and MICH at the ideal lengths. Therefore the tolerance will be somewhat tighter if misplacements in the central part are taken into account. Next : check 3f signals, and include misplacements in PRCL, SRCL and MICH.         Figure.2  A sensing matrix of the 40-m DRFPMI while changing the position of ETMX/Y by \sigma = 2 cm. For convenience,  only REFL11, AS55, POP11 and POP55 are shown. They are the designed signal ports that mentioned in the aLIGO LSC document (T1000298). In all the histograms, x-axis represents the optical gain in log scale in units of [W/m]. The y-axis is the number of events. The diagonal ports are surrounded by red rectangular window.         (Results2 : demodulation phase of MICH and SRCL on POP55) Now a special attention should be payed on the MICH and SRCL signals on POP55. Since MICH and SRCL are designed to be taken from POP55, they must be nicely separated in their demodulation phases. Therefore the demodulation phase of MICH and SRCL has to be carefully examined. The plot in Figure.3 is the resultant phase difference between MICH and SRCL on POP55 when \sigma_x = \sigma_y = 2 cm. As shown in the plot the phase are always within a range of 60 - 120 deg, which satisfies my requirement (2) mentioned in the last section.          Figure.3 Difference in the demodulation phase of MICH and SRCL on POP55. x-axis is the difference in the demodulation phase of MICH and SRCL, and y-axis the number of events. 5292   Tue Aug 23 17:51:37 2011 KeikoUpdateLSCTolerance of PRC, SRC, MICH length = 2 mm ? Keiko, Kiwamu We noticed that we have used wrong code for MICH degree of freedom for both of the ELOG entries on this topic (cavity lengths tolerance search). It will be modified and posted soon. 5334   Fri Sep 2 04:41:35 2011 KeikoUpdateLSCTolerance of PRC, SRC, MICH length = 5 mm ? Keiko, Kiwamu Length tolerance of the vertex part is about 5 mm. Sorry for my procrastinating update on this topic. In my last post, I reported that the length tolerance of the vertex ifo would be 2mm, based on Kiwamu's code on CVS. Then we noticed that the MICH degrees of freedom was wrong in the code. I modified the code and ran again. You can find the modified codes on CVS (40m folder, analyzeDRMITolerance3f.m and DRMITolerance.m) In this code, the arm lengths were kept to be ideal while some length offsets of random gaussian distribution were added on PRCL, SRCL and MICH lengths. The iteration was 1000 times for each sigma of the random gaussian distribution. The resulting sensing matrix is shown as histogram. Also, a histogram of the demodulation phase separation between MICH and SRCL is plotted by this code, as these two length degrees of freedom will be obtained by one channel separated by the demodulation phase. We check this separation because you want to make sure that the random length offsets does not make the separation of these two signals close. The result is a bit different from the previous post, in the better way! The length tolerance is about 5 mm for the vertex ifo. Fig.1 shows the sensing matrix. Although signal levels are changed by the random offsets, only few orders of magnitude is changed in each degrees of freedom. 
Fig.2 shows that the signal separation between MICH and SRCL at  POP55 varies from  55 to 120 degrees, which may be OK. If you have 1cm sigma, it varies from 50 degrees to 150 degrees. Fig. 1 Histgram of the sensing matrix including 3f channels, when sigma is 5mm. Please note that the x-axis is in long 10. Fig. 2 Histogram of the demodulation phase difference between MICH and SRCL, when sigma is 5 mm. To obtain the two signals independently, 90 is ideal. With the random offsets, the demodulation phase difference varies from 55 degrees to 120 degrees. My next step is to run the similar code for LLO. 5377   Sat Sep 10 14:55:28 2011 KeikoUpdateLSC3f demodulation board check To check the demodulation boards for REFL33 and REFL165, a long cable from ETMY (SUS-ETMY-SDCOIL-EXT monitor) is pulled to the rack on Y side. (1) A filter just after the RF input and (2) transfer function from the RF input to the demodulated signal will be checked for the two 3f demod boards to confirm that they are appropriate for 33 and 165 MHz. 5378   Sat Sep 10 16:10:42 2011 KeikoUpdateLSC3f demodulation board check There is a LP filter just after the RF input of an demodulation board (its schematic can be found as D990511-00-C on DCC). I have checked if the 3f freq, 33MHz, can pass  this filter. The filter TF from the RF input to RF monitor (the filter is between the input and monitor) on REFL33 demo-board was measured as shown in Fig. 1. At 33MHz, the magnitude is still flat and OK, but the phase is quite steep. I am going to consider if it is ok for the PDH method or not. Fig. 1 Transfer function from the RF input to RF monitor on the REFL33 demodulation board. At 33MHz, a very steep phase is applied on the input signal. Quote: To check the demodulation boards for REFL33 and REFL165, a long cable from ETMY (SUS-ETMY-SDCOIL-EXT monitor) is pulled to the rack on Y side. (1) A filter just after the RF input and (2) transfer function from the RF input to the demodulated signal will be checked for the two 3f demod boards to confirm that they are appropriate for 33 and 165 MHz. 5380   Sat Sep 10 18:57:52 2011 KeikoUpdateLSC3f demodulation board check The phase delay due to the RF input filter on the demodulation board will not bother the resulting PDH signals. I quickly calculated the below question (see the blue sentence in the quote below). I applied an arbitrary phase delay (theta) due to the filter I measured, on the detected RF signal by the photo detector. Then the filtered RF signal is multiplied by cos(omega_m) then filter the higher (2 omega_m) freqency as the usual mixing operation for the PDH signal. As a result, the I signal is delayed by cos(theta) and the Q signal is delayed by sin(theta). Therefore the resulting signals and its orthogonalitity is kept ok. From the sideband point of view, theta is applied on both upper and lower and seems to make the unbalance, however, as it is like a fixed phase offset on both SBs at the modulation frequency, the resulting signals is just multiplied by cos or sin theta for I and Q, respectively. It won't make any strange effect (it is difficult to explain by sentence not using equations!). Quote: There is a LP filter just after the RF input of an demodulation board (its schematic can be found as D990511-00-C on DCC). I have checked if the 3f freq, 33MHz, can pass  this filter. The filter TF from the RF input to RF monitor (the filter is between the input and monitor) on REFL33 demo-board was measured as shown in Fig. 1. 
At 33MHz, the magnitude is still flat and OK, but the phase is quite steep. I am going to consider if it is ok for the PDH method or not. Fig. 1 Transfer function from the RF input to RF monitor on the REFL33 demodulation board. At 33MHz, a very steep phase is applied on the input signal. Quote: To check the demodulation boards for REFL33 and REFL165, a long cable from ETMY (SUS-ETMY-SDCOIL-EXT monitor) is pulled to the rack on Y side. (1) A filter just after the RF input and (2) transfer function from the RF input to the demodulated signal will be checked for the two 3f demod boards to confirm that they are appropriate for 33 and 165 MHz. 5385   Sun Sep 11 22:36:32 2011 KeikoUpdateLSC3f demodulation board check Filters at the RF inputs of REFL33 and REFL165 demodulation boards were measured again. The filters will be totally fine for 33MHz and 165MHz. Last time I forgot to calibrate the cable lengths, therefore the phase delay of the measurement included the cable lengths. This time the measurements were done for REFL33 and REFL165 demod board with calibration. As the cable lengths were calibrated, the shown plots (Fig.1 and Fig.2) do not include the phase delay dues to measurement cables. Please note that the x-axis is in linear. The phase delays of both boards seems to be not too steep (it will not affect anyway, as Kiwamu pointed out in his comment on the previous post). You can see that the two filters do not filter 33MHz and 165MHz component out. Fig.1 A response of a filter which is placed just after the RF input of the demodulation board for REFL33. X-axis is shown in linear (~50MHz). Fig.2 A response of a filter which is placed just after the RF input of the demodulation board for REFL165. Quote: There is a LP filter just after the RF input of an demodulation board (its schematic can be found as D990511-00-C on DCC). I have checked if the 3f freq, 33MHz, can pass  this filter. The filter TF from the RF input to RF monitor (the filter is between the input and monitor) on REFL33 demo-board was measured as shown in Fig. 1. At 33MHz, the magnitude is still flat and OK, but the phase is quite steep. I am going to consider if it is ok for the PDH method or not. Fig. 1 Transfer function from the RF input to RF monitor on the REFL33 demodulation board. At 33MHz, a very steep phase is applied on the input signal. Quote: To check the demodulation boards for REFL33 and REFL165, a long cable from ETMY (SUS-ETMY-SDCOIL-EXT monitor) is pulled to the rack on Y side. (1) A filter just after the RF input and (2) transfer function from the RF input to the demodulated signal will be checked for the two 3f demod boards to confirm that they are appropriate for 33 and 165 MHz. 5386   Mon Sep 12 13:24:07 2011 KeikoUpdateLSC3f demodulation board check I also quickly checked the orthogonality of the demodulation board for REFL33 and REFL165 using function generators and oscilloscope. I checked the frequencies at 1,10,100,1K,10KHz of the demodulated signals. They are fine and ready for 3f signal extraction. 5387   Mon Sep 12 16:27:01 2011 KeikoUpdateLSC3f demodulation board check Wait. I am checking the whitening filters of the 33 and 165 demodulation boards. Also, LSC-REFL33-I-IN1(IN2, OUT) and LSC-REFL165-Q-IN1(IN2,OUT) channels may not be working?? Quote: I also quickly checked the orthogonality of the demodulation board for REFL33 and REFL165 using function generators and oscilloscope. I checked the frequencies at 1,10,100,1K,10KHz of the demodulated signals. 
5388   Mon Sep 12 18:40:35 2011   Keiko   Update   LSC   3f demodulation board check

The LSC-REFL33-I-IN1 (IN2, OUT) and LSC-REFL165-Q-IN1 (IN2, OUT) channels are back! We disconnected and reconnected the AA filters, and the channels are fixed. Apparently the AA filters just before the digital world were somehow charged up and not working... Thank you Kiwamu!

Quote: Wait. I am checking the whitening filters of the 33 and 165 demodulation boards. Also, the LSC-REFL33-I-IN1 (IN2, OUT) and LSC-REFL165-Q-IN1 (IN2, OUT) channels may not be working??

5394   Tue Sep 13 15:00:25 2011   Keiko   Update   LSC   3f demodulation board check

The whitening filters for the REFL33 and REFL165 demodulated channels were measured and confirmed to be working. They can be turned on and off with the un-whitening filter switches on the MEDM screen because they are properly linked. The measured filter responses are shown below. (Sorry, apparently the thumbnails are not shown here. Please click the attachments.)

Attachments: (top) Whitening filter for the REFL33 demodulation board. (bottom) Whitening filter response for the REFL165 demodulation board.

5399   Tue Sep 13 23:08:51 2011   Keiko   Update   LSC   3f demodulation board check

Keiko, Jamie, Kiwamu

The I and Q orthogonalities of the REFL33 and REFL165 demodulation boards were measured with "orthogonality.py". The Python package scipy was added on Pianosa to run this code. Please note that "orthogonality.py" can be run only on Pianosa. The results were:

REFL165 ABS = 1.070274, PHASE = -81.802479 [deg]
if you wanna change epics values according to this result, just copy and execute the following commands
ezcawrite C1:LSC-REFL165_Q_GAIN 0.934340 && ezcawrite C1:LSC-REFL165_PHASE_D -81.802479
- - - - - - - - - - - - - - - - - -
REFL33 ABS = 1.016008, PHASE = -89.618724 [deg]
if you wanna change epics values according to this result, just copy and execute the following commands
ezcawrite C1:LSC-REFL33_Q_GAIN 0.984244 && ezcawrite C1:LSC-REFL33_PHASE_D -89.618724

Fig. 1 and Fig. 2 are the resulting plots for the 33 and 165 MHz demod boards, respectively. You should look at 3 Hz on the x-axis, as the demodulated signal frequency was set to 3 Hz.

Fig. 1: REFL33 I and Q orthogonality at 3 Hz. Fig. 2: REFL165 I and Q orthogonality at 3 Hz.

5412   Thu Sep 15 01:06:20 2011   Keiko   Update   LSC   3f demodulation board check

In addition to REFL33 and REFL165, I checked the orthogonality of the other three existing channels.

AS11 ABS = 1.025035, PHASE = -93.124929 [deg]
REFL11 ABS = 0.920984, PHASE = -88.824691 [deg]
REFL55 ABS = 1.029985, PHASE = -90.901123 [deg]
- - - - - - - - - - - - - - - - - -
The demodulated signal was set to 50 Hz (for example, LO at 11 MHz and RF at 11 MHz + 50 Hz from function generators). AS11, REFL11, REFL55, REFL33, and REFL165 are the channels currently available in terms of the connection from the demodulation boards to the data system. I am going to estimate the error next.

Quote: REFL165 ABS = 1.070274, PHASE = -81.802479 [deg]
- - - - - - - - - - - - - - - - - -
REFL33 ABS = 1.016008, PHASE = -89.618724 [deg]
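The EPICS values quoted above and in the later entries appear to follow a simple pattern: the Q gain is the reciprocal of the measured I/Q amplitude ratio ABS, and PHASE_D is the measured phase. A small sketch (my own check, not part of the original entries) that reproduces the quoted numbers for the channels where both the measurement and the applied gain are listed:

```python
# Sketch of how the quoted ezcawrite values appear to relate to the orthogonality
# measurement: Q_GAIN compensates the measured amplitude ratio ABS (Q_GAIN = 1/ABS),
# and PHASE_D is the measured demodulation phase. Channel names follow the 40m
# convention used in the entries above.
measured = {
    "REFL165": {"abs": 1.070274, "phase": -81.802479},
    "REFL33":  {"abs": 1.016008, "phase": -89.618724},
    "AS11":    {"abs": 1.025035, "phase": -93.124929},
}

for name, m in measured.items():
    q_gain = 1.0 / m["abs"]          # e.g. 1/1.070274 = 0.934340 for REFL165
    print(f"ezcawrite C1:LSC-{name}_Q_GAIN {q_gain:.6f} && "
          f"ezcawrite C1:LSC-{name}_PHASE_D {m['phase']:.6f}")
```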
5413   Thu Sep 15 01:17:10 2011   Keiko   Update   LSC   MICH locked and attempt to lock PRCL

Anamaria, Keiko

- We aligned MICH and successfully locked MICH using AS55Q. The other mirrors were misaligned so that the other degrees of freedom did not exist. AS55 was fed back to the BS. The f2a filters on the BS suspension were required to lock, because the pos feedback was unbalanced and coupled into the angle degrees of freedom.
- We tried to lock PRCL next; however, because we aligned MICH and the REFL beam paths were changed, the REFL PDs no longer had light on them. The REFL paths have now been modified, so we will try PRCL locking next.
- We couldn't confirm REFL55 signals although we aligned the REFL paths onto the REFL55 PD.

5440   Fri Sep 16 21:26:12 2011   Keiko   Update   LSC   3f demodulation board check

The demodulation phases and gains for all the existing channels, AS11, REFL11, REFL55, REFL165, and REFL33, were adjusted with "ezcawrite" commands. The scripts are:

REFL165: ezcawrite C1:LSC-REFL165_Q_GAIN 0.934340 && ezcawrite C1:LSC-REFL165_PHASE_D -81.802479
REFL33: ezcawrite C1:LSC-REFL33_Q_GAIN 0.984244 && ezcawrite C1:LSC-REFL33_PHASE_D -89.618
REFL11: ezcawrite C1:LSC-REFL11_Q_GAIN 1.173418 && ezcawrite C1:LSC-REFL11_PHASE_D -442.882697
AS11: ezcawrite C1:LSC-AS11_Q_GAIN 0.975576 && ezcawrite C1:LSC-AS11_PHASE_D -93.12492
AS55: ezcawrite C1:LSC-AS55_Q_GAIN 0.999164 && ezcawrite C1:LSC-AS55_PHASE_D -89.300986

5441   Fri Sep 16 21:36:25 2011   Keiko   Update   LSC   POY11 and POY55 were added

New channels, POP55 and POY11, are connected to the rack and now available in the data system. POX11 I is not working; I didn't investigate what was wrong. Please check when you come to need POX11. The orthogonalities of POY11 and POP55 were measured and already adjusted. The results are below:

POY11 ABS = 0.973633, PHASE = 92.086483 [deg]
ezcawrite C1:LSC-POY11_Q_GAIN 1.027081 && ezcawrite C1:LSC-POY11_PHASE_D 92.086483
POP55 ABS = 1.02680579, PHASE = 88.5246 [deg]
ezcawrite C1:LSC-POP55_Q_GAIN 0.973894 && ezcawrite C1:LSC-POP55_PHASE_D 88.524609

5445   Sat Sep 17 01:53:41 2011   Keiko   Update   LSC   POY and POP beams clipped

Keiko, Paul, Kiwamu

We found that the POP beam is clipped by the steering mirrors inside the tank. The POY beam is also likely to be clipped inside. Also, the height of the POY beam is too high (about 5 cm higher than the normal paths) at the first lens. These imply that the input pointing is bad.

5464   Mon Sep 19 16:44:16 2011   Keiko   HowTo   LSC   Procedure for the demodulation board check

Here I note the procedure for the demodulation board orthogonality check for future reference.

1. Prepare two function generators and make sure the I and Q demodulation signals go to the data acquisition system.
2. Sync the two generators.
3. Drive one function generator at the modulation frequency and connect it to the LO input on the demod board.
4. Drive the other function generator at the modulation frequency + 50 Hz and connect it to the RF input.
5. Run "orthogonality.py" from a control computer, in the scripts/demphase directory. It returns the amplitude and phase information for the I and Q signals. If necessary, compensate the amplitude and phase with the command that "orthogonality.py" returns.

If you want to check in the frequency domain (optional):

1, 2, 3 are the same as above.
4. Drive the second function generator at the LO frequency and sweep the frequency, for example from 1 Hz to 1 kHz with a 50 ms sweep time. You can do this with the function generator's carrier frequency sweep option.
5. While sweeping, run "orthogonality.py".
6. The resulting plot from "orthogonality.py" will show the transfer function from the RF input to the demodulated signal. The data is saved in "dataout.txt" in the same directory.

5472   Mon Sep 19 23:19:40 2011   Keiko   Update   IOO   AM modulation mystery

Keiko, Anamaria

We started to investigate the AM modulation mystery again. Checking just after the EOM, there is AM modulation at about -45 dBm. Even if we adjust the HWP just before the EOM, the AM components grow back within about 5 minutes.
This is the same situation as before. The only difference from before is that we don't have a PBS and HWP between the EOM and the monitor PD, so we have a simpler setup this time. We will try to align the Pockels cell tomorrow during the day, as it may be a problem when the crystal and the beam are not well parallel. This adjustment has been done before and it didn't improve the AM level at that time.

5474   Tue Sep 20 03:02:23 2011   Keiko   Update   LSC   locking activity tonight

Keiko, Anamaria, Koji

We were not able to establish a stable DRMI lock tonight. We could lock MICH and PRCL quite well, and lock the three degrees of freedom at some strange operating point for several seconds quite easily, but the proper DRMI lock was not obtained. When MICH and PRC are locked to the carrier, the REFL DC PD reading drops from ~3000 counts to 2600-2700 counts as the REFL beam is absorbed by the PRC. We'll try to lock the PRC to the sidebands, but flipping the gain sign didn't work today, although it worked a few days ago. The POP beam (monitor) is useful for aligning the PRM.

5483   Tue Sep 20 16:31:24 2011   Keiko   Update   IOO   Small modulation depth

The modulation resonator box has been removed and the modulation depth is small right now. I broke the BNC connector on the modulation resonator box. The connector was attached by a screw inside very loosely, and when we connected and disconnected the BNC cables from the outside, extra force was applied to the cable inside and it broke. It is being fixed by Kiwamu and will be back shortly.

5484   Tue Sep 20 16:38:25 2011   Keiko   Update   IOO   Small modulation depth

The resonator box and the modulations are back now. But the modulation depth seems to be a bit smaller than yesterday, looking at the optical spectrum analyser.

Quote: The modulation resonator box has been removed and the modulation depth is small right now. I broke the BNC connector on the modulation resonator box. The connector was attached by a screw inside very loosely, and when we connected and disconnected the BNC cables from the outside, extra force was applied to the cable inside and it broke. It is being fixed by Kiwamu and will be back shortly.

5491   Tue Sep 20 23:01:37 2011   Keiko   Update   IOO   AM modulation mystery

Keiko, Suresh

AM modulations are still there... the mechanical design of the stages, RF cables, and connections is not good and is affecting the alignment. I write up the activity as a time series this time.

Because we suspect that a slight EOM misalignment to the beam produces the unwanted AM sidebands, we tried to align the EOM as well as possible. First I aligned the EOM tilt aligner so that the maximum power goes through. I found that about 5% of the power was being dumped by the EOM. After adjusting the alignment, the AM modulation seemed to be much better and stable; however, it came back after about 20 minutes. It grew to about -40 dBm, while the noise floor is -60 dBm (when AM is minimised, with a DC power of 8 V on the PDA255 photodetector).

We changed the EOM stage (below the tilt aligner) from a small plate to a large plate, so that the EOM base can be more stable. The EOM stands on a pile of several black plates. There was a gap below the tilt aligner because of a small plate, so we swapped the small plate for a large plate to eliminate the springy gap. However, it didn't make any difference; this is the current status and there are still AM modulations right now.

During the above activities, we learned that the main cause of the EOM misalignment may be the RF cables and the resonator box connected to the EOM. They are connected to the EOM by an SMA adaptor, not by any soft cables.
This very likely applies some torque to the EOM box. The resonator box is almost hanging from the EOM case, and just a slight touch changes the EOM alignment quite a bit and the AM modulation becomes large. I will replace the SMA connector between the resonator box and the EOM with a soft cable tomorrow, so that the box doesn't hang from the EOM. Also, I will measure the AM modulation depth so that we can compare it with the PM modulation depth.

Quote: Keiko, Anamaria. We started to investigate the AM modulation mystery again. Checking just after the EOM, there is AM modulation at about -45 dBm. Even if we adjust the HWP just before the EOM, the AM components grow back within about 5 minutes. This is the same situation as before. The only difference from before is that we don't have a PBS and HWP between the EOM and the monitor PD, so we have a simpler setup this time. We will try to align the Pockels cell tomorrow during the day, as it may be a problem when the crystal and the beam are not well parallel. This adjustment has been done before and it didn't improve the AM level at that time.

5495   Wed Sep 21 02:49:39 2011   Keiko   Summary   LSC   LSC matrices

I created 3 kinds of LSC matrices: the PRMI condition with the carrier resonant in the PRC, the PRMI condition with the SB resonant in the PRC, and the DRMI with the SB resonant in the PRC. The matrices are with AS55 and REFL11, which are used for locking right now. The signal numbers are written in log10, and the demodulation phases are shown in degrees. From the carrier-resonant PRMI to the SB-resonant PRMI, the demodulation phases change.

PRMI - Carrier resonant in PRC (signal, log10):
           PRCL      MICH      SRCL
REFL11     7.7079    2.9578    0
REFL33     5.2054    3.2161    0
REFL55     7.7082    2.9584    0
REFL165    3.9294    2.5317    0
AS11       1.0324    3.5589    0
AS33       1.0286    1.6028    0
AS55       1.1708    4.2588    0
AS165      1.1241    0.9352    0
POP11      2.8015    -1.3331   0
POP33      0.2989    -1.6806   0
POP55      2.8017    -0.6493   0
POP165     -0.9769   -2.3708   0
POX11      3.7954    -0.3363   0
POX33      1.293     -0.7058   0
POX55      3.796     0.355     0
POX165     0.0187    -1.3837   0

Dem Phase:
           PRCL      MICH      SRCL
REFL11     3         179       0
REFL33     165       -172      0
REFL55     13        170       0
REFL165    86        177       0
AS11       -32       73        0
AS33       176       -72       0
AS55       -41       12        0
AS165      -7        146       0
POP11      -11       -116      0
POP33      124       147       0
POP55      -54       -146      0
POP165     -117      -25       0
POX11      -87       15        0
POX33      -105      -80       0
POX55      -76       16        0
POX165     180       -91       0

PRMI - SB resonant in PRC (signal, log10):
           PRCL      MICH      SRCL
REFL11     7.6809    5.2777    0
REFL33     5.2465    3.1565    0
REFL55     7.2937    5.589     0
REFL165    4.3892    2.6857    0
AS11       1.3123    3.545     0
AS33       0.9331    1.6022    0
AS55       1.7425    4.0514    0
AS165      1.5838    1.1344    0
POP11      2.7745    0.3791    0
POP33      0.3401    -1.7392   0
POP55      2.3872    0.6904    0
POP165     -0.5171   -2.2279   0
POX11      3.7684    1.3574    0
POX33      1.3341    -0.7664   0
POX55      3.3815    1.6688    0
POX165     0.4785    -1.2163   0

Dem Phase:
           PRCL      MICH      SRCL
REFL11     155       -115      0
REFL33     -8        3         0
REFL55     91        -178      0
REFL165    -62       28        0
AS11       109       62        0
AS33       -39       99        0
AS55       13        -38       0
AS165      -155      168       0
POP11      141       -128      0
POP33      -48       -38       0
POP55      24        115       0
POP165     95        -176      0
POX11      65        155       0
POX33      83        95        0
POX55      2         92        0
POX165     32        123       0

DRMI - SB resonant in PRC (signal, log10):
           PRCL      MICH      SRCL
REFL11     7.6811    5.0417    4.2237
REFL33     5.2751    4.1144    3.7766
REFL55     7.2345    7.0288    6.6801
REFL165    4.3337    4.1266    3.7775
AS11       1.1209    3.512     0.9248
AS33       0.9159    1.6323    0.7971
AS55       2.6425    5.3915    2.5519
AS165      2.6423    2.4881    2.3272
POP11      2.7747    0.1435    -0.6846
POP33      0.3687    -0.7849   -1.122
POP55      2.3244    2.1302    1.7815
POP165     -0.5833   -0.8      -1.1548
POX11      3.7676    3.261     0.8086
POX33      1.3896    0.2372    0.2333
POX55      3.4619    3.0097    3.1326
POX165     0.782     0.6668    0.4357

Dem Phase:
           PRCL      MICH      SRCL
REFL11     154       -16       4
REFL33     -5        12        51
REFL55     129       -166      -123
REFL165    -23       40        83
AS11       132       79        69
AS33       -92       -127      -83
AS55       -33       -55       -5
AS165      154       179       -144
POP11      141       -29       -9
POP33      -46       -27       12
POP55      62        127       170
POP165     135       -161      -117
POX11      64        -102      -83
POX33      85        143       118
POX55      57        103       124
POX165     99        155       -164
5502   Wed Sep 21 16:44:18 2011   Keiko   Update   IOO   AM modulation mystery

The AM modulation depths are found to be 50 times smaller than the PM modulation depths: m(AM, f1) ~ m(AM, f2) = 0.003, while m(PM, f1) = 0.17 and m(PM, f2) = 0.19.

Measured values:
* DC power = 5.2 V, which is assumed to be 0.74 mW according to the PDA255 manual.
* AM f1 and AM f2 power = -55.9 dBm = 2.5 * 10^(-9) W. The AM f2 power is assumed to be similar to f1; I can't measure the f2 (55 MHz) level properly because the PD (PDA255) has a 50 MHz bandwidth.

From the relation (P_SB/P_CR) = (m/2)^2, where P_SB and P_CR are the sideband and carrier power respectively, I estimated the rough AM modulation depths. Although the DC power includes the AM SB powers, I assumed that the SB powers are small enough that the DC power can be considered the carrier power, P_CR. The resulting modulation depth is about 0.003. On the other hand, from the OSA, today's PM modulation depths are 0.17 and 0.19 for f1 and f2, respectively. Please note that these numbers contain (small) AM sideband components too. Compared with the PM sideband depths, the AM sidebands seem to be small enough.

Quote: Keiko, Suresh. AM modulations are still there... the mechanical design of the stages, RF cables, and connections is not good and is affecting the alignment.

Attachment 1: P9210138.JPG

5504   Wed Sep 21 18:53:03 2011   Keiko   Update   IOO   AM modulation mystery

The signal offset due to the AM modulation is estimated by a simulation, for PRCL for now. Please see the result below.

To see how bad or good the AM modulation with 1/50 of the PM modulation depth is, I ran a simulation. For example, I looked at the PRCL sweep signal for each channel. I tried three AM modulation depths: (1) m_AM = 0 and m_PM = 0.17, (2) m_AM = 0.003 and m_PM = 0.17, which is the current modulation situation, and (3) m_AM = 0.17 and m_PM = 0.17, in which AM has the same modulation depth as PM. For the current status, case (2), there are offsets on the signals of up to 0.002 while the maximum signal amplitude is 0.15. I can't tell how bad this is... Any suggestions?

(1) m_AM = 0 and m_PM = 0.17: there is no offset in the signals.
(2) m_AM = 0.003 and m_PM = 0.17: there are offsets on the signals of up to 0.002 while the maximum signal amplitude is 0.15.
(3) m_AM = 0.17 and m_PM = 0.17: there are offsets on the signals of up to 0.1 while the maximum signal amplitude is 0.2.

I will look at MICH and SRCL in the same way.

Quote: I'd like to see some details about how to determine that the ratio of 1:50 is small enough for AM:PM. * What have people achieved in the past, according to the elogs of these measurements? * What do we expect the effect of 1:50 to be? How much offset does this make in the MICH/PRC/SRC loops? How much offset is too much? Recall that we are using frontal modulation with a rather small Schnupp Asymmetry...
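As a quick cross-check of the modulation-index arithmetic in entry 5502 above, the following sketch (not part of the original elog) converts the quoted dBm sideband level and DC carrier power into an AM modulation index using the stated (P_SB/P_CR) = (m/2)^2 relation:

```python
import math

# Numbers quoted in entry 5502 (the 0.74 mW carrier power is the value read off
# the DC voltage using the photodiode manual; taken here as given).
P_carrier_W = 0.74e-3                      # DC (carrier) power on the PD [W]
P_sb_dBm = -55.9                           # measured AM sideband level [dBm]

P_sb_W = 10 ** (P_sb_dBm / 10.0) * 1e-3    # dBm -> W, roughly 2.6e-9 W
m_am = 2.0 * math.sqrt(P_sb_W / P_carrier_W)

print(f"P_sb = {P_sb_W:.2e} W, m_AM = {m_am:.4f}")   # ~0.004, i.e. of order the quoted 0.003
print(f"PM/AM ratio ~ {0.17 / m_am:.0f}")            # roughly the quoted factor of ~50
```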
5512   Thu Sep 22 01:45:41 2011   Keiko   Update   LSC   Locking status update

Keiko, Anamaria

Tonight we wanted to measure the LSC matrix for the PRMI and compare it with the simulation posted last night (#5495). First, we locked MICH and PRCL and measured the OLTF to see how good the locking is. The following rough swept-sine plots are the OLTFs for MICH and PRCL. The gain settings were -10 and 0.5 for MICH and PRCL, respectively, with the integrators off. Looking at the measured plots, MICH has about a 300 Hz UGF when the gain is -20, and PRCL also has about a 300 Hz UGF when the gain is 0.8.

As these locks seemed good, we tried the LSC matrix code written by Anamaria. However, it is not working well at this point. When the script adds excitations to the exc channels, it kicks the optics too hard and the locks are disturbed too much... Also, we have been trying to lock the PRC with the SB resonant, but it doesn't work. Looking at the simulated REFL11I (PRCL) signal (you can see it in #5495 too), the CR and SB resonances have opposite signs... but a negative gain never works for PRCL; it only excites the mirror rather than locking.

5520   Thu Sep 22 17:29:42 2011   Keiko   Update   IOO   AM modulation mystery

AM modulation will add an offset to the SRCL signal as well as to the PRCL signal, about 2% of the signal amplitude at the current AM level. MICH will not be affected very much.

Following #5504, I checked the MICH and SRCL signals in addition to the PRCL signals from the last post, to see the effect of the AM modulation on those signals. In the last post, PRCL (REFL11I) was found to have a 0.002 offset while the maximum signal amplitude we use is 0.15. Here, I did the same simulation for MICH and SRCL. As a result, the MICH signals are not affected very much: the AM modulation slightly changes the signal slopes, but apparently doesn't add offsets. SRCL is affected more, for the REFL signals: all the REFL channels get offsets of about 0.0015 while the signal amplitude varies up to 0.002. AS55I (currently used for SRCL) has a 1e-7 offset for a 6e-6 amplitude signal (in the last figure), which is the same offset-to-amplitude ratio as in the PRCL case.

(1) MICH signals at the AS port with AM m = 0
(2) MICH signals at the AS port with AM m = 0.003
(3) SRCL signals at the AS/REFL ports with AM m = 0
(4) SRCL signals at the AS/REFL ports with AM m = 0.003

Quote: How about changing the x-axis of all these plots into meters or picometers and telling us how wide the PRC resonance is? (something similar to the arm cavity linewidth expression) Also, there's the question of the relative AM/PM phase. I think you have to try out both I & Q in the sim. I think we expect Q to be the most affected by AM.

5538   Sat Sep 24 09:55:42 2011   Keiko   Update   IOO   AM modulation mystery

Since the night before last (Sep 22nd, Thursday night; sorry for my late update), there has been more AM modulation than I measured in the previous post. It is changing a lot, indeed! Looking at the REFL11 I and Q signals on the dataviewer, the signal offsets were huge, even after the "LSCoffset" script. Probably the AM modulation index was of the same order as the PM at that time. The level of the AM modulation index changes a lot depending on the EOM alignment, which is not very stable, and also on the environment such as temperature. To reduce the AM modulation, here I note some suggestions you may want to try:

* Change the SMA connectors between the RF resonator and the EOM to a soft but short cable, so that the resonator box doesn't hang from the EOM.
* Change the RF resonator base to stable posts. Right now several black plates are piled up to make one base.
* Install a temperature shield.
* Also, you probably want to change the BNC connector on the RF resonator to SMA.
* Be careful of the EOM yaw alignment. Pitch seemed to be less sensitive in producing AM than yaw alignment.

Quote: AM modulation will add an offset to the SRCL signal as well as to the PRCL signal, about 2% of the signal amplitude at the current AM level. MICH will not be affected very much. Following #5504, I checked the MICH and SRCL signals in addition to the PRCL signals from the last post, to see the effect of the AM modulation on those signals. In the last post, PRCL (REFL11I) was found to have a 0.002 offset while the maximum signal amplitude we use is 0.15.
Here, I did the same simulation for MICH and SRCL. As a result, the MICH signals are not affected very much: the AM modulation slightly changes the signal slopes, but apparently doesn't add offsets. SRCL is affected more, for the REFL signals: all the REFL channels get offsets of about 0.0015 while the signal amplitude varies up to 0.002. AS55I (currently used for SRCL) has a 1e-7 offset for a 6e-6 amplitude signal (in the last figure), which is the same offset-to-amplitude ratio as in the PRCL case.

6358   Mon Mar 5 18:12:00 2012   Keiko   Update   LSC   RAM simulation update

I wrote a RAM simulation script... it calculates the LSC signal offset and the operating-point offset as a function of the RAM modulation index. Configuration: RAM is added in optC1 by an additional Mach-Zehnder interferometer before the PRM. Both plots are for a PRCL sweep. Note that REFL33I is always almost zero.

Next step: check the LSC matrix at the offset operating point.

6363   Tue Mar 6 15:22:02 2012   Keiko   Update   LSC   RAM simulation update

Quote: I wrote a RAM simulation script... it calculates the LSC signal offset and the operating-point offset as a function of the RAM modulation index. Configuration: RAM is added in optC1 by an additional Mach-Zehnder interferometer before the PRM. Both plots are for a PRCL sweep. Note that REFL33I is always almost zero. Next step: check the LSC matrix at the offset operating point.

In the right figure, you can see non-zero operating points even when the RAM modulation index = 0. Apparently they come from the non-zero loss in the model (a loss of 50 ppm per mirror was assumed).

5233   Sun Aug 14 20:04:40 2011   Keiko, Anamaria, Jenne, and Kiwamu   Summary   Locking   central part ifo locking plan

GOAL: To lock the central part of the ifo. Here is the plan:

Mon - Assemble all the cables from the PDs and mixers, and check the CDS channels. Prepare the beamsplitters.
Tue - The current paths to REFL11 and REFL55 will be modified into the four paths to REFL11, 33, 55, and 165, and the PDs will be placed.
Wed, Thu - While waiting for the ifo to become available under vacuum, help align POP, POX, and POY. In parallel, a simulation to find the PRC and SRC length tolerances will proceed.
Fri - When the ifo becomes available under vacuum, the sensing signals from the 3f scheme will be obtained with the proper demodulation phases.
Sat - Try to lock the central part of the ifo with the new 3f signals.

2810   Mon Apr 19 16:31:42 2010   Kevin   Update   PSL   Innolight 2W Laser

Koji and Kevin

We unpacked the Innolight 2W laser, took an inventory, and scanned the operations manual.

[Edit by KA] The scanned PDFs are placed on the following wiki page: http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/PSL

We will measure the P-I curve, the mode profile, the frequency actuator responses, and so on.

2822   Tue Apr 20 20:15:37 2010   Kevin   Update   PSL   Innolight 2W Output Power vs Injection Current

Koji and Kevin measured the output power vs injection current for the Innolight 2W laser. The threshold current is 0.75 A. The following data was taken with the laser crystal temperature at 25.04 ºC (dial setting: 0.12).
Injection Current (A)   Dial Setting   Output Power (mW)
0.000                   0.0            1.2
0.744                   3.66           1.1
0.753                   3.72           4.6
0.851                   4.22           102
0.954                   4.74           219
1.051                   5.22           355
1.151                   5.71           512
1.249                   6.18           692
1.350                   6.64           901
1.451                   7.08           1118
1.556                   7.52           1352
1.654                   7.92           1546
1.761                   8.32           1720
1.853                   8.67           1855
1.959                   9.05           1989
2.098                   9.50           2146

Attachment 1: PvsI_2W.jpg

2828   Wed Apr 21 21:56:27 2010   Kevin   Update   PSL   Innolight 2W Vertical Beam Profile

Koji and Kevin measured the vertical beam profile of the Innolight 2W laser at one point. This data was taken with the laser crystal temperature at 25.04 °C and the injection current at 2.092 A. The distance from the razor blade to the flat black face on the front of the laser was 13.2 cm. The data was fit to the function y(x) = a*erf(sqrt(x)*(x-x0)/w) + b with the following results:

Reduced chi squared = 14.07
x0 = (1.964 +- 0.002) mm
w  = (0.216 +- 0.004) mm
a  = (3.39  +- 0.03) V
b  = (3.46  +- 0.03) V

Attachment 1: bp2.jpg
Attachment 2: bp2.dat

razor height (mm)   Voltage (V)
2.75                6.89
2.50                6.90
2.30                6.89
2.25                6.89
2.20                6.75
2.15                6.47
2.13                6.20
2.10                6.05
2.07                5.88
... 17 more lines ...
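For reference, here is a small sketch (not from the original elog) of how a fit of the quoted form could be reproduced from the nine data points listed above. The fit function is taken exactly as written in entry 2828, and the starting values are rough guesses; the full dataset in bp2.dat has 17 more points than are shown here, so the numbers will not match the quoted results exactly.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Razor-blade (knife-edge) scan data quoted above: razor height [mm] vs PD voltage [V].
x = np.array([2.75, 2.50, 2.30, 2.25, 2.20, 2.15, 2.13, 2.10, 2.07])
v = np.array([6.89, 6.90, 6.89, 6.89, 6.75, 6.47, 6.20, 6.05, 5.88])

# Fit function as written in entry 2828: y(x) = a*erf(sqrt(x)*(x-x0)/w) + b.
def profile(x, a, x0, w, b):
    return a * erf(np.sqrt(x) * (x - x0) / w) + b

p0 = [3.4, 1.96, 0.22, 3.5]              # rough starting values near the quoted result
popt, pcov = curve_fit(profile, x, v, p0=p0)
perr = np.sqrt(np.diag(pcov))

for name, val, err in zip(["a", "x0", "w", "b"], popt, perr):
    print(f"{name} = {val:.3f} +- {err:.3f}")
```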
### Inertia matrices and double precision ODE

Hi All,

I have a problem with the precision of ODE. The environment that I'm trying to use is:

• Ubuntu trusty
• ROS Indigo
• Gazebo 6

The model of my robot (about 0.4 metres tall) has links as small as this, with a mass of about 0.025 kg:

<geometry> <box size="0.045 0.022 0.0325"/> // Units in metres </geometry>

And this is the inertial element, with the real values, for the same link:

<inertial> <pose frame=''>0 0 0 0 -0 0</pose> <mass>0.0243577</mass> <inertia> <ixx>3.12641e-06</ixx> <ixy>0</ixy> <ixz>0</ixz> <iyy>6.25435e-06</iyy> <iyz>0</iyz> <izz>5.09279e-06</izz> </inertia> </inertial>

So the problem that I have is that the moments of inertia are too small and the model becomes unstable. I have tried scaling the values of the inertia matrix, but not the geometry, collisions, mass, etc. This gave some reasonable behaviour with Gazebo 2.2, but with Gazebo 6 the inertial values are then too big and the robot bounces with any slight movement. I was also thinking of increasing the precision of ODE to double, but I haven't found any tutorial on how to do this. I know that converting the measurement units to millimetres consistently across the model and ROS could fix the problem, but I'd prefer a more transparent solution rather than converting units back and forth and managing different units for the physical robot and the simulation.

Do you have any recommendation on how to deal with this?

EDIT: After applying debz and hsu's recommendations, I also realised that my joint controller was using an incorrect method to control the joint position. With all the corrections, the model works without problems.

Thanks,
Germán
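The inertia values in the question are consistent with a uniform solid box of the stated size and mass; here is a small check (my own, not from the original post) using the standard solid-box formulas:

```python
# Moments of inertia of a uniform solid box, to cross-check the <inertia> values above.
# For a box with side lengths (sx, sy, sz) and mass m:
#   ixx = m/12 * (sy**2 + sz**2), and cyclic permutations for iyy and izz.
m = 0.0243577                       # mass from the question [kg]
sx, sy, sz = 0.045, 0.022, 0.0325   # box size from the <geometry> tag [m]

ixx = m / 12.0 * (sy**2 + sz**2)
iyy = m / 12.0 * (sx**2 + sz**2)
izz = m / 12.0 * (sx**2 + sy**2)

print(f"ixx = {ixx:.5e}")   # ~3.13e-06, matches the SDF value
print(f"iyy = {iyy:.5e}")   # ~6.25e-06
print(f"izz = {izz:.5e}")   # ~5.09e-06
```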
Many news reports scare us with machines taking over our jobs in the not too distant future. Common examples of take-over targets include professions like truck drivers, lawyers and accountants. In this article we will explore how far machines are from replacing us (R programmers) in writing Shiny code. Spoiler alert: you should not be worried about your obsolescence right now. You will see in a minute that we're not quite there yet. I'm just hoping to show you, in an entertaining way, some easy applications of a simple recurrent neural network model implemented in the R version of Keras.

Let's formulate our problem once again precisely: we want to generate Shiny code character by character with a neural network.

## Background

To achieve that we need a recurrent neural network (RNN). By definition such a network does a pretty good job with time series. Right now you might be asking yourself, what? We defined our problem as a text mining issue; where is the temporal dependency here?! Well, imagine a programmer typing characters on his/her keyboard, one by one, at every time step. It would also be nice if our network captured long-range dependencies such as, for instance, a curly bracket in the 1021st line of code that refers to a "for" loop from line 352 (that would be a long loop though). Fortunately, RNNs are perfect for that because they can (in theory) memorize the influence of a signal from the distant past on a present data sample.

I will not get into the details of how recurrent neural networks work here, as I believe there are a lot of fantastic resources online elsewhere. Let me just briefly mention that regular recurrent networks suffer from a vanishing gradient problem. As a result, networks with such architectures are notoriously difficult to train. That's why machine learning researchers started looking for more robust solutions. These are provided by a gating mechanism that helps the network learn long-term dependencies. The first such solution was introduced in 1997 as the Long Short Term Memory neuron (LSTM). It consists of three gates: input, forget and output, which together prevent the gradient from vanishing in further time steps. A simplified version of the LSTM that still achieves good performance is the Gated Recurrent Unit (GRU), introduced in 2014. In this solution, the forget and input gates are merged into one update gate. In our implementation we will use a layer of GRU units.

Most of my code relies on an excellent example from Chapter 8 of Deep Learning with R by François Chollet. I recommend this book wholeheartedly to everyone interested in the practical basics of neural networks. Since I think that François can explain his implementation better than I could, I'll just leave you with it and get to the part I modified or added.

## Experiment

Before we get to the model, we need some training data. As we don't want to generate just any code, but specifically Shiny code, we need to find enough training samples. For that, I scraped the data mainly from this official shiny examples repository and added some of our semantic examples. As a result I generated 1300 lines of Shiny code.
Second, I played with several network architectures and looked for a balance between training speed, accuracy and model complexity. After some experiments, I found a suitable network for our purposes. (BTW, if you want to find out more about Keras in R, I invite you to take a look at a nice introduction by Michał.)

I trained the above model for 50 epochs with a learning rate of 0.02. I experimented with different values of a temperature parameter too. Temperature is used to control the randomness of a prediction by scaling the logits (the output of the last layer) before applying the softmax function. To illustrate, let's have a look at the output of the network predictions with temperature = 0.07, and with temperature = 1.

I think that both examples are already quite impressive, given the limited training data we had. In the first case, the network is more confident about its choices but also quite prone to repetitions (many spaces follow spaces, letters follow letters and so on). The latter, from a long, loooong distance, looks way closer to Shiny code. Obviously, it's still gibberish, but look! There is a nice function call heag(heig= x(input$obr)), object property input$obr, comment # goith and even variable assignment filectinput <- ren({. Isn't that cool? Let's now have a look at the evolution of training after 5 epochs: as you can see, after each round of training the generated text becomes increasingly structured.

## Final Thoughts

I appreciate that some of you might not be as impressed as I was. Frankly speaking, I can almost hear all of these Shiny programmers saying: "Phew… my job is secure then!" Yeah, yeah, sure it is… For now! Remember that these models will probably improve over time. I challenge you to play with different architectures and train some better models based on this example. And for completeness, here's the code I used to generate the fake Shiny code above:

You can find me on Twitter @doktaox
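The post's own training script (embedded in the original article) is not reproduced here. As a rough illustration of the kind of model it describes, below is a minimal character-level GRU generator with temperature sampling, written with the Python Keras API rather than the R interface used in the post. The corpus path, layer size, learning rate and all other hyperparameters are placeholders, not the author's values.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

text = open("shiny_corpus.R").read()          # placeholder path: scraped Shiny code
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

maxlen, step = 40, 3                          # context length and sampling stride
sentences = [text[i:i + maxlen] for i in range(0, len(text) - maxlen, step)]
next_chars = [text[i + maxlen] for i in range(0, len(text) - maxlen, step)]

# One-hot encode inputs and targets.
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.float32)
y = np.zeros((len(sentences), len(chars)), dtype=np.float32)
for i, sent in enumerate(sentences):
    for t, c in enumerate(sent):
        x[i, t, idx[c]] = 1.0
    y[i, idx[next_chars[i]]] = 1.0

# A single GRU layer followed by a softmax over the character set.
model = keras.Sequential([
    layers.GRU(128, input_shape=(maxlen, len(chars))),
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer=keras.optimizers.RMSprop(0.02))
model.fit(x, y, batch_size=128, epochs=50)

def sample(preds, temperature=1.0):
    """Draw the next character index, with the log-probabilities rescaled by temperature."""
    preds = np.log(np.asarray(preds, dtype=np.float64) + 1e-9) / temperature
    probs = np.exp(preds) / np.sum(np.exp(preds))
    return np.random.choice(len(probs), p=probs)

seed = text[:maxlen]
generated = seed
for _ in range(200):                          # generate 200 characters
    x_pred = np.zeros((1, maxlen, len(chars)), dtype=np.float32)
    for t, c in enumerate(seed):
        x_pred[0, t, idx[c]] = 1.0
    next_idx = sample(model.predict(x_pred, verbose=0)[0], temperature=0.5)
    seed = seed[1:] + chars[next_idx]
    generated += chars[next_idx]
print(generated)
```

Lowering the temperature argument in sample() makes the output more repetitive and confident, while temperature = 1 keeps the raw predicted distribution, matching the qualitative behaviour described in the post.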
# Charge transport in disordered organic matter: hopping transport

As I won a proposal today, I feel up to contributing once again some physics to this blog… I know, it has been a long long wait. So today it is time to consider some fundamentals of charge transport, as this is important not only for the extraction of charge carriers from the device (see earlier posts on mobility and efficiency, surface recombination velocity and photocurrent) but also for the nongeminate recombination (see e.g. photocurrent parts 2 and 3).

In disordered systems without long range order – such as an organic semiconductor which is processed into a thin film by spin coating – in which charge carriers are localised on different molecular sites, charge transport occurs by a hopping process. Due to the disorder, you can imagine that adjacent molecules are differently aligned and have varying distances across the device. Then, the charge carriers can only move by a combination of tunneling to cover the distance, and thermal activation to jump up in energy.

In the 1950s, Rudolph A. Marcus proposed a hopping rate (jumps per second) which is suitable to describe the local charge transport. By the way, he received the 1992 Nobel prize in chemistry for his contributions to this theory of electron transfer reactions in chemical systems. The equation he proposed for the hopping rate from site i to site j across the distance $r_{ij}$ is

$\nu_{ij} = \frac{|I_{ij}|^2}{\hbar}\sqrt{\frac{\pi}{\lambda kT}}\exp \left( -\frac{(\Delta G_{ij}+\lambda)^2}{4\lambda kT} \right)$ .

Here, $I_{ij}$ is the transfer integral, i.e. the wavefunction overlap between sites i and j, which is proportional to the tunnelling contribution. $\lambda$ is the reorganisation energy related to the polaron relaxation, which is sometimes called self-trapping: the molecule is distorted by the charge, which leads to a (lattice) polarisation, lowering the site energy. $kT$ is the thermal energy and $\Delta G_{ij}$ is due to different energetic contributions, in particular the energy difference between the two sites. In disordered systems, the density of states is often approximated by an exponential or Gaussian distribution, so that the energy of each site is drawn from this distribution. Integrating over all site energies just yields the chosen energy distribution, e.g. a Gaussian, once again. Then, $\Delta G_{ij}$ is just the energy difference of the two chosen sites. Thus, jumping from one molecular site to the next is proportional to the tunneling term and an exponential term governed by the site energy difference and the self-trapping of the charge on the initial molecular site. For a given molecule, the arrangement can be calculated by molecular dynamics, and the transfer integrals between different possible pairs of molecules, constituting sites i and j, respectively, can be calculated by quantum chemistry. A nice application of this approach is shown in [Kirkpatrick 2007] for discotic liquid crystals; for a more qualitative treatment without molecular dynamics, look at [Stehr 2011].

A simpler but more generic way to calculate a hopping rate is the so-called Miller-Abrahams hopping rate

$\nu_{ij} = \nu_0 \exp\left(-\gamma r_{ij} \right) \exp \left( -\frac{\Delta E_{ij}}{kT} \right)$.

Here, the contributions of tunnelling and thermal activation are even more explicit. $\nu_0$ is the maximum hopping rate, sometimes called the attempt-to-escape frequency.
$\gamma$ is the inverse localisation radius, stating how well charge carriers can tunnel across the distance $r_{ij}$ between sites i and j. Indeed, the first term denotes the tunneling contribution. The thermal activation comes from a Boltzmann term for hops upwards in energy, i.e. $\Delta E_{ij} > 0$: if the hopping process goes from an initial state i lower in energy to a final state j higher in energy, it is made difficult by an exponential penalty. Hopping downwards in energy ($\Delta E_{ij} < 0$) is approximated to be always similarly easy: the complete second term, the Boltzmann term, is replaced by $1$. In the Miller-Abrahams rate, the molecular details are usually neglected, so instead of transfer integrals only the attempt-to-escape frequency is approximated. Instead of the reorganisation energy, only energetic site differences derived from a (often Gaussian) density of states distribution are considered. Both models, the Marcus and the Miller-Abrahams hopping rate, are used in different contexts and are not exactly equivalent, but they will yield similar results under many conditions. Nevertheless, it is probably safe to state that the former has a higher scientific applicability.

Now why is it important to be able to calculate a hopping rate when considering charge transport in organic matter? If one knows the number of molecular sites across the device length $L$, which is a conservative estimate of the number of jumps $N$ needed to travel through the whole device, and the time needed per jump $t=1/\bar{\nu_{ij}}$, one can calculate the velocity $v=L/(N t) =\bar{\nu_{ij}} L/N = \bar{\nu_{ij}} \bar{r_{ij}}$. Here, $\bar{r_{ij}}$ is the average distance crossed per single jump. If the velocity is known, so is the charge carrier mobility, which is a very important figure of merit in semiconductor physics. The mobility $\mu$ relates the drift velocity $v$ to its driving force, the electric field $F$, so that $v = \mu F$. A lot of essential information on charge transport is contained in this inconspicuous parameter $\mu$, especially if disorder is considered.

The charge carrier velocity can thus be calculated from knowledge of the hopping rate as well as the time per hop and the number of hops. Also, $v$ can alternatively be determined experimentally by measuring the transit time of charge carriers through a device of known thickness. Thus, a direct comparison of experiment and simulation is possible and desirable for grasping how charge transport works. A suitable and very straightforward experiment is the transient photocurrent, also called time-of-flight (TOF) measurement. A fitting computer model is based on a kinetic Monte Carlo simulation, in which a certain spatial and energetic distribution of sites is assumed and the Marcus or Miller-Abrahams hopping rates are calculated. Next time, I will explain the TOF experiment, and then Monte Carlo simulations. Both together allowed (and still allow) us to understand charge transport in disordered organic semiconductors much better.

## 7 thoughts on “Charge transport in disordered organic matter: hopping transport”

1. Joshua says: Great article! You mentioned that the Miller-Abrahams model is not necessarily appropriate for modelling the same things as the Marcus model. Could you give an example of when the Miller-Abrahams model would be a more accurate representation of a system? I understand that the Marcus model takes into account polaron effects and the Miller-Abrahams doesn't explicitly account for them. 1.
Sorry, forgot to answer… and happy new year, by the way! If you compare the Miller-Abrahams and Marcus hopping rates, they are pretty similar. Tunnel term: the attempt-to-escape frequency in MA corresponds to Marcus' transfer integral. Boltzmann term: you are right, Marcus contains the reorganisation energy instead of disorder. However, you can modify the activation energy in MA to consider disorder and/or polaron effects as well. The main difference is that Marcus theory has an inverted regime where, for hops far down in energy, the hopping rate decreases again. In contrast, for MA, all hops downwards in energy have the same maximum hopping rate. 1. Joshua Brown says: Hi again, I realize that the inverted regime you mentioned has been supported by experimental data, but has anyone investigated the mechanisms responsible for it? Is the inverted regime a result of competing processes? 2. Hi again$^2$, the inverted regime was predicted by R. A. Marcus himself, and later proven experimentally. I just checked his Nobel prize lecture in paper form [Marcus 1993]; around Fig. 6 and 7 you find the explanation, which does not require additional competing processes. Sorry for not going into detail, but dinner with family waits;-) Best, Carsten 3. Joshua Brown says: Thanks for your comments. I was unable to find a satisfactory description in Marcus's 1993 paper, but one of the papers he references by J. R. Miller does provide a reasonable if somewhat brief explanation. http://pubs.acs.org/doi/pdf/10.1021/ja00322a058 I believe they are suggesting that the inverted regime is caused by an increased mismatch in the overlap of the vibrational wave functions. 2. Sarah says: Hey, I can't seem to find your follow-up post on Monte Carlo. I am curious to read on :) 1. Yes, hmmm, plans vs reality… I fear there are some more promises on this blog which deserve to be fulfilled. But thanks for pointing this one out, I really should do at least the "cartoon version";-)
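To make the two rate expressions from the post concrete, here is a small numerical sketch (my own illustration, with arbitrary parameter values that are not taken from the post) that evaluates the Marcus and Miller-Abrahams rates for a single pair of sites and turns an average rate and jump distance into a rough drift mobility via v = nu*r and mu = v/F:

```python
import numpy as np

kB_eV = 8.617e-5            # Boltzmann constant [eV/K]
hbar_eVs = 6.582e-16        # reduced Planck constant [eV*s]
T = 300.0                   # temperature [K]
kT = kB_eV * T

# Illustrative parameters only: transfer integral, reorganisation energy,
# site energy difference, inverse localisation radius, hop distance, attempt rate.
I_ij = 3e-3                 # transfer integral [eV]
lam = 0.3                   # reorganisation energy [eV]
dG = 0.05                   # site energy difference [eV]
gamma = 2e9                 # inverse localisation radius [1/m]
r_ij = 1e-9                 # hop distance [m]
nu0 = 1e12                  # attempt-to-escape frequency [1/s]

# Marcus rate
nu_marcus = (I_ij**2 / hbar_eVs) * np.sqrt(np.pi / (lam * kT)) \
            * np.exp(-(dG + lam)**2 / (4 * lam * kT))

# Miller-Abrahams rate (upward hop, dE > 0)
nu_ma = nu0 * np.exp(-gamma * r_ij) * np.exp(-dG / kT)

print(f"Marcus rate:          {nu_marcus:.3e} 1/s")
print(f"Miller-Abrahams rate: {nu_ma:.3e} 1/s")

# Very rough mobility estimate from an average hop rate and distance:
# v = nu * r, and mu = v / F for a given electric field F.
F = 1e7                     # electric field [V/m]
v = nu_marcus * r_ij
print(f"drift velocity ~ {v:.3e} m/s, mobility ~ {v / F:.3e} m^2/(V s)")
```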
Outlook: i-80 Gold Corp. Common Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Wait until speculative trend diminishes Time series to forecast n: 11 Mar 2023 for (n+6 month) Methodology : Modular Neural Network (Social Media Sentiment Analysis) ## Abstract i-80 Gold Corp. Common Shares prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and Multiple Regression1,2,3,4 and it is concluded that the IAUX stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ## Key Points 1. What is prediction model? 2. What is the best way to predict stock prices? 3. Decision Making ## IAUX Target Price Prediction Modeling Methodology We consider i-80 Gold Corp. Common Shares Decision Process with Modular Neural Network (Social Media Sentiment Analysis) where A is the set of discrete actions of IAUX stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Multiple Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (Social Media Sentiment Analysis)) X S(n):→ (n+6 month) $\begin{array}{l}\int {e}^{x}\mathrm{rx}\end{array}$ n:Time series to forecast p:Price signals of IAUX stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## IAUX Stock Forecast (Buy or Sell) for (n+6 month) Sample Set: Neural Network Stock/Index: IAUX i-80 Gold Corp. Common Shares Time series to forecast n: 11 Mar 2023 for (n+6 month) According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Wait until speculative trend diminishes X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for i-80 Gold Corp. Common Shares 1. The assessment of whether an economic relationship exists includes an analysis of the possible behaviour of the hedging relationship during its term to ascertain whether it can be expected to meet the risk management objective. The mere existence of a statistical correlation between two variables does not, by itself, support a valid conclusion that an economic relationship exists. 2. Interest Rate Benchmark Reform—Phase 2, which amended IFRS 9, IAS 39, IFRS 7, IFRS 4 and IFRS 16, issued in August 2020, added paragraphs 5.4.5–5.4.9, 6.8.13, Section 6.9 and paragraphs 7.2.43–7.2.46. An entity shall apply these amendments for annual periods beginning on or after 1 January 2021. Earlier application is permitted. If an entity applies these amendments for an earlier period, it shall disclose that fact. 3. 
However, depending on the nature of the financial instruments and the credit risk information available for particular groups of financial instruments, an entity may not be able to identify significant changes in credit risk for individual financial instruments before the financial instrument becomes past due. This may be the case for financial instruments such as retail loans for which there is little or no updated credit risk information that is routinely obtained and monitored on an individual instrument until a customer breaches the contractual terms. If changes in the credit risk for individual financial instruments are not captured before they become past due, a loss allowance based only on credit information at an individual financial instrument level would not faithfully represent the changes in credit risk since initial recognition. 4. If such a mismatch would be created or enlarged, the entity is required to present all changes in fair value (including the effects of changes in the credit risk of the liability) in profit or loss. If such a mismatch would not be created or enlarged, the entity is required to present the effects of changes in the liability's credit risk in other comprehensive income. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions i-80 Gold Corp. Common Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. i-80 Gold Corp. Common Shares prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and Multiple Regression1,2,3,4 and it is concluded that the IAUX stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ### IAUX i-80 Gold Corp. Common Shares Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementCaa2Ba3 Balance SheetBaa2Baa2 Leverage RatiosCaa2B1 Cash FlowBaa2C Rates of Return and ProfitabilityB1B2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 89 out of 100 with 558 signals. ## References 1. Chernozhukov V, Demirer M, Duflo E, Fernandez-Val I. 2018b. Generic machine learning inference on heteroge- nous treatment effects in randomized experiments. NBER Work. Pap. 24678 2. K. Tuyls and G. Weiss. Multiagent learning: Basics, challenges, and prospects. AI Magazine, 33(3): 41–52, 2012 3. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., MO Stock Price Prediction. AC Investment Research Journal, 101(3). 4. Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, Newey W. 2017. Double/debiased/ Neyman machine learning of treatment effects. Am. Econ. Rev. 107:261–65 5. Blei DM, Lafferty JD. 2009. Topic models. In Text Mining: Classification, Clustering, and Applications, ed. A Srivastava, M Sahami, pp. 101–24. 
Boca Raton, FL: CRC Press 6. Athey S, Bayati M, Imbens G, Zhaonan Q. 2019. Ensemble methods for causal effects in panel data settings. NBER Work. Pap. 25675 7. Scott SL. 2010. A modern Bayesian look at the multi-armed bandit. Appl. Stoch. Models Bus. Ind. 26:639–58 Frequently Asked QuestionsQ: What is the prediction methodology for IAUX stock? A: IAUX stock prediction methodology: We evaluate the prediction models Modular Neural Network (Social Media Sentiment Analysis) and Multiple Regression Q: Is IAUX stock a buy or sell? A: The dominant strategy among neural network is to Wait until speculative trend diminishes IAUX Stock. Q: Is i-80 Gold Corp. Common Shares stock a good investment? A: The consensus rating for i-80 Gold Corp. Common Shares is Wait until speculative trend diminishes and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of IAUX stock? A: The consensus rating for IAUX is Wait until speculative trend diminishes. Q: What is the prediction period for IAUX stock? A: The prediction period for IAUX is (n+6 month)
Outlook: Viracta Therapeutics Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Wait until speculative trend diminishes Time series to forecast n: 12 Feb 2023 for (n+16 weeks) Methodology : Transductive Learning (ML) ## Abstract Viracta Therapeutics Inc. Common Stock prediction model is evaluated with Transductive Learning (ML) and ElasticNet Regression1,2,3,4 and it is concluded that the VIRX stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ## Key Points 1. What are main components of Markov decision process? 2. Fundemental Analysis with Algorithmic Trading 3. What is Markov decision process in reinforcement learning? ## VIRX Target Price Prediction Modeling Methodology We consider Viracta Therapeutics Inc. Common Stock Decision Process with Transductive Learning (ML) where A is the set of discrete actions of VIRX stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(ElasticNet Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Transductive Learning (ML)) X S(n):→ (n+16 weeks) $∑ i = 1 n a i$ n:Time series to forecast p:Price signals of VIRX stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## VIRX Stock Forecast (Buy or Sell) for (n+16 weeks) Sample Set: Neural Network Stock/Index: VIRX Viracta Therapeutics Inc. Common Stock Time series to forecast n: 12 Feb 2023 for (n+16 weeks) According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for Viracta Therapeutics Inc. Common Stock 1. Paragraph 5.5.4 requires that lifetime expected credit losses are recognised on all financial instruments for which there has been significant increases in credit risk since initial recognition. In order to meet this objective, if an entity is not able to group financial instruments for which the credit risk is considered to have increased significantly since initial recognition based on shared credit risk characteristics, the entity should recognise lifetime expected credit losses on a portion of the financial assets for which credit risk is deemed to have increased significantly. The aggregation of financial instruments to assess whether there are changes in credit risk on a collective basis may change over time as new information becomes available on groups of, or individual, financial instruments. 2. 
If an entity measures a hybrid contract at fair value in accordance with paragraphs 4.1.2A, 4.1.4 or 4.1.5 but the fair value of the hybrid contract had not been measured in comparative reporting periods, the fair value of the hybrid contract in the comparative reporting periods shall be the sum of the fair values of the components (ie the non-derivative host and the embedded derivative) at the end of each comparative reporting period if the entity restates prior periods (see paragraph 7.2.15). 3. An entity shall apply the impairment requirements in Section 5.5 retrospectively in accordance with IAS 8 subject to paragraphs 7.2.15 and 7.2.18–7.2.20. 4. If a put option obligation written by an entity or call option right held by an entity prevents a transferred asset from being derecognised and the entity measures the transferred asset at amortised cost, the associated liability is measured at its cost (ie the consideration received) adjusted for the amortisation of any difference between that cost and the gross carrying amount of the transferred asset at the expiration date of the option. For example, assume that the gross carrying amount of the asset on the date of the transfer is CU98 and that the consideration received is CU95. The gross carrying amount of the asset on the option exercise date will be CU100. The initial carrying amount of the associated liability is CU95 and the difference between CU95 and CU100 is recognised in profit or loss using the effective interest method. If the option is exercised, any difference between the carrying amount of the associated liability and the exercise price is recognised in profit or loss. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Viracta Therapeutics Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Viracta Therapeutics Inc. Common Stock prediction model is evaluated with Transductive Learning (ML) and ElasticNet Regression1,2,3,4 and it is concluded that the VIRX stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ### VIRX Viracta Therapeutics Inc. Common Stock Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementCBaa2 Balance SheetBaa2C Leverage RatiosBaa2Ba3 Cash FlowBaa2B2 Rates of Return and ProfitabilityBaa2Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 83 out of 100 with 607 signals. ## References 1. Morris CN. 1983. Parametric empirical Bayes inference: theory and applications. J. Am. Stat. Assoc. 78:47–55 2. Bamler R, Mandt S. 2017. Dynamic word embeddings via skip-gram filtering. In Proceedings of the 34th Inter- national Conference on Machine Learning, pp. 380–89. La Jolla, CA: Int. Mach. 
Learn. Soc. 3. Athey S, Imbens G, Wager S. 2016a. Efficient inference of average treatment effects in high dimensions via approximate residual balancing. arXiv:1604.07125 [math.ST] 4. Zou H, Hastie T. 2005. Regularization and variable selection via the elastic net. J. R. Stat. Soc. B 67:301–20 5. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Is DOW Stock Expected to Go Up?(Stock Forecast). AC Investment Research Journal, 101(3). 6. Athey S, Wager S. 2017. Efficient policy learning. arXiv:1702.02896 [math.ST] 7. M. J. Hausknecht. Cooperation and Communication in Multiagent Deep Reinforcement Learning. PhD thesis, The University of Texas at Austin, 2016 Frequently Asked QuestionsQ: What is the prediction methodology for VIRX stock? A: VIRX stock prediction methodology: We evaluate the prediction models Transductive Learning (ML) and ElasticNet Regression Q: Is VIRX stock a buy or sell? A: The dominant strategy among neural network is to Wait until speculative trend diminishes VIRX Stock. Q: Is Viracta Therapeutics Inc. Common Stock stock a good investment? A: The consensus rating for Viracta Therapeutics Inc. Common Stock is Wait until speculative trend diminishes and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of VIRX stock? A: The consensus rating for VIRX is Wait until speculative trend diminishes. Q: What is the prediction period for VIRX stock? A: The prediction period for VIRX is (n+16 weeks)
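The methodology above names ElasticNet regression but never shows what fitting such a model looks like. The sketch below is a minimal, generic illustration of fitting an ElasticNet regression to lagged closing prices with scikit-learn; the synthetic data, lag depth, and hyperparameters are illustrative assumptions and are not the forecasting model actually used in the article.

```python
# Minimal sketch: ElasticNet regression on lagged closing prices.
# Assumptions: a pandas Series of daily closes; lag depth, alpha and l1_ratio
# are illustrative, not values taken from the article above.
import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNet

def make_lagged_features(close: pd.Series, n_lags: int = 5):
    """Build a design matrix of the n_lags previous closes to predict the next close."""
    df = pd.DataFrame({f"lag_{i}": close.shift(i) for i in range(1, n_lags + 1)})
    df["target"] = close
    df = df.dropna()
    return df.drop(columns="target").values, df["target"].values

# Synthetic prices stand in for real VIRX data.
rng = np.random.default_rng(0)
close = pd.Series(10 + np.cumsum(rng.normal(0, 0.1, 500)))

X, y = make_lagged_features(close, n_lags=5)
model = ElasticNet(alpha=0.1, l1_ratio=0.5)   # blend of L1 and L2 penalties
model.fit(X[:-50], y[:-50])                   # hold out the last 50 days
print("out-of-sample R^2:", model.score(X[-50:], y[-50:]))
```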
ISSN 1000-1239 CN 11-1777/TP • Information Security •

### A New Power-Resource Allocation Algorithm with Interference Restraining Based on FBMC-OQAM

Zhang Degan1,2, Zhang Ting1,2, Zhang Jie3, Zhou Shan1,2 (gandegande@126.com)

1. 1(Key Laboratory of Computer Vision and System (Tianjin University of Technology), Ministry of Education, Tianjin 300384); 2(Tianjin Key Laboratory of Intelligent Computing & Novel Software Technology (Tianjin University of Technology), Tianjin 300384); 3(Beijing No.20 High School, Beijing 100085)

• Online: 2018-11-01
• Supported by: National Natural Science Foundation of China (61571328); Tianjin Major Science and Technology Special Projects (15ZXDSGX00050, 16ZXFWGX00010); Tianjin Key Science and Technology Support Project (17YFZCGX00360); Tianjin Natural Science Foundation (15JCYBJC46500); Tianjin Science and Technology Innovation Team Program (12-5016, 2015-23)

Abstract: Taking energy efficiency as the objective function, a nonlinear programming problem with nonlinear constraints is studied under constraints on time delay and transmission power. That is to say, a new power-resource allocation algorithm (PAA) with interference restraining based on FBMC-OQAM (filter bank multicarrier-offset quadrature amplitude modulation) is presented in this paper. The algorithm improves the energy efficiency of the entire network resource and protects small-cell users (SUs) in the network from excessive interference, while a virtual queue is used to transform the extra packet delay caused by multi-user contention for the channel into queuing delay in the virtual queue. An iterative algorithm is used to solve the PAA problem: the fractional objective function is transformed into polynomial form, and the global optimal solution is obtained by iteration after reducing the computational complexity. At the same time, a sub-optimal method is developed that further reduces computational complexity at the cost of some performance. The simulation results show that the optimal algorithm has higher performance and the sub-optimal method has lower computational complexity. The designed algorithm has important value for practical applications such as the Internet of Things, the Internet of Vehicles, signal processing, and artificial intelligence. It has already been used in our project on cognitive radio networks (CRN) to solve the problem of power resource allocation.
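The abstract says the fractional (energy-efficiency) objective is transformed and then solved by iteration. That outline matches the classical Dinkelbach-style parametric approach to fractional programming; the sketch below is an editorial illustration of that generic idea on a toy single-user power allocation, not code or data from the paper. The rate model, constants, and grid search are all assumptions.

```python
# Illustrative Dinkelbach-style iteration for an energy-efficiency maximisation:
#   maximise  EE(p) = R(p) / (P_c + p),   R(p) = log2(1 + g * p / noise)
# subject to 0 <= p <= P_max.  All constants are toy values, not from the paper.
import numpy as np

g, noise, P_c, P_max = 2.0, 1.0, 0.5, 4.0

def rate(p):
    return np.log2(1.0 + g * p / noise)

def best_p_for(q, grid=np.linspace(0.0, P_max, 2001)):
    """Inner step: maximise R(p) - q * (P_c + p) over the feasible power grid."""
    values = rate(grid) - q * (P_c + grid)
    return grid[np.argmax(values)], values.max()

q = 0.0                      # initial guess for the achievable energy efficiency
for _ in range(50):
    p_star, f_star = best_p_for(q)
    if abs(f_star) < 1e-9:   # Dinkelbach stopping rule: F(q) = 0 at the optimum
        break
    q = rate(p_star) / (P_c + p_star)

print(f"optimal power ~ {p_star:.3f}, energy efficiency ~ {q:.3f} bit/s/Hz per W")
```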
# Renormalization Group for dummies 1. Feb 19, 2012 ### waterfall Renormalization Group concept is rarely given in laymen book on QM and QFT and even Quantum Gravity book like Lisa Randall Warped Passages. They mostly described about infinity minus infinity and left it from there. So if you were to write about QFT for Dummies. How would you share it such that common folks can understand them? I'll share what I know and some questions. In the convensional popularization on the infinity problem, it is often said that: M_correction = infinity m_bare = m- infinity = - infinity And in renormalization group, I understood it simply that instead of it, one simply assume m_bare is some definite value? Is that correct? How about the M_correction. How did the value gets lower to finite? But I went to many references. In the book The Story of Light. It was mentioned: "With the bare mass also taken to be of infinite value, the two infinities - the infinities coming out of the perturbation calculations and the infinity of the bare mass - cancel each other out leaving us with a finite value for the actual, physical mass of an electron". So as more detailed accounts or Renormalization. It is not just m_bare = m - infinity, but the perturbation calculation infinity minus the - m_bare = m_observed. Do you agree? Now in Renormalization Group calculations. According to http://fds.oup.com/www.oup.co.uk/pdf/0-19-922719-5.pdf [Broken] the fine structure constant for example is altered and this altered value is entered into the perturbation equation as well as mass and charge.. but how do you make a power series with an altered fine structure constant no longer diverge?? Landa pole is still landa pole whatever is the fine structure constant values. Also someone said said "What's really happening is that your approximate theory is incomplete, and at some high energy, new physical processes show up, and change how the effective mass (charge, etc) varies with energy, so that the "bare" quantities are more reasonable.". What is this example of new physical processes showing up at high energy that can affect or make effective mass varies with energy. I have a rough idea of Renormalization Group. Checked out many references for hours but want to get the essence and gist of it. I think this details of the nature of how new physical process showing up at high energy that can affect or make effective mass varies with energy (as well as fine structure constant varies with energy) can give the heart of the understanding. Thanks. Last edited by a moderator: May 5, 2017 2. Feb 19, 2012 ### Ken G You want a better "dumbed down" version such that you can understand the answers to your questions. I don't really understand renormalization group physics either, but I fear that the only real answer to your question is, if you want a "dumbed down" version to work for you, you have to try to be dumber, and not ask those questions. If you insist on not being dumb, and ask those questions, then no "dumbed down" version is going to work for you, you'll need the real deal. Your choice! 3. Feb 20, 2012 ### waterfall Of course not further dumbing down. I just want answers in terms of power series, coupling constant, higher energies and those terms which a mere conceptual description is enough without getting into deep rigorous mathematics which most introductory sites on Renormalization Group contain that repel the laymen from understanding its essence. 
Of course one has to understand some basics of calculus like the infinite series, divergences and other basic which I have. So anyone can share what the heck is the Renormalization Group in terms of my original questions? 4. Feb 20, 2012 ### Staff: Mentor Unfortunately the jig is up on this one - you need some math. Here is the simplest explanation I know: http://arxiv.org/pdf/hep-th/0212049.pdf There is a trick in applied math called perturbation theory. The idea is you expand your solutions in a power series about a parameter that is small and you can calculate your solution term by term getting better accuracy with each term. The issue is the coupling constant is thought to be small so you expand about it. The first term is fine. You then calculate the second term - oh oh - its infinite - bummer. Whats wrong? It turns out the coupling constant in fact is not small - but rather is itself infinite so its a really bad choice. Ok how to get around it. What you do about it is what is called regularize the equations so the equations are of the form of a limit depending on a parameter called the regulator. You then choose a different parameter to expand about called the renormalized parameter and you fix its value by saying its the value you would get from measurement so you know its finite when you take the limit. If you do that you immediately see the original problem - the coupling constant secretly depends on the regulator so when you take the limit it blows up to infinity. The infinity minus infinity thing is really historical before they worked out exactly what was going on and resolved by what is known as the effective field theory approach. Thanks Bill Last edited: Feb 20, 2012 5. Feb 20, 2012 ### waterfall Thanks. I kinda got the concept now. Anyway. In a power series, $y= y_0+ \epsilon y_1+ \epsilon^2 y_2+ \cdot\cdot\cdot$, is "$\epsilon$" equivalent to the coupling constant which must be very small like 1/137 and present in each series (although I know it is in more complex form)? 6. Feb 21, 2012 ### Staff: Mentor That's it. If you have an energy cutoff (that is one kind or regularization you can do) then the above trick of using perpetuation theory works because epsilon is small and as you raise it to higher and higher powers in the series it gets smaller ans smaller so the trick works. However when you take the limit as the cutoff goes to infinity ie remove the cutoff you find that epsilon secretly depends on the cutoff and goes to infinity, so instead of getting smaller and smaller for large values of the cutoff it gets bigger and bigger (in fact when it goes to infinity its infinite) and the method fails. To get around it you use a different epsilon to expand about called the renormalized quantity that due to the way you chose it by insisting it is something you measure then you have no problems when the cutoff is taken to infinity. Thanks Bill 7. Feb 21, 2012 ### waterfall I'm studying Power Series. What specific concept is it called where the epsilon getting larger in value if there is no cutoff? Does this apply to all Power Series or selected ones like Taylor Series or others? Thanks. 8. Feb 21, 2012 ### Staff: Mentor Sorry if what I posted wasn't clear - its a good idea to read the link I gave. Without a cutoff the value of the power series parameter you expand about turns out to be infinite so it obviously will not work. For the variable in the power series you expand about (that's epsilon in the equation you posted) substitute infinity and the result is infinity. 
However if you impose a cutoff and choose a low enough value then it is small and epsilon to some power gets smaller and smaller as the power you raise it to gets bigger so the method works - each term gets smaller and smaller. That's because it secretly depends on the cutoff. As the cutoff is made larger and larger the coupling constant gets larger and larger until in the limit it is infinite. That's why you need to expand about something better - that something is called the renormalised value. When this is done it does not blow up as you take the cutoff to infinity so the method now works. All this is made clear in the paper I linked to - its a bit heavy going - but persevere. Thanks Bill Last edited: Feb 21, 2012 9. Feb 21, 2012 Staff Emeritus If you are just now studying Power Series, you are by my count about 23 courses prior to where renormalization will be discussed. I think you're going to have to accept that the answers you get will be kind of hand-wavy. 10. Feb 21, 2012 ### waterfall I tried to read the paper for more than 30 minutes and see some web references and calculus book and thinking all this for more than an hour already. But I still can't understand the very basic question whether it applies to all power series. To know what I'm asking. Let us forget about Renormalization first. In a power series like The p-series rule: (infinity) sum sign 1/n^p n = 1 for p-series p=2 1 + (1/2^2) + (1/3^2) + (1/4^2) + (1/5^2).... So is the coupling constant equivalent to the p or n in the above equation, or the terms 2^2, 3^2, etc.? Also about the coupling constant getting larger for longer series without cutoff and it getting normal in value or smaller when there is cutoff. Do you also apply this to nonQED thing like trajectory of a ball thrown or is it only in QED? Just answer whether it is only in QED or present in all power series. This is all I need to know now. If it is only in QED.. then it has to do with the quantum nature or probability amplitude and all those path-integrals, etc. thing which I already understood and can relate and I will continue with the paper you gave. But if it is present in all power series.. i can't find it in a basic calculus book about power series where the equivalent of coupling constant gets infinite depending on whether you make a cut-off and will need to find it in other calculus book about power series. Again don't mention about renormalization first. Thanks. 11. Feb 21, 2012 ### Physics Monkey Typically the power series for some physical quantity would be of the form $$\sum_{n=0}^\infty c_n g^n .$$ We would call g the coupling constant and the coefficients c are what you compute e.g. from feynman diagrams. The coefficients c are typically computed by doing various integrals, and the integrals sometimes diverge if the range of integration is not cut off. The procedure of "subtracting infinities" can then sometimes be used to render the sum above finite term by term. That is, each individual $c_n$ is finite (as is g). However, the series may still diverge. Examples: If $c_n = 1/n!$ then the radius of convergence in g is infinite. If $c_n = 1/g^n_0$ then the radius of convergence in g is $g_0$. If $c_n = n!$ then the radius of convergence is zero. The series still gives infinity if g is different from zero even though every term is finite. This situation often happens in qft and is related to the concept of an asymptotic series. 
For a simple example, try doing the integral $$\int_{-\infty}^{\infty} dx \,\exp{\left(-x^2 - \lambda x^4\right)}$$ by first expanding the exponential as a power series in $\lambda$ and then exchanging the order of summation and integration. Such gaussian integrals are extremely common in qft. 12. Feb 21, 2012 ### waterfall Thanks for the above. I've been looking for the summation sign for days. I'm presently reading Ryan's "Calculus for Dummies". About the coupling constant getting larger for longer series without cutoff and it getting normal in value or smaller when there is cutoff. Do you also apply this to nonQED thing like trajectory of a ball thrown or is it only in QFT due to the peculiar nature of the quantum amplitude thing? This is what I need to know for now. Thanks. 13. Feb 21, 2012 ### Ken G Do you mean that the full integral has a closed-form expression (involving modified Bessel functions) for any value of lambda, but the series expression (involving Gamma functions) has terms that only converge absolutely when lambda<1? So if we had lambda>1, and all we had was the series form, we might worry the integral doesn't exist, when in fact it does? 14. Feb 21, 2012 ### atyy Here is an example from rainbows where a divergent, but "asymptotic series" is useful. http://www.ams.org/samplings/feature-column/fcarc-rainbows "There were two major contributions by Stokes. The first was that the Airy integral could be approximated for large values of |m| by asymptotic series. The one for m > 0 approximates A(m) by a slowly decreasing oscillation, and the one for m < 0 approximates it by an exponentially decreasing function. These series are expansions in negative powers of m. ...... These series do not converge, but the initial terms decrease reasonably rapidly, and the series give fair approximations to A(m) if one breaks off calculation when the terms start to grow." 15. Feb 21, 2012 ### waterfall This question has trouble me enough to lose 4 hours of sleep thinking about it and I had to take Ambien just to sleep so hope someone can settle it before another night comes. Bill Hobba is saying that the coupling constant of 1/137 in the first term of the power series can become 1/50 depending on how many terms in the series you have and whether there is cutoff? If there is cutoff. It's like the fine structure constant is 1/137 in the first term and when none. It's 1/infinity in the first term? I have not heard of this before. Now I just want to know if this also occurs in normal power series like calculating for trajectory of a spacecraft or just in QED where all paths were taken. So just answer 1 or 2: 1. this occurs in all power series like calculating for trajectory of a spacecraft 2. just in QED/QFT where "any thing that can happen, does" as Brian Cox put it. Well? 16. Feb 21, 2012 ### Staff: Mentor Its not quite like that. You write out the power series and you calculate the terms term by term using perturbation theory. The first term is finite - no problem. Second and higher terms turn out to be infinite. It took people a long time to figure out why this happened but the answer turned out to be the thing you expand the power series in, the coupling constant, was infinite and not 1/137 like they thought. Substitute infinity in any power series and its infinite or undefined. 
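To make the example in post #11 concrete (this numerical check is an editorial addition, not part of the original thread): expanding $e^{-\lambda x^4}$ and integrating term by term gives the series $\sum_n \frac{(-\lambda)^n}{n!}\,\Gamma(2n+\tfrac12)$ for $\int_{-\infty}^{\infty} e^{-x^2-\lambda x^4}\,dx$. Its partial sums first settle near the true value and then run away, which is exactly the asymptotic-series behaviour being described.

```python
# Numerical check of the asymptotic series for I(lam) = ∫ exp(-x^2 - lam*x^4) dx.
# Term-by-term integration gives  sum_n (-lam)^n * Gamma(2n + 1/2) / n!,
# a series with zero radius of convergence: early partial sums are good, later ones diverge.
import numpy as np
from math import gamma, factorial

lam = 0.1

# "Exact" value by brute-force numerical integration (simple Riemann sum is fine for a sketch).
x = np.linspace(-10, 10, 200001)
exact = np.sum(np.exp(-x**2 - lam * x**4)) * (x[1] - x[0])

partial = 0.0
for n in range(20):
    partial += (-lam) ** n * gamma(2 * n + 0.5) / factorial(n)
    print(f"n = {n:2d}  partial sum = {partial:14.6f}   exact = {exact:.6f}")
```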
To get around this problem you impose a cutoff (this can be looked on as taking the term you are expanding about as finite and later taking its limit to infinity) redo your perturbation procedure and you find it is all OK. The reason is the coupling constant secretly depends on the cutoff - which is rather trivial the way I explained it - but it took people a long time to realize this is what is going on. The value of 1/137 they used was the value measured at a certain energy scale which in effect was measuring the value with a cutoff. But the equations they used had no cutoff so you were really using the value as the cutoff goes to infinity ie infinity. As you take the cutoff to infinity the coupling constant goes from 1/137 to infinity which is why without the cutoff terms in the power series are infinite. Now what you do is assume the coupling constant is a function of what is called the renormalised coupling constant (which is the value from experiment ie 1/137) so you know it will not blow up. You assume it is a function of the un-renormalised parameter ie the value that does blow up to infinity, expand it in a power series, substitute into the original power series, collect terms so you now have a power series in the renormalised parameter. But you have chosen it so it is the value found from experiment so does not blow up. Carry out your calculations, take the cutoff to infinity and low and behold you find the answer is finite. The infinity minus infinity thing comes from when you analyse the behavior of the series when you use the renormalised value and take the limit - you find a term that is the original un-renormalised coupling constant and a term that is a function of the renormalised coupling constant - they in fact both blow up to infinity as you take the limit - but are subtracted from each other so the answer is finite. If you are at the level of Calculus For Dummies its probably going to be difficult to understand the paper I linked to. I have a degree in applied math and I found it tough going. So don't feel bad you are finding it tough - I congratulate you for trying. If you want to get your math up to the level you can understand that paper you will have to a study a more advanced textbook. The one I recommend is Boas - Mathematical Methods https://www.amazon.com/Mathematical-Methods-Physical-Sciences-Mary/dp/0471198269/ref=ntt_at_ep_dpi_1 Unfortunately otherwise you will have to accept the hand-wavey arguments. As I said in my original post the jig is up with this one - you need to do the math. To give a specific answer to the questions you raised and how to relate it to renormalisation I will see what I can do. If you substitute infinity into any power series it will give either infinity or terms like infinity minus infinity that are undefined. An example of the first would be the power series e^x where each term is positive and an example of the second would be sine x which has positive and negative terms. Now one way to try and get around this is let x be finite and take the limit. Before you take the limit everything is fine - its finite and perfectly OK. Now what you do is assume the variable in the power series is a function of another variable (in this case called the re-normalized variable) that you hope does not blow up to infinity as you take the limit. You expand that out as a power series and you collect terms so you have a new power series in that variable. 
Now you take the limit and low and behold, for the case of what are called re-normalizable theories, everything is finite. You look deeper into why this occurred and you find changing to this new variable introduced another term in your equations that also blows up to infinity but is subtracted from the original variable that blows up to infinity - as you take the limit they cancel and you are left with finite answers. Normally when you calculate the terms in a power series using perturbation theory it does not blow up to infinity. That's because it is very unusual to chose a variable to expand the power series in that is infinity. The only reason it was done is they did not understand the physics well enough then - they did not understand the measurement of the constant they thought was small at 1/137, and was a good thing to expand in a power series about since as it is raised to a power it gets smaller and smaller, was a measurement made with a cutoff basically in effect. The equations they used had no cutoff and it all went pair shaped. When this happened it left some of the greatest physicists and mathematicians in the world totally flummoxed - these are guys like Dirac with awesome mathematical talent. It was a long hard struggle over many years to sort out what was going on. The thing that fooled them was the parameter you expanded about as a power series secretly depended on the regulator or cutoff and as you took its limit to infinity it went to infinity. When you expanded about a different one that didn't blow up to infinity everything worked OK. As I was penning this I remembered John Baez wrote an interesting article about re-normalisation that may be of help: http://math.ucr.edu/home/baez/renormalization.html Thanks Bill Last edited by a moderator: May 5, 2017 17. Feb 21, 2012 ### waterfall I actually understood most everything you were saying.. but I just want to know if you can apply this coupling constant getting bigger dependent on terms in the series too to non-QFT problems like calculating for the trajectory of a ball. This is simply what I want to know. Thanks. Last edited by a moderator: May 5, 2017 18. Feb 21, 2012 ### atyy One way to think about the changing constants is to realise you are just writing and "effective theory". In the Box-Jenkins philosophy: all models are wrong, but some are useful. Say you have a curve. Every point on the curve can be approximated by a straight line. Depending on which part of the curve you are approximating, the slope of the line will change. The straight line is your "effective theory" and the changing slope like your changing coupling constant. This example is not a detailed comparison, but it's the general philosophy of the renormalization group. As for detailed mathematical correspondence, apart from quantum field theory, the renormalization group has been applied in classical statistical mechanics and classical mechanics. 19. Feb 21, 2012 ### waterfall I read in wiki that "Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, queueing theory, and finance." So you are saying that Renormalization Group concepts and regulator thing are also used in biology, economics, finance and not just in QFT? So in the calculations in biology. The coupling constant equivalent can become infinite in the second term but if one makes a cut-off at first term. it is finite? 20. Feb 21, 2012 ### atyy Renormalization has nothing to do with infinities. 
QED is renormalizable and it has a cut-off - it is not a true theory valide at all energies, it is only an effective theory like gravity, valid below the Planck scale. Once you have a cut-off, there are no infinities. Sometimes you are lucky and you get a theory where you can remove the cut-off, like QCD. But in QED, as far as we know, the cut-off probably cannot be removed.
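As an editorial aside to the thread above: the scale dependence being discussed can be made concrete with the standard one-loop (electron-loop only) running of the QED coupling, $\alpha_{\mathrm{eff}}(Q^2) = \alpha \big/ \left(1 - \frac{\alpha}{3\pi}\ln(Q^2/m_e^2)\right)$, valid for $Q \gg m_e$. It also locates the Landau pole mentioned in the opening post: the scale at which the denominator vanishes. The sketch below is purely illustrative and ignores all heavier charged particles.

```python
# One-loop (electron-loop only) running of the QED coupling, as a rough illustration
# of how an "effective" constant depends on the energy scale.  Standard textbook
# leading-log formula; the chosen scales are illustrative.
import numpy as np

alpha_0 = 1.0 / 137.036        # fine-structure constant near the electron mass scale
m_e = 0.511e-3                 # electron mass in GeV

def alpha_eff(Q):
    """alpha(Q^2) = alpha_0 / (1 - (alpha_0 / 3 pi) ln(Q^2 / m_e^2)), for Q >> m_e."""
    return alpha_0 / (1.0 - (alpha_0 / (3.0 * np.pi)) * np.log(Q**2 / m_e**2))

for Q in [0.001, 1.0, 91.19, 1e3, 1e16]:   # GeV; 91.19 GeV is roughly the Z mass
    print(f"Q = {Q:10.3g} GeV   1/alpha_eff = {1.0 / alpha_eff(Q):7.2f}")

# The denominator vanishes when ln(Q^2/m_e^2) = 3*pi/alpha_0: the Landau pole.
Q_landau = m_e * np.exp(3.0 * np.pi / (2.0 * alpha_0))
print(f"Landau-pole scale ~ {Q_landau:.3e} GeV (far beyond any physical energy)")
```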
Sequential Rationality in Cryptographic Protocols

Ronen Gradwohl, Noam Livne, and Alon Rosen

Abstract

Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally-bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers. We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable "empty threats" of non-sequential solution concepts. To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibria (Dodis-Halevi-Rabin, Crypto '00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.

Available format(s)
Category: Foundations
Publication info: Published elsewhere. Unknown where it was published
Contact author(s): alon rosen @ idc ac il
History
Short URL: https://ia.cr/2010/448
CC BY

BibTeX

@misc{cryptoeprint:2010/448,
  author = {Ronen Gradwohl and Noam Livne and Alon Rosen},
  title = {Sequential Rationality in Cryptographic Protocols},
  howpublished = {Cryptology ePrint Archive, Paper 2010/448},
  year = {2010},
  note = {\url{https://eprint.iacr.org/2010/448}},
  url = {https://eprint.iacr.org/2010/448}
}
Pinterest ### Math teacher shirt - funny mathmetician shirt - sweet as pi 3.14 shirt math teacher shirt - funny mathmetician shirt - sweet as pi shirt ### Pi Day = March 14 = 3.14 Today is Pi Day, for obvious reasons. Pi is a Greek letter representing the ratio of a circle's circumference to its diameter, a mathematical constant. If a circle's diameter is one, its circumference is approximately Happy Pi day. ### Etched Glass Pie Plate I Ate Sum Pie and it was delicious Math eight sum Pi math 3.14 Pi day Pie Plate Etched Glass Pie Plate I Ate Sum Pie and it was delicious Math eight sum Pi math Pi day Pie Plate by ItsAllThatSparkles on Etsy ### Quick and Easy Pie Chart Pi Day Fruit Pizza Pie Quick and Easy Pie Chart Pi Day Fruit Pizza Pie - English ### Celebrate Pi Day - March 14th (3.14) Pi: Celebrate Pi Day – March - don't forget pizza is pie, too ### Triple Berry Pi Day Pie Triple Berry Pi Day Pie - An easy berry pie gets a mathematical update for Pi Day, of course. ### It’s Pi Day — the possibilities are infinite! Great High School Math Poster on Pi . Okay, it's not funny, but it's 'Pi'. ### Celebrate! Happy Pi Day! 3-14 Do you know what Pi Day is? If you do, welcome to geekdom 🙂 For those non-nerds, you may remember that the number “pi” is (and on and on) and so math-types like to celebrate Pi on March – you know, – – get it?
Outlook: Cardiol Therapeutics Inc. Class A Common Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. Time series to forecast n: 13 Feb 2023 for (n+4 weeks) Methodology : Modular Neural Network (Market Direction Analysis) ## Abstract Cardiol Therapeutics Inc. Class A Common Shares prediction model is evaluated with Modular Neural Network (Market Direction Analysis) and Beta1,2,3,4 and it is concluded that the CRDL stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy ## Key Points 1. What is the best way to predict stock prices? 2. Stock Rating 3. Should I buy stocks now or wait amid such uncertainty? ## CRDL Target Price Prediction Modeling Methodology We consider Cardiol Therapeutics Inc. Class A Common Shares Decision Process with Modular Neural Network (Market Direction Analysis) where A is the set of discrete actions of CRDL stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Beta)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (Market Direction Analysis)) X S(n):→ (n+4 weeks) $\begin{array}{l}\int {e}^{x}\mathrm{rx}\end{array}$ n:Time series to forecast p:Price signals of CRDL stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## CRDL Stock Forecast (Buy or Sell) for (n+4 weeks) Sample Set: Neural Network Stock/Index: CRDL Cardiol Therapeutics Inc. Class A Common Shares Time series to forecast n: 13 Feb 2023 for (n+4 weeks) According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for Cardiol Therapeutics Inc. Class A Common Shares 1. Paragraph 5.7.5 permits an entity to make an irrevocable election to present in other comprehensive income subsequent changes in the fair value of particular investments in equity instruments. Such an investment is not a monetary item. Accordingly, the gain or loss that is presented in other comprehensive income in accordance with paragraph 5.7.5 includes any related foreign exchange component. 2. If an entity previously accounted for a derivative liability that is linked to, and must be settled by, delivery of an equity instrument that does not have a quoted price in an active market for an identical instrument (ie a Level 1 input) at cost in accordance with IAS 39, it shall measure that derivative liability at fair value at the date of initial application. Any difference between the previous carrying amount and the fair value shall be recognised in the opening retained earnings of the reporting period that includes the date of initial application. 3. All investments in equity instruments and contracts on those instruments must be measured at fair value. 
However, in limited circumstances, cost may be an appropriate estimate of fair value. That may be the case if insufficient more recent information is available to measure fair value, or if there is a wide range of possible fair value measurements and cost represents the best estimate of fair value within that range. 4. The definition of a derivative in this Standard includes contracts that are settled gross by delivery of the underlying item (eg a forward contract to purchase a fixed rate debt instrument). An entity may have a contract to buy or sell a non-financial item that can be settled net in cash or another financial instrument or by exchanging financial instruments (eg a contract to buy or sell a commodity at a fixed price at a future date). Such a contract is within the scope of this Standard unless it was entered into and continues to be held for the purpose of delivery of a non-financial item in accordance with the entity's expected purchase, sale or usage requirements. However, this Standard applies to such contracts for an entity's expected purchase, sale or usage requirements if the entity makes a designation in accordance with paragraph 2.5 (see paragraphs 2.4–2.7). *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Cardiol Therapeutics Inc. Class A Common Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. Cardiol Therapeutics Inc. Class A Common Shares prediction model is evaluated with Modular Neural Network (Market Direction Analysis) and Beta1,2,3,4 and it is concluded that the CRDL stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy ### CRDL Cardiol Therapeutics Inc. Class A Common Shares Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementCaa2C Balance SheetBa3Caa2 Leverage RatiosCBaa2 Cash FlowCBa3 Rates of Return and ProfitabilityBa2Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 82 out of 100 with 618 signals. ## References 1. R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Ma- chine learning, 8(3-4):229–256, 1992 2. Bertsimas D, King A, Mazumder R. 2016. Best subset selection via a modern optimization lens. Ann. Stat. 44:813–52 3. Bottomley, P. R. Fildes (1998), "The role of prices in models of innovation diffusion," Journal of Forecasting, 17, 539–555. 4. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Can neural networks predict stock market?(ATVI Stock Forecast). AC Investment Research Journal, 101(3). 5. R. Sutton and A. Barto. Introduction to reinforcement learning. MIT Press, 1998 6. S. Bhatnagar. An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes. 
Systems & Control Letters, 59(12):760–766, 2010 7. Breusch, T. S. (1978), "Testing for autocorrelation in dynamic linear models," Australian Economic Papers, 17, 334–355. Frequently Asked QuestionsQ: What is the prediction methodology for CRDL stock? A: CRDL stock prediction methodology: We evaluate the prediction models Modular Neural Network (Market Direction Analysis) and Beta Q: Is CRDL stock a buy or sell? A: The dominant strategy among neural network is to Buy CRDL Stock. Q: Is Cardiol Therapeutics Inc. Class A Common Shares stock a good investment? A: The consensus rating for Cardiol Therapeutics Inc. Class A Common Shares is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of CRDL stock? A: The consensus rating for CRDL is Buy. Q: What is the prediction period for CRDL stock? A: The prediction period for CRDL is (n+4 weeks)
## Wednesday, February 27, 2008

Plot

Cast

Sean Gullette as Maximillian Cohen, a reclusive math genius.
Mark Margolis as Sol Robeson, Max's mentor, who abandoned his research into π after it nearly killed him.
Ben Shenkman as Lenny Meyer, a Hasidic Jew who introduces Max to Kabbalah.
Pamela Hart as Marcy Dawson, a representative of an investment firm that is interested in Max's research.
Stephen Pearlman as Rabbi Cohen, the leader of a Jewish sect that pursues Max.
Samia Shoaib as Devi, Max's attractive and friendly neighbor.
Ajay Naidu as Farroukh, Devi's boyfriend.
Kristyn Mae-Anne Lao as Jenna, a girl who plays math games with Max.

Production

π was written and directed by Darren Aronofsky, and filmed on high-contrast black-and-white reversal film. π had a low budget (\$60,000), but proved a financial success at the box office (\$3.2 million gross in the U.S.) despite only a limited release to theaters. It has also proven to be a steady seller on DVD. According to the DVD's production notes, Aronofsky raised money for the project by selling \$100 shares in the film to family and friends, and was able to pay them all back with a \$50 profit per share when the film was sold to Artisan. He paid his crew in deferred payments amounting to \$200 a day, as well as 'shares' in the film. Darren Aronofsky's next film was Requiem for a Dream (which was also sold co-packaged with π).

The game of Go

In the film, Max periodically plays Go with his mentor. This game has historically stimulated the study of mathematics and features a simple set of rules that results in a complex game strategy. The two characters each use the game as a model for their view of the universe: Sol says that the game is a microcosm of an infinitely complex and chaotic world, while Max asserts that patterns can be found in the complexity of its variations. Actors Sean Gullette and Mark Margolis both learned the game for the film from the New York City American Go Association club.

Mathematics and π

Max pursues a legitimate scientific goal, and as such, π features several references to mathematics and mathematical theories. For instance, Max finds the golden spiral occurring everywhere, including the stock market. Max's belief that diverse systems embodying highly nonlinear dynamics share a unifying pattern bears much similarity to results in chaos theory, which provides machinery for describing certain phenomena of nonlinear systems that might be thought of as patterns. Unlike in the film, chaos theory does not allow one to predict the exact behavior of a chaotic system like the stock market and, in fact, provides compelling evidence that such predictions are, in principle, impossible.

The film's characters also make several mathematical goofs:

The film shows a drawing of the golden rectangle (with larger side length a and shorter side length b) with $\frac{a}{b} = \frac{a}{a+b}$. This equation has no solution for non-zero a; the golden ratio actually refers to a ratio such that $\frac{a}{b} = \frac{a+b}{a}$.

The Greek letter $\theta$ (theta) is stated to be the symbol for the golden ratio. In fact, the letter used is generally $\varphi$ (phi).

In the same scene as the previous goof, while discussing the links between the Fibonacci sequence and the golden ratio, Max states, "If you divide a hundred and forty-four into two hundred and thirty-three, it approaches theta." What he means is that the ratio between terms of the Fibonacci sequence and their immediate predecessors approaches the golden ratio as one looks further along the sequence. The single division 233/144 has a fixed value, so it does not approach any other value.

The 216-letter name of God sought by the characters of the film is actually widely known and called the Shemhamphorash or the Divided Name. It comes from Exodus 14:19-21. Each of these three verses is composed of seventy-two letters in the original Hebrew. If one writes the three verses one above the other, the first from right to left, the second from left to right, and the third from right to left, one gets seventy-two columns of three-letter names of God. The seventy-two names are divided into four columns of eighteen names each. Each of the four columns represents one of the four letters of the Tetragrammaton. The actual name of God, according to Jewish traditions, is the Tetragrammaton (YHWH or YHVH). This is the name that was intoned in the temple once a year during Yom Kippur, as referenced in the film. What has been lost is not the spelling of the name, as in the film, but the true pronunciation, since words written in Hebrew in the Torah do not include vowels. Furthermore, in the case of the Tetragrammaton, when vowels were used, the actual vowels were replaced with the vowels of the word Adonai to avoid pronouncing the Tetragrammaton, which is a taboo in Judaism.

In addition, it would be highly unlikely that the Hebrew Shemhamphorash would translate into 216 digits in a decimal system, for several reasons: There is no zero in Hebrew numerals. The Hebrew number system does not work as a normal decimal system; the characters of the Hebrew alphabet, the Aleph-Bet, correspond to the following values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, and 400. So, if the letter "A" had a value of 1, and "B" 2, and so on, you would only get up to "I" (which would have a value of 9) before you would need multiple letters to reflect numbers that are not divisible by ten and that have two or more digits (i.e., if "J" was 10, and you wanted to make the number 11, it would be "JA", or 10+1). If each single-digit number corresponded to its letter only, then you would have a 216-letter word that only uses letters A through I.

Soundtrack

Pi (the mathematical constant)
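Since the mathematics section above turns on the relationship between consecutive Fibonacci numbers and the golden ratio, a short check (added here for illustration) shows both facts at once: the positive solution of $\frac{a}{b} = \frac{a+b}{a}$ is $\varphi = \frac{1+\sqrt{5}}{2}$, and successive ratios such as 233/144 approach it from alternating sides.

```python
# Ratios of consecutive Fibonacci numbers converge to the golden ratio
# phi = (1 + sqrt(5)) / 2, the positive root of x**2 = x + 1 (i.e. a/b = (a+b)/a).
from math import sqrt

phi = (1 + sqrt(5)) / 2

a, b = 1, 1
for _ in range(15):
    a, b = a + b, a                      # advance to the next Fibonacci pair
    print(f"{a:5d}/{b:5d} = {a / b:.9f}   error = {a / b - phi:+.2e}")

print(f"phi     = {phi:.9f}")
print(f"233/144 = {233 / 144:.9f}  (a single ratio: close to phi, but fixed)")
```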
# selection criterion

## Optimal Algorithm of Operation of the Control System of Vertical, Multi-Level Electromagnetic Separators

The algorithm of the control system of a vertical, multi-level electromagnetic separator with pulsed magnetic fields of high intensity is designed. A selection criterion for the optimal algorithm of functioning of the separator control system is proposed.
Outlook: Idaho Strategic Resources Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Time series to forecast n: 18 Feb 2023 for (n+16 weeks) Methodology : Transductive Learning (ML) ## Abstract Idaho Strategic Resources Inc. Common Stock prediction model is evaluated with Transductive Learning (ML) and Polynomial Regression1,2,3,4 and it is concluded that the IDR stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Buy ## Key Points 1. How accurate is machine learning in stock market? 2. What are main components of Markov decision process? 3. How do you pick a stock? ## IDR Target Price Prediction Modeling Methodology We consider Idaho Strategic Resources Inc. Common Stock Decision Process with Transductive Learning (ML) where A is the set of discrete actions of IDR stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Polynomial Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Transductive Learning (ML)) X S(n):→ (n+16 weeks) $\begin{array}{l}\int {r}^{s}\mathrm{rs}\end{array}$ n:Time series to forecast p:Price signals of IDR stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## IDR Stock Forecast (Buy or Sell) for (n+16 weeks) Sample Set: Neural Network Stock/Index: IDR Idaho Strategic Resources Inc. Common Stock Time series to forecast n: 18 Feb 2023 for (n+16 weeks) According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Buy X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for Idaho Strategic Resources Inc. Common Stock 1. If a variable-rate financial liability bears interest of (for example) three-month LIBOR minus 20 basis points (with a floor at zero basis points), an entity can designate as the hedged item the change in the cash flows of that entire liability (ie three-month LIBOR minus 20 basis points—including the floor) that is attributable to changes in LIBOR. Hence, as long as the three-month LIBOR forward curve for the remaining life of that liability does not fall below 20 basis points, the hedged item has the same cash flow variability as a liability that bears interest at three-month LIBOR with a zero or positive spread. However, if the three-month LIBOR forward curve for the remaining life of that liability (or a part of it) falls below 20 basis points, the hedged item has a lower cash flow variability than a liability that bears interest at threemonth LIBOR with a zero or positive spread. 2. Paragraph 6.3.4 permits an entity to designate as hedged items aggregated exposures that are a combination of an exposure and a derivative. 
When designating such a hedged item, an entity assesses whether the aggregated exposure combines an exposure with a derivative so that it creates a different aggregated exposure that is managed as one exposure for a particular risk (or risks). In that case, the entity may designate the hedged item on the basis of the aggregated exposure 3. For lifetime expected credit losses, an entity shall estimate the risk of a default occurring on the financial instrument during its expected life. 12-month expected credit losses are a portion of the lifetime expected credit losses and represent the lifetime cash shortfalls that will result if a default occurs in the 12 months after the reporting date (or a shorter period if the expected life of a financial instrument is less than 12 months), weighted by the probability of that default occurring. Thus, 12-month expected credit losses are neither the lifetime expected credit losses that an entity will incur on financial instruments that it predicts will default in the next 12 months nor the cash shortfalls that are predicted over the next 12 months. 4. An alternative benchmark rate designated as a non-contractually specified risk component that is not separately identifiable (see paragraphs 6.3.7(a) and B6.3.8) at the date it is designated shall be deemed to have met that requirement at that date, if, and only if, the entity reasonably expects the alternative benchmark rate will be separately identifiable within 24 months. The 24-month period applies to each alternative benchmark rate separately and starts from the date the entity designates the alternative benchmark rate as a non-contractually specified risk component for the first time (ie the 24- month period applies on a rate-by-rate basis). *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Idaho Strategic Resources Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Idaho Strategic Resources Inc. Common Stock prediction model is evaluated with Transductive Learning (ML) and Polynomial Regression1,2,3,4 and it is concluded that the IDR stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Buy ### IDR Idaho Strategic Resources Inc. Common Stock Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementB1Baa2 Balance SheetBaa2B1 Leverage RatiosCaa2B3 Cash FlowBaa2C Rates of Return and ProfitabilityB1Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 92 out of 100 with 603 signals. ## References 1. Kitagawa T, Tetenov A. 2015. Who should be treated? Empirical welfare maximization methods for treatment choice. Tech. Rep., Cent. Microdata Methods Pract., Inst. Fiscal Stud., London 2. Dudik M, Langford J, Li L. 2011. 
Doubly robust policy evaluation and learning. In Proceedings of the 28th International Conference on Machine Learning, pp. 1097–104. La Jolla, CA: Int. Mach. Learn. Soc. 3. Bai J, Ng S. 2017. Principal components and regularized estimation of factor models. arXiv:1708.08137 [stat.ME] 4. A. Y. Ng, D. Harada, and S. J. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), Bled, Slovenia, June 27 - 30, 1999, pages 278–287, 1999. 5. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., Short/Long Term Stocks: FOX Stock Forecast. AC Investment Research Journal, 101(3). 6. Imbens GW, Rubin DB. 2015. Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge, UK: Cambridge Univ. Press 7. Mazumder R, Hastie T, Tibshirani R. 2010. Spectral regularization algorithms for learning large incomplete matrices. J. Mach. Learn. Res. 11:2287–322 Frequently Asked QuestionsQ: What is the prediction methodology for IDR stock? A: IDR stock prediction methodology: We evaluate the prediction models Transductive Learning (ML) and Polynomial Regression Q: Is IDR stock a buy or sell? A: The dominant strategy among neural network is to Buy IDR Stock. Q: Is Idaho Strategic Resources Inc. Common Stock stock a good investment? A: The consensus rating for Idaho Strategic Resources Inc. Common Stock is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of IDR stock? A: The consensus rating for IDR is Buy. Q: What is the prediction period for IDR stock? A: The prediction period for IDR is (n+16 weeks)
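The IDR methodology above names polynomial regression. As a generic, illustrative sketch (not the article's model), a low-degree polynomial trend can be fitted to a price series against time with numpy and then extrapolated; the synthetic data, degree, and horizon are assumptions.

```python
# Minimal sketch: fitting and extrapolating a degree-3 polynomial trend on a price series.
# Synthetic data stands in for IDR closes; the degree and forecast horizon are illustrative.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(250, dtype=float)                          # trading days
price = 5 + 0.01 * t + 0.5 * np.sin(t / 40) + rng.normal(0, 0.1, t.size)

coeffs = np.polyfit(t, price, deg=3)                     # least-squares polynomial fit
trend = np.poly1d(coeffs)

horizon = np.arange(250, 250 + 80)                       # roughly 16 trading weeks ahead
forecast = trend(horizon)
print("fitted coefficients:", np.round(coeffs, 6))
print("first five extrapolated values:", np.round(forecast[:5], 3))
```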
• Rate coefficients of open shell molecules and radicals: $R$-matrix method

• # Fulltext https://www.ias.ac.in/article/fulltext/pram/088/05/0076

• # Keywords

Molecular processes; rate coefficients

• # Abstract

Open-shell molecules with an even number of electrons have a $\pi^2$ or $\pi^{2}_{g}$ ground-state electronic configuration. Several homonuclear diatomic molecules such as $\rm{O_2, S_2, B_2}$ have a $\pi^{2}_{g}$ ground state in the $D_{\infty h}$ point group, and heteronuclear diatomic radicals such as PH, NH, SO have a $\pi^2$ ground state in the $C_{\infty v}$ point group. We have computed and presented here the rate coefficients of these open-shell molecules $\rm{(O_2, S_2, B_2)}$ and radicals (PH, NH, SO) from the results of our previous studies using a well-established ab initio formalism: the $R$-matrix method. The rate coefficients for elastic and electronic excitation processes are studied over a wide electron temperature range.

• # Author Affiliations

1. Department of Physics and Astrophysics, University of Delhi, Delhi 110 007, India
2. Keshav Mahavidyalaya, Department of Physics, University of Delhi, Delhi 110 034, India

• # Pramana – Journal of Physics
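The abstract reports rate coefficients derived from previously computed $R$-matrix cross sections but does not show the conversion step. For orientation, the standard Maxwellian average over electron energies is $k(T_e) = \sqrt{\tfrac{8}{\pi m_e}}\,(k_B T_e)^{-3/2} \int_0^{\infty} \sigma(E)\, E\, e^{-E/k_B T_e}\, dE$. The sketch below evaluates this numerically for an assumed toy cross section; it is not the paper's data or code.

```python
# Maxwellian rate coefficient k(T_e) from a cross section sigma(E), SI units throughout.
# The cross-section shape (1 eV threshold, slowly decaying) is an assumption for illustration.
import numpy as np

m_e = 9.109e-31          # electron mass, kg
kB = 1.381e-23           # Boltzmann constant, J/K
eV = 1.602e-19           # J per eV

def sigma(E_eV):
    """Toy excitation cross section in m^2: zero below a 1 eV threshold, then slowly decaying."""
    return np.where(E_eV > 1.0, 1e-20 * (1.0 - 1.0 / E_eV), 0.0)

def rate_coefficient(Te_K, E_eV=np.linspace(1e-3, 100.0, 20000)):
    E = E_eV * eV
    kT = kB * Te_K
    integrand = sigma(E_eV) * E * np.exp(-E / kT)
    integral = np.sum(integrand) * (E[1] - E[0])          # simple Riemann sum is fine for a sketch
    return np.sqrt(8.0 / (np.pi * m_e)) * kT ** (-1.5) * integral   # m^3 s^-1

for Te in [5e3, 1e4, 2e4, 5e4]:
    print(f"T_e = {Te:8.0f} K   k = {rate_coefficient(Te):.3e} m^3/s")
```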
ÞÉÒÉÈÀÃÉ ÓÀÌÄÝÍÉÄÒÏ ÍÀÛÒÏÌÄÁÉÓ ÍÖÓáÀ 1. Relativistic three body problem and Glauber representation (with D. Stoyanov). Sov. J. Teor. Mat. Fiz. 3 (1970), 332. 2. A method of coherent states and diagram technique for dual amplitudes (with B. L. Markovski, D. Ts. Stoyanov, and A. N. Tavkhelidze). Dubna, JINR-E2-5182 JINR, 1970; In: Proc. XV Int. Conf.on High Enegry Phys., Kiev 2 (1970), 576-577. 3. The method of coherent states and factorization of dual amplitudes (with C. D. Popov, D. Ts. Stoyanov, and A. N. Tavkhelidze). Sov. J. Teor. Mat. Fiz. 6 (1971), 166. 4. Factorization of dual amplitudes with the help of the coherent states of the five-dimensional oscilator (with C. D. Popov, D. Ts. Stoyanov, and A. N. Tavkhelidze). Sov. J. Teor. Mat. Fiz. 9 (1971), 190. 5. The method of coherent states and diagram technique for dual amplitudes (with C. D. Popov, D. Ts. Stoyanov, and A. N. Tavkhelidze). Dubna, JINR-D-6004, 1971, 755. 6. Factorization of dual amplitudes and loops in the coherent state model (with C. D. Popov, D. Ts. Stoyanov, and A. N. Tavkhelidze). Dubna, JINR-E2-5568, 1971. 7. Retarded part of the two-time Green function and two-body relativistic problem (with D. Stoyanov). Dubna, JINR-E2-5746, 1971. 8. Three particle relativistic problem in three-dimensional variables (with D. Stoyanov). Sov. J. Teor. Mat. Fiz. 11 (1972), 23. 9. Local two-body quasipotential in the relativistic three-body problem (with D. Stoyanov). Sov. J. Teor. Mat. Fiz. 16 (1973), 42. 10. Projective properties of the quasipotential green functions of composite particles (with S. P. Kuleshov, et al.). Dubna, JINR-E2-8128, 1974. 11. Relativistic form-factors of composite particles (with S. P. Kuleshov, et al.). Sov. J. Teor. Mat.Fiz. 23 (1975), 310. 12. Scattering of composite particles and quasipotential approach in quantum field theory (with R. N. Faustov, et al.). Sov. J. Teor. Mat. Fiz. 25 (1975), 37. 13. Relativistic three particle scattering amplitude in the eikonal approximation. Sov. J. Teor. Mat. Fiz. 25 (1975), 419. 14. Eikonal approximation in the relativistic three-body problem (with G. Begeluri). Communications of Georgian Academy of Sciences 79 (1975), 2, 4, 345. 15. Relativistic three-body equations in the angular momentum representation (with I. Lomidze). Published by Tbilisi State University, 1975. 16. Quasipotential type equations for the relativistic three-particle system. Transactions of Steklov Institut of Mathematics, Nauka, Moscow II (1975), 145. 17. Considering of the internal longitudinal motion of a composite system in the eikonal approximation of the three body relativistic problem (with L. A. {\em Slepchenko). Sov. J. Theor. Math. Phys. 24 (1976), 663. 18. Quark counting method for inclusive processes (with et al.). Dubna, JINR-D2-10297, 1976. 19. Spectral and projection properties of the “two time” green functions of n particles in the null plane quantum field theory (with A. A. Khelashvili, et al.). Sov. J. Theor. Math. Phys. 29 (1977), 891. 20. Inclusive processes with large transverse momenta in the composite particle approach (with et al.). Sov. J. Fiz. Elem. Chastits i Atom. Yadra 8 (1977), 478-543. 21. Power falloff behavior of the high  Inclusive Cross-Sections (with N. S. Amaglobeli, et al.). Dubna, JINR-E2-11581, 1978. 22. Dynamical equations in quantum field theory. Lectures Course in Primorsko 1978, Proceedings, XII International School on High Energy Physics for Young Scientists, 1978, 233-281. 23. 
Integral equation for causal distributions and their automodel asymptotics in the ladder  theory (with et al.). Sov. J. Theor. Math. Phys. 45 (1981), 1041. 24. Large angle scattering for nonlocal quasipotentials (with A.M. Khvedelidze). Teor. Mat. Fiz. 50 (1982), 397; English transl.: Sov. J. Theor. Math. Phys. 50 (1982), 261. 25. Deep inelastic scattering in the formalism with the wave functions of composite systems at rest (with A. M. Khvedelidze, et al.). JINR-E2-87-543, 1987. 26. Representation of symmetry group transformation operators in the interaction picture (with G. P. Dzhordzhadze, et al.). Teor. Mat. Fiz. 73 (1987), 311; English transl.: Sov. J. Theor. Math. Phys. 73 (1987), 1239. 27. Covariant evolution operator in composite models of quantum field theory (with et al.). Sov. J. Theor. Math. Phys. 72 (1988), 710. 28. Description of deep inelastic processes in terms of the rest frame wave functions of composites (with et al.). Yad. Fiz. 47(1989), 1475; English transl.: Sov. J. Nucl. Phys. 47 (1989), 937. 29. Relativistic form-factors in terms of the wave functions of composite systems at rest (with et al.). Sov. J. Theor. Math. Phys. 78 (1989), 162. 30. Neutron to proton structure function ratio  at  (with A. M. Khvedelidze). Yad. Fiz. 50 (1989), 1165; English transl.: Sov. J. Nucl. Phys. 50 (1989), 725. 31. Expansion of the boost operator in powers of the coupling constant (with A. M. Khvedelidze). Teor. Mat. Fiz. 78 91989), 357; English transl.: Sov. J. Theor. Math. Phys. 78 (1989), 252. 32. Parton picture in the covariant quasipotential approach (with A. A. Khelashvili). PSI-PR-90-42, 1990, 40 p. 33. Are quarks nonrelativistic in the nucleon? (with A. M. Khvedelidze, G. Lavrelashvili, and M. Serebryakov). Nuovo Cimento A 103 (1990), 1669. 34. Bound state wave functions at rest in describing deep inelastic scattering (with A. M. Khvedelidze). Nuclear Phys. A 523 (1991), 597-613. 35. To the pair interaction approximation in equations of quantum field theory for a four body system (with A. M. Khvedelidze). Teor. Mat. Fiz. 90 (1992), 95; English transl.: Sov. J. Theor. Mat. Phys. 90 (1992), 62. 36. Dressing two nucleons at the same time (with B. Blankleider). Phys. Rev. C 48 (1993), 25. 37. Unitary  model with full dressing (with B. Blankleider). Phys. Lett. B 307 (1993), 7. 38. Covariant three-body equations in  field theory (with B. Blankleider). Nuclear Phys. A 574 (1994), 788-818. 39. Convolution approach to the  system (with B. Blankleider). Few-Body Systems 294 (1994), Suppl. 7, 294-308. 40. Few-body descriptions of the  system in three and four dimensions (with B. Blankleider).  Newsletter 11 (1995), 96-103. 41. Gauging the three-nucleon spectator equation (with B. Blankleider). Phys. Rev. C 56 (1997), 2973-2986. 42. Gauging the spectator equations (with B. Blankleider). Phys. Rev. C 56 (1997), 2963-2972. 43. Gauging the three-nucleon system (with B. Blankleider). Nucl. Phys. A 631 (1998), 559. 44. Unified relativistic description of  and  (with B. Blankleider). Phys. Rev. C 59 (1999), 1263-1271. 45. Gauging of equations method, I. Electromagnetic currents of three distinguishable particles (with B. Blankleider). nucl-th/9901001; Phys. Rev. C 60 (1999), 044003, 33pp. 46. Gauging of equations method, II. Electromagnetic currents of three identical particles (with B. Blankleider). nucl-th/9901002; Phys. Rev. C 60 (1999), 044004, 24pp. 47. Implementing PCAC in nonperturbative models of pion production (with B. Blankleider). Few Body Syst. Suppl. 
12 (2000), 223-228; e-Print Archive: nucl-th/9912075. 48. Complete set of electromagnetic corrections to the nucleon mass in the Nambu-Jona-Lasinio model (with B. Blankleider). nucl-th/9906017; Nucl. Phys. A 670 (2000), 210-213. 49. Comment on “Nucleon form-factors and a nonpointlike diquark” (with B. Blankleider). Phys. Rev. C 62 (2000), 039801, 6pp. 50. Pionic dressing of baryons in chiral quark models (with M. C. Birse and B. Blankleider). Phys. Rev. C 66 (2002), 045203, 12pp. 51. Perturbation theory for bound states and resonances where potentials and propagators have arbitrary energy dependence (with B. Blankleider). Phys. Rev. D 67 (2003), 076003, 8pp. 52. Gauge invariant reduction to the light front (with B. Blankleider). Phys. Rev. D 68 (2003), 025021, 12pp. 53. Equivalence of light front and conventional thermal field theory (with B. Blankleider). hep-th/0305115, 2003; Phys. Rev. D (accepted). 54. Comment on “Light front Schwinger model at finite temperature” (with B. Blankleider). hep-th/0310278, 2003; Phys. Rev. D (accepted).
web
auto_math_text
# Using an image splitting device for multi-colour ratiometric imaging

## Calibrating the image splitting device

These instructions are designed for use with an image splitting device and a single CCD camera. They should also be applicable to a multi-camera scenario if the images are stitched together first. The code was originally written for a home-built splitter where the CCD was split vertically and the two halves were mirror-image views. It has since been modified to work with more generic splitters, although these modifications are not yet well tested (we’re currently in the process of assembling a new splitter which splits horizontally and doesn’t flip, so any bugs are likely to be ironed out over the course of the next couple of weeks).

1. Prepare a medium density bead slide (minimum separation between beads should be on the order of ~500 nm, more is OK). The density should be low enough that beads in both channels can be unambiguously assigned (i.e. there should only be one bead in any given 15x15 pixel ROI); a small number of clusters is permissible as these will be discarded in post-processing. A suitable density usually works out at ~20 beads in the field of view (our FOV is half the camera).
2. Because the density used above is relatively low, a single image will not give us a particularly good estimate of the vector shift field between the two channels. To enable better coverage of the field of view we take multiple shifted images of the beads, achieving better coverage through post-processing. If using PYMEAcquire for data acquisition on a microscope with a motorised stage there is a protocol (called ‘shiftfield’) which will do this for you. If using 3rd party software, or using a microscope without a motorised stage, you can simply move the stage manually whilst recording a sequence of images (the analysed data is filtered for bead width and blurred beads will be discarded – I find a series of short moves with brief pauses to be effective).
3. Make sure that the distributed data analysis platform (launchWorkers) is running.
4. Open the data using dh5view (if using PYMEAcquire it should open automatically).
5. From the ‘Set defaults for’ menu choose ‘Calibrating the splitter’. This sets the fit model to one in which the separation between red and green images of the same bead is a free parameter. It also turns off temporal background subtraction and increases the detection threshold.
6. [Non-PYME data only] If the data was acquired in 3rd party software you will also need to set a number of metadata parameters to tell the software how the splitting is carried out (these can be entered from the console within dh5view, but it might make more sense to put them in the .md file used to get the data to load – see the loading external data section of the PYME documentation). The relevant parameters are Splitter.Channel0ROI, Splitter.Channel1ROI, and Splitter.Flip. The ROI parameters take a list of values in the form [x0, y0, width, height]. The Flip parameter is either True or False. It is important that the width and height are the same for both ROIs. E.g. (if entered in the .md file – if executed at the console, replace md with image.mdh):

   md['Splitter.Channel0ROI'] = [0, 0, 512, 256]
   md['Splitter.Channel1ROI'] = [0, 256, 512, 256]
   md['Splitter.Flip'] = False

7. Click on ‘Test’ to see if the detection threshold is suitable – if necessary try a higher or lower threshold.
8. Once happy with the threshold, click ‘Go’.
This will send the frames into the distributed analysis system, which should churn through and perform the fits [1].

9. Once all the analysis tasks are complete, go to the analysis folder (if you haven’t set the PYMEDataDir environment variable it should be under c:\Users\<username>\PYMEData\Analysis\<name of folder containing raw data>) and find the .h5r file corresponding to the raw data. Open this in VisGUI.
10. Check the data in VisGUI to see if it looks reasonable – good coverage of the field of view, and a reasonable-looking distribution of shifts if you set the point colour to be FitResults_dx or FitResults_dy (the x and y shifts). Try adjusting the filter if this is not the case (a good place to start is sigma – the PSF std deviation, which can be set to a reasonably narrow window around the expected bead width). A few erroneous vectors are still permissible as these will be filtered out in subsequent steps.
11. From the ‘Extras’ menu choose ‘Calculate shiftmap’. This will attempt to interpolate the shift vectors obtained at the bead locations across the field of view. The algorithm first checks to see if each vector points in approximately the same direction as its neighbours. ‘Wonky’ vectors which dramatically differ from their neighbours but have somehow made it through prior filtering steps are discarded at this point. Bivariate smoothing splines are then fitted to the x and y shift vectors (a rough code sketch of this interpolation step is given at the end of this page). The resulting interpolated shift field (and residuals) is shown, and the user is given the opportunity to save the shiftmap (effectively the spline coefficients) in a .sf file. Unfortunately the ‘save’ dialog is modal and you don’t get a chance to examine the shift field before being prompted to save. I usually cancel the save request the first time, examine the result, and if happy, run ‘Calculate shiftmap’ again, saving the result. This interpolated shift field should be smooth, although it’s common to see magnification differences as well as rotation in the field, resulting in a spiral or vortex-like appearance. If you’re unhappy with the generated shiftmap, you can go back to the filter (or, if it is really bad, try acquiring and analysing a new data set).

New: If the above does not yield a good shiftmap (shifts should be mostly translation, rotation, and some scaling, which results in smoothly varying shiftmaps) you can also try the ‘Calculate Shiftmap (Model Based)’ option (also on the ‘Extras’ menu), which fits the coefficients of a global affine transform rather than trying to interpolate shifts. The resulting shiftmap will be less flexible than one calculated using the ‘Calculate shiftmap’ function, but captures the most likely transformations and is better behaved (particularly at the corners of the field, where errors can be common).

### Large shifts

If you have very large shifts, you might need to increase the size of the ROI used to fit each bead when performing the calibration. This can be achieved by overriding the ROISize parameter in the analysis – e.g. by entering:

   image.mdh['Analysis.ROISize'] = 10

in the dh5view console. The ROISize setting is the size of a ‘half ROI’, with the size of the actual ROI being $$2n + 1$$. The default for shift estimation is 7 (15x15), and the default for fitting is 5 (11x11).

## Analysing ratiometric images

Whilst not as complicated as the calibration procedure, the analysis procedure for multi-colour images is also a little different to that for single-colour images.

1. Load the data in dh5view.
2. Choose SplitterFitFNR as the fit module (Note: this assumes that the default temporal background subtraction has done its thing and doesn’t fit the background at all. If you have disabled background subtraction, try using the older SplitterFitFR) [2].
3. Set the Splitter parameters as in 6 above.
4. Set the shift field to the .sf file saved in the calibration step (click on the ‘Set’ button).
5. Test the threshold.
6. Click ‘Go’ to start the analysis.

Note: The shifts are corrected as part of the fitting process (they should be absent from the fitted data).

## Visualising ratiometric data

If you have analysed data using one of the ‘SplitterFit…’ modules, VisGUI will show a colour tab with a scattergram of the ratios. Before you can render the images as multi-colour, you will need to add species to this scattergram by clicking ‘Add’ and setting the ratio (which can then be tweaked by clicking on the ratio value in the table). You can also try to automagically guess which components are present by using the ‘Guess’ button. The points assigned to a certain ratio will be given the same colour as that component. After the ratios have been defined, you will have new selections in the colour filter selector in the main window, and rendering options will default to producing multi-channel images.

Note: It is also possible to define species and ratios in the metadata, but that is beyond the scope of what we can go into here.

[1] Extra for experts: in this case it will probably only make use of 1 or 2 cores, as the distributed analysis uses a chunk size of at least 50 frames to allow the data to be cached for efficient background subtraction on the workers when analysing blinking datasets.

[2] SplitterFitFNR is a new routine and is preferred as it performs shift-corrected ROI extraction and thus works for larger shift values. The previous versions only worked if the shift was sufficiently small that the ROI co-ordinates for the 1st channel could also be used to extract a ROI for the second channel which completely enclosed the 2nd image of a molecule. As well as coping with almost arbitrarily large shifts, the new routine allows a smaller ROI to be used for moderate shifts, improving speed and tolerable localisation density.
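The following is a minimal, hypothetical sketch (not PYME’s actual implementation) of the shift-field interpolation step referred to above: bivariate smoothing splines are fitted to the measured per-bead shifts and then evaluated across the field of view. The bead positions and shifts below are synthetic placeholders.

```python
# Sketch of spline-based shift-field interpolation (assumed workflow, not PYME code).
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_shift_field(x, y, dx, dy, smoothing=None):
    """Fit bivariate smoothing splines to the x and y shift components."""
    spline_dx = SmoothBivariateSpline(x, y, dx, s=smoothing)
    spline_dy = SmoothBivariateSpline(x, y, dy, s=smoothing)
    return spline_dx, spline_dy

# Synthetic example: ~20 beads with a small offset + rotation-like shift field.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 256, 20), rng.uniform(0, 256, 20)
dx = 2.0 + 0.01 * y + rng.normal(0, 0.1, 20)
dy = -1.5 - 0.01 * x + rng.normal(0, 0.1, 20)

s_dx, s_dy = fit_shift_field(x, y, dx, dy)

# Evaluate the interpolated shift field on a grid covering the field of view.
gx, gy = np.meshgrid(np.linspace(0, 256, 64), np.linspace(0, 256, 64))
shift_x = s_dx.ev(gx.ravel(), gy.ravel()).reshape(gx.shape)
shift_y = s_dy.ev(gx.ravel(), gy.ravel()).reshape(gy.shape)
```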
web
auto_math_text
• ### Gaia Data Release 2: The astrometric solution(1804.09366) April 25, 2018 astro-ph.IM Gaia Data Release 2 (Gaia DR2) contains results for 1693 million sources in the magnitude range 3 to 21 based on observations collected by the European Space Agency Gaia satellite during the first 22 months of its operational phase. We describe the input data, models, and processing used for the astrometric content of Gaia DR2, and the validation of these results performed within the astrometry task. Some 320 billion centroid positions from the pre-processed astrometric CCD observations were used to estimate the five astrometric parameters (positions, parallaxes, and proper motions) for 1332 million sources, and approximate positions at the reference epoch J2015.5 for an additional 361 million mostly faint sources. Special validation solutions were used to characterise the random and systematic errors in parallax and proper motion. For the sources with five-parameter astrometric solutions, the median uncertainty in parallax and position at the reference epoch J2015.5 is about 0.04 mas for bright (G<14 mag) sources, 0.1 mas at G=17 mag, and 0.7 mas at G=20 mag. In the proper motion components the corresponding uncertainties are 0.05, 0.2, and 1.2 mas/yr, respectively. The optical reference frame defined by Gaia DR2 is aligned with ICRS and is non-rotating with respect to the quasars to within 0.15 mas/yr. From the quasars and validation solutions we estimate that systematics in the parallaxes depending on position, magnitude, and colour are generally below 0.1 mas, but the parallaxes are on the whole too small by about 0.03 mas. Significant spatial correlations of up to 0.04 mas in parallax and 0.07 mas/yr in proper motion are seen on small (<1 deg) and intermediate (20 deg) angular scales. Important statistics and information for the users of the Gaia DR2 astrometry are given in the appendices. • We highlight the power of the Gaia DR2 in studying many fine structures of the Hertzsprung-Russell diagram (HRD). Gaia allows us to present many different HRDs, depending in particular on stellar population selections. We do not aim here for completeness in terms of types of stars or stellar evolutionary aspects. Instead, we have chosen several illustrative examples. We describe some of the selections that can be made in Gaia DR2 to highlight the main structures of the Gaia HRDs. We select both field and cluster (open and globular) stars, compare the observations with previous classifications and with stellar evolutionary tracks, and we present variations of the Gaia HRD with age, metallicity, and kinematics. Late stages of stellar evolution such as hot subdwarfs, post-AGB stars, planetary nebulae, and white dwarfs are also analysed, as well as low-mass brown dwarf objects. The Gaia HRDs are unprecedented in both precision and coverage of the various Milky Way stellar populations and stellar evolutionary phases. Many fine structures of the HRDs are presented. The clear split of the white dwarf sequence into hydrogen and helium white dwarfs is presented for the first time in an HRD. The relation between kinematics and the HRD is nicely illustrated. Two different populations in a classical kinematic selection of the halo are unambiguously identified in the HRD. Membership and mean parameters for a selected list of open clusters are provided. 
They allow drawing very detailed cluster sequences, highlighting fine structures, and providing extremely precise empirical isochrones that will lead to more insight in stellar physics. Gaia DR2 demonstrates the potential of combining precise astrometry and photometry for large samples for studies in stellar evolution and stellar population and opens an entire new area for HRD-based studies. • ### The co-existence of hot and cold gas in debris discs(1801.07951) Jan. 24, 2018 astro-ph.SR, astro-ph.EP Debris discs have often been described as gas-poor discs as the gas-to-dust ratio is expected to be considerably lower than in primordial,protoplanetary discs. However, recent observations have confirmed the presence of a non-negligible amount of cold gas in the circumstellar (CS) debris discs around young main-sequence stars.This cold gas has been suggested to be related to the outgassing of planetesimals and cometary-like objects. The aim of the paper is to investigate the presence of hot gas in the surroundings of stars bearing cold-gas debris discs. High-resolution optical spectra of all currently known cold-gas-bearing debris-disc systems, with the exception of $\beta$ Pic and Fomalhaut, have been obtained from different observatories.We have analysed the Ca II H & K and the Na I D lines searching for non-photospheric absorptions of CS origin, usually attributed to cometary-like activity. Narrow, stable Ca II and/or Na I absorption features have been detected superimposed to the photospheric lines in 10 out of the 15 observed cold-gas-bearing debris disc.Features are found at the radial velocity of the stars, or slightly blue- or red-shifted, and/or at the velocity of the local interstellar medium (ISM). Some stars also present transient variable events or absorptions extended towards red wavelengths. These are the first detections of such Ca II features in 7 out of the 15 observed stars. In some of these stars, results suggest that the stable and variable absorptions arise from relatively hot gas located in the CS close-in environment. This hot gas is detected in at least ~80%, of edge-on cold-gas-bearing debris discs, while in only ~10% of the discs seen close to face-on. We interpret this as a geometrical effect, and suggest that the non-detection of hot gas absorptions is due to the disc inclination rather than to the absence of the hot-gas component. • ### Gaia Data Release 1: The archive visualisation service(1708.00195) Context: The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims: We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second aim is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods: The interactive exploration service follows a client-server design. 
The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results: The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. [abridged] • Parallaxes for 331 classical Cepheids, 31 Type II Cepheids and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of the Tycho-Gaia Astrometric Solution (TGAS). In order to test these first parallax measurements of the primary standard candles of the cosmological distance ladder, that involve astrometry collected by Gaia during the initial 14 months of science operation, we compared them with literature estimates and derived new period-luminosity ($PL$), period-Wesenheit ($PW$) relations for classical and Type II Cepheids and infrared $PL$, $PL$-metallicity ($PLZ$) and optical luminosity-metallicity ($M_V$-[Fe/H]) relations for the RR Lyrae stars, with zero points based on TGAS. The new relations were computed using multi-band ($V,I,J,K_{\mathrm{s}},W_{1}$) photometry and spectroscopic metal abundances available in the literature, and applying three alternative approaches: (i) by linear least squares fitting the absolute magnitudes inferred from direct transformation of the TGAS parallaxes, (ii) by adopting astrometric-based luminosities, and (iii) using a Bayesian fitting approach. TGAS parallaxes bring a significant added value to the previous Hipparcos estimates. The relations presented in this paper represent first Gaia-calibrated relations and form a "work-in-progress" milestone report in the wait for Gaia-only parallaxes of which a first solution will become available with Gaia's Data Release 2 (DR2) in 2018. • Context. The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and parallax are calculated using Hipparcos and Tycho-2 positions in 1991.25 as prior information. Aims. We investigate the scientific potential and limitations of the TGAS component by means of the astrometric data for open clusters. Methods. Mean cluster parallax and proper motion values are derived taking into account the error correlations within the astrometric solutions for individual stars, an estimate of the internal velocity dispersion in the cluster, and, where relevant, the effects of the depth of the cluster along the line of sight. Internal consistency of the TGAS data is assessed. Results. Values given for standard uncertainties are still inaccurate and may lead to unrealistic unit-weight standard deviations of least squares solutions for cluster parameters. 
Reconstructed mean cluster parallax and proper motion values are generally in very good agreement with earlier Hipparcos-based determination, although the Gaia mean parallax for the Pleiades is a significant exception. We have no current explanation for that discrepancy. Most clusters are observed to extend to nearly 15 pc from the cluster centre, and it will be up to future Gaia releases to establish whether those potential cluster-member stars are still dynamically bound to the clusters. Conclusions. The Gaia DR1 provides the means to examine open clusters far beyond their more easily visible cores, and can provide membership assessments based on proper motions and parallaxes. A combined HR diagram shows the same features as observed before using the Hipparcos data, with clearly increased luminosities for older A and F dwarfs. • ### Exocomet signatures around the A-shell star $\Phi$ Leo?(1609.04263) Oct. 3, 2016 astro-ph.SR, astro-ph.EP We present an intensive monitoring of high-resolution spectra of the Ca {\sc ii} K line in the A7IV shell star $\Phi$ Leo at very short (minutes, hours), short (night to night), and medium (weeks, months) timescales. The spectra show remarkable variable absorptions on timescales of hours, days, and months. The characteristics of these sporadic events are very similar to most that are observed toward the debris disk host star $\beta$ Pic, which are commonly interpreted as signs of the evaporation of solid, comet-like bodies grazing or falling onto the star. Therefore, our results suggest the presence of solid bodies around $\Phi$ Leo. To our knowledge, with the exception of $\beta$ Pic, our monitoring has the best time resolution at the mentioned timescales for a star with events attributed to exocomets. Assuming the cometary scenario and considering the timescales of our monitoring, our results indicate that $\Phi$ Leo presents the richest environment with comet-like events known to date, second only to $\beta$ Pic. • ### On-orbit performance of the Gaia CCDs at L2(1609.04240) Sept. 14, 2016 astro-ph.GA, astro-ph.IM The European Space Agency's Gaia satellite was launched into orbit around L2 in December 2013 with a payload containing 106 large-format scientific CCDs. The primary goal of the mission is to repeatedly obtain high-precision astrometric and photometric measurements of one thousand million stars over the course of five years. The scientific value of the down-linked data, and the operation of the onboard autonomous detection chain, relies on the high performance of the detectors. As Gaia slowly rotates and scans the sky, the CCDs are continuously operated in a mode where the line clock rate and the satellite rotation spin-rate are in synchronisation. Nominal mission operations began in July 2014 and the first data release is being prepared for release at the end of Summer 2016. In this paper we present an overview of the focal plane, the detector system, and strategies for on-orbit performance monitoring of the system. This is followed by a presentation of the performance results based on analysis of data acquired during a two-year window beginning at payload switch-on. Results for parameters such as readout noise and electronic offset behaviour are presented and we pay particular attention to the effects of the L2 radiation environment on the devices. 
The radiation-induced degradation in the charge transfer efficiency (CTE) in the (parallel) scan direction is clearly diagnosed; however, an extrapolation shows that charge transfer inefficiency (CTI) effects at end of mission will be approximately an order of magnitude less than predicted pre-flight. It is shown that the CTI in the serial register (horizontal direction) is still dominated by the traps inherent to the manufacturing process and that the radiation-induced degradation so far is only a few per cent. Finally, we summarise some of the detector effects discovered on-orbit which are still being investigated. • ### Gaia Data Release 1: Astrometry - one billion positions, two million proper motions and parallaxes(1609.04303) Sept. 14, 2016 astro-ph.GA, astro-ph.IM Gaia Data Release 1 (Gaia DR1) contains astrometric results for more than 1 billion stars brighter than magnitude 20.7 based on observations collected by the Gaia satellite during the first 14 months of its operational phase. We give a brief overview of the astrometric content of the data release and of the model assumptions, data processing, and validation of the results. For stars in common with the Hipparcos and Tycho-2 catalogues, complete astrometric single-star solutions are obtained by incorporating positional information from the earlier catalogues. For other stars only their positions are obtained by neglecting their proper motions and parallaxes. The results are validated by an analysis of the residuals, through special validation runs, and by comparison with external data. Results. For about two million of the brighter stars (down to magnitude ~11.5) we obtain positions, parallaxes, and proper motions to Hipparcos-type precision or better. For these stars, systematic errors depending e.g. on position and colour are at a level of 0.3 milliarcsecond (mas). For the remaining stars we obtain positions at epoch J2015.0 accurate to ~10 mas. Positions and proper motions are given in a reference frame that is aligned with the International Celestial Reference Frame (ICRF) to better than 0.1 mas at epoch J2015.0, and non-rotating with respect to ICRF to within 0.03 mas/yr. The Hipparcos reference frame is found to rotate with respect to the Gaia DR1 frame at a rate of 0.24 mas/yr. Based on less than a quarter of the nominal mission length and on very provisional and incomplete calibrations, the quality and completeness of the astrometric data in Gaia DR1 are far from what is expected for the final mission products. The results nevertheless represent a huge improvement in the available fundamental stellar data and practical definition of the optical reference frame. • ### Gaia: focus, straylight and basic angle(1608.00045) Aug. 21, 2016 astro-ph.IM The Gaia all-sky astrometric survey is challenged by several issues affecting the spacecraft stability. Amongst them, we find the focus evolution, straylight and basic angle variations Contrary to pre-launch expectations, the image quality is continuously evolving, during commissioning and the nominal mission. Payload decontaminations and wavefront sensor assisted refocuses have been carried out to recover optimum performance. Straylight and basic angle variations several orders of magnitude greater than foreseen were found and studied during commissioning by the Gaia scientists (payload experts). Building on their investigations, an ESA-Airbus DS working group was established during the early nominal mission and worked on a detailed root cause analysis. 
In parallel, Gaia scientists have also continued analysing the data, most notably comparing the BAM signal to global astrometric solutions, with remarkable agreement. In this contribution, a status review of these issues will be provided, with emphasis on the mitigation schemes and the lessons learned for future space missions where extreme stability is a key requirement. • ### Enabling science with Gaia observations of naked-eye stars(1605.08347) May 26, 2016 astro-ph.SR, astro-ph.IM ESA's Gaia space astrometry mission is performing an all-sky survey of stellar objects. At the beginning of the nominal mission in July 2014, an operation scheme was adopted that enabled Gaia to routinely acquire observations of all stars brighter than the original limit of G~6, i.e. the naked-eye stars. Here, we describe the current status and extent of those observations and their on-ground processing. We present an overview of the data products generated for G<6 stars and the potential scientific applications. Finally, we discuss how the Gaia survey could be enhanced by further exploiting the techniques we developed. • ### Incidence of debris discs around FGK stars in the solar neighbourhood(1605.05837) May 19, 2016 astro-ph.SR, astro-ph.EP Debris discs are a consequence of the planet formation process and constitute the fingerprints of planetesimal systems. Their solar system's counterparts are the asteroid and Edgeworth-Kuiper belts. The aim of this paper is to provide robust numbers for the incidence of debris discs around FGK stars in the solar neighbourhood. The full sample of 177 FGK stars with d<20 pc proposed for the DUNES survey is presented. Herschel/PACS observations at 100 and 160 micron complemented with data at 70 micron, and at 250, 350 and 500 micron SPIRE photometry, were obtained. The 123 objects observed by the DUNES collaboration were presented in a previous paper. The remaining 54 stars, shared with the DEBRIS consortium and observed by them, and the combined full sample are studied in this paper. The incidence of debris discs per spectral type is analysed and put into context together with other parameters of the sample, like metallicity, rotation and activity, and age. The subsample of 105 stars with d<15 pc containing 23 F, 33 G and 49 K stars, is complete for F stars, almost complete for G stars and contains a substantial number of K stars to draw solid conclusions on objects of this spectral type. The incidence rates of debris discs per spectral type 0.26 (6 objects with excesses out of 23 F stars), 0.21 (7 out of 33 G stars) and 0.20 (10 out of 49 K stars), the fraction for all three spectral types together being 0.22 (23 out of 105 stars). Uncertainties corresponding to a 95% confidence level are given in the text for all these numbers. The medians of the upper limits of L_dust/L_* for each spectral type are 7.8E-7 (F), 1.4E-6 (G) and 2.2E-6 (K); the lowest values being around 4.0E-7. The incidence of debris discs is similar for active (young) and inactive (old) stars. The fractional luminosity tends to drop with increasing age, as expected from collisional erosion of the debris belts. • ### Searching for signatures of planet formation in stars with circumstellar debris discs(1502.07100) Feb. 25, 2015 astro-ph.SR (Abridged) Tentative correlations between the presence of dusty debris discs and low-mass planets have been presented. In parallel, detailed chemical abundance studies have reported different trends between samples of planet and non-planet hosts. 
We determine in a homogeneous way the metallicity, and abundances of a sample of 251 stars including stars with known debris discs, with debris discs and planets, and only with planets. Stars with debris discs and planets have the same [Fe/H] behaviour as stars hosting planets, and they also show a similar <[X/Fe]>-Tc trend. Different behaviour in the <[X/Fe]>-Tc trend is found between the samples of stars without planets and the samples of planet hosts. In particular, when considering only refractory elements, negative slopes are shown in cool giant planet hosts, whilst positive ones are shown in stars hosting low-mass planets. Stars hosting exclusively close-in giant planets show higher metallicities and positive <[X/Fe]>-Tc slope. A search for correlations between the <[X/Fe]>-Tc slopes and the stellar properties reveals a moderate but significant correlation with the stellar radius and as well as a weak correlation with the stellar age. The fact that stars with debris discs and stars with low-mass planets do not show neither metal enhancement nor a different <[X/Fe]>-Tc trend might indicate a correlation between the presence of debris discs and the presence of low-mass planets. We extend results from previous works which reported differences in the <[X/Fe]>-Tc trends between planet hosts and non hosts. However, these differences tend to be present only when the star hosts a cool distant planet and not in stars hosting exclusively low-mass planets. • ### Does the presence of planets affect the frequency and properties of extrasolar Kuiper Belts? Results from the Herschel DEBRIS and DUNES surveys(1501.03813) Feb. 21, 2015 astro-ph.SR, astro-ph.EP The study of the planet-debris disk connection can shed light on the formation and evolution of planetary systems, and may help predict the presence of planets around stars with certain disk characteristics. In preliminary analyses of the Herschel DEBRIS and DUNES surveys, Wyatt et al. (2012) and Marshall et al. (2014) identified a tentative correlation between debris and low-mass planets. Here we use the cleanest possible sample out these surveys to assess the presence of such a correlation, discarding stars without known ages, with ages < 1 Gyr and with binary companions <100 AU, to rule out possible correlations due to effects other than planet presence. In our sample of 204 FGK stars, we do not find evidence that debris disks are more common or more dusty around stars harboring high-mass or low-mass planets compared to a control sample without identified planets, nor that debris disks are more or less common (or more or less dusty) around stars harboring multiple planets compared to single-planet systems. Diverse dynamical histories may account for the lack of correlations. The data show the correlation between the presence of high-mass planets and stellar metallicity, but no correlation between the presence of low-mass planets or debris and stellar metallicity. Comparing the observed cumulative distribution of fractional luminosity to those expected from a Gaussian distribution, we find that a distribution centered on the Solar system's value fits well the data, while one centered at 10 times this value can be rejected. This is of interest in the context of future terrestrial planet characterization because it indicates that there are good prospects for finding a large number of debris disk systems (i.e. 
with evidence of harboring the building blocks of planets) with exozodiacal emission low enough to be appropriate targets for an ATLAST-type mission to search for biosignatures. • ### ALMA observations of alpha Centauri: First detection of main-sequence stars at 3mm wavelength(1412.3923) Dec. 12, 2014 astro-ph.SR The precise mechanisms that provide the non-radiative energy for heating the chromosphere and the corona of the Sun and those of other stars constitute an active field of research. By studying stellar chromospheres one aims at identifying the relevant physical processes. Defining the permittable extent of the parameter space can also serve as a template for the Sun-as-a-star. Earlier observations with Herschel and APEX have revealed the temperature minimum of alpha Cen, but these were unable to spatially resolve the binary into individual components. With the data reported here, we aim at remedying this shortcoming. Furthermore, these earlier data were limited to the wavelength region between 100 and 870mu. In the present context, we intend to extend the spectral mapping to longer wavelengths, where the contrast between stellar photospheric and chromospheric emission becomes increasingly evident. ALMA is particularly suited to point sources, such as unresolved stars. ALMA provides the means to achieve our objectives with both its high sensitivity of the collecting area for the detection of weak signals and the high spatial resolving power of its adaptable interferometer for imaging close multiple stars. This is the first detection of main-sequence stars at a wavelength of 3mm. Furthermore, the individual components of the binary alpha CenAB are clearly detected and spatially well resolved at all ALMA wavelengths. The high S/N of these data permit accurate determination of their relative flux ratios. The previously obtained flux ratio of 0.44, which was based on measurements in the optical and at 70mu, is consistent with the present ALMA results, albeit with a large error bar. Given the distinct difference in their cyclic activity, the similarity of their submm SEDs appears surprising. • ### Constraints on the binary Properties of mid to late T dwarfs from Hubble Space Telescope WFC3 Observations(1408.4259) Sept. 18, 2014 astro-ph.SR We used HST/WFC3 observations of a sample of 26 nearby ($\le$20 pc) mid to late T dwarfs to search for cooler companions and measure the multiplicity statistics of brown dwarfs. Tightly-separated companions were searched for using a double-PSF fitting algorithm. We also compared our detection limits based on simulations to other prior T5+ brown dwarf binary programs. No new wide or tight companions were identified, which is consistent with the number of known T5+ binary systems and the resolution limits of WFC3. We use our results to add new constraints to the binary fraction of T-type brown dwarfs. Modeling selection effects and adopting previously derived separation and mass ratio distributions, we find an upper limit total binary fraction of <16% and <25% assuming power law and flat mass ratio distributions respectively, which are consistent with previous results. We also characterize a handful of targets around the L/T transition. • ### Enabling Gaia observations of naked-eye stars(1408.3039) The ESA Gaia space astrometry mission will perform an all-sky survey of stellar objects complete in the nominal magnitude range G = [6.0 - 20.0]. The stars with G lower than 6.0, i.e. those visible to the unaided human eye, would thus not be observed by Gaia. 
We present an algorithm configuration for the Gaia on-board autonomous object observation system that makes it possible to observe very bright stars with G = [2.0-6.0). Its performance has been tested during the in-orbit commissioning phase achieving an observation completeness of ~94% at G = 3 - 5.7 and ~75% at G = 2 - 3. Furthermore, two targeted observation techniques for data acquisition of stars brighter than G = 2.0 were tested. The capabilities of these two techniques and the results of the in-flight tests are presented. Although the astrometric performance for stars with G lower than 6.0 has yet to be established, it is clear that several science cases will benefit from the results of the work presented here. • ### Gaia on-board metrology: basic angle and best focus(1407.3729) July 14, 2014 astro-ph.IM The Gaia payload ensures maximum passive stability using a single material, SiC, for most of its elements. Dedicated metrology instruments are, however, required to carry out two functions: monitoring the basic angle and refocusing the telescope. Two interferometers fed by the same laser are used to measure the basic angle changes at the level of $\mu$as (prad, micropixel), which is the highest level ever achieved in space. Two Shack-Hartmann wavefront sensors, combined with an ad-hoc analysis of the scientific data are used to define and reach the overall best-focus. In this contribution, the systems, data analysis, procedures and performance achieved during commissioning are presented July 14, 2014 astro-ph.IM This document describes the uplink commanding system for the ESA Gaia mission. The need for commanding, the main actors, data flow and systems involved are described. The system architecture is explained in detail, including the different levels of configuration control, software systems and data models. A particular subsystem, the automatic interpreter of human-readable onboard activity templates, is also carefully described. Many lessons have been learned during the commissioning and are also reported, because they could be useful for future space survey missions. • ### Correlations between the stellar, planetary and debris components of exoplanet systems observed by $\textit{Herschel}$(1403.6186) March 24, 2014 astro-ph.EP The $\textit{Herschel}$ DEBRIS, DUNES and GT programmes observed 37 exoplanet host stars within 25 pc at 70, 100 and 160 $\mu$m with the sensitivity to detect far-infrared excess emission at flux density levels only an order of magnitude greater than that of the Solar system's Edgeworth-Kuiper belt. Here we present an analysis of that sample, using it to more accurately determine the (possible) level of dust emission from these exoplanet host stars and thereafter determine the links between the various components of these exoplanetary systems through statistical analysis. We have fitted the flux densities measured from recent \textit{Herschel} observations with a simple two parameter ($T_{d}$, $L_{\rm IR}/L_{\star}$) black body model (or to the 3-$\sigma$ upper limits at 100 $\mu$m). From this uniform approach we calculate the fractional luminosity, radial extent, dust temperature and disc mass. We then plotted the calculated dust luminosity or upper limits against the stellar properties, e.g. effective temperature, metallicity, age, and identified correlations between these parameters. A total of eleven debris discs are identified around the 37 stars in the sample. 
An incidence of ten cool debris discs around the Sun-like exoplanet host stars (29 $\pm$ 9 %) is consistent with the detection rate found by DUNES (20.2 $\pm$ 2.0 %). For the debris disc systems, the dust temperatures range from 20 to 80 K, and fractional luminosities ($L_{\rm IR}/L_{\star}$) between 2.4 $\times$10$^{-6}$ and 4.1 $\times$10$^{-4}$. In the case of non-detections, we calculated typical 3-$\sigma$ upper limits to the dust fractional luminosities of a few $\times10^{-6}$. We recover the previously identified correlation between stellar metallicity and hot Jupiter planets in our data set. We find a correlation between the increased presence of dust, lower planet masses and lower stellar metallicities. (abridged) • [Abridged] Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby binary aCentauri have higher than solar metallicities, which is thought to promote giant planet formation. We aim to determine the level of emission from debris in the aCen system. Having already detected the temperature minimum, Tmin, of aCenA, we here attempt to do so also for the companion aCenB. Using the aCen stars as templates, we study possible effects Tmin may have on the detectability of unresolved dust discs around other stars. We use Herschel and APEX photometry to determine the stellar spectral energy distributions. In addition, we use APEX for spectral line mapping to study the complex background around aCen seen in the photometric images. Models of stellar atmospheres and discs are used to estimate the amount of debris around these stars. For solar-type stars, a fractional dust luminosity fd 2e-7 could account for SEDs that do not exhibit the Tmin-effect. Slight excesses at the 2.5 sigma level are observed at 24 mu for both stars, which, if interpreted to be due to dust, would correspond to fd (1-3)e-5. Dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the aCen stars, viz. <~4e-6 MMoon of 4 to 1000 mu size grains, distributed according to n a^-3.5. Similarly, for filled-in Tmin emission, corresponding EKBs could account for ~1e-3 MMoon of dust. Light scattered and/or thermally emitted by exo-Zodi discs will have profound implications for future spectroscopic missions designed to search for biomarkers in the atmospheres of Earth-like planets. The F-IR SED of aCenB is marginally consistent with the presence of a minimum temperature region in the upper atmosphere. We also show that an aCenA-like temperature minimum may result in an erroneous apprehension about the presence of dust around other stars. • ### Potential multi-component structure of the debris disk around HIP 17439 revealed by Herschel/DUNES(1312.6385) Dec. 22, 2013 astro-ph.SR, astro-ph.EP [abridged] Aims. Our Herschel Open Time Key Programme DUNES aims at detecting and characterizing debris disks around nearby, sun-like stars. In addition to the statistical analysis of the data, the detailed study of single objects through spatially resolving the disk and detailed modeling of the data is a main goal of the project. Methods. We obtained the first observations spatially resolving the debris disk around the sun-like star HIP 17439 (HD23484) using the instruments PACS and SPIRE on board the Herschel Space Observatory. Simultaneous multi-wavelength modeling of these data together with ancillary data from the literature is presented. Results. 
A standard single component disk model fails to reproduce the major axis radial profiles at 70 um, 100 um, and 160 um simultaneously. Moreover, the best-fit parameters derived from such a model suggest a very broad disk extending from few au up to few hundreds of au from the star with a nearly constant surface density which seems physically unlikely. However, the constraints from both the data and our limited theoretical investigation are not strong enough to completely rule out this model. An alternative, more plausible, and better fitting model of the system consists of two rings of dust at approx. 30 au and 90 au, respectively, while the constraints on the parameters of this model are weak due to its complexity and intrinsic degeneracies. Conclusions. The disk is probably composed of at least two components with different spatial locations (but not necessarily detached), while a single, broad disk is possible, but less likely. The two spatially well-separated rings of dust in our best-fit model suggest the presence of at least one high mass planet or several low-mass planets clearing the region between the two rings from planetesimals and dust. • ### Can eccentric debris disks be long-lived? A first numerical investigation and application to $\zeta^2$ Reticuli(1312.5146) Dec. 18, 2013 astro-ph.EP Imaging of debris disks has found evidence for both eccentric and offset disks. One hypothesis is that these provide evidence for massive perturbers that sculpt the observed structures. One such disk was recently observed in the far-IR by the Herschel Space Observatory around $\zeta^2$ Ret. In contrast with previously reported systems, the disk is significantly eccentric, and the system is Gyr-old. We aim to investigate the long-term evolution of eccentric structures in debris disks caused by a perturber on an eccentric orbit. Both analytical predictions and numerical N-body simulations are used to investigate the observable structures that could be produced by eccentric perturbers. The long-term evolution of the disk geometry is examined, with particular application to the $\zeta^2$ Ret system. In addition, synthetic images of the disk are produced for comparison with Herschel observations. We show that an eccentric companion can produce both the observed offsets and eccentric disks. Such effects are not immediate and we characterise the timescale required for the disk to develop to an eccentric state. For the case of $\zeta^2$ Ret, we place limits on the mass and orbit of the companion required to produce the observations. Synthetic images show that the pattern observed around $\zeta^2$ Ret can be produced by an eccentric disk seen close to edge-on, and allow us to bring additional constraints on the disk parameters of our model (disk flux, extent). We determine that eccentric planets or stellar companions can induce long-lived eccentric structures in debris disks. Observations of such eccentric structures provide potential evidence of the presence of such a companion in a planetary system. We consider the example of $\zeta^2$ Ret, whose observed eccentric disk can be explained by a distant companion at tens of AU, on an eccentric orbit ($e_p\gtrsim 0.3$). • ### Accretion variability of Herbig Ae/Be stars observed by X-Shooter. HD 31648 and HD 163296(1308.3248) Aug. 14, 2013 astro-ph.SR This work presents X-Shooter/VLT spectra of the prototypical, isolated Herbig Ae stars HD 31648 (MWC 480) and HD 163296 over five epochs separated by timescales ranging from days to months. 
Each spectrum spans over a wide wavelength range covering from 310 to 2475 nm. We have monitored the continuum excess in the Balmer region of the spectra and the luminosity of twelve ultraviolet, optical and near infrared spectral lines that are commonly used as accretion tracers for T Tauri stars. The observed strengths of the Balmer excesses have been reproduced from a magnetospheric accretion shock model, providing a mean mass accretion rate of 1.11 x 10^-7 and 4.50 x 10^-7 Msun yr^-1 for HD 31648 and HD 163296, respectively. Accretion rate variations are observed, being more pronounced for HD 31648 (up to 0.5 dex). However, from the comparison with previous results it is found that the accretion rate of HD 163296 has increased by more than 1 dex, on a timescale of ~ 15 years. Averaged accretion luminosities derived from the Balmer excess are consistent with the ones inferred from the empirical calibrations with the emission line luminosities, indicating that those can be extrapolated to HAe stars. In spite of that, the accretion rate variations do not generally coincide with those estimated from the line luminosities, suggesting that the empirical calibrations are not useful to accurately quantify accretion rate variability. • ### Herschel's "Cold Debris Disks": Background Galaxies or Quiescent Rims of Planetary Systems?(1306.2855) June 12, 2013 astro-ph.EP (abridged) Infrared excesses associated with debris disk host stars detected so far peak at wavelengths around ~100{\mu}m or shorter. However, six out of 31 excess sources in the Herschel OTKP DUNES have been seen to show significant - and in some cases extended - excess emission at 160{\mu}m, which is larger than the 100{\mu}m excess. This excess emission has been suggested to stem from debris disks colder than those known previously. Using several methods, we re-consider whether some or even all of the candidates may be associated with unrelated galactic or extragalactic emission and conclude that it is highly unlikely that none of the candidates represents a true circumstellar disk. For true disks, both the dust temperatures inferred from the SEDs and the disk radii estimated from the images suggest that the dust is nearly as cold as a blackbody. This requires the grains to be larger than ~100{\mu}m, regardless of their material composition. To explain the dearth of small grains, we explore several conceivable scenarios: transport-dominated disks, disks of low dynamical excitation, and disks of unstirred primordial macroscopic grains. Our qualitative analysis and collisional simulations rule out the first two of these scenarios, but show the feasibility of the third one. We show that such disks can survive for gigayears, largely preserving the primordial size distribution. They should be composed of macroscopic solids larger than millimeters, but smaller than kilometers in size. Thus planetesimal formation, at least in the outer regions of the systems, has stopped before "cometary" or "asteroidal" sizes were reached.
web
auto_math_text
# Contents

## Idea

What is called massive type IIA string theory is a deformation of type IIA string theory which contains the RR-field flux forms $F_0$ and/or, Hodge dually, $F_{10}$, which couple to D8-branes. This is the UV-completion of massive type IIA supergravity, in analogy to how plain type IIA string theory is the UV-completion of plain type IIA supergravity.

## Properties

### Black D8-branes

It is (only) massive type IIA string theory where D8-branes exist as actual black branes (BdRGPT 96, Chamblin-Perry 97, Janssen-Meessen-Ortin 99). Similarly for D6-D8-brane bound states (Singh 02a, Singh 02b). This is because the normal n-sphere around a D8-brane (with its 9-dimensional worldvolume) in 10-dimensional spacetime is a 0-sphere, so that D8-brane charge is measured by the RR-field 0-flux form $F_0$, or else by its Hodge dual 10-form $F_{10}$. These behave like a cosmological constant in the corresponding D=10 supergravity (“Romans supergravity”) and cause the mass term for the B-field.

### Lift to M-theory?

The massive version of the duality between type IIA string theory and M-theory is more subtle, since D=11 supergravity does not admit a corresponding mass deformation (BDHS 97, Deser 97, Tsimpis 05). In AJTZ10 it was argued that a massive strong-coupling limit may just not exist. But in Hull 98 an embedding of massive IIA into M-theory was claimed, and a corresponding BFSS matrix model compactification was claimed in Lowe-Nastase-Ramgoolam 03.

## References

### General

An early hint is in

• Joseph Polchinski, pp. 8-9 of: Dirichlet-Branes and Ramond-Ramond Charges, Phys. Rev. Lett. 75:4724-4727, 1995

Serious development in

### Black branes

#### D8-branes

As a black brane the D8-brane was identified as a solution to Romans supergravity/massive type IIA string theory in

#### D6-D8-brane bound states

With emphasis on charge quantization of the RR-field flux forms via bundle gerbes:

#### D6-D8-brane bound states with D2-D4-brane defects

On black D6-D8-brane bound states in massive type IIA string theory, with defect D2-D4-brane bound states inside them realizing AdS3-CFT2 “inside” AdS7-CFT6:

#### D4-D8-brane bound states with D2-D6-brane defects

On black D4-D8-brane bound states in massive type IIA string theory, with defect D2-D6-brane bound states inside them realizing AdS3-CFT2 “inside” AdS7-CFT6:

### M-Theory/Strong coupling limit

Discussion of the impossibility of a mass deformation of D=11 supergravity:

• K. Bautier, S. Deser, Marc Henneaux, D. Seminara, No Cosmological $D=11$ Supergravity, Phys. Lett. B406:49-53, 1997 (arXiv:hep-th/9704131)
• S. Deser, Uniqueness of $D=11$ Supergravity (arXiv:hep-th/9712064)

Speculation that the strong-coupling limit of massive type IIA does not exist:

Claim of a realization of massive type IIA string theory in M-theory:

Our purpose here is to argue that although the Romans supergravity theory may not be derivable from 11-dimensional supergravity, or any covariant massive deformation thereof, the massive IIA superstring, whose low energy limit is the Romans theory, can be obtained from M-theory. The type IIB supergravity theory also cannot be obtained from 11-dimensional supergravity, but the type IIB string theory can be obtained from M-theory by compactifying on a 2-torus and taking a limit

and claim of the corresponding BFSS matrix model:
web
auto_math_text
Now showing items 1-7 of 7 • #### The Complete Survey of Outflows in Perseus  (American Astronomical Society, 2010) We present a study on the impact of molecular outflows in the Perseus molecular cloud complex using the COMPLETE Survey large-scale $^{12}CO(1-0)$ and $^{13}CO(1-0)$ maps. We used three-dimensional isosurface models ... • #### Dense Cores in Perseus: The Influence of Stellar Content and Cluster Environment  (American Astronomical Society, 2009) We present the chemistry, temperature, and dynamical state of a sample of 193 dense cores or core candidates in the Perseus Molecular cloud and compare the properties of cores associated with young stars and clusters with ... • #### Direct Observation of a Sharp Transition to Coherence in Dense Cores  (American Astronomical Society, 2010) We present $NH_3$ observations of the B5 region in Perseus obtained with the Green Bank Telescope. The map covers a region large enough $(\sim 11'×14')$ that it contains the entire dense core observed in previous dust ... • #### Evidence for Grain Growth in Molecular Clouds: A Bayesian Examination of the Extinction Law in Perseus  (Royal Astronomical Society, 2013) We investigate the shape of the extinction law in two $1^{\circ}$ square fields of the Perseus molecular cloud complex. We combine deep red-optical (r, i and z band) observations obtained using Megacam on the MMT with ... • #### Misalignment of Outflow Axes in the Proto-Multiple Systems in Perseus  (American Astronomical Society, 2016) We investigate the alignment between outflow axes in nine of the youngest binary/multiple systems in the Perseus Molecular Cloud. These systems have typical member spacing larger than 1000 au. For outflow identification, ... • #### The Perils of Clumpfind: The Mass Spectrum of Substructures in Molecular Clouds  (American Astronomical Society, 2009) We study the mass spectrum of substructures in the Perseus Molecular Cloud Complex traced by $^{13}CO(1–0)$, finding that $dN/dM \ \alpha \ M^{−2.4}$ for the standard Clumpfind parameters. This result does not agree ... • #### The "True" Column Density Distribution in Star-Forming Molecular Clouds  (American Astronomical Society, 2009) We use the COMPLETE Survey's observations of the Perseus star-forming region to assess and intercompare the three methods used for measuring column density in molecular clouds: near-infrared (NIR) extinction mapping; thermal ...
web
auto_math_text
Wed 2020-Dec-02 # Proposed source of the "Wow!" signal? The sad news about Arecibo brings up the topic of SETI. In better news, an amateur astronomer has proposed a particular star as the source of the famous “Wow!” event, using the Gaia Archive. Wait, what? Popular physics reporting went a bit nuts last week or two [1] [2] [3] (though the professional physics venues have yet to say much at all) about a preprint describing a search for a sun-like star, potentially with planets, that is positioned to have been the source of the “Wow!” signal in SETI. Let’s unpack what that may, or may not, mean. ## SETI SETI is a physics research area devoted to searching for evidence of extraterrestrial intelligence. Mainly, this is done through radio astronomy for various reasons involving low cost to send an interstellar message, fairly obvious ways to stand out against background noise, reasonable knowledge of physics directing the choice of frequencies, and so on. Most natural phenomena are wide-band, i.e., smeared out over a wide range of radio frequencies. So the holy grail of this entire enterprise is to find a signal which is (a) highly localized to a specific location in the sky that tracks sidereally, and (b) is very narrow-band in the way its power is spread across frequencies. (There are other requirements, like scintillation, but we’ll gloss over the details.) ## The "Wow!" signal On 1977-Aug-15, that happened. Observers using the “Big Ear” radio telescope at OSU detected a narrow-band signal coming from Saggitarius. Jerry Ehman, the astronomer on duty, wrote “Wow!” on the compuer printout, and so to this day it’s called the Wow! signal. (Time is on the vertical axis in the printout, increasing downward. The horizontal axis is for frequency bands.) There was no detectable modulation that anyone could figure out, but it was remarkably spatially localized and narrow-band. The Big Ear telescope observed it for a time window of 72 seconds. This is to be expected: the instrument relied on the rotation of the earth to scan it across the sky, and given that rotation, a sidereal-tracking signal (stationary with respect to the stars) should be bright for about 72 seconds, with peak intensity in the middle of that interval. The mysterious “6EQUJ5” is an idiosyncratic way of recording the signal intensity vs time, given the instruments of the day. Each frequency band listened for 10 seconds, processed for 2 seconds, and then printed out a single character describing the average power (minus baseline) for that 10 second interval, divided by the standard deviation. (It was blind during the 2 second compute interval.) The value reported is the dimensionless ratio of background-subtracted intensity to standard deviation (noise, basically). It frustrates me that no quickly-available source would show me the equation, but I’m guessing it was combining the average power difference between horns 1 and 2 and their combined noise in some dimensionless ratio like: \begin{align*} \mathrm{Signal to Noise Ratio} & = \frac{|\mu_1 - \mu_2|}{\sqrt{\sigma_1^2 + \sigma_2^2}} \\ \end{align*} That’s printed out as a single alphanumeric character in [0-9A-Z], basically a single digit base 36. “6EQUJ5” is the series of observations at 12-second intervals of that signal-to-noise ratio. E.g., a “5” means the difference in average power between the 2 horns was about 5 times the combined noise in both horns. The “U” is about 30σ above noise, so… “wow”. 
It fits a Gaussian versus time; as expected given the rotation of the earth taking the dish away from the source, it peaked right in the middle of the 72 second window of observation. It was at a center frequency of 1420.4556 ± 0.005 MHz, just above the hydrogen line. The bandwidth was below 10kHz, that being the minimum bandwidth the Big Ear’s instruments could handle, back in the day. Terrestrial sources are unlikely, since that frequency is in a protected band. (Though apparently the military does occasionally flout that protection?) It has never been seen since. ## Enter Gaia In 2013, the European Space Agency launched the Gaia space observatory. It’s measuring the position, distance, and proper motion of stars, quasars, some of the larger exoplanets, and more domestic things like comets. It does so with astounding precision. For stars, it also uses a spectrophotometer to record luminosity, surface temperature, gravity, and composition (such as metallicity). By observing each of about 1 billion objects 70 times during the spacecraft lifetime, it is building a 3D map of objects along with their proper velocities. It’s truly extraordinary! ## Gaia and Wow! Back in 1977, when the Wow! signal happened, people used the star catalogs of the day to see if there was a particular sun-like star in the 2 patches of sky whence came the signal. (2 patches because the instrument had 2 feed horns.) They found nothing of interest, meaning the star catalogs back then were quite sparse and there were a plethora of stars not really adequately characterized. Enter amateur astronomer, Alberto Caballero, who searched the Gaia Archive for stars somewhat like our own, in the right area(s) of Saggitarius. [4] He used these filters: • Spectral type K to G • Estimated temperature 4450 - 6000 K • Estimated luminosity 0.34 - 1.5 solar luminosity That found 38 candidate stars in the positive feed horn’s patch of sky, and 28 for the negative feed horn. However, with more conservative filters (e.g., demanding temperatures between 5730 - 5830°K to be more like the sun), there were no stars in the positive horn beam and exactly 1 in the negative horn beam: Gaia source_id 6766185791864654720, known in the 2-Micron All-Sky Survey (2MASS) archive as 2MASS 19281982-2640123: • Range: 552 parsec = 1801 ly • Temperature: 5783 K • Luminosity: 1.0007366 solar luminosity There are other candidates, depending on how you flex the cutoffs in the query, or whether you admit dim stars not catalogued, or extragalactic sources. But this is the best star with reasonable data, apparently by a reasonable margin. And even for 2MASS 19281982-2640123, we still don’t have good data on metallicity, age, stellar companions, and so on. Maybe an exoplanet search targeting 2MASS 19281982-2640123 would be potentially interesting… even though the Wow! signal still hasn’t repeated for the last 43 years. ## Notes & References 1: B Yirka, “Amateur astronomer Alberto Caballero finds possible source of Wow! signal”, phys.org, 2020-Nov-24. 2: Physics arXiv Blog, “Sun-Like Star Identified As the Potential Source of the Wow! Signal”, Astronomy, 2020-Nov-23. 3: P Anderson, “Did the Wow! signal come from this star?”, Earth/Sky, 2020-Dec-02. 4: A Caballero, “An approximation to determine the source of the WOW! Signal”, arXiv.org, 2020-Nov-08 (revised 2020-Dec-01). NB: This is a preprint, not yet having passed peer review. Published Wed 2020-Dec-02 ## Gestae Commentaria Comments for this post are closed.
web
auto_math_text
## Friday, 7 October 2011 ### Buffer object streaming in OpenGL This article presents an algorithm for asynchronous data uploading on the GPU called buffer streaming. It is based on a discussion on the OpenGL forum, and more precisely on a suggestion of Rob Barris (from Blizzard, also member of the ARB). The link to the discussion is given at the end of the article. The algorithm can be used for many interesting things such as efficient uniform data specification (using uniform buffer objects) or to replace the deprecated immediate mode for rendering. The demo I provide performs the latter by rendering a Quake2 Md2 model using an OpenGL 3 (and above) Core profile context. Motivations Many applications process data on the CPU before rendering it. In a key-framed animation for example, the vertices of the mesh are interpolated (usually linearly) to smooth the animation. Since OpenGL3, the data used for rendering has to be stored in buffer objects, so if you have to update your data before each new frame, you also end up having to transfer it into a buffer object. There's been a lot of debate amongst the OpenGL discussion boards on how to do this efficiently, one of the most interesting being this one (definitely worth reading for developers wanting to use buffer objects in OpenGL). Ideally, the transfer should not require synchronization between the CPU and the GPU. Fortunately, such a procedure is possible with the ARB_map_buffer_range extension, which is available on every OpenGL3 compliant GPUs. Buffer object streaming algorithm in OpenGL So we have the following scenario: data is written by the CPU to a buffer, which is then read by the GPU. In OpenGL, there are several ways to write to a buffer (glBufferData, glBufferSubData, glMapBuffer and glMapBufferRange to name them all), but there's only one way to do it asynchronously : by calling glMapBufferRange with the unsynchronized flag (GL_MAP_UNSYNCHRONIZED_BIT), so this is what we'll be using. Since the whole process is asynchronous, we have to guarantee that we'll never end up writing to a region of the buffer which is in use by the GPU. The idea is to allocate a fixed amount of memory for the buffer object (using glBufferData, and data set to NULL), and initialize an offset variable to 0. The memory amount should be greater than the data which needs to be processed, but not too big either for fast allocation. A few Mega Bytes is good (I use 8 MBytes in my demo). // configure buffer objects glBindBuffer(GL_ARRAY_BUFFER, buffers[BUFFER_VERTEX_MD2]); glBufferData(GL_ARRAY_BUFFER, STREAM_BUFFER_CAPACITY, NULL, GL_STREAM_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); When the data has been processed by the CPU, we upload it to mapped region of the buffer object. Once the upload has been done, we increase the offset by the amount of data we added. Hence we also have to watch for overflowing : if the size of the data we're uploading exceeds the buffer capacity, we allocate a new memory block for the buffer, and reset the offset variable. This process is called orphaning. 
// stream variables static GLuint streamOffset = 0; static GLuint drawOffset = 0; // bind the buffer glBindBuffer(GL_ARRAY_BUFFER, buffers[BUFFER_VERTEX_MD2]); // orphan the buffer if full GLuint streamDataSize = fw::next_power_of_two(md2->TriangleCount() *3*sizeof(Md2::Vertex)); if(streamOffset + streamDataSize > STREAM_BUFFER_CAPACITY) { // allocate new space and reset the vao glBufferData( GL_ARRAY_BUFFER, STREAM_BUFFER_CAPACITY, NULL, GL_STREAM_DRAW ); glBindVertexArray(vertexArrays[VERTEX_ARRAY_MD2]); glBindBuffer(GL_ARRAY_BUFFER, buffers[BUFFER_VERTEX_MD2]); glVertexAttribPointer( 0, 3, GL_FLOAT, 0, sizeof(Md2::Vertex), FW_BUFFER_OFFSET(0) ); glVertexAttribPointer( 1, 3, GL_FLOAT, 0, sizeof(Md2::Vertex), FW_BUFFER_OFFSET(3*sizeof(GLfloat))); glVertexAttribPointer( 2, 2, GL_FLOAT, 0, sizeof(Md2::Vertex), FW_BUFFER_OFFSET(6*sizeof(GLfloat))); glBindVertexArray(0); // reset offset streamOffset = 0; } // get memory safely Md2::Vertex* vertices = (Md2::Vertex*) (glMapBufferRange(GL_ARRAY_BUFFER, streamOffset, streamDataSize, GL_MAP_WRITE_BIT |GL_MAP_UNSYNCHRONIZED_BIT)); // make sure memory is mapped if(NULL == vertices) throw std::runtime_error("Failed to map buffer."); // set final data md2->GenVertices(vertices); // unmap buffer glUnmapBuffer(GL_ARRAY_BUFFER); glBindBuffer(GL_ARRAY_BUFFER, 0); // compute draw offset drawOffset = streamOffset/sizeof(Md2::Vertex); // increment offset streamOffset += streamDataSize; And there you have it, asynchronous data upload ! - Try to make your data size a power of two. - If you are using your buffer object for rendering, you'll need to reset your vertex array objects after orphaning. Otherwise, you can use set the first argument or the baseVertex of your drawing function. See an excerpt of my demo's source code below (note how I evaluate the first parameter in glDrawArrays): // draw glBindVertexArray(vertexArrays[VERTEX_ARRAY_MD2]); glDrawArrays( GL_TRIANGLES, drawOffset, md2->TriangleCount()*3); Demo Rendering a QuakeII Md2 model: I use the buffer streaming algorithm to upload the vertices of a mesh and render it in an OpenGL4.2 Core Profile context. You can download the source archive here. A vs2010 project and a makefile are provided, you should be able to compile under Windows and Linux (works for me with Win7 x64 and Ubuntu Lucid x64 with a Radeon 5770 and Catalyst 11.12). You'll need an OpenGL4.2 compliant GPU to run the demo.
web
auto_math_text
# Types of DC Motors - Series, Shunt and Compound Wound Digital ElectronicsElectronElectronics & Electrical According to the type of connection of the field winding with the armature, the DC motors are classified as follows − • Permanent Magnet DC Motors • Separately-Excited DC Motors • Self-Excited DC Motors • Series Wound DC Motor • Shunt Wound DC Motor • Compound Wound DC Motor ## Permanent Magnet DC Motor A DC motor in which the main field flux is produced by the permanent magnets is known as permanent magnet DC motor. In this type DC motor, there is only one external source of DC supply is required, for supplying electrical power to the armature. The permanent magnet DC motors are mainly used in small scale application like in toys. ## Separately-Excited DC Motor In a separately-excited DC motor, the main field winding is excited by an external source of DC supply. The separately-excited dc motor is a doubly-excited motor, in which two sources of DC supply are required, one for armature and the second for the excitation of field winding. Here, $$\mathrm{Armature\:current,I_{a} = I_{s}}$$ $$\mathrm{Supply\:voltage,\:V_{s} = E_{b} + I_{a}R_{a}}$$ $$\mathrm{Electric\:power\:developed\:in\:armature = E_{b}I_{a}}$$ ## Series Wound DC Motor A DC motor in which the field winding is connected in series with the armature winding is known as series wound DC motor. Since the series field winding carries the whole armature current. Therefore, the series field winding has a small number of turns of thick wire and should possess a low resistance. Here, $$\mathrm{Armature\:current,\:I_{a}= I_{se}}$$ $$\mathrm{Supply\:voltage,\:VS \:= \:E_{b} + I_{se}R_{se} + I_{a}R_{a} \:= \:E_{b} + I_{a}(R_{a} + R_{se})}$$ Applications – The series DC motors are variable speed motors i.e. their speed is low at high torque and vice-versa. Although, at no-load or light load, the motor attains dangerously high speed. The series motors have high starting torques. Therefore, they are used in following applications - • Used where large starting torque is required like in elevators, electric tractions, cranes, etc. • Used where load is subjected to heavy fluctuations and the speed is required to be automatically regulated according to load requirements. • Also used in air compressors, vacuum cleaners, hair driers, sewing machines etc. ## Shunt Wound DC Motor A shunt wound DC motor is the one, in which the field winding is connected in parallel with the armature winding. The shunt field windings are designed to have high resistance, i.e., have a large number of turns of fine wire so that the shunt field current is relatively small as compared to the armature current. Here, $$\mathrm{Armature \:current,\:I_{a} \:= \:I_{s} − I_{sh}}$$ $$\mathrm{Shunt \:field \:current,\:I_{sh}\: =\:\frac{V_{s}}{R_{sh}}}$$ $$\mathrm{Supply \:voltage,\:V_{s} \:= \:E_{b} + I_{a}R_{a}}$$ Applications – The shunt motors are constant speed motors. Therefore, they are used in following applications - • Where speed is required to remain constant form no-load to full load. • Used in lathes, drills, sharpers, spinning and weaving machines, boring mills etc. ## Compound Wound DC Motor A DC motor in which both the series field and shunt field are combined is known as compound wound DC motor. There two types of compound DC motors as − ### Short-Shunt Compound Motor In a short-shunt compound motor, the shunt field winding is directly connected in parallel with the armature winding. 
### Long-Shunt Compound Motor When the shunt field winding is connected in parallel with the series combination of armature winding and the series field winding, then the motor is known as long-shunt compound motor. Note – When the series field flux aids the shunt field flux, i.e., both are in same direction, then the compound motor is known cumulative compound motor whereas when the series field opposes the shunt field, i.e., both are in opposite direction, the motor is known as differential compound motor. Important – The compound DC machines (generator or motor), are always designed such that the magnetic flux produced by shunt field winding is greater than the flux produced by the series field winding. Applications – The differentially-compound motors are rarely used due to their poor torque characteristics. However, the cumulatively-compound motors are used in the constant speed applications with irregular loads or suddenly applied heavy loads like presses, reciprocating machines and shears etc. Published on 18-Aug-2021 06:45:55
web
auto_math_text
# All issues Numerical simulation of ethylene combustion in supersonic air flow pdf (1280K) In the present paper, we discuss the possibility of a simplified three-dimensional unsteady simulation of plasma-assisted combustion of gaseous fuel in a supersonic airflow. Simulation was performed by using FlowVision CFD software. Analysis of experimental geometry show that it has essentially 3D nature that conditioned by the discrete fuel injection into the flow as well as by the presence of the localized plasma filaments. Study proposes a variant of modeling geometry simplification based on symmetry of the aerodynamic duct and periodicity of the spatial inhomogeneities. Testing of modified FlowVision $k–\varepsilon$ turbulence model named «KEFV» was performed for supersonic flow conditions. Based on that detailed grid without wall functions was used the field of heat and near fuel injection area and surfaces remote from the key area was modeled with using of wall functions, that allowed us to significantly reduce the number of cells of the computational grid. Two steps significantly simplified a complex problem of the hydrocarbon fuel ignition by means of plasma generation. First, plasma formations were simulated by volumetric heat sources and secondly, fuel combustion is reduced to one brutto reaction. Calibration and parametric optimization of the fuel injection into the supersonic flow for IADT-50 JIHT RAS wind tunnel is made by means of simulation using FlowVision CFD software. Study demonstrates a rather good agreement between the experimental schlieren photo of the flow with fuel injection and synthetical one. Modeling of the flow with fuel injection and plasma generation for the facility T131 TSAGI combustion chamber geometry demonstrates a combustion mode for the set of experimental parameters. Study emphasizes the importance of the computational mesh adaptation and spatial resolution increasing for the volumetric heat sources that model electric discharge area. A reasonable qualitative agreement between experimental pressure distribution and modeling one confirms the possibility of limited application of such simplified modeling for the combustion in high-speed flow. Keywords: combustion in supersonic flow, numerical simulation, direct current discharge, plasma-assisted combustion Citation in English: Firsov A.A., Yarantsev D.A., Leonov S.B., Ivanov V.V. Numerical simulation of ethylene combustion in supersonic air flow // Computer Research and Modeling, 2017, vol. 9, no. 1, pp. 75-86 DOI: 10.20537/2076-7633-2017-9-75-86 Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU The journal is included in the Russian Science Citation Index The journal is included in the List of Russian peer-reviewed journals publishing the main research results of PhD and doctoral dissertations. International Interdisciplinary Conference "Mathematics. Computing. Education" The journal is included in the RSCI
web
auto_math_text
Review Article | Published: # Topological antiferromagnetic spintronics ## Abstract The recent demonstrations of electrical manipulation and detection of antiferromagnetic spins have opened up a new chapter in the story of spintronics. Here, we review the emerging research field that is exploring the links between antiferromagnetic spintronics and topological structures in real and momentum space. Active topics include proposals to realize Majorana fermions in antiferromagnetic topological superconductors, to control topological protection and Dirac points by manipulating antiferromagnetic order parameters, and to exploit the anomalous and topological Hall effects of zero-net-moment antiferromagnets. We explain the basic concepts behind these proposals, and discuss potential applications of topological antiferromagnetic spintronics. ## Main Topologically protected states of matter are unusually robust because they cannot be destroyed by small changes in system parameters. This feature of topological states has suggested an appealing strategy to achieve useful quantum computation1,2. In spintronics, topological states provide for strong spin-momentum locking3, high charge-current to spin-current conversion efficiency4,5,6, high electron mobility and long spin diffusion length7,8, strong magnetoresistance8 and efficient spin filtering9. Materials exhibiting topologically protected Dirac or Weyl quasiparticles in their momentum-space bands, and those exhibiting topologically non-trivial real-space spin textures6,10, have both inspired new energy-efficient spintronic concepts4,7,10,11,12,13. In a topological insulator (TI), time-reversal symmetry enforces Dirac quasiparticle surface states with spin-momentum locking (see panel a in the figure in Box 1) and protection against backscattering3. The much higher efficiency of magnetization switching by a current-induced spin–orbit torque in a TI/magnetically doped TI (MTI) heterostructure4,14 than in a heavy-metal/ferromagnet (FM) bilayer is thought to be associated with spin-momentum locking. This is a paradigmatic example of the potential for applications of topological materials in spintronics, although a full microscopic understanding of the underlying current–spin conversion mechanism is still absent15. Progress in understanding and exploiting TIs in spintronics has so far been limited by unintentional bulk doping in TIs, and by the decreased stability of TI surface states at elevated temperatures15. The practical utility of the topologically enhanced spin–orbit torque has also been limited by the cryogenic temperatures at which known MTIs order4,14, although a recent report16 of interfacial ferromagnetism persisting to room temperature in an insulating FM (EuS)/TI heterostructure is promising in this respect. A substantial rise in the critical temperature of an MTI (by a factor of 3 to 90 K) due to proximity coupling to an adjacent antiferromagnet (AF) has recently been demonstrated in a heterostructure consisting of the metallic AF CrSb sandwiched between two MTIs17. Increased spin–orbit torque efficiency at heterojunctions between TIs and ferrimagnetic CoTb alloys containing antiferromagnetically coupled Co and Tb sublattices18 has also been reported. The later effect persists to room temperature, but with decreased efficiency enhancement15 at higher temperatures. 
However, research on using antiferromagnetism to achieve a role for topological materials in spintronics is still at an early stage, and many ideas have so far only been addressed theoretically. For example, the practical advantages of TIs over heavy-metal systems for spin–orbit torques are not yet established15. The forms of magnetism so far incorporated in MTIs remain fragile because they are of interfacial16,17 or dilute-moment character14. Other new ideas, beyond simply making TIs magnetic, are emerging at a rapid pace. In this article we review topological antiferromagnetic spintronics, the emerging field that is exploring the interplay between transport, topological properties in either momentum space or real space, and antiferromagnetic order. ## Dirac quasiparticles in antiferromagnetic heterostructures The roots of topological antiferromagnetic spintronics can be traced to studies of layered AFs of the SrMnBi2 type, which were reported to feature quasi-two-dimensional massive Dirac quasiparticles near the Fermi level19,20. These were associated with the observation of enhanced mobilities, similar to those in graphene. Manipulation of the Dirac quasiparticle current and the quantum Hall effect in a EuMnBi2 AF by an applied strong magnetic field has been demonstrated, with the effect of the field mediated by Eu sublattices21. Signatures of the two-dimensional massless Dirac cones were found in the infrared spectra of the antiferromagnetic superconductor BaFe2As2 (refs 22,23). As pointed out in ref. 24, TI phases are possible in AFs even though time-reversal symmetry $${\mathscr{T}}$$ is broken, and are protected instead by $${T}_{\frac{1}{2}}{\mathscr{T}}$$ where $${T}_{\frac{1}{2}}$$ is a half-magnetic-unit-cell translation operation, as we illustrate in panel a in the figure in Box 1. The proposed low-temperature AF candidate, GdPtBi, has not yet been confirmed as a TI by angle-resolved photoemission spectroscopy (ARPES), presumably due to the imperfect crystal-momentum resolution of the measurement25. A path of research related to topological superconductivity has demonstrated signatures of the coexistence of a two-dimensional TI, that is, the quantum spin Hall effect (see Fig. 1a), and a superconducting state in hole-doped and electron-doped antiferromagnetic monolayers of FeSe (ref. 26). FeSe is the metallic building block of the iron-based high-TC superconductors, and the combined effect of substrate strain, spin–orbit coupling and electronic correlations was shown to induce band inversion and quantum spin Hall effect edge states, as shown in Fig. 1a,b (ref. 26). Separately, quantum spin Hall effect states in an AF have also been predicted in honeycomb lattice systems27.41586_2018_136_Tab1_ESM.jpg The fortunate lattice constant match between the TI (Bi,Sb)2Te3 and the high-temperature AF CrSb has been exploited to grow epitaxial interfaces between these materials17,28. CrSb/TI (Bi,Sb)2Te3/AF CrSb trilayers exhibit cusps in the magnetoresistance, that presumably correspond to a topological phase transition of Dirac quasiparticles at the interfaces28. The phase transition from the quantum anomalous Hall state (QAHE, see panel c of the figure in Box 1 and Fig. 1c,e) to what is presumed to be an axion insulator (quantized electric polarization induced by magnetism24, Fig. 1d,e) was observed in an MTI/TI/MTI trilayer by reoerienting the exchange fields of the MTIs from ferromagnetic to antiferromagnetic29. 
The control of the QAHE and axion insulator states by the external magnetic field and electric gating yields very large magneto-/electroresistance changes h/e2 ~ 25.8 kΩ and ~GΩ, albeit at millikelvin temperatures29,30. ## Three-dimensional topological semimetal AFs Topological semimetal states arise when conduction and valence bands touch at discrete points, lines or planes in a bulk Brillouin zone at energies close to the Fermi level. The low-energy physics of topological semimetals can be governed by effective Dirac or Weyl equations11,13,35. Three-dimensional (3D) Dirac and Weyl quasiparticles in non-magnetic bulk systems have attracted attention because of reports of suppressed backscattering, measurements of exotic topological surface states and interest in unique topological responses such as low-dissipation axial currents8,11,36. These properties are thought to be responsible for experimental observations of chiral magnetotransport12,37, and strong magnetoresistance38,39, although the topological origin of these phenomena is not yet firmly established8. For instance, the strong magnetoresistance in WTe2 semimetals was originally explained on the basis on the carrier compensation in the tiny electron–hole pockets at the Fermi level40, and only later linked to the presence of Weyl fermions39. ### Topological metal–insulator transitions in 3D Dirac semimetal AFs In a system with time reversal $${\mathscr{T}}$$ and spatial inversion $${\mathscr{P}}$$ symmetries, the electronic bands are doubly degenerate, resulting in a low-energy Dirac Hamiltonian, $${{\mathscr{H}}}_{{\rm{D}}}({\bf{k}})$$. In its simplest form11,13,35, $${{\mathscr{H}}}_{{\rm{D}}}({\bf{k}})=\left(\begin{array}{cc}\hslash {v}_{{\rm{F}}}{\bf{k\cdot }}{\mathbf{\sigma} } & m\\ m & -\hslash {v}_{{\rm{F}}}{\bf{k\cdot }}{\mathbf{\sigma }}\end{array}\right),$$ (4) where σ is the vector of Pauli matrices, vF is the Fermi velocity, k = q − q0 is the crystal momentum measured from the Dirac point at q0 and m is the mass (in units of energy). The corresponding energy dispersion is E(k) = $$\pm \hslash {v}_{{\rm{F}}}\sqrt{{k}_{x}^{2}+{k}_{y}^{2}+{k}_{z}^{2}+{\left(\frac{m}{\hslash {v}_{{\rm{F}}}}\right)}^{2}}$$. The mass can be absent because of a crystalline symmetry, and in this case $${{\mathscr{H}}}_{{\rm{D}}}({\bf{k}})$$ describes the fourfold degenerate band touching11,13 of a 3D Dirac semimetal illustrated schematically in panel b in the figure in Box 1. In a 3D Dirac semimetal, the topological invariants and non-trivial surface states can be linked to the crystalline symmetry protecting the degeneracy41,42. The 3D Dirac semimetal state is not possible in FMs because $${\mathscr{T}}$$-symmetry breaking prevents the double band degeneracy. On the other hand, a topological crystalline 3D Dirac semimetal was predicted in an AF, namely in the orthorhombic phase of CuMnAs43,44. Here $${\mathscr{P}}$$ and $${\mathscr{T}}$$ symmetries are absent separately, but the combined $${\mathscr{P}}{\mathscr{T}}$$ symmetry ensures double band degeneracy over the whole Brillouin zone, as illustrated in Fig. 2b–d. In this case, the Dirac point is protected by $${\mathscr{P}}{\mathscr{T}}$$ symmetry together with an additional crystalline non-symmorphic symmetry, as we explain in Fig. 2e. The orthorhombic CuMnAs AF is an attractive minimal case for magnetic Dirac semimetals induced by band inversion, since only a single pair of Dirac points appears near the Fermi level of the ab initio band structure. 
Electron-filling-enforced semimetals with a single Dirac cone are also a possibility, as indicated theoretically in two-dimensional model AFs45. Novel effects have been predicted in topological Dirac semimetal AFs that are based on the possibility of controlling topological states by controlling only Néel vector orientation, not the presence or absence of antiferromagnetic order, and this can be accomplished using current-induced spin–orbit torques. The latter effect, discussed in detail by Železný et al.46 in this Focus issue, has been experimentally demonstrated in CuMnAs47. The coexistence of Dirac fermions and spin–orbit torques in CuMnAs implies a new phase transition mechanism, referred to as the topological metal–insulator transition44. The origin of the effect is in Fermi surface topology, which is sensitive to the changes in the magnetic symmetry upon reorienting the Néel vector, as explained in Fig. 2e,f. The transport counterpart of the topological metal–insulator transition is topological anisotropic magnetoresistance, which in principle can reach extremely large values44. The topological anisotropic magnetoresistance can be understood as a limiting case of crystalline anisotropic magnetoresistance. The effect is different in origin and presumably more favorable for spintronics than the metal–insulator transition observed in the pyrochlore iridate family, which is driven by combined correlation and external field effects48, or the extreme magnetoresistance observed in the AF topological metal candidate NdSb49. An antiferromagnetic Dirac nodal line semimetal has also been proposed44. Since the nodal lines were observed several electronvolts deep in the ab initio Fermi sea of tetragonal CuMnAs, the search is still on for more favorable candidate AF materials featuring nodal lines closer to the Fermi level. ### Weyl fermions in AFs When $${\mathscr{P}}$$ or $${\mathscr{T}}$$ symmetry, or both, is broken and the double band degeneracy is lifted, the touching points of two non-degenerate bands can form a 3D Weyl semimetal (panel d in the figure in Box 1). Fermi states in a Weyl semimetal are described by the Weyl Hamiltonian11,13: $${{\mathscr{H}}}_{{\rm{W}}}({\bf{k}})=\pm \hslash {v}_{{\rm{F}}}{\bf{k}\cdot }{\mathbf{\sigma} }.$$ (5) Weyl points act as monopole sources of Berry curvature flux and generate a topological charge defined by $${\mathscr{Q}}={\mathscr{C}}\left({k}_{z,{\rm{W}}}+\delta \right)-{\mathscr{C}}\left({k}_{z,{\rm{W}}}+\delta \right)=\frac{1}{2\pi }{\int }_{\delta S}{\rm{d}}^{2}k{\bf{n}}\cdot {\bf{b}}({\bf{k}}).$$ (6) Here δS is a small sphere surrounding the Weyl point at kz,W with the surface normal vector n, $${\mathscr{C}}$$ (equation (1)) refers to the plane slightly below and above the Weyl point, kz,W ± δ. The difference in integration area between the first and second forms of equation (6) is justified by Gauss’s theorem. In the vicinity of the Weyl point the Berry curvature takes the monopole form, b(k) = ±k/(2k3). Weyl points always come in pairs with opposite topological charges and in general do not rely on any specific symmetry protection. The only way to remove them is to annihilate two Weyl points with opposite topological charges. The 3D nature of the Weyl point is crucial here since the corresponding Weyl equation uses all three Pauli matrices. Consequently, any small perturbation that is expressed as a linear combination of Pauli matrices that form the basis of the 2 × 2 Hilbert space just shifts—and does not gap—the Weyl point. 
For example, for a perturbation of the form z , the dispersion is renormalized as E(k) = $$\pm \hslash {v}_{{\rm{F}}}\sqrt{{k}_{x}^{2}+{k}_{y}^{2}+{\left({k}_{z}+\frac{m}{\hslash {v}_{{\rm{F}}}}\right)}^{2}}$$. Magnetic Weyl semimetals have remained experimentally elusive for a long time, despite several promising antiferromagnetic candidates including pyrochlore irridates50 such as Eu2Ir2O751, or the YbMnBi2 AF, which was controversially suggested35 to be either a Weyl52,53 or a Dirac semimetal54,55. Recently, magnetic Weyl fermions were predicted56, and reported57 in Mn3Sn (see Fig. 3a), a non-collinear AF from the Heusler family that is potentially more relevant for metallic spintronics, due for example to the relatively high Néel temperature, 420 K (ref. 58). In Fig. 3b we show the measured ARPES overlaid with the ab initio calculated band structure in Mn3Sn. In Fig. 3b is the measured positive magnetoconductance, which is believed to be a signature of the chiral anomaly and Weyl fermions in condensed matter11,12,35. The chiral anomaly refers to a non-conservation of left- and right-handed Weyl quasiparticles in parallel electric and magnetic fields12. Figure 3d,e illustrate the surface-weighted density of states predicted by ab initio calculations, which exhibits the typical Fermi arc features (see also panel d in the figure in Box 1) together with trivial Fermi surface pockets56. Weyl semimetal states can also be realized in the paramagnetic and AF phase of GdPtBi12,37 by applying a magnetic field and, in contrast to Dirac semimetals, Weyl semimetals can in principle also exist in FMs59. The Mn3(Ge/Sn) AF was shown to host a large anomalous Hall effect (AHE), whose origin is discussed in the next section. ## Topological transport in AFs Until recently the AHE was viewed as a combined consequence of time-reversal symmetry breaking in an FM and spin–orbit coupling60. In the case of collinear AFs, either $${T}_{\frac{1}{2}}{\mathscr{T}}$$ symmetry or $${\mathscr{P}}{\mathscr{T}}$$ symmetry forces the Hall conductivity to vanish. Recent ab initio calculations61,62 inspired by earlier theoretical works63,64,65,66 have shown that time-reversal symmetry breaking by AF order can yield a finite Hall response in some non-collinear AFs, even those with zero net magnetization and even in the absence of spin–orbit coupling. The time-reversal symmetry breaking is manifested by a non-zero Berry curvature, as we show in Figs. 3f and 4a–d. The intrinsic contribution to the Hall conductivity, $${\sigma }_{{\rm{H}}}=\frac{1}{2}\left({\sigma }_{xy}-{\sigma }_{yx}\right)$$, depends only on the band structure of the perfect crystal and can be calculated from linear response theory60: $${\sigma }_{xy}=\frac{{e}^{2}}{h}{\int }_{{\rm{BZ}}}\frac{{\rm{d}}{\bf{k}}}{{(2\pi )}^{3}}\sum _{n}f({\bf{k}}){b}_{z}^{n}({\bf{k}}),$$ (7) where $${b}_{z}^{n}({\bf{k}})$$ is the z-component of the Berry curvature (equation (2)), f is the Fermi–Dirac distribution function and n is the band index. ### The AHE in non-collinear AFs The AHE was recently observed in the hexagonal non-collinear AFs Mn3Sn and Mn3Ge67,68,69, which have Weyl points close to the Fermi level. However, ab initio calculations of the intrinsic AHE in Mn3Ge, which predict a magnitude consistent with experiment, reveal that the dominant contribution to the AHE originates instead from avoided crossings in the band structure58, as shown in Fig. 3f. In Fig. 4e,f we show the observed AHE. 
For instance, σ xz  ~ 380 Ω−1 cm−1 and corresponds to a large effective emergent magnetic field $$\left|{{\bf{b}}}_{[010]}\right|$$ ~ 200 T (refs 67,68). We also note that a large AHE was achieved in the collinear AF GdPtBi by canting the staggered order70. The discovery of a large AHE in Mn3Sn and Mn3Ge, which are metals but have a relatively small density of states at the Fermi level, inspires a search for the quantized and dissipationless limits of anomalous transport in topological semiconducting/insulating AFs, for instance the QAHE32,71, or dynamical axion fields72. ### Topological Hall effect in AFs Real-space order parameter textures can be induced in AFs, and their presence can be detected by the so-called topological Hall effect. In this phenomenon, the role of the spin–orbit coupling is substituted by the chirality of the spin texture (see Fig. 4c). The effect of the corresponding fictitious magnetic field, $$\hat{{\bf{m}}}$$ (∂ x $$\hat{{\bf{m}}}$$ × ∂ y $$\hat{{\bf{m}}}$$), on the Bloch electrons generates a Hall response. The topological Hall effect can be experimentally distinguished from the AHE by, for example, analysing the disorder dependence73. However, making the distinction might be difficult in heterojunction systems, as was pointed out in studies of monolayer Fe deposited on an Ir(001) surface74. Non-coplanar AFs can have also a pronounced topological orbital moment due to the scalar spin chirality. For example, textures in Fe/Ir(001), Mn/Cu(111) or the γ-phase of FeMn alloy74,75,76, and their control by spin torques, might yield novel material functionalities. The Hall effect associated with the spin chirality was reported initially in the chiral spin liquids of the pyrochlore iridates (see Fig. 4d,g)77 and later in MnSi chiral antiferromagnetic alloys78,79. We note that the term ‘topological’ used to label the effect does not imply in this case a correspondence to a topological invariant. In contrast, a ferromagnetic skyrmion spin texture (next section) carries an integer topological charge, which is accompanied by a topological Hall effect ($$\left|{\bf{b}}\right|$$ ~ −13 T; ref. 80). In this case, the term topological refers to the association of the Hall response with a topological invariant. A distinct example of such a correspondence is the quantized topological Hall effect of a non-collinear magnet defined by the non-zero Chern number in k-space as proposed for the noncoplanar AF K0.5RhO2 (ref. 81) and shown schematically in Fig. 4c. ### Spin currents and torques in AFs While the AHE arises from the Berry curvature in momentum space, other important spintronic phenomena can be associated with Berry curvatures in different parameter spaces. For instance, the spin–orbit torkance tensor τ ij (ref. 82) is defined by the linear response relation T i  = τ ij E j , where $${\bf{T}}=\frac{{\rm{d}}{\bf{m}}}{{\rm{d}}t}$$ is the spin–orbit torque exerted on the magnetization m in a magnet subject to an applied electric field E. The intrinsic part of the spin–orbit torque can be rewritten in terms of a mixed Berry curvature, $${b}_{ij}^{\hat{{\bf{m}}}{\bf{k}}}={\hat{{\bf{e}}}}_{i}\cdot 2{\rm{Im}}{\sum }_{n}\left\langle {\partial }_{\hat{{\bf{m}}}}{u}_{{\bf{k}}n}\left|{\partial }_{{k}_{j}}{u}_{{\bf{k}}n}\right.\right\rangle$$, where e i denotes the ith Cartesian unit vector and m is a unit vector in the direction of magnetization82. 
A large spin–orbit torque in a topologically non-trivial insulating FM has been associated with the existence of monopoles with mixed Berry curvature. These are termed mixed Weyl points, as they correspond formally to a Weyl Hamiltonian $${\mathscr{H}}({\bf{k}},\hat{{\bf{m}}})$$ = $$\hslash {v}_{{\rm{F}}}\left({k}_{x}{\sigma }_{x}+{k}_{y}{\sigma }_{y}\right)+{v}_{\theta }\theta {\sigma }_{z}$$ in the mixed momentum–magnetization space82. (Here θ is the azimuthal angle of the magnetization.) The recent discovery of the spin–orbit torque and the prediction of a Dirac semimetal state in antiferromagnetic CuMnAs motivates a search for analogous dissipationless pronounced spin–orbit torques in insulating AFs. Additionaly, large spin Hall angles were also predicted for the Weyl semimetals83, and it was theoretically proposed that the spin Hall effect can also occur due to the breaking of the spin rotational symmetry in non-collinear AFs without the need for either spin–orbit coupling or spin chirality84. Finally, a topological spin Hall effect was predicted for skyrmions85, where the spin Hall response occurs even in the absence of spin–orbit coupling, in analogy with the above topological (charge) Hall effect. Antiferromagnetic skyrmionic crystals were predicted to have non-zero topological spin Hall effect, but vanishing topological Hall effect86. ## Antiferromagnetic skyrmions Magnetic skyrmions are non-collinear magnetization textures in which the spin quantization axis changes continuously over length scales that vary from a few nanometres to a few micrometres. For two-dimensional systems, the winding number, $${Q}^{(j)}=\frac{1}{4\rm{\pi} }\int {\rm{d}}x\,{\rm{d}}y{\hat{{\bf{m}}}}^{(j)}\cdot \left({\partial }_{x}{\hat{{\bf{m}}}}^{(j)}\times {\partial }_{y}{\hat{{\bf{m}}}}^{(j)}\right),$$ (8) of a magnetization texture measures the number of times the sphere of magnetization directions is covered upon integrating over space and must take integer values10,87. Here $$\hat{{\bf{m}}}$$ = m(x, y, z) is the normalized magnetization field in real space and $$\hat{{\bf{m}}}$$ (∂ x $$\hat{{\bf{m}}}$$ × ∂ y $$\hat{{\bf{m}}}$$) is the fictitious emergent magnetic field. Antiferromagnetic skyrmions can be visualized as two interpenetrating ferromagnetic skyrmions, where the index (j) = (A, B) labels the two antiferromagnetic sublattices, as shown in Fig. 5a. Microscopically, the skyrmionic magnetization modulation is caused by the Dzyaloshinskii–Moriya interaction of non-centrosymmetric crystals or due to the inversion asymmetry at the interfaces6. Remarkably, the Dzyaloshinskii–Moriya interaction in the bulk is more abundant in AFs than FMs88. By comparing to equations (1) and (6), we see that Q topologically protects skyrmion textures in real space, just as Weyl points and the QAHE state are protected in momentum space. The calculated energy barrier for skyrmion annihilation in discrete magnetic skyrmions is of the order of 0.1 eV in Fe/Co(111)89. Because this barrier is finite, the stability of skyrmions in experimental systems relies in part on other physical limitations, for example a combined effect of spin rotation and skyrmion diameter shrinking, rather than on topological protection itself89. Spintronics aspects of antiferromagnetic skyrmions90,91, namely their manipulation by an electrical current, have been discussed only recently, and only theoretically88,92,93,94. 
Micromagnetic simulations show that antiferromagnetic skyrmions move faster than ferromagnetic skyrmions, can be driven with lower current densities (jcrit. ~ 106–107 A cm−2) and, most importantly, move in straight lines, as explained in Fig. 5a,c88,92,93. Antiferromagnetic skyrmions were also recently studied in detail in synthetic AFs (for example in an Fe–Cu–Fe trilayer) in which skyrmions in the two ferromagnetic layers are coupled antiferromagnetically95,96, as shown in Fig. 5b. The topological spin Hall effect was suggested as a probe to monitor the antiferromagnetic skyrmions, as well as to generate a spin current86,96. While ferrimagnetic skyrmions were observed recently in GdFeCo97, antiferromagnetic skyrmions remain to be discovered. ## Perspectives Potential advantages of nearly and perfectly compensated antiferromagnetic materials for spintronics are discussed throughout this Focus issue. In this brief article, we have focused on possible cooperation between antiferromagnetism and topological properties in both momentum and real spaces. In many cases, novel phases that combine antiferromagnetism and topology have been discussed only very recently and remain experimentally elusive, for example antiferromagnetic TIs, antiferromagnetic Dirac semimetals, new QAHE systems and new systems that support skyrmions. As we have explained, these topological antiferromagnetic states can, once realized at room temperature, enable more stable nanospintronic devices that dissipate less energy and have new functionalities related to unique AF symmetries, or to the possibility of tuning by coupling to the antiferromagnetic order. Recently, signatures of a correlated magnetic Weyl semimetal were observed in the AF Mn3Sn (ref. 57). Fast topological memories, in which states are written by the topological spin–orbit torque in an antiferromagnetic TI or an antiferromagnetic Dirac semimetal44,82,98, and read out via the large magnetoresistance effects associated with band-gap tuning44,99,100, seem particularly attractive. Another possibility exploits the phase transitions of topological phases, as we have explained in the context of the CuMnAs Dirac AF in Fig. 2. Here one can foresee the possibility of a topological transistor operating at high frequencies and low current densities (see Fig. 6a)101. Another scenario starts by forming a p–n junction in a single layer of FeSe by gating to obtain a superconducting region and a region with a quantum spin Hall effect102. Coupling this system to one ferromagnetic electrode from each side would then localize Majorana modes at the interfaces as illustrated in Fig. 6b, providing an alternative realization of the Fu–Kane Majorana fermion proposal103 that could survive at higher temperatures26. The associated two-level states can function as quantum bits that encode information non-locally and are therefore robust against decoherence2. Many of the novel effects we have discussed follow directly from AF symmetries and cannot be realized in FMs, for instance (1) magnetism combined with the quantum spin Hall effect, and superconductivity, and (2) magnetism combined with Dirac semimetal phases. The conditions for a good Dirac quasiparticle in AF spintronics have recently been carefully specified13. Further afield, strong electronic correlations introduce additional challenges in finding and realizing topological phases, but they generate even richer phase diagrams and their interplay with topology represents a very active area of research30,48,50. 
An example of a system in which the interesting effects are mostly established is the non-collinear AHE AFs Mn3Sn and Mn3Ge. The sign and magnitude of their AHE depends on the non-collinear spin texture orientation. This, together with the demonstration of the possibility of manipulating the non-collinear spin texture by a spin torque104, can allow for memory devices in non-collinear AFs, with electrical read-out via the AHE, as illustrated in Fig. 4e. Moreover, optical and thermal counterparts of the d.c. AHE should be present in non-collinear AFs105108, opening the prospect of antiferromagnetic topological opto-spintronic and spin-caloritronic devices. The skyrmion might represent the smallest micromagnetic object that can store information, short of truly quantum atomic or molecular spins6. For instance, in the skyrmionic racetrack memory shown in Fig. 6c, the magnetic information is stored in skyrmions instead of magnetic domains separated by domain walls10. Antiferromagnetic skyrmions can be driven by the spin–orbit torque at lower current densities, and thanks to their stability have advantages over domain walls, especially in the curved parts of the race track. To conclude, beyond providing an interesting new context in which to identify and understand the physical consequences of topological properties of momentum-space bands or real-space textures, topological antiferromagnetic spintronics has the tantalizing possibility of converting important fundamental advances into truly valuable new applications of quantum materials. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Change history • ### 30 May 2018 In the version of this Review Article originally published, three of the citations corresponded to the wrong references. Ref. 16 should have corresponded to Nature 533, 513–516 (2016), ref. 17 to Nat. Mater. 16, 94–100 (2016), and ref. 18 to Phys. Rev. Appl. 6, 054001 (2016). ## References 1. 1. Sarma, S. D., Freedman, M. & Nayak, C. Majorana zero modes and topological quantum computation. npj Quant. Inf. 1, 15001 (2015). 2. 2. Beenakker, C. W. J. & Kouwenhoven, L. A road to reality with topological superconductors. Nat. Phys. 12, 618–621 (2016). 3. 3. Hasan, M. Z. & Kane, C. Colloquium: Topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010). 4. 4. Fan, Y. & Wang, K. L. Spintronics based on topological insulators. SPIN 06, 1640001 (2016). 5. 5. Wang, H. et al. Surface-state-dominated spin–charge current conversion in topological-insulator–ferromagnetic-insulator heterostructures. Phys. Rev. Lett. 117, 076601 (2016). 6. 6. Soumyanarayanan, A., Reyren, N., Fert, A. & Panagopoulos, C. Spin–orbit coupling induced emergent phenomena at surfaces and interfaces. Nature 539, 509–517 (2016). 7. 7. Pesin, D. A. & MacDonald, A. H. Spintronics and pseudospintronics in graphene and topological insulators. Nat. Mater. 11, 409–416 (2012). 8. 8. Liang, T. et al. Ultrahigh mobility and giant magnetoresistance in the Dirac semimetal Cd3As2. Nat. Mater. 14, 280–284 (2014). 9. 9. Wu, J., Liu, J. & Liu, X. J. Topological spin texture in a quantum anomalous Hall insulator. Phys. Rev. Lett. 113, 136403 (2014). 10. 10. Fert, A., Cros, V. & Sampaio, J. Skyrmions on the track. Nat. Nanotech. 8, 152–156 (2013). 11. 11. Burkov, A. A. Topological semimetals. Nat. Mater. 15, 1145–1148 (2016). 12. 12. Felser, C. & Yan, B. Weyl semimetals: magnetically induced. Nat. Mater. 15, 1149–1150 (2016). 13. 13. 
[–] 2 points3 points  (0 children) I think it’s high time we go truly paperless and allow dynamic layout navigation based on semantics. Wouldn’t have to worry about flipping back to earlier pages if there’s a literal graph-edge connecting the two, a swipe away from both appearing adjacently.

[–] 0 points1 point  (0 children) I’d mention that comics can be expensive, but I grew up in the 90s, so that’s just a weird false notion my brain hangs onto. (Okay okay, I’m sure they can still be expensive. They’re not quite beanie babies.)

[–] 0 points1 point  (0 children) lol, yeah. I’m like 16k at best. Shows what I know.

[–] 1 point2 points  (0 children) Oh, that’s the south pole: you can see the hexagon.

[–] 3 points4 points  (0 children) It’s just that the go board is a 19×19 grid, and it’s also sometimes mentioned that it has 361 intersections. —That said, learning to multiply numbers quickly would help: rapid estimation of total areas lets you know who’s winning, which is vital to play at single-digit-kyu and above.

[–] 16 points17 points  (0 children) Why not present it as (9²+19²/22)^¼? [; (9^{2} + \frac{19^{2}}{22})^{\frac{1}{4}} ;] (I only knew 19² = 361 because of go.)

[–] 1 point2 points  (0 children) Amen to that! Computers could be dizzyingly fast if software weren’t so bloated. There’s no reason that any software (short of asset-heavy stuff like video games) shouldn’t be ready in less time than it takes to remove your finger from the screen/mouse button/key. I recently turned off file indexing in Windows (that’s what directory structures are for). It’s so nice having a machine that doesn’t spontaneously heat up for no discernible reason.

[–] 1 point2 points  (0 children) (I hate it.)

[–] 0 points1 point  (0 children) Quantum physics is not unintuitive; it’s human intuition that makes no sense. The single most well established and understood realm of human knowledge, bar none, is quantum physics. Given how well our understanding of quantum physics matches experiment and even macroscopic, casual observation, it’s an extraordinary stretch of human hubris to consider macroscopic phenomena, understand how they emerge naturally from quantum phenomena, then decide that these human-intuition emergent phenomena must also continue to exist at sub-quantum scales without any mechanism in place for them to ever arise. The reason that quantum systems have wavelike properties is that there is fundamentally not enough information to decide “which” actual path a quantum particle takes, in such a way that all the infinite possibilities interact with each other, able to cancel out or reinforce. That’s a “weird” thing, but exhaustively well established. It’s these summed-together paths, akin to molecules of water in the ocean, that act analogously to the media through which everyday sound waves propagate. It’s only analogous, though, because experiment and theory don’t support actual discrete sub-quantum “particles”, per se. Or rather, they don’t support them having independent existence that could ever be observed directly. Also, it doesn’t really match the specifics of the kind of waves a “sea of particles” gives you—you’re adding together complex numbers, then using the square of the magnitude to talk about how it actually maps to observation. What I’m getting at is this: the physical manifestations of what we call “information” are fully described within the framework of the single most accurate and precise piece of human knowledge in all history.
We have now explained what (physical) information is. There’s no more experimental/observational probing to be done for that “deep” question. To constrain the actuality of the universe to the specifics of a particular species’ brain topology requires extraordinary evidence.

[–] 5 points6 points  (0 children) Same day Minecraft left βeta.

[–] -1 points0 points  (0 children) Maybe it’s just the article writer, but it’s the idea that these base-level changes are “subtle” that reflects everything wrong with the blind alleyway that UI designers are stuck in.

[–] 5 points6 points  (0 children) There’s a nice word for the phenomenon: calumny. The definition requires malice, but what else can you call such inhuman disregard? It’s happened constantly throughout history. We call it prejudice when it happens to members of groups, but it’s also the tool of (too many) politicians and (too many) competing corporations. If calumny sticks, it really sticks, defying all attempts at correction and rational discussion, or matters of degree and context.

[–] 22 points23 points  (0 children) “New” media doesn’t fix the problem, either. Hence reddit’s Boston bomber fervor.

[–] 0 points1 point  (0 children) I love mathematics as a branching graph. One might consider, though, that you could start your branching from a different location than the historical one. I’m thinking of getting a rigorous handle on math by starting from a couple axioms and just assigning labels/notations/names to particular lambda expressions. I want to use something machine-readable, too (e.g. json/yaml, or a bespoke graph database editor), so I don’t even have to decide on specifics. But I feel like attacking it from a bit of a weird angle, defining rational numbers in terms of a cartesian lattice so I can navigate it with simple matrix multiplication, isomorphic to the branching of the Stern-Brocot tree. I want to define negative numbers in terms of complex numbers, and calculus (insofar as possible) without talking about infinity, per se (so just a notational tweak on limits, emphasizing strictly “shrinking” ranges. Same thing, mildly different parallax viewpoint). I have an inkling of an intuition regarding infinite series and products, regarding the semantics of periodicity in the results they produce, though I still just need to read the established literature on the subject as well. But I feel like attacking them from the “periodicity” angle instead of the “limits” angle might lend itself to better intuition for things like complex analysis. And yes, I know this is all known stuff, and I’m being weird about it, but my point is—if human history had taken a different path, our mathematical intuitions could be very different. If we’d come up with cartesian coordinates before the concept of “debt”, we might have called addition “east” and subtraction “west”, and not been weirded out by inverted addition. If we’d long lived on boats upon the waves of the sea, and had never seen land, we might have discovered Fourier analysis before discovering polynomials, and Pilot wave theory might be the de facto interpretation of quantum mechanics.

[–] 0 points1 point  (0 children) There are mathematical ways of describing regular orbits that are similar to describing a global position. Even in >2-body systems, some regularity is known or easily (enough) calculable with sufficient data.
Planet-scale orbits have shaken out all their chaos over astronomical timescales, and crowded “low-ish altitude” orbital neighborhoods will almost certainly force allowed orbits into highly predictable, regular, sensible, safe orbital vectors. Residents of these orbital neighborhoods will likely be required to maintain their prescribed orbit (to avoid collisions), so someone trying to find their station won’t even have to have “the latest data” of where they have been in order to calculate “current position”. As for non-regular orbits or transits, it’s all the same types of orbital description vectors. Some such paths/orbits will require the latest data, or pre-planned “stay within this vectored spacetime distance” paths. The point is, if they want to be able to receive packages, they have plenty of ways of keeping a courier system/government informed, and this kind of information can be imparted in standardizable ways. But as for “human readable” addresses based on orbital mechanics, it probably makes sense to use the semantic reasoning/properties for a particular orbit. Hypothetical orbits (i.e. “possible addresses”) have certain rational-number relations with other such orbits, and certain Δv differences. A set of “standard” orbits that apply to any celestial body could be interpolated between each other in “low-information” ways—that is, with as few symbols as necessary. These standard orbits could be tied directly to the semantics of what makes each “sensible”. Even non-rational relations (for avoiding certain resonances, or whatever) could be described by low-information rational-number ratios to, say, the golden ratio (“the most irrational number”) or some algebraic expression related to “ideal” orbital-neighborhood divvying. So, for example, the first obvious piece of information for such an address would be the primary body being orbited. For some types of “weird” orbits (like those related to these), two bodies may be specified, along with other unique-to-the-phenomenon info. Then we have standard/special orbital planes, like equatorial—semantically important for reaching efficiently from the surface; ecliptic—for traveling out into the greater solar system; polar—for serving on-surface needs, and as the opposite extreme from equatorial; and regularly-changing planes—like sun-synchronous—only useful for particular needs involving the orbited body or low orbit solar power stations, or “24hr shift” stations. Body-aligned/described planes will be more important at lower altitudes, and system-aligned planes will be vital higher up. In particular, a single unified “longitude” vector for every orbital neighborhood would allow intuitive understanding about the approximate angle of any inter-body approach (for a given time of “year”). If the axis of your target orbit around a jovian moon is strongly tilted to/from the same direction you’re approaching Jupiter from, then a casual mention of the “address” of the orbit, along with a general awareness of current planetary positions, will make it obvious that “low energy” approaches (i.e. lacking nuclear propulsion) will either be expensive or have to use clever mechanics. (Also, don’t get too close to Jupiter in any case without heavy radiation shielding, and don’t take lower orbits with their massive Δv requirements.)

[–] -1 points0 points  (0 children) GPS in general is automatically limited in accuracy by the OS when the screen is off.

[–] 1 point2 points  (0 children) They’ve already stated it’s more for rural areas.
It’s a huge untapped market that goes naturally with how satellite internet works. Imagine being really connected to the rest of the world’s internet in a cabin deep in the middle of nowhere, Alaska, or a smaller island in the South Pacific.

[–] 1 point2 points  (0 children) Yeah, I’ve had mine that way for years.

[–] 0 points1 point  (0 children) That looks like the right answer to me. Weird how close it is to 8.5×10⁻⁶, eh? It’s not, though. My calculator shows a rational result with 2 digits in the numerator and 7 in the denominator, and it’s not a round number. Makes sense, since the top is just 7², and the bottom is so (additively) close to a smallish power of a small prime.

[–] 0 points1 point  (0 children) You may be getting the order of operations incorrect. Make sure you divide the entire numerator by the entire denominator.

[–] 3 points4 points  (0 children) Amen. Also, while it’s terrific that there is one web standard that browsers are meant to adhere to, it feels icky to develop general purpose software within such a monolithic environment. I also despise the “everything is a nail” garbage that comes with it—I frequently can’t load a static document web page without allowing javascript to run.

[–] 2 points3 points  (0 children) I wish apps weren’t “glued together” with the gluey stuff—it makes devs think UI changes constitute substantive versioning. I say tag all interfaces semantically and hand the interfacing over to a separate program. Then you could get crazy stuff for free, like a toolbar on your smart phone for your desktop application.

[–] 0 points1 point  (0 children) I like design, but I don’t like Apple’s design. If something’s got a big blank area, I prefer it to be overtly flat and sharp edged instead of curved. And yes, I love the design of the Cybertruck. That said, I also value what something can do over what it was designed to do. I only like walled-garden software if I’m not trying to accomplish very much with it.

[–] 9 points10 points  (0 children) Personally, I’m against web apps in general.

[–] 1 point2 points  (0 children) I’d just do a piecewise function, lol.
# Resistive touchscreen to VGA display with RP2040 (Raspberry Pi Pico)

## Project Zip and Demo

Project zip available here.

## The resistive touchscreen

As described in this document, the resistive touchscreen is a layered 2D device. It is composed of a top PET layer with a resistive coating separated from a bottom resistive layer by spacers. When the user presses on the top of the screen, contact is made between the top and bottom resistive coatings, completing the circuit. The measured resistance through the completed circuit depends on the position of the press, enabling determination of the precise location of the press in both dimensions.

Four pins are required in order to drive this screen, and the functions of those pins must be switched dynamically. Each pin is connected to a conductive pad on one of the four sides of the screen (+x, -x, +y, -y). In order to read the x-position of a press, y+ is set high, y- is set to ground, x- is left floating (set to input), and the ADC is used to measure x+. In order to measure the y-position of the press, x+ is set high, x- is set to ground, y- is left floating (set to input), and the ADC is used to measure y+. So, all pins must be switched between output and input.

## Code organization

There is a single timer interrupt running at 2 kHz. Depending on the value of a switching variable, this ISR configures and reads the x or y coordinate of a touch event and draws a pixel to the VGA display at the appropriate location. For more information on the VGA system, see here.
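To make the pin-role switching concrete, here is a minimal sketch using the Raspberry Pi Pico C SDK. The pin numbers, the 20 µs settling delay, and the variable names are illustrative assumptions, not taken from the project zip, and the VGA drawing step is left as a comment.

```c
#include "pico/stdlib.h"
#include "hardware/adc.h"
#include "hardware/gpio.h"

#define XP_PIN 26   // x+ pad (ADC input 0) -- assumed wiring
#define XM_PIN 14   // x- pad               -- assumed wiring
#define YP_PIN 27   // y+ pad (ADC input 1) -- assumed wiring
#define YM_PIN 15   // y- pad               -- assumed wiring

static volatile uint16_t touch_x, touch_y;
static volatile bool read_x_next = true;   // the "switching variable"

// Drive the y plane (y+ high, y- ground), float x-, and sample x+ with the ADC.
static uint16_t sample_x(void) {
    gpio_init(YP_PIN); gpio_set_dir(YP_PIN, GPIO_OUT); gpio_put(YP_PIN, 1);
    gpio_init(YM_PIN); gpio_set_dir(YM_PIN, GPIO_OUT); gpio_put(YM_PIN, 0);
    gpio_init(XM_PIN); gpio_set_dir(XM_PIN, GPIO_IN);  gpio_disable_pulls(XM_PIN);
    adc_gpio_init(XP_PIN);          // hand x+ over to the ADC
    adc_select_input(0);            // GPIO26 = ADC input 0
    busy_wait_us(20);               // let the resistive divider settle (guess)
    return adc_read();              // 12-bit x position
}

// Same procedure with the roles of x and y exchanged.
static uint16_t sample_y(void) {
    gpio_init(XP_PIN); gpio_set_dir(XP_PIN, GPIO_OUT); gpio_put(XP_PIN, 1);
    gpio_init(XM_PIN); gpio_set_dir(XM_PIN, GPIO_OUT); gpio_put(XM_PIN, 0);
    gpio_init(YM_PIN); gpio_set_dir(YM_PIN, GPIO_IN);  gpio_disable_pulls(YM_PIN);
    adc_gpio_init(YP_PIN);          // hand y+ over to the ADC
    adc_select_input(1);            // GPIO27 = ADC input 1
    busy_wait_us(20);
    return adc_read();              // 12-bit y position
}

// 2 kHz repeating-timer callback: alternate x and y reads, as the ISR above does.
static bool touch_timer_cb(struct repeating_timer *t) {
    (void)t;
    if (read_x_next) touch_x = sample_x();
    else             touch_y = sample_y();
    read_x_next = !read_x_next;
    // A real implementation would scale (touch_x, touch_y) to screen coordinates
    // here and draw the corresponding pixel into the VGA framebuffer.
    return true;                    // keep the timer running
}

int main(void) {
    stdio_init_all();
    adc_init();
    static struct repeating_timer timer;
    add_repeating_timer_us(-500, touch_timer_cb, NULL, &timer);  // 500 us period = 2 kHz
    while (true) tight_loop_contents();
}
```

Alternating x and y reads inside a single repeating-timer callback mirrors the switching-variable scheme described above and keeps each interrupt short.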
Journal of the Korean Ceramic Society 1995;32(1): 57.

Preparation and Characterization of Alumina Thin Film by Sol-Gel Method (III): Preparation of Anti-Reflective Coating Glass

Jae-Ho Lee and Se-Young Choi, Department of Ceramic Engineering, Yonsei University

ABSTRACT The coating conditions for a reproducible anti-reflective coating film and the light-transmittance characteristics of the prepared anti-reflective coating glass were investigated as a study toward the preparation of single-layer anti-reflective coating glass. When coated with the sol whose solvent was substituted with ethanol and to which 0.1 mol HNO3 was added, the coated glass showed a minimum refractive index of 1.464, a light transmittance of 94.2% at the 550 nm standard wavelength (3.2% higher than that of the parent glass), and a lowered reflectance over the entire visible wavelength range. The refractive index reached its minimum at a sol concentration of 1.0 mol per 100 mols of water; the higher the sol concentration, the higher the refractive index, resulting in a decrease of the light transmittance. The conditions for producing the reproducible anti-reflective coating on glass with the maximum transmittance of 94.2% were a withdrawal speed of 4 cm/min and a heat treatment at 400°C for 1 hour, resulting in a film thickness of 94 nm.

Key words: Anti-reflective coating, Sol-gel, Withdrawal speed, Refractive index, Transmittance
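As a quick consistency check (not part of the original abstract), the reported film thickness matches the standard quarter-wave condition for a single-layer anti-reflective coating at the design wavelength:

$\displaystyle{ d = \frac{\lambda_0}{4\,n_{\mathrm{film}}} = \frac{550\ \mathrm{nm}}{4 \times 1.464} \approx 94\ \mathrm{nm} }$

For comparison, the ideal single-layer index would be the square root of the substrate index (roughly $\sqrt{1.52} \approx 1.23$ if one assumes a typical soda-lime glass), which is well below the 1.464 achieved here; that gap is consistent with the modest 3.2% transmittance gain reported above.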
## Wednesday, June 20, 2007 ... //

### Tanmay Vachaspati: black stars & there are no black holes

A report on this blog about George Chapline's colloquium remains the #1 hit if you Google search for black holes don't exist. It is a rather popular albeit untrue sentence that many users want to see. ;-) George is a friend of The Reference Frame but his black hole ideas unfortunately don't make any sense. In the article under the previous link, we have explained why the event horizon - a "red" hypersurface in space defined as the boundary separating a causally disconnected region of spacetime (the dark blue triangle, the interior of the black hole) - is created long before the black hole reaches equilibrium and long before it starts to evaporate. In fact, the place where the horizon appears for the first time (the lower portion of the red line) looks completely ordinary and the people who live there don't have any tools to figure out whether they are already behind the horizon or not: they think that they are inside an ordinary star. If they had such tools, the tools would have to be extremely non-local, require superluminal propagation of signals, or involve a time machine. At that moment - when local physics still looks completely ordinary - it is already decided that the future curvature of spacetime creates a causally disconnected region because the evolution of spacetime according to Einstein's equations will inevitably lead to a spacetime whose causal diagram is depicted above: a spacetime with a causally disconnected black hole interior.

We can also prove that no plausible modification of Einstein's equations that keeps them consistent with observations can remove the conclusion about the creation of the event horizon. This is a rather trivial conclusion of classical general relativity that most students with an A from general relativity will be able to make. It is extremely robust and whether or not we can test it "directly" is secondary. The equations of general relativity have been verified in other, related experiments, and for everything else we need a solid calculation that is actually more reliable than the experiments, even though many people who are not quite sure about the consistency of mathematics and its relevance for the real world have irrational problems with this statement. ;-) I always wonder whether the people who don't trust mathematical derivations believe that they would get killed if they jumped from a skyscraper. Have they ever experimentally tested this assertion?

New Scientist Nude Socialist has just promoted a theory of Tanmay Vachaspati from Case Western Reserve University, Ohio. In his 2-page paper, he argues that black holes are never formed in the first place. Instead, the collapse stops in a stage that he calls a "black star" and he even proposes that a new, non-black-hole-like kind of collision of these black stars is responsible for gamma ray bursts.

Now, I find the notion that quantum gravity regulates black holes as something that looks like a black hole but is microscopically just another "regular" object - something I called "not quite black holes" - to be a legitimate paradigm. But of course I know why all qualitative conclusions about black hole dynamics will continue to hold when the classical approximation of general relativity is valid, i.e. whenever the black holes are large. What we know today goes well beyond the classical approximation of black holes. We can calculate the entropy of large classes of black holes arbitrarily accurately, among other things.
We simply know that these things are correct. It's a matter of doing the math right. It is not hard to read the whole of Vachaspati's paper, and it obviously makes no sense. Does he assume some novel quantum gravity effects? No, he is just talking about classical general relativity. We can easily show that event horizons are inevitably formed in this picture. It is a straightforward exercise for those who know the technology. Of course, it is an uncertain, mysterious sea of dragons for those who don't. We also observe (the effects of) black holes in telescopes - for example one at our galactic center whose mass counts in millions of solar masses - but I won't hide that the theoretical derivation of their existence from other experimentally known data seems even more robust than the direct observations to me (and others).

Vachaspati - whose list of former co-authors includes Mark Trodden and Lawrence Krauss, among others - rejects this result, but he doesn't seem to give even a glimpse of an argument why the result should be different - except for invoking things that obviously don't occur, such as a new kind of "pre-Hawking superfast radiation". Moreover, the only figure that is included in his short paper is a standard Penrose diagram for a Schwarzschild black hole. It doesn't look like he has gotten rid of the horizon. Quite on the contrary. It is quite nicely seen on the figure: it's the northwestern diagonal line.

Nevertheless, this paper was accepted by Physical Review D, which is why Nude Socialist happily describes it as science: certain papers are simply vastly more interesting for the journalists than others. Moreover, the Nude Socialist formulates the article in such a way that Vachaspati's weird paper must surely be very important and 't Hooft and Giddings are just frozen ultraconservative frogs who inhibit the "progress". Thousands of readers will buy it. New Scientist presents Vachaspati as a hero and for thousands of stupid readers, it's simply enough to become convinced. Most articles about Vachaspati are located in Indian media, which is no coincidence.

Also, in his CCNET, Benny Peiser gave it a title "Black hole denier: another scientific consensus in trouble". Benny is a smart Gentleman but it would be dishonest not to say that his title is significantly less smart. If the real driving force of his climate skepticism were a general tendency to fight against anything that others think regardless of the existence of a rational reason, I couldn't agree with him.

Classical general relativity is a settled theory and it is extremely difficult and probably impossible to invent a description that would - at least barely - agree with the same experimental tests but that would be able to stop event horizons from forming. Also, quantum effects can be neglected in the case of large black holes and the alternative black hole physicists even seem to agree with this conclusion.

There is no scientific consensus about the existence of black holes. The people who understand general relativity and its justification know that black holes must exist while those who don't understand general relativity don't know whether there are black holes and most of them probably think that black holes don't exist, at least in the privacy of their homes. I guess that the second group includes a majority of the scientific community. This has nothing to do with consensus; it is about knowledge, talent, and expertise.
So I would still prefer the label "ignorant" instead of "denier" for any person who studies gravitational physics but is unable to make this simple conclusion even in 2007. It would be foolish to demonize such people because ignorance is the primordial state of affairs. And that's the memo.

#### snail feedback (2):

Hi. I am in the category of those who don't understand general relativity, don't know whether there are black holes, and think that black holes don't exist. I found Vachaspati's paper to be coherent given my limited knowledge. Although I assume your knowledge of physics is vastly superior to my own, I find your counterarguments somewhat unsatisfying, and they don't seem to address arguments actually made in the paper. For example, Vachaspati first demonstrates that the event horizon doesn't form in finite time using pure general relativity in the frame of reference of the observer, to which you respond by pointing out that quantum mechanics can be ignored for large black holes. Vachaspati only brings in quantum mechanics in order to demonstrate that the addition of quantum mechanics does not contradict his initial purely classical argument. It's trivial to demonstrate with classical general relativity that, to an observer watching from the outside, black holes do not form. Either of two complementary arguments is sufficient (both from the point of view of an external observer such as on Earth).

1. Time dilation approaches infinity at the event horizon as it (almost) forms, thus it does not form in finite time.

2. Space dilation approaches infinity at the event horizon as it forms, thus the distance to the (forming) event horizon approaches infinity. This argument is so counter-intuitive that only someone who understands general relativity can visualize it.

As for physical evidence, I am aware of the existing evidence, but I see no reason why we would be able to distinguish between a real black hole and one of Vachaspati's black stars. Both have nearly identical light bending properties, and both do a pretty good job of preventing light from escaping. Since the argument is simple and compelling, I would expect the counterargument should also be somewhat comprehensible to mere mortals such as myself. Can you provide us with, or refer us to, a real counterargument?
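For reference, the time-dilation statement invoked in the comment is the standard Schwarzschild result (a textbook formula, not something derived in the post): for a static observer at areal radius $r$ outside a mass $M$,

$\displaystyle{ \frac{d\tau}{dt} = \sqrt{1 - \frac{r_s}{r}}, \qquad r_s = \frac{2GM}{c^2} }$

so the rate of proper time relative to a distant clock goes to zero as $r \to r_s$. This is the sense in which a distant observer never sees the horizon form in finite coordinate time, even though an infalling observer crosses it in finite proper time; that distinction is exactly the point of tension between the post and the comment.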
• ### The contribution of faint AGNs to the ionizing background at z~4(1802.01953) Feb. 6, 2018 astro-ph.GA Finding the sources responsible for the hydrogen reionization is one of the most pressing issues in cosmology. Bright QSOs are known to ionize their surrounding neighborhood, but they are too few to ensure the required HI ionizing background. A significant contribution by faint AGNs, however, could solve the problem, as recently advocated on the basis of a relatively large space density of faint active nuclei at z>4. We have carried out an exploratory spectroscopic program to measure the HI ionizing emission of 16 faint AGNs spanning a broad U-I color interval, with I~21-23 and 3.6<z<4.2. These AGNs are three magnitudes fainter than the typical SDSS QSOs (M1450<~-26) which are known to ionize their surrounding IGM at z>~4. The LyC escape fraction has been detected with S/N ratio of ~10-120 and is between 44 and 100% for all the observed faint AGNs, with a mean value of 74% at 3.6<z<4.2 and -25.1<M1450<-23.3, in agreement with the value found in the literature for much brighter QSOs (M1450<~-26) at the same redshifts. The LyC escape fraction of our faint AGNs does not show any dependence on the absolute luminosities or on the observed U-I colors. Assuming that the LyC escape fraction remains close to ~75% down to M1450~-18, we find that the AGN population can provide between 16 and 73% (depending on the adopted luminosity function) of the whole ionizing UV background at z~4, measured through the Lyman forest. This contribution increases to 25-100% if other determinations of the ionizing UV background are adopted. Extrapolating these results to z~5-7, there are possible indications that bright QSOs and faint AGNs can provide a significant contribution to the reionization of the Universe, if their space density is high at M1450~-23. • ### The Lyman Continuum escape fraction of faint galaxies at z~3.3 in the CANDELS/GOODS-North, EGS, and COSMOS fields with LBC(1703.00354) March 1, 2017 astro-ph.GA The reionization of the Universe is one of the most important topics of present day astrophysical research. The most plausible candidates for the reionization process are star-forming galaxies, which according to the predictions of the majority of the theoretical and semi-analytical models should dominate the HI ionizing background at z~3. We aim at measuring the Lyman continuum escape fraction, which is one of the key parameters to compute the contribution of star-forming galaxies to the UV background. We have used ultra-deep U-band imaging (U=30.2mag at 1sigma) by LBC/LBT in the CANDELS/GOODS-North field, as well as deep imaging in COSMOS and EGS fields, in order to estimate the Lyman continuum escape fraction of 69 star-forming galaxies with secure spectroscopic redshifts at 3.27<z<3.40 to faint magnitude limits (L=0.2L*, or equivalently M1500~-19). We have measured through stacks a stringent upper limit (<1.7% at 1sigma) for the relative escape fraction of HI ionizing photons from bright galaxies (L>L*), while for the faint population (L=0.2L*) the limit to the escape fraction is ~10%. We have computed the contribution of star-forming galaxies to the observed UV background at z~3 and we have found that it is not enough to keep the Universe ionized at these redshifts, unless their escape fraction increases significantly (>10%) at low luminosities (M1500>-19). 
We compare our results on the Lyman continuum escape fraction of high-z galaxies with recent estimates in the literature and discuss future prospects to shed light on the end of the Dark Ages. In the future, strong gravitational lensing will be fundamental to measure the Lyman continuum escape fraction down to faint magnitudes (M1500~-16) which are inaccessible with the present instrumentation on blank fields. • ### The role of quenching time in the evolution of the mass-size relation of passive galaxies from the WISP survey(1604.00034) March 31, 2016 astro-ph.GA We analyze how passive galaxies at z $\sim$ 1.5 populate the mass-size plane as a function of their stellar age, to understand if the observed size growth with time can be explained with the appearance of larger quenched galaxies at lower redshift. We use a sample of 32 passive galaxies extracted from the Wide Field Camera 3 Infrared Spectroscopic Parallel (WISP) survey with spectroscopic redshift 1.3 $\lesssim$ z $\lesssim$ 2.05, specific star-formation rates lower than 0.01 Gyr$^{-1}$, and stellar masses above 4.5 $\times$ 10$^{10}$ M$_\odot$. All galaxies have spectrally determined stellar ages from fitting of their rest-frame optical spectra and photometry with stellar population models. When dividing our sample into young (age $\leq$ 2.1 Gyr) and old (age $>$ 2.1 Gyr) galaxies we do not find a significant trend in the distributions of the difference between the observed radius and the one predicted by the mass-size relation. This result indicates that the relation between the galaxy age and its distance from the mass-size relation, if it exists, is rather shallow, with a slope alpha $\gtrsim$ -0.6. At face value, this finding suggests that multiple dry and/or wet minor mergers, rather than the appearance of newly quenched galaxies, are mainly responsible for the observed time evolution of the mass-size relation in passive galaxies. • ### The Spitzer-IRAC/MIPS Extragalactic survey (SIMES) in the South Ecliptic Pole field(1602.00892) Feb. 2, 2016 astro-ph.GA We present the Spitzer-IRAC/MIPS Extragalactic survey (SIMES) in the South Ecliptic Pole (SEP) field. The large area covered (7.7 deg$^2$), together with one of the lowest Galactic cirrus emissions in the entire sky and a very extensive coverage by Spitzer, Herschel, Akari, and GALEX, make the SIMES field ideal for extragalactic studies. The elongated geometry of the SIMES area ($\approx$4:1), allowing for a significant cosmic variance reduction, further improves the quality of statistical studies in this field. Here we present the reduction and photometric measurements of the Spitzer/IRAC data. The survey reaches a depth of 1.93 and 1.75 $\mu$Jy (1$\sigma$) at 3.6 and 4.5 $\mu$m, respectively. We discuss the multiwavelength IRAC--based catalog, completed with optical, mid-- and far--IR observations. We detect 341,000 sources with F$_{3.6\mu m} \geq 3\sigma$. Of these, 10% have an associated 24 $\mu$m counterpart, while 2.7% have an associated SPIRE source. We release the catalog through the NASA/IPAC Infrared Science Archive (IRSA). Two scientific applications of these IRAC data are presented in this paper: first we compute integral number counts at 3.6 $\mu$m. Second, we use the [3.6]--[4.5] color index to identify galaxy clusters at z$>$1.3. We select 27 clusters in the full area, a result consistent with previous studies at similar depth. 
• ### The Spitzer Archival Far-InfraRed Extragalactic Survey(1503.08567) March 30, 2015 astro-ph.CO, astro-ph.GA We present the Spitzer Archival Far-InfraRed Extragalactic Survey (SAFIRES). This program produces refined mosaics and source lists for all far-infrared extragalactic data taken during the more than six years of the cryogenic operation of the Spitzer Space Telescope. The SAFIRES products consist of far-infrared data in two wavelength bands (70 um and 160 um) across approximately 180 square degrees of sky, with source lists containing far-infrared fluxes for almost 40,000 extragalactic point sources. Thus, SAFIRES provides a large, robust archival far-infrared data set suitable for many scientific goals. • ### A Semi-Analytical Line Transfer (SALT) model to interpret the spectra of galaxy outflows(1501.07282) Jan. 28, 2015 astro-ph.CO, astro-ph.GA We present a Semi-Analytical Line Transfer model, SALT, to study the absorption and re-emission line profiles from expanding galactic envelopes. The envelopes are described as a superposition of shells with density and velocity varying with the distance from the center. We adopt the Sobolev approximation to describe the interaction between the photons escaping from each shell and the rest of the envelope. We include the effect of multiple scatterings within each shell, properly accounting for the atomic structure of the scattering ions. We also account for the effect of a finite circular aperture on actual observations. For equal geometries and density distributions, our models reproduce the main features of the profiles generated with more complicated transfer codes. Also, our SALT line profiles nicely reproduce the typical asymmetric resonant absorption line profiles observed in star-forming/starburst galaxies whereas these absorption profiles cannot be reproduced with thin shells moving at a fixed outflow velocity. We show that scattered resonant emission fills in the resonant absorption profiles, with a strength that is different for each transition. Observationally, the effect of resonant filling depends on both the outflow geometry and the size of the outflow relative to the spectroscopic aperture. Neglecting these effects will lead to incorrect values of gas covering fraction and column density. When a fluorescent channel is available, the resonant profiles alone cannot be used to infer the presence of scattered re-emission. Conversely, the presence of emission lines of fluorescent transitions reveals that emission filling cannot be neglected. • ### The Hawk-I UDS and GOODS Survey (HUGS): Survey design and deep K-band number counts(1409.7082) Sept. 24, 2014 astro-ph.GA We present the results of a new, ultra-deep, near-infrared imaging survey executed with the Hawk-I imager at the ESO VLT, of which we make all the data public. This survey, named HUGS (Hawk-I UDS and GOODS Survey), provides deep, high-quality imaging in the K and Y bands over the CANDELS UDS and GOODS-South fields. We describe here the survey strategy, the data reduction process, and the data quality. HUGS delivers the deepest and highest quality K-band images ever collected over areas of cosmological interest, and ideally complements the CANDELS data set in terms of image quality and depth. The seeing is exceptional and homogeneous, confined to the range 0.38"-0.43". In the deepest region of the GOODS-S field (which includes most of the HUDF), the K-band exposure time exceeds 80 hours of integration, yielding a 1-sigma magnitude limit of ~28.0 mag per square arcsec.
In the UDS field the survey matches the shallower depth of the CANDELS images, reaching a 1-sigma limit per square arcsec of ~27.3 mag in the K band and ~28.3 mag in the Y band. We show that the HUGS observations are well matched to the depth of the CANDELS WFC3/IR data, since the majority of even the faintest galaxies detected in the CANDELS H-band images are also detected in HUGS. We present the K-band galaxy number counts produced by combining the HUGS data from the two fields. We show that the slope of the number counts depends sensitively on the assumed distribution of galaxy sizes, with potential impact on the estimated extra-galactic background light (abridged). • ### Spectroscopic observation of Ly$\alpha$ emitters at z~7.7 and implications on re-ionization(1402.3604) May 5, 2014 astro-ph.CO We present spectroscopic follow-up observations on two bright Ly$\alpha$ emitter (LAE) candidates originally found by Krug et al. (2012) at a redshift of z~7.7 using the Multi-Object Spectrometer for Infra-Red Exploration (MOSFIRE) at Keck. We rule out any line emission at the >5$\sigma$ level for both objects, putting on solid ground a previous null result for one of the objects. The limits inferred from the non-detections rule out the previous claim of no or even reversed evolution between 5.7 < z < 7.7 in the Ly$\alpha$ luminosity function (LF) and suggest a drop in the Ly$\alpha$ luminosity function consistent with that seen in Lyman Break galaxy (LBG) samples. We model the redshift evolution of the LAE LF using the LBG UV continuum LF and the observed rest-frame equivalent width distribution. From the comparison of our empirical model with the observed LAE distribution, we estimate lower limits of the neutral hydrogen fraction to be 50-70% at z~7.7. Together with this, we find a strong evolution in the Ly$\alpha$ optical depth characterized by (1+z)^(2.2 $\pm$ 0.5) beyond z=6 indicative of a strong evolution of the IGM. Finally, we extrapolate the LAE LF to z~9 using our model and show that it is unlikely that large area surveys like UltraVISTA or Euclid pick up LAEs at this redshift assuming the current depths and area. • ### Spot the difference. Impact of different selection criteria on observed properties of passive galaxies in zCOSMOS 20-k sample(1305.1308) Sept. 11, 2013 astro-ph.CO We present the analysis of photometric, spectroscopic, and morphological properties for differently selected samples of passive galaxies up to z=1 extracted from the zCOSMOS-20k spectroscopic survey. This analysis intends to explore the dependence of galaxy properties on the selection criterion adopted, study the degree of contamination due to star-forming outliers, and provide a comparison between different commonly used selection criteria. We extracted from the zCOSMOS-20k catalog six different samples of passive galaxies, based on morphology, optical colors, specific star-formation rate, a best fit to the observed spectral energy distribution, and a criterion that combines morphological, spectroscopic, and photometric information. The morphological sample has the highest percentage of contamination in colors, specific star formation rate and presence of emission lines, while the red & passive ETGs sample is the purest, with properties mostly compatible with no star formation activity; however, it is also the least economical criterion in terms of information used.
The best performing among the other criteria are the red SED and the quiescent ones, providing a percentage of contamination only slightly higher than the red & passive ETGs criterion (on average by a factor of ~2) but with absolute values of the properties of contaminants still compatible with a red, passively evolving population. We also provided two revised definitions of early type galaxies based on restframe color-color and color-mass criteria that better reproduce the observed bimodalities. The analysis of the number densities shows evidence of mass-assembly downsizing, with galaxies at 10.25<log(M/Msun)<10.75 increasing their number by a factor ~2-4 from z=0.6 to z=0.2, by a factor ~2-3 from z=1 to z=0.2 at 10.75<log(M/Msun)<11, and by only ~10-50% from z=1 to z=0.2 at 11<log(M/Msun)<11.5. • ### Evolution of Galaxies and their Environments at z = 0.1 to 3 in COSMOS(1303.6689) March 26, 2013 astro-ph.CO Large-scale structures (LSS) out to z $< 3.0$ are measured in the Cosmic Evolution Survey (COSMOS) using extremely accurate photometric redshifts (photoz). The Ks-band selected sample (from Ultra-Vista) is comprised of 155,954 galaxies. Two techniques -- adaptive smoothing and Voronoi tessellation -- are used to estimate the environmental densities within 127 redshift slices. Approximately 250 statistically significant overdense structures are identified out to z $= 3.0$ with shapes varying from elongated filamentary structures to more circularly symmetric concentrations. We also compare the densities derived for COSMOS with those based on semi-analytic predictions for a $\Lambda$CDM simulation and find excellent overall agreement between the mean densities as a function of redshift and the range of densities. The galaxy properties (stellar mass, spectral energy distributions (SEDs) and star formation rates (SFRs)) are strongly correlated with environmental density and redshift, particularly at z $< 1.0 - 1.2$. Classifying the spectral type of each galaxy using the rest-frame b-i color (from the photoz SED fitting), we find a strong correlation of early type galaxies (E-Sa) with high density environments, while the degree of environmental segregation varies systematically with redshift out to z $\sim 1.3$. In the highest density regions, 80% of the galaxies are early types at z=0.2 compared to only 20% at z = 1.5. The SFRs and the star formation timescales exhibit clear environmental correlations. At z $> 0.8$, the star formation rate density (SFRD) is uniformly distributed over all environmental density percentiles, while at lower redshifts the dominant contribution is shifted to galaxies in lower density environments. • ### Far-Infrared Properties of Type 1 Quasars(1303.1861) March 8, 2013 astro-ph.CO We use the Spitzer Space Telescope Enhanced Imaging Products (SEIP) and the Spitzer Archival Far-InfraRed Extragalactic Survey (SAFIRES) to study the spectral energy distributions of spectroscopically confirmed type 1 quasars selected from the Sloan Digital Sky Survey (SDSS). By combining the Spitzer and SDSS data with the 2-Micron All Sky Survey (2MASS) we are able to construct a statistically robust rest-frame 0.1-100 micron type 1 quasar template. We find the quasar population is well-described by a single power-law SED at wavelengths less than 20 microns, in good agreement with previous work. However, at longer wavelengths we find a significant excess in infrared luminosity above an extrapolated power-law, along with significant object-to-object dispersion in the SED.
The mean excess reaches a maximum of 0.8 dex at rest-frame wavelengths near 100 microns. • ### Dust extinction from Balmer decrements of star-forming galaxies at 0.75<z<1.5 with HST/WFC3 spectroscopy from the WISP survey(1206.1867) Nov. 28, 2012 astro-ph.CO Spectroscopic observations of Halpha and Hbeta emission lines of 128 star-forming galaxies in the redshift range 0.75<z<1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel (WISP) survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Halpha/Hbeta). We present dust extinction as a function of Halpha luminosity (down to 3 x 10^{41} erg/s), galaxy stellar mass (reaching 4 x 10^{8} Msun), and rest-frame Halpha equivalent width. The faintest galaxies are two times fainter in Halpha luminosity than galaxies previously studied at z~1.5. An evolution is observed where galaxies of the same Halpha luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Halpha luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [OIII]5007/Halpha flux ratio as a function of luminosity where galaxies with L_{Halpha}<5 x 10^{41} erg/s are brighter in [OIII]5007 than Halpha. This trend is evident even after extinction correction, suggesting that the increased [OIII]5007/Halpha ratio in low luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters. • ### Extreme Emission Line Galaxies in CANDELS: Broad-Band Selected, Star-Bursting Dwarf Galaxies at z>1(1107.5256) Sept. 23, 2011 astro-ph.CO We identify an abundant population of extreme emission line galaxies (EELGs) at redshift z~1.7 in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) imaging from Hubble Space Telescope/Wide Field Camera 3 (HST/WFC3). 69 EELG candidates are selected by the large contribution of exceptionally bright emission lines to their near-infrared broad-band magnitudes. Supported by spectroscopic confirmation of strong [OIII] emission lines -- with rest-frame equivalent widths ~1000\AA -- in the four candidates that have HST/WFC3 grism observations, we conclude that these objects are galaxies with 10^8 Msol in stellar mass, undergoing an enormous starburst phase with M_*/(dM_*/dt) of only ~15 Myr. These bursts may cause outflows that are strong enough to produce cored dark matter profiles in low-mass galaxies. The individual star formation rates and the co-moving number density (3.7x10^-4 Mpc^-3) can produce in ~4 Gyr much of the stellar mass density that is presently contained in 10^8-10^9 Msol dwarf galaxies. Therefore, our observations provide a strong indication that many or even most of the stars in present-day dwarf galaxies formed in strong, short-lived bursts, mostly at z>1. • ### The bimodality of the 10k zCOSMOS-bright galaxies up to z ~ 1: a new statistical and portable classification based on the optical galaxy properties(1009.0723) Sept. 23, 2011 astro-ph.CO Our goal is to develop a new and reliable statistical method to classify galaxies from large surveys. 
We probe the reliability of the method by comparing it with a three-dimensional classification cube, using the same set of spectral, photometric and morphological parameters. We applied two different methods of classification to a sample of galaxies extracted from the zCOSMOS redshift survey, in the redshift range 0.5 < z < 1.3. The first method is the combination of three independent classification schemes, while the second method exploits an entirely new approach based on statistical analyses like the Principal Component Analysis (PCA) and Unsupervised Fuzzy Partition (UFP) clustering methods. The PCA+UFP method has been applied also to a lower redshift sample (z < 0.5), exploiting the same set of data except the spectral ones, which were replaced by the equivalent width of H$\alpha$. The comparison between the two methods shows fairly good agreement on the definition of the two main clusters, the early-type and the late-type galaxy ones. Our PCA-UFP method of classification is robust, flexible and capable of identifying the two main populations of galaxies as well as the intermediate population. The intermediate galaxy population shows many of the properties of the green valley galaxies, and constitutes a more coherent and homogeneous population. The fairly large redshift range of the studied sample allows us to observe the downsizing effect: galaxies with masses of the order of $3\cdot 10^{10}$ Msun are mainly found in transition from the late type to the early type group at $z>0.5$, while galaxies with lower masses - of the order of $10^{10}$ Msun - are in transition at later epochs; galaxies with $M <10^{10}$ Msun have not yet begun their transition, while galaxies with very large masses ($M > 5\cdot 10^{10}$ Msun) mostly completed their transition before $z\sim 1$. • ### Very Strong Emission-Line Galaxies in the WISP Survey and Implications for High-Redshift Galaxies(1109.0639) Sept. 3, 2011 astro-ph.CO The WFC3 Infrared Spectroscopic Parallel Survey (WISP) uses the Hubble Space Telescope (HST) infrared grism capabilities to obtain slitless spectra of thousands of galaxies over a wide redshift range including the peak of star formation history of the Universe. We select a population of very strong emission-line galaxies with rest-frame equivalent widths higher than 200 A. A total of 176 objects are found over the redshift range 0.35 < z < 2.3 in the 180 arcmin^2 area we analyzed so far. After estimating the AGN fraction in the sample, we show that this population consists of young and low-mass starbursts with higher specific star formation rates than normal star-forming galaxies at any redshift. After spectroscopic follow-up of one of these galaxies with Keck/LRIS, we report the detection at z = 0.7 of an extremely metal-poor galaxy with 12+Log(O/H)= 7.47 +- 0.11. The nebular emission-lines can substantially affect the broadband flux density with a median brightening of 0.3 mag, with examples producing brightening of up to 1 mag. The presence of strong emission lines in low-z galaxies can mimic the color-selection criteria used in the z ~ 8 dropout surveys. In order to effectively remove low redshift interlopers, deep optical imaging is needed, at least 1 mag deeper than the bands in which the objects are detected. Finally, we empirically demonstrate that strong nebular lines can lead to an overestimation of the mass and the age of galaxies derived from fitting of their SED.
Without removing emission lines, the age and the stellar mass estimates are overestimated by a factor of 2 on average and up to a factor of 10 for the high-EW galaxies. Therefore the contribution of emission lines should be systematically taken into account in SED fitting of star-forming galaxies at all redshifts. • ### The radial and azimuthal profiles of Mg II absorption around 0.5 < z < 0.9 zCOSMOS galaxies of different colors, masses and environments(1106.0616) Aug. 25, 2011 astro-ph.CO We map the radial and azimuthal distribution of Mg II gas within 200 kpc (physical) of 4000 galaxies at redshifts 0.5 < z < 0.9 using co-added spectra of more than 5000 background galaxies at z > 1. We investigate the variation of Mg II rest frame equivalent width as a function of the radial impact parameter for different subsets of foreground galaxies selected in terms of their rest-frame colors and masses. Blue galaxies have a significantly higher average Mg II equivalent width at close galactocentric radii as compared to the red galaxies. Amongst the blue galaxies, there is a correlation between Mg II equivalent width and galactic stellar mass of the host galaxy. We also find that the distribution of Mg II absorption around group galaxies is more extended than that for non-group galaxies, and that groups as a whole have more extended radial profiles than individual galaxies. Interestingly, these effects can be satisfactorily modeled by a simple superposition of the absorption profiles of individual member galaxies, assuming that these are the same as those of non-group galaxies, suggesting that the group environment may not significantly enhance or diminish the Mg II absorption of individual galaxies. We show that there is a strong azimuthal dependence of the Mg II absorption within 50 kpc of inclined disk-dominated galaxies, indicating the presence of a strongly bipolar outflow aligned along the disk rotation axis. There is no significant dependence of Mg II absorption on the apparent inclination angle of disk-dominated galaxies. • ### The zCOSMOS redshift survey : Influence of luminosity, mass and environment on the galaxy merger rate(1104.5470) April 28, 2011 astro-ph.CO The contribution of major mergers to galaxy mass assembly along cosmic time is an important ingredient to the galaxy evolution scenario. We aim to measure the evolution of the merger rate for both luminosity/mass selected galaxy samples and investigate its dependence with the local environment. We use a sample of 10644 spectroscopically observed galaxies from the zCOSMOS redshift survey to identify pairs of galaxies destined to merge, using only pairs for which the velocity difference and projected separation of both components with a confirmed spectroscopic redshift indicate a high probability of merging. We have identified 263 spectroscopically confirmed pairs with r_p^{max} = 100 h^{-1} kpc. We find that the density of mergers depends on luminosity/mass, being higher for fainter/less massive galaxies, while the number of mergers a galaxy will experience does not depends significantly on its intrinsic luminosity but rather on its stellar mass. We find that the pair fraction and merger rate increase with local galaxy density, a property observed up to redshift z=1. 
We find that the dependence of the merger rate on the luminosity or mass of galaxies is already present up to redshifts z=1, and that the evolution of the volumetric merger rate of bright (massive) galaxies is relatively flat with redshift with a mean value of 3*10^{-4} (8*10^{-5} respectively) mergers h^3 Mpc^{-3} Gyr^{-1}. The dependence of the merger rate with environment indicates that dense environments favors major merger events as can be expected from the hierarchical scenario. The environment therefore has a direct impact in shapping-up the mass function and its evolution therefore plays an important role on the mass growth of galaxies along cosmic time. • ### Tracking the impact of environment on the Galaxy Stellar Mass Function up to z~1 in the 10k zCOSMOS sample(0907.0013) Sept. 29, 2010 astro-ph.CO We study the impact of the environment on the evolution of galaxies in the zCOSMOS 10k sample in the redshift range 0.1<z<1.0 over an area of ~1.5 deg2. The considered sample of secure spectroscopic redshifts contains about 8500 galaxies, with their stellar masses estimated by SED fitting of the multiwavelength optical to NIR photometry. The evolution of the galaxy stellar mass function (GSMF) in high and low density regions provides a tool to study the mass assembly evolution in different environments; moreover, the contributions to the GSMF from different galaxy types, as defined by their SEDs and their morphologies, can be quantified. At redshift z~1, the GSMF is only slightly dependent on environment, but at lower redshifts the shapes of the GSMFs in high- and low-density environments become extremely different, with high density regions exhibiting a marked bimodality. As a result, we infer that galaxy evolution depends on both the stellar mass and the environment, the latter setting the probability of a galaxy to have a given mass: all the galaxy properties related to the stellar mass show a dependence on environment, reflecting the difference observed in the mass functions. The shapes of the GSMFs of early- and late-type galaxies are almost identical for the extremes of the density contrast we consider. The evolution toward z=0 of the mass at which the early- and late-type GSMFs match each other is more rapid in high density environments. The comparison of the observed GSMFs to the same quantities derived from a set of mock catalogues shows that blue galaxies in sparse environments are overproduced in the semi-analytical models at intermediate and high masses, because of a deficit of star formation suppression, while at z<0.5 an excess of red galaxies is present in dense environments at intermediate and low masses, because of the overquenching of satellites. ABRIDGED • ### zCOSMOS 10k-bright spectroscopic sample: exploring mass and environment dependence in early-type galaxies(1009.3376) Sept. 17, 2010 astro-ph.GA We present the analysis of the U-V rest-frame color distribution and some spectral features as a function of mass and environment for two sample of early-type galaxies up to z=1 extracted from the zCOSMOS spectroscopic survey. The first sample ("red galaxies") is defined with a photometric classification, while the second ("ETGs") by combining morphological, photometric, and spectroscopic properties to obtain a more reliable sample. We find that the color distribution of red galaxies is not strongly dependent on environment for all mass bins, with galaxies in overdense regions redder than galaxies in underdense regions with a difference of 0.027\pm0.008 mag. 
The dependence on mass is far more significant, with average colors of massive galaxies redder by 0.093\pm0.007 mag than low-mass galaxies throughout the entire redshift range. We study the color-mass relation, finding a mean slope 0.12\pm0.005, while the color-environment relation is flatter, with a slope always smaller than 0.04. The spectral analysis that we perform on our ETGs sample is in good agreement with our photometric results: we find for D4000 a dependence on mass between high and low-mass galaxies, and a much weaker dependence on environment (respectively a difference of of 0.11\pm0.02 and of 0.05\pm0.02); for the equivalent width of H{\delta}we measure a difference of 0.28\pm0.08 {\AA}across the same mass range and no significant dependence on environment.By analyzing the lookback time of early-type galaxies, we support the possibility of a downsizing scenario, in which massive galaxies with a stronger D4000 and an almost constant equivalent width of $H\delta$ formed their mass at higher redshift than lower mass ones. We also conclude that the main driver of galaxy evolution is the galaxy mass, the environment playing a subdominant role. • ### zCOSMOS - 10k-bright spectroscopic sample. The bimodality in the Galaxy Stellar Mass Function: exploring its evolution with redshift(0907.5416) June 14, 2010 astro-ph.CO, astro-ph.HE We present the Galaxy Stellar Mass Function (MF) up to z~1 from the zCOSMOS-bright 10k spectroscopic sample. We investigate the total MF and the contribution of ETGs and LTGs, defined by different criteria (SED, morphology or star formation). We unveil a galaxy bimodality in the global MF, better represented by 2 Schechter functions dominated by ETGs and LTGs, respectively. For the global population we confirm that low-mass galaxies number density increases later and faster than for massive galaxies. We find that the MF evolution at intermediate-low values of Mstar (logM<10.6) is mostly explained by the growth in stellar mass driven by smoothly decreasing star formation activities. The low residual evolution is consistent with ~0.16 merger per galaxy per Gyr (of which fewer than 0.1 are major). We find that ETGs increase in number density with cosmic time faster for decreasing Mstar, with a median "building redshift" increasing with mass, in contrast with hierarchical models. For LTGs we find that the number density of blue or spiral galaxies remains almost constant from z~1. Instead, the most extreme population of active star forming galaxies is rapidly decreasing in number density. We suggest a transformation from blue active spirals of intermediate mass into blue quiescent and successively (1-2 Gyr after) into red passive types. The complete morphological transformation into red spheroidals, required longer time-scales or follows after 1-2 Gyr. A continuous replacement of blue galaxies is expected by low-mass active spirals growing in stellar mass. We estimate that on average ~25% of blue galaxies is transforming into red per Gyr for logM<11. We conclude that the build-up of galaxies and ETGs follows the same downsizing trend with mass as the formation of their stars, converse to the trend predicted by current SAMs. We expect a negligible evolution of the global Galaxy Baryonic MF. • ### The WFC3 Infrared Spectroscopic Parallel (WISP) Survey(1005.4068) May 21, 2010 astro-ph.CO We present the WFC3 Infrared Spectroscopic Parallel (WISP) Survey. 
WISP is obtaining slitless, near-infrared grism spectroscopy of ~ 90 independent, high-latitude fields by observing in the pure parallel mode with Wide Field Camera-3 on the Hubble Space Telescope for a total of ~ 250 orbits. Spectra are obtained with the G102 (lambda=0.8-1.17 microns, R ~ 210) and G141 grisms (lambda=1.11-1.67 microns, R ~ 130), together with direct imaging in the J- and H-bands (F110W and F140W, respectively). In the present paper, we present the first results from 19 WISP fields, covering approximately 63 square arc minutes. For typical exposure times (~ 6400 sec in G102 and ~ 2700 sec in G141), we reach 5-sigma detection limits for emission lines of 5 x 10^(-17) ergs s^(-1) cm^(-2) for compact objects. Typical direct imaging 5sigma-limits are 26.8 and 25.0 magnitudes (AB) in F110W and F140W, respectively. Restricting ourselves to the lines measured with highest confidence, we present a list of 328 emission lines, in 229 objects, in a redshift range 0.3 < z < 3. The single-line emitters are likely to be a mix of Halpha and [OIII]5007,4959 A, with Halpha predominating. The overall surface density of high-confidence emission-line objects in our sample is approximately 4 per arcmin^(2).These first fields show high equivalent width sources, AGN, and post starburst galaxies. The median observed star formation rate of our Halpha selected sample is 4 Msol/year. At intermediate redshifts, we detect emission lines in galaxies as faint as H_140 ~ 25, or M_R < -19, and are sensitive to star formation rates down to less than 1 Msol/year. The slitless grisms on WFC3 provide a unique opportunity to study the spectral properties of galaxies much fainter than L* at the peak of the galaxy assembly epoch. • ### The Opacity of Galactic Disks at z~0.7(1003.3458) March 17, 2010 astro-ph.CO We compare the surface brightness-inclination relation for a sample of COSMOS pure disk galaxies at z~0.7 with an artificially redshifted sample of SDSS disks well matched to the COSMOS sample in terms of rest-frame photometry and morphology, as well as their selection and analysis. The offset between the average surface brightness of face-on and edge-on disks in the redshifted SDSS sample matches that predicted by measurements of the optical depth of galactic disks in the nearby universe. In contrast, large disks at z~0.7 have a virtually flat surface brightness-inclination relation, suggesting that they are more opaque than their local counterparts. This could be explained by either an increased amount of optically thick material in disks at higher redshift, or a different spatial distribution of the dust. • ### Ultraluminous X-ray sources out to z~0.3 in the COSMOS field(1002.4299) Feb. 23, 2010 astro-ph.CO, astro-ph.HE Using Chandra observations we have identified a sample of seven off-nuclear X-ray sources, in the redshift range z=0.072-0.283, located within optically bright galaxies in the COSMOS Survey. Using the multi-wavelength coverage available in the COSMOS field, we study the properties of the host galaxies of these ULXs. In detail, we derived their star formation rate from H_alpha measurements and their stellar masses using SED fitting techniques with the aim to compute the probability to have an off-nuclear source based on the host galaxy properties. We divide the host galaxies in different morphological classes using the available ACS/HST imaging. 
We find that our ULXs candidates are located in regions of the SFR versus M$_star$ plane where one or more off-nuclear detectable sources are expected. From a morphological analysis of the ACS imaging and the use of rest-frame colours, we find that our ULXs are hosted both in late and early type galaxies. Finally, we find that the fraction of galaxies hosting a ULX ranges from ~0.5% to ~0.2% going from L[0.5-2 keV]=3 x 10^39 erg s^-1 to L[0.5-2 keV]= 2 x 10^40 erg s^-1. • ### Physical and morphological properties of z~3 LBGs: dependence on Lyalpha line emission(1002.2068) Feb. 10, 2010 astro-ph.CO, astro-ph.GA We investigate the physical and morphological properties of LBGs at z ~2.5 to ~3.5, to determine if and how they depend on the nature and strength of the Lyalpha emission. We selected U-dropout galaxies from the z-detected GOODS MUSIC catalog, by adapting the classical Lyman Break criteria on the GOODS filter set. We kept only those galaxies with spectroscopic confirmation, mainly from VIMOS and FORS public observations. Using the full multi-wavelength 14-bands photometry, we determined the physical properties of the galaxies, through a standard spectral energy distribution fitting with the updated Charlot & Bruzual (2009) templates. We also added other relevant observations, i.e. the 24mu m observations from Spitzer/MIPS and the 2 MSec Chandra X-ray observations. Finally, using non parametric diagnostics (Gini, Concentration, Asymmetry, M_20 and ellipticity), we characterized the rest-frame UV morphology of the galaxies. We then analyzed how these physical and morphological properties correlate with the presence of the Lyalpha line in the optical spectra. We find that, unlike at higher redshift, the dependence of physical properties on the Lyalpha line is milder: galaxies without Lyalpha in emission tend to be more massive and dustier than the rest of the sample, but all other parameters, ages, SFRs, X-ray emission as well as UV morphology do not depend strongly on the presence of the line emission. A simple scenario where all LBGs have intrinsically high Lyalpha emission, but where dust and neutral hydrogen content (which shape the final appearance of the Lyalpha) depend on the mass of the galaxies, is able to reproduce the majority of the observed properties at z~3. Some modification might be needed to account for the observed evolution of these properties with cosmic epoch, which is also discussed. • ### The Build-Up of the Hubble Sequence in the COSMOS Field(0911.1126) Nov. 6, 2009 astro-ph.CO We use ~8,600 >5e10 Msol COSMOS galaxies to study how the morphological mix of massive ellipticals, bulge-dominated disks, intermediate-bulge disks, bulge-less disks and irregular galaxies evolves from z=0.2 to z=1. The morphological evolution depends strongly on mass. At M>3e11 Msol, no evolution is detected in the morphological mix: ellipticals dominate since z=1, and the Hubble sequence has quantitatively settled down by this epoch. At the 1e11 Msol mass scale, little evolution is detected, which can be entirely explained with major mergers. Most of the morphological evolution from z=1 to z=0.2 takes place at masses 5e10 - 1e11 Msol, where: (i) The fraction of spirals substantially drops and the contribution of early-types increases. This increase is mostly produced by the growth of bulge-dominated disks, which vary their contribution from ~10% at z=1 to >30% at z=0.2 (cf. the elliptical fraction grows from ~15% to ~20%). 
Thus, at these masses, transformations from late- to early-types result in disk-less elliptical morphologies with a statistical frequency of only 30% - 40%. Otherwise, the processes which are responsible for the transformations either retain or produce a non-negligible disk component. (ii) The bulge-less disk galaxies, which contribute ~15% to the intermediate-mass galaxy population at z=1, virtually disappear by z=0.2. The merger rate since z=1 is too low to account for the disappearance of these massive bulge-less disks, which most likely grow a bulge via secular evolution.
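The WISP Balmer-decrement study listed above measures interstellar dust extinction from the stacked Halpha/Hbeta ratio. As a rough illustration of that kind of calculation (a minimal sketch, not the survey's actual pipeline), the Python snippet below converts an observed Balmer decrement into a nebular color excess and an extinction at Halpha, assuming the Case B intrinsic ratio of 2.86 and Calzetti-like attenuation-curve coefficients; the input fluxes are placeholders.

```python
import math

def balmer_extinction(f_halpha, f_hbeta, k_ha=2.53, k_hb=3.61, intrinsic=2.86):
    """Estimate the nebular color excess E(B-V) and A(Halpha) from a Balmer decrement.

    Assumes a Case B intrinsic Halpha/Hbeta ratio (default 2.86) and
    Calzetti-like attenuation-curve coefficients k(Halpha), k(Hbeta).
    """
    observed = f_halpha / f_hbeta
    ebv = 2.5 / (k_hb - k_ha) * math.log10(observed / intrinsic)
    ebv = max(ebv, 0.0)            # a negative value is consistent with no dust
    a_ha = k_ha * ebv              # extinction at Halpha in magnitudes
    corrected_halpha = f_halpha * 10 ** (0.4 * a_ha)
    return ebv, a_ha, corrected_halpha

# Placeholder fluxes in arbitrary units, not WISP measurements.
ebv, a_ha, f_corr = balmer_extinction(f_halpha=3.5, f_hbeta=1.0)
print(f"E(B-V)_gas = {ebv:.2f} mag, A(Halpha) = {a_ha:.2f} mag")
```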
# GATE Questions & Answers of Characteristics of Semiconductor Power Devices: Diodes, Thyristor, Triac, GTO, MOSFET, IGBT ## What is the Weightage of Characteristics of Semiconductor Power Devices: Diodes, Thyristor, Triac, GTO, MOSFET, IGBT in GATE Exam? Total 15 Questions have been asked from Characteristics of Semiconductor Power Devices: Diodes, Thyristor, Triac, GTO, MOSFET, IGBT topic of Power Electronics subject in previous GATE papers. Average marks 1.60. Four power semiconductor devices are shown in the figure along with their relevant terminals. The device(s) that can carry dc current continuously in the direction shown when gated appropriately is (are) For the power semiconductor device IGBT, MOSFET, Diode and Thyristor, which one of the following statements is TRUE? A steady dc current of 100 A is flowing through a power module (S, D) as shown in Figure (a). The V-I characteristics of the IGBT (S) and the diode (D) are shown in Figures (b) and (c), respectively. The conduction power loss in the power module (S, D), in watts, is ________. The voltage $\left({v}_{s}\right)$ across and the current $\left({i}_{s}\right)$ through a semiconductor switch during a turn-ON transition are shown in figure. The energy dissipated during the turn-ON transition, in mJ, is _______. Figure shows four electronic switches (i), (ii), (iii) and (iv). Which of the switches can block voltages of either polarity (applied between terminals ‘a’ and ‘b’) when the active device is in the OFF state? Thyristor T in the figure below is initially off and is triggered with a single pulse of width 10 μs. It is given that $L=\left(\frac{100}{\mathrm{\pi }}\right)\mu \mathrm{H}$ and $C=\left(\frac{100}{\mathrm{\pi }}\right)\mu \mathrm{F}$. Assuming latching and holding currents of the thyristor are both zero and the initial charge on C is zero, T conducts for The typical ratio of latching current to holding current in a 20 A thyristor is Circuit turn-off time of an SCR is defined as the time A voltage commutated chopper circuit, operated at 500Hz, is shown below. If the maximum value of load current is 10A, then the maximum current through the main (M) and auxiliary (A) thyristors will be The circuit shows an ideal diode connected to a pure inductor and is connected to a purely sinusoidal 50Hz voltage source. Under ideal conditions the current waveform through the inductor will look like Match the switch arrangements on the top row to the steady-state V-I characteristics on the lower row. The steady state operating points are shown by large black dots. In the circuit of adjacent figure the diode connects the ac source to a pure inductance L. The diode conducts for The circuit in the figure is a current commutated dc-dc chopper where, ThM is the main SCR and ThAUX is the auxiliary SCR. The load current is constant at 10 A. ThM is ON. ThAUX is trigged at t = 0. ThM is turned OFF between A 1:1 Pulse Transformer (PT) is used to trigger the SCR in the adjacent figure. The SCR is rated at 1.5 kV, 250 A with IL = 250 mA, IH = 150 mA, and IGmax = 150 mA, IGmin = 100 mA. The SCR is connected to an inductive load, where L = 150 mH in series with a small resistance and the supply voltage is 200 V dc. The forward drops of all transistors/diodes and gate-cathode junction during ON state are 1.0 V. The resistance R should be
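Several of the numerical problems above (the conduction loss of the power module and the energy dissipated during the turn-ON transition) come down to integrating the instantaneous power v(t)·i(t) over the relevant interval. The GATE figures are not reproduced here, so the piecewise-linear waveform in the sketch below is an assumed example rather than the actual exam waveform; only the integration method is the point.

```python
# Sketch: energy dissipated in a switch during a transition, E = integral of v(t)*i(t) dt.
# The piecewise-linear samples below are assumptions for illustration, not the
# waveform from the GATE figure.

def trapezoid(t, y):
    """Numerically integrate samples y(t) with the trapezoidal rule."""
    return sum((t[k + 1] - t[k]) * (y[k + 1] + y[k]) / 2.0 for k in range(len(t) - 1))

t_us = [0.0, 1.0, 2.0]                 # time in microseconds (assumed)
v    = [600.0, 600.0, 0.0]             # device voltage in volts (assumed)
i    = [0.0, 50.0, 50.0]               # device current in amperes (assumed)

p = [vk * ik for vk, ik in zip(v, i)]  # instantaneous power in watts
e_joule = trapezoid([tk * 1e-6 for tk in t_us], p)   # convert microseconds to seconds
print(f"turn-ON energy ≈ {e_joule * 1e3:.2f} mJ")
```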
## Karnataka 2nd PUC Chemistry Question Bank Chapter 3 Electrochemistry ### 2nd PUC Chemistry Electrochemistry NCERT Textbook Questions and Answers Question 1. Arrange the following metals in the order in which they displace each other from the solution of their salts. Al, Cu, Fe, Mg and Zn. Mg, Al, Zn, Fe, Cu. Question 2. Given the standard electrode potentials, K+/K = -2.93V, Ag+/Ag = 0.80V, Hg2+/Hg = 0.79 V Mg2+/Mg = -2,37 V, Cr3+/Cr = – 0.74V Arrange these metals in their increasing order of reducing power. The lower the reduction potential, the higher is the reducing power. Hence, the reducing power of the given metals increases inthe following order. Ag < Hg < Cr < Mg < K. Question 3. Depict the galvanic cell in which the reaction Zn(s)+2Ag+(aq) —> Zn2+(aq)+2Ag(s) takes place. Further show: (i) Which of the electrode is negatively charged? (ii) The carriers of the current in the cell. (iii) Individual reaction at each electrode. The galvanic cell in which the given reaction takes place is depicted as: Zn(s) | Zn2+ (aq) || Ag+ (aq) | Ag(s) (i) Zn electrode (anode) is negatively charged (ii) Tons are carriers of current in the cell and in the external circuit, current from silver to Zinc. (iii) The reaction taking place at the anode is given by, Zn(s) -H → Zn2+(aq) + 2e The reaction taking place at the cathode is given Ag++ e → Ag(s) Question 4. Calculate the standard cell potentials of galvanic cell in which the following reactions take place: (i) 2Cr(s) + 3Cd2+(aq) → 2Cr3+(aq) + 3Cd (ii) Fe2+(aq) + Ag+(aq) → Fe3+(aq) + Ag(s) Calculate the ArG9and equilibrium constant of the reactions. (i) For the given reaction, the Nemst equation can be given as: ∴ Eθ = 1.104V We know that, ΔrGθ = -nFEθ = -2 × 96487× 1.04 = – 213043.296J = -213.04KJ Question 5. Write the Nernst equation and em! of the following cells at 298 K: (I) Mg(s)|Mg+2(O.OO1M) ||Cu+2(0.0001M)|Cu(s) . (ii) Fe(s)|Fe+2(O.OO1 M)||H+(1M)H2(g) (1bar)|Pt(s) (iii) Sn(s) |Sn2+(O.050 M)||H+(0.020M|H2(g) (1 bar)|Pt(s) (iv) Pt(s)|Br2(l)|Br(O.O1O M)||H+(O.030 M)| H2(g) (1 bar)|Pt(s). (i) EθCr3+ /Cr = 0.74V EθCd2+ / Cd = – 0.40V The galvanic cell of the reaction IC depicted as : Cr(s)|Cr3+ (aq) || (Cd2+ (aq) Cd(s) Now, the standard cell potential is Eθcell = EθR– EθL = -40 – (-0.74) = +0.34V rGθ = —nFEθcell In the given equation, n = 6 F = 96487 C mol-1 Eθcell = + 0.34V Then, ∆rGθ =-6 × 96487 mol-1 × 0.34V = -196833.48 CV mol-1 = -196833.48 J mol-1 = -196.83 KJ mol-1 Again rGθ = -RT In K =>∆rGθ =-2.303RT log K (ii) Eθ Fe3+ /Fe2+ = 0.77V Eθ Ag/Ag = 0.80V (ii) For the given reaction, the Nernst equation can be given as: (iii) Far the givën reaction, the Nernst equation can be given as: 0.14 – 0.0295 × log 125 = 0.14-0.062 = 0.078 V = 0.08 V (approx) (iv) For the given reaction , the nernst equation can be given as: = – 1.09 – 0.02955 × log ( 1.11× 107) = – 1.09 – 0.02955(0.0453 + 7) = -1.09 – 0.208 =-1.298 V. Question 6. In the button cells widely used in watches and other devices the following reaction takes place: Zn(s) + Ag2O(s) + H2O(l) → Zn2 + (aq) + 2Ag(s) + 2OH-(aq) Determine ∆r Gθ and Eθ for the reaction. The galvanic cell of the reaction is depicted as: Fe2+ (aq) | Fe3+ (aq) || Ag+ (aq) | Ag(s) Now, the standard cell potential is Eθcell = EθR – Elθ = 0.80 – 0.77 ‘ = 0.03 V Here, n = 1 Then, ∆r Gθ = – nFEθcell = -1 × 96487 C mol-1 × 0.03V = – 2894.61 J mol-1 = – 2.89 KJ mol-1 r Gθ = 2.303 RT In K Question 7. Define conductivity and molar conductivity for the solution of an electrolyte. Discuss their variation with concentration. 
Conductivity of a solution is defined as the conductance of a solution 1 cm in length and area of cross section cm2.1 is represented by K. Conducti vity always decreases with a decrease in concentration both for weak and strong electrolytes. This is because the number of ions per unit volume that carry the current in a solution decreases with a decrease in concentration. Molar conductivity of a solution at a given concentration is the conductance of volume V of a solution containing 1 mole of the electrolyte kept between two electrodes with the area area of cross-section A and distance of unit length. Molar conductivity increases with a decrease in concentration. This is because the total volume of the solution containing one mole of the electrolyte increases on dilution. Question 8. The conductivity of 0.20 M solution of KCl at 298 K is 0.0248 S cm-1. Calculate its molar conductivity. Question 9. The resistance of a conductivity cell containing 0.001M KCl solution at 298 K is 1500Ω. What is the cell constant if conductivity of 0.001M KCl solution at 298 K is 0.146 × 10-3 S cm-1. Cell constant = conductivity × Resistance = 0.146 × 10-3S Cm-1 × 1500 Ω = 0.219 cm-1 Question 10. The conductivity of sodium chloride at 298 K has been determined at different concentrations and thfe results are given below: Calculate ∆m for all concentrations and draw a plot between ∆m and c1/2. Find the value of ∆0m K = 7.896 × 10-5 S cm-1 M = 0.00241 Question 11. How much charge is required for the following reductions? (i) 1 mol of Al3+ to Al (ii) 1 mol of Cu2+ to Cu (iii) 1 mol of MnO4- to Mn2+ Al3+ + 3e → A1 charge required = 3F (ii) Cu2+ + 2e → Cu charge required = 2F (iii) MnO4- + 8H+ + Se → Mn2+ + H2O charge required = 5F Question 12. How much electricity in terms of Faraday ¡s required to produce (j) 20.Ogat Ca from molten CaCl2 (ii) 40.0 g of Al from Almólten Al2O3 (i) Ca2+2 + 2e → Ca 2F can produce I mole (=40 g) Ca ∴ To produce 20 g Ca requires, $$\frac{2 \mathrm{F} \times 20}{40}$$ = 1F (ii) Al3+ + 3e → Al 3F can produce 1 mole (= 27g) Al ∴ To produce 40 g A1 requires $$\frac{3 \mathrm{F} \times 40}{27}$$ = 4.44F Question 13. How much electricity is required in coulomb for the oxidation of (i) 1 mol of H2O to O2 (ii) 1 mol of FeO to Fe2O3. (i) 2H2O → 4H+ + O2 + 4e 2F of electricity is required for oxidation of 1 mole of H2O (ii) Fe2+ → Fe3+ + e IF of electricity is required for oxidation of 1 mole FeO Question 14. A solution of Ni(NO3)2 is electrolysed between platinum electrodes using a current of 5 amperes for 20 minutes. What mass of Ni is deposited at the cathode? Ni2+ + 2e → Ni 2F (2 × 96500 C) can produce 58.7 g of Ni Q = It = 5 × 20 × 60 =6000C Question 15. Three electrolytic cells A,B,C containing = 0.439 g of Zn solutions of ZnSO4, AgNO3 and CuSO4, respectively are connected in series. A steady current of 1.5 amperes was passed through them until 1.45 g of silver deposited at the cathode of cell B. How long did the current flow? What mass of copper and zinc were deposited? i.e. 108 g of Ag is deposited by 96487 C Therefore, 1 .45g of Ag is deposited by Given, Current = 1.5A $$\frac{1295.43}{1.5}$$ S ∴ Time = 863.6S = 864 S = 14.40 min Again, i.e. 2 × 96487 C of charge deposit = 63.5 g of Cu Therefore, 1295.43 C of charge will deposit = 0.426g of Cu i.e. 2 × 96487 C of charge deposit = 65.4 g of Zn Therefore, 1295.43 C of charge will deposit = 0.439 g of Zn Question 16. Predict the products of electrolysis in each of the following: (1) An aqueous solution of AgNO3 with silver electrodes. 
(ii) An aqueous solution of AgNO3 with platinum electrodes. (iii) A dilute solution of H2SO4 with platinum electrodes. (iv) An aqueous solution of CuCl2 with platinum electrodes. (i) At cathode: The following reduction reactions compete to take place at the cathode Ag+(aq) + e → Ag(s); Eθ = 0.80 V H+ (aq) + e → $$\frac { 1 }{ 2 }$$ H2 (g); Eθ = 0.00V The reaction with a higher value of Eθ takes place of the cathode. Therefore, deposition of silver will take place at the cathode. At anode: The Ag anode is attacked by NO3 ions. Therefore, the silver electrode at the anode dissolves in the solution to from Ag+. (ii) At cathode: Same as above At anode: Anode is not attackable and hence OH ions have lower discharge potential than NO3 ions and OH ions react to give O2 OH → OH + e 4OH → 2H2O + O2 (g) (iii) H2SO4 → 2H+ + SO2-4 HO2 ⇌ H+ + OH At cathode: 2H++ 2e → H2 At anode: 4OH → 2H2O + O2 + 4e i. e., H2 will be liberated at cathode and O2 at anode. (iv) CuCl2 → Cu2++2Cl At Cathode: Cu2+ ions will be reduced in preference to H+ ions Cu2+ + 2e → Cu At anode: Cl’ ions will be oxidised in preference to OH ions. 2Cl → Cl2 + 2e i.e., Cu will be deposited on the cathode and Cl2 will be liherated at the anode. Question 1. A solution of sodium chloride is a better conductor of electricity at a temperature of 50°C than at room temperature. Why? A solution of NaCl shows greater conduction of electricity at a temperature of 50°C than at room temperature because the ionic mobility of a strong electrolyte such as NaCl increases with an increase in temperature. Question 2. Give the relationship between molar conductivity and specific conductivity. Molar conductivity and specific conductivity are related to each other by the given equation. Where, Δm = Molar conductivity K = Specific conductivity C = Molar concentration Question 3. Why is it not possible to measure single electrode potential? The process of oxidation or reduction cannot take place alone. However, electrode potential is a relative tendency and can be measured with respect to a reference electrode such as standard hydrogen electrode. Question 4. Why is the rusting of iron faster in saline water than in pure water? Strong electrolytes such as sodium chloride are present in saline water. The ions produced from NaCl help in the reduction of oxygen to form water. Hence, the rusting of iron is faster in saline water than in pure water. Question 5. What happens Δ0m for weak electrolytes obtained by using Kohlrausch law if the migration of ions is increased to three Δ0m for weak electrolytes obtained by using Kohlrausch law is independent of the migration of ions. Question 6. Define molar conductivity of a solution. The molar conductivity of a solution at a given concentration is the conductance of volume ‘V’ of a solution containing 1 mole of _ the electrolytic kept between two electrodes with cross sectional area ‘A’ and distance of unit length. Or, Δm = $$\frac{\Delta}{1}$$ K Now, 1 = 1 and Δ = V (volume containing 1 mole of the electrolyte) ∴ Δm = KV Question 7. What are the factors that affect the conductivity of an ionic (electrolytic) solution? The conductivity of an ionic (electrolytic) solution depends upon the following factors. • Temperature • Concentration of electrolyte • Nature of the electrolyte added • Nature of solvent and its viscosity • Size of the ions produced and their solvation. Question 8. The electrolysis of a salt solution of a metal was carried out by passing a current of 4A for 45 minutes. 
This resulted in the deposition of 2.977 g of the metal. If the atomic mass of the metal is 106.4 g mol-1, calculate the charge on the metal cation. Let the charge on the metal cation be n, i.e., the cation is Mn+. Accordingly, Mn+ + ne- → M, so n × 96500 coulombs of charge deposit 106.4 g of the metal. The quantity of charge passed is Q = It = 4 × 45 × 60 = 10800 coulombs, and this charge deposits 2.977 g. Therefore n = (106.4 × 10800)/(96500 × 2.977) ≈ 4. Hence, the charge on the metal cation is +4. Question 9. (a) At infinite dilution, the ionic conductances of Ba2+ and Cl- are 127 and 76 ohm-1 cm2 respectively. What will be the equivalent conductance of BaCl2 (in ohm-1 cm2) at infinite dilution? (b) What effect does concentration have on the molar conductivity of a strong electrolyte? (a) The molar conductivity of barium chloride at infinite dilution is given by: Λ0m(BaCl2) = Λ0(Ba2+) + 2Λ0(Cl-) = 127 + 2 × 76 = 279 Ω-1 cm2 mol-1. Since the equivalent weight of BaCl2 is half its molar mass, Λ0eq = $$\frac{279}{2}$$ = 139.5 Ω-1 cm2 eq-1. Hence the equivalent conductance of BaCl2 at infinite dilution is 139.5 Ω-1 cm2 eq-1. (b) The molar conductivity of a strong electrolyte decreases linearly with the square root of concentration. Question 10. (a) The standard reduction potentials of Fe3+ | Fe2+ and I3- | I- are 0.77 V and 0.54 V respectively for the reaction 2Fe3+ + 3I- ⇌ 2Fe2+ + I3-. Calculate the value of the equilibrium constant. (b) How much charge is required to reduce 1 mole of Cu2+ to Cu
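The worked answers above repeatedly use the Nernst equation and Faraday's laws of electrolysis. The sketch below gathers both calculations in one place; the Daniell-type cell values are generic illustrations, while the electrolysis numbers mirror the nickel deposition data of Question 14 (5 A for 20 minutes).

```python
import math

F = 96485.0   # Faraday constant, C per mole of electrons
R = 8.314     # gas constant, J K^-1 mol^-1

def nernst(e_standard, n, reaction_quotient, temp_k=298.0):
    """Cell potential from the Nernst equation: E = E0 - (RT/nF) ln Q."""
    return e_standard - (R * temp_k) / (n * F) * math.log(reaction_quotient)

def electrolysis_mass(current_a, time_s, molar_mass, n_electrons):
    """Mass deposited at an electrode from Faraday's law: m = M*I*t/(n*F)."""
    return molar_mass * current_a * time_s / (n_electrons * F)

# Illustrative Daniell-type cell Zn | Zn2+ (0.1 M) || Cu2+ (0.01 M) | Cu with E0 = 1.10 V
# (generic textbook values, not one of the cells in Question 5).
e_cell = nernst(e_standard=1.10, n=2, reaction_quotient=0.1 / 0.01)
print(f"E_cell ≈ {e_cell:.3f} V")

# Nickel deposited by 5 A flowing for 20 minutes (cf. Question 14), Ni2+ + 2e- -> Ni.
m_ni = electrolysis_mass(current_a=5.0, time_s=20 * 60, molar_mass=58.7, n_electrons=2)
print(f"Ni deposited ≈ {m_ni:.3f} g")
```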
Special Issue: Computer Networks and Distributed Computing

### Deploy Efficiency Driven k-Barrier Construction Scheme Based on Target Circle in Directional Sensor Network

Xing-Gang Fan1, Member, CCF, Zhi-Cong Che2, Feng-Dan Hu2, Tao Liu2, Jin-Shan Xu2, Xiao-Long Zhou3, Member, ACM, IEEE

1 College of Zhijiang, Zhejiang University of Technology, Shaoxing 312030, China; 2 College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China; 3 College of Electrical and Information Engineering, Quzhou University, Quzhou 324000, China

• Received: 2018-11-02; Revised: 2020-03-29; Online: 2020-05-28; Published: 2020-05-28
• About author: Xing-Gang Fan received his Ph.D. degree in control science and engineering in 2004 from Zhejiang University, Hangzhou. He is now an associate professor with the College of Zhijiang, Zhejiang University of Technology, Shaoxing. He has published more than 40 peer-reviewed papers. His main research interests include wireless sensor networks and the Internet of Things.
• Supported by: This research was supported in part by the National Natural Science Foundation of China under Grant Nos. 11405145, 40241461, 61374152, and 61876168, and the Zhejiang Provincial Natural Science Foundation of China under Grant Nos. LY20F020024 and LY17F030016.

With the increasing demand for security, building strong barrier coverage in directional sensor networks is important for effectively detecting unauthorized intrusions. In this paper, we propose an efficient scheme to form strong barrier coverage by adding mobile nodes one by one into the barrier. We first present the concept of the target circle, which determines the appropriate residence region and working direction of any candidate node to be added. We then select the optimal relay sensor to be added into the current barrier based on its input-output ratio (barrier weight), which reflects the extension of barrier coverage. This strategy relaxes the demand for the minimal number of required sensor nodes (maximal gain of each sensor) or the maximal lifetime of a single barrier, allowing more sensors to be used. Numerical simulation results show that, compared with the available schemes, the proposed method significantly reduces the minimal deployment density required to establish a k-barrier and increases the total service lifetime with a high deploy efficiency.
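The scheme summarized above grows a barrier by repeatedly adding the candidate sensor with the best input-output ratio (barrier weight). The toy sketch below only illustrates that greedy selection pattern with a made-up weight (barrier length gained per unit movement cost) on a 1-D strip; it is not the paper's target-circle construction, and the weight definition here is an assumption for illustration.

```python
import math

# Toy greedy barrier extension on a 1-D strip: repeatedly add the candidate
# whose "weight" (length gained per unit movement cost) is largest.  Both the
# weight and the geometry are stand-ins, not the paper's barrier-weight scheme.

def weight(barrier_end, candidate, sensing_radius):
    """Gain in covered length divided by a rough movement cost for a candidate."""
    x, y = candidate
    gain = max(x + sensing_radius - barrier_end, 0.0)
    cost = math.hypot(x - barrier_end, y) + 1e-9
    return gain / cost

def build_barrier(candidates, region_width, sensing_radius=10.0):
    """Greedily extend a barrier from x = 0 until it spans region_width."""
    barrier_end, chosen = 0.0, []
    remaining = list(candidates)
    while barrier_end < region_width and remaining:
        best = max(remaining, key=lambda c: weight(barrier_end, c, sensing_radius))
        remaining.remove(best)
        if best[0] - sensing_radius <= barrier_end:   # keep only candidates that connect
            chosen.append(best)
            barrier_end = max(barrier_end, best[0] + sensing_radius)
    return chosen, barrier_end >= region_width

nodes = [(8, 3), (22, -4), (37, 6), (55, 1), (70, -2), (90, 5)]
barrier, complete = build_barrier(nodes, region_width=100.0)
print("selected nodes:", barrier, "| barrier spans the region:", complete)
```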
Poyang Lake, one of the most frequently flooded regions in China, connects with the Yangtze River and the five sub-tributaries in the local catchment. The lake's hydrological regime is complicated by a complex hydraulic connection and strong river–lake interaction, especially for the extreme hydrological regime. This study analyzes the relationships between the lake level changes and the flow regimes of Yangtze River and local catchment during the flood season and employs a physically based hydrodynamic model to quantify their relative contributions to the development of floods. The study found that the large catchment runoff and Yangtze River discharge were both significant contributors to flood development but that their contributions were unevenly distributed in time and space. The local catchment imposed more influence during the period of April–May and at the middle parts of the lake, and its influence decreased toward the north and south; in contrast, the most remarkable lake level changes were observed in July–August and at the northern lake for the Yangtze River cases, and these changes reduced from north to south. Moreover, Yangtze River imposed far stronger influences on the lake level changes than the catchment runoff and dominated the duration of floods to a great extent. INTRODUCTION Floods cause considerable economic loss and serious damage to towns and farms and are one of the most common natural disasters recorded in the world, especially with their increased frequencies as an estimated impact of global warming (Christensen & Christensen 2003; Frei et al. 2006; Zhang & Li 2007; Nakayama & Watanabe 2008; Garcia-Castellanos et al. 2009; Nie et al. 2012; Gül 2013). It is evident from the literature that the frequency and number of hydro-meteorological hazards (i.e., floods) are on the rise compared with geophysically induced disasters (Ramos & Reis 2002; Krausmann & Mushtaq 2008; Adikari & Yoshitani 2009). Over the last several decades, many countries have suffered from severe flooding, such as the Brahmaputra River in Bangladesh, the Oder and the Vistula in Poland, the Elbe in Germany, the Mekong River in Vietnam, the Menam River in Thailand, the Indus in Pakistan, and the Yangtze River in China (Chowdhury 2003; Gupta & Sah 2008; Yu et al. 2009; Wang et al. 2010; Khan et al. 2011; Yi et al. 2012). The global cost of floods has reached a total of $470 billion since 1980 (Knight et al. 2011). Similar to other countries, China is no exception to recurrent floods due to the strong influence of the East Asian monsoon (Liu & Liu 2002; Yu et al. 2009). Two-thirds of Chinese territory and over half of the total population are affected by a variety of flood events almost every year (Nakayama & Watanabe 2008; Wang et al. 2012); this is especially true in the Yangtze River basin, which is historically one of the most frequently flooded areas in China (Zhao 2000; Cai et al. 2001). Poyang Lake, the largest freshwater lake in China, is located in the middle and lower reaches of the Yangtze River and is one of the few lakes that remains naturally connected to the Yangtze River. During the past several decades, the Poyang Lake region has experienced as many as 17 major flood events, six of which can be categorized as severe floods (i.e., 1954, 1983, 1995, 1996, 1998, and 1999) (Li et al. 2015a). Moreover, it has recently been shown that the frequency and severity of the floods in Poyang Lake have increased since 1990 (Guo et al. 
2008), owing to the southward shift of the major warm season rain bands to the south of the Yangtze River basin and the increased fluctuation of warm season rainfall in Poyang Lake catchment (Hu et al. 2007). The frequent large floods in Poyang Lake have caused extensive damage to the environment and the agricultural economy and have threatened the life of approximately ten million people in the surrounding region (Shankman & Liang 2003; Shankman et al. 2006; Li & Zhang 2015). For instance, a big flood event in 1998 resulted in several cities in the lakeside area being severely flooded, affecting more than 600 thousand people (Min 2002) and resulting in more than$5 billion in economic losses for the Poyang Lake region (Chen et al. 2002). As is well known, explaining the triggering causes and affecting factors of floods is an important prerequisite of flood disaster prevention and mitigation (Nie et al. 2012). This is also indispensable for flood management in Poyang Lake and has raised extensive concern. Numerous studies have been carried out to investigate the triggering mechanism of Poyang Lake floods and their relationships with climatic characteristics and human activities. Usually, the severe flood events in Poyang Lake are mainly ascribed to the abnormal climate variability (Nakayama & Shankman 2013), i.e., during the flood seasons of 1998 and 1954, the average total precipitation was significantly higher than usual in the Yangtze River basin (with the most excessive rainfall of 300 mm and 220 mm, respectively, during June–July) (Nakayama & Watanabe 2008). Shankman et al. (2006) found that the most severe floods in Poyang Lake may have occurred during or immediately following El Niño events. Hu et al. (2007) and Guo et al. (2008) also found that the increase in flood frequency and severity in Poyang Lake in the 1990s was partially attributable to the southward shift of the major warm-season rain bands to the south of the Yangtze River basin. In contrast, Yu et al. (2009) examined the characteristics of historical floods in the Yangtze River basin and found that the intensifying anthropogenic activity in the last century was the key cause for recently human-induced floods. Many studies also noted that the landscape changes related to human activity have resulted in the loss of floodwater storage and were the main causes of an increasing severity of major floods (Yin & Li 2001; Piao et al. 2003; Zhao & Fang 2004; Zhao et al. 2005; Nakayama & Shankman 2013). Statistics indicate that the Poyang Lake has shrunk in volume from 37 billion m3 in the 1950s to 28.9 billion m3 in the late 1990s, with an accompanying decrease in area from 5,160 km2 to 3,860 km2 in this time (Shankman & Liang 2003). Shankman & Liang (2003) ascribed these declines to the land reclamation and levee construction in lake regions. Additionally, the sediment deposition due to deforestation in the five sub-tributary catchments has reduced the total volume of Poyang Lake by 4.8% between 1954 and 1997 (Min 1999). In addition, Hu et al. (2007) explained the occurrence of floods from the aspect of the interaction among the Yangtze River, Poyang Lake, and its catchment. Nakayama & Shankman (2013) investigated the effects of the Three Gorges Dam (TGD) and water transfer project on Yangtze River floods. Similarly, Gao et al. (2013), Guo et al. (2012), and Zhang et al. (2014) also examined the effects of TGD on Yangtze River flow and the hydrological regime of Poyang Lake. 
These studies showed that the Poyang Lake flood risk decreased moderately because the modulated river flow has distinctly weakened its blocking effect (Guo et al. 2012) by reducing the peak discharge of the Yangtze River, i.e., from approximately 7 × 104 m3/s to 4 × 104 m3/s, with a 40% decrease during the flood periods of 2010 (http://www.cjw.com.cn/); however, the model predicted that the TGD might increase the lake stage and flood risk during the spring and early summer months due to the release of water from the dam during this period (Nakayama & Shankman 2013). Despite many studies concentrating more efforts to deal with the occurrence characteristics of severe floods in Poyang Lake region and their associated triggering causes in terms of climate variability and human activities (Min 1999; Cai et al. 2001; Shankman & Liang 2003; Shankman et al. 2006, 2012; Nakayama & Watanabe 2008; Li et al. 2015a), it remains unclear how the Yangtze River discharge and local catchment inflow impact the lake flood stages and their duration. In particular, the effects of the Yangtze River and the Poyang Lake catchment have not been quantified by considering the dominant hydrodynamic processes of the river–lake–catchment system, which is essential to real-time flood hazard prediction in such a complex system (Bates & Anderson 1996; Adhikari et al. 2010). It is necessary to extend the previous studies using scenario simulations to provide a generalized and quantitative interpretation of the influences of both the Yangtze River and the local catchment on the high lake level. Therefore, the objectives of the study are to: (1) analyze the characteristics of the lake level in typical flooding years and in dry years and identify their relationships with the flow regime changes of the Yangtze River and the sub-tributaries in the local catchment; and (2) quantify the effects of the local catchment runoff and Yangtze River flow on the flood stages based on the hydrodynamic model MIKE 21, and simulate its temporal and spatial distribution, as well as the duration changes of the high lake level. STUDY AREA AND DATA Study area Poyang Lake is located in the middle and lower reaches of the Yangtze River, China (28°22′–29°45′N and 115°47′–116°45′E); it receives water flows primarily from the five sub-tributaries in its catchment, i.e., Xiushui River, Ganjiang River, Fuhe River, Xinjiang River, and Raohe River, and discharges into the Yangtze River through a channel in its northern part (Figure 1). Among the five major rivers, the Ganjiang is the largest river in the region: it extends 750 km and contributes almost 55% of the total discharge into Poyang Lake (Shankman et al. 2006). Poyang Lake has an average water depth of 8.4 m and a storage capacity of 276 × 108 m3 when the water level at Hukou is 21.71 m (http://www.poyanglake.net/pyhgk.htm). Generally, the lake water surface has relatively large gradients in dry seasons, i.e., the lake level in the south is 5–6 m higher than in the north (Figure 2(a)), and the water flows from the south and discharges (outflow) into the Yangtze River. During the wet season, the elevated water level of the Yangtze River may raise the northern lake level and block outflow from Poyang Lake and, in some cases, may cause backflow from the Yangtze River to Poyang Lake (Shankman et al. 2006). Figure 1 Location of study area and the distribution of stations. Figure 1 Location of study area and the distribution of stations. 
Figure 2 Variation of average water level in Poyang Lake (a) and runoff inflow from five sub-tributaries and Yangtze River discharge at Hankou (b) during 1960–2010. Figure 2 Variation of average water level in Poyang Lake (a) and runoff inflow from five sub-tributaries and Yangtze River discharge at Hankou (b) during 1960–2010. The total drainage area of the water systems is 16.22 × 104 km2, accounting for 9% of the drainage area of the Yangtze River basin. The topography in the catchment varies from highly mountainous and hilly areas (with the maximum elevation of 2,200 m above mean sea level) to alluvial plains in the lower reaches of the primary watercourses. Poyang Lake catchment has a subtropical wet climate that is characterized by a mean annual precipitation of 1,630 mm for the period 1960–2010 and an annual mean temperature of 17.5 °C. Five sub-tributaries in the Poyang Lake catchment make up the primary water sources of Poyang Lake. The amount of water flowing from the five rivers directly affects the volume and water level of Poyang Lake. Generally, the rainy season in the Poyang Lake catchment begins in April, and the water flows from the local catchment increase quickly from April to June, raising the water level of Poyang Lake (Figure 2(b)). This hydrograph of the Poyang Lake catchment explains the primary features of the first half of the annual variation of water level in Poyang Lake (Hu et al. 2007). From July to September, the runoff inflow decreases sharply; at the same time, the middle reach of the Yangtze River receives its annual peak precipitation, and its discharge increases. The rising discharge and water level of the Yangtze River block the outflow from Poyang Lake, possibly even causing backflow, and further elevates the lake level (Shankman et al. 2006; Hu et al. 2007). This blocking effect dominates the second half of the annual course of the lake level (Hu et al. 2007; Guo et al. 2012). As a result, the Poyang Lake water surface area can exceed 3,000 km2, inundating low-lying alluvial plains surrounding the lake in the flood season (Shankman et al. 2006), but shrinks to <1,000 km2 to form a narrow meandering channel during the dry season (Xu & Qin 1998) and exposes extensive floodplains and wetland areas. Data The observed daily water levels of Poyang Lake at five hydrological stations (i.e., Hukou, Xingzi, Duchang, Tangyin, and Kangshan) are available for the period 1960–2010 and were used to identify the variation characteristics of the lake level and calibrate the hydrodynamic model parameters. The locations of these stations are shown in Figure 1. The records at Xingzi station are selected to stand for the lake level in the study because slight differences of the lake level can be observed at different stations during the flood season and also because Xingzi station is situated on the northern edge of the broad lake and away from the junction of the lake and the Yangtze River (Figure 1). The daily stream flows from the five sub-tributaries in the Poyang Lake catchment were measured at seven hydrological stations, i.e., Qiujin, Wanjiabu, Waizhou, Lijiadu, Meigang, Shizhenjie, and Dufengkeng stations (Figure 1), in the period 1960–2010 to reflect the amount of catchment inflow. Additionally, the water fluxes measured at Hankou station were collected to describe the variations of the Yangtze River flow and examine its effects on the outflow of Poyang Lake. These data have been widely used for different studies previously (Hu et al. 2007; Guo et al. 
2008, 2012; Ye et al. 2011, 2013, 2016; Li et al. 2014; Li & Zhang 2015), and the quality of the data is reliable.

METHODS

Hydrological data treatment

Flood events were considered to occur in this study when the Poyang Lake stage at Xingzi station exceeded the level of 19.0 m, which is also the warning stage for the lake. To quantify the effect of the streamflow from the five sub-tributaries on the lake level, the total runoff from the Poyang Lake catchment to the lake was defined as the sum of the flows measured at the Waizhou, Lijiadu, Meigang, Dufengkeng, Shizhenjie, Qiujin, and Wanjiabu hydrological stations (Figure 1). As the discharge data at Qiujin were missing during 1960–1982, a linear regression against the observed discharge at Wanjiabu station was used to estimate the missing values. Moreover, the concept of the anomaly was used in the study to conveniently reflect the variation of the runoff inflow from the local catchment and the discharge of the Yangtze River; it is defined as the deviation in each month from the average water flow for the study period 1960–2010 (see Equation (1)). The anomaly was also adopted in the analysis of lake level change as follows:

$$\Phi = X_i - \overline{X}_i \qquad (1)$$

where $\Phi$ is the anomaly, $X_i$ is the monthly hydrological variable, and $\overline{X}_i$ is the average value of that calendar month over the study period.

The MIKE 21 model

The hydrodynamic model is a powerful tool to address flow regime changes and the hydraulic connection and interaction in complex river systems. Especially in Poyang Lake, the combined effects of catchment inflows and the interaction with the Yangtze River result in a considerable seasonal variation of some 10 m in the lake water level (Zhang et al. 2014); moreover, the complex flow patterns and hydrodynamic processes must be considered in modeling the lake behavior. Li et al. (2014) simulated the hydrodynamic processes of Poyang Lake by constructing a physically based mathematical model using MIKE 21 (DHI 2007). The model covered an area of 3,124 km2, which was determined by examining the historic lake surface at high water levels. A digital elevation model of the study area used in model construction was generated from the 1998 basic survey by the Jiangxi Hydrological Bureau and was updated with more recently obtained data. The modeling area was discretized into triangular grids that account for the heterogeneity of the lake bottom topography through a variable spatial discretization of 70–1,500 m (Li et al. 2014). The daily catchment inflows from the five sub-tributaries were specified as the upstream boundary conditions in the model, and the downstream boundary condition accounted for the connection of the lake to the Yangtze River and was specified as the daily water level at Hukou station. In the model, the hydraulic roughness (Manning number) was assumed to differ between the flat regions and the main channels, and the initial values ranged from 30 m1/3/s to 50 m1/3/s. A uniform value was assigned to the Smagorinsky factor (Cs) of eddy viscosity for the whole lake domain. Li et al. (2014) calibrated and validated the model against the observed water levels at four gauging stations in the lake (Xingzi, Duchang, Tangyin, and Kangshan) and the discharge at Hukou station for the periods 2000–2005 and 2006–2008, respectively. The Nash–Sutcliffe efficiencies (Ens) for both the calibration and validation periods at all gauging stations ranged from 0.80 to 0.98, and the coefficients of determination (R2) ranged between 0.82 and 0.99.
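As a minimal illustration of these two computations (the monthly anomaly of Equation (1) and the Nash–Sutcliffe efficiency used to judge the MIKE 21 simulations), the following Python sketch assumes the monthly series are held in NumPy arrays; the function names and array layout are ours, not the authors':

```python
import numpy as np

def monthly_anomaly(values, months):
    """Equation (1): subtract from each monthly value the long-term mean of
    that calendar month over the study period (1960-2010 in the paper)."""
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    anomaly = np.empty_like(values)
    for m in range(1, 13):
        sel = months == m
        anomaly[sel] = values[sel] - values[sel].mean()
    return anomaly

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency (Ens) of a simulated series against observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
```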
The high values of these evaluation indexes demonstrated that the model produced excellent agreement with observations and achieved a satisfactory accuracy (for more details of the model structure and simulation results, please refer to Li et al. 2014). The model was therefore considered robust and capable of capturing the variations of the water level, and it was used in this study.

Scenarios of catchment runoff inflow and Yangtze River discharge in typical years

To explore the relative contributions of the catchment and the Yangtze River to the lake level during the flood period, the following scenarios of catchment runoff inflow and Yangtze River discharge in typical years (i.e., 1996 and 2006) were proposed in the study. Scenario S0 represents the observed streamflow from the five sub-tributaries in the Poyang Lake catchment and the observed discharge at Hankou station, and was used as the reference case for comparison. Scenarios S1, S2, and S3 (with the original discharge rates of the Yangtze River and 10%, 20%, and 30% increments of catchment runoff inflow, respectively, for 1996) emphasize the influence of the catchment runoff on the flood stage. Scenarios S4, S5, and S6 (with the original streamflow of the five sub-tributaries in the catchment and 10%, 20%, and 30% reductions of the Yangtze River discharge, respectively, for 1996) emphasize the blocking effect of the Yangtze River. Similarly, for 2006, scenarios S7, S8, and S9 (with Yangtze River discharge increases of 10%, 20%, and 30%, respectively) were adopted to further investigate the blocking effect of the Yangtze River on the lake level during the flood season. The detailed scenario settings are summarized in Table 1. To keep the scenarios physically realistic, the discharge scenarios at Hankou were designed to decrease the observed discharge for the flooding year (1996) and to increase it for the dry year (2006).

Table 1 Summary of scenario settings in the hydrodynamic modeling

Scenario | Change of streamflow from the local catchment | Change of discharge rate of the Yangtze River
S0 | Observed streamflow | Observed discharge at Hankou station
S1, S2, S3 | 10%, 20%, and 30% increments of streamflow from the five sub-tributaries in 1996 | Observed discharge at Hankou station in 1996
S4, S5, S6 | Observed streamflow in 1996 | 10%, 20%, and 30% reductions of Yangtze River discharge in 1996
S7, S8, S9 | Observed streamflow in 2006 | 10%, 20%, and 30% increments of Yangtze River discharge in 2006

The variation of the lake water level at Hukou station, as the downstream boundary condition of the hydrodynamic model, must be input as a known variable in the scenario simulations. For this, the back-propagation neural network (BPNN) method was used to estimate the variation of the lake level at Hukou in each scenario. Li et al. (2015b) simulated the variation of the Poyang Lake water level using artificial neural network techniques.
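To make the scenario construction of Table 1 concrete, the sketch below simply scales the observed daily forcing series by the stated fractions. This is an illustrative simplification: in the study, the Hukou boundary level matching each scenario is then supplied separately by the BPNN of Li et al. (2015b), whose training is described next, and is not reproduced here. The series are assumed to be NumPy arrays of daily values:

```python
import numpy as np

def build_scenarios(inflow_1996, hankou_1996, inflow_2006, hankou_2006):
    """Return the forcing series (catchment inflow, Hankou discharge) for S0-S9
    by scaling the observed daily series as summarized in Table 1."""
    scenarios = {"S0 (1996)": (inflow_1996, hankou_1996),
                 "S0 (2006)": (inflow_2006, hankou_2006)}
    for i, f in enumerate((0.10, 0.20, 0.30), start=1):
        scenarios[f"S{i}"] = (inflow_1996 * (1 + f), hankou_1996)      # larger catchment inflow, 1996
        scenarios[f"S{i + 3}"] = (inflow_1996, hankou_1996 * (1 - f))  # reduced Yangtze discharge, 1996
        scenarios[f"S{i + 6}"] = (inflow_2006, hankou_2006 * (1 + f))  # increased Yangtze discharge, 2006
    return scenarios
```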
In his study, the period 1960–2000 was used for BPNN model training, and the period 2001–2008 was used to test the model's predictive capability; and an acceptable simulation result was received with the Ens of 0.98, the R2 of 0.98, and the root mean square error of 0.58 m. Thus, the model was reliable and directly applied in this study to estimate the variation of the lake level at Hukou for each scenario simulation. The details of the BPNN structures are provided in Li et al. (2015b) and are thus not repeated here. RESULTS Characteristics of catchment runoff and Yangtze river flow in typical flooding and dry years To examine and understand the effects of the Yangtze River and local catchment on the flood stage of Poyang Lake, four years in which a severe flood event occurred, i.e., 1983, 1995, 1998, and 1999, were selected for study. The variations in the total runoff from the catchment, the discharge at Hankou, and their corresponding lake level anomalies during the wet season (April–September) of each year are shown in Figure 3. For comparison, four years with a low lake level in the flood season, i.e., 1963, 1972, 1978, and 2001, were also selected, and their corresponding results are shown in Figure 4. More detailed information for each years is shown in Table 1, including the peak lake level, date of the peak level, duration of the flood event, water flow from the catchment and discharge of the Yangtze River. Figure 3 Variation of the catchment runoff, Yangtze River discharge and corresponding lake level anomaly during April–September of the selected flooding years. Figure 3 Variation of the catchment runoff, Yangtze River discharge and corresponding lake level anomaly during April–September of the selected flooding years. Figure 4 Variation of the catchment runoff, Yangtze River discharge and corresponding lake level anomaly during April–September of the selected dry years. Figure 4 Variation of the catchment runoff, Yangtze River discharge and corresponding lake level anomaly during April–September of the selected dry years. Previous studies (Guo et al. 2008) have noted that the first half of the annual variation in lake level was mainly influenced by the streamflow from the Poyang Lake catchment; this effect is well demonstrated in Figure 3(a) for the 1983 flood. It can be seen that the large runoff inflow continued through the major rainy season, reaching a total runoff of 866 × 108 m3 during April–June, with a positive anomaly of 252 × 108 m3 (Table 2), resulting in a continuous rise in the lake level before the flood season. In July, the water flow from the catchment was also significantly above the usual level, with a positive anomaly of 69 × 108 m3. More importantly, an abnormally large discharge and elevated water level in the Yangtze River in July blocked the outflow from Poyang Lake, which acted to further increase the lake level to 21.77 m on 13 July (Table 1). During the 1995 flood, although the runoff from the catchment in April and May was close to the average for those months, an abnormally large water flow in June caused a high total runoff during April–June as much as 905 × 108 m3, with a positive anomaly of 291 × 108 m3, which was even greater than the positive anomaly in 1983 (Table 1). It is also clear from Figure 3(b) that both the runoff from the catchment and the discharge of the Yangtze River in July were greater than normal, as occurred in the 1983 flood. 
A similar relationship between the flow regimes of the Yangtze River and the local catchment and flood development in Poyang Lake was also observed in 1998 (Figure 3(c)). For the 1999 flood, the only difference from the other studied floods was that the monthly runoff from the catchment between April and June was closer to the average amount (Figure 3(d)), although the total water flow during the major rainy season exceeded the average by 2 × 108 m3 (Table 2).

Table 2 Summary of statistical indices for the flooding and dry years

Year | Peak lake level (m) | Date of peak level | Duration^a (day) | Runoff Apr–Jun (10^8 m^3) | Anomaly | Runoff Jul (10^8 m^3) | Anomaly | Hankou discharge Jul (10^4 m^3/s) | Anomaly | Hankou discharge Aug–Sep (10^4 m^3/s) | Anomaly

Flooding years
1983 | 21.77 | 13 Jul | 50 | 866 | 252 | 201 | 69 | 5.59 | 1.33 | 4.51 | 0.95
1995 | 21.92 | 8 Jul | 37 | 905 | 291 | 216 | 84 | 4.84 | 0.58 | 3.11 | −0.45
1998 | 22.50 | 2 Aug | 94 | 811 | 197 | 240 | 108 | 6.14 | 1.89 | 5.88 | 2.32
1999 | 21.97 | 21 Jul | 73 | 616 | 2 | 217 | 85 | 6.36 | 2.11 | 4.29 | 0.73
1996 | 21.13 | 24 Jul | 45 | 465 | −149 | 148 | 16 | 5.39 | 1.13 | 3.96 | 0.41

Dry years
1963 | 16.22 | 4 Sep | – | 244 | −370 | 58 | −74 | 3.29 | −0.96 | 3.68 | 0.12
1972 | 15.99 | 8 Jun | – | 423 | −191 | 44 | −88 | 2.99 | −1.26 | 2.03 | −1.53
1978 | 17.04 | 18 Jun | – | 518 | −96 | 39 | −93 | 3.38 | −0.87 | 2.65 | −0.91
2001 | 17.03 | 29 Jun | – | 592 | −22 | 87 | −45 | 3.20 | −1.06 | 2.96 | −0.59
2006 | 16.72 | 21 Jun | – | 693 | 73 | 136 | 4 | 3.10 | −1.15 | 1.79 | −1.76

^a Duration of lake level >19 m.

Correspondingly, Figure 4 shows the variation characteristics of the total runoff inflow from the catchment, the discharge of the Yangtze River, and their effects on the lake level anomaly in non-flooding years. In general, the total water flow from the catchment during April–June was smaller than the average, with negative anomalies ranging from −370 × 108 m3 to −22 × 108 m3 (Table 2). In addition, without exception, the discharge of the Yangtze River at Hankou and the runoff inflow from the catchment in July were abnormally low. For example, the discharge anomalies at Hankou station in July were −0.96 × 104 m3/s, −1.26 × 104 m3/s, −0.87 × 104 m3/s, and −1.06 × 104 m3/s in 1963, 1972, 1978, and 2001, respectively, and the runoff inflow anomaly varied from −45 × 108 m3 in 2001 down to −93 × 108 m3 in 1978. As a result, the water level of Poyang Lake declined by 2–4 m, with the maximal lake level no higher than 17.04 m during the flood seasons of these years (Table 2).
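The duration column of Table 2 (footnote a) simply counts the days on which the daily lake stage at Xingzi exceeds the 19.0 m warning level; the same bookkeeping with the 20 m and 21 m thresholds underlies Table 3 later in the paper. A minimal sketch, assuming a daily stage series with matching dates (names and layout are ours):

```python
import numpy as np

def flood_duration_and_dates(stage, dates, threshold=19.0):
    """Days with stage above the warning level, plus the first and last such day."""
    stage = np.asarray(stage, dtype=float)
    above = stage > threshold
    if not above.any():
        return 0, None, None          # e.g. the dry years in Table 2
    idx = np.flatnonzero(above)
    return int(above.sum()), dates[idx[0]], dates[idx[-1]]
```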
Through a comparison of Figures 3 and 4, in combination with the information from Table 2, several common features in the process of flood development in Poyang Lake can be identified: (1) the total runoff from the catchment during April–June was larger than the average and led to a continuous rise in lake level; (2) an abnormally large discharge from the Yangtze River in July elevated the water level and blocked the outflow from Poyang Lake; and (3) the large water inflow from the catchment resulted in a higher lake level in July when the level of the Yangtze River was also high. These features are all important indications for flood development, and if some of them arise, then there is a high probability that a flood may occur in Poyang Lake. The reverse is also true, a flood is inconceivable when none of these conditions arise. Variation of catchment runoff and Yangtze River flow in 1996 and 2006 As mentioned above, flood development in Poyang Lake is affected by both the runoff inflow from the local catchment and the blocking effect of a large discharge of the Yangtze River. The common features have been summarized for the typical flooding years and dry years; however, several years, such as 1996 and 2006 in Table 1, had differing relationships between the total runoff inflow and Yangtze River discharge and lake level variation, compared to other years. Figure 5 shows the variation in the water flow from the catchment, the discharge at Hankou, and the corresponding lake level anomalies during the wet seasons of 1996 and 2006. From Figure 5(a), it can be seen that the total runoff inflow from the catchment in April–June of 1996 was only 465 × 108 m3, with a negative anomaly of −149 × 108 m3, resulting in the lake level before the major flood season being lower than average; however, a flood event still occurred in this year with a maximal lake level of 21.13 m on 24 July (Table 2). Evidently, this flood event can mainly be ascribed to the effect of the Yangtze River water fluxes in July, amounting to a discharge of 5.39 × 104 m3/s at Hankou station. This was nearly as large as that in the 1983 flood period and raised the lake level anomaly from negative to positive, reaching a total positive 1.73 m anomaly. Therefore, the abnormally large river flow of the Yangtze River was the principal driving force for the 1996 flood, blocking the outflow from Poyang Lake and elevating the lake level above the average. Figure 5 Variation of the catchment runoff, Yangtze River discharge, and corresponding lake level anomaly in 1996 and 2006. Figure 5 Variation of the catchment runoff, Yangtze River discharge, and corresponding lake level anomaly in 1996 and 2006. An example of the opposite case is shown in Figure 5(b). It is clear that in 2006, the streamflow from the five sub-tributaries in the Poyang Lake catchment during April–June was very high, reaching 693 × 108 m3 with a positive anomaly of 79 × 108 m3, and was even greater than that in 1999. Nevertheless, such a large runoff inflow did not result in a rise in the lake level owing to the small discharge of the Yangtze River at that time. During the flood season, in particular, the abnormally low river flow, with a negative anomaly of −1.15 × 104 m3/s in July and −1.76 × 104 m3/s in August–September, accelerated the outflow from Poyang Lake and caused the lake level to drop 2–5 m, eventually leading to a serious autumn drought. 
Inspecting Figure 5, it is clear that the discharge of the Yangtze River imposes more influence on the development of floods in Poyang Lake than the runoff inflow from the local catchment. Thus, the strength of the blocking effect of the Yangtze River dominates the severity of the flood to a great extent. Hydrodynamic modeling influences of the local catchment and Yangtze River Hydrodynamic modeling was undertaken to further explore the relative contributions of the local catchment and the Yangtze River to Poyang Lake water level during the flood season of 1996 and 2006. Based on the previous analyses of the lake level (Guo et al. 2008; Li & Zhang 2015), the modeling was carried out from March to September (with March as a warm-up period) to save time and computing resources. The model results for the S0 case in 1996 were compared with the water level observations at Xingzi, Duchang, Tangyin, and Kangshan stations, producing Ens of 0.998, 0.996, 0.986, and 0.992, respectively. The high values of Ens demonstrated that the model produced an excellent agreement with observations and described the variation of water level well. Figures 6 and 7 show the comparison of the simulated water level and its changes in 1996 for every scenario. It is seen that the water level of Poyang Lake rose with the increase of the streamflow from the five sub-tributaries in the local catchment and declined with the decrease of the Yangtze River discharge as expected; the average water level changes were 0.10 m, 0.17 m, and 0.24 m in scenarios S1, S2, and S3, respectively, and −0.50 m, −1.07 m, and −1.58 m in scenarios S4, S5, and S6, respectively. It is also found that the water level changes, regardless of whether these changes were increases or decreases, were distributed unevenly in different months. Specifically, the largest water level changes in scenarios S1, S2, and S3 were observed during April–May (except at Kangshan station) with the average increment of 0.11–0.14 m, 0.21–0.25 m, and 0.31–0.38 m, respectively, but they became small during July–August and were smallest in September. As for scenarios S4, S5, and S6, the seasonal distribution of water level changes was opposite to the former, i.e., the most significant decreases were presented during July–August (with the average decline of −0.58 to −0.78 m, −1.61 to −1.71 m, and −2.59 to −2.61 m in scenarios S4, S5, and S6, respectively), but they were trivial during April–May. Figures 6 and 7 further validate that the streamflow from the Poyang Lake catchment imposed more influence on the lake level during April–May than other periods, while the Yangtze River created a stronger blocking effect during July–August than other periods. Figure 6 Comparison of simulated water level and its changes in 1996 for scenarios S1, S2, and S3. Figure 6 Comparison of simulated water level and its changes in 1996 for scenarios S1, S2, and S3. Figure 7 Comparison of simulated water level and its changes in 1996 for scenarios S4, S5, and S6. Figure 7 Comparison of simulated water level and its changes in 1996 for scenarios S4, S5, and S6. Moreover, the hydrodynamic simulation revealed that the spatial distribution of the water level change was also uneven. Figure 8 shows the variation of the water level changes at different stations in the lake, using scenarios S3 and S6 as examples. 
It is obvious that, during April–June, the increment of the water level in case S3 was more significant at Duchang and Tangyin than at Xingzi and Kangshan, but the decline of the water level in case S6 was the most remarkable at Xingzi and showed a gradual attenuation from Xingzi to Kangshan. Additionally, this pattern could be extracted more clearly from the spatial distribution of the average water level changes, which were derived from the outputs of the hydrodynamic model. As Figure 9(a) shows, the largest water level change in April for scenario S3 was observed at the middle parts of Poyang Lake with an approximately 0.7 m rise, and the increments were generally reduced toward the north and south. Whereas the average water level change in scenario S6 was more remarkable at the northern parts of the lake, the declines reduced from −1.0 m at the northern parts of the lake to −0.1 m at the southern parts (Figure 9(b)). During July–September, almost uniform water level changes were observed at different stations in both scenarios S3 and S6 because the lake surface is almost horizontal during the flood season. Figure 8 Variation of water level changes at different stations in scenarios S3 (a) and S6 (b). Figure 8 Variation of water level changes at different stations in scenarios S3 (a) and S6 (b). Figure 9 Spatial distribution of average water level changes in April for scenarios S3 (a) and S6 (b). Figure 9 Spatial distribution of average water level changes in April for scenarios S3 (a) and S6 (b). In addition, the study was also extended to investigate the duration changes of the high lake level, which resulted from the impacts of the local catchment runoff and Yangtze River flow. Table 3 shows the changes of duration as well as the start/end date of the high lake level at Xingzi station in different scenarios. It is seen that with the increase of streamflow from the local catchment, the duration of the high lake level lengthened, regardless of whether the level was above 19.0 m, 20.0 m, or 21.0 m; further, the date of floodwater receding was delayed for 1–4 days, but the starting date was almost unchanging. In contrast, the decrease of the Yangtze River flow led to a distinctly shorter duration of the high lake level as expected, and some of them were reduced to 0 because the highest lake level did not exceed the threshold level. It also resulted in several days' delay of the floodwater rising and advanced the date of the floodwater receding by at least 3–16 days. Table 3 Changes in duration, start/end date of high lake level in different scenarios ScenariosDuration (day) Start date End date 19 m20 m21 m19 m20 m21 m19 m20 m21 m S0 45 33 13 Jul 16 Jul 22 Jul 26 Aug 17 Aug 26 Jul S1 46 34 13 Jul 16 Jul 22 Jul 27 Aug 18 Aug 29 Jul S2 47 35 13 Jul 16 Jul 21 Jul 28 Aug 18 Aug 29 Jul S3 47 35 10 13 Jul 16 Jul 21 Jul 28 Aug 19 Aug 30 Jul S4 42 22 13 Jul 20 Jul – 23 Aug 11 Aug – S5 20 22 Jul – – 10 Aug – – S6 – – – – – – ScenariosDuration (day) Start date End date 19 m20 m21 m19 m20 m21 m19 m20 m21 m S0 45 33 13 Jul 16 Jul 22 Jul 26 Aug 17 Aug 26 Jul S1 46 34 13 Jul 16 Jul 22 Jul 27 Aug 18 Aug 29 Jul S2 47 35 13 Jul 16 Jul 21 Jul 28 Aug 18 Aug 29 Jul S3 47 35 10 13 Jul 16 Jul 21 Jul 28 Aug 19 Aug 30 Jul S4 42 22 13 Jul 20 Jul – 23 Aug 11 Aug – S5 20 22 Jul – – 10 Aug – – S6 – – – – – – Similarly, in 2006, the simulated water level and its changes in scenarios S7, S8, and S9 are shown in Figure 10. 
It is seen that the increase of the Yangtze River discharge elevated the lake level as expected, and the average water level rose 0.32 m, 0.66 m, and 0.98 m in scenarios S7, S8, and S9, respectively. However, these changes were also distributed unevenly in both time and space, as found in the 1996 scenario simulations. Specifically, the lake level change was stronger during July–August, with an average increment of 0.45–1.31 m, compared to that in April–May (approximately 0.14–0.71 m). Additionally, more notable changes of the lake level were observed at the northern parts of the lake, i.e., the increment reduced from 0.37–1.06 m at Xingzi to 0.06–0.25 m at Kangshan during April–May. Figure 10 Comparison of simulated water level and its changes in 2006 for scenarios S7, S8, and S9. Figure 10 Comparison of simulated water level and its changes in 2006 for scenarios S7, S8, and S9. DISCUSSION An examination of the characteristics of Poyang Lake water level in typical flooding years and a comparison with that in dry years found that the large catchment runoff and Yangtze River discharge were both significant contributors to flood development in Poyang Lake, and their concurrence may more easily trigger floods. Model simulations further revealed that the influence exerted by the catchment was most significant during April–May, when the lake level change ranged from 0.12 to 0.34 m, but the influence of the catchment was trivial (average 0.08–0.23 m) in the flood season; in contrast, the Yangtze River imposed greater influences during July–August, resulting in a lake level change as much as 0.68–2.61 m. This is in agreement with the findings of other studies. Many studies have shown that the water level of Poyang Lake is a result of the joint effects of the local catchment runoff and the Yangtze River discharge and that the Yangtze River imposes a greater influence on the development of floods in Poyang Lake than does the local catchment runoff (Min 2002; Shankman et al. 2006, 2012; Hu et al. 2007; Nakayama & Watanabe 2008; Guo et al. 2012; Lai et al. 2014a, 2014b; Zhang et al. 2014). Shankman et al. (2006) ascribed the high water levels in Poyang Lake during the flood season to a higher Yangtze River stage, and they further noted that a large catchment runoff generated later in summer than normal could increase the probability of lake floods. Hu et al. (2007) also noted that the catchment runoff raised the lake level and enhanced the impact of the local catchment on the lake during the spring–early-summer months, when the Yangtze River had a very low water level. In contrast, during July–September, the Yangtze River exerted frequent and substantial effects on the lake when the Yangtze River experienced its largest annual flows (Hu et al. 2007). This change pattern primarily resulted from the flow regimes of the five sub-tributaries in the local catchment and the Yangtze River; Hu et al. (2007) and Guo et al. (2012) explained that the hydrograph of the Poyang Lake catchment explains the primary features of the first half of the annual variation of the lake level but the second half of the annual course of the lake level is mainly controlled by the discharge of the Yangtze River. Similar conclusions were also reached in a study by Zhang et al. (2014). 
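The scenario-versus-reference differences quoted in this discussion (for example, 0.12–0.34 m in April–May against 0.68–2.61 m in July–August) are straightforward post-processing of paired model runs. A minimal sketch, assuming daily simulated levels from a scenario run and from S0 stored as NumPy arrays with a matching calendar-month index (this is our illustration, not the authors' code):

```python
import numpy as np

def scenario_level_change(level_scenario, level_s0, months, month_window):
    """Mean water-level difference between a scenario run and the reference run S0,
    restricted to the calendar months in month_window (e.g. (4, 5) or (7, 8))."""
    diff = np.asarray(level_scenario, dtype=float) - np.asarray(level_s0, dtype=float)
    sel = np.isin(np.asarray(months), month_window)
    return float(diff[sel].mean())
```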
The hydrodynamic simulation also revealed that the lake level change was the most remarkable at the middle parts of Poyang Lake for the catchment scenarios (S1, S2, and S3), but at the northern parts for the Yangtze River scenarios (S4, S5, and S6) during April–June, and during July–September, the almost uniform water level changes were observed in both scenarios. Lai et al. (2014a) also examined the impacts of alterations in the lake inflow and the Yangtze River flow on water levels in Poyang Lake and found that the Lake inflow alterations caused approximately uniform water level change in Poyang Lake, whereas the Yangtze River alterations mainly affected the northern lake. These findings are in agreement with the findings of the current study, except for the spatial disparities in the catchment scenarios. The possible causes for this difference between the findings include the different scenario designs, the spatial resolution of hydrodynamic simulation, and the time scale of the statistical analysis. Thus, further intensive investigation and a comparative study of an elaborate simulation are necessary in future studies. In addition, several scenario simulations were used in the present study to examine the effects of the local catchment and the Yangtze River. However, the discharge scenarios at Hankou for the flooding year (1996) were designed to decrease the original streamflow, considering the rationality and existence of scenarios in reality when we evaluate the blocking effect of the Yangtze River (S4, S5, and S6). Such a treatment resulted in opposite changes of the lake water level compared with that in the catchment scenarios (S1, S2, and S3). To evaluate the effect of such treatment on the results, we also compared and examined the relationships between the increased and decreased discharge scenario of the Yangtze River with a low scale of change (10%) and the results are shown in Figure 11. The simulation results demonstrated that the increase of the Yangtze River discharge elevated the lake level as expected. Moreover, a similar distribution of the lake level changes with its counterparts (scenario S4) was observed, i.e., the increments of the lake level were significant during July–August but trivial in April–May, and the lake level changes reduced from the northern parts to the southern parts. Therefore, the effect of the opposite scenarios design was weak, and the above conclusions derived from the scenario simulations were conclusive. Figure 11 Comparison of lake level change in the increasing and decreasing discharge scenario of Yangtze River. Figure 11 Comparison of lake level change in the increasing and decreasing discharge scenario of Yangtze River. CONCLUSIONS This paper analyzed and compared the relationships between the water level changes of Poyang Lake and the flow regime changes of the Yangtze River and sub-tributaries in the local catchment in typical flooding and dry years and quantified their relative contributions during the flood season based on the hydrodynamic model MIKE 21. The study demonstrated that the large catchment runoff and Yangtze River discharge were both substantial contributors to flood development in Poyang Lake, and their concurrence may more easily trigger floods; however, a flood is impossible when both of them are low. 
The hydrodynamic simulation revealed that the lake level change, regardless of whether it resulted from the local catchment runoff or the Yangtze River discharge, was distributed unevenly in different months and areas in the lake. The influences exerted by the local catchment were most significant during April–May, when the lake level change ranged from 0.12 to 0.34 m, but were trivial (average 0.08–0.23 m) in the flood season. In contrast, the Yangtze River imposed a greater influence during July–August and caused a lake level change as much as 0.68–2.61 m. At the same time, the water level at the middle parts of Poyang Lake was more sensitive to the local catchment runoff change, but the northern parts were more sensitive to the Yangtze River alteration. In addition, the Yangtze River imposed far stronger influences on the rise and decline of a high lake level than did the local catchment runoff and dominated the duration of the flood to a great extent. This paper adds additional knowledge to the previous studies and is the first study to employ a physically based hydrodynamic model to quantify the relative contributions of the local catchment and Yangtze River to the lake level during the flood season. The outcomes of this study enhance our understanding of the causes of floods in Poyang Lake. The above-mentioned conclusions also indicate that understanding the effects of the East Asian monsoon and prediction of the impact of specific distributions of rainfall, i.e., upstream of the Yangtze River or Poyang Lake catchment only, is necessary for flood prediction, mitigation, and management in Poyang Lake. In addition, efforts should be made to quantify the influences of intensive human activities in the next study. ACKNOWLEDGEMENTS This work is jointly funded by the National Basic Research Program of China (973 Program) (2012CB417003), the National Natural Science Foundation of China (41571023) and the Collaborative Innovation Center for Major Ecological Security Issues of Jiangxi Province and Monitoring Implementation (JXS-EW-00). The authors are grateful to the anonymous reviewers and the editor who helped in improving the quality of the original manuscript. REFERENCES REFERENCES Y. Yoshitani J. 2009 Global Trend in Water-related Disasters: an Insight for Policymakers . UNESCO , Paris , France . P. Hong Y. Douglas K. R. Kirschbaum D. B. Gourley J. R. Brakenridge G. R. 2010 A digitized global flood inventory (1998–2008): compilation and preliminary results . Nat. Hazards 55 , 405 422 . Cai S. Du Y. Huang J. Wu S. Xue H. 2001 Causes of flooding and water logging in middle reaches of the Yangtze River and construction of decision-making support system for monitoring and evaluation of flooding and water logging hazards . Earth Sci. 26 , 643 647 (in Chinese). Chen Y. Xiong W. Wang G. 2002 Soil and water conservation and its sustainable development of the Poyang Lake catchment in view of the 1998 flood of Yangtze River . J. Sediment Res. 4 , 48 51 . Christensen J. H. Christensen O. B. 2003 Climate modelling: severe summertime flooding in Europe . Nature 421 , 805 806 . Danish Hydraulic Institute (DHI) 2007 MIKE 21 Flow Model: Hydrodynamic Module User Guide . Danish Hydraulic Institute Water and Environment , Horsholm , Denmark . Frei C. Schöll R. Fukutome S. Schmidli J. Vidale P. L. 2006 Future change of precipitation extremes in Europe: intercomparison of scenarios from regional climate models . J. Geophys. Res. Atmos. 111 , D06105 . Garcia-Castellanos D. F. Jiménez-Munt I. Gorini C. 
Fernández M. Vergés J. De Vicente R. 2009 Catastrophic flood of the Mediterranean after the Messinian salinity crisis . Nature 462 , 778 781 . Hu Q. Feng S. Guo H. Chen G. Jiang T. 2007 Interactions of the Yangtze river flow and hydrologic processes of the Poyang Lake, China . J. Hydrol. 347 , 90 100 . Khan B. Iqbal M. J. Yosufzai M. A. K. 2011 Flood risk assessment of river Indus of Pakistan . Arab. J. Geosci. 4 , 115 122 . Knight Z. Robins N. Clover R. Saravanan D. 2011 Climate Investment Update . HSBC Global Research , London , UK . Lai X. Huang Q. Zhang Y. Jiang J. 2014a Impact of lake inflow and the Yangtze River flow alterations on water levels in Poyang Lake, China . Lake Reservoir Manage. 30 , 321 330 . Li X. Zhang Q. 2015 Variation of floods characteristics and their responses to climate and human activities in Poyang Lake, China . Chinese Geogr Sci 25 , 13 25 . Li Y. Zhang Q. Yao J. Werner A. D. Li X. 2014 Hydrodynamic and hydrological modeling of the Poyang Lake catchment system in China . J. Hydrol. Eng. 19 , 607 616 . Li X. Zhang Q. Xu C.-Y. Ye X. 2015a The changing patterns of floods in Poyang Lake, China: characteristics and explanations . Nat. Hazards 76 , 651 666 . Liu R. Liu N. 2002 Flood area and damage estimation in Zhejiang, China . J. Environ. Manage. 66 , 1 8 . Min Q. 1999 Evaluation of the effects of Poyang Lake reclamation on floods . Yangtze River 30 , 30 32 (in Chinese) . Min Q. 2002 Analysis on the flood characters of 1990s, Poyang Lake . J. Lake Sci. 14 ( 4 ), 232 330 (in Chinese) . Nakayama T. Shankman D. 2013 Impact of the Three-Gorges Dam and water transfer project on Changjiang floods . Global Planet. Change 100 , 38 50 . Nakayama T. Watanabe M. 2008 Role of flood storage ability of lakes in the Changjiang River catchment . Global Planet. Change 63 , 9 22 . Nie C. Li H. Yang L. Wu S. Liu Y. Liao Y. 2012 Spatial and temporal changes in flooding and the affecting factors in China . Nat. Hazards 61 , 425 439 . Piao S. Fang J. Zhou L. Guo Q. Henderson M. Ji W. Li Y. Tao S. 2003 Interannual variations of monthly and seasonal normalized difference vegetation index (NDVI) in China from 1982 to 1999 . J. Geophys. Res. Atmos. 108 , 4401 . Ramos C. Reis E. 2002 Floods in southern Portugal: their physical and human causes, impacts and human response . 7 , 267 284 . Shankman D. Liang Q. 2003 Landscape changes and increasing flood frequency in China's Poyang Lake region . Prof. Geogr. 55 , 434 445 . Shankman D. Keim B. D. Song J. 2006 Flood frequency in China's Poyang Lake region: trends and teleconnections . Int. J. Climatol. 26 , 1255 1266 . Shankman D. Keim B. D. Nakayama T. Li R. Wu D. Remington W. C. 2012 Hydroclimate analysis of severe floods in China's Poyang Lake Region . Earth Interact. 16 , 1 16 . Wang L. N. Shao Q. X. Chen X. H. Li Y. Wang D. G. 2012 Flood changes during the past 50 years in Wujiang River, South China . Hydrol. Process. 26 , 3561 3569 . Xu G. Qin Z. 1998 Flood estimation methods for Poyang lake area . J. Lake Sci. 10 , 51 56 (in Chinese) . Ye X. Zhang Q. Bai L. Hu Q. 2011 A modeling study of catchment discharge to Poyang Lake under future climate in China . Quatern. Int. 244 , 221 229 . Yi L. Ge L. Zhao D. Zhou J. Gao Z. 2012 An analysis on disasters management system in China . Nat. Hazards 60 , 295 309 . Yin H. Li C. 2001 Human impact on floods and flood disasters on the Yangtze River . Geomorphology 41 , 105 109 . Zhang J. Li N. 
(eds) 2007 Quantitative Methods and Applications of Risk Assessment and Management on Main Meteorological Disasters . Beijing Normal University Press , Beijing , China (in Chinese). Zhang Q. Ye X.-C. Werner A. D. Li Y.-L. Yao J. Li X.-H. Xu C.-Y. 2014 An investigation of enhanced recessions in Poyang Lake: comparison of Yangtze River and local catchment impacts . J. Hydrol. 517 , 425 434 . Zhao Y. 2000 Thinking on the flood disaster in the middle reaches of the Yangtze river . Earth Sci. Front. 7 , 87 93 . Zhao S. Fang J. 2004 Impact of impoldering and lake restoration on land-cover changes in Dongting Lake area, Central Yangtze . AMBIO: A Journal of the Human Environment 33 , 311 315 . Zhao S. Fang J. Miao S. Gu B. Tao S. Peng C. Tang Z. 2005 The 7-decade degradation of a large freshwater lake in Central Yangtze River, China . Environ. Sci. Technol. 39 , 431 436 . This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/licenses/by-nc-nd/4.0/).
# Math Help - Decimal to Feet

1. ## Decimal to Feet

How do I convert 0.24 to feet?

2. Originally Posted by magentarita
> How do I convert 0.24 to feet?

I didn't get the unit of 0.24. Maybe metres?

3. Originally Posted by magentarita
> How do I convert 0.24 to feet?

0.24 what? Without the units of what we are converting from, this is meaningless. If it is 0.24 cm to feet, that would give a different answer than 0.24 inches to feet, which would give a different answer from ...

4. ## ok...

I was told to multiply 0.24 by 100 and that will produce 24 feet. Does this make sense?

5. Originally Posted by magentarita
> I was told to multiply 0.24 by 100 and that will produce 24 feet. Does this make sense?

It depends on what the 0.24 represents. Without units, no, it does not make sense.

6. Originally Posted by magentarita
> I was told to multiply 0.24 by 100 and that will produce 24 feet. Does this make sense?

Please state the original (and complete) question. Without units given, you cannot do this.
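As the replies point out, the conversion is only defined once the unit is known. For illustration (this snippet is not part of the original thread), a short Python check of the possibilities, using standard conversion factors (1 ft = 0.3048 m = 12 in = 30.48 cm):

```python
value = 0.24

print(value / 0.3048, "ft if 0.24 is in metres")       # ~0.787 ft
print(value / 12.0, "ft if 0.24 is in inches")          # 0.02 ft
print(value / 30.48, "ft if 0.24 is in centimetres")    # ~0.0079 ft
print(value * 100, "ft only if 0.24 already means hundreds of feet")
```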
# Radiative cooling of a spin ensemble ## Abstract Physical systems reach thermal equilibrium through energy exchange with their environment, and for spins in solids the relevant environment is almost always their host lattice. However, recent studies1 motivated by observations by Purcell2 have shown how radiative emission into a microwave cavity can become the dominant relaxation path for spins if the spin–cavity coupling is sufficiently large (such as for small-mode-volume cavities). In this regime, the cavity electromagnetic field overrides the lattice as the dominant environment, inviting the prospect of controlling the spin temperature independently from that of the lattice, by engineering a suitable cavity field. Here, we report on precisely such control over spin temperature, illustrating a novel and universal method to increase the electron spin polarization above its thermal equilibrium value (termed hyperpolarization). By switching the cavity input between resistive loads at different temperatures we can control the electron spin polarization, cooling it below the lattice temperature. Our demonstration uses donor spins in silicon coupled to a superconducting microresonator and we observe more than a twofold increase in spin polarization. This approach provides a general route to signal enhancement in electron spin resonance, or nuclear magnetic resonance through dynamical nuclear spin polarization3,4. ## Access options from\$8.99 All prices are NET prices. ## Data availability Source data are available for this paper. All other data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. ## References 1. 1. Bienfait, A. et al. Controlling spin relaxation with a cavity. Nature 531, 74–77 (2016). 2. 2. Purcell, E. M. Spontaneous emission probabilities at radio frequencies. Phys. Rev. 69, 681 (1946). 3. 3. Abragam, A. & Goldman, M. Principles of dynamic nuclear polarisation. Rep. Prog. Phys. 41, 395–467 (1978). 4. 4. Ardenkjr-Larsen, J. H. et al. Increase in signal-to-noise ratio of 10,000 times in liquid-state NMR. Proc. Natl Acad. Sci. USA 100, 10158–10163 (2003). 5. 5. Schweiger, A. & Jeschke, G. Principles of Pulse Electron Paramagnetic Resonance (Oxford Univ. Press, 2001). 6. 6. Budoyo, R. P. et al. Phonon-bottlenecked spin relaxation of Er3+:Y2SiO5 at sub-kelvin temperatures. Appl. Phys. Express 11, 043002 (2018). 7. 7. Astner, T. et al. Solid-state electron spin lifetime limited by phononic vacuum modes. Nat. Mater. 17, 313–317 (2018). 8. 8. Butler, M. C. & Weitekamp, D. P. Polarization of nuclear spins by a cold nanoscale resonator. Phys. Rev. A 84, 063407 (2011). 9. 9. Wood, C. J., Borneman, T. W. & Cory, D. G. Cavity cooling of an ensemble spin system. Phys. Rev. Lett. 112, 050501 (2014). 10. 10. Abragam, A. The Principles of Nuclear Magnetism (Clarenton Press, 1961). 11. 11. Einstein, A. Strahlungs-emission und Absorption nach der Quantentheorie. Verhandlungen der Deutschen Physikalischen Gesellschaft 18, 318 (1916). 12. 12. Haroche, S. & Raimond, J.-M. Exploring the Quantum (Oxford Univ. Press, 2006). 13. 13. Feher, G. & Gere, E. A. Electron spin resonance experiments on donors in silicon. II. Electron spin relaxation effects. Phys. Rev. 114, 1245–1256 (1959). 14. 14. Tyryshkin, A. M. et al. Electron spin coherence exceeding seconds in high-purity silicon. Nat. Mater. 11, 143–147 (2012). 15. 15. Feher, G. Electron spin resonance experiments on donors in silicon. I. 
Electronic structure of donors by the electron nuclear double resonance technique. Phys. Rev. 114, 1219–1244 (1959). 16. 16. Mohammady, M. H., Morley, G. W. & Monteiro, T. S. Bismuth qubits in silicon: the role of EPR cancellation resonances. Phys. Rev. Lett. 105, 067602 (2010). 17. 17. Macklin, C. et al. A near-quantum-limited Josephson traveling-wave parametric amplifier. Science 350, 307–310 (2015). 18. 18. Wang, Z. et al. Quantum microwave radiometry with a superconducting qubit. Preprint at https://arxiv.org/pdf/1909.12295.pdf (2019). 19. 19. Xu, M. et al. Radiative cooling of a superconducting resonator. Phys. Rev. Lett. 124, 033602 (2020). 20. 20. George, R. E. et al. Electron spin coherence and electron nuclear double resonance of Bi donors in natural Si. Phys. Rev. Lett. 105, 067601 (2010). 21. 21. Pechal, M. et al. Superconducting switch for fast on-chip routing of quantum microwave fields. Phys. Rev. Appl. 6, 024009 (2016). 22. 22. Williamson, L. A., Chen, Y.-H. & Longdell, J. J. Magneto-optic modulator with unit quantum efficiency. Phys. Rev. Lett. 113, 203601 (2014). 23. 23. Gely, M. F. et al. Observation and stabilization of photonic Fock states in a hot radio-frequency resonator. Science 363, 1072–1075 (2019). 24. 24. Rauch, W. et al. Microwave properties of YBa2Cu3O7 − x thin films studied with coplanar transmission line resonators. J. Appl. Phys. 73, 1866–1872 (1993). 25. 25. Adrian, F. J. Theory of anomalous electron spin resonance spectra of free radicals in solution. Role of diffusion-controlled separation and reencounter of radical pairs. J. Chem. Phys. 54, 3918–3923 (1971). 26. 26. Wong, S. K., Hutchinson, D. A. & Wan, J. K. S. Chemically induced dynamic electron polarization. II. A general theory for radicals produced by photochemical reactions of excited triplet carbonyl compounds. J. Chem. Phys. 58, 985–989 (1973). 27. 27. Steger, M. et al. Quantum information storage for over 180 s using donor spins in a 28Si ‘semiconductor vacuum’. Science 336, 1280–1283 (2012). 28. 28. Doherty, M. W. et al. The nitrogen-vacancy colour centre in diamond. Phys. Rep. 528, 1–45 (2013). 29. 29. Castle, J. G. & Feldman, D. W. Resonance modes at defects in crystalline quartz. Phys. Rev. 137, A671–A673 (1965). 30. 30. Gayda, J.-P. et al. Temperature dependence of the electronic spin-lattice relaxation time in a 2-iron-2-sulfur protein. Biochim. Biophys. Acta 581, 15–26 (1979). 31. 31. Zhou, Y., Bowler, B. E., Eaton, G. R. & Eaton, S. S. Electron spin lattice relaxation rates for S = 12 molecular species in glassy matrices or magnetically dilute solids at temperatures between 10 and 300 K. J. Magn. Reson. 139, 165–174 (1999). 32. 32. Probst, S. et al. Inductive-detection electron-spin resonance spectroscopy with 65 spins/Hz1/2 sensitivity. Appl. Phys. Lett. 111, 202604 (2017). 33. 33. Weis, C. D. et al. Electrical activation and electron spin resonance measurements of implanted bismuth in isotopically enriched silicon-28. Appl. Phys. Lett. 100, 172104 (2012). 34. 34. Mansir, J. et al. Linear hyperfine tuning of donor spins in silicon using hydrostatic strain. Phys. Rev. Lett. 120, 167701 (2018). 35. 35. Pla, J. et al. Strain-induced spin-resonance shifts in silicon devices. Phys. Rev. Appl. 9, 044014 (2018). 36. 36. Sekiguchi, T. et al. Hyperfine structure and nuclear hyperpolarization observed in the bound exciton luminescence of Bi donors in natural Si. Phys. Rev. Lett. 104, 137402 (2010). 37. 37. Ranjan, V. et al. Pulsed electron spin resonance spectroscopy in the purcell regime. J. 
Magn. Reson. 310, 106662 (2020). ## Acknowledgements We thank P. Sénat, D. Duet and J.-C. Tack for technical support, and are grateful for discussions within the Quantronics group. We acknowledge IARPA and Lincoln Labs for providing a Josephson travelling-wave parametric amplifier. We acknowledge support from the European Research Council through grant no. 615767 (CIRQUSS) and through the Superconducting Quantum Networks project, the Agence Nationale de la Rercherche under the Chaire Industrielle NASNIQ, the Region Ile-de-France via DIM SIRTEQ, the Engineering and Physical Sciences Research Council (EPSRC) through grant no. EP/K025945/1, the Horizon 2020 research and innovation programme through grant no. 771493 (LOQO-MOTIONS), the National Centre of Competence in Reseach ‘Quantum Science and Technology’, a research instrument of the Swiss National Science Foundation, and the ETH Zurich. ## Author information Authors ### Contributions B.A., S.P. and P.B. designed the experiment. J.J.L.M. and C.W.Z. provided and characterized the implanted Si sample, on which B.A. and S.P. fabricated the Nb resonator. B.A. performed the measurements, with help from S.P. and V.R. B.A. and P.B. analysed the data. B.A. and V.R. performed the simulations. M.P. realized and tested the superconducting switch in a project guided by A.W. B.A. and P.B. wrote the manuscript. S.P., V.R., A.W., J.J.L.M., D.V., D.E. and E.F. contributed useful input to the manuscript. ### Corresponding author Correspondence to P. Bertet. ## Ethics declarations ### Competing interests The authors declare no competing interests. Peer review information Nature Physics thanks Stefan Putz and other, anonymous, reviewer(s) for their contribution to the peer review of this work. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Extended data ### Extended Data Fig. 1 Noise power spectral density measurement. a, Frequency dependence of the noise power spectral density S measured at Tphon=840 mK for the hot (red circles) and cold (blue circles) switch configurations. Solid lines are fit (see Methods). The blue dashed line indicates the expected Scold(ω)for α=0. b, Still temperature Tphon dependence of S measured at ω = ω0 (open circles) and at ωω0 = − 2.7 MHz (open triangles) for both hot (red) and cold (blue) configurations. Solid lines are plot of Shot (red) and Scold (blue) with parameters obtained from the frequency dependence fits performed at all Tphon, and with nTWPA = 0.75. Source data ### Extended Data Fig. 2 Temperature dependence of polarization. Equilibrium polarization of transitions 4, − 1 > ↔ 5, 0 > and 4, 0 > ↔ 5, − 1 > measured at B0=62.5 mT (red circles). Several hours are waited at each temperature before recording Ae. Red line is the calculated ΔN(T) for the considered transition at B0=62.5 mT. A second polarisation measurement of the same transitions (black circles) is reported. In this experiment, for each temperature value, B0 is first set to 9.3 mT during 20 min, then it is set to 62.5 mT and finally after 4 min Ae is recorded. The black line is the calculated ΔN(T)for the considered transition at B0=9.3 mT. The polarisation p(T)=$$\tanh (\frac{\hslash {\omega }_{0}}{2kT})$$ of a spin 1/2 is also shown for comparison (green). Ae as a function of time (inset) is measured at T=83 mK and B0=62.5 mT after B0 has been set to 9.3 mT for 20 min. The same data are represented in the main plot with the blue arrow. 
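The spin-1/2 thermal polarisation p(T) = tanh(ħω0/2kT) quoted in the caption above is easy to evaluate numerically. The sketch below is illustrative only: the transition frequency is an assumed placeholder (the excerpt does not state ω0/2π), and the code is not from the paper.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def spin_half_polarization(f0_hz, T_kelvin):
    """Thermal polarisation p(T) = tanh(hbar*omega0 / (2*kB*T)) of a spin-1/2."""
    omega0 = 2 * np.pi * f0_hz
    return np.tanh(hbar * omega0 / (2 * kB * T_kelvin))

# Example with an assumed 7 GHz transition frequency (placeholder value):
for T in (0.01, 0.1, 0.85):
    print(T, "K ->", spin_half_polarization(7e9, T))
```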
Source data ### Extended Data Fig. 3 Simulation of Rabi oscillations. a, Distribution of the spin-cavity coupling g obtained from the spatial distribution of δB1. b, Rabi oscillations measured at B0=62.5 mT and T =850 mK by varying the amplitude of the second pulse in the Hahn echo sequence (blue circles). The pulse amplitude is normalized to the value $${{\mathrm{P}}_{\pi }}^{\frac{1}{2}}$$ corresponding to the maximum in detected signal. The solid green line is the result of the numerical simulation of a spin ensemble described by ρ(g). Source data ## Supplementary information ### Source Data Fig. 1 Cartesian measurement data. ### Source Data Fig. 2 Cartesian measurement data. ### Source Data Fig. 3 Cartesian measurement data. ### Source Data Fig. 4 Cartesian measurement data. ### Source Data Extended Data Fig. 1 Cartesian measurement data. ### Source Data Extended Data Fig. 2 Cartesian measurement data. ### Source Data Extended Data Fig. 3 Cartesian measurement data. ## Rights and permissions Reprints and Permissions Albanese, B., Probst, S., Ranjan, V. et al. Radiative cooling of a spin ensemble. Nat. Phys. 16, 751–755 (2020). https://doi.org/10.1038/s41567-020-0872-2 • Accepted: • Published: • Issue Date:
Article | Open | Published: # Holographic Traction Force Microscopy ## Abstract Traction Force Microscopy (TFM) computes the forces exerted at the surface of an elastic material by measuring induced deformations in volume. It is used to determine the pattern of the adhesion forces exerted by cells or by cellular assemblies grown onto a soft deformable substrate. Typically, colloidal particles are dispersed in the substrate and their displacement is monitored by fluorescent microscopy. As with any other fluorescent techniques, the accuracy in measuring a particule’s position is ultimately limited by the number of evaluated fluorescent photons. Here, we present a TFM technique based on the detection of probe particle displacements by holographic tracking microscopy. We show that nanometer scale resolutions of the particle displacements can be obtained and determine the maximum volume fraction of markers in the substrate. We demonstrate the feasibility of the technique experimentally and measure the three-dimensional force fields exerted by colorectal cancer cells cultivated onto a polyacrylamide gel substrate. ## Introduction Cells exert forces between each other and onto their environment. When cultivated in vitro, cells exert forces onto the culture substrate. These forces are generated by the actin-myosin network, in association with proteins to induce adhesion onto the cell environment. Among them, integrins are responsible for cell/extracellular matrix adhesion, and, cadherins for cell/cell junctions. Cellular forces are not spatially homogeneous; for instance, when cells are cultivated onto a flat substrate, forces mainly occur at localized regions, called focal adhesion sites. These regions involve several tens of proteins1. Both focal adhesion sites sizes and shapes strongly depend on the physiological context. The adhesion stress pattern between neighboring cells is different from that involving the interaction between a cell and the extracellular matrix2. It has been also observed that mechanical properties play a key regulation role in many cellular processes3, not limited to migration. The link between the mechanical phenotype of cells and the onset of diseases (e.g. cancer) is a subject of a considerable interest4; a change in mobility allows a single cell to detach from a primary tumor site, infiltrate adjacent tissues, penetrate the vascular walls and finally colonize competent organs. To understand the roles of specific molecular processes in the mechanical phenotype of cells, it becomes necessary to measure precisely how the expression of specific proteins changes the forces exerted by the cells on their environement. Several techniques have been developed to measure the adhesion forces generated by cells onto their environment: micropipette aspiration5 and flow techniques6 measure the overall value of the forces exerted by a cell in response to an external stimulus. Similarly, several methods have been developed to study the forces exerted by a cell on a soft substrate. They can be classified as follows: (i) The measurement of the deformation of an elastic substrate. These studies, pioneered by Harris7,8, consist in analyzing the wrinkling pattern induced by the application of forces onto a thin elastic silicone sheet. Because there are no simple ways to convert wrinkle patterns into a traction forces map, this method remains qualitative and is not used nowadays. (ii) The force measurements based on growing cells onto an array of pillars, acting as force sensors. 
The measurement of the deformation of each pillar allows the determination of the applied force9,10,11,12. The force can be easily calculated using Hooke’s law for each pillar. While being widely used, this method has significant drawbacks: the non-physiological shape of substrates might affect cellular responses. Moreover, cellular shapes are strongly affected by both the dimensions of the micro-pillars and the mesh size13. (iii) The measurement of the three-dimensional field of deformation of a soft substrate embedded with particles14,15 - Traction Force Microscopy (TFM). From the displacement of the probe particles, forces can be determined. In TFM, the surface force field F(r′) at the surface of the substrate is computed from the elasticity equation, u = GF, where u(r) is the displacement field and G the Green function. As a consequence, an inverse problem has to be solved: the forces at points at the substrate surface, r′, must be computed from the knowledge of the displacement field at a given set of points r inside the elastic substrate. A direct solution of the elasticity equation could be obtained using Singular Value Decomposition of the matrix G but the condition number of G is very high (typically 103). This implies that the addition of force values onto the direction defined by the lowest singular values of G would induce negligible change in the overall displacement u. Therefore, the addition of a small noise to the measured displacement field significantly alters the computed values of the force field; the problem is ill-posed. Several strategies, requiring prior information, have been proposed to solve ill-posed problems (either in real or Fourier spaces16). For instance, regularization techniques consist in selecting a solution among the many possible and indistinguishable solutions of the ill-posed problem by imposing a penalty to solutions that exhibit some property. When calculating the force, one can either assume that the force is highly localized (and so the force is calculated at specific points for Traction Reconstruction with Point Forces (TRPF)17,18 or that the force is distributed on a specific area (focal adhesion). In the latter approach, the density of markers has to be kept high enough to prevent aliasing (which would result in an underestimation of the force). In contrast, TRPF has to be performed at low particles densities. TRPF successfully recovers forces if only particles at a sufficient distance from adhesion points are considered so that dipolar and higher-order terms can be neglected. Obviously, obtained forces only represent an average on the focal adhesion but this averaged quantity (as it would be determined in experiments where microfabricated pillars support cells11) is sufficient to evidence for different mechanical phenotypes. Let us also mention that, for both approaches, the accuracy of particle position might also contribute in the determination of traction peaks. However, current TFM setups operate at high densities (at a few particles per μm2, see below) so that noise field in the displacement field only contributes very little the quality of the reconstruction19. Nowadays, state-of-the art TFM instruments are fluorescence based-devices that aim at determining the fine structure of small focal adhesion. These devices use either discs composed of quantum dots20 or beads having diameters of few tens of nm21. 
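One common way to tame the ill-posedness of u = GF discussed above is zeroth-order Tikhonov regularisation, which penalises large traction magnitudes so that the components associated with the smallest singular values of G no longer blow up with noise. The following sketch is a generic illustration, not the authors' implementation; G (discretised Green-function matrix), u (measured displacements) and the regularisation weight lam are assumed to be supplied by the user:

```python
import numpy as np

def tikhonov_traction(G, u, lam):
    """Regularised solution of u = G F:
    F = argmin ||G F - u||^2 + lam^2 ||F||^2  (zeroth-order Tikhonov)."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ u)
```

The weight lam trades fidelity to the measured displacements against suppression of noise-amplified force components; in practice it is often chosen with an L-curve or cross-validation criterion.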
Electrohydrodynamic nanodrip-printing of quantum dots allows placement of the discs at very specific positions, so that well-defined patterns can be realized (grid size of 1.5 micrometers, printing error of 35 nm). Higher densities can be achieved with beads (2.2 μm−2 for beads of 40 nm in diameter) using Stimulated Emission Depletion Microscopy (STED), which confines the fluorescence emission to a region much smaller than the typical (diffraction-limited) fluorescence spot (Super-Resolved Traction Force Microscopy (STFM)21). While STFM dramatically improves the sensitivity of TFM (albeit currently limited to 2D measurements) and so offers an attractive alternative to current (low resolution) fluorescence-based TFM devices22, it suffers from severe limitations. STFM requires expensive and sophisticated optical setups (which explains why STED is certainly less established than Photo-activated Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM)23 in Biology), is limited to the imaging of thin gels (to reduce optical aberrations21) and, more importantly, can induce photodamage (as high-power depletion lasers are used for periods (hours for TFM) that far exceed those used in conventional STED experiments)24. In studies where it is sufficient to determine forces without the need to resolve focal adhesions (e.g. identifying metastatic and non-metastatic cell lines based on their capability to exert large traction forces25), low particle densities could be used. There, however, it remains crucial to resolve displacements with nm accuracy in all three directions. As the number of collected photons is usually low in fluorescence measurements (i.e. shot noise is severely limiting), the typical accuracy of conventional fluorescence-based TFM devices (about 4 to 8 nm for in-plane and 20 nm for out-of-plane measurements when using Qdots20) would certainly fail to recover forces in all three dimensions. In this paper, we present a novel approach, which consists of monitoring the displacements of non-fluorescent micrometer-sized particles at low spatial density. Using a partially coherent light source (Light Emitting Diode, LED), we analyze the diffraction patterns that originate from the interference between the scattered and the incident rays, and reach nm localization accuracies along all three directions26. To highlight this new method, we present unfiltered traction force maps obtained for a colon carcinoma cell line (SW480) seeded on a polyacrylamide gel. In particular, we report on out-of-plane forces, which are rarely measured in fluorescence-based TFM but are known to play a significant role in both cell adhesion and migration27. ## Displacements of the particles and noise analysis ### Positions of the particles We use robust algorithms to determine the x and y (in-plane) and z (out-of-plane) positions of each particle (Fig. 1(a)). At high Signal to Noise Ratio (SNR) and high magnification (50× and above), these algorithms, which are mainly used in single-molecule experiments (magnetic tweezers28), are capable of determining the position with a precision better than 1/100th of a pixel (in x and y) and below 1 nm in z. To this end, we first compute a 1D cross-correlation (Fig. 1(b)) to determine the centers of the particles. Then, we compute an intensity profile (the average intensity of pixels located at a given distance from the center)29, which is subsequently compared with a calibration table (a minimal sketch of these two steps is given below).
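The two steps just described (cross-correlation centring and radial-profile extraction) can be sketched in a few lines. The following is our own minimal illustration, not the authors' implementation: `img` is assumed to be a cropped image containing a single diffraction pattern, and the sub-pixel refinement is a simple parabolic fit around the correlation peak.

```python
import numpy as np

def find_center_1d(img):
    """Estimate the particle center from the symmetry of the diffraction rings,
    correlating the row/column mean profiles with their mirrored versions
    (sub-pixel position from a parabolic fit; assumes the peak is not at the edge)."""
    def axis_center(profile):
        p = profile - profile.mean()
        corr = np.correlate(p, p[::-1], mode="full")
        k = int(np.argmax(corr))
        c1, c0, c2 = corr[k - 1], corr[k], corr[k + 1]
        delta = 0.5 * (c1 - c2) / (c1 - 2.0 * c0 + c2)   # parabolic interpolation
        return (k + delta) / 2.0                         # symmetry center in pixels
    return axis_center(img.mean(axis=0)), axis_center(img.mean(axis=1))

def radial_profile(img, cx, cy, rmax, nbins=64):
    """Average intensity of pixels located at a given distance from the center."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - cx, y - cy)
    bins = np.linspace(0.0, rmax, nbins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    ok = (idx >= 0) & (idx < nbins)
    summed = np.bincount(idx[ok], img.ravel()[ok], minlength=nbins)
    counts = np.bincount(idx[ok], minlength=nbins)
    return summed / np.maximum(counts, 1)
```

The resulting profile is what gets compared with the calibration table (Look-Up Table) described next.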
This Look Up Table (LUT) is obtained from intensity profiles measured at known and given distances, e.g. by moving the objective lens every 50 nm and averaging a series of images, (Fig. 1(d) top). To obtain an accuracy better than the objective step size, we calculate the squared differences (between the radial profile intensities of the measured particle and those of the LUT) and perform a Least Squares polynomial adjustment (Fig. 1(e)). Discretization errors have to be taken into account (similar errors occur for x and y when sampling is poor, i.e. when the pixel size is not small enough with respect to the diffraction pattern features30). To correct for a possible bias, we follow the ideas of Gosse and Croquette26 and estimate a parameter (a phase computed from the Hilbert transform of the intensity profile), which has a known (quadratic) dependence with the z position (Fig. 1(f)) (see also31). Other approaches have been proposed to reduce discretization errors: Cnossen et al.32 use an an iterative approach and have assumed a linear dependence of the bias with the position. We have found, however, that this approximation was somehow arbitrary and fails to correct for the bias (see Supplementary Information). Finally, and in agreement with previous studies31, we have found that these algorithms are limited by shot noise (so tracking noise scales as 1/$$\sqrt{N}$$, where N is the number of evaluated photons) and that photon noise dominates quantization noise so that it is sufficient to work with 8 bit-images. ### Localization accuracy As the noise level varies with bandwidth, the Allan deviation (AD, which measures the noise level σ when averaging over a given bandwidth 1/τ) is a relevant parameter to estimate the noise level of the instrument33,34. The AD allows to distinguish between different sources of noise. For instance, tracking noise decreases by a factor $$\sqrt{n}$$ when averaging over n measurements. In contrast, thermal drift will cause a monotonic increase in the AD at large enough τ (~0.5 s and above) and environmental noise shows an increase in the AD over a specific range of times only (below a fraction of a second). Assuming only two sources of noise in a given time measurement (tracking noise and thermal drift), we then expect the AD to decrease at low τ (0.5 s and below) and then to increase at higher τ. Note that the AD is expected to be noisy when τ approaches the total measurement time (simply because less data are averaged). Obviously, with the addition of correlated noise (e.g. fans), we may observe different patterns at intermediate τ that simply correlate with the magnitude of the different noise levels. As shown in Fig. 2(a), the AD (measured on a 1 micrometer particle embedded in the gel) is much larger on z (as compared to x or y). This behavior is somewhat expected as we use a LED with a low coherence (~6 μm only, to prevent overlapping of diffraction patterns from adjacent particles; see below). For this light source, the radial profiles (see above) are less spatially extended and poorly defined35. This results in a low performance of the algorithm along z and so tracking noise possibly dominates over environmental noise (in the range 0.05 to 0.5 s). In contrast, cross-correlation gives satisfying results at low SNR30 and an opposite behavior is observed in x and y. At high τ (in the range 0.8 to 2 s), the AD shows an increase in all directions that can then be correlated with thermal drift. 
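The Allan deviation used throughout this analysis is itself a short computation. A minimal sketch (ours, not the authors' code), assuming a one-dimensional position trace `x` sampled at a fixed rate `fs`:

```python
import numpy as np

def allan_deviation(x, fs, taus):
    """Non-overlapping Allan deviation of a position trace x sampled at fs (Hz),
    evaluated at averaging times taus (s)."""
    x = np.asarray(x, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau * fs))                 # samples per averaging window
        n = len(x) // m
        if m < 1 or n < 2:
            out.append(np.nan)
            continue
        means = x[: n * m].reshape(n, m).mean(axis=1)      # window averages
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

# White (tracking) noise makes the deviation fall as 1/sqrt(tau); a slow drift
# makes it grow again at large tau, which is the behavior discussed above.
```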
That drift (and correlated noise) can, however, be significantly reduced with fiducial markers, by subtracting the position of reference particles fixed on the glass surface from the measured positions (Fig. 2(b)). As expected, random (tracking) noise can only be reduced by averaging, which explains why a doubling of the AD is observed at low τ when calculating position differences (z direction). When calculating such differences, we observe that averaging the positions obtained from single images gives similar results to averaging the images and then calculating a position (Fig. 2(b), disks). For computational reasons, we have chosen the latter approach. Note finally that there exists a region, from 2 to 6 micrometers below the focus, where the tracking along z is more accurate (Fig. 2(b), inset). This is in agreement with previous results32 and should be attributed to the fact that the slope of the phase difference between adjacent LUT planes (see above) depends on the distance from the focus. Again, the cross-correlation is more robust and the x and y positions show almost no dependence on the particle position along z. Altogether, the results of this analysis show that we can measure the displacements of probe particles with an accuracy of σ xy  ~ 1 nm and σ z  ~ 3 nm, an order of magnitude better than what has been reported in recent, state-of-the-art fluorescence-based TFM measurements. ### Spatial resolution The displacement field is measured at randomly positioned particles. To increase the resolution of the displacement field, one has to increase the number of particles per unit volume that can be tracked. This concentration is limited by the volume of the diffraction pattern of each particle, which depends on the size and optical index of the particles as well as on the wavelength and the spectral width of the light source. As it may be difficult to resolve the positions of particles whose diffraction patterns intersect, we expect the optimum particle concentration to be of the order of the ratio between the volume of a particle and that of the diffraction pattern. Assuming a cone (10 μm for the height and 6 μm for the base diameter) and a particle diameter of 1 μm, the optimum volume fraction is expected to be $$c_{max}=\frac{\tfrac{4}{3}\pi\,(0.5)^{3}\,\mu m^{3}}{\tfrac{1}{3}\pi\,3^{2}\,\mu m^{2}\cdot 10\,\mu m}\approx 0.0056=0.56\,\%$$. Experimentally, we determine the optimal particle concentration as follows: 800 frames at different z positions are acquired, using a step size Δz = 25 nm between two frames. This ensemble of images is divided into two subsets: the first one contains images obtained at positions 2iΔz; the second contains images acquired at (2i + 1)Δz (i = 0:399). The first set is used as a LUT (with a step size Δz LUT  = 50 nm) for tracking particle positions from the second set of images. As the true positions of the tracked images (corresponding to (2i + 1)Δz) are known, we can determine whether they are correctly tracked relative to the other subset of images. This procedure mimics what is obtained in a real experiment, as the tracked beads are more likely to be found in between two LUT planes. We then select 8 planes from the tracked-planes subset, corresponding to a reasonable number of planes acquired in a real experiment. We compute the x, y and z positions of the particles (the geometric estimate of c max above is checked numerically in the short sketch below).
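As a quick aside, the geometric estimate of c max given above can be reproduced numerically (our own check, using the dimensions quoted in the text):

```python
import numpy as np

r_bead, r_cone, h_cone = 0.5, 3.0, 10.0            # micrometres
v_bead = 4.0 / 3.0 * np.pi * r_bead ** 3            # ~0.52 um^3 (bead volume)
v_cone = 1.0 / 3.0 * np.pi * r_cone ** 2 * h_cone   # ~94.2 um^3 (diffraction cone)
print(v_bead / v_cone)                              # ~0.0056, i.e. ~0.56 %
```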
A particle is considered to be successfully tracked when the following conditions are met: (i) the exact same x and y positions (±11 nm, which is one tenth of a pixel or about 3 standard deviations) have to be obtained for at least two tracking planes and (ii) the z position should be found within the same LUT intervals (Δz). Figure 2 shows the result of this analysis. Due to the increasing number of diffraction pattern overlaps, the relative fraction of successfully tracked particles, ν tracked , defined as the ratio between the number of tracked particles over their total number, decreases with the volume fraction (Fig. 2(c), squares). The volume fraction of successfully tracked particles, ϕ tracked , increases with particle concentration and saturates when the diffraction patterns from different particles overlap (Fig. 2(c), circles). This results in a maximum volume fraction of successfully tracked particles, ϕ max  = 0.1%, that we define as the optimal particle volume fraction. Obviously, the fraction of successfully tracked particles depends on the number of planes that are imaged (Fig. 2(d)). When the number of tracking planes decreases, more particles are not detected as the probability to track a particle in the optimal tracking region d z (from 2 to 6 μm below the focus, Fig. 2(b), inset) decreases. The above analysis sets the maximum spatial resolution of our apparatus: the maximum volume fraction of successfully tracked particles is ϕ tracked  = 0.037%, for an optimum concentration of particles ϕ = 0.13%, and 40 tracking planes. This corresponds to an average distance between the centers of the particles of 6.7 μm. Nevertheless, and to reduce computation time, we will image only four planes and use a particle concentration ϕ of 0.1. Under these conditions, the volume fraction of successfully tracked particles is 0.027% and the average interparticle distance is 7.5 μm. Note that an appropriate patterning of beads, preventing overlapping of in-plane and out-of-plane diffraction fringes, would allow a larger fraction of beads to be tracked36. Because our algorithm does not require a high density of beads, the fact that a large fraction of beads cannot be tracked (roughly a factor 4 when knowing the true position of beads and setting a cut-off at one standard deviation from the true position) does not represent a significant obstacle to the successful reconstruction of traction forces. Two key properties of tracking techniques in TFM are their ability to track beads underneath cells and their accuracy. Fluorescence techniques suffer from the sensitivity of cells to the relatively large intensities necessary to excite the tracker bead’s fluorescence and from the intrinsic cell fluorescence. Here, the diffraction patterns are altered by refraction of light by the cellular organelles. To determine whether tracking is influenced by the presence of the adhered cell, we compute the visibility (defined as the difference in the radial profile between the maximum of the first peak and the minimum of the first valley35) for a series of beads (14) that may be below or out of the cell during an experiment (Fig. 3). As several planes can contribute to tracking (i.e. resulting in an identical index in the LUT), we have chosen to use the maximum of the obtained visibilities when a particle is successfully tracked. 
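The visibility criterion itself is easily evaluated from a radial profile. A minimal sketch (ours), assuming `profile` is the radial intensity profile computed as in the tracking step:

```python
import numpy as np
from scipy.signal import argrelextrema

def visibility(profile):
    """Difference between the first maximum and the first following minimum of the
    radial intensity profile (larger values = better-defined diffraction fringes)."""
    profile = np.asarray(profile, dtype=float)
    maxima = argrelextrema(profile, np.greater)[0]
    minima = argrelextrema(profile, np.less)[0]
    first_max = maxima[0]
    first_min = minima[minima > first_max][0]
    return profile[first_max] - profile[first_min]
```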
Comparing the visibility distributions obtained when the particles are either below or not below the cell allows us to determine whether the accuracy (which correlates with the visibility35) is modified by the presence of the cell. As shown in Fig. 3, the obtained distributions (light grey) result in different medians (vertical lines, light grey). We find values of (77 ± 2) and (66 ± 2) (0.95 confidence interval) for the estimates of the means, and a Wilcoxon Rank-Sum test indicates that the distributions are statistically different (p-value less than 1.2 · 10−12). In addition, the probability of successfully tracking a bead depends on its location. We find values of (0.32 ± 0.03 and 0.26 ± 0.07) (0.95 confidence interval) for the estimates of the probabilities of tracking a particle when not below and below a cell, respectively. Again, these values are statistically different (p-value of 2.6 · 10−7, chi-squared test). When comparing the overall visibilities (the maximum of the visibilities determined at the four different tracking planes, independently of the success of the tracking) with the previously obtained values (when tracking is successful), we found no statistical difference when particles are not below the cell (p-value larger than 0.14, Wilcoxon Rank-Sum test) but a statistical difference when particles are below the cell (p-value less than 2.3 · 10−16, Wilcoxon Rank-Sum test). This finding also indicates that the presence of the cell indeed lowers the visibility, but that when a particle passes our tracking criteria, the visibility is similar below and not below the cell, indicating that the accuracy on the tracker bead movements is the same whatever the relative position of the beads and the cell. ## Computation of the force field The deformation of the gel substrate is assumed to be small enough so that linear elasticity theory can be used. The traction force field is calculated from the measured displacement field by inverting the elasticity equation. We need to solve u = GF, where u = (u x (r(1)), u y (r(1)), u z (r(1)), u x (r(2)), u y (r(2)), …) and F = (F x (r′(1)), F y (r′(1)), F z (r′(1)), F x (r′(2)), F y (r′(2)), …) are 1D vectors containing the displacement field, measured at positions r inside the gel, and the force field, evaluated at positions r′ at the gel surface. G is a matrix. For N displacement points and M force points, the size of the (three-dimensional) displacement vector is 3N, the size of the force vector is 3M, and the Green matrix has a size of 3N × 3M. When the substrate is thick enough (larger than the characteristic depth of the deformation induced by cell adhesion), the elastic medium can be considered as semi-infinite and the Green function is that of a semi-infinite medium37, given by Boussinesq. The Poisson ratio of polyacrylamide gels being close to 0.5, the elements of the matrix g are given by: $$g_{kl}(\mathbf{R})=\frac{3}{4\pi E R^{3}}(\delta_{kl}R^{2}+R_{k}R_{l})$$ (1) where E is the Young’s modulus of the medium, R = r − r′ is the vector joining the force point to the displacement point, and R = |R| its norm. Note that taking a value of 0.5 for the Poisson ratio is an approximation, which is commonly made in TFM measurements. However, an error in its determination could affect the estimated forces. One possibility would be the direct determination of the ratio using new techniques such as two-layer elastographic TFM experiments38.
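Equation (1) translates directly into code. The following is our own minimal sketch of a single Boussinesq block, relating one force point at the gel surface to one displacement point inside the gel (Poisson ratio of 0.5 assumed, as above):

```python
import numpy as np

def boussinesq_block(r_disp, r_force, E):
    """3x3 block g_kl(R) = 3/(4*pi*E*R^3) * (delta_kl*R^2 + R_k*R_l), Eq. (1),
    for a displacement point r_disp (inside the gel) and a force point r_force
    (at the gel surface); E is the Young's modulus."""
    R = np.asarray(r_disp, dtype=float) - np.asarray(r_force, dtype=float)
    Rn = np.linalg.norm(R)
    return 3.0 / (4.0 * np.pi * E * Rn ** 3) * (np.eye(3) * Rn ** 2 + np.outer(R, R))
```

The full Green matrix of Eq. (2) is then assembled from these 3 × 3 blocks over all pairs of points, as described next.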
The Green matrix of the entire system, which relates the whole displacement field to the force field, is constructed by blocks consisting of the matrices g for all possible pairs of points: $$G_{ij}(\mathbf{r}^{(1)},\ldots,\mathbf{r}^{(N)},\mathbf{r}'^{(1)},\ldots,\mathbf{r}'^{(M)})=g(|\mathbf{r}^{(i)}-\mathbf{r}'^{(j)}|)$$ (2) The condition number of G is larger than 10³ and the inversion of the elasticity equation is an ill-posed problem. Therefore, some regularization is required. As stated previously, regularization consists of adding constraints that filter out solutions that do not fulfill a priori conditions39. Here, we use Tikhonov regularization40, for which the constraint consists of introducing an expected solution F0. The sum of two norms is minimized: the residual and the deviation of the calculated solution from the expected one. It is given by: $$\mathbf{F}_{reg}=\min_{\mathbf{F}}\,(|G\mathbf{F}-\mathbf{u}|^{2}+\lambda^{2}|\mathbf{F}-\mathbf{F}_{0}|^{2})$$ (3) Here λ is the regularization parameter, which weights the regularization term |F − F0|2. We use the L-curve criterion41, which is a log-log plot of the residual norm |GF − u| as a function of |F − F0| for different λ. This plot exhibits an L shape, and its corner determines the balance between data agreement and regularization. The value of the regularization parameter λ that corresponds to this corner is chosen for the regularization procedure. In the case of TFM, it is difficult to predict a force field F 0 in advance, and the main constraint is that the traction forces should not be unreasonably large42. We thus perform a zero-order Tikhonov regularization, F 0  = 0. In this case, equation (3) reduces to F reg  = min F (|GF − u|2 + λ2|F|2). The regularization is performed with a MatLab routine written by P.C. Hansen43. Although the displacement field is measured at low spatial density, the accuracy of the measured displacements allows the force field to be reconstructed when the number of points at which the force is computed is approximately equal to the number of points at which the displacement field is measured. In Fig. 4, numerical simulations of force field reconstruction are performed. A single point force is applied at the surface of the gel, the displacement field is calculated at N b points inside the gel, and random Gaussian noise is added to the computed displacements. The force field is then reconstructed, and the difference between the applied and the reconstructed force fields is plotted as a function of N b for different noise amplitudes. When the number of points at which the force field is computed is equal to N b , the error on the reconstructed force field is lower than 20%. This accuracy is similar to that obtained in the most commonly used regime, where the bead displacement accuracy is lower but the density of markers is higher, as achieved with fluorescent particles18. ## Results Adhesion experiments are performed using the colorectal cancer cell line SW480 grown on the polyacrylamide gel (see Materials and Methods). The positions of the particles are tracked for 10 hours. The time step between two measurements is δt = 60 s. Radial profiles of the particles at reference positions are acquired after the cells are injected inside the chamber and before they start to adhere to the substrate.
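Before turning to the measured force fields, the regularized inversion described above can be sketched numerically. This is our own illustration, not the published routine43: the Green matrix is assembled from the Boussinesq blocks of Eq. (1) (reusing the `boussinesq_block` helper sketched earlier), its singular values expose the large condition number, and zero-order Tikhonov solutions are computed for a set of trial λ, from which an L-curve corner can be chosen.

```python
import numpy as np

def assemble_green_matrix(disp_points, force_points, E):
    """Build the (3N x 3M) Green matrix of Eq. (2) from the 3x3 blocks of Eq. (1)."""
    N, M = len(disp_points), len(force_points)
    G = np.zeros((3 * N, 3 * M))
    for i, r in enumerate(disp_points):
        for j, rp in enumerate(force_points):
            G[3 * i:3 * i + 3, 3 * j:3 * j + 3] = boussinesq_block(r, rp, E)
    return G

def tikhonov_zero_order(G, u, lambdas):
    """Zero-order Tikhonov solutions F(lambda) = argmin |GF - u|^2 + lambda^2 |F|^2,
    computed through the SVD of G, plus the norms needed to draw the L-curve."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    cond = s[0] / s[-1]                 # typically > 1e3: the inversion is ill-posed
    beta = U.T @ u
    sols, res_norm, sol_norm = [], [], []
    for lam in lambdas:
        F = Vt.T @ (s / (s ** 2 + lam ** 2) * beta)   # Tikhonov filter factors
        sols.append(F)
        res_norm.append(np.linalg.norm(G @ F - u))
        sol_norm.append(np.linalg.norm(F))
    # The L-curve is log(res_norm) vs log(sol_norm); lambda is taken at its corner.
    return cond, sols, np.array(res_norm), np.array(sol_norm)
```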
These reference profiles give us access to the positions of the tracking particles in the absence of applied forces, which defines the mechanical reference state. The force field at the surface of the gel is calculated from the measured particle displacements using the procedure described above. It should be stressed that we do not apply any mathematical treatment, such as interpolation of particle positions, interpolation of the force field or smoothing of the computed force field. Moreover, no a priori assumptions are made concerning the points of application of the forces: they are computed on a quadratic grid with a 3 μm mesh size. The spatial resolution of the force field is solely determined by the particle concentration and by the accuracy of the measurement of their displacements. In particular along the z direction, this allows for the precise measurement of all the components of the force field at the surface of the gel. We have found that cells exhibit two phenotypes. First, cells may exhibit round shapes. In this case, cells exert large forces around their periphery; this results in simultaneous pushing and pulling of the substrate in the adhesion region (Fig. 5(a,c)). The applied pressure in the center of the cell is smaller than at its boundary, so that the overall sum of the forces vanishes. More precisely, computing the normalized sum of the forces over the entire force field, $$\delta=\frac{|\sum_{k=1}^{N}\vec{F}(\vec{r}_{k})|}{\sum_{k=1}^{N}|\vec{F}(\vec{r}_{k})|},$$ (4) we find that the average value of δ over the round cell shapes is δ = 0.013. Second, cells can adopt an elongated geometry (Fig. 5(a,c)). Here, two stress peaks are observed at opposite poles of the cell. One of the force peaks is directed into the substrate whereas the other is directed out of the substrate (Fig. 5(b)). The amplitudes of these force peaks are such that the sum of all forces nearly vanishes: δ = 0.085. In other words, the cell pushes the gel at one of its poles and pulls at the opposite extremity. Interestingly, for both shapes, the z-component of the forces exerted by the cell onto the substrate is of the same order of magnitude as the shear forces. If one plots the amplitude of the normal forces as a function of the tangential forces, with both shapes taken into account (Fig. 6(a)), a linear relation is obtained (slope of 1.09). This indicates that tangential and normal forces are comparable in magnitude. A similar behavior has been reported for Dictyostelium cells27, although the normal component of the force was slightly smaller than the tangential one (slope 0.72), and for mammalian cells (fibroblasts)44. For an elongated cell, the normal component of the applied force has a dipolar behavior. A similar analysis can be applied to the tangential component of the force field. Following Tanimoto’s approach45, let us consider the first non-zero moment of a multipolar expansion of the force field, the dipolar term: $$M_{ij}=\sum_{k=1}^{N}x_{i}(\vec{r}_{k})\,F_{j}(\vec{r}_{k})$$ (5) where the sum is taken over all N positions of the force vectors underneath the cell; x i and F j are the ith and jth components of the positions with respect to the cell center and of the measured force, respectively (i = 1 and 2 designate the x and y axes). The total torque vanishes and the matrix M ij is thus symmetric and diagonalizable. Let us define the major dipole as the eigenvector with the largest eigenvalue.
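Equations (4) and (5) are direct to implement. A minimal sketch (ours), assuming `positions` holds the in-plane coordinates of the force points relative to the cell centre and `forces` the corresponding force vectors, row by row; taking the eigenvalue of largest magnitude as the major dipole is our reading of the definition above:

```python
import numpy as np

def force_balance(forces):
    """delta of Eq. (4): |sum F| / sum |F|, close to 0 for a balanced force field."""
    forces = np.asarray(forces, dtype=float)
    return np.linalg.norm(forces.sum(axis=0)) / np.linalg.norm(forces, axis=1).sum()

def force_dipole(positions, forces):
    """Dipole matrix M_ij of Eq. (5) and its major dipole (eigenvector of the
    largest-magnitude eigenvalue; a negative eigenvalue indicates contraction)."""
    M = np.asarray(positions, dtype=float).T @ np.asarray(forces, dtype=float)
    M = 0.5 * (M + M.T)          # symmetrise (zero net torque, as stated above)
    w, v = np.linalg.eigh(M)
    k = int(np.argmax(np.abs(w)))
    return M, w[k], v[:, k]
```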
The eigenvalue associated with the major dipole is negative, corresponding to a contractile behavior of the cell along this direction. For elongated cells, the contraction axis is correlated with the shape anisotropy of the cell itself. For each of these force fields, defining the angle α between the cell elongation axis and the major dipole axis, we observe that the histogram exhibits a maximum for small α values (Fig. 6(b)). This indicates that the force axis lies along the long axis of the cell. The coexistence of two morphologies has already been reported for SW480 cells. Their mechanical properties have been studied46: the Young modulus of round-shaped cells (500 Pa) was found to be smaller than that of elongated cells, and the adhesion of cells onto an Atomic Force Microscope cantilever was found to be independent of the cellular shape. Our results constitute the first study of the adhesion pattern for each cellular shape and show that, although the elastic properties of elongated and round SW480 cells are similar, their adhesion patterns strongly differ. ## Conclusion Here, we have introduced a new TFM approach, which uses non-fluorescent particles. This technique makes it possible to track micrometer-sized particles (along all three directions) with a localization accuracy that cannot be achieved using state-of-the-art (fluorescence-based) TFM setups. This low tracking noise (~nm) allows force maps, including the normal component of the forces, to be successfully recovered at low spatial resolutions (2D density smaller than 0.1 particles per μm2). This technique is still new and further improvements are expected. Sub-nm localization accuracies could be obtained using a faster camera (capable of nearly kHz acquisition at full frame) and higher-power light sources such as superluminescent diodes (simply because averaging decreases tracking noise47), which also offer excellent image quality35. Here, the typical extension of the diffraction patterns could be controlled along the x and y axes by using spatial filtering to select low-frequency components, and along the z axis by confining the beads to one plane only. Assuming a power of 1 mW and an illuminated region of 100 × 100 μm2, the flux is 10 W per cm2 (which is sufficient to observe well-defined diffraction patterns at 0.1 kHz31), and the typical dose is about 1 J per cm2 for 100 images in 1 s. Repeating the acquisition every minute for 10 hours, the total dose is 600 J per cm2 and so would not damage cells (using a wavelength larger than 600 nm)24. Obviously, fluorescence imaging is also capable of tracking nanobeads with nm accuracy48. This, however, requires large integration times (averaging 10 images over 150 ms allows for an accuracy of about 1 to 2 nm in all three directions, i.e. roughly an order of magnitude higher than what is expected using non-fluorescent measurements) and much higher intensities (about 100 times larger), which would result in potential photo-damage and photo-bleaching effects. Finally, it also remains possible, using Mie scattering theory, to track particles when diffraction patterns overlap49, and so to reach very large particle volume fractions. We believe that our new approach should stimulate new theoretical investigations in order to optimize both volume fractions and accuracy parameters. ## Materials and Methods ### TFM Setup We use a home-built microscope. Bead images (8 bits) were acquired with a 2048 × 2048 pixel CMOS camera (acA2040-25 gm, Basler) that has a saturation capacity of 11.9 ke− and a frame rate of 25 frames per second.
An oil-immersion objective (100X, NA 1.25, Zeiss) was mounted on a piezoelectric flexure objective scanner (P-721, Physik Instrumente) and used to image the gel at different positions along the optical axis. A lens in front of the camera sets the magnification to about 50×. To maintain a relatively low coherence (coherence length of about 6 micrometers), we use a Light Emitting Diode (M595L3, Thorlabs) and a band-pass filter (FF01-697/75-25-D, Semrock). To minimize temperature gradients, the stainless steel microscope stage is thermally isolated from the optical table with ceramic legs. Experiments were performed at T = 37 °C (TempController 2000-2, Pecon GmbH) under a 5% carbon dioxide atmosphere (CO2-Controller 2000, Pecon GmbH). ### Image Acquisition Unless otherwise specified, 20 images were acquired at 20 Hz every minute and then averaged. To correct for the difference in refractive index between oil and water, the z positions were multiplied by a factor of 0.82 ± 0.0150. Note that this experimental value (obtained by measuring the thickness of different flow cells) deviates from the ratio of indices (1.33/1.515 = 0.88, assuming a low NA) but is in agreement with a model proposed by Visser51. ### Preparation of Activated Coverslips We used 35 mm glass-bottomed Cell Culture Dishes (500027, Porvair). To covalently attach the polyacrylamide gel onto the glass, we used a procedure similar to that described in52. Glass surfaces were cleaned with NaOH, incubated with a 0.5% EtOH solution of 3-Aminopropyltriethoxysilane (440140, SIGMA) and then immersed in a 0.5% Glutaraldehyde solution (G6257, SIGMA). Intensive rinsing with either H2O or EtOH was performed between all steps. ### Polyacrylamide gel fabrication Gels of 80 micrometer thickness were polymerized onto the functionalized glass using the following protocol52: a solution of acrylamide (5%; 1610142, Bio-Rad) and bis-acrylamide (0.05%; 1610140, Bio-Rad) was mixed with 1 micrometer diameter polystyrene particles (07310, Polysciences) to yield a gel with a Young modulus of E = 0.45 kPa52. The concentration of particles was adjusted to obtain a volume fraction of about 0.1%. Polymerization was initiated by Ammonium Persulfate (A3678, SIGMA) and Tetramethylethylenediamine (T9281, SIGMA). After complete polymerization (about 30 minutes), Collagen I (0.2 mg/ml in Acetic Acid; A10483, Life Technologies) was cross-linked to the gel surface using Sulfo-SANPAH (1 mM; BC38, G-Biosciences). Photoactivation was performed with UV light and cross-linking was done overnight. The cell culture dishes were then stored in Phosphate-Buffered Saline (79382, SIGMA) at 4 °C. ### Cell culture SW480 cells were obtained from ATCC and grown in DMEM (Dulbecco’s modified Eagle’s medium; Life Technology) with 10% fetal bovine serum (Life Technology, Germany) at 37 °C in a humidified atmosphere with 5% CO2. Mycoplasma contamination was tested for and found negative using PlasmoTest (InvivoGen). Cells were seeded on the gel-covered slides at a concentration of 50 000 cells/ml to avoid confluence and allow individual cell measurements. Cells were maintained at 37 °C and 5% CO2 during measurements using a dedicated chamber. Note that our procedure is not compatible with some protocols used in macroscopic cell culture but is similar to protocols used in microfluidics53. ### Data availability The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Wolfenson, H., Henis, Y. I., Geiger, B. & Bershadsky, A. D. The heel and toe of the cell’s foot: A multifaceted approach for understanding the structure and dynamics of focal adhesions. Cell motility and the cytoskeleton 66, 1017–1029 (2009). 2. 2. Cukierman, E., Pankov, R., Stevens, D. R. & Yamada, K. M. Taking Cell-Matrix Adhesions to the Third Dimension. Science 294, 1708–1712 (2001). 3. 3. Discher, D. E. Tissue Cells Feel and Respond to the Stiffness of Their Substrate. Science 310, 1139–1143 (2005). 4. 4. Agus, D. B. et al. A physical sciences network characterization of non-tumorigenic and metastatic cells. Scientific Reports 3 (2013). 5. 5. Hochmuth, R. M. Micropipette aspiration of living cells. Journal of biomechanics 33, 15–22 (2000). 6. 6. Lu, H. et al. Microfluidic shear devices for quantitative analysis of cell adhesion. Analytical chemistry 76, 5257–5264 (2004). 7. 7. Harris, A., Wild, P. & Stopak, D. Silicone rubber substrata: a new wrinkle in the study of cell locomotion. Science 208, 177–179 (1980). 8. 8. Harris, A. K., Stopak, D. & Wild, P. Fibroblast traction as a mechanism for collagen morphogenesis. Nature 290, 249–251 (1981). 9. 9. Galbraith, C. G. & Sheetz, M. P. A micromachined device provides a new bend on fibroblast traction forces. Proceedings of the National Academy of Sciences of the United States of America 94, 9114–9118 (1997). 10. 10. Tan, J. L. et al. Cells lying on a bed of microneedles: An approach to isolate mechanical force. Proceedings of the National Academy of Sciences 100, 1484–1489 (2003). 11. 11. du Roure, O. et al. Force mapping in epithelial cell migration. Proceedings of the National Academy of Sciences 102, 2390–2395 (2005). 12. 12. Saez, A., Ghibaudo, M., Buguin, A., Silberzan, P. & Ladoux, B. Rigidity-driven growth and migration of epithelial cells on microstructured anisotropic substrates. Proceedings of the National Academy of Sciences 104, 8281–8286 (2007). 13. 13. Tee, S.-Y., Fu, J., Chen, C. S. & Janmey, P. A. Cell shape and substrate rigidity both regulate cell stiffness. Biophysical Journal 100, L25–L27 (2011). 14. 14. Dembo, M. & Wang, Y.-L. Stresses at the Cell-to-Substrate Interface during Locomotion of Fibroblasts. Biophysical Journal 76, 2307–2316 (1999). 15. 15. del Ãlamo, J. C. et al. Three-dimensional quantification of cellular traction forces and mechanosensing of thin substrata by fourier traction force microscopy. PLoS One 8, e69850 (2013). 16. 16. Butler, J. P., Tolić-Nørrelykke, I. M., Fabry, B. & Fredberg, J. J. Traction fields, moments, and strain energy that cells exert on their surroundings. American Journal of Physiology - Cell Physiology 282, C595–C605 (2002). 17. 17. Schwarz, U. S. et al. Calculation of forces at focal adhesions from elastic substrate data: the effect of localized force and the need for regularization. Biophysical Journal 83, 1380–1394 (2002). 18. 18. Sabass, B., Gardel, M. L., Waterman, C. M. & Schwarz, U. S. High Resolution Traction Force Microscopy Based on Experimental and Computational Advances. Biophysical Journal 94, 207–220 (2008). 19. 19. Zündel, M., Ehret, A. E. & Mazza, E. Factors influencing the determination of cell traction forces. PLoS One 12, e0172927 (2017). 20. 20. Bergert, M. et al. Confocal reference free traction force microscopy. Nature Communications 7, 12814 (2016). 21. 21. Colin-York, H. et al. 
Super-resolved traction force microscopy (STFM). Nano Letters 16, 2633–2638 (2016). 22. 22. Colin-York, H. & Fritzsche, M. The future of traction force microscopy. Current Opinion in Biomedical Engineering 5, 1–5 (2018). 23. 23. Huang, B., Wang, W., Bates, M. & Zhuang, X. Three-Dimensional Super-Resolution Imaging by Stochastic Optical Reconstruction Microscopy. Science 319, 810–813 (2008). 24. 24. Wäldchen, A., Lehmann, J., Klein, T., can de Linde, S. & Sauer, M. Light-induced cell damage in live-cell super-resolution microscopy. Scientific Reports 5 (2015). 25. 25. Wirtz, D., Konstantopoulos, K. & Searson, P. The physics of cancer: the role of physical interactions and mechanical forces in metastasis. Nature Reviews. Cancer. 11, 512–522 (2011). 26. 26. Gosse, C. & Croquette, V. Magnetic Tweezers: Micromanipulation and Force Measurement at the Molecular Level. Biophysical Journal 82, 3314–3329 (2002). 27. 27. Delanoë-Ayari, H., Rieu, J. P. & Sano, M. 4d traction force microscopy reveals asymmetric cortical forces in migrating Dictyostelium cells. Physical Review Letters 105 (2010). 28. 28. Lionnet, T. et al. Single-molecule studies using magnetic traps. Cold Spring Harbor Protocols 2012, 067488 (2012). 29. 29. Zhang, Z. & Menq, C.-H. Three-dimensional particle tracking with subnanometer resolution using off-focus images. Applied optics 47, 2361–2370 (2008). 30. 30. van Loenhout, M. T., Kerssemakers, J. W., De Vlaminck, I. & Dekker, C. Non-bias-limited tracking of spherical particles, enabling nanometer resolution at low magnification. Biophysical Journal 102, 2362–2371 (2012). 31. 31. Cavatore, E. Optical Microscopy applied to micro-manipulation by high-resolution magnetic tweezers and to visualization of metal nano-objects. Phd thesis, Université Pierre et Marie Curie - Paris VI (2011). 32. 32. Cnossen, J. P., Dulin, D. & Dekker, N. H. An optimized software framework for real-time, high-throughput tracking of spherical beads. Review of Scientific Instruments 85, 103712 (2014). 33. 33. Allan, D. Statistics of atomic frequency standards. Proceedings of the IEEE 54, 221–230 (1966). 34. 34. Czerwinski, F., Richardson, A. C. & Oddershede, L. B. Quantifying Noise in Optical Tweezers by Allan Variance. Optics Express 17, 13255 (2009). 35. 35. Dulin, D., Barland, S., Hachair, X. & Pedaci, F. Efficient Illumination for Microsecond Tracking Microscopy. PLoS One 9, e107335 (2014). 36. 36. De Vlaminck, I. et al. Highly parallel magnetic tweezers by targeted dna tethering. Nanoletters 11, 5489–5493 (2011). 37. 37. Landau, L. & Lifshitz, E. The equations of motion. In Mechanics, 1–12 (Elsevier, 1976). 38. 38. Alvarez-Gonzalez, B. et al. Two-Layer Elastographic 3-D Traction Force Microscopy. Scientific Reports 7, 39315 (2017). 39. 39. Kirsch, A. An introduction to the mathematical theory of inverse problems, vol. 120 (Springer Science & Business Media, 2011). 40. 40. Tikhonov, A. N. & Arsenin, V. Y. Solutions of ill-posed problems. (John Wiley & Sons, New York, 1977). 41. 41. Hansen, P. C. The L-curve and its use in the numerical treatment of inverse problems (IMM, Department of Mathematical Modelling, Technical University of Denmark, 1999). 42. 42. Schwarz, U. S. & Soiné, J. R. D. Traction force microscopy on soft elastic substrates: A guide to recent computational advances. Biochimica et Biophysica Acta (BBA) - Molecular Cell Research 1853, 3095–3104 (2015). 43. 43. Hansen, P. C. Regularization tools version 4.0 for matlab 7.3. Numerical algorithms 46, 189–194 (2007). 44. 44. Maskarinec, S. 
A., Franck, C., Tirrell, D. A. & Ravichandran, G. Quantifying cellular traction forces in three dimensions. Proceedings of the National Academy of Sciences 106, 22108–22113 (2009). 45. 45. Tanimoto, H. & Sano, M. A simple force-motion relation for migrating cells revealed by multipole analysis of traction stress. Biophysical Journal 106, 16–25 (2014). 46. 46. Palmieri, V. et al. Mechanical and structural comparison between primary tumor and lymph node metastasis cells in colorectal cancer. Soft Matter 11, 5719–5726 (2015). 47. 47. Huhle, A. et al. Camera-based three-dimensional real-time particle tracking at kHz rates and angstrom accuracy. Nature Communications 6, 5885 (2015). 48. 48. Gardini, L., Capitanio, M. & Pavone, F. S. 3D tracking of single nanoparticles and quantum dots in living cells by out-of-focus imaging with diffraction pattern recognition. Scientific Reports 5, 16088 (2015). 49. 49. Fung, J., Perry, R. W., Dimiduk, T. G. & Manoharan, V. N. Imaging multiple colloidal particles by fitting electromagnetic scattering solutions to digital holograms. Journal of Quantitative Spectroscopy and Radiative Transfer 113, 2482–2489 (2012). 50. 50. Hell, S., Reiner, G., Cremer, C. & Stelzer, E. H. Aberrations in confocal fluorescence microscopy induced by mismatches in refractive index. Journal of microscopy 169, 391–405 (1993). 51. 51. Visser, T., Oud, J. & Brakenhoff, G. Refractive index and axial distance measurements in 3-d microscopy. Optik 90, 17–19 (1992). 52. 52. Fischer, R. S., Myers, K. A., Gardel, M. L. & Waterman, C. M. Stiffness-controlled three-dimensional extracellular matrices for high-resolution imaging of cell behavior. Nature Protocols 7, 2056–2066 (2012). 53. 53. Halldorsson, S., Lucumi, E., R., G.-S. & Fleming, R. Advantages and challenges of microfluidic cell culture in polydimethylsiloxane devices. Biosensors and Bioelectronics 63, 218–231 (2015). ## Author information ### Affiliations 1. #### Université de Strasbourg, IPCMS/CNRS, UMR 7504, 23 rue du Loess, Strasbourg, 67034, France • Stanislaw Makarchuk • , Nicolas Beyer • , Wilfried Grange •  & Pascal Hébraud 2. #### Université de Strasbourg, Inserm U1113, 3 avenue Molière, Strasbourg, 67200, France • Christian Gaiddon 3. #### Université Paris Diderot, Sorbonne Paris Cité, Paris, France • Wilfried Grange ### Contributions C.G., P.H. and W.G. conceived the experiments. S.M. performed the experiments. P.H., S.M. and W.G. analyzed data. N.B. designed and built the setup. S.M. and P.H. performed to the theoretical analysis. S.M. and W.G. wrote the code to control the setup and track particles. P.H., S.M. and W.G. wrote the paper. All authors reviewed the manuscript. ### Competing Interests The authors declare no competing interests. ### Corresponding authors Correspondence to Wilfried Grange or Pascal Hébraud.
The Cryosphere, 12, 3551-3564, 2018, https://doi.org/10.5194/tc-12-3551-2018 | Research article | 13 Nov 2018 # Estimating snow depth over Arctic sea ice from calibrated dual-frequency radar freeboards Isobel R. Lawrence1, Michel C. Tsamados1, Julienne C. Stroeve1,2, Thomas W. K. Armitage3, and Andy L. Ridout1 • 1Centre for Polar Observation and Modelling, Earth Sciences, University College London, London, UK • 2National Snow and Ice Data Center, University of Colorado, Boulder, CO, USA • 3Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA Abstract Snow depth on sea ice remains one of the largest uncertainties in sea ice thickness retrievals from satellite altimetry. Here we outline an approach for deriving snow depth that can be applied to any coincident freeboard measurements after calibration with independent observations of snow and ice freeboard. Freeboard estimates from CryoSat-2 (Ku band) and AltiKa (Ka band) are calibrated against data from NASA's Operation IceBridge (OIB) to align AltiKa with the snow surface and CryoSat-2 with the ice–snow interface. Snow depth is found as the difference between the two calibrated freeboards, with a correction added for the slower speed of light propagation through snow. We perform an initial evaluation of our derived snow depth product against OIB snow depth data by excluding successive years of OIB data from the analysis. We find a root-mean-square deviation of 7.7, 5.3, 5.9, and 6.7 cm between our snow thickness product and OIB data from the springs of 2013, 2014, 2015, and 2016 respectively. We further demonstrate the applicability of the method to ICESat and Envisat, offering promising potential for the application to CryoSat-2 and ICESat-2, which launched in September 2018. 1 Introduction The addition of snow on sea ice, given its optical and thermal properties, generates several effects on the climate of the polar regions. Owing to its large air content, snow has a thermal conductivity 10 times less than that of ice. During the winter freeze-up, it forms an insulating layer that reduces heat flow from the ocean to the atmosphere and slows the rate at which seawater freezes to the bottom of the ice, dampening further ice growth. Snow has an optical albedo in the range of 0.7–0.85, compared to 0.6–0.65 for melting white ice. At the onset of the melt season, short-wave solar radiation is reflected from the surface, limiting ice melt. These properties make snow on sea ice important in energy budget considerations, and the inclusion of accurate Arctic snow depth estimates would improve current weather and sea ice forecasting. As well as its climatic importance, snow depth plays a key role in the retrieval of sea ice thickness from satellite altimetry. Over the past 2 decades both radar (e.g. ERS-2, Envisat, CryoSat-2) and laser (e.g. ICESat) altimeters have enabled sea ice thickness to be retrieved from space, first by measuring the sea ice freeboard (the portion of the ice floe above the water), and then converting this to thickness by assuming that the floe is in hydrostatic equilibrium with the surrounding ocean.
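To make the role of snow depth in this conversion explicit, the hydrostatic relation can be written out. The following minimal sketch is ours, not taken from this paper; the densities are typical literature values, and the formula is written for the idealized radar case in which the retrieved freeboard is the ice freeboard:

```python
def ice_thickness_from_ice_freeboard(f_i, h_s,
                                     rho_w=1024.0, rho_i=917.0, rho_s=320.0):
    """Sea-ice thickness (m) from ice freeboard f_i (m) and snow depth h_s (m),
    assuming the floe floats in hydrostatic equilibrium (densities in kg m^-3)."""
    return (rho_w * f_i + rho_s * h_s) / (rho_w - rho_i)

# With these densities, a 5 cm error in snow depth shifts the retrieved thickness
# by roughly rho_s / (rho_w - rho_i) * 0.05 ~ 0.15 m, which is why snow depth
# dominates the thickness uncertainty budget.
```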
For both the radar and laser cases, snow depth is one of the dominant sources of sea ice thickness uncertainty . In situ measurements of snow depth and density for the 37-year span from 1954–1991 provided the first comprehensive Arctic snow climatology. The data set, compiled and published by , comprises of measurements gathered at Soviet drifting stations across the central Arctic. Stations were located over multi-year ice, which at the time of data collection spanned an area of some 7×106km2. Recent studies have demonstrated that the Arctic is undergoing a transition from multi-year to first-year ice (Comiso2012), and the inaccuracy of the Warren climatology over seasonal ice has been emphasised by a number of studies . Despite only representing historical conditions, the Warren climatology remains the choice source of Arctic-wide snow depth estimates used in the processing of contemporary sea ice thickness, i.e. from CryoSat-2 (hereafter CS-2, a Ku-band radar satellite altimeter operational since 2010). In order to address the change to a more seasonal ice regime, Warren snow depths are halved over first-year ice regions to accommodate the lesser accumulation they experience . Although this modification generates temporal and spatial variability of snow depths due to the changing multi-year ice fraction, trends in precipitation and accumulation are not accounted for, rendering time series analyses of snow depths impossible by this method. Only satellite-derived snow depth estimates can offer the spatio-temporal resolution required for time series analysis and accurate monthly sea ice thickness derivation, but retrieving snow depth from space has proven challenging and is an ongoing effort for the sea ice community. Existing methods have historically relied on using relationships between passive microwave brightness temperatures and snow thickness. Using data over Antarctic sea ice from the Defense Meteorological Satellite Program (DMSP) special sensor microwave/imager (SSM/I), compared the spectral gradient ratio of the 19 and 37 GHz vertical polarisation channels with in situ snow depth data in order to express snow depth as a function of brightness temperature. The algorithm was later developed for application to Arctic sea ice using data from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E), but due to the inability to distinguish signatures from snow and multi-year ice, the available AMSR-E data product is limited to seasonal ice only . Furthermore, subsequent studies have demonstrated the sensitivity of the retrieved snow depth to snowpack conditions and surface roughness . utilised a frequency of 1.4 GHz (L-band), measured by the European Space Agency's Soil Moisture and Ocean Salinity (SMOS) satellite to retrieve snow depth. Although snow is transparent to L-band frequencies, i.e. the large wavelengths are not attenuated by the snow, their model-based study found brightness temperatures from the ice increased at L-band frequencies when a snow layer was present due to its insulating properties and the dependence of ice emissivity on temperature. Using a radiative transfer model, they tested the impact of 0–70 cm varying snow thickness on L-band brightness temperatures for a number of scenarios (in which ice temperature, thickness, salinity, and snow density varied within a realistic range). 
The snow depth which produced a brightness temperature most comparable (smallest root-mean-square deviation and best correlation coefficient) to the SMOS brightness temperature was then compared with snow thickness from Operation IceBridge (OIB) in order to assess which scenario performed best. Snow depths produced by this scenario correlated well (root-mean-square deviation = 5.5 cm) up to model-generated depths of 35 cm, but overestimated snow depth thereafter, owing to the desensitisation of brightness temperatures when snow depth increases above 35  cm. Furthermore, this approach requires that the values for the input parameters (ice temperature, thickness, salinity, and snow density) are assumed valid everywhere. In reality, these parameters vary in space and time, and the authors express the need to develop the methodology further to allow regional and temporal variability of model input parameters. At time of publication of this study, no SMOS snow depth product has been made publicly available. A recent approach to snow depth retrieval from satellites was offered by , who demonstrated the potential to estimate snow thickness by comparing retrievals from coincident satellite radar altimeters operating at different frequencies. Snow depth over Arctic sea ice (up to 81.5 N) was retrieved by differencing the elevation retrievals from AltiKa (Ka-band radar satellite altimeter, 2013–present) and CS-2. To investigate the penetration properties of the two radar altimeters, the authors simulated penetration depth as a function of snow grain size under different temperature and density conditions, derived from the equation for the extinction coefficient of the radar signal. Based on these model simulations the authors suggested that the Ka-band signal stops within the first few centimetres of the snow, and that the Ku-band signal can be reflected before the snow–ice interface in the case of large snow grains. In the following analysis to retrieve snow depth, however, this grain-size dependence of signal penetration is essentially neglected, and it is assumed that AltiKa does not penetrate the snow at all whilst CS-2 penetrates it fully, allowing snow depth to be calculated simply as the difference between the two. A previous study by also compared retrievals from AltiKa and CS-2; they found a basin-mean freeboard difference of 4.4 cm in October 2013 increasing to 6.9 cm in March 2014, with AltiKa consistently higher across the basin and season. By comparing the freeboards retrieved from each satellite with ice freeboard from NASA's Operation IceBridge, radar penetration at a local grid-scale level was quantified. Under the assumption that multi-year ice and first-year ice characterise snow and ice packs with distinctive penetrative properties, an average value for the radar penetration factor was found for each satellite over each ice type. Though limited to the spring due to the availability of OIB data and therefore not necessarily representative of penetration properties throughout the year, the study highlights the importance of accounting for regional differences in penetration depth. compared freeboards from Envisat, a Ku-band pulse-limited altimeter, with those from the CS-2 Synthetic Aperture Radar (SAR) system. Since both altimeters operate at the same frequency, they are expected to penetrate to the same depth and therefore retrieve comparable freeboards. The study found Envisat was biased low compared with CS-2, attributed to differences in footprint size (0.3×1.7km for CS-2 vs. 
2–10 km diameter for Envisat) and the effect of using an empirical retracker on Envisat's pulse-limited waveforms (discussed in Sect. 2.3). performed a similar Envisat vs. CS-2 freeboard comparison over Antarctic sea ice and also found a bias on Envisat's freeboard attributed to its larger footprint. These results suggest that the freeboard difference between AltiKa and CS-2 found in may not have been solely the result of a difference in physical snow penetration, but due also to differences in sampling area and processing technique. AltiKa has a smaller pulse-limited footprint than that of Envisat (1.4 km compared with 2–10 km); nevertheless, we would expect the impact of its different footprint with respect to CS-2 to introduce a bias like that seen in the Envisat data. This is discussed fully in Sect. 2.3. Based on studies of snow penetration depth as a function of microwave wavelength , we expect the CS-2 Ku-band pulse to penetrate further into the snowpack than AltiKa's Ka-band, but unlike previous studies we do not try to quantify this penetration depth. Based on the results of , , and , we assume that the effects of snow penetration and biases due to sampling area cannot be separated and instead correct for both simultaneously by calibrating satellite freeboards with independent freeboard data. We make use of snow depth and laser freeboard data from OIB to assess the deviation of AltiKa and CS-2 satellite freeboards from the snow surface and snow–ice interface respectively. We assume this deviation to result from the combination of competing effects; snow penetration, biases due to sampling area and surface roughness, and the effect of the threshold retracker on the satellite waveforms. Like , we use satellite pulse peakiness (PP) as a characterisation of the surface and compare each satellite's deviation from its expected dominant scattering horizon (Δf) against PP. Using the relationships between Δf and PP, we then calibrate both AltiKa and CS-2 freeboards to bring them in line with the snow surface and snow–ice interface respectively. Finally we estimate dual-altimeter snow thickness (DuST) as the difference between the calibrated AltiKa and CS-2 freeboards. In the next section we outline the data sets used and discuss why the properties of the area sampled by the satellite footprint can create a bias on freeboard which is inseparable from the physical snow penetration of the signal. In Sect. 2.5 and 2.6 we calibrate the AltiKa and CS-2 freeboards and then present the results of this calibration applied to the 2015–2016 growth season and discuss the retrieved snow depth estimates with reference to large-scale weather phenomena in Sect. 3.1. We provide an analysis of the uncertainty on our gridded DuST product and compare it with OIB snow depth data not included in the calibration in Sect. 3.2. Finally in Sect. 3.4, we apply the DuST methodology to freeboards from the ICESat and Envisat satellites. 2 Data and methods ## 2.1 AltiKa The SARAL/AltiKa satellite (herein referred to as AltiKa), was launched in spring 2013 as a joint mission between the Centre National d'Etudes Spatiales (CNES) and the Indian Space Research Organisation (ISRO). AltiKa's pulse-limited Ka-band radar altimeter, which operates at a central frequency of 35.75 GHz, retrieves surface elevations up to 81.5 latitude. used a “Gaussian plus exponential” retracker to retrieve lead elevations (after ) and a 50 % threshold retracker over floes. 
AltiKa freeboard data used in this study are derived using the same processing algorithm, and the reader is referred to the Supplement in for further details. Table 1AltiKa and CS-2 (SAR mode) operation characteristics. ## 2.2 CryoSat-2 CS-2 was launched by the European Space Agency in 2010, tasked with the specific role of monitoring the Earth's cryosphere. The satellite has an orbital inclination of 88, giving it far better coverage over the poles than previous radar altimeters, and unlike AltiKa, CS-2 employs along-track SAR processing to achieve an along-track resolution of approximately 300 m, improving the sampling of smaller floes and making it less susceptible to snagging from off-nadir leads . As with AltiKa, lead elevations are retrieved using the Gaussian plus exponential model fit and for floes a 70 % threshold retracker was determined as offering the best average elevation from the CS-2 unique SAR waveforms . The CS-2 freeboard data used in this study were processed by the Centre for Polar Observation and Modelling (CPOM) and readers are referred to for further details on the method. ## 2.3 Sources of AltiKa vs. CryoSat-2 freeboard bias We define AltiKa vs. CS-2 freeboard bias as the portion of the AltiKa minus CS-2 freeboard difference that does not originate from the difference in snow penetration of the two radars. In line with radar theory and in light of recent findings by , we expect such a bias to be the result of the difference in footprint sizes between the two altimeters and the consequences of this during freeboard processing. The differences between AltiKa and CS-2 of interest to this study are summarised in Table 1. In an initial stage of AltiKa and CS-2 freeboard processing, waveforms are classified as either lead or floe according to thresholds for pulse peakiness, defined as $\text{PP}=N\frac{{p}_{max}}{{\mathrm{\Sigma }}_{\text{i}}\phantom{\rule{0.33em}{0ex}}{p}_{\text{i}}},$ where N is the number of range bins above the “noise floor” (calculated as the mean power in range bins 10–20), pmax⁡ is the maximum waveform power (the “highest peak”), and Σipi is the sum of the power in all range bins above the noise floor . It should also be noted that further waveform parameters are used to identify lead and floes: stack standard deviation (SSD) for CS-2 and backscatter coefficient σ0 for AltiKa . Since PP is the criterion shared by both, it is the focus of our discussion here. Waveforms originating from smooth, specular leads demonstrate a rapid rise in power followed by a sharp drop off, giving them a high PP. Returns from floes typically demonstrate a more gradual rise in power and slower drop-off, equivalent to a lower PP. PP can therefore be used to distinguish floe and lead returns and eliminate those not clearly identifiable as one or the other. For AltiKa (CS-2), waveforms with PP less than 5 (9) are designated as originating from ice floes. Waveforms with PP greater than 18 are classified as leads for both satellites . Waveforms that exhibit a mixture of scattering behaviour will have a PP in the “ambiguous” range (5 < PP < 18 for AltiKa and 9 < PP < 18 for CS-2) and are discarded. Since AltiKa has a larger footprint, its waveforms are more likely to be ambiguous and therefore discarded than CS-2, which can resolve smaller floes within the same region. The result of this is a bias in AltiKa towards higher freeboards (only larger floes, which tend to be thicker, are captured), especially over seasonal lead-dense areas. 
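The pulse-peakiness classification just described amounts to a few lines of code. A minimal sketch (ours), where `wf` is a one-dimensional array of waveform power per range bin; the exact bin indices used for the noise floor are illustrative:

```python
import numpy as np

def pulse_peakiness(wf):
    """PP = N * p_max / sum(p_i), with N the number of range bins above the
    noise floor (estimated here from bins 10-20) and the sum taken over those bins."""
    wf = np.asarray(wf, dtype=float)
    noise_floor = wf[10:20].mean()
    above = wf[wf > noise_floor]
    return len(above) * above.max() / above.sum()

def classify_waveform(wf, satellite):
    """Floe / lead / ambiguous classification from the PP thresholds quoted above."""
    pp = pulse_peakiness(wf)
    floe_max = 5.0 if satellite == "altika" else 9.0   # 5 for AltiKa, 9 for CS-2
    if pp < floe_max:
        return "floe"
    if pp > 18.0:
        return "lead"
    return "ambiguous"      # mixed scattering behaviour; discarded
```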
The impact of surface roughness on pulse-limited altimetry is well documented (Chelton et al.2001; Raney1995; Rapley et al.1983). Generally, a rougher surface leads to dilation of the footprint and a widening of the leading edge of the waveform return. For a homogeneously rough surface with a Gaussian surface elevation distribution, the 50 % power threshold represents the mean surface elevation within the pulse-limited footprint. However, for a heterogeneously rough surface, such as that of multi-year sea ice, the waveform leading edge can take a complex shape where the half-power point does not necessarily represent the average elevation within the footprint and using a 50 % threshold retracker might lead to a biased surface height retrieval . Despite its along-track Doppler processing and effective sharpening of the waveform response, CS-2 may also be susceptible to an elevation bias due to surface roughness. This was demonstrated by who advocate the use of a physical model retracker in order to better resolve CS-2 surface elevation. To overcome the CS-2 vs. AltiKa freeboard bias, employed degraded SAR mode CS-2 data in their comparison, where the synthetic Doppler beams are not aligned in time and are summed incoherently to obtain a pseudo-pulse-limited echo. Since this offers a footprint and waveform more closely resembling that of AltiKa, it was assumed that observed elevation differences between AltiKa and degraded CS-2 were the result of differences in snow penetration only. Rather than separating the contributions of freeboard difference in this way, we here introduce an approach that calibrates AltiKa freeboard to align it to the snow surface and CS-2 to the ice–snow interface (we assume in general that CS-2 penetrates further than AltiKa due to its longer wavelength, ). As such, penetration properties and sources of freeboard bias are corrected in one step without needing to consider the contribution of each. While the comparisons of derived snow depths with those from OIB are encouraging, the assumption of zero penetration for AltiKa and full penetration for CS-2 introduces limitations and is counter to observational results – and indeed their own model simulations – in support of a spatially and temporally variable penetration depth as a function of snow characteristics. Here we offer a methodology that both accounts for variable AltiKa and CS-2 snow penetration and is simple; freeboard data can be utilised as they are, without reprocessing. This is in contrast to the method of which relies on the ability to process one of the satellite data sets to achieve comparable footprints and thus alleviate the biases due to the difference in sampling areas. It is fortunate that CS-2 pseudo-LRM (Low Resolution Mode) has a similar footprint to AltiKa (1.7 km diameter and 1.4 km diameter respectively), but how, for example, could the methodology be applied to CS-2 and ICESat-2 in order to retrieve contemporary snow depth estimates once AltiKa ceases functionality? Although herein we demonstrate our methodology applied to the AltiKa and CS-2 satellites, our intention is to outline an approach that can be applied more broadly. Given the recent launch of ICESat-2 and the unique opportunity that its coincidence with CS-2 provides, we demonstrate the applicability of our method to the Envisat (same operating frequency as CS-2) and ICESat satellites. 
## 2.4 Operation IceBridge

In order to evaluate the deviation of each satellite's retrieved elevation from its "expected" dominant scattering horizon (the snow surface for AltiKa and the snow–ice interface for CS-2), we use laser freeboard and snow depth from NASA's 2013–2016 OIB spring campaigns. It is important to note that a variety of research groups process OIB snow radar data in different ways, and the results vary significantly (for the 2013–2015 period, campaign-average snow depths differ by up to 7 cm over first-year ice and 12 cm over multi-year ice). Evidently the lack of a singular, robust independent data set presents a limitation to our methodology, since our aim is to calibrate to the "true" snow and ice freeboards. In an attempt to offer the best Dual-altimeter Snow Thickness product possible, we employ OIB snow depths processed from snow radar data by the NASA Jet Propulsion Laboratory (JPL), as these demonstrated the best agreement with ERA-interim reanalysis data and the Warren climatology for the 2013–2015 period. We return to a discussion of this limitation in Sect. 3.2.

Our methodology requires a comparison of CS-2 radar freeboard with OIB radar freeboard. To calculate this we use snow freeboard, retrieved using the OIB ATM (Airborne Topographic Mapper) laser altimeter, from which snow depth can be subtracted. Currently, ATM freeboard data are only available from the National Snow and Ice Data Center (NSIDC), and for the 2014–2016 period these exist solely in Quick Look format: a first-release, expedited version, which demonstrates reduced accuracy compared with the final release products (Kurtz, 2014). In the interest of consistency we also use the ATM laser freeboard Quick Look product for 2013. Sea ice freeboard fi is calculated by subtracting OIB JPL snow depth hs from OIB Quick Look laser freeboard fl. Ice freeboard is then converted to radar freeboard fr by

$f_\text{r} = f_\text{i} - h_\text{s}\left(\frac{c}{c_\text{s}} - 1\right). \qquad (1)$

The OIB radar freeboard represents the freeboard that would be retrieved by a satellite altimeter whose pulse penetrated through to the ice–snow interface. We choose a value of c/cs of 1.28 after Kwok (2014). In the following discussion, AltiKa and CS-2 freeboard refers to the radar freeboard, that is, the freeboard retrieved by the satellite before the correction for the slower propagation of the radar pulse through the snowpack is applied.

## 2.5 AltiKa calibration with Operation IceBridge

For each day of the three spring campaigns 2013–2015, OIB laser freeboard data are averaged onto a 2° longitude × 0.5° latitude grid. Grid cells containing fewer than 50 individual points are discarded to remove speckle noise. Along-track AltiKa freeboard and PP data for the ±10 days surrounding the campaign day are then averaged onto the same grid, and grid cells with fewer than 50 points are similarly discarded. This grid and time window were chosen because they produced the maximum number of grid cells in which a cell contains at least 50 airborne and 50 satellite points. Satellite freeboard and PP grids are then interpolated at the average position of the OIB data within each valid OIB grid cell. Further, high-resolution (10 km gridded) ice type data from the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF, http://www.osi-saf.org, last access: 1 March 2018) are interpolated at the same point to determine whether multi-year or seasonal ice is being sampled.
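The conversion in Eq. (1) and the grid-cell filtering just described can be sketched in Python as follows. This is illustrative only; the helper names and the flat cell-index representation of the grid are our assumptions.

```python
import numpy as np

C_OVER_CS = 1.28   # c / c_s after Kwok (2014)

def oib_radar_freeboard(laser_freeboard_m, snow_depth_m):
    """Eq. (1): OIB 'theoretical' radar freeboard from ATM laser freeboard and
    JPL snow depth, i.e. the freeboard a radar penetrating to the snow-ice
    interface would retrieve, accounting for slower propagation through snow."""
    snow_depth_m = np.asarray(snow_depth_m, dtype=float)
    ice_freeboard = np.asarray(laser_freeboard_m, dtype=float) - snow_depth_m  # f_i = f_l - h_s
    return ice_freeboard - snow_depth_m * (C_OVER_CS - 1.0)                    # f_r

def grid_cell_mean(values, cell_index, n_cells, min_points=50):
    """Average point data onto grid cells, masking cells with fewer than
    `min_points` samples (the speckle-noise filter described above)."""
    values = np.asarray(values, dtype=float)
    cell_index = np.asarray(cell_index)
    sums = np.bincount(cell_index, weights=values, minlength=n_cells)
    counts = np.bincount(cell_index, minlength=n_cells)
    return np.where(counts >= min_points, sums / np.maximum(counts, 1), np.nan)
```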
The value ΔfAK, defined as the ATM laser freeboard minus the AltiKa freeboard and plotted against AltiKa PP, is shown in Fig. 1. Data from 2013, 2014, and 2015 and their corresponding linear regression fits are plotted in red, blue, and grey respectively to demonstrate year to year consistency. Multi-year and first-year ice are distinguished by star and square markers in order to illustrate the variation of PP, and thus roughness, with ice type. Figure 1The value ΔfAK, defined as the OIB laser freeboard minus the AltiKa radar freeboard, plotted against AltiKa pulse peakiness, for the OIB spring campaigns of 2013 (red), 2014 (blue), and 2015 (grey). Multi-year and first-year ice are plotted with stars and squares respectively, and the horizontal grey dashed line marks zero. The combined (all years) linear regression fit (CLRF), shown by the black line, has a slope of −0.16 and an intercept of 0.76. The shaded area around the CLRF shows the 68 % prediction interval, corresponding to a standard error (SE) on ΔfAK of 9.4 cm. Please note that fb is freeboard. The combined (all years) linear regression fit (CLRF) is shown by the black line and has slope of −0.16 and intercept of 0.76. The shaded area shows the 68 % prediction interval for the CLRF, corresponding to a standard error (SE) on ΔfAK of 9.4 cm. The CLRF is greater than zero for most PPs, implying that the freeboard needs to be increased to align with the snow–air interface, though more so (∼0.2m) for low peakiness values (rougher ice) than for high peakiness values (smoother ice), where the correction approaches zero. This suggests that freeboard over rough ice is biased low, which could be attributed to difficulty in identifying the average footprint surface elevation as outlined in Sect. 2.3. It could also suggest that AltiKa exhibits greater snow penetration over rough ice than seasonal ice, in support of the assumption that (i) rough, multi-year ice has a thicker snow cover and (ii) seasonal ice is likely subject to brine wicking, which prevents radar propagation through the snow . Ultimately we cannot separate the influence of individual sources of bias and physical penetration, and therefore, these observations are purely speculative. ## 2.6 CS-2 calibration with Operation IceBridge The procedure for calibrating CS-2 with OIB is identical to that outlined above for AltiKa, but here ΔfCS is defined as the OIB radar freeboard (see Sect. 2.4) minus the CS-2 radar freeboard. For consistency and comparability with AltiKa, we remove CS-2 data above 81.5 N from our analysis. The value ΔfCS is plotted against CS-2 PP and shown in Fig. 2. The CLRF, shown by the black line, has a slope of 0.06 and a negative intercept of −0.46. As before, the shaded area around the CLRF shows the 68 % prediction interval, and corresponds to a ±8.4cm uncertainty (1 standard error) on ΔfCS. For the entire CS-2 PP range, the CLRF is negative. It is most negative at lower PP, indicating that the CS-2 freeboard lies higher above the snow–ice interface over rough ice. This is in agreement with rougher ice exhibiting thicker snow cover and the radar pulse therefore being limited from getting as close to the snow–ice interface, where the snow is thinner. This deviation could also be the result of a failure of the empirical retracker to retrieve accurate surface elevation over rough ice, as demonstrated by . As before, since we cannot separate the influence of individual sources of bias and physical penetration, these suggestions are speculative. 
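A minimal sketch of how the PP-dependent corrections are applied, using the CLRF slopes and intercepts quoted in Sects. 2.5 and 2.6 (the corrections are assumed to be in metres, and the function and dictionary names are ours):

```python
# CLRF (slope, intercept) pairs from Sects. 2.5 and 2.6, giving the correction
# Delta_f = slope * PP + intercept (assumed here to be in metres).
CLRF = {
    "altika":   (-0.16, 0.76),   # aligns AltiKa freeboard to the snow surface
    "cryosat2": (0.06, -0.46),   # aligns CS-2 freeboard to the snow-ice interface
}

def calibrate_freeboard(freeboard_m, pulse_peakiness, mission):
    """Apply the PP-dependent correction: f_cal = f + Delta_f(PP)."""
    slope, intercept = CLRF[mission]
    return freeboard_m + slope * pulse_peakiness + intercept
```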
Figure 2 The value ΔfCS, defined as the OIB theoretical radar freeboard minus the CS-2 radar freeboard, plotted against CS-2 pulse peakiness, for the OIB spring campaigns of 2013 (red), 2014 (blue), and 2015 (grey). Multi-year and first-year ice are plotted with stars and squares respectively, and the horizontal grey dashed line marks zero. The combined (all years) linear regression fit (CLRF), shown by the black line, has a slope of 0.06 and an intercept of −0.46. The shaded area around the CLRF shows the 68 % prediction interval, corresponding to a standard error (SE) on ΔfCS of 8.4 cm. Please note that fb is freeboard.

3 Results and discussion

## 3.1 Case study November 2015–April 2016

To derive snow depth, along-track freeboard measurements for AltiKa and CS-2 are calibrated as a function of PP according to the combined linear regression fits derived in the previous section and then averaged onto a 1.5° longitude by 0.5° latitude monthly grid. A finer grid resolution than for the calibration analysis is afforded given the coverage of a full month's worth of data as compared to the 21 days (±10-day window) averaged previously. The calibrated CS-2 freeboard is subtracted from the calibrated AltiKa freeboard and multiplied by a factor of cs/c = 0.781 to convert to snow depth. Figure 3 summarises the retrieved monthly dual-altimeter snow thicknesses from November 2015 to April 2016. The delineation of multi-year and first-year ice is shown by the dashed black lines, adapted from OSI SAF Quicklook daily sea ice type maps for the 15th day of each month, available at http://osisaf.met.no/p/osisaf_hlprod_qlook.php?prod=Ice-Type&area=NH (last access: 1 March 2018).

Figure 3 Monthly snow depths for the growth season November 2015 (a) to April 2016 (f), derived from the AltiKa minus CS-2 calibrated freeboard. The multi-year ice boundary for each month is shown by the dashed black line, adapted from the OSI SAF Quicklook sea ice type map for the 15th day of the month, available at http://osisaf.met.no/p/osisaf_hlprod_qlook.php?prod=Ice-Type&area=NH (last access: 1 March 2018).

The spatial distribution of snow depth follows the expected pattern of thin snow cover over seasonal ice (up to 25 cm) and thicker snow over multi-year ice (30–40 cm), which in recent years is limited to regions north of the Canadian Archipelago (CAA) and Greenland and the Fram Strait. In addition, seasonal deposition of snow occurs between November and April, corresponding with the locations of the predominant cyclone tracks in winter (e.g. the Aleutian Low on the Pacific side and the North Atlantic storm tracks). In particular, snow predominantly accumulates within the Chukchi Sea, and within the Kara, Barents, and eastern Greenland seas. As well as precipitation events, ice drift governs snow distribution through the advection of snow-loaded sea ice parcels around the ocean. Therefore, in order to understand the seasonal evolution of the snow cover, we compare the snow depth maps with monthly sea ice motion vectors from the National Snow and Ice Data Center (NSIDC, available at https://daacdata.apps.nsidc.org, last access: 23 February 2018), shown in Fig. 4. We expect that snow accumulation west of Banks Island in the CAA is the result of westward transport of multi-year ice by the Beaufort Gyre. Snow depths in the Kara Sea appear high given the advection of ice out of this region throughout the season; however, we cannot rule out anomalous precipitation events.
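Putting the pieces together, the monthly retrieval described at the start of this section reduces to calibrating each mission's along-track freeboard, averaging both onto a common grid, differencing, and scaling by cs/c. Below is a hedged sketch that reuses the grid_cell_mean and calibrate_freeboard helpers introduced earlier; the flat cell-index arrays standing in for the 1.5° × 0.5° monthly grid are an assumption of this sketch.

```python
CS_OVER_C = 1.0 / 1.28   # ~0.781: converts a radar freeboard difference to snow depth

def monthly_dust(fb_ak, pp_ak, cells_ak, fb_cs, pp_cs, cells_cs, n_cells):
    """Monthly DuST on a common grid: calibrate each mission's along-track
    freeboard as a function of PP, grid-average, difference, and scale."""
    ak = grid_cell_mean(calibrate_freeboard(fb_ak, pp_ak, "altika"), cells_ak, n_cells)
    cs = grid_cell_mean(calibrate_freeboard(fb_cs, pp_cs, "cryosat2"), cells_cs, n_cells)
    return CS_OVER_C * (ak - cs)   # gridded snow depth in metres
```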
Typically 20–40 extreme cyclones occur each winter within the North Atlantic, but in recent years there has been a trend towards increased frequency of cyclones, particularly near Svalbard . These cyclones, while they transport heat and moisture into the Arctic and may impact the sea ice edge location , can also be associated with increased precipitation. Figure 4NSIDC November 2015 to April 2016 monthly mean sea ice drift vectors. Adapted from images retrieved from https://daacdata.apps.nsidc.org/pub/DATASETS/nsidc0116_icemotion_vectors_v3/browse/north/ (last access: 23 February 2018). To understand where greatest accumulation of snow occurs over the season, we also plot the difference between November 2015 and April 2016 snow depth in Fig. 5. Snow accumulation is highest in the western Beaufort Sea, in particular adjacent to the coast of Canada. We attribute this to the advection of snow-loaded multi-year ice by the Beaufort Gyre, supported by the visible shift of the multi-year ice boundary through the season (Fig. 3). Accumulation also occurs in the Fram Strait, which we expect to be the result of southward advection of multi-year ice from the central Arctic Ocean in December and April, as well as snow deposition from the North Atlantic Storm tracks. High accumulation in the southern Chukchi Sea could also be explained by strong advective currents pushing snow-loaded ice into this area, particularly from November to January, as well as snow precipitation from the Aleutian Low. Negative snow depth changes are generally small, and are predominantly visible in the centre of the Beaufort and Laptev seas. In accordance with Fig. 4 we expect these negative accumulations to be the result of advection transporting snow-loaded ice parcels out of these regions and perhaps new ice formation. Figure 5April 2016 minus November 2015 DuST. One limitation of the AltiKa CS-2 DuST product is the data gap associated with AltiKa's upper latitudinal limit of 81.5 N. This region contains a large proportion of the Arctic's thick multi-year ice, and thus, observations of snow depth could provide valuable insight as the ice pack transitions from multi-year to first-year ice. Furthermore, for a snow depth product to be useful for integration into sea ice thickness retrievals as discussed in the introduction, one that extends to the CS-2 latitude range is desirable. Application of the DuST methodology to the CS-2 and ICESat-2 satellites would generate a snow depth product up to 88. Alternatively, dual-frequency operation from the same satellite platform would open the potential for snow depth retrievals along the satellite track. Table 2Covariances between terms for snow depth uncertainty calculation. A secondary limitation of the methodology is the extent of the OIB campaigns; since they only operate in the western Arctic Ocean, north of the CAA, and in the Lincoln and Beaufort seas, no observations from the eastern Arctic go into our calibrations. Thus, the calibration functions derived are unconstrained outside of this area and we have less confidence in the snow depths in the eastern Arctic. Further, the calibration relationships are only strictly valid in spring, when OIB operates, so caution is warranted in using these products for seasonal variability of snow depth analysis. ## 3.2 Uncertainty calculation The uncertainty calculation performed in this section assumes that the OIB products used in the analysis contain no systematic bias. 
We expect random noise to be minimised by grid averaging, but any systematic error would offset the calibration linear regression fits and alter the snow depth retrievals. As discussed in Sect. 2.4, the recent study by highlights the differences that exist between OIB snow radar data processed using various existing algorithms. It is not within the scope of this study to assess the sensitivity of our DuST product to the different OIB snow radar input data, but it remains the subject of future work. One purpose of the inter-comparison was to identify the strengths and weaknesses of each processing technique in order to inform the design of an optimised algorithm and generate an improved snow radar product. We acknowledge that our methodology would benefit from such an effort and suggest that for future applications of this methodology – in particular to CS-2 and ICESat-2 – the next generation of OIB snow depths should be investigated.

The equation for calculating snow depth, hs, by our methodology is

$h_\text{s} = 0.781\left(\left(f_\text{AK}+\Delta f_\text{AK}\right)-\left(f_\text{CS}+\Delta f_\text{CS}\right)\right), \qquad (2)$

where fAK and fCS are the AltiKa and CS-2 freeboards, and ΔfAK and ΔfCS are the AltiKa and CS-2 freeboard corrections (see Sect. 2.5 and 2.6). From propagation of errors on Eq. (2), the uncertainty on snow depth, $\sigma_{h_\text{s}}$, is given by

$\sigma_{h_\text{s}} = 0.781\left(\sigma_{f_\text{AK}}^{2}+\sigma_{\Delta f_\text{AK}}^{2}+\sigma_{f_\text{CS}}^{2}+\sigma_{\Delta f_\text{CS}}^{2}+2\sigma_{f_\text{AK}\Delta f_\text{AK}}-2\sigma_{f_\text{AK}f_\text{CS}}-2\sigma_{f_\text{AK}\Delta f_\text{CS}}-2\sigma_{\Delta f_\text{AK}f_\text{CS}}-2\sigma_{\Delta f_\text{AK}\Delta f_\text{CS}}+2\sigma_{f_\text{CS}\Delta f_\text{CS}}\right)^{\frac{1}{2}}, \qquad (3)$

where the first four terms are the variances of the four variables in Eq. (2), and the last six terms are the covariances between them. We obtain values of $\sigma_{\Delta f_\text{AK}}=9.4$ cm and $\sigma_{\Delta f_\text{CS}}=8.4$ cm from the 68 % prediction intervals on the calibration fits, represented by the shaded areas in Figs. 1 and 2 respectively. Since our snow product is monthly gridded we are interested in the monthly gridded snow depth uncertainty. Therefore $\sigma_{f_\text{AK}}$ and $\sigma_{f_\text{CS}}$ are the errors on the monthly gridded satellite freeboards to which the calibration corrections are being applied. According to , the error on the monthly gridded CS-2 freeboard is dominated by the uncertainty on the interpolated sea level anomaly (SLA), calculated from the SLAs of waveforms identified as leads (see Sect. 2.3). Lead SLAs within a 200 km along-track window centred on each floe measurement are fit with a linear regression to estimate the SLA beneath the floe and thus calculate the freeboard.
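The two calculations just described, the propagation of Eq. (3) and the along-track SLA interpolation, can be sketched in Python as follows. The dictionary keys, window handling, and function names are illustrative assumptions rather than the published implementation.

```python
import numpy as np

def snow_depth_uncertainty(var, cov):
    """Propagate Eq. (3). `var` holds the four variances (m^2) keyed by
    'fAK', 'dAK', 'fCS', 'dCS'; `cov` holds the six covariances keyed by pairs.
    Signs follow the +, -, -, -, -, + pattern of Eq. (3)."""
    s = (var["fAK"] + var["dAK"] + var["fCS"] + var["dCS"]
         + 2 * cov[("fAK", "dAK")] - 2 * cov[("fAK", "fCS")]
         - 2 * cov[("fAK", "dCS")] - 2 * cov[("dAK", "fCS")]
         - 2 * cov[("dAK", "dCS")] + 2 * cov[("fCS", "dCS")])
    return 0.781 * np.sqrt(s)

def sla_under_floe(floe_x_km, lead_x_km, lead_sla_m, window_km=200.0):
    """Estimate the sea level anomaly beneath a floe by fitting a linear
    regression to lead SLAs within a 200 km along-track window centred on the
    floe, then evaluating the fit at the floe position."""
    lead_x_km = np.asarray(lead_x_km, dtype=float)
    lead_sla_m = np.asarray(lead_sla_m, dtype=float)
    in_window = np.abs(lead_x_km - floe_x_km) <= window_km / 2.0
    if in_window.sum() < 2:
        return np.nan                     # too few leads to constrain the fit
    coeffs = np.polyfit(lead_x_km[in_window], lead_sla_m[in_window], 1)
    return np.polyval(coeffs, floe_x_km)
```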
As such, along-track floe measurements are not decorrelated at length scales less than 200 km, and the interpolated SLA uncertainty is not reduced from grid-cell averaging of data from the same satellite pass. Since the interpolation is performed along-track, separate satellite passes over each grid cell over the month are decorrelated, and thus the error is minimised by $\mathrm{1}/\sqrt{N}$, where N is the number of passes over a grid cell in 1 month. To calculate this error we reprocessed 1 month (January 2016) of CS-2 and AltiKa data, recording for each floe freeboard retrieval the 68 % prediction interval on the linear regression fit across the 200 km window. These errors, averaged on our 1.5 longitude by 0.5 latitude grid are shown in Fig. 6a. Since this error decorrelates from one satellite pass to the next, we divide by the number of satellite passes in a month (Fig. 6b) to retrieve the final interpolated SLA uncertainty, shown in Fig. 6c. Since this error dominates the freeboard retrieval , this approximates to the monthly uncertainty on AltiKa and CS-2 freeboards, ${\mathit{\sigma }}_{{f}_{\text{AK}}}$ and ${\mathit{\sigma }}_{{f}_{\text{CS}}}$. Figure 6Satellite freeboard error calculation for January 2016 for AltiKa (left) and CS-2 (right). (a) Monthly gridded sea level anomaly (SLA) error. (b) Number of tracks per (1.5 long × 0.5 lat) grid cell per month. (c) SLA error divided by the square root of the number of tracks, i.e. $\text{(a)}/\sqrt{\text{(b)}}$ gives the reduced monthly error on freeboard. The black circle on the CS-2 maps shows the upper latitude limit of DuST (81.5 N). The last six terms of Eq. (3) are the covariances of the four variables. We calculate these by gridding all AltiKa and CS-2 data from March 2013 to January 2018 and finding the correlation–covariance matrix. The value for each term is summarised in Table 2. All terms are substituted into Eq. (3) to find the uncertainty ${\mathit{\sigma }}_{{h}_{\text{s}}}$ on monthly gridded snow depth, shown for January 2016 in Fig. 7. The uncertainty is higher at lower latitudes where there are less satellite passes per grid cell, and over the thick multi-year ice to the north of the CAA where fewer leads available for the linear regression increase the uncertainty on the interpolated SLA, particularly for CS-2 (see Fig. 6a). As a conservative estimate we assign our monthly gridded snow depth product an average uncertainty of 8 cm for all months. Figure 7January 2016 snow depth uncertainty. The main contribution to snow depth uncertainty is the prediction intervals from the calibration functions (see Sect. 2.5 and 2.6). This uncertainty could be reduced with the addition of more data points, i.e. more seasons of coincident satellite and OIB measurements. At time of publication OIB data for springs 2017 and 2018 have not been made publicly available. ## 3.3 Comparison with Operation IceBridge We compare snow depth retrieved by our methodology with OIB snow depths from spring 2016 following the same procedure outlined in Sect. 2.5 and 2.6. For each day of the 2016 campaign, OIB snow depths are averaged onto the 2 longitude × 0.5 latitude grid, and grid cells containing less than 50 individual points are discarded to remove speckle noise, as before. Calibrated AltiKa and CS-2 freeboards for the ±10 days surrounding the campaign day are averaged onto the same grid and grid cells with less than 50 AltiKa or CS-2 points are discarded. 
The gridded, calibrated CS-2 freeboard is subtracted from the gridded calibrated AltiKa freeboard and multiplied by factor ${c}_{\text{s}}/c=\mathrm{0.781}$, as done previously. The resulting snow depth grid is then interpolated at the average position of the OIB data within each valid OIB grid cell. The DuST retrieved for each point is plotted against OIB snow depth. In order to compare with more than one OIB campaign, we repeated the original calibration analyses outlined in Sect. 2.6 and 2.5, successively omitting each of the 2013–2015 OIB seasons and using the other 3 years' data to derive calibration functions and generate snow depths for the omitted year. DuST snow depths were then compared against OIB snow depths by the method outlined in the previous paragraph. Results for all 4 years are shown in Fig. 8 and summarised in Table 3. Since OIB data were used to calibrate the satellite freeboards, this cannot be considered a validation exercise. However, if OIB is considered as providing true snow depth estimates (see discussion in Sect. 2.4 and 3.2), then the results suggest the ability to use the derived calibration relationships to predict snow depth when OIB does not operate, e.g. in future. The poor agreement between DuST and OIB for 2013 as compared to subsequent years could relate to the persistence and treatment of radar side lobes in the 2013 data . Our analysis would benefit from the inclusion of additional OIB campaign data in the calibration and comparison. At present, OIB data for 2017 and 2018 are not available. Figure 8Comparison of DuST and OIB snow depths for the 2013, 2014, 2015, and 2016 spring campaigns. Statistical results for all years are summarised in Table 3. Table 3Results of OIB and DuST comparison for the years 2013–2016. Figure 9(a) Envisat calibration relationship, derived from comparison of coincident OIB and Envisat data. Data and corresponding linear regression fits for 2009, 2010, 2011, and 2012 are shown in orange, purple, blue, and grey respectively. Star and square symbols represent multi-year and seasonal ice respectively, and the horizontal grey dashed line shows zero. (b) Snow depth for ICESat's 3E laser period (22 February–27 March 2006), retrieved by subtracting the calibrated Envisat freeboard from the ICESat freeboard and multiplying by a factor of 0.781. Please note that fb is freeboard. ## 3.4 Application of DuST to ICESat-Envisat The methodology outlined above demonstrates the ability to calibrate satellite freeboards with an independent data set in order to derive snow depth. It can be applied to any two coincident freeboard data sets and could be applicable to ICESat-2 which launched in September this year. In view of this possibility, we applied the methodology to the ICESat and Envisat satellites, whose periods of operation overlapped between 2003 and 2009. The Radar Altimeter 2 (RA2) instrument operated on the Envisat satellite from 2002 until 2012. It was a pulse-limited Ku-band radar altimeter which like that aboard CS-2, operated at a central frequency of 13.575 GHz. NASA's ICESat mission featured a Geoscience Laser Altimeter System (GLAS) in order to accurately measure changes in the elevation of the Antarctic and Greenland ice sheets. This laser was also used to estimate ice thickness from laser freeboard retrieval (Kwok et al.2007). Between 2003 and 2009, ICESat completed 17 observational campaigns; once every spring (February–March) and autumn (October–November) as well as three in the summers of 2004, 2005, and 2006. 
ICESat had a 70 m diameter footprint, so we assume that biases due to footprint size or retracking method are negligible, and that it offers accurate estimates of the snow freeboard. We use available ICESat freeboard data (version 1) from NSIDC (Yi and Zwally2009), in our analysis. Envisat freeboard data were processed by CPOM, and the reader is referred to for further details on the algorithm. Following the procedure outlined in Sect. 2.5, Envisat freeboard is calibrated to the snow–ice interface. Envisat has a larger footprint than AltiKa, nominally 2–10 km in diameter . As such, the waveform returns are more often classified as ambiguous (showing a complex mixture of scattering behaviour) and discarded, as discussed with reference to AltiKa in Sect. 2.3. As a result, Envisat data are sparsely populated and in order to have sufficient coverage for comparison with OIB data as well as 50 or more points per grid cell (to reduce speckle noise), it was necessary to increase both the grid resolution and time window as compared with the calibration procedure performed for AltiKa and CS-2. Satellite data for the ±15 days surrounding each 2009–2012 OIB campaign day were averaged onto a 3 longitude × 0.75 latitude grid. The value ΔfENV, defined as the OIB radar freeboard minus the Envisat freeboard and plotted against Envisat PP, is shown in Fig. 9a. The combined (all years) linear regression fit is shown by the black line and has slope of −0.23 and intercept 0.50. The shaded area shows the 68 % prediction interval for the CLRF, corresponding to a ±5cm standard error on ΔfENV. Dual-altimeter snow thickness, retrieved by subtracting the calibrated Envisat freeboard from the ICESat freeboard, is shown in Fig. 9b for the ICESat laser period 3E (22 February–27 March 2006). Snow depth spatial distribution follows the expected pattern of thicker snow (30–40 cm) over multi-year ice to the north of the Canadian Archipelago and in the Fram Strait, and thinner snow cover (<20cm) over seasonal ice. Overall higher magnitudes as compared with March 2016 (Fig. 3) could be the result of a decline in multi-year ice fraction and precipitation over the past decade. Though validation is required, the result demonstrates the viability of combining laser and calibrated radar freeboards to retrieve snow depth. 4 Conclusions Using independent snow and ice freeboard data from OIB, we derived calibration relationships to align AltiKa to the snow surface and CS-2 to the ice–snow interface as a function of their pulse peakiness. Calibrated CS-2 and AltiKa freeboard data were then combined to generate spatially extensive snow depth estimates across the Arctic Ocean between 2013 and 2016. The Dual-altimeter Snow Thickness (DuST) product was evaluated against OIB snow depth by successively omitting each year of OIB data from the calibration procedure, returning root-mean-square deviations of 7.7, 5.3, 5.9, and 6.7 cm for the years 2013, 2014, 2015, and 2016 respectively. While the OIB snow depth data cannot be considered statistically independent validation of the DuST product, this evaluation does demonstrate the ability to upscale OIB snow depths to the wider Arctic, i.e. predict OIB snow depths for an unsampled region and year. However, the DuST snow depth estimates remain unconstrained and unevaluated outside of the western Arctic and the spring season, due to a lack of coincident data. 
We used OIB snow radar data processed by NASA JPL in our analysis since this demonstrated best agreement with ERA-interim and the Warren climatology for the years 2013–2015; however, our methodology would benefit from the development of an optimal snow radar processing algorithm and snow depth product. Investigating the sensitivity of our product to the discrepancies between existing OIB snow radar data versions remains the subject of future work. The upcoming Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) campaign in autumn 2019 will provide a unique opportunity for validating DuST in regions not sampled by OIB (e.g. the eastern Arctic) throughout a full annual cycle. A dedicated dual-radar study is planned during the MOSAiC experiment, using in situ and on-aircraft Ku–Ka-band radar to quantify radar backscatter at each frequency together with snow depth and ice thickness measurements. This, in conjunction with AltiKa and CS-2 observations, will provide valuable insight into the validity of our calibration functions and retrieved DuST snow depths. Our methodology can also be applied to retrieve snow depth from coincident satellite radar and laser altimetry, which will have particular relevance when data from ICESat-2, launched in September 2018, become available. Here, we tested the applicability of the method to the ICESat and Envisat satellites, offering promising potential for the future retrieval of snow depth on Arctic sea ice from CS-2 and ICESat-2, with better coverage over the pole. Data availability Data availability. Satellite freeboard data: CryoSat-2 and Envisat along-track freeboard data used in this study were processed by the Centre for Polar Observation and Modelling (CPOM) and are available on request. AltiKa altimeter products were produced and distributed by Aviso+ (https://www.aviso.altimetry.fr/, last access: 21 October 2018) as part of the Ssalto ground processing segment. AltiKa waveform data, available via the site ftp://avisoftp.cnes.fr/AVISO/pub/saral/sgdr_t/ (last access: 28 February 2018) were processed into freeboard using the processor outlined in Armitage and Ridout (2015). ICESat freeboard is available from https://nsidc.org/data/nsidc-0393 (last access: 1 August 2017). Auxiliary data: Operation IceBridge ATM Quick Look data are hosted at the National Snow and Ice Data Center (NSIDC, https://nsidc.org/data/docs/daac/icebridge/evaluation_products/sea-ice-freeboard-snowdepth-thickness-quicklook-index.html, last access: 14 October 2016). Sea ice type is a product of the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF, http://www.osi-saf.org, last access: 1 March 2018). Daily gridded ice type fields can be accessed via the FTP site: ftp://osisaf.met.no/archive/ice/type (last access: 1 March 2018) and daily Quicklook Ice Type maps are available at http://osisaf.met.no/p/osisaf_hlprod_qlook.php?prod=Ice-Type&area=NH (last access: 1 March 2018). Sea ice motion vectors are distributed by NSIDC and can be found at: https://daacdata.apps.nsidc.org/pub/DATASETS/nsidc0116_icemotion_vectors_v3/browse/north/ (last access: 23 February 2018). Output data: The AltiKa–CryoSat-2 and ICESat–Envisat Dual-altimeter Snow Depth (DuST) products are available at http://www.cpom.ucl.ac.uk/DuST (last access: 11 November 2018). Author contributions Author contributions. IRL carried out the presented analysis and wrote the manuscript, with support from MCT, JCS, and TWKA. 
TWKA and ALR wrote the AltiKa and CryoSat-2 processors for the derivation of satellite freeboard and offered technical support in their implementation and adaptation by IRL. Competing interests Competing interests. The authors declare that they have no conflict of interest. Acknowledgements Acknowledgements. This work was funded primarily by the London National Environmental Research Council Doctoral Training Partnership grant (NE/L002485/1) and in part by the Arctic+ European Space Agency snow project ESA/AO/1-8377/15/I-NB NB – “STSE – Arctic+”. The authors wish to thank Ron Kwok, NASA Jet Propulsion Laboratory, for the use of his OIB snow radar data and Richard Chandler, University College London, for help in preparing this manuscript. Thomas Armitage was supported at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. CryoSat-2 and Envisat data were provided by the European Space Agency and processed by the Centre for Polar Observation and Modelling. AltiKa data were provided by AVISO. ICESat freeboard, Operation IceBridge data, and sea ice motion vectors were provided by the National Snow and Ice Data Center. Sea ice type masks were provided by the Ocean and Sea Ice Satellite Application Facilities. Edited by: Dirk Notz Reviewed by: two anonymous referees References Armitage, T. W. K. and Davidson, M. W. J.: Using the Interferometric Capabilities of the ESA CryoSat-2 Mission to Improve the Accuracy of Sea Ice Freeboard Retrievals, IEEE T. Geosci. Remote, 52, 529–536, https://doi.org/10.1109/TGRS.2013.2242082, 2014. a Armitage, T. W. K. and Ridout, A. L.: Arctic sea ice freeboard from AltiKa and comparison with CryoSat-2 and Operation IceBridge, Geophys. Res. Lett., 42, 6724–6731, https://doi.org/10.1002/2015GL064823, 2015. a, b, c, d, e, f, g, h, i, j Boisvert, L. N., Petty, A. A., and Stroeve, J. C.: The Impact of the Extreme Winter 2015/16 Arctic Cyclone on the Barents-Kara Seas, Mon. Weather Rev., 144, 4279–4287, https://doi.org/10.1175/MWR-D-16-0234.1, 2016. a Chelton, D. B., Ries, J. C., Haines, B. J., Fu, L.-L., and Callahan, P. S.: Satellite Altimetry, in: Satellite Altimetry and Earth Sciences: A Handbook of Tecnhinques and Applications, chap. 1, edited by: Fu, L.-L. and Cazenave, A., Academic Press, 2001. a, b Comiso, J. C.: Large decadal decline of the arctic multiyear ice cover, J. Climate, 25, 1176–1193, https://doi.org/10.1175/JCLI-D-11-00113.1, 2012. a Comiso, J. C., Cavalieri, D. J., and Markus, T.: Sea ice concentration, ice temperature, and snow depth using AMSR-E data, IEEE T. Geosci. Remote, 41, 243–252, https://doi.org/10.1109/TGRS.2002.808317, 2003. a Connor, L. N., Laxon, S. W., Ridout, A. L., Krabill, W. B., and McAdoo, D. C.: Comparison of Envisat radar and airborne laser altimeter measurements over Arctic sea ice, Remote Sens. Environ., 113, 563–570, https://doi.org/10.1016/j.rse.2008.10.015, 2009. a Giles, K. and Hvidegaard, S.: Comparison of space borne radar altimetry and airborne laser altimetry over sea ice in the Fram Strait, Int. J. Remote Sens., 1161, 37–41, https://doi.org/10.1080/01431160600563273, 2006. a Giles, K. A., Laxon, S. W., Wingham, D. J., Wallis, D. W., Krabill, W. B., Leuschen, C. J., McAdoo, D., Manizade, S. S., and Raney, R. K.: Combined airborne laser and radar altimeter measurements over the Fram Strait in May 2002, Remote Sens. Environ., 111, 182–194, https://doi.org/10.1016/j.rse.2007.02.037, 2007. a, b Giles, K. A., Laxon, S. W., and Ridout, A. 
L.: Circumpolar thinning of Arctic sea ice following the 2007 record ice extent minimum, Geophys. Res. Lett., 35, 2006–2009, https://doi.org/10.1029/2008GL035710, 2008. a Grenfell, T. C. and Maykut, G. A.: The optical properties of ice and snow in the Arctic basin, J. Glaciol., 18, 445–463, 1977. a Guerreiro, K., Fleury, S., Zakharova, E., Rémy, F., and Kouraev, A.: Remote Sensing of Environment Potential for estimation of snow depth on Arctic sea ice from CryoSat-2 and SARAL / AltiKa missions, Remote Sens. Environ., 186, 339–349, https://doi.org/10.1016/j.rse.2016.07.013, 2016. a, b, c, d, e Guerreiro, K., Fleury, S., Zakharova, E., Kouraev, A., Rémy, F., and Maisongrande, P.: Comparison of CryoSat-2 and ENVISAT radar freeboard over Arctic sea ice: toward an improved Envisat freeboard retrieval, The Cryosphere, 11, 2059–2073, https://doi.org/10.5194/tc-11-2059-2017, 2017. a, b, c, d, e Kern, S., Khvorostovsky, K., Skourup, H., Rinne, E., Parsakhoo, Z. S., Djepa, V., Wadhams, P., and Sandven, S.: The impact of snow depth, snow density and ice density on sea ice thickness retrieval from satellite radar altimetry: results from the ESA-CCI Sea Ice ECV Project Round Robin Exercise, The Cryosphere, 9, 37–52, https://doi.org/10.5194/tc-9-37-2015, 2015. a Kurtz, N.: IceBridge quick look sea ice freeboard, snow depth, and thickness product manual, Tech. rep., 2014. a Kurtz, N. T. and Farrell, S. L.: Large-scale surveys of snow depth on Arctic sea ice from Operation IceBridge, Geophys. Res. Lett., 38, 1–5, https://doi.org/10.1029/2011GL049216, 2011. a Kurtz, N. T., Farrell, S. L., Studinger, M., Galin, N., Harbeck, J. P., Lindsay, R., Onana, V. D., Panzer, B., and Sonntag, J. G.: Sea ice thickness, freeboard, and snow depth products from Operation IceBridge airborne data, The Cryosphere, 7, 1035–1056, https://doi.org/10.5194/tc-7-1035-2013, 2013. a Kurtz, N. T., Galin, N., and Studinger, M.: An improved CryoSat-2 sea ice freeboard retrieval algorithm through the use of waveform fitting, The Cryosphere, 8, 1217–1237, https://doi.org/10.5194/tc-8-1217-2014, 2014. a, b, c, d Kwok, R.: Simulated effects of a snow layer on retrieval of CryoSat-2 sea ice freeboard, Geophys. Res. Lett., 41, 5014–5020, https://doi.org/10.1002/2014GL060993, 2014. a Kwok, R. and Cunningham, G. F.: ICESat over Arctic sea ice: Estimation of snow depth and ice thickness, J. Geophys. Res.-Oceans, 113, 1–17, https://doi.org/10.1029/2008JC004753, 2008. a Kwok, R., Cunningham, G. F., Zwally, H. J., and Yi, D.: Ice, Cloud, and land Elevation Satellite (ICESat) over Arctic sea ice: Retrieval of freeboard, J. Geophys. Res.-Oceans, 112, 1–19, https://doi.org/10.1029/2006JC003978, 2007. a Kwok, R., Kurtz, N. T., Brucker, L., Ivanoff, A., Newman, T., Farrell, S. L., King, J., Howell, S., Webster, M. A., Paden, J., Leuschen, C., MacGregor, J. A., Richter-Menge, J., Harbeck, J., and Tschudi, M.: Intercomparison of snow depth retrievals over Arctic sea ice from radar data acquired by Operation IceBridge, The Cryosphere, 11, 2571–2593, https://doi.org/10.5194/tc-11-2571-2017, 2017. a, b, c, d, e Laxon, S., Peacock, N., and Smith, D.: High interannual variability of sea ice thickness in the Arctic region, Nature, 425, 947–950, https://doi.org/10.1038/nature02063.1., 2003. a Laxon, S. W., Giles, K. A., Ridout, A. L., Wingham, D. J., Willatt, R., Cullen, R., Kwok, R., Schweiger, A., Zhang, J., Haas, C., Hendricks, S., Krishfield, R., Kurtz, N., Farrell, S., and Davidson, M.: CryoSat-2 estimates of Arctic sea ice thickness and volume, Geophys. 
Res. Lett., 40, 732–737, https://doi.org/10.1002/grl.50193, 2013. a Maaß, N., Kaleschke, L., Tian-Kunze, X., and Drusch, M.: Snow thickness retrieval over thick Arctic sea ice using SMOS satellite data, The Cryosphere, 7, 1971–1989, https://doi.org/10.5194/tc-7-1971-2013, 2013 a Markus, T. and Cavalieri, D. J.: Snow Depth Distribution Over Sea Ice in the Southern Ocean from Satellite Passive Microwave Data, Antar. Res. S., 74, 19–39, https://doi.org/10.1029/AR074p0019, 1998. a Markus, T. and Cavalieri, D. J.: AMSR-E level 3 Sea Ice Products – Algorithm Theoretical Basis Document, Tech. rep., NASA Goddard Space Flight Center, 2012. a Maykut, G. A. and Untersteiner, N.: Some results from a time-dependent thermodynamic model of sea ice, J. Geophys. Res., 76, 1550–1575, https://doi.org/10.1029/JC076i006p01550, 1971. a Nandan, V., Geldsetzer, T., Yackel, J. J., Islam, T., Gill, J. P. S., and Mahmud, M.: Multifrequency Microwave Backscatter from a Highly Saline Snow Cover on Smooth First-Year Sea Ice: First-Order Theoretical Modeling, IEEE T. Geosci. Remote, 55, 2177–2190, https://doi.org/10.1109/TGRS.2016.2638323, 2017. a, b Peacock, N. R. and Laxon, S. W.: Sea surface height determination in the Arctic Ocean from ERS altimetry, J. Geophys. Res.-Oceans, 109, 1–14, https://doi.org/10.1029/2001JC001026, 2004. a Powell, D. C., Markus, T., Cavalieri, D. J., Gasiewski, A. J., Klein, M., Maslanik, J. A., Stroeve, J. C., and Sturm, M.: Microwave Signatures of Snow on Sea Ice: Modeling, IEEE T. Geosci. Remote, 44, 3091–3102, https://doi.org/10.1109/TGRS.2006.882139, 2006. a Raney, R. K.: Delay/Doppler radar altimeter for ice sheet monitoring, Igarss, 2, 862–864, 1995. a, b Rapley, C., Cooper, A. P., Brenner, A. C., and Drewry, D.: A Study of Satellite Radar Altimeter Operations Over Ice-covered Surfaces, Tech. Rep. July 2015, European Space Agency, 1983. a, b, c Ricker, R., Hendricks, S., Helm, V., Skourup, H., and Davidson, M.: Sensitivity of CryoSat-2 Arctic sea-ice freeboard and thickness on radar-waveform interpretation, The Cryosphere, 8, 1607–1622, https://doi.org/10.5194/tc-8-1607-2014, 2014. a, b Ricker, R., Hendricks, S., Girard-Ardhuin, F., Kaleschke, L., Lique, C., Tian-Kunze, X., Nicolaus, M., and Krumpen, T.: Satellite-observed drop of Arctic sea ice growth in winter 2015–2016, Geophys. Res. Lett., 44, 3236–3245, https://doi.org/10.1002/2016GL072244, 2017. a Ridout, A. and Ivanova, N.: Sea Ice Climate Change Initiative: D2.6 Algorithm Theoretical Basis Document (ATBDv1) Sea Ice Concentration, European Space Agency, 1, 1–41, 2013.  a Rinke, A., Maturilli, M., Graham, R. M., Matthes, H., Handorf, D., Cohen, L., Hudson, S. R., and Moore, J. C.: Extreme cyclone events in the Arctic: Wintertime variability and trends, Environ. Res. Lett., 12, 094006, https://doi.org/10.1088/1748-9326/aa7def, 2017. a Schwegmann, S., Rinne, E., Ricker, R., Hendricks, S., and Helm, V.: About the consistency between Envisat and CryoSat-2 radar freeboard retrieval over Antarctic sea ice, The Cryosphere, 10, 1415–1425, https://doi.org/10.5194/tc-10-1415-2016, 2016. a, b Stroeve, J. C., Schroder, D., Tsamados, M., and Feltham, D.: Warm winter, thin ice?, The Cryosphere, 12, 1791–1809, https://doi.org/10.5194/tc-12-1791-2018, 2018. a Stroeve, J. C., Serreze, M. C., Fetterer, F., Arbetter, T., Meier, W., Maslanik, J., and Knowles, K.: Tracking the Arctic's shrinking ice cover: Another extreme September minimum in 2004, Geophys. Res. Lett., 32, 1–4, https://doi.org/10.1029/2004GL021810, 2005. 
a Sturm, M., Holmgren, J., and Perovich, D. K.: Winter snow cover on the sea ice of the Arctic Ocean at the Surface Heat Budget of the Arctic Ocean (SHEBA): Temporal evolution and spatial variability, J. Geophys. Res., 107, 1–17, https://doi.org/10.1029/2000JC000400, 2002. a Tilling, R. L., Ridout, A., and Shepherd, A.: Estimating Arctic sea ice thickness and volume using CryoSat-2 radar altimeter data, Adv. Space Res., 62, 1203–1225, https://doi.org/10.1016/j.asr.2017.10.051, 2018. a, b, c, d, e, f, g Ulaby, F. T., Abdelrazik, M., and Stiles, W. H.: Snowcover Influence on Backscattering from Terrain, IEEE T. Geosci. Remote, GE-22, 126–133, https://doi.org/10.1109/TGRS.1984.350604, 1984. a, b Warren, S., Rigor, I., and Untersteiner, N.: Snow depth on Arctic sea ice, J. Climate, 1814–1829, https://doi.org/10.1175/1520-0442(1999)012<1814:SDOASI>2.0.CO;2, 1999. a, b Webster, M. a., Rigor, I. G., Nghiem, S. V., Kurtz, N. T., Farrell, S. L., Perovich, D. K., and Sturm, M.: Interdecadal changes in snow depth on Arctic sea ice, J. Geophys. Res.-Oceans, 5395–5406, https://doi.org/10.1002/2014JC009985, 2014. a Willatt, R., Laxon, S., Giles, K., Cullen, R., Haas, C., and Helm, V.: Ku-band radar penetration into snow cover on Arctic sea ice using airborne data, Ann. Glaciol., 52, 197–205, https://doi.org/10.3189/172756411795931589, 2011. a Wingham, D. J., Francis, C. R., Baker, S., Bouzinac, C., Brockley, D., Cullen, R., de Chateau-Thierry, P., Laxon, S. W., Mallow, U., Mavrocordatos, C., Phalippou, L., Ratier, G., Rey, L., Rostan, F., Viau, P., and Wallis, D. W.: CryoSat: A mission to determine the fluctuations in Earth's land and marine ice fields, Adv. Space Res., 37, 841–871, 2006. a, b Yi, D. and Zwally, H. J.: Arctic Sea Ice Freeboard and Thickness, Version 1, Boulder, Colorado USA. NSIDC: National Snow and Ice Data Center, https://doi.org/10.5067/SXJVJ3A2XIZT, 2009 (updated 15 April 2014). a Zygmuntowska, M., Rampal, P., Ivanova, N., and Smedsrud, L. H.: Uncertainties in Arctic sea ice thickness and volume: new estimates and implications for trends, The Cryosphere, 8, 705–720, https://doi.org/10.5194/tc-8-705-2014, 2014. a
SCIENCE CHINA Information Sciences, Volume 63 , Issue 8 : 182104(2020) https://doi.org/10.1007/s11432-019-2771-0 ## Important sampling based active learning for imbalance classification • ReceivedSep 26, 2019 • AcceptedJan 19, 2020 • PublishedJul 7, 2020 Share Rating ### Abstract Imbalance in data distribution hinders the learning performance of classifiers. To solve this problem, a popular type of methods is based on sampling (including oversampling for minority class and undersampling for majority class) so that the imbalanced data becomes relatively balanced data. However, they usually focus on one sampling technique, oversampling or undersampling. Such strategy makes the existing methods suffer from the large imbalance ratio (the majority instances size over the minority instances size). In this paper, an active learning framework is proposed to deal with imbalanced data by alternative performing important sampling (ALIS), which consists of selecting important majority-class instances and generating informative minority-class instances. In ALIS, two important sampling strategies affect each other so that the selected majority-class instances provide much clearer information in the next oversampling process, meanwhile the generated minority-class instances provide much more sufficient information for the next undersampling procedure. Extensive experiments have been conducted on real world datasets with a large range of imbalance ratio to verify ALIS. The experimental results demonstrate the superiority of ALIS in terms of several well-known evaluation metrics by comparing with the state-of-the-art methods. ### Acknowledgment This work was supported in part by National Natural Science Foundation of China (Grant Nos. 61822601, 61773050, 61632004, 61972132), Beijing Natural Science Foundation (Grant No. Z180006), National Key Research and Development Program (Grant No. 2017YFC1703506), Fundamental Research Funds for the Central Universities (Grant Nos. 2019JBZ110, 2019YJS040), Youth Foundation of Hebei Education Department (Grant No. QN2018084), Science and Technology Foundation of Hebei Agricultural University (Grant No. LG201804), and Research Project for Self-cultivating Talents of Hebei Agricultural University (Grant No. PY201810). Appendixes A–C. ### References [1] Xu C, Tao D, Xu C. Robust extreme multi-label learning. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. 1275--1284. Google Scholar [2] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, 2017. 2980--2988. Google Scholar [3] Batuwita R, Palade V. Efficient resampling methods for training support vector machines with imbalanced datasets. In: Proceedings of the International Joint Conference on Neural Networks, 2010. 1--8. Google Scholar [4] Peng Y. Adaptive sampling with optimal cost for class-imbalance learning. In: Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015. 2921--2927. Google Scholar [5] Attenberg J, Ertekin S. Class imbalance and active learning. iinde Imbalanced Learning: Foundations, Algorithms, and Applications, 2013. 101--149. Google Scholar [6] Guo J, Wan X, Lin H, et al. An active learning method based on mistake sampling for large scale imbalanced classification. In: Proceedings of International Conference on Service Systems and Service Management, 2017. 1--6. Google Scholar [7] Stefanowski J. 
Dealing with data difficulty factors while learning from imbalanced data. In: Challenges in Computational Statistics and Data Mining. Berlin: Springer, 2016. 333--363. Google Scholar [8] Alejo R, Valdovinos R M, García V. A hybrid method to face class overlap and class imbalance on neural networks and multi-class scenarios. Pattern Recognition Lett, 2013, 34: 380-388 CrossRef Google Scholar [9] Cheng F, Zhang J, Wen C. Cost-Sensitive Large margin Distribution Machine for classification of imbalanced data. Pattern Recognition Lett, 2016, 80: 107-112 CrossRef Google Scholar [10] Chung Y A, Lin H T, Yang S W. Cost-aware pre-training for multiclass cost-sensitive deep learning. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence, 2016. 1411--1417. Google Scholar [11] Ren Y, Zhao P, Sheng Y, Yao D, Xu Z. Robust softmax regression for multi-class classification with self-paced learning. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2017. 2641--2647. Google Scholar [12] Chawla N V, Bowyer K W, Hall L O. SMOTE: Synthetic Minority Over-sampling Technique. jair, 2002, 16: 321-357 CrossRef Google Scholar [13] Han H, Wang W Y, Mao B H. Borderline-smote: a new over-sampling method in imbalanced data sets learning. In: Advances in Intelligent Computing. Berlin: Springer, 2005. 878--887. Google Scholar [14] Tang B, He H. Kerneladasyn: kernel based adaptive synthetic data generation for imbalanced learning. In: Proceedings of IEEE Congress on Evolutionary Computation, 2015. 664--671. Google Scholar [15] Zhou C, Liu B, Wang S. Cmo-smote: misclassification cost minimization oriented synthetic minority oversampling technique for imbalanced learning. In: Proceedings of the 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2016. 353--358. Google Scholar [16] Barua S, Islam M M, Yao X. MWMOTE--Majority Weighted Minority Oversampling Technique for Imbalanced Data Set Learning. IEEE Trans Knowl Data Eng, 2014, 26: 405-425 CrossRef Google Scholar [17] Yuan J, Li J, Zhang B. Learning concepts from large scale imbalanced data sets using support cluster machines. In: Proceedings of the 14th ACM International Conference on Multimedia, 2006. 441--450. Google Scholar [18] Haibo He , Garcia E A. Learning from Imbalanced Data. IEEE Trans Knowl Data Eng, 2009, 21: 1263-1284 CrossRef Google Scholar [19] Tahir M A, Kittler J, Yan F. Inverse random under sampling for class imbalance problem and its application to multi-label classification. Pattern Recognition, 2012, 45: 3738-3750 CrossRef Google Scholar [20] Galar M, Fernández A, Barrenechea E. EUSBoost: Enhancing ensembles for highly imbalanced data-sets by evolutionary undersampling. Pattern Recognition, 2013, 46: 3460-3471 CrossRef Google Scholar [21] Thanathamathee P, Lursinsap C. Handling imbalanced data sets with synthetic boundary data generation using bootstrap re-sampling and AdaBoost techniques. Pattern Recognition Lett, 2013, 34: 1339-1347 CrossRef Google Scholar [22] Settles B. Active Learning Literature Survey. Technical Report. University of Wisconsin-Madison Department of Computer Sciences, 2009. Google Scholar [23] Lughofer E, Weigl E, Heidl W. Integrating new classes on the fly in evolving fuzzy classifier designs and their application in visual inspection. Appl Soft Computing, 2015, 35: 558-582 CrossRef Google Scholar [24] Weigl E, Heidl W, Lughofer E. 
On improving performance of surface inspection systems by online active learning and flexible classifier updates. Machine Vision Appl, 2016, 27: 103-127 CrossRef Google Scholar [25] Pratama M, Dimla E, Lai C Y. Metacognitive learning approach for online tool condition monitoring. J Intell Manuf, 2019, 30: 1717-1737 CrossRef Google Scholar [26] Ertekin S, Huang J, Bottou L, Giles L. Learning on the border: active learning in imbalanced data classification. In: Proceedings of the 16th ACM Conference on Information and Knowledge Management, 2007. 127--136. Google Scholar [27] Batuwita R, Palade V. Class imbalance learning methods for support vector machines. Imbalanced Learning: Foundations, Algorithms, and Applications, 2013. 83. Google Scholar [28] Sangyoon Oh , Min Su Lee , Byoung-Tak Zhang . Ensemble learning with active example selection for imbalanced biomedical data classification.. IEEE/ACM Trans Comput Biol Bioinf, 2011, 8: 316-325 CrossRef PubMed Google Scholar [29] Chen Y, Mani S. Active learning for unbalanced data in the challenge with multiple models and biasing. In: Proceedings of Workshop on Active Learning and Experimental Design, 2011. 113--126. Google Scholar [30] Zhang X, Yang T, Srinivasan P. Online asymmetric active learning with imbalanced data. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. 2055--2064. Google Scholar [31] Zhang T, Zhou Z H. Large margin distribution machine. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2014. 313--322. Google Scholar [32] Roweis S. Boltzmann machines. Lecture notes, 1995. Google Scholar [33] Yang Y, Ma Z, Nie F, et al. Multi-class active learning by uncertainty sampling with diversity maximization. Int J Comput Vision, 2015, 113: 113--127. Google Scholar [34] Asuncion A, Newman D. Uci machine learning repository, 2007. Google Scholar [35] Alcalá-Fdez J, Fernández A, Luengo J, et al. Keel data-mining software tool: data set repository, integration of algorithms and experimental analysis framework. J Multiple-Valued Logic Soft Comput, 2010, 17: 255--287. Google Scholar [36] Sun Z, Song Q, Zhu X. A novel ensemble method for classifying imbalanced data. Pattern Recognition, 2015, 48: 1623-1637 CrossRef Google Scholar [37] Yan Q, Xia S, Meng F. Optimizing cost-sensitive svm for imbalanced data: connecting cluster to classification. 2017,. arXiv Google Scholar [38] Wu F, Jing X Y, Shan S, et al. Multiset feature learning for highly imbalanced data classification. In: Prcoeedings of the 31st AAAI Conference on Artificial Intelligence, 2017. Google Scholar [39] More A. Survey of resampling techniques for improving classification performance in unbalanced datasets. 2016,. arXiv Google Scholar • Figure 1 (Color online) The schematic diagram of ALIS framework. The classifier is initially trained by all positive instances $P_{\rm~active}^{0}$ and equal amount of random negative instances $N_{\rm~active}^0$, and is updated iteratively according to new selected negative points or new generated positive points. 
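To illustrate the alternating structure described in the abstract and in Fig. 1, here is a hedged Python sketch of such a loop. It is not the published ALIS algorithm: a plain logistic regression stands in for the paper's large-margin classifier, distance to the decision boundary stands in for the importance measure of Algorithm 1, and Gaussian jitter around existing minority points stands in for the kernel-based oversampling of Algorithm 2.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def alis_sketch(X_pos, X_neg, n_rounds=10, batch=50, seed=0):
    """Hedged sketch of an ALIS-style alternating loop (illustrative only)."""
    rng = np.random.default_rng(seed)

    # Initial training set: all positives plus an equal number of random negatives.
    idx = rng.choice(len(X_neg), size=min(len(X_pos), len(X_neg)), replace=False)
    neg_active = X_neg[idx]
    pool = np.delete(X_neg, idx, axis=0)
    pos_active = np.array(X_pos, dtype=float)

    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        X = np.vstack([pos_active, neg_active])
        y = np.r_[np.ones(len(pos_active)), np.zeros(len(neg_active))]
        clf.fit(X, y)

        # "Important undersampling": move the pool negatives closest to the
        # current decision boundary into the active set.
        if len(pool) > 0:
            dist = np.abs(clf.decision_function(pool))
            take = np.argsort(dist)[:batch]
            neg_active = np.vstack([neg_active, pool[take]])
            pool = np.delete(pool, take, axis=0)

        # "Important oversampling": synthesise new positives near existing ones.
        seeds = pos_active[rng.integers(0, len(pos_active), size=batch)]
        pos_active = np.vstack([pos_active, seeds + 0.05 * rng.standard_normal(seeds.shape)])

    return clf, pos_active, neg_active
```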
Table 1 Notations in the ALIS framework.

| Notation | Description |
|---|---|
| $\mathcal{P}$ | Original positive class set |
| $n^+$ | The number of positive instances |
| $\mathcal{N}$ | Original negative class set |
| $n^-$ | The number of negative instances |
| $\mathcal{D}$ | Original training set, $\mathcal{D}=\mathcal{P}\cup\mathcal{N}$ |
| $n$ | The number of training instances, $n=n^+ + n^-$ |
| $\mathcal{P}_{\rm active}^j$ | Generated synthetic positive class set in the $j$th iteration |
| $\mathcal{N}_{\rm active}^j$ | Selected negative class set in the $j$th iteration |
| $\mathcal{N}_{\rm pool}$ | Remaining negative class set after active selection |
| $\mathcal{P}_{\rm active}$ | Generated synthetic positive class set, $\mathcal{P}_{\rm active}=\bigcup_{j}\mathcal{P}_{\rm active}^{j}$ |
| $\mathcal{N}_{\rm active}$ | Selected negative class set, $\mathcal{N}_{\rm active}=\bigcup_{j}\mathcal{N}_{\rm active}^{j}$ |
| $\omega$ | A linear predictor |
| $f$ | A linear model |
| $\lambda_1$ | The trade-off parameter for controlling the margin variance |
| $\lambda_2$ | The trade-off parameter for controlling the margin mean |

Algorithm 1 Important undersampling algorithm
Input: ${\rm Classifier}^{j}$, pool negative dataset $\mathcal{N}_{\rm pool}$, batchsize.
Output: actively selected negative dataset $\mathcal{N}_{\rm active}^{j}$.
Initialize times = 0, ${\rm ratio}_1=1$, ${\rm ratio}_2=0$;
$\mathcal{N}_{\rm pool}^{\prime}$: order $\mathcal{N}_{\rm pool}$ by the distance between each instance and the decision boundary of ${\rm Classifier}^j$;
while ${\rm ratio}_{2} < {\rm ratio}_{1}$ do
  times = times + 1;
  $\mathcal{N}_{1}$ = top $({\rm times}\times{\rm batchsize})$ instances in $\mathcal{N}_{\rm pool}^{\prime}$;
  $\mathcal{N}_{2}$ = top $(({\rm times}+1)\times{\rm batchsize})$ instances in $\mathcal{N}_{\rm pool}^{\prime}$;
  calculate ${\rm ratio}_1$ and ${\rm ratio}_2$ of $\mathcal{N}_{1}$ and $\mathcal{N}_{2}$ respectively, according to (8);
end while
$\mathcal{N}_{\rm active}^{j} = \mathcal{N}_{1}$.
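For illustration only, a minimal Python sketch of the selection loop in Algorithm 1 might look as follows. The scikit-learn-style `decision_function` interface and the `compute_ratio` helper (a stand-in for Eq. (8), which is not reproduced in this excerpt) are assumptions, not part of the original paper.

```python
import numpy as np

def important_undersampling(classifier, N_pool, batchsize, compute_ratio):
    """Sketch of Algorithm 1: actively select negative instances near the boundary.

    `classifier` is assumed to expose a decision_function method;
    `compute_ratio` is a hypothetical stand-in for Eq. (8) of the paper.
    """
    # Order the negative pool by distance to the current decision boundary.
    distances = np.abs(classifier.decision_function(N_pool))
    N_pool_sorted = N_pool[np.argsort(distances)]

    times, ratio_1, ratio_2 = 0, 1.0, 0.0
    while ratio_2 < ratio_1:
        times += 1
        N_1 = N_pool_sorted[: times * batchsize]
        N_2 = N_pool_sorted[: (times + 1) * batchsize]
        # Ratios of the two candidate subsets, as prescribed by Eq. (8).
        ratio_1, ratio_2 = compute_ratio(N_1), compute_ratio(N_2)
    return N_1  # actively selected negative set for this iteration
```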
Algorithm 2 Important oversampling algorithm
Input: $\mathcal{P}_{\rm active}$, $\mathcal{N}_{\rm active}$, $k$.
Output: synthetic minority dataset $\mathcal{P}_{\rm active}^{j}$.
Set the bandwidth $h_i=\min {\rm dis}(\boldsymbol{x}_i, {\rm NN}(\boldsymbol{x}_i))$;
Identify the informative minority-class set $\mathcal{P}^{\rm info}$ via (9);
for $\boldsymbol{x}_{i}\in\mathcal{P}^{\rm info}$: set the mixture weight $\xi_i$ via (11).

Table 2 Description of the datasets.

| Dataset | $n$ | $m$ | $n^-$ | $n^+$ | ratio ($n^-/n^+$) |
|---|---|---|---|---|---|
| haberman | 306 | 3 | 225 | 81 | 2.8 |
| libra | 360 | 90 | 288 | 72 | 4 |
| glass6 | 214 | 9 | 185 | 29 | 6.38 |
| ecoli3 | 336 | 7 | 301 | 35 | 8.6 |
| yeast0256vs3789 | 1004 | 8 | 905 | 99 | 9.14 |
| Satimage | 6435 | 36 | 5809 | 626 | 9.27 |
| balance | 625 | 4 | 576 | 49 | 11.8 |
| shuttlec0vsc4 | 1829 | 9 | 1706 | 123 | 13.87 |
| Letter-a | 20000 | 16 | 19211 | 789 | 24.34 |
| yeast4 | 1484 | 8 | 1433 | 51 | 28.1 |
| yeast6 | 1484 | 8 | 1449 | 35 | 41.4 |
| abalone19 | 4174 | 7 | 4142 | 32 | 129.44 |

Table 3 Analysis of variance (ANOVA) test and winning times of the pairwise t-test (in brackets) between ALIS and the baseline on twelve real-world datasets.

| Metric | haberman | libra | glass6 | ecoli3 | yeast0256vs3789 | Satimage |
|---|---|---|---|---|---|---|
| Precision-majority | 4.79E-03 (4) | 1.18E-06 (4) | 0.64 (0) | 3.86E-10 (2) | 2.48E-05 (2) | 1.36E-46 (5) |
| Recall-minority | 9.36E-10 (4) | 2.38E-06 (2) | 0.62 (0) | 1.21E-10 (2) | 7.28E-09 (4) | 5.19E-31 (5) |
| F$_{\rm macro}$ | 2.79E-07 (5) | 0.0020 (2) | 0.0011 (2) | 5.44E-02 (4) | 1.27E-08 (3) | 2.00E-36 (5) |
| AUC | 0.033 (2) | 3.45E-09 (3) | 1.79E-09 (2) | 5.45E-15 (3) | 7.83E-09 (3) | 2.59E-06 (3) |

| Metric | balance | shuttlec0vsc4 | Letter-a | yeast4 | yeast6 | abalone19 |
|---|---|---|---|---|---|---|
| Precision-majority | 8.37E-05 (4) | 7.20E-12 (4) | 8.70E-09 (4) | 4.36E-13 (3) | 1.30E-10 (3) | 0.15 (3) |
| Recall-minority | 4.53E-05 (3) | 4.14E-12 (4) | 1.82E-08 (4) | 6.85E-17 (3) | 5.23E-14 (3) | 1.32E-07 (3) |
| F$_{\rm macro}$ | 0.0249 (1) | 1.40E-07 (4) | 1.79E-27 (4) | 2.74E-08 (2) | 3.72E-06 (2) | 0.0428 (0) |
| AUC | 2.01E-06 (2) | 5.15E-17 (2) | 8.43E-22 (3) | 0.8846 (1) | 4.07E-05 (2) | 4.48E-06 (2) |
# quantum physics

1. ### Studying Self-studying plan for modern science
Hey guys, I want to build a strong and straight plan for my next years of studying and once finish I am able to do something on my own and come up with crazy ideas and actually test them, build some awesome algorithms, all that cool stuff, but I'm kinda stumble so it would be nice if someone...

2. ### A Simultanious eigenstate of Hubbard Hamiltonian and Spin operator in tw
Please see this page and give me an advice. https://physics.stackexchange.com/questions/499269/simultanious-eigenstate-of-hubbard-hamiltonian-and-spin-operator-in-two-site-mod Known fact 1. If two operators $A$ and $B$ commute, $[A,B]=0$, they have simultaneous eigenstates. That means...

3. ### Shelf in a box, treating the shelf as a weak perturbation
In this problem I am supposed to treat the shelf as a weak perturbation. However it doesn't give us what the perturbed state H' is. At the step V(x) = Vo, but that is all that is given and isn't needed to determine H'. This isn't in a weak magnetic field so I wouldn't you use H'=qEx and then...

4. ### Constant of proportionality in probability of superposition of states
Using the fact that Pa ∝ |α|^2 and Pb ∝ |β|^2, we get: Pa = k|α|^2 and Pb = k|β|^2. Since the probability of measuring the two states must add up to 1, we have Pa + Pb = 1 => k = 1/(|α|^2 + |β|^2). Substituting this in Pa and Pb, we get: Pa = |α|^2/(|α|^2 + |β|^2) and Pb = |β|^2/(|α|^2 + |β|^2)... (see the sketch after this list)

5. ### I Relation between the momentum operator and the Hamiltonian gradient operator
Is there a relationship between the momentum operator matrix elements and the following: <φ|dH/dkx|ψ> where kx is the Bloch wave number such that if I have the latter calculated for the x direction as a matrix, I can get the momentum operator matrix elements from it?

6. ### How to find von-Klitzing constant based on graph?
Hi all, Given that the question: From what i know, im not sure how this equation can help me estimate the von-klitzing constant? Or is there another way? Thanks!

7. ### A LS vs jj couplings and their selection rules
Two questions, where the 1st is related to previous discussion regarding these couplings: The selection rules for LS coupling are quite clear - they are based on calculating the compatible electric dipole matrix element. However, in the case of jj coupling we end up with different selection rules...

8. ### I Confusion on binding energy and ionization energy.
1) I know that the binding energy is the energy that holds a nucleus together (which equals the mass defect E = mc2). But what does it mean when we are talking about the binding energy of an electron (eg. binding energy = -Z2R/n2?). Some websites say that "binding energy = - ionization...

9. ### I David Deutsch (1985) attempt to solve the incoherence problem
Can anyone elaborate on Deutsch's attempt to solve the incoherence problem? He postulates a continuously infinite set of universes, together with a preferred measure on that set. And so when a measurement occurs, the proportion of universes in the original branch that end up on a given branch...

10. ### Insights A Classical View of the Qubit - Comments
Greg Bernhardt submitted a new blog post A Classical View of the Qubit. Continue reading the Original Blog Post.

11. ### Evolution and quantum physics
Is it possible that evolution happens in quantum jumps as no intermediate lifeforms were ever found? Analogous to an electron jumping from a lower energy level to a higher energy level without intermediary states.

12. ### B Quantum entanglement phenomenon
Hi there, Question from a biologist with very poor background in physics, but willing to understand quantum physics. I think quantum entanglement shocks everyone, even if it has been proven right. I would love to know if there is any hypothesis or crazy theory out there to explain why or how...

13. ### Against "interpretation" - Comments
Greg Bernhardt submitted a new blog post Against "interpretation". Continue reading the Original Blog Post.

14. ### I Van der Waals force in quantum physics
According to QFT, are there hydrogen bonds or Van der Waals forces? Or is this an outdated concept of classical physics?

15. ### Insights The Quantum Mystery of Wigner's Friend - Comments
Greg Bernhardt submitted a new blog post Wigner's Friend. Continue reading the Original Blog Post.

16. ### A Quantization of the electric field inside a box
Hello all, The second quantization of a general electromagnetic field assumes the energy density integration to be performed inside a box in 3D space. Someone mentioned to me recently that the physical significance of the actual volume used is that it should be chosen based on the detector used...

17. ### Insights The Unreasonable Effectiveness of the Popescu-Rohrlich Correlations - Comments
Greg Bernhardt submitted a new blog post The Unreasonable Effectiveness of the Popescu-Rohrlich Correlations. Continue reading the Original Blog Post.

18. ### I Causality and quantum physics
Let me present what I think is the understanding of a particular situation in quantum mechanics, and ask people to tell me whether I am right or wrong. To say that everything happens randomly in QM would be misleading at best. We get at least statistical prediction. But discussions such as the...

19. ### Expectation value <p> of the ground state of hydrogen
1. Homework Statement: How should I calculate the expectation value of momentum of an electron in the ground state of the hydrogen atom. 2. Homework Equations 3. The Attempt at a Solution: I am trying to apply the p operator, i.e. $-ihd/dx$, to $\psi$ and integrating it from 0 to infinity...

20. ### I Dressed electrons are not defined as point masses...
In @A. Neumaier 's excellent Physics FAQ, he notes under "Are electrons pointlike/structureless?" that "Physical, measurable particles are not points but have extension. By definition, an electron without extension would be described exactly by the 1-particle Dirac equation, which has a...

21. ### Finding state vectors for pure states!
1. Homework Statement: Is the following matrix a state operator? If it is a state operator, is it a pure state? If so, find the state vectors for the pure state. If you don't see the image, here is the matrix, which is 2x2, in matlab code: [9/25 12/25; 12/25 16/25] 2. Homework...

25. ### Harmonic Oscillator violating Heisenberg's Uncertainity
1. Homework Statement: Does the n = 2 state of a quantum harmonic oscillator violate the Heisenberg Uncertainty Principle? 2. Homework Equations $$\sigma_x\sigma_p = \frac{\hbar}{2}$$ 3. The Attempt at a Solution: I worked out the solution for the second state of the harmonic oscillator...

26. ### I Can something be caused and be ontologically random? Or does ontological probability exist?
I was reading an article that came up in my google searches ( https://breakingthefreewillillusion.com/ontic-probability-doesnt-exist/ ), ignore the free will philosophy stuff. But the author makes the claim that ontological probability simply does not...

27. ### I Einstein-Bohr "photon box" debate and general relativity
I see this has been already discussed but the old threads are closed. EPR before EPR: a 1930 Einstein-Bohr thought experiment revisited "In this example, Einstein presents a paradox in QM suggesting that QM is inconsistent, while Bohr attempts to save consistency of QM by combining QM with the...
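As a quick numerical check of the normalization worked out in item 4 above, here is a small Python snippet; the particular amplitude values are arbitrary and chosen only for illustration.

```python
import numpy as np

# Arbitrary (unnormalized) amplitudes for a two-state superposition.
alpha, beta = 0.3 + 0.4j, 1.2 - 0.5j

# Probabilities proportional to |alpha|^2 and |beta|^2, with k = 1/(|alpha|^2 + |beta|^2).
norm = abs(alpha) ** 2 + abs(beta) ** 2
Pa = abs(alpha) ** 2 / norm
Pb = abs(beta) ** 2 / norm

print(Pa, Pb, Pa + Pb)  # Pa + Pb equals 1 up to floating-point rounding
```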
Hydrology and Earth System Sciences | An interactive open-access journal of the European Geosciences Union
Hydrol. Earth Syst. Sci., 23, 851–870, 2019 | https://doi.org/10.5194/hess-23-851-2019
Research article | 13 Feb 2019

# Linear Optimal Runoff Aggregate (LORA): a global gridded synthesis runoff product

Sanaa Hobeichi (1,2), Gab Abramowitz (1,3), Jason Evans (1,3), and Hylke E. Beck (4)

• 1 Climate Change Research Centre, University of New South Wales, Sydney, NSW 2052, Australia
• 2 ARC Centre of Excellence for Climate System Science, University of New South Wales, Sydney, NSW 2052, Australia
• 3 ARC Centre of Excellence for Climate Extremes, University of New South Wales, Sydney, NSW 2052, Australia
• 4 Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ 08544, USA

Correspondence: Sanaa Hobeichi (s.hobeichi@student.unsw.edu.au)

Abstract. No synthesized global gridded runoff product, derived from multiple sources, is available, despite such a product being useful for meeting the needs of many global water initiatives. We apply an optimal weighting approach to merge runoff estimates from hydrological models constrained with observational streamflow records. The weighting method is based on the ability of the models to match observed streamflow data while accounting for error covariance between the participating products. To address the lack of observed streamflow for many regions, a dissimilarity method was applied to transfer the weights of the participating products to the ungauged basins from the closest gauged basins, using dissimilarity between basins in physiographic and climatic characteristics as a proxy for distance. We perform out-of-sample tests to examine the success of the dissimilarity approach, and we confirm that the weighted product performs better than its 11 constituent products in a range of metrics. Our resulting synthesized global gridded runoff product is available at monthly timescales, and includes time-variant uncertainty, for the period 1980–2012 on a 0.5° grid. The synthesized global gridded runoff product broadly agrees with published runoff estimates at many river basins, and represents the seasonal runoff cycle for most of the globe well. The new product, called Linear Optimal Runoff Aggregate (LORA), is a valuable synthesis of existing runoff products and will be freely available for download on https://geonetwork.nci.org.au/geonetwork/srv/eng/catalog.search#/metadata/f9617_9854_8096_5291 (last access: 31 January 2019).

1 Introduction

Runoff is the horizontal flow of water on land or through soil before it reaches a stream, river, lake, reservoir or other channel. It has been widely used as a metric for droughts (Shukla and Wood, 2008; van Huijgevoort et al., 2013; Bai et al., 2014; Ling et al., 2016) and to understand the effects of climate change on the hydrological cycle (Ukkola et al., 2016; Zhai and Tao, 2017). Characterizing its dynamics and magnitudes is a major research aim of hydrology and hydrometeorology and is of critical importance for improving our understanding of the current conditions of the large-scale water cycle and predicting its future states.
More accurate estimates also provide additional constraint for climate model evaluation, yet direct measurement of runoff at large scales is simply not possible. While runoff observations do not exist, direct streamflow or river discharge observations – basin-integrated runoff – have been archived in many databases. The most comprehensive international streamflow database is the Global Runoff Data Base (GRDB; https://www.bafg.de, last access: 1 June 2017), which consists of daily and monthly quality-controlled streamflow records from more than 9500 gauges across the globe. Geospatial Attributes of Gages for Evaluating Streamflow, version II (GAGES-II; Falcone et al., 2010), represents another noteworthy streamflow database, consisting of daily quality-controlled streamflow data from over 9000 US gauges.

Hydrological and land surface models are capable of producing gridded runoff estimates for any region across the globe (Sood and Smakhtin, 2015; Bierkens, 2015; Kauffeldt et al., 2016). However, these runoff estimates suffer from uncertainties due to shortcomings in the model structure and parameterization and the meteorological forcing data (Beven, 1989; Beck, 2017a). There are various ways to use streamflow observations for improving the runoff outputs from these models. The conventional approach consists of model parameter calibration using locally observed streamflow data (see review by Pechlivanidis et al., 2011). Another widely used method is through regionalization; that is, the transfer of knowledge (e.g. calibrated parameters) from gauged basins to ungauged basins (see review by Beck et al., 2016). In contrast, several other studies attempted to correct the runoff outputs directly rather than the model parameters, for example by bias-correcting model runoff outputs based on streamflow observations (Fekete et al., 2002; Ye et al., 2014) or by combining or weighting ensembles of model outputs to obtain improved runoff estimates (e.g. Aires, 2014). There are, however, relatively few continental- and global-scale efforts to improve model estimates using observed streamflow.

Table 1 Model outputs from tiers 1 and 2 of the eartH2Observe project used to derive the synthesis runoff product.

A broad array of gridded model-based runoff estimates are freely available, including but not limited to ECMWF's interim reanalysis (ERA-Interim; Dee et al., 2011), NASA's Modern Era Retrospective-analysis for Research and Applications (MERRA) land dataset (Reichle et al., 2011), the Climate Forecast System Reanalysis (CFSR; Tomy and Sumam, 2016), the second Global Soil Wetness Project (GSWP2; Dirmeyer et al., 2006), the Water Model Intercomparison Project (WaterMIP; Haddeland et al., 2011), and the Global Land Data Assimilation System (GLDAS; Rodell et al., 2004). Recently, the eartH2Observe project has made two ensembles (tier 1 and tier 2) of state-of-the-art global hydrological and land surface model outputs available (http://www.earth2observe.eu/, last access: 25 April 2018; Beck et al., 2017a; Schellekens et al., 2017). Although model simulations represent the only time-varying gridded estimates of runoff at the global scale, they are subject to considerable uncertainties, resulting in large differences in runoff simulated by the models. Many studies have therefore evaluated and compared the gridded runoff models (see overview in Table 1 of Beck et al., 2017a).
Despite the demonstrated improved predictive capability of multi-model ensemble approaches (Sahoo et al., 2011; Pan et al., 2012; Bishop and Abramowitz, 2013; Mueller et al., 2013; Munier et al., 2014; Aires, 2014; Rodell et al., 2015; Jiménez et al., 2018; Hobeichi et al., 2018; Zhang et al., 2018), very little has been done to utilize this range of model simulations toward improved runoff estimates. This paper implements the weighting and rescaling method introduced in Bishop and Abramowitz (2013) and Abramowitz and Bishop (2015) to derive a monthly 0.5° global synthesis runoff product. Briefly summarized, we use a bias-correction and weighting approach to merge 11 state-of-the-art gridded runoff products from the eartH2Observe project, constrained by observed streamflow from a variety of sources. This approach also provides us with corresponding uncertainty estimates that are better constrained than the simple range of modelled values. For ungauged regions, we employ a dissimilarity method to transfer the product weights to the ungauged basins from the closest basins, using dissimilarity between basins as a proxy for distance. Such a synthesis product is in line with the multi-source strategy of Global Energy and Water Exchanges (GEWEX; Morel, 2001) and the initiatives of NASA's Making Earth System Data Records for Use in Research Environments (MEaSUREs; Earthdata, 2017) and is particularly useful for studies that aim to close the water budget at the grid scale.

Section 2.1 describes the observed streamflow data. Section 2.2 presents the participating datasets used to derive the weighted runoff product. Section 2.3 details the weighting method implemented in the gauged basins, while Sect. 2.4 focuses on the ungauged basins. Section 2.5 examines the approach used to derive the global runoff product. We then present and discuss our results in Sects. 3 and 4 before concluding.

2 Data and methods

## 2.1 Observed streamflow data

We used observed streamflow from the following four sources: (i) the US Geological Survey (USGS) GAGES-II database (Falcone et al., 2010), (ii) the GRDB (http://www.bafg.de/GRDC/, last access: 1 June 2017), (iii) the Australian Peel et al. (2000) database, and (iv) the global Dai (2016) database. We discarded duplicates, and from the remaining set of stations, we discarded those satisfying at least one of the following criteria: (i) the basin area is <8000 km² (fewer than three 0.5° grid cells), (ii) the record length is <5 yr in the period 1980–2012 (not necessarily consecutive), and (iii) the observed streamflow is low (i.e. around 0) and does not represent the total runoff across the basin due to significant anthropogenic activities. A river basin was identified with significant anthropogenic activities if it has >20 % irrigated area using the Global Map of Irrigation Areas (GMIA Version 4.0.2; Siebert et al., 2007) or has >20 % classified as "Artificial surfaces and associated areas" according to the Global Land Cover Map (GlobCover Version 2.3; Bontemps et al., 2011). In total 596 stations (of which 20 are nested in the basins of other stations) were found to be suitable for the analysis (Fig. 1).

Figure 1 Spatial coverage of gauged and ungauged river basins and location of stream gauges.
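A minimal sketch of this station screening, assuming a hypothetical gauge-metadata table with columns `basin_area_km2`, `record_years_1980_2012`, `frac_irrigated` and `frac_artificial` (these names are not from the paper), could look like this:

```python
import pandas as pd

def screen_gauges(stations: pd.DataFrame) -> pd.DataFrame:
    """Keep stations matching the selection criteria described above."""
    anthropogenic = (stations["frac_irrigated"] > 0.20) | (stations["frac_artificial"] > 0.20)
    keep = (
        (stations["basin_area_km2"] >= 8000)          # at least three 0.5 degree grid cells
        & (stations["record_years_1980_2012"] >= 5)   # >= 5 years of records in 1980-2012
        & ~anthropogenic                              # exclude heavily modified basins
    )
    return stations[keep]
```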
## 2.2 Simulated runoff data

To derive the global monthly 0.5° synthesis runoff product, we used 11 total runoff outputs (from eight different models) and seven streamflow outputs (from six different models) produced as part of tiers 1 and 2 of the eartH2Observe project (available via ftp://wci.earth2observe.eu/, last access: 25 April 2018). The models and their available variables are presented in Table 1. For tier 1 of eartH2Observe, the models were forced with the WATCH Forcing Data ERA-Interim (WFDEI) meteorological dataset (Weedon et al., 2014) corrected using the Climatic Research Unit Time-Series dataset (CRU-TS3.1; Harris et al., 2014). For tier 2, the models were forced using the Multi-Source Weighted-Ensemble Precipitation (MSWEP) dataset (Beck et al., 2017b). The runoff and streamflow values are provided in kg m⁻² s⁻¹ and m³ s⁻¹, respectively. For consistency, the runoff outputs with resolution <0.5° were resampled to 0.5° using bilinear interpolation. In some cases, the river network employed by the model did not correspond with the stream gauge location, in which case we manually selected the grid cell that provided the best match with the observed streamflow.

Figure 2 Flowchart summarizing the steps carried out to derive the weighted runoff product for the global land surface.

The runoff outputs were only used if no streamflow output was available and only in basins smaller than 100 000 km². To make the runoff data consistent with the streamflow data, we integrated the runoff over the basin areas (termed Ragg; units: m³ s⁻¹). Thus, for basins smaller than 100 000 km² the synthesis product was derived from 11 model outputs, whereas for basins larger than 100 000 km² the synthesis product was derived from seven outputs. In Sect. 2.3 and 2.4 we detail our methods for deriving the weighted runoff product for the global land. A flowchart summarizing the process is provided in Fig. 2.

## 2.3 Implementing the weighting approach at the gauged basins

At each gauged basin, we built a linear combination $\mu_q$ of the participating modelled streamflow datasets $x$ (i.e. Ragg in small basins and modelled streamflow, $q$, in large basins) that minimized the mean square difference with the observed streamflow $Q$ at that basin, such that $\mu_{q}^{j}=\sum_{k=1}^{K} w_{k}\left(x_{k}^{j}-b_{k}\right)$, where $j\in[1, J]$ are the time steps, $k\in[1, K]$ represents the participating models, $x_{k}^{j}$ (i.e. integrated runoff $\mathrm{Ragg}_{k}^{j}$ over the basin area in small basins and modelled streamflow at a gauge location $q_{k}^{j}$ in large basins) is the value of the participating dataset in m³ s⁻¹ at the $j$th time step of the $k$th participating model, and the bias term $b_{k}$ is the mean error of $x_{k}$ in m³ s⁻¹. The set of weights $w_{k}$ provides an analytical solution to the minimization of $\sum_{j=1}^{J}\left(\mu_{q}^{j}-Q^{j}\right)^{2}$ subject to the constraint that $\sum_{k=1}^{K} w_{k}=1$, where $Q^{j}$ is the observed streamflow at the $j$th time step.
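To make the procedure concrete, here is a minimal NumPy sketch of this constrained weighting, using the closed-form solution given in the next paragraph; the array names and shapes are illustrative assumptions, not part of the original study.

```python
import numpy as np

def optimal_weights(X, Q):
    """Bias-correct K model series and find weights w (summing to 1) that
    minimize the mean squared difference between sum_k w_k (x_k - b_k) and Q.

    X : array of shape (K, J), modelled streamflow (or basin-aggregated runoff)
    Q : array of shape (J,),   observed streamflow
    """
    b = (X - Q).mean(axis=1)                 # per-model mean error (bias term b_k)
    E = (X - b[:, None]) - Q                 # residual errors after bias correction
    A = np.cov(E)                            # K x K error covariance matrix
    ones = np.ones(A.shape[0])
    w = np.linalg.solve(A, ones)
    w /= ones @ w                            # w = A^{-1} 1 / (1^T A^{-1} 1)
    mu_q = w @ (X - b[:, None])              # weighted, bias-corrected estimate
    return w, b, mu_q
```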
This minimization problem can be solved using the method of Lagrange multipliers by finding a minimum of $F(w, \lambda)=\frac{1}{2}\left[\frac{1}{J-1}\sum_{j=1}^{J}\left(\mu_{q}^{j}-Q^{j}\right)^{2}\right]-\lambda\left(\left(\sum_{k=1}^{K} w_{k}\right)-1\right)$. The solution to the minimization of $F(w, \lambda)$ can be expressed as $w=\frac{A^{-1}\mathbf{1}}{\mathbf{1}^{T}A^{-1}\mathbf{1}}$, where $\mathbf{1}^{T}=[1, 1, \ldots, 1]$ ($K$ elements) and $A$ is the $K\times K$ error covariance matrix of the participating datasets (after bias correction), i.e. $A=\begin{pmatrix} c_{1,1} & \cdots & c_{1,K}\\ \vdots & \ddots & \vdots \\ c_{K,1} & \cdots & c_{K,K}\end{pmatrix}$. $A$ is symmetric, and the term $c_{a,b}$ is the covariance of the $a$th and $b$th bias-corrected dataset after subtracting the observed dataset, while each diagonal term $c_{k,k}$ is the error variance of dataset $k$. We note that this solution is based on the performance of the participating products (diagonal terms of $A$) and the dependence of their errors (accounted for by the non-diagonal terms of $A$). For a derivation see Bishop and Abramowitz (2013).

We then derived the weighted runoff dataset by applying the computed weights to the bias-corrected runoff estimates of the participating models. The weighted runoff dataset is expressed as $\mu_{r}^{j}=\sum_{k=1}^{K} w_{k}\left(r_{k}^{j}-b'_{k}\right)$, where $r_{k}^{j}$ is the value of the runoff estimate in kg m⁻² s⁻¹ of the $k$th participating model at the $j$th time step and $b'_{k}$ is its runoff bias in kg m⁻² s⁻¹. To calculate the runoff bias $b'_{k}$, we assumed that for each model $k$ and at each time $j$, the bias ratio of a model (defined as the ratio of the model error to the simulated magnitude) is the same for streamflow and runoff estimates (Eq. 1). In small basins, the bias ratio of modelled streamflow was calculated by using $\mathrm{Ragg}_{k}^{j}$ instead of the modelled streamflow $q_{k}^{j}$ (Eq. 2):

$$\left[\frac{q_{k}^{j}-Q^{j}}{q_{k}^{j}}=\frac{b'_{k}}{r_{k}^{j}}\right]_{\mathrm{basin}},\qquad(1)$$

$$\left[\frac{\mathrm{Ragg}_{k}^{j}-Q^{j}}{\mathrm{Ragg}_{k}^{j}}=\frac{b'_{k}}{r_{k}^{j}}\right]_{\mathrm{basin}}.\qquad(2)$$

We note that there is no empirical evidence in the literature that the assumptions in Eqs. (1) and (2) are valid. However, these assumptions form part of the overall approach whose success we test and demonstrate later in this paper, so they are likely to hold in practice.
To avoid over-fitting when applying the weighting approach, we limited the number of participating models so that the ratio of the number of records (i.e. the total number of available monthly observations within the period of study) to the number of models does not fall below 10. As a result, when required, we discarded the models that had the highest bias (i.e. the left-hand terms in Eqs. 1 and 2) until the threshold was met. Since the weighting and the bias correction occasionally resulted in negative runoff values, we replaced any negative values with zero.

Table S1 in the Supplement shows examples of weights and bias ratios calculated for the participating models over a range of river basins. It shows that HBVS, JULES1, JULES2 and SURF2 did not participate in the weighting over the large basins (i.e. Amur, Indigirka, Mississippi, Murray–Darling, Olenek, Paraná, Pechora and Yenisei), since these models do not have estimates for streamflow, which are needed to construct the weights over large basins. For the smaller Copper River basin, however, runoff estimates from all models were used in deriving weighted runoff estimates. Table S1 also shows that in many cases, models were assigned negative weights. While this might not be expected in typical performance-based weighting, it is possible in this formulation because the weighting accounts for error covariance as well as performance differences. We show below how the weights can be modified to non-negative weights.

We implemented the ensemble dependence transformation process detailed in Bishop and Abramowitz (2013) to compute the gridded time-variant uncertainty associated with the derived runoff estimates. For any given gauged basin, we first calculated the spatial aggregate of our weighted runoff estimate $\mathrm{Ragg}_{\mu}$, then quantified $s_{q}^{2}$, the error variance of $\mathrm{Ragg}_{\mu}$, with respect to the observed streamflow $Q$ over time as $s_{q}^{2}=\frac{\sum_{j=1}^{J}\left(\mathrm{Ragg}_{\mu}^{j}-Q^{j}\right)^{2}}{J-1}$. Then, we wished to guarantee that the variance of the constituent modelled estimates $\sigma_{q}^{2j}$ about $\mathrm{Ragg}_{\mu}^{j}$ at a given time step, averaged over all time steps where we have available streamflow data, is equal to $s_{q}^{2}$, such that $s_{q}^{2}=\frac{1}{J}\sum_{j=1}^{J}\sigma_{q}^{2j}$. Since the variance of the existing constituent products does not, in general, satisfy this equation, we transformed them so that it does. This involved first modifying the set of weights $w$ to a new set $\tilde{w}$ such that $\tilde{w}=\frac{w^{T}+(\alpha-1)\frac{\mathbf{1}^{T}}{K}}{\alpha}$, where $\alpha=1-K\min(w_{k})$ and $\min(w_{k})$ is the smallest negative weight ($\alpha$ is set to 1 if all $w_{k}$ values are non-negative). This ensures that all the modified weights $\tilde{w}_{k}$ are positive.
We then transform the individual estimates $x_{k}^{j}$ to $\tilde{x}_{k}^{j}$, where $\tilde{x}_{k}^{j}=\mathrm{Ragg}_{\mu}^{j}+\beta\left(\bar{x}^{j}+\alpha\left(x_{k}^{j}-\bar{x}^{j}\right)-\mathrm{Ragg}_{\mu}^{j}\right)$ and $\beta=\sqrt{\frac{s_{q}^{2}}{\frac{1}{J}\sum_{j=1}^{J}\sum_{k=1}^{K}\tilde{w}_{k}\left(\bar{x}^{j}+\alpha\left(x_{k}^{j}-\bar{x}^{j}\right)-\mathrm{Ragg}_{\mu}^{j}\right)^{2}}}$. The weighted variance estimate of the transformed ensemble can be defined as $\sigma_{q}^{2j}=\sum_{k=1}^{K}\tilde{w}_{k}\left(\tilde{x}_{k}^{j}-\mathrm{Ragg}_{\mu}^{j}\right)^{2}$ and ensures that $\frac{1}{J}\sum_{j=1}^{J}\sigma_{q}^{2j}=s_{q}^{2}$ holds true. Furthermore, $\sqrt{\sigma_{q}^{2j}}$ is the estimate of the uncertainty standard deviation of the transformed ensemble that (a) varies in time and (b) accurately reflects our ability to reproduce the observed streamflow. We refer the reader to Bishop and Abramowitz (2013) for proof.

In order to estimate $\sqrt{\sigma_{r}^{2j}}$, the uncertainty of the derived runoff $\mu_{r}^{j}$ at each point in time and space, we first transformed the runoff fields $r_{k}^{j}$ to $\tilde{r}_{k}^{j}$ by applying the same transformation parameters $\alpha$ and $\beta$, such that $\tilde{r}_{k}^{j}=\mu_{r}^{j}+\beta\left(\bar{r}^{j}+\alpha\left(r_{k}^{j}-\bar{r}^{j}\right)-\mu_{r}^{j}\right)$. We then calculated the error variance $\sigma_{r}^{2j}=\sum_{k=1}^{K}\tilde{w}_{k}\left(\tilde{r}_{k}^{j}-\mu_{r}^{j}\right)^{2}$. Finally, we used $\sqrt{\sigma_{r}^{2j}}$ as the spatially and temporally varying estimate of runoff uncertainty standard deviation, which we will refer to below simply as "uncertainty". It provides a much more defensible uncertainty estimate than simply calculating the standard deviation of the involved products. We note that for a given basin, $\sqrt{\sigma_{q}^{2j}}$ represents the uncertainty of the modelled streamflow, i.e. $\mathrm{Ragg}_{\mu}^{j}$, while $\sqrt{\sigma_{r}^{2j}}$ represents the uncertainty of modelled runoff at each grid cell across the basin. This means that at every time step, there is one value of $\sqrt{\sigma_{q}^{2j}}$ per basin and one value of $\sqrt{\sigma_{r}^{2j}}$ per grid cell across the basin.
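A compact NumPy sketch of this ensemble dependence transformation, continuing the hypothetical arrays from the previous snippet, might look as follows; it is an illustration under those assumptions, not the authors' code.

```python
import numpy as np

def transformed_uncertainty(X, Q, w, b):
    """Transform the bias-corrected ensemble so its weighted spread matches the
    error variance of the weighted estimate, following the steps described above.

    X : (K, J) modelled series, Q : (J,) observations,
    w : (K,) weights from the previous sketch, b : (K,) bias terms.
    """
    Xc = X - b[:, None]                        # bias-corrected members
    mu = w @ Xc                                # weighted estimate
    s2 = np.sum((mu - Q) ** 2) / (len(Q) - 1)  # error variance s_q^2

    # Shift negative weights to non-negative ones: w_tilde = (w + (alpha - 1)/K) / alpha.
    K = Xc.shape[0]
    alpha = 1.0 - K * min(w.min(), 0.0)
    w_tilde = (w + (alpha - 1.0) / K) / alpha

    # Spread members about mu and rescale so the mean weighted variance equals s2.
    xbar = Xc.mean(axis=0)
    spread = xbar + alpha * (Xc - xbar) - mu
    beta = np.sqrt(s2 / np.mean(np.sum(w_tilde[:, None] * spread ** 2, axis=0)))
    X_tilde = mu + beta * spread

    sigma2_j = np.sum(w_tilde[:, None] * (X_tilde - mu) ** 2, axis=0)
    return np.sqrt(sigma2_j)                   # time-varying uncertainty estimate
```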
To address this, we used the modelled and observed streamflow from the three most similar gauged river basins, based on predefined physical and climatic characteristics, to derive model weights at each ungauged basin. The selected gauged river basins served as donor basins to the ungauged receptor basins. We then implemented the weighting technique on the ensemble of 11 (in small basins) or eight (in large basins) model outputs by matching the Ragg calculated across the selected donor basins with the observed streamflow. This resulted in one set of weights and bias ratios obtained jointly from the three donor basins. Finally, we transferred the weights and bias ratios computed at the donor basins to the receptor basin and subsequently computed the associated uncertainty values. Most of the gauged river basins were classified as donor basins. Some, however, were excluded from being donors where we found (based on Ragg or modelled streamflow time series and metric values) that none of the models were able to simulate the streamflow dynamics. These basins are mainly located in areas of natural lakes, in mountainous areas covered with snow or in wet regions with intense rainfall. We therefore (subjectively) decided that those excluded basins should be assigned to a “non–donor and non–receptor” category. Figure 3Spatial coverage of donor basins, receptor basins, and non-donor and non-receptor basins. We applied the method presented in Beck et al. (2016) to calculate a similarity index S between a donor basin a and a receptor basin b, expressed as $\begin{array}{}\text{(3)}& {S}_{a,\phantom{\rule{0.125em}{0ex}}b}={\sum }_{p=\mathrm{1}}^{\mathrm{7}}\frac{\mathrm{|}{Z}_{p,\phantom{\rule{0.125em}{0ex}}a}-{Z}_{p,\phantom{\rule{0.125em}{0ex}}b}\mathrm{|}}{{\mathrm{IQR}}_{p}},\end{array}$ where p denotes the climatic and physiographic characteristics as in Table 4 of Beck et al. (2016). This includes the aridity index, fractions of forest and snow cover, soil clay content, surface slope, and annual averages of precipitation and potential evaporation. Zp, a and Zp, b are the values of the characteristic p at donor and receptor basins, respectively. IQRp is the interquartile range of characteristic p calculated over the land surface, excluding deserts (defined by an aridity index >5, see Table 4 of Beck et al., 2016) and areas covered with ice during most of the year (defined by climate zones, namely tundra, subarctic and ice cap) using a simplified climate zones map (Fig. S1) created by the Esri Education Team for ArcGIS online (World Climate Zones – Simplified; Esri Education Team, 2014). It follows from Eq. (3) that the most similar donor a to a receptor b is the one that has the lowest index value with basin b. We applied this approach to identify the three most similar donors for every receptor basin. The dissimilarity technique has been previously applied to find 10 donors for one receptor. Given that all the selected donors must have very close similarity indices, we found by trial and error that increasing the number of donor basins might introduce donor basins that have a significantly different similarity index and that setting the number of donor basins to three seemed most appropriate. In very large basins, physiographic and climatic heterogeneity can result in misleading basin-mean averages. 
In very large basins, physiographic and climatic heterogeneity can result in misleading basin-mean averages. We therefore excluded highly heterogeneous basins from the list of donors and classified them as non-donor and non-receptor basins. Large heterogeneous receptor basins were instead broken up by climate group into smaller basin zones, which were then treated as separate basins so that each part could receive its own set of weights and bias ratios from donor basins. Here we defined large heterogeneous basins as basins with areas greater than 1 000 000 km² and covering climate zones that belong to at least two of the following groups: (1) tropical wet; (2) humid continental, humid subtropical, mediterranean and marine; (3) tropical dry, semi-arid and arid; (4) tundra, subarctic and ice cap; and (5) highlands. Climate classification is based on the simplified climate zones map (World Climate Zones – climate zones map; Esri Education Team, 2014) defined above. We used this particular climate map because it comprises only 12 broad climate groups (compared to more than 30 in other climate maps, e.g. Köppen–Geiger). This reduced the divisions made to large heterogeneous basins while ensuring that the resultant basin zones of individual basins have very distinct climate characteristics. Figure 3 shows the spatial coverage of the donor basins, receptor basins, and non-donor and non-receptor basins.

## 2.5 Out-of-sample testing

To test that this approach produces a runoff estimate at receptor basins (using transferred weights from the most similar gauged basins) that is better than any of the individual models, we performed an out-of-sample test. In this test, we selected a gauged basin and treated it as a receptor basin, constructing model weights by using the three most similar donor basins. We could then compare (a) observed streamflow, (b) the in-sample weighted product (WP$_{\rm in}$) derived by using observed streamflow for this basin for weighting models, (c) an out-of-sample weighted product (WP$_{\rm out}$) derived by constructing the weighting at the three most similar basins, and (d) the individual model estimates at each basin. We calculated four metrics of performance for WP$_{\rm in}$, WP$_{\rm out}$ and each of the 11 datasets: mean square error $\mathrm{MSE}=\mathrm{mean}\left(\text{Ragg}-\text{observed streamflow}\right)^{2}$, mean bias $=\mathrm{mean}\left|\text{Ragg}-\text{observed streamflow}\right|$, correlation $\mathrm{COR}=\mathrm{corr}\left(\text{observed streamflow},\ \text{Ragg}\right)$, and standard deviation (SD) difference $=\sigma_{\text{Ragg}}-\sigma_{\text{observed streamflow}}$. We repeated the out-of-sample test for all the gauged basins (donor basins and non-donor and non-receptor basins). We displayed the results of the out-of-sample test by showing the percentage of performance improvement of WP$_{\rm out}$ compared to WP$_{\rm in}$ and each individual model, yielding 12 different values of performance improvement. If the approach is successful, we expect that both WP$_{\rm out}$ and WP$_{\rm in}$ will perform better than any of the models used in this study, and also that WP$_{\rm in}$ should be in better agreement with the observed streamflow than WP$_{\rm out}$.
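For reference, the four evaluation metrics can be written as a few short functions; this is a sketch assuming simple NumPy arrays for the aggregated product and the observations, and the use of a sample standard deviation is an implementation choice not specified in the paper.

```python
import numpy as np

def evaluation_metrics(ragg, obs):
    """Four metrics used in the out-of-sample test: MSE, mean bias, COR and SD difference."""
    mse = np.mean((ragg - obs) ** 2)
    mean_bias = np.mean(np.abs(ragg - obs))
    cor = np.corrcoef(ragg, obs)[0, 1]
    sd_difference = np.std(ragg, ddof=1) - np.std(obs, ddof=1)  # sample SD assumed
    return {"MSE": mse, "mean bias": mean_bias, "COR": cor, "SD difference": sd_difference}
```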
We used box-and-whisker plots to show the results of the performance improvement of WP$_{\rm out}$ calculated relative to WP$_{\rm in}$ and the 11 datasets across all the gauged basins. The lower and upper hinges of a box plot represent the first (Q1) and third (Q3) quartiles respectively of the performance improvement results, and the line inside the box plot shows the median value. The extreme of the lower whisker represents the maximum of (1) minimum(dataset) and (2) (Q1 − IQR), while the extreme of the upper whisker is the minimum of (1) maximum(dataset) and (2) (Q3 + IQR), where IQR represents the interquartile range (i.e. Q3 − Q1) of the performance improvement results. A median line located above the 0 axis is an indication that the out-of-sample weighting offers an improvement in more than half of the basins.

The uncertainty estimates computed at the gauged basins represent the deviation of the spatial aggregate of our weighted product ($\mathrm{Ragg}_{\mu}$) from the observed streamflow well, since the in-sample uncertainty estimates are calculated from the variance of the transformed ensemble, which by design equals the MSE of $\mathrm{Ragg}_{\mu}$ against the observation (i.e. the error variance of $\mathrm{Ragg}_{\mu}$). To test if the uncertainty estimates perform well out of sample (i.e. at the ungauged basins), we performed another out-of-sample test. In this test, we took a gauged basin, but instead of constraining the weighting using observed streamflow from this basin, we constructed model weights by using the three most similar donor basins. We could then calculate the MSE of $\mathrm{Ragg}_{\mu}$ against observation from the three donor basins, and we denoted this as MSE$_{\rm in}$, which represents the uncertainty estimates calculated in sample, since the observational data used in this case are the same dataset that was used to train the weighting. We also calculated the MSE of the aggregated weighted product against the actual observation of the gauged basin, and we denoted this as MSE$_{\rm out}$. MSE$_{\rm out}$ represents the uncertainty estimates computed out of sample, since the comparison was performed against observational data that have not been used to train the weighting. We repeated the out-of-sample test for all the gauged basins. We displayed the results of the out-of-sample test by showing the ratios of MSE$_{\rm in}$ to MSE$_{\rm out}$. If the approach is successful, we expect that this ratio is around 1, indicating that the values of MSE$_{\rm in}$ and MSE$_{\rm out}$ are close to each other. We used a box-and-whisker plot to show the results.

3 Results

The results for the out-of-sample test are displayed in the box-and-whisker plots presented in Fig. 4a–d. The MSE and mean bias plots in Fig. 4a and d indicate that across almost all the gauged basins, WP$_{\rm out}$ performs better than each of the individual models. Similarly, the COR plot in Fig. 4c shows that the out-of-sample weighting has in fact improved the correlation with observational data across almost all the gauged basins. The SD difference plot (Fig. 4b) shows a significant improvement of WP$_{\rm out}$ relative to the models, but the number of basins that benefit from this improvement decreased, perhaps because the variability of the individual members of the weighting ensemble is not necessarily temporally coincident at all the basins, resulting in decreased variability. The negative performance improvement of WP$_{\rm out}$ relative to WP$_{\rm in}$ across all metrics (first box plot, Fig. 4a–d) indicates that the weighting performs better in sample than out of sample, which is to be expected.
Critically, though, the fact that the weighting delivers improvement over all models when the weights are transferred from similar basins indicates that the dissimilarity technique is successful and can be effectively used at the ungauged basins by feeding the weighting with data from the most similar basins with streamflow observations. Furthermore, the box plot in Fig. 5 shows that, overall, when the uncertainty estimates are computed out of sample, they are very similar to what they would have been if they were computed in sample. Note, however, that the spread of results is large and that in 25 % of the cases, uncertainty estimates are less than half of the in-sample results. This demonstrates that the dissimilarity technique can be effectively used to derive not only the weighted product but also its associated uncertainties at the ungauged basins.

Figure 4 Box-and-whisker plots displaying the percentage improvement that the weighted product (WP$_{\rm out}$) offers when tested out of sample, using four metrics: MSE (a), SD difference (b), COR (c) and mean bias (d), when compared to the weighted product derived from in-sample data (WP$_{\rm in}$) and each runoff product involved in this study. Box-and-whisker plots represent values calculated at 482 gauged basins. See Table 1 for dataset abbreviations. The lower and upper hinges of a box plot represent the first (Q1) and third (Q3) quartiles respectively of the performance improvement results, and the line inside the box plot shows the median value. The extreme of the lower whisker represents the maximum of (1) min(dataset) and (2) (Q1 − IQR), while the extreme of the upper whisker is the minimum of (1) max(dataset) and (2) (Q3 + IQR), where IQR represents the interquartile range (i.e. Q3 − Q1) of the performance improvement results. A median line located above the 0 axis is an indication that the out-of-sample weighting offers an improvement in more than half of the basins.

Figure 5 Box-and-whisker plots displaying the ratio of (1) the uncertainties of the spatial aggregate of the weighted product computed in sample to (2) the uncertainties of the spatial aggregate of the weighted product computed out of sample.

Figure 6 Four statistics, (a) RMSE, (b) SD difference, (c) COR and (d) mean bias, calculated for LORA, Best4 (i.e. the simple average of runoff estimates from LISFLOOD, WaterGAP3, W3RA and HBV-SIMREG) and each runoff product involved in this study at the gauged basins. See Table 1 for dataset abbreviations.

Based on the improvement that the weighting approach implemented in both gauged and ungauged basins offers over Ragg estimates computed for 11 individual model runoff estimates, in terms of the MSE, SD difference, COR and mean bias against observed streamflow data, we now present details of the mosaic of the individual weighted runoff estimates derived across all the basins, which we name LORA. At the gauged basins, the weighting was trained with the Ragg of the modelled runoff at the individual basins and constrained with the observed streamflow. At ungauged basins, the dissimilarity approach was first implemented to find the three most similar basins, and the weighting was then trained on the combined datasets from these three basins. Subsequently, weights were transferred to the ungauged basins and applied to combine the runoff estimates at the individual basins. The eight modelled runoff datasets listed in Table 1 as part of the tier 1 ensemble were recently included in a global evaluation by Beck et al. (2017a).
In their analysis, they computed a summary performance statistic that they termed OS by incorporating several long-term runoff behavioural signatures defined in Table 3 of Beck et al. (2017a), and they found that the mean of runoff estimates from only four models (LISFLOOD, WaterGAP3, W3RA and HBV-SIMREG) performed the best in terms of $\overline{\mathrm{OS}}$ (i.e. the mean of OS over all the basins included in their study) relative to each individual modelled runoff estimate and the mean of all the modelled runoff estimates. In this study, we calculated the mean runoff from the four best products found by Beck et al. (2017a; LISFLOOD, WaterGAP3, W3RA and HBV-SIMREG). Hereafter, we refer to this as "Best4", and we calculated four statistics (RMSE, SD difference, COR and mean bias, defined here as mean(dataset − obs)) for Ragg computed from LORA, Best4 and each of the 11 runoff datasets across all the gauged basins. The box plots in Fig. 6a–d display the results.

The RMSE plot in Fig. 6a shows that LORA has the lowest RMSE values with respect to the observed streamflow. All of the component models exhibit a similar performance in regard to RMSE. Similarly, LORA has, overall, the smallest SD difference with observations (Fig. 6b) across more than half of the basins. The mean bias plot in Fig. 6d shows a non-significant positive bias in LORA relative to the observation at the majority of the basins. Best4, HBV-SIMREG, PCR-GLOBWB and particularly LISFLOOD exhibit a positive mean bias across most of the basins but with much higher bias magnitude compared to that of LORA. HTESSEL and SURFEX estimates from both tiers (i.e. tier 1 and tier 2), together with JULES (tier 2) and WGAP3, show negative and positive bias distributed evenly across the basins. LORA shows the highest temporal correlation with the observed streamflow at more than half of the gauged basins (Fig. 6c). The low RMSE and mean bias values relative to the other estimates are partly due to the bias correction applied before the weighting. While all the performance metrics calculated here show that LORA outperforms Best4, these metrics do not allow us to assess how well LORA performs in terms of bias in the runoff timing, replicating the peaks or representing quick runoff, with the exception of the correlation metric. These aspects were studied in more detail in Beck et al. (2017a), which showed that Best4 performs well in these respects.

All the models involved in deriving LORA, with the exception of HBV-SIMREG, were found in the study of Beck et al. (2017a) to show an early spring snowmelt peak and an overall significant underestimation of runoff in the snow-dominated basins. To see how well LORA performs at high latitudes, we examined the gauged basins located at higher latitudes (>60°), and we calculated two statistics – COR and mean bias – as in Fig. 6c, d, but this time for the snow-dominated basins only. We display the results in Fig. 7.

Figure 7 Two statistics, (a) COR and (b) mean bias, calculated for LORA, Best4 (i.e. the simple average of runoff estimates from LISFLOOD, WaterGAP3, W3RA and HBV-SIMREG) and each runoff product involved in this study at the gauged basins located at the high latitudes (>60°). See Table 1 for dataset abbreviations.
Figure 8 Seasonal reliability, defined as high ($\frac{\text{mean runoff uncertainty}}{\text{mean runoff}}<1$, in red), low ($\frac{\text{mean runoff uncertainty}}{\text{mean runoff}}\ge 1$, in yellow) and undetermined (mean runoff = 0, in blue).

The temporal correlation plot in Fig. 7a shows that LORA is in better agreement with observed streamflow at snow-dominated basins compared to the ensemble of all the gauged basins on the globe (Fig. 6c), with an overall average improvement of 7 %. Similarly, HBV-SIMREG shows an improved correlation with the observed streamflow at snow-dominated basins, with an average improvement of 14 %; this agrees with the results reported by Beck et al. (2017a), who attributed the improved performance of HBV-SIMREG in snow-dominated regions to a snowfall gauge undercatch correction. The overall performance of Best4 and LISFLOOD does not change in terms of spatial correlation; by contrast, all the remaining products show degraded performance. Figure 7b shows that LORA exhibits a small bias across snow-dominated basins relative to the participating models. Conversely, with the exception of LISFLOOD, all the tier 1 products including Best4 show a negative mean bias across more than half of the snow-dominated basins, and HTESSEL, JULES, SURFEX and W3RA in particular show a large negative bias at most of these basins. This agrees with the negative bias found in the study of Beck et al. (2017a) in all tier 1 products except LISFLOOD. These results indicate that LORA is likely to slightly overestimate runoff in high latitudes, whereas all tier 1 products with the exception of LISFLOOD tend to underestimate runoff in these regions, with larger underestimation for HTESSEL, JULES, SURFEX and W3RA. Tier 2 products show both positive and negative bias across the basins. Their bias is of a lower magnitude than that found in tier 1 products. That is probably because the forcing precipitation used to derive tier 2 outputs (i.e. MSWEP) has less bias than that used to derive tier 1 estimates (i.e. WFDEI corrected using CRU-TS3.1). We also calculated the two metrics, SD difference and mean bias, as in Fig. 6a, b, but we found no noticeable differences in the performance of any of the products relative to that found globally in Fig. 6a, b. The results displayed in Figs. 6 and 7 are discussed further below.

Figure 9 Seasonal cycle of runoff aggregates from LORA and Best4 compared with the observed streamflow over 11 major basins. Runoff aggregates and the observed streamflow were averaged for each month across the period of availability of observation. The shaded regions show the aggregated uncertainty derived for LORA.

We calculated the seasonal relative uncertainty, expressed as the ratio of the seasonal average uncertainty to the seasonal mean runoff (i.e. $\frac{\text{mean runoff uncertainty}}{\text{mean runoff}}$), over the period 1980–2012. This metric is intended to show some indication of the reliability of the derived runoff, with results displayed in Fig. 8.
Regions in red show grid cells that satisfy $\frac{\text{mean runoff uncertainty}}{\text{mean runoff}}<1$, while those shown in yellow are regions where the value of the mean runoff uncertainty is larger than the value of the associated mean runoff itself. Regions in blue are grid cells that have a zero mean runoff and hence an undetermined relative uncertainty. The global maps in Fig. 8 show consistently low reliability in the Sahel, the Indus basin, the Paraná, the semi-arid regions of eastern Argentina, the Doring basin in South Africa, the Red River sub-basin of the Mississippi, the Burdekin and Fitzroy basins in north-eastern Australia, and many regions of the Arabian Peninsula. The areas at the higher latitudes in Asia and North America show high reliability during June–July–August and low reliability during the rest of the year. Parts of the Madeira sub-basin – a major sub-basin of the Amazon – show low reliability during June–November. The basins in Central America show high reliability in all seasons except March–May, while river basins in Somalia show low reliability during the austral summer and winter. River basins in the Far East show low reliability in spring and autumn and higher reliability in winter and summer.

Figure 9 displays the seasonal cycles of Ragg for LORA and Best4 and the observed streamflow over 11 major river basins. To generate this plot, we calculated the average Ragg for each month over the period of availability of observed streamflow. The shaded regions represent the range of uncertainty associated with the derived runoff. In the Amazon basin, LORA overestimates runoff in the wet season and underestimates it in the dry season, but the observed streamflow during the dry season still lies within the error bounds of LORA. LORA shows good agreement with the observed cycle in the Mississippi. In the Niger and Murray–Darling basins, while LORA overestimates the observed streamflow, it shows a much better agreement compared to Best4, which strongly overestimates runoff. In the Paraná basin, LORA underestimates the observed streamflow in all seasons except summer. In the subarctic basins, LORA shows different behaviour within the individual basins. In the Pechora and Olenek, LORA represents the seasonal cycle and the magnitude of runoff well, whereas in the Amur, Lena and Yenisei basins, LORA shows an early shift of the runoff peak and an overall overestimation of runoff. In the Indigirka, LORA overestimates the spring peak, but the observed seasonal cycle lies within the error bounds.

Table 2 A comparison of mean annual runoff (mm yr⁻¹) of 16 major basins covering different climate zones around the world for LORA and VIC (Zhang et al., 2018). The yearly volume of LORA runoff aggregates (i.e. flow in km³), the observed annual flow (km³) over the basins and the mean annual uncertainty values associated with LORA runoff are shown, and the adjusted VIC annual runoff values within 5 % error bounds for water budget closure are displayed. Observed annual flow is given only if data from all contributing stations are available over a whole year for at least 17 of the 33 years covered in this study.
We compared our mean annual runoff (mm yr−1) with that estimated by a well-known land surface hydrological model, the variable infiltration capacity model (VIC; Liang et al., 1994), and with the adjusted VIC estimates obtained after enforcing the physical constraints of the water budget in the study of Zhang et al. (2018), over comparable temporal and spatial scales for 16 large basins chosen from different climate zones on the globe. The mean annual runoff was computed over the period 1984–2010 instead of 1980–2012 to maximize the temporal agreement with the study of Zhang et al. (2018). We also show the average annual volume of water that discharges from these basins computed from LORA and from the observational data. Table 2 shows that for some basins VIC and LORA agree well in estimating mean annual runoff (i.e. the difference between LORA and at least one of VIC and the budget-closure-adjusted VIC is <10 %). This threshold is met in the Amazon, Columbia, Congo, Danube, Mackenzie and Mississippi basins. The basins that show a larger difference between VIC and LORA, but for which the VIC estimates lie within the uncertainty bounds of LORA (i.e. between LORA minus uncertainty and LORA plus uncertainty), include the Indigirka, Olenek, Paraná, Pechora, Yenisei and Yukon basins. Large discrepancies between VIC and LORA are found in the Lena and the Murray–Darling. Other global estimates of total runoff are also available, such as GLDAS and the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP; Huntzinger et al., 2016); however, we have not compared LORA with these datasets, because they either have a short common period with LORA or a coarser resolution (i.e. 1°) and showed a significant disagreement with observations when interpolated to a 0.5° grid. Finally, in Figs. S8 and S9 we provide an example of the runoff fields and the associated uncertainty estimates, respectively, for an individual month (e.g. May 2003).

4 Discussion

The results of the out-of-sample test suggest that deriving runoff estimates in an ungauged basin by training the weighting with streamflow data from similar basins – in terms of climatic and physiographic characteristics – is successful. While the runoff product derived by using weights from external basins outperforms the runoff estimates from the individual models, the weighted runoff derived in sample offers even better runoff estimates overall. It follows from Fig. 8 that the runoff values computed over dry climates tend to be less reliable than those in other regimes. This is perhaps due to bias in the WFDEI precipitation forcing that is propagated and intensified in the simulated runoff (Beck et al., 2017a). Another possible reason is the reduced proficiency of models in representing runoff dynamics in arid climates, where runoff tends to be highly non-linearly related to rainfall and often evaporates locally without reaching a river system (Ye et al., 1997). Also, due to the lower density of gauged basins in the arid and semi-arid climates compared to other regimes, receptor basins are dominant over dry climates, which reduces the skill of the weighting to produce good runoff estimates. This is also in line with our conclusion from Fig. 4 that the weighting provides more reliable results in the gauged basins.

Figure 10. Mean seasonal runoff calculated for the period 1980–2012.

All the tier 1 model outputs involved in this study, with the exception of HBV-SIMREG, were found by Beck et al. (2017a) to show early spring snowmelt in the snow-dominated basins.
Both the Yenisei and the Lena are large basins (2.6 and 2.4 million km2, respectively); hence, as noted in Sect. 2.2, only models that had estimates of both streamflow and runoff were used to derive LORA at these basins, and therefore HBV-SIMREG – whose inclusion would have improved the weighting – was excluded. Beck et al. (2017a) also found that LISFLOOD has the best square-root-transformed mean annual runoff among the tier 1 datasets and performs well in terms of temporal correlation in all climates; this agrees with the high temporal correlation of LISFLOOD seen in Figs. 6c and 7a and also explains the highest weights attributed to LISFLOOD in the majority of snow-dominated basins (Table S1). Because of this, and because LISFLOOD tends to overestimate runoff across half of the snow-dominated basins (Fig. 7b), LORA also exhibits a positive bias across half of the snow-dominated basins (Fig. 7b), particularly in the Lena, Amur and Yenisei basins (Fig. 9). Further, in Fig. S2 we provide the spatial distribution of the correlation results from Fig. 6c. The basins are colour-coded by their temporal correlation with the observed streamflow, and the number of basins in each category is given. Basins in yellow are those where LORA is highly correlated with the observations, while dark blue basins are those where LORA exhibits a negative correlation with the observations. It can be noted from Fig. 6c that the occurrence of negative correlation is extremely unusual, which explains why these were considered outliers and were not shown in the box-and-whisker plot. Likewise, low-correlation basins are unusual and constitute less than 12 % of the number of basins (excluding basins with negative correlation). Also, the median value is above 0.8, which is higher than that of any of the constituent estimates. We selected a basin from each correlation range and examined the time series of LORA and the observed streamflow more closely (Figs. S3–S7), particularly illustrating the uncertainty estimate of LORA. In the Ganges, LORA captures the observed time-series dynamics well, with a tendency to overestimate the streamflow peak in August (Fig. S3). Over the Madeira basin, LORA is able to represent most of the climatic variability found in the observations reasonably well (Fig. S4). In the Congo, the catchment has an irregular time-series dynamic, and LORA is in principle able to capture a large part of the climatic variability in the observations (Fig. S5). In the Lena basin, the observations show a peak in June and a second, less pronounced peak in September (Fig. S6). Both peaks are captured by LORA during most of the time series, with a tendency to underestimate the late summer peak and overestimate the early summer peak. In the upper Indus basin, LORA does not capture the magnitudes of the observed streamflow and shows a reversed seasonal cycle, which explains why it exhibits a negative correlation with the observations (Fig. S7). Zhang et al. (2018) found disagreement between simulated runoff from three LSMs and observed streamflow over the Indus basin, which they suspected to be due to errors in the observational data from the GRDB dataset. Pan et al. (2012) and Sheffield et al. (2009) assumed that the errors in the measured streamflow are inversely proportional to the area of the basins and range from 5 % to 10 %, whereas Di Baldassarre and Montanari (2009) analysed the overall errors affecting streamflow observations and found that these errors range from 6 % to 42 %.
In earlier studies, the errors in streamflow measurement were estimated to range from 10 % to 20 % (Rantz, 1982; Dingman, 1994). In the study of Zhang et al. (2018), the error ratios of VIC were set to 5 %. In this study, we used the weighting approach to compute gridded uncertainty values based on the discrepancy between the Ragg of the derived runoff and the associated observational dataset in each gauged basin or, alternatively, based on the discrepancy between the Ragg of the derived runoff and the associated observational datasets from three similar basins in the case of ungauged basins. The derived gridded uncertainty changes in time and space. Our uncertainty estimates show higher values than those set for VIC, and additionally the estimated values and their reliability change with climate and season (Fig. 8). It follows from Table 2 that in most of the basins the mean annual runoff uncertainty exceeds 30 % of the associated runoff value itself. In fact, when the runoff values approach zero (i.e. in arid and semi-arid regions during hot periods or in the snow-dominated basins during winter), the uncertainty values are expected to become very close to the associated runoff estimates, so that the error ratio eventually becomes high. It is not surprising that the estimated relative uncertainties exceed the error ratios of the observations. Also, the variation of the uncertainty values in time and space is consistent with the fact that the individual datasets that were used to derive LORA exhibit performance differences in different climates and terrains (Beck et al., 2017a). Figure 10 shows the mean seasonal runoff (mm yr−1) calculated for the period 1980–2012. There is consistently low runoff in arid regions and high runoff in wet regions across all the seasons. High latitudes in America and Asia exhibit no runoff during the snow season and high runoff during March–August, when snow melts. Overall, there is a clear agreement between the spatial distribution of runoff and the different climate regimes. This is particularly reflected in Madagascar, where the differences in runoff pattern match the different climate regimes across the island. LORA captures the high wetness in the monsoonal seasons and exhibits a shift in magnitude during the wet monsoon in the lower Amazon in October–May; the upper Amazon in June–August; southern Asia in June–November; the central Sahel in August; and the Guinea coast in June, July, September and October. As discussed in Hobeichi et al. (2018), the weighting approach has its own advantages and drawbacks. One limitation is that a common imperfection in all the individual products is likely to propagate into the derived product. The early spring runoff peak found in both LORA and the datasets that were used to derive it is an example of this limitation. In contrast, the seasonal runoff cycle of LORA in both the Pechora and Olenek basins (i.e. two snow-dominated basins) indicates that LORA was able to capture the seasonal signal and the timing of the runoff peak very well, unlike the constituent products and Best4, which also suggests that the weighting has the ability to overcome the weaknesses of the individual products. Additionally, it was shown in Beck et al. (2017a) that tier 1 products consistently overestimate runoff in arid and semi-arid regions due to a bias in the WFDEI precipitation forcing; this appears in the massive overestimation exhibited by Best4 in the Niger and Murray–Darling basins (Fig. 9); however, the weighting was able to eliminate a large amount of this overestimation, which also emphasizes the ability of the weighting approach to mitigate limitations in individual models.
Another limitation arises from the scarcity of observed streamflow, particularly in the arid regions, and from the quality of the observational data itself. As noted earlier, the errors in the GRDB dataset were reported to range from 10 % to 20 % and were found by Di Baldassarre and Montanari (2009) to have an average value that exceeds 25 % across all the studied river basins. Also, given that there are no direct observations of runoff, uncertainties were computed from the discrepancy between the modelled runoff aggregates and observed streamflow. This ignores the lag time between LORA integrated runoff and observed streamflow at the mouth of the river and induces a bias that possibly led to overestimated uncertainty over large gauged basins. The weighting technique allows the addition of new runoff estimates when they become available. This will be particularly beneficial if the future estimates represent the runoff peak in the snow-dominated regions reasonably well.

5 Conclusions

In this study, we presented LORA, a new global monthly runoff product with associated uncertainty. LORA was derived for 1980–2012 at monthly temporal resolution and 0.5° spatial resolution by applying a weighting approach that accounts for both performance differences and error covariance between the constituent products. To ensure full global coverage, we used a similarity index to transfer weights and bias ratios constructed from gauged basins with similar climatic and physiographic characteristics to ungauged basins. This allows the derivation of runoff in areas where we do not have observed streamflow. We showed that this approach is successful, and that LORA performs better than any of its constituent modelled products in a range of metrics, across basins globally and especially in the higher latitudes. However, LORA tends to overestimate runoff and shows an early snowmelt peak in some snow-dominated basins. Unlike the constituent products, LORA was not found to significantly overestimate runoff in arid and semi-arid regions. The approach and product detailed here offer the opportunity for improvement as new streamflow and modelled runoff datasets become available. It presents a new, relatively independent estimate of a key component of the terrestrial water budget, with a justifiable and well-constrained uncertainty estimate.

Data availability. LORA v1.0 can be downloaded from https://geonetwork.nci.org.au/geonetwork/srv/eng/catalog.search#/metadata/f9617_9854_8096_5291 (last access: 31 January 2019), and its DOI is https://doi.org/10.25914/5b612e993d8ea (Hobeichi, 2018).

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. Sanaa Hobeichi acknowledges the support of the Australian Research Council Centre of Excellence for Climate System Science (CE110001028). Gab Abramowitz and Jason Evans acknowledge the support of the Australian Research Council Centre of Excellence for Climate Extremes (CE170100023). Hylke Beck was supported by the U.S. Army Corps of Engineers' International Center for Integrated Water Resources Management (ICIWaRM), under the auspices of UNESCO.
This research was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the Australian Government. We are grateful to the Global Runoff Data Centre (GRDC) for providing observed streamflow data. We thank the participants of the eartH2Observe project for producing and making the model simulations available. We also acknowledge that the HydroBASINS product has been developed on behalf of the World Wildlife Fund US (WWF), with support from, and in collaboration with, the EU BioFresh project in Berlin, Germany, the International Union for Conservation of Nature (IUCN) in Cambridge, UK, and McGill University in Montreal, Canada. Major funding for this project was provided to the WWF by the Sealed Air Corporation; additional funding was provided by BioFresh and McGill University. Edited by: Pierre Gentine Reviewed by: Lukas Gudmundsson and one anonymous referee References Abramowitz, G. and Bishop, C. H.: Climate Model Dependence and the Ensemble Dependence Transformation of CMIP Projections, J. Climate, 28, 2332–2348, https://doi.org/10.1175/JCLI-D-14-00364.1, 2015. Aires, F.: Combining Datasets of Satellite-Retrieved Products. Part I: Methodology and Water Budget Closure, J. Hydrometeorol., 15, 1677–1691, https://doi.org/10.1175/JHM-D-13-0148.1, 2014. Bai, Y., Xu, H., and Ling, H.: Drought-flood variation and its correlation with runoff in three headstreams of Tarim River, Xinjiang, China, Environ. Earth Sci., 71, 1297–1309, https://doi.org/10.1007/s12665-013-2534-5, 2014. Balsamo, G., Beljaars, A., Scipal, K., Viterbo, P., van den Hurk, B., Hirschi, M., and Betts, A. K.: A Revised Hydrology for the ECMWF Model: Verification from Field Site to Terrestrial Water Storage and Impact in the Integrated Forecast System, J. Hydrometeorol., 10, 623–643, https://doi.org/10.1175/2008JHM1068.1, 2009. Balsamo, G., Pappenberger, F., Dutra, E., Viterbo, P., and van den Hurk, B.: A revised land hydrology in the ECMWF model: A step towards daily water flux prediction in a fully-closed water cycle, Hydrol. Process., 25, 1046–1054, https://doi.org/10.1002/hyp.7808, 2011. Beck, H. E., van Dijk, A. I. J. M., de Roo, A., Miralles, D. G., Mcvicar, T. R., Schellekens, J., and Bruijnzeel, L. A.: Global-scale regionalization of hydrologic model parameters, Water Resour. Res., 52, 3599–3622, https://doi.org/10.1002/2015WR018247, 2016. Beck, H. E., van Dijk, A. I. J. M., de Roo, A., Dutra, E., Fink, G., Orth, R., and Schellekens, J.: Global evaluation of runoff from 10 state-of-the-art hydrological models, Hydrol. Earth Syst. Sci., 21, 2881–2903, https://doi.org/10.5194/hess-21-2881-2017, 2017a. Beck, H. E., van Dijk, A. I. J. M., Levizzani, V., Schellekens, J., Miralles, D. G., Martens, B., and de Roo, A.: MSWEP: 3-hourly 0.25 global gridded precipitation (1979–2015) by merging gauge, satellite, and reanalysis data, Hydrol. Earth Syst. Sci., 21, 589–615, https://doi.org/10.5194/hess-21-589-2017, 2017b. Best, M. J., Pryor, M., Clark, D. B., Rooney, G. G., Essery, R. L. H., Menard, C. B., Edwards, J. M., Hendry, M. A., Porson, A., Gedney, N., Mercado, L. M., Sitch, S., Blyth, E., Boucher, O., Cox, P. M., Grimmond, C. S. B., and Harding, R. J.: The Joint UK Land Environment Simulator (JULES), model description –Part 1: Energy and water fluxe, Geosci. Model Dev. Discuss., 4, 677–699, https://doi.org/10.5194/gmd-4-677-2011, 2011. Beven, K. J.: Changing ideas in hydrology: The case of physically-based models, J. Hydrol., 105, 157–172, 1989. 
Bontemps, S., Defourny, P., Bogaert, E. V., Arino, O., Kalogirou, V., and Perez, J. R.: GLOBCOVER 2009 – Products description and validation report, UCLouvain & ESA Team, available at: http://due.esrin.esa.int/files/GLOBCOVER2009_Validation_Report_2.2.pdf (last access: 3 October 2017), 2011. Bierkens, M. F. P.: Global hydrology 2015: State, trends, and directions, Water Resour. Res., 51, 4923–4947, https://doi.org/10.1002/2015WR017173, 2015. Bishop, C. H. and Abramowitz, G.: Climate model dependence and the replicate Earth paradigm, Clim. Dynam., 41, 885–900, https://doi.org/10.1007/s00382-012-1610-y, 2013. Burek, P., van der Knijff, J., and de Roo, A.: LISFLOOD, distributed water balance and flood simulation model revised user manual, Joint Research Centre of the European Commission, Publications Office of the European Union, Luxembourg, 2013. Clark, D. B., Mercado, L. M., Sitch, S., Jones, C. D., Gedney, N., Best, M. J., Pryor, M., Rooney, G. G., Essery, R. L. H., Blyth, E., Boucher, O., Harding, R. J., Huntingford, C., and Cox, P. M.: The Joint UK Land Environment Simulator (JULES), model description – Part 2: Carbon fluxes and vegetation dynamics, Geosci. Model Dev., 4, 701–722, https://doi.org/10.5194/gmd-4-701-2011, 2011. Dai, A.: Historical and Future Changes in Streamflow and Continental Runoff: A Review, in: Terrestrial Water Cycle and Climate Change: Natural and Human-Induced Impacts, edited by: Tang, Q. and Oki, T., John Wiley & Sons, Inc., 221, 17–37, https://doi.org/10.1002/9781118971772.ch2, 2016. Decharme, B., Boone, A., Delire, C., and Noilhan, J.: Local evaluation of the Interaction between Soil Biosphere Atmosphere soil multilayer diffusion scheme using four pedotransfer functions, J. Geophys. Res.-Atmos., 116, 1–29, https://doi.org/10.1029/2011JD016002, 2011. Decharme, B., Martin, E., and Faroux, S.: Reconciling soil thermal and hydrological lower boundary conditions in land surface models, J. Geophys. Res.-Atmos., 118, 7819–7834, https://doi.org/10.1002/jgrd.50631, 2013. Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J., Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park, B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J.-N., and Vitart, F.: The ERA-Interim reanalysis: configuration and performance of the data assimilation system, Q. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828, 2011. Di Baldassarre, G. and Montanari, A.: Uncertainty in river discharge observations: a quantitative analysis, Hydrol. Earth Syst. Sci., 13, 913–921, https://doi.org/10.5194/hess-13-913-2009, 2009. Dingman, S. L.: Physical Hydrology, 575 pp., Prentice-Hall, Old Tappan, N. J., 1994. Dirmeyer, P. A., Gao, X., Zhao, M., Guo, Z., Oki, T., and Hanasaki, N.: GSWP-2: Multimodel analysis and implications for our perception of the land surface, B. Am. Meteorol. Soc., 87, 1381–1397, https://doi.org/10.1175/BAMS-87-10-1381, 2006. Earthdata: MEaSUREs project, available at: https://earthdata.nasa.gov/community/community-data-system-programs/measures-projects (last access: 31 May 2018), 2017. 
Esri Education Team: World Climate Zones – Simplified [Esri shapefile], Scale Not Given, Using: ArcGIS [GIS software], National Geographic, available at: http://services.arcgis.com/BG6nSlhZSAWtExvp/arcgis/rest/services/WorldClimateZonesSimp/FeatureServer (last access: 14 February 2016), “MappingOurWorld”, 2014. Falcone, J. A., Carlisle, D. M., Wolock, D. M., and Meador, M. R.: GAGES: A stream gage database for evaluating natural and altered flow conditions in the conterminous United States, Ecology, 91, 621, https://doi.org/10.1890/09-0889.1, 2010. Fekete, B. M., Vörösmarty, C. J., and Grabs, W.: High-resolution fields of global runoff combining observed river discharge and simulated water balances, Global Biogeochem. Cy., 16, 15-1–15-10, https://doi.org/10.1029/1999GB001254, 2002. Flörke, M., Kynast, E., Bärlund, I., Eisner, S., Wimmer, F., and Alcamo, J.: Domestic and industrial water uses of the past 60 years as a mirror of socio-economic development: A global simulation study, Global Environ. Chang., 23, 144–156, https://doi.org/10.1016/j.gloenvcha.2012.10.018, 2013. Haddeland, I., Clark, D. B., Franssen, W., Ludwig, F., Voß, F., Arnell, N. W., Bertrand, N., Best, M., Folwell, S., Gerten, D., Gomes, S., Gosling, S. N., Hagemann, S., Hanasaki, N., Harding, R., Heinke, J., Kabat, P., Koirala, S., Oki, T., Polcher, J., Stacke, T., Viterbo, P., Weedon, G. P., and Yeh, P.: Multimodel Estimate of the Global Terrestrial Water Balance: Setup and First Results, J. Hydrometeorol., 12, 869–884, https://doi.org/10.1175/2011JHM1324.1, 2011. Harris, I., Jones, P. D., Osborn, T. J., and Lister, D. H.: Updated high-resolution grids of monthly climatic observations – the CRU TS3.10 Dataset, Int. J. Climatol., 34, 623–642, https://doi.org/10.1002/joc.3711, 2014. Hobeichi, S.: Linear Optimal Runoff Aggregate (LORA) v1.0, NCI National Research Data Collection, https://doi.org/10.25914/5b612e993d8ea, 2018. Hobeichi, S., Abramowitz, G., Evans, J., and Ukkola, A.: Derived Optimal Linear Combination Evapotranspiration (DOLCE): a global gridded synthesis ET estimate, Hydrol. Earth Syst. Sci., 22, 1317–1336, https://doi.org/10.5194/hess-22-1317-2018, 2018. Huntzinger, D. N., Schwalm, C. R., Wei, Y., Cook, R. B., Michalak, A. M., Schaefer, K., Jacobson, A. R., Arain, M. A., Ciais, P., Fisher, J. B., Hayes, D. J., Huang, M., Huang, S., Ito, A., Jain, A. K., Lei, H., Lu, C., Maignan, F., Mao, J., Parazoo, N., Peng, C., Peng, S., Poulter, B., Ricciuto, D. M., Tian, H., Shi, X., Wang, W., Zeng, N., Zhao, F., Zhu, Q., Yang, J., and Tao, B.: NACP MsTMIP: Global 0.5-deg Terrestrial Biosphere Model Outputs (version 1) in Standard Format, ORNL DAAC, Oak Ridge, Tennessee, USA, doi:10.3334/ORNLDAAC/1225, 2016. Jiménez, C., Martens, B., Miralles, D. M., Fisher, J. B., Beck, H. E., and Fernández-Prieto, D.: Exploring the merging of the global land evaporation WACMOS-ET products based on local tower measurements, Hydrol. Earth Syst. Sci., 22, 4513–4533, https://doi.org/10.5194/hess-22-4513-2018, 2018. Kauffeldt, A., Wetterhall, F., Pappenberger, F., Salamon, P., and Thielen, J.: Technical review of large-scale hydrological models for implementation in operational flood forecasting schemes on continental level, Environ. Modell. Softw., 75, 68–76, https://doi.org/10.1016/j.envsoft.2015.09.009, 2016. Liang, X., Lettenmaier, D. P., Wood, E. F., and Burges, S. J.: A simple hydrologically based model of land surface water and energy fluxes for general circulation models, J. Geophys. 
Res.-Atmos., 99, 14415–14428, https://doi.org/10.1029/94JD00483, 1994. Ling, H., Deng, X., Long, A., and Gao, H.: The multi-time-scale correlations for drought–flood index to runoff and North Atlantic Oscillation in the headstreams of Tarim River, Xinjiang, China, Hydrol. Res., 48, 1–12, https://doi.org/10.2166/nh.2016.166, 2016. Morel, P.: Why GEWEX? The agenda for a global energy and water cycle research program, GEWEX News, 11, 7–11, 2001. Mueller, B., Hirschi, M., Jimenez, C., Ciais, P., Dirmeyer, P. A., Dolman, A. J., Fisher, J. B., Jung, M., Ludwig, F., Maignan, F., Miralles, D. G., McCabe, M. F., Reichstein, M., Sheffield, J., Wang, K., Wood, E. F., Zhang, Y., and Seneviratne, S. I.: Benchmark products for land evapotranspiration: LandFlux-EVAL multi-data set synthesis, Hydrol. Earth Syst. Sci., 17, 3707–3720, https://doi.org/10.5194/hess-17-3707-2013, 2013. Munier, S., Aires, F., Schlaffer, S., Prigent, C., Papa, F., Maisongrande, P., and Pan, M.: Combining datasets of satellite retrieved products for basin-scale water balance study. Part II: Evaluation on the Mississippi Basin and closure correction model, J. Geophys. Res.-Atmos., 119, 12100–12116, https://doi.org/10.1002/2014JD021953, 2014. Pan, M., Sahoo, A. K., Troy, T. J., Vinukollu, R. K., Sheffield, J., and Wood, A. E. F.: Multisource estimation of long-term terrestrial water budget for major global river basins, J. Climate, 25, 3191–3206, https://doi.org/10.1175/JCLI-D-11-00300.1, 2012. Pechlivanidis, I. G., Jackson, B. M., Mcintyre, N. R., and Wheater, H. S.: Catchment Scale Hydrological Modelling: A Review Of Model Types, Calibration Approaches And Uncertainty Analysis Methods In The Context Of Recent Developments In Technology And Applications, Global Nest J., 13, 193–214, 2011. Peel, M. C., Chiew, F. H. S., Western, A. W., and McMahon, T. A.: Extension of Unimpaired Monthly Streamflow Data and Regionalisation of Parameter Values to Estimate Streamflow in Ungauged Catchments, Report to the National Land and Water Resources Audit, Centre for Environmental Applied Hydrology, The University of Melbourne, Australia, 2000. Rantz, S. E.: Measurement and computation of stream flow, Volume 2: Computation of discharge, USGPO, No. 2175, 631 pp., https://doi.org/10.3133/wsp2175, 1982. Reichle, R. H., Koster, R. D., De Lannoy, G. J. M., Forman, B. A., Liu, Q., Mahanama, S. P. P., and Touré, A.: Assessment and Enhancement of MERRA Land Surface Hydrology Estimates, J. Climate, 24, 6322–6338, https://doi.org/10.1175/JCLI-D-10-05033.1, 2011. Rodell, M., Houser, P. R., Jambor, U., Gottschalck, J., Mitchell, K., Meng, C.-J., Arsenault, K., Cosgrove, B., Radakovich, J., Bosilovich, M., Entin, J. K., Walker, J. P., Lohmann, D., and Toll, D.: The Global Land Data Assimilation System, B. Am. Meteorol. Soc., 85, 381–394, https://doi.org/10.1175/BAMS-85-3-381, 2004. Rodell, M., Beaudoing, H. K., L'Ecuyer, T. S., Olson, W. S., Famiglietti, J. S., Houser, P. R., Adler, R., Bosilovich, M. G., Clayson, C. A., Chambers, D., Clark, E., Fetzer, E. J., Gao, X., Gu, G., Hilburn, K., Huffman, G. J., Lettenmaier, D. P., Liu, W. T., Robertson, F. R., Schlosser, C. A., Sheffield, J., and Wood, E. F.: The observed state of the water cycle in the early twenty-first century, J. Climate, 28, 8289–8318, https://doi.org/10.1175/JCLI-D-14-00555.1, 2015. Sahoo, A. K., Pan, M., Troy, T. J., Vinukollu, R. K., Sheffield, J., and Wood, E. F.: Reconciling the global terrestrial water budget using satellite remote sensing, Remote Sens. 
Environ., 115, 1850–1865, https://doi.org/10.1016/j.rse.2011.03.009, 2011. Schellekens, J., Dutra, E., Martínez-de la Torre, A., Balsamo, G., van Dijk, A., Sperna Weiland, F., Minvielle, M., Calvet, J.-C., Decharme, B., Eisner, S., Fink, G., Flörke, M., Peßenteiner, S., van Beek, R., Polcher, J., Beck, H., Orth, R., Calton, B., Burke, S., Dorigo, W., and Weedon, G. P.: A global water resources ensemble of hydrological models: the eartH2Observe Tier-1 dataset, Earth Syst. Sci. Data, 9, 389–413, https://doi.org/10.5194/essd-9-389-2017, 2017. Sheffield, J., Ferguson, C. R., Troy, T. J., Wood, E. F., and McCabe, M. F.: Closing the terrestrial water budget from satellite remote sensing, Geophys. Res. Lett., 36, 1–5, https://doi.org/10.1029/2009GL037338, 2009. Shukla, S. and Wood, A. W.: Use of a standardized runoff index for characterizing hydrologic drought, Geophys. Res. Lett., 35, 1–7, https://doi.org/10.1029/2007GL032487, 2008. Siebert, S., Döll, P., Feick, S., Hoogeveen, J., and Frenken, K.: Global map of irrigation areas version 4.0.1, Johann Wolfgang Goethe University, Frankfurt am Main, Germany/Food and Agriculture Organization of the United Nations, Rome, Italy, 2007. Sood, A. and Smakhtin, V.: Global hydrological models: a review, Hydrolog. Sci. J., 60, 549–565, https://doi.org/10.1080/02626667.2014.950580, 2015. Tomy, T. and Sumam, K. S.: Determining the Adequacy of CFSR Data for Rainfall-Runoff Modeling Using SWAT, Procedia Tech., 24, 309–316, https://doi.org/10.1016/j.protcy.2016.05.041, 2016. Ukkola, A. M., Prentice, I. C., Keenan, T. F., van Dijk, A. I. J. M., Viney, N. R., Myneni, R. B., and Bi, J.: Reduced streamflow in water-stressed climates consistent with CO2 effects on vegetation, Nat. Clim. Change, 6, 75–78, https://doi.org/10.1038/nclimate2831, 2016. Van Beek, L. P. H. and Bierkens, M. F. P.: The Global Hydrological Model PCR-GLOBWB: Conceptualization, Parameterization and Verification, Department of Physical Geography, Utrecht University, Utrecht, the Netherlands, available at: http://vanbeek.geo.uu.nl/suppinfo/vanbeekbierkens2009.pdf (last access: 25 April 2018), 2008. Van Der Knijff, J. M., Younis, J., and De Roo, A. P. J.: LISFLOOD: a GIS-based distributed model for river basin scale water balance and flood simulation, Int. J. Geogr. Inf. Sci., 24, 189–212, https://doi.org/10.1080/13658810802549154, 2010. Van Dijk, A. I. J. M. and Warren, G.: The Australian Water Resources Assessment System. Technical Report 4. Landscape Model (version 0.5) Evaluation Against Observations. CSIRO: Water for a Healthy Country National Research Flagship, CSIRO, Australia, 2010. Van Dijk, A. I. J. M., Renzullo, L. J., Wada, Y., and Tregoning, P.: A global water cycle reanalysis (2003–2012) merging satellite gravimetry and altimetry observations with a hydrological multi-model ensemble, Hydrol. Earth Syst. Sci., 18, 2955–2973, https://doi.org/10.5194/hess-18-2955-2014, 2014. van Huijgevoort, M. H. J., Hazenberg, P., van Lanen, H. A. J., Teuling, A. J., Clark, D. B., Folwell, S., Gosling, S. N., Hanasaki, N., Heinke, J., Koirala, S., Stacke, T., Voss, F., Sheffield, J., and Uijlenhoet, R.: Global Multimodel Analysis of Drought in Runoff for the Second Half of the Twentieth Century, J. Hydrometeorol., 14, 1535–1552, https://doi.org/10.1175/JHM-D-12-0186.1, 2013. Weedon, G. P., Balsamo, G., Bellouin, N., Gomes, S., Best, M. J., and Viterbo, P.: Data methodology applied to ERA-Interim reanalysis data, Water Resour. Res., 50, 7505–7514, https://doi.org/10.1002/2014WR015638, 2014. 
Ye, A., Duan, Q., Yuan, X., Wood, E. F., and Schaake, J.: Hydrologic post-processing of MOPEX streamflow simulations, J. Hydrol., 508, 147–156, https://doi.org/10.1016/j.jhydrol.2013.10.055, 2014. Ye, W., Bates, B. C., Viney, N. R., Sivapalan, M., and Jakeman, A. J.: Performance of conceptual rainfall-runoff models in low-yielding ephemeral catchments, Water Resour. Res., 33, 153–166, 1997. Zhai, R. and Tao, F.: Contributions of climate change and human activities to runoff change in seven typical catchments across China, Sci. Total Environ., 605–606, 219–229, https://doi.org/10.1016/j.scitotenv.2017.06.210, 2017. Zhang, Y., Pan, M., Sheffield, J., Siemann, A. L., Fisher, C. K., Liang, M., Beck, H. E., Wanders, N., MacCracken, R. F., Houser, P. R., Zhou, T., Lettenmaier, D. P., Pinker, R. T., Bytheway, J., Kummerow, C. D., and Wood, E. F.: A Climate Data Record (CDR) for the global terrestrial water budget: 1984–2010, Hydrol. Earth Syst. Sci., 22, 241–263, https://doi.org/10.5194/hess-22-241-2018, 2018.
# SoftHSMv2 internals

SoftHSMv2 is a software implementation of the PKCS#11 interface. It is often used as a replacement for real HSM devices in test environments where protecting key material is not a strong requirement. In this post I will explain how the state of SoftHSMv2 is persisted, the security behind it and what can be improved.

# Tokens and objects

A token is the PKCS#11 term for something that stores cryptographic objects and performs cryptographic operations. In SoftHSMv2 each token is organized as a directory containing files that represent the token's objects. All token directories have a common root, which by default is /var/lib/softhsm/tokens. Object files have an .object extension:

$ ls /var/lib/softhsm/tokens/643f45af-4f11-54c9-6edb-3578f9e3ea47/*object
/var/lib/softhsm/tokens/643f45af-4f11-54c9-6edb-3578f9e3ea47/7bfd8494-0a71-a984-cb8e-97dd6036dce8.object
/var/lib/softhsm/tokens/643f45af-4f11-54c9-6edb-3578f9e3ea47/7cb6bfc2-3526-18fe-1c02-f45a94e482c4.object
/var/lib/softhsm/tokens/643f45af-4f11-54c9-6edb-3578f9e3ea47/token.object

# Object encryption

Each token is initialized with a user PIN and an SO PIN. SoftHSMv2 uses the user PIN to derive a 256-bit AES master key. For every token it also generates a random AES token key, which is used to encrypt and decrypt sensitive object attributes in the corresponding token. Finally, SoftHSMv2 encrypts the token key with the master key and saves it to token.object in the token directory. This is the pseudocode for all of this:

salt = RAND(8)
masterKey = KDF(salt, PIN)
tokenKey = RAND(32)
IV = RAND(16)
magic = "RJR"
encryptedBlob = AES256-CBC(magic || tokenKey, masterKey, IV)
save("token.object", salt || IV || encryptedBlob)

You can find the real C++ implementation in SecureDataManager.cpp and RFC4880.cpp. So what is the purpose of the magic bytes that are concatenated with the tokenKey? This is basically a poor man's implementation of authenticated encryption. When a user tries to log in with a PIN, SoftHSMv2 determines whether the specified PIN is correct by decrypting the encryptedBlob and checking if it starts with the magic bytes. I will talk about how this can be improved, but first let's see an example of how to derive masterKey and tokenKey if we know the user PIN.

## Example

Object files can be dumped with the softhsm2-dump-file utility. Let's dump token.object, which contains the encrypted tokenKey:

$ softhsm2-dump-file /var/lib/softhsm/tokens/2d0d4809-45cc-93b1-4b3b-21ef36a33837/token.object
...
00 00 00 00 80 00 53 4d  CKA_OS_USERPIN
00 00 00 00 00 00 00 03  byte string attribute
00 00 00 00 00 00 00 48  (length 72)
07 b1 79 16 02 74 f1 8f  <..y..t..>
57 70 54 a9 81 ea 8e b8  <WpT.....>
28 a4 d4 23 d5 78 cc 16  <(..#.x..>
7f 7d a0 3d 25 54 a9 67  <.}.=%T.g>
5e ba 0e b2 90 8b ef 08  <^.......>
8b d8 44 21 8a 92 3d d8  <..D!..=.>
4a 83 6a 2c 68 70 d5 fe  <J.j,hp..>
7b 46 bc 38 1b d0 e6 64  <{F.8...d>
35 49 8c 7c c2 e9 83 50  <5I.|...P>

The CKA_OS_USERPIN attribute is 72 bytes, which is the concatenation of the salt (8 bytes), IV (16 bytes) and encrypted blob (48 bytes).
Let's put these into shell variables:

salt=07b179160274f18f
IV=577054a981ea8eb828a4d423d578cc16
encryptedBlob=7f7da03d2554a9675eba0eb2908bef088bd844218a923dd84a836a2c6870d5fe7b46bc381bd0e66435498c7cc2e98350

With the correct PIN (1234) and this script we can obtain the masterKey and the tokenKey for this token:

$ ./softhsmv2.py 1234 $salt $IV $encryptedBlob
masterKey: 5fa8bdf96b1aa11da6aac165a0e4c3fd9d4b48838cbc58945e3ee27bc7fcf281
tokenKey: 302aa706005fb36002b65f65251d1621b183b04c680e3ec143210b7b63da9b85

Now we can use the tokenKey to decrypt the rest of the object files in this token.

# Object integrity

SoftHSM uses AES-CBC with the tokenKey to encrypt all sensitive object attributes such as private keys. The problem with this approach is that there is no way to guarantee the integrity of the object files. If an attacker gets access to the filesystem, they can modify the object files without SoftHSM detecting it. In this situation users will get incorrect results when using SoftHSM instead of an error saying that the store has been tampered with. This problem can be easily solved by replacing AES-CBC with one of the authenticated modes of AES, such as AES-GCM. SoftHSM already supports AES-GCM and I have submitted this PR. However, backward compatibility with old tokens requires more work.
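To make the magic-byte check concrete, here is a minimal Python sketch (my illustration, not SoftHSMv2's actual code) that performs the same decrypt-and-check step using the values from the example above. It assumes the `cryptography` package, takes the already-derived masterKey as input, and handles the trailing CBC padding loosely:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Values from the example above (assumed correct for that test token).
master_key = bytes.fromhex(
    "5fa8bdf96b1aa11da6aac165a0e4c3fd9d4b48838cbc58945e3ee27bc7fcf281")
iv = bytes.fromhex("577054a981ea8eb828a4d423d578cc16")
encrypted_blob = bytes.fromhex(
    "7f7da03d2554a9675eba0eb2908bef08"
    "8bd844218a923dd84a836a2c6870d5fe"
    "7b46bc381bd0e66435498c7cc2e98350")

MAGIC = b"RJR"

def unwrap_token_key(master_key, iv, blob):
    """Decrypt the blob and return the 32-byte token key.

    Raises ValueError if the magic prefix is missing -- this is how
    SoftHSMv2 decides that the supplied PIN was wrong.
    """
    decryptor = Cipher(algorithms.AES(master_key), modes.CBC(iv)).decryptor()
    plaintext = decryptor.update(blob) + decryptor.finalize()
    if not plaintext.startswith(MAGIC):            # the "poor man's" authentication
        raise ValueError("wrong PIN: magic bytes not found")
    return plaintext[len(MAGIC):len(MAGIC) + 32]   # ignore trailing CBC padding

print("tokenKey:", unwrap_token_key(master_key, iv, encrypted_blob).hex())
```

If the example values are correct, the printed key should match the tokenKey shown above; an AES-GCM variant would additionally fail loudly on any modified ciphertext, which is exactly the integrity property discussed above.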
IAC positron beamline

Below are three possible configurations for a positron beam line at the IAC. They are listed in order of increasing complexity and difficulty. Because of the recent winter storm, they are code named using ski trail nomenclature.

Positron beamline

Magnet Elements

Magnet | Label | Thickness (cm) | Current (A) | Resistance (mΩ) | B | Quantity
KiwiDipole | D3 & D4 | 30 | | | |
Q1A | Q1 | 24 | 85 | 53 | 6 T/m | 8
Q1B | Q2 | 24 | 25 | 53 | 9.4 T/m | 7
Q1C | Q3 | 24 | 115 | 82 | 9 T/m | 5
Quad2T | Q | 15 | 120 | 27 | 9 T/m | 3
Quad2A | Q | 10 | 102 | 11 | 19 T/m | 12

Power Supplies

Type | Manufacturer | On Shelf | Tested magnet
10T250 | EMI | 4 | Use 2 for Dipoles
250T20 | EMI | 6 | Use on Q1As
20T500 | EMI | 13 | Use on Q1Bs
40T250 | EMI | 2 |
80T60 | EMI | 4 |

Beam Line targets and diagnostics

Device | Z-thickness
HKS Viewer | 15 cm

Configuration 1: "Green run"

This configuration is expected to require the minimal amount of effort (3 days) and is to be used as a "test of principle" in order to justify further investment. The "Green run" configuration proposes switching only two beamline elements. There are three elements between the last two dipole magnets (D2 & D3) in the beam line. The first element, after a 45 degree bending magnet (D2), is the first quadrupole magnet of a doublet pair. The quadrupole is followed by a quad port for mounting targets and then the second half of the quad doublet just before entering the last 45 degree dipole bending magnet (D3). The "Green run" configuration would have the first quad of the quad doublet and the target port switch places in the beam line. The target (a tungsten converter to produce positrons) would be placed just after the first 45 degree dipole. The quad doublet would be positioned after the target and before the last 45 degree bending dipole magnet (D3).

Changes
1. Switch the target mount assembly (4-way cross) and the quadrupole (3 days).

Goals
The goal of this configuration will be to test positron production efficiencies and determine if the results are scalable by improving the beam line optics.

Sketched (not to scale) layout of the 25 MeV beamline:
296 cm = parallel distance from the end of the accelerator module (after RF cavities) to the experimental cell port
280 cm = distance from the accelerator zero degree beamline to the experimental cell wall
56 cm = distance between flanges for the first dipole
84 cm = distance between flanges for the first quad doublet

This configuration loses a lot of positrons due to dispersion before

Configuration 2: "Blue run"

This configuration is expected to require a modest amount of effort (1 week) and is to be used as an improved "test of principle" in order to justify further investment. A new beam line would be built off the zero degree port. The "Blue run" configuration would have the positron converter target positioned just before the first 45 degree dipole bend, followed by quads.

File:Positron BlueRun.jpg

Changes
1. Machining new spool pieces
3. Recondition the thin quadrupoles to be more water-leak resistant and more easily maintained.

Goals
The goals of this configuration would be similar to the "Green run".

Configuration 3: "Black Diamond"

This configuration builds a new beam line which will use the first dipole (D0) currently in place to deflect electrons at 45 degrees beam left. Dipole "D3" will take out the 45 degree deflection of D0 and then send electrons to a 90 degree dipole which will then direct the electrons to the experimental cell through dipole D2, which has been turned off.
A rough measurement suggests that this setup will have as much space as Configuration 4 below, if not more.

Configuration 4: "Double Black Diamond"

This configuration would require a substantial amount of effort (3 weeks) and would be used as a performance test to determine the maximum positron yield from the 25 MeV linac at the IAC.

Changes
1. Replace the first dipole with 6 quadrupoles.
2. Replace the first 45 degree bending dipole with one that can bend the beam by 45 degrees either beam left or beam right. (This will allow us to keep the configuration in place and continue experiments which required the first dipole that was removed.)
3. Replace the quad doublet between the last two dipoles with a quad triplet.
4. Install a control system for all the new magnets.

Criticisms/problems to solve
1. There is not enough room for the target followed by 6 quadrupoles. Without the target, quadrupoles 1-6 would occupy 1" more space than what exists. Moving the linac to make space is not a desirable option.
2. A magnet to replace the second dipole, which would bend the beam +/- 45 degrees as well as allow the beam to pass undeflected, would need to be procured.
3. Additional power supplies for quads 1-6 may need to be procured.
4. The thin quadrupoles are difficult to maintain. Specifically, new fittings for the cooling lines would need to be installed, which would ease maintenance.
5. The proposed beam line changes would require a level of effort similar to moving the linac in order to fit additional optics elements between the last dipole and the wall.

Goal
A complete test of the optimal positron production capabilities using the 25 MeV linac machine.

Proposed beam line
The above would produce a 1 cm diameter positron beam spot just in front of the wall to the experimental cell. The mean positron energy would be 2-3 MeV and is tunable.

Magnet pictures
This picture shows the current 90 degree bend into the experimental hall. Quad 1 and Quad 2 are shown, and then the 2nd dipole magnet (labeled Bending Magnet 2A, a.k.a. Dipole 2) begins after Quad 1 and Quad 2. Dipole 2 is followed by another quad doublet and then a dipole (Dipole 3) and then a final quad doublet before going through the wall into the experimental cell.
Length of Iron =
Diameter of Coils =
The next picture was taken upstream of the 90 degree bend and shows Dipole 1, which is usually off unless we want to bend beam right into the accelerator hall instead of going straight to Quad 1 and Quad 2.
Label on side of Dipole 1
Label on side of Dipole 2
The quad doublets in the current beam line have the following label.

Beam Monitors

Parts needed
2. A separate power supply for dipole D2 needs to be installed.

Parts in hand
1. 1 mm thick tungsten target into cube
2. 2 drift chambers
3. 2, 1 cm x 2 cm scintillators
4. 2, 8 cm x 20 cm scintillators
5. 1 mm & 2 mm thick tungsten targets
6. FC: pico ammeter (100 microamps to 1 mA)
7. YAG scintillator viewer (0.1 mRad/hr on contact, 0.06 mRad/hr at 30 cm), 100 micron thick crystal
8. NaI: it fits into the pipe through the wall.

CHIP collaboration meeting Notes 1-11-08
1. MOU: Advertisement, talk at Idaho or JLab, teleconferencing, Joint or Bridge.

Optics
5% of the created positrons would be accepted off the target and need to be transported.

Run Plan
1. Tuesday: Measure beam properties. With corrector or FC on translator table
2. Wed: NaI measurements
3.
Thursday: Meet with IAC engineers
• Tune 10 MeV, 100 microamp, 100 nsec, 60 Hz electrons to the FC at D1, 6 mm spot size
• (If target ladder) then tune electrons to the viewer after the last quad.
• Reverse D2 polarity and look for positrons in the NaI (3 MeV for double escape, 0.511 MeV separation for the others)
• Change current to 1 positron/pulse
• Measure positron yield as a function of energy by changing D2 and the last three quads. (Use an aluminum block to cut off electrons.)
• Measure the positron beam profile.
• Reverse D2 and quads to measure the electron rate.
• ID 0.511 MeV gammas from positron annihilation.
• Bring screens, corrector coils, slow valve
• FC in May, slow valve, QC quads

Run at the IAC on Feb 26, 2008
The beam spot size off dipole D1 was measured: CHIPS Run Feb-2008. The spot size measured was after a 1 mm Al – 1 mm water – 1 mm Al window.

CHIPS conference call 3/21/08
Solvable Problems:
1.) JLab would like a beam spot of 1 mm. Measured 8 mm after the thick window. Doug reported measuring 3 mm without the thick window.
2.) Lowering beam current: 10^8 electrons/pulse desired to produce 1 positron per pulse. NEED SLITS and then localized lead shielding.
3.) Beam position monitoring.

Next steps after Feb run:
1.) Further develop the beam line design to solve the above problems.
2.) Proposal is to measure beam parameters with flags and FCUP near the end of May.
a.) Measure beam shape at the zero degree port and after the 90 degree bend
b.) Install FCUP (2 3/4 flange)
c.) Measure background rates
i.) one flag has the positron converter in the experimental cell.
ii.) dipole on the experimental cell side
iii.) photon veto using scintillators as well.
3.) Future beam line
a.) perhaps move the converter target between the dipoles
b.) 50 ns beam pulse may give you +/- 2 MeV spread
c.)
# The product

1. Aug 26, 2004

### vikasj007

well i could not get anything really mind boggling, so u will have to put up with this one

what is the product of:

(x-a)(x-b)(x-c).................. = ?

2. Aug 26, 2004

### cepheid

Staff Emeritus

$$(x^2 - bx - ax + ab)(x-c)$$

$$= x^3 - cx^2 - bx^2 + cbx - ax^2 + cax + abx - cab$$

What the hell was the point? Basic algebra, without even a nice expansion.

EDIT

*sigh*, yeah ok, had to collect terms, it looked so bad otherwise:

$$= x^3 - (a+b+c)x^2 + (ab + ac + bc)x - abc$$

Last edited: Aug 26, 2004

3. Aug 26, 2004

### Pi

Haha, haven't seen that one in a long time :rofl:

(x - a)(x - b) ... (x - x)(x - y)(x - z) = 0

since x - x = 0

4. Aug 30, 2004

### vikasj007

what in the world was cepheid trying to do?

next time try to read carefully.

5. Aug 30, 2004

### check

vikasj007, u should have written it more clearly. something like this:

(x-a)(x-b)...(x-y)(x-z) = ?
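A quick way to check both answers mechanically is with SymPy. This small illustrative sketch (not from the thread) reproduces cepheid's collected expansion and shows that the full a-to-z product vanishes because of the (x - x) factor:

```python
import string
import sympy as sp

x = sp.Symbol('x')
a, b, c = sp.symbols('a b c')

# cepheid's three-factor expansion, after collecting terms
collected = x**3 - (a + b + c)*x**2 + (a*b + a*c + b*c)*x - a*b*c
assert sp.expand((x - a)*(x - b)*(x - c) - collected) == 0

# the intended trick: the product over every letter a..z contains (x - x) = 0
product = sp.Integer(1)
for letter in string.ascii_lowercase:
    product *= (x - sp.Symbol(letter))
print(product)  # prints 0, since the (x - x) factor kills the whole product
```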
# PH-EP Preprints

Recently added:

2014-04-18 04:29
A search for WW$\gamma$ and WZ$\gamma$ production and constraints on anomalous quartic gauge couplings in pp collisions at $\sqrt{s}$ = 8 TeV / CMS Collaboration
A search for WV$\gamma$ triple vector boson production is presented based on events containing a W boson decaying to a muon or an electron and a neutrino, a second V (W or Z) boson, and a photon. [...]
arXiv:1404.4619 ; CMS-SMP-13-009 ; CERN-PH-EP-2014-046. - 2014. - 33 p.
Preprint

2014-04-17 12:24
Muon reconstruction efficiency and momentum resolution of the ATLAS experiment in proton–proton collisions at $\sqrt{s}=7$ TeV in 2010 / ATLAS Collaboration
This paper presents a study of the performance of the muon reconstruction in the analysis of proton–proton collisions at $\sqrt{s}=7$ TeV at the LHC, recorded by the ATLAS detector in 2010. [...]
arXiv:1404.4562 ; CERN-PH-EP-2013-154. - 2014. - 22 p.
Previous draft version - Preprint

2014-04-12 11:37
Measurement of jet multiplicity distributions in $t\bar{t}$ production in pp collisions at $\sqrt{s}$ = 7 TeV / CMS Collaboration
The normalised differential top quark-antiquark production cross section is measured as a function of the jet multiplicity in proton-proton collisions at a centre-of-mass energy of 7 TeV at the LHC with the CMS detector. [...]
arXiv:1404.3171 ; CMS-TOP-12-018 ; CERN-PH-EP-2014-048. - 2014. - 41 p.
Additional information for the analysis - CMS AuthorList - Preprint

2014-04-09 13:28
Search for supersymmetry at $\sqrt{s}=8$ TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector / ATLAS Collaboration
A search for strongly produced supersymmetric particles is conducted using signatures involving multiple energetic jets and either two isolated leptons ($e$ or $\mu$) with the same electric charge, or at least three isolated leptons. [...]
arXiv:1404.2500 ; CERN-PH-EP-2014-044. - 2014. - 35 p.
Previous draft version - Preprint

2014-04-08 22:59
Measurement of the ratio $B(t \to Wb)/B(t \to Wq)$ in pp collisions at $\sqrt{s}$ = 8 TeV / CMS Collaboration
The ratio of the top-quark branching fractions $R = B(t \to Wb)/B(t \to Wq)$, where the denominator includes the sum over all down-type quarks (q = b, s, d), is measured in the $t\bar{t}$ dilepton final state with proton-proton collision data at $\sqrt{s}$ = 8 TeV from an integrated luminosity of 19.7 inverse-femtobarns, collected with the CMS detector. [...]
arXiv:1404.2292 ; CMS-TOP-12-035 ; CERN-PH-EP-2014-052. - 2014. - 37 p.
Preprint

2014-04-08 18:16
Electron reconstruction and identification efficiency measurements with the ATLAS detector using the 2011 LHC proton-proton collision data / ATLAS Collaboration
The electron reconstruction and identification efficiencies of the ATLAS detector at the LHC have been evaluated using proton-proton collision data collected in 2011 at $\sqrt{s}$ = 7 TeV and corresponding to an integrated luminosity of 4.7 fb$^{-1}$. [...]
arXiv:1404.2240 ; CERN-PH-EP-2014-040. - 2014. - 38 p.
Previous draft version - Preprint

2014-04-08 05:05
Observation of the resonant character of the $Z(4430)^-$ state / LHCb collaboration
Resonant structures in $B^0\to\psi'\pi^-K^+$ decays are analyzed by performing a four-dimensional fit of the decay amplitude, using $pp$ collision data corresponding to $\rm 3 fb^{-1}$ collected with the LHCb detector. [...]
arXiv:1404.1903 ; LHCB-PAPER-2014-014 ; CERN-PH-EP-2014-061. - 2014. - 10 p.
Preprint - Related data file(s) - Related supplementary data file(s) 2014-04-05 18:49 Search for invisible decays of Higgs bosons in the vector boson fusion and associated ZH production modes / CMS Collaboration A search for invisible decays of Higgs bosons is performed using the vector boson fusion and associated ZH production modes. [...] arXiv:1404.1344 ; CMS-HIG-13-030 ; CERN-PH-EP-2014-051. - 2014. - 51 p. Preprint 2014-04-04 11:44 Measurement of the low-mass Drell--Yan differential cross section at √s = 7 TeV using the ATLAS detector / ATLAS Collaboration The differential cross section for the process $Z/\gamma^*\rightarrow \ell\ell$ ($\ell=e,\mu$) as a function of dilepton invariant mass is measured in $pp$ collisions at $\sqrt{s}=7$ TeV at the LHC using the ATLAS detector. [...] arXiv:1404.1212 ; CERN-PH-EP-2014-020. - 2014. - 32 p. Previous draft version - Preprint 2014-04-03 20:10 Measurement of the parity-violating asymmetry parameter $\alpha_b$ and the helicity amplitudes for the decay $\Lambda_b^0\to J/\psi\Lambda^0$ with the ATLAS detector / ATLAS Collaboration A measurement of the parity-violating decay asymmetry parameter, $\alpha_b$, and the helicity amplitudes for the decay $\Lambda_b^0\to J/\psi(\mu^+\mu^-) \Lambda^0 (p\pi^-)$ is reported. [...] arXiv:1404.1071 ; CERN-PH-EP-2014-034. - 2014. - 12 p. Previous draft version - Preprint
# Vector Meson Production and Elliptic Flow Measurement in Relativistic Heavy-Ion Collision Experiments

## Prabhat Pujahari

### Indian Institute of Technology - Department of Physics - Bombay - India

In-medium modification of light vector mesons due to the effects of increasing temperature and density has been proposed as a possible signal of a phase transition of nuclear matter to a de-confined plasma of quarks and gluons. Even in the absence of the phase transition, at lower temperature and density, modifications of these mesons are expected to be measurable. Effects such as phase space and dynamical interactions with matter may modify their mass, width and shape. Results from resonances measured at RHIC, such as $\rho^{0} \to \pi^{+}\pi^{-}$, and the possible modifications due to the effects mentioned above will be discussed. Also, the measurement of the elliptic flow of resonances will potentially provide information about the resonance's production mechanism. In particular, the $\rho^{0}$ elliptic flow will distinguish whether it was formed in the early stage by quark-antiquark coalescence or later in the collisions due to hadron re-scattering. If I get some time, I will also discuss the interesting physics of the proton-on-lead collisions which the LHC will start at the beginning of the year 2013. Specifically, I will discuss the physics of heavy vector mesons such as the J/Psi measurement and the possible "cold nuclear matter" effect at LHC energies.
Journal article Open Access

# How will changes in carbon dioxide and methane modify the mean structure of the mesosphere and thermosphere?

Roble, R. G.; Dickinson, R. E.

### JSON-LD (schema.org) Export

{
  "description": "A global average model of the coupled mesosphere, thermosphere and ionosphere is used to examine the effect of trace gas variations on the overall structure of these regions. In particular, the variations caused by CO2 and CH4 doublings and halvings from present day mixing ratios are presented. The results indicate that the mesosphere and thermosphere temperatures will cool by about 10K and 50K respectively as the CO2 and CH4 mixing ratios are doubled. These regions are heated by similar amounts when the trace gas mixing ratios are halved. Compositional redistributions also occur in association with changes in the temperature profile. The results show that global change will occur in the upper atmosphere and ionosphere as well as in the lower atmosphere during the 21st century.",
  "creator": [
    { "@type": "Person", "name": "Roble, R. G." },
    { "@type": "Person", "name": "Dickinson, R. E." }
  ],
  "headline": "How will changes in carbon dioxide and methane modify the mean structure of the mesosphere and thermosphere?",
  "datePublished": "1989-12-01",
  "url": "https://zenodo.org/record/1231392",
  "@context": "https://schema.org/",
  "identifier": "https://doi.org/10.1029/gl016i012p01441",
  "@id": "https://doi.org/10.1029/gl016i012p01441",
  "@type": "ScholarlyArticle",
  "name": "How will changes in carbon dioxide and methane modify the mean structure of the mesosphere and thermosphere?"
}
# Vacuum Cherenkov radiation in spacelike Maxwell-Chern-Simons theory

@article{Kaufhold2007VacuumCR,
  title={Vacuum Cherenkov radiation in spacelike Maxwell-Chern-Simons theory},
  author={C. Kaufhold and F. R. Klinkhamer},
  journal={Physical Review D},
  year={2007},
  volume={76},
  pages={025024}
}

• Published 24 April 2007 • Physics • Physical Review D

A detailed analysis of vacuum Cherenkov radiation in spacelike Maxwell-Chern-Simons (MCS) theory is presented. A semiclassical treatment reproduces the leading terms of the tree-level result from quantum field theory. Moreover, certain quantum corrections turn out to be suppressed for large energies of the charged particle, for example, the quantum corrections to the classical MCS Cherenkov angle. It is argued that MCS-theory Cherenkov radiation may, in principle, lead to anisotropy effects for…
# Detection of a Supernova Signature Associated with GRB 011121

### Abstract

Using observations from an extensive monitoring campaign with the Hubble Space Telescope, we present the detection of an intermediate-time flux excess that is redder in color relative to the afterglow of GRB 011121, currently distinguished as the gamma-ray burst with the lowest known redshift. The “red bump,” which exhibits a spectral rollover at ≈7200 Å, is well described by a redshifted Type Ic supernova that occurred approximately at the same time as the gamma-ray burst event. The inferred luminosity is about half that of the bright supernova SN 1998bw. These results serve as compelling evidence for a massive star origin of long-duration gamma-ray bursts. Models that posit a supernova explosion weeks to months preceding the gamma-ray burst event are excluded by these observations. Finally, we discuss the relationship between spherical core-collapse supernovae and gamma-ray bursts. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

Publication: Astrophysical Journal Letters
# Probing Kilonova Ejecta Properties Using a Catalog of Short Gamma-Ray Burst Observations

J. C. Rastinejad, W. Fong, C. D. Kilpatrick, K. Paterson, N. R. Tanvir, A. J. Levan, B. D. Metzger, E. Berger, R. Chornock, B. E. Cobb, T. Laskar, P. Milne, A. E. Nugent, N. Smith

Research output: Contribution to journal: Article

## Abstract

The discovery of GW170817 and GRB 170817A in tandem with AT 2017gfo cemented the connection between neutron star mergers, short gamma-ray bursts (GRBs), and kilonovae. To investigate short GRB observations in the context of diverse kilonova behavior, we present a comprehensive optical and near-infrared (NIR) catalog of 85 bursts discovered over 2005-2020 on timescales of $\lesssim12$ days. The sample includes previously unpublished observations of 23 bursts, and encompasses both detections and deep upper limits. We identify 11.8% and 15.3% of short GRBs in our catalog with upper limits that probe luminosities lower than those of AT 2017gfo and a fiducial NSBH kilonova model (for pole-on orientations), respectively. We quantify the ejecta masses allowed by the deepest limits in our catalog, constraining blue and 'extremely blue' kilonova components of 14.1% of bursts to $M_{\rm ej}\lesssim0.01-0.1 M_{\odot}$. The sample of short GRBs is not particularly constraining for red kilonova components. Motivated by the large catalog as well as model predictions of diverse kilonova behavior, we investigate altered search strategies for future follow-up to short GRBs. We find that ground-based optical and NIR observations on timescales of $\gtrsim 2$ days can play a significant role in constraining more diverse outcomes. We expect future short GRB follow-up efforts, such as from the *James Webb Space Telescope*, to expand the reach of kilonova detectability to redshifts of $z\approx 1$.

Original language: English
Journal: Astrophysical Journal
Acceptance date: 8 Jan 2021

• astro-ph.HE
# Article

Keywords: resolvable; maximal; $\alpha$-bounded

Summary: It is proved that every uncountable $\omega$-bounded group and every homogeneous space containing a convergent sequence are resolvable. We find some conditions for a topological group topology to be irresolvable and maximal.

References:
[A] Anderson D.R.: On connected irresolvable Hausdorff spaces. Proc. Amer. Math. Soc. 16 (1965), 463-466. MR 0178443 | Zbl 0127.13003
[Be] Bell M.G.: On the combinatorial principle $P(c)$. Fund. Math. 114 (1981), 149-157. MR 0643555
[Bo] Booth D.: Ultrafilters on a countable set. Ann. Pure Appl. Logic (1970), 1-24. MR 0277371 | Zbl 0231.02067
[CF] Comfort W.W., Feng L.: The union of resolvable spaces is resolvable. preprint, 1993. MR 1221007
[CG] Comfort W.W., García-Ferreira S.: manuscript in preparation, 1900.
[CGvM] Comfort W.W., Gladdines H., Van Mill J.: Proper pseudocompact subgroups of pseudocompact Abelian groups. preprint, 1993. Zbl 0915.54029
[CvM] Comfort W.W., Van Mill J.: Groups with only resolvable topologies. preprint, 1993.
[CMZ] Comfort W.W., Masaveau O., Zhou H.: Resolvability in topology and in topological groups. Proc. Ninth (June 1993) Summer Topology Conference, Ann. New York Acad. Sci., to appear. MR 1462378
[GG] García-Ferreira S., García-Máynez A.: On weakly pseudocompact spaces. Houston J. Math. 20 (1994), 145-159. MR 1272568
[G] Guran I.I.: On topological groups close to being Lindelöf. Soviet Math. Dokl. 23 (1981), 173-175. Zbl 0478.22002
[H] Hewitt E.: A problem of set-theoretic topology. Duke Math. J. 10 (1943), 309-333. MR 0008692 | Zbl 0060.39407
[L] Louveau A.: Sur un article de S. Sirota. Bull. Sci. Math. (2) 96 (1972), 3-7. MR 0308326 | Zbl 0228.54032
[M] Malykhin V.I.: Extremally disconnected and similar groups. Soviet Math. Dokl. 16 (1975), 21-25. Zbl 0322.22003
[P] Padmavally K.: An example of a connected irresolvable Hausdorff space. Duke Math. J. 20 (1953), 513-520. MR 0059539
# Matt's arXiv selection, Monday 13th March 2006. From: Matthew Davis <mdavis_at_physics.uq.edu.au> Date: Mon, 13 Mar 2006 09:17:47 +1000 (EST) The following message was sent to the matts_arxiv list by Matthew Davis <mdavis_at_physics.uq.edu.au> Remember the archives of the arXiv selection at Have a good week, Matt. -- ------------------------------------------------------------------------- Dr M. J. Davis, Senior Lecturer in Physics School of Physical Sciences, email: mdavis_at_physics.uq.edu.au University of Queensland, ph : +61 7 334 69824 Brisbane, QLD 4072, fax : +61 7 336 51242 Australia. http://www.physics.uq.edu.au/people/mdavis/ ------------------------------------------------------------------------- ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603056 Date: Thu, 2 Mar 2006 22:00:44 GMT (22kb) Title: Bose atoms in a trap: a variational Monte Carlo formulation for the universal behavior at the Van der Waals length scale Authors: Imran Khan and Bo Gao Subj-class: Statistical Mechanics \\ We present a variational Monte Carlo (VMC) formulation for the universal equations of state at the Van der Waals length scale [B. Gao, J. Phys. B \textbf{37}, L227 (2004)] for $N$ Bose atoms in a trap. The theory illustrates both how such equations of state can be computed exactly, and the existence and the importance of long-range atom-atom correlation under strong confinement. Explicit numerical results are presented for N=3 and 5, and used to provide a quantitative understanding of the shape-dependent confinement correction that is important for few atoms under strong confinement. \\ ( http://arXiv.org/abs/cond-mat/0603056 , 22kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603059 Date: Fri, 3 Mar 2006 00:44:12 GMT (151kb) Title: Collisions between solitary waves of three-dimensional Bose-Einstein condensates Authors: N.G. Parker, A. M. Martin, S. L. Cornish and C. S. Adams Subj-class: Other \\ We study bright solitary waves of three dimensional trapped Bose-Einstein condensates and their collisions. For a single solitary wave, in addition to an upper critical number, we also find a {\em lower} cut-off, below which no stable state can be found. Collisions between solitary waves can be elastic, inelastic with either reduced or increased outgoing speed, or completely unstable due to a collapse instability. A $\pi$-phase difference between the waves promotes elastic collisions, and gives excellent agreement with recent experimental results over long timescales. \\ ( http://arXiv.org/abs/cond-mat/0603059 , 151kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603070 Date: Fri, 3 Mar 2006 10:51:43 GMT (260kb) Title: Dark solitons in F=1 spinor Bose--Einstein condensate Authors: Masaru Uchiyama, Jun'ichi Ieda, Miki Wadati Subj-class: Other; Soft Condensed Matter \\ We study dark soliton solutions of a multi-component Gross--Pitaevskii equation for hyperfine spin F=1 spinor Bose--Einstein condensate. The interactions are supposed to be inter-atomic repulsive and anti-ferromagnetic ones of equal magnitude. The solutions are obtained from those of an integrable $2\times 2$ matrix nonlinear Schr\"{o}dinger equation with nonvanishing boundary conditions. We investigate the one-soliton and two-soliton solutions in detail. One-soliton is classified into two kinds. 
The ferromagnetic state has wavefunctions of domain-wall shape and its total spin is nonzero. The polar state provides a hole soliton and its total spin is zero. These two states are selected by choosing the type of the boundary conditions. In two-soliton collisions, we observe the spin-mixing or spin-transfer. It is found that, as "magnetic" carriers, solitons in the ferromagnetic state are operative for the spin-mixing while those in the polar are passive. \\ ( http://arXiv.org/abs/cond-mat/0603070 , 260kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603071 Date: Fri, 3 Mar 2006 10:03:49 GMT (5kb) Title: Generalized Thermostatistics and Bose-Einstein Condensation Authors: H. G. Miller, F. C. Khanna, R.Teshima, A.R. Plastino and A. Plastino Subj-class: Statistical Mechanics \\ Analytical expressions for Bose-Einstein condensation of an ideal Bose gas analyzed within the strictures of non-extensive, generalized thermostatistics are here obtained. \\ ( http://arXiv.org/abs/cond-mat/0603071 , 5kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603082 Date: Fri, 3 Mar 2006 16:15:48 GMT (2921kb) Title: Liquid 4He: contributions to first principles theory of quantized vortices, thermohydrodynamic properties, and the lambda transition Authors: H.W. Jackson Subj-class: Statistical Mechanics \\ Liquid 4He has been studied extensively for almost a century, but there are still a number of outstanding weak or missing links in our comprehension of it. This paper reviews some of the principal paths taken in previous research and then proceeds to fill gaps and create an integrated picture with more complete understanding through first principles treatment of a realistic model that starts with a microscopic, atomistic description of the liquid. Newly derived results for vortex cores and thermohydrodynamic properties for a two-fluid model are used to show that interacting quantized vortices may produce a lambda anomaly in specific heat near the superfluid transition where flow properties change. The nature of the order in the superfluid state is explained. Experimental support for new calculations is exhibited, and a unique specific heat experiment is proposed to test predictions of the theory. Relevance of the theory to modern research in cosmology, astrophysics, and Bose-Einstein condensates is discussed. \\ ( http://arXiv.org/abs/cond-mat/0603082 , 2921kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603083 Date: Fri, 3 Mar 2006 16:39:15 GMT (38kb) Title: Spin Hall Effect in Atoms Authors: Xiong-Jun Liu, Xin Liu, L. C. Kwek, and C. H. Oh Subj-class: Mesoscopic Systems and Quantum Hall Effect \\ We investigate a new type of spin hall effect (SHE) in neutral atomic system by coupling atoms' internal freedom (atom spin states) to radiation. Atoms with opposite spin polarization are shown to experience opposite effective electromagnetic fields, and then can move in the opposite direction. For this pure spin currents in atomic system are obtained, i.e. no massive current occurs in this way. Besides, we show for special case the spin currents created in present way can exhibit interesting topological properties. 
\\ ( http://arXiv.org/abs/cond-mat/0603083 , 38kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0511206 replaced with revised version Thu, 2 Mar 2006 21:18:59 GMT (21kb) Title: Solitons of Bose-Fermi mixtures in a strongly elongated trap Authors: J. Santhanam, V. M. Kenkre, V. V. Konotop Subj-class: Other Journal-ref: Physical Review A, Volume 73, Number 1, 013612 (2006) \\ ( http://arXiv.org/abs/cond-mat/0511206 , 21kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603025 Date: Fri, 3 Mar 2006 13:43:32 GMT (661kb) Title: Intensity correlation and anticorrelations in coherently prepared atomic vapor Authors: Gombojav O. Ariunbold, Vladimir A. Sautenkov, Yuri V. Rostovtsev, and Marlan O. Scully \\ Motivated by the recent experiment [V.A. Sautenkov, Yu.V. Rostovtsev, and M.O. Scully, Phys. Rev. A 72, 065801 (2005)], we develop a theoretical model in which the field intensity fluctuations resulted from resonant interaction of a dense atomic medium with laser field having finite bandwidth. The intensity-intensity cross correlation between two circular polarized beams can be controlled by the applied external magnetic field. A smooth transition from perfect correlations to anti-correlations (at zero delay time) of the outgoing beams as a function of the magnetic field strength is observed. It provides us with the desired information about decoherence rate in, for example, $^{87}$Rb atomic vapor. \\ ( http://arXiv.org/abs/quant-ph/0603025 , 661kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603028 Date: Fri, 3 Mar 2006 17:33:37 GMT (291kb) Title: Simultaneous Magneto-Optical Trapping of Bosonic and Fermionic Chromium Atoms Authors: R. Chicireanu, A. Pouderous, R. Barbe, B. Laburthe-Tolra, E. Marechal, L. Vernac, J.-C. Keller, and O. Gorceix Comments: 8 pages, 5 figures. submitted to Phys Rev A \\ We report on magneto-optical trapping of fermionic 53Cr atoms. A Zeeman-slowed atomic beam provides loading rates up to 3 10^6 /s. We present systematic characterization of the magneto-optical trap (MOT). We obtain up to 5 10^5 atoms in the steady state MOT. The atoms radiatively decay from the excited P state into metastable D states, and, due to the large dipolar magnetic moment of chromium atoms in these states, they can remain magnetically trapped in the quadrupole field gradient of the MOT. We study the accumulation of metastable 53Cr atoms into this magnetic trap. We also report on the first simultaneous magneto-optical trapping of bosonic 52Cr and fermionic 53Cr atoms. Finally, we characterize the light assisted collision losses in this Bose-Fermi cold mixture. \\ ( http://arXiv.org/abs/quant-ph/0603028 , 291kb) ------------------------------------------------------------------------------ \\ Paper: physics/0512084 replaced with revised version Thu, 2 Mar 2006 01:36:45 GMT (81kb) Title: Direct excitation of the forbidden clock transition in neutral 174Yb atoms confined to an optical lattice Authors: Zeb W. Barber (1), Chad W. Hoyt (1), Chris W. Oates (1), Leo Hollberg (1), Aleksei V. Taichenachev (2), Valera I. 
Yudin (2) ((1) NIST-Boulder, (2) Novosibirsk) Comments: Submitted to Physics Review Letters Subj-class: Atomic Physics \\ ( http://arXiv.org/abs/physics/0512084 , 81kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603091 Date: Sat, 4 Mar 2006 17:51:02 GMT (64kb) Title: Fermion pairing with population imbalance: energy landscape and phase separation in a constrained Hilbert subspace Authors: Zheng-Cheng Gu, Geoff Warner and Fei Zhou (UBC) Subj-class: Other \\ In this Letter we map out the mean field energy potential landscape of fermion pairing states with population imbalance near broad Feshbach Resonances. We apply the landscape to investigate the nature of phase separation, when the Hilbert space is subject to the constraint of constant population imbalance. We calculate the scattering length dependence of the critical population imbalance for various phase separated states across Feshbach resonances. \\ ( http://arXiv.org/abs/cond-mat/0603091 , 64kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603093 Date: Sat, 4 Mar 2006 05:14:32 GMT (179kb) Title: The q-deformed Bose gas: Integrability and thermodynamics Authors: Michael Bortz and Sergey Sergeev Subj-class: Statistical Mechanics \\ We investigate the exact solution of the q-deformed one-dimensional Bose gas to derive all integrals of motion and their corresponding eigenvalues. As an application, the thermodynamics is given and compared to an effective field theory at low temperatures. \\ ( http://arXiv.org/abs/cond-mat/0603093 , 179kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603094 Date: Sat, 4 Mar 2006 07:06:54 GMT (104kb) Title: Electromagnetically induced transparency in an atom-molecule Bose-Einstein condensate Authors: Guang-Ri Jin, Chul Koo Kim, and Kyun Nahm Subj-class: Other \\ We propose a new measurement scheme for the atom-molecule dark state by using electromagnetically induced transparency (EIT) technique. Based on a density-matrix formalism, we calculate the absorption coefficient numerically. The appearance of the EIT dip in the spectra profile gives clear evidence for the creation of the dark state in the atom-molecule Bose-Einstein condensate. \\ ( http://arXiv.org/abs/cond-mat/0603094 , 104kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603106 Date: Sun, 5 Mar 2006 18:21:53 GMT (124kb) Title: Kinetics of quantum fluctuations in the Bose-Einstein condensation of polaritons Authors: Davide Sarchi, Vincenzo Savona Subj-class: Materials Science; Mesoscopic Systems and Quantum Hall Effect; Other \\ We develop a kinetic theory of polariton non-equilibrium Bose-Einstein condensation, in which the field dynamics of collective excitations is treated self-consistently along with the condensation kinetics. The theory accounts properly for the dominant role of quantum fluctuations in the condensate. In realistic situations with optical excitation at high energy, it predicts a considerable depletion of the condensate caused by long-wavelength fluctuations. We discuss how this depletion and the subsequent partial suppression of long-range order depend on the finite size of the system. 
\\ ( http://arXiv.org/abs/cond-mat/0603106 , 124kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603113 Date: Mon, 6 Mar 2006 03:14:21 GMT (27kb) Title: Stability analysis for $n$-component Bose-Einstein condensate Authors: David C. Roberts and Masahito Ueda Subj-class: Other; Statistical Mechanics \\ We derive the dynamic and thermodynamic stability conditions for dilute multicomponent Bose-Einstein condensates (BECs). These stability conditions, generalized for $n$-component BECs, are found to be equivalent and are shown to be consistent with the phase diagrams of two- and three-component condensates that are derived from energetic arguments. \\ ( http://arXiv.org/abs/cond-mat/0603113 , 27kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603118 Date: Mon, 6 Mar 2006 15:06:36 GMT (258kb) Title: Interacting Fermi Gases in Disordered One-Dimensional Lattices Authors: Gao Xianlong, M. Polini, B. Tanatar, and M.P. Tosi Comments: 7 pages, 4 figures, submitted Subj-class: Strongly Correlated Electrons; Disordered Systems and Neural Networks \\ Interacting two-component Fermi gases loaded in a one-dimensional (1D) lattice and subject to harmonic trapping exhibit intriguing compound phases in which fluid regions coexist with local Mott-insulator and/or band-insulator regions. Motivated by experiments on cold atoms inside disordered optical lattices, we present a theoretical study of the effects of a random potential on these ground-state phases. Within a density-functional scheme we show that disorder has two main effects: (i) it destroys the local insulating regions if it is sufficiently strong compared with the on-site atom-atom repulsion, and (ii) it induces an anomaly in the compressibility at low density from quenching of percolation. \\ ( http://arXiv.org/abs/cond-mat/0603118 , 258kb) ------------------------------------------------------------------------------ \\ Paper (*cross-listing*): nlin.SI/0603010 Date: Fri, 3 Mar 2006 10:54:11 GMT (25kb) Title: Inverse scattering method for the multicomponent nonlinear Schr\"odinger equation under nonvanishing boundary conditions Authors: Jun'ichi Ieda, Masaru Uchiyama, Miki Wadati Subj-class: Exactly Solvable and Integrable Systems; Mathematical Physics; Other \\ Matrix generalization of the inverse scattering method is developed to solve the multicomponent nonlinear Schr\"odinger equation with nonvanishing boundary conditions. It is shown that the initial value problem can be solved exactly. The multi-soliton solution is obtained from the Gel'fand--Levitan--Marchenko equation. \\ ( http://arXiv.org/abs/nlin/0603010 , 25kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0509505 replaced with revised version Mon, 6 Mar 2006 03:38:56 GMT (232kb) Title: Quantum atom optics with fermions from molecular dissociation Authors: K. V. Kheruntsyan Subj-class: Other; Statistical Mechanics \\ ( http://arXiv.org/abs/cond-mat/0509505 , 232kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0510397 replaced with revised version Mon, 6 Mar 2006 08:56:27 GMT (81kb) Title: Observations of density fluctuations in an elongated Bose gas: ideal gas and quasi-condensate regimes Authors: Jerome Esteve (LCFIO), Jean-Baptiste Trebbia (LCFIO), Thorsten Schumm (LCFIO), Alain Aspect (LCFIO), Christoph I. 
Westbrook (LCFIO), Isabelle Bouchoule (LCFIO) Proxy: ccsd ccsd-00011123 Subj-class: Other \\ ( http://arXiv.org/abs/cond-mat/0510397 , 81kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603037 Date: Mon, 6 Mar 2006 07:05:49 GMT (19kb) Title: Bright entanglement in the intracavity nonlinear coupler Authors: M.K. Olsen \\ We show that the intracavity Kerr nonlinear coupler is a potential source of bright continuous variable entangled light beams which are tunable and spatially separated. This system may be realised with integrated optics and thus provides a potentially rugged and stable source of bright entangled beams. \\ ( http://arXiv.org/abs/quant-ph/0603037 , 19kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603039 Date: Mon, 6 Mar 2006 10:38:52 GMT (164kb) Title: Effects of cavity-field statistics on atomic entanglement Authors: Biplab Ghosh, A. S. Majumdar, N. Nayak Comments: Revtex, 9 pages, 10 eps figures \\ We study the entanglement properties of a pair of two-level atoms going through a cavity one after another. The initial joint state of two successive atoms that enter the cavity is unentangled. Interactions mediated by the cavity photon field result in the final two-atom state being of a mixed entangled type. We consider respectively various field statistics as in the Fock state field, thermal field, coherent state field and squeezed state field inside the cavity, and calculate the entanglement of formation, the well-known measure appropriate for mixed states, of the joint two-atom state as a function of the Rabi-angle $gt$. We present a detailed and comparative study of two-atom entanglement for low and high mean photon number cases corresponding to the different cavity fields. \\ ( http://arXiv.org/abs/quant-ph/0603039 , 164kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603044 Date: Mon, 6 Mar 2006 15:43:26 GMT (152kb) Title: Optical Transparency Using Interference Between Two Modes of a Cavity Authors: J.D. Franson and S.M. Hendrickson \\ In electromagnetically-induced transparency (EIT), the absorption of a probe beam is greatly reduced due to destructive interference between two dressed atomic states produced by a strong laser beam. Here we show that a similar reduction in the single-photon absorption rate can be achieved by tuning a probe beam to be halfway between the resonant frequencies of two modes of a cavity. This technique is expected to be useful in enhancing two-photon absorption while reducing losses due to single-photon scattering. \\ ( http://arXiv.org/abs/quant-ph/0603044 , 152kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603048 Date: Mon, 6 Mar 2006 18:58:42 GMT (90kb) Title: Experimental interference of independent photons Authors: Rainer Kaltenbaek, Bibiane Blauensteiner, Marek Zukowski, Markus Aspelmeyer, Anton Zeilinger \\ Interference of photons emerging from independent sources is essential for modern quantum information processing schemes, above all quantum repeaters and linear-optics quantum computers. We report an observation of non-classical interference of two single photons originating from two independent, separated sources, which were actively synchronized with an r.m.s. timing jitter of 260 fs. The resulting (two-photon) interference visibility was 83(+/-)4 %.
\\ ( http://arXiv.org/abs/quant-ph/0603048 , 90kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603054 Date: Tue, 7 Mar 2006 16:08:59 GMT (193kb) Title: Elastic vs. inelastic coherent backscattering of laser light by cold atoms: a master equation treatment Authors: Vyacheslav Shatokhin, Cord A. M\"uller, Andreas Buchleitner \\ We give a detailed derivation of the master equation description of the coherent backscattering of laser light by cold atoms. In particular, our formalism accounts for the nonperturbative nonlinear response of the atoms when the injected intensity saturates the atomic transition. Explicit expressions are given for total and elastic backscattering intensities in the different polarization channels, for the simplest nontrivial multiple scattering scenario of intense laser light multiply scattering from two randomly placed atoms. \\ ( http://arXiv.org/abs/quant-ph/0603054 , 193kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603162 Date: Tue, 7 Mar 2006 12:25:25 GMT (152kb) Title: Bose-Fermi mixtures in an optical lattice Authors: K. Sengupta, N. Dupuis, and P. Majumdar Comments: 10 pages, 7 figures, version 1 Subj-class: Strongly Correlated Electrons \\ We study an atomic Bose-Fermi mixture in an optical lattice which is confined using an optical trap. We obtain the Mott ground states of such a system in the limit of deep optical lattice and discuss the effect of quantum fluctuations on these states. We also study the superfluid-insulator transitions of bosons and metal-insulator transition of fermions in such a mixture within a slave-rotor mean-field approximation, and obtain the corresponding phase diagram. We discuss experimental implications of our results. \\ ( http://arXiv.org/abs/cond-mat/0603162 , 152kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0410417 replaced with revised version Tue, 7 Mar 2006 09:20:21 GMT (544kb) Title: Universality in Few-body Systems with Large Scattering Length Authors: Eric Braaten (Ohio State U.), H.-W. Hammer (INT and Bonn U.) Comments: 219 pages, 58 figures, accepted for publication in Physics Reports, updated and some errors corrected Report-no: INT-PUB 04-27, HISKP-TH-06-07 Subj-class: Other; Soft Condensed Matter \\ ( http://arXiv.org/abs/cond-mat/0410417 , 544kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0510680 replaced with revised version Tue, 7 Mar 2006 17:21:49 GMT (71kb) Title: Response of a Fermi gas to time-dependent perturbations: Riemann-Hilbert approach at non-zero temperatures Authors: Bernd Braunecker Comments: 10 pages, 2 figures; 2 appendices added, a few modifications in the text, typos corrected; published in Phys. Rev. B Subj-class: Strongly Correlated Electrons; Mesoscopic Systems and Quantum Hall Effect Journal-ref: Phys. Rev. 
B 73, 075122 (2006) DOI: 10.1103/PhysRevB.73.075122 \\ ( http://arXiv.org/abs/cond-mat/0510680 , 71kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603190 Date: Wed, 8 Mar 2006 03:49:58 GMT (18kb) Title: Realization, Characterization, and Detection of Novel Superfluid Phases with Pairing between Unbalanced Fermion Species Authors: Kun Yang Comments: This is an invited contribution to a book titled "Pairing beyond BCS Theory in Fermionic Systems" (Mark Alford, John Clark and Armen Sedrakian, Subj-class: Superconductivity \\ In this chapter we review recent experimental and theoretical work on various novel superfluid phases in fermion systems, that result from pairing fermions of different species with unequal densities. After briefly reviewing existing experimental work in superconductors subject to a strong magnetic field and trapped cold fermionic atom systems, we discuss how to characterize the possible pairing phases based on their symmetry properties, and the structure/topology of the Fermi surface(s) formed by the unpaired fermions due to the density imbalance. We also discuss possible experimental probes that can be used to directly detect the structure of the superfluid order parameter in superconductors and trapped cold atom systems, which may establish the presence of some of these phases unambiguously. \\ ( http://arXiv.org/abs/cond-mat/0603190 , 18kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603197 Date: Wed, 8 Mar 2006 09:54:48 GMT (613kb) Title: Dark solitons as quasiparticles in trapped condensates Authors: V.A. Brazhnyi, V.V. Konotop, L.P. Pitaevskii Comments: 16 pages, 4 figures. To appear in Phys. Rev. A Subj-class: Other \\ We present a theory of dark soliton dynamics in trapped quasi-one-dimensional Bose-Einstein condensates, which is based on the local density approximation. The approach is applicable for arbitrary polynomial nonlinearities of the mean-field equation governing the system as well as to arbitrary polynomial traps. In particular, we derive a general formula for the frequency of the soliton oscillations in confining potentials. A special attention is dedicated to the study of the soliton dynamics in adiabatically varying traps. It is shown that the dependence of the amplitude of oscillations {\it vs} the trap frequency (strength) is given by the scaling law $X_0\propto\omega^{-\gamma}$ where the exponent $\gamma$ depends on the type of the two-body interactions, on the exponent of the polynomial confining potential, on the density of the condensate and on the initial soliton velocity. Analytical results obtained within the framework of the local density approximation are compared with the direct numerical simulations of the dynamics, showing remarkable match. Various limiting cases are addressed. In particular for the slow solitons we computed a general formula for the effective mass and for the frequency of oscillations. 
\\ ( http://arXiv.org/abs/cond-mat/0603197 , 613kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603200 Date: Wed, 8 Mar 2006 11:37:19 GMT (610kb) Title: Ordered structures in rotating ultracold Bose gases Authors: N.Barberan, M.Lewenstein, K.Osterloh, and D.Dagnino Subj-class: Mesoscopic Systems and Quantum Hall Effect \\ The characterization of small samples of cold bosonic atoms in rotating microtraps has recently attracted increasing interest due to the possibility to deal with a small number of particles per site in optical lattices. We analyze the evolution of ground state structures as the rotational frequency $\Omega$ increases. Various kinds of ordered structures are observed. For $N<10$ atoms, the standard scenario, valid for large systems, is absent, and only gradually recovered as $N$ increases. The vortex contribution to the total angular momentum $L$ as a function of $\Omega$ ceases to be an increasing function of $\Omega$, as observed in experiments of Chevy {\it et al.} (Phys. Rev. Lett. 85, 2223 (2000)). Instead, for small $N$, it exhibits a sequence of peaks showing wide minima at the values of $\Omega$, where no vortices appear. \\ ( http://arXiv.org/abs/cond-mat/0603200 , 610kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603204 Date: Wed, 8 Mar 2006 13:56:07 GMT (94kb) Title: Quantum Dynamics of Atomic Coherence in a Spin-1 Condensate: Mean-Field versus Many-Body Simulation Authors: L.I.Plimak, C.Wei\ss, R.Walser, and W.P.Schleich Comments: Accepted for publication for the special issue of "Optics Communications" on Quantum Control of Light and Matter Subj-class: Other \\ We analyse and numerically simulate the full many-body quantum dynamics of a spin-1 condensate in the single spatial mode approximation. Initially, the condensate is in a ``ferromagnetic'' state with all spins aligned along the $y$ axis and the magnetic field pointing along the z axis. In the course of evolution the spinor condensate undergoes a characteristic change of symmetry, which in a real experiment could be a signature of spin-mixing many-body interactions. The results of our simulations are conveniently visualised within the picture of irreducible tensor operators. \\ ( http://arXiv.org/abs/cond-mat/0603204 , 94kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603212 Date: Wed, 8 Mar 2006 15:58:26 GMT (131kb) Title: General variational many-body theory with complete self-consistency for trapped bosonic systems Authors: Alexej I. Streltsov, Ofir E. Alon and Lorenz S. Cederbaum Subj-class: Other \\ In this work we develop a complete variational many-body theory for a system of $N$ trapped bosons interacting via a general two-body potential. In this theory both the many-body basis functions {\em and} the respective expansion coefficients are treated as variational parameters. The optimal variational parameters are obtained {\em self-consistently} by solving a coupled system of non-eigenvalue -- generally integro-differential -- equations to get the one-particle functions and by diagonalizing the secular matrix problem to find the expansion coefficients. We call this theory multi-configurational Hartree for bosons or MCHB(M), where M specifies explicitly the number of one-particle functions used to construct the configurations.
General rules for evaluating the matrix elements of one- and two-particle operators are derived and applied to construct the secular Hamiltonian matrix. We discuss properties of the derived equations. It is demonstrated that for any practical computation where the configurational space is restricted, the description of trapped bosonic systems strongly depends on the choice of the many-body basis set used, i.e., self-consistency is of great relevance. As illustrative examples we consider bosonic systems trapped in one- and two-dimensional symmetric and asymmetric double-well potentials. We demonstrate that self-consistency has great impact on the predicted physical properties of the ground and excited states and show that the lack of self-consistency may lead to physically wrong predictions. The convergence of the general MCHB(M) scheme with a growing number M is validated in a specific case of two bosons trapped in a symmetric double-well. \\ ( http://arXiv.org/abs/cond-mat/0603212 , 131kb) ------------------------------------------------------------------------------ \\ Paper: quant-ph/0603063 Date: Wed, 8 Mar 2006 09:50:25 GMT (176kb) Title: Matter-wave diffraction in time with a linear potential Authors: A. del Campo, J. G. Muga \\ Diffraction in time of matter waves incident on a shutter which is removed at time $t=0$ is studied in the presence of a linear potential. The solution is also discussed in phase space in terms of the Wigner function. An alternative configuration relevant to current experiments where particles are released from a hard wall trap is also analyzed for single-particle states and for a Tonks-Girardeau gas. \\ ( http://arXiv.org/abs/quant-ph/0603063 , 176kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603264 Date: Thu, 9 Mar 2006 19:39:10 GMT (24kb) Title: BCS-BEC crossover and quantum phase transition for 6Li and 40K atoms across Feshbach resonance Authors: W. Yi and L.-M. Duan Subj-class: Other \\ We systematically study the BCS-BEC crossover and the quantum phase transition in ultracold 6Li and 40K atoms across a wide Feshbach resonance. The background scattering lengths for 6Li and 40K have opposite signs, which lead to very different behaviors for these two types of atoms. For 40K, both the two-body and the many-body calculations show that the system always has two branches of solutions: one corresponds to a deeply bound molecule state; and the other, the one accessed by the current experiments, corresponds to a weakly bound state with population always dominantly in the open channel. For 6Li, there is only a unique solution with the standard crossover from the weakly bound Cooper pairs to the deeply bound molecules as one sweeps the magnetic field through the crossover region. Because of this difference, for the experimentally accessible state of 40K, there is a quantum phase transition at zero temperature from the superfluid to the normal fermi gas at the positive detuning of the magnetic field where the s-wave scattering length passes its zero point. For 6Li, however, the system changes continuously across the zero point of the scattering length. For both types of atoms, we also give detailed comparison between the results from the two-channel and the single-channel model over the whole region of the magnetic field detuning. 
\\ ( http://arXiv.org/abs/cond-mat/0603264 , 24kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0603270 Date: Thu, 9 Mar 2006 20:52:35 GMT (844kb) Title: Dressed-molecules in resonantly-interacting ultracold atomic Fermi gases Authors: G.M. Falco, H.T.C. Stoof Subj-class: Other; Superconductivity \\ We present a detailed analysis of the two-channel atom-molecule effective Hamiltonian for an ultracold two-component homogeneous Fermi gas interacting near a Feshbach resonance. We particularly focus on the two-body and many-body properties of the dressed molecules in such a gas. An exact result for the many-body T-matrix of the two-channel theory is derived by both considering coupled vertex equations and the functional integral methods. The field theory incorporates exactly the two-body physics of the Feshbach scattering by means of simple analytical formulas without any fitting parameters. New interesting many-body effects are discussed in the case of narrow resonances. We give also a description of the BEC-BCS crossover above and below T_C. The effects of different approximations for the selfenergy of the dressed molecules are discussed. The single-channel results are derived as a special limit for broad resonances. Moreover, through an analytic analysis of the BEC limit, the relation between the composite boson of the single-channel model and the dressed-molecule of the two-channel model is established. \\ ( http://arXiv.org/abs/cond-mat/0603270 , 844kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0510567 replaced with revised version Thu, 9 Mar 2006 09:33:45 GMT (502kb) Title: Thermometry of Fermionic Atoms in an Optical Lattice Authors: Michael K\"ohl Subj-class: Strongly Correlated Electrons Journal-ref: Phys. Rev. A 73, 031601(R) (2006) DOI: 10.1103/PhysRevA.73.031601 \\ ( http://arXiv.org/abs/cond-mat/0510567 , 502kb) ------------------------------------------------------------------------------ \\ Paper: cond-mat/0511080 replaced with revised version Thu, 9 Mar 2006 11:58:43 GMT (282kb) Title: Probing number squeezing of ultracold atoms across the superfluid-Mott insulator transition Authors: Fabrice Gerbier, Simon Foelling, Artur Widera, Olaf Mandel, Immanuel Bloch
# Chapter 3. The fiscal impact of population ageing in Germany: An unequal challenge for different levels of government

Fanny Kluge
Research Scientist, Laboratory of Digital and Computational Demography, Max Planck Institute for Demographic Research

Tobias Vogt
Assistant Professor, Population Research Centre, University of Groningen

The authors wish to thank the organisers of the workshop on Ageing and Long-term Challenges across Levels of Government in June 2019, particularly Sean Dougherty, and the participants for their valuable comments that have led to the publication of this chapter.

Population ageing challenges the current and future fiscal arrangements of developed countries. Previous research has shown that demographic changes are having a significant impact on public spending, and on pensions and health care in particular. The changes in the population age structure call for a range of political adjustments. In a federal country like Germany, the different levels of government are unequally affected due to differences in their cost and revenue structures. While the federal government in Germany is mainly responsible for paying for national defence and general public services that are not age-varying, it also has to cover supplementary payments for social security, including retirement benefits, which are projected to increase sharply as the population ages. In contrast, the Länder (federal states) and local governments are primarily responsible for paying for education and child care, which are expenditures that are likely to decrease as the population ages. Still, all levels of government in Germany will face budget shortfalls, as tax revenues are mainly generated by a shrinking number of working-age individuals. Prosperous federal states and municipalities will be able to meet these challenges by attracting workers from less successful regions, which will, in turn, reinforce the fiscal challenges of the regions that lag behind.

Research on the demographic effects on public finance often neglects the perspective of specific levels of government. Earlier work for the United States predicted the effect of this structure on government expenditures in areas such as social security and education (Lee and Edwards, 2002[1]; Lee and Tuljapurkar, 1998[2]). Edwards (2010[3]) used age-specific expenditures from National Transfer Accounts for the United States to analyse the impact of population ageing on the different levels of government. For Germany, comparable studies were carried out by Seitz, Freigang, and Kempkes (2005[4]); Seitz and Kempkes (2007[5]); and Seitz (2008[6]). Their research focused on sustainability estimations for sub-budgets of the government. Bach et al. (2002[7]) examined in detail how tax revenues are changing due to shifts in the age structure of the German population. Drawing on earlier data, Kluge (2013[8]) showed the challenges that the different levels of government face. This chapter uses the latest available data sources and acknowledges the impact of migration on state and local government expenditures.

In Germany, low fertility and increasing life expectancy have resulted in a rapidly ageing population. Currently, the median age of the German population is 46 years, which is almost four years higher than the median age of the population of the neighbouring country of France, and makes Germany one of the oldest countries worldwide (UN Population Prospects, 2019[9]).
Moreover, the median age in Germany is expected to rise from 45.7 years in 2020 to 49.2 years by 2045. Thus, while Germany is already a rather old country, its population is ageing at a fast pace. However, this overall trend masks profound regional differences in the rate of ageing between rural areas that are growing older and losing population and metropolitan areas that have a younger age structure and are gaining population. This urban-rural divide is especially interesting given that, in addition to population ageing, migration reinforces the economic fortunes of different geographic areas. Younger, more skilled individuals will continue to migrate to economically strong regions with a younger age structure, and to leave regions that are already suffering from out-migration (Goldstein and Kluge, 2016[10]; Kluge, Goldstein and Vogt, 2019[11]). While population ageing is an important driver that alters the relative composition of the population, the shrinking of the population due to mortality or migration is also an important issue. Using a detailed approach to study government revenues and expenditures that acknowledges the importance of spatial variation is vital. The German system redistributes resources not only among individuals of different income levels, socio-economic status, and age, but across regions. Demographic developments in Germany are proceeding in regional clusters. While it is true that Germany is among the oldest countries in the world, we find pronounced variation in the age structure by region. Figure 3.1 displays the median ages of German municipalities in 2013. It shows that there are areas in southern and western Germany, primarily around larger metropolitan areas, in which the median age is between 36 and 44 years; as well as regions in western and southern Germany, mainly in rural areas, in which the median age clusters around the national median age of 46 years. The figure also indicates, however, that in many municipalities in eastern Germany (except larger cities and university towns such as Berlin, Leipzig, Dresden and Jena), Saarland, and Lower Saxony, the median age ranges from 48 to 53 years. These enormous differences in the population age structure have profound implications for the budgets of the different municipalities. Older municipalities tend to have higher expenditures and lower revenues and are more likely to suffer from out-migration. These trends can, in turn, further aggravate the financial situations of these municipalities, and restrict their room to manoeuvre. In this chapter, the latest demographic trends for the German municipalities and the age cost profiles for the different levels of government are presented. The National Transfer Accounts data for Germany is drawn upon to provide detailed estimates for all relevant public revenues and expenditures by single years of age. In addition, it will be shown how expenditures and revenues are expected to differ across the German states in the future. This approach is not intended to serve as an economic forecast, as a representative state profile for each of the 16 German Länder is used. Instead, the aim is to shed light on the differences in revenues and expenditures likely to result from the demographic differences among the states. The implications of migration and the steps policy makers can take to address these gaps are also addressed. The National Transfer Accounts (NTA)1 are used as a data source for the estimations of revenues and expenditures by level of government and age. 
The theoretical roots for the NTA project have been provided by Samuelson (1958[12]), Diamond (1965[13]) and Lee (1994[14]). The project was established to introduce the variable age into the National Accounts. It aims to produce detailed estimates of the age dependency of income, consumption, and savings, as well as of government revenues and expenditures. Thus, the project seeks to provide answers to the question of how population ageing is affecting economic indicators. In this chapter, only the NTA results for the age dependency of government revenues and expenditures for the different levels of government are shown.

In the following discussion, total government expenditures include all public in-kind and cash transfers that are provided for individuals living in Germany (Equation 1). Total government expenditures ${E}_{t}$ are given by:

${E}_{t}=\sum_{j=1}^{J}\left({TG}_{j,t}^{\text{in-kind}}+{TG}_{j,t}^{\text{cash}}\right)$ (1)

where ${TG}_{j,t}^{\text{in-kind}}$ denotes all public in-kind transfers, to which public monetary transfers, ${TG}_{j,t}^{\text{cash}}$, are added, in time t for function j. Public in-kind transfers ${TG}_{j,t}^{\text{in-kind}}$ consist of transfers for education, health, or other purposes, summed over all ages from 0 to 90+ in time t. The outcomes reflect public consumption. Public monetary transfers, ${TG}_{j,t}^{\text{cash}}$, are then added, which include pensions, disability payments, family and housing allowances, and other forms of social and financial assistance.

The approach used is comparable for all items. Suitable survey data or administrative records that provide information on the relative utilisation of a particular type of government expenditure by age are identified. For expenditures on education by age, information on the number of children by age and school type is used, as well as the corresponding costs for each individual by school type. The age profile is estimated by calculating the number of students of this age and school type, which is then used to obtain the per capita values. The relative age shares of health expenditures are estimated using the costs of diseases (Statistisches Bundesamt, 2016[15]). In the next step, the profiles are smoothed and macro-adjusted to fit the National Accounts.

Total government revenues are given by:

${TGO}_{t}=\sum_{j=1}^{J}\left({TGO}_{j,t}^{L}+{TGO}_{j,t}^{A}+{TGO}_{j,t}^{C}+{TGO}_{j,t}^{O}\right)$ (2)

where ${TGO}_{j,t}^{L}$ are the outflows on labour, ${TGO}_{j,t}^{A}$ denote the outflows on asset holding, ${TGO}_{j,t}^{C}$ include all taxes related to consumption, and ${TGO}_{j,t}^{O}$ denote all other revenues. Table 3.1 shows the revenues of the different levels of government by type and the micro profile used to allocate each tax by age. Some revenues, such as market selling, other current transfers, and second home taxes, are not easy to classify. For these revenues, the general tax profile for allocation is used.

All age profiles are smoothed before the numbers are adjusted to the macroeconomic control variable. The transfer components (except expenditures for education) are smoothed with the Friedman SuperSmoother (supsmu) in R. The population of the respective year is used as a weight. A crucial adjustment in the National Transfer Accounts is made to ensure that the estimates are nationally representative and fit the National Accounts. Therefore, all of the revenue and expenditure items are scaled to fit their corresponding macroeconomic controls.
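To make the aggregation in Equations (1) and (2) concrete before turning to the macro adjustment, the sketch below sums hypothetical per capita age profiles over ages and transfer categories to obtain a total expenditure figure. The profile shapes, category names, and population counts are invented placeholders rather than the chapter's NTA estimates; revenues in Equation (2) would be aggregated in the same way from the labour, asset, consumption, and other outflow profiles.

```python
import numpy as np

ages = np.arange(0, 91)  # single years of age; 90 stands in for the 90+ group

# Hypothetical per capita transfer profiles by age (EUR per year), one entry per
# function j; real NTA profiles are estimated from survey and administrative data.
in_kind = {
    "education": np.where(ages < 27, 4000.0, 0.0),
    "health": 800.0 + 60.0 * ages,
}
cash = {
    "pensions": np.where(ages >= 65, 12000.0, 0.0),
}

# Hypothetical population counts N(a) by single year of age.
population = np.full(ages.shape, 800_000.0)

# Equation (1): E_t is the sum over functions j of in-kind plus cash transfers,
# with each per capita profile summed over ages weighted by the population.
E_t = sum((profile * population).sum() for profile in {**in_kind, **cash}.values())
print(f"Total government expenditures E_t: EUR {E_t:,.0f}")
```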
Depending on how many levels share the expenditures for an expense item, one to three macro controls (federal, state, local government) are used. The adjustment factor is given by: ${\theta }_{j}=\sum _{a=1}^{90+}\frac{x\left(a\right)N\left(a\right)}{{X}_{j}}$, (3) where the age-specific expenditure share, x(a), is multiplied by the population at that age, N(a), and is divided by the corresponding macro control by level of government, Xj. For the estimation of the National Transfer Accounts or their underlying parts, such as government revenues and expenditures by level of government, an extensive amount of data is required. These data are described in the following section. To construct the accounts, a micro survey is needed to estimate age utilisation profiles, corresponding population estimates, and macro controls that allow for the adjustment of the micro profiles to fit the UN System of National Accounts. The macro controls are provided by the federal and the Länder statistical offices for the respective years that show detailed results in the National Accounts. Population estimates in one-year age groups are provided by the German Federal Statistical Office. The microeconomic age profiles of government monetary transfers to individuals are estimated using the Income and Expenditure Survey (EVS) 2013.2 The EVS is conducted every five years by the Federal Statistical Office, and includes data on income, consumption, assets and transfers for 60 000 households. The survey data are representative of households with a monthly net income of less than EUR 18 000. For three months, participating households keep a detailed book of household accounts that covers all forms of income and expenditure. Per capita profiles for the different levels over time are also available. These estimates are relatively stable for the different years. Because they have different financial obligations and revenue sources, federal, state, and local governments face different challenges. From a demographic point of view, it is especially interesting to note the differences in the age dependency of transfer variables. Both government revenues and government expenditures vary over the life cycle, with expenditures increasing more than revenues. In Figure 3.2, the total public benefits per capita by age are provided by the different levels of government. The estimates include not only cash transfers made to individuals but in-kind transfers for health or education. The pronounced increase in transfers at older ages at the federal level is solely due to supplementary payments to the German social security system, which are mainly in the form of public pensions, health and long-term care expenditures. If we disregard these supplementary payments, the federal profile becomes almost flat and hardly varies by age. As national population numbers decline, expenditures at the federal level are likely to be lower in the future. The federal-level expenditures on younger individuals are mainly related to national defence and public order and safety. These expenditures are evenly distributed across the population, and add up to about EUR 2 000 per capita per year. The Länder provide pensions for civil servants and financial support for students, which together make up a significant share of state expenditures. The municipalities provide housing allowances and certain forms of social assistance to the middle-aged population. Both state and local governments pay significant shares of the educational costs of young people. 
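Returning to the adjustment factor in Equation (3) above, the following sketch computes $\theta_j$ for a single expenditure item and rescales the per capita age profile so that its aggregate matches the macro control. The profile, population, and macro control values are invented placeholders, and the rescaling step (dividing the profile by $\theta_j$) is an assumption about how the adjustment factor is applied, since the chapter does not spell that step out.

```python
import numpy as np

ages = np.arange(0, 91)  # 0 to 90+, single years of age

# Hypothetical inputs: smoothed per capita age profile x(a), population N(a),
# and the macroeconomic control X_j for this item at one level of government.
x = np.where((ages >= 6) & (ages < 19), 7000.0, 0.0)  # e.g. schooling costs, EUR
N = np.full(ages.shape, 900_000.0)                    # population by single year of age
X_j = 50e9                                            # macro control, EUR

# Equation (3): theta_j = sum over ages of x(a) * N(a), divided by X_j.
theta_j = (x * N).sum() / X_j

# Rescale the profile so that its aggregate matches the National Accounts control
# (an assumed application of theta_j, not a step stated explicitly in the chapter).
x_adjusted = x / theta_j
assert np.isclose((x_adjusted * N).sum(), X_j)

print(f"theta_j = {theta_j:.3f}")
```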
The municipalities and the Länder provide support for their youngest residents through expenditures on kindergartens and schools. These public transfers vary considerably by age, with most resources flowing to the young and the old. The per capita cost of supporting residents in their teens is, on average, around EUR 8 000 per year at the state level and EUR 3 000 at the municipal level. These figures are slightly higher for children attending kindergarten (up to around age six).

Public revenues also vary by age and by government level. Figure 3.3 shows the per capita public revenue values at each level of government in detail. The total profile includes combined taxes shared between the different levels, such as value-added tax (VAT), as well as taxes that are collected at one level only.3 The federal government receives all of the revenues from the solidarity surcharge and from tobacco or electricity taxes. State governments receive revenues from property and inheritance taxes and taxes on beer. Major inflows for the municipalities are generated by real estate and excise taxes. At all levels, revenues are generated mainly by working-age individuals. The federal government receives per capita inflows of around EUR 1 000 per year from children (due to VAT on children’s estimated consumption), of around EUR 3 000 per year from retirees, and of more than EUR 6 000 per year from prime-age adults. While the Länder have comparable inflows, municipalities receive lower tax revenues per capita, and the age structure is slightly more skewed toward older working ages due to the underlying revenue profile of self-employed individuals used to allocate excise taxes.

The estimated profiles of state revenues and expenditures are used to show future imbalances arising from different demographic dynamics. The same age profiles are applied to all of the German Länder. The results are not intended to provide an economic forecast. Instead, the aim is to uncover the differences in expenditure and revenue levels that result solely from demographic changes in age structure and migration.4 The overall revenue and expenditure levels are expressed as percentages relative to the values in 2013. Changes are further documented in the expenditure levels for the young (under age 27) and the old (over age 57). The age brackets denote the turning points of the life-cycle deficit in Germany. This means that an individual in Germany is not earning sufficient labour income to finance his or her public and private consumption until after he or she reaches age 27. Then, after the individual reaches age 57, the life-cycle deficit again turns negative, and the person’s labour income is not sufficient to finance his or her consumption. While younger individuals typically depend exclusively on transfers from other members of society, older individuals might rely on a mixture of transfers and savings.

Table 3.3 shows the expenditure and revenue levels of the 16 German states in 2050. Except for Hamburg, all states can expect lower expenditures and revenues. In some states, the decrease in revenues is moderate, such as in Bavaria or Baden-Württemberg, with revenue levels reaching over 90% of today’s values. Others, such as Brandenburg or Mecklenburg-Vorpommern, face revenue decreases on the order of 20%. At the same time, expenditures also drop to considerably lower levels in these states.
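The projection exercise described above, which holds the age profiles fixed and lets only the population age structure change, can be sketched as follows. The profiles and population vectors are invented placeholders, and no behavioural or policy response is modelled, mirroring the chapter's caveat that this is not an economic forecast.

```python
import numpy as np

ages = np.arange(0, 91)

# Hypothetical fixed per capita age profiles (EUR per year), held constant over time.
revenue_profile = np.where((ages >= 20) & (ages < 65), 9000.0, 2000.0)
expenditure_profile = np.where(ages < 27, 8000.0, np.where(ages > 57, 12000.0, 3000.0))

# Hypothetical population age structures for one state in 2013 and 2050.
pop_2013 = np.full(ages.shape, 100_000.0)
pop_2050 = np.where(ages > 57, 130_000.0, 80_000.0)  # older, with a smaller working-age base

def relative_level(profile, pop_future, pop_base):
    """Aggregate under a fixed age profile, expressed relative to the base year."""
    return (profile * pop_future).sum() / (profile * pop_base).sum()

print(f"Revenues in 2050 relative to 2013:     {relative_level(revenue_profile, pop_2050, pop_2013):.0%}")
print(f"Expenditures in 2050 relative to 2013: {relative_level(expenditure_profile, pop_2050, pop_2013):.0%}")
```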
The most significant discrepancy between revenues and expenditures is in Brandenburg, with a 10 percentage point difference. For all other states, the imbalances are around 7 percentage points in 2050. As stated earlier, these results do not include political adjustments or behavioural adaptations that may alter this picture in the future. The differences for expenditures on the young that are depicted in Table 3.3 are minor. For most states, expenditures on the young decrease slightly or remain stable. For eastern German states like Saxony or Saxony-Anhalt, there are even slightly higher shares for 2050. Here, in recent years, the fertility rates were among the highest in Germany. The expenditures for individuals above age 57 are increasing in all states as a share of total expenditures. In almost all states, these shares increase by 7 to 8 percentage points. Interestingly, the increases for eastern German states such as Saxony or Saxony-Anhalt are much lower (only 3 to 4 percentage points). This does not show that these states are younger, but rather that they are already old today, due to out-migration, and that ageing continues in these regions. The results show the different cost and revenue structures of the three levels of government in detail. The federal government’s financial obligations are mainly age-independent expenditures related to national defence or economic affairs, and are not increasing as the population ages. However, because it provides additional funds for social security, the expenditures of the federal government will likely increase in the long run. By contrast, as the population ages, the expenditures of state and local governments are expected to decrease in the long run. The biggest challenge facing all levels of government is generating sufficient revenue while the population is ageing. All levels of government rely heavily on revenues that come from the working-age population. Given that the fraction of the population who are of working ages is expected to decline in the coming decades, it is likely that revenues will decrease significantly. A broader age base for tax revenues would be desirable. The differences among the levels of government are also affected by differences between states or municipalities. Large discrepancies in the demographic developments of different places will mean that the challenges they face will vary. We estimated long-run differences in the revenues and expenditures of the 16 German Länder that are solely due to demographic differences. In some states, benefits and revenues will decrease moderately, by around 6-8 percentage points; while in other states, benefits and revenues may decline by as much as 20-30 percentage points. The latter states are mainly in eastern Germany and peripheral western German regions that have an older age structure and high levels of out-migration. The budget gaps of the different German states range between 4 and 10 percentage points. The time horizon of 2050 seems to leave sufficient time for adjustments to be made. These different demographic realities seem to suggest that regions that are already ageing and are economically disadvantaged will continue to lose inhabitants through out-migration. Studying the implications of these population losses is vital given that migration tends to be highly selective. Levels of out-migration from eastern Germany to the prosperous regions in the west have been particularly high. 
This east-west migration occurred in two large waves in 1990 and 1997 and continued in the decades that followed (Heiland, 2004[20]). Today, internal migration in Germany occurs mainly from economically weak districts and Länder to prosperous urban areas (Sander, 2014[21]). Younger individuals, and especially young women, are the most likely to leave. These patterns worsen the situation of the out-migration regions. When young adults leave, these regions face a heavy double burden: they do not fully benefit from their educational investments, and they lose future tax revenues.

The problem of migration reinforcing economic inequality could be addressed in several ways. One solution could be to transfer age-variable expenditures to the federal government. In the current situation, states and municipalities that suffer from out-migration finance kindergartens and schools for all young inhabitants. A large share of these skilled individuals will likely migrate as young adults to metropolitan areas or more prosperous rural areas in southern or western Germany. The receiving states and municipalities gain skilled workers without having to make the corresponding investments in human capital. A second potential solution is to implement a demographic factor in the fiscal equalisation scheme of the German Länder. The state that educated a migrating individual could receive financial compensation from the state that collects the individual’s taxes. This could be a fraction of the tax revenue based on, for example, FIFA-type (Fédération Internationale de Football Association) compensation rules. Under these rules, when a soccer player is sold to another club, the club that trained the player receives a fraction of the transfer fees.

In addition, more general solutions are needed to deal with the impact that demographic changes are expected to have on the fiscal relationships among the federal, state, and local levels of government. The economic life-cycle needs of individuals will have to be adjusted as people live longer. One of the most prominent proposals for dealing with this issue is to promote longer working lives (Vaupel and Hofäcker, 2009[22]), as even a slight increase in the number of years each individual works would have an enormous impact at the population level. If the comparatively long period of time Germans spend in education is shortened, or the period of time Germans spend working is extended by just one year, all of the individuals in this age group would immediately convert from being beneficiaries to being contributors. These reforms are expected to save money, as governments would be receiving positive net flows from individuals who, in prior years, would have been receiving benefits. This approach may prove particularly attractive given that, in addition to living longer, individuals are spending more years in good health than they were in the past (Christensen et al., 2009[23]).

Calculations from the National Transfer Accounts life cycle for Germany show that in 1970, an employee who retired at age 64 had a mean life expectancy of 70 years. This means that around 9% of a person’s lifetime was spent in retirement. Later, and especially in the 1990s, early retirement programmes expanded even as life expectancy rose. While the average retirement age is again at around 64 years after decades in which early retirement was the norm, individuals currently have a mean life expectancy of 80 years. Thus, Germans now spend around 21% of their lifetime in retirement.
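The arithmetic behind these retirement shares can be checked with a crude period measure; the chapter's exact figures presumably rest on remaining life expectancy at retirement, which is why the last digit differs slightly from the simple calculation below.

```python
def retired_share(retirement_age, mean_life_expectancy):
    """Crude share of a lifetime spent in retirement."""
    return (mean_life_expectancy - retirement_age) / mean_life_expectancy

# 1970: retirement at 64, mean life expectancy of about 70 years
print(f"{retired_share(64, 70):.1%}")   # 8.6%, matching the chapter's ~9%
# Today: retirement again at ~64, mean life expectancy of about 80 years
print(f"{retired_share(64, 80):.1%}")   # 20.0%, close to the chapter's ~21%
# Working just one year longer immediately trims the share
print(f"{retired_share(65, 80):.1%}")   # 18.8%
```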
These positive outcomes of demographic change should be communicated. Another proposal is to redesign the individual life cycle so that people work roughly the same number of years as they did in the past, but that the time spent working is distributed differently. The idea is that people could reduce their working hours while young in order to pursue alternative life goals like raising a family, and make up for these reductions by working additional hours after reaching retirement age (Vaupel and Loichinger, 2006[24]). However, the retirement age could be linked to remaining life expectancy (Fenge and Peglow, 2014[25]). It has, for example, been suggested that if we use modified government revenue and expenditure profiles that shift the retirement age by five years, all of the German Länder could finance their expenditures through their revenues. In this scenario, revenues would increase to 105% of the original level, while expenditures would be reduced to about 98% in even the most disadvantaged German states. How these developments play out in the future depends on how expenditures for the oldest old change. Studies have shown that the highest expenditures for health and long-term care are focused on the two years before death (Breyer and Felder, 2006[26]). If this continues to be the case, expenditures will not increase dramatically, as the largest financial obligations are also shifted to older ages, even as the number of oldest-old people living in Germany is expected to quadruple by 2050. A shortcoming of this study is that a representative state profile for all German Länder has been used. This is suitable for estimating the demographically induced differences described in the chapter — still, this approach masks differences among the states in individual economic life cycles. Therefore, in future work, it would be interesting to estimate real state profiles for two representative states. The analysis could be adapted to estimate government revenues and expenditures for an economically sound and an economically weak German state, and their differences and similarities could be studied with a focus on their human capital investments and old age expenditures. In addition, it would be interesting to update the estimates when the latest Income and Expenditure Survey is released in late 2020. Already having state and municipality profiles for 2003 and 2013 that provide rather stable per capita estimates, these findings could be investigated to see if they hold for the most recent years. Such an outcome would strengthen the argument that the per capita values of revenues and expenditures can indeed contribute to efforts to predict future budgets. ## References [7] Bach, S. et al. (2002), Demographischer Wandel und Steueraufkommen, Gutachten im Auftrag des Bundesfinanzministerium. [26] Breyer, F. and S. Felder (2006), “Life Expectancy and Health Care Expenditures: A New Calculation for Germany Using the Costs of Dying”, Health Policy, Vol. 75/2, pp. 178–186. [23] Christensen, K. et al. (2009), “Ageing Populations: The Challenges Ahead”, The Lancet, Vol. 374, pp. 1196-1208. [13] Diamond, P. (1965), “National Debt in a Neoclassical Growth Model”, The American Economic Review, Vol. 55, pp. 1126–50. [3] Edwards, R. (2010), “Forecasting Government Revenue and Expenditure in the U.S. Using Data on Age-Specific Utilization”, National Transfer Accounts working paper, Vol. WP10-01, https://qcpages.qc.cuny.edu/~redwards/Papers/edwards-forecasting-0210.pdf. 
[19] Federal Statistical Office (2019), Bevölkerungsentwicklung in den Bundesländern bis 2060 - Ergebnisse der 14. koordinierten Bevölkerungsvorausberechnung, https://tinyurl.com/fso14cpp. [16] Federal Statistical Office (2016), National Accounts 2016, Federal Statistical Office, Wiesbaden. [18] Federal Statistical Office (2013), National Accounts 2013, Federal Statistical Office, Wiesbaden. [17] Federal Statistical Office (2013), National Transfer Accounts 2013, Federal Statistical Office, Wiesbaden. [25] Fenge, R. and F. Peglow (2014), The Impact of Demographic Developments on the German Statutory Pension System, https://www.rostockerzentrum.de/content/forschung/GRV-Demography_2014-09-07-PC.pdf. [10] Goldstein, J. and F. Kluge (2016), “Demographic Pressures on European Unity”, Population and Development Review, Vol. 42/2, pp. 299-304. [6] Hamm, I., H. Seitz and M. Werding (eds.) (2008), The Impact of Demographic Change on Fiscal Policy in Germany, Springer, Berlin. [20] Heiland, F. (2004), “Trends in East-West German Migration from 1989 to 2002”, Demographic Research, Vol. 11/7, pp. 173–194. [8] Kluge, F. (2013), “The Fiscal Impact of Population Aging in Germany”, Public Finance Review, Vol. 41/1, pp. 37-63. [11] Kluge, F., J. Goldstein and T. Vogt (2019), “Transfers in an Aging European Union”, Journal of the Economics of Ageing, Vol. 13, pp. 45-54. [1] Lee, R. and R. Edwards (2002), “The Fiscal Impact of Population Aging in the US: Assessing the Uncertainties”, Tax Policy and the Economy, Vol. 16, pp. 141–80. [2] Lee, R. and S. Tuljapurkar (1998), “Uncertain Demographic Futures and Social Security Finances”, American Economic Review, Vol. 88, pp. 237–41. [14] Martin, L. and S. Preston (eds.) (1994), The Formal Demography of Population Aging, Transfers, and the Economic Life Cycle, National Academy Press, Washington, DC. [12] Samuelson, P. (1958), “An Exact Consumption-Loan Model of Interest with or without the Social Contrivance of Money”, The Journal of Political Economy, Vol. 66, pp. 467–82. [21] Sander, N. (2014), “Internal Migration in Germany, 1995-2010: New Insights into East-West Migration and Reurbinisation”, Comparative Population Studies, Vol. 39/2. [4] Seitz, H., D. Freigang and G. Kempkes (2005), Demographic Change and Federal Systems, Speyerer Forschungsbericht. [5] Seitz, H. and G. Kempkes (2007), “Fiscal Federalism and Demography”, Public Finance Review, Vol. 35, pp. 385–413. [15] Statistisches Bundesamt (2016), Gesundheit-Krankheitskosten 2002, 2004, 2006 und 2008 [Health care-Disease expenses, 2002, 2004, 2006 and 2008], Statistisches Bundesamt, Wiesbaden. [9] UN Population Prospects (2019), World Population Prospects 2019, https://population.un.org/wpp/DataQuery/ (accessed on 15 September 2019). [22] Vaupel, J. and D. Hofäcker (2009), “Das lange Leben lernen”, Zeitschrift für Erziehungswissenschaft, Vol. 12/3, pp. 383–407. [24] Vaupel, J. and E. Loichinger (2006), “Redistributing Work in Aging Europe”, Science, Vol. 312/5782, pp. 1911–1913. ## Notes ← 1. For a more detailed overview, see www.ntaccounts.org. ← 2. The 2013 survey data are the latest available estimates from the Income and Expenditure Survey. The consumption questionnaire and the corresponding scientific use file for the Income and Expenditure Survey 2018 will not be available until late 2020. ← 3. A detailed overview of the different taxes collected by level of government can be found in the “Methods and data” section. ← 4. 
For future research, it would also be interesting to estimate and compare the age profiles for two representative states in order to show what details the overall Länder profile masks. In addition, political or behavioural adjustments could be evaluated.
Volume 398 - The European Physical Society Conference on High Energy Physics (EPS-HEP2021) - T08: Flavour Physics and CP Violation

Addressing the muon anomalies with muon-flavored leptoquarks

A.E. Thomsen*, A. Greljo and P. Stangl

Pre-published on: January 26, 2022

Abstract: Significant deviations from Standard Model (SM) predictions have been observed in $b \to s \mu^+ \mu^-$ decays and in the muon $g-2$. Scalar leptoquark extensions of the SM are known to be able to address these anomalies, but generically give rise to lepton flavor violation (LFV) or even proton decay. We propose new muon-flavored gauge symmetries as a guiding principle for leptoquark models that preserve the global symmetries of the SM and explain the non-observation of LFV. A minimal model is shown to easily accommodate the anomalies without encountering other experimental constraints. This talk is mainly based on Ref. [1].

DOI: https://doi.org/10.22323/1.398.0560
Recent Results of Double Helicity Asymmetries from PHENIX. Scott Wolin The search for the gluon contribution to the proton spin, $\Delta G$, is critical to understanding the proton spin puzzle. The PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC) is able to directly probe gluon polarization using collisions between two polarized proton beams. $\Delta G$ is determined from the double longitudinal asymmetry, $A_{LL}$, for a final state to be observed between same and opposite sign helicity proton interactions. In this talk we will summarize...
NEUTRINO2010, XXIV International Conference on Neutrino Physics and Astrophysics, Athens, Greece, 14-19 June 2010

Tau Neutrino Searches with IceCube

Speaker: Seon-Hee Seo (Stockholm University)

Description: IceCube is a cubic kilometer size neutrino telescope operating in the deep ice at the South Pole. Its scientific goals include searching for tau neutrinos of extraterrestrial origin. Although astrophysical source models typically predict only electron and muon neutrino production, after standard neutrino oscillations over astrophysical distances electron, muon and tau neutrinos are expected to arrive at the detector in equal numbers. Ultra high energy (UHE) tau neutrinos are expected to leave identifiable signatures inside the detection volume due both to their finite lifetime and their rich array of decay channels. By characterizing these distinctive signatures we hope to distinguish UHE tau neutrinos from muon and electron neutrinos. In addition, lower energy tau neutrinos can produce a distinctive double pulse waveform in individual IceCube detector modules that will distinguish these interactions from other neutrino interactions producing simpler hadronic or electromagnetic showers. Exclusively identified tau neutrinos will have negligible atmospheric neutrino background and as such could serve as a clean signature of cosmological origin.

Primary author: Seon-Hee Seo (Stockholm University)
# Energetic costs regulated by cell mechanics and confinement are predictive of migration path during decision-making ### Subjects An Author Correction to this article was published on 04 December 2019 ## Abstract Cell migration during the invasion-metastasis cascade requires cancer cells to navigate a spatially complex microenvironment that presents directional choices to migrating cells. Here, we investigate cellular energetics during migration decision-making in confined spaces. Theoretical and experimental data show that energetic costs for migration through confined spaces are mediated by a balance between cell and matrix compliance as well as the degree of spatial confinement to direct decision-making. Energetic costs, driven by the cellular work needed to generate force for matrix displacement, increase with increasing cell stiffness, matrix stiffness, and degree of spatial confinement, limiting migration. By assessing energetic costs between possible migration paths, we can predict the probability of migration choice. Our findings indicate that motility in confined spaces imposes high energetic demands on migrating cells, and cells migrate in the direction of least confinement to minimize energetic costs. Therefore, therapeutically targeting metabolism may limit cancer cell migration and metastasis. ## Introduction Cell migration is a critical aspect of the invasion-metastasis cascade and is significantly influenced by the microenvironment. The physical properties of the extracellular matrix (ECM) have been identified as key mediators of cell behavior and determine requirements for motility1,2,3. During cancer progression, the ECM commonly becomes deregulated and disorganized4 resulting in a highly heterogeneous ECM containing restricting pores, cross-sectional areas, and channel-like tracks5. These tight interstitial spaces can range from ~3 to 30 μm in width2, creating complex topographies that present directional choices to migrating cells5,6,7. Notably, channel-like tracks in the matrix, which are native to the ECM or prepatterned by cells themselves using metalloproteinases (MMPs), provide physical guidance, and offer a path of least resistance for migrating cells2,8,9. Once channel-like tracks are created by “leading” cancer cells, other “following” cancer cells utilize these microtracks to rapidly disseminate in an unimpeded, MMP-independent manner10. This mode of migration may explain the limited clinical success of MMP inhibitors to treat metastasis11. As these microtracks in the matrix provide strong proinvasive cues to tumor cells, understanding the mechanisms of cancer cell motility through physiologically relevant confining tracks will be critical to developing therapeutic strategies to target metastasis. To navigate these physical barriers and migrate, cells dynamically coordinate cellular machinery to generate forces and remodel their cytoskeleton and/or the surrounding matrix12,13,14, both of which are energy-demanding processes15,16,17. Cells generally meet such energy needs through the dephosphorylation of ATP into ADP. Maintaining an adequate supply of ATP is critical for cellular remodeling18, and ATP production is determined by fluctuating energetic demands of the cell19,20. 
Our recent work indicates that individual migrating cells tune their energy utilization relative to the structure and mechanics of their microenvironment21, and collectively migrating cells employ relay-like behavior to invade through physically challenging and energy-demanding environments22. However, the role of cellular energetics in directional decision-making during migration through spatially complex microenvironments is not well understood. Here, we show that when presented with migration choices of varying confinement, MDA-MB-231 cells preferentially migrate in the direction of least confinement to minimize energetic costs. Using a computational model and in vitro experiments, we demonstrate that energetic costs for migration through confined spaces are mediated by a balance between cell and matrix compliance and the degree of spatial confinement to direct migration decision-making. Increased cell stiffness limits cell body deformation and requires cell-induced matrix displacement for migration through narrow spaces. The cellular work required for matrix displacement drives the energy requirements for migration, and these energetic costs exponentially increase with increasing cell stiffness. At high degrees of spatial confinement as well as high cell stiffness and/or high matrix stiffness, elevated energetic costs for movement restrict migration into narrower confined spaces. Using this framework, we can accurately predict the probability of migration decisions by calculating the energetic costs between possible migration paths. Together, these findings provide insight into the role of cellular energetics in migration and demonstrate that energetic costs, in part, determine a cell’s ability to navigate complex environments. ## Results ### Cells sense path size during migration decision-making To recreate directional choices presented to cancer cells during migration, we utilized microfabrication to create Y-shaped microtracks. Microfabrication enables the creation of well-defined channels to study migration; however, most channels are molded into polydimethylsiloxane (PDMS)23,24,25,26. Here, we created three-dimensional collagen microtracks27 thereby mimicking the complex architecture of the native peritumoral ECM and allowing for mechanoreciprocity between cells and the matrix, a key determinant for migration1. To determine relevant physical dimensions for the bifurcations of the Y-shaped microtrack, we first created tapered collagen microtracks with widths decreasing from 20 to 5 μm and assessed spatial confinement, cell-matrix contact, and cell motility (Fig. 1a). MDA-MB-231 cells became fully confined, contacting two side walls of the microtrack at a track width of 11.020 ± 0.471 μm (mean ± s.e.m.) and cells reversed their migration direction at a track width of 6.212 ± 0.126 μm (mean ± s.e.m.), consistent with MDA-MB-231 cell body diameter28 and nucleus diameter29, as well as the physical limit of MMP-independent migration30. Based on these dimensions, we created a Y-shaped collagen microtrack consisting of a 15 μm feeder track bifurcating into 12 and 7 μm wide branches to study migration decision-making (Fig. 1b). Consistent with previous observations25, contact guidance determined migration path when cells contacted a single side wall of the 15 μm feeder track. However, when contacting both side walls of the feeder track, cells preferentially and more readily migrated into the wider path (~70%) with a faster passage into the wider branch (Fig. 1c, d). 
When moving into the narrower path, slower passage time was also accompanied by increased probing at the bifurcation (Supplementary Movies 1 and 2). These data indicate that cells actively probe their surrounding matrix to sense path size in choosing a migration direction, and preferentially migrate along the path of least resistance. Migration into confined spaces requires cells to either remodel their cytoskeleton or deform the surrounding matrix12,13,14. During migration through Y-shaped collagen microtracks, we found cells altered their morphology and deformed the side walls in more confined tracks, a finding unique to our collagen microtracks compared with traditional PDMS channels23,24,25. As spatial confinement increased in the narrower branches, cells reduced their minor axis and elongated, while simultaneously displacing the wall of the microtrack away from their cell body (Fig. 1e–g). Actin cytoskeleton remodeling and the actin polymerization required for force generation to displace the surrounding matrix both require cells to expend energy15,16,17, and we therefore hypothesized that cells require more energy to migrate in confined spaces. ### Model of energy needs during confined migration Given the complexity of the integrated effects of cell and matrix mechanics on migration1, we created a computational model to probe cellular energy requirements for confined migration (Fig. 2a, Supplementary Table 1, see “Methods” section for details). We define the energy needed as the work that will be required to deform the cell/microtrack system when the cell moves by a normalized unit length. We only take into account the differences between the possible migration choices as the cell can only probe the choices provided and no future energy costs incurred after making the decision are known. An important assumption of the model is that the cells will actively try to adjust their shape to fit and spread within the microtrack, a process that will be a function of the overall stiffness of the system. Notably, while differences between cell speed would significantly alter energy use21 and thus the energetic cost, cell speed was found to be the same for all microtrack sizes (Supplementary Fig. 1). Therefore, the main difference in energetic costs between the microtracks will be from the work required to deform the collagen walls. We modeled the microtrack as two infinite half spaces (with stiffness EECM) and a spread cell (with stiffness Ec) as an elliptical soft body. We assume that cells larger than the width of the system uniformly exert force (Fc) on each half space indenting the system (δ) depending on their size, shape, and compliance (Eq. (1)). Thus, the effective modulus of both the cell and microtrack will determine the amount of force exerted (Eq. (2)). To determine cell shape while maintaining constant perimeter, we imposed a limit to cell spreading in the microtrack based on cell stiffness and degree of spatial confinement (Fig. 2b, Eqs. (3) and (4)). The cell can now be described as an elliptical indenter, which will displace collagen side walls based on cell stiffness and confinement (Fig. 2c, Eq. (5)). Energy requirements for a cell moving within this confined space are then proportional to the work required to overcome the forces from the system deformation at equilibrium, which increases with cell stiffness, matrix stiffness, and confinement (Fig. 2d, e). A probit model was used to estimate migration decision-making based on energetic costs. 
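A minimal sketch of this probit step, as formalised later in Eq. (6) of the Methods, is given below. The work values are illustrative only, and the noise scale sigma, which the next paragraph ties to the minimal energy available for migration, is likewise a placeholder; the paper's calibrated parameters are listed in its Supplementary Table 1 and are not reproduced in this excerpt.

```python
import math

def p_choose_narrow(w_wide, w_narrow, sigma):
    """Probability of entering the narrower branch, given the work per unit of movement
    in each branch, following the probit form of Eq. (6):
    P = 1/2 * (1 + erf((w_wide - w_narrow) / (sigma * sqrt(2))))."""
    return 0.5 * (1.0 + math.erf((w_wide - w_narrow) / (sigma * math.sqrt(2.0))))

# Illustrative values only (arbitrary energy units)
print(p_choose_narrow(w_wide=1.0, w_narrow=1.2, sigma=0.5))  # slightly costlier narrow path -> p < 0.5
print(p_choose_narrow(w_wide=1.0, w_narrow=3.0, sigma=0.5))  # much costlier narrow path -> p near 0
```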
We assume that the standard deviation of the system is proportional to the minimal energy available for migration. Estimating the minimal energy required for migration as 0.19 pJ s−1,37, we calculated the probability of migration into the narrow path as a function of cell stiffness from the difference in energetic costs between migration paths (Fig. 2f, Eq. (6)). Similarly, we calculated the probability of migration into the narrow path as a function of ECM stiffness from the difference in energetic costs between migration paths (Fig. 2g, Eq. (6)). Our model predicts that cells preferentially migrate in the direction of energy minimalization, and stiffer cells or cells within stiffer ECM would be less likely to choose the narrower path. Together, this framework can be used to explain the role of energetics in decision-making during confined migration. ### Cell mechanics influence migration decision-making We tested the robustness of our model by examining the influence of cell stiffness on migration decision-making. We manipulated MDA-MB-231 cell stiffness using pharmacological agents and short interfering RNA (siRNA)-mediated knockdown targeting cell contractility (Fig. 3a), since cell contractility and stiffness are an integrated system38. Treatment with a RhoA activator (Rho+) or Calyculin A (CL-A) increased cell stiffness, whereas treatment with Y27632 (Y27; a Rho-associated protein kinase inhibitor), ML7 (a myosin light chain kinase inhibitor), methyl-β-cyclodextrin (MβCD; a cholesterol depleting agent that causes actin disassembly39), or siRNA targeting Caveolin-1 (siCav1; a scaffolding protein of lipid rafts that influences actin remodeling40) decreased cell stiffness. Through these methods, we manipulated cell stiffness from ~271 to ~775 Pa (Fig. 3a). As predicted by the model, increased cell stiffness reduced migration into the narrower path, while decreased cell stiffness caused cells to become increasing agnostic to migration path (Fig. 3b). Notably, compliant cells were able to more readily pass into the narrow branch, while stiff cells required significantly more time at the bifurcation before passage (Fig. 3c). We then evaluated how cell stiffness influenced cell and matrix remodeling during migration through confined spaces. Together, cell and matrix remodeling in combination with the width of the microtrack will determine the steric hindrance imposed on the cell body by the collagen matrix. Given our model assumes cell stiffness controls cell deformability38, increasing cell stiffness will result in a more rigid cytoskeleton that is more difficult to deform, and deformation of the collagen side walls will increase to fit the locomoting cell body (Fig. 3d–f). Note that these effects are expected to be most dramatic in tracks smaller than the cell body, where the matrix alone imposes high steric hindrance. These assumptions recapitulated cell stiffness-mediated changes in cell morphology and matrix deformation observed experimentally (Fig. 3g–j, Supplementary Fig. 2). Cell elongation, a measure of cell deformability, and cell-induced track displacement were inversely correlated in the 7 μm track, as cell elongation decreased with cell stiffness while track displacement increased with cell stiffness. These results validated the relationship between cell stiffness, cell deformation, and matrix remodeling defined in the model. 
### Cell stiffness alters deformation to drive energetic costs

Our model predicts that increased energetic requirements in confined spaces are driven by increased force exerted on the matrix for displacement. Thus, energetic costs for migration are calculated to increase exponentially with confinement as a function of cell stiffness (Fig. 4a). Modeling migration through a 7 μm track predicts that energy requirements will exponentially increase with cell stiffness and cell-induced track displacement but decrease with cell deformability as cells become more compliant (Fig. 4b–d). Indeed, experimental results of intracellular ATP:ADP ratio and glucose uptake replicated model predictions, with ATP:ADP ratio and glucose uptake increasing with spatial confinement and amplifying in response to increased cell stiffness (Fig. 4e, f; Supplementary Fig. 3). In the 7 μm track, intracellular ATP:ADP ratio and glucose uptake increased as a function of cell stiffness (Fig. 4g). ATP:ADP ratio exponentially increased with cell stiffness, and with increased cell stiffness, cell elongation negatively correlated with ATP:ADP ratio while track displacement strongly positively correlated with ATP:ADP ratio (Fig. 4h–j). Similarly, glucose uptake exponentially increased with cell stiffness and changes in glucose uptake were highly correlated with cell and matrix deformation (Fig. 4k–m). As expected, cell stiffness did not influence cellular energy levels in the feeder track and wider branch, where no significant cell-induced matrix displacement was observed (Supplementary Fig. 3). While cytoskeletal remodeling and matrix deformation both require cells to generate forces through increased ATP-dependent actin polymerization and actomyosin activity15,16,17, contractility inhibitors can affect ATP binding, and Rho GTPase activity has been linked to cellular metabolism41. However, the close correlation between intracellular ATP:ADP ratio and track displacement, as well as between glucose uptake and track displacement, indicates that the amount of force exerted on the matrix drives energetic costs for confined migration.

### Matrix stiffness alters decision-making and energetic costs

We further validated our model by altering matrix stiffness, as our model predicts that the force exerted on the system is determined by the effective modulus of both the cell and the collagen microtrack. This also allowed us to manipulate model parameters without directly altering cell behavior. To alter the matrix stiffness without changing matrix architecture, we utilized nonenzymatic glycation to form advanced glycation end product crosslinks42. Using 3.0 mg ml−1 collagen, increasing the extent of glycation from 0 to 100 mM increases the modulus of the matrix from ~400 to ~550 Pa42. As predicted, increased matrix stiffness decreased the propensity of cells to migrate into the narrower path and slowed passage time into the narrower path (Fig. 5a, b). No significant change was observed in cell elongation and matrix displacement with glycation (Fig. 5c–e), most likely due to the relatively small range of stiffness evaluated. However, our model does calculate that energetic costs from migration will exponentially increase with confinement as a function of matrix stiffness (Fig. 5f). Importantly, we observed a larger increase in intracellular ATP:ADP levels and glucose uptake with increasing confinement for cells in collagen gels glycated with 100 mM ribose compared with cells in unglycated collagen tracks (Fig. 5g–i, Supplementary Fig. 4).
These findings indicate that to achieve similar levels of matrix displacement during confined migration, cells in stiffer matrices must expend significantly more energy. ### Energetic costs are predictive of migration decision-making We then examined whether cell and matrix stiffness-mediated decision-making is governed by the difference in energetic costs between possible migration paths. Our model predicts the difference in energetic costs between the 7 and 12 μm track increases with cell stiffness, lowering the probability of migration into the narrow path (Fig. 6a). To test this, we measured energetic costs for migration as the difference in ATP:ADP ratio (ΔATP:ADP) between the feeder track and migration paths across experimental conditions (Fig. 6b). This allowed us to remove any changes in the ATP:ADP ratio due to pharmacological treatments and examine the energy differential between possible migration paths. ΔATP:ADP for migration into the 7 μm track was higher compared with ΔATP:ADP for migration into the 12 μm track for all treatments, and increased with cell stiffness (Fig. 6b). As predicted by the model (Fig. 2f), the difference in ATP:ADP ratio between the two migration paths (ΔATP:ADP 7–12) was inversely correlated with migration into the narrower track and exponentially increased with cell stiffness (Fig. 6c, d), indicating the lower ΔATP:ADP 7–12 of more compliant cells guided their more indiscriminate decision-making. Similarly, our model predicted that increasing matrix stiffness increases the energetic requirements for migration into the narrower track (Fig. 2g), lowering the probability of migration (Fig. 6e). We also found that ΔATP:ADP was higher in 7 μm track with elevated matrix stiffness (Fig. 6f) and ΔATP:ADP 7–12 was inversely correlated with migration into the narrow track with increased matrix stiffness (Fig. 6g, h). Hence, migration choice can be robustly predicted by assessing the difference in energetic costs for motility between possible migration paths. ## Discussion The influence of the mechanical microenvironment on cell migration has been well studied;1,2,3 however, the energy needs of cells during migration, and how the mechanical microenvironment regulates energy needs, have largely been unexplored. Utilizing microfabrication techniques to recapitulate the architecture, composition, and mechanics of the in vivo ECM, we show that high physical confinement and steric hindrance inhibits migration due to elevated energetic requirements for cell-induced matrix displacement during migration. Notably, these increased energetic costs are determined by the mechanical properties of the cell and surrounding matrix, where high cell stiffness and/or matrix stiffness increase the cellular work necessary when migrating through confined spaces. We show that the energetic costs for motility between possible migration paths is predictive of the frequency of migration choice. Together, these findings provide a simple physical mechanism that links cellular energetics to cell mechanics and motility to explain migration decision-making in complex microenvironments. Studying migration across multiple cancer cell lines has demonstrated that the physical properties of the cell correlate with a cell’s ability to migrate through confined environments26. 
In this study, our utilization of a single breast epithelial cancer cell line and modulating cell mechanical properties via contractility directly identifies cell stiffness as an important cell property for migration decision-making in confined spaces. We find more compliant cells need significantly less energy-intensive matrix remodeling for migration. Given cancer cells are frequently more compliant and deformable than healthy cells43 with cell compliance correlated with metastatic potential44, our results suggest that increased compliance may provide a phenotypic advantage to migrating cells as they require lower energetic costs for migration. Actomyosin-based activity expends a major portion of cellular energy45, and limiting the energy needed for cytoskeletal and matrix remodeling efforts during migration would be very advantageous to cancer cells. Deformability of cells is predominately regulated by the actin cytoskeleton46, and confinement can alter actin organization during both single cell47 and collective migration48. During confined migration, actin is redistributed to the cell poles and channel interfaces7,47. Compressive force acting on actin structures have been found to stimulate actin reorganization and promote the formation of a denser and overall stiffer actin network in vitro49. While this network stiffening is likely useful to push away the surrounding ECM to enable cell migration in microtracks as well as in other 3D migration systems50, it also increases energy consumption49. Indeed, we observed increased energy requirements in narrower tracks for stiffer, more rounded cells that would experience higher compressive forces from the matrix acting on their cell body. Thus, we propose that increased compliance allows cells to minimize energetic costs for migration through confined spaces and utilize more possible migration paths when navigating the stromal microenvironment. We demonstrate that cells are able to sense path size during migration and migration into narrower paths requires more time, indicating that increased probing and/or matrix and cell remodeling is necessary prior to passage. Understanding how cells are able to actively probe possible migration choices and identify the path of lowest energetic cost while navigating through the matrix will be an important challenge. In addition to the mechanical properties of the cell, we show that the mechanical properties of the matrix also impose high energetic cost on migrating cells in physically constraining microtracks. Our model identifies that the effective modulus of both the cell and the collagen microtrack determine the force exerted on the system. Similar to stiff cells that are unable to elongate when moving into highly constricting tracks, cells migrating through narrow tracks in stiff matrices would experience increased mechanical loads on the cell body that increase actin network density and thus force49. Besides confinement and stiffness, other physical characteristics of the matrix including adhesion molecule expression also influence migration phenotype2. Physical confinement suppresses the formation of focal adhesions47 and cells under high confinement and low adhesion have been shown to undergo a switch from slow mesenchymal to fast ameboid-like migration51. 
However, the focal adhesion molecule vinculin maintains unidirectional migration in collagen microtracks52, and it has been suggested that forces transmitted by larger focal adhesions may function primarily to probe the matrix and guide in directional migration53. While changing collagen density, and therefore adhesion ligand expression, doesn’t alter migration speed in microtracks7, increasing matrix adhesivity may facilitate energy-intensive migration down a narrow path. Such changes in matrix properties would likely also lead to cytoskeletal changes, as the cytoskeleton serves as a mechanical coupler to the extracellular environment54. Taken together, these findings suggest a combination of both cell and matrix physical properties act to modulate cytoskeletal organization and dynamics during migration to drive energetic costs. Recent work has linked metabolic alterations observed in cancer cells to energy-intensive cytoskeletal remodeling55,56. Most cancer cells rely on aerobic glycolysis instead of mitochondrial oxidative phosphorylation to meet energy needs, a phenomenon known as the Warburg effect57. Increased glycolytic activity is associated with a more aggressive phenotype58,59 to compensate for the enhanced ATP demand of cancer cells and rapidly produce energy59,60. The Warburg effect has been proposed as a metabolic strategy to optimally meet fluctuating energy demands and maintain functions inherent in an invasive malignant phenotype20. In migrating cancer cells, increased glycolytic activity is associated with greater cell motility and faster cytoskeletal remodeling, and ATP derived from glycolytic enzymes close to areas of active cytoskeletal rearrangement is critical for motility55. However, oxidative phosphorylation may also provide localized energy production to the most energy-demanding regions of the cell. Mitochondrial trafficking to the leading edge of the cell has been shown to be vital to cytoskeletal dynamics supporting membrane protrusion and focal adhesion dynamics necessary for cell migration33,34. Such localized energy production is also crucial for force generation and physical displacement of the matrix during MMP-independent migration. In matrices with high plasticity, cancer cells extend actin-rich invadopodia protrusions to physically widen channels in the matrix and facilitate protease-independent migration through confining microenvironments50. Similarly, during anchor cell invasion in C. elegans, mitochondria are trafficked to the invasive front delivering localized production of ATP for Arp2/3–F-actin network growth in large protrusions to physical breach and displace the basement membrane without MMPs61. Consistent with these findings, our observation that cellular ATP:ADP ratio and glucose uptake is highly correlated with cell-induced matrix displacement suggests that the energy production needed to drive cell-generated forces may drive confined migration. When passing through micrometric pores, rapid Arp2/3-nucleated perinuclear actin networks have also been shown to facilitate nuclear deformation and subsequent passage through constriction62. This mechanism can facilitate rapid migration through spatially complex and restricting microenvironments but may also require high levels of energy consumption. The complex nature of cell migration presents challenges in therapeutically targeting migration, and endeavors to selectively inhibit cancer cell migration and metastasis have yielded limited success63. 
However, by linking cellular energetics to migration, the advent of new therapies targeting cancer metabolism may provide the foundation for treatments to target metastasis. ## Methods ### Cell culture and reagents Highly metastatic MDA-MB-231 breast adenocarcinoma cells (HTB-26, ATCC) were maintained at 37 °C and 5% CO2 in Dulbecco’s Modified Eagle’s Medium (Life Technologies) supplemented with 10% fetal bovine serum (Atlanta Biologicals) and 1% penicillin–streptomycin (Life Technologies). For ATP:ADP studies, MDA-MB-231 cells were transduced with PercevalHR and pHRed as previously described21. Briefly, FUGW-PercevalHR (Addgene plasmid #49083) and GW1-pHRed (Addgene plasmid #31473) were gifts from Gary Yellen (Harvard Medical School, Boston, MA) and co-expressed in the MDA-MB-231 cell population. pFUW-CMV-pHRed was generated by inserting GW1-pHRed into the pFUW-CMV vector using BamH1 and EcoR1 restriction sites. Transient transfection of HEK293T (CRL-3216, ATCC) with lentiviral expression vectors and second-generation packing constructs psPAX2 and pMD2.G in TransIT-LT1 (Mirus) was performed, and lentiviral particles were harvested at 48 and 72 h post transfection. Lentiviral particles were then concentrated 100-fold with Lenti-X Concentrator (Clontech) and stably transduced into MDA-MB-231 cells in the presence of 8 μg ml−1 polybrene overnight (Santa Cruz Biotechnology). For studies manipulating cell stiffness using pharmacological agents targeting cell contractility, cells were treated with 0.125 μg ml−1 Rho Activator II (CN03, Cytoskeleton), 1 nM CL-A (Sigma-Aldrich), 10 μM Y27632 (VWR), 20 μM ML7 (EMD Millipore), 5 mM MβCD (Sigma-Aldrich), or their appropriate vehicle controls. All cell lines were tested and found negative for mycoplasma contamination. ### siRNA-mediated knockdown of Caveolin-1 MDA-MB-231 cells were transfected with 25–30 nM of scrambled control siRNA oligonucleotides (5′-UUCCUCUCCACGCGCAGUACAUUUA-3′), or 25–30 nM of Caveolin-1 siRNA oligonucleotides (5′-GGGACACACAGUUUUGACGUU-3′) using 2 μg ml−1 Lipofectamine 2000 (Invitrogen) in Opti-MEM transfection medium (Life Technologies). siRNA-mediated knockdown was confirmed by performing western blot 72 h post transfection. MDA-MB-231 cells transfected with siRNAs were lysed using preheated (at 90 °C) 2× Lammeli sample buffer after a quick rinse with ice-cold phosphate buffer saline (PBS) as described previously64. Briefly, cell lysates were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis with a Mini-PROTEAN Tetra System (Bio-Rad) and electro-transferred onto a polyvinylidene difluoride membrane. Blots were probed using polyclonal antibody against Caveolin-1 (PA1-064, Thermo Fisher Scientific) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH; MAB374, Millipore). Anti-rabbit horseradish peroxidase conjugated secondary antibody (Rockland) was used against primary antibodies. After incubation with SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific), blots were exposed and imaged using a FujiFilm ImageQuant LAS-4000. ### Fabrication of collagen microtracks Tapered and Y-shaped 3D collagen microtracks were prepared using micropatterning techniques. Photolithography was utilized to fabricate a 100 mm diameter silicon wafer mold consisting of an array of tapered wells with a 20–5 μm wide spatial gradient, and Y-shaped wells with a 15 μm wide lateral track bifurcating to 12 and 7 μm wide branches. 
End-to-end length of the tapered microtrack and the lateral track or branches of the Y-shaped microtrack were 1000 and 400 µm, respectively. All designs were created by L-Edit CAD software and transferred to chrome layered photomasks using a DWL2000 mask writer (Heidelberg Instruments). SU-8 25 negative photoresist (MicroChem) was spun to thickness of 25 µm on a silicon wafer, prebaked, and exposed to i-line UV-light (365 nm) using a contact aligner (ABM-USA, Inc.) equipped with a 350 nm long-pass filter. Following postexposure bake, the photoresist was developed using SU-8 developer (MicroChem) and treated with (1H,1H,2H,2H-Perfluorooctyl) Trichlorosilane as an antistiction coating. The silicon wafer mold was used to cast poly(dimethylsiloxane) (PDMS; Dow Corning) stamps by curing a ratio of 1:10 crosslinker to monomer at 60 °C for 2 h. Using the PDMS stamps, type I collagen isolated from rat tail tendons (Rockland Immunochemicals) was micromolded using a working collagen solution of 3.0 mg ml−1 from a 10 mg ml−1 collagen stock solution by diluting with ice-cold complete media and neutralizing the solution to pH 7.0 by adding 1 N NaOH, as described previously27. Collagen microtracks were prepared on plastic bottom six-well plates for phase-contrast imaging and no. 1.5 cover glass bottom six-well plates (Cellvis) were used for confocal imaging. ### Nonenzymatic glycation of collagen As previously described42, 10 mg ml−1 collagen stock solutions were mixed with 0.5 M ribose to form solutions containing 0 or 100 mM ribose in 0.1% sterile acetic acid and incubated for 5 days at 4 °C. Glycated collagen solutions were then neutralized with 1N NaOH in 10× DPBS, HEPES (EMD Millipore) and sodium bicarbonate (J.T. Baker) to form 3.0 mg ml−1 collagen gels with 1× DPBS, 25 mM HEPES, and 44 mM sodium. ### Microtrack migration decision-making For all 3D collagen microtrack migration experiments, cells were allowed to adhere for 6 h after seeding at a density of 70,000 cells ml−1. For cell migration decision-making studies in Y-shaped microtracks, all pharmacological agents were added with fresh complete media immediately prior to time-lapse imaging, except for Rho Activator II and MβCD, which were added with complete media after seeding. For MβCD treatment, seeded cells were incubated with MβCD for 4 h and then replaced with fresh complete cultured media prior imaging to avoid interference with cell viability65,66. All images were analyzed using ImageJ (version 2.0.0-rc-68/1.5g, National Institutes of Health). For cell migration decision-making studies, cells were carefully observed to determine their contact to one or two side walls of the track before reaching the bifurcation site. Cells that divided, interacted with other cells, or were blocked by other cells were excluded from the analysis. Time to decision was calculated as the time from when the cell body began interacting with the bifurcation of the Y-shaped microtrack in the feeder track to when the entire cell body was within the branch. For experiments assessing cell and matrix deformation, intracellular ATP:ADP ratio, and 2-NBDG uptake cells were allowed to migrate in the Y-shaped microtrack for at least 6 h following treatments as described above before measurements were taken. ### Phase-contrast microscopy To study cell migration through collagen microtracks, time-lapse phase-contrast imaging was performed every 20 min for 12 h on a Zeiss Axio Observer Z1 inverted microscope equipped with a Hamamatsu ORCA-ER camera using a 10×/0.3 N.A. 
objective and operated by AxioVision software. Imaging was performed in an environmental chamber maintained at 37 °C and 5% CO2. ### Confocal microscopy PercevalHR and pHRed signal as well as 2-NBDG uptake were imaged on a Zeiss LSM 800 inverted confocal microscope equipped with a 40×/1.1 N.A. long working distance water-immersion objective and operated by Zen 2.3 software. For measuring intracellular ATP:ADP ratio during time-lapse studies, a 20×/0.8 N.A. objective was used, and imaging was performed every 10 min for 12 h in an environmental chamber maintained at 37 °C and 5% CO2. PercevalHR was excited using a 488 and 405 nm laser corresponding to the ATP-bound and ADP-bound conformation, respectively31, and emission was collected through a 450–550 nm bandpass filter. pHRed was excited using a 561 and 488 nm laser and emission was collected through a 576 nm long-pass filter. 2-NBDG was excited using a 488 nm laser and emission was collected through a 490–650 nm bandpass filter. Cell morphology and collagen architecture was simultaneously imaged using the transmission and reflection channels, respectively. ### Confocal reflectance microscopy Collagen architecture was visualized using a Zeiss LSM 800 inverted confocal microscope equipped with a 640 nm laser using a 40×/1.1 N.A. long working distance water-immersion objective and operated by Zen 2.3 software. Each collagen microtrack was visualized after fabrication. To account for changes in microtrack size during microtrack fabrication of Y-shaped tracks, only tracks within the following size parameters were used for this study: 15 μm track = 20–15 μm, 12 μm track = 11–13 μm, 7 μm track = <10 μm. ### Cell migration analysis Cell velocity was measured by manually outlining cells in ImageJ and calculating the displacement of the cell centroid over time. Only cells tracked for more than 4 h were analyzed. ### Quantification of cell and matrix deformation Cell features including minor axis, major axis, circularity, and aspect ratio were quantified using the measure tool in ImageJ after manually outlining the cell body. Elongation (aspect ratio/circularity) was calculated to assess change in cell shape and cell body deformation in the microtracks, as previously described67. Using confocal reflectance images, matrix deformation was calculated as the difference in microtrack width at the largest part of the cell body minus the microtrack width away from the cell body. ### Quantification of intracellular ATP:ADP ratio Intracellular ATP:ADP ratio in MDA-MB-231 cells was calculated using PercevalHR and pHRed probes, as previously described21,31. Due to pH sensitivities of the PercevalHR sensor31, approximate removal of pH bias was performed using a pH calibration. Briefly, cells were treated with 15 mM NH4Cl to induce a transient alkalization of the cytosol and vary intracellular pH while maintaining an approximately constant ATP:ADP ratio. The pH calibration was performed over a short period of time (2–3 min) to minimize metabolic stress on cells and the linear correlation between uncorrected PercevalHR signal (F488/F405) and pHRed signal (F561/F488) was established to predict pH bias in PercevalHR signal. Only cells in the dynamic range of the linear correlation between uncorrected PercevalHR signal and pHRed signal were used in this study. PercevalHR signal was then normalized by dividing the uncorrected PercevalHR signal by the transformed pH-corrected signal. 
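As a rough sketch of this pH-bias correction: the exact transform used in the authors' ImageJ macro is not given in this excerpt, so treating the pH-predicted PercevalHR ratio from the NH4Cl calibration as a divisor, and the function and argument names below, are assumptions made for illustration.

```python
import numpy as np

def normalized_perceval(F488, F405, F561, F488_phred, slope, intercept):
    """Approximate removal of pH bias from the PercevalHR ratio.

    slope and intercept come from the NH4Cl calibration: a linear fit of the
    uncorrected PercevalHR ratio (F488/F405) against the pHRed ratio (F561/F488)
    while the ATP:ADP ratio is held roughly constant.
    """
    perceval = np.asarray(F488) / np.asarray(F405)        # uncorrected PercevalHR ratio
    phred = np.asarray(F561) / np.asarray(F488_phred)     # pHRed ratio (intracellular pH readout)
    predicted_bias = slope * phred + intercept            # PercevalHR ratio expected from pH alone
    return perceval / predicted_bias                      # normalized, pH-corrected ratio
```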
Acquired PercevalHR and pHRed images were analyzed and normalized PercevalHR ratio was quantified in ImageJ using a customized macro. The mean background pixel intensity was measured and subtracted from the entire field of view for each channel to minimize interference from background noise. Using raw images, channels were then merged, subjected to a median filter (radius = 2 μm), and converted to a mask using a Li threshold. Pixels containing fluorescent signal were selected by applying the mask to background corrected images and the mean intensity for each channel was calculated, which were used to quantify the normalized PercevalHR ratio. To assess energetic costs during migration decision-making and account for any possible effects of pharmacological treatments, ΔATP:ADP was calculated as the ATP:ADP ratio of individual cells in the 12 or 7 μm track minus the average ATP:ADP ratio of cells in the 15 μm tracks. To assess energetic costs between the two possible migration paths, ΔATP:ADP 7–12 was calculated as the ATP:ADP ratio of individual cells in 7 μm track minus the average ATP:ADP ratio of cells in the 12 μm track. ### Quantification of glucose uptake Glucose uptake was measured using fluorescent glucose analog 2-NBDG (Life Technologies), as previously described21 with some modifications. MDA-MB-231 cells were incubated in 0.146 mM 2-NBDG for 6 h, and then fixed with 3.2% paraformaldehyde (Sigma-Aldrich) in 1× PBS for 15 min at room temperature. Samples were then washed three times with 1× PBS for 15 min and then washed overnight in 1× PBS at 4 °C prior to imaging. To calculate 2-NBDG uptake, cells were manually outlined, and mean pixel intensity was calculated after background subtraction. ### Image generation Representative images of intracellular ATP:ADP ratio, were generated as pixel-by-pixel ratio images and displayed as heatmaps using ImageJ. Adjustment of display map intensity, re-sizing, and addition of scale bars for all images was performed in ImageJ. ### Atomic force microscopy AFM was performed using contact mode atomic force microscopy (MFP-3D, Asylum Research). For indentation testing, cells were plated on a collagen-coated glass and treated similarly with the pharmacological activators and inhibitors as mentioned for migration studies prior to probing with a silicon nitride cantilever having a nominal spring constant of 0.01 N m−1 and 4.5 μm diameter spherical polystyrene bead (Novascan). The spring constant of each probe was calibrated before each experiment and had a mean spring constant of 0.016 ± 0.004 N m−1. Force-displacement curves were obtained by indenting 1–3 locations on the cell periphery at a constant force of 500 nN and approach and retract speeds of 1 μm s−1. The Young’s modulus for each cell was determined by fitting force-displacement curves to the Hertz model assuming a Poisson’s ratio of 0.5 using the Asylum curve fitting software. ### Model of energetic costs for confined cell migration To model migration through confined spaces, we first approximated our collagen microtracks as two infinite parallel half spaces with a given stiffness EECM, determined by the surrounding matrix. For a 3.0 mg ml−1 collagen matrix, we assumed EECM to be 400 Pa42. In addition, the geometry of the experimental setup imposes a symmetry where only the width of the microtrack can be different, and the depth of all the microtrack is kept the same. 
This symmetry allows us to neglect the effect that would occur along the depth direction of the microtrack as any fluctuation would be minimal at best. To determine the cell size that will be used for the computation, we first approximated the cell as a spherical soft body. The average size, or diameter Dc, of a cell in suspension is known to be ~18 μm for our MDA-MB-231 cellular model68. Thus, cells larger than the width of the system exert force Fc on each half space depending on their size and compliance Ec. The governing force equation for one side of the parallel half space of this system for a cell with radius R and indentation δ is given by: $$F_{\mathrm{c}} = \frac{4}{3}E_{{\mathrm{eff}}}\sqrt R \left( \delta \right)^{3/2}$$ (1) The effective modulus of the cell and microtrack Eeff is given by: $$E_{{\mathrm{eff}}} = \frac{{E_{\mathrm{c}}E_{{\mathrm{ECM}}}}}{{\left( {1 - \nu _{\mathrm{c}}^2} \right)E_{{\mathrm{ECM}}} + \left( {1 - \nu _{{\mathrm{ECM}}}^2} \right)E_{\mathrm{c}}}}$$ (2) where νc is the Poisson’s ratio of the cell, and νECM is the Poisson’s ratio of the matrix. One basic assumption is that the energy requirements for a soft body moving within this confined space should be proportional to the work required to overcome the forces from the system deformation at equilibrium. In this context, both cell shape and compliance will greatly influence the stress distribution. However, cell spreading within a confined space appears to be influenced by pathways controlling cell stiffness69. Interestingly, the data presented by Hung et al. suggests that cell spreading and elongation in a 6 μm wide confined space follows what appears to be an inverse exponential response as a function of stiffness69. To account for this cell feature, we can assume that a cell of stiffness Ec trying to fit within a channel of stiffness EECM will try to assume a shape of width Wc dependent on the overall stiffness of the system. Therefore, we can establish a relationship that links the apparent width the change in cell shape will impose to the stiffness of the system: $$\frac{{dW_{\mathrm{c}}}}{{dE_{{\mathrm{eff}}}}} = - \gamma W_{\mathrm{c}}$$ (3) Given the boundary conditions are fixed by the symmetry of the system and given the cell is interacting with the two walls, the new cell shape width parameter has to be within the limits Wtrack < Wc < Dc, and the above equation solution can be reduced to: $$W_{\mathrm{c}} = W_{{\mathrm{track}}} + \left( {D_{\mathrm{c}} - W_{{\mathrm{track}}}} \right)\left( {1 - e^{ - \gamma E_{{\mathrm{eff}}}}} \right)$$ (4) where Wtrack is the width of the microtrack, and γ is the rate of change. 1/γ therefore represents the effective mean stiffness of the response. For the purpose of the model, we expect this parameter to be within the same range as the known Young’s modulus of the collagen scaffold. Using the computed cell height and assumed unspread cell diameter of 18 μm68, we can obtain the long axis of the ellipse that keeps the perimeter constant. The system can now be described as an elliptical indenter, which provides an effective contact radius R and indentation on each half space is: $$\delta = (W_{\mathrm{c}} - W_{{\mathrm{track}}})/2$$ (5) Of note, δ in the model corresponds to the apparent indentation as the real indentation depth of the deformed collagen wall and cell can only be solved numerically. Given the mechanical properties of the system, the model indicates that measurable deformation of the collagen side walls should increase with cell stiffness. 
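For illustration, Equations (1)–(5) can be evaluated numerically as follows. This is a sketch only: parameter values such as γ and the cell modulus are placeholders rather than the values listed in Supplementary Table 1, the elliptical-indenter refinement is omitted, and the units are illustrative.

```python
import numpy as np

def confinement_force(E_c, E_ecm, W_track, D_c=18.0, gamma=1/400.0,
                      nu_c=0.5, nu_ecm=0.5):
    """Force exerted on one collagen half space by a cell squeezed into a track.

    E_c, E_ecm : Young's moduli of the cell and matrix (Pa); placeholders here.
    W_track    : microtrack width (um); D_c: unspread cell diameter (um).
    gamma      : rate of change from Eq. (3); 1/gamma is the effective mean
                 stiffness of the response (placeholder value).
    """
    # Eq. (2): effective modulus of the cell-matrix contact.
    E_eff = (E_c * E_ecm) / ((1 - nu_c**2) * E_ecm + (1 - nu_ecm**2) * E_c)

    # Eq. (4): width the deformed cell tries to assume inside the track.
    W_c = W_track + (D_c - W_track) * (1 - np.exp(-gamma * E_eff))

    # Eq. (5): apparent indentation into each half space.
    delta = (W_c - W_track) / 2.0

    # Eq. (1): Hertz-like contact force, using R = D_c / 2 as the contact radius
    # (the elliptical-indenter correction described in the text is omitted).
    R = D_c / 2.0
    return (4.0 / 3.0) * E_eff * np.sqrt(R) * delta**1.5

# The work per normalized unit of movement is proportional to this force;
# comparing track widths shows the energetic cost of stronger confinement.
for w in (7.0, 12.0, 15.0):
    print(w, confinement_force(E_c=1000.0, E_ecm=400.0, W_track=w))
```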
The apparent indentation required to fit the cell body will then determine Fc. Thus, the work for migration w is then defined as Fc multiplied by a normalized unit of movement of the cell through the microtrack. Furthermore, the model predicts that the energy difference between a 7 and 12 μm channel would be greater with stiffer cells. Therefore, our model indicates that the effective stiffness as well as the degree of spatial confinement directly impact the energy requirements for cells migrating in microtracks. Since there are only two possible migration choices in our experimental setup, a probit model can be utilized. A probit model is used to model binary outcome variables and has been widely applied as a standard method of reducing data to simple terms70. Therefore, we used a probit model to define the probabilistic outcome that would arise from the physical modeling results as the cumulative distribution of a standard normal function: $$P\left( {t7{\mathrm{|}}t12,\sigma } \right) = \frac{1}{2}\left( {1 + erf\left[ {\frac{{w_{{\mathrm{t}}12} - w_{{\mathrm{t}}7}}}{{\sigma \sqrt 2 }}} \right]} \right)$$ (6) In the current case, we can reasonably assume that the standard deviation of the system σ will be proportional to the energy available for cell migration. Using an estimate of 0.19 pJ s−1 for the minimal energy required for migration37, we can predict the probability of a cell choosing the smaller track as a function of its own stiffness or the stiffness of the surrounding ECM. A custom MATLAB (R2018a, Mathworks) code was used to generate the numerical results based on the model. All model parameters, their values, and their origins are described in Supplementary Table 1. ### Statistical analysis All statistical analysis was performed using GraphPad Prism 7.0. Normality in the spread of data was tested using the D’Agostino–Pearson omnibus normality test. When two cases were compared, statistical significance was performed using a two-tailed Student’s t-test or a two-tailed Mann–Whitney test for data with non normal distribution. Multiple groups were compared using one-way ANOVA or Kruskal–Wallis test with Dunn’s post hoc analysis for data with non normal distribution. To determine significance in decision-making between wide and narrow migration paths, a one proportion calculation was performed and a Clopper–Pearson confidence interval for observed proportion was assessed. To determine if a curve adequately fit data or to compare two curves, the extra sum-of-squares F-test was used. Pearson’s correlation coefficient (r) was used to determine correlation. No statistical method was used to predetermine sample size. All experiments were reproduced at least three independent times. ### Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. ## Code availability The ImageJ macro (version 2.0.0-rc-68/1.5g, National Institutes of Health) used for quantification of normalized PercevalHR ratiometric signal and MATLAB code (R2018a, Mathworks) used to generate the computational model are available from the corresponding authors upon reasonable request. ## Change history • ### 04 December 2019 An amendment to this paper has been published and can be accessed via a link at the top of the paper. ## References 1. 1. Van Helvert, S., Storm, C. & Friedl, P. Mechanoreciprocity in cell migration. Nat. Cell Biol. 20, 8–20 (2018). 2. 2. Paul, C. D., Mistriotis, P. & Konstantopoulos, K. 
Cancer cell motility: lessons from migration in confined spaces. Nat. Rev. Cancer 17, 131–140 (2017). 3. 3. Charras, G. & Sahai, E. Physical influences of the extracellular environment on cell migration. Nat. Rev. Mol. Cell Biol. 15, 813–824 (2014). 4. 4. Lu, P., Weaver, V. M. & Werb, Z. The extracellular matrix: a dynamic niche in cancer progression. J. Cell Biol. 196, 395–406 (2012). 5. 5. Wolf, K. et al. Multi-step pericellular proteolysis controls the transition from individual to collective cancer cell invasion. Nat. Cell Biol. 9, 893–904 (2007). 6. 6. Patsialou, A. et al. Intravital multiphoton imaging reveals multicellular streaming as a crucial component of in vivo cell migration in human breast tumors. IntraVital 2, e25294 (2013). 7. 7. Carey, S. P. et al. Comparative mechanisms of cancer cell migration through 3D matrix and physiological microtracks. Am. J. Physiol. Cell Physiol. 308, C436–C447 (2015). 8. 8. Doyle, A. D., Petrie, R. J., Kutys, M. L. & Yamada, K. M. Dimensions in cell migration. Curr. Opin. Cell Biol. 25, 642–649 (2013). 9. 9. Wolf, K. et al. Collagen-based cell migration models in vitro and in vivo. Semin. Cell Dev. Biol. 20, 931–941 (2009). 10. 10. Friedl, P. & Wolf, K. Tube travel: the role of proteases in individual and collective cancer cell invasion. Cancer Res. 68, 7247–7249 (2008). 11. 11. Coussens, L. M., Fingleton, B. & Matrisian, L. M. Matrix metalloproteinase inhibitors and cancer: trials and tribulations. Science 295, 2387–2392 (2002). 12. 12. Wyckoff, J. B., Pinner, S. E., Gschmeissner, S., Condeelis, J. S. & Sahai, E. ROCK- and myosin-dependent matrix deformation enables protease-independent tumor-cell invasion in vivo. Curr. Biol. 16, 1515–1523 (2006). 13. 13. Tozluoǧlu, M. et al. Matrix geometry determines optimal cancer cell migration strategy and modulates response to interventions. Nat. Cell Biol. 15, 751–762 (2013). 14. 14. Wolf, K. & Friedl, P. Extracellular matrix determinants of proteolytic and non-proteolytic cell migration. Trends Cell Biol. 21, 736–744 (2011). 15. 15. Bursac, P. et al. Cytoskeletal remodelling and slow dynamics in the living cell. Nat. Mater. 4, 557–561 (2005). 16. 16. Mizuno, D., Tardin, C., Schmidt, C. F. & MacKintosh, F. C. Nonequilibrium mechanics of active cytoskeletal networks. Science 315, 370–373 (2007). 17. 17. Meshel, A. S., Wei, Q., Adelstein, R. S. & Sheetz, M. P. Basic mechanism of three-dimensional collagen fibre transport by fibroblasts. Nat. Cell Biol. 7, 157–164 (2005). 18. 18. Balaban, R. S. Regulation of oxidative phosphorylation in the mammalian cell. Am. J. Physiol. Cell Physiol. 258, C377–C389 (1990). 19. 19. Epstein, T., Xu, L., Gillies, R. J. & Gatenby, R. A. Separation of metabolic supply and demand: aerobic glycolysis as a normal physiological response to fluctuating energetic demands in the membrane. Cancer Metab. 2, 1–9 (2014). 20. 20. Epstein, T., Gatenby, R. A. & Brown, J. S. The Warburg effect as an adaptation of cancer cells to rapid fluctuations in energy demand. PLoS One 12, 1–14 (2017). 21. 21. Zanotelli, M. R. et al. Regulation of ATP utilization during metastatic cell migration by collagen architecture. Mol. Biol. Cell 29, 1–9 (2018). 22. 22. Zhang, J. et al. Energetic regulation of coordinated leader–follower dynamics during collective invasion of breast cancer cells. Proc. Natl Acad. Sci. USA 116, 7867–7872 (2019). 23. 23. Mak, M. & Erickson, D. Mechanical decision trees for investigating and modulating single-cell cancer invasion dynamics. Lab Chip 14, 964–971 (2014). 24. 24. 
Ambravaneswaran, V., Wong, I. Y., Aranyosi, A. J., Toner, M. & Irimia, D. Directional decisions during neutrophil chemotaxis inside bifurcating channels. Integr. Biol. 2, 639–647 (2010). 25. 25. Paul, C. D. et al. Interplay of the physical microenvironment, contact guidance, and intracellular signaling in cell decision making. FASEB J. 30, 2161–2170 (2016). 26. 26. Lautscham, L. A. et al. Migration in confined 3D environments is determined by a combination of adhesiveness, nuclear volume, contractility, and cell stiffness. Biophys. J. 109, 900–913 (2015). 27. 27. Kraning-Rush, C. M., Carey, S. P., Lampi, M. C. & Reinhart-King, C. A. Microfabricated collagen tracks facilitate single cell metastatic invasion in 3D. Integr. Biol. 5, 606–616 (2013). 28. 28. Truongvo, T. N. et al. Microfluidic channel for characterizing normal and breast cancer cells. J. Micromech. Microeng. 27, 035017 (2017). 29. 29. Fu, Y., Chin, L. K., Bourouina, T., Liu, A. Q. & Vandongen, A. M. J. J. Nuclear deformation during breast cancer cell transmigration. Lab Chip 12, 3774–3778 (2012). 30. 30. Wolf, K. et al. Physical limits of cell migration: control by ECM space and nuclear deformation and tuning by proteolysis and traction force. J. Cell Biol. 201, 1069–1084 (2013). 31. 31. Tantama, M. et al. Imaging energy status in live cells with a fluorescent biosensor of the intracellular ATP-to-ADP ratio. Nat. Commun. 4, 2550 (2013). 32. 32. Yuan, H. X., Xiong, Y. & Guan, K. L. Nutrient sensing, metabolism, and cell growth control. Mol. Cell 49, 379–387 (2013). 33. 33. Cunniff, B., McKenzie, A. J., Heintz, N. H. & Howe, A. K. AMPK activity regulates trafficking of mitochondria to the leading edge during cell migration and matrix invasion. Mol. Biol. Cell 27, 2662–2674 (2016). 34. 34. Schuler, M.-H. et al. Miro1-mediated mitochondrial positioning shapes intracellular energy gradients required for cell migration. Mol. Biol. Cell 28, 2159–2169 (2017). 35. 35. Van Horssen, R. et al. Modulation of cell motility by spatial repositioning of enzymatic ATP/ADP exchange capacity. J. Biol. Chem. 284, 1620–1627 (2009). 36. 36. Gillies, R. J., Robey, I. & Gatenby, R. A. Causes and consequences of increased glucose metabolism of cancers. J. Nucl. Med. 49, 24S–42S (2008). 37. 37. Hecht, I. et al. The motility-proliferation-metabolism interplay during metastatic invasion. Sci. Rep. 5, 13538 (2015). 38. 38. Wang, N. & Ingber, D. E. Control of cytoskeletal mechanics by extracellular matrix, cell shape, and mechanical tension. Biophys. J. 66, 2181–2189 (1994). 39. 39. Chubinskiy-Nadezhdin, V. I., Efremova, T. N., Khaitlina, S. Y. & Morachevskaya, E. A. Functional impact of cholesterol sequestration on actin cytoskeleton in normal and transformed fibroblasts. Cell Biol. Int. 37, 617–623 (2013). 40. 40. Echarri, A. & Del Pozo, M. A. Caveolae - mechanosensitive membrane invaginations linked to actin filaments. J. Cell Sci. 128, 2747–2758 (2015). 41. 41. Wang, J. Bin et al. Targeting mitochondrial glutaminase activity inhibits oncogenic transformation. Cancer Cell 18, 207–219 (2010). 42. 42. Bordeleau, F. et al. Matrix stiffening promotes a tumor vasculature phenotype. Proc. Natl Acad. Sci. USA 114, 492–497 (2016). 43. 43. Cross, S. E., Yu-Sheng, J., Jianyu, R. & Gimzewski, J. K. Nanomechanical analysis of cells from cancer patients. Nat. Nanotechnol. 2, 780–783 (2007). 44. 44. Guck, J. et al. Optical deformability as an inherent cell marker for testing malignant transformation and metastatic competence. Biophys. J. 88, 3689–3698 (2005). 45. 45. 
Bernstein, B. W. & Bamburg, J. R. Actin-ATP hydrolysis is a major energy drain for neurons. J. Neurosci. 23, 1–6 (2003). 46. 46. Ananthakrishnan, R. et al. Quantifying the contribution of actin networks to the elastic strength of fibroblasts. J. Theor. Biol. 242, 502–516 (2006). 47. 47. Balzer, E. M. et al. Physical confinement alters tumor cell adhesion and migration phenotypes. FASEB J. 26, 4045–4056 (2012). 48. 48. Xi, W., Sonam, S., Beng Saw, T., Ladoux, B. & Teck Lim, C. Emergent patterns of collective cell migration under tubular confinement. Nat. Commun. 8, 1517 (2017). 49. 49. Bieling, P. et al. Force feedback controls motor activity and mechanical properties of self-assembling branched actin networks. Cell 164, 115–127 (2016). 50. 50. Wisdom, K. M. et al. Matrix mechanical plasticity regulates cancer cell migration through confining microenvironments. Nat. Commun. 9, 4144 (2018). 51. 51. Liu, Y. J. et al. Confinement and low adhesion induce fast amoeboid migration of slow mesenchymal cells. Cell 160, 659–672 (2015). 52. 52. Rahman, A. et al. Vinculin regulates directionality and cell polarity in two- and three-dimensional matrix and three-dimensional microtrack migration. Mol. Biol. Cell 27, 1431–1441 (2016). 53. 53. Bergert, M. et al. Force transmission during adhesion-independent migration. Nat. Cell Biol. 17, 524–529 (2015). 54. 54. Zanotelli, M. R., Bordeleau, F. & Reinhart-King, C. A. Subcellular regulation of cancer cell mechanics. Curr. Opin. Biomed. Eng. 1, 8–14 (2017). 55. 55. Shiraishi, T. et al. Glycolysis is the primary bioenergetic pathway for cell motility and cytoskeletal remodeling in human prostate and breast cancer cells. Oncotarget 6, 130–143 (2015). 56. 56. Hu, H. et al. Phosphoinositide 3-kinase regulates glycolysis through mobilization of aldolase from the actin cytoskeleton. Cell 164, 433–446 (2016). 57. 57. Vander Heiden, M. G., Cantley, L. C. & Thompson, C. B. Understanding the Warburg effect: the metabolic requirements of cell proliferation. Science 324, 1029–1033 (2009). 58. 58. Postovit, L. M., Adams, M. A., Lash, G. E., Heaton, J. P. & Graham, C. H. Oxygen-mediated regulation of tumor cell invasiveness: involvement of a nitric oxide signaling pathway. J. Biol. Chem. 277, 35730–35737 (2002). 59. 59. Gatenby, R. A. & Gillies, R. J. Why do cancers have high aerobic glycolysis? Nat. Rev. Cancer 4, 891–899 (2004). 60. 60. Lunt, S. Y. & Vander Heiden, M. G. Aerobic glycolysis: meeting the metabolic requirements of cell proliferation. Annu. Rev. Cell Dev. Biol. 27, 441–464 (2011). 61. 61. Kelley, L. C. et al. Adapative F-actin polymerization and localized ATP production drive basement membrane invasion in the absence of MMPs. Dev. Cell 48, 313–328 (2019). 62. 62. Thiam, H. R. et al. Perinuclear Arp2/3-driven actin polymerization enables nuclear deformation to facilitate cell migration through complex environments. Nat. Commun. 7, 1–14 (2016). 63. 63. Steeg, P. S. Tumor metastasis: mechanistic insights and clinical challenges. Nat. Med. 12, 895–904 (2006). 64. 64. Huynh, J., Bordeleau, F., Kraning-Rush, C. M. & Reinhart-King, C. A. Substrate stiffness regulates PDGF-induced circular dorsal ruffle formation through MLCK. Cell. Mol. Bioeng. 6, 138–147 (2013). 65. 65. Guerra, F. S. et al. Membrane cholesterol depletion reduces breast tumor cell migration by a mechanism that involves non-canonical Wnt signaling and IL-10 secretion. Transl. Med. Commun. 1, 3 (2016). 66. 66. Yang, Y. T. et al. 
Characterization of cholesterol-depleted or -restored cell membranes by depth-sensing nano-indentation. Soft Matter 8, 682–687 (2012). 67. 67. Carey, S. P. et al. Local extracellular matrix alignment directs cellular protrusion dynamics and migration through Rac1 and FAK. Integr. Biol. 8, 821–835 (2016). 68. 68. Kim, U. et al. Selection of mammalian cells based on their cell-cycle phase using dielectrophoresis. Proc. Natl Acad. Sci. USA 104, 20708–20712 (2007). 69. 69. Hung, W. C. et al. Distinct signaling mechanisms regulate migration in unconfined versus confined spaces. J. Cell Biol. 202, 807–824 (2013). 70. 70. Finney, D. J. Probit Analysis: A Statistical Treatment of the Sigmoid Response Curve. (Cambridge university press, 1962). ## Acknowledgements This work was supported by funding from the NIH (GM131178) and an NSF-NIH PESO Award (1740900) to C.A.R.-K.; NSF Graduate Research Fellowships under Grant No. DGE-1650441 to M.R.Z., A.R.-Z., and J.A.V.; and a NSERC Discovery grant (RGPIN-2018-06214) and Scholarship for the Next Generation of Scientists from the Cancer Research Society to F.B. This work was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the NSF (Grant ECCS-1542081). ## Author information Authors ### Contributions M.R.Z, A.R.-Z., A.J., F.B., D.E., and C.A.R.-K. designed the experiments; M.R.Z., A.R.-Z., J.A.V., P.V.T, and A.J. performed the experiments; M.R.Z., A.R.-Z., J.A.V., and P.V.T analyzed the data; F.B. carried out the computational model; M.R.Z., A.R.-Z., and F.B. wrote the manuscript; F.B. and C.A.R.-K. supervised the project. All authors revised the manuscript and approved the final version. ### Corresponding authors Correspondence to Francois Bordeleau or Cynthia A. Reinhart-King. ## Ethics declarations ### Competing interests The authors declare no competing interests. Peer review information Nature Communications thanks Sanjay Kumar and other, anonymous, reviewers for their contribution to the peer review of this work. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Zanotelli, M.R., Rahman-Zaman, A., VanderBurgh, J.A. et al. Energetic costs regulated by cell mechanics and confinement are predictive of migration path during decision-making. Nat Commun 10, 4185 (2019). https://doi.org/10.1038/s41467-019-12155-z • Accepted: • Published: • ### Migration of the 3T3 Cell with a Lamellipodium on Various Stiffness Substrates—Tensegrity Model Applied Sciences (2020) • ### Tactics of cancer invasion: solitary and collective invasion • Tomoaki Nagai • , Tomohiro Ishikawa • , Yasuhiro Minami •  & Michiru Nishita The Journal of Biochemistry (2020) • ### Metabolic Potential of Cancer Cells in Context of the Metastatic Cascade • Mohaned Benzarti • , Catherine Delbrouck • , Laura Neises • , Nicole Kiweler •  & Johannes Meiser Cells (2020) • ### The extracellular matrix in development • David A. Cruz Walma Development (2020)
# Two Higgs-Doublet Dark Matter Model with Pseudoscalar Mediator

Jon Butterworth, Martin Habedank, Louie Corpe, Deepak Kar, Priscilla Pani, Andrius Vaitkus

Two studies of this model [77], which was also studied by ATLAS in [22], appear in [83] (Sensitivity of LHC measurements to a two-Higgs-doublet plus pseudoscalar DM model and Determining sensitivity of future measurements to new physics signals). The key parameters (as also described in the model README file) are as follows.

• There are four Higgs bosons. $$h_1$$ is identified with the SM Higgs. Then there is a heavy scalar Higgs $$h_2 = H$$, a charged Higgs $$h_c = h^\pm$$, a CP-odd Higgs $$h_3 = A$$, and a pseudoscalar Higgs $$h_4 = a$$, which plays the role of the DM mediator. Unless stated otherwise, the masses of the $$H, A$$ and $$h^\pm$$ are set equal to each other.
• The fermionic DM candidate has a default mass of $$M_{X_d} = 10$$ GeV.
• $$\sin(\beta-\alpha)$$ is the sine of the difference of the mixing angles in the scalar potential containing only the Higgs doublets, default = 1.0 (aligned limit).
• $$g^\prime_{X_d}$$ is the coupling of $$a$$ to DM. Default = 1.0.
• $$\tan\beta$$ is the ratio of the vacuum expectation values $$\tan \beta = \frac{v_2}{v_1}$$ of the Higgs doublets. Default = 1.0.
• $$\sin\theta$$ is the sine of the mixing angle between the two neutral CP-odd weak eigenstates, as defined in Section 2.1 of [77]. Default = 0.35.
• $$\lambda_3$$. Default = 0.0.
• $$\lambda_{P1}$$ is the quartic coupling between the scalar doublet $$H_1$$ and the pseudoscalar $$P$$. Default = 0.0.
• $$\lambda_{P2}$$ is the quartic coupling between the scalar doublet $$H_2$$ and the pseudoscalar $$P$$. Default = 0.0.

## Comparison to ATLAS summaries

The ATLAS summary [22] shows, in Fig. 19a, a scan in $$M_A = M_{h^\pm} = M_H$$ and the mass of the pseudoscalar mediator $$M_a$$, and in Fig. 19b a scan in $$\tan\beta$$ and $$M_a$$ for $$M_A = M_{h^\pm} = M_H = 600$$ GeV. We compare to these scans below. Note that, as specified in Table 6 of [22], the values of $$\lambda_3, \lambda_{P1}, \lambda_{P2}$$ are all changed from the model default of zero and set to 3, and we are in the “aligned limit”, i.e. $$\sin(\beta-\alpha) = 1.0, \cos(\beta-\alpha) = 0.0$$, so the lightest Higgs has the branching fractions and couplings of the SM Higgs.

Figure 19a, a scan in $$M_A = M_{h^\pm} = M_H$$ and the mass of the pseudoscalar mediator $$M_a$$. Updated to Rivet 3.1.x 6/2/2020, A. Vaitkus.

The Contur sensitivity at $$800 < M_A < 1400$$ GeV is worse than that of the ATLAS searches, because the measurements available in Rivet include very few $$E_{T}^{\rm miss} + X$$ cross sections, and no $$E_{T}^{\rm miss} + H(b\bar{b})$$ at all, which is where most of the ATLAS sensitivity comes from. One of the few exceptions is the $$l^+l^- + E_T^\mathrm{miss}$$ measurement at 7 TeV, where the exclusion heatmap (shown below and in the proceedings) shadows a subset of the ATLAS search sensitivity in the same final state. The ATLAS searches have more luminosity and higher beam energy than the measurement available to Contur. Repeating these measurements with higher energies and more integrated luminosity would be highly desirable. The band of sensitivity at $$M_A < 600$$ GeV is not present in the ATLAS searches, however. It comes from various measurements, mostly involving a $$W$$ boson in the final state. In general, multiple exotic Higgs channels contribute, as discussed and illustrated in Figure 3 of the proceedings (TODO add link when available).
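For convenience, the default parameter point and the overrides used for the comparison to [22] can be collected in one place. This is a sketch only; the dictionary keys are descriptive labels, not the parameter names used in the actual model files.

```python
# Default 2HDM+a parameter point as listed above, plus the overrides applied
# for the comparison to the ATLAS summary [22] (Table 6 of that paper).
defaults = {
    "M_Xd": 10.0,            # GeV, fermionic DM candidate mass
    "sin(beta-alpha)": 1.0,  # aligned limit
    "g_Xd": 1.0,             # coupling of the pseudoscalar a to DM
    "tan_beta": 1.0,
    "sin_theta": 0.35,       # CP-odd mixing angle
    "lambda_3": 0.0,
    "lambda_P1": 0.0,
    "lambda_P2": 0.0,
}

atlas_scan_overrides = {
    # As specified in Table 6 of [22], the quartic couplings are set to 3.
    "lambda_3": 3.0,
    "lambda_P1": 3.0,
    "lambda_P2": 3.0,
}

scan_point = {**defaults, **atlas_scan_overrides}
# The scans then vary M_A = M_H = M_(h+/-) together with M_a (Fig. 19a),
# or tan_beta and M_a at fixed M_A = 600 GeV (Fig. 19b).
```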
Figure 19b, a scan in $$\tan\beta$$ and $$M_a$$ for $$M_A = M_{h^\pm} = M_H = 600$$ GeV. Updated to Rivet 3.1.x 6/2/2020, A. Vaitkus.

There is good sensitivity for $$M_A < 600$$ GeV and $$\tan\beta < 1$$ or so regardless of $$M_a$$, generally coming from processes involving the production and decay of the new heavy Higgs bosons, contributing to final-state signatures not considered in [22]. The signatures mostly involve top quarks, although not the four-top signature which was considered in [22].

## A note on Higgs fiducial cross sections

As discussed in the proceedings, although they have Rivet routines, the Higgs $$h \rightarrow WW$$ measurements [125][52] and the CMS $$W+$$ jet measurements [119][126][135] are not used, due to issues with the b-jet veto and background subtraction.

## Other Variants

To come.

The model files are available in the Pseudoscalar_2HDM directory here.
mersenneforum.org Top-5000 cutoff is >1 million bits, starting today (now 1.29M))

2015-09-17, 17:09   #12
pepi37
Dec 2011
After milion nines:)
3×11×37 Posts

Quote:
Originally Posted by VBCurtis
Huh? What are you talking about? The 5000th prime is from the primegrid effort, as Batalov states.

If I understand correctly, Batalov says that day is come when Primegrid primes like 2196064286817 · 2^1290000 - 1 will not be primes on Top5000 any more But they are still primes since their length is 388342 digits and

Quote:
To make the top 5000 today a prime must have 388339 numbers

So we are still 3 digits longer , and they are in Top 5000

2015-09-17, 18:08   #13
VBCurtis
"Curtis" Feb 2005
Riverside, CA
2×1,997 Posts

You do not understand correctly- those primes occupy the bottom 1800 places of the list, so your speculation of "a few weekends" is off by a year or two. Batalov stated that primes *smaller* than this primegrid effort are now off the top 5000, and future primegrid primes at 2^1290000 will bump themselves for the rest of their project.

2015-09-17, 19:12   #14
pepi37
Dec 2011
After milion nines:)
3×11×37 Posts

Quote:
Originally Posted by VBCurtis
You do not understand correctly- those primes occupy the bottom 1800 places of the list, so your speculation of "a few weekends" is off by a year or two. Batalov stated that primes *smaller* than this primegrid effort are now off the top 5000, and future primegrid primes at 2^1290000 will bump themselves for the rest of their project.

Thanks for explanation!

2017-08-13, 14:24   #15
Batalov
"Serge" Mar 2008
Phi(3,3^1118781+1)/3
2·4,493 Posts

Quote:
Originally Posted by Batalov
Not so many anymore. Used to be 2900, now less than 200 and shrinking, and in a couple of months they will be all gone from top5k, except those that were collected into AP3s, the SG pair and the twin pair. I wonder if they will continue running them, just like they still run GFN"15"s.
_______________
"We choose to go to the moon neighborhood mall not because it was hard, but because it was easy, because that goal will serve to organize and measure the best of our energies and skills everyone gets a lolly!" (The new generation's modified and improved motto.)

Alas (for the YGG), the 1290000-bit boundary is now in the past. And this last of the Mohicans ought to be one the shortest-lived Top5000 primes! http://primes.utm.edu/primes/page.php?id=123838

Code:
Entrance Rank (*): 5000
Currently on list? (*): no
Submitted: 8/12/2017 21:33:22 CDT
Removed (*): 8/12/2017 21:53:07 CDT
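For reference, the digit counts being compared in the thread follow directly from base-10 logarithms; a quick check in Python:

```python
import math

# Number of decimal digits of k * 2^n - 1 (subtracting 1 does not change the
# digit count here, since k * 2^n is not a power of ten).
k, n = 2196064286817, 1290000
digits = math.floor(math.log10(k) + n * math.log10(2)) + 1
print(digits)           # 388342, as quoted in post #12
print(digits - 388339)  # 3 digits above the quoted top-5000 cutoff
```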
You are here: Start » Deep Learning # 1. Introduction Deep Learning is a breakthrough machine learning technique in computer vision. It learns from training images provided by the user and can automatically generate solutions for a wide range of image analysis applications. Its key advantage, however, is that it is able to solve many of the applications which have been too difficult for traditional, rule-based algorithms of the past. Most notably, these include inspections of objects with high variability of shape or appearance, such organic products, highly textured surfaces or natural outdoor scenes. What is more, when using ready-made products, such as our Aurora Vision Deep Learning, the required programming effort is reduced almost to zero. On the other hand, deep learning is shifting the focus to working with data, taking care of high quality image annotations and experimenting with training parameters – these elements actually tend to take most of the application development time these days. Typical applications are: • detection of surface and shape defects (e.g. cracks, deformations, discoloration), • detecting unusual or unexpected samples (e.g. missing, broken or low-quality parts), • identification of objects or images with respect to predefined classes (i.e. sorting machines), • location, segmentation and classification of multiple objects within an image (i.e. bin picking), • product quality analysis (including fruits, plants, wood and other organic products), • location and classification of key points, characteristic regions and small objects, • optical character recognition. The use of deep learning functionality includes two stages: 1. Training – generating a model based on features learned from training samples, 2. Inference – applying the model on new images in order to perform the actual machine vision task. The difference to the traditional image analysis approach is presented in the diagrams below: Traditional approach: The algorithm must be designed by a human specialist. Machine learning approach: We only need to provide a training set of labeled images. ## Overview of Deep Learning Tools 1. Anomaly Detection – this technique is used to detect anomalous (unusual or unexpected) samples. It only needs a set of fault-free samples to learn the model of normal appearance. Optionally, several faulty samples can be added to better define the threshold of tolerable variations. This tool is useful especially in cases where it is difficult to specify all possible types of defects or where negative samples are simply not available. The output of this tool are: a classification result (normal or faulty), an abnormality score and a (rough) heatmap of anomalies in the image. 2. An example of a missing object detection using AvsFilter_DL_DetectAnomalies2 tool. Left: The original image with a missing element. Right: The classification result with a heatmap of anomalies. 3. Feature Detection (segmentation) – this technique is used to precisely segment one or more classes of pixel-wise features within an image. The pixels belonging to each class must be marked by the user in the training step. The result of this technique is an array of probability maps for every class. 4. An example of image segmentation using AvsFilter_DL_DetectFeatures tool. Left: The original image of the fundus. Right: The segmentation of blood vessels. 5. Object Classification – this technique is used to identify an object in a selected region with one of user-defined classes. 
First, it is necessary to provide a training set of labeled images. The result of this technique is: the name of detected class and a classification confidence level. 6. An example of object classification using AvsFilter_DL_ClassifyObject tool. 7. Instance Segmentation – this technique is used to locate, segment and classify one or multiple objects within an image. The training requires the user to draw regions corresponding to objects in an image and assign them to classes. The result is a list of detected objects – with their bounding boxes, masks (segmented regions), class IDs, names and membership probabilities. 8. An example of instance segmentation using AvsFilter_DL_SegmentInstances tool. Left: The original image. Right: The resulting list of detected objects. 9. Point Location – this technique is used to precisely locate and classify key points, characteristic parts and small objects within an image. The training requires the user to mark points of appropriate classes on the training images. The result is a list of predicted point locations with corresponding class predictions and confidence scores. 10. An example of point location using AvsFilter_DL_LocatePoints tool. Left: The original image. Right: The resulting list of detected points. 11. Reading Characters – this technique is used to locate and recognize characters within an image. The result is a list of found characters. 12. An example of optical character recognition using AvsFilter_DL_ReadCharacters tool. Left: The original image. Right: The image with the recognized characters drawn. ## Basic Terminology You do not need to have the specialistic scientific knowledge to develop your deep learning solutions. However, it is highly recommended to understand the basic terminology and principles behind the process. ### Deep neural networks Aurora Vision provides access to several standardized deep neural networks architectures created, adjusted and tested to solve industrial machine vision tasks. Each of the networks is a set of trainable convolutional filters and neural connections which can model complex transformations of an image with the goal to extract relevant features and use them to solve a particular problem. However, these networks are useless without proper amount of good quality data provided for training process. This documentation presents necessary practical hints on creating an effective deep learning model. ### Depth of a neural network Due to various levels of task complexity and different expected execution times, the users can choose one of five available network depths. The Network Depth parameter is an abstract value defining the memory capacity of a neural network (i.e. the number of layers and filters) and the ability to solve more complex problems. The list below gives hints about selecting the proper depth for a task characteristics and conditions. 1. Low depth (value 1-2) • A problem is simple to define. • A problem could be easily solved by a human inspector. • A short time of execution is required. • Background and lighting do not change across images. • Well-positioned objects and good quality of images. 2. Standard depth (default, value 3) • Suitable for a majority of applications without any special conditions. • A modern CUDA-enabled GPU is available. 3. High depth (value 4-5) • A big amount of training data is available. • A problem is hard or very complex to define and solve. • Complicated irregular patterns across images. • Long training and execution times are not a problem. 
• A large amount of GPU RAM (≥4GB) is available. • Varying background, lighting and/or positioning of objects. Tip: Test your solution with a lower depth first, and then increase it if needed. Note: A higher network depth will lead to a significant increase in memory and computational complexity of training and execution. ### Training process Model training is an iterative process of updating neural network weights based on the training data. One iteration involves some number of steps (determined automatically), each step consists of the following operations: 1. selection of a small subset (batch) of training samples, 2. calculation of an error measure for these samples, 3. updating the weights to achieve lower error for these samples. At the end of each iteration, the current model is evaluated on a separate set of validation samples selected before the training process. Validation set is automatically chosen from the training samples. It is used to simulate how neural network would work with real images not used during training. Only the set of network weights corresponding with the best validation score at the end of training is saved as the final solution. Monitoring the training and validation score (blue and orange lines in the figures below) in consecutive iterations gives fundamental information about the progress: 1. Both training and validation scores are improving – keep training, the model can still improve. 2. Both training and validation scores has stopped improving – keep training for a few iterations more and stop if there is still no change. 3. Training score is improving, but validation score has stopped or is going worse – you can stop training, model has probably started overfitting to your training data (remembering exact samples rather than learning rules about features). It may also be caused by too small amount of diverse samples or too low complexity of the problem for a network selected (try lower Network Depth). An example of correct training. A graph characteristic for network overfitting. The above graphs represent training progress in the Deep Learning Editor. The blue line indicates performance on the training samples, and the orange line represents performance on the validation samples. Please note the blue line is plotted more frequently than the orange line as validation performance is verified only at the end of each iteration. ## Stopping Conditions The user can stop the training manually by clicking the Stop button. Alternatively, it is also possible to set one or more stopping conditions: 1. Iteration Count – training will stop after a fixed number of iterations. 2. Iterations without Improvement – training will stop when the best validation score was not improved for a given number of iterations. 3. Time – training will stop after a given number of minutes has passed. 4. Validation Accuracy or Validation Error – training will stop when the validation score reaches a given value. ## Preprocessing To adjust performance to a particular task, the user can apply some additional transformations to the input images before training starts: 1. Downsample – reduction of the image size to accelerate training and execution times, at the expense of lower level of details possible to detect. Increasing this parameter by 1 will result in downsampling by the factor of 2 over both image dimension. 2. Convert to Grayscale – while working with problems where color does not matter, you can choose to work with monochrome versions of images. 
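The stopping conditions described above amount to tracking the validation score after each iteration and keeping the weights from the best iteration. The sketch below is a generic illustration of that logic; the function names, defaults, and the assumption that `run_iteration()` returns a validation score are placeholders, not the Aurora Vision API.

```python
import time

def train_with_stopping(run_iteration, max_iterations=200, patience=20,
                        target_score=None, time_budget_min=None):
    """Run training iterations, keep the best validation score, and stop early.

    run_iteration() is assumed to perform one training iteration and return the
    current validation score (higher is better).
    """
    start = time.time()
    best_score, best_iteration = float("-inf"), -1

    for it in range(max_iterations):                               # Iteration Count
        score = run_iteration()
        if score > best_score:
            best_score, best_iteration = score, it                 # keep these weights
        if it - best_iteration >= patience:                        # Iterations without Improvement
            break
        if target_score is not None and score >= target_score:     # Validation Accuracy reached
            break
        if time_budget_min is not None and (time.time() - start) / 60.0 >= time_budget_min:
            break                                                   # Time limit
    return best_score
```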
## Augmentation In case when the number of training images can be too small to represent all possible variations of samples, it is recommended to use data augmentations that add artificially modified samples during training. This option will also help avoiding overfitting. Below is a description of the available augmentations and examples of the corresponding transformations: 1. Luminance – change brightness of samples by a random percentage (between -ParameterValue and +ParameterValue) of pixel values (0-255). For a given augmentation values, samples as below can be added to the training set. 2. Luminance=-50. Luminance=-25. Original image. Luminance=25. Luminance=50. 3. Noise – modify samples with uniform noise. Value of each channel and pixel is modified separately, by random percentage (between -ParameterValue and +ParameterValue) of pixel values (0-255). Please note that choosing an appropriate augmentation value should depend on the size of the feature in pixels. Larger value will have a much greater impact on small objects than on large objects. For a tile with the feature "F" with the size of 130x130 pixels and a given augmentation values, samples as below can be added to the training set.: 4. Original grayscale image. Grayscale image. Noise=4. Grayscale image. Noise=10. Grayscale image. Noise=25. Grayscale image. Noise=50. Original RGB image. RGB image. Noise=4. RGB image. Noise=10. RGB image. Noise=25. RGB image. Noise=50. 5. Gaussian Blur – blur samples with a kernel of a size randomly selected between 0 and the provided maximum kernel size. Please note that choosing an appropriate Gaussian Blur Kernel Size should depend on the size of the feature in pixels. Larger kernel sizes will have a much greater impact on small objects than on large objects. For a tile with the feature "F" with the size of 130x130 pixels and a given augmentation values, samples as below can be added to the training set.: 6. Original image. Gaussian Blur=5. Gaussian Blur=10. Gaussian Blur=25. Gaussian Blur=50. 7. Rotation – rotate samples by a random angle between -ParameterValue and +ParameterValue. Measured in degrees. 8. In Detect Features, Locate Points and Detect Anomalies, for a tile with the feature "F" and given augmentation values, samples as below can be added to the training set. Tile rotation=-45°. Tile rotation=-20°. Original tile. Tile rotation=20°. Tile rotation=45°. In Classify Object and Segment Instances, for an image with the feature "F" and given augmentation values, samples as below can be added to the training set. Image rotation=-45°. Image rotation=-20°. Original image. Image rotation=20°. Image rotation=45°. 9. Flip Up-Down – reflect samples along the X axis. 10. Flip Left-Right – reflect samples along the Y axis. 11. No flips. Up-Down flip. Left-Right flip. Both flips. 12. Relative Translation – translate samples by a random shift, defined as a percentage (between -ParameterValue and +ParameterValue) of the tile (in Detect Features, Locate Points and Detect Anomalies) or the image size (in Classify Object and Segment Instances). Works independently in both X and Y dimensions. 13. In Detect Features, Locate Points and Detect Anomalies, for a tile with the feature "F" and given augmentation values, samples as below can be added to the training set. Tile translation x=20%, y=20%. Original tile. Tile translation x=-20%, y=-20%. In Classify Object and Segment Instances, for an image with the feature "F" and given augmentation values, samples as below can be added to the training set. 
Image translation x=20%, y=20%. Original image. Image translation x=-20%, y=-20%. 14. Scale – resize samples relatively to their original size by a random percentage between the provided minimum scale and maximum scale. 15. Resize=50%. Original image. Resize=150%. 16. Horizontal Shear – shear samples horizontally by an angle between -ParameterValue and +ParameterValue. Measured in degrees. 17. In Detect Features, Locate Points and Detect Anomalies, for a tile with the feature "F" and given augmentation values, samples as below can be added to the training set. Horizontal Shear=-30. Original tile. Horizontal Shear=30. In Classify Object and Segment Instances, for an image with the feature "F" and given augmentation values, samples as below can be added to the training set. Horizontal Shear=-30. Original image. Horizontal Shear=30. 18. Vertical Shear – analogous to Horizontal Shear. 19. In Detect Features, Locate Points, and Detect Anomalies, for a tile with the feature "F" and given augmentation values, samples as below can be added to the training set. Vertical Shear=-30. Original tile. Vertical Shear=30. In Classify Object and Segment Instances, for an image with the feature "F" and given augmentation values, samples as below can be added to the training set. Vertical Shear=-30. Original image. Vertical Shear=30. Warning: the choice of augmentation options depends only on the task we want to solve. Sometimes they might be harmful for quality of a solution. For a simple example, the Rotation should not be enabled if rotations are not expected in a production environment. Enabling augmentations also increases the network training time (but does not affect execution time!) # 2. Anomaly Detection Aurora Vision Deep Learning provides three ways of defect detection: The AvsFilter_DL_DetectAnomalies1 (reconstructive approach) uses deep neural networks to remove defects from the input image by reconstructing the affected regions. It is used to analyze images in fragments of size determined by the Feature Size parameter. This approach is based on reconstructing an image without defects and then comparing it with the original one. It filters out all patterns smaller than Feature Size that were not present in the training set. The AvsFilter_DL_DetectAnomalies2 Single Class uses a simpler algorithm than Golden Template. It uses less space and the iteration time is shorter. It can be used with less complex objects. The AvsFilter_DL_DetectAnomalies2 Golden Template is an appropriate method for positioned objects with complex details. The tool divides the images into regions and creates a separate model for each region. The tool has the Texture Mode dedicated for texture defects detection. It can be used for plain surfaces or the ones with a simple pattern. To sum up, while choosing the tool for anomaly detection, first check the Golden Template with the Texture Mode on or off, depending on the object's kind. If the model takes too much space or the iteration is too long, please try the Single Class tool. If the object is complex and its position is unstable, please check the AvsFilter_DL_DetectAnomalies1 approach. An example of textile defect detection using the AvsFilter_DL_DetectAnomalies2. ### Parameters • Feature Size is related to AvsFilter_DL_DetectAnomalies1 and AvsFilter_DL_DetectAnomalies2 Single Class approach. It corresponds to the expected defect size and it is the most significant one in terms of both quality and speed of inspection. 
It is represented by a green square in the Image window of the Editor. The common denominator of all fragment-based approaches is that the Feature Size should be adjusted so that it contains common defects with some margin. For AvsFilter_DL_DetectAnomalies1, a large Feature Size will cause small defects to be ignored; however, the inference time will be shortened considerably. Heatmap precision will also be lowered. For AvsFilter_DL_DetectAnomalies2 Single Class, a large Feature Size increases training as well as inference time and memory requirements. Consider using the Downscale parameter instead of increasing the Feature Size.
• Sampling Density is related to the AvsFilter_DL_DetectAnomalies1 and AvsFilter_DL_DetectAnomalies2 Single Class approaches. It controls the spatial resolution of both training and inspection. The higher the density, the more precise the results, but the longer the computational time. It is recommended to use the Low density only for well-positioned and simple objects. The High density is useful when working with complex textures and highly variable objects.
• Max Translation is related to the AvsFilter_DL_DetectAnomalies2 Golden Template approach. It is the maximal position change tolerance. If the parameter increases, the working area of a small model enlarges and the number of the created small models decreases.
• Model Complexity is related to the AvsFilter_DL_DetectAnomalies2 Golden Template and AvsFilter_DL_DetectAnomalies2 Texture approaches. A greater value may improve model effectiveness, especially for complex objects, at the expense of memory usage and inference time.

### Metrics

Measuring the accuracy of anomaly detection tools is a challenging task. The most straightforward approach is to calculate the Recall/Precision/F1 measures for whole images (classified as GOOD or BAD, without looking at the locations of the anomalies). Unfortunately, such an approach is not very reliable for several reasons, including: (1) when we have a limited number of test images (like 20), the scores will vary a lot (like Δ=5%) when just one case changes; (2) very frequently the tools we test will find random false anomalies, but will not find the right ones - and still will get high scores as the image as a whole is considered correctly classified. Thus, it may be tempting to use annotated anomaly regions and calculate the per-pixel scores. However, this would be too fine-grained. For anomaly detection tasks we do not expect the tools to be necessarily very accurate in terms of the location of defects. Individual pixels do not matter much. Instead, we expect that the anomalies are detected "more or less" at the right locations. As a matter of fact, some tools which are not very accurate in general (especially those based on auto-encoders) will produce relatively accurate outlines for the anomalies they find, while the methods based on one-class classification will usually perform better in general, but the outlines they produce will be blurred, too thin or too thick. For these reasons, we introduced an intermediate approach to the calculation of Recall. Instead of using the per-image or the per-pixel methods, we use a per-region one. Here is how we calculate Recall:
• For each anomaly region we check if there is any single pixel in the heatmap above the threshold. If there is, we increase TP (the number of True Positives) by one. Otherwise, we increase FN (the number of False Negatives) by one.
• Then we use the formula:

$$Recall = \frac{TP}{TP + FN}$$

The above method works for Recall, but cannot be directly applied to the calculation of Precision. Thus, for Precision we use a per-pixel approach, but it also comes with its own difficulties. The first issue is that we often find ourselves having a lot of GOOD samples and a very limited set of BAD testing cases. This means unbalanced testing data, which in turn means that the Precision metric is highly affected by the overwhelming quantity of GOOD samples. The more GOOD samples we have (with the same number of BAD samples), the lower the Precision will be. It may actually be very low, often not reflecting the true performance of the tool. For that reason, we need to incorporate balancing into our metrics. A second issue with Precision in real-world projects is that False Positives tend to naturally occur within BAD images, outside of the marked anomaly regions. This happens for several reasons, but is repeatable among different projects. Sometimes if there is a defect, it often means that something was broken and other parts of the object may be slightly affected too, sometimes in a visible way, sometimes with a level of ambiguity. And quite often the objects under inspection simply get affected by the process of artificially introducing defects (like someone touching a piece of fabric and accidentally causing wrinkles that would normally not occur). For this reason, we calculate the per-pixel False Positives only on GOOD images. The complete procedure for the calculation of Precision is:
• We calculate the average pp_TP (the number of per-pixel True Positives) across all BAD testing samples.
• We calculate the average pp_FP (the number of per-pixel False Positives) across all GOOD testing samples.
• Then we use the formula:

$$Precision = \frac{\overline{pp\_TP}}{\overline{pp\_TP} + \overline{pp\_FP}}$$

Finally, we calculate the F1 score in the standard way, for practical reasons neglecting the fact that the Recall and Precision values being combined were calculated in different ways. We believe that this metric is best for practical applications.

## Model Usage

In the Detect Anomalies 1 variant, a model should be loaded with AvsFilter_DL_DetectAnomalies1_Deploy prior to executing it with AvsFilter_DL_DetectAnomalies1. Alternatively, the model can be loaded directly by the AvsFilter_DL_DetectAnomalies1 filter, but it will then require time-consuming initialization in the first program iteration. In the Detect Anomalies 2 variant, a model should be loaded with AvsFilter_DL_DetectAnomalies2_Deploy prior to executing it with AvsFilter_DL_DetectAnomalies2. Alternatively, the model can be loaded directly by the AvsFilter_DL_DetectAnomalies2 filter, but it will then require time-consuming initialization in the first program iteration. Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.

# 3. Feature Detection (segmentation)

This technique is used to detect pixel-wise regions corresponding to defects or – in a general sense – to any image features. A feature here may also be something like the roads on a satellite image or an object part with a characteristic surface pattern. Sometimes it is also called pixel labeling as it assigns a class label to each pixel, but it does not separate instances of objects.

## Training Data

Images used for training can be of different sizes and can have different ROIs defined.
However, it is important to ensure that the scale and the characteristics of the features are consistent with that of the production environment. Each and every feature should be marked on all training images, or the ROI should be limited to include only marked defects. Incompletely or inconsistently marked features are one of the main reasons of poor accuracy. REMEMBER: If you leave even a single piece of some feature not marked, it will be used as a negative sample and this will highly confuse the training process! The marking precision should be adjusted to the application requirements. The more precise marking the better accuracy in the production environment. While marking with low precision it is better to mark features with some excess margin. An example of wood knots marked with low precision. An example of tile cracks marked with high precision. ### Multiple classes of features It is possible to detect many classes of features separately using one model. For example, road and building like in the image below. Different features may overlap but it is usually not recommended. Also, it is not recommended to define more than a few different classes in a single model. On the other hand, if there are two features that may be mutually confusing (e.g. roads and rivers), it is recommended to have separate classes for them and mark them, even if one of the classes is not really needed in the results. Having the confusing feature clearly marked (and not just left as the background), the neural network will focus better on avoiding misclassification. An example of marking two different classes (red roads and yellow buildings) in the one image. ## Patch Size Detect Features is an end-to-end segmentation tool which works best when analysing an image in a medium-sized square window. The size of this window is defined by the Patch Size parameter. It should be not too small, and not too big. Typically much bigger than the size (width or diameter) of the feature itself, but much less than the entire image. In a typical scenario the value of 96 or 128 works quite well. Performance Tip 1: a larger Patch Size increases the training time and requires more GPU memory and more training samples to operate effectively. When Patch Size exceeds 128 pixels and still looks too small, it is worth considering the Downsample option. Performance Tip 2: if the execution time is not satisfying you can set the inOverlap filter input to False. It should speed up the inspection by 10-30% at the expense of less precise results. Examples of Patch Size: too large or too small (red), maybe acceptable (yellow) and good (green). Remember that this is just an example and may vary in other cases. ## Model Usage A model should be loaded with AvsFilter_DL_DetectFeatures_Deploy filter before using AvsFilter_DL_DetectFeatures filter to perform segmentation of features. Alternatively, the model can be loaded directly by AvsFilter_DL_DetectFeatures filter, but it will result in a much longer time of the first iteration. Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors. Parameters: • To limit the area of image analysis you can use inRoi input. • To shorten feature segmentation process you can disable inOverlap option. However, in most cases, it decreases segmentation quality. 
• Feature segmentation results are passed in a form of bitmaps to outHeatmaps output as an array and outFeature1, outFeature2, outFeature3 and outFeature4 as separate images. # 4. Object Classification This technique is used to identify the class of an object within an image or within a specified region. ## The Principle of Operation During the training phase, the object classification tool learns representation of user defined classes. The model uses generalized knowledge gained from samples provided for training, and aims to obtain good separation between the classes. Result of classification after training. After a training process is completed, the user is presented with a confusion matrix. It indicates how well the model separated the user defined classes. It simplifies identification of model accuracy, especially when a large number of samples has been used. Confusion matrix presents correct (diagonal) and incorrect assignment of samples to the user defined classes. ## Training Parameters In addition to the default training parameters (list of parameters available for all Deep Learning algorithms), the AvsFilter_DL_ClassifyObject tool provides a Detail Level parameter which enables control over the level of detail needed for a particular classification task. For majority of cases the default value of 1 is appropriate, but if images of different classes are distinguishable only by small features (e.g. granular materials like flour and salt), increasing value of this parameter may improve classification results. ## Model Usage A model should be loaded with AvsFilter_DL_ClassifyObject_Deploy filter before using AvsFilter_DL_ClassifyObject filter to perform classification. Alternatively, model can be loaded directly by AvsFilter_DL_ClassifyObject filter, but it will result in a much longer time of the first iteration. Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors. Parameters: • To limit the area of image analysis you can use inRoi input. • Classification results are passed to outClassName and outClassIndex outputs. • The score value outScore indicates the confidence of classification. # 5. Instance Segmentation This technique is used to locate, segment and classify one or multiple objects within an image. The result of this technique are lists with elements describing detected objects – their bounding boxes, masks (segmented regions), class IDs, names and membership probabilities. Note that in contrary to feature detection technique, instance segmentation detects individual objects and may be able to separate them even if they touch or overlap. On the other hand, instance segmentation is not an appropriate tool for detecting features like scratches or edges which may possibly have no object-like boundaries. Original image. Visualized instance segmentation results. ## Training Data The training phase requires the user to draw regions corresponding to objects on an image and assign them to classes. Editor for marking objects. ## Training Parameters Instance segmentation training adapts to the data provided by the user and does not require any additional training parameters besides the default ones. ## Model Usage A model should be loaded with AvsFilter_DL_SegmentInstances_Deploy filter before using AvsFilter_DL_SegmentInstances filter to perform classification. 
Alternatively, model can be loaded directly by AvsFilter_DL_SegmentInstances filter, but it will result in a much longer time of the first iteration. Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors. Parameters: • To limit the area of image analysis you can use inRoi input. • To set minimum detection score inMinDetectionScore parameter can be used. • Maximum number of detected objects on a single image can be set with inMaxObjectsCount parameter. By default it is equal to the maximum number of objects in the training data. • Results describing detected objects are passed to following outputs: # 6. Point Location This technique is used to precisely locate and classify key points, characteristic parts and small objects within an image. The result of this technique is a list of predicted point locations with corresponding class predictions and confidence scores. When to use point location instead of instance segmentation: • precise location of key points and distinctive regions with no strict boundaries, • location and classification of objects (possibly very small) when their segmentation masks and bounding boxes are not needed (e.g. in object counting). When to use point location instead of feature detection: • coordinates of key points, centroids of characteristic regions, objects etc. are needed. Original image. Visualized point location results. ## Training Data The training phase requires the user to mark points of appropriate classes on the training images. Editor for marking points. ## Feature Size In the case of the Point Location tool, the Feature Size parameter corresponds to the size of an object or characteristic part. If images contain objects of different scales, it is recommended to use a Feature Size slightly larger than the average object size, although it may require experimenting with different values to achieve the best possible results. Performance tip: a larger feature size increases the training time and needs more memory and training samples to operate effectively. When feature size exceeds 64 pixels and still looks too small, it is worth considering the Downsample option. ## Model Usage A model should be loaded with AvsFilter_DL_LocatePoints_Deploy filter before using AvsFilter_DL_LocatePoints filter to perform point location and classification. Alternatively, model can be loaded directly by AvsFilter_DL_LocatePoints filter, but it will result in a much longer time of the first iteration. Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors. Parameters: • To limit the area of image analysis you can use inRoi input. • To set minimum detection score inMinDetectionScore parameter can be used. • inMinDistanceRatio parameter can be used to set minimum distance between two points to be considered as different. The distance is computed as MinDistanceRatio * FeatureSize. If the value is not enabled, the minimum distance is based on the training data. • To increase detection speed but with potentially slightly worse precision inOverlap can be set to False. • Results describing detected points are passed to following outputs: # 7. Locating objects This technique is used to locate and classify one or multiple objects within an image. The result of this technique is a list of rectangles bounding the predicted objects with corresponding class predictions and confidence scores. 
The tool returns the rectangle region containing the predicted objects and showing their approximate location and orientation , but it doesn't return the precise position of the key points of the object or the segmented region. It is an intermediate solution between the Point Location and the Instance Segmentation. Original image. Visualized object location results. ## Training Data The training phase requires the user to mark rectangles bounding objects of appropriate classes on the training images. Editor for marking objects. ## Model Usage A model should be loaded with AvsFilter_DL_LocateObjects_Deploy filter before using AvsFilter_DL_LocateObjects filter to perform object location and classification. Alternatively, model can be loaded directly by AvsFilter_DL_LocateObjects filter, but it will result in a much longer time of the first iteration. Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors. Parameters: • To limit the area of image analysis you can use inRoi input. • To set minimum detection score inMinDetectionScore parameter can be used. • Results describing detected objects are passed to the object output: outObjects. This technique is used to locate and recognize characters within an image. The result is a list of found characters. This tool uses the pretrained model and cannot be trained. Original image. Visualized results of the read characters. ## Model Usage A model should be loaded with AvsFilter_DL_ReadCharacters_Deploy filter before using AvsFilter_DL_ReadCharacters filter to perform recognition. Alternatively, model can be loaded directly by AvsFilter_DL_ReadCharacters filter, but it will result in a much longer time of the first iteration. Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors. Parameters: • To limit the area of the image analysis and/or to set a text orientation you can use inRoi input. • The average size (in pixels) of characters in the analysed area should be set with inCharHeight parameter. • To improve a performance with a font with exceptionally thin or wide characters you can use inWidthScale input. To some extent, it may also help in a case of characters being very close to each other. • To restrict set of recognized characters use inCharRange parameter. # 9. Troubleshooting Below you will find a list of most common problems. ### 1. Network overfitting A situation when a network loses its ability to generalize over available problems and focuses only on test data. Symptoms: during training, the validation graph stops at one level and training graph continues to rise. Defects on training images are marked very precisely, but defects on new images are marked poorly. A graph characteristic for network overfitting. Causes: • The number of test samples is too small. • Training time is too long. Possible solutions: • Provide more real samples of different objects. • Use more augmentations. • Reduce Network Depth. ### 2. Susceptibility to changes in lighting conditions Symptoms: network is not able to process images properly when even minor changes in lighting occur. Causes: • Samples with variable lighting were not provided. Solution: • Provide more samples with variable lighting. • Enable "Luminance" option for automatic lighting augmentation. ### 3. 
No progress in network training Symptoms: even though the training time is adequate, there is no visible training progress. Causes: • The number of samples is too small or the samples are not variable enough. • Image contrast is too low. • The chosen network architecture is too small. Solution: • Modify lighting to expose defects. Tip: Remember to mark all defects of a given type on the input images or remove images with unmarked defects. Marking only some of the defects of a given type may negatively influence the network learning process. ### 4. Training/sample evaluation is very slow Symptoms: training or sample evaluation takes a long time. Causes: • Resolution of the provided input images is too high. • Fragments that cannot possibly contain defects are also analyzed. Solution: • Enable the "Downsample" option to reduce the image resolution. • Limit the ROI for sample evaluation. • Use a lower Network Depth.
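The overfitting and training-time issues described above are, in general terms, handled by monitoring a validation metric and stopping training once it stops improving. Aurora Vision manages training internally, so the Python sketch below is only a generic illustration of that idea and is not part of the Aurora Vision API; the train_step and validate callables are hypothetical placeholders.

```python
def train_with_early_stopping(train_step, validate, max_epochs=200, patience=10):
    """Generic early-stopping loop (illustrative only).

    train_step() runs one training epoch and returns the training loss.
    validate() returns the loss on held-out validation images.
    """
    best_val = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_loss = train_step()
        val_loss = validate()
        print(f"epoch {epoch}: train={train_loss:.4f} val={val_loss:.4f}")
        if val_loss < best_val:
            best_val = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        # Training loss keeps falling while validation stalls -> overfitting;
        # stop before the gap between the two curves grows further.
        if epochs_without_improvement >= patience:
            print("validation stopped improving; stopping early")
            break
```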
# MATLAB code for WCDMA bit error rate simulation at any data rate over an AWGN channel
#### Filbert
##### Newbie level 1
Looking for MATLAB code for BER simulation in WCDMA over an AWGN channel. The code should compute the BER for a given data rate using QPSK modulation and a pulse-shaping filter.
Status: Not open for further replies.
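No MATLAB is attached to this thread, but the skeleton of such a simulation is straightforward. The following Python sketch is an illustration only — it omits the WCDMA spreading and the pulse-shaping filter the question asks for — and estimates the BER of Gray-coded QPSK over an AWGN channel, comparing it with the theoretical value 0.5·erfc(√(Eb/N0)).

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
n_bits = 200_000                      # must be even: 2 bits per QPSK symbol

for ebn0_db in (0, 2, 4, 6, 8):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    # Gray-coded QPSK with unit symbol energy: one bit on I, one bit on Q
    i = 1 - 2 * bits[0::2]
    q = 1 - 2 * bits[1::2]
    symbols = (i + 1j * q) / np.sqrt(2)
    # Es = 1 and 2 bits/symbol -> Eb = 1/2, so N0 = 1 / (2 * Eb/N0);
    # complex noise has variance N0/2 per dimension
    n0 = 1 / (2 * ebn0)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(i.size)
                               + 1j * rng.standard_normal(i.size))
    r = symbols + noise
    # Hard decisions and bit recovery
    bits_hat = np.empty_like(bits)
    bits_hat[0::2] = (r.real < 0).astype(int)
    bits_hat[1::2] = (r.imag < 0).astype(int)
    ber_sim = np.mean(bits_hat != bits)
    ber_theory = 0.5 * erfc(np.sqrt(ebn0))
    print(f"Eb/N0 = {ebn0_db} dB: simulated {ber_sim:.4e}, theory {ber_theory:.4e}")
```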
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> # 18.1: Rates of Reactions Difficulty Level: At Grade Created by: CK-12 ## Lesson Objectives • Be able to express the rate of a chemical reaction. • Describe the collision theory as it relates to chemical reactions. • Draw and analyze a potential energy diagram for a reaction, including heat of reaction, activation energy and the activated complex. • Describe and explain various factors that influence the rates of reactions. ## Lesson Vocabulary • activated complex • activation energy • catalyst • collision theory • potential energy diagram • reaction rate ### Recalling Prior Knowledge • What is a rate? • How are endothermic and exothermic reactions different from one another? Chemical kinetics is the study of the rates of chemical reactions. In this lesson, you will learn how to express the rate of a chemical reaction and about various factors that influence reaction rates. ## Expressing Reaction Rate Chemical reactions vary widely in the speeds with which they occur. Some reactions occur very quickly. If a lighted match is brought in contact with lighter fluid or another flammable liquid, it erupts into flame instantly and burns fast. Other reactions occur very slowly. A container of milk in the refrigerator will be good to drink for weeks before it begins to turn sour. Millions of years were required for dead plants under Earth’s surface to accumulate and eventually turn into fossil fuels such as coal and oil. Chemists need to be concerned with the rates at which chemical reactions occur. Rate is another word for speed. If a sprinter takes 11.0 s to run a 100 m dash, his rate or speed is given by the distance traveled divided by the time. speed=distancetime=100m11.0s=9.09m/s\begin{align*}\mathrm{speed=\dfrac{distance}{time}=\dfrac{100\:m}{11.0\:s}=9.09\:m/s}\end{align*} The sprinter’s average running rate for the race is 9.09 m/s. We say that it is his average rate because he did not run at that speed for the entire race. At the very beginning of the race, while coming from a standstill, his rate must be slower until he is able to get up to his top speed. His top speed must then be greater than 9.09 m/s so that taken over the entire race, the average ends up at 9.09 m/s. Usain Bolt set the world record for the 100 meter dash in 2009 with a time of 9.58 seconds. His average running rate over the course of this race was 10.4 m/s, or 23.4 mph. Chemical reactions can’t be measured in units of meters per second, as that would not make any sense. A reaction rate is the change in concentration of a reactant or product with time. Suppose that a simple reaction were to take place in which a 1.00 M aqueous solution of substance A was converted to substance B. A(aq) → B(aq) Suppose that after 20.0 seconds, the concentration of A had dropped from 1.00 M to 0.72 M as it was being converted to substance B. We can express the rate of this reaction as the change in concentration of A divided by the time. rate=Δ[A]Δt=[A]final[A]initialΔt\begin{align*}\mathrm{rate=- \dfrac{\Delta [A]}{\Delta t}=- \dfrac{[A]_{final}-[A]_{initial}}{\Delta t}}\end{align*} A bracket around a symbol or formula means the concentration in molarity of that substance. The change in concentration of A is its final concentration minus its initial concentration. Because the concentration of A is decreasing over time, the negative sign is used. 
Thus, the rate for the reaction is positive and the units are molarity per second or M/s. \begin{align*}\mathrm{rate=- \dfrac{0.72 \ M-1.00 \ M}{20.0 \ s}=- \dfrac{-0.28 \ M}{20.0 \ s}=0.014\:M/s}\end{align*} Over the first 20.0 seconds of this reaction, the molarity of A decreases by an average rate of 0.014 M every second. In summary, the rate of a chemical reaction is measured by the change in concentration over time for a reactant or product. The unit of measurement for a reaction rate is molarity per second (M/s). ## Collision Theory The behavior of the reactant atoms, molecules, or ions is responsible for the rates of a given chemical reaction. Collision theory is a set of principles based around the idea that reactant particles form products when they collide with one another, but only when those collisions have enough kinetic energy and the correct orientation to cause a reaction. Particles that lack the necessary kinetic energy may collide, but the particles will simply bounce off one another unchanged. Figure below illustrates the difference. In the first collision, the particles bounce off one another and no rearrangement of atoms has occurred. The second collision occurs with greater kinetic energy, and so the bond between the two red atoms breaks. One red atom bonds with the other molecule as one product, while the single red atom is the other product. The first collision is called an ineffective collision, while the second collision is called an effective collision. (A) An ineffective collision is one that does not result in product formation. (B) An effective collision is one in which chemical bonds are broken and a product is formed. Supplying reactant particles with energy causes the bonds between the atoms to vibrate with a greater frequency. This increase in vibrational energy makes a chemical bond more likely to break and a chemical reaction more likely to occur when those particles collide with other particles. Additionally, more energetic particles have more forceful collisions, which also increases the likelihood that a rearrangement of atoms will take place. The activation energy for a reaction is the minimum energy that colliding particles must have in order to undergo a reaction. Some reactions occur readily at room temperature because most of the reacting particles already have the requisite activation energy at that temperature. Other reactions only occur when heated because the particles do not have enough energy to react unless more is provided by an external source of heat. ### Potential Energy Diagrams The energy changes that occur during a chemical reaction can be shown in a diagram called a potential energy diagram, sometimes called a reaction progress curve. A potential energy diagram shows the change in the potential energy of a system as reactants are converted into products. Figure below shows basic potential energy diagrams for an endothermic (left) and an exothermic (right) reaction. Recall that the enthalpy change (ΔH) is positive for an endothermic reaction and negative for an exothermic reaction. This can be seen in the potential energy diagrams. The total potential energy of the system increases for the endothermic reaction as the system absorbs energy from the surroundings. The total potential energy of the system decreases for the exothermic reaction as the system releases energy to the surroundings.
A potential energy diagram shows the total potential energy of a reacting system as the reaction proceeds. (left) In an endothermic reaction, the energy of the products is greater than the energy of the reactants and ΔH is positive. (right) In an exothermic reaction, the energy of the products is lower than the energy of the reactants and ΔH is negative. The activation energy for a reaction is illustrated in the potential energy diagram by the height of the hill between the reactants and the products. For this reason, the activation energy of a reaction is sometimes referred to as the activation energy barrier. Reacting particles must have enough energy so that when they collide, they can overcome this barrier (Figure below). The activation energy (Ea) of a reaction is the barrier that must be overcome in order for the reactants to become products. (A) The activation energy is low, meaning that the reaction is likely to be fast. (B) The activation energy is high, meaning that the reaction is likely to be slow. As discussed earlier, reactant particles sometimes collide with one other and yet remain unchanged by the collision. Other times, the collision leads to the formation of products. The state of the particles that is in between the reactants and products is called the activated complex. An activated complex is an unstable arrangement of atoms that exists momentarily at the peak of the activation energy barrier. Because of its high energy, the activated complex exists only for an extremely short period of time (about 10−13 s). The activated complex is equally likely to either reform the original reactants or go on to form the products. Figure below shows the formation of a possible activated complex between colliding hydrogen and oxygen molecules. Because of their unstable nature and brief existence, very little is known about the exact structures of most activated complexes. An activated complex is a short-lived state in which the colliding particles are at the peak of the potential energy curve. ## Factors Affecting Reaction Rates By their nature, some reactions occur very quickly, while others are very slow. However, certain changes in the reaction conditions can have an effect on the rate of a given chemical reaction. Collision theory can be utilized to explain these rate effects. ### Concentration Increasing the concentration of one or more of the reacting substances generally increases the reaction rate. When more particles are present in a given amount of space, a greater number of collisions will naturally occur between those particles. Since the rate of a reaction is dependent on the frequency of collisions between the reactants, the rate increases as the concentration increases. ### Pressure When the pressure of a gas is increased, its particles are forced closer together, decreasing the amount of empty space between them. Therefore, an increase in the pressure of a gas is also an increase in the concentration of the gas. For gaseous reactions, an increase in pressure increases the rate of reaction for the same reasons as described above for an increase in concentration. Higher gas pressure leads to a greater frequency of collisions between reacting particles. ### Surface Area A large log placed in a fire will burn relatively slowly. If the same mass of wood were added to the fire in the form of small twigs, they would burn much more quickly. This is because the twigs provide a greater surface area than the log does. 
An increase in the surface area of a reactant increases the rate of a reaction. Surface area is larger when a given amount of a solid is present as smaller particles. A powdered reactant has a greater surface area than the same reactant as a solid chunk. In order to increase the surface area of a substance, it may be ground into smaller particles or dissolved into a liquid. In solution, the dissolved particles are separated from each other and will react more quickly with other reactants. Figure below shows the unfortunate result of high surface area in an unwanted combustion reaction. Small particles of grain dust are very susceptible to rapid reactions with oxygen, which can result in violent explosions and quick-burning fires. This grain elevator in Kansas exploded in 1998. The tiny size of the reacting particles (grain dust) caused the reaction with oxygen in the air to be violently explosive. ### Temperature Raising the temperature of a chemical reaction results in a higher reaction rate. When the reactant particles are heated, they move faster and faster, resulting in a greater frequency of collisions. An even more important effect of the temperature increase is that the collisions occur with a greater force, which means the reactants are more likely to surmount the activation energy barrier and go on to form products. Increasing the temperature of a reaction increases not only the frequency of collisions, but also the percentage of those collisions that are effective, resulting in an increased reaction rate. Paper is certainly a highly combustible material, but paper does not burn at room temperature because the activation energy for the reaction is too high. The vast majority of collisions between oxygen molecules and the paper are ineffective. However, when the paper is heated by the flame from a match, it reaches a point where the molecules now have enough energy to react. The reaction is very exothermic, so the heat released by the initial reaction will provide enough energy to allow the reaction to continue, even if the match is removed. The paper continues to burn rapidly until it is gone. ### Catalysts The rates of some chemical reactions can be increased dramatically by introducing certain other substances into the reaction mixture. Hydrogen peroxide is used as a disinfectant for scrapes and cuts, and it can be found in many medicine cabinets as a 3% aqueous solution. Hydrogen peroxide naturally decomposes to produce water and oxygen gas, but the reaction is very slow. A bottle of hydrogen peroxide will last for several years before it needs to be replaced. However, the addition of just a small amount of manganese(IV) oxide to hydrogen peroxide will cause it to decompose completely in just a matter of minutes. A catalyst is a substance that increases the rate of a chemical reaction without being used up in the reaction. It accomplishes this task by providing an alternate reaction pathway that has a lower activation energy barrier. After the reaction occurs, a catalyst returns to its original state, so catalysts can be used over and over again. Because it is neither a reactant nor a product, a catalyst is shown in a chemical equation by being written above the yield arrow. \begin{align*}\mathrm{2H_2O_2(aq) \overset{MnO_2}{\rightarrow} 2H_2O(l)+O_2(g)}\end{align*} A catalyst works by changing the mechanism of the reaction, which can be thought of as the specific set of smaller steps by which the reactants become products.
Reaction mechanisms will be discussed later in this chapter. For now, the important point is that the use of a catalyst lowers the overall activation energy of the reaction (Figure below). With a lower activation energy barrier, a greater percentage of reactant molecules are able to have effective collisions, and the reaction rate increases. The addition of a catalyst to a reaction lowers the activation energy, increasing the rate of the reaction. The activation energy of the uncatalyzed reaction is shown by Ea, while the catalyzed reaction is shown by Ea’. The heat of reaction (ΔH) is unchanged by the presence of the catalyst. Catalysts are extremely important parts of many chemical reactions. Enzymes in your body act as nature’s catalysts, allowing important biochemical reactions to occur at reasonable rates. Chemical companies constantly search for new and better catalysts to make reactions go faster and thus make the company more profitable. ## Lesson Summary • The rate of a chemical reaction is the change in the concentration of a reactant or product as a function of time. • Reactions occur when reactant particles undergo effective collisions. Collision theory outlines the conditions which need to be met for a reaction to occur. • In order for a collision to lead to the formation of a product, the colliding particles must have enough energy to surmount the activation energy barrier. • The progress of a chemical reaction can be shown with a potential energy diagram. • The rate of a chemical reaction can be increased by increasing the concentration, gas pressure, or surface area of the reactants, increasing the temperature of the reaction, or by the addition of a catalyst. ## Lesson Review Questions ### Reviewing Concepts 1. In what unit is the rate of a chemical reaction typically expressed? 2. Does every collision between reacting particles lead to the formation of products? Explain. 3. What two conditions must be met in order for a collision to be effective? 4. Explain why the activation energy of a reaction is sometimes referred to as a barrier. 5. Why is it difficult to study activated complexes? 6. Reaction rates can be affected by changes in concentration, pressure, or surface area. Use collision theory to explain the similarities between the effects that each of these factors has on reaction rate. 7. What is the effect of a catalyst on the rate of a reaction? Explain how the presence of a catalyst affects the activation energy of a reaction. ### Problems 1. A 2.50 M solution undergoes a chemical reaction. After 3.00 minutes, the concentration of the solution is 2.15 M. What is the rate of the reaction in M/s? 2. Zinc metal reacts with hydrochloric acid. Which of the following would result in the highest rate of reaction? 1. A solid piece of zinc in 1 M HCl 2. A solid piece of zinc in 3 M HCl 3. Zinc powder in 1 M HCl 4. Zinc powder in 3 M HCl 3. Use the potential energy diagram below to answer the following questions. 1. What is the potential energy of the reactants? 2. What is the potential energy of the products? 3. What is the heat of reaction (ΔH)? 4. What is the potential energy of the activated complex? 5. What is the activation energy for the reaction? 6. Is the reaction endothermic or exothermic? 7. Which of the values in a-e above would be changed by the use of a catalyst in the reaction? ## Points to Consider Rate laws provide a quantitative relationship between the rate of a reaction and the concentrations of its reactants. 
• How can experiments be designed in order to determine the rate law for a reaction? • What is a specific rate constant?
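As a short programming aside (not part of the original CK-12 lesson), the average-rate definition from the Expressing Reaction Rate section is easy to check numerically. The sketch below reproduces the worked A(aq) → B(aq) example, in which [A] falls from 1.00 M to 0.72 M over 20.0 s.

```python
def average_rate(c_initial, c_final, dt):
    """Average rate of disappearance of a reactant, -Δ[A]/Δt, in M/s."""
    return -(c_final - c_initial) / dt

# Worked example from this lesson: [A] drops from 1.00 M to 0.72 M in 20.0 s.
print(average_rate(1.00, 0.72, 20.0))   # 0.014 M/s
```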
# Point-like IRFs
Point-like IRFs have classically been used within the IACT community. Each IRF component is calculated from the events surviving an energy-dependent directional cut around the assumed source position. The format of each point-like IRF component is analogous to the ones already described within the full enclosure IRF specifications (see Full-enclosure IRFs), with certain differences listed in this section. Any point-like IRF component should contain the header keyword: • HDUCLAS3 = POINT-LIKE In addition to the IRFs, the actual directional cut applied to the data needs to be stored. This cut is allowed to be constant or variable along several axes, and the storage format differs between the two cases. In case the angular cut is constant along the energy and FoV axes, an additional header keyword may be added to the IRF HDU: • RAD_MAX type: float, unit: deg • Radius of the directional cut applied to calculate the IRF, in degrees. If this keyword is present, the science tools should assume the directional cut of this point-like IRF is constant over all axes. In case the angular cut is variable along any axis (reconstructed energy or FoV), an additional HDU is required to store these values. Note that any DL3 file with a point-like IRF (with HDUCLAS3 = POINT-LIKE) that has no RAD_MAX keyword within the HDU metadata should have this additional HDU. In case the directional cut is variable with energy or the FoV, point-like IRFs require an additional binary table. It stores the values of RAD_MAX as a function of the reconstructed energy and FoV following the BINTABLE HDU format. The RAD_MAX_2D format contains a 2-dimensional array of directional cut values, stored in the BINTABLE HDU format. Required columns: • ENERG_LO, ENERG_HI – ndim: 1, unit: TeV • Reconstructed energy axis • THETA_LO, THETA_HI – ndim: 1, unit: deg • Field of view offset axis • RAD_MAX – ndim: 2, unit: deg • Radius of the directional cut applied to calculate the IRF, in degrees. Recommended axis order: ENERGY, THETA, RAD_MAX Header keywords: • HDUDOC = 'https://github.com/open-gamma-ray-astro/gamma-astro-data-formats' • HDUVERS = '0.2' • HDUCLASS = 'GADF' • HDUCLAS1 = 'RESPONSE' • HDUCLAS2 = 'RAD_MAX' • HDUCLAS3 = 'POINT-LIKE' • HDUCLAS4 = 'RAD_MAX_2D' Example data file: here.
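As an illustration only, the following Python sketch shows one way a RAD_MAX_2D HDU with the columns and keywords listed above could be written with astropy. The bin edges, cut values and extension name are made-up placeholders; the example data file linked above remains the authoritative reference.

```python
import numpy as np
from astropy.io import fits

# Hypothetical binning: 3 energy bins, 2 offset bins, a constant 0.1 deg cut.
e_lo = np.array([[0.1, 1.0, 10.0]])          # TeV; one table row holds the vector
e_hi = np.array([[1.0, 10.0, 100.0]])
th_lo = np.array([[0.0, 1.0]])               # deg
th_hi = np.array([[1.0, 2.0]])
rad_max = np.full((1, 2, 3), 0.1)            # deg, shape (1, n_theta, n_energy)

cols = [
    fits.Column(name="ENERG_LO", format="3E", unit="TeV", array=e_lo),
    fits.Column(name="ENERG_HI", format="3E", unit="TeV", array=e_hi),
    fits.Column(name="THETA_LO", format="2E", unit="deg", array=th_lo),
    fits.Column(name="THETA_HI", format="2E", unit="deg", array=th_hi),
    fits.Column(name="RAD_MAX", format="6E", unit="deg", dim="(3,2)", array=rad_max),
]
# Extension name chosen arbitrarily for this sketch.
hdu = fits.BinTableHDU.from_columns(cols, name="RAD_MAX")
hdu.header["HDUDOC"] = "https://github.com/open-gamma-ray-astro/gamma-astro-data-formats"
hdu.header["HDUVERS"] = "0.2"
hdu.header["HDUCLASS"] = "GADF"
hdu.header["HDUCLAS1"] = "RESPONSE"
hdu.header["HDUCLAS2"] = "RAD_MAX"
hdu.header["HDUCLAS3"] = "POINT-LIKE"
hdu.header["HDUCLAS4"] = "RAD_MAX_2D"
hdu.writeto("rad_max_2d.fits", overwrite=True)
```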
Outlook: Fortistar Sustainable Solutions Corp. Unit is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Wait until speculative trend diminishes Time series to forecast n: 29 Jan 2023 for (n+8 weeks) Methodology : Active Learning (ML) ## Abstract Fortistar Sustainable Solutions Corp. Unit prediction model is evaluated with Active Learning (ML) and Pearson Correlation1,2,3,4 and it is concluded that the FSSIU stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ## Key Points 1. Reaction Function 2. Should I buy stocks now or wait amid such uncertainty? 3. What is prediction in deep learning? ## FSSIU Target Price Prediction Modeling Methodology We consider Fortistar Sustainable Solutions Corp. Unit Decision Process with Active Learning (ML) where A is the set of discrete actions of FSSIU stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Pearson Correlation)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Active Learning (ML)) X S(n):→ (n+8 weeks) $∑ i = 1 n a i$ n:Time series to forecast p:Price signals of FSSIU stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## FSSIU Stock Forecast (Buy or Sell) for (n+8 weeks) Sample Set: Neural Network Stock/Index: FSSIU Fortistar Sustainable Solutions Corp. Unit Time series to forecast n: 29 Jan 2023 for (n+8 weeks) According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for Fortistar Sustainable Solutions Corp. Unit 1. If an entity previously accounted for a derivative liability that is linked to, and must be settled by, delivery of an equity instrument that does not have a quoted price in an active market for an identical instrument (ie a Level 1 input) at cost in accordance with IAS 39, it shall measure that derivative liability at fair value at the date of initial application. Any difference between the previous carrying amount and the fair value shall be recognised in the opening retained earnings of the reporting period that includes the date of initial application. 2. An entity is not required to incorporate forecasts of future conditions over the entire expected life of a financial instrument. The degree of judgement that is required to estimate expected credit losses depends on the availability of detailed information. As the forecast horizon increases, the availability of detailed information decreases and the degree of judgement required to estimate expected credit losses increases. 
The estimate of expected credit losses does not require a detailed estimate for periods that are far in the future—for such periods, an entity may extrapolate projections from available, detailed information. 3. The accounting for the time value of options in accordance with paragraph 6.5.15 applies only to the extent that the time value relates to the hedged item (aligned time value). The time value of an option relates to the hedged item if the critical terms of the option (such as the nominal amount, life and underlying) are aligned with the hedged item. Hence, if the critical terms of the option and the hedged item are not fully aligned, an entity shall determine the aligned time value, ie how much of the time value included in the premium (actual time value) relates to the hedged item (and therefore should be treated in accordance with paragraph 6.5.15). An entity determines the aligned time value using the valuation of the option that would have critical terms that perfectly match the hedged item. 4. When an entity designates a financial liability as at fair value through profit or loss, it must determine whether presenting in other comprehensive income the effects of changes in the liability's credit risk would create or enlarge an accounting mismatch in profit or loss. An accounting mismatch would be created or enlarged if presenting the effects of changes in the liability's credit risk in other comprehensive income would result in a greater mismatch in profit or loss than if those amounts were presented in profit or loss *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Fortistar Sustainable Solutions Corp. Unit is assigned short-term Ba1 & long-term Ba1 estimated rating. Fortistar Sustainable Solutions Corp. Unit prediction model is evaluated with Active Learning (ML) and Pearson Correlation1,2,3,4 and it is concluded that the FSSIU stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Wait until speculative trend diminishes ### FSSIU Fortistar Sustainable Solutions Corp. Unit Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementBaa2Baa2 Balance SheetBa1Caa2 Leverage RatiosCaa2B2 Cash FlowBaa2Baa2 Rates of Return and ProfitabilityBaa2B3 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 90 out of 100 with 756 signals. ## References 1. Farrell MH, Liang T, Misra S. 2018. Deep neural networks for estimation and inference: application to causal effects and other semiparametric estimands. arXiv:1809.09953 [econ.EM] 2. Chen X. 2007. Large sample sieve estimation of semi-nonparametric models. In Handbook of Econometrics, Vol. 6B, ed. JJ Heckman, EE Learner, pp. 5549–632. Amsterdam: Elsevier 3. J. Baxter and P. Bartlett. 
Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Re- search, 15:319–350, 2001. 4. E. Altman. Constrained Markov decision processes, volume 7. CRC Press, 1999 5. N. B ̈auerle and J. Ott. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research, 74(3):361–379, 2011 6. Semenova V, Goldman M, Chernozhukov V, Taddy M. 2018. Orthogonal ML for demand estimation: high dimensional causal inference in dynamic panels. arXiv:1712.09988 [stat.ML] 7. D. Bertsekas. Min common/max crossing duality: A geometric view of conjugacy in convex optimization. Lab. for Information and Decision Systems, MIT, Tech. Rep. Report LIDS-P-2796, 2009 Frequently Asked QuestionsQ: What is the prediction methodology for FSSIU stock? A: FSSIU stock prediction methodology: We evaluate the prediction models Active Learning (ML) and Pearson Correlation Q: Is FSSIU stock a buy or sell? A: The dominant strategy among neural network is to Wait until speculative trend diminishes FSSIU Stock. Q: Is Fortistar Sustainable Solutions Corp. Unit stock a good investment? A: The consensus rating for Fortistar Sustainable Solutions Corp. Unit is Wait until speculative trend diminishes and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of FSSIU stock? A: The consensus rating for FSSIU is Wait until speculative trend diminishes. Q: What is the prediction period for FSSIU stock? A: The prediction period for FSSIU is (n+8 weeks) ## People also ask What are the top stocks to invest in right now?
# Speed = Distance / Time. HELP Basic grade school math. 1. May 7, 2007 ### ClA Speed = Distance / Time. HELP!! Basic grade school math. Hi, I am not super good in math so I really need some help with this math problem. I just got an unfair speeding ticket and I need to prove that the officer's speed calculation was incorrect. I am pretty sure this will be very simple for the rest of you. I was traveling 65 miles per hour for 3 miles, this took 2 mins and 46 seconds as calculated by this website. http://www.gazza.co.nz/distance.html The officer was from a stop and he started chasing after me claiming that I was speeding (limit was 65mph). He used his own speed to determine what my speed was. Now, if he went from a complete stop, and drove 3 miles in 2 mins and 46 seconds, how fast was he travelling when he caught up to me?? Please help. Maybe you can show the calculation as well so I can show the judge. Thank you very much. Last edited: May 7, 2007 2. May 7, 2007 ### hage567 How much over the speed limit did the police officer say you were going? 3. May 7, 2007 ### ClA He put down 75mph. Which I believe was the speed he used to catch up to me since he came from a complete stop to catch up to my car. He used that speed to determine that that was the speed I was travelling at, which does not make sense. 4. May 8, 2007 ### cepheid Staff Emeritus Ok, first of all, he must have had good reason to suspect you were speeding before he took off after you, otherwise, why would he have started chasing you in the first place? Are you sure he didn't have photoradar or something? Also, how could you possibly know that he chased you for exactly 3 miles before he caught up with you? Despite being chased by a police car, did you suddenly have the presence of mind to reset your odometer at the precise moment that he started moving? I'm dubious. Also, if you saw a police cruiser chasing you, sirens blaring, why didn't you just slow down and/or pull over like any sane person rather than continuing to drive at a constant velocity, ostensibly for 3 miles, waiting for him to catch you? Especially if it took nearly three minutes, which is a long time! 5. May 8, 2007 ### cepheid Staff Emeritus For what it's worth (I'm bored right now), taking everything at face value (I'll use x for distance) I'll assume the cop accelerated uniformly from rest. The distance the cop travelled as a function of time, assuming he started at rest is: $$x(t) = \frac{1}{2} a t^2$$ a is the acceleration t = 2 min 46 s = 2.7667 min = 0.046111 hr Therefore: x(0.046111 hr) = 3 mi = 1/2 a(0.046111 hr)^2 $$a = \frac{6 \ \textrm{mi}}{t^2} = \frac{6 \ \textrm{mi}}{(0.046111 \ \textrm{hr})^2} = 2821 \ \frac{\textrm{mi/h}}{\textrm{h}}$$ By definition, the speed, or magnitude of the velocity, v is given by: v = at = 2821 mi/h^2 * 0.046111 hr = 130 mi/h So, if any of what you have said is accurate (and I have huge doubts as I outlined in my previous post), then in the ideal case that he just kept going faster and faster, the cop was going way faster than 75 when he caught up to you. Of course, in real life, he would decelerate upon his approach, so we really have no easy way of knowing exactly how fast he was actually going, and I highly doubt that it matters, because I'm sure he exercised reasonable judgement in estimating your speed. P.S. I wouldn't present this calculation to a judge if I were you. Last edited: May 8, 2007 6. May 8, 2007 ### ClA This is off topic but if it interest you, i'll let you know. 
The officer claimed that he used radar when I drove pass him, which in fact he didn't (as he did not mark the "radar box" on the ticket). He said he followed me for 3 miles which was a lie. From a stop position, he gained speed and got up to my car for no more than 5 seconds, then pulled me over with his lights. He did not chase me for 3 miles. He pulled me over 3 miles from where he began from the stopped position. All the questions you asked above can be answered with this: I had a video in my car and I recorded the whole thing including the conversation we had when he pulled me over. The 3 miles etc etc everything is based on the officer's own words, which at times contradicted himself. All was caught on camera. 7. May 8, 2007 ### ClA Thanks for that. Yes, based on the results of your calculations, I don't think this will help my case much. But your calculation seem correct. The officer was coming from behind pretty fast when he came up to me. I would say 90 to 100mph. He got up behind me for 5 seconds and started flashing his lights. At first he said he got me with radar, then he said he didn't use radar......then again he said he did used radar.........and at the end.....he said he used the bumper pacing method. Which all contradicted itself. All this was caught on tape. I just wanted additional prove of his speed when he come up upon me as evident that using his speed was not a good way to determine mine. But I guess 130mph was a bit too much. 8. May 8, 2007 ### rcgldr Depends on his rate of acceleration. Assume he accelerates from 0 to 80mph in about 1000 feet, taking about 15 seconds, which is fairly slow. He's averaging 45mph for 15 seconds. During this time you travel 1430 feet, giving you a 430 foot lead. He gains 22 feet / second on you, so it takes 19.5 seconds to catch up. In this time, you've traveled another 1865 feet. It took him 34.5 seconds to catch up which is 3289 feet, about 0.62 mile. However you can forget the math. Cops sometimes make not so honest mistakes. I get the feeling that some will just hand out one bogus ticket every now and then, maybe when they are in a bad mood, since there's little chance they'll ever get caught. Worse yet, you could be driving the same type of car that the guy the cop's wife is cheating with. There's no system in place to allow the public to complain about bogus tickets, so that cops with an unusually high rate of complaints could be monitored. My very first ticket was for going in excess of 45mph in a 25mph zone, it was a cop on a motorcycle, and his max speed was 50mph (a fixed needle moved by the speedometer to the cop's max speed, this was the late 1960's). I was driving a moped, barely going 30mph, up a long upgrade, at the border between two cities where the speed transitioned from 25mph to 35mph. I was stopped 1 mile past the border where the speed limit was 35mph. There were cars in front of me pulling away. Other "bogus" tickets. Back in the old days of the 55mph speed limit. Cop is in #4 lane ahead of me doing about 45mph. It's night time. I'm driving a motorcycle in the #1 lane, and the only reference point he can see is my headlight. The cop sees my headlight in his mirror and estimates my distance at 300ft, uses a stopwatch to "clock" me, pulls me over and issues me a ticket for doing 60mph in a 55mph zone. Another one from the 55mph days. I'm doing about 58mph in #1 lane, again at night, constantly pulling over into #2 lane to allow faster traffic by. 
A cop gets onto the freeway, gets behind me, and I pull over to get out of the way, he turns on his lights, so I continue going right past #4 lane and stop on shoulder, before the next off ramp. I get another 60mph in 55mph zone. Total distance between on ramp and off ramp was less than 1/4 mile. Obviously there was no attempt to actually measure my speed. Most incredible ticket ever? A V-Twin Honda motorcycle RC-51 (130hp) clocked by aircraft at a claimed 205mph. It has a top speed around 165mph, under ideal conditions, and the reality is that 160mph would be unreachable. The aircraft was clocking two different motorcycles, and it's pretty clear that he either switched them or had an issue starting and stopping a stop watch. However the cop refused to admit he made a mistake. To reach 205mph, the motorcycle would require more than double the power at about 273hp. Then again, the ticket was $215. I live in California, and as long as you don't get a ticket more than once every 18 months, you can go to traffic school and not have the ticket show up on your record, although you still pay the fine. The few times I've been in traffic school, it became pretty clear that a small percentage of the people there were truly innocent. You stand little chance of winning a case in court if you claim to be a victim of a cops mistake (honest or not). Considering all the other bad stuff a small percentage of cops do, issuing bad tickets is very low on the list of priorities in the system. Last edited: May 8, 2007 9. May 8, 2007 ### ClA Thanks. I am in California too. You'd be surprised how many bogus tickets are handed out each day. I beat 3 of my last 4 tickets already. Cops have quotas to fill and that is actually true. My friend just got a ticket today for going downhill in neutral gear. http://www.dmv.ca.gov/pubs/vctop/d11/vc21710.htm He drove an 18 wheeler truck where he sat about 6 feet off the gound. How the hack can a cop know what gear he was in?? Its not like he can see it. My friend will fight it and he will win. Burden of proof is on the cop's side, there is no way he can prove he was going downhill in neutral. We both laughed at it. Last edited: May 8, 2007 10. May 8, 2007 ### ClA My gosh you for real?? 60 in a 55? Over by 5 miles per hour? I doubt an officer can reasonably prove the 5 miles difference........impossible. Yes, I figure the rate of acceleration should play a part in this calculation. Which is where I got stuck. I don't think this will matter in court anyways........but I know people that've beat tickets using equations to prove the cop's argument was incorrect. Last edited: May 8, 2007 11. May 8, 2007 ### ClA haha sorry I laughed at this. Oh my gosh that was complete bull crap. Seriously, if you fought it, that would be an easy win for you. I beat 3 out of 4 of my last tickets. You should really check this site out......fighting tickets by mail (California only). No need to show up in court. http://ticketassassin.com/docs112383/forms.html 12. May 9, 2007 ### rcgldr Nothing in that statue about depressing the clutch. How could they prove that your friend wasn't depressing the clutch as opposed to being in neutral? During the Carter and post Carter 55mph era, the feds were really pushing states to enforce 55mph by threatening to take way highway funding. Early on, what was accepted as proof of speeding was really bad. As time went on, eventually most of these methods got invalidated by lawyers and the courts. 
What was eliminated: Estimating speed by simple observation from a fixed point, some officers were claiming they could accurately estimate a cars speed simply by observing a car going by. When actually tested, it turned out that their estimates were affected by the size of the car, and were quite off with smaller vehicles like motorcycles (they overestimated the speed of small vehicles, underestimated the speed of large vehicles, like trucks, due to perpective which there is a thread in this forum that discusses this). Estimating speed of a car approaching a cop vehicle from behind (unless it was to simply state that the vehicle caught up), since it requires accurate estimate of distance behind, and most drivers "slowed" down and never really caught up, unless the cop also slowed down. In California, the usage of a hand held stop watch for any speed estimate, including aircraft. Aircraft patrolling is still allowed, but all they can do is radio for a ground based policeman to use a radar gun to verify the speed of the car. Tickets for going just 5 mph over the speed limit are extremely rare, and probably limited to a cases like going 30mph in a school zone with kids present. This only existed for the first 2 years of the 55mph speed limit change. Eventually just about every method other than radar was shown to not be accurate enough to gage this difference, and the tickets were getting thrown out of court. In my case, the judge changed my fine to$10, since I got the ticket in a city 40 miles from my house and couldn't take the time to fight it in court. In this particular city and time the judges weren't allowed to dimiss tickest on their own without a trial. Note that the 55mph speed limit wasn't truly nation wide. The federal government can't set speed limits directly, but what they did do was threaten to withold federal highway funding if states didn't change and enforce the 55mph speed limit. There were a few states that got little or no federal funding, so they never implemented a 55mph speed limit. If I remember correctly, Montana had no speed limit at all (just a basic speed law), until a few years ago. 100mph on a rural road with no traffic seemed to be the "real" limit, as tickets for 100mph were getting thrown out for not exceeding the basic speed law (speed it was safe to travel at). Arizona had and has the highest posted speed limits, 85mph, mostly for the interstate freeways. Motanta has a 100mph limit now, but it's not posted. But in your case the cop had almost 2 miles to "clock" you. In the cops mind, you probably slowed down once you saw him so he went by his initial estimate of your speed, long before he got close enough to truly pace your speed. Since it's not required for the cop to state how long (how many seconds) he actually paced you at a reasonable distance to verify your speed on the ticket, the difference between reality and what the cop states in court can vary quite a bit. Many police cars have dashboard cameras, and there are radar systems that work with moving vehicles. One way to eliminate the lies would be to include telemetry in the recorded video from the dashboard camera (the cops speed, and the victim's speed if possible), and require this as proof in court. Currently the only video requirement is for the red-light cameras. My wife got one of these, because it turns out that these systems estimate that a driver is going to run a red light and start video taping when this occurs, then camera shots are taken. 
In my wifes case, she was making a right turn on a red light and she stopped, but it turned out that the sensors were set a bit too far back of the limit line. Turns out that only about 1/3rd of the stop light tickets ever get passed the review stage, and in most cities all are reviewed before mailing a ticket out. This particular city is a bit lazy/greedy and only review tapes when a victim schedules a review, and the victim has to show up for the review, a bit of a hassle but otherwise a similar procedure. The reviewing officer can dimiss the ticket without requiring a judge. Last edited: May 9, 2007
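For what it's worth, the back-of-the-envelope catch-up estimate from post #8 is easy to script. The Python sketch below uses the same assumed numbers (driver holding a constant 65 mph, cop covering 1000 ft in 15 s while accelerating to 80 mph and then holding that speed) and reproduces the roughly 0.6 mile figure.

```python
MPH_TO_FTS = 5280 / 3600            # mph -> feet per second

driver_v = 65 * MPH_TO_FTS          # constant speed of the car being followed
cop_top_v = 80 * MPH_TO_FTS         # cop's cruising speed after accelerating
accel_time = 15.0                   # seconds spent accelerating 0 -> 80 mph
accel_dist = 1000.0                 # feet covered while accelerating (assumed)

# Lead built up while the cop is still accelerating
lead = driver_v * accel_time - accel_dist          # ~430 ft
closing_speed = cop_top_v - driver_v               # ~22 ft/s
catch_time = accel_time + lead / closing_speed     # ~34.5 s total
driver_dist = driver_v * catch_time                # ~3290 ft

print(f"lead after acceleration: {lead:.0f} ft")
print(f"time to catch up: {catch_time:.1f} s")
print(f"distance covered by the driver: {driver_dist:.0f} ft "
      f"({driver_dist / 5280:.2f} mi)")
```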
# Q: Is there a formula for how much water will splash, most importantly how high, and in what direction from the toilet bowl when you *ehem* take a dump in it ? Physicist: If it weren’t for imponderables like this, we’d have finished science years ago.  During an “impact event” water generally moves outward to the sides.  What you really need to worry about is the dreaded “water spike”. Ejecta, spike, ejecta, and spike.  (The artwork in the upper-left is by Chrstara, Copyrighted ©  http://abstract.desktopnexus.com/wallpaper/27127/. These pictures may not be reproduced, copied, edited, published, or uploaded to any Site(s) including Blogs without his written permission.) The physics behind water spikes is remarkably complicated and only recently has their formation been accurately described and simulated.  So, like any physicist presented with an insurmountable problem, I’ll make some unreasonable assumptions and cheat (an experimentalist would then drink and make prank calls). One of the classic cheats is making a list of everything you think your equation should depend on, and then balance the units.  Based only on the vague hope that water spikes scale (same shape regardless of size), the energy E of a spike that rises to height H should be  $E \propto gHM_{spike} \propto gH(\rho H^3) = g \rho H^4$, where g is the acceleration due to gravity, $\rho$ is the density of water, and “$\propto$” means “proportional to”.  The energy of a falling *ehem* object is $E = gdM_{"object"}$, where d is the drop height.  These energies should be proportional.  Seems reasonable…  So solving for H: $H=c\left(\frac{M_{"object"}}{\rho}d\right)^{\frac{1}{4}}$ Here c is some constant that would need to be found experimentally.  The graph of $x^{\frac{1}{4}}$ increases sharply from zero, and then sorta levels off.  So don’t expect to have to much influence on the height of the spike given that this already shot-in-the-dark equation is not strongly influenced by small changes in the variables away from zero. Your best bet is to avoid generating the spike in the first place.  Water spikes are the result of a symmetric air-cavity collapse just below the surface.  If the cavity isn’t symmetric, you shouldn’t get a spike.  So as you make your Deposit, make sure to wave your butt around.  Please let us know how it works out. This entry was posted in -- By the Physicist, Paranoia, Physics. Bookmark the permalink. ### 3 Responses to Q: Is there a formula for how much water will splash, most importantly how high, and in what direction from the toilet bowl when you *ehem* take a dump in it ? 1. christara says: after seeing so much of my artwork get ripped by so many it makes you feel mad. has with much of my art work i make for personal use for people to use as desktop wallpapers. Which is its intended use. or on the occasions some of my art sells thru my deviant art account. http://christara.deviantart.com/ obviously if others have paid for a piece of my art so they can use it elsewhere then you may understand why i get vexed over seeing it in place’s without any recognition to myself the artist. or if that piece was sold then the person who purchased it would be mad at me,? now respectively i don’t mind a piece of my work being used in a manner than what it was intended for so if you wish to keep it on your page. 
then please do so but has long has there’s link to let people link referring back to my work http://my.desktopnexus.com/christara/ or http://abstract.desktopnexus.com/wallpaper/27127/ and a short piece describing that its copyrighted and my name attributed to it. just a small piece under the artwork saying
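To get a rough feel for the H = c(Md/ρ)^(1/4) scaling derived in the post, here is a quick numeric sketch with made-up inputs; the constant c is unknown and would have to be measured.

```python
# Numeric check of the H = c * (M*d/rho)**(1/4) scaling from the post.
# The values below are illustrative guesses; c must be found experimentally.
rho = 1000.0      # kg/m^3, water
M = 0.3           # kg, the dropped "object"
for d in (0.1, 0.2, 0.4):          # drop height in metres
    scale = (M * d / rho) ** 0.25  # spike height divided by the unknown c
    print(f"d = {d:0.1f} m -> H/c = {scale:.3f} m")
```

Since H scales as d^(1/4), doubling the drop height raises the spike height by only a factor of 2^(1/4) ≈ 1.19, consistent with the post's point that small changes in the variables have little effect.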
# CERN Published Articles

2017-08-14 14:59
Numerical simulations of energy deposition caused by 50 MeV-50 TeV proton beams in copper and graphite targets / Nie, Y (CERN) ; Schmidt, R (CERN) ; Chetvertkova, V (GSI Helmholtzzentrum für Schwerionenforschung) ; Rosell-Tarrago, G (University of Barcelona) ; Burkart, F (CERN) ; Wollmann, D (CERN)
The conceptual design of the Future Circular Collider (FCC) is being carried out actively in an international collaboration hosted by CERN, for the post-Large Hadron Collider (LHC) era. The target center-of-mass energy of proton-proton collisions for the FCC is 100 TeV, nearly an order of magnitude higher than for the LHC. [...]
CERN-ACC-2017-0054. Geneva : CERN, 2017. 17 p. Published in: Phys. Rev. Spec. Top. Accel. Beams 20 (2017) 081001. Fulltext: PDF.

2017-08-12 06:40
Quarkonia and heavy-flavour results from ALICE / Gagliardi, M (INFN, Turin) /ALICE
Quarkonia and heavy flavour are important probes of the hot and dense QCD medium formed in high-energy heavy-ion collisions, through the modification of their yields and kinematical distributions. Measurements of their production in proton-nucleus collisions are crucial for the interpretation of heavy-ion results, as they allow one to study cold nuclear matter effects. [...]
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 271-274.

2017-08-12 06:40
Soft physics and collective phenomena in p-Pb collisions from ALICE / Leogrande, E (Utrecht, Astron. Inst.) /ALICE
New ALICE results concerning soft physics and collective phenomena in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV are briefly discussed. First, the particle-multiplicity dependence of the flow coefficients $v_2$ and $v_3$ derived via multiparticle cumulants is reviewed. [...]
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 259-262.

2017-08-12 06:40
First observation and study of $K^\pm \to \pi^\pm \pi^0 e^+ e^-$ decay at the NA48/2 experiment / Misheva, M H (Dubna, JINR) ; Anzivino, G ; Arcidiacono, R ; Balev, S ; Batley, J R ; Behler, M ; Bifani, S ; Biino, C ; Bizzeti, A ; Bloch-Devaux, B et al.
A sample of almost 2000 $K^\pm \to \pi^\pm \pi^0 e^+ e^-$ rare decays with a background contamination below 3% is observed for the first time by the NA48/2 experiment at the CERN SPS. The preliminary branching ratio in the full kinematic region is obtained to be $BR(K^\pm \to \pi^\pm \pi^0 e^+ e^-) = (4.06 \pm 0.17) \times 10^{-6}$ by analyzing the data set recorded in the 3-month NA48/2 run during 2003. [...]
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 237-240.

2017-08-12 06:40
Searches for Dark Matter and Extra Dimensions at the LHC / Demiragli, Z (MIT) /CMS ; ATLAS
The CMS and ATLAS collaborations at the Large Hadron Collider have collected approximately 20 fb$^{-1}$ of pp collision data at a center-of-mass energy of 8 TeV and have performed targeted searches for Dark Matter and Extra Dimensions. No significant deviations from the standard model prediction have been observed. [...]
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 207-210.

2017-08-12 06:40
Searches for other non-SUSY new phenomena at the LHC / Kajomovitz, E (Duke U.) /ATLAS ; CMS
The ATLAS and CMS collaborations collected datasets of approximately 20 fb$^{-1}$ of $pp$ collisions at $\sqrt{s}$ = 8 TeV produced by the LHC during the Run-1 period. The collaborations performed a thorough analysis of these datasets searching for physics phenomena beyond the Standard Model. [...]
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 199-202.

2017-08-12 06:40
Strong Production SUSY Searches at ATLAS and CMS / Marshall, Z L (LBL, Berkeley) /ATLAS ; CMS
The results of searches for strongly-produced supersymmetry at the Large Hadron Collider by the ATLAS and CMS collaborations are presented. Several of the historically strongest zero- and one-lepton final state searches have been updated to include multi-bin fits and combinations. [...]
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 181-184.

2017-08-12 06:40
Assuming Regge trajectories in holographic QCD: from OPE to Chiral Perturbation Theory / Cappiello, Luigi (Naples U. ; INFN, Naples) ; D'Ambrosio, Giancarlo (INFN, Naples ; CERN) ; Greynat, David (Naples U. ; INFN, Naples)
The Soft Wall model in holographic QCD has Regge trajectories but the wrong operator product expansion (OPE) for the two-point vectorial QCD Green function. We correct this problem analytically and describe the axial sector and chiral symmetry breaking. [...]
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 157-160.

2017-08-12 06:40
NNLO corrections for LHC processes / Caola, Fabrizio (CERN)
To fully profit from the remarkable achievements of the experimental program at the LHC, very precise theoretical predictions for signal and background processes are required. In this contribution, I will review some of the recent progress in fully exclusive next-to-next-to-leading-order (NNLO) QCD computations. [...]
2015. 5 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 125-129.

2017-08-12 06:40
Rare beauty decays at LHCb / Dettori, F (CERN)
In this contribution we review the most recent measurements of the LHCb experiment in the field of rare decays of B mesons. In particular, the first observation of the $B^0_s \to \mu^+ \mu^-$ decay, the angular analysis of $B^0_d \to K^{*} \ell^+ \ell^-$ decays, and the test of lepton universality in $B^+ \to K^+ \ell^+ \ell^-$ decays are presented.
2015. 4 p. Fulltext: PDF. In: 50th Rencontres de Moriond on QCD and High Energy Interactions, La Thuile, Italy, 21-28 Mar 2015, pp. 85-88.
A novel approach to likelihood-free inference: just use the likelihood!

A new arXival describes "a novel inference framework based on Approximate Bayesian Computation" for a modelling exercise in the field of strong gravitational lensing. Since they acknowledge insightful discussions from the guys who seem to deliberately refuse to cite my work on ABC for astronomy, it's no surprise that the statistical analysis again goes astray. Basically, the proposal here is to use the negative log marginal likelihood (the marginalisation being over a set of nuisance parameters conditional on the observed data and a given set of hyper-parameters) as a distance in ABC. Usually ABC is motivated because the likelihood is not available, but that doesn't seem to be understood here. The only thing that seems to stop the posterior collapsing to the mode is the early stopping of the ABC-SMC algorithm at an arbitrary number of steps. There's also some silliness with respect to the choice of prior for the source image (the component of the model that is marginalised out exactly), which is emphasised in the paper as disfavouring realistic source brightness distributions.

The only thing I found interesting here was the reference to Vegetti & Koopmans (2009) [Note: this is clearly the paper the authors intended to cite, rather than the other Vegetti & Koopmans 2009: MNRAS 400 1583; this kind of mistake suggests the level of care going into these arXivals is even less than the few minutes I spend cranking out a blog post]. The Vegetti & Koopmans (2009) method involves construction of a source image prior via a Voronoi tessellation scheme with regularisation terms. An interesting project would be to examine how the SPDE approach could allow for a more nuanced prior choice, the introduction of non-stationarity, etc. (see Lindgren et al. 2011).

Actually, there is another interesting thing that could be investigated with this model. The authors choose to look at $-2 \log P(d|\theta)$ for their distance: the factor of 2 is of course irrelevant in the ABC context and with respect to collapse on the posterior mode; but for likelihood-based inference it would represent an extra-Bayesian calibration factor, which could actually be chosen to reduce exposure to excess concentration in the mis-specified setting via e.g. the loss-likelihood bootstrap.

More dumping on the NeurIPS Bayesian Deep Learning Workshop …

Today I noticed another paper on astro-ph that irked me, and again it turns out to be accepted at this year's NeurIPS Bayesian Deep Learning Workshop. This particular arXival proposes to explore a Bayesian approach to the construction of super-resolution images, and in particular to explore uncertainty quantification, since "in many scientific domains this is not adequate and estimations of errors and uncertainties are crucial". What irked me? One thing was the statement: "to the extent of our knowledge, there is no existing work measuring uncertainty in super-resolution tasks". That might be true if you consider only a particular class of machine learning algorithms that have addressed the challenge of creating high resolution images from low resolution inputs, but this general problem (PSF deconvolution, drizzling, etc.) has been a core topic in astronomical imaging since the first CCDs, and in this context there are many studies of accuracy and uncertainty.
Likewise, the general problem of how to build confidence in reconstructions of images via statistical models, without a ground truth to validate against, is also well explored in astronomy. The first ever black hole image ('the Katie Bouman news story') addressed this challenge through a structured comparison of images separately created by four independent teams using different methods.

Another thing that irks me is that I find the breakdown between types of uncertainty ("Epistemic uncertainty relates to our ignorance of the true data generating process, and aleatoric uncertainty captures the inherent noise in the data.") to be inadequate. Here this is really just proposing a separation between the prior and the likelihood, which runs against the useful maxim for applied Bayesian modelling that the prior can only be understood in terms of the likelihood. That said, I also wouldn't call this a Bayesian method, since the approximation of the posterior implied by dropout is zeroth order at best. Don't get me wrong: dropout is a great technique for certain applications, but I think the arguments to suggest it has a Bayesian flavour are rather unconvincing, though attractive for citations.

On unnecessarily general paper titles

Last week a postdoc in my lab received a rejection letter from a high profile stats journal, with the reason for rejection given being that the problem was already solved in existing software such as INLA. Which was odd, because we use INLA all the time at work, and the whole reason we decided to embark on the project described in the rejected manuscript was that INLA did not offer a solution for this particular problem. My suspicion is that the editor or associate editor did a quick Google search on the topic and found a paper with an unnecessarily general title: that is, a paper whose title suggests that a general problem is solved therein, rather than the very restricted problem that is actually examined. (In this case the problem is the combination of area and point data, which is trivially solved in INLA under the Normal likelihood with a linear link function, but is not solved in INLA for non-Normal likelihoods with non-linear link functions.)

For this reason I would say that I'm more than a little skeptical about the clickbait motivation for the title given to this recent arXival: "Uncertainty Quantification with Generative Models". Which is sufficiently broad as to encompass the entirety of Bayesian inference and most of machine learning! And in which you would probably expect to find something more substantial than a proposal to approximate posteriors of VAE-style models via a mixture of Gaussians, obtained by local mode finding (optimisation from random starting points) followed by computation of the Hessian at those modes. But apparently this novel idea is accepted to the Bayesian Deep Learning workshop at NeurIPS this year, so what do I know?!

If I'm going to start beef with the machine learning community then I may as well say something else on the topic. Recently it came to light that an Australian engineering professor was fired from Swinburne University for having published a huge amount of duplicate work: i.e., submitting essentially the same paper to multiple journals in order to spin each actual project out into multiple near-identical publications. The alleged motivation for doing so was the pressure to juke one's own research output stats (total pubs and total cites).
Which is funny, because I don't know of many machine learning professors who don't have the same issue with their publications: multiple versions of the same paper given at NeurIPS, AISTATS, ICLR, etc., and then maybe submitted to a stats journal as well! And in other indignities, the third author on this arXival is on a salary that is over 2.5 times my Oxford salary.

Priors make models like styles make fights …

A ubiquitous saying in boxing analysis is "styles make fights", which means that to predict what a match-up will look like you need to think about how the characteristic styles of the two opponents might work (or not) with respect to each other. Two strong counter-punchers might find themselves circling awkwardly for twelve rounds, neither willing to come forward and press the action, while a match-up between two aggressive pressure fighters might turn on the question of whether their styles only work while moving forwards. As an analyst of Bayesian science, my equivalent maxim is "priors make models". Well-chosen priors can sensibly regularise the predictions of a highly flexible model, achieve powerful shrinkage across a hierarchical structure, or push a model towards better Frequentist coverage behaviour. For that reason I don't understand why cosmologists are so keen on 'uninformative' priors. It's like throwing away the best part of Bayesian modelling.

Anyway, two papers from the arXiv last week caught my eye. The first proposes a statistic for 'quantifying tension between correlated datasets with wide uninformative priors'. So, aside from the focus on a type of prior (wide, uninformative) that I don't care for, I'm also puzzled by the obsession of cosmologists with searching for 'tension' in the posteriors of models with shared parameters fitted to different datasets (or different aspects of the same dataset), as an indicator of either systematic errors or new physics. As this paper makes clear, there is a huge variety of techniques proposed for this topic, but all of them come from the cosmology literature. How is it that no other field of applied statistics has got itself twisted up in this same problem? An example use of this statistic is given in which a model with shared parameters is fitted to four redshift slices from a survey, and the decision whether or not to combine the posteriors is to be made according to how much their separately fitted posteriors overlap.

The other paper I read this week concerns the coupling of a variational autoencoder model, as a generative distribution for galaxy images, with a physics-based gravitational lensing model. The proposal for this type of model and the authors' advocacy for modern auto-diff packages like PyTorch make a lot of sense. However, it seems that a lot of work is still to be done to improve the prior on galaxy images and the posterior inference technique, because the two examples shown suggest that there is a serious under-coverage problem in the moderate and high signal-to-noise regime. Also, the cost of recovering a small number of HMC samples is very high here (many hours); I don't think that HMC is a viable posterior approximation for this type of model. And why bother when the coverage is so bad? Most likely a better option will be some kind of variational approximation that will be quicker to fit and will improve coverage partly by accident and partly by design through its approximate nature; i.e., by deliberately slowing the learning rate.
Sounds crazy to some, perhaps, but remember that the variational autoencoder here is trained via stochastic gradient descent with a predictive-accuracy-based stopping rule, which is just another way of slowing the learning rate or artificially regularising a model.

How to push back against bullies in academia

I'm sure most of us know of at least one notorious workplace bully (or other type of shithead) in our field; the kind of professor about whom there's an open secret of their bad behaviour, but given that universities don't give a toss about this issue there is no chance of them being formally sanctioned. An effective solution I've recently come across is the following: simply refuse to have anything to do with their work. If you're asked to review one of their papers, decline and let the journal know that you don't feel you can ethically review their work. If you're asked to review one of their grant applications, do the same. If you're invited to speak in the same session as them at a conference, decline and let the organisers know your reasoning.

On this topic, one thing that amazes me is the fact that most universities don't do exit interviews with departing staff members. If you want to test a hypothesis (e.g. whether there is or isn't a problem with workplace bullying in a given group or department) you need to gather data. Simply relying on a passive monitoring system (i.e., self-reporting of incidents by victims) is nothing short of a deliberate strategy to avoid seeing the problem.

Copula models for astronomical distributions

Yesterday I read through this new arXival by an old friend from my ETH Zurich days, which presents a package (called LEO-py) for likelihood-based inference in the case of Gaussian copula models and linear regressions with missing data, censoring, or truncation. I never quite understand the demand for astronomer-specific expositions and software on topics like this, since as soon as one understands what a hierarchical Bayesian model is and how to code one up in a standard statistical programming language like Stan or JAGS, the world is your oyster. (Indeed, back when we were at ETH, an errors-in-variables logistic regression model for predicting the barred galaxy fraction as a function of noisily estimated stellar mass was one of my first forays into this field; of course it never saw the light of day because my supervisor at the time (she who shall not be named) was completely opposed to any statistical methods beyond ordinary linear regression!) The key contribution here, to my mind, is rather the emphasis on copula models, which are certainly under-utilised in the literature. If this package helps popularise copulae (copulation?) that will be a very good contribution.

Note: Whenever I think of truncated astronomical data analysis problems I'm reminded of the example (described in J. S. Liu's Monte Carlo Strategies in Scientific Computing) of a permutation test for doubly truncated (redshift, log-luminosity) data developed by Efron & Petrosian (1999).
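For anyone who hasn't met copulae before, the core Gaussian copula trick takes only a few lines to sketch (a toy of my own here, not LEO-py): draw correlated Gaussians, map them to correlated uniforms with the Normal CDF, then push those through whatever marginal quantile functions you fancy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho = 0.7

# Correlated standard normals -> correlated uniforms -> arbitrary marginals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
u = stats.norm.cdf(z)
x = stats.lognorm(s=0.5).ppf(u[:, 0])   # e.g. a log-normal marginal
y = stats.gamma(a=2.0).ppf(u[:, 1])     # and a gamma marginal

# The dependence survives the marginal transforms
print(np.corrcoef(x, y)[0, 1])

The point is that the dependence structure and the marginals can be chosen completely separately, which is exactly what makes copulae handy for regressions with censoring or truncation.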
# Experience

Experience, commonly abbreviated as Exp or XP, is a measure of progress in a certain skill. It is usually obtained by performing tasks related to that skill. After gaining a certain amount of experience, players will advance to the next level in that skill, which can result in new abilities, among other things.

## Gaining Experience

Experience in a skill is generally obtained by performing a task related to that skill. Experience can also sometimes be gained by performing certain other tasks not necessarily related to the skill, such as completing quests, participating in random events, playing certain mini-games, or completing certain mini-quests.

• When a player rubs a lamp from the genie random event, the player gains experience equal to their level in the chosen skill times 10 (for example, a player with a Woodcutting level of 50 who rubs the lamp and chooses Woodcutting receives 500 experience in that skill).
• When a player reads the book reward from the Surprise Exam! random event, they receive their level times 15 experience (for example, a player with an Attack level of 70 would receive 1050 experience from reading the book in that skill).

Players helping others using the Assist System can also earn a maximum of 30,000 total experience every 24 hours, although the experience may be shared among many skills, and not all skills can be shared through the Assist System.

Players can continue receiving experience after level 99 in a skill, up to a maximum of 200 million. Once a player reaches this limit, they can continue using the skill, but do not receive additional experience. The maximum total experience is 4.8 billion (4,800,000,000), reached when a player has 200 million experience in all 24 skills. Currently, no player has ever reached this.

Until the Skills Interface update (12 November 2007), players with a level 99 skill could view the experience it would take them to earn level 100. However, even when "level 100" was reached, the actual level remained at 99.

The rate at which experience can be obtained varies greatly from skill to skill. Players can gain up to 925k experience per hour with Summoning, but only around 32k per hour with Agility.

#### Exponential Growth

The amount of experience needed for the next level increases by approximately 10% each level. For example, 83 experience is required for advancement to level 2, while 91 experience is then required for advancement to level 3. The difference between 83 and 91 is 8, and 10% of 83 is 8.3, which is approximately 8.

A 10% growth factor may seem slow, but, as with all exponential growth, it expands rapidly to a massive 13,034,431 experience needed for level 99. Level 85 requires nearly one quarter of the experience needed for level 99, and level 92 is nearly the exact halfway mark, requiring 6,517,253 experience. This clearly demonstrates how the experience gap grows rapidly at higher levels.

The varying amount of experience needed per level leads to some surprising comparisons. For example, at level 92 you will have half of the experience needed for level 99, and getting from level 98 to 99 requires roughly as much experience as getting from level 1 to level 75. From about level 28 onward, the experience needed to reach the next level is approximately equal to 10% of the total experience at the current level; this ratio keeps decreasing, but much more slowly than it does before level 28.
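As a quick sanity check on those figures (the growth rate is just the $2^{1/7}$ factor in the experience formula given in the next section), here is a short Python snippet:

# Per-level experience increments grow by a factor of 2**(1/7) ~ 1.104, i.e. ~10% per level
print(2 ** (1 / 7))    # ~1.1041
print(91 / 83)         # level-3 increment over level-2 increment, ~1.096

# From level 28 on, the increment is also roughly 10% of the running total
print(1207 / 10824)    # experience to reach 29 over total experience at 28, ~0.112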
## Combat Experience

#### Combat skills

To calculate the experience gain for combat skills, a simple equation can be used:

Experience earned $= 4 \times d$, where $d$ = damage dealt to an opponent

This equation does not work for some opponents, such as random events, which give significantly less experience.

#### Hitpoints

To calculate the experience gain for Hitpoints, another simple equation can be used:

Experience earned $= 1.33 \times d$, where $d$ = damage dealt to an opponent

#### Monster Experience

The number of monsters required to level up a combat skill can be calculated as follows:

Number of monsters $= \frac{E}{4 \times H}$, where $E$ = experience required to level up and $H$ = monster's number of hitpoints

## Relationship with Level

### Equations

The equation below calculates exactly the minimum experience needed for a given level:

Experience Required $= \left\lfloor \frac{\sum_{n = 1}^{L - 1} \left\lfloor n + 300 \times 2^{n/7} \right\rfloor}{4} \right\rfloor$, where $L$ = skill level

The above equation can be approximated with minimal rounding error as:

Approximate Experience Required $= \sum_{n = 1}^{L - 1} \left( \frac{n}{4} + 75 \times 2^{n/7} \right)$, where $L$ = skill level

This approximation can then be used to find the maximum additional experience required to level up:

Additional Experience Required to Level Up $= \frac{L}{4} + 75 \times 2^{L/7}$, where $L$ = current level

##### Example

You want to find the experience that it would take to level up from 28 Strength to 29 Strength:

Additional Experience Required to Level Up $= \frac{28}{4} + 75 \times 2^{28/7} = 7 + 75 \times 2^{4} = 7 + 75 \times 16 = 1207$

So it would take at most 1207 experience points to get from level 28 to level 29.

### Table

The following table shows the relationship between level, the experience required for that level, and the experience difference from the previous level:
| Level | Exp. | Exp. Diff | Level | Exp. | Exp. Diff | Level | Exp. | Exp. Diff | Level | Exp. | Exp. Diff |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | N/A | 26 | 8,740 | 898 | 51 | 111,945 | 10,612 | 76 | 1,336,443 | 126,022 |
| 2 | 83 | 83 | 27 | 9,730 | 990 | 52 | 123,660 | 11,715 | 77 | 1,475,581 | 139,138 |
| 3 | 174 | 91 | 28 | 10,824 | 1,094 | 53 | 136,594 | 12,934 | 78 | 1,629,200 | 153,619 |
| 4 | 276 | 102 | 29 | 12,031 | 1,207 | 54 | 150,872 | 14,278 | 79 | 1,798,808 | 169,608 |
| 5 | 388 | 112 | 30 | 13,363 | 1,332 | 55 | 166,636 | 15,764 | 80 | 1,986,068 | 187,260 |
| 6 | 512 | 124 | 31 | 14,833 | 1,470 | 56 | 184,040 | 17,404 | 81 | 2,192,818 | 206,750 |
| 7 | 650 | 138 | 32 | 16,456 | 1,623 | 57 | 203,254 | 19,214 | 82 | 2,421,087 | 228,269 |
| 8 | 801 | 151 | 33 | 18,247 | 1,791 | 58 | 224,466 | 21,212 | 83 | 2,673,114 | 252,027 |
| 9 | 969 | 168 | 34 | 20,224 | 1,977 | 59 | 247,886 | 23,420 | 84 | 2,951,373 | 278,259 |
| 10 | 1,154 | 185 | 35 | 22,406 | 2,182 | 60 | 273,742 | 25,856 | 85 | 3,258,594 | 307,221 |
| 11 | 1,358 | 204 | 36 | 24,815 | 2,409 | 61 | 302,288 | 28,546 | 86 | 3,597,792 | 339,198 |
| 12 | 1,584 | 226 | 37 | 27,473 | 2,658 | 62 | 333,804 | 31,516 | 87 | 3,972,294 | 374,502 |
| 13 | 1,833 | 249 | 38 | 30,408 | 2,935 | 63 | 368,599 | 34,795 | 88 | 4,385,776 | 413,482 |
| 14 | 2,107 | 274 | 39 | 33,648 | 3,240 | 64 | 407,015 | 38,416 | 89 | 4,842,295 | 456,519 |
| 15 | 2,411 | 304 | 40 | 37,224 | 3,576 | 65 | 449,428 | 42,413 | 90 | 5,346,332 | 504,037 |
| 16 | 2,746 | 335 | 41 | 41,171 | 3,947 | 66 | 496,254 | 46,826 | 91 | 5,902,831 | 556,499 |
| 17 | 3,115 | 369 | 42 | 45,529 | 4,358 | 67 | 547,953 | 51,699 | 92 | 6,517,253 | 614,422 |
| 18 | 3,523 | 408 | 43 | 50,339 | 4,810 | 68 | 605,032 | 57,079 | 93 | 7,195,629 | 678,376 |
| 19 | 3,973 | 450 | 44 | 55,649 | 5,310 | 69 | 668,051 | 63,019 | 94 | 7,944,614 | 748,985 |
| 20 | 4,470 | 497 | 45 | 61,512 | 5,863 | 70 | 737,627 | 69,576 | 95 | 8,771,558 | 826,944 |
| 21 | 5,018 | 548 | 46 | 67,983 | 6,471 | 71 | 814,445 | 76,818 | 96 | 9,684,577 | 913,019 |
| 22 | 5,624 | 606 | 47 | 75,127 | 7,144 | 72 | 899,257 | 84,812 | 97 | 10,692,629 | 1,008,052 |
| 23 | 6,291 | 667 | 48 | 83,014 | 7,887 | 73 | 992,895 | 93,638 | 98 | 11,805,606 | 1,112,977 |
| 24 | 7,028 | 737 | 49 | 91,721 | 8,707 | 74 | 1,096,278 | 103,383 | 99 | 13,034,431 | 1,228,825 |
| 25 | 7,842 | 814 | 50 | 101,333 | 9,612 | 75 | 1,210,421 | 114,143 | | | |

### Calculators

Rather than using the chart or working through the equation yourself, calculators allow determining experience values more easily.
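The exact formula above also fits in a few lines of Python (a sketch, with a hypothetical helper name):

from math import floor

def xp_for_level(level):
    # Total experience required to reach a given level, per the exact formula above
    points = sum(floor(n + 300 * 2 ** (n / 7)) for n in range(1, level))
    return points // 4

# Sanity checks against the table: level 2 -> 83, level 92 -> 6,517,253
print(xp_for_level(2), xp_for_level(92))
# Experience to go from level 28 to 29 (matches the example above): 1,207
print(xp_for_level(29) - xp_for_level(28))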
mersenneforum.org

How can I assign a trial factorization in Prime95?

#1, bloodIce (2010-02-01, 16:25):
Hello, I would like to work on trial factorization of 2^1061-1 to extend it from its current state. So far it has been tried up to factors of 2^62. Could you please help me to assign a job for 2^62 to 2^63, and so on?

#2, Mini-Geek (2010-02-01, 18:35):
Considering the ECM that has been done on it, TF from 2^62 to 2^63 is (nearly) guaranteed to be absolutely useless. This is the line you would add to worktodo.txt to have Prime95 TF it from 2^62 to 2^63, but it tells you to use ECM instead and doesn't let you do it:
Code: Factor=1061,62,63
Last fiddled with by Mini-Geek on 2010-02-01 at 18:35

#3, xilman (2010-02-01, 18:37):
Quote (Originally Posted by bloodIce): Hello, I would like to work on trial factorization of 2^1061-1 to extend it from its current state. So far it has been tried up to factors of 2^62. Could you please help me to assign a job for 2^62 to 2^63, and so on?
You are extremely unlikely to find any factors by trial division. This number has had so much ECM work done on it that the chance of it having a factor of under 150 bits is somewhere between nil and negligible. You would be wasting your time and your electricity bill. If you don't understand that claim, or don't believe it, please Google "elliptic curve method". However, there are many other computations you could perform with a much, much greater probability of achieving a useful result. Quite a few people here could make suggestions, including myself.
Paul
P.S. Don't be dismayed by some of the reactions your proposal may elicit... Developing a thick skin is part of the entry requirements for playing an active part in on-line discussions.

#4, bloodIce (2010-02-01, 22:01):
I agree that there is not much reason to check for lower factors; however, if we want to be systematic, we should start from somewhere. The argument for continuing what has been done up to 2^62 is that even a small chance of a factor existing in that range should be checked and eventually eliminated as a possibility. The problem is that the server does not recognize my assignment as Factor=1061,62,63, nor as Factor=1061,81,82. My attitude might be close to absolute stupidity, but why not check systematically (step by step) for factors up to 250 bits if you wish? Is there another way to assign what I want (do not bother about my electricity bill, only about my curiosity)? @xilman: If you have any idea where I could use my processors better, please let me know. If something more useful can be done, let's do it then.
Last fiddled with by bloodIce on 2010-02-01 at 22:05

#5, Uncwilly (2010-02-01, 22:23):
If you absolutely feel that you need to do that, you can do it and turn in the result. Since the server won't assign that (the CPU is better used by doing ECM), it won't give you an assignment key. There are ways to do the work; if you poke around the forum you may find them. While you are doing that, maybe you can do some ECM. If a factor is found by ECM, it may lead to the complete factorization. 971 (the last of the small exponents without a known factor) has a factor of 174 bits.
It is a complete waste of time to try to work your way up to that level.
Last fiddled with by Uncwilly on 2010-02-01 at 22:24

#6, petrw1 (2010-02-01, 22:25):
Quote (Originally Posted by bloodIce): Factor=1061,62,63 Factor=1061,81,82
Besides electricity, are you aware that according to http://mersenne-aries.sili.net/credit.php (which is proven reliable):
Factor=1061,62,63 would take 1 core of a high-end Quad much of a year to complete?
Factor=1061,81,82 would take on the order of 100 million years?

#7, mdettweiler (2010-02-01, 22:32):
Quote (Originally Posted by petrw1): Besides electricity, are you aware that according to http://mersenne-aries.sili.net/credit.php (which is proven reliable): Factor=1061,62,63 would take 1 core of a high-end Quad much of a year to complete? Factor=1061,81,82 would take on the order of 100 million years?
What the heck? A year to TF an exponent from 62 to 63 bits? (Or is the "slight" decrease in primes to test as the exponent increases enough to make it take that long at this extreme low end of things?)

#8, Uncwilly (2010-02-01, 22:37):
Quote (Originally Posted by petrw1): Factor=1061,62,63 would take 1 core of a high-end Quad much of a year to complete?
You could instead do 2000 curves at B1=1000000000, B2=5000000000 in less than a GHz-year. I would suggest that you ask someone who knows what bounds to run, instead of what you have been doing.
Code:
History 3 curves, B1=260000000, B2=26000000000 by "THK" on 2010-01-30
History 3 curves, B1=260000000, B2=26000000000 by "THK" on 2010-01-30
History 3 curves, B1=260000000, B2=26000000000 by "THK" on 2010-01-30
History 3 curves, B1=260000000, B2=26000000000 by "THK" on 2010-01-30
History 100 curves, B1=1000000, B2=100000000 by "ANONYMOUS" on 2010-01-30
History 100 curves, B1=1000000, B2=100000000 by "ANONYMOUS" on 2010-01-30
History 100 curves, B1=1000000, B2=100000000 by "ANONYMOUS" on 2010-01-30
History 100 curves, B1=1000000, B2=100000000 by "ANONYMOUS" on 2010-01-30
History 100 curves, B1=1000000, B2=100000000 by "ANONYMOUS" on 2010-01-30
History 100 curves, B1=1000000, B2=100000000 by "ChuckEtienne" on 2010-01-31
History 2 curves, B1=1000000, B2=100000000 by "BloodIce" on 2010-01-31
History 2 curves, B1=1000000, B2=100000000 by "BloodIce" on 2010-01-31
History 100 curves, B1=1000000, B2=100000000 by "BloodIce" on 2010-01-31
History 10 curves, B1=10000000, B2=1000000000 by "BloodIce" on 2010-01-31
History 10 curves, B1=12000000, B2=1200000000 by "BloodIce" on 2010-01-31
History 100 curves, B1=3000, B2=300000 by "BloodIce" on 2010-01-31
History 500 curves, B1=30000, B2=3000000 by "BloodIce" on 2010-01-31
History 100 curves, B1=30000, B2=30000000 by "BloodIce" on 2010-01-31
History 50 curves, B1=300000, B2=300000000 by "BloodIce" on 2010-01-31
History 100 curves, B1=200000, B2=20000000 by "BloodIce" on 2010-01-31
History 10 curves, B1=1000000, B2=100000000 by "BloodIce" on 2010-01-31
History 400 curves, B1=50000, B2=5000000 by "BloodIce" on 2010-02-01

#9, axn (2010-02-01, 22:39):
Quote (Originally Posted by mdettweiler): What the heck?
What did you expect?

#10, mdettweiler (2010-02-01, 22:41):
Quote (Originally Posted by axn): What did you expect?
I expected something more along the lines of the less than half a GHz-day or so that it would take to do a leading-edge exponent to the same level... according to the calculator it would be ~0.4 GHz-days even for just a 2M exponent.

#11, petrw1 (2010-02-01, 22:58):
Quote (Originally Posted by mdettweiler): I expected something more along the lines of the less than half a GHz-day or so that it would take to do a leading-edge exponent to the same level... according to the calculator it would be ~0.4 GHz-days even for just a 2M exponent.
From "The Math" page: http://www.mersenne.org/various/math.php
Quote: One very nice property of Mersenne numbers is that any factor q of 2^P-1 must be of the form 2kp+1. Furthermore, q must be 1 or 7 mod 8.
As the exponents get bigger there are fewer and fewer possible factors to test in a given bit range; hence less time to complete.
Homework:
a. How many 2kp+1 possibilities are there at 62 bits for p=1061?
b. How many for p=2,000,001?
c. What is the ratio? i.e., how many times more work is involved?
1061 is a small number in itself, and so 2^1061-1 has the illusion of being small; however, it is over 300 digits long.
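A rough back-of-the-envelope check of that homework (a Python sketch, not part of the thread): candidates have the form 2kp+1, and only the k values giving q ≡ 1 or 7 (mod 8) survive, which is about half of them, so the candidate counts and their ratio can be estimated directly.

# Estimate how many trial-factoring candidates q = 2kp + 1 lie in [2^62, 2^63),
# keeping roughly the half of k values with q = 1 or 7 (mod 8).
def candidate_count(p, lo_bits=62, hi_bits=63):
    k_lo = 2**lo_bits // (2 * p)
    k_hi = 2**hi_bits // (2 * p)
    return (k_hi - k_lo) // 2

for p in (1061, 2_000_001):
    print(p, f"{candidate_count(p):.3e}")

# Roughly 1.1e15 candidates for p = 1061 versus ~5.8e11 for a ~2M exponent:
# the ratio is about 2,000,001 / 1061, i.e. ~1900 times more work for p = 1061.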
# Power monitoring difficulties using ACS71020 30A SPI and PSoC 5LP

I'm currently developing a board for my dissertation that should monitor the voltage, current and power consumption of a given load. For that purpose I chose to use the Allegro ACS71020; the one I chose was the SPI model with an IPR of 30 amps. To interface with the ACS71020 I'm using the PSoC 5LP, but this shouldn't differ much from MCU to MCU. However, I am getting incorrect voltage and current readings from the ACS71020.

This is the schematic for my hardware setup. I decided to use pull-up resistors because the development board schematic (for the SPI version) also uses them.

Figure 1: Schematic

I'm working with a 230 V @ 50 Hz power source (European), therefore I'm using a 2 kΩ resistor for Rsense, giving me a full-scale voltage of 550 V (this value will be needed later on). After this I connected it to the PSoC using these pins (taking care to use a 3.3 V reference for the PSoC output so that I wouldn't damage the ACS71020).

Figure 2: PSoC Pinout

My problem is that when measuring voltage with nothing connected, it measures around 30 V. And when doing a practical test using a light bulb as the load, the voltage measurements fluctuate from about 140 V to 230 V, which is not correct. Also, the current value is always really low. Why are these incorrect values being reported?

This is the data I got when requesting voltage and current readings every 4 seconds:

Table 1: Voltage and current readings with light bulb ON
Table 2: Voltage and current readings with light bulb OFF

As you can see, when the power is off the readings, although incorrect, are consistent. When the power is on, the readings are mostly incorrect and inconsistent for both voltage and current. Here is the code I used to get the readings:

void SPIrequestData32 (_Bool rw, uint8 addr, uint8 p, uint8 s, uint8 t, uint8 q){
    /*******************************************************************
    * Requests a register over SPI and stores the 32-bit answer in the
    * SPI Rx buffer. Inputs: _Bool rw, uint8 addr. Clears Tx, Rx & FIFO.
    * rw == 1 READ, rw == 0 WRITE
    * The 32-bit answer is stored in SPI_RxDataBuffer
    *******************************************************************/
    // clear buffers
    SPIM_1_ClearTxBuffer();
    SPIM_1_ClearRxBuffer();
    SPIM_1_ClearFIFO();

    if(rw == 1){
    } else if (rw == 0){
    }

    // waits for a 32-bit response = 4 frames
    if(rw == 1){
        SPIM_1_WriteTxData(0x00);
        SPIM_1_WriteTxData(0x00);
        SPIM_1_WriteTxData(0x00);
        SPIM_1_WriteTxData(0x00);
    }
    if(rw == 0){
        SPIM_1_WriteTxData(p);
        SPIM_1_WriteTxData(s);
        SPIM_1_WriteTxData(t);
        SPIM_1_WriteTxData(q);
    }
}

In this function, I receive the address I wish to read from or write to. If it's a read, I just drive the line low so that the ACS71020 has CS and CLK active. If it's a write, I proceed to write the byte I wish. After receiving the data from the MISO line, I call the function below to convert the first 15 bits to a voltage value and the next 15 bits to a current value, with the help of the lackluster datasheet.
void V_I_RMS (){
    /*******************************************************************
    * Transforms the 32-bit response into VRMS (V) and IRMS (A) values
    * Output is the global variables VRMs and IRMs
    *******************************************************************/
    uint8 zero = 0;
    uint8 primeiro = 0;
    uint8 segundo = 0;
    uint8 terceiro = 0;
    uint8 quarto = 0;
    uint16 V = 0;
    uint16 I = 0;
    uint16 aux = 0;
    uint16 aux2 = 0;

    for(uint8 i = 0u; i < 5u; i++){
        switch(i){
            case 0: zero = SPIM_1_ReadRxData(); break;      // the first frame is to be ignored
            case 1: primeiro = SPIM_1_ReadRxData(); break;
            case 2: segundo = SPIM_1_ReadRxData(); break;
            case 3: terceiro = SPIM_1_ReadRxData(); break;
            case 4: quarto = SPIM_1_ReadRxData(); break;
            default: break;
        }
    }

    V = ((segundo << 8) | primeiro);        // voltage is a 16-bit number with 15 fractional bits
    VRMs = (fullscaleV * ((float)V / (float)0x8000));

    I = ((quarto << 8) | terceiro);         // current is a 15-bit number with 14 fractional bits
    aux = I & 0b0100000000000000;           // mask to check whether it is 1.xxx or 0.xxx
    if(aux == 0x4000){ aux = 1; }
    else if (aux == 0){ aux = 0; }
    aux2 = (I & 0b0011111111111111);        // fractional bits
    IRMs = IPR * (aux + (aux2 * Istep));    // current is IPR * (integer part + fractional part * Istep)
}

I don't understand whether I'm doing something wrong while converting or requesting the data that would give such bad results. In case anyone asks, my SPI master block is set up as shown below; referring to the datasheet, this is the correct configuration. I've tried increasing and decreasing its bit rate, but the results I get are the same. I've also tried to change some shadow registers and the customer access code, but when I read the registers afterwards, they come back all messed up: sometimes showing nothing but zeros, sometimes showing what I've written into them.

Figure 3: Screenshot of SPI master block setup

This is a screenshot from my digital oscilloscope, showing that requesting to read register 0x1E with the 7th bit high (which is the read bit) results in 0x9E (reading mode). Driving the MOSI line high or low keeps CS and CLK working as intended.

Figure 4: Screenshot of digital oscilloscope

• Why not post the code instead of screenshots? Where do you control the CS line? Do you have other SPI devices on the bus? Aug 1, 2020 at 19:47
• I'm not familiar with this; I thought screenshots were good. The CS line is controlled automatically by the SPI module in the PSoC, and there are no other devices on the bus. Aug 1, 2020 at 20:02
• How do you know it is automatically controlling it in the correct way? It must be taken low, then do the transaction you want, and then taken high. How do you make sure it does that? Aug 1, 2020 at 20:13
• I have a digital oscilloscope to check the behaviour. I can post a screenshot of it. Aug 1, 2020 at 20:13
• "This fixed my issues with random readings.": If you solved the problem, please feel free to post the solution formally as an answer yourself (it is allowed by SE policy). It will help future readers. – AJN Aug 4, 2020 at 9:26
# BDNYC at AAS 225

BDNYC (and friends) are out in force for the 225th meeting of the American Astronomical Society! Please come see our posters and talks (mostly on Monday). To whet your appetite, or if you missed them, here are some samplers:

• Munazza Alam (Monday, 138.40) High-Resolution Spectral Analysis of Red & Blue L Dwarfs
• Sara Camnasio (Monday, 138.39) Multi-resolution Analysis of Red and Blue L Dwarfs
• Kelle Cruz and Stephanie Douglas (Monday, 138.37) When good fits go wrong: Untangling Physical Parameters of Warm Brown Dwarfs
• Stephanie Douglas (Monday, 138.19) Rotation and Activity in Praesepe and the Hyades
• Jackie Faherty (Talk, Monday 130.05) Clouds in the Coldest Brown Dwarfs
• Joe Filippazzo (Monday, 138.34) Fundamental Parameters for an Age Calibrated Sequence of the Lowest Mass Stars to the Highest Mass Planets
• Paige Giorla (Monday, 138.44) T Dwarf Model Fits for Spectral Standards at Low Spectral Resolution
• Kay Hiranaka (Talk, Monday 130.04D) Constraining the Properties of the Dust Haze in the Atmospheres of Young Brown Dwarfs
• Erini Lambrides (Thursday, 432.02) Can 3000 IR spectra unveil the connection between AGN and the interstellar medium of their host galaxies?
• Emily Rice (Tuesday, 243.02) STARtorialist: Astronomy Outreach via Fashion, Sci-Fi, & Pop Culture
• The Young and the Red: What we can learn from Young Brown Dwarfs

# Ground Based Photometry

Here I want to calculate some photometric points from spectra for comparison with published values for a bunch of known brown dwarfs. In order to get the true magnitude for an object, I first need to calculate the instrumental magnitude and then correct for a number of effects. That is, I calculate the apparent magnitude from a particular place on the Earth and then add corrections to determine what it would be if we measured from space. After all our corrections are made, the magnitude is given by:

$m = m_{\rm inst} - m_0 - \kappa X$

### Instrumental Magnitude

The first term on the right in the equation above is the magnitude measured by the instrument on the ground, given by:

$m_{\rm inst} = -2.5 \log_{10} \left[ \frac{\int \frac{\lambda}{hc}\, f_\lambda\, S(\lambda)\, d\lambda}{\int S(\lambda)\, d\lambda} \right]$

Where $f_\lambda$ is the energy flux density of the source in units of [erg s-1 cm-2 A-1] and $S(\lambda)$ is the scalar filter throughput for the band of interest. Since I will be comparing my calculated magnitudes to photometry taken with photon counting devices, the factor of $\lambda/hc$ converts to a photon flux density in units of [photons s-1 cm-2 A-1].

### Zero Point Correction

The second term in our magnitude equation is a first order correction to compare to some standard we define as zero. I will use a flux calibrated spectrum of the A0 star Vega to calculate the zero point magnitude for the band:

$m_0 = -2.5 \log_{10} \left[ \frac{\int \frac{\lambda}{hc}\, f_{\lambda,\rm Vega}\, S(\lambda)\, d\lambda}{\int S(\lambda)\, d\lambda} \right]$

Just as we obtained our instrumental magnitude above.

### Extinction Correction

The third term corrects for the extinction of the source flux due to atmospheric absorption. We can get closer to the true apparent magnitude (above the atmosphere) by adding the extinction term $-\kappa X$, where $\kappa$ is the extinction coefficient for the band of interest and $X$ is the airmass. The airmass is the optical path length of the atmosphere, which attenuates the source flux depending on its angle from the zenith $z$. Approximating the truly spherical atmosphere as plane-parallel, the airmass $X \approx \sec z$ goes from 1 at $z = 0$ to 2 at $z = 60^\circ$. At zenith angles greater than that, the plane-parallel approximation falls apart and the airmass term gets complicated. Where the airmass is the amount of atmosphere in the line of sight, the extinction coefficient is the amount by which the incident light is attenuated as it travels through the airmass.
The extinction coefficient is related to the optical depth $\tau$ of the atmosphere as:

$m_{\rm below} - m_{\rm above} = 2.5 \log_{10}(e)\, \tau X \approx 1.086\, \tau X$

Where $m_{\rm below}$ and $m_{\rm above}$ are the magnitudes below and above the atmosphere respectively.

### Example: J21512543-2441000

As an example, I'd like to calculate the 2MASS J-band magnitude of the brown dwarf at 21h51m25.43s -24d41m00s, given a low resolution NIR energy flux density from the SpeX Prism instrument on the 3m NASA Infrared Telescope Facility. Interpolating the filter throughput to the object spectrum and then integrating as in the equation above, I get as my instrumental magnitude in the J-band. Performing the same procedure on the flux calibrated spectrum of Vega, I get for my J-band zero point magnitude. Checking the FITS file header, I will use for the airmass. The mean extinction coefficient for the MKO system J-band is given as in Tokunaga & Vacca (2007), making the atmospheric correction term . The corrected magnitude is then: Which is only 0.007 magnitudes off from the value of from the 2MASS catalog.

### Remaining Problems

As shown in the example above, this works... but not for every object.

2MASS apparent J magnitudes vs. my calculated apparent J magnitudes for 67 brown dwarfs. The solid black line is for perfect agreement and the dashed line is a best fit of the data.

2MASS apparent H magnitudes vs. my calculated apparent H magnitudes for 67 brown dwarfs. The solid black line is for perfect agreement and the dashed line is a best fit of the data.

I whittled down my sample of 875 to only those objects with flux units and airmass values taken at Mauna Kea, so that I could use the same extinction coefficient and make sure they are all in the same units of [erg s-1 cm-2 A-1]. Then I pulled the 2MASS catalog J and H magnitudes with uncertainties for these remaining objects and plotted them against my calculated values with uncertainties. To the left are the plots of the 67 objects that fit the selection criteria in J-band (above) and H-band (below). Though it's not the biggest sample, the deviation of the best fit line from unity suggests I'm off by a factor of 0.9 from the 2MASS catalog value across the board. But more worrisome is the fact that most of the calculated magnitudes are not within the errors of the 2MASS magnitudes. This deviation ranges from very good agreement of a few thousandths of a magnitude up to the worst offenders of about 0.8 mags.

# Filter Effective Wavelength(s)

The effective wavelength of a filter for narrow band photometry can easily be approximated by a constant and just looked up when needed. For broad band photometry, however, the width of the filter and the amount of flux in the band being measured actually come into play. The effective wavelength of a filter is given by:

$\lambda_{\rm eff} = \frac{\int \lambda\, S(\lambda)\, f_\lambda\, d\lambda}{\int S(\lambda)\, f_\lambda\, d\lambda}$

Where $S(\lambda)$ is the scalar filter throughput and $f_\lambda$ is the flux density in units of [erg s-1 cm-2 A-1] or [photons s-1 cm-2 A-1], depending upon whether you are using an energy measuring or a photon counting detector, respectively.

Here are the results for 67 brown dwarfs with complete spectrum coverage of the 2MASS J-band:

Effective wavelength values for the 2MASS J-band filter. Blue and green circles indicate $\lambda_{\rm eff}$ calculated using photon flux densities (PFD) and energy flux densities (EFD) respectively. Filled circles are for 67 confirmed brown dwarfs. Open circles are for Vega.

The red line on the plot shows the specified value given by 2MASS. For fainter objects like brown dwarfs (filled circles), the calculated effective wavelength of the J-band filter can shift redward by as much as 150 angstroms.
Vega (open circles) shifts it blueward by about 30 angstroms. The difference is small but measurable, and it demonstrates the dependence of the effective wavelength on the filter width, the source spectrum, and the detector type when doing broad band photometry.

# Photon Flux Density vs. Energy Flux Density

One of the subtleties of photometry is the difference between magnitudes and colors calculated using energy flux densities (EFD) and photon flux densities (PFD). The complication arises since the photometry presented by many surveys is calculated using PFD, but spectra (specifically the synthetic variety) are given as EFD. The difference is small but measurable, so let's do it right. The following is the process I used to remedy the situation by switching my models to PFD so they could be directly compared to the photometry from the surveys. Thanks to Mike Cushing for the guidance.

### Filter Zero Points

Before we can calculate the magnitudes, we need filter zero points calculated from PFD. To do this, I started with a spectrum of Vega in units of [erg s-1 cm-2 A-1] snatched from STScI. Then the zero point flux density in [photons s-1 cm-2 A-1] is:

$F_{\lambda,0} = \frac{\int \frac{\lambda}{hc}\, f_{\lambda,\rm Vega}\, S(\lambda)\, d\lambda}{\int S(\lambda)\, d\lambda}$

Where $f_{\lambda,\rm Vega}$ is the given energy flux density of Vega in [erg s-1 cm-2 A-1], $F_{\lambda,0}$ is the photon flux density in [photons s-1 cm-2 A-1], and $S(\lambda)$ is the scalar filter throughput. Since I'm starting with a spectrum of Vega in EFD units, I need to multiply by $\lambda/hc$ to convert it to PFD units. In Python, this looks like:

def zp_flux(band):
    from scipy import trapz, interp, log10
    wave, flux = vega()
    filt = get_filters()[band]
    h, c = 6.6260755E-27, 2.998E14   # [erg*s], [um/s]
    I = interp(wave, filt['wav'], filt['rsr'], left=0, right=0)
    return trapz(I*flux*wave/(h*c), x=wave)/trapz(I, x=wave)

### Calculating Magnitudes

Now that we have the filter zero points, we can calculate the magnitudes using:

$m = -2.5 \log_{10} \left( \frac{F_\lambda}{F_{\lambda,0}} \right)$

Where $m$ is the apparent magnitude and $F_\lambda$ is the flux from our source, given similarly by:

$F_\lambda = \frac{\int \frac{\lambda}{hc}\, f_\lambda\, S(\lambda)\, d\lambda}{\int S(\lambda)\, d\lambda}$

Since the synthetic spectra I'm using are given in EFD units, I need to multiply by $\lambda/hc$ to convert them to PFD units, just as I did with my spectrum of Vega. In Python the magnitudes are obtained the same way as above, but we use the source spectrum in [erg s-1 cm-2 A-1] instead of Vega. Then the magnitude is just:

mag = -2.5*log10(source_flux(band)/zp_flux(band))

Below is an image that shows the discrepancy between using EFD and PFD to calculate colors for comparison with survey photometry. The circles are colors calculated from synthetic spectra of low surface gravity (large circles) to high surface gravity (small circles). The grey lines are iso-temperature contours. The jumping shows the different results using PFD and EFD. The stationary blue stars, green squares and red triangles are catalog photometric points calculated from PFD.

### Other Considerations

The discrepancy I get between the same color calculated from PFD and EFD, though, is as much as 0.244 mags (in r-W3 at 1050K), which seems excessive. The magnitude calculation reduces to:

$m = -2.5 \log_{10} \left( \frac{\int \lambda\, f_\lambda\, S(\lambda)\, d\lambda}{\int \lambda\, f_{\lambda,\rm Vega}\, S(\lambda)\, d\lambda} \right)$

Since the filter profile is interpolated with the spectrum before integration, I thought the discrepancy must be due only to the difference in resolution between the synthetic and Vega spectra. In other words, I have to make sure the wavelength arrays for Vega and the source are identical so the trapezoidal sums have the same width bins. This reduces the discrepancy in r-W3 at 1050K from -0.244 mags to -0.067 mags, which is better. However, the discrepancy in H-[3.6] goes from 0.071 mags to -0.078 mags.
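In practice, making the grids identical just means resampling Vega onto the source wavelength grid before either integral is taken. A minimal sketch of the idea, with hypothetical array names (src_wave, src_flux, vega_wave, vega_flux, filt_wave, filt_rsr) and the same scipy helpers as the snippets above:

from scipy import interp, trapz, log10

# Resample the filter and Vega onto the source wavelength grid so that both
# trapezoidal sums use identical bins, then take the magnitude from the ratio.
S = interp(src_wave, filt_wave, filt_rsr, left=0, right=0)          # filter on the source grid
f_vega = interp(src_wave, vega_wave, vega_flux, left=0, right=0)    # Vega on the source grid

mag = -2.5*log10(trapz(S*src_flux*src_wave, x=src_wave) /
                 trapz(S*f_vega*src_wave, x=src_wave))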
### To Recapitulate

In summary, I had a spectrum of Vega and some synthetic spectra, all in energy flux density units of [erg s-1 cm-2 A-1], and some photometric points from the survey catalogs calculated from photon flux density units of [photons s-1 cm-2 A-1]. In order to compare apples to apples, I first converted my spectra to PFD by multiplying by $\lambda/hc$ at each wavelength point before integrating to calculate my zero points and magnitudes.

# Colors Diagnostic of Surface Gravity

The goal here is to find a prescription of colors diagnostic of brown dwarf surface gravity. Since early optical as well as far infrared spectra and photometry are uncommon, the bands of interest should only include i and z from SDSS; J, H and Ks from 2MASS; and W1, W2 and W3 (but not W4, with only 10 percent detection) from WISE. In order to find said prescriptions, I used the BT-Settl models (at solar metallicity, ranging from 1000-3000 K in effective temperature and 3.0-5.5 dex in log surface gravity) to produce a suite of color-color and color-parameter plots.

One method I employed was to choose one effective temperature (in this case 2500K) and anchor the colors in one band that doesn't vary much between high and low surface gravity, e.g. the z-band. Then I chose the other two bands such that one was more luminous at low gravity and the other was more luminous at high gravity, e.g. W2 and J band respectively. Then the color-color plot of these bands looks like:

In this plot of z-J vs. z-W2 the smallest circles are objects with high surface gravity and the largest have low surface gravity (log(g) = 5.5 to 3.5 respectively). The light grey lines are iso-temperature contours.

In this particular case, there is little-to-no dispersion in z-J for Teff = 2500K (d = 0.009) and an appreciable dispersion in z-W2 for that same Teff (d = 0.32). Notice the tight vertical grouping (z-J) and dispersed horizontal grouping (z-W2) for the model objects of Teff = 2500K and varying log(g) in the red rectangle on the color-color plot above. Double-checking with the color-Teff plots, we can see that the dispersion in z-J in the plot on the left is tiny, and the horizontal offset in the color-color plot is due to the 0.32 magnitude dispersion in z-W2 on the right below.

Of course this is just a different way of looking at the same thing, but I might be able to find colors that are reliable indicators of gravity (and thus age) if I can find a bunch of these examples where the flux in the secondary and tertiary bands is flipped. Of note is the fact that at this temperature in this color-color plot the points are also isolated, i.e. there are no degeneracies with objects of any other temperature. That means that if I find an object with a z-J of about 1.65, I know that it has an effective temperature of about 2500K. Then I can determine its age by seeing if its z-W2 color is closer to 3.3 (young) or 2.9 (old).

This of course does not work for all temperatures, as shown in the red circle in the color-color plot above, which demonstrates a degeneracy between hotter young objects (Teff = 3000K, log(g) = 3.5) and cooler old objects (Teff = 2800K, log(g) = 5.5) with a temperature difference of 200K. Though there is no definitive combination of colors to identify the age of an object irrespective of temperature, what I have done here is found a collection of prescriptions that are reliable indicators of age over small temperature ranges.

# Color vs. Spectral Type Model Comparison

Here are color vs. spectral type plots for the 2MASS J, H and Ks bands.
The blue circles are for the objects with parallax measurements. The red squares are for the AMES-Dusty model spectra with spectral types gleaned from effective temperature according to Golimowski et al (2004). While the AMES-Dusty models are known not to be a good fit for objects with effective temperatures lower than about 2200K (shown by the disagreement in L dwarf colors of objects and models), the M dwarfs fit fairly well for J-Ks versus Spectral Type. However, the models are under-luminous in J-H and over-luminous in H-Ks for M dwarfs, indicating a possible problem with H-band modeling. The models shown are calculated with a surface gravity of 5.5, which means that the models produce a "peakier" H-band than the objects actually exhibit. In a color-color plot of J-H versus H-Ks, the H-band throws off the model colors on both axes causing a diagonal shift (bluer in J-H, redder in H-Ks) of M and L dwarfs compared to the models: # Identifying Nearby Young Stars, Part 1: Young Star Isochrones As stars form, their position on the HR diagram (or, equivalently, a Color-Magnitude diagram) changes.   They start out very cool but physically very large and very bright; as they collapse under gravity and become fully supported by hydrogen fusion power, they become smaller and dimmer. The practical upshot of all of this is that we can determine the ages of stars based on their HR diagram locations - for a given luminosity and color, there is an associated age.  This is particularly useful for low-mass objects, which take extremely long times to reach the main sequence: An M0 star probably takes 200 Myr to reach the main sequence (Dotter et al. 2008), while a brown dwarf will never reach any kind of main sequence, and will slowly cool and dim forever. Typically, people use theoretical stellar evolution models like Baraffe et al. (1998), but in practice it is also possible to make empirical relations from known young stars with parallaxes, by fitting polynomials to them.  The diagram below shows a set of fifth-order (x5) polynomials that were fit to the single-star members of nearby young associations, as they appeared in Riedel et al. (2011). A Color-magnitude diagram (Absolute V magnitude versus V-K photometric color) of M dwarf stars with parallaxes. The main sequence is represented by stars with trigonometric (annual) parallaxes within 10 parsecs of the Sun. M dwarf members of associations that are less than ~100 Myr old and closer than 100 parsecs are represented as colored and shaped items on the plot, and empirical isochrones are also shown. As expected, we see multi-magnitude differences in the luminosity of Epsilon Chameleon members (pink) versus similarly-colored members of TW Hydra (yellow) or Beta Pictoris (blue).  On this basis, I presumed the nearby M dwarf AP Columbae was probably older than Beta Pictoris, but younger than AB Doradus (which annoyingly lies within the range of high-metallicity main sequence stars). Of course, these polynomials are not perfect - they are dependent on the quality of the parallaxes and whether a star actually is a single, “normal” member of the group... and the polynomials are only useful between the boundaries for which there is data to fit.  As of the time when I made these polynomial fits, there were only two young brown dwarfs -- both members of TW Hydra -- with parallaxes AND Johnson V photometry, which is why all the other lines terminate at the middle and hotter M spectral types.  
If I replaced the Johnson V colors with something redder (I or J band) it would be a lot easier to produce empirical isochrones for brown dwarfs... although at the moment, extremely few young brown dwarfs are known.

# Spectral Energy Distributions

The goal here was to investigate the atmospheric properties of known young objects and identify new brown dwarf candidates by producing extended spectral energy distributions (SEDs). These SEDs are constructed by combining WISE mid-infrared photometry with our extensive database of optical and near-infrared spectra and parallaxes. The BDNYC Database has about 875 objects, and the number of objects with parallaxes is about 250. My code queries the database and the parallax measurements by right ascension and declination and then identifies the matches with enough spectra and photometry to produce an SED. Next, it checks the flux and wavelength units and makes the appropriate conversions to [ergs][s-1][cm-2][cm-1] and [um] respectively. It then runs a fitting routine across BT-Settl models of every permutation of:

• 400 K < Teff < 4500 K in 50 K increments,
• 3.0 dex < log(g) < 5.5 dex in 0.1 dex increments, and
• 0.5 RJup < radius < 1.3 RJup in 0.05 RJup increments.

Once the best match is found, it plots the synthetic spectrum (grey) along with the photometric points converted to flux in the SDSS, 2MASS and WISE bands (grey dots). In this manner, the fitting routine guesses the effective temperature, surface gravity and radius simultaneously. Here are some preliminary plots:

# Brown Dwarf Synthetic Photometry

The goal here was to get the synthetic colors in the SDSS, 2MASS and WISE filters of ~2000 model objects generated by the PHOENIX stellar and planetary atmosphere software. Since it would be silly (and incredibly slow... and much more boring) to just calculate and store every single color for all 12 filter profiles, I wrote a module to calculate colors a la carte.

### The Filters

I got the J, H, and K band relative spectral response (RSR) curves from the 2MASS documentation, the u, g, r, i and z bands from the SDSS documentation, and the W1, W2, W3, and W4 bands from the WISE documentation.
Here are some preliminary plots:

# Brown Dwarf Synthetic Photometry

The goal here was to get the synthetic colors in the SDSS, 2MASS and WISE filters of ~2000 model objects generated by the PHOENIX stellar and planetary atmosphere software. Since it would be silly (and incredibly slow... and much more boring) to just calculate and store every single color for all 12 filter profiles, I wrote a module to calculate colors a la carte.

### The Filters

I got the J, H, and K band relative spectral response (RSR) curves from the 2MASS documentation, the u, g, r, i and z bands from the SDSS documentation, and the W1, W2, W3, and W4 bands from the WISE documentation.

I dumped all my .txt filter files into one directory and wrote a function to grab them all, pull out the wavelength and transmission values, and return a dictionary keyed by filter name, with the wavelength values in position [0], the transmission values in position [1], and the effective wavelength in position [2]:

def get_filters(filter_directory):
  import glob, os
  files = glob.glob(filter_directory+'*.txt')
  if len(files) == 0:
    print 'No filters in', filter_directory
  else:
    filter_names = [os.path.splitext(os.path.basename(i))[0] for i in files]
    RSR = [open(i) for i in files]
    # Skip comment lines and convert each row to [wavelength, transmission]
    filt_data = [filter(None,[map(float,i.split()) for i in j if not i.startswith('#')]) for j in RSR]
    for i in RSR: i.close()
    RSR_x = [[x[0] for x in i] for i in filt_data]
    RSR_y = [[y[1] for y in i] for i in filt_data]
    # Build the dictionary: (wavelengths, transmissions, effective wavelength)
    filters = {}
    for i,j,k in zip(filter_names,RSR_x,RSR_y):
      filters[i] = j, k, center(i)
    return filters

### Calculating Apparent Magnitudes

We can't have colors without magnitudes, so here's a function to grab the spectra with the specified Teff and log(g) and calculate the apparent magnitudes in a particular band:

def mags(band, teff='', logg='', bin=1):
  from scipy.io.idl import readsav
  from collections import Counter
  from scipy import trapz, log10, interp
  from operator import itemgetter

  s = readsav(path+'modelspeclowresdustywise.save')
  Fr, Wr = [i for i in s.modelspec['fsyn']], [i for i in s['wsyn']]
  Tr, Gr = [int(i) for i in s.modelspec['teff']], [round(i,1) for i in s.modelspec['logg']]

  # The band to compute
  RSR_x, RSR_y, lambda_eff = get_filters(path)[band]

  # Option to specify an effective temperature value
  if teff:
    t = [i for i, x in enumerate(s.modelspec['teff']) if x == teff]
    if len(t) == 0:
      print "No such effective temperature! Please choose from 1400K to 4500K in 50K increments or leave blank to select all."
  else:
    t = range(len(s.modelspec['teff']))

  # Option to specify a surface gravity value
  if logg:
    g = [i for i, x in enumerate(s.modelspec['logg']) if x == logg]
    if len(g) == 0:
      print "No such surface gravity! Please choose from 3.0 to 6.0 in 0.1 increments or leave blank to select all."
  else:
    g = range(len(s.modelspec['logg']))

  # Pulls out the spectra that fit the criteria above
  obj = list((Counter(t) & Counter(g)).elements())
  F = [Fr[i][::bin] for i in obj]
  T = [Tr[i] for i in obj]
  G = [Gr[i] for i in obj]
  W = Wr[::bin]

  # Interpolate the filter curve onto the model wavelength grid
  I = interp(W,RSR_x,RSR_y,left=0,right=0)

  # Convolve the model flux with the filter (FxR = RxF)
  FxR = [convolution(i,I) for i in F]

  # Integral of the RSR curve over all lambda
  R0 = trapz(I,x=W)

  # Integrate to get the flux [erg][s-1][cm-2], then divide by R0 to get the flux density [erg][s-1][cm-2][cm-1]
  F_lambda = [trapz(y,x=W)/R0 for y in FxR]

  # Calculate the apparent magnitude of each spectrum in the band
  Mags = [round(-2.5*log10(m/F_lambda_0(band)),3) for m in F_lambda]

  result = sorted(zip(Mags, T, G, F, I, FxR), key=itemgetter(1,2))
  result.insert(0,W)

  return result
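The mags() function leans on three helpers that never appear in the post: center(), convolution(), and F_lambda_0(). Here is my guess at minimal versions consistent with how they are called above; the logic and especially the zero-point numbers are placeholders, not the originals.

from scipy import trapz

def center(band):
  # RSR-weighted mean wavelength of a filter profile, read straight from
  # its .txt file (assumes each file is named after its band)
  rows = [map(float, line.split()) for line in open(path+band+'.txt') if line.strip() and not line.startswith('#')]
  x = [r[0] for r in rows]
  y = [r[1] for r in rows]
  return trapz([i*j for i,j in zip(x,y)], x=x)/trapz(y, x=x)

def convolution(flux, rsr):
  # Despite the name, this is just the element-wise product of the model
  # spectrum with the interpolated filter transmission
  return flux*rsr

def F_lambda_0(band):
  # Zero-magnitude flux density in [erg][s-1][cm-2][cm-1]; the numbers here
  # are placeholders, the real values come from the 2MASS/SDSS/WISE documentation
  zero_points = {'J': 3.13e-6, 'H': 1.13e-6, 'K': 4.28e-7}
  return zero_points[band]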
### Calculating Colors

Now we can calculate the colors. I wrote a function that accepts any two bands, with options to specify a surface gravity and/or effective temperature as well as a bin size to cut down on computation. Here's the code:

def colors(first, second, teff='', logg='', bin=1):
  (Mags_a, T, G) = [[i[j] for i in mags(first, teff=teff, logg=logg, bin=bin)[1:]] for j in range(3)]
  Mags_b = [i[0] for i in mags(second, teff=teff, logg=logg, bin=bin)[1:]]
  colors = [round(a-b,3) for a,b in zip(Mags_a,Mags_b)]

  print_mags(first, colors, T, G, second=second)

  return [colors, T, G]

The PHOENIX code gives the flux as Fλ in cgs units [erg][s-1][cm-2][cm-1], but as long as both spectra are in the same units the colors come out the same.

### Makin' It Handsome

Then I wrote a short function to print out the magnitudes or colors in the Terminal:

def print_mags(first, Mags, T, G, second=''):
  LAYOUT = "{!s:10} {!s:10} {!s:25}"

  if second:
    print LAYOUT.format("Teff", "log g", first+'-'+second)
  else:
    print LAYOUT.format("Teff", "log g", first)

  for i,j,k in sorted(zip(T, G, Mags)):
    print LAYOUT.format(i, j, k)

### The Output

If I just want the J-K color for objects with log g = 4.0 over the entire range of effective temperatures, I launch ipython and just do:

In [1]: import syn_phot as s
In [2]: s.colors('J','K', logg=4)
Teff -------- log g -------- J-K
1400.0 ------ 4.0 ---------- 4.386
1450.0 ------ 4.0 ---------- 4.154
...
4450.0 ------ 4.0 ---------- 0.756
4500.0 ------ 4.0 ---------- 0.733

Similarly, I can specify just the target effective temperature and get the whole range of surface gravities. Or I can specify an effective temperature AND a surface gravity to get the color of just that one object:

In [3]: s.colors('i','W2', teff=3050, logg=5)
Teff -------- log g -------- i-W2
3050.0 ------ 5.0 ---------- 3.442

I can also reduce the number of data points in each flux array if my sample is very large. I just have to specify the number of data points to skip with the "bin" optional parameter. For example:

In [4]: s.colors('W1','W2', teff=1850, bin=3)

This will calculate the W1-W2 color for all the objects with Teff = 1850 K and all gravities, but only use every third flux value.

I also wrote functions to generate color-color, color-parameter and color-magnitude plots, but those will be in a different post.

### Plots!

Here are a few color-parameter animated plots I made using my code, and a few colorful color-color plots as well.

### Plots with observational data

Just to be sure I'm on the right track, here's a color-color plot of J-H vs. H-Ks for objects with a log surface gravity of 5 dex (blue dots), plotted over data for the Chamaeleon I Molecular Cloud (semi-transparent points) from Carpenter et al. (2002). The color scale is for main sequence stars and the black dots are probable members of the group. Cooler dwarfs move up and to the right.

And here's a plot of J-Ks vs. z-Ks as well as J-Ks vs. z-J. Again, the blue dots are from my synthetic photometry code at log(g) = 5 and the semi-transparent points with error bars are from Dahn et al. (2002).
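The plotting functions themselves are for a different post, but for reference, here is a minimal sketch of how a color-color comparison like the J-H vs. H-Ks figure could be assembled from the colors() output with matplotlib. The "observed" colors below are placeholder numbers, not the Carpenter et al. (2002) data.

import matplotlib.pyplot as plt
import syn_phot as s

# Synthetic colors at log(g) = 5 over the full Teff grid
jh = s.colors('J', 'H', logg=5)[0]
hk = s.colors('H', 'K', logg=5)[0]

# Placeholder observed colors standing in for literature photometry
obs_hk = [0.15, 0.25, 0.35, 0.45]
obs_jh = [0.55, 0.65, 0.80, 0.90]

plt.scatter(obs_hk, obs_jh, alpha=0.3, label='observed (placeholder)')
plt.scatter(hk, jh, color='b', label='synthetic, log(g)=5')
plt.xlabel('H-Ks')
plt.ylabel('J-H')
plt.legend()
plt.show()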