## Introduction

Increasing energy demands encourage scientists to find low-cost, clean, renewable, and sustainable alternative energy sources1,2,3. A comparative study of the literature on various alternative fuels, such as ethanol, vegetable oils, microbial oils, biomass, glycerol, biodiesel, and hydrogen, has been reported4,5. Commercial ethanol for biofuel is produced from feedstocks such as sugarcane, corn, and cassava. Because these raw materials are also used as human food and animal feed, their use for fuel creates price competition6. Agricultural wastes, particularly lignocellulosic materials, have therefore been considered promising for second-generation bioethanol production. Pineapple peel, core, stem, and leaves are byproducts of pineapple processing (approximately 50% (w/w) of the pineapple weight)7. These byproducts are highly biodegradable and rich in proteins and carbohydrates, making them promising and abundant raw materials for ethanol production8,9. Thailand and Vietnam are among the top pineapple-producing countries, producing 2.21 and 0.59 million metric tons, accounting for 8.91% and 2.38% of the world's production, respectively10. In summer, the temperature in Thailand and Vietnam increases dramatically, a trend that will be exacerbated by global warming. Furthermore, the temperature inside a bioreactor may rise from 30 °C to approximately 40 °C during ethanol fermentation11. High temperatures inhibit cell growth and the metabolic activity of yeast cells, resulting in a reduction in ethanol yield and volumetric ethanol productivity12,13. Therefore, the use of thermotolerant microorganisms is a promising approach to solving the problem of ethanol production at high temperatures. High-temperature ethanol fermentation offers several advantages, such as decreased costs associated with cooling, higher yields obtained in saccharification, and a reduced risk of contamination by bacteria14,15. Even though many thermotolerant yeasts can tolerate and ferment at high temperatures, several stresses, e.g., thermal, ethanol, osmotic, ionic, lignocellulosic inhibitors, and reactive oxygen species (ROS), are unfavorable for yeast growth and fermentation activity. Denaturation of DNA, proteins, lipids, and essential cellular structures of yeast cells under such stresses has been reported previously16,17,18,19. However, the molecular mechanism conferring thermotolerance acquisition during high-temperature ethanol fermentation using pineapple waste hydrolysate (PWH) as feedstock has not yet been evaluated.

S. cerevisiae can reproduce under both anaerobic and aerobic conditions and accumulate ethanol at high concentrations, making it the preferred choice for starter cultures in beverage and food fermentations20. Recently, S. cerevisiae has become one of the most engineered yeasts for ethanol production from agricultural, kitchen, industrial, and lignocellulosic wastes21,22. S. cerevisiae HG1.1 is one of several thermotolerant yeasts isolated from soil samples in Vietnam23. This newly isolated yeast can grow and produce ethanol at temperatures up to 45 °C in YM medium containing 160 g/L glucose. Furthermore, it can tolerate ethanol and acetic acid up to 14% (v/v) and 4 g/L, respectively, when growing on YM agar at 35 °C23. However, its ethanol production potential using agricultural waste as feedstock has never been elucidated. Therefore, this newly isolated yeast was chosen for ethanol production under high-temperature conditions using PWH as feedstock in this study.
The disadvantages of single-variable optimization, such as missing interactions between experimental factors and requiring a large number of experiments, can be eliminated by a statistical experimental model such as response surface methodology (RSM) based on a central composite design (CCD)24. Statistical tools such as RSM are used to design experiments, identify the positive and negative variables and their interactions, derive a predictive equation for cost-effective optimization, and reduce the number of experimental runs25,26. Several recent reports have used this statistical method to optimize medium compositions for bioethanol fermentation27,28,29. This study used a statistical optimization methodology to investigate ethanol production from PWH at high temperatures by the newly isolated thermotolerant yeast S. cerevisiae HG1.1. In addition, reverse transcription quantitative real-time polymerase chain reaction (RT–qPCR) was applied to analyze the expression levels of selected genes responsible for growth and ethanol stress (ATP6, OLE1, ERG8), oxidative stress (GLR1, SOD1), DNA repair (RAD14, MRE11, POL4), the pyruvate-to-tricarboxylic acid (TCA) pathway (PDA1, CIT1, LYS21), the pyruvate-to-ethanol pathway (PDC1, ADH1, ADH2), and acetic acid stress (ACS1, ALD2) in S. cerevisiae HG1.1. This study could provide the optimum conditions for ethanol production from PWH and help better understand the molecular mechanism by which yeast cells acquire thermotolerance and fermentation efficiency during high-temperature ethanol fermentation.

## Materials and methods

### Strain and culture media

The newly isolated thermotolerant yeast S. cerevisiae HG1.1, isolated from soil samples from Vietnam, was used in this study. Isolation, screening, and selection of this thermotolerant yeast strain were described by Phong et al.23. The yeast culture was stored at the Department of Biotechnology, Faculty of Technology, Khon Kaen University, Thailand. The medium used was YM medium (0.3% yeast extract, 0.3% malt extract, 0.5% peptone, and 1.0% D-glucose). Yeast inoculum was prepared by transferring one colony of a 24-h culture grown on a slant of YM agar to a test tube containing 10 mL of YM broth and incubating on a rotary shaker at 35 °C and 100 rpm for 18 h. Then, 10 mL of preculture was inoculated into a 500-mL Erlenmeyer flask containing 200 mL of YM broth (pH 5.0) and incubated on a rotary shaker under the same conditions for 18 h. The final yeast cell concentrations were approximately 1.0 × 108–2.5 × 108 cells/mL. The active yeast cells were collected by centrifugation and used as a starter culture.

### Plant material

Pineapple (Ananas comosus L. cv. Pattavia) wastes (pineapple peels and core) were collected in May 2018 from the Food Services Center, Khon Kaen University, Khon Kaen province, Thailand, with the permission of the Khon Kaen University Office. The plant used in this study is not wild but cultivated in Nong Khai province, Thailand. A voucher specimen (dried material) was deposited at the Department of Biotechnology, Faculty of Technology, Khon Kaen University, with the code number KKUDB-PPC-2018-01. All methods were performed following the relevant guidelines in the Methods section.

### PWH preparation and chemical composition analysis

Pineapple wastes were collected and chopped into small pieces. They were dried under natural conditions (sun drying for 3 days) and then in a hot air oven for 24 h.
The dried pineapple wastes were milled in a laboratory blender, mixed in a single lot, and stored prior to use. The fiber compositions of the dried pineapple wastes were analyzed using an Ankom Fiber Analyzer30 at the Animal Science Laboratory, Faculty of Agriculture, Khon Kaen University. PWH was prepared by transferring dried pineapple wastes to 0.5% (v/v) sulfuric acid (H2SO4) and heating at 121 °C for 15 min31. After hydrolysis, the pellet was removed by centrifugation, and the resulting supernatant was collected and kept at −20 °C. The sugar compositions, acetic acid, formic acid, and furfural were analyzed using high-performance liquid chromatography (HPLC) at the Central Laboratory, Faculty of Technology, Khon Kaen University and Central Laboratory (Thailand) Co., Ltd., Khon Kaen. Minerals, such as nitrogen, phosphorus, and magnesium, were analyzed at the Chemical Analysis Laboratory, Agricultural Development Research Center in Northeast Thailand, Khon Kaen, Thailand.

### Effect of inorganic nitrogen sources on ethanol fermentation

Inorganic nitrogen sources have been shown to affect ethanol production under high-temperature fermentation conditions. In this study, based on the literature, various inorganic nitrogen sources, including urea [CO(NH2)2], ammonium sulfate [(NH4)2SO4], ammonium nitrate [NH4NO3], and diammonium phosphate [DAP, (NH4)2HPO4], at different concentrations27,32 were evaluated for their effect on ethanol production by S. cerevisiae HG1.1. Ethanol fermentation was conducted in triplicate using 250-mL Erlenmeyer flasks containing 100 mL of PWH (pH 5.0) supplemented with the various nitrogen sources at different concentrations and an initial yeast cell concentration of 5.0 × 106 cells/mL. All flasks were incubated at 40 °C on a rotary shaker at 100 rpm. Samples were withdrawn every 12 h and subjected to ethanol and total sugar analyses.

### Optimization of ethanol production at high temperature

Based on the literature, several environmental factors or variables affect ethanol production under high-temperature conditions. In this study, the influential factors chosen were initial yeast cell concentration, pH of the fermentation medium, manganese (II) sulfate (MnSO4·H2O), zinc sulfate (ZnSO4·7H2O), magnesium sulfate (MgSO4·7H2O), potassium dihydrogen phosphate (KH2PO4), and yeast extract12,23,27,28,32,33. The significant independent factors positively affecting ethanol production from PWH by S. cerevisiae HG1.1 were screened and selected using a Plackett–Burman design (PBD). The codes and actual values of the independent factors are presented in Table 1. The batch ethanol fermentation experiments were performed in triplicate using 250-mL Erlenmeyer flasks containing 100 mL of PWH (pH 5.0). The ethanol concentration was set as the response variable in this study. The significant independent variables selected based on the PBD were subjected to an optimization experiment using RSM based on the CCD. A confirmatory experiment was carried out using the optimized conditions from the response surface analysis.

### RT–qPCR analysis of gene expression in S. cerevisiae HG1.1 under high-temperature ethanol fermentation

The yeast inoculum was transferred into a 250-mL Erlenmeyer flask containing 100 mL PWH (pH 5.5) supplemented with 4.95 g/L yeast extract at an initial cell concentration of 8.0 × 107 cells/mL.
All flasks were incubated on a rotary shaker at 100 rpm under four different fermentation conditions: (1) unstressed condition (flasks incubated at 30 °C for 9 h); (2) heat shock (flasks incubated at 30 °C for 9 h, then shifted to 40 °C for 30 min); (3) short-term heat stress (flasks incubated at 30 °C for 9 h, then shifted to 40 °C for 3 h); and (4) long-term heat stress (flasks incubated at 40 °C for 9 h). Yeast cells were harvested at specific time points (i.e., 9 h for the unstressed condition, 9 h 30 min for heat shock, 12 h for short-term heat stress, and 9 h for long-term heat stress) by centrifugation at 5,000 rpm and 4 °C for 5 min and then subjected to total RNA isolation using an RNA extraction kit (GF-1 Total RNA extraction kit, Vivantis, USA) with some modifications as described by Techaparin et al.34. The RNA concentration in each sample was measured and adjusted using a BioDrop μLITE (BioDrop Ltd, UK). RT–qPCR was performed in triplicate on a 7500 Fast Real-Time PCR System using the qPCRBIO SyGreen One-Step Detect Lo-ROX kit (PCR Biosystems, London, UK). The reactions were conducted in a total volume of 20 μL containing 1 μL RNA sample (100 ng RNA), 0.8 μL of each specific forward and reverse primer, 1 μL 20× RTase, 10 μL 2× qPCRBIO SyGreen One-Step mix, and 6.4 μL RNase-free water. The thermal cycling conditions were as follows: 45 °C for 30 min; 95 °C for 2 min; and 40 cycles of 95 °C for 15 s and 60 °C for 1 min. The primer pairs used for RT–qPCR are listed in Table 2. RNase-free water was used instead of the RNA template as the negative control. The actin gene (ACT1) was used as an internal control. Relative gene expression was calculated using the 2^−ΔΔCT method, in which the target gene amount was normalized to the reference gene (ACT1).

### Analytical methods and data analysis

Viable cell concentration was determined with a haemacytometer using the methylene blue staining technique35. Total sugars were analyzed by the phenol–sulfuric acid method36 using a spectrophotometer (UV-1601, Shimadzu). The ethanol concentration was determined by gas chromatography (GC-14B, Shimadzu) using a packed polyethylene glycol (PEG-20 M) column with a flame ionization detector37. The following equations were used to calculate the fermentation parameters: ethanol yield (Yp/s, g/g) = PE/(S0 − St); volumetric ethanol productivity (Qp, g/L.h) = PE/t; yield efficiency (Ey, %) = (Yp/s/0.511) × 100; and sugar consumption (Sc, %) = [(S0 − St)/S0] × 100, where PE is the ethanol concentration (g/L), S0 is the initial sugar concentration (g/L), St is the sugar concentration (g/L) at time t, and t is the fermentation time (h). The data are expressed as the mean ± standard deviation (SD). Analysis of variance was used to evaluate the differences among the treatments using Duncan's multiple range test (DMRT). The statistical analysis was carried out using Statgraphics Centurion XV (Statpoint Technologies Inc., USA).

## Results and discussion

### Composition of dried pineapple waste and PWH

The dried pineapple waste had high contents of hemicellulose (28.81%) and cellulose (16.57%), with a total crude fiber of 48.72%, while lignin comprised only 3.04% of the total dry matter. Niwaswong et al.38 reported that raw pineapple peel comprised 9.43% hemicellulose, 20.44% cellulose, and 41.21% lignin. Choonut et al.8 showed that 51.13% hemicellulose, 37.68% cellulose, and 10.24% lignin were detected in pineapple peel after hot water pretreatment at 100 °C for 240 min.
The cellulose content of the pineapple waste used in this study was lower than that of other agricultural wastes, such as rice straw (32–47% cellulose)39 and corn stover (38–40% cellulose)40. However, the cellulose and hemicellulose contents were greater than those of yam peel (5.7% cellulose, 5.1% hemicellulose) and cassava peel (12.7% cellulose, 5.5% hemicellulose)41. Glucose and fructose were the principal sugars found in PWH, accounting for 41.11 and 40.87 g/L, respectively, while sucrose and maltose were not detected. Xylose and arabinose were also detected at 5.34 and 4.42 g/L, respectively. Rattanapoltee and Kaewkannetra31 reported only 18.41 g/L glucose and 24.55 g/L fructose in pineapple peel hydrolysate, with a total sugar concentration of only 55.91 g/L. Niwaswong et al.38 reported 82.10 g/L reducing sugars from dilute acid hydrolysis of pineapple peel waste. Formic acid, acetic acid, and furfural are considered inhibitors derived from acid hydrolysis. The acetic acid concentration of PWH was 8.39 g/L, whereas the concentrations of formic acid and furfural were 0.96 g/L and 0.36 mg/L, respectively, which were lower than those reported by Rattanapoltee and Kaewkannetra31. The total sugars of PWH were 103.03 g/L, comparable to the total sugars found in orange peel hydrolysate (101 g/L)33 and lower than those of banana peel hydrolysate (155 g/L)42. Furthermore, the PWH contained minerals such as nitrogen (686 mg/L), phosphorus (274 mg/L), magnesium (126 mg/L), manganese (34 mg/L), and zinc (5 mg/L), which are essential for yeast growth and metabolic activity. Based on the sugar and mineral contents of PWH, it was considered a promising feedstock for ethanol and other biochemical production.

### Effect of inorganic nitrogen sources on ethanol production

Supplementation of the PWH with inorganic nitrogen sources did not significantly increase the final ethanol concentrations compared to the control treatment without inorganic nitrogen supplementation. The highest ethanol concentrations of 34.60, 34.56, and 33.55 g/L were achieved from the medium supplemented with NH4NO3, (NH4)2SO4, and (NH4)2HPO4, respectively, which were not significantly different from the control treatment (33.54 g/L). Furthermore, supplementation with CO(NH2)2 resulted in a lower ethanol concentration than the control (Table 3). Due to the relatively low sugar content of PWH, S. cerevisiae HG1.1 quickly converted all sugars to reach the maximum ethanol concentration with a low consumption of nitrogen sources. Moreover, PWH may already contain sufficient nitrogen for yeast growth and metabolic activity. Generally, supplemental nitrogen is essential when the fermentation process is carried out at a high initial sugar concentration (for example, greater than 200 g/L)43. The results of the present study coincide with those reported by Charoensopharat et al.32 and Arora et al.44. The C/N ratio of the fermentation medium also plays a crucial role in ethanol production. One study reported an optimum C/N ratio of 7.9 for ethanol production from sago starch using recombinant S. cerevisiae YKU 13145, while another reported a value of 35.2 for ethanol production from tapioca starch using a co-culture of Aspergillus niger and S. cerevisiae46. In this study, the C/N ratio of the fermentation medium was not determined; thus, further study is needed to clarify this hypothesis.
Based on these results, inorganic nitrogen was not selected as an independent variable in the factor-screening experiment using the Plackett–Burman design (PBD).

### Screening of significant factors for ethanol production by S. cerevisiae HG1.1 using a Plackett–Burman design (PBD)

The PBD used 7 independent factors and 12 experimental runs. A maximal ethanol concentration of 32.73 g/L and volumetric ethanol productivity of 2.18 g/L.h were achieved after 15 h of fermentation at 40 °C (Table 4). Three factors, namely initial cell concentration (A), pH (B), and yeast extract (G), were the most significant variables for ethanol production from PWH by the thermotolerant yeast S. cerevisiae HG1.1, with p values < 0.05 (Table 5). Analysis of the significance levels of these three crucial factors showed that the most influential factor was pH (p value of 0.0002), while yeast extract was the least influential (p value of 0.0485). The selected model was significant (p value < 0.005), with a high confidence level based on the R-squared (0.9837) and adjusted R-squared (0.9551) values. Based on the t value limit on the Pareto chart (Fig. 1), the three variables initial cell concentration (A), pH (B), and yeast extract (G) were considered significant, and all three positively affected ethanol production from PWH by S. cerevisiae HG1.1. In PWH with low total sugars (ca. 103 g/L), neither inorganic nitrogen sources nor additional salts were necessary. However, the initial cell concentration strongly affects the ethanol production rate: higher initial cell concentrations can promote the fermentation rate and ethanol production efficiency. Techaparin et al.28 reported that when the initial cell concentration was increased from 1.0 × 107 to 3.0 × 108 cells/mL, the ethanol concentration from sweet sorghum juice rose from 64.79 to 84.32 g/L using S. cerevisiae KKU-VN8. More than tenfold higher ethanol productivity from hydrolyzed sugarcane bagasse was achieved when the inoculum size of S. cerevisiae ITV-01 was increased from 0.2 to 10 g/L47. Yeast growth and fermentation activity are directly affected by the pH of the fermentation medium. The enzymes involved in yeast growth and the ethanol production pathway may be inactivated at low pH48. Although S. cerevisiae can grow well at pH values between 4.0 and 6.0, the optimum pH for ethanol production is approximately 5.0–5.5. Singh and Bishnoi49 demonstrated that pH 5.5 was the optimum value in a statistical optimization of ethanol production from pretreated wheat straw hydrolysate using S. cerevisiae MTCC 174. Izmirlioglu and Demirci50 also found that pH 5.5 was the optimum value for ethanol production from potato mash waste using S. cerevisiae ATCC 24859, which yielded a 30.99 g/L ethanol concentration. This pH value was also the optimum condition for ethanol production using K. marxianus NIRE-K3 at 45 °C, providing a 93.2% yield efficiency and 0.48 g/g ethanol yield44. Yeast extract is widely used as the primary organic nitrogen source in several ethanol fermentation processes and has been recognized as having a highly positive effect on ethanol production44,51. In the present study, 4.95 g/L yeast extract was the optimum concentration for ethanol production from PWH by S. cerevisiae HG1.1. Different optimum concentrations of yeast extract for ethanol production have also been reported.
For instance, Schnierda et al.51 demonstrated that 9.43 and 9.24 g/L ethanol were attained from a molasses-based medium (20 g/L sugar) supplemented with 0.5 g/L total yeast assimilable nitrogen by S. cerevisiae EC1118 and I. orientalis Y1161, respectively. Yeast extract at 3.0 g/L was determined to be the optimum concentration for ethanol production by S. cerevisiae NP01 using a fermentation medium containing 280 g/L sucrose52, whereas yeast extract at 9.0 g/L promoted ethanol production from sweet sorghum juice containing 270 g/L total sugars using S. cerevisiae NP0153. Yeast extract is essential for efficient ethanol fermentation, especially under very high gravity and high-temperature fermentation conditions, but its main drawback is its high cost. Therefore, many researchers have tried to replace it with low-cost organic nitrogen sources, such as dried spent yeast, corn-steep liquor, poultry meal, and feather meal, and have demonstrated their potential application in the production of ethanol and other biochemicals32,52.

### Optimization of conditions for ethanol production by S. cerevisiae HG1.1 using CCD

The experimental design codes and actual values of the significant independent factors, including initial cell concentration (5.0 × 106 to 1.0 × 108 cells/mL), pH (4.0 to 6.5), and yeast extract (3.0 to 12.0 g/L), are shown in Table 6. The observed ethanol concentrations from the CCD with 20 experimental runs were 19.10–33.54 g/L, and the predicted ethanol concentrations were 19.49–33.78 g/L (Table 7). The ethanol productivities were 1.59–2.80 g/L.h. A quadratic polynomial regression model (second-order polynomial equation) was established to predict the final ethanol concentration (PE) as a function of the fermentation variables:

$$\begin{aligned} \text{PE (g/L)} = {} & 31.19 + 2.32A + 2.52B - 1.14C - 1.65AB \\ & + 0.32AC + 1.42BC - 0.46A^{2} - 1.67B^{2} - 0.21C^{2} \end{aligned}$$

The results revealed that the model was statistically significant (p value < 0.0001) (Table 8). The model was reliable because the lack of fit was not statistically significant (p value > 0.005), the R-squared was 0.9808, and the adjusted R-squared was 0.9635, which was close to the R-squared. The standard deviation and coefficient of variation were only 0.71 and 2.40%, respectively. ANOVA also demonstrated that all these factors strongly affected ethanol production from PWH by S. cerevisiae HG1.1 at 40 °C; the p values of these significant factors were less than 0.0001. The 3-D response surfaces and contour plots for ethanol are presented in Fig. 2. The clearest response surface was obtained when the yeast extract concentration was fixed at 7.50 g/L and the cell concentration and pH were varied. The ethanol concentration was strongly affected by both cell concentration and pH. The maximum ethanol concentration of 33.54 g/L was achieved after 15 h of fermentation at the center pH value (pH 5.5) and a cell concentration of 1.0 × 108 cells/mL; the maximum ethanol productivity (2.80 g/L.h) was also attained under these conditions. Based on the three-factor quadratic polynomial equation, the maximum predicted ethanol concentration was 33.67 g/L under the optimum conditions: cell concentration of 8.0 × 107 cells/mL, pH of 5.4, and yeast extract concentration of 4.9 g/L.
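To make the use of this fitted model concrete, the sketch below evaluates the second-order polynomial at given factor settings. It assumes, as is usual for CCD models but not stated explicitly above, that A, B, and C are coded factor levels, with the low and high actual settings of cell concentration, pH, and yeast extract mapped linearly onto the coded scale; the specific coding ranges used here are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: evaluate the fitted CCD quadratic model for ethanol concentration (PE, g/L).
# Coefficients are taken from the regression equation reported above.

def predict_pe(a: float, b: float, c: float) -> float:
    """Predicted ethanol concentration (g/L) at coded levels a (cells), b (pH), c (yeast extract)."""
    return (31.19 + 2.32 * a + 2.52 * b - 1.14 * c
            - 1.65 * a * b + 0.32 * a * c + 1.42 * b * c
            - 0.46 * a ** 2 - 1.67 * b ** 2 - 0.21 * c ** 2)

def code(value: float, low: float, high: float) -> float:
    """Map an actual factor value onto a coded scale (linear coding is an assumption here)."""
    center = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return (value - center) / half_range

if __name__ == "__main__":
    # Center point of the design (all coded levels = 0) returns the intercept.
    print(predict_pe(0.0, 0.0, 0.0))          # 31.19 g/L

    # Illustrative factor ranges (assumed, for coding purposes only).
    a = code(8.0e7, low=5.0e6, high=1.0e8)    # initial cell concentration, cells/mL
    b = code(5.4, low=4.0, high=6.5)          # pH
    c = code(4.9, low=3.0, high=12.0)         # yeast extract, g/L
    print(round(predict_pe(a, b, c), 2))      # predicted PE at the reported optimum settings
```

A grid search or a standard optimizer over the coded factor space can be used in the same way to locate the stationary point reported by the response surface analysis.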
Based on the results of the CCD experiment and the solution of the three-factor quadratic polynomial equation, three runs that gave high levels of ethanol were selected for a confirmatory experiment. A cell concentration of 8.0 × 107 cells/mL, pH values in the range of 5.39–5.50, and yeast extract concentrations of 4.90–4.97 g/L were chosen for the confirmation test. A maximum ethanol concentration of 36.85 g/L, productivity of 3.07 g/L.h, and ethanol yield of 0.48 g/g, corresponding to a yield efficiency of 93.61% and sugar consumption of 74.81%, were achieved under the optimum conditions, i.e., a cell concentration of 8.0 × 107 cells/mL, pH of 5.5, and yeast extract concentration of 4.95 g/L. The ethanol concentrations of the three confirmatory runs were not significantly different (36.07–36.85 g/L) (Table 9). Figure 3 shows the time profile of ethanol production from PWH at 40 °C using S. cerevisiae HG1.1. The ethanol concentration quickly reached its maximal value (36.85 g/L) after 12 h of fermentation, corresponding to a dramatic decrease in total sugars (from 102.98 to 25.93 g/L). The ethanol content decreased slightly after reaching the maximum concentration due to the oxidation of ethanol by the yeast once the sugar in the fermentation medium was depleted. The remaining sugars, mostly C-5 sugars such as xylose and arabinose, were almost unchanged, since S. cerevisiae cannot consume these sugars; the residual total sugars in the fermented medium were 21.79 g/L. Although PWH contained some fermentation inhibitors, such as acetic acid (8.23 g/L), formic acid (0.96 g/L), and furfural (0.68 mg/L), the growth and fermentation activity of S. cerevisiae HG1.1 were not affected. The ethanol concentration, productivity, and yield efficiency achieved from PWH by S. cerevisiae HG1.1 at 40 °C were relatively high compared to those in several previous studies summarized in Table 10. This finding suggests that pineapple waste is a promising agricultural waste for second-generation bioethanol production.

### RT–qPCR analysis of gene expression in thermotolerant S. cerevisiae HG1.1

Hundreds of genes are differentially expressed in response to heat stress in yeast cells60,61,62. However, most previous gene expression studies were carried out using synthetic media; only a few have examined gene expression patterns using lignocellulosic materials as feedstock. Thus, this study evaluated the expression of groups of genes related to the growth and ethanol production pathway, ethanol, oxidative, and acetic acid stress, and DNA repair. The expression levels of sixteen genes responsible for growth and ethanol stress (ATP6, OLE1, ERG8), oxidative stress (GLR1, SOD1), DNA repair (RAD14, MRE11, POL4), the pyruvate-to-TCA pathway (PDA1, CIT1, LYS21), the pyruvate-to-ethanol pathway (PDC1, ADH1, ADH2), and acetic acid stress (ACS1, ALD2) in S. cerevisiae HG1.1 were evaluated using RT–qPCR. As shown in Table 11, ERG8, RAD14, ADH2, and ALD2 were up-regulated under the control growth condition (30 °C) and down-regulated under heat stress. The ATP6, GLR1, SOD1, CIT1, LYS21, ADH1, and ACS1 genes were highly expressed under heat shock at 40 °C for 30 min and markedly decreased under short- and long-term stress conditions. In contrast, the expression levels of OLE1, MRE11, POL4, PDA1, and PDC1 increased strongly when yeast cells were shifted from 30 °C to 40 °C for 3 h (short-term stress) and then decreased dramatically when cells were shifted from 30 °C to 40 °C for 9 h (long-term stress).
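For reference, the relative expression values discussed in this section follow the 2^−ΔΔCT calculation described in the Methods (normalization to ACT1 and comparison against the unstressed condition). The sketch below shows that calculation on invented Ct values; the numbers are purely illustrative and are not data from Table 11.

```python
# Minimal sketch of the 2^(-ddCt) relative-expression calculation used above.
# The Ct values below are invented for illustration only; they are not data from this study.

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene vs. the ACT1 reference, stressed vs. control (30 degC)."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene, stressed sample
    d_ct_control = ct_target_control - ct_ref_control    # normalize to reference gene, control sample
    dd_ct = d_ct_treated - d_ct_control                  # compare stressed vs. control
    return 2.0 ** (-dd_ct)

# Example with hypothetical Ct values for one target gene under heat shock vs. the unstressed control.
fold_change = relative_expression(ct_target_treated=22.1, ct_ref_treated=18.0,
                                  ct_target_control=24.0, ct_ref_control=18.2)
print(round(fold_change, 2))   # > 1 means up-regulated relative to the control condition
```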
Under heat stress, the expression levels of the PDC1 gene were markedly higher than under the control condition. Relatively low expression levels of the RAD14 gene, responsible for DNA repair, and the ALD2 gene, responsible for acetic acid stress, were detected in yeast cells under long-term stress. It should be noted that genes involved in the same stress response exhibited different expression patterns, suggesting unique expression profiles during high-temperature ethanol fermentation. ATP6, OLE1, and ERG8 are essential genes responsible for yeast growth and ethanol stress63. In response to heat and ethanol stresses, more ATP is needed for the biosynthesis processes that produce critical protective components for microbial cells, such as trehalose, glycogen, unsaturated fatty acids, and heat shock proteins63. As shown in the present study, the expression of ATP6 was triggered by heat shock, and its expression decreased slightly under short- and long-term heat stress. ATP6 is a mitochondrially encoded gene for subunit a of the F0 sector of the mitochondrial F1F0 ATP synthase; the subunit integrates into the F0F1-ATPase complex and completes its assembly to yield a functional ATPase64. Thus, the high level of ATP6 expression in S. cerevisiae HG1.1 might be correlated with ATP production under heat shock conditions. In S. cerevisiae sun049T and K. marxianus DMKU 3–1042, the ATP6 gene is also highly up-regulated during high-temperature ethanol production at 38 °C65 and 45 °C60, respectively. OLE1, encoding a fatty acid desaturase, synthesizes monounsaturated fatty acids, such as palmitoleic acid and oleic acid, from saturated fatty acids, such as palmitic acid and stearic acid66. These unsaturated fatty acids, together with ergosterol, maintain membrane fluidity as an adaptive response to the physicochemical effects of both temperature and ethanol stresses67. It has been reported in S. cerevisiae that overexpression of the OLE1 gene enhanced tolerance to acetic acid and other stresses, such as ethanol, H2O2, NaCl, benzoic acid, diamide, and menadione68. In this study, the OLE1 gene of S. cerevisiae HG1.1 was highly expressed under heat shock and short-term heat stress, which differs from the findings of Qiu and Jiang69, who demonstrated that the OLE1 gene of S. cerevisiae M1 was approximately 2.2–3.0-fold overexpressed at 30 °C under very high gravity ethanol production. Overexpression of the OLE1 gene at 30 °C was also observed in S. cerevisiae K-9 under shaking and static sake fermentation70. Based on the present study, OLE1 is a heat-shock-responsive gene in S. cerevisiae HG1.1. The ERG8 gene encodes phosphomevalonate kinase, which converts phosphomevalonate to diphosphomevalonate using ATP in ergosterol biosynthesis71. In the present study, the ERG8 gene was down-regulated under heat stress conditions, which coincides with Rossignol et al.72, who pointed out that most of the genes encoding proteins involved in ergosterol biosynthesis in S. cerevisiae EC1118, including ERG8, were down-regulated during high-temperature fermentation. In S. cerevisiae YZ1 and YF3, low levels of ERG8 expression were also observed under high-temperature conditions (42 °C), resulting in reduced ergosterol accumulation73. DNA damage, including base disruption, base loss, and strand breaks, is not only induced by exposure to environmental agents, such as heat, UV rays, ROS, and oxidizing agents, but is also generated spontaneously during cellular metabolism18.
RAD14, MRE11, and POL4 are common genes encoding proteins or enzymes involved in DNA repair in yeast74,75. The expression of the RAD14 gene in S. cerevisiae HG1.1 decreased under heat stress, similar to the findings reported by Boiteux and Jinks-Robertson76. The RAD14 gene is recognized as a DNA damage binding factor for nucleotide excision repair of UV-damaged DNA in S. cerevisiae; this gene is not induced by heat stress. The other two genes, MRE11 and POL4, were up-regulated under heat shock and short-term heat stress conditions, and their expression was slightly reduced under long-term heat stress. These results are similar to those for the MRE11 and POL4 genes in K. marxianus DMKU 3–1042, in which both genes were up-regulated under high-temperature stress60. It is therefore proposed that MRE11 and POL4 are involved in the DNA repair of S. cerevisiae HG1.1 under heat stress conditions. Oxidative stress, through the accumulation of reactive oxygen species (ROS) such as superoxide anions, hydrogen peroxide, and hydroxyl radicals, has been shown to cause denaturation of macromolecules, such as DNA, RNA, proteins, and lipids, in yeast cells. Several genes, including SOD1 and GLR1, are responsible for the oxidative stress response in yeasts. The SOD1 gene encodes superoxide dismutase, while the GLR1 gene encodes glutathione reductase; both proteins have been shown to help remove the superoxide anion radical and hydrogen peroxide, which can then be converted to water by the action of catalases or peroxidases19. The up-regulation of the SOD1 and GLR1 genes under heat stress might correlate with high ROS accumulation. The overproduction of superoxide dismutase and glutathione reductase in S. cerevisiae HG1.1 might be needed to convert oxidative substrates such as superoxide anion radicals to hydrogen peroxide and finally to H2O. Overexpression of SOD1 and GLR1 has also been reported in K. marxianus DMKU 3–1042 and in S. cerevisiae YZ1 and YF3 under heat stress60,77. In S. cerevisiae M1, up-regulation of the SOD1 gene and down-regulation of the GLR1 gene under high ethanol and high osmotic pressure have been reported69. Based on this information, the SOD1 gene, but not GLR1, can be activated by heat, ethanol, and osmotic stresses, depending on the yeast species. PDA1 and CIT1 are involved in the pyruvate-to-TCA pathway. In S. cerevisiae HG1.1, these genes were up-regulated under heat stress conditions, particularly under heat shock and short-term heat stress; under long-term heat stress, the expression levels of both genes were slightly reduced. The expression of the PDA1 gene of S. cerevisiae HG1.1 was similar to that of S. cerevisiae Y-5031663. However, the results differ somewhat from those reported in S. cerevisiae M1, where the PDA1 gene was down-regulated while CIT1 was up-regulated under heat stress69. In S. cerevisiae Y-50316, the expression of the PDA1 gene is activated not only by heat but also by ethanol stress63. A high expression level of the CIT1 gene has also been reported in S. cerevisiae when cells are exposed to a high temperature of 35 °C for 10 min. The increased expression of CIT1 enhances the conversion of acetyl-CoA into the TCA pathway, leading to the accumulation of metabolic intermediates involved in the stress response16. LYS21 encodes homocitrate synthase, which synthesizes homocitrate from acetyl-CoA and 2-oxoglutarate.
Homocitrate is a precursor for the biosynthesis of L-lysine, which plays an essential protective role in the response to oxidative stress induced by hydrogen peroxide in S. cerevisiae78. Furthermore, the homocitrate synthase enzyme is also associated with the mechanism of DNA repair in the nucleus79. In the present study, the expression of LYS21 was enhanced under heat shock and short-term heat stress, and its expression decreased slightly after exposure to long-term heat stress. In K. marxianus DMKU 3–1042, the LYS21 gene is also up-regulated under heat stress at 45 °C60. Therefore, it is proposed from this finding that the LYS21 gene of S. cerevisiae HG1.1 may be involved in DNA repair under heat stress. In yeast cells, PDC1, ADH1, and ADH2 are involved in the pyruvate-to-ethanol pathway. These genes are highly expressed in the stationary growth phase of K. marxianus DBKKU Y-102 under heat stress at 45 °C32. Down-regulation of the ADH2 gene has been reported in S. cerevisiae KKU-VN8 under heat stress at 40 °C34. In S. cerevisiae Y-50316, the expression of ADH1 and ADH2 is also induced by ethanol stress63. In this study, PDC1 and ADH1, but not ADH2, were up-regulated under heat stress at 40 °C, suggesting that heat stress triggers the expression of the PDC1 and ADH1 genes while suppressing the expression of ADH2 in S. cerevisiae HG1.1 during high-temperature ethanol fermentation. The high ethanol concentration produced by S. cerevisiae HG1.1 at 40 °C might also be correlated with the overexpression of the PDC1 and ADH1 genes. Several genes, including ACS1 (encoding acetate-CoA ligase) and ALD2 (encoding aldehyde dehydrogenase), are responsible for the acetic acid stress response. The expression of these genes in S. cerevisiae HG1.1 under heat stress was investigated in this study. The results revealed that ACS1 was up-regulated under heat shock and short-term heat stress, whereas ALD2 was down-regulated under all stress conditions. The up-regulation of ACS1 in S. cerevisiae HG1.1 under heat stress may favor acetic acid formation from acetyl-CoA rather than from acetaldehyde, because aldehyde dehydrogenase also utilizes NAD(P)+. The conversion of acetyl-CoA to acetic acid might generate more ATP, which can be used as an energy source for the biosynthesis of essential components or enzymes critical for yeast adaptation under heat stress. In K. marxianus DMKU 3–1042, ACS1 and ALD2 are up-regulated under heat stress at 45 °C60, while they are highly expressed in S. cerevisiae M1 under the normal growth condition (30 °C)69, suggesting that their expression profiles depend on the yeast species.

## Conclusion

A maximum ethanol concentration of 36.85 g/L, productivity of 3.07 g/L.h, and yield efficiency of 93.61% were achieved from fermentation of PWH by S. cerevisiae HG1.1 at 40 °C under the optimum conditions of a yeast inoculum concentration of 8.0 × 107 cells/mL, pH of 5.5, and yeast extract concentration of 4.95 g/L. Analysis of gene expression during high-temperature ethanol fermentation using RT–qPCR revealed that most of the genes, except ERG8, RAD14, ADH2, and ALD2, were up-regulated under heat stress conditions, particularly under heat shock and short-term heat stress. Interestingly, up-regulation of the SOD1, PDA1, CIT1, PDC1, and ADH1 genes was observed under all stresses compared to the control treatment (unstressed). Although gene expression profiles are distinctive depending on the nature and characteristics of the yeast, the thermotolerance acquisition and fermentation efficiency of S. 
cerevisiae HG1.1 during high-temperature ethanol fermentation correlated with genes responsible for growth and ethanol stress, oxidative stress, acetic acid stress, DNA repair, and the pyruvate-to-TCA and pyruvate-to-ethanol pathways. These results provide useful information for further research into the regulation of these genes so that this thermotolerant yeast can be exploited for producing ethanol or other valuable bioproducts under high-temperature fermentation conditions.

### Submission declaration and verification

Submission of an article implies that the work described has not been published previously in any form.
# Parameterize an offset ellipse and calculate the surface area

1. Mar 16, 2016

### Thales Costa

I'm given that S is the surface z = √(x² + y²) over the region (x − 2)² + 4y² ≤ 1.

I tried parametrizing it using polar coordinates, setting x = 2 + r cos(θ), y = 2r sin(θ), 0 ≤ θ ≤ 2π, 0 ≤ r ≤ 1, but I'm not getting the ellipse that the original equation for the domain describes. So far I've tried dividing everything by 4 and also the method of completing the square, but with no success.

I'm supposed to calculate the surface area of S, but without the parametric equations, calculating the normal vector is impossible.

EDIT: Messing with the equations on Wolfram I got the following: x = 2 + cos(u), y = (1/2) sin(u), 0 ≤ u ≤ 2π. But when I multiply the cosine and sine by r and make r vary from 0 to 1, the parametric plot changes to something completely different.

Last edited: Mar 16, 2016

2. Mar 17, 2016

### andrewkirk

Are you sure you need to parametrize? What is the shape of the unconstrained surface? Does the angle between the normal to the surface and the $z$ axis change over the surface? If not, can you simplify the problem and solve it without parametrization?

3. Mar 17, 2016

### Thales Costa

So I didn't need to parametrize the surface. It was a matter of simply calculating dS from the partial derivatives dz/dx and dz/dy, which gives dS = √2 dA. Then, integrating over the region, ∫∫ √2 dA, I get that the surface area I was looking for is √2 times the area of the ellipse. I was going in the harder direction, trying to figure out the integral that would give me the area of the ellipse, and this I still don't know how to do.

4. Mar 17, 2016

### andrewkirk

If you change the inequality to an equality, that gives you the equation of the ellipse. The area won't change if you translate it in the x direction, so translate it so that the first term becomes just $x^2$. Now you have an ellipse centred on the origin. If you just integrate the area under the branch in one of the four quadrants and multiply that by four, there's your area of the ellipse.

5. Mar 17, 2016

### Thales Costa

This is what I figured. It took me a while to understand that I was integrating a vector field over a surface and that the position of the surface could be shifted to the origin without changing the result. So integrating the field over x² + 4y² ≤ 1 would be the same as integrating it over the original surface. Is that correct?

6. Mar 17, 2016

### andrewkirk

It would if you also shifted the vector field in the x direction by −2. But you don't need to worry about that. Above you concluded that the surface area is √2 times the area of the ellipse. Once you've reasoned your way to there, you can forget about the vector field and just calculate the area of the ellipse.
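To close the loop on the thread above, here is a worked summary of the computation the posters sketch. The final numeric value is not stated explicitly in the thread, so it is supplied here as a straightforward consequence of their reasoning (cone surface element √2 dA times the ellipse area).

```latex
% Surface area of the cone z = sqrt(x^2 + y^2) over the ellipse (x-2)^2 + 4y^2 <= 1
\[
z = \sqrt{x^2 + y^2}, \qquad
dS = \sqrt{1 + z_x^2 + z_y^2}\, dA
   = \sqrt{1 + \frac{x^2 + y^2}{x^2 + y^2}}\, dA
   = \sqrt{2}\, dA
\]
\[
\text{Region: } \; (x-2)^2 + \frac{y^2}{(1/2)^2} \le 1
\;\Rightarrow\; a = 1,\; b = \tfrac12,
\qquad \text{Area} = \pi a b = \frac{\pi}{2}
\]
\[
\text{Surface area} = \iint_{\text{ellipse}} \sqrt{2}\, dA
 = \sqrt{2}\cdot\frac{\pi}{2}
 = \frac{\pi}{\sqrt{2}} \approx 2.2214
\]
```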
# Question 13 (1 pts)

Which of the following is a mathematical statement of the ideal gas law?

RT/PV   PVRT   PV/RT   Vrtip

#### Similar Solved Questions

##### Convert the rectangular coordinates point (x, y, z) = (−4, 4√3, 4) to cylindrical coordinates (r, θ, z).

##### Chem help: At a certain temperature the equilibrium constant, Kc, equals 0.11 for the reaction 2 ICl(g) ⇌ I2(g) + Cl2(g). What is the equilibrium concentration of ICl if 0.45 mol of I2 and 0.45 mol of Cl2 are initially mixed in a 2.0-L flask? A) 0.34 M  B) 0.14 M  C) 0.27 M  D) 0.17 M

##### If two people are selected at random, what is the probability that they were both born in winter (December, January, or February)?

##### A species of fox may be brown (the dominant phenotype) or white (the recessive phenotype). Brown foxes have the genotype BB or Bb; white foxes have the genotype bb. The frequency in a population of foxes of the BB genotype is 0.27. How can we test if this population is operating under Hardy–Weinberg equilibrium? Assuming that this population is operating under HWE, calculate the following: What is the frequency of the B allele in this population? What is the frequency of the b allele in this population? What is th...

##### View Topics. Scoring: Your score will be based on the number of correct matches minus the number of incorrect matches; penalty for missing matches. Use the References to access important values if needed for this question. Find the reagent on the right that would convert ethasal to the product on the left. Clear All KC...

##### You have two different solvents available: ethanol and diethyl ketone (structures of ethanol and diethyl ketone shown). Explain which solvent is more polar; be sure to indicate what IMFs are present. Explain what IMFs would be present if you were to mix the two solvents together. Explain which of the dyes given would dissolve better in the ketone and which would dissolve better in ethanol; be sure to include a discussion of IMFs in your answer.

##### Calculate the change in internal energy (ΔE) for a system that is giving off 35.0 kJ of heat and is changing from 21.00 L to 15.00 L in volume at 1.50 atm pressure. (Remember that 101.3 J = 1 L·atm) a. +34.1 kJ  b. 456 kJ  c. −35.9 kJ  d. +35.9 kJ  e. −34.1 kJ

##### I Man energy is Thul Is Its half-life at 815 K? = 226x 10 Jmol-1 ( t al 14. Се — 7 aaa x 10 • то 2.314 I morical 315k - 1 Two mechanisms are proposed for the reaction 2NO(g) + O2(g) → 2NO2(g): (1) NO + O2 → NO3 (fast); (2) NO + NO → N2O (fast); NO3 + NO → 2NO...

##### Explain the function of a concentric tube nebulizer with a labeled diagram. (3 marks)

##### What is a mole and how can you calculate one mole of a substance?

##### Problem 3: A parallel-plate capacitor with plate area 2.20 cm² and air-gap separation 0.13 mm is connected to a 12.00 V battery, and fully charged. The battery is then disconnected. (a) What is the charge on the capacitor? 1.80 × 10⁻¹⁰ C

##### Question 8 (2 points): If the accepted value of a physical quantity is 7.7 m/s and the measured value is 9.3 m/s, what is the Percent Relative Error (PRE)?

##### Explain why the Hardy–Weinberg formula shows that dominant phenotypes don't inherently increase in frequency in a population.

##### Let f : R → R be given by f(x) = 4 − x². Determine the following: 1. f(X) where X = {x ∈ Z : −2 ≤ x ≤ 1}

##### A helicopter pilot needs to travel to a regional airport 25 miles away. She flies at an actual heading of N 16.26° E with an airspeed of 120 mph, and there is a wind blowing directly east at 20 mph. (a) Determine the compass heading that the pilot needs to reach her destination. (b) How long will it take her to reach her destination? Round to the nearest minute.
# If x and y are integers, and x/y is not an integer, then which of the following must be true?

Math Expert (Bunuel), 23 Oct 2016:

If x and y are integers, and x/y is not an integer, then which of the following must be true?

A. x is odd and y is even.
B. x is odd and y is odd.
C. x is even and y is odd.
D. x < y
E. None of the above

Manager, 23 Oct 2016:

x and y could be co-primes.

Retired Moderator, 23 Oct 2016:

x/y not an integer: either x < y, or x and y are co-prime. No "musts". E.

Manager (rakaisraka), 24 Oct 2016:

I think E, because with examples we can show that none of the options must be true.

Manager (Shruti0805), 04 Feb 2017:

Hi. I don't understand how you can divide an odd number by an even number and get an integer value.

Manager (rakaisraka), 04 Feb 2017:

Hi, as per the question stem, x and y are integers but x/y is not an integer. This is a "must be true" question: a statement has to hold in every case to be the answer.

A. x is odd and y is even: x = 3, y = 2 satisfies the question stem, but x = 4 and y = 3 also satisfies the stem, and there y is odd.
B. x is odd and y is odd: x = 3, y = 5 works, but you can find odd/even examples here too.
C. x is even and y is odd: similar to A.
D. x < y: x = 2, y = 3 works, but so does x = 4, y = 3.
E. None of the above: implies none must be true.

Hope it helps.

Senior Manager, 12 Feb 2017:

Bunuel, the question wants us to try the options by picking numbers. But I think the question should mention that x and y are POSITIVE integers. If we say that x and y are (just) integers, y can be 0, and in that case x/y is not a number. Please correct me if I am wrong.

EMPOWERgmat Instructor (Rich), 13 Feb 2017:

Hi all. This question can be solved by TESTing VALUES (and coming up with examples that DISPROVE each of the wrong answer choices). Since x and y are INTEGERS and x/y is NOT an integer, you can quickly come up with a variety of examples of what x and y could be, including 1/2, 2/3, 3/2, 4/3, 6/4, 7/3, etc. As such, you can quickly eliminate each of the first four answer choices. GMAT assassins aren't born, they're made. Rich

Intern, 31 Mar 2017:

Hi rakaisraka, according to option A, x is odd and y is even. We don't have any such examples through which we can get an integer value, so how can we say that it is not always true? Please explain.

Intern, 24 Mar 2018:

Still not clear!!!

BSchool Forum Moderator, 24 Mar 2018:

Since we have a "must be true" question and the following details (x and y are integers, and x/y is not an integer), if we can provide an alternative case for each of the answer options, we will end up with Option E as the answer.

Consider the following cases:
1. x = 6 (even), y = 4 (even): x/y = 6/4 is not an integer. This eliminates Options A, B, and C.
2. x = 4, y = 6: x/y = 4/6 is not an integer. This eliminates Option D.

Therefore, we are left with Option E (None of the above), and it is our answer.
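As a quick sanity check of the number-picking approach described in the replies above, here is a small script that searches for counterexamples to each option. The search range and helper names are my own and purely illustrative.

```python
# Minimal sketch: search small positive integers for counterexamples to options A-D.
# If a pair (x, y) satisfies the stem (x/y is not an integer) but violates an option,
# that option is not a "must be true" statement.

def satisfies_stem(x: int, y: int) -> bool:
    return y != 0 and x % y != 0          # x/y is not an integer

options = {
    "A. x odd and y even": lambda x, y: x % 2 == 1 and y % 2 == 0,
    "B. x odd and y odd":  lambda x, y: x % 2 == 1 and y % 2 == 1,
    "C. x even and y odd": lambda x, y: x % 2 == 0 and y % 2 == 1,
    "D. x < y":            lambda x, y: x < y,
}

for name, holds in options.items():
    counterexample = next(
        ((x, y) for x in range(1, 10) for y in range(1, 10)
         if satisfies_stem(x, y) and not holds(x, y)),
        None,
    )
    print(f"{name}: counterexample {counterexample}")
```

Each option gets a counterexample within single digits, which is exactly why E (none of the above) is the answer.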
# Tag Info

Let $\frac{\partial C}{\partial S}=\delta_c$, $\frac{\partial^2 C}{\partial S^2}=\Gamma_c$, $\frac{\partial C_0}{\partial S}=\delta_0$, and $\frac{\partial^2 C_0}{\partial S^2}=\Gamma_0$. We want $\frac{\partial V}{\partial S}=\frac{\partial C}{\partial S}=\delta_c$ and $\frac{\partial^2 V}{\partial S^2}=\frac{\partial^2 C}{\partial S^2}=\Gamma_c$ ...
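The excerpt breaks off before the hedge weights are actually solved for. On one common reading of this setup, $V$ consists of $w_0$ units of the second option $C_0$ plus $w_S$ shares of the underlying; since a share contributes delta 1 and gamma 0, matching $\Gamma_c$ fixes $w_0$ and matching $\delta_c$ then fixes $w_S$. A minimal R sketch of that reading (the function name and the sample Greeks are purely illustrative, not values from the excerpt):

```r
# Delta-gamma matching: hold w0 units of the option C0 and wS shares of the
# underlying, so that V = w0*C0 + wS*S has the target delta and gamma.
# A share has delta 1 and gamma 0, so the gamma equation fixes w0 first.
delta_gamma_hedge <- function(delta_c, gamma_c, delta_0, gamma_0) {
  w0 <- gamma_c / gamma_0           # w0 * gamma_0            = gamma_c
  wS <- delta_c - w0 * delta_0      # w0 * delta_0 + wS * 1   = delta_c
  list(units_of_C0 = w0, shares_of_underlying = wS)
}

# Illustrative Greeks only.
delta_gamma_hedge(delta_c = 0.55, gamma_c = 0.04, delta_0 = 0.40, gamma_0 = 0.05)
```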
FeaturePlot on a KDE plot/density plot

I created a KDE plot (density plot) using the following code:

library(Seurat)
library(ggplot2)
library(viridis)

contplus <- data.frame(scADAR$tsne@cell.embeddings)
tSNE_1 <- scADAR$tsne@cell.embeddings[, 1]
tSNE_2 <- scADAR$tsne@cell.embeddings[, 2]

# Initialize lists for the data subset of each condition and for the plots
plotData <- list()
modGalaxyPlot <- list()

# Take each subset of data and generate a plot; grepl subsets the cells of
# condition 1 by their barcode suffix
plotData[[1]] <- contplus[grepl(as.numeric(1), rownames(contplus)), ]

# Generate the galaxy (density) plot for this condition
modGalaxyPlot[[1]] <- ggplot(plotData[[1]], aes(tSNE_1, tSNE_2)) +
  stat_density_2d(aes(fill = ..density..), geom = "raster", contour = FALSE) +
  scale_fill_viridis(option = "magma") +
  coord_cartesian(expand = FALSE,
                  xlim = c(min(tSNE_1), max(tSNE_1)),
                  ylim = c(min(tSNE_2), max(tSNE_2))) +
  geom_point(shape = ".", col = "white")

# To visualize one of the plots, print the ggplot object
# (Seurat's FeaturePlot() expects a Seurat object, not a ggplot)
modGalaxyPlot[[1]]

I was wondering if it would be possible to highlight a density plot with the expression of certain genes. I basically want to do what FeaturePlot does, but on a KDE plot, and I am not sure how to adapt my code to do that.

single cell seurat r RNA-Seq
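One possible way to get a FeaturePlot-style overlay on this kind of density plot is sketched below. It assumes a Seurat (v3+) object named scADAR with a "tsne" reduction, as in the question, and uses "GENE_OF_INTEREST" as a placeholder gene name; the exact accessors may need adjusting for other Seurat versions.

```r
library(Seurat)
library(ggplot2)
library(viridis)

# Assumptions: scADAR is a Seurat (v3+) object with a "tsne" reduction;
# "GENE_OF_INTEREST" stands in for a gene present in the object.
emb    <- as.data.frame(Embeddings(scADAR, reduction = "tsne"))  # tSNE_1, tSNE_2
expr   <- FetchData(scADAR, vars = "GENE_OF_INTEREST")
plotDF <- cbind(emb, expr)
colnames(plotDF)[3] <- "expr_level"

ggplot(plotDF, aes(tSNE_1, tSNE_2)) +
  # same density background as in the question's code
  stat_density_2d(aes(fill = ..density..), geom = "raster", contour = FALSE) +
  scale_fill_viridis(option = "magma") +
  # overlay the cells, coloured by expression of the chosen gene
  geom_point(aes(colour = expr_level), shape = 16, size = 0.3) +
  scale_colour_viridis_c(option = "viridis") +
  coord_cartesian(expand = FALSE)
```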
12:08 AM @Semiclassical right 12:45 AM @bolbteppa oh lawd 5 hours later… 6:12 AM There are plenty of criticism of math textbooks Some by none other than Feynmann 7:12 AM 3 1 hour later… 8:28 AM Feb 20 '15 at 15:17, by ACuriousMind A lie-to-children is a simplified explanation of technical or complex subjects as a teaching method for children and laypeople, first described by science writers Jack Cohen and Ian Stewart. The word "children" should not be taken literally, but as encompassing anyone in the process of learning about a given topic regardless of age. It is itself a simplification of certain concepts in the philosophy of science. Because some topics can be extremely difficult to understand without experience, introducing a full level of complexity to a student or child all at once can be overwhelming. Hence elementary... vs 18 hours ago, by bolbteppa Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong is a 1995 book by James W. Loewen, a sociologist. It critically examines twelve popular American high school history textbooks and concludes that the textbook authors propagate false, Eurocentric and mythologized views of American history. In addition to his critique of the dominant historical themes presented in high school textbooks, Loewen presents themes that he says are ignored by traditional history textbooks. == Themes == In Lies My Teacher Told Me, Loewen criticizes modern American high school history textbooks... 1 hour later… 9:41 AM @EmilioPisanty Haha, thanks. You're the second person to send me that. 10:25 AM @JohnRennie I have a question! 10:37 AM @Akash.B hi :-) 11:19 AM @JohnRennie hi again I have a doubt regarding newton's law of gravitation and 2nd law of motion One law says that force experienced by a body is constant while the other is saying that it varies with distance Which is right? @Akash.B hi The second law doesn't say that the force is constant. It just relates the force to the acceleration. In fact you have to combine the two laws to get the gravitational acceleration. @DanielSank well, it does have your name written all over it =) 11:41 AM Hi, everybody. hi pal welcome 12:24 PM @JohnRennie okay let me say The force that earth attracts is given as "my" Srry mg Right ! Since acceleration due to gravity is a constant can't we say that force is constant as per second law? Mass is also a constant So can't you find something fishy here? what do you find "fishy"? When we analyze universal law of gravitation It says that force varies with distance But when we take a glance at second law between which two masses? 12:30 PM Between earth and an object Acceleration and mass is constant So force is constant @skullpetrol are you understanding? What I meant Okay Let me start from scratch What is universal law of gravitation? The force between an object and the earth is directly proportional to the product of their masses and inversely proportional to their distances Note the point inversely proportional to the distance Okay now let's get on to the second law Let me reduce it to it's equation that is $$f=ma$$ Here acceleration due to gravity is constant which is 9.8m/s and so mass is constant So what does it mean ? Force is a constant As per second law @skullpetrol so got it right now? not 9.8m/s, but 9.8m/(s^2) np may I suggest you read this the value of g is equivalent to the ratio of (G•M_earth)/(R_earth)^2 12:46 PM Did you get what I am saying? 
@Akash.B Acceleration is only approximately constant 1:03 PM @skullpetrol well if I throw a ball upwards The r varies right? Here r really means the distance between the body and object , right? how much is that variation compared to the radius of the earth? 1:32 PM It's still bound to the fact that force varies I am looking for an accurate explanation 1:52 PM Hey guys for once i want to post an answer I have asked a friend to verify it Would anyone be willing to be a 2nd checkpoint of this derivation? I've already tried googling it (can't find it) :( 2:22 PM @Akash.B The accurate explanation is that the acceleration is only approximately the same for small variations of distance from the centre of the earth. If you do the math, you can see that it really doesn't change much based on height, especially for regular people doing things on Earth's surface. It's not exactly the same, but often the change is so small that it can be ignored. Anonymous 2:47 PM @DanielSank Heyo! I noticed you're a co-author on Google's supremacy paper. Could you answer this if you get time? :D 2 woah, you exist! Anonymous @danielunderwood Hiii! Anonymous Yeah, I've been a bit busy lately...didn't pop in much :P school/research I suppose? blue? is that really you :-? welcome back pal 2:55 PM did somebody say blue has returned :P Anonymous @danielunderwood Yeah, internship and job placement season :) Anonymous Was preparing for the software internships...didn't manage to get one though. So applying to the research internships for now :P Anonymous @skullpetrol Hehe, yes. How're you doing? :D I'll trade you a full software job for a research internship hah @SanchayanDutta fine thx 2:58 PM Is this for next summer? Do you guys have the same May-Aug break as we have in the US? Anonymous This year's summer was mostly spent on doing some Mathematica calculations on the geometry of entanglement stuff (PPT/NPT bound entangled states) Anonymous @danielunderwood Yup, for summer 2020. However, most people who do well in the internships are offered a full-time job. We had folks from Microsoft, Amazon, and Samsung coming in this year and a few of my classmates managed to get internships there. I did relatively badly as I haven't practised competitive programming in a long while. Brushing up on my C++ skills now (STL!!!!) :P Anonymous How're you doing? New job or continuing at the previous data science company? 3:18 PM Oh nice! Though "competitive programming" doesn't sound pleasant to me Well it's the one that I started in January, but more data engineering and security than data science. I do get the occasional day to do data analysis though, so it's decent! @JMac So what if i throw a ball from aeroplane? Flying at almost 40000 feet 1 Context: This question on basic principles of image formation in medical MRI was asked in May 2014, received an accepted answer at the time, and was left unchallenged until August 2019, at which point I decided to attempt to summarize in some detail the topic in an answer that was well received. ... 3:40 PM @Akash.B It would be slightly different, but less than 1% change in the acceleration 4:07 PM @Akash.B 40k feet is about 12 kilometers. there's a lot more atmosphere to go at that point: nasa.gov/mission_pages/sunearth/science/atmosphere-layers2.html 4:19 PM does anybody know what's the deal with the quantum supremacy paper? 
It seems it was retracted soon after it appeared, now it's possible to find it on the web but not on official sources I thought the point was that it wasn't officially published yet, just leaked? if I look online I only find sensationalist articles saying that google can now compute the meaning of life or something @Semiclassical ah, so even the first appearance wasn't official? that was my understanding. should be out officially in a few weeks I think? that would make sense 4:45 PM anyone know how four closed strings moving towards a common point could combine? YMCA It's fun to stay there cuz It's fun to stay at the YMCA 1 hour later… 6:04 PM @Slereah what did you want me to get? I'm in the office today 6:52 PM @Slereah >distance 2 miles I'm in the same building @Slereah do you have the princeton library page for it 7:09 PM Isn't that the princeton library page @Slereah I’m looking Thx @Slereah yeah I have it Go to discord Plz don't destroy it K 7:31 PM @Slereah the library is in our building super convenient How serendipitous I probably don't have any cool library nearby Even the local physics uni seems fairly light on GR research
# Linearized Gravity and the Transverse-Traceless Gauge Conditions

1. Sep 8, 2012

### Alexrey

1. The problem statement, all variables and given/known data

I'm working on some things to do with linearized gravitational radiation, and I'm trying to justify the claim that in the Lorenz gauge, where $$\partial_{\nu}\bar{h}^{\mu\nu}=0 \quad (1.1),$$ we are able to impose the additional conditions $$A_{\alpha}^{\alpha}=0 \quad (1.2)$$ and $$A_{\alpha\beta}u^{\beta}=0 \quad (1.3)$$ in order to find the two physical polarization states of a gravitational wave. All of the books that I have looked at so far simply state that (1.2) and (1.3) can be imposed, without showing how that claim is justified.

2. Relevant equations

Equations (1.1), (1.2), and (1.3), as well as the vacuum Einstein field equation $$\square\overline{h}_{\mu\nu}=0$$ (where the bar denotes the trace-reversed metric perturbation), which admits plane-wave solutions $$\overline{h}_{\mu\nu}=\Re(A_{\mu\nu}e^{ik_{\sigma}x^{\sigma}}).$$ In addition, it might be helpful to know that the wave amplitude $$A_{\mu\nu}$$ is orthogonal to the wave vector $$k_{\nu}$$, that is, $$k_{\nu}A^{\mu\nu}=0,$$ which removes four degrees of freedom from the metric perturbation.

3. The attempt at a solution

As it stands I am quite confused and do not really know where to start with proving that (1.2) and (1.3) can be imposed. In Schutz's book "A First Course in General Relativity" (on page 205, if you have the book), after some calculations he does show that under an infinitesimal coordinate transformation we get $$A_{\alpha\beta}^{'}=A_{\alpha\beta}-ik_{\beta}B_{\alpha}-ik_{\alpha}B_{\beta}+i\eta_{\alpha\beta}k_{\mu}B^{\mu},$$ where we can choose the $$B_{\alpha}$$ to impose (1.2) and (1.3).
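For reference, the standard counting argument can be sketched directly from the Schutz transformation law quoted above; the sketch below uses only ingredients already stated in the problem (the null wave vector from the wave equation, $$k_{\nu}A^{\mu\nu}=0$$, and a fixed four-velocity $$u^{\beta}$$). Taking the trace of the transformation law with $$\eta^{\alpha\beta}$$ gives
$$A^{'\alpha}_{\;\;\alpha}=A^{\alpha}_{\;\;\alpha}-2ik_{\mu}B^{\mu}+4ik_{\mu}B^{\mu}=A^{\alpha}_{\;\;\alpha}+2ik_{\mu}B^{\mu},$$
so the single linear condition $$k_{\mu}B^{\mu}=\tfrac{i}{2}A^{\alpha}_{\;\;\alpha}$$ enforces (1.2). Contracting the transformation law with $$u^{\beta}$$ gives four further linear equations for $$B_{\alpha}$$:
$$A^{'}_{\alpha\beta}u^{\beta}=A_{\alpha\beta}u^{\beta}-i(k_{\beta}u^{\beta})B_{\alpha}-ik_{\alpha}(B_{\beta}u^{\beta})+i(k_{\mu}B^{\mu})u_{\alpha}=0.$$
These are not all independent: using $$k_{\sigma}k^{\sigma}=0$$ and $$k_{\nu}A^{\mu\nu}=0$$, contracting with $$k^{\alpha}$$ gives $$k^{\alpha}A^{'}_{\alpha\beta}u^{\beta}=0$$ identically, so only three of the four equations actually constrain $$B_{\alpha}$$. Together with the trace condition, that is four independent linear conditions on the four components of $$B_{\alpha}$$, which is why (1.2) and (1.3) can be imposed simultaneously. Moreover, the corresponding gauge vector $$\xi_{\alpha}=B_{\alpha}e^{ik_{\sigma}x^{\sigma}}$$ satisfies $$\square\xi_{\alpha}=0$$ because $$k$$ is null, so the Lorenz gauge (1.1) is preserved.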
# Changes that apply to Gene Expression and Feature Barcode analysis

1. Targeted Gene Expression analysis is available in Cell Ranger 4.0 and is invoked by specifying the --target-panel option when running the cellranger count command.

2. Cell Ranger 4.0 introduces the new targeted-compare pipeline for direct comparative analysis of matched parent Whole Transcriptome Amplification (WTA) and Targeted Gene Expression datasets.

3. Cell Ranger 4.0 includes the new targeted-depth subcommand to estimate sequencing depths appropriate for Targeted Gene Expression experiments based on input WTA results and an associated target panel file.

4. Recommended reference packages for human and mouse have been updated from version 3.0.0 to 2020-A:
   • Transcriptome annotations updated from Ensembl 93 to GENCODE v32 (human) and vM23 (mouse), which are equivalent to Ensembl 98.
   • GRCh38 and mm10 sequences are not changed; chromosome names now follow the GENCODE/UCSC convention (e.g., chr1 and chrM) rather than the Ensembl convention (1 and MT).
   • Additional filtering removes genes with unreliable annotations that often overlap more legitimate genes (see build scripts for details), resulting in improved overall sensitivity.
   2020-A reference packages are backwards compatible with Cell Ranger 3.1.0 and prior. Mapping rates and gene/UMI sensitivity are increased due to more comprehensive annotations and improved manual curation of genes.

1. When analyzing 3’ Gene Expression data, Cell Ranger 4.0 trims the template switch oligo (TSO) sequence from the 5’ end of Read-2 and the poly-A sequence from the 3’ end before aligning reads to the reference transcriptome. This behavior is different from Cell Ranger 3.1, which does not perform any trimming. A full-length cDNA molecule is normally flanked by the 30-bp TSO sequence, AAGCAGTGGTATCAACGCAGAGTACATGGG, at the 5' end and the poly-A sequence at the 3' end. Some fraction of sequencing reads are expected to contain either or both of these sequences, depending on the fragment size distribution of the library. Reads derived from short RNA molecules are more likely to contain either or both TSO and poly-A sequence than longer RNA molecules. Trimming results in better alignment, with the fraction of reads mapped to a gene increasing by up to 1.5%, because the presence of non-template sequence in the form of either TSO or poly-A confounds read mapping. Trimming improves the sensitivity of the assay as well as the computational efficiency of the pipeline. Tags ts:i and pa:i in the output BAM files indicate the number of TSO nucleotides trimmed from the 5' end of Read-2 and the number of poly-A nucleotides trimmed from the 3' end. The trimmed bases are present in the sequence of the BAM record and are soft-clipped in the CIGAR string. Below, we illustrate how the fraction of reads mapped confidently to the transcriptome varies for both trimmed and untrimmed alignment as a function of read length for a variety of sample types.

2. Cell Ranger 4.0 adds support for an “un-tethered” Feature Barcode pattern, (BC) without an anchor, specified in the Feature Reference CSV. This option allows the user to specify the sequence of the Feature Barcode without specifying a particular location on the read where the sequence is expected to be found.

3. cellranger reanalyze now outputs the count matrix used in the analysis, so as to reflect any subsetting of barcodes used.

4. Bug fixes for GTF files output by mkref. These changes do not affect the pipeline results.
• GTF attributes with duplicate keys (e.g., tag "value1"; tag "value2";) are handled correctly. Previously, only the last such attribute was kept. • GTF attributes with unquoted integer values (e.g., exon_number 1;) are kept. Previously, they were removed. • GTF lines end with semicolons. • Unix line endings are used rather than DOS line endings, consistent with other Cell Ranger outputs. 5. Bug fixes for the BAM file • The duplicate flag (0x400) is set correctly in the secondary alignments (flag 0x100) of PCR duplicate reads and low-support UMI reads (xf:i:2) • Low-support UMI reads (xf:i:2) have the corrected barcode in UB:Z. Previously, it contained the raw barcode. 6. BAM file changes • Cell Ranger 4.0 will not output the li:i tag. The RG:Z tag contains this information. • Cell Ranger 4.0 will not output the BC:Z and QT:Z tags. 7. Cell Ranger 4.0 now relies on Orbit to perform transcriptome alignment, which leverages a modified STAR v2.7.2a. These modifications provide compatibility with “versionGenome 20201” references, such as those generated by STAR v2.5.1b. In Cell Ranger 4.0 we still provide and use STAR v2.5.1b for other purposes such as cellranger mkref. In our testing we did not note any differences in transcriptome alignments between the STAR shipped in Cell Ranger 3.1 (STAR v2.5.1b), STAR v2.7.2a, or Orbit. # Changes that apply to Gene Expression, Feature Barcode, and V(D)J analysis 1. mkfastq supports dual-indexed libraries for gene expression, both WTA and Targeted, V(D)J, and Feature Barcode datasets. 2. mkfastq supports a new sequencing configuration for Novaseq where the I2 index may need to be reverse-complemented before demultiplexing dual-indexed libraries. 3. count and vdj run approximately two to four times faster than in Cell Ranger 3.1, depending on the sequencing data, and reduces disk I/O by half. 4. A new command-line interface with improved error-handling has been engineered into Cell Ranger 4.0. 5. The Martian pipeline framework has been upgraded to version 4.0. mrp and mrjob will shut down if they detect that their log files were deleted or renamed. See the Martian release notes for more details. 6. The following features present in Cell Ranger 3.1 are no longer present in Cell Ranger 4.0: • mkfastq no longer supports data from the Single Cell 3′ v1 chemistry. • The cellranger demux subcommand has been removed. • The command-line interface does not accept FASTQs created by the deprecated cellranger demux pipeline. If you need to process FASTQs in this layout, contact [email protected] for assistance. • cellranger count and cellranger vdj are no longer able to process data from multiple gem-wells through manual editing of MRO files. • The Single Cell 3′ v1 and Single Cell 5′-R1 assay configurations will no longer be autodetected in Cell Ranger 4.0. Users who want to analyze data from those chemistries must explicitly specify the chemistry (SC3Pv1 or SC5P-R1 respectively) using the --chemistry argument. 7. The --id argument used by the pipelines has a 64 character limit in Cell Ranger 4.0. # Changes that apply to V(D)J analysis 1. Recommended VDJ reference packages for human and mouse have been updated from version 3.1.0 to 4.0.0. The changes to the VDJ reference sequences are listed below: • Remove the first base of the C region in certain cases. In these cases we observe that in most transcripts, the J region and C region overlap by exactly one base. • Add an allele of the gene IGHJ6 to the human VDJ reference. 2. 
Bug fix in contig annotation: • If a reference D region matches a contig perfectly, annotate the contig with that D region. 3. The command line argument --chain is added back in 4.0 for rare cases when the automatic chain detection fails. 4. A new output airr_rearrangement.tsv is added, which contains annotated contigs of VDJ rearrangements in the AIRR TSV format. 5. The VDJ reference is copied to the outputs folder starting with Cell Ranger 4.0.
# Deduction Seems Vulnerable to the Problem of Induction

Dirty little secret about logic: if induction has a justification problem (and it does), then so does deduction. Why? Because deductions rely on inductive conclusions imported into their premises. Here are a few examples.

A. Aristotelian Syllogism:
1. All men are mortal
2. Socrates is a man
3. C: Socrates is mortal

Look at premise 1. What gives us the right to say that this premise is true? We cast our gaze over a range of humans, and we see that they have all grown old and died. So, we all must die, yes? That's an inductive inference. How is it justified?

B. Disjunctive Syllogism:
1. This gas is either helium or it is nitrogen
2. It is not helium
3. C: It is nitrogen

This time, look at premise 2. What gives us the right to say that this premise is true? In this case, we perform some test on the gas in question. That test presupposes that certain gases always behave in certain ways under certain conditions. That is an inductive inference.

C. Modus Tollens:
1. If it has rained, then the pavement will be wet.
2. The pavement is not wet.
3. C: It has not rained.

Again, premise 1 is an obvious inductive inference cast as a hypothetical proposition: the idea that wet pavement always follows from rain. This may be obvious common sense, but as an inductive inference it is not a necessary truth, and is thus not justified.

There are many, many other examples of this; these three are just the most dramatic I could surface at the moment. The point here is not to delegitimize the use of either form of reasoning, or to call into question the idea of bivalent truth. It is only to point out that the confidence we have in these tools is not grounded on what we seem to think it is, and that we really need to work on improving it.

[Imported from thinkspot.com on 2 December 2021]
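Whatever one makes of how the premises are justified, the two propositional schemas above (B and C) are formally valid, and that much can be checked mechanically by truth-table enumeration. A minimal R sketch added for illustration (the helper name is arbitrary, and the Aristotelian syllogism, which involves quantifiers, is not covered):

```r
# Truth-table check of validity: in every assignment where all premises are
# TRUE, the conclusion must also be TRUE.
is_valid <- function(premises, conclusion) {
  rows <- expand.grid(p = c(TRUE, FALSE), q = c(TRUE, FALSE))
  ok <- apply(rows, 1, function(r) {
    p <- r[["p"]]; q <- r[["q"]]
    premises_true <- all(vapply(premises, function(f) f(p, q), logical(1)))
    !premises_true || conclusion(p, q)
  })
  all(ok)
}

# B. Disjunctive syllogism: (p or q), not p  |-  q
is_valid(list(function(p, q) p | q, function(p, q) !p), function(p, q) q)    # TRUE

# C. Modus tollens: (p -> q), not q  |-  not p   (p -> q written as !p | q)
is_valid(list(function(p, q) !p | q, function(p, q) !q), function(p, q) !p)  # TRUE
```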
Session 76 -- Spirals I. Display presentation, Wednesday, 11, 1995, 9:20am - 6:30pm

## [76.05] Hot stars in external galaxies beyond the Magellanic Clouds: observations of supergiants in M33 and M31

L. Bianchi, R. Bohlin (STScI), J. Hutchings (DAO), P. Massey (KPNO)

We are studying the hot star population in nearby galaxies to investigate the dependence of stellar atmospheres and winds on the global characteristics of their host galaxy, in particular the metallicity. Such dependences are predicted by radiation-pressure wind theory and have so far been tested observationally only in the Magellanic Clouds (MCs), which are, however, very different from our own Galaxy. Comparison with the stellar population of M31, a spiral thought to have metallicity and mass similar to the Milky Way, is particularly important. In support of this research, we observed the UV-brightest stars in M31 and M33, partly chosen from UV images of M31 and M33 obtained with the *Ultraviolet Imaging Telescope*, with ground-based UBV CCD photometry and optical spectroscopy, to accurately determine spectral types, bolometric magnitudes, and extinction. In this way we selected the most favourable candidates for HST UV spectroscopy, allowing us to extend our study of the stellar winds in these galaxies. With HST, we obtained high-resolution UV spectra with the Faint Object Spectrograph (FOS) and analysed the wind lines with the SEI method to derive terminal velocities and other wind parameters. These are the first detailed spectra of individual stars in M31 and M33. They also allow, for the first time, abundances in these galaxies to be determined from stellar lines: only information from HII regions was available before, which may not be representative of the stellar content. Finally, by selecting objects at different galactocentric distances we can probe a possible metallicity gradient. Optical data and UV spectra provide atmospheric parameters of the stars: T$_{eff}$, L, R, and abundances. With complementary high-resolution observations of the H${\alpha}$ emission-line profile, and of the H${\gamma}$ profile, we derive the mass-loss rate and the gravity very accurately. Stellar quantities are compared with Galactic stars of the same type.
Unique equilibrium states # Unique equilibrium states for flows and homeomorphisms with non-uniform structure Vaughn Climenhaga  and  Daniel J. Thompson Department of Mathematics, University of Houston, Houston, Texas 77204 Department of Mathematics, The Ohio State University, 100 Math Tower, 231 West 18th Avenue, Columbus, Ohio 43210 July 18, 2019 ###### Abstract. Using an approach due to Bowen, Franco showed that continuous expansive flows with specification have unique equilibrium states for potentials with the Bowen property. We show that this conclusion remains true using weaker non-uniform versions of specification, expansivity, and the Bowen property. We also establish a corresponding result for homeomorphisms. In the homeomorphism case, we obtain the upper bound from the level-2 large deviations principle for the unique equilibrium state. The theory presented in this paper provides the basis for an ongoing program to develop the thermodynamic formalism in partially hyperbolic and non-uniformly hyperbolic settings. V.C. is supported by NSF grant DMS-1362838. D.T. is supported by NSF grants DMS- and DMS-. We acknowledge the hospitality of the American Institute of Mathematics, where some of this work was completed as part of a SQuaRE ## 1. Introduction Let be a compact metric space and a continuous flow on . Given a potential function , we study the question of existence and uniqueness of equilibrium states for – that is, invariant measures which maximize the quantity . We also study the same question for homeomorphisms . This problem has a long history [rB74, rB75, HK82, DKU90, oS99, IT10, PSZ, Pa15, CP16] and is connected with the study of global statistical properties for dynamical systems [dR76, yK90, PP90, BSS02, CRL11, vC15]. For homeomorphisms, Bowen showed [rB74] that has a unique equilibrium state whenever is an expansive system with specification and satisfies a certain regularity condition (the Bowen property). Bowen’s method was adapted to flows by Franco [eF77]. Previous work by the authors established similar uniqueness results for shift spaces with a broad class of potentials [CT, CT2], and for non-symbolic discrete-time systems in the case [CT3]. In this paper, we consider potential functions satisfying a non-uniform version of the Bowen property in both the discrete- and continuous-time case. While we do not explore applications of this theory in this paper, we emphasize that the results are developed with a view to novel applications in the setting of smooth dynamical systems beyond uniform hyperbolicity. In particular, the main theorems of this paper are applied to diffeomorphisms with weak forms of hyperbolicity in [CFT] and to geodesic flows in non-positive curvature in [BCFT]. We review the main points of our techniques for proving uniqueness of equilibrium states for maps, referring the reader to [CT2, CT3] for details. Our approach is based on weakening each of the three hypotheses of Bowen’s theorem: expansivity, the specification property, and regularity of the potential. Instead of asking for specification and regularity to hold globally, we ask for these properties to hold on a suitable collection of orbit segments . Instead of asking for expansivity to hold globally, we ask that all measures with large enough free energy should observe expansive behavior. These ideas lead naturally to a notion of orbit segments which are obstructions to specification and regularity, and measures which are obstructions to expansivity. 
The guiding principle of our approach is that if these obstructions have less topological pressure than the whole space, then a version of Bowen’s strategy can still be developed. Some of the main points are as follows: 1. For a discrete-time dynamical system, we work with , which we think of as the space of orbit segments by identifying with . At the heart of our approach is the concept of a decomposition for . We ask for specification and regularity to hold on a collection of ‘good’ orbit segments , while the collections of are thought of as ‘bad’ orbit segments which are obstructions to specification and regularity. We ask that any orbit segment can be decomposed as a ‘good core’ that is preceded and succeeded by elements of and , respectively. More precisely, for any , there are numbers so that , and The choice of the decomposition depends on the setting of any given application, and the dynamics of the situation are encoded in this choice. 2. We define a natural version of topological pressure for orbit segments, and we require that the topological pressure of , which we think of as the pressure of the obstructions to specification and regularity, is less than that of the whole space. 3. The positive expansivity property introduced in [CT3] is that for small , for -almost every , for any ergodic with , where is a constant less than . We think of the smallest so that this is true as the entropy of obstructions to expansivity. Under these hypotheses, our strategy is then inspired by Bowen’s: his main idea was to construct an equilibrium state with the Gibbs property, and to show that this rules out the existence of a mutually singular equilibrium state. We obtain a certain Gibbs property which only applies to orbit segments in , and then we have to work to show that this is still sufficient to prove uniqueness of the equilibrium state. The above strategy was carried out in [CT, CT2, CT3] under the assumption that either is a shift space or . In this paper, we work in the setting of a continuous flow or homeomorphism on a compact metric space, and a continuous potential function. This necessitates several new developments, which we now describe. For homeomorphisms and flows, we develop a theory for potential functions which are regular only on ‘good’ orbit segments. The lack of global regularity introduces fundamental technical difficulties not present in the classical theory or the symbolic setting. For flows, which are the main focus of this paper, we work with the space , where the pair is thought of as the orbit segment . The main points addressed in this paper are: 1. Our potentials are not regular on the whole space, and this forces us to introduce and control non-standard ‘two-scale’ partition sums throughout the proof (see §2.1). 2. For flows, expansivity issues can be subtle and require new ideas beyond the discrete-time case. We introduce the notion of almost expansivity for a flow-invariant ergodic measure (§2.5), adapting a discrete-time version of this definition which was used in [CT3]. We also introduce the notion of almost entropy expansivity 3.1) for a map-invariant ergodic measure. This is a natural analogue of entropy expansivity [rB72], adapted to apply to almost every point in the space. Measures which are almost expansive for the flow are almost entropy expansive for the time- map. Almost entropy expansivity plays a crucial role in our proof via Theorem 3.2, a general ergodic theoretic result that strengthens [rB72, Theorem 3.5]. 3. 
Adapting the framework introduced in [CT3] to the case of flows requires careful control of small differences in transition times, particularly in Lemma LABEL:lem:multiplicity. 4. The unique equilibrium state we construct admits a weak upper Gibbs bound, which in many cases we use to obtain the upper bound from the level-2 large deviations principle, using results of Pfister and Sullivan (see §LABEL:sec:LDP). We now state a version of our main result, which should be understood as a formalization of the strategy described previously. We introduce our notation, referring the reader to §2 for precise definitions: is the standard topological pressure; the quantity is the largest free energy of an ergodic measure which observes non-expansive behavior; the specification property and Bowen property are versions of the classic properties which apply only on rather than globally; the expression is the topological pressure of the obstructions to specification and regularity. ###### Theorem A. Let be a continuous flow on a compact metric space, and a continuous potential function. Suppose that and that admits a decomposition with the following properties: 1. has the weak specification property; 2. has the Bowen property on ; 3. . Then has a unique equilibrium state. In fact, we will prove a slightly more general result, of which Theorem A is a corollary. The more general version, Theorem 2.9, applies under slightly weaker versions of our hypotheses, which we discuss and motivate in §2.6. We also develop versions of our results that apply for homeomorphisms. These discrete-time arguments are analogous to, and easier than, the flow case, so we just outline the proof, highlighting any differences with the flow case. Our main results for homeomorphisms are Theorem LABEL:thm:mapssimple, which is the analogue of Theorem A, and Theorem LABEL:thm:mapsD, which is the analogue for homeomorphisms of Theorem 2.9. Finally, in Theorem LABEL:thm:ldp, we establish the upper level-2 large deviations principle for the unique equilibrium states provided by Theorem LABEL:thm:mapssimple. ### Structure of the paper We collect our definitions, particularly for flows, in §2. Our main results for flows are proved in §§34. Our main results for maps are proved in §§LABEL:sec:mapsLABEL:sec:maps-pf. In §LABEL:sec:LDP, we prove the large deviations results of Theorem LABEL:thm:ldp. In §LABEL:sec:aee, we prove Theorem 3.2, which is a self-contained result about measure-theoretic entropy for almost entropy expansive measures. ## 2. Definitions In this section we give the relevant definitions for flows; the corresponding definitions for maps are given in §LABEL:sec:maps. ### 2.1. Partition sums and topological pressure Throughout, will denote a compact metric space and will denote a continuous flow on . We write for the set of Borel -invariant probability measures on . Given , , and we define the Bowen metric (2.1) dt(x,y):=sup{d(fsx,fsy)∣s∈[0,t]}, and the Bowen balls (2.2) Bt(x,δ) :={y∈X∣dt(x,y)<δ}, ¯¯¯¯Bt(x,δ) :={y∈X∣dt(x,y)≤δ}. Given , , and , we say that is -separated if for every distinct we have . Writing , we view as the space of finite orbit segments for by associating to each pair the orbit segment . Our convention is that is identified with the empty set rather than the point . Given and we write . Now we fix a continuous potential function . Given a fixed scale , we use to assign a weight to every finite orbit segment by putting (2.3) Φε(x,t)=supy∈Bt(x,ε)∫t0ϕ(fsy)ds. In particular, . 
The general relationship between and is that (2.4) |Φε(x,t)−Φ0(x,t)|≤tVar(ϕ,ε), where . Given and , we consider the partition function (2.5) Λ(C,ϕ,δ,ε,t)=sup{∑x∈EeΦε(x,t)∣E⊂Ct is (t,δ)-% separated}. We will often suppress the function from the notation, since it is fixed throughout the paper, and simply write . When is the entire system, we will simply write or . We call a -separated set that attains the supremum in (2.5) maximizing for . We are only guaranteed the existence of such sets when , since otherwise may not be compact. The pressure of on at scales is given by (2.6) P(C,ϕ,δ,ε)=¯¯¯¯¯¯¯¯limt→∞1tlogΛ(C,ϕ,δ,ε,t). Note that is monotonic in both and , but in different directions; thus the same is true of . Again, we write in place of to agree with more standard notation, and we let (2.7) P(C,ϕ)=limδ→0P(C,ϕ,δ). When is the entire space of orbit segments, the topological pressure reduces to the usual notion of topological pressure on the entire system, and we write in place of , and in place of . The variational principle for flows [BR75] states that , where is the usual measure-theoretic entropy of the time- map of the flow. A measure achieving the supremum is called an equilibrium state. ###### Remark 2.1. The most obvious definition of partition function would be to take so that the weight given to each orbit segment is determined by the integral of the potential function along that exact orbit segment, rather than by nearby ones. To match more standard notation, we often write in place of . The partition sums arise throughout this paper, particularly in §4.1 and §LABEL:sec:adapted. The relationship between the two quantities can be summarised as follows. 1. If is expansive at scale , then . 2. If is Bowen at scale , then the two pressures above are equal, and moreover the ratio between and is bounded away from and . 3. In the absence of regularity or expansivity assumptions, we have the relationship e−tVar(ϕ,ε)Λ(C,ϕ,δ,ε,t)≤Λ(C,ϕ,δ,t)≤etVar(ϕ,ε)Λ(C,ϕ,δ,ε,t), and thus . By continuity of , this establishes that as , but does not give us the conclusions of (1) or (2). Because our versions of expansivity and the Bowen property do not hold globally, we are in case (3) above, so a priori we cannot replace with in the proofs. ###### Remark 2.2. We can restrict to -separated sets of maximal cardinality in the definition of pressure: these always exist, even when is non-compact, since the possible values for the cardinality are finite (by compactness of ). If were not of maximal cardinality, we could just add in another point, which would increase the partition sum (2.5). Furthermore, a -separated set of maximal cardinality is -spanning in the sense that . If this were not so then we could add another point to and increase the cardinality. ### 2.2. Decompositions We introduce the notion of a decomposition for a sub-collection of the space of orbit segments. ###### Definition 2.3. A decomposition for consists of three collections and three functions such that for every , the values , , and satisfy , and (2.8) If , we say that is a decomposition for . Given a decomposition and , we write for the set of orbit segments for which and . We make a standing assumption that to allow for orbit segments to be decomposed in ‘trivial’ ways; for example, can belong ‘purely’ to one of the collections , , or or can transition directly from to – note that formally the symbols are identified with the empty set. This is implicit in our earlier work [CT, CT2, CT3]. 
We will be interested in decompositions where has specification, has the Bowen property on , and carries smaller pressure than the entire system. In the case of flows, a priori we must replace the collections and that appear in the decomposition with a related and slightly larger collection , where given we write (2.9) [C]:={(x,n)∈X×N∣(f−sx,n+s+t)∈C for some s,t∈[0,1]}. Passing from to ensures that the decomposition is well behaved with respect to replacing continuous time with discrete time. This issue occurs in Lemma LABEL:lem:many-in-G. ### 2.3. Specification We say that has weak specification at scale if there exists such that for every there exists a point and a sequence of “gluing times” with such that for and , we have (see Figure 1) (2.10) dtj(fsj−1+τj−1y,xj)<δ for every 1≤j≤k. We say that has weak specification at scale with maximum gap size if we want to declare a value of that plays the role described above. We say that has weak specification if it has weak specification at every scale . ###### Remark 2.4. We often write (W)-specification as an abbreviation for weak specification. Furthermore, since (W)-specification is the only version of the specification property considered in this paper, we henceforth use the term specification as shorthand for this property. Intuitively, (2.10) means that there is some point whose orbit shadows the orbit of for time , then after a “gap” of length at most , shadows the orbit of for time , and so on. Note that is the time spent for the orbit to shadow the orbit segments up to . Note that we differ from Franco [eF77] in allowing to take any value in , not just one that is close to . This difference is analogous in the discrete time case to the difference between (S)-specification where we take the transition times exactly , or (W)-specification where the transition times are bounded above by . Franco also asks that the shadowing orbit can be taken to be periodic, and that the gluing time does not depend on any of the orbit segments with . We can weaken the definition of specification so that it only applies to elements of that are sufficiently long. This gives us some useful additional flexibility which we exploit in Lemma 2.10. ###### Definition 2.5. We say that has tail (W)-specification at scale if there exists so that has weak specification at scale ; i.e. the specification property holds for the collection of orbit segments {(xi,ti)∈G∣ti≥T0}. We also sometimes write “ has (W)-specification at scale for ” to describe this property. ### 2.4. The Bowen property The Bowen property was first defined for maps in [rB74], and extended to flows by Franco [eF77]. We give a version of this definition for a collection of orbit segments . ###### Definition 2.6. Given , a potential has the Bowen property on at scale if there exists so that (2.11) sup{|Φ0(x,t)−Φ0(y,t)|:(x,t)∈C,y∈Bt(x,ε)}≤K. We say has the Bowen property on if there exists so that has the Bowen property on at scale . In particular, we say that has the Bowen property if has the Bowen property on ; this agrees with the original definition of Bowen and Franco. This dynamically-defined regularity property is central to Bowen’s proof of uniqueness of equilibrium states. For a uniformly hyperbolic system, every Hölder potential has the Bowen property. This is no longer true in non-uniform hyperbolicity; for example, the geometric potential for the Manneville–Pomeau map is a natural potential which is Hölder but not Bowen. 
Asking for the Bowen property to hold on a collection rather than globally allows us to deal with non-uniformly hyperbolic systems where one only expects this kind of regularity to hold for those orbit segments which experience a definite amount of hyperbolicity, and where it may not be known whether natural potentials such as the geometric potential are Hölder [CFT, BCFT]. We sometimes call the distortion constant for the Bowen property. Note that if has the Bowen property at scale on with distortion constant , then for any , has the Bowen property at scale on with distortion constant given by . ### 2.5. Almost expansivity Given and , consider the set (2.12) Γε(x):={y∈X∣d(ftx,fty)≤ε % for all t∈R}, which can be thought of as a two-sided Bowen ball of infinite order for the flow. Note that is compact for every . Expansivity for flows was defined by Bowen and Walters; their definition, details of which can be found in [BW72], implies that for every , there exists such that (2.13) Γε(x)⊂f[−s,s](x):={ft(x)∣t∈[−s,s]} for every . Since points on a small segment of orbit always stay close for all time, (2.13) essentially says that the set is the smallest possible. Thus, we declare the set of non-expansive points to be those where (2.13) fails. We want to consider measures that witness expansive behaviour, so we declare an almost expansive measure to be one that gives zero measure to the non-expansive points. This is the content of the next definition. ###### Definition 2.7. Given , the set of non-expansive points at scale for a flow is the set NE(ε):={x∈X∣Γε(x)⊄f[−s,s](x) for any s>0}. We say that an -invariant measure is almost expansive at scale if . A measure which is almost expansive at scale gives full measure to the set of points for which there exists for which (2.13) holds. We remark that in contrast to the Bowen-Walters definition, we allow to be large or even unbounded. Furthermore, our hypotheses do not preclude the existence of fixed points for the flow; for expansive flows, fixed points can only be isolated [BW72, Lemma 1] and can hence be disregarded. The following definition gives a quantity which captures the largest possible free energy of a non-expansive ergodic measure. ###### Definition 2.8. Given a potential , the pressure of obstructions to expansivity at scale is P⊥exp(ϕ,ε) =supμ∈MeF(X){hμ(f1)+∫ϕdμ∣μ(NE(ε))>0} =supμ∈MeF(X){hμ(f1)+∫ϕdμ∣μ(NE(ε))=1}. We define a scale-free quantity by P⊥exp(ϕ)=limε→0P⊥exp(ϕ,ε). Note that is non-increasing as , which is why the limit in the above definition exists. It is essential that the measures in the first supremum are ergodic. If we took this supremum over invariant measures, and a non-expansive measure existed, we would include measures that are a convex combination of a non-expansive measure and a measure with large free energy, so the supremum would equal the topological pressure. ### 2.6. Main results for flows Theorem A will be deduced from the following more general result, which is proved in §§34. ###### Theorem 2.9. Let be a continuous flow on a compact metric space, and a continuous potential function. Suppose there are with such that and there exists which admits a decomposition with the following properties: 1. For every , has tail (W)-specification at scale ; 2. has the Bowen property at scale on ; 3. . Then has a unique equilibrium state. These hypotheses weaken those of Theorem A in two main directions. 1. 
Theorem A requires that every orbit segment has a decomposition, while Theorem 2.9 permits a set of orbit segments to have no decomposition, provided they carry less pressure than the whole system. 2. The hypotheses of Theorem A require knowledge of the system at all scales: in particular, the specification condition 1 in Theorem A requires specification to hold at every scale . Here, we require a specification property to be verified only at a fixed scale , and all other hypotheses to be verified at a larger fixed scale . An example where this is useful is the Bonatti–Viana family of diffeomorphisms, where in [CFT] we are able to verify the discrete-time version of these hypotheses at suitably chosen scales, but establishing them for arbitrarily small scales is difficult, and perhaps impossible. We make a few more remarks on these hypotheses. By Remark 2.1, we can guarantee 3 by checking the bound (2.14) P(Dc∪[P]∪[S],ϕ,δ)+Var(ϕ,ε) We do not claim that the relationship is sharp, but we do not expect that it can be significantly improved using these methods. The number does not have any special significance but it is unavoidable that we control the Bowen property and expansivity at a larger scale than where specification is assumed. If we assume the hypotheses of Theorem A, we can verify the hypotheses of Theorem 2.9 by taking , and any suitably small with . The only hypothesis which is not immediate to verify from the hypotheses of Theorem A is 1, and this is verified by the following lemma. ###### Lemma 2.10. Suppose that has tail specification at all scales , then so does for every . In particular, 1 implies 1. ###### Proof. Given , let be such that implies that for every . (Positivity of follows from continuity of the flow and compactness of .) Now let be such that has specification at scale . Given any with , we must have . Thus if is any collection of orbit segments in with , then there are and such that . Since we can use the specification property on to get an orbit that shadows each to within (with transition times at most ). By our choice of , this orbit shadows each to within (with transition times at most ). We conclude that has tail specification at scale . ∎ We conclude that Theorem A is a corollary of Theorem 2.9, and we now turn our attention to proving this more general statement. ## 3. Weak expansivity and generating for adapted partitions In this section, we develop some general preparatory results on generating properties of partitions in the presence of weak expansivity properties. ### 3.1. Almost entropy expansivity It is well known that the time- map of an expansive flow is entropy expansive. We develop an analogue of entropy expansivity for measures called almost entropy expansivity, which has the property that if is almost expansive for a flow, then it is almost entropy expansive for the time- map of the flow. This property plays an important role in our proof, as entropy expansivity does for Franco, and is key to obtaining a number of results on generating for partitions. Let be a compact metric space and a homeomorphism. Let be an ergodic -invariant Borel probability measure. For a set , let denote the (upper capacity) entropy of . That is, corresponds to for as defined in §LABEL:sec:maps, which is the natural analogue for maps of (2.5)–(2.7). Given , consider the set (3.1) Γε(x;f,d):={y∈X∣d(fnx,fny)≤ε%foralln∈Z}. Recall from [rB72] that the map is said to be entropy expansive if for every . We will need the following weaker notion. ###### Definition 3.1. 
We say that is almost entropy expansive at scale (in the metric ) with respect to if for -a.e. . Our notation emphasizes the role of the metric because later in the paper we will need to use this notion relative to various metrics . Bowen proved that if is entropy expansive at scale , then every partition with diameter smaller than has . This result was obtained as an immediate consequence of the main part of [rB72, Theorem 3.5], which shows that for any and any partition with , we have (3.2) hμ(f)≤hμ(f,A)+supx∈Xh(Γε(x;f,d)). Clearly, is entropy expansive if the supremum is 0. Similarly, one sees immediately that is almost entropy expansive at scale if and only if the essential supremum (3.3) h∗(μ,ε;f,d)=sup{¯h∈R∣μ{x∣h(Γε(x;f,d))>¯h}>0} vanishes, and we strengthen Bowen’s result by showing that one can use the -essential supremum in (3.2). The following theorem is proved in §LABEL:sec:aee. ###### Theorem 3.2. Let be a compact metric space and a homeomorphism. Let be an ergodic -invariant Borel probability measure. If is any partition with in the metric , then (3.4) hμ(f)≤hμ(f,A)+h∗(μ,ε;f,d). In particular, if is almost entropy expansive at scale , then every partition with diameter smaller than has . To apply Theorem 3.2 in the setting of our main results, we first relate almost expansivity for the flow with almost entropy expansivity for the time- map of the flow. ###### Proposition 3.3. If is almost expansive at scale , then is almost entropy expansive (at scale in the metric ) with respect to the time- map . ###### Proof. It is immediate from the definitions that . Thus, if is almost expansive for , then for -a.e. , the set is contained in for some . Fix such an and let . In what follows, we will show that . This shows that , and since this argument applies to -almost every , it follows that is almost entropy expansive for . So, it just remains to show that the entropy of the finite orbit segment is with respect to . Let be sufficiently small that for all (this is possible by continuity of the flow and compactness of the space). Given , fix large enough such that . Let , and note that for all and all . Thus, for every , the set is -spanning under for . It follows that, in the metric , is -spanning under , which gives . ∎ The following proposition, which plays a similar role as [CT3, Proposition 2.6], is a consequence of Theorem 3.2 and Proposition 3.3. ###### Proposition 3.4. If is almost expansive at scale and is a finite measurable partition of with diameter less than in the metric for some , then the time-t map satisfies . ### 3.2. Adapted partitions and results on generating We extend Proposition 3.4 to some useful results on generating using the notion of an adapted partition. This terminology was introduced in [CT3], although the concept goes back to Bowen [rB73]. ###### Definition 3.5. Let be a -separated set of maximal cardinality. A partition of is adapted to if for every there is such that . Adapted partitions exist for any -separated set of maximal cardinality since the sets are disjoint and the sets cover . ###### Lemma 3.6. If is almost expansive at scale , and is an adapted partition for a -separated set of maximal cardinality, then . ###### Proof. For any , there exists so that ; this shows that in the metric . By Proposition 3.4, we have . ∎ The proof of the following proposition requires both Lemma 3.6 and a careful use of the almost expansivity property to take a crucial step of replacing a term of the form with . ###### Proposition 3.7. If , then for every . 
###### Proof. Given an ergodic , write for convenience. We prove the proposition by showing that for every ergodic with . We do this by relating both and to an adapted partition. In order to carry this out we first introduce a technical lemma that will be used both here and in the proof of Lemma LABEL:lem:pos-for-es. Given a finite partition and an -invariant measure , for each with we define a function by (3.5) Φ(w):=1μ(w)∫wΦ0(x,t)dμ. Given , write . ###### Lemma 3.8. Suppose is almost expansive at scale , and let . Let be an adapted partition for a maximizing -separated set for . Let be a union of elements of . Then for every we have t(hμ(f1)+∫ϕdμ)≤μ(D)log∑w∈Atw⊂DeΦ(w)+μ(Dc)log∑w∈Atw⊂DceΦ(w)+H(μ(D)) where , and is as in (3.5). ###### Proof. Abramov’s formula [lA59] gives for all , and Lemma 3.6 gives , so tPμ(ϕ)=hμ(ft,At)+∫Φ0(x,t)dμ≤∑w∈Atμ(w)(−logμ(w)+Φ(w)). Let , and write . Breaking up the above sum and normalizing, we have tPμ(ϕ) ≤∑w∈Wμ(w)(Φ(w)−logμ(w))+∑w∈Wcμ(w)(Φ(w)−logμ(w)) =μ(D)∑w∈Wμ(w)μ(D)(Φ(w)−logμ(w)μ(D)) +μ(Dc)∑w∈Wcμ(w)μ(Dc)(Φ(w)−logμ(w)μ(Dc)) +(−μ(D)logμ(D)−μ(Dc)logμ(Dc)). Recall that for non-negative with and arbitrary we have ; the conclusion of Lemma 3.8 follows by applying this to the first sum with , , and the second sum with , . ∎ Now we return to the proof of Proposition 3.7. Let be as in the hypothesis, and let be ergodic with , so that is almost expansive at scale . Fix . Given , consider the set Xs:={x∈X∣Γε(x)⊂f[−s,s](x)}. We have , so there is such that . Now, we fix , and for an arbitrary , we write B[−r,t+r](x,ε):={y:d(fτx,fτy)≤ε % for τ∈[−r,t+r]} For any , we have Γε(x)=⋂r>0B[−r,t+r](x,ε). In particular, given as above and , we see that is an open set which contains , so there is so that . Now, for , let Yr:={x∈Xs:B[−r,t+r](x,ε)⊂⋃y∈f[−s,s](x)Bt(y,α)}. We have , so we can fix sufficiently large so that . We now pass to the set of points whose orbits spend a large proportion of time in . Given , consider the set Zn={x∈X∣Leb{τ∈[0,nt]∣fτ(x)∈Yr}>(1−α)nt}, and note that by the Birkhoff ergodic theorem. Take large enough that for all . The following lemma gives us a regularity property for the potential for points in . ###### Lemma 3.9. Given and , we have (3.6) |Φ0(x,nt)−Φ0(y,nt)|≤(8αnt+4r)∥ϕ∥+ntVar(ϕ,α). ###### Proof. Let and choose such that (here we use that ). Define iteratively as follows: let , and then given , choose any , and put . It follows from the definition of and properties of that • for every ; • ; and • for every . Since , the third property gives for some ; see Figure 2. Thus |Φ0(fτiy,t)−Φ0(fqix,t)|≤2s∥ϕ∥+tVar(ϕ,α). The first two properties give ∣∣ ∣∣Φ0(y,nt)−k∑i=1Φ0(fτiy,t)∣∣ ∣∣ ≤(αnt+2r+2sn)∥ϕ∥, ∣∣ ∣∣Φ0(x,nt)−k∑i=1Φ0(fqix,t)∣∣ ∣∣ ≤(αnt+2r+2sn)∥ϕ∥, and putting it all together we have |Φ0(x,nt)−Φ0(y,nt)| ≤2(αnt+2r+2sn)∥ϕ∥+2sn∥ϕ∥+ntVar(ϕ
## Murty, Maruti Ram Compute Distance To: Author ID: murty.maruti-ram Published as: Murty, M. Ram; Murty, Maruti Ram; Ram Murty, M.; Ram Murty, Maruti; Murty, M. R. more...less Homepage: http://www.mast.queensu.ca/~murty/ External Links: MGP · Wikidata · GND · IdRef Documents Indexed: 265 Publications since 1974, including 18 Books 6 Contributions as Editor · 2 Further Contributions Reviewing Activity: 8 Reviews Biographic References: 1 Publication Co-Authors: 112 Co-Authors with 201 Joint Publications 2,258 Co-Co-Authors all top 5 ### Co-Authors 63 single-authored 32 Murty, Vijaya Kumar 14 Gun, Sanoli 12 Rath, Purusottam 7 Saradha, N. 6 Vatwani, Akshaa 5 Pathak, Siddhi S. 4 Cioabă, Sebastian M. 4 Dewar, Michael C. 4 Graves, Hester 4 Gupta, Rajiv 4 Kim, Seoyoung 4 Sinha, Kaneenika 4 Srinivas, Kotyada 3 Chatterjee, Tapas 3 Cojocaru, Alina Carmen 3 Coppola, Giovanni 3 Felix, Adam Tyler 3 Fouvry, Etienne 3 Lee, Jung-Jo 3 Liu, Yu-Ru 3 Meher, Jaban 3 Pasten, Hector V. 3 Petersen, Kathleen L. 3 Saha, Biswajyoti 3 Shparlinski, Igor E. 3 Weatherby, Chester J. 2 Akbary, Amir 2 Aktaş, Kevser 2 Balasubramanian, Ramachandran 2 Dixit, Anup B. 2 Erdős, Pál 2 Esmonde, Jody 2 Goresky, Robert Mark 2 Herzberg, Agnes Margaret 2 Kar, Arpita 2 Klapper, Andrew M. 2 Kuo, Wentang 2 Mai, Liem 2 Miller, Steven J. 2 Mukhopadhyay, Anirban 2 Prabhu, Neha 2 Pujahari, Sudhir 2 Séguin, François 2 Silverman, Joseph Hillel 2 Thain, Nithum 1 Adhikari, Sukumar Das 1 Ahmadi, Omran 1 Baba, Srinath 1 Balasubramanian, Kumar 1 Ball, L. Simeon 1 Becker, Riley 1 Bernstein, Daniel Julius 1 Bhand, Ajit 1 Biswas, Arunabha 1 Blache, Régis 1 Blake, Ian F. 1 Blokhuis, Aart 1 Cardon, David A. 1 Carlet, Claude 1 Carrell, James B. 1 Castro, Francis Noel 1 Cellarosi, Francesco 1 Chahal, Jasbir Singh 1 Chakraborty, Kalyan 1 Chand Gupta, Kishan 1 Charpin, Pascale 1 Clark, David Alan 1 Cogdell, James W. 1 Cohen, Stephen D. 1 Colbourn, Charles J. 1 Coulter, Robert S. 1 de Smit, Bart 1 Ding, Jintai 1 Dinitz, Jeffrey H. 1 Doche, Christophe 1 Drury, Stephen William 1 Dumas, Jean-Guillaume 1 Ebert, Gary Lee 1 Effinger, Gove W. 1 Enge, Andreas 1 Evans, Ronald J. 1 Fan, Haining 1 Fitzgerald, Robert W. 1 Fodden, Brandon 1 Franc, Cameron 1 Fried, Michael David 1 Friedman, Joel 1 Fu, Lei 1 Gadiyar, Hejmadi Gopalakrishna 1 Gao, Shuhong 1 Garaev, Moubariz Z. 1 Garcia, Arnaldo 1 Giesbrecht, Mark W. 1 Gong, Guang 1 Goss, David Mark 1 Gow, Roderick 1 Güloğlu, Ahmet Muhtar 1 Hachenberger, Dirk 1 Hamieh, Alia 1 Harper, Malcolm ...and 107 more Co-Authors all top 5 ### Serials 31 Journal of Number Theory 17 Proceedings of the American Mathematical Society 14 Journal of the Ramanujan Mathematical Society 13 International Journal of Number Theory 8 Canadian Journal of Mathematics 8 Hardy-Ramanujan Journal 6 American Mathematical Monthly 5 Canadian Mathematical Bulletin 5 Functiones et Approximatio. Commentarii Mathematici 4 Acta Arithmetica 4 Mathematische Annalen 4 Comptes Rendus Mathématiques de l’Académie des Sciences 4 Forum Mathematicum 4 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 4 The Ramanujan Journal 4 Graduate Texts in Mathematics 3 Indian Journal of Pure & Applied Mathematics 3 Inventiones Mathematicae 3 Mathematika 3 Finite Fields and their Applications 3 Mathematics Newsletter 2 Journal of Combinatorial Theory. Series A 2 The Journal of the Indian Mathematical Society. 
New Series 2 CRM Proceedings & Lecture Notes 2 Institute of Mathematical Sciences Lecture Notes 1 Bulletin of the Australian Mathematical Society 1 Rocky Mountain Journal of Mathematics 1 Mathematics of Computation 1 The Mathematical Intelligencer 1 American Journal of Mathematics 1 Annales des Sciences Mathématiques du Québec 1 Archiv der Mathematik 1 Bulletin of the London Mathematical Society 1 Bulletin de la Société Mathématique de France 1 Colloquium Mathematicum 1 Compositio Mathematica 1 Duke Mathematical Journal 1 Journal of Combinatorial Theory. Series B 1 Journal of the Madras University. Section B: Mathematics, Physical and Biological Sciences 1 Journal für die Reine und Angewandte Mathematik 1 The Mathematics Student 1 Michigan Mathematical Journal 1 Pacific Journal of Mathematics 1 Transactions of the American Mathematical Society 1 Bulletin of the Iranian Mathematical Society 1 Computers & Operations Research 1 Internationale Mathematische Nachrichten 1 SIAM Journal on Discrete Mathematics 1 International Journal of Mathematics 1 IMRN. International Mathematics Research Notices 1 Elemente der Mathematik 1 Glasnik Matematički. Serija III 1 Linear Algebra and its Applications 1 Bulletin of the American Mathematical Society. New Series 1 Comptes Rendus de l’Académie des Sciences. Série I 1 Expositiones Mathematicae 1 Notices of the American Mathematical Society 1 Kyushu Journal of Mathematics 1 The New York Journal of Mathematics 1 Mathematical Research Letters 1 ELA. The Electronic Journal of Linear Algebra 1 Annals of Mathematics. Second Series 1 Pure and Applied Mathematics Quarterly 1 Rendiconti del Seminario Matematico. Universitá e Politecnico di Torino 1 CMS Conference Proceedings 1 Fields Institute Monographs 1 London Mathematical Society Student Texts 1 Progress in Mathematics 1 AMS/IP Studies in Advanced Mathematics 1 Texts and Readings in Mathematics 1 Student Mathematical Library 1 Ramanujan Mathematical Society Lecture Notes Series 1 Modern Birkhäuser Classics 1 Discrete Mathematics and its Applications 1 Indian Journal of Discrete Mathematics 1 HBA Lecture Notes in Mathematics all top 5 ### Fields 260 Number theory (11-XX) 23 Algebraic geometry (14-XX) 13 Combinatorics (05-XX) 11 General and overarching topics; collections (00-XX) 11 Group theory and generalizations (20-XX) 6 Special functions (33-XX) 4 History and biography (01-XX) 4 Functions of a complex variable (30-XX) 3 Commutative algebra (13-XX) 3 Abstract harmonic analysis (43-XX) 3 Information and communication theory, circuits (94-XX) 2 Order, lattices, ordered algebraic structures (06-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Convex and discrete geometry (52-XX) 2 Probability theory and stochastic processes (60-XX) 2 Statistics (62-XX) 1 Mathematical logic and foundations (03-XX) 1 Field theory and polynomials (12-XX) 1 Topological groups, Lie groups (22-XX) 1 Real functions (26-XX) 1 Ordinary differential equations (34-XX) 1 Manifolds and cell complexes (57-XX) 1 Numerical analysis (65-XX) 1 Operations research, mathematical programming (90-XX) ### Citations contained in zbMATH Open 203 Publications have been cited 1,748 times in 1,067 Documents Cited by Year Handbook of finite fields. Zbl 1319.11001 2013 Mean values of derivatives of modular $$L$$-series. Zbl 0745.11032 Murty, M. Ram; Murty, V. Kumar 1991 Ramanujan graphs. Zbl 1038.05038 Murty, M. Ram 2003 A remark on Artin’s conjecture. Zbl 0549.10037 Gupta, Rajiv; Murty, M. 
Ram 1984 Oscillations of Fourier coefficients of modular forms. Zbl 0489.10020 Murty, M. Ram 1983 On Artin’s conjecture. Zbl 0526.12010 Murty, M. Ram 1983 An introduction to sieve methods and their applications. Zbl 1121.11063 Cojocaru, Alina Carmen; Murty, M. Ram 2006 Non-vanishing of $$L$$-functions and applications. Zbl 0916.11001 Murty, M. Ram; Murty, V. Kumar 1997 Modular forms and the Chebotarev density theorem. Zbl 0644.10018 Murty, M. Ram; Murty, V. Kumar; Saradha, N. 1988 Problems in analytic number theory. 2nd ed. Zbl 1190.11001 Murty, M. Ram 2008 On the distribution of supersingular primes. Zbl 0864.11030 Fouvry, Etienne; Murty, M. Ram 1996 Problems in algebraic number theory. 2nd revised and expanded ed. Zbl 1055.11001 Murty, M. Ram; Esmonde, Jody 2005 Transcendental values of the digamma function. Zbl 1222.11097 2007 Artin’s conjecture for primitive roots. Zbl 0656.10044 Murty, M. Ram 1988 Introduction to $$p$$-adic analytic number theory. Zbl 1031.11067 Murty, M. Ram 2002 Primitive points on elliptic curves. Zbl 0598.14018 Gupta, Rajiv; Murty, M. Ram 1986 A variant of the Bombieri-Vinogradov theorem. Zbl 0619.10039 Murty, M. Ram; Murty, V. Kumar 1987 Cyclicity of elliptic curves modulo $$p$$ and elliptic curve analogues of Linnik’s problem. Zbl 1087.11037 Cojocaru, Alina Carmen; Murty, M. Ram 2004 Euler-Lehmer constants and a conjecture of Erdős. Zbl 1204.11114 2010 Selberg’s conjectures and Artin $$L$$-functions. Zbl 0805.11062 Murty, M. Ram 1994 Lectures on automorphic $$L$$-functions. Zbl 1066.11021 Cogdell, James W.; Kim, Henry H.; Murty, M. Ram 2004 Exponents of class groups of quadratic fields. Zbl 0993.11059 Murty, M. Ram 1999 Transcendental values of certain Eichler integrals. Zbl 1278.11056 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 Cyclicity and generation of points mod $$p$$ on elliptic curves. Zbl 0731.14011 Gupta, Rajiv; Murty, M. Ram 1990 Prime numbers and irreducible polynomials. Zbl 1053.11020 Murty, M. Ram 2002 Effective equidistribution of eigenvalues of Hecke operators. Zbl 1234.11055 Murty, M. Ram; Sinha, Kaneenika 2009 Euclidean rings of algebraic integers. Zbl 1048.11080 Harper, Malcolm; Murty, M. Ram 2004 Non-vanishing of $$L$$-functions and applications. Reprint of the 1997 edition. Zbl 1235.11086 Murty, M. Ram; Murty, V. Kumar 2012 On the order of $$(a \text{mod} p)$$. Zbl 0931.11034 Erdős, Pál; Murty, M. Ram 1999 Problems in analytic number theory. Zbl 0971.11001 Murty, M. Ram 2001 Strong multiplicity one for Selberg’s class. Zbl 0823.11049 Murty, Maruti Ram; Murty, Vijaya Kumar 1994 Zeros of Ramanujan polynomials. Zbl 1238.11033 Murty, M. Ram; Smyth, Chris; Wang, Rob J. 2011 The pair correlation of zeros of functions in the Selberg class. Zbl 0929.11030 Murty, M. Ram; Perelli, Alberto 1999 Multiple Hurwitz zeta functions. Zbl 1124.11046 Murty, M. Ram; Sinha, Kaneenika 2006 On the number of real quadratic fields with class number divisible by 3. Zbl 1024.11073 Chakraborty, K.; Murty, M. Ram 2003 Ramanujan series for arithmetical functions. Zbl 1344.11006 Murty, M. Ram 2013 Odd values of the Ramanjuan $$\tau$$-function. Zbl 0635.10020 Murty, M. Ram; Murty, V. Kumar; Shorey, T. N. 1987 On the Rédei zeta function. Zbl 0446.05003 Kung, Joseph P. S.; Murty, M. Ram; Rota, Gian-Carlo 1980 On the estimation of eigenvalues of Hecke operators. Zbl 0588.10027 Murty, M. Ram 1985 Special values of the polygamma functions. Zbl 1189.11039 2009 Prime divisors of Fourier coefficients of modular forms. Zbl 0537.10026 Murty, M. Ram; Murty, V. 
Kumar 1984 The Euclidean algorithm for Galois extensions of $$\mathbb{Q}$$. Zbl 0814.11049 Clark, David A.; Murty, M. Ram 1995 An analogue of the Erdős-Kac theorem for Fourier coefficients of modular forms. Zbl 0557.10033 Murty, M. Ram; Murty, V. Kumar 1984 The abc conjecture and non-Wieferich primes in arithmetic progressions. Zbl 1272.11014 Graves, Hester; Murty, M. Ram 2013 Sign changes of Fourier coefficients of half-integral weight cusp forms. Zbl 1304.11029 Meher, Jaban; Murty, M. Ram 2014 Congruences between modular forms. Zbl 0910.11018 Murty, M. Ram 1997 Ramanujan-Fourier series and a theorem of Ingham. Zbl 1362.11084 2014 Distinguishing Hecke eigenforms. Zbl 1364.11094 Murty, M. Ram; Pujahari, Sudhir 2017 Problems in algebraic number theory. Zbl 0911.11001 Esmonde, Jody; Murty, M. Ram 1999 Spectral estimates for abelian Cayley graphs. Zbl 1083.05024 Friedman, Joel; Murty, M. Ram; Tillich, Jean-Pierre 2006 Counting integral ideals in a number field. Zbl 1129.11051 Murty, M. Ram; Van Order, Jeanine 2007 A derivation of the Hardy-Ramanujan formula from an arithmetic formula. Zbl 1357.11104 Dewar, Michael; Murty, M. Ram 2013 The square sieve and the Lang-Trotter conjecture. Zbl 1094.11021 Cojocaru, Alina Carmen; Fouvry, Etienne; Murty, M. Ram 2005 Odd values of Fourier coefficients of certain modular forms. Zbl 1197.11056 Murty, M. Ram; Murty, V. Kumar 2007 Uniform distribution of zeros of Dirichlet series. Zbl 1179.11028 Akbary, Amir; Murty, M. Ram 2008 Transcendental nature of special values of $$L$$-functions. Zbl 1218.11070 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 A simple derivation of $$\zeta(1-K)=-B_K/K$$. Zbl 1034.11048 Murty, M. Ram; Reece, Marilyn 2000 Explicit formulas for the pair correlation of zeros of functions in the Selberg class. Zbl 1030.11044 Murty, M. Ram; Zaharescu, Alexandru 2002 On groups of squarefree order. Zbl 0531.10048 Murty, M. Ram; Murty, V. Kumar 1984 A problem of Chowla revisited. Zbl 1241.11083 Murty, M. Ram; Murty, V. Kumar 2011 Transcendental values of class group $$L$$-functions. Zbl 1281.11071 Murty, M. Ram; Murty, V. Kumar 2011 Variations on a theme of Romanoff. Zbl 0869.11004 Murty, M. Ram; Rosen, Michael; Silverman, Joseph H. 1996 Oscillations of coefficients of Dirichlet series attached to automorphic forms. Zbl 1409.11073 Meher, Jaban; Murty, M. Ram 2017 On the supersingular reduction of elliptic curves. Zbl 0654.14018 Murty, M. Ram 1987 Transcendental numbers. Zbl 1297.11001 Murty, M. Ram; Rath, Purusottam 2014 Averages of exponential twists of the Liouville function. Zbl 1012.11076 Murty, M. Ram; Sankaranarayanan, A. 2002 The ABC conjecture and exponents of class groups of quadratic fields. Zbl 0893.11043 Murty, M. Ram 1998 An analogue of Artin’s conjecture for Abelian extensions. Zbl 0531.12010 Murty, M. Ram 1984 The analytic rank of $$J_ 0 (N) (\mathbb{Q})$$. Zbl 0851.11036 Murty, M. Ram 1995 Some remarks on Artin’s conjecture. Zbl 0574.10005 Murty, M. Ram; Srinivasan, S. 1987 Irreducibility of Hecke polynomials. Zbl 1162.11335 Baba, Srinath; Murty, M. Ram 2003 Algebraic independence of values of modular forms. Zbl 1231.11082 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 The Fibonacci zeta function. Zbl 1307.11096 Murty, M. Ram 2013 Non-vanishing of Dirichlet series with periodic coefficients. Zbl 1304.11092 Chatterjee, Tapas; Murty, M. Ram 2014 Transcendental values of the $$p$$-adic digamma function. Zbl 1253.11077 2008 Exponents of class groups of quadratic function fields over finite fields. 
Zbl 0999.11069 Cardon, David A.; Murty, M. Ram 2001 The $$abc$$ conjecture and prime divisors of the Lucas and Lehmer sequences. Zbl 1030.11012 Murty, M. Ram; Wong, Siman 2002 On the transcendence of certain infinite series. Zbl 1235.11070 Murty, M. Ram; Weatherby, Chester J. 2011 Stronger multiplicity one theorems for forms of general type on $$GL_ 2$$. Zbl 0874.11041 Murty, M. Ram; Rajan, C. S. 1996 Problems in the theory of modular forms. Zbl 1357.11002 Murty, M. Ram; Dewar, Michael; Graves, Hester 2015 Non-abelian generalizations of the Erdős-Kac theorem. Zbl 1061.11052 Murty, M. Ram; Saidak, Filip 2004 The mathematical legacy of Srinivasa Ramanujan. Zbl 1277.01002 Murty, M. Ram; Murty, V. Kumar 2013 An asymptotic formula for the coefficients of $$j(z)$$. Zbl 1335.11033 Dewar, Michael; Murty, M. Ram 2013 A Bombieri-Vinogradov theorem for all number fields. Zbl 1336.11072 Murty, M. Ram; Petersen, Kathleen L. 2013 On a conjecture of Chowla and Milnor. Zbl 1273.11135 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 Sudoku squares and chromatic polynomials. Zbl 1156.05301 Herzberg, Agnes M.; Murty, M. Ram 2007 Transcendence of the log gamma function and some discrete periods. Zbl 1175.11039 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2009 The Phragmén-Lindelöf theorem and modular elliptic curves. Zbl 0822.11047 Mai, Liem; Murty, M. Ram 1994 A motivated introduction to the Langlands program. Zbl 0806.11054 Murty, M. Ram 1993 On the error term in a Parseval type formula in the theory of Ramanujan expansions. II. Zbl 1396.11005 Coppola, Giovanni; Murty, M. Ram; Saha, Biswajyoti 2016 Bertrand’s postulate for number fields. Zbl 1422.11231 Hulse, Thomas A.; Murty, M. Ram 2017 On the asymptotics for invariants of elliptic curves modulo $$p$$. Zbl 1296.11060 Felix, Adam Tyler; Murty, M. Ram 2013 An introduction to Artin $$L$$-functions. Zbl 1078.11065 Murty, M. Ram 2001 Bounds for congruence primes. Zbl 0933.11024 Murty, M. Ram 1999 Selberg’s conjectures and Artin $$L$$-functions. II. Zbl 0886.11064 Murty, M. Ram 1995 Transcendence of generalized Euler constants. Zbl 1310.11075 Murty, M. Ram; Zaytseva, Anastasia 2013 A family of number fields with unit rank at least 4 that has Euclidean ideals. Zbl 1329.11115 Graves, Hester; Murty, M. Ram 2013 A vanishing criterion for Dirichlet series with periodic coefficients. Zbl 1440.11158 Chatterjee, Tapas; Murty, M. Ram; Pathak, Siddhi 2018 Linear independence of Hurwitz zeta values and a theorem of Baker-Birch-Wirsing over number fields. Zbl 1310.11077 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2012 On decimations of $$\ell$$-sequences. Zbl 1114.11015 Goresky, Mark; Klapper, Andrew; Murty, M. Ram; Shparlinski, Igor 2004 The Paley graph conjecture and Diophantine $$m$$-tuples. Zbl 1428.05302 Güloğlu, Ahmet M.; Murty, M. Ram 2020 Artin’s primitive root conjecture for function fields revisited. Zbl 07312701 Kim, Seoyoung; Ram Murty, M. 2020 Diophantine $$m$$-tuples with the property $$D(n)$$. Zbl 1455.11048 Becker, Riley; Murty, M. Ram 2019 A lower bound for the two-variable Artin conjecture and prime divisors of recurrence sequences. Zbl 1437.11141 Murty, M. Ram; Séguin, François; Stewart, Cameron L. 2019 A vanishing criterion for Dirichlet series with periodic coefficients. Zbl 1440.11158 Chatterjee, Tapas; Murty, M. Ram; Pathak, Siddhi 2018 A remark on the Lang-Trotter and Artin conjectures. Zbl 1431.11111 Murty, M. Ram; Vatwani, Akshaa 2018 Simultaneous non-vanishing and sign changes of Fourier coefficients of modular forms. 
Zbl 1422.11086 Kumari, Moni; Murty, M. Ram 2018 Elliptic curves, $$L$$-functions, and Hilbert’s tenth problem. Zbl 1441.11300 Murty, M. Ram; Pasten, Hector 2018 Special values of derivatives of $$L$$-series and generalized Stieltjes constants. Zbl 1421.11057 Murty, M. Ram; Pathak, Siddhi 2018 Murty, M. Ram; Srinivas, Kotyada; Subramani, Muthukrishnan 2018 Finite Ramanujan expansions and shifted convolution sums of arithmetical functions. II. Zbl 1432.11006 Coppola, Giovanni; Murty, M. Ram 2018 Transcendental sums related to the zeros of zeta functions. Zbl 1448.11140 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2018 Transcendental numbers and special values of Dirichlet series. Zbl 1440.11164 Murty, M. Ram 2018 The Chebotarev density theorem and the pair correlation conjecture. Zbl 1425.11148 Murty, M. Ram; Murty, V. Kumar; Wong, Peng-Jie 2018 Distinguishing Hecke eigenforms. Zbl 1364.11094 Murty, M. Ram; Pujahari, Sudhir 2017 Oscillations of coefficients of Dirichlet series attached to automorphic forms. Zbl 1409.11073 Meher, Jaban; Murty, M. Ram 2017 Bertrand’s postulate for number fields. Zbl 1422.11231 Hulse, Thomas A.; Murty, M. Ram 2017 Twin primes and the parity problem. Zbl 1421.11075 Murty, M. Ram; Vatwani, Akshaa 2017 A higher rank Selberg sieve with an additive twist and applications. Zbl 1427.11088 Murty, M. Ram; Vatwani, Akshaa 2017 Finite Ramanujan expansions and shifted convolution sums of arithmetical functions. Zbl 1403.11003 Coppola, Giovanni; Murty, M. Ram; Saha, Biswajyoti 2017 The analog of the Erdös distance problem in finite fields. Zbl 1430.11161 2017 On the number of special numbers. Zbl 1421.11074 Aktaş, Kevser; Murty, M. Ram 2017 On the error term in a Parseval type formula in the theory of Ramanujan expansions. II. Zbl 1396.11005 Coppola, Giovanni; Murty, M. Ram; Saha, Biswajyoti 2016 A generalization of Euler’s theorem for $$\zeta(2k)$$. Zbl 1341.11040 Murty, M. Ram; Weatherby, Chester 2016 On the nature of $$e^{\gamma}$$ and non-vanishing of derivatives of $$L$$-series at $$s=1/2$$. Zbl 1400.11116 Murty, M. Ram; Tanabe, Naomi 2016 Some remarks on the discrete uncertainty principle. Zbl 1418.11024 Murty, M. Ram 2016 Generalization of an identity of Ramanujan. Zbl 1425.11145 Gun, Sanoli; Murty, M. Ram 2016 An elliptic analogue of a theorem of Hecke. Zbl 1415.11105 Ram Murty, M.; Vatwani, Akshaa 2016 Some remarks related to Maeda’s conjecture. Zbl 1401.11089 Murty, M. Ram; Srinivas, K. 2016 On sign changes for almost prime coefficients of half-integral weight modular forms. Zbl 1401.11093 Krishnamoorthy, Srilakshmi; Murty, M. Ram 2016 A note on $$q$$-analogues of Dirichlet $$L$$-functions. Zbl 1336.11052 Hamieh, Alia; Murty, M. Ram 2016 Generalization of a theorem of Hurwitz. Zbl 1425.11137 Lee, Jung-Jo; Murty, M. Ram; Park, Donghoon 2016 Problems in the theory of modular forms. Zbl 1357.11002 Murty, M. Ram; Dewar, Michael; Graves, Hester 2015 On the error term in a Parseval type formula in the theory of Ramanujan expansions. Zbl 1395.11117 Murty, M. Ram; Saha, Biswajyoti 2015 On a conjecture of Erdős and certain Dirichlet series. Zbl 1333.11084 Chatterjee, Tapas; Murty, M. Ram 2015 Special values of the gamma function at CM points. Zbl 1319.11048 Murty, M. Ram; Weatherby, Chester 2015 On the parity of the Fourier coefficients of $$j$$-function. Zbl 1377.11050 2015 Some remarks on automorphy and the Sato-Tate conjecture. Zbl 1397.11119 Murty, M. Ram; Murty, V. Kumar 2015 Sign changes of Fourier coefficients of half-integral weight cusp forms. 
Zbl 1304.11029 Meher, Jaban; Murty, M. Ram 2014 Ramanujan-Fourier series and a theorem of Ingham. Zbl 1362.11084 2014 Transcendental numbers. Zbl 1297.11001 Murty, M. Ram; Rath, Purusottam 2014 Non-vanishing of Dirichlet series with periodic coefficients. Zbl 1304.11092 Chatterjee, Tapas; Murty, M. Ram 2014 Counting squarefree values of polynomials with error term. Zbl 1318.11122 Murty, M. Ram; Pasten, Hector 2014 A note on special values of $$L$$-functions. Zbl 1302.11057 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2014 Divisors of Fourier coefficients of modular forms. Zbl 1302.11022 Gun, Sanoli; Murty, M. Ram 2014 Handbook of finite fields. Zbl 1319.11001 2013 Ramanujan series for arithmetical functions. Zbl 1344.11006 Murty, M. Ram 2013 The abc conjecture and non-Wieferich primes in arithmetic progressions. Zbl 1272.11014 Graves, Hester; Murty, M. Ram 2013 A derivation of the Hardy-Ramanujan formula from an arithmetic formula. Zbl 1357.11104 Dewar, Michael; Murty, M. Ram 2013 The Fibonacci zeta function. Zbl 1307.11096 Murty, M. Ram 2013 The mathematical legacy of Srinivasa Ramanujan. Zbl 1277.01002 Murty, M. Ram; Murty, V. Kumar 2013 An asymptotic formula for the coefficients of $$j(z)$$. Zbl 1335.11033 Dewar, Michael; Murty, M. Ram 2013 A Bombieri-Vinogradov theorem for all number fields. Zbl 1336.11072 Murty, M. Ram; Petersen, Kathleen L. 2013 On the asymptotics for invariants of elliptic curves modulo $$p$$. Zbl 1296.11060 Felix, Adam Tyler; Murty, M. Ram 2013 Transcendence of generalized Euler constants. Zbl 1310.11075 Murty, M. Ram; Zaytseva, Anastasia 2013 A family of number fields with unit rank at least 4 that has Euclidean ideals. Zbl 1329.11115 Graves, Hester; Murty, M. Ram 2013 Modular forms and effective Diophantine approximation. Zbl 1297.11022 Murty, M. Ram; Pasten, Hector 2013 The Euclidean algorithm for number fields and primitive roots. Zbl 1309.11005 Murty, M. Ram; Petersen, Kathleen L. 2013 The partition function revisited. Zbl 1360.11098 Murty, M. Ram 2013 The work of K. Ramachandra in algebraic number theory. Zbl 1303.11004 Murty, M. Ram 2013 Ramanujan and the zeta function. Zbl 1373.11001 Murty, M. Ram 2013 Ramanujan’s proof of Bertrand’s postulate. Zbl 1318.11117 Meher, Jaban; Murty, M. Ram 2013 The twin prime problem and generalizations. Zbl 1382.11076 Murty, M. Ram 2013 Non-vanishing of $$L$$-functions and applications. Reprint of the 1997 edition. Zbl 1235.11086 Murty, M. Ram; Murty, V. Kumar 2012 Linear independence of Hurwitz zeta values and a theorem of Baker-Birch-Wirsing over number fields. Zbl 1310.11077 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2012 The uncertainty principle and a generalization of a theorem of Tao. Zbl 1242.43008 Murty, M. Ram; Whang, Junho Peter 2012 Transcendental values of class group $$L$$-functions. II. Zbl 1282.11082 Murty, M. Ram; Murty, V. Kumar 2012 A problem of Fomenko’s related to Artin’s conjecture. Zbl 1264.11084 Felix, Adam Tyler; Murty, M. Ram 2012 A variant of the Lang-Trotter conjecture. Zbl 1276.11159 Murty, M. Ram; Murty, V. Kumar 2012 Transcendental values of certain Eichler integrals. Zbl 1278.11056 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 Zeros of Ramanujan polynomials. Zbl 1238.11033 Murty, M. Ram; Smyth, Chris; Wang, Rob J. 2011 Transcendental nature of special values of $$L$$-functions. Zbl 1218.11070 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 A problem of Chowla revisited. Zbl 1241.11083 Murty, M. Ram; Murty, V. Kumar 2011 Transcendental values of class group $$L$$-functions. 
Zbl 1281.11071 Murty, M. Ram; Murty, V. Kumar 2011 Algebraic independence of values of modular forms. Zbl 1231.11082 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 On the transcendence of certain infinite series. Zbl 1235.11070 Murty, M. Ram; Weatherby, Chester J. 2011 On a conjecture of Chowla and Milnor. Zbl 1273.11135 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2011 On a problem of Ruderman. Zbl 1264.11059 Murty, M. Ram; Murty, V. Kumar 2011 Some remarks on a problem of Chowla. Zbl 1276.11123 Murty, M. Ram 2011 Effective equidistribution and the Sato-Tate law for families of elliptic curves. Zbl 1207.11062 Miller, Steven J.; Murty, M. Ram 2011 Euler-Lehmer constants and a conjecture of Erdős. Zbl 1204.11114 2010 The Sato-Tate conjecture and generalizations. Zbl 1223.11071 Murty, M. Ram; Murty, V. Kumar 2010 Factoring new parts of Jacobians of certain modular curves. Zbl 1205.11048 Murty, M. Ram; Sinha, Kaneenika 2010 Small solutions of polynomial congruences. Zbl 1203.11002 Murty, M. Ram 2010 Effective equidistribution of eigenvalues of Hecke operators. Zbl 1234.11055 Murty, M. Ram; Sinha, Kaneenika 2009 Special values of the polygamma functions. Zbl 1189.11039 2009 Transcendence of the log gamma function and some discrete periods. Zbl 1175.11039 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2009 Counting squarefree discriminants of trinomials under $$abc$$. Zbl 1217.11093 2009 The generalized Artin conjecture and arithmetic orbifolds. Zbl 1230.11141 Murty, M. Ram; Petersen, Kathleen L. 2009 Linear independence of digamma function and a variant of a conjecture of Rohrlich. Zbl 1175.11038 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2009 A first course in graph theory and combinatorics. Zbl 1216.05001 Cioabă, Sebastian M.; Murty, M. Ram 2009 Problems in analytic number theory. 2nd ed. Zbl 1190.11001 Murty, M. Ram 2008 Uniform distribution of zeros of Dirichlet series. Zbl 1179.11028 Akbary, Amir; Murty, M. Ram 2008 Transcendental values of the $$p$$-adic digamma function. Zbl 1253.11077 2008 Expander graphs and gaps between primes. Zbl 1171.05357 Cioabă, Sebastian M.; Murty, M. Ram 2008 Summation methods and distribution of eigenvalues of Hecke operators. Zbl 1188.11015 Gun, Sanoli; Murty, M. Ram; Rath, Purusottam 2008 Transcendental values of the digamma function. Zbl 1222.11097 2007 Counting integral ideals in a number field. Zbl 1129.11051 Murty, M. Ram; Van Order, Jeanine 2007 Odd values of Fourier coefficients of certain modular forms. Zbl 1197.11056 Murty, M. Ram; Murty, V. Kumar 2007 Sudoku squares and chromatic polynomials. Zbl 1156.05301 Herzberg, Agnes M.; Murty, M. Ram 2007 ...and 103 more Documents all top 5 ### Cited by 1,115 Authors 89 Murty, Maruti Ram 30 Shparlinski, Igor E. 21 Gun, Sanoli 20 Murty, Vijaya Kumar 18 Luca, Florian 13 Akbary, Amir 12 Chakraborty, Kalyan 12 Wong, Peng-Jie 12 Zaharescu, Alexandru 11 Chatterjee, Tapas 11 Kohnen, Winfried 11 Vatwani, Akshaa 10 Cojocaru, Alina Carmen 10 Kaczorowski, Jerzy 10 Meher, Jaban 10 Perelli, Alberto 8 Liu, Yu-Ru 8 Pasten, Hector V. 8 Wong, Kok Bin 7 Das, Soumya 7 Kuo, Wentang 7 Moree, Pieter 7 Pollack, Paul 7 Saha, Biswajyoti 7 Srinivas, Kotyada 7 Steuding, Jörn 6 David, Chantal 6 Felix, Adam Tyler 6 Gómez, Carlos Alexis 6 Hoque, Azizul 6 Ku, Cheng Yeaw 6 Kumar, Balesh 6 Lau, Terry Shue Chien 6 Louboutin, Stéphane R. 6 Ono, Ken 6 Pathak, Siddhi S. 6 Sengupta, Jyoti 6 Smith, Ethan 6 Thorner, Jesse 6 Virdol, Cristian 5 Balasubramanian, Ramachandran 5 Banks, William D. 5 Cho, Ilwoo 5 James, Kevin 5 Kim, Henry H. 
5 Kim, Sungjin 5 Lau, Yuk-Kam 5 Rath, Purusottam 5 Sha, Min 5 Sinha, Kaneenika 4 Alkan, Emre 4 Bharadwaj, Abhishek T. 4 Blomer, Valentin 4 Bonciocat, Nicolae Ciprian 4 Byott, Nigel P. 4 Chattopadhyay, Jaitra 4 Coppola, Giovanni 4 Dixit, Anup B. 4 Dubickas, Artūras 4 Ganguly, Satadal 4 Gupta, Rajiv 4 Jakhar, Anuj 4 Jørgensen, Palle E. T. 4 Kátai, Imre 4 Khurana, Suraj Singh 4 Kim, Seoyoung 4 Kumar, Narasimha 4 Maji, Bibekananda 4 Michel, Philippe Gabriel 4 Miller, Steven J. 4 Nagoshi, Hirofumi 4 Nakamura, Takashi 4 Pankowski, Łukasz 4 Pappalardi, Francesco 4 Paul, Biplab 4 Pomerance, Carl Bernard 4 Pujahari, Sudhir 4 Rout, Sudhansu Sekhar 4 Roy, Arindam 4 Saha, Ekata 4 Shankhadhar, Karam Deo 4 Sivaraman, Jyothsnaa 4 Tanabe, Naomi 4 Ushiroya, Noboru 4 Viswanadham, G. K. 4 Wang, Yingnan 4 Wu, Jie 4 Young, Paul Thomas 4 Zhao, Liangyi 3 Alabdali, Ali A. 3 Baier, Stephan 3 Bump, Daniel 3 Chen, Yenmei J. 3 Cioabă, Sebastian M. 3 Dhillon, Sonika 3 Dixit, Atul 3 Dudek, Adrian W. 3 Dutta, Utkal Keshari 3 Friedberg, Solomon 3 Garcia, Stephan Ramon ...and 1,015 more Authors all top 5 ### Cited in 205 Serials 159 Journal of Number Theory 62 International Journal of Number Theory 54 Proceedings of the American Mathematical Society 43 The Ramanujan Journal 24 Mathematics of Computation 24 Transactions of the American Mathematical Society 23 Journal of the Ramanujan Mathematical Society 22 Functiones et Approximatio. Commentarii Mathematici 21 Acta Arithmetica 18 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 18 Research in Number Theory 17 Mathematische Annalen 16 Journal de Théorie des Nombres de Bordeaux 15 Journal of Mathematical Analysis and Applications 15 Forum Mathematicum 14 Mathematische Zeitschrift 13 Mathematical Proceedings of the Cambridge Philosophical Society 13 Duke Mathematical Journal 13 Linear Algebra and its Applications 12 Manuscripta Mathematica 11 Bulletin of the American Mathematical Society. New Series 11 Finite Fields and their Applications 10 Indian Journal of Pure & Applied Mathematics 10 Mathematika 9 Bulletin of the Australian Mathematical Society 9 Advances in Mathematics 9 Archiv der Mathematik 8 Journal of Algebra 8 Journal of Combinatorial Theory. Series A 8 Integers 7 Compositio Mathematica 7 Czechoslovak Mathematical Journal 7 Inventiones Mathematicae 6 Lithuanian Mathematical Journal 6 Rocky Mountain Journal of Mathematics 6 Annales de l’Institut Fourier 6 Journal of Pure and Applied Algebra 6 Advances in Applied Mathematics 6 Experimental Mathematics 5 Discrete Mathematics 5 The Mathematical Intelligencer 5 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 5 Monatshefte für Mathematik 5 Acta Mathematica Hungarica 5 Indagationes Mathematicae. New Series 5 Research in the Mathematical Sciences 4 American Mathematical Monthly 4 Israel Journal of Mathematics 4 Acta Mathematica 4 Canadian Mathematical Bulletin 4 Journal für die Reine und Angewandte Mathematik 4 Proceedings of the Japan Academy. Series A 4 Results in Mathematics 4 European Journal of Combinatorics 4 The Electronic Journal of Combinatorics 4 Comptes Rendus. Mathématique. 
Académie des Sciences, Paris 4 JP Journal of Algebra, Number Theory and Applications 4 Algebra & Number Theory 4 Kyoto Journal of Mathematics 3 Communications in Algebra 3 Periodica Mathematica Hungarica 3 Canadian Journal of Mathematics 3 Glasgow Mathematical Journal 3 Memoirs of the American Mathematical Society 3 Michigan Mathematical Journal 3 Revista Matemática Iberoamericana 3 Journal of the American Mathematical Society 3 SIAM Journal on Discrete Mathematics 3 Acta Mathematica Sinica. English Series 3 Central European Journal of Mathematics 3 Journal of Algebra and its Applications 3 Science China. Mathematics 3 S$$\vec{\text{e}}$$MA Journal 2 Chaos, Solitons and Fractals 2 The Annals of Probability 2 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 2 Bulletin of the London Mathematical Society 2 Journal of Combinatorial Theory. Series B 2 Nagoya Mathematical Journal 2 Pacific Journal of Mathematics 2 Proceedings of the London Mathematical Society. Third Series 2 Rendiconti del Circolo Matemàtico di Palermo. Serie II 2 Semigroup Forum 2 Theoretical Computer Science 2 Tokyo Journal of Mathematics 2 Acta Applicandae Mathematicae 2 Graphs and Combinatorics 2 International Journal of Mathematics 2 Aequationes Mathematicae 2 Elemente der Mathematik 2 Proceedings of the National Academy of Sciences of the United States of America 2 Expositiones Mathematicae 2 Journal of Algebraic Combinatorics 2 The New York Journal of Mathematics 2 The Journal of Fourier Analysis and Applications 2 Dynamical Systems 2 Journal of the Australian Mathematical Society 2 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 2 Complex Analysis and Operator Theory 2 Advances in Mathematics of Communications ...and 105 more Serials all top 5 ### Cited in 49 Fields 948 Number theory (11-XX) 94 Combinatorics (05-XX) 73 Algebraic geometry (14-XX) 39 Group theory and generalizations (20-XX) 28 Special functions (33-XX) 21 Field theory and polynomials (12-XX) 20 Topological groups, Lie groups (22-XX) 17 Information and communication theory, circuits (94-XX) 15 Functions of a complex variable (30-XX) 14 Linear and multilinear algebra; matrix theory (15-XX) 14 Computer science (68-XX) 12 Dynamical systems and ergodic theory (37-XX) 10 Commutative algebra (13-XX) 10 Real functions (26-XX) 10 Quantum theory (81-XX) 9 Abstract harmonic analysis (43-XX) 9 Convex and discrete geometry (52-XX) 7 Harmonic analysis on Euclidean spaces (42-XX) 7 Probability theory and stochastic processes (60-XX) 6 History and biography (01-XX) 6 Associative rings and algebras (16-XX) 6 Functional analysis (46-XX) 6 Differential geometry (53-XX) 5 Numerical analysis (65-XX) 4 General and overarching topics; collections (00-XX) 4 Measure and integration (28-XX) 4 Sequences, series, summability (40-XX) 3 Order, lattices, ordered algebraic structures (06-XX) 3 Approximations and expansions (41-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Potential theory (31-XX) 2 Ordinary differential equations (34-XX) 2 Partial differential equations (35-XX) 2 Difference and functional equations (39-XX) 2 Operator theory (47-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Geometry (51-XX) 2 Relativity and gravitational theory (83-XX) 2 Operations research, mathematical programming (90-XX) 1 Mathematical logic and foundations (03-XX) 1 Nonassociative rings and algebras (17-XX) 1 Category theory; homological algebra (18-XX) 1 $$K$$-theory (19-XX) 1 Several complex variables and analytic 
spaces (32-XX) 1 General topology (54-XX) 1 Algebraic topology (55-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Statistics (62-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) ### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
Is there actually any hi-res footage made with MARS rover?

1. Mar 26, 2009 SAZAR
Is there actually any hi-res footage made with MARS rover??
All I see are computer animations, photos and low-res, low-fps black & white clips. Can anyone point to some link where any Mars rover recorded a hi-res video?

2. Mar 26, 2009 Staff: Mentor
Re: Is there actually any hi-res footage made with MARS rover??
It did not. More to the point, of what would it have recorded video? There isn't anything to video there!

3. Mar 26, 2009
Re: Is there actually any hi-res footage made with MARS rover??
If you go to www.nasa.gov you will find pictures but no videos. They have created what appears to be video by stringing successive stills together. An example is the sunrise on Mars.

4. Mar 28, 2009 SAZAR
Re: Is there actually any hi-res footage made with MARS rover??
The thing is: 1) photos can be more easily manipulated than videos (which contain a huge number of pictures and can give you more perception of space); 2) in low-res videos (like 160x120 or less) you can't see details. One can stage it all. It's like it never landed on Mars...

5. Mar 28, 2009 Staff: Mentor
Re: Is there actually any hi-res footage made with MARS rover??
Huh? Are you saying you think the mission was a hoax?

6. Mar 29, 2009
Re: Is there actually any hi-res footage made with MARS rover??
No, I am not saying the mission is a hoax. To my knowledge the rovers do not have video capability. An apparent video of the Sun rising on Mars was, as stated by NASA, created using still images from Spirit. You can see this on NASA's site. Being able to create moving pictures from still images is nothing new; that is how the first motion pictures were produced. To my knowledge, this is still a grade-school science project. You can find this material on NASA's website. If you go to the site and look up the technology section you will find there is no discussion of video images. All data indicate still images.
Last edited: Mar 29, 2009

7. Mar 29, 2009 Staff: Mentor
Re: Is there actually any hi-res footage made with MARS rover??
That question was not directed at you...

8. Mar 30, 2009
Re: Is there actually any hi-res footage made with MARS rover??
I did not feel your question was directed at me. If my response sounded harsh, please forgive me. There are times I unwittingly use too strong a word. This seems to be one of those times.

9. Mar 30, 2009 SAZAR
Re: Is there actually any hi-res footage made with MARS rover??
It could record a video of itself moving across the surface of Mars, and maybe the sound of the environment (the atmosphere is thin, but something would be heard - at least the sound of the rover's motors and other equipment). It could record a video (and sound) of some storm - dust going around, etc. It could record a better sunrise (time-lapse video) - how the color of the light changes and shadows move on the ground - which would be more effective than an underexposed slideshow on which you can't see the ground because light from the Sun is too bright.
------------
Those are interesting things to see. ...I mean, photos were fascinating at first - but those are just still images - and surely one would be more impressed with something that can't be manufactured here on Earth and/or on our computers - something you see is vast and different and in motion. I mean, you can take a photo of some desert with some red dirt (full of aluminum or iron) and rocks, and then say "O' there's a piece of straw, let me erase it... Oh, a cactus there, darn it...
and make the sky a bit different color..." then open it in 'Photo Shop' and make it - so surely video would be more impressive than something one might MAKE so you SEE it although it isn't real - that's what I'm talking about.
---------
They make submarines with nuclear reactors, and hundreds of tanks that can land (on Earth!) with parachutes, yet they can't send a Mars vehicle with the capability to record and broadcast live video using a nuclear energy source. No. I believe those are real shots, but there's not much about them that can make me say: yes, this can't be manufactured on Earth.
----------
A whole other issue: maybe they made videos (with sound even), but they don't want to show them because they are selfish, or they say "Yeah, it cost us an arm and a leg to get there, pay us and we'll show you, HA!"... or something.
Last edited: Mar 30, 2009

10. Mar 30, 2009 mgb_phys
Re: Is there actually any hi-res footage made with MARS rover??
The Mars explorers' data rate direct back to Earth via the high-gain antenna is around 10 kbit/s; they can do 128 kbaud to the orbiter twice a day when the orbiter passes overhead. Uncompressed 1080p is around 200 MBytes/s, so a rover could grab a few seconds of HDTV of a non-moving scene and then spend a month relaying the data back. (A back-of-the-envelope version of this calculation appears at the end of the thread.)

11. Mar 30, 2009 Staff: Mentor
Re: Is there actually any hi-res footage made with MARS rover??
Making video doesn't make sense. You can get much more information from hi-res stills than from lo-res video. It would be a waste of bandwidth.
Edit: mgb was faster, and he points at exactly the same problem.

12. Mar 30, 2009
Re: Is there actually any hi-res footage made with MARS rover??
If the lack of video bothers you, you may want to consider some other endeavor than science. Read the following.

TAKE THIS FISH AND LOOK AT IT
Samuel H. Scudder

Most of us tend to look at things without really seeing what is there. In everyday life this lack of observation may not be noticed, but in science it would be considered a serious failing. Louis Agassiz (1807-73), the distinguished Harvard professor of natural history, knew this and used to subject his students to a rigorous but useful exercise in minute observation. One of his students was Samuel Scudder, who has left us the following account.

It was more than fifteen years ago that I entered the laboratory of Professor Agassiz, and told him I had enrolled my name in the Scientific School as a student of natural history. He asked me a few questions about my object in coming, my antecedents generally, the mode in which I afterwards proposed to use the knowledge I might acquire, and, finally, whether I wished to study any special branch. To the latter I replied that, while I wished to be well grounded in all departments of zoology, I purposed to devote myself specially to insects. "When do you wish to begin?" he asked. "Now," I replied. This seemed to please him, and with an energetic "Very well!" he reached from a shelf a huge jar of specimens in yellow alcohol. "Take this fish," he said, "and look at it; we call it a haemulon; by and by I will ask what you have seen." With that he left me, but in a moment returned with explicit instructions as to the care of the object entrusted to me. "No man is fit to be a naturalist," said he, "who does not know how to take care of specimens." I was to keep the fish before me in a tin tray, and occasionally moisten the surface with alcohol from the jar, always taking care to replace the stopper tightly.
Those were not the days of ground-glass stoppers and elegantly shaped exhibition jars; all the old students will recall the huge neckless glass bottles with their leaky, wax-besmeared corks, half eaten by insects, and begrimed with cellar dust. Entomology was a cleaner science than icthyology, but the example of the Professor, who had unhesitatingly plunged to the bottom of the jar to produce the fish, was infectious; and though this alcohol had a "very ancient and fishlike smell," I really dared not show any aversion within these sacred precincts, and treated the alcohol as though it were pure water. Still I was conscious of a passing feeling of disappointment, for gazing at a fish did not commend itself to an ardent entomologist. My friends at home, too, were annoyed when they discovered that no amount of eau-de-Cologne would drown the perfume which haunted me like a shadow. In ten minutes I had seen all that could be seen in that fish, and started in search of the Professor--who had, however, left the Museum; and when I returned, after lingering over some of the odd animals stored in the upper apartment, my specimen was dry all over. I dashed the fluid over the fish as if to resuscitate the beast from a fainting fit, and looked with anxiety for a return of the normal sloppy appearance. This little excitement over, nothing was to be done but to return to a steadfast gaze at my mute companion. Half an hour passes--an hour--another hour; the fish began to look loathsome. I turned it over and around; looked it in the face--ghastly; from behind, beneath, above, sideways, at a three-quarters' view--just as ghastly. I was in despair; at an early hour I concluded that lunch was necessary; so, with infinite relief, the fish was carefully replaced in the jar, and for an hour I was free. On my return, I learned that Professor Agassiz had been at the Museum, but had gone, and would not return for several hours. My fellow-students were too busy to be disturbed by continued conversation. Slowly I drew forth that hideous fish, and with a feeling of desperation again looked at it. I might not use a magnifying-glass; instruments of all kinds were interdicted. My two hands, my two eyes, and the fish: it seemed a most limited field. I pushed my finger down its throat to feel how sharp the teeth were. I began to count the scales in the different rows, until I was convinced that that was nonsense. At last a happy thought struck me--I would draw the fish; and now with surprise I began to discover new features in the creature. Just then the Professor returned. "That is right," said he; "a pencil is one of the best of eyes. I am glad to notice, too, that you keep your specimen wet, and your bottle corked." With these encouraging words, he added: "Well, what is it like?" He listened attentively to my brief rehearsal of the structure of parts whose names were still unknowns to me: the fringed gill-arches and movable operculum; the pores of the head, fleshy lips and lidless eyes; the lateral line, the spinous fins and forked tail; the compressed and arched body. When I finished, he waited as if expecting more, and then, with an air of disappointment: "You have not looked very carefully; why," he continued more earnestly, "you haven't even seen one of the most conspicuous features of the animal, which is a plainly before your eyes as the fish itself; look again, look again!" and he left me to my misery. I was piqued; I was mortified. Still more of that wretched fish! 
But now I set myself to my tasks with a will, and discovered on new thing after another, until I saw how just the Professor's criticism had been. The afternoon passed quickly; and when, towards its close, the Professor inquired: "Do you see it yet?" "No," I replied, "I am certain I do not, but I see how little I was before." "That is next best," said he, earnestly, "but I won't hear you now; put away your fish and go home; perhaps you will be ready with a better answer in the morning. I will examine you before you look at the fish." This was disconcerting. Not only must I think of my fish all night, studying, without the object before me, what this unknown but most visible feature might be; but also, without reviewing my discoveries, I must give an exact account of them the next day. I had a bad memory; so I walked home by Charles River in a distracted state, with my two perplexities. The cordial greeting from the Professor the next morning was reassuring; here was a man who seemed to be quite as anxious as I that I should see for myself what he saw. "Do you perhaps mean," I asked, "that the fish has symmetrical sides with paired organs?" His thoroughly pleased "Of course! of course!" repaid the wakeful hours of the previous night. After he had discoursed most happily and enthusiastically--as he always did--upon the importance of this point, I ventured to ask what I should do next. "Oh, look at your fish!" he said, and left me again to my own devices. In a little more than an hour he returned, and heard my new catalogue. "That is good, that is good!" he repeated; "but that is not all; go on"; and so for three long days he placed that fish before my eyes, forbidding me to look at anything else, or to use any artificial aid. "Look, look, look," was his repeated injunction. This was the best entomological lesson I ever had--a lesson whose influence has extended to the details of every subsequent study; a legacy the Professor had left to me, as he has left it to many others, of inestimable value, which we could not buy, with which we cannot part. A year afterward, some of us were amusing ourselves with chalking outlandish beasts on the Museum blackboard. We drew prancing starfishes; frogs in mortal combat; hydra-headed worms; stately crawfishes, standing on their tails, bearing aloft umbrellas; and grotesque fishes with gaping mouths and staring eyes. The Professor came in shortly after, and was as amused as any at our experiments. he looked at the fishes. "Haemulons, every one of them," he said; "Mr. ---- drew them." True; and to this day, if I attempt a fish, I can draw nothing but haemulons. The fourth day, a second fish of the same group was placed beside the first, and I was bidden to point out the resemblances and differences between the two; another and another followed, until the entire family lay before me, and a whole legion of jars covered the table and surrounding shelves; the odor had become a pleasant perfume; and even now, the sight of an old, six-inch, worm-eaten cork brings fragrant memories. The whole group of haemulons was thus brought in review; and, whether engaged upon the dissection of the internal organs, the preparation and examination of the bony framework, or the description of the various parts, Agassiz's traning in the method of observing facts and their orderly arrangement was ever accompanied by the urgent exhortation not to be content with them. "Facts are stupid things," he would say, "until brought into connection with some general law." 
At the end of eight months, it was almost with reluctance that I left these friends and turned to insects; but what I had gained by this outside experience has been of greater value than years of later investigation in my favorite groups. My point is; What are you missing? 13. Mar 30, 2009 Staff: Mentor Re: Is there actualy any hi-res footage made with MARS rover?? Well, the rovers average speed is 1cm/sec so you wouldn't even be able to see them moving in real-time in a video unless the camera was pointed straight down. Time lapse is all that is really needed for showing them moving. At something like $40,000 per pound to get the equipment to Mars, I'm not sure why the sound of a whirring motor would be useful to pick up. Perhaps as one approaches, but once a dust storm hits, there isn't much to see. I'm sure they could do a sunrise time-lapse now if they wanted to. Thing is, though, the exposure issues you are talking about are probably due to the atmosphere, not the equipment limitations. On earth, you get a good half hour to an hour of interesting looking twilight, but that's because of the atmosphere. A thinner atmosphere means much less twilight and harsher, more directional lighting during the day. Technology has gotten to the point where literally anything is possible in computer animation. The purpose of the mission isn't to attempt to sway crackpots who are too nutty to be swayed anyway - the purpose is scientific research. They could, it's just that it wouldn't add much value to the mission. The pair of rovers and 4(?) years of tracking and control cost about$500 million, which is about the same as a single space shuttle launch. They got a tremendous bang for their buck. Frankly, that's your problem, not theirs. They don't care what you believe (nor should they). Um, no. Last edited: Mar 30, 2009 14. Apr 1, 2009 SAZAR Re: Is there actualy any hi-res footage made with MARS rover?? I just thought - why should it be any problem. All that fancy machinery and electronics, but no something as simple (yet amazing) as video recording from another world... Odd even... 15. Apr 1, 2009 Staff: Mentor Re: Is there actualy any hi-res footage made with MARS rover?? Naa, not so odd when you consider the goals, benefits, costs, and constraints. 16. Apr 1, 2009 mgb_phys Re: Is there actualy any hi-res footage made with MARS rover?? Coming soon - Hubble 3D! with Dolby surround sound 17. Apr 2, 2009 SAZAR Re: Is there actualy any hi-res footage made with MARS rover?? It IS odd that there is no real-time A/V recording of something as fascinating and scientifically valuable as dust storm developing on another world from ground perspective. Short recording, compressed (e.g. MPEG) and sent to Earth. Also: why are existing time-lapse recordings in black&white? 18. Apr 2, 2009 mgb_phys Re: Is there actualy any hi-res footage made with MARS rover?? You generally want to avoid lossy compression like mpeg for scientific images. If you saw a change in brightness in an image do you know if it is a real geological feature or an effect of the compression ? Ironically JPEG was invented by Nasa to get images of minor moons back from a probe to the outer planets, but it was only necessary that an image ha an object in it or not, there was no detail in the image, JPEG is designed to conserve the total brightness in an image while losing fine detail. I don't know the specifics of the mars rover camera but generally astronomical cameras are b+w with filters. 
A color camera has a filter mask over the pixels coloring groups of pixels red/green/blue so each pixel only records one color - you need to combine the signal from 3/4 pixels to get a point in the image. Now imagine you have a star or other small object that was smaller than a pixel - it would record as a different color and a different brightness depending on which pixel it happened to hit. Instead you take an entire image through a red filter, then again through a green filter and so on - this assumes your scene isn't moving. Alternatively you can have three separate cameras with red/green/blue filters, this is what professional TV cameras use - but it means 3x the weight and power requirements. Last edited: Apr 2, 2009 19. Apr 2, 2009 SAZAR Re: Is there actualy any hi-res footage made with MARS rover?? :) OK... Well let's see what I'd need... a digital camera (OK, I already have that), a space suit with breathing equipment; water, food and all the necessities of life for the travel both ways and staying a bit on there; and a spaceship with technology to make it able to fly-off the Earth and land on it afterward, navigate through space billion kilometers (at least), land on Mars and fly off of it... ...Well, at least I have a digital camera...
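The bandwidth point raised in post #10 is easy to quantify. Here is a back-of-the-envelope sketch (Python, for illustration only), using the figures quoted in that post and an assumed clip length:

    # Rough downlink-time estimate for a short uncompressed HD clip from a Mars
    # rover, using the figures quoted in post #10 (illustrative assumptions,
    # not mission specifications).
    clip_seconds = 5                  # assumed clip length
    video_bytes_per_second = 200e6    # ~200 MB/s for uncompressed 1080p (from the post)
    downlink_bits_per_second = 10e3   # ~10 kbit/s direct-to-Earth rate (from the post)

    clip_bits = clip_seconds * video_bytes_per_second * 8
    days = clip_bits / downlink_bits_per_second / 86400
    # Assumes continuous transmission; real contact windows are short, which
    # stretches the figure toward the "month" mentioned in the post.
    print(f"~{days:.0f} days to relay {clip_seconds} s of uncompressed video")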
# Times tables

• May 26th 2012, 09:54 AM silverpen
Times tables
OK.... Crazy basic for this forum and I didn't really know where to put this question BUT.... I've recently been learning my times tables and I found this site useful for it - Math Trainer - Multiplication. I found the way it questioned really easy to use and learn with, and have learnt my 12 times tables... (*whey...*) But I was wondering about the best way to go on after this, 13 / 14 / 15 / 16 etc... Are they generally committed to memory in the same way as the 12, or do you find it easier to use formulas to work these out? I can't really find a great site for learning them (I guess because it's so basic...) and reciting them over and over seemed a bit primitive for some reason, but if this is the only way then I guess I'll have to have a bash. Cheers.

• May 26th 2012, 04:01 PM emakarov
Re: Times tables
I learned the multiplication table only for factors <= 10. For larger numbers, I remember squares and the rule for multiplying by 11: $(10m + n) \cdot 11 = 100m + 10(m + n) + n$. I could calculate some other products using the formula $(a + b)^2 = a^2 + 2ab + b^2$, but I have not memorized them. Be thankful we have a decimal numeral system instead of a sexagesimal one as in Babylon!

• May 26th 2012, 09:27 PM Prove It
Re: Times tables
Quote: Originally Posted by silverpen
I only remember the times tables up to 12. Anything bigger I look for factors, or do a halving and doubling, or just multiply them out using short/long multiplication. There's only so much we can be expected to remember.

• May 27th 2012, 08:15 PM Soroban
Re: Times tables
Hello, silverpen!
There is a trick to multiplying two numbers in the "teens". The two numbers are $10 + a$ and $10 + b.$ Their product is: $(10+a)(10+b) = 100 + 10(a+b) + ab$
We have a 3-digit number: $|\:1\:|\:a+b\:|\:ab\:|$
The units-digit is the units-digit of $ab$. Write that down and remember the "carry".
The tens-digit is the units-digit of $a+b$ plus the "carry".
The hundreds-digit is $1$ plus the "carry" from the tens-digit.
Example: $14 \times 17$
We have: $|\:1\;|\;\;\;\:|\:\;\;\:|$
Units-digit: $4\times 7 = 28$. Write down "8", carry the "2": $|\:1\:|\:\;\;^2\:|\:8\:|$
Tens-digit: $4+7 = 11$, plus the carry, $13$. Write down "3", carry the "1": $|\:1\,^1\:|\:3\:|\:8\:|$
Hence, we have: $|\:2\;|\;3\:|\;8\:|$
Therefore: $14 \times 17 = 238$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Years ago, I intended to use this trick and eventually memorize the "teens" table. But I found more tricks and got "lazy". I never committed all the products to memory.
If the two numbers differ by an even number, we can use the "difference of squares" identity: $(a-b)(a+b) = a^2-b^2$
Example:
$15 \times 17$
We have: $(16-1)(16+1) = 16^2 - 1^2 = 256 - 1 = 255$
Example: $13 \times 17$
We have: $(15-2)(15+2) = 15^2 - 2^2 = 225 - 4 = 221$
Of course, the two numbers need not be in the "teens" for this trick.
Example: $29 \times 35 = (32-3)(32+3) = 32^2 - 3^2 = 1024 - 9 = 1015$
This means you must memorize a lot of squares ... which I did years ago.
You may be wondering: how do we find that "middle number"? Just average the two numbers.

• May 29th 2012, 02:42 AM silverpen
Re: Times tables
Hey, thanks for the replies all.... So the general consensus seems to be formulas rather than committing things to memory? Fair enough...
@Soroban - I didn't really understand what you wrote there - can you tell me what it would fall under so that I can go off and learn it please? (The level is pretty basic!) Much appreciated though.
- One thing I was doing was dividing both numbers by 2, multiplying them, then multiplying the end result by four..... But this is easier if both the numbers are even! (Maybe I'll learn decimal multiplications after.)

• May 31st 2012, 01:29 PM silverpen
Re: Times tables
Going to work through your examples for a bit.
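The shortcuts discussed in this thread are easy to sanity-check numerically. A quick script (Python, chosen just for illustration) that verifies the teens trick, the multiply-by-11 rule, and the difference-of-squares examples:

    # Quick numerical checks of the shortcuts discussed in this thread.

    def teens_product(a, b):
        """(10 + a)(10 + b) = 100 + 10(a + b) + ab  (the 'teens' trick)."""
        return 100 + 10 * (a + b) + a * b

    def diff_of_squares(x, y):
        """x * y = m^2 - d^2 with m the average and d half the (even) difference."""
        m = (x + y) // 2
        d = (y - x) // 2
        return m * m - d * d

    # Teens trick holds for every product from 11*11 to 19*19.
    assert all(teens_product(a, b) == (10 + a) * (10 + b)
               for a in range(1, 10) for b in range(1, 10))

    # Multiply-by-11 rule: (10m + n) * 11 = 100m + 10(m + n) + n.
    assert all((10 * m + n) * 11 == 100 * m + 10 * (m + n) + n
               for m in range(1, 10) for n in range(10))

    # Difference-of-squares worked examples from the thread.
    assert diff_of_squares(15, 17) == 255
    assert diff_of_squares(13, 17) == 221
    assert diff_of_squares(29, 35) == 1015

    print("all shortcuts check out")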
# A Strict Digital Sum

How many different positive integers exist between $$10^{6}$$ and $$10^{7}$$ the sum of whose digits is equal to 2?
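The count can be checked directly by brute force; a minimal Python sketch (note that including or excluding the endpoints does not change the answer, since $$10^{6}$$ and $$10^{7}$$ each have digit sum 1):

    # Brute-force check: count positive integers between 10**6 and 10**7
    # whose digits sum to 2.
    count = sum(1 for n in range(10**6, 10**7 + 1)
                if sum(int(d) for d in str(n)) == 2)
    print(count)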
# Alternating Current Derivation

sandy.bridge
Hello all,
I'm trying to find the derivation for the alternating signal. For example, let b = width of loop, a = length of loop. Hence,
$$\varepsilon = vLB = vaB = \omega r a B = \frac{1}{2}\omega a b B = \frac{1}{2}\omega A B$$
I cannot seem to find the derivation of how one gets from this step to the sinusoidal waveform. Any help is greatly appreciated.
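One way to see where the sinusoid comes from (a sketch, assuming a rectangular loop of area $A = ab$ rotating at constant angular velocity $\omega$ in a uniform field $B$, with the angle between $B$ and the loop normal equal to $\omega t$) is to differentiate the flux:
$$\Phi(t) = BA\cos(\omega t), \qquad \varepsilon(t) = -\frac{d\Phi}{dt} = \omega A B \sin(\omega t).$$
The motional picture gives the same time dependence: each side of length $a$ moves at speed $v = \omega b/2$, and only the velocity component perpendicular to $B$ contributes, so $\varepsilon(t) = 2\cdot(\omega b/2)\,a B \sin(\omega t) = \omega A B \sin(\omega t)$. The $\frac{1}{2}\omega A B$ value quoted above appears to be the peak EMF of a single side; the two sides in series give the full $\omega A B$ amplitude.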
2k views ### Using a list of tuples in a pure function I want to use a list of tuples within a function to make assignments. Say I want to make assignments of the form value[i,j] = val What I have is a list of ... 2k views ### Going full functional (Haskell style) I'm trying to define some notation so that Mathematica code would be more functional, similar to Haskell (just for fun): currying, lambdas, infix operator to function conversion, etc.. And I have some ... 811 views ### How to rename a built-in function? I want a built-in function renamed without loss of any properties, I want the shorter name to appear in all results and to be recognized as input. Is it possible? 2k views ### What is the complete list of valid FrontEnd Packet types? In response to my question How can I get the unchanged Box form of an arbitrary expression? John Fultz answered with a method using the hilariously named ... 113 views ### How to tell whether a character is a letter-like form Is there any way to programmatically tell whether a given character is a letter or letter-like form (see this reference page for what I mean)? One idea I had was to use ... 135k views ### Where can I find examples of good Mathematica programming practice? I consider myself a pretty good Mathematica programmer, but I'm always looking out for ways to either improve my way of doing things in Mathematica, or to see if there's something nifty that I haven't ... 464 views ### Infix form of PutAppend ( >>> ) does not work with variable I'm new to Mathematica, so I suspect this question involves either a misunderstanding involving variables or the usage of >>>. On a webMathematica page (... 505 views ### Transform fancy usage messages in 1D string When we look at the usage messages of built-in functions nowadays (not in the good old times, when they were a simple descriptions) we see that although they look pretty in the front end, it is really ... 355 views ### Programmatically convert notebook input cells to text file I have ~150 student-submitted Mathematica notebooks for an assessed assignment. While I've been marking them, I'm suspecting there is a reasonable amount of plagiarism going on, when multiple students ... 204 views ### How to make some RowBox to be string GeneralUtilitiesGetUsages["SystemSin"] I want to get $\color{red}{\textbf{1}}$ plain string,but this function actually will give me some ... 327 views ### Import package with correct symbol contexts I am looking to do some automated analysis on packages, e.g. automatically check for common mistakes. Mathematica makes it relatively easy to manipulate code as data, and ... 592 views ### Remove subscript from string I have a list that I am using for the legend of a plot gases = {"Air", "He", "Ar", "\!$$\*SubscriptBox[\(N$$, $$2$$]\)", "\!$$\*SubscriptBox[\(CO$$, $$2$$]\)"}; ... 328 views ### How to convert arbitrary raw boxes directly into String? This question is motivated by the recent question about searching inside of the NB files. According to the Documentation, ToString expects a high-level WL ...
{}
# How to make a ResultSet Read/Write

I have a DRAW document and a BASE document. The BASE document has tables that contain data which relates to different shapes in the DRAW document. I have written a macro that will display data related to a selected shape and display it in a Dialog. Clicking the "Cancel" button on the Dialog causes it to close, so all good up to that point. I want to be able to edit the TextBoxes on the Dialog and then click the OK button to save the updated data back to the BASE document, but when I try to do this I get an error message that the "Result Set is Read Only". I've used MRI to examine the Connection to determine that it is Read/Write, but do not see a Read Only property for the Result Set. I have manually tested the SQL statement in the BASE GUI and I can update the fields that way, so I think my SQL is OK. The OOo BASIC Guide tells me there is a ResultSetConcurrency Variant that controls whether the Result Set can be modified (but doesn't go so far as to explain how to modify it). Using MRI I can see that ResultSetConcurrency is a number which is ReadOnly. I'm guessing that I need to set this value to something else, somehow, as part of the ExecuteQuery method. Half a day of looking through various documents and forums has not enlightened me :( I've attached my Macro(s) that relate to the problem I'm stuck at, so perhaps someone can point me in the right direction (please!) BTW - these macros are still in very early development, so maybe there is some fundamental flaw here somewhere.

Option Explicit

Global FibreDoc As Object
Global Connection As Object
Global DataSource As Object
Global DatabaseContext As Object
Global bPitSaveFlag as Boolean
Global oPitDialog as Object

Sub SetDefaults
    Dim InteractionHandler as Object
    FibreDoc = ThisComponent
    DatabaseContext = createUnoService("com.sun.star.sdb.DatabaseContext")
    DataSource = DatabaseContext.getByName("Fibre-data")
    If Not DataSource.IsPasswordRequired Then ' [condition reconstructed; the original If line was lost when the question was posted]
        Connection = DataSource.GetConnection("","")
    Else
        InteractionHandler = createUnoService("com.sun.star.sdb.InteractionHandler")
        Connection = DataSource.ConnectWithCompletion(InteractionHandler)
    End If
    'mri(Connection)
End Sub

Sub ShowPitDialog
    Dim oCurrentSelection As Variant
    Dim oPit As Variant
    Dim sPitName as String
    Dim iNameLength as Integer
    Dim iSepPos as Integer
    Dim iPitID as Integer
    Dim sPitSQL as String
    Dim sTypeSQL as String
    Dim oPitStatement as Object
    Dim oTypeStatement as Object
    Dim oResultSet as Object
    Dim oTypeResult as Object
    Dim oTypeList as Object
    Dim sTitle as String
    Dim iCount as Integer

    If IsNull(Connection) Then ' [reconstructed; only the matching "End If" survived in the post]
        SetDefaults
    End If
    bPitSaveFlag = False 'Indicator to save updated data
    'mri(Connection)
    oCurrentSelection = FibreDoc.getCurrentSelection()
    oPit = oCurrentSelection.getByIndex(0)
    sPitName = oPit.Name 'The name of the object that is selected
    iNameLength = len(sPitName)
    iSepPos = InStr(sPitName,":") 'Location of the separating ":" in the name
    iPitID = Right(sPitName,(iNameLength-iSepPos)) 'The number to the right of the separating ":"
    sPitSQL = "SELECT ""ID"", ""Lat"", ""Long"", ""Type"", ""Comments"" FROM ...

Hello, When using executeQuery you are reading. With executeUpdate you are writing. So you have at least two choices. In the first, have two connections for statements. See this post -> How to scroll through a table in base. That same post has a link to -> firebird equivalent for resultset.last, which uses a RowSet to do what you want.
Tested with this code:

Dim oRS as Object
oRS = createUnoService("com.sun.star.sdb.RowSet")
oRS.DataSourceName = ThisComponent.Location
oRS.CommandType = com.sun.star.sdb.CommandType.COMMAND
oRS.Command = sPitSQL
oRS.Execute
oRS.First
oRS.updateRow()

There is a lot to cover with this situation. The first link above contains many other links which are most helpful in covering this topic in more detail. See also the documentation for CommandType in RowSet -> https://www.openoffice.org/api/docs/c... NOTE: This was tested with the HSQLDB v1.8 embedded database. You have not specified what DB you are using.
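The ResultSetConcurrency hint from the OOo BASIC Guide can also be acted on directly: it is a (writable) property of the Statement, not of the ResultSet, so it has to be set before executeQuery is called, which is why setting it "as part of the ExecuteQuery method" fails. Below is a sketch of that route using Python-UNO (the same two properties can be set from Basic); whether the driver actually honours UPDATABLE depends on the database backend, which is why the RowSet approach above is often the more reliable one.

```python
# Sketch: request an updatable result set by configuring the Statement
# *before* executing the query. The ResultSetConcurrency property seen on the
# ResultSet itself (via MRI) is read-only and only reports what was requested.
from com.sun.star.sdbc.ResultSetConcurrency import UPDATABLE
from com.sun.star.sdbc.ResultSetType import SCROLL_SENSITIVE

def fetch_updatable(connection, sql):
    """Return a (hopefully) updatable, scrollable result set for `sql`."""
    stmt = connection.createStatement()
    stmt.ResultSetConcurrency = UPDATABLE      # ask for a writable result set
    stmt.ResultSetType = SCROLL_SENSITIVE      # scrollable, sees DB changes
    return stmt.executeQuery(sql)

# Usage (assuming a Connection obtained as in SetDefaults above):
# rs = fetch_updatable(Connection, sPitSQL)
# rs.first()
# rs.updateString(rs.findColumn("Comments"), "edited from the dialog")
# rs.updateRow()
```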
{}
# Math Help - Trigonometric Derivative Question that I'm getting wrong

1. ## Trigonometric Derivative Question that I'm getting wrong

I attached the question in black and the answer in red in the "question & answer.pdf" file. My work is attached in the "mywork.pdf" file. Can someone point out what I am doing wrong and tell me the appropriate way to go about it? Any help would be greatly appreciated!

The derivative of $\sec{2\theta}$ is $2\sec{2\theta}\tan{2\theta}$.
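For the record, here is the chain-rule step written out (my own working; the attached PDFs are not reproduced here): $$\frac{d}{d\theta}\sec 2\theta = \sec 2\theta\,\tan 2\theta\cdot\frac{d}{d\theta}(2\theta) = 2\sec 2\theta\,\tan 2\theta.$$ Dropping the factor of 2 coming from the inner derivative is the usual slip in this kind of problem.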
{}
# How do protons identify an atom? Jan 1, 2018

The number of protons gives $Z$, the atomic number, the which unequivocally identifies the given atom.

#### Explanation:

$Z$ is the so-called $\text{atomic number}$, the which gives the identity of the element. $Z = 1$, the element is hydrogen; $Z = 2$, the element is helium; ... $Z = 46$, the element is palladium. Where do I get these numbers from? Each of these elements has a distinct and characteristic chemistry. In the neutral atom, the value of $Z$ ALSO gives the number of the electrons that are conceived to orbit the nuclear core. Why is this so? In the nucleus itself, there may be as many neutrons as protons, or more; neutrons are massive particles of zero electronic charge. Interactions between nuclear protons and neutrons, at unfeasibly short nuclear ranges, give rise to the strong nuclear force, the which, at this short range, is strong enough to overcome the electrostatic repulsion between the positively charged protons. I have written here before that the choice of a negatively charged electronic charge, and a positively charged nuclear charge, is a bit unfortunate in that chemists who deal with many-electron atoms often get the right magnitude but the wrong charge in their calculations, simply because they counted odd instead of even or even instead of odd. The much smaller company of particle physicists, who are a bit on the weird side anyway, could have coped with a nuclear particle that had a negative electronic charge. Alas we are stuck with the convention.
{}
Copied to clipboard ## G = C40⋊21(C2×C4)  order 320 = 26·5 ### 11st semidirect product of C40 and C2×C4 acting via C2×C4/C2=C22 Series: Derived Chief Lower central Upper central Derived series C1 — C20 — C40⋊21(C2×C4) Chief series C1 — C5 — C10 — C2×C10 — C2×C20 — C2×D20 — C2×C40⋊C2 — C40⋊21(C2×C4) Lower central C5 — C10 — C20 — C40⋊21(C2×C4) Upper central C1 — C22 — C2×C4 — C2.D8 Generators and relations for C4021(C2×C4) G = < a,b,c | a40=b2=c4=1, bab=a19, cac-1=a31, bc=cb > Subgroups: 502 in 120 conjugacy classes, 49 normal (37 characteristic) C1, C2, C2, C4, C4, C22, C22, C5, C8, C8, C2×C4, C2×C4, D4, Q8, C23, D5, C10, C42, C22⋊C4, C4⋊C4, C4⋊C4, C2×C8, C2×C8, SD16, C22×C4, C2×D4, C2×Q8, Dic5, C20, C20, D10, C2×C10, C8⋊C4, D4⋊C4, Q8⋊C4, C2.D8, C4×D4, C4×Q8, C2×SD16, C52C8, C40, Dic10, Dic10, C4×D5, D20, D20, C2×Dic5, C2×Dic5, C2×C20, C2×C20, C22×D5, SD16⋊C4, C40⋊C2, C2×C52C8, C4×Dic5, C4×Dic5, C10.D4, D10⋊C4, C5×C4⋊C4, C2×C40, C2×Dic10, C2×C4×D5, C2×D20, D206C4, C10.Q16, C408C4, C5×C2.D8, Dic53Q8, D208C4, C2×C40⋊C2, C4021(C2×C4) Quotients: C1, C2, C4, C22, C2×C4, D4, C23, D5, C22×C4, C2×D4, C4○D4, D10, C4×D4, C8⋊C22, C8.C22, C4×D5, C22×D5, SD16⋊C4, C2×C4×D5, D4×D5, Q82D5, D208C4, D8⋊D5, Q16⋊D5, C4021(C2×C4) Smallest permutation representation of C4021(C2×C4) On 160 points Generators in S160 (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120)(121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160) (2 20)(3 39)(4 18)(5 37)(6 16)(7 35)(8 14)(9 33)(10 12)(11 31)(13 29)(15 27)(17 25)(19 23)(22 40)(24 38)(26 36)(28 34)(30 32)(41 59)(42 78)(43 57)(44 76)(45 55)(46 74)(47 53)(48 72)(49 51)(50 70)(52 68)(54 66)(56 64)(58 62)(61 79)(63 77)(65 75)(67 73)(69 71)(81 119)(82 98)(83 117)(84 96)(85 115)(86 94)(87 113)(88 92)(89 111)(91 109)(93 107)(95 105)(97 103)(99 101)(100 120)(102 118)(104 116)(106 114)(108 112)(121 139)(122 158)(123 137)(124 156)(125 135)(126 154)(127 133)(128 152)(129 131)(130 150)(132 148)(134 146)(136 144)(138 142)(141 159)(143 157)(145 155)(147 153)(149 151) (1 80 90 140)(2 71 91 131)(3 62 92 122)(4 53 93 153)(5 44 94 144)(6 75 95 135)(7 66 96 126)(8 57 97 157)(9 48 98 148)(10 79 99 139)(11 70 100 130)(12 61 101 121)(13 52 102 152)(14 43 103 143)(15 74 104 134)(16 65 105 125)(17 56 106 156)(18 47 107 147)(19 78 108 138)(20 69 109 129)(21 60 110 160)(22 51 111 151)(23 42 112 142)(24 73 113 133)(25 64 114 124)(26 55 115 155)(27 46 116 146)(28 77 117 137)(29 68 118 128)(30 59 119 159)(31 50 120 150)(32 41 81 141)(33 72 82 132)(34 63 83 123)(35 54 84 154)(36 45 85 145)(37 76 86 136)(38 67 87 127)(39 58 88 158)(40 49 89 149) G:=sub<Sym(160)| 
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160), (2,20)(3,39)(4,18)(5,37)(6,16)(7,35)(8,14)(9,33)(10,12)(11,31)(13,29)(15,27)(17,25)(19,23)(22,40)(24,38)(26,36)(28,34)(30,32)(41,59)(42,78)(43,57)(44,76)(45,55)(46,74)(47,53)(48,72)(49,51)(50,70)(52,68)(54,66)(56,64)(58,62)(61,79)(63,77)(65,75)(67,73)(69,71)(81,119)(82,98)(83,117)(84,96)(85,115)(86,94)(87,113)(88,92)(89,111)(91,109)(93,107)(95,105)(97,103)(99,101)(100,120)(102,118)(104,116)(106,114)(108,112)(121,139)(122,158)(123,137)(124,156)(125,135)(126,154)(127,133)(128,152)(129,131)(130,150)(132,148)(134,146)(136,144)(138,142)(141,159)(143,157)(145,155)(147,153)(149,151), (1,80,90,140)(2,71,91,131)(3,62,92,122)(4,53,93,153)(5,44,94,144)(6,75,95,135)(7,66,96,126)(8,57,97,157)(9,48,98,148)(10,79,99,139)(11,70,100,130)(12,61,101,121)(13,52,102,152)(14,43,103,143)(15,74,104,134)(16,65,105,125)(17,56,106,156)(18,47,107,147)(19,78,108,138)(20,69,109,129)(21,60,110,160)(22,51,111,151)(23,42,112,142)(24,73,113,133)(25,64,114,124)(26,55,115,155)(27,46,116,146)(28,77,117,137)(29,68,118,128)(30,59,119,159)(31,50,120,150)(32,41,81,141)(33,72,82,132)(34,63,83,123)(35,54,84,154)(36,45,85,145)(37,76,86,136)(38,67,87,127)(39,58,88,158)(40,49,89,149)>; G:=Group( (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160), (2,20)(3,39)(4,18)(5,37)(6,16)(7,35)(8,14)(9,33)(10,12)(11,31)(13,29)(15,27)(17,25)(19,23)(22,40)(24,38)(26,36)(28,34)(30,32)(41,59)(42,78)(43,57)(44,76)(45,55)(46,74)(47,53)(48,72)(49,51)(50,70)(52,68)(54,66)(56,64)(58,62)(61,79)(63,77)(65,75)(67,73)(69,71)(81,119)(82,98)(83,117)(84,96)(85,115)(86,94)(87,113)(88,92)(89,111)(91,109)(93,107)(95,105)(97,103)(99,101)(100,120)(102,118)(104,116)(106,114)(108,112)(121,139)(122,158)(123,137)(124,156)(125,135)(126,154)(127,133)(128,152)(129,131)(130,150)(132,148)(134,146)(136,144)(138,142)(141,159)(143,157)(145,155)(147,153)(149,151), (1,80,90,140)(2,71,91,131)(3,62,92,122)(4,53,93,153)(5,44,94,144)(6,75,95,135)(7,66,96,126)(8,57,97,157)(9,48,98,148)(10,79,99,139)(11,70,100,130)(12,61,101,121)(13,52,102,152)(14,43,103,143)(15,74,104,134)(16,65,105,125)(17,56,106,156)(18,47,107,147)(19,78,108,138)(20,69,109,129)(21,60,110,160)(22,51,111,151)(23,42,112,142)(24,73,113,133)(25,64,114,124)(26,55,115,155)(27,46,116,146)(28,77,117,137)(29,68,118,128)(30,59,119,159)(31,50,120,150)(32,41,81,141)(33,72,82,132)(34,63,83,123)(35,54,84,154)(36,45,85,145)(37,76,86,136)(38,67,87,127)(39,58,88,158)(40,49,89,149) ); 
G=PermutationGroup([[(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120),(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)], [(2,20),(3,39),(4,18),(5,37),(6,16),(7,35),(8,14),(9,33),(10,12),(11,31),(13,29),(15,27),(17,25),(19,23),(22,40),(24,38),(26,36),(28,34),(30,32),(41,59),(42,78),(43,57),(44,76),(45,55),(46,74),(47,53),(48,72),(49,51),(50,70),(52,68),(54,66),(56,64),(58,62),(61,79),(63,77),(65,75),(67,73),(69,71),(81,119),(82,98),(83,117),(84,96),(85,115),(86,94),(87,113),(88,92),(89,111),(91,109),(93,107),(95,105),(97,103),(99,101),(100,120),(102,118),(104,116),(106,114),(108,112),(121,139),(122,158),(123,137),(124,156),(125,135),(126,154),(127,133),(128,152),(129,131),(130,150),(132,148),(134,146),(136,144),(138,142),(141,159),(143,157),(145,155),(147,153),(149,151)], [(1,80,90,140),(2,71,91,131),(3,62,92,122),(4,53,93,153),(5,44,94,144),(6,75,95,135),(7,66,96,126),(8,57,97,157),(9,48,98,148),(10,79,99,139),(11,70,100,130),(12,61,101,121),(13,52,102,152),(14,43,103,143),(15,74,104,134),(16,65,105,125),(17,56,106,156),(18,47,107,147),(19,78,108,138),(20,69,109,129),(21,60,110,160),(22,51,111,151),(23,42,112,142),(24,73,113,133),(25,64,114,124),(26,55,115,155),(27,46,116,146),(28,77,117,137),(29,68,118,128),(30,59,119,159),(31,50,120,150),(32,41,81,141),(33,72,82,132),(34,63,83,123),(35,54,84,154),(36,45,85,145),(37,76,86,136),(38,67,87,127),(39,58,88,158),(40,49,89,149)]]) 50 conjugacy classes class 1 2A 2B 2C 2D 2E 4A 4B 4C 4D 4E 4F 4G 4H 4I 4J 4K 4L 5A 5B 8A 8B 8C 8D 10A ··· 10F 20A 20B 20C 20D 20E ··· 20L 40A ··· 40H order 1 2 2 2 2 2 4 4 4 4 4 4 4 4 4 4 4 4 5 5 8 8 8 8 10 ··· 10 20 20 20 20 20 ··· 20 40 ··· 40 size 1 1 1 1 20 20 2 2 4 4 4 4 10 10 10 10 20 20 2 2 4 4 20 20 2 ··· 2 4 4 4 4 8 ··· 8 4 ··· 4 50 irreducible representations dim 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 4 4 4 4 4 4 type + + + + + + + + + + + + + - + + image C1 C2 C2 C2 C2 C2 C2 C2 C4 D4 D5 C4○D4 D10 D10 C4×D5 C8⋊C22 C8.C22 Q8⋊2D5 D4×D5 D8⋊D5 Q16⋊D5 kernel C40⋊21(C2×C4) D20⋊6C4 C10.Q16 C40⋊8C4 C5×C2.D8 Dic5⋊3Q8 D20⋊8C4 C2×C40⋊C2 C40⋊C2 C2×Dic5 C2.D8 C20 C4⋊C4 C2×C8 C8 C10 C10 C4 C22 C2 C2 # reps 1 1 1 1 1 1 1 1 8 2 2 2 4 2 8 1 1 2 2 4 4 Matrix representation of C4021(C2×C4) in GL6(𝔽41) 24 25 0 0 0 0 13 17 0 0 0 0 0 0 2 31 39 10 0 0 10 31 31 10 0 0 1 36 0 0 0 0 5 36 0 0 , 1 0 0 0 0 0 3 40 0 0 0 0 0 0 1 0 0 0 0 0 34 40 0 0 0 0 1 0 40 0 0 0 34 40 7 1 , 32 0 0 0 0 0 14 9 0 0 0 0 0 0 15 14 25 13 0 0 27 40 28 16 0 0 7 0 26 27 0 0 0 7 14 1 G:=sub<GL(6,GF(41))| [24,13,0,0,0,0,25,17,0,0,0,0,0,0,2,10,1,5,0,0,31,31,36,36,0,0,39,31,0,0,0,0,10,10,0,0],[1,3,0,0,0,0,0,40,0,0,0,0,0,0,1,34,1,34,0,0,0,40,0,40,0,0,0,0,40,7,0,0,0,0,0,1],[32,14,0,0,0,0,0,9,0,0,0,0,0,0,15,27,7,0,0,0,14,40,0,7,0,0,25,28,26,14,0,0,13,16,27,1] >; C4021(C2×C4) in GAP, Magma, Sage, TeX C_{40}\rtimes_{21}(C_2\times C_4) % in TeX G:=Group("C40:21(C2xC4)"); // GroupNames label G:=SmallGroup(320,516); // by ID G=gap.SmallGroup(320,516); # by ID G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-5,253,120,219,58,1684,438,102,12550]); // Polycyclic G:=Group<a,b,c|a^40=b^2=c^4=1,b*a*b=a^19,c*a*c^-1=a^31,b*c=c*b>; // generators/relations ׿ × 𝔽
{}
Performs different approaches for network-based source estimation: effective distance median, recursive backtracking, and centrality-based source estimation. Additionally, we provide public transportation network data as well as methods for data preparation, source estimation performance analysis, and visualization.

## Details

The main function for origin estimation of propagation processes on complex networks is origin. Different methods are available: effective distance median ('edm'), recursive backtracking ('backtracking'), and centrality-based source estimation ('centrality'). For more details on the methodological background, we refer to the corresponding publications.

## References

• Manitz, J., J. Harbering, M. Schmidt, T. Kneib, and A. Schoebel (2017): Source Estimation for Propagation Processes on Complex Networks with an Application to Delays in Public Transportation Systems. Journal of the Royal Statistical Society C (Applied Statistics), 66: 521-536.
• Manitz, J., Kneib, T., Schlather, M., Helbing, D. and Brockmann, D. (2014) Origin detection during food-borne disease outbreaks - a case study of the 2011 EHEC/HUS outbreak in Germany. PLoS Currents Outbreaks, 1. <DOI: 10.1371/currents.outbreaks.f3fdeb08c5b9de7c09ed9cbcef5f01f2>
• Comin, C. H. and da Fontoura Costa, L. (2011) Identifying the starting point of a spreading process in complex networks. Physical Review E, 84. <DOI: 10.1103/PhysRevE.84.056105>

## Author

Juliane Manitz with contributions by Jonas Harbering
{}
Sárközy's Theorem

A partial solution to the Erdős Squarefree Conjecture, which states that the binomial coefficient $\binom{2n}{n}$ is never squarefree for all sufficiently large $n$. Sárközy (1985) showed that if $s(n)$ is the square part of the binomial coefficient $\binom{2n}{n}$, then $$\ln s(n) \sim (\sqrt{2}-2)\,\zeta(\tfrac{1}{2})\,\sqrt{n},$$ where $\zeta(z)$ is the Riemann Zeta Function. An upper bound on the $n$ required for "sufficiently large" has been obtained.

References

Erdős, P. and Graham, R. L. Old and New Problems and Results in Combinatorial Number Theory. Geneva, Switzerland: L'Enseignement Mathématique Université de Genève, Vol. 28, 1980.
Sander, J. W. "A Story of Binomial Coefficients and Primes." Amer. Math. Monthly 102, 802-807, 1995.
Sárközy, A. "On the Divisors of Binomial Coefficients, I." J. Number Th. 20, 70-80, 1985.
Vardi, I. "Applications to Binomial Coefficients." Computational Recreations in Mathematica. Reading, MA: Addison-Wesley, pp. 25-28, 1991.
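As a concrete illustration of the conjecture (my own check, not part of the original entry): $\binom{10}{5}=252=2^{2}\cdot 3^{2}\cdot 7$, so this binomial coefficient is not squarefree and its square part is $36$.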
{}
# Statistical inference¶ The procedure of drawing conclusions about model parameters from randomly distributed data is known as statistical inference. This section explains the mathematical theory behind statistical inference. If you are already familiar with this subject feel free to skip to the next section. ## General case¶ For a statistician a model is simply a probability density function (PDF) , which describes the (joint) distribution of a set of observables in the sample space and depends on some parameters . The parameter space can be any (discrete or continuous) set and can be any function of and with (1) for all . Statistical inference can help us answer the following questions: 1. Which choice of the parameters is most compatible with some observed data ? 2. Based on the observed data , which parts of the parameter space can be ruled out (at which confidence level)? Let us begin with the first question. What we need is a function which assigns to each observable vector an estimate of the model parameters. Such a function is called an estimator. There are different ways to define an estimator, but the most popular choice (and the one that myFitter supports) is the maximum likelihood estimator defined by In other words, for a given observation the maximum likelihood estimate of the parameters are those parameters which maximise . When we keep the observables fixed and regard as a function of the parameters we call it the likelihood function. Finding the maximum of the likelihood function is commonly referred to as fitting the model to the data . It usually requires numerical optimisation techniques. The second question is answered (in frequentist statistics) by performing a hypothesis test. Let’s say we want know if the model is realised with parameters from some subset . We call this the null hypothesis. To test this hypothesis we have to define some function , called test statistic, which quantifies the disagreement of the observation with the null hypothesis. The phrase “quantifying the disagreement” is meant in the rather vague sense that should be large when looks like it was not drawn from a distribution with parameters in . In principle any function on can be used as a test statistic, but of course some are less useful than others. The most popular choice (and the one myFitter is designed for) is For this choice of , the test is called a likelihood ratio test. With a slight abuse of notation, we define (2) Then we may write (3) Clearly large values of indicate that the null hypothesis is unlikely, but at which value should we reject it? To determine that we have to compute another quantity called the p-value (4) where is the Heavyside step function. Note that the integral is simply the probability that for some random vector of toy observables drawn from the distribution the test statistic is larger than the observed value . The meaning of the p-value is the following: if the null hypothesis is true and we reject it for values of larger than the observed value the probability for wrongly rejecting the null hypothesis is at most . Thus, the smaller the p-value, the more confident we can be in rejecting the null hypothesis. The complementary probability is therefore called the confidence level at which we may reject the null hypothesis , based on the observation . Instead of specifying the p-value directly people often give the Z-value (or number of ‘standard deviations‘ or ‘sigmas‘) which is related to the p-value by where ‘‘ denotes the error function. 
The relation is chosen in such a way that the integral of a normal distribution from to is the confidence level . So, if someone tells you that they have discovered something at they have (usually) done a likelihood ratio test and rejected the “no signal” hypothesis at a confidence level corresponding to . ## Linear regression models and Wilks’ theorem¶ Evaluating the right-hand side of (4) exactly for a realistic model can be extremely challenging. However, for a specific class of models called linear regression models we can evaluate (4) analytically. The good news is that most models can be approximated by linear regression models and that this approximation is often sufficient for the purpose of estimating the p-value. A concrete formulation of this statement is given by a theorem by Wilks, which we shall discuss shortly. The myFitter package was originally written to handle problems where Wilks’ theorem is not applicable, but of course the framework can also be used in cases where Wilks’ theorem applies. In a linear regression model the parameter space is a -dimensional real vector space and the PDF is a (multi-dimensional) normal distribution with a fixed (i.e. parameter-independent) covariance matrix centered on an observable-vector which is an affine-linear function of the parameters : (5) where is a symmetric, positive definite matrix, is a non-singular ()-matrix and is a -dimensional real vector. Now assume that the parameter space under the null hypothesis is a -dimensional affine sub-space of (i.e. a linear sub-space with a constant shift). In this case the formula (4) for the p-value becomes (6) where is the normalised upper incomplete Gamma function which can be found in any decent special functions library. Note that, for a linear regession model, the p-value only depends on the dimension of the affine sub-space but not on its position within the larger space . The models we encounter in global fits are usually not linear regession models. They typically belong to a class of models known as non-linear regression models. The PDF of a non-linear regression model still has the form (5) with a fixed covariance matrix , but the means may depend on the parameters in an arbitrary way. Furthermore, the parameter spaces and don’t have to be vector spaces but can be arbitrary smooth manifolds. The means of a non-linear regression model are usually the theory predictions for the observables and have a known functional dependence on the parameters of the theory. The information about experimental uncertainties (and correlations) is encoded in the covariance matrix . Since any smooth non-linear function can be locally approximated by a linear function it seems plausible that (6) can still hold approximately even for non-linear regression models. This is essentially the content of Wilks’ theorem. It states that, for a general model (not only non-linear regression models) which satisfies certain regularity conditions, (6) holds asymptotically in the limit of an infinite number of independent observations. The parameter spaces and in the general case can be smooth manifolds with dimensions and , respectively. Note, however, that for Wilks’ theorem to hold must still be a subset of . ## Plug-in p-values and the myFitter method¶ In practice we cannot make an arbitrarily large number of independent observations. Thus (6) is only an approximation whose quality depends on the amount of collected data and the model under consideration. 
To test the quality of the approximation (6) for a specific mode one must evaluate or at least approximate (4) by other means, e.g. numerically. Such computations are called toy simulations, and their computational cost can be immense. Note that the evaluation of (and thus of the integrand in (4)) requires a numerical optimisation. And then the integral in (4) still needs to be maximised over . In most cases the integral is reasonably close to its maxiumum value when is set to its maximum likelihood estimate for the observed data and the parameter space : Thus, the plug-in p-value is often a good approximation to the real p-value. The standard numerical approach to computing the remaining integral is to generate a large number of toy observations distributed according to and then determine the fraction of toy observations where . If the p-value is small this method can become very inefficient. The efficiency can be significantly improved by importance sampling methods, i.e. by drawing toy observations from some other (suitably constructed) sampling density which avoids the region with . This method lies at the heart of the myFitter package. ## Gaussian and systematic errors¶ As mentioned above the most common type of model we deal with in global fits are non-linear regression models where the means represent theoretical predictions for the observables and the information about experimental uncertainties is encoded in the covariance matrix . If the value of an observable is given as in a scientific text this usually implies that the observable has a Gaussian distribution with a standard deviation of 0.5. Measurements from different experiments are usually uncorrelated. Thus the covariance matrix for independent observables is where the are the errors of the individual measurements. If several quantities are measured in the same experiment they can be correlated. Information about the correlations is usually presented in the form of a correlation matrix . The correlation matrix is always symmetric and its diagonal entries are 1. (Thus it is common to only show the lower or upper triangle of in a publication.) The elements of are called correlation coefficients. For observables with errors and correlation coefficients the covariance matrix is given by In some cases the error of an observable is broken up into a statistical and a systematic component. Typical notations are or simply with an indication which component is the systematic one given in the text. Sometimes the systematic error is asymmetric and denoted as . What are we supposed to do with this extra information? This depends largely on the context, i.e. on the nature of the systematic uncertainties being quoted. For example, the theoretical prediction of an observable (or its extraction from raw experimental data) might require knowledge of some other quantity which has been measured elsewhere with some Gaussian uncertainty. In this case one also speaks of a parametric uncertainty. If you know the dependence of on it is best to treat as a parameter of your model, add an observable which represents the measurement of (i.e. with a Gaussian distribution centered on and an appropriate standard deviation), and use only the statistical error to model the distribution of . In particular, this is the only (correct) way in which you can combine with other observables that depend on the same parameter . 
In this case the parameter is called a nuisance parameter, since we are not interested in its value but need it to extract values for the parameters we are interested in. If you don’t know the dependence of on but you are sure that the systematic uncertainty for given in a paper is (mainly) parametric due to and that does not affect any other observables in your fit you can combine the statistical and systematic error in quadrature. This means you assume a Gaussian distribution for with standard deviation , where is the statistical and the systematic error (which should be symmetric in this case). This procedure is also correct if the systematic error is the combined parametric uncertainty due to several parameters, as long as none of these parameters affect any other observables in your fit. So far in our discussion we have assumed that the nuisance parameter(s) can be measured and have a Gaussian distribution. This is not always the case. If, for example, the theoretical prediction for an observable is an approximation there will be a constant offset between the theory prediction and the measured value. This offset does not average out when the experiment is repeated many times, and it cannot be measured separately either. At best we can find some sort of upper bound on the size of this offset. The extraction of an observable from the raw data may also depend on parameters which can only be bounded but not measured. In this case the offset between the theory prediction and the mean of the measured quantity should be treated as an additional model parameter whose values are restricted to a finite range. Assume that the systematic error of is given as with . Let be the theory prediction for . The mean of the observable then depends on an additional nuisance parameter which takes values in the interval : Note that the formula above holds for systematic uncertainties associated with the measurement of . If the theory prediction for an observable is quoted with a systematic error the correct range for is .
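To tie the preceding sections together, here is a small numerical sketch (my own illustration, not part of the myFitter documentation) of the two p-value recipes for a toy non-linear regression model: the asymptotic formula (6), which for a Gaussian likelihood reduces to a chi-square tail probability with dim(Omega) - dim(Omega0) degrees of freedom, and a brute-force plug-in p-value obtained from toy observations drawn at the constrained best-fit point. All names and numbers below are invented for the example; myFitter itself accelerates the second step with importance sampling.

```python
# Toy illustration: Wilks' asymptotic p-value vs. a plug-in (toy-MC) p-value
# for a Gaussian model y_i ~ N(mu_i(omega), sigma_i^2), null hypothesis: omega2 = 0.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

sigma = np.array([1.0, 0.8, 1.2])           # fixed "experimental" errors

def mu(omega):                               # toy non-linear theory prediction
    a, b = omega
    return np.array([a, a + b**2, np.sin(a) + b])

def chisq(omega, y):                         # -2 log L up to a constant
    return np.sum(((y - mu(omega)) / sigma) ** 2)

def t_stat(y):                               # likelihood-ratio test statistic
    full = minimize(lambda w: chisq(w, y), x0=[0.0, 0.0]).fun
    null = minimize(lambda a: chisq([a[0], 0.0], y), x0=[0.0]).fun
    return null - full

y_obs = np.array([0.3, 1.4, 0.9])            # pretend measurement
t_obs = t_stat(y_obs)

# Asymptotic p-value: chi-square with 1 degree of freedom (2 - 1 parameters)
p_wilks = chi2.sf(t_obs, df=1)

# Plug-in p-value: toys generated at the best fit *within the null hypothesis*
a_hat = minimize(lambda a: chisq([a[0], 0.0], y_obs), x0=[0.0]).x[0]
rng = np.random.default_rng(1)
toys = mu([a_hat, 0.0]) + rng.normal(size=(500, 3)) * sigma
p_plugin = np.mean([t_stat(y) >= t_obs for y in toys])

print(p_wilks, p_plugin)
```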
{}
Advanced Engineering Sciences (工程科学与技术), 2019, Vol. 51, Issue (1): 52-59

Characteristics of Horseshoe Vortex Upstream of the Cylinder in Shallow Water with Low Cylinder Reynolds Number

YANG Pingping1, ZHANG Huilan1,2, WANG Yunqi1,2, WANG Yujie1,2

1. Jinyun Forest Ecosystem Research Station, School of Soil and Water Conservation, Beijing Forestry Univ., Beijing 100083, China; 2. Beijing Eng. Research Center of Soil and Water Conservation, Beijing Forestry Univ., Beijing 100083, China

Abstract: The horseshoe vortex (HV) forms upstream of a vertical cylinder when flow passes the cylinder, and it is responsible for the local scouring at the base of the cylinder. Extensive work has been carried out to investigate the characteristics of the HV in open channel flow with high Reynolds number and large flow depth. However, it has been difficult to measure the HV experimentally at low Reynolds number and shallow flow depth, owing to limitations of experimental technology. To capture the HV accurately in shallow water flow, a high-resolution, high-frequency particle image velocimetry (HR-PIV) system was employed in the present study. The flow fields upstream of the cylinder were captured by HR-PIV in 6 experimental groups with shallow flow depth. The separation points of each group were obtained by analyzing the characteristics of the time-averaged flow fields. The HV was identified by the λci criterion, where λci represents the swirling strength of the vortex. The locations of the HV were then obtained from the point of maximal swirling strength. In addition, the radius of the HV was calculated by superposition of an Oseen vortex and a pure shear model. The results showed that within a low cylinder Reynolds number range (ReD < 5 000), as ReD increases, the separation point and the HV rapidly approach the cylinder while the HV moves rapidly towards the flume bed; the radius of the HV decreases and its swirling strength increases. Under shallow water flow conditions with the cylinder diameter kept constant, as the flow depth increases, the separation point moves upstream, the HV moves upstream and towards the free surface, and the radius of the HV increases. These HV parameters under the present flow conditions were larger than those in open channel flows. Drawing on previous works, it was found that the separation point and the HV behave differently as ReD becomes larger. When 5 000 < ReD < 8 000, the separation point still moves rapidly downstream with increasing ReD while the HV remains stable. When ReD > 8 000, the separation point moves slowly downstream and the HV still remains stable. The research results can provide a basis and reference for engineering design aimed at preventing local scouring at the base of a cylinder.

Key words: flow around cylinder    horseshoe vortex    particle image velocimetry    shallow water flow    low Reynolds number

1 Experiments and methods

1.1 Experimental system

Fig. 1 Schematic diagram of experimental set-up

1.2 Vortex identification method

$$ M = \left[ \begin{array}{cc} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \\ \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y} \end{array} \right] \quad (1) $$

$\lambda_{\rm ci}$ is the imaginary part of the eigenvalues of this matrix; in a 2-dimensional plane it is computed as:

$$ \lambda_{\rm ci} = \left\{ \begin{aligned} & \sqrt{Q - \frac{P^2}{4}}, && Q - \frac{P^2}{4} > 0;\\ & 0, && Q - \frac{P^2}{4} \le 0 \end{aligned} \right. \quad (2) $$

$$ P = -\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \quad (3) $$

$$ Q = \frac{\partial u}{\partial x}\frac{\partial v}{\partial y} - \frac{\partial u}{\partial y}\frac{\partial v}{\partial x} \quad (4) $$

1.3 Extraction of the horseshoe vortex scale

$$ u = \frac{\varGamma}{2\pi} \cdot \left[ 1 - \exp\left(-\frac{x^2 + y^2}{R^2}\right) \right] \cdot \left(-\frac{y}{x^2 + y^2}\right) + k\cos^2\theta\, y - k\cos\theta\sin\theta\, x \quad (5) $$

$$ v = \frac{\varGamma}{2\pi} \cdot \left[ 1 - \exp\left(-\frac{x^2 + y^2}{R^2}\right) \right] \cdot \frac{x}{x^2 + y^2} + k\sin\theta\cos\theta\, y - k\sin^2\theta\, x \quad (6) $$

Fig. 2 Comparison of measured and fitted flow fields

2 Experimental results and discussion

2.1 Flow separation point

Fig. 3 Relationship between location of separation point and cylinder Reynolds number

Fig. 4 Relationship between location of separation point and flow depth

2.2 Characteristics of the horseshoe vortex

Fig. 5 Horseshoe vortex system extraction

Fig. 6 Relationship between longitudinal and vertical location of horseshoe vortex and cylinder Reynolds number

Fig. 7 Relationship between radius of horseshoe vortex and cylinder Reynolds number

Fig. 8 Relationship between swirling strength of horseshoe vortex and cylinder Reynolds number

Fig. 9 Relationship between location and radius of the primary horseshoe vortex and flow depth

2.3 Flow process

Fig. 10 Model of the horseshoe vortex upstream of the cylinder

3 Conclusions

1) Under low cylinder Reynolds number conditions (1 600 < $Re_{\rm D}$ < 4 400), as the cylinder Reynolds number increases, the longitudinal positions of the flow separation point and the horseshoe vortex both move rapidly downstream (towards the cylinder), the vertical position of the horseshoe vortex decreases so that it gradually approaches the bed, and the radius of the horseshoe vortex decreases while its swirling strength increases.

2) Under shallow water flow conditions (0.48 < $h/D$ < 0.58), with the cylinder diameter kept constant, as the flow depth increases the longitudinal positions of the flow separation point and the horseshoe vortex move upstream, and the vertical position and radius of the horseshoe vortex increase, being significantly larger than under open channel flow conditions.

3) Compared with data from previous studies, the motion of the horseshoe vortex shows three stages. For 500 < $Re_{\rm D}$ < 5 000, the flow separation point, the longitudinal and vertical positions of the horseshoe vortex, and its radius are inversely related to $Re_{\rm D}$, while the swirling strength is positively related to $Re_{\rm D}$. For 5 000 < $Re_{\rm D}$ < 8 000, the flow separation point is still inversely related to $Re_{\rm D}$, while the horseshoe vortex parameters remain stable. For $Re_{\rm D}$ > 8 000, the flow separation point moves slowly downstream and the horseshoe vortex parameters remain stable, with the longitudinal position of the primary horseshoe vortex stabilizing at about 0.17$D$, its vertical position at about 0.06$D$, and its radius at about 0.04$D$.

[1] Gossler A A, Marshall J S. Simulation of normal vortex cylinder interaction in a viscous fluid[J]. Journal of Fluid Mechanics, 2001, 431: 371-405. DOI:10.1017/S0022112000003062 [2] Kairouz K A, Rahai H R. Turbulent junction flow with an upstream ribbed surface[J]. International Journal of Heat & Fluid Flow, 2005, 26(5): 771-779. DOI:10.1016/j.ijheatfluidflow.2005.02.002 [3] Kirkil G, Constantinescu G. A numerical study of the laminar necklace vortex system and its effect on the wake for a circular cylinder[J]. Physics of Fluids, 2012, 24(7): 415-443. DOI:10.1063/1.4731291 [4] Dargahi B. The turbulent flow field around a circular cylinder[J]. Experiments in Fluids, 1989, 8(1): 1-12. [5] Graf W H, Yulistiyanto B. Experiments on flow around a cylinder; the velocity and vorticity fields[J].
Journal of Hydraulic Research, 1998, 36(4): 637-654. DOI:10.1080/00221689809498613 [6] Ozturk N A,Akkoca A,Sahin B. Flow details of a circular cylinder mounted on a flat plate[J]. Journal of Hydraulic Research, 2008, 46(3): 344-355. DOI:10.3826/jhr.2008.3126 [7] Chen Qigang,Qi Meilan,Li Jinzhao,et al. Study on the features of approaching flow upstream of a circular cylinder inopen channel flows based on PIV measurement[J]. Journal of Hydraulic Engineering, 2015, 46(8): 967-973. [陈启刚,齐梅兰,李金钊,等. 基于粒子图像测速技术的明渠圆柱上游行近流特征研究[J]. 水利学报, 2015, 46(8): 967-973. DOI:10.13243/j.cnki.slxb.20141253] [8] Chen Qigang,Qi Meilan,Li Jinzhao. Kinematic characteristics of horseshoe vortex upstreamof circular cylinders in open channel flow[J]. Journal of Hydraulic Engineering, 2016, 47(2): 158-164. [陈启刚,齐梅兰,李金钊. 明渠柱体上游马蹄涡的运动学特征研究[J]. 水利学报, 2016, 47(2): 158-164. DOI:10.13243/j.cnki.slxb.20150528] [9] Chen Qigang,Qi Meilan,Zhang Qiang,et al. Experimental study on the multimodal dynamics of the turbulent horseshoe vortex system around a circular cylinder[J]. Physics of Fluids, 2017, 29(1): 015106. DOI:10.1063/1.4974523 [10] Akilli H,Rockwell D. Vortex formation from a cylinder in shallow water[J]. Physics of Fluids, 2002, 14(9): 2957-2967. DOI:10.1063/1.1483307 [11] Fu H,Rockwell D. Shallow flow past a cylinder:Transition phenomena at low Reynolds number[J]. Journal of Fluid Mechanics, 2005, 540: 75-97. DOI:10.1017/S0022112005003381 [12] Nezu I. Open-channel flow turbulence and its research prospect in the 21st century[J]. Journal of Hydraulic Engineering, 2005, 131(4): 229-246. DOI:10.1061/(ASCE)0733-9429(2005)131:4(229) [13] Zhong Qiang,Wang Xingkui,Miao Wei,et al. High resolution PTV system and its application in the measurement inviscous sub layer in smooth open channel flow[J]. Journal of Hydraulic Engineering, 2014, 45(5): 513-520. [钟强,王兴奎,苗蔚,等. 高分辨率粒子示踪测速技术在光滑明渠紊流黏性底层测量中的应用[J]. 水利学报, 2014, 45(5): 513-520. DOI:10.13243/j.cnki.slxb.2014.05.002] [14] Zhou J,Adrian R J,Balachandar S,et al. Mechanisms for generating coherent packets of hairpin vortices in channel flow[J]. Journal of Fluid Mechanics, 1999, 387(10): 353-396. DOI:10.1017/S002211209900467X [15] Gao Q,Ortiz-Dueñas C,Longmire E K. Analysis of vortex populations in turbulent wall-bounded flows[J]. Journal of Fluid Mechanics, 2011, 678: 87-123. DOI:10.1017/jfm.2011.101 [16] Tomkins C D,Adrian R J. Spanwise structure and scale growth in turbulent boundary layers[J]. Journal of Fluid Mechanics, 2003, 490: 37-74. DOI:10.1017/S0022112003005251 [17] Zhong Qiang,Chen Qigang,Li Danxun,et al. The scale and eirculation characteristics of spanwise vortexes in open channel flows[J]. Journal of Sichuan University (Engineering Science Edition), 2013, 45(增刊2): 66-70. [钟强,陈启刚,李丹勋,等. 明渠湍流横向涡旋的尺度与环量特征[J]. 四川大学学报(工程科学版), 2013, 45(增刊2): 66-70.] [18] Varun A V,Balasubramanian K,Sujith R I. An automated vortex detection scheme using the wavelet transform of the d2 field[J]. Experiments in Fluids, 2008, 45(5): 857-868. DOI:10.1007/s00348-008-0505-5 [19] Carlier J,Stanislas M. Experimental study of eddy structures in a turbulent boundary layer using particle image velocimetry[J]. Journal of Fluid Mechanics, 2005, 535(535): 143-188. DOI:10.1017/S0022112005004751 [20] Herpin S,Stanislas M,Soria J. The organization of near-wall turbulence:A comparison between boundary layer SPIV data and channel flow DNS data[J]. Journal of Turbulence, 2010, 11(47): 1-30. DOI:10.1080/14685248.2010.508460 [21] Qi Meilan. Riverbed scouring around bridge piers in river section with sand pits[J]. 
Journal of Hydraulic Engineering, 2005, 36(7): 835-839. [齐梅兰. 采沙河床桥墩冲刷研究[J]. 水利学报, 2005, 36(7): 835-839. DOI:10.3321/j.issn:0559-9350.2005.07.012] [22] Unger J,Hager W H. Down-flow and horseshoe vortex characteristics of sediment embedded bridge piers[J]. Experiments in Fluids, 2007, 42(1): 1-19. DOI:10.1007/s00348-006-0209-7 [23] Roulund A,Sumer B M,Fredsøe J,et al. Numerical and experimental investigation of flow and scour around a circular pile[J]. Journal of Fluid Mechanics, 2005, 534(534): 351-401. DOI:10.1017/S0022112005004507 [24] Wei Q D,Chen G,Du X D. An experimental study on the structure of juncture flows[J]. Journal of Visualization, 2001, 3(4): 341-348.
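As a postscript to the vortex-identification method of Section 1.2, the $\lambda_{ci}$ (swirling-strength) criterion of Eqs. (1)-(4) is straightforward to evaluate on a gridded PIV velocity field. The sketch below is my own illustration, not code from the paper, and it assumes a uniformly spaced measurement grid.

```python
# Swirling-strength (lambda_ci) field from a 2-D PIV velocity field (u, v).
# lambda_ci is the imaginary part of the eigenvalues of the velocity-gradient
# tensor M = [[du/dx, du/dy], [dv/dx, dv/dy]]; it is nonzero only where the
# eigenvalues are complex, i.e. where Q - P^2/4 > 0 (Eqs. 2-4).
import numpy as np

def swirling_strength(u, v, dx, dy):
    dudx = np.gradient(u, dx, axis=1); dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1); dvdy = np.gradient(v, dy, axis=0)
    P = -(dudx + dvdy)                    # Eq. (3): minus the trace of M
    Q = dudx * dvdy - dudy * dvdx         # Eq. (4): determinant of M
    disc = Q - P**2 / 4.0
    return np.where(disc > 0.0, np.sqrt(np.clip(disc, 0.0, None)), 0.0)  # Eq. (2)

# Exercise the function on a synthetic decaying vortex (not real PIV data):
x = np.linspace(-1, 1, 101); y = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, y)
u = -Y * np.exp(-(X**2 + Y**2)); v = X * np.exp(-(X**2 + Y**2))
lam = swirling_strength(u, v, x[1] - x[0], y[1] - y[0])
print(lam.max())                          # peak swirl sits at the vortex core
```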
{}
Copied to clipboard ## G = C8×M4(2)  order 128 = 27 ### Direct product of C8 and M4(2) direct product, p-group, metabelian, nilpotent (class 2), monomial Series: Derived Chief Lower central Upper central Jennings Derived series C1 — C2 — C8×M4(2) Chief series C1 — C2 — C22 — C2×C4 — C42 — C2×C42 — C2×C4×C8 — C8×M4(2) Lower central C1 — C2 — C8×M4(2) Upper central C1 — C4×C8 — C8×M4(2) Jennings C1 — C22 — C22 — C42 — C8×M4(2) Generators and relations for C8×M4(2) G = < a,b,c | a8=b8=c2=1, ab=ba, ac=ca, cbc=b5 > Subgroups: 132 in 112 conjugacy classes, 92 normal (26 characteristic) C1, C2, C2, C4, C4, C22, C22, C22, C8, C8, C2×C4, C2×C4, C2×C4, C23, C42, C2×C8, C2×C8, M4(2), C22×C4, C4×C8, C4×C8, C8⋊C4, C22⋊C8, C4⋊C8, C2×C42, C22×C8, C2×M4(2), C82, C8⋊C8, C2×C4×C8, C4×M4(2), C42.12C4, C8×M4(2) Quotients: C1, C2, C4, C22, C8, C2×C4, C23, C42, C2×C8, M4(2), C22×C4, C4×C8, C2×C42, C22×C8, C2×M4(2), C8○D4, C2×C4×C8, C4×M4(2), C82M4(2), C8×M4(2) Smallest permutation representation of C8×M4(2) On 64 points Generators in S64 (1 2 3 4 5 6 7 8)(9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56)(57 58 59 60 61 62 63 64) (1 48 10 61 31 23 35 52)(2 41 11 62 32 24 36 53)(3 42 12 63 25 17 37 54)(4 43 13 64 26 18 38 55)(5 44 14 57 27 19 39 56)(6 45 15 58 28 20 40 49)(7 46 16 59 29 21 33 50)(8 47 9 60 30 22 34 51) (1 5)(2 6)(3 7)(4 8)(9 13)(10 14)(11 15)(12 16)(17 46)(18 47)(19 48)(20 41)(21 42)(22 43)(23 44)(24 45)(25 29)(26 30)(27 31)(28 32)(33 37)(34 38)(35 39)(36 40)(49 62)(50 63)(51 64)(52 57)(53 58)(54 59)(55 60)(56 61) G:=sub<Sym(64)| (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64), (1,48,10,61,31,23,35,52)(2,41,11,62,32,24,36,53)(3,42,12,63,25,17,37,54)(4,43,13,64,26,18,38,55)(5,44,14,57,27,19,39,56)(6,45,15,58,28,20,40,49)(7,46,16,59,29,21,33,50)(8,47,9,60,30,22,34,51), (1,5)(2,6)(3,7)(4,8)(9,13)(10,14)(11,15)(12,16)(17,46)(18,47)(19,48)(20,41)(21,42)(22,43)(23,44)(24,45)(25,29)(26,30)(27,31)(28,32)(33,37)(34,38)(35,39)(36,40)(49,62)(50,63)(51,64)(52,57)(53,58)(54,59)(55,60)(56,61)>; G:=Group( (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64), (1,48,10,61,31,23,35,52)(2,41,11,62,32,24,36,53)(3,42,12,63,25,17,37,54)(4,43,13,64,26,18,38,55)(5,44,14,57,27,19,39,56)(6,45,15,58,28,20,40,49)(7,46,16,59,29,21,33,50)(8,47,9,60,30,22,34,51), (1,5)(2,6)(3,7)(4,8)(9,13)(10,14)(11,15)(12,16)(17,46)(18,47)(19,48)(20,41)(21,42)(22,43)(23,44)(24,45)(25,29)(26,30)(27,31)(28,32)(33,37)(34,38)(35,39)(36,40)(49,62)(50,63)(51,64)(52,57)(53,58)(54,59)(55,60)(56,61) ); G=PermutationGroup([[(1,2,3,4,5,6,7,8),(9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56),(57,58,59,60,61,62,63,64)], [(1,48,10,61,31,23,35,52),(2,41,11,62,32,24,36,53),(3,42,12,63,25,17,37,54),(4,43,13,64,26,18,38,55),(5,44,14,57,27,19,39,56),(6,45,15,58,28,20,40,49),(7,46,16,59,29,21,33,50),(8,47,9,60,30,22,34,51)], [(1,5),(2,6),(3,7),(4,8),(9,13),(10,14),(11,15),(12,16),(17,46),(18,47),(19,48),(20,41),(21,42),(22,43),(23,44),(24,45),(25,29),(26,30),(27,31),(28,32),(33,37),(34,38),(35,39),(36,40),(49,62),(50,63),(51,64),(52,57),(53,58),(54,59),(55,60),(56,61)]]) 80 conjugacy 
classes class 1 2A 2B 2C 2D 2E 4A ··· 4L 4M ··· 4R 8A ··· 8P 8Q ··· 8BD order 1 2 2 2 2 2 4 ··· 4 4 ··· 4 8 ··· 8 8 ··· 8 size 1 1 1 1 2 2 1 ··· 1 2 ··· 2 1 ··· 1 2 ··· 2 80 irreducible representations dim 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 type + + + + + + image C1 C2 C2 C2 C2 C2 C4 C4 C4 C4 C4 C4 C8 M4(2) C8○D4 kernel C8×M4(2) C82 C8⋊C8 C2×C4×C8 C4×M4(2) C42.12C4 C4×C8 C8⋊C4 C22⋊C8 C4⋊C8 C22×C8 C2×M4(2) M4(2) C8 C4 # reps 1 2 2 1 1 1 4 4 4 4 4 4 32 8 8 Matrix representation of C8×M4(2) in GL3(𝔽17) generated by 15 0 0 0 9 0 0 0 9 , 13 0 0 0 0 13 0 16 0 , 1 0 0 0 16 0 0 0 1 G:=sub<GL(3,GF(17))| [15,0,0,0,9,0,0,0,9],[13,0,0,0,0,16,0,13,0],[1,0,0,0,16,0,0,0,1] >; C8×M4(2) in GAP, Magma, Sage, TeX C_8\times M_4(2) % in TeX G:=Group("C8xM4(2)"); // GroupNames label G:=SmallGroup(128,181); // by ID G=gap.SmallGroup(128,181); # by ID G:=PCGroup([7,-2,2,2,-2,2,-2,2,56,120,758,136,172]); // Polycyclic G:=Group<a,b,c|a^8=b^8=c^2=1,a*b=b*a,a*c=c*a,c*b*c=b^5>; // generators/relations ׿ × 𝔽
{}
## Domaine de Mortiès [in the New York Times] Posted in Mountains, Travel, Wines with tags , , , , , , , , , , , , , on March 7, 2015 by xi'an “I’m not sure how we found Domaine de Mortiès, an organic winery at the foothills of Pic St. Loup, but it was the kind of unplanned, delightful discovery our previous trips to Montpellier never allowed.” Last year,  I had the opportunity to visit and sample (!) from Domaine de Mortiès, an organic Pic Saint-Loup vineyard and winemaker. I have not yet opened the bottle of Jamais Content I bought then. Today I spotted in The New York Times a travel article on A visit to the in-laws in Montpellier that takes the author to Domaine de Mortiès, Pic Saint-Loup, Saint-Guilhem-du-Désert and other nice places, away from the overcrowded centre of town and the rather bland beach-town of Carnon, where she usually stays when visiting. And where we almost finished our Bayesian Essentials with R! To quote from the article, “Montpellier, France’s eighth-largest city, is blessed with a Mediterranean sun and a beautiful, walkable historic centre, a tourist destination in its own right, but because it is my husband’s home city, a trip there never felt like a vacation to me.” And when the author mentions the owner of Domaine de Mortiès, she states that “Mme. Moustiés looked about as enthused as a teenager working the checkout at Rite Aid”, which is not how I remember her from last year. Anyway, it is fun to see that visitors from New York City can unexpectedly come upon this excellent vineyard! ## back from Prague Posted in Travel, Wines with tags , , , on February 21, 2015 by xi'an ## absurdum technicae Posted in Kids, Wines with tags , , , , , on February 14, 2015 by xi'an In what could have been the most expensive raclette ever, I almost get rid of my oven! Last weekend, to fight the ongoing cold wave, we decided to have a raclette with mountain cheese and potatoes, but the raclette machine (mostly a resistance to melt the cheese) had an electric issue and kept blowing the meter. We then decided to use the over to melt the cheese but, while giving all signs of working, it would not heat. Rather than a cold raclette, we managed with the microwave (!), but I though the oven had blown as well. The next morning, I still checked on the web for similar accidents and found the explanation: by pressing the proper combination of buttons, we had succeeded to switch the over into the demo mode, used by shops to run the oven with no heating. The insane part of this little [very little] story is that nowhere in the manual appeared any indication of an existing demo mode and of a way of getting back to normal! After pushing combinations of buttons at random, I eventually got the solution and the oven is again working, instead of standing in the recycling bin. ## brief stop in Edinburgh Posted in Mountains, pictures, Statistics, Travel, University life, Wines with tags , , , , , , , , on January 24, 2015 by xi'an Yesterday, I was all too briefly in Edinburgh for a few hours, to give a seminar in the School of Mathematics, on the random forests approach to ABC model choice (that was earlier rejected). (The slides are almost surely identical to those used at the NIPS workshop.) One interesting question at the end of the talk was on the potential bias in the posterior predictive expected loss, bias against some model from the collection of models being evaluated for selection. 
In the sense that the array of summaries used by the random forest could fail to capture features of a particular model and hence discriminate against it. While this is correct, there is no fundamental difference with implementing a posterior probability based on the same summaries. And the posterior predictive expected loss offers the advantage of testing, that is, for representative simulations from each model, of returning the corresponding model prediction error to highlight poor performances on some models. A further discussion over tea led me to ponder whether or not we could expand the use of random forests to Bayesian quantile regression. However, this would imply a monotonicity structure on a collection of random forests, which sounds daunting… My stay in Edinburgh was quite brief as I drove to the Highlands after the seminar, heading to Fort William, Although the weather was rather ghastly, the traffic was fairly light and I managed to get there unscathed, without hitting any of the deer of Rannoch Mor (saw one dead by the side of the road though…) or the snow banks of the narrow roads along Loch Lubnaig. And, as usual, it still was a pleasant feeling to drive through those places associated with climbs and hikes, Crianlarich, Tyndrum, Bridge of Orchy, and Glencoe. And to get in town early enough to enjoy a quick dinner at The Grog & Gruel, reflecting I must have had half a dozen dinners there with friends (or not) over the years. And drinking a great heather ale to them! ## Sequential Monte Carlo 2015 workshop Posted in pictures, R, Statistics, Travel, University life, Wines with tags , , , , , on January 22, 2015 by xi'an An announcement for the SMC 2015 workshop: Sequential Monte Carlo methods (also known as particle filters) have revolutionized the on-line and off-line analysis of data in fields as diverse as target tracking, computer vision, financial modelling, brain imagery, or population ecology. Their popularity stems from the fact that they have made possible to solve numerically many complex problems that were previously intractable. The aim of the SMC 2015 workshop, in the spirit of SMC2006 and SMC2012, is to gather scientists from all areas of science interested in the theory, methodology or application of Sequential Monte Carlo methods. SMC 2015 will take place at ENSAE, Paris, on August 26-28 2015. The organising committee Nicolas Chopin ENSAE, Paris Thomas Schön, Uppsala University $\dfrac{\hat{p}_1-\hat{p_2}}{\sqrt{2\hat{p}(1-\hat{p})/1032}}=1.36$
{}
declarative-0.5.4: DIY Markov Chains. Copyright (c) 2015 Jared Tobin MIT Jared Tobin unstable ghc None Haskell2010 Numeric.MCMC Contents Description This module presents a simple combinator language for Markov transition operators that are useful in MCMC. Any transition operators sharing the same stationary distribution and obeying the Markov and reversibility properties can be combined in a couple of ways, such that the resulting operator preserves the stationary distribution and desirable properties amenable for MCMC. We can deterministically concatenate operators end-to-end, or sample from a collection of them according to some probability distribution. See Geyer, 2005 for details. The result is a simple grammar for building composite, property-preserving transition operators from existing ones: transition ::= primitive transition | concatT transition transition | sampleT transition transition In addition to the above, this module provides a number of combinators for building composite transition operators. It re-exports a number of production-quality transition operators from the mighty-metropolis, speedy-slice, and hasty-hamiltonian libraries. Markov chains can then be run over arbitrary Targets using whatever transition operator is desired. import Numeric.MCMC import Data.Sampling.Types target :: [Double] -> Double target [x0, x1] = negate (5 *(x1 - x0 ^ 2) ^ 2 + 0.05 * (1 - x0) ^ 2) rosenbrock :: Target [Double] rosenbrock = Target target Nothing transition :: Transition IO (Chain [Double] b) transition = concatT (sampleT (metropolis 0.5) (metropolis 1.0)) (sampleT (slice 2.0) (slice 3.0)) main :: IO () main = withSystemRandom . asGenIO $mcmc 10000 [0, 0] transition rosenbrock See the attached test suite for other examples. Synopsis # Documentation concatT :: Monad m => Transition m a -> Transition m a -> Transition m a Source # Deterministically concat transition operators together. concatAllT :: Monad m => [Transition m a] -> Transition m a Source # Deterministically concat a list of transition operators together. sampleT :: PrimMonad m => Transition m a -> Transition m a -> Transition m a Source # Probabilistically concat transition operators together. sampleAllT :: PrimMonad m => [Transition m a] -> Transition m a Source # Probabilistically concat transition operators together via a uniform distribution. bernoulliT :: PrimMonad m => Double -> Transition m a -> Transition m a -> Transition m a Source # Probabilistically concat transition operators together using a Bernoulli distribution with the supplied success probability. This is just a generalization of sampleT. frequency :: PrimMonad m => [(Int, Transition m a)] -> Transition m a Source # Probabilistically concat transition operators together using the supplied frequency distribution. This function is more-or-less an exact copy of frequency, except here applied to transition operators. anneal :: (Monad m, Functor f) => Double -> Transition m (Chain (f Double) b) -> Transition m (Chain (f Double) b) Source # An annealing transformer. When executed, the supplied transition operator will execute over the parameter space annealed to the supplied inverse temperature. let annealedTransition = anneal 0.30 (slice 0.5) mcmc :: (MonadIO m, PrimMonad m, Show (t a)) => Int -> t a -> Transition m (Chain (t a) b) -> Target (t a) -> Gen (PrimState m) -> m () Source # Trace n iterations of a Markov chain and stream them to stdout. >>> withSystemRandom . 
asGenIO$ mcmc 3 [0, 0] (metropolis 0.5) rosenbrock -0.48939312153007863,0.13290702689491818 1.4541485365128892e-2,-0.4859905564050404 0.22487398491619448,-0.29769783186855125 chain :: (MonadIO m, PrimMonad m) => Int -> t a -> Transition m (Chain (t a) b) -> Target (t a) -> Gen (PrimState m) -> m [Chain (t a) b] Source # Trace n iterations of a Markov chain and collect them in a list. >>> results <- withSystemRandom . asGenIO $chain 3 [0, 0] (metropolis 0.5) rosenbrock # Re-exported metropolis :: (Traversable f, PrimMonad m) => Double -> Transition m (Chain (f Double) b) Source # A generic Metropolis transition operator. hamiltonian :: forall t (m :: Type -> Type) b. (Num (IxValue (t Double)), Traversable t, FunctorWithIndex (Index (t Double)) t, Ixed (t Double), PrimMonad m, IxValue (t Double) ~ Double) => Double -> Int -> Transition m (Chain (t Double) b) # A Hamiltonian transition operator. slice :: forall (m :: Type -> Type) t a b. (PrimMonad m, FoldableWithIndex (Index (t a)) t, Ixed (t a), Num (IxValue (t a)), Variate (IxValue (t a))) => IxValue (t a) -> Transition m (Chain (t a) b) # A slice sampling transition operator. create :: PrimMonad m => m (Gen (PrimState m)) # Create a generator for variates using a fixed seed. Seed a PRNG with data from the system's fast source of pseudo-random numbers. withSystemRandom :: PrimBase m => (Gen (PrimState m) -> m a) -> IO a # Seed a PRNG with data from the system's fast source of pseudo-random numbers, then run the given action. This function is unsafe and for example allows STRefs or any other mutable data structure to escape scope: >>> ref <- withSystemRandom$ \_ -> newSTRef 1 >>> withSystemRandom $\_ -> modifySTRef ref succ >> readSTRef ref 2 >>> withSystemRandom$ \_ -> modifySTRef ref succ >> readSTRef ref 3 asGenIO :: (GenIO -> IO a) -> GenIO -> IO a # Constrain the type of an action to run in the IO monad. Class of monads which can perform primitive state-transformer actions Minimal complete definition primitive #### Instances Instances details RealWorld is deeply magical. It is primitive, but it is not unlifted (hence ptrArg). We never manipulate values of type RealWorld; it's only used in the type system, to parameterise State#.
# Math Help - Proportions

1. ## Proportions

I just wanted to make sure I got this problem correctly http://img109.imageshack.us/img109/7831/91757261.png I got x = 5. Did I do this one correctly? Also, I need help on a problem that looks like this http://img109.imageshack.us/img109/7831/91757261.png I don't know how to set up a proportion for that one, so can anyone help me with this? Thanks.

2. Originally Posted by Jubbly
I just wanted to make sure I got this problem correctly http://img109.imageshack.us/img109/7831/91757261.png I got x = 5. Did I do this one correctly? Also, I need help on a problem that looks like this http://img109.imageshack.us/img109/7831/91757261.png I don't know how to set up a proportion for that one, so can anyone help me with this? Thanks.

$\frac{x}{x+5} = \frac{2x-8}{x+8}$ (= 1:1)

$x^2 + 8x = 2x^2 + 10x - 8x - 40$

$x^2 - 6x - 40 = 0$
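Finishing the algebra from the reply above (a short completion, assuming the proportion $\frac{x}{x+5} = \frac{2x-8}{x+8}$ is the intended setup, since the linked figure is not available):

$x^2 - 6x - 40 = 0 \implies (x-10)(x+4) = 0 \implies x = 10 \text{ or } x = -4$

Only $x = 10$ gives positive segment lengths, so it is the admissible solution.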
# Given $E:N\to M$ an embedding and $V,W\in \mathfrak{X}(M)$ tangent to $N$, we claim that the commutator of $V$ and $W$ is also tangent to $N$.

I have encountered some difficulties while looking at an exercise online. It basically goes as follows:

Given $$E:N\to M$$ an embedding and $$V,W\in \mathfrak{X}(M)$$ tangent to $$N$$, we claim that the commutator of $$V$$ and $$W$$ is also tangent to $$N$$.

I would like to have some ideas about how to attack the problem effectively.

• There are a few possible approaches, depending on your definition of the commutator. – Amitai Yuval Jan 23 at 6:49
• It is just the usual one: $[A,B]=AB-BA$. – DaveWasHere Jan 23 at 11:35

If $$V$$ and $$W$$ are tangent to $$N$$, it means that there are vector fields $$v$$ and $$w$$ in $$\mathfrak X(N)$$ such that for any $$x\in N$$ we have $$V_{E(x)}=E_*v_x$$, and the same is true for $$W$$. To be able to interpret things properly, recall that $$V$$ and $$W$$ are smooth on all of $$M$$, i.e. they are smooth extensions off $$E(N)$$. Then $$v$$ and $$V$$ are $$E$$-related, and so are $$w$$ and $$W$$. But we know that for $$E$$-related vector fields the commutators are also $$E$$-related, so we have (restricted to $$E(N)$$) $$[V,W]=E_*[v,w],$$ which is exactly the statement that $$[V,W]$$ is tangent to $$N$$.

• Thanks for the comment! But where do we use the fact that $\mathfrak{X}(M)\ni V,W$? – DaveWasHere Jan 23 at 15:51
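For completeness, the step "commutators of $E$-related fields are $E$-related" can be written out directly from the definitions above (a standard computation, added here as a sketch). It is also where the hypothesis $V,W\in\mathfrak{X}(M)$ enters, since $Vf$ and $Wf$ must make sense on a neighbourhood of $E(N)$ in $M$. For every smooth $f$ on $M$ and every $x\in N$,
$$v_x(f\circ E)=(E_*v_x)f=V_{E(x)}f,\qquad\text{i.e.}\qquad v(f\circ E)=(Vf)\circ E,$$
and likewise $w(f\circ E)=(Wf)\circ E$. Therefore
$$[v,w]_x(f\circ E)=v_x\big(w(f\circ E)\big)-w_x\big(v(f\circ E)\big)=v_x\big((Wf)\circ E\big)-w_x\big((Vf)\circ E\big)=V_{E(x)}(Wf)-W_{E(x)}(Vf)=[V,W]_{E(x)}f,$$
which says precisely that $$E_*[v,w]_x=[V,W]_{E(x)}$$ for all $$x\in N$$.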
# how to connect these antennas wirelessly?

I know the question is a little strange, but I need to connect these three antennas by a wireless channel (labelled with the distance d). What are the possible options for drawing that, and what are the possible options for the antennas or nodes (another shape in LaTeX)? I want to put it in a paper. Thanks.

\documentclass[12pt,a4paper]{article}
\usepackage{circuitikz}
\usetikzlibrary{positioning}
\usetikzlibrary{shapes,arrows}
\tikzset{block/.style = {draw, fill=white, rectangle, minimum height=3em, minimum width=2cm},
input/.style = {coordinate},
output/.style = {coordinate},
pinstyle/.style = {pin edge={to-,t,black}} }
%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{tikzpicture}[auto, node distance=2cm,>=latex']
\node[block](tx){Base Station};
\node[antenna] at (tx.east) {};
\node[block,below = 2cm of tx](ttx){GSM900 Tower};
\node[antenna] at (ttx.east) {};
\node[block,right = 5cm of tx](rx){Sensor Node};
\node[antenna,xscale=-1] at (rx.west) {};
\end{tikzpicture}
\end{document}

Note: I saw this code on this website but I changed it.

• You should provide a link to the original. If the link is wireless, surely not drawing anything at all is the most appropriate option? – cfr Aug 10 '16 at 21:40
• You can do the lines pretty easily (if you want them straight): \draw (1.5,-1.2) -- (6,2.07); \draw (1.2, 2.07) -- (6, 2.07); I think. – karatechop Aug 10 '16 at 22:04
• Instead of straight lines, if I may suggest expanding arcs, like ) ) ). – Matsmath Aug 11 '16 at 5:47
• You have a defined radiation style in your document - why not use that? – hbaderts Aug 11 '16 at 7:22

Something like this?

\documentclass[border=3mm]{standalone}
\usepackage{circuitikz}
% (the antenna style and the ["$d$"] edge-label syntax rely on definitions and
%  libraries not shown in this excerpt)
\tikzset{%
block/.style = {draw, fill=white, rectangle, minimum height=3em, minimum width=2cm},
input/.style = {coordinate},
output/.style = {coordinate},
pinstyle/.style = {pin edge={to-,t,black}},
zigzag/.style = {% added for solution
to path={ -- ($(\tikztostart)!.55!-9:(\tikztotarget)$) -- ($(\tikztostart)!.45!+9:(\tikztotarget)$) -- (\tikztotarget) \tikztonodes},
sharp corners}
}
\begin{document}
\begin{tikzpicture}[auto, node distance=2cm,>=latex']
\node[block](tx){Base Station};
\node[antenna] (ant1) at (tx.east) {};
\node[block,below = 2cm of tx](ttx){GSM900 Tower};
\node[antenna] (ant2) at (ttx.east) {};
\node[block,right = 5cm of tx](rx){Sensor Node};
\node[antenna,xscale=-1] (ant3) at (rx.west) {};
\coordinate (A1) at ($(ant1)+(0.5,2)$);
\coordinate (A2) at ($(ant2)+(0.5,2)$);
\coordinate (A3) at ($(ant3)+(-0.5,2)$);
\draw[draw=red,very thick,shorten >=1mm,->] (A1) to [zigzag,"$d$"] (A3);
\draw[draw=red,very thick,shorten >=1mm,->] (A2) to [zigzag,"$d$"] (A3);
\end{tikzpicture}
\end{document}

• Red alert by lightning? – cfr Aug 13 '16 at 1:42
• urgent message to sensor ....
:-) – Zarko Aug 13 '16 at 6:30

Another wireless option:

\documentclass[12pt,a4paper]{article}
\usepackage{circuitikz}
\usetikzlibrary{positioning}
\usetikzlibrary{shapes,arrows,decorations.pathreplacing}
\tikzset{block/.style = {draw, fill=white, rectangle, minimum height=3em, minimum width=2cm},
input/.style = {coordinate},
output/.style = {coordinate},
pinstyle/.style = {pin edge={to-,t,black}},
}
%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{tikzpicture}[auto, node distance=2cm,>=latex']
\node[block](tx){Base Station};
\node[antenna] at (tx.east) {};
\node[block,below = 2cm of tx](ttx){GSM900 Tower};
\node[antenna] at (ttx.east) {};
\node[block,right = 5cm of tx](rx){Sensor Node};
\node[antenna,xscale=-1] at (rx.west) {};
% the radiation and antenna styles are assumed to be defined elsewhere in the
% asker's document, as noted in the comments above
\draw[radiation] ([shift={(1cm,2cm)}]tx.east)-- node [above=5mm] {d} ([shift={(-1cm,2cm)}]rx.west);
\end{tikzpicture}
\end{document}
# update

Update posterior parameter distribution of degradation remaining useful life model

## Description

update(mdl,data) updates the posterior estimate of the parameters of the degradation remaining useful life (RUL) model mdl using the latest degradation measurements in data.

## Examples

Load training data, which is a degradation feature profile for a component. For this example, assume that the training data is not historical data. When there is no historical data, you can update your degradation model in real time using observed data.

Create an exponential degradation model with the following settings:

• Arbitrary $\theta$ and $\beta$ prior distributions with large variances so that the model relies mostly on observed data
• Noise variance of 0.003

mdl = exponentialDegradationModel('Theta',1,'ThetaVariance',1e6, ...
                                  'Beta',1,'BetaVariance',1e6, ...
                                  'NoiseVariance',0.003);   % arbitrary priors with large variances

Since there is no life time variable in the training data, create an arbitrary life time vector for fitting. Observe the degradation feature for 10 iterations. Update the degradation model after each iteration.

for i = 1:10
    % time is the life time vector and expData the observed degradation
    % feature described above (illustrative variable names)
    update(mdl,[time(i) expData(i)])
end

After observing the model for some time, for example at a steady-state operating point, you can restart the model and save the current posterior distribution as a prior distribution.

restart(mdl,true)

View the updated prior distribution parameters.

mdl.Prior

ans = struct with fields:
            Theta: 2.3555
    ThetaVariance: 0.0058
             Beta: 0.0722
     BetaVariance: 3.6362e-05
              Rho: -0.8429

## Input Arguments

mdl — Degradation RUL model, specified as a linearDegradationModel or exponentialDegradationModel object. For a linearDegradationModel, the updated parameters are Theta and ThetaVariance. For an exponentialDegradationModel, the updated parameters are Theta, ThetaVariance, Beta, BetaVariance, and Rho.

update also sets the following properties of mdl:

• InitialLifeTimeValue — The first time you call update, this property is set to the life time value in the first row of data.
• CurrentLifeTimeValue — Each time you call update, this property is set to the life time value in the last row of data.
• CurrentMeasurement — Each time you call update, this property is set to the feature measurement value in the last row of data.

data — Degradation feature measurements, specified as one of the following:

• Two-column array — The first column contains life time values and the second column contains the corresponding degradation feature measurement.
• table or timetable object that contains variables with names that match the LifeTimeVariable and DataVariables properties of mdl.

## Version History

Introduced in R2018a
# Tag Info 3 Circulation may be controlled via either air blown through spanwise slots or horizontal-axis rotors, alone or in combination. Experiments on wing systems go back at least as far as 1902 and since then almost every variation imaginable has been tried. The problems come in the engineering implementation. Where investigations have reached the stage of flight ... 1 The direct answer to the question as it is ("What´s the relationship between AOA and Airspeed?") is simple: none whatsoever -- that is, until you introduce context and some conditions. Your context is: we want an airplane to fly. And not just 'fly' but to keep level, at least. For that, you need a certain amount of lift. Lift is used to counteract ... 1 A practical answer to help you understand critical angle of attack and stall speeds would be the operation of fast jets. The quickest way to land a jet is to join for a ‘run-in and break’ this could be at any speed but typically 350kts. A high-g turn would be used from overhead the runway at circuit height to the downwind position. The aircraft would be ... 1 "Can we get into a stall without reaching the critical AOA?" -- no. "We know that any aircraft will stall at its stall speed (for a specific weight, CG position, etc.)"-- we need to add "G-loading" to this list of parameters. The "stall speed" we usually talk about is the 1-G stall speed. Change the weight or the G-... 1 Most light aircraft sit slightly nose-up in flight. This simplifies the design parameters for the wing mounting, as the wing needs to angle slightly up. The designer seeks to maximise thrust by angling the engine down so it points straight forward. On a high-wing type, angling the engine down also adjust the thrust line so it passes closer to the centre of ... 2 Let us assume level flight. Then the forces acting on the aircraft are shown in the following sketch (not necessarily to scale): The forces are : the total aerodynamic force $F_A$, which is split into two components: lift $L$ (perpendicular to direction of motion) and drag $D$ (parallel to direction of motion) the weight $W$ the thrust $T$, here acting ... 2 If a taildragger is configured and flown properly, actually the result is the same as for a tri-gear. If you land 3-point, a "full stall" landing, the ideal is for the tailwheel to make contact just before the mains, so that ground contact has the result of reducing AOA (a tiny amount). If you contact mains first you are likely to skip or bounce ... 4 First, we must be clear on what exactly is "excess thrust". I will list two possible definitions, although many more may be possible. Excess thrust is the component of the resultant force in the direction of the flight path. Using this definition, excess thrust and steady state flight are directly at odds (because any net force results in an ... 0 When less than full thrust is in use during any steady state phase of flight, it can be considered that excess thrust is available. Application of some or all of this excess thrust will result in a disturbance to the steady state - either acceleration in level flight, transition to a climb, increased climb rate, reduced descent rate or a combination, ... 0 In a steady-state climb, Thrust is not a leg of the closed triangle of force vectors; rather, (Thrust-Drag) is. See the right-hand vector diagram in this related answer-- Is excess lift or excess power needed for a climb?. 
The diagram shows that if our definition of "excess thrust" is (Thrust minus Drag), then excess Thrust clearly does exist in a ... 8 Yes. With a tailwheel airplane, if you are trying to make a "wheel landing" (on the main wheels only) rather than a 3-point landing, it is critical that the sink rate be very low at the moment of touchdown, or else the plane will tend to pitch nose-up (tail-down) which will make it bounce back into the air. A plane with tricycle gear doesn't have ... 3 Agreed. Tail wheel aircraft generally have the required angle of attack on touchdown (for a given landing speed) very close to the angle achieved when all three wheels are in contact with the ground. Hence, if the aircraft is landed at too higher speed the tail will be high, and without care, the touchdown can cause the aircraft to rotate around the main ... 8 Yes, it does make sense. However, the angle of attack increase of the tailwheel airplane would only be possible if it touches down on the main wheels while there still is much clearance of the tail wheel or skid. Normally, the ground attitude of tailwheel airplanes should be very close to their landing attitude, so all three wheels touch down almost ... 0 In addition to some excellent content in other answers, it's worth noting that V-bestglide occurs at the angle-of-attack where L/D (and therefore also Cl / Cd) is maximized, while for shallow glide angles, it's a good approximation to say that V-minsink occurs at the angle-of-attack where (Cl^3 / Cd^2) is maximized1. The difference between the two formulae ... 3 A. It simply means that the airplane will perform according to the combination of pitch and power control inputs that you make. B. Yes, why wouldn’t it be? Aircraft are subject to basic laws of physics. These laws are consistent. Every time you pitch nose down and add power you will descend and accelerate. Every time. C. N/A because it is always true, ... 0 In a steady-state climb in an airplane, the basic formula for the magnitude of the Lift vector is Lift = Weight * cosine (climb angle). For much more on this, see the vector diagrams and calculations in this related ASE answer. As long as the Thrust line is aligned with the flight path rather than being tilted up or down, the relationship Lift = cosine (... 5 In cruise: No. Less drag means less thrust, which is always beneficial for the practical operation of an airplane. There is only one condition except for approach and landing where high drag helps, an that is also not during cruise: In aerobatic airplanes during vertical maneuvers. If, for example, the aerobatic display includes a vertical dive, high drag ... 2 The question shows some confusion around the difference between forces and their coefficients. Let's address forces first. The key thing about forces is that in an unaccelerated state (which excludes turning flight) we have to be able to rearrange the force vectors into a closed triangle, square, or other closed figure. As in the vector diagrams shown in ... 3 High lift at the expense of even higher drag means that the plane will not be able to fly very fast, as drag rises sharply with speed. But the extra lift is still useful in several situations and is often provided by drag-creating high-lift devices. Some of these situations include: STOL (short takeoff and landing) and low-speed flight performance, where ... 6 Landing phase would benefit from high lift but low lift-to-drag ratio. At most phase of flight you need about the same amount of lift to keep the plane in the air. 
However during landing you need to slow down to landing speed. Hence you lower lift-to-drag ratio by keep the same amount of lift but increase amount of drag. This is usually accomplish by ... 0 I'm trying to understand why does L/D MAX, (the top of the polar curve that computes CL & CD ratio for any airfoil) is also the lowest point of the total drag curve. The graph in another answer shows how to find the max ratio of Cl / Cd, which is arithmetically equal to the max ratio of L/D. The concept of minimum Drag (as opposed to minimum Drag ... 2 Airfoil drag is "parasitic" (or better: everything but induced) drag. It consists of shear drag and pressure drag, the latter mostly from local flow separation. Both are only present when viscous flow is assumed. Airfoil drag is for the wing section without taking tip effects into account, presuming an infinitely wide wing. This kind of theoretical ... Top 50 recent answers are included
### magnetic field uniform (PDF) Spherical coils for uniform magnetic fields, The design of a spherical coil system to produce a region of uniform magnetic field is discussed The construction of a set of spherical coils is described, and it is shown that a field uniformity ,magnetic fields and forces, A beam of protons moves through a uniform magnetic field with magnitude 20 T, directed along the positive z-axis The protons have a velocity of magnitude 30x105 m/s in the xz-plane at an angle of 30o to the positive z-axis Find the magnitude and direction of force on the protonInduced Electric field from uniform magnetic field problem, I am trying to find the magnitude and direction of the induced electric field caused by a uniform magnetic field $\vec B$, which is perpendicular to the plane of the page The magnitude of the fie,Circular motion in a magnetic field, Since the force is F = qvB in a constant magnetic field, a charged particle feels a force of constant magnitude always directed perpendicular to its motion The result is a circular orbit , The fact that the field is uniform is indicated by the equal spacing of the arrowsUniformly Magnetized Sphere, Thus, both the and fields are uniform inside the sphere Note that the magnetic intensity is oppositely directed to the magnetization In other words, the field acts to demagnetize the sphere How successful it is at achieving this depends on the shape of the hysteresis curve in the negative and positive quadrant This curve is sometimes called the demagnetization curve of the magnetic ,. ##### Get Price Magnetic Field Strength: Force on a Moving Charge in a ,, Magnetic fields exert forces on moving charges, and so they exert forces on other magnets, all of which have moving charg Right Hand Rule 1 The magnetic force on a moving charge is one of the most fundamental known Magnetic force is as important as the electrostatic or Coulomb force Yet the magnetic force is more complex, in both the ,why is the magnetic field inside a solenoid uniform ,, why is the magnetic field inside a solenoid uniform Asked by Antara Keshav | 21st Jul, 2010, 12:00: AM Expert Answer: As the current flowing through the loops in solenoid carry same amount of current, the field lines produced by individual loops join/augment each other to produce uniform magnetic fieldForce and Torque on a Magnetic Dipole, CQ: Dipole in Uniform Field B µ I Starting from rest, the current ring in a uniform magnetic field will: 1 rotate clockwise, not move 2 rotate counterclockwise, not move 3 move to the right, not rotate 4 move to the left, not rotate 5 move in another direction, without rotating 6 both move and rotate 7Torque on a current carrying loop in a uniform magnetic field, (iv)If the plane of coil is parallel to the direction of magnetic field , τ max = NIAB (v)If the plane of coil is perpendicular to the direction of magnetic field, τ = 0 (vi)If current carrying coil is placed in a non-uniform magnetic field it experiences both force and torque (vii)For a given area, torque is independent of shape of the coil114: Motion of a Charged Particle in a Magnetic Field ,, A uniform magnetic field of magnitude 15 T is directed horizontally from west to east (a) What is the magnetic force on a proton at the instant when it is moving vertically downward in the field with a speed of $$4 \times 10^7 \, m/s$$? (b) Compare this force with the weight w of a proton. 
##### Get Price Chapter 8 Introduction to Magnetic Fields, switch on a uniform magnetic field B=Bˆi G which runs parallel to the plane of the loop, as shown in Figure 841(a)? Figure 841 (a) A rectangular current loop placed in a uniform magnetic field (b) The magnetic forces acting on sides 2 and 4What does magnetic field, uniform mean?, Definition of magnetic field, uniform in the Definitions dictionary Meaning of magnetic field, uniform What does magnetic field, uniform mean? Information and translations of magnetic field, uniform in the most comprehensive dictionary definitions resource on the webBiot, Magnetic Field of a Cylindrical Current Distribution with a Hole Find the magnetic field everywhere due to a uniform current distribution in a long cylindrical conductor with an off-center cylindrical hole 802 Physics II: Electricity and Magnetism, Spring 2007What is uniform magnetic field?, Dec 30, 2018· A uniform MF is a situation where the MF lines are moving from N to S pole of a magnet with a uniform separation and they are traveling in a straight line, This is not possible to obtain with a single permanent magnet First of all, you should kno,Motion of an electron in a uniform magnetic field ,, The excitation of collective plasma motion by a small charged object moving through a low-density unbounded plasma in an external uniform, static magnetic field is considered. ##### Get Price Chapter 6 Magnetostatic Fields in Matter, Consider a rectangular current loop, with sides s 1 and s 2, located in a uniform magnetic field, pointing along the z axis The magnetic dipole moment of the current loop makes an angle θ with the z axis (see Figure 61a) The magnetic forces on the left and right sides of the current loop have the same magnitude but point in opposite directions (see Figure 61b)Chapter 6 Magnetostatic Fields in Matter, If the magnetic field is non-uniform then, in general, there will be a net force on the current loop Consider an infinitesimal small current square of side ε, located in the yz plane and with a current flowing in a counter-clockwise direction (see Figure 62) The force acting on the current loop is the vector sum of the forces acting on each ,Solved: 17 In A Uniform Magnetic Field, An Electron Flies ,, Jan 18, 2021· In A Uniform Magnetic Field, An Electron Flies On A Cycloid Trajectory It Starts At (0,0,0) And Arrives At (1,2,4) What Is The Value Of Spath Dř Over Its Path? Hint: If You Are Googling Cycloid, Rethink (1) 18 What Are The Official Names Of H And B Which, B Or H, Goes Into The Lorentz Force? (1) 19 Write The Double-integral Of The Force ,Induced current in a coil from a constant uniform magnetic ,, Jan 13, 2021· If the motion of the core windings moves through field lines, a current will be generated There's no difference between moving windings or a moving magnetic field The only requirement is that by either process, the windings move through (cross) field linesEmf induced in straight conductor | Mini Physics ,, Dec 30, 2015· Magnetic Fields due to currents The diagram shows a straight conductor of length l moving with constant velocity v through a uniform magnetic field directed into the paper The conductor is moving perpendicularly to the magnetic field. 
##### Get Price What is Magnetic Flux?, Where θ is the angle between vector A and vector B If the magnetic field is non-uniform and at different parts of the surface, the magnetic field is different in magnitude and direction, then the total magnetic flux through the given surface can be given as the summation of the product of all such area elements and their corresponding magnetic fieldA uniform magnetic field acts right angles to the direction, Q A uniform magnetic field acts right angles to the direction of motion of electrons As a result, the electron moves in a circular path of radius 2 cm If the speed of electrons is doubled, then the radius of the circular path will beMagnetic Dipole in a Uniform Magnetic Field, This occurs at the characteristic radius where is the magnetic moment of the dipole (in erg/gauss) and is the magnitude of the uniform magnetic field (in gauss) This result can be derived by setting the sum of the radial contributions from the dipole and external field equal to zero, then solving forMagnetic Fields | Fundamentals of Physics | Numer,, A magnetic dipole with a dipole moment of magnitude 0020 $\mathrm{J} / \mathrm{T}$ is released from rest in a uniform magnetic field of magnitude 52 $\mathrm{mT}$ The rotation of the dipole due to the magnetic force on it is unimpededWhy is the magnetic field inside a solenoid uniform ,, Why is the magnetic field inside a solenoid uniform? Purpose of a Solenoid Solenoids, because of their structure, are able to create strong magnetic fields in their interior. ##### Get Price magnetic field | physics | Britannica, Define Uniform and Non-uniform Magnetic Field - QS StudyMagnetic Field, Magnetic field is an invisible space around a magnetic object A magnetic field is basically used to describe the distribution of magnetic force around a magnetic object Magnetic fields are created or produced when the electric charge/current moves within the vicinity of the magnetUniform Magnetic field?, Nov 08, 2014· The method to apply magnetic field is very effective and common in the comsol module cas I tried to use this method before but the results cannot give me the UNIFORM magnetic field If I add the remanant magnetic field in the domain, it simply change the domain as a permanent magnet And I think this is how comsol calls that as "remanent" termDefine Uniform and Non, Uniform Magnetic Field: Magnetic field is said to be uniform if the magnetic induction has the same magnitude and the same direction at all the points in the region A uniform magnetic field can be prepared by making a comparatively long cylindrical coil Once current is flowing throughout the coil a uniform magnetic field will subsist all along the contained by of the coilMagnetic field lines, Uniform magnetic field When magnetic field lines are the same distance apart from each other, we say that the magnetic field is uniform This is shown in the diagram: Magnetic field lines in a ,.
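Several of the snippets above (circular motion in a magnetic field, the electron-radius exercise) rely on the standard radius formula; for reference, equating the magnetic force on a charge moving perpendicular to a uniform field with the required centripetal force gives

$$qvB = \frac{mv^{2}}{r} \quad\Longrightarrow\quad r = \frac{mv}{qB},$$

so for a fixed field the radius grows linearly with speed: doubling the electron's speed in the quoted exercise doubles the radius of its circular path.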
# Math 120 Hi Matthew, let's see if this test works: Here's some random latex code: ${\displaystyle f(x)=\left(x^{3}-2x\right)^{3},f^{\prime \prime }(1)}$ test changes
Template for SIAM Journals
Author
License: Other (as stated in the work)
Abstract: This is the template for SIAM journals, downloaded from the SIAM homepage on 14 March 2018.
Free Version
Easy

# Finding the Slope of a Line

SATSTM-HVEQEX

What is the slope of the following linear equation?

$$6x-3y=4$$

A $6$
B $-2$
C $2$
D $-3$
E $0.5$
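For reference (a worked check, not part of the original item): putting the equation in slope-intercept form gives the slope directly,

$$6x - 3y = 4 \;\Longrightarrow\; y = 2x - \tfrac{4}{3},$$

so the slope is $2$, which is choice C.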
# Print a Lego piece

This challenge is a simple one. Given two inputs, describing the height and width of a Lego piece, you have to print an ASCII art representation of it.

Here is how the Lego pieces are supposed to look:

(4, 2)
___________
| o o o o |
| o o o o |
-----------

(8, 2)
___________________
| o o o o o o o o |
| o o o o o o o o |
-------------------

(4, 4)
___________
| o o o o |
| o o o o |
| o o o o |
| o o o o |
-----------

(3, 2)
_________
| o o o |
| o o o |
---------

(1, 1)
o

If you can't tell from the test-cases, the top and bottom are width*2+3 underscores and dashes, and each row has pipes for the sides, o's for the little things, and everything is separated by spaces. The only exception for this is (1, 1), which is just a single o.

You will never get 0 for any of the dimensions.

This is code-golf, so shortest code in bytes wins!
• Is it possible that the width or height will be greater than 10? What range should we support? Jun 30 '16 at 2:37
• The special case is a real bummer. Jun 30 '16 at 3:03
• In the next few years, I want to see another "Print a Lego piece" challenge that requires writing the code to tell a 3D printer to produce a Lego. Jun 30 '16 at 8:47
• Wait, "whatever integer range your language supports"? That ain't how LEGO works. The bricks are only available in a handful of very specific dimensions. Even if you add in plates, you only get a couple more. Any script that does not discard input such as (1,7) or (5,3) is complete garbage. Jul 1 '16 at 14:43
• Why doesn't the single piece (1,1) have sides? There is a real Lego piece with a single nipple on top of a cube. Jul 2 '16 at 0:00

# Charcoal, 41 bytes

¿⁼η⁼θ¹o≔⁺³×θ²κURκ⁺²ηP×κ_M⁺η¹↓Pκ↗Fη«P×θ o↑

Draws a box to get the vertical sides, then manually draws the top and bottom. Then, prints the o's line by line. Link is to verbose version of code. Try it online!

# MAWP, 131 bytes

@1A{1M\}>1M!//~[1A~!~]%~!2W3M[1A95W2W5M;]%~ [1A25W;65W1M4W;~[1A84W;77W2W43W1MM;]%~84W;65W1M4W;]~25W;2W3M[1A95W;]

The special case was a pain, but I managed to 'golf' a solution under 150 bytes here.

### Output

4 3
___________
| o o o o |
| o o o o |
| o o o o |
-----------

Try it!

# Befunge, 114 113 108 101 bytes

I know there are already a number of Befunge solutions, but I was fairly certain they could be improved upon by taking a different approach to the layout of the code. I suspect this answer can be golfed further as well, but it is already a fair bit smaller than either of the previous entries.

&::&+:2^|,+55_"_",^ -4:,,"| "<\_|#:+1\,,"|"+55$_2#$-#$" o",#!,#:< ,"-"_@#:-1< :*2+2\-_"o",@v!:-1<

Try it online!

• Can you explain why the string :<| is needed? May 29 '18 at 18:20
• @Zacharý That vertical branch on the first line never actually branches up. The top of the stack is always zero at that point, so it's a shortcut for dropping the stack top and branching down at the same time - essentially this tip. Nov 28 '18 at 18:59

# Vim, 56 keystrokes

This seems like a text editing task, so Vim is the obvious choice! I take input as a text file with two space separated integers and output the answer into the same file. Also, I hate you for having the 1x1 special case...
Anyway: "adt l"bDro:if@a*@b==1|wq|en<cr>Di| |<esc>@aio <esc>yy@bPVr_GpVr-ZZ and if there hadn't been for the special case, 35 keystrokes "adt x"bDi| |<esc>@aio <esc>yy@bPVr_GpVr-ZZ ## A breakdown for sane people: "adt l"bD Delete numbers from buffer into @a and @b (space char kept) ro:if@a*@b==1|wq|en<cr> Replace space with "o" and if special case, save and quit Di| |<esc> Clear the line and write the edges of the lego block @aio <esc> Insert @a lots of "o " to get a finished middle part yy@bP Yank line and make @b extra copies (one too many) Vr_ We are at top of buffer, replace extra line with underscores Gp Jump to bottom of buffer, pull line that we yanked earlier Vr-ZZ Replace line with dashes, save and quit ## JavaScript (ES6), 89 86 bytes (x,y,g=c=>c[r=repeat](x*2+3))=>x*y-1?g(_)+ +|${o [r](x)}| [r](y)+g(-):o Edit: Saved 3 bytes thanks to @Shaggy. • Save 3 bytes by aliasing repeat. May 2 '17 at 22:25 # Swift, 182 Bytes func p(w:Int,h:Int){let i=w*2+3;var s="";if w+h<3{s="o"}else{for _ in 0..<i{s+="_"};for _ in 0..<h{s+="\n|";for _ in 0..<w{s+=" o"};s+=" |"};s+="\n";for _ in 0..<i{s+="-"}};print(s)} If this was a closure implementation the func p(w:Int,h:Int) could be deleted and w replaced with $0 and h replaced with $1 but then it would be only the implementation not the interface so I felt that was cheating... ## 2nd attempt 180 Bytes func q(w:Int,h:Int){let l=w*2+3;var s="",a="";if w+h<3{s="o"}else{for _ in 0..<l{s+="_";a+="-"};for x in 0..<h*w{s+=x%h==0 ? "\n| o":x%h==w-1 ? " o |":" o"};s+="\n";s+=a};print(s)} Improved using ternaries. Only 2 bytes though... ## 3rd attempt 263 bytes func r(w:Int,h:Int){let l=w*2+3;var s="";if w+h<3{s="o"}else{for a in 0..<(h+2)*l{if a<l{s+="_";continue};if a>=(h+2)*l-l{s+=a%l==0 ? "\n-":"-";continue};let v=a%l>0&&a%l<l-2;s+=a%l==0 ? "\n|":v&&(a%l)%2==0 ? "o":v&&(a%l)%2==1 ? " ":a%l==l-1 ? "|":" "}};print(s)} The reason I include this is because my original thought was to decrease the amount of for loops. Whilst this is more verbose it only has the one iteration (also it took me quite a long time so here it is again in a more friendly format): func r(w: Int, h: Int) { let l = w * 2 + 3 var str = "" if w + h < 3 { str = "o" } else { for a in 0..<(h + 2) * l { if a < l { str += "_" continue } if a >= (h + 2) * l - l { str += a % l == 0 ? "\n-" : "-" continue } let v = a % l > 0 && a % l < l - 2 str += a % l == 0 ? "\n|" : v && (a % l) % 2==0 ? "0" : v && (a % l) % 2 == 1 ? " " : a % l == l - 1 ? "|" : " " } } print(str) } There are a lot of ternary conditions here so I'm sure it can be optimised further. • Welcome to the site! Nice first post. – user58826 May 2 '17 at 21:04 ## Racket 174 bytes (λ(w h)(let((dl(λ(s)(displayln s)))(d(λ(s)(display s))))(for((x(+ 3(* w 2))))(d"_")) (dl"")(for((x h))(d"| ")(for((y w))(d"o "))(d"|")(dl""))(for((x(+ 3(* w 2))))(d"-")))) Ungolfed: (define (f1 w h) (let ((dl (λ (s) (displayln s))) (d (λ (s) (display s)))) (for ((x (+ 3 (* w 2)))) (d "_")) (dl "") (for ((x h)) (d "| ") (for ((y w)) (d "o ")) (d "|") (dl "")) (for ((x (+ 3 (* w 2)))) (d "-")))) Testing: (f 5 2) Output: _____________ | o o o o o | | o o o o o | ------------- # PHP 4.1, 103 bytes This is based on my answer on https://codegolf.stackexchange.com/a/57883/14732, which was where all the heavy lifting was done. <?$R=str_repeat;printf($W+$H<3?o:"_%'_".($Z=1+$W*2)."s_ %s-%1$'-{$Z}s-",'',$R('|'.$R(' o',$W).' | ',$H)); This answer requires that short_open_tags and register_globals are enabled (which it is enabled by default). 
You can pass the values using the keys W and H, over POST, GET, SESSION and COOKIE. This outputs warnings to STDERR, which aren't errors. According to https://codegolf.meta.stackexchange.com/a/1655/14732, this is acceptable as long as it isn't disallowed in the question. Removing the warnings costs me 2 bytes. # PHP 7, 98 bytes <?=($P=str_pad)("",$w=3+2*$argv[1],_).$P("",$argv[2]*++$w,$P(" | ",$w-1,"o ")."|").$P(" ",$w,"-"); str_pad saves a little from str_repeat. Juggling with preincrement on $w saves two bytes. But version 7 is needed for assigning $r while using it at the same time. # Java, 318312297294260 258 bytes Saved 15 bytes thanks to cliffroot! interface a{static void main(String[]A){int b=Byte.valueOf(A[0]),B=Byte.valueOf(A[1]),C=3+b*2;String c="";if(b<2&B<2)c="o";else{for(;C-->0;)c+="_";for(;B-->0;){c+="\n|";for(C=b;C-->0;)c+=" o";c+=" |";}c+="\n";for(C=3+b*2;C-->0;)c+="-";}System.out.print(c);}} It works with command line arguments. Ungolfed In a human-readable form: interface a { static void main(String[] A) { int b = Byte.valueOf(A[0]), B = Byte.valueOf(A[1]), C = 3 + b*2; String c = ""; if (b < 2 & B < 2) c = "o"; else { for (; C-- > 0;) c += "_"; for (; B-- > 0;) { c += "\n|"; for (C = b; C-- >0;) c += " o"; c += " |"; } c += "\n"; for(C = 3 + b*2; C-- >0;) c += "-"; } System.out.print(c); } } Yes, it's still difficult to understand what's going on even when the program is ungolfed. So here goes a step-by-step explanation: static void main(String[] A) The first two command line arguments -which we'll use to get dimensions- can be used in the program as A[0] and A[1] (respectively). int b = Byte.valueOf(A[0]), B = Byte.valueOf(A[1]), C = 3 + b*2; String c = ""; b is the number of columns, B is the number of rows and C is a variable dedicated for use in for loops. c is the Lego piece. We'll append rows to it and then print it at the end. if (b < 2 & B < 2) c = "o"; else { If the piece to be printed is 1x1, then both b (number of columns) and B (number of rows) should be smaller than 2. So we simply set c to a single o and then skip to the statement that System.out.prints the piece if that's the case. for (; C-- > 0; C) c += "_"; Here, we append (integerValueOfA[0] * 2) + 3 underscores to c. This is the topmost row above all holes. for (; B > 0; B--) { c += "\n|"; for(C = b; C-- > 0;) c+=" o"; c += " |"; } This is the loop where we construct the piece one row at a time. What's going on inside is impossible to explain without examples. Let's say that the piece is 4x4: Before entering the loop, c looks like this: ___________ After the first iteration (\n denotes a line feed): ___________\n | o o o o | After the second iteration: ___________\n | o o o o |\n | o o o o | After the third iteration: ___________\n | o o o o |\n | o o o o |\n | o o o o | . c += "\n"; for (C = 3 + b*2; C-- > 0;) c += "-"; Here, we append (integerValueOfA[0] * 2) + 3 hyphens to the piece. This is the row at the very bottom, below all holes. The 4x4 piece I used for explaining the for loop where the piece is actually constructed now looks like this: ___________\n | o o o o |\n | o o o o |\n | o o o o |\n | o o o o |\n ----------- System.out.print(c); And finally, we print the piece! • Probably Revision 3 made this the longest post I've ever made on Stack Exchange. Jun 30 '16 at 3:57 • You can move C variable from for loops int b=Byte.valueOf(A[0]),B=Byte.valueOf(A[1]),C. 
In all your for loops it also seems like you can use C-->0; checks, makes it 298, pastebin.com/uj42JueL Jun 30 '16 at 7:51 • some creative usage of for loops for few bytes saved – pastebin.com/dhNCpi6n Jun 30 '16 at 8:23 • if you convert your arguments to bytes first, then your check is size of brick is 1x1 will be if(b==1&B==1) which allows you to save over 20 bytes Jul 1 '16 at 10:34 • also for the case 1x1 instead doing this System.out.print('o');return;, you could set c='o' and placed logic for different bricks in else block. then having single print statement and no return allow you to save some additional bytes Jul 1 '16 at 10:44 ## Batch, 172 170 bytes @echo off if "%*"=="1 1" echo o&exit/b set o= for /l %%i in (1,1,%1)do call set o=%%o%% o echo ---%o: o=--% for /l %%i in (1,1,%2)do echo ^|%o% ^| echo ---%o: o=--% Edit: Saved 2 bytes thanks to @CᴏɴᴏʀO'Bʀɪᴇɴ @EʀɪᴋᴛʜᴇGᴏʟғᴇʀ. I can save 7 bytes if I can assume delayed expansion is enabled. • %%o%% instead of %o%? Jun 30 '16 at 18:47 • @EʀɪᴋᴛʜᴇGᴏʟғᴇʀ %o% would be replaced with original value of o each time, so that o would only ever equal " o". %%o%% goes through as an argument to call of %o%, which then uses the current value of o. – Neil Jun 30 '16 at 19:36 • Why don't you... just do set o=%o% o? Jul 1 '16 at 7:53 • @EʀɪᴋᴛʜᴇGᴏʟғᴇʀ %o% gets expanded before the for loop is parsed, so the loop reads for /l %i in (1,1,8) do call set o= o which is obviously pointless. – Neil Jul 1 '16 at 8:19 • Why don't you do set o=%%o%% o then (-5)? Jul 1 '16 at 8:25 # 32 16-bit little-endian x86 machine code, 5754 51 bytes 3 bytes less thanks to @ninjalj. Heavily rewrote the code and have managed to shave off another 3 bytes In hex FCBA6F208D48FFE20492AAEB2389D941D1E14151B05FF3AAEB0BB87C20AB89D992F3AB92AAB00AAA4E7DEF59B02DF3AA91AAC3 Input: BX=width, SI=height, DI points to the buffer that receives result as a NULL-terminated string with lines separated by "\n" ## Disassembly: fc cld ba 6f 20 mov dx,0x206f ;Storing ' o' in DX for later use 8d 48 ff lea cx,[bx+si-0x1] ;CX=width+height-1 e2 04 loop _main0 ;--CX & brahch if not zero 92 xchg dx,ax ;(1,1) case, swap DX & AX aa stosb ;AL == 'o', CX == 0 eb 23 jmp _end _main0: 89 d9 mov cx,bx 41 inc cx d1 e1 shl cx,1 41 inc cx ;Calculate (width+1)*2+1 51 push cx ;and save it for future use b0 5f mov al,0x5f ;'_' f3 aa rep stosb ;Output the whole line of them eb 0b jmp _loopstart ;Jump into the loop _loop: b8 7c 20 mov ax,0x207c ;' |' ab stosw ;Output it once (left bar + space) 89 d9 mov cx,bx ;Copy width 92 xchg dx,ax ;AX == ' o' f3 ab rep stosw ;Output it CX times 92 xchg dx,ax ;Swap values back, AL == '|' aa stosb ;Output only the right bar _loopstart: b0 0a mov al,0x0a ;Newline. Can be replaced with mov ax,0x0a0d for windows newline aa stosb ;convention (at the cost of 1 byte), with stosb replaced with stosw 4e dec si ;Height-- 7d ef jge _loop ;Continue if si >= 0 (this accounts for the dummy first pass) 59 pop cx b0 2d mov al,0x2d ;'-' f3 aa rep stosb ;Output bottom line _end: 91 xchg cx,ax ;CX == 0, so swap to get zero in AL aa stosb ;NULL-terminate output c3 retn • Would be shorter as 16-bit: -3 bytes for 3 66h prefixes, +1 byte for "\r\n" line termination. Jul 4 '16 at 17:54 • You should put spaces between the crossed-out numbers and the current numbers in your byte count, for readability. Jul 8 '16 at 20:34 # V, 43, 40, 38 36 bytes One of the longest V answers I've ever written... Àio ddÀPñóo î½o u2Pí.«/| °| Vr-HVr_ Try it online! 
Since this contains unicode and unprintable characters, here is a reversible hexdump: 0000000: c069 6f20 1b64 64c0 50f1 f36f 20ee bd6f .io .dd.P..o ..o 0000010: 0d0a 7532 50ed 2eab 2f7c 20b0 7c0d 0a56 ..u2P.../| .|..V 0000020: 722d 4856 725f r-HVr_ This challenge is about manipulating text, so perfect for V! On the other hand, V is terrible at conditionals and math, so the differing output for (1, 1) really screwed it up... :( Explanation: À "Arg1 times: io <esc> "Insert 'o ' dd "Delete this line, and À "Arg2 times: P "Paste it Now we have 'Height' lines of o's with spaces between them. ñ "Wrap all of the next lines in a macro. This makes it so that if any "Search fails, execution will stop (to handle for the [1, 1] case) ó "Search and replace o î½o "'o'+space+0 or 1 newlines+another 'o' u "Undo this last search/replace 2P "Paste twice í "Search and replace on every line .«/| °| "A compressed regex. This surrounds every non-empty line with bars. Vr- "Replace the current (last) line with '-' H "Move to line one Vr_ "Replace this line with '_' Non-competing version (31 bytes): Try it online! This version uses several features that are newer then this challenge to be 5 bytes shorter! Second explanation: ddÀP which is "Delete line, and paste it n times" is replaced with ÀÄ which is "Repeat this line n times". (-2 bytes) óo î½o u which was "Replace the first match of this regex; Undo" was replaced with /o î½o Which is just "Search for a match of this regex" (-1 byte) And lastly, Ò is just a simple synonym for Vr, which both "Replace every character on this line with 'x'". (-2 bytes) • how come it seems broken at the bottom with this v.tryitonline.net/… Jul 1 '16 at 17:44 • @meepl I really have no idea. It works on 50x959 but if you increase the width or height it stops working. I'm guessing it's most likely a restriction intentionally placed on the website to prevent extremely large programs from being ran. Jul 1 '16 at 17:50 • TIO limits the output to 100 KB, mainly to prevent the frontend from crashing your browser. Jul 27 '16 at 1:26 # Cinnamon Gum, 32 bytes 0000000: 6c07 d5f5 7a5d 9cdf 5ae6 52ae 4050 0c35 l...z]..Z.R.@P.5 0000010: 18d9 052f 0082 9b42 e7c8 e422 5fe4 7d9f .../...B..."_.}. Non-competing. Try it online. Input must be exactly in the form [width,height] with no space in between the comma and the height. # Explanation The string decompresses to this: l[1,1]&o;?&p___~__~ %| ~o ~| %---~--~ The first l stage maps [1,1] to o (the special case) and everything else to the string p___~__~ %| ~o ~| %---~--~ The backtick then signals the start of a second stage; instead of outputting that string, CG chops off the backtick and executes the string. The p mode then repeats all the characters inside the tildes first parameter (width) times and then afterwards repeats the characters inside the percent signs second parameter (height) times. 
So for [4,2] it turns into this: ___________ %| o o o o | %----------- and then into: ___________ | o o o o | | o o o o | ----------- # JavaScript ECMAScript6 (929491 87 Bytes) 92 Characters a=(w,h,r='repeat')=>w-h?"__"[r](w)+"__\n"+("|"+" o"[r](w)+" |\n")[r](h)+"--"[r](w)+"---":"o" ## Everything below here is Correct 94 Characters a=(w,h,r='repeat')=>w*h-1?"__"[r](w)+"__\n"+("|"+" o"[r](w)+" |\n")[r](h)+"--"[r](w)+"---":"o" 91 Characters a=(w,h)=>w*h-1?${"__"[r='repeat'](w)}__\n+('|'+" o"[r](w)+' |\n')[r](h)+"-"[r](w*2+3):"o" 87 Characters (Smallest so Far) a=(w,h)=>w*h-1?'__'[r='repeat'](w)+__ +('|'+' o'[r](w)+ | )[r](h)+'-'[r](w*2+3):'o' I took what I learned from the other JavaScript Submissions and made it a lot smaller by writing mine from scratch and then working my way down, turns out you don't need any console.log or alert as by default it will push it to the console when there is an output I am still working on it to see if I can make it smaller yet • I think your first line of output is one character short. Jan 11 '18 at 23:32 # Befunge, 144 Bytes I would have prefered to comment to this post, but I don't have the reputation yet, so I'm putting an answer of my own, which works a similar way, but is slightly more compact &::&*:1v v3*2:\/\_"o",@ >+: v >52*," |",, v >,1-:vLEG O MAKERv::\< ^"_" _$\:|<v "o "_v v52:+3*2$<,>,,1-:^$>*,v < ^"|":-1\< v-1_@, >:"-"^ you can test the code here # Perl 5 - 84 77 bytes 84 Bytes sub l{($x,$y)=@_;$w=3+2*$x;warn$x*$y<2?"o":'_'x$w.$/.('| '.'o 'x$x."|\n")x$y.'-'x$w} 77 Bytes. With some help from Dom Hastings sub l{($x,$y)=@_;$x*$y<2?o:'_'x($w=3+2*$x).(' | '.'o 'x$x."|")x$y.$/.'-'x$w} • First I was confused as to why someone would go to the effort of using warn in a golf program, but then I realized you're using it because it's shorter than print. Nice! – pipe Jul 3 '16 at 13:11 • Yeah, I think in Perl 6 you can take another byte off by using say instead of warn Jul 3 '16 at 18:34 • You can do that in Perl 5 too, just that it's not enabled by default. I think that you can get around that in code-golf by calling your script from the command line with -E instead of -e, enabling all the extensions. I'm new to this place so I don't know exactly where it's specified how to count the scores though. – pipe Jul 3 '16 at 18:46 • Oh really, I didn't know that. I'm new here as well so I'm also not sure Jul 4 '16 at 6:59 • I think you can shorten this to 76 bytes... If you're using a function I believe returning the string is acceptable (see the JS answer, saving you 4 bytes for warn), you don't need quotes around the "o" (you can use a bareword for another -2), if you inline the calculation of $w you should save another byte ('_'x($w=3+2*$x) vs. $w=3+2*$x; ... '_'x$w) and lastly, you can change the \n for a literal newline. Hope that helps! Jul 5 '16 at 9:23 # Python 2, 71 bytes lambda x,y:('o',x*'__'+'___\n'+'| %s|\n'%('o '*x)*y+'-'*(x*2+3))[x+y>2] • Welcome to PPCG! Nice first post! 
Jul 5 '16 at 14:35 # Groovy, 107, 98, 70, 64 {x,y->t=x*2+3;x<2&&y<2?"o":'_'*t+"\n"+"|${' o'*x} |\n"*y+'-'*t} Testing: (2,2) (1,1) (8,2) (1,4) _______ | o o | | o o | ------- o ___________________ | o o o o o o o o | | o o o o o o o o | ------------------- _____ | o | | o | | o | | o | ----- # JavaScript ES6, 134 124 Bytes function p(e,n){r='repeat';e*n-1?x="_"[r](2*e+3)+"\n"+("|"+" o"[r](e)+" |\n")[r](n)+"-"[r](2*e+3)+"\n":x="o",console.log(x)} • I can shave off 4 bytes by using [r] instead of .repeat: function p(e,n){r='repeat';1==e&&1==n?x="0":x="_"[r](2*e+3)+"\n"+("|"+" o"[r](e)+" |\n")[r](n)+"-"[r](2*e+3)+"\n",console.log(x)} Jun 30 '16 at 9:20 • The output for 1,1 should be o, not 0. Also, I don't see any use of ES6 here, so you can change the version. – Neil Jun 30 '16 at 12:22 • You can replace function p(e,n) with p=(e,n)=> for 6 bytes. Jun 30 '16 at 14:14 • You're assigning x= twice. The second assignation can be removed for an extra 2 bytes, and x can be replaced altogether with a console.log( ? : ), bringing it down to 113 bytes: p=(e,n)=>{r='repeat';console.log(e*n-1?"_"[r](2*e+3)+"\n"+("|"+" o"[r](e)+" |\n")[r](n)+"-"[r](2*e+3)+"\n":"o")} Jun 30 '16 at 15:10 • Guess what, welcome to PPCG! Jul 1 '16 at 8:04 ## Common Lisp, 286 bytes (let((columns(read))(rows(read)))(when(and(= columns 1)(= rows 1))(progn(write-char #\o)(exit)))(dotimes(i(+(* columns 2)3))(write-char #\_))(terpri)(dotimes(i rows)(format t "| ")(dotimes(i columns)(format t "o "))(format t "| ")(terpri))(dotimes(i(+(* columns 2)3))(write-char #\-))) • this shows you have 285, not 286 bytes. (let((c(read))(r(read)))(when(and(= c 1)(= r 1))(progn(#1=write-char #\o)(exit)))(#2=dotimes #4=(i(+(* c 2)3))(#1# #\_))(terpri)(#2#(i r)#3=(format t "| ")(#2#(i c)(format t "o "))#3#(terpri))(#2# #4# (#1# #\-))) saves 73 bytes. I used c instead of columns, r instead of rows, also #n# and #n= reader macro where you reuse things. – user65167 Feb 25 '17 at 10:04 • This saves 103 bytes: (lambda(c r)(if(=(* c r)1)(format t"o"(exit)))(#1=dotimes #2=(i(+(* c 2)3))(format t"_"))(terpri)(#1#(i r)#3=(format t"| ")(#1#(i c)(format t"o "))#3#(terpri))(#1# #2#(format t"-"))), I think anonymous function is allowed. What I don't like is fact that for (f 1 1) output is not shown (or it's shown but program exits too quickly to see it) but it was in your solution from the beggining. – user65167 Feb 25 '17 at 19:04 # Scala, 90 bytes (w:Int,h:Int)=>{val A=2*w+3;if(w*h==1)"o"else s"${"_"*A}\n${s"|${"o "*w}|\n"*h}${"-"*A}"} paste the above function into your Scala REPL, and then invoke the function with the width and height arguments. • Hello, and welcome to PPCG! This is a great first post! There is only one small issue: we commonly use Scala, 170 bytes not Scala - 170 bytes. They both work, but out byte snippet counters look for the first, according to my research. Jul 1 '16 at 23:21 • @NoOneIsHere thanks. fixed. and also shortened the submission from 170 to 94 bytes. Enjoying this... :-) Jul 2 '16 at 9:50 • @PeterPerháč You need to use Level 1 headers, so not fixed. Instead of **Scala, 94 bytes**, use #Scala, 94 bytes. Jul 2 '16 at 13:18 • You can use anonymous functions, i.e. {val A=2*w+3;if(w*h==1)"o"else s"${"_"*A}\n${s"|${"o "*w}|\n"*h}${"-"*A}"} (not snippets, though). Jul 2 '16 at 17:30 # 05AB1E, 33 bytes Code: *i'oë¹·3+©'_׶²F'|„ o¹×„ |¶}®'-×J Explanation: *i'o # If both input equal 1, push "o" ë # Else, do... 
¹·3+ # Push input_1 × 2 + 3 © # Copy this number to the register '_× # Multiply by "_" ¶ # Push a newline character ²F } # Do the following input_2 times: '| # Push "|" „ o # Push " o" ¹× # Multiply this by input_1 „ | # Push " |" ¶ # Push a newline character ® # Retrieve the value from the register '-× # Multiply by "-" J # Join everything and implicitly print. Uses the CP-1252 encoding. Try it online!. ## Python 2, 7573 72 bytes lambda x,y:(x*'__'+'___\n'+('| '+'o '*x+'|\n')*y+'-'*(x*2+3),'o')[x<2>y] Returns a string, with a conditional to handle the 1,1 block. Thanks to Lynn and Chepner for two bytes • lambda x,y:('_'*x*2+'___\n'+ etc. saves a byte. – Lynn Jun 30 '16 at 1:31 • Knock off another byte with x*'__' instead of 2*x*'_'. Jul 1 '16 at 14:24 • Just join this Comunity, sorry for asking. How can i see it run? i paste it in the terminal and just prints <function <lambda> at 0x......>. How can i test this? Jul 2 '16 at 10:29 • @Miguel assign it to a variable. It'll return the value of the function: f=lambda x:x+1; print(f(9)) Jul 2 '16 at 11:23 • One more question if not to complicated to answer. How can you trace the bits so precisely? Jul 2 '16 at 11:30 # Retina, 112 111 bytes Makes the piece the correct width, then adds rows. Performs a substitution at the end if the result was a 1x1. Takes input like 4,2. Byte count assumes ISO 8859-1 encoding. \d+$* 1$¶___¶| |¶--- +s1(,.*)(¶.* )(.*)$1__$2o$3-- +sm1(.*)(^.*$¶)(.*)$1$2$2$3 ,¶ _____¶\| o \|¶----- o Try it online ### Fun fact: You can give multiple inputs on separate lines and it will add them together into a single LEGO! J, 59 bytes. ' o|_='&({~)@:((4,~3,2(,~"1)2 0,"1(1 0&($~)@,+:))^:(0===])) ## C, 202 191 bytes #define p printf i,w,h;t(char*c){for(i=0;p(c),++i<w*2+3;);p("\n");}f(){t("_");for(i=0;i<w*h;)i%w<1?p("| o "):p("o "),i++%w>w-2&&p("|\n");t("-");}main(){scanf("%d %d",&w,&h);w*h<2?p("o"):f();} Thanks to @Lince Assassino for saving 11 bytes! Ungolfed: #include <stdio.h> #define p printf int i, w, h; void t(char *c) { for(i=0; p(c), ++i<w*2+3;); p("\n"); } void f() { t("_"); for(i=0; i<w*h;) { i%w<1 ? p("| o ") : p("o "); i++%w>w-2 && p("|\n"); } t("-"); } int main() { scanf("%d %d", &w, &h); w*h<2 ? p("o") : f(); } • You can change your first line for p(char*A){printf(A);} Jul 1 '16 at 17:34 • Really, thank you! But it's possible to make shorter with #define p printf Jul 1 '16 at 18:16 # SpecBAS - 87 bytes 1 INPUT w,h: x=w*2+3: ?IIF$(w=1 AND h=1,"o","_"*x+#13+(("| "+("o "*w)+"|"#13)*h)+"-"*x) Uses an inline-IF to either print single "o" when width and height are 1, otherwise builds up a string. #13 is the line feed character. ## Bash, 186,163,156, 148,131, 130 Bytes ## Arg1 - Lego width ## Arg2 - Lego height function print_lego() { (($1+$2>2))&&{ printf _%.0s seq -1$1 echo for((i=$2;i--;)){ printf \| for((j=$1;j--;)){ printf o } echo \| } printf =%.0s seq -1 $1 echo }||echo o } Note: If you really need the lego to have hyphens for the last line, then change the last printf to printf -- -%.0s seq -1$1 • Wouldn't this be quite a bit shorter if it wasn't wrapped in a function? Also, I'm not an expert in bash but it looks like it's got some extra whitespace. Jun 30 '16 at 4:26 • It would be ~170 as a one-liner: (($x+$y==2))&&echo o||{ printf _%.0s $(seq -1$x);echo;for((i=0;i<$y;i++));do printf \|;for((j=0;j<$x;j++));do printf o;done;echo \|;done;printf =%.0s $(seq -1$x);echo;} – user53101 Jun 30 '16 at 13:00 • If you use (), you don't need the keyword function to declare a function. 
There is an alternate for syntax using braces, e.g: for((j=$1;j--;));{ printf o;}. As shown in the previous example, you can save some characters by decrementing and testing in for's second expression. You can use backticks instead of $(cmd). Jun 30 '16 at 19:48 • @ninjalj Thanks, I'm new to code golf -- that squeezes another ~17 bytes off, the one-liner is now 152: (($x+$y==2))&&echo o||{ printf _%.0s seq -1 $x;echo;for((i=$y;i--;)){ printf \|;for((j=$x;j--;)){ printf o;};echo \|;};printf =%.0s seq -1$x;echo;} – user53101 Jul 1 '16 at 0:28 • Dollar signs are optional in arithmetic context, so you can shave a few more bytes by changing (($something)) to ((something)) throughout. ($1 still needs the dollar sign to disambiguate it from the literal 1.) May 2 '18 at 9:53 1#1="o" w#h|f<-w*2+3=f!"_"++'\n':h!('|':w!" o"++" |\n")++f!"-" n!s=[1..n]>>s Usage example: 3 # 2 gives you a multiline string for a 3-by-2 brick. Ungolfed: (#) :: Int -> Int -> String 1 # 1 = "o" width # height = let longWidth = 2 * width + 3 in -- golfed as 'f' ( longWidth times "_" ++ "\n" ) ++ height times ( "|" ++ width times " o" ++ " |\n" ) ++ ( longWidth times "-" ) -- | golfed as (!) times :: Int -> [a] -> [a] times n s = concat \$ replicate n s • At first glance that looked like it should be shorter with unlines`, but it's not. Jul 3 '16 at 21:40
{}
• Solved • Archived
## PR_CONNECT_RESET_ERROR Can't connect to websites
Asked by linda28 11 months ago. Answered by FredMcD 11 months ago.
• Solved • Archived
## This video file cannot be played. (Error Code: 232011)
I'm running the latest Firefox 68.2 64-bit on Windows 7 x64 Ultimate. This is a new problem: I can play any media with all my apps and on other sites, but Accuweather News videos (https://www.accuweather.com/en/videos/trending-now/co15rob4) are suddenly not playable. PS: Firefox Help was useless on this query/error; none of the results were relevant. Am I going to have to Google Search for the solution? HELP please, and thank you.
Asked by Tweaker 1 year ago. Answered by cor-el 1 year ago.
• Solved • Archived
## XULRunner error: Platform version '64.0' is not compatible with minVersion>=63.0.3 maxVersion<=63.0.3
When I started Firefox this morning, I got the error message in the subject line. The only button is "OK", which banishes the error message. The browser doesn't show up. This evening I checked for Windows updates, and the only ones pending were security definition updates and Silverlight (which I don't knowingly use). XULRunner error: "Platform version '64.0' is not compatible with" "minVersion >= 63.0.3" "maxVersion <= 63.0.3". If anyone can suggest anything, thanks. I am running Windows 7.
Asked by FoxyFirey 2 years ago. Answered by cor-el 2 years ago.
• Solved • Archived
## I get "corrupted content error" for a site that I can access on a different device.
This morning I had a Firefox update and now I can no longer access a site (which I know to be active and safe). However I try to access this site, I get "Corrupted Content Error": The site at https://www.harveynorman.com.au/ has experienced a network protocol violation that cannot be repaired. The page you are trying to view cannot be shown because an error in the data transmission was detected. Please contact the website owners to inform them of this problem.
Asked by George_Broxton 2 years ago. Answered by FFus3r 2 years ago.
• Solved • Archived
## Unable to use DSC on the EPFO site. What changes should be done in Mozilla options?
I am trying to use DSC on the EPFO site, but when I tried to digitally sign a PDF file generated on the site, it led me to the Java download site. What changes should I make? Please tell.
Asked by saviconsult2016 2 years ago. Answered by jscher2000 2 years ago.
• Solved • Archived
## Secure Connection Failed: the authenticity of the received data could not be verified
I got the message "Secure Connection Failed. The connection to the server was reset while the page was loading. The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem." when I go to the site https://gearup.ed.gov/. I checked that my server is set up for TLS 1.2. I am using Firefox 62; in my SSL Labs test I got "Firefox 62 / Win 7 R Server closed connection" (https://www.ssllabs.com/ssltest/analyze.html?d=gearup.ed.gov). Does anyone know how to fix that issue? Thanks.
Asked by achantactile 2 years ago. Answered by achantactile 2 years ago.
• Solved • Archived
## Fonts not looking good on Firefox
I am trying the new Gmail in Firefox, but the fonts, both normal and bold, look awful. On the contrary, if I open it in Chrome they look crisp and overall OK. I am attaching a screenshot. Any ideas on how to solve this?
Asked by tzic 2 years ago. Answered by jscher2000 2 years ago.
• Solved • Archived
## How do you do a clean install?
There is a web page on my visited log I cannot get rid of. Deleting the history causes the web page to be accessed again; dismissing it also brings it right back. I can verify with Wireshark that the access is being triggered by deleting the history (clicking "Delete History" for that web page causes a flurry of internet activity on the IP address of the page in question). I tried to uninstall and do a fresh reinstall (with the checkbox for removing old plugins and such checked), but the old web pages still show up on the fresh reinstall. Short of reformatting my hard drive and starting from scratch, or giving up and going to Chrome, how can I get a clean install? Thanks, bt
Asked by briturner 2 years ago. Answered by briturner 2 years ago.
• Solved • Archived
## WidevineCdm-plug-in crashed — no video playing
Since about mid-October 2018 I can't play videos anymore on some sites and get this crash message. YouTube does work, probably because it plays without Widevine? I'm on FF Dev Edition, Mac with El Capitan. Thanks, Pieter
Asked by Piroo 2 years ago. Answered by cor-el 2 years ago.
• Solved • Archived
## Firefox is starting up on launch, Windows 10
Hi. I couldn't help but notice that every time I launch my PC, Firefox brings back the tabs I used the day before. It's annoying because before I even log in I can hear YouTube videos from the night before playing through the login screen. I have no idea what's actually going on, so any tips or ideas would help greatly. PS:
I've already checked the Manager for startup programs and Firefox isn't there.
Asked by addjam 2 years ago. Answered by philipp 2 years ago.
• Solved • Archived
## Firefox 61/64 cannot find shared library libatomic.so.1
I am running Ubuntu 15.10 on a 12-year-old DELL laptop and use Firefox 61. With the latest update, or if I install Firefox 64, it stops working with the error message: /home/smoehler/firefox/firefox: error while loading shared libraries: libatomic.so.1: cannot open shared object file: No such file or directory. Can I fix this and if so, how?
Asked by SabineMoehler 2 years ago. Answered by cor-el 2 years ago.
• Solved • Archived
## Gmail NEXT login button unresponsive, only works in Private Window
The Gmail log-in NEXT button is no longer responsive after installing the latest Firefox 62. I usually have my Gmail account set to always be logged in. I signed out, then attempted to sign back in. I typed in my username and clicked NEXT, but this button is not responding anymore. So now I can't log into Gmail with Firefox 62 on my iMac. I tried to open a private window in Firefox (first time using this) and the NEXT button in Gmail works and allows me to log in (I can see the indicator scroll and everything functions), but in the standard browser this button is unresponsive. I tried changing tracking and pop-up settings and added exceptions for Gmail; nothing fixes the issue. I cleared all my cookies and cleared the cache (now I have to re-authorize or re-validate my computer for all my banking and credit card access for payments, and everything else, which is a big pain in the ass; thanks, Firefox). Why does the NEXT button (or these controls) only work in a private window but not in a standard one? I also noticed recently that some other sites have had some buttons that aren't recognized after clicking. iMac with Sierra 10.12.6, 32 GB RAM; it worked perfectly until I logged out and needed to log back in.
Asked by bestfather 2 years ago. Answered by cor-el 2 years ago.
• Solved • Archived
## Copy/Paste not working
As the title indicates, copy/pasting does not work. I'm on Windows 10 and installed Firefox about two days ago (I've recently switched to it), so the version of FF is up to date. I've looked up a few fixes before posting and they didn't work:
- Tried booting FF in safe mode and it didn't fix the issue.
- It clearly isn't caused by an add-on (and I've never had this issue in my previous browser with nearly the same add-ons installed).
- Other people seem to have had the exact same issue multiple times and it's lasted for years, so it's clearly a problem on FF's end.
Here is the issue described more specifically: copying things from a FF tab into the online version of Word does not work. When I try to, the intended selection:
- disappears a few seconds after I've pasted,
- or it does not appear at all,
- or when it does, it erases the entire sentence it was pasted next to, which you can imagine is absolutely bothersome.
I've also tried to copy/paste statistics from a site called archiveofourown.org onto another FF tab. - What ended up being copied is only the last character of the entire content I was trying to paste, which I think suggests there's some issue with what's contained in the clipboard. I'm mentioning these two websites because I'm guessing that something about them is probably conflicting with the copy/paste feature, however I'm hoping no one will have the gall to tell me it's those websites' faults and that I should just stop using them. When pasting (while having hopefully copied the former selected content- and no, it wasn't the case), it always pastes the content which was last copied into the clipboard prior to that very attempt, which shows that the clipboard's content was not refreshed with the last selected copied one. I'm surprised to see that FF has had such a glaring issue for years and for many other users, and updates have never given a fix for it. I'd like this issue to be fixed properly, because I'd hate to have to switch back to my previous browser. FF sports so many nice features that other browsers don't have, and it'd be a shame to ruin the experience with a bug so basic as the copy/paste feature being broken. Stillet af yumiifmb for for 10 måneder siden Besvaret af user1321319 for for 10 måneder siden • Løst • Arkiveret ## Unable to install Adobe Flash Player Plugin on Firefox 62.0 I have tried various sites and ways to install Adobe Flash Player 31.0.0.108 (Latest Version) and the installation process runs properly , stops firefox and restarts it b… (læs mere) I have tried various sites and ways to install Adobe Flash Player 31.0.0.108 (Latest Version) and the installation process runs properly , stops firefox and restarts it but the plugin is not installed. Stopped all Virus and other checkers but to no avail. I need Flash Player for various things but it will not install. How can I force it to install properly. Stillet af The_Boojum for 2 år siden Besvaret af jscher2000 for 2 år siden • Løst • Arkiveret ## Firefox reports map2.hwcdn.net is a malicious site 18-Dec-19 Wednesday Large transfer upon starting computer to map2.hwcdn.net Index Protocol Local Address Remote Address Local Port Remote Port Local Host Remote Host Se… (læs mere) 18-Dec-19 Wednesday Large transfer upon starting computer to map2.hwcdn.net Index Protocol Local Address Remote Address Local Port Remote Port Local Host Remote Host Service Name Packets Data Size Total Size Data Speed Capture Time Last Packet Time Duration 1 TCP 192.168.1.235 205.185.216.10 49827 80 AtbZ97Pro1 map2.hwcdn.net http 34,578 31,492,272 Bytes 32,876,099 Bytes 206.0 KB/Sec 2018-12-19 12:08:41 PM:127 2018-12-19 12:11:10 PM:399 00:02:29.271 1 205.185.216.10 Succeed USA - Texas HIGHWINDS-AC3 Highwinds Network Group, Inc. 205.185.216.0 205.185.216.255 205.185.216.0/24 Yes Highwinds Network Group, Inc. 2021 McKinney Avenue Suite 1100, Dallas 75201 ip-request@hwng.net abuse@hwng.net +1-469-899-5729 ARIN map2.hwcdn.net Index Protocol Local Address Remote Address Local Port Remote Port Local Host Remote Host Service Name Packets Data Size Total Size Data Speed Capture Time Last Packet Time Duration 2 TCP 192.168.1.235 205.185.216.42 49828 80 AtbZ97Pro1 map2.hwcdn.net http 35,071 31,951,471 Bytes 33,354,948 Bytes 209.2 KB/Sec 2018-12-19 12:08:41 PM:259 2018-12-19 12:11:10 PM:407 00:02:29.147 1 205.185.216.42 Succeed USA - Texas HIGHWINDS-AC3 Highwinds Network Group, Inc. 
205.185.216.0 205.185.216.255 205.185.216.0/24 Yes Highwinds Network Group, Inc. 2021 McKinney Avenue Suite 1100, Dallas 75201 ip-request@hwng.net abuse@hwng.net +1-469-899-5729 ARIN map2.hwcdn.net I tried to connect to the site: The owner of map2.hwcdn.net has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website. Report errors like this to help Mozilla identify and block malicious sites map2.hwcdn.net uses an invalid security certificate. The certificate is only valid for the following names: *.ssl.hwcdn.net, ssl.hwcdn.net Error code: SSL_ERROR_BAD_CERT_DOMAIN https://map2.hwcdn.net/ Unable to communicate securely with peer: requested domain name does not match the server’s certificate. HTTP Strict Transport Security: false HTTP Public Key Pinning: false Certificate chain: -----BEGIN CERTIFICATE----- MIIFNjCCBB6gAwIBAgIRAI8E553QatOOcv1g24+QktswDQYJKoZIhvcNAQELBQAw gZAxCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO BgNVBAcTB1NhbGZvcmQxGjAYBgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMTYwNAYD VQQDEy1DT01PRE8gUlNBIERvbWFpbiBWYWxpZGF0aW9uIFNlY3VyZSBTZXJ2ZXIg Q0EwHhcNMTcxMjE5MDAwMDAwWhcNMTkwMTIwMjM1OTU5WjA9MSEwHwYDVQQLExhE b21haW4gQ29udHJvbCBWYWxpZGF0ZWQxGDAWBgNVBAMMDyouc3NsLmh3Y2RuLm5l dDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANO7IdpUAlxo8DNG0mya fruZUKmHMerH7NMu6/qaAdXQlB+31QyRemofRnzOI8jkEuo5e6nywkx6/xylVnBw j4Wsf10+H2K9SeYJEXFZhXoNfs7eZpib3v12hk/6cGVd0zBsDiA8O06kvtnG73Zy tIageOujtDIKCsTRyd3Hwz1DAExmDxQohsuEVza9cm/g8jPhPYBo9iGQQKvxGDrp hNTqyKMOB8np53/MEuwAzW0CgECbripdYa+9hMBgpWXRYSbmASfXk+5gQHBhdRiJ +UjwDO/j7XL6dA9RV07DCzp4wXmm5bnaA6g2ja4uWa2U6OQNoeb5CLNZP7RcI0Vi Bn0CAwEAAaOCAdswggHXMB8GA1UdIwQYMBaAFJCvajqUWgvYkOoSVnPfQ7Q6KNrn MB0GA1UdDgQWBBRZeTBB7m08gUUuG/NpRXvfDpAs0zAOBgNVHQ8BAf8EBAMCBaAw DAYDVR0TAQH/BAIwADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwTwYD VR0gBEgwRjA6BgsrBgEEAbIxAQICBzArMCkGCCsGAQUFBwIBFh1odHRwczovL3Nl Y3VyZS5jb21vZG8uY29tL0NQUzAIBgZngQwBAgEwVAYDVR0fBE0wSzBJoEegRYZD aHR0cDovL2NybC5jb21vZG9jYS5jb20vQ09NT0RPUlNBRG9tYWluVmFsaWRhdGlv blNlY3VyZVNlcnZlckNBLmNybDCBhQYIKwYBBQUHAQEEeTB3ME8GCCsGAQUFBzAC hkNodHRwOi8vY3J0LmNvbW9kb2NhLmNvbS9DT01PRE9SU0FEb21haW5WYWxpZGF0 aW9uU2VjdXJlU2VydmVyQ0EuY3J0MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5j b21vZG9jYS5jb20wKQYDVR0RBCIwIIIPKi5zc2wuaHdjZG4ubmV0gg1zc2wuaHdj ZG4ubmV0MA0GCSqGSIb3DQEBCwUAA4IBAQAgejvInj4XYgcsiTBAIZS/tsbgXGyq WZlbqsk8aRPlcW/j71sRpcwc/7GCzuv6+v/0+pI8medXIPU+hG/lP7gw5DOJdOAS fsRZDG5iXS7bqxrkUi4hnW7TJ3rt/LIU4O9cteSLS+tXvaGw14k7tk5jCRO5FtoG gLEXF+2X1R3DPtVNcjrNRnKDNQvOGd8jv6K62w/2ZbGsVK896lU50j2VmTWwVh74 9U2+1Hgt2KFDrTRTvsTn7Y8cg9g5EIHEVO30dFEp//cuCXM4mQR0SOM34OrNUQTq XAmjvltfvcJBxWtdOLUatKt75iPJuQv13VHXxgvsxyJgNnnwhvCh0YDD -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIGCDCCA/CgAwIBAgIQKy5u6tl1NmwUim7bo3yMBzANBgkqhkiG9w0BAQwFADCB hTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNV BAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTQwMjEy MDAwMDAwWhcNMjkwMjExMjM1OTU5WjCBkDELMAkGA1UEBhMCR0IxGzAZBgNVBAgT EkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMR Q09NT0RPIENBIExpbWl0ZWQxNjA0BgNVBAMTLUNPTU9ETyBSU0EgRG9tYWluIFZh bGlkYXRpb24gU2VjdXJlIFNlcnZlciBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAI7CAhnhoFmk6zg1jSz9AdDTScBkxwtiBUUWOqigwAwCfx3M28Sh bXcDow+G+eMGnD4LgYqbSRutA776S9uMIO3Vzl5ljj4Nr0zCsLdFXlIvNN5IJGS0 Qa4Al/e+Z96e0HqnU4A7fK31llVvl0cKfIWLIpeNs4TgllfQcBhglo/uLQeTnaG6 ytHNe+nEKpooIZFNb5JPJaXyejXdJtxGpdCsWTWM/06RQ1A/WZMebFEh7lgUq/51 
UHg+TLAchhP6a5i84DuUHoVS3AOTJBhuyydRReZw3iVDpA3hSqXttn7IzW3uLh0n c13cRTCAquOyQQuvvUSH2rnlG51/ruWFgqUCAwEAAaOCAWUwggFhMB8GA1UdIwQY MBaAFLuvfgI9+qbxPISOre44mOzZMjLUMB0GA1UdDgQWBBSQr2o6lFoL2JDqElZz 30O0Oija5zAOBgNVHQ8BAf8EBAMCAYYwEgYDVR0TAQH/BAgwBgEB/wIBADAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwGwYDVR0gBBQwEjAGBgRVHSAAMAgG BmeBDAECATBMBgNVHR8ERTBDMEGgP6A9hjtodHRwOi8vY3JsLmNvbW9kb2NhLmNv bS9DT01PRE9SU0FDZXJ0aWZpY2F0aW9uQXV0aG9yaXR5LmNybDBxBggrBgEFBQcB AQRlMGMwOwYIKwYBBQUHMAKGL2h0dHA6Ly9jcnQuY29tb2RvY2EuY29tL0NPTU9E T1JTQUFkZFRydXN0Q0EuY3J0MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5jb21v ZG9jYS5jb20wDQYJKoZIhvcNAQEMBQADggIBAE4rdk+SHGI2ibp3wScF9BzWRJ2p mj6q1WZmAT7qSeaiNbz69t2Vjpk1mA42GHWx3d1Qcnyu3HeIzg/3kCDKo2cuH1Z/ e+FE6kKVxF0NAVBGFfKBiVlsit2M8RKhjTpCipj4SzR7JzsItG8kO3KdY3RYPBps P0/HEZrIqPW1N+8QRcZs2eBelSaz662jue5/DJpmNXMyYE7l3YphLG5SEXdoltMY dVEVABt0iN3hxzgEQyjpFv3ZBdRdRydg1vs4O2xyopT4Qhrf7W8GjEXCBgCq5Ojc 2bXhc3js9iPc0d1sjhqPpepUfJa3w/5Vjo1JXvxku88+vZbrac2/4EjxYoIQ5QxG V/Iz2tDIY+3GH5QFlkoakdH368+PUq4NCNk+qKBR6cGHdNXJ93SrLlP7u3r7l+L4 HyaPs9Kg4DdbKDsx5Q5XLVq4rXmsXiBmGqW5prU5wfWYQ//u+aen/e7KJD2AFsQX j4rBYKEMrltDR5FL1ZoXX/nUh8HCjLfn4g8wGTeGrODcQgPmlKidrv0PJFGUzpII 0fxQ8ANAe4hZ7Q7drNJ3gjTcBpUC2JD5Leo31Rpg0Gcg19hCC0Wvgmje3WYkN5Ap lBlGGSW4gNfL1IYoakRwJiNiqZ+Gb7+6kHDSVneFeO/qJakXzlByjAA6quPbYzSf +AZxAeKCINT+b72x -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIFdDCCBFygAwIBAgIQJ2buVutJ846r13Ci/ITeIjANBgkqhkiG9w0BAQwFADBv MQswCQYDVQQGEwJTRTEUMBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFk ZFRydXN0IEV4dGVybmFsIFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBF eHRlcm5hbCBDQSBSb290MB4XDTAwMDUzMDEwNDgzOFoXDTIwMDUzMDEwNDgzOFow gYUxCzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAO BgNVBAcTB1NhbGZvcmQxGjAYBgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMSswKQYD VQQDEyJDT01PRE8gUlNBIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIICIjANBgkq hkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAkehUktIKVrGsDSTdxc9EZ3SZKzejfSNw AHG8U9/E+ioSj0t/EFa9n3Byt2F/yUsPF6c947AEYe7/EZfH9IY+Cvo+XPmT5jR6 2RRr55yzhaCCenavcZDX7P0N+pxs+t+wgvQUfvm+xKYvT3+Zf7X8Z0NyvQwA1onr ayzT7Y+YHBSrfuXjbvzYqOSSJNpDa2K4Vf3qwbxstovzDo2a5JtsaZn4eEgwRdWt 4Q08RWD8MpZRJ7xnw8outmvqRsfHIKCxH2XeSAi6pE6p8oNGN4Tr6MyBSENnTnIq m1y9TBsoilwie7SrmNnu4FGDwwlGTm0+mfqVF9p8M1dBPI1R7Qu2XK8sYxrfV8g/ vOldxJuvRZnio1oktLqpVj3Pb6r/SVi+8Kj/9Lit6Tf7urj0Czr56ENCHonYhMsT 8dm74YlguIwoVqwUHZwK53Hrzw7dPamWoUi9PPevtQ0iTMARgexWO/bTouJbt7IE IlKVgJNp6I5MZfGRAy1wdALqi2cVKWlSArvX31BqVUa/oKMoYX9w0MOiqiwhqkfO KJwGRXa/ghgntNWutMtQ5mv0TIZxMOmm3xaG4Nj/QN370EKIf6MzOi5cHkERgWPO GHFrK+ymircxXDpqR+DDeVnWIBqv8mqYqnK8V0rSS527EPywTEHl7R09XiidnMy/ s1Hap0flhFMCAwEAAaOB9DCB8TAfBgNVHSMEGDAWgBStvZh6NLQm9/rEJlTvA73g JMtUGjAdBgNVHQ4EFgQUu69+Aj36pvE8hI6t7jiY7NkyMtQwDgYDVR0PAQH/BAQD AgGGMA8GA1UdEwEB/wQFMAMBAf8wEQYDVR0gBAowCDAGBgRVHSAAMEQGA1UdHwQ9 MDswOaA3oDWGM2h0dHA6Ly9jcmwudXNlcnRydXN0LmNvbS9BZGRUcnVzdEV4dGVy bmFsQ0FSb290LmNybDA1BggrBgEFBQcBAQQpMCcwJQYIKwYBBQUHMAGGGWh0dHA6 Ly9vY3NwLnVzZXJ0cnVzdC5jb20wDQYJKoZIhvcNAQEMBQADggEBAGS/g/FfmoXQ zbihKVcN6Fr30ek+8nYEbvFScLsePP9NDXRqzIGCJdPDoCpdTPW6i6FtxFQJdcfj Jw5dhHk3QBN39bSsHNA7qxcS1u80GH4r6XnTq1dFDK8o+tDb5VCViLvfhVdpfZLY Uspzgb8c8+a4bmYRBbMelC1/kZWSWfFMzqORcUx8Rww7Cxn2obFshj5cqsQugsv5 B5a6SE2Q8pTIqXOi6wZ7I53eovNNVZ96YUWYGGjHXkBrI/V5eu+MtWuLt29G9Hvx PUsE2JOAWVrgQSQdso8VYFhH2+9uRv0V9dlfmrPb2LjkQLPNlzmuhbsdjrzch5vR pu/xO28QOG8= -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIENjCCAx6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBvMQswCQYDVQQGEwJTRTEU MBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFkZFRydXN0IEV4dGVybmFs IFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBFeHRlcm5hbCBDQSBSb290 
MB4XDTAwMDUzMDEwNDgzOFoXDTIwMDUzMDEwNDgzOFowbzELMAkGA1UEBhMCU0Ux FDASBgNVBAoTC0FkZFRydXN0IEFCMSYwJAYDVQQLEx1BZGRUcnVzdCBFeHRlcm5h bCBUVFAgTmV0d29yazEiMCAGA1UEAxMZQWRkVHJ1c3QgRXh0ZXJuYWwgQ0EgUm9v dDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALf3GjPm8gAELTngTlvt H7xsD821+iO2zt6bETOXpClMfZOfvUq8k+0DGuOPz+VtUFrWlymUWoCwSXrbLpX9 uMq/NzgtHj6RQa1wVsfwTz/oMp50ysiQVOnGXw94nZpAPA6sYapeFI+eh6FqUNzX mk6vBbOmcZSccbNQYArHE504B4YCqOmoaSYYkKtMsE8jqzpPhNjfzp/haW+710LX a0Tkx63ubUFfclpxCDezeWWkWaCUN/cALw3CknLa0Dhy2xSoRcRdKn23tNbE7qzN E0S3ySvdQwAl+mG5aWpYIxG3pzOPVnVZ9c0p10a3CitlttNCbxWyuHv77+ldU9U0 WicCAwEAAaOB3DCB2TAdBgNVHQ4EFgQUrb2YejS0Jvf6xCZU7wO94CTLVBowCwYD VR0PBAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wgZkGA1UdIwSBkTCBjoAUrb2YejS0 Jvf6xCZU7wO94CTLVBqhc6RxMG8xCzAJBgNVBAYTAlNFMRQwEgYDVQQKEwtBZGRU cnVzdCBBQjEmMCQGA1UECxMdQWRkVHJ1c3QgRXh0ZXJuYWwgVFRQIE5ldHdvcmsx IjAgBgNVBAMTGUFkZFRydXN0IEV4dGVybmFsIENBIFJvb3SCAQEwDQYJKoZIhvcN AQEFBQADggEBALCb4IUlwtYj4g+WBpKdQZic2YR5gdkeWxQHIzZlj7DYd7usQWxH YINRsPkyPef89iYTx4AWpb9a/IfPeHmJIZriTAcKhjW88t5RxNKWt9x+Tu5w/Rw5 6wwCURQtjr0W4MHfRnXnJK3s9EK0hZNwEGe6nQY1ShjTK3rMUUKhemPR5ruhxSvC Nr4TDea9Y355e6cJDUCrat2PisP29owaQgVR1EX1n6diIWgVIEM8med8vSTYqZEX c4g/VhsxOBi0cQ+azcgOno4uG+GMmIPLHzHxREzGBHNJdmAPx/i9F4BrLunMTA5a mnkPIAou1Z5jJh5VkpTYghdae9C8x49OhgQ= -----END CERTIFICATE----- Stillet af subdla for 2 år siden Besvaret af FredMcD for 2 år siden • Løst • Arkiveret ## Websockets on localhost stopped working in 66.0 After updating to Firefox 66.0 on MacOS 10.13.6 today, websockets on my localhost stopped working. When using Create React App (version of "react-scripts": "2.1.8" => … (læs mere) After updating to Firefox 66.0 on MacOS 10.13.6 today, websockets on my localhost stopped working. When using Create React App (version of "react-scripts": "2.1.8" => "webpack-dev-server": "3.1.14" => "sockjs": "0.3.19"), hot reload is broken and I get following error in the console: The connection to ws://localhost:3000/sockjs-node/743/khw0phgj/websocket was interrupted while the page was loading. When I tried https://www.websocket.org/echo.html, it works fine and no problems with hot reload in Chrome 72.0. How can I debug and fix this problem please? (when I googled the error message, I got a lot of outdated advice from previous versions of Firefox) Stillet af Peter Hozák for 1 år siden Besvaret af Peter Hozák for 1 år siden • Løst • Arkiveret ## No response when clicking the 9-square grid Google app button. Besides the Google app button, the buttons next to it like the profiles picture and the notification button acts the same. One day all those buttons just stopped working.… (læs mere) Besides the Google app button, the buttons next to it like the profiles picture and the notification button acts the same. One day all those buttons just stopped working. I thought it was the version of Firefox but I updated it, downgraded it, tried several versions (pre and post quamtun), and all were complete clean setups, uninstalling and removing all the cache and profiles and whatnot. Nothing helped. Google Keep and Gmail also acts strange. On top of the mentioned problem on the respective page, Keep is completely blank and many of the buttons on Gmail are missing. Despite them being invisible, some can still be clicked, others just don't respond at all. It's fine on the same version of Firefox on another machine. It's fine on all other browsers on the same machine. So it's only the combination of this computer and Firefox. 
I also tried changing proxy settings, safe mode, disabling hardware acceleration, disabling all the add-ons and plugins and extensions of Firefox and windows itself. Appreciate if anyone have any idea. Stillet af jusburself for 2 år siden Besvaret af jusburself for 2 år siden • Løst • Arkiveret I already did the following steps: Type ‘about:config’ in the URL bar, and hit enter (you may have to click though a, ‘I’ll be careful, I promise!’ warning) In the search… (læs mere) I already did the following steps: Type ‘about:config’ in the URL bar, and hit enter (you may have to click though a, ‘I’ll be careful, I promise!’ warning) In the search bar type ‘network.http.sendRefererHeader’ Double-click on the ‘network.http.sendRefererHeader’ preference when it comes up Enter an integer value of 0, 1, or 2 or in the dialog box, then hit OK and close the ‘about:config’ tab but this error keeps coming up, what do I do? Stillet af Lars.Frusch for for 10 måneder siden Besvaret af cor-el for for 10 måneder siden • Løst • Arkiveret ## i use the firefox 22.0 vesion, but latest version are not supported the indian ESIC site, the new version are not supported edit function. ashishtripathi500@gma i used the omd firefox version22.0. i want latest version but the latest version are not abale to edit function of employee personal details of ESIC. ESIC is insurance w… (læs mere) i used the omd firefox version22.0. i want latest version but the latest version are not abale to edit function of employee personal details of ESIC. ESIC is insurance website of Employee. this is goverment web and this is run only firefox. so i request to firefox team that the make able editable function on latest version of Firefox www. esic.in - fully supported on old version (22.0) with edit function. but esic run smoothly on new version but edit function not working on new Firefox version. we want pls make edit function on new version. www.esic.in (employee personal details, bank details, discrepancy details, family details) thanks and regards Ashish kumar Sr HR Executive Vishnusaran and Company kanpur uttar pradesh edited out phone# from public and search/spam bots view as this is a public forum. Stillet af ashishtripathi500 for 2 år siden Besvaret af taxaduq for 2 år siden • Løst • Arkiveret ## Copy and paste does not work in Firefox I am having the exact same problem listed here: https://support.mozilla.org/en-US/questions/1224880 But none of the solutions there work. Of course since I did not alrea… (læs mere) I am having the exact same problem listed here: But none of the solutions there work. Of course since I did not already have an account here, I was forced to create this duplicate question. I was using Firefox fine all day, then suddenly Copy/Paste stopped working. Inside Firefox, the option for Copy still appears and can be selected, but when you Paste within Firefox or anywhere else on the system, things copied in Firefox are not available. Copying outside of Firefox works just fine. Using Firefox 62.0 (64-bit) on macOS 10.12.6 Update: And as soon as I posted this, now Copy/Paste is working, but only within Firefox; Firefox is now acting as if it has its own independent clipboard, accessible only within Firefox. This is happening everywhere within Firefox, regardless of website. Stillet af 4rs4 for 2 år siden Besvaret af 4rs4 for 2 år siden
{}
• Universal Sound Diffusion in a Strongly Interacting Fermi Gas (12/4/2020) Science 370, 1222-1226 (2020) 10.1126/science.aaz5756 arXiv:1909.02555 BBC Radio 4: Radio podcasts: WGBH Radio Transport of strongly interacting fermions governs modern materials — from the high-Tc cuprates to bilayer graphene –, but also nuclear fission, the merging of neutron stars and the expansion of the early universe. Here we observe a universal quantum limit of diffusivity in a homogeneous, strongly interacting Fermi gas of atoms by studying sound propagation and its attenuation via the coupled transport of momentum and heat. In the normal state, the sound diffusivity D monotonically decreases upon lowering the temperature T, in contrast to the diverging behavior of weakly interacting Fermi liquids. As the superfluid transition temperature is crossed, D attains a universal value set by the ratio of Planck’s constant h and the particle mass m. This finding of quantum limited sound diffusivity informs theories of fermion transport, with relevance for hydrodynamic flow of electrons, neutrons and quarks. • Congratulations to Dr. Richard Fletcher on his Assistant Professorship at MIT (3/29/2020) We are all excited for Rich to be starting his own research group at MIT! • Spectral response and contact of the unitary Fermi gas (2/26/2019) Biswaroop Mukherjee, Parth B. Patel, Zhenjie Yan, Richard J. Fletcher, Julian Struck, Martin W. Zwierlein Spectral response and contact of the unitary Fermi gas arXiv:1902.08548 We measure radiofrequency (rf) spectra of the homogeneous unitary Fermi gas at temperatures ranging from the Boltzmann regime through quantum degeneracy and across the superfluid transition. For all temperatures, a single spectral peak is observed. Its position smoothly evolves from the bare atomic resonance in the Boltzmann regime to a frequency corresponding to nearly one Fermi energy at the lowest temperatures. At high temperatures, the peak width reflects the scattering rate of the atoms, while at low temperatures, the width is set by the size of fermion pairs. Above the superfluid transition, and approaching the quantum critical regime, the width increases linearly with temperature, indicating non-Fermi-liquid behavior. From the wings of the rf spectra, we obtain the contact, quantifying the strength of short-range pair correlations. We find that the contact rapidly increases as the gas is cooled below the superfluid transition. • Boiling a Unitary Fermi Liquid (11/1/2018) Zhenjie Yan, Parth B. Patel, Biswaroop Mukherjee, Richard J. Fletcher, Julian Struck, Martin W. Zwierlein Phys. Rev. Lett. 122, 093401 (2019) See Viewpoint by Pietro MassignanFrom Quantum Quasiparticles to a Classical Gas arXiv:1811.00481 (2018) We study the thermal evolution of a highly spin-imbalanced, homogeneous Fermi gas with unitarity limited interactions, from a Fermi liquid of polarons at low temperatures to a classical Boltzmann gas at high temperatures. Radio-frequency spectroscopy gives access to the energy, lifetime and the short-range correlations of Fermi polarons at low temperatures T. In this regime we observe a characteristic $\propto T^2$ dependence of the spectral width, corresponding to the quasiparticle decay rate expected for a Fermi liquid. At high T the spectral width decreases again towards the scattering rate of the classical, unitary Boltzmann gas, $\propto T^{1/2}$. 
In the transition region between the quantum degenerate and classical regime, the spectral width attains its maximum, on the scale of the Fermi energy, indicating the breakdown of a quasiparticle description. Density measurements in a harmonic trap directly reveal the majority dressing cloud surrounding the minority spins, and yield the compressibility along with the effective mass of Fermi polarons. • Congratulations to Dr. Julian Struck for receiving the CNRS Junior Research Chair at ENS Paris (8/1/2017) All the best of luck to Julian! • Homogeneous Atomic Fermi Gases (10/31/2016) Biswaroop Mukherjee, Zhenjie Yan, Parth B. Patel, Zoran Hadzibabic, Tarik Yefsah, Julian Struck, Martin W. Zwierlein We report on the creation of homogeneous Fermi gases of ultracold atoms in a uniform potential. In the momentum distribution of a spin-polarized gas, we observe the emergence of the Fermi surface and the saturated occupation of one particle per momentum state. This directly confirms Pauli blocking in momentum space. For the spin-balanced unitary Fermi gas, we observe spatially uniform pair condensates. For thermodynamic measurements, we introduce a hybrid potential that is harmonic in one dimension and uniform in the other two. The spatially resolved compressibility reveals the superfluid transition in a spin-balanced Fermi gas, saturation in a fully polarized Fermi gas, and strong attraction in the polaronic regime of a partially polarized Fermi gas. • Congratulations to Dr. Tarik Yefsah for starting his position as CNRS Permanent Researcher (8/1/2016) All the best to Tarik for a continued string of wonderful discoveries. Check out his latest news: http://www.lkb.upmc.fr/ultracoldfermigases/yefsah/ • Cascade of Solitonic Excitations in a Superfluid Fermi Gas (1/27/2016) Mark J.-H. Ku, Biswaroop Mukherjee, Tarik Yefsah, and Martin W. Zwierlein We follow the time evolution of a superfluid Fermi gas of resonantly interacting 6Li atoms after a phase imprint. Via tomographic imaging, we observe the formation of a planar dark soliton, its subsequent snaking, and its decay into a vortex ring, which in turn breaks to finally leave behind a single solitonic vortex. In intermediate stages we find evidence for an exotic structure resembling the Φ-soliton, a combination of a vortex ring and a vortex line. Direct imaging of the nodal surface reveals its undulation dynamics and its decay via the puncture of the initial soliton plane. The observed evolution of the nodal surface represents dynamics beyond superfluid hydrodynamics, calling for a microscopic description of unitary fermionic superfluids out of equilibrium.
{}
# Confused by how to derive the derivative of $f(\boldsymbol{x})=g(\boldsymbol{y})$ I was watching an online tutorial and saw this derivation. It seems the the author took the derivative with respect to y on left side and to x on right side. I thought dx should always be in the denominator and should on both side of the equation. Is it partial derivative? Or maybe my misunderstanding of the notation? Could anyone explain how this works? FYI the link of the tutorial is https://www.youtube.com/watch?v=aXBFKKh54Es&list=PLwJRxp3blEvZyQBTTOMFRP_TDaSdly3gU&index=98, the differentials was taken at around 2'20" Much appreciated! Happy New Year. • The second equation is rather the equality of the differentials (instead of the derivatives) of the functions in the first equation. – Pp.. Jan 1 '15 at 3:44 • Maybe a notational distinction that people use is $\frac{\partial f}{\partial x}(x)$ (Leibnitz notation for the derivative) versus $\text{d}f=\frac{\partial f}{\partial x}(x)\text{d}x$ for the differential. – Pp.. Jan 1 '15 at 3:46 • Perhaps you could also add a link to the online tutorial. (Both to add more context to your question and the link might be interesting for some people who stumble upon your post.) – Martin Sleziak Jan 1 '15 at 8:54 • The author did not differentiate the LHS wrt $y$ and RHS wrt $x$. – whacka Jan 1 '15 at 9:54 From your other question I can say that it's a misunderstanding of differentiation and what is happening here. One simple way to understand it is as follows: Take differentiable functions $x,y$ with open domain $D$ such that $\ln(y(t)) = a + b \ln(x(t))$ for any $t \in D$. Then differentiating gives $\frac{y'(t)}{y(t)} = b \frac{x'(t)}{x(t)}$ for any $t \in D$. If you rewrite this in Leibniz's suggestive notation you get: $\frac{dy(t)}{y(t)\ dt} = b \frac{dx(t)}{x(t)\ dt}$ And if you treat $dt$ as a sufficiently small non-zero quantity and multiply both sides by it, you get: $\frac{dy(t)}{y(t)} = b \frac{dx(t)}{x(t)}$ Note that this equation must be used with the understanding that the differentials are taken in the context of a small change in $t$, since we have now omitted $dt$. Note that this is not equivalent to a small change in $x(t),y(t)$! For example if $x(t) = \sin(t)$ and $y(t) = \sin(2t)$ for any $t \in \mathbb{R}$, the curve $(x,y)$ intersects itself at $(0,0)$ and has two gradients there, one when $t = 0$ and the other when $t = \pi$. And historically we have used bare variables to represent changing quantities so if we drop the parameter $t$ we get: $\frac{dy}{y} = b \frac{dx}{x}$. Note that this now makes no sense unless both differentials are taken in the same context, which now means that not only are they taken with respect to a small change in $t$ (which is now missing from the equation), we have to use this equation with the understanding that the values of $x,y$ are tied to each other, represented earlier on explicitly by the parameter $t$. In many cases in physics, however, $t$ is $x$ itself, which is expressed by the mathematically not quite right "$y$ is a function of $x$". • Thanks a lot. I posted the link of the video and the differentiation is between 2' to 2'20". Could you confirm that your explanation is consistent with the author's calculation? I just wanted to be sure about this. 
– nouveau Jan 1 '15 at 18:13 • Well I don't have any background knowledge in that topic but I suppose that the author assumed that $y$ is dependent upon the '$x_k$'s and $x_1$ is independent of the other '$x_k$'s and he is only concerned about what happens to $y$ when $x_1$ changes. In that case, you can either use the multi-variable derivative to understand the whole thing, or you can just take my $t$ to be $x$ and treat the other '$x_k$'s as constants, which gives you the result. – user21820 Jan 2 '15 at 10:32 It's curious that you should call this a "derivation" because this is more accurate than saying that the derivative in the usual sense was taken. The construction used was the exterior derivative, or differential. This satisfies $$df(x,y)=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy$$ for a function depending only on $x$ and/or $y$, so if $f(x,y)=\ln x$, then $$df(x,y)=\frac{1}{x}dx+0dy=\frac{dx}{x}$$ and if $g(x,y)=\ln y$ then $$dg(x,y)=0dx+\frac{1}{y}dy=\frac{dy}{y}$$ The reason it is called a derivation is for any functions $p,q$ we have $$d(pq)=qdp+pdq$$ • Thanks for replying. But I couldn't understand your post well. the df(x,y)=∂f∂xdx+∂f∂ydy is definitely the formula I should look into but i couldn't find it from the wiki page. is there a name for this formula? – nouveau Jan 1 '15 at 7:56 • @nouveau: It's often called the total derivative. By the way you can see how the total derivative works via exactly the same understanding as in my answer. – user21820 Jan 1 '15 at 9:51 • @nouveau I linked too quickly, here is a better link: en.wikipedia.org/wiki/Differential_of_a_function – Matt Samuel Jan 1 '15 at 10:30 Notice $$d( \ln y ) = \frac{1}{y} dy$$ $$d( \alpha + \beta_1 \ln x_1 ) = \beta_1 \frac{1}{x_1} dx_1$$ Since $\alpha, \beta_1 \in \mathbb{R}$ • Thanks for replying. I am still confused. say there is f(x)=g(y), then we can get d[f(x)]/dx=d[g(y)]/dy? – nouveau Jan 1 '15 at 7:15 • The above derivatives are with respect to x and y respectively. Without knowing the relationship between x and y, how can the derivatives be equal? Thanks. – nouveau Jan 1 '15 at 7:24
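As a quick sanity check on the point the answers make, namely that both differentials are taken with respect to the same underlying parameter $t$, here is a small symbolic computation. It assumes SymPy is available; the symbol names are my own and are not from the thread.

```python
import sympy as sp

t = sp.Symbol('t')
a, b = sp.symbols('a b', positive=True)
x = sp.Function('x', positive=True)(t)   # x depends on the common parameter t
y = sp.exp(a) * x**b                     # equivalent to ln y = a + b*ln x

lhs = sp.diff(sp.log(y), t)              # d(ln y)/dt  ->  y'/y
rhs = b * sp.diff(x, t) / x              # b * x'/x
print(sp.simplify(lhs - rhs))            # expected output: 0, i.e. dy/y = b*dx/x
```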
{}
# Logarithm Urgent
1. Solve, correct to 3 significant figures, the equation $e^x + e^{2x} = e^{3x}$.
2. $\begin{gathered} e^x \left( {e^{2x} - e^x - 1} \right) = 0 \hfill \\ z \equiv e^x \hfill \\ \Rightarrow z\left( {z^2 - z - 1} \right) = 0 \hfill \\ \end{gathered}$ Now solve the quadratic equation and back-substitute for $x$.
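Carrying the hint in post 2 through to a numerical answer (an added completion; only the positive root is admissible, since $z = e^x > 0$):
$$z^2 - z - 1 = 0 \;\Rightarrow\; z = \frac{1 + \sqrt{5}}{2} \approx 1.6180, \qquad x = \ln\!\left(\frac{1 + \sqrt{5}}{2}\right) \approx 0.481 \text{ (3 s.f.)}$$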
{}
# Mechanism to get 10 pseudo-random positive numbers up to a maximum from a seed (32-byte hash)?
I have learned some new crypto vocabulary, so I rewrote the question: I have a fairly random 32-byte hash which I want to use as a seed for generating 10 pseudo-random positive numbers up to some maximum number, and the generation must be reproducible. I would prefer the 10 numbers to differ from each other. Here is an example: hash=Pkq5skE7tp=j#{y"+R$6~mg!z"4g/Utw and I need to reproducibly generate count=10 pseudo-random positive numbers up to, say, max=500. Approaches that came to my mind:
1) I can take the first 8 bytes Pkq5skE7, cast them to a 64-bit integer, take the result modulo 500 to get the first number, then take another 8 bytes starting one byte to the right, kq5skE7t, cast and reduce them the same way to get the second number, and so on for all 10 numbers. Would those numbers look reasonably random, or would they show some pattern? Having two identical numbers is not desirable.
2) I can take the first 8 bytes Pkq5skE7, cast them to a 64-bit integer and take it modulo 500 to get the first number, then derive the other numbers by repeatedly adding 500/10 = 50. So if the cast gave 475, the other numbers would be 25, 75, ..., 425. I know there is randomness only in the first number, but that would be good enough if method 1) had some inconvenient pattern problem (like numbers clustering close to each other). Compared to 1), the numbers would be unique, which is an advantage.
3) If methods 1) and 2) are not very good, I can again cast some bytes to a 64-bit integer, apply the modulo to get one number, and take the other 9 numbers from the following positions. But that loses the random-like quality, and I would prefer the 10 numbers to look random and not obviously related to each other; the same objection applies to 2), although it is preferable to have the numbers spread over the whole range.
4) Some other approach?
Basically I prefer 1), but maybe there is something better that cannot yield the same number twice; maybe the numbers from 1) have a lot in common, because it is just a one-byte shift, I don't know. I can say it is used for something like a lottery draw. It is preferable that the numbers have no relationship to each other, but I am not sure that is possible, and maybe it is OK for the numbers to merely look pseudo-random, with only the first one being pseudo-random and the others having a non-obvious relationship to it. Suggestions?
• To clarify, is Pkq5skE7tp=j#{y"+R$6~mg!z"4g/Utw a seed, from which you then wish to generate 10 pseudo-random numbers within the range 0 - 500? – Paul Uszak Jun 21 '18 at 20:21
• Yes, every time there will be a different hash/seed and with the same mechanism I want to generate 10 numbers from the range. I want to make the question clearer, but I don't have time for it now. – Salda Jun 21 '18 at 21:07
• 2nd clarification. Is the seed secret or known publicly? – Paul Uszak Jun 22 '18 at 22:21
The best way to do this would probably be through an iterative hash, where you use that seed as an input, and from each iteration you extract a certain number of bits for conversion to a number. A 32-bit extraction allows numbers up to around 4 billion. I would do it as follows:
    A = Hash(Seed)
    Extract1 = lower 32 bits of A
    Number1 = Extract1 mod MaxVal
    B = Hash(A)
    Extract2 = lower 32 bits of B
    Number2 = Extract2 mod MaxVal
    ...
and so on.
This will prevent the numbers from having any special relationship with each other, other than the fact that they use the same seed. Your suggested example means that the next number directly depends on half of the previous number plus an additional 8 bits of unknown information; for a lottery that would be bad. This method makes the output depend pseudorandomly on an additional 224 bits of information (assuming the hash is 256 bits). The iterative sequence is deterministic and will always produce the same values from a given seed, and it can generate both enormous numbers and enormous quantities of numbers if needed. If you want to be able to publish proof of the seed PRIOR to the lottery draw, you can post a digital signature of the seed; that way players are assured you are not going to manipulate the seed after players are selected in order to generate winning values for a specific player. Just make sure the signature uses a different hash function.
• I guess this is the best answer and you understood my problem well. In the end we take 10 hashes to generate 10 numbers, so it's even simpler for me. I would tell you more details and maybe reach an even better solution, but the details are the company's know-how until release, so I can't leak them. Thanks! – Salda Jun 22 '18 at 11:27
• @Salda if you want a number between 1 and 500, and you dont want to use mod 500, you can take the 32-bit value, multiply by 125, then divide by 1073741824, take the integer component, and add 1 [((x*125)\1073741824)+1]. This method will have no bias – Richie Frame Jun 23 '18 at 0:07
• @Salda and if you want a number between 1 and 512, or some other power of 2, simply select the required amount of least significant bits (9 in this case) and add 1 – Richie Frame Jun 23 '18 at 0:28
If you're always getting the same numbers, then they are not "random" in the usual sense. If you want pseudorandom numbers and your hash is cryptographically secure, then simply taking the hash of ten different numbers, such as 0-9, will suffice. You can use a programming language's big-integer construct to implement a practically uniform random sampling algorithm that requires relatively little work. The steps are as follows:
1. Generate a random 32 bytes. (256 bits of output from a secure hash function qualifies.)
2. Convert the 32-byte array to a big integer n.
   • Whether a library uses big-endian or little-endian order does not matter, because (uniform) random bytes in reverse order are still (uniform) random.
   • But if you want the result to be platform independent and implementation independent, just pick one and require every implementation to use that byte-order convention.
   • Make sure the big integer is positive. Taking the absolute value works; so does setting the most significant bit of the most significant byte to zero, or a bitwise AND with the positive quantity $2^{8 \times 32} - 1$.
3. Repeat ten times:
   1. Divide n by 500.
   2. Output the remainder of that division.
   3. Set the new value of n to the quotient.
The remainder of a non-negative integer divided by a positive integer $m$ is an integer between zero (inclusive) and $m$ (exclusive), i.e. in the range $[0, m)$. If you have a uniform random variable on a range $[0, k \cdot m)$ whose size is an exact integer multiple $k$ of $m$, then this method is unbiased.
The simplest way to get a uniform sample in range $[0, x)$ from a larger uniform distribution on the range $[0, y)$, $y > x$, is to generate a value $v$, $0 \le v < y$, and reject $v$ (repeat the loop) if $v \ge x$ or accept $v$ (exit the loop and return $v$) if $v < x$. This repeats as long as necesssary and requires many iterations if $y$ is much greater than $x$. Most PRNGs can provide raw uniforms that fit in a 32 bit or 64 bit number. Only powers of two divide such numbers. (Note that another naive method v = (int)(randomDouble() * x); is also biased.) For non-powers of two $x$ you need a rejection algorithm. The slightly more complicated "standard" method is to generate a $v$ that whose upper (exclusive) bound $x'$ is the largest possible multiple of $x$ that fits in a signed or unsigned 32 bit (not byte) int. Then after using the above simple rejection algorithm returning $v$ mod $x'$. However if you use all 32 bytes (possibiy one bit less depedning on how you make the big-int positive) then dividing by a "small" number like $500^{10}$ is safe. I say you can skip the rejection algorithm because your bias is so small. (But it won't be safe in general.) The number of possible 32 byte values is $2^{32 \times 8}$. Dividing by 500 ten times is like dividing by $500^{10}$ once. The difference between $2^{32 \times 8}$ and the largest multiple of $500^{10}$ not exceeding $2^{32 \times 8}$ is $2^{32 \times 8} \text{ mod } 500^{10} = 915601957584007913129639936$. That accounts for $915601957584007913129639936 / 2^{8 \times 32} \approx 7.9 \times 10^{-51} \approx 2^{-166}$. Cryptographers agree that statistical bias smaller than $2^{-128}$ isn't practically detectable. (And probably never will be). That's why it is safe to ignore the naive-mod-method bias even though it isn't safe to ignore for a smaller number of bits. Note: As a sanity check on that $2^{-166}$ number I perform the following calculation: • Number of bits in $500^{10} = 10 * log_2 500 \approx 89.7$ • $2^{90}$ divides $2^{256}$ about $256 - 90 = 166$ times. • The proportion of values the over-represented as the result of the naive-mod method is approximately one in $2^{166}$. If you have an easy to use big integer library then this isn't too difficult of an algorithm. It's not a fast one though. It is actually better to use the "standard" unbiased method on 32 bit chunks because it is slow computing remainders. 32 bit or 64 bit dividend division/remainder calculation is still slow, but the rejection algorithm may be faster overall. You will need to be able to generate new random data because the algorithm may use more than 32 bytes. This could be done deterministically from the given hash using repeated hashing, repeated block encryption, keying a stream cipher with the 32 byte value, or using a CSPRNG with the 32 bytes. But if the PRNG isn't "secure" then the output of that RNG can be predicted regardless of whether the seed is unpredictable or not. There simply aren't many implementations of that type of RNG. ISAAC based implementations don't qualify. (We don't know enough about ISAAC to say that it's a secure cipher algorithm either.) PCG definitely does not qualify despite what you may have heard.
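To make the two recipes above concrete, here is a short Python sketch. It is an illustration only, not a vetted lottery implementation: the choice of SHA-256, the big-endian byte order, and the function names are my assumptions rather than anything specified in the answers.

```python
import hashlib

def numbers_by_iterated_hash(seed: bytes, count: int = 10, max_val: int = 500) -> list:
    """First recipe: hash iteratively, take the low 32 bits of each digest,
    reduce modulo max_val.  Reproducible for a given seed; values lie in 0..max_val-1."""
    out = []
    state = hashlib.sha256(seed).digest()
    for _ in range(count):
        low32 = int.from_bytes(state[-4:], "big")   # "lower 32 bits" of the digest
        out.append(low32 % max_val)
        state = hashlib.sha256(state).digest()      # next iteration hashes the previous digest
    return out

def numbers_by_bigint_remainders(seed32: bytes, count: int = 10, max_val: int = 500) -> list:
    """Second recipe: treat the 32 random bytes as one non-negative big integer
    and peel off successive remainders mod max_val.  Per the second answer, the
    residual bias is on the order of 2**-166 for these parameters, i.e. negligible."""
    n = int.from_bytes(seed32, "big")               # already non-negative
    out = []
    for _ in range(count):
        n, r = divmod(n, max_val)
        out.append(r)
    return out

if __name__ == "__main__":
    seed = b'Pkq5skE7tp=j#{y"+R$6~mg!z"4g/Utw'      # the 32-byte example from the question
    print(numbers_by_iterated_hash(seed))
    print(numbers_by_bigint_remainders(seed))
```

With the question's parameters (count = 10, max = 500) both functions return ten reproducible values in the range 0-499; add 1 to each result if strictly positive numbers are required.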
{}
## Tuesday, March 19, 2013 ### Planck rumours will soon become Planck results On Thursday, the Planck satellite will be revealing its first cosmological results. In terms of fundamental physics, this will be the biggest event since the Higgs discovery last year. In the cosmology community it is the biggest event for the best part of a decade (possibly in both directions of time). If you don't follow cosmology too closely, you might wonder why this particular experiment might generate so much excitement. After all, aren't there all sorts of experiments, all of the time? If so, I hope you've come to the right place. The sky as seen by Planck in 2010. Only, they hadn't removed the foregrounds yet. There's a whole Milky Way galaxy in the way. Why must they make us wait so long? If you're unaware, Planck is a satellite put in space by the European Space Agency to measure the cosmic microwave background (CMB). The CMB is an incredibly useful source of cosmological information. The impending release of Planck's results on Thursday is big news because Planck has measured the CMB with better resolution than any other experiment that can see the whole sky. Planck might have discovered evidence of interesting new physics, such as extra neutrinos or additional types of dark matter. It might even reveal some effects relating to how physics works at energies we could never probe on Earth. But even if it hasn't discovered anything dramatically new, the precision with which Planck has measured the parameters of the standard cosmological model will immediately make it the new benchmark. There have been surprisingly few rumours leaked to the rest of the cosmology community about what to expect on Thursday. This has resulted in the most pervasive rumour being that they have simply not found anything worth leaking. Whatever the reality, on Thursday rumours will become results. What has Planck actually done that is so interesting? Put most simply, Planck has measured the temperature of the CMB along various directions in the sky. The temperature of the CMB is almost uniform, but not quite. There are tiny fluctuations in the temperature of the CMB. These fluctuations are approximately one-millionth the size of the average temperature. To put this in perspective, measuring these fluctuations is like measuring the height of a building that is 100 metres tall, in tenths of a millimetre. Many other experiments (COBE, BOOMERANG, WMAP, ACT, SPT) have measured these fluctuations. So what makes Planck so special? A map of the fluctuations in the CMB as seen by COBE. Pretty poor resolution, but the first ever to see them. The big red band along the middle is residual foreground from our own galaxy In order to determine the temperature of the Cosmic Microwave Background it is necessary to measure its intensity as a function of its frequency. Unfortunately, the CMB is not the only microwave radiation in the universe. In order to properly determine the CMB's temperature along any given line of sight it is first necessary to determine what parts of this microwave radiation are the CMB and what are from some other source. The wider the range of frequencies at which you measure the microwave radiation, the easier this gets. Compared to comparable experiments, Planck does just this. The effects Planck has been looking for are subtle; therefore, removing this foreground is very important. Most importantly though, Planck has measured the CMB with better resolution than any other space-based telescope. 
The Atacama Cosmology Telescope and South Pole Telescope have measured the CMB with a similar resolution; however, as their names suggest, they are Earth based and can only see a small fraction of the sky. Planck, being space based, can obviously see in every direction. This means that Planck is the first telescope that will be able to tell us about both the smallest scales (i.e. the good resolution) and the largest scales (i.e. the full sky coverage) at the same time. This is Planck's most powerful feature. But why is the CMB so interesting? There is lots of stuff in the universe that can give us interesting cosmological information. Moreover, there are many other telescopes making measurements of this other stuff right now and reporting back to us. What is it about the CMB that always gets everyone so much more excited? The customary reason to give for this is the fact that the CMB is the closest we can get to an image of the Big Bang. The CMB formed only 400,000 years after the Big Bang began and before that time, the universe was opaque. Therefore no light from any event before the formation of the CMB can reach us. This is true, but I think it doesn't quite capture the real reason why the CMB is so interesting. The temperature fluctuations in the CMB as seen by WMAP. The resolution is much better than COBE (and someone nicely removed the galaxy). In fact the resolution is so good that you can even make out the initials of a prominent cosmologist. One of the disputes I expect that Planck will help settle is the true identity of this cosmologist. This "real reason" is that, for most cosmological models, we can predict very accurately what the CMB should look like. We can then compare the measured CMB to the accurate predictions and see which predictions were closest to the reality. The reason we can be so precise is that, at the early time when the CMB formed, the universe was very nearly homogeneous. That is, the temperature and density at every point of the universe was almost the same. You should compare that homogeneous, early, universe to the current universe where we have galaxies, stars, quasars, enormous regions empty of almost all matter and a whole host of other things occupying the universe. When the CMB formed the universe was more like a uniform soup of dark matter, hydrogen and radiation. This smoothness makes the calculations a lot simpler. The second point that is often made is that, after it was formed, the CMB has only interacted very weakly with anything else in the universe. This is mostly true, although CMB science is now so precise that even these incredibly weak interactions are now useful cosmological probes. For example, Planck will have discovered a number of massive galaxy clusters because of how the CMB scatters of hot electrons within those clusters. Despite all of this, eventually the CMB will no longer be the optimal cosmological probe. The information contained within the CMB is limited. Studying the CMB can only tell us what the density of the universe was, billions of years ago, on a thin shell, billions of light-years away. It is much more difficult to extract information about the primordial (or even current) density everywhere within that shell, but once we can, the wealth of information provided will be much more significant. These observations will be what comes next in cosmology. That is, very detailed observations of the structures in the universe. 
The major, expensive, satellite that will be the next generation's Planck (~20 years from now) is called Euclid, which will measure the location of hundreds of billions of galaxies. In the mean time there will be many interesting surveys (e.g. e-Rosita, DES). CMB science may soon be at the point of exhaustion, but observational cosmology will continue for at least the rest of this century. The CMB's temperature fluctuations as seen by the South Pole Telescope. Even better resolution than WMAP, but only a small fraction of the sky. Planck will see this level of resolution, but over the whole sky. Then, after that century has passed, we (or whoever is still around by then) can start real-time cosmology. That is, observing the changes in time of the temperature of the CMB and the locations of structures. These changes will be small, but the longer we wait, the bigger they become. It is conceivable that just at the end of the life-time of the youngest people alive today the first of these measurements will be beginning to occur. What if nothing new is revealed? The rumours surrounding Planck and what new things it might discover have had one notable feature to them: they haven't really existed. This has caused one persistent rumour to develop and that is that there is actually nothing interesting to leak. What would this mean? Firstly, I won't lie, it would be kind of sad. I'm a young researcher, I'd love to have something new and exciting to contemplate and work on. But, the situation wouldn't be quite as grim as the corresponding scenario for particle physics will be if all the LHC finds is the Higgs. The reason for this is that the standard model of particle physics is much more strongly established than the corresponding cosmological model. There are also a number of future experiments already taking measurements, or with allocated funding, that will further explore cosmology. If none of them find anything new and I'm writing blog posts like this in twenty years, I will start to use the word grim for cosmology too. The standard cosmological model (SCM) has its problems (such as what dark energy actually is) and, if nothing unexpected shows up, then we can't gain any insight into these problems and future generations might just have to learn to live with them. But for now, Planck will still be doing a great service. Firstly, and this shouldn't be ignored, it will verify WMAP's measurements. The LHC is great because it has two detectors. If one made a mistake, it is unlikely that the other made exactly the same mistake. So, if both see something, we can be confident it is there. So far, for large enough scales, WMAP is all we've had. Maybe WMAP made a mistake? We wouldn't know. But, on Thursday, we will because a completely different telescope, analysed by different people, will have made the same measurements. Any physics and results based on WMAP will gain that important confirmation step. Secondly, Planck will narrow the uncertainty on all of the parameters of the SCM (e.g. the density of dark matter, the density of dark energy, and the nature of the primordial density fluctuations). Planck will therefore allow us to make much more precise predictions about what other cosmological events should occur, if the SCM is correct. This will help us decide what types of telescopes and detectors to build and how to analyse the data produced. For very rare events, small changes in these parameters can even make the difference between expecting to see something or nothing. 
This sounds less interesting than exciting new physics, but it will still advance science significantly. The closest I can give you to rumours: I have heard a few genuine rumours (I'm not going to identify any sources or make any claims regarding their veracity). Here they are: • The initial "ISW mystery" is still present in Planck's observations. • When combined with extra data sets (including, I expect, galaxy clusters detected by Planck), there will be some evidence for non-zero neutrino masses, enough to make this quite a popular field of (cosmological) research. • With respect to non-Gaussianity, Planck will see evidence for non-Gaussianity at a level comparable to what WMAP has seen (i.e. 2-3 $$\sigma$$). That's it though, that's all I've heard... The release
# Laplace's equation in spherical coordinates with Neumann b.c I am trying to find the temperature field in a semi-infinite solid on whose surface there is an isotherm spherical cap sunken by a length p. For example: R = 10; p = 3; SphericalPlot3D[ 1/2 (Sqrt[2] Sqrt[Cos[2 θ] (R - p)^2 + 2 p R + R^2 - p^2] + 2 Cos[θ] (R - p)), {θ, Pi/2, 3/2 Pi}, {ϕ, 0, 2 Pi}] The rest of the surface is adiabatic. In a spherical coordinate system (physical convention) centered at the "center" of the cap, the PDE is: $$\frac{\partial }{\partial r}\left(r^2\frac{\partial T}{\partial r}\right)+\frac{1}{\sin \theta }\frac{\partial }{\partial \theta }\left(\sin \theta \frac{\partial T}{\partial \theta }\right)=0$$ With boundary conditions in dimensionless form given by: $$T=0$$, $$r\to \infty$$, this sets $$T$$ equal to the initial value far from the cap, $$T=1$$, $$r=\frac{1}{2} \left(\sqrt{2} \sqrt{-p^2+(1-p)^2 \cos (2 \theta )+2 p+1}+2 (1-p) \cos (\theta )\right)$$, this imposes $$T$$ on the cap $$\frac{\partial T}{\partial \theta}\bigg| _{\theta=\pi/2}=0$$, adiabatic condition $$\frac{\partial T}{\partial \theta}\bigg| _{\theta=\pi}=0$$, symmetry condition. I tried this: p = 0.2; boundaries = {-r + 1/2 (Sqrt[2] Sqrt[Cos[2 θ] (1 - p)^2 + 2 p + 1 - p^2] + 2 Cos[θ] (1 - p)), r - 100, -θ + Pi/2, θ - Pi} Ω = ImplicitRegion[And @@ (# <= 0 & /@ boundaries), {r, θ}]; RegionPlot[Ω, PlotRange -> {{0, 3}, {1, 5}}] NDSolveValue[{r^2 D[T[r, θ], {r, 2}] + 2 r D[T[r, θ], r] + D[T[r, θ], {θ, 2}] + Cot[θ] D[T[r, θ], θ] == {NeumannValue[0., boundaries[[3]] == 0], NeumannValue[0., boundaries[[4]] == 0]}, {DirichletCondition[ T[r, θ] == 1., boundaries[[1]] == 0.], DirichletCondition[T[r, θ] == 0., boundaries[[2]] == 0.]}}, T, {r, θ} ∈ Ω] But it does not seem to work. I have two Dirichlet conditions and two Neumann conditions, but I don't know if I inserted them in NDSolve in the right way. • 1. You need to express the b.c. involving derivative with NeumannValue, related: mathematica.stackexchange.com/q/224812/1871 2. How can $\theta=3\pi/2$ in spherical coordinates? What convention do you follow? – xzczd Apr 27 at 11:01 • Using physical convention $\theta \leq \pi$, you're right. The question was corrected according to your comment. Thank you. – umby Apr 28 at 11:17 Two issues here. First of all, you've chosen 100 to approximate Infinity, which is way too large in this case. Something like 5 is OK: p = 0.2; inf= 5; boundaries = {-r + 1/2 (Sqrt[2] Sqrt[Cos[2 θ] (1 - p)^2 + 2 p + 1 - p^2] + 2 Cos[θ] (1 - p)), r - inf, -θ + Pi/2, θ - Pi}; << NDSolveFEM Ω = ToElementMesh@ImplicitRegion[And @@ (# <= 0 & /@ boundaries), {r, θ}]; ToElementMesh is added to help NDSolve analyzing the domain properly, otherwise the femcbtd warning will pop up, at least in v12.2. (Alternatively, DiscretizeRegion can be used in place of ToElementMesh, which is a bit slower. ) The next issue is, you haven't set NeumannValue correctly. If you read the Details section of NeumannValue and the FEM document carefully, you might notice NeumannValue is actually defined based on the formal form of a PDE. How can we check the formal form of a PDE? The new-in-12.2 NDSolveFEMGetInactivePDE does the work. (If you're not yet in v12.2, try the function in this post. 
) seq = Sequence[{r^2 D[T[r, θ], {r, 2}] + 2 r D[T[r, θ], r] + D[T[r, θ], {θ, 2}] + Cot[θ] D[T[r, θ], θ] == 0, {DirichletCondition[T[r, θ] == 1., boundaries[[1]] == 0.], DirichletCondition[T[r, θ] == 0., boundaries[[2]] == 0.]}}, T, {r, θ} ∈ Ω] {state} = NDSolveProcessEquations@seq GetInactivePDE@state exprInBlueBox = -{{-r^2, 0}, {0, -1}} . Inactive[Grad][T[r, θ], {r, θ}]; The normal vector $$\overset{\rightharpoonup }{n}=(0,1)$$ at $$\theta=\pi$$, so the left hand side (LHS) of Neumann b.c. at $$\theta=\pi$$ is: normalVector = {0, 1}; - exprInBlueBox . normalVector // Activate (* - Derivative[0, 1][T][r, θ] *) The normal vector $$\overset{\rightharpoonup }{n}=(0,-1)$$ at $$\theta=\pi/2$$, so the LHS of Neumann b.c. at $$\theta=\pi/2$$ is: normalVector = {0, -1}; - exprInBlueBox . normalVector // Activate (* Derivative[0, 1][T][r, θ] *) Thus $$\left.\frac{\partial T}{\partial \theta}\right| _{\theta=\pi/2}=0$$ and $$\left.\frac{\partial T}{\partial \theta}\right| _{\theta=\pi}=0$$ are equivalent to zero NeumannValue in this case. Once again, according to the Details section of NeumannValue: …not specifying a boundary condition at all is equivalent to specifying a Neumann 0 condition. In other words, we don't need to explicitly set NeumannValue for your problem. So the problem can be solved with: sol = NDSolveValue@seq DensityPlot[sol[Sqrt[x^2 + z^2], ArcTan[z, Abs@x]], {x, -5, 5}, {z, -5, 0}, AspectRatio -> Automatic, PlotPoints -> 100, PlotRange -> All] You may further adjust value of inf and MaxCellMeasure option of ToElementMesh to see how the solution varies. • Great. Small suggestion, you could use NDSolveFEMToElementMesh in place of DiscretizeRegion as that would return a second order mesh with curved elements and should approximate the region and solution better. But this is probably a minor thing. – user21 Apr 29 at 11:46 • @user21 The mesh quality seems to be similar in this case, but ToElementMesh turns out to be faster. Edited. Thx for the suggestion. – xzczd Apr 29 at 12:02 • @xzczd Many thx. Knowing the formal form of a PDE is fundamental to set Neumann BC. If I understand, according to the documentation (can u check pls?), in my problem $a=f=0$, $\alpha=\gamma=0$, $\beta=\left\{c,cot\theta\right\}$ and $c=\left\{-r^{2},-1\right\}$. Then to set $\frac{\partial T}{\partial \theta}=0$ on the boundary3, I have to write: == NeumannValue[-1, boundaries[[3]] == 0.]. While, to impose the same b.c. on both boundary 3 and 4 implies: == NeumannValue[-1, boundaries[[3]] == 0.]+NeumannValue[-1, boundaries[[4]] == 0.]. Unfortunately, I use v11 so no NDSolveFEMGetInactivePDE. – umby Apr 30 at 10:44 • @umby You can create a .m of course, but it's not necessary, just copy and execute the code in a notebook is OK. In principle, $\left.\frac{\partial T}{\partial \theta}\right| _{\theta=\pi/2}=0$ should be interpreted to NeumannValue[1, boundaries[[3]] == 0.], because the normal vector is {0,-1} here. – xzczd Apr 30 at 13:13 • @umby Yes, but as mentioned in the comment above, NeumannValue[-1, boundaries[[4]] == 0.] is ill-posed. – xzczd Apr 30 at 14:30 In a previous answer 240190, I showed how one could use anisotropic meshing to add a DirichletCondition at "infinity" for a 1D problem. In this answer, I shall extend the technique to a 2D problem. # Geometry description In many FEM software packages, problems with spherical symmetry can be posed as an axisymmetric problem. Since it is easier for me to think in these terms, I will recast the problem. 
As I understood the system, a spherical cap is embedded in a semi-infinite domain, as I have sketched below. The y-axis is the symmetry axis. # Helper functions ## Mesh helper functions I use some of the following helper functions to construct an anisotropic mesh based on connecting and extending edge segments. A structured Quad mesh can then be easily constructed using RegionProduct (*Import required FEM package*) Needs["NDSolveFEM"]; (*Define Some Helper Functions For Structured Meshes*) pointsToMesh[data_] := MeshRegion[Transpose[{data}], Line@Table[{i, i + 1}, {i, Length[data] - 1}]]; unitMeshGrowth[n_, r_] := Table[(r^(j/(-1 + n)) - 1.)/(r - 1.), {j, 0, n - 1}] meshGrowth[x0_, xf_, n_, r_] := (xf - x0) unitMeshGrowth[n, r] + x0 firstElmHeight[x0_, xf_, n_, r_] := Abs@First@Differences@meshGrowth[x0, xf, n, r] lastElmHeight[x0_, xf_, n_, r_] := Abs@Last@Differences@meshGrowth[x0, xf, n, r] findGrowthRate[x0_, xf_, n_, fElm_] :=(*Quiet@*) Abs@FindRoot[ firstElmHeight[x0, xf, n, r] - fElm, {r, 0.00000001, 100000000/fElm}, Method -> "Brent"][[1, 2]] meshGrowthByElm[x0_, xf_, n_, fElm_] := N@Sort@Chop@meshGrowth[x0, xf, n, findGrowthRate[x0, xf, n, fElm]] meshGrowthByElm0[len_, n_, fElm_] := meshGrowthByElm[0, len, n, fElm] flipSegment[l_] := (#1 - #2) & @@ {First[#], #} &@Reverse[l]; leftSegmentGrowth[len_, n_, fElm_] := meshGrowthByElm0[len, n, fElm] rightSegmentGrowth[len_, n_, fElm_] := Module[{seg}, seg = leftSegmentGrowth[len, n, fElm]; flipSegment[seg]] reflectRight[pts_] := With[{rt = ReflectionTransform[{1}, {Last@pts}]}, Union[pts, Flatten[rt /@ Partition[pts, 1]]]] reflectLeft[pts_] := With[{rt = ReflectionTransform[{-1}, {First@pts}]}, Union[pts, Flatten[rt /@ Partition[pts, 1]]]] extendMesh[mesh_, newmesh_] := Union[mesh, Max@mesh + newmesh] ## Model specific helper functions Typically, the structured Quad mesh is used on rectangular domains. I use the following helper functions to map a square UV space mesh onto the curved domain. Clear[β, γ, rcap, rinf, rl, capMesh] β[R_, h_] := ArcCos[(R - h)/R] γ[R_, h_, ρ_] := ArcCos[(R - h)/ρ] rcap[R_, h_][u_] := Module[{angle = β[R, h], r = R}, R {Sin[angle u], -Cos[angle u]}] rinf[R_, h_, ρ_][u_] := Module[{angle = γ[R, h, ρ], r = ρ}, r {Sin[angle u], -Cos[angle u]}] rl[R_, h_, ρ_][u_, v_] := Module[{rc = rcap[R, h][u], ri = rinf[R, h, ρ][u]}, (ri - rc) v + rc] capMesh[R_, h_, ρ_][rh_, rv_] := Module[{sqr, crd, inc, msh, mean, mrkrs, bmrkrs, pEle, pe, pm, pcrd, sdf, n, leIds, bcEle, z = {0, 0}, ex = {1, 0}, ey = {0, 1}, f, g}, sqr = RegionProduct[rh, rv]; crd = MeshCoordinates[sqr]; inc = ( Delete[0] /@ MeshCells[sqr, 2]); mean = Mean /@ GetElementCoordinates[crd, #] & /@ {inc} // First; mrkrs = If[#2 > rf/ρ, 2, 1] & @@@ mean; msh = ToElementMesh["Coordinates" -> crd, pm = epm[msh] /. {0 -> 4}; pe = epi[msh]; pcrd = crd[[Flatten@pe]]; sdf = Flatten@ Position[SignedRegionDistance[#, pcrd], _?(Abs[#] < 0.0000001 &), 1] &; g = (pm[[#1]] = First@#2) &; MapIndexed[g, sdf /@ Table[ TransformedRegion[Line[{z, ex}], RotationTransform[i 90 °, 1/2 (ex + ey)]], {i, 0, 3}]]; pEle = {PointElement[pe, pm]}; bmrkrs = ebm[msh]; n = ebn[msh]; leIds = Range@Length@n; f = Function[{d}, Flatten@Position[n, _?(0.9999 < d . # &), 1]]; g = (bmrkrs[[#1]] = First@#2) &; MapIndexed[g, f /@ Table[RotationTransform[i 90 °][-ey], {i, 0, 3}]]; bcEle = {LineElement[ebi[msh], bmrkrs]}; crd = rl[R, h, ρ][#1, #2] & @@@ crd; inc = inc /. 
{{i_, j_, k_, l_} :> {l, k, j, i}}; ToElementMesh["Coordinates" -> crd, "BoundaryElements" -> bcEle, "PointElements" -> pEle] ] # Mesh construction The following workflow constructs a mesh with an angular resolution of 1°. Radially, there are two segments. There is a fine mesh in the region of interest (defined as 5X the radius) and an infinite segment that extends 10,000X the region of interest. (*Define geometric and meshing parameters*) R = 1; h = 1/5; rf = 5 R; Rinf = 10000 rf; nelmr = 80; nelminf = 40; nelmang = 90; Print["Angular discretization segment"] segu = Subdivide[0, 1, nelmang]; ru = pointsToMesh@segu Print["Mesh segment in the radial region of interest"] segr = leftSegmentGrowth[rf, nelmr, rf/100]; pointsToMesh@segr seginf = meshGrowthByElm0[Rinf - rf, nelminf, Last@segr - segr[[-2]]]; reginf = pointsToMesh@seginf rv = pointsToMesh@(#/Last[#] &@extendMesh[segr, seginf]) mesh = capMesh[R, h, Rinf][ru, rv]; Print["Full domain"] Show[mesh["Wireframe"], Axes -> True] Print["Zoomed region"] Show[mesh["Wireframe"], PlotRange -> {{0, 2}, {-R + h, -2}}, Axes -> True] # PDE set up and solution In the Heat Transfer Verification Manual there are some helper functions to create a well-formed operator for axisymmetric heat transfer problems. The code is reproduced here: Clear[HeatTransferModelAxisymmetric, TimeHeatTransferModelAxisymmetric] HeatTransferModelAxisymmetric[T_, {r_, z_}, k_, ρ_, Cp_, Velocity_, Source_] := Module[{V, Q}, V = If[Velocity === "NoFlow", 0, ρ*Cp*Velocity . Inactive[Grad][T, {r, z}]]; Q = If[Source === "NoSource", 0, Source]; 1/r*D[-k*r*D[T, r], r] + D[-k*D[T, z], z] + V - Q] TimeHeatTransferModelAxisymmetric[T_, TimeVar_, {r_, z_}, k_, ρ_, Cp_, Velocity_, Source_] := ρ*Cp*D[T, {TimeVar, 1}] + HeatTransferModelAxisymmetric[T, {r, z}, k, ρ, Cp, Velocity, Source] After all the heavy lifting has been done to create the mesh, the construction and solution of the PDE are straightforward. parms = {k -> 1, ρ -> 1, Cp -> 1, hc -> 10, Ta -> 0}; Γhot = DirichletCondition[θ[r, z] == 1, ElementMarker == 1]; Γcold = DirichletCondition[θ[r, z] == 0, ElementMarker == 3]; Γconv = 0; parmop = HeatTransferModelAxisymmetric[θ[r, z], {r, z}, k, ρ, Cp, "NoFlow", "NoSource"]; op = parmop /. parms; pde = {op == Γconv, Γhot, \ Γcold}; Tfun = NDSolveValue[pde, θ, {r, z} ∈ mesh]; Now, we can construct some plots: Plot[{Tfun[0, -z], Tfun[z, -R + h]}, {z, 0, 5}, PlotPoints -> 100, PlotRange -> {0, 1.0}, PlotLegends -> "Expressions", PlotLabel -> "Temperature along symmetry edges"] uRange = MinMax[Tfun["ValuesOnGrid"]]; legendBar = BarLegend[{"TemperatureMap", uRange}, 50, LegendLabel -> Style["[°C]", Opacity[0.6]]]; options = {PlotRange -> {{-2, 2}, {-2.8, -R + h}, uRange}, ColorFunction -> ColorData[{"TemperatureMap", uRange}], ContourStyle -> Opacity[0.5], ColorFunctionScaling -> False, Contours -> 10, AspectRatio -> Automatic, PlotPoints -> 100, FrameLabel -> {"r", "z"}, PlotLabel -> Style["Temperature Field: θ(r,z)", 18], ImageSize -> 650}; Legended[ContourPlot[Tfun[Abs[r], z], {r, -2, 2}, {z, -2.8, 0}, Evaluate[options]], legendBar] As you can see in the first plot, at a distance of 5, the temperature has decayed about 90%. A benefit of anisotropic meshing is that one can pose some stringent questions to the model with minimal computational cost. For example, suppose you had a requirement that you needed to know the distance where temperature decayed 99.99% of the spherical cap value. 
One can easily find that this occurs at a distance of about 4000 as shown below: FindRoot[Tfun[0, -z] - 0.0001, {z, 100}] (* {z -> 4024.02} *) # Convectively cooled top surface It is straightforward to create a convectively cooled top surface (ElementMarker==2) using a Robin-type condition. From the previously defined parms, I defined a convective heat transfer coefficient of 10 and an ambient fluid temperature of 0°. To set up, we simply need to modify the NeumannValue. Γconv = NeumannValue[hc (Ta - θ[r, z]), ElementMarker == 2] /. parms; pde = {op == Γconv, Γhot, \ Γcold}; Tfun = NDSolveValue[pde, θ, {r, z} ∈ mesh]; We can plot the solution as before: Plot[{Tfun[0, -z], Tfun[z, -R + h]}, {z, 0, 5}, PlotPoints -> 100, PlotRange -> {0, 1.0}, PlotLegends -> "Expressions", PlotLabel -> "Temperature along symmetry edges"] uRange = MinMax[Tfun["ValuesOnGrid"]]; legendBar = BarLegend[{"TemperatureMap", uRange}, 50, LegendLabel -> Style["[°C]", Opacity[0.6]]]; options = {PlotRange -> {{-2, 2}, {-2.8, -R + h}, uRange}, ColorFunction -> ColorData[{"TemperatureMap", uRange}], ContourStyle -> Opacity[0.5], ColorFunctionScaling -> False, Contours -> 10, AspectRatio -> Automatic, PlotPoints -> 100, FrameLabel -> {"r", "z"}, PlotLabel -> Style["Temperature Field: θ(r,z)", 18], ImageSize -> 650}; Legended[ContourPlot[Tfun[Abs[r], z], {r, -2, 2}, {z, -2.8, 0}, Evaluate[options]], legendBar] # Comparison with another code When possible, it is often conducive to compare the Mathematica results with another simulation code. To simulate boundary conditions at infinity, the FEM software COMSOL introduces an Infinite Element Domain (IED) concept below. A large scaling factor (e.g., 1000) is applied to the equations in the IED. As shown below, there is an excellent agreement between the Mathematica and COMSOL simulations. That should give one more confidence in the validity of this approach to solve the infinite domain problem. • Thanks for the comparative analysis @Tim Laska! Does the IED concept imply using a special elements or it is just increasing of element length far from heat source? – Oleksii Semenov May 5 at 13:06 • @OleksiiSemenov You are welcome! In COMSOL, the IED is either mapped in 2D (quads) or swept in 3D (wedges or hexas) in the infinite direction. I do not think you can apply an IED to an unstructured mesh. I am not quite sure what COMSOL is doing under the hood. It is possible that they are scaling the elements invisible to the user or possibly they are scaling the thermal conductivity in a piecewise fashion to achieve the same effect. – Tim Laska May 5 at 13:51 • @OleksiiSemenov From the [COMSOL documentation] (doc.comsol.com/5.5/doc/com.comsol.help.comsol/…) applies coordinates stretching of the IED not dissimilar to what I have done. In my approach, the first element of the IED matches the width of the outer layer of the finite domain and stretches the elements to provide a gradual transition out to "infinity". – Tim Laska May 5 at 19:45 • Thanks for the link @TimLaska. I also found some papers devoted to this aspect: Zienkiewicz, O. C., C. Emson, and P. Bettess. "A novel boundary infinite element." International Journal for Numerical Methods in Engineering 19.3 (1983): 393-404. and Marques, J. M. M. C., and D. R. J. Owen. "Infinite elements in quasi-static materially nonlinear problems." Computers & structures 18.4 (1984): 739-751. – Oleksii Semenov May 6 at 11:42 One can also consider the 3D statement of the problem. Solution of a such linear problem is not so time consuming nowadays. 
For mesh generation let's take advantage of OpenCascadeLink procedures which are very useful for tessellation of domains with complex geometry. Let's $$r$$ is a radius of a cap and $$R$$ is "infinity" radius. The computational region is defined by a difference between spherical wedge of radius $$R$$ and a ball of radius $$r$$. Whereas the spherical wedge can be defined as intersection of rectangular cuboid and a ball of radius $$R$$. Definition of computation domain and FE mesh generation Needs["OpenCascadeLink"] Needs["NDSolveFEM"] r = 10; (*radius of a cap*) p = 0.2; shape1 = OpenCascadeShape[Ball[{0, 0, 0}, R]]; Hexahedron[{{0, -R, -R}, {R, -R, -R}, {Sqrt[2] R, Sqrt[2] R, -R}, {0, R, -R}, {0, -R, R}, {R, -R, R}, {Sqrt[2] R, Sqrt[2] R, R}, {0, R, R}}]]; shape3 = OpenCascadeShape[Ball[{p - r, 0, 0}, r]]; difference];(*boundary mesh geteration*) mesh = ToElementMesh[bmesh];(*FE mesh generation*) groups = bmesh["BoundaryElementMarkerUnion"]; temp = Most[Range[0, 1, 1/(Length[groups])]]; colors = ColorData["BrightBands"][#] & /@ temp; Show[mesh["Wireframe"["MeshElementStyle" -> FaceForm /@ colors]], Axes -> True, AxesLabel -> {"x", "y", "z"}, AxesStyle -> RGBColor[0, 0, 0], BaseStyle -> 14] Numerical solution In the code below we solve the problem by means of FEM low level routines. Boundary elements with ElementMarkers=4;5belong to the surface of cap whereas on "infinity" surface ElementMarkers=1;3. This is taken into account when implementing Dirichlet BC. The rest surface of computational domain is adiabatic. nr = ToNumericalRegion[mesh]; vd = NDSolveVariableData[{"DependentVariables", "Space"} -> {{u}, {x, y, z}}]; sd = NDSolveSolutionData["Space" -> nr]; pded = InitializePDECoefficients[vd, sd, "DiffusionCoefficients" -> {{-IdentityMatrix[3]}}]; bcd = InitializeBoundaryConditions[vd, sd, {{DirichletCondition[u[x, y, z] == 1, ElementMarker == 4 || ElementMarker == 5], DirichletCondition[u[x, y, z] == 0, ElementMarker == 1 || ElementMarker == 3]}}] md = InitializePDEMethodData[vd, sd]; dpde = DiscretizePDE[pded, md, sd]; dbc = DiscretizeBoundaryConditions[bcd, md, sd]; ufun = ElementMeshInterpolation[{mesh}, res]; Postprocessing Temperature distributions along axes $$y$$ and $$z$$ should be the same Show[ Plot[ufun[x, 0, 0], {x, p, R}, PlotRange -> All, PlotLegends -> {"along x axis"}, PlotStyle -> {RGBColor[1, 0, 0], Thickness[0.005]}] , Plot[{ufun[0, x, 0], ufun[0, 0, x]}, {x, Sqrt[r^2 - (p - r)^2], R}, PlotRange -> All, PlotLegends -> {"along y axis", "along z axis"}, PlotStyle -> {{RGBColor[0, 1, 0], Thickness[0.005]}, {RGBColor[0, 0, 1], Thickness[0.005]}}], Frame -> True , FrameLabel -> {"Distance", "Temperature"}, FrameStyle -> RGBColor[0, 0, 0], BaseStyle -> 18, ImageSize -> 600, GridLines -> {{p, Sqrt[r^2 - (p - r)^2]}, None}, GridLinesStyle -> {Dashed, RGBColor[0, 0, 0]} ]
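Independently of the FEM setups above, there is a quick sanity check available in the limiting case p = R, where the sunken cap becomes a full isothermal hemisphere on an adiabatic plane: by symmetry with an isothermal sphere in an infinite medium, the exact field is then purely radial, T = R/r. Replacing the condition at infinity by T = 0 at a finite radius r_inf also has a closed-form radial solution, which quantifies how the artificial outer boundary distorts the field. The short Python sketch below (a plain independent check, not Mathematica) uses this worst-case geometry; for the shallow cap in the question the far field decays with a smaller prefactor, but the trend with r_inf is roughly the same.

```python
import numpy as np

R = 1.0

def T_exact(r):
    """Isothermal hemisphere of radius R on an adiabatic plane: T = R / r (method of images)."""
    return R / r

def T_truncated(r, r_inf):
    """Radial Laplace solution with T(R) = 1 and T(r_inf) = 0 instead of T(infinity) = 0."""
    return (1.0 / r - 1.0 / r_inf) / (1.0 / R - 1.0 / r_inf)

r = np.linspace(R, 3.0 * R, 200)                 # region of interest near the cap
for r_inf in (5.0, 20.0, 100.0):                 # candidate locations of the artificial outer boundary
    err = np.max(np.abs(T_truncated(r, r_inf) - T_exact(r)))
    print(f"r_inf = {r_inf:6.1f}   max error on [R, 3R] = {err:.3e}")
```

For this worst case the error near the cap scales roughly as R/r_inf, so pushing the outer boundary out (or stretching elements towards it, as the anisotropic-mesh and infinite-element approaches above do) is what buys accuracy, not refining the near field alone.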
# Hamiltonian for a 1D spin chain [closed] I am trying to implement the Lanczos algorithm to tridiagonalize the Hamiltonian for a 1D spin chain of length $$L$$, but I am unable to decipher from my professor's notes (here's a link) what action the Hamiltonian has on a random vector (or, for that matter, what the Hamiltonian is). My trouble arises at Eqn. 20 in these notes. They say that the Hamiltonian is $$\frac{1}{2}\bigg(\sum_{i=0}^{L-1}P_{ij}-\frac{L}{2}I\bigg).$$ However, this is really confusing to me, since if $$P_{ij}$$ is what he defined in Eqn. 18, then the resulting matrix is just a 4 by 4 matrix and not $$2^L\times 2^L$$ as he claims it should be. If $$P_{ij}$$ is not the same as in Eqn. 18, then what is it, and how do I compute this Hamiltonian (or at the very least the Hamiltonian's action on a vector, $$v$$)? ## closed as unclear what you're asking by Norbert Schuch, Kyle Kanos, ZeroTheHero, Jon Custer, Buzz (Dec 29 '18 at 2:53) Implicitly, each of those summands is $$I^{\otimes (i-1)} \otimes P_{ij} \otimes I^{\otimes k}$$, so each summand acts as the identity on all but two of the spins: $$P_{ij}$$ itself may only be 4 by 4, but this extension with the identity operators is indeed $$2^L\times 2^L$$. I put $$k$$ here just to denote the remaining sites; it is something like $$L-i$$, but I may be off by 1 or 2 and didn't check which.
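To make that embedding concrete, here is a short NumPy sketch (my own illustration, not from the notes) that builds the full $$2^L\times 2^L$$ matrix for small $$L$$. It assumes $$P_{ij}$$ is the two-site spin-exchange (swap) operator, which can be written as $$P_{ij}=\tfrac{1}{2}(I+\vec{\sigma}_i\cdot\vec{\sigma}_j)$$, and it assumes the sum couples nearest neighbours with periodic wrap-around; check Eqn. 18 of the notes for the actual convention.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i: I x ... x op x ... x I via Kronecker products."""
    mats = [id2] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def swap(i, j, L):
    """Exchange operator P_ij = (I + sigma_i . sigma_j) / 2 on the full 2^L-dimensional space."""
    P = np.eye(2 ** L, dtype=complex)
    for s in (sx, sy, sz):
        P += site_op(s, i, L) @ site_op(s, j, L)
    return P / 2

def hamiltonian(L, periodic=True):
    """H = (1/2) * (sum_i P_{i,i+1} - (L/2) * I); the boundary handling is an assumption."""
    dim = 2 ** L
    H = np.zeros((dim, dim), dtype=complex)
    bonds = range(L) if periodic else range(L - 1)
    for i in bonds:
        H += swap(i, (i + 1) % L, L)
    return 0.5 * (H - 0.5 * L * np.eye(dim))

L = 6
H = hamiltonian(L)
print(H.shape, np.allclose(H, H.conj().T))   # (64, 64) True -> indeed 2^L x 2^L and Hermitian

v = np.random.randn(2 ** L)                  # random start vector for a Lanczos iteration
w = H @ v                                    # the Hamiltonian's action on v
```

For chain lengths where Lanczos is actually needed, you would apply each $$P_{i,i+1}$$ to $$v$$ by permuting basis-state indices instead of forming the dense matrix, but the dense version is enough to check Eqn. 20 for small $$L$$.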
# DNS simulation in industry I have this idea in my mind that Direct Numerical Simulation (DNS) is only used in research, since the Reynolds numbers of industrial applications are too high and require computing power we don't have yet. Is that still true nowadays? Any reference on the subject? • To the best of my knowledge, LES and VLES are the two most widely used in industry right now. Note that LES mesh requirements are almost the same as those of DNS except near the walls. Aug 6 '20 at 9:21 • @Alish DNS is at least two to three orders of magnitude more computationally expensive than LES. – Algo Aug 6 '20 at 10:26 • @Algo And is it heavily due to mesh requirements? Aug 6 '20 at 10:34 • @Alish In LES you can model the small-scale eddies and resolve the rest, which results in less expensive mesh requirements than DNS. – Algo Aug 6 '20 at 11:06 I believe this is too broad to be answered here, but DNS has already been employed for high Reynolds number simulations in the range 12,000 < Re < 33,500 - for swirling jets, but it is still "out of reach for large systems". To visualize the size of the scales required to capture turbulent flow parameters at a high Reynolds number, let's take as an example a simple air flow in a pipe with $$\text{Re} \approx 10^5$$ and a fairly moderate turbulence intensity of 3%. The most important factor to take into consideration for DNS and LES is the scale of the smallest (Kolmogorov) eddies: these are the smallest eddies that can exist in a turbulent flow before being converted into heat through viscous dissipation, and your DNS simulation should capture them. (I am not going to include all the calculations, to keep the answer concise, but you can refer to Wilcox (2006) and Rodriguez (2019) for more.) Back to our flow situation, you can expect Kolmogorov scales with the following estimates:
• Kolmogorov eddy length scale: 0.000152879 m
• Kolmogorov eddy time scale: 0.00148359 s
• Kolmogorov eddy velocity scale: 0.103046 m/s
So you basically have an idea of the vast size of the mesh and the time step required to keep the Courant number below one. A higher Reynolds number will have even smaller scales (imagine a DNS setup for supersonic flow over a 3D wing). However, I believe this paragraph by Rodriguez (2019) might give you an insight into the future of DNS (which is very, very limited right now): In any case, as computational power increases, DNS will not only be used for turbulence research and small systems but for larger engineering designs as well; this trend is inevitable and was predicted long ago. This optimistic premise is supported by the strong potential from recent advances in quantum computers, topological quantum materials, and quantum algorithms [...] quantum algorithms already solve linear systems of equations, which are essential for CFD solvers. Furthermore, it is expected that quantum algorithms will result in an exponential decrease of the time required to solve systems of linear equations. Indeed, the literature as of 2019 indicates the potential for computational speed increases of at least a factor of 1000! And of course, the detailed DNS calculations will uncover fluid functionality that can be leveraged onto vastly improved engineered system behavior and performance. I really can't recommend this book enough (specifically for your question): Applied Computational Fluid Dynamics and Turbulence Modeling - Rodriguez (2019)
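To show where numbers like these come from, here is a rough Python sketch using the standard Kolmogorov relations $$\eta=(\nu^3/\varepsilon)^{1/4}$$, $$\tau_\eta=(\nu/\varepsilon)^{1/2}$$, $$u_\eta=(\nu\varepsilon)^{1/4}$$, with the dissipation rate estimated as $$\varepsilon\approx u'^3/\ell$$ from the turbulence intensity. The pipe diameter, viscosity and the mixing-length estimate $$\ell\approx 0.07D$$ are my own illustrative assumptions, so the output will be of the same order as, but not identical to, the values quoted above.

```python
import numpy as np

# Illustrative inputs (assumptions): air at roughly 20 degC in a pipe
nu = 1.5e-5        # kinematic viscosity of air [m^2/s]
D = 0.1            # pipe diameter [m]
Re = 1.0e5         # Reynolds number based on D
Ti = 0.03          # turbulence intensity (3 %)

U = Re * nu / D                 # bulk velocity implied by Re
u_prime = Ti * U                # velocity fluctuation scale
ell = 0.07 * D                  # turbulence length scale (common pipe-flow estimate)
eps = u_prime**3 / ell          # dissipation rate estimate [m^2/s^3]

eta   = (nu**3 / eps) ** 0.25   # Kolmogorov length scale [m]
tau   = (nu / eps) ** 0.5       # Kolmogorov time scale [s]
u_eta = (nu * eps) ** 0.25      # Kolmogorov velocity scale [m/s]

print(f"U = {U:.2f} m/s, u' = {u_prime:.3f} m/s, eps = {eps:.3e} m^2/s^3")
print(f"eta = {eta:.3e} m, tau_eta = {tau:.3e} s, u_eta = {u_eta:.3e} m/s")

# Very rough cost indicators for a DNS of one diameter-length of pipe:
cells = (np.pi / 4 * D**2 * D) / eta**3          # cell count if the spacing is ~eta everywhere
dt = eta / U                                     # time step for a Courant number of order one
print(f"~{cells:.2e} cells, dt ~ {dt:.2e} s")
```

Since $$\eta/D$$ shrinks roughly as $$\mathrm{Re}^{-3/4}$$ at fixed geometry, the cell count grows very quickly with Reynolds number, which is exactly the point made above.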
Journal topic Mech. Sci., 10, 575–587, 2019 https://doi.org/10.5194/ms-10-575-2019 Mech. Sci., 10, 575–587, 2019 https://doi.org/10.5194/ms-10-575-2019 Research article 04 Dec 2019 Research article | 04 Dec 2019 # Rapid attitude maneuver of the space tether net capture system using active disturbance rejection control Rapid attitude maneuver of the space tether net capture system using active disturbance rejection control Cheng Wei1, Hao Liu2, Chunlin Tan2, Yongjian Liu2, and Yang Zhao1 Cheng Wei et al. • 1Department of Aerospace Engineering, Harbin Institute of Technology, Harbin, 150001, China • 2China Academy of Space Technology, Beijing, 100094, China Correspondence: Cheng Wei (weicheng@hit.edu.cn) Abstract The space tether net capture system is a spacecraft system with a mounting tether net for capturing targets. It has the advantages of reusability and the adaptability to capture varying targets with different geometries or flying-motion statuses. However, due to its flexible tether net, the system shows strong nonlinearity, which makes it difficult to achieve the desired control performance for rapid and accurate maneuvering; moreover, this limits the ability of the tether net system to capture fast-moving targets. This paper focused on the maneuver controller design of the space capture system with a large flexible tether net. Firstly, based on the absolute node coordinate method, the dynamic model of the space tether net system is established, which can accurately describe the geometric and material nonlinearities of the space tether net. Then, a two-loop active disturbance rejection control is proposed for the rapid and high-precision maneuvering of the flexible system; meanwhile the second-order extended state observer is designed to estimate and compensate for the tether net vibration disturbance. The simulation validated the proposed control, which could complete the rapid and accurate maneuvering and also compensate for the disturbance caused by the vibration of the flexible tether net. 1 Introduction The increasing amount of space debris poses a great hidden danger to human space activities. Since space junk is mostly non-cooperative, moving or rotating fast and with various shapes, a robotic manipulator employing the rigid contact capture method has difficulty in grasping an unknown target with a fast rotating speed and unknown geometry. For this reason the space tether net capture system has been developed. reviewed and compared the existing technologies on active space debris capture and removal. The research on removing debris using tethers is emerging. presented a new concept for the application of space tethers in planetary exploration and payload transfer. presented precise numerical simulations for available electric currents, orbital changes, tether stability and deployment dynamics. Tethers are modeled as lumped masses to take into account tether flexibility, and environment models (changes in the plasma density and geomagnetic field and so on) are also considered. analyzed the performance of the thruster control, tension control and offset control strategies during retrieval of the tether. researched the dynamics and control of two-body and n-body tethered satellites and various control schemes to stabilize the dynamics during retrieval of the sub-satellite are described. developed a brand-new space robot system called the maneuverable tethered space net robot. 
In addition to the advantages inherited from the tethered space net, extra maneuverability in the tethered space net robot allows for further possibilities for debris capture. investigated the deployment dynamics of the tether nets, and they found four deployment parameters are critical to the deployment. These are the maximum net area, deployment time, traveling distance and effective period. They modeled the tethered net based on the absolute nodal coordinates formulation (ANCF) to describe the precise dynamics of the tether. modeled the tether net system in the orbital frame by applying Lagrange equations, and an integrated control scheme was proposed introducing the thrusters which was effective for in-plane libration damping and enabled the capture net to track an expected trajectory. The space ejecting flying tether net could deploy the flying tether net driven by ejecting mass connected to the corner of the tether net on orbit. Then the net will fly to the target uncontrolled and on its own. It cannot be reused and cannot maintain the configuration during flying, so the flying tether net requires a precise ejecting direction and velocity to capture the target. On the other hand, the space maneuvering flying net could control the tether net configuration by the maneuvering the sub-satellite at the edge corner of the tether net. Although it can maintain or change the net configuration during flight, it can not be reused; moreover, the cost of sub-satellites is high, and the control schema for the whole system consists of a flexible tether net, and multi-satellites are complicated. The Space-Inflated Tether Net Capture System (SITNCS) could be deployed to an umbrella-like capture structure by inflating the inflatable rods, and then the target could be wrapped and captured by tightening the edge cables. After transferring the target to the desired orbit, SITNCS could loosen the edge cables and deflate the inflatable rods to release the target. SITNCS is controllable, stable and reusable, which makes it also more applicable. SITNCS operates in four stages shown in Fig. 1: inflation deployment, motorized wrapping, chasing and capturing, and transferring to another orbit. After deployment, the space tether net system has to rapidly approach to the target; the approaching procedure is seriously affected by the large amplitude and low frequency vibration of the inflatable flexible rods and tether net. Figure 1The workflow of SITNCS. At present, the studies on SITNCS are mostly conceptual, such as RETICULAR , REDCROC , ROGER (Bischof2003) and RemoveDebris . presented “Junk Hunter” which could autonomously rendezvous, capture and de-orbit orbital debris, and this system utilizes a deployable, inflatable boom structure supporting a mesh netting. The theoretical research on space flexible-tethered-net capture system is generally about the basic theory of the constructing components rather than the whole system. The inflatable rods is the supporting structure for SITNCS, which is a thin inflatable film structure with a high compression ratio and could maintain stiffness after inflation. analyzed the bending stiffness of the inflated beam based on the thin membrane theory, and analyzed the supporting rods from shell theory. The capture tether net is an important part of the space-inflated tether net capture system. At present, the flexible-tether dynamic modeling methods are mainly in two types: a mass-spring model (MS) and the absolute nodal coordinate formulation . 
The advantage of the mass-spring method is that it is simple to model and computationally efficient. However, its disadvantage is that the simulation accuracy is low. The ANCF employs the spatial absolute coordinates and its gradient as the generalized coordinates, which is no longer limited as the conventional finite element with the assumption of small deformation and small strains, but it can more effectively describe the large deformation and large rotation of the tethers. The space-inflated net capture system contains the large-sized inflatable beams and flexible rope nets, and the system exhibits strong nonlinearity and uncertainty. During the attitude maneuvering process, the flexible capturing mechanism vibration makes it difficult for the capture system to achieve fast and accurate attitude maneuver control, thereby limiting the system's ability to capture the target. The traditional PID (proportional–integral–derivative) controller cannot meet the requirements due to its inability to compensate for the disturbances. The active disturbance rejection control (ADRC) could estimate the external disturbance from the vibration of the tether net and compensates for the disturbance in the meantime, which is suitable for high-precision attitude control of the space-inflation rope capture system. The control for SITNCS makes the spacecraft chase the target since the tether net has large deformable cables which would disturb the attitude of the spacecraft and may cause the spacecraft to fail to capture the fast-moving target. Meanwhile SITNCS will unavoidably undergo the uncertainties and disturbance of the vibrating tether net during capture which would significantly limit the ability of the chasing spacecraft; the traditional PID controller cannot meet the requirements due to its inability to compensate for the disturbances. However, the active disturbance rejection control, which could handle the unknown and time-varying uncertainties and disturbances, is a potential method. The ADRC includes three components, the tracking differentiator (TD), extended state observer (ESO) and nonlinear state error feedback (NLSEF), and has been widely used in industrial applications , which significantly improves the control performance with a good robustness and adaptability. proposed a fuzzy ADRC controller for satellite attitude control. Research has been done to solve the attitude control of the spacecraft; Li (2009) and used active disturbance rejection control to get high pointing accuracy and rotation speed for spacecraft with flexible appendages with a small deformation. The control of a large deformable tether net still lacks sufficient research. Based on the active disturbance rejection control, this paper focuses on the rapid maneuvering of SITNCS with a large deformable tether net. The main contributions are as follows: 1. Based on the absolute node coordinate formulation, the precise dynamic model of SITNCS consists of a spacecraft, inflated boom and flexible tether net, which can accurately present the nonlinear characteristics of the space tether net capture system (Sect. 2). 2. A double-loop active disturbance rejection controller is proposed for the rapid maneuvering of SITNCS, while a transition process is arranged by the TD1 to reduce the vibration of the flexible-tether-net capture system. A second-order divergence observer is employed to estimate and compensate for the disturbance of the space tether system during rapid maneuvering (Sect. 3). 3. 
Good performance of the proposed method is achieved and validated by simulation, which could control SITNCS for fast and high-precision maneuvering; meanwhile, the vibration disturbance could be effectively suppressed (Sect. 4). 2 Dynamic modeling of SITNCS SITNCS consists of a service spacecraft and a capture mechanism with four inflatable rods supporting the trapezoidal prism tether net (Fig. 2). We make the following assumptions to simplify the analysis: Assumption 1. The service spacecraft is simplified as a single rigid body, regardless of other appendages other than the flexible-tether-net capture mechanism. Assumption 2. The inflatable beam has been inflated and is able to maintain the internal pressure, which could support the tether net forming a capable capturing configuration. Assumption 3. There is only a focus on the attitude control of SITNCS, and there is a disregard for the coupling with orbital dynamics. The reference coordinate system is shown in Fig. 2, where o is the global inertial coordinate system, b is the service spacecraft coordinate system located at the center of the spacecraft, c is the capture mechanism coordinate system and Ri is the vector of material point i in the flexible tether. The following is a dynamic modeling and analysis of SITNCS, which consists of three parts: the inflatable beam, the tether capturing system and the whole system of SITNCS. Figure 2Reference coordinate systems of SITNCS. ## 2.1 Equivalent model of the inflatable rod The analysis of conventional inflatable structures is mostly based on the finite element model, but this cannot be easily and efficiently employed for a rigid-flexible-control coupling model of the spacecraft with a large flexible-tether-net system. Therefore, this paper conducts an equivalent modeling analysis based on the ideal-pressure charging theory for inflatable structures. The bending failure process of the inflatable rod is divided into two parts: the linear load-bearing phase and the buckling failure phase. In the linear load-bearing stage, the deformation of the inflatable beam is small; no wrinkles are formed on the pipe wall; and the inflatable beam exhibits overall buckling. In the buckling failure stage, the inflated beam undergoes large deflection deformation, and the inflatable beam will produce wrinkles in some parts; that is, local buckling and the wrinkle area will no longer participate in the bearing. When the pleated area extends over the entire circumference of the inflatable beam section, the inflatable beam loses its load-carrying capacity . In this paper, the case is where the inflatable rod is completely inflated and the deformation is small, so the inflatable beam is in the linear load-bearing stage, and the bending stiffness is approximately constant under the allowable air pressure. At this time, the inflatable beam can be equivalent to the Euler–Bernoulli beam. The bending stiffness of the equivalent beam could be modeled and depends on the section and material properties of the inflated beam. Considering that the bending stiffness of the inflatable beam EiIi and the equivalent beam EeIe are equal, the elastic modulus Ee of the equivalent beam is $\begin{array}{}\text{(1)}& {E}_{\mathrm{e}}={E}_{\mathrm{i}}\left(\mathrm{1}-{\left(\frac{d}{D}\right)}^{\mathrm{4}}\right),\end{array}$ where D and d represent the outer diameter and inner diameter of the inflatable beam, respectively, and Ei is the elastic modulus of the inflatable-beam material. 
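As a small numerical illustration of Eq. (1), the following sketch (not part of the original study) recovers the equivalent modulus from the stiffness-equivalence condition EeIe = EiIi, taking the equivalent beam as a solid circular section of diameter D and the inflatable wall as the annulus between d and D; the film modulus and diameters used here are illustrative assumptions rather than the values of Table 2.

```python
import math

def equivalent_modulus(E_i, D, d):
    """Eq. (1): E_e = E_i * (1 - (d/D)^4), obtained from E_e * I_solid = E_i * I_annulus."""
    I_solid = math.pi * D**4 / 64              # second moment of a solid circle of diameter D
    I_annulus = math.pi * (D**4 - d**4) / 64   # second moment of the thin tube wall
    E_e = E_i * I_annulus / I_solid
    assert abs(E_e - E_i * (1 - (d / D) ** 4)) < 1e-9 * E_i   # matches the closed form of Eq. (1)
    return E_e

# Illustrative numbers (assumptions): polyimide film tube, 80 mm outer / 79 mm inner diameter
E_i = 2.5e9          # Pa
D, d = 0.080, 0.079  # m
print(f"E_e = {equivalent_modulus(E_i, D, d):.3e} Pa")
```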
It can be found that the factors affecting the bending behavior of the inflatable beam in the linear load-bearing stage are mainly the material itself and the shape of the section, and the overall buckling is independent of the inflation pressure. ## 2.2 Dynamic modeling of the tether net using the ANCF In this section, the dynamic model of the tether has been established based on the ANCF , which can describe the large flexibility and large deformation of the space tether net. The configuration and node numbering of the tether net are shown in Fig. 3. Figure 3Configuration of SITNCS. Figure 4Nodes and element numbering of SITNCS. The numbering rule of one side surface is shown in Fig. 4 and Table 1; the nodes are ${}^{k}{N}_{i}^{j}\left(k=\mathrm{1},\mathrm{\dots },\mathrm{4}$, $i=\mathrm{1},\mathrm{\dots },p$, $j=\mathrm{1},\mathrm{\dots },\left(p-\left(i-\mathrm{1}\right)\right)\cdot \mathrm{2}+\mathrm{1}\right)$, where k is the surface index, i is the row index of kth surface, j is the row index of kth surface and p is the row number (e.g., p=10 in Fig. 4). The rods consists of the supporting rods and warping rods, where the supporting rods are ${}^{k}b{\mathrm{1}}^{m}\left(m=\mathrm{1},\mathrm{\dots },p-\mathrm{1}\right)$ and the warping rods are kb2n(m=1) and kb3n(m=5), $n=\mathrm{1},\mathrm{\dots },\left(p-\left(m-\mathrm{1}\right)\right)\cdot \mathrm{2}$. The horizontal tethers are ${}^{k}c{\mathrm{1}}_{i}^{n}\left(i\ne \mathrm{1},i\ne \mathrm{5}\right)$, where the vertical tethers are ${}^{k}c{\mathrm{2}}_{i}^{j}\left(i\ne \mathrm{1}\right)$. Table 1Connection between nodes and elements. A flexible cable element considering the axial and bending deformation is obtained on the basis of the theoretical hypothesis of a Euler–Bernoulli beam. Figure 5 shows the undeformed and deformed configurations of the three-dimensional cable element using two nodes. Figure 5Absolute nodal coordinate formulation model of the cable element. Set the length of the element as L and the generalized coordinates of the cable element to be $\begin{array}{}\text{(2)}& {}^{j}\mathbit{q}={\left[{}^{j}{\mathbit{r}}^{T}\begin{array}{cccc}\left(\mathrm{0}\right)& {}^{j}{\mathbit{r}}_{x}^{T}\left(\mathrm{0}\right)& {}^{j}{\mathbit{r}}^{T}\left(L\right)& {}^{j}{\mathbit{r}}_{x}^{T}\left(L\right)\end{array}\right]}^{T},\end{array}$ where jr and jrx represent the position vector and gradient vector at the end point, respectively. The position vector of the cable element at a point on the axis of the cable element can be expressed in generalized coordinates as $\begin{array}{}\text{(3)}& {}^{j}\mathbit{r}\left(x,t\right)=S\left(x\right){}^{j}\mathbit{q}\left(t\right),\end{array}$ where S(x) is the shape function of the three-dimensional flexible ANCF cable element. The kinetic energy of the flexible element can be written as follows: $\begin{array}{}\text{(4)}& {}^{j}T=\frac{\mathrm{1}}{\mathrm{2}}\underset{\mathrm{0}}{\overset{L}{\int }}\mathit{\rho }\underset{A}{\overset{j}{\int }}{\stackrel{\mathrm{˙}}{\mathbit{r}}}^{T}{}^{j}\stackrel{\mathrm{˙}}{\mathbit{r}}\mathrm{d}A\mathrm{d}x=\frac{\mathrm{1}}{\mathrm{2}}{}^{j}{\stackrel{\mathrm{˙}}{\mathbit{q}}}^{T}{}^{j}\mathbit{M}{}^{j}\stackrel{\mathrm{˙}}{\mathbit{q}},\end{array}$ where ρ and A are the density and cross-sectional area of the cable element, respectively, and ${}^{j}\mathbit{M}={\int }_{\mathrm{0}}^{L}\mathit{\rho }\left(A{\mathbf{S}}^{T}\mathbf{S}\right)\mathrm{d}x$ is the constant mass matrix of the ANCF cable element. 
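To reproduce the constant mass matrix of Eq. (4), the sketch below evaluates jM = ∫0L ρA STS dx by Gauss-Legendre quadrature, assuming the standard cubic (Hermite-type) shape functions of the two-node, gradient-deficient ANCF cable element; the element length, density and cross-section are illustrative assumptions. The rigid-translation check at the end recovers the element mass, which is a useful sanity test for any shape-function implementation.

```python
import numpy as np

def shape_matrix(x, L):
    """3x12 ANCF cable shape matrix S(x) for nodal coordinates [r(0), r_x(0), r(L), r_x(L)]."""
    xi = x / L
    s = [1 - 3 * xi**2 + 2 * xi**3,
         L * (xi - 2 * xi**2 + xi**3),
         3 * xi**2 - 2 * xi**3,
         L * (xi**3 - xi**2)]
    return np.hstack([si * np.eye(3) for si in s])

def mass_matrix(L, rho, A, n_gauss=5):
    """Constant 12x12 mass matrix M = int_0^L rho*A*S^T S dx (Eq. 4), by Gauss-Legendre quadrature."""
    xg, wg = np.polynomial.legendre.leggauss(n_gauss)   # nodes and weights on [-1, 1]
    M = np.zeros((12, 12))
    for xi, wi in zip(0.5 * L * (xg + 1.0), 0.5 * L * wg):
        S = shape_matrix(xi, L)
        M += rho * A * wi * (S.T @ S)
    return M

# Illustrative element (assumptions): 0.5 m of aramid tether with a 1 mm^2 cross-section
L, rho, A = 0.5, 1440.0, 1.0e-6
M = mass_matrix(L, rho, A)

# Sanity check: a rigid unit translation must recover the element mass rho * A * L
q_trans = np.array([1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], dtype=float)
print(np.isclose(q_trans @ M @ q_trans, rho * A * L))   # True
```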
The elastic energy of the flexible cable element is $\begin{array}{}\text{(5)}& {}^{j}U=\frac{\mathrm{1}}{\mathrm{2}}\underset{\mathrm{0}}{\overset{L}{\int }}\left(EA{}^{j}{{\mathit{\epsilon }}_{\mathrm{0}}}^{\mathrm{2}}+E{J}_{\mathit{\kappa }}{}^{j}{\mathit{\kappa }}^{\mathrm{2}}\right)\mathrm{d}x,\end{array}$ where E is the modulus of elasticity, Jκ is the moment of inertia of the flexible cable section, ${}^{j}{\mathit{\epsilon }}_{\mathrm{0}}=\sqrt{{}^{j}\mathbit{r}{{}_{x}^{T}}^{j}{\mathbit{r}}_{x}}-\mathrm{1}$ is the axial strain and ${}^{j}\mathit{\kappa }=\left|{}^{j}{\mathbit{r}}_{x}{×}^{j}{\mathbit{r}}_{xx}\right|/{\left|{}^{j}{\mathbit{r}}_{x}\right|}^{\mathrm{3}}$ is the curvature. The total kinetic energy and strain energy of the system can be written as follows: $\begin{array}{}\text{(6)}& \left\{\begin{array}{l}T={\sum }_{j=\mathrm{1}}^{k}{}^{j}T=\frac{\mathrm{1}}{\mathrm{2}}{\stackrel{\mathrm{˙}}{\mathbit{q}}}^{T}M\stackrel{\mathrm{˙}}{\mathbit{q}}\\ U={\sum }_{j=\mathrm{1}}^{k}{}^{j}U=\frac{\mathrm{1}}{\mathrm{2}}{\sum }_{j=\mathrm{1}}^{k}{\int }_{\mathrm{0}}^{L}\left(E{A}^{j}{\mathit{\epsilon }}_{\mathrm{0}}^{\mathrm{2}}+E{J}_{\mathit{\kappa }}^{j}{\mathit{\kappa }}^{\mathrm{2}}\right)\mathrm{d}x\end{array}.\right\\end{array}$ Since the dynamics of the element are described by generalized coordinates varying with time, the dynamic equations of the rigid body and flexible body are as follows: $\begin{array}{}\text{(7)}& \left\{\begin{array}{l}\frac{\mathrm{d}}{\mathrm{d}t}{\left(\frac{\partial T}{\partial \stackrel{\mathrm{˙}}{\mathbit{q}}}\right)}^{T}-{\left(\frac{\partial T}{\partial \mathbit{q}}\right)}^{T}+{\left(\frac{\partial U}{\partial \mathbit{q}}\right)}^{T}+{\left(\frac{\partial \mathbf{C}}{\partial \mathbf{q}}\right)}^{T}\mathbit{\lambda }={\mathbf{Q}}_{e}\\ \mathbf{C}\left(\mathbit{q},t\right)=\mathrm{0}\end{array},\right\\end{array}$ where Qe is the generalized force vector, λ is the Lagrange multiplier, C is the constraint equation, and q and λ are both unknown quantities. It is derived from the expressions of kinetic energy and strain energy. $\begin{array}{}\text{(8)}& \left\{\begin{array}{l}\frac{\mathrm{d}}{\mathrm{d}t}{\left(\frac{\partial T}{\partial \stackrel{\mathrm{˙}}{\mathbit{q}}}\right)}^{T}-{\left(\frac{\partial T}{\partial \mathbit{q}}\right)}^{T}=\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbit{M}\stackrel{\mathrm{˙}}{\mathbit{q}}\right)-\mathrm{0}=\mathbit{M}\stackrel{\mathrm{˙}}{\mathbit{q}}\\ {\left(\frac{\partial U}{\partial \mathbit{q}}\right)}^{T}={\sum }_{j=\mathrm{1}}^{k}{\int }_{\mathrm{0}}^{L}\left(EA{\mathit{\epsilon }}_{\mathrm{0}}{\left(\frac{{\partial }^{j}{\mathit{\epsilon }}_{\mathrm{0}}}{\partial \mathbit{q}}\right)}^{T}+E{J}_{\mathbit{\kappa }}\mathbit{\kappa }{\left(\frac{{\partial }^{j}\mathbit{\kappa }}{\partial \mathbit{q}}\right)}^{T}\right)\mathrm{d}x\\ =-{\mathbit{Q}}_{\mathit{\kappa }}\end{array}\right\\end{array}$ The dynamic equation of the flexible cable system can be written as follows: $\begin{array}{}\text{(9)}& \left\{\begin{array}{l}\mathbit{M}\stackrel{\mathrm{¨}}{\mathbit{q}}+{\mathbit{C}}_{q}^{\mathbit{T}}\mathbit{\lambda }={\mathbit{Q}}_{k}+{\mathbit{Q}}_{e}\\ \mathbit{C}=\mathrm{0}\end{array}.\right\\end{array}$ 3 Design of the control laws In this section, a double-loop controller based on the active disturbance rejection control is proposed for SITNCS. Figure 6A double-loop controller based on the active disturbance rejection control. 
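Similarly, the strain measures entering Eq. (5) follow directly from the nodal coordinates once the shape-function derivatives are available. The sketch below (again assuming the standard cubic ANCF cable shape functions) evaluates the axial strain ε0 = |rx| − 1 and the curvature κ = |rx × rxx| / |rx|³ at a point of one element; the nodal vector describes an arbitrary, gently bent element and is purely illustrative.

```python
import numpy as np

def shape_derivatives(x, L):
    """First and second x-derivatives of the 3x12 ANCF cable shape matrix."""
    xi = x / L
    ds = [(-6 * xi + 6 * xi**2) / L,
          1 - 4 * xi + 3 * xi**2,
          (6 * xi - 6 * xi**2) / L,
          3 * xi**2 - 2 * xi]
    dds = [(-6 + 12 * xi) / L**2,
           (-4 + 6 * xi) / L,
           (6 - 12 * xi) / L**2,
           (6 * xi - 2) / L]
    Sx = np.hstack([d * np.eye(3) for d in ds])
    Sxx = np.hstack([d * np.eye(3) for d in dds])
    return Sx, Sxx

def strain_and_curvature(q, x, L):
    """Axial strain eps0 = |r_x| - 1 and curvature kappa = |r_x x r_xx| / |r_x|^3 of Eq. (5)."""
    Sx, Sxx = shape_derivatives(x, L)
    rx, rxx = Sx @ q, Sxx @ q
    eps0 = np.linalg.norm(rx) - 1.0
    kappa = np.linalg.norm(np.cross(rx, rxx)) / np.linalg.norm(rx) ** 3
    return eps0, kappa

# Illustrative nodal vector (assumption): a gently bent, nearly unstretched 1 m element
L = 1.0
q = np.array([0.0, 0.0, 0.0,  1.0, 0.0, 0.0,     # r(0), r_x(0)
              1.0, 0.1, 0.0,  1.0, 0.2, 0.0])    # r(L), r_x(L)
print(strain_and_curvature(q, 0.5 * L, L))       # (eps0, kappa) at the element midpoint
```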
## 3.1 Spacecraft dynamics The space-inflated unwinding tether net capture system consists of large-scale flexible inflatable beams and a tether net. The relationship between beam nodes makes the dynamic model very complex and computationally inefficient, which also cannot be directly applied to the design of the control system. The attitude dynamics model considering the rigid-flexible coupling used for controller design would be expressed as $\begin{array}{}\text{(10)}& \left\{\begin{array}{l}\mathbit{J}\stackrel{\mathrm{˙}}{\mathbit{\omega }}+\stackrel{\mathrm{̃}}{\mathbit{\omega }}\left(\mathbit{J}\mathit{\omega }+{\mathbit{\delta }}^{T}\stackrel{\mathrm{˙}}{\mathbit{\eta }}\right)+{\mathbit{\delta }}^{T}\stackrel{\mathrm{˙}}{\mathbit{\eta }}=\mathbit{u}+\mathbit{d}\\ \stackrel{\mathrm{˙}}{\mathbit{\eta }}+\mathbit{C}\stackrel{\mathrm{˙}}{\mathbit{\eta }}+\mathbit{K}\mathbit{\eta }+\mathit{\delta }\stackrel{\mathrm{˙}}{\mathbit{\omega }}=\mathrm{0}\end{array},\right\\end{array}$ where J is the moment of the inertia matrix, u is the control torque, d is the external disturbance, δ is the coupling matrix between the spacecraft and the flexible mechanism, η is the flexible modal coordinates, C is the damping matrix, and K is the stiffness matrix. While J0 is the nominal inertia and ΔJ is the unknown inertia, the equation could be derived as $\begin{array}{}\text{(11)}& \left({\mathbit{J}}_{\mathrm{0}}+\mathrm{\Delta }\mathbit{J}\right)\stackrel{\mathrm{˙}}{\mathbit{\omega }}+\stackrel{\mathrm{̃}}{\mathbit{\omega }}\left({\mathbit{J}}_{\mathrm{0}}+\mathrm{\Delta }\mathbit{J}\right)\mathbit{\omega }+\stackrel{\mathrm{̃}}{\mathbit{\omega }}{\mathbit{\delta }}^{T}\stackrel{\mathrm{˙}}{\mathbit{\eta }}+\mathit{\delta }\stackrel{\mathrm{¨}}{\mathbit{\eta }}=\mathbit{u}+\mathbit{d}.\end{array}$ This yields $\begin{array}{}\text{(12)}& {\mathbit{J}}_{\mathrm{0}}\stackrel{\mathrm{˙}}{\mathbit{\omega }}+\stackrel{\mathrm{̃}}{\mathbit{\omega }}{\mathbit{J}}_{\mathrm{0}}\mathbit{\omega }=\mathbit{u}+{\mathbit{J}}_{\mathrm{0}}{\mathbit{d}}^{\prime },\end{array}$ where ${\mathbit{d}}^{\prime }={\mathbit{J}}_{\mathrm{0}}^{-\mathrm{1}}\left(\mathbit{d}-\stackrel{\mathrm{̃}}{\mathbit{\omega }}\mathrm{\Delta }\mathbit{J}\mathbit{\omega }-\mathrm{\Delta }\mathbit{J}\stackrel{\mathrm{˙}}{\mathbit{\omega }}-\stackrel{\mathrm{̃}}{\mathbit{\omega }}{\mathbit{\delta }}^{T}\stackrel{\mathrm{˙}}{\mathbit{\eta }}-\mathit{\delta }\stackrel{\mathrm{¨}}{\mathbit{\eta }}\right)$. The attitude dynamic equation of satellite is given by $\begin{array}{}\text{(13)}& \stackrel{\mathrm{˙}}{\mathbit{\theta }}=\mathbit{R}\left(\mathbit{\theta }\right)\mathbit{\omega },\end{array}$ where θ=[γ  ψ  ϕ]T is the attitude angle of the satellite, γ is the roll angle, ψ is the pitch angle and ϕ is the yaw angle. 
R is given by $\begin{array}{}\text{(14)}& \mathbit{R}\left(\mathbit{\theta }\right)=\left(\begin{array}{ccc}\mathrm{cos}\mathit{\phi }/\mathrm{cos}\mathit{\psi }& -\mathrm{sin}\mathit{\phi }/\mathrm{cos}\mathit{\psi }& \mathrm{0}\\ \mathrm{sin}\mathit{\phi }& \mathrm{cos}\mathit{\phi }& \mathrm{0}\\ -\mathrm{tan}\mathit{\psi }\mathrm{cos}\mathit{\phi }& \mathrm{tan}\mathit{\psi }\mathrm{sin}\mathit{\phi }& \mathrm{1}\end{array}\right),\end{array}$ so the satellite dynamic equation is $\begin{array}{}\text{(15)}& \left\{\begin{array}{l}\stackrel{\mathrm{˙}}{\mathbit{\omega }}=-{{\mathbit{J}}_{\mathrm{0}}}^{-\mathrm{1}}\stackrel{\mathrm{̃}}{\mathbit{\omega }}{\mathbit{J}}_{\mathrm{0}}\mathbit{\omega }+{{\mathbit{J}}_{\mathrm{0}}}^{-\mathrm{1}}\mathbit{u}+{\mathbit{d}}^{\prime }\\ \stackrel{\mathrm{˙}}{\mathbit{\theta }}=\mathbit{R}\left(\mathbit{\theta }\right)\mathbit{\omega }\\ \mathbit{y}=\mathbit{\theta }\end{array}.\right\\end{array}$ ## 3.2 Design of the proposed ADRC control From the system dynamics in Eq. (15), it is shown that the system is a typical cascade system. Considering that the angular velocity ω and attitude angle θ of spacecraft are measurable, a double-loop controller based on active disturbance rejection control for the spacecraft attitude is proposed. In Fig. 6 the transient procedure is first arranged by the tracking differentiator and then the external-loop feedback controller output virtual angular speed ω. Internal-loop feedback is designed by the ADRC. System uncertainties and disturbances are estimated by the extended state observer and compensated for during each sampling period, with the result of achieving a good tracking effect for the angular velocity ω. The controller consists of an arrangement of the transient procedure, angle feedback law of the outer loop and disturbance compensation. ### 3.2.1 Arrangement of the transient procedure The purpose of arranging the transition process is to reduce the initial control impact in the beginning stage caused by initial errors, which effectively handles the dilemma between overshoot and rapidity. The TD1 is as follows $\begin{array}{}\text{(16)}& \left\{\begin{array}{ll}{x}_{\mathrm{1}}\left(k+\mathrm{1}\right)& ={x}_{\mathrm{1}}\left(k\right)+h{x}_{\mathrm{2}}\left(k\right)\\ {x}_{\mathrm{2}}\left(k+\mathrm{1}\right)& ={x}_{\mathrm{2}}\left(k\right)+h\mathrm{fhan}\left({x}_{\mathrm{1}}\left(k\right)\\ & -v\left(t\right),{x}_{\mathrm{2}}\left(k\right),r,{h}_{\mathrm{0}}\right)\end{array},\right\\end{array}$ where v(t) is the input signal, x1 is the estimated value of v, x2 is the derivative of v, h is a simulation step, r is the speed factor and h0 is the filtering factor. 
$\begin{array}{}\text{(17)}& \mathrm{fhan}\left({x}_{\mathrm{1}},{x}_{\mathrm{2}},r,h\right)=\left[\begin{array}{c}\mathrm{fhan}\left({x}_{\mathrm{11}},{x}_{\mathrm{21}},r,h\right)\\ \mathrm{fhan}\left({x}_{\mathrm{12}},{x}_{\mathrm{22}},r,h\right)\\ \mathrm{fhan}\left({x}_{\mathrm{13}},{x}_{\mathrm{23}},r,h\right)\end{array}\right]\end{array}$ The fhan is defined as follows: $\begin{array}{}\text{(18)}& \left\{\begin{array}{l}d=r{h}^{\mathrm{2}},{a}_{\mathrm{0}}=h{x}_{\mathrm{2}},y={x}_{\mathrm{1}}+{a}_{\mathrm{0}}\\ {a}_{\mathrm{1}}=\sqrt{d\left(d+\mathrm{8}\left|y\right|\right)}\\ {a}_{\mathrm{2}}={a}_{\mathrm{0}}+\mathrm{sign}\left(y\right)\left({a}_{\mathrm{1}}-d\right)/\mathrm{2}\\ a=\left({a}_{\mathrm{0}}+y\right)\mathrm{fsg}\left(y,d\right)+{a}_{\mathrm{2}}\left(\mathrm{1}-\mathrm{fsg}\left(y,d\right)\right)\\ \mathrm{fhan}=-r\left(\frac{a}{d}\right)\mathrm{fsg}\left(a,d\right)-r\phantom{\rule{0.33em}{0ex}}\mathrm{sign}\left(a\right)\left(\mathrm{1}-\mathrm{fsg}\left(a,d\right)\right)\end{array},\right\\end{array}$ where $\mathrm{fsg}\left(x,d\right)=\left(\mathrm{sign}\left(x+d\right)-\mathrm{sign}\left(x-d\right)\right)/\mathrm{2}$. Remark 1. The two-order steepest nonlinear tracking differentiator is used to avoid the vibration of the tether net system. If h0>h, the TD enables the filtering function. ### 3.2.2 Angle feedback law of the outer loop The feedback control law of the outer loop is designed for the angular error of the spacecraft, resulting in the virtual control volume ω in the inner loop. The outer-loop angle feedback control law is $\begin{array}{}\text{(19)}& {\mathit{\omega }}^{*}={R}^{-\mathrm{1}}\left(\mathit{\theta }\right){k}_{\mathrm{1}}\mathrm{fal}\left({\mathit{\theta }}_{d}-\mathit{\theta },\mathit{\alpha },\mathit{\delta }\right),\end{array}$ where ${k}_{\mathrm{1}}=\mathrm{diag}\mathit{\left\{}{k}_{\mathrm{11}},{k}_{\mathrm{12}},{k}_{\mathrm{13}}\mathit{\right\}}$ is a gain matrix for adjusting the speed of tracking the desired value of attitude angle. $\mathrm{fal}\left(e,\mathit{\alpha },\mathit{\delta }\right)=\left[\begin{array}{c}\mathrm{fal}\left({e}_{\mathrm{11}},\mathit{\alpha },\mathit{\delta }\right)\\ \mathrm{fal}\left({e}_{\mathrm{12}},\mathit{\alpha },\mathit{\delta }\right)\\ \mathrm{fal}\left({e}_{\mathrm{13}},\mathit{\alpha },\mathit{\delta }\right)\end{array}\right]$ The function fal is a nonlinear function, and its form is as follows: $\begin{array}{}\text{(20)}& \mathrm{fal}\left(e,\mathit{\alpha },\mathit{\delta }\right)=\left\{\begin{array}{ll}e{\mathit{\delta }}^{\mathit{\alpha }-\mathrm{1}},& \left|e\right|\le \mathit{\delta }\\ {\left|e\right|}^{\mathit{\alpha }}\mathrm{sgn}e,& \left|e\right|>\mathit{\delta }\end{array},\right\\end{array}$ where e is the state error $\mathrm{0}<\mathit{\alpha }<\mathrm{1}$, 0<δ. Remark 2. Because R(θ) could be calculated, the virtual control variable ω could be compensated for by this ascertained function. Then Eq. (13) could transform into $\stackrel{\mathrm{˙}}{\mathit{\theta }}={k}_{\mathrm{1}}\mathrm{fal}\left({\mathit{\theta }}_{d}-\mathit{\theta },\mathit{\alpha },\mathit{\delta }\right)$. The nonlinear feedback control law is adopted so that θ will track θd. Figure 7Attitude-tracking curve. Figure 8Attitude error in the steady-state process. Figure 9Angular-velocity estimation. Figure 10Estimated disturbance. 
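Since Eqs. (16)-(20) specify the nonlinear functions of the controller completely, they can be transcribed directly. The Python sketch below (scalar form, for reference only; the gains are placeholders rather than the tuned values of Table 4) implements fsg, fhan, fal and one step of the tracking differentiator TD1, and lets one verify that x1 tracks a step command while x2 estimates its derivative.

```python
import numpy as np

def fsg(x, d):
    """fsg(x, d) = (sign(x + d) - sign(x - d)) / 2, as defined below Eq. (18)."""
    return (np.sign(x + d) - np.sign(x - d)) / 2.0

def fhan(x1, x2, r, h):
    """Steepest discrete tracking-differentiator function of Eq. (18), scalar form."""
    d = r * h * h
    a0 = h * x2
    y = x1 + a0
    a1 = np.sqrt(d * (d + 8.0 * abs(y)))
    a2 = a0 + np.sign(y) * (a1 - d) / 2.0
    a = (a0 + y) * fsg(y, d) + a2 * (1.0 - fsg(y, d))
    return -r * (a / d) * fsg(a, d) - r * np.sign(a) * (1.0 - fsg(a, d))

def fal(e, alpha, delta):
    """Nonlinear feedback function of Eq. (20), scalar form."""
    return e * delta ** (alpha - 1.0) if abs(e) <= delta else abs(e) ** alpha * np.sign(e)

def td_step(x1, x2, v, h, r, h0):
    """One step of the tracking differentiator TD1, Eq. (16)."""
    x1_next = x1 + h * x2
    x2_next = x2 + h * fhan(x1 - v, x2, r, h0)
    return x1_next, x2_next

# Example: track a unit step command with the TD (parameters are illustrative placeholders)
h, r, h0 = 1e-3, 10.0, 5e-3
x1, x2 = 0.0, 0.0
for k in range(3000):
    x1, x2 = td_step(x1, x2, v=1.0, h=h, r=r, h0=h0)
print(x1, x2)   # x1 approaches the command 1.0, x2 its derivative (about 0 at steady state)
```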
### 3.2.3 Extended state observer design By using the input and output data of the spacecraft, the ESO could estimate the angular velocity ω and the internal and external disturbances of the system in real time. By expanding the first-order system of Eq. (12) into a two-order system, we can get $\begin{array}{}\text{(21)}& \left\{\begin{array}{l}\stackrel{\mathrm{˙}}{\mathbit{\omega }}=-{{\mathbit{J}}_{\mathrm{0}}}^{-\mathrm{1}}\stackrel{\mathrm{̃}}{\mathbit{\omega }}{\mathbit{J}}_{\mathrm{0}}\mathbit{\omega }+{{\mathbit{J}}_{\mathrm{0}}}^{-\mathrm{1}}\mathbit{u}+{\mathbit{d}}^{\prime }\\ {\stackrel{\mathrm{˙}}{\mathbit{d}}}^{\prime }=\frac{\mathrm{d}{\mathbit{d}}^{\prime }}{\mathrm{d}t}\end{array}.\right\\end{array}$ A discrete two-order expansion observer is designed as follows: $\begin{array}{}\text{(22)}& \left\{\begin{array}{l}{\mathit{\xi }}_{\mathrm{1}}={z}_{\mathrm{1}}-\mathit{\omega },\mathrm{fe}=\mathrm{fal}\left({\mathit{\xi }}_{\mathrm{1}},\mathit{\alpha },\mathit{\delta }\right)\\ {z}_{\mathrm{1}}={z}_{\mathrm{1}}+h\left({z}_{\mathrm{2}}-{\mathit{\beta }}_{\mathrm{1}}{\mathit{\xi }}_{\mathrm{1}}-{{J}_{\mathrm{0}}}^{-\mathrm{1}}\mathrm{\Omega }{J}_{\mathrm{0}}\mathit{\omega }+{{J}_{\mathrm{0}}}^{-\mathrm{1}}u\right)\\ {z}_{\mathrm{2}}={z}_{\mathrm{2}}+h\left(-{\mathit{\beta }}_{\mathrm{2}}\mathrm{fe}\right)\end{array}.\right\\end{array}$ Remark 3. The observer states z1 and z2 converge to the state variables ω and d, respectively. β1 and β2 are observer gains. Because of the limited estimation capability of the ESO, the known part $-{J}_{\mathrm{0}}^{-\mathrm{1}}\mathrm{\Omega }{J}_{\mathrm{0}}\mathit{\omega }$ of the nominal model is used in the design process, which could reduce the burden of the ESO and improve its performance and accuracy. Remark 4. The resulting observer estimation error system for errors ${\mathit{\xi }}_{\mathrm{1}}={z}_{\mathrm{1}}-\mathit{\omega }$ and ${\mathit{\xi }}_{\mathrm{2}}={z}_{\mathrm{2}}-{d}^{\prime }$ takes the following form: $\begin{array}{}\text{(23)}& \left\{\begin{array}{l}{\stackrel{\mathrm{˙}}{\mathit{\xi }}}_{\mathrm{1}}={\mathit{\xi }}_{\mathrm{2}}-{\mathit{\beta }}_{\mathrm{1}}{\mathit{\xi }}_{\mathrm{1}}\\ {\stackrel{\mathrm{˙}}{\mathit{\xi }}}_{\mathrm{2}}=-{\mathit{\beta }}_{\mathrm{2}}\mathrm{fe}-{\stackrel{\mathrm{˙}}{d}}^{\prime }\end{array}.\right\\end{array}$ If ${\mathit{\beta }}_{\mathrm{2}}\gg {\stackrel{\mathrm{˙}}{d}}^{\prime }$, the errors ξ1 and ξ2 converge to the zeros. The ESO would achieve good performance, which means z1ω and ${z}_{\mathrm{2}}\to {d}^{\prime }$. Figure 11Control torque without a filter. Figure 12Control torque with a filter. Figure 13Tension of booms. Figure 14Torque of booms. In practical systems, the signals measured by the sensors are unavoidably noisy, which will bring unpredictable disturbances to the controller. The filters need to be designed to extract or restore the original signals from noisy signals. The filter is established by the tracking differentiator. $\begin{array}{}\text{(24)}& \left\{\begin{array}{l}{v}_{\mathrm{1}}={v}_{\mathrm{1}}+h{v}_{\mathrm{2}}\\ {v}_{\mathrm{2}}={v}_{\mathrm{2}}+h\mathrm{fhan}\left({v}_{\mathrm{1}}-\mathit{\omega },{v}_{\mathrm{2}},r,{h}_{\mathrm{0}}\right)\\ {\mathit{\omega }}_{\mathrm{0}}={v}_{\mathrm{1}}+{k}_{\mathrm{0}}h{v}_{\mathrm{2}}\end{array}\right\\end{array}$ Remark 5. The TD2 is used for noise filtering because of its simplicity and ease of use. 
Prediction with the differential signal $v_{2}$ over the prediction step $k_{0}$ prevents the restored values from lagging the original signal. The ESO for the case in which the system output is polluted by noise is therefore designed as follows:

$$\left\{\begin{aligned}&\mathrm{fh}=\mathrm{fhan}\left(v_{1}-\omega,v_{2},r,h_{0}\right)\\ &v_{1}=v_{1}+hv_{2}\\ &v_{2}=v_{2}+h\,\mathrm{fh}\\ &\omega_{0}=v_{1}+k_{0}hv_{2}\\ &\xi_{1}=z_{1}-\omega_{0},\quad\mathrm{fe}=\mathrm{fal}\left(\xi_{1},\alpha,\delta\right)\\ &z_{1}=z_{1}+h\left(z_{2}-\beta_{1}\xi_{1}-J_{0}^{-1}\Omega J_{0}\omega_{0}+J_{0}^{-1}u\right)\\ &z_{2}=z_{2}+h\left(-\beta_{2}\,\mathrm{fe}\right)\end{aligned}\right.\tag{25}$$

Figure 15: Procedure for attitude maneuvering.

### 3.2.4 Disturbance rejection and compensation

The ESO estimates the total disturbance $z_{2}$ in real time, and active disturbance rejection is then achieved by compensating for it in the control law. The inner-loop angular-velocity feedback control law is

$$\left\{\begin{aligned}e_{1}&=\omega^{*}-\omega\\ u&=J_{0}\left(k_{2}\,\mathrm{fal}\left(e_{1},\alpha,\delta\right)-z_{2}\right)+\Omega J_{0}\omega\end{aligned}\right.\tag{26}$$

where $k_{2}=\operatorname{diag}\{k_{21},k_{22},k_{23}\}$ is the gain matrix.

Remark 6. Substituting the control law of Eq. (26) into the dynamics of Eq. (21) gives $\dot{\omega}=k_{2}\,\mathrm{fal}\left(e_{1},\alpha,\delta\right)+d'-z_{2}$; when the estimation error is small enough, $\dot{\omega}=k_{2}\,\mathrm{fal}\left(e_{1},\alpha,\delta\right)$. It can then be shown that $\omega$ tracks $\omega^{*}$.
Considering the above design procedure, the ADRC law for the spacecraft is obtained as follows:

$$\left\{\begin{aligned}&\omega^{*}=R^{-1}(\theta)\,k_{1}\,\mathrm{fal}\left(\theta_{d}-\theta,\alpha,\delta\right)\\ &\mathrm{fh}=\mathrm{fhan}\left(v_{1}-\omega,v_{2},r,h_{0}\right)\\ &v_{1}=v_{1}+hv_{2}\\ &v_{2}=v_{2}+h\,\mathrm{fh}\\ &\omega_{0}=v_{1}+k_{0}hv_{2}\\ &\xi_{1}=z_{1}-\omega_{0},\quad\mathrm{fe}=\mathrm{fal}\left(\xi_{1},\alpha,\delta\right)\\ &z_{1}=z_{1}+h\left(z_{2}-\beta_{1}\xi_{1}-J_{0}^{-1}\Omega J_{0}\omega_{0}+J_{0}^{-1}u\right)\\ &z_{2}=z_{2}+h\left(-\beta_{2}\,\mathrm{fe}\right)\\ &e_{1}=\omega^{*}-\omega_{0}\\ &u=J_{0}\left(k_{2}\,\mathrm{fal}\left(e_{1},\alpha,\delta\right)-z_{2}\right)+\Omega J_{0}\omega_{0}\end{aligned}\right.\tag{27}$$

Figure 16: Attitude-tracking curve. Figure 17: Angular-velocity estimation. Figure 18: Estimated disturbance. Figure 19: Control torque.

# 4 Numerical simulation and analysis

While the SITNCS is approaching a fast-moving target, many factors may cause the system to fail to complete the operation. In this section, a PID controller is included for comparison, and the performance of the ADRC is validated, showing that it can meet the requirements for a rapid attitude maneuver.

Table 2: Parameters of the inflatable tether net system. Table 3: Parameters of the spacecraft. Table 4: Parameters of the ADRC control. Table 5: Transition time parameters of the controller.

## 4.1 Simulation parameters

The service spacecraft is assumed to be a single rigid body, represented by a cuboid with dimensions of 2.5 m × 2.5 m × 4 m. For the capture mechanism, the length of its short side is $l_{d}=0.4$ m, the length of its long side is $l_{u}=4$ m, and its height is $h=4$ m. The constraint between the inflatable rods and the tether net is a spherical joint, and the constraint between the inflatable booms and the satellite is fixed. Because of the special requirements of the space environment, polyimide is used for the inflatable rods, and aramid fiber is chosen for the net. The equivalent parameters of the system with an internal pressure of 25 kPa are listed in Table 2. Assuming the moment of inertia of the capture mechanism is unknown, the main parameters of the spacecraft are listed in Table 3: the moment of inertia of the spacecraft J, the environmental disturbance torque d, the initial attitude angle θ, the initial angular velocity ω, the expected attitude angle θd and the expected angular velocity ωd. It is reasonable to assume that the sensor signal is disturbed by white noise with a peak value of 0.1% of its output. The sampling period is h = 1 ms.
The parameters of the PID controller are $K_{p}=\operatorname{diag}\{1152,1024,1260\}$ and $K_{d}=\operatorname{diag}\{1440,1280,1600\}$. The parameters of the ADRC are listed in Table 4.

Figure 20: Procedure of attitude maneuvering.

## 4.2 Attitude control of the spacecraft based on the ADRC

It is assumed that the moment of inertia of the satellite can be accurately obtained and that of the capture mechanism is unknown. $t_{\mathrm{td}}$ is the transition time arranged by the TD1, and $t_{\theta}$ is defined as the transition time after which the attitude error remains between $-10^{-4}$ and $10^{-4}$ rad s$^{-1}$. The simulation time is 20 s. Comparing the two control schemes, the simulation results for the attitude tracking and steady-state error of the spacecraft are shown in Figs. 7 and 8. The ADRC has excellent dynamic and steady-state performance, with an error of less than $10^{-4}$ rad. The PID controller is strongly affected by the vibration of the flexible tether net: its dynamic tracking shows an obvious delay and its steady-state error is larger, so it cannot meet the high-precision control requirements. Table 5 shows that the spacecraft can track the desired signal within the transition time arranged by the TD1. The ADRC has high attitude accuracy and robust stability, which meet the critical requirements.

The performance of the ESO is shown in Figs. 9 and 10. The ESO obtains estimates of the attitude angular velocity of the spacecraft and of the disturbance caused by the flexible vibration of the capture mechanism. The maximum amplitude of the disturbance reaches 6 N m, which would introduce a non-negligible disturbance into the satellite platform control.

The benefit of the TD2 filter can be seen by comparing the controller with and without it. Figure 11 shows the original control torque, and Fig. 12 shows the control torque after the filter is applied. The noise clearly has a strong influence on the ADRC; through filtering, the high-frequency vibration of the control torque is markedly suppressed, which is beneficial for the actuator.

The flexible vibration of the inflatable rods and the tether net accompanies the attitude maneuver of the satellite. The largest deformation occurs at the connection point between the inflatable rods and the satellite, resulting in considerable tension and torque. In Figs. 13 and 14, the maximum tension is about 110 N, and the maximum torque is about 450 N m. According to the material strength loading limit, it can therefore be checked whether the stress exceeds the limit of the inflatable rod; if it does, a practical remedy is to increase the transition time so as to reduce the acceleration. The whole procedure is shown in Fig. 15. It can be seen that the flexible capture mechanism vibrates during the satellite attitude maneuver, but its amplitude is relatively small, so the configuration is maintained and the system tends to a stable state.

## 4.3 Robustness of a larger SITNCS

In order to adapt to a larger capture target, we can increase the size of the capture mechanism. For the controller, this means more parameter uncertainty and larger disturbances. The assumption is that $l_{d}=0.8$ m, $l_{u}=8$ m and $h=8$ m. The simulation time is still 20 s. Figures 16 and 17 show that the transition time is $t_{\theta}=[12.66\ \ 12.64\ \ 12.62]^{T}$. When the size of the capture mechanism is increased, the control performance of the system is still excellent. In Figs.
18 and 19, the amplitude of the flexible vibration increases while the vibration frequency decreases, which improves the estimation performance of the ESO. Therefore, the ADRC has good robustness and disturbance rejection. The simulation (Fig. 20) shows the dynamic changes of the system during the control process.

# 5 Conclusions

This paper studies the rapid maneuvering of a space capture system with flexible inflatable rods and a tether net. A two-loop active disturbance rejection controller is proposed to achieve rapid, high-precision maneuvering of the space inflatable tether-net capture system; meanwhile, a second-order observer is designed to estimate the tether-net disturbance, which can then be compensated. The proposed control method not only achieves the desired performance but is also robust to the disturbance caused by the flexible tether-net vibration.

Data availability. The data cannot be shared publicly at this time as they also form part of an ongoing study. All data included in this study are available upon request by contacting the corresponding author.

Author contributions. CW wrote the whole paper. HL did the simulation and designed the controller. CT modeled the dynamics of the tether net system. YZ drew the figures.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors would like to acknowledge the support of the Open Fund of the Science and Technology on Space Intelligent Control Laboratory (grant no. 6142208180402) and the National Defense Key Discipline Laboratory of Micro-Spacecraft Technology (grant no. HIT.KLOF.MST.201703).

Financial support. This research has been supported by the National Natural Science Foundation of China (grant no. 11772102), the Natural Scientific Research Innovation Foundation in HIT (grant no. 30620150071) and the National Defense Key Discipline Laboratory of Micro-Spacecraft Technology (grant no. HIT.KLOF.MST.201703).

Review statement. This paper was edited by Jinguo Liu and reviewed by Lijie Chen and two anonymous referees.
### Cyclic dependencies (a note from Hacker News) In response to a comment by bob1029 on Hacker News, which ran as follows. Language constraints aside, the real world is not something that can be cleanly modeled without the notion of circular dependencies between things. Not very many real, practical activities can be truly isolated from other closely-related activities and wrapped up in some leak-proof contract. Consider briefly the domain model of a bank. Customers rely on Accounts (I.e. have one or many). Accounts rely on Customers (i.e. have one or many). This is a simple kind of test you can apply to your language and architecture to see if you have what it takes to attack the problem domain. Most approaches lauded on HN would be a messy clusterfuck when attempting to deal with this. Now, if I can simply call CustomerService from AccountService and vice versa, there is no frustration anymore. This is the power of reflection. It certainly has other caveats, but there are advantages when it is used responsibly. If you want to understand why functional-only business applications are not taking the world by storm, this is the reason. If it weren’t for a few “messy” concepts like reflection, we would never get anything done. Having 1 rigid graph of functions and a global ordering of how we have to compile these things… My co workers would laugh me off the conference call if I proposed these constraints be imposed upon us today. In F#, you would naturally approach this by defining a Customer independent of the Account (e.g. just containing a name and address), and an Account independent of the Customer (e.g. just containing an ID), and then a Bank which is a mapping of Account to Set<Customer>. What you see as a cyclic dependency, I see as a data type that you haven’t reified. Other commenters note that indeed this is precisely how SQL would model the domain without cyclic dependencies. And JackFr gave a rather nice soundbite: Not really rocket science but some times your model is telling you something if you’re willing to listen.
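Returning to the F# modelling suggested above, here is a minimal sketch of that shape; the field and function names are illustrative choices, not from the original post:

```fsharp
type Customer = { Name: string; Address: string }
type Account  = { Id: int }

// The Customer/Account relationship is reified as its own value,
// so neither type needs to reference the other.
type Bank = Map<Account, Set<Customer>>

let customersOf (bank: Bank) (account: Account) : Set<Customer> =
    bank |> Map.tryFind account |> Option.defaultValue Set.empty

let accountsOf (bank: Bank) (customer: Customer) : Account list =
    bank
    |> Map.toList
    |> List.filter (fun (_, owners) -> Set.contains customer owners)
    |> List.map fst
```

The apparent cycle disappears because the ownership relation lives in Bank rather than inside either record.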
# Re: [tlaplus] VIEW for Structure variable

Hunnn, thanks, Markus o/ Everyday is day to learn something about TLC.

On Thu, Jun 10, 2021, 1:32 AM Markus Kuppe <tlaplus-google-group@xxxxxxxxxxx> wrote:

On 09.06.21 21:04, pfeod...@xxxxxxxxx wrote: > In TLC, we have the VIEW config where you choose which variables are > important and which are auxiliary. > Let's say I have a variable Var of interest which is always a > Structure, would it be too hard to do something like VIEW but where > you say that, for this Var, TLC does not consider some keys for > fingerprint (kinda like auxiliary keys)?

A VIEW *is a state function* (page 243 in Specifying Systems). In other words, any state-level^1 (unprimed) expression can be a view, such as <<[n \in SomeSubsetOfDOMAIN Var |-> Var[n]], other, variables>>

Don't shoot yourself in the foot!

Markus

^1 TLC will keep you from defining a constant-level expression.
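As a concrete illustration of Markus's point (the identifiers Var, "aux", otherVar, and MyView are placeholders, not from the thread): if Var is a record and the key "aux" should not affect the fingerprint, the spec can define a state function such as

```
MyView == <<[k \in DOMAIN Var \ {"aux"} |-> Var[k]], otherVar>>
```

and the TLC configuration file then selects it with `VIEW MyView`.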
# Trigonometry! #72

Geometry Level 4

The solution set of $|4\sin (x) - 1| < \sqrt {5}$ for $|x|<\pi$ is

Options:

1. $\left(-\pi,-\frac {4\pi}{5} \right)\cup\left(-\frac {\pi}{5} , \frac {\pi}{10} \right) \cup \left( \frac {9\pi}{10} , \pi\right)$
2. $\left(-\frac {9\pi}{10}, -\frac {\pi}{10}\right) \cup \left(\frac {3\pi}{10} , \frac {7\pi}{10} \right)$
3. $\left(-\pi,-\frac {9\pi}{10}\right)\cup\left(-\frac {\pi}{10},\frac{3\pi}{10}\right)\cup\left(\frac{7\pi}{10},\pi\right)$
4. $\left(-\frac {7\pi}{10} , \frac {9\pi}{10}\right)$

Note: Enter the serial number of the correct option.

This problem is part of the set Trigonometry.
Translate equations to R by Damir Cavar (March 2010) In this section we will take a look at how to translate some typical equation into R. Let us start with the formula for variance: The LaTeX code for the equation is: S^2_{N-1}=\frac{1}{N-1}\sum_{i=1}^N{(x_i - \bar{x})^2} What this formula says is the following: a. Given a vector of values (some experimental results or measures) of length N, for example: x <- c(1, 2, 4, 5, 6, 8, 9) In this case, x is of length 7, that is N = 7, which can be verified by using the following function in R: length(x) b. for each value, from the beginning (index position i = 1), to the last element (index position i = N = 7), subtract the arithmetic mean of the distribution from this value, and square the result: The arithmetic mean of the distribution can be calculated using: mean(x) The subtraction and squaring can be calculated as: (x - mean(x))^2 The resulting vector for x as defined above should look like: 16 9 1 0 1 9 16 c. sum up all the resulting values, and divide the result by (N - 1). Summing up the values in a list can be achieved by using the function sum(): sum(x) The output for x as defined above should be: 35 To complete our calculation using steps a. and b., the formula so far should be translated as: sum( (x - mean(x))^2 ) The output of this calculation for the defined x should be: 52 The final step is to divide this part of the calculation result by N, which is the length of x reduced by 1: sum( (x - mean(x))^2 ) / (length(x) - 1) The resulting value: 8.666667 is the variance of the measures in x, as defined above. We can define this calculation as a function, and store it in some external file for reuse. A function can be defined in the following way: s2variance <- function (x) { sum( (x - mean(x))^2 ) / (length(x) - 1) } s2variance is now defined as a function that takes one parameter, i.e. a vector of values, measures or results. The returned value is the variance of the distribution, as explained above. For example: s2variance( c(3, 5, 1, 1, 2, 1, 0, 4) ) should return the value: 2.982143 In fact, R comes with a predefined function for variance calculation, as explained in the previous section (the function var), such that: var( c(3, 5, 1, 1, 2, 1, 0, 4) ) should return the same value as our own function definition above, i.e.: 2.982143 A variation of this equation is the variance of a population: The LaTeX code for this equation is: S^2_N=\frac{1}{N}\sum_{i=1}^N{(x_i - \bar{x})^2} The only difference is that in this function we do not divide by the length of x minus 1, but just by the length of x, as in the following R code: sum( (x - mean(x))^2 ) / length(x) Let us consider now the equation for the Standard deviation: The LaTeX code for this equation is: \sigma = \sqrt{ \frac{1}{N} \sum_{i=1}^N{ (x_i - \bar{x})^2 }} This equation is just the square root of the population variance. 
The corresponding R code is: sqrt( sum( (x - mean(x))^2 ) / length(x) ) To declare this code as a function for storage and reuse, just wrap it in a function declaration as: sdd <- function (x) { sqrt( sum( (x - mean(x))^2 ) / length(x) ) } and use it as follows: sdd(x) which should return: 2.725541 Let us consider a random variable X, represented here as a vector with probabilities: x <- c(0.1, 0.3, 0.15, 0.25, 0.01, 0.08, 0.01, 0.06, 0.04) The vector x in this case is a complete list of event probabilities, such that the sum of all probabilities in x is equal to 1, as can be verified with the following R code: sum(x) Now, the (Shannon) entropy H(x) is defined as: The LaTeX code for this equation is: H(X) = -\sum_{i=1}^n{p(x_i)log_b p(x_i)} The equation says that all probabilities of events for the random variable X have to be multiplied with the logarithm of them, summarized and multiplied by -1. We use here the logarithm to the base of 2, which is in R: log2(x) Thus, given that the vector x contains probabilities, the multiplication of each single probability with the log to the base of two of it, is in R: x * log2(x) For x as defined above, the resulting list should look like: -0.33219281 -0.52108968 -0.41054484 -0.50000000 -0.06643856 -0.29150850 -0.06643856 -0.24353362 -0.18575425 Summing up the values in the resulting list can be achieved by the function sum: sum(x * log2(x)) and the absolute value of this summation is returned by multiplying with -1: -sum(x * log2(x)) By the way, the initial sum is always negative, because the log of a probability is always negative, given that a probability is between 0 and 1.
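Following the same pattern used for s2variance and sdd above, the entropy calculation can also be wrapped in a reusable function (the function name entropy is an arbitrary choice here, not a predefined R function): entropy <- function (x) { -sum( x * log2(x) ) } For the probability vector x defined above, entropy(x) should return the value: 2.617501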
New Zealand Level 8 - NCEA Level 3

# Graphing Ellipses

## Interactive practice questions

Sketch the graph of the equation $\frac{x^2}{25}+\frac{y^2}{16}=1$.

Sketch the graph of the equation $9x^2+16y^2=144$.

Use a graphing utility to sketch the graph of the ellipse $16x^2+9y^2=144$. Which of the following graphs do you get?

Use a graphing utility to sketch the graph of the ellipse $\frac{x^2}{4}+\frac{y^2}{16}=1$. Which of the following graphs do you get?

### Outcomes

#### M8-1 Apply the geometry of conic sections

#### 91573 Apply the geometry of conic sections in solving problems
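For sketching, it can help to rewrite each equation in the standard form $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$. As a worked illustration (not part of the original question set), the second equation becomes

$$9x^2+16y^2=144 \;\Longrightarrow\; \frac{x^2}{16}+\frac{y^2}{9}=1,$$

so the semi-axes are $a=4$ along $x$ and $b=3$ along $y$, giving $x$-intercepts $(\pm 4,0)$ and $y$-intercepts $(0,\pm 3)$.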
Pythagoras: a^2+b^2=c^2 which is the same as: 4n^4+8n^3+8n^2+4n+1 = 4n^4+8n^3+8n^2+4n+1 Now use Quadtratic Equations to proove that this is true (2n^2+2n+1)^2 = (2n^2+2n+1)^2. It must end as the equation above Can someone talk me through all the steps plz 2. Originally Posted by SeaN187 4n^4+8n^3+8n^2+4n+1 = 4n^4+8n^3+8n^2+4n+1 Now use Quadtratic Equations to proove that this is true (2n^2+2n+1)^2 = (2n^2+2n+1)^2. It must end as the equation above Can someone talk me through all the steps plz This equation is in the form $\displaystyle \left(ax^2+bx+c\right)^2=\left(ax^2+bx+c\right)^2$ and $\displaystyle \left(ax^2+bx+c\right)^2$ happens to equal $\displaystyle (ax^2)^2+2abx^3+(bx)^2+2acx+2bcx+c^2$ so, $\displaystyle \left(an^2+bn+c\right)^2=(an^2)^2+2abn^3+(bn)^2+2a cn^2+2bcn+c^2$substitute $\displaystyle \left(2n^2+4n+1\right)^2$$\displaystyle =(2n^2)^2+2(2)(2)n^3+(2n)^2+2(2)(1)n^2+2(2)(1)n+1^ 2 now do the work \displaystyle \left(2n^2+4n+1\right)^2=2^2n^{(2\cdot2)}+8n^3+2^2 n^2+4n^2+4n+1 \displaystyle \left(2n^2+4n+1\right)^2=4n^4+8n^3+4n^2+4n^2+4n+1 \displaystyle \left(2n^2+4n+1\right)^2=4n^4+8n^3+8n^2+4n+1 and since you can do the same thing to the other side the answer becomes, \displaystyle 4n^4+8n^3+8n^2+4n+1$$\displaystyle =4n^4+8n^3+8n^2+4n+1$ 3. quick yer thnx for tht but the final answer has got to be (2n^2+2n+1)^2 = (2n^2+2n+1)^2. not 4n^4+8n^3+8n^2+4n+1 = 4n^4+8n^3+8n^2+4n+1 so you've got to work from 4n^4+8n^3+8n^2+4n+1 = 4n^4+8n^3+8n^2+4n+1 to (2n^2+2n+1)^2 = (2n^2+2n+1)^2 im not trying to make u sound stupid or ought u sound proper smart and i understand what you just did there and thanks for doing it but can you do it like it is above please because i really dont know where to start 4. No problem, here's how I would do it..... the problem starts in the form, $\displaystyle (ax^2)^2+2abx^3+(bx)^2+2acx+2bcx+c^2$, and ends in the form, $\displaystyle \left(ax^2+bx+c\right)^2$ so all you need to do is find the value a $\displaystyle a$, $\displaystyle b$, and $\displaystyle c$. You know, thanks to my work above that $\displaystyle (ax^2)^2=4x^4$ so solve for $\displaystyle a$ and you get $\displaystyle a=2$. Same thing for $\displaystyle c$. $\displaystyle c^2=1$ $\displaystyle c=1$ Now solve the entire equation to find $\displaystyle b$, and then put those numbers into the form $\displaystyle \left(ax^2+bx+c\right)^2$ 5. isn't that showing that the formula is true not prooving it? 6. Originally Posted by SeaN187 isn't that showing that the formula is true not prooving it? I have found the answer mathematically, instead of guessing, and that proves it. I could show you the complete work I did if you want. 7. yeh n u show me all the work u did plz n can i ask u a question why does everyone do their equations in tht big writting javascript thing 8. Originally Posted by SeaN187 Pythagoras: a^2+b^2=c^2 which is the same as: 4n^4+8n^3+8n^2+4n+1 = 4n^4+8n^3+8n^2+4n+1 Now use Quadtratic Equations to proove that this is true (2n^2+2n+1)^2 = (2n^2+2n+1)^2. It must end as the equation above Can someone talk me through all the steps plz Since both the RHS and LHS are equal this is trivial true: $\displaystyle (2n^2+2n+1)^2 = (2n^2+2n+1)^2$ RonL 9. 
Originally Posted by SeaN187 yeh n u show me all the work u did plz n can i ask u a question why does everyone do their equations in tht big writting javascript thing Here is the complete work: $\displaystyle 4n^4+8n^3+8n^2+4n+1$$\displaystyle =a^2n^4+2abn^3+b^2n^2+2acn^2+2bcn+c^2 extending this, we get: \displaystyle 4n^4+8n^3+8n^2+4n+1$$\displaystyle -\left(a^2n^2+2abn^3+b^2n^2+2a cn^2+2bcn+c^2\right)=0$ you can only subtract like terms so, $\displaystyle 4n^4-a^2n^4+8n^3-2abn^3$$\displaystyle +8n^2-(b^2n^2+2acn^2)+4n-2bcn+1-c^2=0$ because each segment has to equal zero, we can split the equation into 5 pieces. i)$\displaystyle 4n^4-a^2n^4=0$ ii)$\displaystyle 8n^3-2abn^3=0$ iii)$\displaystyle 8n^2-(b^2n^2+2acn^2)=0$ iv)$\displaystyle 4n-2bcn=0$ v)$\displaystyle 1-c^2=0$ I used equation i) to find "a" $\displaystyle 4n^4-a^2n^4=0$ $\displaystyle 4n^4=a^2n^4$ $\displaystyle 4=a^2$ $\displaystyle \sqrt{4}=a$ $\displaystyle 2=a$ I used equation v) to find "c" $\displaystyle 1-c^2=0$ $\displaystyle 1=c^2$ $\displaystyle \sqrt{1}=c$ $\displaystyle 1=c$ I used equation iv) to find "b" $\displaystyle 4n-2bcn=0$ $\displaystyle 4n=2bcn$ $\displaystyle 4n=2b(1)n$ $\displaystyle 4n=2bn$ $\displaystyle 2n=bn$ $\displaystyle 2=b$ and all you need to do is substitute those numbers into the answer. $\displaystyle \left(an^2+bn+c\right)^2$ $\displaystyle \left(2n^2+2n+1\right)^2$ And so we have reached the answer using mathematical steps, proving it. We use the LaTex code because it is much easier to understand than other normal text. 10. yer thanks alot a totally understand now 11. this isnt using quadratic equations though is it? i think he said i have to use quadratic equations 12. Originally Posted by SeaN187 Pythagoras: a^2+b^2=c^2 which is the same as: 4n^4+8n^3+8n^2+4n+1 = 4n^4+8n^3+8n^2+4n+1 Now use Quadtratic Equations to proove that this is true (2n^2+2n+1)^2 = (2n^2+2n+1)^2. It must end as the equation above Can someone talk me through all the steps plz Divide both sideds of: $\displaystyle 4n^4+8n^3+8n^2+4n+1=4n^4+8n^3+8n^2+4n+1$ by: $\displaystyle 2n^2+2n+1$, this gives: $\displaystyle 2n^2+2n+1=2n^2+2n+1$, now square: $\displaystyle (2n^2+2n+1)^2=(2n^2+2n+1)^2$. RonL PS check the question, you appear to be asking us to prove $\displaystyle x=x$. 13. cheers that is great captain black thanks to both captain black and quick luv yaxxx o n captain black im proving a^2+b^2=c^2 14. Originally Posted by SeaN187 isn't that showing that the formula is true not prooving it? I really do not see what the problem is. I understand what you are saying that is was not mathematically derived but as CaptainBlack said because if expanded it is equal the formula is proved.
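For reference, the algebra the thread is circling around can be written out in one place (this is a summary of the discussion above, not an additional post). With $a=2n^2$, $b=2n$, $c=1$,

$$(2n^2+2n+1)^2 = a^2+b^2+c^2+2ab+2ac+2bc = 4n^4+8n^3+8n^2+4n+1.$$

Equivalently, since $(2n+1)^2=4n^2+4n+1$ and $(2n^2+2n)^2=4n^4+8n^3+4n^2$, their sum is also $4n^4+8n^3+8n^2+4n+1$, which verifies the Pythagorean relation $(2n+1)^2+(2n^2+2n)^2=(2n^2+2n+1)^2$ that the original question appears to be based on.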
# 21.4 Transmutation and nuclear energy (Page 7/26)

The energy produced by a reactor fueled with enriched uranium results from the fission of uranium as well as from the fission of plutonium produced as the reactor operates. As discussed previously, the plutonium forms from the combination of neutrons and the uranium in the fuel. In any nuclear reactor, only about 0.1% of the mass of the fuel is converted into energy. The other 99.9% remains in the fuel rods as fission products and unused fuel. All of the fission products absorb neutrons, and after a period of several months to a few years, depending on the reactor, the fission products must be removed by changing the fuel rods. Otherwise, the concentration of these fission products would increase and absorb more neutrons until the reactor could no longer operate.

Spent fuel rods contain a variety of products, consisting of unstable nuclei ranging in atomic number from 25 to 60, some transuranium elements, including plutonium and americium, and unreacted uranium isotopes. The unstable nuclei and the transuranium isotopes give the spent fuel a dangerously high level of radioactivity. The long-lived isotopes require thousands of years to decay to a safe level. The ultimate fate of the nuclear reactor as a significant source of energy in the United States probably rests on whether or not a politically and scientifically satisfactory technique for processing and storing the components of spent fuel rods can be developed.

## Nuclear fusion and fusion reactors

The process of converting very light nuclei into heavier nuclei is also accompanied by the conversion of mass into large amounts of energy, a process called fusion. The principal source of energy in the sun is a net fusion reaction in which four hydrogen nuclei fuse and produce one helium nucleus and two positrons. This is a net reaction of a more complicated series of events:

$$4\,{}_{1}^{1}\text{H} \longrightarrow {}_{2}^{4}\text{He} + 2\,{}_{+1}^{0}\text{e}$$

A helium nucleus has a mass that is 0.7% less than that of four hydrogen nuclei; this lost mass is converted into energy during the fusion. This reaction produces about $3.6 \times 10^{11}$ kJ of energy per mole of ${}_{2}^{4}\text{He}$ produced. This is somewhat larger than the energy produced by the nuclear fission of one mole of U-235 ($1.8 \times 10^{10}$ kJ), and over 3 million times larger than the energy produced by the (chemical) combustion of one mole of octane (5471 kJ).

It has been determined that the nuclei of the heavy isotopes of hydrogen, a deuteron, ${}_{1}^{2}\text{H}$, and a triton, ${}_{1}^{3}\text{H}$, undergo fusion at extremely high temperatures (thermonuclear fusion). They form a helium nucleus and a neutron:

$${}_{1}^{2}\text{H} + {}_{1}^{3}\text{H} \longrightarrow {}_{2}^{4}\text{He} + {}_{0}^{1}\text{n}$$

This change proceeds with a mass loss of 0.0188 amu, corresponding to the release of $1.69 \times 10^{9}$ kilojoules per mole of ${}_{2}^{4}\text{He}$ formed. The very high temperature is necessary to give the nuclei enough kinetic energy to overcome the very strong repulsive forces resulting from the positive charges on their nuclei so they can collide. Useful fusion reactions require very high temperatures for their initiation—about 15,000,000 K or more. At these temperatures, all molecules dissociate into atoms, and the atoms ionize, forming plasma.
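As a quick check of the quoted figure (a back-of-the-envelope calculation added here, not part of the original text), the 0.0188 amu mass loss converts to energy via $E=\Delta m\,c^{2}$:

$$\Delta m = 0.0188\ \text{amu}\times 1.6605\times10^{-27}\ \tfrac{\text{kg}}{\text{amu}} \approx 3.12\times10^{-29}\ \text{kg},$$
$$E = \Delta m\,c^{2} \approx 3.12\times10^{-29}\ \text{kg}\times\left(3.00\times10^{8}\ \tfrac{\text{m}}{\text{s}}\right)^{2} \approx 2.8\times10^{-12}\ \text{J per reaction},$$

and multiplying by Avogadro's number, $2.8\times10^{-12}\ \text{J}\times 6.022\times10^{23}\ \text{mol}^{-1}\approx 1.7\times10^{9}\ \text{kJ mol}^{-1}$, in agreement with the value quoted above.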
These conditions occur in an extremely large number of locations throughout the universe—stars are powered by fusion. Humans have already figured out how to create temperatures high enough to achieve fusion on a large scale in thermonuclear weapons. A thermonuclear weapon such as a hydrogen bomb contains a nuclear fission bomb that, when exploded, gives off enough energy to produce the extremely high temperatures necessary for fusion to occur.
# Fundamental frequency in the Fourier series of heat equation. For a function $$f(x)$$ where $$x\in[0,1]$$, the Fourier series has a fundamental frequency of $$2\pi$$. But I noticed that in the Fourier series expansion of the solution of heat equation (with same domain and homogeneous boundary conditions), the fundamental frequency was $$\pi$$. But why is that the case? Won't the set of functions having $$2\pi$$ as the fundamental frequency already form a complete set of basis? So, shouldn't the extra frequencies and the corresponding terms ($$\sin \pi x, \sin 3\pi x, \sin 5\pi x$$....) be redundant? • If you want to expand in functions with a period of $1$, then you want to use $\{ e^{2\pi i nx }\}_{n=-\infty}^{\infty}$. – COVID-20 Sep 15 at 3:04 $$\{ \sin(n\pi x) \}_{n=1}^{\infty}$$ is a complete orthogonal basis of $$L^2[0,1]$$. And $$\{ e^{2\pi i nx} \}_{n=-\infty}^{\infty}$$ is a complete orthonormal basis of $$L^2[0,1]$$. You can expand $$\sin(\pi x)$$ in $$L^2[0,1]$$ using a series $$\sum_{n=-\infty}^{\infty}a_n e^{2\pi inx}$$. That might seem a little strange, but no more strange than being able to expand $$\cos(\pi x)$$ in an $$L^2$$ convergent series of functions $$\{ \sin(n\pi x) \}_{n=0}^{\infty}$$. You'll get $$L^2$$ convergence, but obviously that won't translate to pointwise convergence at every point of $$[0,1]$$, and it doesn't have to in order to get $$L^2[0,1]$$ convergence. In the same way, you can expand $$\cos(\pi x/19)$$ in a series of $$\{ \sin(n\pi x) \}_{n=1}^{\infty}$$, and the series will converge in $$L^2[0,1]$$. It all seems unlikely at first glance, but it's all part of the Mathemagic.
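For reference (a standard fact added here, not part of the original answer), the expansion relies on the orthogonality of the sine basis on $[0,1]$:

$$\int_0^1 \sin(n\pi x)\sin(m\pi x)\,dx=\tfrac{1}{2}\delta_{nm},$$

so any $f\in L^2[0,1]$ has an $L^2$-convergent expansion $f(x)=\sum_{n=1}^{\infty}b_n\sin(n\pi x)$ with coefficients $b_n=2\int_0^1 f(x)\sin(n\pi x)\,dx$.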
# Face detection (人脸检测)

## Code

```python
# Face Detection Example
#
# This example shows off the built-in face detection feature of the OpenMV Cam.
#
# Face detection works by using the Haar Cascade feature detector on an image. A
# Haar Cascade is a series of simple area contrast checks. For the built-in
# frontalface detector there are 25 stages of checks with each stage having
# hundreds of checks apiece. Haar Cascades run fast because later stages are
# only evaluated if previous stages pass. Additionally, your OpenMV Cam uses
# a data structure called the integral image to quickly execute each area
# contrast check in constant time (the reason for feature detection being
# grayscale only is because of the space requirement for the integral image).

import sensor, time, image

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(1)
sensor.set_gainceiling(16)
# HQVGA and GRAYSCALE are the best for face tracking.
sensor.set_framesize(sensor.HQVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

# Load the built-in frontal face Haar Cascade.
# By default this will use all stages; fewer stages is faster but less accurate.
face_cascade = image.HaarCascade("frontalface", stages=25)

# FPS clock
clock = time.clock()

while (True):
    clock.tick()

    # Capture snapshot
    img = sensor.snapshot()

    # Find objects.
    # Note: Lower scale factor scales-down the image more and detects smaller objects.
    # Higher threshold results in a higher detection rate, with more false positives.
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)

    # Draw objects
    for r in objects:
        img.draw_rectangle(r)

    # Print FPS.
    # Note: Actual FPS is higher, streaming the FB makes it slower.
    print(clock.fps())
```
# Adversarial Risk Bounds for Binary Classification via Function Transformation

We derive new bounds for a notion of adversarial risk, characterizing the robustness of binary classifiers. Specifically, we study the cases of linear classifiers and neural network classifiers, and introduce transformations with the property that the risk of the transformed functions upper-bounds the adversarial risk of the original functions. This reduces the problem of deriving adversarial risk bounds to the problem of deriving risk bounds using standard learning-theoretic techniques. We then derive bounds on the Rademacher complexities of the transformed function classes, obtaining error rates on the same order as the generalization error of the original function classes. Finally, we provide two algorithms for optimizing the adversarial risk bounds in the linear case, and discuss connections to regularization and distributional robustness.

## 1 Introduction

Deep learning systems are becoming ubiquitous in everyday life. From virtual assistants on phones to image search and translation, neural networks have vastly improved the performance of many computerized systems in a short amount of time (Goodfellow et al., 2016). However, neural networks have a variety of shortcomings: A peculiarity that has gained much attention over the past few years has been the apparent lack of robustness of neural network classifiers to adversarial perturbations. Szegedy et al. (2013) noticed that small perturbations to images could cause neural network classifiers to predict the wrong class. Further, these perturbations could be carefully chosen so as to be imperceptible to humans. Such observations have instigated a deluge of research in finding adversarial attacks (Athalye et al., 2018; Goodfellow et al., 2014; Papernot et al., 2016; Szegedy et al., 2013), defenses against adversaries for neural networks (Madry et al., 2018; Raghunathan et al., 2018; Sinha et al., 2018; Wong and Kolter, 2018), evidence that adversarial examples are inevitable (Shafahi et al., 2018), and theory suggesting that constructing robust classifiers is computationally infeasible (Bubeck et al., 2018).
Attacks are usually constructed assuming a white-box framework, in which the adversary has access to the network, and adversarial examples are generated using a perturbation roughly in the direction of the gradient of the loss function with respect to a training data point. This idea generally produces adversarial examples that can break ad-hoc defenses in image classification. Currently, strategies for creating robust classification algorithms are much more limited. One approach (Madry et al., 2018; Suggala et al., 2018) is to formalize the problem of robustifying the network as a novel optimization problem, where the objective function is the expected loss of a supremum over possible perturbations. However, Madry et al. (2018) note that the objective function is often not concave in the perturbation. Other authors (Raghunathan et al., 2018; Wong and Kolter, 2018) have leveraged convex relaxations to provide optimization-based certificates on the adversarial loss of the training data. However, the generalization performance of the training error to unseen examples is still not understood. The optimization community has long been interested in constructing robust solutions for various problems, such as portfolio management (Ben-Tal et al., 2009), and deriving theoretical guarantees. Robust optimization has been studied in the context of regression and classification (Trafalis and Gilbert, 2007; Xu et al., 2009a, b). More recently, a notion of robustness that attempts to minimize the risk with respect to the worst-case distribution close to the empirical distribution has been the subject of extensive work (Ben-Tal et al., 2013; Namkoong and Duchi, 2016, 2017). Researchers have also considered a formulation known as distributionally robust optimization, using the Wasserstein distance as a metric between distributions (Esfahani and Kuhn, 2015; Blanchet and Kang, 2017; Gao et al., 2017; Sinha et al., 2018). With the exception of Sinha et al. (2018), generalization bounds of a learning-theoretic nature are nonexistent, with most papers focusing on studying properties of a regularized reformulation of the problem. Sinha et al. (2018) provide bounds for Wasserstein distributionally robust generalization error based on covering numbers for sufficiently small perturbations. This is sufficient for ensuring a small amount of adversarial robustness and is quite general; but for classification using neural networks, known covering number bounds (Bartlett et al., 2017) are substantially weaker than Rademacher complexity bounds (Golowich et al., 2018). Although neural networks are rightly the subject of attention due to their ubiquity and utility, the theory that has been developed to explain the phenomena arising from adversarial examples is still far from complete. For example, Goodfellow et al. (2014) argue that non-robustness may be due to the linear nature of neural networks. However, attempts at understanding linear classifiers (Fawzi et al., 2018) argue against linearity, i.e., the function classes should be more expressive than linear classification. In this paper, we provide upper bounds for a notion of adversarial risk in the case of linear classifiers and neural networks. These bounds may be viewed as a sample-based guarantee on the risk of a trained classifier, even in the presence of adversarial perturbations on the inputs. The key step is to transform a classifier into an “adversarially-perturbed" classifier by modifying the loss function. 
The risk of the function can then be analyzed in place of the adversarial risk of ; in particular, we can more easily provide bounds on the Rademacher complexities necessary for bounding the robust risk. Finally, our transformations suggest algorithms for minimizing the adversarially robust empirical risk. Thus, from the theory developed in this paper, we can show that adversarial perturbations have somewhat limited effects from the point of view of generalization error. This paper is organized as follows: We introduce the precise mathematical framework in Section 2. In Section 3, we discuss our main results. In Section 4, we provide results on optimizing the adversarial risk bounds. In Section 5, we prove our key theoretical contributions. Finally, we conclude with a discussion of future avenues of research in Section 6. Notation: For a matrix , we write to denote the -operator norm. We write to denote the Frobenius norm. For a vector , we write to denote the -norm. ## 2 Setup We consider a standard statistical learning setup. Let be a space of covariates, and define the space of labels to be . Let . Suppose we have observations , drawn i.i.d. according to some unknown distribution . We write . A classifier corresponds to a function , where . Thus, the function may express uncertainty in its decision; e.g., prediction in allows the classifier to select an expected outcome. ### 2.1 Risk and Losses Given a loss function , our goal is to minimize the adversarially robust risk, defined by Rrob(ℓ,f)=Ez∼P[supw∈B(ε)ℓ(f,z+w)], where is an adversarially chosen perturbation in the -ball of radius . For simplicity, we write , so the input is perturbed by a vector in the -ball of radius , but still classified according to . Usually in the literature, is taken to be , , or ; the case has received particular interest. Also note that if , the adversarial risk reduces to the usual statistical risk, for which upper bounds based on the empirical risk are known as generalization error bounds. For some discussion of the relationship between the adversarial risk to the distributionally robust risk, see Appendix E. We now define a few specific loss functions. The indicator loss ℓ01(f,z)=1{sgnf(x)=y} is of primary interest in classification; in both the linear classifier and neural network classification settings, we will primarily be interested in bounding the adversarial risk with respect to the indicator loss. As is standard in linear classification, we also define the hinge loss ℓh(f,z)=max{0,1−yf(x)}, which is a convex surrogate for the indicator loss, and will appear in some of our bounds. We also introduce the indicator of whether the hinge loss is positive, defined by ℓh,01(f,z)=1{ℓh(f,z)>0}. For analyzing neural networks, we will also employ the cross-entropy loss, defined by where is the softmax function: δ(w)=exp(w)−1exp(w)+1. Note that in all of the cases above, we can also write the loss , for an appropriately defined loss . Furthermore, and are 1-Lipschitz. ### 2.2 Function Classes and Rademacher Complexity We are particularly interested in two function classes: linear classifiers and neural networks. We denote the first class by , and we write an element of , parametrized by and , as f(x)=θ⊺x+b. We similarly denote the class of neural networks as , and we write a neural network , parametrized by and , as where each is a matrix and each is a monotonically increasing -Lipschitz activation function applied elementwise to vectors, such that . For example, we might have , which is the ReLU function. 
The matrix is of dimension , where and . We use to denote the th row of , with th entry . Also, when discussing indices, we write as shorthand for . A standard measure of the complexity of a class of functions is the Rademacher complexity. The empirical Rademacher complexity of a function class and a sample is ^Rn(F)=1nEσ[supf∈Fn∑i=1σif(xi)], (1) where the ’s are i.i.d. Rademacher random variables; i.e., the ’s are random variables taking the values and , each with probability . Note that denotes the expectation with respect to the ’s. Finally, we note that the standard Rademacher complexity is obtained by taking an expectation over the data: . ## 3 Main Results We introduce our main results in this section. The trick is to push the supremum through the loss and incorporate it into the function , yielding a transformed function . We require this transformation to satisfy supw∈B(ε)ℓ(f,z+w)≤ℓ(Φf,z), so an upper bound on the transformed risk leads to an upper bound on the adversarial risk. We call the proposed functions the supremum transformation and tree transformation in the cases of linear classifiers and neural networks, respectively. In both cases, we have to make a minor assumption about the loss. The assumption is that is monotonically decreasing in : Specifically, is decreasing in and is increasing in . This is not a stringent assumption, and is satisfied by all of the loss functions mentioned earlier. One technicality is that the transformed function needs to be a function of both and ; i.e., we have . Thus, the loss of a transformed function is . We now define the essential transformations studied in our paper. ###### Definition 1. The supremum (sup) transform is defined by Ψf(x,y):=−ysupw∈B(ε)(−y)f(x+w). Additionally, we define to be the transformed function class ΨF:={Ψf:f∈F}. We now have the following result: ###### Proposition 1. Let be a loss function that is monotonically decreasing in . Then supw∈B(ε)ℓ(f,z+w)=ℓ(Ψf,z). ###### Remark 1. The consequence of the supremum transformation can be seen by taking the expectation: EPsupw∈B(ε)ℓ(f,z)=EPℓ(Ψf,z). Thus, we can bound the adversarial risk of a function with a bound on the usual risk of via Rademacher complexities. For linear classifiers, we shall see momentarily that the supremum transformation can be calculated exactly. ### 3.1 The Supremum Transformation and Linear Classification ###### Proposition 2. Let . Then the supremum transformation takes the explicit form Ψf(x,y)=θ⊺x+b−yε∥θ∥q, where satisfies . The proof is contained in Section 5. Next, the key ingredient to a generalization bound is an upper bound on the Rademacher complexity of . ###### Lemma 1. Let be a compact linear function class such that and for all , where . Suppose for all . Then we have ^Rn(ΨFlin)≤M2R√n+εMq2√n. This leads to the following upper bound on adversarial risk, proved in Appendix C: ###### Corollary 1. Let be a collection of linear classifiers such that, for any classifier in , we have and . Let be a constant such that for all . Then for any , we have (2) and Rrob(ℓ01,f)≤1nn∑i=1ℓh(f,zi)+ε∥θ∥q1nn∑i=1ℓh,01(Ψf,zi)+2M2R√n+εMq√n+3√log2δ2n, (3) with probability at least . As seen in the proof of Corollary 1, the loss involved in defining the adversarial risk could be replaced by another loss, which would then need to be upper-bounded by a Lipschitz loss function (in this case, the hinge loss). The empirical version of the latter loss would then appear on the right-hand side of the bounds. ###### Remark 2. 
An immediate question is how our adversarial risk bounds compare with the case when perturbations are absent. Plugging into the equations above yields the usual generalization bounds of the form EPℓ01(f,z)≤1nn∑i=1ℓ(f,zi)+C1√n, so the effect of an adversarial perturbation is essentially to introduce an additional term as well as an additional contribution to the empirical risk that depends linearly on . The additional empirical risk term vanishes if classifies adversarially perturbed points correctly, since in that case. ###### Remark 3. Clearly, we could further upper-bound the regularization term in equation (3) by . This is essentially the bound obtained for the empirical risk for Wasserstein distributionally robust linear classification (Gao et al., 2017). However, this bound is loose when a good robust linear classifier exists, i.e., when is small relative to . Thus, when good robust classifiers exist, distributional robustness is relatively conservative for solving the adversarially robust problem (cf. Appendix E). ### 3.2 The Tree Transformation and Neural Networks In this section, we consider adversarial risk bounds for neural networks. We begin by introducing the tree transformation, which unravels the neural network into a tree in some sense. ###### Definition 2. Let be a neural network given by f(x)=A(d+1)sd(A(d)sd−1(A(d−1)…s1(A(1)x))). Define the terms and by w(j2:d+1)f:=−ysgn(f,j2:d+1)ε∥∥a(1)j2∥∥q (4) and sgn(f,j2:d+1):=sgn(d+1∏k=2a(k)jk+1,jk). Then the tree transform is defined by Tf(x,y):=Jd+1∑jd+1=1a(d+1)1,jd+1sd⎛⎝Jd∑jd=1a(d)jd+1,jdsd−1(…J2∑j2=1a(2)j3,j2s1((a(1)j2)⊺x+w(j2:d+1)f))⎞⎠. (5) Intuitively, the tree transform (5) can be thought of as a new neural network classifier where the adversary can select a different worst-case perturbation for each path through the neural network from the input to the output indexed by . This leads to distinct paths through the network for given inputs and , and if these paths were laid out, they would form a tree (see Section 3.3). Next, we show that the risk of the tree transform upper-bounds the adversarial risk of the original neural network. ###### Proposition 3. Let be monotonically decreasing in . Then we have the inequality supw∈B(ε)ℓ(f,z+w)=ℓ(Ψf,z)≤ℓ(Tf,z). As an immediate corollary, we obtain Esupw∈B(ε)ℓ(f,z+w)≤Eℓ(Tf,z), so it suffices to bound this latter expectation. We have the following bound on the Rademacher complexity of : ###### Lemma 2. Let be a class of neural networks of depth satisfying and , for each , and let . Additionally, suppose and for all . Then we have the bound Finally, we have our adversarial risk bounds for neural networks. The proof is contained in Appendix C. ###### Corollary 2. Let be a class of neural networks of depth . Let . Under the same assumptions as Lemma 2, for any , we have the upper bounds and Rrob(ℓ01,f)≤1nn∑i=1ℓxe(f,zi)+3√log2δ2n+εmaxj=1,…,J1∥∥a(1)j∥∥qd+1∏j=2∥Aj∥∞1nn∑i=1|g′i(Tf(xi,yi))|+2α(α1,Fα1R+α1,qα1ε)√2dlog2+1√n, (6) with probability at least . ###### Remark 4. As in the linear case, we can essentially recover pre-existing non-adversarial risk bounds by setting (Bartlett et al., 2017; Golowich et al., 2018). Again, the effect of adversarial perturbations on the adversarial risk is the addition of on top of the empirical risk bounds for the unperturbed loss. 
Finally, the bound (6) includes an extra perturbation term that is linear in , with coefficient reflecting the Lipschitz coefficient of the neural network, as well as a term , which decreases as improves as a classifier because is small when is small. A similar term appears in the bound (3). ### 3.3 A Visualization of the Tree Transform In this section, we provide a few pictures to illustrate the tree transform. Consider the following two-layer network with two hidden units per layer: f(x)=A(3)s2(A(2)s1(A(1)x)). We begin by with visualizing in Figure 1. Next, we examine what happens when the supremum is taken inside the first layer. The resulting transformed function (cf. Lemma 3 in Section 5) becomes g(x,y)=2∑j3=1a(3)1,j3s2⎛⎝sgn(−ya(3)1,j3)supw(j3)∈B(ε)sgn(−ya(3)1,j3)A(2)s1(A(1)(x+w(j3)))⎞⎠. (7) The corresponding network is shown in Figure 2. Finally, we examine the entire tree transform. This is Tf(x,y)=2∑j3=1a(3)1,j3s2(J2∑j2=1a(2)j3,j2s1(sgn(−ya(3)1,j3a(2)j3,j2)supw(j2,j3)(a(1)j2)⊺(x+w(j2,j3)))). (8) the result, shown in Figure 3, yields a tree-structured network. In particular, we note that now the visualization of the network reveals a tree. This is the reason that is called the tree transform. ## 4 Optimization of Risk Bounds In practice, our sample-based upper bounds on adversarial risk suggest the strategy of optimizing the bounds in the corollaries, rather than simply the empirical risk, to achieve robustness of the trained networks against adversarial perturbations. Accordingly, we provide two algorithms for optimizing the upper bounds appearing in Corollary 1. One idea is to optimize the first bound (2) directly. Recalling the form of , this leads to the following optimization problem: minθ,bn∑i=1max{0,1−yi(θ⊺xi+b)+ε∥θ∥q}. (9) Note that the optimization problem of equation (9) is convex in and ; therefore, this is a computationally tractable problem. We summarize this approach in Algorithm 1. The second approach involves optimizing the second adversarial risk bound (3). Although this bound is generally looser than the bound (2), we comment on optimization due to the fact that regularization has been suggested as a way to encourage generalization. However, note that the regularization coefficient in the bound (3) depends on . Thus, we propose to perform a grid search over the value of the regularization parameter. Specifically, define γlin(f):=n∑i=1ℓh,01(Ψf,zi). (10) We then have the optimization problem minθ,bn∑i=1max{0,1−yi(θ⊺xi+b)}+ε∥θ∥qγlin(f). (11) Note, however, that is nonconvex, and the form as a function of and is complicated. We propose to take for and solve minθ,bn∑j=1max{0,1−yj(θ⊺xj+b)}+ε∥θ∥qγi. (12) At the end, we simply pick the solution minimizing the objective function in equation (11) over all . Note that this involves evaluating equation (10), but this is easy to do in the linear case. This method is summarized in Algorithm 2. ## 5 Proofs We now present the proofs of our core theoretical results regarding the transform functions and . ###### Proof of Proposition 1. We break our analysis into two cases. If , then is decreasing in . Thus, we have supw∈B(ε)¯ℓ(f(x+w),+1) =¯ℓ(infw∈B(ε)f(x+w),+1)=¯ℓ((−1)supw∈B(ε)(−1)f(x+w),+1) =ℓ(Ψf,(x,+1)). If instead , then is increasing in , so supw∈B(ε)¯ℓ(f(x+w),−1) =¯ℓ(supw∈B(ε)f(x+w),−1)=¯ℓ((1)supw∈B(ε)(1)f(x+w),−1) =ℓ(Ψf,(x,−1)). This completes the proof. ∎ ###### Proof of Proposition 2. 
Using the definition of the sup transform, we have Ψf(x,y)=−ysupw∈B(ε)(−y)(θ⊺x+b+θ⊺w)=θ⊺x+b−ysupw∈B(ε)(−y)θ⊺w=θ⊺x+b−yε∥θ∥q, where the final equality comes from the variational definition of the -norm. This completes the proof. ∎ Before we begin the proof of Proposition 3, we state, prove, and remark upon a helpful lemma. We want to apply this iteratively to push the supremum inside the layers of the neural network. ###### Lemma 3. Let be a function and define to be a monotonically increasing function applied elementwise to vectors. Then we have the inequality supw∈B(ε)J∑j=1bjs(a⊺jg(x+w))≤J∑j=1bjs(sgn(bj)supw(j)∈B(ε)sgn(bj)a⊺jg(x+w(j))). ###### Proof. Denote the left hand-side of the desired inequality by . First, we can push the supremum inside the sum to obtain L≤J∑j=1supw(j)∈B(ε)bjs(a⊺jg(x+w(j))). Next, note that supw(j)∈B(ε)bjs(a⊺jg(x+w(j)))=supw(j)∈B(ε)bjs(sgn(bj)sgn(bj)a⊺jg(x+w(j))). (13) Since is monotonically increasing, we see that the map is monotonically increasing, as well. Thus, the supremum in equation (13) is obtained when is maximized. Hence, we obtain L≤J∑j=1bjs(sgn(bj)supw(j)∈B(ε)sgn(bj)a⊺jg(x+w(j))), which completes the proof. ∎ ###### Remark 5. Note that if , where , this lemma yields L≤J∑j=1bjs(sgn(bj)supw(j)∈B(ε)K∑k=1%sgn(bj)aj,ks′((a′k)⊺h(x+w(j)))). If we apply Lemma 3 again, we obtain L≤J∑j=1bjs(sgn(bj)K∑k=1sgn(bj)aj,ks′(sgn(bjaj,k)supw(j,k)∈B(ε)sgn(bjaj,k)(a′k)⊺h(x+w(j,k))))=J∑j=1bjs(K∑k=1aj,ks′(sgn(bjaj,k)supw(j,k)∈B(ε)sgn(bjaj,k)(a′k)⊺h(x+w(j,k)))). In particular, we note that the sign terms accumulate within the supremum, but when we take the supremum inside another layer, the sign terms remaining in the previous layers cancel out and are incorporated into the of the next layer. ###### Proof of Proposition 3. First note that the assumption that is monotonically decreasing in is equivalent to being monotonically increasing in . As in the proof of Proposition 1, if , we want to show that ; if , we want to show that . Thus, it is our goal to establish the inequality −yΨf(x,y)≤−yTf(x,y). (14) We define and show how to take the supremum inside each layer of the neural network to yield . To this end, we simply apply Lemma 3 and Remark 5 iteratively until the remaining function is linear. Thus, we see that L≤−yJd+1∑jd+1=1a(d+1)1,jd+1sd⎛⎝Jd∑jd=1a(d)jd+1,jdsd−1⎛⎝Jd−1∑jd−1=1a(d−1)jd,jd−1sd−2(…s1(sgn(−ya(d+1)1a(d)1,jd…a(2)j3,j2)×supw(j2:d+1)∈B(ε)sgn(−ya(d+1)1a(d)1,jd…a(2)j3,j2)(a(1)j2)⊺(x+w(j2:d+1))⎞⎠⎞⎠⎞⎠⎞⎠, and simplifying gives L≤−yJd+1∑jd+1=1a(d+1)1,jd+1sd⎛⎝Jd∑jd=1a(d)jd+1,jdsd−1⎛⎝Jd−1∑jd−1=1a(d−1)jd,jd−1sd−2(…s1((a(1)j2)⊺x+sgn(−ya(d+1)1a(d)1,jd…a(2)j3,j2)×supw(j2:d+1)∈B(ε)sgn(−ya(d+1)1a(d)1,jd…a(2)j3,j2)(a(1)j2)⊺w(j2:d+1)⎞⎠⎞⎠⎞⎠⎞⎠⎞⎠. The final supremum clearly evaluates to . Recalling the definition (4) of , we then have −yΨf(x,y)≤−yJd+1∑jd+1=1a(d+1)1,jd+1sd⎛⎝Jd∑jd=1a(d)jd+1,jdsd−1(…s1((a(1)j2)⊺x+w(j2:d+1)f))⎞⎠=−yTf(x,y), which proves the proposition. ∎ ## 6 Discussion We have presented a method of transforming classifiers to obtain upper bounds on the adversarial risk. We have shown that bounding the generalization error of the transformed classifiers may be performed using similar machinery for obtaining traditional generalization bounds in the case of linear classifiers and neural network classifiers. In particular, since the Rademacher complexity of neural networks only has a small additional term due to adversarial perturbations, generalization even in the presence of adversarial perturbations should not be impossibly difficult for binary classification. 
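As a concrete illustration of the optimization problem (9) from Section 4, the following is a minimal Python sketch for the case q = 2, minimized by plain subgradient descent. The function names, the choice of q, and the optimizer are our own assumptions for illustration; the paper only specifies the convex objective itself.

```python
import numpy as np

def robust_hinge_loss(theta, b, X, y, eps):
    """Objective of eq. (9) with q = 2:
    sum_i max(0, 1 - y_i (theta^T x_i + b) + eps * ||theta||_2)."""
    margins = 1.0 - y * (X @ theta + b) + eps * np.linalg.norm(theta)
    return np.maximum(0.0, margins).sum()

def fit_robust_linear(X, y, eps, lr=1e-3, n_iter=2000):
    """Plain subgradient descent on the convex objective.
    X: (n, d) array, y: (n,) array of +/-1 labels.
    Step size and iteration count are arbitrary choices for the sketch."""
    n, d = X.shape
    theta, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        norm = np.linalg.norm(theta)
        margins = 1.0 - y * (X @ theta + b) + eps * norm
        active = margins > 0                      # examples contributing to the loss
        Xa, ya = X[active], y[active]
        g_theta = -(ya[:, None] * Xa).sum(axis=0)
        if norm > 0:                              # subgradient of eps * ||theta||_2
            g_theta += active.sum() * eps * theta / norm
        g_b = -ya.sum()
        theta -= lr * g_theta
        b -= lr * g_b
    return theta, b
```

Since the objective is convex in the parameters, a hand-rolled subgradient loop like this could of course be replaced by any off-the-shelf convex solver.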
We mention several future directions for research. First, one might be interested in extending the supremum transformation to other types of classifiers. The most interesting avenues would include calculating explicit representations as in the case of linear classifiers, suitable alternative transformations as in the case of neural networks, and bounds on the resulting Rademacher complexities. A second direction is to understand the tree transformation better and develop algorithms for optimizing the resulting adversarial risk bounds. One view that we have taken in this paper is to bound the difference between the empirical risk of and as a regularization term, but one could also optimize the empirical risk of directly. An immediate idea would be to train a good and then use the resulting , since the empirical risk of provides an upper bound on the adversarial risk of . For computational reasons, this may not be practical for the tree transform, in which case one might need to explore alternative transformations. ## References • Athalye et al. [2018] A. Athalye, N. Carlini, and D. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning , Proceedings of Machine Learning Research. PMLR, July 2018. • Bartlett et al. [2017] P. L. Bartlett, D. J. Foster, and M. J. Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, pages 6240–6249, 2017. • Ben-Tal et al. [2009] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009. • Ben-Tal et al. [2013] A. Ben-Tal, D. Den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341–357, 2013. • Blanchet and Kang [2017] J. Blanchet and Y. Kang. Semi-supervised learning based on distributionally robust optimization. arXiv preprint arXiv:1702.08848, 2017. • Boucheron et al. [2013] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013. • Bubeck et al. [2018] S. Bubeck, E. Price, and I. Razenshteyn. Adversarial examples from computational constraints. arXiv preprint arXiv:1805.10204, 2018. • Esfahani and Kuhn [2015] P. M. Esfahani and D. Kuhn. Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming, pages 1–52, 2015. • Fawzi et al. [2018] A. Fawzi, O. Fawzi, and P. Frossard. Analysis of classifiers’ robustness to adversarial perturbations. Machine Learning, 107(3):481–508, 2018. • Gao et al. [2017] R. Gao, X. Chen, and A. J. Kleywegt. Distributional robustness and regularization in statistical learning. arXiv preprint arXiv:1712.06050, 2017. • Golowich et al. [2018] N. Golowich, A. Rakhlin, and O. Shamir. Size-independent sample complexity of neural networks. In Conference On Learning Theory, pages 297–299, 2018. • Goodfellow et al. [2016] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio. Deep Learning, volume 1. MIT Press Cambridge, 2016. • Goodfellow et al. [2014] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. • Ledoux and Talagrand [1989] M. Ledoux and M. Talagrand. Comparison theorems, random geometry and some limit theorems for empirical processes. 
The Annals of Probability, pages 596–631, 1989. • Ledoux and Talagrand [1991] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer Science & Business Media, 1991. • Madry et al. [2018] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. • Mohri et al. [2012] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012. • Namkoong and Duchi [2016] H. Namkoong and J. C. Duchi. Stochastic gradient methods for distributionally robust optimization with -divergences. In Advances in Neural Information Processing Systems, pages 2208–2216, 2016. • Namkoong and Duchi [2017] H. Namkoong and J. C. Duchi. Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems, pages 2971–2980, 2017. • Papernot et al. [2016] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on, pages 372–387. IEEE, 2016. • Raghunathan et al. [2018] A. Raghunathan, J. Steinhardt, and P. Liang. Certified defenses against adversarial examples. In International Conference on Learning Representations, 2018. • Shafahi et al. [2018] A. Shafahi, W. R. Huang, C. Studer, S. Feizi, and T. Goldstein. Are adversarial examples inevitable? arXiv preprint arXiv:1809.02104, 2018. • Sinha et al. [2018] A. Sinha, H. Namkoong, and J. Duchi. Certifiable distributional robustness with principled adversarial training. In International Conference on Learning Representations, 2018. • Suggala et al. [2018] A. S. Suggala, A. Prasad, V. Nagarajan, and P. Ravikumar. On adversarial risk and training. arXiv preprint arXiv:1806.02924, 2018. • Szegedy et al. [2013] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. • Trafalis and Gilbert [2007] T. B. Trafalis and R. C. Gilbert. Robust support vector machines for classification and computational issues. Optimisation Methods and Software, 22(1):187–198, 2007. • Wong and Kolter [2018] E. Wong and Z. Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, pages 5286–5295. PMLR, July 2018. • Xu et al. [2009a] H. Xu, C. Caramanis, and S. Mannor. Robust regression and Lasso. In Advances in Neural Information Processing Systems, pages 1801–1808, 2009a. • Xu et al. [2009b] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10(Jul):1485–1510, 2009b. ## Appendix A Rademacher Complexity Proofs In this section, we prove Lemmas 1 and 2, which are the bounds on the empirical Rademacher complexities of and . The proofs are largely based on pre-existing proofs for bounding the empirical Rademacher complexities of and , and this simplicity is part of what makes and attractive. ###### Proof of Lemma 1. Using Proposition 2, we have By Lemma 10, the empirical Rademacher complexity of a linear function class is given by ^Rn(Flin)≤M2R√n. Thus, it remains to analyze the second term in the upper bound. If the sum of the ’s is negative, the maximizing the supremum is the zero vector. 
Alternatively, if the sum is positive, we clearly have the upper bound . Thus, we have εEσ [supf∈F∥θ∥qn∑i=1σi]≤εEσ[Mqn∑i=1σi1{n∑i=1σi>0}](a)=εMq2E∣∣ ∣∣n∑i=1σi∣∣ ∣∣≤εMq2⎛⎝E⎡⎣(n∑i=1σi)2⎤⎦⎞⎠12, where follows because and have the same distribution, and the last inequality follows by Jensen’s inequality. The last term is equal to , using the fact that the ’s are independent, zero-mean, and unit-variance random variables. Putting everything together yields ^Rn(ΨFlin)≤M2R√n+εMq2√n, which completes the proof. ∎ ###### Proof of Lemma 2. Our broad goal is to peel off the layers of the neural network one at a time. Most of the work is done by Lemma 7. The proof is essentially the same as the Rademacher complexity bounds on neural networks of Golowich et al. [2018] until we reach the underlying linear classifier. We then bound the action of the adversary in an analogous manner to the linear case. We write n^R(TFnn)=1λlogexp(λE[supf∈Fnnn∑i=1σiTf(xi,yi)])≤1λlogE[supf∈Fnnexp(λn∑i=1σiTf(xi,yi))]. Recalling the form of from equation (5), we can apply Lemma 7 successively times with for various in order to remove the layers of the neural network. Specifically, we use , , , up to , as we peel away the layers and retain the bounds on the matrix norms from the layers that we have removed. This implies Note that the maxima over are accumulated from each application of Lemma 7. These maxima correspond to taking a worst-case path through the tree. To bound the first term, we apply the Cauchy-Schwarz inequality. To bound the second term, we use the inequality −sgn(f,j2:d+1)n∑i=1σiyi≤∣∣ ∣∣n∑i=1σiyi∣∣ ∣∣. Thus, we have n^R(Fnn)≤1λlog2dE[supf∈Fnnmaxj2,…,jd+1exp(αλα1∥∥a(1)j2∥∥2∥∥ ∥∥n∑i=1σixi∥∥ ∥∥2+αλα1ε∥∥a(1)j2
Uniqueness of Meromorphic Functions Concerning the Difference Polynomials • Journal title : Kyungpook mathematical journal • Volume 55, Issue 2,  2015, pp.411-427 • Publisher : Department of Mathematics, Kyungpook National University • DOI : 10.5666/KMJ.2015.55.2.411 Title & Authors Uniqueness of Meromorphic Functions Concerning the Difference Polynomials LIU, FANGHONG; YI, HONGXUN; Abstract In this article, we main study the uniqueness problem of meromorphic function which difference polynomials sharing common values. We consider the entire function $\small{(f^n(f^m-1)\prod_{j=1}^{s}f(z+c_j)^{{\mu}j})^{(k)}}$ and the meromorphic function $\small{f^n(f^m-1)\prod_{j=1}^{s}f(z+c_j)^{{\mu}j}}$ to get the main results which extend Theorem 1.1 in paper[5] and theorem 1.4 in paper[6]. Keywords meromorphic functions;difference polynomials;uniqueness;sharing common value; Language English Cited by References 1. C. C. Yangand H. X. Yi, Uniqueness Theory of Meromorphic Functions, Kluwer Academic Publishers, Dordrecht, 2003. 2. X. G. Qi, L. Z. Yang and K. Liu, Uniqueness and periodicity of meromorphic functions concerning the difference operator, Computer and Mathematics with Applications., 60(2010), 1739-1746. 3. C, Y. Fang. and M, L. Fang., Uniqueness of meromorphic functions and differential polynomials, Comput. Math. Appl., 44(2002), 607-617. 4. SS Bhoosnurmathand SR Kabbur, Value Distribution and Uniqueness Theorems for Difference of Entire and Meromorphic Functions, International Journal of Analysis and Applications, 2013, 124-136. 5. Keyu Zhang, Hongxun Yi, the Value Distribution and Uniqueness of one Certain Type of Differental-Difference Polynomials, Acta Mathematica Scientis Series Manuscript, 34B(3)(2014), 719-728. 6. Li, Yi and Li, Value distribution of certain difference polynomials of meromorphic functions, Rocky Mountain J. Math. Volume forthcoming, Number forthcoming (2013). 7. Monhon'ko A, The Nevalinna characteristics of certain meromorphic function [J], Teor Funksii Funktsional Anal I Prilozhen, 1971, 14:83-87 (in Russian). 8. JL Zhang and LZ Yang, Some results related to a conjecture of R, Brck. J. Inequal. Pure Appl. Math., 2007. 9. Chen M R and Chen Z X, Properties of Difference Polynomials of Entire Functions with Finite Order, Chinese Annals of Mathematics, 33A(3)(2012), 359-374 (in chinese). 10. R. G. Halburd and R. Korhonen., Difference analogue of the lemma on the logarithmic derivative with applications to difference equations, J. Math. Anal. Appl., 314(2006), 477C487. 11. Xiao Min Li and Hong Xun Yi, Entire Functions Sharing an Entire Function of Smaller Order with Their Difference Operators, Acta Mathematica Sinica, English Series Mar., 30(3)(2014), 481C498. 12. Zhang R R and Chen Z X, Value distribution Difference Polynomials of meromorphic functions, Chinese Science, Mathematics: Mathematics, 42(11)(2012), 1115-1130(in chinese). 13. Xiao-Min Li, Hong-Xun Yi and Yue Shi, Value Sharing of Certain Differential Polynomials and Their Shifts of Meromorphic Functions, Comput. Methods Funct. Theory DOI 10.1007/s40315-014-0048-0 2014. 14. Xudan Luo and Weichuan Lin., Value sharing results for shifts of meromorphic functions, J. Math. Anal.Appl. 377(2011),441C449. 15. Raj Shree Dhar., Uniqueness Theorems on Meromorphic Functions and their Difference Operators, Int. Journal of Math. Analysis, 7(3)(2013), 1489-1495. 16. Baoqin Chen, Zongxuan Chen and Sheng Li, Uniqueness of difference operators of meromorphic functions, Journal of Inequalities and Applications 2012, 2012:48. 17. 
HongYan Xu, On the value distribution and uniqueness of difference polynomials of meromorphic functions, Advances in Difference Equations 2013, 2013:90. 18. Sheng Li and BaoQin Chen, Meromorphic functions sharing small functions with their linear difference polynomials, Advances in Difference Equations 2013, 2013:58.
# Grams to Tablespoons Calculator Created by Hanna Pamuła, PhD candidate Reviewed by Bogna Szyk and Jack Bowater Last updated: Oct 07, 2022 If you're struggling with grams and tbsp conversion, look no further - this grams to tablespoons calculator has everything you could ever need when dealing with simple cooking measurement conversions. We don't always have a kitchen scale close at hand (or it's not accurate enough when dealing with small weights, or we're just too lazy to use it), so volume units are preferable, especially for liquids. Scroll down, and you'll find out not only how to convert flour tablespoons to grams but also how many calories are in 1 tbsp of butter or maple syrup. If you wish to learn more about volume units, be sure to check our volume conversion calculator ## Grams to tablespoons, tablespoon to grams: sugar, flour and other products Assume we'd like to: • Convert amount of sugar in grams to tablespoons in the blink of an eye, • Switch the ingredient and quickly find out how many grams are in a tablespoon of salt, • Change the butter tablespoon to grams. Sounds cool, but how to do it? Just follow these simple steps: 1. Select the ingredient from a drop-down list. Let's pick olive oil - choose oil from the list, and then a second box appears; this is where you can select olive oil. 2. Enter the amount of product. Assume we want to use the grams to tablespoons calculator the other way round - we have three tablespoons of olive oil, and we'd like to know how much it approximately weights. Type 3 into tbsp box. 3. Here you go! Now we know that three tablespoons of olive oil weighs around 41 grams. Remember that the result is an approximation, as the products differ between manufacturers, and a tablespoon of dry product can be heaped so the volume can vary a lot. If you want a quick comparison, use the table below that sums up how much one tablespoon of different products weights: Ingredient 1 US tbsp 1 tbsp (15 ml) Water $14.8\ \text{g}$ $15\ \text{g}$ Milk $15.2\ \text{g}$ $15.5\ \text{g}$ Flour $8.9\ \text{g}$ $9\ \text{g}$ Sugar $12.5\ \text{g}$ $12.7\ \text{g}$ Salt $18\ \text{g}$ $18.3\ \text{g}$ Honey $21\ \text{g}$ $21.3\ \text{g}$ Butter $14.2\ \text{g}$ $14.4\ \text{g}$ Oil $13\ \text{g}$ $13.2\ \text{g}$ Cacao $7.7\ \text{g}$ $7.8\ \text{g}$ Nutella $18.6\ \text{g}$ $19\ \text{g}$ Maple syrup $19.5\ \text{g}$ $19.8\ \text{g}$ ## Tablespoons in a cup Wondering how many tablespoons are in a cup? Assuming you're asking for US tablespoons, the answer is simple - sixteen. To sum up the most popular queries: • How many tablespoons in a cup? 16 tbsp • How many tablespoons in 1/2 cup? 8 tbsp • How many tablespoons in 1/3 cup? 5.33 tbsp • How many tablespoons in 1/4 cup? 4 tbsp The conversions above are related to US tablespoons and US cups. To make it more complicated, different types of cups exist, like, e.g. US legal cup or a metric cup. If you want to read more about these units, make sure to have a look at our grams to cups calculator. ## Calories in a tablespoon of product Some ingredients are much more likely to be expressed in tablespoon unit than the others. Usually, the products which are a small addition to the dish, or may be eaten alone, appear in tablespoons. You'll often find salt or Nutella in tablespoons than e.g. milk or flour (though it's still possible, so we have that option in the calculator as well). 
Adding a tablespoon of olive oil to a salad or spreading peanut butter on a slice of bread sometimes makes us wonder - how many calories are in that portion? Let's have a look at the calories in popular bread spreads, fats and sugars (data for US tablespoons):

- Calories in 1 tbsp of Nutella / chocolate spread - 100 kcal
- Calories in 1 tbsp of peanut butter - 94 kcal
- Calories in 1 tbsp of honey - 64 kcal
- Calories in 1 tbsp of avocado - 20 kcal

Fats:

- Calories in 1 tbsp of olive oil - 119 kcal
- Calories in 1 tbsp of coconut oil - 117 kcal
- Calories in 1 tbsp of butter - 102 kcal

Sugars:

- Calories in 1 tbsp of sugar - 48 kcal
- Calories in 1 tbsp of maple syrup - 52 kcal

Thanks to this short list, you can now choose wisely what to have for your breakfast!

## Volume or weight units - which are better?

In our everyday cooking or baking experience 🍰, we usually use a mixture of volume and weight units. In most cases, dry products appear in recipes in grams, ounces or pounds, whereas liquids like water or milk are measured in cups, milliliters, or tablespoons. However, weighing ingredients is generally the better choice, as it's a more accurate and reproducible method. Ultimately, cooking is all about chemistry. For example, the amount of flour in a cup depends on many factors - whether you spooned the flour in, dipped the cup into it, levelled it off or scraped the top, the amount of flour will differ even though you used the same container. Another perfect example is salt - a tablespoon of the granular form may weigh as much as two tablespoons of flake salt! This isn't an issue if you're using a kitchen scale, as weight is a direct measure of an ingredient. But don't worry too much! Unless you're aspiring to be a master chef or you're cooking in bulk, volume units will serve you well - just use this grams to tablespoons calculator. Enjoy the cooking and baking by choosing the units that are most natural and convenient to you - in most cases, you won't notice the difference.

## FAQ

### How do I convert from grams to tablespoons?

To convert from grams to tablespoons, you need to know how much a tablespoon of your ingredient weighs. A tablespoon of flour weighs about 9 g, while a tablespoon of maple syrup is almost 20 g. Once you know the weight, simply divide the desired amount in grams by the weight of a single tablespoon of your ingredient. The result is the number of tablespoons you need!

### How many teaspoons are in a tablespoon?

By convention, there are three teaspoons in a tablespoon. The conversion from grams to teaspoons is straightforward if you know how to convert grams to tablespoons:

number of teaspoons = 3 × number of tablespoons

Remember that the weight of a US tablespoon is slightly different from that of a metric tablespoon; however, the result of the conversion doesn't change much!

### How do I convert 50 grams of flour to tablespoons?

To convert 50 grams of flour to tablespoons, follow these steps:

1. Check your tablespoon: every tablespoon should contain (when heaped) around 9 g of flour.
2. Divide the desired amount of flour in grams by the weight of a single tablespoon of flour: n = 50/9 ≈ 5.6
3. The result is the number of tablespoons: about five and a half!

### How do I calculate the number of tablespoons from grams with density?

You can calculate the number of tablespoons of an ingredient from grams if you know its density.

1. Find the density of your ingredient. Either search online or fill a known volume and weigh it. Density is the ratio between mass and volume.
2. Calculate the number of tablespoons with the formula: tbsp = g / (15 × density)
3. The result is the answer.

Notice that we assumed the volume of a tablespoon to be 15 ml.
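For readers who prefer code to hand arithmetic, here is a small Python sketch of the density-based formula from the last FAQ entry. The density values are rough illustrative figures (real densities vary by product and brand), and the function names are our own.

```python
# Approximate densities in g/ml, for illustration only; real values vary by product.
DENSITY_G_PER_ML = {"water": 1.00, "flour": 0.59, "olive oil": 0.92, "honey": 1.42}

TBSP_ML = 15.0  # the 15 ml tablespoon assumed throughout the article

def grams_to_tbsp(grams, ingredient):
    """tbsp = g / (15 * density), the formula from the FAQ."""
    return grams / (TBSP_ML * DENSITY_G_PER_ML[ingredient])

def tbsp_to_grams(tbsp, ingredient):
    """Inverse conversion: g = tbsp * 15 * density."""
    return tbsp * TBSP_ML * DENSITY_G_PER_ML[ingredient]

print(round(grams_to_tbsp(50, "flour"), 1))   # roughly 5.6 tbsp, as in the FAQ example
```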
# American Institute of Mathematical Sciences

November 2013, 18(9): 2267-2282. doi: 10.3934/dcdsb.2013.18.2267

## Evolutionary branching patterns in predator-prey structured populations

1 Department of Mathematical Sciences, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino

Received November 2012. Revised July 2013. Published September 2013.

Predator-prey ecosystems represent, among others, a natural context where evolutionary branching patterns may arise. Moving from this observation, the paper deals with a class of integro-differential equations modeling the dynamics of two populations structured by a continuous phenotypic trait and related by predation. Predators and preys proliferate through asexual reproduction, compete for resources and undergo phenotypic changes. A positive parameter $\varepsilon$ is introduced to model the average size of such changes. The asymptotic behavior of the solution of the mathematical problem linked to the model is studied in the limit $\varepsilon \rightarrow 0$ (i.e., in the limit of small phenotypic changes). Analytical results are illustrated and extended by means of numerical simulations with the aim of showing how the present class of equations can mimic the formation of evolutionary branching patterns. All simulations highlight a chase-escape dynamics, where the preys try to evade predation while predators mimic, with a certain delay, the phenotypic profile of the preys.

Citation: Marcello Delitala, Tommaso Lorenzi. Evolutionary branching patterns in predator-prey structured populations. Discrete & Continuous Dynamical Systems - B, 2013, 18 (9): 2267-2282. doi: 10.3934/dcdsb.2013.18.2267
# [SOLVED]energy equation #### dwsmith ##### Well-known member How does one use the energy equation to determine the type of orbit? $$E = \frac{v^2}{2} - \frac{\mu}{r}$$ where $\mu = G(m_1+m_2)$ and $$\mathbf{r} = \begin{pmatrix} -4069.503\\ 2861.786\\ 4483.608 \end{pmatrix}\text{km},\quad \mathbf{v} = \begin{pmatrix} -5.114\\ -5.691\\ -1.000 \end{pmatrix}\text{km/sec}$$ #### Ackbach ##### Indicium Physicus Staff member As I understand it, $E=0$ is parabolic, $E>0$ is hyperbolic, and $E<0$ is elliptic. If $v^{2}r=\mu$, then it's circular. #### dwsmith ##### Well-known member As I understand it, $E=0$ is parabolic, $E>0$ is hyperbolic, and $E<0$ is elliptic. If $v^{2}r=\mu$, then it's circular. My issue was I didn't have a mu term. #### topsquark ##### Well-known member MHB Math Helper How does one use the energy equation to determine the type of orbit? $$E = \frac{v^2}{2} - \frac{\mu}{r}$$ where $\mu = G(m_1+m_2)$ and $$\mathbf{r} = \begin{pmatrix} -4069.503\\ 2861.786\\ 4483.608 \end{pmatrix}\text{km},\quad \mathbf{v} = \begin{pmatrix} -5.114\\ -5.691\\ -1.000 \end{pmatrix}\text{km/sec}$$ 1. You are missing an "m" from the kinetic energy term. 2. You defined $$\mu$$ in your original post. Is this a result you are supposed to derive perhaps? -Dan #### dwsmith ##### Well-known member 2. You defined $$\mu$$ in your original post. Is this a result you are supposed to derive perhaps?
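A quick numerical check of the criterion above (a sketch, not part of the original thread): assuming the orbit is about the Earth, so that $\mu \approx 398600\ \text{km}^3/\text{s}^2$, the specific energy of the given state vector can be evaluated directly. For these numbers the energy comes out at roughly $-30\ \text{km}^2/\text{s}^2$, i.e. negative, which indicates an elliptic orbit under that assumption.

```python
import numpy as np

mu_earth = 398_600.4  # km^3/s^2; assumes the central body is the Earth

r_vec = np.array([-4069.503, 2861.786, 4483.608])   # km
v_vec = np.array([-5.114, -5.691, -1.000])          # km/s

r = np.linalg.norm(r_vec)
v2 = v_vec @ v_vec

E = v2 / 2.0 - mu_earth / r          # specific orbital energy, km^2/s^2

# Classification thresholds below are arbitrary tolerances for the sketch.
if abs(E) < 1e-6:
    kind = "parabolic"
elif E > 0:
    kind = "hyperbolic"
elif np.isclose(v2 * r, mu_earth):
    kind = "circular"
else:
    kind = "elliptic"

print(E, kind)   # E is negative for this state vector, i.e. a bound (elliptic) orbit
```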
ISSN 0439-755X CN 11-1911/B Acta Psychologica Sinica ›› 2017, Vol. 49 ›› Issue (5): 590-601. Does irrelevant long-term memory representation guide the deployment of visual attention? HU Cenlou; ZHANG Bao; HUANG Sai 1. (Department of Psychology / The Key Laboratory for Juveniles Mental Health and Educational Neuroscience in Guangdong Province, Guangzhou University, Guangzhou 510006, China) • Received:2016-04-22 Published:2017-05-25 Online:2017-05-25 • Contact: HUANG Sai, E-mail: sai.huang@139.com, ZHANG Bao, E-mail: bao.zhang@139.com Abstract: The capacity of information processing system for human being is severely limited, but humans are proficient in searching for target information in the familiar visual scenes, in part because the task-relevant long-term memory (LTM) representations can efficiently guide attentional deployment to optimize the selection to the target and the escapement from the distractors. Hence, LTM-guided attention is key to our high level of visual performance, serving to direct our limited attentional resources efficiently. However, the issue whether irrelevant LTM representations can guide the deployment of visual attention as well as the irrelevant working memory (WM) representation is elusive yet. Therefore, we attempted to explore this issue here via three experiments. In experiment 1, participants were asked to maintain an object in LTM before the experiment initialized until to the end. During the experiment, participants were required to perform a visual search task while holding another object in WM online. In the visual search task, one of the distractor might share common features with either the representation of LTM or the representation of WM occasionally. Both the results of the response time and the first fixation proportion showed that the visual attention would bias to the distractor when sharing common features with the WM representation, displaying an classical WM-driven attentional guidance effect; however, non-guidance effect was found when the distractor shared common features with LTM representation. More importantly, the magnitude of guidance from WM representation was not affected by the simultaneously- emerged LTM representation which was regarded as a directly competitor for the attentional resources in the visual search display. In experiment 2, we manipulated the repetition times of the remembering object as the task used by Carlisle, Arita, Pardo & Woodman (2011), and aimed to test the attentional guidance from the memory representation while it was transferred from WM to LTM. The results observed an obvious attentional guidance effect from the memory representation when it was regarded as being maintained in WM (i.e., when the remembering object repeated less than three times) and this guidance effect disappeared when the memory representation was turned into LTM representation (i.e., when the remembering object repeated more than three times). In experiment 3, we required participants only keeping the LTM representation in memory system as to eliminate the possible interference from WM representation, and remain did not found any attentional guidance effect from the irrelevant LTM representation. In conclusion, the results of the present study observed a robust attentional guidance from the WM representation even when it not severing as search target template and sharing features with distractor in visual search task, in contrast, none such effect was found from the LTM representation under the same situation. 
These results indicated that the irrelevant LTM representation could not guide visual attention as well as irrelevant WM representation, and illustrated that the guiding process of visual attention from the representations of WM and LTM were two of distinct cognitive processes.
# 429. N-ary Tree Level Order Traversal

Reference: LeetCode
Difficulty: Easy

## Problem

Given an n-ary tree, return the level order traversal of its nodes’ values (i.e., from left to right, level by level).

Note:

- The depth of the tree is at most 1000.
- The total number of nodes is at most 5000.

## Analysis

### Recursion

The recursion is based on a preorder traversal that also tracks the depth of each node, so each value can be appended to the list for its level.

Time: $O(N)$. Space: $O(N)$ to store all nodes.

### Iteration

In the foreach statement, if p.children is null, it would crash; however, a node that appears in p.children cannot itself be null.

Time: $O(N)$. Space: $O(N)$ to store all nodes.
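The post refers to recursive and iterative solutions whose code is not reproduced above. Below is a minimal Python sketch of the iterative (BFS) version, including the guard for the null-children case mentioned in the analysis; the Node class is assumed to match LeetCode's definition.

```python
from collections import deque

class Node:
    def __init__(self, val=None, children=None):
        self.val = val
        self.children = children   # may be None rather than an empty list

def level_order(root):
    """Iterative BFS: collect node values level by level."""
    if root is None:
        return []
    result, queue = [], deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):          # process exactly the current level
            node = queue.popleft()
            level.append(node.val)
            for child in node.children or []:  # guard: children may be None
                queue.append(child)
        result.append(level)
    return result
```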
# Divisibility by seven Given number n, whose decimal representation contains digits only $1, 6, 8, 9$. Rearrange the digits in its decimal representation so that the resulting number will be divisible by 7. If number is m digited after rearrangement it should be still $m$ digited. If not possible then i need to tell "not possible". EXAMPLE : $1689$ After rearrangement we can have $1869$, which is divisible by $7$ How to tackle his problem - But 18906 contains also digit 0. –  user87690 Dec 24 '13 at 15:22 You write "digits only 1,6,8,9", don't you? –  mathlove Dec 24 '13 at 15:29 Are you asking for a method to do this for any such $n$? So for example if it is not possible for $1689$, would that constitute a proof that it cannot be done? (I'm not suggesting that it cannot be done for $1689$; haven't checked.) –  alex.jordan Dec 24 '13 at 15:33 @alex.jordan yeah...m asking for any such n..Not for just this number –  user3001932 Dec 24 '13 at 15:34 Well, it cannot be done with the number $1$, which technically meets your conditions. Must each digit appear at least once? –  alex.jordan Dec 24 '13 at 15:36 This is not a complete answer. But it will tell you some hints. You should know the following algorithm : For example, we know $35123473$ is a multiple of $7$ in the following way. First, divide it as $35|123|473$, then add $35+473=508$ (odd sections), and add $123$ (even sections). And calculate $508-123=385$. Since $385$ is a multiple of $7$, $35123473$ is a multiple of $7$. So, let us come back to the original question. From the above algorithm, we know we only need to look at the set of a number in a section. The number in a section has at most three digits. So, we now know we only need to look at the following numbers as a number in a section. $$1,6,8,9$$ $$11,16,18,19,66,68,69,88,89,99$$ $$111,666,888,999,168,169,189,689$$ By the way, when we look at them in mod $7$, we have $$1,6,1,2$$ $$4,2,4,5,3,5,6,4,5,1$$ $$6,1,6,5,0,1,0,3$$ I think you should find a good algorithm from this idea. - The number $N=a_n10^n+a_{n-1}10^{n-1}+a_{n-2}10^{n-2}+\cdots +10a_1+a_0$ is divisible by $7$ if and only if the number $$(100a_0+10a_1+a_2)-(100a_5+10a_4+a_3)+(100a_8+10a_7+a_6)-\cdots$$ divisible by $7$. (Idea for proving that is looking on $N$ in modulu $1001$) - If there is no condition that each of these digits need appear at least once, then it is not possible. Consider $1111$. - then i need to print 0.What the problem in it –  user3001932 Dec 24 '13 at 15:44 I created a rule for divisibility by seven, eleven and thirteen whose algorithm for divisibility by seven is this: N = a,bcd; a' ≣ ( − cd mod 7 + a ) mod 7; cd is eliminated and if 7|a'b then 7|N. The procedure is applied from right to left repetitively till the leftmost pair of digits is reached. If the leftmost pair is incomplete consider a = 0. Example: N = 382,536, using simple language: 36 to 42 = 6; 6 + 2 − 7 = 1 → 15; 15 to 21 = 6; 6 + 3 − 7 = 2 → 28; 7|28 and 7|N. This rule is mentioned in my unpublished (officially registered) book: Divisibility by 7, the end of a myth?. -
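A brute-force sketch of the task in Python (our own illustration, not from the original question): it simply tries distinct orderings of the digits and returns one that is divisible by 7, which is fine for short numbers but exponential in general. A faster approach, usable whenever each of 1, 6, 8, 9 appears at least once, is to place the remaining digits first and finish with a permutation of 1, 6, 8, 9 chosen to fix the residue, since those permutations realise every residue class modulo 7 (for instance 1869 ≡ 0, 1968 ≡ 1, 1689 ≡ 2 mod 7).

```python
from itertools import permutations

def rearrange_divisible_by_7(n):
    """Try distinct digit orderings of n (digits drawn from 1, 6, 8, 9) and
    return one divisible by 7, or None. Exponential: only for short inputs.
    No leading-zero issue arises because 0 never occurs among the digits."""
    digits = str(n)
    for perm in set(permutations(digits)):
        m = int("".join(perm))
        if m % 7 == 0:
            return m
    return None

print(rearrange_divisible_by_7(1689))   # prints one valid rearrangement, e.g. 1869 = 7 * 267
```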
# Lesson 9 Applying Area of Circles ### Problem 1 A circle with a 12-inch diameter is folded in half and then folded in half again. What is the area of the resulting shape? ### Solution For access, consult one of our IM Certified Partners. ### Problem 2 Find the area of the shaded region. Express your answer in terms of $$\pi$$. ### Solution For access, consult one of our IM Certified Partners. ### Problem 3 The face of a clock has a circumference of 63 in. What is the area of the face of the clock? ### Solution For access, consult one of our IM Certified Partners. (From Unit 3, Lesson 8.) ### Problem 4 Which of these pairs of quantities are proportional to each other? For the quantities that are proportional, what is the constant of proportionality? 1. Radius and diameter of a circle 2. Radius and circumference of a circle 3. Radius and area of a circle 4. Diameter and circumference of a circle 5. Diameter and area of a circle ### Solution For access, consult one of our IM Certified Partners. (From Unit 3, Lesson 7.) ### Problem 5 Find the area of this shape in two different ways. ### Solution For access, consult one of our IM Certified Partners. (From Unit 3, Lesson 6.) ### Problem 6 1. Complete the table. 4 5 1 9 $$e$$ 15 $$j$$ 2. Here is an equation for the table: $$j = 1.25e$$. What does the 1.25 mean? 3. Write an equation for this relationship that starts $$e = \text{...}$$ ### Solution For access, consult one of our IM Certified Partners. (From Unit 2, Lesson 5.)
# Delimiting a species’ geographic range using posterior sampling and computational geometry ## Abstract Accurate delimitation of the geographic range of a species is important for control of biological invasions, conservation of threatened species, and understanding species range dynamics under environmental change. However, estimating range boundaries is challenging because monitoring methods are imperfect, the area that might contain individuals is often incompletely surveyed, and species may have patchy distributions. In these circumstances, large areas can be surveyed without finding individuals despite occupancy extending beyond surveyed areas, resulting in underestimation of range limits. We developed a delimitation method that can be applied with imperfect survey data and patchy distributions. The approach is to construct polygons indicative of the geographic range of a species. Each polygon is associated with a specific probability such that each interior point of the polygon has at least that posterior probability of being interior to the true boundary according to a Bayesian model. The method uses the posterior distribution of latent quantities derived from an agent-based Bayesian model and calculates the posterior distribution of the range as a derived quantity from Markov chain Monte Carlo samples. An application of this method described here informed the Australian campaign to eradicate red imported fire ants (Solenopsis invicta). ## Introduction Many of the questions arising in the management of threatened and invasive species require empirical estimation of geographic range limits and shifts in range limits over time. Delimiting surveys are routinely carried out as part of initial response to the discovery of an introduced species1,2,3 and to facilitate conservation efforts4,5, with management efforts focused within the delimited range. The effectiveness of programs to slow the spread of biological invasion depends upon accurate estimation of species range limits to avoid uncontrolled expansion of the invasion edge6,7. Accurate estimation of geographic range limits is also required for effective management of threatened species to ensure conservation efforts are applied to all locations where the species are present and to avoid costly actions being applied to unoccupied locations. The capacity to accurately estimate geographic range limits is also of central importance in understanding and predicting range shifts under environmental change to mitigate adverse impacts8. Two related problems arise: design of efficient surveys and inference of boundaries. These problems are solved iteratively, in part because a species distribution evolves over time and in part because an inferred boundary informs subsequent monitoring efforts9,10. Here we focus on the inference problem, and present a new method that is applicable to a range of survey designs. Yalcin and Leroux11 identify six methods for inferring a species’ range: observational study, grid-based mapping, convex hull, kriging, species distribution models and hybrid methods. They define an observational study somewhat idiosyncratically as a method that estimates a characteristic of a species range, such as the maximum elevation where a species can occur. Grid-based mapping and convex hull are methods for inferring a spatial distribution from a collection of point observations, and kriging is a method for interpolating spatial variables based on point observations and potentially also environmental covariates. 
Species distribution models estimate ranges based on correlations between species occurrence or abundance and environmental variables. Hybrid methods, as the name suggests, combine features of multiple types, for example pairing species distribution models with mechanistic modelling of spread processes. For our present purpose we propose an alternative classification comprising four types of method: utilization methods, which characterize a species’ use of spatial resources based on detected individuals; monitory methods, which use records of survey actions (including those that did not result in detections) to delimit range; correlative methods, which identify correlations between environmental variables and occupancy or abundance of a species, and use these to infer where individuals may be present even if not observed; and mechanistic methods, which explicitly model spatial population dynamics and/or detection processes to identify plausible range distributions. These distinctions are primarily conceptual – advanced methods incorporate features from all of these categories. ### Utilization methods One approach to range modelling involves utilization distributions. These provide a probabilistic representation of the use of spatial resources by an individual or species, across its range. Fleming et al.12 identify two distinct types: range and occurrence distributions. The range distribution “addresses the long-term area requirements of an animal, assuming its movement behaviors do not significantly change” whereas the occurrence distribution addresses the question of where the animal was located during the observation period. These definitions are framed in terms of an individual animal, but one can rephrase them for species in a straightforward manner. Methods for estimating range distributions include minimum convex polygon13,14, kernel density estimation15, mechanistic home range analysis16, autocorrelated Gaussian density estimation17, and local convex hull18. Occurrence distributions can be estimated using the Brownian bridge density estimator19. Utilization methods model the internal structure of a spatial distribution. Here we focus on delimitation, that is, determining the limit of a species’ range and quantifying uncertainty in that limit. This is a challenging inference problem, and one that utilization distributions and their associated methods are not ideally suited to address. A common practice is to find a contour of the utilization distribution that encloses 95% of the observations11, but this, by definition, underestimates the extent of the range. The amount by which it underestimates is not apparent, and varies from one dataset to another. Another problem for utilization methods is that available observations may not adequately represent the species’ range, for example due to a lack of sufficient monitoring resources, or imperfect detectability20. Consequently, even enclosing 100% of observations may exclude parts of the range where no observations were made. Prior to delimitation, it is typically not clear where monitoring is required. Moreover, there may be spatial variation in detection probability, due to environmental factors or to the use of multiple monitoring methods with different detection probabilities. To overcome this problem, it is necessary to model likely locations of undetected individuals, taking into account spatial variations in detection probability. 
It may be possible to repurpose the delimitation method we present below to construct utilization distributions. However, we stress that utilization distributions are intended to characterize the observed use of spatial resources; they are not designed to represent the likely locations of unobserved individuals. ### Monitory methods Monitory methods consider the history of survey actions undertaken during the management of a species, and combine detections, non-detections, and an assessment of detection probability to infer range limits, often by first constructing maps of probability of occupancy or expected abundance. For example, the method of Hauser et al.14 uses such records to construct a map of occupancy probabilities for an invasive plant species and prioritise subsequent survey actions. Spatial variation in detection probability remains a problem for monitory methods, although in principle this spatial variation can be incorporated into the inference. An additional problem is that heat maps of probability of occupancy or expected abundance reflect both the geographic distribution of the species and uncertainty about the locations of undetected individuals. Consequently, a temporal sequence of such heat maps can create an illusion of range expansion merely due to increasing uncertainty regarding the locations of undetected individuals7, potentially even when the range within which detections occur is contracting. Boundary curves or polygons can be constructed by finding isopleths of such heat maps, but for any chosen threshold value, the resulting isopleth likewise reflects both the extent of the species’ range and the precision with which the available data delimit that range. ### Correlative methods Correlative methods, known as Species Distribution Models11,21 (SDMs) involve regressing species occurrence or abundance against climatic or other environmental covariates, and then using maps of these covariates to predict the likely spatial distribution of undetected individuals. These methods work well when species are in equilibrium with their environment. However, this is unlikely to be true in many circumstances of management interest, because pest control programs typically are applied when species ranges are expanding, and threatened species programs often are applied when ranges are contracting. Moreover, SDMs typically do not take into account non-environmental biotic factors such as the presence or absence of diseases and predators. Ecological niche models22,23 are also relevant to correlative methods. These characterize the distribution of a species in environmental space (also known as ecological space), in which points correspond to the values of a (potentially large) number of environmental or ecological variables. In contrast, geographic space is comprised of two-dimensional spatial locations. Typically, points in geographic space can be mapped to unique points in environmental space to assess whether they are suitable for a species, but this may be of little use if suitable habitats are unoccupied, as is often the case in invasion and conservation biology. ### Mechanistic methods Another way to account for undetected individuals is to incorporate models of population dynamics into the inference procedure. 
In an invasive species context, the Bayesian approach developed by Mangel et al.10, estimates the probability of pest occupancy at different distances from the presumed invasion epicentre assuming the population expands smoothly, producing a bell-shaped spatial distribution. The delimitation method developed by Leung et al.9 was designed for invasions in which the proportion of invaded sites declines relatively smoothly from epicentre to edge. The accuracy of these methods, which involve allocating survey effort along transects centered on the estimated invasion epicentre, is substantially reduced when individuals have a patchy distribution9. Boundary estimation for an expanding population can be challenging when spread occurs as a result of stratified diffusion24, in which individuals make frequent short movements and occasional long distance “jumps”. This form of spread process typically creates an irregular pattern of occupancy comprised of clusters of individuals. Clusters typically are located at imperfectly predictable distances from each other due to inherent difficulties in estimating the distances and directions of long distance movements25. This form of species distribution, which also can arise from spatial heterogeneity in habitat availability, creates a heightened risk of underestimating range boundaries because individuals may exist beyond the surveyed area despite an absence of detections near its perimeter. For contracting populations such as threatened species, range determination is complicated by complex source-sink dynamics26,27 that produce substantial gaps in occupancy. More generally, challenges in estimating range limits can arise when there is a complex interplay between species reproduction, dispersal rates and habitat suitability. ### An agent-based approach In previous work7, we developed an agent-based model to reconstruct a history of the Brisbane fire ant invasion, or more precisely to sample multiple plausible histories from a posterior distribution using a Markov chain Monte Carlo (MCMC) technique. This approach combined features of all of the above methods. The available data included: extensive records of individual nest detection points, as in utilization methods; records of search actions and estimates of detection probabilities by targeted search and by public reporting in urban and rural environments, as in monitory methods; environmental variables in the form of a habitat suitability map, as in correlative methods; and a detailed model of population dynamics, including a distribution of founding distances, reproductive rate and a complete phylogenetic tree for all detected and putative undetected nests, as in mechanistic methods. While it is not possible to infer the exact number, locations or lifespans of undetected individuals, our method does simulate multiple plausible invasion histories at that level of detail. We typically sample 10000 such histories to explore the space of plausible histories consistent with the data. For the reader’s convenience, we provide a more detailed summary of the data and model parameters in Appendix 1. Full details of the model and the Markov chain Monte Carlo technique we used to sample from it are provided in Keith and Spring7, primarily in the Supplementary Information. Our approach addressed many of the limitations identified above. 
In particular, it can be applied in circumstances where complex spatio-temporal dynamic processes create substantial gaps in occupied regions and irregular boundary shifts over time, using data obtained with imperfect and incomplete survey methods. However, one of our outputs involved processing the 10000 sampled histories to produce a time series of heat maps showing the expected areal abundance of fire ant nests. As we point out in our discussion of monitory methods above (and in our earlier paper), a time sequence of such heat maps can create an illusion of expansion due to increasing uncertainty regarding the location of undetected nests.

### Scope of this paper

Our goal in this paper is to provide a method for inferring and visualizing a species’ range limits given posterior sampled point sets, in such a way that the contribution of uncertainty to the apparent range is appropriately quantified. Each sampled point set includes known locations of detected individuals and putative locations of undetected individuals. In practice, we generate such point sets using our published agent-based method7. Next, we construct a polygon enclosing each point set, then identify map coordinates contained in the interior of at least a proportion α of these polygons. We provide boundaries for multiple values of α to indicate the degree of uncertainty in the inferred range. The polygons are selected from a polygon family, thus constraining them to have properties deemed desirable for a specific application, such as convexity or connectedness. In our examples we use chi-shapes - simple polygons constructed using an algorithm of Duckham et al.28 - or modified chi-shapes (newly proposed here) to allow for multiple disjoint polygons, as described in the Methods section below. Alternative polygon families could be used, for example to allow polygons with holes. To illustrate the new method we estimated the boundary of an invasive species that is subject to an eradication program. The method can also be readily applied to estimate boundaries of native species that are contracting or shifting due to environmental change, harvesting pressure or demographic variability. The program we consider is aimed at eradicating a fire ant invasion in South East Queensland, Australia. We estimated the boundary of the invasion at the end of April 2015, to inform a decision on whether to continue program funding, based on historical data regarding where fire ants were detected and where efforts were made to remove them. We compared our most conservative estimate to the operational boundaries in use by the eradication program at that time. We found that the outer operational boundary at the end of April 2015 (that is, the outer limits of the region monitored by remote sensing) corresponded over most of its length to our most conservative inferred boundary. On this basis, we concluded that the invasion had been successfully delimited, subject to modest extensions being made to the operational boundary in a few identified locations.

## Methods

The method takes as input multiple sets of points (that is, map locations) in a two-dimensional landscape, representing the locations of both detected and undetected individuals. These points may represent habitations, or alternatively the notional centre of range for each individual. Note that undetected individuals do not have known locations, and even the number of undetected individuals is unknown.
Plausible locations for undetected individuals must therefore be imputed via some algorithm. We assume that multiple alternative sets of points are available, each containing locations of all detected individuals, but differing in the number and locations of imputed undetected individuals. In principle, such sets of points do not have to be generated within a Bayesian framework: any algorithm capable of imputing missing data will suffice. However, the probabilistic interpretation that we give to the polygons constructed here assumes that the multiple sets of points have been sampled from a posterior distribution. In the examples presented below we use an MCMC algorithm that we developed7 to sample from a posterior distribution over plausible histories of a biological invasion.

### Input to the method

The input consists of the following items:

1. Point sets $P_1, P_2, \ldots, P_N$, where each $P_i$ contains $n_i$ two-dimensional points.
2. A set Q of reference points distributed throughout the region of interest.
3. A value α, such that the polygon to be constructed will contain all reference points interior to at least a proportion α of the N polygons constructed for the N point sets (see Step 1 in the next section).
4. A family of polygons $\mathcal{F}$ and a map such that any set of points P maps to a unique polygon $\wp(P) \in \mathcal{F}$. In this paper, all polygons are chi-shapes (defined below) or modified chi-shapes.

Each of the point sets $P_1, P_2, \ldots, P_N$ includes a subset of observations common to them all, representing known locations of individuals. The point sets differ in the number and locations of undetected individuals, imputed by some appropriate method. Here we use the posterior sampling method of Keith and Spring7. The set Q of reference points provides a convenient discretization of the geographic region of interest. In principle it can be any collection of points scattered throughout the region, but in this paper we use the centres of cells in a square tiling. In that case, the locations of all reference points can be determined by supplying map coordinates of one reference point (in some specified coordinate system aligned to the tiling) and the side length of the tiling. The value α controls how confident we can be that the polygon we ultimately report contains the entire range of the species. We stress that neither α nor 1 − α should be interpreted as a proportion of the range of the species. Whatever value of α is used, the resulting polygon will contain all known locations of individuals, since these are common to all point sets, and thus contains the entire observed range of the species. But our goal is to construct a polygon that also contains all unobserved members of the species, and α reflects how conservative we want to be in constructing such a polygon. Various options are available for the family of polygons $\mathcal{F}$. One simple choice is the family of convex polygons, in which case $\wp(P)$ would be the convex hull of a set of points P. However, convex polygons have the disadvantage of resulting in potentially substantial overestimation of the species boundary when actual boundaries are nonconvex. Nonconvex boundaries are likely in many circumstances, including where unsuitable habitat prevents areas being occupied and where long-distance movements cause the boundary to “bulge outwards” in the vicinity of satellite populations.
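Before turning to the specific polygon families we use, the following is a minimal sketch (in Python; not the implementation behind this paper) of the input structures just described: N point sets that share the detected locations but differ in their imputed undetected locations, and a reference grid Q built from one reference coordinate and a cell side length. All names and numerical values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known (detected) locations, common to every point set; dummy values for illustration.
detections = rng.uniform(0, 10, size=(50, 2))

def impute_undetected(rng):
    """Hypothetical stand-in for one posterior draw of undetected locations.
    In practice these come from the agent-based sampler of Keith and Spring (ref. 7)."""
    n_undetected = rng.poisson(20)                      # the count itself is unknown
    return rng.uniform(0, 10, size=(n_undetected, 2))   # the positions are unknown

N = 1000
point_sets = [np.vstack([detections, impute_undetected(rng)]) for _ in range(N)]

def reference_grid(origin, spacing, nx, ny):
    """Centres of cells in a square tiling: one reference coordinate plus the cell side."""
    ox, oy = origin
    xs = ox + spacing * np.arange(nx)
    ys = oy + spacing * np.arange(ny)
    gx, gy = np.meshgrid(xs, ys)
    return np.column_stack([gx.ravel(), gy.ravel()])

Q = reference_grid(origin=(0.05, 0.05), spacing=0.1, nx=100, ny=100)
```

Each point set is simply an (n_i, 2) array; nothing in the delimitation steps below depends on how the undetected locations were generated.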
Chi-shapes28 are a family of simple polygons (‘simple’ in the geometric sense that sides intersect only at corners, and form a closed path). This family includes all convex polygons, but chi-shapes may also be non-convex. A chi-shape $\wp(P)$ is constructed for a set of points P by starting with the Delaunay triangulation of P, then identifying all external edges that satisfy two criteria: (1) the edge is longer than a given length L; and (2) if the edge is removed, the external edges of the remaining triangles still form a simple polygon. Only the longest such edge is removed, necessarily creating two new external edges and one new external vertex. This process is iterated until no external edges satisfying these criteria remain (see Fig. 1). In this paper, all polygons are either chi-shapes or modified chi-shapes in which we relax criterion (2). We proceed as in the preceding paragraph, except that we replace criterion (2) with the requirement: (2′) if the edge is removed, along with any other external edges in the same triangle, the remaining triangles still include all vertices (see Fig. 2). The properties of this algorithm should be analysed in future work; here we merely note that by removing the other external edges in the same triangle, it becomes possible to form disjoint polygons. Numerous other polygon families are available, for example, the families of polygons produced by LoCoH18 or by parametric kernel density estimation29. We do not claim that chi-shapes or modified chi-shapes are preferable to these alternatives; a comparison is a potential direction for future research.

### Inferring boundaries

Our proposed method consists of the following steps:

1. Construct a polygon $\wp_i = \wp(P_i) \in \mathcal{F}$ for each point set $P_i$.
2. For each reference point, count the number of point sets for which the polygons constructed in Step 1 contain that reference point in their interior or on their edge.
3. Identify the set of reference points $Q_\alpha \subset Q$ for which the counts determined in Step 2 exceed a proportion α of the total number of point sets N.
4. Construct a polygon $\wp = \wp(Q_\alpha) \in \mathcal{F}$ using the reference points identified in Step 3.

If a high resolution is desired, the number of reference points may be large. In that case, Step 4 can be computationally intensive. The computational efficiency of Step 4 can be improved if the reference points are centres of cells in a square tiling, as in all our examples below. In that case, we can first identify boundary reference points. A reference point in a square tiling is said to be on the boundary if any of the four reference points immediately above, below, to the left or to the right of the point is contained in fewer than a proportion α of the polygons constructed in Step 1. Step 4 then consists of constructing a polygon only for these boundary reference points. If the polygons are chi-shapes or modified chi-shapes, and the length L used in their construction is sufficiently large relative to the spacing between reference points, the polygon will be the same as if all of the reference points identified at Step 3 had been used. Using the centres of a square tiling as reference points also facilitates an alternative visualization. The counts obtained at Step 2 (or alternatively the proportions obtained by dividing these counts by N) form a data matrix that can be visualized using a heat map.
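To make the chi-shape edge-removal loop and Steps 1–3 concrete, here is an illustrative Python sketch. It is not the code used for the paper's figures, nor the pts2polys package. The chi-shape part operationalizes criterion (2) with the commonly used check that the vertex opposite a candidate edge is not already a boundary vertex (an implementation choice of ours; Duckham et al.28 give the definitive algorithm), and the counting part uses convex hulls purely to keep the example short.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay
from matplotlib.path import Path

def chi_shape_edges(points, L):
    """Boundary edges (vertex-index pairs) of a chi-shape of `points`: external
    edges longer than L are removed, longest first, while the boundary stays simple."""
    tri = Delaunay(points)
    edge_tris = {}                                    # edge -> triangles containing it
    for t, simplex in enumerate(tri.simplices):
        for i in range(3):
            e = tuple(sorted((int(simplex[i]), int(simplex[(i + 1) % 3]))))
            edge_tris.setdefault(e, []).append(t)
    alive = set(range(len(tri.simplices)))            # triangles not yet removed
    seglen = lambda e: float(np.linalg.norm(points[e[0]] - points[e[1]]))
    while True:
        boundary = [e for e, ts in edge_tris.items() if sum(t in alive for t in ts) == 1]
        boundary_vertices = {v for e in boundary for v in e}
        candidates = []
        for e in boundary:
            t = next(t for t in edge_tris[e] if t in alive)
            opposite = [int(v) for v in tri.simplices[t] if int(v) not in e][0]
            # criterion (1): edge longer than L; criterion (2), operationalized:
            # the opposite vertex must not already lie on the boundary.
            if seglen(e) > L and opposite not in boundary_vertices:
                candidates.append((seglen(e), e, t))
        if not candidates:
            return boundary          # chain edges by shared vertices to obtain the ring
        _, _, t = max(candidates)    # remove only the triangle behind the longest edge
        alive.discard(t)

def q_alpha(point_sets, Q, alpha):
    """Steps 1-3: count, for each reference point in Q, how many per-point-set
    polygons contain it, and return the reference points kept for threshold alpha.
    For brevity the polygons here are convex hulls; the paper uses chi-shapes."""
    counts = np.zeros(len(Q))
    for P in point_sets:
        hull = Path(P[ConvexHull(P).vertices])               # Step 1: polygon for P
        counts += hull.contains_points(Q, radius=1e-9)       # Step 2: membership
    proportions = counts / len(point_sets)
    return Q[proportions >= alpha], proportions              # Step 3: Q_alpha

# Q_alpha_pts, props = q_alpha(point_sets, Q, alpha=0.999)
# Step 4 would then apply chi_shape_edges(Q_alpha_pts, L) to these reference points.
```

Reshaping `props` to the square tiling gives exactly the count matrix just mentioned. The modified chi-shapes relax criterion (2) as described above; we have not sketched that variant here.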
This heat map is of interest in its own right, and we present an example below (Fig. 3). One can also replace Steps 3 and 4 above with an algorithm for tracing an isopleth of the heat map, that is, the level set corresponding to the level α. However, in that case the resulting polygon may not belong to the desired polygon family $\mathcal{F}$. The proposed method can be interpreted as averaging classifiers built from multiple point sets. That is, one can interpret the polygons built at Step 1 as classifying space into infested regions (interior) and non-infested regions (exterior), with the above-mentioned heat map being essentially an average of these classifiers. In this respect, our method resembles range bagging30. The resemblance is somewhat superficial, however, as range bagging is primarily a computational technique for building classifiers in high-dimensional environmental space by averaging over classifiers in one or two dimensions. Moreover, range bagging generates multiple point sets via sub-sampling observations rather than by imputing locations to undetected individuals. For the analysis presented below, we experiment with square tilings having spacings of 50 m and 100 m. We also experiment with setting the minimum length of edges to be removed in the construction of chi-shapes and modified chi-shapes to be L = 5 km, 10 km and 20 km.

## Results

### Simulation study

To test the capacity of the method to infer the geographic range of a species, and in particular to quantify the likely locations of undetected individuals, we used a simulated data set that we had previously generated to mimic a biological invasion and eradication program7. The simulation involved constructing an entire detailed history of a hypothetical invasion, starting with an initial introduction, recording individual founding events, including time of founding and location of all individuals, and also simulating management efforts to identify which individuals were detected and thus available for inference, and which nests were killed by treatment. Further details of the simulated invasion and our reconstruction of it are provided in Keith and Spring7 and are summarised in Appendix 2 below for the reader’s convenience. Here the relevant points are the following:

1. We sampled 10000 plausible histories of the invasion from a posterior distribution. From each of these we extracted the known (for detected nests) and imputed (for undetected nests) locations of all individuals alive during the second last month of the modeled period. We chose the second last month so that the imputed locations of undetected individuals would be informed by detections made in the final month. This produced 10000 point sets.
2. Because the data is simulated, we also know the true history of the invasion, including the precise location and lifespan of all detected and undetected individuals. From this we extracted the true locations of all individuals alive during the second last month of the modeled period.

Figure 4 shows inferred boundaries for 1 − α = 0.5, 0.75, 0.975, 0.99 and 0.999 (innermost to outermost). Note that here and in the rest of the paper we specify values of 1 − α, rather than α, purely for the aesthetic reason that the area enclosed increases as 1 − α increases. Figure 4 also shows the true locations of all individuals that were alive in the second last month of the period modeled, and the locations of detections that occurred during that month. Note that all of the detections are inside the 0.5 boundary.
Indeed, they must be contained in the boundary inferred for any value of α, since they are contained in all 10000 point sets.

### Case study: fire ants in Brisbane

The method presented here was developed for the National Fire Ant Eradication Program (NFAEP) to eradicate the Red Imported Fire Ant (RIFA) from the vicinity of Brisbane, Australia. As the history of this eradication program underscores the importance of accurately delimiting an invasion, we provide the following summary. During the early years of the NFAEP, control efforts were focused primarily on known infestations and nearby areas, with relatively little surveillance around those areas. This strategy can be slow in achieving delimitation when infestations exist well beyond the boundary of the managed area. Infestations that were accurately delimited in the early years of the program, such as the Fisherman’s Island infestation, were successfully eradicated31, while infestations that were not accurately delimited have continued to spread. In June 2007, RIFA colonies were detected at Amberley in Brisbane’s southwest, outside the operational area at that time. It was subsequently determined that an invasion had been spreading undetected from a point in or near Amberley for an extended period. This realization was a major setback for the eradication program, which had been operating with apparent success since 2001. In previous modeling7, we estimated that eradication was close to being achieved by 2004, but that the population subsequently recovered, in large part due to delimitation failure. Our results indicated that Amberley was not the only delimitation failure – there were undetected areas of spread in the eastern part of the invasion at around the same time, and these contributed to the recovery after 2004. Due to continuing spread of the Australian fire ant invasion, the eradication program’s funding and methods were reviewed. It was decided that continued funding of the program beyond June 2013 would depend partly on the invasion being successfully delimited by 30 June 2015. To increase confidence that delimitation had been achieved, the NFAEP surveyed a large area near the invasion’s estimated boundary in 2013 and 2014. To undertake this task, low-cost monitoring methods involving remote sensing and citizen monitoring were applied. These methods have substantially lower detection probabilities than conventional surveillance methods, including ground surveillance with trained personnel, but enable large areas to be rapidly surveyed at affordable cost. This reliance on a surveillance method with detection probability substantially less than 1 highlights the importance of accounting for this source of observational error in estimating the invasion’s boundary. At the time this analysis was performed, we had data on detections and interventions to the end of May 2015. We decided to assess whether the invasion had successfully been delimited by the end of April 2015, so that the inference would be informed by one month of subsequent detections. We first inferred a complete history of the invasion using a Bayesian agent-based model previously developed for reconstructing the Brisbane RIFA invasion7 and summarized in Appendix 1. The remote sensing efficacy (i.e., probability of a nest being detected by aerial survey) and the founding rate (i.e., average number of nests founded per nest per month) were held fixed rather than inferred, but we investigated the impact of alternative fixed values on inferred boundaries.
We therefore performed five separate MCMC runs with:

1. Remote sensing efficacy 0.2, founding rate 0.25 nests founded per nest per month.
2. Remote sensing efficacy 0.3, founding rate 0.25 nests founded per nest per month.
3. Remote sensing efficacy 0.4, founding rate 0.25 nests founded per nest per month.
4. Remote sensing efficacy 0.3, founding rate 0.15 nests founded per nest per month.
5. Remote sensing efficacy 0.3, founding rate 0.35 nests founded per nest per month.

The values of remote sensing efficacy and founding rate selected for these runs reflect ranges of plausible values for these parameters, according to advice received from Biosecurity Queensland. Each run was continued until at least 40000 MCMC reconstructed histories were produced, with the first 20000 discarded as burn-in. Convergence was assessed visually using time-series plots of log-likelihood. For each of the reconstructed histories, we extracted the map coordinates of nests living at the end of April 2015. Thus each of our inferred boundaries was based on at least 20000 point sets. Figure 5 (left) illustrates the 0.5 (inner group) and 0.999 (outer group) boundaries for the three runs with assumed remote sensing efficacy 0.3, and founding rates 0.15, 0.25 and 0.35 nests per nest per month. As expected, the inferred geographic extent increases with the founding rate. However, the difference is negligible for the 0.5 boundaries, and not large even for the 0.999 boundaries. We will therefore ignore the influence of founding rate and use the middle founding rate of 0.25 nests per nest per month (that is, 3 nests per nest per year) in the analyses that follow. Figure 5 (right) shows the 0.5 (inner group) and 0.999 (outer group) boundaries for the three runs with assumed founding rate of 0.25 nests per nest per month, and remote sensing efficacies of 0.2, 0.3 and 0.4. As expected, the inferred geographic extent increases as the assumed remote sensing efficacy decreases, but again the difference is negligible, and we will use the middle remote sensing efficacy of 0.3 in the remaining analysis. Note that finding that the assumed value of remote sensing efficacy has little effect on the inference is completely different from saying that the success of the program does not depend on its actual value. This is because the inference is informed by multiple data types, so that the past can be accurately reconstructed even without a precise estimate of remote sensing efficacy. Nevertheless, the eventual success of the program may depend crucially on rapid detection of relatively rare long-distance dispersal events by remote sensing. Figure 6 presents our main result – inferred 0.5, 0.75, 0.975, 0.99 and 0.999 boundaries at the end of April 2015 assuming a founding rate of 0.25 nests per nest per month and remote sensing efficacy of 0.3. This figure also shows the operational boundaries in place at that time. These included a region designated the remote sensing scope, and low- and high-risk restricted areas. The remote sensing scope is a region that is monitored by airborne cameras. However, only a small part of this area is searched in any one month. The restricted areas have various management strategies in place to limit human-assisted movement of RIFA and to eradicate existing infestations.
### Delimitation in time-series One reason for proposing the delimitation method presented here was dissatisfaction with using our earlier abundance heat map7 to delimit boundaries, given its tendency to exaggerate apparent spatial extent due to uncertainty regarding the location of undetected individuals. This effect is most apparent when visualizing changes in boundaries over time, since uncertainty about the location of undetected nests tends to increase towards the end of the data collection period. Figure 7 shows the 0.5 (inner) and 0.999 (outer) inferred boundaries in December 2000–2014, using chi-shapes with L = 10 km and a square tiling with cells of 100 m by 100 m. Also shown are all detections that occurred January–December of each year (some of which are outside the December boundaries, due to clearing the pest from those areas earlier in the year). We propose that the series of 0.5 polygons gives the best visual representation of temporal change in boundary location, since these polygons are somewhat analogous to medians, and thus less affected by increasing uncertainty. On the other hand, if one wants to identify a region that contains the entire infestation with high probability, we recommend the 0.999 polygon. The gap between these two polygons gives an indication of the degree of uncertainty in boundary location, and spatial variation in that uncertainty. Note this gap is wider in the December 2014 plot than at earlier times, but otherwise fairly constant. The December 2000 subplot illustrates one of the advantages of our approach: it shows the inferred extent of the infestation prior to the first detections in 2001. This is possible because our sampling algorithm7 imputes plausible histories, including time of founding, for all nests. Similarly, the infestation centred on Amberley is visible in the west in December 2004 and 2005, even though no detections occurred there in those years. To investigate the effect of changing the spacing between reference points, we also produced results using 50 m by 50 m cells. The results (not shown) were visually indistinguishable from Fig. 7. We concluded that our method is not much affected by cell size, at least when the side length of cells is small compared to the parameter L. ### Modified chi-shapes Figure 8 shows similar results using modified chi-shapes, with all other settings the same. The advantage is that inferred boundaries can separate into disjoint polygons. This occurred for some of the 0.5 polygons, but none of the 0.999 polygons. In particular, one can see that there were two disjoint infestations in 2000, consistent with two separate introductions. The Amberley infestation can also be seen spreading separately from the main infestation between 2003 and 2006. To investigate the effect of varying the parameter L used in the construction of chi-shapes and modified chi-shapes, we repeated the analysis with L = 5 km and L = 20 km (Figs 9 and 10). The shape of the 0.5 polygons is substantially affected by the choice of L: with L = 5 km these polygons fragment into multiple disjoint components, whereas with L = 20 km only a single connected polygon is produced. The 0.999 polygons are much less affected by this parameter: all 0.999 polygons remained connected for all three values of L, although they do become increasingly “rough” as L decreases. 
Changing the parameter L has a less dramatic effect on the 0.5 polygons when chi-shapes are used instead of our modified chi-shapes, because chi-shapes are constrained to be simple polygons. For management actions that rely on containing the infestation with high probability, such as setting the limits of aerial searches, the 0.999 polygons will be of more interest than the 0.5 polygons. In that case, the appropriate choice of L is a less pressing concern. However, efficient allocation of resources within the boundary may be better guided using an abundance or occupancy heat map, given the sensitivity of the 0.5 polygons to the choice of L. ### Comparison to utilization methods The method for estimating range limits described in this paper is unique in basing the inference on multiple sets of imputed coordinates representing locations of undetected individuals. It thus addresses a fundamentally different problem than utilization approaches. Both approaches identify spatial distributions, but those produced by utilization approaches represent a species’ observed use of spatial resources, whereas those produced by the new method represent posterior uncertainty in the location of range limits, accounting for undetected individuals. Nevertheless, it is interesting to compare our results to utilization approaches. We constructed polygons using detections made in each of the years 2001–2014, using two approaches: convex hull (Fig. 11) and the r-LoCoH method32 with r = 10 km (Fig. 12). The parameter r is the maximum distance of neighbors used to construct a local convex hull around each detection. Note that we used only detections, not imputed locations of undetected nests, in this analysis, to highlight the advantage of using posterior sampling to impute locations of undetected nests. As noted above, it would also be possible to use LoCoH polygons in place of chi-shapes at Steps 1 and 4 of our algorithm, but we have not explored this possibility. The first subplots of Figs 11 and 12 are blank because there were no detections in 2000, which in itself highlights an advantage of posterior sampling of unknown locations and founding times: inferences can be made about species distribution at times prior to the first detection. In the subplots for later years, polygons constructed using only detections do not identify several large infested regions inferred using our method. For example, compare the western infestations shown in the 2004 and 2005 sub-plots of Fig. 7 to the corresponding sub-plots in Figs 11 and 12. These regions are not apparent using either convex hull or r-LoCoH, mainly because large infested areas went undetected in those years. Our inference for those years is informed by detections made prior to 2004 and subsequent to 2005, and by models of unobserved spread. The convex hull approach also demonstrates the opposite problem – the convexity of the polygons forces inclusion of large regions that are clearly not infested. For example, compare subplots for the years 2006–2009: a large concave region is apparent in the south in Fig. 7, but not in Fig. 11. Also note that the polygons shown in Figs 11 and 12 enclose all detections from the corresponding year; had we used only the detections made in December of each year, these polygons would have been much smaller and would have failed to enclose large infested regions. Thus the temporal resolution possible with our method is much higher. 
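For readers who want to reproduce the flavour of this comparison, the following is a rough sketch of how the outermost (100%) isopleth of an r-LoCoH home range can be assembled from detections only, as the union of local convex hulls of neighbours within radius r. It is not the implementation used for Fig. 12 (which relied on the published LoCoH tools); r = 10 km as in the text, and all other choices here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from shapely.geometry import MultiPoint
from shapely.ops import unary_union

def r_locoh_outer_isopleth(detections, r):
    """Union of local convex hulls: each detection's hull is built from its
    neighbours within radius r. `detections` is an (n, 2) array in metres.
    Approximates the 100% isopleth only."""
    tree = cKDTree(detections)
    hulls = []
    for p in detections:
        idx = tree.query_ball_point(p, r)            # neighbours within r (includes p)
        hull = MultiPoint(detections[idx]).convex_hull
        if hull.geom_type == "Polygon":              # skip degenerate (collinear) cases
            hulls.append(hull)
    return unary_union(hulls)

# boundary_2014 = r_locoh_outer_isopleth(detections_2014, r=10_000)   # r in metres
```

As noted above, such detection-only polygons say nothing about undetected individuals, which is the gap the posterior-sampling approach is designed to fill.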
Another advantage of our method is that by constructing polygons for multiple values of α, one can visualize the uncertainty regarding boundary location, and spatial variation in that uncertainty. While it would also be possible to construct multiple polygons enclosing different proportions of the detections, these would reflect relative utilization of regions internal to the boundary, not uncertainty regarding the boundary location. ## Discussion The method presented here constructs simple connected polygons representing the boundary of a species’ geographic range. The simulation results shown in Fig. 4 demonstrate that boundaries constructed using the proposed method do indeed reflect the location of actual nests, including undetected nests. Note that the detections made in the month for which these boundaries were constructed do not provide a good indication of the actual range of the species: if only these detections were used to infer the boundary the range would be severely underestimated. Also note that by constructing boundaries for different values of α, a realistic indication of the uncertainty in the location of the boundary can be obtained. Most living individuals are contained within the 0.5 boundary, and all but one of the undetected individuals are contained within the 0.975 boundary, with the remaining individual between the 0.99 and 0.999 boundaries. The meaning of the value 1 − α requires some clarification. Strictly speaking, for each reference point contained within the 1 − α boundary, α is the proportion of point sets for which the corresponding polygon contains that reference point. If the point sets are sampled from a posterior distribution, and the shape of the species’ range is well approximated by a member of the polygon family, the 1 − α boundary can be interpreted as containing all points with a posterior probability at least α of being within the geographic range of the species. Importantly, the polygons constructed by this method are not required to be convex, giving the method greater generality and flexibility than previously applied convex polygon methods14. Figure 6 illustrates that boundaries of real species distributions can be concave, and would not be well approximated if the polygon were constrained to be convex. This is most noticeable along the northern boundary, where use of a convex polygon would unnecessarily include a large geographical area within the inferred range. This demonstrates the risk of overestimating the boundary when convex polygon methods are used. Species often have nonconvex distributions resulting from spatial variation in habitat suitability and long-distance dispersal events that create outlier populations in remote locations. For the fire ant data, we found that the extent of the invasion was likely to be within operational boundaries at the end of April 2015, with the outer edge of the area remotely sensed corresponding over most of its length to the outer edge of the 0.999 inferred boundary. On this basis, we concluded that the invasion had been accurately delimited by the end of April 2015, subject to small extensions to operational boundaries in the southeast, far west and north of the Brisbane River, near the coast. Founding events rarely occur across large bodies of water. This behaviour is not incorporated into our model, so our methods may overestimate expansion north of the river. 
While this does not guarantee that eradication will ultimately be achieved, or that delimitation failure will not recur at some time in the future, establishing that the invasion has been delimited is an essential prerequisite to the ultimate success of the program. The approach developed here is well suited to practical applications for assisting managers of biological invasions and threatened species. Invasion management effectiveness can benefit from the capacity to regularly update estimates of the invasion boundary whenever new information is obtained during the course of an eradication or containment program. Such information is vital to determine whether management efforts are succeeding in contracting the invasion or slowing its spread. Regular updating of range limits also is required to assess whether threatened species populations that are subject to management are expanding or not contracting. Our method of constructing polygons is not limited to posterior samples obtained using MCMC. For example, it could alternatively be used with posterior samples obtained using Approximate Bayesian Computation (ABC – see the seminal paper of Beaumont et al.33 for a description). Our method requires multiple alternative point sets representing plausible locations of individual entities, but these need not even be generated via posterior sampling if alternative means of imputing missing locations are devised. Although in this paper we have focused on the computational geometry aspects of the method, the usefulness of the resulting polygons depends crucially on the posterior sampled point sets, which we generated using our earlier agent-based Bayesian approach7. The agent based approach draws together components of utilization, monitory, correlative and mechanistic approaches, and takes into account the species’ life cycle, environmental variables and human interventions. It is a highly flexible approach that can potentially be modified for a wide variety of species, and could also incorporate genetic information, thus refining estimates of population dynamic processes and increasing the accuracy of estimated range limits. The wide range of potential application of our approach will allow it to make substantial contributions to the problems posed by biological invasions and conservation of threatened species. An R package pts2polys implementing the method described herein is available from CRAN. Currently this package uses chi-shapes, but not the modified chi-shapes we introduced above. C code implementing the method for modified chi-shapes is available from https://github.com/jonathanmkeith/posterior_polygons/releases/tag/v1.0. ## Data Availability Data and code used in this paper are available on request to the corresponding author. ## References 1. 1. Panetta, F. D. & Lawes, R. Evaluation of weed eradication programs: the delimitation of extent. Diversity and Distributions 11, 435–442 (2005). 2. 2. Tobin, P. C. et al. Using delimiting surveys to characterize the spatiotemporal dynamics facilitates the management of an invasive non-native insect. Population ecology 55(4), 545–555 (2013). 3. 3. Ryall, K. Detection and sampling of emerald ash borer (Coleoptera: Buprestidae) infestations. The Canadian Entomologist 147, 290–299 (2015). 4. 4. Noon, B. R., Bailey, L. L., Sisk, T. D. & McKelvey, K. S. Efficient species‐level monitoring at the landscape scale. Conservation Biology 26(3), 432–441 (2012). 5. 5. Amorim, F., Carvalho, S. B., Honrado, J. & Rebelo, H. 
Designing optimized multi-species monitoring networks to detect range shifts driven by climate change: a case study with bats in the North of Portugal. PloS one 9, e87291 (2014). 6. 6. Sharov, A.A., Liebhold, A.M. & Roberts, A.E. Optimizing the use of barrier zones to slow the spread of gypsy moth (Lepidoptera: Lymantriidae) in North America. Journal of Economic Entomology 91, 165–174 (1998). 7. 7. Keith, J. M. & Spring, D. Agent-based Bayesian approach to monitoring the progress of invasive species eradication programs. Proceedings of the National Academy of Sciences 110, 13428–13433 (2013). 8. 8. Thuiller, W. Editorial commentary on ‘patterns and uncertainties of species’ range shifts under climate change’. Glob. Chang. Biol. 20(12), 3593–3594 (2014). 9. 9. Leung, B., Cacho, O. J. & Spring, D. Searching for non‐indigenous species: rapidly delimiting the invasion boundary. Diversity and Distributions 16, 451–460 (2010). 10. 10. Mangel, M., Plant, R. E. & Carey, J. R. Rapid delimiting of pest infestations: a case study of the Mediterranean fruit fly. Journal of Applied Ecology 21, 563–579 (1984). 11. 11. Yalcin, S. & Leroux, S. J. Diversity and suitability of existing methods and metrics for quantifying species range shifts over time. Global Ecology and Biogeography 26, 609–624 (2017). 12. 12. Fleming, C. H. et al. Rigorous home-range estimation with movement data: A new autocorrelated kernel-density estimator. Ecology 96, 1182–1188 (2015). 13. 13. Bekoff, M. & Mech, L. D. Simulation analyses of space use: home range estimates, variability, and sample size. Behavior Research Methods, Instruments, and Computers 16, 32–37 (1984). 14. 14. Hauser, C. E. et al. Practicable methods for delimiting a plant invasion. Diversity and Distributions 22, 136–147 (2016). 15. 15. Worton, B. J. Kernel methods for estimating the utilization distribution in home-range studies. Ecology 70, 164–168 (1989). 16. 16. Moorcroft, P. R. & Lewis, M.A. Mechanistic home range analysis. Princeton University Press, Princeton, New Jersey, USA (2006). 17. 17. Dunn, J. E. & Gipson, P. S. Analysis of radio telemetry data in studies of home range. Biometrics 33, 85–101 (1977). 18. 18. Getz, W. M. & Wilmers, C. C. A local nearest-neighbor convex-hull construction of home ranges and utilization distributions. Ecography 27, 489–505 (2004). 19. 19. Horne, J. S., Garton, E. O., Krone, S. M. & Lewis, J. S. Analyzing animal movements using Brownian bridges. Ecology 88, 2354–2363 (2007). 20. 20. Kery, M. & Andrew Royle, J. Hierarchical modelling and estimation of abundance and population trends in metapopulation designs. Journal of Animal Ecology 79(2), 453–461 (2010). 21. 21. Guisan, A. & Thuiller, W. Predicting species distribution: offering more than simple habitat models. Ecology letters 8(9), 993–1009 (2005). 22. 22. Soberón, J. & Peterson, A. T. Interpretation of models of fundamental ecological niches and species’ distributional areas. Biodiversity Informatics 2, 1–10 (2005). 23. 23. Peterson, A. T. Uses and requirements of ecological niche models and related distributional models. Biodiversity Informatics 3, 59–72 (2006). 24. 24. Shigesada, N., Kawasaki, K. & Takeda, Y. Modeling stratified diffusion in biological invasions. American Naturalist 146(2), 229–251 (1995). 25. 25. Suarez, A. V., Holway, D. A. & Case, T. J. Patterns of spread in biological invasions dominated by long-distance jump dispersal: insights from Argentine ants. Proceedings of the National Academy of Sciences 98, 1095–1100 (2001). 26. 26. 
Guo, Q., Taper, M., Schoenberger, M. & Brandle, J. Spatial‐temporal population dynamics across species range: from centre to margin. Oikos 108(1), 47–57 (2005). 27. 27. Flather, C. H. & Bevers, M. Patchy reaction-diffusion and population abundance: the relative importance of habitat amount and arrangement. The American Naturalist 159, 40–56 (2002). 28. 28. Duckham, M., Kulik, L., Worboys, M. & Galton, A. Efficient generation of simple polygons for characterizing the shape of a set of points in the plane. Pattern Recognition 41(10), 3224–3236 (2008). 29. 29. Seaman, D. E. & Powell, R. A. An evaluation of the accuracy of kernel density estimators for home range analysis. Ecology 77, 2075–2085 (1996). 30. 30. Drake, J. M. Range-bagging: a new method for ecological niche modelling from presence-only data. J. R. Soc. Interface 12, 20150086 (2015). 31. 31. Wylie, R., Jennings, C., McNaught, M. K., Oakey, J. & Harris, E. J. Eradication of two incursions of the Red Imported Fire Ant in Queensland, Australia. Ecological Management & Restoration 17, 22–32 (2016). 32. 32. Getz, W. M., Fortmann-Roe, S. B., Lyons, A., Ryan, S. & Cross, P. LoCoH methods for the construction of home ranges and utilization distributions. PLoS ONE 2, 1–11 (2007). 33. 33. Beaumont, M. A., Zhang, W. & Balding, D. J. Approximate Bayesian Computation in population genetics. Genetics 162, 2025–2035 (2002). ## Acknowledgements The authors are grateful to Dr. Ross Wylie from Biosecurity Queensland for data and advice, and to Bob Bell (also from Biosecurity Queensland) for generous assistance with spatial data support. The authors are grateful to the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers for their support of this project (CE140100049). ## Author information J.M.K. developed the methods and software and wrote the manuscript. D.S. proposed the project, provided data and co-wrote the manuscript. T.K. reviewed the manuscript. Correspondence to Jonathan M. Keith. ## Ethics declarations ### Competing Interests The authors declare no competing interests. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Keith, J.M., Spring, D. & Kompas, T. Delimiting a species’ geographic range using posterior sampling and computational geometry. Sci Rep 9, 8938 (2019) doi:10.1038/s41598-019-45318-5
What is the equation of the parabola with a focus at (-1, -4) and a directrix of y = -7?

Jul 26, 2016

$6y = x^2 + 2x - 32$.

Explanation: Let the focus be $S(-1, -4)$ and let the directrix be $d: y + 7 = 0$. By the focus-directrix property of the parabola, for any point $P(x, y)$ on the parabola, $SP$ equals the perpendicular distance $D$ from $P$ to the line $d$.

$\therefore SP^2 = D^2$
$\therefore (x + 1)^2 + (y + 4)^2 = |y + 7|^2$
$\therefore x^2 + 2x + 1 = (y + 7)^2 - (y + 4)^2 = (y + 7 + y + 4)(y + 7 - y - 4) = (2y + 11)(3) = 6y + 33$

Hence, the equation of the parabola is $6y = x^2 + 2x - 32$.

Recall that the perpendicular distance from a point $(h, k)$ to a line $ax + by + c = 0$ is given by $\dfrac{|ah + bk + c|}{\sqrt{a^2 + b^2}}$.
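A quick numerical check of this result (our own sketch, not part of the original answer): every point on $6y = x^2 + 2x - 32$ should be equidistant from the focus and the directrix.

```python
import numpy as np

for x in np.linspace(-10, 10, 21):
    y = (x**2 + 2*x - 32) / 6          # point on the claimed parabola
    dist_focus = np.hypot(x + 1, y + 4)      # distance to the focus (-1, -4)
    dist_directrix = abs(y + 7)              # distance to the directrix y = -7
    assert np.isclose(dist_focus, dist_directrix)
```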
# Ellipse|Definition & Meaning

## Definition

A regular oval shape called an ellipse can be created by a point moving in a plane in such a way that the sum of its distances from two fixed points (the foci) remains constant, or by cutting a cone with an oblique plane that does not cross its base. The ellipse is a conic section and shares several properties with the circle. In contrast to a circle, however, an ellipse has an oval shape. An ellipse has an eccentricity below one and is the locus of points whose distances from the ellipse’s two foci sum to a constant value. Ellipses can be found in our daily lives in a variety of places, including the two-dimensional outline of an egg and the running tracks in sporting venues.

Figure 1 – Labeled components of an ellipse.

## Equation of Ellipse

The general ellipse equation is used to represent an ellipse in the coordinate plane algebraically. An ellipse with centre (u, v) has the equation:

$\frac{(x-u)^2}{a^2} + \frac{(y-v)^2}{b^2} = 1$

Figure 2 – General equation of an ellipse with centre offset.

### Ellipse Standard Equation

The ellipse has two standard equations, built around the transverse and conjugate axes. In the standard equation $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$, the transverse axis is the x-axis and the conjugate axis is the y-axis. The other standard equation is $\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$, with the transverse axis as the y-axis and the conjugate axis as the x-axis. The images below depict the two standard forms of ellipse equations.

Figure 3 – Standard equation of an ellipse where the conjugate axis is the y-axis and the transverse axis is the x-axis.

Figure 4 – Standard equation of an ellipse with the transverse axis as the y-axis and the conjugate axis as the x-axis.

## Ellipse Components

Let’s review some key terms related to the various parts of an ellipse.

- Focus: The ellipse has two foci, with coordinates F(c, 0) and F’(-c, 0). The distance between the foci is therefore 2c.
- Center: The ellipse’s center is where the major and minor axes meet.
- Major Axis: The end vertices of the major axis are (a, 0) and (-a, 0), and its length is 2a units.
- Minor Axis: The end vertices of the minor axis are (0, b) and (0, -b), and its length is 2b units.
- Latus Rectum: The latus rectum is a chord drawn perpendicular to the transverse axis and passing through a focus. Its length is $2b^2/a$.
- Transverse Axis: The axis that runs through the middle of the ellipse and through both foci.
- Conjugate Axis: The axis through the centre (the point equidistant from the foci), perpendicular to the transverse axis.
- Eccentricity (e < 1): The eccentricity of a non-circular ellipse is always greater than zero but less than one.

## Ellipse Characteristics

Several characteristics help distinguish an ellipse from other similar shapes. These ellipse properties are as follows:

- An ellipse is formed when a plane crosses a cone at an angle to its base.
- Each ellipse has two focal points, and the sum of the distances from any point on the ellipse to the two foci is fixed.
- Every ellipse has a major and a minor axis, a centre, and an eccentricity value less than one.

## How Do You Make an Ellipse?

There are specific steps to drawing an ellipse in math.
The following is a step-by-step procedure for drawing an ellipse of given dimensions.

1. Determine the length of the major axis, which is the ellipse’s longest diameter.
2. Draw one horizontal line the length of the major axis.
3. Using a ruler, mark the midpoint by dividing the major axis length by two.
4. Using a compass, draw a circle of this diameter.
5. Determine the length of the minor axis, which is the ellipse’s shortest diameter.
6. Place a protractor at the midpoint of the major axis and make a mark at 90 degrees. Swing the protractor 180 degrees and mark that location. The minor axis can now be drawn between the two marks, through its midpoint.
7. Using a compass, draw a circle of this diameter, just as for the major axis.
8. Using a compass, divide the circle into twelve 30-degree sections.
9. From the inner circle, draw horizontal lines (but not for the major and minor axes). They run parallel to the major axis and radiate outward from every intersection of the inner circle with a 30-degree line.
10. Draw these lines a little shorter near the minor axis and a little longer as you get closer to the major axis.
11. From the outer circle, draw vertical lines (but not for the minor and major axes). These run parallel to the minor axis and run inward from every point where the outer circle meets a 30-degree line.
12. Make the lines near the minor axis a little longer, but a little shorter as you approach the major axis. If a horizontal line falls too far away, extend it slightly with a ruler before drawing the vertical line.
13. Use your best freehand drawing skills to draw the curves between the points.

## Ellipse Graph

Let’s look at a graphical representation of an ellipse using the ellipse formula. To graph the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ in a cartesian plane, the following steps are helpful.

### Step 1

Crossing the coordinate axes, the ellipse intersects the x-axis at A(a, 0) and A’(-a, 0), and the y-axis at B(0, b) and B’(0, -b).

### Step 2

The ellipse’s vertices are A(a, 0), A’(-a, 0), B(0, b), and B’(0, -b).

### Step 3

Because the ellipse is symmetric about the coordinate axes, it has two foci S(ae, 0) and S’(-ae, 0), and two directrices d and d’ with equations $x = \frac{a}{e}$ and $x = -\frac{a}{e}$, respectively. Every chord through the origin O is bisected by it, so the origin is the centre of the ellipse; the ellipse is therefore a central conic.

### Step 4

The ellipse is a closed curve that fits inside the rectangle bounded by x = ±a and y = ±b.

### Step 5

The segment AA′ of length 2a is the major axis, and the segment BB′ of length 2b is the minor axis. The major and minor axes are referred to as the ellipse’s principal axes.

## Example

The length of an ellipse’s semi-major axis is 10 cm and its semi-minor axis is 8 cm. Determine its area.

### Solution

Given the semi-major axis a = 10 cm and the semi-minor axis b = 8 cm, the area of the ellipse is:

Area = $\pi$ × a × b = $\pi$ × 10 × 8 = 80$\pi$ ≈ 80 × 22/7 = 251.4286 cm$\mathsf{^2}$

All images/mathematical drawings were created with GeoGebra.
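As a cross-check of the worked example above, here is a short Python sketch (our own addition; the variable names and the parametric form are standard, not taken from the original page). Note that using the exact value of π gives 251.33 cm², slightly below the 22/7 approximation quoted above.

```python
import numpy as np

a, b = 10.0, 8.0                      # semi-major and semi-minor axes, in cm
area = np.pi * a * b                  # ~251.33 cm^2 (the text uses 22/7, giving 251.43)

# Parametric form of the same ellipse, usable for plotting:
t = np.linspace(0, 2 * np.pi, 200)
x, y = a * np.cos(t), b * np.sin(t)   # points satisfying x^2/a^2 + y^2/b^2 = 1
```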
### Principia Reissue Kickstarter

Spanish independent publisher Kronecker Wallis is making a new edition of Isaac Newton’s Principia Mathematica, using a Kickstarter campaign to fund the initial print run. Here’s their video: It looks like it’ll be a fairly pretty object, and they’ve put a lot of time and thought into choosing the paper, fonts and layout. Their Kickstarter runs for around another 24 hours, and a pledge of €45 or more will secure you a copy of the finished article.

Isaac Newton’s Principia Mathematica Reissue, on Kickstarter

### Curvahedra is a construction system for arty mathsy structures

Edmund Harriss is a very good friend of the Aperiodical, and a mathematical artist of quite some renown. His latest project is CURVAHEDRA, a system of bendable boomerang-like pieces which join together to make all sorts of geometrical structures.

### Maths at the Manchester Science Festival 2016

Here’s our annual round-up of what’s happening in sums/thinking at this year’s Manchester Science Festival. If you’re local, or will be in the area around 20th-30th October, here are our picks of the finest number-based shows, talks and events.

### Best way to explain topology: now officially ‘using baked goods’

Nobel Prize news! The 2016 Nobel Prize in Physics has been awarded to a trio of physicists: Michael Kosterlitz, Duncan Haldane and David Thouless, “for theoretical discoveries of topological phase transitions and topological phases of matter”. And here’s the maths angle – their work is in the field of topological physics, which relates strange matter (superconductors, superfluids and the like) to topology, via the interesting way the properties of the materials change in phases, like the different fundamental shapes of objects in topology. None of the material we’ve taken a cursory glance at so far yields a simple explanation of how these two things are linked, but they have explanatory PDFs on the Nobel website if you’d like a dig around: Popular (PDF) and Advanced (PDF). Also, impressively many newspaper headlines seem to have failed to notice that ‘strange matter’ is actually a thing in physics, and consequently mangled it in their explanations. Cue of course an amazing press conference in which Nobel Committee for Physics member Thors Hans Hansson holds up a bun, a bagel and a pretzel to explain the difference. Classic topology.

Official Nobel press release
British scientists win Nobel prize in physics for work so baffling it had to be described using bagels, at The Telegraph (bonus points for ‘Noble prize’ typo, if it’s not been corrected yet)
Physics prize explanations on the Nobel website: Popular (PDF) and Advanced (PDF)

### Happy 100th birthday, Richard K Guy!

We’d all like to wish a very happy birthday to the wonderful Richard K Guy, who turns 100 today. Happily, Guy remains not dead in either the corporeal or Erdős sense: he’s both fit as a fiddle (he climbed a tower for charity aged 97), and active in the mathematical community.

### New Twin Primes found

Collaborative prime number searching website PrimeGrid has announced its most recent discovery: on 14th September, user Tom Greer discovered a new pair of twin primes (primes which differ by 2), namely:

$2996863034895 \times 2^{1290000} \pm 1$

Found using PrimeGrid’s Sophie Germain Prime search, the new discoveries are 388,342 digits long, smashing the previous twin prime record of 200,700 digits.
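(As a quick side calculation of our own, not part of PrimeGrid’s announcement, the quoted digit count checks out:)

```python
# Number of decimal digits of 2996863034895 * 2^1290000 (the +/- 1 doesn't change it)
from math import log10, floor

digits = floor(1290000 * log10(2) + log10(2996863034895)) + 1
print(digits)   # 388342
```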
PrimeGrid is a collaborative project (similar to GIMPS, which searches for specifically Mersenne Primes) in which anyone who downloads their software can donate their unused CPU time to prime searching. It’s been the source of many recent prime number discoveries, including several in the last few months which rank in the top 160 largest known primes. The University of Tennessee Martin’s Chris Caldwell maintains a database of the largest known primes, to which the new discovery has been added. Press release from PrimeGrid (PDF) The List of Largest Known Primes PrimeGrid website The new twin primes’ entries on the List of Largest Known Primes: n+1, n-1 ### The University of Leicester is going to sack its whole maths department (and rehire some of them) The University of Leicester says it’s facing a big budget deficit, so it’s got to make some cuts. In the current British climate, that’s nothing unusual. However, the university has decided to cut a lot more from the maths department than elsewhere. The way they’re going to do this is to sack almost everyone, then ask them to re-apply for slightly fewer jobs than there were before. Once it’s all done, 6 of the 21 mathematicians currently working at Leicester will be out of a job. There’s some speculation that the reason that maths is going to be hit particularly hard is that it didn’t do particularly well in the last iterations of the REF and the National Student Survey. The Universities and Colleges Union has started a petition against the cuts, disputing the size of the deficit and the need for so many job losses. They’ve written a response laying out their side of the story. The European Mathematical Society has also said it’s very concerned. Tim Gowers has written a bit more about what he thinks is going on on his blog. As usual, there’s some good discussion in the comments as well. via Yemon Choi
# CNC Milling of balsa? ### Help Support The Rocketry Forum: #### shockwaveriderz ##### Well-Known Member Does anybody have any idea how much it would cost to have some boost glider solid balsa wings cnc milled such that the airfoil is preformed into the wing? in small quantties? any information will be appreciated....private emails welcome on this subject #### Loki ##### Well-Known Member Custom mill work runs roughly $50/hr, including the time required to write the CNC program. Once it is running, figure a minute or two per part, depending on the complexity. A very rough guess would be about$300 for the first 50 parts (not including material). Less than 50 parts will not be any cheaper due to the setup cost, but extending the run to 500 parts would not cost much more. #### shockwaveriderz ##### Well-Known Member thanks jeff/loki.....guess I'll scratch that idea and let people sand their own airfoils.... ##### Well-Known Member At that price...would it be worth it? #### n3tjm ##### Papa Elf You might be able to do what you want to do by using hot wire cutting with foam. I am thinking about trying that out when I get around to cloning the Aerotech Phoenix. I will first make a jig that fastens to the styrofoam, and then use the hot wire thingy to cut it. After that, remove the jig, and coat the wing with balsa... Besides the wings... the Phoenix should be a easy glider to clone.... #### LarryH ##### Well-Known Member A new CNC router will cost you about $7,000 visit: Ummmm, not entirely true, for what you seek a CNC router capable of milling Balsa and other types of wood, you can build one for under$500, search the web, there are plenty of people who have done homebuilt CNC projects, I'll see if I can dig up some of my old links. There are 3 basic components that make a CNC machine different from a standard power tool, and that is one a Stepper motor on each axis to move the toolhead to the appropriate X/Y/Z position, two a controller board to link it all up with a PC, and three software to control it all, all but the software can usually be come by fairly cheaply, and there are a few free CNC software packages out there, you just have to scour the web, there are plans and schematics on the web for homemade controller boards, or you can buy a premade unit from Geko or a few other manufacturers for a couple hundred bucks. If you only intend to use the machine for milling wood you can build the frame out of pine 2x4s, plywood, 1x4s, 4x4s, or just about any lumber you can come by.
Complex Conjugate

Every complex number has a so-called complex conjugate, obtained by reversing the sign of the imaginary part: the conjugate of a complex number $z$ is written $\overline{z}$ or $z^*$, it has the same real part, and its imaginary part has the same coefficient but the opposite sign. Here, $2+i$ is the complex conjugate of $2-i$. The utility of the conjugate is that any complex number multiplied by its complex conjugate is a real number. The notion of complex numbers was introduced in mathematics from the need of calculating negative quadratic roots, and the concept has since been taken up by a variety of engineering fields. As imaginary unit one uses $i$, or $j$ in electrical engineering, satisfying the basic equation $i^2=-1$ (or $j^2=-1$).

In polar terms the conjugation is just as simple: if $I = 15\angle 30^{\circ}$, then $I^{*} = 15\angle{-30^{\circ}}$ (just flip the sign of the angle), and if the value is expressed in rectangular form, as is often the case for impedances, it is just as easy: if $Z = 3 + j4$, then $Z^{*} = 3 - j4$ (just flip the sign of the imaginary part). Figure 2 of the source illustrates the complex conjugate representation in (a) Cartesian form and (b) polar form; keep in mind that these are two different ways of describing the same point. A frequently asked question is why we use the conjugate of the current, rather than the original phasor, in the calculation of complex power, i.e. $S = VI^{*}$, and, similarly, why the correlation of two complex quantities always conjugates one of them.

In mathematics, the conjugate transpose (or Hermitian transpose) of an $m$-by-$n$ matrix with complex entries is the $n$-by-$m$ matrix obtained by taking the transpose and then taking the complex conjugate of each entry (the complex conjugate of $a+bi$ being $a-bi$, for real numbers $a$ and $b$). It is often denoted $A^{\mathrm H}$ or $A^{*}$. For real matrices the conjugate transpose is just the transpose; in general it interchanges the row and column index of each element, reflecting the elements across the main diagonal, and negates the imaginary part of any complex entries. In MATLAB, if B = A' and A(1,2) is 1+1i, then the element B(2,1) is 1-1i; the nonconjugate transpose operator A.' performs a transpose without conjugation. The related conjugate matrix of a matrix is obtained by replacing each element with its complex conjugate (Arfken 1985, p. 210). The complex conjugate is implemented in the Wolfram Language as Conjugate[z], a mathematical function suitable for both symbolic and numerical manipulation that automatically threads over lists and can be entered as conj or \[Conjugate]. In C++, std::conj computes the complex conjugate of z by reversing the sign of the imaginary part, with additional overloads provided for float, double, long double, and all integer types, which are treated as complex numbers with zero imaginary component. In NumPy, a common question is how to take an array of complex values returned by a fast Fourier transform (fft.fft) and return the complex conjugate of each element; extracting the imaginary parts with nparray.imag only gives the coefficients, without the "j" to denote the imaginary unit.

The complex conjugate zeros (or roots) theorem for polynomials enables us to find a polynomial's complex zeros in pairs: if a complex number is a zero of a polynomial with real coefficients, then so is its complex conjugate. Conjugates are also needed when dividing complex numbers: to add or subtract complex numbers we just add or subtract their real and imaginary parts, and we multiply complex numbers by considering them as binomials, but to divide two complex numbers, say $\dfrac{1+i}{2-i}$, we multiply and divide the fraction by $2+i$.

Several online calculators cover these operations. A typical complex number calculator does basic arithmetic on complex numbers and evaluates expressions in the set of complex numbers (for example, type in (2-3i)*(1+i) and see the answer 5-i); converts a complex number into angle (phasor) notation, exponential form, or polar coordinates (magnitude and angle); displays a complex number and its conjugate on the complex plane; and evaluates the absolute value (complex modulus) and the principal value of the argument. The idea behind the polar and exponential forms is to find the modulus $r$ and the argument $\theta$ of the complex number such that $z = a + ib = r(\cos\theta + i\sin\theta)$ (polar form) or $z = a + ib = re^{i\theta}$ (exponential form); as an example we take the number $5+3i$. A polynomial roots calculator finds real and complex zeros of a polynomial, returning exact values for polynomials of degree at most 4, and a dividing-complex-numbers calculator takes the coefficients $a$, $b$, $c$ and $d$ of the two numbers and performs the division.

Typical exam-style exercises on the topic: the complex numbers $w$ and $w^{*}$ are represented in an Argand diagram by the points S and T respectively; find, in the form $|z-a|=k$, the equation of the circle passing through S, T and the origin. Or: the complex number $u$ is given by $u = -1 + 4\sqrt{3}\,i$; without using a calculator and showing all your working, find the two square roots of $u$.
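Since several of the snippets above describe the same operations in NumPy, MATLAB and phasor arithmetic, here is a short NumPy sketch illustrating them. The signal, matrix and phasor values are made up for illustration only.

```python
import numpy as np

# Conjugate of every element of an FFT result (the NumPy question mentioned above).
signal = np.array([1.0, 2.0, 0.0, -1.0])
spectrum = np.fft.fft(signal)
conjugated = np.conj(spectrum)          # elementwise complex conjugate
print(conjugated)

# Conjugate (Hermitian) transpose of a small complex matrix.
A = np.array([[1 + 1j, 2], [3j, 4 - 2j]])
A_H = A.conj().T                        # transpose, then conjugate each entry
print(A_H)

# Complex power S = V * conj(I): conjugating the current phasor makes the
# angle of S equal to the impedance angle (illustrative phasor values).
V = 230 * np.exp(1j * np.deg2rad(0))    # volts
I = 10 * np.exp(1j * np.deg2rad(-30))   # amperes, lagging current
S = V * np.conj(I)
print(S.real, S.imag)                   # active power P and reactive power Q
```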
{}
Determining the minimal polynomial

How do you find the minimal polynomial $\mu_{M^{-1}}(x)$, given $\mu_{M}(x)$? My guess is since if $\lambda$ is an eigenvalue of $M$, then $1\over \lambda$ is an eigenvalue of $M^{-1}$, we might have something like $\mu_{M^{-1}}(x)=\mu_{M}({1\over x})$? But then I am not sure that that is the minimal polynomial... Thanks

-

Yours is not a bad guess, but, to begin with, there is a problem: $\mu_M\left( \frac{1}{x} \right)$ need not be a polynomial. Nevertheless, I think we can improve your idea: for any degree $n$ polynomial $p(x)$, define its conjugate (I'm not sure if this guy already has a name in the literature: please, correct me [EDIT. According to Georges, this is called the reciprocal polynomial]) as

$$\overline{p}(x) = x^n p\left( \frac{1}{x} \right) \ .$$

Clearly, the conjugate of a polynomial is still a polynomial, and you can easily verify that:

1. $\overline{\overline{p}}(x) = p(x)$. [EDIT: if $p(x)$ has non-zero constant term. See Georges' comment.]
2. $\overline{pq}(x) = \overline{p}(x)\cdot\overline{q}(x)$

I claim that the result is the following: if $\mu_M (x)= a_0 + a_1 x + \dots + a_{n-1}x^{n-1} + x^n$, then

$$\mu_{M^{-1}} (x) = \frac{1}{a_0}\overline{\mu_M} (x) \ .$$

In order to prove this, we'll need the following lemma.

Lemma. Let $M$ be an invertible matrix and $p(x)$ a polynomial such that $p(M) = 0$. Then $\overline{p}(M^{-1}) = 0$.

Proof of the lemma. Indeed, $\overline{p}(M^{-1}) = (M^{-1})^n p(M) = 0$.

Hence, since $\mu_M$ annihilates $M$, so does $\overline{\mu_M}$ with $M^{-1}$. We have to prove that $\frac{1}{a_0}\overline{\mu_M}$ has the characteristic property of the minimal polynomial of $M^{-1}$; namely, that it has no proper divisor which also annihilates $M^{-1}$. So, assume there were two polynomials $p(x), q(x)$ such that

$$\frac{1}{a_0}\overline{\mu_M} (x) = p(x)q(x)$$

and moreover $p(M^{-1}) = 0$. Then, taking conjugates in this last equality, we would obtain

$$\frac{1}{a_0}\mu_M (x) = \overline{p}(x)\cdot\overline{q}(x) \ .$$

But, because of the lemma, $\overline{p}(M) = 0$. So, by definition of the minimal polynomial of $M$,

$$\mu_M (x) = \overline{p}(x) \qquad \text{(normalized)} \ .$$

Taking conjugates again, we would have that, up to a constant,

$$\overline{\mu_M} (x) = p(x) \ .$$

-

Thanks, Agusti! –  Eigen Nov 7 '11 at 18:52

My pleasure, Eigen. –  a.r. Nov 7 '11 at 19:06

Dear Agustí, the guy is called reciprocal polynomial by his buddies. And reciprocating twice will not get you back where you started if the original polynomial has no constant term. (Fortunately this doesn't happen in the context of the question.) –  Georges Elencwajg Nov 7 '11 at 20:12

You're right. Thank you, Georges. –  a.r. Nov 7 '11 at 21:10

You're welcome, Agustí. I forgot to mention that this is a very nice answer! –  Georges Elencwajg Nov 7 '11 at 22:23
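As a small numerical sanity check of the relation claimed in the answer above (this check is not part of the original thread), take a matrix with distinct eigenvalues, so that its minimal polynomial coincides with its characteristic polynomial and is easy to write down.

```python
import numpy as np

# M has eigenvalues 2 and 3, hence mu_M(x) = (x - 2)(x - 3) = x^2 - 5x + 6.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Minv = np.linalg.inv(M)

def eval_poly(coeffs, A):
    """Evaluate a polynomial (coefficients listed constant term first) at a matrix A."""
    n = A.shape[0]
    result = np.zeros_like(A)
    power = np.eye(n)
    for c in coeffs:
        result = result + c * power
        power = power @ A
    return result

mu_M = [6.0, -5.0, 1.0]              # a_0 + a_1 x + x^2
# Reciprocal polynomial: x^2 * mu_M(1/x) = 6x^2 - 5x + 1; dividing by a_0 = 6
# makes it monic: x^2 - (5/6)x + 1/6, which should be mu_{M^{-1}}.
mu_Minv = [1.0 / 6.0, -5.0 / 6.0, 1.0]

print(np.allclose(eval_poly(mu_M, M), 0))        # True: mu_M annihilates M
print(np.allclose(eval_poly(mu_Minv, Minv), 0))  # True: (1/a_0) * reciprocal annihilates M^{-1}
```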
{}
## Cross-site Request Forgery

Cross-site request forgery, also known as CSRF, is a vulnerability whereby an application accepts a form request without validating its origin. This is also sometimes known as session riding (note that the word riding is used instead of hijacking). It is a different vulnerability from SQL injection. It actually took me a while to understand this concept. Here is an analogy I came up with: imagine the web application is a company's building. The company decides to hire a guard to check the identity of every individual passing through the gates. What the guard needs to check is the origin of a request. This could simply be an employee identification card or, in our case, a CSRF token.

## Testing for CSRF Vulnerabilities

When we test for CSRF vulnerabilities, the first thing is to check whether the "employee identification card" exists. We can do this by intercepting a request using a web proxy (e.g. BurpSuite) and analysing it. Our objective is to identify random tokens which could be used to verify the origin of the form. When analysing the request, there are two possible outcomes:

1. There is no random token, and in most cases the web application is vulnerable to CSRF attacks, or
2. There is a parameter with a random token. However, it is possible that the protection mechanism is implemented incorrectly, so it is worth a try to remove the token and carry on with the verification.

If you are using Burp Suite, you could generate the CSRF proof of concept (Engagement tools (right click) → Generate POC). Note that only licensed Burp Suite users have access to engagement tools. However, there are free online tools available, such as this one here. Now that we have the proof of concept, open up the HTML file and click on the button. If the form is successfully executed, it means that the web application is vulnerable to CSRF. Otherwise, you should see some sort of controlled error message.

Here are some common pages where CSRF is sensitive:

• Creation/deletion of users (with or without privileges)/groups
• Posting of entries
• Web application settings

Most of the time developers do not see a CSRF on a logout page as a vulnerability because it does not pose any security risk. If "exploited", it is more of an annoyance to the legitimate user.

## CSRF on ATutor LMS

Here are screenshots of an actual CSRF vulnerability that I discovered in ATutor LMS. You can see that there are only two users. This is the exploit which the attacker has to trick the legitimate administrator into executing. The button is just a form request without any random tokens. Here is an example of the exploit.

<form action="http://127.0.0.1/atutor-2.2/ATutor/mods/_core/users/create_user.php" method="POST">
  <input name="email" type="hidden" value="[email protected]" />
  <input name="private_email" type="hidden" value="1" />
  <input name="email2" type="hidden" value="[email protected]" />
  <input name="first_name" type="hidden" value="csrfuser99" />
  <input name="last_name" type="hidden" value="csrfuser99" />
  <input type="hidden" value="3" />
  <input type="hidden" value="Save" />
  <input type="submit" value="Submit request" />
</form>

Upon clicking the button, you can see that a new user is created. This means that we have successfully exploited the vulnerability.

Want to find out more about using sqlmap (an open source tool for finding SQL injection)? You can read more at my sqlmap tutorial.
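The token check described in the testing section can be illustrated with a small server-side sketch. This is a generic illustration of the synchronizer-token pattern, not ATutor's actual mechanism; the function names and the "session" dictionary are made up for the example.

```python
import hmac
import secrets

# Minimal illustration of synchronizer-token CSRF protection.
# "session" stands in for whatever server-side session store a framework provides.

def issue_csrf_token(session: dict) -> str:
    """Generate a random token, store it in the session, and embed it in the form."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def is_request_allowed(session: dict, form_fields: dict) -> bool:
    """Reject the POST unless it echoes back the token previously issued to this session."""
    expected = session.get("csrf_token", "")
    submitted = form_fields.get("csrf_token", "")
    # compare_digest avoids leaking information through timing differences
    return bool(expected) and hmac.compare_digest(expected, submitted)

# Example flow: the legitimate form includes the token, a forged cross-site form does not.
session = {}
token = issue_csrf_token(session)
print(is_request_allowed(session, {"csrf_token": token}))   # True  (legitimate submission)
print(is_request_allowed(session, {}))                      # False (forged request, no token)
```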
{}
Over the past several years, many have discussed a possible "bubble" in private equity. This post explores publicly-traded private equity firms and their valuations.

Companies of Interest

Given available data, it is easiest to consider the 4 largest publicly traded private equity firms: \begin{align*} \begin{array}{ccc} \mbox{Firm} & \mbox{Ticker} & \mbox{Mkt. Cap (Bil USD, Dec. 2015)} \\ \hline \mbox{The Blackstone Group} & BX & 16.33 \\ \mbox{Kohlberg Kravis Roberts} & KKR & 7.25 \\ \mbox{Apollo Global Management} & APO & 2.75 \\ \mbox{The Carlyle Group} & CG & 1.25 \end{array} \end{align*}

Motivation

"Bubbles" are usually identified ex-post, and the discussion surrounding private equity is no exception. Just looking at prices, we see the characteristic rise and fall starting in 2012, and peaking in 2014: All of these companies are older than their stock history suggests, having IPOs (initial public offerings) up to 30 years after the company was founded (these are IPO dates for the parent, not companies owned by the parent): \begin{align*} \begin{array}{ccc} \mbox{Firm} & \mbox{Founded} & \mbox{IPO} \\ \hline CG & 1987 & 2012 \\ KKR & 1976 & 2010 \\ BX & 1985 & 2007 \\ APO & 1990 & 2011 \\ \end{array} \end{align*} A company wants to IPO when valuations are high, to raise as much capital as possible. Given this, the clustering of private equity IPOs is more evidence of a "bubble" - it's possible all the firms IPOed at the same time to take advantage of high (perceived) valuations. To account for the size of these companies, I plot the market capitalization (price $$\times$$ shares outstanding) in billions of USD for the 4 firms of interest. Adding up the total market capitalization across all 4 companies, we can see a loss of over 10 billion dollars from the peak in 2015. These kinds of graphs motivate discussion of bubbles - a loss of 10 billion dollars among 4 companies over a period of 6 months seems unlikely to be justified by any changes in fundamentals.

Valuation

As in previous posts, we want to relate prices to fundamentals using the price-dividend ratio ($$P/D$$). Calculate $$D$$ as total dividends paid over the previous 12 months. I plot the path of the private equity companies' $$P/D$$ ratios, as well as the S&P 500's $$P/D$$ (the $$P/D$$ ratios are on separate axes because the level doesn't matter - it differs widely across industries and companies - we care about changes, relative to historical levels). We can see that other companies IPO when BX is near peak valuation, lending credibility to the explanation above about high perceived valuations. After 2013, we see a steady decline in $$P/D$$ for private equity companies, while the S&P 500's stays flat over the same period. Let's break down the components of $$P/D$$ for BX: We can see that $$P/D$$ is declining mainly because dividends are going up! That being said, I'm not sure $$P/D$$ is the best way to think about valuing a private equity company. Private equity firms buy struggling companies, sell companies with high valuations, and hold companies with low valuations (hoping to increase their valuation through better management, cutting costs, etc.). Given that they are periodically selling their most valuable assets, the value of the parent goes down, but dividends go up, as the proceeds are distributed to shareholders. Another factor that complicates the analysis is the structure of the publicly traded shares.
For example: KKR's shares are listed through KKR & Co., which holds 30% of the firm's ownership (while the other 70% is still held by the firm's partners). Unlike most other publicly traded companies, these shareholders will never have a majority interest in the firm, so it's hard to think about the value of voting rights.

Future Work

These are just some motivating examples - private equity companies are difficult to value, and I would need to better understand how they make investments and how they improve the companies they purchase before going any further. Two articles on the topic are:

1) The Cash Flow, Return and Risk Characteristics of Private Equity, Ljungqvist and Richardson (2003)
2) The Operational Consequences of Private Equity Buyouts: Evidence from the Restaurant Industry, Bernstein and Sheen (2013)

Fun Fact

Axes is the only English word that is the plural of 3 different singular nouns: ax, axe and axis.
{}
# Unipolar arcing

Unipolar arcing is a phenomenon which may occur in plasma/fusion devices between the plasma and the cathode. This cathodic process features localized, bright, tiny spots on the cathode surface, which appear to move more or less randomly. At these spots, the cathode material makes a transition into dense plasma, which then expands rapidly into the vacuum or low-pressure ambient gas.

## Thermionic Emission

In a typical plasma device, the plasma is present between cathode and anode, which enables current to flow by motion of mobile charged particles. In the plasma, most of the electric current is carried by electrons because the electron mobility is much higher than that of the ions, due to the lower mass. The critical places of current continuity are the interfaces between plasma and metal. On the anode side, electrons fall into the conduction band, thereby liberating the potential energy known as the work function of the anode (about 4 eV per electron for most metals). On the cathode side, however, electrons are prevented from escaping by a potential barrier, the work function of the cathode. The nature of the discharge may create conditions which enable a fraction of the electrons to overcome the potential barrier, leading to electron emission. Depending on the character of those conditions, we distinguish different electron emission mechanisms. Electrons can be emitted during individual events, such as ion impact, or by collective events, such as high cathode temperature (thermionic emission) and/or a high electric field on the cathode surface (field emission). The collective thermionic and field emission can non-linearly amplify each other, a combination known as thermo-field emission. Emission by individual events is characteristic of 'glow' discharges, and emission by collective events of 'arc' discharges. Collective thermionic and/or field emission can be stationary. For arc discharges this is, however, not the case: the emission is related to energy dissipation and net heating of the cathode, which can enhance the temperature and the associated electron emission, a so-called thermal run-away process. Locations where this occurs can explosively evaporate, leading to a new form of electron emission that is inherently non-stationary, because the emission location is changed by the explosion, the plasma expansion and the increase of the hot-spot area by thermal conduction. This non-stationary form of emission is called explosive electron emission, or arcing.

## Model of Arcing

The electron emission at the cathode spot occurs in the form of discrete explosive electron emission splashes, so-called 'ectons'. These quanta of the explosive process represent the minimum actions required for the explosive events. The duration of one ecton is about 10 ns, the current about 1 A, and the size of the emission centers is about 1 $\mu$m. The explosion leaves a micro-crater with a diameter of about 1 $\mu$m. According to the ecton model, arc operation is self-sustained and occurs in stages [1]. The first stage is the appearance of dense primary erosion plasma due to an external action, e.g. a laser pulse or ELM plasma, onto the target. This dense plasma results in a strong emission pulse ($10^8$ A/cm$^2$) that leads to a thermal explosion of the emitting local area, the start of stage two.
The created dense plasma produces two important effects: 1) the sheath thickness decreases, leading to an increase in the electric field at the surface, and 2) (due to the electric field) the ion bombardment heating increases. Now, if the local electric field is additionally enhanced by the fine structure of the surface, e.g. tungsten fuzz, this can altogether intensify the local energy input, leading to a thermal run-away process. If the energy input rate exceeds the energy removal rate, this can lead to a micro-explosion. The micro-explosion creates another dense erosion plasma, and hence another emission site, so this causes repeating ignition of micro-explosions. The dense plasma provides the conditions for the ignition while 'choking' the already operating emission center by its limited conductivity. Ignition in this sense is not just the triggering of the arc discharge but the arc's perpetual mechanism to 'stay alive'. The probabilistic distribution of ignition of emission centers can be associated with a fractal spot model. Finally, the electron emission and evaporation cease, because the thermal conduction has led to an increase of the spot area, lowered the power density, and hence lowered the surface temperature. The explosively formed plasma has expanded, its density is lowered, therefore the cathode sheath thickness has increased, and therefore the electric field at the surface is reduced.

## Energy balance

The transition from the cathode's solid phase to the plasma phase requires energy, which is supplied via the power dissipated by the arc [2],
$P_{arc} = V I_{arc}$
where $V$ is the voltage of the arc (measured between anode and cathode). The energy needed for the phase transition is only a fraction of the total energy balance. The total balance of the cathode region is given by:
$I_{arc} V \tau = E_{phon} + E_{CE} + E_{ionization} + E_{kin,i} + E_{ee} + E_{th,e} + E_{MP} + E_{rad}$
where $\tau$ is a time interval over which the observation is averaged, $E_{phon}$ is the phonon energy (heat) transferred to the cathode material, $E_{CE}$ the cohesive energy needed to transfer the cathode material from the solid phase to the vapor phase, $E_{ionization}$ is the energy needed to ionize the vaporized cathode material, $E_{kin,i}$ is the kinetic energy given to the ions due to the pressure gradient and other acceleration mechanisms, $E_{ee}$ is the energy needed to emit electrons from the solid into the plasma, $E_{th,e}$ the thermal energy (enthalpy) of the electrons in the plasma, $E_{MP}$ is the energy invested in melting, heating, and acceleration of macroparticles, and $E_{rad}$ is the energy emitted by radiation. The input energy is mostly transferred to heat the cathode, to emit and heat electrons, and to produce and accelerate ions.

## References

1. S.A. Barengolts (2010) The ecton mechanism of unipolar arcing in magnetic confinement fusion devices
2. A. Anders (2008) Cathodic Arcs: From Fractal Spots to Energetic Condensation
{}
# Tag Info

### Counterexamples in algebra?

A very basic one: Over the field of two elements, the symmetric matrix $\left(\begin{matrix}1&1\\1&1\end{matrix}\right)$ is nilpotent and thus not diagonalizable.

### Counterexamples in algebra?

If $x$ and $y$ are elements of an associative ring such that $xy\ne1=yx$ then there is a mutually inverse pair of invertible matrices one of which is lower triangular but not upper triangular and the ...

1 vote

### Counterexamples in algebra?

OP: [...] counterexamples can illuminate a definition (e.g. a projective module that is not free), [...] Indeed, let our ring $\mathcal R$ be the ring of all continuous functions from the ...

1 vote

### Counterexamples in algebra?

You might find several answers in Harry Hutchins's book on Examples of Commutative Rings.
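The first counterexample can be checked in a couple of lines; in this small sketch (not part of the original answers), ordinary integer arithmetic reduced mod 2 stands in for the field of two elements.

```python
import numpy as np

# Over GF(2), the all-ones 2x2 symmetric matrix squares to zero, so it is
# nilpotent; being nonzero, it therefore cannot be diagonalizable.
M = np.array([[1, 1],
              [1, 1]], dtype=int)
M_squared = (M @ M) % 2
print(M_squared)    # [[0 0]
                    #  [0 0]]
```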
{}
# Thread: Ordinary Annuities and Annuities help

1. ## Ordinary Annuities and Annuities help

Could you plz show me how the formula for these problems works.

R = 1400, 10% per year compounded quarterly for 8 years.
Answer in the back of the book: 1st part $67,410.39, 2nd part $22,610.39

R = $800, 9% per year compounded monthly for 4 years.
Answer in the back of the book: 1st part $46,016.57, 2nd part $7,616.57

The problems below are compounded annually.

R = $1200, i = 0.075, n = 8
Answer in the back of the book: 1st part $13,475.82, 2nd part $3,875.82

R = $17,544, i = 0.08, n = 6
Answer in the back of the book: 1st part $138,997.66, 2nd part $33,733.66

2. ## Re: Ordinary Annuities and Annuities help

What formula are you using? Show your work. Plus your problems are unclear: are they annuity immediate? What's 1st part, 2nd part?

3. ## Re: Ordinary Annuities and Annuities help

Originally Posted by Wilmer
What formula are you using? Show your work. Plus your problems are unclear: are they annuity immediate? What's 1st part, 2nd part?

The top two are ordinary annuity problems where you have to find the future value; for the second-part answers you have to find the total amount of interest earned (you can ignore that if you want). The bottom two are annuity due problems. My bad about not being specific enough.

4. ## Re: Ordinary Annuities and Annuities help

Asking AGAIN Johnny: what formula are you using?

5. ## Re: Ordinary Annuities and Annuities help

Originally Posted by Wilmer
Asking AGAIN Johnny: what formula are you using?

I'm not sure, probably the future value. This is what it shows in my book for those problems in the opening post:

6. ## Re: Ordinary Annuities and Annuities help

That's correct. Let's apply it to your 1st question:
"R = 1400, 10% per year compounded quarterly for 8 years; Future Value: $67,410.39, total interest: $22,610.39"

F = Future value (?)
R = quarterly deposit (1400)
N = Number of quarters (8*4 = 32)
I = Interest per quarter (.10 / 4 = .025)

F = R[(1 + I)^N - 1] / I
F = 1400(1.025^32 - 1) / .025 = 67410.3885...

Total interest = Future value - total deposits = 67410.3885... - 32*1400 = 22610.3885...

Study and master that... then go apply for a job as a Financial Analyst.

7. ## Re: Ordinary Annuities and Annuities help

@wilmer lol no chance in hell ^^ thx for the help
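For readers who want to check all four book answers at once, here is a small sketch built from the formula in post 6. It assumes, as the thread states, that the last two problems are annuities due, whose future value is simply the ordinary-annuity value multiplied by (1 + I).

```python
# Reproducing the four textbook answers quoted in the thread.

def fv_ordinary(R, i, n):
    """Future value of an ordinary annuity: R[(1+i)^n - 1]/i."""
    return R * ((1 + i) ** n - 1) / i

def fv_due(R, i, n):
    """Future value of an annuity due: the ordinary value times (1+i)."""
    return fv_ordinary(R, i, n) * (1 + i)

problems = [
    ("ordinary", 1400, 0.10 / 4, 8 * 4),    # quarterly compounding
    ("ordinary", 800, 0.09 / 12, 4 * 12),   # monthly compounding
    ("due", 1200, 0.075, 8),                # annual compounding
    ("due", 17544, 0.08, 6),
]

for kind, R, i, n in problems:
    fv = fv_ordinary(R, i, n) if kind == "ordinary" else fv_due(R, i, n)
    interest = fv - R * n                   # 2nd part: total interest earned
    print(f"{kind:8s}  FV = {fv:12,.2f}   interest = {interest:11,.2f}")
```

Running this should reproduce the "1st part" and "2nd part" answers listed at the top of the thread.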
{}
# warning about pdfpages with hyperref Package pageslts prints out this warning since I use pdfpages and hyperref in the same document. Package pageslts Warning: Package pdfpages detected. (pageslts) Using hyperref with pdfpages can cause problems. See (pageslts) ftp://ftp.ctan.org/tex-archive/ (pageslts) macros/latex/contrib/pax/ (pageslts) for project pax (PDFAnnotExtractor).. What does this mean? Am I supposed to change anything in the document or when am I going to see any problem? - I think that it's nothing to be worried about; hyperlinks in the PDF included with pdfpages will be lost, but this is known. The PAX program by H. Oberdiek can be used to reinstate them. – egreg Sep 23 '12 at 16:50
{}
Isola

Tag(s): Easy-Medium

Isola is a two-player board game. It is played on a 7x7 grid which is initially filled with squares on each cell. Each player has one piece; it starts in the middle position of the row closest to his/her side of the board. Players can place their piece on squares only. A move consists of two subsequent actions:

1. Moving one's piece to a neighboring (horizontally, vertically, or diagonally) position that contains a square but not the opponent's piece.
2. Removing any square with no piece on it.

The player who cannot move his/her piece loses the game. Now you are going to write code to play Isola against other players or a computer bot. The program you submit will run for each move played by your player in the game.

Input

The input will be a 7x7 matrix consisting only of 0, 1, 2 and -1. Then another line will follow containing the number 1 or 2, which is your player id. The difference between player 1 and player 2 is that player 1 plays first at the start of the game. In the given matrix, top-left is [0,0] and bottom-right is [6,6]. The coordinate of a cell is represented by [row, column]. Rows increase from top to bottom and columns increase from left to right. A cell marked 0 contains a square, which is yellow in color. A cell marked 1 contains player 1's piece, which is blue in color. A cell marked 2 contains player 2's piece, which is red in color. A cell marked -1 no longer contains a square. The board is brown in color.

Output

Print the coordinates of the neighbor cell [row, column] where you want to move your piece. On the next line, print the coordinates of the cell from which you want to remove the square.

Starting state

0 0 0 2 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 1 0 0 0

Scoring

The scores will be calculated by running a tournament of all submissions at the end of the contest. Your last submission will be used while running the tournament. Score will be assigned according to the Elo rating system. Example of a bot which plays the game randomly, while avoiding invalid moves: https://code.hackerearth.com/6f1718R

SAMPLE INPUT

0 0 0 2 0 0 0
0 0 0 -1 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 1 0 0
0 0 0 0 0 0 0
2

SAMPLE OUTPUT

0 2
4 4

Explanation

This is player 2's turn, and the player moves his/her piece to cell [0, 2] and removes the square at cell [4, 4]. After this move the state of the game becomes:

0 0 2 0 0 0 0
0 0 0 -1 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 -1 0 0
0 0 0 0 1 0 0
0 0 0 0 0 0 0

Time Limit: 1.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB
Marking Scheme: Marks are awarded when all the testcases pass.
Allowed Languages: C, C++, Clojure, C#, D, Erlang, F#, Go, Groovy, Haskell, Java, Java 8, JavaScript(Rhino), JavaScript(Node.js), Lisp, Lisp (SBCL), Lua, Objective-C, OCaml, Octave, Pascal, Perl, PHP, Python, Python 3, R(RScript), Racket, Ruby, Rust, Scala, Scala 2.11.8, Swift, Visual Basic
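The random bot linked above is not reproduced here, but the input/output protocol can be illustrated with a deliberately naive sketch. The strategy (take the first legal move, then remove a free square near the opponent) is only a placeholder, not a competitive bot.

```python
import sys

# Naive Isola bot: reads the 7x7 board and the player id in the format above,
# moves to the first free neighbouring square, and removes one free square.
def main():
    data = sys.stdin.read().split()
    board = [[int(data[r * 7 + c]) for c in range(7)] for r in range(7)]
    me = int(data[49])
    opponent = 2 if me == 1 else 1

    my_pos = opp_pos = None
    for r in range(7):
        for c in range(7):
            if board[r][c] == me:
                my_pos = (r, c)
            elif board[r][c] == opponent:
                opp_pos = (r, c)

    # Legal destinations: 8-neighbours that still hold a square (value 0).
    moves = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            r, c = my_pos[0] + dr, my_pos[1] + dc
            if 0 <= r < 7 and 0 <= c < 7 and board[r][c] == 0:
                moves.append((r, c))
    if not moves:
        return  # no legal move: the game is already lost

    dest = moves[0]

    # Remove a free square other than our destination, preferring one adjacent
    # to the opponent to restrict its mobility a little.
    free = [(r, c) for r in range(7) for c in range(7)
            if board[r][c] == 0 and (r, c) != dest]
    near_opp = [p for p in free
                if max(abs(p[0] - opp_pos[0]), abs(p[1] - opp_pos[1])) == 1]
    removal = (near_opp or free or [my_pos])[0]  # my_pos: our vacated square, as a last resort

    print(dest[0], dest[1])
    print(removal[0], removal[1])

main()
```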
{}
# What is a command line argument in C language?

An executable instruction that performs a task for the OS is called a command. These commands are issued from the OS prompt. The arguments that are associated with a command are as follows −

• argc - argument count.
• argv - argument vector.

argc − It holds the total number of arguments passed from the command prompt.
argv − It is a pointer to an array of character strings that contains the names of the arguments.

For example, for the command line

C:\> sample.exe hello how are you

the arguments are:

• argc = 5
• argv[0] = sample.exe
• argv[1] = hello
• argv[2] = how
• argv[3] = are
• argv[4] = you

## Example

Following is the C program for command line arguments −

#include <stdio.h>

int main(int argc, char *argv[]) {
    int i;

    /* argc counts the program name plus every argument typed after it */
    printf("No. of arguments at command prompt = %d\n", argc);

    printf("Arguments given at prompt are: ");
    /* argv[0] is the program name, so the user arguments start at argv[1] */
    for (i = 1; i < argc; i++)
        printf("%s ", argv[i]);
    printf("\n");

    return 0;
}

## Output

To run a C program with command-line arguments −

• Compile the program
• Run the program
• Go to the command prompt and give the input as shown below.

C:\> sample.exe hello how are you

No. of arguments at command prompt = 5
Arguments given at prompt are: hello how are you
{}
Mass is the amount of matter in an object; the word "mass" comes from the Greek word "maza", meaning "lump of dough". Mass is usually measured in kilograms, abbreviated kg, and it does not change according to location. Weight, by contrast, is the force of gravity on an object: it depends on the local gravity and therefore varies according to location, increasing or decreasing with higher or lower gravity. If you weigh 100 pounds on Earth, you would weigh about 37.7 pounds on Mars. In physics there are different ways of determining the quantity of mass. Gravitational mass is a measurement of how much gravity an object exerts on other objects; gravitational force is a force of attraction between all objects, determined by their masses and the distance between them. Inertial mass is a measure of how much an object resists acceleration: mass is the quantity of inertia possessed by an object, the proportion between force and acceleration referred to in Newton's Second Law of Motion (force equals mass times acceleration). To lift a suitcase, for example, you must apply at least enough force to overcome the force of gravity acting on it. Related notions include the center of mass of a body and density, whose formula relates mass and volume.

At the atomic scale, Dalton's theory holds that all matter is made up of individual particles called atoms, which cannot be divided. Atomic mass is a characteristic of an atom and plays a major role in the chemical properties of elements. Isotopes are atoms having the same atomic number but different mass numbers, because they contain varying quantities of neutrons; naturally occurring hydrogen is almost all hydrogen-1, and naturally occurring oxygen is virtually all oxygen-16. Mass is measured in the laboratory with balances (the analytical balance is the most widely used in scientific laboratories) and, for ions, with mass spectrometry, a widely used instrumental technique whose first instrument, known as a parabola spectrograph, was reported in 1912. Once upon a time, mass spectrometers were only capable of analyzing samples that existed as gases, but today's models can handle solids and liquids, and some can sit on a tabletop. The mass-to-charge ratio (m/z) is used to describe ions observed in mass spectrometry: by convention, m is the numerical value for the mass of the ion and z is the numerical value for its charge. Mass is also central on much larger scales: the Higgs boson is an elementary particle in the Standard Model of particle physics, produced by the quantum excitation of the Higgs field, and the spaceship Endurance's destination, Gargantua, is a fictional supermassive black hole with a mass 100 million times that of the sun.
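The Earth-versus-Mars figure quoted above is just a surface-gravity ratio; in the sketch below the 0.377 factor is inferred from the 100 lb to 37.7 lb example in the text, not measured independently.

```python
# Mass stays the same; weight scales with surface gravity.
SURFACE_GRAVITY_RATIO = {"Earth": 1.0, "Mars": 0.377}

weight_on_earth_lb = 100.0
for body, g_ratio in SURFACE_GRAVITY_RATIO.items():
    print(f"{body:6s}: {weight_on_earth_lb * g_ratio:6.1f} lb")
```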
{}
# Subadditive function

A subadditive function is a real function $f$ with the property
$$f(x+y) \le f(x) + f(y) \ .$$
A subadditive set function is a function $f$ on a collection of subsets of a set $X$ with the property that
$$f(A \cup B) \le f(A) + f(B) \ .$$
A set function is $\sigma$-subadditive or countably subadditive if
$$f\left({ \cup_{i=1}^\infty A_i }\right) \le \sum_{i=1}^\infty f(A_i) \ .$$
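A standard example (not part of the original entry) may help fix the first definition: the square root is subadditive on $[0,\infty)$, since for $x, y \ge 0$
$$\left(\sqrt{x}+\sqrt{y}\right)^2 = x + 2\sqrt{xy} + y \ge x + y \ ,$$
and taking square roots of both (nonnegative) sides gives $\sqrt{x+y} \le \sqrt{x} + \sqrt{y}$.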
{}
Hamiltonian Cycles in Prisms (1986, 2012)

Originators: Moshe Rosenfeld    (presented by Douglas West - REGS 2012)

Definitions: The prism over a graph $G$ is the cartesian product graph $G\Box K_2$. A graph is Hamiltonian if it has a spanning cycle, and $G$ is prism-Hamiltonian if the prism over $G$ is Hamiltonian.

Background: For graphs that are prisms, typically one can weaken the conditions that suffice for various properties about spanning cycles in the absence of the restriction to prisms. For example, it is not true that all $4$-connected $4$-regular graphs are Hamiltonian. However, graphs in the subclass consisting of prisms over $3$-connected $3$-regular graphs are Hamiltonian (Paulraja [P], Rosenfeld-Barnett [RB], and [CKRR]). In general, we seek conditions on $G$ for the prism over $G$ to be Hamiltonian or to have even stronger properties about spanning cycles. The result just mentioned suggests a question.

Question 1: (Alspach-Rosenfeld [AR]) Is it true that the prism over any $3$-connected $3$-regular graph $G$ decomposes into two spanning cycles?

Comment: A positive answer is known for many classes of such graphs, including $3$-edge-colorable $3$-regular graphs in which each pair of color classes forms a spanning cycle, bipartite planar $3$-connected $3$-regular graphs [CKRR], and the Petersen graph [RB]. [AR] gives a necessary and sufficient condition for the prism over a $3$-connected $3$-regular graph to have a Hamiltonian decomposition.

It is easy to see that the prism over a Hamiltonian graph is also Hamiltonian. How much can sufficient conditions for spanning cycles be violated and still have the prism be Hamiltonian? For example, the Chvátal-Erdős Theorem [CE] states that a $k$-connected graph $G$ with independence number $a$ is Hamiltonian if $k\ge a$. Concerning regular graphs, Jackson's Theorem [J] states that a $2$-connected $k$-regular graph with at most $3k$ vertices is Hamiltonian; when $k$ is even, there are graphs with $3k+4$ vertices that are not Hamiltonian.

Question 2: (West) Given $k$, what is the largest value of $a$ such that if $G$ has connectivity $k$ and independence number $a$, then the prism over $G$ is Hamiltonian?

Comment: For $a>k$, the complete bipartite graph $K_{k,a}$ is $k$-connected and has independence number $a$. When $a>2k$, the prism over $K_{k,a}$ is not Hamiltonian, since deleting the $2k$ vertices of degree $a+1$ leaves $a$ components. Hence the answer to Question 2 is at most $2k$.

Question 3: (Rosenfeld) Given $k$, what is the largest value of $n$ such that the prism over any $2$-connected $k$-regular $n$-vertex graph is Hamiltonian?

Comments: For even $k$ with $k\ge6$, there is a $2$-connected $k$-regular graph $G$ with $5k+6$ vertices such that the prism over $G$ is not Hamiltonian. The statement that the prism over a graph having a Hamiltonian path is Hamiltonian was extended by Čada-Kaiser-Rosenfeld-Ryjáček [CKRR]. They proved that if $G$ contains a spanning subgraph that is a cactus with maximum degree at most $3$ in which all cycles have even length and all vertices of degree $3$ lie on cycles, then the prism over $G$ is Hamiltonian. They used this fact to give a short proof that the prism over a $3$-connected $3$-regular graph $G$ is Hamiltonian, starting with the fact that $G$ contains a $2$-connected spanning bipartite subgraph.

One can similarly ask how long-cycle versions of results on spanning cycles can be strengthened for prisms.
The long-cycle version of Dirac's Theorem, proved by Bermond [B] and Linial [L], is that every $2$-connected $n$-vertex graph with minimum degree $d$ has a cycle of length at least $\min\{n,2d\}$. Question 4: (West) Given $d$ and $n$, what is the largest value $c$ such that the prism over every $n$-vertex graph with minimum degree $d$ has a cycle of length at least $\min\{n,c\}$? Comment: Many further variations are possible. For example, one can consider Ore's Condition: if the degree-sum of each pair of nonadjacent vertices is at least $d$ in a $2$-connected $n$-vertex graph $G$, what lower bound is guaranteed for the circumference of the prism over $G$? For the questions about $2$-connected graphs, one can restrict to higher connectivity. Does a slight strengthening of the conditions for a Hamiltonian prism make the prism Hamiltonian-connected? Pancyclic? Etc., etc., etc. For analogous questions about more general products than prisms, see this problem. References: [AR] Alspach, Brian; Rosenfeld, Moshe; On Hamilton decompositions of prisms over simple 3-polytopes. Graphs Combin. 2 (1986), no. 1, 1-8. [B] Bermond, J.-C.; On Hamiltonian walks. Proceedings of the Fifth British Combinatorial Conference (Univ. Aberdeen, Aberdeen, 1975), pp. 41-51. Congressus Numerantium, No. XV, Utilitas Math., Winnipeg, Man., 1976. [CKRR] Čada, Roman; Kaiser, Tomáš; Rosenfeld, Moshe; Ryjáček, Zdeněk; Hamiltonian decompositions of prisms over cubic graphs. Discrete Mathematics 286 (2004), 45-56 [CE] Chvátal, V.; Erdős, P.; A note on Hamiltonian circuits. Discrete Math. 2 (1972), 111-113. [J] Jackson, Bill; Hamilton cycles in regular 2-connected graphs. J. Combin. Theory Ser. B 29 (1980), no. 1, 27-46. [L] Linial, Nathan; A lower bound for the circumference of a graph. Discrete Math. 15 (1976), no. 3, 297-300. [P] Paulraja, P.; A characterization of Hamiltonian prisms. J. Graph Theory 17 (1993), no. 2, 161-171. [RB] Rosenfeld, Moshe; Barnette, David; Hamiltonian circuits in certain prisms. Discrete Math. 5 (1973), 389-394.
{}
# Existence of a specific reordering bijection Consider a bijection $g:\mathbb{N}\rightarrow\mathbb{N}$ with the following properties: 1. For every real sequence $(a_n)_{n\geq1}$, convergence of $\sum_{n=1}^{\infty}a_n$ implies convergence of $\sum_{n=1}^{\infty}a_{g(n)}$. 2. There exists at least one real sequence $(c_n)_{n\geq1}$ such that $\sum_{n=1}^{\infty} c_n$ diverges but $\sum_{n=1}^{\infty}c_{g(n)}$ converges. Does such a bijection exist? - A detailed answer to your question can be found in the paper "Creating More Convergent Series" by Steven Krantz and Jeffery McNeal, which can be found here. – Matemáticos Chibchas Jan 10 '13 at 20:32 @MatemáticosChibchas I guess this qualifies to be turned into an answer ... – Hagen von Eitzen Jan 10 '13 at 21:37
{}
# Modeling Pandemics (3)

In Statistical Inference in a Stochastic Epidemic SEIR Model with Control Intervention, a more complex model than the one we've seen yesterday was considered (and is called the SEIR model). Consider a population of size $N$, and assume that $S$ is the number of susceptible, $E$ the number of exposed, $I$ the number of infectious, and $R$ the number of recovered (or immune) individuals,
$$\begin{aligned}\frac{dS}{dt}&=-\beta\frac{I}{N}S\\ \frac{dE}{dt}&=\beta\frac{I}{N}S-aE\\ \frac{dI}{dt}&=aE-bI\\ \frac{dR}{dt}&=bI\end{aligned}$$
Between $S$ and $I$, the transition rate is $\beta I$, where $\beta$ is the average number of contacts per person per time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject. Between $I$ and $R$, the transition rate is $b$ (simply the rate of recovered or dead, that is, the number of recovered or dead during a period of time divided by the total number of infected over that same period of time). Finally, the incubation period is a random variable with exponential distribution with parameter $a$, so that the average incubation period is $a^{-1}$.

Probably more interesting, Understanding the dynamics of ebola epidemics suggested a more complex model, with susceptible people $S$, exposed $E$, infectious people either in the community $I$ or in hospitals $H$, people who died $F$, and finally those who either recover or are buried and therefore are no longer susceptible $R$. The following dynamic model is considered
$$\begin{aligned}\frac{dS}{dt}&=-(\beta_I I+\beta_H H+\beta_F F)\frac{S}{N}\\ \frac{dE}{dt}&=(\beta_I I+\beta_H H+\beta_F F)\frac{S}{N}-\alpha E\\ \frac{dI}{dt}&=\alpha E-\theta\gamma_H I-(1-\theta)(1-\delta)\gamma_R I-(1-\theta)\delta\gamma_F I\\ \frac{dH}{dt}&=\theta\gamma_H I-\delta\lambda_F H-(1-\delta)\lambda_R H\\ \frac{dF}{dt}&=(1-\theta)(1-\delta)\gamma_R I+\delta\lambda_F H-\nu F\\ \frac{dR}{dt}&=(1-\theta)(1-\delta)\gamma_R I+(1-\delta)\lambda_R H+\nu F\end{aligned}$$
In that model, the parameters are: $\alpha^{-1}$ the (average) incubation period (7 days), $\gamma_H^{-1}$ the onset to hospitalization (5 days), $\gamma_F^{-1}$ the onset to death (9 days), $\gamma_R^{-1}$ the onset to "recovery" (10 days), $\lambda_F^{-1}$ the hospitalisation to death (4 days), $\lambda_R^{-1}$ the hospitalisation to recovery (5 days), and $\nu^{-1}$ the death to burial (2 days). Here, numbers are from Understanding the dynamics of ebola epidemics (in the context of ebola). The other parameters are $\beta_I$ the transmission rate in the community (0.588), $\beta_H$ the transmission rate in hospital (0.794) and $\beta_F$ the transmission rate at funerals (7.653).
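In the R code below, time is measured in weeks, so each of these average durations (quoted in days) has to be converted into a weekly rate; as a quick worked example of the conversion (my own note, not from the original post),
$$\text{rate per week}=\frac{7\ \text{days/week}}{\text{average duration in days}},\qquad\text{e.g.}\quad \gamma_H=\frac{7}{5}=1.4,\qquad \alpha=\frac{7}{7}=1,\qquad \nu=\frac{7}{2}=3.5 .$$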
Thus, in R, set the initial conditions and the parameters

epsilon = 0.001
Z = c(S = 1-epsilon, E = epsilon, I = 0, H = 0, F = 0, R = 0)
p = c(alpha = 1/7*7, theta = 0.81, delta = 0.81, betai = 0.588, betah = 0.794, blambdaf = 7.653, N = 1,
      gammah = 1/5*7, gammaf = 1/9.6*7, gammar = 1/10*7, lambdaf = 1/4.6*7, lambdar = 1/5*7, nu = 1/2*7)

If $\boldsymbol{Z}=(S,E,I,H,F,R)$, we write
$$\frac{\partial \boldsymbol{Z}}{\partial t} = SEIHFR(\boldsymbol{Z})$$
where $SEIHFR$ is

SEIHFR = function(t, Z, p){
  S=Z[1]; E=Z[2]; I=Z[3]; H=Z[4]; F=Z[5]; R=Z[6]
  alpha=p["alpha"]; theta=p["theta"]; delta=p["delta"]
  betai=p["betai"]; betah=p["betah"]; gammah=p["gammah"]
  gammaf=p["gammaf"]; gammar=p["gammar"]; lambdaf=p["lambdaf"]
  lambdar=p["lambdar"]; nu=p["nu"]; blambdaf=p["blambdaf"]
  N=S+E+I+H+F+R
  dS=-(betai*I+betah*H+blambdaf*F)*S/N
  dE=(betai*I+betah*H+blambdaf*F)*S/N-alpha*E
  dI=alpha*E-theta*gammah*I-(1-theta)*(1-delta)*gammar*I-(1-theta)*delta*gammaf*I
  dH=theta*gammah*I-delta*lambdaf*H-(1-delta)*lambdar*H
  dF=(1-theta)*(1-delta)*gammar*I+delta*lambdaf*H-nu*F
  dR=(1-theta)*(1-delta)*gammar*I+(1-delta)*lambdar*H+nu*F
  dZ=c(dS,dE,dI,dH,dF,dR)
  list(dZ)}

We can solve it, or at least study the dynamics from some starting values

library(deSolve)
times = seq(0, 50, by = .1)
resol = ode(y=Z, times=times, func=SEIHFR, parms=p)

For instance, the proportion of people infected is the following

plot(resol[,"time"],resol[,"I"],type="l",xlab="time",ylab="",col="red")
lines(resol[,"time"],resol[,"H"],col="blue")

# Modeling pandemics (2)

When introducing the SIR model, in our initial post, we got an ordinary differential equation, but we did not really discuss stability and periodicity. It has to do with the Jacobian matrix of the system. But first of all, we had three equations for three functions, while actually
$$\frac{dS}{dt}+\frac{dI}{dt}+\frac{dR}{dt}=0$$
so our problem is really two-dimensional. Hence
$$\begin{aligned}X&=\frac{dS}{dt}=\mu(N-S)-\frac{\beta IS}{N},\\ Y&=\frac{dI}{dt}=\frac{\beta IS}{N}-(\mu+\gamma)I\end{aligned}$$
and therefore the Jacobian of the system is
$$\begin{pmatrix}\dfrac{\partial X}{\partial S}&\dfrac{\partial X}{\partial I}\\ \dfrac{\partial Y}{\partial S}&\dfrac{\partial Y}{\partial I}\end{pmatrix}=\begin{pmatrix}-\mu-\beta\dfrac{I}{N}&-\beta\dfrac{S}{N}\\ \beta\dfrac{I}{N}&\beta\dfrac{S}{N}-(\mu+\gamma)\end{pmatrix}$$
We should evaluate the Jacobian at the equilibrium, i.e.
$$S^\star=\frac{\gamma+\mu}{\beta}=\frac{1}{R_0}\qquad\text{and}\qquad I^\star=\frac{\mu(R_0-1)}{\beta}$$
We should then look at the eigenvalues of the matrix.
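As a small added step (not in the original post), plugging the equilibrium into the Jacobian simplifies things, since with $N=1$ we have $\beta S^\star=\gamma+\mu$ and $\beta I^\star=\mu(R_0-1)$:
$$J^\star=\begin{pmatrix}-\mu R_0 & -(\gamma+\mu)\\ \mu(R_0-1) & 0\end{pmatrix},\qquad \operatorname{tr}J^\star=-\mu R_0,\qquad \det J^\star=\mu(\gamma+\mu)(R_0-1),$$
so the eigenvalues are
$$\lambda=\frac{-\mu R_0\pm\sqrt{\mu^2R_0^2-4\mu(\gamma+\mu)(R_0-1)}}{2},$$
which, when $R_0>1$ and the discriminant is negative, are complex with negative real part: damped oscillations around the endemic equilibrium.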
Our very last example was

times = seq(0, 100, by=.1)
p = c(mu = 1/100, N = 1, beta = 50, gamma = 10)
start_SIR = c(S=0.19, I=0.01, R = 0.8)
resol = ode(y=start_SIR, t=times, func=SIR, p=p)
plot(resol[,"time"],resol[,"I"],type="l",xlab="time",ylab="")

We can compute values at the equilibrium

mu=p["mu"]; beta=p["beta"]; gamma=p["gamma"]
N=1
S = (gamma + mu)/beta
I = mu * (beta/(gamma + mu) - 1)/beta

and the Jacobian matrix

J=matrix(c(-(mu + beta * I/N),-(beta * S/N),
           beta * I/N, beta * S/N - (mu + gamma)),2,2,byrow = TRUE)

Now, if we look at the eigenvalues,

eigen(J)$values
[1] -0.024975+0.6318831i -0.024975-0.6318831i

or more precisely $2\pi/b$ where $a\pm ib$ are the conjugate eigenvalues,

2 * pi/(Im(eigen(J)$values[1]))
[1] 9.943588

we have a damping period of 10 time lengths (10 days, or 10 weeks), which is more or less what we've seen above. The graph above was obtained using

p = c(mu = 1/100, N = 1, beta = 50, gamma = 10)
start_SIR = c(S=0.19, I=0.01, R = 0.8)
resol = ode(y=start_SIR, t=times, func=SIR, p=p)
plot(resol[1:1e5,"time"],resol[1:1e5,"I"],type="l",xlab="time",ylab="",lwd=3,col="red")
yi=resol[,"I"]
dyi=diff(yi)
i=which((dyi[2:length(dyi)]*dyi[1:(length(dyi)-1)])<0)
t=resol[i,"time"]
arrows(t[2],.008,t[4],.008,length=.1,code=3)

If we look carefully, at the beginning the duration is (much) longer than 10 (about 13)… but it does converge towards 9.94

plot(diff(t[seq(2,40,by=2)]),type="b")
abline(h=2*pi/(Im(eigen(J)$values[1])))

So here, theoretically, every 10 weeks (assuming that our time length is a week), we should observe an outbreak, smaller than the previous one. In practice, initially it is every 13 or 12 weeks, but the time to wait between outbreaks decreases (until it reaches 10 weeks).

# Modeling pandemics (1)

The most popular model to model epidemics is the so-called SIR model – or Kermack-McKendrick. Consider a population of size $N$, and assume that $S$ is the number of susceptible, $I$ the number of infectious, and $R$ the number of recovered (or immune) individuals,
$$\begin{aligned}\frac{dS}{dt}&=-\frac{\beta IS}{N},\\ \frac{dI}{dt}&=\frac{\beta IS}{N}-\gamma I,\\ \frac{dR}{dt}&=\gamma I,\end{aligned}$$
so that
$$\frac{dS}{dt}+\frac{dI}{dt}+\frac{dR}{dt}=0$$
which implies that $S+I+R=N$. In order to be more realistic, consider some (constant) birth rate $\mu$, so that the model becomes
$$\begin{aligned}\frac{dS}{dt}&=\mu(N-S)-\frac{\beta IS}{N},\\ \frac{dI}{dt}&=\frac{\beta IS}{N}-(\gamma+\mu) I,\\ \frac{dR}{dt}&=\gamma I-\mu R.\end{aligned}$$
Note, in this model, that people get sick (infected) but they do not die, they recover. So here, we can model chickenpox, for instance, not SARS. The dynamics of the infectious class depends on the following ratio:
$$R_{0}=\frac{\beta}{\gamma+\mu}$$
which is the so-called basic reproduction number (or reproductive ratio). The effective reproductive ratio is $R_0S/N$, and the turnover of the epidemic happens exactly when $R_0S/N=1$, or when the fraction of remaining susceptibles is $R_0^{-1}$. As shown in Directly transmitted infectious diseases: Control by vaccination, if $S/N<R_0^{-1}$ the disease (the number of people infected) will start to decrease. Want to see it? Start with

mu = 0
beta = 2
gamma = 1/2

for the parameters. Here, $R_0=4$. We also need starting values

epsilon = .001
N = 1
S = 1-epsilon
I = epsilon
R = 0

Then use the ordinary differential equation solver, in R.
The idea is to say that $\boldsymbol{Z}=(S,I,R)$ and we have the gradient
$$\frac{\partial \boldsymbol{Z}}{\partial t} = SIR(\boldsymbol{Z})$$
where $SIR$ is a function of the various parameters. Hence, set

p = c(mu = 0, N = 1, beta = 2, gamma = 1/2)
start_SIR = c(S = 1-epsilon, I = epsilon, R = 0)

Then we must define the time, and the function that returns the gradient,

times = seq(0, 10, by = .1)
SIR = function(t,Z,p){
  S=Z[1]; I=Z[2]; R=Z[3]; N=S+I+R
  mu=p["mu"]; beta=p["beta"]; gamma=p["gamma"]
  dS=mu*(N-S)-beta*S*I/N
  dI=beta*S*I/N-(mu+gamma)*I
  dR=gamma*I-mu*R
  dZ=c(dS,dI,dR)
  return(list(dZ))}

To solve this problem use

library(deSolve)
resol = ode(y=start_SIR, times=times, func=SIR, parms=p)

We can visualize the dynamics below

par(mfrow=c(1,2))
t=resol[,"time"]
plot(t,resol[,"S"],type="l",xlab="time",ylab="")
lines(t,resol[,"I"],col="red")
lines(t,resol[,"R"],col="blue")
plot(t,t*0+1,type="l",xlab="time",ylab="",ylim=0:1)
polygon(c(t,rev(t)),c(resol[,"R"],rep(0,nrow(resol))),col="blue")
polygon(c(t,rev(t)),c(resol[,"R"]+resol[,"I"],rev(resol[,"R"])),col="red")

We can actually also visualize the effective reproduction number $R_0S/N$, where

R0=p["beta"]/(p["gamma"]+p["mu"])

The effective reproduction number is on the left, and as we mentioned above, when it reaches 1, we actually reach the maximum of the infected,

plot(t,resol[,"S"]*R0,type="l",xlab="time",ylab="")
abline(h=1,lty=2,col="red")
abline(v=max(t[resol[,"S"]*R0>=1]),col="darkgreen")
points(max(t[resol[,"S"]*R0>=1]),1,pch=19)
plot(t,resol[,"S"],type="l",xlab="time",ylab="",col="grey")
lines(t,resol[,"I"],col="red",lwd=3)
lines(t,resol[,"R"],col="light blue")
abline(v=max(t[resol[,"S"]*R0>=1]),col="darkgreen")
points(max(t[resol[,"S"]*R0>=1]),max(resol[,"I"]),pch=19)

And when adding a $\mu$ parameter, we can obtain some interesting dynamics on the number of infected,

times = seq(0, 100, by=.1)
p = c(mu = 1/100, N = 1, beta = 50, gamma = 10)
start_SIR = c(S=0.19, I=0.01, R = 0.8)
resol = ode(y=start_SIR, t=times, func=SIR, p=p)
plot(resol[,"time"],resol[,"I"],type="l",xlab="time",ylab="")

# Wealth and life expectancy

This morning, I came across an INSEE chart showing mortality rates by sex, age and standard of living, with, among others, the following graph. As is often the case with INSEE, the data are available… not at the individual level (unfortunately), but at least one can rework the visualisation. In fact, the data are even finer, since the wealth levels are defined in 5% brackets, and in addition we have the breakdown between men and women

b = read.csv2("MORT-RICHESSE.csv")
plot(b[,1],b[,2]/1000,col="red",type="l",ylab="% de survivants",xlab="Age")
lines(b[,1],b[,3]/1000,col="red",type="l",lty=2)
lines(b[,1],b[,4]/1000,col="blue",type="l",lty=1)
lines(b[,1],b[,5]/1000,col="blue",type="l",lty=2)
legend("bottomleft",c("Femmes 95-100%","Hommes 95-100%","Femmes 0-5%","Hommes, 0-5%"),
       bty="n",col=c("red","blue","red","blue"),lty=c(2,2,1,1))

I was wondering whether one could attempt a reverse reading of this graph: on it, quite naturally, one looks, at a given percentage, at the gap between the solid curve (the poor) and the dashed one (the rich). If that is the information we want, we can try to visualise it directly.
To do that, we need to invert our survival function (I did it quickly, with a simple linear interpolation… I think one can do better)

inversef = function(p,k=2){
  y=1-b[,k]/100000
  idx=sum(y<=p)
  y1=y[idx-1]
  y2=y[idx]
  w1=(y1-p)/(y1-y2)
  w2=(p-y2)/(y1-y2)
  w2*b[idx-1,1]+w1*b[idx,1] }

Then we can build the inverses and, better, the differences between the curves of the rich and the poor

diffF = function(p) inversef(p,3)-inversef(p,2)
diffH = function(p) inversef(p,5)-inversef(p,4)
u = seq(.01,.99,by=.01)
vF = Vectorize(diffF)(u)
vH = Vectorize(diffH)(u)
plot(u*100,vF,col="red",type="l",xlab="Probabilité (%)",ylab="Nombre d'années",ylim=c(0,max(vF,vH)))
lines(u*100,vH,col="blue",type="l",lty=1)
legend("topright",c("Femmes","Hommes"), bty="n",col=c("red","blue"),lty=c(1,1))

I am not entirely comfortable with this graph. First because I do not know what wealth really captures (someone poor at 20 can become rich at 50, right?), and wealth has often been related to age. Then the horizontal axis also seems to have a complicated interpretation: when we look at 10%, we are looking at the poor and the rich who died relatively young (relatively, because I am looking at the quantile of the survival function within a given wealth bracket). In other words, at 10%, I am comparing young rich people who died very young (only 10% of the rich died younger – that is the interpretation of a quantile) with young poor people (dead at an age that only 90% of the poor exceeded), and we observe a difference of about 20 years. I also have the impression that one could say that, for the majority of men, the rich live 12 years longer than the poor, which is twice the figure for women (around 6 years). One could of course simply compute the differences between the areas, which gives the difference between the life expectancies at birth of the poor and the rich (as INSEE does), and which can be visualised on the following graph

plot(b[,1],b[,2]/1000,col="white",type="l",ylab="% de survivants",xlab="Age")
polygon(c(b[,1],rev(b[,1])),c(b[,3]/1000,rev(b[,2]/1000)),col="red",border=NA)

and a computation gives a difference of about 8 years

sum(b[,3]-b[,2])/100000
[1] 8.239346

but the visualisation tells much more than a simple area computation. For example, the graph below shows exactly the same gap between the life expectancies of the poor and the rich

diff = sum(b[,3]-b[,2])/1000
y1 = b[,2]/1000
for(i in 1:100){
  y1[i] = b[i,2]/1000+min(100-b[i,2]/1000,diff)
  diff = diff-(y1[i]-b[i,2]/1000) }
plot(b[,1],b[,2]/1000,col="white",type="l",ylab="% de survivants",xlab="Age")
polygon(c(b[,1],rev(b[,1])),c(y1,rev(b[,2]/1000)),col="red",border=NA)
sum(b[,3]-b[,2])/100000
lines(b[,1],b[,2]/1000,col="red")

Here, we are saying that half of the rich women die around 81, and the others die at the same age as a poor woman (but a poor woman who lives a long time). The distributions are really different, and that is what I am trying to visualise… because the density of the age at death does not seem particularly easy to analyse… As always, the comments are open if anyone has ideas on how to visualise these data…
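As a side note (my own sketch, not from the original post), the manual linear interpolation in inversef could also be delegated to base R's approx(), which interpolates age as a function of the survival probability:

# invert the survival curve for column k using approx() (assumes the same data frame b as above)
inverse_surv <- function(p, k = 2) approx(x = 1 - b[, k]/100000, y = b[, 1], xout = p)$y
inverse_surv(0.5, 2)   # e.g. the median age at death for the first group (column 2)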
# Proportion of people alive in 1945 that are still alive

In demography, we like to use life tables to estimate the probability that someone born in 1945 (say) is still alive nowadays. But another interesting quantity might be the probability that someone alive in 1945 is still alive nowadays. The main difference is that we do not know when that person, alive in 1945, was born. Someone who was old in 1945 is very unlikely to still be alive in 2017. To compute those probabilities, we can use datasets from http://www.mortality.org/hmd/. More precisely, we need both death and birth data. I assume that the datasets (text files) were downloaded (it is necessary to register – for free – to get the data).

D=read.table("FRDeaths_1x1.txt",skip=1,header=TRUE)
B=read.table("FRBirths.txt",skip=1,header=TRUE)

In the death dataset, there is a "110+" for people older than 110 years. For convenience, let us cap our observations at 110 years old,

D$Age=as.numeric(as.character(D$Age))
D$Age[is.na(D$Age)]=110

Consider now a first function that will return, for people born in 1930 (say), two pieces of information:
• the number of people (here, let us consider women only) born in 1930 (from the birth database)
• the number of deaths of people of age 0 in 1930, people of age 1 in 1931, people of age 2 in 1932, etc…

The code is simple

nb=function(y=1930){
  debut=1816
  MatDFemale=matrix(D$Female,nrow=111)
  colnames(MatDFemale)=debut+0:198
  cly=y-debut+1:111
  deces=diag(MatDFemale[,cly[cly%in%1:199]])
  return(c(B$Female[B$Year==y],deces))}

We have a single number for the number of births, and then a vector for the number of deaths. Consider now another function. Consider the people born in 1930. We want to get two numbers: the number of people still alive in 1945 (say), and the number of people still alive nowadays. The ratio will be the proportion of people born in 1930 that were alive in 1945, that are still alive in 2015.

pop=function(ne=1930,an=1945){
  comptage=nb(ne)
  s=0
  if(an>ne) s=sum(comptage[seq(2,1+an-ne)])
  p1=max(comptage[1]-s,0)
  p2=max(p1-sum(comptage[seq(2+an-ne,length(comptage))]),0)
  c(p1,p2) }

Then, for a given year (say 1945), to get the proportion of people alive in 1945 that are still alive today, we need to count how many people born in 1944 were still alive in 1945 and in 2015, but also born in 1943, 1942, etc. And we simply consider the ratio of the total number of people alive in 2015 over the total number of people alive in 1945

ptn=function(y=1945){
  V=Vectorize(function(x) pop(x,y))(1816:y)
  sum(V[2,!is.na(V[2,])])/sum(V[1,!is.na(V[1,])]) }

Hence, 22% of those alive in 1945 are still alive in 2015,

> ptn(1945)
[1] 0.2209435

Actually, instead of looking only at 1945, it is possible to get a plot

P=Vectorize(ptn)(1900:2010)
plot(1900:2010,P,type="l",ylim=0:1)

For instance,

> ptn(1975)
[1] 0.6377413

i.e. 63.7% of those alive in 1975 are still alive 40 years later. That is a rather interesting function, and I was surprised that I couldn't find it in a standard demographical R package…
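One practical tweak (my own sketch, not from the original post): ptn() calls pop() — and hence nb() — once per birth year, and the loop over 1900:2010 recomputes the same birth cohorts over and over, so a simple cache speeds things up considerably.

nb_cache <- list()
nb_cached <- function(y){
  key <- as.character(y)
  if (is.null(nb_cache[[key]])) nb_cache[[key]] <<- nb(y)   # compute each cohort once, then reuse
  nb_cache[[key]]
}
# then use nb_cached(ne) instead of nb(ne) inside pop() before running Vectorize(ptn)(1900:2010)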
# Dynamics of the Age Pyramid

A very nice post on blog.revolutionanalytics.com, with code by @kyle_e_walker, makes it very simple (provided you register to get a key for the census API) to build a dynamic age pyramid.

> devtools::install_github('walkerke/idbr')
> library(idbr)
> library(ggplot2)
> library(animation)
> library(dplyr)
> library(ggthemes)
> idb_api_key("mykey1239F2f324zf9GGZgege32R2ii4")

We then import the data for men and women,

> male <- idb1('FR', 2010:2050, sex = 'male') %>%
+   mutate(POP = POP * -1,
+          SEX = 'Male')
> female <- idb1('FR', 2010:2050, sex = 'female') %>% mutate(SEX = 'Female')

and store everything

> france <- rbind(male, female) %>%
+   mutate(abs_pop = abs(POP))

Then, we create the animation,

> saveGIF({
+   for (i in 2010:2050) {
+     title <- as.character(i)
+     year_data <- filter(france, time == i)
+     g1 <- ggplot(year_data, aes(x = AGE, y = POP, fill = SEX, width = 1)) +
+       coord_fixed() +
+       coord_flip() +
+       annotate('text', x = 98, y = -800000,
+                label = 'Data: US Census Bureau IDB; idbr R package', size = 3) +
+       geom_bar(data = subset(year_data, SEX == "Female"), stat = "identity") +
+       geom_bar(data = subset(year_data, SEX == "Male"), stat = "identity") +
+       scale_y_continuous(breaks = seq(-1000000, 1000000, 500000),
+                          labels = paste0(as.character(c(seq(1, 0, -0.5), c(0.5, 1))), "m"),
+                          limits = c(min(france$POP), max(france$POP))) +
+       theme_economist(base_size = 14) +
+       scale_fill_manual(values = c('#ff9896', '#d62728')) +
+       ggtitle(paste0('Population structure of France, ', title)) +
+       ylab('Population') +
+       xlab('Age') +
+       theme(legend.position = "bottom", legend.title = element_blank()) +
+       guides(fill = guide_legend(reverse = TRUE))
+     print(g1)
+   }
+ }, movie.name = 'france_pyramid.gif', interval = 0.1, ani.width = 700, ani.height = 600)

And the result is really nice, isn't it?

# Mortality by Weekday and Age

A few days ago, I mentioned a nice graph on Twitter. My colleague Jean-Philippe was extremely sceptical, so I tried to reproduce that graph. The good thing is that we have the Social Security Death Master File, for data in the US. To be more specific, I have three big files on my hard drive, and in order to reproduce that graph, we'll load the data by chunks. But first, because we have the day of birth and the day of death, I need a function to compute the age. So here it is

> age_years <- function(earlier, later)
+ {
+   lt <- data.frame(earlier, later)
+   age <- as.numeric(format(lt[,2],format="%Y")) - as.numeric(format(lt[,1],format="%Y"))
+   dayOnLaterYear <- ifelse(format(lt[,1],format="%m-%d")!="02-29",
+     as.Date(paste(format(lt[,2],format="%Y"),"-",format(lt[,1],format="%m-%d"),sep="")),
+     ifelse(as.numeric(format(later,format="%Y")) %% 400 == 0 | as.numeric(format(later,format="%Y")) %% 100 != 0 & as.numeric(format(later,format="%Y")) %% 4 == 0,
+       as.Date(paste(format(lt[,2],format="%Y"),"-",format(lt[,1],format="%m-%d"),sep="")),
+       as.Date(paste(format(lt[,2],format="%Y"),"-","02-28",sep=""))))
+   age[which(dayOnLaterYear > lt$later)] <- age[which(dayOnLaterYear > lt$later)] - 1
+   age
+ }

from github.com/nzcoops.
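A quick sanity check of that helper (my own example, not part of the original post):

age_years(as.Date("1950-06-15"), as.Date("2015-06-14"))   # 64: the 65th birthday has not been reached yet
age_years(as.Date("1950-06-15"), as.Date("2015-06-15"))   # 65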
Now, it is possible to create a similar table, based on that huge file (we have almost 50 million observations)

> cols <- c(1,9,20,4,15,15,1,2,2,4,2,2,4,2,5,5,7)
> noms_col <- c("code","ssn","last_name","name_suffix","first_name","middle_name","VorPCode","date_death_m","date_death_d","date_death_y","date_birth_m","date_birth_d","date_birth_y","state","zip_resid","zip_payment","blanks")
> library(LaF)
> TABLE_AGE_DAY=function(temp = "ssdm3"){
+   ssn <- laf_open_fwf( temp,column_widths = cols,column_types=rep("character",length(cols) ),column_names = noms_col,trim = TRUE)
+   object.size(ssn)
+   go_through <- seq(1,nrow(ssn),by = 1e05 )
+   if(go_through[ length(go_through)] != nrow( ssn)) go_through <- c(go_through,nrow( ssn))
+   go_through <- cbind(go_through[-length(go_through)],c(go_through[-c(1,length(go_through)) ]-1,go_through [ length(go_through)]))
+   go_through
+   pb <- txtProgressBar(min = 0, max = nrow( go_through), style = 3)
+   count_birthday <- function(s){
+     #print(s)
+     setTxtProgressBar(pb, s)
+     data <- ssn[ go_through[s,1]:go_through[s,2],c("date_death_y","date_death_m","date_death_d",
+       "date_birth_y","date_birth_m","date_birth_d")]
+     date1=as.Date(paste(data$date_birth_y,"-",data$date_birth_m,"-",data$date_birth_d,sep=""),"%Y-%m-%d")
+     date2=as.Date(paste(data$date_death_y,"-",data$date_death_m,"-",data$date_death_d,sep=""),"%Y-%m-%d")
+     idx=which(!(is.na(date1)|is.na(date2)))
+     date1=date1[idx]
+     date2=date2[idx]
+     itg=try(age<-age_years(date1,date2),silent=TRUE)
+     if(inherits(itg, "try-error")) age=trunc((date2-date1)/365.25)
+     w=weekdays(date2)
+     T=table(age,w)
+     Tab=matrix(0,106,7)
+     for(i in 1:nrow(T)) if(as.numeric(rownames(T)[i])<106) Tab[as.numeric(rownames(T)[i]),]=T[i,]
+     return(Tab)
+   }
+   D <- lapply( seq_len(nrow( go_through)),count_birthday)
+   T=D[[1]]
+   for(s in 2:length(D)) T=T+D[[s]]
+   return(T)
+ }

If we run that function on the three files

> D1=TABLE_AGE_DAY("ssdm1")
|========================================| 100%
> D2=TABLE_AGE_DAY("ssdm2")
|========================================| 100%
> D3=TABLE_AGE_DAY("ssdm3")
|========================================| 100%

we can visualize not percentages, as on the figure above, but counts

> D=D1+D2+D3
> colnames(D)= c("Sun","Thu","Mon","Tue","Wed","Sat","Fri")
> D=D[, c("Sun","Mon","Tue","Wed","Thu","Fri","Sat")]

and we have here (I remove the Saturday to get a better output)

> D[,1:6] Sun Mon Tue Wed Thu Fri [1,] 2843 2888 2943 3020 2979 3038 [2,] 2007 1866 1918 1974 1990 2137 [3,] 1613 1507 1532 1530 1515 1613 [4,] 1322 1256 1263 1259 1207 1330 [5,] 1155 1061 1092 1128 1112 1171 [6,] 1067 985 950 1082 1009 1055 [7,] 1129 901 915 954 941 1044 [8,] 1026 927 944 935 911 1005 [9,] 1029 1012 871 908 939 998 [10,] 1093 1011 974 958 928 1018 [11,] 1106 1031 1019 1036 1087 1122 [12,] 1289 1219 1176 1215 1141 1292 [13,] 1618 1455 1487 1484 1466 1633 [14,] 2121 2000 1900 1941 1845 2138 [15,] 2949 2647 2519 2499 2524 2748 [16,] 4488 3885 3798 3828 3747 4267 [17,] 5709 4612 4520 4422 4443 5005 [18,] 7280 5618 5400 5271 5344 5986 [19,] 8086 6172 5833 5820 6004 6628 [20,] 8389 6507 6166 6055 6430 6955 [21,] 8794 7038 6794 6628 6841 7572 [22,] 8578 6528 6512 6472 6757 7342 [23,] 8345 6750 6483 6469 6714 7338 [24,] 8361 6859 6589 6623 6854 7369 [25,] 8398 6974 6892 6766 6964 7613 [26,] 8432 7210 7012 7175 7343 7801 [27,] 8757 7641 7526 7352 7674 7950 [28,] 9190 8041 7843 7851 7940 8268 [29,] 9495 8409 8555 8400 8469 8934 [30,] 9876 9041 9015 9166 9106 9641 [31,] 10567 9952 9506 9634 9770 10212 [32,] 11417 10428 10402 10275 10455
11169 [33,] 11992 11306 11124 11095 11243 11749 [34,] 12665 12327 11760 12025 12137 12443 [35,] 13629 13135 13179 13037 12968 13724 [36,] 14560 14009 13927 13822 14105 14436 [37,] 15660 14990 15013 15009 15101 15700 [38,] 16749 16504 16148 16091 15912 16863 [39,] 17815 17760 17519 17144 17553 17943 [40,] 19366 19057 18918 18517 18760 19604 [41,] 20770 20458 20154 20339 20349 21238 [42,] 21962 22194 22020 21499 21690 22347 [43,] 23803 23922 23701 23681 23437 24227 [44,] 25685 26133 25559 25209 25287 26115 [45,] 27506 28110 27363 27042 27272 28228 [46,] 29366 29744 29555 29245 29678 30444 [47,] 31444 32193 31817 31504 31753 32302 [48,] 33452 34719 33529 33954 33441 34618 [49,] 36186 37150 36005 36064 36226 37138 [50,] 38401 39244 38813 38465 38506 39884 [51,] 40331 41830 41168 41110 40937 42014 [52,] 43181 44351 43975 43949 43579 44734 [53,] 45307 47134 46522 46149 46089 47286 [54,] 47996 49441 49139 48678 48629 49903 [55,] 50635 52424 51757 51433 51477 52550 [56,] 53509 55337 54556 54482 54406 55906 [57,] 55703 58482 58016 57400 57097 58758 [58,] 59016 61453 60652 61024 60557 62473 [59,] 62475 65651 64169 63824 63829 65592 [60,] 66621 69185 68885 68217 68752 69963 [61,] 69759 73144 72421 71784 71745 73414 [62,] 80346 84253 83044 83177 82416 83833 [63,] 86851 90059 89002 88985 89245 90334 [64,] 91839 95465 94602 93985 94154 96195 [65,] 98461 102846 101348 101328 101306 103170 [66,] 104569 108722 107768 107711 107729 109350 [67,] 111230 115477 114418 114743 113935 116356 [68,] 116999 122053 120727 120342 119782 122926 [69,] 123695 128339 127184 126822 126639 129037 [70,] 129956 136123 134555 135120 133842 137390 [71,] 137984 142964 141316 142855 141419 143620 [72,] 145132 150708 148407 149345 149448 151910 [73,] 152877 157993 155861 156349 155924 158725 [74,] 159109 164652 162722 163499 163157 165744 [75,] 165848 172121 170730 170482 170585 173431 [76,] 172457 179036 177185 177328 177392 180215 [77,] 179936 185015 183223 183932 183237 186663 [78,] 185900 191053 189986 189730 189639 193038 [79,] 191498 196694 194246 194810 195246 197812 [80,] 195505 201289 199684 199561 198968 203226 [81,] 199031 204927 202204 202622 202951 205792 [82,] 201589 207928 204929 204001 204396 208224 [83,] 201665 206743 205194 204676 205256 207980 [84,] 200965 205653 203422 202393 203422 206012 [85,] 197445 202692 199498 199730 200075 201728 [86,] 192324 195961 193589 194754 193800 196102 [87,] 183732 188063 185153 186104 186021 188176 [88,] 174258 177474 175822 176078 176761 177449 [89,] 163180 166706 162810 164367 164281 166436 [90,] 149169 151738 150148 150212 150535 152435 [91,] 134218 136866 134959 134922 135027 136381 [92,] 118936 121106 119591 119509 119793 120998 [93,] 102734 104955 102944 102865 103345 104776 [94,] 87418 88885 88023 86963 87546 87872 [95,] 72023 72698 72151 71579 71530 72287 [96,] 56985 58238 57478 57319 57163 57615 [97,] 44447 45058 44607 44469 43888 44868 [98,] 33457 34132 33022 33409 33454 33642 [99,] 24070 24317 24305 24089 24020 24383 [100,] 17165 17295 16755 17115 16957 17207 [101,] 11799 12125 11709 11816 11824 11719 [102,] 7714 7741 7959 7691 7648 7633 [103,] 5024 5012 4822 4792 4882 4916 [104,] 2987 3101 2978 3049 3093 2906 [105,] 1781 1894 1811 1756 1734 1834 So clearly, for young people, the number of deaths is rather small… And to visualize it, as above, we can use > P=D/apply(D,1,sum)*100 > range(P) [1] 12.34857 17.59386 > dP=trunc((P-min(P))/(max(P)+.01-min(P))*11) > library(RColorBrewer) > CLR=rev(brewer.pal(name="RdYlBu", 11)) > plot(0:1,0:1,ylim=c(55,110),xlim=c(-1,7)) > 
for(i in 1:106){ + for(j in 1:7){ + rect(j-1,108-i,j,107-i,col=CLR[dP[i,j]]) + }} > text(rep(-.5,106),107.5-1:106,0:105,cex=.4) As above, we observe a strong difference among weekdays for the date of death for young people (below 30) which disappear after (even if there is still a sunday effect) # Démographie québécoise Cet automne, j’étais allé voir Mummy, de Xavier Dolan. Et j’avoue avoir adoré le film ! J’ai tout aimé ! La musique, le cadrage, le rythme, les acteurs (même si j’avais du mal à croire Anne Dorval qui restera toujours pour moi la maman des Parent, surtout que j’ai passé mon temps à Montréal à croiser Daniel Brière – l’acteur, pas le joueur des Avalanches – qui habitait à côté de la maison, et qui faisait souvent le marché en même temps que moi… pour moi les deux étaient attachés à tout jamais…. et la mère de la série et celle du film de Xavier Dolan sont assez différentes). Le seul reproche que j’aurais pu faire, ce sont les sous-titres (qui nous étaient imposés, en France). Au Québec, on avait pris l’habitude de vivre sans sous-titres, et on a appris une langue en la parlant, au quotidien. Par exemple, j’ai appris ce que voulait dire câlisser sans avoir à ouvrir un dictionnaire, ou sans demander de traduction. De même que j’ai découvert qu’il existait des variantes, comme décâlisser (et j’ai même fini par comprendre le sens). Mais je n’avais jamais eu besoin de visualiser la traduction du mot. Voir des traductions de mots que j’avais fini par découvrir et comprendre m’a déstabilisé. Je reverrais d’autant mieux le film en DVD, sans les sous-titres qui m’étaient imposés au cinéma. C’est amusant car pendant les vacances, j’en ai profité pour finir Magasin Général de Régis Loisel et Jean-Louis Tripp. Et j’ai eu plaisir à retrouver des expressions québécoises tout au long de la lecture. Même si l’histoire se déroule à Notre Dame du Lac, situé sur le bas Saint Laurent (même si le nom ne semble plus exister depuis quelques années), on est loin des dialogues que l’on peut entendre en région, et on a une version agréable de ce qu’on pourrait entendre à Montréal… En fait, comme cela est indiqué (trop) discrètement, c’est Jimmy Beaulieu (dont j’avais déjà souligné le travail admirable dans un précédant billet) qui a fait la “traduction” des dialogues, pour avoir (comme le disent les premières pages du livres) des dialogues en québécois “qui soient compréhensibles des deux côtés de l’Atlantique”. Ce qui montre bien qu’un français de France peut comprendre le québécois (moyennant un peu de bonne volonté). Je crois que j’aurais bien aimé avoir des sous-titres pour Mummy dans la même langue que celle parlée dans Magasin Général. En lisant en particulier les deux derniers tomes de Magasin Général (oui, j’avais un peu de retard), et les histoires de natalité, je me suis souvenu du travail que l’on avait fait il y a un peu plus d’un an avec Julie, qui était venu faire un stage à l’UQAM, et avec qui on avait découvert la démographie du Lac Saint Jean (certes, par rapport à Magasin Général, on est de l’autre côté du Saint Laurent). les slides (de la soutenance de stage) sont en ligne, # Reinterpreting Lee-Carter Mortality Model Last week, while I was giving my crash course on R for insurance, we’ve been discussing possible extensions of Lee & Carter (1992) model. If we look at the seminal paper, the model is defined as follows # Mais que s’est-il passé pendant la Première Guerre Mondiale? La réponse courte est que des gens sont morts. Beaucoup. Cela étant dit, on ne dit pas grand chose. 
On peut comparer les pyramides des âges pour mieux comprendre ce qui a pu se passer. Juste avant la guerre (en 1913), la pyramide des âges ressemblait à ça (en utilisant les données de mortality.org) > EXPO <- read.table( > EM=EXPO$Male > EF=EXPO$Female > Y= EXPO$Year > A= EXPO$Age > I=which(A=="110+") > base=data.frame(Female=EF,Male=EM,Y=Y,Ages=A) > base=base[-I,] > France1913=base[base$Y==1913,] > France1919=base[base$Y==1919,] > France1913$Ages=as.numeric( + as.character(France1913$Ages)) > France1919$Ages=as.numeric( + as.character(France1919$Ages)) > France1913=France1913[,c("Male","Female", + "Ages")] > library(pyramid) > plot(c(0,100), c(0,100), type="n", + frame=FALSE, axes=FALSE, xlab="", ylab="", + main="Pyramide des Ages, France, 1913") > pyramidf(France1913, frame=c(10, 75, 0, 90), + Clab="", Lcol="skyblue", Rcol="pink", + Cstep=10, Laxis=0:4*60000, AxisFM="d") En revanche, juste après la guerre (en 1919), la pyramide des âges des âges ressemblait à celle là Continue reading Mais que s’est-il passé pendant la Première Guerre Mondiale? # Men set to live as long as women by 2030? A few months ago, in Men set to live as long as women, figures show, it was mentioned that (in the U.K.) the gap between male and female life expectancy is closing and men could catch up by 2030, according to an adviser for the Office for National Statistics. (the slides are available online http://cass.city.ac.uk/…). This week, I discovered a picture on http://waitbutwhy.com/, which represent a (so-called) typical human life, in weeks, I found that interesting. But the first problem is that I don’t understand the limit, below: 90 years, that’s not the average life length. That’s not what you should expect to live when you get born. The second problem is that it cannot be as static as it might seem, when you look at the picture. I mean, life expectancy at age 0 is not the same as life expectancy at age 30, or 50. So I did try to make an animated graph, using prospective life tables. Here a code to generate life tables, at different period, for a French population (I distinguish, here male and female) library(demography) france.fcast <- forecast(france.LC1,h=100) L2 <- lifetable(france.fcast) ex2=L2$ex L1=lifetable(fr.mort,series="female") ex1=L1$ex exF=cbind(ex1,ex2) france.fcast <- forecast(france.LC1,h=100) L2 <- lifetable(france.fcast) ex2=L2$ex L1=lifetable(fr.mort,series="male") ex1=L1$ex exM=cbind(ex1,ex2) Y=colnames(exF) Based on those lifetables, we can extract remaining life expectancy, at various ages (say, for instance 50, 51, 52, etc), for someone born on some given year (say 1950). 
Based on those expected remaining lifetimes, we can plot

picture=function(yearborn=1950,age=50){
  k=which(Y==yearborn)
  M=diag(exM[,k+0:100])
  F=diag(exF[,k+0:100])
  par(mfrow=c(1,2))
  va=0:(52*100-1)
  plot(va%%52,va%/%52,cex=.6,pch=15,col=c("light yellow","light blue","white")[1+
    (va>=age*52)*1+(va>(age+M[age+1])*52)*1],ylim=c(100,0),axes=FALSE,xlab="Week",
    ylab="Age",main=paste("Man, born on ",yearborn,", age ",age,sep=""))
  axis(1)
  axis(2)
  plot(va%%52,va%/%52,cex=.6,pch=15,col=c("light yellow","pink","white")[1+
    (va>=age*52)*1+(va>(age+F[age+1])*52)*1],ylim=c(100,0),axes=FALSE,xlab="Week",
    ylab="Age",main=paste("Woman, born on ",yearborn,", age ",age,sep=""))
  axis(1)
  axis(2)}

For instance, if we want the graph above, for someone age 30, born in 1980, we use

picture(1980,30)

Now, if we run a code to get an animated gif, we can get, for someone born in 1950, and for someone born in 2000. Now, if I could get historical datasets, with the average time spent in school, ages of retirement, etc, I guess I could add it on the graph. But that's another story…

# Smoothing mortality rates

This morning, I was working with Julie, a student of mine coming from Rennes, on mortality tables. Actually, we work on genealogical datasets from a small region in Québec, and we can observe a lot of volatility. If I borrow one of her graphs, we get something like this. Since we have some missing data, we wanted to use some Generalized Nonlinear Models. So let us see how to get a smooth estimator of the mortality surface. We will write some code that we can use on our data later on (the dataset we have has been obtained after signing a lot of official documents, and I guess I cannot upload it here, even partially).

DEATH <- read.table("http://freakonometrics.free.fr/Deces-France.txt", header=TRUE)
EXPO  <- read.table("http://freakonometrics.free.fr/Exposures-France.txt", header=TRUE)
library(gnm)
D=DEATH$Male
E=EXPO$Male
A=as.numeric(as.character(DEATH$Age))
Y=DEATH$Year
I=(A<100)
base=data.frame(D=D,E=E,Y=Y,A=A)
subbase=base[I,]
subbase=subbase[!is.na(subbase$A),]

The first idea can be to use a Poisson model, where the mortality rate is a smooth function of the age and the year, something like $D_{x,t}\sim\mathcal{P}(E_{x,t}\cdot \exp[s(x,t)])$ that can be estimated using

library(mgcv)
regbsp=gam(D~s(A,Y,bs="cr")+offset(log(E)),data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regbsp,newdata=data.frame(A=a,Y=y,E=1))
vX=trunc(seq(0,99,length=41))
vY=trunc(seq(1900,2005,length=41))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
      ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The mortality surface is here. It is also possible to extract the average value over the years, which is the interpretation of the $a_x$ coefficient in the Lee-Carter model,

predAx=function(a) mean(predict(regbsp,newdata=data.frame(A=a,
  Y=seq(min(subbase$Y),max(subbase$Y)),E=1)))
plot(seq(0,99),Vectorize(predAx)(seq(0,99)),col="red",lwd=3,type="l")

We have the following smoothed mortality rate.
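As a quick check (a sketch of my own, assuming the regbsp fit and the subbase data frame above), one can overlay the smoothed prediction on the crude log-rates for a single age:

# crude vs smoothed log mortality rates at age 60
idx <- subbase$A == 60
plot(subbase$Y[idx], log(subbase$D[idx]/subbase$E[idx]), pch=19, cex=.5, col="grey",
     xlab="Year", ylab="log mortality rate, age 60")
yrs <- sort(unique(subbase$Y[idx]))
lines(yrs, predmodel(60, yrs), col="red", lwd=2)   # predmodel returns the linear predictor, i.e. the log rate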
Recall that the Lee-Carter model is $D_{x,t}\sim\mathcal{P}(E_{x,t}\cdot \exp[a_x+b_x\cdot k_t])$, where parameter estimates can be obtained using

regnp=gnm(D~factor(A)+Mult(factor(A),factor(Y))+offset(log(E)),
          data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regnp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
      ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The (crude) mortality surface is as above, with the following $a_x$ coefficients.

plot(seq(1,99),coefficients(regnp)[2:100],col="red",lwd=3,type="l")

Here we have a lot of coefficients, and unfortunately, on a smaller dataset, we have much more variability. Can we smooth our Lee-Carter model? To get something which looks like $D_{x,t}\sim\mathcal{P}(E_{x,t}\cdot \exp[s_a(x)+s_b(x)\cdot s_k(t)])$. Actually, we can, and the code is rather simple

library(splines)
knotsA=c(20,40,60,80)
knotsY=c(1920,1945,1980,2000)
regsp=gnm(D~bs(subbase$A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3)+
  Mult(bs(subbase$A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3),
       bs(subbase$Y,knots=knotsY,Boundary.knots=range(subbase$Y),degre=3))+
  offset(log(E)),data=subbase, family=quasipoisson)
BpA=bs(seq(0,99),knots=knotsA,Boundary.knots=range(subbase$A),degre=3)
BpY=bs(seq(min(subbase$Y),max(subbase$Y)),knots=knotsY,Boundary.knots=range(subbase$Y),degre=3)
predmodel=function(a,y) predict(regsp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
      ylab="Years (1900-2005)",zlab="Mortality rate (log)")

The mortality surface is now smoother, and again, it is possible to extract the average mortality rate, as a function of the age, over the years,

BpA=bs(seq(0,99),knots=knotsA,Boundary.knots=range(subbase$A),degre=3)
Ax=BpA%*%coefficients(regsp)[2:8]
plot(seq(0,99),Ax,col="red",lwd=3,type="l")

We can then play with the smoothing parameters of the spline functions, and see the impact on the mortality surface

knotsA=seq(5,95,by=5)
knotsY=seq(1910,2000,by=10)
regsp=gnm(D~bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3)+
  Mult(bs(A,knots=knotsA,Boundary.knots=range(subbase$A),degre=3),
       bs(Y,knots=knotsY,Boundary.knots=range(subbase$Y),degre=3))
  +offset(log(E)),data=subbase,family=quasipoisson)
predmodel=function(a,y) predict(regsp,newdata=data.frame(A=a,Y=y,E=1))
vZ=outer(vX,vY,predmodel)
persp(vZ,theta=-30,col="green",shade=TRUE,xlab="Ages (0-100)",
      ylab="Years (1900-2005)",zlab="Mortality rate (log)")

We now have to use those functions on our small data sample! That should be fun….

# Job for life ? Bishop of Rome ?

The job of Bishop of Rome – i.e. the Pope – is considered to be a life-long commitment. I mean, it usually was. There have been 266 popes since 32 A.D. (according to http://oce.catholic.com/…): almost all popes have served until their death. But that does not mean that they were in the job for long… One can easily extract the data from the website,

> L2=scan("http://oce.catholic.com/index.php?title=List_of_Popes",what="character")
> index=which(L2=="</td><td>Reigned")
> X=L2[index+1]
> Y=strsplit(X,split="-")

But one should work a little bit because sometimes, there are inconsistencies, e.g. 911-913 and then 913-14, so we need some more lines.
Further, we can extract from this file the years popes started to reign, the year it ended, and the length, using those functions > diffyears=function(x){ + s=NA + if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))} + if(length(x)==1){s=1} + if(length(x)==2){s=diff(as.numeric(x))} + return(s)} > whichyearsbeg=function(x){ + s=NA + if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))} + if(length(x)==1){s=as.numeric(x)} + if(length(x)==2){s=as.numeric(x)[1]} + return(s)} > whichyearsend=function(x){ + s=NA + if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))} + if(length(x)==1){s=as.numeric(x)} + if(length(x)==2){s=as.numeric(x)[2]} + return(s)} On our file, we have > Years=unlist(lapply(Y,whichyearsbeg)) > YearsB=c(Years[1:91],752,Years[92:length(Years)]) > YearsB[187]=1276 > Years=unlist(lapply(Y,whichyearsend)) > YearsE=c(Years[1:91],752,Years[92:length(Years)]) > YearsE[187]=1276 > YearsE[266]=2013 > YearsE[122]=914 > W=unlist(lapply(Y,diffyears)) > W=c(W[1:91],1,W[92:length(W)]) > W[W==-899]=1 > which(is.na(W)) [1] 187 266 > W[187]=1 > W[266]=2013-2005 If we plot it, we have the following graph, > plot(YearsB,W,type="h") and if we look at the average length, we have the following graph, > n=200 > YEARS = seq(0,2000,length=n) > Z=rep(NA,n) > for(i in 2:(n-1)){ + index=which((YearsB>YEARS[i]-50)&(YearsE<YEARS[i]+50)) + Z[i] = mean(W[index])} > plot(YEARS,Z,type="l",ylim=c(0,30)) > n=50 > YEARS = seq(0,2000,length=n) > Z=rep(NA,n) > for(i in 2:(n-1)){ + index=which((YearsB>YEARS[i]-50)&(YearsE<YEARS[i]+50)) + Z[i] = mean(W[index])} > lines(YEARS,Z,type="l",col="grey") which does not reflect mortality improvements that have been observed over two millenniums. It might related to the fact that the average age at time of election has  increased over time (for instance, Benedict XVI was elected at 78 – one of the oldest to be elected). Actually, serving a bit more than 7 years is almost the median, > mean(W>=7.5) [1] 0.424812 (42% of the Popes did stay at least 7 years in charge) or we can look at the histogram, > hist(W,breaks=0:35) Unfortunately, I could not find more detailed database (including the years of birth for instance) to start a life-table of Popes. # Les députés sont-ils à l’image de la population Beaucoup de choses ont été écrites sur le fait que les députés ne sont pas vraiment le reflet de la population, que ce soit en terme de profession, de sexe, d’origine, d’age, etc. La liste pourrait être longue. Il y a plusieurs mois, j’avais commencé à regarder le profil des députés, par age. En effet, le site http://assemblee-nationale.fr/ permet d’accéder à des données sur tous les députés, depuis la Révolution. Y compris leur date de naissance. En croisant ces données avec des données de population, par exemple via http://www.mortality.org/, on peut comparer la répartition des ages des députés, avec la répartition des ages de la population. 
Pour les amateurs, le code pour récupérer les données (ou au moins les dates de naissance des députés) ressemble à N=2002 URL=paste("http://www.assemblee-nationale.fr/ regle_nom=est&Nom=&departement=& choixdate=intervalle&D%C3%A9butMin=01%2F01%2F", N,"&FinMin=31%2F12%2F",N,"&Dateau=&legislature=", s,"&choixordre=chrono&Rechercher= Lancer+la+recherche",sep="") HTML=scan(URL,what="character") k=which(HTML=="class=\"titre\">Né") vHTML=HTML[k:length(HTML)] vk=which(substr(vHTML,1,7)==">&nbsp;") liste=vHTML[vk] naissance=liste[seq(1,length(liste),by=2)] NAISSANCE=as.Date(substr(naissance,8,17), "%d/%m/%Y") Maintenant, pour être tout à fait honnête, je ne suis pas certain de ce qui est vraiment renvoyé, et j’ai des doutes que cela correspondent réellement à la requête faite. En effet, même si je demande à avoir la liste des députés après l’élection, j’ai trop de monde… mais peut-être est-ce du aux décès éventuels, et il est possible que l’ensemble des députés qui ont siégé pendant la mandature apparaissent dans le résultat de la requête. Sur la figure suivante on voit, sur plusieurs élections depuis plus de 100 ans, comment les deux distributions se déforment, avec en rouge la distribution de l’age des députés, et en bleu, la distribution de la population française, dans son ensemble (population de plus de 18 ans) Si on veut tout suivre sur un graphique, au lieu de se regarder une animation, on peut représenter les différents quantiles (10%, 25%, 75% et 90%, retenus sur la population de plus de 18 ans, et l’age médian, au centre), avec la population française l’année de l’élection, et l’ensemble des élus au parlement, Si on veut faciliter la comparaison, on peut se contenter de visualiser l’évolution des ages moyens, ou encore, du ratio (en % de différence) entre l’age moyen des députés, et celui de l’ensemble de la population. Sur ce graphique, on voit que depuis 30 ans, l’age moyen des députés croit plus vite que celui de la population: la population français vieilli, mais moins que ses députés… La gérontocratie perdure donc en France. En espérant que cela ne débouche pas sur le clash générationnel que l’on semble observer ces temps-ci au Québec…
{}
# Math Insight

### Applet: A parametrized helicoid

The function $\Phi(u,v) = (u\cos v, u\sin v, v)$ parametrizes a helicoid when $0 \le u \le 1$ and $0 \le v \le 2\pi$. You can drag the cyan and magenta points on the sliders to change the values of $u$ and $v$. Or, you can drag the blue point on the helix directly, which will then change $u$ and $v$ so that the blue point is at $\Phi(u,v)$. If you leave $u$ fixed and change only $v$, then the blue point traces out a helix with radius given by $u$. If you keep $v$ constant and change only $u$, the blue point traces out a straight line.
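As a small numerical aside (my own sketch, not part of the applet page): for this parametrization the surface-area element is $|\Phi_u\times\Phi_v| = \sqrt{1+u^2}$, so the area of the patch can be checked with a one-line quadrature in R:

# area of the helicoid patch 0 <= u <= 1, 0 <= v <= 2*pi
area <- 2*pi * integrate(function(u) sqrt(1 + u^2), 0, 1)$value
area   # about 7.21, i.e. pi*(sqrt(2) + asinh(1))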
{}
## Advanced Studies in Pure Mathematics ### Non-Collision and Collision Properties of Dyson's Model in Infinite Dimension and Other Stochastic Dynamics Whose Equilibrium States are Determinantal Random Point Fields #### Abstract Dyson's model on interacting Brownian particles is a stochastic dynamics consisting of an infinite amount of particles moving in $\mathbb{R}$ with a logarithmic pair interaction potential. For this model we will prove that each pair of particles never collide. The equilibrium state of this dynamics is a determinantal random point field with the sine kernel. We prove for stochastic dynamics given by Dirichlet forms with determinantal random point fields as equilibrium states the particles never collide if the kernel of determining random point fields are locally Lipschitz continuous, and give examples of collision when Hölder continuous. In addition we construct infinite volume dynamics (a kind of infinite dimensional diffusions) whose equilibrium states are determinantal random point fields. The last result is partial in the sense that we simply construct a diffusion associated with the maximal closable part of canonical pre Dirichlet forms for given determinantal random point fields as equilibrium states. To prove the closability of canonical pre Dirichlet forms for given determinantal random point fields is still an open problem. We prove these dynamics are the strong resolvent limit of finite volume dynamics. #### Article information Dates Revised: 31 March 2003 First available in Project Euclid: 1 January 2019 https://projecteuclid.org/ euclid.aspm/1546369043 Digital Object Identifier doi:10.2969/aspm/03910325 Mathematical Reviews number (MathSciNet) MR2073339 Zentralblatt MATH identifier 1061.60109 #### Citation Osada, Hirofumi. Non-Collision and Collision Properties of Dyson's Model in Infinite Dimension and Other Stochastic Dynamics Whose Equilibrium States are Determinantal Random Point Fields. Stochastic Analysis on Large Scale Interacting Systems, 325--343, Mathematical Society of Japan, Tokyo, Japan, 2004. doi:10.2969/aspm/03910325. https://projecteuclid.org/euclid.aspm/1546369043
{}
# $0$-th moment of product of Gaussian and sinc function I would like to calculate the following integrals: 1. $$\int_{-\infty}^{+\infty} \left(\frac{\sin(\pi a x)}{\pi ax}\right)^2 \exp(-bx^2)\,dx$$ 2. $$\int_{-\infty}^{+\infty} \left(\frac{\sin(\pi a x\pm\pi)}{\pi ax\pm\pi}\right)^2 \exp(-bx^2) \,dx$$ Thanks! - is there anything you want from us? – V-X May 18 '13 at 10:48 try to use the function Ei, if you want some help... – V-X May 18 '13 at 10:50 One can use the same idea as in your previous question, except that one has to integrate w.r.t. the parameter instead of differentiating. – L.G. May 18 '13 at 10:50 ok..thanks..I tried to solve the second case in the previous question and it turns out to be equal to the first case. Is that correct? Now I'll try to do the 0-th moment. – JFNJr May 18 '13 at 10:56 If you need only the answer, I can calculate it with help of Wolfram Mathematica. – Piotr Semenov May 18 '13 at 15:22 1. $\int_{-\infty}^{\infty} \left(\frac{\sin(\pi a x)}{\pi a x}\right)^2 e^{-b x^2} dx = \frac1{a^2 \pi^{3/2}} \left( -\sqrt{b} + \sqrt{b}\, e^{-\frac{a^2 \pi^2}{b}} + a \pi^{3/2} \operatorname{Erf}\left( \frac{\pi a}{\sqrt{b}} \right) \right)$, valid only if $\operatorname{Re}(b) > 0$ and $a \ge 0$, where $\operatorname{Erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt$. 2. Unfortunately, Mathematica fails to evaluate the integral $\int_{-\infty}^{\infty} \left(\frac{\sin(\pi a x)}{\pi a x + \pi}\right)^2 e^{-b x^2} dx$
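For the first integral, the quoted closed form can be cross-checked numerically (my own sketch, with arbitrarily chosen values a = 1 and b = 2; Erf is expressed through pnorm):

a <- 1; b <- 2
f <- function(x) ifelse(x == 0, 1, (sin(pi*a*x)/(pi*a*x))^2) * exp(-b*x^2)   # sinc^2 * Gaussian, with the x = 0 limit handled
erf <- function(x) 2*pnorm(x*sqrt(2)) - 1
num    <- integrate(f, -Inf, Inf)$value
closed <- (-sqrt(b) + sqrt(b)*exp(-a^2*pi^2/b) + a*pi^(3/2)*erf(pi*a/sqrt(b))) / (a^2*pi^(3/2))
c(numerical = num, closed_form = closed)   # the two values should agree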
{}
# Re: Problem with Icon overlays From: Andy Levy <andy.levy_at_gmail.com> Date: Thu, 16 Oct 2008 12:05:12 -0400 On Thu, Oct 16, 2008 at 11:46, Carrow, Amanda R <amanda.r.carrow_at_lmco.com> wrote: > I have my working copy on a network drive. I tried creating a working copy on my local machine and it works fine. > > The problem is: > ~I will be working on several large project and I need the overlays to work IMO, overlays are a convenience more than anything else. I can get my work done without them updating in realtime. I usually have an idea of what I've been working on and what I haven't been. > -I need to have my work on a backed up drive (my work pc is not backed up) to prevent loss of work. Commit early, commit often. > ~I also cannot commit my code until it is ready to test as it will cause interference problem with other people's code. Then you should be working on a private branch which, when you're ready to test, you merge back to trunk. This allows you to commit frequently without impacting anyone else's work. > In short, is there any way to back up working copies or make the overlays work on a network drive? To back up a WC, just copy it. > -----Original Message----- > From: Stefan Küng [mailto:tortoisesvn_at_gmail.com] > Sent: Wednesday, October 15, 2008 3:01 PM > To: users_at_tortoisesvn.tigris.org > Subject: Re: Problem with Icon overlays > > Carrow, Amanda R wrote: >> Overlays did not work properly. >> >> For example, I added a new folder to the project. The folder had no >> overlay to show it had been added. When I committed the change, the >> folder still had no overlay on it. >> >> When I tried deleting folders the overlay didn't show that they had been >> deleted from the working copy. >> >> And none of the parent folders show when a change is made inside. They >> still have the green check mark on the icon, even when the content has >> been changed. >> >> The only time that the overlays look the way they should is right after > > * did you reboot after upgrading? > * is your working copy on a network share? If yes, the overlays won't > update properly for some shares - only default windows shares seem to work > * check the registry entries under > HKLM\Software\Microsoft\Windows\CurrentVersion\explorer\ShellIconOverlayIdentifiers > only 9 entries from TortoiseSVN should be listed there > (xTortoise<Status>) - if you have more entries from TSVN, delete them > and do a fresh install > * You might have to hit F5 in explorer to refresh the overlays > > Stefan > > -- > ___ > oo // \\ "De Chelonian Mobile" > (_,\/ \_/ \ TortoiseSVN > \ \_/_\_/> The coolest Interface to (Sub)Version Control > /_/ \_\ http://tortoisesvn.net > > > --------------------------------------------------------------------- > To unsubscribe, e-mail: users-unsubscribe_at_tortoisesvn.tigris.org > For additional commands, e-mail: users-help_at_tortoisesvn.tigris.org > > --------------------------------------------------------------------- To unsubscribe, e-mail: users-unsubscribe_at_tortoisesvn.tigris.org
{}
## The slicing problems for sections of proportional dimensions

Series: School of Mathematics Colloquium. Thursday, April 14, 2016 - 11:05, 1 hour (actually 50 minutes). Location: Skiles 006. University of Missouri, Columbia.

We consider the following problem. Does there exist an absolute constant $C$ such that for every natural number $n$, every integer $1 \leq k \leq n$, every origin-symmetric convex body $L$ in $\mathbb{R}^n$, and every measure $\mu$ with non-negative even continuous density in $\mathbb{R}^n$,
$$\mu(L) \leq C^k \max_{H \in Gr_{n-k}} \mu(L \cap H)\, |L|^{k/n},$$
where $Gr_{n-k}$ is the Grassmannian of $(n-k)$-dimensional subspaces of $\mathbb{R}^n$, and $|L|$ stands for volume? This question is an extension, to arbitrary measures (in place of volume) and to sections of arbitrary codimension $k$, of the hyperplane conjecture of Bourgain, a major open problem in convex geometry. We show that the above inequality holds for arbitrary origin-symmetric convex bodies, all $k$ and all $\mu$ with $C \sim \sqrt{n}$, and with an absolute constant $C$ for some special classes of bodies, including unconditional bodies, unit balls of subspaces of $L_p$, and others. We also prove that for every $\lambda \in (0,1)$ there exists a constant $C = C(\lambda)$ so that the above inequality holds for every natural number $n$, every origin-symmetric convex body $L$ in $\mathbb{R}^n$, every measure $\mu$ with continuous density, and codimension of sections $k \geq \lambda n$. The latter result is new even in the case of volume. The proofs are based on a stability result for generalized intersection bodies and on estimates of the outer volume ratio distance from an arbitrary convex body to the classes of generalized intersection bodies.
{}
# NavList:

## A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding

Re: Logarithms by Hand
From: Frank Reed
Date: 2014 Jun 6, 16:09 -0700

There's one other angle for which you know the sine and cosine: any very small angle. The usual choices would be one second of arc or one minute of arc. To four significant figures, the sine of 1' is 1/3438. To six significant figures, the sine of 1" is 1/206265 (or to the same six sig figs, the sine of 1' is 60/206265). These magic numbers are easily calculated: 3438 = 60·180/pi and 206265 = 3600·180/pi. Meanwhile the cosines of such small angles are equal to 1 to six digits.

Now we can work angle addition formulas and step out from any of the values known from basic geometry. For example, if you want the sine of 45°23', you can use

$\sin\left(45^{\circ}23'\right)=\sin\left(45^{\circ}\right)\cdot\cos\left(23'\right)+\cos\left(45^{\circ}\right)\cdot\sin\left(23'\right)$

We know the sine and cosine of 45°, and we can replace cos(23') by 1 and sin(23') by 23/3438, and when you work it out, the result to four figures is 0.7118, which is correct. So we can easily fill in the gaps in a table this way.

No practical value to any of this in the real world or even in hypothetical modern world scenarios, but it's good, clean fun!

-FER
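As a quick check of the arithmetic in the post above, here is a short Python sketch (not part of the original message; the helper name `sin_arcmin` is ours) that applies the small-angle approximation and the angle-addition formula to reproduce the value 0.7118.

```python
import math

# Small-angle approximation from the post: sin(m') ~= m/3438, cos(m') ~= 1,
# where 3438 = 60*180/pi (minutes of arc per radian, to four significant figures).
def sin_arcmin(minutes):
    return minutes / 3438.0

sin45 = cos45 = math.sqrt(2) / 2                 # known exactly from basic geometry
approx = sin45 * 1 + cos45 * sin_arcmin(23)      # sin(45° 23') via angle addition
exact = math.sin(math.radians(45 + 23 / 60))

print(round(approx, 4), round(exact, 4))         # both print 0.7118
```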
{}
How this Generator Works

RRITESH KAKKAR (Joined Jun 29, 2010; 2,829 posts)
Hello, I have seen this generator many times outside buildings, banks, etc. As an electrical engineer I have no idea how this works. Do you know how to use it and what components are in it?

R!f@@ (Joined Apr 2, 2009; 9,751 posts)
It's a backup generator: a diesel engine driving a single- or 3-phase generator.

RRITESH KAKKAR (Joined Jun 29, 2010; 2,829 posts)
There are many control panels. What sort of engine is this?

spinnaker (Joined Oct 29, 2009; 7,835 posts)
I am curious how, if you are an electrical engineer, you do not understand the basics of power generation. Where did you go to school? This is very basic stuff taught in many basic science classes in grade school.

ian field (Joined Oct 27, 2012; 6,539 posts)
> I am curious how, if you are an electrical engineer, you do not understand the basics of power generation. Where did you go to school? This is very basic stuff taught in many basic science classes in grade school.

If it's so easy, you could point the TS to a link for inside diagrams. I can't be bothered doing that - but neither can I be bothered ragging on the TS.

Papabravo (Joined Feb 24, 2006; 16,961 posts)
Is there anything else you might be wondering about?

RRITESH KAKKAR (Joined Jun 29, 2010; 2,829 posts)
I am thinking of selling and servicing generators; the profit margin is also large.

R!f@@ (Joined Apr 2, 2009; 9,751 posts)
Great... now OP wants to sell and service something OP has no idea about. What's next?

Papabravo (Joined Feb 24, 2006; 16,961 posts)
What he forgets is that there are huge capital expenses involved in such an enterprise. Does he have several million in his back pocket? Can he afford to wait 90-180 days for payment?

GopherT (Joined Nov 23, 2012; 8,012 posts)
I always love the guys that look at a steel chassis and think steel is $0.70/kg and this thing weighs 500 pounds so it will cost $350. Good luck.
{}
# Chapter 10 - Section 10.2 - Graphs of Linear Equations and Slope - Exercises - Page 450: 11f

$\frac{-b}{a}$

#### Work Step by Step

The two points are (a, 0) and (0, b), so $x_1=a$, $y_1=0$, $x_2=0$, $y_2=b$.

Using the slope formula $m = \frac{y_2-y_1}{x_2-x_1}$:

$m= \frac{b-0}{0-a} = \frac{-b}{a}$
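A quick symbolic check of the same computation (an illustrative sketch, not part of the textbook solution) using sympy:

```python
from sympy import symbols

a, b = symbols('a b')

# Slope between the intercepts (a, 0) and (0, b): m = (y2 - y1) / (x2 - x1)
m = (b - 0) / (0 - a)
print(m)  # -b/a
```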
{}
# An Inequality with Absolute Values

Let $a,b,c\in (-1,1).$ Both proofs below establish the inequality

$\displaystyle \frac{|b|+|c|}{1-a^2}+\frac{|c|+|a|}{1-b^2}+\frac{|a|+|b|}{1-c^2}\ge \frac{2|a|}{1-bc}+\frac{2|b|}{1-ca}+\frac{2|c|}{1-ab}.$

### Proof 1

Since $a\in (-1,1),\;$ $a^2\lt 1,\;$ $1-a^2\gt 0,\;$ $\displaystyle \frac{1}{1-a^2}\gt 0.\;$ Similarly, $\displaystyle \frac{1}{1-b^2}\gt 0.\;$ By the AM-GM inequality,

(1)

$\displaystyle\frac{1}{1-a^2}+\frac{1}{1-b^2}\ge 2\sqrt{\frac{1}{1-a^2}\cdot\frac{1}{1-b^2}}.$

However, since $a^2+b^2\ge 2ab,$

\displaystyle\begin{align} (1-ab)^2&=1-2ab+a^2b^2\\ &\ge 1-(a^2+b^2)+a^2b^2\\ &=(1-a^2)(1-b^2), \end{align}

so that $\displaystyle\frac{1}{(1-a^2)(1-b^2)}\ge\frac{1}{(1-ab)^2}.\;$ This, together with (1), yields

$\displaystyle \frac{1}{1-a^2}+\frac{1}{1-b^2}\ge 2\sqrt{\frac{1}{(1-ab)^2}}=\frac{2}{1-ab}.$

So too,

$\displaystyle \frac{|c|}{1-a^2}+\frac{|c|}{1-b^2}\ge \frac{2|c|}{1-ab}.$

Similarly,

$\displaystyle \frac{|a|}{1-b^2}+\frac{|a|}{1-c^2}\ge \frac{2|a|}{1-bc}\\ \displaystyle \frac{|b|}{1-c^2}+\frac{|b|}{1-a^2}\ge \frac{2|b|}{1-ca}.$

Adding the three gives the required inequality. The equality is achieved for $a=b=c.$

### Proof 2

Using Bergström's inequality and, subsequently, the obvious $b^2+c^2\ge 2bc,$

\displaystyle\begin{align} \frac{1}{1-b^2}+\frac{1}{1-c^2}&\ge\frac{(1+1)^2}{2-b^2-c^2}\\ &\ge\frac{4}{2-2bc}\\ &=\frac{2}{1-bc}, \end{align}

so that

$\displaystyle \frac{|a|}{1-b^2}+\frac{|a|}{1-c^2}\ge\frac{2|a|}{1-bc}.$

Similarly,

$\displaystyle \frac{|b|}{1-c^2}+\frac{|b|}{1-a^2}\ge \frac{2|b|}{1-ca},\\ \displaystyle \frac{|c|}{1-a^2}+\frac{|c|}{1-b^2}\ge \frac{2|c|}{1-ab}.$

Adding the three gives the required inequality. The equality is achieved for $a=b=c.$

### Acknowledgment

Dan Sitaru has kindly posted the above problem (from his book Math Accent), with a solution (Proof 1), at the CutTheKnotMath facebook page. He later added another solution (Proof 2) by Kevin Soto Palacios.
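As a numerical sanity check of the inequality proved above (an illustrative sketch, not part of the original page; the helper names are ours), one can test random triples a, b, c in (-1, 1):

```python
import random

# Left- and right-hand sides of the inequality for a, b, c in (-1, 1).
def lhs(a, b, c):
    return ((abs(b) + abs(c)) / (1 - a**2)
            + (abs(c) + abs(a)) / (1 - b**2)
            + (abs(a) + abs(b)) / (1 - c**2))

def rhs(a, b, c):
    return 2 * (abs(a) / (1 - b*c) + abs(b) / (1 - c*a) + abs(c) / (1 - a*b))

random.seed(1)
for _ in range(100_000):
    a, b, c = (random.uniform(-0.999, 0.999) for _ in range(3))
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-9  # allow tiny floating-point slack

print("no counterexample found")
```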
{}
# Department of Mathematics

Seminar Calendar for events the day of Friday, September 6, 2013.

More information on this calendar program is available. Questions regarding events or the calendar should be directed to Tori Corkery.

Friday, September 6, 2013

4:00 pm in 241 Altgeld Hall, Friday, September 6, 2013

#### $\mathbb{CP}^{\infty}$

###### Peter Nelson (UIUC Math)

Abstract: We'll take a tour through some basic (but important!) concepts in algebraic topology, guided by one space in particular, $\mathbb{CP}^{\infty}.$

4:00 pm in 345 Altgeld Hall, Friday, September 6, 2013

#### Multiple recurrence in quasirandom groups

###### Slawomir Solecki (UIUC Math)

Abstract: We are reading the paper 'Multiple recurrence in quasirandom groups' by Bergelson and Tao (http://arxiv.org/abs/1211.6372).
{}
Preferred stock returns

Bruner Aeronautics has perpetual preferred stock outstanding with a par value of $100. The stock pays a quarterly dividend of $2 and its current price is $80.

a. What is the stock's nominal annual rate of return?
b. What is its effective annual rate of return?

## Answers (3)

• a. The preferred stock pays $8 annually in dividends. Therefore, its nominal rate of return would be:

Nominal rate of return = $8/$80 = 10%.

Or alternatively, you could determine the security's periodic return and multiply by 4.

Periodic rate of return = $2/$80 = 2.5%.
Nominal rate of return = 2.5% × 4 = 10%.

b. EAR = (1 + r_NOM/4)^4 – 1 = (1 + 0.10/4)^4 – 1 = 0.103813 = 10.3813%.

• Solution:

a. Nominal rate of return = $8/$80 = 10%.

Or alternatively, you could determine the security's periodic return and multiply by 4.

Periodic rate of return = $2/$80 = 2.5%.
Nominal rate of return = 2.5% × 4 = 10%.

b. EAR = (1 + r_NOM/4)^4 – 1 = (1 + 0.10/4)^4 – 1 = 0.103813 = 10.3813%.

• The preferred stock pays $8 annually in dividends. Therefore, its nominal rate of return would be:

Nominal rate of return = $8/$80 = 10%.

Alternatively, you could determine the security's periodic return and multiply by 4.

Periodic rate of return = $2/$80 = 2.5%.
Nominal rate of return = 2.5% × 4 = 10%.

b. EAR = (1 + r_NOM/4)^4 – 1 = (1 + 0.10/4)^4 – 1 = 0.103813 = 10.3813%.
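The same arithmetic, written as a small Python sketch (the variable names are illustrative, not from the original posting):

```python
# Perpetual preferred stock: $2 quarterly dividend, $80 current price.
quarterly_dividend = 2.0
price = 80.0

periodic_return = quarterly_dividend / price   # 0.025    -> 2.5% per quarter
nominal_annual = 4 * periodic_return           # 0.10     -> 10% nominal annual rate
ear = (1 + nominal_annual / 4) ** 4 - 1        # 0.103813 -> 10.3813% effective annual rate

print(f"nominal: {nominal_annual:.2%}, EAR: {ear:.4%}")
```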
{}
# Geostat2016 Albacete: a write-up

Last week I went to GEOSTAT 2016. Given the amount of fun had at GEOSTAT 2015, expectations were high. The local organisers did not disappoint, with a week of lectures, workshops, spatial data competitions and of course lots of Geostatistics. It would be unwise to try to systematically document such a diverse range of activities, and the GEOSTAT website provides much further info. Instead this 'miniwriteup' is designed to summarise some of my memories from the event, and encourage you to get involved for GEOSTAT 2017.

To put things in context, the first session was a brief overview of the history of GEOSTAT. This is the 12th GEOSTAT summer school. In some ways GEOSTAT can be seen as a physical manifestation of the lively R-SIG-GEO email list. That may not sound very exciting. But there is a strong community spirit at the event and, unlike other academic conferences, the focus is on practical learning rather than transmitting research findings or theories. And the event was so much more than that. There were 5 action-packed days covering many topics within the broad field of Geostatistics. What follows is an overview of each session that I went to (there were 2 streams), with links to the source material. It is hoped that this will be of use to people who were not present in person.

## Day 1

After an introduction to the course and spatial data by Tom Hengl, Roger Bivand delivered a technical and applied webinar on bridges between R and other GIS software. With a focus on GRASS, we learned how R could be used as a 'front end' to other programs. An example using the famous 'Cholera pump' data mapped by John Snow was used to demonstrate the potential benefits of 'bridging' to other software. The data can be downloaded and partially plotted in R as follows:

u = "http://geostat-course.org/system/files/data_0.zip"
download.file(u, "data_0.zip")
unzip("data_0.zip")
old = setwd("~/repos/geostat2016-rl/")
library(raster)
## Loading required package: sp
bbo = shapefile("data/bbo.shp")
buildings = shapefile("data/buildings.shp")
deaths = shapefile("data/deaths.shp")
b_pump = shapefile("data/b_pump.shp")
nb_pump = shapefile("data/nb_pump.shp")
plot(buildings)
setwd(old)

In the afternoon Robert Hijmans gave a high-level overview of software for spatial data analysis, with a discussion of the Diva GIS software he developed and why he now uses R for most of his geospatial analysis. The talk touched on the gdistance package, and many others. Robert showcased the power of R for understanding major civilisational problems such as the impacts of climate change on agriculture. His animated global maps of agricultural productivity and precipitation showed how R can scale to tackle large datasets, up to the global level involving spatial and temporal data simultaneously.

There were a few political asides. Robert mentioned how agrotech giant Monsanto paid almost $1 billion for a weather prediction company. He detoured deftly through a discussion of 'big data', making the observation that often ensembles of models can provide better predictions than any single model working on its own, with political analogies about the importance of democracy. More examples included health and estimates of dietary deficiencies at high levels of geographic resolution. A paper showing fish and fruit consumption across Rwanda illustrated how map making in R, used intelligently, can save lives.

It was revealing to learn how Robert got into R while he was working at the International Rice Research Institute:
“It forces you to write scripts.”

This is good for ensuring reproducibility, a critical component of scientific research. It encourages you to focus on and understand the data primarily, rather than visualising it. On the other hand, R is not always the fastest way to do things, although “people often worry too much about this”. Your time is more important than your computer's, so setting an analysis running is fine. Plus there are ways to make things run faster, as mentioned in a book that I'm working on, Efficient R Programming. R is great if you use it every day, but if you only use it less than once a week it becomes difficult. If you just need a one-off spatial analysis program, Robert recommended QGIS.

After a brief overview of spatial data in R, Robert moved on to talk about the raster package, which he developed. This package was developed to overcome some of the limitations with sp, the foundational package for spatial data in R. A final resource that Robert promoted was RSpatial.org, a free online resource for teaching R as a command line GIS.

Edzer Pebesma delivered the final session of the first day, on Free and Open Source Software (FOSS) for Geoinformatics and Geosciences. After the highly technical final C++ examples from the previous talk, I was expecting a high-level overview of the landscape. Instead Edzer went straight in to talk about source code, the raw material that defines all software. The fundamental feature of open source software is that its source code is free, and will remain free.

## Day 2

The second day of the course was divided in two: stream A focussed on environmental modelling and stream B on compositional data. I attended the environmental modelling course taught by Robert Hijmans. The course was based on his teaching material at rspatial.org and can be found online.

We started off by looking at the fundamental data structures underlying spatial data in R. Why? It's useful to be able to create simple example datasets from scratch, to understand them.

library(sp)
x <- c(4,7,3,8)
y <- c(9,6,12,11)
xy <- data.frame(x, y)
SpatialPoints(xy)
## class : SpatialPoints
## features : 4
## extent : 3, 8, 6, 12 (xmin, xmax, ymin, ymax)
## coord. ref. : NA
d = data.frame(v1 = 1:4, v2 = LETTERS[1:4])
spd = SpatialPointsDataFrame(coords = xy, data = d)
plot(spd)

The basic functions of the raster package are similar.

library(raster)
r = raster(nc = 10, nr = 10)
values(r) = 1:ncell(r)
plot(r)
as.matrix(r)
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 1 2 3 4 5 6 7 8 9 10
## [2,] 11 12 13 14 15 16 17 18 19 20
## [3,] 21 22 23 24 25 26 27 28 29 30
## [4,] 31 32 33 34 35 36 37 38 39 40
## [5,] 41 42 43 44 45 46 47 48 49 50
## [6,] 51 52 53 54 55 56 57 58 59 60
## [7,] 61 62 63 64 65 66 67 68 69 70
## [8,] 71 72 73 74 75 76 77 78 79 80
## [9,] 81 82 83 84 85 86 87 88 89 90
## [10,] 91 92 93 94 95 96 97 98 99 100
q = sqrt(r)
plot(q)
x = q + r
s = stack(r, q, x)
ss = s * r # r is recycled so each layer is multiplied by r
1:3 * 2 # here 2 is recycled
## [1] 2 4 6

Raster also provides simple yet powerful functions for manipulating and analysing raster data, including crop(), merge() for manipulation and predict(), focal() and distance(). predict() is particularly interesting as it allows raster values to be estimated using any of R's powerful statistical methods.
library(dismo)
g = gmap("Albacete, Spain", scale = T, lonlat = T)
## Loading required namespace: XML
plot(g, interpolate = T)
dismo::geocode("Universidad Castilla la Mancha")
## originalPlace
## 1 Universidad Castilla la Mancha
## interpretedPlace longitude
## 1 Paseo Universidad, 13005 Ciudad Real, Cdad. Real, Spain -3.921711
## latitude xmin xmax ymin ymax uncertainty
## 1 38.99035 -3.922007 -3.919309 38.98919 38.99189 131

## Day 3

The third day started with a live R demo by Edzer Pebesma on space-time data. Refreshingly for a conference primarily on spatial data, it started with an in-depth discussion of time. While base R natively supports temporal units (knowing the difference between days and seconds, for example), it does not know the difference between metres and miles. This led to the creation of the units library, a taster of which is shown below:

install.packages("units")
library(units)
m = with(ud_units, m)
s = with(ud_units, s)
km = with(ud_units, km)
h = with(ud_units, h)
x = 1:3 * m/s

The rest of the day was spent analysing a range of spatio-temporal datasets using spacetime, trajectories and rgl for interactive 3D spacetime plots. In the parallel session there were sessions on CARTO and the R gvSIG bridge.

## Day 4

Day 4 was a highlight for me as I've wanted to learn how to use the INLA package for ages. It was explained lucidly by Marta Blangiardo and Michela Cameletti, who have written an excellent book on the subject, which has a website that I recommend checking out. Their materials can be found here: http://geostat-course.org/node/1330. In parallel to this there was a session on spatial and spatiotemporal point process analysis in R by Virgilio Gomez Rubio and one on automated spatial prediction and visualisation by Tom Hengl.

## Day 5

After all that intense geospatial analysis and programming activity, and a night out in Albacete for some participants, we were relieved to learn that this final day of learning was more relaxed. Furthermore, by tradition, it was largely participant-led. I gave a talk on Efficient R Programming, a book I've written in collaboration with Colin Gillespie; Teresa Rojos gave a fascinating talk about her research into the spatial distribution of cancer rates in Peru; and S.J. Norder gave us the low-down on the biogeography of islands with R.

One of the most exciting sessions was the revelation of the results of the spatial prediction game. Interestingly, a team using a relatively simple approach with randomForestSRC (and ggRandomForests for visualisation) won against others who had spent hours training complex multi-level models.

## Summary

Overall it was an amazing event and inspiring to spend time with so many researchers using open geospatial software for tackling pressing real world issues. Furthermore, it was great fun. I strongly recommend that people dipping their toes into the sea of spatial capabilities provided by R check out the GEOSTAT website, not least for the excellent video resources to be found there. I look forward to hearing plans for future GEOSTATs and recommend the event, and associated materials, to researchers interested in using free geospatial software for the greater good.
{}
# Venturi Effect

When a fluid flows through a constricted section of a pipe, the pressure decreases. This can be explained by invoking the equation of continuity and Bernoulli's theorem. The Bernoulli equation describes the relationship between velocity and pressure:

\begin{align} p_{1}-p_{2}={\frac {\rho }{2}}\left(v_{2}^{2}-v_{1}^{2}\right) \end{align}

where $p$ is pressure, $\rho$ is the fluid density and $v$ is velocity. This equation shows that, where the velocity increases, the pressure drops.

## Venturi meter

A Venturi meter is a device used to measure the flow rate of fluid in a pipe. The flow rate is the volume rate of fluid flow, i.e., the volume of fluid that flows in unit time. A Venturi meter works on the principle of differential pressure: a constricted section in the pipe creates a drop in pressure, which depends on the velocity of the fluid flow. This drop in pressure is measured by pressure taps and used to calculate the flow rate.

The continuity equation gives the flow rate $Q$,

\begin{align} Q&=v_{1}A_{1}=v_{2}A_{2} \end{align}

and Bernoulli's theorem gives

\begin{align} p_{1}-p_{2}&={\frac {\rho }{2}}\left(v_{2}^{2}-v_{1}^{2}\right) \end{align}

Eliminating $v_1$ and $v_2$ gives the flow rate

\begin{align} Q&=A_{1}\sqrt{\frac{2}{\rho}\cdot\frac{p_{1}-p_{2}}{\left(\frac{A_{1}}{A_{2}}\right)^{2}-1}} \\ &=A_{2}\sqrt{\frac{2}{\rho}\cdot\frac{p_{1}-p_{2}}{1-\left(\frac{A_{2}}{A_{1}}\right)^{2}}} \end{align}

## Problems from IIT JEE

Problem (JEE Mains 2022): A liquid of density 750 kg/m³ flows smoothly through a horizontal pipe that tapers in cross-sectional area from $A_1=1.2\times{10}^{-2}$ m² to $A_2=A_1/2$. The pressure difference between the wide and narrow sections of the pipe is 4500 Pa. The rate of flow of liquid is _________ $\times10^{-3}\;\mathrm{m^3/s}$.
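A worked numerical check of the JEE problem above (a sketch using the second form of the flow-rate equation; the excerpt does not state the answer, so the printed value should be read as our computation):

```python
import math

rho = 750.0     # liquid density, kg/m^3
A1 = 1.2e-2     # cross-sectional area of the wide section, m^2
A2 = A1 / 2     # cross-sectional area of the narrow section, m^2
dp = 4500.0     # pressure difference p1 - p2, Pa

# Q = A2 * sqrt( (2/rho) * dp / (1 - (A2/A1)^2) )
Q = A2 * math.sqrt((2 / rho) * dp / (1 - (A2 / A1) ** 2))
print(Q)        # 0.024, i.e. 24 x 10^-3 m^3/s
```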
{}