## Wednesday, November 12, 2014

### Unit Digit, Tens Digit and Digit Sum

Word problems on unit digits, tens digits, and digit sums.

#1: How many digits are there in the positive integers 1 to 99 inclusive?

Solution I: From 1 to 9, there are 9 digits. From 10 to 99, there are 99 - 10 + 1 = 90 two-digit numbers, which contribute 90 x 2 = 180 digits. In total: 9 + 180 = 189 digits.

Solution II: There are 9 one-digit numbers (from 1 to 9). There are 9 * 10 = 90 two-digit numbers (you can't use "0" as the tens digit, but you can use "0" as the unit digit). 90 * 2 + 9 = 189.

#2: A book has 145 pages. How many digits are there if you start counting from page 1?

There are 189 digits from page 1 to 99. (See #1, Solution I.) From 100 to 145, there are 145 - 100 + 1 = 46 three-digit numbers. 189 + 46 * 3 = 327 digits.

#3: A book has N pages, numbered the usual way, from 1 to N. The total number of digits in the page numbers is 930. How many pages does the book have? (Similar to a Google interview question; read it and others from the Wall Street Journal.)

Solution I: 930 - 189 (digits used by the first 99 pages) = 741. 741 / 3 = 247 three-digit page numbers. Be careful: the three-digit numbers start at 100, so if the book has N pages, N - 100 + 1 = 247 and N = 346 pages.

Solution II: 930 - 189 (total digits needed for the first 99 pages) = 741. 741 / 3 = 247 (how far the three-digit page numbers go). 247 + 99 = 346 pages.

#4: If you write consecutive numbers starting with 1, what is the 50th digit you write?

Solution I: 50 - 9 = 41, 9 being the digits used by the first 9 one-digit numbers. From 10 on, every number takes 2 digits. 41 / 2 = 20.5, which means you write 20 complete two-digit numbers plus the first digit of the next one. 10 to 29 are the first 20 two-digit numbers, so the next digit, 3, is the answer (the first digit of the two-digit number 30).

Solution II: (50 - 9) / 2 = 20.5; 20.5 + 9 = 29.5, so you finish writing 29 and then the first digit of the next two-digit number, which is 3, the answer.

#5: What is the sum if you add up all the digits from 1 to 100 inclusive?

00  10  20  30  40  50  60  70  80  90
01  11  21  31  41  51  61  71  81  91
02  12  22  32  42  52  62  72  82  92
03  13  23  33  43  53  63  73  83  93
04  14  24  34  44  54  64  74  84  94
05  15  25  35  45  55  65  75  85  95
06  16  26  36  46  56  66  76  86  96
07  17  27  37  47  57  67  77  87  97
08  18  28  38  48  58  68  78  88  98
09  19  29  39  49  59  69  79  89  99

Solution I: Do you see the pattern? Looking at the unit digits from 00 to 99, there are 10 sets of (1 + 2 + 3 + ... + 9), which gives a sum of 10 * 45 = 450. How about the tens digits? There are another 10 sets of (1 + 2 + 3 + ... + 9), so another 450. Add them up and you have 450 * 2 = 900 as the digit sum from 1 to 99 inclusive. 900 + 1 (for the "1" in the extra number 100) = 901.

Solution II: If you add the digits in each column, you get an arithmetic sequence: 45 + 55 + 65 + ... + 135. To find the sum, use average * number of terms:

$$\dfrac {45+135} {2} \times \left( \dfrac {135-45} {10}+1\right) = 900$$

900 + 1 = 901.

Solution III: 2 * 45 * 10 + 1 = 901.

Practice problems:

#1: A book has 213 pages. How many digits are there?

#2: A book has 1012 pages. How many digits are there?

#3: If you write down all the digits starting with 1 and in the end there are a: 100, b: 501, and c: 1196 digits, what is the last digit you write down in each case?

#4: What is the sum of all the digits counting from 1 to 123?
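These counting arguments are easy to sanity-check by brute force. Below is a minimal Python sketch (mine, not part of the original post) that verifies the answers above:

```python
# Brute-force checks for the digit-counting problems above.

def digit_count(n):
    """Total number of digits written when counting 1..n."""
    return sum(len(str(i)) for i in range(1, n + 1))

def digit_sum(n):
    """Sum of all digits appearing in 1..n."""
    return sum(int(d) for i in range(1, n + 1) for d in str(i))

assert digit_count(99) == 189    # problem 1
assert digit_count(145) == 327   # problem 2
assert digit_count(346) == 930   # problem 3
stream = "".join(str(i) for i in range(1, 100))
assert stream[49] == "3"         # problem 4: the 50th digit written is '3'
assert digit_sum(100) == 901     # problem 5
print("all checks pass")
```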
# W2

Baseline assumptions:

| Item | Amount |
| --- | --- |
| Total cost | $100,000 |
| Total volume | 1,000 visits |
| Average cost | $100 per visit |
| Desired net income | $5,000 |

Payer volumes:

| Payer | Payment rate | Volume |
| --- | --- | --- |
| Medicare | $95 | 400 |
| Medicaid | $75 | 100 |
| Managed care #1 | $110 | 300 |
| Managed care #2 | 80% of charges | 100 |
| Uninsured | 10% of charges | 100 |
| Total all payers | | 1,000 |

1) Medicare and Medicaid presently account for 50% of the volume. The hospital wishes to reduce its dependence on government payers. Assume that Medicare volume is reduced to 380 patients and Medicaid volume is reduced to 90 patients. The volume from managed-care plan #1 rises from 300 to 320 patients, and the volume from managed-care plan #2 increases to 110 patients. Thus, total volume is unchanged at 1,000 visits. What is the new price necessary, assuming all other factors are unchanged?

2) Start with the original assumptions. The hospital is facing pressure from public-interest groups to control the prices it charges to the uninsured. Assume that the hospital is able, through various efficiencies, to cut its per-visit cost by 5%. It also negotiates a 7% increase with managed-care plan #1. Assuming all other factors are unchanged, what is the new required price?

3) Start with the original assumptions. Notice that managed-care plan #1 receives a much lower price in return for sending a larger volume of patients. Managed-care plan #2 (MC#2) wants to pay a lower cost per case and is willing to send 250 more patients (350 total from MC#2) to the clinic in return for a rate of $110 per case. Assume that the average cost per case drops to $90 due to economies of scale. All other assumptions are unchanged. What is the new required price?

4) Start with the assumptions in problem 3. But now assume that the additional volume does not enable enough economies of scale to reduce the average cost per case as much as originally anticipated. Assume now that the average cost per case drops only to $95. What is the new required price? Compare the answer to problem 3 to this answer. What does this tell you about the sensitivity of the price to the assumption about the average cost per case?

5) Compare the answer to problem 3 with the answer to problem 4. What does this tell you about the sensitivity of the price to the assumption about the average cost per case? If you were the clinic manager, what would you do before agreeing to the renegotiated contract with managed-care plan #2? Discuss both.
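For reference, the required price in each scenario can be computed with a short script. The sketch below is my own illustration (the function `required_price` is not part of the assignment); it assumes the clinic sets a single charge p, fixed-rate payers pay their contracted rates regardless of p, and charge-based payers pay their stated percentage of p:

```python
def required_price(total_cost, target_income, fixed_payers, pct_payers):
    """Solve for charge p such that total collections = total_cost + target_income.

    fixed_payers: list of (rate, volume) pairs paying a fixed rate per visit.
    pct_payers:   list of (fraction_of_charges, volume) pairs paying fraction * p.
    """
    fixed_revenue = sum(rate * vol for rate, vol in fixed_payers)
    # Revenue from charge-based payers is linear in p: p * sum(frac * vol)
    p_coefficient = sum(frac * vol for frac, vol in pct_payers)
    return (total_cost + target_income - fixed_revenue) / p_coefficient

# Baseline scenario from the table above:
p = required_price(
    total_cost=100_000,
    target_income=5_000,
    fixed_payers=[(95, 400), (75, 100), (110, 300)],
    pct_payers=[(0.80, 100), (0.10, 100)],
)
print(f"required price: ${p:.2f}")  # about $294.44
```

The same function answers scenarios 1 through 4 by adjusting the volumes, rates, and total cost accordingly.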
# A magnetic field of 100 G (1 G = 10−4 T) is required which is uniform in a region of linear dimension about 10 cm and area of cross-section about 10−3 m² - Physics

Numerical. A magnetic field of 100 G (1 G = 10−4 T) is required which is uniform in a region of linear dimension about 10 cm and area of cross-section about 10−3 m². The maximum current-carrying capacity of a given coil of wire is 15 A, and the number of turns per unit length that can be wound round a core is at most 1000 turns m−1. Suggest some appropriate design particulars of a solenoid for the required purpose. Assume the core is not ferromagnetic.

#### Solution

Magnetic field strength, B = 100 G = 100 × 10−4 T
Maximum number of turns per unit length, n = 1000 turns m−1
Maximum current in the coil, I = 15 A
Permeability of free space, μ0 = 4π × 10−7 T m A−1

The magnetic field inside a solenoid is given by the relation B = μ0nI, so

nI = B/μ0 = (100 × 10−4)/(4π × 10−7) ≈ 7957.75 ≈ 8000 A/m

One suitable design: length 50 cm, radius 4 cm, 400 turns, and a current of 10 A, which gives n = 800 turns m−1 and nI = 8000 A/m, within the stated limits. These values are not unique for the given purpose; there is always a possibility of some adjustment within the limits.

Concept: Solenoid and the Toroid - the Solenoid

#### APPEARS IN

NCERT Class 12 Physics Textbook, Chapter 4 Moving Charges and Magnetism, Q 15, Page 170
NCERT Physics Part 1 and 2 Class 12, Chapter 4 Moving Charges and Magnetism, Exercise Q 4.15, Page 170
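As a quick check of the arithmetic, here is a minimal sketch (my own illustration; the 50 cm / 400-turn / 10 A values are the design suggested in the solution above):

```python
import math

B = 100e-4                 # required field in tesla (100 G)
mu0 = 4 * math.pi * 1e-7   # permeability of free space, T m/A

# B = mu0 * n * I  =>  n * I = B / mu0
nI = B / mu0
print(f"required n*I = {nI:.2f} A/m")  # ~7957.75, round up to ~8000

# One workable design within the stated limits (I <= 15 A, n <= 1000 turns/m):
length, turns, current = 0.5, 400, 10.0  # 50 cm, 400 turns, 10 A
n = turns / length                       # 800 turns per metre
print(f"design gives n*I = {n * current:.0f} A/m")  # 8000
```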
# BEATLES – If I Needed Someone – Backing Track MIDI FILE

$1.99

## DEMO: Beatles If I Needed Someone

SKU: BT4UMF911

## Description

- Artist: Beatles
- Title: If I Needed Someone
- Backing Track: MIDI File

## LYRICS:

If I needed someone to love
You’re the one that I’d be thinking of
If I needed someone

If I had some more time to spend
Then I guess I’d be with you my friend
If I needed someone

Had you come some other day
Then it might not have been like this
But you see now I’m too much in love

Carve your number on my wall
And maybe you will get a call from me
If I needed someone
Ah, ah, ah, ah

If I had some more time to spend
Then I guess I’d be with you my friend
If I needed someone

Had you come some other day
Then it might not have been like this
But you see now I’m too much in love

Carve your number on my wall
And maybe you will get a call from me
If I needed someone
Ah, ah
# qml.quantum_monte_carlo

quantum_monte_carlo(fn, wires, target_wire, estimation_wires) [source]

Provides the circuit to perform the quantum Monte Carlo estimation algorithm.

The input fn should be the quantum circuit corresponding to the $$\mathcal{F}$$ unitary of Rebentrost et al. (2018), referenced below. This unitary encodes the probability distribution and random variable onto wires so that measurement of the target_wire provides the expectation value to be estimated. The quantum Monte Carlo algorithm then estimates the expectation value using quantum phase estimation (check out QuantumPhaseEstimation for more details), using the estimation_wires.

Note: A complementary approach for quantum Monte Carlo is available with the QuantumMonteCarlo template. The quantum_monte_carlo transform is intended for use when you already have the circuit for performing $$\mathcal{F}$$ set up, and is compatible with resource estimation and potential hardware implementation. The QuantumMonteCarlo template is only compatible with simulators, but may perform faster and is suited to quick prototyping.

Parameters

- fn (Callable) – a quantum function that applies quantum operations according to the $$\mathcal{F}$$ unitary used as part of quantum Monte Carlo estimation
- wires (Union[Wires, Sequence[int]]) – the wires acted upon by the fn circuit
- target_wire (Union[Wires, int]) – The wire in which the expectation value is encoded. Must be contained within wires.
- estimation_wires (Union[Wires, Sequence[int], or int]) – the wires used for phase estimation

Returns: The circuit for quantum Monte Carlo estimation

Return type: function

Raises: ValueError – if wires and estimation_wires share a common wire

Consider an input quantum circuit fn that performs the unitary

$$\mathcal{F} = \mathcal{R} \mathcal{A}.$$

Here, the unitary $$\mathcal{A}$$ prepares a probability distribution $$p(i)$$ of dimension $$M = 2^{m}$$ over $$m \geq 1$$ qubits:

$$\mathcal{A}|0\rangle^{\otimes m} = \sum_{i \in X} \sqrt{p(i)} |i\rangle,$$

where $$X = \{0, 1, \ldots, M - 1\}$$ and $$|i\rangle$$ is the basis state corresponding to $$i$$. The $$\mathcal{R}$$ unitary imprints the result of a function $$f: X \rightarrow [0, 1]$$ onto an ancilla qubit:

$$\mathcal{R}|i\rangle |0\rangle = |i\rangle \left(\sqrt{1 - f(i)} |0\rangle + \sqrt{f(i)}|1\rangle\right).$$

Following this paper, the probability of measuring the state $$|1\rangle$$ in the final qubit is

$$\mu = \sum_{i \in X} p(i) f(i).$$

However, it is possible to measure $$\mu$$ more efficiently using quantum Monte Carlo estimation. This function transforms an input quantum circuit fn that performs the unitary $$\mathcal{F}$$ to a larger circuit for measuring $$\mu$$ using the quantum Monte Carlo algorithm.

The algorithm proceeds as follows:

1. The probability distribution $$p(i)$$ is encoded using a unitary $$\mathcal{A}$$ applied to the first $$m$$ qubits specified by wires.
2. The function $$f(i)$$ is encoded onto the target_wire using a unitary $$\mathcal{R}$$.
3. The unitary $$\mathcal{Q}$$ is defined with eigenvalues $$e^{\pm 2 \pi i \theta}$$ such that the phase $$\theta$$ encodes the expectation value through the equation $$\mu = (1 + \cos (\pi \theta)) / 2$$. The circuit in steps 1 and 2 prepares an equal superposition over the two states corresponding to the eigenvalues $$e^{\pm 2 \pi i \theta}$$.
4. The circuit returned by this function is applied so that $$\pm\theta$$ can be estimated by finding the probabilities of the $$n$$ estimation wires. This in turn allows for the estimation of $$\mu$$.

Visit Rebentrost et al. (2018) for further details. In this algorithm, the number of applications $$N$$ of the $$\mathcal{Q}$$ unitary scales as $$2^{n}$$. However, due to the use of quantum phase estimation, the error $$\epsilon$$ scales as $$\mathcal{O}(2^{-n})$$. Hence,

$$N = \mathcal{O}\left(\frac{1}{\epsilon}\right).$$

This scaling can be compared to standard Monte Carlo estimation, where $$N$$ samples are generated from the probability distribution and the average over $$f$$ is taken. In that case,

$$N = \mathcal{O}\left(\frac{1}{\epsilon^{2}}\right).$$

Hence, the quantum Monte Carlo algorithm has a quadratically improved time complexity in $$N$$.

Example

Consider a standard normal distribution $$p(x)$$ and a function $$f(x) = \sin^{2}(x)$$. The expectation value of $$f(x)$$ is $$\int_{-\infty}^{\infty} f(x) p(x) \, dx \approx 0.432332$$. This number can be approximated by discretizing the problem and using the quantum Monte Carlo algorithm.

First, the problem is discretized:

```python
import numpy as np
from scipy.stats import norm

m = 5
M = 2 ** m

xmax = np.pi  # bound to region [-pi, pi]
xs = np.linspace(-xmax, xmax, M)

probs = np.array([norm().pdf(x) for x in xs])
probs /= np.sum(probs)

func = lambda i: np.sin(xs[i]) ** 2
r_rotations = np.array([2 * np.arcsin(np.sqrt(func(i))) for i in range(M)])
```

The quantum_monte_carlo transform can then be used:

```python
import pennylane as qml
from pennylane.templates.state_preparations.mottonen import (
    _uniform_rotation_dagger as r_unitary,
)

n = 6
N = 2 ** n

a_wires = range(m)
wires = range(m + 1)
target_wire = m
estimation_wires = range(m + 1, n + m + 1)

dev = qml.device("default.qubit", wires=(n + m + 1))

def fn():
    qml.templates.MottonenStatePreparation(np.sqrt(probs), wires=a_wires)
    r_unitary(qml.RY, r_rotations, control_wires=a_wires[::-1], target_wire=target_wire)

@qml.qnode(dev)
def qmc():
    qml.quantum_monte_carlo(fn, wires, target_wire, estimation_wires)()
    return qml.probs(estimation_wires)

phase_estimated = np.argmax(qmc()[:int(N / 2)]) / N
```

The estimated value can be retrieved using the formula $$\mu = (1-\cos(\pi \theta))/2$$:

>>> (1 - np.cos(np.pi * phase_estimated)) / 2
0.42663476277231915

It is also possible to explore the resources required to perform the quantum Monte Carlo algorithm:

>>> qtape = qmc.qtape.expand(depth=1)
>>> qml.specs(qmc)()
{'gate_sizes': defaultdict(int, {1: 15943, 2: 15812, 7: 126, 6: 1}),
'gate_types': defaultdict(int, {'RY': 15433, 'CNOT': 15686,
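For comparison, the classical Monte Carlo baseline discussed above takes only a few lines. This sketch is my own illustration, not part of the PennyLane documentation; it samples from the standard normal and averages f(x) = sin²(x), converging to the same target value at the slower O(1/ε²) rate:

```python
import numpy as np

# Classical Monte Carlo estimate of mu = E[sin^2(X)], X ~ N(0, 1).
rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)
print(np.mean(np.sin(samples) ** 2))  # ~0.4323, matching the target above
```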
# Playground Sand - Pricing

The Cost of Play Sand calculator computes the price of a volume (V) of play sand.

| Variable | Instructions | Datatype |
| --- | --- | --- |
| (V) Volume of playground sand | Enter the volume of play sand. | Decimal (ft³) |
| (uP) Price per one cubic foot of play sand | Enter the unit price of play sand (see below). | Decimal (USD) |

INSTRUCTIONS: Choose your preferred units and enter the following:

- (V) The volume of play sand (e.g. 4 cubic feet)
- (uP) The unit price of a cubic foot of play sand (see Unit Pricing below). This is typically the price of two bags of play sand.

Total Cost of Play Sand: The calculator performs all of the unit conversions and returns the total cost in U.S. dollars. However, this can be automatically converted to other currency units via the pull-down menu.

### Unit Pricing

One should always use local pricing when available. Typically a bag of play sand is 1/2 a cubic foot, so twice the bag price will give you the local unit price for a cubic foot of play sand. However, for convenience, vCalc performs a periodic survey of commodities such as play sand. The most recent play sand survey was as follows:

- Store: HomeDepot
- Date: 1/13/17
- Unit Price per Cubic Foot: $8.60 USD/ft³

See the Sandbox Calculator for more tools for computing play sand.
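The underlying formula is simply Cost = V × uP, plus the bag-to-cubic-foot conversion described above. A minimal sketch (my own illustration; the $8.60/ft³ default is the survey price quoted above):

```python
def play_sand_cost(volume_ft3, unit_price_per_ft3=8.60):
    """Total cost in USD: volume (ft^3) times unit price (USD/ft^3)."""
    return volume_ft3 * unit_price_per_ft3

# A typical bag holds 1/2 ft^3, so the per-cubic-foot price is twice the bag price.
bags = 8
volume = bags * 0.5  # ft^3
print(f"{volume} ft^3 of play sand costs ${play_sand_cost(volume):.2f}")
```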
# Problem with end letter in Arabic words

I use the commands below to typeset an Arabic sentence:

```latex
\documentclass[12pt,a4paper]{article}
\usepackage[cp1256]{inputenc}
\usepackage[arabic,english]{babel}
\usepackage[LAE,LFE]{fontenc}
\Vocalize
\usepackage{color}
\usepackage[left=4cm,right=2cm,top=2cm,bottom=2cm]{geometry}

\begin{document}
\selectlanguage{arabic}
\centering
\color{blue}{يَرْفَعِ اللَّهُ الَّذِينَ آَمَنُوا مِنْكُمْ وَالَّذِينَ أُوتُوا الْعِلْمَ دَرَجَاتٍ }

\vspace{2.5cm}

يَرْفَع الله الذين امنوا منكم و الذين اوتوا العلم درجات

\end{document}
```

The problem is that when I use vocalization marks (fatha, damma, and kasra), the last letter is rendered in its medial form instead of its final form. I use both Texmaker and TeXstudio. Any help with that?

- Use an * between the last character and the mark. – touhami Jul 28 '16 at 20:26
- @touhami didn't work. – Muhammad Abdulrasool Jul 28 '16 at 20:31
- Sorry, but it works for me. – touhami Jul 28 '16 at 20:45
- Have you heard about xelatex? – touhami Jul 28 '16 at 20:46
- @touhami yes, but I want to use pdflatex, because my thesis is written entirely in English except for the page with the Quran verse. – Muhammad Abdulrasool Jul 28 '16 at 20:49
Both generative adversarial network models and variational autoencoders have been widely used to approximate probability distributions of datasets. Although they both use parametrized distributions to approximate the underlying data distribution, whose exact inference is intractable, their behaviors are very different. In this report, we summarize our experiment results that compare these two categories of models in terms of fidelity and mode collapse. We provide a hypothesis to explain their different behaviors and propose a new model based on this hypothesis. We further tested our proposed model on the MNIST and CelebA datasets.

### Related Content

Despite the success of Generative Adversarial Networks (GANs), mode collapse remains a serious issue during GAN training. To date, little work has focused on understanding and quantifying which modes have been dropped by a model. In this work, we visualize mode collapse at both the distribution level and the instance level. First, we deploy a semantic segmentation network to compare the distribution of segmented objects in the generated images with the target distribution in the training set. Differences in statistics reveal object classes that are omitted by a GAN. Second, given the identified omitted object classes, we visualize the GAN's omissions directly. In particular, we compare specific differences between individual photos and their approximate inversions by a GAN. To this end, we relax the problem of inversion and solve the tractable problem of inverting a GAN layer instead of the entire generator. Finally, we use this framework to analyze several recent GANs trained on multiple datasets and identify their typical failure cases.

The attention model has become an important concept in neural networks and has been researched within diverse application domains. This survey provides a structured and comprehensive overview of the developments in modeling attention. In particular, we propose a taxonomy which groups existing techniques into coherent categories. We review the different neural architectures in which attention has been incorporated, and also show how attention improves interpretability of neural models. Finally, we discuss some applications in which modeling attention has a significant impact. We hope this survey will provide a succinct introduction to attention models and guide practitioners while developing approaches for their applications.

Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, they have not been well visualized or understood. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts using a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. We examine the contextual relationship between these units and their surroundings by inserting the discovered object concepts into new images.
We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in a scene. We provide open source interpretation tools to help researchers and practitioners better understand their GAN models.

Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have constantly been reported as weak baselines, where poor performance is attributed to exposure bias; at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial based approaches for NLG, on the account that GANs do not suffer from exposure bias. In this work, we make several surprising observations that contradict common beliefs. We first revisit the canonical evaluation framework for NLG, and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality / diversity tradeoff given by this parameter to evaluate models over the whole quality-diversity spectrum, and find MLE models constantly outperform the proposed GAN variants, over the whole quality-diversity space. Our results have several implications: 1) The impact of exposure bias on sample quality is less severe than previously thought, 2) temperature tuning provides a better quality / diversity trade off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive.

Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.

We present new intuitions and theoretical assessments of the emergence of disentangled representation in variational autoencoders. Taking a rate-distortion theory perspective, we show the circumstances under which representations aligned with the underlying generative factors of variation of data emerge when optimising the modified ELBO bound in $\beta$-VAE, as training progresses. From these insights, we propose a modification to the training regime of $\beta$-VAE, that progressively increases the information capacity of the latent code during training.
This modification facilitates the robust learning of disentangled representations in $\beta$-VAE, without the previous trade-off in reconstruction accuracy.

We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GAN). Firstly, we propose a new generator objective that is better suited to tackling mode collapse, and we apply an independent autoencoder (AE) to constrain the generator, treating its reconstructed samples as "real" samples to slow down the convergence of the discriminator, which reduces the gradient vanishing problem and stabilizes the model. Secondly, from mappings between latent and data spaces provided by the AE, we further regularize the AE by the relative distance between the latent and data samples to explicitly prevent the generator from falling into mode collapse. This idea arose when we found a new way to visualize mode collapse on the MNIST dataset. To the best of our knowledge, our method is the first to propose and successfully apply the relative distance of latent and data samples for stabilizing GANs. Thirdly, our proposed model, namely Generative Adversarial Autoencoder Networks (GAAN), is stable and suffers from neither gradient vanishing nor mode collapse, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method can approximate multi-modal distributions well and achieve better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here: https://github.com/tntrung/gaan

Generative adversarial networks (GANs) are powerful tools for learning generative models. In practice, the training may suffer from lack of convergence. GANs are commonly viewed as a two-player zero-sum game between two neural networks. Here, we leverage this game theoretic view to study the convergence behavior of the training process. Inspired by the fictitious play learning process, a novel training method, referred to as Fictitious GAN, is introduced. Fictitious GAN trains the deep neural networks using a mixture of historical models. Specifically, the discriminator (resp. generator) is updated according to the best-response to the mixture outputs from a sequence of previously trained generators (resp. discriminators). It is shown that Fictitious GAN can effectively resolve some convergence issues that cannot be resolved by the standard training approach. It is proved that asymptotically the average of the generator outputs has the same distribution as the data samples.

We investigate deep generative models that can exchange multiple modalities bi-directionally, e.g., generating images from corresponding texts and vice versa. A major approach to achieve this objective is to train a model that integrates all the information of different modalities into a joint representation and then to generate one modality from the corresponding other modality via this joint representation. We simply applied this approach to variational autoencoders (VAEs), which we call a joint multimodal variational autoencoder (JMVAE). However, we found that when this model attempts to generate a large dimensional modality missing at the input, the joint representation collapses and this modality cannot be generated successfully. Furthermore, we confirmed that this difficulty cannot be resolved even using a known solution. Therefore, in this study, we propose two models to prevent this difficulty: JMVAE-kl and JMVAE-h.
Results of our experiments demonstrate that these methods can prevent the difficulty above and that they generate modalities bi-directionally with equal or higher likelihood than conventional VAE methods, which generate in only one direction. Moreover, we confirm that these methods can obtain the joint representation appropriately, so that they can generate various variations of modality by moving over the joint representation or changing the value of another modality.
# I Rotating bullet vs non rotating bullet

1. Apr 2, 2017

### darkdave3000

I'm writing a program to simulate bullet impact on various materials. I need to know: will the angular momentum of a spinning bullet give it more impact force? I would assume that if so, it would also mean that spinning bullets overcome more air friction, since I would then add the angular momentum to the linear forward momentum when calculating total forward momentum.

I have to calculate two things:

1. Penetration depth in the impact material
2. How much air friction will slow down the bullet

Both require assumptions about whether or not angular momentum adds to linear momentum when calculating the total forward momentum felt by the air or the impact material.

2. Apr 2, 2017

### phyzguy

No, the angular momentum of the bullet does not add to the linear momentum. The reason bullets are given a spin is to keep them from tumbling. If they tumble, the air resistance is significantly increased.

3. Apr 2, 2017

### darkdave3000

So the spin adds ZERO EXTRA PENETRATING POWER?

4. Apr 2, 2017

### Eclair_de_XII

Is it because the torque on the bullet acts perpendicular to its linear motion?

5. Apr 2, 2017

### Staff: Mentor

No, it is because linear and angular momentum just don't add. They have different units.

6. Apr 2, 2017

### phyzguy

To answer this, you'd have to tell me how you plan to calculate the penetration depth. If the bullet is tumbling, the penetration depth will be different from what it will be if it is not tumbling, because the cross-sectional area of the bullet will be different. However, the difference is not due to adding the angular momentum to the linear momentum. Can you outline how you plan to calculate the penetration depth?

7. Apr 2, 2017

### darkdave3000

I'm going to use the resources below: impact depth by Isaac Newton, and material strength. I will also simulate gravity, so the bullet may not always strike the simulated wall-like material head on; it might hit at a deflection angle. I will calculate whether the force of the bullet is enough to shatter the target; if not, it will just deform it. Either way, the hole will have the same depth based on Isaac Newton's theory; the only difference is that the 3D picture will look like the surface was broken (shattered) if the material strength was exceeded, versus merely deformed. The bullet might still be in the hole if there was only deformation; with shattering, there is a chance the bullet penetrated the material completely and exited, depending on thickness.

https://en.wikipedia.org/wiki/Impact_depth
https://en.wikipedia.org/wiki/Ultimate_tensile_strength
https://en.wikipedia.org/wiki/Specific_strength
https://en.wikipedia.org/wiki/Strength_of_materials

Last edited: Apr 2, 2017

8. Apr 2, 2017

### phyzguy

OK, Newton's approximation is a reasonable method, although I suspect it will only be accurate to a factor of two or so. So you see that if the bullet impacts "head-on", as in the Wikipedia picture, the penetration depth will be different than if it impacts "sideways" because it is tumbling, since the cross-sectional area A is different in the two cases. But again, this difference is not due to adding the angular momentum to the linear momentum.

9. Apr 2, 2017

### darkdave3000

What do you mean by a factor of 2? I can calculate the sideways force component of the ballistic trajectory, so only the relevant force will be used to calculate the depth; the other component of the impact velocity can be used to calculate the deflection path, if any energy is left after impact.

10.
Apr 2, 2017

### rumborak

Since there is a certain amount of energy contained in the rotation, and the bullet will come to a complete (non-rotational) stop, that energy will be imparted to the body as well. Whether that energy results in further penetration is, however, a different question. It may just as well end up shredding more tissue on the way. But, as discussed above, it's an academic question for the most part. Other than Civil War era rifles that shoot lead pellets, you won't find a gun that doesn't impart rotation to the bullet through the rifling of the barrel.

11. Apr 2, 2017

### phyzguy

I just mean that the assumption that momentum is only transferred directly ahead of the bullet, with no interaction between the cylinder directly ahead of the bullet and the material to the sides of this cylinder, sounds like a gross approximation to me. For example, this method says that the penetration depth is independent of the speed of the bullet. Does this sound reasonable to you? Do measured results agree with this?

12. Apr 2, 2017

### darkdave3000

I don't like this Newton theory. It doesn't seem realistic enough for a commercial real-time simulator. What if I used the drag formula, with the drag coefficient of the bullet and the target's material density in place of the fluid density? I could then deduce how far the bullet can penetrate from the distance traveled before its velocity becomes zero, work out the total heat produced by the friction force, and use the heat as a variable to determine whether the material melted during the penetration. The only thing I'll still have trouble with is the deflection of the bullet if it hits the target at an angle. Any ideas how I can solve this bit?

13. Apr 2, 2017

### phyzguy

I think these things are really complicated, and you are best off building some sort of semi-empirical model. Here's a page where they try to measure and model the penetration depth of bullets.

14. Apr 3, 2017

### zwierz

Everybody knows that it is much easier to pull out a nail if you pull and rotate it simultaneously than if you only pull it.

15. Apr 3, 2017

### darkdave3000

OK, I have a strategy; please critique. I will use the drag coefficient formula, substituting the target's material density for the air density. Also, if the force of the bullet is below the tensile strength of the material, the bullet bounces off it according to deflection angles. I'm guessing my "substituting" needs a little more work than just that?

16. Apr 3, 2017

### A.T.

I guess that's because it's easier to break the initial static friction using mechanical advantage (tool grip radius vs. nail radius). But how is that relevant to a bullet?

17. Apr 3, 2017

### zwierz

Not only that directly. Spending angular momentum also helps the bullet move inside a bulletproof vest, for example. The mechanism is the same: the properties of Coulomb friction.

18. Apr 3, 2017

### jbriggs444

As I would word this, the force of kinetic (or static, for that matter) friction has a fixed [maximum] magnitude. If you change the direction of that friction, e.g. by imparting a spin, then the component of friction that is aligned with the bullet's trajectory is reduced. This is similar in principle to doing doughnuts in a car on icy roadways -- if you rev the engine and run the tires at a high rate of speed, the forward traction of the tires does not increase, but their resistance to slow lateral motion becomes almost non-existent. In pulling a nail, e.g. with a pair of pliers, you will find yourself twisting the nail first one way, then the other, but always pulling.
The motion of the nail against the wood will be mostly in the twisting direction, because that's the direction in which the most force is being exerted. But that little bit of pulling tension means that the net motion has a shallow spiral component -- easing a tiny bit out of the hole. Worry the nail back and forth long enough and it'll eventually come free.

If the force of friction were directly proportional to velocity rather than roughly constant, then no advantage could be obtained in this fashion: the force of friction in any one direction would be independent of motion in any other direction. If the force of friction were more than directly proportional to velocity (e.g. quadratic rather than linear), then a disadvantage could apply: a spinning bullet could experience more resistance to motion than a non-spinning bullet.

19. Apr 3, 2017

### phyzguy

Note that nobody in this thread has said that the spin of the bullet doesn't affect the penetration depth. It seems reasonable to assume that it might, although I'd like to see some measurements. However, any impact of the spin on the penetration depth is not caused by adding the angular momentum of the bullet to its linear momentum, as was the original question.

20. Apr 3, 2017

### A.T.

Makes sense, thanks.

21. Apr 3, 2017

### darkdave3000

I'm glad I got the community talking about this topic, and I'll continue to monitor replies. I will now post another thread regarding a real-time formula to calculate the penetration of a bullet. Thanks for contributing, everyone. The topic of the spin of the bullet contributing to a deeper hole is quite interesting; I'm sure the spin will somehow cause the bullet to act like a drill bit, especially if it has a helical shape as a result of the rifling.

22. Apr 4, 2017

### zwierz

The effect with a rotating bullet can be illustrated by the following analogy. Let a horizontal conveyor belt move with velocity $v$. There is a fixed rail above the conveyor belt. A matchbox of mass $m$ lies on the belt and rests against the rail. The friction coefficient between the belt and the matchbox is $\gamma$. The rail is smooth. Then somebody pushes the matchbox along the rail with initial velocity $u$. What distance does the matchbox travel until it stops? This distance is bigger than $u^2/(2\gamma g)$.

Last edited: Apr 4, 2017

23. Apr 5, 2017

### Drakkith, Staff Emeritus

I have my doubts. A bullet does not spin very quickly relative to its forward velocity. For each bullet-length it travels, it may spin perhaps a quarter turn or so. Just look at the video posted in your other thread, located here: https://www.physicsforums.com/threa...culate-bullet-hole-depth.910145/#post-5732931

24. Apr 5, 2017

### A.T.

And even if it spun much faster, most of the resistance probably comes from normal forces at the tip, from pushing the material out of the way, rather than from friction. So reducing the friction component along the path won't have that much effect.

25. Apr 6, 2017

### zwierz

This is the reason why I speak about the motion of the bullet through a bulletproof vest; in that case it is reasonable to assume the friction obeys the Coulomb law.
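For reference, Newton's impact-depth approximation discussed earlier in the thread reduces to a one-liner: the penetration depth is roughly the projectile length scaled by the density ratio, independent of impact speed. A minimal sketch (my own illustration, with assumed example densities):

```python
def newton_impact_depth(projectile_length, projectile_density, target_density):
    """Newton's approximation: depth ~ L * (rho_projectile / rho_target).

    Notably independent of impact speed, which is one reason it is only a
    rough, first-order estimate (see the discussion above).
    """
    return projectile_length * projectile_density / target_density

# Example: a 3 cm lead bullet (~11,340 kg/m^3) into pine (~500 kg/m^3)
depth = newton_impact_depth(0.03, 11_340, 500)
print(f"estimated penetration depth: {depth:.2f} m")  # ~0.68 m
```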
# Distribution of cycle lengths in a graph

Given a random directed graph G:

$$G=(V,E), \qquad \lvert V \rvert = n, \quad \lvert E \rvert = k,$$

where every vertex v satisfies either

$$d_{\text{incoming}}(v) = 1 \text{ and } d_{\text{outgoing}}(v) = 1$$

(that is, every vertex with an incoming edge also has an outgoing edge, and vice versa), or

$$d(v) = 0.$$

What is the distribution of the lengths of the longest cycles over this set of random graphs? This question relates to the riddle presented in the latest MinutePhysics video (for the general case).

- Nice question. If it doesn't get an answer here after a few days, you should maybe flag it for migration to Mathematics. But please don't just repost it there, as doing so will fragment answers and confuse people. – David Richerby Dec 9 '14 at 11:15
- What have you tried and where did you get stuck? What are your thoughts and motivations for this question? – Raphael Dec 9 '14 at 13:22
- @DavidRicherby "Nice" as in interesting, but certainly not SE-nice, isn't it? Do you see how to flesh it out? – Raphael Dec 9 '14 at 13:23
- @Raphael Does it need fleshing out? A brief survey of existing results in the field would be a reasonable answer. For example, Pósa has shown that, for a large enough constant $c$, a random graph with $n$ vertices and $cn\log n$ edges is asymptotically almost surely Hamiltonian ("Hamilton circuits in random graphs", Discrete Mathematics, 14:359-364, 1976). – David Richerby Dec 9 '14 at 14:38
- @DavidRicherby Can you provide an answer then? I'm not sure whether the question asks for an algorithm (text) or a "static" result (tags). – Raphael Dec 9 '14 at 15:03

When $k = n$ and self-loops are allowed, what you have is a random permutation. The expected length of the longest cycle in a permutation is known to be $\alpha n$ for $\alpha \approx 0.624$ (the Golomb–Dickman constant); see Shepp and Lloyd. If self-loops are not allowed, then you will get a different constant $\beta$ that can probably be computed using the methods of Shepp and Lloyd. When $k < n$, you just get a permutation on $k$ vertices, so instead of $\alpha n$ or $\beta n$ you would get $\alpha k$ or $\beta k$.
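The k = n case is easy to explore empirically. Below is a minimal Monte Carlo sketch (not part of the original answer) that samples uniform random permutations and estimates the expected longest cycle length:

```python
import random

def longest_cycle_length(perm):
    """Length of the longest cycle in a permutation given as a list of indices."""
    n = len(perm)
    seen = [False] * n
    longest = 0
    for start in range(n):
        if seen[start]:
            continue
        length, v = 0, start
        while not seen[v]:      # walk the cycle containing `start`
            seen[v] = True
            v = perm[v]
            length += 1
        longest = max(longest, length)
    return longest

n, trials = 1000, 2000
total = 0
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)        # uniform random permutation (self-loops allowed)
    total += longest_cycle_length(perm)

print(total / trials / n)       # approaches ~0.624, the Golomb-Dickman constant
```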
# Visual Computing

## Publications

#### Wave Curves: Simulating Lagrangian water waves on dynamically deforming surfaces

ACM Transactions on Graphics (SIGGRAPH 2020)

• Skřivan • Söderström • Johansson • Sprenger • Museth • Wojtan

We propose a method to enhance the visual detail of a water surface simulation. Our method works as a post-processing step which takes a simulation as input and increases its apparent resolution by simulating many detailed Lagrangian water waves on top of it. We extend linear water wave theory to work in non-planar domains which deform over time, and we discretize the theory using Lagrangian wave packets attached to spline curves. The method is numerically stable and trivially parallelizable, and it produces high frequency ripples with dispersive wave-like behaviors customized to the underlying fluid simulation.

@article{skrivan2020wavecurves,
  author = {Skřivan, Tomáš and Söderström, Andreas and Johansson, John and Sprenger, Christoph and Museth, Ken and Wojtan, Chris},
  title = {Wave Curves: Simulating Lagrangian water waves on dynamically deforming surfaces},
  journal = {ACM Transactions on Graphics (TOG)},
  number = {4},
  volume = {39},
  year = {2020},
  publisher = {ACM}
}

#### Homogenized Yarn-Level Cloth

ACM Transactions on Graphics (SIGGRAPH 2020)

• Sperl • Narain • Wojtan

We present a method for animating yarn-level cloth effects using a thin-shell solver. We accomplish this through numerical homogenization: we first use a large number of yarn-level simulations to build a model of the potential energy density of the cloth, and then use this energy density function to compute forces in a thin shell simulator. We model several yarn-based materials, including both woven and knitted fabrics. Our model faithfully reproduces expected effects like the stiffness of woven fabrics, and the highly deformable nature and anisotropy of knitted fabrics. Our approach does not require any real-world experiments nor measurements; because the method is based entirely on simulations, it can generate entirely new material models quickly, without the need for testing apparatuses or human intervention. We provide data-driven models of several woven and knitted fabrics, which can be used for efficient simulation with an off-the-shelf cloth solver.

@article{sperl2020hylc,
  author = {Sperl, Georg and Narain, Rahul and Wojtan, Chris},
  title = {Homogenized Yarn-Level Cloth},
  journal = {ACM Transactions on Graphics (TOG)},
  number = {4},
  volume = {39},
  year = {2020},
  publisher = {ACM}
}

#### A Model for Soap Film Dynamics with Evolving Thickness

ACM Transactions on Graphics (SIGGRAPH 2020)

• Ishida • Synak • Narita • Hachisuka • Wojtan

Previous research on animations of soap bubbles, films, and foams largely focuses on the motion and geometric shape of the bubble surface. These works neglect the evolution of the bubble's thickness, which is normally responsible for visual phenomena like surface vortices, Newton's interference patterns, capillary waves, and deformation-dependent rupturing of films in a foam. In this paper, we model these natural phenomena by introducing the film thickness as a reduced degree of freedom in the Navier-Stokes equations and deriving their equations of motion. We discretize the equations on a non-manifold triangle mesh surface and couple it to an existing bubble solver. In doing so, we also introduce an incompressible fluid solver for 2.5D films and a novel advection algorithm for convecting fields across non-manifold surface junctions.
Our simulations enhance state-of-the-art bubble solvers with additional effects caused by convection, rippling, draining, and evaporation of the thin film.

@article{isnhw2020soapfilm_with_thickness,
  author = {Sadashige Ishida and Peter Synak and Fumiya Narita and Toshiya Hachisuka and Chris Wojtan},
  title = {A Model for Soap Film Dynamics with Evolving Thickness},
  journal = {ACM Trans. on Graphics},
  number = {4},
  volume = {39},
  year = {2020},
  pages = {31:1--31:11},
  articleno = 31,
  url = {http://dx.doi.org/10.1145/3386569.3392405},
  doi = {10.1145/3386569.3392405},
  publisher = {ACM}
}

#### A Practical Method for Animating Anisotropic Elastoplastic Materials

Computer Graphics Forum (Eurographics 2020)

• Schreck • Wojtan

This paper introduces a simple method for simulating highly anisotropic elastoplastic material behaviors like the dissolution of fibrous phenomena (splintering wood, shredding bales of hay) and materials composed of large numbers of irregularly-shaped bodies (piles of twigs, pencils, or cards). We introduce a simple transformation of the anisotropic problem into an equivalent isotropic one, and we solve this new "fictitious" isotropic problem using an existing simulator based on the material point method. Our approach results in minimal changes to existing simulators, and it allows us to re-use popular isotropic plasticity models like the Drucker-Prager yield criterion instead of inventing new anisotropic plasticity models for every phenomenon we wish to simulate.

@article{SW_ampm20,
  author = "Schreck, Camille and Wojtan, Chris",
  title = "A Practical Method for Animating Anisotropic Elastoplastic Materials",
  journal = "Computer Graphics Forum - Eurographics 2020",
  number = "2",
  volume = "39",
  year = "2020",
}

#### Programming temporal morphing of self-actuated shells

Nature Communications (2020)

• Guseinov • McMahan • Perez • Daraio • Bickel

Advances in shape-morphing materials, such as hydrogels, shape-memory polymers and light-responsive polymers have enabled prescribing self-directed deformations of initially flat geometries. However, most proposed solutions evolve towards a target geometry without considering time-dependent actuation paths. To achieve more complex geometries and avoid self-collisions, it is critical to encode a spatial and temporal shape evolution within the initially flat shell. Recent realizations of time-dependent morphing are limited to the actuation of few, discrete hinges and cannot form doubly curved surfaces. Here, we demonstrate a method for encoding temporal shape evolution in architected shells that assume complex shapes and doubly curved geometries. The shells are non-periodic tessellations of pre-stressed contractile unit cells that soften in water at rates prescribed locally by mesostructure geometry. The ensuing midplane contraction is coupled to the formation of encoded curvatures. We propose an inverse design tool based on a data-driven model for unit cells’ temporal responses.
@article{Guseinov2020,
  author={Guseinov, Ruslan and McMahan, Connor and P{\'e}rez, Jes{\'u}s and Daraio, Chiara and Bickel, Bernd},
  title={Programming temporal morphing of self-actuated shells},
  journal={Nature Communications},
  year={2020},
  volume={11},
  number={1},
  pages={237},
  issn={2041-1723},
  doi={10.1038/s41467-019-14015-2},
  url={https://doi.org/10.1038/s41467-019-14015-2}
}

#### 2019

#### X-CAD: Optimizing CAD Models with Extended Finite Elements

ACM Transactions on Graphics 38(6) (SIGGRAPH Asia 2019)

• Hafner • Schumacher • Knoop • Auzinger • Bickel • Baecher

We propose a novel generic shape optimization method for CAD models based on the eXtended Finite Element Method (XFEM). Our method works directly on the intersection between the model and a regular simulation grid, without the need to mesh or remesh, thus removing a bottleneck of classical shape optimization strategies. This is made possible by a novel hierarchical integration scheme that accurately integrates finite element quantities with sub-element precision. For optimization, we efficiently compute analytical shape derivatives of the entire framework, from model intersection to integration rule generation and XFEM simulation. Moreover, we describe a differentiable projection of shape parameters onto a constraint manifold spanned by user-specified shape preservation, consistency, and manufacturability constraints. We demonstrate the utility of our approach by optimizing mass distribution, strength-to-weight ratio, and inverse elastic shape design objectives directly on parameterized 3D CAD models.

@article{Hafner:2019,
  author = {Hafner, Christian and Schumacher, Christian and Knoop, Espen and Auzinger, Thomas and Bickel, Bernd and B\"{a}cher, Moritz},
  title = {X-CAD: Optimizing CAD Models with Extended Finite Elements},
  journal = {ACM Trans. Graph.},
  issue_date = {November 2019},
  volume = {38},
  number = {6},
  month = nov,
  year = {2019},
  issn = {0730-0301},
  pages = {157:1--157:15},
  articleno = {157},
  numpages = {15},
  url = {http://doi.acm.org/10.1145/3355089.3356576},
  doi = {10.1145/3355089.3356576},
  acmid = {3356576},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {CAD processing, XFEM, shape optimization, simulation},
}

#### FlexMaps Pavilion: a twisted arc made of mesostructured flat flexible panels

FORM and FORCE, IASS Symposium 2019, Structural Membranes 2019

• Laccone • Malomo • Perez • Pietroni • Ponchio • Bickel • Cignoni

Bending-active structures are able to efficiently produce complex curved shapes starting from flat panels. The desired deformation of the panels derives from the proper selection of their elastic properties. Optimized panels, called FlexMaps, are designed such that, once they are bent and assembled, the resulting static equilibrium configuration matches a desired input 3D shape. The FlexMaps elastic properties are controlled by locally varying spiraling geometric mesostructures, which are optimized in size and shape to match the global curvature (i.e., bending requests) of the target shape. The design pipeline starts from a quad mesh representing the input 3D shape, which determines the edge size and the total amount of spirals: every quad will embed one spiral. Then, an optimization algorithm tunes the geometry of the spirals by using a simplified pre-computed rod model. This rod model is derived from a non-linear regression algorithm which approximates the non-linear behavior of solid FEM spiral models subject to hundreds of load combinations.
This innovative pipeline has been applied to the project of a lightweight plywood pavilion named FlexMaps Pavilion, which is a single-layer piecewise twisted arc that fits a bounding box of 3.90x3.96x3.25 meters.

@InProceedings{LMPPPBC19,
  author = "Laccone, Francesco and Malomo, Luigi and P\'erez, Jes\'us and Pietroni, Nico and Ponchio, Federico and Bickel, Bernd and Cignoni, Paolo",
  title = "FlexMaps Pavilion: a twisted arc made of mesostructured flat flexible panels",
  booktitle = "FORM and FORCE, IASS Symposium 2019, Structural Membranes 2019",
  pages = "498-504",
  month = "oct",
  year = "2019",
  editor = "Carlos L\'azaro, Kai-Uwe Bletzinger, Eugenio O\~{n}ate",
  publisher = "International Centre for Numerical Methods in Engineering (CIMNE)",
  url = "http://vcg.isti.cnr.it/Publications/2019/LMPPPBC19"
}

#### Geometry-Aware Scattering Compensation

ACM Transactions on Graphics 38(4) (SIGGRAPH 2019)

• Sumin • Rittig • Babaei • Myszkowski • Bickel • Wilkie • Křivánek • Weyrich

Commercially available full-color 3D printing allows for detailed control of material deposition in a volume, but an exact reproduction of a target surface appearance is hampered by the strong subsurface scattering that causes nontrivial volumetric cross-talk at the print surface. Previous work showed how an iterative optimization scheme based on accumulating absorptive materials at the surface can be used to find a volumetric distribution of print materials that closely approximates a given target appearance. In this work, we first revisit the assumption that pushing the absorptive materials to the surface results in minimal volumetric cross-talk. We design a full-fledged optimization on a small domain for this task and confirm this previously reported heuristic. Then, we extend the above approach, which is critically limited to color reproduction on planar surfaces, to arbitrary 3D shapes. Our method enables high-fidelity color texture reproduction on 3D prints by effectively compensating for internal light scattering within arbitrarily shaped objects. In addition, we propose a content-aware gamut mapping that significantly improves color reproduction for the pathological case of thin geometric features. Using a wide range of sample objects with complex textures and geometries, we demonstrate color reproduction whose fidelity is superior to state-of-the-art drivers for color 3D printers.

@article{sumin19geometry-aware,
  author = {Sumin, Denis and Rittig, Tobias and Babaei, Vahid and Nindel, Thomas and Wilkie, Alexander and Didyk, Piotr and Bickel, Bernd and K\v{r}iv\'anek, Jaroslav and Myszkowski, Karol and Weyrich, Tim},
  title = {Geometry-Aware Scattering Compensation for {3D} Printing},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  year = 2019,
  month = jul,
  volume = 38,
  numpages = 14,
  keywords = {computational fabrication, appearance reproduction, appearance enhancement, sub-surface light transport, volumetric optimization, gradient rendering},
}

#### Fundamental Solutions for Water Wave Animation

ACM Transactions on Graphics 38(4) (SIGGRAPH 2019)

• Schreck • Hafner • Wojtan

This paper investigates the use of fundamental solutions for animating detailed linear water surface waves. We first propose an analytical solution for efficiently animating circular ripples in closed form. We then show how to adapt the method of fundamental solutions (MFS) to create ambient waves interacting with complex obstacles.
Subsequently, we present a novel wavelet-based discretization which outperforms the state-of-the-art MFS approach for simulating time-varying water surface waves with moving obstacles. Our results feature high-resolution spatial details, interactions with complex boundaries, and large open ocean domains. Our method compares favorably with previous work as well as known analytical solutions. We also present comparisons between our method and real world examples.

@article{SHW_fsww19,
  author = "Schreck, Camille and Hafner, Christian and Wojtan, Chris",
  title = "Fundamental Solutions for Water Wave Animation",
  journal = "ACM Trans. on Graphics - Siggraph 2019",
  number = "4",
  volume = "38",
  pages = "14",
  month = "July",
  year = "2019",
  note = "https://doi.org/10.1145/3306346.3323002"
}

#### Volume-Aware Design of Composite Molds

ACM Transactions on Graphics 38(4) (SIGGRAPH 2019)

• Alderighi • Malomo • Giorgi • Bickel • Cignoni • Pietroni

We propose a novel technique for the automatic design of molds to cast highly complex shapes. The technique generates composite, two-piece molds. Each mold piece is made up of a hard plastic shell and a flexible silicone part. Thanks to the thin, soft, and smartly shaped silicone part, which is kept in place by a hard plastic shell, we can cast objects of unprecedented complexity. An innovative algorithm based on a volumetric analysis defines the layout of the internal cuts in the silicone mold part. Our approach can robustly handle thin protruding features and intertwined topologies that have caused previous methods to fail. We compare our results with state of the art techniques, and we demonstrate the casting of shapes with extremely complex geometry.

@article{Alderighi:2019,
  author = {Alderighi, Thomas and Malomo, Luigi and Giorgi, Daniela and Bickel, Bernd and Cignoni, Paolo and Pietroni, Nico},
  title = {Volume-aware Design of Composite Molds},
  journal = {ACM Trans. Graph.},
  issue_date = {July 2019},
  volume = {38},
  number = {4},
  month = jul,
  year = {2019},
  issn = {0730-0301},
  pages = {110:1--110:12},
  articleno = {110},
  numpages = {12},
  url = {http://doi.acm.org/10.1145/3306346.3322981},
  doi = {10.1145/3306346.3322981},
  acmid = {3322981},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {casting, fabrication, mold design},
}

#### FlexMaps: Computational Design of Flat Flexible Shells for Shaping 3D Objects

ACM Transactions on Graphics 37(6) (SIGGRAPH Asia 2018)

• Malomo • Perez • Iarussi • Pietroni • Miguel • Cignoni • Bickel

We propose FlexMaps, a novel framework for fabricating smooth shapes out of flat, flexible panels with tailored mechanical properties. We start by mapping the 3D surface onto a 2D domain as in traditional UV mapping to design a set of deformable flat panels called FlexMaps. For these panels, we design and obtain specific mechanical properties such that, once they are assembled, the static equilibrium configuration matches the desired 3D shape. FlexMaps can be fabricated from an almost rigid material, such as wood or plastic, and are made flexible in a controlled way by using computationally designed spiraling microstructures.

@article{MPIPMCB18,
  author = "Malomo, Luigi and Per\'ez, Jes\'us and Iarussi, Emmanuel and Pietroni, Nico and Miguel, Eder and Cignoni, Paolo and Bickel, Bernd",
  title = "FlexMaps: Computational Design of Flat Flexible Shells for Shaping 3D Objects",
  journal = "ACM Trans. on Graphics - Siggraph Asia 2018",
  number = "6",
  volume = "37",
  pages = "14",
  month = "dec",
  year = "2018",
  note = "https://doi.org/10.1145/3272127.3275076"
}

#### CoreCavity: Interactive Shell Decomposition for Fabrication with Two-Piece Rigid Molds

ACM Transactions on Graphics 37(4) (SIGGRAPH 2018)

• Nakashima • Auzinger • Iarussi • Zhang • Igarashi • Bickel

Molding is a popular mass production method, in which the initial expenses for the mold are offset by the low per-unit production cost. However, the physical fabrication constraints of the molding technique commonly restrict the shape of moldable objects. For a complex shape, a decomposition of the object into moldable parts is a common strategy to address these constraints, with plastic model kits being a popular and illustrative example. However, conducting such a decomposition requires considerable expertise, and it depends on the technical aspects of the fabrication technique, as well as aesthetic considerations. We present an interactive technique to create such decompositions for two-piece molding, in which each part of the object is cast between two rigid mold pieces. Given the surface description of an object, we decompose its thin-shell equivalent into moldable parts by first performing a coarse decomposition and then utilizing an active contour model for the boundaries between individual parts. Formulated as an optimization problem, the movement of the contours is guided by an energy reflecting fabrication constraints to ensure the moldability of each part. Simultaneously the user is provided with editing capabilities to enforce aesthetic guidelines. Our interactive interface provides control of the contour positions by allowing, for example, the alignment of part boundaries with object features. Our technique enables a novel workflow, as it empowers novice users to explore the design space, and it generates fabrication-ready two-piece molds that can be used either for casting or industrial injection molding of free-form objects.

@article{Nakashima:2018:10.1145/3197517.3201341,
  author = {Nakashima, Kazutaka and Auzinger, Thomas and Iarussi, Emmanuel and Zhang, Ran and Igarashi, Takeo and Bickel, Bernd},
  title = {CoreCavity: Interactive Shell Decomposition for Fabrication with Two-Piece Rigid Molds},
  journal = {ACM Transactions on Graphics (SIGGRAPH 2018)},
  year = {2018},
  volume = {37},
  number = {4},
  pages = {135:1--135:13},
  articleno = {135},
  numpages = {16},
  url = {https://dx.doi.org/10.1145/3197517.3201341},
  doi = {10.1145/3197517.3201341},
  acmid = {3201341},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {molding, fabrication, height field, decomposition}
}

#### Computational Design of Nanostructural Color for Additive Manufacturing

ACM Transactions on Graphics 37(4) (SIGGRAPH 2018)

• Auzinger • Heidrich • Bickel

Additive manufacturing has recently seen drastic improvements in resolution, making it now possible to fabricate features at scales of hundreds or even dozens of nanometers, which previously required very expensive lithographic methods. As a result, additive manufacturing now seems poised for optical applications, including those relevant to computer graphics, such as material design, as well as display and imaging applications. In this work, we explore the use of additive manufacturing for generating structural colors, where the structures are designed using a fabrication-aware optimization process.
This requires a combination of full-wave simulation, a feasible parameterization of the design space, and a tailored optimization procedure. Many of these components should be re-usable for the design of other optical structures at this scale. We show initial results of material samples fabricated based on our designs. While these suffer from the prototype character of state-of-the-art fabrication hardware, we believe they clearly demonstrate the potential of additive nanofabrication for structural colors and other graphics applications. @article{Auzinger:2018:10.1145/3197517.3201376, author = {Auzinger, Thomas and Heidrich, Wolfgang and Bickel, Bernd}, title = {Computational Design of Nanostructural Color for Additive Manufacturing}, journal = {ACM Transactions on Graphics (SIGGRAPH 2018)}, year = {2018}, volume = {37}, number = {4}, pages = {159:1--159:16}, articleno = {159}, numpages = {16}, url = {http://doi.acm.org/10.1145/3197517.3201376}, doi = {10.1145/3197517.3201376}, acmid = {3201376}, publisher = {ACM}, address = {New York, NY, USA}, keywords = {structural colorization, appearance, multiphoton lithography, direct laser writing, computational fabrication, computational design, shape optimization, FDTD, diffraction, Nanoscribe} } #### Metamolds: Computational Design of Silicone Molds ACM Transactions on Graphics 37(4) (SIGGRAPH 2018) • Alderighi • Malomo • Giorgi • Pietroni • Bickel • Cignoni We propose a new method for fabricating digital objects through reusable silicone molds. Molds are generated by casting liquid silicone into custom 3D printed containers called metamolds. Metamolds automatically define the cuts that are needed to extract the cast object from the silicone mold. The shape of metamolds is designed through a novel segmentation technique, which takes into account both geometric and topological constraints involved in the process of mold casting. Our technique is simple, does not require changing the shape or topology of the input objects, and only requires off-the-shelf materials and technologies. We successfully tested our method on a set of challenging examples with complex shapes and rich geometric detail. @article{Alderighi:2018, author = {Alderighi, Thomas and Malomo, Luigi and Giorgi, Daniela and Pietroni, Nico and Bickel, Bernd and Cignoni, Paolo}, title = {Metamolds: Computational Design of Silicone Molds}, journal = {ACM Trans. Graph.}, issue_date = {August 2018}, volume = {37}, number = {4}, month = jul, year = {2018}, issn = {0730-0301}, pages = {136:1--136:13}, articleno = {136}, numpages = {13}, url = {http://doi.acm.org/10.1145/3197517.3201381}, doi = {10.1145/3197517.3201381}, acmid = {3201381}, publisher = {ACM}, address = {New York, NY, USA}, keywords = {casting, fabrication, molding}, } #### Learning Three-dimensional Flow for Interactive Aerodynamic Design ACM Transactions on Graphics 37(4) (SIGGRAPH 2018) • Umetani • Bickel We present a data-driven technique to instantly predict how fluid flows around various three-dimensional objects. Such simulation is useful for computational fabrication and engineering, but is usually computationally expensive since it requires solving the Navier-Stokes equation for many time steps. To accelerate the process, we propose a machine learning framework which predicts aerodynamic forces and velocity and pressure fields given a three-dimensional shape input. 
Handling detailed free-form three-dimensional shapes in a data-driven framework is challenging because machine learning approaches usually require a consistent parametrization of input and output. We present a novel PolyCube-map-based parametrization that can be computed for three-dimensional shapes at interactive rates. This allows us to efficiently learn the nonlinear response of the flow using Gaussian process regression. We demonstrate the effectiveness of our approach for the interactive design and optimization of a car body. @article{Umetani:2018, author = {Umetani, Nobuyuki and Bickel, Bernd}, title = {Learning Three-dimensional Flow for Interactive Aerodynamic Design}, journal = {ACM Transactions on Graphics (SIGGRAPH 2018)}, year = {2018}, volume = {37}, number = {4}, articleno = {89}, numpages = {10}, url = {https://doi.org/10.1145/3197517.3201325}, doi = {10.1145/3197517.3201325}, publisher = {ACM}, address = {New York, NY, USA} } #### Water Surface Wavelets ACM Transactions on Graphics (SIGGRAPH 2018) • Jeschke • Skřivan • Müller-Fischer • Chentanez • Macklin • Wojtan The current state of the art in real-time two-dimensional water wave simulation requires developers to choose between efficient Fourier-based methods, which lack interactions with moving obstacles, and finite-difference or finite element methods, which handle environmental interactions but are significantly more expensive. This paper attempts to bridge this long-standing gap between complexity and performance, by proposing a new wave simulation method that can faithfully simulate wave interactions with moving obstacles in real time while simultaneously preserving minute details and accommodating very large simulation domains. Previous methods for simulating 2D water waves directly compute the change in height of the water surface, a strategy which imposes limitations based on the CFL condition (fast moving waves require small time steps) and Nyquist's limit (small wave details require closely-spaced simulation variables). This paper proposes a novel wavelet transformation that discretizes the liquid motion in terms of amplitude-like functions that vary over *space, frequency, and direction*, effectively generalizing Fourier-based methods to handle local interactions. Because these new variables change much more slowly over space than the original water height function, our change of variables drastically reduces the limitations of the CFL condition and Nyquist limit, allowing us to simulate highly detailed water waves at very large visual resolutions. Our discretization is amenable to fast summation and easy to parallelize. We also present basic extensions like pre-computed wave paths and two-way solid-fluid coupling. Finally, we argue that our discretization provides a convenient set of variables for artistic manipulation, which we illustrate with a novel wave-painting interface. #### Efficient FEM-Based Simulation of Soft Robots Modeled as Kinematic Chains IEEE International Conference on Robotics and Automation 2018 • Pozzi • Miguel • Deimel • Malvezzi • Bickel • Brock • Prattichizzo In the context of robotic manipulation and grasping, the shift from a view that is static (force closure of a single posture) and contact-deprived (only contact for force closure is allowed, everything else is obstacle) towards a view that is dynamic and contact-rich (soft manipulation) has led to an increased interest in soft hands. 
These hands can easily exploit environmental constraints and object surfaces without risk, and safely interact with humans, but they also present some challenges. They are difficult to design, and their interactions with objects and the environment are hard to predict, model, and “program”. This paper tackles the problem of simulating them in a fast and effective way, leveraging novel and existing simulation technologies. We present a triple-layered simulation framework where dynamic properties such as stiffness are determined once from slow but accurate FEM simulation data, and then condensed into a lumped-parameter model that can be used to quickly simulate soft fingers and soft hands. We apply our approach to the simulation of soft pneumatic fingers. @inproceedings{pozziefficient, title = {Efficient FEM-Based Simulation of Soft Robots Modeled as Kinematic Chains}, booktitle = {IEEE International Conference on Robotics and Automation 2018}, author = {Pozzi, Maria and Miguel, Eder and Deimel, Raphael and Malvezzi, Monica and Bickel, Bernd and Brock, Oliver and Prattichizzo, Domenico}, year = {2018} } #### Scattering-Aware Texture Reproduction for 3D Printing ACM Transactions on Graphics 36(6) (SIGGRAPH Asia 2017) • Elek • Sumin • Zhang • Weyrich • Myszkowski • Bickel • Wilkie • Křivánek Color texture reproduction in 3D printing commonly ignores volumetric light transport (cross-talk) between surface points on a 3D print. Such light diffusion leads to significant blur of details and color bleeding, and is particularly severe for highly translucent resin-based print materials. Given their widely varying scattering properties, this cross-talk between surface points strongly depends on the internal structure of the volume surrounding each surface point. Existing scattering-aware methods use simplified models for light diffusion, and often accept the visual blur as an immutable property of the print medium. In contrast, our work counteracts heterogeneous scattering to obtain the impression of a crisp albedo texture on top of the 3D print, by optimizing for a fully volumetric material distribution that preserves the target appearance. Our method employs an efficient numerical optimizer on top of a general Monte-Carlo simulation of heterogeneous scattering, supported by a practical calibration procedure to obtain scattering parameters from a given set of printer materials. Despite the inherent translucency of the medium, we reproduce detailed surface textures on 3D prints. We evaluate our system using a commercial, five-tone 3D print process and compare against the printer's native color texturing mode, demonstrating that our method preserves high-frequency features well without having to compromise on color gamut. @article{ElekSumin2017SGA, author = {Elek, Oskar and Sumin, Denis and Zhang, Ran and Weyrich, Tim and Myszkowski, Karol and Bickel, Bernd and Wilkie, Alexander and K\v{r}iv\'{a}nek, Jaroslav}, title = {Scattering-aware Texture Reproduction for 3{D} Printing}, journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)}, volume = {36}, number = {6}, year = {2017}, pages = {241:1--241:15} } #### Probabilistic Image Colorization British Machine Vision Conference (BMVC 2017) • Royer • Kolesnikov • Lampert We develop a probabilistic technique for colorizing grayscale natural images. In light of the intrinsic uncertainty of this task, the proposed probabilistic framework has numerous desirable properties. 
In particular, our model is able to produce multiple plausible and vivid colorizations for a given grayscale image and is one of the first colorization models to provide a proper stochastic sampling scheme. Moreover, our training procedure is supported by a rigorous theoretical framework that does not require any ad hoc heuristics and allows for efficient modeling and learning of the joint pixel color distribution. We demonstrate strong quantitative and qualitative experimental results on the CIFAR-10 dataset and the challenging ILSVRC 2012 dataset. @inproceedings{royer2017probabilistic, title={Probabilistic Image Colorization}, author={Royer, Amelie and Kolesnikov, Alexander and Lampert, Christoph H.}, booktitle={British Machine Vision Conference (BMVC)}, year={2017} } #### PixelCNN Models with Auxiliary Variables for Natural Image Modeling International Conference on Machine Learning (ICML 2017) • Kolesnikov • Lampert We study probabilistic models of natural images and extend the autoregressive family of PixelCNN models by incorporating auxiliary variables. Subsequently, we describe two new generative image models that exploit different image transformations as auxiliary variables: a quantized grayscale view of the image or a multi-resolution image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN models: 1) their tendency to focus on low-level image details, while largely ignoring high-level image information, such as object shapes, and 2) their computationally costly procedure for image sampling. We experimentally demonstrate the benefits of our models, in particular showing that they produce much more realistic-looking image samples than previous state-of-the-art probabilistic models. @inproceedings{kolesnikov2017pixelcnn, title={{PixelCNN} Models with Auxiliary Variables for Natural Image Modeling}, author={Alexander Kolesnikov and Christoph H. Lampert}, booktitle={International Conference on Machine Learning (ICML)}, year={2017} } #### iCaRL: Incremental Classifier and Representation Learning IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) • Rebuffi • Kolesnikov • Sperl • Lampert A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail. @inproceedings{rebuffi2017icarl, title={{iCaRL}: Incremental Classifier and Representation Learning}, author={Rebuffi, Sylvestre-Alvise and Kolesnikov, Alexander and Sperl, Georg and Lampert, Christoph H.}, booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2017} } #### Water Wave Packets ACM Transactions on Graphics 36(4) (SIGGRAPH 2017) • Jeschke • Wojtan This paper presents a method for simulating water surface waves as a displacement field on a 2D domain. 
Our method relies on Lagrangian particles that carry packets of water wave energy; each packet carries information about an entire group of wave trains, as opposed to only a single wave crest. Our approach is unconditionally stable and can simulate high resolution geometric details. This approach also presents a straightforward interface for artistic control, because it is essentially a particle system with intuitive parameters like wavelength and amplitude. Our implementation parallelizes well and runs in real time for moderately challenging scenarios. @article{Jeschke2017, author = {Stefan Jeschke and Chris Wojtan}, title = {Water Wave Packets}, journal = {ACM Transactions on Graphics (SIGGRAPH 2017)}, year = {2017}, volume = {36}, number = {4} } #### Functionality-aware Retargeting of Mechanisms to 3D Shapes ACM Transactions on Graphics 36(4) (SIGGRAPH 2017) • Zhang • Auzinger • Ceylan • Li • Bickel We present an interactive design system to create functional mechanical objects. Our computational approach allows novice users to retarget an existing mechanical template to a user-specified input shape. Our proposed representation for a mechanical template encodes a parameterized mechanism, mechanical constraints that ensure a physically valid configuration, spatial relationships of mechanical parts to the user-provided shape, and functional constraints that specify an intended functionality. We provide an intuitive interface and optimization-in-the-loop approach for finding a valid configuration of the mechanism and the shape to ensure that higher-level functional goals are met. Our algorithm interactively optimizes the mechanism while the user manipulates the placement of mechanical components and the shape. Our system allows users to efficiently explore various design choices and to synthesize customized mechanical objects that can be fabricated with rapid prototyping technologies. We demonstrate the efficacy of our approach by retargeting various mechanical templates to different shapes and fabricating the resulting functional mechanical objects. @article{Zhang2017, author = {Zhang, Ran and Auzinger, Thomas and Ceylan, Duygu and Li, Wilmot and Bickel, Bernd}, title = {Functionality-aware Retargeting of Mechanisms to 3D Shapes}, journal = {ACM Transactions on Graphics (SIGGRAPH 2017)}, year = {2017}, volume = {36}, number = {4} } #### CurveUps: Shaping Objects from Flat Plates with Tension-Actuated Curvature ACM Transactions on Graphics 36(4) (SIGGRAPH 2017) • Guseinov • Miguel • Bickel We present a computational approach for designing CurveUps, curvy shells that form from an initially flat state. They consist of small rigid tiles that are tightly held together by two pre-stretched elastic sheets attached to them. Our method allows the realization of smooth, doubly curved surfaces that can be fabricated as a flat piece. Once released, the restoring forces of the pre-stretched sheets support the object to take shape in 3D. CurveUps are structurally stable in their target configuration. The design process starts with a target surface. Our method generates a tile layout in 2D and optimizes the distribution, shape, and attachment areas of the tiles to obtain a configuration that is fabricable and in which the curved up state closely matches the target. Our approach is based on an efficient approximate model and a local optimization strategy for an otherwise intractable nonlinear optimization problem. 
We demonstrate the effectiveness of our approach for a wide range of shapes, all realized as physical prototypes. @article{Guseinov2017, author = {Guseinov, Ruslan and Miguel, Eder and Bickel, Bernd}, title = {CurveUps: Shaping Objects from Flat Plates with Tension-Actuated Curvature}, journal = {ACM Transactions on Graphics (SIGGRAPH 2017)}, year = {2017}, volume = {36}, number = {4} } #### Computational Multicopter Design ACM Transactions on Graphics (SIGGRAPH Asia 2016) • Du • Schulz • Zhu • Bickel • Matusik We present an interactive system for computational design, optimization, and fabrication of multicopters. Our computational approach allows non-experts to design, explore, and evaluate a wide range of different multicopters. We provide users with an intuitive interface for assembling a multicopter from a collection of components (e.g., propellers, motors, and carbon fiber rods). Our algorithm interactively optimizes shape and controller parameters of the current design to ensure its proper operation. In addition, we allow incorporating a variety of other metrics (such as payload, battery usage, size, and cost) into the design process and exploring tradeoffs between them. We show the efficacy of our method and system by designing, optimizing, fabricating, and operating multicopters with complex geometries and propeller configurations. We also demonstrate the ability of our optimization algorithm to improve the multicopter performance under different metrics. #### FlexMolds: Automatic Design of Flexible Shells for Molding ACM Transactions on Graphics (SIGGRAPH Asia 2016) • Malomo • Pietroni • Bickel • Cignoni We present FlexMolds, a novel computational approach to automatically design flexible, reusable molds that, once 3D printed, allow us to physically fabricate, by means of liquid casting, multiple copies of complex shapes with rich surface details and complex topology. The approach to design such flexible molds is based on a greedy bottom-up search of possible cuts over an object, evaluating for each possible cut the feasibility of the resulting mold. We use a dynamic simulation approach to evaluate candidate molds, providing a heuristic to generate forces that are able to open, detach, and remove a complex mold from the object it surrounds. We have tested the approach with a number of objects with nontrivial shapes and topologies. #### Seed, Expand and Constrain: Three Principles for Weakly-supervised Image Segmentation European Conference on Computer Vision (ECCV 2016) • Kolesnikov • Lampert We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak location cues, to expand objects based on the information about which classes can occur, and to constrain the segmentations to coincide with image boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations. 
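As a rough illustration of the first of the three principles, the "seed" term can be read as a cross-entropy penalty evaluated only at pixels carrying weak location cues. The following minimal numpy sketch is our own paraphrase of that idea, not code from the paper; the function name and array layout are assumptions.

```python
import numpy as np

def seeding_loss(log_probs, seeds):
    """Mean negative log-likelihood over seeded pixels only (illustrative).

    log_probs: (H, W, C) per-pixel log class probabilities from the network.
    seeds:     (H, W) integer array: class index at cue pixels, -1 elsewhere.
    """
    ys, xs = np.nonzero(seeds >= 0)          # locations with a weak cue
    if ys.size == 0:
        return 0.0
    return -np.mean(log_probs[ys, xs, seeds[ys, xs]])
```

Pixels without cues contribute nothing here; in the paper, they are instead handled by the "expand" and "constrain" terms.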
@article{kolesnikov2014seed, title={Seed, Expand and Constrain: Three Principles for Weakly-supervised Image Segmentation}, author={Kolesnikov, Alexander and Lampert, Christoph H}, journal={European Conference on Computer Vision (ECCV)}, year={2016} } #### Improving Weakly-Supervised Object Localization By Micro-Annotation British Machine Vision Conference (BMVC 2016) • Kolesnikov • Lampert Weakly-supervised object localization methods tend to fail for object classes that consistently co-occur with the same background elements, e.g. trains on tracks. We propose a method to overcome these failures by adding a very small amount of model-specific additional annotation. The main idea is to cluster a deep network's mid-level representations and assign object or distractor labels to each cluster. Experiments show substantially improved localization results on the challenging ILSVRC 2014 dataset for bounding box detection and the PASCAL VOC 2012 dataset for semantic segmentation. @inproceedings{kolesnikov2016improving, title={Improving Weakly-Supervised Object Localization By Micro-Annotation}, author={Kolesnikov, Alexander and Lampert, Christoph H.}, booktitle={British Machine Vision Conference (BMVC)}, year={2016} } #### Tracking, Correcting and Absorbing Water Surface Waves IST Austria (PhD thesis) • Bojsen-Hansen Computer graphics is an extremely exciting field for two reasons. On the one hand, there is a healthy injection of pragmatism coming from the visual effects industry, which wants robust algorithms that work so it can produce results at an increasingly frantic pace. On the other hand, it must always try to push the envelope and achieve the impossible to wow audiences in the next blockbuster, which means that the industry has not succumbed to conservatism, and there is *plenty* of room to try out new and *crazy* ideas if there is a chance that they will pan out into something useful. Water simulation has been used in visual effects for decades; however, it remains extremely challenging because of its high computational cost and difficult art-directability. The work in this thesis tries to address some of these difficulties. Specifically, we make the following three novel contributions to the state-of-the-art in water simulation for visual effects. @phdthesis{TCaAWSW2016, author = {Morten Bojsen-Hansen}, title = {Tracking, Correcting and Absorbing Water Surface Waves}, school = {IST Austria}, year = {2016}, month = {9} } #### Surface-Only Liquids ACM Transactions on Graphics 35(4) (SIGGRAPH 2016) • Da • Hahn • Batty • Wojtan • Grinspun #### Generalized Non-Reflecting Boundaries for Fluid Re-Simulation ACM Transactions on Graphics 35(4) (SIGGRAPH 2016) • Bojsen-Hansen • Wojtan When aiming to seamlessly integrate a fluid simulation into a larger scenario (like an open ocean), careful attention must be paid to boundary conditions. In particular, one must implement special "non-reflecting" boundary conditions, which dissipate out-going waves as they exit the simulation. Unfortunately, the state of the art in non-reflecting boundary conditions (perfectly-matched layers, or PMLs) only permits trivially simple inflow/outflow conditions, so there is no reliable way to integrate a fluid simulation into a more complicated environment like a stormy ocean or a turbulent river. This paper introduces the first method for combining non-reflecting boundary conditions based on PMLs with inflow/outflow boundary conditions that vary arbitrarily throughout space and time. 
Our algorithm is a generalization of state-of-the-art mean-flow boundary conditions in the computational fluid dynamics literature, and it allows for seamless integration of a fluid simulation into much more complicated environments. Our method also opens the door for previously-unseen post-process effects like retroactively changing the location of solid obstacles, and locally increasing the visual detail of a pre-existing simulation. @article{GNRBfFRS2016, author = {Morten Bojsen-Hansen and Chris Wojtan}, title = {Generalized Non-Reflecting Boundaries for Fluid Re-Simulation}, journal = {ACM Transactions on Graphics (SIGGRAPH 2016)}, year = {2016}, volume = {35}, number = {4}, } #### Fast approximations for boundary element based brittle fracture simulation ACM Transactions on Graphics 35(4) (SIGGRAPH 2016) • Hahn • Wojtan #### Computational Design of Stable Planar-Rod Structures ACM Transactions on Graphics 35(4) (SIGGRAPH 2016) • Miguel • Lepoutre • Bickel We present a computational method for designing wire sculptures consisting of interlocking wires. Our method allows the computation of aesthetically pleasing structures that are structurally stable, efficiently fabricatable with a 2D wire bending machine, and assemblable without the need of additional connectors. Starting from a set of planar contours provided by the user, our method automatically tests for the feasibility of a design, determines a discrete ordering of wires at intersection points, and optimizes for the rest shape of the individual wires to maximize structural stability under frictional contact. In addition to their application to art, wire sculptures present an extremely efficient and fast alternative for low-fidelity rapid prototyping because manufacturing time and required material scale linearly with the physical size of objects. We demonstrate the effectiveness of our approach on a varied set of examples, all of which we fabricated. @article{CDoSPRS2016, author = {Eder Miguel and Mathias Lepoutre and Bernd Bickel}, title = {Computational Design of Stable Planar-Rod Structures}, journal = {ACM Transactions on Graphics (SIGGRAPH 2016)}, year = {2016}, volume = {35}, number = {4} } #### Modeling and Estimation of Energy-Based Hyperelastic Objects Computer Graphics Forum 35(2) (EUROGRAPHICS 2016) • Miguel • Miraut #### Total Variation on a Tree SIAM Journal on Imaging Sciences (SIIMS), 9(2):605-636, 2016 • Kolmogorov • Pock • Rolinek We consider the problem of minimizing the continuous valued total variation subject to different unary terms on trees and propose fast direct algorithms based on dynamic programming to solve these problems. We treat both the convex and the non-convex case and derive worst-case complexities that are equal to or better than those of existing methods. We show applications to total variation based 2D image processing and computer vision problems based on a Lagrangian decomposition approach. The resulting algorithms are very efficient, offer a high degree of parallelism and come along with memory requirements which are only on the order of the number of image pixels. 
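For concreteness, the class of problems treated here can be written in the following form, where the unary terms f_i act on the vertices of a tree T = (V, E); this display is our paraphrase of the abstract (the regularization weight λ, which could also be per-edge, is our notation), not a formula copied from the paper.

```latex
\min_{x \in \mathbb{R}^{|V|}} \; \sum_{i \in V} f_i(x_i) \;+\; \lambda \sum_{(i,j) \in E} |x_i - x_j|
```

The pairwise sum is the (continuous-valued) total variation over the tree edges; a 1D signal is the special case where the tree is a chain.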
@article{kolmogorov2016total, title={Total variation on a tree}, author={Kolmogorov, Vladimir and Pock, Thomas and Rolinek, Michal}, journal={SIAM Journal on Imaging Sciences}, volume={9}, number={2}, pages={605--636}, year={2016}, publisher={SIAM} } #### Narrow Band FLIP for Liquid Simulations Computer Graphics Forum 35(2) • Ferstl • Ando • Wojtan • Westermann • Thuerey #### A Practical Method for High-Resolution Embedded Liquid Surfaces Computer Graphics Forum 35(2) • Batty • Wojtan #### Generalized Diffusion Curves: An Improved Vector Representation for Smooth-Shaded Images Computer Graphics Forum 35(2) • Jeschke This paper generalizes the well-known Diffusion Curves Images (DCI), which are composed of a set of Bezier curves with colors specified on either side. These colors are diffused as Laplace functions over the image domain, which results in smooth color gradients interrupted by the Bezier curves. Our new formulation allows for more color control away from the boundary, providing similar expressive power to recent Bilaplace image models without introducing associated issues and computational costs. The new model is based on a special Laplace function blending and a new edge blur formulation. We demonstrate that, given some user-defined boundary curves over an input raster image, fitting colors and edge blur from the image to the new model, as well as subsequent editing and animation, is just as convenient as with DCIs. Numerous examples and comparisons to DCIs are presented. @article{GDCI2016, author = {Stefan Jeschke}, title = {Generalized Diffusion Curves: An Improved Vector Representation for Smooth-Shaded Images}, journal = {Computer Graphics Forum}, year = {2016}, volume = {35}, number = {2}, pages = {1--9} } #### DefSense: Computational Design of Customized Deformable Input Devices ACM SIGCHI, May 2016 • Bächer • Hepp • Pece • Kry • Bickel • Thomaszewski • Hilliges We present a novel optimization-based algorithm for the design and fabrication of customized, deformable input devices, capable of continuously sensing their deformation. We propose to embed piezoresistive sensing elements into flexible 3D printed objects. These sensing elements are then utilized to recover rich and natural user interactions at runtime. Designing such objects manually is a challenging and hard problem for all but the simplest geometries and deformations. Our method simultaneously optimizes the internal routing of the sensing elements and computes a mapping from low-level sensor readings to user-specified outputs in order to minimize reconstruction error. We demonstrate the power and flexibility of the approach by designing and fabricating a set of flexible input devices. Our results indicate that the optimization-based design greatly outperforms manual routings in terms of reconstruction accuracy and thus interaction fidelity. #### Computational Design of Walking Automata ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA 2015) • Bharaj • Coros • Thomaszewski • Tompkin • Bickel • Pfister Creating mechanical automata that can walk in stable and pleasing manners is a challenging task that requires both skill and expertise. We propose to use computational design to offset the technical difficulties of this process. A simple drag-and-drop interface allows casual users to create personalized walking toys from a library of pre-defined template mechanisms. 
Provided with this input, our method leverages physical simulation and evolutionary optimization to refine the mechanical designs such that the resulting toys are able to walk. The optimization process is guided by an intuitive set of objectives that measure the quality of the walking motions. We demonstrate our approach on a set of simulated mechanical toys with different numbers of legs and various distinct gaits. Two fabricated prototypes showcase the feasibility of our designs. #### A Stream Function Solver for Liquid Simulations ACM Trans. Graph. 34, 4 (SIGGRAPH 2015 Papers) • Ando • Thuerey • Wojtan #### Detailed Spatio-Temporal Reconstruction of Eyelids ACM Trans. Graph. 34, 4 (SIGGRAPH 2015 Papers) • Bermano • Beeler • Yeara • Bickel • Gross In recent years we have seen numerous improvements on 3D scanning and tracking of human faces, greatly advancing the creation of digital doubles for film and video games. However, despite the high-resolution quality of the reconstruction approaches available, current methods are unable to capture one of the most important regions of the face – the eye region. In this work we present the first method for detailed spatio-temporal reconstruction of eyelids. Tracking and reconstructing eyelids is extremely challenging, as this region exhibits very complex and unique skin deformation where skin is folded under while opening the eye. Furthermore, eyelids are often only partially visible and obstructed due to self-occlusion and eyelashes. Our approach is to combine a geometric deformation model with image data, leveraging multi-view stereo, optical flow, contour tracking and wrinkle detection from local skin appearance. Our deformation model serves as a prior that enables reconstruction of eyelids even under strong self-occlusions caused by rolling and folding skin as the eye opens and closes. The output is a person-specific, time-varying eyelid reconstruction with anatomically plausible deformations. Our high-resolution detailed eyelids couple naturally with current facial performance capture approaches. As a result, our method can largely increase the fidelity of facial capture and the creation of digital doubles. #### OmniAD: Data-driven Omni-directional Aerodynamics ACM Trans. Graph. 34, 4 (SIGGRAPH 2015 Papers) • Martin • Umetani • Bickel This paper introduces OmniAD, a novel data-driven pipeline to model and acquire the aerodynamics of three-dimensional rigid objects. Traditionally, aerodynamics are examined through elaborate wind tunnel experiments or expensive fluid dynamics computations, and are only measured for a small number of discrete wind directions. OmniAD allows the evaluation of aerodynamic forces, such as drag and lift, for any incoming wind direction using a novel representation based on spherical harmonics. Our data-driven technique acquires the aerodynamic properties of an object simply by capturing its falling motion using a single camera. Once model parameters are estimated, OmniAD enables realistic real-time simulation of rigid bodies, such as the tumbling and gliding of leaves, without simulating the surrounding air. In addition, we propose an intuitive user interface based on OmniAD to interactively design three-dimensional kites that actually fly. Various non-traditional kites were designed to demonstrate the physical validity of our model. #### Microstructures to Control Elasticity in 3D Printing ACM Trans. Graph. 
34, 4 (SIGGRAPH 2015 Papers) • Schumacher • Bickel • Marschner • Rys • Daraio • Gross We propose a method for fabricating deformable objects with spatially varying elasticity using 3D printing. Using a single, relatively stiff printer material, our method designs an assembly of small-scale microstructures that have the effect of a softer material at the object scale, with properties depending on the microstructure used in each part of the object. We build on work in the area of metamaterials, using numerical optimization to design tiled microstructures with desired properties, but with the key difference that our method designs families of related structures that can be interpolated to smoothly vary the material properties over a wide range. To create an object with spatially varying elastic properties, we tile the object's interior with microstructures drawn from these families, generating a different microstructure for each cell using an efficient algorithm to select compatible structures for neighboring cells. We show results computed for both 2D and 3D objects, validating several 2D and 3D printed structures using standard material tests as well as demonstrating various example applications. #### Learning Shape Placements by Example ACM Trans. Graph. 34, 4 (SIGGRAPH 2015 Papers) • Guerrero • Jeschke • Wimmer • Wonka We present a method to learn and propagate shape placements in 2D polygonal scenes from a few examples provided by a user. The placement of a shape is modeled as an oriented bounding box. Simple geometric relationships between this bounding box and nearby scene polygons define a feature set for the placement. The feature sets of all example placements are then used to learn a probabilistic model over all possible placements and scenes. With this model we can generate a new set of placements with similar geometric relationships in any given scene. We introduce extensions that enable propagation and generation of shapes in 3D scenes, as well as the application of a learned modeling session to large scenes without additional user interaction. These concepts allow us to generate complex scenes with thousands of objects with relatively little user interaction. @article{guerrero-2015-lsp, title = "Learning Shape Placements by Example", author = "Paul Guerrero and Stefan Jeschke and Michael Wimmer and Peter Wonka", year = "2015", pages = "1--13", month = aug, event = "ACM SIGGRAPH 2015", journal = "ACM Transactions on Graphics", location = "Los Angeles, CA", keywords = "complex model generation, modeling by example", } #### High-Resolution Brittle Fracture Simulation with Boundary Elements ACM Trans. Graph. 34, 4 (SIGGRAPH 2015 Papers) • Hahn • Wojtan #### Design and Fabrication of Flexible Rod Meshes ACM Trans. Graph. 34, 4 (SIGGRAPH 2015 Papers) • Perez • Thomaszewski • Coros • Bickel • Canabal • Sumner We present a computational tool for fabrication-oriented design of flexible rod meshes. Given a deformable surface and a set of deformed poses as input, our method automatically computes a printable rod mesh that, once manufactured, closely matches the input poses under the same boundary conditions. The core of our method is formed by an optimization scheme that adjusts the cross-sectional profiles of the rods and their rest centerline in order to best approximate the target deformations. This approach allows us to locally control the bending and stretching resistance of the surface with a single material, yielding high design flexibility and low fabrication cost. 
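The reason tuning cross-sectional profiles gives such independent control over bending and stretching comes from classical beam theory: stretching resistance scales with the cross-sectional area, while bending resistance scales with the second moment of area, which is cubic in the profile height. A minimal sketch of these textbook quantities (illustrative only, not code from the paper):

```python
def rod_stiffness(E, w, h):
    """Stretching and bending stiffness of a rod with a rectangular
    w x h cross-section (standard beam-theory formulas).

    E : Young's modulus of the material [Pa]
    w : cross-section width  [m]
    h : cross-section height [m]
    """
    A = w * h                 # cross-sectional area
    I = w * h**3 / 12.0       # second moment of area about the width axis
    return E * A, E * I       # stretching stiffness EA, bending stiffness EI
```

Doubling h multiplies the bending stiffness by eight while only doubling the stretching stiffness, which is the kind of decoupling an optimization over profiles can exploit with a single print material.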
#### Double Bubbles Sans Toil and Trouble: Discrete Circulation-Preserving Vortex Sheets for Soap Films and Foams ACM Trans. Graph. 34, 4 (SIGGRAPH 2015 Papers) • Da • Batty • Wojtan • Grinspun #### Predicting the Future Behavior of a Time-Varying Probability Distribution IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015) • Lampert We study the problem of predicting the future, though only in the probabilistic sense of estimating a future state of a time-varying probability distribution. This is not only an interesting academic problem, but solving this extrapolation problem also has many practical applications, e.g. for training classifiers that have to operate under time-varying conditions. Our main contribution is a method for predicting the next step of the time-varying distribution from a given sequence of sample sets from earlier time steps. For this we rely on two recent machine learning techniques: embedding probability distributions into a reproducing kernel Hilbert space, and learning operators by vector-valued regression. We illustrate the working principles and the practical usefulness of our method by experiments on synthetic and real data. We also highlight an exemplary application: training a classifier in a domain adaptation setting without having access to examples from the test-time distribution at training time. #### Curriculum Learning of Multiple Tasks IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015) • Pentina • Sharmanska • Lampert #### Classifier Adaptation at Prediction Time IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015) • Royer • Lampert Classifiers for object categorization are usually evaluated by their accuracy on a set of i.i.d. test examples. This provides us with an estimate of the expected error when applying the classifiers to a single new image. In real applications, however, classifiers are rarely only used for a single image and then discarded. Instead, they are applied sequentially to many images, and these are typically not i.i.d. samples from a fixed data distribution, but they carry dependencies and their class distribution varies over time. In this work, we argue that the phenomenon of correlated data at prediction time is not a nuisance, but a blessing in disguise. We describe a probabilistic method for adapting classifiers at prediction time without having to retrain them. We also introduce a framework for creating realistically distributed image sequences, which offers a way to benchmark classifier adaptation methods, such as the one we propose. Experiments on the ILSVRC2010 and ILSVRC2012 datasets show that adapting object classification systems at prediction time can significantly reduce their error rate, even without additional human feedback. #### A Multi-Plane Block-Coordinate Frank-Wolfe Algorithm for Training Structural SVMs with a Costly max-Oracle IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015) • Shah • Kolmogorov • Lampert Structural support vector machines (SSVMs) are amongst the best-performing methods for structured computer vision tasks, such as semantic image segmentation or human pose estimation. Training SSVMs, however, is computationally costly, because it requires repeated calls to a structured prediction subroutine (called max-oracle), which has to solve an optimization problem itself, e.g. a graph cut. 
In this work, we introduce a new algorithm for SSVM training that is more efficient than earlier techniques when the max-oracle is computationally expensive, as is frequently the case in computer vision tasks. The main idea is to (i) combine the recent stochastic Block-Coordinate Frank-Wolfe algorithm with efficient hyperplane caching, and (ii) use an automatic selection rule for deciding whether to call the exact max-oracle or to rely on an approximate one based on the cached hyperplanes. We show experimentally that this strategy leads to faster convergence towards the optimum with respect to the number of required oracle calls, and that this also translates into faster convergence with respect to the total runtime when the max-oracle is slow compared to the other steps of the algorithm. A C++ implementation is provided at http://www.ist.ac.at/~vnk. #### Water Wave Animation via Wavefront Parameter Interpolation ACM Transactions on Graphics 34(3) • Jeschke • Wojtan We present an efficient wavefront tracking algorithm for animating bodies of water that interact with their environment. Our contributions include: a novel wavefront tracking technique that enables dispersion, refraction, reflection, and diffraction in the same simulation; a unique multivalued function interpolation method that enables our simulations to elegantly sidestep the Nyquist limit; a dispersion approximation for efficiently amplifying the number of simulated waves by several orders of magnitude; and additional extensions that allow for time-dependent effects and interactive artistic editing of the resulting animation. Our contributions combine to give us multitudes more wave details than similar algorithms, while maintaining high frame rates and allowing close camera zooms. @article{WWAvWPI2015, author = {Stefan Jeschke and Chris Wojtan}, title = {Water Wave Animation via Wavefront Parameter Interpolation}, journal = {ACM Transactions on Graphics}, year = {2015}, volume = {34}, number = {3}, pages = {1--14} } #### Recent Advances in Facial Appearance Capture Computer Graphics Forum 34(2) (Eurographics 2015) • Klehm • Rousselle • Papas • Hery • Bickel • Jarosz • Beeler Facial appearance capture is now firmly established within academic research and used extensively across various application domains, perhaps most prominently in the entertainment industry through the design of virtual characters in video games and films. While significant progress has occurred over the last two decades, no single survey currently exists that discusses the similarities, differences, and practical considerations of the available appearance capture techniques as applied to human faces. A central difficulty of facial appearance capture is the way light interacts with skin—which has a complex multi-layered structure—and the interactions that occur below the skin surface can, by definition, only be observed indirectly. In this report, we distinguish between two broad strategies for dealing with this complexity. “Image-based methods” try to exhaustively capture the exact face appearance under different lighting and viewing conditions, and then render the face through weighted image combinations. “Parametric methods” instead fit the captured reflectance data to some parametric appearance model used during rendering, allowing for a more lightweight and flexible representation but at the cost of potentially increased rendering complexity or inexact reproduction. 
The goal of this report is to provide an overview that can guide practitioners and researchers in assessing the tradeoffs between current approaches and identifying directions for future advances in facial appearance capture. #### A Dimension-reduced Pressure Solver for Liquid Simulations ACM Transactions on Graphics 34(2) (SIGGRAPH 2015) • Ando • Thürey • Wojtan #### Partial Shape Matching using Transformation Parameter Similarity Computer Graphics Forum, 33(8) • Guerrero • Auzinger • Wimmer • Jeschke In this paper, we present a method for non-rigid, partial shape matching in vector graphics. Given a user-specified query region in a 2D shape, similar regions are found, even if they are non-linearly distorted. Furthermore, a non-linear mapping is established between the query regions and these matches, which allows the automatic transfer of editing operations such as texturing. This is achieved by a two-step approach. First, point-wise correspondences between the query region and the whole shape are established. The transformation parameters of these correspondences are registered in an appropriate transformation space. For transformations between similar regions, these parameters form surfaces in transformation space, which are extracted in the second step of our method. The extracted regions may be related to the query region by a non-rigid transform, enabling non-rigid shape matching. @article{Guerrero-2014-TPS, author = {Paul Guerrero and Thomas Auzinger and Michael Wimmer and Stefan Jeschke}, title = {Partial Shape Matching using Transformation Parameter Similarity}, journal = {Computer Graphics Forum}, year = {2014}, volume = {33}, number = {8}, pages = {1--14}, issn = {1467-8659} } #### Closed-Form Approximate CRF Training for Scalable Image Segmentation European Conference on Computer Vision (ECCV 2014) • Kolesnikov • Guillaumin • Ferrari • Lampert We present LS-CRF, a new method for training cyclic Conditional Random Fields (CRFs) from large datasets that is inspired by classical closed-form expressions for the maximum likelihood parameters of a generative graphical model with tree topology. Training a CRF with LS-CRF requires only solving a set of independent regression problems, each of which can be solved efficiently in closed form or by an iterative solver. This makes LS-CRF orders of magnitude faster than classical CRF training based on probabilistic inference, and at the same time more flexible and easier to implement than other approximate techniques, such as pseudolikelihood or piecewise training. We apply LS-CRF to the task of semantic image segmentation, showing that it achieves accuracy on par with other training techniques at higher speed, thereby allowing efficient CRF training from very large training sets. For example, training a linearly parameterized pairwise CRF on 150,000 images requires less than one hour on a modern workstation. @article{kolesnikov2014closed, title={Closed-Form Approximate CRF Training for Scalable Image Segmentation}, author={Kolesnikov, Alexander and Guillaumin, Matthieu and Ferrari, Vittorio and Lampert, Christoph H}, journal={European Conference on Computer Vision (ECCV)}, year={2014} } #### Spin-It: Optimizing Moment of Inertia for Spinnable Objects ACM Trans. Graph. 33, 4 (SIGGRAPH 2014 Papers) • Baecher • Whiting • Bickel • Sorkine-Hornung Spinning tops and yo-yos have long fascinated cultures around the world with their unexpected, graceful motions that seemingly elude gravity. 
We present an algorithm to generate designs for spinning objects by optimizing rotational dynamics properties. As input, the user provides a solid 3D model and a desired axis of rotation. Our approach then modifies the mass distribution such that the principal directions of the moment of inertia align with the target rotation frame. We augment the model by creating voids inside its volume, with interior fill represented by an adaptive multi-resolution voxelization. The discrete voxel fill values are optimized using a continuous, nonlinear formulation. Further, we optimize for rotational stability by maximizing the dominant principal moment. We extend our technique to incorporate deformation and multiple materials for cases where internal voids alone are insufficient. Our method is well-suited for a variety of 3D printed models, ranging from characters to abstract shapes. We demonstrate tops and yo-yos that spin surprisingly stably despite their asymmetric appearance. #### Deep Fisher Kernels - End to End Learning of the Fisher Kernel GMM Parameters IEEE Computer Vision and Pattern Recognition (CVPR) • Sydorov • Lampert Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup. @inproceedings{ sydorov-cvpr2014, title = {Deep Fisher Kernels: Jointly Learning a Fisher Kernel SVM and its GMM Parameters}, booktitle = "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)", year = 2014, } #### Blending Liquids ACM Transactions on Graphics 33(4) • Raveendran • Wojtan • Thuerey • Turk #### A General Framework for Bilateral and Mean Shift Filtering ArXiv: 1405.4734 • Solomon • Crane • Butscher • Wojtan We present a generalization of the bilateral filter that can be applied to feature-preserving smoothing of signals on images, meshes, and other domains within a single unified framework. Our discretization is competitive with state-of-the-art smoothing techniques in terms of both accuracy and speed, is easy to implement, and has parameters that are straightforward to understand. 
Unlike previous bilateral filters developed for meshes and other irregular domains, our construction reduces exactly to the image bilateral filter on rectangular domains and comes with a rigorous foundation in both the smooth and discrete settings. These guarantees allow us to construct unconditionally convergent mean-shift schemes that handle a variety of extremely noisy signals. We also apply our framework to geometric edge-preserving effects like feature enhancement and show how it is related to local histogram techniques. #### Edit Propagation using Geometric Relationship Functions ACM Transactions on Graphics, 33(2) • Guerrero • Jeschke • Wimmer • Wonka We propose a method for propagating edit operations in 2D vector graphics, based on geometric relationship functions. These functions quantify the geometric relationship of a point to a polygon, such as the distance to the boundary or the direction to the closest corner vertex. The level sets of the relationship functions describe points with the same relationship to a polygon. For a given query point we first determine a set of relationships to local features, construct all level sets for these relationships and accumulate them. The maxima of the resulting distribution are points with similar geometric relationships. We show extensions to handle mirror symmetries, and discuss the use of relationship functions as local coordinate systems. Our method can be applied for example to interactive floor-plan editing, and is especially useful for large layouts, where individual edits would be cumbersome. We demonstrate populating 2D layouts with tens to hundreds of objects by propagating relatively few edit operations. @article{Guerrero-2014-GRF, author = {Paul Guerrero and Stefan Jeschke and Michael Wimmer and Peter Wonka}, title = {Edit Propagation using Geometric Relationship Functions}, journal = {ACM Transactions on Graphics}, year = {2014}, volume = {33}, number = {2}, pages = {15:1--15:15} } #### Putting Holes in Holey Geometry: Topology Change for Arbitrary Surfaces ACM Transactions on Graphics 32(4) (SIGGRAPH 2013) • Bernstein • Wojtan #### Liquid Surface Tracking with Error Compensation ACM Transactions on Graphics 32(4) (SIGGRAPH 2013) • Bojsen-Hansen • Wojtan Our work concerns the combination of an Eulerian liquid simulation with a high-resolution surface tracker (e.g. the level set method or a Lagrangian triangle mesh). The naive application of a high-resolution surface tracker to a low-resolution velocity field can produce many visually disturbing physical and topological artifacts that limit their use in practice. We address these problems by defining an error function which compares the current state of the surface tracker to the set of physically valid surface states. By reducing this error with a gradient descent technique, we introduce a novel physics-based surface fairing method. Similarly, by treating this error function as a potential energy, we derive a new surface correction force that mimics the vortex sheet equations. We demonstrate our results with both level set and mesh-based surface trackers. 
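Schematically, the fairing step is plain gradient descent on that error functional. A skeleton in numpy, our own illustration rather than the paper's implementation: `error_grad` is a placeholder for the gradient of the paper's energy, which compares the tracker state against the set of physically valid surface states.

```python
import numpy as np

def fair_surface(vertices, error_grad, step=1e-3, iters=50):
    """Gradient-descent surface fairing skeleton: move mesh vertices
    downhill on an error functional E(V).

    vertices   : (N, 3) array of surface vertex positions.
    error_grad : callable, V -> dE/dV with the same (N, 3) shape.
    """
    V = vertices.copy()
    for _ in range(iters):
        V -= step * error_grad(V)   # descend the error landscape
    return V
```

Interpreting the same error as a potential energy turns its negative gradient into a force, which is the view behind the surface correction force mentioned above.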
@article{LSTwEC2013, author = {Morten Bojsen-Hansen and Chris Wojtan}, title = {Liquid Surface Tracking with Error Compensation}, journal = {ACM Transactions on Graphics (SIGGRAPH 2013)}, year = {2013}, volume = {32}, number = {4}, pages = {79:1--79:10} } #### Controlling Liquids Using Meshes ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA 2012) • Raveendran • Thuerey • Wojtan • Turk #### Highly Adaptive Liquid Simulations on Tetrahedral Meshes ACM Transactions on Graphics 32(4) (SIGGRAPH 2013) • Ando • Thürey • Wojtan #### Computational Design of Mechanical Characters ACM Trans. Graph. 32, 4 (SIGGRAPH 2013 Papers) • Coros • Thomaszewski • Noris • Sueda • Forberg • Sumner • Matusik • Bickel We present an interactive design system that allows non-expert users to create animated mechanical characters. Given an articulated character as input, the user iteratively creates an animation by sketching motion curves indicating how different parts of the character should move. For each motion curve, our framework creates an optimized mechanism that reproduces it as closely as possible. The resulting mechanisms are attached to the character and then connected to each other using gear trains, which are created in a semi-automated fashion. The mechanical assemblies generated with our system can be driven with a single input driver, such as a hand-operated crank or an electric motor, and they can be fabricated using rapid prototyping devices. We demonstrate the versatility of our approach by designing a wide range of mechanical characters, several of which we manufactured using 3D printing. While our pipeline is designed for characters driven by planar mechanisms, significant parts of it extend directly to non-planar mechanisms, allowing us to create characters with compelling 3D motions. #### Tracking Surfaces with Evolving Topology ACM Transactions on Graphics 31(4) (SIGGRAPH 2012) • Bojsen-Hansen • Li • Wojtan We present a method for recovering a temporally coherent, deforming triangle mesh with arbitrarily changing topology from an incoherent sequence of static closed surfaces. We solve this problem using the surface geometry alone, without any prior information like surface templates or velocity fields. Our system combines a proven strategy for triangle mesh improvement, a robust multi-resolution non-rigid registration routine, and a reliable technique for changing surface mesh topology. We also introduce a novel topological constraint enforcement algorithm to ensure that the output and input always have similar topology. We apply our technique to a series of diverse input data from video reconstructions, physics simulations, and artistic morphs. The structured output of our algorithm allows us to efficiently track information like colors and displacement maps, recover velocity information, and solve PDEs on the mesh as a post process. 
@article{TSwET2012, author = {Morten Bojsen-Hansen and Hao Li and Chris Wojtan}, title = {Tracking Surfaces with Evolving Topology}, journal = {ACM Transactions on Graphics (SIGGRAPH 2012)}, year = {2012}, volume = {31}, number = {4}, pages = {53:1--53:10} } #### Explicit Mesh Surfaces for Particle Based Fluids Computer Graphics Forum 31 (Eurographics 2012) • Yu • Wojtan • Turk • Yap #### Liquid Simulation with mesh-based Surface Tracking ACM SIGGRAPH 2011 Courses • Wojtan • Müller-Fischer • Brochu #### Analysis of Human Faces using a Measurement-Based Skin Reflectance Model ACM Transactions on Graphics 25(3) (SIGGRAPH 2006) • Weyrich • Matusik • Pfister • Bickel • Donner • Tu • McAndless • Lee • Ngan • Jensen • Gross We have measured 3D face geometry, skin reflectance, and subsurface scattering using custom-built devices for 149 subjects of varying age, gender, and race. We developed a novel skin reflectance model whose parameters can be estimated from measurements. The model decomposes the large amount of measured skin data into a spatially-varying analytic BRDF, a diffuse albedo map, and diffuse subsurface scattering. Our model is intuitive, physically plausible, and – since we do not use the original measured data – easy to edit as well. High-quality renderings come close to reproducing real photographs. The analysis of the model parameters for our sample population reveals variations according to subject age, gender, skin type, and external factors (e.g., sweat, cold, or makeup). Using our statistics, a user can edit the overall appearance of a face (e.g., changing skin type and age) or change small-scale features using texture synthesis (e.g., adding moles and freckles). We are making the collected statistics publicly available to the research community for applications in face synthesis and analysis.
# Q : 7 The inner diameter of a circular well is $3.5\ m$. It is $10\ m$ deep. Find (ii) the cost of plastering this curved surface at the rate of Rs 40 per $m^2$.

Given, inner diameter of the circular well = $d = 3.5\ m$

Depth of the well = $h = 10\ m$

The inner curved surface area of the circular well is $\pi d h = \frac{22}{7} \times 3.5 \times 10 = 110\ m^2$

Now, the cost of plastering the curved surface per $m^2$ = Rs. 40

$\therefore$ Cost of plastering the curved surface of $110\ m^2$ = $Rs.\ (110 \times 40) = Rs.\ 4400$

Therefore, the cost of plastering the well is $Rs.\ 4400$
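For readers who like to check such mensuration results by machine, here is a minimal Python sketch using exact fractions (the $\pi \approx 22/7$ convention is the one this exercise assumes):

```python
from fractions import Fraction

pi = Fraction(22, 7)          # the approximation used in this exercise
d = Fraction(7, 2)            # inner diameter, 3.5 m
h = 10                        # depth, m
rate = 40                     # Rs per square metre

area = pi * d * h             # curved surface area of a cylinder: pi * d * h
print(area)                   # 110 (m^2)
print(area * rate)            # 4400 (Rs)
```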
# Batch PTA stopping condition I am reviewing my Neural Network lectures and I have a doubt: My book's (Haykin) batch PTA describes a cost function which is defined over the set of the misclassified inputs. I have always been taught to use MSE < X as a stopping condition for the training process. Is the batch case different? Should I use as stopping condition size(misclassified) < Y (and as a consequence when the weight change is very little)? Moreover, the book uses the same symbol for both the training set and the misclassified input set. Does this mean that my training set changes each epoch?
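Not an authoritative answer, but here is a minimal sketch of the batch perceptron loop as commonly described (the ±1 label convention and all names here are my own assumptions, not Haykin's notation). The point it illustrates: the training set itself never changes; the misclassified subset is recomputed from it each epoch, and the classical stopping rule is that this subset becomes empty (equivalently, the weight update vanishes). An MSE threshold belongs to a regression-style cost, not to the perceptron cost.

```python
import numpy as np

def batch_pta(X, y, lr=0.1, max_epochs=1000):
    """X: (n, d) array of inputs; y: array of labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mis = np.sign(X @ w) != y          # misclassified set, recomputed each epoch
        if not mis.any():                  # stopping condition: nothing misclassified
            break
        # batch update: one step using the sum over the CURRENT misclassified inputs
        w += lr * (y[mis, None] * X[mis]).sum(axis=0)
    return w
```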
# zbMATH — the first resource for mathematics An approximate functional equation for the Dirichlet $$L$$-function. (English. Russian original) Zbl 0195.33301 Translation from Tr. Mosk. Mat. Obshch. 18, 101–115 (Russian) (1968; Zbl 0167.31801). ##### MSC: 11M06 $$\zeta (s)$$ and $$L(s, \chi)$$
## Social Media

1. Obligatory passport photo \\ 2. First selfie in Portugal \\ 3. Second selfie (haha), the wonders of bathroom lighting \\ 4. New purchase \\ 5. OOTN \\ 6. Annual mini drive \\ 7. Blog post: The Stone Age \\ 8. Wheelbarrow of flowers \\ 9. Blog post: The Stone Age

1. Best crepe ever \\ 2. Blog post: Old Town \\ 3. Blog post: Old Town \\ 4. From where I stand \\ 5. Blog post: Garden of Wheelbarrows \\ 6. Evenings by the beach \\ 7. Blog post: Let's Go To The Beach, Beach \\ 8. Jump! \\ 9. Blog post: What's In My Beach Bag

1. Statement ring \\ 2. Blog post: Lace on Lace \\ 3. Blog post: Lace on Lace \\ 4. Parfois' new collection \\ 5. Blogging from bed \\ 6. Blog post: Lace on Lace \\ 7. Wild berry picking \\ 8. Blog post: Beach House \\ 9. Village festa (party) selfie

1. Another party selfie \\ 2. Arm candy \\ 3. Blog post: Beach House \\ 4. Hidden lake \\ 5. Blog post: Strawberry Nails \\ 6. Selfie with Magnum's new release (Marc de Champagne) \\ 7. Blog post: Wild Berries \\ 8. Selfie \\ 9. Ice-cream

1. Fake beachin' \\ 2. Blog post: Wild Berries \\ 3. Blog post: Hello Sunshine \\ 4. Blog post: Hello Sunshine \\ 5. #ThrowbackThursday \\ 6. Beautiful sky \\ 7. Blog post: Pink Skies \\ 8. Pizza \\ 9. Blog post: Pink Skies

Connect with me on Instagram (@peexo)

I thought it would be nice to show you what I got up to in Portugal, and what better way to do this than show you the snaps I posted on Instagram? I've uploaded so much this summer, and for those of you who don't follow me on Instagram, this is a roundup of everything I posted! I won't deny that I posted one too many selfies this summer, but I'm not gonna lie, there's a fab light in my bathroom and everything just looks on point hahaha!

I've had Instagram since its early days, but I mainly used it to edit a couple of photos and didn't think much else of it. It's crazy how much Instagram has blown up now and how big a platform it has become! With that being said, I've tried to use Instagram more and more, and this summer I really have! I've been in Portugal for a month now, so I've had plenty of time to settle in, and the natural sunlight has been great for photos. I really love posting little insights into my life, but I know that a lot of people don't like that (I say this because of how many people unfollow me daily). Rather than tweaking my Instagram to suit others, though, I want to keep using it as a more personal look into my life, with elements of fashion and blogging of course, as that's what I love.

I hope you liked this look into my Instagram life; I'm heading back to London tomorrow after holidaying in Portugal for over a month, so I'm excited to be back in London for a bit!

1. Amazing post!!! I am so jealous that you have been there for a month, but I hope you have enjoyed yourself xx animpatientscottishgirl.blogspot.co.uk

1. It doesn't feel like a month, so it seems a little crazy when I say it haha! So pleased that you like the post :) X

2. OH my gosh it all looks amazing. (Although I've already seen most of them on Insta) they look so incredible all put into one story! Love it! Kelsey x kelstagram.com

1. Haha, thanks! Well, I'm glad this post didn't bore you then :D X

3. I don't understand why people unfollow you just for talking a little about your life. Loved the photos, kisses*

1. Me neither, but that's the reality (at least in my case). I like it though, so I'll keep going, and whoever doesn't like it can unfollow at will ahaha! Thank you, happy to be speaking Portuguese :) X
## Monday, January 27, 2020

### Number Facts for Every Year Day (31-60) from "On This Day in Math"

The 31st day of the year: 31, the eleventh prime and third Mersenne prime, is also the sum of the first two primes raised to themselves: $$31 = 2^2 + 3^3$$. *Number Gossip (Is there another prime which is the sum of consecutive primes raised to themselves? A note from Andy Pepperdine of Bath informed me that $$2^2 + 3^3 + 5^5 + 7^7 = 826699$$, a prime.)

The sum of the first eight digits of pi = 3+1+4+1+5+9+2+6 = 31. *Prime Curios

There are only 31 numbers that cannot be expressed as the sum of distinct squares. *Prime Curios

31 is the number of regular polygons with an odd number of sides that are known to be constructible with compass and straightedge.

The numbers 31, 331, 3331, 33331, 333331, 3333331, and 33333331 are all prime. For a time it was thought that every number of the form 33...31 (a string of 3s followed by a 1) would be prime. However, the next nine numbers of the sequence are composite. *Wik

31 = 5^0 + 5^1 + 5^2 and also 31 = 2^0 + 2^1 + 2^2 + 2^3 + 2^4. Mario Livio says that there are only two known numbers that can be expressed as sums of consecutive powers of a number in two different ways. The second is 8191, which can be expressed as consecutive powers of two, and of ninety.

$$\pi^3 \approx 31$$ (31.006...)

31 is the minimum number of moves to solve the Towers of Hanoi problem with five discs. The general solution for any number of discs is a Mersenne number of the form 2^n - 1.

Jim Wilder @wilderlab offered: the sum of the digits of the 31st Fibonacci number (1346269) is 31.

If you like unusual speed limits, the speed limit in downtown Trenton, a small city in northwestern Tennessee, is 31 miles per hour. And the little teapot on the sign? Well, Trenton also bills itself as the teapot capital of the nation. The 31 mph road sign seems to come from a conflict between Trenton and a neighboring town, which I will not name... but I will tell you they think of themselves as the white squirrel capital.

31 is also the smallest integer that can be written as the sum of four positive squares in two ways: 1+1+4+25 and 4+9+9+9.

31 is an evil math teacher number: the sequence of the maximum number of regions obtained by joining n points around a circle by straight lines begins 1, 2, 4, 8, 16..., but for six points it is 31, not 32.

@JamesTanton posted a mathematical fact and query regarding 31: 31 = 111 (base 5) = 11111 (base 2) and 8191 = 111 (base 90) = 1111111111111 (base 2, thirteen ones) are the only two integers known to be repunits at least 3 digits long in two different bases. Is there an integer with representations 10101010..., at least three digits, in each of two different bases? Which made me wonder: are there other pairs that are repdigits (all alike, but not all units) in two (or more) different bases?

The 32nd day of the year: 32 is conjectured to be the highest power of two with all prime digits. *Number Gossip (Could 27 hold a similar property for powers of three?)

Also, 131 is the 32nd prime, and the sum of the digits of both numbers is 5. The pair 32 and 131 is the smallest n, p(n) with this property.

$$32 = 1^1 + 2^2 + 3^3$$

A Fermat prime is a prime number of the form $$2^{2^n}+1$$, and five are known (3, 5, 17, 257, 65537). Their product is $$2^{32}-1$$. [David Marain pointed out that the products of the first n Fermat primes are all expressible in this form: $$3\times 5 = 2^4-1$$, $$3\times 5\times 17 = 2^8-1$$, and $$3\times 5\times 17\times 257 = 2^{16}-1$$.]
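Marain's pattern is quick to confirm by machine; a minimal Python sketch:

```python
fermat_primes = [3, 5, 17, 257, 65537]     # the five known Fermat primes
product = 1
for n, p in enumerate(fermat_primes, start=1):
    product *= p
    assert product == 2**(2**n) - 1        # 3 = 2^2-1, 15 = 2^4-1, ..., up to 2^32-1
print(product)                             # 4294967295, i.e. 2^32 - 1
```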
32! - 1 and 33! - 1 are both primes.

On an 8x8 chessboard, the longest closed non-crossing knight's path is 32 moves.

The integers 1 through n = 32 can be arranged in a circle so that every adjacent pair sums to a perfect square. (Try it! A search sketch appears at the end of day 60's entry below.) Can those numbers be arranged in a circle such that any THREE adjacent numbers sum to a perfect square? *HT Matt Enlow

The 33rd day of the year: among the infinity of integers, there are only six that cannot be formed by the addition of distinct triangular numbers. The largest of these is 33. What are the other five?

33 = 1! + 2! + 3! + 4! *jim wilder @wilderlab

33 is the smallest n such that n, n+1, and n+2 are all semiprimes, the products of two primes. *Bob S McDonald

The 33-letter Dutch word nepparterrestaalplaatserretrappen is the longest palindrome I know in any language. It means fake stairways from the ground floor to the sun lounge, made of steel plate. The shorter Finnish word "saippuakauppias," for a soap vendor, is the longest single-word palindrome in the world that is in everyday use. *Wiktionary

$$10^{33}$$ is the largest known power of ten that can be expressed as the product of two factors neither of which contains a zero: $$10^{33} = 2^{33} \times 5^{33}$$ = 8,589,934,592 x 116,415,321,826,934,814,453,125. *Cliff Pickover @pickover

33 is the smallest odd number n such that n + x! is not prime for any number x.

33 is the smallest two-digit palindrome in base ten which is also a palindrome in a smaller base.

The 34th day of the year: 34 is the smallest integer such that it and both its neighbors are the product of the same number of primes.

34 is the smallest number which can be expressed as the sum of two primes in four ways. *Prime Curios

A 4x4 magic square using the integers 1 to 16 has a magic constant of 34. An early example is in the tenth-century Parshvanath Jain temple in Khajuraho. A photograph of it was taken by Debra Gross Aczel, the wife of the late Amir D. Aczel, who used the image in his last book, Finding Zero. 4x4 magic squares were written about in India by a mathematician named Nagarjuna as early as the first century.

The 35th day of the year: there are 35 hexominoes, the polyominoes made from 6 squares. *Number Gossip (I only recently learned that, although a complete set of 35 hexominoes has a total of 210 squares, which offers several possible rectangular configurations, it is not possible to pack the hexominoes into a rectangle.)

The longest open uncrossed (doesn't cross its own path) knight's path on an 8x8 chessboard is 35 moves. (The longest such cycle, ending where you start, is only 32 moves.)

In base 35 (A=10, B=11, etc.) NERD is prime: $$23\times 35^3+14\times 35^2+27\times 35+13 = 1004233$$. Chaw wrote, "Re. the observation that "NERD" is prime in base 35: I think base 36 is a lot more natural than base 35, given the conventional 10 digits and 26 Latin letters, which makes the following more interesting: NERDIEST is prime in base 36."

The 36th day of the year: 36 is the smallest non-trivial number which is both triangular and square. It's also the largest day number of the year which is both. What's the next one? You can find an infinity of them using a beautiful formula from Euler (hat tip to Vincent Pantaloni @panlepan); a small recurrence sketch appears just below.

36 is the sum of the first three cubes, $$1^3 + 2^3 + 3^3 = 36$$. The sum of the first n cubes is always a square number: $$\sum_{k=1}^n k^3 = \left(\frac{n(n+1)}{2}\right)^2$$ Note that this sequence and its formula were known to (and possibly discovered by) Nicomachus, 100 CE.
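Euler's formula itself did not survive the trip to this page, but the same square triangular numbers are easy to generate with the standard recurrence (each term is 34 times the previous term, minus the one before that, plus 2); a sketch:

```python
from math import isqrt

a, b = 0, 1                               # consecutive square triangular numbers
for _ in range(6):
    a, b = b, 34*b - a + 2                # standard square-triangular recurrence
    assert isqrt(b)**2 == b               # square...
    assert isqrt(8*b + 1)**2 == 8*b + 1   # ...and triangular (8T+1 is a perfect square)
    print(b)                              # 36, 1225, 41616, 1413721, 48024900, 1631432881
```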
The sum of the first 36 integers, $$\sum_{k=1}^{36} k = 666$$, is the so-called "number of the beast." Notice that the first $$6^2$$ consecutive integers sum to a repdigit triangular number, 666.

36 itself is the last year day which is both a square and a triangular number. The next square that is also triangular is 1225, the square of 36 - 1 = 35.

36 is also the smallest triangular number that is the sum of two consecutive triangular numbers (15 + 21).

And Mario Livio pointed out in a tweet that Feb 5 is 5/2 in European-style dating, and 52 is the maximum number of moves needed to solve the "15" sliding puzzle from any solvable position.

The kiwi's seeds divide the circle into 36 equal sections. Nature's protractor. *Matemolivares @Matemolivares

A special historical tribute to 36: the thirty-six officers problem is a mathematical puzzle proposed by Leonhard Euler in 1782. He asked if it were possible to place officers of six ranks from each of six regiments in a 6x6 square so that no row or column would have an officer of the same rank, or the same regiment. Euler suspected that it could not be done. Euler knew how to construct such squares for nxn when n was odd, or a multiple of four, and he believed that all such squares with n = 4m+2 (6, 10, 14, ...) were impossible. (Euler didn't say it couldn't be done; he just said that his method does not work for numbers of that form.) Proof that he was right for n = 6 took a while. The French mathematician (and obviously a very patient man) Gaston Tarry proved it in 1901 by the method of exhaustion: he wrote out each of the 9408 reduced 6x6 squares and found that none of them worked. Then in 1959, R. C. Bose and S. S. Shrikhande proved that all the others could be constructed. So the thirty-six square is the only one that can't be done.

Jim Wilder sent $$36^2 = 1296$$, whose digits split as 1 + 29 + 6 = 36.

Touchard (1953) proved that an odd perfect number, if it exists, must be of the form 12k+1 or 36k+9. *Wolfram MathWorld

36^4 = 1679616, and the sum of its digits is 36. It is the largest number n for which the sum of the digits of n^4 is equal to n. There are four smaller numbers (not counting 1) which have this property: 7, 22, 25, and 28. And 36^5 = 60466176, which also has a digit sum of 36. I haven't found many other numbers where the digit sums of N^a and N^b both equal N for two different powers (but see day 46 below).

The 37th day of the year: 37 is the only prime with a three-digit period for the decimal expansion of its reciprocal: 1/37 = .027027... But 37 has a strange affinity with 27, which also has a three-digit period for its reciprocal, .037037... The affinity, of course, is due to 27 x 37 = 999. Can we call numbers like this amicable reciprocals? And Alex Kontorovich found 1/1287 = .000777000777..., and, yeah, 1/777 = .001287001287... Now you can find some of your own (and make sure to send me a note)!

Speaking of prime periods, students should know that the longest repeating decimal for the reciprocal of a prime number p is p-1 digits. It seems that about 37% of the primes reach this maximum, but not 37, as mentioned above.
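If you want to hunt for maximum-period primes yourself, the period is just the multiplicative order of 10 mod p; a short sketch:

```python
def period(p):
    """Length of the repeating block of 1/p (p > 1, coprime to 10)."""
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

print(period(7), period(37), period(41))   # 6 3 5: 7 reaches the maximum p-1; 37 and 41 do not
```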
Big Prime::: n = the integer whose digits are (left to right) 6424 copies of 37, followed by a units digit of 3, is prime (n = 3737...373 has 12849 digits). *Republic of Math

An amazing reversal: 37 is the 12th prime, and 73 is the 21st prime. This enigma is the only known combination.

37 is the last year day such that the sum of the squares of the first n primes is divisible by n. There are only three such numbers among the days of the year; two of them are primes themselves.

If you use multiplication and division operations to combine Fibonacci numbers (for example, 4 = 2^2, 6 = 2·3, 7 = 21/3, ...) you can make almost any other number. Almost, but you can't make 37. In fact, there are 12 numbers less than 100 that cannot be expressed as "Fibonacci integers." *Carl Pomerance, et al.

37! + 1 is a prime. It is the sixth year day for which this is true. There are only 13 year days for which n! + 1 is prime.

To represent every integer as a sum of fifth powers requires at most 37 integers.

The last odd Roman numeral alphabetically is XXXVII (37). *Prime Curios

The 38th day of the year: 31415926535897932384626433832795028841 is a prime number. BUT, it's also the first 38 digits of pi.

38 is the largest even number such that every partition of it into two odd integers must contain a prime.

38 is the largest even number that can be expressed as the sum of two distinct primes in only one way (31 + 7).

38 is the sum of the squares of the first three primes: $$2^2 + 3^2 + 5^2 = 38$$. *Prime Curios

Although we've had some unusually shaped flags, usually the star field is in a rectangle with the stars displaying some kind of (generally rectangular) similarity. Some have strayed greatly from the rectangle form, however; the 38-star flag, used from 1877 until 1890, is an example.

At the beginning of the 21st century there were 38 known Mersenne primes. As of this writing, there are 51, the last being discovered in December of 2018.

38 is also the magic constant in the only possible magic hexagon which utilizes all the natural integers up to and including 19. It was discovered independently by Ernst von Haselberg in 1887, W. Radcliffe in 1895, and several others. Eventually it was also discovered by Clifford W. Adams, who worked on the problem from 1910 to 1957. He worked on it, by trial and error, throughout his career as a freight-handler and clerk for the Reading Railroad, and after many years arrived at the solution, which he transmitted to Martin Gardner in 1963. Gardner sent Adams' magic hexagon to Charles W. Trigg, who by mathematical analysis found that it was unique disregarding rotations and reflections. *Wik

The 39th day of the year: 39 is the smallest number with multiplicative persistence 3. [Multiplicative persistence is the number of times the digits must be multiplied until they produce a one-digit number: 3(9) = 27; 2(7) = 14; 1(4) = 4. Students might try to find the smallest number with multiplicative persistence of four, or prove that no number has multiplicative persistence greater than 11.]

39 = 3¹ + 3² + 3³ *jim wilder @wilderlab

An Armstrong (or pluperfect digital invariant) number is a number that is the sum of its own digits each raised to the power of the number of digits. For example, 371 is an Armstrong number since $$3^3+7^3+1^3 = 371$$. The largest Armstrong number in decimal has 39 digits: 115,132,219,018,763,992,565,095,597,973,522,401. (Armstrong numbers are named for Michael F. Armstrong, who named them for himself as part of an assignment to his class in Fortran programming at the University of Rochester.)
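The definition translates directly into a short search; a sketch:

```python
def is_armstrong(n):
    s = str(n)
    return n == sum(int(d) ** len(s) for d in s)

print([n for n in range(1, 10000) if is_armstrong(n)])
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208, 9474]
```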
I find it interesting that 39 = 3 x 13, and it is also the sum of all the primes from 3 to 13: 39 = 3 + 5 + 7 + 11 + 13. Numbers like these are sometimes called straddled numbers.

39 is the smallest positive integer which cannot be formed from the first four primes (used once each), using only the simple operations +, -, *, / and ^. *Prime Curios

The number formed by concatenating the non-prime integers 1 through 39 is the smallest such prime: 1468910121415161820212224252627283032333435363839. *Prime Curios

$$3^{39} = 4052555153018976267$$ is the smallest power of three which is pandigital, containing all ten decimal digits. The number is 19 digits long. *@Fermat's Library

The 40th day of the year: in English, forty is the only number whose letters are in alphabetical order.

There are 40 solutions for the 7-queens problem: placing seven chess queens on a 7x7 chessboard so that no two queens threaten each other.

-40 is the temperature at which the Fahrenheit and Celsius scales correspond; that is, −40 °F = −40 °C.

Euler first noticed (in 1772) that the quadratic polynomial P(n) = n^2 + n + 41 is prime for all non-negative integers n less than 40.

Paul Halcke noted in 1719 that the product of the aliquot parts of 40 is equal to 40 cubed: 1·2·4·5·8·10·20 = 64000 = 40^3. He found the same is true for 24.

And... forty is the highest number ever counted to on Sesame Street.

40 = 2^3 x 5, using the first three primes in order.

The 41st day of the year: Euler (1772) observed that the polynomial f(x) = x^2 + x + 41 will produce a prime for any integer value of x in the interval 0 to 39. In 1778 Legendre realized that x^2 - x + 41 will give the same primes for the interval (1-40).

n^2 + n + 41 is prime for n = 0 ... 39 and is prime for nearly half the values of n up to 10,000,000. *John D. Cook

41 is the smallest prime whose cube can be written as a sum of three cubes in two ways (41^3 = 40^3 + 17^3 + 2^3 = 33^3 + 32^3 + 6^3). *Prime Curios

If you multiply 41 by any three-digit number to produce a five-digit number, every cyclic representation of that number formed by moving the last digit to the front is also divisible by 41. (For example, 41 x 378 = 15,498, and 41 also divides 81,549; 98,154; 49,815; and 54,981.) *The Moscow Puzzles

41 can be expressed as the sum of consecutive primes in two ways: 2 + 3 + 5 + 7 + 11 + 13, and 11 + 13 + 17.

The sum of the digits of 41 (5) is the period length of its reciprocal, 1/41 = .0243902439... It is the smallest number with a period length of five.

41 is the largest known prime formed by the sum of the first Mersenne primes in logical order (3 + 7 + 31). *Prime Curios

Incredibly, if you take any two positive integers that sum to 41, a + b = 41, then a^2 + b is a prime. (This is Euler's polynomial in disguise: a^2 + (41 - a) = a^2 - a + 41.)

Starting with 41, if you add 2, then 4, then 6, then 8, etc., you will have a string of 40 straight prime numbers. *Prime Curios

The 41st Mersenne prime to be found = 2^24036583 - 1. *Prime Curios

And even more, from @MathYearRound: 41 = 1! + 2! + 3! + 1¹ + 2² + 3³.

The 42nd day of the year: in The Hitchhiker's Guide to the Galaxy, the Answer to the Ultimate Question of Life, the Universe, and Everything is 42. The supercomputer Deep Thought, specially built for this purpose, takes 7½ million years to compute and check the answer. The Ultimate Question itself is unknown.
There is only one scalene triangle in simplest terms with integer sides and an integer area of 42; its perimeter is also 42. (There are only three non-right triangles possible with all integer sides and area equal to perimeter.)

42 is between a pair of twin primes (41, 43), and its concatenation with either of them (4241, 4243) is also prime, which means that 4242 is also between twin primes.

On September 6, 2019, Andrew Booker, University of Bristol, and Andrew Sutherland, Massachusetts Institute of Technology, found a sum of three cubes for 42: $$42 = (-80538738812075974)^3 + 80435758145817515^3 + 12602123297335631^3$$. This leaves 114 as the lowest unsolved case. At the beginning of 2019, 33 was the lowest unsolved case, but Booker solved that one earlier in 2019.

42 is the largest number n such that there exist positive integers p, q, r with 1 = 1/n + 1/p + 1/q + 1/r (here 1 = 1/2 + 1/3 + 1/7 + 1/42).

In 1954, researchers at the University of Cambridge looked for solutions of the equation x^3 + y^3 + z^3 = k, with k being all the numbers from 1 to 100. As of late 2019, all numbers had been solved except 42, which proved to be especially challenging. That is, until University of Bristol's Professor Andrew Booker and MIT Professor Andrew Sutherland solved the equation with the help of @CharityEngine, a crowdsourcing platform that harnesses idle, unused computing power from more than 500,000 home PCs. *Mehmet Aslan

Given 27 same-size cubes whose nominal values progress from 1 to 27, a 3 × 3 × 3 magic cube can be constructed such that every row, column, and corridor, and every diagonal passing through the center, is composed of 3 numbers whose sum of values is 42.

The 43rd day of the year: the McNuggets version of the coin problem was introduced by Henri Picciotto, who included it in his algebra textbook co-authored with Anita Wah. Picciotto thought of the application in the 1980s while dining with his son at McDonald's, working the problem out on a napkin. A McNugget number is the total number of McDonald's Chicken McNuggets in any number of boxes. The original boxes (prior to the introduction of the Happy Meal-sized nugget boxes) were of 6, 9, and 20 nuggets. According to Schur's theorem, since 6, 9, and 20 are relatively prime, any sufficiently large integer can be expressed as a linear combination of these three. Therefore, there exists a largest non-McNugget number, and all integers larger than it are McNugget numbers. That number is 43, so how many of each size box gives the McNugget number 44? (A small search sketch appears at the end of this entry.)

43 is the number of seven-ominoids (shapes made with seven equilateral triangles sharing common edges).

In March of 1950, Claude Shannon calculated that there are approximately $$\frac{64!}{32!\,(8!)^2\,(2!)^6}$$, or roughly 10^43, possible positions in a chess match. Planck time (~10^-43 seconds) is the smallest measurement of time within the framework of classical mechanics. That means that if you could make one unique chess position in each Planck time, you could run through them all in one second.

What is the minimum number of guests that must be invited to a party so that there are either five mutual acquaintances or five that are mutual strangers? (Sorry, we still don't know :-{ But the smallest number must be 43 or larger.) I think that means that for any number of points on a circle less than 43, if you colored every segment connecting two of them either red or black, there could be a coloring in which no complete graph on five vertices (K5) has all its edges the same color. [And there are 43 choose 5, or 962,598, possible complete graphs of five vertices to check.]
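As promised, a small dynamic-programming sketch for the McNugget question (box sizes 6, 9, 20):

```python
LIMIT = 100
can = [True] + [False] * LIMIT          # can[t]: can t nuggets be bought exactly?
for t in range(1, LIMIT + 1):
    can[t] = any(t >= b and can[t - b] for b in (6, 9, 20))

print(max(t for t in range(LIMIT + 1) if not can[t]))   # 43, the largest non-McNugget number
# and 44 itself: one box of 6, two boxes of 9, one box of 20 (6 + 9 + 9 + 20 = 44)
```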
According to Benford's law, the odds that a random prime begins with a prime digit are more than 43%.

Every solvable configuration of the Fifteen puzzle can be solved in no more than 43 multi-tile moves (i.e., when moving two or three tiles at once is counted as one move).

43/100, or more exactly 3/7, shows up in a good approximation for the area of an equilateral triangle: Gerbert of Aurillac (later Pope Sylvester II) referred to the equilateral triangle as "mother of all figures" and provided the formula A ≈ s^2 · (3/7), which estimates its area in terms of the length of its side to within about 1.003% (N. M. Brown, The Abacus and the Cross, Basic, 2010, p. 109). 3/7 = .428571...; sqrt(3)/4 = .433012...

43 is the smallest (non-trivial) number that is equal to the sum of consecutive powers of its digits: $$4^2 + 3^3 = 43$$. There are two more two-digit numbers that fit this pattern. There are also two three-digit year dates that fit if you restrict the powers to consecutive integers starting with 1.

And if 42 was the meaning of life, the universe, and everything, just imagine that 43 is MORE than that!

Jim Wilder shared $$43^7 = 271818611107$$, which has a digit sum of 43. I wonder how often you can find a multi-digit number n so that n^k (for some k = 2 through 9) will have a digit sum of n, or a digit sum which is m·n for some integer m. 27^3 = 19683 and 27^7 = 10460353203 came to mind. Rare, or not rare? When I checked 26^3, it also worked, but not the seventh power. 53^7 = 1174711139837 also works, it seems. (A search sketch appears under day 54 below.)

The 44th day of the year: there are 44 ways to reorder the numbers 1 through 5 so that none of them is in its natural place. This is called a derangement. The number of derangements of n items is an interesting study for students.

If you had five letters for five different people and five envelopes addressed to the five people, there are 44 ways to put every letter in the wrong envelope.

44 is the sum of the first emirp pair (primes that are also prime with their digits reversed): 13 and 31. *Prime Curios

44 is the smallest number such that it and the next number are each the product of a prime and the square of another distinct prime (44 = 2^2 x 11 and 45 = 3^2 x 5).

All even perfect numbers greater than 6 end in 44 in base six, as do all powers of ten greater than 10. *Lord Karl Voldevive @Karl4MarioMugan (Students should be encouraged to understand that the converse of these statements is not true by finding exceptions.)

44 is a palindrome in base ten, but not in any smaller base. Only three of the nine two-digit palindromes in base ten are palindromes in any smaller base. Find them!

An Euler brick, named after Leonhard Euler, is a cuboid whose edges and face diagonals all have integer lengths. A primitive Euler brick is an Euler brick whose edge lengths are relatively prime. The smallest Euler brick, discovered by Paul Halcke in 1719, has edges (a, b, c) = (44, 117, 240) and face diagonals 125, 244, and 267.

The 45th day of the year: 45 is the third Kaprekar number (45^2 = 2025 and 20 + 25 = 45). The next two Kaprekar numbers both have two digits; can you find them?

More unusual, it is also a Kaprekar number with third powers: 45^3 = 91125 and 9 + 11 + 25 = 45. But wait! There's more: 45^4 = 4,100,625 and, yes, 4 + 10 + 06 + 25 = 45. There is no other number known that is a Kaprekar number in all three powers.
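One convenient way to read those splits is as two-digit blocks taken from the right; under that (assumed) reading, a quick sketch confirms all three powers of 45:

```python
def block_sum(n, k, width=2):
    """Sum of the decimal digits of n**k taken in blocks of `width` from the right."""
    s, total = str(n ** k), 0
    while s:
        total += int(s[-width:])
        s = s[:-width]
    return total

for k in (2, 3, 4):
    print(k, 45 ** k, block_sum(45, k) == 45)   # True for k = 2, 3, and 4
```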
45 is the 9th triangular number, the sum of the integers from 1 through 9.

45 - 2^n is prime for each n from 1 through 5 (43, 41, 37, 29, 13).

I found these on a post at the Futility Closet by Greg Ross:
45^2 = 2025 and 20 + 25 = 45
45^3 = 91125 and 9 + 11 + 25 = 45
45^4 = 4100625 and 4 + 10 + 06 + 25 = 45

45 is a palindrome in base 2 (101101) and base 8 (55).

44 and 45 form the first pair of consecutive numbers that are each the product of a prime and the square of a prime: 44 = 2^2 x 11 and 45 = 3^2 x 5.

The 45th row of Pascal's arithmetic triangle has 30 even numbers; the 60th row has 45 even numbers.

45 is the smallest odd number n that has more divisors than n+1 and that has a larger sum of divisors than n+1.

The 45th parallel, halfway between the North Pole and the Equator, runs just outside my family home in Elk Rapids, Michigan.

The 46th day of the year: there are 46 fundamental ways to arrange nine queens on a 9x9 chessboard so that no queen is attacking any other. (Can you find solutions for smaller boards?)

46 is the largest even integer that cannot be expressed as a sum of two abundant numbers.

46 can be expressed as a sum of primes using the digits 1 to 4 once each: 46 = 41 + 3 + 2. The same can be done for its reversal: 64 = 41 + 23.

46 is the ninth "lazy caterer" number: the maximum number of pieces that can be formed with 9 straight cuts across a pancake.

46 is the number of integer partitions of 18 into distinct parts.

46 is a palindrome in both base 4 and base 5.

On Oct 29, 2008, the 46th discovered Mersenne prime, then the world's largest prime, was featured in Time magazine as one of the "great inventions" of the year. It was discovered by Smith, Woltman, Kurowski, et al. of the GIMPS (Great Internet Mersenne Prime Search) program. Three more have been discovered since, one of which is smaller than this one, so while it was the 46th discovered, it is 47th in rank.

46^5 = 205962976, with a digit sum of 46, and 46^8 = 20047612231936, also with a digit sum of 46. 46 is the second smallest number with two expressions of the form digit-sum(n^k) = n.

The 47th day of the year: 47 is a Thabit number, named after the Iraqi mathematician Thâbit ibn Kurrah, of the form 3·2^n - 1 (sometimes called 3-2-1 numbers). He studied their relationship to amicable numbers. 47 is related to the amicable pair (17296, 18416). All Thabit numbers expressed in binary end in 10 followed by n ones; 47 in binary is 101111. (The rule is that if p = 3·2^(n-1) - 1, q = 3·2^n - 1, and r = 9·2^(2n-1) - 1 are all prime, then 2^n·p·q and 2^n·r are amicable numbers.)

3^3^3^3^3^3^3 has 47 distinct values depending on parentheses. *Math Year-Round @MathYearRound

666^47 has a sum of digits equal to the beast number, 666. *Prime Curios

47^9 can be written as the sum of distinct smaller 9th powers. *Prime Curios

"The 47 Society is an international interest-group that follows the occurrence and recurrence of the quintessential random number: 47. Many suspect that the coincidental nature of 47 carries some mystical, metaphysical and/or scientific significance." *http://www.47.net/47society/

Mario Livio has pointed out that this date, written month/day as 2/16, gives 216 = 6^3, and also 216 = 3^3 + 4^3 + 5^3.

The 47th day gives me a reason to include this brief story of Thomas Hobbes from Aubrey's Brief Lives.
The 47th proposition of Liber I of The Elements (the Pythagorean theorem) seemed so obviously false to him that, in following the reasoning back, his life was changed:

He was (vide his life) 40 yeares old before he looked on geometry; which happened accidentally. Being in a gentleman's library in . . . , Euclid's Elements lay open, and 'twas the 47 El. libri I. He read the proposition. 'By G—,' sayd he, 'this is impossible!' So he reads the demonstration of it, which referred him back to such a proposition; which proposition he read. That referred him back to another, which he also read. Et sic deinceps (and so back to the beginning), that at last he was demonstratively convinced of that trueth. This made him in love with geometry.

The 48th day of the year: 48 is the smallest number with exactly ten divisors. (This is an interesting sequence for students to explore; finding the smallest number with twelve divisors will be easier than finding the one with eleven.)

48 is also the smallest even number that can be expressed as a sum of two primes in 5 different ways: 5 + 43, 7 + 41, 11 + 37, 17 + 31, and 19 + 29.

If n is greater than or equal to 48, then there exists a prime between n and 9n/8. This is an improvement on a conjecture known as Bertrand's postulate. In spite of the name, many students remember it by the little rhyme, "Chebyshev said it, but I'll say it again: There's always a prime between n and 2n." Mathematicians have lowered the 2n down to something like n + n^0.6 for sufficiently large numbers.

48 is the smallest betrothed (quasi-amicable) number. 48 and 75 are a betrothed pair, since the sum of the proper divisors of 48 is 75 + 1 = 76 and the sum of the proper divisors of 75 is 48 + 1 = 49. (There is only a single other pair of betrothed numbers that can be a year day.)

And 48 x 48 = 2304, but 48 x 84 = 4032: reverse one factor and the digits of the product reverse too. (Others like this???)

If you picked four prime numbers so that any collection of three of them had a prime sum, the smallest sum you could get by adding all four primes is 48: (5, 7, 17, 19). Can you find the next smallest? (Suitable for middle school students to explore, as there are many with modest-size numbers.)

In 1719 Paul Halcke observed that the product of the aliquot divisors of 48 is equal to the fourth power of 48: 1·2·3·4·6·8·12·16·24 = 5,308,416 = 48^4. 48 and 80 are the only two year dates for which this is true.

48 is a Harshad number, from the Sanskrit for "joy-giver," since it is divisible by the sum of its digits. It is also one of the numbers cubed in the 11th taxicab number: 110656 = 40^3 + 36^3 = 48^3 + 4^3.

48 x 159 = 7632, a product using all nine nonzero digits.

The 49th day of the year: lots of numbers are squareful (divisible by a square greater than 1), but 49 is the smallest number such that it and both its neighbors are squareful. (Many interesting questions arise for students: What's next? Can there be four in a row? Etc.)

And Prof. William D. Banks of the University of Missouri has recently (August 2015) proved that every integer in base ten is the sum of 49 or fewer palindromes. (Building on Prof. Banks' groundbreaking work, by February 22, 2016, Javier Cilleruelo and Florian Luca had proved that for any base greater than 4, every positive integer is a sum of three palindromes.)

The 49th Mersenne prime is discovered: on Jan 19th, 2016, the GIMPS program announced a new "largest known" prime, $$2^{74207281} - 1$$, called M74207281 for short; the number has 22,338,618 digits.
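That digit count needn't be taken on faith; it is just $$\lfloor p \log_{10} 2 \rfloor + 1$$, a one-line check:

```python
from math import floor, log10

p = 74_207_281
print(floor(p * log10(2)) + 1)   # 22338618 digits, matching the announcement
```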
49 is the smallest square which is the sum of three consecutive primes: 49 = 13 + 17 + 19.

49 is the first square whose digits are all squares. What's next?

1, 25, 49 is the smallest arithmetic progression of three squares that I have ever found. 4, 100, and 196 come next. Is there one starting with nine? It is proven that an arithmetic progression of four squares is not possible.

If you square 49 and take the sum of the digits of that square, you get 7, the square root of 49. How common is this?

Students are reminded that 8 x 6 + 1 = 49, an example of the beautiful mathematical truth that 8T + 1 is a square for any triangular number T.

1/49 = 0.0204081632 6530612244 8979591836 7346938775 51, and then it repeats the same 42 digits. It's better than it looks: write down all the powers of two, index each one two digits to the right, and add. *Wik

Everyone knows 25 is the hypotenuse of a Pythagorean right triangle with legs of 7 and 24. A Pythagorean triangle can never have two sides that are squares. When a square occurs as the shorter leg, an interesting pattern occurs:

leg | leg | hypotenuse
9 | 40 | 41
25 | 312 | 313
49 | 1200 | 1201
81 | 3280 | 3281

Good geometry students may already know that any odd number n larger than one (including all the squares above) is the short leg of a right triangle with a difference of one between the other leg and the hypotenuse. The square of the odd leg is the sum of the other leg and the hypotenuse:

ODD leg | even leg | hypotenuse
3 | 4 | 5, and 4 + 5 = 3^2
5 | 12 | 13, and 12 + 13 = 5^2
7 | 24 | 25, and 24 + 25 = 7^2

The 50th day of the year: 50 is the smallest number that can be written as the sum of two squares in two distinct ways: 50 = 49 + 1 = 25 + 25. *Tanya Khovanova, Number Gossip (What is the next? Or, what is the smallest number that can be written as the sum of two squares in three distinct ways?)

It is also the sum of three squares, 3^2 + 4^2 + 5^2 = 50, and of four squares, 1^2 + 2^2 + 3^2 + 6^2 = 50.

You can split the first nine primes into two sets whose sums are both 50: 50 = 2 + 5 + 7 + 17 + 19 = 3 + 11 + 13 + 23.

The number 50 is somewhat responsible for the area of number theory about partitions. In 1740, Philip Naudé the younger (1684-1747) wrote Euler from Berlin to ask "how many ways can the number 50 be written as a sum of seven different positive integers?" Euler would give the answer, 522, within a few days, but would return to the problem of various types of partitions throughout the rest of his life.

There is no solution to the equation φ(x) = 50, making 50 a nontotient (there is no integer k that has exactly 50 numbers below it sharing no divisor with k other than 1).

The 51st day of the year: 51 is the number of different paths from (0,0) to (6,0) made up of segments connecting lattice points that can only have slopes of 1, 0, or -1, and that never go below the x-axis. These are called Motzkin numbers.

$$\pi(51) = 15$$: the number of primes less than 51 is given by its reversal, 15, and both numbers are products of Fermat primes.
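The Motzkin-path count above is easy to verify with a short dynamic program over heights; a sketch:

```python
def motzkin(m):
    """Paths (0,0)->(m,0) with steps (1,1), (1,0), (1,-1), never below the x-axis."""
    ways = {0: 1}                                   # height -> number of partial paths
    for _ in range(m):
        nxt = {}
        for h, c in ways.items():
            for dh in (1, 0, -1):
                if h + dh >= 0:
                    nxt[h + dh] = nxt.get(h + dh, 0) + c
        ways = nxt
    return ways.get(0, 0)

print([motzkin(m) for m in range(7)])               # [1, 1, 2, 4, 9, 21, 51]
```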
Jim Wilder pointed out that 51 is the smallest number that can be written as a sum of primes with the digits 1 to 5 each used once: 2 + 3 + 5 + 41 = 51. (Students might explore similar problems using the first n digits for n = 2 through 9.)

A triangle with sides 51, 52, and 53 has an integer area, 1170 units^2. Triangles with integer sides and integer area are called Heronian triangles, and I prefer to call those with consecutive integer sides Fleenor-Heronian triangles, after the earliest study of them I have found. (Guess I shouldn't be, but I am surprised that all of the triangles I could find with consecutive integer sides and integer area have final digits of 1,2,3 or 3,4,5.) There are an infinite number of these. To find the even (middle) side, just take the expansion of $$(2 + \sqrt{3})^n$$, sum the rational terms, then double the result. The first three rational sums are 2, 7, and 26, giving even sides 4, 14, and 52: the 3,4,5 triangle, the 13,14,15 triangle, and the triangle above. For example, if you expand $$(2 + \sqrt{3})^3$$ you get $$8 + 12\sqrt{3} + 18 + 3\sqrt{3}$$; the rational terms sum to 26, and doubling gives the even side 52. (My thanks to @expert_says on twitter, who sent me a link to two nice papers on this.)

And like any odd number, 51 is the sum of two consecutive numbers, 25 + 26, and the difference of their squares, $$26^2 - 25^2$$.

And I just found this unusual reference at the Archimedes Lab: "Don't be baffled if you see the number 51 cropping up in Chinese website names, since 51 sounds like 'without trouble' or 'carefree' in Chinese."

Since 51 is the product of the distinct Fermat primes 3 and 17, a regular polygon with 51 sides is constructible with compass and straightedge, the angle π/51 is constructible, and the number cos π/51 is expressible in terms of square roots.

The 52nd day of the year: the month and day are simultaneously prime a total of 52 times in a non-leap year. *Tanya Khovanova, Number Gossip (How many times in a leap year?)

52 is also the maximum number of moves needed to solve the 15 puzzle from the worst possible start. *Mario Livio

52 is the number of 8-digit primes (on a calculator) that remain prime if viewed upside down, in a mirror, or upside down in a mirror. *Prime Curios

There are 52 letters in the names of the cards in a standard deck: ACE, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE, TEN, JACK, QUEEN, KING. (This also works in Spanish. Any other languages for which this is true?) *Futility Closet

52 is called an "untouchable" number, since there is no integer whose proper divisors sum to 52. Can you find another? Erdős proved they are infinite.

A triangle with sides 51, 52, and 53 has an integer area, 1170 units^2. Triangles like these, with consecutive integer sides and integer area, are sometimes called Sang-Heronian triangles, after the earliest study I know about them, by Edward Sang of Edinburgh, Scotland, in 1864. Each of these triangles can be partitioned into two Heronian right triangles by the altitude to the even side. It seems that in all such triangles, the altitude divides the even base into two parts whose lengths differ by 4. For this one, the two right-triangle bases are 26 - 2 and 26 + 2. To find the height of the triangle, we use the simple A = (1/2)bh, so 1170 = 26h, and we get h = 45. So the two right triangles have sides 24, 45, 51, with area 540 square units, and 28, 45, 53, with area 630 square units. In every pair of right triangles formed by the altitude, one of them is a primitive Pythagorean triangle; in this one, the PPT is 28, 45, 53.
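Because 2+√3 and its conjugate are roots of t² = 4t − 1, those doubled rational parts obey e(n+1) = 4·e(n) − e(n−1). A sketch that generates the triangles and confirms the integer areas via Heron's formula:

```python
from math import isqrt

def heron_area(a, b, c):
    """Integer area of triangle (a, b, c), or None if the area is not an integer."""
    s16 = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)   # 16 * area^2
    r = isqrt(s16)
    return r // 4 if r * r == s16 and r % 4 == 0 else None

e_prev, e = 2, 4                  # doubled rational parts: 2, 4, 14, 52, 194, ...
for _ in range(5):
    print(e - 1, e, e + 1, "area =", heron_area(e - 1, e, e + 1))
    e_prev, e = e, 4 * e - e_prev
# 3 4 5 area = 6,  13 14 15 area = 84,  51 52 53 area = 1170, ...
```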
The 53rd day of the year: the month and day are both prime a total of 53 times in every leap year, but not today.

If you reverse the digits of 53 you get its hexadecimal representation (53 = 35 in hex); no other two-digit number has this quality. The reversal, 35, is also the digit sum of 53^3 = 148,877.

The sum of the first 53 primes is 5830, which is divisible by 53. It is the last year day for which n divides the sum of the first n primes. (What were the others?)

53 is the sum of five consecutive primes, 5 + 7 + 11 + 13 + 17, whose gaps average 3.

If you check the powers 2^n in search of a number with two adjacent zeros, you won't find one until n = 53: 2^53 = 9007199254740992.

53 is the smallest prime p such that 1p1 (i.e., 1531), 3p3, 7p7 and 9p9 are all prime. (Can you find the 2nd smallest?) Raj Madhuram suggested 2477 is the second smallest of these, and offered the wonderful term "sandwich primes." (Raj actually found five more four-digit primes that are "sandwich-able." We leave them as a challenge for the reader.)

53 is the smallest prime number that does not divide the order of any sporadic group. *Wik

A triangle with sides 51, 52, and 53 has an integer area, 1170 units^2 (see day 52 above). The even side of these triangles is related to a classic equation from Diophantus' Arithmetica (AD 200s), now known as a type of Pell equation: $$x^2 - 3y^2 = 1$$. For example, it is easy to see that x = 2, y = 1 is a solution, and x = 2, doubled, becomes the even side of the 3,4,5 triangle. The triangle with even side 52 comes from the solution x = 26, y = 15. If you explore the successive rational convergents to $$\sqrt{3}$$, these solutions occur as every other term in the series $$\frac{2}{1}, \frac{5}{3}, \frac{7}{4}, \frac{19}{11}, \frac{26}{15}, ...$$

Computer Geeks (the capital shows respect) may know that 53 has a prime ASCII code: the characters "5" and "3" are 35 and 33 in hexadecimal, and 3533 is prime. It is the smallest prime for which that is true.

The floor function of $$e^\phi$$ is 53.

You may know that in the traditional birthday problem, 23 people reduce the chance of no shared birthday to about 1/2. Increase that number to 53, and the probability of no match is about 1/53.

53 is a self number, since it cannot be formed as the sum of any integer and its digits.

Another from Jim Wilder: the sum of the digits of $$53^7 = 1174711139837$$ is 53.

53 appears twice in one of the most incredible factorizations I've ever seen: the number 13532385396179 has prime factors 13, 53^2, 3853, and 96179, using exactly the same digits in order when you include the square. *Alon Amit

The 54th day of the year: 54 is the smallest number that can be written as the sum of 3 squares in 3 ways. (Well, go on, find all three ways!)

And the 54th prime number is the smallest number expressible as the sum of 3 cubes in 3 ways. *Prime Curios

There are 54 ways to draw six circles through all the points on a 6x6 lattice. *gotmath.com

54 is the fourth Leyland number, after mathematician Paul Leyland. Leyland numbers are numbers of the form $$x^y + y^x$$, where x and y are both integers greater than 1.

And sin(54°) is one-half the golden ratio.

Of course, we should add that the Rubik's Cube has 54 squares.

Not sure how he finds these, but Jim Wilder just keeps coming up with them: $$54^6 = 24794911296$$, and the sum of those digits is 54. (Also see day 53; a search sketch for more digit-sum coincidences like these appears below.)
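A small Python sketch for hunting these digit-sum coincidences (two-digit n, exponents 2 through 9):

```python
def digit_sum(n):
    return sum(map(int, str(n)))

hits = [(n, k) for n in range(10, 100) for k in range(2, 10)
        if digit_sum(n ** k) == n]
print(hits)
# includes (27, 3), (27, 7), (43, 7), (46, 5), (46, 8), (53, 7), (54, 6), among others
```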
The 55th day of the year: 55 is the largest triangular number that appears in the Fibonacci sequence. (Is there a largest square number?)

55 is also a Kaprekar number: 55² = 3025 and 30 + 25 = 55. (Thanks to Jim Wilder)

And speaking of squares: everyone knows that 3^2 + 4^2 = 5^2, but did you know that 33^2 + 44^2 = 55^2? But after that, there could be no more... right? I mean, that's just too improbable, so why is he still going on like this? You don't think... Nah.

55 is the only year day that is both a non-trivial base-ten palindrome and a palindrome in base four.

Every number greater than 55 is the sum of distinct primes of the form 4n + 3. *Prime Curios (Someone help me out here: I wondered if this should say "greater than or equal to," since 55 = 37 + 13 + 5; but those three primes are all of the form 4n + 1, so "greater than" stands.)

55 is a square pyramidal number, the sum of the squares of the first 5 positive integers.

The first squared square was published in 1938 by Roland Sprague, who found a solution using several copies of various squared rectangles and produced a squared square with 55 squares and side length 4205. No squared square can be made with fewer than 21 squares. *Wik

The 56th day of the year: there are 56 normalized 5x5 Latin squares (first row and column 1,2,3,4,5 in order, and no number appearing twice in any row or column). There are far fewer 4x4 squares; try them first.

56 is the sum of the first six triangular numbers (56 = 1 + 3 + 6 + 10 + 15 + 21), and thus the sixth tetrahedral number.

56 is also the sum of six consecutive primes: 3 + 5 + 7 + 11 + 13 + 17.

56 is the maximum determinant of an 8 by 8 matrix of zeroes and ones.

If you multiply all the composite numbers up to and including 56, and add one, you get a prime number with 56 digits. *Prime Curios

56 can be expressed as the sum of two primes in two different ways using only primes that end in 3.

$$56^7 = 1727094849536$$ is the smallest seventh power that is pandigital (all digits 0-9 in its decimal form). It has 13 digits. *@Fermat's Library (It is actually possible to get all ten digits in a ten-digit number that is the square of a five-digit number: 32043^2 = 1026753849.)

There are 56 ways to express 11 as the sum of positive integers.

Fifty-Six, Arkansas, is a city in Stone County in north-central Arkansas. When founding the community in 1918, locals submitted the name "Newcomb" for the settlement. This request was rejected, and the federal government internally named the community for its school district number (56). *Wik

The 57th day of the year: 57 (base ten) is written with all ones in base seven. It is the last day this year that can be written in base seven with all ones. (What is the last day of the year that can be written with all ones in base two? ... base three?)

57 is the maximum number of regions inside a circle formed by chords connecting 7 points on the circle. Students might ask themselves why this is the same as the sum of the first five numbers in the sixth row of Pascal's triangle.

57 is the number of permutations of the numbers 1 to 6 in which exactly one element is greater than the previous element (called permutations with 1 "ascent").

57 letters and spaces are required to write the famous prime number 6700417 in English. The number was one of the factors of $$F(5) = 2^{2^5}+1$$. Fermat had conjectured that all such "Fermat numbers" were prime. In 1732, Euler showed that F(5) is the product of 641 times 6700417.
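Euler's factorization takes less time to check than to typeset; a sketch:

```python
F5 = 2 ** 32 + 1
assert F5 == 641 * 6700417           # Euler's 1732 factorization of F(5)
print(F5, "=", 641, "x", 6700417)    # 4294967297 = 641 x 6700417
```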
Euler never stated that both factors were prime, and historians still disagree about whether he knew, or even suspected, that 6700417 was prime.

57 is the maximum number of possible interior regions formed by 8 intersecting circles.

The number of ways of coloring the faces of a cube with 3 different colors is 57. For coloring a cube with n colors, the number of distinct colorings (up to rotation) is given by $$\frac{n^6 + 3n^4 + 12n^3 + 8n^2}{24}$$, a standard Burnside-counting result; setting n = 3 gives 57.

57 is sometimes known as Grothendieck's prime. The explanation is given in Amir D. Aczel's last book, Finding Zero. Grothendieck had used primes as a framework on which to build some more general result when someone asked him to make the argument concrete with an actual prime; he is said to have replied, "All right, take 57," which, of course, is not prime (57 = 3 x 19).

Among the first 1000 primes, more numbers end in 57 than in any other two-digit ending.

57 is the third year day that yields a prime if a 5 is inserted anywhere in its digits except at the end. Can you find the smaller pair? And it's the sixth year day into which a 7 can be inserted anywhere, including at the end, so 757 and 577 are both prime.

57 is a repdigit in base 7: $$57_{10} = 111_7$$.

The famous Heinz 57 name was created by the owner, Henry Heinz, in 1896. At the time, he said he chose 57 because he thought 5 was lucky and his wife's favorite lucky number was 7. (He had also told other reasons for the name.)

The 58th day of the year: 58 is the sum of the first seven prime numbers.

It is the fourth smallest Smith number. (Find the first three. A Smith number is a composite number for which the sum of its digits equals the sum of the digits in its prime factorization, including repetition: 58 = 2 x 29, and 5 + 8 = 2 + 2 + 9.) Smith numbers were named by Albert Wilansky of Lehigh University, who noticed the property in the phone number (493-7775) of his brother-in-law Harold Smith.

58 is also the smallest Smith number with a prime digit sum. And the two digits and their sum (5, 8, 13) form consecutive Fibonacci numbers.

If you take the number 2, square it, and continue to take the sum of the squares of the digits of the previous answer, you get the sequence 2, 4, 16, 37, 58, 89, 145, 42, 20, 4, and then it repeats. See what happens if you start with values other than 2, and see if you can find one that doesn't produce 58.

The Greeks knew 220 and 284 were amicable by 300 BCE. By 1638, two more pairs had been added. Then, in 1750, in a single paper, Euler added 58 more.

The 59th day of the year: 59 is the center prime number in a 3x3 prime magic square that has the smallest possible total for each row, column and diagonal, 177. It was reportedly found by Rudolf Ondrejka. In 1913, English puzzle writer Henry Dudeney gave an order-3 prime magic square that used the number 1. Although 1 was commonly included as a prime then, present-day convention no longer considers it a prime.

The letters I, L, and X can be arranged into a valid Roman numeral in only three ways: XLI (41), LIX (59), and LXI (61), and all three are prime numbers.

59 is the sum of three consecutive primes: 17 + 19 + 23 = 59.

59 divides the smallest composite Euclid number: 13# + 1 = 13·11·7·5·3·2 + 1 = 30031 = 59 x 509. (The symbol for a primorial, n#, means the product of all primes from n down to 2.) Euclid used numbers of the form n# + 1 in his proof that there are an infinite number of primes.

There are 59 stellations of the icosahedron.

Now for some nice observations from Derek Orr @MathYearRound:
5^59 - 4^59 is prime.
4^59 - 3^59 is prime.
3^59 - 2^59 is prime.

Four amicable number pairs were known before Euler; he found 59 more. *Prime Curios (OK, a page at Princeton by William Dunham says he found 58, and Wolfram Alpha says he found 60. Pick your favorite.)

Fun time! If you use the digits 1,2,3,4,5,6,7,8,9 in order and separate them by * or +, the smallest prime you can get is 59. What other primes can you produce?
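A brute-force sketch over the 2^8 ways to place + or * between the digits (eval respects the usual precedence of * over +, and plain trial division suffices at this size):

```python
from itertools import product
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

values = set()
for ops in product("+*", repeat=8):
    expr = "".join(d + op for d, op in zip("12345678", ops)) + "9"
    values.add(eval(expr))             # * binds before +, as in ordinary arithmetic

primes = sorted(v for v in values if is_prime(v))
print(primes[0])                        # 59, e.g. from 1*2*3*4+5+6+7+8+9
```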
Fun Time!  If you use the digits 1,2,3,4,5,6,7,8,9 in order and separate digits by * or +, the smallest prime you can get is 59. What other primes can you produce? The first 59 digits of 58^57 form a prime. An article in Quanta Magazine reported that all the tetrahedra with rational dihedral angles have now been found: there are 59 distinct ones, plus two infinite families. A graph of all the distinct tetrahedra is at the start of their article, linked here. The 60th Day of the Year: 60 is the smallest composite number which is the order of a simple group. 60 is the smallest number that is the sum of two odd primes in 6 ways. (Collect the whole set.) The final digits of the Fibonacci sequence have period 60: F(n) and F(n+60) always end in the same digit. 7! is the smallest number with 60 divisors. All the numbers less than, and relatively prime to, 60 are either prime, a power of a prime, or 1; 60 is the largest number for which this is true. *Prime Curios 60 is the largest known integer which cannot be expressed as p·q + r with p, q, r three distinct primes. There are four Archimedean solids with 60 vertices: the truncated icosahedron, the rhombicosidodecahedron, the snub dodecahedron, and the truncated dodecahedron. Oh, and Pi day is coming up in a couple of weeks, so ... suppose you were scrolling through the digits of pi and wondered how long it would take until you found a string of ten digits that had all ten of 0 through 9 in it... Benjamin Vitale @BenVitale thought to find out. You can arrange the whole numbers from 1 to 60 into pairs so that the sum of the numbers in each pair is a perfect square; in fact, you can do it in 4,366,714 ways. Here is one of those pairings, presented in a pretty fashion using only five squares for the sums. *Gordon Hamilton, Kiran S. Kedlaya, and Henri Picciotto; Square-Sum Pair Partitions (won the George Pólya Prize from the MAA in 2016)
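A hedged sketch (my addition, not from the post): a small Python backtracking search that produces one such square-sum pairing of 1 to 60. The function name and structure are mine, for illustration only.

```python
import math

def square_sum_pairing(n=60):
    """Pair up 1..n so that every pair sums to a perfect square (backtracking)."""
    squares = {k * k for k in range(2, math.isqrt(2 * n - 1) + 1)}

    def solve(remaining):
        if not remaining:
            return []
        a = min(remaining)                 # always place the smallest number left
        for b in sorted(remaining - {a}):
            if a + b in squares:
                rest = solve(remaining - {a, b})
                if rest is not None:
                    return [(a, b)] + rest
        return None                        # dead end: backtrack

    return solve(set(range(1, n + 1)))

pairs = square_sum_pairing(60)
print(pairs)
print(all(math.isqrt(a + b) ** 2 == a + b for a, b in pairs))  # True
```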
# GATE Questions & Answers of Fluid Mechanics Civil Engineering

#### Fluid Mechanics 61 Question(s) | Weightage 09 (Marks)

Bernoulli’s equation is applicable for

For a steady incompressible laminar flow between two infinite parallel stationary plates, the shear stress variation is

A triangular pipe network is shown in the figure. The head loss in each pipe is given by $h_f = rQ^{1.8}$, with the variables expressed in a consistent set of units. The value of r for the pipe AB is 1 and for the pipe BC is 2. If the discharge supplied at the point (i.e. 100) is equally divided between the pipes AB and AC, the value of r (up to two decimal places) for the pipe AC should be _____________

A 1 m wide rectangular channel has a bed slope of 0.0016 and the Manning's roughness coefficient is 0.04. Uniform flow takes place in the channel at a flow depth of 0.5 m. At a particular section, gradually varied flow (GVF) is observed and the flow depth is measured as 0.6 m. The GVF profile at that section is classified as

Water flows through a $90^\circ$ bend in a horizontal plane as depicted in the figure. A pressure of 140 kPa is measured at section 1-1. The inlet diameter marked at section 1-1 is $\frac{27}{\sqrt{\mathrm\pi}}$ cm, while the nozzle diameter marked at section 2-2 is $\frac{14}{\sqrt{\mathrm\pi}}$ cm. Assume the following. (i) Acceleration due to gravity = 10 m/s². (ii) Weights of both the bent pipe segment as well as water are negligible. (iii) Friction across the bend is negligible. The magnitude of the force (in kN, up to two decimal places) that would be required to hold the pipe section is _____________

The figure shows a U-tube having a 5 mm × 5 mm square cross-section filled with mercury (specific gravity = 13.6) up to a height of 20 cm in each limb (open to the atmosphere). If 5 cm³ of water is added to the right limb, the new height (in cm, up to two decimal places) of mercury in the LEFT limb will be _____________

A sector gate is provided on a spillway as shown in the figure. Assuming g = 10 m/s², the resultant force per metre length (expressed in kN/m) on the gate will be __________

Group I contains the types of fluids while Group II contains the shear stress - rate of shear relationships of different types of fluids, as shown in the figure.

Group I | Group II
P. Newtonian fluid | 1. Curve 1
Q. Pseudo plastic fluid | 2. Curve 2
R. Plastic fluid | 3. Curve 3
S. Dilatant fluid | 4. Curve 4
  | 5. Curve 5

The correct match between Group I and Group II is

An incompressible homogeneous fluid is flowing steadily in a variable diameter pipe having large and small diameters of 15 cm and 5 cm, respectively. If the velocity at a section in the 15 cm diameter portion of the pipe is 2.5 m/s, the velocity of the fluid (in m/s) at a section in the 5 cm portion of the pipe is ___________

The dimension for kinematic viscosity is

A particle moves along a curve whose parametric equations are $x = t^3 + 2t$, $y = -3e^{-2t}$ and $z = 2\sin(5t)$, where x, y and z show variations of the distance covered by the particle (in cm) with time t (in s). The magnitude of the acceleration of the particle (in cm/s²) at t = 0 is ________

A horizontal jet of water with a cross-sectional area of 0.0028 m² hits a fixed vertical plate with a velocity of 5 m/s. After impact the jet splits symmetrically in a plane parallel to the plane of the plate.
The force of impact (in N) of the jet on the plate is

A venturimeter, having a diameter of 7.5 cm at the throat and 15 cm at the enlarged end, is installed in a horizontal pipeline of 15 cm diameter. The pipe carries an incompressible fluid at a steady rate of 30 litres per second. The difference of pressure head, measured in terms of the moving fluid, between the enlarged end and the throat of the venturimeter is observed to be 2.45 m. Taking the acceleration due to gravity as 9.81 m/s², the coefficient of discharge of the venturimeter (correct up to two decimal places) is ______________

Three rigid buckets, shown in figures (1), (2) and (3), are of identical heights and base areas. Further, assume that each of these buckets has negligible mass and is full of water. The weights of water in these buckets are denoted as W1, W2, and W3 respectively. Also, let the forces of water on the bases of the buckets be denoted as F1, F2, and F3 respectively. The option giving an accurate description of the system physics is

An incompressible fluid is flowing at a steady rate in a horizontal pipe. From a section, the pipe divides into two horizontal parallel pipes of diameters d1 and d2 (where d1 = 4d2) that run for a distance of L each and then join back into a pipe of the original size. For both the parallel pipes, assume the head loss to be due to friction only and the Darcy-Weisbach friction factor to be the same. The velocity ratio between the bigger and the smaller branched pipes is _________

A straight 100 m long raw water gravity main is to carry water from an intake structure to the jack well of a water treatment plant. The required flow through this water main is 0.21 m³/s. The allowable velocity through the main is 0.75 m/s. Assume f = 0.01, g = 9.81 m/s². The minimum gradient (in cm/100 m length) to be given to this gravity main so that the required amount of water flows without any difficulty is ___________

A plane flow has velocity components $u=\frac{x}{T_1}$, $v=-\frac{y}{T_2}$ and $w=0$ along the x, y and z directions respectively, where $T_1$ and $T_2$ are constants having the dimension of time. The given flow is incompressible if

Group I lists a few devices while Group II provides information about their uses. Match the devices with their corresponding uses.

Group I | Group II
P. Anemometer | 1. Capillary potential of soil water
Q. Hygrometer | 2. Fluid velocity at a specific point in the flow stream
R. Pitot Tube | 3. Water vapour content of air
S. Tensiometer | 4. Wind speed

A horizontal nozzle of 30 mm diameter discharges a steady jet of water into the atmosphere at a rate of 15 litres per second. The diameter of the inlet to the nozzle is 100 mm. The jet impinges normal to a flat stationary plate held close to the nozzle end. Neglecting air friction and taking the density of water as 1000 kg/m³, the force exerted by the jet (in N) on the plate is _________

A venturimeter having a throat diameter of 0.1 m is used to estimate the flow rate of a horizontal pipe having a diameter of 0.2 m.
For an observed pressure difference of 2 m of water head and a coefficient of discharge equal to unity, assuming that the energy losses are negligible, the flow rate (in m³/s) through the pipe is approximately equal to

With reference to a standard Cartesian (x, y) plane, the parabolic velocity distribution profile of fully developed laminar flow in the x-direction between two parallel, stationary and identical plates separated by a distance h is given by the expression

$u=-\frac{h^2}{8\mu}\frac{dp}{dx}\left[1-4\left(\frac yh\right)^2\right]$

In this equation, the y = 0 axis lies equidistant between the plates at a distance h/2 from the two plates, p is the pressure variable and μ is the dynamic viscosity. The maximum and average velocities are, respectively

For subcritical flow in an open channel, the control section for gradually varied flow profiles is

Group-I contains dimensionless parameters and Group-II contains the ratios.

Group-I | Group-II
P. Mach Number | 1. Ratio of inertial force and gravitational force
Q. Reynolds Number | 2. Ratio of fluid velocity and velocity of sound
R. Weber Number | 3. Ratio of inertial force and viscous force
S. Froude Number | 4. Ratio of inertial force and surface tension force

The correct match of dimensionless parameters in Group-I with ratios in Group-II is:

For a two-dimensional flow field, the stream function $\Psi$ is given as $\Psi =\frac{3}{2}\left({y}^{2}-{x}^{2}\right)$. The magnitude of discharge occurring between the stream lines passing through points (0,3) and (3,4) is:

A 2 km long pipe of 0.2 m diameter connects two reservoirs. The difference between the water levels in the reservoirs is 8 m. The Darcy-Weisbach friction factor of the pipe is 0.04. Accounting for frictional, entry and exit losses, the velocity in the pipe (in m/s) is:

The normal depth in a wide rectangular channel is increased by 10%. The percentage increase in the discharge in the channel is:

If a small concrete cube is submerged deep in still water in such a way that the pressure exerted on all faces of the cube is p, then the maximum shear stress developed inside the cube is

A trapezoidal channel is 10.0 m wide at the base and has a side slope of 4 horizontal to 3 vertical. The bed slope is 0.002. The channel is lined with smooth concrete (Manning’s n = 0.012). The hydraulic radius (in m) for a depth of flow of 3.0 m is

A rectangular open channel of width 5.0 m is carrying a discharge of 100 m³/s. The Froude number of the flow is 0.8. The depth of flow (in m) in the channel is

The circular water pipes shown in the sketch are flowing full. The velocity of flow (in m/s) in the branch pipe “R” is

For a given discharge, the critical flow depth in an open channel depends on

For a body completely submerged in a fluid, the centre of gravity (G) and centre of buoyancy (O) are known. The body is considered to be in stable equilibrium if

The flow in a horizontal, frictionless rectangular open channel is supercritical. A smooth hump is built on the channel floor. As the height of the hump is increased, a choked condition is attained. With further increase in the height of the hump, the water surface will

A single pipe of length 1500 m and diameter 60 cm connects two reservoirs having a difference of 20 m in their water levels. The pipe is to be replaced by two pipes of the same length and equal diameter d to convey 25% more discharge under the same head loss.
If the friction factor is assumed to be the same for all the pipes, the value of d is approximately equal to which of the following options?

A spillway discharges flood water at a rate of 9 m³/s per metre width. If the depth of flow on the horizontal apron at the toe of the spillway is 46 cm, the tail water depth needed to form a hydraulic jump is approximately given by which of the following?

A mild-sloped channel is followed by a steep-sloped channel. The profiles of gradually varied flow in the channel are

The flow in a rectangular channel is subcritical. If the width of the channel is reduced at a certain section, the water surface under no-choke condition will

Group-I gives a list of devices and Group-II a list of uses.

Group-I | Group-II
P. Pitot tube | 1. Measuring pressure in a pipe
Q. Manometer | 2. Measuring velocity of flow in a pipe
R. Venturimeter | 3. Measuring air and gas velocity
S. Anemometer | 4. Measuring discharge in a pipe

The correct match of Group-I with Group-II is

For a rectangular channel section, Group I lists geometrical elements and Group II gives proportions for a hydraulically efficient section.

Group-I | Group-II
P. Top width | 1. $\frac{y_e}{2}$
Q. Perimeter | 2. $y_e$
R. Hydraulic Radius | 3. $2y_e$
S. Hydraulic Depth | 4. $4y_e$

$y_e$ is the flow depth corresponding to the hydraulically efficient section. The correct match of Group I with Group II is

The Froude number of flow in a rectangular channel is 0.8. If the depth of flow is 1.5 m, the critical depth is

The direct step method of computation for gradually varied flow is

Water flows through a 100 mm diameter pipe with a velocity of 0.015 m/s. If the kinematic viscosity of water is 1.13 × 10⁻⁶ m²/s, the friction factor of the pipe material is

A rectangular open channel of width 4.5 m is carrying a discharge of 100 m³/s. The critical depth of the channel is

Water ($\gamma_w$ = 9.879 kN/m³) flows with a flow rate of 0.3 m³/s through a pipe AB of 10 m length and uniform cross-section. The end ‘B’ is above end ‘A’ and the pipe makes an angle of 30° to the horizontal. For a pressure of 12 kN/m² at the end ‘B’, the corresponding pressure at the end ‘A’ is

A person standing on the bank of a canal drops a stone on the water surface. He notices that the disturbance on the water surface is not travelling upstream. This is because the flow in the canal is

The flow of water (mass density = 1000 kg/m³ and kinematic viscosity = 10⁻⁶ m²/s) in a commercial pipe, having an equivalent roughness $k_s$ of 0.12 mm, yields an average shear stress at the pipe boundary of 600 N/m². The value of $k_s/\delta'$ ($\delta'$ being the thickness of the laminar sub-layer) for this pipe is

A river reach 2.0 km long with a maximum flood discharge of 10000 m³/s is to be physically modelled in the laboratory, where the maximum available discharge is 0.20 m³/s. For a geometrically similar model based on equality of Froude number, the length of the river reach (m) in the model is

A rectangular channel 6.0 m wide carries a discharge of 16.0 m³/s under uniform conditions with a normal depth of 1.60 m. Manning’s n is 0.015. The longitudinal slope of the channel is

A rectangular channel 6.0 m wide carries a discharge of 16.0 m³/s under uniform conditions with a normal depth of 1.60 m. Manning’s n is 0.015. A hump is to be provided on the channel bed. The maximum height of the hump without affecting the upstream flow condition is

A rectangular channel 6.0 m wide carries a discharge of 16.0 m³/s under uniform conditions with a normal depth of 1.60 m.
Manning’s n is 0.015. The channel width is to be contracted. The minimum width to which the channel can be contracted without affecting the upstream flow condition is

An automobile with a projected area of 2.6 m² is running on a road at a speed of 120 km per hour. The mass density and the kinematic viscosity of air are 1.2 kg/m³ and 1.5 × 10⁻⁵ m²/s, respectively. The drag coefficient is 0.30. The drag force on the automobile is

An automobile with a projected area of 2.6 m² is running on a road at a speed of 120 km per hour. The mass density and the kinematic viscosity of air are 1.2 kg/m³ and 1.5 × 10⁻⁵ m²/s, respectively. The drag coefficient is 0.30. The metric horsepower required to overcome the drag force is

There is a free overfall at the end of a long open channel. For a given flow rate, the critical depth is less than the normal depth. What gradually varied flow profile will occur in the channel for this flow rate?

At two points 1 and 2 in a pipeline the velocities are V and 2V, respectively. Both points are at the same elevation. The fluid density is ρ. The flow can be assumed to be incompressible, inviscid, steady and irrotational. The difference in pressures P1 and P2 at points 1 and 2 is

A horizontal water jet with a velocity of 10 m/s and a cross-sectional area of 10 mm² strikes a flat plate held normal to the flow direction. The density of water is 1000 kg/m³. The total force on the plate due to the jet is

A 1:50 scale model of a spillway is to be tested in the laboratory. The discharge in the prototype is 1000 m³/s. The discharge to be maintained in the model test is

A triangular open channel has a vertex angle of 90° and carries flow at a critical depth of 0.30 m. The discharge in the channel is

The flow rate of a fluid (density = 1000 kg/m³) in a small diameter tube is 800 mm³/s. The length and the diameter of the tube are 2 m and 0.5 mm, respectively. The pressure drop over the 2 m length is equal to 2.0 MPa. The viscosity of the fluid is

The flow rate in a wide rectangular open channel is 2.0 m³/s per metre width. The channel bed slope is 0.002. The Manning’s roughness coefficient is 0.012. The slope of the channel is classified as

A rectangular open channel needs to be designed to carry a flow of 2.0 m³/s under uniform flow conditions. The Manning’s roughness coefficient is 0.018. The channel should be such that the flow depth is equal to half the width, and the Froude number is equal to 0.5. The bed slope of the channel to be provided is
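As a hedged illustration (not part of the original question bank), the last stem above can be solved with Manning's equation plus the Froude-number condition; the short Python sketch below walks through the arithmetic under the stated assumptions (g = 9.81 m/s², hydraulic depth equal to flow depth for a rectangular section).

```python
import math

# Rectangular channel design: Q = 2.0 m^3/s, Manning n = 0.018,
# flow depth y = b/2, Froude number Fr = 0.5. Find the bed slope S0.
g, Q, n, Fr = 9.81, 2.0, 0.018, 0.5

# For a rectangular section the hydraulic depth equals y, so Fr = V / sqrt(g*y).
# With b = 2y:  Q = A*V = (2y * y) * Fr * sqrt(g*y)  =>  y^2.5 = Q / (2*Fr*sqrt(g))
y = (Q / (2 * Fr * math.sqrt(g))) ** 0.4
b = 2 * y
V = Fr * math.sqrt(g * y)

# Manning's equation V = (1/n) * R^(2/3) * S0^(1/2), so S0 = (V*n / R^(2/3))^2
R = (b * y) / (b + 2 * y)   # hydraulic radius; equals y/2 when b = 2y
S0 = (V * n / R ** (2.0 / 3.0)) ** 2

print(f"y = {y:.3f} m, b = {b:.3f} m, V = {V:.3f} m/s, S0 = {S0:.4f}")
# S0 comes out on the order of 2.1e-3 under these assumptions.
```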
# Orientable surface

A surface is called non-orientable if it contains the Mobius band; the rest are orientable.

Definition. The orientations of two faces that share an edge are called compatible provided they induce the opposite orientations on the edge.

We can choose these orientations if we deal with one edge at a time. Can we ensure that all pairs of faces have compatible orientations? We might proceed as follows. Start with $\sigma$ and $\tau$ compatibly oriented. Then, suppose ${\sigma}$ has another edge $d$ shared with face $\lambda$. Then $\lambda$ has to have the orientation compatible with ${\sigma}$. Next, $\lambda$ has an edge $f$ shared with another face, which has to have the orientation compatible with ${\lambda}$, etc. At every step we move from a face to an adjacent face, and every time the orientation of the next face is "forced". This may continue for a while. But what happens if we make a full circle and come back to ${\tau}$? It's possible that the orientation we want to impose on ${\tau}$ will be the opposite of the one it already has.

To sort this out, consider two examples: a triangulated sphere and the Mobius band (see the figures). Is there a compatible orientation of the whole thing? Start with ${\tau}$, oriented clockwise. We orient ${\sigma}$ compatibly (so that it generates the opposite orientation on the edge they share), etc. For the Mobius band we have a problem:

• ${\tau}$ orients $a$ as $a$.
• ${\mu}$ orients $a$ as $a$.

They are not opposite! Therefore this is not a compatible orientation of ${\bf M}$.

Exercise. Show that there is no compatible orientation of the Mobius band, regardless of the triangulation.

Corollary. A non-orientable surface (one that contains ${\bf M}$) can't be compatibly oriented.
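The edge-by-edge propagation described above is exactly a graph traversal, so it can be automated. Below is a minimal Python sketch (my own illustration, with assumed conventions: faces given as ordered vertex triples, and two faces compatible when they induce a shared edge in opposite directions); it reports whether consistent orientations can be chosen.

```python
from collections import deque

def directed_edges(tri):
    a, b, c = tri
    return [(a, b), (b, c), (c, a)]

def is_orientable(triangles):
    """Propagate orientations face to face (BFS); two faces sharing an edge
    must induce that edge in opposite directions, which forces the sign of
    each newly visited face. A clash means the surface is non-orientable."""
    edge_faces = {}
    for i, t in enumerate(triangles):
        for (u, v) in directed_edges(t):
            edge_faces.setdefault(frozenset((u, v)), []).append((i, (u, v)))
    sign = {}                                  # +1 keep stored order, -1 flip
    for start in range(len(triangles)):
        if start in sign:
            continue
        sign[start] = +1
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for (u, v) in directed_edges(triangles[i]):
                for (j, stored) in edge_faces[frozenset((u, v))]:
                    if j == i:
                        continue
                    # same stored direction => the two faces need opposite signs
                    needed = -sign[i] if stored == (u, v) else sign[i]
                    if j not in sign:
                        sign[j] = needed
                        queue.append(j)
                    elif sign[j] != needed:
                        return False           # forced orientations clash
    return True

tetrahedron = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 3, 2)]        # a sphere
mobius = [(0, 1, 3), (1, 2, 4), (2, 3, 0), (3, 4, 1), (4, 0, 2)]  # 5-triangle Mobius band
print(is_orientable(tetrahedron))  # True
print(is_orientable(mobius))       # False
```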
# Removing a hypothesis when generalizing the Lebesgue measure

Let $f:\mathbb R\to\mathbb R$ be a continuous increasing function. Define the (generalized) length of (finite) semiopen intervals, \begin{align} \lambda_f:&\{[a,b):a,b\in\mathbb R\,;\;a\leq b\}\to[0,\infty),\\ &[a,b) \mapsto f(b)-f(a). \end{align} Define also \begin{align}\theta^*_f:&\mathcal P(\mathbb R)\to[0,\infty],\\ &A\mapsto\inf\,\left\{\sum_{k\in\mathbb N}\,\lambda_f([a_k,b_k)):A\subset\bigcup_{k\in\mathbb N}[a_k,b_k)\right\}, \end{align} which can be shown to be an outer measure. (Therefore, with $\theta^*_f$, Carathéodory's method generates the measure $\mu_f$, the Lebesgue-Stieltjes measure.) What changes in this rationale if we no longer assume $f$ has to be continuous? - Note that, since $f$ is increasing, it already has finite one-sided limits at every real point. Of course, this does not imply continuity, but in order to eventually get a measure which is defined and finite on all intervals $[a,b)$, $f$ must at least be right-continuous. Indeed, if $\mu$ is a measure on the real line assigning a finite value to every interval of this form, then $\mu ([a,b)) = \lim_{n \to \infty} \mu ([a,b+h_n))$ for any sequence $h_n > 0$ decreasing to $0$ (because $[a,b] = \bigcap_{n \in \mathbb{N}} [a,b+h_n)$ and $\mu(\{b\})=0\!$ ). To apply Carathéodory's method, you need the measure to be a priori $\sigma$-additive on intervals of this form (in the case that their union is also of this form). The above illustrates that this won't be true if $f$ isn't right-continuous. However, it turns out that right continuity (along with the monotonicity requirement) is enough to get a Borel measure using Carathéodory's method. This is simply because in this case we avoid the former obstruction, and it is indeed the only one: Carathéodory's method then extends the measure to the smallest $\sigma$-algebra generated by such intervals, which is the Borel algebra.
The general idea makes a lot of sense, but I think I spot two small mistakes. 1. Does $f$ have to be continuous to some side, or necessarily to the right? 2. Isn't $\bigcap_{n\in\mathbb N}[a, b + h_n) = [a, b]$, since $b\in[a, b + h_n)$ for every $h_n$? –  Luke Sep 18 '11 at 21:47
@Luke: if you use left-closed intervals then $f$ has to be right-continuous. You can do a similar construction using right-closed intervals, in which case $f$ would need to be left-continuous. In the end this doesn't really matter. The crux of the matter is that if $f$ is discontinuous at $x$ then the resulting measure $\mu_f$ will have an atom at $x$ (that is, the singleton $\{x\}$ will have positive mass). As for your second point, you're right. I'll edit accordingly. –  Mark Sep 19 '11 at 12:08
# Quiver Representations in SUSY gauge theories

I would like to hear some reasons and ideas on how quivers are useful in SUSY gauge theories. There is a nice answer about the case of D-branes here, but it is not clear how quivers appear in gauge theories independently of the D-branes. More specifically, I have heard that quivers can describe BPS states. Is this correct? And why so?

This post imported from StackExchange Physics at 2015-02-07 11:48 (UTC), posted by SE-user user39726

SUSY quivers are dissected here arxiv.org/abs/hep-th/0201205 This post imported from StackExchange Physics at 2015-02-07 11:48 (UTC), posted by SE-user Autolatry

Indeed, quivers first appeared in the context of D-branes at (conifold) singularities (there are various nice expositions in Klebanov-Witten theory reviews) where the D-branes "conspire" to give an $\mathcal{N}=1$ SYM theory. Additionally, gauge theories are strongly encoded inside the physics of D-branes, so I am not sure in what way you can "separate" these notions. Usually, quivers are used to describe the physics of BPS bound states of $\mathcal{N}=2$ susy and sugra. I will say a few words on this as an example of quivers in gauge theories.

So let us consider an $\mathcal{N}=2$ theory in four dimensions. As you probably know, this theory has a moduli space with a Coulomb and a Higgs branch. Let us consider a point $u$ in the Coulomb branch $\mathcal{C}$ of the moduli space. There we have a gauged $U(1)^r$ symmetry group together with a lattice $\Gamma$ from which the various BPS states take their charges $(p,q)$. From Seiberg-Witten theory we know how to consider the above on an elliptic curve $\Sigma_u$ that varies along $\mathcal{C}$. It is very well known that the homology classes of 1-cycles on the tori we are considering can be identified with $\Gamma$. This is all standard Seiberg-Witten stuff, which is of course solved in the IR. To study the BPS states at some specific point $u \in \mathcal{C}$ we need to introduce the quiver. These theories also have a central charge $Z$. Now, we take half of the plane on which the central charge $Z$ takes values and we name it $H$. On this half-plane there exists a set of $2r+f$ states (where $f$ is the number of flavors of the theory), customarily denoted $\gamma_i$, which we can naively think of as particles. It turns out that such a basis, if it exists, is unique. Using this basis $\{ \gamma_i \}$ we can construct a quiver: for every $\gamma_i$ we draw a node, and for every pair we draw arrows that connect them. Then we can use quiver quantum mechanics to find the BPS bound states of the BPS "particles" $\gamma_i$.

So the moral/summary is the following: in the $\mathcal{N}=2$ theory, consider a point $u$ of the Coulomb branch and use its data to form (if possible) a basis $\{ \gamma_i \}$ of the hypers. Then put a node on each one, and arrows between them. Then use quiver quantum mechanics to find the BPS bound states.
# Experimental taphonomy of fish - role of elevated pressure, salinity and pH

## Abstract

Experiments are reported to reconstruct the taphonomic pathways of fish toward fossilisation. Acrylic glass autoclaves were designed that allow experiments to be carried out at elevated pressures up to 11 bar, corresponding to water depths of 110 m. Parameters controlled or monitored during decay reactions are pressure, salinity, proton activities (pH), electrochemical potentials (Eh), and bacterial populations. The most effective environmental parameters to delay or prevent putrefaction before a fish carcass is embedded in sediment are (1) a hydrostatic pressure in the water column high enough that a fish carcass may sink to the bottom sediment, (2) hypersaline conditions well above seawater salinity, and (3) a high pH to suppress the reproduction rate of bacteria. Anoxia, commonly assumed to be the key parameter for excellent preservation, is important in keeping the bottom sediment clear of scavengers, but it does not seem to slow down or prevent putrefaction. We apply our results to the world-famous Konservat-Lagerstätten Eichstätt-Solnhofen, Green River, and Messel, where fish are prominent fossils, and reconstruct from the sedimentary records the environmental conditions that may have promoted preservation.

## Introduction

Since the advent of experimental methods in palaeontological research, our understanding of taphonomic and fossilisation reactions has much improved. Today we realise how easily and rapidly organic tissue may be transformed into inorganic materials1,2,3,4,5,6. Consensus is emerging that fossilisation reactions can take place within time frames accessible with laboratory experiments7. The near-perfect articulation of the Pycnodontid in Fig. 1 suggests that the decision for or against preservation must have been made early, shortly after the fish died. Under ambient marine conditions - oxygenated water, normal marine salinity, and near-neutral pH - a fish so delicate would have been disarticulated or consumed by scavengers within hours to days. But what are the environmental factors most effective in retarding or preventing organic decay? If we identify those variables experimentally, we may hold the key to understanding the genesis of Konservat-Lagerstätten within which fish are prominent fossils. We report novel decay experiments with fish to understand early taphonomic pathways toward fossilisation. Parameters investigated experimentally are elevated hydrostatic pressure, elevated salinity, the role of proton (pH) and electron activity (Eh), bacterial activity, and time. We apply our results to three prominent Konservat-Lagerstätten in which fossil fish abound - Eichstätt-Solnhofen, Green River, and Messel.

### Previous taphonomic experiments

Our experiments build upon a number of pioneering taphonomic studies that have made important contributions to our understanding of the mineralisation of organic tissue2,8,9,10,11. However, not all of these studies exerted strict control over the experimental parameters.
Quite often, reaction containers were not sealed8,12, but then the pH of the experimental solutions may be affected by the ingress of atmospheric CO2. Redox states are often derived by directly measuring physically dissolved oxygen (O2,aq) with Clark electrodes1,2,12, but under reduced conditions O2,aq is a poor redox proxy. For example, in equilibrium with iron sulfide at 25 °C12 the O2,aq is ~10⁻⁶⁸ mol kg⁻¹, far too small to be quantified by direct measurement. To our knowledge, experiments at elevated salinities have not been conducted, although hypersaline conditions are by no means rare in shallow epicontinental seas13,14. Several studies have emphasised the importance of bacterial mats in early fossilisation8,9,10,15,16, and consensus is emerging that the encrustation and lithification of a carcass by bacteria may counteract disarticulation. The only experimental study at elevated pressure was conducted by Saitta et al.17. These authors heated and pressed feathers and lizards to 250 °C and 300 bar to test if melanosomes in organic tissue may survive diagenesis.

### Experimental and analytical methods

All trials reported here were performed at elevated pressures. For that purpose we designed autoclaves that can simulate water depths of up to 110 m (11 bar). The autoclaves are machined from 110 mm diameter acrylic glass rods and permit organic decay reactions to be monitored optically as reactions proceed. Inner diameters are 70 mm, wall thicknesses are 40 mm, and the capacities are 570 cm³. Typical filling levels were around 450 cm³, to leave space for a gas cushion. Hydrostatic pressure was imposed by pressurising the autoclaves with N2 gas. Temperatures were kept at 22 ± 1 °C. The experimental solutions used in the decay experiments were seawater and seawater brines with 3.5, 7, 10, and 14 wt.% NaCl equiv. Salinities were adjusted to the desired ionic strengths by evaporating Atlantic seawater. Most trials were carried out with goldfish (Carassius auratus) carcasses, while a few experiments used cichlids of the species Thorichthys meeki when goldfish specimens were unavailable. Prior to experimentation, all fish were euthanised with tricaine methanesulfonate (courtesy J. Mogdans, Zoology Department, University of Bonn). The sediment substrates onto which the carcasses were bedded were ultra-fine calcite (CaCO3) oozes with grain sizes of 3.5 ± 1 µm, and in two cases sodium acetate (NaCH3COO) and natron (Na2CO3·10H2O). The latter two runs were performed to quantify how elevated pH levels affect the reproduction rates of bacteria and the decay rates of organic tissue.

The autoclaves served not only as reaction containers; they were also used to calibrate the minimum pressure at which a fish carcass can sink to the bottom sediment. Fish that die at shallow depth usually surface when they have swim bladders, and are thus exposed to rapid decay. For a fish carcass to sink, a minimum water depth is required, deep enough to compensate for the buoyancy of the swim bladder. Using 14 goldfish specimens, we calibrated, as a function of salinity, the hydrostatic pressures at which the carcasses reached neutral buoyancy. Neutral buoyancy is given when P = ρ·g·h, where P = hydrostatic pressure, ρ = density (salinity) of the solution, g = Earth's gravitational acceleration, and h = height of the water column. The resulting function is highly non-linear. The curve was calibrated to 8 bar. It covers a salinity range from zero to 11.2 wt.% NaCl equiv.
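As an illustrative aside (not the authors' calibration), the relation P = ρ·g·h can be used to convert a neutral-buoyancy pressure into the minimum water depth at which a carcass can sink. The Python sketch below does this with an assumed linear density model for NaCl brines; the density coefficient and the example pressure of 2 bar are my assumptions, for illustration only.

```python
# Convert a neutral-buoyancy pressure into the corresponding water depth,
# using P = rho * g * h and an assumed linear density model for NaCl brines.

G = 9.81  # gravitational acceleration, m/s^2

def brine_density(wt_percent_nacl: float) -> float:
    """Approximate brine density in kg/m^3 at ~20 degC.
    Assumed linear fit: ~7.1 kg/m^3 per wt.% NaCl on top of 998 kg/m^3."""
    return 998.0 + 7.1 * wt_percent_nacl

def sinking_depth_m(neutral_buoyancy_bar: float, salinity_wt: float) -> float:
    """Depth h (m) for a given gauge pressure P (bar): h = P / (rho * g)."""
    p_pa = neutral_buoyancy_bar * 1e5
    return p_pa / (brine_density(salinity_wt) * G)

for s in (0.0, 3.5, 7.0, 10.0, 14.0):
    # hypothetical neutral-buoyancy pressure of 2 bar, for illustration only
    print(f"{s:4.1f} wt.% NaCl: {sinking_depth_m(2.0, s):5.1f} m")
```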
Proton activities (pH) of the experimental solutions were recorded with Ag/AgCl hydrogen-ion-sensitive glass electrodes (EGA 151, Meinsberg; Gäb et al.18). The electrodes measure the electromotive force (emf) against the Ag-AgCl equilibrium and were calibrated with Merck standard solutions at pH 7 and 10. The nominal precision of the pH measurements was around ±0.1 pH units. Electrochemical potentials (Eh, in mV) were monitored with Ag/AgCl combination glass electrodes and quantified against the potential of an internal Ag/AgCl reference immersed in 3 N KCl solution. The pH and Eh measurements could only be performed prior to and after completion of an experiment, since the glass electrodes cannot withstand the pressure gradient between ambient (1 atm) and experimental pressure (up to 8 bar). Replicate measurements suggest that Eh values are reproducible to within ±30 mV.

Goldfish carcasses that were physically preserved after the end of an experiment were dissected to document the state of their organs and compare them with the organs of a fresh (unreacted) goldfish. One specimen fermented in a hypersaline brine at 14 wt.% NaCl equiv. was scanned using a high-resolution SkyScan 1272 Bruker desktop micro computer tomograph (µCT). Articulation of its bones was imaged at an isotropic voxel resolution of 10 µm, using the Bruker software 3D.SUITE. With few exceptions, organs could not be imaged because density contrasts proved too low.

Bacteria populations of three experimental solutions were characterised by sequencing the 16S rRNA genes. The microbiome analyses were carried out using the facilities of MR DNA in Texas19. We identify with this technique only the most abundant bacterial orders. A limitation is that 16S rRNA sequencing does not allow absolute abundances of bacteria to be quantified, nor does it permit a distinction to be made between the DNA of living and dead bacteria. Hence, the relative abundances of bacterial orders should be seen as a semi-quantitative snapshot of all DNA that could be sequenced.

## Results

### Influence of pressure

Water depth is a key parameter for the fossilisation of fish. For fish to be successfully fossilised, they must sink after their death, to later be covered by sediment. We have calibrated, as a function of water salinity, the pressure at which a fish carcass reaches its neutral buoyancy level (Fig. 2). The neutral buoyancy curve is individual to goldfish of sizes that fit the autoclave. Nonetheless, it is of general applicability:

• Fish dying at depths above the neutral buoyancy curve will float; such specimens would have little chance to be fossilised because they would rot quickly and/or be consumed by scavengers20,21.

• Fish dying at pressures higher than the neutral buoyancy curve will sink; they have the potential to be fossilised as soon as they are covered by sediment.

• The higher the salinity of the water, the higher the minimum pressure necessary for a fish carcass to sink, since water density increases with salinity. In basins stratified with respect to salinity, the depth of the halocline and the salinity of the bottom waters will decide whether a sinking carcass may reach the bottom sediment or whether it will stagnate at the halocline.

• The larger the fish carcass, the higher the pressure necessary at a given salinity for it to sink. Larger fish have a smaller surface-to-volume ratio, hence require higher pressures to experience the same percentage of compression as small fish.
Temperature also affects buoyancy because water density is temperature sensitive22, but in our pressure calibration experiments temperature was kept constant at 22 ± 1 °C.

Elevated pressure also helps to preserve fish carcasses externally (Fig. 3a-c). At a pressure of 1 bar, disarticulation was noted within days. Fish treated at 4 and 8 bar were physically preserved but decomposed internally. Elevated pressure does not prevent putrefaction; however, it does seem to preserve the integrity of a carcass by suppressing expansion and disruption by putrefaction gases.

### Influence of salinity

In Fig. 3d-f we show the conservation status of goldfish carcasses reacted in 3.5, 7, 10, and 14 wt.% NaCl equiv. brines. The specimen reacted in normal seawater (Fig. 3d) was found completely disarticulated and decomposed, but the higher-salinity specimens in Fig. 3e-f were preserved externally. The fish fermented at 7 wt.% NaCl equiv. was ruptured ventrally, but morphologically it was still intact. The two specimens in the 10 and 14 wt.% NaCl brines were well preserved both morphologically and internally. In Fig. 4a-b we compare the organs of the 10 wt.% NaCl carcass with the organs of an unreacted goldfish carcass. The comparison shows that even after 39 days in 10 wt.% brine, organs are still allocable to their anatomic positions and rather well preserved. The fish fermented at 14 wt.% NaCl was X-rayed and imaged with µCT (Fig. 4c-e). Articulation is perfect; all bones were found in place. In the skull, gill branches are still recognisable, and even the hypurals attached to the fin-rays, likely to be vulnerable to decay, are well preserved. A salt concentration of >10 wt.% NaCl equiv. seems to suppress the growth of decomposing micro-organisms so effectively that hardly any degradation is noted.

### Proton activity (pH) and electrochemical potential (Eh)

In Fig. 5 we display the pH-Eh conditions of all post-experiment solutions, recorded after the experiments were completed and the autoclaves opened. We display pH and Eh in one diagram to appreciate that protons and electrons are correlated through the equilibrium 6H2O = O2 + 4H3O+ + 4e−. Whenever CaCO3 was used as the bottom sediment, the pH fell with reaction time from ~8.1 (seawater and seawater brine) to ~7 ± 0.5. Apparently, the decay of organic tissue liberates organic acids that hydrolyse with H2O to produce H3O+. Calcite as a bottom sediment does not seem to have sufficient buffering capacity to neutralise oxonium at the rate at which acidity is released by organic decay. To extend the pH range, two additional experiments were carried out with sodium acetate (CH3COONa) and natron (Na2CO3·10H2O) as bottom substrates. Both Na-acetate and natron are more soluble in water than CaCO3, both are more alkaline (pH ~9 and 11.2, respectively), and owing to their high solubilities in water they are better able than CaCO3 to buffer pH. Indeed, the pH values of those two experiments ended up close to the saturation pH values of CH3COONa and Na2CO3·10H2O, at 8.6 and 11.2.

With respect to electrochemical potentials, all experiments except one at a pH of 6 fell inside sulfate stability. It is noticeable that with increasing pH the solutions became more oxidised in pH-Eh space. Thermodynamically this trend is counter-intuitive. The decay of organic tissue, given e.g.
by

$$\mathrm{C_6H_{12}O_6}\ \text{(organic tissue)} + 24\,\mathrm{H_2O} \rightarrow 6\,\mathrm{HCO_3^-} + 18\,\mathrm{H_3O^+} + 12\,\mathrm{e^-} \tag{1}$$

produces oxonium ions as well as electrons, hence pH and Eh should be inversely correlated. This does not seem to be the case in Fig. 5. Our explanation is that alkaline conditions suppress bacterially mediated decay so effectively that no relative reduction occurs. Indeed, the fish on CaCO3 sediment at a pH of ~6 (Fig. 6a) was disarticulated after 20 days, while the high-pH fish fermented on Na-acetate and natron beds (Fig. 6b-e) for 42 and 77 days were both largely intact.

### Bacterial communities

Many, if not all, early fossilisation reactions commence with carcasses being encrusted by bacteria15,16. Iniesto et al.10 showed that fossils encased by bacterial mats are less susceptible to disarticulation. We also observed that after a few days in solution the carcasses were covered by red precipitates. Many experimental solutions turned out to be distinctly reddish (e.g. Figure 3b,c), notably those from experiments with normal-salinity seawater. To identify the principal bacteria populations, three experimental solutions with 3.5, 7, and 10 wt.% NaCl equiv. were sampled and the 16S rRNA genes were sequenced for microbiome analysis (Fig. 7). Well represented in the low-salinity (3.5 wt.% NaCl) solution is the order Rhodobacterales. Species of that order perform anoxygenic photosynthesis and produce pigments that we may see in Fig. 3c. Rhodobacterales are key biofilm formers on surfaces in marine habitats23. Genera of the order Rhizobiales are nitrogen-fixing24 and apparently halotolerant, since they were found most abundant in the 10 wt.% brine. Genera of the order Oceanospirillales are facultatively aerobic, some anaerobic, and some require elevated Na+ for their metabolism. The order Clostridiales includes alkaliphilic anaerobic spore-forming bacteria that live in soils, while many species of Enterobacteriales inhabit intestines. Overall, most bacterial orders identified here are anaerobic or facultatively anaerobic, an observation that accords well with the generally low Eh measured in the three post-experiment solutions sequenced for RNA (Fig. 5).

### Burial by sediment

Fish can only be preserved through geologic time if they are covered by sediment. To simulate that situation, another experiment was performed in which the fish carcass was fully embedded in calcite ooze. The fish used in this trial was a cichlid of the species Thorichthys meeki. The water body above, and the pore solution inside the sediment, was seawater brine evaporated to 10 wt.% NaCl equiv. Hydrostatic pressure was 8 bar. After 77 days, the cichlid was found flattened to ~5 mm thickness (Fig. 8a), presumably by a combination of osmotic dehydration and compaction. With respect to articulation, that specimen was found almost perfectly preserved. Internal organs are decomposed, but their former anatomic positions are still readily identified (Fig. 8b). That experiment again highlights how effective high salinity may be for long-term preservation.

## Discussion

It is the initial conditions that determine the fate of a fish carcass, whether it decays or is handed down in the geological record as a well-articulated fossil. The three parameters most effective in preventing decay are elevated salinity, elevated pH, and a hydrostatic pressure large enough that a fish after its death may sink to the bottom sediment.
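As a hedged aside (my illustration, not from the paper): the pH-Eh coupling through the water equilibrium quoted above can be made concrete with the Nernst equation for the O2/H2O couple, along which Eh falls by roughly 59 mV per pH unit at 25 °C. The sketch below evaluates that line at the pH values reported for the experiments; the standard potential and air-like O2 partial pressure are textbook values, not measurements from this study.

```python
import math

def eh_water_oxidation(ph: float, p_o2_bar: float = 0.21, t_kelvin: float = 298.15) -> float:
    """Eh (V vs. SHE) of the O2 + 4 H3O+ + 4 e- = 6 H2O equilibrium
    at a given pH and O2 partial pressure (Nernst equation)."""
    R, F = 8.314, 96485.0
    slope = math.log(10) * R * t_kelvin / F   # ~0.0592 V per decade at 25 degC
    e0 = 1.229                                # standard potential, V
    return e0 - slope * ph + (slope / 4) * math.log10(p_o2_bar)

for ph in (6, 7, 8.6, 11.2):  # pH values reported for the experiments
    print(f"pH {ph:>4}: upper stability limit of water at Eh ~ {eh_water_oxidation(ph):.2f} V")
```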
An elevated salinity is highly effective in suppressing organic decay. That is not surprising; salt curing of foodstuffs has been used by humankind at least since 3000 BC25. High salt concentrations generate an osmotic pressure across cell membranes between the cell interior and the surrounding solution, thus causing bacteria to dehydrate. Accordingly, at salinities exceeding 10 wt.% NaCl equiv. little degradation of organic tissue was noted. Organs survived almost intact even after 39 days (Fig. 4a). In the cichlid in Fig. 8a,b, fermented at identical salinity for 77 days, the organs are largely resorbed but all bones are almost perfectly articulated. Whether this is due to the specimen having been fermented inside a carbonate matrix, or to drying before it was dissected, cannot be verified. It is clear, though, that the 77-day cichlid would have had excellent chances to “survive” as a fossil as well articulated as the Pycnodontid in Fig. 1.

An elevated pH seems to be as effective in suppressing decay as elevated salinity (Fig. 6). Calcite as a bottom substrate has no buffering capacity with respect to pH because its solubility product (~3.3 × 10⁻⁹ mol² kg⁻²) is too low. Consequently, all trials with CaCO3 as the bottom substrate experienced acidification relative to seawater (pH ~8.1) to ca. 7 ± 0.5, one order of magnitude higher in H3O+ than seawater (Fig. 5). With sodium acetate and natron as bottom substrates (pH 8.6 and 11.2, respectively) no organic decay was noted. One reason could be that alkaline pH levels limit bacterial populations. All bacteria strive to keep their intracellular pH close to the neutral point26. Bacteria that live at neutral or slightly acidic pH achieve this by pumping H+ from the intermembrane space to the matrix, in order to create the electrochemical potential necessary to synthesise ATP. By contrast, alkaliphilic bacteria like Clostridium paradoxum must lower their intracellular pH in order to avoid degradation of proteins, and they achieve this by creating Na+ gradients27,28,29. That process is less energy-efficient than H+ pumping. Furthermore, alkaliphilic bacteria metabolise sugars instead of amino acids. A possible consequence of these factors is that fish carcasses exposed to high pH decompose more slowly than at neutral or slightly acidic pH levels because the absolute abundances of bacteria are limited. There is, however, a caveat to this interpretation: solutions in equilibrium with acetate and natron (cf. Fig. 3) have ionic strengths similarly high to those of hypersaline solutions with >10 wt.% NaCl equiv., so we cannot discriminate with confidence between the conserving effects of high salinity and high pH.

Redox conditions do not seem to have a major influence on decay rates. We have not noticed that anoxia around −100 mV slows down decay significantly compared to more oxidised conditions30. That is expected. All bacterial strains that inhabit the intestines of fish are anaerobic or facultatively anaerobic. They are the first bacteria to begin decomposition of the carcass after a fish dies, and they continue to do so for as long as the fish is not ruptured and exposed to the outside environment. Nonetheless, anoxia is important in fossilisation by keeping scavengers at bay.

### Application to important Konservat-Lagerstätten

We now apply our results to three Konservat-Lagerstätten where fish are prominent fossils. We decipher from the sedimentary records of these deposits the environmental conditions favourable for fossilisation.
The fossil deposits we analyse are Eichstätt-Solnhofen, Green River, and Messel.

The Eichstätt-Solnhofen deposits of the Franconian Alb lie within an epicontinental, upper Tithonian (150 Ma) platform carbonate sequence deposited on the Helvetic shelf north of the Penninic ocean. Shallow reefs alternated laterally with restricted lagoons. The latter are filled with extremely fine-grained plattenkalks or lithographic limestones31. Fossil preservation is outstanding. Fossils include the famous Archaeopteryx, many species of pterosaurs, and a plethora of fish32. Many fish fossils are well articulated, and some even preserve original scale colouring33. In the Upper Jurassic, the Solnhofen archipelago was situated at ca. 30°N in a semi-arid climate belt. The distribution of land and sea at that time - Laurasia in the northwest, Gondwana in the southwest, and the Tethys and Penninic oceans toward the southeast - suggests a monsoon-type climate, with wet onshore southeasterly winds in summer and dry offshore westerly winds in winter34. During winter, evaporation could have exceeded precipitation. Viohl35 assumes that many of the Solnhofen-Eichstätt lagoons developed seasonal salinity stratifications in which normal-salinity, oxygenated surface waters thriving with life overlay deep, hypersaline, reduced bottom waters. Fish that died at water depths where sinking was possible (Fig. 2) could have sunk into hypersaline bottom waters where decay was slow. Unfortunately, though, the salinities of the Solnhofen basins are poorly constrained. Seilacher et al.36 proposed that the post-mortem contortions of the vertebral columns of fish, quite common in Solnhofen, occurred when fish carcasses were dehydrated by hypersaline brines. Viohl35 reported what he thought were pseudomorphs of calcite after gypsum, suggesting salinities above 11 wt.% NaCl equiv., but until primary gypsum is identified this proposition remains speculative. We can say, though, that the good conservation of fish carcasses in hypersaline brines does support Viohl’s35 analysis.

The Green River Konservat-Lagerstätte in Wyoming is one of the best fossil fish sites worldwide37. In the Eocene, the Green River lakes formed a playa-lake complex38 in which freshwater periods alternated with highly alkaline, halite-saturated waters. Today, the lake sediments host the largest deposits of trona (Na2CO3·NaHCO3·2H2O) worldwide, a mineral that at saturation imposes a pH of >11. Eugster39 noted similarities with alkaline lakes of the East African Rift system that also bear fossil fish40. Grande37 implicated algal blooms or overturns of H2S-bearing hypolimnia in the fish mortality events, but no convincing theory exists for why the Green River fish are so well preserved. We propose that high salinities combined with high pH values may have played a key role in preservation.

The waters of the Eocene Konservat-Lagerstätte Messel may have been unusually alkaline as well. Messel was a crater lake41 whose waters must have equilibrated with alkaline basaltic tuffs. The hydrolysis of pyroclastic materials like feldspar and basaltic glass shards can impose high alkalinities on water, with pH values as high as 10 (ref. 42). In many fossiliferous horizons of the Messel stratigraphy, fossils are encrusted by siderite (FeCO3), a mineral that affords highly reduced conditions within the stability field of Fe2+,aq or Fe(OH)+,aq. Siderite is also a good pH sensor.
At ambient CO2 partial pressure and an Fe2+,aq content of around 5 ppm, the formation of siderite in water requires a pH of >10 (ref. 18). It is possible that we owe the fish fossils in Messel to a highly alkaline pH.

## Conclusion

We show that success or failure in the fossilisation of fish is decided soon after a fish dies. Pressure must be high enough that a fish carcass may sink to the bottom sediment. A low redox state per se does not seem to delay soft tissue decay, but anoxia may be essential in keeping scavengers at bay. The parameters most effective in early fossilisation are a high salinity and an alkaline pH. When the salinity is >10 wt.% NaCl equiv. or the pH is in the alkaline region, bacterial attack on soft tissue is greatly retarded, and a carcass can rest on the sediment-water interface for many weeks to months without decomposition, until it is buried by sediment. The good preservation of the experimental proto-fossil in Fig. 8 is not meant to imply that fossilisation is complete after a few weeks. Many reactions will ensue, including the phosphatisation of bones5, lithification by bacteria43, the pseudomorphic replacement of organic tissue by inorganic materials44,45,46, and the re-organisation of organic molecules to more durable compounds47,48. For the successful fossilisation of fish it is the initial conditions that matter. They determine if a carcass rots, if it is consumed by scavengers, or if after millions of years it re-emerges in the geological record as a fossil.

## References

1. Briggs, D. E. G. & Kear, A. J. Decay and preservation of Polychaetes: taphonomic thresholds in soft-bodied organisms. Paleobiology 19, 107–135 (1993a).

2. Briggs, D. E. G. & Kear, A. J. Fossilisation of soft tissue in the laboratory. Science 259, 1439–1442 (1993b).

3. Briggs, D. E. G. & Kear, A. J. Decay and mineralization of shrimps. Palaios 9, 431–456 (1994).

4. Briggs, D. E. G. & McMahon, S. The role of experiments in investigating the taphonomy of exceptional preservation. Palaeontology 59, 1–11 (2016).

5. Briggs, D. E. G. & Wilby, P. R. The role of the calcium carbonate-calcium phosphate switch in the mineralization of soft-bodied fossils. Journal of the Geological Society, London 153, 665–668 (1996).

6. Tarhan, L. G., Hood, A. S., Droser, M. L., Gehling, J. G. & Briggs, D. E. G. Exceptional preservation of soft-bodied Ediacara biota promoted by silica-rich oceans. Geology 44, 951–954 (2016).

7. Briggs, D. E. G. The role of decay and mineralization in the preservation of soft-bodied fossils. Annual Review of Earth and Planetary Sciences 31, 275–301 (2003).

8. Iniesto, M., Lopez-Archilla, A. I., Fregenal-Martínez, M., Buscalioni, A. D. & Guerrero, M. C. Involvement of microbial mats in delayed decay: An experimental essay on fish preservation. Palaios 28, 56–66 (2013).

9. Iniesto, M. et al. Involvement of microbial mats in early fossilisation by decay delay and formation of impressions and replicas of vertebrates and invertebrates. Scientific Reports 6, 25716 (2016).

10. Iniesto, M., Villalba, I., Buscalioni, A. D., Guerrero, M. C. & López-Achilla, I. The effect of microbial mats in the decay of Anurans with implications for understanding taphonomic processes in the fossil record. Scientific Reports 7, 45160, https://doi.org/10.1038/srep45160 (2017).

11. Wilson, L. A. & Butterfield, N. J.
Sediment effects on the preservation of Burgess Shale-type compression fossils. Palaios 29, 145–154 (2014).

12. Brock, F., Parkes, R. J. & Briggs, D. E. G. Experimental pyrite formation associated with decay of plant material. Palaios 21, 499–506 (2006).

13. Meshal, A. H. Hydrography of a hypersaline coastal lagoon in the Red Sea. Estuarine, Coastal and Shelf Science 24, 167–175 (1987).

14. Bellanca, A. et al. Transition from marine to hypersaline conditions in the Messinian Tripoli Formation from the marginal areas of the central Sicilian Basin. Sedimentary Geology 140, 87–105 (2001).

15. Wuttke, M. ‘Weichteil-Erhaltung’ durch lithifizierte Mikroorganismen bei mittel-eozänen Vertebraten aus den Ölschiefern der ‘Grube Messel’ bei Darmstadt. Senckenbergiana Lethaea 64, 509–27 (1983).

16. Wilby, P. R., Briggs, D. E. G., Bernier, P. & Gaillard, C. Role of microbial mats in the fossilization of soft tissues. Geology 24, 787–790 (1996).

17. Saitta, E. T., Kaye, T. G. & Vinther, J. Sediment-encased maturation: a novel method for simulating diagenesis in organic fossil preservation. Palaeontology 62, 135–150 (2018).

18. Gäb, F. et al. Siderite cannot be used as CO2 sensor for Archaean atmospheres. Geochimica et Cosmochimica Acta 214, 209–225 (2017).

19. Mignard, S. & Flandrois, J. P. 16S rRNA sequencing in routine bacterial identification: A 30-month experiment. Journal of Microbiological Methods 67, 574–581 (2006).

20. Allison, P. A. Soft-bodied animals in the fossil record: The role of decay in fragmentation during transport. Geology 14, 979–981 (1986).

21. Allison, P. A. Konservat-Lagerstätten: cause and classification. Paleobiology 14, 331–344 (1988).

22. Barton, D. G. & Wilson, M. V. H. Taphonomic variations in Eocene fish-bearing varves at Horsefly, British Columbia, reveal 10 000 years of environmental change. Canadian Journal of Earth Science 42, 137–149 (2005).

23. Jones, P. R., Cottrell, M. T., Kirchman, D. L. & Dexter, S. C. Bacterial community structure of biofilms on artificial surfaces in an estuary. Microbial Ecology 53, 153–162 (2007).

24. Madigan, M. T., Martinko, J. M., Stahl, D. A. & Clark, D. P. Brock Mikrobiologie Kompakt. Pearson Education Inc., 698 p. (2015).

25. Binkerd, E. & Kolari, O. The history and use of nitrate and nitrite in the curing of meat. Food and Cosmetics Toxicology 13, 655–661 (1975).

26. Detkova, E. N. & Pusheva, M. A. Energy metabolism in halophilic and alkaliphilic acetogenic bacteria. Microbiology 75, 1–11 (2006).

27. Li, Y., Mandelco, L. & Wiegel, J. Isolation and characterization of a moderately thermophilic anaerobic alkaliphile, Clostridium paradoxum sp. nov. International Journal of Systematic and Evolutionary Microbiology 43, 450–460 (1993).

28. Cook, G. M., Russell, J. B., Reichert, A. & Wiegel, J. The intracellular pH of Clostridium paradoxum, an anaerobic, alkaliphilic, and thermophilic bacterium. Applied and Environmental Microbiology 62, 4576–4579 (1996).

29. Dimroth, P. & Cook, G. M. Bacterial Na+ or H+ coupled ATP synthases operating at low electrochemical potential. Advances in Microbial Physiology, 175–218, https://doi.org/10.1016/s0065-2911(04)49004-3 (2004).

30. Hellawell, J. & Orr, P. J. Deciphering taphonomic processes in the Eocene Green River Formation of Wyoming. Palaeobiology Palaeo-environment 92, 353–365 (2010).

31. Munnecke, A., Westphal, H. & Kölbl-Ebert, M.
Diagenesis of plattenkalk: examples from the Solnhofen area (Upper Jurassic, southern Germany). Sedimentology 55, 1931–1946 (2008).
32. Arratia, G., Schultze, H.-P., Tischlinger, H. & Viohl, G. Solnhofen. Ein Fenster in die Jurazeit. Verlag Dr. Friedrich Pfeil, Munich (2015).
33. Ebert, M., Kölbl-Ebert, M. & Lane, J. A. Fauna and predator-prey relationships of Ettling, an Actinopterygian fish-dominated Konservat-Lagerstätte from the Late Jurassic of Southern Germany. PLoS ONE 10 (2015).
34. Hallam, A. Jurassic Environments. Cambridge University Press, 284 p. (1975).
35. Viohl, G. Der geologische Rahmen: Die südliche Frankenalb und ihre Entwicklung. In Solnhofen: Ein Fenster in die Jurazeit (Arratia, G., Schultze, H.-P., Tischlinger, H. & Viohl, G. eds). Verlag Dr. Friedrich Pfeil, Munich, 56–100 (2015).
36. Seilacher, A., Reif, W. E. & Westphal, F. Sedimentological, ecological and temporal patterns of fossil Lagerstätten. Philosophical Transactions of the Royal Society of London B 311, 5–23 (1985).
37. Grande, L. Paleontology of the Green River Formation, with review of the fish fauna. Bulletin 63, Geological Survey of Wyoming, 333 p. (1984).
38. Surdam, R. C. & Stanley, K. O. Lacustrine sedimentation during the culminating phase of Eocene Lake Gosiute, Wyoming (Green River Formation). Geological Society of America Bulletin 90, 93–110 (1979).
39. Eugster, H. P. Lake Magadi, Kenya: a model for rift valley hydrochemistry and sedimentation? In Sedimentation in the African Rifts (Frostick, L. E., Renaut, R. W., Reid, I. & Tiercelin, J. J. eds). Geological Society Special Publication 25, 177–189 (1986).
40. Rasmussen, C. et al. Middle-late Miocene palaeoenvironments, palynological data and a fossil fish Lagerstätte from the Central Kenya Rift (East Africa). Geological Magazine 154, 24–56 (2017).
41. Büchel, G. N. & Schaal, S. F. K. Die Entstehung des Messel-Maares. In Messel. Ein fossiles Tropenökosystem (Schaal, S. F. K., Smith, K. T. & Habersetzer, J. eds). Schweizerbarth, 7–16 (2018).
42. Ballhaus, C. et al. The silicification of trees in volcanic ash - An experimental study. Geochimica et Cosmochimica Acta 84, 62–74 (2012).
43. Chafetz, H. S. & Buczynski, C. Bacterially induced lithification of microbial mats. Palaios 7, 277–293 (1992).
44. Briggs, D. E. G. & Bartels, C. Annelids from the Lower Devonian Hunsrück Slate (Lower Emsian, Rhenish Massif, Germany). Palaeontology 53, 215–232 (2010).
45. Hellawell, J. et al. Incipient silicification of recent conifer wood at a Yellowstone hot spring. Geochimica et Cosmochimica Acta 149, 79–87 (2015).
46. Schopf, J. M. Modes of fossil preservation. Reviews of Paleobotany and Palynology 20, 27–72 (1977).
47. Briggs, D. E. G. & Summons, R. E. Ancient biomolecules: their origin, fossilization and significance in revealing the history of life. Bioessays 36, 482–490 (2014).
48. Wiemann, J. et al. Fossilization transforms vertebrate hard tissue proteins into N-heterocyclic polymers. Nature Communications 9, 4741, https://doi.org/10.1038/s41467-018-07013-3 (2018).
49. Ebert, M. The Pycnodontidae (Actinopterygii) in the late Jurassic: 1) The genus Proscinetes Gistel, 1848 in the Solnhofen Archipelago (Germany) and Cerin (France). Archaeopteryx 31, 22–43 (2013).

## Acknowledgements

We thank Dieter Lülsdorf, Thomas Schulz, and Philipp Cremer for constructing the experimental equipment and Georg Oleschinski for photographic work.
Joachim Mogdans euthanised the fish with tricaine methanesulfonate prior to experimentation. Martin Ebert and Martina Kölbl-Ebert gave permission to illustrate the Pycnodontid in Fig. 1. Christoph Bultmann advised Fabian Gäb on the limitations of bacterial genome sequencing. Alexander Ziegler assisted Anna G. Kral with X-ray imagery and micro-tomography. We thank our colleagues of the research group FOR 2685 for useful comments on the project. Comments by the editor and two anonymous journal reviewers are much appreciated. Support by the DFG through grant Ba 964/38-1 to Chris Ballhaus is gratefully acknowledged. This is contribution no. 34 of the DFG Research Unit 2685, "The Limits of the Fossil Record: Analytical and Experimental Approaches to Fossilization".

## Author information

### Contributions

F.G. and C.B. designed the project and wrote the manuscript. E.S. dissected the fish specimens and assisted in carrying out the experiments. A.G.K. conducted the µCT analysis. K.J. and G.B. prepared the DNA samples for microbiome analysis and interpreted the results.

### Corresponding authors

Correspondence to Fabian Gäb or Chris Ballhaus.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Gäb, F., Ballhaus, C., Stinnesbeck, E. et al. Experimental taphonomy of fish - role of elevated pressure, salinity and pH. Sci Rep 10, 7839 (2020). https://doi.org/10.1038/s41598-020-64651-8
# Euler-Lagrange question about strange differentiation

## Main Question or Discussion Point

I'm watching Susskind's Classical Mech. YouTube lecture series and am really confused about something he's doing, where otherwise I've followed everything up until this point without a problem. In Lecture 3 he's dealing with the Euler-Lagrange equation applied to minimizing the distance between two points, and I understand his work up until here, where he starts taking the partial derivatives of the Lagrangian with respect to v_i and v_i-1 rather than x_i. Why does he do this rather than continuing to take the derivatives w.r.t. x_i? He flatly says, "We're differentiating with respect to x-sub-i here" and then proceeds to take a partial derivative w.r.t. v_i and v_i-1 instead. I don't get it.

BvU:
The second argument of ##\mathcal L## is ##v_i##. So for the second ##\mathcal L##: $${\partial \mathcal L \over \partial x_i} = {\partial \mathcal L \over \partial v_i} \; {\partial v_i \over \partial x_i} = {1\over \epsilon} {\partial \mathcal L \over \partial v_i}$$

> In Lecture 3 he's dealing with the Euler-Lagrange equation applied to minimizing the distance between two points, and I understand his work up until here, where he starts taking the partial derivatives of the lagrangian with respect to v_i and v_i-1 rather than x_i.

If he is doing the calculation for finding the minimum distance between two points, his functional must be a function of the v's rather than the x's; that's why he is interested in the partial derivatives w.r.t. the v's.

vanhees71:
> The second argument of ##\mathcal L## is ##v_i##. So for the second ##\mathcal L##: $${\partial \mathcal L \over \partial x_i} = {\partial \mathcal L \over \partial v_i} \; {\partial v_i \over \partial x_i} = {1\over \epsilon} {\partial \mathcal L \over \partial v_i}$$

This is misleading, since by assumption the ##x_i## and ##v_i## are independent variables, concerning the partial derivatives of the Lagrangian. What's behind this is of course the action principle, which is about variations of the action functional $$S[x_i]=\int_{t_1}^{t_2} \mathrm{d} t\, L(x_i,\dot{x}_i).$$ The variation of the trajectories ##x_i(t)## is taken at fixed boundaries ##\delta x_i(t_1)=\delta x_i(t_2)=0## and time is not varied. The latter implies that $$\delta \dot{x}_i=\frac{\mathrm{d}}{\mathrm{d} t} \delta x_i$$ and thus $$\delta S[x_i]= \int_{t_1}^{t_2} \mathrm{d} t \left[\delta x_i \frac{\partial L}{\partial x_i} + \frac{\mathrm{d} \delta x_i}{\mathrm{d} t} \frac{\partial L}{\partial \dot{x}_i} \right] = \int_{t_1}^{t_2} \mathrm{d} t\, \delta x_i \left[\frac{\partial L}{\partial x_i} - \frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial L}{\partial \dot{x}_i} \right] \stackrel{!}{=}0.$$ In the last step, I've integrated the 2nd term by parts. Since this equation must hold for all ##\delta x_i##, you get to the Euler-Lagrange equations, $$\frac{\delta S}{\delta x_i}=\frac{\partial L}{\partial x_i} - \frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial L}{\partial \dot{x}_i} \stackrel{!}{=}0,$$ which are the equations of motion for the trajectories ##x_i(t)##.
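A quick numerical sanity check of the discretized picture in the lecture (my own sketch, not from the thread): if each segment's Lagrangian is its length ##\sqrt{\epsilon^2 + (x_i - x_{i-1})^2}##, then minimizing the summed length over the interior points, with the endpoints held fixed, should return the straight line between the endpoints. The grid size and endpoint values here are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Discretized path length: each segment contributes sqrt(eps^2 + (x_i - x_{i-1})^2).
eps, n = 0.1, 20
x_start, x_end = 0.0, 3.0

def total_length(interior):
    x = np.concatenate(([x_start], interior, [x_end]))
    return np.sum(np.sqrt(eps**2 + np.diff(x)**2))

rng = np.random.default_rng(0)
result = minimize(total_length, rng.uniform(-1, 1, n - 1))

straight = np.linspace(x_start, x_end, n + 1)[1:-1]  # interior points of the straight line
print(np.abs(result.x - straight).max())  # ~0: the minimizer is the straight line
```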
# Homework Help: Surface Area of y = e^5x revolved around the x-axis from 0 to ln(4) 1. Feb 23, 2010 ### Oblakastouf 1. The problem statement, all variables and given/known data http://i47.tinypic.com/1z6naa.jpg Note... I used wolfram alpha to get the answer, I did not get it myself... So I still need help. The answer shown is correct, so you'll know if you got it. 2. Relevant equations Integral [0, ln(4)] sqrt(1+(dy/dx)^2) 3. The attempt at a solution 2pi Integral [0, ln(4)] y*sqrt(1+(dy/dx)^2) 2pi Integral [0, ln(4)] (e^5x)*sqrt(1+5e^5x^2)dx u = 5e^5x du = 25e^5x dx dx = du/25e^5x 2pi Integral [0, ln(4)] (e^5x)*sqrt(1+u^2)du/25e^5x 2pi Integral [0, ln(4)] sqrt(1+u^2)du u = tan(t) 2pi/25 Integral [0, ln(4)] sqrt(1+tan^2(t))du 2pi/25 Integral [0, ln(4)] sqrt(sec^2(t))du 2pi/25 Integral [0, ln(4)] sec(t)du du = sec^2(t)dt dt = du*cos^2(t) 2pi/25 Integral [0, ln(4)] cos^2(t)/cos(t)dt 2pi/25 Integral [0, ln(4)] cos(t)dt Edit bounds... [arctan(5), arctan(5e^(5*ln(4)))] Then get ****ed over with an answer of .0048... What did I do wrong. 2. Feb 23, 2010 ### Dick If du=sec^2(t)*dt, then sec(t)*du is sec^3(t)*dt. 3. Feb 23, 2010 ### Oblakastouf Right... My mistake, but I'm also having trouble with integration, and that isn't my strong suit, how would I integrate that? 4. Feb 23, 2010 ### Dick It's kind of a long haul. You start by integrating by parts u=sec(t), dv=sec(t)^2*dt. It probably goes a little easier if you go back to the integral of sqrt(1+x^2)*dx and substitute x=sinh(u), if you are ok with hyperbolic functions. 5. Feb 23, 2010 ### Oblakastouf I am, but I'm in a class that doesn't use them yet lol.
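For anyone who wants to check the final number without grinding through the trig substitution, here is a short numerical sketch (my own, not from the thread) that evaluates the surface-area integral directly with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Surface of revolution about the x-axis: S = 2*pi * integral of y * sqrt(1 + (dy/dx)^2) dx,
# with y = e^(5x), dy/dx = 5e^(5x), from x = 0 to x = ln(4).
integrand = lambda x: np.exp(5 * x) * np.sqrt(1 + 25 * np.exp(10 * x))
value, abserr = quad(integrand, 0, np.log(4))
print(2 * np.pi * value)  # compare with the answer from Wolfram Alpha
```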
# Question #71f34

Oct 30, 2016

$8.8$ m

#### Explanation:

Since the area of a square is ${l}^{2}$, where $l$ represents the side length of the square, we can reverse the operation to obtain the side length from the area. Therefore:

${l}^{2} = 4.84$

$l = \sqrt{4.84} = 2.2$

Note that we ignore the negative square root, since a length cannot be negative, even though ${\left(2.2\right)}^{2}$ and ${\left(- 2.2\right)}^{2}$ yield the same result.

Now that we have the side length of the square, we know that the perimeter is $4 l$, or four times the side length. Therefore,

$4 l = 4 \cdot 2.2 = 8.8$

Thus the perimeter of the crime scene is $8.8$ m. Note: Don't forget units!
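The same two-step computation (side length from area, then perimeter), written as a minimal Python sketch:

```python
import math

def perimeter_from_area(area):
    side = math.sqrt(area)  # take the positive root; a length cannot be negative
    return 4 * side

print(perimeter_from_area(4.84))  # 8.8
```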
Publication: 1997, Issue No. 4 - April

Abstract - A Geometric Approach for Constructing Coteries and k-Coteries

Yu-Chen Kuo, Shing-Tsaan Huang. April 1997 (vol. 8 no. 4), pp. 402-411.

Abstract—Quorum-based mutual exclusion algorithms are resilient to node and communication line failures. Recently, some mutual exclusion algorithms successfully use logical structures to construct coteries with small quorum sizes. In this paper, we introduce a geometric approach for dealing with the logical structures and present some useful geometric properties for constructing coteries and k-coteries. Based on those geometric properties, a logical structure named the three-sided graph is proposed to provide a new scheme for constructing coteries with small quorums: The smallest quorum size is $O(\sqrt N)$ in a homogeneous system with N nodes and O(1) in a heterogeneous system. In addition, we also extend the three-sided graph to the n-sided graph for constructing k-coteries.

Index Terms: Coterie, critical section, distributed algorithm, fault-tolerance, mutual exclusion, quorum set.

Citation: Yu-Chen Kuo, Shing-Tsaan Huang, "A Geometric Approach for Constructing Coteries and k-Coteries," IEEE Transactions on Parallel and Distributed Systems, vol. 8, no. 4, pp. 402-411, April 1997, doi:10.1109/71.588618
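The $O(\sqrt N)$ quorum size in the abstract can be illustrated with a much simpler classical construction than the paper's three-sided graph (which is not reproduced here): a grid coterie, in which each quorum is one full row plus one node from every other row, so any two quorums necessarily intersect. A minimal sketch:

```python
import math

def grid_coterie(n):
    """Quorums over n nodes laid out on a sqrt(n) x sqrt(n) grid: one full
    row plus one node from every other row; any two such quorums intersect."""
    side = math.isqrt(n)
    grid = [list(range(r * side, (r + 1) * side)) for r in range(side)]
    quorums = []
    for r in range(side):
        quorum = set(grid[r])                 # the full row r
        for r2 in range(side):
            if r2 != r:
                quorum.add(grid[r2][0])       # one node from each other row
        quorums.append(quorum)
    return quorums

qs = grid_coterie(16)
print(all(q1 & q2 for q1 in qs for q2 in qs))  # pairwise intersection: True
print(max(len(q) for q in qs))                 # quorum size 2*sqrt(n) - 1 = 7
```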
Let APQRS be a pyramid, where the base PQRS is a square of side length 20. The total surface area of pyramid APQRS (including the base) is 1200. Let W, X, Y, and Z be the midpoints of $$\overline{AP}, \overline{AQ}, \overline{AR},$$ and $$\overline{AS},$$ respectively. Find the total surface area of frustum PQRSWXYZ (including the bases). Jun 22, 2020

#1
I believe that it doesn't make any difference if the pyramid is regular or not. Total surface area = 1200. Surface area of the base = 400. Total surface area of the four sides is 1200 - 400 = 800. 1/2 of the area of each side is below the midline*; therefore 1/2·800 = 400 is below the midline. Midline value of each side = 10 ---> area of the top base = 100. Total surface area = 400 + 400 + 100 = 900. Jun 22, 2020

#3
" 1/2 of the area of each side is below the midline " The pyramid is cut horizontally, NOT vertically!!! Guest Jun 23, 2020

#2
Bottom square Ab = 400
Top square At = 100
Sides As = 600
Total area A = 1100 units squared
Jun 23, 2020
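Answer #2's reasoning, checked in a few lines of Python: cutting a triangular face at its midline leaves the top similar triangle with 1/4 of the face area, so 3/4 of the lateral area stays with the frustum.

```python
# Given: total pyramid surface 1200, square base of side 20, cut at edge midpoints.
total, base_side = 1200, 20
base_area = base_side ** 2            # 400
lateral = total - base_area           # 800 over the four triangular faces
top_area = (base_side / 2) ** 2       # 100, the midpoint cross-section
frustum = base_area + 0.75 * lateral + top_area
print(frustum)                        # 1100.0
```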
# Definition:Pointed Mapping Let $(A, a)$ and $(B, b)$ be pointed sets. A pointed mapping $f : (A, a) \to (B, b)$ is a mapping $f : A \to B$ such that $f(a) = b$.
XviD Video Codec is a product of the XviD project, which is sustained through a collaborative development effort. Its focus is related to compressing videos in order to reduce the required bandwidth in transmissions over networks.

XviD is an open source MPEG-4 video codec that has its roots in the project that was the basis for DivX 4/5. Although it does not bundle a media player, the codec facilitates the playback of XviD videos in other media applications such as Winamp or Windows Media Player.

Despite general belief, XviD is not a video format; it is a codec that helps to compress and decompress MPEG-4 streams. The advantage is the low bandwidth that will be required to transfer such files, as well as more efficient storage options on discs and removable media.

Even though it aims to fulfill the same purpose as any other similar codec pack, XviD Media Codec does not support comparison with such projects, for many reasons, some of which are controversial, yet justified.

First of all, XviD Media Codec is not provided in a binary package, as you're probably accustomed. The official page of its development team provides downloads for the source code only, supposedly because of concerns over patents. However, it does offer links to websites that host compiled installers of XviD Media Codec. If you're a developer, you can just compile the source and obtain an installer for the codec; on the other hand, regular users will have a hard time achieving that and instead, they can rely on the suggestions available on the developer's website.

On an ending note, XviD Media Codec is a project that has over ten years of existence, sustained by extensive efforts from the community. The lack of official binary packages can be compensated by the suggested alternatives, which are trustworthy if advertised by the development team itself.
What's New: A new v0.92 version was released a few weeks ago (2018-12-14), but the new changes are not really a big deal for users of the software.

## XviD Video Codec X64

Once the user has installed XviD Media Codec and verified that the files are accessible, the program runs in the background and lets you choose the resolution that you want to compress at. Additionally, you can select the audio format, as well as different audio levels or video bit-rate, frame size and general settings like disc size, frame rate, or audio channels. In order to reduce the bandwidth and maximize the quality, you can also import a video file with the ISO and/or various file formats. Finally, just as with every application, XviD Media Codec has a built-in help file so that you can get started right away.

## XviD Video Codec

XviD Video Codec is said to support most of the recommendations included in SMPTE 303M, or Advanced Video Coding (AVC), and there are no restrictions on the number of secondary streams that can be received. If you have a Blu-ray or DVD burner, you can import XviD videos directly without relying on a player, as the XviD codec itself can decode and encode videos. XviD Media Codec is said to support bit-rates up to 1.5 times normal quality (i.e., 240 Mbps), which is not as good as the 1.25 times target that DivX HD has. In fact, there are no limitations on the target bit-rate as long as the source file is compliant with the extended profile, with respect to the Profile Level coding. It is said that XviD is capable of intra-frame DCT (Discrete Cosine Transform) alone, while DivX Pro supports intra-frame DCT + intra-frame wavelet and JPEG 2000. In fact, XviD Media Codec is mentioned as part of the X3.0, which also includes DivX Pro. XviD Media Codec uses a variable rate that is estimated directly from the initial input, so as not to waste too much of the output bit-rate. This is something that is not discussed in DivX Pro, which does not benefit from the same approach. The bits are divided into three frames in XviD.
The first, which is called the Action Header, can be bypassed, and the second contains the picture header. It includes the following information:

Segment_Count: The number of segments in the video file
Width: The width of the picture
Height: The height of the picture
Frame_Rate: The frame rate
Frame_Count: The number of frames in the video file
Compression_Type: A media file's method of compression
Media_Type: A media file type

As you can see, XviD Media Codec is compatible with video formats such as H.264/MPEG-4 AVC, H.265/HEVC, or VP8. The PIC (Picture Identifier) information is used as the frame number, while the SIC (Segment Identifier) is used to provide synchronization for a macroblock. However, depending on the implementation of the individual codecs, the PIC/SIC ranges can be quite different from the standard protocols. To ensure that there are

## What's New In?

XviD is a highly developed MPEG-4 video compressor and decompressor. It is the premier-class MPEG-4 video compressor/decompressor system. It is a multi-threaded program that can support multi-CPU operation. XviD is the most powerful and popular Xvid/MPEG-4 video format available for Windows. It offers reasonable performance and provides an unparalleled quality of video. XviD can make a DVD-5 movie fit in a CD-ROM. It can reduce the file size of an MPEG-4 movie to about 50%-75% from the original, depending on the content. It can reduce the file size to about 30%-70% from the original, depending on the content. With a high-speed MPEG-4 hardware accelerator, XviD can significantly reduce the file size.

XviD Media Codec Features:
- Very fast! It is the most powerful MPEG-4 video compressor available. You can burn a DVD-5 DVD in just over 10 minutes!
- Precise MPEG-4 video stream CODEC. It just plays MPEG-4, H.264, or AVCHD videos without any modification.
- Sophisticated video features! It can also be used as a video player for MPEG-4, H.264, AVCHD or any other videos.
- Multi-threaded! It can support multi-core CPUs. You can use as many CPUs as you want! It has a powerful library so that the performance is balanced between CPU and GPU when being used.
- Fits on desktops and laptops. It runs smoothly on current Windows OS.
- It is an open source MPEG-4 video codec. The development of the codec is completely open and community-driven.
- Compatible with popular multimedia players. It uses the decoders from Xvid and ffmpeg, allowing easy use.
- It is the most popular MPEG-4 video codec and plays MPEG-4, H.264, AVCHD, and any other video formats, without any modification.

XviD Media Codec Requirements:
- Operating System: Windows 95/98/Me/NT/2000/XP. Some versions may require Administrator rights.
- RAM: The recommended minimum amount of RAM is 512 MB for the high quality setting, and 256 MB for the normal setting. The minimum is around 128 MB for the low quality setting.

## System Requirements For XviD Video Codec:

- Supported OS: Windows 10 / 8 / 7
- Processor: Core i3-750 / i5-760 / i7-750 / i7-2600 / i7-35XX
- Memory: 2 GB
- Graphics: 1 GB of VRAM
- Network:
# APPROXIMATIONS OF SUMS OF INFINITE SERIES

## Background

In Section 8.2 of the text the sum of an infinite series is defined as $S = \lim_{n \to \infty} S_n$, where $S_n$ is the partial sum of the first n terms of the series. However, because of the algebraic difficulty (often, impossibility) of expressing $S_n$ as a function of n, it is usually not possible to find sums by directly using the definition. So, if we can generally not work from the definition, what can be done? The convergence tests of Sections 8.3, 8.4, 8.5 and 8.6 provide us with some needed tools. These are tests that tell us if a series converges, but - in the case the series does converge - do not tell us the sum of the series. Then, how do we find the sum? The answer is that we usually cannot find the sum. However, we can approximate the sum, S, by the partial sum $S_n$, for an appropriate value of n. But, what value of n will guarantee that S is, for instance, approximated by $S_n$ to three decimal places? For most of the tests there is not a good answer to this question. In many cases a heuristic approach is used - look at $S_n$ for successive values of n until it seems that the third decimal place in the $S_n$ will not change further. Note that there is no guarantee that this procedure will actually give the desired accuracy. For instance, it would be easy to be fooled by a slowly converging series.

For the convergent series $\sum_{k=1}^{\infty} a_k$, we define the remainder after n terms as $R_n = \sum_{k=n+1}^{\infty} a_k$. It should be clear that $S = S_n + R_n$ and that $R_n$ is in fact the error when $S_n$ is used to approximate S. As is usual in approximation arguments, we seek an upper bound on the absolute value of the error. The argument that is used to prove the integral test can be modified so as to establish the following bounds:

$$\int_{n+1}^{\infty} f(x)\,dx \;\le\; R_n \;\le\; \int_{n}^{\infty} f(x)\,dx,$$

where $f(i) = a_i$ when i is a positive integer. Thus, under the assumption that $a_k > 0$ for all k, we have

$$0 \;\le\; S - S_n \;\le\; \int_{n}^{\infty} f(x)\,dx.$$

That is, we have a bound on the error arising by using $S_n$ to approximate S.

## Maple Notes

Use the Maple help command (?sum) to read more about the sum command. Note especially the use of single quotes when using sum. Sometimes it may be advantageous to use Sum and value instead of just sum. The use of Sum enables you to check if you have typed the series correctly. As an example, look at

> Sum('1/2^k','k'=1..100);
> value(");

Also try these commands with "100" replaced by "infinity." When you use sum to find the value of some Sn, you will usually want to use it in conjunction with evalf.

> evalf(sum( ));

Otherwise, in some cases, Maple will attempt to do exact arithmetic and the answer may be long enough to fill several screens.

## Exercises

1. Use Maple to apply a comparison test to the given series. What is the behavior of the series? Explain. Hint: use .
2. Use Maple to apply the ratio test to the given series. What is the behavior of the series? Explain.
3. Use Maple to apply the integral test to show that the given series converges.
4. For the series of Exercise 3, use Maple to implement the error bound theory discussed in the Background section. In particular, find the smallest n that guarantees Sn approximates S with an error less than 0.05.
5. Find Sn for the value of n found in Exercise 4. In fact, Maple can sum the series we have been discussing. Have Maple do so. How does Rn compare with ?. Did the error bound theory work well for this series?
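A quick illustration of Exercise 4's error-bound idea in Python rather than Maple (my own sketch; the lab's actual series is given in an image, so $\sum 1/k^2$ is assumed here as a stand-in):

```python
from scipy.integrate import quad

# Assumed example series: sum of 1/k^2, so f(x) = 1/x^2.
f = lambda x: 1.0 / x**2

def smallest_n(tol=0.05):
    """Smallest n for which the integral-test bound on R_n is below tol."""
    n = 1
    while quad(f, n, float("inf"))[0] >= tol:  # upper bound on the remainder R_n
        n += 1
    return n

n = smallest_n()
S_n = sum(f(k) for k in range(1, n + 1))
print(n, S_n)  # compare S_n with the exact sum pi^2/6 ~ 1.6449
```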
anonymous, one year ago: Could someone check my answer?

Q. Assume an arbitrary atom X with atomic number (nuclear charge) of Z. Calculate the maximum possible value of Z which an ion X^+(Z-1) can have, keeping in mind that the electron cannot move faster than the speed of light (we already know that the maximum achievable speed is the speed of light). Ignore the relativistic effects and use Bohr model concepts to solve this problem.

1. anonymous: I used the following formula to calculate Z. $v=\frac{Z e^{2}}{2 n \epsilon_0 h}$ I got the answer Z = 137. Am I doing this correctly?
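The approach checks out numerically: with $n = 1$, the Bohr speed $v = Z e^2 / (2 \epsilon_0 h)$ reaches the speed of light at $Z \approx 1/\alpha \approx 137$. A small sketch of the arithmetic (my own, using SciPy's physical constants):

```python
from scipy.constants import c, e, epsilon_0, h

# Bohr-model speed in the ground state (n = 1): v = Z e^2 / (2 epsilon_0 h).
v_z1 = e**2 / (2 * epsilon_0 * h)  # speed for Z = 1; this equals alpha * c
z_max = int(c / v_z1)              # largest integer Z with v <= c
print(z_max)                       # 137
```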
# How do you manipulate pose bones using python?

I'm trying to render different images of hand gestures using python, but for some reason, I'm unable to get the hand armature bones to move. I'm using Manuel Bastoni for the model. http://www.manuelbastioni.com/

Here's the code that I have right now:

```python
bpy.ops.object.mode_set(mode='POSE')
for bone in bones:
    x = bone.location[0] + random.uniform(-0.2, 0.2)
    y = bone.location[1] + random.uniform(-0.2, 0.2)
    z = bone.location[2] + random.uniform(-0.2, 0.2)
    bone.location[0] = x
    bone.location[1] = y
    bone.location[2] = z

bpy.context.scene.update()
bpy.context.scene.render.filepath = outfile_name
bpy.ops.render.render(write_still=True)
```

I saw that in the python console, bone.location = (x, y, z) works perfectly fine to manipulate the bones, but it's not working here. Please let me know what I can do to manipulate the pose bones through a python script! Thanks!

Answer: I'd avoid using ops to set the mode to Pose Mode. Access the bones directly through the hierarchy as it is exposed through the api.

```python
import bpy
import random

# Look the armature object up by name and work on its pose bones directly;
# no mode switch is needed for this.
armature = bpy.context.scene.objects['Armature']

for pose_bone in armature.pose.bones:
    # jitter each pose bone's location channel by up to +/- 0.2
    x = pose_bone.location[0] + random.uniform(-0.2, 0.2)
    y = pose_bone.location[1] + random.uniform(-0.2, 0.2)
    z = pose_bone.location[2] + random.uniform(-0.2, 0.2)
    pose_bone.location[0] = x
    pose_bone.location[1] = y
    pose_bone.location[2] = z
```

• Hi, I was already accessing the bones using the following: rig = bpy.context.scene.objects['f_ca01_skeleton']; bones = list(rig.pose.bones). Is there any other way for me to access the armature? Thank you! – user58033 Jun 21 '18 at 16:40
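A possible extension of the answer toward the asker's actual goal (one rendered image per randomized pose). This is my own sketch, assuming Blender 2.8+ (where `scene.update()` was replaced by a view-layer update); the armature name and output path are placeholders to adjust for your scene:

```python
import bpy
import random

armature = bpy.context.scene.objects['Armature']  # adjust to your rig's name

for i in range(5):
    for pb in armature.pose.bones:
        # jitter every pose bone before each render
        pb.location.x += random.uniform(-0.2, 0.2)
        pb.location.y += random.uniform(-0.2, 0.2)
        pb.location.z += random.uniform(-0.2, 0.2)
    bpy.context.view_layer.update()  # 2.8+ replacement for scene.update()
    bpy.context.scene.render.filepath = f"/tmp/pose_{i:03d}.png"
    bpy.ops.render.render(write_still=True)
```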
International Journal of Mathematics (IF 0.604). Pub Date: 2021-02-19, DOI: 10.1142/s0129167x21500221
Shunya Fujii; Shun Maeta
In this paper, we consider generalized Yamabe solitons, which include many notions, such as Yamabe solitons, almost Yamabe solitons, $h$-almost Yamabe solitons, gradient $k$-Yamabe solitons and conformal gradient solitons. We completely classify the generalized Yamabe solitons on hypersurfaces in Euclidean spaces arising from the position vector field.
## Monday Analysis Seminar: Continuous cores of full factors

May 18, 2015, 14:45 - 16:15

I will talk about the problem concerning the fullness of the continuous core of a given type III$_1$ von Neumann factor. We have not obtained a precise characterization, but I will sketch our solution of this problem for free product factors. This work is a collaboration with Yoshimichi Ueda at Kyushu University.
### Calculus I

Introduction to the theory and applications of both differential and integral calculus. Topics discussed include the following: limits and continuity, derivatives and their applications, the fundamental theorems of calculus, antiderivatives, and integrals.

### Calculus II

Topics discussed in this course include: techniques of integration and their applications, polar coordinates, sequences and series, power series, and partial derivatives.

### Linear Algebra

Topics covered in this course include: Gaussian elimination, matrix operations, determinants, vector algebra, vector spaces, and eigenvalues and eigenvectors.
Algebra and Trigonometry 10th Edition Published by Cengage Learning

Chapter 10 - 10.3 - The Inverse of a Square Matrix - 10.3 Exercises - Page 734: 14

Answer

$\begin{bmatrix} 7 &-2 \\ -3 &1\end{bmatrix}$

Work Step by Step

We need to augment the matrix with the identity matrix $\begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}$. Therefore, $\begin{bmatrix} 1& 2& 1 & 0 \\ 3& 7& 0 & 1\end{bmatrix}$

Now, multiply row $1$ by $-3$ and add it to row $2$: $\begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & -3 & 1\end{bmatrix}$

Then multiply row $2$ by $-2$ and add it to row $1$: $\begin{bmatrix} 1 & 0 & 7 & -2 \\ 0 & 1 & -3 & 1\end{bmatrix}$

Our answer is: $\begin{bmatrix} 7 &-2 \\ -3 &1\end{bmatrix}$
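A quick check of the result with NumPy:

```python
import numpy as np

A = np.array([[1, 2], [3, 7]])
A_inv = np.linalg.inv(A)
print(A_inv)                              # [[ 7. -2.] [-3.  1.]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True: it really is the inverse
```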
# [SPM Mathematics Question] How to solve this question?

### Question 18

Sebuah peta dilukis dengan skala 1:500 000. Hitung jarak sebenar, dalam km, sebatang jalan raya yang panjangnya 8cm pada peta itu.

A map is drawn to a scale of 1:500 000. Calculate the actual distance, in km, of a road which is 8 cm long on the map.

A 40
B 50
C 60
D 70

Hi Farhaini,

A scale of 1:500000 means that the distance between two locations in the real world is 500000 times longer than the distance between the two points representing those locations on the map. For example, if the distance between two points on the map is 2cm, then the real-world distance between the two locations would be 2cm \times 500000=1 \times 10^6 cm.

So for this question, a distance of 8cm on the map means that the actual distance is 8cm \times 500000=4 \times 10^6 cm.

Since the question wants the distance in km, we just convert the cm into km:

\frac{4 \times 10^6}{100}=40000 m

\frac{40000}{1000}=40 km

So the answer is A.

Hope this helps!

Thank you very much!
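The same unit chain from the answer, as a tiny sketch:

```python
def actual_distance_km(map_cm, scale=500_000):
    real_cm = map_cm * scale
    return real_cm / 100 / 1000  # cm -> m -> km

print(actual_distance_km(8))  # 40.0
```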
Atmos. Chem. Phys., 18, 6317–6330, 2018. https://doi.org/10.5194/acp-18-6317-2018

Research article | 04 May 2018

# Global warming potential estimates for the C1–C3 hydrochlorofluorocarbons (HCFCs) included in the Kigali Amendment to the Montreal Protocol

Dimitrios K. Papanastasiou1,2, Allison Beltrone3, Paul Marshall3, and James B. Burkholder1

• 1Earth System Research Laboratory, Chemical Sciences Division, National Oceanic and Atmospheric Administration, 325 Broadway, Boulder, CO 80305, USA
• 2Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, CO 80309, USA
• 3Department of Chemistry, University of North Texas, P.O. Box 305070, Denton, TX 76203-5070, USA

Correspondence: James B. Burkholder (james.b.burkholder@noaa.gov)

Abstract

Hydrochlorofluorocarbons (HCFCs) are ozone depleting substances and potent greenhouse gases that are controlled under the Montreal Protocol. However, the majority of the 274 HCFCs included in Annex C of the protocol do not have reported global warming potentials (GWPs), which are used to guide the phaseout of HCFCs and the future phase down of hydrofluorocarbons (HFCs). In this study, GWPs for all C1–C3 HCFCs included in Annex C are reported based on estimated atmospheric lifetimes and theoretical methods used to calculate infrared absorption spectra. Atmospheric lifetimes were estimated from a structure activity relationship (SAR) for OH radical reactivity and estimated O(1D) reactivity and UV photolysis loss processes. The C1–C3 HCFCs display a wide range of lifetimes (0.3 to 62 years) and GWPs (5 to 5330, 100-year time horizon) dependent on their molecular structure and the H-atom content of the individual HCFC. The results from this study provide estimated policy-relevant GWP metrics for the HCFCs included in the Montreal Protocol in the absence of experimentally derived metrics.

1 Introduction

Hydrochlorofluorocarbons (HCFCs) are ozone depleting substances (ODSs), the production and use of which are controlled under the Montreal Protocol on Substances that Deplete the Ozone Layer (1987). HCFCs have been used as substitutes for chlorofluorocarbons (CFCs) in various commercial and residential applications, e.g., foam blowing, and refrigerator and air conditioning systems. In addition to being ODSs, HCFCs are also potent greenhouse gases (WMO, 2014). With the adoption of the Kigali Amendment (2016) to the Montreal Protocol by the Twenty-Eighth Meeting of the Parties to the Montreal Protocol, parties agreed to the phasedown of hydrofluorocarbons (HFCs), substances that are not ozone depleting but are climate forcing agents. As in the case of HCFCs, the HFC production and consumption control measures comprise reduction steps from established baselines (see UN Environment OzonAction Fact Sheet; UN, 2017), which are different for developed and developing countries with an exemption for countries with high ambient temperature. Since HFCs are greenhouse gases, baselines and reduction steps are expressed in CO2 equivalents. The amended protocol controls eighteen HFCs as listed in Annex F of the protocol.
Although the phasedown steps stipulated in the Kigali Amendment concern only HFCs, the baselines for the reductions are derived through formulae involving both HCFC and HFC production and consumption because HFCs are intended to be substitute compounds for HCFCs. This necessitates knowledge of the global warming potentials (GWPs), a policy-relevant metric representing the climate impact of a compound relative to CO2, of all HCFCs involved in the baseline formulae. However, in the amended protocol, GWPs are available for only eight HCFCs (HCFCs-21, -22, -123, -124, -141b, -142b, -225ca, and -225cb) from a total of 274 HCFCs included in Annex C (274 is the sum of all C1–C3 HCFC isomers). Of the 274 HCFCs, only 15 have experimental kinetic and/or infrared absorption spectrum measurements used to determine their GWPs. The majority of the HCFCs listed in Annex C are not currently in use, but the intent of the protocol was for a comprehensive coverage of possible candidates for future commercial use and possible emission to the atmosphere. For molecules with no GWP available, a provision is included in the protocol stating that a default value of zero applies until such a value can be included by means of adjustments to the protocol. Having policy-relevant metrics for these compounds will help guide and inform future policy decisions. The objective of the present work is to provide a comprehensive evaluation of the following: atmospheric lifetimes; ozone depletion potentials (ODPs), which represent the ozone depleting impact of a compound relative to a reference compound (see WMO, 2014, and references within); GWPs; and global temperature change potentials (GTPs), another policy-relevant metric representing the climate impact of a compound relative to CO2, for the HCFCs listed in Annex C of the amended protocol. The HCFCs that have experimentally measured OH rate coefficients, the predominant atmospheric loss process for HCFCs, and infrared absorption spectra were used as a training dataset to establish the reliability of the methods used to estimate the metrics for the other HCFCs. The training dataset compounds and reference data are listed in Table 1. In the following section, brief descriptions of the methods used to determine the HCFC atmospheric lifetime and ODP are given. Next, the theoretical methods used to calculate the infrared spectra of the HCFCs are described. The infrared spectra are then combined with our estimated global atmospheric lifetimes to estimate the lifetime and stratospheric temperature adjusted radiative efficiency (RE), GWP, and GTP metrics (see IPCC, 2013; WMO, 2014). In the results and discussion section, a general overview of the obtained metrics is provided, while the details and results for each of the individual HCFCs are provided in the Supplement.

Table 1. Summary of hydrochlorofluorocarbon (HCFC) parameters in the training dataset (a).

(a) Lifetimes, RE, and GWP values taken from the WMO ozone assessment (WMO, 2014) unless noted otherwise. Where multiple sources for infrared spectra are available, the spectra reported by the NOAA laboratory (McGillen et al., 2015) and the PNNL database (Sharpe et al., 2004) were used in the analysis. (b) Rate coefficients taken from the NASA evaluation (Burkholder et al., 2015) unless noted otherwise. (c) Rate coefficient and metrics taken from McGillen et al. (2015) with the RE lifetime-adjusted and a factor of 1.1 stratospheric temperature correction applied.
2 Methods

## 2.1 Atmospheric lifetimes

The global atmospheric lifetime (τatm) is defined as follows:

$$\frac{1}{\tau_{\text{atm}}}=\frac{1}{\tau_{\text{OH}}}+\frac{1}{\tau_{\text{O}(^1\text{D})}}+\frac{1}{\tau_{h\nu}},$$

where $\tau_{\text{OH}}$, $\tau_{\text{O}(^1\text{D})}$, and $\tau_{h\nu}$ are the global lifetimes with respect to OH and O(1D) reactive loss and UV photolysis, respectively. Other reactive and deposition loss processes for HCFCs are expected to be negligible and were not considered in this study. τatm is also often defined in terms of its loss within the troposphere (τTrop), stratosphere (τStrat), and mesosphere (τMeso) as

$$\frac{1}{\tau_{\text{atm}}}=\frac{1}{\tau_{\text{Trop}}}+\frac{1}{\tau_{\text{Strat}}}+\frac{1}{\tau_{\text{Meso}}},$$

where, for example,

$$\frac{1}{\tau_{\text{Strat}}}=\frac{1}{\tau_{\text{Strat}}^{\text{OH}}}+\frac{1}{\tau_{\text{Strat}}^{\text{O}(^1\text{D})}}+\frac{1}{\tau_{\text{Strat}}^{h\nu}}.$$

For the HCFCs considered in this study, mesospheric loss processes are negligible and not considered further. The atmospheric loss processes for the HCFCs considered in this study have not been determined experimentally, while τTrop is predominantly determined by the HCFC reactivity with the OH radical. In this work, $\tau_{\text{Trop}}^{\text{OH}}$ was estimated using the CH3CCl3 (MCF) relative method (WMO, 2014), where

$$\tau_{\text{Trop}}^{\text{OH}}=\tau_{\text{OH}}^{\text{HCFC}}=\frac{k_{\text{MCF}}(272\ \text{K})}{k_{\text{HCFC}}(272\ \text{K})}\,\tau_{\text{OH}}^{\text{MCF}},$$

with the MCF recommended rate coefficient, kMCF(272 K) = 6.14 × 10−15 cm3 molecule−1 s−1 (Burkholder et al., 2015), and tropospheric lifetime, 6.1 years (WMO, 2014). In the absence of experimental OH reaction rate coefficients, a structure activity relationship (SAR) was used to estimate OH reaction rate coefficients. The SARs of Kwok and Atkinson (1995) and DeMore (1996) were compared with the rate coefficients for the 15 HCFCs (training dataset) for which experimental kinetic measurements are available (Burkholder et al., 2015). The DeMore SAR clearly performed better for these halocarbons and was used in this study. Figure 1 shows the agreement between the experimental 298 K rate coefficient data and the SAR predicted values. For the determination of kHCFC(272 K) an E/R value of 1400 K was used in the Arrhenius expression, k(T) = A exp(−1400 / T), which is a representative value for the HCFC reactions included in Burkholder et al. (2015). On the basis of the training dataset calculations, we estimate the uncertainty in the SAR 298 K rate coefficients to be ~30 % on average. The uncertainty at 272 K will, in some cases, be greater due to our assumption that E/R = 1400 K for the unknown reaction rate coefficients. A ~50 % uncertainty spread encompasses nearly all the training dataset values at 272 K, see Fig. 1. Therefore, we estimate a 50 % uncertainty in k(272 K) for the HCFCs with unknown rate coefficients.
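The MCF relative method above is straightforward to apply once a SAR rate coefficient is in hand. A minimal sketch, using the MCF reference values quoted in the text (the example k(272 K) input is hypothetical, not one of the paper's values):

```python
# CH3CCl3 (MCF) reference values from the text.
K_MCF_272 = 6.14e-15   # cm^3 molecule^-1 s^-1
TAU_MCF_OH = 6.1       # years, tropospheric OH lifetime of MCF

def tau_trop_oh(k_hcfc_272):
    """Tropospheric OH lifetime of an HCFC via the MCF relative method."""
    return (K_MCF_272 / k_hcfc_272) * TAU_MCF_OH

# hypothetical SAR estimate: k(272 K) = 3.0e-15 cm^3 molecule^-1 s^-1
print(tau_trop_oh(3.0e-15))  # ~12.5 years
```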
Figure 1. Comparison of structure activity relationship (SAR) OH rate coefficients for the training dataset (Table 1) with rate coefficients recommended in Burkholder et al. (2015). (a) Rate coefficients at 298 K using the SAR of DeMore (1996) (solid symbols) and Kwok and Atkinson (1995) (open symbols). The dashed line is the 1:1 correlation and the shaded region is the ±30 % spread around the 1:1 line. (b) Rate coefficients at 272 K using the SAR of DeMore (1996) (solid symbols) with E/R = 1400 K. The dashed line is the 1:1 correlation, the gray shaded region is the ±30 % spread, and the blue shaded region is the ±50 % spread around the 1:1 line.

τStrat for the HCFCs is determined by a combination of OH and O(1D) reactive loss, and UV photolysis. Presently, there is no simple means to determine stratospheric lifetimes without the use of atmospheric models. Here, we have estimated stratospheric OH loss lifetimes, $\tau_{\text{Strat}}^{\text{OH}}$, following a methodology similar to that used in the WMO (2014) ozone assessment, where results from 2-D atmospheric model calculations are used to establish a correlation between tropospheric and stratospheric lifetimes. We used the lifetimes taken from the SPARC (Ko et al., 2013) lifetime report for three HCFCs and eight HFCs to establish a lifetime correlation, which is shown in Fig. S1 in the Supplement. The stratospheric loss via the OH reaction accounts for ~5 % of the total OH loss process for > 95 % of the HCFCs. Therefore, this method of accounting for stratospheric loss leads to only a minor uncertainty in the calculated global lifetime. In most cases, O(1D) reaction and UV photolysis are minor contributors to the global loss of a HCFC. In the absence of experimental data, O(1D) rate coefficients were estimated using the reactivity trends reported in Baasandorj et al. (2013). τO(1D) for the HCFCs was estimated based on a comparison with similarly reactive compounds included in the SPARC (Ko et al., 2013) lifetime report. As shown later, the O(1D) reaction is a minor loss process, < 1 %, for nearly all the HCFCs included in this study and, therefore, the estimation method used is not critical, as this loss process is a minor contributor to the global lifetime. τhν was estimated based on the molecular Cl-atom content and its distribution within the molecule as follows: each isolated Cl-atom (450 years), each CCl2 group (80 years), each CCl3 group (50 years), with a minimum photolysis lifetime of 50 years (Ko et al., 2013). UV photolysis is a minor loss process, with the exception of a few long-lived, highly chlorinated HCFC isomers, where photolysis accounts for ~15 % of the global loss at most. A minimum stratospheric lifetime of 20 years was applied to approximately account for transport-limited stratospheric lifetimes.
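The per-group photolysis rule quoted above can be encoded in a few lines. How the group contributions combine is my assumption (reciprocal addition of per-group lifetimes, i.e., summing first-order loss rates), as is applying the 50-year floor to the combined value:

```python
def tau_photolysis(n_isolated_cl=0, n_ccl2=0, n_ccl3=0):
    """Estimated UV photolysis lifetime (years) from the Cl distribution rules."""
    rate = n_isolated_cl / 450.0 + n_ccl2 / 80.0 + n_ccl3 / 50.0  # yr^-1
    if rate == 0.0:
        return float("inf")  # no Cl groups, no photolysis loss in this scheme
    return max(1.0 / rate, 50.0)  # enforce the 50-year minimum from the text

print(tau_photolysis(n_ccl3=1))         # 50.0
print(tau_photolysis(n_isolated_cl=2))  # 225.0
```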
## 2.2 Ozone depletion potentials (ODPs)

Semi-empirical ODPs were calculated using the following formula:

$$\mathrm{ODP}_{\text{HCFC}}=\frac{n_{\text{Cl}}}{3}\,\frac{f_{\text{HCFC}}}{f_{\text{CFC-11}}}\,\frac{M_{\text{CFC-11}}}{M_{\text{HCFC}}}\,\frac{\tau_{\text{HCFC}}}{\tau_{\text{CFC-11}}},$$

where nCl is the number of Cl-atoms in the HCFC; M is the molecular weight; f is the molecule's fractional release factor (FRF), which denotes the fraction of the halocarbon injected into the stratosphere that has been dissociated (Solomon and Albritton, 1992); and τ is the global atmospheric lifetime. The fractional release factor and global lifetime for CFC-11 were taken from the WMO (2014) ozone assessment report to be 0.47 and 52 years, respectively. Note that a new method to calculate FRFs has been suggested by Ostermöller et al. (2017a, b), which has been applied by Leedham Elvidge et al. (2018) and Engel et al. (2018). Overall, there is good agreement between the new method and the empirical parameterization applied in this work. The fractional release factors for the majority of the HCFCs included in this study have not been reported. The WMO report included 3 year age of air FRFs derived from model studies and field observations for 20 ozone depleting substances (WMO, 2014). In the absence of recommended FRF values, we derived an empirical FRF vs. stratospheric lifetime relationship, shown in Fig. 2, for the compounds with reported FRFs and the 2-D model stratospheric lifetimes reported in the SPARC (Ko et al., 2013) lifetime report. Table S1 provides the values presented in Fig. 2. A fit to the data yielded FRF = 0.06 + 0.875 × exp(−0.0144 × τStrat), which was used in our calculations.

Figure 2. Empirical correlation of fractional release factor (FRF) versus stratospheric lifetime, τStrat. Stratospheric lifetimes were taken from 2-D model results given in the SPARC (Ko et al., 2013) lifetime report. The FRFs were taken from the WMO assessment (WMO, 2014). The solid line is a fit to the data: FRF = 0.06 + 0.875 exp(−0.0144 × τStrat).

## 2.3 Theoretical calculations

Information about molecular vibrational frequencies, central to the interpretation of infrared spectra, thermodynamics, and many other aspects of chemistry, became amenable to computational determination in the early 1980s. It was recognized that computed harmonic frequencies derived via the second derivative of energy as a function of atomic position were systematically higher than observed fundamentals, and scale factors were introduced (Hout et al., 1982; Pople et al., 1981). For Hartree–Fock frequencies these were typically ~0.9 and accounted both for the influence of anharmonicity and deficiencies in the underlying quantum calculations. Frequencies based on methods incorporating electron correlation, such as CCSD, CCSD(T), or certain functionals within density functional theory (DFT), often perform well for harmonic frequencies and are scaled by ~0.95 to match fundamental vibrational modes. Such scaling has been updated as more methods appear (Alecu et al., 2010; Scott and Radom, 1996). Rather less information is available concerning the evaluation of absorption intensities for fundamental modes. Within the same harmonic approximation, implemented in popular quantum codes, the intensity is proportional to the square of the derivative of the dipole moment with respect to position.
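Putting the ODP formula and the FRF fit together as a sketch. The CFC-11 reference values for f and τ are from the text; CFC-11's molar mass of 137.37 g/mol is supplied by me, and the example HCFC inputs are hypothetical:

```python
import math

F_CFC11, M_CFC11, TAU_CFC11 = 0.47, 137.37, 52.0  # FRF, g/mol, years

def frf(tau_strat):
    """Empirical 3-year age-of-air fractional release factor fit."""
    return 0.06 + 0.875 * math.exp(-0.0144 * tau_strat)

def odp(n_cl, M, tau_global, tau_strat):
    return (n_cl / 3.0) * (frf(tau_strat) / F_CFC11) \
        * (M_CFC11 / M) * (tau_global / TAU_CFC11)

# hypothetical HCFC: 2 Cl atoms, M = 152.9 g/mol, tau = 9 yr, tau_strat = 60 yr
print(odp(2, 152.9, 9.0, 60.0))
```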
Halls and Schlegel (1998) evaluated QCISD results against experimental results that indicated deviations of up to approximately ±20 %; they then used QCISD as a benchmark to evaluate a range of functionals. For B3LYP, they found differences from QCISD of around 10 %. More recently, tests of the B3LYP functional found good performance for frequency and intensity (Jiménez-Hoyos et al., 2008; Katsyuba et al., 2013). Some prior work where similar methods have been applied to the infrared absorption for molecules of atmospheric interest includes studies of fluoromethanes (Blowers and Hollingshead, 2009), unsaturated hydrofluorocarbons (Papadimitriou and Burkholder, 2016; Papadimitriou et al., 2008b), perfluorocarbons (Bravo et al., 2010), chloromethanes (Wallington et al., 2016), SO2F2 (Papadimitriou et al., 2008a), permethylsiloxanes (Bernard et al., 2017), and large survey studies such as by Kazakov et al. (2012) and Betowski et al. (2015), to name a few. Halls and Schlegel noted that real spectra may exhibit the influences of resonances, intensity sharing, and large-amplitude anharmonic modes. These can be partially accounted for in an analysis based on higher derivatives of the energy and the dipole moment, performed for instance within the framework of second-order vibrational perturbation theory (Barone, 2005). Advantages include treatment of resonances among vibrational levels and incorporation of overtones and combination bands. Examples of applications to molecules containing C–H and C–F bonds indicate excellent accord with experiments for band position and intensity (Carnimeo et al., 2013), but for CH2ClF the intensity in the region involving C–Cl stretching nevertheless exhibits intensity errors of ~10 % (Charmet et al., 2013). The large number of molecules considered in this work and the associated geometry optimizations (~1500) required that a cost-effective methodology with reasonable accuracy, such as DFT methods, be used. Geometry optimization and vibrational frequencies for all C1–C3 HCFCs were carried out at the B3LYP/6-31G(2df,p) level using the Gaussian 09 software suite (Frisch et al., 2016). Similar approaches have been used in earlier studies for other classes of molecules with good results; see Hodnebrog et al. (2013) and references cited within. The calculations presented in this work included only the 35Cl isotope because the large number of possible isotopic substitution permutations made the calculation of all combinations prohibitive. In principle, substitution of 35Cl by 37Cl in a heavy molecule would lower the frequency of the C–Cl stretch by ~3 %. The level of theory was evaluated based on comparison with available experimental HCFC infrared spectra, see Table 1. Note that our calculations and data available in the NIST quantum chemistry database (2016) obtained using a more costly triple-ζ basis set (aug-cc-pVTZ) showed only minor differences in the calculated frequencies, < 1 %, and band strengths, < 10 %, for the molecules in the training dataset. The majority of the HCFCs have multiple low-energy conformers that have unique infrared absorption spectra. Although only the most stable conformer has been used in most previous theoretical studies, including the individual conformers provides a more realistic representation of the HCFC's infrared spectrum and is expected to improve the accuracy of the calculated radiative efficiency, as discussed below.
We are not aware of prior studies of infrared spectra of HCFC conformers, but there have been prior theoretical studies of the conformers of other classes of molecule, such as for validation of observed infrared spectra used to deduce relative energies of carbonyl conformations (Lindenmaier et al., 2017) and comparison with measured infrared intensities for linear alkanes (Williams et al., 2013). The different errors and their trends for the intensities of C–H stretching and HCH bending modes indicate that a simple scaling approach, so successful for frequencies, will not work for intensities. In this work, we have included all conformers within 2 kcal mol−1 of the lowest energy conformer. This limit accounts for > 98 % of the population distribution at 298 K, in most cases. For each HCFC, a relaxed scan was performed to detect all possible conformations. For the C2 compounds, three staggered conformations were examined by rotating the C–C torsional angle by 120°. For the C3 compounds, nine possible conformations were calculated by rotating the two torsional angles by 120°. Each stable conformer was then fully optimized at the B3LYP/6-31G(2df,p) level, followed by a frequency calculation. Conformer populations were calculated from a 298 K Boltzmann distribution using the relative energies (including a zero-point correction) from the calculations. Including stable conformers resulted in overlapping vibrational bands and, therefore, more congested spectra, which is consistent with the observed spectra for HCFCs. A number of the HCFCs have stereoisomers. Although the stereoisomers have identical infrared absorption spectra, they were accounted for in the population distribution. Note that for a molecule with a single asymmetric carbon (a molecule containing a carbon with four different groups attached), e.g., HCFC-121a (CHClFCCl3), a pair of stereoisomers exists for each conformation and, therefore, the contribution of stereoisomers to the total population factors out. The entire dataset contains 126 molecules with a single asymmetric carbon and 32 molecules containing 2 asymmetric carbons. A comparison of the experimental and calculated infrared spectrum of HCFC-124a (CHF2CClF2), shown in Fig. 3, demonstrates the importance of including conformers in the spectrum calculation. A comparison of experimental and theoretical spectra for all molecules with experimental data is provided in the Supplement. The calculations found that HCFC-124a has three stable conformers at 298 K, with the lowest energy conformer having ~50 % of the population. The experimental spectrum is characterized by strong absorption features between 1100 and 1500 cm−1, which are mostly associated with C–F bond vibrations, and C–Cl vibrational modes below 1000 cm−1. The comparison with the experimental spectrum shows that the prominent absorption features at ~825, 1000, and 1250 cm−1 originate from the higher energy conformers. The calculated spectrum is in good agreement with the experimentally measured spectrum, with band positions and total integrated band strengths agreeing to within ~2 %. Note that conformer contributions to an infrared absorption spectrum will be different for different molecules. The impact of including conformers in the radiative efficiency calculations is presented later.

Figure 3. Comparison of experimental and calculated infrared absorption spectrum of HCFC-124a (CHF2CClF2).
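The Boltzmann weighting of conformers described above is simple to reproduce. A sketch with made-up relative energies (the paper's per-conformer energies are not tabulated here):

```python
import numpy as np

R_KCAL = 1.987204e-3   # gas constant, kcal mol^-1 K^-1
T = 298.0

rel_energies = np.array([0.0, 0.6, 1.4])  # illustrative, kcal/mol above the lowest conformer
weights = np.exp(-rel_energies / (R_KCAL * T))
populations = weights / weights.sum()
print(populations.round(3))  # the lowest-energy conformer dominates (~0.69 here)
```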
Calculated spectra at the B3LYP/6-31G(2df,p) level of theory with (solid red lines) and without (dotted red lines) including stable conformers, and the experimentally measured spectrum (solid black lines) (see Table 1 for the source of the experimental spectrum). Overall, the agreement between experimental and calculated frequencies was good. Figure S2 shows a comparison of experimental vibrational frequencies with the calculated values. There was a systematic overestimation of the calculated vibrational frequencies above 1000 cm−1 and an underestimation below 1000 cm−1. An empirical frequency correction, which accounts, in part, for anharmonicity and other approximations in the level of theory employed, was derived from this correlation and applied to all the calculated spectra: νcorrected = 53.609 + 0.94429 × νcalculated. With this correction, frequencies around ∼1200 cm−1 (C–F bond vibrations) and around 800 cm−1 (C–Cl bond vibrations) are shifted by only ∼1 %. The uncertainty associated with the calculated band positions is estimated to be ∼1 %. The frequency-corrected spectra were used to derive the metrics reported here. Figure 4 shows a comparison of calculated and experimental band strengths (integrated between 500 and 2000 cm−1) for the training dataset. Overall, the agreement is good for the majority of HCFCs, with the calculated band strengths being within 20 %, or better, of the experimental values. The calculated band strengths are, however, systematically biased high by ∼20 % for band strengths < 1.1 × 10−16 cm2 molecule−1 cm−1. A comparison of the training dataset experimental and calculated infrared spectra reveals that the bias originates from an overestimation of the strengths of bands below 1000 cm−1, which are primarily associated with C–Cl bonds. The bias is greatest for molecules containing more than one Cl atom on the same carbon, e.g., CHFCl2 (HCFC-21), CH3CCl2F (HCFC-141b), and CH2FCCl2F (HCFC-132c). In fact, the intensities of C–Cl stretches are a long-known problem for calculation (Halls and Schlegel, 1998). Scaling the overall spectrum strength to account for such biases was applied to decrease the deviation between experimental and theoretical values in an earlier theoretical study by Betowski et al. (2015). However, since the bias is primarily in the bands associated with C–Cl bonds, scaling the entire band strength would be neither appropriate nor an accurate representation of the experimental spectrum. The spectra reported here do not include a band strength correction, as the prediction of which bands are overestimated is too uncertain without knowledge of the experimental spectrum. Although it is difficult to estimate the uncertainty for the theoretical calculations, an estimated ∼20 % band strength uncertainty includes nearly all the training dataset values and encompasses the possible systematic bias observed for certain vibrational bands. Figure 4 Comparison of experimental and calculated infrared band strengths over the 500–2000 cm−1 region for the HCFC training dataset (see Table 1 for the source of the experimental spectra). The dashed line is the 1:1 correlation. The shaded region represents a 20 % spread around the 1:1 line.
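To make the conformer weighting and the empirical frequency correction concrete, here is a minimal Python sketch (not part of the original study): the relative conformer energies and degeneracies are hypothetical placeholders, and the only quantities taken from the text are the 2 kcal mol−1 energy cutoff, the 298 K Boltzmann weighting, and the correction νcorrected = 53.609 + 0.94429 × νcalculated.

```python
import math

# Hypothetical relative conformer energies (kcal/mol, zero-point corrected)
# and degeneracies (e.g., mirror-image torsional pairs); placeholder values.
rel_energies = [0.0, 0.4, 1.1]
degeneracies = [1, 2, 2]

R = 0.0019872  # gas constant, kcal mol^-1 K^-1
T = 298.0

# Keep only conformers within 2 kcal/mol of the lowest energy conformer.
kept = [(e, g) for e, g in zip(rel_energies, degeneracies) if e <= 2.0]
weights = [g * math.exp(-e / (R * T)) for e, g in kept]
Z = sum(weights)
populations = [w / Z for w in weights]  # Boltzmann populations at 298 K

# Empirical frequency correction from the text (frequencies in cm^-1).
def corrected(nu_calc):
    return 53.609 + 0.94429 * nu_calc

print(populations)
print(corrected(1200.0))  # ~1187 cm^-1, a shift of roughly 1 %
```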
Figure 5 Sensitivity of the calculated HCFC radiative efficiencies in this study to (a) the inclusion of higher energy conformers and (b) the broadening of the calculated infrared absorption bands, as described in the text. ΔRE values are relative to the full analysis that includes broadened spectra and all conformers within 2 kcal mol−1 of the lowest energy conformer. As illustrated earlier for HCFC-124a (Fig. 3), stable HCFC conformers can make a significant contribution to its infrared absorption spectrum. Figure 5 shows the impact of including the conformer population on the calculated RE for each of the HCFCs included in this study. Overall, including conformers increases or decreases the calculated RE by 10 %, or less, in most cases. However, there are some HCFCs where a difference of 20 %, or more, is observed, e.g., HCFC-124a, HCFC-151, and HCFC-232ba. In conclusion, including the contribution from populated conformers improves the accuracy of the calculated RE values and decreases potential systematic errors in the theoretically predicted RE values. The strongest HCFC vibrational bands are due to C–F stretches, 1000–1200 cm−1, which strongly overlap the "atmospheric window" region. The molecular geometry of the HCFC determines the exact vibrational band frequencies, i.e., HCFCs and their isomers have unique infrared absorption spectra and REs. Note that the calculated infrared spectra in this work include vibrational bands below 500 cm−1, which is usually the lower limit for experimental infrared absorption spectra measurements. The contribution of vibrational bands in this region to the RE is quantified in our calculations and is usually minor, i.e., < 1 %. The Earth's irradiance profile, HCFC infrared absorption spectra, and HCFC radiative efficiency spectra for each HCFC included in this study are provided in the Supplement. Lifetime-adjusted REs were calculated using the CFC-11 emission scenario "S"-shaped parameterization given in Hodnebrog et al. (2013), which is intended to account for non-uniform mixing of the HCFC in the atmosphere. The adjustment is greatest for short-lived molecules. A +10 % correction was applied to all molecules to account for the stratospheric temperature correction (see IPCC, 2013, Supplement Sect. 8.SM.13.4 for the origin of this factor). Well-mixed and lifetime-adjusted RE values are included in the Supplement.

## 2.5 Global warming and global temperature change potentials

Global warming potentials on the 20- and 100-year time horizons (T) were calculated relative to CO2 using the formulation given in IPCC (2013):

$$\mathrm{GWP}(T)=\frac{\mathrm{RE}\,\tau\,\left[1-\exp(-T/\tau)\right]}{M_{\mathrm{HCFC}}\,\mathrm{IntRF}_{\mathrm{CO_2}}(T)},$$

where $\mathrm{IntRF}_{\mathrm{CO_2}}(T)$ is the integrated radiative forcing of CO2, $\tau$ is the HCFC global lifetime, and $M_{\mathrm{HCFC}}$ is the HCFC molecular weight. The RE used in the calculation was lifetime-adjusted with a stratospheric temperature correction applied. The global lifetimes were estimated as described in Sect. 2.1. The CO2 denominator is consistent with the GWP values reported in the WMO (2014) and IPCC (2013) assessments, corresponding to a CO2 abundance of 391 ppm. Therefore, the values reported in this work can be compared directly to values reported in the WMO and IPCC assessments. A comparison of our training dataset values is given in Fig. 6, where the majority of the GWPs agree to within 15 %. HCFCs-21, -22, -122, and -123 have larger differences, due primarily to discrepancies between the estimated OH rate coefficients and those from the literature.
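As a minimal sketch of how the expression above is applied (all numerical inputs are made-up placeholders, not values from this work, and any ppb-to-mass unit conversion factors are assumed to be folded into the CO2 denominator):

```python
import math

def gwp(re, tau, m_hcfc, T, int_rf_co2):
    # Direct transcription of
    #   GWP(T) = RE * tau * [1 - exp(-T/tau)] / (M_HCFC * IntRF_CO2(T))
    # re: lifetime-adjusted radiative efficiency; tau: global lifetime (yr);
    # m_hcfc: molecular weight; T: time horizon (yr);
    # int_rf_co2: integrated radiative forcing of CO2 over T (placeholder).
    return re * tau * (1.0 - math.exp(-T / tau)) / (m_hcfc * int_rf_co2)

# Illustrative call with made-up numbers:
print(gwp(re=0.15, tau=9.0, m_hcfc=120.0, T=100.0, int_rf_co2=2.5e-5))
```

The factor 1 − exp(−T/τ) saturates once T exceeds a few lifetimes, which is why short-lived compounds gain little additional integrated forcing, and hence GWP, at longer time horizons.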
Our GWP results can be scaled to the 2016 CO2 abundance of 403 ppm (NOAA, 2017) by multiplying by 1.03, which accounts for the decrease in the CO2 radiative efficiency at the higher abundance (see Myhre et al., 1998; Joos et al., 2013). Figure 6 Comparison of 100-year time horizon GWP values reported in the WMO assessment (WMO, 2014) and by McGillen et al. (2015) for HCFC-133a (with lifetime adjustment and stratospheric temperature correction applied) and the values calculated in this study. The dashed line represents the 1:1 correlation and the shaded area is a 15 % spread around the 1:1 line. Global temperature change potentials were calculated for the 20-, 50-, and 100-year time horizons using the parameterizations given in the IPCC (2013) Supplement Sect. 8.SM.11.2.

# 3 Results and discussion

Figure 7 provides a comprehensive graphical summary of the lifetime, ODP, lifetime- and stratospheric-temperature-adjusted RE, GWP, and GTP results obtained in this study, together with the values that are based on experimental data (in black) where available. The metric values for the individual compounds are available in Table S2 and in the individual data sheets in the Supplement. A detailed summary of the theoretical results is also included in the data sheets for the individual compounds. It is clear that the metrics for the C1–C3 HCFCs span a significant range of values, with a dependence on the H-atom content as well as on the isomeric form for a given chemical formula. In general, an increase in the HCFC H-atom content leads to a shorter atmospheric lifetime; e.g., the lifetimes of the HCFC-226 compounds (1 H atom) are greater than those of most other HCFCs. However, the HCFC reactivity also depends on the distribution of hydrogen, chlorine, and fluorine within the molecule, i.e., the isomeric form, and lifetimes for isomers can vary significantly. For example, the lifetime of HCFC-225ca (CHCl2CF2CF3) is 1.9 years, while that of HCFC-225da (CClF2CHClCF3) is 16.3 years. The most reactive HCFCs are short-lived compounds with lifetimes as low as ∼0.3 years. The least reactive HCFCs have lifetimes as long as 60 years (HCFC-235fa, CClF2CH2CF3). The trends in the HCFC ODPs follow those of the lifetimes, with an additional factor to account for the chlorine content of the HCFC. Overall, many of the HCFCs have significant ODPs, with 33 HCFCs having values greater than 0.1 and 78 greater than 0.05. In addition to HCFC isomers having different reactivity (lifetimes), each isomer also has a unique infrared absorption spectrum and, thus, a unique RE. The HCFC REs range from a low of ∼0.03 to a high of ∼0.35 W m−2 ppb−1. The HCFCs with the highest H-atom content have lower REs, in general, although there are exceptions as shown in Fig. 6. As expected, many of the HCFCs are potent greenhouse gases. The GWPs and GTPs also show a strong isomer dependence; e.g., the GWPs on the 100-year time horizon for the 9 HCFC-225 isomers differ by a factor of ∼12. The lowest HCFC GWPs in this study are ∼10 and the greatest value is ∼5400 for HCFC-235fa. Figure 7 Summary of the results obtained in this study for C1–C3 HCFCs (red and blue) and the values for which experimentally derived metrics are available (black). The lifetime, GWP, and GTP values for HCFC-235fa (CClF2CH2CF3) (gold) have been multiplied by 0.4 to improve the overall graphical clarity.
## Metric uncertainty

The training calculations have been used to estimate the uncertainties in our atmospheric lifetime estimates and infrared absorption spectra and how these uncertainties propagate through to the key ODP, RE, GWP, and GTP metrics. It is not possible to assign a single uncertainty to all HCFCs for each metric, due to the dependence of the metrics on the individual properties of the HCFCs. To provide a general perspective on the reliability of the metrics reported in this study, we limit our discussion to the average behavior. The predominant atmospheric loss process for HCFCs was shown to be reaction with the OH radical, while UV photolysis in the stratosphere was found to be a non-negligible loss process for HCFCs with long lifetimes and significant Cl content. The DeMore (1996) SAR predicts the training dataset OH rate coefficients at 298 K to within 25 % on average, which directly translates to a 25 % uncertainty in the HCFC tropospheric lifetime. A conservative uncertainty estimate for the predicted OH rate coefficients at 272 K would be ∼50 %; see Fig. 1. Including an estimated ∼40 % uncertainty for the stratospheric UV photolysis and O(1D) reactive loss processes increases the global lifetime uncertainty by only ∼2 %. Table 2 The Annex C HCFC table provided in the Kigali Amendment to the Montreal Protocol, where the range of 100-year time horizon global warming potentials (GWPs) obtained in this work for the HCFC isomers with the chemical formula listed in the first column is given in italics (a). a Typos in the HCFC-123 and -124 GWP entries are corrected here. b Identifies the most commercially viable substances. c The ODPs listed are from the Montreal Protocol, while ODPs derived in this work for the individual HCFCs are available in the Supplement, Table S2. d Ranges of values from this work obtained for the HCFC isomers are given in italics. The semi-empirical ODP uncertainty is directly proportional to the global lifetime uncertainty, with an additional factor to account for the uncertainty in the fractional release factor (FRF). For HCFCs with total lifetimes less than 2 years, the total ODP uncertainty is estimated to be 35 %, for a 25 % uncertainty in the global lifetime. For longer-lived HCFCs, the ODP uncertainty is greater, 50 % or more. The overall uncertainty in the GWP and GTP metrics depends on the lifetime and RE uncertainties, with a dependence that differs across time horizons. Compounds with lifetimes of less than 1 year have propagated uncertainties of ∼55 % on average. As the lifetime increases, the uncertainty decreases to ∼30 % on average, or less. The greater uncertainties for the shorter-lived HCFCs are primarily associated with the uncertainty introduced by the lifetime-adjusted RE. As mentioned earlier, there have been a number of previous studies that applied methods similar to those used in the present study. The most relevant of these is that of Betowski et al. (2015), who reported radiative efficiencies for a large number of the C1–C3 HCFCs included in this study. Although they report REs for 178 of the 274 HCFCs included in our work, there are significant differences between their REs and those reported here. Figure S4 shows a comparison of the RE values calculated here with those reported by Betowski et al. (2015) for the HCFCs common to both studies. The RE values from Betowski et al. (2015) are systematically lower than the ones reported here by ∼29 % on average.
A similar systematic underestimation is observed when the Betowski et al. (2015) RE values are compared with the available HCFC experimental data used in our training dataset. Betowski et al. (2015) used B3LYP/6-31G(d) to calculate the HCFC infrared spectra and applied a band strength correction in their RE calculation. Note that a band strength correction was not applied in the present study, as discussed earlier. In addition, Betowski et al. (2015) did not use broadened infrared spectra in their RE calculation and included only the lowest energy conformer. These differences can account for some of the scatter in the correlation shown in Fig. S4. The average difference between the reported RE values can only partially be explained by the different methods used here, B3LYP/6-31G(2df,p), and in Betowski et al. (2015), B3LYP/6-31G(d), as they produce very similar HCFC infrared spectra, i.e., the band strengths obtained with these methods agree to within ∼10 %. Betowski et al. (2015) used the available HCFC experimental data and data for a large number of compounds from other chemical classes in their training dataset, e.g., perhalocarbons, haloaldehydes, haloketones, and haloalcohols. On the basis of their analysis, a band strength scaling factor of 0.699 was derived for the B3LYP/6-31G(d) method. However, for the HCFCs this scale factor introduces a systematic error in the band strength analysis. In Fig. 4 we showed that the DFT theoretical methods, without scaling, agree with the available experimental HCFC data to within 20 %, or better. Although the HCFC training dataset is relatively small, the band strength scaling factor based on results for other chemical compound classes is most likely not appropriate and introduces a systematic bias in the calculated RE values. Therefore, the infrared spectra reported in the present work and used to derive REs and GWPs were not scaled.

# 4 Summary

In this study, policy-relevant metrics have been provided for the C1–C3 HCFC compounds, many of which were not available at the time of the adoption of the Kigali Amendment. Table 2 summarizes the results from this study in the condensed format used in Annex C of the amended protocol, where the range of metrics is reported for each HCFC chemical formula. Metrics for the individual HCFCs are given in Table S2 and in the data sheets for each of the HCFCs, which contain the explicit kinetic parameters and theoretical results obtained in this work. We have shown that HCFC isomers have significantly different lifetimes, ODPs, and radiative metrics. Of particular interest are the HCFCs with significant current production and emissions to the atmosphere. Of all the HCFCs listed in Annex C of the amended protocol, HCFCs-121(2), -122(3), -133(3), -141(3), -142(3), and -225(9) are of primary interest (the values in parentheses are the number of isomers for each chemical formula). Of these 23 compounds, experimentally based metrics are included in the Kigali Amendment for only HCFCs-141b, -142b, -225ca, and -225cb. Therefore, the present work provides policy-relevant information for the other HCFCs. Although this work has provided a comprehensive set of estimated metrics for the C1–C3 HCFCs that presently do not have experimental data, careful direct fundamental laboratory studies of an intended HCFC would better define the critical atmospheric loss processes (reaction and UV photolysis) used to evaluate atmospheric lifetimes.
Laboratory measurements of infrared spectra would also provide specific quantitative results to be used in the determination of the RE, GWP, and GTP metrics. It is anticipated that laboratory measurements could yield uncertainties in the reactive and photolysis loss processes of ∼10 % and in the infrared spectrum of ∼5 %, or better, which are significantly less than the 25 and 20 % average estimated uncertainties obtained with the methods used in this work. Therefore, laboratory studies would potentially yield more accurate metrics. Note that the absolute uncertainty in the ODP, RE, GWP, and GTP metrics would also include a consideration of the uncertainties associated with the lifetime determination methods and the Earth's irradiance profile approximation used to derive RE values, as well as the uncertainty in the CO2 radiative forcing, which were not considered in this work.

Data availability. Figures and tables, including the master summary table of metrics for all HCFCs, are provided in the supporting material. Data sheets for the individual HCFCs that contain the derived atmospheric lifetimes, ODP, RE, GWP, and GTP metrics, graphs, figures, and tables of the theoretical calculation results are available at https://www.esrl.noaa.gov/csd/groups/csd5/datasets/.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. This work was supported in part by the NOAA Climate Program Office Atmospheric Chemistry, Carbon Cycle, and Climate Program and NASA's Atmospheric Composition Program. The authors acknowledge helpful discussions with Sophia Mylona of the United Nations Environment Programme and David Fahey. The authors acknowledge the NOAA Research and Development High Performance Computing Program (http://rdhpcs.noaa.gov) and the University of North Texas chemistry cluster, purchased with support from NSF Grant CHE-1531468, for providing computing and storage resources that contributed to the research results reported within this paper.

Edited by: Andreas Engel. Reviewed by: three anonymous referees.

References

Alecu, I. M., Zheng, J., Zhao, Y., and Truhlar, D. G.: Computational thermochemistry: Scale factor databases and scale factors for vibrational frequencies obtained from electronic model chemistries, J. Chem. Theory Comput., 6, 2872–2887, https://doi.org/10.1021/ct100326h, 2010.

Baasandorj, M., Fleming, E. L., Jackman, C. H., and Burkholder, J. B.: O(1D) kinetic study of key ozone depleting substances and greenhouse gases, J. Phys. Chem. A, 117, 2434–2445, https://doi.org/10.1021/jp312781c, 2013.

Barone, V.: Anharmonic vibrational properties by a fully automated second-order perturbative approach, J. Chem. Phys., 122, 014108, https://doi.org/10.1063/1.1824881, 2005.

Bernard, F., Papanastasiou, D. K., Papadimitriou, V. C., and Burkholder, J. B.: Infrared absorption spectra of linear (L2–L5) and cyclic (D3–D6) permethylsiloxanes, J. Quant. Spectrosc. Radiat. Transf., 202, 247–254, https://doi.org/10.1016/j.jqsrt.2017.08.006, 2017.

Betowski, D., Bevington, C., and Allison, T. C.: Estimation of radiative efficiency of chemicals with potentially significant global warming potential, Environ. Sci. Technol., 50, 790–797, https://doi.org/10.1021/acs.est.5b04154, 2015.

Blowers, P. and Hollingshead, K.: Estimations of global warming potentials from computational chemistry calculations for CH2F2 and other fluorinated methyl species verified by comparison to experiment, J. Phys. Chem.
A, 113, 5942–5950, https://doi.org/10.1021/jp8114918, 2009.

Bravo, I., Aranda, A., Hurley, M. D., Marston, G., Nutt, D. R., Shine, K. P., Smith, K., and Wallington, T. J.: Infrared absorption spectra, radiative efficiencies, and global warming potentials of perfluorocarbons: comparison between experiment and theory, J. Geophys. Res., 115, D24317, https://doi.org/10.1029/2010JD014771, 2010.

Burkholder, J. B., Sander, S. P., Abbatt, J., Barker, J. R., Huie, R. E., Kolb, C. E., Kurylo, M. J., Orkin, V. L., Wilmouth, D. M., and Wine, P. H.: Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation No. 18, JPL Publication 15-10, Jet Propulsion Laboratory, Pasadena, available at: http://jpldataeval.jpl.nasa.gov, 2015.

Carnimeo, I., Puzzarini, C., Tasinato, N., Stoppa, P., Pietropolli Charmet, A., Biczysko, M., Cappelli, C., and Barone, V.: Anharmonic theoretical simulations of infrared spectra of halogenated organic compounds, J. Chem. Phys., 139, 074310, https://doi.org/10.1063/1.4817401, 2013.

Charmet, P. A., Stoppa, P., Tasinato, N., Giorgianni, S., Barone, V., Biczysko, M., Bloino, J., Cappelli, C., Carnimeo, I., and Puzzarini, C.: An integrated experimental and quantum-chemical investigation on the vibrational spectra of chlorofluoromethane, J. Chem. Phys., 139, 164302, https://doi.org/10.1063/1.4825380, 2013.

DeMore, W. B.: Experimental and estimated rate constants for the reactions of hydroxyl radicals with several halocarbons, J. Phys. Chem., 100, 5813–5820, https://doi.org/10.1021/jp953216+, 1996.

Engel, A., Bönisch, H., Ostermöller, J., Chipperfield, M., Dhomse, S., and Jöckel, P.: A refined method for calculating equivalent effective stratospheric chlorine, Atmos. Chem. Phys., 18, 601–619, https://doi.org/10.5194/acp-18-601-2018, 2018.

Etminan, M., Highwood, E. J., Laube, J. C., McPheat, R., Marston, G., Shine, K. P., and Smith, K. M.: Infrared absorption spectra, radiative efficiencies, and global warming potentials of newly-detected halogenated compounds: CFC-113a, CFC-112 and HCFC-133a, Atmosphere, 5, 473–483, https://doi.org/10.3390/atmos5030473, 2014.

Frisch, M. J., Trucks, G. W., Schlegel, H. B., Scuseria, G. E., Robb, M. A., Cheeseman, J. R., Scalmani, G., Barone, V., Petersson, G. A., Nakatsuji, H., Li, X., Caricato, M., Marenich, A., Janesko, J. B. B. G., Gomperts, R., Mennucci, B., Hratchian, H. P., Ortiz, J. V., Izmaylov, A. F., Sonnenberg, J. L., Williams-Young, D., Ding, F., Lipparini, F., Egidi, F., Goings, J., Peng, B., Henderson, A. P. T., Ranasinghe, D., Zakrzewski, V. G., Gao, J., Rega, N., Zheng, G., Liang, W., Hada, M., Ehara, M., Toyota, K., Fukuda, R., Hasegawa, J., Ishida, M., Nakajima, T., Honda, Y., Nakai, O. K. H., Vreven, T., Throssell, K. J. A., Montgomery, J., Peralta, J. E., Ogliaro, F., Bearpark, M., Heyd, J. J., Brothers, E., Kudin, K. N., Staroverov, V. N., Keith, T., Kobayashi, R., Raghavachari, J. N. K., Rendell, A., Burant, J. C., Iyengar, S. S., Tomasi, J., Cossi, M., Millam, J. M., Klene, M., Adamo, C., Cammi, R., Ochterski, J. W., Martin, R. L., Morokuma, K., Farkas, O., Foresman, J. B., and Fox, D. J.: Gaussian 09, Revision A.02, Gaussian, Inc., Wallingford CT, 2016.

Halls, M. D. and Schlegel, H. B.: Comparison of the performance of local, gradient-corrected, and hybrid density functional models in predicting infrared intensities, J. Chem. Phys., 109, 10587–10593, https://doi.org/10.1063/1.476518, 1998.

Hodnebrog, Ø., Etminan, M., Fuglestvedt, J. S., Marston, G., Myhre, G., Nielsen, C. J., Shine, K. P., and Wallington, T.
J.: Global warming potentials and radiative efficiencies of halocarbons and related compounds: A comprehensive review, Rev. Geophys., 51, 300–378, https://doi.org/10.1002/rog.20013, 2013.

Hout, R. F., Levi, B. A., and Hehre, W. J.: Effect of electron correlation on theoretical vibrational frequencies, J. Comput. Chem., 3, 234–250, https://doi.org/10.1002/jcc.540030216, 1982.

IPCC: Climate Change 2013: The Physical Science Basis, Contribution of Working Group 1 to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2013.

Jiménez-Hoyos, C. A., Janesko, B. G., and Scuseria, G. E.: Evaluation of range-separated hybrid density functionals for the prediction of vibrational frequencies, infrared intensities, and Raman activities, Phys. Chem. Chem. Phys., 10, 6621–6629, https://doi.org/10.1039/b810877c, 2008.

Joos, F., Roth, R., Fuglestvedt, J. S., Peters, G. P., Enting, I. G., von Bloh, W., Brovkin, V., Burke, E. J., Eby, M., Edwards, N. R., Friedrich, T., Frölicher, T. L., Halloran, P. R., Holden, P. B., Jones, C., Kleinen, T., Mackenzie, F. T., Matsumoto, K., Meinshausen, M., Plattner, G.-K., Reisinger, A., Segschneider, J., Shaffer, G., Steinacher, M., Strassmann, K., Tanaka, K., Timmermann, A., and Weaver, A. J.: Carbon dioxide and climate impulse response functions for the computation of greenhouse gas metrics: a multi-model analysis, Atmos. Chem. Phys., 13, 2793–2825, https://doi.org/10.5194/acp-13-2793-2013, 2013.

Katsyuba, S. A., Zvereva, E. E., and Burganov, T. I.: Is there a simple way to reliable simulations of infrared spectra of organic compounds?, J. Phys. Chem. A, 117, 6664–6670, https://doi.org/10.1021/jp404574m, 2013.

Kazakov, A., McLinden, M. O., and Frenkel, M.: Computational design of new refrigerant fluids based on environmental, safety, and thermodynamic characteristics, Ind. Eng. Chem. Res., 51, 12537–12548, https://doi.org/10.1021/ie3016126, 2012.

Kigali Amendment to the Montreal Protocol on Substances that Deplete the Ozone Layer, available at: http://ozone.unep.org/en/handbook-montreal-protocol-substances-deplete-ozone-layer/41453, 2016.

Ko, M. K. W., Newman, P. A., Reimann, S., Strahan, S. E., Plumb, R. A., Stolarski, R. S., Burkholder, J. B., Mellouki, W., Engel, A., Atlas, E. L., Chipperfield, M., and Liang, Q.: Lifetimes of stratospheric ozone-depleting substances, their replacements, and related species, available at: http://www.sparc-climate.org/publications/sparc-reports/sparc-report-no-6/ (last access: May 2018), 2013.

Kwok, E. S. C. and Atkinson, R.: Estimation of hydroxyl radical reaction rate constants for gas-phase organic compounds using a structure-reactivity relationship: An update, Atmos. Environ., 29, 1685–1695, 1995.

Leedham Elvidge, E., Bönisch, H., Brenninkmeijer, C. A. M., Engel, A., Fraser, P. J., Gallacher, E., Langenfelds, R., Mühle, J., Oram, D. E., Ray, E. A., Ridley, A. R., Röckmann, T., Sturges, W. T., Weiss, R. F., and Laube, J. C.: Evaluation of stratospheric age of air from CF4, C2F6, C3F8, CHF3, HFC-125, HFC-227ea and SF6; implications for the calculations of halocarbon lifetimes, fractional release factors and ozone depletion potentials, Atmos. Chem. Phys., 18, 3369–3385, https://doi.org/10.5194/acp-18-3369-2018, 2018.

Lindenmaier, R., Williams, S. D., Sams, R. L., and Johnson, T.
J.: Quantitative infrared absorption spectra and vibrational assignments of crotonaldehyde and methyl vinyl ketone using gas-phase mid-infrared, far-infrared, and liquid Raman spectra: s-cis vs s-trans composition confirmed via temperature studies and ab initio methods, J. Phys. Chem. A, 121, 1195–1212, https://doi.org/10.1021/acs.jpca.6b10872, 2017.

McGillen, M. R., Bernard, F., Fleming, E. L., and Burkholder, J. B.: HCFC-133a (CF3CH2Cl): OH rate coefficient, UV and infrared absorption spectra, and atmospheric implications, Geophys. Res. Lett., 42, 6098–6105, https://doi.org/10.1002/2015GL064939, 2015.

Montreal Protocol on Substances that Deplete the Ozone Layer (1987), available at: http://unep.ch/ozone/pdf/Montreal-Protocol2000.pdf (last access: May 2018), 1987.

Myhre, G., Highwood, E. J., Shine, K. P., and Stordal, F.: New estimates of radiative forcing due to well mixed greenhouse gases, Geophys. Res. Lett., 25, 2715–2718, https://doi.org/10.1029/98GL01908, 1998.

National Institute of Standards and Technology (NIST): Computational Chemistry Comparison and Benchmark Database, NIST Standard Reference Database Number 101, Release 18, October 2016, edited by: Johnson, Russell D. III, available at: http://cccbdb.nist.gov/ (last access: April 2018), 2016.

National Oceanic and Atmospheric Administration (NOAA), Earth System Research Laboratory, Global Monitoring Division: Global Greenhouse Gas Reference Network, available at: https://www.esrl.noaa.gov/gmd/ccgg/trends/global.html (last access: April 2018), 2017.

NOAA, Earth System Research Laboratory, Chemical Sciences Division, available at: https://www.esrl.noaa.gov/csd/groups/csd5/datasets/ (last access: May 2018), 2018.

Orkin, V. L., Guschin, A. G., Larin, I. K., Huie, R. E., and Kurylo, M. J.: Measurements of the infrared absorption cross-sections of haloalkanes and their use in a simplified calculational approach for estimating direct global warming potentials, J. Photochem. Photobiol. A-Chem., 157, 211–222, 2003.

Ostermöller, J., Bönisch, H., Jöckel, P., and Engel, A.: A new time-independent formulation of fractional release, Atmos. Chem. Phys., 17, 3785–3797, https://doi.org/10.5194/acp-17-3785-2017, 2017a.

Ostermöller, J., Bönisch, H., Jöckel, P., and Engel, A.: Corrigendum to "A new time-independent formulation of fractional release" published in Atmos. Chem. Phys., 17, 3785–3797, https://doi.org/10.5194/acp-17-3785-2017-corrigendum, 2017b.

Papadimitriou, V. C. and Burkholder, J. B.: OH radical reaction rate coefficients, infrared spectrum, and global warming potential of (CF3)2CFCH=CHF (HFO-1438ezy(E)), J. Phys. Chem. A, 120, 6618–6628, https://doi.org/10.1021/acs.jpca.6b06096, 2016.

Papadimitriou, V. C., Portmann, R. W., Fahey, D. W., Mühle, J., Weiss, R. F., and Burkholder, J. B.: Experimental and theoretical study of the atmospheric chemistry and global warming potential of SO2F2, J. Phys. Chem. A, 112, 12657–12666, https://doi.org/10.1021/jp806368u, 2008a.

Papadimitriou, V. C., Talukdar, R. K., Portmann, R. W., Ravishankara, A. R., and Burkholder, J. B.: CF3CF=CH2 and (Z)-CF3CF=CHF: temperature dependent OH rate coefficients and global warming potentials, Phys. Chem. Chem. Phys., 10, 808–820, https://doi.org/10.1039/b714382f, 2008b.

Pople, J. A., Schlegel, H. B., Krishnan, R., Defrees, D. J., Binkley, J. S., Frisch, M. J., Whiteside, R. A., Hout, R. F., and Hehre, W. J.: Molecular orbital studies of vibrational frequencies, Int. J.
Quantum Chem., 20, 269–278, https://doi.org/10.1002/qua.560200829, 1981.

Scott, A. P. and Radom, L.: Harmonic vibrational frequencies: An evaluation of Hartree-Fock, Møller-Plesset, quadratic configuration interaction, density functional theory, and semiempirical scale factors, J. Phys. Chem., 100, 16502–16513, https://doi.org/10.1021/jp960976r, 1996.

Sharpe, S. W., Johnson, T. J., Sams, R. L., Chu, P. M., Rhoderick, G. C., and Johnson, P. A.: Gas-phase databases for quantitative infrared spectroscopy, Appl. Spectrosc., 58, 1452–1461, 2004.

Sihra, K., Hurley, M. D., Shine, K. P., and Wallington, T. J.: Updated radiative forcing estimates of 65 halocarbons and nonmethane hydrocarbons, J. Geophys. Res., 106, 20493–20505, 2001.

Solomon, S. and Albritton, D. L.: Time-dependent ozone depletion potentials for short- and long-term forecasts, Nature, 357, 33–37, https://doi.org/10.1038/357033a0, 1992.

UN Environment OzonAction Fact Sheet, available at: http://www.unep.fr/ozonaction/information/mmcfiles/7809-e-factsheet_Kigali_Amendment_to_MP_2017.pdf, 2017.

Wallington, T. J., Pivesso, B. P., Lira, A. M., Anderson, J. E., Nielsen, C. J., Andersen, N. H., and Hodnebrog, Ø.: CH3Cl, CH2Cl2, CHCl3, and CCl4: Infrared spectra, radiative efficiencies, and global warming potentials, J. Quant. Spectrosc. Radiat. Transf., 174, 56–64, https://doi.org/10.1016/j.jqsrt.2016.01.029, 2016.

Williams, S. D., Johnson, T. J., Sharpe, S. W., Yavelak, V., Oates, R. P., and Brauer, C. S.: Quantitative vapor-phase IR intensities and DFT computations to predict absolute IR spectra based on molecular structure: I. Alkanes, J. Quant. Spectrosc. Radiat. Transf., 129, 298–307, https://doi.org/10.1016/j.jqsrt.2013.07.005, 2013.

World Meteorological Organization (WMO): Scientific Assessment of Ozone Depletion: 2014, Global Ozone Research and Monitoring Project – Report No. 55, 416 pp., Geneva, Switzerland, 2014.
### Out of three numbers, the first is twice the second and is half of the third. If the average of the three numbers is 56, then the difference of the first and third numbers is

A. 12
B. 20
C. 24
D. 48

Answer: Option D

### Solution (By Apex Team)

$\begin{array}{l}\text{Let the second number be x.}\\ \text{Then, first number = 2x}\\ \text{and third number = 4x.}\\ \text{∴ 2x + x + 4x = 56 × 3}\\ \text{⇒ 7x = 168}\\ \text{⇒ x = 24}\\ \text{∴ Required difference}\\ \text{= 4x – 2x}\\ \text{= 2x}\\ \text{= 2 × 24}\\ \text{= 48}\end{array}$
2013-12-12

# Idiomatic Phrases Game

Tom is playing a game called Idiomatic Phrases Game. An idiom consists of several Chinese characters and has a certain meaning. This game gives Tom two idioms. He should build a list of idioms such that the list starts and ends with the two given idioms. For every two adjacent idioms, the last Chinese character of the former idiom must be the same as the first character of the latter one. Each time, Tom has a dictionary that he must pick idioms from, and each idiom in the dictionary has a value that indicates how long Tom will take to find the next proper idiom in the final list. Now you are asked to write a program to compute the shortest time Tom will take, given the idiom dictionary.

The input consists of several test cases. Each test case contains an idiom dictionary. The dictionary starts with an integer N (0 < N < 1000) on one line, followed by N lines. Each line contains an integer T (the time Tom will take to work out) and an idiom. One idiom consists of several Chinese characters (at least 3), and one Chinese character is written as four hex digits (i.e., 0 to 9 and A to F). Note that the first and last idioms in the dictionary are the source and target idioms in the game. The input ends with a case where N = 0. Do not process this case.

Output one line for each case: an integer indicating the shortest time Tom will take. If the list cannot be built, output -1.

Sample input:

```
5
5 12345978ABCD2341
5 23415608ACBD3412
7 34125678AEFD4123
15 23415673ACC34123
4 41235673FBCD2156
2
20 12345678ABCD
30 DCBF5432167D
0
```

Sample output:

```
17
-1
```

Notes:

1. Just build the graph, then use SPFA to find the shortest path. Pay attention to the string handling.
2. When using a char array such as ch[10], remember to append '\0' after the last assigned element to mark the end of the string. That is, to store 10 meaningful characters the array must be at least size 11, because the extra slot holds the terminating '\0'. So it is safer to open arrays a bit larger than strictly necessary.

```cpp
#include <iostream>
#include <algorithm>
#include <cstdio>
#include <cstring>
#include <string>
#include <queue>
using namespace std;

#define MAXN 1010
#define INF 0xFFFFFFF

int n;
char str[MAXN][MAXN];
int value[MAXN][MAXN];
int t[MAXN];
int dis[MAXN];
int vis[MAXN];
queue<int> q;

/* Initialization: build the adjacency matrix. Idiom i has an edge to idiom j
   (with weight t[i]) when the last character (4 hex digits) of i equals the
   first character of j. */
void init(){
    int i, j, k, len;
    char ch1[10], ch2[10];
    for(i = 1; i <= n; i++){
        len = strlen(str[i]) - 4;
        for(k = 0; k < 4; k++) ch1[k] = str[i][len + k];
        ch1[4] = '\0'; /* append '\0' to terminate the string */
        for(j = 1; j <= n; j++){
            value[i][j] = INF;
            for(k = 0; k < 4; k++) ch2[k] = str[j][k];
            ch2[4] = '\0'; /* append '\0' to terminate the string */
            if(!strcmp(ch1, ch2)) value[i][j] = t[i];
        }
        value[i][i] = 0;
    }
}

/* SPFA: queue-based Bellman-Ford relaxation starting from node 1 */
void SPFA(){
    memset(vis, 0, sizeof(vis));
    for(int i = 2; i <= n; i++) dis[i] = INF;
    dis[1] = 0;
    vis[1] = 1;
    q.push(1);
    while(!q.empty()){
        int x = q.front();
        q.pop();
        vis[x] = 0;
        for(int i = 1; i <= n; i++){
            if(value[x][i] && dis[i] > dis[x] + value[x][i]){
                dis[i] = dis[x] + value[x][i];
                if(!vis[i]){
                    vis[i] = 1;
                    q.push(i);
                }
            }
        }
    }
}

int main(){
    while(scanf("%d", &n) && n){
        for(int i = 1; i <= n; i++)
            scanf("%d %s", &t[i], str[i]);
        init();
        SPFA();
        if(dis[n] != INF) printf("%d\n", dis[n]);
        else printf("-1\n");
    }
    return 0;
}
```
### Fixed-Sample Clinical Trials

A clinical trial is a research study in consenting human beings to answer specific health questions. One type of trial is a treatment trial, which tests the effectiveness of an experimental treatment. An example is a planned experiment designed to assess the efficacy of a treatment in humans by comparing the outcomes in a group of patients who receive the test treatment with the outcomes in a comparable group of patients who receive a placebo control treatment, where patients in both groups are enrolled, treated, and followed over the same time period.

A clinical trial is conducted according to a plan called a protocol. The protocol provides a detailed description of the study. For a fixed-sample trial, the study protocol contains detailed information such as the null hypothesis, the one-sided or two-sided test, and the Type I and II error probability levels. It also includes the test statistic and its associated critical values in the hypothesis testing.

Generally, the efficacy of a new treatment is demonstrated by testing a hypothesis $H_{0}: \theta=\theta_{0}$ in a clinical trial, where $\theta$ is the parameter of interest. For example, to test whether a population mean $\theta$ is greater than a specified value $\theta_{0}$, the hypothesis $H_{0}: \theta \leq \theta_{0}$ can be used with an upper alternative $H_{1}: \theta > \theta_{0}$. A one-sided test is a test of the hypothesis with either an upper (greater) or a lower (lesser) alternative, and a two-sided test is a test of the hypothesis with a two-sided alternative. The drug industry often prefers to use a one-sided test to demonstrate clinical superiority, based on the argument that a study should not be run if the test drug would be worse (Chow, Shao, and Wang, 2003, p. 28). But in practice, two-sided tests are commonly performed in drug development (Senn, 1997, p. 161). For a fixed Type I error probability $\alpha$, the sample sizes required by one-sided and two-sided tests are different. See Senn (1997, pp. 161–167) for a detailed description of issues involving one-sided and two-sided tests.

For independent and identically distributed observations $y_{1}, y_{2}, \ldots, y_{n}$ of a random variable, the likelihood function for $\theta$ is

$$L(\theta)=\prod_{i=1}^{n} f(y_{i};\theta)$$

where $\theta$ is the population parameter and $f(y_{i};\theta)$ is the probability or probability density of $y_{i}$. Using the likelihood function, two statistics can be derived that are useful for inference: the maximum likelihood estimator and the score statistic.

#### Maximum Likelihood Estimator

The maximum likelihood estimate (MLE) of $\theta$ is the value $\hat{\theta}$ that maximizes the likelihood function for $\theta$. Under mild regularity conditions, $\hat{\theta}$ is an asymptotically unbiased estimate of $\theta$ with variance $1/E(I(\theta))$, where $I(\theta)$ is the Fisher information and $E(I(\theta))$ is the expected Fisher information (Diggle et al., 2002, p. 340):

$$I(\theta)=-\frac{\partial^{2}}{\partial\theta^{2}}\log L(\theta), \qquad E(I(\theta))=\mathrm{E}\left(-\frac{\partial^{2}}{\partial\theta^{2}}\log L(\theta)\right)$$

The score function for $\theta$ is defined as

$$U(\theta)=\frac{\partial}{\partial\theta}\log L(\theta)$$

and usually the MLE can be derived by solving the likelihood equation $U(\theta)=0$. Asymptotically, the MLE is normally distributed (Lindgren, 1976, p. 272):

$$\hat{\theta} \sim N\!\left(\theta, \frac{1}{E(I(\theta))}\right)$$

If the Fisher information does not depend on $\theta$, then $I(\theta)$ is known. Otherwise, either the expected information evaluated at the MLE, $E(I(\hat{\theta}))$, or the observed information $I(\hat{\theta})$ can be used for the Fisher information (Cox and Hinkley 1974, p. 302; Efron and Hinkley 1978, p. 458), where the observed Fisher information is

$$I(\hat{\theta})=-\left.\frac{\partial^{2}}{\partial\theta^{2}}\log L(\theta)\right|_{\theta=\hat{\theta}}$$

If the Fisher information does depend on $\theta$, the observed Fisher information is recommended for the variance of the maximum likelihood estimator (Efron and Hinkley, 1978, p. 457). Thus, asymptotically, for large n,

$$\hat{\theta} \sim N\!\left(\theta, \frac{1}{I}\right)$$

where I is the information, either the expected Fisher information $E(I(\hat{\theta}))$ or the observed Fisher information $I(\hat{\theta})$.
So to test $H_{0}: \theta=\theta_{0}$ versus $H_{1}: \theta \neq \theta_{0}$, you can use the standardized Z test statistic

$$Z=\frac{\hat{\theta}-\theta_{0}}{\sqrt{1/I}}=(\hat{\theta}-\theta_{0})\sqrt{I}$$

and the two-sided p-value is given by

$$p=2\left(1-\Phi(|z|)\right)$$

where $\Phi$ is the cumulative standard normal distribution function and $z$ is the observed Z statistic. If BOUNDARYSCALE=SCORE is specified in the SEQDESIGN procedure, the boundary values for the test statistic are displayed on the score statistic scale. With the standardized Z statistic, the score statistic is $S=Z\sqrt{I}$ and, conversely, $Z=S/\sqrt{I}$.

#### Score Statistic

The score statistic is based on the score function for $\theta$,

$$U(\theta)=\frac{\partial}{\partial\theta}\log L(\theta)$$

Under the null hypothesis $H_{0}: \theta=0$, the score statistic is the first derivative of the log likelihood evaluated at the null reference 0:

$$U(0)=\left.\frac{\partial}{\partial\theta}\log L(\theta)\right|_{\theta=0}$$

Under regularity conditions, $U(0)$ is asymptotically normally distributed with mean zero and variance $E(I(0))$, the expected Fisher information evaluated at the null hypothesis (Kalbfleisch and Prentice, 1980, p. 45), where the Fisher information is

$$I(\theta)=-\frac{\partial^{2}}{\partial\theta^{2}}\log L(\theta)$$

That is, for large n,

$$U(0) \sim N\!\left(0, E(I(0))\right)$$

Asymptotically, the variance of the score statistic $U(0)$, $E(I(0))$, can also be replaced by the expected Fisher information evaluated at the MLE, $E(I(\hat{\theta}))$, the observed Fisher information evaluated at the null hypothesis, $I(0)$, or the observed Fisher information evaluated at the MLE, $I(\hat{\theta})$ (Kalbfleisch and Prentice, 1980, p. 46). Thus, asymptotically, for large n,

$$U(0) \sim N(0, I)$$

where I is the information, either an expected Fisher information ($E(I(0))$ or $E(I(\hat{\theta}))$) or an observed Fisher information ($I(0)$ or $I(\hat{\theta})$). So to test $H_{0}: \theta=0$ versus $H_{1}: \theta \neq 0$, you can use the standardized Z test statistic

$$Z=\frac{U(0)}{\sqrt{I}}$$

If BOUNDARYSCALE=MLE is specified in the SEQDESIGN procedure, the boundary values for the test statistic are displayed on the MLE scale. With the standardized Z statistic, the MLE statistic is $\hat{\theta}=Z\sqrt{1/I}$ and, conversely, $Z=\hat{\theta}\sqrt{I}$.

#### One-Sample Test for Mean

The following one-sample test for a mean is used to demonstrate fixed-sample clinical trials in the section One-Sided Fixed-Sample Tests in Clinical Trials and the section Two-Sided Fixed-Sample Tests in Clinical Trials. Suppose $y_{1}, y_{2}, \ldots, y_{n}$ are n observations of a response variable Y from a normal distribution $N(\theta, \sigma^{2})$, where $\theta$ is the unknown mean and $\sigma^{2}$ is the known variance. Then the log likelihood function for $\theta$ is

$$\log L(\theta)=c-\frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(y_{i}-\theta)^{2}$$

where c is a constant. The first derivative is

$$\frac{\partial}{\partial\theta}\log L(\theta)=\frac{n}{\sigma^{2}}(\bar{y}-\theta)$$

where $\bar{y}$ is the sample mean. Setting the first derivative to zero, the MLE of $\theta$ is $\hat{\theta}=\bar{y}$, the sample mean. The variance for $\hat{\theta}$ can be derived from the Fisher information

$$I(\theta)=-\frac{\partial^{2}}{\partial\theta^{2}}\log L(\theta)=\frac{n}{\sigma^{2}}$$

Since the Fisher information does not depend on $\theta$ in this case, $1/I(\theta)=\sigma^{2}/n$ is used as the variance for $\hat{\theta}$. Thus the sample mean has a normal distribution with mean $\theta$ and variance $\sigma^{2}/n$:

$$\bar{y} \sim N\!\left(\theta, \frac{\sigma^{2}}{n}\right)$$

Under the null hypothesis $H_{0}: \theta=0$, the score statistic

$$U(0)=\frac{n\,\bar{y}}{\sigma^{2}}$$

has mean zero and variance $n/\sigma^{2}$. With the MLE $\hat{\theta}=\bar{y}$, the corresponding standardized statistic is computed as $Z=\bar{y}\sqrt{n}/\sigma$, which has a normal distribution with variance 1:

$$Z \sim N\!\left(\frac{\theta\sqrt{n}}{\sigma},\,1\right)$$

Also, the corresponding score statistic is computed as $\hat{\theta}\,n/\sigma^{2}=n\,\bar{y}/\sigma^{2}$, which is identical to $U(0)$ computed under the null hypothesis $H_{0}: \theta=0$. Note that if the variable Y does not have a normal distribution, then it is assumed that the sample size n is large, such that the sample mean has an approximately normal distribution.
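As a quick numerical companion to the one-sample example, here is a minimal Python sketch (not part of the SAS documentation; the data values and sigma below are arbitrary):

```python
import math
from statistics import NormalDist

def one_sample_z(y, sigma, theta0=0.0):
    """Standardized Z statistic and two-sided p-value for H0: theta = theta0,
    assuming known variance sigma^2, as in the one-sample mean example."""
    n = len(y)
    ybar = sum(y) / n
    z = (ybar - theta0) * math.sqrt(n) / sigma   # Z = ybar * sqrt(n) / sigma when theta0 = 0
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))   # p = 2 * (1 - Phi(|z|))
    return z, p

print(one_sample_z([1.2, 0.7, 1.9, 1.1], sigma=1.0))
```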
# Oseledets multiplicative ergodic theorem

The Oseledets multiplicative ergodic theorem, or Oseledets decomposition, considerably extends the results of the Furstenberg-Kesten theorem, under the same conditions.

Let $\mu$ be a probability measure, and let $f:M\rightarrow M$ be a measure-preserving dynamical system. Let $A:M\rightarrow GL(d,\textbf{R})$ be a measurable transformation, where $GL(d,\textbf{R})$ is the space of invertible square matrices of size $d$. Consider the multiplicative cocycle $(\phi^{n}(x))_{n}$ defined by the transformation $A$, and assume $\log^{+}||A||$ and $\log^{+}||A^{-1}||$ are integrable.

Then, for $\mu$-almost every $x\in M$, one can find a natural number $k=k(x)$, real numbers $\lambda_{1}(x)>\cdots>\lambda_{k}(x)$, and a filtration

$$\textbf{R}^{d}=V_{x}^{1}>\cdots>V_{x}^{k}>V_{x}^{k+1}=\{0\}$$

such that, for $\mu$-almost every $x$ and for all $i\in\{1,\dots,k\}$:

1. $k(f(x))=k(x)$, $\lambda_{i}(f(x))=\lambda_{i}(x)$, and $A(x)\cdot V_{x}^{i}=V_{f(x)}^{i}$;
2. $\lim_{n}\frac{1}{n}\log||\phi^{n}(x)v||=\lambda_{i}(x)$ for all $v\in V_{x}^{i}\backslash V_{x}^{i+1}$;
3. $\lim_{n}\frac{1}{n}\log|\det\phi^{n}(x)|=\sum_{i=1}^{k}d_{i}(x)\lambda_{i}(x)$, where $d_{i}(x)=\dim V_{x}^{i}-\dim V_{x}^{i+1}$.

Furthermore, the numbers $k(x)$ and $\lambda_{i}(x)$ and the subspaces $V_{x}^{i}$ depend measurably on the point $x$.

The numbers $\lambda_{i}(x)$ are called the Lyapunov exponents of $A$ relative to $f$ at the point $x$. Each number $d_{i}(x)$ is called the multiplicity of the Lyapunov exponent $\lambda_{i}(x)$. We also have $\lambda_{1}=\lambda_{\max}$ and $\lambda_{k}=\lambda_{\min}$, where $\lambda_{\max}$ and $\lambda_{\min}$ are as given by the Furstenberg-Kesten theorem.
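A standard way to see the theorem numerically is to estimate the Lyapunov spectrum of a matrix cocycle by repeated QR re-orthonormalization. The sketch below (not part of the original entry) uses i.i.d. Gaussian random matrices, which corresponds to a Bernoulli base system and satisfies the integrability assumptions almost surely:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 20000

# The cocycle is phi^n(x) = A(f^{n-1} x) ... A(f(x)) A(x); here a fresh matrix
# A is drawn at each step. QR factorization tracks the expansion rates of
# nested subspaces without numerical overflow.
Q = np.eye(d)
log_growth = np.zeros(d)
for _ in range(n):
    A = rng.normal(size=(d, d))          # invertible almost surely
    Q, R = np.linalg.qr(A @ Q)
    log_growth += np.log(np.abs(np.diag(R)))

print(log_growth / n)  # estimates lambda_1 >= lambda_2 >= ... >= lambda_d
```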
Published: 01-12-2008

# Patient-reported outcomes and the mandate of measurement

Author: Gary Donaldson

Published in: Quality of Life Research | Issue 10/2008

## Abstract

### Purpose

Coherent clinical care depends on answering a basic question: is a patient getting worse, getting better, or staying about the same? This can prove surprisingly difficult to answer confidently. Patient-reported outcomes (PROs) could potentially help by providing quantifiable evidence. But quantifiable evidence is not necessarily good evidence, as this article details.

### Method

The fundamental mandate of measurement requires that errors in making an assessment be smaller than the distinctions to be measured. This mandate implies that numerical observations of patients may be poor measurements.

### Results

Individual assessments require high measurement precision and reliability. Group-averaged comparisons cancel out measurement error, but individual PROs do not. Individual PROs generate numbers, to be sure, but the numbers may fall short of what we should demand of measurements. When typical errors of measurement are large, it is not possible to answer confidently even the modest question of whether a patient is getting worse or getting better.

### Conclusion

This article explains some theory behind the mandate of measurement, provides several examples based on clinical research, and suggests strategies to measure and monitor individual patient outcomes more precisely. These include more frequent low-burden assessments, more realistic confidence levels, and strengthened measurement that integrates population data.

Footnotes

1 Under the "usual" assumptions of constant variance and conditional independence.

2 The order-of-magnitude difference in variability between individual and averaged data captured by Figs. 1 and 2 is completely representative. In 30 years' experience with patient-reported subjective ratings and standardized questionnaires collected longitudinally, I have never failed to observe it. That the discrepancy still surprises owes to the fact that journals seldom publish individual trajectories, leaving readers with the impression that mean trend lines are representative of individuals.

3 The confidence intervals for rating scales such as these become smaller near the limits of the scales. This issue is related to floor and ceiling effects that pose additional measurement problems beyond the scope of this paper. To illustrate the ideas, I ignore restriction-of-range issues and assume confidence intervals located in the middle ranges of the scales, where they are largely constant. Similarly, I do not address the issue of discrete (numerical ratings) versus continuous (visual analog) formats.

4 Some scales now available are capable of very precise measurement if length and patient burden are not concerns. Dynamic adaptive testing methods work well to generate efficient measurement while minimizing burden, but would still require several questions to achieve very high levels of precision. Methods based on item response theory are in general more sophisticated and efficient than classical psychometric approaches, but for the purposes of this paper the differences are minor ones and not central to the main points.

5 The linear trend always represents the average rate-of-change, even when the data suggest nonlinearity.
Subtle modeling issues notwithstanding, the linear trend is an excellent summary measure when the clinical question concerns whether patients are "getting better" or "getting worse."

6 In general, the standard error for any weighted combination of single assessments is given by the matrix formula $(c'\Theta c)^{1/2}$, where c is a weighted contrast or difference, and Θ is the sampling error covariance matrix over the repeated assessments of an individual. In the typical case, the diagonal elements of Θ are squared SEMs, and the off-diagonal elements are zero, but more general scenarios are possible (e.g., autocorrelation or heterogeneity in the SEM over time).

7 In fact, it is the maximum likelihood estimate. But in what follows I try to rely on ordinary language meaning and to minimize technical statistical vocabulary. In the same vein, I use "likely" as an intuitive term meaning "a good guess" without intending either Bayesian or frequentist subtleties, and use the noun "estimate" to mean an informed guess of a person's true but unknown value.

8 This particular representation is more natural in a Bayesian than a frequentist interpretation, but the same points can be made equivalently in either framework. The curve simply shows that good guesses for the unknown true value are closer to the sample measurement, while poorer guesses are farther away, on either interpretation.

9 For example, when exceeding a clinical threshold would invoke aggressive and risky therapy, it may be important to be nearly certain that the true value exceeds the threshold.
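Footnote 6 is easy to make concrete. The following minimal sketch (mine, not the author's) computes the standard error $(c'\Theta c)^{1/2}$ of a weighted contrast, assuming the typical case of independent assessments with a common SEM, so that Θ is diagonal:

```python
import numpy as np

def contrast_se(c, sem, Theta=None):
    # Standard error of a weighted combination c of repeated assessments:
    # sqrt(c' Theta c). By default Theta = diag(sem^2), i.e., independent
    # errors with a common SEM; pass a full Theta for autocorrelated errors.
    c = np.asarray(c, dtype=float)
    if Theta is None:
        Theta = np.diag(np.full(c.size, sem ** 2))
    return float(np.sqrt(c @ Theta @ c))

# Change score (last minus first) over 4 assessments with SEM = 5 points:
print(contrast_se([-1, 0, 0, 1], sem=5.0))  # 5 * sqrt(2) ≈ 7.07
```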
Metadata

Title: Patient-reported outcomes and the mandate of measurement
Author: Gary Donaldson
Publication date: 01-12-2008
Publisher: Springer Netherlands
Published in: Quality of Life Research / Issue 10/2008
Print ISSN: 0962-9343
Electronic ISSN: 1573-2649
DOI: https://doi.org/10.1007/s11136-008-9408-4
# A football field is 100 yards long and 50 yards wide. How do you find the length of a diagonal of the football field?

Feb 13, 2016

${100}^{2} + {50}^{2} = 12500$; $\sqrt{12500} \approx 111.80$ yards

#### Explanation:

A football field is a rectangle, so a diagonal line creates 2 right triangles. The formula relating the sides of a right triangle is ${a}^{2} + {b}^{2} = {c}^{2}$. For this problem, we know $a$ and $b$, so we just have to find $c$.

${100}^{2} + {50}^{2} = {c}^{2} = 12500$

The length of the diagonal is $\sqrt{12500}$, which is approximately 111.80 yards.

Feb 13, 2016

≈112 yards

#### Explanation:

Pythagorean Theorem: the diagonal of the football field is the hypotenuse. Let's say the diagonal is C and the two other sides are A and B.

Width = A
Length = B
Diagonal = C

so:

${a}^{2} + {b}^{2} = {c}^{2}$
${50}^{2} + {100}^{2} = {c}^{2}$
$2500 + 10000 = {c}^{2}$
$12500 = {c}^{2}$
$c = \sqrt{12500}$
$c \approx 111.80$ yards
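For readers who want to check the arithmetic numerically, a one-line computation (shown here in Python; any calculator works):

```python
import math
print(math.hypot(100, 50))  # ≈ 111.803 yards
```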
# Showing one category and hiding two others when one of three buttons is clicked

How can I refactor this code? I know it's repetitive but I'm not sure how to fix it.

```js
$(function() {
    $('#category-1-button a').bind('click', function() {
        $(this).css({opacity:'1'});
        $('#category-2-button a,#category-3-button a').css({opacity:'0.4'});
        $('#blog-headers').css({backgroundPosition: '0 0'});
        $('#category_2,#category_3').hide(0, function() {
            $('#category_1').show(0);
        });
    });
});

$(function() {
    $('#category-2-button a').bind('click', function() {
        $(this).css({opacity:'1'});
        $('#category-1-button a,#category-3-button a').css({opacity:'0.4'});
        $('#blog-headers').css({backgroundPosition: '0 -144px'});
        $('#category_1,#category_3').hide(0, function() {
            $('#category_2').show(0);
        });
    });
});

$(function() {
    $('#category-3-button a').bind('click', function() {
        $(this).css({opacity:'1'});
        $('#category-1-button a,#category-2-button a').css({opacity:'0.4'});
        $('#blog-headers').css({backgroundPosition: '0 -288px'});
        $('#category_1,#category_2').hide(0, function() {
            $('#category_3').show(0);
        });
    });
});
```

## 4 Answers

The following will work right away. I did the following:

1. Changed your bind() to a click(), which is more precise.
2. Always set the opacity of all relevant links except this to 0.4 and then set the current one to 1.
3. Combined all functions by using an if statement that compares parents. There is a slight performance loss here compared to three different functions.

Code:

```js
$(function() {
    $('#category-1-button a, #category-2-button a, #category-3-button a').click(function () {
        $('#category-1-button a,#category-2-button a,#category-3-button a').not(this).css({opacity:'0.4'});
        $(this).css({opacity:'1'});

        // Compare via .is(): two jQuery objects are never equal with ==,
        // so comparing wrapped sets directly would always be false.
        var myParent = $(this).parent();
        if (myParent.is('#category-1-button')) {
            $('#blog-headers').css({backgroundPosition: '0 0'});
            $('#category_3,#category_2').hide(0, function() {
                $('#category_1').show(0);
            });
        } else if (myParent.is('#category-2-button')) {
            $('#blog-headers').css({backgroundPosition: '0 -144px'});
            $('#category_1,#category_3').hide(0, function() {
                $('#category_2').show(0);
            });
        } else if (myParent.is('#category-3-button')) {
            $('#blog-headers').css({backgroundPosition: '0 -288px'});
            $('#category_1,#category_2').hide(0, function() {
                $('#category_3').show(0);
            });
        }
    });
});
```

NOTE: I borrowed the .not(this) from patrick dw's comment.

- I think you put the .not(this) in the wrong spot. I imagine you meant to place it before the .css() call instead of before .click(). Thanks for the credit though. :o) – patrick dw Jun 10, 2011 at 2:22
- @patrick, I updated that probably less than a minute before your comment. Thanks, though! – Justin Satyr Jun 10, 2011 at 2:26
- @Justin: Ah, I must have been looking at a stale version. – patrick dw Jun 10, 2011 at 2:29

Give each #category-n-button a class like category_button. Bind the handler in the each() method so that you can use the index argument to calculate the background position. Use this to reference the element that received the event in the click handler. Use the not() method to exclude the this element from the other category buttons when setting the opacity. Use the filter() method to show the category element that pertains to the index of .each() + 1.
```javascript
$(function() {
  var categories = $('[id^="category_"]');
  var category_buttons_a = $('.category_button a').each(function(idx) {
    $(this).bind('click', function() {
      $(this).css({opacity: '1'});
      category_buttons_a.not(this).css({opacity: '0.4'});
      $('#blog-headers').css({backgroundPosition: '0 ' + (idx * -144) + 'px'});
      categories.hide().filter('#category_' + (idx + 1)).show();
    });
  });
});
```

• This is clearly the better answer. I don't know if your use of partial matching in the id selector reduces performance, but my answer's if statements and node comparisons definitely do. – Justin Satyr Jun 10, 2011 at 2:25
• @Justin: Yes, in browsers that don't support querySelectorAll, that initial partial ID selection will be slow, but since it only occurs once, then is cached, the overall impact is small. – patrick dw Jun 10, 2011 at 2:26
• ...I really should have cached the $('#blog-headers') selection as well. I hate re-running selectors. – patrick dw Jun 10, 2011 at 2:27
• Really nice answer, and nice tips (especially the not and filter). Thanks for sharing! Hey! This could be a recruiting test for my company! ;) Jun 10, 2011 at 5:01
• You can improve performance a little by prefixing the element name to your $('.category_button a') selector. In those circumstances Sizzle will use getElementsByTagName(...) to filter before attempting a class match, instead of having to check each DOM node specifically. You should also do this for your attribute selector $('[id^="category_"]') too, and if at all possible, pass in a context (a point from which Sizzle will search). Jun 11, 2011 at 9:50

I believe I've covered all the bases with this one.

```javascript
(function($) {
  "use strict";

  // Declare both variables (the original omitted the declaration of
  // $categories, which throws a ReferenceError in strict mode).
  var $categoryButtonLinks, $categories;

  $categoryButtonLinks = $('#category-1-button a, #category-2-button a, #category-3-button a')
      .click(clickCategoryButtonLink);
  $categories = $('#category_1, #category_2, #category_3');

  function clickCategoryButtonLink(e) {
    var $this, $category, index, offset;

    $this = $(this).css('opacity', '1');
    index = $categoryButtonLinks.index($this);
    offset = -144 * index;
    index += 1;
    $category = $('#category_' + index);

    $categoryButtonLinks.not($this).css('opacity', '0.4');
    $('#blog-headers').css('background-position', '0 ' + offset + 'px');
    $categories.not($category).hide(0, function() {
      $category.show(0);
    });
  }
})(jQuery);
```

patrick dw's answer is great (+1), but I'd like to suggest an alternative, namely refactoring all the dynamic styling out into the style sheet. While it does make the style sheet longer, especially due to the "repeated" selectors, it does have the (IMHO very important) advantage of putting all the styles where they belong.

Assuming the category buttons look like this:

```html
<div id="category-1-button" class="category_button">
  <a href="#Category_1">Category 1</a>
</div>
```

Notice the reference to the category in the href. If done right, this also can have the advantage that the links would work without JavaScript (not that anyone cares about that nowadays...)

And give the categories a common class, too (e.g. "category").
### CSS

```css
.category_button a { opacity: 0.4; }
.category { display: none; }

/* The following rules/selectors could/should be generated by a script */
.category_1_selected #category-1-button a,
.category_2_selected #category-2-button a,
.category_3_selected #category-3-button a { opacity: 1; }

.category_1_selected #category_1,
.category_2_selected #category_2,
.category_3_selected #category_3 { display: block; }

.category_1_selected #blog-headers { background-position: 0 0; }
.category_2_selected #blog-headers { background-position: 0 -144px; }
.category_3_selected #blog-headers { background-position: 0 -288px; }
```

### JavaScript

```javascript
$(function() {
  var category_buttons_a = $('.category_button a').click(function() {
    // I'm putting the class on the body as an example, but any other element that
    // surrounds the buttons, the categories and the blog header is fine.
    // this.hash is "#Category_1", so strip the "#" and lowercase it to match the CSS.
    $("body").removeClass().addClass(this.hash.slice(1).toLowerCase() + "_selected");
  });
});
```

One remark on the HTML of the links: I based it on how I assume your links look, and IMHO it's wrong to do it like that. Instead it would be better to put the class (and if necessary the id) directly on the link and not on its parent element:

```html
<div>
  <a href="#Category_1" id="category-1-button" class="category_button">Category 1</a>
</div>
```
# Helmholtz Equation

The Helmholtz equation is named after the German physicist and physician Hermann von Helmholtz, original name Hermann Ludwig Ferdinand Helmholtz. This equation corresponds to the linear partial differential equation

$$\nabla^{2}A + k^{2}A = 0,$$

where $\nabla^{2}$ is the Laplacian, $k^{2}$ is the eigenvalue, and $A$ is the eigenfunction. In mathematics, the eigenvalue problem for the Laplace operator is called the Helmholtz equation. That's why it is also called an eigenvalue equation.

Here, we have three quantities, namely:

1. The Laplacian, denoted by the symbol $\nabla^{2}$
2. The wavenumber, symbolized as $k$
3. The amplitude, $A$.

The relation between these quantities is given by $\nabla^{2}A + k^{2}A = 0$. Here, in the case of usual waves, $k^{2}$ corresponds to the eigenvalue and $A$ to the eigenfunction, which simply represents the amplitude.

### Helmholtz Equation Derivation

The wave equation is given by

$$\left(\nabla^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right)u(\mathbf{r}, t) = 0. \tag{1}$$

Separating the variables, we get

$$u(\mathbf{r}, t) = A(\mathbf{r})\,T(t). \tag{2}$$

Now substituting (2) in (1):

$$\frac{\nabla^{2}A}{A} = \frac{1}{c^{2}\,T}\frac{d^{2}T}{dt^{2}}.$$

Here, the expression on the LHS depends only on $\mathbf{r}$, while the expression on the RHS depends only on $t$. These two expressions are valid only if both sides are equal to some constant value. On solving the linear partial differential equation by separation of variables, we obtain two equations, one for $A(\mathbf{r})$ and the other for $T(t)$:

$$\frac{\nabla^{2}A}{A} = -k^{2} \tag{3}$$

$$\frac{1}{c^{2}\,T}\frac{d^{2}T}{dt^{2}} = -k^{2} \tag{4}$$

Hence, we have obtained the Helmholtz equation, where $-k^{2}$ is the separation constant.

## Helmholtz Equation

$$\nabla^{2}A + k^{2}A = (\nabla^{2} + k^{2})A = 0$$

### Helmholtz Free Energy Equation Derivation

The Helmholtz function is given by

F = U - TS

Here, U = internal energy, T = temperature, S = entropy. Fᵢ is the initial Helmholtz function and Fᵣ the final function.

During an isothermal (constant-temperature) reversible process, the work done satisfies:

W ≤ Fᵢ - Fᵣ

This statement says that the decrease of the Helmholtz function gets converted to work. That's why this function is also called the free energy in thermodynamics.

Derivation: Let's say a system acquires heat δQ from the surroundings while the temperature remains constant. So,

Entropy gained by the system = dS
Entropy lost by the surroundings = δQ/T

According to the 2nd law of thermodynamics, the net entropy change is positive. From the Clausius inequality:

dS - δQ/T ≥ 0, i.e. dS ≥ δQ/T

Multiplying both sides by T, we get TdS ≥ δQ.

Now putting δQ = dU + δW (1st law of thermodynamics):

TdS ≥ dU + δW, or δW ≤ TdS - dU

Integrating both sides from the initial to the final state:

$$\int_{0}^{W} \delta W \;\le\; T\int_{S_i}^{S_r} dS \;-\; \int_{U_i}^{U_r} dU$$

W ≤ T(Sᵣ - Sᵢ) - (Uᵣ - Uᵢ)
W ≤ (Uᵢ - TSᵢ) - (Uᵣ - TSᵣ)

Now, if we observe the equation, the terms (Uᵢ - TSᵢ) and (Uᵣ - TSᵣ) are the initial and the final Helmholtz functions. Therefore, we can say that:

W ≤ Fᵢ - Fᵣ

By whatever magnitude the Helmholtz function is reduced, that amount gets converted to work.
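As a small illustration of the free-energy bookkeeping above, here is a minimal C++ sketch; all numeric values are made-up assumptions, chosen only to show the arithmetic F = U − TS and W ≤ Fᵢ − Fᵣ:

```cpp
#include <cstdio>

int main() {
    // Made-up state values (assumptions for illustration only), SI units.
    double T  = 300.0;            // constant temperature, K
    double Ui = 500.0, Si = 1.5;  // initial internal energy (J) and entropy (J/K)
    double Ur = 420.0, Sr = 1.3;  // final internal energy (J) and entropy (J/K)

    double Fi = Ui - T * Si;      // initial Helmholtz function
    double Fr = Ur - T * Sr;      // final Helmholtz function

    // Maximum work extractable in an isothermal reversible process: W <= Fi - Fr
    std::printf("Fi = %.1f J, Fr = %.1f J, W_max = %.1f J\n", Fi, Fr, Fi - Fr);
}
```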
### Helmholtz Equation Thermodynamics

The Gibbs-Helmholtz equation is a thermodynamic equation named after Josiah Willard Gibbs and Hermann von Helmholtz. This equation is used for calculating the change in Gibbs energy of a system as a function of temperature.

Gibbs free energy is a function of temperature and pressure, given by

$$G = G(T, P), \qquad G(T) = H(T) - T\,S(T),$$

where $H$ is the enthalpy and $S$ the entropy. Dividing LHS and RHS by $T$:

$$\frac{G(T)}{T} = \frac{H(T)}{T} - S(T).$$

Now doing the partial differentiation on both sides at constant pressure $P$:

$$\left(\frac{\partial (G/T)}{\partial T}\right)_{P} = \frac{1}{T}\left(\frac{\partial G}{\partial T}\right)_{P} - \frac{G}{T^{2}}.$$

Since $\left(\partial G/\partial T\right)_{P} = -S$, the right-hand side becomes

$$-\frac{S}{T} - \frac{G}{T^{2}} = -\frac{TS + G}{T^{2}} = -\frac{H}{T^{2}},$$

because $G + TS = H$.

## We Get The Equation As

$$\left(\frac{\partial\, (\Delta G/T)}{\partial T}\right)_{P} = -\frac{\Delta H}{T^{2}}$$

This is the Gibbs-Helmholtz equation in thermodynamics.

## Applications of Helmholtz Equation

There are various applications where the Helmholtz equation is found to be important. They are hereunder:

1. Seismology: for the scientific study of earthquakes and their propagating elastic waves.
2. Tsunamis
3. Volcanic eruptions
4. Medical imaging
5. Electromagnetism, including the science of optics
6. Gibbs-Helmholtz equation: used in the calculation of the change in enthalpy using the change in Gibbs energy when the temperature is varied at constant pressure.
7. CHELS: a combined Helmholtz equation-least squares method, abbreviated as CHELS. This method is used for reconstructing acoustic radiation from an arbitrary object.

FAQs (Frequently Asked Questions)

1. What did Helmholtz discover?

Ans: A German physician and physicist named Helmholtz had interests in the physiology of the senses, and he revolutionized the field of ophthalmology with the invention of the ophthalmoscope, an instrument used to examine the inside of the human eye.

2. How is Helmholtz free energy calculated?

Ans: We know that U is the internal energy of a system, PV the pressure-volume product, and TS the temperature-entropy product, where T is the temperature above absolute zero. Then by the Helmholtz free energy equation, F = U − TS, and G = H − TS, where H = U + PV. So we get:

G = U + PV - TS

This is how we can calculate the Helmholtz free energy.

3. Can Helmholtz free energy be negative?

Ans: Since the work done satisfies W ≤ Fᵢ - Fᵣ, the final Helmholtz function is always lower than the initial one. Therefore, the difference ΔF between Fᵣ and Fᵢ is negative.

4. What is the difference between Helmholtz free energy and Gibbs free energy?

Ans: In a closed thermodynamic system at constant temperature and pressure, Gibbs free energy is available to do non-PV work, while Helmholtz free energy is the maximum useful work (including PV work) that can be extracted from a thermodynamically closed system at constant temperature and volume.
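Tying the boxed Gibbs-Helmholtz relation to numbers, the sketch below integrates it between two temperatures under the common assumption that ΔH is constant over the interval; every numeric value here is an illustrative assumption, not data:

```cpp
#include <cstdio>

int main() {
    // Integrated Gibbs-Helmholtz relation (dH assumed constant on [T1, T2]):
    //   dG2/T2 = dG1/T1 + dH * (1/T2 - 1/T1)
    double dH  = -92220.0;               // assumed reaction enthalpy, J/mol
    double T1  = 298.15, dG1 = -33000.0; // assumed Gibbs energy change at T1, J/mol
    double T2  = 350.0;

    double dG2 = T2 * (dG1 / T1 + dH * (1.0 / T2 - 1.0 / T1));
    std::printf("dG(%.0f K) = %.0f J/mol\n", T2, dG2);
}
```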
# How do you Use Simpson's rule with n=8 to approximate the integral int_0^pix^2*sin(x)dx? Aug 20, 2014 For any numerical approximation of a function, you always start with a table of values. For your problem, we have: $a = 0$ $b = \pi$ $n = 8$ So, $\Delta x = \frac{b - a}{n} = \frac{\pi}{8}$ ${x}_{i} = a + i \Delta x , i \in \left\{0 , 1 , \ldots , 8\right\}$ Now it is a matter of applying Simpson's Rule: ${\int}_{0}^{\pi} {x}^{2} \cdot \sin x \mathrm{dx} = {\int}_{0}^{\pi} f \left(x\right) \mathrm{dx} \approx \frac{\Delta x}{3} \left(f \left({x}_{0}\right) + 4 f \left({x}_{1}\right) + 2 f \left({x}_{2}\right) + 4 f \left({x}_{3}\right) + \ldots + 2 f \left({x}_{6}\right) + 4 f \left({x}_{7}\right) + f \left({x}_{8}\right)\right)$ I'll skip the substitution of values because it's messy. We get 5.86924686 as the approximation. Using numerical integration on a calculator gets a value of 5.869604401 which means the approximation is good to 3 decimal places. Notice the pattern of the coefficients for the sum is: 1, 4, 2, 4, ..., 2, 4, 1. This means that to use Simpson's Rule, we need an odd number of values or an even number of intervals; $n$ is even. Note that this integral can be solved using integration by parts twice to get an exact answer which is ${\pi}^{2} - 4$.
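The result above is easy to reproduce; here is a self-contained C++ sketch of composite Simpson's rule applied to the same integral with $n = 8$:

```cpp
#include <cmath>
#include <cstdio>

// f(x) = x^2 * sin(x), the integrand from the answer above.
double f(double x) { return x * x * std::sin(x); }

// Composite Simpson's rule with n subintervals (n must be even);
// interior coefficients alternate 4, 2, 4, ..., 2, 4.
double simpson(double a, double b, int n) {
    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        sum += (i % 2 ? 4.0 : 2.0) * f(a + i * h);
    return sum * h / 3.0;
}

int main() {
    const double pi = std::acos(-1.0);
    std::printf("Simpson n=8:   %.8f\n", simpson(0.0, pi, 8)); // ~5.86924686
    std::printf("exact pi^2-4:  %.8f\n", pi * pi - 4.0);       // ~5.86960440
}
```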
MP5 Photomosaic

SourceImage Class Reference

SourceImage extends the Image class and provides some additional data and functions suitable for the source image for the photomosaic. More...

#include "sourceimage.h"

## Public Member Functions

SourceImage (const PNG &image, int resolution)
Default constructor. More...

HSLAPixel getRegionColor (int row, int col) const
Get the average color of a particular region. More...

int getRows () const
Retrieve the number of row sub-regions the source image is broken into. More...

int getColumns () const
Retrieve the number of column sub-regions the source image is broken into. More...

## Detailed Description

SourceImage extends the Image class and provides some additional data and functions suitable for the source image for the photomosaic.

The constructor takes a resolution, the number of sub-regions along the larger dimension to divide the image into. The image will then be processed to find the average color of each region.

## Constructor & Destructor Documentation

SourceImage::SourceImage ( const PNG & image, int resolution )

Default constructor.

Parameters
image — The image data from GraphicsMagick
resolution — The resolution of the sub-regions. This will be the number of tiles in the larger of the two dimensions of the SourceImage. If the given resolution is greater than the largest dimension of the image, it will be automatically set to the pixel dimensions.

## Member Function Documentation

HSLAPixel SourceImage::getRegionColor ( int row, int col ) const

Get the average color of a particular region. Note, the row and column should be specified with a 0-based index; i.e., the top-left corner is (row, column) = (0, 0).

Parameters
row — The row of the particular region in the image
col — The column of the particular region in the image

Returns
The average color of the region

int SourceImage::getRows ( ) const

Retrieve the number of row sub-regions the source image is broken into.

Returns
The number of rows, or -1 if in an invalid state

int SourceImage::getColumns ( ) const

Retrieve the number of column sub-regions the source image is broken into.

Returns
The number of columns, or -1 if in an invalid state

The documentation for this class was generated from the following files:
• sourceimage.h
• sourceimage.cpp
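The reference above has no usage example, so here is a hedged sketch of how the class might be driven. It assumes the course project's sourceimage.h together with its PNG and HSLAPixel image types (including a PNG::readFromFile member and an HSLAPixel::h field, which are assumptions about that library), so it only compiles inside that project:

```cpp
#include <iostream>
#include "sourceimage.h"   // assumed project header providing SourceImage, PNG, HSLAPixel

int main() {
    PNG png;
    png.readFromFile("source.png");   // hypothetical input file and loader

    SourceImage source(png, 10);      // 10 tiles along the larger dimension

    // Walk every sub-region and report its average color.
    for (int row = 0; row < source.getRows(); ++row) {
        for (int col = 0; col < source.getColumns(); ++col) {
            HSLAPixel avg = source.getRegionColor(row, col);
            std::cout << "(" << row << "," << col << ") hue = " << avg.h << "\n";
        }
    }
    return 0;
}
```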
# The spectrum of the Hardy Banach algebra $(H^1(\mathbb{T}),+,*,\|\|_1)$.

Let $\mathbb{T}$ be the $1$-torus and define:

$$H^1(\mathbb{T}):=\{f\in L^1(\mathbb{T})\ | \ \forall n<0, \hat{f}(n)=0\},$$

where, if $f\in L^1(\mathbb{T})$, we have denoted by $\hat{f}$ the Fourier transform of $f$. By the linearity of the Fourier transform, it is clear that $H^1(\mathbb{T})$ is a subspace of $L^1(\mathbb{T})$. By $\forall f,g \in L^1(\mathbb{T}), \widehat{f*g}=\hat{f}\hat{g}$ and by Young's inequality for convolution, it is clear that $\left(H^1(\mathbb{T}),+,*,\|\|_1\right)$ is a commutative normed algebra. By the continuity of the Fourier transform, it is clear that $H^1(\mathbb{T})$ is a closed subspace of $\left(L^1(\mathbb{T}),\|\|_1\right),$ and so $\left(H^1(\mathbb{T}),+,*,\|\|_1\right)$ is a commutative Banach algebra.

This makes one wonder what the spectrum (i.e. the set of non-null multiplicative linear functionals) of the commutative Banach algebra $\left(H^1(\mathbb{T}),+,*,\|\|_1\right)$ looks like...

Clearly, every element of the spectrum of the commutative Banach algebra $(L^1(\mathbb{T}),+,*,\|\|_1)$ is also an element of the spectrum of $\left(H^1(\mathbb{T}),+,*,\|\|_1\right)$, provided that this element does not vanish on the whole of $H^1(\mathbb{T})$. Since the spectrum of $\left(L^1(\mathbb{T}),+,*,\|\|_1\right)$ consists of the functionals

$$\varphi_n: L^1(\mathbb{T})\rightarrow\mathbb{C}, f\mapsto \hat{f}(n)$$

with $n \in \mathbb{Z}$, and since, for each integer $n$, the multiplicative functional $\varphi_n$ does not vanish identically on $H^1(\mathbb{T})$ if and only if $n$ is non-negative, we find that $\varphi_n$ is an element of the spectrum of $\left(H^1(\mathbb{T}),+,*,\|\|_1\right)$ for all $n\ge0.$

So the question: are there any other elements of the spectrum out there?

There are none. Let $\phi$ be such an element. Since the linear span of $\{z^n:n\in\mathbb{N} \}$ is dense in $H^1$, there exists $n$ such that $\phi(z^n)\ne 0$. Since $z^n*z^n = z^n$, it follows that $\phi(z^n)^2 = \phi(z^n)$, so $\phi(z^n)=1$. Then for any $f\in H^1$ we have

$$\phi(f) = \phi(f)\phi(z^n) = \phi(f*z^n) =\phi(\hat f(n) z^n) = \hat f(n)$$
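As a side check on the idempotence used in the answer (with the normalized Haar measure on $\mathbb{T}$, so that $\widehat{z^n}(m)=\delta_{nm}$):

$$\widehat{z^n * z^n}(m) \;=\; \widehat{z^n}(m)\,\widehat{z^n}(m) \;=\; \delta_{nm}^2 \;=\; \delta_{nm} \;=\; \widehat{z^n}(m) \quad\text{for all } m,$$

and since the Fourier transform is injective on $L^1(\mathbb{T})$, indeed $z^n * z^n = z^n$.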
## Lectures on nonlinear evolution equations. Initial value problems. (English) Zbl 0811.35002

Aspects of Mathematics. 19. Braunschweig etc.: Vieweg. viii, 259 p. (1992).

The author investigates the global existence and uniqueness of small smooth solutions for nonlinear evolution equations, including the nonlinear wave equation, the nonlinear heat equation, the nonlinear thermoelastic system, etc. The whole book consists of twelve chapters (or sections). In the first ten chapters, the author gives a detailed description of the global existence theory for the initial value problem for the nonlinear wave equation: $$L^p - L^q$$ decay estimates for solutions of the linear wave equation; local existence and uniqueness for the nonlinear wave equation; a priori estimates of weighted norms of solutions; and a continuation argument. In Chapter 11, the author briefly discusses the global existence and uniqueness of small smooth solutions to the initial value problem for other nonlinear evolution equations: equations of nonlinear elasticity; the nonlinear heat equation; equations of nonlinear thermoelasticity; nonlinear Schrödinger equations; nonlinear Klein-Gordon equations; Maxwell equations and nonlinear plate equations. The basic strategy is still the same: a continuation argument combining local existence with uniform a priori estimates of solutions. In the final chapter, the author briefly discusses further aspects, including initial boundary value problems and some open problems. The book also describes the contributions of the author to this area, especially to the equations of nonlinear thermoelasticity.

This book systematically describes an important topic in the theory of nonlinear partial differential equations: global existence and uniqueness of small smooth solutions. The book is self-contained and well written. It is worth reading for readers interested in this topic and its recent developments.

### MSC:

35-02 Research exposition (monographs, survey articles) pertaining to partial differential equations
35G10 Initial value problems for linear higher-order PDEs
35K25 Higher-order parabolic equations
35K55 Nonlinear parabolic equations
35L70 Second-order nonlinear hyperbolic equations
35Q55 NLS equations (nonlinear Schrödinger equations)
# Time Evolution in the Interaction Picture

To switch into the interaction picture, we divide the Schrödinger-picture Hamiltonian into two parts:

$$H_{\text{S}} = H_{0,\text{S}} + H_{1,\text{S}},$$

where $H_{0,\text{S}}$ is a free, exactly solvable part and $H_{1,\text{S}}$ is the interaction. The interaction picture is a special case of a unitary transformation applied to the Hamiltonian and the state vectors. It is a mixture of the Heisenberg and Schrödinger pictures: both the quantum state $|\psi(t)\rangle$ and the operator $\hat A(t)$ are time dependent. State vectors transform as

$$|\psi_{\text{I}}(t)\rangle = e^{iH_{0,\text{S}}t/\hbar}\,|\psi_{\text{S}}(t)\rangle,$$

and an operator in the interaction picture is defined as

$$A_{\text{I}}(t) = e^{iH_{0,\text{S}}t/\hbar}\,A_{\text{S}}\,e^{-iH_{0,\text{S}}t/\hbar}.$$

For reference, in the Schrödinger picture the time-evolution operator $U(t, t_0)$ is defined as the operator which acts on the ket at time $t_0$ to produce the ket at some other time $t$; for a time-independent Hamiltonian,

$$|\psi_{\text{S}}(t)\rangle = e^{-iH_{\text{S}}t/\hbar}\,|\psi(0)\rangle.$$

In the interaction picture the operators evolve under $H_0$ alone. As an example, consider the harmonic oscillator with a time-dependent perturbation:

$$\hat{H}_0=\hbar \omega \left( \hat{a}^{\dagger}\hat{a}+\frac{1}{2} \right), \qquad \hat{V}(t)=\lambda \left( e^{i\Omega t}\hat{a}^{\dagger}+e^{-i\Omega t}\hat{a} \right).$$

The interaction-picture equations of motion for the ladder operators are

$$\frac{d\hat{a}}{dt}=\frac{1}{i\hbar}\left[ \hat{a},\,\hbar \omega \left(\hat{a}^{\dagger}\hat{a} + \frac{1}{2} \right) \right] = -i\omega\,\hat{a}, \qquad \frac{d\hat{a}^{\dagger}}{dt}=\frac{1}{i\hbar}\left[ \hat{a}^{\dagger},\,\hbar \omega \left( \hat{a}^{\dagger}\hat{a} + \frac{1}{2} \right) \right] = i\omega\,\hat{a}^{\dagger},$$

with solutions

$$\hat{a}(t)=\hat{a}(0)e^{-i\omega t}, \qquad \hat{a}^{\dagger}(t)=\hat{a}^{\dagger}(0)e^{i\omega t}.$$
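As a check on the operator transformation rule above, applying it to the perturbation $\hat V(t)$ of the driven oscillator gives

$$\hat V_{\text{I}}(t) = e^{i\hat H_0 t/\hbar}\,\hat V(t)\,e^{-i\hat H_0 t/\hbar} = \lambda\left(e^{i(\Omega+\omega)t}\,\hat a^{\dagger}(0) + e^{-i(\Omega+\omega)t}\,\hat a(0)\right),$$

using $\hat a(t)=\hat a(0)e^{-i\omega t}$ and $\hat a^{\dagger}(t)=\hat a^{\dagger}(0)e^{i\omega t}$; with the sign conventions chosen here, the drive appears in the interaction picture with the combined frequency $\Omega+\omega$.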
However, in contrast to the usual Schrödinger picture, even the observables in the interaction picture evolve in time; the state vector satisfies a Schrödinger-like equation in which only the interaction part of the Hamiltonian appears:

$$i\hbar\,\frac{d}{dt}|\psi_{\text{I}}(t)\rangle = H_{1,\text{I}}(t)\,|\psi_{\text{I}}(t)\rangle, \qquad H_{1,\text{I}}(t) = e^{iH_{0,\text{S}}t/\hbar}\,H_{1,\text{S}}\,e^{-iH_{0,\text{S}}t/\hbar}.$$

Its formal solution is a time-ordered exponential. Iterating the corresponding integral equation and using the time-ordering operator $T$, which rearranges any product of operators so that later times stand to the left, the integration extrema can be adjusted (the integrand is totally symmetric) to write the well-known path-ordered exponential:

$$U_{\text{I}}(t,0)=\mathbf{1}+\sum_{k=1}^{+\infty}\frac{1}{k!}\left(\frac{1}{i\hbar}\right)^{k}\int_0^t dt_1\cdots\int_0^t dt_k\; T\!\left[H_{1,\text{I}}(t_1)\cdots H_{1,\text{I}}(t_k)\right] = T\exp\left[\frac{1}{i\hbar}\int_0^t dt'\,H_{1,\text{I}}(t')\right].$$

With this form of the potential inside the path-ordered exponential, the Gell-Mann and Low theorem serves to construct the lowest energy eigenvector (proportional to the ground state vector) of the full Hamiltonian out of the normalized ground state vector of the (appropriately chosen) free Hamiltonian.
Any possible choice of parts will yield a valid interaction picture; but in order for the interaction picture to be useful in simplifying the analysis of a problem, the parts will typically be chosen so that $H_{0,\text{S}}$ is well understood and exactly solvable, while $H_{1,\text{S}}$ contains some harder-to-analyze perturbation to the system. The purpose of the interaction picture is to shunt all the time dependence due to $H_0$ onto the operators, thus allowing them to evolve freely, and leaving only $H_{1,\text{I}}$ to control the time evolution of the state vectors. This is also the strategy behind the S-matrix of scattering theory: evolve the system from a time when the perturbation vanishes and the $H = H_0$ problem can be solved exactly, up to the "present" when $V$ is finite.

The density matrix can be shown to transform to the interaction picture in the same way as any other operator. If there is probability $p_n$ to be in the physical state $|\psi_n\rangle$, then

$$\rho_{\text{I}}(t) = \sum_n p_n\,|\psi_{n,\text{I}}(t)\rangle\langle\psi_{n,\text{I}}(t)|,$$

and transforming the von Neumann equation into the interaction picture gives

$$i\hbar\,\frac{d\rho_{\text{I}}}{dt} = \left[H_{1,\text{I}}(t),\,\rho_{\text{I}}(t)\right].$$

This is the Liouville (von Neumann) equation in the interaction picture: a quantum state is evolved by the interaction part of the Hamiltonian as expressed in the interaction picture, while the free evolution is carried by the operators. It is possible to obtain the interaction picture for a time-dependent Hamiltonian $H_{0,\text{S}}(t)$ as well, but the exponentials need to be replaced by the unitary propagator for the evolution generated by $H_{0,\text{S}}(t)$, or more explicitly by a time-ordered exponential integral.
Catch/Overtake Problems

# Problem: A rock is thrown vertically upward with a speed of 16.0 m/s. Exactly 1.00 s later, a ball is thrown up vertically along the same path with a speed of 23.0 m/s.(a) At what time will they strike each other?(b) At what height will the collision occur?Assuming that the order is reversed: the ball is thrown 1.00 s before the rock:(c) At what time will they strike each other?(d) At what height will the collision occur?

###### FREE Expert Solution

The problem requires us to determine the time and the height at which the two objects strike each other (meet), given their initial velocities.

This is a motion-of-multiple-objects problem involving a vertical meet and catch. Whenever we're given this kind of problem, we use the following steps:

1. Write the position equation for each object using UAM equation (3).
2. Set the position equations equal to each other.
3. Solve for time.
4. (If needed) Plug the time back into another equation to solve for Δy.

Recall that the four UAM equations are:

$$v = v_0 + at$$
$$v^2 = v_0^2 + 2a\,\Delta y$$
$$\Delta y = v_0 t + \frac{1}{2}at^2$$
$$\Delta y = \frac{(v_0 + v)}{2}t$$

The displacement in the third UAM equation is rewritten as a position equation because the two objects start at different times:

$$\overline{){{\mathbf{y}}}_{{\mathbf{f}}}{\mathbf{=}}{{\mathbf{y}}}_{{\mathbf{0}}}{\mathbf{+}}{{\mathbf{v}}}_{{\mathbf{0}}}{\mathbf{t}}{\mathbf{+}}\frac{\mathbf{1}}{\mathbf{2}}{{\mathbf{at}}}^{{\mathbf{2}}}}$$

In this problem, we're directly given some information:

· The initial speed of the rock, v0r = 16.0 m/s
· The initial speed of the ball, v0b = 23.0 m/s

As usual, we assume g = 9.8 m/s2. We'll set our coordinate system so that positive is up and the origin (y = 0) is at the launch point. The two are launched from the origin of the coordinate system (same point); thus, y0 for both objects is zero. When an object is launched from the origin, its final position equals its height above the launch point.

###### Problem Details

A rock is thrown vertically upward with a speed of 16.0 m/s. Exactly 1.00 s later, a ball is thrown up vertically along the same path with a speed of 23.0 m/s.

(a) At what time will they strike each other?
(b) At what height will the collision occur?

Assuming that the order is reversed: the ball is thrown 1.00 s before the rock:

(c) At what time will they strike each other?
(d) At what height will the collision occur?
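Because the quadratic terms cancel when the two position equations are set equal, parts (a) and (b) reduce to a linear equation. A minimal C++ sketch of that algebra follows; swapping vRock and vBall handles the reversed order of parts (c)/(d), though the resulting time must then be checked against each object's flight time before it can be called a collision:

```cpp
#include <cstdio>

int main() {
    const double g = 9.8;
    const double vRock = 16.0, vBall = 23.0, delay = 1.0;

    // Positions measured from the launch point, t in seconds after the rock:
    //   yRock(t) = vRock*t - 0.5*g*t^2
    //   yBall(t) = vBall*(t - delay) - 0.5*g*(t - delay)^2   (valid for t >= delay)
    // Setting them equal, the t^2 terms cancel, leaving a linear equation:
    //   vRock*t = vBall*(t - delay) + g*delay*t - 0.5*g*delay^2
    double t = (vBall * delay + 0.5 * g * delay * delay)
             / (vBall + g * delay - vRock);
    double y = vRock * t - 0.5 * g * t * t;

    std::printf("meet at t = %.3f s, height = %.2f m\n", t, y);  // ~1.661 s, ~13.1 m
}
```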
# The Her Majesty v. Dadabhoy, 1865 [shipping, charter]

## Owners of Her Majesty v. Dadabhoy and Co.

##### Source: The North-China Herald, 28 January 1865

H.B.M. CONSULAR COURT.

Before Sir HARRY S. PARKES, K.C.B., H.B.M. Consul, NICOL LATIMER, Esq., C. J. SKEGGS, Esq., Assessors.

OWNERS OF Her Majesty v. DADABHOY & Co.

January 24th, 1865.

The parties to this suit had agreed, by a charter-party dated Sept. 14th, 1864, that the ship Her Majesty should take on board at Shanghai a cargo of cotton or other merchandise, and therewith proceed to London or Liverpool as ordered by the charterers; the ship to have a lien on cargo for freight £3.10 per ton of 50 cubic feet. Seventy working days to be allowed, and demurrage beyond that time to be paid by the charterers at the rate of $80 a day. The plaintiffs had done every thing to entitle them to have the cargo on board the ship; but the defendants had not fulfilled their agreement, and the plaintiffs claimed £6,500 in consequence.

The defendants replied that the time allowed for the loading of the vessel had not elapsed, and that the plaintiffs had wrongfully refused to receive cargo tendered by the defendants in pursuance of the agreement which had been concluded between them. The plaintiffs had thus discharged defendants from the performance of the charter-party.

Mr. MYBURGH, in opening the case for the plaintiffs, said that this was an action for breach of charter-party in not loading the ship Her Majesty after she had been detained a reasonable time on demurrage. By the charter-party, seventy working days were allowed for loading the ship. The time allowed the charterers to detain the ship on demurrage was not fixed by the charter-party. It was a well-known rule in law that when such an agreement as a charter-party is silent as to time, the law will imply that a reasonable time should be allowed. The plaintiff gave notice to the defendants, previous to the expiration of the lay-days, that he would not remain on demurrage for more than fifteen days, and also called the attention of the defendants to the fact that the loading of the ship had not commenced. At the expiration of the fifteen days the defendants received notice that, as the ship Her Majesty had been detained a reasonable time on demurrage and the loading had not commenced, they would be held liable for breach of charter-party. The questions for the Court to decide were, whether the plaintiff was entitled to sue the defendants for breach of charter-party on the 31st December when the plaint was filed, and, secondly, what was the amount of damages to which the plaintiff was entitled.

G. F. SEYMOUR said: - I am master of the ship Her Majesty. Her register tonnage is 1,112 tons, and she will carry not less than 1,000 tons of measurement cargo. On the 14th September I made this charter-party with Messrs. Dadabhoy & Co. Lay-days commenced 24 hours after written notice was given to the charterers that the vessel was ready to receive cargo. I gave notice on the 17th through Gibb, Livingston & Co. to defendants, to remind them how nearly the lay-days had expired, and to ask them what they intended to do, as there was not a single package on board up to that time. On the 8th December, I wrote again stating that the lay-days had expired and I should then claim for demurrage from ten to fifteen days. At the expiration of 15 days they had not commenced to load the ship. They had not on the 31st December when this action was commenced. Six days after this Dadabhoy & Co.
sent 100 bales of compressed cotton alongside.  I declined to receive it on board and wrote this letter. (Letter read.) This is the letter I refer to in the one I have just read, written by Mr. Myburgh, my lawyer, to Mr. Cooper (letter read.) This is Mr. Cooper's reply (letter read). The day before I refused to receive the 100 bales on board, I caused the letter I hand in to be written to Dadabhoy & Co. declining to receive further amounts in payment of demurrage.  I hand in Dadabhoy & Co.'s reply (letter read.) While the working days were running, I have had conversations with Mr. Burjorjee of Dadabhoy & Co. on the subject of the charter-party.  He asked me in his office what I would cancel the charter-party for.  This occurred a week preceding the 21st November.  This letter embodies the result of our conversations (letter read).

The offer that I should go to Calcutta to load with rice amounted to an offer of $1.10 per bag.  I declined it as unremunerative.  They had previously offered to cancel the contract on payment of £2,000 which I also declined.  I received no answer to my letter of 21st November, but after a conversation on the 7th December, Dadabhoy & Co. wrote me this letter on that day, to which I replied on the 9th December.  (Letters read.) The offer of £2,250 to cancel the contract was made by me about the 19th November.  The offer to re-charter was made at the same time, £2.10 and 36 lay-days.  There were then about 12 lay-days of the old charter to run.  They made me this offer which I accepted, but it was afterwards withdrawn by Dadabhoy & Co.  In consequence of the defendants not loading my ship as per charter-party, I estimate my loss at £6,000.  This is a statement of my estimate of loss.  We value the ship at £16,500 - not less.  The net earnings of the ship from August 1862 to 1863 were about £5,000.  The gross earnings were between £12,000 and £13,000.

TO THE COURT: - I cannot tell the net earnings in the succeeding year.

TO MR. MYBURGH: - To my knowledge all the items (as to working expense, &c.,) stated in the account (now in court) are correct.  My charter-party does not state the number of days my ship should be detained in demurrage.  I should certainly not have agreed to over fifteen lay-days.  In this ship I have been detained only once upon demurrage.  I was then detained for about five days.  Demurrage of $80 a day only pays my actual expenses.  I have a crew of forty Europeans.  I never consented in writing, or verbally, to any alteration in the terms of the charter-party.  I did not discharge the defendant from loading the ship in the number of days mentioned in the charter-party.  I consider ten to fifteen days a reasonable time to be detained on demurrage.  To the best of my knowledge I have never signed a charter-party for more than ten days on demurrage.  I have been captain for 12 years and during this time have signed several charter-parties.

TO MR. COOPER: - When a ship is under demurrage according to a charter-party, she is under the charter-party up to a reasonable time.  I received demurrage under my charter-party up to the 6th January.  The amount stipulated in the charter-party was paid to that date.  I received the money in order to reimburse my owners for the expenses of the ship during her detention.  After fifteen days under demurrage, I did not consider the ship at the service of the charterers.  I believe I gave the charterers notice before the 6th January that the charter-party was at an end, (a letter written to the defendants by Mr.
Myburgh to that effect, on the part of Captain Seymour, on the 24th December, was handed in and read).  I am not aware that any notice was given to the charterers before Jan. 6th, that demurrage would not be received.  On the 6th Jan., before the cotton came alongside, I gave notice to my agents that demurrage would not be received.  I did not give notice to Dadabhoy & Co.  I would not have received demurrage after the 15th day (23rd), but for the advice of my agents to do so.  I left it to them.  I did not consider that as long as I received demurrage I was bound to receive cargo.  I declined to receive demurrage because I did not think it sufficient to pay the expenses of the ship.  I consider that when a vessel receives demurrage she is bound to receive her cargo.  I received demurrage on the 6th, but it was not by my consent.  I only learned last Saturday that demurrage had been received on the 6th.  I did not apply to Gibb, Livingston & Co. to enquire why they did not obey my injunctions.  I believe the Sailor's Home has been longer on demurrage than I have been, but she had a considerable portion of her cargo on board before the lay-days had expired.  I wanted $120 per day demurrage, and 1,000 bales of cotton as guarantee.  The current rate of freight is now £2.10 I believe.  I considered the charter-party at an end after the 15th day when action was commenced.  In any event I think my notice of the 6th put an end to the charter-party.  Gibb, Livingston & Co. have shewn me the correspondence between themselves and Dadabhoy & Co.  During my last charter-party I received demurrage at Calcutta.  I received 100 rupees a day and port expenses were paid.  I don't know the amount of the port expenses.  My vessel is 8 years old.  Her original cost was between £30,000 and £40,000.  She was built in Calcutta.  Freights are stiffening now in Shanghai I believe.

TO THE COURT: - I first instructed Gibb, Livingston & Co. not to receive demurrage on the 6th; I had a conversation with them on the subject previously, and they told me that the demurrage was taken as a reduction of damages.

To MR. MYBURGH: - By saying that when a ship-owner receives demurrage day by day the charter-party remains intact, I meant that it remained intact up to the time of the claim being made for breach.  Up to the 31st December the vessel had been 22 days on demurrage.  It is not usual to receive demurrage after a claim for breach of charter-party has been made.  A 2,000-ton ship might be loaded in three weeks or a month, if kept well supplied with boats.

Some time previously, Dadabhoy had said - We'll keep your ship and pay demurrage as long as we keep it.  If ever there was a clear adherence to a charter-party, it was on the part of the defendants in this case.  The parties who had broken it were the plaintiffs, and if Dadabhoy wished to bring an action against them, he could do so.  The court would consider some time before it set aside the documents which had been put in evidence, and unless it held that the ship was not under charter to Dadabhoy & Co., on the 6th Jan., it would, he was sure, give judgment in favour of his client.

Mr. MYBURGH might remark that, on the 6th Jan., he had written to Mr. Cooper making the following offer - that Captain Seymour would stay until the end of February if Dadabhoy & Co. would pay demurrage at the rate of $120 per diem and would furnish security for the amount.  This offer had been refused by Dadabhoy, who objected to finding security.
His client was willing to leave the case to the arbitration of assessors or of the court.

Mr. COOPER at first said his client preferred that the case should be proceeded with; but eventually agreed that the whole case should be left to the decision of the court.  He might call Mr. Dadabhoy; but the case was already sufficiently clear.  He begged, however, that an opinion might be recorded whether or not his clients had broken the charter-party.  He hoped that the court would not stint his clients in time, so long as demurrage were paid in advance.

In reply to the court, Mr. Dadabhoy affirmed that he distinctly intended to load the ship, and it was arranged that the court should determine - whether there had been a breach of charter-party; if so, what damages were to be paid, and if not, on what terms the parties were to go on.

Judgment was delivered to the following effect:-

1. That the charter-party continue in force, and that under its conditions Messrs. Dadabhoy & Co. are to provide the ship Her Majesty with a full and complete cargo on or before the 15th of March next, so as to enable the ship to sail upon the following day.

2. That demurrage at the rate of 80 Mexican dollars per day for eighteen days, namely, from the 7th instant to the present date, both days inclusive, be paid by Messrs. Dadabhoy & Co. within one week from this date, and that after to-day demurrage be paid at the rate of 100 dollars per day, the omission of any daily demurrage payment to be held to be a breach of the charter-party.

3. The above decision, as shown by the proceedings of the Consular Court of yesterday, is final.

Published by Centre for Comparative Law, History and Governance at Macquarie Law School
### Information Systems And Engineering Economics Set 5

This set of Information Systems and Engineering Economics Multiple Choice Questions & Answers (MCQs) focuses on Information Systems And Engineering Economics Set 5.

Q1 | ___ is a document to convey the Aadhaar number to a resident.
• b) cidr
• c) uid

Q2 | Record date of birth of the resident, indicating day, month and ___ in the relevant field.
• a) initial
• b) surname
• c) year
• d) name

Q3 | ___ has to be recorded by the Enrolment Agency as declared by the enrollee in the box provided by recording Male, Female or Transgender.
• a) date of birth
• b) gender
• d) fingerprint

Q4 | Which of the following is used to retrieve pre-enrolment data?
• a) pre-enrolment id
• c) resident's name

Q5 | The factors of time and ___ are the defining aspects of any engineering economic decisions.
• uncertainty
• certainty

Q6 | Economic decisions differ in a fundamental way from the types of decisions typically encountered in engineering design.
• TRUE
• FALSE

Q7 | ENGINEERING ECONOMICS INVOLVES
• formulating
• estimating
• evaluating economic outcomes
• all

Q8 | The factors of time and uncertainty are the defining aspects of any engineering economic decisions.
• TRUE
• FALSE

Q9 | An instant dollar is worth more than a distant dollar.
• TRUE
• FALSE

Q10 | An engineering economic decision refers to all investment decisions relating to engineering projects.
• TRUE
• FALSE

Q11 | An engineering economic decision is the evaluation of costs and benefits associated with making a capital investment.
• TRUE
• FALSE

Q12 | Engineering economics is needed for many kinds of decision making.
• TRUE
• FALSE

Q13 | The factors of ___ and uncertainty are the defining aspects of any engineering economic decisions.
• time
• investment

Q14 | Additional risk is not taken without an expected additional return of suitable magnitude.
• TRUE
• FALSE

Q15 | Money has a time value because it can earn more money over time (earning power).
• TRUE
• FALSE

Q16 | F dollars at the end of period N is equal to a single sum P dollars now, if your earning power is measured in terms of interest rate i.
• TRUE
• FALSE

Q17 | Initial amount of money in transactions involving debt or investments is called the principal (P).
• TRUE
• FALSE

Q18 | An engineering economic decision is the evaluation of costs and benefits associated with making a capital ___.
• expenditure
• investment

Q19 | Initial amount of money in transactions involving debt or investments is called
• interest
• principal

Q20 | How many years would it take an investment to double at 10% annual interest?
• 7.27 years
• 8 years
• 9 years

Q21 | Marginal revenue must exceed marginal cost, in order to carry out a profitable increase of operations.
• TRUE
• FALSE

Q22 | A plan for receipts or disbursements (An) that yields a particular cash flow pattern over a specified length of time is called a monthly equal payment.
• TRUE
• FALSE

Q23 | At 8% interest, what is the equivalent worth of $2,042 after 5 years from now?
• 5000
• 4000
• 2000
• 3000

Q24 | If you had $2,000 now and invested it at 10%, how much would it be worth in 8 years?
• 4200
• 4287
• 5000

Q25 | Money has a time value because its purchasing power changes over time (inflation).
• TRUE
• FALSE
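Several of the compound-interest questions above (Q20, Q23, Q24) can be verified with the single-payment formula F = P(1+i)^N; a minimal C++ check:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Q20: years to double at 10% annual interest: N = ln 2 / ln 1.10
    std::printf("doubling time: %.2f years\n", std::log(2.0) / std::log(1.10)); // 7.27

    // Q23: $2,042 compounded at 8% for 5 years: F = P(1+i)^N
    std::printf("Q23: %.0f\n", 2042.0 * std::pow(1.08, 5.0));  // ~3000

    // Q24: $2,000 compounded at 10% for 8 years
    std::printf("Q24: %.0f\n", 2000.0 * std::pow(1.10, 8.0));  // ~4287
}
```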
# What to do with cuts (constraints) when a fixation is contrary to a RHS in an ILP / LP relaxation?

I am trying to understand an algorithm in a paper by Crévits et al. (2012) [1] (see Algorithm 2; the cuts I'm referring to come from the reduced costs). It uses a series of successive cuts on a linear relaxation of a problem, but it also uses a variable-fixation rule. I'm not sure whether to adjust or remove a cut (constraint) under certain circumstances where the fixation is contrary to the values that produced the constraint in the first place. For example, say you have a new cut (constraint) as follows: $$5.5x_1 + 2.3x_2 + 6.4x_3 + 7.3x_4 + 8.1x_5 + 4.9x_6 \le s - r$$ where $$s$$ is the value of a previous relaxation and $$r$$ is the value of a previous incumbent (both calculated at the time the cut was made). And there are also these constraints in the original ILP problem and the LP relaxation: \begin{align}x_1 + x_2 + x_3&= 1\\x_4 + x_5 + x_6 &= 1\end{align} Say later a fixation is found where $$x_3=0$$, but this is contrary to what was used to calculate $$s$$, i.e. $$x_3=1$$ when calculating $$s$$. What then should be done with the cut (constraint)? I understand that if the fixation value is consistent with the cut, the column can be removed from the LHS and the RHS adjusted: if the fixation is at $$1$$, the coefficient is subtracted from the RHS; if the fixation is at $$0$$, the column can simply be removed with the RHS left unchanged. Reference [1] Crévits, I., Hanafi, S., Mansi, R., Wilbaut, C. (2012). Iterative semi-continuous relaxation heuristics for the multiple-choice multidimensional knapsack problem. Computers and Operations Research. 39(1):32-41. ## 1 Answer There are two main reasons to add cuts. First, to tighten the relaxation, i.e., make the domain smaller whilst preserving the global solution. Second, to kick a known (or predicted) solution out of the problem. This is common in e.g. feasibility pumps, where we want to avoid cycling of solutions, or when we want to break symmetry. We can also generate cuts for conflict analysis, but that's a bit more convoluted. To answer your question directly: it depends on context. In general, the new variable fixing would result in the problem being infeasible in that node, which is usually what we want (e.g., this node can't produce a better value than our current one). However, this is only true if the cut is supposed to be globally valid. If the cut is only supposed to be valid in a specific neighbourhood, it should be removed instead, unless your node is in the same branch as the node used to generate the local cut.
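For the mechanical side of the question (how the cut's coefficients and RHS change under a fixation), here is a minimal sketch; the helper name and the data layout are hypothetical, and whether the updated cut may be *kept* still depends on the global-vs-local validity point made in the answer:

```python
# Sketch: substituting a variable fixation into a stored cut.
# `cut` is a (coefficients, rhs) pair representing sum_j a_j * x_j <= rhs.
# This only shows the bookkeeping; validity must be reassessed separately.

def fix_variable(cut, var, value):
    coeffs, rhs = cut
    coeffs = dict(coeffs)              # don't mutate the original cut
    a = coeffs.pop(var, 0.0)           # drop the column from the LHS
    return coeffs, rhs - a * value     # move its contribution to the RHS

cut = ({"x1": 5.5, "x2": 2.3, "x3": 6.4,
        "x4": 7.3, "x5": 8.1, "x6": 4.9}, 42.0)  # rhs = s - r, 42.0 is made up

print(fix_variable(cut, "x3", 0))  # fixation at 0: RHS unchanged
print(fix_variable(cut, "x3", 1))  # fixation at 1: RHS reduced by 6.4
```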
# Bellman-Ford - is the number of iterations greater than the diameter?

The diameter of a connected, undirected graph is the smallest natural number d such that between any two vertices of the graph there exists a path of length at most d. Prove or disprove: in Bellman-Ford, the number of iterations is always equal to or lower than d. I'm trying to solve this problem. What I tried was sketching a lot of graphs; however, I have failed to find a single graph where the number of iterations would be higher than the diameter. The only graph where the number of iterations wouldn't be <= the diameter would be a graph with negative edges, but I found out that in an undirected graph there can't be any negative edges, since otherwise there would be a negative cycle. So, AFAIK the statement is correct. However, how would I prove such a statement? I don't even know how to start. Thanks for any help. • What is the length of a path when you define diameter? Is it the sum of the weights of the edges on the path, or the number of edges on the path? – xskxzr May 18 '19 at 13:05 • @xskxzr It's path, so it's the sum of the weights of the edges on the path. – james F. May 18 '19 at 13:52 • What if the weights are very small so that $d=1$? – xskxzr May 18 '19 at 13:54 • @xskxzr If the smallest path between any two vertices is 1, then d=1 – james F. May 18 '19 at 15:32 • Given the context of this question (and that it is about Bellman Ford), I would imagine the diameter it's referring to is the number of edges on the longest shortest path. Consider a linked list of 5 nodes where each edge has weight 0.5 (nothing says this can't be the case). Then the diameter according to your definition is 2. However, Bellman-Ford will run 4 iterations in worst case. I would double check how your instructor is defining diameter because I doubt it is sum of edge weights in this context. The distinction to be concerned about is path length vs. path weight. – ryan May 18 '19 at 16:31 In the Bellman–Ford algorithm, after $$t$$ iterations the distance array contains, for every vertex, the weight of a minimal walk of at most $$t$$ edges reaching it. Assuming your graph doesn't have negative weights, the shortest walk between any two vertices will be a path, and so it uses at most diameter-many edges. Therefore there is no need to run the algorithm beyond $$d$$ iterations. • There is a subtle nuance, that there is no *need* to run the algorithm beyond $d$ iterations. However, the prototypical form of the algorithm is to naively run $|V| - 1$ iterations, so it ultimately depends on implementation as to whether or not it runs at most $d$ iterations. – ryan May 20 '19 at 18:46
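To make the answer and ryan's caveat concrete, here is a sketch (with made-up test data) of Bellman-Ford with the early-exit check: the outer loop stops one pass after no distance improves, which with non-negative weights happens within d passes, where d counts *edges* on the longest shortest path, not edge weights:

```python
def bellman_ford(n, edges, source):
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):            # classic bound: |V| - 1 passes
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:               # early exit once distances stabilize
            break
    return dist

# ryan's example: undirected 5-node path, every edge weight 0.5
# (weight-based "diameter" is 2, but shortest paths use up to 4 edges).
path = [(i, i + 1, 0.5) for i in range(4)]
edges = path + [(v, u, w) for u, v, w in path]   # store both directions
print(bellman_ford(5, edges, 0))                 # [0.0, 0.5, 1.0, 1.5, 2.0]
```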
1. Recently, there was a discussion on the page about Serre subcategories in The Stacks project. I usually follow both nLab and The Stacks project for these things, but it seems that the definitions of Serre subcategories are different here and there. I just want to clarify whether both definitions are correct or there is an error in one of them. Am I missing something? The discussion is about the use of "if" or "iff" conditions under the arrows involving $M,M',M''$ (or $A,B,C$) in the pages Serre subcategory and https://stacks.math.columbia.edu/tag/02MO (there is even more discussion going on in https://stacks.math.columbia.edu/tag/02MN). • CommentRowNumber2. • CommentAuthorUrs • CommentTimeNov 21st 2021 Please note that the $n$Lab page Serre subcategory is at best a stub that is hardly meant to be authoritative but is instead waiting for a kind expert soul to take care of it! The page history shows that its content was jotted down 12 years ago (rev 1) by a user no longer active here, and not substantially touched by any expert author. Related problems with related entries had recently been raised in comment 94319 and I have tried to patch it up then (comment 94328), but, as I disclaimed there, I am neither expert on this nor am I investing serious thoughts into it (being busy elsewhere). So if there is a contradiction between the StacksProject and these pages, it's certainly these pages that need attention. You would do the $n$Lab community a great service if you took care of this! (To start with, this may require minimal work, just replacing wrong or non-existent definitions with correct ones, or even just with pointers to correct ones). Editing pages here is pretty straightforward, and I'd be happy to help if any issues arise. • CommentRowNumber3. • CommentAuthorDmitri Pavlov • CommentTimeNov 21st 2021 If M=M', the map M→M' is the identity, and the map M'→M" is the zero map, then the sequence M→M'→M" is exact. Thus, the "only if" part of "if and only if" implies that any object M" belongs to any Serre subcategory. So it seems like the "only if" part should be removed. • CommentRowNumber4. • CommentAuthorHurkyl • CommentTimeNov 21st 2021 The "if and only if" version is meant for talking about short exact sequences $0 \to M \to M' \to M'' \to 0$. E.g. as implied by lemma 12.10.2 of the stacks project page.
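For reference, one standard formulation of the condition being discussed, stated over short exact sequences as in Hurkyl's comment (my paraphrase, treat as a sketch rather than an authoritative definition):

```latex
% A nonempty full subcategory C of an abelian category A is a Serre
% subcategory if it is closed under subobjects, quotients, and
% extensions; equivalently, for every short exact sequence in A,
\[
  0 \to M' \to M \to M'' \to 0 \ \text{exact}
  \quad\Longrightarrow\quad
  \bigl(\, M \in \mathcal{C} \iff M' \in \mathcal{C}
        \text{ and } M'' \in \mathcal{C} \,\bigr).
\]
% Note the "iff" is quantified over short exact sequences only; as
% Dmitri's example shows, it fails for arbitrary exact M -> M' -> M''.
```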
# Differential Equations in Mathematica

1. Sep 24, 2005

### amcavoy

I know how to solve them in Mathematica, but is there a way I can plot slope fields / integral curves?

2. Sep 24, 2005

### lurflurf

yes there is

3. Sep 24, 2005

### amcavoy

Well, how would I do it?

4. Sep 25, 2005

### lurflurf

There is a standard package Graphics`PlotField`. All you need for particular curves is to define a function using DSolve or NDSolve:

f[x_,x0_,y0_]:=NDSolve[{y'[x]+y[x]==0,y[x0]==y0},y[x],{x,x0,10}][[1,1,2]]

For the slope field, use the DE to get the slope.

5. Sep 25, 2005

### saltydog

Hey Apmcavoy, can you follow this (note the double equal signs):

Code (Text):

<<Graphics`PlotField`
<<Graphics`Arrow`

sol1=NDSolve[{y'[x]==y[x]-x,y[0]==0.5},y,{x,0,3}];
fsol[x_]:=Evaluate[y[x]/.Flatten[sol1]];
xpt=2.6
xed=2.7
ypt=fsol[2.6]
yed=fsol[2.7]
a1=Graphics[Arrow[{xpt,ypt},{xed,yed}]];
pv=PlotVectorField[{1,y-x},{x,-3,3},{y,-3,3},PlotRange->{{-4,4},{-4,4}},
PlotPoints->25,Axes->True]
pt1=Plot[fsol[x],{x,0,2.7},PlotStyle->{{Thickness[0.01]}}]
Show[{pv,pt1,a1}]

A plot of the results is attached. If so, can you post the same for your differential equation?

Edit: Alright, you don't need some of that stuff: the arrow tip at the end of the curve, Axes->True, Thickness, PlotStyle, PlotPoints. Just take them out to cut it down.

Attached: slope field.JPG (27.3 KB)

Last edited: Sep 25, 2005

6. Sep 25, 2005

### amcavoy

I'm not on my home computer now, but I will post it tomorrow most likely. I would have thought Wolfram would have a quicker way to do this, but I guess this is it. Thanks a lot saltydog for the information! I will try that out as soon as possible.

7. Sep 26, 2005

### saltydog

Code (Text):

<<GraphicsPlotField

sol1=NDSolve[{y'[x]==y[x]-x,y[0]==0.5},y,{x,0,3}];
pt1=Plot[Evaluate[y[x]/.sol1]
pv=PlotVectorField[{1,y-x},{x,-3,3},{y,-3,3}]
Show[{pt1,pv}]

8. Sep 27, 2005

### amcavoy

The longer way worked great. However, I couldn't get the cut-down version to work properly. Any ideas why? Thanks saltydog.

9. Sep 27, 2005

### saltydog

Alright I'm sorry. That's what I get for posting it without trying it first. Some typos (no x range in Plot). Also the <<Graphics line should be in its own cell, as it needs to be executed only once to load the package (PlotField is a library of functions).

Code (Text):

<<Graphics`PlotField`

sol1=NDSolve[{y'[x]==y[x]-x,y[0]==0.5},y,{x,0,3}];
pt1=Plot[Evaluate[y[x]/.sol1],{x,0,3}]
pv=PlotVectorField[{1,y-x},{x,-3,3},{y,-3,3}]
Show[{pt1,pv}]
# HP Forums Full Version: Calculate 200^300 with SOLVE You're currently viewing a stripped down version of our content. View the full version with proper formatting. I use this short program on my 42S to get an approximate answer for exponentiation of large arguments, e.g. 200^300 = 2.037x10^690 00 {15-Byte Prgm} 01 LBL "~Y↑X" 02 X<>Y 03 LOG 04 x 05 IP 06 LASTX 07 FP 08 10↑X 09 END 200 <ENTER> 300 <XEQ> ~Y↑X gives X:2.03703597073 Y:690 Playing with my newly acquired 17BII I came up with the following SOLVE equation (works also on the 17BII+): ~Y'X: 0xL(E:IP(L(M1:LOG(Y)xX)))+ 0xL(M:ALOG(FP(G(M1))))+ IF(S(MANT):MANT-G(M):EXP-G(E))=0 SOLVE is fun to use and amazingly powerful. (02-28-2016 12:01 PM)tomisan Wrote: [ -> ]I use this short program on my 42S to get an approximate answer for exponentiation of large arguments, e.g. 200^300 = 2.037x10^690 00 {15-Byte Prgm} 01 LBL "~Y↑X" 02 X<>Y 03 LOG 04 x 05 IP 06 LASTX 07 FP 08 10↑X 09 END 200 <ENTER> 300 <XEQ> ~Y↑X gives X:2.03703597073 Y:690 Quite acceptable result for this exponent when compared to the HP-42S full-accuracy answer: $\frac{200^{300}}{2 ^{300}}=\left ( \frac{200}{2} \right )^{300}=100^{300}= \left ( 10^{2} \right )^{300}=10^{600}$ $200^{300}=2^{300}\cdot 10^{600}=2.03703597633\cdot 10^{90}\cdot 10^{600}=2.03703597633\cdot 10^{690}$ The following allows for large exponents on the wp34S (DBLOFF, SSIZE4) while keeping full accuracy. Somewhat limited, however. Code: LBL A <> YXXY MANT x<> Y y^x MANT RCL L EXPT R^ LOG10 IP RCL* T + x<> Y END 200 ENTER 300 A --> 2.037035976334486 x<>y --> 690 450 ENTER 550 A --> 1.848768685494735 x<>y --> 1459 550 ENTER 650 A --> +∞ Error (02-28-2016 12:01 PM)tomisan Wrote: [ -> ]Playing with my newly acquired 17BII I came up with the following SOLVE equation (works also on the 17BII+): ~Y'X: 0xL(E:IP(L(M1:LOG(Y)xX)))+ 0xL(M:ALOG(FP(G(M1))))+ IF(S(MANT):MANT-G(M):EXP-G(E))=0 SOLVE is fun to use and amazingly powerful. Indeed! As long as the equations remain short, of course :-) Gerson. Edited to replace a dot with an = above. This works on the 95LX: LogPower=L(x,LOG(Base)*Power)+0*L(MANT,RND( ALOG(FP(G(x))),IP(LOG(L(XPON,IP(G(x)))))-15))*MANT*XPON And the 50g returns the exact result with all digits. There are plenty of zeros at the end and i was wondering if this was correct. Note: probably yes since 200^300 = 2^300 * 100^300 The 50g is a wild animal... The TI-89 tops out at around 614 digits for an exact results answer so 200^300 returns 2.03704E690 on my emulator. (03-01-2016 06:18 AM)Steve Simpkin Wrote: [ -> ]The TI-89 tops out at around 614 digits for an exact results answer so 200^300 returns 2.03704E690 on my emulator. On the HP 50g, exact mode on: 20370359763344860862684456884093781610514683936659362506361404493543812997633367​0618339737600000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​00000000000000000 Code: « 1 1 300 START 200 * NEXT » The maximum number of digits is limited by available memory, I think. 
P.S.: This confirms Tugdual's post above, which I hadn't read yet. The HP-49G was the first HP calculator to be able to solve this, by the way. And the Prime? Anybody? (03-01-2016 08:49 PM)Tugdual Wrote: [ -> ]And the Prime? Anybody? In Cas: see above, Gersons answer, SIZE(String(Ans)) is 691, APPROX(200^300) is +Inf Home: 9.99999999E499 Arno (03-01-2016 09:18 PM)Arno K Wrote: [ -> ] (03-01-2016 08:49 PM)Tugdual Wrote: [ -> ]And the Prime? Anybody? In Cas: see above, Gersons answer, SIZE(String(Ans)) is 691, APPROX(200^300) is +Inf Home: 9.99999999E499 Arno Thanks Arno. quod erat demonstrandum (03-01-2016 02:25 PM)Gerson W. Barbosa Wrote: [ -> ] (03-01-2016 06:18 AM)Steve Simpkin Wrote: [ -> ]The TI-89 tops out at around 614 digits for an exact results answer so 200^300 returns 2.03704E690 on my emulator. On the HP 50g, exact mode on: 20370359763344860862684456884093781610514683936659362506361404493543812997633367​0618339737600000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​00000000000000000 Code: « 1 1 300 START 200 * NEXT » The maximum number of digits is limited by available memory, I think. P.S.: This confirms Tugdual's post above, which I hadn't read yet. The HP-49G was the first HP calculator to be able to solve this, by the way. I just typed 200 300 ^ on my 50g and got the same answer in under a second. Is there a reason that you used a loop with multiplication instead? John (03-02-2016 02:22 AM)John Keith Wrote: [ -> ] (03-01-2016 02:25 PM)Gerson W. Barbosa Wrote: [ -> ]On the HP 50g, exact mode on: 20370359763344860862684456884093781610514683936659362506361404493543812997633367​0618339737600000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​0000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000​00000000000000000 Code: « 1 1 300 START 200 * NEXT » The maximum number of digits is limited by available memory, I think. P.S.: This confirms Tugdual's post above, which I hadn't read yet. The HP-49G was the first HP calculator to be able to solve this, by the way. I just typed 200 300 ^ on my 50g and got the same answer in under a second. Is there a reason that you used a loop with multiplication instead? John Yes, in 0.4193 s here. Then I was using the emulator in Debug4. I'd have to check the flags settings, but I'm sure it was in Exact Mode. Anyway, when I did 200 300 ^ I got an answer in scientific notation, not an overflow by what I can remember. I'll check it again later, as I'm away from the desktop computer now. Gerson. P.S.: I'd gotten 9.99999999999E499. 
I only didn't get an overflow error because of flag -21 setting. Upon checking the CAS MODES screen I noticed Numeric had been checked. D'oh, just unchecking it might've been easier! (03-02-2016 03:29 AM)Gerson W. Barbosa Wrote: [ -> ]P.S.: I'd gotten 9.99999999999E499. I only didn't get an overflow error because of flag -21 setting. Upon checking the CAS MODES screen I noticed Numeric had been checked. D'oh, just unchecking it might've been easier! The different behaviors based on flag and CAS configurations is both a blessing and a curse. It's nice to be able to set or clear a flag to make wholesale changes in how things are processed, but the sheer quantity of them makes troubleshooting and comparing experiences with other users prone to puzzling inconsistencies sometimes. Perhaps I'll set a signature for my account here that says something along the lines of "...this is how it works on my particular calculator with my particular configuration. Your experiences may vary drastically from the above. " Like you, I also tend to use the emulator with Debug4x when trying things out before loading up my 50g. I'm reluctant to say how often I've been caught by the fact that ZINTs get converted automatically to REALs if the emulated calculator is in approximate mode when the UserRPL program is transferred to it by Debug4x. Subsequent mode changes don't matter at that point. The program object was compiled by the calculator with REALs, so the damage has already been done. Knowing about the issue and actually remembering to make the mode change before pressing "F9" are two different things, though. (03-02-2016 06:09 PM)DavidM Wrote: [ -> ]I'm reluctant to say how often I've been caught by the fact that ZINTs get converted automatically to REALs if the emulated calculator is in approximate mode ... Thanks for reporting these little annoyances, I'm always looking in the forum for issues like this one to make sure newRPL does not repeat errors from the past. In this regards, newRPL compiler does not depend on any flags. If you write a number, the number is compiled "as written" by the user, even if it has more digits than the current system precision. I'm also for flag-free calculations (or as free as possible), ideally a command (or operator) output should be fully determined by its input, no flags should be involved. Back to the original topic, the exponents limits in newRPL are -30000 to +30000 so 200^300 is not an issue. 200^300 at any given precision gives the 2.03703...E690 with as many digits as you have requested. Setting the system precision to 91 digits is enough to get the exact result, with 90 you lose one digit. A trailing dot indicates when a result is approximated or exact, which in this case it comes handy to know if you are overflowing the current precision without having to actually look at all the digits. Reference URL's • HP Forums: https://www.hpmuseum.org/forum/index.php • :
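For readers following along without an HP, the mantissa/exponent trick from the opening post's 42S program is easy to sketch in Python; this is a rough equivalent, not part of the original thread:

```python
import math

# Split x * log10(y) into integer part (exponent) and fractional
# part (mantissa), exactly like the LOG / IP / FP / 10^x sequence
# in the 42S program.
def approx_pow(y, x):
    t = x * math.log10(y)
    exponent = math.floor(t)
    mantissa = 10 ** (t - exponent)
    return mantissa, exponent

m, e = approx_pow(200, 300)
print(f"200^300 ~ {m:.10f}e{e}")   # ~ 2.0370359763e690

# Python integers are arbitrary precision, so the exact value is
# also available directly, as on the 49G/50g in exact mode:
print(len(str(200**300)))          # 691 digits
```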
Conference paper Open Access

# Misbehavior Detection in the Internet of Things: A Network-Coding-aware Statistical Approach

Antonopoulos, Angelos; Verikoukis, Christos

### Citation Style Language JSON Export

{
  "DOI": "10.1109/INDIN.2016.7819313",
  "title": "Misbehavior Detection in the Internet of Things: A Network-Coding-aware Statistical Approach",
  "issued": {
    "date-parts": [
      [
        2016,
        7,
        18
      ]
    ]
  },
  "abstract": "<p>In the Internet of Things (IoT) context, the massive proliferation of wireless devices implies dense networks that require cooperation for the multihop transmission of the sensor data to central units. The altruistic user behavior and the isolation of malicious users are fundamental requirements for the proper operation of any cooperative network. However, the introduction of new communication techniques that improve the cooperative performance (e.g., network coding) hinders the application of traditional schemes on malicious users detection, which are mainly based on packet overhearing. In this paper, we introduce a non-parametric statistical approach, based on the Kruskal-Wallis method, for the detection of user misbehavior in network coding scenarios. The proposed method is shown to effectively handle attacks in the network, even when malicious users adopt a smart probabilistic misbehavior.</p>",
  "author": [
    {
      "family": "Antonopoulos, Angelos"
    },
    {
      "family": "Verikoukis, Christos"
    }
  ],
  "id": "569250",
  "note": "Grant numbers : This work has been supported by the research projects CellFive (TEC2014-60130-P) and AGAUR the Catalan Government (2014-SGR-1551).\u00a9 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.",
  "event-place": "Poitiers (France)",
  "type": "paper-conference",
  "event": "International Conference on Industrial Informatics (IEEE-INDIN 2016)"
}
$$\require{cancel}$$

# Torricelli’s Theorem

Figure $$\PageIndex{1}$$ shows water gushing from a large tube through a dam. What is its speed as it emerges? Interestingly, if resistance is negligible, the speed is just what it would be if the water fell a distance $$h$$ from the surface of the reservoir; the water’s speed is independent of the size of the opening. Let us check this out.

Figure $$\PageIndex{1}$$: (a) Jet tubes releasing water in the Glen Canyon Dam High Flow Experiment (Bureau of Reclamation); (b) In the absence of significant resistance, water flows from the reservoir with the same speed it would have if it fell the distance $$h$$ without friction. This is an example of Torricelli’s theorem.

Bernoulli’s equation must be used since the depth is not constant. We consider water flowing from the surface (point 1) to the tube’s outlet (point 2). Bernoulli’s equation as stated previously is $P_1 + \dfrac{1}{2}\rho v_1^2 + \rho gh_1 = P_2 + \dfrac{1}{2}\rho v_2^2 + \rho gh_2.$ Both $$P_1$$ and $$P_2$$ equal atmospheric pressure ($$P_1$$ is atmospheric pressure because it is the pressure at the top of the reservoir; $$P_2$$ must be atmospheric pressure, since the emerging water is surrounded by the atmosphere and cannot have a pressure different from atmospheric pressure) and subtract out of the equation, leaving $\dfrac{1}{2}\rho v_1^2 + \rho gh_1 = \dfrac{1}{2}\rho v_2^2 + \rho gh_2.$ Solving this equation for $$v_2^2$$, and noting that the density $$\rho$$ cancels (because the fluid is incompressible), yields $v_2^2 = v_1^2 + 2g(h_1 - h_2).$ Letting $$h = h_1 - h_2$$, the equation becomes $v_2^2 = v_1^2 + 2gh$ where $$h$$ is the height dropped by the water. This is simply a kinematic equation for any object falling a distance $$h$$ with negligible resistance. In fluids, this last equation is called Torricelli’s theorem. Note that the result is independent of the velocity’s direction, just as we found when applying conservation of energy to falling objects. Figure $$\PageIndex{2}$$: Pressure in the nozzle of this fire hose is less than at ground level for two reasons: the water has to go uphill to get to the nozzle, and speed increases in the nozzle. In spite of its lowered pressure, the water can exert a large force on anything it strikes, by virtue of its kinetic energy. Pressure in the water stream becomes equal to atmospheric pressure once it emerges into the air. All preceding applications of Bernoulli’s equation involved simplifying conditions, such as constant height or constant pressure. The next example is a more general application of Bernoulli’s equation in which pressure, velocity, and height all change (see Figure $$\PageIndex{2}$$). Example $$\PageIndex{1}$$: Calculating Pressure: A Fire Hose Nozzle Fire hoses used in major structure fires have inside diameters of 6.40 cm. Suppose such a hose carries a flow of 40.0 L/s starting at a gauge pressure of $$1.62 \times 10^6 \, N/m^2$$. The hose goes 10.0 m up a ladder to a nozzle having an inside diameter of 3.00 cm. Assuming negligible resistance, what is the pressure in the nozzle? Strategy Here we must use Bernoulli’s equation to solve for the pressure, since depth is not constant. Solution Bernoulli’s equation states $P_1 + \dfrac{1}{2}\rho v_1^2 + \rho gh_1 = P_2 + \dfrac{1}{2}\rho v_2^2 + \rho gh_2,\nonumber$ where the subscripts 1 and 2 refer to the initial conditions at ground level and the final conditions inside the nozzle, respectively. We must first find the speeds $$v_1$$ and $$v_2$$. 
Since $$Q = A_1v_1$$, we get \begin{align*}v_1 &= \dfrac{Q}{A_1} \\[5pt] &= \dfrac{40.0 \times 10^{-3} m^3/s}{\pi(3.20 \times 10^{-2} m)^2} \\[5pt] &= 12.4 \, m/s. \end{align*} Similarly, we find $v_2 = 56.6 \, m/s.\nonumber$ (This rather large speed is helpful in reaching the fire.) Now, taking $$h_1$$ to be zero, we solve Bernoulli’s equation for $$P_2$$: $P_2 = P_1 + \dfrac{1}{2}\rho(v_1^2 - v_2^2) - \rho gh_2. \nonumber$ Substituting known values yields \begin{align*} P_2 &= 1.62 \times 10^6 N/m^2 + \dfrac{1}{2}(1000 \, kg/m^3)[(12.4 \, m/s)^2 - (56.6 \, m/s)^2] - (1000 \, kg/m^3)(9.80 m/s^2)(10.0 \, m) \\[5pt] &= 0 \end{align*} Discussion This value is a gauge pressure, since the initial pressure was given as a gauge pressure. Thus the nozzle pressure equals atmospheric pressure, as it must because the water exits into the atmosphere without changes in its conditions. # Power in Fluid Flow Power is the rate at which work is done or energy in any form is used or supplied. To see the relationship of power to fluid flow, consider Bernoulli’s equation: $P + \dfrac{1}{2}\rho v^2 + \rho gh = constant.$ All three terms have units of energy per unit volume, as discussed in the previous section. Now, considering units, if we multiply energy per unit volume by flow rate (volume per unit time), we get units of power. That is $$(E/V)(V/t) = E/t$$. This means that if we multiply Bernoulli’s equation by flow rate $$Q$$, we get power. In equation form, this is $\left(P + \dfrac{1}{2}\rho v^2 + \rho gh \right)Q = power.$ Each term has a clear physical meaning. For example, $$PQ$$ is the power supplied to a fluid, perhaps by a pump, to give it its pressure $$P$$. Similarly, $$\frac{1}{2}\rho v^2Q$$ is the power supplied to a fluid to give it its kinetic energy. And $$\rho ghQ$$ is the power going to gravitational potential energy. Making Connections: Power Power is defined as the rate of energy transferred, or $$E/t$$. Fluid flow involves several types of power. Each type of power is identified with a specific type of energy being expended or changed in form. Example $$\PageIndex{2}$$: Calculating Power in a Moving Fluid Suppose the fire hose in the previous example is fed by a pump that receives water through a hose with a 6.40-cm diameter coming from a hydrant with a pressure of $$0.700 \times 10^6 \, N/m^2$$. What power does the pump supply to the water? Strategy Here we must consider energy forms as well as how they relate to fluid flow. Since the input and output hoses have the same diameters and are at the same height, the pump does not change the speed of the water nor its height, and so the water’s kinetic energy and gravitational potential energy are unchanged. That means the pump only supplies power to increase water pressure by $$0.92 \times 10^6 \, N/m^2$$ (from $$0.700 \times 10^6 \, N/m^2$$ to $$1.62 \times 10^6 \, N/m^2)$$. Solution As discussed above, the power associated with pressure is \begin{align*} power &= PQ \\[5pt] &= (0.920 \times 10^6 \, N/m^2)(40.0 \times 10^{-3} m^3/s).\\[5pt] &= 3.68 \times 10^4 \, W \\[5pt] &= 36.8 \, kW \end{align*} Discussion Such a substantial amount of power requires a large pump, such as is found on some fire trucks. (This kilowatt value converts to about 50 hp.) The pump in this example increases only the water’s pressure. If a pump—such as the heart—directly increases velocity and height as well as pressure, we would have to calculate all three terms to find the power it supplies. 
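A quick numeric check of both worked examples, using the same inputs given above (a sketch in SI units; variable names are my own):

```python
import math

# Example 1 (nozzle pressure) and Example 2 (pump power).
rho, g = 1000.0, 9.80           # water density (kg/m^3), gravity (m/s^2)
Q = 40.0e-3                     # flow rate (m^3/s)
P1, h2 = 1.62e6, 10.0           # ground-level gauge pressure (N/m^2), rise (m)

v1 = Q / (math.pi * 0.0320**2)  # speed in the hose, radius 3.20 cm -> ~12.4 m/s
v2 = Q / (math.pi * 0.0150**2)  # speed in the nozzle, radius 1.50 cm -> ~56.6 m/s

# Bernoulli, solved for the nozzle pressure P2:
P2 = P1 + 0.5 * rho * (v1**2 - v2**2) - rho * g * h2
print(v1, v2, P2)               # P2 ~ 0 N/m^2 gauge, as in the text

# Example 2: the pump raises pressure by 0.92e6 N/m^2 at the same flow rate.
print(0.92e6 * Q)               # ~3.68e4 W, i.e., 36.8 kW
```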
### Summary

• Power in fluid flow is given by the equation $$\left(P + \frac{1}{2}\rho v^2 + \rho gh\right)Q = power$$, where the first term is power associated with pressure, the second is power associated with velocity, and the third is power associated with height.

### Contributors

• Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
# Is F(x) = 3x^4 - 5 an odd or even function? Jun 28, 2016 $F \left(x\right) = 3 {x}^{4} - 5$ is even #### Explanation: definitions If $f \left(- x\right) = f \left(x\right)$ then the function is even If $f \left(- x\right) = - f \left(x\right)$ then the function is odd For $F \left(x\right) = 3 {x}^{4} - 5$ $\textcolor{w h i t e}{\text{XXX}} F \left(- x\right) = 3 {\left(- x\right)}^{4} - 5 = 3 {x}^{4} - 5 = F \left(x\right)$ so $F \left(x\right)$ is even
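A quick numeric spot-check of the definition (sampling a few points, which illustrates the conclusion but is not a proof):

```python
# F(-x) == F(x) should hold at every sample point for an even function.
F = lambda x: 3 * x**4 - 5
print(all(F(-x) == F(x) for x in [0.5, 1, 2, 3.7]))  # True -> consistent with even
```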
The quadrifolium (also known as four-leaved clover[1]) is a type of rose curve with an angular frequency of 2. It has the polar equation: ${\displaystyle r=a\cos(2\theta ),\,}$ with corresponding algebraic equation ${\displaystyle (x^{2}+y^{2})^{3}=a^{2}(x^{2}-y^{2})^{2}.\,}$ Rotated counter-clockwise by 45°, this becomes ${\displaystyle r=a\sin(2\theta )\,}$ with corresponding algebraic equation ${\displaystyle (x^{2}+y^{2})^{3}=4a^{2}x^{2}y^{2}.\,}$ In either form, it is a plane algebraic curve of genus zero. The dual curve to the quadrifolium is ${\displaystyle (x^{2}-y^{2})^{4}+837(x^{2}+y^{2})^{2}+108x^{2}y^{2}=16(x^{2}+7y^{2})(y^{2}+7x^{2})(x^{2}+y^{2})+729(x^{2}+y^{2}).\,}$ The area inside the quadrifolium is ${\displaystyle {\tfrac {1}{2}}\pi a^{2}}$, which is exactly half of the area of the circumcircle of the quadrifolium. The perimeter of the quadrifolium is ${\displaystyle 8a\operatorname {E} \left({\frac {\sqrt {3}}{2}}\right)=4\pi a\left({\frac {(52{\sqrt {3}}-90)\operatorname {M} '(1,7-4{\sqrt {3}})}{\operatorname {M} ^{2}(1,7-4{\sqrt {3}})}}+{\frac {7-4{\sqrt {3}}}{\operatorname {M} (1,7-4{\sqrt {3}})}}\right)}$ where ${\displaystyle \operatorname {E} (k)}$ is the complete elliptic integral of the second kind with modulus ${\displaystyle k}$, ${\displaystyle \operatorname {M} }$ is the arithmetic–geometric mean and ${\displaystyle '}$ denotes the derivative with respect to the second variable.[2]
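The area and perimeter statements are easy to verify numerically for $$a=1$$. A sketch assuming numpy/scipy are available; note that `scipy.special.ellipe` takes the parameter $$m=k^2$$, so $$\operatorname{E}(\sqrt{3}/2)$$ is `ellipe(0.75)`:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe

# Area of r = cos(2*theta): (1/2) * integral of r^2 over one full turn.
area, _ = quad(lambda t: 0.5 * np.cos(2 * t) ** 2, 0, 2 * np.pi)
print(area, np.pi / 2)            # both ~1.5708, i.e. half the circumcircle area

# Perimeter: integrate sqrt(r^2 + (dr/dtheta)^2) and compare to 8*E(sqrt(3)/2).
arc = lambda t: np.sqrt(np.cos(2 * t) ** 2 + 4 * np.sin(2 * t) ** 2)
perim, _ = quad(arc, 0, 2 * np.pi)
print(perim, 8 * ellipe(0.75))    # both ~9.6884
```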
# Proving the Parallelogram Side Theorem

A parallelogram is a quadrilateral with both pairs of opposite sides parallel. There are five ways in which you can prove that a quadrilateral is a parallelogram:

1. Show that both pairs of opposite sides are parallel (the definition).
2. Show that both pairs of opposite sides are congruent.
3. Show that both pairs of opposite angles are congruent.
4. Show that the diagonals bisect each other.
5. Show that one pair of opposite sides is both parallel and congruent.

Properties of a parallelogram: opposite sides are equal, opposite angles are equal, and the diagonals bisect each other. These properties and the tests above have an important relationship to one another, which is summarized in the two theorems below. Notice how one theorem is the converse of the other.

Theorem 1 (parallelogram side theorem). If a quadrilateral is a parallelogram, then both pairs of opposite sides are congruent.

Theorem 2 (converse). If both pairs of opposite sides of a quadrilateral are congruent, then the quadrilateral is a parallelogram.

Theorem 3. A quadrilateral is a parallelogram if and only if its diagonals bisect each other. Since this is an "if and only if" statement, there are two things to prove, one for each direction.

Proof sketch of Theorem 1. Points A, B, C, and D form a parallelogram; draw diagonal AC. Angles BCA and DAC are congruent by the Alternate Interior Angles Theorem, and angles BAC and DCA are congruent by the same theorem; the two triangles share side AC. Triangles BCA and DAC are therefore congruent according to the Angle-Side-Angle (ASA) Theorem. By CPCTC, opposite sides AB and CD, as well as sides BC and DA, are congruent. (Equivalently: first prove triangle ABC is congruent to triangle CDA, then state that AD and BC are corresponding sides of the triangles.)

Area of a parallelogram. Cut a right triangle from one end of the parallelogram and use it to turn the parallelogram into a rectangle; the area is then base times height. The unit of area of a polygonal region is the square metre (sq. m or m²), the standard square region of side 1 metre.

Connection to the mid-point theorem. You can use GeoGebra to check the converse of the mid-point theorem: with D and E the midpoints of two sides of triangle ABC, extend DE to F with DE = EF. From the congruency of ΔCFE and ΔADE, DFCB is a parallelogram, so DE is parallel to BC, and since DF = BC as opposite sides in a parallelogram, DE is half of BC.

Example. The perimeter of a parallelogram is 150 cm and one of its sides is greater than the other by 25 cm. Writing the sides as a and b, we have 2(a + b) = 150 and a − b = 25, so a + b = 75, giving a = 50 cm and b = 25 cm.
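Two of the five tests are easy to check with coordinates. A sketch using hypothetical vertices (any parallelogram's vertices would do):

```python
# Test 2: both pairs of opposite sides congruent (compare squared lengths).
# Test 4: diagonals bisect each other (compare midpoints of AC and BD).
def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C, D = (0, 0), (4, 1), (6, 4), (2, 3)
print(dist2(A, B) == dist2(C, D), dist2(B, C) == dist2(D, A))  # True True
print(midpoint(A, C) == midpoint(B, D))                        # True
```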
# MOSFET simulation model

I am trying to model a synchronous buck converter, for which I need to choose MOSFET models for simulation. I see that the model has various parameters, of which length and width affect the simulation results. What values should I use for the length and width parameters of the MOSFET in simulation, and how do they affect $$R_{DS(on)}$$?
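For orientation (this is background, not an answer from the original post): in the standard long-channel square-law model, the deep-triode on-resistance scales inversely with the W/L ratio, which is why those two parameters move the results so strongly:

```latex
% Long-channel MOSFET, deep-triode region (small V_DS), square-law model:
\[
  R_{DS(on)} \;\approx\; \frac{1}{\mu_n C_{ox}\,\dfrac{W}{L}\,\bigl(V_{GS}-V_{th}\bigr)}
\]
% With other parameters fixed, increasing W (or decreasing L) lowers
% R_{DS(on)}, at the cost of larger gate capacitance and switching loss.
```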
Podcast

# The Future of the Office Building

Office spaces are not just places built for people to sit in and work for eight hours every day for a monthly paycheck—they are more than that. Apart from playing a major role in the psyche and productivity of employees, the architecture or structural design of a workplace does a lot in determining employees’ job satisfaction, levels of inspiration, and motivation. This is why employers need to put as much effort into building conducive work environments for their employees as the performance, retention, and overall satisfaction they expect to see in return.

To talk more about the future of the office building and post-pandemic workspaces, Jeff is joined by SoftBank Robotics Head of Product Jordan Sun, Hughes Marino VP Chris Rohrbach, and Friday PM CEO Morten Joergensen.

Jordan Sun: Every building is a node in a larger system, right? Whether the urban system, how it connects to the suburban system, to more rural systems. Then, when you look at city planning and building planning, I think the key thing we need to keep in mind is more of a holistic approach to the human experience, and, what type of societies are we creating.

Jeff Dance: Welcome to The Future Of, a podcast by Fresh Consulting, where we discuss and learn about the future of different industries, markets, and technology verticals. Together, we’ll chat with leaders and experts in the field, and discuss how we can shape the future human experience. I’m your host, Jeff Dance.

Jeff: In this episode of The Future Of, we’re joined by Jordan Sun, Head of Product at SoftBank Robotics America, and Chris Rohrbach, a vice president of Hughes Marino, to explore the future of the office building. Welcome. It’s a pleasure to have you with me on this episode focused on the future of the office, the office building, and everything that comes with it. I’m excited to have two experienced leaders and to talk about the future together. Jordan, if we can start with you, can you tell the listeners a little bit more about yourself to kick off?

Jordan: Thank you, Jeff. It’s a pleasure to join you and join Chris on this podcast. A little bit about my background. I started out in finance, originally in my career, at an investment bank, before spending several years at the intersection of the national security community and technology, where I had the opportunity to serve as an army officer in traditional infantry, as well as special operations units, but also spent time as a diplomat as well on the civilian side. I then spent the remainder of my career in healthcare, both in med tech, including surgical robotics, and in digital health platforms as well. Then I spent some time in venture before the pandemic happened, when I decided to join the city of San Jose and worked for the mayor as the chief innovation officer for the city. Then I found my way to SoftBank very recently.

Jeff: Awesome. Excited to have you here with us. For those that don’t know, SoftBank is the largest investor in robotics worldwide, and invests a lot in the future. They’re thinking a lot about the future of the office building and really about the future of work and robot-human interaction, so, excited to have– I know you’ve traveled the world, Jordan. Have had lots of different leadership experience in the workplace, in the boardroom, on the battlefield, right? Having done a few tours. Excited to ask you for some more tips there, just from your personal experience as well, at the end.
If we can go over to you, to Chris, would you care to introduce yourself, please? Chris Rohrbach: Yes. No, absolutely, Jeff. Thanks for having us, and excited to join Jordan and you in this conversation. I’ve been in the commercial real estate space for about a decade now. Most recently, over the course of the last three or four years after joining Hughes Marino, solely focused on occupiers and tenants who either own or lease commercial real estate to run their business needs. Before that, I spent a little bit of time in customer service and retail sales at Nordstrom. Coached some football along the way as well. When I’m not at work trying to help companies solve their office space needs, I’m a husband and a father of three. Jeff: Awesome, and an ultramarathoner. Chris: That’s right. Jeff: Yeah, excited to have you. I’ve been impressed with Hughes Marino and their focus on the human experience. Not just as a broker, but really about how do you make workplaces work well and consider all that a human is and that we are? I’m excited to get your perspectives, and also some thoughts there at the end about just mental toughness, being an ultramarathoner, and how that plays into your busy life. Chris: Absolutely. Jeff: With that, let’s dive in, as we think about the future of the office building, we have the future of the building itself, right? We have the building exterior, we have the interior, we have the future of the office inside of the building and the workplace. Then we just have the future of work. All of these things are big topics, essentially. They’re all interrelated as we think about designing the future. This episode is a little bit broader because they’re connected to all of those pieces. It relates to the human experience given that the city itself, and the buildings, the building is a node in the network of a future smart city. Chris: I think most of the problems that we’re seeing and we’re hearing from clients of ours and just groups that we’re talking to, obviously are surrounding the aspect of health and wellness. There were the companies that were always forward-thinking when it came to their office space, that provided the additional amenities for their employees, or sought out buildings that had these various aspects to them. The reality is though, most companies were not thinking about that. They were thinking about how many private offices, how many square feet per employee, and that was about it. If there were any additional considerations, a lot of those were more around energy efficiency. I don’t know if you guys are familiar with LEED. LEED was the main player in terms of certifying buildings for being energy efficient. What we’re now starting to see is there’s this focus on wellness. LEED has rolled out their own indoor air quality certification. There’s other groups like WELL, which is the International WELL Building Institute, that have their own certification. A lot of the issues that we’re seeing are stemming around this whole concept of just wellness and health, and, how does that interact with the employees? Jeff: Thanks for those current insights. Jordan, what about you? What are some of the bigger problems? Especially being at the city level, working with other corporations, buildings being a node in that city, what are some of the problems that you see or have seen? Jordan: It’s a very interesting point when you look at pre and post COVID. 
For at least my time with the city of San Jose, we had at least over 46 high buildings that were in the downtown area that were considered high rises. It was a stark contrast when you look at broader trends that happened pre and post COVID. Pre COVID, something like 90% of adults spent most of their time indoors. Then, fast forward, that number dropped to 46%. More people were spending time outdoors. I’m very curious as to whether or not those trends stay when it comes to a shift to more outdoor environments, to more openly vented spaces, but also thinking about, to Chris’s point, establishing new standards of wellness. Then I think the other part is that really forced, at least when I was at the city, was really rethinking how we can meet our customers where they are and be able to address some of the challenges to keep the economy moving. Things like, a company like Camino is a startup, was working on really cool technology, digitizing and automating permitting, licensing inspections, was a huge opportunity coming out of COVID. I think probably still will be to some degree because I think people expect a lot more now that some of local government and state government have digitized, but there’s so much more work that can be done. Jeff: Thank you. It’s really interesting to think about how the pandemic is changing and will change the future of the office. I think your point, it’s yet to be determined, but Chris, from your perspective, how are you seeing it change now? Because you’re out there helping people find space now, and people are still renting buildings, they’re still building buildings, right? Yet in many cities, buildings are still fairly empty, and in other cities, we see a massive trend, at least post-Omicron, that companies truly are coming back. Tell us more about what you’re seeing from the pandemic effects. Chris: I think that the biggest thing that companies are now looking for is this combination, to Jordan’s point, of bringing the outside in, so to speak, and having some of these components that feel a little bit more like a park-type setting in a downtown urban high rise environment. An example of that would be, Skanska is one of the largest developers and landlords of office buildings in the world. They have a project in downtown Bellevue going on right now called The Eight. Obviously, weather and climate plays into this role a little bit with being in Seattle and having the rain that we have here. The entire two or three ground floors are this whole concept of library and a lounge, and intermixing retail, and almost creating an urban park-like setting with that wellness factor in place, and the fresh air, and the ability to take your meeting, ad hoc, in the cafe, and things of that nature. I would say that most of the companies that are forward-thinking and realize that the office is going to continue to play a large role in their organization moving forward even if that is some sort of a hybrid model, they have to provide additional amenities to their employees to make it something where their employees want to be at, as opposed to have to be at. I will say there’s companies that have been doing this. Like, Valve is a company here locally, for example, that I would say sets the gold standard for what they offer their employees when it comes to massage rooms in their office space, and a barbershop, and an entire half of a floor of a downtown high rise that has field turf and personal training studios. They have executive chefs that cook meals and stuff. 
Those are the things that I think you’re going to have to start to see and expect from buildings. More specifically, from inside your space in order to attract the talent and encourage people to want to come back into the office, and hopefully get those benefits of the collaboration and productivity that we’ve seen can be enhanced by being together. Jordan: Jeff, to add on that point, when we think about the experience for occupants in the building, it really depends. Is it an office? Is it residential? Is it retail? There are some consistent themes that I think were highlighted there, which is, at the very least, the feeling of safety and cleanliness is definitely prioritized. What was really interesting here for SoftBank at least, is seeing some of our customers respond incredibly positively, including their staff, with our cleaning robots, and saying, “Look, covering 99.9% of our areas and having pathogen removal out of our floors is a fantastic offering to say we have a commitment to your safety and your experience.” I think the other part is just seeing how other people are responding on social media too, to the idea of robots being able to do that heavy lift. Whereas, we don’t have to send humans to do the same repetitive task over and over again, especially when it’s probably unsafe to continuously expose people. I think there’s just a lot of opportunity that goes to be explored, as we think about improving the experience overall indoors and eventually outdoors too. Jeff: That’s awesome. We heard from an outside expert that also does a lot of space planning and workplace design. His name was Morten Joergensen, and he talked about emotion. How emotion is actually part motion, essentially, and he talked about the importance of motion and movement. Morten Joergensen: My name is Morten Joergensen. I am the CEO of Friday PM, and I’ve spent my whole career working with customers about workplace transformation and workplace strategy. I’m actively passionate about, how we figure out the best way to work in the future? For some, that is connecting square meters to corporate strategy, but for others, it’s also looking at it in a whole new perspective, which we do at Friday PM. If we’re looking at design and technology, that is actually the space that I operate in, in my daily life. I think there’s two sides to this. One is, we need to understand, on a neurological level, almost down to an emotional level, how design impacts us when we work. We need to understand how colors affect our emotions, how smells affect our emotion. We need to understand, what mode of light do I need to be in for different work types? As I talked about before, I need to have the right space for the right work mode. That work mode needs to be designed to that specific situation I am in. I think we forget the importance of emotional states. I’ve said this to others in the past, I think most of us know how important emotions are. During the day we get happy, we get sad, we get frustrated. We go all over the spectrum during the day. I think a big piece here is understanding what is emotion. Emotion in its word is e-motion. It is energy in motion, and we can control that energy by understanding how design impacts us when we are in a physical state. Jeff: I think one of the things we learned through the pandemic is that if we stay in one place for very long, in that same position, it’s actually not healthy. A lot of studies done about getting up, moving about, actually, the commute was actually healthy for our brains. 
To wind up, wind down, maybe it depends on the commute, but getting up, working for a little bit, going to the coffee shop. Maybe commuting into work, going to work, going to a meeting, going to lunch. There’s a lot of movement there, and that actually plays into our emotion and wellbeing, the ability to reset. It’s a balance where we want to connect, but we also need to disconnect. It seems like we really accelerated our learning of what’s healthy for humans, and how any extreme can have unintended consequences. Going back to this new hybrid workplace that we see a lot of companies adopting, how do we see that changing and impacting workplaces today? Again, we’re going to talk even deeper about the future, but what are we seeing companies do today to encourage this new hybrid model that seems like most companies are adopting?

Chris: I would say that one of the biggest things that’s coming into play here is recruiting and retention. Like, as certain companies set the standard for what they’re going to allow, or for that matter, require from their employees, it’s a trickle-down effect with everybody. The employee is gaining more power than they’ve ever had before in dictating when and where they do work. That said, a couple of things that we’re really trying to focus on at Hughes Marino, and encourage the companies that we work with to take into consideration, is that it’s not just one size fits all. We’re a company that’s hybrid. We work from the office two days a week, and we work from home two days a week. Think about which individual employees or teams benefit the most from working in the office or from working from home. An example of that would be, most salespeople probably thrive off the energy of other salespeople around them. They’re making phone calls, they’re digging up new business. I know that’s how I operate in the space that I’m in. A lot of what I do is sales, selling our product and selling the service that we can provide for our clients. A heads-down engineer who’s coding all day probably doesn’t need to be in the office as much and doesn’t benefit as much from that. Then the other thing that we’re focusing on is, as opposed to doing your hybrid model as alphabetical or whatever it is, Monday, Wednesday, Friday, trying to find at least one day a week, or maybe one day every other week, where you’re bringing the entire team together. If you don’t have that cross-collaboration, just from a culture standpoint, and from a wellness standpoint, you lose being able to see people that you were used to seeing, that you might have a really good relationship with, but otherwise don’t interact with outside of lunch or the water cooler because they’re on a different team than yours. There’s a lot of value in that. It’s really hard to grow and expand your culture virtually, and so, bringing the entire team together once every two weeks, we found, has been really important to grow and enhance that in this hybrid work environment.

Jeff: Really, the team makeup, and then some of the personal makeup of like how you work, what you do, what your company does, what your division does, could tie into your hybrid work schedule. As we think about the space itself, how are we seeing spaces change? You had mentioned some of the amenities drawing people back, but what’s actually changing inside?

Chris: I would say that this hybrid work environment is requiring companies to spread out a little bit more in their space. Over the last 20 years, this technology has been advancing.
You’ve seen spaces get more and more dense over time, with this bench seating where people are just sitting right next to each other. Now you’re having to provide more space per employee, not only for wellness, but also to just encourage this kind of hybrid work environment. From a health standpoint, this hot desking is a hot topic right now. I think it’s here to stay for a while, but there certainly come some health concerns with that. What sort of technology solutions are there that can help with this? I was on the phone with a furniture vendor just recently, and they were talking about how they have these modular furniture systems that are all equipped with a UV drawer, where at the end of the day, you put all your stuff in there. For 30 seconds, it sanitizes everything. You pull it out, and you leave. When you get there in the morning, you put all your stuff in there, and 30 seconds later, it’s completely sanitized. Those are the sorts of things that have really started to change. Most of that can be accomplished through furniture solutions, believe it or not, unless you were already just a really heavily built-out private office type environment going into this whole thing.

Jordan: Yes. I feel like the key things are really just the work that you have to enable, which is, when we look at indoor spaces, our company has shifted entirely into WeWork spaces for the most part. Pods are definitely a necessary good-and-evil thing to have in order to be able to conduct your meetings. Then the other part is giving employees, thinking beyond just the space itself, the technology package for them to be able to operate. Things like noise-canceling headphones as a standard, beyond just giving a laptop, are something that a lot of folks are thinking about in terms of, what’s the onboarding package from an IT perspective? It depends on the employee at the end of the day. I do see a lot of younger employees crave it. Like if you look at the Mengs of the world, at least here in SF, the SF offices, they’re mostly a younger demographic between the ages of 23 to 35 that are really coming into the office a lot more than people who tend to have families.

Jeff: Let’s shift to focus a little bit more on the future. I appreciate all the insights so far on the present day, some of the trends we’re seeing. We can’t ignore how big the pandemic has been for accelerating so many shifts. Shifts that could have taken 10 years to accomplish were accelerated in a year or 2. Some of the future we’re seeing now because we accelerated it. As we think forward, like, 10 years to 20 years from now, what are some of the things you guys see in the future?

Jordan: I think the thing that I’m most excited about is how we start crafting, to your earlier point, every building as a node in a larger system, right? Whether the urban system, how it connects to the suburban system, to more rural systems. When you look at city planning and building planning, I think the key thing we need to keep in mind is more of a holistic approach to the human experience, and, what type of societies are we creating? Three things come to mind for me. The first is health. Health and wellness goes beyond sterilization and emergency events. I’m thinking about just encouraging people to have a healthier lifestyle, to Chris’s point, to move and be active, to be able to mix with other populations.
The second thing that comes to mind is accessibility, to ensure that you’re building a city that is accessible for all, given the high rates of disability that I think people don’t expect Americans to have as a percentage of our population. I think that will only increase as our aging population gets older. Then the third is equity, and that’s both economic and social justice. That’s education, and really thinking about what type of equitable outcomes we are creating by laying new rails of infrastructure, if you will, and new methods of transportation, such as micro-mobility, and looking at the patterns of movement and mixing of populations, and the socio-economic development that happens because of that.

Jeff: Chris, what are some of your thoughts on the future? Help us see into that a little bit.

Chris: It’s a great question and thing to think about. One thing that I’d like to start by saying, though, is 10 years to 15 years, from an office building perspective, is not a huge amount of time. Why I say that is, there’s companies like Amazon, Facebook, Microsoft, that are doubling down on office space, and they’re building brand new buildings today. The leases that they’re signing in these buildings are 10 years to 15 years. Outside of making some changes to the inside environment maybe a few years into their tenancy, there’s not going to be a ton of huge major changes to the building itself. That said, what I do think you will see is a greater importance of touchless everything. What does that mean? We’ve made the transition from the archaic elevators to the destination elevators, but now it needs to be that the elevator just automatically reads the card in your wallet when you walk near it, and it knows what floor you’re supposed to go to and calls the elevator. You don’t ever have to touch or scan anything. Technology that takes your temperature as you walk through the front door, things of that nature. Then something that I’m actually curious to get Jordan’s thoughts on too, that I’ve been thinking about a lot recently, is the disruption of autonomous vehicles and parking garages. Many of these office buildings have these massive parking garages built for the infrastructure of each employee having their own vehicle and driving to work every day. There’s this interesting discussion being had that when we go to a mostly or completely autonomous vehicle society, no one will have their own car that will take them to work and just park there and sit there all day. It will go out and be doing things for you while you’re at work, and so, does the need for the parking garage completely disappear? If so, what are some new uses for that space? Does it become a last-mile warehouse, low bay warehouse, for the likes of the Amazons of the world? Does it become data centers as everything continues to go and stay in the cloud? I think when you look at the office building over the course of the next 10 years to 15 years, certainly, the way that people interact with it once they go in the front doors will be different and will continue to accelerate through technology. The look and the feel of the office building, I mean, the buildings that are here today are going to be here in 10 years or 15 years. The same companies, for that matter, for the most part, are going to be occupying that space. I know I asked a question in the middle there to Jordan when I was making that comment about autonomous vehicles, but those are some of my thoughts.
I’m curious to get your guys’ insight into that whole autonomous driving and transportation and technology side of things.

Jordan: Yes, it’s a really great point, Chris, that you made, where really, the next 10 to 20 years are more or less fixed into the roadmap, if you will. To your point on the autonomy side, look, my city, we had a very big autonomous vehicle pilot with Mercedes. That then transformed into autonomous delivery systems, specifically delivery robots, during the pandemic because of safety reasons. I think you’re seeing broadly, in the autonomy sector right now, a struggle to achieve the level 4 goals that everybody thought they would achieve. To your point on the parking lots, shared mobility or mobility on demand is one thing. I’m actually more interested in, and you’re seeing this already, the existing monetization of unused parking lots and garages by companies like Reef, which raised significant venture funding to do so. Looking further, something that I was really passionate about in the city and pushing forward was electric vertical take-off and landing, and other urban air mobility opportunities. When you think about really recreating the networks and not having to lay physical rails, like railroad tracks, to connect entire regions, I think eVTOL is going to have a huge disrupting factor. Guess what? These platforms need to land somewhere, and so, the vertiports, I think, are going to be a fantastic opportunity for some of these garages that have access to qualified airspace. That’s one example that I was really passionate about and that we’re pushing forward. The planning of those vertiports, once again, you have to take into consideration that it is an economic hub. That is a transportation hub, and there are factors in there, when it comes to equity and inclusion, that we need to take into consideration. Actually, if you’re okay with that, Jeff, 50 years out, I would love to see all the stuff in material sciences that I see in the national security and defense community when it comes to self-healing properties of materials, things that are self-healing, from streets to buildings. Then new experiences with, obviously, the Metaverse, which is huge if we think about mixed reality spaces. Then I think the last part is biophilic environments, where we have natural organic properties built into our buildings that are carbon positive in terms of the overall experience, but that also impact our mental wellbeing by mixing nature back into the highly inorganic structures that we’ve created since we’ve modernized as a society.

Jeff: Let me comment on a few of these amazing insights because it’s awesome. First, thinking about the parking garage, there’s less cars, there’s also autonomous cars in the future. How do we think about the parking garage and all the different use cases that the parking garage could entail? That’s a ton of space underneath buildings, right? Mine would have a skate park, and a bike park, and an indoor ski park if I’m in Dubai, which I experienced a couple of months ago. Then thinking about the top of the building, the vertical landing pads for these electric vertical take-off and landing aircraft, people think that’s really futuristic. We think about the Jetsons, but the reality is there’s billions of dollars there. There’s 20 companies working on that right now. Over 1,500 aircraft have already been ordered by 9 airlines.
They’re saying, “We need to be in this space because we know it’s huge in the future.” As we think about the buildings being, “Hey, if you’re going to get in one of these, you need to go somewhere,” so, where? You need a place to land. Right? What we’ve also seen through the pandemic is people moving further away, so, how do we increase some of the mobility? It seems like this is a place that could really take off because of the dollars, because of the technology, and because of the pain that comes from commuting on the ground. Obviously, it’s a massively complex thing, and there are a lot of players at the macro-level and the micro-level working on this. As we think about the building being a landing pad, I think that’s really exciting. Also thinking about underneath the building and all the parking optionality that could happen. I liked your point also about renewables, and I think that seems to be an important trend. These buildings are huge. They’re like mini-cities. They have their full ecosystems inside. Modern buildings often are mixed-use, have people living, have restaurants, and have different businesses. Thinking about the renewable energy aspect of buildings: the amount of water that a building can collect to serve its own needs with that much surface area, or the amount of energy that that building can collect if it had solar windows. I’m seeing a lot of posts about that. That’s really exciting to think about for our buildings. They are such big pools of energy, right? You think about just the human waste that comes out of a building in a single day; it’s mind-boggling if you look at those metrics for New York as an example. What other things are we seeing on that end? Chris, I’m curious if you have any insights into buildings becoming more renewable sources, any trends related to that?

Chris: It’s still something that’s a little bit in its infancy. This whole idea of outside-in and inside-out, there are components to that where you’re having more greenery and bringing more trees and different things like that into the ecosystem that otherwise was just a metal and glass shell over the course of the last 20 years or 30 years. There’s different architectural features to it. The tops of the buildings, as opposed to just being an area for mechanical and electrical and different things like that, are now being exposed. You are seeing either wind turbines or solar panels put up there, as well as amenity spaces for employees to go outside at the top of the building and have a place to check out a little bit. It’s still a little bit early on, and we’re not seeing it on a massive scale where someone has come out and done a building that has an entire solar panel exoskeleton, so to speak. Components of that are being built into most of the new high rise and mid-rise buildings that are being built this year and moving forward.

Jeff: Jordan, I’m curious, on your end, having traveled a lot around Asia and around the world, what are some of the interesting things you’re seeing from a building design perspective? Obviously, you guys mentioned that, “Hey, a lot of the buildings are here. They have long lives. Leases have been signed for 10 years or 15 years,” and so things are changing around those structures. There’s also a lot of new buildings being created at the same time as we think about the future. I was recently in Dubai and Abu Dhabi and saw a lot of interesting buildings.
Buildings as art, as an example. As we think about our evolution as humans, Maslow’s Hierarchy of Needs, you always go towards art and creativity at the top, but buildings actually are art. Stacked buildings, picture frame buildings, a building that looks like a sail, right? Lots of interesting things happening in that part of the world where there’s unlimited space, unlimited capital, and inexpensive labor. I’m curious, having traveled around Asia quite a bit, Jordan, what are you seeing? As far as things that are being created, what innovative things have you seen?

Jordan: Obviously, it’s funny that you mentioned that. As much as I want to say it’s a design thing that I’ve seen or a technology thing that I’ve seen, I think the idea that keeps coming to mind when you asked that question was the idea of who actually did the hard work to design the building, and who did the hard work to build the building, and what did they get out of it? The people running the building, what did they get out of it? The most interesting thing that I’m seeing right now are the discussions around applying Web 3.0 technologies. Specifically, thinking about decentralized autonomous organizations and thinking about recreating the economics for all the other people that could benefit from the upside of these buildings. When you think about a luxury condo being built, who are the people that benefit mostly? It’s probably the investors, the owners, and the people who end up buying it and then sell it five years to six years later for another big profit. Right? What about the builder? What about the immigrant builder, or what about your door person? There’s just a lot of movement right now that I’m seeing, where developers are also asking that tough question of, “Hey, how do I incentivize my people who are building this, and the contractors building this, to actually meet those deadlines in an appropriate manner and create economic alignment?” There’s always a principal-agent problem in everything you do when you contract out. I think there’s a lot of opportunity in the crypto space to be able to solve for some of those economic alignments. Actually, to put the opportunity back into the very hands of the people who have built it, versus only the investors and those who can afford to live there benefiting from it, economically speaking.

Jeff: Yes. No, the very creation of the building is an economic aspect of buildings. The trend with decentralization, which we’re seeing around the world in different ways, how does that affect buildings, even the creation itself and the economics of that? You could create some interesting alignment, especially for immigrant laborers, that could solve a lot of problems.

Jordan: The other is architects, right? How many junior architects work at these major architecture firms, where it’s the partners at the firms that really benefit? Everybody else with a PhD or an architecture license, they’re making like $70,000, $80,000 with very well-established engineering and architecture degrees from very well-known schools. It’s shocking.

Chris: Yes. I love this part of the conversation, and I think it brings up an even broader question of just where or how is the intersection between blockchain, cryptocurrency, and real estate in general? Going back to some of the initial test cases in Cook County, Illinois, where they were trying to put all of the different parcel data onto the blockchain.
I even think about what I do on a day-to-day basis when I get a lease signed for a client. Most landlords are still requiring three hard copies, single-sided paper copies, of a lease that is 100-plus pages long, where, in the back of my mind, I’m thinking, “There’s a solution for this with blockchain.” Not only from the start of the negotiations of the different lease terms, but then all the way through to the execution of it. Then, to Jordan’s point, you’re already starting to see the ability to invest in smaller percentages of real estate, or things of that nature, because of cryptocurrency and blockchain. Then what Jordan is talking about is taking it even one step further and utilizing that technology to create a more equal playing field for the contributions that various people made to these larger projects. It’s something that’s super fascinating. It doesn’t necessarily directly apply just to the office building, but I think just real estate in general, and how technology, specifically decentralization, blockchain, and cryptocurrency, can disrupt that, and for that matter, improve that process for pretty much everybody.

Jordan: It’s about creating alignment, which is, what is the goal of everybody working together here? It is to hit the deadline of, “Hey, at this point this building is ready for sale.” Then you see so many condos, and I think during the pandemic, it was very obvious how many condos went into significant delay and had to restructure their capital. However they financed the deal to be able to get it done is fascinating. How do we create that alignment at all levels of the organization and among those who have contributed to this building going from the ground up?

Jeff: It’s fascinating. Decentralization is a general theme. The building is very much a centralized place for people that come and connect in the node of this broader network of work. I didn’t know how decentralization played into the office building, so it’s cool to hear your insights about how it could play just into the creation or the contracting aspect of how things get done.

Morten: I am a big believer that the world will be more decentralized, and that, at some point, the pendulum will swing the other way from the urbanization that we see today. We see it already now. I see it in my network: people that had lived in London, or in Copenhagen, or in Shanghai, or in New York for a while, they start to re-address the way they live. They might move a little bit out of the city and go into the city for work, or some of them have bought vacation houses away from the city, so they spend Thursday, Friday, Saturday, Sunday there. They’re still working, but they have multiple locations. So, the impact for the office building, I think, is twofold. We will, as human beings, take a stand on the life that we are living today. Then on the other side, I think technology will have a massive influence on the way office buildings look today. One simple fact: today, office buildings are leased per company. You might go to a coworking space, but the majority of office spaces are leased by companies. That means one company, one floor, or one company, three floors. You are not in an office building where you are mixing work points around the office, so why shouldn’t marketing from Company A and Company C have the ability to stick together? Because they can actually learn from each other. They’re not competing. It’s not competitive for companies, but they have a massive possibility to collaborate in the office.
I think mixed-use buildings will be a big piece, and I think we have fantastic real estate in the city that is only occupied by corporates. I walked through London the other day on my travel here, at 9:00 in the evening. It’s empty, it’s deserted areas, and I feel so bad about this because it’s not utilized in the right way. I think mixed-use will be a big piece for office buildings in the future as well.

Jeff: As we think about technology itself, it seems to be kind of a force in and of itself. It has a life of its own a little bit, right? It just keeps rolling, and sometimes, humans are trying to catch up. We’re trying to catch up with what’s coming because there’s economics to creating new technology. As we think about the future and the human experience, how can we be better designers of the places that we work? You guys had mentioned a lot of topics so far, but as we think about technology itself, Jordan, I’m interested in some of your thoughts, because SoftBank is bringing a lot of robots to support this. Does that compete with the human experience? How do humans and robots work together? These things are going to happen no matter what. Without knowing more, or thinking about it more, we can’t design with intent. Interested in some of your thoughts on that.

Jordan: With physical robots, and even with robotic automation when it comes to software, I think the goal is always to augment and empower. I think it’s a really important goal for us to have in mind. Going back to my days before I became an officer in the army, just looking at our vacuuming robots, whether it was vacuuming, sweeping, or polishing floors in the barracks, it’s a terrible job to do, along with cleaning the latrines. There’s the detail, we call it detail. That’s the detail nobody really wanted to do, but you had to get it done, and so, you would volunteer for it. There is just so much opportunity when we look at what people actually do, and reflecting on the purpose and dignity of work, I think, is quite interesting. We’re seeing a lot of those trends happen. I think at the same time, you match that with the great resignation data that you’re seeing. I think there’s a really interesting shift to think about how we can rethink the traditional tasks that are being done, and also give people the opportunity and the space to say, “Look, maybe I can put resources towards upskilling and enable you to go from cleaning toilets to being a robot operator,” and have these additional toolkits available. I think there’s a host of services that haven’t even begun to fully emerge when we think about the robo-economy that needs to happen. I’m really excited for what happens in the next, I would say, even 10 years.

Jeff: I really like the notion of augmenting and integrating versus replacing. I think in studying the computer and the advent of the computer, we feared that computers would replace people. The reality is they did. They also just changed how we work and what we ended up doing, with a lot more knowledge work. Like, four of five people being in the knowledge workspace versus being out in the fields. As we think about this next, the fourth industrial revolution that we’re hearing about, that’s trending at the global level, as we think about the future, it’s definitely the dirty, dull, dangerous, repetitive work. It’s all the jobs we’re not able to find people for right now. It’s all these jobs where we can’t find cleaners for buildings, we can’t find workers that are coming into the restaurants to do some of these jobs.
Construction has had this problem for years. It was a national crisis, but we’re now seeing this fold over into all these other industries. It seems to be coming back to the nature of work. How do we encourage the space for humans to do their best work, and also think about where robots come in, that can work 24/7 and take over some of those things that no one really likes? It still requires retraining. We put this technology out there, and yet, humans still have to catch up and retrain. There’s a lot of change management that happens. How we design for that and think about those that get left behind, I think, is a really important aspect of our future responsibility.

Jordan: I use Microsoft Excel. At the time when Microsoft Excel first came out, it was, “Oh, it’s going to replace so many people that were doing entry and record-keeping.” Instead, what did it do? It ended up creating a ton of data entry people. They went from physical records to the computer, to the PC. The second thing it created was data analysts. I felt, if anything, it created more opportunities. I think the importance for us here, especially in this day and age, coming out of a pandemic, is just really understanding, how do I bridge those opportunities? How do I be deliberate in planning how to bridge people’s transition into that? There’s just obviously a ton in the ed-tech space that has been focused on upskilling. Like, Workera.ai is a fantastic company doing that, to where it’s happening, and the tools are free. They’re all openly available content with certifications that you can enroll into. I’m really excited for the additional jobs we create.

Chris: I agree with both of you, but I guess let’s just play devil’s advocate here for a second. We’ve thrown out some really good examples of thinking certain things were going to replace people, and then they enhanced things. What about the truck drivers of the world? That’s not something where it’s, okay, now the truck drives itself, and the truck driver is necessarily going to have a job that’s directly related to somehow managing the trucking fleet. I think I read something recently that said there’s 10 million people employed whose main job is to drive, whether that’s taxi, Uber, truck drivers, or what have you. I think that’s an interesting concept. What it makes me think about is just the overarching macroeconomic policies that are going to need to be well thought out to plan for this. I love the idea of freeing up the people that are doing the jobs that no one wants to do anyway. Now, allowing them to do something that they’re more passionate about, or paying people more to do the at-home care that we’ve found is so necessary, and all the different stay-at-home moms that don’t make any money, but are a valuable part of our economy. Is that UBI, or what macroeconomic changes do we need to make in order to make the transition to this more robotic-focused environment possible, and to allow, ultimately, the humans to thrive alongside some of these other technologies that will be replacing certain jobs and skills?

Jordan: Yes. I think a lot of it is going back to the basics of, what are some foundational things that we still need to fix? When it comes to technology, the first thing that comes to mind, for me at least, is the digital divide. That really lies in three areas. One, having basic access to connectivity at the broadband level.
Two, being able to have a device that is fitting of your daily needs and professional needs. Three, having digital literacy so that you can operate safely and securely online, and actually understand the opportunities that are out there, but also the risks. Something that we worked on quite a bit when I was chief innovation officer was trying to bridge that digital divide. We launched several initiatives, ranging from building out our own community Wi-Fi programs to pushing out mobile Wi-Fi hotspots. Most importantly, I think the coolest thing I’m proud of is partnering with a company called Helium, which is a Web 3.0 company backed by Khosla Ventures, where we essentially mine helium tokens and push out their decentralized wireless networks over LoRaWAN. It’s a LongFi network, but eventually it will lead to 5G as well. We simultaneously take those revenues generated from mining crypto and pay for low-income household internet plans. Chris, to your point of a UBI, that’s very targeted. It’s what I call tech-enabled UBI, where we rethought the government business model of generating revenues through emerging technology for public benefit. I hope more people across different stakeholders, government, corporations, as well as just nonprofits and self-organizing individuals who take initiative, can access resources, garner support, and be able to execute on fixing our foundations, because we really need to fix our foundations.

Jeff: I think if we think about the retraining that computers created for our economy and our people, it was a massive, massive shift, right? If we think about the last 15 years and what the digital evolution has done, it’s created a massive, massive shift, for better, for worse. I think it happened so fast that we weren’t aware of the unintended consequences, essentially. Now that we’ve learned, and we’re seeing the pros and the cons of connecting and the importance of disconnecting, and better understand our own brains and our emotions, I think it’s the opportunity to design for the future. As we think about robots, I believe these are just machines that have some more intelligence. They can be smarter, but machines aren’t new to humans. It’s just that the AI actually being truly smart, versus just doing logical things and repetitive things automatically, is the new wave. I think it begets the importance of design. The very things we’re talking about, and the very things hopefully our world leaders are talking about in these world economic forums. Certainly, the fourth industrial revolution covers a lot of these topics. If you look at the political messages that are happening around the world, they’re echoing some of these same things, these same messages. It’s definitely on our minds. I don’t know if we have all the solutions yet, but hopefully, these sorts of conversations help us all individually, as company owners, or individuals, as world leaders, to think about the things that are changing around us, and how they impact us, and how we can prepare for the future. With that, I want to shift a little bit to some advice from you guys. Chris, you’re an ultramarathoner, which is super impressive. Jordan, you have an impressive background around the world, also being in the army reserves and having done all these tours. All of those things require some mental toughness. We’ve been really into this topic of mental wellness as we think about the future of work.
I’m interested in some of your guys’ thoughts, just on a more personal note, for our audience. What advice do you have for mental wellness and mental toughness? Chris, start with you.

Chris: I’ll start out with a cliche quote, and it’s, what? “Success starts when you step outside your comfort zone,” or something like that. The other thing that I’ll say is it’s never something that you fully achieve. It’s an ongoing process. I think your process and your goals have to be greater than your feelings, because if you don’t have processes in place, and you don’t have goals, your feelings are going to make up excuses for why you’re going to just sit there and keep watching the TV, as opposed to going out and doing something that’s going to benefit you from a wellness and health standpoint.

Morten: This links to how workplace transformation could draw employees back into the office: if the office accommodates the tasks that I need to do. I can’t do all my work at home. I think we also need to educate people on how we operate as human beings. It is not healthy for you to get up and sit in your sweatpants in your basement office, or at your dinner table, for 6 hours, 7 hours, 8 hours, 9 hours, 10 hours to do work in the same position. We need to understand why movement is important. We also need to understand the neuropsychology behind going from one task to the other. You need to have this in-between time. I have in-between time when I do work in the morning. I have two, three calls at my house. I then go to the coffee shop. I have in-between time going to the coffee shop. I get into a new work mode. I think about what work tasks I’m doing. I go to the coffee shop. I do my admin work. I leave the coffee shop to go to the office. I now have in-between time again. I think we need to educate ourselves and also make sure that managers and leaders understand the importance of educating people on how to work, because we forgot how to work. We just sit in the same office all day. We might use the same two conference rooms. We go to the same coffee machine because that’s the coffee machine we like, and we talk to the same people on our floor all day. That’s not the way we were supposed to work.

Chris: My journey to become an ultramarathoner is unique in that I grew up playing all different sports my entire life and played football in college, but running 3 miles or 4 miles around the neighborhood, man, that felt like an accomplishment for me until about three or four years ago. Through a variety of different events, I got into the ultramarathoning space. I’d never even done a half marathon before, but I said, “Hey, what the heck? Let’s jump in and let’s do this 50K and see how it goes.” That was a really interesting experience for me. I learned a lot about myself. I learned a lot about time management, juggling two kids at the time, and a full-time job, and a wife. Whether it’s that, or whether it’s other things that you’re passionate about, you’ve got to make time for yourself. What I love about running, specifically ultra running, or just trail running, most of the races I do are trails, and that’s where most of my training runs happen, is it almost becomes meditative. You get to get outside, you’re interacting with nature. The scenery is always changing, even if it’s the same path, because of weather conditions or things of that nature. As opposed to putting in music, I put in a podcast or an audiobook. I feel like it’s a triple whammy for me, so to speak. It’s meditative and it’s good for my mind.
It’s physical exercise, and it’s also just that I learn something through listening to these different podcasts and audiobooks. The last thing I’ll say is it’s much more mental than it is physical, and I love a mental challenge. I start every day with an ice-cold shower. I think if you can get up and do something hard that challenges yourself every day, then you’ve gotten off to a pretty good start. Similar to, I’m sure Jordan can share, just the act of making your bed; that’s probably something that’s been instilled in him from his time in the reserves and in the army. There are studies that have shown that if you start out your day with one small win or one thing that’s hard, the rest of your day falls into place. It’s been a fun journey. I have a marathon that I’ve signed up for in a few weeks here. Then I’m planning on doing at least a 50-mile race before the end of the year, so it should be a lot of fun.

Jeff: Thanks for those tips. Jordan, your thoughts on mental wellness, mental toughness?

Jordan: My approach to fitness, at least, a lot of it is, in part, maintenance. Why I say maintenance is because what I’m trying to do is just make sure that if I’m ever called up again, for whatever reason possible, for whatever conflict, I can go out, I can deliver, I can lead. Most importantly, my body, despite the stresses, allows me to still have the mental acuity and emotional stability to make decisions that have impact on other people’s lives. Both the soldiers you lead, but also the civilians on the battlefield, to your enemy and other non-combatants that might be floating around. That’s my approach in terms of my overall philosophy for things. Look, when it comes to managing stress, especially in the pandemic, I’ve had the opportunity, as a diplomat back in the day, to have gone through survival training and having very interesting experiences with water. I’ll leave it at that. I think the one thing it taught me was knowing when to get out of your head and when to also just give yourself some space and distance internally when the situation is beyond your control. Usually, that’s when trauma happens. It’s when you have absolutely no control over the situation and over how the events are going to unfold. You’re on it for the ride, and you know it’s not the direction you want to be in. That is a terrible place to be. That’s where a lot of post-traumatic stress happens, combat or non-combat related, in people’s lives. It’s really important for you to be able to step out of that, not dwell on some things. To be progressive in fixing and addressing those things, but at the same time, to be kinder to yourself too afterwards. If you’re not kind to yourself, I hate to say it, the world is not going to be kind to you. That needs to happen first in order to condition and then signal to the world, “Hey, be kind to me as well.” If the expectation is not inherently fulfilled internally, it’s hard to have the world deliver on that contract as well.

Jeff: Thanks for these insights. They’re deep, and they’re actually related to how we connect and disconnect. A lot of what we learned these last couple of years, I think, hopefully shapes our role in shaping the future. Thinking about the buildings we work in, the exercise that we do, or the routines that we have. They give us fortitude and give us peace of mind. Thanks again for being here, Jordan. I loved having you. Chris, loved having you. It was a fun conversation. I think we learned a lot together.
Jordan: Thank you. Thank you, Jeff. Chris, it’s great to meet you.

Chris: Likewise, Jordan. Jeff, appreciate you putting this together. Fun conversation, guys.

Jeff: The Future Of podcast is brought to you by Fresh Consulting. To find out more about how we pair design and technology together to shape the future, visit us at freshconsulting.com. Make sure to search for The Future Of in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found. Make sure to click Subscribe so you don’t miss any of our future episodes. On behalf of our team here at Fresh, thank you for listening.
# How do I convert an ARDR asset to real tokens?

Could you please elaborate what you mean by "ARDR asset" and "real tokens"?

Hi, I continue this... In the NXT wallet, under Assets > My Assets, there are 2 items: ARDR and Lucky. For them there are 2 actions: Transfer and Delete shares. When transferring ARDR, a window is opened. The recipient address starts with "NXT". If that is changed, a warning is shown: address is malformed. It looks like the assets can be transferred only to addresses starting with NXT-... but e.g. the exchange offers an address ARDOR-N7..., so the assets can't be sent to the exchange from this wallet. Are those old assets now useless, or should the asset be sent to the exchange's NXT address?

Hi, if they are assets, you would need to exchange them for NXT/Ardor, then send that to an exchange that accepts it. Check the coinmarketcap markets for the coin. If you have NXT in your NXT wallet, try getting the Ardor wallet from Jelurida and use the same login. You might have Ardor and Ignis tokens from when the change happened. To sell your assets in either, there should be an option to trade on the distributed marketplace, or you could try on this forum. It would cost you the NXT/Ardor fee to make the sale, which I think is 1 and 0.01 respectively. More information would be good to further understand your situation. Hope this helps...

Thanks, with this procedure I managed to send ARDR to the exchange:

• Install ardor-client-2.3.3.exe (to C:\Program Files\Ardor)
• Run it (note: as Windows Admin) => it starts the "ARDOR-app-browser" at http://localhost:27876/index.html
• Log in with the NXT address (ARDOR- prefix with the same letters as in the NXT address) => it is possible to Send ARDR
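For anyone who prefers scripting this instead of clicking through the wallet UI, NXT-family clients also expose an HTTP API on the same port the browser UI uses. The following is a minimal, illustrative sketch only: it assumes a local node exposing the NXT-style `/nxt` endpoint with the publicly documented `transferAsset` request type. The port, asset ID, and recipient below are placeholders, so verify everything against your own node's API documentation before sending anything real.

```python
# Minimal sketch: transfer an asset via a local NXT-family node's HTTP API.
# Assumptions: a node running locally (the Ardor client in this thread
# listens on 27876; classic NXT nodes default to 7876), and the standard
# "transferAsset" request type. All IDs below are placeholders.
import requests

NODE_URL = "http://localhost:7876/nxt"  # assumption: default NXT port

params = {
    "requestType": "transferAsset",
    "recipient": "NXT-XXXX-XXXX-XXXX-XXXXX",  # placeholder recipient
    "asset": "1234567890123456789",           # placeholder asset ID
    "quantityQNT": "100",                     # amount in QNT units
    "feeNQT": str(10**8),                     # 1 NXT fee (10^8 NQT)
    "deadline": "60",                         # minutes until expiry
    "secretPhrase": "your passphrase here",   # never share or commit this
}

response = requests.post(NODE_URL, data=params)
print(response.json())  # look for a "transaction" id, or an error message
```

If the node rejects the request, the JSON response usually explains why (malformed address, insufficient fee, and so on), which mirrors the warnings the wallet UI shows.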
# Solutions

Mole fraction (x)

The mole fractions of A and B will be as follows, if the numbers of moles of A and B are nA and nB respectively:

$$x_{A} = \frac {n_{A}} {n_{A} + n_{B}} \text{ , and } x_{B} = \frac {n_{B}}{n_{A} + n_{B}}$$

$$x_{A} + x_{B} = 1$$

Molarity (M)

$$\text {Molarity (M) } = \frac { \text{Moles of solute}} { \text{Volume of solution in liters}}$$

Molality (m)

$$\text {Molality (m) } = \frac { \text{Moles of solute}} { \text{Mass of solvent in kilograms}}$$

Parts per million (ppm)

$$\text {ppm} = \frac { \text{Number of parts of the component}} {\text{Total number of parts of all components of the solution}} \times 10^{6}$$

Raoult’s law for a solution of a volatile solute in a volatile solvent

$$p_{A} = p_{A}^{\circ}\, x_{A} \text{ , and } p_{B} = p_{B}^{\circ}\, x_{B}$$

where pA and pB are the partial vapour pressures of components A and B in solution, and p°A and p°B are the vapour pressures of the pure components A and B respectively.

Raoult’s law for a solution of a non-volatile solute in a volatile solvent

$$\frac {p_{A}^{\circ} - p_{A}}{ p_{A}^{\circ}} = ix_{B} = i \frac {n_{B}}{ n_{A}} = i \frac {W_{B} \times M_{A}}{M_{B} \times W_{A}}$$

where xB is the mole fraction of solute, i is the van’t Hoff factor, W and M are the masses and molar masses of solvent A and solute B, and

$$\frac {p_{A}^{\circ} - p_{A}}{ p_{A}^{\circ}} \text{ is the relative lowering of vapour pressure.}$$

Osmotic pressure (π) of a solution

$$\pi V = inRT \text{ or } \pi = iCRT$$

where π = osmotic pressure in bar or atm, V = volume in liters, i = van’t Hoff factor, C = molar concentration in moles per liter, n = number of moles, T = temperature on the Kelvin scale, R = 0.083 L bar mol–1 K–1 or R = 0.0821 L atm mol–1 K–1.
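To make the formulas above concrete, here is a small Python sketch that evaluates them for an illustrative case. The sample numbers (glucose in water) are made up for demonstration and are not from the text.

```python
# Quick numerical sketch of the formulas above.
R_L_ATM = 0.0821  # gas constant in L atm mol^-1 K^-1

def mole_fraction(n_a: float, n_b: float) -> tuple:
    """Return (x_A, x_B); by definition they sum to 1."""
    total = n_a + n_b
    return n_a / total, n_b / total

def molality(moles_solute: float, kg_solvent: float) -> float:
    """Moles of solute per kilogram of solvent."""
    return moles_solute / kg_solvent

def osmotic_pressure(i: float, molar_conc: float, temp_k: float) -> float:
    """pi = iCRT, returned in atm."""
    return i * molar_conc * R_L_ATM * temp_k

# Example: 0.1 mol glucose (non-electrolyte, i = 1) in 1 L of water at 300 K.
# 1 L of water is ~1000 g / 18 g mol^-1, or about 55.5 mol.
x_water, x_glucose = mole_fraction(55.5, 0.1)
print(f"x_glucose = {x_glucose:.4f}")                    # ~0.0018
print(f"molality  = {molality(0.1, 1.0):.2f} mol/kg")    # 0.10 mol/kg
print(f"pi        = {osmotic_pressure(1, 0.1, 300):.2f} atm")  # ~2.46 atm
```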
# Templated growth of oriented layered hybrid perovskites on 3D-like perovskites

## Abstract

The manipulation of crystal orientation away from the thermodynamic equilibrium states is desired in layered hybrid perovskite films to direct charge transport and enhance the performance of perovskite devices. Here we report a templated growth mechanism of layered perovskites from 3D-like perovskites, which can be a general design rule to align layered perovskites along the out-of-plane direction in films made by both spin-coating and a scalable blading process. The method involves suppressing the nucleation of both layered and 3D perovskites inside the perovskite solution using additional ammonium halide salts, which forces film formation to start from the solution surface. The fast drying of solvent at the liquid surface leaves 3D-like perovskites, which surprisingly template the growth of layered perovskites, enabled by the periodic corner-sharing octahedra networks on the surface of the 3D-like perovskites. This discovery provides deep insights into the nucleation behavior of octahedra-array-based perovskite materials, representing a general strategy to manipulate the orientation of layered perovskites.

## Introduction

Layered hybrid perovskites such as Ruddlesden-Popper (RP) perovskites and Dion-Jacobson (DJ) perovskites have attracted tremendous attention due to their superior moisture stability, thermal stability, and suppressed ion migration compared with their 3D counterparts1,2,3,4,5,6. However, layered perovskites are highly electrically anisotropic, because charge transport along the out-of-plane (OP) direction is strongly hindered by the low-conducting organic spacing layers7,8. Therefore, manipulating the orientation of layered perovskites becomes vital due to its significant impact on the power conversion efficiency (PCE) of the resulting solar cells. Nevertheless, the orientation of layered perovskites might not be as desired; it is determined by both the thermodynamics of the material structure and the kinetics of the material formation process. Additionally, in-plane (IP) orientation is common for two-dimensional (2D) materials with electron-rich structure due to the Van der Waals force or/and electrostatic induction between 2D materials and conductive substrates9,10,11,12. For the butylamine (BA) based RP perovskites, the BA-terminated planes have the lowest surface energy, which leads to IP orientation of this layered perovskite on many commonly used substrates to minimize the interface energy of the material system7,13. On the other hand, OP orientation was reported to form in films made by some special formation processes, such as the hot casting method1, even though OP orientation is rarely seen in RP perovskites with low layer number, such as n = 1 (refs. 3,14,15). However, up to now, knowledge about manipulating the plane orientation of layered perovskites is still sparse. The driving force for the conditional OP orientation is unknown. Generally, entropic forces16,17,18 and chemical bonding effects19 can drive the orientation of polygonal crystals or shaped particles in solutions, leading to varied assembling or orientation behaviors (Supplementary Note 1).
However, the role of these driving forces in the crystallization of perovskite materials is less investigated. The nucleation and growth of perovskite crystals in the liquid phase can be rather complex, since it is a ternary (or polynary) system. Due to this complexity, it is essential to establish a model that takes the predominant effects into account and builds a general frame to understand the crystallography of octahedra-array-based perovskites. In this contribution, we systematically investigate the formation process of layered perovskites and clarify a key driving force that dominates the nucleation and directional growth of layered perovskite crystals, which further shows its broad validity in the manipulation of the crystallinity and orientation of different types of layered perovskites fabricated by the spin-coating method and/or the doctor blading method.

### Formation of RP layered perovskites with OP orientation

RP perovskites with the stoichiometric ratio (BA)2(MA)n-1PbnI3n+1 (e.g., average layer number <n> = 4) were the main focus of this study (Fig. 1 and Supplementary Fig. 1). Inspired by the success of the NH4Cl additive in promoting the crystallinity of 3D perovskites20, the NH4Cl additive was recently also employed in RP perovskites to obtain grains with OP orientation21,22. By pre-mixing NH4Cl in the precursors (with a molar ratio of NH4Cl:PbI2 = 0.5), the formation of dense RP perovskite films by a simple spin-coating method was realized. The impact of NH4Cl on crystal orientation was investigated by grazing-incidence wide-angle X-ray scattering (GIWAXS) patterns (Fig. 1a, b). The ring-shaped diffraction pattern in Fig. 1a indicated a random crystalline orientation in the RP perovskite film without NH4Cl. Furthermore, the strong (0 2 0) and (0 4 0) peaks along the qz axis suggested a significant IP orientation. In contrast, the highly concentrated diffraction spots for the RP perovskite film with the NH4Cl additive indicated a much more ordered crystal orientation. The clear exciton absorption peaks for the layered perovskite (Supplementary Fig. 1a), together with the absence of diffraction peaks along the qz axis in the range of 0~10 nm−1 (Fig. 1b), suggest dominating OP orientation (Fig. 1c) in the RP perovskite film with the NH4Cl additive1,23. In this study, OP orientation of RP perovskites is achieved in films on both a planar PEDOT:PSS surface and a meso-porous TiO2 surface (Supplementary Fig. 2). It can be confirmed that the rough TiO2 surface with disordered normal directions has no impact on the NH4Cl-induced OP orientation.

### Mechanism for directional growth of RP layered perovskites

The change of the dominating crystal orientation from IP to OP by the NH4Cl additive offers an excellent platform for in-depth investigation of the nucleation and growth of RP perovskites21,22. We thus look into the underlying mechanism for the OP orientation of RP perovskites. In a previous study, the earlier formation of RP perovskite crystals at the liquid-air interface was reported to induce a downward growth of RP layered perovskites during solution drying, which was explained as the origin of OP grain formation24. However, we observed that the RP crystals formed at the liquid-air interface could also adopt IP orientation. As shown in Fig. 2a, b, for a slowly cooled oversaturated RP precursor solution (<n> = 2) without external disturbance, large RP perovskite single crystals grew horizontally at the liquid-air interface, and were confirmed to have IP orientation (Supplementary Fig. 3).
This IP orientation is reasonable because the low surface energy of the RP single crystal surface terminated with alkyl chains favors IP alignment25. This result suggests that the OP crystal orientation is not directly determined by the preferential formation of RP crystals at the liquid-air interface24, in which region entropic force would also cause IP orientation15,16,17,26. Since the precursor solution is a mixture of PbI2, BAI, and MAI raw materials, the solubility of these three raw materials in DMF follows the trend BAI > MAI > PbI2 (Fig. 2b). A fact we noticed is that PbI6 octahedral colloids prefer to precipitate first from the solution, due to the much lower solubility of PbI2 in DMF (~1.0 M), by forming one-dimensional PbI2-DMF-containing solvate phases (PDS)27,28, which can be observed under an optical microscope (inset of Fig. 2b). The PDS formed in MAI-containing solution can be a mixture of PbI2-DMF and (MA)2(DMF)2PbmI2m+2 (m = 2,3) phases (see Supplementary Fig. 4 and Supplementary Note 2), the latter of which has been reported to be the intermediate phase for the formation of perovskites29. Some in situ GIWAXS studies also proved the presence of solvate phases before perovskite formation27,28. Based on this hint, experiments were carried out to explore whether the preformed PDS impacts the subsequent nucleation and directional growth of RP perovskites during solution thinning. Since observing the nucleation process in situ at the nanometer scale inside the liquid phase is highly challenging, in our study, some PbI2-DMF powders were intentionally dropped on top of the oversaturated RP precursor solution to create an observable PDS phase (Fig. 2c, d). When soaked in oversaturated RP precursor solution, the surface of the PDS powders turns black in a few seconds (Supplementary Fig. 5). The red shift of the absorption onset from ~550 to ~750 nm (Fig. 2e) and the photoluminescence (PL) peak around 750 nm (Fig. 2f) suggest the formation of 3D-like corner-sharing PbI6 octahedra networks with reduced bandgap. We refer to this as 3D-like perovskite, because these corner-sharing PbI6 octahedra networks are less ideal than the 3D octahedra networks in tetragonal MAPbI3 perovskite, due to the presence of absorbed DMF molecules, and are less continuous in structure. These 3D-like perovskites are converted from the double chains of edge-sharing Pb-I based octahedra in PbI2-DMF30, or the triple chains of edge-sharing PbI6 octahedra in MA2(DMF)2Pb3I8 as an intermediate phase29, based on our XRD study shown in Supplementary Fig. 5 and Supplementary Note 3 (the geometric relationship between the edge-sharing octahedra chains and corner-sharing octahedra chains will be discussed below). Interestingly, RP perovskite crystals were found to predominantly grow underneath the 3D-like perovskite-coated PDS and adopt OP orientation (Fig. 2c, d). In another demonstration, a PDS fiber made of PbI2-DMF nanowires was soaked in the oversaturated RP precursor solution (Fig. 2g). The surface again quickly converted to black-colored 3D-like perovskites, followed by the growth of RP perovskite flakes on the surface. Most of the RP crystal flakes have an orientation perpendicular to the PDS fiber surface (Fig. 2g). This result reveals the strong correlation between the presence of 3D-like perovskites and the directional growth of the RP perovskites. As shown in Figs.
2d and g, almost all of the formed RP crystals grow from the 3D-like perovskite-coated PDS when the precursor concentration does not exceed the critical nucleation concentration, indicating that the nucleation of RP perovskites on 3D-like perovskites is energetically favored compared to homogeneous nucleation inside the solution. The growth rate of the directional RP perovskites on the PDS fiber is estimated to be ~1.0 μm s−1 (Supplementary Fig. 6). This high speed enables the formation of RP perovskites within one second in regular perovskite films, which generally have a thickness of two to three hundred nanometers. Replacing the PbI2-DMF phase in Fig. 2d, g with the (MA)2(DMF)2Pb3I8 phase or a PbI2-DMSO based solvate phase can also lead to the same directional growth of RP perovskites (Supplementary Figs. 7 and 8); this is because those solvate phases have similar double/triple chains of edge-sharing PbI6 octahedra that form 3D-like perovskites on their surfaces29. For a further demonstration of the efficient directional growth of RP perovskites from 3D-like perovskites, we intentionally accelerated the solidifying process by directly dropping oversaturated RP precursor solution into chlorobenzene (CB) antisolvent, which resulted in the formation of many micrometer-scale particles (Supplementary Fig. 9 and Supplementary Note 4). These particles were found to be 3D-like perovskite-coated PDS particles with RP perovskite flakes growing from their edges (Fig. 2h, i), as identified by atomic force microscopy-infrared spectroscopy (IR-AFM) and energy dispersive spectroscopy (EDS, Supplementary Fig. 9 and Supplementary Note 4). This experiment further visualized the nucleation and preferred OP growth of RP perovskites triggered by the preformed 3D-like perovskite phase, which is difficult to recognize directly in spin-coated films by the cross-sectional SEM imaging method, because the RP perovskite nucleation process is transient, i.e., the structures related to the nucleation process will be covered by the subsequently formed crystals. Discovering the 3D-like perovskite-triggered directional crystal growth provides a framework to understand the conditional OP orientation of RP perovskites. The obtained layered perovskite films can result from several competing growth modes. The reason the OP orientation growth becomes dominating in Fig. 1 is that the NH4Cl additive suppresses the nucleation of PDS (and hence 3D-like perovskites) inside the solution, so that only the 3D-like perovskites formed at the solution surface due to solvent evaporation can seed the growth of layered perovskites. As illustrated in Fig. 3a, in the case of the RP precursor solution without NH4Cl, too many PDS microcrystals form simultaneously at the liquid-air interface and inside the solution, due to the poor solubility and the large supply of PDS material during solution thinning. The PDS hence grow rapidly in the oversaturated solution and stack randomly. The 3D-like perovskites subsequently formed on the randomly oriented PDS surfaces then cause the growth of RP perovskites with random orientations. Not all of the PDS is necessarily converted into the perovskite phase during solution thinning, depending on the dynamics of DMF evaporation, so PbI2-DMF (and (MA)2(DMF)2PbmI2m+2) can be detected in spin-coated films (Supplementary Fig. 10) until heated at elevated temperatures of 70~100 °C29.
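As a rough sanity check of that sub-second claim (a back-of-the-envelope estimate, assuming a representative 250 nm film and the ~1.0 μm s−1 growth rate quoted above):

$$t \approx \frac{d_{\text{film}}}{v_{\text{growth}}} = \frac{0.25\ \mu\text{m}}{1.0\ \mu\text{m s}^{-1}} = 0.25\ \text{s}$$

which is indeed well within one second for the stated film thicknesses.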
By contrast, when NH4Cl additives are introduced, the precipitation of PDS in solution is strongly suppressed because NH4Cl enhances the solubility of the PbI6 octahedral colloids (Fig. 3b), which we verified experimentally (Fig. 3c and Supplementary Note 5). The much reduced homogeneous nucleation of PDS inside the solution makes precipitation of PDS at the top of the liquid phase dominant, because the evaporation of DMF near the liquid surface is rapid during high-speed spin coating. The preformed PDS microcrystals then trigger overwhelmingly downward growth of RP perovskites. The proposed growth model and the resulting morphologies in Fig. 3a, b are consistent with the measured cross-sectional SEM images of the samples without (Fig. 3d) and with NH4Cl additives (Fig. 3e), respectively. Since the preformed PDS are strongly suppressed by the NH4Cl additives, the residual solvate phases are not necessarily detectable by XRD (Supplementary Fig. 10).

More generally, because of the competition between the downward growth of RP perovskites from 3D-like perovskites on top and the random growth of RP perovskites from 3D-like perovskites in the bulk of the liquid phase, engineering the location and amount of the preformed PDS and 3D-like perovskites is a straightforward method to promote OP orientation of RP perovskites; i.e., methods that reduce the nucleation of PDS inside the solution, such as using hot solutions1,2,23,31, adding DMSO31,32, or using other organic solvents24, promote OP orientation. As a further demonstration, our additional GIWAXS and XRD studies (Supplementary Fig. 11 and Supplementary Note 6) revealed that the OP orientation and high crystallinity of the RP perovskites can be equally achieved in a group of films with excess AX salts (A = NH4+ or MA+; X = Cl−, Br− or I−) as additives. A common feature we demonstrated for precursors with excess AX salts is the suppressed precipitation of PDS in solution, identified by the Tyndall effect in Fig. 3f, g (Supplementary Note 5) and attributed to the enhanced solubility of the PbI6 octahedral colloids in the presence of AX salts (Fig. 3c).

To understand why layered perovskites prefer to grow from 3D-like-perovskite-coated PDS with a preferred orientation, we examined the crystallographic structures and the lattice matching between the two phases. The lattice matching between the facets of the 3D-like perovskite and the layered perovskite appears to play the key role in defining the crystal orientation. For the preformed PDS microcrystals with a one-dimensional structure, lying horizontally with the length direction parallel to the liquid-air interface is energetically preferred (Fig. 4a). Meanwhile, the [100] direction of the double (or triple) chains of edge-sharing Pb-I octahedra lies along the length direction of the PDS crystal (Fig. 4b), with a lattice constant of ~4.53 Å30. After the intercalation of MAI, the edge-sharing Pb-I octahedra chains rotate to form corner-sharing PbI6 octahedra networks (i.e., 3D-like perovskites) with abundant periodic I ions separated by ~6.3 Å along the length direction (see more details in Fig. 4c, Supplementary Fig. 12, Supplementary Note 7 and Supplementary Movie 1).
A similar octahedra-network rotation process has been demonstrated previously33,34, in which layered trigonal PbI2 with its (0 0 1) plane parallel to the substrate is converted into 3D tetragonal MAPbI3 with its (1 1 0) plane parallel to the substrate, analogous to the orientation evolution of the octahedra shown in Fig. 4c. The spacing of I ions along the length direction of the 3D-like perovskite matches well with that of the periodic I ions along the $$[10\bar 1]$$ direction of RP perovskites, 6.32 Å (Fig. 4d). Among these I ions, the chains of I ions located at the exposed corners of the 3D-like perovskite and the RP crystal sheet are reactive, low-coordinated ions, because each of them forms only one Pb-I bond with an adjacent Pb2+ ion. Sharing these low-coordinated I-ion chains between the 3D-like perovskite and the RP perovskite can significantly lower the energy barrier for nucleation and template the alignment of the RP perovskite crystal sheets; we hence term this templated growth. Another fact is that the low-coordinated I-ion chains on RP perovskite sheets along the [1 0 0] or [0 0 1] directions (i.e., at an angle of 45° to the $$[10\bar 1]$$ and [1 0 1] directions) have a different spacing (~4.47 Å, Fig. 4d). As a result, the (1 n 1) planes of RP perovskites are geometrically the most favorable candidate planes to attach to the corner-sharing PbI6 octahedra chains of the 3D-like perovskite for nucleation. Consequently, when a 3D-like perovskite triggers downward growth of RP perovskites during solution thinning, the (1 0 1) and/or (1 1 1) planes of the RP crystal should be parallel to the substrate (Fig. 4e), which explains the widely observed (2 0 2) and (1 1 1) XRD diffraction peaks and the absence of (n 0 0) and (0 0 n) peaks in RP perovskite film samples with OP orientation (Fig. 4f)1,3,23,31,35. The orientation of the RP crystal in Fig. 2h also agrees with the proposed templated growth behavior: the planes of the RP perovskite at 45° to the PDS surface are assigned to the (n 0 0) and (0 0 n) planes, while the (1 0 1) plane, itself at 45° to the (n 0 0) and (0 0 n) planes, is the plane that connects to the 3D-like perovskite. Moreover, based on a similar principle, dominant OP orientation in phenylethylammonium (PEA)-based RP-type and p-phenylenediamine (PPD)-based DJ-type layered perovskites has been achieved in our study by using AX salts as additives (Supplementary Fig. 13 and Supplementary Note 8). The XRD spectra of these OP-oriented layered perovskites are also dominated by (1 1 1) and (2 0 2) diffraction peaks (Fig. 4f), which can be explained by similar templated growth because the lattice constants along the $$[10\bar 1]$$ direction of PEA-based RP-type and PPD-based DJ-type layered perovskites are also ~6.3 Å. This agreement in crystal orientation further suggests the universality of the templated growth mechanism. For clarification, we emphasize that I− ions on edge-sharing PbI6 octahedra are less active in the templated growth of RP perovskites. For example, soaking a pure PbI2 single-crystal plate, with most of its I ions bonded to three adjacent Pb2+ ions, in oversaturated RP precursor solution does not trigger templated growth of RP perovskites under the same conditions (Supplementary Fig. 14), demonstrating the key role of the low-coordinated I ions on corner-sharing PbI6 octahedra chains. On the other hand, MA+ ions have been confirmed to be important in facilitating the templated growth.
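As a quick geometric consistency check of this lattice matching (a back-of-the-envelope calculation using only the spacings quoted above, not an additional measurement): the $$[10\bar 1]$$ direction lies at 45° to [1 0 0] and [0 0 1], so the I-ion spacing along it should be larger by a factor of $$\sqrt{2}$$,

$$d_{[10\bar 1]} \approx \sqrt{2}\times d_{[100]} = \sqrt{2}\times 4.47\ \text{Å} \approx 6.32\ \text{Å},$$

in agreement with the ~6.3 Å periodicity of the low-coordinated I-ion chains on the 3D-like perovskites.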
As mentioned above, it is difficult to achieve OP orientation in RP perovskites with n = 1 (refs 3,14). The loss of OP orientation in BA2PbI4 (n = 1) perovskites is also observed in our study with NH4Cl as the additive (Supplementary Fig. 15). Consistent with this, we found that 2D BA2PbI4 crystal sheets (n = 1) are not stable on the PbI2-DMF surface. As a further demonstration, dipping PbI2-DMF fibers into an oversaturated RP precursor solution with n = 1 (i.e., without MA+ ions) only leads to the formed BA2PbI4 crystal fragments peeling off from the PDS, ending in disordered orientations (Supplementary Fig. 16). Owing to the lack of an MA+-intercalated 3D-like perovskite phase, directional growth of BA2PbI4 crystals from PbI2-DMF is unfavorable, which explains the loss of OP orientation in BA2PbI4 (n = 1) perovskites.

### OP orientation engineering for high performance devices

Understanding the nucleation and directional growth of RP perovskites highlights the importance of engineering the formation of the solvate phase in the early stage of solution thinning, which opens ways to manipulate crystal orientation and film morphology in different fabrication techniques. For spin-coated RP perovskite solar cells (RPSCs), we focused on the p-i-n structure (Fig. 5a). We selected NH4Cl as the primary additive because it yields the highest PCE. The impact of the NH4Cl additive on the crystallinity of the RP films was measured by X-ray diffraction (XRD, Supplementary Fig. 17). The much narrower full width at half maximum (FWHM) of the (1 1 1) peak and the absence of diffraction peaks below 10° further confirm the high degree of crystallinity and the dominant OP orientation induced by the NH4Cl additive1. Accordingly, the PCE of <n> = 4 RP perovskite solar cells was dramatically improved from less than 1% to 13.2% (Fig. 5b) after NH4Cl addition, with the precursor solution and substrate kept at room temperature (RT). For RPSCs with <n> = 5, a high PCE of 14.4% was achieved, which is the highest reported efficiency for BA-based RP perovskite solar cells fabricated by an RT method (Fig. 5b, Supplementary Fig. 18, Supplementary Tables 1, 2 and Supplementary Note 9)1,2,23. The RPSCs fabricated with NH4Cl additives by spin coating are stable during operation at the maximum power output point: Fig. 5c shows that our encapsulated RP perovskite solar cells (<n> = 4) operating at the maximum power point maintain 95% of their initial PCE (12.3%) after 500 h of continuous operation (in air, one sun, 100 mW cm−2).

As another demonstration, solvate-phase engineering was applied to doctor-bladed RP perovskite solar cells (<n> = 4) to trigger OP orientation (Fig. 5d, e). During doctor blading, NH4Cl was used to suppress nucleation in the bulk of the precursor solution. However, the NH4Cl additive alone does not guarantee OP orientation in the RP perovskite film: the volatilization of DMF at RT during doctor blading is much slower than during spin coating, so precipitation of PDS at the liquid surface is not predominant while diffusion of the solution and PDS takes place (Fig. 5f). According to the understanding provided by this study, facilitating the formation of 3D-like perovskites on top of the precursor solution is important for OP orientation. Based on this design idea, a hot air flow was employed to accelerate DMF volatilization at the liquid surface while keeping the solution and substrate unheated, which quickly forms a narrow oversaturated region at the liquid surface (Fig. 5f).
As a consequence, a dominant OP orientation was achieved in the doctor-bladed RP layered perovskites, as verified by XRD and cross-sectional SEM studies (Fig. 5g, h). This co-treatment with the NH4Cl additive and hot air flow resulted in a vertical carrier mobility of ~0.24 cm2 V−1 s−1, a PL lifetime of ~48 ns, and hence a high PCE of 12.2% (Supplementary Fig. 19, Supplementary Note 10 and Supplementary Table 3). Owing to the solubility differences shown in Fig. 2b, the PbI2-DMF and MAI-PbI2-DMF solvate phases should precipitate preferentially at the top, with a BA-rich phase at the bottom, during spin coating or doctor blading, so that relatively more larger-n RP perovskites form at the top of the resulting film. This contributes to the frequently observed vertical phase separation (e.g., see the PL study of our sample in Supplementary Fig. 20)36,37.

In conclusion, the bonding effect of the low-coordinated I ions on the corner-sharing PbI6 octahedra chains of 3D-like perovskites is identified as a strong driving force that triggers the nucleation of layered perovskites. The lattice matching between the layered perovskites and the corner-sharing PbI6 octahedra chains means that the orientation of the layered perovskites is substantially defined by the 3D-like perovskites preformed in solution. These insights offer general guidance for manipulating crystal nucleation and film morphology in different solution fabrication processes (e.g., doctor blading) by means of solubility engineering and solution-drying engineering. Moreover, the thermodynamically accessible templated growth of layered perovskites can be used to construct heterojunction structures based on low-dimensional perovskite crystals with different orientations, which would open up avenues to perovskite optoelectronic devices with functional nanostructures.

## Methods

### Materials

N,N-dimethylformamide (DMF, 99.8%), chlorobenzene (CB, 99.8%), PbI2 (99.999%), NH4Cl (99.5%), NH4I (99.999%), n-butylamine (BA), phenylethylamine (PEA), p-phenylenediamine (PPD) and bathocuproine (BCP, 99.99%) were purchased from Sigma-Aldrich. Methylammonium iodide (MAI), methylammonium bromide (MABr) and methylammonium chloride (MACl) were purchased from Greatcell Solar Ltd. Poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) AL4083 was purchased from Heraeus Ltd. [6,6]-Phenyl-C61-butyric acid methyl ester (PCBM) was purchased from Solenne BV. Hydroiodic acid (HI, 55.0–58.0%) was purchased from Aladdin. All reagents and solvents were used as received unless otherwise specified.

### Solution preparation and device fabrication

PEDOT:PSS was spin-coated on pre-cleaned indium tin oxide (ITO) substrates at 3000 rpm for 40 s and annealed at 125 °C for 20 min in air. The RP perovskite precursor solution was spin-coated in an N2-filled glove box or blade-coated in air. For the RP perovskite precursor solution, MAI and PbI2 were separately dissolved in DMF at a concentration of 500 mg mL−1. The BA2MAn−1PbnI3n+1 RP perovskite solution was then prepared by mixing BA:MAI:PbI2 in a molar ratio of 2:(n+1):n. The AX additives (A = NH4+ or MA+; X = Cl−, Br− or I−) were first dissolved in the PbI2/DMF solution at 65 °C; MAI and BA were then mixed before being added to the PbI2 + AX solution. Both the precursor solution and the substrates were kept at room temperature during deposition. The obtained RP perovskite films were annealed at 65 °C for 5 min and at 100 °C for 30 min to improve crystallinity and to remove the NH4Cl additive.
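To make the 2:(n+1):n mixing ratio concrete, the short sketch below converts it into quantities to weigh out for a chosen <n> and amount of PbI2. This is an illustration we added rather than part of the published protocol; the molar masses, the BA density, and the function name are our own assumptions.

```python
# Hypothetical helper: convert the BA:MAI:PbI2 = 2:(n+1):n molar ratio
# for BA2MA(n-1)PbnI(3n+1) precursors into masses and a BA volume.
M_MAI = 158.97    # g/mol, methylammonium iodide (assumed value)
M_PBI2 = 461.01   # g/mol, lead(II) iodide (assumed value)
M_BA = 73.14      # g/mol, n-butylamine (assumed value)
RHO_BA = 0.74     # g/mL, n-butylamine density (assumed value)

def rp_precursor(n, mmol_pbi2):
    """Return (mg MAI, mg PbI2, uL BA) for a given <n> and PbI2 amount."""
    unit = mmol_pbi2 / n                 # mmol per "1" of the n in the ratio
    mmol_mai = (n + 1) * unit            # (n+1) MAI + n PbI2 supply 3n+1 iodides
    mmol_ba = 2 * unit                   # two BA cations per formula unit
    return (mmol_mai * M_MAI,            # mg of MAI
            mmol_pbi2 * M_PBI2,          # mg of PbI2
            mmol_ba * M_BA / RHO_BA)     # uL of BA (mass / density)

# <n> = 4 with 1.0 mmol PbI2 reproduces the quantities quoted in Methods:
# ~198.7 mg MAI, ~461.0 mg PbI2, ~49.4 uL BA.
print(rp_precursor(4, 1.0))
```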
PCBM (15 mg mL−1 in CB) was spin-coated onto the perovskite at 3000 rpm for 30 s and annealed at 80 °C for 10 min. Finally, BCP (7 nm thick) and Cu (80 nm thick) were evaporated sequentially onto the films in vacuum at rates of 0.2 Å s−1 and 2 Å s−1, respectively. The device area was defined as 0.10 cm2 by metal masks.

### Templated growth of RP perovskites

An oversaturated BA-based RP perovskite (<n> = 2) aqueous solution was prepared by dissolving 76.84 mg MAI, 53 mg PbI2 and 32.9 μL BA in 800 μL HI solution at 80 °C. The oversaturated solution was obtained after cooling the solution to 55 °C. The PDS phase (both the PbI2-DMF and MAI-PbI2-DMF solvate phases) was then introduced into the oversaturated solution for the growth of RP perovskites. To intentionally accelerate the solidification of RP perovskites in CB, an oversaturated BA-based RP perovskite (<n> = 4) DMF solution was prepared by dissolving 461 mg PbI2, 198.8 mg MAI, 26.8 mg NH4Cl and 49.4 μL BA in 230 μL DMF.

### Characterization

The current density (J)-voltage (V) curves of the RPSCs were measured in a nitrogen glove box with a Keithley 2400 at a voltage scan rate of 0.02 V s−1, a delay time of 50 ms, and a sweep from −0.2 to 1.2 V, under 100 mW cm−2 AM 1.5G illumination provided by an AAA-class solar simulator (Enli Technology Co., Ltd.). An NREL-certified Si reference cell (SRC-2020, Enli Technology Co., Ltd.) was used for calibration. The external quantum efficiency (EQE) was characterized by the QE-R solar cell quantum efficiency measurement system (Enli Technology Co., Ltd., China) with a 75 W xenon lamp as the light source. The monochromatic light intensity for EQE was calibrated with a NIST-certified Si photodiode from 300 to 1100 nm. The EQE spectrum was integrated over the AM 1.5G photon flux to obtain the photocurrent density. XRD measurements were carried out in air using a Siemens D500 Bruker X-ray diffractometer (Cu Kα radiation, λ = 1.5406 Å). SEM images of the perovskite crystals were obtained using a scanning electron microscope (TESCAN MIRA3 LMU) with the electron beam accelerated at 10–20 kV. EDS was measured with an X-Max20 silicon drift detector (Oxford). Absorbance spectra were obtained using a UV-visible spectrometer (Thermo Evolution 201) in the spectral range of 300–1100 nm. Steady-state PL was measured with an i-HR320 spectrometer (HORIBA Scientific) under excitation by a UV laser (337 nm). Spatially resolved infrared spectra were measured by nanoscale IR spectroscopy (nanoIR2, Bruker) with a lateral spatial resolution of 100 nm. Fourier-transform infrared (FTIR) spectra were measured with a Nicolet iS50 (Thermo). GI-XRD measurements were performed on a Xenocs Xeuss 2.0 system. The wavelength of the X-ray beam was 0.154 nm, with a flux of approximately 4.6 × 107 photons s−1 and an illumination area of 1.2 × 1.2 mm2. The incident angle of the X-ray beam was set to 0.5°. The 2D GI-XRD patterns were collected by a Pilatus 300K detector. The sample-to-detector distance was 170 mm, calibrated with a silver behenate standard sample. The GI-XRD patterns were analyzed using the software package FIT2D.
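The EQE-to-photocurrent integration mentioned above corresponds to $$J_{\mathrm{sc}} = q\int \mathrm{EQE}(\lambda)\,\Phi_{\mathrm{AM1.5G}}(\lambda)\,\mathrm{d}\lambda$$. A minimal numerical sketch is given below; it assumes the three input arrays (wavelength, EQE as a fraction, and the tabulated AM 1.5G photon flux) are supplied by the user, and it is our illustration rather than code from the authors.

```python
import numpy as np

Q_E = 1.602176634e-19  # elementary charge, C

def integrated_jsc(lam_nm, eqe, phi):
    """Integrate an EQE spectrum against the AM 1.5G photon flux.

    lam_nm : wavelengths in nm
    eqe    : external quantum efficiency as a fraction (0-1)
    phi    : AM 1.5G spectral photon flux in photons m^-2 s^-1 nm^-1
             (e.g., interpolated from the ASTM G-173 reference tables)
    Returns the short-circuit current density in mA cm^-2.
    """
    j_a_per_m2 = Q_E * np.trapz(eqe * phi, lam_nm)  # A m^-2
    return j_a_per_m2 * 0.1                         # 1 A m^-2 = 0.1 mA cm^-2
```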
## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

1. Tsai, H. et al. High-efficiency two-dimensional Ruddlesden-Popper perovskite solar cells. Nature 536, 312–316 (2016).
2. Zhang, X. et al. Stable high efficiency two-dimensional perovskite solar cells via cesium doping. Energy Environ. Sci. 10, 2095–2102 (2017).
3. Cao, D. H., Stoumpos, C. C., Farha, O. K., Hupp, J. T. & Kanatzidis, M. G. 2D homologous perovskites as light-absorbing materials for solar cell applications. J. Am. Chem. Soc. 137, 7843–7850 (2015).
4. Smith, I. C., Hoke, E. T., Solis-Ibarra, D., McGehee, M. D. & Karunadasa, H. I. A layered hybrid perovskite solar-cell absorber with enhanced moisture stability. Angew. Chem. Int. Ed. Engl. 53, 11232–11235 (2014).
5. Lin, Y. et al. Suppressed ion migration in low-dimensional perovskites. ACS Energy Lett. 2, 1571–1572 (2017).
6. Xiao, X. et al. Suppressed ion migration along the in-plane direction in layered perovskites. ACS Energy Lett. 3, 684–688 (2018).
7. Mitzi, D. B., Wang, S., Feild, C. A., Chess, C. A. & Guloy, A. M. Conducting layered organic-inorganic halides containing <110>-oriented perovskite sheets. Science 267, 1473–1476 (1995).
8. Blancon, J. C. et al. Scaling law for excitons in 2D perovskite quantum wells. Nat. Commun. 9, 2254 (2018).
9. Wang, S. T. et al. Interface electronic structure and morphology of 2,7-dioctyl[1]benzothieno[3,2-b]benzothiophene (C8-BTBT) on Au film. Appl. Surf. Sci. 416, 696–703 (2017).
10. Novoselov, K. S., Mishchenko, A., Carvalho, A. & Castro Neto, A. H. 2D materials and van der Waals heterostructures. Science 353, aac9439 (2016).
11. Zeng, M., Xiao, Y., Liu, J., Yang, K. & Fu, L. Exploring two-dimensional materials toward the next-generation circuits: from monomer design to assembly control. Chem. Rev. 118, 6236–6296 (2018).
12. Ebina, Y., Sasaki, T., Harada, M. & Watanabe, M. Restacked perovskite nanosheets and their Pt-loaded materials as photocatalysts. Chem. Mater. 14, 4390–4395 (2002).
13. Mitzi, D. B. Templating and structural engineering in organic-inorganic perovskites. J. Chem. Soc., Dalton Trans. 1–12 (2001).
14. Quintero-Bermudez, R. et al. Compositional and orientational control in metal halide perovskites of reduced dimensionality. Nat. Mater. 17, 900–907 (2018).
15. Lin, Y. et al. Unveiling the operation mechanism of layered perovskite solar cells. Nat. Commun. 10, 1008 (2019).
16. Rycenga, M., McLellan, J. M. & Xia, Y. N. Controlling the assembly of silver nanocubes through selective functionalization of their faces. Adv. Mater. 20, 2416 (2008).
17. Yunker, P. J., Still, T., Lohr, M. A. & Yodh, A. G. Suppression of the coffee-ring effect by shape-dependent capillary interactions. Nature 476, 308–311 (2011).
18. Zhao, K. & Mason, T. G. Directing colloidal self-assembly through roughness-controlled depletion attractions. Phys. Rev. Lett. 99, 268301 (2007).
19. Vesselinov, M. I. Crystal Growth for Beginners: Fundamentals of Nucleation, Crystal Growth and Epitaxy (World Scientific, 2016).
20. Zuo, C. & Ding, L. An 80.11% FF record achieved for perovskite solar cells by using the NH4Cl additive. Nanoscale 6, 9935–9938 (2014).
21. Fu, W. et al. Two-dimensional perovskite solar cells with 14.1% power conversion efficiency and 0.68% external radiative efficiency. ACS Energy Lett. 3, 2086–2093 (2018).
22. Xu, H. et al. Orientation regulation of tin-based reduced-dimensional perovskites for highly efficient and stable photovoltaics. Adv. Funct. Mater. 1807696 (2019).
23. Zhou, N. et al. Exploration of crystallization kinetics in quasi two-dimensional perovskite and high performance solar cells. J. Am. Chem. Soc. 140, 459–465 (2018).
24. Chen, A. Z. et al.
Origin of vertical orientation in two-dimensional metal halide perovskites and its effect on photovoltaic performance. Nat. Commun. 9, 1336 (2018).
25. Wang, K., Wu, C., Yang, D., Jiang, Y. & Priya, S. Quasi-two-dimensional halide perovskite single crystal photodetector. ACS Nano 12, 4919–4929 (2018).
26. Yuan, Y. et al. Ultra-high mobility transparent organic thin film transistors grown by an off-centre spin-coating method. Nat. Commun. 5, 3005 (2014).
27. Munir, R. et al. Hybrid perovskite thin-film photovoltaics: in situ diagnostics and importance of the precursor solvate phases. Adv. Mater. 29, 1604113 (2017).
28. Li, J. B. et al. Phase transition control for high-performance blade-coated perovskite solar cells. Joule 2, 1313–1330 (2018).
29. Petrov, A. A. et al. Crystal structure of DMF-intermediate phases uncovers the link between CH3NH3PbI3 morphology and precursor stoichiometry. J. Phys. Chem. C 121, 20739–20743 (2017).
30. Cao, J. et al. Identifying the molecular structures of intermediates for optimizing the fabrication of high-quality perovskite films. J. Am. Chem. Soc. 138, 9919–9926 (2016).
31. Soe, C. M. M. et al. Understanding film formation morphology and orientation in high member 2D Ruddlesden-Popper perovskites for high-efficiency solar cells. Adv. Energy Mater. 8, 1700979 (2018).
32. Zhang, X. et al. Phase transition control for high performance Ruddlesden-Popper perovskite solar cells. Adv. Mater. 30, e1707166 (2018).
33. Wang, G. et al. Wafer-scale growth of large arrays of perovskite microplate crystals for functional electronics and optoelectronics. Sci. Adv. 1, e1500613 (2015).
34. Fan, Z. et al. Layer-by-layer degradation of methylammonium lead tri-iodide perovskite microplates. Joule 1, 548–562 (2017).
35. Yang, R. et al. Oriented quasi-2D perovskites for high performance optoelectronic devices. Adv. Mater. 30, e1804771 (2018).
36. Liu, J., Leng, J., Wu, K., Zhang, J. & Jin, S. Observation of internal photoinduced electron and hole separation in hybrid two-dimentional perovskite films. J. Am. Chem. Soc. 139, 1432–1435 (2017).
37. Qing, J. et al. Aligned and graded type-II Ruddlesden-Popper perovskite films for efficient solar cells. Adv. Energy Mater. 8, 1800185 (2018).

## Acknowledgements

We acknowledge financial support from the National Natural Science Foundation of China (51673218, U1632265, 51802194, 61774170, and 61874141). J. Huang acknowledges financial support from the UNC Research Opportunities Initiative (ROI) through the Center of Hybrid Materials Enabled Electronic Technology. Yuan acknowledges the Innovation-Driven Project of Central South University, the Open Fund of the State Key Laboratory of Integrated Optoelectronics (IOSKL2016KF05), and financial support from the State Key Laboratory of Powder Metallurgy at Central South University. Luo acknowledges the Central South University postdoctoral international exchange introduction program and the China Postdoctoral Science Foundation.

## Author information

### Contributions

Y.Y. conducted the project. Y.Y. and J. Huang conceived the idea, designed the experiments, analyzed the data and wrote the paper. J.W. fabricated all spin-coated RP films and solar cells, and carried out the J-V curve and EQE measurements. J.W. and S.L. carried out the XRD, SEM, absorbance and PL characterizations. S.L. carried out the nucleation concentration measurement, the antisolvent experiment and the crystallographic analysis. Z.L., K.M. and G.C. carried out the GIWAXS measurements. C.Z. and L.D.
took part in analyzing the XRD and GIWAXS results. Y.C. carried out the templated growth of RP perovskites on PDS. Y.L. fabricated RPSCs by doctor blading and carried out the corresponding characterization. T.H. and S.L. carried out the IR-AFM measurements. Y.L., J. He and X.S. contributed to the TRPL studies, and H.H. contributed to the AFM studies. Y.D. carried out the optical study on the bladed films. All the authors revised the paper.

### Corresponding authors

Correspondence to Jinsong Huang or Yongbo Yuan.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Edward Sargent, Mingjian Yuan and the other anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wang, J., Luo, S., Lin, Y. et al. Templated growth of oriented layered hybrid perovskites on 3D-like perovskites. Nat. Commun. 11, 582 (2020). https://doi.org/10.1038/s41467-019-13856-1
Algebraic and Arithmetic Geometry Seminar

# The Hilbert scheme of infinite affine space

## by Burt Totaro (UCLA)

I will discuss the Hilbert scheme of $d$ points in affine $n$-space, with some examples. This space has many irreducible components for $n$ at least $3$ and is poorly understood. Nonetheless, in the limit where $n$ goes to infinity, we show that the Hilbert scheme of $d$ points in infinite affine space has a very simple homotopy type. In fact, it has the $\mathbb{A}^1$-homotopy type of the infinite Grassmannian $\mathsf{BGL}(d-1)$. Many questions remain. (Joint with Marc Hoyois, Joachim Jelisiejew, Denis Nardin, Maria Yakerson.)
# Math Help - factoring without c value

1. ## factoring without c value

Hi;
Does x^2 - 2x factor to (x - 2)(x - 0)?

Thanks

2. ## Re: factoring without c value

Yes. This polynomial has 2 real roots: x1 = 0 and x2 = 2.

3. ## Re: factoring without c value

Originally Posted by anthonye
Hi;
Does x^2 - 2x factor to (x - 2)(x - 0)?

Thanks

First rule of factoring ... pull out any factor each term has in common.

$x^2 - 2x = x(x - 2)$

... in this case, you're done.
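If you want to double-check a factorization like this with a computer algebra system, here is a small sketch using Python's sympy (assuming sympy is installed):

```python
from sympy import symbols, factor, solve

x = symbols('x')
expr = x**2 - 2*x

print(factor(expr))   # x*(x - 2)
print(solve(expr, x)) # [0, 2] -> the two real roots
```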
### Separation of variables

Separation of variables is a method of solving ordinary and partial differential equations. For an ordinary differential equation

$$\frac{dy}{dx} = g(x)f(y)$$ (1)

where $f(y)$ is nonzero in a neighborhood of the initial value, the solution is given implicitly by

$$\int \frac{dy}{f(y)} = \int g(x)\,dx$$ (2)

If the integrals can be done in closed form and the resulting equation can be solved for $y$ (which are two pretty big "if"s), then a complete solution to the problem has been obtained. The most important equation to which this technique applies is $dy/dx = ky$, the equation for exponential growth and decay (Stewart 2001). For a partial differential equation in a function $u$ and variables $x$, $y$, ..., separation of variables can be applied by making a substitution of the form

$$u(x, y, \ldots) = X(x)Y(y)\cdots$$ (3)

breaking the resulting equation into a set of independent ordinary differential equations, solving these for $X(x)$, $Y(y)$, ..., and then plugging them back into the original equation. This technique works because a product of functions of independent variables can be constant only if each factor is separately constant.

### Indirectly conformal mapping

An indirectly conformal mapping, sometimes called an anticonformal mapping, is a mapping that reverses all angles, whereas an isogonal mapping can reverse some angles and preserve others. For example, if $f(z)$ is a conformal map, then its complex conjugate $\overline{f(z)}$ is an indirectly conformal map, and hence also an isogonal mapping.

### Multivariable calculus

Multivariable calculus is the branch of calculus that studies functions of more than one variable. Partial derivatives and multiple integrals are the generalizations of derivative and integral that are used. An important theorem in multivariable calculus is Green's theorem, which is a generalization of the first fundamental theorem of calculus to two dimensions.

### Vector basis

A vector basis of a vector space $V$ is defined as a subset of vectors $v_1, \ldots, v_n$ in $V$ that are linearly independent and span $V$. Consequently, if $(v_1, \ldots, v_n)$ is a list of vectors in $V$, then these vectors form a vector basis if and only if every $v \in V$ can be uniquely written as

$$v = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$$ (1)

where $a_1$, ..., $a_n$ are elements of the base field. When the base field is the reals, so that $a_i \in \mathbb{R}$, the resulting basis vectors are $n$-tuples of reals that span $n$-dimensional Euclidean space $\mathbb{R}^n$. Other possible base fields include the complexes $\mathbb{C}$, as well as various fields of positive characteristic considered in algebra, number theory, and algebraic geometry. A vector space $V$ has many different vector bases, but there are always the same number of basis vectors in each of them. The number of basis vectors in $V$ is called the dimension of $V$. Every spanning list in a vector space can be reduced to a basis of the vector space. The simplest example of a vector basis is the standard basis in Euclidean space $\mathbb{R}^n$, in which the basis vectors lie along each coordinate axis.

### Homotopic

Two mathematical objects are said to be homotopic if one can be continuously deformed into the other. For example, the real line is homotopic to a single point, as is any tree. However, the circle is not contractible, but is homotopy equivalent to a solid torus. The basic version of homotopy is between maps. Two maps $f_0: X \to Y$ and $f_1: X \to Y$ are homotopic if there is a continuous map $F: X \times [0,1] \to Y$ such that $F(x,0) = f_0(x)$ and $F(x,1) = f_1(x)$. Whether or not two subsets are homotopic depends on the ambient space. For example, in the plane, the unit circle is homotopic to a point, but not in the punctured plane $\mathbb{R}^2 \setminus \{0\}$. The puncture can be thought of as an obstacle. However, there is a way to compare two spaces via homotopy without ambient spaces. Two spaces $X$ and $Y$ are homotopy equivalent if there are maps $f: X \to Y$ and $g: Y \to X$ such that the composition $g \circ f$ is homotopic to the identity map of $X$ and $f \circ g$ is homotopic to the identity map of $Y$.
For example, the circle is not homotopic to a point, for then the constant map would be homotopic to the identity map of a circle, which is impossible.. ### Semilocally simply connected A topological space is semilocally simply connected (also called semilocally 1-connected) if every point has a neighborhood such that any loop with basepoint is homotopic to the trivial loop. The prefix semi- refers to the fact that the homotopy which takes to the trivial loop can leave and travel to other parts of .The property of semilocal simple connectedness is important because it is a necessary and sufficient condition for a connected, locally pathwise-connected space to have a universal cover. ### Complex analysis Complex analysis is the study of complex numbers together with their derivatives, manipulation, and other properties. Complex analysis is an extremely powerful tool with an unexpectedly large number of practical applications to the solution of physical problems. Contour integration, for example, provides a method of computing difficult integrals by investigating the singularities of the function in regions of the complex plane near and between the limits of integration.The key result in complex analysis is the Cauchy integral theorem, which is the reason that single-variable complex analysis has so many nice results. A single example of the unexpected power of complex analysis is Picard's great theorem, which states that an analytic function assumes every complex number, with possibly one exception, infinitely often in any neighborhood of an essential singularity!A fundamental result of complex analysis is the Cauchy-Riemann.. ### Abelian category An Abelian category is a category for which the constructions and techniques of homological algebra are available. The basic examples of such categories are the category of Abelian groups and, more generally, the category of modules over a ring. Abelian categories are widely used in algebra, algebraic geometry, and topology.Many of the same constructions that are found in categories of modules, such as kernels, exact sequences, and commutative diagrams are available in Abelian categories. A disadvantage that must be overcome is the fact that the objects in a category do not necessarily have elements that can be manipulated directly, so the traditional definitions do not work. As a result, methods must be developed that allow definition and manipulation of objects without the use of elements.As an example, consider the definition of the kernel of a morphism, which states that given , the kernel of is defined to be a morphism such that all morphisms.. ### Lebesgue integrable A nonnegative measurable function is called Lebesgue integrable if its Lebesgue integral is finite. An arbitrary measurable function is integrable if and are each Lebesgue integrable, where and denote the positive and negative parts of , respectively.The following equivalent characterization of Lebesgue integrable follows as a consequence of monotone convergence theorem. A nonnegative measurable function is Lebesgue integrable iff there exists a sequence of nonnegative simple functions such that the following two conditions are satisfied: 1. . 2. almost everywhere. ### Compact manifold A compact manifold is a manifold that is compact as a topological space. Examples are the circle (the only one-dimensional compact manifold) and the -dimensional sphere and torus. Compact manifolds in two dimensions are completely classified by their orientation and the number of holes (genus). 
It should be noted that the term "compact manifold" often implies "manifold without boundary," which is the sense in which it is used here. When there is need for a separate term, a compact boundaryless manifold is called a closed manifold.For many problems in topology and geometry, it is convenient to study compact manifolds because of their "nice" behavior. Among the properties making compact manifolds "nice" are the fact that they can be covered by finitely many coordinate charts, and that any continuous real-valued function is bounded on a compact manifold.For any positive integer , a distinct nonorientable.. ### Negative part Let , then the negative part of is the function defined byNote that the negative part is itself a nonnegative function. The negative part satisfies the identitywhere is the positive part of . ### New mersenne prime conjecture Dickson states "In a letter to Tanner [L'intermediaire des math., 2, 1895, 317] Lucas stated that Mersenne (1644, 1647) implied that a necessary and sufficient condition that be a prime is that be a prime of one of the forms , , ."Mersenne's implication has been refuted, but Bateman, Selfridge, and Wagstaff (1989) used the statement as an inspiration for what is now called the new Mersenne conjecture, which can be stated as follows.Consider an odd natural number . If two of the following conditions hold, then so does the third: 1. or , 2. is prime (a Mersenne prime), 3. is prime (a Wagstaff prime). This conjecture has been verified for all primes .Based on the distribution and heuristics of (cf. https://www.utm.edu/research/primes/mersenne/heuristic.html) the known Mersenne and Wagstaff prime exponents, it seems quite likely that there is only a finite number of exponents satisfying the criteria of the new Mersenne conjecture. In.. ### Combinatorics Combinatorics is the branch of mathematics studying the enumeration, combination, and permutation of sets of elements and the mathematical relations that characterize their properties.Mathematicians sometimes use the term "combinatorics" to refer to a larger subset of discrete mathematics that includes graph theory. In that case, what is commonly called combinatorics is then referred to as "enumeration."The Season 1 episode "Noisy Edge" (2005) of the television crime drama NUMB3RS mentions combinatorics. ### Entire function If a complex function is analytic at all finite points of the complex plane , then it is said to be entire, sometimes also called "integral" (Knopp 1996, p. 112).Any polynomial is entire.Examples of specific entire functions are given in the following table.functionsymbolAiry functions, Airy function derivatives, Anger functionBarnes G-functionbeiberBessel function of the first kindBessel function of the second kindBeurling's functioncosinecoversineDawson's integralerferfcerfiexponential functionFresnel integrals, gamma function reciprocalgeneralized hypergeometric functionhaversinehyperbolic cosinehyperbolic sineJacobi elliptic functions, , , , , , , , , , , Jacobi theta functionsJacobi theta function derivativesMittag-Leffler functionmodified Struve functionNeville theta functions, , , Shisinesine integralspherical Bessel function of the first kindStruve functionversineWeber.. ### Nilpotent group A group is nilpotent if the upper central sequenceof the group terminates with for some .Nilpotent groups have the property that each proper subgroup is properly contained in its normalizer. A finite nilpotent group is the direct product of its Sylow p-subgroups. 
### Wagstaff prime A Wagstaff prime is a prime number of the form for a prime number. The first few are given by , 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167, 191, 199, 313, 347, 701, 1709, 2617, 3539, 5807, 10501, 10691, 11279, 12391, 14479, 42737, 83339, 95369, 117239, 127031, 138937, 141079, 267017, 269987, 374321, 986191, and 4031399 (OEIS A000978), with and larger corresponding to probable primes. These values correspond to the primes with indices , 3, 4, 5, 6, 7, 8, 9, 11, 14, 18, 22, 26, ... (OEIS A123176).The Wagstaff primes are featured in the newMersenne prime conjecture.There is no simple primality test analogous to the Lucas-Lehmer test for Wagstaff primes, so all recent primality proofs of Wagstaff primes have used elliptic curve primality proving.A Wagstaff prime can also be interpreted as a repunit prime of base , asif is odd, as it must be for the above number to be prime.Some of the largest known Wagstaff probable primes are summarized in the following.. ### Fibonacci prime A Fibonacci prime is a Fibonacci number that is also a prime number. Every that is prime must have a prime index , with the exception of . However, the converse is not true (i.e., not every prime index gives a prime ).The first few (possibly probable) prime Fibonacci numbers are 2, 3, 5, 13, 89, 233, 1597, 28657, 514229, ... (OEIS A005478), corresponding to indices , 4, 5, 7, 11, 13, 17, 23, 29, 43, 47, 83, 131, 137, 359, 431, 433, 449, 509, 569, 571, 2971, 4723, 5387, ... (OEIS A001605). (Note that Gardner's statement that is prime (Gardner 1979, p. 161) is incorrect, especially since 531 is not even prime, which it must be for to be prime.) The following table summarizes Fibonacci (possibly probable) primes with index .termindexdigitsdiscovererstatus2453871126proven prime; https://primes.utm.edu/primes/page.php?id=511292593111946proven prime; https://primes.utm.edu/primes/page.php?id=374702696772023proven prime; https://primes.utm.edu/primes/page.php?id=3553727144313016proven.. ### Spectral sequence A spectral sequence is a tool of homological algebra that has many applications in algebra, algebraic geometry, and algebraic topology. Roughly speaking, a spectral sequence is a system for keeping track of collections of exact sequences that have maps between them.There are many definitions of spectral sequences and many slight variations that are useful for certain purposes. The most common type is a "first quadrant cohomological spectral sequence," which is a collection of Abelian groups where , , and are integers, with and nonnegative and for some positive integer , usually 2. The groups come equipped with maps(1)such that(2)There is the further restriction that(3)The maps are called boundary maps.A spectral sequence may be visualized as a sequence of grids, one for each value of . The s and s denote positions on the grid, where is the -coordinate and is the -coordinate. The diagram above shows this for .The entire collection.. ### Pointed space A pointed space is a topological space together with a choice of a basepoint . The notation for a pointed space is . Maps between two pointed spaces must take basepoints to basepoints. Pointed spaces are widely used in algebraic topology, homotopy theory, and topological K-theory. ### Deformation retract A subspace of is called a deformation retract of if there is a homotopy (called a retract) such that for all and , 1. , 2. , and 3. . A tightening of the last condition gives a so-called strongdeformation retract (Bredon 1993, pp. 
45-46).Note that a deformation retract is also a retract, because the homotopy defines a continuous map ### Strong deformation retract A subspace of is called a strong deformation retract of if there is a homotopy (called a retract) such that for all , , and , 1. , 2. , and 3. . If the last equation is required only for , the retract is called simply a deformation retract. ### Magic number There are several different kinds of magic numbers. The digital root and magic constant are sometimes known as magic numbers.In baseball, the magic number for a team in first place in a division is the number of games that team must win or the second place team must lose in order to clinch the division. The formula isFor example, the standings for the National League Central Division as of August 9, 2004 are summarized in the following table.teamwinslossesSt. Louis7238Chicago6150Houston5556Cincinnati5457Milwaukee5258Pittsburgh5158There were 162 games in the season so, for example, St. Louis's magic number on that date was ### Integer sequence primes Just as many interesting integer sequences can be defined and their properties studied, it is often of interest to additionally determine which of their elements are prime. The following table summarizes the indices of the largest known prime (or probable prime) members of a number of named sequences.sequenceOEISdigitsdiscoverersearch limitcommentsalternating factorialA00127259961260448M. Rodenkirch (Sep. 18, 2017)100000 (M. Rodenkirch, Dec. 15, 2017)finite sequence; largest certified prime has index 661; the rest are probable primesApéry-constant primeA119334141141E. W. Weisstein (May 14, 2006)9089 (E. W. Weisstein, Mar. 22, 2008)status unknownApéry number A092825662410136E. W. Weisstein (Mar. 2004) (E. W. Weisstein, Mar. 2004)probable primeApéry number 87E. W. Weisstein.. ### Special linear group Given a ring with identity, the special linear group is the group of matrices with elements in and determinant 1.The special linear group , where is a prime power, the set of matrices with determinant and entries in the finite field . is the corresponding set of complex matrices having determinant . is a subgroup of the general linear group and is a Lie-type group. Both and are genuine Lie groups. ### Cyclic group A cyclic group is a group that can be generated by a single element (the group generator). Cyclic groups are Abelian.A cyclic group of finite group order is denoted , , , or ; Shanks 1993, p. 75), and its generator satisfies(1)where is the identity element.The ring of integers form an infinite cyclic group under addition, and the integers 0, 1, 2, ..., () form a cyclic group of order under addition (mod ). In both cases, 0 is the identity element.There exists a unique cyclic group of every order , so cyclic groups of the same order are always isomorphic (Scott 1987, p. 34; Shanks 1993, p. 74). Furthermore, subgroups of cyclic groups are cyclic, and all groups of prime group order are cyclic. In fact, the only simple Abelian groups are the cyclic groups of order or a prime (Scott 1987, p. 35).The th cyclic group is represented in the Wolfram Language as CyclicGroup[n].Examples of cyclic groups include , , , ..., and the modulo multiplication.. ### Range If is a map (a.k.a. function, transformation, etc.) 
over a domain , then the range of , also called the image of under , is defined as the set of all values that can take as its argument varies over , i.e.,Note that among mathematicians, the word "image"is used more commonly than "range."The range is a subset of and does not have to be all of .Unfortunately, term "range" is often used to mean domain--its precise opposite--in probability theory, with Feller (1968, p. 200) and Evans et al. (2000, p. 5) calling the set of values that a variate can assume (i.e., the set of values that a probability density function is defined over) the "range", denoted by (Evans et al. 2000, p. 5).Even worse, statistics most commonly uses "range" to refer to the completely different statistical quantity as the difference between the largest and smallest order statistics. In this work, this form.. ### Principal ideal domain A principal ideal domain is an integral domain in which every proper ideal can be generated by a single element. The term "principal ideal domain" is often abbreviated P.I.D. Examples of P.I.D.s include the integers, the Gaussian integers, and the set of polynomials in one variable with real coefficients.Every Euclidean ring is a principal ideal domain, but the converse is not true. Nevertheless, the notion of greatest common divisor arising from the Euclidean algorithm can be extended to the more general context of principal ideal domains as follows. Given two nonzero elements of a principal ideal domain , a greatest common divisor of and is defined as any element of such thatEvery principal ideal domain is a unique factorization domain, but not conversely. Every polynomial ring over a field is a unique factorization domain, but it is a principal ideal domain iff the number of indeterminates is one... ### Conformal mapping A conformal mapping, also called a conformal map, conformal transformation, angle-preserving transformation, or biholomorphic map, is a transformation that preserves local angles. An analytic function is conformal at any point where it has a nonzero derivative. Conversely, any conformal mapping of a complex variable which has continuous partial derivatives is analytic. Conformal mapping is extremely important in complex analysis, as well as in many areas of physics and engineering.A mapping that preserves the magnitude of angles, but not their orientation is called an isogonal mapping (Churchill and Brown 1990, p. 241).Several conformal transformations of regular grids are illustrated in the first figure above. In the second figure above, contours of constant are shown together with their corresponding contours after the transformation. Moon and Spencer (1988) and Krantz (1999, pp. 183-194) give tables of conformal.. ### Isogonal mapping An isogonal mapping is a transformation that preserves the magnitudes of local angles, but not their orientation. A few examples are illustrated above.A conformal mapping is an isogonal mapping that also preserves the orientations of local angles. If is a conformal mapping, then is isogonal but not conformal. This is due to the fact that complex conjugation is not an analytic function. ### Base The word "base" in mathematics is used to refer to a particular mathematical object that is used as a building block. The most common uses are the related concepts of the number system whose digits are used to represent numbers and the number system in which logarithms are defined. 
It can also be used to refer to the bottom edge or surface of a geometric figure.A real number can be represented using any integer number as a base (sometimes also called a radix or scale). The choice of a base yields to a representation of numbers known as a number system. In base , the digits 0, 1, ..., are used (where, by convention, for bases larger than 10, the symbols A, B, C, ... are generally used as symbols representing the decimal numbers 10, 11, 12, ...).The digits of a number in base (for integer ) can be obtained in the Wolfram Language using IntegerDigits[x, b].Let the base representation of a number be written(1)(e.g., ). Then, for example, the number 10 is.. ### Discrete mathematics Discrete mathematics is the branch of mathematics dealing with objects that can assume only distinct, separated values. The term "discrete mathematics" is therefore used in contrast with "continuous mathematics," which is the branch of mathematics dealing with objects that can vary smoothly (and which includes, for example, calculus). Whereas discrete objects can often be characterized by integers, continuous objects require real numbers.The study of how discrete objects combine with one another and the probabilities of various outcomes is known as combinatorics. Other fields of mathematics that are considered to be part of discrete mathematics include graph theory and the theory of computation. Topics in number theory such as congruences and recurrence relations are also considered part of discrete mathematics.The study of topics in discrete mathematics usually includes the study of algorithms, their.. ### Analysis The term analysis is used in two ways in mathematics. It describes both the discipline of which calculus is a part and one form of abstract logic theory.Analysis is the systematic study of real and complex-valued continuous functions. Important subfields of analysis include calculus, differential equations, and functional analysis. The term is generally reserved for advanced topics which are not encountered in an introductory calculus sequence, although many ideas from those courses, such as derivatives, integrals, and series are studied in more detail. Real analysis and complex analysis are two broad subdivisions of analysis which deal with real-values and complex-valued functions, respectively.Derbyshire (2004, p. 16) describes analysis as "the study of limits."Logicians often call second-order arithmetic "analysis." Unfortunately, this term conflicts with the more usual definition of analysis.. ### Positive part Let , then the positive part of is the function defined byThe positive part satisfies the identitywhere is the negative part of . ### Image If is a map (a.k.a. function, transformation, etc.) over a domain , then the image of , also called the range of under , is defined as the set of all values that can take as its argument varies over , i.e.,"Image" is a synonym for "range," but"image" is the term preferred in formal mathematical writing.The notation denotes the image of the interval under the function . Formally, ### Algebra The word "algebra" is a distortion of the Arabic title of a treatise by al-Khwārizmī about algebraic methods. In modern usage, algebra has several meanings.One use of the word "algebra" is the abstract study of number systems and operations within them, including such advanced topics as groups, rings, invariant theory, and cohomology. This is the meaning mathematicians associate with the word "algebra." 
When there is the possibility of confusion, this field of mathematics is often referred to as abstract algebra.The word "algebra" can also refer to the "school algebra" generally taught in American middle and high schools. This includes the solution of polynomial equations in one or more variables and basic properties of functions and graphs. Mathematicians call this subject "elementary algebra," "high school algebra," "junior high.. ### Abstract algebra Abstract algebra is the set of advanced topics of algebra that deal with abstract algebraic structures rather than the usual number systems. The most important of these structures are groups, rings, and fields. Important branches of abstract algebra are commutative algebra, representation theory, and homological algebra.Linear algebra, elementary number theory, and discrete mathematics are sometimes considered branches of abstract algebra. Ash (1998) includes the following areas in his definition of abstract algebra: logic and foundations, counting, elementary number theory, informal set theory, linear algebra, and the theory of linear operators. ### Intermediate value theorem If is continuous on a closed interval , and is any number between and inclusive, then there is at least one number in the closed interval such that .The theorem is proven by observing that is connected because the image of a connected set under a continuous function is connected, where denotes the image of the interval under the function . Since is between and , it must be in this connected set.The intermediate value theorem (or rather, the space case with , corresponding to Bolzano's theorem) was first proved by Bolzano (1817). While Bolzano's used techniques which were considered especially rigorous for his time, they are regarded as nonrigorous in modern times (Grabiner 1983). ### Extreme value theorem If a function is continuous on a closed interval , then has both a maximum and a minimum on . If has an extremum on an open interval , then the extremum occurs at a critical point. This theorem is sometimes also called the Weierstrass extreme value theorem.The standard proof of the first proceeds by noting that is the continuous image of a compact set on the interval , so it must itself be compact. Since is compact, it follows that the image must also be compact. ### Commutative algebra Let denote an -algebra, so that is a vector space over and(1)(2)Now define(3)where . An Associative -algebra is commutative if for all . Similarly, a ring is commutative if the multiplication operation is commutative, and a Lie algebra is commutative if the commutator is 0 for every and in the Lie algebra.The term "commutative algebra" also refers to the branch of abstract algebra that studies commutative rings. Commutative algebra is important in algebraic geometry. ### Group upper central series The upper central series of a group is the sequence of groups (each term normal in the term following it)that is constructed in the following way: 1. is the center of . 2. For , is the unique subgroup of such that is the center of . If the upper central series of a group terminates with for some , then is called a nilpotent group. ### Conjugation Conjugation is the process of taking a complex conjugate of a complex number, complex matrix, etc., or of performing a conjugation move on a knot.Conjugation also has a meaning in group theory. Let be a group and let . 
Then, defines a homomorphism given byThis is a homomorphism becauseThe operation on given by is called conjugation by .Conjugation is an important construction in group theory. Conjugation defines a group action of a group on itself and this often yields useful information about the group. For example, this technique is how the Sylow Theorems are proven. More importantly, a normal subgroup of a group is a subgroup which is invariant under conjugation by any element. Normal groups are extremely important because they are the kernels of homomorphisms and it is possible to take the quotient of a group and one of its normal subgroups... ### Linear algebraic group A linear algebraic group is a matrix group that is also an affine variety. In particular, its elements satisfy polynomial equations. The group operations are required to be given by regular rational functions. The linear algebraic groups are similar to the Lie groups, except that linear algebraic groups may be defined over any field, including those of positive field characteristic.The special linear group of matrices of determinant one is a linear algebraic group. This is because the equation for the determinant is a polynomial equation in the entries of the matrices. The general linear group of matrices with nonzero determinant is also a linear algebraic group. This can be seen by introducing an extra variable and writingThis is a polynomial equation in variables and is equivalent to saying that is nonzero. This equation describes as an affine variety... ### Topological space A topological space, also called an abstract topological space, is a set together with a collection of open subsets that satisfies the four conditions: 1. The empty set is in . 2. is in . 3. The intersection of a finite number of sets in is also in . 4. The union of an arbitrary number of sets in is also in . Alternatively, may be defined to be the closed sets rather than the open sets, in which case conditions 3 and 4 become: 3. The intersection of an arbitrary number of sets in is also in . 4. The union of a finite number of sets in is also in . These axioms are designed so that the traditional definitions of open and closed intervals of the real line continue to be true. For example, the restriction in (3) can be seen to be necessary by considering , where an infinite intersection of open intervals is a closed set.In the chapter "Point Sets in General Spaces" Hausdorff (1914) defined his concept of a topological space based on the four Hausdorff axioms (which.. ### Path space Let be the set of continuous mappings . Then the topological space supplied with the compact-open topology is called a mapping space. If is a pointed space, then the mapping space of pointed maps is called the path space of . In words, is the space of all paths which begin at . is a contractible space with the contraction given by . ### Boundedly compact space A metric space is boundedly compact if all closed bounded subsets of are compact. Every boundedly compact metric space is complete. (This is a generalization of the Bolzano-Weierstrass theorem.)Every complete Riemannian manifold is boundedly compact. This is part of or a consequence of the Hopf-Rinow theorem. ### Loop space Let be the set of continuous mappings . 
Then the topological space $Y^X$ supplied with the compact-open topology is called a mapping space, and if $X$ is taken as the circle $S^1$, then $Y^{S^1}$ is called the "free loop space of $Y$" (or the space of closed paths). If $Y$ is a pointed space, then a basepoint can be picked on the circle and the mapping space of pointed maps can be formed. This space is denoted $\Omega Y$ and is called the "loop space of $Y$."

### Outlier

An outlier is an observation that lies outside the overall pattern of a distribution (Moore and McCabe 1999). Usually, the presence of an outlier indicates some sort of problem. This can be a case which does not fit the model under study, or an error in measurement. Outliers are often easy to spot in histograms. For example, the point on the far left in the above figure is an outlier. A convenient definition of an outlier is a point which falls more than 1.5 times the interquartile range above the third quartile or below the first quartile. Outliers can also occur when comparing relationships between two sets of data. Outliers of this type can be easily identified on a scatter diagram. When performing least squares fitting to data, it is often best to discard outliers before computing the line of best fit. This is particularly true of outliers along the $x$ direction, since these points may greatly influence the result…

### Trivial loop

The trivial loop is the loop that takes every point to its basepoint. Formally, if $X$ is a topological space and $x_0 \in X$, the trivial loop based at $x_0$ is the map $c : I \rightarrow X$ given by $c(t) = x_0$ for all $t \in I$.

### Inner product

An inner product is a generalization of the dot product. In a vector space, it is a way to multiply vectors together, with the result of this multiplication being a scalar. More precisely, for a real vector space, an inner product $\langle \cdot, \cdot \rangle$ satisfies the following four properties. Let $u$, $v$, and $w$ be vectors and $\alpha$ be a scalar; then:

1. $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$.
2. $\langle \alpha v, w \rangle = \alpha \langle v, w \rangle$.
3. $\langle v, w \rangle = \langle w, v \rangle$.
4. $\langle v, v \rangle \geq 0$, with equality if and only if $v = 0$.

The fourth condition in the list above is known as the positive-definite condition. Related thereto, note that some authors define an inner product to be a function satisfying only the first three of the above conditions with the added (weaker) condition of being (weakly) non-degenerate (i.e., if $\langle v, w \rangle = 0$ for all $w$, then $v = 0$). In such literature, functions satisfying all four such conditions are typically referred to as positive-definite inner products (Ratcliffe 2006), though inner products which fail to be positive-definite are sometimes called indefinite to avoid confusion. This difference, though subtle…

### Hilbert space

A Hilbert space is a vector space $H$ with an inner product $\langle f, g \rangle$ such that the norm defined by
$$\|f\| = \sqrt{\langle f, f \rangle}$$
turns $H$ into a complete metric space. If the metric defined by the norm is not complete, then $H$ is instead known as an inner product space. Examples of finite-dimensional Hilbert spaces include:

1. The real numbers $\mathbb{R}^n$ with $\langle v, u \rangle$ the vector dot product of $v$ and $u$.
2. The complex numbers $\mathbb{C}^n$ with $\langle v, u \rangle$ the vector dot product of $v$ and the complex conjugate of $u$.

An example of an infinite-dimensional Hilbert space is $L^2$, the set of all functions $f : \mathbb{R} \rightarrow \mathbb{R}$ such that the integral of $f^2$ over the whole real line is finite. In this case, the inner product is
$$\langle f, g \rangle = \int_{-\infty}^{\infty} f(x)\,g(x)\,dx.$$
A Hilbert space is always a Banach space, but the converse need not hold. A (small) joke told in the hallways of MIT ran, "Do you know Hilbert? No? Then what are you doing in his space?" (S. A. Vaughn, pers. comm., Jul. 31, 2005)…

### Scatter diagram

A scatter diagram, also called a scatterplot or a scatter plot, is a visualization of the relationship between two variables measured on the same set of individuals.
Scatter diagrams for lists of data $(x_1, y_1)$, $(x_2, y_2)$, ... can be generated with the Wolfram Language using ListPlot[{{x1, y1}, {x2, y2}, ...}]. A scatter diagram makes it particularly easy to spot trends and correlations between the two variables. For example, the scatter diagram illustrated above plots wine consumption (in liters of alcohol from wine per person per year) against deaths from heart disease (in deaths per 100,000 people) for 19 developed nations (Moore and McCabe 1999, Ex. 2.5). There is clearly an inverse relationship between these two variables. Once such a relationship has been found, linear regression can be used to find curves of best fit. The graph above shows the same scatter diagram together with a line of best fit…

### Slope field

Given an ordinary differential equation $y' = f(x, y)$, the slope field for that differential equation is the vector field that takes a point $(x, y)$ to a unit vector with slope $f(x, y)$. The vectors in a slope field are usually drawn without arrowheads, indicating that they can be followed in either direction. Using a visualization of a slope field, it is easy to graphically trace out solution curves to initial value problems. For example, the illustration above shows a slope field together with solution curves for various initial values of $y$.

### Product

The term "product" refers to the result of one or more multiplications. For example, the mathematical statement $a \cdot b = c$ would be read "$a$ times $b$ equals $c$," where $c$ is the product. More generally, it is possible to take the product of many different kinds of mathematical objects, including those that are not numbers. For example, the product of two sets is given by the Cartesian product. In topology, the product of spaces can be defined by using the product topology. The product of two groups, vector spaces, or modules is given by the direct product. In category theory, the product of objects is given using the category product. The product symbol is defined by
$$\prod_{i=1}^{n} a_i \equiv a_1 a_2 \cdots a_n.$$
Useful product identities include
$$\prod_{i=1}^{n} a_i = \exp\left(\sum_{i=1}^{n} \ln a_i\right) \qquad (a_i > 0)$$
$$\ln \prod_{i=1}^{n} a_i = \sum_{i=1}^{n} \ln a_i \qquad (a_i > 0).$$
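The product identities above are easy to check numerically. Here is a small R sketch, added for illustration and not part of the original entry:

```r
# Numeric check of the product identities: for positive a_i,
# prod(a) should equal exp(sum(log(a))).
a <- c(1.5, 2, 3.2, 4)
prod(a)            # 38.4, the direct product
exp(sum(log(a)))   # 38.4, the same value up to floating-point error
```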
All Questions

How can I execute a sequence of statements interactively?
I'm fairly new to Mathematica. Suppose I have the following function: f[x_Integer] := Module[{y, z, r}, y = 5; z = 10; r = x + y + z; r]; and I ...

WaveletScalogram in polar coordinates
For given dataset data = Re[Zeta[1/2 + I Range[0, 100, 0.01]]]; It is nice that Mathematica can plot data in both Cartesian and Polar coordinates. ...

Change values in string under condition
How to change the width and height in the HTML code into some other values decided by ratio and a fixed width? The background is that you can adjust an image in ...

HoldForm and RandomChoice
I'm trying to get random expressions displaying. For that, I'm starting with addition, multiplication, subtraction and division. When trying out this code ...

Implementing a dictionary data structure
As one learns from a course on data structures, hash maps or dictionaries can be efficient when applied to appropriate tasks. I need a hash map in Mathematica and I've never found it. I'm scratching ...

Need better font control [duplicate]
Generally many programs have a menu item to raise/lower current font size. This is much more convenient than trying to root around in internal Mathematica options. I need to change ALL default ...

Do Mathematica notebooks include personal metadata?
I am about to send a Mathematica notebook to someone and I would like to make sure it does not contain any information that could personally identify me. I would also like to produce pdf files from ...

Clearing memory of a subkernel in a Do loop
I have a Do loop, which saves data directly to a .txt file. However when I leave it running for long periods of time the RAM ...

LogPlot of the calculated data in For Loop in Mathematica
I have the following loop, for calculating data for different values of inputs. How can I plot the result at the end of the For loop? It does not matter if the plot is inside or outside the loop, I just ...

plotting the solutions of system of differential equations [duplicate]
The following code includes the system of differential equations: ...

Strange response with dynamic texture and MousePosition
I built a cubic, then use MousePosition together with GUIScreenShot to take a floating region with fixed size (200*200) on the ...

How to make a function work on symbols in a specific context
Let us say we have two sets of symbols $(x, y, z)$ with the same names that live in two different contexts: ...

SparseArray row operations
Given a large (very) sparse matrix, A, how can I efficiently "operate" on only the nonzeros in a given row? For example: For each row in A, generate a list of column indices that have a magnitude ...

Sort a list of coordinates by frequency from first component of each coordinate
The situation is that I have a List like this: ...

Indicating the axes [closed]
I want to scale the $x$ axis of the plot by $x^0$ and the $y$ axis by $\tan x$. Is this possible? Plot[Tan[x], {x, 0, 10}, PlotRange -> Automatic] EDIT: ...

How to visualize Riemann surfaces?
In WolframAlpha we can easily visualize Riemann surfaces of arbitrary functions; can we plot the Riemann surface of an arbitrary function using Mathematica and ...

Do any users know of methods to capture Twitter feeds and subject them to analysis? [duplicate]
I would like to use the output of about 300 Twitter sources and would probably collect 30 minutes of tweets at a time and subject each collection to analysis; ideally this process would be automatic ...

FindEdgeCut with weighted graphs
If I want to find a minimum cut between two nodes in a weighted graph, I would use FindEdgeCut as follows: ...

Adding/deleting weighted edges to a weighted graph
How can I add and delete weighted edges in a weighted graph? Using Graph in Mathematica I want to add a weighted edge to a weighted graph. The problem is ...

Is it possible to use the Export[] command and add another address other than the usual Home directory that is used?

Extending listability to coordinates [duplicate]
I noticed that one can subtract a single real number value from all elements in an array by writing something like {5, 4, 3, 2, 1} - 5 which outputs ...

Are there built in functions to perform a geometric transform to rotate a set of points around an arbitrary point?
I have a list of points {{4,5},{6,7},{9,8},...} in two dimensions. I'd like to rotate these points some number of degrees $\theta$ around an arbitrary anchor point ...

Specify values for X-axis and Y-axis ticks and control their format
I have the following problem: I want to set the values for the X and Y axis by myself, but the values (e.g. 0.800) are not plotted as I defined them (see image 0,8); Mathematica removes the zeros. ...

How to pick a solution from a list of solutions using a test?
I have a list of solutions that depends on a parameter b3 and I'd like to get the solution for which the x value is minimal when ...

Separating disconnected graph objects without losing vertex coordinate assignments
Say I have a graph Gtest, which has multiple disconnected components. I found that I can isolate individual components while retaining vertex coordinate ...

FindHamiltonianCycle is not invariant to edge permutation
Bug introduced in 9.0.1 or earlier and fixed in 10.0. Found this bug-like behaviour today: if the order of edges is changed so that VertexList does not return the ...

Parsing output from FindHamiltonianCycle to recover an ordered list of vertex positions for a discovered path
After applying FindHamiltonianCycle to a graph, one generates output that looks like the following: ...

StringForm and NotebookWrite
I'm trying to get my head around NotebookWrite for the last few days. I can't understand why ...

Generating a graph object where vertices are pixel coordinates and edges represent two pixels being in the same Moore neighborhood
I have a list of integer pixel positions, pts, and I wish to create one or more graph objects where vertices at these pixel positions share an edge if they are ...

How to extract data from contour plot as a text file?
I have the following expression from which I want to extract data for a contour plot. ...

Generating a list of contour pixels for a morphological component
I have a set of pixelated shapes, and after transforming each shape into a morphological component, I want to be able to return the coordinates for pixels that lie on the contour of the shape. More ...

How to find the numbers $a$, $b$, $c$, $d$ of the function?
I want to find four numbers $a$, $b$, $c$, $d$ of the function $f(x)= \dfrac{a x + b}{c x + d}$ satisfying the conditions $f(-6)=4$, $f(-5)=5$, $f(1)=3$ and $f(2)=2$. I tried ...

How to fill between abscissa values?
How can I produce a fill (in a plot) that extends between two given abscissa values, and "infinitely" in the ordinate direction? (Of course, here "infinitely" just means "up to the top and bottom ...

Matrices and Upper / Lower Triangular Factorizations
Does anyone know (I have searched documentation, online and MathSource) if there are ways to coax out the following factorizations from Mathematica? $A = LU$, $LU$ factorization $PA = LU$, $LU$ ...

How to preserve focus on InputField after Print?
If you press Enter in the example below it should become clear that test is being added to the textbox instead of the current ...

How to keep markers as dots in a joined ListPlot?
Consider the following typical interactive sequence. First I produce a ListPlot: OK, not bad, but I want those dots to be bigger, and also joined. First, I ...

Numerical differentiation methods
Is it possible to write code in Mathematica that implements various differentiation methods (like forward, central, extrapolated, etc.)?

Manipulate graphs in 3D
Is it possible to Manipulate Graphs (e.g. a ...

Exp of big negative numbers [duplicate]
I noticed that Exp has a strange behaviour with big negative numbers ...

How to disable differential styles/markers/etc. for multiple entities?
For functions that can produce multiple "style-able" entities, Mathematica by default will give each such entity a different style. Similar conventions apply to other features, either by default, or ...

How to construct tuples with a given order?
How do I create a list of tuples with an ordering imposed on them, where each element is from a generating set? Specifically, I'm trying to create a listing of tuples $(x_1, x_2, ..., x_n)$ such that ...

NotebookWrite an expression
I'm trying to write a notebook from the kernel, so using one list of elements in one document, I'd like to create a second document with some interactions between the elements of the list. For that, ...

How to extract Audio Record data from Sound[SystemDialogInput["RecordSound"]]?
When I set parameter "SoundReg" to collect Audio Streaming data - it's working. After that I use Button$"Record"$ In my ...

How to make Mathematica try harder to perform symbolic comparisons?
(I suspect this question is a duplicate, but I didn't find a sufficiently similar question with an answer to it.) I'm having trouble with comparisons of symbolic ...

Is there a way to parallelize the convolution component of EdgeDetect?
Provided an image like - test = Import["http://upload.wikimedia.org/wikipedia/commons/d/d5/Sunflowers.jpg"] We can run ...

Understanding the discrepancy between txt file import and export times, and possibly speeding up Import
I attempted to import a very short .txt file string using the Import command: ...

How can I create jpeg images of a given file size?
Cross-posted at Wolfram Community. I would like to have a function makeJPG[megabytes_] that generates random jpg images of given file sizes with the filesize (in megabytes) watermarked on the image ...

Dsolve too slow — is there anyway around?
I am trying to solve: ...
# Arun Waves

## July 23, 2010

### Leg-Wheel hybrid for a rover robot: Whegs

Filed under: Robotics - Arun @ 2:05 am

During one of my exploratory voyages across the vast untamed expanse of the "internet" I made an exciting discovery, Whegs – a wheel-leg hybrid system. First watch this video to get hooked. Robot designers are constantly on the lookout for tricks to get better performance out of their robots. Just like us humans, robots also face obstacles along their way. Now you can detect the obstacle and go around it, but there are plenty of occasions where it is advantageous to go over the obstacle. And between you and me, the real reason for the latter option is that your robot will be a lot cooler and more awesome (and bada**) if it can simply go over the obstacles!! Wheels are great, but mother nature decided to give us and many other animals legs, so there must be some advantage to them. Thus, millions of years after legs became a way of life and thousands of years after wheels came into being, someone thought, "hey, why not mix the two," and the result is Whegs. I found it at this site and this (no time to figure out who came up with it first). Long story short, Whegs climb over taller obstacles when compared to wheels. Now why is a hybrid design better than the wheel?!?! Here is an illustrative description…

Consider the above wheel with an obstacle much smaller than the wheel radius (which is the height of the wheel's center (the black dot) from the floor). As the robot moves forward (which is 'left' in this case), its wheel will make contact with the obstacle. Once contact has been made, friction kicks in, which forces the point of contact to stay the same. Since the torque on the wheel continues to act, the point of contact acts like a pivot. If the robot's motor is powerful enough, it will continue turning the wheel, use the pivot point to push down, and lift the robot chassis. Eventually this results in the wheel climbing over the obstacle. Now imagine a similar situation, but this time the obstacle is comparable to the radius of the wheel ($h \approx r$). Once again contact is made, but this time the point of contact is on the face of the obstacle and not on top, as in the previous case. Although friction kicks in, at this point the friction would have to be so high that it allowed the wheel to travel vertically up the face of the obstacle!! Typically this will not be the scenario, and even if the friction were that high, think of how the wheel would let go of the point of contact if it tried to move forward (in this case, move up); if it wants to roll, then it must continuously change the point of contact. Since the wheel fails in this scenario, it will also fail for $h > r$, since the point of contact is always on the face of the obstacle and not on top of it. Now that we have understood how the wheel works, let us consider the same obstacle vs. a Wheg. For accurate comparison this Wheg has the same radius as the wheel, and this particular one is a three-legged Wheg. The circle is purely illustrative and is shown for reference only – it does not exist. The first thing that becomes apparent is the large amount of empty space that this structure has, and in the following sections we will see how this is used to our advantage. Since it has empty space where there was a wheel once, this structure can, so to speak, penetrate the obstacle as shown in the above illustration. Remember the circle is just a guide to the eye; it merely shows the trajectory of the three legs.
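Jumping ahead slightly to the limiting-height discussion below, the geometry is easy to put a number on. Here is a small R sketch (my own illustration, not from the original post) assuming idealized rigid spokes of length r: with one leg planted straight down, the tip of an adjacent leg (an angle 2*pi/n away on an n-legged Wheg) sits at height r*(1 - cos(2*pi/n)) above the floor.

```r
# Height an adjacent leg tip can reach on an idealized n-legged Wheg
# with spoke length r: the axle sits at height r, and the adjacent
# tip sits at -r*cos(2*pi/n) relative to the axle.
tip_height <- function(n, r = 1) r * (1 - cos(2 * pi / n))
tip_height(3)  # 1.5 -> a 3-legged Wheg can top obstacles ~1.5 r tall
tip_height(4)  # 1.0 -> comparable to a plain wheel (h ~ r)
tip_height(6)  # 0.5 -> more legs, less penetration, as the post notes
```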
Even if the Wheg started out in the previous position, it will simply slip and eventually get to a position similar to the one shown above. As this Wheg rolls forward (not as smoothly as a wheel, though), one of the legs will make contact with the obstacle, and its approach will be from the top and not on the face, as seen in the case of a wheel. As before, the torque on the Wheg continues, and the leg that makes the contact is used as a pivot to raise the chassis, as shown below. PS: Note that the illustrative circle digs into the floor because the Wheg moves from one leg to another, unlike a wheel, which has continuous contact with the floor. The above discussion is applicable to taller obstacles also. Thus this hybrid wheel can climb over obstacles that are comparable to the radius of the Wheg 😀 by virtue of being able to penetrate the obstacle profile and having the leg approach the obstacle from the top. So if obstacle climbing is your thing, then the Wheg is the way to go. Now let us push the system further – what is the maximum height of the obstacle at which even the Wheg fails? The above illustration shows the maximum height for which this Wheg will work; if the obstacle were any taller, then the leg would not be able to rest on its top. Thus the limiting height depends on how high a leg can reach so as to approach the obstacle from the top, which in turn depends on how deep the Wheg can penetrate the obstacle's profile. PS: you can achieve more height by reducing the angle between the top two legs, but then the Wheg will not be stable, or you can add more legs, but then the penetration depth will be reduced. Now, as magical as this may seem, remember, there is no free lunch!! Cons for a Wheg:

• The robot's ride will be rough
• Slender legs have a tendency to sink into soft/nonrigid surfaces like sand and mud because of the reduced contact surface area with the floor
• If things poke out of a moving part, they tend to get entangled in stuff, like grass or undergrowth
• If you are really deep into robotics, then consider the non-uniform forces that the axle-leg joint will be subject to with every rotation

Happy Robotics 🙂

## May 5, 2010

### Rover robot

muah ha ha ha ……. my first minion ……… it's ALIVE 😈 What every tech-nut dreams of: build your own robot! Finally I built my obstacle-avoiding rover robot using:

• Arduino Duemilanove – robot brain
• Hi-Tec HS 311 servos – actuator/drive for wheels
• spring loaded single throw switch for sensor (mustache/bump switch)
• wheels from a "99 cents only" store toy
• don't own a drill (sigh 😥 I know) so used masking tape to hold the plywood sheets which form the chassis

It is a simple one, to get my feet wet: the Arduino drives 4 servos, the mustache detects obstacles, and a simple algorithm reverses the rover and turns it to avoid the obstacle. Turning is achieved by reversing the rotation direction on one side of the rover. Here are some pics and yessssss a video 😀

Testing a servo after modifying it into a continuous-rotation servo.
Layout of 4 servos and corresponding wheels on the chassis.
(After a few hours) ta da; used velcro and masking tape to hold components together, and did not put any effort into hiding the ugly guts of the rover.

I did not provide detailed instructions for constructing the rover since there are plenty of sites which already do a good job at it. But feel free to ask any questions, I would be glad to answer them.
Here is a video of stress testing the rover to determine the maximum incline that it can handle. The next step is to add a SHARP IR range finder to detect obstacles and take evasive action before the rover makes contact with an obstacle. It will be interesting to explore alternative styles of robotics, like a legged robot or a non-processor robot like the BEAM robots. Avenues are plenty; the only limitation is time!
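The turning trick mentioned above can be captured with standard differential-drive kinematics. This R sketch is my own illustration, not from the post; the wheel radius and track width values are made up:

```r
# Differential-drive kinematics: body speed v and turn rate w from
# left/right wheel angular speeds wL and wR. Reversing one side
# (wL = -wR) gives v = 0 and maximum |w|: the rover spins in place.
r <- 0.03   # wheel radius [m], illustrative value
L <- 0.15   # distance between left and right wheels [m], illustrative
twist <- function(wL, wR) c(v = r * (wL + wR) / 2, w = r * (wR - wL) / L)
twist(10, 10)    # straight ahead: v = 0.3 m/s, w = 0
twist(-10, 10)   # one side reversed: v = 0, w = 4 rad/s (spin in place)
```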
# Chapter 1 Introduction to Data Science

This is an open source textbook aimed at introducing undergraduate students to data science. It was originally written for the University of British Columbia's DSCI 100 - Introduction to Data Science course. In this book, we define data science as the study and development of reproducible, auditable processes to obtain value (i.e., insight) from data. The book is structured so that learners spend the first four chapters learning how to use the R programming language and Jupyter notebooks to load, wrangle/clean, and visualize data, while answering descriptive and exploratory data analysis questions. The remaining chapters illustrate how to solve four common problems in data science, which are useful for answering predictive and inferential data analysis questions:

1. Predicting a class/category for a new observation/measurement (e.g., cancerous or benign tumour)
2. Predicting a value for a new observation/measurement (e.g., 10 km race time for 20-year-old females with a BMI of 25)
3. Finding previously unknown/unlabelled subgroups in your data (e.g., products commonly bought together on Amazon)
4. Estimating an average or a proportion from a representative sample (group of people or units) and using that estimate to generalize to the broader population (e.g., the proportion of undergraduate students that own an iPhone)

For each of these problems, we map them to the type of data analysis question being asked and discuss what kinds of data are needed to answer such questions. More advanced (e.g., causal or mechanistic) data analysis questions are beyond the scope of this text.

Types of data analysis questions

| Question type | Description | Example |
|---------------|-------------|---------|
| Descriptive | A question which asks about summarized characteristics of a data set without interpretation (i.e., report a fact). | How many people live in each US state? |
| Exploratory | A question that asks if there are patterns, trends, or relationships within a single data set. Often used to propose hypotheses for future study. | Does political party voting change with indicators of wealth in a set of data collected from groups of individuals from several regions in the United States? |
| Inferential | A question that looks for patterns, trends, or relationships in a single data set and also asks for quantification of how applicable these findings are to the wider population. | Does political party voting change with indicators of wealth in the United States? |
| Predictive | A question that asks about predicting measurements or labels for individuals (people or things). The focus is on what things predict some outcome, but not what causes the outcome. | What political party will someone vote for in the next US election? |
| Causal | A question that asks about whether changing one factor will lead to a change in another factor, on average, in the wider population. | Does wealth lead to voting for a certain political party candidate in the US Presidential election? |
| Mechanistic | A question that asks about the underlying mechanism of the observed patterns, trends, or relationship (i.e., how does it happen?) | How does wealth lead to voting for a certain political party candidate in the US Presidential election? |

Source: What is the question? by Jeffery T. Leek, Roger D.
Peng & The Art of Data Science by Roger Peng & Elizabeth Matsui

## 1.1 Chapter learning objectives

By the end of the chapter, students will be able to:

• use a Jupyter notebook to execute provided R code
• edit code and markdown cells in a Jupyter notebook
• create new code and markdown cells in a Jupyter notebook
• load the tidyverse library into R
• create new variables and objects in R using the assignment symbol
• use the help and documentation tools in R
• match the names of the following functions from the tidyverse library to their documentation descriptions:
• read_csv
• select
• mutate
• filter
• ggplot
• aes

## 1.2 Jupyter notebooks

Jupyter notebooks are documents that contain a mix of computer code (and its output) and formattable text. Given that they combine these two in a single document (code is not separate from the output or written report), notebooks are one of the leading tools to create reproducible data analyses. A reproducible data analysis is one where you can reliably and easily recreate the same results when analyzing the same data. Although this sounds like something that should always be true of any data analysis, in reality this is not often the case; one needs to make a conscious effort to perform data analysis in a reproducible manner. The name Jupyter came from combining the names of the three programming languages that it was initially targeted for (Julia, Python, and R), and now many other languages can be used with Jupyter notebooks. A notebook looks like this: We have included a short demo video here to help you get started and to introduce you to R and Jupyter. However, the best way to learn how to write and run code and formattable text in a Jupyter notebook is to do it yourself! Here is a worksheet that provides a step-by-step guide through the basics.

## 1.3 Loading a spreadsheet-like dataset

Often, the first thing we need to do in data analysis is to load a dataset into R. When we bring spreadsheet-like (think Microsoft Excel tables) data, generally shaped like a rectangle, into R it is represented as what we call a data frame object. It is very similar to a spreadsheet where the rows are the collected observations and the columns are the variables. The first kind of data we will learn how to load into R (as a data frame) is the spreadsheet-like comma-separated values format (.csv for short). These files have names ending in .csv, and can be opened and saved from common spreadsheet programs like Microsoft Excel and Google Sheets. For example, a .csv file named state_property_vote.csv is included with the code for this book. This file, originally from Data USA, has US state-level property, income, population and voting data from 2015 and 2016. If we were to open this data in a plain text editor, we would see each row on its own line, and each entry in the table separated by a comma:

state,med_income,med_prop_val,population,mean_commute_minutes,party
AK,64222,197300,733375,10.46830207,Republican
AL,36924,94800,4830620,25.30990746,Republican
AR,35833,83300,2958208,22.40108933,Republican
AZ,44748,128700,6641928,20.58786,Republican
CA,53075,252100,38421464,23.38085172,Democrat
CO,48098,198900,5278906,19.50792188,Democrat
CT,69228,246450,3593222,24.349675,Democrat
DC,70848,475800,647484,28.2534,Democrat
DE,54976,228500,926454,24.45553333,Democrat

To load this data into R, and then to do anything else with it afterwards, we will need to use something called a function.
A function is a special word in R that takes in instructions (we call these arguments) and does something. The function we will use to read a .csv file into R is called read_csv. In its most basic use-case, read_csv expects that the data file:

• has column names (or headers),
• uses a comma (,) to separate the columns, and
• does not have row names.

Below you'll see the code used to load the data into R using the read_csv function. But there is one extra step we need to do first. Since read_csv is not included in the base installation of R, to be able to use it we have to load it from somewhere else: a collection of useful functions known as a library. The read_csv function in particular is in the tidyverse library (more on this later), which we load using the library function. Next, we call the read_csv function and pass it a single argument: the name of the file, "state_property_vote.csv". We have to put quotes around filenames and other letters and words that we use in our code to distinguish them from the special words that make up the R programming language. This is the only argument we need to provide for this file, because our file satisfies everything else the read_csv function expects in the default use-case (which we just discussed). Later in the course, we'll learn more about how to deal with more complicated files where the default arguments are not appropriate. For example, files that use spaces or tabs to separate the columns, or with no column names.

library(tidyverse)
read_csv("state_property_vote.csv")

## # A tibble: 52 x 6
##    state med_income med_prop_val population mean_commute_minutes party
##    <chr>      <dbl>        <dbl>      <dbl>                <dbl> <chr>
##  1 AK         64222       197300     733375                 10.5 Republican
##  2 AL         36924        94800    4830620                 25.3 Republican
##  3 AR         35833        83300    2958208                 22.4 Republican
##  4 AZ         44748       128700    6641928                 20.6 Republican
##  5 CA         53075       252100   38421464                 23.4 Democrat
##  6 CO         48098       198900    5278906                 19.5 Democrat
##  7 CT         69228       246450    3593222                 24.3 Democrat
##  8 DC         70848       475800     647484                 28.3 Democrat
##  9 DE         54976       228500     926454                 24.5 Democrat
## 10 FL         43355       125600   19645772                 24.8 Republican
## # … with 42 more rows

Above you can also see something neat that Jupyter does to help us understand our code: it colours text depending on its meaning in R. For example, you'll note that functions get bold green text, while letters and words surrounded by quotations like filenames get blue text. In case you want to know more (optional): We use the read_csv function from the tidyverse instead of the base R function read.csv because it's faster and it creates a nicer variant of the base R data frame called a tibble. This has several benefits that we'll discuss in further detail later in the course.

## 1.4 Assigning value to a data frame

When we loaded the US state-level property, income, population, and voting data in R above using read_csv, we did not give this data frame a name, so it was just printed to the screen and we cannot do anything else with it. That isn't very useful; what we would like to do is give a name to the data frame that read_csv outputs so that we can use it later for analysis and visualization. To assign a name to something in R, there are two possible ways: using either the assignment symbol (<-) or the equals symbol (=). From a style perspective, the assignment symbol is preferred and is what we will use in this course. When we name something in R using the assignment symbol, <-, we do not need to surround it with quotes like the filename.
This is because we are formally telling R about this word and giving it a value. Only characters and words that act as values need to be surrounded by quotes. Let's now use the assignment symbol to give the name us_data to the US state-level property, income, population, and voting data frame that we get from read_csv.

us_data <- read_csv("state_property_vote.csv")

Wait a minute! Nothing happened this time! Or at least it looks like that. But actually something did happen: the data was read in and now has the name us_data associated with it. And we can use that name to access the data frame and do things with it. First we will type the name of the data frame to print it to the screen.

us_data

## # A tibble: 52 x 6
##    state med_income med_prop_val population mean_commute_minutes party
##    <chr>      <dbl>        <dbl>      <dbl>                <dbl> <chr>
##  1 AK         64222       197300     733375                 10.5 Republican
##  2 AL         36924        94800    4830620                 25.3 Republican
##  3 AR         35833        83300    2958208                 22.4 Republican
##  4 AZ         44748       128700    6641928                 20.6 Republican
##  5 CA         53075       252100   38421464                 23.4 Democrat
##  6 CO         48098       198900    5278906                 19.5 Democrat
##  7 CT         69228       246450    3593222                 24.3 Democrat
##  8 DC         70848       475800     647484                 28.3 Democrat
##  9 DE         54976       228500     926454                 24.5 Democrat
## 10 FL         43355       125600   19645772                 24.8 Republican
## # … with 42 more rows

## 1.5 Creating subsets of data frames with select & filter

Now, we are going to learn how to obtain subsets of data from a data frame in R using two other tidyverse functions: select and filter. The select function allows you to create a subset of the columns of a data frame, while the filter function allows you to obtain a subset of the rows with specific values. Before we start using select and filter, let's take a look at the US state-level property, income, and population data again to familiarize ourselves with it. We will do this by printing the data we loaded earlier in the chapter to the screen.

us_data

## # A tibble: 52 x 6
##    state med_income med_prop_val population mean_commute_minutes party
##    <chr>      <dbl>        <dbl>      <dbl>                <dbl> <chr>
##  1 AK         64222       197300     733375                 10.5 Republican
##  2 AL         36924        94800    4830620                 25.3 Republican
##  3 AR         35833        83300    2958208                 22.4 Republican
##  4 AZ         44748       128700    6641928                 20.6 Republican
##  5 CA         53075       252100   38421464                 23.4 Democrat
##  6 CO         48098       198900    5278906                 19.5 Democrat
##  7 CT         69228       246450    3593222                 24.3 Democrat
##  8 DC         70848       475800     647484                 28.3 Democrat
##  9 DE         54976       228500     926454                 24.5 Democrat
## 10 FL         43355       125600   19645772                 24.8 Republican
## # … with 42 more rows

In this data frame there are 52 rows (corresponding to the 50 US states, the District of Columbia and the US territory, Puerto Rico) and 6 columns:

1. US state abbreviation
2. Median household income
3. Median property value
4. US state population
5. Mean commute time in minutes
6. The party each state voted for in the 2016 US presidential election

Now let's use select to extract the state column from this data frame. To do this, we need to provide the select function with two arguments. The first argument is the name of the data frame object, which in this example is us_data. The second argument is the column name that we want to select, here state. After passing these two arguments, the select function returns a single column (the state column that we asked for) as a data frame.
state_column <- select(us_data, state)
state_column

## # A tibble: 52 x 1
##    state
##    <chr>
##  1 AK
##  2 AL
##  3 AR
##  4 AZ
##  5 CA
##  6 CO
##  7 CT
##  8 DC
##  9 DE
## 10 FL
## # … with 42 more rows

### 1.5.1 Using select to extract multiple columns

We can also use select to obtain a subset of the data frame with multiple columns. Again, the first argument is the name of the data frame. Then we list all the columns we want as arguments separated by commas. Here we create a subset of three columns: state, median property value, and mean commute time in minutes.

three_columns <- select(us_data, state, med_prop_val, mean_commute_minutes)
three_columns

## # A tibble: 52 x 3
##    state med_prop_val mean_commute_minutes
##    <chr>        <dbl>                <dbl>
##  1 AK          197300                 10.5
##  2 AL           94800                 25.3
##  3 AR           83300                 22.4
##  4 AZ          128700                 20.6
##  5 CA          252100                 23.4
##  6 CO          198900                 19.5
##  7 CT          246450                 24.3
##  8 DC          475800                 28.3
##  9 DE          228500                 24.5
## 10 FL          125600                 24.8
## # … with 42 more rows

### 1.5.2 Using select to extract a range of columns

We can also use select to obtain a subset of the data frame constructed from a range of columns. To do this we use the colon (:) operator to denote the range. For example, to get all the columns in the data frame from state to med_prop_val we pass state:med_prop_val as the second argument to the select function.

column_range <- select(us_data, state:med_prop_val)
column_range

## # A tibble: 52 x 3
##    state med_income med_prop_val
##    <chr>      <dbl>        <dbl>
##  1 AK         64222       197300
##  2 AL         36924        94800
##  3 AR         35833        83300
##  4 AZ         44748       128700
##  5 CA         53075       252100
##  6 CO         48098       198900
##  7 CT         69228       246450
##  8 DC         70848       475800
##  9 DE         54976       228500
## 10 FL         43355       125600
## # … with 42 more rows

### 1.5.3 Using filter to extract a single row

We can use the filter function to obtain the subset of rows with desired values from a data frame. Again, our first argument is the name of the data frame object, us_data. The second argument is a logical statement to use when filtering the rows. Here, for example, we'll say that we are interested in rows where state equals NY (for New York). To make this comparison, we use the equivalency operator == to compare the values of the state column with the value "NY". Similar to when we loaded the data file and put quotes around the filename, here we need to put quotes around "NY" to tell R that this is a character value and not one of the special words that make up the R programming language, nor one of the names we have given to data frames in the code we have already written. With these arguments, filter returns a data frame that has all the columns of the input data frame but only the rows we asked for in our logical filter statement.

new_york <- filter(us_data, state == "NY")
new_york

## # A tibble: 1 x 6
##   state med_income med_prop_val population mean_commute_minutes party
##   <chr>      <dbl>        <dbl>      <dbl>                <dbl> <chr>
## 1 NY         50839       134150   19673174                 24.4 Democrat

### 1.5.4 Using filter to extract rows with values above a threshold

If we are interested in finding information about the states where the mean commute time is longer than 21.5 minutes, we can create a filter to obtain rows where the value of mean_commute_minutes is greater than 21.5. In this case, we see that filter returns a data frame with 33 rows; this indicates that there are 33 states with a mean commute time longer than 21.5 minutes.
long_commutes <- filter(us_data, mean_commute_minutes > 21.5)
long_commutes

## # A tibble: 33 x 6
##    state med_income med_prop_val population mean_commute_minutes party
##    <chr>      <dbl>        <dbl>      <dbl>                <dbl> <chr>
##  1 AL         36924        94800    4830620                 25.3 Republican
##  2 AR         35833        83300    2958208                 22.4 Republican
##  3 CA         53075       252100   38421464                 23.4 Democrat
##  4 CT         69228       246450    3593222                 24.3 Democrat
##  5 DC         70848       475800     647484                 28.3 Democrat
##  6 DE         54976       228500     926454                 24.5 Democrat
##  7 FL         43355       125600   19645772                 24.8 Republican
##  8 GA         37865       101700   10006693                 24.5 Republican
##  9 IL         47898        97350   12873761                 22.6 Democrat
## 10 IN         47194       111800    6568645                 23.5 Republican
## # … with 23 more rows

## 1.6 Exploring data with visualizations

Creating effective data visualizations is an essential piece of any data analysis. For the remainder of Chapter 1, we will learn how to use functions from the tidyverse to make visualizations that let us explore relationships in data. In particular, we'll develop a visualization of the US property, income, population, and voting data we've been working with that will help us understand two potential relationships in the data: first, the relationship between median household income and median property value across the US, and second, whether there is a pattern in which party each state voted for in the 2016 US election. This is an example of an exploratory data analysis question: we are looking for relationships and patterns within the data set we have, but are not trying to generalize what we find beyond this data set.

### 1.6.1 Tidy data

Taking another look at our dataset below, we can immediately see that the three columns (or variables) we are interested in visualizing (median household income, median property value, and election result) are all in separate columns. In addition, there is a single row (or observation) for each state. The data are therefore in what we call a tidy data format. This is particularly important and will be a major focus in the remainder of this course: many of the functions from tidyverse require tidy data, including the ggplot function that we will use shortly for our visualization. Note below that we use the print function to display the us_data rather than just typing us_data; for data frames, these do the same thing.

print(us_data)

## # A tibble: 52 x 6
##    state med_income med_prop_val population mean_commute_minutes party
##    <chr>      <dbl>        <dbl>      <dbl>                <dbl> <chr>
##  1 AK         64222       197300     733375                 10.5 Republican
##  2 AL         36924        94800    4830620                 25.3 Republican
##  3 AR         35833        83300    2958208                 22.4 Republican
##  4 AZ         44748       128700    6641928                 20.6 Republican
##  5 CA         53075       252100   38421464                 23.4 Democrat
##  6 CO         48098       198900    5278906                 19.5 Democrat
##  7 CT         69228       246450    3593222                 24.3 Democrat
##  8 DC         70848       475800     647484                 28.3 Democrat
##  9 DE         54976       228500     926454                 24.5 Democrat
## 10 FL         43355       125600   19645772                 24.8 Republican
## # … with 42 more rows

### 1.6.2 Using ggplot to create a scatter plot

We will begin with a scatter plot of the income and property value columns from our data frame. To create a scatter plot of these two variables using the ggplot function, we do the following:

1. call the ggplot function
2. provide the name of the data frame as the first argument
3. call the aesthetic function, aes, to specify which column will correspond to the x-axis and which will correspond to the y-axis
4. add a + symbol at the end of the ggplot call to add a layer to the plot
5.
call the geom_point function to tell R that we want to represent the data points as dots/points to create a scatter plot.

ggplot(us_data, aes(x = med_income, y = med_prop_val)) +
  geom_point()

In case you have used R before and are curious: There are a small number of situations in which you can have a single R expression span multiple lines. Here, the + symbol at the end of the first line tells R that the expression isn't done yet and to continue reading on the next line. While not strictly necessary, this sort of pattern will appear a lot when using ggplot as it keeps things more readable.

### 1.6.3 Formatting ggplot objects

One common and easy way to format your ggplot visualization is to add additional layers to the plot object using the + symbol. For example, we can use the xlab and ylab functions to add layers where we specify human readable labels for the x and y axes. Again, since we are specifying words (e.g. "Income (USD)") as arguments to xlab and ylab, we surround them with double quotes. There are many more layers we can add to format the plot further, and we will explore these in later chapters.

ggplot(us_data, aes(x = med_income, y = med_prop_val)) +
  geom_point() +
  xlab("Income (USD)") +
  ylab("Median property value (USD)")

From this visualization we see that for the 52 US regions in this data set, as median household income increases so does median property value. When we see two variables do this, we call this a positive relationship. Because the increasing pattern is fairly clear (not fuzzy) we can say that the relationship is strong. Because of the data point in the lower left-hand corner, drawing a straight line through these points wouldn't fit very well. When a straight line doesn't fit the data well we say that the relationship is non-linear. However, we should be cautious about using a single point to claim non-linearity. As we will see later, this might be due to a single point not really belonging in the data set (this is often called an outlier). Learning how to describe data visualizations is a very useful skill. We will provide descriptions for you in this course (as we did above) until we get to Chapter 4, which focuses on data visualization. Then, we will explicitly teach you how to do this yourself, and how to not over-state or over-interpret the results from a visualization.

### 1.6.4 Coloring points by group

Now we'll move onto the second part of our exploratory data analysis question: when considering the relationship between median household income and median property value, is there a pattern in which party each state voted for in the 2016 US election? One common way to explore this is to colour the data points on the scatter plot we have already created by group/category. For example, given that the column named party records which party each state voted for in the 2016 US Presidential election, we can colour the points in our previous scatter plot to represent which party each state voted for. To do this we modify our scatter plot code above. Specifically, we will add an argument to the aes function, specifying that the points should be coloured by the party column:

ggplot(us_data, aes(x = med_income, y = med_prop_val, color = party)) +
  geom_point() +
  xlab("Income (USD)") +
  ylab("Median property value (USD)")

This data visualization shows that the one data point we singled out earlier on the far left of the plot has the label of "not applicable" instead of "democrat" or "republican".
Let's use filter to look at the row that contains the "not applicable" value in the party column:

missing_party <- filter(us_data, party == "Not Applicable")
missing_party

## # A tibble: 0 x 6
## # … with 6 variables: state <chr>, med_income <dbl>, med_prop_val <dbl>,
## #   population <dbl>, mean_commute_minutes <dbl>, party <fct>

That explains it! That row in the dataset is actually not a US state, but rather the US territory of Puerto Rico. Similar to other US territories, residents of Puerto Rico cannot vote in presidential elections. Hence the "not applicable" label. Let's remove this row from the data frame and name the resulting data frame vote_data. To do this, we use the opposite of the equivalency operator (==) for our filter statement, the "not equal" operator (!=).

vote_data <- filter(us_data, party != "Not Applicable")
vote_data

## # A tibble: 51 x 6
##    state med_income med_prop_val population mean_commute_minutes party
##    <chr>      <dbl>        <dbl>      <dbl>                <dbl> <fct>
##  1 AK         64222       197300     733375                 10.5 Republican
##  2 AL         36924        94800    4830620                 25.3 Republican
##  3 AR         35833        83300    2958208                 22.4 Republican
##  4 AZ         44748       128700    6641928                 20.6 Republican
##  5 CA         53075       252100   38421464                 23.4 Democrat
##  6 CO         48098       198900    5278906                 19.5 Democrat
##  7 CT         69228       246450    3593222                 24.3 Democrat
##  8 DC         70848       475800     647484                 28.3 Democrat
##  9 DE         54976       228500     926454                 24.5 Democrat
## 10 FL         43355       125600   19645772                 24.8 Republican
## # … with 41 more rows

Now we see that the data frame has 51 rows corresponding to the 50 states and the District of Columbia - all regions where residents can vote in US presidential elections. Let's now recreate the scatter plot we made above using this data frame subset:

ggplot(vote_data, aes(x = med_income, y = med_prop_val, color = party)) +
  geom_point() +
  xlab("Income (USD)") +
  ylab("Median property value (USD)")

What do we see when considering the second part of our exploratory question? Do we see a pattern in how certain states voted in the 2016 Presidential election? We do! Most of the US states that voted for the Republican candidate in the 2016 US Presidential election had lower median household income and lower median property values (their data points primarily fall in the lower left-hand side of the scatter plot), whereas most of the US states that voted for the Democratic candidate had higher median household income and higher median property values (their data points primarily fall in the upper right-hand side of the scatter plot). Does this mean that rich states usually vote for Democrats and poorer states generally vote for Republicans? Or could we use this data visualization on its own to predict which party each state will vote for in the next presidential election? The answer to both these questions is "no." What we can do with this exploratory data analysis is create new hypotheses, ideas, and questions (like the ones at the beginning of this paragraph). Answering those questions would likely involve gathering additional data and doing more complex analyses, which we will see more of later in this course.

### 1.6.5 Putting it all together

Below, we put everything from this chapter together in one code chunk. This demonstrates the power of R: in relatively few lines of code, we are able to create an entire data science workflow.
library(tidyverse)
us_data <- read_csv("state_property_vote.csv")
vote_data <- filter(us_data, party != "Not Applicable")
ggplot(vote_data, aes(x = med_income, y = med_prop_val, color = party)) +
  geom_point() +
  xlab("Income (USD)") +
  ylab("Median property value (USD)")

### 1.6.6 What's next?

In the next chapter, we will dig in and spend more time learning how to load spreadsheet-like datasets of various formats into R, as well as how to scrape data from the web!
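The chapter objectives also list the mutate function, which does not appear in the examples above. Here is a minimal sketch of how it could be used with this dataset (an editorial illustration, not from the original chapter; the new column name commute_hours is made up):

```r
# mutate adds a new column computed from existing ones. Here we
# convert the mean commute time from minutes to hours (illustrative).
library(tidyverse)
us_data <- read_csv("state_property_vote.csv")
us_data <- mutate(us_data, commute_hours = mean_commute_minutes / 60)
select(us_data, state, mean_commute_minutes, commute_hours)
```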
Who invented the method of radiocarbon dating?

But as more dates became available, Egyptologists, who had hieroglyphic records back thousands of years, began to recognize that C-14 dates were generally too young. They proved this by showing that C-14 dates of wooden artifacts with cartouches (dated royal names) did not agree. Willard Libby demonstrated the accuracy of radiocarbon dating by accurately estimating the age of wood from a series of samples for which the age was known, including an ancient Egyptian royal barge dating from 1850 BCE. From this science, we are able to approximate the date at which organisms were living on Earth. Radiocarbon dating uses the naturally occurring radioisotope carbon-14 (14C) to estimate the age of carbon-bearing materials up to about 58,000 to 62,000 years old. Carbon-14 has a relatively short half-life of 5,730 years, meaning that the fraction of carbon-14 in a sample is halved over the course of 5,730 years due to radioactive decay to nitrogen-14. The carbon-14 isotope would vanish from Earth's atmosphere in less than a million years were it not for the constant influx of cosmic rays interacting with molecules of nitrogen (N$$_2$$).

Figure 1: Diagram of the formation of carbon-14 (forward) and the decay of carbon-14 (reverse).

The half-life of a radioactive isotope (usually denoted by $$t_{1/2}$$) is a more familiar concept than the decay constant $$k$$ for radioactivity, so although the decay law $$N = N_0 e^{-kt}$$ is expressed in terms of $$k$$, it is more usual to quote the value of $$t_{1/2} = \ln 2 / k$$. The currently accepted value for the half-life of carbon-14 is 5,730 years, so half of the original carbon-14 will remain after 5,730 years; a quarter will remain after 11,460 years; an eighth after 17,190 years; and so on.
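As a quick illustration (an editorial addition, not part of the original page), the age of a sample can be computed from the fraction of carbon-14 remaining:

```r
# Radiocarbon age from the fraction of C-14 remaining, using
# N = N0 * exp(-k * t) with k = log(2) / t_half and t_half = 5730 yr.
t_half <- 5730
k <- log(2) / t_half
age_from_fraction <- function(frac) -log(frac) / k
age_from_fraction(c(0.5, 0.25, 0.125))   # 5730, 11460, 17190 years
```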
# Potential energies

1. Jun 2, 2009 Hi guys.... I have a small question on potential energies: I have got two potential energies: $$U_1=-\frac{k^2}{2}+\frac{w\sqrt{3}}{2}\sin^2\theta \cos 2 \phi$$ and $$U_2=-\frac{k^2}{2}+\frac{w\sqrt{3}}{2}\sin^2\theta \sin 2 \phi$$ where k is a constant, $$0<\theta<\pi$$ and $$0<\phi<2\pi$$. I minimized both of these and found that, say for k=1, w=0.5, both U1 and U2 have the SAME minimum value (about -0.933, i.e. $$-\tfrac{1}{2}-\tfrac{\sqrt{3}}{4}$$) but at DIFFERENT minima.... Does it mean that the two potentials represent the same physics, or could the physical situations corresponding to both be different? Thanks Last edited: Jun 2, 2009

2. Jun 2, 2009 ### alxm A cosine changes to a sine, so that could be viewed as corresponding to identical physical systems, with the coordinate system rotated about the z-axis (the argument $$2\phi$$ shifts by 90 degrees, i.e. $$\phi$$ shifts by 45 degrees).

3. Jun 2, 2009 Oh Yeah..True!! thanks a lot alxm...But i presume they would not be equivalent to $$U_3=-\frac{k^2}{2}+\frac{w\sqrt{3}}{2}\sin 2\theta \cos\phi$$ ?

4. Jun 2, 2009 ### alxm Well, then you've scaled a coordinate. Could be either a different physical system or a different coordinate system.

5. Jun 2, 2009 yep..thanks a lot..One final question..If I want to write $$\sin^2\theta \sin 2 \phi$$ in terms of spherical harmonics..I think these are related to the $$Y_{2,-2}$$ and $$Y_{2,2}$$ spherical harmonics but there will be an $$i$$ appearing and this term will be a part of a Hamiltonian so I will end up with complex energies! Is there a way out of this. In fact the Hamiltonian I get is: $$H=i~w~\sqrt{\frac{2\pi}{5}}~ (Y_{2,-2}-Y_{2,2}).$$
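A quick numeric check of the claim in the thread (an editorial addition, not from the forum): minimizing both potentials on a grid for k = 1, w = 0.5 confirms the same minimum value at rotated angular positions.

```r
# Grid-minimize U1 and U2 over 0 < theta < pi, 0 < phi < 2*pi
# for k = 1, w = 0.5, as described in the thread.
k <- 1; w <- 0.5
theta <- seq(0.001, pi, length.out = 400)
phi   <- seq(0, 2 * pi, length.out = 800)
g <- expand.grid(theta = theta, phi = phi)
U1 <- -k^2 / 2 + (w * sqrt(3) / 2) * sin(g$theta)^2 * cos(2 * g$phi)
U2 <- -k^2 / 2 + (w * sqrt(3) / 2) * sin(g$theta)^2 * sin(2 * g$phi)
min(U1); min(U2)    # both about -0.933 = -1/2 - sqrt(3)/4
g[which.min(U1), ]  # theta ~ pi/2, phi ~ pi/2  (where cos 2*phi = -1)
g[which.min(U2), ]  # theta ~ pi/2, phi ~ 3*pi/4 (where sin 2*phi = -1)
```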
## anonymous 5 years ago what the flutter this website sucks 1. Owlfred Hoot! You just asked your first question! Hang tight while I find people to answer it for you. You can thank people who give you good answers by clicking the 'Good Answer' button on the right! 2. anonymous its a math world...where you can find brilliant minds like us... welcome bro 3. anonymous this site is full of intelligent people, if you don't like it here...you have the freedom to leave ^_^ 4. anonymous This question is kinda hard, hold on while I derive an answer for ignorance. 5. anonymous lol 6. anonymous heheheh good one frogwingspans! 7. anonymous ignorance is a kind of infinity 8. anonymous is there really a frog like a wingspans, that thing that we need to ignore 9. anonymous knowledge is power 10. anonymous Power to the people. 11. anonymous lol 12. alivejeremy Well that's a fact for sure ^_^ 13. alivejeremy But it's only for helping and studying site so it's not pose to be that fun, but i make it fun for me:D 14. alivejeremy Fun is what u make it 15. alivejeremy ;) 16. SamsungFanBoy lol 17. Hayhayz What in the world xD 18. Jaynator495 19. SamsungFanBoy xDDD 20. Hayhayz I dont think Owlfred even read it once.. 21. AloneS Errrr 22. alivejeremy lol 23. CandyCove $$\color{blue}{\text{Originally Posted by}}$$ @frogdìcks This question is kinda hard, hold on while I derive an answer for ignorance. $$\color{blue}{\text{End of Quote}}$$ Slay.
## Physics Friday 63 [Part 2 of ?]

In the previous part, we introduced a spinning oblate spheroid, and showed that a distant point mass M at displacement $\vec{d}$ will exert a torque on the spheroid that can be approximated as $\vec{\tau} = \frac{3GM}{d^5}\,\vec{d} \times (\mathbf{I}\vec{d})$, where $\mathbf{I}$ is the moment of inertia tensor. Now, suppose that $\vec{d}$ is in the xy-plane of the inertial frame. Further, suppose that our point mass is orbiting our spheroid in a circular orbit of angular frequency Ω (or that our spheroid is orbiting our point mass in a circular orbit of angular frequency Ω; the model will turn out the same). Then $\vec{d} = d(\cos\Omega t,\ \sin\Omega t,\ 0)$ in the inertial frame. Suppose then at a time t the x and x' axes coincide. Then, if we let the angle between z and z' be θ0, we see that $\vec{d}$ has components in the body coordinates of
$d_{x'} = d\cos\Omega t, \quad d_{y'} = d\cos\theta_0\sin\Omega t, \quad d_{z'} = d\sin\theta_0\sin\Omega t.$
Now, if the density of our spheroid is sufficiently symmetric about its axis, then we will have Ix'=Iy', and the moment of inertia tensor in the body coordinates will be $\mathbf{I} = \mathrm{diag}(I_{x'}, I_{x'}, I_{z'})$. Using our results from part one, we find the torque in this situation:
$\tau_{x'} = \frac{3GM}{d^5}(I_{z'} - I_{x'})\,d_{y'}d_{z'}, \quad \tau_{y'} = \frac{3GM}{d^5}(I_{x'} - I_{z'})\,d_{x'}d_{z'}, \quad \tau_{z'} = 0.$
Now, we presently have $\mathbf{I}\vec{d} = I_{x'}\vec{d} + (I_{z'} - I_{x'})(\hat{z}'\cdot\vec{d})\,\hat{z}'$ and $\hat{z}'\cdot\vec{d} = d\sin\theta_0\sin\Omega t$, so we can rewrite the above in a way independent of our choice of x' and y' axes, giving torque:
$\vec{\tau} = \frac{3GM}{d^3}(I_{z'} - I_{x'})\sin\theta_0\sin\Omega t\,(\hat{d}\times\hat{z}').$
Supposing that this torque is small enough that any precession produced is of frequency much slower than Ω, we can then average the torque over time; recalling the time averages of trigonometric functions and their products ($\langle\sin^2\Omega t\rangle = \tfrac{1}{2}$, $\langle\sin\Omega t\cos\Omega t\rangle = 0$), we get average torque
$\langle\vec{\tau}\rangle = \frac{3GM}{2d^3}(I_{z'} - I_{x'})\sin\theta_0\cos\theta_0\,\hat{x}.$
Recalling that our object has angular momentum $\vec{L} = I_{z'}\omega_z\hat{z}'$ along the z' axis, we see then that our average torque is perpendicular to our angular momentum, and to our z axis (as it is along the cross product $\hat{z}'\times\hat{z}$). Thus, as in here, we have precession of our spheroid's rotation about the z axis (so θ0 is constant), and from our previous work on torque-driven precession, we see that the precession has angular frequency
$\omega_p = -\frac{3GM}{2d^3}\,\frac{I_{z'} - I_{x'}}{I_{z'}}\,\frac{\cos\theta_0}{\omega_z}.$
(The negative sign indicates that the direction here is opposite the sense of the rotation ωz.) Now, supposing our spheroid has total mass ME, then Kepler's third law for our spheroid-point mass orbit tells us that the period T of the orbit is
$T = 2\pi\sqrt{\frac{d^3}{G(M + M_E)}}.$
Since $\Omega = \frac{2\pi}{T}$, we find $\Omega^2 = \frac{G(M + M_E)}{d^3}$, which lets us rewrite the precession frequency in terms of the orbital frequency and the ratio of the masses $\frac{M}{M + M_E}$:
$\omega_p = -\frac{3\Omega^2}{2\omega_z}\,\frac{I_{z'} - I_{x'}}{I_{z'}}\,\frac{M}{M + M_E}\cos\theta_0.$
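As a sanity check (my own addition, not from the original post), plugging rough Sun-Earth values into the final formula reproduces the right order of magnitude for the solar contribution to the precession of the equinoxes. All constants below are assumed standard values.

```r
# Solar contribution to Earth's axial precession from
# omega_p = (3*Omega^2 / (2*omega_z)) * H * cos(theta0) * M/(M+ME),
# with M/(M+ME) ~ 1 for the Sun. H = (Iz' - Ix')/Iz' is Earth's
# dynamical ellipticity (assumed value ~0.00327).
Omega   <- 2 * pi / (365.256 * 86400)  # orbital angular frequency [rad/s]
omega_z <- 2 * pi / 86164              # spin angular frequency [rad/s]
H       <- 0.00327
theta0  <- 23.44 * pi / 180            # obliquity [rad]
omega_p <- (3 * Omega^2 / (2 * omega_z)) * H * cos(theta0)
2 * pi / omega_p / (365.25 * 86400)    # period in years: ~81,000
# The Moon's larger contribution brings the combined lunisolar
# precession period down to the observed ~26,000 years.
```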
Oscillations

# A taut string for which $\;\mu=5\times 10^{-2} \;\mathrm{kg/m}\;$ is under a tension of 80 N. How much power must be supplied to the string to generate a sinusoidal wave at a frequency of 60 Hz and an amplitude of 6 cm? $(a)\;512\;W\qquad(b)\;500\;W\qquad(c)\;640\;W\qquad(d)\;746\;W$

Answer : (a) $\;512\;W$

Explanation :

Velocity: $v = \sqrt{\large\frac{T}{\mu}}=\sqrt{\large\frac{80}{5 \times 10^{-2}}}=40\;m/s$

$f=60\;Hz$, so the angular frequency is $\omega = 2 \pi f = 2 \pi \times 60 \approx 377\;s^{-1}$

Power: $P = \large\frac{\mu \omega^2 A^2 v}{2}$

$=\large\frac{1}{2}\times (5 \times 10^{-2}) \times(377)^{2} \times (6 \times 10^{-2})^{2} \times 40$

$\approx 512\;W$
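The arithmetic here is easy to slip on (note that $\omega$ enters squared), so here is a minimal numeric check of $P=\tfrac{1}{2}\mu\omega^{2}A^{2}v$ with the problem's values:

```python
import math

mu, T, f, A = 5e-2, 80.0, 60.0, 6e-2    # kg/m, N, Hz, m
v = math.sqrt(T / mu)                   # wave speed: 40 m/s
omega = 2 * math.pi * f                 # angular frequency: ~377 rad/s
P = 0.5 * mu * omega**2 * A**2 * v      # average power carried by the wave
print(round(P), "W")                    # -> 512 W, answer (a)
```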
News

# Tadiran Batteries and Hexagram Sign $18 Million Lithium Battery Contract for Automatic Meter Reading

January 16, 2007 by Jeff Shepard

Tadiran Batteries Ltd. has been awarded a five-year, $18 million contract for the supply of lithium batteries from Hexagram, Inc., a U.S.-based manufacturer of Automatic Meter Reading (AMR) systems for water, gas and electric utilities. Hexagram and Tadiran Batteries have a long relationship, with Hexagram’s first AMR units entering the 23rd year of operation with the original Tadiran batteries. AMR manufacturers around the world often choose Tadiran Batteries’ lithium cells, which the company claims have high capacities, very low self-discharge rates and wide operating temperature ranges. Tadiran claims that its products have proven to last over 20 years in the field without the need for replacing or recharging.
In mathematics, a Lindelöf space is a topological space in which every open cover has a countable subcover. A Lindelöf space is a generalization of the more commonly used notion of compactness, which requires that the subcover be finite. Lindelöf spaces are named for the Finnish mathematician Ernst Leonard Lindelöf.

## Properties of Lindelöf spaces

In general, no implications hold (in either direction) between the Lindelöf property and other compactness properties, such as paracompactness. But by the Morita theorem, every regular Lindelöf space is paracompact. Also, any second-countable space is a Lindelöf space, but not conversely. However, the matter is simpler for metric spaces: a metric space is Lindelöf if and only if it is separable, if and only if it is second-countable.

An open subspace of a Lindelöf space is not necessarily Lindelöf. However, a closed subspace must be Lindelöf. The Lindelöf property is preserved by continuous maps. However, it is not necessarily preserved by products, not even by finite products.

## Product of Lindelöf spaces

The product of Lindelöf spaces is not necessarily Lindelöf. The usual example of this is the Sorgenfrey plane S, which is the product of R under the half-open interval topology with itself. Open sets in the Sorgenfrey plane are unions of half-open rectangles that include the south and west edges and omit the north and east edges, including the northwest, northeast, and southeast corners. Consider the open covering of S which consists of:

1. The set of all points (x, y) with x + y > 0
2. The set of all points (x, y) with x + y < 0
3. For each real x, the half-open rectangle [x, x + 2) × [−x, −x + 2)

The thing to notice here is that each rectangle [x, x + 2) × [−x, −x + 2) covers exactly one of the points on the line x = −y, namely (x, −x). None of the points on this line is included in any of the other sets in the cover, so any subcover must retain all of the uncountably many rectangles; in particular, the cover has no countable subcover.

## Generalisation

The following definition generalises the definitions of compact and Lindelöf: a topological space is κ-compact, where κ is any cardinal, if every open cover has a subcover of cardinality strictly less than κ. Compact is then $\aleph_0$-compact and Lindelöf is then $\aleph_1$-compact.
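To make the key step fully explicit, here is a short verification, added for clarity and using only the definitions above, that each rectangle meets the antidiagonal in exactly one point: a point $(t, -t)$ of the line $x = -y$ lies in the rectangle indexed by $x$ iff

$$t\in[x,\,x+2)\quad\text{and}\quad -t\in[-x,\,-x+2)\;\Longleftrightarrow\; t\in[x,\,x+2)\cap(x-2,\,x]=\{x\},$$

so the rectangle picks out exactly the point $(x, -x)$, and all uncountably many rectangles are needed in any subcover.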
# Tag Info

Let $V$ have basis $e_1, \ldots, e_n$. There is a basis $\delta_1, \ldots, \delta_n$ of $V^\vee$, called the dual basis, characterized by the property $\delta_i(e_j) = \begin{cases}1 & \text{if }i=j \\ 0 & \text{otherwise.}\end{cases}$ The element $\mathrm{id} \in V \otimes V^\vee$ corresponding to the identity $V \to V$ is then $\sum_{i=1}^n e_i \otimes \delta_i$.
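A one-line check, added here for completeness, that this element really corresponds to the identity: under the usual isomorphism $V \otimes V^\vee \cong \operatorname{Hom}(V, V)$ sending $v \otimes \varphi$ to the map $w \mapsto \varphi(w)\,v$, we get

$$e_j \longmapsto \sum_{i=1}^n \delta_i(e_j)\, e_i = e_j \quad (j = 1, \ldots, n),$$

which is exactly the identity map.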
# STARS2019 / SMFNS2019 Cuba

#### Museum Casa de los Árabes

Oficios 16, Havana 10100, Old Havana

Description

The events are the fifth and sixth in a series of meetings gathering scientists working on astroparticle physics, cosmology, gravitation, nuclear physics, and related fields. As in previous years, the meeting sessions will consist of invited and contributed talks and will cover recent developments in the following topics:

STARS2019: New phenomena and new states of matter in the Universe, general relativity, gravitation, cosmology, heavy ion collisions and the formation of the quark-gluon plasma, white dwarfs, neutron stars and pulsars, black holes, gamma-ray emission in the Universe, high energy cosmic rays, gravitational waves, dark energy and dark matter, strange matter and strange stars, antimatter in the Universe, and topics related to these.

SMFNS2019: Strong magnetic fields in the Universe, strong magnetic fields in compact stars and in galaxies, ultra-strong magnetic fields in neutron star mergers, quark stars and magnetars, strong magnetic fields and the cosmic microwave background, and topics related to these.

As part of the events, the school Relativistic Astrophysics and Connected Problems will be held at ICIMAF, Havana, 3 - 4 May, for students and young researchers. Registration and participation in the school are free of charge. The Professor Walter Greiner Award will be granted to the best three posters presented by students at the conferences.

Organizing Committee: • Aurora Pérez Martínez – Instituto de Cibernética, Matemática y Física (ICIMAF), Cuba - Co-Chair • Alejandro Cabo Montes de Oca – Instituto de Cibernética, Matemática y Física (ICIMAF), Cuba • Bruna Cesira Folador – Universidade Federal do Rio Grande do Sul (UFRGS), Brazil • César A.
Zen Vasconcellos – Universidade Federal do Rio Grande do Sul (UFRGS), Brazil - Chair • Christian Motch – Centre National de la Recherche Scientifique (CNRS), France • Daryel Manreza Paret – Universidad de La Habana (UH), Cuba • Diana Alvear Terrero – Instituto de Cibernética, Matemática y Física (ICIMAF), Cuba and University of Wroclaw (UWr), Poland • Elizabeth Rodríguez Querts – Instituto de Cibernética, Matemática y Física (ICIMAF), Cuba • Gabriella Piccinelli – Universidad Nacional Autónoma de México (UNAM), Mexico • Gretel Quintero Angulo – Universidad de La Habana (UH), Cuba • Hugo Pérez Rojas – Instituto de Cibernética, Matemática y Física (ICIMAF), Cuba • Marcus Bleicher – HIC for FAIR, Germany • Matthias Kaminski – University of Alabama, USA • Ricardo González Felipe – Instituto Superior de Engenharia de Lisboa (ISEL) and Centro de Física Teórica de Partículas (CFTP)/Instituto Superior Técnico (IST), Lisboa, Portugal - Co-Chair • Thomas Boller – Max-Planck Institute, Germany • Alejandro Ayala – UNAM, Mexico • Carola Dobrigkeit – Universidade de Campinas (UNICAMP), Brazil • Constança Providência – UC, Portugal • Dany Page – UNAM, México • David Blaschke – UWr, Poland • David Valls-Gabaud – CNRS & Observatoire de Paris, France • Débora Peres Menezes – UFSC, Brazil • Dimiter Hadjimichef – UFRGS, Brazil • Eduardo Guendelman – BGU, Israel • Elena Bratkovskaya – GSI/JWGU, Germany • Eric Gourgoulhon – LUTH, CNRS & Observatoire de Paris, France • Ernesto Kemp – UNICAMP, Brazil • Félix Mirabel – IAFE/CONICET, Argentina • Fernando Quevedo – ICTP, Italy • Fridolin Weber – SDSU, USA • Géraldine Conti – CERN, Switzerland • Horst Stoecker – GSI, FIAS & JWGU, Germany • Ignatios Antoniadis – CERN, Switzerland • Joerg Aichelin – SUBATECH, France • José A. de Freitas Pacheco – OCA, France • Jose Francisco Morales – U. Rome Tor Vergata, Italy • Katharina Müller – PIUZ & CERN, Switzerland • Leopoldo Pando Zayas – U-M, USA • Marcela Carena – Fermilab, USA • Marcus Bleicher – HIC for FAIR, Germany • Massimo Della Valle – AOC, Italy • Miguel Alcubierre – UNAM, Mexico • Norberto Scoccola – Tandar/CNEA, Argentina • Norman K. Glendenning – LBNL, USA • Pascal Chardonnet – US, France • Peter O. Hess – UNAM, Mexico • Rafael Nepomechie – UM, USA • Remo Ruffini – ICRANet, Italy • Renxin Xu – PKU, China • Roberto A. Sussman – UNAM, Mexico • Sergei B.
Popov – SAI/MSU, Russia • Siannah Peñaranda Rivas – UNIZAR, Spain • Stefan Schramm – FIAS/JWGU, Germany • Tsvi Piran – HU, Israel • Ulisses Barres de Almeida – CBPF, Brazil Local Organizers: • Aurora Pérez Martínez – ICIMAF, Cuba • Alejandro Cabo Montes de Oca – ICIMAF, Cuba • Daryel Manreza Paret – UH, Cuba • Diana Alvear Terrero – ICIMAF, Cuba and UWr, Poland • Elizabeth Rodríguez Querts – ICIMAF, Cuba • Gretel Quintero Angulo – ICIMAF, Cuba Participants • Alejandro Cabo Montes de Oca • Andres Escala • Angel Sanchez • Aurora Perez Martinez • Christian Spieles • Christian Sturm • Dairon Rodríguez Garcés • Danilo Diaz • Dario Ramirez • Daryel Manreza Paret • David Edwin Alvarez Castillo • Diana Alvear Terrero • Duvier Fontanella • Eduardo Guendelman • Elena Bratkovskaya • Elizabeth Rodríguez Querts • Ernesto Frodden • Ernesto Kemp • Felix Napoleon Diaz Desposorio • Gabriella Piccinelli Bocchi • Gretel Quintero Angulo • Hajime Sotani • Hugo Celso Perez Rojas • Jessica Gullberg • Joerg Aichelin • Jorge Luis Dominguez Martinez • Josue Motoa Manzano • Lismary de la Caridad Suárez González • Marcelo Enrique Rubio • Marco Antonio Arroyo Ureña • Marcus Bleicher • Martin Land • Martin Roth • Miguel Angel Marquina Carmona • Norbert Christlieb • Osvaldo Bezerra Silva Junior • Paula Christine Hillmann • Peter Hess • Ramzi Suleiman • Ricardo Gonzalez Felipe • Rogerio de Almeida • Rolando Cardenas • Samantha López • Satoru Katsuda • Steven Gullberg • Thomas Boller • Tom Reichert • Tomoya Takiwaki • Victor Alexander Torres Sanchez • Yeinzon Rodriguez Garcia • Zhifu Gao Support • Friday, 3 May • 14:00 14:30 SCHOOL REGISTRATION AND OPENING 30m ICIMAF #### ICIMAF • 14:30 15:30 Compact stars for undergraduates, Marcus Bleicher (University Frankfurt, Germany) 1h ICIMAF #### ICIMAF E and 15th street, Vedado, Havana • 15:30 16:00 COFFEE BREAK 30m ICIMAF #### ICIMAF • 16:00 17:00 Cosmic matter in the Lab, Christian Sturm (GSI, Germany) 1h ICIMAF #### ICIMAF • Saturday, 4 May • 10:00 11:00 Accretion disc physics and black holes, Thomas Boller (MPE Garching, Germany) 1h ICIMAF #### ICIMAF • 11:00 11:30 COFFEE BREAK 30m ICIMAF #### ICIMAF • 11:30 12:30 Introduction to modern hydrodynamics, Markus Garbiso (University of Alabama, USA) 1h ICIMAF #### ICIMAF • Sunday, 5 May • 12:00 18:00 ARRIVAL OF PARTICIPANTS 6h • 18:00 20:00 REGISTRATION 2h Hotel "Palacio del Marqués de San Felipe y Santiago de Bejucal" #### Hotel "Palacio del Marqués de San Felipe y Santiago de Bejucal" • Monday, 6 May • 09:00 09:20 OPENING STARS2019 20m • 09:20 10:00 Education and Cultural Astronomy 40m The importance of including a thorough search for the potential use of astronomy in the examination of any culture is discussed. Educational strategies are included that can enable scholars to add related research knowledge that will enable them to augment their studies in this pursuit. What can be learned from astronomy in culture is examined and the importance of including such research as a part of certain studies is emphasized. A primary goal is to help scholars to learn more about the research of astronomy in culture with the goal of increasing the numbers of those engaged with this in strong research and publication. Educational strategies and emerging programs will be discussed. Such educational initiatives will greatly strengthen this research in the future and will facilitate significant advancements in what we know about the astronomy of ancient and indigenous cultures world-wide. 
Speaker: Steven Gullberg (University of Oklahoma) • 10:00 10:20 Exact magnetic contribution to a one-loop charged scalar field potential 20m In the context of a warm inflation scenario, we explore the effect of a primordial magnetic field on a charged scalar field potential. Speaker: Gabriella Piccinelli Bocchi (Centro Tecnológico, FES Aragón, UNAM) • 10:20 10:40 Black holes fueling and coalescence in galaxy mergers 20m Using a combination of Smoothed Particle Hydrodynamics and Adaptive Mesh Refinement simulations of galaxy mergers, with sub-parsec scale resolution, we have studied both the mass transport process onto the massive black holes throughout a galactic merger and, especially, the possible black hole coalescence at the galactic center. The final coalescence of these black holes leads to gravitational radiation emission that would be detectable up to high redshift by future gravitational wave experiments such as eLISA, which is expected to be launched in 2034. Speaker: Andrés Escala (Universidad de Chile) • 10:40 11:30 COFFEE BREAK AND POSTER SESSION 50m • 11:30 12:00 A breakthrough for the study of resolved stellar populations with ELTMOS/MOSAIC 30m The study of resolved stellar populations in nearby galaxies outside of the Local Group has come within reach with the new generation of extremely large telescopes, featuring primary mirror diameters on the order of 30 meters. ELT, the European Extremely Large Telescope, is currently being built at the Armazones site in the Atacama desert of Chile. From the instrumentation suite for the ELT, the multi-object spectrograph ELT-MOS stands out with the capability of combining the large light collecting power with adaptive optics over the entire field-of-view of the ELT, thus becoming the perfect instrument to study the spectra of resolved stars in galaxies beyond the Milky Way and the Local Group. With an emphasis on the stellar science case, the instrument at the stage of the completed Phase-A study will be presented and discussed, in particular with a focus on the synergy potential with the MICADO imager at the ELT and the MUSE IFU at the VLT. Speaker: Roth Martin (Leibniz-Institut für Astrophysik Potsdam (AIP)) • 12:00 12:30 Observation of r-process abundance patterns in stars 30m Stars conserve in their atmospheres, to a large extent, the chemical composition of the gas cloud from which they formed. The chemical compositions of old, metal-poor stars in the halo of our galaxy can hence be used for reconstructing the chemical enrichment history of the Milky Way, and studying the nucleosynthesis processes that contributed to the enrichment. For example, a unique abundance signature of the rapid neutron-capture process (r-process) has been observed in metal-poor stars strongly enriched in r-process elements, providing constraints on r-process models and the physical conditions of the site of this process. In my talk I will review the recent progress that has been made in identifying large samples of metal-poor stars by means of wide-angle sky surveys, determinations of their chemical compositions with optical high-resolution spectra and state-of-the-art stellar model atmospheres, and future prospects in the era of 4-10m telescopes equipped with highly multiplexed spectrographs, as well as the next generation of large ground-based telescopes currently under construction. Speaker: Norbert Christlieb (Universität Heidelberg) • 12:30 14:30 LUNCH 2h • 14:30 14:50 Can the symmetry breaking in the SM be determined by the “second minimum” of the Higgs potential?
20m The possibility that the spontaneous symmetry breaking in the Standard Model (SM) may be generated by the Top-Higgs Yukawa interaction (which determines the so-called “second minimum” in the SM) is investigated. A former analysis of a QCD action including only the Yukawa interaction of a single quark with a scalar field is here extended. We repeat the calculation done in that study of the two-loop effective action for the scalar field of the mentioned model. A correction of the former evaluation allowed us to select a strong coupling $\alpha(m,\Lambda_{QCD}) = 0.2254$ at an intermediate scale $\mu = 11.63$ GeV, in order to fix the minimum of the potential at a scalar mean field determining 175 GeV for the quark mass. Next, a scalar field mass m = 44 GeV is evaluated, which is also of the order of the experimental Higgs mass. The work also considers the effects of employing a strong coupling running with momentum. For this purpose, the finite part of the two-loop potential contribution determined by the strong coupling was represented as a momentum integral. Next, substituting in this integral the experimental values of the running coupling, the minimum of the potential curve as a function of the mean field was again fixed to the top quark mass by reducing the scale to the value $\mu = 4.95$ GeV. The consideration of the running coupling also deepened the potential value at the minimum and slightly increased the mass of the scalar field up to 53.58 GeV. These results rested on assuming that the low-momentum dependence of the coupling is “saturated” at a constant value close to its experimental value at the lowest momentum measured. Speaker: Alejandro Cabo (Department of Theoretical Physics) • 14:50 15:10 Magnetic field-dependence of the neutral pion mass in the linear sigma model coupled to quarks: The weak field case 20m We compute the neutral pion mass dependence on a magnetic field in the weak field approximation at one-loop order. The calculation is carried out within the linear sigma model coupled to quarks and using Schwinger's proper-time representation for the charged particle propagators. We find that the neutral pion mass decreases with the field strength provided the boson self-coupling magnetic field corrections are also included. The calculation should be regarded as setting the trend for the neutral pion mass as the magnetic field is turned on. • 15:10 15:30 Exact configurations for interacting spin-2 fields in three dimensions 20m We studied some exact configurations for the three-dimensional massive multi-gravity theory called "Viel-dreibein gravity". We find AdS wave solutions (which reflect the main dynamic properties of the model) and analyze their asymptotic behavior. In addition, we explore the existence of black holes in the context of this theory. Speaker: Elizabeth Rodríguez Querts (ICIMAF) • 15:30 15:50 Cosmogenic photon and neutrino fluxes in the Auger era 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel The interaction of ultra-high-energy cosmic rays (UHECRs) with pervasive photon fields generates associated cosmogenic fluxes of neutrinos and photons due to photohadronic and photonuclear processes taking place in the intergalactic medium. We perform a fit of the UHECR spectrum and composition measured by the Pierre Auger Observatory for four source emissivity scenarios: power-law redshift dependence with one free parameter, active galactic nuclei, gamma-ray bursts, and star formation history.
We show that negative source emissivity evolution is favoured if we treat the source evolution as a free parameter. In all cases, the best fit is obtained for relatively hard spectral indices and low maximal rigidities, for compositions at injection dominated by intermediate nuclei (nitrogen and silicon groups). In light of these results, we calculate the associated fluxes of neutrinos and photons. Finally, we discuss the prospects for the future generation of high-energy neutrino and gamma-ray observatories to constrain the sources of UHECRs. Speaker: Rogerio de Almeida (Universidade Federal Fluminense) • 15:50 16:10 Effective potential of a higher derivative scalar field theory at finite temperature 20m In this contribution, I present the study of the effect of higher derivative terms in the effective potential of a scalar field theory. Preliminary results indicate that quantum correction coming from the higher derivative terms make the curvature of the effective potential, near the origin, becomes flatter. I will discuss how this result could be interesting within a warm inflation scenario. Speaker: Angel Sánchez (Facultad de Ciencias, UNAM) • 16:10 16:30 Charges and torsion 20m We review the surface charge method in the Einstein-Cartan formalism and study in particular the role of torsion in the computation of charges. An example in 2+1 gravity is worked out explicitly and some advances in the Einstein-Cartan-Dirac theory are presented. Speaker: Ernesto Frodden • 16:30 17:00 COFFEE BREAK AND POSTER SESSION 30m • 17:00 19:00 FREE 2h • 19:00 20:00 WELCOME COCKTAIL 1h Ambos Mundos Hotel #### Ambos Mundos Hotel • Tuesday, 7 May • 09:00 09:20 Scale Invariance in Cosmology and Particle Physics using metric independent measures of integrations in the action 20m Abstract The use of a metric independent measure of integration in the action opens new possibilities for constructing globally scale invariant theories, since the new measure can be assigned a different scaling transformation than the usual metric dependent measure sqrt(-g). There are various ways to construct a density that can serve as a metric independent measure of integration, from the derivatives of 4 scalar fields or the derivative of a three index tensor field contracted with the alternating symbol. The integration of the equations of motion of these "measure fields" leads to the spontaneous breaking of the scale invariance. A dilaton field with exponential potentials is added and coupled to the different measures. In the effective Einstein frame, potentials for the dilaton with flat regions appear, if curvature square terms are introduced, two flat regions appear, one capable of describing inflation and the other describing the slowly accelerated phase of the present universe. These models allow non singular cosmologies of the emergent type. In the context of the late universe, it is shown that the scale invariance is responsible for the avoidance of the 5th force problem that could have appeared in connection with the nearly massless dilaton. Also a see saw cosmological mechanism that could explain the smallness of the present vacuum energy can be formulated. Finally these techniques have been used to formulate scale invariant extensions of the Standard Model. 
Speaker: Eduardo Guendelman (Ben Gurion University) • 09:20 09:40 Divergence-type hyperbolic theories for ultrarelativistic fluids 20m In this talk I will present a novel theory with the aim of describing the dynamics of ultrarelativistic fluids, considering dissipative effects up to second order. The problem of achieving a covariant relativistic extension of the equations that describe non-relativistic dissipative fluids constitutes a very active area of current research, given that a well-posed and causal theory of viscous fluids is essential for a better description of several astrophysical problems, as for example the coalescence of compact objects, which constitutes nowadays the main source of gravitational wave production. After mentioning previous attempts at covariant extensions of viscous fluids, we will present a proposal for the study of the dynamic evolution of ultrarelativistic fluids. Then, we will show how to implement the equations of the theory numerically, using the Kurganov-Tadmor centered method, which allows capturing discontinuous solutions that simulate shock waves, and show some simulations in the one-dimensional case. Speaker: Marcelo Enrique Rubio (IATE - CONICET) • 09:40 10:00 Susceptibilities of strongly interacting matter in a finite volume 20m We investigate possible finite-volume effects on baryon number susceptibilities of strongly interacting matter. Assuming that a hadronic and a deconfined phase both contribute to the thermodynamic state of a finite system due to fluctuations, it is found that the resulting shapes of the net-baryon number distributions deviate significantly from the infinite volume limit for a given temperature T and baryochemical potential μ_B. In particular, the constraint on color-singletness for the finite quark-gluon phase contribution leads to a change of the temperature dependence of the susceptibilities in finite volumes. According to the model, the finite-volume effect depends qualitatively on the value of μ_B. Speaker: Christian Spieles (Frankfurt Institute for Advanced Studies (FIAS)) • 10:00 10:20 Directed, elliptic and triangular flow of free protons and deuterons in Au+Au reactions at 1.23 A GeV 20m Recently, the HADES experiment at GSI has provided preliminary data on the directed flow $v_1$, elliptic flow $v_2$ and triangular flow $v_3$ of protons in Au+Au reactions at a beam energy of 1.23 A GeV. Here we present a theoretical discussion of these flow harmonics within the UrQMD transport approach. We show that all flow harmonics, including the triangular flow, provide a consistent picture of the expansion of the system, if potential interactions are taken into account. Cluster formation has a large contribution to the physics of collective flow. Therefore, the flow of deuterons and free protons are compared. Investigating the dependence of the flow harmonics on the nuclear interaction potentials, it is shown that especially $v_3$ can serve as a sensitive probe for the nuclear equation of state at such low energies. The triangular flow and its excitation function with respect to the reaction plane were calculated for the first time and indicate a complex interplay of the time-evolution of the system and the initial conditions at low beam energies. Our study also indicates a significant softening of the equation of state at beam energies above $E_{lab} > 7$ A GeV which can be explored at the future FAIR facility.
Speaker: Paula Hillmann • 10:20 10:40 Delta mass shift as a thermometer of kinetic decoupling in Au+Au reactions at 1.23 AGeV 20m The HADES experiment at GSI will soon provide data on the production and properties of ∆ baryons from Au+Au reactions at 1.23 AGeV. Using the UrQMD model, we predict the yield and spectra of ∆ resonances. In addition we show that one expects to observe a mass shift of the ∆ resonance on the order of 40 MeV in the reconstructable ∆ mass distribution. This mass shift can be understood in terms of late stage ∆ formation with limited kinetic energy. We show how the mass shift can be used to constrain the kinetic decoupling temperature of the system. Speaker: Tom Reichert (Institut für Theoretische Physik, Goethe Universität Frankfurt) • 10:40 11:00 COFFEE BREAK AND POSTER SESSION 20m • 11:00 11:30 The phase diagram of the Polyakov-Nambu-Jona-Lasinio approach 30m Recently we succeeded, by introducing an interaction between the gluon mean field (presented by the a function of the Polyakov loop) and quarks, to reproduce the lattice equation of state for zero chemical potential with the Polyakov-Nambu-Jona-Lasinio model. Also, entropy density, interaction measure, energy density and the speed of sound are quite nicely reproduced. Even the first coefficient of the Taylor expansion of the lattice data with respect to the chemical potential is in the error bars of the lattice calculations. These findings are of great importance for future studies of heavy ion reactions because the Polyakov-Nambu-Jona-Lasinio model can be extended to finite chemical potentials (where lattice calculations are not possible) without introducing any new parameter. In addition, it shows at large chemical potentials a first order phase transition. It provides therefore a basis for theoretical studies in the energy range of the future FAIR and NICA facilities where one expects that heavy ion collisions are characterized by a large chemical potential. It may also serve as a equation of state for gravitational wave studies. Speaker: Joerg Aichelin (Subatech/CNRS, France) • 11:30 12:00 Exploring the partonic phase at finite chemical potential within an extended off-shell transport approach 30m We extend the Parton-Hadron-String Dynamics (PHSD) transport approach in the partonic sector by explicitly calculating the total and differential partonic scattering cross sections as a function of temperature $T$ and baryon chemical potential $\mu_B$ on the basis of the effective propagators and couplings from the Dynamical QuasiParticle Model (DQPM) that is matched to reproduce the equation of state of the partonic system above the deconfinement temperature $T_c$ from lattice QCD. The novel transport approach (PHSD5.0) thus incorporates no additional parameters compared to the default version PHSD4.0. We calculate the collisional widths for the partonic degrees of freedom at finite $T$ and $\mu_B$ in the time-like sector and conclude that the quasiparticle limit holds sufficiently well. Furthermore, the ratio of shear viscosity $\eta$ over entropy density $s$, i.e. $\eta/s$, is evaluated using the collisional widths and compared to lQCD calculations for $\mu_B$ = 0 as well. We find that the novel ratio $\eta/s$ does not differ very much from that calculated within the original DQPM on the basis of the Kubo formalism. Furthermore, there is only a very modest change of $\eta/s$ with the baryon chemical $\mu_B$ as a function of the scaled temperature $T/T_c(\mu_B)$. 
This also holds for a variety of hadronic observables from central A+A collisions in the energy range 5 GeV $\leq\sqrt{s_{NN}} \leq$ 200 GeV when implementing the differential cross sections into the PHSD approach. We only observe small differences in the antibaryon sector (${\bar p}, {\bar \Lambda}+{\bar \Sigma}^0$) at $\sqrt{s_{NN}}$ = 17.3 GeV and 200 GeV with practically no sensitivity of rapidity and $p_T$ distributions to the $\mu_B$ dependence of the partonic cross sections. Small variations in the strangeness sector are obtained in all studied collisional systems (A+A and C+Au), however, it will be very hard to extract a robust signal experimentally. Since we find only small traces of a $\mu_B$-dependence in heavy-ion observables - although the effective partonic masses and widths as well as their partonic cross sections clearly depend on $\mu_B$ - this implies that one needs a sizable partonic density and large space-time QGP volume to explore the dynamics in the partonic phase. These conditions are only fulfilled at high bombarding energies where $\mu_B$ is, however, rather low. On the other hand, when decreasing the bombarding energy and thus increasing $\mu_B$, the hadronic phase becomes dominant and accordingly, it will be difficult to extract signals from the partonic dynamics based on "bulk" observables. Speaker: Elena Bratkovskaya (GSI, Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany and Institute for Theoretical Physics, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Germany) • 12:00 12:30 Deuteron production in heavy ion collisions 30m In this talk, we discuss UrQMD phase-space coalescence calculations for the production of deuterons. We compare with available data for various reactions from the GSI/FAIR energy regime up to LHC. It is found that the production process of deuterons, as reflected in their rapidity and transverse momentum distributions in p+p, p+A and A+A collisions at a beam energies starting from the GSI energy regime around 1 AGeV and up to the LHC, are in good agreement with experimental data. We further explore the energy and centrality dependence of the d/p ratios. Finally, we discuss anti-deuteron production for selected systems. Overall, a good description of the experimental data is observed. The results are also compatible with thermal model estimates. We also discuss the production of hypermatter within the same approach and find sizable production rates at FAIR. Speaker: Marcus Bleicher (Uni Frankfurt) • 12:30 14:30 LUNCH 2h Paladar "La Moneda Cubana" • 14:30 14:50 One-loop divergences in 7D Einstein and 6D conformal gravities 20m Within the context of AdS/CFT Correspondence, we first compute one-loop infrared (IR) divergences of 7D Einstein Gravity in a certain Poincaré-Einstein background metric. We compute then the one-loop ultraviolet (UV) divergences of 6D Conformal Gravity on the boundary. We verify the equality of the above results that stems from the IR-UV connection of the duality dictionary. Key ingredients are heat kernel techniques, factorization of the boundary higher-derivative kinetic operator for the Weyl graviton on the 6D boundary Einstein metric and WKB-exactness of the Einstein graviton in the chosen 7D Poincaré-Einstein background. In all, we elucidate the way in which the 6D results containing the type-A and type-B conformal anomalies for the Weyl graviton are encoded in the 7D "hologram" given by the fluctuation determinant for the Einstein graviton. 
We finally discuss possible extensions to include higher-spin fields. Speaker: Danilo Diaz (Universidad Andrés Bello) • 14:50 15:10 Generalized SU(2) Proca inflation 20m The generalized SU(2) Proca theory is nowadays the only modified gravity theory able to accommodate in a natural way a configuration of vector fields which is compatible with the homogeneous and isotropic nature of our Universe. In previous works, we have been able to uncover a self-tuning mechanism that drives an eternal slow-roll inflationary period for an ample spectrum of initial conditions. We have made a small and justified modification to the action so that the mentioned self-tuning mechanism is preserved but now the inflationary period has a graceful exit and is long enough to solve the classical problems of standard cosmology. The action is free of tachyonic, ghost, and Laplacian instabilities, and, in addition, provides a non-anomalous speed for the gravity waves. The usual naturalness problem of primordial inflation in this scenario is, therefore, essentially absent. Speaker: Yeinzon Rodriguez Garcia (UAN & UIS (Colombia)) • 15:10 15:30 Lie-Backlund transformations for residual symmetries in General Relativity 20m Lie-Backlund transformations have been used to extend the criteria proposed by Ayón-Beato and Velázquez-Rodríguez for characterizing the residual symmetries of the gravitational ansatz developed according to Lie-point transformations. We found that non-local Lie-Backlund transformations allow us to obtain the most general residual symmetries of the metric. We present the generalized criteria for finding all residual symmetries for any metric ansatz in general relativity. Speaker: Miguel Angel Marquina Carmona (CINVESTAV IPN) • 15:30 15:50 Correlation functions of sourced gravitational waves in inflationary scalar vector models. A symmetry based approach 20m In this work we use the correspondence between a field theory in de Sitter space in 4 dimensions and the dual conformal field theory in a Euclidean space in 3 dimensions to compute the form of two- and three-point correlation functions of scalar-tensor perturbations. To this end, we use an inflationary model in which the inflaton field interacts with a vector field through the term $f(\phi)\left(F_{\mu \nu}F^{\mu \nu}+\kappa\tilde{F}_{\mu \nu}F^{\mu \nu}\right).$ The first step of this method consists in solving the equations of motion for the fields in the de Sitter 4D space-time, then evaluating these solutions on super-Hubble scales and computing the conformal weight of the projection of these fields in the 3D space. In a second stage, we propose a general form for the correlators, which involve scalar, vector and tensor perturbations and, using the first-step result, find their momentum dependence by imposing that they are invariant under dilatations and special conformal transformations (SCT). As a result, we find the form of the different spectra of the tensor perturbations and of a mixed bispectrum coming from the vacuum and from the vector perturbations. They are shown to be in agreement with the results in the literature. Speaker: Josue Motoa Manzano (Universidad del Valle) • 15:50 16:10 The symmetry energy in neutron stars: constraints from GW170817 and direct Urca cooling 20m In this contribution I will review the state-of-the-art measurements of the symmetry energy from both astrophysical and terrestrial laboratories.
In particular the recent detection of gravitational radiation from the GW170817 event shed light on the properties of the neutron star equation of state, thus comprising both the study of the symmetry energy and stellar radius. Furthermore, I shall address the question on the possibility of a universal symmetry energy contribution to the neutron star equation of state under restricted Direct Urca cooling. When these two aspects are combined, powerful predictions for the neutron star equation of state are obtained. Speaker: David Edwin Alvarez Castillo (JINR) • 16:10 16:30 Stueckelberg-Horwitz-Piron (SHP) classical mechanics with evolving local metric 20m Stueckelberg-Horwitz-Piron (SHP) theory is a framework for posing classical and quantum relativistic physics in canonical form with an external parameter of evolution $\tau$. SHP electrodynamics generalizes Maxwell theory by allowing the four-vector potential to depend on $\tau$ and introducing a scalar gauge potential $a_5(x,\tau)$ associated with this $\tau$-dependence. As a result, current conservation, wave equations, and other scalar expressions suggest a formal 5D symmetry that breaks to tensor and scalar representations of O(3,1) in the presence of 4D matter. Following a similar approach, this electrodynamic theory has recently been extended to non-abelian gauge symmetries and to the classical and quantum many-body problem in curved 4D spacetime with local metric $g_{\mu\nu}(x)$, for $\mu,\nu = 0,1,2,3$. In this talk we examine another extension of classical SHP mechanics by allowing the local metric to be $\tau$-dependent and introducing new metric components associated with $\tau$ evolution. In order to obtain a reasonable prescription for this generalization, consistent with an extended equivalence principle, the breaking of formal 5D tensor symmetries must be treated in detail. This extension permits us to describe particle motion in geodesic form with respect to a dynamically evolving background metric. As an example, we consider the field produced by a $\tau$-dependent mass $M(\tau)$, first as a perturbation in the Newtonian approximation and then for a Schwarzschild-like metric. As expected, the extended Einstein equations imply a non-zero energy-momentum tensor, representing the flow of mass energy corresponding to the changing source mass. Moreover, the Hamiltonian (the scalar system mass) is driven by terms proportional to $dM / d\tau$ and is not conserved. In $\tau$-equilibrium, this system becomes a generalized Schwarzschild solution for which the extended Ricci tensor and mass-energy-momentum tensor vanish. • 16:30 16:50 COFFEE BREAK AND POSTER SESSION 20m • 16:50 17:10 On the dynamics of rotationally supported galaxies 20m A recent finding, based on empirical data of 153 rotationally supported galaxies, with very different morphologies, masses, sizes, and gas fractions, revealed that the baryonic and the dark matter in galaxies are strongly coupled, such that, if the first is known, the second follows and vice versa. Here, we propose a completely theoretical analysis of the dynamics of rotationally supported galaxies, which results in the same conclusion. We find that the relationship between baryonic and dark matter densities at any radius r is governed by the law, ρ(r)_M + ρ(r)_DM = ρ_0, where ρ(r)_M, and ρ(r)_DM are, respectively, the densities of matter and dark matter at radius r, and ρ_0 is the density at the galaxy’s center. 
Strikingly, we also found that the radius r_s, at which the rotation velocity is equal to half of its maximal value (or alternatively the radius r_c at which the baryonic matter density is equal to half of its density at the galaxy’s center), constitutes a vivid signature of the galaxy, in the sense that it reveals rich information about the galaxy’s dynamics, including the distribution of its matter and dark matter and their total amounts in the galaxy. Speaker: Ramzi Suleiman (Triangle Research & Development Center) • Wednesday, 8 May • 09:00 09:20 A hybrid model for pulsar evolution 20m The combined effects of both the standard magnetic dipole model and the composite neutron superfluid vortex model on the energy loss rate of neutron stars and pulsar spin-down are simultaneously taken into account to study the evolution of neutron stars on the $P$-$\dot{P}$ diagram. The evolution path of each neutron star is dictated by a particular mechanism in our hybrid model in different parameter spaces, and the valley of each curve is the most probable place for a neutron star to be observed, since this is the place which corresponds to the minimum value of the evolution speed (i.e. the time derivative $\dot{P}$). In other words, pulsars would cluster around these valleys on the $P$-$\dot{P}$ diagram. The combined model can be fitted very well with observation to yield the interesting results: (1) the suppressed region in the lower-right part of the diagram can be explained by neutrino cyclotron emission from the $^1S_0$ neutron superfluid vortices in neutron stars. (2) All radio pulsars that were previously identified with super-strong magnetic fields, with field strength beyond the critical quantum magnetic field, are now all lying inside the critical magnetic field line in our model. (3) The peak of the neutron star magnetic field ($\log B$) distribution reveals a Gaussian distribution in our model, whereas the statistics of the simple magnetic dipole model results in a distribution with a non-symmetric peak. Speaker: Zhi Fu Gao (Xinjiang Astronomical Observatory, Chinese Academy of Sciences) • 09:20 09:40 Efficient cosmic-ray acceleration at reverse shocks in supernova remnants 20m When a supernova explodes, a blast wave is generated and propagates into the ambient medium, whereas the deceleration of the ejecta by the ambient medium induces an inward-propagating shock wave, the so-called reverse shock (RS). If the RSs can efficiently accelerate cosmic rays, then they can be important production sites of heavy-element cosmic rays. We present evidence for efficient cosmic-ray acceleration at reverse shocks in young Galactic supernova remnants including Cassiopeia A and RCW 86, based on recent X-ray observations with Chandra. Speaker: Satoru Katsuda (Saitama University) • 09:40 10:00 Crustal torsional oscillations inside the deeper pasta structures 20m The quasi-periodic oscillations (QPOs) observed in the soft-gamma repeaters are generally considered as a result of the global oscillations of the neutron stars. In this study, we first take into account the torsional oscillations excited in the tube and bubble phases, which can be excited independently of the oscillations in the phases of spherical and cylindrical nuclei, and successfully identify the observed QPO frequencies with such torsional oscillations. The resultant neutron star models are consistent with the mass formula for low-mass neutron stars and the constraint from the gravitational waves from the merger of the neutron star binary, GW170817.
Speaker: Hajime Sotani (National Astronomical Observatory of Japan) • 10:00 10:20 Gravitational waves emitted from core-collapse supernovae 20m The gravitational wave signal from a core-collapse supernova is the key to understanding the mechanism of core-collapse supernovae. The evolution of the frequency of the signal tells us the properties of the neutron star and gives information on the accretion flow near the neutron star. In this study, I will introduce the gravitational waveform based on our recent 3D simulations and discuss what information can be extracted from the signal. Speaker: Tomoya Takiwaki (Tomoya) • 10:20 10:40 Cosmic matter in the laboratory - Investigating neutron star core densities with FAIR 20m The Facility for Antiproton and Ion Research, FAIR, is presently being constructed adjacent to the existing accelerator complex of the GSI Helmholtz Centre for Heavy Ion Research at Darmstadt/Germany, expanding the research goals and technical possibilities substantially. The worldwide unique accelerator and experimental facilities of FAIR will open the way for a broad spectrum of unprecedented forefront research supplying a large variety of experiments in hadron, nuclear, atomic and plasma physics as well as biomedical and material science, which will be briefly described in this presentation. Emphasis will be put on the investigation of the highest baryon densities accessible in the laboratory by relativistic nucleus-nucleus collisions at FAIR energies, probing strongly interacting matter under extreme conditions as we expect inside neutron stars. Speaker: Christian Sturm (GSI Helmholtzentrum fuer Schwerionenforschung) • 10:40 11:00 COFFEE BREAK AND POSTER SESSION 20m • 11:00 11:30 Predictions of the pseudo-complex theory of gravity for EHT observations: Observational tests 30m A modified theory of gravity, avoiding singularities in the standard theory of gravitation, has been developed by Hess & Greiner, known as the pseudo-complex theory of gravitation. The pc-GR theory shows remarkable observational differences with respect to standard GR. The intensity profiles are significantly different between both theories, which is a rare phenomenon in astrophysics. This will allow robust tests of both theories using Event Horizon Telescope (EHT) observations of the Galactic Center. We also predict the time evolution of orbiting matter. In this paper we summarize the observational tests we have developed to date. If the EHT data are public, we will discuss their implications for the pc-GR theory. Speaker: Thomas Boller (MPE Garching) • 11:30 12:00 Comparison of the predictions of the pc-GR to the observations of the EHT 30m The observational predictions of pseudo-complex General Relativity, related to the structure of an accretion disk, are compared to the reported observations of the Event Horizon Telescope.
Speaker: Peter Hess (Universidad Nacional Autónoma de México) • 12:00 12:30 CLOSING STARS2019 - Peter Hess 30m • 12:30 14:30 LUNCH 2h Restaurant "La Imprenta" #### Restaurant "La Imprenta" • 14:30 20:00 FREE 5h 30m • 20:00 22:00 CELEBRATION DINNER 2h Mesa Buffet - Plaza Hotel #### Mesa Buffet - Plaza Hotel • Thursday, 9 May • 09:30 13:00 TRIP TO NAVITI BEACH CLUB VARADERO HOTEL - BUSES LEAVE AT 09:30 3h 30m • 13:00 13:30 ARRIVAL 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 13:30 15:00 LUNCH 1h 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 15:00 15:30 REGISTRATION 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 15:30 17:00 FREE 1h 30m • Friday, 10 May • 09:20 09:40 Physics and astrophysics with the Pierre Auger Observatory 20m One century after the discovery of cosmic rays, the origin of ultra high energy cosmic rays still remains enigmatic. Taking data since 2004, the Pierre Auger Collaboration has been expanding our knowledge about these cosmic particles with energies much higher than what LHC can achieve. Although some intriguing questions have been answered, some of the mystery still persists. The focus of this presentation is on the most recent results on ultra-high energy cosmic rays obtained with the Pierre Auger Observatory  with emphasis on the anisotropy studies of the arrival directions of the most energetic particles. Speaker: Rogerio de Almeida (Universidade Federal Fluminense) • 09:40 10:00 CPT violation due to quantum decoherence tested at DUNE 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel In this work we study the intrinsic CPT violation in the neutrino oscillations phenomena produced by quantum decoherence as sub-leading effect. In the usual representation, we find that only fifteen elements of the decoherence matrix violate the CPT symmetry intrinsically. We find exact solutions for the CPT asymmetry function in vacuum . We define an observable $\mathcal{R}$ to make predictions of this model for the future Long-Baseline experiment, DUNE. We found values of the decoherence parameters with $5 \sigma$ of discrepancy to standard physics which are allowed by the current experimental limits, suggesting hints for new physics by this model in the context of future experiments. arXiv:1811.04982 Speaker: Félix Napoleón Díaz Desposorio (Pontificia Universidad Católica del Perú) • 10:00 10:20 Non-linear electrodynamics for astrophysical plasmas 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel In this work we study the initial value problem of a non-linear extension of classical Electromagnetism, known as "Force-Free Electrodynamics" (FFE). The FFE equations describe the dynamics of a diluted plasma near the event horizon of a rotating black hole. In these astrophysical regions, magnetic fields dominate the dynamics when compared with the matter that constitutes those plasmas, giving rise to an decoupled description for Electromagnetism. As a starting point, we consider a covariant formulation of the FFE theory in terms of two scalar potentials, known as "Euler potentials", which allow a very elegant and precise geometric interpretation of it. The ease of formulating FFE in terms of two potentials lies in the fact that, being the only dynamical variables, it provides an optimal scenario for its numerical implementation. 
In this work we show that this formulation is weakly hyperbolic, which means that the system does not have a well posed initial value problem in the usual sense. In this way, it is not possible to guarantee uniqueness or continuity during the dynamic evolution, which implies that this formulation is not convenient for numerical simulations. Speaker: Marcelo Enrique Rubio (IATE - CONICET) • 10:20 10:40 Modeling anisotropic magnetized compact stars with $\gamma$ metric: the white dwarfs picture 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel Magnetic fields introduce an anisotropy in compact stars’ equations of state by splitting the pressure into two components, one parallel and the other perpendicular to the magnetic field. This suggests the necessity of using structure equations accounting for the axial symmetry of the magnetized system. We consider an axially symmetric metric in spherical coordinates, the $\gamma$-metric, and construct a system of equations to describe the structure of spheroidal compact objects. In this way, we connect the geometrical parameter $\gamma$ linked to the spheroid’s radii, with the source of the anisotropy. So, the model relates the shape of the compact object to the physics that determines the properties of the composing matter. To illustrate how our structure equations work, we present magnetized white dwarfs structure and discuss the stability of the solutions. The results are obtained for magnetic field values of $10^{12}$G, $10^{13}$G and $10^{14}$G, in all cases with and without the Maxwell contribution to the pressures and energy density. This choice allows to have two sets of EoS, one featuring $\gamma>1$ and other with $\gamma<1$ . Speaker: Diana Alvear Terrero (ICIMAF) • 10:40 11:00 COFFEE BREAK AND POSTER SESSION 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 11:00 11:20 Magnetic field effects on Bose-Einstein condensate stars 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel We study magnetic field effects on the Equations of State (EoS) and the structure (mass-radius relation) of Bose-Einstein Condensate (BEC) stars, i.e. a compact object composed by a gas of interacting spin one bosons formed up by the pairing of two neutrons. To include magnetic field in the star description we suppose that particle-field and particle-particle interactions are independent, and consider two situations, one where the magnetic field is constant, and another where it is produced by the bosons. Magnetic field presence splits the pressure of the boson gas in two components, one parallel and the other perpendicular to field direction. At low densities and/or strong fields the smaller pressure might be negative, making the boson system unstable. This imposes a lower limit to the central mass density of the star in a way that, the stronger is the magnetic field, the denser has to be star to support its mass against collapse. Since the anisotropy in the pressures implies that the resulting star is not spherical, to compute the mass-radius relation we use the recently found γ-structure equations that describe axially symmetric objects provided they are spheroidal. The obtained BEC stars are, in general, less massive and smaller than in the non-magnetic case, being magnetic field effects more relevant for low densities. 
When the magnetic field is produced by the bosons, the inner profiles of the fields are determined self consistently as a function of the star inner radii, its values being in the orders expected for compact stars. • 11:20 11:40 Thermodynamic properties of a magnetized neutral vector boson gas at finite temperature 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel We study the thermodynamic properties of a neutral vector boson gas in presence of a constant magnetic field at finite temperature. The study has been done considering relativistic and non-relativistic bosons. In general, one of the most outstanding properties of magnetized bosonic systems is the occurrence of Bose-Einstein condensation (BEC) and Bose-Einstein ferromagnetism: in the condensed state, the gas shows a spontaneous magnetization. The main purpose of this work is to study the effect of temperature on the equations of state for that matter that allows more accurate descriptions of compact objects, specifically of neutron stars, which might contain spin-1 bosons formed up by two paired neutrons. As a limit case we study the structure of stars fully composed by matter in this form. Speaker: Lismary de la Caridad Suárez González (Instituto de Cibernética Matemática y Física,Habana, Cuba) • 11:40 12:00 Modeling anisotropic magnetized strange quark stars 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel When studying the structure of magnetized compact objects, the anisotropy in their equations of state (EoS), due to the magnetic field, must be taken into account. This anisotropy consists in the splitting of the pressure in two components, one parallel and the other perpendicular to the magnetic field. In this work, we compare the size and shape of magnetized strange quark stars using three different sets of structure equations. First, we solve the standard isotropic Tolman-Oppenheimer-Volkoff equations for the parallel and perpendicular pressures independently. Then, we obtain the mass-radii curves of the magnetized strange quark stars using axially symmetric metrics in cylindrical and spherical coordinates, this last one called the gamma-metric. The differences between the results obtained in each case are discussed. Speaker: Samantha López (ICIMAF) • 12:00 14:00 LUNCH 2h Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 14:00 14:20 The magnetized photon time delay and Faraday rotation 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel We study the propagation of photon in magnetized vacuum and medium, taking into account radiative corrections. We describe both time delay and Faraday rotation, with the aim of applying the results to astrophysical context. • 14:20 14:40 The mathematical description of the influence of the expansion of the Universe on the metric of a black hole 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel The existence of black holes has its analytical argumentation in Einstein's field equations. The first solution of general relativity that would characterize a black hole was found by Schwarzschild in 1916. Since then, these cosmic objects are being studied and investigated in their various variants: Scwartzshild, Kerr, Reissner-Nordström, Kerr-Newman, and others. The no-hair theorem states that a black hole has only three independent properties: mass, charge and angular momentum and is characterized by producing intense gravitational fields. 
On the other hand, the existence in the Universe of a dark material component of the repulsive type, acting against the attractive action of gravitation, can be represented by the quintessence. The effect of the quintessence surrounding the black hole is then introduced. Ordinarily, an additional element within the energy-momentum tensor of the Einstein field equations is introduced. The mathematical description of this problem is complicated, in general. In this investigation, we have chosen to use a variant in which the effect of the quintessence is introduced as a perturbative action in the metric of the ordinary black hole, introducing the time-dependent scale factor. The Einstein field equations are obtained using the perturbed metric and the results obtained correspond to those obtained in the ordinary way. Speaker: Adrian Linares-Rodríguez (Universidad Central "Marta Abreu" de Las Villas, Santa Clara ci) • 14:40 15:00 Extending observations to distances larger than 10 kpc should resolve the anomaly of a galaxy lacking dark matter 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel We investigated the claim that galaxy NGC 1052-DF2 lacks dark matter. For this purpose, we constructed a novel, theory-based computer simulation of the dynamical interaction of matter and dark matter in a prototypical ellipsoid galaxy and utilized it to predict the distributions of dark matter in a galaxy as a function of the galaxy’s core radius and maximal rotation velocity. We ran the simulation using the parameters of NGC 1052-DF2 as well as the parameters of six other UDGs from the Coma cluster and seven dSph galaxies from the local group. For each galaxy, the simulation was run in steps of 2 kpc up to 100 kpc from the galaxy center. Inspection of the distributions of matter and dark matter generated by the simulation, as a function of distance r, reveals the following: (1) Consistent with the ΛCDM paradigm, all the tested galaxies, including galaxy NGC 1052-DF2, are predicted to be dark-matter-dominated. (2) The reported lack of dark matter within r ≤ 10 kpc is supported by the simulation results. However, this result is an aftermath of conducting a “shortsighted” observation for only r < 10 kpc. (3) Consistent with ΛCDM models, the bulk of dark matter at galactic scales resides in the galaxies’ halos. (4) The core radius of a galaxy is a predictor of the proportions of matter and dark matter in the galaxy. Speaker: Ramzi Suleiman (Triangle Research & Development Center) • 15:00 15:30 COFFEE BREAK AND POSTER SESSION 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 15:30 17:00 Discussion Session 1h 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • Saturday, 11 May • 09:20 09:40 Homogeneity of the universe emerging from the Equivalence Principle and Poisson equation: A comparison between Newtonian and MONDian cosmology 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel A correspondence between the Equivalence Principle and the homogeneity of the universe is discussed. We show that under Newtonian gravity, a translation of co-moving coordinates in a uniformly expanding universe defines a new accelerated frame. A consistency condition for the invariance of this transformation yields the second Friedmann equation. All these symmetries are lost when we modify Newton’s second law and/or the Poisson equation.
For example, by replacing Newton’s second law with a non-linear function of the acceleration, as Modified Newtonian Dynamics (MOND) suggests, the concept of relative acceleration is lost. As a consequence, the homogeneity of the universe breaks down. Therefore MOND, which changes Newton’s second law, or AQUAL (A QUAdratic Lagrangian), which changes the Poisson equation, are not complete theories, and they should be amended to preserve the cosmological principle. Only locally could MOND be used as a toy model, not as a global theory describing the universe on large scales. Speaker: Eduardo Guendelman (Ben Gurion University) • 09:40 10:00 A study in progress about a dynamical gravastar solution 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel We present the state of research devoted to investigating the consequences of a previously proposed regular solution at the origin for the Einstein-Klein-Gordon equations. We implement a matching with the Schwarzschild solution, with a zero scalar field outside a spherical region. The field configuration is used as a first step in an iterative process to calculate the vacuum expectation value of the energy-momentum tensor, aiming at further solving the semi-classical Einstein equation. The result shows the quantum corrections to the previous solution. It is expected that further steps in the iterative process will regularize the previous solutions by leading to the convergence of the iterative solution. The first step in the iterative solution and an explicit dependence of the expectation value of the energy-momentum tensor on the metric are found. Speaker: Duvier Fontanella (ICIMAF) • 10:00 10:20 Perturbations to planetary biospheres due to high energy muons from cosmic ray bursts originated in neutron star mergers 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel In this work a mathematical model for aquatic photosynthesis, modified by some of us to include particulate ionizing radiation, is used to assess the perturbations that muons coming from neutron star mergers could make to this biological process. It is then shown that neutron star mergers not too far from inhabited rocky planets have the potential to considerably deplete their aquatic photosynthesis. Some remarks concerning the effects on other types of subsurface life are also made, and by extension some considerations on the habitability of the Milky Way are presented. Speaker: Rolando Cardenas (Universidad Central ''Marta Abreu'' de Las Villas) • 10:20 10:40 Z production in pPb and PbPb collisions at 5.02 TeV 20m There is a growing interest in the examination and analysis of results from the ALICE, ATLAS and CMS detectors in asymmetric systems (pPb), due to the possibilities of establishing references for PbPb collisions and of gaining insight into the behavior of the medium itself. The analysis of data in both cases can allow the understanding of the PDFs under different regimes. The study of the initial state in proton-lead collisions at 5.02 TeV using the Drell-Yan process was chosen because inclusive lepton production is a clean process, independent of the color degrees of freedom. For the study, an extension of the Glauber model was considered to express the cross-section. Under this approach, we can examine the initial vertex of the hard process described by sigma_pp and apply the usual calculation through the factorization theorem.
In particular, we focused on the analysis of the pT distribution and compared the role of different factorization schemes in the behavior of the distribution at low pT. Speaker: Dario Ramirez Zaldivar (InSTEC, Havana University) • 10:40 11:00 COFFEE BREAK AND POSTER SESSION 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 11:00 11:20 Towards the measurement of the anisotropic pressures effects in magnetized quantum vacuum 20m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel Starting from the fact that the vacuum pressure orthogonal to a constant magnetic field is negative, whereas along the field it is positive, we estimate the shift of frequency for radiation moving in these directions to first order in α at small fields as compared to the Schwinger critical field Bc, and suggest ideas for its experimental test. For fields of the order of or greater than 2Bc we briefly discuss the appearance of an imaginary part in the vacuum energy, signalling its instability at such fields. We propose a heuristic model of a bosonic electron-positron bound state leading to a ferromagnetic quantum phase transition of the vacuum at the critical field 2Bc. Speaker: Hugo Celso Peréz Rojas (ICIMAF) • 11:20 12:00 WALTER GREINER PRIZE / CLOSING SMFNS2019 40m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 12:00 14:00 LUNCH 2h Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 14:00 15:30 Discussion Session 1h 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 15:30 16:00 COFFEE BREAK 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 16:00 20:00 FREE 4h Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 20:00 22:00 CELEBRATION DINNER 2h Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • Sunday, 12 May • 09:00 12:00 FREE 3h Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 12:00 14:30 LUNCH 2h 30m Naviti Beach Club Varadero Hotel #### Naviti Beach Club Varadero Hotel • 14:30 16:30
# An inverted cone has a diameter of 42 in and a height of 15 in. If the water is flowing out
• March 19th 2010, 09:40 PM
yoman360
An inverted cone has a diameter of 42 in and a height of 15 in. If the water is flowing out
An inverted cone has a diameter of 42 in and a height of 15 in. If the water is flowing out of the vertex of the container at a rate of 35 $\pi$ $in^3$/sec, how fast is the depth of the water dropping when the height is 5 in?
• March 19th 2010, 11:08 PM
sa-ri-ga-ma
Quote:

Originally Posted by yoman360
An inverted cone has a diameter of 42 in and a height of 15 in. If the water is flowing out of the vertex of the container at a rate of 35 $\pi$ $in^3$/sec, how fast is the depth of the water dropping when the height is 5 in?

Volume of the cone V = 1/3*π*r^2*h.....(1)
You want to find dh/dt. So you have to find r in terms of h.
r and h are proportional to R and H, where R = 21" and H = 15". So R/H = r/h, i.e. r = (R/H)*h.
Substitute this value in eq. (1) and find dV/dt. dV/dt is given. Find dh/dt.
• March 20th 2010, 03:14 PM
yoman360
Quote:

Originally Posted by sa-ri-ga-ma
Volume of the cone V = 1/3*π*r^2*h.....(1)
You want to find dh/dt. So you have to find r in terms of h.
r and h are proportional to R and H, where R = 21" and H = 15". So R/H = r/h, i.e. r = (R/H)*h.
Substitute this value in eq. (1) and find dV/dt. dV/dt is given. Find dh/dt.

I followed your steps and this is what I get:
$r=\frac{21}{15} h$
$v=\frac{1}{3}\pi(\frac{21}{15}h)^2*h$
simplify, so $v=\pi\frac{49}{75}h^3$
$dv/dt=\pi\frac{49}{25}h^2* dh/dt$
plug in h and dv/dt:
$35\pi=\pi\frac{49}{25}(15)^2* dh/dt$
solving for dh/dt I get $dh/dt=\frac{5}{63} in/sec$
The problem is that the answer key says the answer is $\frac{5}{7}$ in/sec. What did I do incorrectly?
• March 20th 2010, 03:18 PM
yoman360
Quote:

Originally Posted by yoman360
I followed your steps and this is what I get:
plug in h:
$dv/dt=\pi\frac{49}{25}(15)^2* dh/dt$

Never mind — h = 5 and H = 15. I figured it out: I was supposed to plug in 5 here instead of 15. Then, solving for dh/dt, I got 5/7 in/sec. Thanks for the help (Rofl)
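Not part of the original thread, but the result is easy to machine-check; here is a short SymPy sketch (variable names are mine) that reproduces dh/dt = 5/7 in/sec at h = 5:

```python
import sympy as sp

h = sp.symbols('h', positive=True)
R, H = sp.Rational(21), sp.Rational(15)   # cone radius and height in inches
V = sp.pi/3 * (R/H*h)**2 * h              # volume after eliminating r via r/h = R/H
dV_dh = sp.diff(V, h)                     # 49*pi/25 * h**2
dh_dt = 35*sp.pi / dV_dh.subs(h, 5)       # |dV/dt| = 35*pi in^3/sec, evaluated at h = 5
print(dh_dt)                              # 5/7  (in/sec, rate at which the depth drops)
```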
# Matrix
• March 18th 2013, 01:14 AM
Tutu
Matrix
1.) If A is $\begin{bmatrix} 3 & 2 \\-2 & -1 \end{bmatrix}$, write A^2 in the form pA+qI where p and q are scalars. Hence write A^(-1) in the form rA+sI where r and s are scalars.
I know how to find A^2, I got $\begin{bmatrix} 5 & 4 \\-4 & -3 \end{bmatrix}$ but I do not know how to convert this matrix form into the linear form pA+qI.
2.) It is known that AB=A and BA=B where matrices A and B are not necessarily invertible. Prove that A^2 = A.
When I first saw this, I thought B had to be I in AB=A and A in BA=B had to be I, BUT they then added, NOTE: From AB=A, you cannot deduce that B=I. They asked me why, and I really don't know, since I thought you could deduce that! How do I then prove that A^2=A?
• March 18th 2013, 01:29 AM
Prove It
Re: Matrix
Quote:

Originally Posted by Tutu
1.) If A is $\begin{bmatrix} 3 & 2 \\-2 & -1 \end{bmatrix}$, write A^2 in the form pA+qI where p and q are scalars. Hence write A^(-1) in the form rA+sI where r and s are scalars.
I know how to find A^2, I got $\begin{bmatrix} 5 & 4 \\-4 & -3 \end{bmatrix}$ but I do not know how to convert this matrix form into the linear form pA+qI.
2.) It is known that AB=A and BA=B where matrices A and B are not necessarily invertible. Prove that A^2 = A.

\displaystyle \begin{align*} p\mathbf{A} + q\mathbf{I} &= p\left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -2 & -1 \end{matrix} \right] + q \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right] \\ &= \left[ \begin{matrix} \phantom{-}3p & \phantom{-}2p \\ -2p & -p \end{matrix} \right] + \left[ \begin{matrix} q & 0 \\ 0 & q \end{matrix} \right] \\ &= \left[ \begin{matrix} \phantom{-}3p + q & \phantom{-}2p \\ -2p & -p + q \end{matrix} \right] \end{align*}

If this is equal to $\displaystyle \mathbf{A}^2 = \begin{bmatrix} 5 & 4 \\ -4 & -3 \end{bmatrix}$, then that means you can set each of the components equal and solve for p and q.
• March 18th 2013, 01:38 AM
Tutu
Re: Matrix
I see, thank you so so much! ((: Any ideas for the second question? Thanks!
• March 18th 2013, 05:35 AM
Prove It
Re: Matrix
$\displaystyle \mathbf{A}\mathbf{B} = \mathbf{A}$ and $\displaystyle \mathbf{B}\mathbf{A} = \mathbf{B}$. Then
\displaystyle \begin{align*} \mathbf{A}^2 &= \left( \mathbf{A}\mathbf{B} \right)^2 \\ &= \mathbf{A}\mathbf{B}\mathbf{A}\mathbf{B} \\ &= \mathbf{A}\mathbf{B}\mathbf{B} \\ &= \mathbf{A}\mathbf{B} \\ &= \mathbf{A} \end{align*}
(Note that you cannot cancel $\mathbf{A}$ from $\mathbf{A}\mathbf{B} = \mathbf{A}$ to conclude $\mathbf{B} = \mathbf{I}$ unless $\mathbf{A}$ is invertible, which is not given.)
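As a quick sanity check (not from the thread): solving the component equations above gives p = 2 and q = -1, i.e. A^2 = 2A - I; multiplying that identity by A^(-1) gives A = 2I - A^(-1), hence A^(-1) = -A + 2I. A short NumPy sketch verifying both:

```python
import numpy as np

A = np.array([[3, 2], [-2, -1]])
I = np.eye(2)

# A^2 = pA + qI with p = 2, q = -1
assert np.allclose(A @ A, 2*A - I)

# From A^2 = 2A - I: multiply by A^(-1) to get A^(-1) = -A + 2I (r = -1, s = 2)
assert np.allclose(np.linalg.inv(A), -A + 2*I)
print("both identities hold")
```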
# KSZ To rescale to a different area/map depth, simply rescale by $\sqrt{\rm area}$ and pick the new noise level.
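The note gives no units or code, so here is a minimal sketch of the stated sqrt(area) scaling; the noise unit (µK·arcmin) and area unit (deg²) are my assumptions, not part of the original:

```python
import numpy as np

def rescale_noise(noise, area_old, area_new):
    """Rescale a map noise level to a new survey area at fixed total
    observing time, using the sqrt(area) scaling quoted above."""
    return noise * np.sqrt(area_new / area_old)

print(rescale_noise(10.0, 4000.0, 16000.0))  # 4x the area -> 2x the noise: 20.0
```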
# Fitting A-Ci curves - FAQ #### 2021-03-31 This document lists frequently asked questions (FAQ) on the use of the fitaci function from the plantecophys package, to fit the FvCB model to measurements of photosynthesis rate at varying CO2 concentrations (A-Ci curves). This list will be updated based on queries I receive on email. ## 1. Some of my curves don’t fit, what should I do? When using the ‘default’ fitting method, it is possible that some curves don’t fit. This method uses non-linear regression, which depends on reasonable guesses of ‘starting values’ for it to converge to a solution. Nearly always, in my experience, when a curve does not fit it means it should really not be fit because the data are poor quality. As a first step, inspect the data with a simple plot of photosynthesis against intercellular CO2 concentration (Ci). Do the data generate a smooth curve? Does photosynthesis saturate with Ci? Does Ci reach values high enough for reasonable estimates of Jmax (e.g. > 800 ppm)? If the answer is ‘no’ to any of these, you have no choice but to discard the data. If the curve looks reasonable, try refitting the curve with the bilinear method, like so: f <- fitaci(mydata, fitmethod="bilinear") If you are using fitacis to fit many curves at once, curves that did not fit will be automatically refit with the bilinear method. The curves that could not be fit are printed in a message - make sure to inspect all fitted curves to check for data quality. ## 2. The fitaci function gives different results from other implementations, which one should I use? Different methods will give different results essentially always for these two reasons: • Differences in parameters associated with the FvCB model, especially when you are correcting for temperature. But even if you are not, different implementations will make different assumptions on GammaStar, Kc, Ko, perhaps have different default atmospheric pressure, etc. • Differences in the fitting method. A number of choices can be made on the actual fitting routine used, particularly with regards to choosing the two (or three) limitations to photosynthesis, how to estimate respiration, the actual algorithms, etc. Implementations can also differ in whether they estimate TPU limitation (fitaci does optionally, but not by default), and what to do with mesophyll conductance (fitaci ignores it by default, but it can be used optionally). Which one is right? There is no one right method, but I certainly would recommend inspecting the goodness of fit (residual variance or R2) in choosing the ‘better’ method. Most importantly, though, you have to a) describe what you did (including all relevant parameter settings), and b) publish your data so it can be used by others. I recommend moving away from only publishing Vcmax and Jmax, which are sensitive to fitting methods. ## 3. How do I report when using the fitaci function? It is not sufficient to state “we used the plantecophys package to estimate Vcmax and Jmax”. At the very least, state the following details: • Did you correct for temperature to a common temperature of (normally) 25C? If so, list all temperature-sensitivity parameters (EaV, etc.). • Did you measure day respiration and use that in the fit, or was it estimated from the A-Ci curve? • What were the values of the other parameters used, in particular GammaStar and Km (both are shown in standard output of fitaci)? If no default settings were changed when fitting the curves, state this as well. 
Finally, cite the publication associated with the package, which you can view with: citation("plantecophys") and report the version of the package used in the work, which you can view with: packageVersion("plantecophys") ## [1] '1.4.6' ## 4. Which fitting method should I use? The fitaci function has two implemented methods: ‘default’ (non-linear regression to the full model at once) or ‘bilinear’ (linear regression to transformed data to both limitations separately). Only the default method was described in Duursma (2015), the bilinear method is a later addition. Based on comparisons of goodness of fit, the default method appears to always fit better, albeit only very slightly better. For this reason one might recommend the default method. The bilinear method always returns parameter values, which is a good thing - except when the data are such poor quality that fitting should not have been done. The bilinear method is much faster. This may make a difference in some settings. Finally, the default method reports reliable standard errors for the parameters, which can (and should) be reported along with the estimated values. The bilinear method does not give a standard error for Jmax (for technical reasons this is not possible), and for other technical reasons, the standard error for Vcmax will be much too low (i.e. it is too optimistic). For this reason I prefer the default method over the bilinear method, unless it does not converge. ## 5. How do I account for the mesophyll conductance? If you have an estimate of the mesophyll conductance, it is possible to use it when fitting the A-Ci curve with the fitaci function. In that case, estimated Vcmax and Jmax can be interpreted as the chloroplastic rates. You have two options: • Use the gmeso argument, like this: library(plantecophys) # Assume a mesophyll conductance of 0.2 mol m-2 s-1 bar-1 f <- fitaci(acidata1, gmeso=0.2) In this case the equations from Ethier and Livingston (2004) are used. • Calculate the chloroplastic CO2 concentration, and fit normally, like this: # Assume a mesophyll conductance of 0.2 mol m-2 s-1 bar-1 acidata1$Cc <- with(acidata1, Ci - Photo/0.2) # Fit normally, but make sure to use Cc! f <- fitaci(acidata1, varnames=list(ALEAF="Photo", Ci="Cc", Tleaf="Tleaf", PPFD="PARi")) I am not sure which of the two options is ‘better’. I assume that the first is superior, but one user has reported that the second method gave better fits (lower SE on Vcmax and Jmax). More work is needed to evaluate these methods, and the use of mesophyll conductance in A-Ci curves in general. Note that a method exists where mesophyll conductance is estimated from A-Ci curves (without further measurements), but I have not implemented this in the fitaci function. I don’t believe the method has any merit and it will not be implemented.
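For readers working outside R, the second option above is just the arithmetic Cc = Ci - A/gm applied row-wise. A minimal Python sketch (not from the vignette; the column names mirror the R example, and gm = 0.2 is the same assumed mesophyll conductance):

```python
import pandas as pd

def add_cc(df: pd.DataFrame, gm: float = 0.2) -> pd.DataFrame:
    """Append a chloroplastic CO2 column: Cc = Ci - A/gm."""
    out = df.copy()
    out["Cc"] = out["Ci"] - out["Photo"] / gm
    return out

# Example with made-up values for one observation:
print(add_cc(pd.DataFrame({"Ci": [280.0], "Photo": [18.0]})))  # Cc = 190.0
```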
## How can a character attract people of multiple different personalities other than having a single trait that…

Ok, so I currently have a visual novel (for those who do not know, it's a video game that's almost all story. The character may have a few choices in the matter, but most of it is about watching the story. A personal favorite of mine is Nekopara (wiki link: http://nekopara.wikia.com/wiki/Nekopara_Wiki)). In the visual novel, you are an unfortunate soul who was accidentally enrolled into the wrong school via some weird mistakes and snafus. Now, you are a student in an all-female school (as much as I'd like to go into more detail, I must go on to the question). As you progress through the game, you find yourself surrounded by women of various walks of life, and somehow, you, the protagonist, are going to be able to capture their hearts (not at the same time, however). The problem I'm constantly running into is that most protagonists in these types of games are only liked by the characters because the protagonist is kind to them despite all inclinations to run away screaming (there are some weird visual novels out there). However, I wish to break that streak by making an actually well-developed protagonist. But, despite that, I won't be able to make it work with the different girls unless he becomes a completely different person for each of them. How do I avoid this?

## ModSecurity Block based on ARGS_NAMES starting character

Working on a rule to block traffic based on the starting character of ARGS_NAMES in either cookie, GET or POST. Example allow: `name=Joe`. Example block:

```
#name=Joe
```

Test rule that is not working:

```
SecRule ARGS_NAMES "^(#.*)$" "phase:1,id:199,log,deny,msg:'Block Argname with hash'"
```

## How to remove the last character of a string if I know what the character is in AppleScript?

So basically I was messing around with AppleScript and created a variable based on the app's path, and it always has a forward slash at the end that I don't want. See my other question for more info. Basically thePath is set to `/Applications/MyEpicApp.app/` with that extra `/` I don't want. What's the "opposite" of `"sometext" & someVariable` in AppleScript? Many thanks in advance, I appreciate every piece of information.

## Question: Where can I buy a Funko stand for a leaping character?

Two years ago I got a Leaping Deadpool Funko Pop and immediately lost the stand that made him stand up bc it wasn't properly attached. It's not like the usual stands where it's just the base and you stick the foot on it, but it's the clear base plus this stick thing where you put it in his back to make it look like he's leaping. I know a few other pops have stands like this but I'm not sure if they're specially made for that specific pop or if they're universal. If they're the latter, where could I get one?

## Chern character of wedge bundle

Given a vector bundle $E$, let's consider the direct sum of the wedge bundles of $E$: $\oplus_i \wedge^i E$; let's call this $\mathcal{E}$. Given a connection $\nabla$ on $E$, the Chern character is given by $\operatorname{tr}\, e^{\frac{i\nabla^2}{2\pi}}$. My question is: what will be the Chern character of $\mathcal{E}$? Is it given by $\det\left(1 - e^{\frac{i\nabla^2}{2\pi}}\right)$?

## R Shiny: All the columns have class Character when rendered in a ShinyApp after converting formattable output…

After converting a formattable output to a datatable using the as.datatable function, I am able to filter, but all the columns have character class when rendered in a Shiny app.
Meaning, 700 > 6000, 9 > 10, etc. (because the values are not treated as numeric). Sample code for testing:

```
#libraries
library(data.table)
library(formattable)
library(shiny)

# with numbers of only a couple of digits the issue is hard to see (9.1 > 8.1
# sorts correctly even as character), hence we enlarge the values by
# multiplying by another column
iris$Sepal.Width <- iris$Sepal.Width*iris$Petal.Length

#creating UI
ui <- fluidPage(
  DT::dataTableOutput("table1"))

#creating server
server <- function(input, output){
  output$table1 <- DT::renderDataTable(
    as.datatable(formattable(iris)))
}

#calling the server
shinyApp(ui, server)
```

# Observation: when trying to sort column Sepal.Width in descending order, 9.x will be at the top in the Shiny UI, whereas 25.46 should be.

## Note: click the "Show 100" filter in the app, then sort, for a better view of the issue. Everything works perfectly when done in R but fails in the Shiny app.

## Escaping single quote character breaks String.format(stringToFormat, formattingArguments) [duplicate]

• Using single quote by curly braces in String.format in Dynamic SOQL

When I run this code:

```
String.format('Dear {0}, Don\'t forget .... {1} .... {2}', args);
```

Token `0` is replaced, but tokens `1` and `2` appear as `{1}` and `{2}`. This is caused by the backslash escaping the single quote before the `t` character. My workaround is to do this:

```
String.format('Dear {0}, Don`t forget .... {1} .... {2}', args);
```

But I would like to know how I could escape the `'` character without breaking the merging of the arguments.
Fluid Dynamics (physics.flu-dyn)
• Despite recent progress, laminar-turbulent coexistence in transitional planar wall-bounded shear flows is still not well understood. Contrasting with the processes by which chaotic flow inside turbulent patches is sustained at the local (minimal flow unit) scale, the mechanisms controlling the obliqueness of laminar-turbulent interfaces typically observed all along the coexistence range are still mysterious. An extension of Waleffe's approach [Phys. Fluids 9 (1997) 883--900] is used to show that, already at the local scale, drift flows breaking the problem's spanwise symmetry are generated just by slightly detuning the modes involved in the self-sustainment process. This opens perspectives for theorizing the formation of laminar-turbulent patterns.
• An expression for the dimensionless dissipation rate was derived from the Karman-Howarth equation by asymptotic expansion of the second- and third-order structure functions in powers of the inverse Reynolds number. The implications of the time-derivative term for the assumption of local stationarity (or local equilibrium), which underpins the derivation of the Kolmogorov '4/5' law for the third-order structure function, were studied. It was concluded that neglect of the time-derivative cannot be justified by reason of restriction to certain scales (the inertial range) nor to large Reynolds numbers. In principle, therefore, the hypothesis cannot be correct, although it may be a good approximation. It follows, at least in principle, that the quantitative aspects of the hypothesis of local stationarity could be tested by a comparison of the asymptotic dimensionless dissipation rate for free decay with that for the stationary case. But in practice this is complicated by the absence of an agreed evolution time for making the measurements during the decay. However, we can assess the quantitative error involved in using the hypothesis by comparing the exact asymptotic value of the dimensionless dissipation in free decay, calculated on the assumption of local stationarity, to the experimentally determined value (e.g. by means of direct numerical simulation), as this relationship holds for all measuring times. Should the assumption of local stationarity lead to significant error, then the '4/5' law needs to be corrected. Despite this, scale invariance in wavenumber space appears to hold in the formal limit of infinite Reynolds numbers, which implies that the '-5/3' energy spectrum does not require correction in this limit.
• This paper presents novel insights about the influence of soluble surfactants on bubble flows obtained by Direct Numerical Simulation (DNS). Surfactants are amphiphilic compounds which accumulate at fluid interfaces and significantly modify the respective interfacial properties, influencing also the overall dynamics of the flow. With the aid of DNS, local quantities like the surfactant distribution on the bubble surface can be accessed for a better understanding of the physical phenomena occurring close to the interface. The core part of the physical model consists in the description of the surfactant transport in the bulk and on the deformable interface. The solution procedure is based on an Arbitrary Lagrangian-Eulerian (ALE) Interface-Tracking method. The existing methodology was enhanced to describe a wider range of physical phenomena.
A subgrid-scale (SGS) model is employed in the cases where a fully resolved DNS for the species transport is not feasible due to high mesh resolution requirements and, therefore, high computational costs. After an exhaustive validation of the latest numerical developments, the DNS of single rising bubbles in contaminated solutions is compared to experimental results. The full velocity transients of the rising bubbles, especially the contaminated ones, are correctly reproduced by the DNS. The simulation results are then studied to gain a better understanding of the local bubble dynamics under the effect of soluble surfactant. One of the main insights is that the quasi-steady state of the rise velocity is reached without ad- and desorption being necessarily in local equilibrium. • Motivated by the relevance of edge state solutions as mediators of transition, we use direct numerical simulations to study the effect of spatially non-uniform viscosity on their energy and stability in minimal channel flows. What we seek is a theoretical support rooted in a fully non-linear framework that explains the modified threshold for transition to turbulence in flows with temperature-dependent viscosity. Consistently over a range of subcritical Reynolds numbers, we find that decreasing viscosity away from the walls weakens the streamwise streaks and the vortical structures responsible for their regeneration. The entire self-sustained cycle of the edge state is maintained on a lower kinetic energy level with a smaller driving force, compared to a flow with constant viscosity. Increasing viscosity away from the walls has the opposite effect. In both cases, the effect is proportional to the strength of the viscosity gradient. The results presented highlight a local shift in the state space of the position of the edge state relative to the laminar attractor with the consequent modulation of its basin of attraction in the proximity of the edge state and of the surrounding manifold. The implication is that the threshold for transition is reduced for perturbations evolving in the neighbourhood of the edge state in case viscosity decreases away from the walls, and vice versa. • We investigate fluid mediated effective interactions in a confined film geometry, between two rigid, no-slip plates, where one of the plates is mobile and subjected to a random external forcing with zero average. The fluid is assumed to be compressible and viscous, and the external surface forcing to be of small amplitude, thus enabling a linear hydrodynamic analysis. While the transverse and longitudinal hydrodynamic stresses (forces per unit area) acting on either of the plates vanish on average, they exhibit significant fluctuations that can be quantified through their equal-time, two-point correlators. For transverse (shear) stresses, the same-plate correlators on both the fixed and the mobile plates, and also the cross-plate correlator, exhibit decaying power-law behaviors as functions of the inter-plate separation with universal exponents: At small separations, the exponents are given by -1 in all cases, while at large separations the exponents are found to be larger, differing in magnitude, viz., -2 (for the same-plate correlator on the fixed plate), -4 (for the excess same-plate correlator on the mobile plate) and -3 (for the cross-plate correlator). 
For longitudinal (compressional) stresses, we find much weaker power-law decays with exponents -3/2 (for the excess same-plate correlator on the mobile plate) and -1 (for the cross-plate correlator) in the large inter-plate separation regime. The same-plate stress correlator on the fixed plate increases and saturates on increase of the inter-plate separation, reflecting the non-decaying nature of the longitudinal forces acting on the fixed plate. The qualitative differences between the transverse and longitudinal stress correlators stem from the distinct nature of the shear and compression modes as, for instance, the latter exhibit acoustic propagation and, hence, relatively large fluctuations across the fluid film.
# Help!

1. Given that $f(x) = (\sqrt 5)^x$, what is the range of $f(x)$ on the interval $[0, \infty)$?

2. When Lauren was born on January 1, 1990, her grandparents put $1000 in a savings account in her name. The account earned 7.5% annual interest, compounded quarterly (every three months). To the nearest dollar, how much money was in her account when she turned two? Can someone also explain compound interest?

Guest Apr 8, 2018

### 2+0 Answers

#1
2. When Lauren was born on January 1, 1990, her grandparents put $1000 in a savings account in her name. The account earned 7.5% annual interest compounded quarterly (every three months). To the nearest dollar, how much money was in her account when she turned two? Can someone also explain compound interest?

To solve this problem, you would use this financial formula:
FV = PV x [1 + R]^N, where R = interest rate per period, N = number of periods, PV = present value, FV = future value.
FV = $1,000 x [1 + 0.075/4]^(2*4)
FV = $1,000 x [1 + 0.01875]^8
FV = $1,000 x [1.01875]^8
FV = $1,000 x 1.16022167......
FV = $1,160 - what Lauren will have in her account when she turns two.

Compound interest is interest earned on interest!! Example: you invest $1,000 at 5% annual compound interest for 3 years; how much will you have at the end of 3 years?
$1,000 x 1.05 = $1,050 - this is how much you will have at the end of the first year.
Now, you take this amount: $1,050 x 1.05 = $1,102.50 - and this is how much you will have at the end of the second year. You see that $2.50 is interest you earned on the $50 interest of the first year.
Now, you take $1,102.50 x 1.05 = $1,157.63 - and this is how much you will have in your account at the end of three years.
If it were "simple interest", then you would just earn $50 x 3 = $150, giving $1,000 + $150 = $1,150. That difference of $7.63 is extra interest earned on interest, or compound interest.
Guest Apr 8, 2018

#2
1. $$f(x) = (\sqrt 5)^x$$
What is the range of f(x) on $$[0, \infty)$$?
This function is continuous on the requested interval. At x = 0, f(x) = 1. Since √5 > 1, this function is constantly increasing on $[0, \infty)$.
So the range is $[1, \infty)$ on $[0, \infty)$.
CPhill Apr 9, 2018
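A one-line check of the quarterly-compounding arithmetic above (not part of the thread):

```python
# FV = PV * (1 + r/m)^(m*t) with PV = 1000, r = 7.5%, m = 4 quarters/year, t = 2 years
fv = 1000 * (1 + 0.075 / 4) ** (4 * 2)
print(round(fv, 2), round(fv))   # 1160.22 -> $1160 to the nearest dollar
```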
# Chapter 1 - Review Exercises: 46

$x=\left\{ -2-2i,-2+ 2i \right\}$

#### Work Step by Step

$\bf{\text{Solution Outline:}}$ To solve the given equation, $(x+4)(x+2)=2x ,$ express it first in the form $ax^2+bx+c=0.$ Then use the Quadratic Formula to solve for $x.$

$\bf{\text{Solution Details:}}$ Using the FOIL Method, which is given by $(a+b)(c+d)=ac+ad+bc+bd,$ the expression above is equivalent to \begin{array}{l} x(x)+x(2)+4(x)+4(2)=2x \\\\ x^2+2x+4x+8=2x .\end{array} In the form $ax^2+bx+c=0,$ the equation above is equivalent to \begin{array}{l} x^2+(2x+4x-2x)+8=0 \\\\ x^2+4x+8=0 .\end{array} In the equation above, $a= 1 ,$ $b= 4 ,$ and $c= 8 .$ Using the Quadratic Formula, which is given by $x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a},$ then \begin{array}{l} x=\dfrac{-4\pm\sqrt{4^2-4(1)(8)}}{2(1)} \\\\ x=\dfrac{-4\pm\sqrt{16-32}}{2} \\\\ x=\dfrac{-4\pm\sqrt{-16}}{2} .\end{array} Using the Product Rule of radicals, which is given by $\sqrt[m]{x}\cdot\sqrt[m]{y}=\sqrt[m]{xy},$ and $i=\sqrt{-1},$ the equation above is equivalent to \begin{array}{l} x=\dfrac{-4\pm\sqrt{-1}\cdot\sqrt{16}}{2} \\\\ x=\dfrac{-4\pm 4i}{2} \\\\ x=-2\pm 2i .\end{array} The solutions are $x=\left\{ -2-2i,-2+ 2i \right\} .$
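A quick numeric cross-check of the roots (not part of the solution text), using Python's complex math:

```python
import cmath

a, b, c = 1, 4, 8                      # x^2 + 4x + 8 = 0
d = cmath.sqrt(b*b - 4*a*c)            # sqrt(-16) = 4i
roots = ((-b - d) / (2*a), (-b + d) / (2*a))
print(roots)                           # ((-2-2j), (-2+2j))
```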
# Converting legacy amstex picture with \LPic to latex

I am converting legacy amstex code to latex. I am stuck on vector pictures in the document. In the "main" document, there is code like this: \centerline{\input fig11.tex} In the fig11.tex file, there is code that looks like this: %%AmSTeX \LPic 11.8 5.0 fig11.psp {}{}{ \atxy 20 5 \BText{some description} %%.....more similar \atxy commands...%% } and fig11.psp looks like this (with far more lines of the "same thing"): 81 LBegin /p0 { 120 5 } def /p1 { 140 5 } def /p2 { 70 10 } def LNarrow p0 p1 2 LTextloc p2 p3 3 LTextloc p3 p4 4 LTextloc p5 LDot p6 LDot p14 p9 LLine p14 p10 LLine p15 p11 LLine p15 p12 LLine p23 p24 LVector p26 p25 LVector LEnd (It is not legal PostScript.) Now I don't have a clue how to convert those to something LaTeX will like (it doesn't know the \LPic command; lpic.sty seems like something a bit different and I can't force it to work anyway). edit: oh god. It's probably this frankly bizarre thing: ...I am not sure at all what to do with it. - This seems a mixture of TeX boxes and PostScript specials; lhead.tex contains several PostScript definitions that might be extracted to convert that .psp into legal PostScript input; the TeX macros allow for putting boxes at given coordinates over the picture. – egreg Jun 10 '12 at 9:22 @egreg : yeah, I am looking at how to convert those .psp files to legal PostScript files at the moment. – Karel Bílek Jun 10 '12 at 9:24 Well. What seems to work is just copying the code from the specials (I never knew LaTeX has something like that) on top of the PostScripts. That makes them valid PostScript files. – Karel Bílek Jun 10 '12 at 9:32 Maybe the part from \def\LPic onward can be used as is, by changing \special{psfile=#3} into \includegraphics{#3} – egreg Jun 10 '12 at 9:43 What seems to be enough at the moment: 1. first, copying the PostScript special changes from lhead to all the almost-PostScript .psp files 2. adding a first line to all the .psp files with just %! 3. adding a bounding box to the PS files with psfixbb 4. converting the .psp files to .pdfs (since I want to use pdflatex in the first place and I don't need to edit the files anymore) 5. deleting the special changes from the lhead file, leaving the rest as-is (with changing \special{psfile=#3} into \includegraphics{#3}), result is here 6. removing the .psp filenames and replacing them with .pdfs in the fig.tex files - Nice to know. Maybe you can add the reduced lhead.tex and a short example of a PS file? – egreg Jun 10 '12 at 9:50 Wait a minute, I still have some problem with includegraphics. Also, it is not very probable someone else will have this problem; what I am trying to edit was frankly bizarre in the first place – Karel Bílek Jun 10 '12 at 9:54 Indeed, I've never seen those things. But having here a working procedure is surely good. – egreg Jun 10 '12 at 10:02 nope... it's still not enough, the PostScripts lack a "BoundingBox", so they are not converted/imported to latex correctly – Karel Bílek Jun 10 '12 at 10:07 Maybe you can feed them to ps2eps – egreg Jun 10 '12 at 10:14
By default TeX prints the page number at the bottom of the page. I added a custom header so that the page number appears in the top right hand corner. So I have the page number in two places, at the bottom of the page and in my header. How do I get rid of the page number that is at the bottom of the page? - This depends on how you place the page number in the top right. Is it done using fancyhdr? Then you can add \fancyhf{} at the start of your fancy page style definition to clear everything before setting only the header. – Werner Dec 20 '12 at 20:09 Note that the default for chapter-style pages (in the supported document classes, like report and book) set the first page of each chapter as plain. So which \documentclass are you using? – Werner Dec 20 '12 at 20:11 I would suggest using the fancyhdr package. It allows you to set the headers and footers. Clear all fields first with \fancyhf{}, then set only the ones you want:

\documentclass{article}
\usepackage{lipsum}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}% clear all header and footer fields
\rhead{\thepage}% page number in the top right corner
\cfoot{}% nothing at the bottom of the page
\begin{document}
\lipsum
\end{document}
## CryptoDB

### Paper: An Enhanced One-round Pairing-based Tripartite Authenticated Key Agreement Protocol

Authors: Meng-Hui Lim Sanggon Lee Youngho Park Hoonjae Lee URL: http://eprint.iacr.org/2007/142 Search ePrint Search Google A tripartite authenticated key agreement protocol is generally designed to accommodate the need of three specific entities to communicate over an open network with a shared secret key, which is used to preserve data confidentiality and integrity. Since Joux proposed the first pairing-based one-round tripartite key agreement protocol in 2000, numerous authenticated protocols have been proposed since then. However, most of them have turned out to be flawed due to their inability to achieve some desirable security attributes. In 2005, Lin-Li identified the weaknesses of Shim's protocol and subsequently proposed their improved scheme by introducing an extra verification process. In this paper, we prove that Lin-Li's improved scheme remains insecure due to its susceptibility to the insider impersonation attack. Based on this, we propose an enhanced scheme which not only overcomes these defects, but also preserves the desired security attributes of a key agreement protocol.

##### BibTeX

@misc{eprint-2007-13424, title={An Enhanced One-round Pairing-based Tripartite Authenticated Key Agreement Protocol}, booktitle={IACR Eprint archive}, keywords={Tripartite authentication, Key Agreement Protocol, Pairing}, url={http://eprint.iacr.org/2007/142}, note={Not published. meng17121983@yahoo.com 13624 received 20 Apr 2007, last revised 20 Apr 2007}, author={Meng-Hui Lim and Sanggon Lee and Youngho Park and Hoonjae Lee}, year=2007 }
# GR vs. SR while accelerating away

1. May 30, 2005

### gonzo

A quick question for those fast with the GR and SR math. Assume you get in a spaceship and start accelerating away from Earth, and during the trip you and the people left behind compare clock speeds periodically (not elapsed time, but rather tick rates). At what combination of acceleration and speeds would the GR effects from acceleration (which tend to make the clocks on the Earth look faster to you) be more significant than the SR effects from speed (which would tend to make the clocks on Earth look slower to you)? Assuming I'm wording this problem clearly. Is this only dependent on acceleration (i.e., for a given high enough acceleration, the GR effects will always dominate), or for most accelerations, will there be a speed you eventually reach where the SR effects catch up and pass the GR effects? I would generally assume the latter given a constant acceleration, since the SR effects grow, but don't the GR effects also grow with distance? Anyway, I would appreciate a precise math relationship between the two if someone could whip one up for me (with a qualitative explanation also of course). Thanks much.

2. May 30, 2005

### gonzo

I started trying to do the math a bit myself, but GR isn't my strong suit, so I got a bit confused. I started by looking at the formula for gravitational time dilation, and then used the equivalence principle to assume it was the same as being in a gravitational field that produced that force. I don't know how to insert math symbols, so I'll just "say" the formula I ended up with ... I got dilation by a factor of the square root of one minus (acceleration times distance divided by c^2). I plugged in a random acceleration to start with, taking 100 m/s^2, and it seemed then that you didn't have to be very far from something before the dilation factor became imaginary (like 10^15 meters). So I must be missing something key here. Assuming you can treat the acceleration you feel on a ship as being in a gravitational field, then clocks far away from you should, from this effect (discounting relative speed for now), appear to speed up. Won't they go faster the farther away they are, since they are farther out of the gravitational field? Or do I have it backwards somehow? I am certainly missing some key element here, so any enlightenment would be helpful. Thanks.

3. May 30, 2005

### Mortimer

I'm pretty sure you cannot use the equivalence principle in the way you do. The time dilation in a gravity field should be "equivalenced" with a rotating disc where the edge has a velocity (and acceleration) relative to the center, hence its slower time. In your example, at any infinitesimal instant the accelerating ship has a defined velocity relative to Earth, which is the sole element that determines the time dilation as observed from either frame. The acceleration as such does not influence that. So only SR counts, not GR (at least for the spaceship; GR does count on the surface of the Earth).

Last edited: May 30, 2005

4. May 30, 2005

### pervect

Staff Emeritus
With the approach you are using, you are going to find that the clocks on Earth start running backwards. So it's not a very good approach. I'll try to explain why. Basically, you are trying to compare your clocks to the clocks of someone who is beyond an event horizon. This is very similar to trying to compare your clock to the clock of someone who has fallen into a black hole.
Two-way signal transmission is not possible, so there isn't really any good way to compare clocks. To give some specifics, if you maintain an acceleration of 1g, there will be an event horizon approximately 1 light year behind you from which you will never receive signals until you stop accelerating. This is known as the "Rindler horizon". A short way into this journey (somewhere around a year or so), the Earth will fall behind this Rindler horizon. For the usual reasons, you will continue to "see" a faint image of the Earth, but you will never receive signals emitted from the Earth later than a certain time, the time at which it fell behind the horizon. If you want to do the math, plot the position of the rocket using the relativistic rocket equation at http://math.ucr.edu/home/baez/physics/Relativity/SR/rocket.html [Broken] It is simplest to assume that you are accelerating at exactly 1 light year / year^2 (which is very close to 1g), and to use units of light years for distance and years for time. The equation for the position of the rocket in the Earth's reference frame is x = cosh(tau)-1; t=sinh(tau) (1) here tau is the "proper time", the time elapsed on the rocket's clock. The equation for light emitted at an earth time T is x = (t-T), t>T (2) Substituting (1) into (2) we get cosh(tau)-1 = sinh(tau)-T cosh(tau)-sinh(tau)=1-T Because cosh(tau) > sinh(tau), there is no value of tau that solves this equation for T>1. This will be clearer if you take the time to actually plot the position of the rocket on a graph, and also plot the light signals. Another way of putting this: A continuously accelerating observer will outrun a lightbeam, when he has a large enough head start.

Last edited by a moderator: May 2, 2017

5. May 30, 2005

### JesseM

If you consider this problem from the point of view of the inertial observer rather than the accelerating observer, there won't be any additional time dilation effects due to acceleration at all--over any small interval of time, if the accelerating observer's velocity during that interval is v, then his clock is slowed down by a factor of $$\sqrt{1 - v^2/c^2}$$ during that interval.

6. May 30, 2005

### gonzo

I was mainly interested in the accelerated POV. That's interesting about the horizon effect; that makes me feel better about the imaginary times I was getting at large distances for high constant acceleration. But what about before you reach this point? I understand if you take a small enough time slice that discounts the acceleration, the Earth clocks will appear to be running slow, but how small is small enough if you are accelerating at 1g? You have a year or so before you are past the Rindler horizon, so during this time what are the conditions for looking back at Earth clocks and seeing them run slow? Are there any conditions in this when you could look back and see them running fast? The other thing that comes to mind is that the people on Earth also have a 1g acceleration from gravity. So if you are undergoing a constant 1g acceleration, then you both have equal GR effects ... is this true? I just realized I'm not even sure what constant acceleration means in this context given space and time changes. I guess it would have to be defined by the force the passengers feel the whole time? (as opposed to actual speed changes like you do in classical physics ... I assume anyway). Anyway, as I mentioned, I would still like to know if there are any conditions at all where you would see the Earth clocks moving faster at any time because of your acceleration.
Or will this never happen? Thanks.

7. May 30, 2005

### gonzo

Looking at that link you gave me pervect, it seems for a 1g acceleration you will almost always see the earth clocks as going slightly faster, with their relative speed increasing the farther out you get. But the equations were too messy for me to get a feel for when this effect would kick in (since it seems to increase with distance, I assume if you are close enough the clocks will seem to be going slower instead? Or does this never happen?) I'm also confused why, if you are accelerating at 1g, you would not have the same situation on Earth due to gravity of 1g, so both of you would see the other's clocks as going faster in the same way. And lastly, I'm assuming if you lower your acceleration this effect is reduced. Where are some rough borders for where the lower acceleration vs. distance makes the clocks look slower vs. faster? I'm trying to get a few qualitative markers for the situation.

8. May 30, 2005

### Janus

Staff Emeritus
Because gravitational time dilation is not related to the local acceleration due to gravity, but to the difference in gravitational potential. The gravitational field of the Earth falls off with distance, which limits the difference in gravitational potential between the surface of the Earth and the ship due to this field. The equivalent gravitational "field" due to the acceleration of the ship, as seen by the occupant of the ship, does not fall off with distance but extends for infinite distance at a constant strength both ahead of and to the rear of the ship.

9. May 30, 2005

### pervect

Staff Emeritus
Hmmm, well - that's not the right answer. How did you arrive at that conclusion?

10. May 31, 2005

### gonzo

They have a table there of your time and earth time. Granted, the table starts at 1 year of flight, but for all entries on the table elapsed earth time is greater than elapsed ship time, implying that the earth clocks are running faster. However, as I pointed out, I'm not sure. Can you please give me a qualitative right answer then? Or at least a few good comparison points?

11. May 31, 2005

### gonzo

I just looked at the clock postulate page which I hadn't noticed before, which says the opposite ... that acceleration doesn't matter for clock rates, only instantaneous speed. This doesn't seem to make sense to me since GR says there will be a "one way" time effect in a gravitational field. So if two observers are at rest with regard to each other in a gravity field they will both agree on one of them having faster clocks. If they then get some speed relative to each other, they will both note a slowing of clocks for each other based on this speed, but will still have the one-way clock difference from GR. At very small speeds, the GR effect would seem to be bigger, and so one would still have faster clocks, but at some point the speed effect would grow larger (assuming the same distance the whole time, just looking at a time slice) and eventually overwhelm the GR effect, and then both would see the other's clock as going slower as in a typical SR scenario (although I would assume the GR component would still make it slightly uneven). Or am I off on this too?

12. May 31, 2005

### pervect

Staff Emeritus
The table tells you what happens from the Earth's point of view. From the Earth's point of view, the rocket clocks run slow. This is the easiest point of view to describe. But I don't think it was the one you were asking about. I had the impression you were interested in the rocket's POV.
The rocket's viewpoint is harder to describe. The first point is that the coordinate system used by the accelerating observer on the rocketship is strictly limited in size by the laws of physics. The best brief description of things from the rocket's POV goes something like this. You can more or less think of a "gravitational field" permeating the entire universe. Objects behind the rocket, like the Earth, are lower in the gravity well. This means that clocks behind the rocketship, like the one on Earth, run even slower than the relativistic time dilation formulas would indicate. Eventually, the Earth clock falls so deep into a gravity well that it falls behind an event horizon. This happens when the distance to the Earth, multiplied by the gravitational acceleration, reaches c^2. Recall (or re-read) my remarks about how an accelerating observer can outrun a lightbeam. When the Earth falls behind an event horizon, it is no longer meaningful to talk about "how fast its clock is running from the rocket's POV". To determine "how fast a clock is running" when the clocks are separate requires a coordinate system - the purpose of the coordinate system is to say that "this point in space-time in coordinate system #1 (t1,x1) is the same as this other point of space-time in coordinate system #2 (t2,x2)". It is not an absolute statement, it is a coordinate-dependent statement which requires that the coordinate systems both be defined. There is no problem with the Earth coordinate system, but as I mentioned, the rocket's coordinate system does not and cannot cover the Earth after it falls behind the horizon, the "Rindler Horizon". Thus there is no meaningful way to determine how fast the Earth's clocks are running. The gravitational time dilation equations will give you an imaginary elapsed time, for instance, if you plug the numbers in. On the return trip, the Rindler horizon is on the other side of the rocketship, so it's not a problem anymore. On the return trip, from the rocket's POV, the Earth clocks run very fast. This explains, from the rocket POV, why the Earth clocks read more elapsed time. BTW, MTW's "Gravitation" has a very good treatment of the accelerated observer. It's possible other textbooks do, too, but I'm not aware of which other books might specifically cover the topic. This would be a good source for more reading with the supporting math if you get really interested, or if you want a reference for why I am saying what I am saying.

13. May 31, 2005

### gonzo

Thanks for that reply. You are right in that I was more interested in the POV from the accelerating observer. I guess part of my problem has been how to apply the equivalence principle correctly. For example, the idea that on an accelerating rocket ship your "equivalent gravity well" is infinite and doesn't drop off like that from a large mass. I also hadn't thought about the directional aspect either. I understand about the event horizon, which is really interesting. But assuming we only talk about events before a ship reaches this point, then what you are saying is that whether you look at it from GR or SR, a ship flying away from the Earth will always see the Earth clocks as running slower? So in this case the effects of velocity and acceleration work together? That was the main thing I was initially getting at. However, you raised some other interesting issues for me about the return trip, since you say in this case the GR effect would tend to make the Earth clocks run faster from the ship POV.
But the speed will then make the Earth clocks appear to run slower. So in this situation, where is the typical trade-off for when one effect is greater than the other? Also, does it matter which direction you are actually moving? I mean, if you accelerate away and then want to slow down and stop, you will need to accelerate an equal amount in the opposite direction even though you will still be moving away. This situation seems even more confusing to me. I just started reading "Gravity from the ground up" to get some more GR basics, and hope to eventually move on to harder texts than that afterwards. My math is still a bit rusty for the harder stuff, though I'm trying to refresh that as well. Is "Gravitation" a textbook? Would I need a solid grip on tensors to be able to follow it (one of my weak points right now)?

14. May 31, 2005

### pervect

Staff Emeritus
"Gravitation" is a textbook. You would not be able to follow the majority of the book without tensors, but tensors play only a minor role in the chapter on accelerated observers. As I check this issue, they are not, unfortunately, totally absent though. You would definitely need 4-vectors to deal with the book. Displacements and velocities are always represented by 4-vectors, as "geometric objects". There is some introduction to this in the book, but a reasonably good understanding of 4-vectors would be a pre-requisite. You would need to be able to deal with the tensor notation to the extent that $$x^{a}$$ is a 4-vector, and that $$x^a x_a$$ is the (squared) norm of the 4-vector. And you'd need to be able to deal with the notation that $$e_{0}$$ was a basis vector of a coordinate system (typically a unit vector in the time direction). If you can find a simpler book that covers the material, go for the simpler book. However, most of the references I've seen to the problem of the accelerated observer refer to MTW's treatment. I don't know if introductory books like "Space-time Physics" treat the problem. Yes, the effects work in the same direction initially. I think you are basically asking for the time dilation formula. You can think of the "gravitational time dilation" as being $$1+\frac{a\,d}{c^2}$$ here a is the acceleration of the rocket ship, and d is the distance to the clock. When d is positive, the point is "ahead" of the rocket, and the clocks at that point run fast, indicated by a time dilation factor greater than 1. When d is negative, the point is "behind" the rocket, and the clocks run slow, indicated by a time dilation factor that goes to zero when |a*d| = c^2. This is the point where the horizon is. I'll leave you to work out which effects are more important when, except to note that the end result is that the Earth clock has more elapsed time than the rocket clock for a round trip. The "bookkeeping" of when the time dilation occurs doesn't really have much physical significance, it's a result of the assumptions made to define the coordinate systems. The comparison of the two clocks at the end of the trip, however, is a real physical event that does not depend on the coordinate systems used. When you turn around, there will be a huge change in the "bookkeeping" of simultaneity, though nothing much physically happens. Basically, one is switching from one coordinate system with one notion of simultaneity to another coordinate system with a different notion - much like the simpler SR case with no acceleration.
If you look at the Doppler shift of the light that you receive from Earth, for instance, it doesn't vary much at the instant of turnaround.

Last edited: May 31, 2005

15. May 31, 2005

### gonzo

Thanks, that was all very helpful. Can you recommend a book with a good intro to tensors? I've never worked with 4-vectors, but have had quite a bit of 3-vector analysis, so I wouldn't expect there to be a major conceptual jump, besides not being able to picture things well in my head. New notation is always annoying to learn, and the only good way I know of is to just do a lot of problems with it to get used to it. So maybe a good tensor textbook with problems and an answer key would be the way to go.

16. May 31, 2005

### pervect

Staff Emeritus

"Gravitation" has an introduction to tensors, but not a whole lot of exercises. If you don't mind a challenge, try picking it up at the library, or ordering it via interlibrary loan. The chapter on accelerated motion is only about 20 pages long or so (quite photocopyable); the introductory material about tensors, 4-vectors, geometric objects, and the explanation of the notation used by the book etc. is a lot longer, though. The book is pretty good about explaining its notational system.
# detect - Source detection

## Introduction

The gammapy.detect submodule includes low-level functions to compute significance and test statistics images, as well as some high-level source detection method prototypes. A detailed description of the methods can be found in [Stewart2009] and [LiMa1983].

Note that in Gammapy the map data are stored as Numpy arrays, which implies that it's very easy to use scikit-image, photutils, or other packages that have advanced image analysis and source detection methods readily available.

## Computation of TS images

Figure: Test statistics image computed using TSMapEstimator for an example Fermi dataset.

The gammapy.detect module includes a high-performance TSMapEstimator class to compute test statistics (TS) images for gamma-ray survey data. The implementation is based on the method described in [Stewart2009]. Assuming a certain source morphology, which can be defined by any astropy.convolution.Kernel2D instance, the amplitude of the morphology model is fitted at every pixel of the input data using a Poisson maximum likelihood procedure. As input, counts, background, and exposure images have to be provided. Based on the best-fit flux amplitude, the change in TS compared to the null hypothesis is computed using the Cash statistic.

To optimize the performance of the code, the fitting procedure is simplified by finding roots of the derivative of the fit statistic with respect to the flux amplitude. This approach is described in detail in Appendix A of [Stewart2009]. To further improve the performance, Python's multiprocessing facility is used.

The following example shows how to compute a TS image for Fermi-LAT survey data:

from astropy.convolution import Gaussian2DKernel
from gammapy.detect import TSMapEstimator
from gammapy.maps import Map

filename = '$GAMMAPY_DATA/fermi_survey/all.fits.gz'
maps = {}
maps['counts'] = Map.read(filename, hdu='counts')
maps['exposure'] = Map.read(filename, hdu='exposure')
maps['background'] = Map.read(filename, hdu='background')
kernel = Gaussian2DKernel(5)
ts_estimator = TSMapEstimator()
result = ts_estimator.run(maps, kernel)

The run method returns a dictionary that bundles all resulting maps. E.g. here's how to find the largest TS value:

import numpy as np
np.nanmax(result['ts'].data)

## Computation of Li & Ma significance images

The method derived by [LiMa1983] is one of the standard ways to determine detection significances for gamma-ray sources. Using the same prepared Fermi dataset as above, the corresponding images can be computed using the compute_lima_image function (note that the counts and background maps have to be read in first):

from astropy.convolution import Tophat2DKernel
from gammapy.maps import Map
from gammapy.detect import compute_lima_image

filename = '$GAMMAPY_DATA/fermi_survey/all.fits.gz'
counts = Map.read(filename, hdu='counts')
background = Map.read(filename, hdu='background')
kernel = Tophat2DKernel(5)
result = compute_lima_image(counts, background, kernel)

The function returns a dictionary that bundles all resulting images, such as the significance, flux, and correlated counts and excess images.

## Using gammapy.detect

Tutorial notebooks that show examples using gammapy.detect:

## Reference/API

### gammapy.detect Package

Source detection and measurement methods.

#### Functions

compute_lima_image(counts, background, kernel)
    Compute Li & Ma significance and flux images for known background.
compute_lima_on_off_image(n_on, n_off, a_on, …)
    Compute Li & Ma significance and flux images for on-off observations.
find_peaks(image, threshold[, min_distance])
    Find local peaks in an image.

#### Classes

CWT(kernels[, max_iter, tol, …])
    Continuous wavelet transform.
CWTData(counts, background, n_scale)
    Images for CWT algorithm.
CWTKernels(n_scale, min_scale, step_scale[, old])
    Construct arrays of kernels and scales for CWT algorithm.
KernelBackgroundEstimator(kernel_src, kernel_bkg)
    Estimate background and exclusion mask iteratively.
TSMapEstimator([method, error_method, …])
    Compute TS map using different optimization methods.
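As a quick illustration of how these pieces fit together, the find_peaks function listed above can be applied to the TS map computed in the TSMapEstimator example. This is a minimal sketch only; the threshold of TS = 25 (roughly 5 sigma) and the minimum peak distance are illustrative choices, not values from the documentation:

from gammapy.detect import find_peaks

# Sketch: locate local maxima in the TS map from the example above.
# Threshold (TS = 25, ~5 sigma) and min_distance are illustrative.
peaks = find_peaks(result['ts'], threshold=25, min_distance=5)
print(peaks)  # table of peak positions and values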
# ACROPOLIS

A generiC fRamework fOr Photodisintegration Of LIght elementS

• ACROPOLIS: A generiC fRamework fOr Photodisintegration Of LIght elementS
  Paul Frederik Depta, Marco Hufnagel, Kai Schmidt-Hoberg
  https://arxiv.org/abs/2011.06518
• Updated BBN constraints on electromagnetic decays of MeV-scale particles
  Paul Frederik Depta, Marco Hufnagel, Kai Schmidt-Hoberg
  https://arxiv.org/abs/2011.06519
• BBN constraints on MeV-scale dark sectors. Part II. Electromagnetic decays
  Marco Hufnagel, Kai Schmidt-Hoberg, Sebastian Wild
  https://arxiv.org/abs/1808.09324

The most recent version of the manual can always be found on GitHub in the manual/ folder. The respective publication on arXiv might be out-of-date, especially when new versions of the code become available.

# Abstract

The remarkable agreement between observations of the primordial light element abundances and the corresponding theoretical predictions within the standard cosmological history provides a powerful method to constrain physics beyond the standard model of particle physics (BSM). For a given BSM model, these primordial element abundances are generally determined by (i) Big Bang Nucleosynthesis and (ii) possible subsequent disintegration processes. The latter potentially change the abundance values due to late-time high-energy injections which may be present in these scenarios. While there are a number of public codes for the first part, no such code is currently available for the second. Here we close this gap and present ACROPOLIS, A generiC fRamework fOr Photodisintegration Of LIght elementS. The widely discussed cases of decays as well as annihilations can be run without prior coding knowledge within example programs. Furthermore, due to its modular structure, ACROPOLIS can also easily be extended to other scenarios.

# Changelog

v1.2.1 (February 16, 2021)
• Fixed a bug in DecayModel. Results that have been obtained with older versions can be corrected by multiplying the parameter n0a with an additional factor 2.7012. All results of our papers remain unchanged.
• Updated the set of initial abundances to the most recent values returned by AlterBBN v2.2 (explicitly, we used failsafe=12)

v1.2 (January 15, 2021)
• Speed improvements when running non-thermal nucleosynthesis (by a factor of 7)
• Modified the directory structure by moving ./data to ./acropolis/data to transform ACROPOLIS into a proper package, which can be installed via python3 setup.py install --user (also putting the executables decay and annihilation into your PATH)
• Added the decay of neutrons and tritium to the calculation
• For AnnihilationModel, it is now possible to freely choose the dark-matter density parameter (default is 0.12)

v1.1 (December 1, 2020)
• For the source terms, it is now possible to specify arbitrary monochromatic and continuous contributions, meaning that the latter is no longer limited to only final-state radiation of photons
• By including additional JIT compilation steps, the runtime without database files was drastically reduced (by approximately a factor of 15)
• The aforementioned performance improvements also made it possible to drop the large database files altogether, which results in a better user experience (all database files are now part of the git repo and no additional download is required) and a significantly reduced RAM usage (∼900MB → ∼20MB)
• Fixed a bug which could lead to NaNs when calculating heavily suppressed spectra with E0 ≫ me²/(22T)
• Added a unified way to print the final abundances in order to declutter the wrapper scripts. This makes it easier to focus on the actually important parts when learning how to use ACROPOLIS
• Moved from bytecode to simple text files for the remaining database file, as the former led to unexpected behaviour on some machines

v1.0 (November 12, 2020)
• Initial release

# Installation from PyPI

This is the recommended way to install ACROPOLIS. To do so, make sure that pip is installed and afterwards simply execute the command

python3 -m pip install ACROPOLIS --user

After the installation is completed, the different modules of ACROPOLIS can be directly imported into your own Python code (just like e.g. numpy). Using this procedure also ensures that the executables decay and annihilation are copied into your PATH and that all dependencies are fulfilled.

# Installation from GitHub

To install ACROPOLIS from source, first clone the respective git repository by executing the command

git clone https://github.com/skumblex/acropolis.git

Afterwards, switch into the main directory and run

python3 -m pip install . --user

# Usage without installation

If you just want to use ACROPOLIS without any additional installation steps, you have to at least make sure that all dependencies are fulfilled. As specified in setup.py, ACROPOLIS depends on the following packages (older versions might work, but have not been thoroughly tested):

• NumPy (> 1.19.1)
• SciPy (> 1.5.2)
• Numba (> 0.51.1)

The most recent versions of these packages can be collectively installed at user level, i.e. without the need for root access, by executing the command

python3 -m pip install numpy scipy numba --user

If these dependencies conflict with those for other programs in your work environment, it is strongly advised to utilise the capabilities of Python's virtual environments.

# Using the example models

ACROPOLIS ships with two executables, decay and annihilation, which wrap the scenarios discussed in section 4.1 and section 4.2 of the manual, respectively.
Both of these executables need to be called with six command-line arguments each; a list of the arguments can be obtained by running the command of choice without any arguments at all. As an example, the following command runs the photodisintegration calculation for an unstable mediator with a mass of 10 MeV and a lifetime of 1e5 s that decays exclusively into photons and has an abundance of 1e-10 relative to photons at a reference temperature of 10 MeV (if you did not install ACROPOLIS via pip, you have to run this command from within the main directory and make sure to prepend ./ to the command):

decay 10 1e5 10 1e-10 0 1

On a similar note, the following command runs the photodisintegration calculation for residual s-wave annihilations of a dark-matter particle with a mass of 10 MeV and a cross-section of 1e-25 cm³/s that annihilates exclusively into photons:

annihilation 10 1e-25 0 0 0 1

# Supported platforms

ACROPOLIS should work on any platform with a working Python3 installation.
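The example models can also be driven directly from Python instead of via the executables. The following is only a sketch: the constructor arguments mirror the six command-line arguments of decay shown above, but the exact import path and method names (assumed here to be acropolis.models.DecayModel and run_disintegration) should be checked against the manual.

from acropolis.models import DecayModel

# Sketch (API details assumed, see above): mediator mass [MeV], lifetime [s],
# reference temperature [MeV], abundance n0a, and the branching ratios into
# electron pairs and photons, mirroring 'decay 10 1e5 10 1e-10 0 1'.
model = DecayModel(10., 1e5, 10., 1e-10, 0., 1.)
Yf = model.run_disintegration()  # final light-element abundances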
# SVM Classification in MATLAB

The SVM in this code is used to classify sets of images; the predictors are the intensities of each pixel. A support vector machine (SVM) classifies data by finding the best hyperplane that separates data points of one class from those of the other class. "Best" means the hyperplane with the largest margin between the two classes, where the margin is the maximal width of the slab, parallel to the hyperplane, that has no interior data points. The support vectors are the data points xj closest to the separating hyperplane; they lie on the boundary of the slab, i.e. those for which yj f(xj) = 1. Keep in mind that an SVM is only capable of making a binary classification: it can only be trained to differentiate between two categories of training data at a time.

## Mathematical formulation

In the primal formulation, one looks for β and b that minimize ||β|| such that yj(xj'β + b) ≥ 1 for all data points (xj, yj), with labels yj = ±1. If the data are not separable, a soft margin is used, meaning a hyperplane that separates many, but not all, data points: slack variables ξj and a penalty parameter C are introduced (the L1-norm problem, where the L1-norm refers to using the ξj as slack variables instead of their squares). Introducing Lagrange multipliers αj and setting the gradient of LP over β and b to zero yields the dual problem LD, a standard quadratic programming problem in which the αj are kept in a "box": the final set of inequalities, 0 ≤ αj ≤ C, shows why C is sometimes called a box constraint. Many αj are 0 at the maximum; the data points with nonzero αj are the support vectors, the solution is β = Σj αj yj xj, and the gradient equation for b gives the solution b in terms of the set of nonzero αj (take any j with nonzero αj).

Some binary classification problems do not have a simple hyperplane as a useful separating criterion; your data might not allow for a separating hyperplane at all. Because all the calculations for hyperplane classification use nothing more than dot products, the dot product can be replaced by a kernel function; the dot product then implicitly takes place in some space S reached by a mapping φ, and the space S does not have to be identified or examined. The resulting classifiers are hypersurfaces in S that are nonlinear in the original predictor space. Common kernels are:

- linear: ⟨x, x'⟩
- polynomial: (1 + ⟨x, x'⟩)^p for some positive integer p
- Gaussian (radial basis function)
- sigmoid (multilayer perceptron), with a slope parameter p1 and an intercept p2; note that not every set of p1 and p2 yields a valid reproducing kernel

## Training and tuning in MATLAB

fitcsvm trains an SVM classifier for one-class or two-class learning; Mdl = fitcsvm(Tbl, ResponseVarName) returns a ClassificationSVM model trained on the sample data in the table Tbl, where ResponseVarName is the name of the variable in Tbl that contains the class labels. The trained model stores the training data, the support vectors, and the estimated α coefficients. Internally, fitcsvm has several different algorithms for solving the problems: SMO minimizes the one-norm problem by a series of two-point minimizations and explicitly includes the bias term in the model; ISDA (the Iterative Single Data Algorithm) minimizes by a series of one-point minimizations; and quadprog (Optimization Toolbox) uses a good deal of memory but solves quadratic programs to a high degree of precision. If you specify a fraction of expected outliers in the data, the default solver is ISDA. For details, see [3] on SMO and [4] on ISDA; the general theory is covered in [1] and [2].

Good practice includes the following:

- Specify the order of the classes with ClassNames (which must have the same data type as Y), especially if you are comparing the performance of different classifiers.
- Standardize the predictors with 'Standardize'.
- Choose an appropriate kernel with 'KernelFunction'. The default is 'linear' for two-class learning and 'gaussian' (or 'rbf') for one-class learning. You can also define your own kernel, e.g. a sigmoid kernel saved as a file mysigmoid on your MATLAB path, and pass its name via 'KernelFunction'. With 'KernelScale','auto', the software uses a heuristic procedure (involving subsampling) to select the kernel scale; you can retrieve the selected value using dot notation: ks = SVMModel.KernelParameters.Scale.
- Cross validate the classifier by passing it to crossval. By default, the software conducts 10-fold cross validation; pass the cross-validated model to kfoldLoss to estimate the out-of-sample misclassification rate.
- Tune 'BoxConstraint' and 'KernelScale'. One strategy is to try a geometric sequence of values, for instance from 1e-5 to 1e5, increasing by a factor of 10, and to choose the combination with the lowest cross-validation loss. Increasing the box constraint can decrease the number of support vectors (models with fewer support vectors are more desirable and consume less memory) but might also increase training time; setting it to Inf enforces a strict classification with no misclassified training points, which may be impossible for inseparable data and tends to overfit. For automated tuning, use the 'OptimizeHyperparameters' name-value pair argument of fitcsvm; the bayesopt function allows more flexibility to customize the optimization, e.g. the 'expected-improvement-plus' acquisition function and one cross-validation partition c shared by all optimizations. Eligible parameters include 'BoxConstraint', 'KernelFunction', 'KernelScale', 'PolynomialOrder', and 'Standardize'.
- To classify new data, use [label, score] = predict(SVMModel, newX); label is the classification of each row of newX, and score is an n-by-2 matrix of soft scores whose columns contain the scores for the observations being classified in the negative (column 1) and positive (column 2) classes.
- To estimate posterior probabilities rather than scores, pass the trained classifier to fitPosterior; the property ScoreTransform of the resulting classifier then contains the optimal score transformation function (a step function if the classes are separable).

If you have more than two classes, fitcecoc reduces the multiclass classification problem to a set of binary classification subproblems, with one SVM learner for each subproblem; for four classes (e.g. labeling points by the quadrant they occupy, with quadrants 1 and 3 forming the positive class and quadrants 2 and 4 the negative class in the underlying binary problems), the total number of binary learners is 4C2 = 6. An SVM template created with templateSVM can specify options such as storing the support vectors of the binary learners. Discarding the support vectors and related parameters from the trained ECOC model reduces the disk-space consumption, in one example by about 99.96%; you can also assess whether the model has been overfit with a compacted model that does not contain the support vectors, their related parameters, and the training data. After training, saveLearnerForCoder saves the classifier to a file (e.g. SVMClassifier.mat, as a structure array in the current folder), and generated code can then load the SVM classifier, take new predictor data as an input argument, and classify the new data.

## Using the image-classification code

To run the code, create two directories to store the two categorical sets of image data. Then set the two variables in main_script, image_set_directory and image_set_complement_directory, equal to the directory paths where the training images are currently being stored. Finally, run the main script to generate an SVM classifier data structure; main_script can be changed to skip the testing of the SVM classifier and just return the data structure needed for image classification. The script tests the generated classifier by classifying a set of unlabeled images and comparing its results to whether the image content is actually a picture of flowers or foliage. To build the feature set, a set of general statistics is generated by finding the corner points in an image and calculating the average and standard deviation of the pixel intensities around the corner points; the remaining code is copied from the previously modeled SVM classifier code.

A related example trains a binary SVM classifier to detect car objects in images, using a total of 8,792 samples of vehicle images and 8,968 samples of non-vehicle images; the training images are of size 40×100 pixels, while the test image can be of any size. The trained SVM data structure can then be used to determine which category an image best fits. An equivalent workflow is available in Python through the SVC class of the sklearn.svm library, as sketched after the references.

References:

[1] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, second edition. New York: Springer, 2008.
[2] Christianini, N., and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge: Cambridge University Press, 2000.
[3] Fan, R.-E., P.-H. Chen, and C.-J. Lin. "Working set selection using second order information for training support vector machines." Journal of Machine Learning Research, Vol. 6, 2005, pp. 1889–1918.
[4] Kecman, V., T.-M. Huang, and M. Vogt. "Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets." Berlin: Springer-Verlag, 2005.
Chapter 15  Examples

In this chapter we discuss the running and steering of WHIZARD with the help of several examples. These examples can be found in the share/examples directory of your installation. All of these examples are also shown on the WHIZARD Wiki page: https://whizard.hepforge.org/trac/wiki.

15.1  Z lineshape at LEP I

With this example, we demonstrate how a scan over collision energies works, using as an example the measurement of the Z lineshape at LEP I in 1989. The SINDARIN script for this example, Z-lineshape.sin, can be found in the share/examples folder of the WHIZARD installation. We first use the Standard Model as the physics model:

model = SM

Aliases are introduced for the electron, muon, and their antiparticles as leptons, and for those plus the photon as general particles:

alias lep = e1:E1:e2:E2
alias prt = lep:A

Next, the two processes are defined, e+e− → µ+µ−, and the same with an explicit QED photon, e+e− → µ+µ−γ:

process bornproc = e1, E1 => e2, E2
process rc = e1, E1 => e2, E2, A
compile

and the processes are compiled. Now, we define some very loose cuts to avoid singular regions in phase space, namely an infrared cutoff of 100 MeV for all particles, a cut on the angular separation from the beam axis, and a di-particle invariant mass cut which regularizes collinear singularities:

cuts = all E >= 100 MeV [prt]
   and all abs (cos(Theta)) <= 0.99 [prt]
   and all M2 >= (1 GeV)^2 [prt, prt]

For the graphical analysis, we give a description and labels for the x- and y-axis in LATEX syntax:

$description = "A WHIZARD Example"
$x_label = "$\sqrt{s}$/GeV"
$y_label = "$\sigma(s)$/pb"

We define two plots for the lineshape of the e+e− → µ+µ− process between 88 and 95 GeV,

$title = "The Z Lineshape in $e^+e^-\to\mu^+\mu^-$"
plot lineshape_born { x_min = 88 GeV  x_max = 95 GeV }

$title = "The Z Lineshape in $e^+e^-\to\mu^+\mu^-\gamma$"
plot lineshape_rc { x_min = 88 GeV  x_max = 95 GeV }

The next part of the SINDARIN file actually performs the scan:

scan sqrts = ((88.0 GeV => 90.0 GeV /+ 0.5 GeV),
              (90.1 GeV => 91.9 GeV /+ 0.1 GeV),
              (92.0 GeV => 95.0 GeV /+ 0.5 GeV)) {
  beams = e1, E1
  integrate (bornproc) { iterations = 2:1000:"gw", 1:2000 }
  record lineshape_born (sqrts, integral (bornproc) / 1000)
  integrate (rc) { iterations = 5:3000:"gw", 2:5000 }
  record lineshape_rc (sqrts, integral (rc) / 1000)
}

So from 88 to 90 GeV we go in 0.5 GeV steps, then from 90 to 92 GeV in tenths of a GeV, and then up to 95 GeV again in half-GeV steps. The partonic beam definition is redundant. Then the Born process is integrated, using a certain specification of calls with adaptation of grids and weights, as well as a final pass. The lineshape of the Born process is defined as a record statement, generating tuples of √s and the Born cross section (converted from femtobarn to picobarn). The same happens for the radiative 2→3 process, with a few more iterations because of its complexity, and the definition of the corresponding lineshape record. If you run the SINDARIN script, you will find an output like:

| Process library 'default_lib': loading
| Process library 'default_lib': ... success.
$description = "A WHIZARD Example"
$x_label = "$\sqrt{s}$/GeV"
$y_label = "$\sigma(s)$/pb"
$title = "The Z Lineshape in $e^+e^-\to\mu^+\mu^-$"
x_min = 8.800000000000E+01
x_max = 9.500000000000E+01
$title = "The Z Lineshape in $e^+e^-\to\mu^+\mu^-\gamma$"
x_min = 8.800000000000E+01
x_max = 9.500000000000E+01
sqrts = 8.800000000000E+01
| RNG: Initializing TAO random-number generator
| RNG: Setting seed for random-number generator to 10713
| Initializing integration for process bornproc:
| ------------------------------------------------------------------------
| Process [scattering]: 'bornproc'
|   Library name  = 'default_lib'
|   Process index = 1
|   Process components:
|     1: 'bornproc_i1':   e-, e+ => mu-, mu+ [omega]
| ------------------------------------------------------------------------
| Beam structure: e-, e+
| Beam data (collision):
|   e-  (mass = 5.1099700E-04 GeV)
|   e+  (mass = 5.1099700E-04 GeV)
|   sqrts = 8.800000000000E+01 GeV
| Phase space: generating configuration ...
| Phase space: ... success.
| Phase space: writing configuration file 'bornproc_i1.phs'
| Phase space: 1 channels, 2 dimensions
| Phase space: found 1 channel, collected in 1 grove.
| Phase space: Using 1 equivalence between channels.
| Phase space: wood
| Applying user-defined cuts.
| OpenMP: Using 8 threads
| Starting integration for process 'bornproc'
| Integrate: iterations = 2:1000:"gw", 1:2000
| Integrator: 1 chains, 1 channels, 2 dimensions
| Integrator: Using VAMP channel equivalences
| Integrator: 1000 initial calls, 20 bins, stratified = T
| Integrator: VAMP
|=============================================================================|
| It      Calls  Integral[fb]  Error[fb]   Err[%]    Acc  Eff[%]   Chi2 N[It] |
|=============================================================================|
   1        800  2.5881432E+05  1.85E+03    0.72    0.20*  48.97
   2        800  2.6368495E+05  9.25E+02    0.35    0.10*  28.32
|-----------------------------------------------------------------------------|
   2       1600  2.6271122E+05  8.28E+02    0.32    0.13   28.32    5.54   2
|-----------------------------------------------------------------------------|
   3       1988  2.6313791E+05  5.38E+02    0.20    0.09*  35.09
|-----------------------------------------------------------------------------|
   3       1988  2.6313791E+05  5.38E+02    0.20    0.09   35.09
|=============================================================================|
| Time estimate for generating 10000 events: 0d:00h:00m:05s

[.......]

and then the integrations for the other energy points of the scan will follow; finally, the same is done for the radiative process as well. At the end of the SINDARIN script we compile the graphical WHIZARD analysis and direct the data for the plots into the file Z-lineshape.dat:

compile_analysis { $out_file = "Z-lineshape.dat" }

In this case there is no event generation; the cross section values from the scan are simply dumped into a data file:

$out_file = "Z-lineshape.dat"
| Opening file 'Z-lineshape.dat' for output
| Writing analysis data to file 'Z-lineshape.dat'
| Closing file 'Z-lineshape.dat' for output
| Compiling analysis results display in 'Z-lineshape.tex'

Fig. 15.1 shows the graphical WHIZARD output of the Z lineshape in the dimuon final state from the scan on the left, and the same for the radiative process with an additional photon on the right.
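The data written to Z-lineshape.dat can of course also be inspected outside of WHIZARD's built-in analysis. The following is only a sketch: it assumes the file can be reduced to whitespace-separated (√s, σ) pairs, which should be checked against the actual file format before use.

import numpy as np
import matplotlib.pyplot as plt

# Sketch: plot the scan data externally. Assumes whitespace-separated
# (sqrts, sigma) columns; lines starting with '#' or '$' are skipped.
sqrts, sigma = np.loadtxt('Z-lineshape.dat', comments=('#', '$'), unpack=True)
plt.plot(sqrts, sigma, 'o-')
plt.xlabel(r'$\sqrt{s}$ / GeV')
plt.ylabel(r'$\sigma(s)$ / pb')
plt.savefig('Z-lineshape-external.png')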
15.2  W pairs at LEP II

This example, which can be found as the file LEP_cc10.sin in the share/examples directory, shows W pair production in the semileptonic mode at LEP II with its final energy of 209 GeV. Because there are ten contributing Feynman diagrams, the process has been dubbed CC10: the charged-current process with 10 diagrams. We work within the Standard Model:

model = SM

Then the process is defined; no flavor summation is done for the jets here:

process cc10 = e1, E1 => e2, N2, u, D

A compilation statement is optional. Next, we set the muon mass to zero:

mmu = 0

The final LEP center-of-momentum energy of 209 GeV is set:

sqrts = 209 GeV

Then, we integrate the process:

integrate (cc10) { iterations = 12:20000 }

Running the SINDARIN file up to here results in the output:

| Process library 'default_lib': loading
| Process library 'default_lib': ... success.
SM.mmu = 0.000000000000E+00
sqrts = 2.090000000000E+02
| RNG: Initializing TAO random-number generator
| RNG: Setting seed for random-number generator to 31255
| Initializing integration for process cc10:
| ------------------------------------------------------------------------
| Process [scattering]: 'cc10'
|   Library name  = 'default_lib'
|   Process index = 1
|   Process components:
|     1: 'cc10_i1':   e-, e+ => mu-, numubar, u, dbar [omega]
| ------------------------------------------------------------------------
| Beam structure: [any particles]
| Beam data (collision):
|   e-  (mass = 5.1099700E-04 GeV)
|   e+  (mass = 5.1099700E-04 GeV)
|   sqrts = 2.090000000000E+02 GeV
| Phase space: generating configuration ...
| Phase space: ... success.
| Phase space: writing configuration file 'cc10_i1.phs'
| Phase space: 25 channels, 8 dimensions
| Phase space: found 25 channels, collected in 7 groves.
| Phase space: Using 25 equivalences between channels.
| Phase space: wood
Warning: No cuts have been defined.
| OpenMP: Using 8 threads
| Starting integration for process 'cc10'
| Integrate: iterations = 12:20000
| Integrator: 7 chains, 25 channels, 8 dimensions
| Integrator: Using VAMP channel equivalences
| Integrator: 20000 initial calls, 20 bins, stratified = T
| Integrator: VAMP
|=============================================================================|
| It      Calls  Integral[fb]  Error[fb]   Err[%]    Acc  Eff[%]   Chi2 N[It] |
|=============================================================================|
   1      19975  6.4714908E+02  2.17E+01    3.36    4.75*   2.33
   2      19975  7.3251876E+02  2.45E+01    3.34    4.72*   2.17
   3      19975  6.7746497E+02  2.39E+01    3.52    4.98    1.77
   4      19975  7.2075198E+02  2.41E+01    3.34    4.72*   1.76
   5      19975  6.5976152E+02  2.26E+01    3.43    4.84    1.46
   6      19975  6.6633310E+02  2.26E+01    3.39    4.79*   1.43
   7      19975  6.7539385E+02  2.29E+01    3.40    4.80    1.43
   8      19975  6.6754027E+02  2.11E+01    3.15    4.46*   1.41
   9      19975  7.3975817E+02  2.52E+01    3.40    4.81    1.53
  10      19975  7.2284275E+02  2.39E+01    3.31    4.68*   1.47
  11      19975  6.5476917E+02  2.18E+01    3.33    4.71    1.33
  12      19975  7.2963866E+02  2.54E+01    3.48    4.92    1.46
|-----------------------------------------------------------------------------|
  12     239700  6.8779583E+02  6.69E+00    0.97    4.76    1.46    2.18  12
|=============================================================================|
| Time estimate for generating 10000 events: 0d:00h:01m:16s
| Creating integration history display cc10-history.ps and cc10-history.pdf

The next step is event generation. In order to get smooth distributions, we set the integrated luminosity to 10 fb−1. (Note that LEP II in its final year 2000 had an integrated luminosity of roughly 0.2 fb−1.)

luminosity = 10

With the simulated events corresponding to those 10 inverse femtobarn, we want to perform a WHIZARD analysis: we are going to plot the dijet invariant mass as well as the energy of the outgoing muon.
For the plot of the analysis, we define a description and a label for the y-axis:

$description = "A WHIZARD Example. Charged current CC10 process from LEP 2."
$y_label = "$N_{\textrm{events}}$"

We also use LATEX syntax for the title of the first plot and the x-label, and then define the histogram of the dijet invariant mass in the range around the W mass, from 70 to 90 GeV in steps of half a GeV:

$title = "Di-jet invariant mass $M_{jj}$ in $e^+e^- \to \mu^- \bar\nu_\mu u \bar d$"
$x_label = "$M_{jj}$/GeV"
histogram m_jets (70 GeV, 90 GeV, 0.5 GeV)

And we do the same for the second histogram, of the muon energy:

$title = "Muon energy $E_\mu$ in $e^+e^- \to \mu^- \bar\nu_\mu u \bar d$"
$x_label = "$E_\mu$/GeV"
histogram e_muon (0 GeV, 209 GeV, 4)

Now, we define the analysis, consisting of two record statements initializing the two observables that are plotted as histograms:

analysis = record m_jets (eval M [u,D]);
           record e_muon (eval E [e2])

At the very end, we perform the event generation,

simulate (cc10)

and finally the writing and compilation of the analysis in a named data file:

compile_analysis { $out_file = "cc10.dat" }

The screen output of this event generation part looks like this:

luminosity = 1.000000000000E+01
$description = "A WHIZARD Example. Charged current CC10 process from LEP 2."
$y_label = "$N_{\textrm{events}}$"
$title = "Di-jet invariant mass $M_{jj}$ in $e^+e^- \to \mu^- \bar\nu_\mu u \bar d$"
$x_label = "$M_{jj}$/GeV"
$title = "Muon energy $E_\mu$ in $e^+e^- \to \mu^- \bar\nu_\mu u \bar d$"
$x_label = "$E_\mu$/GeV"
| Starting simulation for process 'cc10'
| Simulate: using integration grids from file 'cc10_m1.vg'
| RNG: Initializing TAO random-number generator
| RNG: Setting seed for random-number generator to 9910
| OpenMP: Using 8 threads
| Simulation: using n_events as computed from luminosity value
| Events: writing to raw file 'cc10.evx'
| Events: generating 6830 unweighted, unpolarized events ...
| Events: event normalization mode '1'
| ... event sample complete.
Warning: Encountered events with excess weight: 39 events ( 0.571 %)
| Maximum excess weight = 1.027E+00
| Average excess weight = 6.764E-04
| Events: closing raw file 'cc10.evx'
$out_file = "cc10.dat"
| Opening file 'cc10.dat' for output
| Writing analysis data to file 'cc10.dat'
| Closing file 'cc10.dat' for output
| Compiling analysis results display in 'cc10.tex'

Then comes the LATEX output of the compilation of the graphical analysis. Fig. 15.2 shows the two histograms as they are produced as a result of the WHIZARD-internal graphical analysis.

15.3  Higgs search at LEP II

This example can be found under the name LEP_higgs.sin in the share/doc folder of WHIZARD. It displays different search channels for a very light would-be SM Higgs boson of mass 115 GeV at the LEP II machine, at the highest energy it finally achieved, 209 GeV. First, we use the Standard Model:

model = SM

Then, we define aliases for neutrinos, antineutrinos, light quarks, and light antiquarks:

alias n = n1:n2:n3
alias N = N1:N2:N3
alias q = u:d:s:c
alias Q = U:D:S:C

Now, we define the signal process, which is Higgsstrahlung,

process zh = e1, E1 => Z, h

the missing-energy channel,

process nnbb = e1, E1 => n, N, b, B

and finally the 4-jet as well as the dilepton-dijet channels:

process qqbb = e1, E1 => q, Q, b, B
process bbbb = e1, E1 => b, B, b, B
process eebb = e1, E1 => e1, E1, b, B
process qqtt = e1, E1 => q, Q, e3, E3
process bbtt = e1, E1 => b, B, e3, E3
compile

and with the final statement we compile the code.
We set the center-of-momentum energy to the highest energy LEP II achieved:

  sqrts = 209 GeV

For the Higgs boson, we take the values of a would-be SM Higgs boson with a mass of 115 GeV, which would have had a width of a bit more than 3 MeV:

  mH = 115 GeV
  wH = 3.228 MeV

We take a running b-quark mass to account for the NLO corrections to the Hbb vertex, while all other fermions are kept massless:

  mb = 2.9 GeV
  me = 0
  ms = 0
  mc = 0

| Process library 'default_lib': loading
| Process library 'default_lib': ... success.
sqrts = 2.090000000000E+02
SM.mH = 1.150000000000E+02
SM.wH = 3.228000000000E-03
SM.mb = 2.900000000000E+00
SM.me = 0.000000000000E+00
SM.ms = 0.000000000000E+00
SM.mc = 0.000000000000E+00

To avoid soft-collinear singular phase-space regions, we apply an invariant-mass cut on light quark pairs:

  cuts = all M >= 10 GeV [q,Q]

Now, we integrate the signal process as well as the combined signal and background processes:

  integrate (zh) { iterations = 5:5000 }
  integrate (nnbb,qqbb,bbbb,eebb,qqtt,bbtt) { iterations = 12:20000 }

| RNG: Initializing TAO random-number generator
| RNG: Setting seed for random-number generator to 21791
| Initializing integration for process zh:
| ------------------------------------------------------------------------
| Process [scattering]: 'zh'
|   Library name = 'default_lib'
|   Process index = 1
|   Process components:
|     1: 'zh_i1': e-, e+ => Z, H [omega]
| ------------------------------------------------------------------------
| Beam structure: [any particles]
| Beam data (collision):
|   e- (mass = 0.0000000E+00 GeV)
|   e+ (mass = 0.0000000E+00 GeV)
|   sqrts = 2.090000000000E+02 GeV
| Phase space: generating configuration ...
| Phase space: ... success.
| Phase space: writing configuration file 'zh_i1.phs'
| Phase space: 1 channels, 2 dimensions
| Phase space: found 1 channel, collected in 1 grove.
| Phase space: Using 1 equivalence between channels.
| Phase space: wood
| Applying user-defined cuts.
| OpenMP: Using 8 threads
| Starting integration for process 'zh'
| Integrate: iterations = 5:5000
| Integrator: 1 chains, 1 channels, 2 dimensions
| Integrator: Using VAMP channel equivalences
| Integrator: 5000 initial calls, 20 bins, stratified = T
| Integrator: VAMP
|=============================================================================|
| It      Calls  Integral[fb]   Error[fb]   Err[%]    Acc  Eff[%]  Chi2 N[It] |
|=============================================================================|
   1       4608  1.6114109E+02   5.52E-04    0.00    0.00*  99.43
   2       4608  1.6114220E+02   5.59E-04    0.00    0.00   99.43
   3       4608  1.6114103E+02   5.77E-04    0.00    0.00   99.43
   4       4608  1.6114111E+02   5.74E-04    0.00    0.00*  99.43
   5       4608  1.6114103E+02   5.66E-04    0.00    0.00*  99.43
|-----------------------------------------------------------------------------|
   5      23040  1.6114130E+02   2.53E-04    0.00    0.00   99.43   0.82    5
|=============================================================================|
[.....]

Because the other integrations look rather similar, we refrain from displaying them here.
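Instead of reading the cross sections off the screen log, one can also access them from within SINDARIN once the integration has run. The following is a hedged sketch, assuming the standard integral and error accessor functions (both values in femtobarn):

  ! Sketch only, not part of the shipped example: print the Higgsstrahlung
  ! cross section and its Monte Carlo error after integrate (zh) has run.
  printf "sigma(zh) = %g +- %g fb" (integral (zh), error (zh))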
As a next step, we define titles, descriptions and axis labels for the histograms we want to generate. There are two of them: one is the invisible-mass distribution, the other the di-b-jet invariant mass. Both histograms range from 70 to 130 GeV with a bin width of half a GeV:

  $description = "A WHIZARD Example. Light Higgs search at LEP. A 115 GeV pseudo-Higgs has been added. Luminosity enlarged by two orders of magnitude."
  $y_label = "$N_{\textrm{events}}$"
  $title = "Invisible mass distribution in $e^+e^- \to \nu\bar\nu b \bar b$"
  $x_label = "$M_{\nu\nu}$/GeV"
  histogram m_invisible (70 GeV, 130 GeV, 0.5 GeV)
  $title = "$b\bar b$ invariant mass distribution in $e^+e^- \to \nu\bar\nu b \bar b$"
  $x_label = "$M_{b\bar b}$/GeV"
  histogram m_bb (70 GeV, 130 GeV, 0.5 GeV)

The analysis is initialized by defining the two records, for the invisible mass and for the invariant mass of the two b jets:

  analysis = record m_invisible (eval M [n,N]);
             record m_bb (eval M [b,B])

In order to have enough statistics, we enlarge the LEP integrated luminosity at 209 GeV by roughly two orders of magnitude:

  luminosity = 10

We start event generation by simulating the process with two b jets and two neutrinos in the final state:

  simulate (nnbb)

As a third histogram, we define the dijet invariant mass of the two light jets:

  $title = "Dijet invariant mass distribution in $e^+e^- \to q \bar q b \bar b$"
  $x_label = "$M_{q\bar q}$/GeV"
  histogram m_jj (70 GeV, 130 GeV, 0.5 GeV)

Then we simulate the 4-jet process, defining the light-dijet distribution as a local record:

  simulate (qqbb) { analysis = record m_jj (eval M / 1 GeV [combine [q,Q]]) }

Finally, we compile the analysis:

  compile_analysis { $out_file = "lep_higgs.dat" }

| Starting simulation for process 'nnbb'
| Simulate: using integration grids from file 'nnbb_m1.vg'
| RNG: Initializing TAO random-number generator
| RNG: Setting seed for random-number generator to 21798
| OpenMP: Using 8 threads
| Simulation: using n_events as computed from luminosity value
| Events: writing to raw file 'nnbb.evx'
| Events: generating 1070 unweighted, unpolarized events ...
| Events: event normalization mode '1'
| ... event sample complete.
Warning: Encountered events with excess weight: 207 events ( 19.346 %)
| Maximum excess weight = 1.534E+00
| Average excess weight = 4.909E-02
| Events: closing raw file 'nnbb.evx'
$title = "Dijet invariant mass distribution in $e^+e^- \to q \bar q b \bar b$"
$x_label = "$M_{q\bar q}$/GeV"
| Starting simulation for process 'qqbb'
| Simulate: using integration grids from file 'qqbb_m1.vg'
| RNG: Initializing TAO random-number generator
| RNG: Setting seed for random-number generator to 21799
| OpenMP: Using 8 threads
| Simulation: using n_events as computed from luminosity value
| Events: writing to raw file 'qqbb.evx'
| Events: generating 4607 unweighted, unpolarized events ...
| Events: event normalization mode '1'
| ... event sample complete.
Warning: Encountered events with excess weight: 112 events ( 2.431 %)
| Maximum excess weight = 8.875E-01
| Average excess weight = 4.030E-03
| Events: closing raw file 'qqbb.evx'
$out_file = "lep_higgs.dat"
| Opening file 'lep_higgs.dat' for output
| Writing analysis data to file 'lep_higgs.dat'
| Closing file 'lep_higgs.dat' for output
| Compiling analysis results display in 'lep_higgs.tex'

The graphical analysis of the events generated by WHIZARD is shown in Fig. 15.3. The upper left plot shows the invisible-mass distribution in the bb + missing-energy final state, peaking around the Z mass. The upper right plot shows the M(bb) distribution in the same final state, while the lower plot shows the invariant-mass distribution of the two non-b-tagged (light) jets in the bbjj final state. The latter shows only the Z peak, while the former also exhibits the narrow would-be 115 GeV Higgs state.
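A possible extension of this example, not part of the shipped script, would be to write the event samples additionally in a standard exchange format for processing outside of WHIZARD. A sketch using the LHEF format follows; the sample base name is made up for illustration:

  ! Hypothetical extension: write the nnbb sample as an LHEF file with a
  ! custom base name, in addition to WHIZARD's internal raw event file.
  sample_format = lhef
  $sample = "lep_higgs_nnbb"
  simulate (nnbb)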